Dreamforce 2019: Salesforce wants ethics embedded in its business
Salesforce integrates ethics and values into its use of AI, its daily decision-making, and its customer interactions.
At Dreamforce 2019 in San Francisco, TechRepublic's Bill Detwiler spoke with Salesforce Chief Ethical and Humane Use Officer Paula Goldman and Salesforce Architect of Ethical AI Kathy Baxter about how Salesforce embeds values and ethics in its technology and daily business practices. The following is an edited transcript of the interview.
Bill Detwiler: A big part of what companies are trying to do today, to attract top talent and to connect with customers who now expect it, is to incorporate ethics and values into their daily business practices. I'm very happy to be here with two people who can explain how Salesforce does that, and I'll let them introduce themselves.
Paula Goldman: My name is Paula Goldman and I am the Chief Ethical and Humane Use Officer at Salesforce. This is, we believe, the first position of its kind, and it is really about looking at all of our technology products and making sure we bake ethics into the way they are designed and the way they live in the world.
We have developed a number of really exciting processes, recognizing that we are at a critical moment for the technology industry: the industry is increasingly under fire, and people understand the impact of technology on daily life. That is why we work with our technical and product teams to integrate ethics into daily decision-making.
We also have an Ethical Use Advisory Council. That advisory board includes employees, executives, and external experts, and it discusses issues about how our technology is used in the world. We also have a policy-making process to ensure that our technology is used for the greatest possible benefit.
SEE: The ethical challenges of AI: A guide (free PDF) (TechRepublic)
Bill Detwiler: And Kathy, we were talking a bit before the interview started about some of the technology, and some of the practices that go with it, and you gave me some examples. First introduce yourself to the viewers, and then tell us a bit about that technology.
Kathy Baxter: I am Kathy Baxter and I am the Architect of Ethical AI. There are many different things we do when we think about baking ethics in. In addition to the Ethical Use Advisory Council, we have a data science review board, so when teams work on models, they can come in and receive feedback on those models, not only from a quality point of view but also from an ethical point of view. We have specific features in our Einstein products that help our customers answer questions like: Are the training and evaluation data representative? Are the models biased? It gives them the tools to use our Einstein AI in a responsible manner. And I think one of the examples of how we have really incorporated ethical considerations into a specific product is our AI for Good project, called SharkEye.
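Salesforce doesn't publish the internals of these Einstein features, but the two checks Baxter describes can be sketched in a few lines. The sketch below assumes a pandas DataFrame with hypothetical gender and approved columns, and shows a group-representation report plus a simple disparate-impact ratio; it illustrates the idea, not Salesforce's implementation.

```python
# A minimal sketch (not Salesforce's code) of two review-board checks:
# group representation in the training data, and a disparate-impact
# ratio on model outcomes. Column names are hypothetical.
import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Share of training rows per demographic group."""
    return df[group_col].value_counts(normalize=True)

def disparate_impact(df: pd.DataFrame, group_col: str,
                     outcome_col: str, reference_group: str) -> pd.Series:
    """Positive-outcome rate per group divided by the reference group's
    rate; values well below 1.0 (0.8 under the common 'four-fifths
    rule') suggest the model disadvantages that group."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates[reference_group]

# Toy example:
data = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "M", "F"],
    "approved": [1,   0,   1,   1,   1,   0],
})
print(representation_report(data, "gender"))
print(disparate_impact(data, "gender", "approved", reference_group="M"))
```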
It is a really great project with the Benioff Ocean Initiative and UC San Diego for studying sharks. Great white sharks have been coming to the North American coast in large numbers and staying for long periods, and we need to study them and discover the impact of climate change on them. How do we create tools to help people and sharks safely share the ocean?
That's why we combined our Field Service Lightning product with our Einstein Vision product to track the sharks in real time and alert lifeguards when they have to close the beach. I think many people may assume that this kind of AI for Good project does not entail ethical risks, but it does. Any form of technology can have unintended consequences.
We have spent a lot of time thinking about what some of those might be. We wanted to protect the privacy of people on the beach. We use drone technology to track the sharks, and recording only starts when the drone is over the ocean. We have never trained Einstein Vision on people, so it is only trained to follow the great white sharks.
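The actual SharkEye pipeline isn't public, but the two safeguards Baxter lists map naturally onto a simple gate in code. The sketch below, with invented function and variable names, drops every frame captured away from the ocean and keeps frames only when the single-class shark detector fires:

```python
# A minimal sketch (not the real SharkEye code) of the two privacy
# safeguards described above: recording is gated on a geofence flag,
# and the detector knows only one class, so people are never labeled.
from typing import List

def detect_sharks(frame: str) -> List[str]:
    """Stand-in for an Einstein Vision call; the real model is trained
    only on great white sharks. This stub fires on one toy frame."""
    return ["great_white"] if frame == "shark_frame" else []

def process_frame(frame: str, over_ocean: bool, recording: list) -> None:
    # Safeguard 1: frames captured while not over the ocean are discarded.
    if not over_ocean:
        return
    # Safeguard 2: the only detectable class is "great white shark".
    detections = detect_sharks(frame)
    if detections:
        recording.append((frame, detections))
        print(f"{frame}: shark detected, alerting lifeguards")

recording = []
process_frame("beach_frame", over_ocean=False, recording=recording)  # dropped
process_frame("shark_frame", over_ocean=True, recording=recording)   # recorded
```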
In addition, we ensure that only certified drone operators employed by the Benioff Ocean Initiative fly the drones, so we do not have to worry about a whole group of people coming onto the beach with drones and interrupting the experience, or using the information to hunt the sharks or harm them in other ways. Many considerations went into ensuring that what we have built protects both sharks and civilians.
Bill Detwiler: Paula, that's so important when you look at those unintended consequences, because for many technology companies the approach has been to gather all the data they can and then figure out later how to protect it or what to do with it.
What that has done is breed a lot of, you could say, animosity, perhaps distrust, at the customer level and at the individual level. How do you address that, Paula? From a business perspective, how do you go back to the company or to customers and say, "We know you have these concerns, and here is how we are going to address them"?
Paula Goldman: I think the most important question you've asked is: how do we think about unintended consequences? That is currently the most important question for the technology industry. It is a simple question that can have far-reaching consequences for the products we design. I will say that we have recently been experimenting with a number of methods with our tech and product teams. We have organized a number of workshops called Consequence Scanning, where we sit down with a product team and say: "Okay, you are about to build this feature. What are the intended consequences, and what are the unintended consequences?"
It sounds super simple, but you start to see the light bulbs go on and people think: "Oh, wow, I hadn't thought about that possible direction. Let us build this type of feature to help our products be used responsibly and help our customers use them responsibly." I think you heard Kathy talk about it. There are a number of features in our AI products that help people understand why they get the predictions they get. How can we ensure that, for example, if they do not want certain factors used to make decisions, they can easily make that choice? Those are the kinds of factors that I think are starting to distinguish Salesforce products, because we ask ourselves these questions early in the process.
Bill Detwiler: So, from a technical perspective, you talked about the teams you put together when designing a new product or a new service to help you think about some of the unconscious biases or the unintended consequences that might result. How do you approach that technically? You talked about that with Einstein and AI: you decided not to train Einstein on people and to focus only on sharks. What else can be done technically to solve that problem?
Kathy Baxter: We have a brilliant research scientist named Nazneen, whose expertise lies in explainable AI and identifying bias in models. We know that trusted AI must be responsible, accountable, transparent, empowering, and inclusive. The explainability of what a model does affects so many of those things, because you understand why the AI makes the recommendation it makes. What are the factors that are included? Which individuals are excluded, and which are included? Does someone get an advantage over someone else?
Being able to understand whether that AI makes inclusive decisions, because it is transparent, means you can hold it accountable. These things are all intertwined, so it is really important to think about them from the beginning: from the conception of the feature, from the conception of the model, from the data collection.
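Einstein's explainability features aren't open source, but the kind of per-prediction explanation Baxter describes is easy to illustrate with an ordinary linear model, where each factor's contribution to one prediction is simply its coefficient times its value. The feature names below are invented for illustration:

```python
# A hedged sketch of factor-level explanation using scikit-learn's
# logistic regression on synthetic data; not Einstein's implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X @ np.array([2.0, -1.0, 0.1]) > 0).astype(int)
features = ["days_since_contact", "support_tickets", "account_age"]

model = LogisticRegression().fit(X, y)

def explain(x: np.ndarray):
    """Per-feature contribution to the log-odds of one prediction,
    ranked by absolute size."""
    contribs = model.coef_[0] * x
    return sorted(zip(features, contribs), key=lambda p: -abs(p[1]))

for name, c in explain(X[0]):
    print(f"{name}: {c:+.2f}")
```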
If you had the chance to see the research keynote on Tuesday, there was a demonstration of our voice assistant, and we are really proud of it because we made a huge effort to obtain representative training data across genders and across English in a dozen different accents: English with a German accent, with a French accent, and so on, so that it is as inclusive as possible and works for as many people as possible. These are all different decisions at every point along the way, and we must make sure we invest in them.
Paula Goldman: I think that equality is a core value for Salesforce, and I think it is the key question for tech ethics. Ultimately, does technology make the world a more equal place or a less equal place? We ask ourselves that question every day. We ask ourselves: who is behind these decisions? Whether it concerns training data or a policy decision, how do we ensure that the right people are involved in that decision?
Bill Detwiler: And this is a somewhat more difficult question, and we recently saw Marc (Benioff) discuss a bit of this in the keynote. As a company in general, not just Salesforce, how do you deal with the countless ethical positions that different customers or different people in society hold?
We all have different expectations, experiences, and opinions about things. As a side job, I actually teach at a university, and one of the courses I teach is ethics for criminal justice professionals. One of the things I always talk about is building those ethical frameworks. My audience and my students are mostly criminal justice majors and public safety majors, so for them this has real life-and-death consequences.
But you haven't seen that often in the business world. So I am really interested in how you deal with what one person considers ethical and another does not. How do you approach that within Salesforce, and how do you think companies should approach it more broadly?
Paula Goldman: That is a very important question, and I will say that we approach it with a spirit of humility. We don't pretend to have all the answers, but one thing that is very important is that we listen very actively and intentionally. I was trained as an anthropologist. I have a PhD in anthropology, and its core competency is really understanding and integrating different worldviews.
That's why we have the council, why we constantly talk to members of civil society, activists, and government employees, and why we create so many channels for voice, so people feel comfortable expressing concerns or opinions.
The other thing I will say is that it is really important that this is values-driven. We have a set of values as a company, and we have derived a number of principles from them. We actually surveyed all our employees and asked, "What should be the most important ethical-use principles?" And we have incorporated what they told us, in the order they ranked it, into our decision-making process. So it is really about broad listening and values-based decision-making for companies.
Bill Detwiler: Would you like to add something to that, Kathy?
Kathy Baxter: Yes. When I talk to customers, they ask, "If we want to create an ethical AI process, how do we start?" We always say: start with your values. I would be surprised to find a company or organization that does not have some kind of mission statement, some set of values on which it is based. That is the framework on which you build, and from there you can start drawing up a prioritization framework. What are the things we want to make sure we focus on, whether that is giving something back to society or building something? Those are the things we want to focus our efforts on.
And what are the things that are either red lines, things we are not going to do, or simply a lower priority? By having that, you can really focus decision-making, so that when a team comes up with a great idea, such as "Let's build a feature that does X," or a customer asks, "Can you build a feature that does Y?", you have that framework to make the decision and say: "Yes, this fully fits our values; this is one of our priorities," or "This is not one of our priorities. We have decided not to invest our resources in this area, so we are just not going into that specific area." It really helps to have that documented so everyone is really clear about it.
Bill Detwiler: Isn't that really the challenge? When you have large organizations with tens of thousands, hundreds of thousands of employees, how do you handle the pushback you can get internally: "Hey, with this new feature, this new product, we think there is a very large market for us. We think this is a profit center. We think this is a great opportunity to generate income." And as Marc said, we need a new form of capitalism to some extent; we must take factors into account beyond pure profit. How do you approach that pushback within a private, commercial company?
Paula Goldman: I think it's a legitimate question, and I don't want to minimize its complexity, but what I will say is that it's all about trust. Ultimately, we are truly convinced that the more ethics we integrate into our product development, the more it will build trust between us and our customers, and between our customers and their customers. Everyone is concerned about where technology is going, and if we can help people with appropriate guardrails, I actually think it's going to be a win-win. That's why I'm so optimistic about what we do.
Kathy Baxter: Because we are a platform, there is so much flexibility. Our tools can raise flags, for example. With Einstein Discovery, we let you mark what we call protected fields: certain fields that you do not want used in a model, such as age, race, or gender. We then find other fields in your data set that are strongly correlated with them.
So we will say: "Zip code is strongly correlated with race. Do you want to protect that too?" We are not going to do it for them; we give customers control. It is about giving customers the tools to make informed decisions for themselves, so that they use our tools in ways that match their values.
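Einstein Discovery's proxy detection isn't published, but the behavior Baxter describes, flagging fields that correlate strongly with a protected field, can be sketched directly. The find_proxies helper and the column names below are hypothetical:

```python
# A minimal sketch (not Einstein Discovery's code) of proxy detection:
# flag any column whose absolute correlation with a protected field
# crosses a threshold, then leave the decision to the user.
import pandas as pd

def find_proxies(df: pd.DataFrame, protected: list, threshold: float = 0.8):
    """Return (candidate_field, protected_field, |correlation|) triples."""
    # Encode non-numeric columns so correlation is defined for all pairs.
    encoded = df.apply(lambda c: c if pd.api.types.is_numeric_dtype(c)
                       else c.astype("category").cat.codes)
    corr = encoded.corr().abs()
    return [(col, p, corr.loc[col, p])
            for p in protected
            for col in df.columns
            if col not in protected and corr.loc[col, p] >= threshold]

# Toy example where zip code tracks race almost perfectly:
df = pd.DataFrame({
    "race":     ["A", "A", "B", "B", "A", "B"],
    "zip_code": [94105, 94105, 60601, 60601, 94105, 60601],
    "tenure":   [3, 5, 2, 7, 4, 1],
})
for col, p, r in find_proxies(df, protected=["race"]):
    print(f"'{col}' is correlated with protected field '{p}' "
          f"(|r| = {r:.2f}). Do you want to protect it too?")
```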