Tech'ed Up

Building AI that Doesn't Scale Harm • Miriam Vogel

June 30, 2022 • bWitched Media

Miriam Vogel, CEO of EqualAI and White House technology advisor, joins Niki in the studio to discuss her work to reduce algorithmic bias in the deployment of artificial intelligence. In this episode, Miriam explains why this is a critical moment to make sure we are using AI as a solution to increase fairness and build trust, and not as a scaling function for harms that already exist.

“We have this critical moment here where we can take safeguards and make sure that we are using AI as a solution and not as a scaling function for harm.” -Miriam Vogel

Intro: 

[music plays]

Niki: I’m Niki Christoff and welcome to Tech’ed Up. My guest today is Miriam Vogel. She is the President and CEO of EqualAI and advises Congress and the White House on the responsible development of emerging technology. Miriam and I unpack the fundamental challenge here: How do we harness AI to create more fairness in the world and avoid unwittingly scaling historic bias?

Transcript:

Niki: Today in the studio, we have Miriam Vogel. Thank you for coming in.

Miriam: Thank you for having me. It's great to be here.

Niki: So you have a podcast [Miriam: I do]  called “In AI We Trust.” [Miriam: Exactly] And it's got a question mark. It's “In AI We Trust?”

Miriam: [interrupts] That's exactly right. We don't know. We're learning. We're figuring out how we can build this trust. 

Niki: So, that's what we're gonna talk about today: your background, how you ended up in this space. We have a pretty sophisticated audience of DC people who understand the policy, but I wanna dig in a little more on what you're seeing practically and then what we need to do to solve these things. 

So, let's start with your background. You are an attorney, you were at the DOJ. You've worked in the White House and you've found yourself in artificial intelligence. 

Miriam: Yeah! Not where I expected to land, for sure. But when I look back, it was actually meant to be this way in so many ways. It was perfect training for what I'm doing now. I had worked in the tech space as a lawyer, and I'd done other things as well. I'd also worked in government policy on bias: on reducing bias, on understanding bias, and on identifying policy measures to limit it. I worked under President Obama on his Equal Pay Task Force. I had the privilege of leading that and got to understand the challenges of workplace bias and how it was holding back so many people from thriving. 

And then when I was at the Justice Department, I had the good fortune to work for Sally Yates, who asked me to help start up a project for her creating implicit bias training for federal law enforcement, because she saw the through line: it was the same problem with a different constituency, and it needed to be addressed in the same way, in this holistic way. There aren't a lot of bad actors. There are some who are creating bad AI, and there were some in law enforcement who were not on board, but for the most part, our approach was: we wanna help you do your job the best you can. We wanna support you. And that's really the approach we take at EqualAI. [clears throat] We want AI to be as effective as it can be, as inclusive as it can be.

We know that can only happen when we take this lens of “Where is the bias?” Where is the harmful bias, in terms of whom you are hurting, whom you're not including, who cannot use your AI? Who's excluded by your artificial intelligence? 

Niki: So, you said there are, in your experience so far, not that many bad actors. People who listen to this podcast know that I frequently criticize the People's Republic of China, which is just a major issue for me. [Miriam: mm-hmm] And I think that AI is a really interesting space where we have a lot of attention on removing bias, [Miriam: mm-hmm] not just in the United States, but in Europe too. [Miriam: mm-hmm] Removing bias, being thoughtful about these extremely powerful tools that are already in practice.

And I wanna talk a little bit about who is using AI [Miriam: mm-hmm] and who thinks they're using AI [Miriam: mm-hmm], and then within the context of a global stage where China doesn't care about human rights, [Miriam: mm-hmm] China doesn't care about bias in their AI. They're just, you know, letting it rip. I mean, this is my assumption about how they're using it. And so in some ways, obviously it's the right thing to do to make sure that we comply with existing laws on the books and try to have a better experience for people. But I'm conscious that we're within an extremely competitive geopolitical environment where other people are using AI in ways that can directly harm us. 

Miriam: Well, I'm so glad you brought this up because it is so important that when we're talking about making inclusive AI, we underscore the point that this is not intended to hold up our AI creation. This is not intended to impede. We can't afford that. To the contrary, we need to be able to compete with what tools we have and what advantages we have.

China will always have more data. We will not win on data. Where we can have an advantage, where we do have an advantage, is in all the different actors, all the different stakeholders whom we can engage in participating in creating and testing and using our AI, so that we have broader markets, so that we have more effective AI.

So, I think a really important point is we cannot take our eyes off the fact that there is this competition, for sure. That is a reality for any company that's creating, building, deploying AI. On the other hand, if we think about where our competitive advantages are, I would say that making sure we do have our values embedded in our AI makes it more useful for us and makes our AI more effective.

Niki: One of the things I've been thinking about this week, which has been- we're taping this just a few days after Roe was overturned by the Supreme Court. I've been thinking a lot about trust in institutions. And I do think one advantage, or at least one thing we can shore up, is trust in companies and trust in our technology, because clearly people are not trusting Congress. I mean, I'd say government right now, I don't know, F minus, like, not going great. [Miriam: chuckles] I don't think people feel great. I don't feel great about it. But if we can shore up some of the trust in tech companies, and I'm an apologist for the tech industry, but also other companies, then I think that actually helps us grow our economy. It, like, maybe can eliminate some of the despair of people feeling like they're controlled.

Miriam: I think that's such a great point. I think there's a lot of evidence to show that people feel less despair when they can feel confidence, when they can feel trust, when they don't feel that doom is on the horizon. And I think that responsible AI comes down to building trust. You're absolutely right. It also is about building it and deploying it in a culture that supports trust and responsibility. I think those are two pieces so closely interconnected, but as you say, a key variable here is: can we trust the AI that we are using day in, day out?

I mean, if you think about just this morning alone, how many times we've both used AI in ways that have made our lives a little better? There are some seminal ways that it's impacting our life. But, y’know, taking Waze and getting the most effective route here was a gain, a win. I love saving a few extra minutes every time I open up Waze. Opening up my phone with my face so I could do it while we were in the car as a passenger, as opposed to getting nauseous trying to punch in a password. There are so many little ways and big ways it's impacting us, but I have to trust that what I am using will not be used against me, will help me, will add to my life.

Niki: This is a great point. So, I've actually become sort of, I'm not a Luddite, but I don't have any IoT devices in my house. [Miriam: mm-hmm] I hate the facial recognition. I had that home button forever. [Miriam: mm-hmm] I do love Waze, especially when it routes you, like, through the Pentagon parking lot. And you're like, “Wow. Oh my gosh.” [Miriam: laughs] Brilliant! Brilliant! But I'm sort of one of these people who’s kind of uncomfortable, not because I don't- I actually worked at Google for eight years. I trust Google. [Miriam: mm-hmm] I know that maybe that's controversial to say, but I think they're filled with good people trying to do the right thing, and they are more secure and better at this than some alternatives could be. 

Let's talk about the companies you work with and some of the good work that they're doing.  Amazon had a bad use case that you highlighted. I've heard you talk about it before. Let's talk through that because one of the things we wanna do is encourage people to look at how they're actually doing this and make changes.

Miriam: Yeah. Well, so one of the best-known cases of harm from bias in AI is the Amazon HR program. And it is hard to identify problems with AI, to understand where these biases are lurking, for reasons that, y’know, we both know: it's in the black box. It's not something companies necessarily wanna advertise. But with the Amazon case, a Reuters reporter learned that they had built this HR program to help them sift through resumes, and, I see this as a success story, because what they learned, they only learned through testing it themselves. They only identified the discrimination that they could have spread by testing and being thoughtful about “For whom could this fail?” I think that becomes the key question. And so in this case, they came to learn that women were being disadvantaged, that if you had any indication of being a female on your resume, you would get a lower score and would not be as likely to be successful in the process.

Where does this come from? Well, in that case, from what we understand, it came from 10 years of data from their company on who was employed and promoted. So the AI learned a pattern that we all know: that in certain companies, there's an advantage to being male.

Niki: All companies! [Both laugh] Sorry. [Miriam:  No! laughs]  I'm feeling militant this week. Sorry. [Miriam: still laughing ]Listeners. Gentle listeners…[Miriam: Your hostess is feeling militant] [Both laugh]

Miriam: You know, you gotta own where you're feeling. [Niki: chuckles] And it's interesting because it's a seemingly intractable issue in so many ways. I mean, what I think is also interesting is that AI can be part of the solution. And so, if we're taking a step back and realizing AI is a reflection of our society, AI did not create that problem. If it had been used, it would have scaled that discrimination. However, it identified the discrimination. To their credit, Amazon spent, as I understand it, a lot of resources trying to turn it around and then scrapped the program. I give them a lot of credit for doing that. When you've spent all this time and resources in building a program, it's very hard to scrap it.

The other thing that I think is interesting is most companies you talk to will tell you, or be honest with themselves at some point, I hope, about the AI that they're using in their HR systems, as well as many other functions. But have they tested, have they asked, “For whom will this fail? Who is being disadvantaged by this AI program?” when there are so many different places where bias could be embedding? 

The way we look at it at EqualAI is, human bias can embed at each human touchpoint throughout the AI life cycle. So, that example we just talked about with Amazon was in the data sets, but you have to start from the beginning: Who has the privilege of designing an AI solution? It's a unique subset that gets to create the solution that's AI-enabled in the first place. And then you have: Who's designing it? Who's developing it? Again, often homogeneous groups. You have the data, which has historical bias embedded in it. You have testing: Who's invited to participate in the testing process? And, y’know, that's not being used as broadly as I think it could be to help be a solution.

So if we think of AI as a reflection and we think of it as an opportunity for a solution to do better then we can overcome some of these seemingly intractable problems. 

Niki: So, a positive example is, when I was at Salesforce, they started automatically adjusting salaries, only in the U.S., because this is where they collected the data, to see if there was any disparity based on… it was supposedly based on gender. What was fascinating is you'd just get a nudge. It would say, “Niki, Emily's getting a raise on your team.” And it would just automatically increase her salary, which was amazing. At the higher levels, men tended to be paid a little bit less because, at the time, to retain top female talent, you were paying top dollar. But I think that's the right answer. You're using the system. You're looking across what's happening and you're trying to create a little more fairness. I was very much in favor of this program, but you can see where there are ways to use it to make things better, and there are ways where you might just be unintentionally reinforcing outcomes that are based on, y’know, reality.

Miriam: It's so true. We can't blame AI for the problem, for sure. And if we're smart, we'll use it as a way to create solutions. One thing that I think is really a great illustration is the fact that two of the articles that demonstrate both the potential harms and potential benefits of AI are written by the same person, Ziad Obermeyer, who did many great studies, but two in particular. In one, he worked with other researchers who identified bias in the algorithms that were used by United Healthcare and Optum. And what they found was that if you were Black and poor, you would be offered less care than if you were white and wealthy. It was this striking finding that they weren't even looking for. They were going about the research for different reasons and came across this finding that was, you know, one of the worst use cases you can imagine. If you're talking about healthcare: life or death, somebody's wellness, wellbeing, and ability to have access to healthcare services. I mean, those are some of the use cases that keep us up at night with AI. 

On the other end of the spectrum, he also did a study about how AI could be used in a different way. When he built in different data points that included the patient perspective, they were able to create an AI program that was better able to identify knee pain, particularly in Black patients whose pain had been under-scored and under-evaluated previously. And so, as opposed to with human doctors, where there had been a differentiation based on race as to whether patients felt that their pain was appropriately diagnosed and treated, the AI was able to create a two-times-better outcome. Again, because they used it as part of the solution, knowing that there was bias in the data set, knowing that we're in this society with our human biases, and instead of ignoring it or feeling just badly about it, they did something about it and they used it to optimize a better AI solution.

Niki: I think you've just touched on two things that are really important. One is that it's not just tech companies using AI. So, insurance companies, I mean, you could look at an actuarial table from 25 years ago and you see how they just calculate your life expectancy [chuckles] [Miriam: mm-hmm] based on data. Now they have huge computers doing that. So, I think insurance is a really interesting use case and can be quite problematic, as you point out. But related to that is healthcare. Y’know, when people present at a hospital with different symptoms, women are often underdiagnosed as having a heart attack. And I think that there are ways you can help doctors, y’know, potentially- I don't know anything about this. I'm literally talking about something I know nothing about- but that you could help people overcome their own biases: this woman's presenting with whatever symptoms, “Let's make sure we evaluate for a heart attack.” 

Miriam: That’s exactly right. So, if you just built an AI system based on the available data sets, and you were using the AI to identify symptoms of different diseases or a heart attack, you would miss many signs, many early signs. Women often present with different symptoms, on certain occasions, than males do. And so, given that most data sets are built majority male, often Caucasian male, particularly in the healthcare space, it could very easily miss the early symptoms of a female heart attack. 

You’re looking at users who are very far removed from the developers of the program. And so, this physician, this nurse, has no idea that they're presenting their patient with a false negative. Y’know, we're using AI in really exciting ways in cancer identification now, but that only works if you know whom it has been trained on, what populations that success rate is for. If you have a patient in front of you who's not well represented in that success rate, you'd need to know that you could be presenting them with a false negative for cancer, for a heart attack, etc.

Niki: And I think this goes to sort of a core tenet of how we think about companies and technology. It's actually in the interest of the hospital and the insurance company to get it right. You don't wanna miss things that people are suffering from, cuz that's actually costly in the end. I mean, not to be just, like, a total capitalist about it, but it's true. If they get it right, that saves lives. It saves later healthcare costs. You don't have outcomes that are as extreme. So, it's in their interest to do it. And it's also the right thing to do. 

Miriam: Absolutely. We break it down into four main boxes as to why an organization must think about bias in AI. And I'm sympathetic. I've been a general counsel. I know that the C-suite has many fires to put out on a daily, hourly basis. So why does this need to be one of the top priorities? 

We would argue it does need to be, for many reasons, including employee satisfaction. If your employees are building something that they don't have trust in, if they don't think that you are benefiting those users, if they don't think that it's reliable, that it's responsible, your employees will not wanna be a part of it. And employee talent retention, as we know, is an invaluable resource and a hot topic. 

Brand integrity. If people can't trust the work that you're doing, if they can't trust the AI-enabled functions, if they can't trust the recommendations that you're offering that have been aided by AI, are they gonna trust your brand? And once you've lost that trust, how would you ever get it back? You also can have a competitive advantage. I mean, if you are being more broad-minded about who could be using your AI technology, there are more markets that you have access to. 

And the point that generally gets people to lean in is the upcoming litigation. I fully expect that in the near future, as soon as lawyers learn how to speak AI, they're gonna understand that we're talking about a space where there will be scaled harms, and there are deep pockets. And my goal through EqualAI is to help companies prepare, so they are not caught off guard. Once you have the litigation and the liability, it means people have been harmed.

So, we have this critical moment here where we can take safeguards and make sure that we are using AI as a solution and not as a scaling function for harm. 

Niki: And this leads to the final thing I wanna talk about, which is some of the work you're doing with the U.S. government. But one thing that occurs to me- I'm a lawyer too, and I started out in white-collar criminal defense. (My parents are so proud.) One of the things that always occurs to me is, you don't want a record of your bad acts. 

Is there a way to create a safe harbor or to reduce litigation so that people aren't disincentivized from looking under the hood of what's happening at their companies?  Because it seems like we wanna encourage people to really see what they're doing rather than not looking at it so that they don't have evidence that something's gone wrong. 

Miriam: I think that's a great point. And I think we're at a turning point. I think companies used to think, “Let's ignore the problem so that we don't have a record of it. We're not aware of it. We have plausible deniability.” I think that's no longer going to be an option, currently, and certainly not in the near future, because the EEOC and DOJ issued a historic statement where they have told you: if you are using AI in hiring functions, you have to make sure that you are not discriminating based on disability.

We'll start to see more and more of the government regulators making statements about the fact that AI doesn't protect you. The fact that discrimination is stemming from an AI-delivered or AI-supported recommendation does not make you immune. So, it is a real question: how do you do this work without creating a record that would be held against you?

I think- that's something we talk a lot about at EqualAI. We support a lot of companies. We work with lawyers. We work with governments, because we wanna make sure that we're encouraging this work. I think at the end of the day, there's no question: you need to have good AI hygiene. And part of that is understanding where your data set gaps are, where the problems are in the AI development within your control or with vendors that you're working with. Asking these questions, understanding where there are gonna be vulnerabilities in how you are deploying and making use of this AI system. You have to have accountability; you have to have a routine cadence of testing in place. AI will continue to iterate, and your process needs to keep up with that. And so I think, both in the, uh, expectations of different governments, as well as in what will come to be best practices, that will be expected. 

Niki: And again, just making the point that this doesn't just apply to tech companies. We often think about tech companies when we're talking about tech, but this could apply to anybody using AI in their HR practices and for hiring. I wanna bring something else up. You were talking about lawyers, and again, you and I have both worked in legal departments. A lot of tech companies are based in California, which has more protected classes of individuals than the federal protections. So, veterans… political party affiliation. If you really looked at some of the ways that people are evaluating resumes at tech companies in California, is there bias against people with conservative political leanings? I don't know. My hunch is that there is, but I think that you're right. It could be a tool to protect legal departments if they're actually thinking ahead of time about potential class actions. Because if you get to a really broad class of people who've been harmed or excluded, you're just asking for a lawsuit. 

Miriam: And we actually can't afford to not be inclusive. I mean, again, getting back to the big picture of where we as a society need to be, AI can be a really important tool in making sure that we succeed. That our country, that our democratic values, are those that are put forward through the AI that is in so many critical functions that people throughout the world are using day in, day out. And so, uh, the more that you are including those values in the building of your AI, the better AI you'll get. And the flip side is, the way you create better AI is making sure that your AI is more inclusive. 

We need to make sure that different stakeholders' perspectives and experiences are accounted for in the AI that we build. And in fact, when you look at the talent pool, we can't afford for anyone to be left behind. We need everybody to be participating in the AI creation. And it doesn't mean that everyone needs to be a coder or in computer science. There are many different roles in creating effective AI. But I would argue that the more people participating, the better the outcome.

Niki: Absolutely. Cuz you're bringing a perspective that might not be obvious, that you just know intuitively because you're in a different, y’know, category of person than someone who historically has been building these tools. Okay. Last thing: you're helping the White House. Tell me about your work there.

Miriam: I was honored to be selected to be the chair of the National AI Advisory Committee, which we affectionately call NAIAC. There are 27 of us, and they picked a really broad cross-section from industry, academia, and civil society. Our mandate is to advise the president and the White House on AI policy. We have two goals right now that we've all accepted.

One is to make recommendations in the next year. And the second is to convene an AI conversation across the country. And so, because we're a public committee, we have a mandate to hold many of our meetings publicly, and we wanna use that to amplify the discussion about AI, to make sure that we're being inclusive: to listen to known experts around the country and to also acknowledge voices that are not heard enough in this conversation, while we invite more people to participate in the conversation.

We’ve done the first step of breaking into working groups so that each working group can help drive the work forward. We have one on trustworthy AI, workforce, research and development, international collaboration, and competitiveness. We will have another one, a subcommittee on law enforcement, that has not yet been developed. But each of these working groups will drive the recommendations and the public hearings so that we can move quickly, which we need to do.

Niki: I'm so glad you guys are doing this. I think speed is of the essence. And I do think in this town, in D.C., there's a lot of really good work on trying to get AI policy right, so that we don't have one arm tied behind our back but are actually building systems that people trust, which helps us grow. 

Miriam: You're right. We have so much to build off of. We don't need to create new reports in this first year. We don't need to do extensive studies right now because so many of them have been done. So many really smart think tanks, groups, organizations have put together these action plans to help us move forward quickly as a country. And so, I think our first task is seeing the good work that's been done and figuring out which of it we can act on most quickly and most effectively. And then, it's a three-year appointment, so some of the longer-term pieces we will certainly get to, but right now: how can we take action that will put us in a better position to ensure that our AI, our AI policy, our country is able to thrive? 

Niki: I'm for it! Let's get the country thriving.  [both laugh]

Miriam: Let's do it. Good. I'm glad you're on board.

Niki:  I'm on board! Miriam, thank you for coming in today.

Miriam: Thank you, Niki. This has been good fun.

Outro:

[music plays]

Niki: Thank you everyone for tuning in this week. I normally try to keep this show apolitical. We’re based in DC and we are up to our ears in politics. But this week, when I listened back to the episode, I noticed an edge in my voice. And when I thought about it, I realized that edge was coming from a place of fear, fear as a woman. It’s based on the Supreme Court’s ruling from last week, and I know I’m not the only person feeling that. If you’re feeling nervous, or scared, or freaked out by what’s happening with the Supreme Court and the government, join me in doing something, anything. Write a check, give to organizations in states that are impacted, open up a conversation with the women and people in your life around you, because I know if I’m feeling scared, it means other people are too. 

So, we'll go back to being apolitical going forward, but I just felt like I needed to say something, anything. See you next week!