Tech'ed Up
What's happening on the frontlines of tech? Tune in for a zippy conversation about emerging technology hosted by industry veteran Niki Christoff. From the C-Suite and Capitol Hill to AI and crypto, quantum computing to the decentralized internet, Niki breaks down the trends in tech to help savvy listeners get even smarter. Guests include experts, enthusiasts, regulators, policymakers, CEOs, and reporters.
New episodes premiere bi-weekly on Thursdays. Subscribe for the latest episodes on YouTube or listen on your podcast app of choice.
AI Mythbusting • Nikki Pope (NVIDIA)
NVIDIA’s Head of AI & Legal Ethics, Nikki Pope, talks about why the way we talk about artificial intelligence matters and why making the tech representative of more people is important. She and Niki break down some of the myths surrounding the tech and re-examine what regulators should focus on when they think of “existential threat” and AI. Spoiler alert - it’s not what Hollywood thinks!
“...democratization of AI means making sure that we don't leave cultures and languages and communities behind as we all go running breakneck into the future.” -Nikki Pope
- Follow Nikki Pope on LinkedIn
- Read more about Te Hiku Media
- Learn more about NVIDIA’s Trustworthy AI initiative
- Learn More at www.techedup.com
- Check out video on YouTube
- Follow Niki on LinkedIn
Niki: I'm Niki Christoff and welcome to Tech’ed Up. Today, I have a fellow Nikki on the podcast, Nikki Pope, Head of AI and Legal Ethics at NVIDIA.
She and I are busting some myths, discussing the importance of how we talk about artificial intelligence, and we touch on one of her passion projects to make the tech more representative and more useful.
Nikki, thank you so much for agreeing to come on the podcast.
Nikki Pope: My pleasure.
Niki: So you're joining us remotely. I don't want to be corny, but I'm pretty stoked [chuckling] to have another Nikki that I'm doing anything with professionally. So -
Nikki Pope: [interrupts, chuckling] Yeah, I rarely come across another Nikki professionally.
Niki: I know! So, anyway, this is a delight for me because of that. And plus, we're talking about one of my favorite topics, which is AI, but mostly because you have some fresh takes on AI, which will be interesting to our audience.
Your professional trajectory is really, really interesting to me. Do you want to talk just for a minute about how you ended up looking at algorithms and the impact they have on people?
Nikki Pope: Sure. I think it was 2005. I joined the advisory board of the Northern California Innocence Project and they work to exonerate people who have been wrongfully convicted.
That led me down a path to looking at, um, the use of technology in, in criminal justice. I left my law firm and took on the role of Managing Director of the High Tech Law Institute at Santa Clara University's law school. And so my academic research was in predictive algorithms in criminal justice, looking at sentencing, bail decisions, all those sorts of things where courts, prosecutors, and police departments use this technology to determine how to treat someone who has been arrested.
It was really fascinating work and very eye-opening about some of the problems with AI, particularly as they relate to women and minorities. As I was doing this work and writing my papers and, y’know, living my academic life [chuckling], I got a call from someone at NVIDIA, who was one of the partners at my old law firm.
He asked me if I wanted to, um, come work at NVIDIA in this new position that they created: Head of AI and Legal Ethics.
And I didn't know what that was, but I had been trying to get into NVIDIA for years. It was in an area that I know something about and that I'm really passionate about. So, you know, long story short, I joined NVIDIA in 2020.
Niki: And so, for people who may not know, what does NVIDIA do?
Nikki Pope: My family didn't know what NVIDIA did either. So when I told them I was leaving my academic job to go work at NVIDIA, they were like, “What is that?” Before the AI craze, I think people looked at NVIDIA as a company that created chips for gaming.
What we do is build platforms. Computing platforms are AI platforms now. Since OpenAI introduced ChatGPT a couple of years ago, everything has kind of exploded, and we not only build the processing units, the GPUs and CPUs, that tech companies small and large use to train and run their models, we also build models ourselves. We build models for our customers.
Jensen [Huang], who is our CEO, likes to say, “We are a platform company.” So, we are a full stack company, and anything along that stack, we can provide.
Niki: So, when I think of NVIDIA, you're one of just a very small set of companies that are able to build that out. So, you've been in the headlines quite a bit. To your point, you guys are working extensively in AI now [Nikki Pope: yeah] and so, your job is to think about the ethics behind it.
Nikki Pope: Correct. So I think not just about models, but about AI generally: everything from what, y’know, providers or vendors we work with, to what models we put out, to how we train the models, what we train them on, what kinds of use cases we will and will not support or endorse. Everything.
The interesting thing to me about AI is it has some amazing uses that can help society and humanity, y’know, developing drugs that perform better or even that are targeted towards specific diseases that affect specific communities. There are just so many opportunities to improve the quality of people's lives, to improve the environment, to help with education. Like kids whose parents can't afford a tutor - you could create an AI tutor assistant that maybe they could afford.
So, there's so many amazing things that we can do with AI. And then there's the few things that people would do that, y’know, nobody wants to see. I read an article last year that James Earl Jones had agreed to license his voice to Disney, in perpetuity, as Darth Vader.
And so, he has had his voice recorded.
Niki: That's like amazing! [chuckling excitedly] What if that was your, what if that was your actual career? That's amazing. [Nikki Pope: laughs] Sorry, keep going.
Nikki Pope: No, but so, but -
Niki: I love this business opportunity for him, but yes! [Nikki Pope: laughs]
Nikki Pope: But, and that's the thing. It's like, if you grew up watching Star Wars and you know that voice, it would be really weird if there was another voice doing Darth Vader 30 years from now. And so, we can do this. We can preserve his voice and have a performance with his permission. To me, that's fantastic.
But then, the flip side of voice cloning is, y’know, people who fake your grandson's voice and then call you trying to scam money out of you, saying, y’know, “He's in jail and needs 5,000 dollars.”
That's the seedy side of what people might do. It's incumbent upon us in the industry to figure out ways to keep that bad stuff from happening while allowing the good stuff to continue to happen.
Niki: I love that example. I've not heard it before, but the idea that cloning his voice could be used for this really cool, long-term thing, y’know, people love it, right? It's for pop culture moments, and it's a very fuzzy bunny kind of example of AI.
You and I have talked previously about the idea of existential threat. I work a lot with founders and CEOs, and they've sort of split into two camps. I've got my doomers on one side who think that there's this existential threat to humanity - “We’re all going to die because of AI.” And then you have extreme tech optimists who believe in this utopian future based in AI.
And the truth is there are going to be complications and two sides of the coin [Nikki Pope: mm-hmm] in every industry, basically. But one of the things you said to me previously that I thought was super interesting is you said, “We're thinking about the term existential threat all wrong.” [Nikki Pope: Yeah] Do you want to explain what you had said to me?
Nikki Pope: Sure. I think what an existential threat is depends on where you are and who you are.
So, I talked about predictive algorithms used in the criminal justice system, right? If you are a person who lives in Detroit, and not to pick on Detroit [chuckles], but they've had some situations where the police have used facial recognition technology to identify people they think committed a crime, and then they end up arresting the wrong people.
In one case, you have a guy who was arrested in front of his wife and kids. He was taken off to jail. He ended up having some stress issues. He had a heart attack. He lost his job. That's an existential threat. The existence of that family was threatened by that technology.
So, when I talk about existential threat, I'm talking about real people and the problems that they face from the technology right now. So, if you're a woman, a single mother, and you're trying to get a mortgage - maybe that AI will not treat you fairly because it has been trained on data that's biased against single mothers.
When I think about existential threat, I think of the way AI can adversely impact a real person today, not, y’know, humanity 10, 15, 20, 50 years from now.
And I have to say this, I never understand how the existential threat to humanity could manifest itself if all you have to do is unplug the computer and turn off the AI.
Niki: So, this leads to something we've also talked about previously, which is myth-busting. So, you said to me that you think it's sort of a problem that we've created in our minds, this idea of a sentient computer. So, you just said, if you unplug this thing, it's not going to be able to destroy us.
Talk more about what the tech really is. Like, what are we getting wrong about AI?
Nikki Pope: Yeah. [chuckles] People think about robots and they think Terminator. There are some pop culture shows that have said things like, “Yeah, a Tesla is sentient,” but it's not. It's a computer, and it does some cool stuff, but it's not thinking.
And we even talk about chatbots in that way, where we say, y’know, “I was talking with ChatGPT.”
Well, no, you, you weren't. You input a prompt that was a question, or a statement, or a request, and it generated a response, but it's not thinking, and it's not conversing with you. It is predicting what the next word would be in this sentence to respond to the question that you asked.
That's, [chuckling] that's really all it's doing. And so, I think the more we think about AI as a machine that, that identifies patterns and predicts responses, the better off we’ll be.
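A minimal sketch of the next-word prediction Nikki Pope describes, assuming Python with the open-source Hugging Face transformers library and the small GPT-2 model (any causal language model works the same way):

```python
# Minimal sketch: a chatbot "response" is just repeated next-token prediction.
# Assumes the Hugging Face `transformers` library and the small GPT-2 model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every token in the vocabulary

# The model doesn't "answer"; it ranks candidate next tokens and we pick one.
next_token_id = int(logits[0, -1].argmax())
print(tokenizer.decode(next_token_id))
```

Looping that step, appending each predicted token and predicting again, is all there is to the “conversation.”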
Niki: We're often anthropomorphizing these machines. Like, if you think of it as just a bot that took a lot of input and is predicting the next likely thing based on pattern recognition, versus a husky-voiced Scarlett Johansson as, like, a romantic assistant. Or even Alexa - I watch people get angry at Alexa. [Nikki Pope: laughs]
I don't have Alexa, but people who get angry at their Internet of Things devices. And it's, like, it's just a machine, right? It's like doing its best with the inputs it has. I know it doesn't always understand your voice, but you're not actually angry at a woman in your kitchen.
And so, I do think that you're right. We're almost psyching ourselves into thinking of this tech in a way that is bigger, scarier, but also more sophisticated than it really is.
Nikki Pope: Yeah, exactly. And whenever I talk about these things, that's the point that I want to get across: these are tools, and they are tools that can help us do whatever it is we're doing better.
If it's studying for an exam, or writing a paper, or recapping a meeting that you've attended, or creating a presentation. Y’know, there's this great app called Gamma that I discovered. It creates PowerPoint presentations, and so I typed in, “Create a five-slide presentation on Trustworthy AI.” And that's all I said. That's all I input. And it created five slides with text and images. And it was great.
I reviewed it. I made a couple of changes, and I said, “I could deliver this presentation.” And it took about [pause] a minute? If I had done it myself, [Niki: Yeah] I think it would have taken me an hour.
Niki: So, to plug another app: I use Canva, my team uses Canva, and they've got a new feature - same thing, you can create an image right in your slide.
And I was working with a local newspaper coalition, and I said, “I want, sort of, a vintage image of Main Street with the lights going out,” or something like this. It was amazing. It looked incredible. It just popped it right up, which would have taken me forever. I would have had to do licensing. And that's not my core competency.
So, that's the other thing. If you think of it as a tool to supplement what may not be your core competency. I still need to bring creativity and human brains and EQ and all of the relationships to my job. It's not like it's going to eliminate what I'm doing for a living, but it's going to assist me.
As people start to use it, it will be less scary, but they'll also see the limitations. ‘Cause, like, I asked ChatGPT to write a friendlier email for me, and it ended with “Thanks a bunch!” - which doesn't sound like me at all. [cross talk] And so, we'll also bump into the limitations.
Nikki Pope: Yeah. And that, I mean, is one of the limitations that gets into this area of bias and how these machines are trained.
In the early days of image generation, I did a little test, and I typed in, “American company CEO,” and I got four middle-aged white guys. And [Niki: Named John?] Exactly!
[both laughing]
Nikki Pope: And so then I typed in, “American company CEOs, diverse.” And I got, you know, some guys with blonde hair and some guys with dark hair. [both laugh] [Niki: not good] There were no women. And I eventually got the image generator to generate four pictures of four CEOs. Three were white men and one was an Asian man. I never got a woman. I never got a black person, [chuckles] no matter what I did. [laughing]
It's much better now. If I type in, y’know, “American company CEO,” I'm going to get one woman, because 25 percent of CEOs in the US are women. If I say black, I will get black people. If I say Asian, I will get Asian people. It's because the companies that build these models have to make the extra effort to make sure that the data they are trained on includes these people and is representative of the community that's going to be using the model - or go back after the fact and put in some sort of guardrails.
It just, to me, underscores how much more work there is to do, and how these are not thinking, sentient beings. They're just machines, and they can only do [Niki: Right] what we ask them to do. And they can only deliver based on the inputs that we've put in, what we've trained them on.
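A toy sketch of the “extra effort” Nikki Pope describes: reweighting a skewed training set so the sampled mix matches the community a model will serve. The group labels and all proportions here are hypothetical, purely for illustration:

```python
# Illustrative sketch with made-up numbers: resample a skewed training set so
# underrepresented groups appear in proportion to the target population.
import random
from collections import Counter

# Hypothetical labeled training examples (in practice these would be images).
dataset = (["white_man"] * 700 + ["asian_man"] * 200 +
           ["white_woman"] * 80 + ["black_woman"] * 20)

# Target mix: the demographics of the community the model is meant to serve.
target = {"white_man": 0.35, "asian_man": 0.15,
          "white_woman": 0.30, "black_woman": 0.20}

counts = Counter(dataset)
# Weight each example by (target share / raw count), so each group's total
# weight is proportional to its target share rather than its raw frequency.
weights = [target[x] / counts[x] for x in dataset]

balanced = random.choices(dataset, weights=weights, k=1000)
print(Counter(balanced))  # roughly matches the target proportions
```

The other approach she mentions, adding guardrails after the fact, leaves the training data alone and instead filters or rewrites prompts and outputs at inference time.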
Niki: And so, this leads to one of your passion projects. We've had several people on the podcast who bring up the issues of representative imagery or representation in general. If you're scraping the internet, which is largely English-based, you're getting demographic representation that's not even remotely representative of the real world.
Natasha Tiku, who's a reporter, came on, and she talked about, “Well, if you're dissuaded as a person of color or a woman from participating on the internet because the internet can be pretty hostile, then you're actually not adding to the data set,” right? It's harder for these companies to scrape a representative set of subjects for these models if they're not able to get it from datasets, but the other option is to just create better datasets, which is something you're really focused on with language.
Nikki Pope: Yeah. It is a passion project of mine. There are about 7,000 languages in the world. English is the most dominant of them, and it is the language of the internet for the most part. If you just take English, and maybe Mandarin, and Spanish, and German, and maybe French, that's, like, five languages out of 7,000.
So, there are a whole lot of people who the AI doesn't understand, or doesn't interact with, or is not understanding accurately. It may not seem important, but when we start developing AI that assists doctors with medical exams or with determining what someone's illness is, it's really important.
If the AI doesn't understand, I don't know, Swahili, how is this AI that is supposed to be recommending some sort of treatment or care going to understand a woman who is speaking Swahili and has a problem with her pregnancy?
It becomes really important that these models that we build can communicate with the community who's going to be using them. Now I'm going to get on my little soapbox for a minute. [Niki: Do it! Do it!] That does not mean taking an English/Swahili dictionary and converting the English to Swahili, because that doesn't work.
It's not a hundred percent. You're going to miss some words and you may miss some slang words, y’know, you may miss some local color words that would be essential in being able to make the proper diagnosis.
One of the cool things about working at NVIDIA is if you have an interest in something, you're going to find 10 or 20 more people who are interested in that same thing. And then you create a Slack channel, and then you, y’know, try and solve the problem. We're forming our group and focusing on how we can make sure that communities that want to engage in AI are empowered to do that on their own terms, representing their own cultures, representing their own languages, and controlling their own AI destiny. Y’know, creating their own data sets. Building their own API. Doing all of that for the benefit of their community.
There's a company in New Zealand, Te Hiku Media, that has done this with a dialect of the Māori language. [interrupts self] Te Hiku Media is actually a radio station, and they took on a project of building an automatic speech recognition model in their language.
They got the buy-in from the elders in the community. I mean, they already had a relationship, so they were trusted. They built their own data sets. They had people telling stories, reading articles, submitting photographs, everything. They built their own language model, and they are now offering the API for that model, they license it, the revenue goes back into the community, and they're controlling their own destiny.
When, when I hear people talk about democratizing AI, that's what democratization of AI means to me. It means making sure that we don't leave cultures and languages and communities behind as we all go, y’know, running breakneck into the future.
Niki: I love the way you're thinking about this from a bottom-up, sort of, a groundswell of people owning their own participation in the technology because I do often think of it as top-down, like “These big companies need to do X, Y, Z.”
And what you're saying is there's an opportunity for people to participate and even make money licensing a tool that's going to help their own community. The other thing is I often use the example, and truly, until we just started talking about this, I hadn't thought of it.
I use the example of going into an emergency room, and I say things like, “AI is going to assist that ER admitting attendant with diagnosing you,” but that doesn't work if it cannot understand every language, right?
So, and I hadn't really thought about the idea that you can't just shove in, y’know, Google Translate and get exactly the nuance you need in that sort of scenario. So, it's a good point that there's a lot of wood to chop with getting better imagery, better prompts, better participation, and then just larger and more diverse models themselves.
It won't work to ask Big Tech companies to, sort of, adjust the dials on things. That's not really going to get us to AI that is helpful to people and helps them in the way they need to be assisted, which the whole point is these are assistive.
Nikki Pope: It is presumptuous of us to think that we know what the solution is for a community that we're not a part of. I have relatives who are in Cajun country, and I also have relatives who are in Gullah Geechee country in South Carolina, and I can't understand half of what they say. I know they're speaking English, but I don't understand it.
There are lots of pockets of communities like that in this country where we cannot presume, y’know, “We're this tech company, we're full of really smart people, we know the answer, and we know how to make this work in those places,” because we just don't.
Niki: I think this is such a good point. So, if I had to recap kind of what we've talked about, what you're saying is when you think existential threat, go to a personal level of someone being convicted of a crime because facial recognition did not work or being arrested and tried for a crime.
Even if it comes out okay with the jury, it destroys their life. Think of someone potentially not getting a mortgage based on inputs that are not really accurate to them. So, it's this idea of the algorithms creating an existential crisis for an individual. That [Nikki Pope: exactly] that's how we should be thinking of that term and the risks.
Nikki Pope: Today, right now, that is what the problem is. And we should address those problems.
Niki: These are threats and outcomes that are happening right now and impacting people right now. I'm talking to you from Washington, where we get a little wrapped around the axle and sometimes miss the basics that are right in front of us.
We talk about bias, but just talking about the impact, right? We have laws on the books today against some of these harms, laws we can potentially use to prevent them - but just thinking about it in that way. [Nikki Pope: Exactly]
Maybe we stop thinking of AI as a being or some anthropomorphized robot when really it's, like, just a series of prompts, each followed by the next likely word.
We need to stop treating it like Hollywood does.
Nikki Pope: Yeah, I mean, it's great that Hollywood does that because I love those movies just like the next person, but think of it like it's a toaster [Niki: laughs], and if your toaster is burning your toast, you turn the dial so that it's a little less dark and you tinker with it until you get the toast you want.
It's a tool to make toast. In this case, it's a tool to create a presentation or a tool to summarize a meeting or meeting notes. That's what it is. And if we think about it in that way and not use terms like [interrupts self] anybody who knows me at NVIDIA is probably going to laugh when I say this, like “hallucination.”
People say, “Yeah, well, you know, LLMs hallucinate.” No, they don't. People, humans, hallucinate. LLMs create incorrect information. They're not hallucinating [Niki: Right] because they didn't think about it.
So, when we stop using these terms that are for people and stop anthropomorphizing these machines, I think we are a step closer to, to getting to where we need to be when we talk about them and we talk about adopting them as tools that can improve our lives and improve our performance, free up some time to do something you're more interested in.
Like when, when I created that presentation in three minutes that would have taken me a half hour. I could spend that 27 minutes doing something else. Now, y’know, I probably could spend that 27 minutes doing something like reading a research paper or something, or I could spend it sitting at the pool [chuckling], y’know, but I can do something that's more enjoyable.
Niki: When you, [laughing] when you started to say reading a research paper, I was like, I'd be listening to a true crime podcast and staring into the middle distance with my extra 27 minutes, but it would be mine to do that with!
Nikki Pope: Exactly! We should be thinking about AI as ways in which we can improve the quality of our lives, the quality of our work, how we spend our time. When I think about it that way, I'm really excited about AI and about the technology in the future.
Niki: I am, too. And I, I so appreciated you coming on and talking to us and giving us a little bit of a straightening out on how we talk about what this can do, what it means, and the way we think about the threats because it's a paradigm shift from how I often think about it, and, and how I often hear about it spoken of in the tech community and just general zeitgeist.
If you think of it in these terms and we start to adjust our language in a responsible way, we're going to get a more ethical outcome.
So, Nikki, I've so enjoyed having you on the podcast.
I'm really grateful for you taking the time.
Nikki Pope: Thank you. It's been a pleasure.