Tech'ed Up

AI Deep Dive • Dorothy Chou

April 20, 2023 • bWitched Media

Head of Public Affairs for DeepMind, Dorothy Chou, is in Washington from London and joins Niki in the Tech’ed Up Studio to talk about the massive impact AI is already having on science, work, and the future of our economy. They cover the hard questions we all should be asking as this tech develops at breakneck speed, discuss the role of government in shaping this emerging tech, and highlight the importance of wrangling all kinds of people to shape the data and norms that underpin AI. 

“There needs to be a real renegotiation of what all of our roles are in this process as technology makes it into society. And I think that the dialogue between scientists and policymakers, it's never been more important.” -Dorothy Chou

Intro: 

[music plays] 

Niki: I’m Niki Christoff and welcome to Tech’ed Up.

Today I’m joined in the studio by Dorothy Chou, Head of Public Affairs at DeepMind. She’s in DC from London, pounding the pavement to talk AI. We’re tackling important questions about the real-world, right-now, culture-impacting changes this tech is bringing at warp speed. We go beyond the fun stuff, and scary stuff, of ChatGPT to explore the data, norms, and values that can - and definitely will - shape this tech. 

Transcript:   

Niki: Today, I am delighted to welcome my good friend and longtime colleague, Dorothy Chou. Welcome to the podcast. 

Dorothy: Thanks, Niki. You've known me my whole adult life. [laughs] 

Niki: I've known you your whole adult life. I was actually just thinking this, that we're taping this on my 45th birthday. [Dorothy: gasps] I know. Which is crazy.  [Dorothy: Happy Birthday!] Which means we met when I was 29.

Dorothy: Oh my gosh. 

Niki: We've been working together forever. [Dorothy: forever!]  Yeah, forever. 

So, Dorothy is here to talk today about DeepMind, about artificial intelligence. I'm so [pause] excited for you being- this big job running public affairs for an artificial intelligence company, and you've defected from the United States to London but are back in DC visiting.

Dorothy: Yeah, I mean, this is home for me. So, lots of, I think, rites of passage in DC that we experienced together, but, yeah, happy to be across the pond and also happy to be back. 

Niki: I wanna talk a bunch of things. So, just as quick background, we worked together at Google and at Uber on policy and comms teams. [Dorothy: mm-hmm] You were the Head of Corporate Communications at Dropbox. [Dorothy: mm-hmm] 

You're now in London, which is where DeepMind is, and all the headlines right now are about OpenAI, [Dorothy: chuckles] which is funded by Microsoft. It's a competitor in some ways, but actually, let's start with: what's DeepMind, and how are you different than maybe OpenAI or some of the other players in the space?

Dorothy: Well, first off, I think more competition in this space is, can only be a good thing, to be totally honest. But yeah, we are based across the pond. We were founded in 2010 and then acquired in 2014 by Google and Alphabet. We've got about 1,500 people located in, I think, the UK, France, Canada, and here in the States. So, largely headquartered in the UK.

Our mission is to solve intelligence to advance science and benefit humanity. And what that means in practice is building safe systems that can really accelerate scientific breakthroughs and yield research that can have a really positive impact on society.

And so, we've got like a lot of different teams, y’know, we take a really multidisciplinary approach. So, in addition to folks who work on machine learning and from the engineering side, we've got ethicists at the company. We've got people with backgrounds in philosophy. We've got folks with lots of different backgrounds, including neuroscience.

I think we take like a much more holistic approach to building AI and it's really exciting! 

Niki: So, sort of by implication, you didn't say this, I'll say it, [Dorothy: chuckles] versus just putting out into the wild, like, a consumer-facing app, where I feel like over just the past few weeks, it's like, "The computers are gonna take over, they're gonna take all our jobs! [Dorothy: chuckles] And then we're gonna be pets on leashes of robots."

Dorothy: [laughs] Oh my god! 

Niki: It just seems like maybe handing it to everybody, it was, I mean, I'm not saying, I do think maybe it's good for people to understand what's happening because AI's not new. Again, like you said, DeepMind was founded in 2010, but suddenly it's like they're just using it, and they're digging into it, and it's freaking them out.

Dorothy: Yeah. Look, I think there are a lot of different ways to release the technology, and in our minds, you know, transformative technology deserves really exceptional care. And so, one of the most important things that we've decided on is that we think you shouldn't beta test in public. [Niki: Mm-hmm] 

And of course, I think the counter-argument to that, if you take it too strictly, is that, well, "You've gotta be able to test these things to understand if they're working and then make adjustments." And that's true. It's just that there's a whole spectrum between fully open, releasing it to everybody in its, like, kind of infant state, [chuckling] and fully closed, that you can kind of play along. And we really think that these systems need to be really rigorously tested with a whole bunch of people to ensure it works for them before opening it up to everyone.

Now, that's not to say that, y’know, the public isn’t an, an important part of making these decisions, but that's exactly the point, right? We have to have really deliberative conversations with people from all walks of society. If you think it's going to disrupt a certain industry, we should be talking to those industries to figure out how to incorporate these technologies.

I mean, you and I were at Uber, that was incredibly disruptive, and, y'know, I wonder, looking back, if there would have been another way to manage those conversations that was much more partnered with the transportation sector.

[both laugh]

Niki: Instead of just throwing cease and desist letters in the trash? [Dorothy: Exactly] 

[both laugh]  

Dorothy: Well, here's the thing with tech, like, we don't think, for whatever reason, and "we" I'm using in a general sense, but technology historically has not looked at government as something that's important, when actually these people are democratically elected. That is literally your biggest user signal.

The role of the government is to define the public interest in line with what people want. And so, you have to play that game. You have to be involved cuz you have to listen to what the people want. 

Niki: This is really interesting. It's sort of, I'm an institutionalist, which is not popular at the moment, but I do think [Dorothy: laughs] it's actually highly unpopular.

But having worked at Google for as long as we did, [Dorothy: yeah] I was there for eight years, you were there maybe even longer than that.

So you and I, having been inside of Google, I know that that is a company that [Dorothy: mm-hmm] tries really hard to do the right thing. [Dorothy: yes, absolutely!] Now, it has some really unfortunate headlines at the moment, and I, I don't know about necessarily management decisions. I'm not on the inside anymore. [Dorothy: mm-hmm] But from a pure trying-to-do-the-right-thing-with-tech perspective, I always saw real care taken.

And yet, there's such a backlash against big tech. [Dorothy: mm-hmm]  So, that's one problem with an institution. So, maybe some people think like, well, why should you have proprietary information that we can't try out?  [Dorothy: mm-hmm] And then, on the other hand, government, you're exactly right, it is their job to protect the public, [Dorothy: yes!] but the distrust of government is off the chart. 

So, you've sort of got these two groups, which I think are actually intended to work together to come up with a good outcome for this revolutionary technology. The public, in general, doesn't really trust them. I think.

Dorothy: Well, I think policy and public affairs on a really good day, what you're doing is basically translating the public interest to the company and then also working with policymakers to- and, and the public- to better understand, like, “How can you make this technology work for you in a way that earns and retains people's trust?” 

It's like a two-way street. So, it's a really translational and relational job. Sometimes I think companies are a bit too slow to the punch to care about these things, and there are a lot of big lessons to be learned, but overall, I think it's an amazing role, and it's something that I'm super passionate about and think more people need to be getting into. 

Niki: Yeah, I agree! And I, and I also think we're sort of predisposed to be, [Dorothy: of course] maybe a little- 

Dorothy: We're a little biased!

Niki: We're a little biased!  And yet, because we both come from a kind of public affairs background, I do remember this one moment at Google [Dorothy: Mm-hmm] where we had just bought Boston Dynamics and I turned on the Today Show first thing in the morning, and I see one of our engineers kicking one of the robot dogs. 

Dorothy: Oh, no! 

Niki: I know! Exactly what you just said. [Dorothy: Oh my gosh] So, I called the engineering team and they said, “No, it's amazing! It can restabilize!”

Dorothy: Oh, lord. Uh-uh. 

Niki: I said, “You guys, not the point, people. It's not nice. It doesn't look nice. Nobody wants that.” [Dorothy: no!] Nobody wants you kicking a dog. He's like, [in bro's voice] “It has a metal exoskeleton. You know, it's not a, it doesn't have feelings, Niki!” [Dorothy: chuckles]  I'm like, “No, I know!!”   [Dorothy: Oh my gosh]

And I think there's a little bit of that with this too. Like, people are just using it, trying to break it, right? Pushing it to the edges [Dorothy: Yeah] with ChatGPT and then going down rabbit holes of, "Kids are gonna cheat. No one's gonna know how to write. Lawyers will be lawyered out of existence." 

So, let's talk because we don't want that to be the narrative. 

Dorothy: I mean, to go back to your point about institutions, I think it's more important than ever that we figure out what the new institutions need to be to talk about how we integrate this technology into society.

Like, I know you're not on this side, but y’know, the decline of unions is a big deal. 

[both laugh]

Dorothy: Unions are where people, especially in Europe, used to suss out these things. And, y'know, the media as an institution in this country has also really changed. And so, where are the public spaces where we're gonna have these conversations? Like, that needs to be rebuilt. And participation actually is a huge, huge problem in tech that we haven't figured out. Like, how do you do public participation and input well? 

Niki: Yes! And especially, it seems to me, tell me if I'm understanding this correctly, if you've got tools that people are using, [Dorothy: mm-hmm] then every time you put in a query it's adding to its knowledge set, [Dorothy: mm-hmm] and if it's a certain elite subset of people [Dorothy: Oh my gosh] who are interested and have the time, that's not gonna look like- [interrupts self] It's gonna keep corrupting the dataset, I think? 

Dorothy: It's not going to create a solution that works for everyone. And y'know, we're all in this thing to, to basically build artificial general intelligence. But if you're not serving the margins, can you really call yourself general? [Niki: Mmmm!]

I mean, I just think we need more and more diverse decision-makers in the room, more and more diverse decision-makers at the table, in general, because we aren't going to be building intelligence that serves everyone until that happens. And we've already seen how so many technologies have been released, being like, "Eh, it works for the majority of people and it's good enough for everybody else."

The problem is that the “good enough” population is the same one over and over and over. [Niki: Right] And then you are just perpetuating injustice. So, how do we bring those historically excluded populations into the fold? What does that look like? It's not easy. 

Niki: And that's something you're working on!

Dorothy: Yeah! It's something, y'know, we're really passionate about.

If you look at some of the biggest problems we have in AI, they have to do with data gaps. How do you fill data gaps? I mean, I talked to a woman who runs an amazing organization called the Obsidian Collection out of Chicago. And the reason she started it, she's retired, is she was basically asking her son- she grew up in a predominantly black community- to look up an incident that happened in her childhood on Google. It didn't exist. And she found more and more that where events did exist, they were narrated by people outside of her community. 

And so, the narrative starts shifting to be something that's pretty inauthentic to what the black experience is. And so, she's now on this mission to digitize black archives around the world and enable black communities to label their own data.

I think that's amazing. It's something that tech typically doesn't like to do because it's hard to scale quickly. But filling in some of these gaps and being representative of these communities is incredibly important. There are huge questions that this opens up, of course, like, y'know, [chuckling] "Who gets paid for this and how does that work?" [Niki: Right] But I think it's that incredibly unsexy work that needs to happen. 

And the good news is you don't actually have to have a perfect data set that's huge. I mean, the technology's now good enough that you can take, like, a reference data set, and as long as we expose these algorithms to contrasting data sets, that starts to change how they work.

Like, you and I grew up in very different environments. We were given one data set with our little brains as kids [chuckling] [Niki: Right?!] to basically process information. And you and I have actually come a really long way. And the reason we did was because we were exposed to contrasting data sets as we went through life and AI's the same.

Niki: Oh my gosh! Right?! My original dataset- 

[both laugh]

Dorothy: My original data set!  

Niki: Thanks, Mom and Dad!!  I know, right? 

[both laugh]

Dorothy: [still chuckling] That's why we're all in therapy now. [laughs] But anyways, that's actually, like, a real way of thinking about how to train a model. You want contrasting datasets. You might never have the perfect data set, but you want contrasting data sets to expose these models to, to keep improving the system.

Niki: So, let me back up one second [Dorothy: sure] and repeat what you just said to make sure I understand. [Dorothy: yeah] So, a perfect data set would be sort of representative of the whole world minus some of the horrific things, I mean, specifically around women and people of color that exist on the internet. 

But maybe you don't even take those out. Maybe you just have a [Dorothy: mm-hmm] more expansive set, like this project this woman's working on, that includes more people, more images that look more like the world and all of our perspectives, versus the small set of what's actually on the internet, which will get replicated because it's easy to scale. That could be a perfect data set, but what you're saying is you don't even need that. You just need the machines [chuckles] to see contrasting data sets to build and evolve. 

Dorothy: Yes, exactly! They're, you're basically, it's the algorithms and systems themselves that need to be better trained. Not you! You're never gonna get to the perfect data set, like, it's always gonna be- 

[interrupts self] Also, these things are dynamic, right? It's not like you're like, “One day, we will reach the ultimate perfect data set that will yield the perfect ethical results.” That just doesn't exist. [Niki: yeah] It's actually about teaching these systems and creating a dynamic environment where we're constantly retraining, like, the things that we're okay with today, ethically, we probably weren't okay with 10 or 20 years ago.

Society's come a really long way in a very short period of time. And so, instead of trying to constantly perfect a huge data set, which is just unsustainable, it's really about curating and plugging in the gaps in ways that enable these systems to encounter contrasting data sets, so they can come to a more updated understanding of what they should be producing.
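To make that idea concrete, here is a minimal sketch in Python of what Dorothy is describing. The toy datasets and the off-the-shelf classifier (numpy plus scikit-learn) are invented for illustration, this is not DeepMind's actual training setup; it just shows how retraining on a contrasting data set shifts a model's behavior, with no "perfect" data set required.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# "Original" data set: features drawn from only one region of the space.
X_original = rng.normal(loc=-2.0, size=(500, 2))
y_original = (X_original.sum(axis=1) > -4.0).astype(int)

model = LogisticRegression().fit(X_original, y_original)

# "Contrasting" data set: covers the underrepresented region.
X_contrast = rng.normal(loc=2.0, size=(500, 2))
y_contrast = (X_contrast.sum(axis=1) > 4.0).astype(int)

# Retrain on the combined data. Exposure to the contrasting set
# shifts the decision boundary; neither data set had to be perfect.
X_all = np.vstack([X_original, X_contrast])
y_all = np.concatenate([y_original, y_contrast])
model = LogisticRegression().fit(X_all, y_all)

# Predictions now reflect both regions, not just the original one.
print(model.predict([[2.0, 2.0], [-2.0, -2.0]]))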

Niki: You said something when we were talking right before we started recording [Dorothy: chuckles]. You said, “There's, there's lots of data, but there's a dearth of intelligence.”

Dorothy: Yeah. I think that's the most compelling reason why I love working on AI, to be honest. There's so much data out there now, you know, the whole Moore's Law thing, data is exploding, and there's just not enough intelligence to make sense of it all. [Niki: Mm-hmm] Like, data's not helpful unless it's actionable. And so, how do you take all of those inputs? And that's why I think AI is so important, because it's building that scarce resource up so that we can make sense of all the data that's out there and actually be empowered to make better decisions. 

Niki: So, let's talk about an example of something DeepMind did, which, I don't know, we can do it quickly because I don't wanna, I don't want people to be, like, zoning out if it seems too complicated, but it's a, it's a really important and serious [Dorothy: Yeah] use case, which is AlphaFold. 

Dorothy: I mean, it's a fundamental discovery in science! I think the easiest way I understand this is, y’know, how we have DNA? [Niki: Mm-hmm] Everybody knows we have DNA, but the way that DNA expresses itself is in proteins.

Proteins are basically the building blocks of life. You can find them anywhere, but the way that proteins take shape actually impacts how they function. So, if you think about Covid, do you remember we were trying to create vaccines that bind to the, the spike protein? [Niki: Yes, I do [chuckles] I do remember that] 

The spike protein is shaped like a spike, and so it has a certain [chuckles] kind of purpose and function. And so, you can imagine in the body, if a protein misfolds, how destructive that can be. And so, you have diseases that are impacting people day to day, like Alzheimer's and Parkinson's. A lot of these things are attributed back to protein misfolding. And so, if you understand how the proteins in these structures take shape, you can understand much better how the diseases function, as well as, hopefully, how to build better compounds that can resolve them.

I mean, this is what we're trying to do, is put these things into drug discovery. It's also interesting, I mean, there are other use cases, like building plastic-eating enzymes. [Niki: Yes!] That's pretty cool. 

Niki: So, DeepMind has a podcast, and I listened to an episode on this [Dorothy: yeah?!] where- I don't know if you know this?

Dorothy: Yes, yes, yes! [chuckles] 

Niki: And, I listened to the episode on the plastic-eating enzymes, and basically what the guest was saying is that, by understanding how the proteins work, [Dorothy: mm-hmm] they could create something that eats microplastics in the ocean. [Dorothy: Yeah] Which is genius! [Dorothy: Yes] Because then it can actually deal with all these things we've thrown in the ocean. 

Dorothy: Yeah. So, it's like a fundamental building block that we've been able to uncover. I mean, this is a question that's existed for at least the past five or six decades, once they figured out that protein misfolding was behind so many other issues that we see in the world. Understanding how proteins fold, how they take shape, and their overall function can actually help us to engineer proteins and do more. For us, that was one of the biggest breakthroughs, y'know, that we were really excited about.

It was probably the first time we really saw, y'know, the research that we were doing in games really be applied in a real-world setting that could yield scientific benefits. So it was super exciting. Y'know, we were working with the Drugs for Neglected Diseases initiative, and they were telling me that one of the structures that we provided them could have a huge impact on a disease that affects 2x the number of people who've died from Covid globally.

So, it's pretty amazing. With neglected diseases, y'know, there's no money in it. [Niki: mm-hmm] No one really wants to work on it, but I think, y'know, with us releasing structures for all of the known 200 million proteins in the world, it's gonna make a big difference.

I mean, if you think about it, it used to take, I think, four to five years of a Ph.D. with lots of expensive machinery per protein structure. And now, we've released 200 million in one go! [chuckles] [Niki: right] So, it's changed; it's gonna change how we do science, and I'm really excited about that. 
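For anyone who wants to poke at that release themselves: those predicted structures live in the public AlphaFold Protein Structure Database at alphafold.ebi.ac.uk. Below is a minimal Python sketch of downloading one structure; the file URL pattern and the "v4" model version match the database's published download format around the time of this episode, but treat them as assumptions that may change.

import urllib.request

# UniProt accession for human hemoglobin subunit alpha, used as an example.
uniprot_id = "P69905"

# Assumed AlphaFold DB download pattern; the version suffix may change.
url = f"https://alphafold.ebi.ac.uk/files/AF-{uniprot_id}-F1-model_v4.pdb"

with urllib.request.urlopen(url) as response:
    pdb_text = response.read().decode("utf-8")

# Each ATOM record carries the predicted 3D coordinates of one atom.
atom_lines = [line for line in pdb_text.splitlines() if line.startswith("ATOM")]
print(f"{uniprot_id}: {len(atom_lines)} atom records fetched")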

Niki: So, I love that you brought this up because it is, it's just a serious application and I feel like, again, the headlines right now are, “I'm engaging in a conversation with ChatGPT [Dorothy: chuckles]  to try to like goad it into being like a toxic partner.”

And it's like, I don't know, is this the most helpful thing to be doing?

[both laugh]

Dorothy: Look, I do think we need certain people testing the systems, right? 

Niki: [interrupts] I mean, it is, in theory, what these machines, what they are. They do seem to be toxic romantic partners! 

Dorothy:  I mean, I like, I would not advise getting dating advice from ChatGPT!

[both laugh]

But all that being said, I do think it's really interesting to think about the ways that these systems can enable people to be more creative. 

So, there's a startup that I've been talking to called OpenArt in Silicon Valley, actually founded by two people of color, which I'm very excited about. But what it does is, you can basically have, like, a fashion designer give it all their previous designs and then input a few things like, "I want this trend and that trend reflected," and it'll just spit out, like, tons of different designs based on your original designs, new ones. And then, you can just basically go in and edit it and change it. And that's actually, like, co-creating with the AI system, and I think that kind of use case is really, really cool to me. 

I've heard about kids with dyslexia who just have a lot of trouble communicating in the ways the rest of us communicate. And I think ChatGPT and tools like that can make a huge difference for them.

Niki: Yeah. Thinking about it as an accessibility tool. 

Dorothy: Exactly. It can have a lot of great benefits. Now, [pause] does that mean [pause] I think we should all have access to it right away? And that's how it's being used right now? Maybe not. And that's where I think having these conversations about the ways that you should integrate it into society is incredibly important.

And I don't see that happening enough. It's something that we at DeepMind really want to start doing more and more. We're hosting all these round tables with different industries and verticals and trying to figure out, like, "How would you incorporate this technology?" [Niki: Right] And that's, that's how you bring people along with you, too, right?

Like, you don't want to impose technology on society. You want there to be a public mandate for it. And that's what I think we need to build. And so, that's what we're working on. That's what I'm excited about. 

But yeah, it's, y'know, look, my mom played with ChatGPT last night, and she basically, my mother's a scientist. I'm very proud of her! [chuckles] And what she was saying was, like, [in high pitch voice, animated] "This has no citations! How do I know where this information's coming from? Also, I tested it out in Chinese and it's giving me different answers, but," and she goes, "I guess that's okay because it's probably a different corpus." But these are things that- 

Niki: Your mom said corpus?!

Dorothy: I mean, she's pretty cool, man. She's a scientist. But these are, 

Niki: [interrupts laughing] This goes back to our original data sets. 

Dorothy: Oh my, yes, yes, my original data sets, my parents! But that just goes to show, like, she's right, like, people are taking a lot of what these systems are spinning out as fact, and she's really worried about that as a scientist, right?

She wants to know, "Where is this information coming from? How do I know if it's true?" And this is all stuff that we should be querying, but we haven't told everybody that, right?

Niki: Which is, is part of your mission, [Dorothy: yeah] doing public affairs in this space and policy work. [Dorothy: yes!!] So, you're in town. That's something that you're working on.

You're also working with European regulators. [Dorothy: yes] Let's talk a little bit about that; a lot of our listeners are in the tech policy space. What are your goals? How do you, how do you think regulators should be looking at AI? And, and again, I just wanna revisit what you said at the, I almost just said "double click" [Dorothy: chuckles], man-

[both chuckle] 

Niki: I really wish I hadn't even almost said that. [sarcastically] I just wanna double click on something you said, which is that these are public servants, right? [Dorothy: mm-hmm] They're supposed to be looking out for our welfare. [Dorothy: mm-hmm] 

So, when you're talking to them, what are you trying to express that they need to do and what are they maybe getting wrong or need guidance on?

Dorothy: What I'm trying to get to first is: what is the common goal that we have, or the common values we have? So, for me, for example, and I would assume this is what they want: You want generative AI to be grounded. You want it to be accurate, you want it to be fair. You want it to be, ultimately, like, harmless, if possible.

And if these are the goals we have, how do we incentivize companies to get there? How do we do that together?

I don't think a single policymaker would say, "Well, I'm okay with, like, just not using it at all, and put it away and let every other country get on with it." I don't think that's where we are. [Niki: Right] Right. We need to understand how to curate data sets better, label them better, and ensure they're, like, ready for AI.

And at the same time, we also need to know: How should it be used? What are the pitfalls? How do we manage things that we've already been managing in a real way, like misinformation? And so, we can agree on some of these goals. I think together we can build the right institutions, like we were just talking about, [Niki: Mm-hmm] to combat that. Like, one of the models I've been thinking about a lot is, you know how, like, the security industry has these bug bounty programs? [Niki: Yes!] And they basically had this responsible disclosure norm that they always-

Niki:  [interrupts] Oh wait! Let's explain a bug bounty program cuz I just said I know what it is, but- 

Dorothy: So, a bug bounty program is basically, like, y'know, you have these white hat hackers who can basically, like, find vulnerabilities in your software system, which people typically can exploit to get access to people's information, steal credit cards, yada-yada. But, like, that's just one example.

But what some larger tech companies have done is they've gotten together and they've said, "Okay, we are gonna actually incentivize you guys to go after this, and we will pay you a bounty to go find the weaknesses." [Niki: Yeah, find the weaknesses] Yeah. 

Niki: The Pentagon actually did this! 

Dorothy: Yes. It's, it's, it's become a government norm now, too. And it's amazing. It's an ability to incentivize people to work together to make a system more robust. [Niki: yep] And that's actually what I think we need for AI as well. 

We need that on bias. We need that on a number of things. We need people who are dedicated to, like, breaking that, but working with you to make the system better. [Niki: Yeah] 

That's what I think we need in terms of institutions, and I think what we can do is work with policymakers and others to make that happen. I mean, if you look at, I guess to borrow from Larry Lessig, you have lots of different nodes for change, the main ones being markets, normative behaviors among companies, and then laws to scale the behaviors you want to see. 

And so, how do we think about, you know, if you're a venture capitalist, how do you think about, like, the types of investments you want to make in AI and what requirements you want those companies to have around groundedness, accuracy, fairness, like, not leading to harm? If you are a company, when you think about releasing these technologies, how do you think about those principles and applying them?

And at the industry level, y'know, what are the requirements you're gonna have when people submit papers to conferences? Y'know, there are a lot of conversations around deprecating data sets that we know are extremely biased and unhelpful. [Niki: mm-hmm] And then on the government side, it's, y'know, "How do you create the policies that will create the right kinds of guardrails that are gonna be future-proof?"

I mean, one of my biggest fears is, if you home in too much on exactly how the system works today, by the time a lot of these laws pass and are implemented, they're gonna be completely obsolete. And so, can we work together on what the future of this looks like and what the norms should be leading up to that?

Niki: I love this. Well, you said a couple of things that were really novel to me. One, you listed several attributes, but I love the idea that even if it's just "cause no harm," [Dorothy: mm-hmm] that seems like a very simple thing we should be applying to this new technology as a baseline. [Dorothy: Yep] You then talked about venture capitalists having an ethical code around their investments, which seems novel to me. [chuckling] 

Dorothy: I mean, they are like the tip of the spear. 

Niki: [interrupts chidingly] They?! You're doing a great job investing!  

Dorothy: And this is why, as you know, I'm doing, I'm angel investing. [Niki: yeah, but still, you're getting into] Yeah, y'know, I'm in it because I think that markets are a huge way to shift normative behavior and move it towards more ethical means.

I mean, the firm that I do angel investing with, Atomico, is one of the bigger venture capital firms in Europe. They build ESG and DE&I requirements into their term sheets when they lead investments. That's huge. I mean, you're basically changing the baseline for how people do business, and ultimately, y'know, with more diverse decision-makers at the table with emerging technologies, I think that will change how these things are released and who they benefit.

And so, that's like one of the, you know, soft ways people can start creating change. I wanna see that scaled, obviously, through policy. I do think there needs to be policy change, and, you know, when we're talking about AI regulation, it's really interesting, because we didn't have horizontal regulation for the internet. [Niki: right]

We, we basically, especially in the US, went towards sectoral reform. I think that's still necessary. But I do think AI is different in that it's generative, it scales super quickly. You're not laying down new pipes for this, for example. [Niki: Mm-hmm]  And it has a real potential to either entrench or disrupt existing power dynamics.

And in that situation, you do need guardrails. And so, I think there are some real conversations around: Where should AI be applied? Who's responsible? How should it be built? Where should people get access to it, and who should get that access? Like, those are questions that we should all be discussing, for sure.

Niki: Yeah! Well, and you're the perfect person to be discussing this, cuz I know you're doing literally everything from the communications aspect [Dorothy: chuckles] to meeting with regulators, y'know, in Europe and here in the United States. You're running through and thinking through creative solutions to the way we're addressing this new technology.

It's really important to have people bringing this thoughtfulness, but we can't control it all; y'know, there are going to be people just releasing things into the wild. [Dorothy: of course] 

And I think to some degree, having responsible actors thinking about those red lines [Dorothy: mm-hmm] and those ethical lines is important. And then to your point, the market may end up deciding how they, who they wanna work with and how they wanna work with- 

Dorothy: Oh my gosh! I mean, VCs and corporate executives today, and frankly, boards have a much faster impact on how AI gets integrated into society than governments do at this stage. And I think there's a real responsibility there. 

I mean, y'know, like, if you go back to philosophy, way back to Rawls' Theory of Justice, which is what, like, most American systems are based on, basically his argument is, like, "Scientists should just keep innovating. Companies should just keep supporting that, and then it's the government's job to regulate."

We're in an environment where I think that the burden of moral responsibility can't be split that way anymore. It's unsustainable. Like, y'know, with cutting-edge technology, the scientists who are building it are kind of the main people who understand [Niki: mm-hmm] what its impacts will be, or can forecast that. And so there needs to be a real renegotiation of what all of our roles are in this process as technology makes it into society. And I think that the dialogue between scientists and policymakers, it's never been more important.

But we haven't built that muscle yet. So, that's probably incumbent on people like me and you, actually. 

Niki: Yeah, I think it is. Well, probably more you! I mean, [Dorothy: laughs]  I'm, I'm hosting podcasts. 

Dorothy: Well, you're doing this!  This is pretty great!   

Niki: I'm doing this! Actually, good point! I'm platforming [Dorothy: you are] someone who is thinking about this, and you said so many smart things, Dorothy. [Dorothy: thank you] I am so glad you're in town. I know you're pounding the pavement across DC. I wanna put some links into the show notes [Dorothy: whew!] of the different things you mentioned, the different projects. It would be really cool to get some attention for those, or maybe even companies you think are cool that you're looking at. 

We'll drop in a few links to see the Obsidian Collection. I'm gonna drop in that podcast from DeepMind that I thought was really interesting. 

Dorothy: Yes, please do! Oh my gosh, my team is gonna be very upset with me [chuckling] not mentioning that! The DeepMind Podcast is great. 

Niki: Yeah, the DeepMind Podcast is great.

Also, the host, she said at one point, which I thought was really funny, she was talking about chatbots, and she said, "So it's just a clever parrot," and I thought, oh gosh, she's British! [chuckling] 

Dorothy: I mean, my favorite article on this is actually by Ted Chiang, who's a famous, I would say, like, futurist writer, [Niki: Mm-hmm] fiction author, who said, in the New Yorker, I think, "ChatGPT is a blurry JPEG of the web." Personally, I think it mansplains to me a bit, but that article's a great one too.

Niki: Okay. We'll link to a bunch of this stuff, because I know people have questions and want to dig further and learn more. [Dorothy: yeah!] 

Dorothy, thank you so much for taking the time while you're in Washington to come in.

Dorothy: Yeah, thanks for having me! 

Outro:  

Niki: Thanks as always for listening. And speaking of algorithms, a gentle ask for you to follow, like, or share this podcast - it really helps us get the machines to help others find the show. 

On our next episode, Denelle Dixon, CEO of the Stellar Development Foundation, joins me in the studio.  It’s easy to get caught up in all the FUD (that’s fear, uncertainty, and doubt in crypto speak) when talking about digital assets. But there’s actually some amazing and transformative good stuff already being done in the space.  Denelle is a language nerd - just like me - and we’ll chat about how the industry can better communicate outside the bro bubble.