Tech'ed Up

AI: "No, Your Computer Isn't Listening to You" • Austin Carson

March 31, 2022 • bWitched Media

AI enthusiast and founder of SeedAI, Austin Carson, started a non-profit to get artificial intelligence projects launched in new and diverse communities across America.  We talk about “machines thinking for themselves,” bias in code (and in the world), and Austin finally answers the burning question: Are our computers eavesdropping?

"Just like we are, it (the internet) aggregates our good things and aggregates our bad things and that lays us kind of bare in front of ourselves in the world." -Austin Carson 

Intro:

[music plays] 

Niki: I’m Niki Christoff and welcome to Tech’ed Up. This week’s guest is AI enthusiast Austin Carson, whose non-profit is working to get artificial intelligence projects launched in new and diverse communities across America. 

A note to our listeners: I’ve asked my podcast producer, Selçuk, if he can mute me cackling at my own jokes. He says it’s not possible because I cackle so loudly that it’s picked up on the guests’ microphone. 

So, y’know, that’s a vibe! 

Transcript

Niki: Today in the studio, we have Austin Carson who’s the president and founder of SeedAI. Welcome Austin. 

Austin: Hello. Thank you for having me. 

Niki: Thanks for coming into the studio. We were just talking, before we started recording, about how podcasts are mostly just free association. So- 

Austin: [interrupts] It’s something I'm uniquely good at. 

Niki: [laughing] Perfect! You're a perfect guest. [Austin: Yeah, thank you, thank you] Then I want to start really quickly with you. You were a journalism major, you worked on the Hill. How did you go from that to becoming D.C.'s AI guy? 

Austin: [laughs] It's funny because even being called that makes me distinctly uncomfortable. There's a certain- D.C. is a place for people that are conceptual understanders of things and are then able to put that into something that feels pretty concrete and substantive by drawing on the knowledge of a bunch of other people.

And so, I would say that's pretty much how and why, to be as succinct as possible, but ultimately I came up doing comms on the Hill. And then I was just, kind of, the token nerd in every office I've ever been in, and so my first legislative director was, like, "Man, you will not shut up about this stuff. And I hate it. Would you please take over tech policy in our office?" 

And so, from then on, I just kind of gravitated further and further in that direction until the time at which I worked for then-Chairman, now Ranking Member, Congressman McCaul. I pretty much only did that- I, like, ran the legislative office and then only worked explicitly on tech issues.

So, about the time I left the Hill and went to be the executive director of a think tank called TechFreedom, I had gotten really involved in reading about AI, learning about AI. I mean, this was 2016. This was when I think it had started to get further and further into, like, the popular zeitgeist, or the popular mindshare, as a function of it being integrated into more and more products, right, at a larger scale. Y'know, and I think at that time, it was more like, okay, Netflix now has a sophisticated recommendation thing; Google, of course; Amazon, of course; Twitter, of course. 

I was looking to put together a project, not dissimilar to SeedAI, and one of my mentors recommended I reach out to Nvidia and talk to them about it. And Nvidia was like, "Hmm, or you could just come work for us, which would be nice." So, I worked at Nvidia for three and a half years, and there is no better place, that I can imagine, to learn about, kind of, the ecosystem, the nuts and bolts of artificial intelligence, and then the building blocks and what it takes to put it together.

And because Nvidia is, on, y'know, a primary level, like a computing platform company, right, that has pivoted aggressively toward AI, it gives you visibility into what everyone is making. What is the next generation of technology? Like, where are things really going? Where's investment? So, it gives you kind of half a dream of what AI is and could be, and then it gives you half very practical, like, this is where money is actually going. 

Niki: Okay! So, we're going to talk about SeedAI [Austin: mm-hmm] which is this non-profit that you started last fall, but before we do that, [Austin: mm-hmm] there are, right now, in the zeitgeist [Austin: mm-hmm] a lot of buzzwords that everybody's talking about: metaverse, web3 and, obviously, NFTs and crypto, which I would set in a different category. [Austin: mm-hmm] But a lot of this stuff, in my opinion, is theoretical. [Austin: mm-hmm] Sometimes, I would say, even marketing concepts [Austin: mm-hmm] more than actual new technical concepts. I've been to web3, and I had to interact with a jillion intermediaries to get there.

Austin: [chuckling] I've been to web3, [Niki: I have!] I've been to the future! [Niki: I've been to the future] Yeah, yeah, yeah! 

Niki: I paid a toll, to a bunch of different companies. [Austin: mm-hmm] These seem like concepts and theories that aren't in practice. And you said something at South by Southwest a couple of weeks ago about- there was this fear, or maybe discomfort, or maybe people were looking toward the future [Austin: mm-hmm] of what artificial intelligence could be like, and you said something about how we're already at what we thought would be 50 years from now. [Austin: mm-hmm] Can you talk about that a little bit? 

Austin: I'm taking a little bit of license with what people are referring to in that instance, but people generally talk about artificial general intelligence as kind of the holy grail of AI, or the thing they're terrified of. Right? People think of artificial intelligence, again, right now, broadly speaking, as good at a thing, right? A specific narrow task.

I explain AI to people as having two sides: there's training AI and inferencing AI. And those together make, quote, AI. Training AI in normal narrow applications is, like, you're going to learn how to cross the street. And so, you're born as a baby, right? And every time you go up to a street and start across the street, you're training this fuzzy probability thing you have in your brain of what crossing a street is all about. Sometimes you run across the street and your mom will grab you, and sometimes you almost get hit. Whatever. And so, over time, you get to the point where you're walking up to a street and you don't even think about it. You have a fully functioning model of what crossing the street looks like. Right? 

And the next time you walk up to a street, that's the inferencing part. So, the computers come in, they do all that crunching of probability stuff, and the model makes this inference- you can cross the street or not- and most of the time it's unconscious. Right? Now, what we're getting to now is closer to how our own minds work. Normally, all the other information about crossing the street has been kind of blocked out of an AI. Right? But now, we're getting to the point where we're just pouring a ton of data into these machines, not necessarily just totally tagged and labeled down to what-is-crossing-the-street stuff; now it's just a ton of stuff.
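(For readers who want the training-versus-inferencing split made concrete, here's a toy sketch in Python. The street-crossing features, numbers, and choice of model are invented for illustration; this is not anything discussed in the episode.)

```python
# Toy illustration of the two sides of AI Austin describes:
# "training" fits a fuzzy probability model from repeated experience;
# "inferencing" applies that model to a new situation without relearning.
# All features and data below are made up.
from sklearn.linear_model import LogisticRegression

# Training: each experience is [car_distance_m, car_speed_mps, light_is_green]
experiences = [
    [50.0, 2.0, 1],   # far away, slow car, green light
    [5.0, 15.0, 0],   # close, fast car, red light
    [30.0, 5.0, 1],
    [8.0, 12.0, 0],
]
outcomes = [1, 0, 1, 0]  # 1 = crossed safely, 0 = not safe

model = LogisticRegression().fit(experiences, outcomes)

# Inferencing: walking up to a new street, the trained model produces
# a probability "unconsciously," with no further learning.
p_safe = model.predict_proba([[20.0, 6.0, 1]])[0][1]
print(f"Estimated probability it's safe to cross: {p_safe:.2f}")
```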

And so, instead of it being like, "Okay, I'm filtering out everything else," you're like, "Okay, I'm kind of learning the way cars move." Right? Or "I'm kind of learning what the weather is like," and it's not quite the same, but if you look at the, kind of, cutting-edge AI technology today, it just cobbles together the different connections between the data points. And generally, the system was designed to just understand human language and say, like, "What's next in a sentence?" And yet, now that all that information was in there, the systems developed general capabilities. Like, it can write long-form text, it can generate code. And as soon as they (they being, like, large companies) realized it could generate code, they all, sort of, started making no-code platforms.

So, now, you've gone from a thing that can help you write an article, to a thing that can pretty much write a whole article, to a thing that can also write code. And they figured out how to use the same types of models to generate images, to generate video. Right? And also to process and understand that. And ever since that technology came out, we've been in a new epoch. 
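(As a concrete taste of the "generation" epoch Austin describes, here's a minimal Python sketch using the Hugging Face transformers library and the small GPT-2 model. These are illustrative choices; the episode doesn't name any particular model or library.)

```python
# GPT-2 was trained on the objective Austin mentions: predict
# "what's next in a sentence." Prompting it yields generated text.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Artificial intelligence is already woven into",
    max_new_tokens=30,  # how much text to generate beyond the prompt
)
print(result[0]["generated_text"])
```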

And now we're moving into a space where we unquestionably have general artificially intelligent technology, right? More general- now, again, it's not totally general. It's not the same as a human mind, but that's not going to stay that way. I mean, if you look at Yann LeCun, who's one of the godfathers of AI and works at Facebook, he just put out a paper about, y'know, what does a truly general AI look like? 

Niki: [interrupts] Does it look like Scarlett Johansson?

Austin: [sigh] Maybe! [Niki: laughs] These are incredible tools, right? They are incredible tools still. Right? They don't have, they don't have agency in the sense that without a prompt AI doesn't do anything. Right? 

Niki:  Yeah. So, wait!  [Austin: Ok.] Let's back up a lot.  [Austin: Ok]  Normally I say, I always say let's back up a smidge, [Austin: yeah, yeah] but we're gonna back up a lot with you.  [Austin: Ok, ok, ok]  I've worked in tech forever. And when I was working at Google, [Austin: mm-hmm] there was something called Google411 [Austin: mm-hmm] you could call a number and get information. [Austin: That is very funny!] Okay. So, it was a phone number and you would call Google411 [Austin: what year is this?] It was 2007 [Austin: Hell, yeah!] [chuckling] And the idea was they were collecting voice snippets for voice search. And we weren't really sure what we were going to do with the data but those voice snippets ended up helping develop voice search. [Austin: mm-hmm] 

And then, once we got the early, early prototype voice search, I would be driving back from San Francisco- I lived right by the ballpark [Austin: mm-hmm] and I would say, "What's the score of the Giants game?" because then that would tell me what the traffic was going to be like, and over and over it just said, "Chinese Checkers."  

Niki: I'm like, "I'm not saying Chinese Checkers, [Austin: Yeah] what I'm saying is Giants game." [Austin: Yeah] So, first, I think even before you get to predictive, [Austin: mm-hmm] meaning what's the next word, it was just trying to recognize what the heck you were saying. [Austin: Right!] And even now, we get these harrowing stories of AI misidentifying what's in a photo. So: recognition, then prediction, which you were just talking about, which is, like, what's the next word? [Austin: mm-hmm] And then, what you're saying is it goes beyond that, into creating text, photos, code. [Austin: Generation] Wait, and what's that called? [Austin: Generation] Generation. And so, we're there, and then the final step would be it's thinking for itself.

Austin: No! See, and I think that that's kind of the problem: we look at the final step as "it's thinking for itself," when it's just going to be next steps. I would say that, like, the transformative step is going to be kind of, like, automatically productive technology, which is going to be an agent that you give a broader prompt that then kind of operates and does a bunch of different stuff and comes back.

But it would be like a very broad directive, and then it would continue to do something. I think those are some of the horror concepts of, like, "Okay. We tell an AI, 'fix the environment,' and it kills all humans." Y'know? I think that that's kind of the, [Niki: laughs] and that's called alignment. That whole area of, like, consideration and research is called AI alignment, which is very important.

And I think AI alignment is one thing we do not spend remotely enough energy on, on a conceptual basis, because a lot of the energy goes to, like, ethics and harm reduction. Whereas alignment is more like, okay, how aligned is an AI, as a general matter, with human interests, right? Which of course also requires you to define human interests, which is fun in itself. 
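(A toy way to see the alignment problem Austin sketches: an optimizer given only a proxy objective, with no notion of human interests, happily picks a catastrophic option. The scenario, policies, and numbers below are all invented for illustration.)

```python
# Misaligned vs. aligned objectives, in miniature.
# "Fix the environment" becomes "minimize emissions" -- and nothing else.
policies = {
    "plant forests":          {"emissions": 60, "human_welfare": 90},
    "carbon tax":             {"emissions": 40, "human_welfare": 80},
    "shut down all industry": {"emissions": 0,  "human_welfare": 5},
}

# Proxy objective only: emissions. Human welfare never enters the picture.
proxy_pick = min(policies, key=lambda p: policies[p]["emissions"])
print("Proxy objective picks:", proxy_pick)  # -> shut down all industry

# A (still crude) aligned objective trades emissions off against welfare.
aligned_pick = max(
    policies,
    key=lambda p: policies[p]["human_welfare"] - policies[p]["emissions"],
)
print("Aligned objective picks:", aligned_pick)  # -> carbon tax
```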

Niki: Right! There's a lot of focus already on bias in AI. [Austin: mm-hmm] Like, everyone's aware of the bias when you have humans programming the machines; the machines are then going to have blind spots [Austin: mm-hmm!] based on who the humans programming them are. And then some of the biases we also just need to correct for, things like: who's the most credit-worthy person? [Austin: mm-hmm] That might actually be an illegal answer if the machines decide.

Austin: Yeah! But I think both of those are interesting things. Like, first of all, I think the importance of the person creating the technology is actually tuned a little bit more towards their knowledge of potential outcomes, right? Like, what are things that disproportionately impact certain communities that, like, us, a more homogenous group of people, may not be able to understand, know about, or avert.

And I think this is actually a little bit of a- this is a dangerous thing for me to broach right now, and perhaps in general, but: bias in data is more about bias in society, right?

It's about the fact that, like- even far more than who's coding, 'cause coding an AI is pretty much about putting in data and being a data architect and a data scientist, more than it is manually configuring much. Except for, again, kind of the variables and parameters and outcomes, which is very important.

But that data that's implicit throughout all of humanity, kind of, that's being pulled into these things- it contains all structural inequality, it contains all kind of, like, human goodness and badness. There was a Werner Herzog thing on Netflix a while ago; it was called "The Awesome," like, "The Awesome and Terrible Internet, it's Wrong." The actual title was amazing. I hate that I can't remember it! 

[both laugh]

But the conclusion was just, like, the internet is us; it's awesome and terrible, just like we are. It aggregates our good things and aggregates our bad things, and that lays us kind of bare in front of ourselves in the world.

An interesting quirk is that, with these bigger general models you're talking about, as they get larger and larger in terms of the amount of data that's contained in them, they get more racist. Is it a quirk purely of the way that the model looks at information? Is it some type of representation of a fact- like, you aggregate certain parts of base humanity and it disproportionately prioritizes the majority more and more over time? 

Niki: That's a really, really interesting point [Austin: yeah] that the more data you get, the more you see certain trends.

Austin: Actually, there is a presentation about this on my website, from my kickoff event, where a guy named Jack Clark, who's a co-founder at Anthropic-  I'm like a horrible fanboy,  I always bring them up.

Niki: I've seen you bring them up multiple times, including his newsletter. [Austin: I just] Do you want to plug his newsletter? [Austin: Yeah! Yeah. I always do.] 

[cross talk]

Austin: Jack-clark.net is the best newsletter possible. He just is an incredible person for bringing together a bunch of data nobody else does, but he has a presentation on my website called "Why is AI so crazy right now?" I feel like that's a killer thing to watch if you want a deeper dive on what I just said. But a point he makes is that, like, AI is a fun house. Right? That at the end of the day, AI is making parts of us bigger and parts of us smaller, and it's warping us in ways that we don't necessarily understand. The bigger it gets, the more of those distortions there are. 

Niki: It's really interesting! And I actually feel, like- so I, again, am, like, Encino Man. You just saw me try to get airdropped photos to my phone. [Austin: That was awesome, I loved everything about it!] [chuckles] Because I won't pay 99 cents [Austin: Yeah] to Tim Cook [Austin: Yeah] to get cloud storage, I don't have enough cloud storage. Another example: on LinkedIn, nothing on my LinkedIn says anything about being a woman [Austin: mm-hmm], even though I've gotten accolades at times for being, like, the most powerful woman [Austin: snorts]. I don't use the words she, her, or any of those [Austin: mm-hmm, mm-hmm]

Austin: [interrupts] I like that your pronouns are your accolades. I feel like we should all look at it more like that! 

Niki: So, I don't have any of that, and yet [Austin: mm-hmm], and yet I get more content telling me how to up-level my career, how to negotiate for a raise, how to all of these things. [Austin: mm-hmm] I actually find it sort of offensive. It drives me nuts. I don't want it. I haven't asked for it. It's based on assumptions about what my career is like [Austin: mm-hmm] based on- clearly, they can scan, I guess, my photo, or who knows- but my point is, it drives me nuts. So, then I try to reduce the information I'm giving the algorithm.

I try to remove information because I'm feeling like it's making assumptions about me. And I know people might want their Google Home device or their Alexa device [Austin: mm-hmm] to know what they want, because it makes their life easier to get around and navigate, but there are people like me feeling as though I don't really want it guessing what I'm going to do next.

Austin: Whenever your profile matches certain stuff within the model, right, removing data on your page is not going to go and remove it from the model. Right? So, kind of the funny thing is, your answer is to engage with those things and click the little button that says "I don't like this." [Niki: Yeah] That's actually the only good way to rapidly adjust your content on any of those sites.
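(A rough sketch of why clicking the button works where deleting profile data doesn't: explicit feedback directly nudges the scores a system ranks content by. The topics, scores, and update rule below are invented; this is not how LinkedIn or any real site actually works.)

```python
# Toy content recommender: "I don't like this" clicks adjust per-topic
# scores, re-ranking what gets shown. All values are made up.
topic_scores = {
    "how to negotiate a raise": 0.9,  # inferred from profile-pattern matching
    "cloud infrastructure":     0.6,
    "podcast production":       0.5,
}

def record_feedback(topic: str, liked: bool, step: float = 0.3) -> None:
    """Nudge a topic's score up or down, clamped to [0, 1]."""
    delta = step if liked else -step
    topic_scores[topic] = max(0.0, min(1.0, topic_scores[topic] + delta))

# Clicking "I don't like this" on unwanted career-advice content:
record_feedback("how to negotiate a raise", liked=False)

# Recommendations after feedback, best first:
for topic, score in sorted(topic_scores.items(), key=lambda kv: -kv[1]):
    print(f"{score:.2f}  {topic}")
```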

Second thing is, I think there's two aspects of recommendation that are obnoxious. First of all, it's an argument for, again, the same thing I was just talking about, which is kind of broadening participation in the creation of artificial intelligence. You represent an archetype; you represent a community of women, right, who feel similarly to you. And if it's not you that's getting involved in creating these things and acknowledging that feedback on a more intimate basis, then it's likely to stay in the requests pile. Now, it's not to say that it's your responsibility to go fix everything that pisses you off, but at the same time, I think it does make somewhat of an argument for broadening participation. 

Niki: Instead of removing data, I should engage with the algorithm and just tell it, this actually really bugs me?

Austin: Yeah! 'Cause it's the only thing that's going to work at the end of the day, effectively. At some point, you, or somebody else on your wifi- which, by the way, the computers can't actually, at this point, process your spoken words at a rate that makes it possible for them to be eavesdropping all the time.

Niki: But everybody thinks that they are! 

Austin: Everybody thinks that they are, but that's because [pause] what you and the people that share your internet access- so, on your wifi, it aggregates those things too, right- have in some way represented your interest in this thing, and that's why it feels like they're listening to you. And it's also because our memories themselves are imperfect, right? Like, whenever we're talking or doing most things, it goes back into our own little fuzzy AI model and just changes how you perceive the world. And a lot of your memories are like illusions for the sake of rationalizing whatever your little model is. 

Niki: So, you've just answered a question. When I was getting- I have a gallery wall in this podcast studio, and when I went to get things framed, the number one question from the kids framing them- [Austin: mm-hmm] I said, "Oh, I'm doing this podcast on tech. What are you interested in?" [chuckles] They said, "Why is my phone listening to me? Why is it always giving me ads for stuff I only said out loud?" [Austin: yeah] I got an ad for a gravity blanket, and I had never typed that word. I did want a gravity blanket, but maybe you're right, maybe I just forgot that I said it! 

Austin: You, you may have forgotten you typed something about it. You may have been searching for, or listening to, or engaging with some content- 

Niki: My co-worker was talking about weighted blankets!

Austin: See, that's the funny thing. So, if you were on the shared wifi, and she was talking about weighted- [Niki: he] -he is talking about weighted blankets.

[crosstalk] 

Austin: Honestly, I thought you had a team of only women. That's why I say this!  [Niki: Oh! Like a coven?]

[both chuckle] 

Austin: Yeah, yeah! I mean, some of my favorite teams are teams of only women, I always respect it. That's- you're sharing the wifi, and it aggregates your recommendations based upon who's, like, sharing your wifi. 

Niki: This podcast has just illuminated many things for people. [Austin: laughs] Like, winner, five stars! Okay! Let's talk briefly, 'cause I know you have to go in a moment- SeedAI. What exactly are you doing and why? 

Austin: Okay. So, near the end of my time at Nvidia, there was a project that we kicked off because one of the co-founders had made a donation to his alma mater, which was the University of Florida. But instead of just getting a building for himself, he was like, "All right, let's change the trajectory of Florida. Let's chat. Y'know, let's really jumpstart y'all up into this AI trajectory to the stars." Right? And so, he ended up working on a deal with them where, I think, Nvidia and maybe Chris jointly would give like 70 or a hundred million dollars or something. 

And then- for everybody listening that wants free computers, it was Chris. [Niki: laughs] You can't get a gigantic free computer from Nvidia. That was- man, I'll tell you, a lot of people really- it was like, "Whoa, free gigantic computers going around. What's that all about?" No! Stop! Anyways. [chuckles] So, we, kind of, worked something out where the state and the university combined would put a huge investment behind building up AI curriculum, hiring new researchers, hiring new educators who could teach AI. Like, start working out deals with, y'know, the surrounding, both, like, lower-research-capacity and minority-serving institutions to help take their AI across the curriculum. 

It was about getting a powerful enough computer to do cutting-edge AI research. It was about them having a big body of data- and in Florida's case, it's a big body of healthcare data. Right? They have, y'know, the federal data protections; they have the necessary agreements in place to securely use that data. It required investments in educators. Right? And required investments in talent. Right? And then it required partnerships. Right? And curriculum designed for those partnerships to help lift up the surrounding school system areas, and especially those parts of it that are underserved. 

And I really loved it. I thought it answered a lot of questions that people have been asking. I think it helps take some of the base components of understanding AI and also puts them into a very practical form. And I really wanted to run hard at trying to make that, kind of like, a key policy priority or a key area of engagement. Uh, of course, it's a little bit difficult to do something like that at a company, because it is inherently such a big-tent enterprise. It needed to be balanced. And I needed to have the ability to work proactively with, y'know, public and private stakeholders. Right?

So, I [pause] took off at the end of last year, had my kickoff event, and since then I've been working to kind of build that out. And the reason it's called SeedAI- as I just described these components- I mean, effectively, the operating principle is that you have a lot of communities that don't have access to these aggregated resources.

And I'm trying to help focus that specifically into this idea of: we can start from a concrete example. We can start from a template. We can create, kind of, modular components that we can use to adapt to all different types of communities. And then you can focus investment, both from the public sector and from the private sector, to build up durable long-term AI ecosystems. And so, like I said, I call it SeedAI because you bring together these components- to not overly stretch the metaphor- into a seed. And then whenever you pop that anywhere, you will generate a form of AI ecosystem.

Niki: So, the idea is you're standing up individual projects [Austin: mm-hmm] that bring together the different parts of the ecosystem that make any project work in tech, [Austin: mm-hmm] but in this case it would be government resources, potentially corporate resources, philanthropic resources, educational resources, to have one ecosystem, one project that you're building. And each one of these could be a brick in the road, [Austin: mm-hmm] the foundation for America's AI. 

Austin: Yeah! Yeah. I mean, that is kind of what we hope, and you're just going to have foundations that are custom cut for each community so that they can build with their greatest strengths and address their greatest concerns. [Niki: So, they're bespoke] Yeah! There's a bespokeness, but each one is built of the same blocks, to your point. Right? And so, y'know, it's not just a convening exercise with SeedAI as much as it is like, "Hey, we have, y'know, we have this plot," right?

We have this plot, we know all the things we need to build this thing. Right? And we're just asking you each for your information or for your contribution into doing this. And if we do successfully put this effort in and build this foundation, we know there's a ton of energy to fill it. 

And if you think about China and the mobile-first generation, right? And what that did for China's tech scene. For what it did for, kind of, the adoption, across an entire country, of a new class of economic and technological participants. I mean, I think we have an opportunity for an AI-first generation. Right? And some of that goes back to the earlier conversation about having no-code technology. Right? 

Having these increasingly general models where we're now getting to where, like, AI makes AI. Now, you still have to tell that first AI what you want the second one to do- I mean, again, [chuckles] it's imprecise. But we're increasingly getting it to that point. And so, as we approach, like, maximum accessibility for whatever the current epoch is, we have to be sure that when that happens, we are prepared to have as many people as possible able to take advantage of that new environment, so that we can spread the- y'know, all the different reasons why we would do that. 

A big part is social cohesion. A big part of it is economic development. A big part of it is local competitiveness leading into national competitiveness. Right? In a way that, without this investment, we wouldn't be able to do, because there's only so much shit a person like me can think of, or a person like you, or a person like everybody I've ever met at every tech company. Right? 

Niki: That's a perfect thing to end on. [Austin: Yeah] There's only so much each of us can do if we're- it's not really convening, it's a collecting of energy and resources and putting them into projects that are bespoke, that are tailored to each community, and channeling all this energy. And there is excitement!

If you're excited about it: seedai.org. [Austin: Yeah] Check out the sizzle reel and Jack Clark's content. [Austin: laughs] We're going to drop a bunch of things into the show notes. Austin, thank you so much for taking the time today. 

Austin: No, it's my pleasure! Thank you so much for having me.

Outro:

Niki: Next week, I’m talking “techlash” and Big Tech crisis response strategies with author Nirit Weiss-Blatt. 

If you enjoy Tech’ed Up, please consider giving us a rating or, even better, a review. You may have noticed that the search function in podcast apps is pretty crummy, so your feedback really helps other people find this show.  Thanks!