Tech'ed Up

"Futureproof" • Kevin Roose (New York Times)

November 02, 2023 bWitched Media
"Futureproof" • Kevin Roose (New York Times)
Tech'ed Up
More Info
Tech'ed Up
"Futureproof" • Kevin Roose (New York Times)
Nov 02, 2023
bWitched Media

New York Times reporter and host of the popular podcast Hard Fork, Kevin Roose, joins Niki remotely to talk all things AI and why he’s a “sub-optimist” when it comes to how this latest tech revolution will impact our future. He shares some tips from his book, Futureproof, about how to be resilient in the age of AI in the workplace. Turns out being more human is the answer because you just can’t out-hustle a machine.

“People should start thinking about it [AI] more as a tool for displacing tasks within jobs rather than entire jobs.”  Kevin Roose



[music plays] 

Niki:  I’m Niki Christoff, and welcome to Tech’ed Up! I know my voice sounds a little gravelly and I hope you’ll still listen to this full episode even though it was worse when we taped it a couple of days ago!

Today, I’m chatting with Kevin Roose, New York Times award-winning reporter and author of Futureproof: 9 Rules for Humans in the Age of Automation. He joins me to talk about how the AI future is not utopian or dystopian but something in between. And it’s our human traits that are going to keep each of us relevant and resilient in the age of artificial intelligence.  

Quick note - I’m blaming Sudafed for mixing up Valley and Village in a story I tell in this episode about the Salesforce monks. And, I know, Salesforce monks are a thing.


Niki: Welcome everyone to the latest episode of Tech’ed Up. I'm going to start by addressing my voice. Is it sexy? No. Is laryngitis sexy? No, but hopefully, you'll be patient with this as listeners, and also our guest today, Kevin Roose. 

Kevin, welcome. 

Kevin: Thank you. And I'm sorry you have laryngitis. 

Niki: Thank you. I'm being a baby about it, but I keep telling myself it's, like, husky. 

Kevin: This is an occupational hazard for podcasters. They don't tell you this when you start a podcast, but it's, like, “You must protect your voice because that's the moneymaker.” 

Niki: Gosh, I know. It's like Adele. Adele and me!

[both laugh] 

Niki: I'm straining my vocal cords. How can I be an influencer? 

Okay. So, Kevin, you are an award-winning technology columnist for the New York Times. You have your own podcast, Hard Fork, so you're very familiar with doing all of this. You hosted an eight-part limited podcast series, which I highly recommend, called Rabbit Hole, which is how I got really familiar with some of your thinking on how the internet is affecting us.

And today, we're talking about a book that you wrote called Futureproof. And it's very timely because it's about being a human in the age of automation. And I found it interesting it involves actual tips for humans.

You called yourself a sub-optimist, [chuckles] which I found to be an amazing framing, and I think it's... I think it is sort of a perspective on how you approached this work. If you think that's fair?

But what do you mean by being a sub-optimist about technology? 

Kevin: Well, I was trying to figure out a way to sort of label myself or slot myself into a sort of nomenclature that people would understand about, y’know, how you, how I feel about AI, because people ask me all the time, “How do you feel?” And the implication is you either have to be, uh, y'know, an optimist or a pessimist, a utopian or a dystopian.

You have to think that AI is going to solve all of our problems and lead us into a glorious, y'know, future where we don't work and we just get all our needs provided for by the robots and we're like the people in, in Wall-E [Niki: laughs] who are just, y'know, uh, strapped to our chairs, y'know, playing video games all day, or you have to believe that it's gonna literally kill us all. And there hasn't been a lot of nuance and a lot of shades of gray in between those two options.

I think that AI can and may solve many problems for us, may be incredible. It may, y'know, help us cure disease. It may help us address climate change, y'know, it may help students get better education. It may help people be more productive at work.

But the key word there is may [Niki: mm-hmm] because we have run these experiments before, where we throw technology into the world very suddenly. We had it during the Industrial Revolution. We had it in the 20th century when factories got automated and electricity became available.

What happens in every period of technological change, if you go back and study it, is that some things get way better and some things get worse, and it makes people's lives harder and easier, sometimes at the same time. And so, I think both the utopians and the dystopians miss the reality, which is that technology and technological change has very clear upsides in many cases, but it also has clear downsides.

And so, I was trying to sort of carve out a middle path there between, sort of, full-throated optimism and full-throated pessimism. 

Niki: Right. 

The doomers versus the tech utopians. [Kevin: That’s right] And I think actually there's sort of a meta point, your personal human approach to this issue, which is humanity and sort of uniquely human traits are what are going to help people survive or even thrive during this transition.

So, I want to get into that. But first, I want to talk about one of my favorite topics, which is tech billionaires. So, for my sins, I have spent a lot of time around tech executives and founders and, and just CEOs. And my take is usually they're not driven by pure avarice and often don't see around the corner to the externalities.

So you just, we just talked about sort of big industrial revolutions, but there's also just the recent history. When I started at Google in 2007, we didn't have iPhones. That came out that year. Nobody, I think, anticipated the harms of social media addiction and the attention economy. 

One thing I would love you to talk about is sort of what these executives are saying publicly about artificial intelligence, and then the meeting after the meeting, what you're hearing, the scuttlebutt, what they say in private, because I don't think those are aligned.

Kevin: No, they're not! And that was actually one of the things that made me want to sort of write this book. And, and one of the, the, sort of, things that got me interested, I actually took a trip back in, I think, 2018 or, or 2019, one of those years, but pre-COVID, what was it? 

It was all a hundred years ago. 

Niki: Time is a flat circle. 

Kevin: [laughs] Right, exactly.

Niki: No one knows. Right. 

Kevin: So I, I was in Davos at the World Economic Forum, and I was covering it as a journalist. And y'know, if people don't know what this is, it's like a big confab, all the, y'know, biggest capitalists running all the biggest companies go to the Swiss Alps and, y'know, hang out and go skiing and go to meetings all week.

And, and I had gotten invited by the Times, and I was there, and I was talking with, y'know, I was hearing all these discussions, they have these panels and talks, and everyone was sort of very optimistic about AI, how it's going to empower workers, and it's going to, y'know, it's going to, like, fix the bad parts of people's jobs, and it's gonna be great for society.

And then, like, the panels would end and I would go to dinner or drinks or, y'know, I would be in a hotel lobby, and I would just hear these same executives, these same titans of industry talking in very cynical and, I would say, sort of cutthroat capitalist terms about how they wanted to use AI at their own businesses.

They were not saying, like, “I want to empower workers.” They were saying, “I want to get rid of workers.” [Niki: yep]  Like, “I want to spend way less than I currently do on things like, y'know, customer service departments or accounts payables or back office functions.” They were very excited by the possibility that AI could help them shrink their cost basis, operate with higher margins, and eventually make more money for themselves and their shareholders.

So, I was struck by the sort of difference between those public and private conversations, and I think that is true of a lot of people in the industry right now who are saying publicly that they're so optimistic about how this technology will benefit humanity. Meanwhile, they're trying to use it at their own businesses to sort of shave costs, get rid of people and are not spending much time thinking about the sort of human cost of that.

Niki: It's so interesting. You're right. Maybe at a macro level, they do have this idea that technology can, can have all these positive externalities, but they're shareholder-driven companies, they got to look at their bottom line, and absolutely, they're going to displace workers, which leads us to the most important concern with AI right now.

And what I want to get into is some of the practical ideas you have for people making themselves resilient during this transition. But first, you made a point about: it's not whole categories, like, I'm hearing lawyers are going to be gotten rid of, right? It's not that. You take more of a scalpel to the, um, potential job impact. 

So, can you talk a little bit about how you see that playing out? 

Kevin: Yeah, so I think one of the big misconceptions about AI and automation in general is that it is going to sort of wipe out entire occupations and that others will totally survive, y’know, no, no jobs displaced at all. And you see this in, sort of, y'know, economic studies, y'know, think tanks will put out papers that say, well, y'know, “Doctors have a 63 percent chance of being automated out of their jobs.”

And I, I get, I understand that that's sort of how we talk about automation and how economists think about it is like, “Which occupational categories are going to be most affected?” But if you actually look back at the history of technology and the history of automation, that's not usually how it works.

It's hard to name jobs that totally disappear as a result of technology. There are a few, like, there used to be people who operated the elevators [Niki: mm-hmm], y'know, and there aren't any more or like tollbooth collectors. That's a job in, at least in California, where I live, that has completely disappeared. But I would say most of the time, what technology does is just shift the skills inside occupations.

So, if you are a doctor and you spend all day y'know, reading scans and trying to identify, y'know, pathologies, or disease, or tumors, that is going to be less a part of your job in the future because AI will take that part over, which means that you'll have to spend your time doing something else. 

I would say same thing with lawyers. It's not going to be that you spend, y'know, 10 hours a day doing doc review. If you are a junior associate, it's going to be, y'know, you're going to be spending that time working with clients, maybe writing, maybe checking the outputs of whatever AI system you're using to do doc review. But it's not going to be sort of sitting there and, and looking for issues in contracts or something like that, because that is something that AI is going to do.

So, I, I think people should start thinking about it more as a tool for displacing tasks within jobs rather than entire jobs. 

Niki: It's interesting you mentioned that. I did doc review. I was an attorney for a couple of years. I'm a recovered attorney now, and I was actually paid a fortune to, I mean, Koko the Gorilla could have more easily done [Kevin: laughs] what I did, which is just, like, put Post-it notes on banker's boxes full of documents. I mean, that's all I did was sit in an office putting Post-it notes on things. It was a totally terrible use of, of resources. And the idea is that compared to a deposition, a trial attorney, like, those are still lawyers. You're going to absolutely need people with those social skills.

And so, let's get into one of the rules you're giving to people. You have these sort of nine rules of adapting. And one that I found really, really interesting was “Be surprising, social, and scarce.” 

What do you mean by that? 

Kevin: Yeah, so this was my attempt to kind of answer the question of, like, “Well, if AI is going to displace all these tasks and affect all these people's jobs and, y'know, their ability to make money, like, what is safe? What, what can we predict with some degree of certainty won't be replaced by AI anytime soon?”

And I asked, like, many, many people: researchers, executives, engineers, sort of like, where are the, “Where are the safe zones? Where, where are we pretty much guaranteed to be able to find meaningful work into the future?”

And what they told me was basically what I distilled down into these three categories: surprising, social, and scarce. So, let's just take them one by one. So, surprising is, sort of, work that involves chaotic situations, messy situations, jobs with, sort of, high variance within them, which would include things like, y'know, a preschool teacher. [Niki: mm-hmm] 

Like, that is not a job that AI is going to do anytime soon because it's just too chaotic. It involves too many crying, y'know, children [Niki: laughs] and too many people, y'know, hurting themselves and needing to be consoled. And just every day looks different. Every hour looks different. 

And so, AI, it really likes rules and regularity. That's why, for example, you can play chess with an AI at a superhuman level but y'know, if you tried to give it one of these more chaotic tasks, it would sort of fall apart. 

So, the more surprising your job is, the more protected it is. 

Social, the second category is jobs that involve meeting emotional needs rather than material needs. So, you can think of like a therapist or a, y'know, a home health aide or a nurse but also, y'know, jobs that we wouldn't typically think of as being social, like, like a barista, [Niki: mm-hmm] is a job that is actually, y'know, if all you wanted was someone to make you a cup of coffee, like, you could do that at home. 

Everyone's got a coffee maker, but it turns out that when people go to Starbucks or Philz or Peet's or whatever, like, you want someone to smile at you and say good morning and, y'know, greet you. And that turns out to be an important part of that job that is not, sort of, captured by, “Can this job be automated or not?” So, the social category is sort of work where even if we could technically automate it, we're going to prefer that a human do it because it does something for us. 

Niki: [interrupts excitedly] So, you mentioned real estate, you mentioned real estate agents at one point, and I just bought a house. I could have looked easily at the comps and done the negotiations and the paperwork, but it almost was like my real estate agent, Kyle, who doesn't listen to this podcast, was like a therapist because  [Kevin: Totally]  I needed to think through my feelings about buying a house and I needed him to be with me during the open houses.

And so, it's one of those, like, AI-assisted jobs, it's going to get more efficient, but absolutely, at least I'm someone who needs that human to help guide me through the process. 

Kevin: Totally, and a lot of people are willing to pay for someone to listen to them, y'know, talk about their problems, whether it's therapists or real estate agents or lawyers [Niki: chuckles].  I mean, I think a good heuristic for this category is, like, if your job has like an invisible slash therapist on the end of the title, like, it's pretty safe, y'know, if you're a lawyer slash therapist or a real estate agent slash therapist, you're gonna be okay. So, that's the second category, that's social. 

Scarce is sort of this, this third little bit stranger category, which is jobs that involve sort of rare skills, high stakes situations, low margin of error, low fault tolerance, or jobs where you get one chance or a couple chances to do something and you really need it to be done right.

So, an example of a job in this category would be something like a 911 operator. [Niki: Mm-hmm] Right, like, we have the technology to automate that job right now. If you wanted to, you could call the, y'know, you could call 911 to report an emergency and, like, a little robot would pick up and it would say, y'know, “Press one for,” I don't know, “burglary, and press two for murder,” and whatever.  [Niki: Laughs] 

Like, we can do that, but we've decided sort of collectively as a society that that is not a job that we want to automate because you really need a human who's trained in that kind of response to pick up the phone, very quickly identify what you need and any nuance in, in the conversation, and get you to the right help very quickly.

So, that is a category of job that I think is protected, not because we don't have the technology to do it, but just because we as a, sort of, a society have kind of decided that that's too important to give to machines. 

Niki: So, I loved this chapter because it made me feel way better about my job security because I do a lot of crisis comms and crisis management. And when you're dealing with C-suites who have been hacked or sued or have significant risk that they're managing, it's slash therapist, right? 

Kevin: Right. They're not giving that to ChatGPT. 

Niki: No!  And they often just need somebody coming in and telling them the next thing to do but that adapts quickly to a chaotic situation and can stay calm because they're rattled. And so, I was like, “Great!” And also, I'm a bullshitter for a living, so best wishes. 

Kevin: [chuckling] Yeah, bullshitters are a protected class. Luckily for you and me!  

Niki: Washington, D.C. - we're gonna be fine, everybody!  But, but one thing you also talk about that I thought was fascinating was the idea that sometimes marginalized groups might adapt a little bit better to these social parts of adapting to an automated world, and how they might have almost innate skill sets based on their lived experience that will help them potentially adapt better.

Kevin: Yeah, this was an interesting point that I didn't, I wish I could take credit for this, but I sort of stole it from some, some folks in the AI industry who basically pointed out to me that, like, “These, these skills that are going to be very important in future jobs, the ability to empathize with people to, y'know, connect with them to, to hear their problems, to solve their problems. Like, these are skills that are disproportionately found in sort of the care professions,” which do, which do have larger shares of women working in them.  Things like nurses and, and, and so on. 

So those jobs, I think, are going to be pretty safe and stable well into the future, but I remember talking with an economist named Jed Kolko who is gay and was telling me like,  “One of the, sort of, advantages that people who have spent time in the closet have is sort of this ability to kind of, like, read a room, to sort of size up people in a new social situation and see, like, who is safe, who is trustworthy.”  Like, this kind of emotional intelligence that is sort of hard to teach and hard to capture in any kind of like a skills assessment or anything, but that actually is a very sophisticated kind of emotional intelligence that actually will help people in many of these groups, sort of adapt to a changing workplace.

Niki: Yeah. And you had mentioned potentially women, and racial minorities, underrepresented minorities also have to code switch, which again is sort of adapting in a human way to what's happening around you, which a machine is not gonna be able to do. 

Kevin: Totally! So I, and I, and I think in part, this is, like, sort of, wishful thinking a little bit on, on the part of the people who think this, because I do think that there will continue to be structural barriers for, for marginalized groups.

But what a lot of people are talking about is that we may end up, sort of, y'know, in a world where these, these so-called soft skills, which is a term that I hate, [Niki: I do too!] actually do become more valuable than, than the quote-unquote hard skills that people like engineers and executives have.

Niki: Right. Because you can replace that. I actually extended this. This is a personal take on it. But as I heard, as I was thinking about this, I actually, I don't know, maybe this is wishful thinking on my part [chuckling], but I had a pretty tough childhood. And I think sometimes people who grew up that way - talk about reading a room! 

If you grow up in an unstable environment, it's like, I have infrared when I come into a room, I'm very aware of what's happening, which is how I got drawn to crisis comms because I thrive in a crisis. I, since I was a kid, have been really used to that. And I think that there is something to this, the idea that you become resilient with other humans.

And if humanity is what's going to help you have a skill that a machine is not really going to have, I think it's true that people who have had to adapt in ways that are difficult and unpleasant, well, it also gives you a skillset that's, that's unique. 

Here's your time to really use those skills of emotional intelligence. Maybe it's overly optimistic. We’ll see! 

Kevin: I think it's a, it's a possible outcome. I don't think it's destined to happen, but I think it's possible. And I do think that one of the, one of the hard things is that for many years, we've been teaching children, but even adults, that those kinds of skills aren't important, right?

That if you, if you want to get a job and stay employed and make money, you need to, like, major in STEM and learn to code. “Learn to Code” was sort of this big trope for many years, and I heard it growing up, too. Like, if you want to succeed in the future, you have to essentially, like, turn yourself into a computer almost.

You have to, like, become sort of cold, and analytical, and hard-working, and numerate, and you have to learn to code and become an engineer. Like, that is the path. And so there's this whole generation, maybe multiple generations of people who have grown up with that sort of being hammered into them. And I think it's just wrong.

Like, I think that was maybe true for a while and is definitely not true now. When I go around Silicon Valley and I ask, y'know, executives, like, “What are the skills that you are hiring for? What are the kinds of people that you're trying to identify? What is valuable now in the workplace?” They're not saying, like, “Ten years of Python experience [Niki: Right] or, or JavaScript experience.”

They're like, I literally had one CEO tell me, he said, “I've got all the engineers I need, but I can't find sales. I can't find people who have that skill of connecting, and of, of persuading, and of, of collaborating.” Like, that is not an easy thing to find, especially in the Bay Area these days. So, I'm taking that as a signal that these kinds of soft skills, and I'm putting that in air quotes, are going to become much more valuable in the-

Niki: [interrupts excitedly] Humanity is going to be important! 

Okay. The other thing I, I think is really interesting that you talk about. And again, I mean, it's my podcast, so I talk about myself a lot, but machine drift. 

I've drifted so far into the machine, it's like I can't even get my way out. Can you explain what machine drift is? 

Kevin: Yeah, machine drift was sort of my, my way of sort of trying to describe what I felt happening in myself and saw happening in others and knew was happening sort of more broadly in the world, which is this feeling that we are sort of gently and gradually giving over many parts of our human agency and decision making to machines.

And I don't just mean, like, y'know, ChatGPT or something like that. I, I was writing this at a time when, like, algorithmic social media was the big sort of advance in AI, right? You had the YouTube algorithm, the Spotify algorithm, the Facebook algorithm, the Instagram algorithm, and these, these machines were essentially sorting the world for us, showing us what they considered important or interesting.

And in many ways, they're taking over our decision-making, right? If, if you're just listening to the Spotify Discover, y'know, algorithmically generated playlist, like, you are, in some sense, giving over your taste to the Spotify algorithm. And I saw this happening in my own life. I knew it was happening in the lives of many of my friends and colleagues and people that, y'know, would, would write in to tell me that this had happened to them.

And so, I think this is a really important step in the process of kind of becoming more resilient to technological changes, sort of realizing the parts of ourselves that we have already kind of given over to the technology and, and trying to claw those back because I think that is really important that we, we can use AI and all these tools to make ourselves more efficient and productive, maybe even happier, but we really have to keep at our core the ability to make decisions about our own choices and preferences and values. 

Niki: And be original!

[Kevin: Yeah! Totally!]

Sometimes I feel like, I actually said this the other day to someone, I'm like, “I don't think we have any original thoughts.” [Kevin: chuckles] Like, I think, I think, I think we're, I think I like this thing because I keep seeing ads for it, and do I actually like it? 

Because in the past, when I spent more time reading and unattached to my phone, I do think I was more creative. I think I was less, y'know, being fed trends and things that people think I would like, that the algorithm thinks I would like, and then maybe I do, and I just get narrower and narrower. 

You talked about the appalling number of times you picked up your phone in a day. I checked that on mine and I was like, “I'm worse!” 

Kevin: What are you?

Niki: I can't even say it's so…

Kevin: No, say!

Niki: 160 times a day.

Kevin: Oh, that's, I've, I've seen worse! 

Niki: You have? I think it's horrible!!

So, I put a rubber band on my phone just as this little speed bump. Like, “Girl, what are you doing? What are you doing? Why are you looking at suggested content of cats?” I mean, I love a cat, but still. And so, then I also just, I mentioned this right before we started recording, I deactivated Instagram for the next 30 days. Just like, detox my brain. 

I'm barely on Twitter anymore. I am on LinkedIn, but I just want to get my attention back. I want to be able to focus a little bit more. And you actually talk about not focusing, but preserving your attention. 

Kevin: Yeah. Just guarding it.

Niki: Guarding it. 

Kevin: Maybe. How, how is the Instagram detox going for you so far? 

Niki: Well, I had done it previously, a couple of times, maybe once a year. And I find, for me, it's easier to abstain completely than to try to moderate my usage. I, y'know, I override my own dumb passcode. It's easier to just cut it off completely for some period of time, clear my head, and then come back to it for the things I love about it, which is connecting with people, but not the mindless scrolling, which I hate about it. 

Kevin: Yeah, so you've done this before. This is not your first detox! 

Niki: Yeah, I go through these like digital minimization moments, and then I go into-

Kevin: Does it make you happier? 

Niki: Yes!

Kevin: Or more connected? 

Niki: It does!

Kevin: You feel the difference? 

Niki: I'm more present. I'm more present in my life. And I think that idleness aversion, right, the, the fear of being idle, which I suffer from is, is part of it.

It's like, I just want to be looking at my phone all the time. I didn't grow up with phones, so I didn't used to be this way. And I think it's sort of a wistfulness for how I used to be. So, I always feel better when I do it. I feel more present. 

Kevin: It turns out that being bored is really important.

That like the ability to, kind of, like, sit there and not have something stimulating you and, and come up with something, y'know, of your own to amuse yourself or, or, y'know, distract yourself, or just be with your thoughts. 

It's really annoying. Like, I didn't want this to be true. [Niki: Mm-Hmm]  'Cause, I mean, I live in the Bay Area, so everyone's always trying to get me to meditate.

Niki: Oh yeah, totally! 

Kevin: Y'know, and I, and I was, y'know, [laughing]  I was like, for years I was like, I was like, “I'm not doing it! I'm not gonna be one of those people who meditates. This is all dumb!”

And then I tried, and I was, like, “Shit!” 

[Both laugh] 

Niki: You do it?! Do you meditate now? It actually works?

Kevin: I do! I do! And, y'know, I'm not evangelistic about it, but, like, it turns out that just being there with your own thoughts is actually kind of rad once you get the hang of it.

So, I'm not going to tell you that you have to do it, but I will say the meditators, they annoyingly turn out to be onto something. 

Niki: Oh, I'm sure they are!! So, I don't know if y'know this, but I worked at Salesforce for three years, and Salesforce has these monks on retainer, the Plum Valley monastics.

Kevin: Wait, really? [chuckling] Heavenly. [astonished] Salesforce has in-house monks?!!

Niki: They surely do. They're not in-house. They're contractors. [Kevin: Wild!] Yes. And at every major executive function, the Plum Valley Monastics come and talk about mindfulness and meditation. I-  

Kevin: Come on!!  

Niki: I kid you not!  

Kevin: I mean, this is a Silicon Valley episode.

Niki: I honestly, I kept waiting for Silicon Valley to call me to be in, like, an advisor.

Kevin: How has no one profiled the Salesforce monks? 

Niki: I don't know!! 

Kevin: This is a failure of journalism.

Niki: I'll tell you why! So, at an executive, sort of, function that we had, they were speaking about how, when their landline rings, they wait 16 rings before they answer  [Kevin: Oh boy!].  

That's probably why they haven't gotten profiled. [Kevin: laughs] But then they started to count the rings. And by the time they got to ring 4, slowly counting, I had to leave. I just left the room. I could not take it! 

Kevin: [laughing] That's amazing. I love that!  

Niki: I'm like, “I'm in crisis comms. I got to go find a crisis. I'm out of here.” But, yes, that is an area of improvement that I want to work on. 

Okay. So, the last thing I really want to talk about is there are jobs that are, are absolutely at high, high risk of being eliminated, and those are something you call endpoint jobs.

And so, I think for, for people in those positions, it's really worth thinking about what they do over the coming couple of years probably to create more employment resilience. Can you describe what that means and then, do you have any advice about it? 

Kevin: In software, like, endpoints are sort of these pieces of connective code that join two different apps together.

And so basically, these are, these are jobs where you as the human are taking instructions from a machine, a computer, an app, a program, y’know, and putting them into another system, another machine, another app, another program. And, y'know, some of these jobs are, y'know, things like Uber drivers, y'know, DoorDash delivery people. These jobs where you're literally, like, your, your boss is an algorithm. [Niki: mm-hmm] 

It's telling you where to go, what to do. You are just sort of the human fleshy, y'know, interface to that algorithm, and you are, y'know, totally interchangeable, and you don't have any sort of creativity or agency in the way that you do your job. And that's not to knock those jobs, like, those jobs provide for a lot of people.

And my fear is that those jobs are already mostly automated, right? You, you really are only necessary until the robot can finish the job, until it can make the delivery, until it can drive the passenger. And so, those jobs are endangered. These, these endpoint jobs. And, y'know, but there are, that's not to say that they're all sort of gig work jobs.

There are a lot of jobs in, in a lot of highly-paid industries. A lot of, y'know, people making six figures are doing what essentially are endpoint jobs. They are taking, y'know, things from a sales projection and putting it into a database. They are taking things from one programming language and converting it into another programming language. They're taking, y'know, PowerPoints and turning them into memos or vice versa. 

They used to call them swivel-ware back in the 80s, where you're just, kind of, like, taking stuff from one place and putting it into another place without doing much original thinking. That is the kind of job that I think is going to be automated first.

Niki: And those people should be looking for other positions or ways, even within their current work, that they can move into something less replaceable. 

Kevin: Yeah, you want a job where your humanity matters, where it is an asset to be a human rather than a liability. 

Those jobs also, I think happen to be, like, the most fun jobs [Niki: Yup] where you are not just like a cog in a machine. Unfortunately, there are cogs in a lot of machines. And so, if you are in one of those jobs, I would say, y'know, start working on finding something where your humanity is actually a core part of what makes you valuable.

Niki: Well, I know that you're not shilling this book because it's been out for a while, but we're still gonna put a link to it because I, I really enjoyed it as a read, and it's so relevant in this moment. Especially, most of our listeners are in Washington, DC and people are really focused on the extremes, the doomers versus the utopian folks.

And I think just having some practical thoughts and more insight into the nuance around this is an important part of the conversation. So I, I recommend it. It's a good read. 

Kevin: I think there are reasons to believe that AI is, is not, sort of, up to human level at certain things, but it is getting good quite quickly at, at many of them. So yeah, I think I, I compared in, in the book, I compared it to an army of chimps [Niki: laughs] where it's like, it's sort of, it's sort of stupid but powerful potentially, and, and maybe dangerous if you assign it to do the wrong thing.

I think since then, I've upgraded it. To, like, I now tell people that AI is like an intern, [Niki: laughs] y'know, it's like, it's like, it's kind of in the right ballpark a lot of the time, but you really do need to give it close supervision or else it's gonna, like, accidentally set your office on fire. 

Niki: [laughing] I love that. I think that's a good evolution from, from where this book started.

I'm going to drop a bunch of links into the episode notes to your book, to Hard Fork, which is a great podcast, to Rabbit Hole, which everybody should listen to, and also to an article you wrote, I think, earlier this year, right around Valentine's Day- 

Kevin:  It was one of the strangest experiences of my life and a nice trauma moment [Niki: laughs] for me to revisit when, [chuckling] when, whenever, whenever people talk to me about it. 

Niki: You're welcome, for bringing that up. 

Kevin: Yeah. [chuckles] 

Niki: Thank you so much for taking the time to come on the show and talk about this.

Kevin: Thank you. This was so fun! 


Niki: On our next episode, I’m joined by the CEO of Project Liberty, Martina Larkin. We discuss the non-profit’s mission of making the internet safer and healthier by shifting power out and down using decentralized tech. Specifically, how can DSNP help you wrangle control of your data back from social media giants? Also, what the heck is DSNP? Tune in to find out.