Tech'ed Up

AI: Winning the New Cold War • Rep. Will Hurd (R-TX)

October 07, 2021 • bWitched Media

Former Congressman and ex-CIA operative, Will Hurd, joins Niki Christoff in the studio to talk about the power and potential of artificial intelligence. The conversation covers the basics of AI and 5G, the Chinese government's global ambitions, why Rep. Hurd believes we're in the midst of a new Cold War, and how he thinks the United States can catch up.  His new book, American Reboot, expands on this vision for U.S. competitiveness. 

 “We have AI now that can help farmers grow more crops, use less water, use less land. Hello?! Why would we not try to push that and move at lightspeed in order to pull that off?” - Will Hurd



Transcript has been edited for clarity

I’m Niki Christoff and welcome to Tech’ed Up. Today we’re learning about artificial intelligence with Congressman Will Hurd, a former undercover CIA agent and author of the forthcoming book American Reboot, arriving in February 2022. In our conversation, Will breaks down the difference between AI and machine learning, teaches us the concept of AGI, and we both agree that middle schoolers should all be learning how to code. 

Welcome, Will Hurd. Thank you for coming on the show today. You served three terms in Congress representing your hometown of San Antonio, Texas. You are unique in the sense that you were a computer scientist serving in Congress and you worked on a national AI strategy, a bipartisan program. So today that’s what we’re talking about. Let’s start with the basics: what is artificial intelligence?

Will: So artificial intelligence is a tool that reacts in human-like ways. Okay. It's basic. You put inputs in and outputs come out. What's fascinating is that if you say AI to someone who's older than me, they're going to say HAL 9000. That was that creepy computer, you know, on that spaceship in 2001: A Space Odyssey. If they're younger than me, they say the killer humanoid, right? Or they'll say Roomba. Right? Which is the little machine that goes around and sweeps your floors. But ultimately artificial intelligence is a tool. I say tool specifically because humans still gotta be able to use it. 

But it is such a powerful tool that I equate it to nuclear fission. Nuclear fission, when it's controlled, gets us nuclear power, clean energy that can power the world, right? When nuclear fission is uncontrolled, it could destroy the world. And that is where we're at, or where we can be, when it comes to artificial intelligence.

Niki: That’s helpful. I think you're right. There's sort of this idea of the robot. Or the robot's going to become sentient. Is it like Skynet? Is it evil? And so your point is basically it is a technology, a tool that can be used for either good or evil. And I want to talk about those different use cases and what you think is important, but can you also just answer a question I do not know the answer to, which is: what's the difference between AI and machine learning?

Will: Okay great. Great question. Great question. So machine learning is a part of AI and it is the process in which you teach the algorithm to learn. And so there are different techniques that you can use, coding techniques, in order to reinforce the learning within the algorithm.

So most people say there are three components of artificial intelligence. I would say there's a fourth. The first component is data. You have to train the algorithms, right? You train them with data.

Will: Let's take something as simple as autonomous vehicles. If that camera on the car is looking at a stop sign, it has to know that's a stop sign regardless of how the shade on that stop sign falls. So that's why you need a million images of that stop sign in order to recognize that it's a stop sign.

So that's why the data goes in. And then you have the algorithms: the program that uses machine learning to take that data and make decisions from it. So you have to have data, you have to have algorithms, you have to have computing power. 
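To make the data-plus-algorithm point concrete, here's a toy sketch (not from the episode; the "redness" and "octagon-ness" feature scores are invented purely for illustration). A real vision system trains on millions of images, as Hurd says; this nearest-centroid classifier just shows the shape of the idea: data goes in, the algorithm learns from it, and then it makes decisions on new inputs.

```python
# Toy illustration of "data + algorithm": a nearest-centroid classifier.
# The "data" is a handful of labeled feature vectors; the "algorithm"
# learns one centroid per label, then classifies new points by distance.

from math import dist

def train(examples):
    """examples: list of (features, label) pairs -> {label: centroid}."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def predict(model, features):
    """Return the label whose learned centroid is closest to the input."""
    return min(model, key=lambda label: dist(model[label], features))

# Pretend these are crude (redness, octagon-ness) scores from images:
training_data = [
    ([0.9, 0.8], "stop_sign"), ([0.8, 0.9], "stop_sign"),
    ([0.2, 0.1], "not_a_sign"), ([0.1, 0.3], "not_a_sign"),
]
model = train(training_data)
print(predict(model, [0.85, 0.7]))  # a slightly shaded stop sign
```

The more varied the training data (shaded, faded, tilted stop signs), the better the learned centroids generalize, which is exactly why the volume of data matters so much.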

And there are two parts of AI, like how we use AI now. If you're using mapping software that gets you from point A to point B in the quickest way, there's some AI behind that. And there’s another thing called artificial general intelligence, AGI.

And this is a state where the algorithm is going to be smarter than most humans on most things. And we're going to get there. And so AGI is really that thing that is going to be super, super, super powerful, but to get to that, you have to have computing power. So that means you need some fast computers that can tabulate all that data you're putting in to those algorithms that have millions of lines of code. 

And so you need that computing power, but computing power requires energy. I also think another element we should be thinking about is the policy around how artificial intelligence can be used.

A couple of years ago, Vladimir Putin said, and I'm paraphrasing, whoever masters AI is going to master the world. That's probably the only thing I agree with Putin on. 

Niki: There is a lot to unpack in what you just said. So first of all, I don't even like it that Mark Zuckerberg knows that I want to see photos of cats stuffed into pitchers and refrigerators. Literally they show me these photos on Instagram and I don't like it. It feels invasive, even though I do enjoy a cat photo.

And so I want to talk a little bit about the average person and how they're interacting with this. I think there is potentially a generational divide. There are these positive use cases of AI, but the private sector has a lot of that computing power. These are the people that have the data centers that can run these algorithms, which can then serve us things we want to click on and capture our attention. The private sector right now is dominating this. And I'm curious what you think. Part of me thinks that this is why people are uncomfortable with it; it feels like a privacy invasion. 

Will: Sure. And these are all valid questions, right? And so we can't tackle this question about privacy and data privacy without talking about the fact that we're in a race. And I do believe that we're in a new cold war with the Chinese government, and I say Chinese government specifically.

I don't have a problem with the Chinese people. I obviously don't have a problem with Chinese Americans. Some of the hate that has been directed at Asian-Americans in the United States is just outrageous. I'm very precise: it’s the Chinese government. And the Chinese government has made it very clear.

This is not my opinion, this is not me laying in bed at night, staying at a Holiday Inn Express musing about what the Chinese are gonna do. This is what the Chinese government has said about what their goal is. Their goal is to surpass the United States of America as the only superpower in the world, and they're going to do that by being the global leader in advanced technology. 

And they've outlined 12 to 15 different types of technology. Quantum computing is one of them. AI is one of them. 5G is one of them. And so there is this race. Now, there are some people involved in developing AI who think these race conditions are bad--that in the rush to get to artificial general intelligence, somebody is going to cut corners. 

And when you cut corners, then you're going to create that Skynet or that thing that gets out of control because this tool is going to be so powerful. But you know who doesn't care about privacy? The Chinese government. You know who doesn't care about civil liberties? The Chinese government. 

And it's hard to talk about artificial intelligence without talking about 5G. It's going to be awesome to download season three of Ted Lasso on my phone in 2.5 seconds. 5G is going to give us upload and download speeds that are just outrageous in a good way. 

But there's also a thing called latency. I type in a command and do some action on my phone. It goes up into the cloud. Then it comes back down. The trip that that takes is called latency. With 5G that's going to happen in like a nanosecond. Our thoughts are like seven nanoseconds. So now we're going to have the entire power of the internet in real-time at our fingertips. Whoa. Like, what is that gonna allow us to do? That's going to actually allow us to have true driverless vehicles, things like that.
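A quick back-of-the-envelope sketch shows why latency matters for driverless cars. The latency figures below are rough, commonly cited ballpark numbers (tens of milliseconds for 4G, around a millisecond for 5G), not figures from the episode:

```python
# How far does a car travel while waiting on one network round trip?
# speed_kmh / 3.6 converts km/h to meters per second; multiply by the
# round-trip latency in seconds to get meters traveled "blind".

def meters_traveled(speed_kmh, latency_ms):
    return speed_kmh / 3.6 * (latency_ms / 1000)

for network, latency_ms in [("4G (~50 ms)", 50), ("5G (~1 ms)", 1)]:
    d = meters_traveled(100, latency_ms)  # 100 km/h highway speed
    print(f"{network}: car travels {d:.2f} m before the reply arrives")
```

At highway speed, a 50 ms round trip means the car has moved more than a meter before the cloud can answer; at 1 ms it has moved a few centimeters, which is why lower latency makes real-time, networked decision-making plausible.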

So 5G is part of the infrastructure that you need in order to truly have artificial intelligence. So why do you think the Chinese government spent all this time developing Huawei? Because they're owning this 5G infrastructure in a lot of parts of the world, right?

If I'm driving down the highway and somebody else owns the highway, you might think: okay, they can't get into my car. But what if I put a stoplight on that highway and force you to get out of the car? Or what if I put a trap door in that highway, causing the car to drop out? I can do that because I own the infrastructure.

So this is one issue. Now, this all relates to privacy because the US and Europe, we got to get beyond this transatlantic beef on privacy. We have to be able to work together against what the Chinese government is trying to implement. And that's the real threat.

A couple of weeks ago, the Chinese government went after every reference on the Chinese internet to this one actress. She was a well-known actress who became a billionaire, and she got cross with the Chinese government. Guess what happened? They literally used AI to trawl the Chinese internet and took every reference to her off of the internet.

Holy smokes. Right? Like that’s why you don't want the Chinese government to win this. And they're exporting this technology and these tools all around the world.

Niki: The Chinese government does a pretty good job on the world stage of pretending that they are a democracy. But they don't have elections; they don't have to bother with that. They don't have civil liberties. They're a total surveillance state for the people who live there. They're building, with their government resources, the infrastructure that's going to create 5G, that's going to create this extremely powerful artificial intelligence. 

They already have some of that. And while they're focused on that, we're mad about our Instagram feeds. We're talking to Europe about privacy legislation and regulations. Having worked at a tech company, I get it. I absolutely get it. And I do think that we need to think carefully about people's rights and the rights to their data and who owns their data.

But your point, if I'm hearing you, is: if we lose this cold war race with the Chinese government, suddenly the East has way more power than these Western democracies that are subject to elections and civil liberties and rights, and that rely on a private sector that's building a lot of this. Is that what I'm hearing you say?

Will: No, you're absolutely right, but it still matters. Getting this right matters. And I think there's a couple of things. The only way the US is going to win this war, and I would say that the best-case scenario is that we're tied right now.

The only way that we're going to win this is if the public and the private sector actually start working together in a more efficient way. An authoritarian government like the Chinese government can get somewhere first. Why did the Russians get into space first? There were so many firsts in the Russian space program, but they were unable to evolve. They weren't able to turn on a dime and change. An authoritarian government can get someplace first because they can marshal all their factors of production in one direction. But I will always bet on American creativity, entrepreneurship, right?

And so we need a privacy standard. The Europeans are probably 18 months ahead when it comes to setting policy around these issues, partly because they don't have big European companies driving this conversation. And a lot of their early steps were what I thought were anticompetitive attempts aimed at great American tech companies. The tech companies have to recognize that they have a public policy role, because their tools are now being used to advance public policy, whether that was the original intent or not.

When it comes to something like artificial intelligence, I think it starts with those algorithms. Let's just start with: follow the law. We already have rules about protecting civil rights and civil liberties. Just enforce that, right? Have the algorithms learn to use those things.

And so this is a conversation that we can move to, but we have to move a little bit faster because the debate is slowing us down. The Chinese government’s always going to have more data because of what we talked about: they don't care. We need to get beyond some of these debates here in Washington, DC so we can start talking about driving forward and winning this race.

Niki: You make a good point. And just for people who aren’t really close to the law, you're basically saying that if a tech company is using AI, or their algorithm is making a determination on credit or mortgage worthiness, they cannot discriminate. They're legally not allowed to discriminate. 

We will have an episode on bias in AI and how the people building the machines bring their own biases to the technology. And that's something we need to look at, but in fact, you could in theory use AI to be more compliant with the law.

Will: Right. Because you're going to be able to take every element and every example of how the law was implemented on every court case and ultimately train the algorithm on saying: these are your left and right bounds.

But in order to make sure that the algorithms are doing the right thing, you need more people involved in this industry that are designing these algorithms. Part of that bias comes from not having a diverse workforce developing this tech. And so that tech talent divide is serious.

I think one of the ways we improve that is teaching coding in middle school at a minimum, in my opinion. For my generation, if you didn't know how to type, you weren't getting a job. Every job required you to know how to type. And every job is going to have something to do with data analytics, with understanding coding in some form or fashion.

And so we need to start having all these debates. What should we be teaching in college? How do we improve the classes in college? How do we get more cyberwarriors out of college? All the great hackers I know, none of them went to college for cybersecurity. But they had these skills that they developed really at a young age.

So that's another area that we need to be stressing. The problem is that there's not some computer science teacher chilling at the coffee shop somewhere, waiting for a tap on the shoulder. We don't have enough.

And so those that have some of these skillsets need to be working in their communities and being a resource to those teachers. So the teachers that are teaching this have someone that they can go to when they have questions. And that's something that you're starting to see people try to participate in.

Niki: I think that's absolutely right. I, in middle school, was learning to use a bandsaw. I don't even know if they still have industrial arts. I broke three bandsaws. Kids should be learning how to code. These other countries are making sure all their kids know how to code. What are your recommendations for--so we've talked a little bit about the private sector. These big tech companies absolutely have a role in making sure that they're following the law, that they're cooperating with the government on providing data sets potentially, or at least thinking through that. What does a public-private AI strategy look like for the United States?

Will: Well, it starts with making sure that there is some cooperation on developing some of the algorithms. How are some of these tools going to be used? That conversation needs to improve, right? And it needs to move at a faster pace. You also need to make sure that the federal government is introducing some of these tools into the government and using them. 

Why does it take six weeks to renew your passport, right? This should be pretty easy. I should be able to do this on my computer. I shouldn't have to go to the post office to pull this off. And so how are we using that? The future of cybersecurity is going to be good AI versus bad AI. How are you using AI tools in order to defend the digital infrastructure?

Niki: We absolutely want them running on state-of-the-art technology that the private sector uses. But how do we solve for this? How do you get people? In my opinion, a lot of the issue is: why would someone go work for the federal government when they could go work at a big tech company, making a ton of money? Vesting in some of the most valuable companies on the planet, in the history of the universe, how do we get them to go into government? Do you have ideas for that? 

Will: Sure. I look at two things. One thing I tried when I was in Congress that was unsuccessful: I called it the cyber national guard.

It was very specific to people that were involved in cybersecurity. If you're going to go study cybersecurity, Uncle Sam's going to pay for that. But you've got to come back. If you get a three-year scholarship, you're going to come work in the federal government for six years in cybersecurity, and you're not necessarily going to DOD or NSA.

You're going to the Department of Commerce. You're going to the Census Bureau because you need tools. You need that workforce all over the government. Then once you finish that requirement and go work in the private sector, the private sector is going to loan you back for 45 man or woman days a year.

And so you have someone that has experience in the government, understands what's happening in the private sector and there's that cross-pollination of ideas. The problem that we found was getting clearances done. It's insane. We should be able to use AI to do that. It shouldn't take six months to do a background check. It should take six days.

So that's one way. Another way is to create a category of person in the government that doesn't have to get rid of all their financial investments. If I'm holding stock in a company that I was working at for a long time, I shouldn't have to get rid of that in order to go work for the federal government.

Oftentimes that person may come in and work in the federal government without a salary if they're able to keep all their other investments. And that's going to be for a short period of time. I would love to have people like that. When they have a successful exit from a startup company and they're looking for that next thing. Come provide your skills to the government for nine months, a year, or two years. 

But the way we think of a government employee means we can’t have that. We put someone in some category; you gotta be a GS-X or this or that. We can't think creatively about those financial structures. People would do that because they want to help their government.

Niki: I think you're absolutely right that people would do it. And in fact, if we have anyone listening who works at a tech company, I'm sure there are engineers that, if they could continue vesting their stock, if they could have a safe harbor where they could go back into the private sector without being demoted or losing opportunities, would spend a year rotating through government. I think they should ask their employers about helping make that happen, especially because tech at the moment is, you know, they've got a little bit of a PR problem in Washington. But it's a great service to the country to take your engineers, your data scientists, program managers, product development people, and rotate them through.

And I know that people who work in the federal government don't need a bunch of whiz kids coming in and mansplaining stuff to them. However, getting extra boots on the ground and people who can just help would be a huge service. And I think it's only going to happen if the employees ask for it, if they start pushing for it. This is my observation, having worked in the industry. 

Will: I'm with you on that because the stakes are high. Let's just take AI as an example. There are algorithms right now that can look at your eye and determine that you're susceptible to a certain kind of cancer, right? It's allowing people to live longer.

We have AI now that can help farmers grow more crops, use less water, use less land. Hello?! Why would we not try to push that and move at lightspeed in order to pull that off? The CEO of OpenAI says he envisions Moore's Law for everything. Moore's Law was the observation that the number of transistors on an integrated circuit doubles every couple of years. He's saying that AI, and AGI specifically, is going to become so powerful that you're going to be able to reduce the cost of goods or services, because the algorithm is going to be able to do the work. Imagine if everything we buy decreased by half every two years. That's a pretty amazing situation. Why wouldn't we want to try to get there faster? So that's the upside.
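The "halving every two years" idea is just compounding run in reverse; a quick sketch with illustrative numbers (the $100 starting cost is invented, and this is a thought experiment from the conversation, not a forecast):

```python
# If a cost halves every `halving_period` years, then after n years
# it is: initial_cost * 0.5 ** (n / halving_period).

def cost_after(initial_cost, years, halving_period=2):
    """Compounded halving -- Moore's Law doubling, run in reverse."""
    return initial_cost * 0.5 ** (years / halving_period)

for years in (2, 4, 10):
    print(f"${cost_after(100, years):.2f} after {years} years")
```

After a decade of that curve, a $100 good would cost about $3, which is the scale of change the "Moore's Law for everything" framing is pointing at.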

The downside? I'm not worried about artificial intelligence becoming the Terminator; technically, Skynet couldn't happen until we actually achieve quantum supremacy, which is a whole other topic. But imagine a piece of software that knows absolutely everything about you and can talk to you.

Imagine something that is the best marketer or influencer and that this is all tailored specifically to an individual. That's why we need to have these protections in place.

Niki: Have a norm, a privacy structure that's normalized across countries, certainly across the 50 states, and get moving, so that it’s not developed first by authoritarian, totalitarian regimes who are going to use it for ill. 

Will: If you go back and look at what Mao Zedong did in China without this technology, the death and destruction that was put on the Chinese people over decades, and then look at how the current leader of China is using these tools to suppress opposition.

We've seen what they've done in Hong Kong. We see what they do to the Uyghurs, the ethnic minorities. We know how they're going to use these tools to continue to extend their influence over their society. Ten years ago, people thought that the Chinese government only cared about China.

Now they have military bases in Africa. The One Belt, One Road initiative was designed to increase their influence in Africa. And so this is the battle that we're in.

I don't want the situation that happened in 476 AD, when the Goths invaded and the Western Roman Empire fell. This is a similar situation that America can be in.

I want us to wake up and be like, Hey, we're ready for the fight. The public and private sectors are going to have to work together. We're going to educate our kids for jobs that don't exist today so that we can be competitive. And then we're going to continue to uplift humanity for another four hundred years. I think we can still get to that.

Niki: So glad you turned that around. It was getting really harrowing for a moment. I don't want to end the show thinking about a modern-day Mao Zedong using AI to subjugate civilization. So I'm really glad you focused on middle schoolers learning to code. Come work for the U.S., let's get it together and cooperate with our European allies and Western democracies. And let's start moving so that we have innovators and inventors in the United States creating the rails and the infrastructure so that we can win this race. 

Will: Amen.