Former Department of Homeland Security official, Paul Rosenzweig, joins Niki in the studio to share his thoughts on AI, hacking, and national security. They dish some stories from the past and take a look at the future of big tech in China, talk AI hallucinations, and Paul shares his thoughts on what Congress should be doing to respond to the changing - and rapidly evolving - national security landscape.
Niki: I’m Niki Christoff and welcome to Tech’ed Up.
On today’s episode, former Department of Homeland Security official, Paul Rosenzweig, joins me in the studio to talk, well, a lot of things. From AI hallucinations to OG stories from my early days at Google, we cover a lot of ground, but we’re trying to focus on cybersecurity. Paul gives his insight into how the government and individuals can be more resilient as artificial intelligence increases the threat of hacks and attacks.
Niki: Today on Tech’ed Up, I'm delighted to be back in the studio. We've done some - many - remote sessions in a row. And today, we're being visited by Paul Rosenzweig.
You are a Homeland Security expert with a tech specialty, and I've recruited you to come in and talk to us about national security concerns, but specifically in the digital era.
Paul: I think it's a, an important area of concern.
Niki: So, that's why we're here. That's what we're doing.
I think it's an important area of concern. I know people wake up worried about different things. This is what I wake up worried about! And partly because I've worked in tech, partly because I live in Washington, D.C., half a mile from the White House. So this town, you feel things in a really palpable way when they go wrong.
So, I think we can't talk about national security without, you just said this right before we started recording, without starting with China.
Paul: Well, I, I think that's right. I mean, when you're talking tech issues. I mean, obviously, if we're talking war issues, maybe we would have to talk about Russia and Ukraine. But, if we're talking about the preeminent tech competition in the world today, it's between the United States and China.
The most notable recent manifestation, of course, is the government's decision to pass the CHIPS Act, which will pretty much incentivize tech companies to get out of China. We will see de-risking of American tech presence in China precisely because we're increasingly seeing a competition, if you will, between the United States and China for technological preeminence, and that resonates both commercially, but it also resonates in a national security sense.
Niki: We're almost entirely reliant on Taiwan for semiconductor chips because we thought, “Oh, we'll have globalization and we can get things from anywhere.” And I think of it as, almost, you could see instead of a kinetic hot war, you could see a proxy war over Taiwan where we can't get our chips, or maybe they start having other people, y’know, that are allied with China, not let them buy things from Taiwan.
So, getting them back over to the U.S. is shoring things up in a way that [pause] tech companies need that. Right now, we have very little domestic manufacturing. So, it nearshores, or onshores, which I think is important.
What has changed with China? Like, it's hard to decouple or derisk compared to what we did with Russia.
Paul: Well, I think that that is the reality. I mean, let's, let's step back.
One of the big strategic economic decisions that the United States made about 30 years ago was to invest in the globalization of supply chains. And that had a lot of good benefits. It's basically pro-consumer. It means that we get cheaper chips, which means we get cheaper TVs, which means we get cheaper cars, cheaper everything.
We did not realize, or we did not expect, that coming with that would be a globalization of risk to supply chains. We were all very comfortable with the fall of the Berlin Wall, the end - Fukuyama's ‘The End of History’: it's all going to be one great...grand, globalized, love everybody, kumbaya world, where we get our chips from China, and our metal from Brazil and our, y’know, our ore from Australia, and everything will be happy.
What we've come to realize in the last ten years, probably, is that that was a myth. It was a fantasy. And now, our dependence on globalized supply chains is a weakness.
I mean, we saw that to some degree with the pandemic, which was obviously not an act of war, just an act of nature, but the disruption to global supply chains that occurred because of pandemic-related travel and mobility restrictions is still working its way out.
I just went to, kind of, buy a new car, get a new car, and they told me I would only get one key fob.
Niki: Oh, wow!
Paul: Because there are not enough chips to give everybody the two key fobs that they normally get.
Niki: That is wild!
Paul: And my wife and I are now like, “We're gonna fight over who gets the key fob!” [Niki: yeah!] And, I mean, that's a really small and trivial example. And y’know, it doesn't really matter to me, but it really matters in the, in the, broader scheme of things.
I think you're absolutely correct that the conflict with China is more likely to resonate in quasi-economic conflicts than to immediately become a hot war with airplanes flying and bombs falling.
You raise the possibility of a, of an economic blockade of Taiwan. That's a very real risk. I think it's something that, y’know, that they're gaming out as part of, of pre-game planning. But the other, y’know, kind of aspect of it that you raised is the difficulty, the huge challenge in de-risking from China.
After the Russia-Ukraine conflict, we caused a huge disruption in the, in the economic space, and many Western countries sold their stake, got out, limited their exposure. That was relatively easy because Russia is a relatively small economy.
Today, America's investment in China, and for our purposes, especially in the tech space, is immense! And I hope, I expect that American tech companies with large interests in China, like Microsoft, and Amazon, and Apple, are doing their own planning for what to do in the event of a, of a burgeoning economic conflict. It's, it's for sure coming that sometime in the next five years, those companies are going to be faced with a choice.
My friend Klon Kitchen at AEI says they're going to have to pick a flag: the American flag or the Chinese flag. Right now, they're, they're happy doing both, and I don't blame them! It makes a lot of money to sell in China and sell in the United States, but I don't think that that's tenable in the near term.
Niki: It's interesting you bring this up, so I was asked by a reporter recently about a situation which maybe by the time we air this will have resolved itself or at least we'll have more information, but Google, where [interrupts self] I worked at Google for eight years. And I was in Silicon Valley when I was in a room with the two founders and the CEO at the time, Eric Schmidt, and we decided to pull out of China.
So, we had Search in China. One, there were significant censorship concerns, and one of the co-founders, Sergey Brin, grew up in communist Russia and left as a refugee, and has a major problem with communist governments, and took it very personally, this issue of dissidents being censored and information being censored.
Additionally, we kept getting hacked by our own joint venture partners, which was super annoying [chuckling] and took a lot of resources. So, we sat down in 2010 and said, “We're out!” And we started routing all of our Search traffic for many Chinese nationals through Hong Kong.
Well, I got a call recently from a reporter saying, “Oh, Hong Kong now wants pro-democracy anthems taken off of YouTube. What do you think Google's going to do?” And there was one expert, she said, “They absolutely are not going to. They're not going to take down this pro-democracy anthem.” And I said, “Wrong! Because everything has changed from 2010. Everything has changed economically because there are now three times as many people on the Internet in China as total U.S. citizens. It's a huge market. They need the market. They're not being forced to pull out.”
And I said, “My prediction is they say something really, y’know, y’know, they'll be upset about it, but they can't risk losing access to China for YouTube, for Google Play, for all the things we didn't have in 2010.”
So, I don't know! What do you think?
Paul: Well, I mean, that's a great question. Y’know, I think you're right that Google Search and YouTube’s capacity in China is important to them. Lots of the other American tech companies have even deeper interests. I mean, they do manufacturing, there are companies that continue to do artificial intelligence research. Microsoft's research in Asia, for example, is at the forefront of Chinese development of AI.
You have to share source code sometimes. Your hacking issue is one of the concerns that continues. Amazon has server farms and, and Intel, Apple has manufacturing.
I, I don't know what Google will do and I don't know what Microsoft will do. But I will say, and I, I mean this is repeating a point I made a couple minutes ago, I think in the end they're going to have to choose. Which is to say that I don't think you can continue to manufacture iPhones in China for much longer because the U.S. government and the U.S. economy won't stand it.
Now, maybe you'll choose China over the U.S. market. [Niki: Yikes!] Y’know, that is-
Niki: Yikes on a bike! [chuckling]
Paul: It is plausible, I guess, from an economic standpoint. But, y’know, if you choose China over the U.S. market, you're also kind of choosing China over the European market and that will get very difficult very, very quickly.
By the way, I think in the end, I, I think I disagree with you. [Niki: Oh!] I'm going to guess.
Niki: Yeah, let's see what happens. [chuckling]
Paul: I, I guess I get, I'm guessing that Google will, is too invested in, in its philosophy of freedom.
Niki: This is, so this is a great point because there are ideologies at some tech companies that are incredibly important to them.
Others I think are driven much more by economics, [in a different tone] Facebook. [chuckling]
I do think-
Paul: Yeah. [laughing] Facebook, you say it under your breath. Facebook!
Niki: No! [chuckling] By the way, this, this podcast, I drag certain people over and over. One is Facebook.
Paul: Ah, okay!
Niki: And I'm not going to drag Amazon, but I would like to make a proposal. We have some listeners who work at Amazon.
Why do they not have an option for me to take anything made in China out of my cart? It wouldn't be that hard. I'd pay more for things made in other places. Anyways, just a suggestion for our listeners.
Paul: Good idea!
Niki: Thank you. I'm here for grievances and proposals. [laughs] So, I think, I think we'll see, right?
It's ideology versus economic realities versus the national security issue, which I want to get back to. But say you’re Apple. They're already starting to move some of their manufacturing to Vietnam. [Paul: Yes] They see the writing on the wall.
Paul: Yes, they're doing, they're doing a pretty good job of starting the de-risking possibility.
And frankly, y’know, given the way that Hong Kong is becoming integrated into China, my guess is that everybody who went to Hong Kong is going to wind up moving to, say, Singapore, or Vietnam, or the Philippines, or something like that.
And I mean, and by that I mean not just tech companies, but, y’know, other American companies that have created hubs in, in Hong Kong. At some point, y’know, there's going to be a red line in Hong Kong that, that gets crossed. My guess, it'll be something like risk to local employees, [Niki: mmmh] surveillance, application of, increased application of Chinese cybersecurity/surveillance law to American tech enterprises. Those are the types of things that I expect we'll see more of.
Hong Kong is clearly on a path towards complete integration, forcibly or not, into, into China.
Niki: It's really smart that you mentioned local employees.
Looking back to 2010, another issue, not just being hacked, not just the censorship and surveillance, but additionally was Julie, [chuckling] who was our public policy person in China, who kept getting dragged in by the authorities. [Paul: mm-hmm]
And, I'm laughing, but it was not funny! I mean, it was really traumatizing to her, and she kept getting called in and, and it was, I mean, it was one specific person representing the company. It was raised to the very top, and it was a super, a huge, problem for us.
Y’know, it's not okay!
Paul: I, I think that's a real sore point and a real point of vulnerability for multinational companies, not just American companies. Twitter, you know, very famously, at least in the pre-Elon era, was subject to significant legal process in, I mean, not just in China, but in Brazil, in India, anywhere where the government wanted Twitter to provide information about political opposition and Twitter was refusing.
And so, they, y’know, their offices were searched, their local employees were hauled before local courts. That sort of thing. It can be, y’know, a real challenge to try and stand for principles of free expression, broadly written, in countries where free expression is not as core a part of the culture as it is in American society generally and in American tech companies more particularly.
Niki: Right. And when you talked about ideology, OG Twitter, pre-Elon Twitter, they had some of the most ideological legal team and public policy team members who stood for this firmly. And actually, Jack Dorsey, who - full disclosure, I work for Jack Dorsey, he's one of my clients, recently spoke about this, like in the past what happened, but we've moved into a new era of Twitter, which-
Paul: Well, if you speak to Jack, thank him.
I got an invite to be on Bluesky just recently. So, so I've started up. Thank you!
Niki: I also got an invite to Bluesky and that is a humble brag [chuckling], but it seems more sane for now.
Paul: So far, so good.
Niki: So, let's talk a little bit about, so we've talked national security, China, the de-risking, the incredibly complicated decoupling.
This is sort of the economic underlying friction, and I think escalation we're seeing. But then we add in artificial intelligence, and it would not be 2023 if we didn't talk about AI.
Paul: Wow! It's such a broad topic. Let's, let's start by trying to put it in three buckets.
Part of the challenges with AI are going to be that it just is a super enabler of already bad conduct. So, if you do malicious things now, you'll be able to do more of them later and better. We can talk about that bucket, but an example would be deepfakes.
A second bucket is the risks to artificial intelligence from outside malicious actors. So [pause] AI is really just code, just like any other code. It's amazingly complex and robust and interesting code, and it does really good things, but [singsong voice] code is code, is code, is code. It's ones and zeros in the end. And malicious actors will soon, I think, be targeting, if they aren't already, artificial intelligence to create within it disruptions, degradations of the quality of the product that aren't inherent in the design.
It already has its own problems with hallucinations and things like that. Imagine if the hallucinations weren't accidental but were forced upon us by outside bad actors?
And then, the third bucket is the black swan bucket. The “what we don't know, we don't know.” And one thing I think we can say for sure about artificial intelligence is that there's lots we don't know, and its collateral impacts are, are ones that are difficult, if not impossible, to predict right now, simply because we, it's beyond the scope of imagination.
Niki: I think that's...
Paul: At least mine [chuckling]
Niki: I know, mine too!
So, I just want to go back for anybody who doesn't know what AI hallucination is.
Paul: Ah! So, AI hallucination is the fact that sometimes AI just gives you wrong answers that it's kind of made up out of whole cloth.
One of my favorite stories about that is, and I'm going to shout out a friend of mine, Jeff Jonas, who's the CEO of a, of a company called Senzing. He used to be a Chief Technologist at IBM and Jeff is a brilliant technologist. And one day, on his internal company Slack, one of his employees said, “Jeff, I just used AI to do a bio of you, and it tells me that you, Jeff Jonas, were the model for Jarvis, the artificial intelligence in the Marvel movies, [Niki: laughs] that, that when they created that, they were doing it based upon you!”
And, y’know, Jeff said, “Well, I mean, that's really, really weird because I don't know any of the people who made the movie. I've never met Robert Downey Jr. [Niki: chuckles] Um, I like Marvel, but-”
Niki: [interrupts chuckling] I mean, it's cool that they think that…
Paul: It's cool to think about. But…
Paul: But there's absolutely no reason to think that that's a reality. Jeff has run in every Ironman triathlon there is [Niki: laughs] in, in the world, and there are like 36 of them, and every time they add a new one, he goes, and he does it. He is associated with the word Ironman in huge numbers of databases that, that ChatGPT might have looked into, but it misunderstood which Iron Man they were talking about.
Niki: And it's utterly wrong in an organic way where it's getting the data set. It's reading the data set wrong, but what you're saying is people could intentionally corrupt it.
I mean, there are all these hacks that I know. I'm not an influencer. We don't run ads on this podcast. You can do little hacks, like, if you embed words and terms into your YouTube video descriptions or into your [interrupts self]
You could intentionally change your own profile if you knew that you could drop a bunch of words and then suddenly AI thinks you're an expert - you're not.
Paul: Yeah, I think that you could do it yourself. Or, y’know, all of a sudden, in your bio would be the, “and in 2007 she was arrested for drunk driving.”
Niki: I was not! [chuckling]
Paul: “In Philadelphia.” I know! Yeah, or something like that. There's a politician in Australia who recently sued because the, the ChatGPT bio, OpenAI bio of him included erroneous allegations of prior criminal misconduct on his part.
Niki: That is extremely disconcerting just from a personal level, let, let alone scaling it to nation-states. [Paul: mm-hmm]
I haven't even looked up my bio on ChatGPT. Who knows what it would say? But I find it fascinating that it'll be hard to pull that back. Like, once anything's on the internet, it'll be hard to pull back. And you can see it even just in a partisan political situation.
I mean, 100 million years ago, when I was on Senator McCain's presidential campaign, we'd go to his Wikipedia and change his height. Every day it was like these petty changes in the facts on his Wikipedia. And then, you'd have to have an intern checking and going back to his normal height or whatever, how many homes he owned, whatever little tiny oppo research they had. And you could see that being used, because if it's coming out of the computers, people will potentially trust it more.
Paul: [sarcastically] You've got to assume it's true if it's on the internet. Y’know, Abraham Lincoln said, “Everything on the internet is true.”
Niki: [laughs] Oh, my gosh, I know! My, one of my, favorite, not favorite, things on the internet is how so many quotes are completely misattributed. [chuckling]
Niki: Constantly! And they're used over and over and then you start to believe Mark Twain said it. Mark Twain did not say it.
Paul: Absolutely not.
Niki: Okay. So, I want to talk just as we're getting through all these topics, and I know in some ways this is just getting a peek into how you're forecasting things. We recently had a hack against several federal agencies. Tons of exposure. It was a blip on the media radar.
I live in this city, and I didn't even pay attention to it. And, I think that, my speculation is that's because it just feels like, “What are we going to do as individuals?” Right? That there's a lack of control over this.
I think, one, I think people lack imagination for what can go really, really wrong with nation states really quickly. One. But two, I think part of that is you don't want to look at it too closely because what are you going to do as an individual?
Do you have any thoughts on that?
Paul: Well, I mean, there are a lot of thoughts. I want to break that up into two different answers. The first answer is that there really are things that government can and should do. One of the things I've written about a lot is the risk from monocultures in government infrastructure. Which is to say, if we all use the same email system, the same database, the same communications, the same backup system, that's a single point of failure. Right?
And, and that's the type of risk that is exploited in the hack that you were talking about. Or to, to circle back to like the Hafnium hack on, on the Microsoft Exchange server. It's so debilitating because everyone uses Microsoft Exchange. [Niki: mm-hmm]
And there are lots and lots of good reasons for having a monoculture. It's more efficient. It’s usually backward compatible. Usually, the upgrade costs are very cheap. In government, we have a monoculture. That's a problem, though not necessarily in insignificant systems; y’know, we can live with the grant-making at the Department of Agriculture going down, and we'll work our way around it.
The idea of being unitarily dependent upon a single system is, is really, I think, problematic.
A couple years ago, the Department of Defense was going to build a new cloud system.
Niki: Oh, yes! I know this well!
Paul: They called it Jedi. [Niki: Right]
At the time they were doing it, one of the things that I was really worried about was they were going to buy one! [Niki: Right]
One single cloud and that struck me as radically unwise.
Niki: And it also created, so I was working at Salesforce at the time of this, and we, we weren't involved in the Jedi pitching, but we were in trade associations where you had the members who were. And it was essentially this, y’know, it was all or nothing.
Like, a cage match between the different cloud providers, AWS, whomever could actually service the intelligence community, the Department of Defense. It created, also, friction within the industry that, to your point, if you hadn't made it all or nothing, but if you'd separate it out, one, you'd have redundancy, which would create more security.
And two, you wouldn't have created what is, it became incredibly hostile between the tech companies that were competing for the contract. But you could come up with redundancies, especially to your point you can't really have DOD systems going down.
Paul: Let's be clear, this is not a solution that you import in every instance and, y’know, I hate to pick on the Department of Agriculture, but-
Niki: [interrupts] You have been picking on them! [chuckling]
Paul: But y’know, the farmers can farm without the Department of Agriculture, y’know, being online for a week.
Niki: Not all important infrastructure is critical infrastructure. [Paul: Right] I actually stole that from you. You said that on another podcast!
Niki: I will credit you!
Paul: Well, thank you. It is true. Not all important infrastructure is critical, and if you think that everything's critical, nothing is critical. And that's really bad.
So, that's one answer to your question. There's also another answer to your question. Which is that, systematically, I think, we are indulging in a mistaken rush to kind of centralize all sorts of responsibility and authority over the network.
What we really need to do, is to consider ways in which we can empower individuals to be their own masters, be, be their own, [interrupts self] or if not individuals, then smaller accumulations of people. [Niki: mm-hmm] Maybe it's trade associations, maybe it's small communities, maybe it's accumulations of like-minded individuals on different parts.
Increasingly, we're going in the other direction. We're relying on, y’know, Congress, for example, to pass new laws and regulations. It moves too slowly. It's too rigid.
I mean, one of the things that really dismays me is the rush to regulate artificial intelligence, notwithstanding the fact that there are significant risks.
The Senate is hosting a whole host of learning sessions right now with the eye towards passing artificial intelligence regulation in the next, y’know, six to nine months. And I'm thinking, “Dudes, if you're just learning about this now, the idea that you will pass sensible legislation in, in nine months is, is fanciful.”
I think that both in Europe, and in the Senate, and everywhere else, people are genuinely and justifiably concerned about what AI may bring us, and they are thinking about regulation with good intention.
Sam Altman, obviously, wants regulation so that he can lock in his own leading position.
Niki: I think that's very true. So, Sam Altman, CEO of OpenAI, who came to Congress and said, “Regulate me!”
Niki: So, if you're first, if you're the first mover, and you have this huge brand and a lot of cash coming in, you can, you can adapt to those regulations. And you can shape them.
Paul: Right. He's got self-interest in mind. But I'm going to give the legislators credit that they're genuinely concerned. But there's a lot of hubris, or lack of humility, in the idea that, y’know, our Senators, the legislatures in the European Parliament, can learn enough about this nascent technology quickly enough that they can propose regulations that are meaningful, useful, and will withstand more than six months of technological development.
Paul: I just don't think they will. So, that's what I mean by the rush to centralization that I fear is a problem in this space.
Niki: I have, I have some thoughts on that. One is, you know, you saw in Europe they passed a comprehensive privacy bill, GDPR, but it doesn't do anything. You just click on ‘Accept all Cookies’ if you go to the Guardian website.
I mean, everybody does. It didn't actually have any effect. And we were sort of beating ourselves up for not being able to get anything done, but they got something done and it was toothless, essentially.
Paul: Except for the 1.3 billion that Meta's going to have to pay!
Niki: Okay, not wrong! That's right. They, Facebook, I only will call them Facebook.
Paul: Although that's pretty toothless, too, right?!
Niki: They could - I know Mark Zuckerberg can shake that out of the vending machines. [chuckling]
Niki: [chuckling] At headquarters. But yes, they did fine them. That's true. I think the other thing, one of my favorite sort of, tropes about DC is, I can't even remember who said it, but they said the most unrealistic thing about House of Cards is that they passed a comprehensive education bill. [Paul: laughs]
So, six to nine months on an AI bill. I just think, no chance.
Paul: Right. Some, one of the best pieces of advice I ever got from a mentor was in times of crisis, don't just do something - stand there. [Niki: Yeah]
I mean, it's not universal. Obviously, if you're in the middle of a hurricane, you've got to, got to react. But he was rightly of the view that policymaking proceeds better if it has time to kind of really assemble all of the information, grind it up, consider it maturely from a host of different perspectives: the, the pro-commerce perspective, the pro-national security perspective, the pro-diplomacy perspective, and meld all of those together into a, into a cohesive and sensible policy rather than rushing in one direction.
If Congress were to ask me, I would say, y’know, “Do a pilot program. Hey, pick, pick an agency that's not so important.”
Niki: Like the Agriculture?
Paul: Like the Department of Agriculture. [Niki: laughs] Or, or something like that and have them pilot some AI regulations relating to crop development. I just made that up completely.
Niki: But yeah, get a sandbox going.
Paul: I don’t even know if it's feasible. If you're listening- who's it, Senator Peters and Senator Cruz? I think that's right. [Niki: Yeah] If you're listening, Senators, go and pick someplace where you have jurisdiction so you can make them do it, and that'd be fun and useful.
Niki: I think it would be useful. I agree.
So, we've covered a wide range of topics, but I think if I had to distill it down, it's this: when you look at China, we're de-risking right now, and it's going to be a very complicated but urgent effort for both the private sector and the, and the public sector to shore up some of the vulnerabilities that we created over however many years of globalization policy. So, that's one thing.
The second thing is we can't even imagine what AI is gonna be like and some of the risks because it's so, so new. So, the opposite of moving quickly, take a beat, take some time, absorb it.
And then, as far as control: breaking things out of just the centralized idea that the government has to do everything. And, in fact, the government's struggling to do many things at the moment. So, think about it as an individual. Think about what you can do or small groups can do.
I think that's kind of summing it up.
Paul: I think that's a good summary. And one of the things we didn't talk about with AI, which is the ultimate positive, is that eventually AI manages AI. We set the good AI to fighting the bad instincts. Y’know, for every AI that finds mustard gas, there'll be an AI that finds a, a preventative cure for mustard gas poisoning.
Niki: Exciting times! [chuckling]
Niki: Exciting times! Well, thank you for taking time to come into the studio while you're in Washington. I'm grateful for you coming in.
Paul: Well, thank you very much for having me. It was a delight to be here and a fun conversation.
As always, thanks for listening.
On the next episode, Axios reporter, Ashley Gold, joins me in the studio to dish about D.C.’s tech scene. What’s the current ground game in Washington with regulators and the state of play for the industry? Be sure to tune in!