Tech'ed Up

Quick Tech Takes • “Spamouflage,” Google Gemini, & ChatGPT

February 29, 2024 Niki Christoff

Steer PR partner Lauren Tomlinson joins Niki for the second installment of Beyond the Buzz, the bi-weekly Tech’ed Up series that digs into the comms, tech, and D.C. news creating headlines. They talk “Spamouflage,” Gemini, and ChatGPT.


[music plays] 

Niki: I'm Niki Christoff and welcome to Tech’ed Up! 

This is our second-ever edition of Beyond the Buzz. And for those of you who are still catching up on the show being weekly, it's an episode where we look past the clickbait in tech news and talk about what's really happening in those articles.

I'm joined again by Lauren Tomlinson, half of the dynamic duo at Steer PR. 

We've got lots to talk about this week, so we'll dive in. Lauren, welcome to the studio. Thank you for, again, being experimental and trying this new format of looking at headlines. 

I appreciate you joining me. 

Lauren: Yeah, thank you. I'm so excited to be here again. It was so much fun last time. 

Niki: It was fun! So today- As a general rule, I try to avoid politics on this show, even though I'm actually pretty political, privately, deep down, but we have to talk politics because we are full swing election season. [pause] Prayers. [Lauren: laughs] 

And it's worse here being in D.C. 

Lauren: Oh, it's definitely worse being in D.C. I've had people tell me they were like, “I just, I can't move back to D.C. at this time because it's so bad in D.C. during election season.”

Niki: No, you can't even buy a latte without talking to the barista about politics. 

Lauren: Yeah. It's everywhere! 

Niki: It's everywhere. Pro tip, I have a Chrome extension that actually blocks any headlines with an individual who rhymes with Trump. 

[both laugh] 

But that means sometimes I'm [chuckling] super clueless in a way that is a little embarrassing. 

Lauren: I kind of feel like I need that for Taylor Swift right now. 

[both laugh] 

I was wondering how I was getting fed so many Taylor Swift stories the other day [chuckling]

Niki: Yeah. You can get a Chrome extension. Just block that stuff. 

Lauren: That's good to know.

Niki: Yeah. Save yourself. 

Lauren: This is good tech tips. [chuckling]

Niki: These are good tech tips. [chuckling] So, I wanted to talk actually about a, an article that you flagged. Your PR firm puts out a newsletter called BLUF, which for people who don't know is “bottom line up front.” A key tenet of good communications.

[both chuckle]

And one of the articles you had is about disinformation and misinformation as we approach these important elections, not just in the U.S., but globally that are being run by the Chinese Communist Party. And I thought it was super interesting and important for people to hear about because it's impacting all of us. 

Lauren: Yeah, absolutely. And this one, this article specifically focused on the campaign that the Chinese are running currently, the CCP are running, to heighten divisions within the United States and they're producing lots of, they're using Gen AI to, like, produce images of Trump and Biden, y'know, sword fighting and all of these like aggressive images of division and civil war and, y'know, internal strife within the United States.

What I thought was super interesting about this article though, was that they specifically said that the CCP isn't particularly good at it right now. [Niki: mm-hmm] So, like, they're producing tons of content, but it's not necessarily making an impact, in particular on X. However, y’know-

Niki: [interrupts quickly] X being Twitter. I'm just clarifying. I still call it Twitter. 

Lauren: Yeah, I had to look it up, and AP style actually prefers that you say “X, formerly known as Twitter.”

Niki: I'm just not going to! 

Lauren: [laughs] Yeah? It'll always be Twitter for you.

Niki: I'll never say “Meta.” Stop trying to make it happen. [laughing]

Lauren: You'll never say “Meta”?!

Niki: Never!  

Lauren: Well, speaking of Meta, they actually had an announcement out this morning that they are going to start working with the EU ahead of its election cycles to combat some of these misinformation campaigns, disinformation campaigns, because, y'know, there's, like, a ton of elections this year.

It's a big, a big year for democracy and, y'know, Russia, Iran, and China are going to be taking advantage of that. I think there's going to be a lot of emphasis on these disinformation campaigns and how generative AI is messing with the election cycle this year and all of those types of things.

I actually look at it more from a “make sure that you're getting your information from credible sources and double-checking everything” perspective, because if you're relying on social media to tell you where to go vote, that's actually a bigger threat than if you're just going to get some information about, y'know, how “Biden's terrible” or how “Trump is terrible,” because this is going to be a nasty election cycle regardless, so.

Niki: This is going to be a nasty election cycle! And I think one of the points of the, the articles over the weekend was about this escalating CCP misinformation and disinformation campaign.

Well, first of all, they called it “Spamouflage,” [Lauren: laughs] which is a great name-

Lauren: [chuckling] Which is a great name! 

Niki: And the idea is that they're sort of incepting these really negative, divisive pieces of content into the discourse on Twitter, on TikTok, on even Instagram where it's not like they're inventing the divisions, they exist, there already are these, this polarization, but they are just sowing discord and making it worse, such that people think [interrupts self]

Wait, I'm going to read a line from, from the article, actually, which I thought was, it said: “The whole point of the campaign is to breed disenchantment among voters by maligning the United States as rife with urban decay, homelessness, fentanyl abuse, gun violence, and crumbling infrastructure.”

 [both chuckling] 

And when I read that, I thought this is really interesting, because I already have started to absorb that messaging pretty seriously. And I'm not on TikTok. I'm barely on Twitter. I have no Facebook account, and I'm on Instagram following, like, you. [both laugh]

Lauren: Which is just pictures of my kids!  

Niki: Just pictures of people's kids and vacations! And I'm still getting that messaging because everyone around me is getting that messaging. 

And so, I think that one takeaway was that they're not really getting a lot of engagement necessarily of real humans amplifying these bots and this information, but it does seem to be seeping into the discourse pretty effectively to create chaos.

Lauren: Well, I think they're capitalizing, they're capitalizing on messaging that political parties are already pointing out. So, y'know, the crime and fentanyl, those are Republican talking points right now. I think the infrastructure decay, that's something that the Democrats are talking a lot about. So, it's not necessarily, too, that they're propping up one political party over the other.

Their goal, ultimately, is to heighten all of these tensions, heighten these feelings, and make it seem as though the United States is in chaos, particularly to the international stage. That's part of their goal as well: they don't want the U.S. in a place of power where, at full strength, we will interfere with their domestic or international goals. One of those being if they want to retake Taiwan, for example. [Niki: Right]

They want the United States in pure chaos, to where we can't muster enough international support to justify protecting Taiwan if we ever reach that point. They're trying to, y'know, put themselves on the international stage to where they can be a world power and it's not just Western ideals that are kind of governing the international world order.

A United States in chaos, showing that democracy doesn't work, projecting that we aren't strong or unified, that's their ultimate goal, right? Because that just leaves a power vacuum in their mind that they can go in and fill in the Middle East, in the Global South, everywhere.

When we think back on 2020 and disinformation, the Russian disinformation campaign and favoring one candidate over another, ultimately, these foreign actors, that's not their goal. Their goal is just: the United States in chaos and the United States internally divided, because then we can't project strength internationally.

Niki: Right!  And it shows, or it demonstrates, tries to demonstrate the fundamental weaknesses of democracy and a lot of the places where they're building this infrastructure and intervening, they need cobalt and lithium and all the stuff we need to build our cars and our phones. And so, it's in their interest to just go straight to those nation-states and be like, “Hey, at least we're reliable! Y'know, the Americans are beclowning themselves.”

Lauren: Yeah, absolutely. And, y'know, we talk about this a lot, too, from a messaging perspective, from, like, a national security perspective: every time Congress can't pass a budget [Niki: Right] or there's a government shutdown, y'know, what it ends up doing is just giving talking points to the CCP to go out and say, “Hey, African nations that are producing this cobalt, you don't want to have a deal with the United States. They can't even pass a budget. Their government's not even working right now. Let us buy your mine. Let us fund everything. Ultimately, we're more stable than the United States is. We'll live up to our promises.”

And I think that's a horrible talking point to be able to give to the CCP. 

Niki: Right. And it's a legitimate one. We're on the brink of yet another government shutdown this week. [both laughing] 

Yeah, it's like a day ending in Y.

Lauren: A partial one [laughing] 

Niki: Yeah, a partial one. Exactly. I mean, again, living in Washington. [both chuckle] So, okay, so the point of this article was: there are the “Spamouflage” campaigns. They're putting all of this concerning, harrowing, upsetting content into all of our social media feeds. They're turning the dial in different ways that make us feel even more freaked out.

I have some thoughts on this, but what do you think social media companies and the government can do to sort of combat what is essentially a political warfare tactic?

What can we do?

Lauren: So, I think there's two paths, and you've seen one be pursued heavily in the past. So, I think, y'know, we were talking about this a little bit before, where, historically, the national security apparatus of government, we were so focused on battling terrorism, right? And online radicalization, and you remember the ISIS beheading videos. [Niki: Right] And that was, like, a huge deal; that was a huge focus for both the government and the social media companies for a long time. At DHS and FBI, they started sharing threat assessments.

Niki: And you, just to be clear, used to work at the Department of Homeland Security.

Lauren: Yeah, I used to work at the Department of Homeland Security. So, I was in all of those meetings where we were sharing all of the ongoing threat assessments with them, and also, like, flagging videos that we knew came from ISIS to the social media companies, and we were asking them to take it down.

Ultimately, it was up to the social media companies to make the determination because free speech and all those types of things, but the idea was if you're sharing intelligence then that creates a safer environment and also mitigates these risks for populations to be influenced by radicalization videos.

Now that's kind of evolved. Like, that practice continued for the threats, but now the, the influence campaigns became vaguer and vaguer. How do you now share a threat that y'know is coming from the CCP, but it's messaging that is basically ripping off of political messaging that's happening in the United States anyway?

Does that reach the same level of censorship that, that we would think about for an ISIS beheading video? So now enter the court cases with big tech and conservative groups and legal and the First Amendment and all these things. That's kind of where it is right now. So that's one path, right? Like, you work with government, big tech takes it down. Y'know, that's the way that, that we're going to address those threats.

The other, and I think this is the way that we're moving forward, is: Americans are really smart. Like, that's my, that's always my premise.

Niki: It's a hot take! 

Lauren: [laughs] Yeah. [Niki: Right] I think that Americans are generally very smart [chuckling].

Niki: I do too. And I think that if you equip them with information, y'know, and this is probably the comms person in me, if you communicate really well to them, they absorb it. They act on it.

Lauren: And, y'know, this is something that I saw in government too, like when you are communicating threats regarding natural disasters. People take that information and they act upon it. They, y'know, they make sure that they evacuate, they make sure that they take care of their neighbors. Like inherently, I think Americans are very smart. 

I think now, when we're talking about these disinformation campaigns from foreign actors, one of the best things that government can do is just over-communicate the threat as soon as they have it, instead of just going straight to the social media companies and back-channeling it. Now, I think it's incumbent upon government to basically flag these things for the American public and say, “We know this is happening. Be aware.”

And then there's a literacy issue there, an online literacy issue, where, y'know, I think younger generations are really good at it, and the older generations are having to catch up, but ultimately we're going to have to be more responsible for what we're consuming and sourcing. Basically being aware that “Yes, you can take this meme and agree with it and share it, but it came from the CCP.”

Niki: Right. I think that that's right. If people know more, they might handle the information they're getting better. And yet we have rock-bottom trust in government, rock-bottom trust in tech companies, and, unfortunately, the whole point of these campaigns is to make us mistrust each other as well.

And so I think that there does absolutely need to be more of a campaign around online literacy. What are you seeing? Where did it come from? Who sponsored it? Who paid for it? Y'know, use your critical thinking skills. [Lauren: chuckles] Does this make sense, this thing that you're reading? Is somebody trying to manipulate you to make you more scared?

Oftentimes people underestimate the American voter; they have good intuition around things. However, we all have fear fatigue and news fatigue, and people are busy, and people are stressed about everyday life, and so it's hard to also then go through the provenance of this- [chuckles]

Lauren: y'know, go find it, right? Right. Like, I'm gonna go Google the FBI site and see what foreign actors are currently operating online. Like, that's like, just not gonna happen, right? 

[both laugh]

Niki: Right! And so maybe you could see a world in which, uh, people with a lot of social influence might get involved in trying to give a trusted narrative around what's happening. Y'know, we sometimes have a lack of imagination for what can happen if democracy fails. [Lauren: Yeah] And democracies fail all the time.

Lauren: All the time, but, y'know, I love it, people always say, like, “Oh, democracy is so messy,” and “Oh, democracy, like, our systems are failing us.” And I just look back across history and I have such optimism that that's not going to ever be the case, because as long as American voters continue to participate in the system, we continue just to bounce back.

Niki: I'm long on democracy, but I do think that we need to be very intentional about protecting the things we care about as a nation, like the rule of law, and the free press, and liberties, and, y'know, all of these things. We might just go back to first principles. [both chuckle]

I agree there's a responsibility for the tech companies, there's a responsibility from the government. And then we have personal responsibility to just, y'know, put on your thinking cap. [Lauren: chuckles] Like, “What am I looking at? Does this make any sense?”

Lauren: Yeah, absolutely. And, y'know, the fragmented nature of media now means that you can't just issue a press release and then the information is going to get to people. You have to actually go to where people are getting their information. It's a lot more intensive, I think, for communicators to have to navigate these days, but it's something that will be more impactful than if you just issue a press release and expect, y'know, legacy media to then go report on it.

Niki: I think that's absolutely right. Okay, so the big takeaway is: we are experiencing, not just us, but Europe as well, these increased “Spamouflage” campaigns on our social media platforms. We should all be aware of it. The big tech companies need to pay attention to it. We hopefully will find some cooperation with the government as we go into this election season. And you and I are both long on democracy and Americans. So! 

Lauren: Yeah, I think that was a great summary. 

[both laugh]

Niki: Yeah. Because we were kind of all over the place there. 

Lauren: There's so many different directions you could take with the CCP and online. 

[both laugh]

Niki: I know. We brought it all back. [chuckling] We brought it all back and tied it together.

Okay. So we've now talked politics. Now we're going to talk culture wars. 

Lauren: [laughing] Perfect. This is a wild podcast. 

[both laugh] 

Niki: Yes. [chuckling] Exactly what we wanted to discuss. 

So, this isn't really culture wars. It's AI, which we have to talk about. But last week, two things happened in the world of artificial intelligence that were evidence that these products are not ready for prime time, even though they're fully in prime time.

So I think that, y’know, [long pause] the iconic phrase from Silicon Valley was “Move fast and break things.” And this is like, “Oh, we're just giving you something broken.” [Lauren: laughs] 

So, the two stories: one was about Google's Gemini image generator. So, I'll start by explaining what happened with them. Gemini's image generation, for anyone who didn't see it last week, had started to over-index on taking bias out of the data set.

So, basically, all these data sets of images have way more white people than is representative of the world, and the road to hell is paved with good intentions. Google had their data set, and then they said they'd “tuned it to be more representative,” which resulted in things like a woman pope, and then everybody freaked out because it felt like, “Well, Google's manipulating what you see for some sort of woke agenda.”

Now again, they're trying to correct for what they know is a problem in the data set, which is it is not remotely representative of the globe. It's not even remotely representative of the United States. So, they basically pushed pause, regrouped, and communicated about it, which I want to ask your opinion on.

The second thing that happened is ChatGPT, everybody's favorite large language model, just started spewing gibberish. Apparently a software engineer had updated the code, and just, like, AI hallucinations started coming out of the machine, and it was just completely broken and a mess. [Lauren: chuckles] And so, basically, you had two of the biggest AI platforms go on the fritz, y'know, during a week when AT&T had an outage.

Lauren: Yeah, just total technology meltdown week! 

Niki: Total technology meltdown week. 

And so, I think these are interesting communications issues because Google took a very different approach than OpenAI in how they addressed the issue. So, do you have thoughts on that? 

Lauren: Oh, yeah! 

I mean, it's like the OpenAI gibberish, they just like responded with gibberish. Like, you read their statements, and you're, like, “What are you even talking about?” And y'know, then Google, I think, who has been obviously dealing with these types of snafus for a much longer time, they had very transparent, thoughtful, plain-language blog posts that they put out. 

Google did a really good job, I think, of, like, transparently laying out exactly what happened. And I think that's the only way that you can combat this: being overly transparent and plain-language about it in real time. Whereas you compare it with how OpenAI handled it: technical jargon, right? And not a full explanation. And it was also a little bit piecemeal in the way that they issued the statements.

And I think that is something that confuses people more. And it fueled more speculation and more online conversation instead of providing people a single source to go to, to understand what happened, right? People were having to go to secondary sources for, like, translation of, like, what is this? [Niki: Right] Like, what is actually happening? And that's not what you want from a good, y'know, communications perspective.

You want people just to be able to go to your site, understand what happened, still have faith in the system and, and move on. 

You even saw this with AT&T, right? Like, because they had to issue a few statements of, like, we don't know what's going on. And then, y'know, people were, like, super confused, and Marco Rubio's, like, tweeting that, like, y'know, this, “This is what could happen with a CCP-like cyber attack” and, y'know, all this stuff. And then it's like, “Oh, it was a software bug. We basically- it was a human error.”

Niki: All of these things were human errors!

Every single thing that happened last week in the tech meltdown was because of humans. [Lauren: Yeah] Humans either tuned a little too much toward trying to eliminate bias at Google, somebody screwed up the coding in ChatGPT, and then AT&T, same thing. So, humans are always touching the code, but the code is just ones and zeros.

It's not like a sentient thing that's out to get us. It's just software, and they're releasing it, and we're using it as it's evolving. And so I think you're right. The communications around it, just to bring the level of distress down, is important, because when this technology, soon, now, is part of our healthcare system, part of our financial system, part of our transportation system, suddenly these kinds of errors are not okay.

It's one thing if you're drafting an email for your boss using ChatGPT and it spews out gibberish. It's another thing if, if these technologies are built into our critical infrastructure and our most private and important, y’know, use cases. 

Lauren: 100%. Like, I mean, DoD is looking at predictive threat analysis, right, with AI, and, like, how they can use AI to predict and quickly communicate threats, right, like, from other nations, to then influence their decision-making.

So, if for whatever reason there's a bug in the system and it's, like, “Oh, Russia actually has a nuke that they're about to launch right now and then we're making decisions based off of that.”  That's a big problem, right? 

Niki: Right! And so I am, I'm actually really increasingly excited about the opportunities of AI, and this is a shift for me. 

[both laugh] 

It's like, it's a major shift from deeply cynical and eye-rolling to, like, “No, this is going to be actually really great in a lot of ways.”

And so, it's so important to communicate, “Hey, listen, we know there's this issue. We know it's super biased. We're trying to correct for that. We overdid it. We tuned too much” and “Hey, we've got software engineers and sometimes they're super tired and they ran out of, like, whatever Red Bull, whatever kids are drinking these days, and, like, screwed up a coding thing. Now you have gibberish. We're going to fix it.” 

AT&T, same thing. If you can communicate clearly how you're going to repair it and how it's not going to happen in the future, then people can start to have trust in these technologies, which they're already pretty freaked out about.

Lauren: Yeah, absolutely. And I think, too, we'll probably reach a state in which when they're doing software updates and there is a potential for a bug, that's probably going to need to be communicated, right?

Like, there needs to be basically a flag of, “Hey, we're doing system updates. You may see glitches.” And then that kind of lowers the temperature on some of these things, when you don't know how your bugs or your software updates are going to affect these types of things. Versus, like, the DoD example I was just using: they're going to be using their own data sets, and it's going to be a much different situation in the way in which they're applying AI versus, like, these, like, global, y'know, internet-scraping, y'know, like, the ChatGPTs of the world.

Niki: I think that's a great point and maybe we end on that. If you communicate ahead of time, now obviously, if Google had communicated ahead of time, “Hey, the data set is not remotely representative. We have a lot of bias in our results. So we're going to try to find ways to create better representation. Like, stay tuned.” 

Maybe if they'd pre-announced what they were doing, people wouldn't have the sensation that they were being fed something. [Lauren: Right!] Yeah. They might have taken it better knowing ahead of time: this is what Google is proactively trying to do to fix something we know is a fundamental problem in this technology.

Lauren: Yeah. And then if the flags are happening, it's, like, then the conversation is “Google is really over tuning this. This is ridiculous.” Not, y'know, “Google's trying to pull something over us.”

Niki: Right. And this is not new, by the way. So, back in the day at Google, we had these OneBoxes and occasionally they would come up with, you'd say, “Who's the president of the United States?” and it would say, “Hillary Clinton.”

And that was based on, like, a bizarre thing, it was people searching for, y'know, those terms together, and it created a OneBox. But it was no end of problems for me [both laugh] as a press person at Google, because I'd have to try to explain, like, “No, we're not trying to convince people that this thing happened or is going to happen.”

And so it's not new that they have these issues, but you can try to explain ahead of time. Maybe not for the fringes, but the vast majority of people, I think, can see when someone's trying, in good faith, to make a better product for them so that they can trust using it.

So, I think maybe we end on that, which is just communicate more and simply.

Lauren: Over-communicate. 

Niki: Yes, Over-communicate!  

Lauren, I want to thank you again for being a guinea pig on this format and being our first co-host. Next month, Adam Kovacevich from the Chamber of Progress is going to be coming on to try the same thing.

If you're listening and you like this format, let us know!  If you would make some changes, let us know that! 

But you're just such a good sport and a good friend and thank you very much for coming.

Lauren: I've loved it. This has been super fun!