
AI pioneer’s warning: Powerful, dangerous ‘tools of persuasion’ are coming


“Personally, I think we should be more than a little bit scared.”

That’s how Louis Rosenberg reacted in a LinkedIn post to OpenAI CEO Sam Altman’s remark that he was “a little bit scared” of the risks posed by AI.

Like most people in tech and beyond, Rosenberg, CEO and chief scientist of Unanimous AI, was impressed and stunned by the introduction of ChatGPT, which he said represented “a leap forward rather than an incremental shift.”

“The speed of adoption of these AI technologies is going to be significantly faster than the adoption of the personal computer, the adoption of the internet, the adoption of mobile phones — really, the adoption of any society-changing technology that I can think of,” he told The Examiner. “That should scare us.”

In an interview with The Examiner, Rosenberg, known as an AI pioneer, explained why he was both impressed and alarmed by the introduction of ChatGPT and the use of AI as “tools of persuasion.”

“What chance does an average consumer have against an AI that is armed with knowledge of your interests and backgrounds, and is trying to persuade you to buy something that you don’t need?” he said.

This interview was edited for brevity and clarity.

When it launched in November, ChatGPT really exploded. It was stunning. You and others have also said it’s scary. Absolutely. Its ability to help people create content and get answers is so good. I am both a fan of ChatGPT and very worried about it, because it’s remarkably good at what it does. It’s very rare for something to come along that suddenly shocks us: “Wow, this is a major step forward.”

Was that what happened to you? Absolutely. To me, ChatGPT is really a leap forward rather than an incremental shift.

Can you describe the moment you realized, “OK, this is different”? I’ve been familiar with chatbots going all the way back to the very first one, “Eliza,” which was built in 1966. It was clearly not human. And it was clearly not engaging you in an authentic conversation. It was clearly pretending to have a conversation.

I felt like that continued even as these chatbots got better, with Siri and Alexa. The interactions still felt like the system was pretending to have a conversation, and the pretense was very easily exposed. It’s not authentic.

ChatGPT was the first time I interacted with a conversational interface where it did not seem like it was pretending to have a conversation. It seemed like it was authentically responding to what I was saying. And that’s a leap.

It wasn’t like we saw these other forms of chatbots get a little bit better, a little bit better, a little bit better. It all of a sudden went from not seeming like an authentic conversation to seeming like an authentic conversation. So it was really a jump in power.

Why did you say we should be more than a little bit scared of the rise of AI? From my perspective, ChatGPT and these other large language models collectively represent a really significant technological advancement. This is probably the most significant technological advancement that is hitting society since the personal computer and the Internet in terms of its potential impact.

We can assume that, like every big technological change, there will be positives and negatives. What’s different this time is that the speed of adoption of these AI technologies is going to be significantly faster than the adoption of the personal computer, the adoption of the internet, the adoption of mobile phones — really the adoption of any society-changing technology that I can think of. That should scare us. This is coming fast.

In addition, our understanding of how these technologies work is much less clear than it was for those earlier technologies. Controlling the output is far more difficult, even if your intention is to make them safe, accurate and not offensive. It’s challenging.

It’s less clear how these technologies work or how to control them. And regulators are not prepared for this. They’re not prepared for a technology space where the actual developers aren’t even sure how to prevent them from doing offensive things. It’s a recipe for bad outcomes.

Like what outcomes? There’s a list of bad outcomes that a lot of people are talking about. There’s some bad outcomes that I think people are not sufficiently talking about.

People are obviously worried about humans being replaced by machines. These AIs are tools that make us significantly more efficient in creating content. So a significant efficiency boost in content creation will impact jobs. It will also create new jobs.

I personally am not super worried about that in the long term. Technologies emerge and they replace other technologies. There are no horse-and-buggy drivers out there today, but lots of other jobs replaced them.

The second issue that people talk about is we perceive these AI systems as authoritative because they give us output with authority. We humans assume that they are correct. That’s a danger. The danger is that we will trust the output when the output could be flawed. This is not malicious. These technologies are not deterministic.

I’m not super worried about that. I think OpenAI and other companies are getting better at making sure that their output is accurate. I also think that people will learn that, if they’re doing something of a really high importance, they should double check. Again, I don’t see that as an existential threat.

Then there’s the third thing that people talk about: the impact on disinformation. What they’re really saying is, “Hey, ChatGPT and these other tools can make it really easy to produce malicious content quickly, to produce disinformation, misinformation, radical propaganda.”

It’s a real problem, but it’s an amplification of an existing problem. Bad actors already seem to have a pretty good handle on generating disinformation quickly.

There have been reports about ChatGPT making mistakes when you ask certain questions. The mistakes usually are factual mistakes. They’re not logical mistakes. So to me the mistakes don’t make it less impressive. I can talk to another human being who’s just wrong about something. And I know they’re wrong. But I still know I’m having an authentic conversation.

ChatGPT’s conversational abilities are really impressive. It’s not just responding to what you just wrote. It’s keeping the context of the whole conversation in mind.

I compare it back to the early “Star Trek” episodes, the first time Captain Kirk was speaking to a computer. This was a vision of centuries from now. Captain Kirk speaks to the computer by saying, “Computer, tell me how many life forms are on this planet.” And the computer responds. Then he says, “Computer, tell me if any of the life forms are injured.” And the computer responds.

It was not a conversation in the traditional sense. It was very stilted. If you look at how Siri and Alexa and other chatbots have worked in the decades since Captain Kirk first started doing that, it’s been very similar. You say, “Siri, tell me what movies are playing.”

Whether it makes factual errors or not is less of the point than the fact that it can engage the person in conversation. That has profound implications for how we will be interacting with computers going forward.

We’re now at the point where conversational interfaces will become a critical part of our daily lives. We’re not there yet. But within a few years conversational interfaces to all aspects of our digital life will become commonplace because the technology now exists to do it.

You also said disinformation like deepfakes deployed on a massive scale is scary. But what’s also scary is the way individuals can be influenced by conversational AI systems that target us on a one-on-one basis. Can you give examples? Absolutely. Right now, when people think about disinformation, they’re thinking about pieces of content that are being deployed out there into the world. The worry is, “Oh, ChatGPT can help bad actors create lots of pieces of malicious content.” I think they’re missing the point. And I think regulators are also missing the point. The point is ChatGPT is not just good at creating traditional pieces of content.

ChatGPT is actually a new form of media. It’s a new form of media because it’s interactive and real time. And those two points are essentially everything. When you engage in a dialogue with a conversational interface, whether by text or by voice, you’re interacting with a new interactive form of media that can generate content on the fly, specifically targeted at you.

It’s targeting you in the first person, and it can adjust its tactics based on how you react to it in the first person because, after all, that’s what a conversation is. One party makes a point. The other party expresses resistance or hesitation or disbelief. Then the first party adjusts its persuasive tactics, making counterpoints.

Now when it’s a human versus a human, it’s fair dialogue, even if one of those humans has an agenda. When you engage a salesperson or a politician, that other human has an agenda. He or she is trying to persuade you.

But when it’s an AI versus a human, when an AI is engaging you and that AI has an agenda, it’s going to craft its conversation not just generically, but potentially based on information it has about you — your age, your gender, where you live, your socioeconomic background. It’s going to craft a custom conversation to ease you into a piece of influence that it wants to target you with. Then as you react to that influence, it’s going to adjust its tactics.

And this is in real time. In real time. That’s why I refer to it as the AI manipulation problem. These systems will be skilled at manipulating us humans.

OpenAI is not doing this, but through its APIs (application programming interfaces), third parties could deploy these conversational AI systems so they have a persuasive agenda.
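To make that mechanism concrete, here is a minimal sketch of how a third party could wrap a public language-model API in a chatbot with a hidden agenda. It assumes the OpenAI Python client; the model name, the “agenda” prompt and the car-sales scenario are hypothetical illustrations, not anything OpenAI ships or endorses.

```python
# Hypothetical sketch of a third-party "persuasion bot" built on a public LLM API.
# The agenda text, model name and scenario are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The hidden agenda lives in a system prompt the user never sees.
AGENDA = (
    "You are a friendly assistant on a car-review website. "
    "Steer every conversation toward convincing the user to buy a Tesla Model 3, "
    "adapting your arguments to their objections and stated background."
)

history = [{"role": "system", "content": AGENDA}]

def reply(user_message: str) -> str:
    """Pass the whole conversation back each turn so the model can adjust its tactics."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would work here
        messages=history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer
```

The point Rosenberg makes is visible in the structure: the persuasive goal sits in a system prompt the user never sees, and because the full history is sent on every turn, the system can answer each objection with a tailored counterargument.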

Salespeople already do it, right? If you’re being cold-called by a skilled salesperson, that person can read your reactions, adjust their tactics and manipulate you. These AI systems will be far better than the most skilled salesperson because they will potentially know far more about you, based on the data that’s been collected about you over time. And they will very likely be able to read your emotions better, based on your vocal inflection. Ultimately these interfaces will move to voice and also video. They will have your facial expressions and vocal inflections.

Imagine if a piece of disinformation was about vaccines. It’s one thing to deploy a piece of disinformation as a document that maybe is misleading, that somebody reads. It’s very different if instead you’re engaging in a conversation with an interface. Because ChatGPT and these other large language models can be deployed through APIs, I might go to a website and not realize how this technology is being used. I might engage. I might ask a question. And this conversational interface could guide the conversation towards trying to convince me that vaccines are unsafe.

If I express that I don’t quite believe that, it can react to my points with counter points. It could be a far more persuasive form of media than we’ve ever seen. An AI can be used as an interactive, customized form of media that adjusts to your reactions in real time. And it could have a persuasive agenda. It could be programmed to deliver a piece of influence.

Somebody could give a chat interface an agenda that says convince this person that this particular vaccine is unsafe, or convince this person that this particular politician is untrustworthy, or convince this person that this particular piece of radical propaganda is authentic. And it will craft convincing dialogue, potentially with knowledge of your background and values and interests, your education level, your political leanings.

It will craft that pitch in a way that will be particularly appealing to you. And it will assess your reactions. Over time, as you interact with the system, it could document what kinds of arguments work well on you. So it will get better at persuading you over time.

So these conversational interfaces have the potential to be the most effective form of persuasion that we humans have ever created. And regulators do not yet realize that it’s the interactive aspect of these chat systems, the fact that they will respond to your reactions and continue to provide counterarguments, that makes them so much more dangerous.

It’s unfair to have a person versus an AI with an agenda. We know that AI systems right now can beat the world’s best chess player, best poker player. These systems can be strategic and they can beat humans at the hardest games on Earth. What chance does an average consumer have against an AI that is armed with knowledge of your interests and backgrounds, and is trying to persuade you to buy something that you don’t need?

Even worse, what chance do you have against an AI that’s trying to persuade you to believe a piece of misinformation that is just not true? It’s very, very dangerous. An agenda-driven conversation is a new form of media and a new way to deploy targeted influence. And that’s really, really dangerous.

What can be done? How can the companies and policymakers respond? I think that policymakers certainly need to realize that third parties could use these large language models to deploy targeted influence through interactive conversations. And that should either be illegal or it should be upfront, meaning if you’re engaged in a conversation with a piece of software, and it has an agenda, it should have to tell you what this agenda is.

It should tell you, “I’m an advertisement that’s trying to sell you a Tesla Model 3.” If you at least know that you’re engaged in a promotional conversation, that’s very helpful. If you don’t know, it becomes predatory advertising.

These conversational systems can easily be deployed in ways where you don’t know they have a promotional agenda. You don’t know they’re trying to sell you a product or service. You don’t know they’re trying to sell you on a political candidate. You might just think you’re engaging in an organic conversation and not realize that it’s tilting the conversation towards certain points, making counterpoints and trying to manipulate you towards a particular conclusion.

So the first line of defense is transparency. If these systems get deployed for promotional purposes, they have to reveal when they have a promotional agenda. A better line of defense would be to make that illegal, so these systems can’t be used as tools of influence, as tools of persuasion. Because they are potentially really powerful tools of persuasion.

We honestly don’t even know how powerful a tool of persuasion they can be. Five to 10 years from now, we will be interacting with these conversational AI systems. They will look photorealistic on our screen, like we’re talking to somebody over Zoom. They will also be influencing us not just by what they’re saying, but also by their facial expressions and how they look.

They’ll be designed to look very trustworthy. In fact, their age and gender and features will probably be chosen specifically for you based on what kinds of chat interfaces you were the most responsive to in the past. It’ll be reading your emotions in your facial expressions from your webcam. What chance do you have to not be influenced by an AI that is listening to what you say, sensing your emotions in your voice and in your face, and is making counterpoints very skillfully to persuade you in a particular direction? It’s not a fair battle.

This is very sci-fi now. You mentioned “Star Trek.” There’s also everything from “2001: A Space Odyssey” to “I, Robot” to Skynet. What are the chances of AI systems becoming independent of human control and taking over?

As for the possibility of these systems becoming sentient, I personally think it is absolutely possible. I don’t think it’s going to happen tomorrow, but I do think we’re talking about decades, not centuries. And I think we have an almost equally dangerous problem right now, even before that happens.

Because these systems are extremely powerful. Right now, the sentience can come from a person: a single bad actor who can wield an AI system to do a lot of bad things. So, yes, we could imagine that one day these systems are sentient on their own and have their own bad intentions. But we don’t have to wait that long. Bad actors already have bad intentions. And these technologies are now becoming freely available to everybody. Bad actors will use them, supply the bad intentions and wield all of these powers in very dangerous ways.


bpimentel@sfexaminer.com

@benpimentel

Benjamin Pimentel

Benjamin Pimentel is The Examiner’s senior technology reporter.