AI: Savior, Satan or Both?

Generative AI is a wonderful font of drama, for it offers up equally compelling visions of Utopia or a descent into Satan’s lair.

Savior or Satan, or both? Let’s start with the fact that AI is amoral; it has no intrinsic moral compass. A compass of some sort might be included in its programming, or it might not. Or the moral compass coding could be modified if it starts crimping profits.

In all cases, users have no idea what limits have been encoded, if any, and no idea if the limits actually work or if they’re easily bypassed.

All generative AI is a black box, and that’s why it makes such grand drama: it’s the character in the play that can’t be pinned down, the character that’s inscrutable yet helpful, but with agendas that are invisible. Trusting this character is the plot point that sends the narrative flying.

Savior or Satan, or both? Consider these recent articles as data points.

My mother fell in love with an A-list celebrity she met online: the video is so convincing she refuses to believe it's fake (via Richard M.)

‘One day I overheard my boss saying: just put it in ChatGPT’: the workers who lost their jobs to AI.

So a deepfake Owen Wilson is setting the hook for some as-yet-unknown scam. Younger, more tech-savvy viewers easily spot the evidence that it's fake (never mind the obvious clue that famous folks tend to have more on their minds than chatting up regular folks online), but lonely, credulous older folks are easy pickings.

Estimates of how much web traffic is malicious range from 7% to roughly one-third. Clearly, the quantity of malicious traffic seeking to exploit vulnerabilities in systems and human nature is soaring: just look at your spam folder, your SMS feed, and your phone messages.

Those of us who first entered the online realm back in the dialup modem days are nostalgic for the era when malicious traffic wasn't an issue demanding constant shadow work: deleting it, blocking it, unsubscribing from it, and so on.

AI is a willing helper in all this, which is sobering: good old AI can be prompted to comb through thousands of lines of code to identify vulnerabilities human programmers missed, and to assemble technical tricks that make scams ever harder to detect.

There are accounts of police HQ phone numbers being spoofed, spoofed voices of frantic relatives reporting they’ve been kidnapped, and various other very realistic and seemingly authentic scams.

"Hello, this is your credit card company's security-fraud detection team, and we need to verify some information." This is a rich vein of irony, isn't it? To lower our shields, the scammers claim to be the scam-detection team working hard to protect you.

The wolf isn’t just cloaked in a sheep costume: it is the sheep. Go ahead and touch it, it’s real.

Could AI be helping those hijacking servers and then demanding ransom payments lest the server be wiped clean? Of course AI is helping: hand extremely powerful tools to everyone with an Internet connection, and what do you reckon will happen?

AI mimicry of voice and images is already good enough to fool us, and it will only improve, as evidenced by the examples of people who have lost their jobs in voice and image-related lines of work.

I discern an unhelpful asymmetry in all this: the scammers and blackmailers have enormous incentives to deploy AI maliciously (Satan), while the victims don’t have the same incentive to invest heavily in hardening their defenses against malicious attacks until it’s too late (Savior).

If Grannie or Grandpa can be scammed out of $5,000 by AI bot-generated deepfakes, that’s quite an incentive to send out a few million deepfakes. Meanwhile, what’s the incentive for the average online user to spend serious time and money hardening their defenses against such persuasive scams?

It's very low, as we tend to over-estimate our BS detectors, and in the case of the elderly, under-estimate our cognitive decline.

AI also helps identify the most gullible and vulnerable targets. The elderly who have already been conned out of real money by bogus non-profits claiming to support police officers and veterans are prime marks: "Hello, beautiful, this is Owen Wilson, and I'm thrilled to find your amazing self online to offer you an exciting job at Warner Brothers studio. Yes, in Hollywood."

Who has an incentive to spend the enormous quantity of time, effort and money required to deploy AI at sufficient scale to block 95% of the malicious traffic? As far as I can tell, the answer is no one.

Big Tech is, well, too big to care. Why waste money limiting malicious traffic?

As for political action that will actually move the needle on limiting malicious traffic: if Grannie or Grandpa offer $100 million in campaign contributions to the pay-to-play heavy hitters, well, yeah, sure, some watered-down verbiage will be duly added to the next 800-page bill working its way through the acid-bath of Congress. But the political class has zero base interest in limiting malicious online traffic.

In other words: AI Satan is extremely motivated and well-funded, while AI Savior is like the homeless guy in a dirty white robe who rouses himself every once in a while to help an elderly person dodge the traffic as they cross the street.

The upside to generative AI is: we’re firing a boatload of expensive employees, yowza!

There's also a troubling asymmetry to this upside, as those being fired don't have the same power as employers to generate net income with AI. The employer just added $60,000 to the bottom line for every employee replaced, while the unemployed worker has no equivalently easy way to fire up an AI bot like Claude and immediately start earning $5,000 a month.

Yeah, sure, there are posts claiming to use AI to print money, but 1) are these real, and 2) are these techniques scalable, meaning the 1 million workers replaced by generative AI can all use this same grab-bag to replace the wages they lost to AI? There is zero evidence that any such DIY AI grab-bag makes bank at scale, and does so in a durable fashion.

There is more to say on this, but that’s a topic for another post.

Nobody watching this drama has any idea of the eventual consequences this destruction of trust will unleash. For that is the only rational response to malicious AI: trust nothing that isn’t a wet signature, signed in your physical presence. Literally everything else can be spoofed.

I might open a link and find… well, I’d rather not say. Why give anyone ideas they haven’t already seen on a screen?

Only the paranoid survive. Andy Grove’s advice is more applicable than ever before.

For there will be second order effects of the erosion of trust: consequences unleash their own consequences.

I've already discussed one option: a heavily moated "Platinum" Web that only accepts authentic individuals and relentlessly vets/screens every user: random retinal scans, the works. Like a Platinum card, it will cost serious money. For what is trust worth? Far more than we seem to be able to imagine at this moment.

Hello, this is Owen Wilson with a special offer to you, yes wonderful special you, to join the exclusive Platinum Web.

CHS NOTE: I understand some readers object to paywalled posts, so please note that my weekday posts are free and I reserve my weekend Musings Report for subscribers. Hopefully this mix makes sense in light of the fact that writing is my only paid work/job. I am grateful for your readership and blessed by your financial support.
