Has all this AI gone too far?

A while back, our marketing department started putting out some Facebook ads. You might have seen them. They asked for a photo of a confused and angry person with a remote. I dug up this one:

We all agreed that the image didn’t really “pop” but we liked the subject. Using AI tools, I was able to do this one in about 3 minutes.

The results aren’t perfect but they’re darn close. It got me thinking about how long it would have taken me to do this sort of thing back when I spent time doing it for a living, 30 years ago. My best guess would be about an hour, and I’m not sure the result would have been any better. Masking out hair is a pain in the neck and never works as well as you want. Or at least that was the mantra in the 1990s when all this was done manually.

I’m bringing this up because…

Look, you don’t need me to tell you about all the advancements in artificial intelligence lately. Everyone’s talking about it, including the AI-enhanced search engines themselves. Whether it’s AI-powered romance, AI in the courtroom, or any of the many other places you’ll see its impact, artificial intelligence is everywhere. Personally I’m seeing a lot of talk about AI image manipulation since Adobe Photoshop introduced AI features. So naturally I have to weigh in a little bit. Just a few months ago I told you I wasn’t worried about ChatGPT, and I stand by that, at least for now. But I have to say, the stuff I’m seeing with image manipulation is mighty impressive.

Yes, AI is going to cost jobs

Artificial intelligence is going to mean that some jobs are lost. Some industries will shrink by a lot. And just like other waves of automation, it’s the dreary jobs that we’ll lose first. If you’ve spent the last five years writing part descriptions for online catalogs, AI is coming for you. If you still draw illustrations on request for customers, I’d be polishing up that resume. Let’s say you’re an intern writing summaries of legal documents. It’s time to think about doing something else while there’s still time. Are you a photographer who does nothing but create stock photos for sale? I think you might want to consider another line of work.

But is it really bad that we’re losing these positions? I’d say yes and no.

A long time ago, there was something called “paying your dues.” In earlier days they called it an apprenticeship. The idea was that you spent time learning the basics of a trade by doing the boring stuff first. You learned the rules firsthand, and you practiced until you understood them. Then, you moved up the ladder. That was a charming idea from a different time. The truth is that the “apprenticeship” model went out 50 years ago or more. I wish it hadn’t, because there’s a lot of value in it. But AI didn’t kill it. It was already dead.

So, doing drudge work isn’t really the key to success anymore. In that sense, it’s good that it’s going away. Taking away the boring yet difficult parts of a process just makes it easier for creative people to blossom.

This isn’t the first time

Ever since there’s been technology, someone has complained that tech changes are going to cost jobs. I’m willing to bet that when people learned how to bake clay pots, someone complained that the person who put the pots out in the sun to bake was going to lose their job. None of this is new.

In the 19th century, factories came in and a lot of craftspeople lost their jobs. In the 20th century, automation meant that a lot of factory workers lost their jobs. By the early 21st century, easy access to information meant that a whole class of administrative jobs disappeared. Each time, there was something in our culture that changed for the worse. But things also changed for the better. Chances are the clothes you wear are of more consistent quality and they look a lot better than they would have in 1850. A 2023 car is so well built that it can last a decade easily, while one from 1973 turned into dust in five years. You may not have a secretary, a receptionist, or a file clerk, but you have the world’s repository of information in your pocket.

Point is, things change. It’s no fun when it’s your job on the chopping block, but let’s not pretend this is a new thing.

But none of that is the real problem

There’s a lot of paranoia today. I think it’s justified. People think that AI will get so good at so many things that there won’t be enough jobs for real people to do. They also worry that AI won’t be as good as it seems, and it will make some very big mistake that causes a very big problem. Suddenly the idea of the computer from WarGames starting a nuclear war by playing Tic-Tac-Toe doesn’t seem so far-fetched. Some of the leading names in computers are urging AI makers to slow down and try to understand what their creations are actually “thinking.” They’re worried too.

“Dreams” and lies are a big deal, I agree

The new trend in artificial intelligence is “generative AI.” Here’s the idea. You take a computer and you show it an insane number of things, telling it as much as you can about each thing. Then you let it chew on that until it’s able to create its own rules to try to predict what is coming next. This model actually works. The problem is that we don’t understand what the computers are actually learning. If we ask for the opposite of Elvis, we don’t know if the AI will give us Michael J. Fox. Here’s what ChatGPT said when I asked it:

Since Elvis was known for his charismatic stage presence, musical talent, and iconic style, the opposite of Elvis might be a person who lacks those attributes.

When I asked Bing Image Creator to show me the opposite of Elvis, here’s what it gave me.

I’d love to know how it got to the point where it thought this was the opposite of Elvis. The truth is that even the AI’s creators don’t know. That’s the problem. The process of how AI gets its answers isn’t really “knowable” by people. In most cases the AI itself isn’t able to detail the process it uses to generate an answer.
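That “create its own rules to predict what is coming next” idea can be sketched with a toy example. To be clear, this is nothing like a real neural network; it’s just an illustrative word-pair counter in Python, with a made-up ten-word corpus. But the objective it optimizes, guessing the next token from what came before, is the same one that the big generative models are trained on.

```python
from collections import Counter, defaultdict

# A tiny made-up "training corpus" (real models see billions of words).
corpus = "the king of rock and roll the king of pop".split()

# Count which word follows each word. These counts are the model's
# entire "learned rules" in this toy version.
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("king"))  # prints "of", the only word ever seen after "king"
```

Note that even in this ten-word toy, the “rules” are just a table of counts with no explanation attached; scale that up to billions of learned weights instead of a handful of counts and you can see why nobody can point to the line of reasoning behind any one answer.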

The other problem is that the whole purpose of generative AI is to create a complete answer with as little information as possible. That means that the AI is, by nature, going to answer some things wrong. It will do so in such a confident tone that it’s hard to know if it’s telling the truth. I asked Google Bard to “tell me about Stuart Sweet the blogger.” (There are other Stuart Sweets in the world.) Here’s part of what it said:

Sweet is a frequent speaker at industry conferences and events. He has also been featured in publications such as Popular Mechanics, PC Magazine, and The New York Times.

In addition to his writing, Sweet is also a consultant and trainer. He has worked with businesses of all sizes to help them improve their telecommunications systems.

I wish that those things were true. They’re not. But it sounds good, and the rest of Bard’s answer is a lot more accurate.

That’s the problem right now. You can’t trust generative AI.

What would happen if we let AI grow at the rate it’s growing?

I think people are right to be worried about the incredible growth in AI right now. In about 18 months, generative AI has gone from little more than babbling to being able to create totally realistic stories and images. In another 18 months of uninterrupted growth, it’s hard to know where it could go. Some of the work we’re seeing is both amazing and scary. Adobe’s technology will let you extend an image far beyond its original borders, and Apple claims to have a tool that will clone your voice in minutes. How soon before you get your first phone call from someone claiming to be your mother? When will you first get a spam text with an image of someone you care about in danger? I think we should be worried about these things.

Right now, as far as we know, most AIs are being trained on a fairly limited set of information. At some point someone will let an AI loose on the entire internet to learn what it can. What will happen then? It’s only a matter of time before we find out.

I do think that there should be more understanding of what AI is doing before we go much further. I don’t worry about a “terminator” type scenario so much as I worry that we just won’t be able to prepare for whatever comes from AI. The consequences could be small or large, and now is the time to be thinking about them.

If we do it right, it could be great

People once worried that typewriters would put people out of jobs. They worried that television would put people out of jobs, that industrial robots would put people out of jobs, and most recently that the internet would put people out of jobs. The result, every time, has been that the pool of jobs has increased because of new opportunities no one expected. I think the coming of generative AI will be like that too. It will be bumpy, but if we do our best to learn from the past and apply it to what we’re doing, I think we’ll be ok.

That is, until the AIs learn how to play basketball. Once that happens it’s all over.

About the Author

Stuart Sweet
Stuart Sweet is the editor-in-chief of The Solid Signal Blog and a "master plumber" at Signal Group, LLC. He is the author of over 10,000 articles and longform tutorials including many posted here. Reach him by clicking on "Contact the Editor" at the bottom of this page.