My thoughts about AI in 2024

About a year ago, I let ChatGPT write an article for this blog. It wasn’t a bad article, but I decided at that point that the AI bot wasn’t quite ready to replace me. Good thing, too. As far as I know, ChatGPT doesn’t need to make a mortgage payment, and I sure do. But that was a year ago. Has AI changed a lot since then?

Answer: yes and no

Obviously, there have been some massive improvements in AI in the last 12 months. A lot of that progress has been in image generation. It’s now possible to generate AI images that look utterly realistic. Not only that, it’s become routine to let AI create clip art for you, something that seems to have killed the already moribund clip art industry. (Moribund? Yeah kids, look it up.) AI will probably kill the stock photo industry in the next year too, with the exception of licensable photos of real places.

But, at least from my perspective, AI has so far failed at the one thing I thought it would excel at: gathering and presenting information in written form. You can ask an AI a question using Windows’ own Copilot, or any other GPT-like engine, and you’ll get an answer. Often, the answer will be all you need to know. But even the best AI seems to fail when you ask it something that requires nuanced detail. You’ll get everything from slightly wrong answers to out-and-out lies. Wikipedia may not be the perfect resource, as it too is often wrong, but it’s still generations better than AI-generated text.

Why is AI’s ability to answer simple questions still so bad?

I tend to think that AI’s relatively slow development toward being a reliable knowledge resource is on purpose. In order for an AI to know something, it has to learn it from somewhere. And often, the best place to learn something is a copyrighted work. We’re just starting to learn where a lot of these robots get their information, and there are a few lawsuits out there claiming that what the AI is doing is copyright infringement.

That’s led to a general slowdown in the growth of “knowledge-based” AI, as most of the big players aren’t willing to turn AI loose on the open internet to learn things that just might be copyrighted. There are obviously going to be some new laws written about how AI can get its information, and I think they’re going to hinge on the basic difference between how humans research things and how AIs do it. We tend to believe that humans can look at a large mass of information and come up with a new insight. We also tend to believe that robots can’t do that. Those beliefs may be wrong, but they seem to be at the center of the legal battles.

The new worry: increased reliance on something that’s not ready yet

When I tell people that I don’t trust AI-generated answers, they usually disagree with me. I’ve lost count of the number of times I’ve said something and someone else has offered “Well, ChatGPT says…” as a rebuttal. There are a lot of people who already trust AI completely, and that’s what scares me in 2024 as we see AI infiltrate more of our devices.

AI can do some amazing things. Its ability to correlate information is remarkable. That can help us develop drugs more quickly, help doctors diagnose weird illnesses, and even help us get out of trouble by finding relevant case law that no one remembered. You can also use AI to custom-build voting maps that will help you win, or to create compelling arguments that just don’t hold up to scrutiny. Like any tool, it can be used for good or bad.

But that’s not the part that really worries me. The worry is that we’ll trust AI so much that we’ll stop questioning it. In early 2024, AI still makes a lot of mistakes. We still don’t even know if it will make good judgments when put to the test. Yet the recent CES in Las Vegas made it clear that AI will be in pretty much everything very soon.

What happens when you trust AI to cook your meals perfectly, but it can’t detect when something is on fire? What happens when an AI-controlled civic water system decides it can’t supply safe water to everyone, so it just cuts off a whole neighborhood? We haven’t been able to train AI with our values yet, and we don’t give it enough information to make good decisions. It seems like most folks don’t realize that.

Where will AI go in 2024?

Well, honestly, I don’t know. I think that, as with other “magical” innovations like CGI, Photoshop, and even 1980s-style desktop publishing, the general public will get a better sense of AI fakery. We’d better get that sense fast, though, because people could very easily use AI to manipulate world events this year.

It seems to me that in 2024 we’ll get some clarity about what AIs will be allowed to learn, and how. That’s a good thing, of course, unless you disagree with what the courts decide. Not only that, you have to hope that the judges and juries in these cases aren’t using flawed AI to help make their decisions.

I’ll admit that I worried last year that AI would take my job. I contented myself with believing it wasn’t ready. This year, I’m a little more worried that AI will take my world. That sounds alarmist, and that’s on purpose. We all need to be a little more vigilant as this amazing new tool takes shape. If not, we could end up with a lot more than just a burnt holiday dinner.

About the Author

Stuart Sweet is the editor-in-chief of The Solid Signal Blog and a "master plumber" at Signal Group, LLC. He is the author of over 10,000 articles and longform tutorials including many posted here. Reach him by clicking on "Contact the Editor" at the bottom of this page.