The promise of AI doesn’t hold up in reality

People talk about AI constantly these days, and for good reason. There’s been a tremendous amount of innovation in artificial intelligence and machine learning over the past several years, to the point where “impressive” doesn’t quite cover it. It has reached a level of maturity where people in a number of industries are scared for their jobs, and that’s understandable. ChatGPT, for example, has copywriters (agencies and freelancers alike) concerned that they’re either already redundant or very soon will be.

But, beyond the glitz and glamor, AI isn’t really what so many people think it is. The truth of AI in its current state isn’t quite as dire (or exciting) as some people make it out to be—and that’s almost certainly going to remain the case for the lifetimes of everyone reading this today.

  1. AI is an additive, not a replacement

Technology is constantly improving; it always has, and it always will until the human race goes extinct. One of the hallmarks of civilization has been the level of technological progress achieved, from the advent of the aqueduct all the way to the microchip. And when computers first became standard office equipment, a lot of people feared for their livelihoods back then, too. After all, if a computer can automate accounting or word processing, what will become of accountants or typists? (Please note: there are still accountants and typists, and the accountants, at least, are probably more numerous now than they were forty years ago.)

To use a military term, the ubiquity of computers turned out to be one of the greatest industrial force multipliers in human history. They made it possible for individuals to produce far more than they otherwise could, in less time. Global productivity is higher than it has ever been, in large part thanks to automation made possible by computerization. And that’s because the people who embraced computers saw them as an additive tool rather than a replacement worker.

AI is, in many ways, merely the next step. It is another force multiplier for productivity, one that will allow an individual to increase their output or make a process more efficient. It can cut out steps in a longer process, but it cannot (and, unless I’m completely missing something, will not) handle an entire process on its own. Human input and guidance are still necessary, and likely will remain necessary for decades, unless there is some tremendous leap forward that hasn’t yet materialized.

So no, AI can’t replace a copywriter; it can’t replace a designer; it can’t replace a worker. That’s not what it’s for, and businesses really shouldn’t look at it that way. AI is a tool that can add a layer of automation to a process, or help a worker produce something faster, or better, and nothing more. That doesn’t make it less impressive, but it should make it less scary to most people.

  2. “AI” isn’t actually “intelligent”

Anyone who’s played around with ChatGPT can tell you that it’s impressive, it’s efficient, it’s fast… and it frequently produces some genuinely entertaining errors. That’s because it’s not really designed to produce finished products, and it very likely will never reach that level.

AI in reality isn’t the kind you see pop up in science fiction; Isaac Asimov wasn’t really thinking about ChatGPT or anything like it. That’s because the AI that’s available now isn’t truly independent, and no matter how many alarmists issue dire warnings about Skynet going online, it never will be.

The “artificial intelligence” that’s available on the market (and likely the next dozen or two iterations of it) isn’t true AI; it’s machine learning. The two really are different things, regardless of the similarities between them. Calling something “AI” is mostly marketing; at its core, it’s typically an algorithm that’s been trained to recognize patterns in data.
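
If you want to see what “trained to recognize patterns” actually means in practice, here’s a minimal sketch. It uses scikit-learn, a general-purpose machine-learning library, purely as an illustration; it is not what ChatGPT is built on, and the dataset and model choices are just placeholders.

```python
# A minimal sketch of "an algorithm trained to recognize patterns": a model
# learns to label hand-written digits purely from example data, nothing more.
# Assumes scikit-learn is installed; this is an illustration, not ChatGPT.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=2000)
model.fit(X_train, y_train)  # "training" = fitting weights to known examples

# High accuracy on a pattern it has seen thousands of examples of,
# zero understanding of anything it hasn't.
print(f"Test accuracy: {model.score(X_test, y_test):.2f}")
```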

To go back to ChatGPT as an example (mostly because it’s the one most people are familiar with), it’s an impressive but limited engine, and its output is based on the data it’s been fed. As an experiment, ask it about current world events; it will tell you that its training data only goes up to January of 2022. That alone demonstrates the limitations of the application, however impressive it may be.
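
If you’d rather run that experiment through code than through the chat window, a minimal sketch might look like the following. It assumes the official openai Python client and an OPENAI_API_KEY environment variable; the model name is a placeholder, and the exact wording of the reply about its knowledge cutoff will vary.

```python
# Minimal sketch: ask the model about current events to surface its knowledge cutoff.
# Assumes the official `openai` Python client is installed and OPENAI_API_KEY is set;
# the model name below is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "What major world events happened this week?"}
    ],
)

# The reply typically explains that the model only knows about events up to its
# training cutoff, which is exactly the limitation described above.
print(response.choices[0].message.content)
```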

To boil it down to the simplest possible terms: AI cannot create anything that’s truly new. It can produce something that’s similar to what’s come before; it can put an interesting spin on an old idea; it can write a thousand words on events from a hundred years ago—but it can’t work on something that it’s never encountered in its training data.

There’s a world of difference between intelligence and knowledge. A dictionary is full of knowledge, but there’s no intelligence in it; it’s the person reading it who applies intelligence to the knowledge it provides. The same thing applies to every application of artificial intelligence on the market today: it “knows” a lot, but it’s not particularly good at applying that knowledge, at least not without a lot of guidance.

  3. AI is a first-draft tool

A lot of businesses are making a huge mistake with AI. They see ChatGPT produce a blog post for them, and think “Hey, why do I need all of these copywriters? Let’s fire them to save money!” And maybe that’s good enough for some businesses, but it’s not good enough for those who know what they’re doing—which may sound insulting, but it’s the truth. That’s because AI is a terrible tool for producing finished products.

If you were to go to ChatGPT and ask it to write a thousand words about, for example, why the Suez Canal is important (a current world news topic that’s impacting a tremendous number of people across the planet), it could probably do that. But if a dozen, a hundred, or a thousand people asked it for that, it would produce what amounts to the same thing each time.

As with the point above about AI’s inability to create things that are truly new, AI tends to function the same way for just about everyone unless the prompting is extraordinarily specific. You can ask it for a new tone, or a new spin, or feed it information it doesn’t already have, but the engine behind it is still the same. Have you ever watched a movie without knowing anything about it and been able to figure out who directed it just from what’s on the screen? I can spot an Edgar Wright movie from a hundred yards away (Scorsese, Tarantino, Lynch, and a few others too, not to toot my own horn). AI suffers from the same problem: its output has a recognizable sameness.

The right way to use AI for a business is not to attempt to have it make a final product; it’s to have it produce a first draft.

To stick with the ChatGPT and Suez Canal example above, instead of asking it to write a thousand words on the topic and then publishing that, a better use would be to have it outline a post, or summarize what it knows about the topic, or write a rough draft that you can build on; and those are just a few ways to use it more effectively. Your voice and opinions matter, and they’re what make what you produce different from everyone else’s. The post you’re reading right now is mine and mine alone; it wasn’t touched by AI (though that might make for an interesting exercise).
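
As a rough sketch of that workflow (again assuming the openai Python client; the prompt, model name, and helper function are illustrative placeholders, not a recommendation), asking for an outline and a rough draft rather than a finished article might look like this:

```python
# Sketch of the "first draft, not final product" workflow described above.
# Assumes the `openai` Python client and OPENAI_API_KEY; the model name and
# prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()

def rough_draft(topic: str, words: int = 1000) -> str:
    """Ask for an outline plus a rough draft to edit, not something to publish."""
    prompt = (
        f"Outline a blog post about {topic}, then write a rough draft of about "
        f"{words} words. Flag any claims that should be fact-checked."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# A human editor then adds voice, opinion, and verified facts before anything ships.
print(rough_draft("why the Suez Canal matters to global shipping"))
```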

The point is that AI is at its best when it gives you something to work from, so you’re never starting from a blank canvas or a blank page. The biggest roadblock to anything creative is, in my experience, what I call “Blank Page Syndrome.” Staring at an empty screen and suffering from a bit of analysis paralysis is extremely common; it can be difficult to start from scratch. AI can give someone a place to start, a way to get the ball rolling.

In other words, AI is good at giving you something to improve, rather than something to publish. Even setting aside the issue of AI-generated content of all types being oddly similar and not really “new,” in its current state it isn’t capable of producing something that’s polished and ready to go.

Anyone who’s sat through a presentation about AI will have seen some truly astounding examples that AI idealists point to, proudly saying “Look at what AI is capable of!” But the reality is that those are always cherry-picked examples, the standout results of what was likely a substantial amount of labor, much of it on the part of the human doing the prompting.

We’ve all seen the “funny” results that AI-generated art produces, and many will point and chuckle at how wrong the details are: six-fingered humans, for example, or oddly distorted faces, or something similarly off.

But you know what? That’s often a really good starting point for a designer. Someone talented can take that piece and improve upon it, making the changes to polish it to the point where it’s not “good enough,” it’s just good. And that in and of itself is impressive, and, in my opinion, the true value of AI in its current state: it’s not producing something you can rubber-stamp and publish, but it’s giving you something that’s 50%, maybe even as much as 90%, of the way to “finished.” And that allows individuals and businesses alike to get more done in less time.

So let’s turn down some of the volume about AI eliminating jobs. The businesses laying off their talented employees in favor of AI are going to very rapidly discover that 50-90% of a finished product isn’t a finished product. By the time they realize their mistake, I’m not sure all of them will be able to recover. So don’t make that mistake in the first place; be realistic about what AI is capable of, and use it appropriately. You’ll find that it’s a good tool, but a poor replacement for a talented specialist.