Driven by Commercialism and Momentum, Walking Into a Problem With Our Eyes Open
We’ve all known one of those people. Maybe a neighbor who regularly has foolish ideas like dealing with dead grass by spray painting it green. Or who thinks throwing gasoline on the barbecue is a great way to start it.
But would you want someone like that in charge of important decisions? Such as what our foreign policy should be, or which projects NASA should pursue, or even just how to lay out streets in some new part of your town? We are setting ourselves up for that as our reliance on AI increases.
A side note: Some of the pieces I write strive to offer important information, perspective, or strategy. Others just help us process a little of the many difficult things happening lately. This is a little of the first but mostly the latter.
You’ve likely seen the stories. The case of AI encouraging a boy to commit suicide. The case of AI leading a previously sensible man into the delusion that he could be a superhero. The AI company’s internal testers checking to be sure their AI wouldn’t hurt people to protect itself, only to find it did try to harm people’s reputations in a bid to avoid being shut down. That last incident reveals a key problem: the programmers don’t have control. So? Just put a command in it to never harm anyone, right? Those responsible programmers had already done just that. Unfortunately, the AI, like a teenager launching their independence, decided it didn’t need to obey that rule. Other scientists are studying AI to try to comprehend it, as if they were studying a distant planet they have limited understanding of. Wait a minute, didn’t we humans make this AI? But we don’t really know what’s going on under the hood? Nope.
So as we rely more on AI, the results will mostly be good, but we will sometimes get the equivalent of the green-paint guy directing important decisions.
But that’s okay because surely competent people will be checking those directions before implementing them, right? Two problems with that hope.
One is the kind of AI. It can be developed for a specific purpose, like analyzing medical images for signs of disease, and do that very well. But most of the AI assistance we’ll be using is of the kind whose starting goal in development was to be convincing. To seem human. To give answers that sound right. Also, in our commercial world, to please you so you’ll use it even more. That can produce answers that lead you toward feeling you’re a superhero. And even without the ulterior motive, answers can be delivered as authoritative even when the AI doesn’t really know, since it’s just doing its job: giving the best-sounding answer it can.
The other problem is humans. Oh, we humans. Some will get lazy, accustomed to thinking the AI is always right, so why bother checking? Others are incompetent, wouldn’t know whether the AI is right, and will use it as cover so no one discovers they don’t know.
To take that same problem a step further, there are the idiots among leadership. They should use only specific AI for specific kinds of work, not rely on the inexpensive retail AI of the everyday sort people use to coordinate their busy schedules or to get advice on dealing with some troublesome coworker. But idiot leaders will turn to the “just give them an answer they’ll want to hear” AI. They’re the same leaders who would listen to social-media rumors rather than to dedicated, knowledgeable people. The ones who have stopped research that would save lives because some social-media personality, who probably also paints their grass green, said so.
To put it bluntly, stupid leaders. “Stupid”? Not “ignorant”? Well, when real information is all around in plain sight, and ways of sorting solid from suspect information are pointed out regularly, but they refuse to choose any of that, then, yes, it is still ignorance, but it’s willful ignorance. And, really, what’s the difference between willful ignorance and stupidity? The only time it isn’t stupidity is when it’s for intentional harm, promoting false information for corrupt ends.
Choosing better leaders is part of what will help. So is refusing products and services that put AI in critical places. (Do you want the computer in your airliner to remain a tool that assists pilots as they direct, as it mostly is now? Or an AI system that takes over the pilot’s job much of the time but gets it wrong occasionally?) Also, insisting on AI that doesn’t con us when it doesn’t have the answer. That’s tough to do in a commercial world where the first rule is to draw us ever further in, in pursuit of maximum profit, as on commercial social media today.
Also remember that AI is not a person, not an expert, just a tool we can use to help us. That’s something else we could insist on: that AI not interact like some appealing human, but rather in ways that keep us always aware we’re dealing with a machine. A machine created to be our tool.
If we don’t want the equivalent of the green-paint guy making big decisions, those are some steps that could help him keep his paint ideas to himself.