
The Rational & Irrational Fears of Artificial Intelligence

May 17, 2023 by Paul Byrne

Tags: artificial intelligence, AI, automation, customer service, analytics, cybersecurity, FUD


Most of the fear of AI that I am aware of comes from the following predictions:

  • Disruption of the labor market
  • Increasing disparity between the haves and have nots
  • AI intentionally or accidentally harming humans

Supporters of AI seem to agree those risks exist but ask, “What happens if we don’t develop AI to address our existential issues?” I somewhat agree with that, so I won’t cover it here.

As a product designer and software developer who works with AI every day, I have a different take that I hope you will find valuable.

Let’s address each one of these legitimate predictions about AI and then talk about why much of the fear of AI is irrational.

digital art showing a silver android emanating blue light and surrounded by water droplets

Disruption of the Labor Market

No surprise here: AI has already started disrupting the labor market, as has every other economic growth engine since the Industrial Revolution. While some, perhaps many, jobs will be displaced, new ones will be created.

Is losing jobs to AI a real concern right now?

It’s been eight years since Elon Musk first stated his cars would be able to drive themselves by 2018. While impressive progress has been made, the closer we get to cars that truly drive themselves, the more effort each little bit of progress takes.

Why is that? I compare a complex goal to a tree. Creating the trunk of the tree takes a lot of effort and when you do, it looks like amazing progress.

The trunk itself can be very useful. However, a tree is not a trunk, and a trunk provides neither shade nor acorns. A natural tree branches out in a self-replicating structure above and below ground until you get leaves, thousands of them, each emanating from a complex branching system. The tree must also have a root system, or you will have to pay a team of people to hold it up and make the trunk appear to be a tree.

Complex engineering goals are like the tree. As you progress, problems multiply and branch off many times before you get to the leaves, and each branch may require its own solution or a variation of one. In my view, neural nets and transformers may have gotten us past the trunk and to the first set of branches, but they have produced leaves only in very specific instances.

Chances are, we’ll need multiple new breakthroughs before getting to tree-like status.

Some would argue that we will reach a point where the AI can write, deploy, and maintain its own code. Let me assure you, as a product designer and software developer who uses AI every day: we have a long way to go before that happens. Self-replication may come out of nowhere and take us by surprise, but I doubt it. I believe we will know when we’re getting close, because the system will evolve in that direction: it will first improve its own code in a very unsophisticated way and only gradually begin to make a difference.

My point is that it will take time before jobs can be fully replaced by an entirely AI-constructed solution. That said, jobs and opportunities will adjust and evolve to suit the changing needs of the world. Just as the world has evolved around the internet’s disruptive capabilities, society will evolve and adapt to AI’s.

Increasing Disparity Between the Haves and Have Nots

In case you hadn’t noticed, the gap between the top 1% and the rest of us has been widening for centuries. At this point, we’re just blaming the new kid on the block. Whether it is AI or some other innovation (or a combination of many), this is going to happen. It is truly a problem for society, and a difficult one to solve. However, it is not a new problem.

Technology allows a single person to do more work. In economic terms, that person creates more value, which economists measure as GDP. For example, when I was a kid, my trash was collected by a man driving a truck and two men riding on the back. Now, my garbage is collected by one man in a truck with a robotic arm that grabs my garbage bin and empties it… even shakes out the sticky stuff. The driver works alone and never leaves his truck.

The driver is paid a little more, but the company produces far more output (the robot arm that grabs my bin is faster than the guys riding on the back of the truck ever were) with one-third of the employees. By innovating, the company earns more money, at least until the competition catches up. That money goes to the people at the top: executives, shareholders, and so on. The driver makes a little more money, but the company makes a lot more profit. This is the effect that drives wealth upwards, and I don’t expect the pattern to change any time soon; the cycle will continue with or without AI at the forefront.
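
To make the mechanism concrete, here is a back-of-the-envelope sketch in Python. Every number in it (crew sizes, wages, bins per day, revenue per bin) is invented purely for illustration; only the pattern matters: output per worker jumps, wages rise a little, and the surplus flows upward.

```python
# Back-of-the-envelope illustration with invented numbers.
# Before automation: a 3-person crew. After: 1 driver plus a robotic arm.

REVENUE_PER_BIN = 2.00  # dollars per bin collected (hypothetical)

before = {"workers": 3, "bins_per_day": 600, "daily_wages": 3 * 200}
after = {"workers": 1, "bins_per_day": 900, "daily_wages": 1 * 240}  # ~20% raise

for label, d in (("before", before), ("after", after)):
    revenue = d["bins_per_day"] * REVENUE_PER_BIN
    surplus = revenue - d["daily_wages"]  # what flows to executives and shareholders
    per_worker = d["bins_per_day"] / d["workers"]
    print(f"{label}: {per_worker:.0f} bins/worker, "
          f"wages ${d['daily_wages']}, surplus ${surplus:.0f}")

# before: 200 bins/worker, wages $600, surplus $600
# after: 900 bins/worker, wages $240, surplus $1560
```

In this made-up scenario the driver’s pay rises 20%, but the company’s surplus more than doubles; that asymmetry, repeated across an economy, is what pushes wealth upward.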

Historically, the gap between the rich and the poor has always existed.

Photograph depicting two grown men posing in Roman period costumes in a backyard

A look back in time, however, shows that this problem ebbs and flows on a millennial scale. Rome’s Crassus controlled wealth comparable to the entire budget of the Roman Republic. It would be like having an American whose wealth was more than $5 trillion. Julius Caesar, a populist, eclipsed him, and it would be centuries before someone else equaled Crassus’s influence.

It is interesting to note, however, that his wealth translates to only about $20 billion in today’s dollars. The two figures differ because there was simply less wealth in the world then and spheres of influence were less than global: his share of his own economy was enormous, even though the absolute amount converts to a comparatively modest sum.

My point is that we are unlikely to solve a problem that grows with our very ability to create more value through improved productivity. It comes with the territory. No “free” society has solved it well. Attempts usually involve a lot of death and open the door to grifters who want to replace the people at the top with themselves rather than smooth things out (think Stalin, Hitler, Julius Caesar). I believe this will ebb and flow with or without AI.

AI Harming Humans

AI-generated digital art depicting a silver android with red eyes

Most of our fear of AI stems from decades of media: movies like 2001: A Space Odyssey, in which robots and AI turn against humans. Hollywood and media outlets sensationalize those scenarios, leaving our imaginations little room to dream up what might actually become of AI.

Personally, I love reading stories about chatbots telling their handlers how they will destroy humanity. If AI becomes so super-intelligent that it could destroy us, I believe it will simply find us uninteresting, unless it sees us as a resource for achieving its goals.

The most advanced AI systems we have are large language models (LLMs) like ChatGPT/GPT-4, which are incredibly bad at precision tasks, especially writing code. LLMs do well when an assignment’s answer is judged subjectively, like writing an email or creating a picture nobody has seen before. They perform horribly at tasks with hard constraints.

In any case, the apocalyptic AI-versus-humans view rests on a lot of big assumptions, and I don’t buy any of them. AI has no will of its own. What if a malicious operator trains an AI to battle humanity? That only plays out in a scenario where there is a single AI, but there are many, many instances of AI. Perhaps they will battle each other until one emerges victorious and takes over the world.

For example, I had heard that ChatGPT was good at setting up schedules for lots of meetings. However, once I added constraints, including compacting the schedule to minimize gaps between meetings, it started scheduling overlapping meetings with the same people. I kept pointing out the problems, and it kept making the same mistakes over and over again.
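
For contrast, here is a minimal sketch of how a conventional, deterministic algorithm handles that same hard constraint. The meeting list and slot length below are hypothetical; the point is that a dozen lines of ordinary code never double-book an attendee, which is exactly where the LLM kept failing.

```python
# A minimal greedy scheduler: place each meeting in the earliest time
# slot in which none of its required attendees is already booked.
# Meeting data and slot length are made up for illustration.

SLOT_MINUTES = 30

MEETINGS = [
    ("Kickoff", {"alice", "bob"}),
    ("Design review", {"bob", "carol"}),
    ("1:1", {"alice", "carol"}),
    ("Retro", {"alice", "bob", "carol"}),
]

def schedule(meetings):
    busy = {}   # attendee -> set of slot indices already taken
    plan = []
    for name, attendees in meetings:
        slot = 0
        # Advance past any slot where an attendee is already booked --
        # the hard constraint the chatbot kept violating.
        while any(slot in busy.get(a, set()) for a in attendees):
            slot += 1
        for a in attendees:
            busy.setdefault(a, set()).add(slot)
        plan.append((name, slot))
    return plan

for name, slot in schedule(MEETINGS):
    print(f"{name}: slot {slot} (starts at +{slot * SLOT_MINUTES} min)")
```

A greedy earliest-slot strategy like this is not guaranteed to produce the tightest possible schedule, but it is guaranteed to respect the no-overlap constraint, and that kind of guarantee is precisely what an LLM cannot give you.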

As far as writing code goes, ChatGPT is VERY limited. I’ve seen YouTube videos claiming that ChatGPT/GPT-4 and Copilot work at a senior-developer level. While they will write code when you ask, and some of it is correct, they consistently fail at very basic coding tasks. Worst of all, they will write code that looks correct but will never work. I would rather work with a junior developer who tells me they can’t figure it out!
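
To show what “looks correct but doesn’t work” can look like, here is a hypothetical example of my own (not actual ChatGPT output): a classic Python pitfall that reads perfectly naturally yet misbehaves, one flavor of the subtle bug that plausible-looking generated code can contain.

```python
# Reads naturally, looks correct... and is subtly broken.
def add_tag(tag, tags=[]):   # BUG: the default list is created once, at
    tags.append(tag)         # definition time, and shared across every call
    return tags

print(add_tag("urgent"))   # ['urgent']
print(add_tag("billing"))  # ['urgent', 'billing']  <- 'urgent' leaked in!

# The idiomatic fix: use None as a sentinel and build a fresh list per call.
def add_tag_fixed(tag, tags=None):
    if tags is None:
        tags = []
    tags.append(tag)
    return tags

print(add_tag_fixed("urgent"))   # ['urgent']
print(add_tag_fixed("billing"))  # ['billing']
```

A human reviewer who knows the idiom spots this in seconds; code that merely looks right is worse than code that openly fails.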

Keir Dullea as David Bowman in 2001: A Space Odyssey with the text 'Open the pod bay doors, HAL'

Bottom line… Are fears about AI rational or irrational?

Fears about job losses and an increasing wage gap between classes are both recurring historical problems that come and go with each phase of technological and societal evolution. Long story short, these problems are going to exist with or without AI. Any real danger from artificial intelligence will likely come from human error, misuse, and misguided dependence on intelligent systems.

Many believe that AGI (artificial general intelligence) will happen soon. “Artificial General Intelligence (AGI) refers to a theoretical type of artificial intelligence that possesses human-like cognitive abilities, such as the ability to learn, reason, solve problems, and communicate in natural language.” (Forbes, March 28, 2023). My opinion is that the current models behind OpenAI products like ChatGPT, and others like them, are absolutely nowhere near that finish line. You can read more about the “last mile” problem for AGI in our other blog post, The Last Mile Problem for Artificial Intelligence.

At the end of the day, it’s an absolute certainty that people will use AI to produce both good and evil outcomes. We need to continue innovating while remaining skeptical and vigilant about these systems and their place in our lives. AI tools are amazing and, when used and developed properly, can benefit mankind.

If we allow AI to cause the end of humanity, well, frankly, we will have deserved it.
