The Challenge of Artificial Intelligence

By Roland Maxwell

The best lack all conviction,

While the worst are full of passionate intensity.

~ from The Second Coming, by W.B. Yeats

As soon as humans develop a new tool, another group of humans leaps in and appropriates it for crime. Artificial Intelligence was hardly out of its shrink-wrap before it was being used as an engine for disinformation, sexual exploitation, fraud, and theft. And in between, there's a mass of enthusiastic bandwagon jumpers, and others anxiously trying to use the new thing so as not to be left behind.

There is part of me that thinks the Amish may have got it right: you can’t touch these technologies without becoming corrupted. But before I head off to tend the herb garden in the morning, and decorate the Book of Kells in the afternoon, I want to take a good hard look, and work out whether we have a long enough spoon to eat with this particular devil.

What is AI?

One of the problems with all new technologies is the bow-wave of hype they generate. Labels are often appropriated from the fabulous works of Sci Fi writers, or propounded in ways calculated to attract headlines and therefore funding. So we need to start with a dispassionate analysis of the tool we are proposing to use.

Artificial or Imitation?

I have real concerns with the label “Artificial Intelligence”. This argument has been raging since the 1950s, with tech luminaries like John McCarthy arguing that machines can have beliefs, and philosopher John Searle arguing that they cannot, because machines are not conscious. (See also Wikipedia’s History of artificial intelligence.)

When we use the word “intelligence” we assume that there is a reasoning being at the end of it, who understands its own utterances, has life experience, and can reflect on the validity and outcomes of its thinking. No such being exists at the end of an “Artificial Intelligence” response. 

I would sleep more easily if we called it “Imitation Intelligence”. That would alert us to the absence of consciousness at its core. Alan Turing originally called his test (to determine whether a machine’s processes were on a par with human thought) “the imitation game”. There are many PhDs still to be written on this subject, and people much smarter than I have argued both sides. At least you know where I stand.

Hidden bias

But where does the AI stand? The simple, command-line style of interface conceals the biases of the people constructing the AI, and of the Internet itself. Buried in any massive codebase there will be the prejudices of many Elon Musks. The major powers inevitably dominate in sheer volume of content generated, and beneath that there are the content wars, with state actors trying to undermine and deceive their ideological opponents. Add in pornographers, scammers, and all the crazies and conspiracy theorists diligently churning out content, and you realise that the field which AI tills is as full of harmful untruths as it is of credible information.

Lack of experience

AIs such as ChatGPT are search engines on steroids, or call them “large language models” if you prefer. These models are built by crawling the Internet. That’s the extent of it.

Think of your own life. There is so much that informs your sense of being, and your judgements, that comes from lived experience - your interactions with family, friends, colleagues, pets; your gender; your cultural background; moving around urban and natural environments; your experience of having a body; your favourite piece of music; your moments of regret, euphoria, pleasure, and despair. In short, Artificial Intelligence can generate a plausible review of your favourite restaurant, but it cannot eat the food.

Hallucination

Another aspect of the lack of experience is now known in AI circles as “hallucination”, where the AI comes back with a result that is bizarre or aberrant. If you have the understanding to realise that an AI result is a hallucination, that’s good, and you will reject it. But if you are relying on AI to provide answers that are outside your expertise, there is real danger. And even if you do have the knowledge, you may still be tricked by a plausible answer that has a hallucination buried within it. See the IEEE’s interview with Gary Marcus. And how will you fare if an undeclared AI is built into a process you are using?

Loss of Sources

With an Internet search, I have some access to the origin of the search returns. I can usually see which ones hail from an authoritative source like the Bureau of Meteorology, and which are from some florid flat-earther running a blog out of his garage in Wyoming. AI vitamises all of its search results into a beguilingly coherent response which deprives us of the ability to assess the sources.

Theft

That loss of sources brings us to another problem - the massive appropriation of other people’s work, without acknowledgement or remuneration. AI is, by design, plagiarism on a grand scale. Humans can struggle to draw a line between fair use and theft. AI doesn’t even try.

Scarlett Johansson has sufficient public profile and wealth to challenge the appropriation of her voice by OpenAI, but countless other personalities and creatives do not have the means to confront a multi-billion dollar industry. Our ABC has written a useful article on the Johansson case.

But Boojum is still using AI?

The answer is “Yes”. We currently use it:

  • as an additional opinion, for example if we have a programming challenge, it can be helpful to see how ChatGPT would code a specific function.

  • where we can define the inputs and outputs and have confidence that the outcome will be life-positive.

In practice, what does that look like?

COMING SOON - a case study illustrating the process we have used to get the best out of AI, and mitigate the risks.

PS - This essay was produced the hard way: hitting the books, good old Wikipedia, conversations with friends and colleagues, our experiences with applying Artificial Intelligence to solve specific problems, and gazing out of the window.

Next

IT train-wrecks and how to avoid them