Niagara Falls
Friday, May 17, 2024
Dr. Brown: Will humans meet their match in AI? It’s possible
A photo of Dr. William Brown. Supplied

No one doubts artificial intelligence’s ability to crunch huge amounts of data.

What surprises some, however, is how well it learns, innovates and even seems to intuit in ways eerily similar to human brains.

The human mind is very flexible and adaptive at finding novel solutions and figuring stuff out.

Just watch children play with iPads: through independent trial and error, they often solve problems their parents and grandparents struggle with – and unlike the adults, kids don’t forget.

Modern smartphones pack more power than the early computers that filled a room and cost millions of dollars a few decades ago.

These days, who doesn’t “Google it” when they want a quick answer to some esoteric question?

But Google and other search engines are only as accurate and comprehensive as the data they’re fed.

The same applies to chatbots such as OpenAI’s ChatGPT and Google’s version, called Bard, which have captured the imagination of millions of users since OpenAI released its first version in November 2022.

Chatbots are fun to use and, for most users, they’re magic – even if they’re prone to creating fictitious information (so-called hallucinations).

ChatGPT generates plausible, well-composed term papers, grant applications and even research papers, plans holidays in Europe and, in its higher-powered versions, even produces novel music and art.

What intrigues me about AI is its talent for finding novel solutions to challenging problems and, in some instances, writing its own code to do so.

Those are properties we normally associate with the human brain. And as with the brain, in many instances AI companies have no idea how their systems came up with such innovative capabilities.

In nature, natural selection determines which variants are favoured and which are not. Perhaps AI operates similarly by favouring some algorithms over others.

In some instances, AI goes further: when faced with coding roadblocks, it appears capable of devising workaround algorithms.

That’s a stunning achievement: until recently, when faced with a roadblock in an algorithmic network, human programmers had to step in, identify the hitch and write code to get around it.

Clearly, AI has crossed a threshold from dependence on human programming to a measure of independence, which can only expand in the future.

This means future AI systems may be capable of evolving – much as biological systems do.

In short, AI has become very intelligent and, like highly intelligent, creative humans, it may not be able to tell us how it became so.

This is where AI potentially becomes a threat: what if it one day matches, then exceeds, the broad intelligence of humans? At that point, who’s in charge?

Looking to the near future, the advent of truly powerful AI systems will change almost every aspect of our lives.

Recently, I tried out what I thought were plausible tests of medical reasoning by presenting ChatGPT 3.5 with case material taken directly from the New England Journal of Medicine’s weekly clinicopathological conferences.

Just in case ChatGPT somehow had access to that material, I also made up clinical cases of my own. I introduced each case beginning with the history, checked what ChatGPT came up with, then fed it more and more information to see how accurate it was.

ChatGPT got the right answer in all 10 cases, at what I would estimate to be the performance level of a well-trained clinical resident.

Maybe ChatGPT and my patients don’t need me! That’s one of the big issues, isn’t it?

Physicians take eight to 10 years to train, beginning with medical school and followed by residency years in a specialty.

AI, once trained and updated regularly, takes far less time and is as good or better. It even provides all the notes, management plans, and lists of ongoing clinical trials on request.

That’s very impressive, given that ChatGPT was not specifically trained in any form of medicine.

There are bugs to overcome, but given the probable development trend, ChatGPT’s successors will continue to evolve, and humans won’t.

As a patient, I welcome the help of ChatGPT and its successors as partners in an overworked system with plenty of its own flaws and rough edges: long delays at every stage and not-always-pleasant encounters between patients and healthcare workers.

So, I vote for AI-assisted health care.

True, it can’t dress patients in medical gowns or dress wounds, but it is patient and respectful all the time.

Dr. William Brown is a professor of neurology at McMaster University and co-founder of the Infohealth series at the Niagara-on-the-Lake Public Library.  
