Friday, April 26, 2024
Dr. Brown: AI programs getting progressively better and more reliable

Some form of artificial intelligence had been imagined by fiction writers in the 1800s and the first half of the 1900s.

But it was Alan Turing, a mathematician and computer scientist, who in the 1950s developed the theoretical basis for what became the field of artificial intelligence, building on his earlier conceptual model of computation, aptly called the Turing machine.

In the last two decades AI made real progress in science by linking powerful computers with algorithmic programs loosely based on the architecture of the brain and capable of teasing out meaningful patterns from masses of data. In later versions, it even writes its own code.

But when the first ChatGPT was released in November 2022 by Sam Altman’s OpenAI, there was a storm of interest as the public, for the first time, could harness the power of AI coupled with large language models to do what they wanted to do – whether planning trips, writing proposals or checking out almost anything they were interested in. Sort of Google on steroids.

Now Google with Gemini (formerly Bard) and other high-tech firms have released competing versions of ChatGPT. 

Professionals were captivated, too. Scientists and graduate students were among the first to use versions of ChatGPT to help them write papers, grant proposals, do literature searches and reviews, and more.

On the science journal front, many top-notch journals made it clear they would not accept any AI-created articles due to concerns over authorship. 

Since then, the field has swung wildly from hype and failures to stunts such as high-profile trouncings of humans at games such as chess, Go and Jeopardy, and, more impressively I’d say, figuring out the structures of a wide range of molecules with pharmacological promise as drugs.

In the last decade there have been many triumphs such as employing AI to read X-rays, CT and MRI scans.

What really launched AI, in the eyes of scientists and the public, was the creation of a product in which powerful AI programs trained on large publicly available data sets are coupled to language tools trained on huge language data sets.

This made user-friendly communication possible and harnessed AI to create whatever the user wants — from literature searches to reviews, summaries, pictures, and even music and PowerPoint presentations.   

ChatGPT was an instant success in the marketplace but remains far from perfect, though successive updates are working on its flaws.

Early on, problems included the unintended fabrication of material (called hallucinations), which could be hard to spot, and data sets skewed by the exclusion of important information.

This is especially important in health care where race, sex and age can have a huge effect on symptoms, findings, prevalence and management of some medical problems, such as diabetes and hypertension. 

There’s also the thorny question of where the data comes from, an issue brought to the fore by writers and publishers. The New York Times has announced it will sue companies that use content from its pages without consent.

From a health care perspective, data used by AI programs must come from reputable sources and, if it comes from hospitals, should be screened to exclude errors and ensure privacy.

There isn’t space in a few articles to cover the potential AI has for improving medical care for people in Ontario, who often wait as much as a year or more to see specialists, and who, if they happen to live in a remote area, have even less coverage.

For these bottlenecks, AI could offer first-look screening, follow-ups and even referrals for patients.

No one knows it all these days. Medicine has become too complex, so AI makes sense for analyzing data, making notes and even assessing patients in collaboration with overworked human health care staff.

Physicians make mistakes in diagnosis and management, sometimes serious ones. Perhaps future versions of AI, with access to high-quality databases from the best sources, could help prevent them by serving as trusted assistants to health care professionals.

Disclosure: This article was not written by any version of AI, but included important input from the NOTL Public Library’s Debbie Krause.

Dr. William Brown is a professor of neurology at McMaster University and co-founder of the InfoHealth series at the Niagara-on-the-Lake Public Library.
