
Healthcare must control the 'if, when and how' of AI development and deployment

CHICAGO – HIMSS23 kicked off in full swing here Tuesday with a sold-out opening keynote. HIMSS CEO Hal Wolf noted that the organization now has more than 122,000 members – a 60% increase over the past five years – and an increasingly global feel: healthcare and technology leaders from more than 80 countries are attending the show, all grappling with similar challenges.

“We’ve had to solve a lot of problems over the last three years,” Wolf said.

In addition to the pandemic, other barriers to health and care remain in the United States and around the world: aging populations, chronic disease, geographic displacement, challenges accessing care, financial pressures, staffing shortages, and fundamental shifts in healthcare delivery, such as the rise of consumerism and the moves toward telehealth and home-based care.

To address these challenges, “the need for actionable information is greater now than ever before,” Wolf said, setting the stage for a provocative opening panel discussion.

Artificial intelligence and machine learning can “open up new horizons if – if – we use them appropriately,” Wolf said. Nodding playfully to the recent wave of hype around OpenAI’s ChatGPT, he noted that he had recently posed a simple question to the AI model: “How do we solve global healthcare challenges?”

Within seconds, the software returned an answer of more than 300 words.

These challenges are “complex and multifaceted, requiring an integrated approach involving multiple stakeholders, strategies and solutions,” ChatGPT answered, listing improved access, investment in preventive care, technological innovation, addressing health disparities and global collaboration among its top recommendations. It’s hard to argue with any of that – but healthcare is still only in the early stages of “creating and learning how to manage these emerging AI tools.”

Ross convened the discussion, “Responsible AI: Prioritizing Patient Safety, Privacy, and Ethical Considerations,” featuring four AI innovators who have been thinking hard about the real challenges and opportunities of this transformative technology.

Andrew Moore, founder and CEO of Lovelace AI; Kay Firth-Butterfield, CEO of the Center for Trustworthy Technology; Peter Lee, vice president of research and incubation at Microsoft; and Reid Blackman, author of Ethical Machines and CEO of Virtue, were tasked with exploring a simple question Ross posed about AI: “Just because we can do a thing, should we?”

‘It’s not easy, there’s a lot to learn’

As he has done in the past, Ross contrasted what he calls Big AI – “bold ideas like machines that can diagnose disease better than doctors” – with Little AI – “machines that are already listening, writing, helping – and irreversibly changing the way we live and work.”

Those AI tools are already helping their users do “bigger and bigger things,” he said. It is through the accumulation of Little AI advances that Big AI emerges.

And it’s happening fast. For this reason, Moore believes it is time for health systems to rise to the challenge.

While the rapidly evolving capabilities of large language models like ChatGPT may strike some as hard to fathom, “I would hope responsible hospitals are using large language models by now,” he said, for tasks such as customer service and call center automation.

“Don’t wait to see what happens in the next iteration,” Moore said. “Start now so you’ll be ready.”

The capabilities of generative AI are emerging, and in ways that could significantly benefit healthcare.

One use case is already evident: deploying generative AI to improve clinical note-taking. See, for example, Epic’s generative AI announcement this week in partnership with Microsoft and Nuance, or the tools medical schools are deploying so AI can “play the role of the patient.”

But there are “some dire risks,” Lee said. “It’s not easy and there’s a lot to learn.”

To manage those risks, Lee made a plea at HIMSS23: the healthcare community needs to own the “if, when and how” of how these AI technologies will be used going forward.

Yes, “there are huge opportunities,” he said. “But there are risks, some of which we may not yet know about.”

Thus, “the healthcare community needs to take firm ownership” of how the development and deployment of these tools evolves, with a keen eye toward safety, efficacy and fairness.

Blackman said he still has real concerns about the black-box aspects of too many AI models, and that if these tools are to gain wider acceptance, especially in clinical settings, greater transparency and explainability are fundamental must-haves.

“GPT-4 is very useful,” he said. But the decisions it makes and the answers it gives “don’t give you reasons.”

Sometimes, as with the model’s answer to Wolf’s question, those results prove accurate and true. But these tools often arrive at them through arcane and complex calculations whose effect can feel like “magic,” Blackman said.

“Maybe we’re all right with magic,” he said. “But if it’s a cancer diagnosis, I need to know exactly why.”

The LLM is “a word predictor, not a deliberator,” he said. Even when pressed for an explanation, “when you get those reasons, they’re not why you’re actually getting the diagnosis.”

But ultimately, he said, the medical establishment needs to think hard about what it can accept: “Can we accept a black-box model, even if it works perfectly?”

For Firth-Butterfield – who last month was among the more than 26,000 people who signed an open letter calling on labs to pause the training of powerful new AI systems for at least six months – the key question is not how the model arrives at its answers, but to whom it is available at all.

“While 100 million people use ChatGPT, 3 billion still don’t have access to the internet,” she said, adding that her concerns about AI have a lot to do with health equity, bias, fairness and accountability.

“If you’re going to use generative AI, what data are you going to share with those systems?” she asked. And “who do you sue when something goes wrong?”

Lee agreed that “the accountability issue is a very serious one, and the world needs to figure it out. It needs to be looked at sector by sector, with a particular focus on healthcare and education.”

AI is advancing, even faster than many experts believed it would.

The question, then, is where and how healthcare should embrace these transformative yet mysterious technologies, said Ross.

In fact, the questions go “beyond ethics,” said Lee, who has been grappling with the ethical issues and limitations of AI for years.

“What do we want from these tools for our future?” Firth-Butterfield asked.

Lee reiterated his hope that the healthcare community will work together to help answer that question – making sure guidance and guardrails are in place – as different stakeholders “work together for some common ground.”

He acknowledged that there is still a lot of fear, uncertainty and doubt about what AI is already doing and what it might do. “It hits a nerve,” he said. “It’s an emotional thing.”

So his advice is to work through that uncertainty. “Get hands-on. Try to immerse yourself in it, and understand. Then work with others in the community.”

Moore agreed that passive observation is not an option.

“Don’t stop and wait to see what happens,” he said. “Get your own people to build the model. Don’t just rely on the vendor. Make sure you’re involved and your people understand what’s going on.”

Mike Miliard is Executive Editor, Healthcare IT News
Email: mike.miliard@himssmedia.com
Healthcare IT News is a HIMSS publication.
