Will People Confide in AI Chatbots for Healthcare?

Find out more about the potential for AI-based chat systems to help healthcare providers take better care of their patients.
 
AUSTIN, Texas - July 23, 2024 - PRLog -- Tea And Sympathy: Do AI-Based Chat Systems Have What It Takes To Offer Accurate And Empathetic Patient Care?

AI-based chat systems are getting "smarter" – but are they smart enough to offer reliable patient care?

One of the first available scientific studies indicates that AI-based chat systems can perform admirably when answering patient questions.

A research team headed by Dr. John W. Ayers at the University of California San Diego recently set out to evaluate which provides better answers to patient questions: ChatGPT or a human physician.

The team compiled a set of approximately 200 typical patient questions (extracted from the AskDocs forum on Reddit).

A panel of three licensed healthcare providers evaluated the responses – without knowing whether each answer was provided by a human physician or by ChatGPT.

Sorry if you are a physician reading this article, but the results came back strongly in favor of ChatGPT – the panel preferred the ChatGPT response nearly 80% of the time.

The jurors rated ChatGPT's responses to common medical questions as high quality 78.5% of the time, versus only 22.1% for the physician-sourced answers.

ChatGPT also demonstrated a much higher level of empathy toward patients: 45% of its responses were rated as empathetic, a statistically significant difference from the 4.6% of physician answers rated the same way.

Should physicians be concerned about AI taking over their jobs?

In the short term, the answer is no.

Instead, AI might be a godsend for helping healthcare providers keep up with their ever-growing workload. The widespread adoption of online healthcare portals (part of the push toward electronic records) has led to an overwhelming increase in the number of electronic messages sent by patients. Reading, prioritizing, and responding to these messages has become very burdensome for healthcare providers, so AI-based tools could help a lot – either by reading the inquiries and providing draft responses offline (for the provider to review and send to the patient) or by "chatting" with the patient directly.
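To make that "draft for provider review" workflow concrete, here is a minimal illustrative sketch in Python. It assumes the OpenAI Python SDK; the model name, system prompt, and sample patient message are placeholders rather than part of any actual portal integration, and a real deployment would also require compliant handling of patient data and clinician approval before anything reaches the patient.

```python
# Illustrative sketch only: drafting a reply to a patient portal message
# for a clinician to review before sending. Assumes the OpenAI Python SDK;
# the model name, prompt, and sample message below are placeholders, not
# part of any production healthcare system.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

# Hypothetical patient portal message used purely for demonstration
patient_message = (
    "I started a new blood pressure medication last week and I've been "
    "feeling dizzy in the mornings. Should I be worried?"
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "You draft replies to patient portal messages for a licensed "
                "clinician to review, edit, and approve before sending. "
                "Be accurate, empathetic, and clearly flag anything that "
                "may need urgent in-person attention."
            ),
        },
        {"role": "user", "content": patient_message},
    ],
)

draft_reply = response.choices[0].message.content

# The draft goes into a review queue; nothing is sent to the patient
# until a clinician signs off on it.
print("DRAFT FOR CLINICIAN REVIEW:\n")
print(draft_reply)
```

The key design choice in this sketch is that the model's output is treated strictly as a draft for the provider, which keeps the clinician in the loop while still saving time on routine inquiries.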

AI Chat Trustworthiness Is Marred By Potential Hallucinations. How Will Patients React?

Unfortunately, the Large Language Model (LLM) systems that power AI-based chat systems can sometimes get it wrong.

This can happen either because the language models were trained on faulty data, or because they encounter a gap or glitch in their "knowledge," which they sometimes "fill in" with related but potentially incorrect information.

(Depending on the circumstances, providing wrong answers can lead to legal exposure – as this recent Wall Street Journal article discusses.)

Read more: https://formaspace.com/articles/healthcare/ai-chatbots-in...

Contact
mktg@formaspace.com
800-251-1505
End
Source: Formaspace
Tags: Artificial Intelligence
Industry: Health
Location: Austin - Texas - United States