
Large language models will not replace healthcare professionals: curbing popular fears and hype.



Following the release of ChatGPT, large language models (LLMs) have entered the mainstream. ChatGPT and GPT-4 recently garnered particular attention for attaining expert-level performance in the United States Medical Licensing Examinations. However, performance is not perfect, and has not been as impressive in more specialised tests, such as the Membership of the Royal College of General Practitioners Applied Knowledge Test. ChatGPT frequently ‘hallucinates’, providing false, unverified information in the same manner in which it delivers facts. While performance in clinical tasks is expected to improve dramatically with the release of GPT-4, remaining inaccuracy and the lack of an uncertainty indicator preclude autonomous deployment of ChatGPT and similar LLM chatbots in clinical settings.

LLM applications may nevertheless revolutionise cognitive work. Tools such as ChatGPT excel in tasks where specialist knowledge is not required, or is provided by the user prompt: examples include correcting language and rephrasing information for different audiences or within other constraints (e.g. word limits), and ChatGPT has already been proposed as a tool for administrative tasks, clinical work and patient education. While this does represent an impressive advance in natural language processing, and the benefits may be manifold across fields including medicine, these limited use cases do not live up to the hype surrounding LLMs and artificial intelligence (AI) more generally in 2023. This is due to a fundamental misunderstanding about the form of AI that LLMs represent.

Do LLMs represent artificial general intelligence (AGI)? Currently, the answer is probably not, despite the emergence of interactive conversational interfaces and few-shot or zero-shot properties, whereby models execute tasks to which they have been exposed only a few times, or never, respectively. This is demonstrated by examining how these models are trained and how their architecture is composed. The backend LLM (GPT-3, from which GPT-3.5 was developed) underpinning older versions of ChatGPT was initially trained on a dataset of billions of words taken from books, Wikipedia and the wider internet. Through a process of machine learning, GPT-3 encoded the associations between individual words in the training dataset. Through ‘reinforcement learning from human feedback’, GPT-3 was subsequently fine-tuned to provide appropriate responses to users’ queries, producing GPT-3.5. Through these processes, ChatGPT has developed an impressive ability to respond appropriately to diverse prompts, albeit delivering accurate and inaccurate statements with equal lucidity. This lucidity, responsiveness and flexibility have led to sensational claims that AGI has been attained and could feasibly replace professionals in cognitive roles.

The performance of GPT-4, which powers newer versions of ChatGPT, dwarfs that of GPT-3.5 across tasks including logical reasoning and medical aptitude tests. Moreover, GPT-4 can be prompted to adopt different roles on demand, and will accept multimodal input, processing images as well as text. Prominent figures in industry and academia have advocated for a moratorium on development of more advanced AI systems in response to concerns regarding safety, ethics and fears of replacement. Despite these fears and hype, the replacement of healthcare professionals by LLMs in any capacity still looks out of reach.
Although GPT-4’s architecture and training are confidential, it likely relies on similar schemata to its predecessor, as it exhibits similar (albeit fewer) hallucinations and reasoning errors, including in medicine. None of ChatGPT’s published autonomous training involved actual comprehension of language in context; the meaning (as we understand it) of the words in the dataset was immaterial throughout. While this brute-force linguistic processing may yet prove sufficient to develop a form of AGI, it appears that these LLMs will continue to be afflicted by mistakes and errors.
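
The word-association mechanism described above, learning which words tend to follow which without any model of whether the resulting statements are true, can be illustrated with a toy sketch. The Python snippet below is illustrative only: it builds a simple bigram model rather than anything resembling GPT-3's transformer architecture, and the corpus and function names are hypothetical. It shows how association statistics alone can generate fluent-looking text that is indifferent to accuracy.

```python
import random
from collections import defaultdict

# Toy training corpus (hypothetical); GPT-3 was trained on billions of words.
corpus = (
    "large language models predict the next word . "
    "large language models can state falsehoods as fluently as facts ."
).split()

# Count how often each word follows another: the learned 'associations'.
follow_counts = defaultdict(lambda: defaultdict(int))
for current_word, next_word in zip(corpus, corpus[1:]):
    follow_counts[current_word][next_word] += 1

def generate(seed, length=10):
    """Sample a continuation word by word from the association statistics."""
    words = [seed]
    for _ in range(length):
        options = follow_counts.get(words[-1])
        if not options:
            break
        next_word = random.choices(list(options), weights=list(options.values()))[0]
        words.append(next_word)
    return " ".join(words)

# Fluent-looking output, with no notion of whether it is true.
print(generate("large"))
```

The point of the sketch is the abstract's argument in miniature: at no stage does the model evaluate truth, only which word plausibly comes next.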

Keywords: large language models; ChatGPT; medicine; GPT

Journal Title: Journal of the Royal Society of Medicine, Vol. 116(5), 181–182
Year Published: 2023


