Given the impact that artificial intelligence (AI)–based medical technologies (hardware devices, software programs, and mobile apps) can have on society, debates are emerging about the principles that should guide their development and deployment. Using the biopsychosocial model applied in psychiatry and other fields of medicine as our foundation, we propose a novel 3-step framework to guide industry developers of AI-based medical tools, as well as health care regulatory agencies, in deciding whether a product should be launched: a "Go or No-Go" approach. More specifically, our framework places the safety of stakeholders (patients, health care professionals, industry, and government institutions) at its core by asking developers to demonstrate the biological-psychological (impact on physical and mental health), economic, and social value of their AI tool before it is launched. We also introduce a cost-effective, time-sensitive, and safety-oriented clinical phased trial approach that combines quantitative and qualitative methods to help industry and government health care regulatory agencies test these AI-based medical technologies and deliberate on whether to launch them. To our knowledge, our biological-psychological, economic, and social (BPES) framework and mixed-methods phased trial approach are the first to place the Hippocratic principle of "Do No Harm" at the center of the mindsets of developers, implementers, regulators, and users when determining whether an AI-based medical technology is safe to launch. Moreover, as the welfare of AI users and developers becomes a greater concern, our framework's novel safety feature will allow it to complement existing and future AI reporting guidelines.