Artificial intelligence (AI) chatbots such as Chat Generative Pretrained Transformer‐4 (ChatGPT‐4) have made significant strides in generating human‐like responses. Trained on an extensive corpus of medical literature, ChatGPT‐4 has the potential to augment patient education materials. These chatbots may be beneficial to patients facing a diagnosis of colorectal cancer (CRC). However, the accuracy and quality of patient education materials are crucial for informed decision‐making. Given workforce demands that limit time for holistic care, AI chatbots could bridge gaps in CRC information, reaching wider demographics and crossing language barriers. Rigorous evaluation is nevertheless essential to ensure accuracy, quality and readability. Therefore, this study aims to evaluate the efficacy, quality and readability of answers generated by ChatGPT‐4 on CRC, utilizing patient‐style question prompts.