Artificial intelligence (AI) has been heralded as a transformative force in medicine, offering advances in diagnostics, personalized treatment, and health system efficiency. However, its rapid adoption in clinical practice presents risks that must be addressed. These dangers range from algorithmic bias and data misuse to the erosion of physician skills and direct threats to patient safety.
One of the most pressing dangers is algorithmic bias, which produces disparities in outcomes and quality of care. Because AI systems are trained on historical datasets, they often replicate existing disparities in healthcare. For instance, training data may underrepresent racial and ethnic minorities, yielding models that perform worse for those groups and deliver inequitable care. This phenomenon threatens to widen rather than bridge health disparities, especially when diagnostic tools are deployed predominantly in well-resourced populations, leaving vulnerable groups behind. Such inequities highlight the importance of embedding fairness and equity into AI design and deployment. As noted in reviews of AI applications, biases can stem from unrepresentative training data, perpetuating systemic inequalities in outcomes such as disease prediction and resource allocation [1].
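To make the mechanism concrete, the sketch below (entirely synthetic data and a hypothetical scikit-learn model; nothing here comes from the cited studies) shows how a model can look acceptable on aggregate metrics while failing badly for an underrepresented group, which is why stratified evaluation matters.

```python
# Sketch: per-group evaluation surfaces bias that an aggregate metric hides.
# All data, groups, and the model are synthetic/hypothetical.
# Assumes numpy and scikit-learn are installed.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

# Synthetic cohort: group 1 is only 10% of the data, and the
# feature-outcome relationship differs between groups.
n = 5000
group = (rng.random(n) < 0.10).astype(int)
x = rng.normal(size=(n, 3))
signal = np.where(group == 0, x[:, 0], -x[:, 0])  # relationship flips by group
y = (signal + rng.normal(scale=0.5, size=n) > 0).astype(int)

X = np.column_stack([x, group])
model = LogisticRegression().fit(X[:4000], y[:4000])
pred = model.predict(X[4000:])

# Aggregate performance looks tolerable...
print("overall accuracy:", round((pred == y[4000:]).mean(), 3))

# ...but stratifying by group exposes the disparity.
for g in (0, 1):
    mask = group[4000:] == g
    sens = recall_score(y[4000:][mask], pred[mask])
    print(f"group {g} sensitivity:", round(sens, 3))
```

The point is not the specific numbers but the evaluation pattern: reporting only overall accuracy would hide the minority-group failure entirely.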
A second danger involves data security and misuse. AI relies on vast quantities of sensitive health data, which raises the stakes for privacy breaches and cyberattacks. Hacking of medical datasets could compromise not only individual patients but entire healthcare systems. Moreover, phenomena such as “data poisoning,” where datasets are deliberately manipulated to produce inaccurate outputs, represent an underappreciated risk with potentially devastating consequences for patient safety. Ethical frameworks emphasize the need for robust data protection, informed consent, and transparency to mitigate these issues, ensuring AI development prioritizes patient autonomy and societal benefit [2].
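The sketch below illustrates the label-flipping form of data poisoning on a toy classifier; the data, model, and attack fraction are illustrative assumptions, not drawn from any documented incident. An attacker who silently relabels a fraction of positive (diseased) training cases as negative leaves the training pipeline running normally while the resulting model systematically under-detects disease.

```python
# Sketch: label-flipping "data poisoning" on a toy diagnostic classifier.
# Data, model, and attack fraction are illustrative assumptions.
# Assumes numpy and scikit-learn are installed.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.3, size=2000) > 0).astype(int)
X_tr, y_tr, X_te, y_te = X[:1500], y[:1500], X[1500:], y[1500:]

def sensitivity(train_labels):
    """Train on the given labels; report detection rate on clean test data."""
    model = LogisticRegression().fit(X_tr, train_labels)
    return recall_score(y_te, model.predict(X_te))

print("clean labels:   ", round(sensitivity(y_tr), 3))

# Poison: silently relabel 40% of positive training cases as negative.
# The pipeline still runs without errors, but detection degrades.
poisoned = y_tr.copy()
pos = np.flatnonzero(poisoned == 1)
flip = rng.choice(pos, size=int(0.4 * len(pos)), replace=False)
poisoned[flip] = 0
print("poisoned labels:", round(sensitivity(poisoned), 3))
```

Because nothing in the pipeline fails loudly, such corruption can go unnoticed without deliberate data integrity checks.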
Patient safety is further threatened by AI misbehavior in real-world clinical settings. A recent analysis of AI-related healthcare incidents documented dozens of cases in which algorithmic errors or misuse directly harmed patients [3]. Failures in monitoring systems, flawed predictions, and black-box diagnostic tools (systems whose internal reasoning is opaque, so clinicians cannot readily verify how a diagnosis was reached) led to outcomes classified as severe or even critical. These incidents underscore that AI is not inherently safe and requires rigorous oversight and monitoring to prevent harm. For example, in primary care messaging, AI-drafted replies have been shown to contain objective inaccuracies and harmful omissions that physicians often fail to correct before sending [5].
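As one concrete form such monitoring can take, the sketch below uses a two-sample Kolmogorov-Smirnov test (via scipy, with synthetic data and an arbitrary alert threshold, both assumptions for illustration) to flag when a deployed model's live inputs drift away from its training distribution, one way silent degradation can be caught before it reaches patients.

```python
# Sketch: post-deployment input-drift monitoring with a two-sample KS test.
# Data and the alert threshold are illustrative assumptions; assumes
# numpy and scipy are installed.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
train_feature = rng.normal(loc=0.0, size=5000)  # e.g., a lab value at training time
live_feature = rng.normal(loc=0.6, size=500)    # same lab value in production, drifted

result = ks_2samp(train_feature, live_feature)
if result.pvalue < 0.01:  # threshold chosen arbitrarily for illustration
    print(f"ALERT: input drift (KS={result.statistic:.3f}, p={result.pvalue:.2e}); "
          "review the model before trusting its predictions")
else:
    print("no significant drift detected")
```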
Even when AI performs as designed, risks arise from automation bias, the human tendency to over-rely on machine-generated outputs. Physicians may defer to AI recommendations even when these conflict with their own clinical judgment, undermining professional autonomy and potentially leading to errors [4]. In primary care, studies show that physicians often miss errors in AI-generated responses to patient portal messages, allowing potentially harmful information to be sent uncorrected [5]. This complacency reflects a dangerous shift toward de-skilling.
De-skilling is particularly concerning in surgical practice. The increasing role of AI in robotic-assisted procedures and intraoperative decision-making may diminish surgeons’ dexterity and problem-solving abilities. Overdependence risks leaving clinicians ill-prepared for emergencies when AI systems fail, compromising patient outcomes [6]. This erosion of critical skills could have long-term consequences for medical training and the integrity of surgical practice.
Finally, the ethical and regulatory landscape lags behind technological advancement. Issues of accountability remain unresolved: when an AI system errs, responsibility is diffused among clinicians, hospitals, and developers, with no clear way to assign liability. Without robust frameworks for transparency, liability, and informed consent, patients are left vulnerable [7].
While AI holds enormous promise for medicine, its dangers are equally significant. Algorithmic bias, data misuse, patient safety risks, de-skilling of clinicians, and unresolved ethical questions must all be addressed before AI can be responsibly integrated into clinical practice.
References
1. Dankwa-Mullan I. Health equity and ethical considerations in using artificial intelligence in public health and medicine. Prev Chronic Dis. 2024;21:240245. doi:10.5888/pcd21.240245
2. Siafakas N, Vasarmidi E. Risks of artificial intelligence (AI) in medicine. Pneumon. 2024;37(3):40. doi:10.18332/pne/191736
3. Denecke K, Lopez-Campos G, May R. The unintended harm of artificial intelligence (AI): exploring critical incidents of AI in healthcare. In: Househ MS, et al., eds. MEDINFO 2025 — Healthcare Smart × Medicine Deep. IOS Press; 2025:1013-1018. doi:10.3233/SHTI250992
4. Aldosari B, Aldosari H, Alanazi A. Challenges of artificial intelligence in medicine. In: Mantas J, et al., eds. Envisioning the Future of Health Informatics and Digital Health. IOS Press; 2025:16-23. doi:10.3233/SHTI250039
5. Biro JM, Handley JL, McCurry JM, et al. Opportunities and risks of artificial intelligence in patient portal messaging in primary care. npj Digit Med. 2025;8:222. doi:10.1038/s41746-025-01586-2
6. Adegbesan A. From scalpels to algorithms: the risk of dependence on artificial intelligence in surgery. J Med Surg Public Health. 2024;3:100140. doi:10.1016/j.glmedi.2024.100140
7. Chustecki M. Benefits and risks of AI in health care: narrative review. Interact J Med Res. 2024;13:e53616. doi:10.2196/53616