BEYOND BLACK BOXES: INTERPRETABLE AI FOR ENHANCED RISK ASSESSMENT AND ETHICAL DECISION-MAKING IN FORENSIC PSYCHIATRY
DOI: https://doi.org/10.51891/rease.v10i8.15298

Keywords: Forensic Psychiatry. Artificial Intelligence. Risk Assessment. Ethics, Professional. Decision Making.

Abstract
The increasing adoption of artificial intelligence (AI) in forensic psychiatry has sparked discussions about its potential to revolutionize risk assessment, diagnosis, and treatment. However, the use of 'black box' AI models, which lack transparency and interpretability, has raised significant ethical concerns. This narrative review explores the current state of AI in forensic psychiatry, with a focus on developing interpretable AI models for enhanced risk assessment and ethical decision-making. The article underscores the importance of considering social and environmental factors alongside neurobiological data in AI-based predictions and discusses AI's legal and ethical implications in forensic contexts. The review concludes by emphasizing the need for interdisciplinary collaboration and responsible evaluation of AI models before widespread adoption in high-stakes decision-making processes within forensic psychiatry and criminal justice.
License: Attribution CC BY