Artificial Intelligence in Melanoma Diagnosis: Ethical Considerations and Clinical Implementation

Proc Bayl Univ Med Cent, 2025 · AI / Ethics

Plain-English Explanations
Page 1
FDA-Approved AI Devices for Melanoma Detection: Current Landscape

This short perspective piece by Verma et al. (2025) surveys the current state of AI-based melanoma detection systems and the ethical hurdles that must be cleared before they become standard clinical tools. The authors highlight that AI has shown strong potential for improving early detection of skin cancer, particularly melanoma, by rapidly evaluating dermoscopic images.

DermaSensor, the most recent device approved by the FDA, uses AI-powered spectroscopy and achieved 96% sensitivity across all skin cancer types in clinical trials. Nevisense, FDA-approved since 2017, relies on electrical impedance spectroscopy and has reported 96% sensitivity but only 34% specificity, meaning it catches nearly all cancers but flags many benign lesions as suspicious. All Data Are Ext (ADAE) is an open-source AI system still in the research phase that outperformed dermatologists in balancing accuracy and sensitivity for melanoma detection in prospective testing.
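To see what 96% sensitivity combined with 34% specificity means in practice, here is a minimal sketch that computes expected screening outcomes. The 5% melanoma prevalence, the function name, and the cohort size are illustrative assumptions, not figures from the paper:

```python
def screening_outcomes(n_lesions, prevalence, sensitivity, specificity):
    """Expected counts for a diagnostic test applied to n_lesions."""
    malignant = n_lesions * prevalence
    benign = n_lesions - malignant
    true_pos = malignant * sensitivity
    false_neg = malignant - true_pos
    false_pos = benign * (1 - specificity)   # benign lesions flagged as suspicious
    ppv = true_pos / (true_pos + false_pos)  # chance a flagged lesion is actually cancer
    return true_pos, false_neg, false_pos, ppv

# Nevisense-like profile: 96% sensitivity, 34% specificity, assumed 5% prevalence
tp, fn, fp, ppv = screening_outcomes(1000, 0.05, 0.96, 0.34)
print(round(tp), round(fn), round(fp), round(ppv, 3))  # → 48 2 627 0.071
```

Under these assumptions, nearly every melanoma is caught, but over 600 of 1,000 lesions are flagged, and only about 7% of flagged lesions are truly malignant, which is exactly the false-positive burden the authors warn about.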

MelaFind, one of the earliest AI melanoma devices, received FDA approval in 2011 but was later discontinued due to insufficient specificity. This historical example underscores that high sensitivity alone is not enough for a viable diagnostic tool; a high false-positive rate can lead to unnecessary biopsies and patient anxiety. The authors also note that AI could help address the dermatologist shortage in low-income or understaffed areas by increasing efficiency in clinical workflows.

TL;DR: DermaSensor and Nevisense are FDA-approved AI melanoma detection devices with 96% sensitivity. The open-source ADAE system shows promise in research. MelaFind's discontinuation demonstrates that sensitivity without adequate specificity is not clinically viable.
Page 1
Transparency, Informed Consent, and Patient Communication

The authors argue that patients must be clearly informed about the use of AI in their care, including the role and limitations of the system, its efficacy, and how its assessment fits into the broader diagnostic decision-making process. Transparency is essential for maintaining patient trust and obtaining genuine informed consent.

Explaining AI to patients requires a careful balance between providing adequate information and avoiding excessive technical jargon. Clinicians should be prepared to discuss how AI analyzes images, its function in assisting with diagnosis, and the critical importance of human oversight in the final decision. The paper emphasizes that AI complements but does not replace clinical judgment.

When a clinician's opinion differs from the AI's interpretation of a melanoma image, a significant challenge arises. In these situations, the dermatologist must carefully examine both perspectives, which may include reexamining the lesion, requesting a second opinion from a colleague, or evaluating additional clinical information not available to the AI. Both the AI's assessment and the clinician's reasoning should be documented in the patient's medical record, and this information should accompany any biopsy referral so that pathologists can analyze all relevant data for a more precise diagnosis.
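As a sketch of what documenting such a disagreement might look like, the structure below captures both readings so they can travel with a biopsy referral. All field names and example values are hypothetical illustrations, not a schema from the paper:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DiscordantAssessment:
    """Both readings are recorded so pathologists receive the full picture."""
    lesion_id: str
    ai_assessment: str          # e.g. "suspicious for melanoma"
    ai_confidence: float        # model score, if the device reports one
    clinician_assessment: str   # e.g. "likely benign nevus"
    clinician_reasoning: str
    follow_up: str              # re-examination, second opinion, or biopsy referral
    recorded_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = DiscordantAssessment(
    lesion_id="L-0042",
    ai_assessment="suspicious for melanoma",
    ai_confidence=0.87,
    clinician_assessment="likely benign nevus",
    clinician_reasoning="stable over 12 months on comparison imaging",
    follow_up="second opinion requested; biopsy referral includes both readings",
)
print(asdict(record)["follow_up"])
```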

TL;DR: Patients must understand how AI is being used in their diagnosis, including its limitations. When AI and clinician opinions diverge, both assessments should be documented and shared with pathologists to enable the most accurate final diagnosis.
Pages 1-2
Patient Data Privacy, Image Management, and HIPAA Compliance

A central concern raised by the authors is the management of patient images uploaded to AI systems. Health care providers must employ strong safeguards to protect patient information and comply with HIPAA (the Health Insurance Portability and Accountability Act). Clear procedures should govern image retention, use for research or model enhancement, and patients' rights to view and remove their data.

Encryption, safe storage, and deidentification of data are described as critical safeguards for protecting patient privacy and preventing unauthorized use of sensitive medical photographs. The paper notes that these protections are especially important given the visual nature of dermatologic data, where photographs can be inherently identifying.
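One common building block of deidentification, alongside encryption and secure storage, is keyed pseudonymization of patient identifiers. The sketch below uses only Python's standard library; the key handling, identifier format, and token length are assumptions for illustration, not a prescription from the paper:

```python
import hmac
import hashlib

def pseudonymize(patient_id: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash so images can be
    linked for research without exposing who the patient is. Only a
    holder of the key can regenerate (and thus re-link) the token."""
    digest = hmac.new(secret_key, patient_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]  # shortened token for file names / labels

key = b"example-key-kept-in-a-secure-vault"  # illustrative only
token = pseudonymize("MRN-123456", key)
print(token)  # same identifier + same key always yields the same token
```

Note that stripping embedded metadata (e.g. EXIF fields in photographs) would also be needed in practice, since dermatologic images can be identifying on their face.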

The authors also address the emerging issue of ambient listening devices in clinical settings. Health care professionals must ensure HIPAA compliance and obtain patients' informed consent before using ambient listening technologies. Providers should explain how ambient listening can improve care quality and reduce clinician burnout, while patients should be informed of clear policies regarding data retention, access, and use. Transparency about the purpose, scope, and data handling procedures of these technologies is critical.

TL;DR: AI-based dermatology tools raise serious data privacy questions. Patient images require encryption, deidentification, and strict HIPAA-compliant storage. Ambient listening devices in clinical settings add another layer of consent and transparency requirements.
Page 2
Accountability, Liability, and the Path Forward for AI Regulation

The use of AI in melanoma detection raises unresolved questions about accountability and duty when errors occur. The authors call for clear standards defining the responsibilities of AI developers, health care institutions, and individual physicians in the event of a misdiagnosis or missed diagnosis. Without these standards, the legal landscape for AI-assisted dermatology remains uncertain.

Bias and validation are also highlighted as ongoing concerns. The authors stress that continuous evaluation and validation of AI systems across broad, diverse patient groups are required to ensure effectiveness across different skin types and to reduce any built-in biases. AI models trained primarily on lighter skin tones may underperform on darker skin, potentially widening existing health disparities rather than narrowing them.
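The kind of stratified validation the authors call for can be sketched as computing sensitivity separately per skin-type group, so a performance gap is visible rather than averaged away. The Fitzpatrick-style group labels and the tiny case list below are fabricated for illustration only:

```python
from collections import defaultdict

def sensitivity_by_group(cases):
    """cases: (skin_type, is_melanoma, model_flagged) tuples.
    Returns sensitivity per group, making performance gaps explicit."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, is_melanoma, flagged in cases:
        if is_melanoma:           # sensitivity only counts true melanomas
            totals[group] += 1
            hits[group] += int(flagged)
    return {g: hits[g] / totals[g] for g in totals}

# Fabricated validation cases: the model catches most melanomas on
# lighter skin (I-II) but misses most on darker skin (V-VI)
cases = [
    ("I-II", True, True), ("I-II", True, True),
    ("I-II", True, True), ("I-II", True, False),
    ("V-VI", True, True), ("V-VI", True, False),
    ("V-VI", True, False), ("V-VI", True, False),
]
print(sensitivity_by_group(cases))  # → {'I-II': 0.75, 'V-VI': 0.25}
```

A single pooled sensitivity over these cases would read 50%, hiding the disparity; reporting per-group metrics is what surfaces it.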

Looking ahead, the paper calls for continued research, regulatory guidance, and open communication among health care professionals, patients, and technology developers. The goal is to realize AI's full promise in dermatology while upholding the highest standards of patient care and ethical practice. The authors position this not as a future aspiration but as an urgent present need, given that several AI devices are already FDA-approved and entering clinical use.

TL;DR: Clear liability frameworks are needed for AI developers, institutions, and clinicians when diagnostic errors occur. Ongoing validation across diverse skin types is essential to prevent AI from worsening health disparities, and regulatory guidance must keep pace with rapidly advancing technology.