The U.S. Food and Drug Administration (FDA) has sketched out details to help improve transparency in AI software product information, according to an article published January 26 in npj Digital Medicine.
In a commentary, FDA scientists noted that few AI software products on the market currently include information that makes sense to consumers regarding how the algorithms work, and added that designs for product information should take a “human-centered” approach.
To that end, the group proposed a new definition of transparency based on workshop discussions with stakeholders held over the past three years, noted lead author Aubrey Shick, of the agency’s Center for Devices and Radiological Health (CDRH), and colleagues.
“Transparency is the degree to which appropriate information about a device – including its intended use, development, performance, and, when available, logic – is clearly communicated to stakeholders,” the group wrote.
The FDA is reviewing an increasing number of applications for AI or machine learning (ML) medical devices, with the number receiving marketing clearance nearing 700 as of October 2023, according to the authors.
However, research suggests that most of these devices currently lack public data backing up the claims in their product information, and that most patients and caregivers have little knowledge of how AI/ML devices may affect their health and health care, the group added.
In January 2021, the FDA launched an action plan for AI/ML devices that included a focus on improving their transparency, and it followed up with a workshop on the issue in October 2021. Beyond the new definition of transparency, takeaways from the workshop included perspectives on improving transparency from patients, healthcare providers, researchers, industry members, regulators, and payors.
“Taken together, the varied feedback provided by stakeholders reveals the opportunity for a human-centered approach to the transparency of AI/ML devices,” the authors wrote.
Moving forward, the FDA plans to integrate these perspectives as it engages in regulatory science efforts, the authors added.
“Workshop attendees identified that improving the transparency of AI/ML devices, especially concerning the communication of training, validation, and real-world performance, continues to be an area in need of further growth,” Shick and colleagues concluded.
The full article is available here.