Artificial Intelligence (AI) and Machine Learning (ML) are bringing healthcare to a new frontier with vast potential to improve clinical outcomes, manage resources and support therapeutic development. They also raise ethical, legal, and operational conundrums that can, in turn, amplify risk.
Where are AI and ML today? Go, stop, go.
2023 brought a rollercoaster of activity marked by tremendous advances and a reckoning with their implications, prompting efforts to contain runaway expansion. After watching AI technology grow at high speed, many industry leaders called for a halt to further advances for at least six months, only to see others continue to capitalize on target-rich opportunities. This push-and-pull reflects the need for care in how AI/ML is invested in and used.
Activity at the government level is also evolving rapidly. In late 2022, the White House released a “Blueprint for an AI Bill of Rights” to guide the design, use, and deployment of automated systems, prioritizing civil rights and democratic values. On April 3, 2023, the FDA issued draft guidance to develop the agency’s regulatory framework for AI/ML-enabled device software functions. The guidance proposes an approach to ensuring the safety and effectiveness of AI/ML that uses adaptive mechanisms to incorporate new data and improve over time. Given the lack of comprehensive federal AI legislation, states have been active in developing privacy legislation. Additionally, to align patient-centric and healthcare-related AI standards, the Coalition for Health AI released its “Blueprint for Trustworthy AI Implementation Guidance and Assurance for Healthcare” in early April 2023.
These accelerated developments have prompted calls to action internationally. Italy temporarily banned ChatGPT in April and launched an investigation into the app’s suspected GDPR violations. Spain, Canada and France raised similar concerns and opened their own investigations. EU lawmakers have called for an international summit and new AI rules, including the proposed AI Act. Consequently, oversight and accountability practices for AI/ML technology are increasingly becoming a regulatory priority.
Key areas of AI growth
- Service customization: AI has the potential to detect disease and guide treatment, consolidating current medical research and treatment capabilities in real time. The predictive elements of AI technologies can project treatment outcomes, which can improve the quality of care and minimize costs. Examples of patient-specific applications include: predictive analytics to determine patient outcomes with high accuracy, personalized provider matching based on modeled variations in provider outcomes and a patient’s specific diagnoses, and timely clinical intervention triggered by AI decision tools that monitor wearable data. AI’s ability to detect patterns is especially useful in medical imaging, as pattern recognition supports disease diagnosis and prognosis. Non-clinical AI can help streamline workflow, monitor hospital bed availability and readmission rates, and identify health equity gaps.
- Early detection and diagnosis: AI algorithms can accurately detect and diagnose serious illnesses such as ALS, kidney failure and Alzheimer’s years before a conventional diagnosis can be made. AI detection capabilities have also been implemented in the general wellness space, including for monitoring sleep, diet and mental health, which can lead to earlier detection of related illnesses and more effective treatment. AI algorithms have been shown to predict diabetes with an accuracy of up to 90% and to achieve clinical accuracy comparable to the average physician when diagnosing written test cases.
- Therapeutic development and discovery: AI can examine and analyze large amounts of digitized pharmaceutical information to solve complex clinical problems. Consequently, there has been a notable increase in partnerships between traditional pharmaceutical companies and AI-driven companies. AI is especially relevant in drug discovery, screening and molecular design; clinical trial design; and pharmaceutical manufacturing.
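The predictive applications described above can be made concrete with a minimal sketch: a logistic-regression risk model trained by gradient descent on synthetic patient features. Everything here is hypothetical and for illustration only — the features, thresholds, and data are invented, and a real clinical model would require validated data, rigorous evaluation, and regulatory review.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(rows, labels, lr=0.1, epochs=500):
    """Fit a logistic-regression risk model with plain stochastic gradient descent."""
    n_features = len(rows[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log-loss w.r.t. the linear score
            for i in range(n_features):
                w[i] -= lr * err * x[i]
            b -= lr * err
    return w, b

def predict_risk(w, b, x):
    """Return a probability-like risk score in (0, 1)."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Hypothetical, synthetic "patients": [normalized glucose, normalized BMI].
random.seed(0)
patients, outcomes = [], []
for _ in range(200):
    glucose, bmi = random.random(), random.random()
    # Synthetic ground truth: high glucose plus high BMI drives the label.
    outcomes.append(1 if glucose + bmi > 1.0 else 0)
    patients.append([glucose, bmi])

w, b = train_logistic(patients, outcomes)
high = predict_risk(w, b, [0.9, 0.9])  # hypothetical high-risk profile
low = predict_risk(w, b, [0.1, 0.1])   # hypothetical low-risk profile
print(f"high-risk score: {high:.2f}, low-risk score: {low:.2f}")
```

The point of the sketch is only the shape of the workflow — train on labeled outcomes, then score new patients — which is the core of the predictive analytics the article describes.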
Legal and industry considerations
While AI/ML technology aims to deliver “smarter” care, the patient-professional relationship remains, to date, crucial to ensuring that patients receive appropriate healthcare. The growth of AI in healthcare and life sciences has also brought new legal and regulatory considerations, especially in the areas of:
- FDA and SaMD: The use of AI algorithms in, or to assist with, clinical decision-making could bring the technology under the FDA’s regulatory authority if it meets the definition of a “medical device”. The FDA has developed a framework for regulating AI/ML-enabled medical devices, including AI/ML-based technologies that qualify as “Software as a Medical Device” (SaMD). As the technology evolves and public interest grows, the FDA remains active in issuing guidance on these topics.
- Ethics and research: As AI applications expand into services traditionally performed by licensed physicians, questions about the unlicensed practice of medicine may arise. The use of patient data in the development and testing of AI technologies may also require informed consent and trigger IRB oversight. The need for human oversight, or the lack thereof, will likely remain an ongoing concern as AI proliferates, especially to monitor AI’s tendency to generate incorrect results and cause unnecessary or incorrect care. Additionally, malicious or unintended AI applications, such as biohacking, bioweapon development, and the weaponization of health information, demand proactive safeguards and surveillance to ensure proper oversight.
- Intellectual property and data assets: Healthcare innovators in the AI/ML space face a shifting IP climate, as AI/ML output may not receive the same protections as traditional works. Copyrights and patents, for example, cannot attach to output that is not the work of a human author or inventor. Rights to data assets, such as the raw and derived data underlying AI algorithms, also require monitoring.
- Privacy and data rights: Healthcare privacy laws and regulations apply at both the federal and state levels. Patient information may be protected under HIPAA and state law and may need to be de-identified before it can be shared and used to develop AI/ML products. Furthermore, consumer privacy laws and private lawsuits over data rights give individuals a basis to monitor, and potentially object to, the use of their personal data in AI development.
- Reimbursement and coverage: The use and deployment of AI by healthcare providers and entities depends largely on financial incentives, including the rate of reimbursement based on new AI iterations of an innovation and whether AI services will be covered by payers. As the industry moves towards value-based care, AI can offer additional tools and opportunities.
- Possible biases and inaccuracies: Despite the innovative and revolutionary potential of AI/ML technologies, their algorithms detect patterns using human-annotated data, which may be (1) drawn from outdated, homogeneous, or incomplete datasets, and (2) likely to reproduce and perpetuate racial, gender and even age-based prejudices. As a result, there is an increased focus on diversifying and expanding medical datasets to identify and mitigate these potential biases.
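A common first step toward the dataset auditing that the last point calls for is a simple subgroup error-rate comparison: checking whether a model misclassifies one demographic group far more often than another. The sketch below uses entirely synthetic records and invented group labels, purely to illustrate the idea.

```python
from collections import defaultdict

def subgroup_error_rates(records):
    """Compare misclassification rates across demographic subgroups.

    Each record is (group, true_label, predicted_label); a large gap
    between groups signals that the training data may under-represent
    one of them and warrants investigation.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical audit data: the model errs far more often on group "B".
records = (
    [("A", 1, 1)] * 90 + [("A", 1, 0)] * 10   # 10% error rate for A
    + [("B", 1, 1)] * 70 + [("B", 1, 0)] * 30  # 30% error rate for B
)
rates = subgroup_error_rates(records)
print(rates)
```

A gap like this does not by itself prove bias, but it is the kind of measurable signal that motivates the push to diversify and expand medical datasets.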
A crucial moment
The tension between rapid advances in AI development and the calls to pause it has brought the growth of AI/ML to a crucial juncture. As industry and governments recognize AI’s enormous potential and risks, it is critical to monitor developments closely to ensure that innovation accelerates societal benefit while mitigating unintended harm.
While there is uncertainty and risk, implementing AI with the right compliance framework and infrastructure offers an exciting opportunity to transform healthcare into a new frontier with better patient outcomes and greater efficiency.