The intersection of healthcare and cutting-edge technology is producing groundbreaking change. One such trailblazing technology is Generative Artificial Intelligence (GenAI), which is reshaping several industries, and healthcare has not remained untouched by its transformative power.
We recently sponsored a thought-provoking interview, conducted exclusively by Emerj Artificial Intelligence Research, in which Sina Bari of iMerit and Milind Sawant of Siemens Healthineers took a deep dive into the transformative potential of GenAI and shared their expertise on how GenAI solutions are poised to reshape the future of healthcare.
Before we move forward, let us get to know the two industry experts.
Sina Bari, AVP, iMerit Technology, is a reconstructive surgeon trained at Stanford and experienced in the medical device and information technology spaces. He also has expertise in AI, Machine Learning, Data Operations, and Healthcare Information Technology, and he manages the strategy and growth of the medical division at iMerit.
Milind Sawant is the founder and lead of the AI/ML & DFSS Center of Excellence at Siemens Healthineers and a senior R&D executive with extensive experience in Agile methodologies, DFSS, and integrating AI/ML with medical systems. With over 20 years of experience, he specializes in AI/ML integration for global platform products and in driving cost savings.
GenAI for Healthcare
The healthcare industry faces many scaling challenges, such as data management, technology integration, streamlining administrative processes, regulatory compliance, cost management, optimizing patient care, and more.
GenAI is a transformative technology built on large language models (LLMs) and deep learning algorithms. With GenAI, healthcare providers can potentially improve diagnostic accuracy, streamline record-keeping, and enhance patient engagement.
Sina Bari and Milind Sawant discuss cost-saving measures for scaling GenAI in healthcare. Here are some of their key insights:
- Leverage the language processing abilities of LLMs to develop intermediary interfaces.
- Simplify healthcare policy guidance by using LLMs to give prompt and precise answers to specific questions about healthcare insurance policies.
Developing Intermediary Interfaces for Proprietary Data
LLMs are known for their language processing abilities, such as understanding queries, searching through records, and generating responses. Both experts understand the cost and data security challenges of training LLMs. Milind discusses the issue of proprietary data and why companies hesitate to use their own data: most companies fear that their data will be used to train the LLM and could become accessible to their competitors.
Milind presents a prompt engineering strategy to address companies' proprietary data concerns, suggesting the development of an intermediary interface instead of directly training the LLM on sensitive data. Such a system would let employees upload HR documents such as PDFs, Excel files, and more, and process them through a cognitive search system, which then extracts the relevant information from the documents and HR policies.
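To make the idea concrete, here is a minimal, hypothetical sketch of such an intermediary interface: the proprietary documents stay in a private search index, and only the retrieved excerpts are passed to the LLM at question time. The class names, the keyword-overlap scoring, and the `ask_llm` callable are illustrative placeholders rather than any particular vendor's API.

```python
# Hypothetical sketch of an intermediary interface over proprietary HR documents.
# Sensitive files live only in a private search index; the LLM never trains on
# them and sees only the excerpts retrieved for a given question.
from dataclasses import dataclass


@dataclass
class Snippet:
    source: str  # e.g. "health_plan_2024.pdf" (illustrative file name)
    text: str    # passage extracted from the uploaded document


class CognitiveSearchIndex:
    """Stand-in for a cognitive search service over uploaded PDFs, Excel files, etc."""

    def __init__(self) -> None:
        self._snippets: list[Snippet] = []

    def add(self, snippet: Snippet) -> None:
        self._snippets.append(snippet)

    def search(self, query: str, top_k: int = 3) -> list[Snippet]:
        # Toy relevance score: number of words shared with the query.
        def score(s: Snippet) -> int:
            return len(set(query.lower().split()) & set(s.text.lower().split()))
        return sorted(self._snippets, key=score, reverse=True)[:top_k]


def answer_policy_question(index: CognitiveSearchIndex, question: str, ask_llm) -> str:
    """Retrieve relevant policy excerpts and let the LLM answer from them alone."""
    context = "\n\n".join(s.text for s in index.search(question))
    prompt = (
        "Answer strictly from the policy excerpts below. "
        "If the answer is not in them, say so.\n\n"
        f"Excerpts:\n{context}\n\nQuestion: {question}"
    )
    return ask_llm(prompt)  # ask_llm is any callable wrapping the LLM of choice
```

In production, the keyword matching would be replaced by a real cognitive search or vector retrieval service, but the separation of concerns stays the same: the LLM only ever sees the excerpts it needs, never the full document store.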
Sina raises the question of whether LLMs should be hosted on-premise or in a single-tenant environment. He emphasizes the importance of incorporating patient health data to achieve accuracy in precision health, and he discusses the intricacy of patient data, noting that it involves more than just ID recognition and that securing comprehensive health histories poses significant challenges. Sina articulates that patient health issues are inherently tied to their historical continuum, making it crucial to uphold stringent data security measures.
“When it comes to the entire patient health data, it is not simply a matter of identification. There is no way to truly anonymize if I tell you their whole health history, which is unique and potentially identifiable information, even when you have the name and medical record number out. But that is what gets us towards precision health because every health problem happens within the context of a longitudinal patient history. So to work with all that data requires that we maintain the most stringent data security policies.”
– Sina Bari, AVP, iMerit Technology
Shifting the discussion towards regulatory hurdles in the AI and ML landscape, Milind points out that regulatory bodies like the FDA have been adapting their guidance to keep pace with the swift progress of technology. In his view, these regulations ensure safety in the healthcare industry, where lives are at risk. Milind notes that, as a manufacturer of medical products distributed globally, Siemens Healthineers must adhere to a range of regional regulations and standards extending beyond the scope of the FDA.
He also emphasizes that FDA guidance documents have evolved over time, focusing on how ML models and new technologies are upgraded and on the potential risks associated with those upgrades. The aim is to prevent any harm to patients from technological advancements. However, Milind acknowledges that despite substantial progress, regulatory bodies still have considerable ground to cover in a swiftly evolving technology landscape.
On the other hand, Sina highlights iMerit's involvement in assisting products through FDA regulatory approval via pathways such as the 510(k). He also points out that, despite the FDA's emphasis on metrics such as test data representation, the specifics of training data still need to be addressed. In his view, the FDA may eventually require a defined level of expertise and a digital signature on training data.
Sina elaborates on the significance of developing a Gold Set through agreement and mediation for validating AI models. He explains that the criteria for good performance can differ based on content and expertise levels. He recognizes the leadership of companies such as Siemens in establishing a framework for testing AI applications in healthcare, noting its extension to other domains as they evolve.
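As a rough illustration of the agreement-and-mediation idea (not iMerit's or Siemens' actual process), a Gold Set can be assembled by accepting the cases on which annotators reach consensus and routing disagreements to a senior adjudicator; the labels, case IDs, and agreement threshold below are hypothetical.

```python
# Illustrative sketch: build a gold set from multiple expert annotations.
# Consensus labels are accepted; contested cases go to mediation.
from collections import Counter


def build_gold_set(annotations: dict[str, list[str]], min_agreement: float = 0.75):
    """annotations maps a case ID to the labels assigned by each expert."""
    gold, needs_adjudication = {}, []
    for case_id, labels in annotations.items():
        label, count = Counter(labels).most_common(1)[0]
        if count / len(labels) >= min_agreement:
            gold[case_id] = label                # consensus reached
        else:
            needs_adjudication.append(case_id)   # escalate to a senior reviewer
    return gold, needs_adjudication


gold, disputed = build_gold_set({
    "scan_001": ["nodule", "nodule", "nodule", "clear"],
    "scan_002": ["nodule", "clear", "clear", "nodule"],
})
# gold == {"scan_001": "nodule"}; disputed == ["scan_002"]
```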
Simplifying Healthcare Policy Guidance
Milind addresses the capabilities of LLMs, along with the prevalent hype and expectations these models carry. With a practical example of GenAI, he delves into a common organizational scenario in which employees navigate healthcare-related HR policies. Typically, this would involve sorting through several documents or contacting HR partners. LLMs, however, can simplify this process by allowing employees to ask specific questions about their healthcare coverage, such as whether scuba diving is covered, and receive prompt, accurate responses.
Furthermore, Milind emphasizes a significant organizational challenge related to AI: the need for a comprehensive understanding of AI capabilities and limitations throughout the enterprise. Imagine a scenario in the pharmaceutical industry where pressure from top management to use GenAI does not align with the actual requirements of the project. This example highlights a gap in understanding between technology teams and decision-making management.
Sina aligns with Milind concerning the swift rise and subsequent decline of the excitement surrounding GenAI. He examines a specific case in radiology where LLMs could be utilized to interpret model outcomes into reports and highlights the significance of reinforcement learning in developing these reports. Sina draws attention to the limitation of conventional computer vision models that solely depend on radiologic data, emphasizing the necessity for a more comprehensive, multimodal approach that considers clinical contexts, lab values, and clinical history.
“The problem I am seeing is a lack of understanding that makes people think AI can solve all problems. One of my friends in the pharmaceutical industry had a customer saying, “Can we please use generative AI because my boss is asking me to use it in our workflow?” Their use cases did not need generative AI; they could use a traditional machine learning algorithm. However, they are under pressure from top management, who may or may not understand the power, strengths, and weaknesses of GenAI. They are forcing generative AI because they want to tell their top boss they are using it, and now are struggling to find a use case.”
– Milind Sawant, Founder & Lead, AI/ML & DFSS Center of Excellence, Siemens Healthineers
Moreover, Sina highlights the need to build flexibility and modularity into AI solutions, recognizing the rapidly changing dynamics of the field. He advises against adopting static solutions that could quickly become outdated in this dynamic landscape. Sina advocates a problem-driven approach to technology adoption, especially in the context of LLMs in healthcare, recommending that AI development be structured around a well-defined problem, with technology harnessed as the solution.
A Cost-Effective Strategy to Tackle Scaling Challenges
Milind stresses the need for a comprehensive, multimodal approach to patient diagnosis, drawing an analogy to how physicians consider several factors and data points to reach a diagnosis. He emphasizes the significance of integrating multifaceted perspectives into AI tools to facilitate effective diagnostics. He states that a single parameter or data source is insufficient to address the complexity of diagnosing patients.
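As a simplified, hypothetical illustration of that multimodal point, the fragment below fuses imaging-derived features with lab values and history flags into a single classifier input; the data is random placeholder data, and the feature names and model choice are assumptions for illustration only.

```python
# Toy sketch: combine several data sources into one representation instead of
# relying on a single parameter. All data here is synthetic placeholder data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200

imaging_features = rng.normal(size=(n, 8))        # e.g. embeddings from an imaging model
lab_values = rng.normal(size=(n, 4))              # e.g. normalized lab panel results
history_flags = rng.integers(0, 2, size=(n, 3))   # e.g. prior conditions as binary flags

X = np.hstack([imaging_features, lab_values, history_flags])  # fused multimodal input
y = rng.integers(0, 2, size=n)                    # placeholder diagnostic labels

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict_proba(X[:1]))                   # prediction informed by all modalities
```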
Transitioning to clinical decision support systems, Milind addresses several critical challenges, such as mitigating bias and ensuring patient data is used only with explicit consent. He also mentions the potential legal complexities involving insurance companies, such as lawsuits related to patient data.
He also explores the issue of patient privacy, acknowledging concerns about data misuse or unintentional leaks. Despite these worries, he recognizes the genuine willingness of patients to contribute to medical knowledge and treatment options. Patients prefer keeping their names and specific health information confidential on public platforms such as social media but are often open to sharing anonymized data to advance medical understanding.
Echoing Milind's concerns about data silos and privacy, Sina presents a two-point strategy to tackle these challenges, focusing on cost-effective scaling of expertise. He mentions the following techniques:
Harnessing Global Expertise
He proposes the utilization of domain experts in different regions such as Latin America, India, and Africa to integrate insights from various sources and enhance collective knowledge. Emphasizing the importance of efficient communication tools among these international experts, Sina adds that despite technology disparities, physicians commonly share standard textbooks and knowledge sources. This shared foundation can help bridge technological gaps.
Optimizing Resources and Automation
He compares data structuring to the surgical principle of achieving efficiency by moving quickly through fast tasks and meticulously addressing slow ones: expert resources should be allocated thoughtfully according to task complexity, with automation incorporated where practical.
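A small, hypothetical sketch of that principle applied to a data structuring pipeline: items the automation can handle confidently take the fast path, while complex or multimodal items go to expert review. The field names and confidence threshold are assumptions for illustration.

```python
# Route each work item by complexity: automate the easy ones, reserve experts
# for the hard ones. Fields and threshold are illustrative placeholders.
def route_annotation_task(item: dict, auto_queue: list, expert_queue: list,
                          threshold: float = 0.8) -> None:
    """Send high-confidence, single-modality items to automation; the rest to experts."""
    if item.get("model_confidence", 0.0) >= threshold and not item.get("multimodal", False):
        auto_queue.append(item)     # cheap, fast path
    else:
        expert_queue.append(item)   # slower, meticulous expert review


auto, expert = [], []
route_annotation_task({"id": "rec_1", "model_confidence": 0.93}, auto, expert)
route_annotation_task({"id": "rec_2", "model_confidence": 0.55, "multimodal": True}, auto, expert)
# auto holds rec_1; expert holds rec_2
```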
Conclusion
We hope you found this article insightful. The industry leaders delved into topics such as the challenges of using Large Language Models (LLMs), addressing proprietary data concerns, navigating regulatory hurdles, and building an understanding of AI capabilities across organizations. They also emphasized the importance of a problem-driven approach to technology adoption, of integrating multifaceted perspectives into AI tools for effective diagnostics, and of safeguarding patient privacy. Finally, they proposed strategies built on global expertise and resource optimization to address scaling challenges cost-effectively.