09/21/23 – A Pioneering Code of Conduct for AI in Healthcare Unveiled by National Academy of Medicine
As a pioneer in the realm of clinical AI, Bayesian Health is both invigorated and deeply aligned with a recent multi-phased initiative spearheaded by the National Academy of Medicine (NAM) Leadership Consortium. With Dr Suchi Saria (Bayesian Health’s CEO) serving as a founding member of the steering committee, this ambitious project aims to establish an Artificial Intelligence Code of Conduct (AICC) for the healthcare sector. Far more than a mere initiative, the AICC serves as a guiding framework that resonates with Bayesian’s unwavering commitment to efficacy and responsible AI use. This endeavor underscores the pivotal role that Bayesian approaches are playing in setting the standard for accurate, safe, reliable, and ethical AI applications in healthcare.
About NAM Leadership Consortium
The National Academy of Medicine (NAM) Leadership Consortium stands as a trusted platform and cornerstone for interdisciplinary dialogue, uniting national leaders across diverse sectors. With a focus on aligning science, informatics, incentives, and culture, the Consortium is committed to fostering a healthcare system that is both innovative and continuously learning. It serves as a linchpin in streamlining the various components essential for improving healthcare outcomes for all, truly embodying the ideal of crafting a healthcare system that benefits everyone.
The Imperative for AICC
With Dr Suchi Saria helping to lead the steering committee, the Artificial Intelligence Code of Conduct (AICC) stands as a timely and imperative answer to the rapid expansion of AI technologies in healthcare. Addressing the pressing need for a cohesive set of best practices and ethical guidelines, the AICC’s two-fold vision aims to harmonize existing ethical principles while pinpointing gaps. In doing so, it seeks to establish a universally respected Code of Conduct that not only serves as a starting point but also as a yardstick for ongoing refinement, testing, validation, and improvement in the ever-evolving landscape of healthcare AI.
AICC Vision
The project aims to catalyze collective action to harness the potential of AI in transforming healthcare delivery and advancing health research, all while upholding high standards of ethics, equity, privacy, and security.
Timeline and Governance
Scheduled to last for three years, the AICC project has a roadmap that includes steering committee meetings, webinars, and final hybrid discussions. The steering committee, composed of thought leaders in multiple fields, will guide the project to ensure it meets its intended aims and earns broad stakeholder support.
Comprehensive Outputs
Among its key deliverables, AICC plans to offer a harmonized Code of Conduct, a comprehensive landscape assessment, a systematic review of guidelines, and a final special publication outlining the Code of Conduct framework for deployment in the healthcare system.
Global Reach
While primarily based on U.S. experience, the initiative intends to be informed by international efforts, making it globally relevant.
Synergistic Reinforcement
NAM is also focused on aligning its work with other initiatives in the field, reinforcing and being informed by other projects to ensure the broad adoption of the Code.
Why Bayesian Health is Intrinsically Involved
For Bayesian Health, the AICC initiative serves as a vital framework that aligns seamlessly with our core values and ambitions, particularly under the strategic guidance of Dr Suchi Saria on the steering committee. Deeply invested in ethical and effective clinical AI, we see the AICC as an essential guide to fine-tuning our practices in concert with broader industry standards.
With backing from various foundations and a commitment to include a diverse set of stakeholders, the project promises to have profound implications for the future of healthcare. Stay engaged with us as we actively participate in this pivotal initiative, shaping and anticipating the transformative impact it will have on making AI a reliable and ethical force in healthcare.
Learn More About the NAM Leadership Consortium
09/12/23 – Recognizing Sepsis Awareness Month 2023: Every Second Counts
September is Sepsis Awareness Month, a critical period for shedding light on a condition that remains the leading cause of hospital deaths in the United States. Initiated by the Sepsis Alliance in 2011, this annual awareness campaign brings together individuals, healthcare professionals across various disciplines, and organizations large and small to amplify the urgency of understanding and combating sepsis.
The Staggering Statistics
Sepsis is a severe and often fatal response to infection, affecting a staggering 1.7 million people and causing the death of 350,000 adults in the U.S. each year. What’s more alarming is that research indicates up to 80% of sepsis-related deaths could potentially be prevented with swift diagnosis and treatment. For every hour that treatment is delayed, the risk of mortality rises by approximately 8%. These numbers underline the imperative nature of education and awareness about sepsis.
At the Core of Bayesian Health’s Mission
Early detection and timely intervention are not just statistical factors; they lie at the very heart of Bayesian Health’s mission. Our commitment to driving efficacy in clinical AI directly aligns with the pressing need for rapid evaluation and treatment of sepsis. By developing tools that empower healthcare providers to diagnose sepsis early, we’re not just innovating; we’re striving to save lives.
Act Now: Resources for Awareness
Our team at Bayesian Health advocates for effective and immediate action to raise sepsis awareness. We strongly recommend taking advantage of the following resources to escalate public understanding of sepsis in your community:
- Sepsis Alliance – Sepsis Awareness Month Toolkit: This comprehensive guide provides in-depth information and ideas for promoting awareness. Download it here.
- The National Association of County and City Health Officials (NACCHO): They offer a variety of materials and educational pieces on how you can get ahead of sepsis. Visit their website for more information.
Awareness is the first step towards a significant impact. This Sepsis Awareness Month, let’s unite to disseminate knowledge, drive timely interventions, and ultimately, save lives.
Spread the word, because when it comes to sepsis, every second counts.
03/05/23 – Reducing Variability of Care with Bayesian’s AI Platform
Inconsistent and variable care delivery can lead to negative consequences for hospitals, patients, and payers. Bayesian’s AI platform is designed to help reduce variability in care, improve clinical outcomes, and enhance overall patient satisfaction.
Defining Variability
Variability of care refers to differences in the way patients are treated and the outcomes they experience, driven by a wide array of factors such as demographics, which hospital they have access to, and the care practices employed in their treatment. Many factors contribute to variability, including a lack of evidence-based protocols and fragmented care delivery systems.
The Impact of Variable Care Practices
Variability of care can lead to unequal access to care, inconsistent quality of care, and higher costs for patients, hospitals and payers. These can affect patient satisfaction and create a lack of trust in the healthcare system. Payers also face financial consequences from variability in care, as they may have to cover the cost of additional treatments, longer hospital stays, or readmissions.
How Bayesian’s AI Platform Addresses Variability
Bayesian’s AI platform is designed to address variability in care by providing clinicians with real-time insights and personalized treatment recommendations. The platform uses an adaptive, modular framework that considers a patient’s unique physiology, clinical protocol, provider workflow, and hospital operations. This approach helps clinicians make more informed decisions and reduces the likelihood of variability in care delivery for a wide range of critical condition areas such as sepsis, all-cause deterioration, pressure injuries, and transitions of care, just to name a few.
Practical Uses of Bayesian’s Clinical AI to Standardize Care Practices:
- Bayesian’s AI platform can be tailored to different clinical settings and hospital needs, providing a personalized approach to care.
- The multi-modal platform identifies patterns and trends in patient data that traditional practices/methods may miss.
- Bayesian provides access to evidence-based guidelines and best practices, leading to better treatment decisions and outcomes.
- By automating certain tasks, the platform reduces the time it takes to provide care, improving patient outcomes and reducing costs associated with unnecessary procedures and extended hospital stays.
Our Unique Approach
Unlike previously studied models, Bayesian’s approach is clinically grounded and designed to think like a clinician. Clinicians use the platform as an extra set of eyes and ears, making it an effective tool for improving the overall quality of care. The platform uses Bayesian statistical models to analyze patient data, such as medical history, lab results, and vital signs. These models allow clinicians to identify potential risks and predict patient outcomes based on data from similar patients. The platform also provides real-time feedback to clinicians on their treatment plans, alerting them to any potential issues or opportunities for improvement.
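To make the general idea concrete, here is a minimal, illustrative sketch of a Bayesian risk update. This is not Bayesian Health’s actual model: the function name, the Beta-Binomial assumption, and the example numbers are all hypothetical. It simply shows how a prior estimated from similar historical patients can be combined with new evidence about the current patient.

```python
# Illustrative only: a minimal Beta-Binomial update, not Bayesian Health's
# proprietary model. It shows how a prior belief about a patient's risk
# (estimated from similar historical patients) can be combined with new
# observations to produce an updated (posterior) risk estimate.

def posterior_risk(prior_events, prior_patients, new_events, new_observations):
    """Return the posterior mean risk under a Beta-Binomial model.

    prior_events / prior_patients  -- adverse events among similar historical patients
    new_events / new_observations  -- evidence accrued for the current patient
    """
    alpha = 1 + prior_events + new_events                                   # Beta(1, 1) uniform prior
    beta = 1 + (prior_patients - prior_events) + (new_observations - new_events)
    return alpha / (alpha + beta)

# Example: 12 of 200 similar patients deteriorated; the current patient has
# shown 2 concerning readings out of 5 monitored intervals so far.
print(f"Updated risk estimate: {posterior_risk(12, 200, 2, 5):.3f}")
```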
Benefits of Reducing Variability
Variability in care can have negative consequences for hospitals, patients, and payers. However, with Bayesian’s AI platform, clinicians can have access to real-time insights and personalized treatment recommendations that can help reduce variability in care delivery. By using Bayesian’s AI platform, hospitals can improve clinical outcomes, enhance patient satisfaction, and reduce costs associated with longer hospital stays or readmissions.
1/03/23 – Duality – Data and Trust: Digital Transformation – The Impact of Machine Learning in Healthcare
An informative discussion between Prof. Shafi Goldwasser, Chief Scientist and Co-Founder of Duality, and Suchi Saria, Founder of Bayesian Health, explores where digital transformation and healthcare meet, and how their intersection can impact and improve care and lead to better outcomes.
Saria asks, can the public place their trust in advanced algorithms?
The answer is yes – but only after the public becomes more aware and educated about how models can read data objectively and help healthcare providers make critical decisions. The public should also be educated about how technology is extensively and rigorously validated to help blend human ingenuity with machine-driven decision making.
Once the digital transformation of healthcare is complete, and if it is done well, Saria hopes to see a significant reduction in mortality rates as well as in healthcare costs. That goal is within reach in as little as 10 years, driven by innovative technology providers with deep expertise in statistics, machine learning, and data privacy to optimize patient care.
The current practice of medicine is incredibly biased — because its policies, procedures, technologies and people are all implicitly biased. Though there has been ongoing attention to explicitly biased individuals and processes in healthcare, there are also long-standing policies, procedures, and technologies that have ingrained implicit bias.
Recently, many have wondered if the introduction of artificial intelligence and machine learning (AI/ML) technologies in the healthcare setting will result in increased bias and harm. It is possible: when AI/ML solutions use inherently biased studies, policies, or processes as inputs, the technology will, of course, produce biased outputs. However, AI/ML technology can be key to making the practice of medicine fairer and more equitable. When done right, AI/ML technology has the potential to greatly reduce bias in medicine by flagging insights or critical moments that a clinician might not see. To create technology that better serves at-risk and underserved individuals and communities, technologists and healthcare organizations must actively work to minimize bias when creating and deploying AI/ML solutions. They can do so by leveraging the following three strategies:
- Creating a checklist that evaluates potential sources of bias and what groups may be at risk for inequity,
- Proactively evaluating models for bias and robustness; and
- Continuously monitoring results and outputs over time.
Understanding why healthcare is biased and the sources of bias
Bias enters healthcare in a variety of ways. Depending on the way medical instruments were developed, they may not account for a variety of races. For example, pulse oximetry is more likely to miss hypoxemia (as measured by arterial blood gas) in Black patients than white patients. This is because pulse oximeters were developed and calibrated on light-skinned individuals; since a pulse oximeter reads light passing through the skin, it is not surprising that skin color can affect readings.
Policies and processes can also hold inherent bias. Many organizations prioritize patients for care management using models that predict a patient’s future cost, based on the assumption that patients with the highest healthcare costs also have the greatest needs. The issue with this assumption is that Black patients tend to generate lower healthcare costs than white patients with the same level of comorbidities, likely because they face more barriers to accessing health care. As a result, resources may be misallocated to patients with lower needs (but higher predicted cost).
Historical studies have also led to inequities in care. Interpretation of spirometry data (for lung capacity) creates unfairness because Black people are assumed to have 15% lower lung capacity than white people, and Asian people are assumed to have 5% lower. These “correction factors” are based on historical studies that conflated average lung capacity with healthy lung capacity, without accounting for socioeconomic distinctions. Lung capacity tends to be reduced for individuals who live near busy roads, and living near busy roads is correlated with belonging to a disadvantaged ethnic group.
These care disparities have a significant impact. For example, sepsis, a condition that causes over 300,000 deaths per year, disproportionately impacts minority communities. According to the Sepsis Alliance, Black and Hispanic patients have a higher incidence of severe sepsis compared to white patients; Black children are 30% more likely than white children to develop sepsis after surgery; and Black women have more than twice the risk of severe maternal sepsis compared to white women.
For health systems, creating tools that actively work to combat these disparities in care isn’t a nice to have, but a mission critical must have. Health systems have a responsibility to provide equitable, safe care, and AI/ML technologies have the promise to help them do so.
What can be done to combat bias and promote equity in AI/ML technology?
Health organizations can implement these three strategies when launching AI/ML technologies to drive better, more equitable care outcomes.
Create a checklist that evaluates potential sources of bias and what groups may be at risk for inequity. Prior to validating or deploying a predictive model, it is worthwhile to clearly describe the clinical/business driver(s) for the intended predictive model and how the model will be used. Given the intended use, is there a risk that the model might perform unequally across subgroups and/or result in an unequal allocation of resources or outcomes for specific subgroups? If the prediction target is only a proxy for the outcome of interest, could that lead to unintended disparities between subgroups?
Once the objectives are clearly determined, it is possible to identify potential sources of bias in a given model. Some example questions to address include:
- Are there inputs that might be predictive of the outcome for some subgroups (e.g., socioeconomic status) that are not included in the model?
- Is the prediction target measured in the same way for all subgroups?
- Are input variables more likely to be missing in one subgroup than another?
- Could end users use the model outputs differently for specific subgroups?
Proactively evaluate models for bias and robustness. Identifying subgroups at risk of bias or inequity enables explicit testing for differences in model performance between subgroups. Understanding differences in performance is necessary to avoid and mitigate bias, but it is not sufficient, because the validation data may still differ in important ways from the environment in which the model is ultimately deployed. Fortunately, new machine learning techniques can evaluate whether models are robust to differences in data and also identify the conditions under which a model will no longer perform well and may become unsafe.
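As an illustration of the first part of that step, here is a minimal sketch of a per-subgroup performance check. The column names (“subgroup”, “label”, “score”, “inputs_missing”), the 0.5 threshold, and the choice of metrics are hypothetical; a real evaluation would use the deployment threshold, clinically meaningful subgroups, and metrics matched to the model’s intended use.

```python
# A minimal sketch of per-subgroup performance checks. Column names and the
# 0.5 threshold are hypothetical placeholders, not a prescribed standard.
import pandas as pd
from sklearn.metrics import recall_score, roc_auc_score

def subgroup_report(df: pd.DataFrame, threshold: float = 0.5) -> pd.DataFrame:
    rows = []
    for group, g in df.groupby("subgroup"):
        preds = (g["score"] >= threshold).astype(int)
        rows.append({
            "subgroup": group,
            "n": len(g),
            "auc": roc_auc_score(g["label"], g["score"]),
            "sensitivity": recall_score(g["label"], preds),                  # true positive rate
            "specificity": recall_score(g["label"], preds, pos_label=0),     # true negative rate
            "missing_inputs_pct": g["inputs_missing"].mean() * 100,          # input completeness check
        })
    return pd.DataFrame(rows)

# Usage (hypothetical): subgroup_report(validation_df), where validation_df has
# columns subgroup, label (0/1), score (model output), inputs_missing (0/1).
```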
Continuously monitor results and outputs over time. Done incorrectly, deployment risks harming patients, making care less safe, and potentially exacerbating bias. Even if models are free from bias when initially validated and deployed, it is essential to continue monitoring model performance to ensure it does not degrade over time. Models are particularly susceptible to failure after unanticipated changes in technology (e.g., new devices, new code sets), population (e.g., demographic shifts, new diseases), or behavior (e.g., practice patterns, reimbursement incentives). These changes are collectively referred to as dataset shift, because the data seen in clinical practice differ from the data used to train the predictive model. Although clinicians, administrators, or IT teams can mitigate changes in performance by explicitly identifying scenarios in which dataset shift is likely, it is equally important that solution vendors monitor model performance on an ongoing basis and update the models when needed.
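One simple way to operationalize this kind of monitoring is to compare the distribution of each model input in recent deployment data against the training data. The sketch below uses the population stability index for that comparison; the feature, the simulated numbers, and the 0.2 alert threshold are illustrative assumptions rather than a prescribed standard.

```python
# A minimal sketch of ongoing monitoring for dataset shift. The drift metric
# (population stability index) and the alert threshold are illustrative choices.
import numpy as np

def population_stability_index(expected, observed, bins=10):
    """Compare the deployment distribution of a feature to its training distribution."""
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    e = np.histogram(np.clip(expected, cuts[0], cuts[-1]), cuts)[0] / len(expected)
    o = np.histogram(np.clip(observed, cuts[0], cuts[-1]), cuts)[0] / len(observed)
    e, o = np.clip(e, 1e-6, None), np.clip(o, 1e-6, None)  # avoid log/divide-by-zero
    return float(np.sum((o - e) * np.log(o / e)))

# Example with simulated data: compare recent lactate values against training data.
rng = np.random.default_rng(0)
train_lactate = rng.normal(1.5, 0.6, 5000)
recent_lactate = rng.normal(2.1, 0.8, 400)       # simulated shift in the deployed population
psi = population_stability_index(train_lactate, recent_lactate)
if psi > 0.2:                                     # a commonly cited "major shift" rule of thumb
    print(f"PSI={psi:.2f}: input distribution has shifted; review model performance.")
```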
As more health systems and healthcare organizations implement AI/ML technology to enable patient-specific insights that drive improved care, they need to be actively working to reduce bias and provide better, more equitable care by implementing these three key strategies. Understanding the potential sources of bias, proactively evaluating models for bias, and monitoring results over time will help reduce differential treatment of patients by race, gender, weight, age, language, and income.
Clinical AI can reduce harm, improve patient outcomes, and deliver financial benefits by augmenting physician and nurse decision-making at the bedside, making care more proactive. But with so many potential areas to address (sepsis, stroke, patient deterioration, medication adherence), knowing where to begin applying AI can be tricky.
Reducing pressure injuries is one clinical area where many health systems are applying AI to improve in-the-moment decision-making, reduce the burden on nurses and reduce hospital acquired pressure injuries (HAPIs).
Why tackle pressure injuries with AI?
HAPIs hurt patients, prolong hospital stays, consume resources, reflect poorly on quality of care, and are costly to hospitals. To avoid these harms, hospitals need to catch pressure injuries early, and they also need efficient and effective ways of documenting pressure injuries present on admission. However, current approaches to pressure injury risk prediction are not based on robust evidence, creating a need (and an opportunity) for better data-driven decision-making. Integrating an AI platform that has a specific pressure injury module can provide health systems with the necessary efficiency to prevent pressure injuries and improve patient outcomes. Specifically, there are three benefits health systems are seeing by applying AI to pressure injuries:
1. Clinical AI can more accurately predict a patient’s risk level for pressure injuries
Current methods of pressure injury risk prediction use the standard Braden or Norton scales. These scales cover only a limited range of factors and may suffer from poor interobserver reliability. The Braden model alone catches 40 percent of patients with pressure injuries at 90 percent specificity. By incorporating AI with the Braden model, health systems can catch 60 percent of pressure injuries at the same level of specificity. This provides a significant opportunity for health systems to prevent pressure injuries before they develop. For example, Bayesian Health’s AI platform can accurately predict pressure injuries a median of 6.2 days prior to development, equipping nurses and physicians with the time they need to intervene, conduct screening, and take preventative action. Better predictions and focused interventions mean fewer HAPIs.
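For readers unfamiliar with the “sensitivity at 90 percent specificity” framing used above, the sketch below shows how that kind of number is computed from model scores. The labels, scores, and helper function are synthetic and hypothetical; the point is the calculation, not the specific values.

```python
# A small sketch of how "sensitivity at 90% specificity" is computed when
# comparing risk models. The data here are synthetic.
import numpy as np

def sensitivity_at_specificity(labels, scores, target_specificity=0.90):
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    negatives = scores[labels == 0]
    # Threshold chosen so ~90% of patients without injuries score below it.
    threshold = np.quantile(negatives, target_specificity)
    flagged = scores >= threshold
    return flagged[labels == 1].mean()

rng = np.random.default_rng(1)
labels = rng.binomial(1, 0.12, 2000)              # roughly 1 in 8 develop an injury
scores = rng.normal(0.3 + 0.4 * labels, 0.25)     # higher scores for true cases
print(f"Sensitivity at 90% specificity: {sensitivity_at_specificity(labels, scores):.0%}")
```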
2. Clinical AI can enable nurses to act faster, preventing hospital acquired pressure injuries
Using an AI tool targeted at catching pressure injuries early can also make triaging assessments faster and more efficient. On average, only one in eight patients is at high risk of developing a pressure injury, but nurses are required to complete lengthy assessments on all patients. AI can help nurses prioritize these high-risk patients from the minute they start their shift. With high-risk alerts, AI helps ensure that nurses care for their highest-risk patients first, leading to earlier interventions and improved patient outcomes.
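As a toy illustration of that shift-start prioritization, the snippet below sorts a hypothetical unit census by a model risk score and flags the patients to assess first. The field names and the 0.7 cutoff are invented for the example and are not part of any real product configuration.

```python
# A tiny, hypothetical worklist sketch: sort the unit census by model risk so
# the highest-risk patients are assessed first at shift start.
patients = [
    {"bed": "12A", "initials": "J.D.", "pi_risk": 0.82},
    {"bed": "07B", "initials": "M.K.", "pi_risk": 0.18},
    {"bed": "03C", "initials": "R.S.", "pi_risk": 0.74},
]
worklist = sorted(patients, key=lambda p: p["pi_risk"], reverse=True)
for p in worklist:
    flag = "ASSESS FIRST" if p["pi_risk"] >= 0.7 else ""   # illustrative cutoff
    print(f'{p["bed"]}  risk={p["pi_risk"]:.2f}  {flag}')
```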
3. Clinical AI can improve nursing documentation efficiency and compliance
Pressure injury documentation can be lengthy and repetitive, and it is often spread across a combination of electronic and paper charts. AI can provide a consolidated documentation system, charting a comprehensive assessment into the EMR that meets CMS and coding standards. The time saved increases efficiency in documentation and improves compliance from nurses, allowing for more lifesaving, face-to-face patient care. Similarly, it can be incredibly difficult for staff to consistently identify and document pressure injuries that are present on admission, leading to penalties. AI can help identify a pressure injury present on arrival and ensure appropriate documentation. For example, Bayesian Health’s pressure injury module identifies 95% of pressure injuries that are present on admission and facilitates the appropriate documentation from nurses, leading to improved clinical, quality, and financial outcomes.
A tangible clinical and financial impact
Clinical AI provides health systems with a huge opportunity to focus on preventing and reducing pressure injuries. With better risk calculations, faster identification, and improved documentation, health systems can improve patient outcomes and reduce costs by an estimated $400k a year. This impact is significant. To learn more about what makes a pressure injury AI tool safe, effective, and impactful, explore our resources, developed together with leading informaticists and clinicians.
Building and deploying AI predictive tools in healthcare isn’t easy. The data are messy and challenging from the start, and building models that can integrate, adapt, and analyze this type of data requires a deep understanding of the latest AI/ML strategies and an ability to employ these strategies effectively. Recent studies and reporting have shown how hard it is to get it right, and how important it is to be transparent with what’s “under the hood” and the effectiveness of any predictive tool.
What makes this even harder is that the industry is still learning how to evaluate these types of solutions. While there are many entities and groups (such as the FDA) working diligently on creating guidelines and regulations to evaluate AI and predictive tools in healthcare, at the moment, there’s no governing body explaining the right way to do predictive tool evaluations, which is leaving a gap in terms of understanding what a solution should look like and how it should be measured.
As a result, many are making mistakes when evaluating AI and predictive solutions. These mistakes can lead to health systems choosing predictive tools that aren’t effective or appropriate for their population. As a long-time researcher in the field, I have seen these common mistakes made, and I have guided health systems on how to overcome them and arrive at a safe, robust, and reliable tool.
Here are the top seven common mistakes typically made when evaluating an AI / predictive healthcare tool, and how to overcome these challenges to ensure an effective tool:
- Only the workflow is evaluated, not the models: The models are just as important as the workflow. Look for high-performing models, e.g., with both high sensitivity and high precision, before implementing them within a workflow. Not evaluating whether the models work before implementation, and assuming you can obtain efficacy through optimizing workflows alone, is like not knowing whether a drug will work and changing the label on it to try to increase effectiveness.
- The models are evaluated, but with the wrong metrics: The models should be evaluated, but the metrics should be determined based on the mechanism of action for each condition area. For example, in sepsis, lead time (the median time an alert precedes antibiotic administration) is critical. But you also don’t want to alert on too many people, because low-quality alerts that are not actionable will lead to provider burnout and over-treatment. The key criteria to look for in a sepsis tool are high sensitivity, significant lead time, and a low false alerting rate (a small calculation sketch follows this list).
- Adoption isn’t measured on a granular level: Typically, end user adoption isn’t measured. However, to obtain sustained outcome improvements, a framework for measuring adoption (at varying levels of granularity) and improving adoption is critical. Look to see if the tool also comes with an infrastructure that continuously monitors use, and provides strategies to improve and increase adoption.
- The impact on outcomes isn’t measured correctly: Many studies rely on coded data to identify cases and measure outcome impact. These are not reliable because coding is highly dependent on documentation practices and often a surveillance tool itself impacts documentation. In fact, a common flawed design is a pre/post study where the post period leverages a surveillance tool that dramatically increases the number of coded cases, in turn, leading to the perception that outcomes have improved because adverse rate (e.g., sepsis mortality rate on coded cases) has decreased. Look for rigorous studies of the tool that account for these types of issues.
- The ability to detect and tackle shifts isn’t identified: If a model doesn’t proactively tackle the issue of shifts and transportability, it is at risk of being “unsafe.” Strategies to reduce bias and adapt to dataset shift are critical because practice patterns are frequently changing (see what happened at one hospital during Covid-19, for example). Look for evidence of high performance across diverse populations to see if the solution is detecting and tuning appropriately for shifts (read more about best practices for combating dataset shift in this recent New England Journal of Medicine article).
- “Apples to oranges” outcome studies are compared: A common mistake is to overlook what the standard of care was in the environment where the outcome studies were done. For example, a 10% improvement in outcomes at a high reliability organization may be just as much or more impressive than similar improvement at a different organization with historically poor outcomes. Understanding the populations in which the studies were done and the standard of care in those environments will help you understand how and why the tool worked.
- Assuming a team of informaticists can tune any model to success: Keeping models tuned to be high-performing over time is a significant lift. Further, a common mistake is to assume any model can be made to work in your environment with enough rules and configurations added on top. The predictive AI tool should come with its own ability to tune, along with an understanding of when and how to tune. Starting with a rudimentary model is akin to being handed the names of molecules and asked to create the right drug by mixing the ingredients correctly.
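To make the sepsis metrics from the second item above concrete, here is a minimal sketch of how sensitivity, median lead time before antibiotics, and a false alert rate might be computed. The encounter records, field names, and numbers are hypothetical; a real evaluation would draw them from the EHR and adjudicated case labels.

```python
# A minimal sketch of sepsis-specific evaluation metrics: sensitivity, median
# lead time before antibiotic administration, and false alerts per 100 encounters.
# Each record is one encounter, with optional alert and antibiotic times in hours
# from admission; the data below are made up.
import statistics

encounters = [
    {"septic": True,  "alert_hr": 6.0,  "abx_hr": 11.5},
    {"septic": True,  "alert_hr": None, "abx_hr": 9.0},    # missed case
    {"septic": False, "alert_hr": 20.0, "abx_hr": None},   # false alert
    {"septic": False, "alert_hr": None, "abx_hr": None},
]

septic = [e for e in encounters if e["septic"]]
caught = [e for e in septic if e["alert_hr"] is not None]
sensitivity = len(caught) / len(septic)

lead_times = [e["abx_hr"] - e["alert_hr"] for e in caught]   # hours of warning before antibiotics
median_lead = statistics.median(lead_times) if lead_times else 0.0

false_alerts = sum(1 for e in encounters if not e["septic"] and e["alert_hr"] is not None)
false_alert_rate = 100 * false_alerts / len(encounters)

print(f"Sensitivity: {sensitivity:.0%}, median lead time: {median_lead:.1f} h, "
      f"false alerts per 100 encounters: {false_alert_rate:.1f}")
```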
When dealing with predictive AI tools in the healthcare space, the stakes could not be higher. As a result, predictive solutions need to be monitored and evaluated to ensure effectiveness; otherwise it’s likely the tools will have no impact, or worse, will negatively affect patients. Understanding the common mistakes made, as well as the best practices for evaluation, will help health systems identify solutions that are safe, robust, and reliable, and ultimately help physicians and care team members deliver safer, higher-quality care.
Learn more about Bayesian Health’s research-first mentality, recent evaluations and outcome studies here.
Healthcare providers around the country are struggling due to a severe shortage of workers. The massive healthcare demand caused by the pandemic has inflicted burnout and stress on healthcare workers in all sectors, driving many to their wits’ end. According to the Bureau of Labor Statistics, the healthcare sector has already lost at least 500,000 workers since February 2020. This spells disaster for healthcare providers today, especially since many more healthcare workers are considering leaving the workforce for other reasons, such as reduced benefits, cut salaries, and grueling working conditions.
A recent report projects that around 6.5 million employees in the healthcare sector will leave their jobs by 2026. To address the shortage of workers, healthcare providers should leverage various tools and technologies. One such technological innovation that can help prepare healthcare providers is artificial intelligence (AI). In this post, let’s explore how AI can assist in reducing the impact of the current staffing shortage in healthcare.
Assist staff in triaging patients
Triaging has always been an essential process in healthcare institutions, but its role has been further highlighted by COVID-19. Triage staff need to stay alert and successfully differentiate COVID-19 from similar respiratory illnesses such as the flu. However, healthcare workers who are tasked with triaging patients may find it difficult to effectively and efficiently do their jobs, especially if cases soar and more people visit the hospital.
Healthcare providers can lighten the load on triage staff by using AI. Patients can answer a series of questions that are then evaluated against an algorithm. An AI-powered program can then help a healthcare worker accurately and quickly respond to the needs of a patient, whether that means further testing or emergency health services. To use AI effectively for this purpose, healthcare providers need a solid IT infrastructure built on hardware, such as printed circuit boards, with reliable power integrity. This allows triage staff to use the AI feature continuously and accurately assess patients who need immediate healthcare attention.
Streamline healthcare documentation
Electronic health records (EHRs) are vital healthcare documentation that inform the doctor about the patient’s current medical status and contain notes on how to move forward with a patient’s treatment plan. EHRs also act as a report card for the government, as they include billing records, insurance records, and other crucial documentation that may be legally required. Overseeing EHRs and keeping the information in them up to date can be a cumbersome task for healthcare workers. In addition, some healthcare executives say that while EHRs are a necessary tool today, they have done nothing to improve patient encounters and have instead added more time to a clinician’s workday.
Through AI, EHR documentation for clinicians and other healthcare workers can be made more efficient and accurate. AI technology can listen to clinician-patient conversations, then interpret and transform them into salient content for orders, referrals, and notes. The AI can also enter data directly into the EHR, reducing the administrative burden on already limited healthcare staff.
Improve patient outcomes
The shortage of healthcare workers worsens patient outcomes at an already delicate time. As the global health crisis continues to send more people to the hospital, it is important that healthcare leaders employ tools that boost patient outcomes and reduce readmissions. AI can improve patient outcomes and reduce the healthcare burden by automating routine tasks, allowing healthcare workers to focus on patient care. In addition, AI-assisted solutions such as predictive analytics can help boost healthcare by determining which individuals are most at risk from a particular disease, identifying patients who are likely to skip scheduled appointments, and even identifying early warning signs of diseases before they become severe.
Expand healthcare access to underserved regions
Despite the advancement of healthcare technologies, there are still many medically underserved areas in the country, and the current shortage of healthcare workers will only exacerbate this problem. Thankfully, AI can reduce the impact of the current staffing shortage by fulfilling some diagnostic duties usually assigned to specialists and other healthcare workers. For example, healthcare providers in regions with a scarcity of ultrasound technicians and radiologists can use AI imaging tools to assess chest x-rays for signs of illnesses such as tuberculosis and pneumonia, with a level of accuracy comparable to human specialists. In this way, AI can enhance the availability and accessibility of crucial healthcare processes in areas that don’t have sufficient healthcare workers.
Indeed, AI can be effectively leveraged by healthcare providers to reduce the strain on the healthcare system caused by the current staffing shortage. In the long run, AI can decrease the rate of burnout and stress that healthcare workers experience, as well as drive more equitable and safer health systems, both of which can improve the quality of healthcare provided to today’s patients.
written for bayesianhealth.com
by Jamie Rose