Innovating Health Care | The Rise of Collaborative Intelligence: How Do We Instill Trust in a Nonhuman ‘Brain’?

The term artificial intelligence (AI) often refers to a machine mimicking human cognition as it attempts to "learn" and "problem solve." The term has since been extended to "interacting" with or reacting to a human, as in a chatbot. Regardless of the science being referenced, a consistent undercurrent of distrust accompanies the phrase AI (and, by extension, the technologies it encompasses), perhaps most notably in health care.

Does the Word "Artificial" Itself Lead to Mistrust in the System?

The Oxford English Dictionary (OED) defines artificial as "made or produced by human beings rather than occurring naturally, especially as a copy of something natural." When referring to a person or their behavior, the OED defines artificial as insincere or affected. When referring to a concept or situation, it is defined as "not existing naturally, contrived or false."

If we trace its etymology further, the Latin origin is "artificium," or handicraft (ars, art; facere, to make). The Old French to Middle English evolution of artifice into "clever or cunning devices or expedients, especially as used to deceive others" further reveals the origins of our discontent with AI in medicine. The barrier to making an innovation such as AI trustworthy may have its foundation in this definition.

Previously, building trust in AI was thought to require instilling it with "a sense of morality while operating at full transparency," as stated in an IBM brief. However, one might ask: wouldn't we trust a system more if it did not attempt to trick us by appearing to hold human moral values? AI began when computing was in its infancy; today, the sheer volume of signals and data barraging the human brain makes the case that computing power is better equipped to generate initial insights than any individual or collaboration of humans. That does not suggest the computer is human.

At its origin, AI attempted to replicate human tasks, creating algorithms based on human workflows, then imitating and improving on them by performing faster and at larger scale. However, in health care today, it is true that "AI will not replace the clinician." The best uses of modern machine learning (ML) are to answer the questions where clinical judgment is challenged, not those where clinical acumen is the gold standard.

Human Support vs. Human Replacement

How do we instill trust in a nonhuman brain and ensure it provides real insight rather than reflecting our inherent biases? We need to rethink the purpose AI serves: human support, not human replacement.

We need assistance in processing the vast amounts of information coming from patient devices, apps and our own clinical systems so we can review it with a critical, human eye to assess utility. This is not artificial work; it is collaboration. When discussing the use of analytics to care for people, words matter and provide a needed framework for valuation and incorporation into care delivery. Therefore, we term this approach to the use of analytics in medicine "collaborative intelligence."

Collaborative intelligence is defined as the use of advanced analytics and computing power with an understanding that we are responsible both for the data the system is offered and for the fair interpretation of its outputs, with the intention of together becoming more intelligent.

Migrating to collaborative intelligence promotes trust by: 1) stating the purpose of a technology and where it supports human acumen; 2) articulating specifically where this technology is helpful and importantly where it is not; and 3) providing a standard upon which to evaluate the analytic technology itself. Specifically, does a technology support the clinician or patient and do the insights meet their needs?

Collaborative Intelligence: What's the Recipe?

  1. Apply data standards to ensure the system is primed with fair and equitable data. For health systems to use the power of AI to scale care delivery without amplifying inequities, we must have a framework for active surveillance for bias. The development of AI software demands fairness, operational tolerances and surveillance for reinforcement of systemic bias (a minimal sketch of one such subgroup audit follows this list). When these mechanisms include an assessment of effects on vulnerable populations, we can limit bias upstream; everyone involved in the process has a responsibility to promote ethical models, including clinicians, health systems, technology vendors and regulatory authorities.

    Importantly, incorporating user (clinician or patient) feedback, and recognizing that the social determinants of health built into these frameworks are not static and may change over time, will lead to iterative assessment. The infrastructure can then be designed to be agile and contextually relevant.
  2. Create accessible analytics. Solutions need to be available for a wide array of circumstances. A well-thought-out user interface, tools and cloud infrastructure will allow smaller practices to engage more robustly in digital health care delivery and to optimize those services. Partnering with industries that can leverage their networks to power local analytics will further narrow the digital divide by encouraging fair interpretation of data in less well-resourced communities.
  3. Be specific in the application of analytics. Focusing on a particular part of the process or a narrowly framed outcome can engage the end user of an AI-based care delivery system with passion and purpose. The more specific the role of the AI, the easier it is for the end user to merge it into their understanding and, therefore, their workflow. To create specificity of purpose, which is essential for successful implementation, all stakeholders must be AI-enabled. AI can aid in quality and accountability, improvement, or reflection and discovery.
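
As an illustration of the active surveillance for bias described in the first item above, the following is a minimal sketch, not a production tool, of a subgroup audit that compares a model's discrimination across patient groups before deployment. The function name, data and groups are hypothetical.

```python
# Minimal sketch of a pre-deployment subgroup audit (hypothetical names and data):
# compare a model's discrimination (AUC) across patient groups; large gaps would
# prompt review of the training data and assumptions rather than deployment.
import numpy as np
from sklearn.metrics import roc_auc_score

def subgroup_auc_audit(y_true, y_score, subgroup_labels):
    """Return the AUC within each subgroup; None when a subgroup has only one class."""
    results = {}
    for group in np.unique(subgroup_labels):
        mask = subgroup_labels == group
        if len(np.unique(y_true[mask])) < 2:   # AUC is undefined without both outcomes
            results[group] = None
        else:
            results[group] = roc_auc_score(y_true[mask], y_score[mask])
    return results

# Synthetic example: outcomes, model risk scores and one demographic attribute.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=1000)
scores = np.clip(0.6 * y + rng.normal(0.2, 0.3, size=1000), 0, 1)
groups = rng.choice(["group_a", "group_b"], size=1000)
print(subgroup_auc_audit(y, scores, groups))
```

A meaningful difference between subgroup AUCs would trigger the iterative reassessment described above rather than automatic deployment.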

AI-enabled care does not require every clinician to review algorithms (although clinicians will be involved in algorithm development), but it does require them to know the assumptions from which the output is derived. They should trust that their clinical acumen may sometimes be challenged by the AI and at other times supersede it, and that both outcomes are acceptable. These observations, and the collaboration between analytics and clinical acumen, allow us to better train the analytics such as ML, unlearn parts of ML that do not fit a situation, identify systematic bias or incorrect assumptions, and learn about novel groupings that may lead to clinical insights. It is important for clinicians to be AI-enabled and understand how the products we use employ ML, rather than to aim to be AI experts ourselves.

A familiar area for the use of AI in health care is clinical decision support, which offers an excellent example of how collaborative intelligence can be implemented. A foundational goal of clinical decision support has been to harness the power of patient level data combined with guideline-based knowledge to optimize and inform care.

With the sheer volume of data (patient-reported outcomes, wearables, medical-grade remote monitoring and soon, perhaps, ambient continuous data), combined with the rapid pace of scientific advances, guidelines and trials, delivering the highest quality care that the medical field has to offer is nearly impossible for an individual clinician to achieve. Even attempting to continuously scan all of that data can lead to burnout. Applying computer models to gather, categorize, discover and present those insights to the clinician and the system allows us to apply clinical acumen, with autonomy, on a new, elevated foundation for care.

Raising the foundation upon which clinical acumen is applied may drive higher quality and more equitable care by increasing evidence-based care for all and decreasing unnecessary variability, while still allowing clinicians full autonomy over how they choose to treat a patient. The key is directing this information to clinicians in a seamless, natural experience that does not create administrative burden in the form of excess pushes, frequent pauses in workflow or overbearing decision support that makes an individual feel judged. We receive similar nudges daily in other areas of life and work and can apply those human-centered design processes to the clinician-patient interaction.
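
To make the idea of a nonintrusive nudge concrete, here is a minimal, hypothetical sketch of a guideline-style rule that surfaces a question for the clinician only when something is actionable. The patient fields, threshold and wording are illustrative assumptions, not an implementation of any specific guideline or decision support product.

```python
# Hypothetical sketch of a non-intrusive clinical decision support nudge: scan
# available data against a guideline-style rule and phrase the output as a
# question for the clinician rather than a hard stop.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Patient:
    name: str
    ldl_mg_dl: float
    has_ascvd: bool
    on_statin: bool

def statin_nudge(p: Patient) -> Optional[str]:
    """Return a gentle prompt if the patient may meet guideline criteria, else None."""
    if p.has_ascvd and not p.on_statin:
        return (f"Have you looked at {p.name}? ASCVD is documented and no statin is on "
                f"the medication list (LDL {p.ldl_mg_dl:.0f} mg/dL). What do you make of it?")
    return None  # stay silent: no pause in workflow when nothing is actionable

for patient in [Patient("Patient A", 145, True, False), Patient("Patient B", 90, False, False)]:
    message = statin_nudge(patient)
    if message:
        print(message)
```

The design choice is that silence is the default; the system interrupts only when a clinician might want to act, which is what keeps the nudge from becoming administrative burden.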

Risk Prediction

Collaborative intelligence is essential in risk prediction, pattern recognition and hypothesis generation for academic research. It can uncover what humans are unable to see clearly and offer it for assessment. Doing something complicated is often conflated with intelligence; the intelligence actually lies in assessing the complicated analysis and judging its applicability or worth. That is also where bias can arise; therefore, an open mind is essential when interpreting complex outputs, especially when they challenge our traditional assumptions.

Risk prediction using AI offers the ability to differentiate outcomes such as the level of care needed (virtual vs. local vs. tertiary, or admission vs. discharge home). During the COVID-19 pandemic, one of the largest, most diverse federated learning initiatives, a global consortium of 20 hospitals, achieved a model with an area under the curve of 0.94 (where 1.0 represents perfect discrimination), resulting in excellent prediction of the level of oxygen required by incoming patients.1 AI can also find the "next patient who needs us" in the inpatient setting using the myriad of continuously available data.
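
For readers unfamiliar with federated learning, the following is a minimal sketch of the federated averaging idea behind such consortia: each hospital trains on its own patients, and only model weights, never patient-level data, leave the institution. The model, data and function names are hypothetical assumptions and do not represent the consortium's actual implementation.

```python
# Toy federated averaging (FedAvg) sketch: local training at each "hospital,"
# then weight averaging at a central coordinator. All data are synthetic.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One hospital's local training: logistic regression fit by gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probability of the outcome
        grad = X.T @ (p - y) / len(y)      # gradient of the log-loss
        w -= lr * grad
    return w

def federated_round(global_weights, hospital_datasets):
    """Average locally trained weights, weighted by each hospital's sample size."""
    updates, sizes = [], []
    for X, y in hospital_datasets:
        updates.append(local_update(global_weights, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=sizes)

# Synthetic example: three "hospitals," each with its own cohort of 200 patients.
rng = np.random.default_rng(0)
hospitals = []
for _ in range(3):
    X = rng.normal(size=(200, 4))  # e.g., four vitals/labs per patient
    y = (X @ np.array([1.0, -0.5, 0.8, 0.0]) + rng.normal(size=200) > 0).astype(float)
    hospitals.append((X, y))

w = np.zeros(4)
for _ in range(10):                # several communication rounds
    w = federated_round(w, hospitals)
print("Global model weights:", w)
```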

Data-driven decision support tools improve the efficiency of clinical teams along with quality and safety measures, including response times to changing patterns.2 As with any risk prediction algorithm, we do not have to be 100% correct, just better than a random sampling of clinicians. We need to set the goal posts appropriately and recognize that improvement is essential and valuable, whereas the pursuit of perfection is not. Iteration is better than perfection, in that we can teach ourselves and our systems to be continuously better, accepting that no single algorithm is consistently perfect in the practice of medicine.

Pattern Recognition

Pattern recognition is its own form of insight, and this is a place where AI excels and is a natural support to humans. AI in health care does not have a "get it right the first time" culture. Therefore, many initial insights need to be paired with a human: the analytics support clinical decision-making or identify at-risk populations, while a human oversees the process and helps them iterate and become smarter. Data are also continuously changing, whether hemodynamics, scientific insights or social determinants of health. ML has the power and speed to learn and practice more efficiently than a human brain. Most importantly, the clinician still stands atop the pyramid with ultimate decision-making capability.

Effective methods for human-AI collaboration include creating shared public datasets and environments for AI training and testing, measuring and evaluating AI technologies through standards and benchmarks, and expanding public-private partnerships to accelerate advances in AI.3

Learning is now a continuous process which occurs while we develop analytic systems. It allows us to discover new relationships to evaluate while also applying the frameworks and iterating on them as we guide clinician technology adoption.

As we become AI-enabled and understand our role in creating and implementing analytics in the practice of cardiology, we will gain new insights about the technology as well. For example, a convolutional neural network may demonstrate better risk prediction for atrial fibrillation when known clinical risk factors are incorporated or at-risk subgroups are addressed. This illustrates that collaborative intelligence is essential when planning for implementation at scale.4
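
As a purely illustrative sketch of that idea (not the method of reference 4), the toy example below compares a hypothetical CNN-derived ECG risk score on its own against the same score combined with clinical risk factors in a simple downstream model. All variable names and data are synthetic assumptions.

```python
# Toy comparison: an "ECG-AI" score alone vs. the score plus clinical risk factors.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
ecg_ai_score = rng.uniform(0, 1, size=n)              # hypothetical CNN output per ECG
age = rng.normal(65, 10, size=n)                      # example clinical risk factors
hypertension = rng.integers(0, 2, size=n)
risk = 3 * ecg_ai_score + 0.05 * (age - 65) + 0.8 * hypertension
y = (risk + rng.normal(size=n) > 2.0).astype(int)     # synthetic atrial fibrillation outcome

X_ecg_only = ecg_ai_score.reshape(-1, 1)
X_combined = np.column_stack([ecg_ai_score, age, hypertension])

for name, X in [("ECG-AI alone", X_ecg_only), ("ECG-AI + clinical factors", X_combined)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.2f}")
```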

Hypothesis Generation

As we embark on hypothesis generation, causal inference becomes an important concept. Causal inference is the process of determining the independent, actual effect of a particular phenomenon that is a component of a larger system.

Many state that the first step in hypothesis generation is to understand the causal hierarchy in order to choose analytic tools. However, upstream of the analytic process is the need to understand the milieu: to begin to understand which confounders are medically unavoidable and what physiology drives some of these associations.

Data alone are never enough, especially in clinical medicine, to understand whether all the necessary confounders have been accounted for and what kind of domain adaptation is occurring. When we consider the data outputs with an expert clinical eye, we reduce the gap between theoretical analytics and actionable hypotheses to test in the clinical realm.
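
A minimal synthetic sketch of why that clinical eye matters: when a clinically identified confounder (here, disease severity) drives both treatment and outcome, the unadjusted association can point in the wrong direction, and only domain knowledge tells us which variables to adjust for. Everything below is simulated and hypothetical.

```python
# Confounding illustration with simulated data: sicker patients are treated more
# often and fare worse, so the unadjusted estimate makes a beneficial treatment
# look harmful; adjusting for severity recovers the true effect.
import numpy as np

rng = np.random.default_rng(7)
n = 5000
severity = rng.normal(size=n)                                   # confounder: disease severity
treated = (severity + rng.normal(size=n) > 0).astype(float)     # sicker patients treated more often
outcome = 1.5 * severity - 0.5 * treated + rng.normal(size=n)   # higher outcome = worse; true benefit -0.5

# Ordinary least squares with and without the confounder.
X_unadj = np.column_stack([np.ones(n), treated])
X_adj = np.column_stack([np.ones(n), treated, severity])
beta_unadj, *_ = np.linalg.lstsq(X_unadj, outcome, rcond=None)
beta_adj, *_ = np.linalg.lstsq(X_adj, outcome, rcond=None)

print(f"Unadjusted treatment effect: {beta_unadj[1]:+.2f}  (treatment appears harmful)")
print(f"Adjusted treatment effect:   {beta_adj[1]:+.2f}  (close to the true -0.5)")
```

In real EHR data the confounders are rarely this obvious, which is exactly where clinical expertise must pair with the analytics.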

Lastly, electronic health records (EHR) can also offer real-world treatment data and create a powerful platform for clinical outcome studies.5 Collaborative intelligence can utilize nuanced data in the EHR in addition to well-established structured registries in clinical care delivery.

Can we guarantee 100% accuracy and precision? Not with either an artificial or human brain. Can we program systems with unbiased, fair data to generate insights and then use individual judgement to decide what comes next? Absolutely. We must acknowledge that we are using a computer, recognize it will never have a moral compass, and take responsibility for the data we offer it while interpreting the outputs fairly. This supports the intention of together becoming more intelligent. Collaborative intelligence is a concept that, if adopted, can help us find creative solutions to improve care and protect the people delivering it. The future of collaborative intelligence is as a supportive tool: analytics that ask a clinician, "Have you looked at this? I am noticing a pattern, what do you make of it?" in consultation, like a true collaborator.

Ami Bhatt, MD, FACC
Jennifer Bae

This article was authored by ACC Chief Innovation Officer Ami Bhatt, MD, FACC and Jennifer Bae, Team Leader, Innovation Strategy & Partnerships for the ACC.

References

  1. Dayan I, Roth HR, Zhong A, et al. Federated learning for predicting clinical outcomes in patients with COVID-19. Nat Med 2021;27:1735-43.
  2. Hanson CW 3rd, Marshall BE. Artificial intelligence applications in the intensive care unit. Crit Care Med 2001;29:427-35.
  3. Select Committee on Artificial Intelligence of the National Science and Technology Council. The National Artificial Intelligence Research and Development Strategic Plan: 2019 Update. Available here.
  4. Khurshid S, Friedman S, Reeder C, et al. ECG-based deep learning and clinical risk factors to predict atrial fibrillation. Circulation 2022;145:122-33.
  5. Rogers JR, Lee J, Zhou Z, et al. Contemporary use of real-world data for clinical trial conduct in the United States: a scoping review. J Am Med Inform Assoc 2021;28:144-54.


