Can Artificial Intelligence in Ultrasounds and Scan Results Improve Diagnostic Testing?

 

By Gina Michiko Craig

With the advent of ChatGPT and other artificial intelligence (AI) tools, AI has become far more visible to the general public in the U.S. and around the world. Ultrasounds and scan results make up just a fraction of the diagnostic testing performed each year, but the prospective benefits of applying AI to them could be monumental. Healthcare experts, researchers, and patient advocates alike, however, must approach these potential benefits with their eyes wide open. Artificial intelligence can assist with the workload of processing and analyzing ultrasound and scan results in diagnostic testing. Yet healthcare leaders and advocates must take an informed and balanced approach to ensure that patients receive optimal benefits while long-standing healthcare disparities decrease.

Benefits of AI in Ultrasound and Scan Results

AI use in diagnostic testing has the potential to deliver a multitude of benefits for patient care. When people interpret test results, a certain percentage are misinterpreted simply due to human error or to circumstances like overtired lab technicians. Artificial intelligence is touted as a way to decrease these error rates. AI technology can also find issues that the human eye cannot see and discover patterns in results that can serve as deciding factors in test interpretation. A study that analyzed disparities in knee pain among underserved patients found that an algorithmic approach narrowed the disparity gap, showing that factors external to imaging scans, such as stress, could account for differences in reported pain levels.

Not all tasks for healthcare professionals are equally engaging either. Many in healthcare view AI as a way of reducing some of the more tedious test interpretation work. In addition, a common complaint among patients in the U.S. healthcare system is that they simply don’t get enough time with their doctors and other healthcare professionals. Artificial intelligence can lend assistance here as well, freeing up more time for care providers to talk with their patients face-to-face.

Risks of AI in Ultrasound and Scan Results

In evaluating the use of artificial intelligence in ultrasounds and scan results, evaluators must also weigh the potential risks to patients and to long-documented health disparities. A recent study examined large data competitions involving medical imaging algorithms, including some that evaluate CT scans to diagnose blood clots in the lungs or brain tumors. The study found that 61 percent of the competitions did not include demographic data. To work toward true diagnostic equity, patients of all ages, ethnicities, races, genders, and sexual orientations must be well-represented in the databases used to test AI technology for ultrasounds and scan results.

Another major concern among researchers centers on AI’s ability to home in on very minute details, finding a veritable needle in a haystack. That same capability, however, means AI models can also use these patterns to form biased decisions, and fighting bias is something many in the healthcare field have vowed to do. That irony must be guarded against for the sake of patient safety.

The Future of AI in Ultrasounds and Scan Results

Technology often moves at an incredibly rapid pace, and artificial intelligence is no exception. The U.S. is frequently at the forefront of technology breakthroughs, but negative consequences often aren’t discovered until years later, as happened with social media platforms. To build a strong future for AI in ultrasounds and scan results, healthcare experts, researchers, and patient advocates must help ensure diverse representation of all ages, ethnicities, races, genders, and sexual orientations in data and testing models, and healthcare leaders must help identify and address potential AI modeling issues. These efforts will help ensure that artificial intelligence models don’t cause unintended harm to patients. Everyone deserves access to equitable diagnostic testing and care regardless of race, gender, geographic location, socioeconomic status, or other factors. Now those who advocate for patients must do all they can to make this mantra ring true for artificial intelligence in ultrasounds and scan results.

What Can You Do Next?

  • Advocate for yourself and others. Be sure to share your concerns and opinions. Write to your congressperson to ask for federal funding to support diverse representation in testing of artificial intelligence in ultrasounds, scan results, and other medical technologies.

  • Build community and support toward equity in diagnostic testing. Look for online forums discussing equity in diagnostics and artificial intelligence. Ask your healthcare provider for other ways to find people working to create AI diagnostic tools with diverse patient groups.

  • Lead. If you are a healthcare provider, examine how you currently engage with your patients. Consider joining our Pro Hub to connect with other experts committed to removing barriers to health equity.


Call to Action

DHH is expanding and amplifying research on inequities in diagnostic testing. We believe this topic impacts EVERYONE, as tests are the baseline for care and treatment of any health issue or disease. Stay tuned to learn with us, as we continue to report on what you need to know to advocate for yourself and others. Subscribe for the latest.


Sources

Sean P. Garin, Vishwa S. Parekh, Jeremias Sulam, Paul H. Yi. Medical imaging data science competitions should report dataset demographics and evaluate for bias. Nature Medicine. Accessed December 18, 2024. https://www.nature.com/articles/s41591-023-02264-0

Emma Pierson, David M. Cutler, Jure Leskovec, Sendhil Mullainathan, Ziad Obermeyer. An algorithmic approach to reducing unexplained pain disparities in underserved populations. Nature Medicine. Accessed December 18, 2024. https://www.nature.com/articles/s41591-020-01192-7

Said A. Ibrahim, MD, MPH, MBA; Peter J. Pronovost, MD, PhD. Diagnostic Errors, Health Disparities, and Artificial Intelligence: A Combination for Health or Harm? JAMA Network website. Accessed December 18, 2024. https://jamanetwork.com/journals/jama-health-forum/fullarticle/2784385

Diverse Health Hub retains full editorial control; the information it produces does not necessarily reflect the views of our sponsors, contributors, or collaborators.

Importantly, this information is not a substitute for, nor does it replace, professional medical advice, diagnosis, or treatment. If you have any concerns or questions about your health, you should always consult with a healthcare professional. To learn more about privacy, read our Privacy Policy.

 