AI as a Clinical Assistant: Enhancing MSK Ultrasound Interpretation and Reporting

If you haven’t yet tried using an AI assistant in your clinical practice, now is the time to start.

We are standing at the threshold of a shift in how we work. The rise of large language models (LLMs)—text-based AI systems like ChatGPT that can interpret, generate, and summarize content—offers clinicians a remarkable opportunity: to work faster, think broader, and document smarter. I want to be clear that these tools are still evolving, but their usefulness in the day-to-day reality of musculoskeletal ultrasound is already tangible, and it is already driving substantial changes in how we practice.

An AI-generated image of Dr Wilcox scanning a patient with an AI avatar in the background

In my own sports medicine practice, AI has become a quiet but powerful assistant. It’s not replacing clinical expertise; it’s extending it. Over time, I’ve found a sweet spot—not in having it make decisions for me, but in having it help me think more clearly. One of the most practical ways I use LLMs is for differential generation. I paste in my ultrasound findings and impression and ask for a possible differential diagnosis list. The results are consistently thought-provoking. Typically, the model reflects five or six diagnoses I already had in mind, throws in a couple I disagree with outright, and adds two or three that surprise me and deserve a closer look. Especially in complex or uncertain cases, that prompt to pause and consider something new can be invaluable.
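
In practice, this is just structured prompt construction. Here is a minimal sketch in Python; the prompt wording, function name, and example findings are my own illustration, not a specific product's API:

```python
def build_differential_prompt(findings: str, impression: str, n: int = 10) -> str:
    """Assemble de-identified ultrasound findings into a differential-diagnosis prompt."""
    return (
        "You are assisting a sports medicine physician.\n"
        f"Ultrasound findings:\n{findings}\n\n"
        f"Impression:\n{impression}\n\n"
        f"List up to {n} possible differential diagnoses, "
        "ordered from most to least likely, with a one-line rationale each."
    )

# Hypothetical, de-identified example input
prompt = build_differential_prompt(
    findings="Hypoechoic thickening of the common extensor tendon origin; no cortical irregularity.",
    impression="Findings suggest lateral epicondylopathy.",
)
print(prompt)
```

The key habit is the same regardless of platform: keep the input de-identified, state the clinical context, and ask for ranked output you can then interrogate.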

Some mainstream AI platforms even promise image interpretation. My experience? These are not yet ready for prime time. Results can be inconsistent, and accuracy is still highly variable. But for text-based assistance—where language, not pixels, is the primary input—LLMs can make a real difference.

One area where AI shines is in reducing the friction of tedious or repetitive tasks. Prior authorizations, for example, used to eat up valuable time and mental bandwidth. Now, I can copy a de-identified clinical summary and the insurance denial into an LLM and request a short appeal letter. It generates a polished draft that often needs only light editing. Occasionally, I’ll even ask the AI why it thinks the request was denied—it often gives helpful insight I can use in peer-to-peer calls.

The same applies to documentation templates. I’ve built standard templates for common joints, but what about when a patient presents with something less routine, such as a region I haven’t scanned often enough to have a template, like the sternoclavicular joint? I give the model an existing template and ask it to adapt it to the new joint. The results? Fast, accurate, and easy to refine. Here’s a quick look at how I use AI in daily practice:

  • Differential support: Expands my diagnostic horizons, especially in unusual or complex cases.
  • Template generation: Converts existing structures into less common regions or patient types with minimal effort.
  • Prior auths & letters: Speeds up appeal writing; reduces emotional exhaustion from repetitive documentation.
  • Note polishing: Transforms shorthand findings into clean, communicative notes for specialists or patients.
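
The template-generation workflow above can be sketched as a plain string template that the LLM is then asked to refine for region-specific anatomy. This is a hypothetical illustration; the template text and placeholder names are invented, not drawn from any EHR:

```python
# A minimal sketch of re-targeting an existing report template to a new joint.
# The section headings and placeholders are illustrative only.
MSK_TEMPLATE = """MSK ULTRASOUND - {joint}
Indication: {indication}
Findings:
  Tendons: ___
  Joint recess: ___
  Bursa: ___
Impression: ___"""

def adapt_template(template: str, joint: str, indication: str = "Pain") -> str:
    """Re-target a generic MSK template to a new region; an LLM can then
    refine region-specific structures (e.g., the SC joint disc and capsule)."""
    return template.format(joint=joint.upper(), indication=indication)

print(adapt_template(MSK_TEMPLATE, "sternoclavicular joint"))
```

The mechanical substitution is trivial; the value the LLM adds is the anatomy-aware refinement step that follows.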

But let’s be clear: none of this replaces the responsibility we carry as clinicians. AI is a powerful tool, but it must be used wisely. A recent study from MIT (Your Brain on ChatGPT) found that users writing essays with AI support showed lower brainwave activity, suggesting a reduction in active cognitive processing. The lesson here is sharp: when we outsource too much thinking, our ability to reason, synthesize, and create diminishes.

We cannot allow that to happen in medicine. What we document, what we diagnose—these remain our responsibility. AI can offer suggestions, but only we can make decisions. Every recommendation must be filtered through our personal, sound clinical judgment.

So yes—use AI to sharpen your workflow, expand your thinking, and save time. But use it with intention. Let it challenge your thinking, not do your thinking. Let it shape your creativity, not replace it. When used well, AI doesn’t flatten our clinical voice; it amplifies it. It helps us become more precise, more efficient, and, most importantly, more present with the people we serve.

Reference: Kosmyna N, Hauptmann E, Yuan YT, et al. Your brain on ChatGPT: accumulation of cognitive debt when using an AI assistant for essay writing task. Preprint. Submitted June 10, 2025. Accessed July 8, 2025. Available from: https://arxiv.org/abs/2506.08872

James Wilcox, MD, RMSK, is a family medicine and sports medicine physician in the United Arab Emirates, where he is the Director of the ProMotion Sports Medicine Clinic at Specialized Rehabilitation Hospital in Abu Dhabi, and Assistant Professor of Family Medicine at UAE University.

This posting has been edited for length and clarity. The opinions expressed in this posting are the author’s own and do not necessarily reflect the view of their employer or the American Institute of Ultrasound in Medicine.

6 Ultrasound Trends to Watch in 2025

The field of ultrasound technology is rapidly evolving, with advances that promise to reshape diagnostic imaging and patient care. As we begin 2025, several exciting trends are emerging, driven by breakthroughs in artificial intelligence, portability, and precision imaging. Here, we explore six ultrasound trends that are set to make waves in the medical field in 2025.

1. AI-Powered Ultrasound Diagnostics

Artificial Intelligence (AI) is transforming ultrasound imaging by automating complex tasks and enhancing diagnostic accuracy. In 2025, we expect AI to play a central role in streamlining workflows.

AI algorithms are increasingly capable of analyzing ultrasound images to detect and measure abnormalities, such as tumors, cysts, or cardiovascular issues, with speed and precision. These systems can assist practitioners in diagnosing conditions at an earlier stage, reducing the risk of misdiagnosis. Moreover, real-time AI guidance is being integrated into portable devices, making it easier for clinicians to perform and interpret scans in remote or underserved areas.
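
As a toy illustration of what automated measurement involves under the hood, the sketch below thresholds a grayscale frame and sizes the largest connected hypoechoic (dark) region with a flood fill. Real systems use trained neural networks on calibrated images; the threshold and the tiny frame here are invented purely for demonstration:

```python
# Toy sketch of automated measurement: estimate the pixel area of a
# hypoechoic (dark) region in a grayscale frame via threshold + flood fill.
from collections import deque

def largest_dark_region(frame, threshold=50):
    """Return the pixel count of the largest connected region below `threshold`."""
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]
    best = 0
    for r in range(rows):
        for c in range(cols):
            if seen[r][c] or frame[r][c] >= threshold:
                continue
            # Breadth-first flood fill over 4-connected dark neighbors
            size, queue = 0, deque([(r, c)])
            seen[r][c] = True
            while queue:
                y, x = queue.popleft()
                size += 1
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and not seen[ny][nx] and frame[ny][nx] < threshold):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            best = max(best, size)
    return best

frame = [
    [200, 200, 200, 200],
    [200,  10,  12, 200],
    [200,  11, 200, 200],
    [200, 200, 200, 200],
]
print(largest_dark_region(frame))
```

A pixel count becomes a physical area once multiplied by the probe's pixel spacing, which is the kind of calibration a clinical system handles automatically.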

For example, machine learning models are being trained to help ultrasound practitioners evaluate fetal development, monitor chronic diseases, and even predict patient outcomes. As these tools become more accessible, AI-driven ultrasound diagnostics will help address global disparities in healthcare delivery.

2. Therapeutic Ultrasound

Beyond diagnostics, ultrasound is increasingly being used for therapeutic purposes. Therapeutic ultrasound employs high-intensity sound waves to treat a variety of medical conditions by delivering targeted energy to tissues.

Applications of therapeutic ultrasound include treating kidney stones, fibroids, and prostate disease, as well as enhancing drug delivery and alleviating chronic pain. Focused ultrasound therapy is also making significant strides in oncology: it is used to ablate tumors noninvasively via either thermal or mechanical effects, and the latter has also been found to promote abscopal immune responses. Additionally, this technology is showing promise in neurology, with research exploring its potential to treat conditions like Parkinson’s disease, addiction, and depression by stimulating specific areas of the brain.

As the technology continues to advance, therapeutic ultrasound offers a noninvasive alternative to traditional surgical procedures, reducing recovery times and minimizing risks. In 2025, look out for this application as it gains more widespread adoption in both clinical and research settings.

3. Miniaturization and Portability

Portability is becoming a common feature of next-generation ultrasound devices. Compact and lightweight handheld units are set to become even more powerful in 2025, enabling point-of-care imaging in ways that were unimaginable just a decade ago.

These miniaturized devices are equipped with wireless capabilities, allowing clinicians to transmit data seamlessly to cloud-based platforms or electronic health records (EHRs). In emergency situations, paramedics and first responders can use portable ultrasound to assess internal injuries on-site, significantly improving patient outcomes.

Additionally, this trend aligns with the growing focus on telemedicine. Patients in remote or rural areas can now benefit from real-time imaging performed by trained technologists and reviewed by specialists miles away.

4. High-Resolution 3D and 4D Imaging

The demand for high-resolution imaging is pushing the boundaries of 3D and 4D ultrasound technology. By 2025, these systems will deliver clearer, more detailed images, providing clinicians with enhanced diagnostic capabilities.

4D ultrasound, which adds the dimension of time to 3D imaging, is especially beneficial in fields like obstetrics, where it offers real-time visualization of fetal movements. Beyond obstetrics, high-resolution imaging is proving invaluable in cardiology and oncology, enabling practitioners to visualize complex structures such as heart valves or tumor margins with greater clarity. This technology also enables more reliable co-registration of ultrasound with MRI, CT, and PET.

Image resolution improvements are accompanied by generally more affordable ultrasound technology overall, making sonography a first-line radiologic assessment tool accessible to smaller clinics and facilities worldwide.

5. Integration With Wearable Technologies

Wearable devices are stepping into the ultrasound space, promising to revolutionize how and where imaging is conducted. These devices, which can be worn as patches or integrated into clothing, are designed to provide continuous monitoring of specific conditions.

In 2025, you may see wearable ultrasound being used for applications like tracking cardiovascular health or monitoring chronic conditions such as kidney disease. For instance, a wearable device could continuously measure blood flow or detect abnormalities in real time, alerting healthcare providers to intervene in a timely manner.

This trend aligns with the broader movement towards personalized medicine, where patients take a proactive role in their healthcare with the help of smart technologies.

6. Expanded Use of Contrast-Enhanced Ultrasound (CEUS)

Contrast-enhanced ultrasound (CEUS) is gaining traction for its ability to improve visualization of blood flow and tissue vascularity. Unlike traditional ultrasound, CEUS uses microbubble contrast agents that provide detailed imaging without exposing patients to ionizing radiation or iodinated contrast material.

In 2025, CEUS is expected to find broader applications, particularly in oncology and cardiology. It is being used to assess heart function more accurately, differentiate between benign and malignant lesions, and monitor the efficacy of cancer treatments, and it also has therapeutic applications: a unique demonstration of ultrasound carrying both diagnostic and therapeutic indications.

The noninvasive nature of CEUS, combined with its diagnostic precision, is making it a preferred option for patients and providers alike. As regulatory approvals expand and more clinicians are trained to use this technology, CEUS will likely become a standard in advanced diagnostic imaging.

Conclusion

Ultrasound technology is undergoing a renaissance, driven by advances in electronics, miniaturization, portability, and imaging algorithms, including AI. As we move into 2025, these trends are set to enhance diagnostic capabilities, improve patient outcomes, and make imaging more accessible than ever before.

For healthcare providers and institutions, staying ahead of these trends will be critical in delivering cutting-edge care. Whether through adopting AI-powered solutions or CEUS, integrating wearable devices, or exploring new techniques like therapeutic ultrasound, the future of ultrasound is brighter—and more innovative—than ever.

Therese Cooper, BS, RDMS, is a sonographer and the Chief Learning Officer at the American Institute of Ultrasound in Medicine.

The Dawn of Large Language Models (LLMs) in Ultrasound

With the advent of large language models (LLMs), such as the well-known ChatGPT, there has been a surge of interest in how to leverage these technologies in healthcare. These queries are far from baseless, as LLMs have already demonstrated significant value in various non-clinical fields. It is entirely reasonable to explore their potential in medical imaging. The biomedical industry has begun to innovate and propose solutions based on the perceived needs of physicians and the medical imaging workforce, often from an engineering standpoint. However, LLMs offer a unique opportunity to develop solutions through a collaborative approach that includes both physicians and industry professionals.

In other words, only by integrating insights from clinicians can we ensure that the benefits of LLMs are realized in ways that genuinely enhance clinical practice. This collaborative approach is particularly relevant in the field of ultrasound imaging, where the unique real-time nature of the modality, combined with operator-dependent variability, presents both opportunities and challenges. This blog post explores the exciting possibilities of LLMs in ultrasound imaging through two specific approaches: scan-time AI assistance and review-time AI assistance.

The Dream: Scan-Time AI by Real-Time Integration of LLMs

Imagine having a smart assistant right by your side during an ultrasound exam, processing data in real time and offering insights instantaneously. This “scan-time AI” is not a distant dream but an emerging reality. By integrating LLMs into ultrasound machines, clinicians can receive immediate feedback on the screen. This AI-powered assistance can highlight areas of interest, suggest potential diagnoses, and recommend additional views or techniques to optimize image quality, making the diagnostic process more accurate and efficient.

However, the journey to seamless real-time AI integration comes with its own set of challenges. The primary hurdle is ensuring that the AI operates with split-second precision, as any lag could disrupt the examination flow. Additionally, the integration must be intuitive, ensuring that AI suggestions complement the clinician’s expertise without causing distraction. The ultimate goal is to create a harmonious partnership where AI augments the clinician’s skills and enhances patient care.

As their name implies, LLMs are designed to communicate with language at the center. Early examples include chat-like communication with the user, which, at first glance, may not seem viable for medical imaging workflows. However, LLM literature is advancing very rapidly, and with the invention of multi-modal LLMs, communication with ultrasound systems will no longer be limited to text but also extend to other modalities such as voice and images. Voice commands can streamline the process, allowing clinicians to focus on the patient and the probe without needing to manipulate controls manually. For instance, a clinician could say, “Compare the thickness of the renal cortex with the medulla” and the ultrasound machine would reason through the command, detect the said anatomical structures, perform the measurement, and display the results, thus improving efficiency and ergonomics. However, voice interaction in a clinical environment brings its own set of complexities. The bustling background noise, the need for precise and unambiguous commands, and the potential for AI misinterpretation are significant factors to consider. Furthermore, voice interaction must be evaluated for its impact on privacy within the clinical setting. When these issues with voice communication in clinical settings are addressed, using LLMs through voice commands for ultrasound examinations will become much smoother and more efficient.
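
The renal-cortex example hints at the pipeline such a system would need: speech-to-text first, then mapping the transcribed utterance to a structured action the machine can execute. A minimal, hypothetical sketch of that second step (the command grammar and field names are invented for illustration):

```python
import re

# Hypothetical grammar for one command family:
# "Compare the <metric> of the <structure A> with the <structure B>"
COMPARE = re.compile(
    r"compare the (?P<metric>\w+) of the (?P<a>[\w ]+?) with the (?P<b>[\w ]+)",
    re.IGNORECASE,
)

def parse_command(utterance: str):
    """Map a transcribed voice command to a structured action, or None if ambiguous."""
    m = COMPARE.search(utterance)
    if not m:
        return None  # an unrecognized command should prompt for clarification
    return {
        "action": "compare",
        "metric": m.group("metric"),
        "targets": [m.group("a").strip(), m.group("b").strip()],
    }

cmd = parse_command("Compare the thickness of the renal cortex with the medulla")
print(cmd)
```

Returning `None` for anything ambiguous, rather than guessing, is one simple safeguard against the misinterpretation risk described above.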

We’re There: Review-Time AI for Post-Examination Analysis

While real-time AI offers immediate benefits during the scan, “review-time AI” focuses on the critical post-scan phase. LLMs can meticulously review ultrasound images and generate detailed reports, highlighting key findings and suggesting differential diagnoses. This application can significantly alleviate the documentation burden on clinicians, allowing them to dedicate more time to patient care.

The necessity for LLMs in review-time AI stems from the sheer volume and complexity of data clinicians must analyze. By automating the initial review and providing structured reports, LLMs enhance the consistency and quality of ultrasound interpretations. This approach also facilitates collaborative care, as AI-generated reports can be easily shared and reviewed by other specialists, ensuring a comprehensive evaluation of the patient’s condition.

A Call to Action for Physicians

Physicians play a pivotal role in shaping the future of AI technologies. While engineers and data scientists provide the technical backbone, clinicians’ insights and feedback are crucial in developing AI systems that truly address healthcare needs. Physicians are encouraged to experiment with these new-age AI tools in their daily routines, providing critical feedback that will steer the evolution of AI in a direction that genuinely enhances clinical practice.

Integrating LLMs into ultrasound imaging is not merely a technological advancement but a paradigm shift that requires active collaboration between clinicians and technologists. By exploring the exciting possibilities of scan-time and review-time AI and addressing the challenges of voice interaction, we can pave the way for a more efficient and accurate diagnostic process. Physicians, your involvement and insights are crucial. Together, we can shape a future where AI not only complements but also elevates the art and science of ultrasound imaging. Let’s embrace this transformative journey and lead the way to a new era of medical innovation.

Utku Kaya is a Co-founder and Chief Executive Officer of SmartAlpha.

Exploring the Future of Ultrasound: 5 Trends to Watch

Ultrasound technology has come a long way since its inception and continues to evolve at a rapid pace. As we look ahead to the near future, it’s clear that ultrasound will play an even more vital role in healthcare. In this blog post, we’ll explore 5 trends (in no particular order) that are set to shape the field of ultrasound in the coming years.

1. Portable and Handheld Ultrasound Devices

The trend of portable and handheld ultrasound devices is on the rise. In the past, ultrasound machines were hundreds of pounds, carted around on wheels, and costly to manufacture. These new, compact, and lightweight devices offer healthcare professionals the convenience of conducting ultrasound examinations at the patient’s bedside, in remote areas, or during emergency situations, and wearable devices will become part of the ultrasound tool kit. Their affordability and ease of use make them accessible to a broader range of healthcare providers, expanding the potential applications of ultrasound. I predict that, under a doctor’s care and orders, the ways in which ultrasound is used will expand!

2. Artificial Intelligence (AI) Integration

AI is revolutionizing the field of medical imaging, and ultrasound is no exception; however, sonographers and doctors are not going anywhere. AI algorithms can assist in image analysis, automate measurements, enhance quantitative imaging, and aid in the detection of abnormalities. In the near future, we can anticipate more sophisticated AI integration into ultrasound systems, which will not only enhance diagnostic accuracy but also improve workflow efficiency. AI will play a significant role in making ultrasound more accessible and reliable in terms of scanning, reading images, and delivering accurate results.

3. 3D and 4D Imaging

Three-dimensional (3D) and real-time 3D (4D) ultrasound imaging will continue to advance, providing clinicians with more detailed and interactive views of anatomical structures. This trend will be particularly valuable in obstetrics for capturing fetal development and in various other medical specialties where enhanced visualization and quantification are crucial. Expect to see more applications for complex anatomical assessments and dynamic studies.

4. Point-of-Care Ultrasound (POCUS)

Point-of-care ultrasound, or POCUS, is transforming the way medical professionals diagnose and manage patients. POCUS is expected to see increased adoption in various clinical settings, including emergency medicine, anesthesiology, primary care, and critical care. As training programs expand, more healthcare providers will be equipped to use POCUS for rapid and accurate assessments, which can lead to improved patient care and outcomes on the spot. With increased adoption, interest in ultrasound practice accreditation in this area is rising.

5. Therapeutic Ultrasound Applications

Beyond its diagnostic role, ultrasound is making great advances in therapeutic applications. Techniques like High-Intensity Focused Ultrasound (HIFU) are being employed for noninvasive surgeries, cancer treatments, and targeted drug delivery. In the coming years, we can expect to see further developments in therapeutic ultrasound, offering less invasive treatment options for a wide range of medical conditions and increasing the potential for ultrasound theranostics.

The future of ultrasound is incredibly promising with these 5 trends at the forefront of its evolution. From portable devices and AI integration to advanced imaging techniques and expanding applications in point-of-care and therapeutics, ultrasound is set to become even more integral to modern healthcare. Stay tuned as these trends continue to shape the landscape of medical imaging and patient care. We’re excited to witness the many possibilities that lie ahead for this versatile technology.

Therese Cooper, BS, RDMS, is a sonographer and the Director of Accreditation at the American Institute of Ultrasound in Medicine.

Using AI and Ultrasound to Diagnose COVID-19 Faster

Coronavirus disease 2019 (COVID-19), caused by a newly identified coronavirus, has driven a recent outbreak of respiratory illness from an isolated event to a global pandemic. As of July 2020, there are over 2.8 million confirmed COVID-19 cases in the U.S. and over 11.4 million worldwide. In the United States alone, over 130,000 Americans have died from COVID-19, with no end in sight. A major cause of this rapid and seemingly endless expansion can be traced back to the inefficiency and shortage of testing kits that offer accurate results in a timely manner. The lack of tools optimized for rapid mass testing produces a ripple effect that touches the health of your loved ones, jobs, education, and, at the national level, a country’s Gross Domestic Product (GDP), but artificial intelligence and ultrasound may help.

STATE OF ART IN DIAGNOSIS

Currently, there are two types of tests conducted by healthcare professionals: diagnostic tests and antibody tests. The diagnostic test, as the name implies, helps diagnose an active coronavirus infection in a patient. The ideal diagnostic test, and the “gold standard” according to the United States Centers for Disease Control and Prevention (CDC), is the Reverse Transcription Polymerase Chain Reaction, or simply, RT-PCR. RT-PCR is a molecular test capable of diagnosing an active coronavirus infection. However, the time required to conduct the test limits its effectiveness when mass deployed.

A much faster but less reliable diagnostic alternative to RT-PCR is the antigen test. Much like the gold standard, the antigen test is capable of detecting an active coronavirus infection, but in a much shorter timeframe. Although antigen tests produce rapid results, usually in about an hour, negative results are considered unreliable and may require confirmation, according to the US FDA.

In contrast, the antibody test is designed to search for antibodies produced by a patient’s immune system in response to the virus; it can only detect past infections, which is less than ideal for containing an ongoing pandemic.

THE PROBLEM 

To combat the rapid expansion of an airborne virus such as COVID-19, or future variations of a similar virus, rapid and reliable solutions must be developed that aim at improving the limitations of current methods. Although highly accurate, methods such as RT-PCR do not meet the speed requirements needed for testing on a large scale. Depending on the location, diagnosis of an active coronavirus infection with RT-PCR may take anywhere from several hours to a week. When the number of daily human-to-human interactions is considered, the lack of speed in diagnosing an active coronavirus patient could be the difference between a pandemic and an isolated local event.

As an alternative to molecular tests, Computed Tomography (CT) scans of a patient’s chest have shown promising results in detecting an infection. However, in addition to not being recommended by the CDC for diagnosing COVID-19, CT scans carry many unwanted consequences. Because CT scanners are used to diagnose multiple illnesses, some related to serious emergencies such as brain hemorrhage, they cannot serve as the primary tool for diagnosing COVID-19. This is especially true in rural areas where the healthcare infrastructure is underfunded. Mainly because of the required deep cleaning of the machine and room after each patient, which usually takes 60 to 120 minutes, many institutions are unable to offer CT as a viable primary diagnostic tool. Ultimately, given the need for CT scanners for several other health complications, combined with limited patient capacity at each hospital, alternative methods must be developed to diagnose an active coronavirus patient.

THE SOLUTION 

Recently, Point-of-Care (POC) devices have been adopted by many healthcare professionals due to their reliability and portability. An emerging technique, which builds on improvements in mobile ultrasound technology, allows healthcare professionals to conduct rapid screenings on a large scale.

Working since mid-March, when early cases of physicians adopting mobile ultrasound technology emerged, a research team at The Ohio State University, Dr. Alper Yilmaz and PhD student Shehan Perera, began developing a solution to automate an already well-established process. Dr. Yilmaz is the director of the Photogrammetric Computer Vision Lab at Ohio State. His expertise in machine learning, artificial intelligence, and computer vision, combined with Shehan Perera’s research experience, laid a strong foundation for tackling the problem at hand. As it stands, screening a new patient with a mobile ultrasound device takes about 13 minutes, with the caveat that a highly trained professional must interpret the results the device generates. Combining deep learning and computer vision, the team used data generated from the ultrasound device to accurately identify COVID-19 cases. The current network architecture, the product of many iterations, can detect the presence of the virus in a patient with a high level of accuracy.
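
The team's architecture details aren't described here, but the general pattern in ultrasound screening, a per-frame classifier whose outputs are aggregated into one patient-level call, can be sketched as follows. The scores and thresholds are illustrative and are not the OSU model's:

```python
# Sketch of aggregating per-frame classifier scores into a patient-level
# screening result. A real system would obtain frame_scores from a trained
# neural network; the values and thresholds here are invented.
def patient_level_call(frame_scores, threshold=0.5, min_positive_fraction=0.3):
    """Flag a scan as suspicious if enough frames score above the threshold."""
    positives = sum(score >= threshold for score in frame_scores)
    return positives / len(frame_scores) >= min_positive_fraction

scores = [0.12, 0.81, 0.77, 0.30, 0.64]  # hypothetical per-frame outputs
print(patient_level_call(scores))
```

Aggregating across frames makes the result robust to a few noisy predictions, which matters when the operator is not a trained sonographer.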

Many fields have been revolutionized by modern deep learning and computer vision technologies. With the methods developed by the research team, this technology can allow even an untrained worker to use a handheld ultrasound device and still provide a service that rivals that of a highly trained doctor. In addition to being extremely accurate, the automated detection and diagnosis process takes less than 10 minutes, including scanning time, and sanitation is as simple as removing a plastic seal that covers the device. This technology is useful not only for countries such as the United States, with a well-established healthcare system, but, more importantly, can significantly help countries and areas where medical expertise is scarce.

CONCLUSION 

The United States healthcare system is among the best in the world, yet we are failing to provide the treatment patients clearly need. The developments made in artificial intelligence, deep learning, and computer vision offer proven benefits, which can not only be leveraged to improve the current state of the global pandemic but can also lay the foundation to prevent the next. Alternative testing methods, such as mobile ultrasound devices combined with novel artificial intelligence algorithms that allow for mass production, distribution, and testing, could be the innovation that helps decelerate the spread of the virus, reducing the strain on the global healthcare infrastructure.

Feel Free to Reach the Authors at: 

Photogrammetric Computer Vision Lab – https://pcvlab.engineering.osu.edu/
Dr. Alper Yilmaz, PhD
Email: Yilmaz.15@osu.edu
LinkedIn: https://www.linkedin.com/in/alper-yilmaz

Shehan Perera
Email: Perera.27@osu.edu
LinkedIn: https://www.linkedin.com/in/shehanp/

References 

https://www.fda.gov/consumers/consumer-updates/coronavirus-testing-basics

https://www.whitehouse.gov/articles/depth-look-COVID-19s-early-effects-consumer-spending-gdp/#:~:text=BEA%20estimates%20that%20real%20GDP,first%20decline%20in%20six%20years.&text=This%20drop%20in%20GDP%20serves,in%20response%20to%20COVID%2D19.


The Excitement of New Ultrasound Technologies and Their Effects on Imaging-Guided Interventions

Recent advancements in ultrasound technologies have generated excitement in the field of ultrasound-guided intervention. For me, an interventional radiologist, these developments create new potential to perform needed procedures and offer a complementary approach to addressing our patients’ complex medical conditions. Further, these technologies enable us to achieve better patient outcomes, improve patient satisfaction, gain operational efficiencies, and improve stakeholder satisfaction.

The new technologies to which I’m referring are ultrasound contrast and ultrasound fusion. Ultrasound fusion, often supported by artificial intelligence, combines the anatomic detail of cross-sectional imaging, such as CT, PET, and MRI, with the power of real-time ultrasound, and it is gaining acceptance and popularity in medicine. Like a car’s GPS, ultrasound fusion helps the user find something: this powerful tool enables the operator to locate lesions that are normally difficult or even impossible to find on standard ultrasound. Needle navigation in the form of virtual tracking is a bonus that identifies needle location even when it is obscured by air or bone. It’s also a great teaching tool for inexperienced physicians who are interested in interventional radiology.
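
Under the hood, the GPS analogy corresponds to a spatial registration: a transform that maps ultrasound-space coordinates into the CT or MRI frame so the two images move together. Here is a toy 2D sketch; real systems estimate a 3D transform from landmarks or electromagnetic tracking, and the numbers below are purely illustrative:

```python
import math

# Toy sketch of the core of image fusion: applying a rigid registration
# (rotation + translation) that maps an ultrasound-space point into CT space.
def to_ct_space(point, theta_deg, tx, ty):
    """Map a 2D ultrasound-space point into CT space via a rigid transform."""
    t = math.radians(theta_deg)
    x, y = point
    return (x * math.cos(t) - y * math.sin(t) + tx,
            x * math.sin(t) + y * math.cos(t) + ty)

# Illustrative: a point 10 units along the probe axis, with the probe
# rotated 90 degrees and offset 5 units relative to the CT volume.
print(to_ct_space((10.0, 0.0), theta_deg=90, tx=5.0, ty=0.0))
```

Once this mapping is known, the system can overlay the live ultrasound plane on the corresponding CT or MRI slice in real time.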

Ultrasound contrast is also emerging as a powerful tool in the field of interventional radiology. It enables the operator to better visualize a lesion and characterize the lesion and surrounding tissue. Now, we also can perform an ultrasound contrast sinogram to assess any cavity or catheter location, which opens new horizons in the field of ultrasound intervention, mainly in pediatric intervention.

An additional benefit of ultrasound contrast is that it can be given without concern for renal injury. This is very valuable for avoiding the toxic effects of iodinated contrast, especially in renal transplant intervention. It is also more sensitive for assessing bleeding than Doppler ultrasound. This technology allows us to discharge patients home earlier after procedures when the contrast study is negative.

This is a very exciting time in the field of interventional radiology (IR). So many procedures that we could not perform with real-time ultrasound in the past can now be done safely with ultrasound alone. Our patients appreciate how convenient it is. The procedures are done quickly, without the need to move the patient from their bed onto a stiff CT scan table. The lack of ionizing radiation is also an attractive concept to the patient (particularly pediatric and pregnant patients), the clinician, and our IR staff.

Our institution is very supportive of utilizing advanced ultrasound technologies, as ultrasound allows us to gain operational efficiencies and is a more cost-effective alternative to CT-guided procedures. Operational efficiencies are gained by performing interventional cases portably with ultrasound, which frees the interventional CT suite for diagnostic exams that bring additional revenue to the institution. The ordering clinicians are also cognizant of radiation dose reduction, so providing an alternative to CT-guided procedures appeals to them.

Even though the implementation of contrast-enhanced ultrasound and fusion has been slower in the United States than among our colleagues abroad, it has brought a lot of excitement to my colleagues and me in interventional radiology. Like any new technology, the more we use it, the more we appreciate its value. I predict these tools will become the new norm in daily practice and will continue to evolve as an essential part of medicine.


Nami Azar, MD, MBA, is an Associate Professor of Radiology in the Department of Radiology at University Hospitals of Cleveland Medical Center in Ohio.

Artificial Intelligence and Point-of-Care Ultrasound

One of the greatest ongoing challenges of POCUS (point-of-care ultrasound) is educating existing physicians, residents, students, and others. There are simply not enough teachers for everyone who wants to learn. Clinicians would like to get the results from POCUS performed on their patients but have difficulty investing the effort required to learn, practice, and then become credentialed. Further complicating things for some is the dreaded self-doubting period, which can last months or years, during which providers worry they may make a mistake and be ridiculed for it, or worse.

One potential answer is thought to be artificial intelligence (AI); kind of like it seems to be for everything in medicine today. What good is AI in POCUS anyway? What if the only education required were to find the correct spot on the body to apply the probe? The algorithm would do the rest, and it would be more accurate than the best POCUS masters. Not only would training be truly minimized, maybe to minutes, but the examination would be shortened as well. A few sweeps through the organs, whether the liver and gallbladder or the heart, may be enough for the AI algorithm to do its thing. This would mean all those busy clinicians really would get a great return on their time investment. If the algorithm is that accurate and expert, providers will not be questioned easily when they document an AI ultrasound finding.

AI is an inescapable topic of sensational news stories and movies alike. AI is simply a machine approximation of human-like intelligence in task performance. The type most associated with image interpretation is deep learning. How does it work? Programmers develop software architectures roughly resembling the layers of neurons in the cerebral cortex, with multiple connections. The layers of neurons have specific functions and transmit messages to neurons in the next layer via mathematical functions; they are also capable of sending messages in reverse as feedback. Such a deep network is often termed a convolutional neural network (CNN, or some variant on the name). It can learn to interpret images, whether CXR, head CT, or ultrasound, by scanning each image one tiny part at a time, then pooling the neuron-like responses to those tiny parts and coming up with an answer. Given enough training data, such a CNN can become very accurate.
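For readers curious what "scanning one tiny part at a time, then pooling" looks like in practice, here is a minimal sketch of the two core CNN operations, written in plain Python. This is purely illustrative: the tiny image, the hand-picked edge filter, and the function names are all invented for demonstration, and a real CNN stacks many such layers with learned filters.

```python
def convolve2d(image, kernel):
    """Slide a small kernel over the image; each output value is one
    image patch weighted by the kernel (no padding, stride 1)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            s = 0.0
            for di in range(kh):
                for dj in range(kw):
                    s += image[i + di][j + dj] * kernel[di][dj]
            row.append(s)
        out.append(row)
    return out

def max_pool2d(feature_map, size=2):
    """Pool each non-overlapping size-x-size block to its maximum,
    keeping the strongest local response and shrinking the map."""
    out = []
    for i in range(0, len(feature_map) - size + 1, size):
        row = []
        for j in range(0, len(feature_map[0]) - size + 1, size):
            block = [feature_map[i + di][j + dj]
                     for di in range(size) for dj in range(size)]
            row.append(max(block))
        out.append(row)
    return out

# A 5x5 "image" with a bright vertical stripe, and a filter that
# responds where brightness rises from left to right (a vertical edge).
image = [[0, 0, 1, 0, 0]] * 5
edge_kernel = [[-1, 1], [-1, 1]]

features = convolve2d(image, edge_kernel)  # 4x4 map of filter responses
pooled = max_pool2d(features)              # 2x2 summary of strongest responses
```

In a trained network, many filters like `edge_kernel` are learned from example images rather than hand-written, and the pooled responses feed forward through further layers until a final layer produces the "answer" (for example, a probability of pathology).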

Well, imagine a CNN algorithm plugged into your favorite POCUS machine. The CNN is trained on the liver and gallbladder; it has seen millions of example images, both normal and abnormal. It can recognize liver anatomy and point it out for you, and the same for every detail around the gallbladder and biliary tree. It’s great at identifying pathology and can make measurements in the correct spots for the wall, common bile duct (CBD), and more. Once again, who really cares? I spent 2 decades scanning the gallbladder, performing research studies, and publishing on it. Well, while it may not have been an issue for me, not everyone invests their free time like that. Yet, many would like to be able to put a probe on the abdomen, have the ultrasound machine tell them where to move it, point out pathology, and come up with a likely diagnosis. Did I mention it could happen in real time, at the patient’s bedside, while you are casually speaking to them? How useful would this be? It could substitute for years of training, maybe even more than 2 decades’ worth. There are other subtle benefits too. Although some studies show that CNN algorithms for CT can catch pathology radiologists miss, an individual CNN may not be as good at finding something a rare expert might pick up, at least for now. But the CNN never gets tired. It never gets hit with a massive wave of scans to read late at night or overwhelmed with clinicians calling to discuss imaging studies. Thus, even experts can benefit from such algorithms as an aid.

Not happy with the image quality due to patient body habitus or another factor? It turns out another class of algorithm can actually improve image clarity and quality, and do so accurately without introducing false data. This has not yet been introduced into clinical use of POCUS but is likely just around the corner. The key is to make sure the algorithm invents nothing that is not actually there.

Imagine incredible ultrasound expertise from a short exam that required minimal training to perform. This scenario will come, but not this year or the next. As some speakers and authors have noted, AI coupled with POCUS is a big step toward the fabled and elusive “tricorder” first depicted in the 1960s Star Trek television series: an incredible hand-held device (that does not even require body contact) that diagnoses maladies in a few short sweeps over the patient. The eventual outcome of approaching such a device is greatly increased speed, efficiency, confidence, and accuracy of patient assessment and diagnosis. The significantly decreased skill and training requirements will also pose some challenges for the workforce, but these are likely to appear gradually and may be hardly noticed.

What about combining other data feeds along with the ultrasound images? AI algorithms are great at interpreting EKG tracings and even cardiac and lung auscultation. Studies analyzing digital auscultation signals with deep learning systems suggest they can diagnose many more abnormalities than humans can. The result could be synergistic, adding redundancy in diagnosis, such as for abnormal lung or heart sounds during an ultrasound evaluation. Maybe other signals could be incorporated as well.
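One simple way to picture that synergy is "late fusion": each algorithm independently scores the same abnormality, and the scores are then combined. The sketch below is hypothetical; the function name and the probabilities are invented for illustration, and the noisy-OR rule assumes the two detectors err independently, which real signals may not.

```python
def fuse_probabilities(p_ultrasound, p_auscultation):
    """Noisy-OR combination: the probability that at least one detector
    is correctly flagging the abnormality, assuming independent errors."""
    return 1.0 - (1.0 - p_ultrasound) * (1.0 - p_auscultation)

# Each detector alone is uncertain...
p_us = 0.60     # hypothetical ultrasound CNN: findings suspicious, not definitive
p_steth = 0.50  # hypothetical auscultation model: crackles, moderate confidence

# ...but together they reinforce each other.
p_fused = fuse_probabilities(p_us, p_steth)  # 0.80
```

Real multi-signal systems use more sophisticated fusion (often a learned model over all the feeds), but the underlying appeal is the same: two moderately confident, independent signals can yield one much more confident assessment.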

These algorithms just need data, lots of data, and that is the conundrum for people seeking to develop AI apps. What do you think about companies getting de-identified image data without provider and patient awareness? Do you think it would help you to have a smart machine that analyzed the images and made calculations within seconds? What about incorporating other diagnostic signals such as digital auscultation, EKG tracings, or maybe some other signal?


Share your thoughts on AI in ultrasound: comment below, or, AIUM members, continue the conversation on Connect, the AIUM’s online community.


Michael Blaivas, MD, MBA, FACEP, FAIUM, is an Affiliate Professor of Medicine in the Department of Medicine at the University of South Carolina, School of Medicine. He works in the Department of Emergency Medicine at St. Francis Hospital in Columbus, Georgia.