Tag Archive for: AI

In the News: Advancing innovations in pediatric gastroenterology and hepatology

“The future is in AI and machine learning and how it allows large data sets to be utilized to a level of understanding that we currently don’t have…We have very rare monogenic disorders where a single gene is the cause of certain inflammatory bowel diseases in young children and we’re starting to learn about what’s the right therapy for that gene and personalizing medicine… Not just precision medicine (which is better for a population) but really personalizing medicine.”

Learn more from Ian Leibowitz, M.D., division chief of Gastroenterology, Hepatology and Nutrition Services, as he discusses advances in clinical care algorithms that facilitate the timely diagnosis of critical conditions, efforts to increase access to medical and surgical treatment, and work to broaden awareness among primary care physicians so that care is available and provided as early as possible to all patients.

“Mask up!” Soon, AI may be prompting healthcare workers

Researchers at Children’s National Hospital are embarking on an effort to deploy computer vision and artificial intelligence (AI) to ensure medical professionals appropriately use personal protective equipment (PPE). This strikingly common problem touches almost every medical specialty and setting.

With nearly $2.2 million in grants from the National Institutes of Health, the team is combining its expertise with that of information scientists at Drexel University and engineers at Rutgers University to build a system that will alert doctors, nurses and other medical professionals to mistakes in how they are wearing their PPE. The goal is to better protect healthcare workers (HCWs) from dangerous viruses and bacteria that they may encounter, an issue laid bare by the COVID-19 pandemic and PPE shortages.

“If any kind of healthcare setting says they don’t have a problem with PPE non-adherence, it’s because they’re not monitoring it,” said Randall Burd, M.D., Ph.D., division chief of Trauma and Burn Surgery at Children’s National and the principal investigator on the project. “We need to solve this problem, so the medical community will be prepared for the next potential disaster that we might face.”

The big picture

The World Health Organization has estimated that between 80,000 and 180,000 HCWs died globally from COVID-19 between January 2020 and May 2021, an irreplaceable loss of life that created significant gaps in the pandemic response. Research has shown that HCWs had an 11-fold greater infection risk than workers in other professions, and those who were not wearing appropriate PPE had a one-third higher infection risk compared to peers who followed best practices.

Burd said the Centers for Disease Control and Prevention has recommended that hospitals task observers to stand in the corner with a clipboard to watch clinicians work and confirm that they are being mindful of their PPE. However, “that’s just not scalable,” he said. “You can’t always have someone watching, especially when you may have 50 people in and out of an operating room on a challenging case. On top of that, the observers are generally trained clinicians who could be filling other roles.”

What’s ahead

Bringing together the engineering talents at Drexel and Rutgers with the clinical and machine-learning expertise at Children’s National, the researchers plan to build a computer-vision system that will watch whether HCWs are properly wearing PPE such as gloves, masks, eyewear, gowns and shoe covers.
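
To make the idea concrete, below is a minimal sketch of how a single video frame might be checked for PPE compliance, assuming a torchvision detector fine-tuned on hypothetical PPE classes; the checkpoint path, class list and confidence threshold are illustrative placeholders, not the team’s actual system.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

PPE_CLASSES = ["background", "mask", "gloves", "eyewear", "gown", "shoe_cover"]
REQUIRED = {"mask", "gloves", "eyewear", "gown", "shoe_cover"}

# Detector fine-tuned on the (hypothetical) PPE classes above; the checkpoint
# path is a placeholder, not the study's actual model.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, weights_backbone=None, num_classes=len(PPE_CLASSES))
model.load_state_dict(torch.load("ppe_detector.pt"))
model.eval()

def missing_ppe(frame_path: str, score_threshold: float = 0.7) -> set:
    """Return the set of required PPE items not detected in one video frame."""
    frame = to_tensor(Image.open(frame_path).convert("RGB"))
    with torch.no_grad():
        detections = model([frame])[0]  # dict with boxes, labels, scores
    seen = {PPE_CLASSES[int(label)]
            for label, score in zip(detections["labels"], detections["scores"])
            if score >= score_threshold}
    return REQUIRED - seen

alerts = missing_ppe("frame_0001.jpg")  # placeholder frame from an OR camera
if alerts:
    print(f"PPE alert: check {sorted(alerts)}")
```

In a deployed system, a check like this would run continuously on the video stream, and a non-empty result would trigger the kind of immediate feedback the team is considering, such as a haptic watch alert.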

The team is contemplating how the system will alert HCWs to any errors and is considering haptic watch alerts and other types of immediate feedback. The emerging power of AI brings tremendous advantages over the current, human-driven systems, said Marius George Linguraru, D.Phil., M.A., M.Sc., the Connor Family Professor in Research and Innovation at Children’s National and principal investigator in the Sheikh Zayed Institute for Pediatric Surgical Innovation.

“Human observers only have one pair of eyes and may fatigue or get distracted,” Linguraru said. “Yet artificial intelligence, and computers in general, work without getting tired. We are excited to figure out how a computer can do this work – without ever blinking.”

Children’s National Hospital leads the way

Linguraru says that Children’s National and its partners make up the ideal team to tackle this vexing challenge because of their ability to assemble a multidisciplinary team of scientists and engineers who can work together with clinicians. “This is a dialogue,” he said. “A computer scientist, like myself, needs to understand the intricacies of complicated clinical realities, while a clinician needs to understand how AI can impact the practice of medicine. The team we are bringing together is intentional and poised to fix this problem.”

Children’s National joins team to use AI to expand health knowledge in Kenya

Children’s National Hospital is joining a team of global health researchers to use large language models (LLMs) like ChatGPT to help Kenyan youth learn about their health and adopt lifestyles that may prevent cancer, diabetes and other non-communicable diseases.

The work, which is one of nearly 50 Grand Challenges Catalyzing Equitable Artificial Intelligence (AI) Use grants announced by the Bill & Melinda Gates Foundation, will harness the emerging power of AI to empower young people with information that they can carry through adulthood to reduce rates of unhealthy behaviors including physical inactivity, unhealthy diet and use of tobacco and alcohol.

“We are thrilled to be part of this effort to bring our AI expertise closer to young patients who would benefit dramatically from technology and health information,” said Marius George Linguraru, D.Phil., M.A., M.Sc., a co-principal investigator for the project, the Connor Family Professor in Research and Innovation at Children’s National and principal investigator in the Sheikh Zayed Institute for Pediatric Surgical Innovation. “Using generative AI, we will build an application to enhance the knowledge, attitudes and healthy habits of Kenyan youth and use this as a foundation to improve health inequities around the globe.”
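
As a rough illustration of the kind of LLM-backed health chat the project describes, here is a minimal sketch using the OpenAI Python SDK as a stand-in; the model choice and prompt are assumptions for illustration, not the project’s actual platform.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a friendly health educator for Kenyan youth. Give accurate, "
    "age-appropriate guidance on physical activity, diet, tobacco and "
    "alcohol, and encourage seeing a clinician for anything serious."
)

def ask(question: str) -> str:
    # One chat turn: system prompt sets the health-education persona.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("How much exercise should I get each week?"))
```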

Why it matters

A lower-middle-income country on the east coast of sub-Saharan Africa, Kenya is home to 50 million people and one of the continent’s fastest-growing economies. English is one of Kenya’s official languages, and the country has been recognized as a technology leader in Africa, with 82% of Kenyans having phone connectivity. Taken together, these factors make the country an ideal location to deploy an LLM-based platform designed to improve health information and attitudes.

The Gates Foundation selected this project from more than 1,300 grant applications. The nearly 50 funded projects are aimed at supporting low- and middle-income countries to harness the power of AI for good and help countries participate in the AI development process. The project’s findings will contribute to building an evidence base for testing LLMs that can fill wide gaps in access and equitable use of these tools. Each of the grants provides an opportunity to mitigate challenges experienced by communities, researchers and governments.

What’s next

The project development will be led by the National Cancer Institute of Kenya, with Linguraru and other global experts from Kenyan institutions and Stanford University advising the effort. Researchers plan to enroll youth from universities, shopping malls, markets, sporting events and other high-traffic locations. The study will look at participants’ risk factors and how their attitudes toward healthier lifestyles change after engaging with the new LLM platform.

“The team is thrilled to be selected as one of the nearly 50 most promising AI proposals in the Gates Foundation Grand Challenge competition, and we look forward to seeing how our work can benefit the health of Kenyan youth,” said Dr. Martin Mwangi, principal investigator for the project and head of the Cancer Prevention and Control Directorate at the National Cancer Institute of Kenya. “If successful, we hope to share this model and the expertise we gain to expand health equity and knowledge to other regions.”

With COVID-19, artificial intelligence performs well to study diseased lungs

Artificial intelligence can be rapidly designed to study the lung images of COVID-19 patients, opening the door to the development of platforms that can provide more timely and patient-specific medical interventions during outbreaks, according to research published this month in Medical Image Analysis.

The findings come as part of a global test of AI’s power, called the COVID-19 Lung CT Lesion Segmentation Challenge 2020. More than 2,000 international teams came together to train machine learning and imaging tools on COVID-19, led by researchers at Children’s National Hospital, AI tech giant NVIDIA and the National Institutes of Health (NIH).

The bottom line

Many of the competing AI platforms were successfully trained to analyze lung lesions in COVID-19 patients and measure acute issues including lung thickening, effusions and other clinical findings. Ten leaders were named in the competition, which ran between November and December 2020. The datasets included patients with a range of ages and disease severity.

Yet work remains before AI can be implemented in a clinical setting. The AI models performed comparably to radiologists when analyzing data similar to what the algorithms had already encountered. However, the AI was less accurate when confronted with fresh data from other sources during the testing phase, indicating that systems may need to study larger and more diverse data sets to meet their full potential, a challenge with AI that others have noted as well.
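
For context, segmentation entries in challenges like this one are commonly scored with the Dice coefficient, which measures the overlap between a predicted lesion mask and the expert annotation. The snippet below is a generic illustration, not the challenge’s actual evaluation code.

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice overlap between two binary masks (1 = lesion voxel)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Identical masks score 1.0; disjoint masks score ~0.0.
mask = np.zeros((64, 64, 64), dtype=np.uint8)
mask[20:40, 20:40, 20:40] = 1
print(dice(mask, mask))      # 1.0
print(dice(mask, 1 - mask))  # ~0.0
```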

What they’re saying

“These are the first steps in learning how we can quickly and accurately train AI for clinical use,” said Marius George Linguraru, D.Phil., M.A., M.Sc., principal investigator at the Sheikh Zayed Institute for Pediatric Surgical Innovation at Children’s National, who led the Grand Challenge Initiative. “The global interest in COVID-19 gave us a groundbreaking opportunity to address a health crisis, and multidisciplinary teams can now focus that interest and energy on developing better tools and methods.”

Holger Roth, senior applied research scientist at NVIDIA, said the challenge gave researchers around the world a shared platform for developing and evaluating AI algorithms to quickly detect and quantify COVID lesions from lung CT images. “These models help researchers visualize and measure COVID-specific lesions of infected patients and can facilitate timelier and patient-specific medical interventions to better treat COVID,” he said.

Moving the field forward

The organizers see great potential for clinical use. In areas with limited resources, AI could help triage patients, guide the use of therapeutics or provide diagnoses when expensive testing is unavailable. AI-defined standardization in clinical trials could also uniformly measure the effects of the countermeasures used against the disease.

Linguraru and his colleagues recommend more challenges, like the lung segmentation challenge, to develop AI applications in biomedical spaces that can test the functionality of these platforms and harness their potential. Open-source AI algorithms and public curated data, such as those offered through the COVID-19 Lung CT Lesion Segmentation Challenge 2020, are valuable resources for the scientific and clinical communities to work together on advancing healthcare.

“The optimal treatment of COVID-19 and other diseases hinges on the ability of clinicians to understand disease throughout populations – in both adults and children,” Linguraru said. “We are making significant progress with AI, but we must walk before we can run.”

AI may revolutionize rheumatic heart disease early diagnosis

Researchers at Children’s National Hospital have created a new artificial intelligence (AI) algorithm that promises to be as successful at detecting early signs of rheumatic heart disease (RHD) in color Doppler echocardiography clips as expert clinicians. Even better, this novel model diagnoses this deadly heart condition from echocardiography images of varying quality — including from low-resource settings — a huge challenge that has delayed efforts to automate RHD diagnosis for children in these areas.

Why it matters

Current estimates are that 40.5 million people worldwide live with rheumatic heart disease, and that it kills 306,000 people every year. Most of those affected are children, adolescents and young adults under age 25.

Though largely eliminated in nations such as the United States, rheumatic fever remains prevalent in developing countries, including those in sub-Saharan Africa. Recent studies have shown that, if RHD is detected soon enough, a regular dose of penicillin may slow its development and the damage it causes. But it has to be detected.

The hold-up in the field

Diagnosing RHD requires an ultrasound image of the heart, known as an echocardiogram. However, ultrasound is a highly variable imaging modality, full of texture and noise, making it one of the most challenging to interpret visually. Specialists undergo significant training to read echocardiograms correctly, but in areas where RHD is rampant, people who can read these images reliably are few and far between. Making matters worse, the devices used in these low-resource settings vary in quality themselves, especially compared to what is available in a well-resourced hospital elsewhere.

The research team hypothesized that a novel, automated deep learning-based method might successfully diagnose RHD, which would allow for more diagnoses in areas where specialists are limited. To date, however, machine learning has struggled with noisy ultrasound images in the same way the human eye does.

Children’s National leads the way

Using approaches that led to successful objective digital biometric analysis software for non-invasive screening of genetic disease, researchers at the Sheikh Zayed Institute for Pediatric Surgical Innovation, including medical imaging scientist Pooneh Roshanitabrizi, Ph.D., and principal investigator Marius Linguraru, D.Phil., M.A., M.Sc., partnered with Children’s National Hospital clinicians who are heavily involved in efforts to research RHD, improve its treatment and ultimately eliminate its deadly impacts in children, including Craig Sable, M.D., associate chief of Cardiology and director of Echocardiography, and cardiology fellow Kelsey Brown, M.D. The collaborators also included cardiac surgeons from the Uganda Heart Institute and cardiologists from Cincinnati Children’s Hospital Medical Center.

Dr. Linguraru’s team of AI and imaging scientists spent hours working with cardiologists, including Dr. Sable, to truly understand how they approach and assess RHD from echocardiograms. Building the tool on that clinical knowledge is what sets it apart from other efforts to use machine learning for this purpose: orienting the approach to the clinical steps of diagnosis led to the very first deep learning algorithm that diagnoses mild RHD with success similar to the specialists themselves. Once the platform was built, it learned from 2,136 echocardiograms of 591 children treated at the Uganda Heart Institute.
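
While the published architecture is more involved, the following minimal sketch shows the general shape of the task: a video network mapping a color Doppler echo clip to a probability of RHD. The backbone, clip size and input resolution are illustrative assumptions, not the team’s published model.

```python
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

class EchoClipClassifier(nn.Module):
    """3D CNN that scores one color Doppler echo clip for RHD."""

    def __init__(self):
        super().__init__()
        self.backbone = r3d_18(weights=None)  # untrained stand-in backbone
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, 3, frames, height, width)
        return torch.sigmoid(self.backbone(clip))  # P(RHD)

model = EchoClipClassifier().eval()
clip = torch.rand(1, 3, 32, 112, 112)  # one synthetic 32-frame clip
with torch.no_grad():
    print(f"P(RHD) = {model(clip).item():.3f}")
```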

What’s next

The team will continue to collect data points based on clinical imaging data to refine and validate the tool. Ultimately, researchers will look for a way that the algorithm can work directly with ultrasound/echocardiogram machines. For example, the program might be run through an app that sits on top of an ultrasound device and works on the same platform to communicate directly with it, right in the clinic. By putting the two technologies together, care providers on the ground will be able to diagnose mild cases and prescribe prophylactic treatments like penicillin in one visit.

The first outcomes from the program were showcased in a presentation by Dr. Roshanitabrizi at one of the biggest and most prestigious medical imaging and AI computing meetings — the 25th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI).

A new imaging device with AI may reduce complications during thyroid surgery

Surgeons perform approximately 150,000 thyroidectomies in the United States. Post-surgical complications from this procedure frequently occur due to the misidentification or accidental removal of healthy parathyroid glands. On average, 27% of these patients suffer from transient or permanent hypocalcemia, a condition in which the blood has too little calcium, leading to lifelong complications and socioeconomic burden.

To improve parathyroid detection during surgery, Children’s National Hospital experts developed a prototype equipped with a dual-sensor imaging device and a deep learning algorithm that accurately detects parathyroids, according to a new study published in the Journal of Biophotonics.

“What excited us in this study was that even deep-seated tissues were able to be imaged without light loss, and high resolution imaging was possible due to the unique optical design,” said Richard Jaepyeong Cha, Ph.D., council member of the International Society of Innovative Technologies for Endocrine Surgery and principal investigator for the Sheikh Zayed Institute for Pediatric Surgical Innovation at Children’s National Hospital. “Moreover, in several cases, parathyroid autofluorescence was detected even before the surgeon dissected the parathyroid gland, and while it was covered by fat and/or fascia.”

What’s unique

This is the first study to detect parathyroid glands from paired color RGB/NIR imaging, incorporating multi-modal data (both RGB light and near-infrared autofluorescence, or NIRAF, ground-truth imaging) into parathyroid identification with a deep learning algorithm.
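
A minimal sketch of the early-fusion idea follows: the RGB frame and the pixel-aligned NIR autofluorescence frame are stacked into a four-channel input for a segmentation network. The backbone here is a generic stand-in, not the paper’s actual architecture.

```python
import torch
import torch.nn as nn
from torchvision.models.segmentation import fcn_resnet50

# Generic segmentation backbone; 2 classes: parathyroid vs. background.
model = fcn_resnet50(weights=None, weights_backbone=None, num_classes=2)
model.eval()

# Widen the first convolution from 3 (RGB) to 4 (RGB + NIR) input channels.
old = model.backbone.conv1
model.backbone.conv1 = nn.Conv2d(
    4, old.out_channels, kernel_size=old.kernel_size,
    stride=old.stride, padding=old.padding, bias=False)

rgb = torch.rand(1, 3, 256, 256)  # co-axial design keeps the frames pixel-aligned
nir = torch.rand(1, 1, 256, 256)
with torch.no_grad():
    logits = model(torch.cat([rgb, nir], dim=1))["out"]
print(logits.shape)  # torch.Size([1, 2, 256, 256]): per-pixel class scores
```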

The patient benefit

“We envision that our technology will open a new door for the digital imaging paradigm of dye-free, temporally unlimited, and precise parathyroid detection and preservation,” said Cha. “Successful translation of this technology will potentially reduce the risk of hypoparathyroidism after common thyroid surgery and improve the clinical outcomes.”

Despite the small sample size, the results support the effectiveness of this novel approach, which can potentially improve specificity in the identification of parathyroid glands during parathyroid and thyroid surgeries.

The hold-up in the field

It is often difficult for surgeons to identify parathyroid glands from thyroid tissue with the naked eye because of their small size, variable position and similar appearance to the surrounding tissues.

Since 2011, surgeons have benefited from using NIRAF, a non-invasive optical method for intraoperative real-time localization of parathyroids.

While the NIRAF technology has gained traction among the endocrine surgery community, false negatives can occur with current NIRAF-based devices in secondary hyperparathyroidism cases. According to Kim et al., the technology is still suboptimal, and a significant percentage of parathyroid glands are missed.

Children’s National Hospital leads the way

Engineers at Children’s National are leading this field through several innovations:

  • Non-dye-injected, label-free use in real time, in contrast to temporally limited ICG angiography. This technology was featured as the cover article in the journal Lasers in Surgery and Medicine 54(3), 2022.
  • Simultaneous perfusion assessment of all four glands at any time during the operation.
  • Arterial flow detection from pulsatile information in well-perfused parathyroid gland vasculature (see the sketch after this list).
  • Quantified parathyroid detection and classification with prediction values using deep learning techniques.
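
To illustrate the pulsatile-flow idea in the third bullet, the sketch below takes the mean NIRAF intensity of a gland region over time and looks for a cardiac-frequency peak in its spectrum. The signal here is synthetic; real input would come from the imaging device, and the frame rate and frequency band are assumptions.

```python
import numpy as np

fps = 30.0                     # assumed camera frame rate
t = np.arange(0, 10, 1 / fps)  # 10 seconds of frames
heart_rate_hz = 1.2            # ~72 bpm pulsation buried in noise
signal = (0.05 * np.sin(2 * np.pi * heart_rate_hz * t)
          + np.random.normal(0, 0.01, t.size))

# Spectrum of the de-meaned intensity trace; a strong peak in the cardiac
# band suggests arterial (pulsatile) perfusion of the gland.
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(t.size, d=1 / fps)
band = (freqs > 0.7) & (freqs < 3.0)  # plausible cardiac band: 42-180 bpm
peak = freqs[band][np.argmax(spectrum[band])]
print(f"Dominant pulsatile frequency: {peak:.2f} Hz (~{60 * peak:.0f} bpm)")
```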

You can read the full study “A co-axial excitation, dual-RGB/NIR paired imaging system toward computer-aided detection (CAD) of parathyroid glands in situ and ex vivo” in the Journal of Biophotonics.

New datasets predict surgeon performance during complications

Computer algorithms, such as machine learning and computer vision, are increasingly able to discover patterns in visual data, powering exciting new technologies worldwide. At Children’s National Hospital, physician-scientists develop and apply these advanced algorithms to make surgery safer by studying surgical video.

The big picture

In a new study published in JAMA Network Open, experts at Children’s National and allied institutions created and validated the first dataset to depict hemorrhage control for machine learning applications, the simulated outcomes following carotid artery laceration (SOCAL) video dataset.

The authors designed SOCAL to serve as a benchmark for data-science applications, including object detection, performance metric development and outcome prediction. Hemorrhage control is a high-stakes adverse event that poses unique challenges for video analysis, making it a valuable use case for algorithms to solve.

“As neurosurgeons, we are often called to perform high-risk and high-impact procedures. No one is more passionate about making surgery safer,” said Daniel Donoho, M.D., neurosurgeon at Children’s National Hospital and senior author of the study. “Our team at Children’s National and the Sheikh Zayed Institute is poised to lead this exciting new field of surgical data science.”

The hold-up in the field

These algorithms require raw data for their development, but the field lacks datasets that depict surgeons managing complications.

By creating automated insights from surgical video, these tools may one day improve patient care by detecting complications before patients are harmed and by facilitating surgeon development.

Why it matters

“Until very recently, surgeons have not known what may be possible with large quantities of surgical video captured each day in the operating room,” said Gabriel Zada, M.D., M.S., F.A.A.N.S., F.A.C.S., director of the Brain Tumor Center at the University of Southern California (USC) and co-author of the study. “Our team’s research led by Dr. Donoho shows the feasibility and the potential of computer vision analysis in surgical skill assessment, virtual coaching and simulation training of surgeons.”

The lack of videos of adverse events creates a dataset bias that hampers surgical data science; SOCAL was designed to fill this gap. After creating a cadaveric simulator of internal carotid artery injury and training hundreds of surgeons on the model at nationwide courses, the authors developed computational models to measure and improve performance.
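
As one example of the video-derived metrics a dataset like SOCAL enables, the sketch below computes time-to-first-suction from per-frame tool detections. The detections are synthetic stand-ins for an object detector’s output; the metric name and labels are illustrative, not the study’s published measures.

```python
from dataclasses import dataclass

@dataclass
class FrameDetections:
    frame: int
    tools: set  # instrument labels detected in this frame

FPS = 30
# Synthetic detector output: a suction instrument first appears at frame 45.
detections = [FrameDetections(f, set() if f < 45 else {"suction"})
              for f in range(300)]

bleed_onset_frame = 0  # annotation: hemorrhage begins at the first frame
first_suction = next(d.frame for d in detections if "suction" in d.tools)
print(f"Time to suction: {(first_suction - bleed_onset_frame) / FPS:.1f} s")
```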

“We are currently comparing our algorithms to experts, including those developed using the SOCAL dataset,” Dr. Donoho said. “Human versus machine, and our patients are ultimately the winners in the competition.”

What’s next

The authors are also building a nationwide collective of surgeons and data scientists to share data and improve algorithm performance through exciting partnerships with USC, California Institute of Technology and other institutions.

You can read the full study “Utility of the Simulated Outcomes Following Carotid Artery Laceration Video Data Set for Machine Learning Applications” in JAMA Network Open.

How radiologists and data scientists can collaborate to advance AI in clinical practice

In a special report published in Radiology: Artificial Intelligence, a Children’s National Hospital expert and other institutions discussed a shared multidisciplinary vision to develop radiologic and medical imaging techniques through advanced quantitative imaging biomarkers and artificial intelligence (AI).

“AI algorithms can construct, reconstruct and interpret radiologic images, but they also have the potential to guide the scanner and optimize its parameters,” said Marius George Linguraru, D.Phil., M.A., M.Sc., principal investigator in the Sheikh Zayed Institute for Pediatric Surgical Innovation at Children’s National. “The acquisition and analysis of radiologic images is personalized, and radiologists and technologists adapt their approach to every patient based on their experience. AI can simplify this process and make it faster.”

The scientific community continues to debate AI’s possibility of outperforming humans in specific tasks. In the context of the machine’s performance versus the clinician, Linguraru et al. argue that the community must consider social, psychological and economic contexts in addition to the medical implications to answer this puzzling question.

Still, they believe that a useful radiologic AI system designed with the participation of radiologists could complement, and possibly surpass, human interpretation of the images.

Given AI’s potential applications, the authors encouraged radiologists to use the many freely available resources for learning about machine learning and radiomics and for familiarizing themselves with basic concepts. Coursera, for example, can teach radiologists about convolutional neural networks and other techniques used by AI researchers.

Conversely, AI experts must reach out to radiologists and speak publicly about their work. According to the researchers, it was during those engagement opportunities that clinicians came to understand the labor-saving benefits of automating complex measurements on millions of images, something they have been doing manually for years.

There are also hurdles on this quest for automation, which Linguraru et al. hope both fields can sort out by working together. A critical challenge the experts mentioned is earning the trust of clinicians who are skeptical about the “black box” functionality of AI models, which makes a model’s behavior hard to understand and explain.

Questions also remain about how best to leverage human intelligence and AI together: in a human-in-the-loop setup, people train, tune and test a particular algorithm, while in an AI-in-the-loop setup, the framing is reversed and AI provides input and reflection within human systems.
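
A minimal sketch of the human-in-the-loop pattern, under simplified assumptions: a model proposes labels, a “radiologist” corrects the least confident cases each round, and the corrections feed back into training. The synthetic data and simple classifier are placeholders for a real imaging model and annotation workflow.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(500, 16))           # unlabeled cases (features)
true_labels = (X_pool[:, 0] > 0).astype(int)  # stand-in for expert ground truth

# Seed the loop with a small labeled set containing both classes.
labeled = (list(np.where(true_labels == 0)[0][:10])
           + list(np.where(true_labels == 1)[0][:10]))
model = LogisticRegression()

for round_num in range(5):
    model.fit(X_pool[labeled], true_labels[labeled])
    proba = model.predict_proba(X_pool)[:, 1]
    confidence = np.abs(proba - 0.5)          # low = model is unsure
    queue = [i for i in np.argsort(confidence) if i not in labeled]
    labeled.extend(queue[:10])                # "radiologist" labels 10 cases
    acc = model.score(X_pool, true_labels)
    print(f"round {round_num}: labeled={len(labeled)} accuracy={acc:.3f}")
```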

“The key is to have a good scientific premise to adequately train and validate the algorithms and make them clinically useful. At that point, we can trust the box,” said Linguraru. “In radiology, we should focus on AI systems with radiologists in-the-loop, but also on training radiologists with AI in-the-loop, particularly as AI systems are getting smarter and learning to work better with radiologists.”

The experts also offered possible solutions for sharing large datasets, for building datasets that allow robust investigations and for improving the quality of a model that might be compared against a human gold standard.

This special report is the second in a series of panel discussions hosted by the Radiological Society of North America and the Medical Image Computing and Computer Assisted Intervention Society. The discussion builds upon the first in the series, “Machine Learning for Radiology from Challenges to Clinical Applications,” which touched on how to incentivize annotators to participate in projects and the promotion of “team science” to address research questions and challenges, among other topics.

Machine learning tool detects the risk of genetic syndromes

(A) Control population. (B) Population with Williams-Beuren syndrome. Average faces were generated for each demographic group after automatic face pose correction.

With an average accuracy of 88%, a deep learning technology offers rapid genetic screening that could accelerate the diagnosis of genetic syndromes, recommending further investigation or referral to a specialist in seconds, according to a study published in The Lancet Digital Health. Trained with data from 2,800 pediatric patients from 28 countries, the technology also considers the face variability related to sex, age, racial and ethnic background, according to the study led by Children’s National Hospital researchers.

“We built a software device to increase access to care and a machine learning technology to identify the disease patterns not immediately obvious to the human eye or intuition, and to help physicians non-specialized in genetics,” said Marius George Linguraru, D.Phil., M.A., M.Sc., principal investigator in the Sheikh Zayed Institute for Pediatric Surgical Innovation at Children’s National Hospital and senior author of the study. “This technological innovation can help children without access to specialized clinics, which are unavailable in most of the world. Ultimately, it can help reduce health inequality in under-resourced societies.”

This machine learning technology indicates the presence of a genetic syndrome from a facial photograph captured at the point-of-care, such as in pediatrician offices, maternity wards and general practitioner clinics.
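
As a rough sketch of what point-of-care scoring could look like, the snippet below runs a single face photo through a standard CNN backbone to produce a screening score. The backbone, preprocessing and untrained weights are illustrative assumptions; the published device differs.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

backbone = models.resnet18(weights=None)             # untrained stand-in
backbone.fc = nn.Linear(backbone.fc.in_features, 1)  # one logit: syndrome risk
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def screen(photo_path: str) -> float:
    """Return a screening score in [0, 1]; high scores suggest referral."""
    x = preprocess(Image.open(photo_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return torch.sigmoid(backbone(x)).item()

print(screen("clinic_photo.jpg"))  # hypothetical point-of-care photo
```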

“Unlike other technologies, the strength of this program is distinguishing ‘normal’ from ‘not-normal,’ which makes it an effective screening tool in the hands of community caregivers,” said Marshall L. Summar, M.D., director of the Rare Disease Institute at Children’s National. “This can substantially accelerate the time to diagnosis by providing a robust indicator for patients that need further workup. This first step is often the greatest barrier to moving towards a diagnosis. Once a patient is in the workup system, then the likelihood of diagnosis (by many means) is significantly increased.”

Every year, millions of children are born with genetic disorders — including Down syndrome, a condition in which a child is born with an extra copy of their 21st chromosome causing developmental delays and disabilities, Williams-Beuren syndrome, a rare multisystem condition caused by a submicroscopic deletion from a region of chromosome 7, and Noonan syndrome, a genetic disorder caused by a faulty gene that prevents normal development in various parts of the body.

Most children with genetic syndromes live in regions with limited resources and access to genetic services, and genetic screening may come with a hefty price tag. There are also too few specialists to help identify genetic syndromes early in life, when preventive care can save lives, especially in low-income areas, settings with limited resources and isolated communities.

“The presented technology can assist pediatricians, neonatologists and family physicians in the routine or remote evaluation of pediatric patients, especially in areas with limited access to specialized care,” wrote Porras et al. “Our technology may be a step forward for the democratization of health resources for genetic screening.”

The researchers trained the technology using 2,800 retrospective facial photographs of children, with or without a genetic syndrome, from 28 countries, such as Argentina, Australia, Brazil, China, France, Morocco, Nigeria, Paraguay, Thailand and the U.S. The deep learning architecture was designed to account for the normal variations in the face appearance among populations from diverse demographic groups.

“Facial appearance is influenced by the race and ethnicity of the patients. The large variety of conditions and the diversity of populations are impacting the early identification of these conditions due to the lack of data that can serve as a point of reference,” said Linguraru. “Racial and ethnic disparities still exist in genetic syndrome survival even in some of the most common and best-studied conditions.”

Like all machine learning tools, the model is shaped by the dataset available for training. The researchers expect that as more data from underrepresented groups becomes available, they will adapt the model to localize phenotypical variations within more specific demographic groups.

In addition to being an accessible tool that could be used in telehealth services to assess genetic risk, there are other potentials for this technology.

“I am also excited about the potential of the technology in newborn screening,” said Linguraru. “There are approximately 140 million newborns every year worldwide of which eight million are born with a serious birth defect of genetic or partially genetic origin, many of which are discovered late.”

Children’s National also recently announced that it has entered into a licensing agreement with MGeneRx Inc. for its patented pediatric medical device technology. MGeneRx is a spinoff from BreakThrough BioAssets LLC, a life sciences technology operating company focused on accelerating and commercializing new innovations, such as this technology, with an emphasis on positive social impact.

“The social impact of this technology cannot be underestimated,” said Nasser Hassan, acting chief executive officer of MGeneRx Inc. “We are excited about this licensing agreement with Children’s National Hospital and the opportunity to enhance this technology and expand its application to populations where precision medicine and the earliest possible interventions are sorely needed in order to save and improve children’s lives.”

Top AI models unveiled in COVID-19 challenge to improve lung diagnostics

The top 10 results have been unveiled in the first-of-its-kind COVID-19 Lung CT Lesion Segmentation Grand Challenge, a groundbreaking research competition focused on developing artificial intelligence (AI) models to help in the visualization and measurement of COVID specific lesions in the lungs of infected patients, potentially facilitating more timely and patient-specific medical interventions.

Attracting more than 1,000 global participants, the competition was presented by the Sheikh Zayed Institute for Pediatric Surgical Innovation at Children’s National Hospital in collaboration with leading AI technology company NVIDIA and the National Institutes of Health (NIH). The competing AI models drew on a multi-institutional, multi-national dataset assembled from public datasets from The Cancer Imaging Archive (National Cancer Institute), NIH and the University of Arkansas, originating from patients of different ages, genders and levels of disease severity. NVIDIA provided GPUs as prizes to the top five winners and supported the selection and judging process.

“Improving COVID-19 treatment starts with a clearer understanding of the patient’s disease state. However, a prior lack of global data collaboration limited clinicians in their ability to quickly and effectively understand disease severity across both adult and pediatric patients,” says Marius George Linguraru, D.Phil., M.A., M.Sc., principal investigator at the Sheikh Zayed Institute for Pediatric Surgical Innovation at Children’s National, who led the Grand Challenge initiative. “By harnessing the power of AI through quantitative imaging and machine learning, these discoveries are helping clinicians better understand COVID-19 disease severity and potentially stratify and triage patients into appropriate treatment protocols at different stages of the disease.”

The top 10 AI algorithms were identified from a highly competitive field of participants who tested the data in November and December 2020. The results were unveiled on Jan. 11, 2021, in a virtual symposium, hosted by Children’s National, that featured presentations from top teams, event organizers and clinicians.

Developers of the 10 top AI models from the COVID-19 Lung CT Lesion Segmentation Grand Challenge are:

  1. Shishuai Hu, et al. Northwestern Polytechnical University, China. “Semi-supervised Method for COVID-19 Lung CT Lesion Segmentation”
  2. Fabian Isensee, et al. German Cancer Research Center, Germany. “nnU-Net for Covid Segmentation”
  3. Claire Tang, Lynbrook High School, USA. “Automated Ensemble Modeling for COVID-19 CT Lesion Segmentation”
  4. Qinji Yu, et al. Shanghai JiaoTong University, China. “COVID-19-20 Lesion Segmentation Based on nnUNet”
  5. Andreas Husch, et al. University of Luxembourg, Luxembourg. “Leveraging State-of-the-Art Architectures by Enriching Training Information – a case study”
  6. Tong Zheng, et al. Nagoya University, Japan. “Fully-automated COVID-19-20 Segmentation”
  7. Vitali Liauchuk. United Institute of Informatics Problems (UIIP), Belarus. “Semi-3D CNN with ImageNet Pretrain for Segmentation of COVID Lesions on CT”
  8. Ziqi Zhou, et al. Shenzhen University, China. “Automated Chest CT Image Segmentation of COVID-19 with 3D Unet-based Framework”
  9. Jan Hendrik Moltz, et al. Fraunhofer Institute for Digital Medicine MEVIS, Germany. “Segmentation of COVID-19 Lung Lesions in CT Using nnU-Net”
  10. Bruno Oliveira, et al. 2Ai – Polytechnic Institute of Cávado and Ave, Portugal. “Automatic COVID-19 Detection and Segmentation from Lung Computed Tomography (CT) Images Using 3D Cascade U-net”

Linguraru added that, in addition to the awards for the top five AI models, the winning teams can now partner with clinical institutions across the globe to further evaluate how these quantitative imaging and machine learning methods may impact global public health.

“Quality annotations are a limiting factor in the development of useful AI models,” said Mona Flores, M.D., global head of Medical AI, NVIDIA. “Using the NVIDIA COVID lesion segmentation model available on our NGC software hub, we were able to quickly label the NIH dataset, allowing radiologists to do precise annotations in record time.”

“I applaud the computer science, data science and image processing global academic community for rapidly teaming up to combine multi-disciplinary expertise towards development of potential automated and multi-parametric tools to better study and address the myriad of unmet clinical needs created by the pandemic,” said Bradford Wood, M.D., director, NIH Center for Interventional Oncology and chief, Interventional Radiology Section, NIH Clinical Center. “Thank you to each team for locking arms towards a common cause that unites the scientific community in these challenging times.”