Tag Archive for: machine learning

Ian Leibowitz

In the News: Advancing innovations in pediatric gastroenterology and hepatology

“The future is in AI and machine learning and how it allows large data sets to be utilized to a level of understanding that we currently don’t have…We have very rare monogenetic disorders where a single gene is the cause of certain inflammatory bowel diseases in young children and we’re starting to learn about what’s the right therapy by that gene and personalizing medicine… Not just precision medicine (which is better for a population) but really personalizing medicine.”

Learn more from Ian Leibowitz, M.D., division chief of Gastroenterology, Hepatology and Nutrition Services, as he discusses advances in clinical care algorithms that facilitate the timely diagnosis of critical conditions, efforts to increase access to medical and surgical treatment, and work to broaden awareness among primary care physicians to help ensure care is available and provided as early as possible to all patients.

Winners of the International Conference on Medical Image Computing and Computer Assisted Intervention

AI team wins international competition to measure pediatric brain tumors

Children’s National Hospital scientists won first place in a global competition to use artificial intelligence (AI) to analyze pediatric brain tumor volumes, demonstrating the team’s ground-breaking advances in imaging and machine learning.

During the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), the Children’s National team demonstrated the most accurate algorithm to study the volume of brain tumors – the most common solid tumors affecting children and adolescents and a leading cause of disease-related death at this young age. The technology could someday help oncologists understand the extent of a patient’s disease, quantify the efficacy of treatments and predict patient outcomes.

“The Brain Tumor Segmentation Challenge inspires leaders in medical imaging and deep learning to try to solve some of the most vexing problems facing radiologists, oncologists, computer engineers and data scientists,” said Marius George Linguraru, D.Phil., M.A., M.Sc., the Connor Family Professor in Research and Innovation and principal investigator in the Sheikh Zayed Institute for Pediatric Surgical Innovation. “I am honored that our team won, and I’m even more thrilled for our clinicians and their patients, who need us to keep moving forward to find new ways to treat pediatric brain tumors.”

Why we’re excited

With roughly 4,000 children diagnosed yearly, brain tumors are consistently the most common pediatric solid tumor and second only to leukemia among pediatric malignancies. At the urging of Linguraru and one of his peers at the Children’s Hospital of Philadelphia, pediatric data was included in the international competition for the first time, helping to ensure that children are represented in medical and technological advances.

The contest required participants to use data from multiple institutions and consortia so that competing methods could be tested fairly. The Children’s National team created a method that taps the power of two families of deep learning models: 3D convolutional neural networks and 3D Vision Transformers. They identified the regions of the brain affected by tumors, made shrewd data-processing decisions driven by the team’s experience in AI for pediatric healthcare and achieved state-of-the-art results.
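The article does not describe how the two model families were fused, but a common pattern in segmentation ensembles, and a minimal sketch of one plausible approach, is to average the per-voxel class probabilities from the CNN and Vision Transformer branches before taking the final label (all names, weights and shapes below are illustrative, not from the winning entry):

```python
import numpy as np

def ensemble_segmentation(prob_cnn, prob_vit, weight_cnn=0.5):
    """Fuse per-voxel class probabilities from two 3D segmentation models.

    prob_cnn, prob_vit: arrays of shape (classes, D, H, W) holding softmax
    outputs from a 3D CNN and a 3D Vision Transformer, respectively.
    Returns a (D, H, W) integer label map.
    """
    fused = weight_cnn * prob_cnn + (1.0 - weight_cnn) * prob_vit
    return fused.argmax(axis=0)

# Toy example: 2 classes (background, tumor) on a tiny 2x2x2 volume.
cnn = np.zeros((2, 2, 2, 2))
vit = np.zeros((2, 2, 2, 2))
cnn[1] = 0.6; cnn[0] = 0.4   # CNN leans "tumor" everywhere
vit[1] = 0.2; vit[0] = 0.8   # ViT leans "background" everywhere
labels = ensemble_segmentation(cnn, vit)
print(labels.sum())  # 0: the averaged probability favors background
```

In practice, challenge-winning pipelines typically tune the fusion weight on validation data rather than fixing it at 0.5.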

The competition drew 18 teams of leaders from across the AI and machine learning community. The runner-up teams were from NVIDIA and the University of Electronic Science and Technology of China.

The big picture

“Children’s National has an all-star lineup, and I am thrilled to see our scientists recognized on an international stage,” said interim Executive Vice President and Chief Academic Officer Catherine Bollard, M.D., M.B.Ch.B., director of the Center for Cancer and Immunology Research. “As we work to attack brain tumors from multiple angles, we continue to show our exceptional ability to create new and better tools for diagnosing, imaging and treating these devastating tumors.”

Attendees at the inaugural symposium on AI in Pediatric Health and Rare Diseases

AI: The “single greatest tool” for improving access to pediatric healthcare

The future of pediatric medicine holds the promise of artificial intelligence (AI) that can help diagnose rare diseases, provide roadmaps for safer surgeries, tap into predictive technologies to guide individual treatment plans and shrink the distance between patients in rural areas and specialty care providers.

These and dozens of other innovations were contemplated as scientists came together at the inaugural symposium on AI in Pediatric Health and Rare Diseases, hosted by Children’s National Hospital and the Fralin Biomedical Research Institute at Virginia Tech. The daylong event drew experts from the Food and Drug Administration, Pfizer, Oracle Health, NVIDIA, AWS Health and elsewhere to start building a community aimed at using data for the advancement of pediatric medicine.

“AI is the single greatest tool for improving equity and access to health care,” said symposium host Marius George Linguraru, D.Phil., M.A., M.Sc., principal investigator at the Sheikh Zayed Institute for Pediatric Surgical Innovation. “As a population, kids are vastly underrepresented in scientific research and resulting treatments, but pediatric specialties can use AI to provide medical care to kids more efficiently, more quickly and more effectively.”

What they’re saying

Scientists shared their progress in building digital twins to predict surgical outcomes, enhancing visualization to increase the precision of delicate interventions, establishing data command centers to anticipate risks for fragile patients and more. Over two dozen speakers shared their vision for the future of medicine, augmented by the power of AI:

  • Keynote speaker Subha Madhavan, Ph.D., vice president and head of AI and machine learning at Pfizer, discussed various use cases and the potential to bring drugs to market faster using real-world evidence and AI. She saw promise for pediatrics. “This is probably the most engaging mission: children’s health and rare diseases,” she said. “It’s hard to find another mission that’s as compelling.”
  • Brandon J. Nelson, Ph.D., staff fellow in the Division of Imaging, Diagnostics and Software Reliability at the Food and Drug Administration, shared ways AI will improve diagnostic imaging and reduce radiation exposure to patients, using more advanced image reconstruction and denoising techniques. “That is really our key take-home message,” he said. “We can get what … appear as higher dose images, but with less dose.”
  • Daniel Donoho, M.D., a neurosurgeon at Children’s National, introduced the audience to the potential of “Smart ORs”: operating rooms where systems can ingest surgery video and provide feedback and skill assessments. “We have to transform the art of surgery into a measurable and improvable scientific practice,” he said.
  • Debra Regier, M.D., chief of Genetics and Metabolism at Children’s National, discussed how AI could be used to diagnose and treat rare diseases by conducting deep dives into genetics and studying dysmorphisms in patients’ faces. Already, Children’s National has designed an app – mGene – that measures facial features and provides a risk score to help anyone in general practice determine if a child has a genetic condition. “The untrained eye can stay the untrained eye, and the family can continue to have faith in their provider,” she said.

What’s next

Linguraru and others stressed the need to design AI for kids, rather than borrow it from adults, to ensure medicine meets their unique needs. He noted that scientists will need to solve challenges, such as the lack of data inherent in rare pediatric disorders and the simple fact that children grow. “Children are not mini-adults,” Linguraru said. “There are big changes in a child’s life.”

The landscape will require thoughtfulness. Naren Ramakrishnan, Ph.D., director of the Sanghani Center for Artificial Intelligence & Analytics at Virginia Tech and symposium co-host, said that scientists are heading into an era with a new incarnation of public-private partnerships, but many questions remain about how data will be shared and organizations will connect. “It is not going to be business as usual, but what is this new business?” he asked.

lung ct scan

With COVID-19, artificial intelligence performs well to study diseased lungs

Artificial intelligence can be rapidly designed to study the lung images of COVID-19 patients, opening the door to the development of platforms that can provide more timely and patient-specific medical interventions during outbreaks, according to research published this month in Medical Image Analysis.

The findings come as part of a global test of AI’s power, called the COVID-19 Lung CT Lesion Segmentation Challenge 2020. More than 2,000 international teams came together to train the power of machine learning and imaging on COVID-19, led by researchers at Children’s National Hospital, AI tech giant NVIDIA and the National Institutes of Health (NIH).

The bottom line

Many of the competing AI platforms were successfully trained to analyze lung lesions in COVID-19 patients and measure acute issues including lung thickening, effusions and other clinical findings. Ten leaders were named in the competition, which ran between November and December 2020. The datasets included patients with a range of ages and disease severity.

Yet work remains before AI could be implemented in a clinical setting. The AI models performed comparably to radiologists when analyzing data similar to what the algorithms had already encountered. However, the AI was less accurate when tested on fresh data from other sources, indicating that systems may need to study larger and more diverse data sets to reach their full potential. This generalization challenge has been noted elsewhere in the AI field as well.
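The article does not name the scoring metric, but segmentation challenges of this kind are typically judged with overlap measures such as the Dice coefficient; a minimal sketch of how such a score compares a model's lesion mask with a radiologist's annotation:

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-8):
    """Overlap between binary lesion masks: 2|A∩B| / (|A| + |B|).

    pred, truth: boolean arrays of the same shape (True = lesion voxel).
    Returns 1.0 for perfect agreement, 0.0 for no overlap.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Toy 2D example: ground truth covers 4 pixels, prediction covers 6.
truth = np.zeros((4, 4), dtype=bool); truth[1:3, 1:3] = True
pred = np.zeros((4, 4), dtype=bool);  pred[1:3, 1:4] = True
print(round(dice_coefficient(pred, truth), 3))  # 0.8 = 2*4 / (6 + 4)
```

A model that scores well on familiar data but whose Dice drops on scans from new hospitals exhibits exactly the generalization gap described above.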

What they’re saying

“These are the first steps in learning how we can quickly and accurately train AI for clinical use,” said Marius George Linguraru, D.Phil., M.A., M.Sc., principal investigator at the Sheikh Zayed Institute for Pediatric Surgical Innovation at Children’s National, who led the Grand Challenge Initiative. “The global interest in COVID-19 gave us a groundbreaking opportunity to address a health crisis, and multidisciplinary teams can now focus that interest and energy on developing better tools and methods.”

Holger Roth, senior applied research scientist at NVIDIA, said the challenge gave researchers around the world a shared platform for developing and evaluating AI algorithms to quickly detect and quantify COVID lesions from lung CT images. “These models help researchers visualize and measure COVID-specific lesions of infected patients and can facilitate timelier and patient-specific medical interventions to better treat COVID,” he said.

Moving the field forward

The organizers see great potential for clinical use. In areas with limited resources, AI could help triage patients, guide the use of therapeutics or provide diagnoses when expensive testing is unavailable. AI-defined standardization in clinical trials could also uniformly measure the effects of the countermeasures used against the disease.

Linguraru and his colleagues recommend more challenges, like the lung segmentation challenge, to develop AI applications in biomedical spaces that can test the functionality of these platforms and harness their potential. Open-source AI algorithms and public curated data, such as those offered through the COVID-19 Lung CT Lesion Segmentation Challenge 2020, are valuable resources for the scientific and clinical communities to work together on advancing healthcare.

“The optimal treatment of COVID-19 and other diseases hinges on the ability of clinicians to understand disease throughout populations – in both adults and children,” Linguraru said. “We are making significant progress with AI, but we must walk before we can run.”

echocardiogram

AI may revolutionize rheumatic heart disease early diagnosis

Researchers at Children’s National Hospital have created a new artificial intelligence (AI) algorithm that promises to be as successful at detecting early signs of rheumatic heart disease (RHD) in color Doppler echocardiography clips as expert clinicians. Even better, this novel model diagnoses this deadly heart condition from echocardiography images of varying quality — including from low-resource settings — a huge challenge that has delayed efforts to automate RHD diagnosis for children in these areas.

Why it matters

Current estimates are that 40.5 million people worldwide live with rheumatic heart disease, and that it kills 306,000 people every year. Most of those affected are children, adolescents and young adults under age 25.

Though largely eradicated in nations such as the United States, rheumatic fever remains prevalent in developing countries, including those in sub-Saharan Africa. Recent studies have shown that, if detected soon enough, a regular dose of penicillin may slow the development and damage caused by RHD. But it has to be detected.

The hold-up in the field

Diagnosing RHD requires an ultrasound image of the heart, known as an echocardiogram. Ultrasound, however, is a highly variable imaging modality, full of texture and noise, making it one of the most challenging to interpret visually. Specialists undergo significant training to read these images correctly, yet in areas where RHD is rampant, people who can read them reliably are few and far between. Making matters worse, the devices used in these low-resource settings vary in quality themselves, especially compared with what is available in a well-resourced hospital elsewhere.

The research team hypothesized that a novel, automated deep learning-based method might successfully diagnose RHD, which would allow for more diagnoses in areas where specialists are limited. To date, however, machine learning has struggled with noisy ultrasound images in much the same way the human eye does.

Children’s National leads the way

Using approaches that previously produced objective digital biometric analysis software for non-invasive screening of genetic disease, researchers at the Sheikh Zayed Institute for Pediatric Surgical Innovation, including medical imaging scientist Pooneh Roshanitabrizi, Ph.D., and principal investigator Marius Linguraru, D.Phil., M.A., M.Sc., partnered with clinicians from Children’s National Hospital who are heavily involved in efforts to research, improve treatments for and ultimately eliminate the deadly impacts of RHD in children, including Craig Sable, M.D., associate chief of Cardiology and director of Echocardiography, and cardiology fellow Kelsey Brown, M.D. The collaborators also included cardiac surgeons from the Uganda Heart Institute and cardiologists from Cincinnati Children’s Hospital Medical Center.

Dr. Linguraru’s team of AI and imaging scientists spent hours working with cardiologists, including Dr. Sable, to understand exactly how they approach and assess RHD from echocardiograms. Building the tool on that clinical knowledge is what sets it apart from other efforts to use machine learning for this purpose. Orienting the approach to the clinical steps of diagnosis led to the very first deep learning algorithm to diagnose mild RHD with success similar to that of the specialists themselves. After the platform was built, 2,136 echocardiograms from 591 children treated at the Uganda Heart Institute fed the learning algorithm.

What’s next

The team will continue to collect data points based on clinical imaging data to refine and validate the tool. Ultimately, researchers will look for a way that the algorithm can work directly with ultrasound/echocardiogram machines. For example, the program might be run through an app that sits on top of an ultrasound device and works on the same platform to communicate directly with it, right in the clinic. By putting the two technologies together, care providers on the ground will be able to diagnose mild cases and prescribe prophylactic treatments like penicillin in one visit.

The first outcomes from the program were showcased in a presentation by Dr. Roshanitabrizi at one of the biggest and most prestigious medical imaging and AI computing meetings — the 25th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI).

binary numbers

New datasets predict surgeon performance during complications

Computer algorithms, such as machine learning and computer vision, are increasingly able to discover patterns in visual data, powering exciting new technologies worldwide. At Children’s National Hospital, physician-scientists develop and apply these advanced algorithms to make surgery safer by studying surgical video.

The big picture

In a new study published in JAMA Network Open, experts at Children’s National and allied institutions created and validated the first dataset to depict hemorrhage control for machine learning applications, the simulated outcomes following carotid artery laceration (SOCAL) video dataset.

The authors designed SOCAL to serve as a benchmark for data-science applications, including object detection, performance metric development and outcome prediction. Hemorrhage is a high-stakes adverse event, and controlling it poses unique challenges for video analysis. With SOCAL, the authors aim to solve a valuable use case with algorithms.

“As neurosurgeons, we are often called to perform high-risk and high-impact procedures. No one is more passionate about making surgery safer,” said Daniel Donoho, M.D., neurosurgeon at Children’s National Hospital and senior author of the study. “Our team at Children’s National and the Sheikh Zayed Institute is poised to lead this exciting new field of surgical data science.”

The hold-up in the field

These algorithms require raw data for their development, but the field lacks datasets that depict surgeons managing complications.

By creating automated insights from surgical video, these tools may one day improve patient care by detecting complications before patients are harmed and by facilitating surgeon development.

Why it matters

“Until very recently, surgeons have not known what may be possible with large quantities of surgical video captured each day in the operating room,” said Gabriel Zada, M.D., M.S., F.A.A.N.S., F.A.C.S., director of the Brain Tumor Center at the University of Southern California (USC) and co-author of the study. “Our team’s research led by Dr. Donoho shows the feasibility and the potential of computer vision analysis in surgical skill assessment, virtual coaching and simulation training of surgeons.”

The lack of videos of adverse events creates a dataset bias that hampers surgical data science. SOCAL was designed to meet this need. After creating a cadaveric simulator of internal carotid artery injury and training hundreds of surgeons on the model at nationwide courses, the authors developed computational models to measure and improve performance.

“We are currently comparing our algorithms to experts, including those developed using the SOCAL dataset,” Dr. Donoho said. “Human versus machine, and our patients are ultimately the winners in the competition.”

What’s next

The authors are also building a nationwide collective of surgeons and data scientists to share data and improve algorithm performance through exciting partnerships with USC, California Institute of Technology and other institutions.

You can read the full study “Utility of the Simulated Outcomes Following Carotid Artery Laceration Video Data Set for Machine Learning Applications” in JAMA Network Open.

AI chip illustration

How radiologists and data scientists can collaborate to advance AI in clinical practice

In a special report published in Radiology: Artificial Intelligence, a Children’s National Hospital expert and other institutions discussed a shared multidisciplinary vision to develop radiologic and medical imaging techniques through advanced quantitative imaging biomarkers and artificial intelligence (AI).

“AI algorithms can construct, reconstruct and interpret radiologic images, but they also have the potential to guide the scanner and optimize its parameters,” said Marius George Linguraru, D.Phil., M.A., M.Sc., principal investigator in the Sheikh Zayed Institute for Pediatric Surgical Innovation at Children’s National. “The acquisition and analysis of radiologic images is personalized, and radiologists and technologists adapt their approach to every patient based on their experience. AI can simplify this process and make it faster.”

The scientific community continues to debate AI’s possibility of outperforming humans in specific tasks. In the context of the machine’s performance versus the clinician, Linguraru et al. argue that the community must consider social, psychological and economic contexts in addition to the medical implications to answer this puzzling question.

Still, they believe that a useful radiologic AI system developed with the participation of radiologists could complement, and possibly surpass, human interpretation of the images.

Given AI’s potential applications, the authors encouraged radiologists to tap the many freely available resources for learning about machine learning and radiomics and for familiarizing themselves with basic concepts. Coursera, for example, can teach radiologists about convolutional neural networks and other techniques used by AI researchers.

Conversely, AI experts must reach out to radiologists and participate in public speaking events about their work. According to the researchers, during those engagement opportunities clinicians came to understand the labor-saving benefits of automating complex measurements on millions of images, something they have been doing manually for years.

There are also hurdles on this quest toward automation, which Linguraru et al. hope the two fields can sort out by working together. A critical challenge the experts mentioned is earning the trust of clinicians who are skeptical about the “black box” functionality of AI models, which makes a model’s behavior hard to understand and explain.

Questions also remain about how best to combine human intelligence and AI: with human-in-the-loop approaches, where people train, tune and test a particular algorithm, or with AI-in-the-loop approaches, where AI provides input and reflection within human systems.

“The key is to have a good scientific premise to adequately train and validate the algorithms and make them clinically useful. At that point, we can trust the box,” said Linguraru. “In radiology, we should focus on AI systems with radiologists in-the-loop, but also on training radiologists with AI in-the-loop, particularly as AI systems are getting smarter and learning to work better with radiologists.”

The experts also proposed solutions for sharing large datasets, building datasets that allow robust investigations and improving the quality of models that might be compared against a human gold standard.

This special report is the second in a series of panel discussions hosted by the Radiological Society of North America and the Medical Image Computing and Computer Assisted Intervention Society. The discussion builds upon the first in the series “Machine Learning for Radiology from Challenges to Clinical Applications” that touched on how to incentivize annotators to participate in projects, the promotion of “team science” to address research questions and challenges, among other topics.

control population and population with Williams-Beuren syndrome.

Machine learning tool detects the risk of genetic syndromes

(A) Control population. (B) Population with Williams-Beuren syndrome. Average faces were generated for each demographic group after automatic face pose correction.

With an average accuracy of 88%, a deep learning technology offers rapid genetic screening that could accelerate the diagnosis of genetic syndromes, recommending further investigation or referral to a specialist in seconds, according to a study published in The Lancet Digital Health and led by Children’s National Hospital researchers. Trained with data from 2,800 pediatric patients from 28 countries, the technology also accounts for facial variability related to sex, age, and racial and ethnic background.

“We built a software device to increase access to care and a machine learning technology to identify the disease patterns not immediately obvious to the human eye or intuition, and to help physicians non-specialized in genetics,” said Marius George Linguraru, D.Phil., M.A., M.Sc., principal investigator in the Sheikh Zayed Institute for Pediatric Surgical Innovation at Children’s National Hospital and senior author of the study. “This technological innovation can help children without access to specialized clinics, which are unavailable in most of the world. Ultimately, it can help reduce health inequality in under-resourced societies.”

This machine learning technology indicates the presence of a genetic syndrome from a facial photograph captured at the point-of-care, such as in pediatrician offices, maternity wards and general practitioner clinics.

“Unlike other technologies, the strength of this program is distinguishing ‘normal’ from ‘not-normal,’ which makes it an effective screening tool in the hands of community caregivers,” said Marshall L. Summar, M.D., director of the Rare Disease Institute at Children’s National. “This can substantially accelerate the time to diagnosis by providing a robust indicator for patients that need further workup. This first step is often the greatest barrier to moving towards a diagnosis. Once a patient is in the workup system, then the likelihood of diagnosis (by many means) is significantly increased.”
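The study itself should be consulted for how the risk score is calibrated; purely as an illustration of the screening idea Dr. Summar describes, a hypothetical cutoff on a model's risk score can be chosen to flag nearly all true cases for further workup, accepting some false positives in exchange (all names and numbers below are illustrative, not from the study):

```python
import numpy as np

def choose_screening_threshold(scores, labels, target_sensitivity=0.95):
    """Pick the highest risk-score cutoff that still flags at least
    `target_sensitivity` of the true syndrome cases for further workup.

    scores: model risk scores in [0, 1]; labels: 1 = genetic syndrome.
    """
    positives = np.sort(scores[labels == 1])
    # Flagging everything at or above the k-th smallest positive score
    # keeps (n - k) / n of the positives; choose k accordingly.
    n = len(positives)
    k = int(np.floor(n * (1.0 - target_sensitivity)))
    return positives[k]

# Synthetic cohort: 50 controls with low scores, 50 cases with high scores.
rng = np.random.default_rng(0)
labels = np.array([0] * 50 + [1] * 50)
scores = np.concatenate([rng.uniform(0.0, 0.6, 50),   # controls
                         rng.uniform(0.4, 1.0, 50)])  # cases
t = choose_screening_threshold(scores, labels)
flagged = (scores[labels == 1] >= t).mean()
print(flagged >= 0.95)  # True: at least 95% of cases are referred onward
```

Tilting the threshold toward high sensitivity fits a screening tool: a missed case is costlier than an extra referral, which specialists can rule out downstream.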

Every year, millions of children are born with genetic disorders, including Down syndrome, in which a child is born with an extra copy of chromosome 21, causing developmental delays and disabilities; Williams-Beuren syndrome, a rare multisystem condition caused by a submicroscopic deletion from a region of chromosome 7; and Noonan syndrome, a genetic disorder caused by a faulty gene that prevents normal development in various parts of the body.

Most children with genetic syndromes live in regions with limited resources and access to genetic services, and genetic screening can come with a hefty price tag. There are also too few specialists to help identify genetic syndromes early in life, when preventive care can save lives, especially in low-income, under-resourced and isolated communities.

“The presented technology can assist pediatricians, neonatologists and family physicians in the routine or remote evaluation of pediatric patients, especially in areas with limited access to specialized care,” said Porras et al. “Our technology may be a step forward for the democratization of health resources for genetic screening.”

The researchers trained the technology using 2,800 retrospective facial photographs of children, with or without a genetic syndrome, from 28 countries, such as Argentina, Australia, Brazil, China, France, Morocco, Nigeria, Paraguay, Thailand and the U.S. The deep learning architecture was designed to account for the normal variations in the face appearance among populations from diverse demographic groups.

“Facial appearance is influenced by the race and ethnicity of the patients. The large variety of conditions and the diversity of populations are impacting the early identification of these conditions due to the lack of data that can serve as a point of reference,” said Linguraru. “Racial and ethnic disparities still exist in genetic syndrome survival even in some of the most common and best-studied conditions.”

Like all machine learning tools, this one is trained on the available dataset. The researchers expect that as more data from underrepresented groups becomes available, they will adapt the model to localize phenotypical variations within more specific demographic groups.

In addition to being an accessible tool that could be used in telehealth services to assess genetic risk, there are other potentials for this technology.

“I am also excited about the potential of the technology in newborn screening,” said Linguraru. “There are approximately 140 million newborns every year worldwide of which eight million are born with a serious birth defect of genetic or partially genetic origin, many of which are discovered late.”

Children’s National also recently announced that it has entered into a licensing agreement with MGeneRx Inc. for its patented pediatric medical device technology. MGeneRx is a spinoff from BreakThrough BioAssets LLC, a life sciences technology operating company focused on accelerating and commercializing new innovations, such as this technology, with an emphasis on positive social impact.

“The social impact of this technology cannot be underestimated,” said Nasser Hassan, acting chief executive officer of MGeneRx Inc. “We are excited about this licensing agreement with Children’s National Hospital and the opportunity to enhance this technology and expand its application to populations where precision medicine and the earliest possible interventions are sorely needed in order to save and improve children’s lives.”

Coronavirus and lungs with world map in the background

Top AI models unveiled in COVID-19 challenge to improve lung diagnostics

The top 10 results have been unveiled in the first-of-its-kind COVID-19 Lung CT Lesion Segmentation Grand Challenge, a groundbreaking research competition focused on developing artificial intelligence (AI) models to help in the visualization and measurement of COVID-specific lesions in the lungs of infected patients, potentially facilitating more timely and patient-specific medical interventions.

Attracting more than 1,000 global participants, the competition was presented by the Sheikh Zayed Institute for Pediatric Surgical Innovation at Children’s National Hospital in collaboration with leading AI technology company NVIDIA and the National Institutes of Health (NIH). The competition’s AI models utilized a multi-institutional, multi-national dataset built from public data provided by The Cancer Imaging Archive (National Cancer Institute), NIH and the University of Arkansas, originating from patients of different ages, genders and levels of disease severity. NVIDIA provided GPUs to the top five winners as prizes and supported the selection and judging process.

“Improving COVID-19 treatment starts with a clearer understanding of the patient’s disease state. However, a prior lack of global data collaboration limited clinicians in their ability to quickly and effectively understand disease severity across both adult and pediatric patients,” says Marius George Linguraru, D.Phil., M.A., M.Sc., principal investigator at the Sheikh Zayed Institute for Pediatric Surgical Innovation at Children’s National, who led the Grand Challenge initiative. “By harnessing the power of AI through quantitative imaging and machine learning, these discoveries are helping clinicians better understand COVID-19 disease severity and potentially stratify and triage patients into appropriate treatment protocols at different stages of the disease.”

The top 10 AI algorithms were identified from a highly competitive field of participants who tested the data in November and December 2020. The results were unveiled on Jan. 11, 2021, in a virtual symposium, hosted by Children’s National, that featured presentations from top teams, event organizers and clinicians.

Developers of the 10 top AI models from the COVID-19 Lung CT Lesion Segmentation Grand Challenge are:

  1. Shishuai Hu, et al. Northwestern Polytechnical University, China. “Semi-supervised Method for COVID-19 Lung CT Lesion Segmentation”
  2. Fabian Isensee, et al. German Cancer Research Center, Germany. “nnU-Net for Covid Segmentation”
  3. Claire Tang, Lynbrook High School, USA. “Automated Ensemble Modeling for COVID-19 CT Lesion Segmentation”
  4. Qinji Yu, et al. Shanghai JiaoTong University, China. “COVID-19-20 Lesion Segmentation Based on nnUNet”
  5. Andreas Husch, et al. University of Luxembourg, Luxembourg. “Leveraging State-of-the-Art Architectures by Enriching Training Information – a case study”
  6. Tong Zheng, et al. Nagoya University, Japan. “Fully-automated COVID-19-20 Segmentation”
  7. Vitali Liauchuk. United Institute of Informatics Problems (UIIP), Belarus. “Semi-3D CNN with ImageNet Pretrain for Segmentation of COVID Lesions on CT”
  8. Ziqi Zhou, et al. Shenzhen University, China. “Automated Chest CT Image Segmentation of COVID-19 with 3D Unet-based Framework”
  9. Jan Hendrik Moltz, et al. Fraunhofer Institute for Digital Medicine MEVIS, Germany. “Segmentation of COVID-19 Lung Lesions in CT Using nnU-Net”
  10. Bruno Oliveira, et al. 2Ai – Polytechnic Institute of Cávado and Ave, Portugal. “Automatic COVID-19 Detection and Segmentation from Lung Computed Tomography (CT) Images Using 3D Cascade U-net”
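Segmentation challenges like this one are typically scored by how closely each model’s predicted lesion mask overlaps an expert-annotated reference, most commonly with the Dice coefficient. The specific metric is an assumption here, as the source does not name it; a minimal sketch:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Overlap between a predicted and a reference lesion mask (1.0 = perfect)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    total = pred.sum() + truth.sum()
    if total == 0:  # both masks empty: treat as perfect agreement
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / total

# Toy 2D "CT slice" masks; real challenge data are 3D CT volumes.
truth = np.zeros((4, 4), dtype=int)
truth[1:3, 1:3] = 1           # 4-pixel reference lesion
pred = np.zeros((4, 4), dtype=int)
pred[1:3, 1:4] = 1            # 6-pixel prediction, 4 pixels overlap
print(round(dice_coefficient(pred, truth), 2))  # → 0.8
```

The metric penalizes both missed lesion voxels and spurious ones, which is why it is a common yardstick for ranking segmentation models.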

Linguraru added that, beyond the awards for the top five AI models, the winning algorithms are now available to clinical institutions across the globe for further evaluation of how these quantitative imaging and machine learning methods may impact global public health.

“Quality annotations are a limiting factor in the development of useful AI models,” said Mona Flores, M.D., global head of Medical AI, NVIDIA. “Using the NVIDIA COVID lesion segmentation model available on our NGC software hub, we were able to quickly label the NIH dataset, allowing radiologists to do precise annotations in record time.”

“I applaud the computer science, data science and image processing global academic community for rapidly teaming up to combine multi-disciplinary expertise towards development of potential automated and multi-parametric tools to better study and address the myriad of unmet clinical needs created by the pandemic,” said Bradford Wood, M.D., director, NIH Center for Interventional Oncology and chief, Interventional Radiology Section, NIH Clinical Center. “Thank you to each team for locking arms towards a common cause that unites the scientific community in these challenging times.”

communication network concept image

Children’s National joins international AI COVID-19 initiative

Children’s National Hospital is the first pediatric partner to join an international initiative led by leading technology firm NVIDIA and Massachusetts General Brigham Hospital, focused on creating solutions through machine and deep learning to benefit COVID-19 healthcare outcomes. The initiative, known as EXAM (EMR CXR AI Model) is the largest and most diverse federated learning enterprise, comprised of 20 leading hospitals from around the globe.
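Federated learning, the approach EXAM is built on, lets each hospital train on its own patient data and share only model weights, which a central server averages into a global model; raw records never leave a site. A minimal sketch of federated averaging with a toy logistic-regression model (illustrative only; the data and the EXAM implementation details are assumptions, not from the source):

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One hospital's gradient-descent update on its private data (logistic regression)."""
    w = w.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)      # gradient step on local data only
    return w

def federated_average(w, sites, rounds=10):
    """FedAvg: each round, sites train locally; the server averages the weights."""
    for _ in range(rounds):
        local = [local_update(w, X, y) for X, y in sites]
        sizes = [len(y) for _, y in sites]
        w = np.average(local, axis=0, weights=sizes)  # size-weighted mean
    return w

rng = np.random.default_rng(0)
sites = []
for _ in range(3):  # three simulated "hospitals"; data are never pooled
    X = rng.normal(size=(50, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)
    sites.append((X, y))

w = federated_average(np.zeros(2), sites)
X_all = np.vstack([X for X, _ in sites])
y_all = np.concatenate([y for _, y in sites])
accuracy = ((X_all @ w > 0) == y_all).mean()
print(f"global-model accuracy: {accuracy:.2f}")
```

The size-weighted average means larger sites contribute proportionally more to the global model, one common choice among several aggregation strategies.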

Marius George Linguraru, D.Phil., M.A., M.Sc., principal investigator at the Sheikh Zayed Institute for Pediatric Surgical Innovation at Children’s National Hospital, noted that one of the core goals of the initiative is to create a platform that brings together resources from a variety of leading institutions to advance the care of COVID-19 patients across the board, including children.

“Children’s National Hospital is proud to be the first pediatric partner joining the world’s leading healthcare institutions in this collaboration to advance global health,” says Linguraru. “We are currently living in a time where rapid access to this kind of global data has never been more important — we need solutions that work fast and are effective. That is not possible without this degree of collaboration and we look forward to continuing this important work with our partners to address one of the most significant healthcare challenges in our lifetime.”

A recent systematic review and meta-analysis from Children’s National Hospital became another core contribution to understanding how children are impacted by COVID-19. Led by Linguraru and accepted for publication in Pediatric Pulmonology, it offers the first comprehensive summary of the findings of the various studies published thus far that describe COVID-19 lung imaging data across the pediatric population.

The review examined articles based on chest CT imaging in 1,026 pediatric patients diagnosed with COVID-19, and concluded that chest CT manifestations in those patients could potentially be used to prompt intervention across the pediatric population.

“Until this point, pediatric COVID-19 studies have largely been restricted to case reports and small case series, which have prevented the identification of any specific pediatric lung disease patterns in COVID-19 patients,” says Linguraru. “Not only did this review help identify the common patterns in the lungs of pediatric patients presenting COVID-19 symptoms, which are distinct from the signs of other viral respiratory infections in children, it also provided insight into the differences between children and adults with COVID-19.”

Earlier this month, NVIDIA announced the EXAM initiative had – in just 20 days – developed an artificial intelligence (AI) model to determine whether a patient demonstrating COVID-19 symptoms in an emergency room would require supplemental oxygen hours – even days – after the initial exam. This data ultimately aids physicians in determining the proper level of care for patients, including potential ICU placement.

The EXAM initiative produced a machine learning model that precisely predicts the level of supplemental oxygen incoming patients will require.

In addition to Children’s National Hospital, other participants included Mass General Brigham and its affiliated hospitals in Boston; NIHR Cambridge Biomedical Research Centre; The Self-Defense Forces Central Hospital in Tokyo; National Taiwan University MeDA Lab and MAHC and Taiwan National Health Insurance Administration; Tri-Service General Hospital in Taiwan; Kyungpook National University Hospital in South Korea; Faculty of Medicine, Chulalongkorn University in Thailand; Diagnosticos da America SA in Brazil; University of California, San Francisco; VA San Diego; University of Toronto; National Institutes of Health in Bethesda, Maryland; University of Wisconsin-Madison School of Medicine and Public Health; Memorial Sloan Kettering Cancer Center in New York; and Mount Sinai Health System in New York.

kidney ultrasound

Using computers to enhance hydronephrosis diagnosis

We live in a time of great uncertainty yet great promise, particularly when it comes to harnessing technology to improve lives. Researchers at Children’s National Hospital are using quantitative imaging and machine intelligence to enhance care for children with a common kidney disease, and their initial results are very promising. Their technique provides an accurate way to predict earlier which children with hydronephrosis will need surgical intervention, simplifying and enhancing their care.

Hydronephrosis means “water in the kidney” and is a condition in which a kidney doesn’t empty normally. One of the most frequently detected abnormalities on prenatal ultrasound, hydronephrosis affects up to 4.5% of all pregnancies and is often discovered prenatally or just after birth.

Although hydronephrosis in children sometimes resolves by itself, identifying which kidneys are obstructed and more likely to need intervention isn’t particularly easy. But it is critical. “Children with severe hydronephrosis over long periods of time can start losing kidney function to the point of losing a kidney,” says Marius George Linguraru, DPhil, MA, MSc, principal investigator of the project; director of the Precision Medical Imaging Group at the Sheikh Zayed Institute for Pediatric Surgical Innovation; and professor of radiology, pediatrics and biomedical engineering at George Washington University.

Children with hydronephrosis face three levels of examination and intervention: ultrasound, nuclear imaging testing called diuresis renogram and surgery for the critical cases. “What we want to do with this project is stratify kids as early as possible,” Dr. Linguraru says. “The earlier we can predict, the better we can plan the clinical care for these kids.”

Ultrasound is used to see whether there is a blockage and to gauge the severity of the hydronephrosis. “Ultrasound is non-invasive, non-radiating, and does not expose the child to any risk prenatally or postnatally,” Dr. Linguraru says. Ultrasound evaluations require a trained radiologist, however, and interpretations vary considerably. Radiologists use a grading system based on the ultrasound appearance of the kidney to classify hydronephrosis as mild, moderate or severe, but studies show this grading isn’t predictive of longer-term outcomes.

Children whose ultrasounds raise concern are referred for a diuresis renogram. Costly, complex, invasive and irradiating, this nuclear imaging test measures how well the kidney empties. Although appropriate when there are good clinical indications, doctors try to minimize its use. “Management of hydronephrosis is complex,” Dr. Linguraru says. “We want to use ultrasound as much as possible and much less diuresis renogram.”

For those patients whose kidney is obstructed and who eventually need surgical intervention, the sooner that decision can be made the better. “The more you wait for a kidney that is severely obstructed, the more function may be lost. If intervention is required, it’s preferable to do it early,” Dr. Linguraru says. Of course, for a child whose hydronephrosis will likely resolve on its own, intervention is not the best option.

Dr. Linguraru and the multidisciplinary team at Children’s National Hospital, including radiology and urology clinicians, are putting the power of computers to work interpreting subtleties in the ultrasound data that humans just can’t see. In their pilot study they found that 60% of the nuclear imaging tests could have been safely avoided without missing any of the critical cases of hydronephrosis. “With our technique we are measuring physiological and anatomical changes in the ultrasound image of the kidney,” Dr. Linguraru says. “The human eye may find it difficult to put all this together, but the machine can do it. We use quantitative imaging to do deep phenotyping of the kidney and machine learning to interpret the data.”

Results of the initial study indicate that kids with a mild condition can be safely discharged earlier, while the model can identify all the kids with obstructions and accelerate their diagnosis by referring them earlier for further investigation. “There are only benefits: some kids will get earlier diagnoses, some earlier discharges,” Dr. Linguraru says.

The team also has a way to improve the interpretation of diuresis renograms. “We analyze the dynamics of the kidney’s drainage curve in a quantifiable way. Using machine learning to interpret those results, we showed we can potentially discharge some kids earlier and accelerate intervention for the most severe cases instead of waiting and repeating the invasive tests,” he says. The framework has 93% accuracy, including 91% sensitivity and 96% specificity, in predicting surgical cases, a significant improvement over the accuracy of clinical metrics.
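For reference, those figures follow standard confusion-matrix definitions: sensitivity is the fraction of true surgical cases the model flags, specificity the fraction of non-surgical cases it correctly clears. A quick sketch with hypothetical counts picked only to reproduce numbers like those reported:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity and specificity from confusion-matrix counts."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # surgical cases correctly flagged
        "specificity": tn / (tn + fp),   # non-surgical cases correctly cleared
    }

# Hypothetical cohort of 200 kidneys: 100 surgical, 100 non-surgical.
m = diagnostic_metrics(tp=91, fn=9, tn=96, fp=4)
print(m["sensitivity"], m["specificity"])  # → 0.91 0.96
```

Reporting sensitivity and specificity separately matters clinically: a missed surgical case (false negative) and an unnecessary referral (false positive) carry very different costs.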

The next step is a study connecting all the protocols. “Right now we have a study on ultrasound, a study on nuclear imaging, but we need to connect them so a child with hydronephrosis immediately benefits,” says Dr. Linguraru. Future work will focus on streamlining and accelerating diagnosis and intervention for kids who need it, both in prospective studies and hopefully clinically as well.

Hydronephrosis is an area in which machine learning can be applied to pediatric health in meaningful ways because of the sheer volume of cases.

“Machine learning algorithms work best when they are trained well on a lot of data,” Dr. Linguraru says. “Often in pediatric conditions, data are sparse because conditions are rare. Hydronephrosis is one of those areas that can really benefit from this new technological development because there is a big volume of patients. We are collecting more data, and we’re becoming smarter with these kinds of algorithms.”

Learn more about the Precision Medical Imaging Laboratory and its work to enhance clinical information in medical images to improve children’s health.