
New AI tool enhances medical imaging with deep learning and text analysis

  • Apr 19, 2024

In a recent study published in Nature Medicine, researchers developed the medical concept retriever (MONET), a foundation model that connects medical images to text and scores images for the presence of clinically relevant concepts, supporting critical tasks in the development and deployment of medical artificial intelligence (AI).


Background

Building reliable image-based medical AI systems requires scrutinizing the data and the neural network models at every stage of development, from training through post-deployment.

Richly annotated medical datasets containing semantically meaningful concepts could help demystify these 'black-box' technologies.

Understanding clinically significant concepts such as darker pigmentation, atypical pigment networks, and multiple colors is medically valuable; however, obtaining concept labels is labor-intensive, and most medical datasets provide only diagnostic annotations.

About the study

In the current study, researchers created MONET, an AI model that can annotate medical images with medically relevant concepts. They designed the model to identify a wide range of human-understandable concepts across two image modalities in dermatology: dermoscopic and clinical images.

The researchers gathered 105,550 dermatology image-text pairs from PubMed articles and medical textbooks and used these images and their accompanying natural-language text from this broad medical literature collection to train MONET.

MONET assigns each image a score for every concept, indicating the extent to which the image expresses that concept.

MONET is based on contrastive learning, an AI approach that allows plain-language descriptions to be applied directly to images.

This approach avoids manual labeling, allowing image-text pairs to be used at a considerably larger scale than is possible with supervised learning. After training MONET, the researchers evaluated its effectiveness in concept annotation and other AI transparency-related use cases.
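To illustrate the general mechanism, the minimal sketch below shows how a contrastive image-text model can score an image against plain-language concept descriptions. It uses OpenAI's public CLIP checkpoint via Hugging Face transformers as a stand-in; the concept list, prompt wording, and file path are illustrative assumptions, not MONET's actual implementation.

```python
# Sketch of CLIP-style concept scoring (illustrative; not MONET's published code).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical dermatology concepts; MONET's concept vocabulary and prompts differ.
concepts = ["atypical pigment network", "blue-whitish veil", "erythema"]
prompts = [f"a dermoscopic image showing {c}" for c in concepts]

image = Image.open("lesion.jpg")  # placeholder image path
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

# Image-text similarity logits serve as per-concept scores for this image;
# ranking many images by one concept's score retrieves its strongest examples.
concept_scores = outputs.logits_per_image.squeeze(0)
for concept, score in zip(concepts, concept_scores.tolist()):
    print(f"{concept}: {score:.2f}")
```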

The researchers tested MONET’s concept annotation capabilities by retrieving the dermoscopic and clinical images that scored highest for each concept.

They compared MONET’s performance to supervised learning strategies, namely ResNet-50 models trained with ground-truth concept labels, and to OpenAI’s Contrastive Language-Image Pretraining (CLIP) model.

The researchers also used MONET to automate data auditing and tested its efficacy in concept differential analysis.

They applied MONET to the International Skin Imaging Collaboration (ISIC) dataset, the largest public collection of dermoscopic images, with over 70,000 images routinely used to train dermatological AI models.
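Concept differential analysis boils down to comparing concept scores between two groups of images. The sketch below shows one simple way to do that, assuming scores have already been computed (for example, with a model like the one sketched earlier); the function name, placeholder data, and the mean-difference ranking are illustrative choices, not the study's exact procedure.

```python
# Sketch of concept differential analysis over precomputed concept scores
# (rows = images, columns = concepts). Placeholder data only.
import numpy as np

def differential_concepts(scores_group_a, scores_group_b, concept_names, top_k=5):
    """Rank concepts by the difference in mean score between two image groups."""
    diff = scores_group_a.mean(axis=0) - scores_group_b.mean(axis=0)
    order = np.argsort(diff)[::-1]  # concepts most enriched in group A first
    return [(concept_names[i], float(diff[i])) for i in order[:top_k]]

# Example with random placeholder scores for two hypothetical image subsets.
rng = np.random.default_rng(0)
names = ["erythema", "blue-whitish veil", "atrophy", "hyperpigmentation"]
group_a = rng.random((100, len(names)))
group_b = rng.random((80, len(names)))
print(differential_concepts(group_a, group_b, names))
```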

The researchers developed Model Auditing with MONET (MA-MONET), which uses MONET to automatically detect semantically meaningful medical concepts associated with model errors.

They evaluated MA-MONET in real-world settings by training convolutional neural network (CNN) models on data from several universities and assessing them with automated concept annotation.
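A rough sketch of this kind of model audit is shown below: cluster the images, find the clusters where the downstream classifier fails most often, and describe those clusters by their top-scoring concepts. The k-means clustering choice, cluster count, and placeholder data are assumptions made for illustration rather than the study's exact pipeline.

```python
# Sketch of MA-MONET-style auditing: rank image clusters by error rate and
# describe each high-error cluster with its highest-scoring concepts.
import numpy as np
from sklearn.cluster import KMeans

def audit_clusters(image_embeddings, concept_scores, errors, concept_names, n_clusters=10):
    """errors: 1 where the classifier was wrong, 0 where it was right."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(image_embeddings)
    report = []
    for c in range(n_clusters):
        mask = labels == c
        if mask.sum() == 0:
            continue
        error_rate = errors[mask].mean()
        top = np.argsort(concept_scores[mask].mean(axis=0))[::-1][:3]
        report.append((float(error_rate), [concept_names[i] for i in top]))
    return sorted(report, reverse=True)  # highest-error clusters first

# Placeholder shapes: 500 images, 64-d embeddings, 4 concepts.
rng = np.random.default_rng(1)
emb = rng.normal(size=(500, 64))
scores = rng.random((500, 4))
errs = rng.integers(0, 2, size=500)
print(audit_clusters(emb, scores, errs, ["blue-whitish veil", "blue", "black", "gray"])[:3])
```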

They compared the ‘MONET+CBM’ (concept bottleneck model) automatic concept scoring approach against human labeling, which applies only to images carrying SkinCon labels.

The researchers also investigated the effect of concept selection on MONET+CBM performance, specifically the choice of task-relevant concepts in the bottleneck layer. Further, they evaluated the impact of including the concept ‘red’ in the bottleneck on MONET+CBM performance in inter-institutional transfer scenarios.
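The concept bottleneck idea can be sketched in a few lines: automatically generated concept scores become the only inputs to a simple, interpretable classifier, so each learned weight is tied to a human-readable concept. The synthetic labels, feature sizes, and logistic-regression head below are illustrative assumptions, not the architecture used in the study.

```python
# Minimal concept-bottleneck sketch: concept scores feed an interpretable linear head.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
concept_scores = rng.random((1000, 10))  # 10 dermatologist-chosen concepts (placeholder)
# Synthetic malignancy labels driven by the first three concepts, for demonstration only.
malignant = (concept_scores[:, :3].sum(axis=1) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(concept_scores, malignant, random_state=0)
cbm_head = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Each coefficient links the prediction to a named concept, which is what
# makes the bottleneck model interpretable.
print("per-concept weights:", np.round(cbm_head.coef_[0], 2))
print("test accuracy:", cbm_head.score(X_test, y_test))
```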

Results

MONET is a flexible medical AI foundation model that can accurately annotate concepts across dermatological images, as confirmed by board-certified dermatologists.

Its concept annotation capability enables meaningful trustworthiness evaluations across the medical AI pipeline, as demonstrated by model audits, data audits, and the development of interpretable models.

MONET successfully retrieves relevant dermoscopic and clinical images for a range of dermatological concepts, outperforming the baseline CLIP model in both modalities while remaining comparable to supervised learning models on clinical images.

MONET’s automated annotation capability aids concept differential analysis, identifying the distinguishing traits between any two arbitrary groups of images in human-readable language.

The researchers found that MONET recognizes differentially expressed concepts in clinical and dermoscopic datasets and can support large-scale dataset auditing.

Applying MA-MONET revealed concepts linked with high error rates, such as a cluster of images characterized by blue-whitish veil, blue, black, gray, and flat-topped.

The cluster with the highest error rate was characterized by erythema, regression structures, red, atrophy, and hyperpigmentation. Dermatologists chose ten task-relevant concepts for the MONET+CBM and CLIP+CBM bottleneck layers, allowing for flexible labeling options.

MONET+CBM surpassed all baselines in mean area under the receiver operating characteristic curve (AUROC) for predicting malignancy and melanoma in clinical images, although supervised black-box models consistently performed best overall in the malignancy and melanoma prediction tests.
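For readers unfamiliar with the metric, AUROC compares model scores against ground-truth labels across all decision thresholds. The short sketch below computes it with scikit-learn on synthetic placeholder predictions; the numbers have no relation to the study's reported results.

```python
# Sketch of an AUROC comparison with scikit-learn; data are synthetic placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
y_true = rng.integers(0, 2, size=200)                 # placeholder malignancy labels
scores_model_a = y_true * 0.6 + rng.random(200) * 0.4  # stand-in for a stronger model
scores_model_b = rng.random(200)                       # stand-in for a weaker baseline

print("model A AUROC:", round(roc_auc_score(y_true, scores_model_a), 3))
print("model B AUROC:", round(roc_auc_score(y_true, scores_model_b), 3))
```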

Conclusion

The study found that image-text models can increase AI transparency and trustworthiness in medicine. MONET, a foundation model for medical concept annotation, can improve the transparency and trustworthiness of dermatological AI by enabling concept annotation at scale.

With it, AI model developers can improve data collection, processing, and model optimization, resulting in more dependable medical AI models.

MONET can also support the clinical deployment and monitoring of medical imaging AI systems by enabling thorough auditing and fairness analysis through the annotation of skin tone descriptors.

