The Med*A-Eye platform is a device-agnostic decision-support system for medical images. It is designed to help clinicians use their time efficiently by prioritizing the image regions most likely to contain abnormalities.
Analysis of radiology images:
Med*A-Eye Mammo uses a pair of "stacked" neural networks: YOLO, optimized for object detection, and EfficientNetB0, trained to distinguish diseased from healthy tissue. The YOLO stage circumscribes features identified as masses with high precision (red regions). The EfficientNetB0 stage identifies lower-probability regions of interest (ROIs) with less precision but higher recall (yellow regions). The output analysis is the union of these ROIs, with the high-probability red ROIs overwriting the yellow. Peer-reviewed reference: A deep learning architecture with an object-detection algorithm and a convolutional neural network for breast mass detection and visualization (Healthcare Analytics)
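The ROI-merging step described above can be sketched as follows. This is a minimal illustration, not the product's actual implementation: it assumes the YOLO stage yields bounding boxes as (x0, y0, x1, y1) tuples and the EfficientNetB0 stage yields a boolean mask over the image; the function name and label encoding are our own.

```python
import numpy as np

def merge_rois(yolo_boxes, effnet_mask, image_shape):
    """Combine high-precision YOLO detections (red) with the
    high-recall EfficientNetB0 mask (yellow). Where the two
    overlap, the red regions overwrite the yellow."""
    # Label encoding (illustrative): 0 = background,
    # 1 = yellow (lower-probability ROI), 2 = red (high-probability mass)
    overlay = np.zeros(image_shape, dtype=np.uint8)
    overlay[effnet_mask] = 1              # yellow: classifier ROIs first
    for x0, y0, x1, y1 in yolo_boxes:     # red: detector boxes take precedence
        overlay[y0:y1, x0:x1] = 2
    return overlay
```

Painting the yellow mask first and the red boxes second implements the "union with red overwriting yellow" rule in a single pass.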
Biopsy analysis and subtyping:
Med*A-Eye Patho uses two parallel branches, one for mapping and one for subtyping. First, the whole-slide image is rescaled. In the mapping ("segmentation") branch, tiles are generated from an image rescaled to the size optimal for mapping, sifted to discard uninformative tiles, and analyzed by differently trained neural networks to produce a consensus segmentation mask. In the subtyping branch, that mask is resized to match the image rescaled for subtyping and used to exclude regions unlikely to be diseased. Tiles are then generated from the unmasked regions, sifted, and analyzed by neural networks trained for subtyping to produce a subtype prediction. Peer-reviewed references: Accurate diagnostic tissue segmentation and concurrent disease subtyping with small datasets (Journal of Pathology Informatics); Resource-frugal classification and analysis of pathology slides using image entropy (Biomedical Signal Processing and Control)
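The tiling and sifting steps shared by both branches can be sketched as below. This is an illustrative reconstruction, not the shipped code: the cited work uses image entropy to sift tiles, so we use grayscale Shannon entropy as the sifting criterion, but the function names, tile size, and entropy threshold here are assumptions.

```python
import numpy as np

def tile_image(img, tile):
    """Split a rescaled whole-slide image into non-overlapping square tiles."""
    h, w = img.shape[:2]
    return [(y, x, img[y:y + tile, x:x + tile])
            for y in range(0, h - tile + 1, tile)
            for x in range(0, w - tile + 1, tile)]

def shannon_entropy(tile):
    """Grayscale Shannon entropy of a tile, in bits."""
    hist, _ = np.histogram(tile, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def sift(tiles, min_entropy=3.0):
    """Discard low-information tiles (blank glass, background) so that
    only tissue-bearing tiles reach the neural networks.
    The 3.0-bit threshold is illustrative."""
    return [(y, x, t) for y, x, t in tiles if shannon_entropy(t) >= min_entropy]
```

In the full pipeline, the surviving tiles would be passed to the segmentation or subtyping networks, and the per-tile outputs reassembled by tile coordinates into a consensus mask or subtype vote.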