The Med*A-Eye platform is a device-agnostic decision-support system for medical images. The objective is to help clinicians use their time efficiently by prioritizing the image regions most likely to contain abnormalities.

Analysis of radiology images:

Med*A-Eye Mammo uses a stack of neural networks, each optimized for a different type of feature. Object detectors identify calcifications (green boxes) and masses (red regions) with high precision. A classifier identifies mass regions with lower precision but high recall (yellow regions). And an instance segmentation algorithm identifies architectural distortions (blue regions).

Peer-reviewed reference: A deep learning architecture with an object-detection algorithm and a convolutional neural network for breast mass detection and visualization (Healthcare Analytics)
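The way such a stack pools per-feature models into one annotated overlay can be sketched in Python. This is an illustrative sketch only, not the Med*A-Eye implementation: the `Finding` structure, the model callables, and the confidence-based draw order are all assumptions introduced here.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Finding:
    kind: str        # e.g. "calcification", "mass", "architectural distortion"
    overlay: str     # display color shown to the clinician (green, red, yellow, blue)
    region: Tuple[int, int, int, int]  # x, y, width, height of the flagged region
    score: float     # model confidence in [0, 1]

def analyze(image, models: List[Callable[[object], List[Finding]]]) -> List[Finding]:
    """Run each specialized network on the image and pool all findings."""
    findings: List[Finding] = []
    for model in models:
        findings.extend(model(image))
    # Sort ascending by confidence so the highest-confidence findings
    # are drawn last, i.e. on top of the overlay.
    return sorted(findings, key=lambda f: f.score)
```

A high-precision detector and a high-recall classifier can then be registered side by side; pooling keeps both kinds of evidence visible rather than forcing one model to do everything.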

Biopsy analysis and subtyping:

Med*A-Eye Patho uses two parallel branches, one for mapping and one for subtyping. First, a whole-slide image is rescaled. In the mapping (or "segmentation") branch, tiles are generated from an image rescaled to the size best suited for mapping, sifted, and analyzed by differently trained neural networks to produce a consensus segmentation mask. In the subtyping branch, the mask is resized to match the image rescaled for subtyping and used to exclude regions unlikely to be diseased. Tiles are generated from the unmasked regions of the subtyping image, sifted, and analyzed by neural networks trained for subtyping to produce a subtype prediction.

Peer-reviewed references: Accurate diagnostic tissue segmentation and concurrent disease subtyping with small datasets (Journal of Pathology Informatics); Resource-frugal classification and analysis of pathology slides using image entropy (Biomedical Signal Processing and Control)
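The tile-sift-vote flow of the two branches can be sketched as follows. This is a simplified illustration under stated assumptions, not the Med*A-Eye code: the standard-deviation sifting criterion, the per-tile 0/1 votes, and the shared scale for both branches (the mask resizing step is omitted) are all assumptions made for the sketch.

```python
import numpy as np

def tile(image: np.ndarray, size: int):
    """Yield (row, col, tile) patches covering the image on a fixed grid."""
    h, w = image.shape[:2]
    for r in range(0, h - size + 1, size):
        for c in range(0, w - size + 1, size):
            yield r, c, image[r:r + size, c:c + size]

def sift(tiles, min_std: float = 5.0):
    """Keep only tiles with enough variation to be informative (assumed criterion)."""
    return [(r, c, t) for r, c, t in tiles if t.std() > min_std]

def consensus_mask(image: np.ndarray, models, size: int) -> np.ndarray:
    """Mapping branch: each model votes 0/1 per tile; the majority builds the mask."""
    h, w = image.shape[:2]
    mask = np.zeros((h, w), dtype=bool)
    for r, c, t in sift(tile(image, size)):
        votes = sum(m(t) for m in models)
        if votes > len(models) / 2:
            mask[r:r + size, c:c + size] = True
    return mask

def subtype(image: np.ndarray, mask: np.ndarray, models, size: int):
    """Subtyping branch: classify only tiles inside the mask, then tally the labels."""
    counts: dict = {}
    for r, c, t in sift(tile(image, size)):
        if mask[r:r + size, c:c + size].any():
            for m in models:
                label = m(t)
                counts[label] = counts.get(label, 0) + 1
    return max(counts, key=counts.get) if counts else None
```

The mask computed by the first branch gates the second, so the subtyping networks spend no effort on regions unlikely to be diseased.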