Medical Devices and Healthcare IT

Combining Augmented Reality with Deep Learning for Cancer Diagnostics

17 April 2018

Left: Schematic overview of the ARM. Right: A picture of the prototype, which has been retrofitted into a typical clinical-grade light microscope. Source: Google

Applications of deep learning in medical disciplines including ophthalmology, dermatology, radiology and pathology have recently shown great promise for increasing both the accuracy and availability of high-quality healthcare. To further this technology, Google researchers have developed a tool that pairs augmented reality with a deep learning neural network to help pathologists spot cancerous cells on slides under a microscope.

The prototype Augmented Reality Microscope (ARM) platform consists of a modified light microscope that enables real-time image analysis and presentation of the results of machine learning algorithms directly into the field of view. The ARM can be retrofitted into existing light microscopes found in hospitals and clinics using low-cost, readily available components, and without the need for whole-slide digital scans of the tissue being analyzed.
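
While Google has not published the ARM's source code, the core pattern described is a real-time loop that grabs frames from a camera placed in the microscope's optical path, runs them through a detection model, and merges the result back into the viewed image. The Python sketch below illustrates that capture-infer-overlay pattern with OpenCV; the detect_tumor_mask placeholder, the camera index and the on-screen display are all stand-ins for components of the actual hardware, not Google's implementation.

```python
import cv2
import numpy as np

def detect_tumor_mask(frame):
    """Hypothetical stand-in for the ARM's detection network.

    A real system would run a trained convolutional network here;
    this placeholder returns an empty mask so the loop is runnable.
    """
    return np.zeros(frame.shape[:2], dtype=np.uint8)

# Camera assumed to be tapped off the microscope's optical path.
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = detect_tumor_mask(frame)
    # Blend detections into the live view; the real ARM projects
    # this overlay back into the eyepiece via a micro-display.
    overlay = frame.copy()
    overlay[mask > 0] = (0, 255, 0)
    blended = cv2.addWeighted(frame, 0.7, overlay, 0.3, 0)
    cv2.imshow("ARM field of view (simulated)", blended)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```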

The system builds on prior research in which a Google team developed a deep learning network trained to spot breast cancer, along with lessons learned from the company's augmented reality projects, to produce an application that draws circles around suspected cancer cells in a tissue sample. Because the add-on diagnostic tool works with the type of microscope already in use, the pathologist views the tissue as usual, moving the slide around to inspect the whole sample. If the ARM spots possible cancer cells, it draws a ring around them, alerting the pathologist to take a closer look.
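
The ring-drawing step maps naturally onto standard contour extraction. The sketch below (illustrative, not Google's code) shows one common OpenCV approach: trace the connected regions of a binary detection mask and outline each one on the frame. The draw_detection_rings function and the toy mask are hypothetical.

```python
import cv2
import numpy as np

def draw_detection_rings(frame, mask, color=(0, 255, 0), thickness=3):
    """Trace each connected region in a binary detection mask and
    draw its outline on the frame, similar in spirit to how the
    ARM rings suspected cancer cells in the field of view."""
    # OpenCV 4.x API: findContours returns (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    cv2.drawContours(frame, contours, -1, color, thickness)
    return frame

# Toy example: one circular "detection" in an otherwise empty mask.
frame = np.full((480, 640, 3), 200, dtype=np.uint8)
mask = np.zeros((480, 640), dtype=np.uint8)
cv2.circle(mask, (320, 240), 60, 255, -1)  # filled disc = detected region
annotated = draw_detection_rings(frame, mask)
cv2.imwrite("annotated.png", annotated)
```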

The platform has been configured to run two different cancer detection algorithms: one that detects breast cancer metastases in lymph node specimens, and another that detects prostate cancer in prostatectomy specimens. These models can run at magnifications between 4x and 40x, and the result of a given model is displayed by outlining detected tumor regions with a green contour.
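
One plausible way to keep track of multiple models and their shared 4x-40x operating range is a small configuration registry, sketched below. The ModelConfig structure and its field names are assumptions made for illustration, not part of the published system.

```python
from dataclasses import dataclass

@dataclass
class ModelConfig:
    name: str
    specimen: str
    min_magnification: float  # lowest objective the model supports
    max_magnification: float  # highest objective the model supports
    contour_color: tuple = (0, 255, 0)  # green outline, as in the ARM

# Illustrative registry of the two models described in the article.
MODELS = {
    "lymph_node_metastasis": ModelConfig(
        name="lymph_node_metastasis",
        specimen="lymph node",
        min_magnification=4.0,
        max_magnification=40.0,
    ),
    "prostate_cancer": ModelConfig(
        name="prostate_cancer",
        specimen="prostatectomy",
        min_magnification=4.0,
        max_magnification=40.0,
    ),
}

def select_model(key, magnification):
    """Return the requested model config if the current objective's
    magnification falls inside its supported range."""
    cfg = MODELS[key]
    if not (cfg.min_magnification <= magnification <= cfg.max_magnification):
        raise ValueError(
            f"{key} supports {cfg.min_magnification}x-"
            f"{cfg.max_magnification}x, got {magnification}x"
        )
    return cfg

print(select_model("prostate_cancer", 10.0))
```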

Both cancer models were originally trained on images from a whole-slide scanner with a significantly different optical configuration, yet performed remarkably well on the ARM with no additional re-training. When run on the ARM, the lymph node metastasis model achieved an area under the curve (AUC) of 0.98 and the prostate cancer model an AUC of 0.96 for cancer detection in the field of view (FoV), only slightly lower than the performance obtained on whole-slide imagery. The performance of these models can be further improved by additional training on digital images captured directly from the ARM itself.
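
For readers unfamiliar with the metric, AUC here refers to the area under the ROC curve for per-FoV cancer detection: 1.0 means the model ranks every cancer-containing field above every benign one, while 0.5 is chance level. The toy example below shows how the metric is computed with scikit-learn; the labels and scores are made up for illustration and are not from the study.

```python
from sklearn.metrics import roc_auc_score

# Made-up per-field-of-view labels (1 = cancer present) and model scores.
y_true = [0, 0, 1, 1, 0, 1, 0, 1]
y_score = [0.10, 0.35, 0.80, 0.65, 0.20, 0.90, 0.40, 0.75]

# Every positive FoV outscores every negative one here, so the AUC is
# a perfect 1.0; the ARM models reported 0.98 and 0.96.
print(roc_auc_score(y_true, y_score))  # -> 1.0 for this toy data
```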

To contact the author of this article, email shimmelstein@globalspec.com

