
Automated Deep Learning Accurately Assesses Muscle and Fat Tissue on Routine Chest CT

Key findings

  • Florian J. Fintelmann, MD, and colleagues at Massachusetts General Hospital previously showed that muscle assessment on chest CT improves the prediction of morbidity and mortality following lung resection in patients with lung cancer
  • The team now reports their development and validation of a fully automated deep learning system for analyzing muscle and adipose tissues on CT scans at three thoracic vertebral levels
  • The system's performance matched that of research assistants trained and supervised by expert radiologists, with intraclass correlation coefficients of 0.95 to 0.99
  • There was no significant difference in performance when characterizing scans obtained with or without intravenous contrast

Routinely acquired CT images in patients with cancer contain important biomarkers, such as muscle and adipose tissue, that are typically overlooked during clinical care. Florian J. Fintelmann, MD, physician–scientist in the Division of Thoracic Imaging and Intervention of the Department of Radiology at Massachusetts General Hospital and associate professor of Radiology at Harvard Medical School, and colleagues recently reported multicenter studies in Annals of Surgery and Cancer Medicine that showed that muscle assessment on chest CT before lung surgery improved the prediction of morbidity and mortality in patients with lung cancer.

Now, the group has developed an automated image analysis system (or pipeline) that fully automates quantification and characterization of thoracic muscle and adipose tissue at multiple vertebral levels. It's described in Radiology: Artificial Intelligence by Christopher P. Bridge, DPhil, director of Machine Learning in the Data Science Office and a member of the Athinoula A. Martinos Center for Biomedical Imaging, Till D. Best, MD candidate, Dr. Fintelmann, and colleagues.


The system was created with deep learning, a form of artificial intelligence that uses large artificial neural networks to learn patterns in images or other complex data through iterative training steps without specific programming. Once the network is fully trained, it can make predictions for previously unseen images.

The system, consisting of two neural networks, was trained and validated on 629 chest CT scans from 629 patients with lung cancer, obtained before lobectomy in a study pooling data from three medical centers. The scans were acquired with 37 scanner models from four manufacturers.

The team built their deep learning system by extending their previously described body composition analysis pipeline for abdominal CT. In the first stage, a convolutional DenseNet neural network (pattern-recognition algorithm) analyzed each two-dimensional axial image and selected images representative of each of three vertebral levels: T5, T8, and T10.

A U-Net, a convolutional neural network for segmentation, then segmented muscle and adipose tissue on each image.
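The two-stage design can be illustrated with a minimal, hypothetical sketch (not the authors' code). A per-level score matrix stands in for the DenseNet slice classifier, and commonly used body-composition attenuation windows stand in for the learned U-Net segmentation:

```python
import numpy as np

VERTEBRAL_LEVELS = ("T5", "T8", "T10")
# Commonly used body-composition attenuation windows (HU); in the actual
# system a trained U-Net, not thresholding, produces the masks.
HU_WINDOWS = {"muscle": (-29, 150), "adipose": (-190, -30)}

def select_slices(level_scores):
    """Stage 1 stand-in: pick the axial slice with the highest score per level.
    `level_scores` has shape (num_slices, num_levels), standing in for the
    slice classifier's per-level confidence."""
    return {lvl: int(np.argmax(level_scores[:, i]))
            for i, lvl in enumerate(VERTEBRAL_LEVELS)}

def segment_tissue(axial_slice_hu):
    """Stage 2 stand-in: threshold-based masks in place of the U-Net."""
    return {tissue: (axial_slice_hu >= lo) & (axial_slice_hu <= hi)
            for tissue, (lo, hi) in HU_WINDOWS.items()}

# Toy volume: 60 axial slices of 16x16 synthetic "HU" values
rng = np.random.default_rng(0)
volume = rng.integers(-200, 160, size=(60, 16, 16))
scores = rng.random((60, 3))
picked = select_slices(scores)
masks = segment_tissue(volume[picked["T8"]])
```

In the published pipeline both stages are learned from annotated data; the thresholds above merely show how the selected slice flows into per-tissue masks.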

Two trained research assistants supervised by a board-certified radiologist generated the ground truth for level selection and segmentation.

Network Performance

On an independent test set, the researchers compared the system's output against the ground truth using two metrics for muscle and adipose tissue at each level:

  • Predicted cross-sectional area—The median absolute errors were 3.1 cm² (3.8%) for muscle and 4.6 cm² (3.4%) for adipose tissue across all levels
  • Median attenuation—The median absolute error was 1.0 HU for both muscle and adipose tissue
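As a hypothetical illustration of how these two metrics are derived from a segmentation mask (not taken from the published pipeline), the cross-sectional area is the mask's pixel count scaled by the in-plane pixel size, and the attenuation is the median HU value under the mask:

```python
import numpy as np

def tissue_metrics(hu_slice, mask, pixel_spacing_mm=(0.8, 0.8)):
    """Cross-sectional area (cm^2) and median attenuation (HU) for one mask.
    `pixel_spacing_mm` is the in-plane voxel size as stored in the DICOM
    header; 0.8 mm is a hypothetical value chosen for this example."""
    pixel_area_cm2 = (pixel_spacing_mm[0] / 10.0) * (pixel_spacing_mm[1] / 10.0)
    area_cm2 = float(mask.sum()) * pixel_area_cm2
    median_hu = float(np.median(hu_slice[mask])) if mask.any() else float("nan")
    return area_cm2, median_hu

# Toy example: 100 "muscle" pixels at 50 HU on a 16x16 slice
hu = np.full((16, 16), -100.0)
mask = np.zeros((16, 16), dtype=bool)
mask[:10, :10] = True
hu[mask] = 50.0
area, med = tissue_metrics(hu, mask)  # 100 px * 0.0064 cm^2 = 0.64 cm^2
```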

The system's results closely matched those of the human analysts, with intraclass correlation coefficients ranging from 0.951 to 0.998.
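For readers unfamiliar with the metric, an intraclass correlation coefficient quantifies absolute agreement between raters, here the system and the human analysts. The sketch below computes the single-rater, absolute-agreement form, ICC(2,1) in the Shrout and Fleiss convention; the article does not specify which ICC form the authors used, so this is an illustration, not their analysis:

```python
import numpy as np

def icc_2_1(ratings):
    """Two-way random-effects, absolute-agreement, single-rater ICC(2,1)
    for an (n_subjects, n_raters) array."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)          # per-subject means
    col_means = x.mean(axis=0)          # per-rater means
    ssr = k * ((row_means - grand) ** 2).sum()   # between-subjects sum of squares
    ssc = n * ((col_means - grand) ** 2).sum()   # between-raters sum of squares
    sst = ((x - grand) ** 2).sum()
    sse = sst - ssr - ssc                        # residual sum of squares
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Toy data: a "system" that nearly reproduces the human measurements
human = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
system = human + np.array([0.1, -0.1, 0.2, -0.2, 0.0])
icc = icc_2_1(np.column_stack([human, system]))
```

Values near 1 indicate that the system and the human raters produce nearly interchangeable measurements.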

Intravenous contrast material was used for 56% of scans. There was no significant difference in performance between scans with and without contrast regarding slice selection, cross-sectional area, or median attenuation.

Potential for Expansion

The earlier system that was adapted for this work was designed for body composition analysis at the third lumbar vertebral level (L3). Only minor changes to the DenseNet were required, suggesting the possibility of scaling the new system to more vertebral levels.

Similarly, the single U-Net was able to segment all three thoracic levels and might be extensible to additional levels.

Learn more about the Division of Thoracic Imaging and Intervention

Learn more about Radiology Research at Massachusetts General Hospital

