Transfer learning improves performance in volumetric electron microscopy organelle segmentation across tissues.

Bioinformatics Advances
Authors
Abstract

MOTIVATION: Volumetric electron microscopy (VEM) enables nanoscale-resolution three-dimensional imaging of biological samples. Identifying and labeling organelles, cells, and other structures in the image volume is required for interpretation, but manual labeling is extremely time-consuming. Deep learning segmentation algorithms can automate this task, but they traditionally require substantial manual annotation for training, and such labeled datasets are typically unavailable for new samples.

RESULTS: We show that transfer learning can help address this challenge. By pretraining on VEM data from multiple mammalian tissues and organelle types and then fine-tuning on a target dataset, we segment multiple organelles with high performance while requiring relatively little new training data. We benchmark our method on three published VEM datasets and a new rat liver dataset imaged over a 56×56×11 μm volume (7000×7000×219 px) using serial block-face scanning electron microscopy, with corresponding manually labeled mitochondria and endoplasmic reticulum structures. We further benchmark our approach against the Segment Anything Model 2 and MitoNet in zero-shot, prompted, and fine-tuned settings.

AVAILABILITY AND IMPLEMENTATION: The rat liver dataset's raw image volume, manual ground-truth annotations, and model predictions are freely shared at github.com/Xrioen/cross-tissue-transfer-learning-in-VEM.
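The abstract describes a pretrain-then-fine-tune workflow: a segmentation model is first trained on VEM data from multiple tissues and organelle types, then adapted to a new target dataset with a small amount of new annotation. The sketch below illustrates that workflow in broad strokes only; the tiny 3D network, the frozen-encoder strategy, the checkpoint path, and all hyperparameters are illustrative assumptions and do not reflect the authors' actual architecture or training pipeline.

import torch
import torch.nn as nn

class TinySegNet3D(nn.Module):
    """Small 3D encoder-decoder producing per-voxel class logits (illustrative only)."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv3d(32, n_classes, 1)  # voxelwise classification head

    def forward(self, x):
        return self.head(self.encoder(x))

model = TinySegNet3D(n_classes=2)

# Pretraining stage: in the paper's workflow the model would be fit on VEM volumes
# from multiple mammalian tissues and organelle types. A hypothetical checkpoint
# would be restored like this (path is a placeholder, not from the paper):
# model.load_state_dict(torch.load("pretrained_vem_model.pt"))

# Fine-tuning stage: adapt to the target tissue with little new labeled data.
# Here the pretrained encoder is frozen and only the head is trained (an assumed
# strategy; the authors may fine-tune all weights).
for p in model.encoder.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Placeholder target-dataset patch: one 64^3 grayscale sub-volume with voxel labels.
x = torch.randn(1, 1, 64, 64, 64)
y = torch.randint(0, 2, (1, 64, 64, 64))

for step in range(10):  # a few fine-tuning steps on the small labeled set
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()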

Year of Publication
2025
Journal
Bioinformatics Advances
Volume
5
Issue
1
Pages
vbaf021
Date Published
12/2025
ISSN
2635-0041
DOI
10.1093/bioadv/vbaf021
PubMed ID
40196751