A multi-modal transformer for cell type-agnostic regulatory predictions.

Authors
Keywords
Abstract

Sequence-based deep learning models have emerged as powerful tools for deciphering the cis-regulatory grammar of the human genome but cannot generalize to unobserved cellular contexts. Here, we present EpiBERT, a multi-modal transformer that learns generalizable representations of genomic sequence and cell type-specific chromatin accessibility through a masked accessibility-based pre-training objective. Following pre-training, EpiBERT can be fine-tuned for gene expression prediction, achieving accuracy comparable to the sequence-only Enformer model, while also being able to generalize to unobserved cell states. The learned representations are interpretable and useful for predicting chromatin accessibility quantitative trait loci (caQTLs), regulatory motifs, and enhancer-gene links. Our work represents a step toward improving the generalization of sequence-based deep neural networks in regulatory genomics.
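The masked accessibility-based pre-training objective described in the abstract can be illustrated with a short sketch: mask a fraction of the accessibility-profile bins, feed the DNA sequence together with the masked profile into a transformer encoder, and train the model to reconstruct the masked values. The code below is a minimal, hypothetical illustration in PyTorch; the module names, dimensions, masking rate, and loss choice are assumptions for illustration and do not reflect EpiBERT's actual implementation.

```python
import torch
import torch.nn as nn

class MaskedAccessibilityModel(nn.Module):
    """Toy multi-modal encoder: one-hot DNA sequence plus a chromatin
    accessibility track, trained to reconstruct masked accessibility bins."""

    def __init__(self, d_model: int = 128, n_layers: int = 4, n_heads: int = 4):
        super().__init__()
        # Project the 4-channel one-hot sequence and the 1-channel accessibility
        # signal into a shared embedding space (an assumption for this sketch).
        self.seq_proj = nn.Linear(4, d_model)
        self.acc_proj = nn.Linear(1, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 1)  # reconstruct accessibility per bin

    def forward(self, seq_onehot, accessibility, mask):
        # Zero out masked accessibility bins so the model must infer them
        # from sequence and the surrounding (unmasked) signal.
        masked_acc = accessibility.masked_fill(mask.unsqueeze(-1), 0.0)
        x = self.seq_proj(seq_onehot) + self.acc_proj(masked_acc)
        h = self.encoder(x)
        return self.head(h).squeeze(-1)


def masked_reconstruction_loss(pred, target, mask):
    # Score the reconstruction only over the masked positions.
    return ((pred - target.squeeze(-1)) ** 2)[mask].mean()


# Example usage with random data (batch of 2 regions, 512 genomic bins each).
seq = torch.randint(0, 4, (2, 512))
seq_onehot = torch.nn.functional.one_hot(seq, num_classes=4).float()
accessibility = torch.rand(2, 512, 1)
mask = torch.rand(2, 512) < 0.15  # mask ~15% of bins (an arbitrary choice)

model = MaskedAccessibilityModel()
pred = model(seq_onehot, accessibility, mask)
loss = masked_reconstruction_loss(pred, accessibility, mask)
loss.backward()
```

In this kind of setup, fine-tuning for a downstream task such as gene expression prediction would replace or extend the reconstruction head while reusing the pre-trained encoder weights; the abstract reports that this transfer is what allows generalization to unobserved cell states.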

Year of Publication
2025
Journal
Cell Genomics
Pages
100762
Date Published
01/2025
ISSN
2666-979X
DOI
10.1016/j.xgen.2025.100762
PubMed ID
39884279
Links