Self-Supervised Learning for Feature Extraction from Glomerular Images and Disease Classification with Minimal Annotations

Key Points

Self-supervised learning extracts meaningful glomerular features without teacher labels.
A model pretrained with self-distillation with no labels (DINO) outperformed conventional supervised learning in disease and clinical parameter classification.
The DINO-pretrained model enabled deep learning on small datasets, reducing annotation effort.

Background

Deep learning has great potential in digital kidney pathology. However, its effectiveness depends heavily on the availability of extensively labeled datasets, which are often limited because of the specialized knowledge and time required for their creation. This limitation hinders the widespread application of deep learning for the analysis of kidney biopsy images.

Methods

We applied self-distillation with no labels (DINO), a self-supervised learning method, to a dataset of 10,423 glomerular images obtained from 384 periodic acid–Schiff-stained kidney biopsy slides. Glomerular features extracted from the DINO-pretrained backbone were visualized using principal component analysis. We then performed classification tasks by adding either k-nearest neighbor classifiers or linear head layers to the DINO-pretrained or ImageNet-pretrained backbones. These models were trained on our labeled classification dataset. Performance was evaluated using metrics such as the area under the receiver operating characteristic curve (ROC-AUC). The classification tasks encompassed four disease categories (minimal change disease, mesangial proliferative GN, membranous nephropathy, and diabetic nephropathy) and clinical parameters such as hypertension, proteinuria, and hematuria.
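As a rough sketch of this pipeline (not the authors' code), the Python below loads a publicly released DINO ViT-S/16 backbone via torch.hub, extracts frozen features, projects them with principal component analysis, and fits a k-nearest neighbor classifier evaluated by ROC-AUC. The image tensors and labels are random stand-ins for the glomerular dataset, and the public checkpoint substitutes for the backbone pretrained on periodic acid–Schiff-stained images in the study.

```python
import numpy as np
import torch
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import roc_auc_score

# Public DINO ViT-S/16 checkpoint from the official repository; the study
# instead pretrained its backbone on PAS-stained glomerular images.
backbone = torch.hub.load("facebookresearch/dino:main", "dino_vits16")
backbone.eval()

@torch.no_grad()
def extract_features(images: torch.Tensor) -> np.ndarray:
    """Return frozen [CLS]-token embeddings for a batch of RGB crops."""
    return backbone(images).cpu().numpy()

# Random tensors and labels stand in for the labeled glomerular crops.
train_images = torch.randn(16, 3, 224, 224)
test_images = torch.randn(8, 3, 224, 224)
train_labels = np.array([0, 1, 2, 3] * 4)  # four disease categories
test_labels = np.array([0, 1, 2, 3] * 2)

train_feats = extract_features(train_images)  # shape: (16, 384)
test_feats = extract_features(test_images)

# Principal component analysis of the frozen feature space (cf. Results).
coords_2d = PCA(n_components=2).fit_transform(train_feats)

# k-nearest neighbor classification head on the frozen DINO features.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(train_feats, train_labels)
probs = knn.predict_proba(test_feats)

# One-vs-rest ROC-AUC across the four disease categories.
print("ROC-AUC:", roc_auc_score(test_labels, probs, multi_class="ovr"))
```

The same frozen features feed both the visualization and the classifier, which is what makes the approach attractive when annotations are scarce: only the lightweight head sees the labels.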

Results

Principal component analysis visualization revealed distinct principal components corresponding to different glomerular structures, demonstrating the capability of the DINO-pretrained backbone to capture morphologic features. In disease classification, the DINO-pretrained transferred model (ROC-AUC, 0.93) outperformed the ImageNet-pretrained fine-tuned model (ROC-AUC, 0.89). When labeled data were limited, the ImageNet-pretrained fine-tuned model's ROC-AUC dropped to 0.76 (95% confidence interval, 0.72 to 0.80), whereas the DINO-pretrained transferred model maintained superior performance (ROC-AUC, 0.88; 95% confidence interval, 0.86 to 0.90). The DINO-pretrained transferred model also achieved higher ROC-AUCs for the classification of several clinical parameters. External validation on two independent datasets confirmed the superiority of DINO pretraining, particularly when labeled data were limited.
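Confidence intervals of the kind reported above are commonly obtained by bootstrapping the test set; the sketch below illustrates one such procedure for a binary ROC-AUC (an assumed method shown for illustration, not necessarily the one used in the study).

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
    """Point estimate and bootstrap (1 - alpha) CI for a binary ROC-AUC."""
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))  # resample with replacement
        if len(np.unique(y_true[idx])) < 2:
            continue  # AUC is undefined if a resample lacks one of the classes
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.quantile(aucs, [alpha / 2, 1 - alpha / 2])
    return roc_auc_score(y_true, y_score), (lo, hi)

# Toy scores standing in for per-image disease probabilities.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 200)
y_score = 0.5 * y_true + rng.random(200)
auc, (lo, hi) = bootstrap_auc_ci(y_true, y_score)
print(f"ROC-AUC, {auc:.2f}; 95% confidence interval, {lo:.2f} to {hi:.2f}")
```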

Conclusions

The application of DINO to unlabeled periodic acid–Schiff-stained glomerular images facilitated the extraction of histologic features that could be effectively used for disease classification.