
Korean Society of Nephrology (대한신장학회)


Self-Supervised Learning for Mesangial Proliferation Prediction in Digital Pathology
Boa Jang
2025 ; 2025(1):
Keywords: Immunoglobulin A nephropathy, Mesangial proliferation, Glomerulus-based learning, Self-supervised learning
Category: Spring Conference Abstract Book
Immunoglobulin A nephropathy (IgAN) is the most common primary glomerulonephritis. Mesangial proliferation (M1) is a key pathological indicator: patients with M1 experience faster disease progression, whereas cases without mesangial proliferation (M0) follow a milder course. Distinguishing M1 from M0 is challenging, however, because of limited histopathological precision and interpretation variability among pathologists, leading to diagnostic inconsistency. Training deep learning models for M1/M0 classification requires large labeled datasets, but kidney biopsy samples are scarce and subjective labeling introduces further variability. To overcome this, we propose a glomerulus (glom)-based learning approach: self-supervised learning (SSL) is applied for upstream training on unlabeled glom data, enabling the model to learn intrinsic glomerular features, and the pretrained model is then fine-tuned for M0/M1 classification.

Digital histopathological images of IgAN patients were retrospectively collected from Seoul National University Hospital (SNUH). A total of 13,231 unlabeled glom patches were extracted and used for SSL with SimCLR to build a glom-based pretraining model. During pretraining, data augmentation (horizontal flipping, color jitter, rotation, random cropping, and CLAHE) was applied, and images were resized to 512 × 512 pixels. For downstream training, 1,551 glom patches with M0/M1 annotations were used to fine-tune the pretrained glom-based model via transfer learning for M0/M1 differentiation.

The glom-based pretraining model outperformed both training from scratch and ImageNet transfer learning across all metrics. It achieved the highest AUC (0.890 ± 0.007), surpassing ImageNet pretraining (0.796 ± 0.013) and scratch training (0.637 ± 0.027), and also showed superior accuracy (0.810 ± 0.004) and weighted F1-score (0.811 ± 0.003). Glom-based pretraining thus enhances M0/M1 classification by leveraging domain-specific features, and these results highlight the benefit of task-specific feature extraction for pathology-driven deep learning.
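SimCLR, the SSL method used for pretraining, optimizes a contrastive objective (NT-Xent): embeddings of two augmented views of the same glom patch are pulled together while all other patches in the batch are pushed apart. The following is a minimal NumPy sketch of that loss; the function name, batch layout, and temperature value are illustrative assumptions, not details from the abstract.

```python
import numpy as np

def nt_xent_loss(z, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss,
    the contrastive objective SimCLR optimizes.

    z: array of shape (2N, d) holding 2N embeddings, where rows i and
    i+N are the two augmented views of the same (glom) patch.
    """
    n = z.shape[0] // 2
    # L2-normalize so the dot product is cosine similarity
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    # exclude self-similarity from the softmax denominator
    np.fill_diagonal(sim, -np.inf)
    # the positive partner of row i is row i+n (and vice versa)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # cross-entropy of each row against its positive pair
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()
```

When the two views of each patch map to nearby embeddings, the loss is low; for unaligned random embeddings it stays near log(2N − 1), which is what drives the encoder to learn augmentation-invariant glomerular features.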
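The weighted F1-score reported above averages per-class F1 weighted by each class's support, which keeps the metric honest when M0 and M1 are imbalanced. A minimal sketch of that computation (variable names are illustrative):

```python
import numpy as np

def weighted_f1(y_true, y_pred):
    """F1-score per class, averaged with weights proportional to
    each class's support (its share of the true labels)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    total = len(y_true)
    score = 0.0
    for c in np.unique(y_true):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        score += (np.sum(y_true == c) / total) * f1
    return score
```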
