Releases: lunit-io/benchmark-ssl-pathology

Self-supervised pre-trained weights on TCGA

10 Apr 03:21

Benchmarking Self-Supervised Learning on Diverse Pathology Datasets

We conduct the largest-scale study to date of SSL pre-training on pathology image data, using the four representative SSL methods below across diverse downstream tasks. We establish that large-scale, domain-aligned pre-training in pathology consistently outperforms ImageNet pre-training.

Pre-trained weights

  1. bt_rn50_ep200.torch: ResNet50 pre-trained using Barlow Twins
  2. mocov2_rn50_ep200.torch: ResNet50 pre-trained using MoCoV2
  3. swav_rn50_ep200.torch: ResNet50 pre-trained using SwAV
  4. dino_small_patch_${patch_size}_ep200.torch: ViT-Small/${patch_size} pre-trained using DINO
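A minimal sketch of loading one of the ResNet50 checkpoints into torchvision's `resnet50`. The exact layout of the saved state dict (e.g. whether keys carry a `module.` wrapper prefix from `DataParallel`) is an assumption here, so the helper strips that prefix if present and loads with `strict=False`:

```python
def strip_prefix(state_dict, prefix="module."):
    """Remove a wrapper prefix (e.g. from DataParallel) from state-dict keys."""
    return {k[len(prefix):] if k.startswith(prefix) else k: v
            for k, v in state_dict.items()}

def load_backbone(path):
    """Load a released ResNet50 checkpoint into a torchvision ResNet50.

    Assumes torch and torchvision are installed; the checkpoint key
    layout is an assumption, hence strict=False below.
    """
    import torch
    from torchvision.models import resnet50

    model = resnet50()
    state = torch.load(path, map_location="cpu")
    model.load_state_dict(strip_prefix(state), strict=False)
    return model
```

For the DINO checkpoints, a ViT-Small backbone with the matching patch size would be needed instead of a ResNet50.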

md5sum

| Weight | MD5SUM |
| --- | --- |
| `bt_rn50_ep200.torch` | `e5621a2350d4023b78870fd75dc27862` |
| `mocov2_rn50_ep200.torch` | `54f7a12b63922895face4ef32c370c5e` |
| `swav_rn50_ep200.torch` | `b817e5e2875e7097d8bb650168aa4761` |
| `dino_small_patch_16_ep200.torch` | `8dbbdae7d6413d58bef6aa90c41699dc` |
| `dino_small_patch_8_ep200.torch` | `5b6d6262fb87284fa5b97d171044153a` |
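A small stdlib-only sketch for verifying a downloaded checkpoint against the table above. The expected hashes are copied from the table; the chunked reader avoids holding a multi-hundred-megabyte file in memory:

```python
import hashlib
import os

# MD5 sums copied from the release table above.
EXPECTED = {
    "bt_rn50_ep200.torch": "e5621a2350d4023b78870fd75dc27862",
    "mocov2_rn50_ep200.torch": "54f7a12b63922895face4ef32c370c5e",
    "swav_rn50_ep200.torch": "b817e5e2875e7097d8bb650168aa4761",
    "dino_small_patch_16_ep200.torch": "8dbbdae7d6413d58bef6aa90c41699dc",
    "dino_small_patch_8_ep200.torch": "5b6d6262fb87284fa5b97d171044153a",
}

def md5sum(path, chunk_size=1 << 20):
    """Compute the MD5 hex digest of a file, reading it in 1 MiB chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path):
    """Check a downloaded weight file against the expected MD5 table."""
    return md5sum(path) == EXPECTED.get(os.path.basename(path))
```

Usage: `verify("bt_rn50_ep200.torch")` returns `True` only when the download is intact.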

Image statistics

We used the following statistics for image intensity standardization (normalization):

mean: [ 0.70322989, 0.53606487, 0.66096631 ]
std: [ 0.21716536, 0.26081574, 0.20723464 ]

These values correspond to the R, G, and B channels respectively, and were computed from 10% of the training samples.
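The standardization above can be sketched in plain Python for a single RGB pixel scaled to [0, 1]; in practice these values would typically be passed to something like `torchvision.transforms.Normalize`:

```python
# Channel statistics from the release notes (R, G, B order).
MEAN = (0.70322989, 0.53606487, 0.66096631)
STD = (0.21716536, 0.26081574, 0.20723464)

def standardize_pixel(rgb):
    """Standardize one (r, g, b) pixel in [0, 1]: (x - mean) / std per channel."""
    return tuple((x - m) / s for x, m, s in zip(rgb, MEAN, STD))
```

A pixel equal to the channel means maps to (0, 0, 0), and one standard deviation above each mean maps to roughly (1, 1, 1).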
