
Release v1.0.17

July 7, 2025

  • MobileNet-v5 backbone tweaks for improved Google Gemma 3n behaviour (to pair with updated official weights)
    • Add stem bias (zeroed in the updated weights; a compatibility break with old weights)
    • GELU -> GELU (tanh approx), a minor change to more closely match JAX (see the snippet after this list)
  • Add two arguments to layer-decay support: a minimum scale clamp and a 'no optimization' scale threshold (sketched after this list)
  • Add 'Fp32' LayerNorm, RMSNorm, and SimpleNorm variants that can be enabled to force norm computation in float32 (see the sketch after this list)
  • Some typing and argument cleanup for the norm and norm+act layers, done alongside the above
  • Support Naver ROPE-ViT (https://github.com/naver-ai/rope-vit) in eva.py, add a RotaryEmbeddingMixed module for mixed mode; weights on the Hugging Face Hub (loading example after this list)
| model | img_size | top1 | top5 | param_count (M) |
|:---|---:|---:|---:|---:|
| vit_large_patch16_rope_mixed_ape_224.naver_in1k | 224 | 84.84 | 97.122 | 304.4 |
| vit_large_patch16_rope_mixed_224.naver_in1k | 224 | 84.828 | 97.116 | 304.2 |
| vit_large_patch16_rope_ape_224.naver_in1k | 224 | 84.65 | 97.154 | 304.37 |
| vit_large_patch16_rope_224.naver_in1k | 224 | 84.648 | 97.122 | 304.17 |
| vit_base_patch16_rope_mixed_ape_224.naver_in1k | 224 | 83.894 | 96.754 | 86.59 |
| vit_base_patch16_rope_mixed_224.naver_in1k | 224 | 83.804 | 96.712 | 86.44 |
| vit_base_patch16_rope_ape_224.naver_in1k | 224 | 83.782 | 96.61 | 86.59 |
| vit_base_patch16_rope_224.naver_in1k | 224 | 83.718 | 96.672 | 86.43 |
| vit_small_patch16_rope_224.naver_in1k | 224 | 81.23 | 95.022 | 21.98 |
| vit_small_patch16_rope_mixed_224.naver_in1k | 224 | 81.216 | 95.022 | 21.99 |
| vit_small_patch16_rope_ape_224.naver_in1k | 224 | 81.004 | 95.016 | 22.06 |
| vit_small_patch16_rope_mixed_ape_224.naver_in1k | 224 | 80.986 | 94.976 | 22.06 |
  • Some cleanup of ROPE modules, helpers, and FX tracing leaf registration
  • Preparing version 1.0.17 release
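
For context on the MNV5 activation change: the tanh-approximate GELU matches JAX's default `gelu` (which is approximate by default) more closely than PyTorch's exact, erf-based GELU. A minimal comparison in plain PyTorch:

```python
import torch
import torch.nn as nn

# PyTorch's default GELU is exact (erf-based); the updated MNV5 weights use
# the tanh approximation, which is the JAX default.
act_exact = nn.GELU()
act_tanh = nn.GELU(approximate='tanh')

x = torch.randn(4, 8)
print((act_exact(x) - act_tanh(x)).abs().max())  # small, but nonzero difference
```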
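A minimal sketch of how the two new layer-decay knobs behave; `min_scale` and `no_opt_scale` are illustrative names here, not necessarily the actual argument names in timm's optimizer factory:

```python
# Hypothetical sketch of layer-wise LR decay with a min scale clamp and a
# 'no optimization' threshold; names and structure are illustrative only.
def layer_lr_scales(num_layers, layer_decay=0.75, min_scale=0.0, no_opt_scale=None):
    scales = []
    for i in range(num_layers + 1):  # +1 group for the stem / embeddings
        scale = layer_decay ** (num_layers - i)
        scales.append(max(scale, min_scale))  # clamp the smallest LR scales
    # groups whose scale falls at or below the threshold are excluded from
    # optimization (left out of the optimizer param groups entirely)
    keep = [no_opt_scale is None or s > no_opt_scale for s in scales]
    return scales, keep

# e.g. with 12 layers and strong decay, the lowest groups drop below the
# threshold and are excluded rather than trained with a near-zero LR
scales, keep = layer_lr_scales(12, layer_decay=0.65, no_opt_scale=0.01)
```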
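The idea behind the 'Fp32' norm variants, sketched as a standalone LayerNorm subclass. This is a sketch of the concept (useful when training in bfloat16/float16, where norm statistics can lose precision), not timm's exact implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Fp32LayerNorm(nn.LayerNorm):
    """Compute LayerNorm (including affine) in float32, cast back to input dtype."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = F.layer_norm(
            x.float(),
            self.normalized_shape,
            self.weight.float() if self.weight is not None else None,
            self.bias.float() if self.bias is not None else None,
            self.eps,
        )
        return out.to(x.dtype)
```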
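The new ROPE-ViT weights load like any other timm model, e.g.:

```python
import timm
import torch

# Any of the model names from the table above works here.
model = timm.create_model('vit_small_patch16_rope_mixed_224.naver_in1k', pretrained=True)
model.eval()

x = torch.randn(1, 3, 224, 224)
with torch.inference_mode():
    logits = model(x)
print(logits.shape)  # torch.Size([1, 1000])
```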

What's Changed

  • Adding Naver rope-vit compatibility to EVA ViT by @rwightman in #2529
  • Update no_grad usage to inference_mode if possible by @GuillaumeErhard in #2534
  • Add a min layer-decay scale clamp, and no optimization threshold to exclude groups from optimization by @rwightman in #2537
  • Add stem_bias option to MNV5; resolve the norm layer so a string can be passed by @rwightman in #2538
  • Add flag to enable float32 computation for normalization (norm + affine) by @rwightman in #2536
  • fix: mnv5 conv_stem bias and GELU with approximate=tanh by @RyanMullins in #2533
  • Fixup casting issues for weights/bias in fp32 norm layers by @rwightman in #2539
  • Fix H, W ordering for xy indexing in ROPE by @rwightman in #2541
  • Fix 3 typos in README.md by @robin-ede in #2544

New Contributors

Full Changelog: v1.0.16...v1.0.17
