Commit 6232f80

Fix lora_alpha in clip_benchmark (#701)
Resolved an inconsistency in the lora_alpha configuration between training and evaluation that caused poor performance for LoRA fine-tuned models.
Parent: a319309

File tree: 1 file changed (+2 −2)

clip_benchmark/clip_benchmark/models/internvl_huggingface/modeling_internvl.py

Lines changed: 2 additions & 2 deletions

@@ -206,9 +206,9 @@ def __init__(self, config: InternVLConfig):
         # self.post_init()
 
         if config.use_backbone_lora:
-            self.wrap_backbone_lora(r=config.use_backbone_lora)
+            self.wrap_backbone_lora(r=config.use_backbone_lora, lora_alpha=config.use_backbone_lora * 2)
         if config.use_qllama_lora:
-            self.wrap_qllama_lora(r=config.use_qllama_lora)
+            self.wrap_qllama_lora(r=config.use_qllama_lora, lora_alpha=config.use_qllama_lora * 2)
         if config.force_image_size:
             self.vision_model.resize_pos_embeddings(
                 old_size=config.vision_config.image_size,
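
For context on why the alpha matters: in LoRA, the adapter update is scaled by lora_alpha / r before being added to the frozen weights, so an evaluation-time lora_alpha that differs from the training-time value silently rescales every fine-tuned layer. Below is a minimal sketch of what a wrapper like wrap_backbone_lora plausibly does, built on Hugging Face peft; the function body and target module names are illustrative assumptions, not the actual InternVL implementation.

```python
# Minimal sketch, not the actual InternVL code: a peft-based stand-in
# for the wrap_backbone_lora method patched in this commit.
from peft import LoraConfig, get_peft_model

def wrap_backbone_lora(model, r, lora_alpha=None):
    # peft applies the adapter as W + (lora_alpha / r) * B @ A, so the
    # alpha used at evaluation must match the one used in training.
    # Mirroring the commit, alpha defaults to twice the rank.
    if lora_alpha is None:
        lora_alpha = 2 * r
    lora_config = LoraConfig(
        r=r,
        lora_alpha=lora_alpha,
        target_modules=["qkv", "proj"],  # assumed attention projection names
        lora_dropout=0.0,
    )
    return get_peft_model(model, lora_config)
```

With lora_alpha = 2 * r, as in this commit, the effective scale lora_alpha / r stays fixed at 2 whatever rank the config selects, so training and evaluation agree by construction.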
