model card yaml tab->2xspace #14819


Merged

Conversation

@csabakecskemeti (Contributor) commented Jul 22, 2025

Issue to fix:

yaml.scanner.ScannerError: while scanning for the next token
found character '\t' that cannot start any token
  in "<unicode string>", line 2, column 20:
    license: apache-2.0	

Caused by the Qwen/Qwen3-235B-A22B-Instruct-2507 model card containing a tab character in its YAML front matter.

YAML should use 2 spaces instead of a tab.

Tested locally

(Note: I've also sent a PR to fix the model card too)
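The fix amounts to normalizing tabs in the model card's YAML before it reaches the parser. A minimal sketch of the idea, not the actual llama.cpp patch (`sanitize_card_yaml` is a hypothetical helper name):

```python
def sanitize_card_yaml(text: str) -> str:
    # The YAML spec forbids tab characters as indentation/whitespace
    # tokens, which is what triggers the ScannerError above. Replace
    # each tab with two spaces before the text is parsed as YAML.
    return text.replace("\t", "  ")


# The offending line from the Qwen model card ended in a tab:
print(sanitize_card_yaml("license: apache-2.0\t"))
```

After sanitizing, `yaml.safe_load` (or any spec-compliant parser) accepts the front matter without complaint.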

@github-actions github-actions bot added the python python script changes label Jul 22, 2025
@ggerganov ggerganov merged commit acd6cb1 into ggml-org:master Jul 22, 2025
4 checks passed
gabe-l-hart added a commit to gabe-l-hart/llama.cpp that referenced this pull request Jul 23, 2025
* origin/master: (49 commits)
ci : correct label refactor->refactoring (ggml-org#14832)
CUDA: fix quantized KV cache + multiple sequences (ggml-org#14822)
tests : add non-cont K,V FA tests
memory : handle saving/loading null layers in recurrent memory (ggml-org#14675)
ggml: fix loongarch quantize_row_q8_1 error (ggml-org#14827)
CANN: weight format to NZ for Ascend310P3 (ggml-org#14407)
CUDA: add fused rms norm (ggml-org#14800)
ggml : model card yaml tab->2xspace (ggml-org#14819)
vulkan: fix rms_norm_mul to handle broadcasting dim0 (ggml-org#14817)
llama : add model type detection for rwkv7 7B&14B (ggml-org#14816)
imatrix: add option to display importance score statistics for a given imatrix file (ggml-org#12718)
Mtmd: add a way to select device for vision encoder (ggml-org#14236)
cuda : implement bf16 cpy ops and enable bf16 cont (ggml-org#14763)
opencl: remove unreachable `return` (ggml-org#14806)
server : allow setting `--reverse-prompt` arg (ggml-org#14799)
cuda: remove linking to cublasLt (ggml-org#14790)
opencl: fix `im2col` when `KW!=KH` (ggml-org#14803)
opencl: add conv2d kernel (ggml-org#14403)
sycl: Fix im2col (ggml-org#14797)
kleidiai: add support for get_rows (ggml-org#14676)
...
taronaeo pushed a commit to taronaeo/llama.cpp-s390x that referenced this pull request Jul 25, 2025