Issues: ollama/ollama

ollama._types.ResponseError: llama runner process has terminated: signal: broken pipe
  [bug] #8216, opened Dec 23, 2024 by MarkCayton

/api/chat and /api/generate endpoints are timing out
  [bug] #8214, opened Dec 23, 2024 by wkevin

Documentation for manual Linux installation is outdated/doesn't work for AMD GPU setup
  [bug] #8207, opened Dec 21, 2024 by lwasyl

docker installation failure due to your installation failure...
  [bug] #8205, opened Dec 21, 2024 by remco-pc

Enhanced aria2c download support with optimized configurations
  [feature request] #8203, opened Dec 21, 2024 by A-Akhil

Ollama hangs when running llama3.2 and llama3.2:1b
  [bug] #8200, opened Dec 21, 2024 by pr0fsmith

Check Available Memory Before Downloading
  [feature request] #8192, opened Dec 21, 2024 by JamesGMCoder

Corrupt output on multiple GPUs in Windows 11
  [bug] #8188, opened Dec 20, 2024 by robbyjo

mllama doesn't support parallel requests yet - llama3.2-vision:11b for Standard_NC24ads_A100_v4
  [bug] #8186, opened Dec 20, 2024 by breddy-lgamerica

Constrained Output Validation Error when Using Pattern
  [bug] #8185, opened Dec 20, 2024 by DiyarD

Falcon3 10B in 1.58bit format
  [model request] #8184, opened Dec 20, 2024 by thiswillbeyourgithub

How do I specify specific GPUs when running a model?
  [feature request] #8183, opened Dec 20, 2024 by any35

{"error":"POST predict: Post \"http://127.0.0.1:33603/completion\": EOF"}
bug
Something isn't working
#8182
opened Dec 20, 2024 by
forReason
LLAMA 3:70B is crashing inside K8s pods
  [bug, needs more info] #8179, opened Dec 19, 2024 by IrfDev

qwen 2.5 coder stuck at "Stopping"
  [bug] #8178, opened Dec 19, 2024 by MHugonKaliop

Unable to install Ollama on MacBook Air running macOS Sequoia 15.2
  [bug] #8174, opened Dec 19, 2024 by Bheeshmat

Unable to load dynamic library: libstdc++.so.6: cannot open
  [bug] #8168, opened Dec 19, 2024 by Bekbo01

Error: max retries exceeded for all ollama model pulls (read: connection reset by peer)
  [bug] #8167, opened Dec 19, 2024 by saisun229

StructuredOutputs Schema Missing in Prompt [Unlike OpenAI API Default Behavior]
  [feature request] #8162, opened Dec 18, 2024 by ikot-humanoid

Setup window scaling is bigger than expected.
  [bug] #8160, opened Dec 18, 2024 by Segilmez06

IBM Granite MoE & Dense-2b is very slow when KV Cache quantization is enabled
  [bug] #8158, opened Dec 18, 2024 by vYLQs6

falcon3:10b gives empty response sometimes
  [bug] #8157, opened Dec 18, 2024 by i0ntempest

Clarification: "format" vs "tools" behaviours
  [bug] #8155, opened Dec 18, 2024 by VMinB12