Pull requests: containers/ramalama
Open pull requests
#2541  Default to llama.cpp vulkan backend for AMD and Intel GPUs (opened Mar 20, 2026 by olliewalsh)
#2532  llama-cpp: add flash attention for benchmarks for openvino backend (opened Mar 19, 2026 by kannon92)
#2519  Improvements to UX when getting 404 errors from Hugging Face (opened Mar 15, 2026 by miabbott)
#2498  ramalama sandbox: run an AI agent in a sandbox, backed by a local AI Model (opened Mar 7, 2026 by mikebonnet)
#2477  Document Fedora Silverblue and Toolbox usage (fixes #2086) (opened Feb 27, 2026 by rhatdan)
#2476  Add --chat-template-file to mount custom chat template (fixes #1783) (opened Feb 27, 2026 by rhatdan)
#2453  ci: only install podman and crun for podman5 install task (opened Feb 23, 2026 by olliewalsh) [Draft]
#2444  Add --engine-args flag for custom container engine arguments (opened Feb 21, 2026 by rhatdan)
#2433  Add wait_for_server in chat.py and corresponding unit tests in test_chat.py (opened Feb 17, 2026 by scoonce) [stale-pr]
#2332  feat(args): adds context-shift as a standalone argument (opened Jan 21, 2026 by bmahabirbu)
#2296  Enable GGML_HIP_ROCWMMA_FATTN in llamacpp ROCm build (opened Jan 9, 2026 by olliewalsh)