Big · 1 source · last seen 2h ago · first seen 8h ago

The MCP PR for llama.cpp has been merged!

The MCP PR for llama.cpp has finally been merged: [https://github.com/ggml-org/llama.cpp/pull/18655](https://github.com/ggml-org/llama.cpp/pull/18655). This unlocks a pretty major piece on the llama-server / WebUI side, with MCP support, tool calls, an agentic loop, a server selector, resources, pro

Lead: r/LocalLLaMA · Bigness: 56 · tags: mcp, meta, cpp, merged
📡 Coverage: 10 (1 news source)
🟠 Hacker News: 0
🔴 Reddit: 65 (131 upvotes across 1 sub)
📈 Google Trends: 80 (Meta AI: 80/100)
Full methodology: How scoring works

Receipts (all sources)

The MCP PR for llama.cpp has been merged!
REDDIT · r/LocalLLaMA · 8h ago · ⬆ 124 · 💬 14
score 120

The MCP PR for llama.cpp has finally been merged: [https://github.com/ggml-org/llama.cpp/pull/18655](https://github.com/ggml-org/llama.cpp/pull/18655). This unlocks a pretty major piece on the llama-server / WebUI side, with MCP support, tool calls, an agentic loop, a server selector, resources, pro
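To illustrate what "tool calls" means here: llama-server exposes an OpenAI-compatible `/v1/chat/completions` endpoint, and tool definitions follow the OpenAI `tools` schema. A minimal sketch of building such a request — the tool name, prompt, and port below are illustrative assumptions, not from the PR:

```python
import json

# Assumed default endpoint; llama-server commonly listens on port 8080.
SERVER_URL = "http://localhost:8080/v1/chat/completions"

def build_tool_call_request(prompt: str) -> dict:
    """Build a chat-completions payload advertising one callable tool.

    The "get_time" tool is a hypothetical example; the model may respond
    with a tool_call naming it instead of a plain text answer.
    """
    return {
        "messages": [{"role": "user", "content": prompt}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_time",  # hypothetical example tool
                    "description": "Return the current time",
                    "parameters": {"type": "object", "properties": {}},
                },
            }
        ],
    }

# Usage: serialize and POST this payload to SERVER_URL with any HTTP client.
payload = build_tool_call_request("What time is it?")
print(json.dumps(payload, indent=2))
```

In an agentic loop, the client executes whatever tool the model requests, appends the result as a `tool` role message, and re-sends the conversation until the model emits a final answer.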

llama.cpp server is slow
REDDIT · r/LocalLLaMA · 2h ago · ⬆ 7 · 💬 21
score 110

I just built llama.cpp and I am happy with the performance. `build/bin/llama-cli -hf unsloth/Qwen3.5-35B-A3B-GGUF:UD-Q4_K_XL --ctx-size 16384 --temp 1.0 --top-p 0.95 --top-k 20 --min-p 0.00` gets me approx. 100 t/s. When I change llama-cli to llama-server: `build/bin/llama-se
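The snippet cuts off mid-command. Assuming the same flags carry over (llama-server shares llama-cli's common model and sampling options), the server-side equivalent might look like the sketch below; `--host` and `--port` values are illustrative defaults:

```shell
# Hedged sketch: serve the same GGUF over HTTP instead of the interactive CLI.
# -hf, --ctx-size and the sampling flags mirror the llama-cli invocation above.
build/bin/llama-server \
  -hf unsloth/Qwen3.5-35B-A3B-GGUF:UD-Q4_K_XL \
  --ctx-size 16384 --temp 1.0 --top-p 0.95 --top-k 20 --min-p 0.00 \
  --host 127.0.0.1 --port 8080
```

Throughput differences between llama-cli and llama-server with identical flags usually come down to server-side defaults (batching, parallel slots), so comparing the full printed configuration of both is the first debugging step.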
