Rising · 1 source · first seen 12h ago · last seen 12h ago

update your llama.cpp - great tg speedup on Qwen3.5 / Qwen-Next

https://preview.redd.it/e2kxthdj0mng1.png?width=1798&format=png&auto=webp&s=b203af8b35294e081b1093a5a89076452128ec0d

Great work by u/am17an: [https://github.com/ggml-org/llama.cpp/pull/19504](https://github.com/ggml-org/llama.cpp/pull/19504). Probably only CUDA/CPU are affected. For som…
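"Update your llama.cpp" here just means pulling the latest source and rebuilding. A minimal sketch using llama.cpp's standard CMake workflow (the `GGML_CUDA` flag and `llama-cli` binary name follow the upstream repo's conventions; adjust for your backend):

```shell
# Get the latest llama.cpp (or run `git pull` in an existing checkout)
git clone https://github.com/ggml-org/llama.cpp.git
cd llama.cpp

# Rebuild with the CUDA backend; drop -DGGML_CUDA=ON for a CPU-only build
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -j

# Sanity-check that the freshly built binary runs
./build/bin/llama-cli --version
```

Since the post says the speedup likely lands only in the CUDA/CPU backends, builds targeting other backends (Metal, Vulkan, etc.) may not see the token-generation improvement.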

📡 Coverage: 10 (1 news source)
🟠 Hacker News: 0
🔴 Reddit: 68 (144 upvotes across 1 sub)
📈 Google Trends: 0

Receipts (all sources)

update your llama.cpp - great tg speedup on Qwen3.5 / Qwen-Next
REDDIT · r/LocalLLaMA · 12h ago · ⬆ 144 · 💬 79
score 116

