Rising · 1 source · last seen 7h ago · first seen 7h ago
Qwen3.6-27B vs Coder-Next
Burned about 20 hours of side-by-side compute on my two RTX PRO 6000 Blackwells trying to get a definitive answer on which of these two models was clearly better. As with many things in life, many tokens and kWh later the answer was "it depends." These models in the aggregate are actually cr…
Lead: r/LocalLLaMA · Bigness: 31 · Tags: qwen3.6-27b, coder-next
📡 Coverage: 10 (1 news source)
🟠 Hacker News: 0
🔴 Reddit: 76 (315 upvotes across 1 sub)
📈 Google Trends: 0
Full methodology: How scoring works
Receipts (all sources)
Qwen3.6-27B vs Coder-Next
REDDIT · r/LocalLLaMA · 7h ago · ⬆ 315 · 💬 66 · score 128
Related clusters
Qwen3.6-27B vs 35B, I prefer 35B but more people here post about 27B...
1 source · bigness 28 · 10h ago
We are finally there: Qwen3.6-27B + agentic search; 95.7% SimpleQA on a single 3090, fully local
1 source · bigness 32 · 23h ago
Qwen3.6-27B at 72 tok/s on RTX 3090 on Windows using native vLLM (no WSL, no Docker), portable launcher and installer
1 source · bigness 27 · 1d ago