Rising · 1 source · last seen 10h ago · first seen 10h ago
Qwen3.6-27B vs 35B, I prefer 35B but more people here post about 27B...
I've had better results quality-wise with 35B, AND it's much faster than 27B. Just curious, because I see lots of people posting about 27B. Am I doing something wrong with 27B? Use cases are multi-stage pipelines for coding and internet research. I also use Opencode a bit. All use cases I normally app
Lead: r/LocalLLaMA · Bigness: 28 · tags: qwen3.6-27b, 35b, prefer, post
📡 Coverage: 10 (1 news source)
🟠 Hacker News: 0
🔴 Reddit: 67 (128 upvotes across 1 sub)
📈 Google Trends: 0
Full methodology: How scoring works
Receipts (all sources)
Qwen3.6-27B vs 35B, I prefer 35B but more people here post about 27B...
REDDIT · r/LocalLLaMA · 10h ago · ⬆ 128 · 💬 96 · score 117
Related clusters
Qwen3.6-27B vs Coder-Next
1 source · bigness 31 · 7h ago
We are finally there: Qwen3.6-27B + agentic search; 95.7% SimpleQA on a single 3090, fully local
1 source · bigness 32 · 23h ago
Qwen3.6-27B at 72 tok/s on RTX 3090 on Windows using native vLLM (no WSL, no Docker), portable launcher and installer
1 source · bigness 27 · 1d ago