Rising · 1 source · last seen 6h ago · first seen 18h ago
Qwen3.5 Model Comparison: 27B vs 35B on RTX 4090
I wanted to test which Qwen3.5 35B-A3B models can run on my GPU, so I compared 3 GGUF options.

**Hardware:** RTX 4090 (24GB VRAM)
**Test:** Multi-agent Tetris development (Planner → Developer → QA)

# Models Under Test

|Model|Preset|Quant|Port|VRAM|Parallel|
|:-|:-|:-|:-|:-|:-|
|Qwen3.5-27B|`q
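Whether a given GGUF quant fits in the 24 GB of a 4090 can be sanity-checked with a back-of-envelope estimate: weight size is roughly parameters × effective bits-per-weight ÷ 8, plus some headroom for KV cache and activations. A minimal sketch, where the bits-per-weight figures and the flat 1.5 GB overhead are ballpark assumptions rather than measured values:

```python
def est_vram_gb(n_params_b: float, bits_per_weight: float, overhead_gb: float = 1.5) -> float:
    """Rough GGUF footprint: params (billions) * bits/8 for weights,
    plus a flat allowance for KV cache and activations (assumption)."""
    return n_params_b * bits_per_weight / 8 + overhead_gb

# Approximate effective bits per weight for common GGUF quants (assumption):
quants = {"Q4_K_M": 4.8, "Q5_K_M": 5.7, "Q8_0": 8.5}

for name, bpw in quants.items():
    gb = est_vram_gb(27, bpw)
    fits = "fits" if gb < 24 else "too big"
    print(f"27B @ {name}: ~{gb:.1f} GB -> {fits} on a 24 GB card")
```

By this estimate a 27B model fits at Q4/Q5 quants but not at Q8_0, which matches the usual reason these comparisons stick to 4- and 5-bit GGUFs on a 4090; real usage also grows with context length and parallel slots, so treat it as a lower bound.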
Lead: r/LocalLLaMA
📡 Coverage: 10 (1 news source)
🟠 Hacker News: 0
🔴 Reddit: 82 (445 upvotes across 1 sub)
📈 Google Trends: 0
Receipts (all sources)
Qwen3.5 Model Comparison: 27B vs 35B on RTX 4090
REDDIT · r/LocalLLaMA · 6h ago · ⬆ 57 · 💬 30
score 118
Qwen3.5 27B better than 35B-A3B?
REDDIT · r/LocalLLaMA · 18h ago · ⬆ 388 · 💬 145
score 114
Which model would be better with 16 GB of VRAM and 32 GB of RAM?