Cluster · 1 source · first seen 3h ago · last seen 3h ago
Qwen3.6 35B-A3B is quite useful on 780m iGPU (llama.cpp,vulkan)
I have a ThinkPad T14 Gen 5 (8840U, **Radeon 780M**, 64 GB DDR5-5600). I tried out the recent Qwen MoE release, and prompt-processing/token-generation speed is good on Vulkan (250+ t/s pp, ~20 t/s tg):

```
~/dev/llama.cpp master* ❯ ./build-vulkan/bin/llama-bench \
    -hf AesSedai/Qwen3.6-35B-A3B-GGUF:Q6_K \
    -
```
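As a rough sanity check on those numbers: decode on an iGPU is typically memory-bandwidth-bound, so a back-of-the-envelope ceiling follows from the active parameter count and quantization width. The figures below are assumptions, not measurements from the post (~3B active parameters implied by the "A3B" suffix, Q6_K at ~6.5625 effective bits per weight, dual-channel DDR5-5600 at a theoretical ~89.6 GB/s):

```python
# Rough upper bound on token-generation (tg) speed for a
# memory-bandwidth-bound MoE decode. All constants are assumptions
# stated in the lead-in, not measurements from the post.

BITS_PER_WEIGHT_Q6K = 6.5625      # effective bits/weight for llama.cpp Q6_K
ACTIVE_PARAMS = 3e9               # ~3B active params per token ("A3B")
# Dual-channel DDR5-5600: 5600e6 transfers/s * 8 bytes * 2 channels
BANDWIDTH_BPS = 5600e6 * 8 * 2    # ~89.6 GB/s theoretical peak

bytes_per_token = ACTIVE_PARAMS * BITS_PER_WEIGHT_Q6K / 8
max_tg = BANDWIDTH_BPS / bytes_per_token

print(f"bytes read per token: {bytes_per_token / 1e9:.2f} GB")
print(f"theoretical tg ceiling: {max_tg:.1f} t/s")
```

This puts the ceiling in the mid-30s t/s, so the post's ~20 t/s is roughly half of theoretical peak, which is plausible for an iGPU sharing system RAM with the CPU.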
📡 Coverage
- News: 10 (1 news source)
- 🟠 Hacker News: 0
- 🔴 Reddit: 49 (35 upvotes across 1 sub)
- 📈 Google Trends: 0
Receipts (all sources)
Qwen3.6 35B-A3B is quite useful on 780m iGPU (llama.cpp,vulkan)
REDDIT · r/LocalLLaMA · 3h ago · ⬆ 35 · 💬 13
score 119
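For readers who want to reproduce the setup: the `./build-vulkan/bin/` path in the post suggests a standard llama.cpp CMake build with the Vulkan backend enabled. A minimal sketch, assuming the Vulkan SDK/headers are installed (the flags are llama.cpp's standard build options, not taken from the post):

```shell
# Build llama.cpp with the Vulkan backend; directory name matches the
# post's ./build-vulkan path.
cmake -B build-vulkan -DGGML_VULKAN=ON
cmake --build build-vulkan --config Release -j

# Benchmark the quantized model straight from Hugging Face
# (model reference as given in the post).
./build-vulkan/bin/llama-bench -hf AesSedai/Qwen3.6-35B-A3B-GGUF:Q6_K
```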