Cluster · 1 source · last seen 3h ago · first seen 3h ago
The 'Running Doom' of AI: Qwen3.5-27B on a 512MB Raspberry Pi Zero 2W
Yes, seriously, no API calls or word tricks. I was wondering what the absolute lower bound is if you want a truly offline AI. Just like people trying to run Doom on everything, why can't we run a Large Language Model purely on a $15 device with only 512MB of memory? I know it's incredibly slow (we'
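The post's implicit puzzle is arithmetic: a 27B-parameter model is far larger than 512 MB at any common quantization, so any run on this board presumably streams weights from storage (e.g. llama.cpp's mmap'd GGUF loading) rather than holding them in RAM, which would explain the "incredibly slow" speed. A rough sketch of the sizes, using typical GGUF bits-per-weight averages as assumptions (not exact figures for any specific model file):

```python
# Back-of-envelope weight-memory estimate for a 27B-parameter model
# at common quantization levels, versus the Pi Zero 2 W's 512 MB RAM.
# Bits-per-weight values are typical llama.cpp GGUF averages (assumed).
PARAMS = 27e9
RAM_BYTES = 512 * 1024**2

for name, bits in [("FP16", 16.0), ("Q8_0", 8.5), ("Q4_K_M", 4.85), ("Q2_K", 2.6)]:
    size_bytes = PARAMS * bits / 8
    print(f"{name:7s} ~{size_bytes / 1024**3:5.1f} GiB "
          f"(~{size_bytes / RAM_BYTES:.0f}x the Pi's RAM)")
```

Even the most aggressive common quant (~2.6 bits/weight) leaves roughly 8 GiB of weights, an order of magnitude over the board's memory, before counting the KV cache and runtime overhead.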
Lead: r/LocalLLaMA · Bigness: 24 · running doom · qwen3.5-27b · 512mb
📡 Coverage: 10 (1 news source)
🟠 Hacker News: 0
🔴 Reddit: 56 (57 upvotes across 1 sub)
📈 Google Trends: 0
Receipts (all sources)
The 'Running Doom' of AI: Qwen3.5-27B on a 512MB Raspberry Pi Zero 2W
REDDIT · r/LocalLLaMA · 3h ago · ⬆ 57 · 💬 29
score 122