Rising · 1 source · last seen 7h ago · first seen 7h ago

I technically got an LLM running locally on a 1998 iMac G3 with 32 MB of RAM

Hardware:
• Stock iMac G3 Rev B (October 1998): 233 MHz PowerPC 750, 32 MB RAM, Mac OS 8.5. No upgrades.

Model:
• Andrej Karpathy’s 260K TinyStories model (Llama 2 architecture), ~1 MB checkpoint.

Toolchain:
• Cross-compiled from a Mac mini using Retro68 (GCC for classic Mac OS → PEF binaries)
• End

Lead: r/LocalLLaMA · Bigness: 34
📡 Coverage: 10 · 1 news source
🟠 Hacker News: 0
🔴 Reddit: 85 · 679 upvotes across 1 sub
📈 Google Trends: 0
Full methodology: How scoring works

Receipts (all sources)

I technically got an LLM running locally on a 1998 iMac G3 with 32 MB of RAM
REDDIT · r/LocalLLaMA · 7h ago · ⬆ 679 · 💬 48 · score 133
