Big2 sources · last seen 4h ago · first seen 4h ago

The Bonsai 1-bit models are very good

Hey everyone, Tim from [AnythingLLM](https://github.com/Mintplex-Labs/anything-llm/issues) here. Yesterday I saw the [PrismML Bonsai](https://prismml.com/news/bonsai-8b) post, so I had to give it a real shot, because 14x smaller models (in size and memory) would actually be a huge game changer for Loca…

Lead: r/LocalLLaMA · Bigness: 69 · bonsai · 1-bit
📡 Coverage: 50 (2 news sources)
🟠 Hacker News: 0
🔴 Reddit: 93 (352 upvotes across 2 subs)
📈 Google Trends: 0
Full methodology: How scoring works

Receipts (all sources)

The Bonsai 1-bit models are very good
REDDIT · r/LocalLLaMA · 4h ago · ⬆ 307 · 💬 53
score 131

Hey everyone, Tim from [AnythingLLM](https://github.com/Mintplex-Labs/anything-llm/issues) here. Yesterday I saw the [PrismML Bonsai](https://prismml.com/news/bonsai-8b) post, so I had to give it a real shot, because 14x smaller models (in size and memory) would actually be a huge game changer for Loca…
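A rough back-of-envelope check on the "14x smaller" claim (a sketch, not from the post: it assumes an fp16 baseline and perfectly packed 1-bit weights; real quantization schemes carry scale factors and metadata, which is plausibly why the claimed ratio is 14x rather than the ideal 16x):

```python
# Back-of-envelope memory estimate: fp16 vs. packed 1-bit weights.
params = 8.2e9  # parameter count cited in the Bonsai blog excerpt

fp16_bytes = params * 2    # 2 bytes per fp16 weight
onebit_bytes = params / 8  # 1 bit per weight, 8 weights per byte

ratio = fp16_bytes / onebit_bytes  # ideal shrink, before any overhead
print(f"fp16:  {fp16_bytes / 1e9:.1f} GB")
print(f"1-bit: {onebit_bytes / 1e9:.1f} GB")
print(f"ratio: {ratio:.0f}x")
```

The ideal ratio comes out to 16x; per-group scales and unquantized buffers eat into that in practice.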

1-bit models are here: PrismML's Bonsai series of models
REDDIT · r/singularity · 4h ago · ⬆ 45 · 💬 9
score 119

An excerpt from their blog post:

> 1-bit Bonsai 8B implements a proprietary 1-bit model design across the entire network: embeddings, attention layers, MLP layers, and the LM head are all 1-bit. There are no higher-precision escape hatches. It is a true 1-bit model, end to end, across 8.2 billion…
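For intuition, here is a generic 1-bit (sign) quantization sketch in the style of BinaryNet/BitNet-type schemes. This is an illustration only: PrismML calls their design proprietary and does not describe it, so the scaling choice below (mean absolute value) is an assumption, not their method.

```python
import numpy as np

def binarize(w: np.ndarray):
    """Generic 1-bit quantization: W ≈ alpha * sign(W), where alpha is
    the mean absolute value of W (a common choice in the literature;
    not necessarily what Bonsai uses)."""
    alpha = float(np.abs(w).mean())          # per-tensor scale
    w_bin = np.where(w >= 0, 1, -1).astype(np.int8)  # each weight is ±1
    return alpha, w_bin

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)

alpha, w_bin = binarize(w)
w_hat = alpha * w_bin  # dequantized approximation used at matmul time
```

Applying something like this to *every* layer, including embeddings and the LM head, is the notable part of the claim: most prior 1-bit-ish work keeps those components in higher precision.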