The Bonsai 1-bit models are very good
Hey everyone, Tim from [AnythingLLM](https://github.com/Mintplex-Labs/anything-llm/issues) here. Yesterday I saw the [PrismML Bonsai](https://prismml.com/news/bonsai-8b) post, so I had to give it a real shot, because 14x smaller models (in size and memory) would actually be a huge game changer for Loca
Receipts (all sources)
An excerpt from their blog post:

> 1-bit Bonsai 8B implements a proprietary 1-bit model design across the entire network: embeddings, attention layers, MLP layers, and the LM head are all 1-bit. There are no higher-precision escape hatches. It is a true 1-bit model, end to end, across 8.2 billion parameters.
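For anyone wondering what "1-bit everywhere, no escape hatches" actually means mechanically, here's a minimal sketch of sign-based binarization: each weight collapses to +1 or -1 with one shared scale per tensor. This is the generic BinaryNet/BitNet-style recipe, *not* PrismML's actual (proprietary) method — the function names and the mean-of-absolute-values scale are my assumptions for illustration:

```python
# Hypothetical sketch: 1-bit weight quantization via signs + one
# shared scale. At 1 bit/weight, 8.2B parameters fit in ~1 GB of
# weight storage vs ~16 GB at fp16 -- roughly where the headline
# "14x smaller" figure comes from once you add runtime overhead.

def binarize(weights):
    # Shared scale = mean absolute value (a common choice in the
    # binary-network literature; Bonsai's real recipe isn't public).
    scale = sum(abs(w) for w in weights) / len(weights)
    signs = [1 if w >= 0 else -1 for w in weights]  # 1 bit each
    return signs, scale

def dequantize(signs, scale):
    # Reconstruct approximate weights: every value is +/- scale.
    return [s * scale for s in signs]

w = [0.4, -0.2, 0.1, -0.3]
signs, scale = binarize(w)
print(signs)                    # [1, -1, 1, -1]
print(scale)                    # 0.25
print(dequantize(signs, scale)) # [0.25, -0.25, 0.25, -0.25]
```

The key point from their quote is that this applies to *every* tensor — embeddings and LM head included — whereas most quantization schemes leave those in higher precision.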