Rising · 1 source · last seen 3h ago · first seen 3h ago

Bankai (卍解) — the first post-training adaptation method for true 1-bit LLMs.

I've been experimenting with Bonsai 8B — PrismML's true 1-bit model (every weight is literally 0 or 1, not ternary like BitNet). I realized that since weights are bits, the diff between two model behaviors is just an XOR mask. So I built a tool that searches for sparse XOR patches that modify model behavior…
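The core idea above can be sketched in a few lines. This is a minimal illustration, not the Bankai tool itself: it assumes 1-bit weights packed into a `uint8` array (8 weights per byte), and the helper names (`xor_diff`, `sparsify`, `apply_patch`) are hypothetical.

```python
# Sketch of the XOR-mask idea for a true 1-bit model (assumed packed
# uint8 representation, 8 weights per byte). Function names are
# illustrative, not from the actual Bankai tool.
import numpy as np

def xor_diff(base: np.ndarray, tuned: np.ndarray) -> np.ndarray:
    """The behavioral diff between two bit-weight tensors is just XOR."""
    return np.bitwise_xor(base, tuned)

def sparsify(mask: np.ndarray) -> np.ndarray:
    """Store the patch sparsely: indices of the bits that flipped."""
    return np.flatnonzero(np.unpackbits(mask))

def apply_patch(base: np.ndarray, flipped: np.ndarray) -> np.ndarray:
    """Recover the tuned weights by flipping the listed bits."""
    bits = np.unpackbits(base)
    bits[flipped] ^= 1
    return np.packbits(bits)

base  = np.array([0b10110010, 0b01001110], dtype=np.uint8)
tuned = np.array([0b10110110, 0b01001110], dtype=np.uint8)

patch = sparsify(xor_diff(base, tuned))   # one flipped bit -> [5]
restored = apply_patch(base, patch)
assert np.array_equal(restored, tuned)
```

Because XOR is its own inverse, the same sparse patch both applies and reverts an adaptation, and a patch's size is exactly the number of flipped weights — which is what makes searching for *sparse* patches a natural adaptation objective.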

Lead: r/LocalLLaMA · Bigness: 25 · keywords: bankai, post-training, adaptation, method, true

📡 Coverage: 10 (1 news source)
🟠 Hacker News: 0
🔴 Reddit: 59 (70 upvotes across 1 sub)
📈 Google Trends: 0

Full methodology: How scoring works

Receipts (all sources)

score: 123
