[arcee-ai/Trinity-Large-Thinking · Hugging Face](https://huggingface.co/arcee-ai/Trinity-Large-Thinking)
The Bonsai 1-bit models are very good
Hey everyone, Tim from [AnythingLLM](https://github.com/Mintplex-Labs/anything-llm/issues) here. Yesterday I saw the [PrismML Bonsai](https://prismml.com/news/bonsai-8b) post, so I had to give it a real shot, because models 14x smaller (in size and memory) would actually be a huge game changer for Loca
Claude Code Leak Reveals Always-On ‘Kairos’ Agent
After Anthropic released Claude Code's 2.1.88 update, users quickly discovered that the package shipped with a source map file exposing its TypeScript codebase, with one person on X calling attention to the leak and posting a file containing the code. The leaked data reportedly contains more th
Here's what that Claude Code source leak reveals about Anthropic's plans
A persistent agent, stealth "Undercover" mode, and... a virtual assistant named Buddy?
r/programming bans all discussion of LLM programming
Qwen3.6-Plus
Blog post: [https://qwen.ai/blog?id=qwen3.6](https://qwen.ai/blog?id=qwen3.6) From Chujie Zheng on 𝕏: [https://x.com/ChujieZheng/status/2039560126047359394](https://x.com/ChujieZheng/status/2039560126047359394)
Solar Balconies Take Europe by Storm
Chinese state media releases episode 2 of their AI generated Iran war animated series
Anthropic took down thousands of GitHub repos trying to yank its leaked source code — a move the company says was an accident
Anthropic executives said it was an accident and retracted the bulk of the takedown notices.
Gemini 4 is coming??
AI for American-produced cement and concrete
ZomboCom stolen by a hacker, sold, now replaced with AI-generated makeover
Trinity Large Thinking
Ask HN: Why is almost all API documentation online?
Gemma time! What are your wishes?
Gemma 4 drops most likely tomorrow! What will it take to make it a good release for you?
TurboQuant isn’t just for KV: Qwen3.5-27B at near-Q4_0 quality, about 10% smaller, and finally fitting on my 16GB 5060 Ti
I bought an RTX 5060 Ti 16GB around Christmas and had one goal: get a strong model running locally on my card without paying API fees. I have been testing local AI with OpenClaw. I did not come into this with a quantization background. I only learned about llama.cpp, LM Studio, and Ollama two months ago
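As a rough sanity check on the "fitting on 16GB" claim, you can estimate a quantized model's weight footprint from its parameter count and bits per weight. This is a sketch, assuming Q4_0's commonly cited effective rate of about 4.5 bits per weight, and it ignores KV cache and activation overhead, so real headroom is tighter:

```python
def quant_size_gib(n_params: float, bits_per_weight: float) -> float:
    """Approximate VRAM footprint of quantized weights in GiB."""
    return n_params * bits_per_weight / 8 / 2**30

# 27B parameters at ~4.5 effective bits/weight (Q4_0 assumption)
q4_0 = quant_size_gib(27e9, 4.5)
# "about 10% smaller" than Q4_0, per the post's description of TurboQuant
turbo = q4_0 * 0.9

print(f"Q4_0 baseline: {q4_0:.1f} GiB")
print(f"~10% smaller:  {turbo:.1f} GiB (vs. a 16 GiB card)")
```

Under these assumptions the plain Q4_0 weights already land around 14 GiB, which explains why shaving another ~10% matters: it frees room for the KV cache and a usable context window on a 16GB card.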