Google announces Gemma 4 open AI models, switches to Apache 2.0 license
I don't really like taking X posts as a source, but it's Jeff Dean, so maybe there will be more surprises beyond what we just got. Thanks, Google! Edit: Seems like Jeff deleted the mention of 124B. Maybe it's because it exceeded Gemini 3 Flash-Lite on benchmarks?
Gemma 4 has been released
[https://huggingface.co/unsloth/gemma-4-26B-A4B-it-GGUF](https://huggingface.co/unsloth/gemma-4-26B-A4B-it-GGUF)
[https://huggingface.co/unsloth/gemma-4-31B-it-GGUF](https://huggingface.co/unsloth/gemma-4-31B-it-GGUF)
[https://huggingface.co/unsloth/gemma-4-E4B-it-GGUF](https://huggingface.co/unsl
Anthropic Says That Claude Contains Its Own Kind of Emotions
And what does this even mean? "Internal representations of emotion concepts driving Claude behaviour." I get that they don't feel emotions and instead simulate patterns of emotion, but the scary part is that humans respond to the simulation the same way: panic.
Anthropic Acquires Startup Coefficient Bio for About $400 Million
https://www.theinformation.com/articles/anthropic-acquires-startup-coefficient-bio-400-million Coefficient Bio is a New York-based AI biotech startup. The company focuses on AI-driven drug discovery and on automating scientific experiments. Seems like Dario is confident his vision of tens of million
OpenAI acquires TBPN, the buzzy founder-led business talk show
OpenAI acquires TBPN to accelerate global conversations around AI and support independent media, expanding dialogue with builders, businesses, and the broader tech community.
Claude Code and Cowork now let Anthropic's AI take control of your Mac or Windows desktop
Anthropic has announced a new feature for its AI assistant Claude: the ability to directly operate a user's computer, handling tasks people would normally do themselves at their desk.
Show HN: Apfel – The free AI already on your Mac
qwen 3.6 voting
I'm afraid you have to use X, guys: [https://x.com/ChujieZheng/status/2039909486153089250](https://x.com/ChujieZheng/status/2039909486153089250)
llama.cpp Gemma4 Tokenizer Fix Was Merged Into Main Branch
Switzerland hosts 'CERN of semiconductor research'
Altman on shutting down Sora: 'I did not expect 3 or 6 months ago to be at this point we're at now; where something very big and important is about to happen again with this next generation of models and the agents they can power.'
[https://youtu.be/mJSnn0GZmls](https://youtu.be/mJSnn0GZmls) ‘We have a few times in our history realized something really important is working, or about to work so well, that we have to stop a bunch of other projects. In fact, this was the original thing that happened with GPT3. We had a whole por
Linux kernel developers are receiving a record-high number of CORRECT bug reports because of AI, and expect software quality to be much higher in the future
The message at the end (second snapshot) is particularly hopeful. It's great to see open-source software benefiting the most from the frontier models and the model developers giving back to those who created their training data. This significantly challenges the narrative pushed by some of the anti-
I asked Gemma 4 26b to code a simple single page breakout game to test its coding abilities and it just started going full schizophrenic
Gotta say my first experience with the model didn't go that well.
Gemma 4 is good
Waiting for artificialanalysis to produce an intelligence index, but I can see it's good. Gemma 4 26B-A4B is the same speed on a Mac Studio M1 Ultra as Qwen3.5 35B-A3B (~1000 pp, ~60 tg at 20k context length, llama.cpp). And in my short test, it behaves way, way better than Qwen, not even close. Chain of thoug
'Backrooms' and the Rise of the Institutional Gothic
Gemma 4 and Qwen3.5 on shared benchmarks
AI will do to our minds what machines did to our bodies
Just like we go to gyms today because machines have replaced strenuous physical work, in the near future, we’ll need to go to mental gyms to “work out” our minds because AI will do all the challenging mental work. A thousand years ago, physical strength was just part of life. You built with your ba
Gemma 4 is seriously broken when using Unsloth and llama.cpp
Hi! Just checking, am I the only one who has serious issues with Gemma 4 locally? I've played around with Gemma 4 using Unsloth quants on llama.cpp, and it's seriously broken. I'm using the latest changes from llama.cpp, along with the recommended temperature, top-p and top-k. Giving it an article
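For anyone trying to reproduce, this is roughly the kind of invocation being described. The model filename and sampling values below are illustrative placeholders, not the official Gemma 4 recommendations; the actual recommended temperature/top-p/top-k would come from the model card.

```shell
# Sketch of a llama.cpp run with explicit sampling flags.
# The quant filename and the sampling values are placeholders --
# substitute the recommended settings from the model card.
llama-cli -m ./gemma-4-26B-A4B-it-Q4_K_M.gguf \
  --temp 1.0 \
  --top-p 0.95 \
  --top-k 64 \
  -p "Summarize the following article:"
```

If output is still garbled with the recommended settings, it may be worth ruling out the quant itself by testing a different GGUF or a freshly rebuilt llama.cpp, since tokenizer fixes for new architectures often land shortly after release.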
VRAM optimization for gemma 4
**TLDR: add -np 1 to your llama.cpp launch command if you are the only user, cuts SWA cache VRAM by 3x instantly** So I was messing around with Gemma 4 and noticed the dense model hogs a massive chunk of VRAM before you even start generating anything. If you are on 16GB you might be hitting OOM and
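The tip above can be sketched as a launch command. The model path and context size here are placeholders, not from the post; `-np` (aka `--parallel`) is llama.cpp's number of parallel server slots, and KV/SWA cache is allocated per slot, which is why dropping to one slot shrinks the cache.

```shell
# Single-user llama.cpp server: -np 1 allocates SWA/KV cache for only
# one slot instead of the multi-slot default, cutting cache VRAM.
# Model filename and -c value are placeholders for illustration.
llama-server -m ./gemma-4-31B-it-Q4_K_M.gguf -c 16384 -np 1
```

If you later need concurrent requests, raise `-np` again, accepting the proportionally larger cache allocation.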
171 emotion vectors found inside Claude. Not metaphors. Actual neuron activation patterns steering behavior.
Anthropic's mechanistic interpretability team just published something that deserves way more attention than it's getting. They identified 171 distinct emotion-like