Google announces Gemma 4 open AI models, switches to Apache 2.0 license
I don't really like to take X posts as a source, but it's Jeff Dean, so maybe there will be more surprises beyond what we just got. Thanks, Google! Edit: It seems Jeff deleted the mention of 124B. Maybe that's because it exceeded Gemini 3 Flash-Lite on benchmarks?
Gemma 4 has been released
[https://huggingface.co/unsloth/gemma-4-26B-A4B-it-GGUF](https://huggingface.co/unsloth/gemma-4-26B-A4B-it-GGUF) [https://huggingface.co/unsloth/gemma-4-31B-it-GGUF](https://huggingface.co/unsloth/gemma-4-31B-it-GGUF) [https://huggingface.co/unsloth/gemma-4-E4B-it-GGUF](https://huggingface.co/unsloth/gemma-4-E4B-it-GGUF)
Gemma 4 and Qwen3.5 on shared benchmarks
Anthropic Says That Claude Contains Its Own Kind of Emotions
And what does this even mean: "internal representations of emotion concepts driving Claude behaviour"? I get that they don't feel emotions and only simulate patterns of emotion, but the scary part is that humans respond to the simulation the same way: panic.
[R] Is autoresearch really better than classic hyperparameter tuning?
We ran experiments comparing Optuna and autoresearch. Autoresearch converges faster and is more cost-effective.
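For context on the "classic" side of that comparison, a minimal Optuna baseline looks like the sketch below; the search space and the toy objective are placeholders, not the setup used in the experiments above.

```python
# Minimal "classic" hyperparameter tuning with Optuna: define an objective,
# let the TPE sampler propose trials, keep the best parameters.
import optuna


def objective(trial: optuna.Trial) -> float:
    # Illustrative search space; swap in your real training hyperparameters.
    lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    weight_decay = trial.suggest_float("weight_decay", 1e-6, 1e-2, log=True)
    batch_size = trial.suggest_categorical("batch_size", [16, 32, 64])
    # Stand-in for a real training run; return validation loss here instead.
    return (lr - 3e-3) ** 2 + weight_decay + 1.0 / batch_size


study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)
print(study.best_params, study.best_value)
```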
Cursor Launches a New AI Agent Experience to Take On Claude Code and Codex
As Cursor launches the next generation of its product, the AI coding startup has to compete with OpenAI and Anthropic more directly than ever.
OpenAI acquires TBPN, the buzzy founder-led business talk show
OpenAI acquires TBPN to accelerate global conversations around AI and support independent media, expanding dialogue with builders, businesses, and the broader tech community.
171 emotion vectors found inside Claude. Not metaphors. Actual neuron activation patterns steering behavior.
Anthropic's mechanistic interpretability team just published something that deserves way more attention than it's getting. They identified 171 distinct emotion-like activation patterns.
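For anyone wondering what "an activation pattern steering behavior" means mechanically, here is a rough sketch of activation steering via a forward hook. The model name, layer index, and the random vector standing in for an extracted "emotion" direction are all placeholders; this is not Anthropic's method or their actual features.

```python
# Sketch: add a fixed direction to one layer's hidden states at inference time.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-3-1b-it"  # placeholder; any small causal LM with this layout works
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

layer_idx, scale = 10, 4.0
direction = torch.randn(model.config.hidden_size)
direction = direction / direction.norm()  # stand-in for an extracted "emotion" feature


def steer(module, inputs, output):
    # Decoder layers may return a tuple whose first element is the hidden states.
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + scale * direction.to(hidden.device, hidden.dtype)
    return (hidden,) + tuple(output[1:]) if isinstance(output, tuple) else hidden


handle = model.model.layers[layer_idx].register_forward_hook(steer)
ids = tok("How are you feeling today?", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=40)[0], skip_special_tokens=True))
handle.remove()
```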
Lemonade by AMD: a fast and open source local LLM server using GPU and NPU
One of the most sensible reasons I can think of to have an LLM downloaded on my cell phone would be emergency advice.
It seems like in every conversation about derestricted models, everyone treats you like a pervert. The fact is, you can be sensible and a pervert at the same time 😂.
Why OpenAI’s Fidji Simo Bought the TBPN Podcast Amid Crusade Against ‘Side Quests’
OpenAI has purchased TBPN, an online talk show that often interviews AI executives and other tech leaders. The show goes live every weekday at 2 PM PT, often running for three hours, and counts OpenAI CEO Sam Altman, as well as executives from Meta, Microsoft, Palantir, and Andreessen Horowitz, among its guests.
Gemma 4 is efficient with thinking tokens, but it will also happily reason for 10+ minutes if you prompt it to do so.
Tested both 26B and 31B in AI Studio. The task I gave them was to crack a cypher. The top closed source models can crack this cypher at maximum thinking settings, and Kimi 2.5 Thinking and Deepseek 3.2 are the only open source models to crack the cypher without tool use. (Of course, with the closed…
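If you want to reproduce that kind of run outside AI Studio, a minimal sketch with the google-genai Python SDK might look like the following. The model id is a guess, and whether a hosted Gemma 4 endpoint honors a ThinkingConfig budget the way Gemini thinking models do is also an assumption, not something confirmed here.

```python
# Minimal sketch; "gemma-4-31b-it" is an assumed model id and ThinkingConfig
# support for Gemma 4 is assumed rather than confirmed.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")
response = client.models.generate_content(
    model="gemma-4-31b-it",  # assumed id; check the model list in AI Studio
    contents="Crack this cypher: ...",  # your cypher prompt here
    config=types.GenerateContentConfig(
        # Larger budgets let the model reason longer before answering.
        thinking_config=types.ThinkingConfig(thinking_budget=8192),
    ),
)
print(response.text)
```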
'Backrooms' and the Rise of the Institutional Gothic
AI-2027 forecasters move their timelines ~1.5 years earlier, predict 2027 or 2028 most likely year for AGI
Link to Twitter thread: https://x.com/eli_lifland/status/2039773600555979251?s=20 Link to blog: https://blog.aifutures.org/p/q1-2026-timelines-update
A $20/month user costs OpenAI $65 in compute. AI video is a money furnace
p-e-w/gemma-4-E2B-it-heretic-ara: Gemma 4's defenses shredded by Heretic's new ARA method 90 minutes after the official release
Google's Gemma models have long been known for their strong "alignment" (censorship). I am happy to report that even the latest iteration, Gemma 4, is not immune to Heretic's new [Arbitrary-Rank Ablation (ARA)](https://github.com/p-e-w/heretic/pull/211) method, which uses matrix optimization to suppress refusals.
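For readers new to this: ARA itself is linked above, but the older rank-1 "abliteration" idea it generalizes is easy to sketch. Given a unit refusal direction r (typically estimated from activation differences between refusing and complying prompts), you project it out of the matrices that write into the residual stream: W ← (I − r rᵀ) W. The snippet below is only that classic projection, not the ARA method.

```python
# Sketch of classic rank-1 directional ablation ("abliteration"), not ARA.
import torch


def ablate_direction(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Project a direction out of a (d_out, d_in) weight: W <- (I - r r^T) W."""
    r = direction / direction.norm()
    return weight - torch.outer(r, r @ weight)


if __name__ == "__main__":
    W = torch.randn(16, 16)
    r = torch.randn(16)
    W_abl = ablate_direction(W, r)
    # After ablation, the weight's outputs have no component along r.
    r_hat = r / r.norm()
    print(torch.allclose(r_hat @ W_abl, torch.zeros(16), atol=1e-5))
```

In typical abliteration recipes this projection is applied per layer to the attention output and MLP down-projection weights; Heretic's ARA, per the linked PR, goes beyond the single-direction case.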
Google strongly implies the existence of large Gemma 4 models
In the [Hugging Face card](https://huggingface.co/google/gemma-4-26B-A4B-it):

> Increased Context Window – The small models feature a 128K context window, while the medium models support 256K.

Small and medium... implying at least one large model! 124B confirmed :P
Gemma 4 has been abliterated
Got inspired to try to crack this egg without using Heretic. FP16, Q8_0, and Q4_K_M quants, plus the abliteration script for modification/use, are here: [https://huggingface.co/paperscarecrow/Gemma-4-31B-it-abliterated-gguf](https://huggingface.co/paperscarecrow/Gemma-4-31B-it-abliterated-gguf)
Microsoft takes on AI rivals with three new foundational models
Six months after the group's formation, MAI released models that can transcribe voice to text as well as generate audio and images.