Google announces Gemma 4 open AI models, switches to Apache 2.0 license
I don't really like citing X posts as a source, but it's Jeff Dean, so maybe there are more surprises beyond what we just got. Thanks, Google! Edit: It seems Jeff deleted the mention of 124B. Maybe because it exceeded Gemini 3 Flash-Lite on benchmarks?
Gemma 4 and Qwen3.5 on shared benchmarks
Gemma 4 running on Raspberry Pi5
To be specific: a Raspberry Pi 5 (8 GB) with an SSD (though the speed is the same on the non-SSD one), running [Potato OS](https://github.com/slomin/potato-os) with the latest llama.cpp branch compiled from source. This is Gemma 4 E2B, the Unsloth variety.
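A minimal sketch of the setup described above, assuming a recent llama.cpp built with CMake and the Unsloth GGUF repo naming; the `Q4_K_M` quant tag is an assumption, not confirmed by the post:

```shell
# Build llama.cpp from source on the Pi 5 (4 cores).
git clone https://github.com/ggerganov/llama.cpp
cmake -B llama.cpp/build llama.cpp
cmake --build llama.cpp/build --config Release -j4

# Fetch and run the Unsloth Gemma 4 E2B GGUF straight from Hugging Face
# via llama-cli's -hf shorthand (downloads the model on first run).
./llama.cpp/build/bin/llama-cli \
  -hf unsloth/gemma-4-E2B-it-GGUF:Q4_K_M \
  -p "Hello from a Raspberry Pi 5"
```

On 8 GB of RAM, a ~4-bit quant of a small E2B model leaves comfortable headroom, which is presumably why SSD vs. SD card makes no difference once the weights are loaded.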
Cursor Launches a New AI Agent Experience to Take On Claude Code and Codex
As Cursor launches the next generation of its product, the AI coding startup has to compete with OpenAI and Anthropic more directly than ever.
[R] Is autoresearch really better than classic hyperparameter tuning?
We ran experiments comparing Optuna and autoresearch. Autoresearch converges faster and is more cost-effective…
Gemma 4 has been released
- [https://huggingface.co/unsloth/gemma-4-26B-A4B-it-GGUF](https://huggingface.co/unsloth/gemma-4-26B-A4B-it-GGUF)
- [https://huggingface.co/unsloth/gemma-4-31B-it-GGUF](https://huggingface.co/unsloth/gemma-4-31B-it-GGUF)
- [https://huggingface.co/unsloth/gemma-4-E4B-it-GGUF](https://huggingface.co/unsl…)
Anthropic Says That Claude Contains Its Own Kind of Emotions
And what does this even mean: "internal representations of emotion concepts driving Claude behaviour"? I get that they don't feel emotions and merely simulate patterns of emotion, but the scary part is that humans respond to the simulation the same way, "panic" included.
Alibaba launches Qwen3.6-Plus, its third proprietary AI model in days
Blog post: [https://qwen.ai/blog?id=qwen3.6](https://qwen.ai/blog?id=qwen3.6) From Chujie Zheng on 𝕏: [https://x.com/ChujieZheng/status/2039560126047359394](https://x.com/ChujieZheng/status/2039560126047359394)
Google's Gemma 4 is now available with Apache 2.0 licensing for the first time
Google is releasing Gemma 4, its most capable open model family yet. The four new models run on everything from smartphones to workstations and ship under a fully open Apache 2.0 license for the first time.
OpenAI acquires TBPN, the buzzy founder-led business talk show
OpenAI acquires TBPN to accelerate global conversations around AI and support independent media, expanding dialogue with builders, businesses, and the broader tech community.
Ask HN: Should there be a temporary ban on new accounts?
Lemonade by AMD: a fast and open source local LLM server using GPU and NPU
AI-2027 forecasters move their timelines ~1.5 years earlier, predict 2027 or 2028 most likely year for AGI
Link to Twitter thread: [https://x.com/eli_lifland/status/2039773600555979251?s=20](https://x.com/eli_lifland/status/2039773600555979251?s=20) Link to blog: [https://blog.aifutures.org/p/q1-2026-timelines-update](https://blog.aifutures.org/p/q1-2026-timelines-update)
One of the most sensible reasons I can think of to have an LLM downloaded on my cell phone would be emergency advice.
It seems like in every conversation about derestricted models, everyone treats you like a pervert. The fact is, you can be sensible and a pervert 😂.
A $20/month user costs OpenAI $65 in compute. AI video is a money furnace
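The headline arithmetic is easy to sanity-check in a couple of lines; the figures are taken from the headline itself and say nothing about OpenAI's actual unit economics:

```python
# Back-of-envelope check of the headline numbers.
monthly_revenue = 20   # $/month subscription price
compute_cost = 65      # $/month reported compute cost per user
loss_per_user = compute_cost - monthly_revenue
print(loss_per_user)   # → 45, i.e. a $45/month loss per subscriber
```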
Maybe a party-pooper but: A dozen 120B models later, and GPTOSS-120B is still king
- Never consumes the entire context walking in place.
- Never fails at tool calling.
- Never runs slow, regardless of the back-end.
- Never misses a piece of context in its entire window.
- Never slows down no matter how long the prompt is.

As much as I despise OpenAI, I believe they've done something excellent.
'Backrooms' and the Rise of the Institutional Gothic
Microsoft takes on AI rivals with three new foundational models
Six months after the group's formation, MAI has released models that can transcribe voice to text as well as generate audio and images.
p-e-w/gemma-4-E2B-it-heretic-ara: Gemma 4's defenses shredded by Heretic's new ARA method 90 minutes after the official release
Google's Gemma models have long been known for their strong "alignment" (censorship). I am happy to report that even the latest iteration, Gemma 4, is not immune to Heretic's new [Arbitrary-Rank Ablation (ARA)](https://github.com/p-e-w/heretic/pull/211) method, which uses matrix optimization to suppress…
Gemma 4 on Android phones
Sounds local. [https://x.com/osanseviero/status/2039801593055322601](https://x.com/osanseviero/status/2039801593055322601) [https://play.google.com/store/apps/details?id=com.google.ai.edge.gallery](https://play.google.com/store/apps/details?id=com.google.ai.edge.gallery)