Linux kernel developers are receiving a record-high number of CORRECT bug reports because of AI and expect software quality to be much higher in the future
The message at the end (second snapshot) is particularly hopeful. It's great to see open-source software benefiting the most from the frontier models, and the model developers giving back to those who created their training data. This significantly challenges the narrative pushed by some of the anti-…
Anthropic Says That Claude Contains Its Own Kind of Emotions
What does this even mean? "Internal representations of emotion concepts driving Claude's behaviour." I get that they don't feel emotions and just simulate patterns of emotion, but the scary part is that humans respond to the simulation the same way: "panic."
Anthropic Acquires Startup Coefficient Bio for About $400 Million
https://www.theinformation.com/articles/anthropic-acquires-startup-coefficient-bio-400-million Coefficient Bio is a New York-based AI biotech startup. The company focuses on AI-driven drug discovery and automating scientific experiments. Seems like Dario is confident his vision of tens of million…
OpenAI acquires TBPN, the buzzy founder-led business talk show
OpenAI acquires TBPN to accelerate global conversations around AI and support independent media, expanding dialogue with builders, businesses, and the broader tech community.
Google announces Gemma 4 open AI models, switches to Apache 2.0 license
Gemma 4: Our most intelligent open models to date, purpose-built for advanced reasoning and agentic workflows.
Qwen 3.6 voting
I'm afraid you have to use X, guys: [https://x.com/ChujieZheng/status/2039909486153089250](https://x.com/ChujieZheng/status/2039909486153089250)
Altman on shutting down Sora: 'I did not expect 3 or 6 months ago to be at this point we're at now; where something very big and important is about to happen again with this next generation of models and the agents they can power.'
[https://youtu.be/mJSnn0GZmls](https://youtu.be/mJSnn0GZmls) ‘We have a few times in our history realized something really important is working, or about to work so well, that we have to stop a bunch of other projects. In fact, this was the original thing that happened with GPT-3. We had a whole por…
Show HN: Apfel – The free AI already on your Mac
Gemma 4 has been released
[https://huggingface.co/unsloth/gemma-4-26B-A4B-it-GGUF](https://huggingface.co/unsloth/gemma-4-26B-A4B-it-GGUF) [https://huggingface.co/unsloth/gemma-4-31B-it-GGUF](https://huggingface.co/unsloth/gemma-4-31B-it-GGUF) [https://huggingface.co/unsloth/gemma-4-E4B-it-GGUF](https://huggingface.co/unsloth/gemma-4-E4B-it-GGUF)
AI will do to our minds what machines did to our bodies
Just like we go to gyms today because machines have replaced strenuous physical work, in the near future we'll need to go to mental gyms to "work out" our minds, because AI will do all the challenging mental work. A thousand years ago, physical strength was just part of life. You built with your ba…
AI solves John Conway's decades-old bountied math problem
[https://x.com/spicey\_lemonade/status/2039643930010980715?s=20](https://x.com/spicey_lemonade/status/2039643930010980715?s=20) The problem is listed on Wikipedia's "unsolved problems in mathematics" list
VRAM optimization for gemma 4
**TLDR: add -np 1 to your llama.cpp launch command if you are the only user; it cuts SWA cache VRAM by 3x instantly.** So I was messing around with Gemma 4 and noticed the dense model hogs a massive chunk of VRAM before you even start generating anything. If you are on 16GB you might be hitting OOM and…
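The mechanism behind the tip is that llama.cpp allocates one sliding-window KV cache per parallel slot, so the SWA cache grows roughly linearly with `-np`. A back-of-the-envelope sketch, with made-up model dimensions standing in for Gemma 4's actual config:

```python
# Rough SWA KV-cache sizing, to show why fewer parallel slots shrinks
# the cache. All model numbers below are illustrative assumptions,
# not Gemma 4's real shape.

def kv_cache_bytes(n_layers, n_kv_heads, head_dim, window, n_parallel,
                   bytes_per_elem=2):
    """K and V tensors per layer, one window-sized slice per slot."""
    per_slot = 2 * n_layers * n_kv_heads * head_dim * window * bytes_per_elem
    return per_slot * n_parallel

GIB = 1024 ** 3
# Hypothetical dense config: 48 SWA layers, 8 KV heads, head dim 256,
# 4096-token sliding window, fp16 cache.
for np_ in (4, 1):
    size = kv_cache_bytes(48, 8, 256, 4096, np_)
    print(f"-np {np_}: ~{size / GIB:.2f} GiB SWA cache")
```

With these placeholder numbers the cache drops from ~6 GiB at `-np 4` to ~1.5 GiB at `-np 1`; the exact ratio depends on the model's real layer layout and window size.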
'Backrooms' and the Rise of the Institutional Gothic
Gemma 4 has enough ;)
Gemma 4 and Qwen3.5 on shared benchmarks
[D] TMLR reviews seem more reliable than ICML/NeurIPS/ICLR
This year I submitted a paper to ICML for the first time. I have also experienced the review process at TMLR and ICLR. From my observation, given these venues take close to (or less than) 4 months until the final decision, the quality of reviews at TMLR was much more on point compared…
llama.cpp Gemma 4 Tokenizer Fix Was Merged Into Main Branch
Switzerland hosts 'CERN of semiconductor research'
Gemma 4 is seriously broken when using Unsloth and llama.cpp
Hi! Just checking, am I the only one who has serious issues with Gemma 4 locally? I've played around with Gemma 4 using Unsloth quants on llama.cpp, and it's seriously broken. I'm using the latest changes from llama.cpp, along with the recommended temperature, top-p, and top-k. Giving it an article…
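For anyone trying to reproduce this, a minimal sketch with llama-cpp-python is below. The GGUF filename is hypothetical, and the sampling values are the ones Google recommended for Gemma 3 (temp 1.0, top-k 64, top-p 0.95), since the poster doesn't spell out the Gemma 4 numbers; substitute whatever the model card actually recommends.

```python
# Minimal repro sketch with llama-cpp-python. Filename and sampling
# values are assumptions, not confirmed Gemma 4 recommendations.
from llama_cpp import Llama

llm = Llama(
    model_path="gemma-4-26B-A4B-it-Q4_K_M.gguf",  # hypothetical quant file
    n_ctx=8192,
)

out = llm(
    "Summarize the following article:\n...",
    max_tokens=512,
    temperature=1.0,  # assumed values, borrowed from Gemma 3 guidance
    top_p=0.95,
    top_k=64,
)
print(out["choices"][0]["text"])
```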
171 emotion vectors found inside Claude. Not metaphors. Actual neuron activation patterns steering behavior.
Anthropic's mechanistic interpretability team just published something that deserves way more attention than it's getting. They identified 171 distinct emotion-like…
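The "steering" claim refers to a standard interpretability technique: find a direction in activation space and add it to a layer's hidden states during the forward pass. A toy PyTorch sketch of that general idea, with a small stand-in model and a random placeholder vector instead of a real learned "emotion" direction (this is not Anthropic's code or their actual vectors):

```python
# Toy activation-steering sketch: add a fixed direction to a
# mid-layer's hidden states at inference time via a forward hook.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"  # small stand-in model for illustration
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

layer = model.transformer.h[6]             # pick a mid-depth block
steer = torch.randn(model.config.n_embd)   # placeholder "emotion" direction
steer = 4.0 * steer / steer.norm()         # scale sets steering strength

def add_vector(module, inputs, output):
    # GPT-2 blocks return a tuple; hidden states are the first element.
    hidden = output[0] + steer.to(output[0].dtype)
    return (hidden,) + output[1:]

handle = layer.register_forward_hook(add_vector)
ids = tok("The user just told me some bad news.", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=30, do_sample=False)[0]))
handle.remove()  # restore unsteered behavior
```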