Netflix just dropped their first public model on Hugging Face: VOID (Video Object and Interaction Deletion)
Hugging Face: [https://huggingface.co/netflix/void-model](https://huggingface.co/netflix/void-model) - Project page / GitHub: [https://github.com/Netflix/void-model](https://github.com/Netflix/void-model) - Demo: [https://huggingface.co/spaces/sam-motamed/VOID](https://huggingface.co/spaces/sam-motamed/VOID)
Gemma 4 and what makes an open model succeed
Gemma 4: Our most intelligent open models to date, purpose-built for advanced reasoning and agentic workflows.
OpenAI acquires TBPN, the buzzy founder-led business talk show
OpenAI acquires TBPN to accelerate global conversations around AI and support independent media, expanding dialogue with builders, businesses, and the broader tech community.
If you're running OpenClaw, you probably got hacked in the last week
Show HN: Apfel – The free AI already on your Mac
Microsoft is betting $10 billion on Japan's AI future
Microsoft is investing $10 billion in Japan from 2026 to 2029, its largest-ever commitment to the country.
Early anti-clankerite violence caught on film
Local man joined the machine uprising on the wrong side. Really brave stuff, man. Took on a delivery robot carrying Thai food. History will remember your courage. Imagine being so profoundly useless that your big act of rebellion is hate speech toward a cooler with sensors. He’s basically Don Quixote…
Gemma 4 is fine, great even…
Been playing with the new Gemma 4 models. It’s amazing, great even, but boy did it make me appreciate the level of quality the Qwen team produced, and I’m able to run much larger context windows on my standard consumer hardware.
OpenAI Buys ‘TBPN’
OpenAI says program will remain in Los Angeles and will be editorially independent.
Claude Code and Cowork now let Anthropic's AI take control of your Mac or Windows desktop
Anthropic has announced a new feature for its AI assistant Claude: the ability to directly operate a user's computer, handling tasks people would normally do themselves at their desk.
Smaller models are getting scary good.
I am still processing this lol. I had **Gemini 3 Pro Deepthink** try to solve a complex security puzzle (which was secretly an unwinnable paradox). It spit out this incredibly professional-looking, highly structured answer after about 15 minutes of reasoning. Just for fun, I passed its solution over…
Humanoid robots are actively training
These images show one of China’s massive training labs, but things have already moved far beyond setups like this, just using video.
Altman met with an astonished physicist using their internal system: “decades’ worth of theoretical physics progress in the next couple years”
Link to tweet with clip: https://x.com/vitrupo/status/2039987607686586392?s=20 Link to interview: https://m.youtube.com/watch?v=mJSnn0GZmls&ra=m
Show HN: A memory layer for AI agents that organizes itself
Visual Guide to Gemma 4
source: [https://x.com/osanseviero/status/2040105484061954349](https://x.com/osanseviero/status/2040105484061954349) [https://newsletter.maartengrootendorst.com/p/a-visual-guide-to-gemma-4](https://newsletter.maartengrootendorst.com/p/a-visual-guide-to-gemma-4)
llama.cpp Gemma 4 tokenizer fix was merged into the main branch
Altman on shutting down Sora: 'I did not expect 3 or 6 months ago to be at the point we're at now, where something very big and important is about to happen again with this next generation of models and the agents they can power.'
[https://youtu.be/mJSnn0GZmls](https://youtu.be/mJSnn0GZmls) ‘We have a few times in our history realized something really important is working, or about to work so well, that we have to stop a bunch of other projects. In fact, this was the original thing that happened with GPT-3. We had a whole por…’
Qwen 3.6 voting
I am afraid you have to use X, guys. [https://x.com/ChujieZheng/status/2039909486153089250](https://x.com/ChujieZheng/status/2039909486153089250)
My biggest issue with the Gemma-4 models is the massive KV cache!!
I mean, I have 40GB of VRAM and I still cannot fit the entire Unsloth Gemma-4-31B-it-UD-Q8 (35GB) even at 2K context unless I quantize the KV cache to Q4. WTF? For comparison, I can fit the entire UD-Q8 Qwen3.5-27B at full context without KV quantization! If I have to run a Q4 Gemm…
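For a sense of scale, here's a back-of-envelope KV cache estimate (a minimal sketch in Python; the layer/head/dim numbers below are placeholders, not Gemma-4's actual config):

```python
# Back-of-envelope KV cache size: 2 tensors (K and V) per layer,
# each of shape [n_kv_heads, ctx_len, head_dim], at bytes_per_elem per value.
def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   ctx_len: int, bytes_per_elem: float = 2.0) -> float:
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem

# Hypothetical config (placeholder values, NOT Gemma-4's real architecture),
# fp16 cache (2 bytes/element) at 32K context:
size = kv_cache_bytes(n_layers=48, n_kv_heads=16, head_dim=128, ctx_len=32768)
print(f"{size / 2**30:.1f} GiB")  # 12.0 GiB; a Q4 cache (~0.5 B/elem) cuts this ~4x
```

The product n_layers × n_kv_heads × head_dim is what eats VRAM per token of context, so a model with fewer KV heads (more aggressive grouped-query attention) can cache far less per token at the same parameter count, which would explain fitting full context on one model but not the other.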
Linux kernel developers are receiving a record-high number of CORRECT bug reports because of AI and expect software quality to be much higher in the future
The message at the end (second snapshot) is particularly hopeful. It's great to see open-source software benefiting the most from the frontier models, and the model developers giving back to those who created their training data. This significantly challenges the narrative pushed by some of the anti-…