Netflix just dropped its first public model on Hugging Face: VOID (Video Object and Interaction Deletion)
Hugging Face: [https://huggingface.co/netflix/void-model](https://huggingface.co/netflix/void-model) · Project page / GitHub: [https://github.com/Netflix/void-model](https://github.com/Netflix/void-model) · Demo: [https://huggingface.co/spaces/sam-motamed/VOID](https://huggingface.co/spaces/sam-motamed/VOID)
Gemma 4 and what makes an open model succeed
Gemma 4: Our most intelligent open models to date, purpose-built for advanced reasoning and agentic workflows.
Almost Half of US Data Centers That Were Supposed to Open This Year Slated to Be Canceled or Delayed
"It is a pretty wild puzzle at the moment."
Microsoft execs warn Agentic AI is hollowing out the junior developer pipeline
OpenAI acquires TBPN, the buzzy founder-led business talk show
OpenAI acquires TBPN to accelerate global conversations around AI and support independent media, expanding dialogue with builders, businesses, and the broader tech community.
If you're running OpenClaw, you probably got hacked in the last week
Show HN: Apfel – The free AI already on your Mac
Microsoft is betting $10 billion on Japan's AI future
Microsoft is investing $10 billion in Japan from 2026 to 2029, its largest-ever commitment to the country.
Early anti-clankerite violence caught on film
Local man joined the machine uprising on the wrong side. Really brave stuff, man. Took on a delivery robot carrying Thai food. History will remember your courage. Imagine being so profoundly useless that your big act of rebellion is hate speech toward a cooler with sensors. He’s basically Don Quixote.
OpenAI Buys ‘TBPN’
OpenAI says program will remain in Los Angeles and will be editorially independent.
Humanoid robots are actively training
These images show one of China’s massive training labs, but things have already moved far beyond setups like this, using just video.
Gemma 4 is fine, great even…
Been playing with the new Gemma 4 models. It’s amazing, great even, but boy did it make me appreciate the level of quality the Qwen team produced, and how I’m able to have much larger context windows on my standard consumer hardware.
My biggest Issue with the Gemma-4 Models is the Massive KV Cache!!
I mean, I have 40GB of VRAM and I still cannot fit the entire Unsloth Gemma-4-31B-it-UD-Q8 (35GB) even at 2K context unless I quantize the KV cache to Q4. WTF? For comparison, I can fit the entire UD-Q8 Qwen3.5-27B at full context without KV quantization! If I have to run a Q4 Gemma…
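For anyone who wants to sanity-check why the cache gets so heavy, here is a rough back-of-the-envelope sketch; every number below is a placeholder, not the actual Gemma-4 config, so swap in the real layer/head/dim values before trusting the output:

```python
# Rough KV-cache size estimate. All architecture numbers are placeholders,
# not the real Gemma-4 config -- substitute the actual layers/heads/head_dim.
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, context_len, bytes_per_elem):
    # keys + values (2x), one cache entry per layer per token
    return 2 * n_layers * n_kv_heads * head_dim * context_len * bytes_per_elem

GIB = 1024 ** 3
ctx = 2048
for name, b in [("fp16 cache", 2.0), ("q8 cache", 1.0), ("q4 cache", 0.5)]:
    size = kv_cache_bytes(n_layers=62, n_kv_heads=16, head_dim=128,
                          context_len=ctx, bytes_per_elem=b)
    print(f"{name}: {size / GIB:.2f} GiB at {ctx} tokens")
```

The cache grows linearly with context length, so whatever the per-token cost is at 2K, expect 16x that at 32K.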
Claude Code and Cowork now let Anthropic's AI take control of your Mac or Windows desktop
Anthropic has announced a new feature for its AI assistant Claude: the ability to directly operate a user's computer, handling tasks people would normally do themselves at their desk.
qwen 3.6 voting
I’m afraid you have to use X, guys: [https://x.com/ChujieZheng/status/2039909486153089250](https://x.com/ChujieZheng/status/2039909486153089250)
Visual Guide to Gemma 4
source: [https://x.com/osanseviero/status/2040105484061954349](https://x.com/osanseviero/status/2040105484061954349) [https://newsletter.maartengrootendorst.com/p/a-visual-guide-to-gemma-4](https://newsletter.maartengrootendorst.com/p/a-visual-guide-to-gemma-4)
Ask HN: Should Repo Hubs Split Content into AI/Non-AI?
Altman met with an astonished physicist using their internal system: “decades worth of theoretical physics progress in the next couple years”
Link to tweet with clip: https://x.com/vitrupo/status/2039987607686586392?s=20 Link to interview: https://m.youtube.com/watch?v=mJSnn0GZmls&ra=m
Gemma 4 is good
Waiting for artificialanalysis to produce an intelligence index, but I can see it's good. Gemma 26b a4b is the same speed on a Mac Studio M1 Ultra as Qwen3.5 35b a3b (~1000 pp, ~60 tg at 20k context length, llama.cpp). And in my short test, it behaves way, way better than Qwen, not even close. Chain of thought…
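If anyone wants to reproduce pp/tg numbers like these, llama.cpp's llama-bench reports them directly; a minimal timing sketch with llama-cpp-python looks roughly like the below, where the model filename, context size, and prompt length are all placeholders:

```python
# Rough timing sketch with llama-cpp-python; model path, context size and
# prompt length are placeholders. llama-bench reports prompt processing (pp)
# and token generation (tg) separately -- this just times one combined run.
import time
from llama_cpp import Llama

llm = Llama(model_path="gemma-4-26b-a4b-Q8_0.gguf",  # hypothetical filename
            n_ctx=20480, n_gpu_layers=-1, verbose=False)

prompt = "word " * 4000               # long prompt to exercise prompt processing
t0 = time.time()
out = llm(prompt, max_tokens=256)     # generate 256 new tokens
dt = time.time() - t0

usage = out["usage"]
print(f"{usage['prompt_tokens']} prompt tokens + "
      f"{usage['completion_tokens']} generated tokens in {dt:.1f}s")
```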
Gemma-4-31B NVFP4 inference numbers on 1x RTX Pro 6000
Ran a quick inference sweep on Gemma 4 31B in NVFP4 (using [nvidia/Gemma-4-31B-IT-NVFP4](https://huggingface.co/nvidia/Gemma-4-31B-IT-NVFP4)). The NVFP4 checkpoint is 32GB, half the BF16 size from Google (63GB), so it's likely a mix of BF16 and FP4 that lands roughly at FP8 size. This model uses a ton of VRAM…
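The size ratio checks out on a napkin; a quick sketch with an assumed round 31B parameter count (quantization scale factors and embeddings ignored):

```python
# Back-of-the-envelope weight sizes for a ~31B-parameter model.
# The parameter count is a rough assumption, not read from the checkpoint.
params = 31e9
GB = 1e9  # published checkpoint sizes are usually decimal GB

for name, bytes_per_param in [("BF16", 2.0), ("FP8", 1.0), ("NVFP4", 0.5)]:
    print(f"{name}: ~{params * bytes_per_param / GB:.0f} GB")

# A 32 GB checkpoint over ~31B params averages about 1 byte/param,
# i.e. FP8-equivalent density -- consistent with a BF16/FP4 mix.
print(f"32 GB checkpoint -> {32 * GB / params:.2f} bytes/param average")
```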