China blocks Meta’s $2B Manus deal after months-long probe
China has ordered Meta to unwind its multibillion-dollar Manus acquisition, dealing a potential setback to Zuckerberg’s push into AI agents.
Xiaomi has open-sourced MiMo v2.5 Pro and it's interesting
Bosses Are Blowing More Money on AI Agents Than It’d Cost Them to Just Pay Human Workers
"The cost of compute is far beyond the costs of the employees."
Elon Musk and Sam Altman are going to court over OpenAI’s future
After a yearslong legal feud, Elon Musk and OpenAI CEO Sam Altman are heading to trial this week in Northern California in a case that could have sweeping consequences. Ahead of OpenAI’s highly anticipated IPO, the court could rule on whether the company is allowed to exist as a for-profit enterprise.
Scraping 241 UK council planning portals – 2.6M decisions so far
Duality of r/LocalLLaMA
I'm done with using local LLMs for coding
I think I gave it a fair shot over the past few weeks, forcing myself to use local models for non-work tech tasks. I use Claude Code at my job, so that's what I'm comparing against. I used Qwen 27B and Gemma 4 31B, which are considered the best local models below the multi-hundred-billion-parameter tier. I also tried multiple…
Jury selection in Musk v. Altman: ‘People don’t like him’
On Monday, the courtroom battle between Elon Musk and Sam Altman over alleged broken promises at OpenAI started, as usual, with jury selection. The only tricky part? A lot of the prospective jurors already have an opinion about Elon Musk, and it's not a good one. The Verge reporter Elizabeth Lopatto…
ChatGPT 5.4 solved a 60+-year-old unsolved Erdős problem in a single shot
For years, AI/LLM critics had the same refrain: LLMs don't reason, they just predict the next token. Recently, an LLM reasoned better than 50 years of mathematicians on an open Erdős problem by applying a basic PhD-level formula. ChatGPT conversation: https://chatgpt.com/share/69dd1c83-b16
A comedian’s strategy for poisoning AI training data
Apparently the best defense against AI copying your voice is strawberry mango forklift supersize fries.
AMD Radeon RX 6900 XT - ROCm vs Vulkan - Gemma 4 and Qwen 3.5 speed benchmarks
Did some quick tests after building llama.cpp with ROCm 6.4.2 and the latest Vulkan for my 6900 XT.

# gemma4 E2B Q4_K

|ubatch|ROCm pp512|Vulkan pp512|ROCm tg128|Vulkan tg128|
|:-|:-|:-|:-|:-|
|**32**|1536.60|1423.49|151.92|174.59|
|**64**|1590.65|1930.60|151.41|173.76|
|**128**|2651.11|2998.42|151.53|1…|
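For anyone wanting to reproduce numbers like these, here is a minimal sketch that sweeps the same ubatch sizes with llama-bench. It assumes llama.cpp has been built once per backend; the binary paths and GGUF filename below are hypothetical placeholders for your own setup:

```python
import subprocess

MODEL = "gemma4-E2B-Q4_K.gguf"  # hypothetical filename; point at your own GGUF

# One llama.cpp build per backend; adjust these hypothetical paths to taste.
BACKENDS = {
    "ROCm": "./build-rocm/bin/llama-bench",
    "Vulkan": "./build-vulkan/bin/llama-bench",
}

for name, binary in BACKENDS.items():
    for ubatch in (32, 64, 128):
        print(f"--- {name}, ubatch={ubatch} ---")
        subprocess.run(
            [
                binary,
                "-m", MODEL,
                "-p", "512",         # prompt-processing benchmark (the pp512 columns)
                "-n", "128",         # token-generation benchmark (the tg128 columns)
                "-ub", str(ubatch),  # micro-batch size under test
            ],
            check=True,
        )
```

The table pattern above is typical: prompt processing (pp) scales with ubatch because it is compute-bound, while token generation (tg) barely moves because it is memory-bandwidth-bound.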
The Crowded Interior Of A Cell, Simulated --- An accurate chemical cell simulation will one day allow humanity to master its biology.
The Crowded Interior Of A Cell: It displays a bustling metropolis of cellular components, including mitochondria (left), the nucleus (bottom), and a complex cytoskeleton. The model synthesizes real data from X-ray crystallography, NMR, and cryo-electron microscopy. Artist/creator: developed by scientists…
Luce DFlash: Qwen3.6-27B at up to 2x throughput on a single RTX 3090
Hey fellow Llamas, your time is precious, so I'll keep it short. We built a GGUF port of DFlash speculative decoding: a standalone C++/CUDA stack on top of ggml that runs on a single 24 GB RTX 3090 and hosts the new Qwen3.6-27B. We call it Luce DFlash ([https://github.com/Luce-Org/lucebox-hub](https://github.com/Luce-Org/lucebox-hub)).
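For readers new to the technique: speculative decoding has a small draft model propose a block of tokens that the large target model then verifies in a single forward pass, keeping the longest agreeing prefix. A minimal greedy-acceptance sketch of that loop follows; the `draft_next` and `target_argmax` callables are hypothetical stand-ins for real model inference, not Luce DFlash's actual API:

```python
def speculative_decode(prompt, draft_next, target_argmax, k=4, max_new=64):
    """Minimal greedy speculative-decoding loop.

    draft_next(tokens)    -> next token id from the small draft model
    target_argmax(tokens) -> the target model's argmax token id at every
                             position of `tokens` (one forward pass)
    Both callables are hypothetical stand-ins for real model calls.
    """
    tokens = list(prompt)
    while len(tokens) - len(prompt) < max_new:
        # 1) Draft k tokens autoregressively with the cheap model.
        draft = []
        for _ in range(k):
            draft.append(draft_next(tokens + draft))

        # 2) A single target pass scores the whole drafted block at once.
        preds = target_argmax(tokens + draft)

        # 3) Keep the longest prefix where the target agrees with the draft.
        n_accept = 0
        for i, tok in enumerate(draft):
            if preds[len(tokens) + i - 1] == tok:
                n_accept += 1
            else:
                break
        tokens += draft[:n_accept]

        # 4) Append one token from the target itself (its correction on a
        #    mismatch, its next token on full acceptance), guaranteeing
        #    progress every iteration.
        tokens.append(preds[len(tokens) - 1])
    return tokens
```

With greedy acceptance the output is identical to what the target model would produce on its own; speed-ups like the quoted 2x come from verifying k drafted tokens in one target pass instead of k sequential passes, provided the draft's acceptance rate is high.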
San Francisco, AI capital of the world, is an economic laggard
Generative AI Vegetarianism
AI should elevate your thinking, not replace it
An AI agent deleted our production database. The agent's confession is below
Microsoft Presents "TRELLIS.2": An Open-Source, 4B-Parameter, Image-To-3D Model Producing Up To 1536³ PBR Textured Assets, Built On Native 3D VAEs With 16× Spatial Compression, Delivering Efficient, Scalable, High-Fidelity Asset Generation.
TRELLIS.2 is a state-of-the-art large 3D generative model (4B parameters) designed for high-fidelity image-to-3D generation. It leverages a novel "field-free" sparse voxel structure termed O-Voxel to reconstruct and generate arbitrary 3D assets with complex topologies, sharp features, and full PBR materials.
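Some quick arithmetic on what that 16× spatial compression buys at the advertised resolution. Reading the factor as per-axis is an assumption on our part (typical for 3D VAEs), not something the blurb states outright:

```python
# Derived from the figures in the blurb above. Interpreting "16x spatial
# compression" as per-axis is an assumption (typical for 3D VAEs).
res = 1536                       # output grid resolution per axis
factor = 16                      # assumed per-axis compression factor

dense_voxels = res ** 3          # 3,623,878,656 cells at full resolution
latent_res = res // factor       # 96 cells per axis in the latent grid
latent_cells = latent_res ** 3   # 884,736 latent cells

print(f"dense grid : {dense_voxels:,} voxels")
print(f"latent grid: {latent_res}^3 = {latent_cells:,} cells")
print(f"reduction  : {dense_voxels // latent_cells:,}x")   # 4,096x
```

Under that reading, the generative model works in a latent grid roughly 4,096× smaller than the dense output, which is what makes 1536³ assets tractable.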
Do the "*Claude-4.6-Opus-Reasoning-Distilled" really bring something new to the original models?
No offense to the fine-tune providers, just curious. IMO the original models were already trained on a massive amount of high-quality data, so why bother with this fine-tune? Just to make the model's language style sound like Claude? Or does it really reshape the chain of thought?
OpenAI ends Microsoft legal peril over its $50B Amazon deal
OpenAI has won major concessions from its largest shareholder, Microsoft, that will allow it to sell products on AWS, while Microsoft gets more cash in a revenue-share agreement.