Google says 75 percent of its new code is now written by AI
75 percent of new code at Google is now generated by AI and then reviewed by human developers, the company says.
GPT-5.5 System Card
Deepseek v4 people
Ask HN: Am I getting old, or is working with AI juniors becoming a nightmare?
S. Korea police arrest man over AI image of runaway wolf that misled authorities
DeepSeek-v4 has a comical 384K max output capability
I was shocked when I saw that spec, so I immediately went to the website and asked it to make a comprehensive single-HTML web OS, and it indeed generated a single 100KB HTML file for me... I'm speechless.
DS4-Flash vs Qwen3.6
DeepSeek V4 has been released
HuggingFace: https://huggingface.co/collections/deepseek-ai/deepseek-v4
Deepseek V4 Flash and Non-Flash Out on HuggingFace
https://huggingface.co/collections/deepseek-ai/deepseek-v4
Show HN: How LLMs Work – Interactive visual guide based on Karpathy's lecture
An update on recent Claude Code quality reports
Anthropic
MeshCore development team splits over trademark dispute and AI-generated code
Big model feel with GPT 5.5
People are bashing 5.5 left and right, mostly because the benchmark improvements were lower than expected, and probably also because of the hype around this model. But honestly, this model **FEELS** different. It feels more intuitive and is better at covering the kinds of points and arguments that a
“This isn’t X, this is Y” needs to die
All models spam this exact phrase liberally. Time to train it out. That is all.
r/LocalLLaMa Rule Updates
As the sub has grown to *over 1M weekly visitors* (and as AI-based tools have gotten better), we've seen a marked increase in slop, spam, etc. This has been on the mod team's mind for a while, and there have been many threads started by users on this topic garnering lots of upvotes/comments. We're th
Qwen3.6 35B-A3B is quite useful on 780m iGPU (llama.cpp,vulkan)
I have a ThinkPad T14 Gen 5 (8840U, **Radeon 780M**, 64GB DDR5 5600 MT/s). Tried out the recent Qwen MoE release, and pp/tg speed is good on Vulkan (250+ pp, 20 tg):

    ~/dev/llama.cpp master* ❯ ./build-vulkan/bin/llama-bench \
        -hf AesSedai/Qwen3.6-35B-A3B-GGUF:Q6_K \
        -
DeepSeek V4 Benchmarks!
Qwen 3.6 27B Makes Huge Gains in Agency on Artificial Analysis - Ties with Sonnet 4.6
It is crazy that Qwen3.6 27B now matches Sonnet 4.6 on AA's Agentic Index, overtaking Gemini 3.1 Pro Preview, GPT 5.2 and 5.3 as well as MiniMax 2.7. It made gains across all three indices but the way the Coding Index works, I don't think the gains are as apparent as they should be. The Coding Index
Compared Qwen 3.6 35B with Qwen 3.6 27B on coding primitives
MacBook Pro M5 Max, 64GB. Qwen 3.6 35B: 72 TPS. Qwen 3.6 27B: 18 TPS. Tested coding primitives. The 27B model thinks more, but the result is more precise and correct. The 35B model handled the task worse, but did it faster. What's your experience? Prompt: Write a single HTML file with a ful
OpenCode or ClaudeCode for Qwen3.5 27B
I'm tired of copy-and-pasting code. What should I try, and why? Which is faster / easier to install? Which is easier to use? Which has fewer bugs? OpenCode or ClaudeCode with Qwen3.5/3.6 27B on Linux?