Big · 2 sources · last seen 10h ago · first seen 10h ago

Why isn’t LLM reasoning done in vector space instead of natural language?

**Why don’t LLMs use explicit vector-based reasoning instead of language-based chain-of-thought? What would happen if they did?** Most LLM reasoning we see is expressed through language: step-by-step text, explanations, chain-of-thought style outputs, etc. But internally, models already operate on
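The contrast the question draws can be made concrete with a toy sketch (pure NumPy, not any real model; all names and dimensions here are made up for illustration). Chain-of-thought reasoning forces the model to collapse its hidden state into a discrete token at every step and re-embed it, while latent-space reasoning would feed the continuous hidden state straight back in:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": one nonlinear step mapping a hidden state to the next.
d_model, vocab = 8, 5
W = rng.normal(size=(d_model, d_model)) / np.sqrt(d_model)  # recurrence
U = rng.normal(size=(vocab, d_model))                       # unembedding
E = rng.normal(size=(vocab, d_model))                       # embedding table

def step(h):
    return np.tanh(W @ h)

def reason_in_tokens(h, n_steps):
    # Chain-of-thought style: after every step, collapse the hidden
    # state to a discrete token (argmax) and re-embed that token.
    for _ in range(n_steps):
        h = step(h)
        token = int(np.argmax(U @ h))  # lossy discretization bottleneck
        h = E[token]
    return h

def reason_in_latent(h, n_steps):
    # Vector-space style: feed the continuous hidden state straight
    # back in, skipping the token bottleneck entirely.
    for _ in range(n_steps):
        h = step(h)
    return h

h0 = rng.normal(size=d_model)
h_tok = reason_in_tokens(h0, 4)
h_lat = reason_in_latent(h0, 4)

# The token route can only ever land on one of `vocab` embedding rows;
# the latent route keeps the full continuous state.
print(any(np.allclose(h_tok, e) for e in E))  # True
print(any(np.allclose(h_lat, e) for e in E))  # False
```

The point of the sketch is only the bottleneck: the token path quantizes the state down to `vocab` possible values per step, which is exactly the information loss the thread is asking about.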

Lead: r/LocalLLaMA · Bigness: 69
📡 Coverage: 50 (2 news sources)
🟠 Hacker News: 0
🔴 Reddit: 93 (316 upvotes across 2 subs)
📈 Google Trends: 0
Full methodology: How scoring works
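The page does not publish how these per-channel signals roll up into the headline "Bigness" score, so the weights and the function below are purely hypothetical; this is only a sketch of the kind of weighted combination such a score could use:

```python
# Hypothetical roll-up of per-channel signals into one score.
# The real methodology is not shown on this page; the weights
# here are invented for illustration and do not reproduce 69.
signals = {"coverage": 50, "hacker_news": 0, "reddit": 93, "google_trends": 0}
weights = {"coverage": 0.3, "hacker_news": 0.2, "reddit": 0.4, "google_trends": 0.1}

def bigness(signals, weights):
    # Weighted average of the channel scores, normalized by total weight.
    total = sum(weights.values())
    return round(sum(signals[k] * weights[k] for k in signals) / total)

print(bigness(signals, weights))  # 52
```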

Receipts (all sources)

Why isn’t LLM reasoning done in vector space instead of natural language?
REDDIT · r/LocalLLaMA · 10h ago · ⬆ 254 · 💬 107
score 122


Why isn’t LLM reasoning done in vector space instead of natural language? [D]
REDDIT · r/MachineLearning · 10h ago · ⬆ 62 · 💬 34
score 113
