Why isn’t LLM reasoning done in vector space instead of natural language?
**Why don’t LLMs use explicit vector-based reasoning instead of language-based chain-of-thought? What would happen if they did?** Most LLM reasoning we see is expressed through language: step-by-step text, explanations, chain-of-thought-style outputs, and so on. But internally, models already operate on continuous vector representations: every token is processed as a high-dimensional hidden state, and language only enters the picture when that state is decoded back into discrete tokens.
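A toy sketch of the contrast the question is drawing, assuming a made-up one-step "model" (a random linear map plus a nonlinearity standing in for a transformer step, and a small random embedding table). In the language-style loop, each step's hidden state is quantized to the nearest token and re-embedded, discarding everything the token id doesn't capture; in the latent-style loop, the full continuous state is fed straight back. None of this is a real LLM, it only illustrates where information is lost:

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB, DIM = 8, 4
embed = rng.normal(size=(VOCAB, DIM))       # toy token-embedding table
W = rng.normal(size=(DIM, DIM)) / DIM**0.5  # stand-in for one transformer step

def step(h):
    """One 'reasoning step': a linear map plus a nonlinearity (toy stand-in)."""
    return np.tanh(h @ W)

def decode(h):
    """Snap the hidden state to the most similar token (by dot product)."""
    return int(np.argmax(embed @ h))

h0 = rng.normal(size=DIM)

# Language-style chain of thought: decode to a token, re-embed, continue.
h_lang = h0
for _ in range(3):
    tok = decode(step(h_lang))  # quantize to one of VOCAB discrete symbols
    h_lang = embed[tok]         # everything beyond the token id is lost

# Latent-style reasoning: feed the continuous hidden state straight back.
h_vec = h0
for _ in range(3):
    h_vec = step(h_vec)         # full continuous state carried forward
```

After the loops, `h_lang` is by construction exactly a row of the embedding table (a discrete symbol), while `h_vec` is generally not; that gap is the information a token bottleneck throws away at every step.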