Big · 2 sources · last seen 1h ago · first seen 1d ago
Learning, Fast and Slow: Towards LLMs That Adapt Continually
Large language models (LLMs) are trained for downstream tasks by updating their parameters (e.g., via RL). However, updating parameters forces them to absorb task-specific information, which can result in catastrophic forgetting and loss of plasticity. In contrast, in-context learning with fixed LLM …
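The abstract's contrast is between "slow" learning by updating parameters, which can overwrite earlier skills, and "fast" in-context adaptation, which leaves the weights frozen. A minimal, runnable sketch of the forgetting half, using a toy linear model instead of an LLM (everything here is illustrative, not from the paper):

```python
# Toy illustration (not the paper's method): fine-tuning a tiny linear
# model sequentially on two tasks shows catastrophic forgetting -- the
# loss on task A climbs once the parameters are updated for task B.
import numpy as np

rng = np.random.default_rng(0)

def make_task(true_w):
    """Linear-regression task y = X @ true_w with Gaussian features."""
    X = rng.normal(size=(256, 8))
    return X, X @ true_w

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

def sgd(w, X, y, lr=0.05, steps=200):
    """Plain gradient descent on the squared error, i.e. parameter updates."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(X)
        w = w - lr * grad
    return w

task_a = make_task(rng.normal(size=8))
task_b = make_task(rng.normal(size=8))

w = np.zeros(8)
w = sgd(w, *task_a)                                # learn task A in the weights
print("task A loss after A:", mse(w, *task_a))     # ~0: task A is learned

w = sgd(w, *task_b)                                # keep updating on task B ...
print("task A loss after B:", mse(w, *task_a))     # large: task A is forgotten
print("task B loss after B:", mse(w, *task_b))     # ~0: only the latest task survives
```

In-context learning sidesteps this failure mode by holding the parameters fixed and conditioning on task examples in the prompt, at the cost of carrying those examples at inference time; that trade-off is presumably what the title's "fast and slow" framing alludes to.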
Lead: arXiv · Bigness: 46 · Tags: fast, slow, llms, adapt, continually
📡 Coverage: 50 · 2 news sources
🟠 Hacker News: 24 · 3 pts, 1 comment
🔴 Reddit: 0
📈 Google Trends: 0
Full methodology: How scoring works
Receipts (all sources)
Learning, Fast and Slow: LLMs That Adapt Continually
Hacker News · 1h ago · ▲ 3 · 💬 1 · score 160
Learning, Fast and Slow: Towards LLMs That Adapt Continually
arXiv · 1d ago · score 98