Cluster · 1 source · first seen 3h ago · last seen 3h ago

Using PaddleOCR-VL-1.5 with llama-server for book OCR

I've been running PaddleOCR-VL-1.5 via llama.cpp's server for OCR on book pages. It handles complex layouts, tables, and mixed text/figure pages surprisingly well.

Setup:
- Model: PaddleOCR-VL-1.5-GGUF + mmproj.gguf
- Backend: llama-server (Vulkan on Windows)
- Pipeline: layout detection …
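llama-server exposes an OpenAI-compatible chat-completions endpoint, and vision models served with an mmproj take the page image as a base64 data URL in the message content. A minimal sketch of building such a request for one scanned page (the prompt wording, file name, and PNG mime type are assumptions, not from the post):

```python
import base64


def build_ocr_request(image_path: str,
                      prompt: str = "OCR this page to markdown.") -> dict:
    """Build an OpenAI-style chat payload with one page image attached."""
    # Encode the page image as a base64 data URL, the format
    # OpenAI-compatible endpoints expect for inline images.
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("ascii")
    return {
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
        # Greedy decoding tends to suit OCR, where creative variation is noise.
        "temperature": 0.0,
    }
```

The resulting dict would be POSTed as JSON to the running server, e.g. `http://localhost:8080/v1/chat/completions` (default port, assumed here), with the model and mmproj already loaded via llama-server's `-m` and `--mmproj` flags.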

Lead: r/LocalLLaMA · Bigness: 20 · Tags: paddleocr-vl-1, llama-server, book, ocr
📡 Coverage: 10 (1 news source)
🟠 Hacker News: 0
🔴 Reddit: 43 (24 upvotes across 1 sub)
📈 Google Trends: 0

Receipts (all sources)

Using PaddleOCR-VL-1.5 with llama-server for book OCR
REDDIT · r/LocalLLaMA · 3h ago · ⬆ 24 · 💬 6
score 116
