
Show HN: Ocrbase – pdf → .md/.json document OCR and structured extraction API

99 points | 18 days ago | github.com
sync 18 days ago

This is essentially a (vibe-coded?) wrapper around PaddleOCR: https://github.com/PaddlePaddle/PaddleOCR

The "guts" are here: https://github.com/majcheradam/ocrbase/blob/7706ef79493c47e8...

M4R5H4LL 18 days ago

Most production software is wrappers around existing libraries. The relevant question is whether this wrapper adds operational or usability value, not whether it reimplements OCR. If there are architectural or reliability concerns, it’d be more useful to call those out directly.

tuwtuwtuwtuw 18 days ago

Sure. The self-host guide tells me to enter my GitHub secret, in plain text, in an env file. But it doesn't tell me why I should do that.

Do people actually store their secrets in plain text on the file system in production environments? Just seems a bit wild to me.

adammajcher 18 days ago

well, you can use a secrets manager as well
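
For illustration only (a generic sketch, not ocrbase's actual setup; the secret name is a placeholder and AWS Secrets Manager is just one example of a managed store), the idea is to resolve the secret at startup instead of keeping it in a plain-text .env:

    # Hypothetical sketch: fetch the GitHub secret from AWS Secrets Manager at
    # startup instead of reading it from a plain-text .env file.
    import os
    import boto3

    def load_github_secret() -> str:
        client = boto3.client("secretsmanager")
        # "ocrbase/github-secret" is a placeholder secret name
        resp = client.get_secret_value(SecretId="ocrbase/github-secret")
        return resp["SecretString"]

    # Expose it to the process the same way an env file would, minus the file.
    os.environ["GITHUB_SECRET"] = load_github_secret()

The same pattern works with Vault, GCP Secret Manager, or secrets mounted at runtime by Docker/Kubernetes.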

Oras 18 days ago

Claude is included in the contributors, so the OP didn’t hide it

Tiberium 18 days ago

At this point it feels like HN is becoming more like Reddit: most people upvote before actually checking the repo.

prats226 17 days ago

Instead of markdown -> LLM to get JSON, you can just train a slightly bigger model that you can constrained-decode to give JSON right away. https://huggingface.co/nanonets/Nanonets-OCR2-3B

We recently published a cookbook for constrained decoding here: https://nanonets.com/cookbooks/structured-llm-outputs/
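
Not the cookbook's code, but a minimal sketch of the idea using vLLM's guided decoding (the model name, schema, and page text are placeholders, and the exact API varies by vLLM version): the sampler is restricted so the output must conform to the schema, removing the markdown -> LLM second pass.

    # Constrained decoding: restrict generation to a JSON schema.
    from vllm import LLM, SamplingParams
    from vllm.sampling_params import GuidedDecodingParams

    invoice_schema = {
        "type": "object",
        "properties": {
            "invoice_number": {"type": "string"},
            "total": {"type": "number"},
            "currency": {"type": "string"},
        },
        "required": ["invoice_number", "total"],
    }

    page_text = "INVOICE #1042 ... Total due: 1,250.00 EUR"  # placeholder OCR text

    llm = LLM(model="Qwen/Qwen2.5-3B-Instruct")  # placeholder model
    params = SamplingParams(
        max_tokens=256,
        guided_decoding=GuidedDecodingParams(json=invoice_schema),
    )
    out = llm.generate([f"Extract the invoice fields as JSON:\n{page_text}"], params)
    print(out[0].outputs[0].text)  # parses as schema-valid JSON by construction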

constantinum 18 days ago

What matters most is how well OCR and structured data extraction tools handle documents with high variation at production scale. In real workflows like accounting, every invoice, purchase order, or contract can look different. The extraction system must still work reliably across these variations with minimal ongoing tweaks.

Equally important is how easily you can build a human-in-the-loop review layer on top of the tool. This is needed not only to improve accuracy, but also for compliance—especially in regulated industries like insurance.

Other tools in this space:

LLMWhisperer/Unstract (AGPL)

Reducto

Extend AI

LlamaParse

Docling

binalpatel 18 days ago

This is admittedly dated, but even back in December 2023 GPT-4 with its Vision preview was able to do structured extraction very reliably, and I'd imagine Gemini 3 Flash is much better than it was back then.

https://binal.pub/2023/12/structured-ocr-with-gpt-vision/

Back-of-the-napkin math (which I could be messing up completely), but I think you could process a 100-page PDF for ~$0.50 or less using Gemini 3 Flash?

> 560 input tokens per page * 100 pages = 56,000 tokens = $0.028 input ($0.5/M input tokens)
> ~1000 output tokens per page * 100 pages = 100,000 tokens = $0.30 output ($3/M output tokens)

(https://ai.google.dev/gemini-api/docs/gemini-3#media_resolut...)
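
For what it's worth, the arithmetic comes out to roughly $0.33 per 100-page PDF, using the token counts and per-million prices assumed above (the commenter's assumptions, not official pricing):

    # Sanity check of the back-of-napkin estimate above.
    pages = 100
    input_cost = 560 * pages / 1e6 * 0.50    # ~560 input tokens/page at $0.5/M
    output_cost = 1000 * pages / 1e6 * 3.00  # ~1000 output tokens/page at $3/M
    print(f"${input_cost + output_cost:.3f} per {pages}-page PDF")  # ~$0.328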

adammajcher 18 days ago

sure, for small projects I do recommend gemini 3 flash to friends. ocrbase is aimed more at scale and self-hosting: fixed infra cost, high throughput, and no data leaving your environment. at large volumes, that tradeoff starts to matter more than per-100-page pricing

v3ss0n 18 days ago

How is this better than Surya/Marker or kreuzberg? https://github.com/kreuzberg-dev/kreuzberg

jadbox 18 days ago

Sounds like someone needs to run their own test cases and report back on which solution does a better job...

kspacewalk2 18 days ago

Let me fire up Claude Code.

sixtyj 18 days ago

Let me fire up Tesseract.

https://github.com/tesseract-ocr

Jimmc414 18 days ago

+1

hersko 18 days ago

I have a flow where I extract text from a PDF with pdf-parse and then feed that to an AI for data extraction. If that fails, I convert it to a PNG and send the image for data extraction. This works very well and would presumably be far cheaper, as I'm generally sending text to the model instead of relying on images. Isn't just sending the images for OCR significantly more expensive?
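
A rough Python sketch of that flow, with pypdf standing in for pdf-parse, a hypothetical MIN_CHARS threshold for deciding when the text layer is unusable, and the two extract_from_* functions as stand-ins for whatever model calls you actually make:

    # Text-first extraction with an image fallback, as described above.
    from pypdf import PdfReader
    from pdf2image import convert_from_path  # requires poppler installed

    MIN_CHARS = 200  # below this, assume a scan with no usable text layer

    def extract_from_text(text: str) -> dict:
        ...  # hypothetical: send plain text + a JSON prompt to the model

    def extract_from_image(images) -> dict:
        ...  # hypothetical: send rendered page images to a vision model

    def extract(pdf_path: str) -> dict:
        text = "\n".join(p.extract_text() or "" for p in PdfReader(pdf_path).pages)
        if len(text.strip()) >= MIN_CHARS:
            return extract_from_text(text)        # cheap path: text tokens
        images = convert_from_path(pdf_path, dpi=200)
        return extract_from_image(images)         # expensive path: image tokens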

trollbridge 18 days ago

I always render an image and OCR that, so I don't get odd problems from invisible text; it also avoids being affected by anything inserted for SEO.

saaaaaam 18 days ago

There was an interesting discussion on here a couple of months back about images vs text, driven by this article: https://www.seangoedecke.com/text-tokens-as-image-tokens/

Discussion is here: https://news.ycombinator.com/item?id=45652952

mimim1mi 18 days ago

By definition, OCR means optical character recognition. Which extraction methodology can work depends on the contents of the PDF. Often the available PDFs are just scans of printed documents or handwritten notes. If machine-readable text is available, your approach is great.

unrahul 18 days ago

I have seen this flow in what people at some startups call "Agentic OCR": it's essentially a coded control flow that tries pdf-parse (or a similarly inexpensive approach) first and, if the result falls below a quality threshold, falls back to screenshot-to-text extraction.

sgc 18 days ago

How does this compare to dots.ocr? I got fantastic results when I tested dots.

https://github.com/rednote-hilab/dots.ocr

mjrpes 18 days ago

Ocrbase is CUDA-only, while dots.ocr uses vLLM, so it should support ROCm/AMD cards?

actionfromafar 18 days ago

How about CPU?

jasonni 17 days ago

dots.ocr requires a considerable amount of computational resources. If you have a Mac with an ARM CPU (M series), you can try my dots.ocr.runner (https://github.com/jason-ni/app.dots.ocr.runner).

There is a pipeline solution with multiple small, task-specific models that can run on CPU only: https://github.com/RapidAI/RapidOCR

sgc 17 days ago

Jason, your runner looks interesting. I am using Debian Linux on my laptop with an Intel CPU and an NVIDIA GPU (proprietary NVIDIA CUDA drivers). Should I be able to get it working? What is your speed per page at this point? Thank you.

cess11 18 days ago

Why is 12GB+ VRAM a requirement? The OCR model looks kind of small, https://huggingface.co/PaddlePaddle/PaddleOCR-VL/tree/main, so I'm assuming it's needed for some processing afterwards.

adammajcher 18 days ago

fixed

cess11 18 days ago

OK, thanks, so it runs on a couple GB of CUDA?

fmirkowski 18 days ago

Having worked with PaddleOCR, Tesseract, and many other OCR tools before, this is still one of the best and smoothest OCR experiences I've ever had; deployed in minutes.

mechazawa 18 days ago

Is only Bun supported, or also regular Node?

adammajcher 18 days ago

it's bun first because of performance

mechazawa 16 days ago

Performance for a tool like this isn't really a huge priority IMHO. Libraries should prioritize compatibility over performance unless performance is the stated goal.

woocash99 17 days ago

Awesome idea!

woocash99 17 days ago

Very useful!