The following sample makes a searchable PDF by adding invisible text to an image using OCR.
```javascript
async function main() {
  const doc = await PDFNet.PDFDoc.create();
  // Run OCR on the image without options
  await PDFNet.OCRModule.imageToPDF(doc, image_path);
}
PDFNet.runWithCleanup(main);
```
Convert images to PDF with searchable/selectable text
Full code sample which shows how to use the Apryse OCR module on scanned documents in multiple languages. The OCR module can make searchable PDFs and extract scanned text for further indexing.
The following sample makes a searchable PDF by adding invisible text to an image-based PDF, such as a scanned document, using OCR.
```javascript
async function main() {
  const doc = await PDFNet.PDFDoc.createFromFilePath(filename);
  // Set English as the language of choice
  const opts = new PDFNet.OCRModule.OCROptions();
  opts.addLang("eng");
  // Run OCR on the PDF with options
  await PDFNet.OCRModule.processPDF(doc, opts);
}
PDFNet.runWithCleanup(main);
```
Add searchable/selectable text to an image based PDF like a scanned document
If we want to apply raw OCR output to the input document, we can either call OCRModule::ImageToPDF (if the input file is an image) or OCRModule::ProcessPDF (for a PDF).
However, some post-processing is often beneficial, e.g., comparing results against white/black lists. To this end, we can first extract the text and corresponding metadata as either JSON or XML, and then re-apply the processed results to the input document.
```javascript
async function main() {
  // Setup empty destination doc
  const doc = await PDFNet.PDFDoc.create();
  const image_path = "path/to/image";
  const opts = new PDFNet.OCRModule.OCROptions();
  // Extract OCR results as JSON
  const json = await PDFNet.OCRModule.getOCRJsonFromImage(doc, image_path, opts);
  // Post-processing step (whatever it might be)
  // Re-apply results
  await PDFNet.OCRModule.applyOCRJsonToPDF(doc, json);
}
PDFNet.runWithCleanup(main);
```
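The post-processing step above is left open. As a minimal sketch in plain JavaScript (independent of PDFNet; the Page/Para/Line/Word nesting follows the output format described below, while the function name and word list are hypothetical), one could filter blacklisted words out of the OCR JSON before re-applying it:

```javascript
// Hypothetical post-processing: remove blacklisted words from the OCR JSON
// string before re-applying it with applyOCRJsonToPDF. Assumes the
// Page/Para/Line/Word nesting described in this guide.
function stripBlacklistedWords(ocrJson, blacklist) {
  const result = JSON.parse(ocrJson);
  const banned = new Set(blacklist.map((w) => w.toLowerCase()));
  for (const page of result.Page || []) {
    for (const para of page.Para || []) {
      for (const line of para.Line || []) {
        // Keep only words whose text is not on the blacklist
        line.Word = (line.Word || []).filter(
          (word) => !banned.has(word.text.toLowerCase())
        );
      }
    }
  }
  return JSON.stringify(result);
}
```

The filtered string returned here could then be passed to applyOCRJsonToPDF in place of the raw OCR output.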
OCR output consists of nested arrays: an array of pages, each containing an array of paragraphs, each containing an array of lines, each containing an array of words. Pages have additional metadata:
| Attribute | Value | Description |
|---|---|---|
| num | page number | |
| dpi | document resolution (needed to correctly scale the coordinates from points to pixels) | |
| origin | TopLeft | coordinate system has origin at the top left corner (default) |
| | BottomLeft | coordinate system has origin at the bottom left corner (i.e., PDF page coordinate system) |
Then each word in the OCR output has the following:
| Attribute | Value | Description |
|---|---|---|
| x | bounding box lower left corner x coordinate | |
| y | bounding box lower left corner y coordinate | |
| length | length of bounding box | |
| font-size | text's font size | |
| text | text output | |
| orientation | L | 270 degrees clockwise rotation |
| | R | 90 degrees clockwise rotation |
| | D | 180 degrees clockwise rotation |
| | U | 0 degrees clockwise rotation |
Finally, each line has an optional box property consisting of 4 values having the same interpretation as pdftron::PDF::Rect.
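To illustrate how the page metadata interacts with word coordinates, here is a small plain-JavaScript sketch (names are illustrative; it assumes word coordinates are reported in pixels at the page's dpi, with 72 points per inch) that converts a word's bounding-box origin into PDF points:

```javascript
// Convert a word's bounding-box origin from OCR pixel coordinates into PDF
// points (72 points per inch). When the page uses the default TopLeft origin,
// the y coordinate is flipped, since PDF pages use a bottom-left origin.
// pageHeightPx is the page height in pixels; all names here are illustrative.
function toPdfPoints(word, page, pageHeightPx) {
  const scale = 72 / page.dpi; // pixels -> points
  const y = page.origin === "TopLeft" ? pageHeightPx - word.y : word.y;
  return {
    x: word.x * scale,
    y: y * scale,
    width: word.length * scale,
  };
}
```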
Below is a sample of the JSON output produced by the OCR module.
```json
{
  "Page": [
    {
      "Para": [
        {
          "Line": [
            {
              "Word": [
                {
                  "font-size": 27,
                  "length": 64,
                  "orientation": "U",
                  "text": "Hello",
                  "x": 273,
                  "y": 265
                }
              ],
              "box": [273, 265, 64, 29]
            }
          ]
        }
      ],
      "num": 1,
      "dpi": 96,
      "origin": "BottomLeft"
    }
  ]
}
```
The API can also be used to apply OCR XML/JSON generated by other OCR engines. The expected structures for input JSON and XML, respectively, are:
```json
{
  "Page": [
    {
      "Word": [
        {
          "font-size": 12,
          "length": 43,
          "text": "ABC",
          "x": 321,
          "y": 141
        }
      ],
      "num": 1,
      "dpi": 96,
      "origin": "TopLeft"
    }
  ]
}
```
```xml
<Doc>
  <Page num="1" origin="TopLeft" dpi="96">
    <Word font-size="12" x="321" y="141" length="43">ABC</Word>
  </Page>
</Doc>
```
Note that the input OCR structure is simplified: we expect an array of Page objects, each consisting of a Word array.
Each Word is described by its text content and 4 typographic point values (i.e., font-size="12" x="321" y="141" length="43" in the example above), which are needed to construct the bounding box for placing the text on the page.
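As a plain-JavaScript sketch of feeding results from a third-party engine into this API (the engineWords field names are hypothetical; only the output shape follows the JSON structure above), one might assemble the expected structure like so:

```javascript
// Illustrative: convert word results from some other OCR engine into the
// simplified Page/Word structure this guide describes. The input field names
// (fontSize, widthPts, left, baseline) are hypothetical placeholders.
function buildOcrJson(engineWords, pageNum, dpi) {
  return JSON.stringify({
    Page: [
      {
        Word: engineWords.map((w) => ({
          "font-size": w.fontSize,
          length: w.widthPts,
          text: w.text,
          x: w.left,
          y: w.baseline,
        })),
        num: pageNum,
        dpi: dpi,
        origin: "TopLeft",
      },
    ],
  });
}
```

The resulting string could then be passed to applyOCRJsonToPDF.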
We use the pdftron.PDF.OCROptions convenience class to pass OCR parameters.
We can call pdftron.PDF.OCROptions.AddLang to pick a target language.
If no language option is set, English is assumed.
The OCR module binary currently ships with 6 built-in languages.
Additional trained language files can be placed in the resource search path (which can be registered using PDFNet::AddResourceSearchPath).
Afterwards, they can be referred to by their file prefix.
Multiple languages can be specified, although using more than 3 is not recommended.
```javascript
async function main() {
  // Add French, Spanish and default English to target languages
  const opts = new PDFNet.OCRModule.OCROptions();
  opts.addLang("fra");
  opts.addLang("spa");
}
PDFNet.runWithCleanup(main);
```
When processing documents with a priori known layouts, we can enhance output quality by either specifying regions that OCR should ignore via OCROptions::AddIgnoreZonesForPage, or listing exclusive regions to process via OCROptions::AddTextZonesForPage. Both zone options act as stencils: for ignore zones we white out the area inside the supplied rectangular regions before processing, and for text zones we white out the areas outside the supplied regions. The options store an array of RectCollection, where the index into the array corresponds to the relevant page number.
OCROptions::AddIgnoreZonesForPage can also be used to skip a page entirely by setting the ignore zone equal to the page's media box.
```javascript
async function main() {
  const opts = new PDFNet.OCRModule.OCROptions();
  // Optionally specify page zones for OCR extraction in a multipage document
  let page_zones = [];
  page_zones.push(new PDFNet.Rect(900, 2384, 1236, 2480));
  page_zones.push(new PDFNet.Rect(948, 1288, 1672, 1476));
  // OCR will only process the two specified zones on the first page
  opts.addTextZonesForPage(page_zones, 1);
  // Reset zone container
  page_zones = [];
  page_zones.push(new PDFNet.Rect(428, 1484, 1784, 2344));
  // OCR will only process one specified zone on the second page
  opts.addTextZonesForPage(page_zones, 2);
}
PDFNet.runWithCleanup(main);
```
Users can also manually set the input image resolution; tweaking it often leads to better results in practice.
```javascript
// Manually override DPI
opts.addDPI(300);
```