The following simple example shows how one might include document context with a query to an LLM, leveraging the information about document structure contained in the PDF. The process can be broken down into a few steps:
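A minimal sketch of this approach is shown below. The PDF text extraction and the LLM call are stubbed out, and the prompt wording and placeholder document text are illustrative assumptions, not part of any specific library's API:

```python
# Sketch: combine text extracted from a document with a user question,
# producing a single prompt string to send to an LLM.
# In practice, the document text would come from a PDF library's
# extraction step, and the prompt would be sent via an LLM client.

def build_prompt(document_text: str, question: str) -> str:
    """Wrap document context and a question into one prompt string."""
    return (
        "Answer the question using only the document below.\n\n"
        f"--- DOCUMENT ---\n{document_text}\n--- END DOCUMENT ---\n\n"
        f"Question: {question}"
    )

# Stand-in for text extracted from a PDF (hypothetical content).
doc_text = "Section 1: Widgets. A widget weighs 3 kg."
prompt = build_prompt(doc_text, "How much does a widget weigh?")
print(prompt)
```

The key idea is that the document text travels inside the prompt itself, so the model can ground its answer in the supplied context rather than its training data.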
To run the example, use the following command (with your virtual environment active, if using):
You should see some text indicating progress, followed by a question and answer about the document at the end. LLMs aren't guaranteed to produce identical output between runs, but you should see something similar to the following:
In this section, we introduce the concept of Retrieval Augmented Generation (RAG), and show how you can break down larger documents into searchable chunks to use with your queries.
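A simple sketch of the chunking step is shown below, together with a naive word-overlap retrieval function. The chunk size, overlap values, and scoring method are illustrative assumptions; a real RAG pipeline would typically use embeddings and a vector store for retrieval:

```python
# Sketch: split a long document into overlapping chunks, then pick the
# chunk most relevant to a query (here scored by shared words, for
# illustration only).

def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into chunks of roughly chunk_size characters,
    with each chunk overlapping the previous one by `overlap` characters."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks

def top_chunk(chunks: list[str], query: str) -> str:
    """Return the chunk sharing the most words with the query (naive retrieval)."""
    query_words = set(query.lower().split())
    return max(chunks, key=lambda c: len(query_words & set(c.lower().split())))
```

At query time, only the retrieved chunk (rather than the whole document) is included in the prompt, which keeps the context small enough to fit the model's input limits.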