Streamlining Decision-making Using Language Models

Multi-step multi-PDF Q&A system with LLMs and Langchain

Yes, this is yet another LLM tutorial. Hopefully, slightly more useful

Alex Honchar · Published in Neurons Lab · 5 min read · Jun 10, 2023


More details about the project are in the recent Honchar AI episode

Hello folks! The topic of this particular blog post is not unique at the moment of writing. There are already many YouTube videos, articles, and Twitter threads describing this idea of talking to your PDFs. There are ChatGPT plugins that can do it, and there is Langchain, a library that lets you do it as well; Langchain is exactly the library we are going to use today.

Still, I think this methodology and approach are worth describing once again, especially because prompting and asking alone are not enough to solve the problem of chatting with your PDFs properly. There are a couple of important intermediate steps that can make your solution much better, and that is what we will focus on today in this blog.

Link to the GitHub repository with the source code and the Streamlit app.

The challenge: reading insurance proposals

Typical PDFs with insurance plans — still analyzing them manually?

Picking the right insurance plan for your team means going through countless PDFs, organizing the information, and ultimately making an informed choice. It’s a time-consuming task that involves plowing through dozens (and sometimes hundreds) of pages and documents. So, here’s the big question: Can we automate this workload? You bet we can! But unfortunately, just throwing thousands of docs at ChatGPT is not going to cut it. At least not today.

Intermediate steps are key

Asking questions and prompting are important, but they are not enough to crack this challenge. We need to add some extra steps to truly ace it. One often overlooked step is generating an intermediate summary before we ask the final question. Why? Well, imagine dealing with tons of documents: it would be tough for any algorithm to maintain context accurately throughout the process.

And let’s not forget about the limits of tools like Langchain, which have to contend with the token limits in those LLMs. To tackle these hurdles head-on, we can proactively whip up answers to specific questions related to the insurance documents. By stitching together these answers, we create a summary that captures the essence of thousands of documents. And the Langchain tutorial (level 4 here) actually recommends this!

Standard toolkit: LLMs + Langchain

1. Vectorizing

To keep things simple, we’ll roll with the OpenAI GPT model, combined with the Langchain library. These powerhouses allow us to tap into the vast potential of language models while making the most of Langchain’s extended context capabilities. By vectorizing those PDFs with a touch of overlap, we ensure we don’t lose any important context along the way:

from langchain.document_loaders import TextLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma


class DatasetVectorizer:
    """A class for vectorizing datasets."""

    def vectorize(self, text_file_paths, chunk_size=1000, chunk_overlap=500, openai_key=""):
        # Load every text file into Langchain documents
        documents = []
        for text_file_path in text_file_paths:
            doc_loader = TextLoader(text_file_path)
            documents.extend(doc_loader.load())
        # Split into overlapping chunks so context is not lost at chunk borders
        text_splitter = RecursiveCharacterTextSplitter(chunk_overlap=chunk_overlap, chunk_size=chunk_size)
        texts = text_splitter.split_documents(documents)
        # Embed the chunks and index them in a Chroma vector store
        embeddings = OpenAIEmbeddings(openai_api_key=openai_key)
        docsearch = Chroma.from_documents(texts, embeddings)
        return documents, texts, docsearch
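For reference, here is a minimal sketch of how this class could be called; the file names and the key below are placeholders I made up, not files from the repository:

# Hypothetical usage: file paths and the API key are placeholders
vectorizer = DatasetVectorizer()
documents, texts, docsearch = vectorizer.vectorize(
    ["proposal_company_a.txt", "proposal_company_b.txt"],
    chunk_size=1000,
    chunk_overlap=500,
    openai_key="YOUR_OPENAI_KEY",
)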

2. Hand-crafted summary

Now, let’s talk about that game-changing intermediate summary. To make it work well, we’ll prep a set of general questions that home in on the insurance documents. The important questions are about deductibles, coverage details, compensation amounts, hospital choices, family coverage, going international, dental care, and so on. By answering these questions and knitting the responses together, we’ll create a top-notch summary that captures the essence of each document.

QUESTIONS = [
    "How good are the deductibles?",
    "How is the preventive care coverage?",
    "How does this plan fit remote workers in the US and abroad?",
    "What is the maximum amount of money that can be compensated?",
    "Can I go to any hospital of my choice?",
    "Are there any limitations that won't allow me to use the insurance?",
    "Does it cover the family members of the applicant?",
    "What are the healthcare procedures that are not covered by the insurance?",
    "Can I use the insurance for dental care?",
    "Can I use the insurance in other countries?",
]
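The repository contains the exact wiring, but as a rough sketch, answering each of these questions against the docsearch index from step 1 and stitching the answers together could look something like this (RetrievalQA and the variable names here are my assumptions, not necessarily what the repo uses):

from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI

# Assumes `docsearch` comes from DatasetVectorizer.vectorize() above
llm = ChatOpenAI(temperature=0, openai_api_key="YOUR_OPENAI_KEY")
qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=docsearch.as_retriever(),
)

answers = []
for question in QUESTIONS:
    answer = qa_chain.run(question)
    answers.append(f"Q: {question}\nA: {answer}")

# The stitched answers become the intermediate summary used in step 3
summary_of_answers = "\n\n".join(answers)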

3. Asking the final question

With the intermediate summary in our back pocket, it’s time to dive into the decision-making process. This is where we finally get what we came for! We can now fire off a specific set of questions based on our criteria for selecting insurance proposals. Let’s compare those proposals based on coverage, deductibles, specific requirements, and more. By posing these questions to the summary, we’ll extract the juiciest bits of info and make decisions like rock stars!

from langchain.prompts import PromptTemplate

template = """
I want you to act as an expert in insurance policies. I have asked two companies about their insurance policies and here are their answers:
{summary_of_answers}
I am looking for insurance for a full-remote consulting company with 100 employees. I want you to tell me which company is better and why.
Give me a rating (x out of 10) for the following categories for each company separately with a short explanation (10 words max) for each category:
1. Coverage of different health procedures
2. Flexibility for remote workers abroad
3. Price and compensation
Your answer:
"""

prompt = PromptTemplate(
    input_variables=["summary_of_answers"],
    template=template,
)
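To actually get the verdict, the prompt has to be fed to a model together with the intermediate summary. A minimal sketch, assuming the summary_of_answers string built in step 2 (the model choice and temperature are my assumptions):

from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI

# Hypothetical final call; swap in whatever model the repo uses
llm = ChatOpenAI(temperature=0, openai_api_key="YOUR_OPENAI_KEY")
chain = LLMChain(llm=llm, prompt=prompt)
verdict = chain.run(summary_of_answers=summary_of_answers)
print(verdict)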

Streamlit app to make it simple

Streamlit app with interactive UI

Now, here’s the icing on the cake. I have prepared a user-friendly interface using the Streamlit library. Even if you’re not a tech wizard, you can effortlessly navigate the application and drop in your burning questions and criteria.
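The real app lives in the linked repository; just to give an idea of its shape, a stripped-down Streamlit skeleton might look like this (the widget labels and the flow are illustrative assumptions, not the actual app code):

import streamlit as st

st.title("Insurance proposal comparison")

# Illustrative skeleton only; the app in the repo may be wired differently
uploaded_files = st.file_uploader("Upload insurance proposals (as text files)", accept_multiple_files=True)
criteria = st.text_area("Describe your company and selection criteria")

if st.button("Compare") and uploaded_files and criteria:
    # In the full app: save the uploads, vectorize them (step 1),
    # build the intermediate summary (step 2), and ask the final
    # question with the user's criteria in the prompt (step 3).
    st.info("Running the analysis...")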

Conclusions

We have revisited the capabilities of language models such as OpenAI GPT, combined with Langchain, to generate comprehensive summaries and make well-informed decisions based on our criteria. The inclusion of intermediate steps, the utilization of the Streamlit application, and the introduction of automation streamline the entire process, making it accessible and efficient for all stakeholders involved. Now it’s your turn to build your own app on top of it! Let me know if you have any questions or need support during development.
