
How-to Guides

Here you’ll find short answers to “How do I…?” types of questions. These how-to guides don’t cover topics in depth; you’ll find that material in the Tutorials and the API Reference. However, these guides will help you quickly accomplish common tasks.

Core Functionality​

This section covers functionality that is core to using LangChain.

LangChain Expression Language (LCEL)​

LangChain Expression Language is a way to create arbitrary custom chains.
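
The core idea behind LCEL is composing steps into a chain with the `|` operator. Below is a minimal stdlib-only sketch of that composition pattern; the `Step` class and `fake_model` are hypothetical stand-ins, not the actual LangChain `Runnable` implementation.

```python
# Minimal sketch of the chaining idea: each step wraps a function, and
# `|` composes two steps so the output of one feeds into the next.
class Step:
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Compose: run self first, then pass its output to other.
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

# A tiny chain: format a prompt, then run a fake "model" on it.
prompt = Step(lambda topic: f"Tell me a joke about {topic}")
fake_model = Step(lambda text: text + " ... why did it cross the road?")
chain = prompt | fake_model

print(chain.invoke("bears"))
```

The real LCEL interface works at a similar level: components expose `invoke`, and `|` wires them together into a new runnable.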

Components​

These are the core building blocks you can use when building applications.

Prompt Templates​

Prompt Templates are responsible for formatting user input into a format that can be passed to a language model.
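
As a toy illustration of the formatting step (using only the standard library; LangChain's own `PromptTemplate` class offers a richer version of the same idea):

```python
from string import Template

# A template with placeholders that get filled in from user input.
template = Template(
    "Answer the question based on the context.\n"
    "Context: $context\nQuestion: $question"
)

prompt = template.substitute(
    context="LangChain is a framework for LLM apps.",
    question="What is LangChain?",
)
print(prompt)
```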

Example Selectors​

Example Selectors are responsible for selecting the correct few-shot examples to pass to the prompt.
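
A hedged sketch of the selection idea: rank stored examples by a crude similarity to the input and keep the top `k`. Real selectors often use embeddings or length constraints; the character-overlap metric here is purely illustrative.

```python
examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
    {"input": "energetic", "output": "lethargic"},
]

def select_examples(query, examples, k=1):
    # Toy similarity: number of characters shared with the query.
    def overlap(ex):
        return len(set(query) & set(ex["input"]))
    return sorted(examples, key=overlap, reverse=True)[:k]

print(select_examples("happiness", examples))
```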

Chat Models​

Chat Models are newer forms of language models that take messages in and output a message.

LLMs​

What LangChain calls LLMs are older forms of language models that take a string in and output a string.

Output Parsers​

Output Parsers are responsible for taking the output of an LLM and parsing it into a more structured format.
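
For instance, a parser might turn a comma-separated model reply into a Python list. The class below is a made-up minimal version of that pattern, not a LangChain class:

```python
class CommaListParser:
    """Toy parser: split a model's text reply on commas."""

    def parse(self, text: str) -> list[str]:
        return [part.strip() for part in text.split(",") if part.strip()]

raw_output = "red, green , blue"  # a hypothetical LLM reply
parser = CommaListParser()
print(parser.parse(raw_output))
```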

Document Loaders​

Document Loaders are responsible for loading documents from a variety of sources.

Text Splitters​

Text Splitters take a document and split it into chunks that can be used for retrieval.
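
A minimal fixed-size splitter with overlap shows the basic mechanics; real splitters in LangChain are more careful, preferring to break on separators like paragraphs and sentences before falling back to raw character counts:

```python
def split_text(text, chunk_size=20, overlap=5):
    # Slide a fixed-size window over the text, overlapping each chunk
    # with the previous one so context isn't lost at boundaries.
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

doc = "LangChain helps you build applications powered by language models."
for chunk in split_text(doc):
    print(repr(chunk))
```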

Embedding Models​

Embedding Models take a piece of text and create a numerical representation of it.
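
As an illustrative stand-in for what "numerical representation" means, here is a bag-of-words count vector over a tiny fixed vocabulary. Real embedding models are learned neural networks producing dense vectors; this sketch only conveys the shape of the idea.

```python
VOCAB = ["cat", "dog", "fish", "runs", "swims"]

def embed(text):
    # Count how often each vocabulary term appears in the text.
    words = text.lower().split()
    return [words.count(term) for term in VOCAB]

print(embed("the cat runs"))
```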

Vector Stores​

Vector Stores are databases that can efficiently store and retrieve embeddings.
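
Conceptually, a vector store pairs each embedding with its text and answers queries by similarity. The sketch below uses brute-force cosine similarity over a plain list; production stores use approximate nearest-neighbor indexes instead.

```python
import math

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# A "store" as (embedding, text) pairs; the vectors are made up.
store = [
    ([1.0, 0.0], "doc about cats"),
    ([0.0, 1.0], "doc about finance"),
]

def similarity_search(query_vec, store, k=1):
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[0]),
                    reverse=True)
    return [text for _, text in ranked[:k]]

print(similarity_search([0.9, 0.1], store))
```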

Retrievers​

Retrievers are responsible for taking a query and returning relevant documents.
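
A retriever need not be backed by a vector store; anything that maps a query to documents qualifies. This toy keyword retriever makes the interface concrete (illustrative only; LangChain retrievers expose a similar query-in, documents-out shape):

```python
docs = [
    "LangChain supports many vector stores.",
    "Prompt templates format user input.",
    "Retrievers return relevant documents for a query.",
]

def retrieve(query, docs):
    # Keep any document sharing at least one word with the query.
    terms = set(query.lower().split())
    return [d for d in docs if terms & set(d.lower().split())]

print(retrieve("vector stores", docs))
```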

Indexing​

Indexing is the process of keeping your vector store in sync with the underlying data source.
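
One common trick behind syncing is hashing document content so unchanged documents can be skipped on re-index. LangChain's indexing API tracks this with a record manager; the `sync` helper below is a hypothetical simplification of that idea.

```python
import hashlib

def content_hash(text):
    # Stable fingerprint of a document's content.
    return hashlib.sha256(text.encode()).hexdigest()

indexed = {}  # hash -> document, standing in for the vector store's records

def sync(docs, indexed):
    # Add only documents whose content hasn't been seen before.
    added = 0
    for doc in docs:
        h = content_hash(doc)
        if h not in indexed:
            indexed[h] = doc
            added += 1
    return added

print(sync(["doc A", "doc B"], indexed))  # first sync adds both
print(sync(["doc A", "doc B"], indexed))  # re-sync adds nothing
```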

Tools​

LangChain Tools contain a description of the tool (to pass to the language model) as well as the implementation of the function to call.
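
At its simplest, a tool is just a name, a model-facing description, and the callable itself. The dictionary below is a minimal sketch of that bundle (LangChain's `@tool` decorator packages the same three pieces from a function and its docstring):

```python
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

# A "tool" as plain data: the description is what the model sees when
# deciding whether to call the function.
tool = {
    "name": "multiply",
    "description": multiply.__doc__,
    "func": multiply,
}

print(tool["func"](6, 7))
```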

Agents​

note

For in-depth how-to guides for agents, please check out the LangGraph documentation.

Custom​

All LangChain components can easily be extended to support your own versions.

Use Cases​

These guides cover use-case specific details.

Q&A with RAG​

Retrieval Augmented Generation (RAG) is a way to connect LLMs to external sources of data.
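
The RAG flow is: retrieve relevant text, stuff it into a prompt, then ask the model. This conceptual sketch wires those steps together with a naive word-overlap retriever and a fake model function; no real LLM is involved.

```python
docs = {
    "langchain": "LangChain is a framework for building LLM applications.",
    "weather": "It is sunny today.",
}

def retrieve(question):
    # Naive retrieval: pick the doc sharing the most words with the question.
    words = set(question.lower().replace("?", "").split())
    return max(docs.values(),
               key=lambda d: len(words & set(d.lower().split())))

def fake_llm(prompt):
    # Stand-in for a model call; echoes the context line it was given.
    return "Answer based on: " + prompt.splitlines()[1]

question = "What is LangChain?"
context = retrieve(question)
prompt = f"Question: {question}\nContext: {context}"
answer = fake_llm(prompt)
print(answer)
```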

Extraction​

Extraction is when you use LLMs to extract structured information from unstructured text.
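
A common extraction pattern is to ask the model to reply in JSON and then parse that reply into structured data. The canned `model_reply` string below stands in for a real LLM call:

```python
import json

# Hypothetical model reply to a prompt like
# "Extract the person's name and birth year as JSON."
model_reply = '{"name": "Ada Lovelace", "birth_year": 1815}'

record = json.loads(model_reply)  # unstructured text -> structured dict
print(record["name"])
```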

Chatbots​

Chatbots involve using an LLM to have a conversation.

Query Analysis​

Query Analysis is the task of using an LLM to generate a query to send to a retriever.

Q&A over SQL + CSV​

You can use LLMs to do question answering over tabular data.
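
The usual flow is: the LLM translates the natural-language question into SQL, the SQL runs against the database, and the result feeds the answer. In this self-contained sketch the "generated" SQL is hard-coded rather than produced by a model:

```python
import sqlite3

# An in-memory table standing in for the user's data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, salary INTEGER)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [("Ann", 90000), ("Bob", 70000)])

# In a real pipeline an LLM would generate this from a question like
# "Who has the highest salary?"
generated_sql = "SELECT name FROM employees ORDER BY salary DESC LIMIT 1"
result = conn.execute(generated_sql).fetchone()
print(result[0])
```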

Q&A over Graph Databases​

You can use an LLM to do question answering over graph databases.

