---
sidebar_position: 0
sidebar_class_name: hidden
---
|
|
|
|
|
# Tutorials

New to LangChain or LLM app development in general? Read this material to quickly get up and running building your first applications.
|
|
|
|
|
|
|
## Get started

Familiarize yourself with LangChain's open-source components by building simple applications.
|
|
|
If you're looking to get started with [chat models](/docs/integrations/chat/), [vector stores](/docs/integrations/vectorstores/), or other LangChain components from a specific provider, check out our supported [integrations](/docs/integrations/providers/).
|
|
|
- [Chat models and prompts](/docs/tutorials/llm_chain): Build a simple LLM application with [prompt templates](/docs/concepts/prompt_templates) and [chat models](/docs/concepts/chat_models) (see the sketch after this list).
|
- [Semantic search](/docs/tutorials/retrievers): Build a semantic search engine over a PDF with [document loaders](/docs/concepts/document_loaders), [embedding models](/docs/concepts/embedding_models/), and [vector stores](/docs/concepts/vectorstores/). |
|
- [Classification](/docs/tutorials/classification): Classify text into categories or labels using [chat models](/docs/concepts/chat_models) with [structured outputs](/docs/concepts/structured_outputs/). |
|
- [Extraction](/docs/tutorials/extraction): Extract structured data from text and other unstructured media using [chat models](/docs/concepts/chat_models) and [few-shot examples](/docs/concepts/few_shot_prompting/). |
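To give a feel for the first tutorial, here is a minimal sketch of composing a prompt template with a chat model. The provider and model (`langchain-openai` with `gpt-4o-mini`) are illustrative choices, not requirements; any supported chat model integration follows the same pattern, and the sketch assumes the corresponding API key is set in your environment.

```python
# Minimal sketch: a prompt template piped into a chat model.
# Assumes `langchain-openai` is installed and OPENAI_API_KEY is set.
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages([
    ("system", "Translate the following from English into {language}."),
    ("user", "{text}"),
])
model = ChatOpenAI(model="gpt-4o-mini")  # illustrative model choice

# The prompt and model compose into a single runnable chain.
chain = prompt | model
response = chain.invoke({"language": "Italian", "text": "Hello, world!"})
print(response.content)
```

Swapping providers only changes the import and the model name; the prompt and the chain composition stay the same.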
|
|
|
Refer to the [how-to guides](/docs/how_to) for more detail on using all LangChain components. |
|
|
|
|
|
|
|
## Orchestration

Get started using [LangGraph](https://langchain-ai.github.io/langgraph/) to assemble LangChain components into full-featured applications.
|
|
|
- [Chatbots](/docs/tutorials/chatbot): Build a chatbot that incorporates memory. |
|
- [Agents](/docs/tutorials/agents): Build an agent that interacts with external tools (see the sketch after this list).
|
- [Retrieval Augmented Generation (RAG) Part 1](/docs/tutorials/rag): Build an application that uses your own documents to inform its responses. |
|
- [Retrieval Augmented Generation (RAG) Part 2](/docs/tutorials/qa_chat_history): Build a RAG application that incorporates a memory of its user interactions and multi-step retrieval. |
|
- [Question-Answering with SQL](/docs/tutorials/sql_qa): Build a question-answering system that executes SQL queries to inform its responses. |
|
- [Summarization](/docs/tutorials/summarization): Generate summaries of (potentially long) texts. |
|
- [Question-Answering with Graph Databases](/docs/tutorials/graph): Build a question-answering system that queries a graph database to inform its responses. |
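As a flavor of the Chatbots and Agents tutorials, here is a minimal sketch of a tool-calling agent with conversation memory built on LangGraph's prebuilt `create_react_agent`. The tool, model choice, and thread id are illustrative assumptions rather than part of the tutorials.

```python
# Minimal sketch: a LangGraph agent with one tool and in-memory conversation state.
# Assumes `langgraph` and `langchain-openai` are installed and OPENAI_API_KEY is set.
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent

@tool
def word_length(word: str) -> int:
    """Return the number of characters in a word."""
    return len(word)

model = ChatOpenAI(model="gpt-4o-mini")  # illustrative model choice
agent = create_react_agent(model, [word_length], checkpointer=MemorySaver())

# The thread_id keys the checkpointer, so follow-up calls share conversation memory.
config = {"configurable": {"thread_id": "demo-thread"}}
result = agent.invoke(
    {"messages": [("user", "How many letters are in 'LangChain'?")]},
    config,
)
print(result["messages"][-1].content)
```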
|
|
|
|
|
|
|
## LangSmith

LangSmith allows you to closely trace, monitor, and evaluate your LLM application. It seamlessly integrates with LangChain, and you can use it to inspect and debug individual steps of your chains as you build.
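If you want to see tracing in action right away, a minimal sketch looks like this; the API key value is a placeholder, and any LangChain invocation you make afterwards will be traced.

```python
# Minimal sketch: enable LangSmith tracing via environment variables, then run any chain.
# The key below is a placeholder; set your real LangSmith API key.
import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-api-key>"

from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini")  # illustrative model choice
# With tracing enabled, this invocation shows up as a run in your LangSmith project.
model.invoke("Hello, LangSmith!")
```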
|
|
|
LangSmith documentation is hosted on a separate site. |
|
You can peruse [LangSmith tutorials here](https://docs.smith.langchain.com/tutorials/). |
|
|
|
### Evaluation |
|
|
|
LangSmith helps you evaluate the performance of your LLM applications. The tutorial below is a great way to get started: |
|
|
|
- [Evaluate your LLM application](https://docs.smith.langchain.com/tutorials/Developers/evaluation) |
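For orientation before you open that tutorial, here is a minimal sketch of an evaluation run with the LangSmith SDK. The dataset name, target function, and evaluator below are illustrative assumptions; the tutorial walks through the real workflow end to end.

```python
# Minimal sketch: evaluate a target function against a LangSmith dataset.
# Assumes the `langsmith` package is installed, LANGCHAIN_API_KEY is set,
# and a dataset named "my-dataset" (hypothetical) exists in your workspace.
from langsmith import evaluate

def target(inputs: dict) -> dict:
    # Stand-in for a call into your application.
    return {"answer": inputs["question"].upper()}

def exact_match(run, example) -> dict:
    # Score 1 if the application's answer matches the reference answer exactly.
    return {"key": "exact_match", "score": int(run.outputs["answer"] == example.outputs["answer"])}

evaluate(
    target,
    data="my-dataset",
    evaluators=[exact_match],
    experiment_prefix="getting-started",
)
```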
|
|