text,source "--- logos: - /img/customers-logo/discord.svg - /img/customers-logo/johnson-and-johnson.svg - /img/customers-logo/perplexity.svg - /img/customers-logo/mozilla.svg - /img/customers-logo/voiceflow.svg - /img/customers-logo/bosch-digital.svg sitemapExclude: true ---",customers/logo-cards-1.md "--- review: “We looked at all the big options out there right now for vector databases, with our focus on ease of use, performance, pricing, and communication. Qdrant came out on top in each category... ultimately, it wasn't much of a contest.” names: Alex Webb positions: Director of Engineering, CB Insights avatar: src: /img/customers/alex-webb.svg alt: Alex Webb Avatar logo: src: /img/brands/cb-insights.svg alt: Logo sitemapExclude: true --- ",customers/customers-testimonial1.md "--- title: Customers description: Learn how Qdrant powers thousands of top AI solutions that require vector search with unparalleled efficiency, performance and massive-scale data processing. caseStudy: logo: src: /img/customers-case-studies/customer-logo.svg alt: Logo title: Recommendation Engine with Qdrant Vector Database description: Dailymotion leverages Qdrant to optimize its video recommendation engine, managing over 420 million videos and processing 13 million recommendations daily. With this, Dailymotion was able to reduced content processing times from hours to minutes and increased user interactions and click-through rates by more than 3x. link: text: Read Case Study url: /blog/case-study-dailymotion/ image: src: /img/customers-case-studies/case-study.png alt: Preview cases: - id: 0 logo: src: /img/customers-case-studies/visua.svg alt: Visua Logo image: src: /img/customers-case-studies/case-visua.png alt: The hands of a person in a medical gown holding a tablet against the background of a pharmacy shop title: VISUA improves quality control process for computer vision with anomaly detection by 10x. link: text: Read Story url: /blog/case-study-visua/ - id: 1 logo: src: /img/customers-case-studies/dust.svg alt: Dust Logo image: src: /img/customers-case-studies/case-dust.png alt: A man in a jeans shirt is holding a smartphone, only his hands are visible. In the foreground, there is an image of a robot surrounded by chat and sound waves. title: Dust uses Qdrant for RAG, achieving millisecond retrieval, reducing costs by 50%, and boosting scalability. link: text: Read Story url: /blog/dust-and-qdrant/ - id: 2 logo: src: /img/customers-case-studies/iris-agent.svg alt: Logo image: src: /img/customers-case-studies/case-iris-agent.png alt: Hands holding a smartphone, styled smartphone interface visualisation in the foreground. First-person view title: IrisAgent uses Qdrant for RAG to automate support, and improve resolution times, transforming customer service. link: text: Read Story url: /blog/iris-agent-qdrant/ sitemapExclude: true --- ",customers/customers-case-studies.md "--- review: “We LOVE Qdrant! The exceptional engineering, strong business value, and outstanding team behind the product drove our choice. Thank you for your great contribution to the technology community!” names: Kyle Tobin positions: Principal, Cognizant avatar: src: /img/customers/kyle-tobin.png alt: Kyle Tobin Avatar logo: src: /img/brands/cognizant.svg alt: Cognizant Logo sitemapExclude: true --- ",customers/customers-testimonial2.md "--- logos: - /img/customers-logo/gitbook.svg - /img/customers-logo/deloitte.svg - /img/customers-logo/disney.svg sitemapExclude: true ---",customers/logo-cards-3.md "--- title: Vector Space Wall link: url: https://testimonial.to/qdrant/all text: Submit Your Testimonial testimonials: - id: 0 name: Jonathan Eisenzopf position: Chief Strategy and Research Officer at Talkmap avatar: src: /img/customers/jonathan-eisenzopf.svg alt: Avatar text: “With Qdrant, we found the missing piece to develop our own provider independent multimodal generative AI platform on enterprise scale.” - id: 1 name: Angel Luis Almaraz Sánchez position: Full Stack | DevOps avatar: src: /img/customers/angel-luis-almaraz-sanchez.svg alt: Avatar text: Thank you, great work, Qdrant is my favorite option for similarity search. - id: 2 name: Shubham Krishna position: ML Engineer @ ML6 avatar: src: /img/customers/shubham-krishna.svg alt: Avatar text: Go ahead and checkout Qdrant. I plan to build a movie retrieval search where you can ask anything regarding a movie based on the vector embeddings generated by a LLM. It can also be used for getting recommendations. - id: 3 name: Kwok Hing LEON position: Data Science avatar: src: /img/customers/kwok-hing-leon.svg alt: Avatar text: Check out qdrant for improving searches. Bye to non-semantic KM engines. - id: 4 name: Ankur S position: Building avatar: src: /img/customers/ankur-s.svg alt: Avatar text: Quadrant is a great vector database. There is a real sense of thought behind the api! - id: 5 name: Yasin Salimibeni View Yasin Salimibeni’s profile position: AI Evangelist | Generative AI Product Designer | Entrepreneur | Mentor avatar: src: /img/customers/yasin-salimibeni-view-yasin-salimibeni.svg alt: Avatar text: Great work. I just started testing Qdrant Azure and I was impressed by the efficiency and speed. Being deploy-ready on large cloud providers is a great plus. Way to go! - id: 6 name: Marcel Coetzee position: Data and AI Plumber avatar: src: /img/customers/marcel-coetzee.svg alt: Avatar text: Using Qdrant as a blazing fact vector store for a stealth project of mine. It offers fantasic functionality for semantic search ✨ - id: 7 name: Andrew Rove position: Principal Software Engineer avatar: src: /img/customers/andrew-rove.svg alt: Avatar text: We have been using Qdrant in production now for over 6 months to store vectors for cosine similarity search and it is way more stable and faster than our old ElasticSearch vector index.

No merging segments, no red indexes at random times. It just works and was super easy to deploy via docker to our cluster.

It’s faster, cheaper to host, and more stable, and open source to boot! - id: 8 name: Josh Lloyd position: ML Engineer avatar: src: /img/customers/josh-lloyd.svg alt: Avatar text: I'm using Qdrant to search through thousands of documents to find similar text phrases for question answering. Qdrant's awesome filtering allows me to slice along metadata while I'm at it! 🚀 and it's fast ⏩🔥 - id: 9 name: Leonard Püttmann position: data scientist avatar: src: /img/customers/leonard-puttmann.svg alt: Avatar text: Amidst the hype around vector databases, Qdrant is by far my favorite one. It's super fast (written in Rust) and open-source! At Kern AI we use Qdrant for fast document retrieval and to do quick similarity search for text data. - id: 10 name: Stanislas Polu position: Software Engineer & Co-Founder, Dust avatar: src: /img/customers/stanislas-polu.svg alt: Avatar text: Qdrant's the best. By. Far. - id: 11 name: Sivesh Sukumar position: Investor at Balderton avatar: src: /img/customers/sivesh-sukumar.svg alt: Avatar text: We're using Qdrant to help segment and source Europe's next wave of extraordinary companies! - id: 12 name: Saksham Gupta position: AI Governance Machine Learning Engineer avatar: src: /img/customers/saksham-gupta.svg alt: Avatar text: Looking forward to using Qdrant vector similarity search in the clinical trial space! OpenAI Embeddings + Qdrant = Match made in heaven! - id: 12 name: Rishav Dash position: Data Scientist avatar: src: /img/customers/rishav-dash.svg alt: Avatar text: awesome stuff 🔥 sitemapExclude: true --- ",customers/customers-vector-space-wall.md "--- title: Customers description: Learn how Qdrant powers thousands of top AI solutions that require vector search with unparalleled efficiency, performance and massive-scale data processing. sitemapExclude: true --- ",customers/customers-hero.md "--- title: Customers description: Customers build: render: always cascade: - build: list: local publishResources: false render: never --- ",customers/_index.md "--- logos: - /img/customers-logo/flipkart.svg - /img/customers-logo/x.svg - /img/customers-logo/quora.svg sitemapExclude: true ---",customers/logo-cards-2.md "--- title: Qdrant Demos and Tutorials description: Experience firsthand how Qdrant powers intelligent search, anomaly detection, and personalized recommendations, showcasing the full capabilities of vector search to revolutionize data exploration and insights. cards: - id: 0 title: Semantic Search Demo - Startup Search paragraphs: - id: 0 content: This demo leverages a pre-trained SentenceTransformer model to perform semantic searches on startup descriptions, transforming them into vectors for the Qdrant engine. - id: 1 content: Enter a query to see how neural search compares to traditional full-text search, with the option to toggle neural search on and off for direct comparison. link: text: View Demo url: https://qdrant.to/semantic-search-demo - id: 1 title: Semantic Search and Recommendations Demo - Food Discovery paragraphs: - id: 0 content: Explore personalized meal recommendations with our demo, using Delivery Service data. Like or dislike dish photos to refine suggestions based on visual appeal. - id: 1 content: Filter options allow for restaurant selections within your delivery area, tailoring your dining experience to your preferences. link: text: View Demo url: https://food-discovery.qdrant.tech/ - id: 2 title: Categorization Demo -
E-Commerce Products paragraphs: - id: 0 content: Discover the power of vector databases in e-commerce through our demo. Simply input a product name and watch as our multi-language model intelligently categorizes it. The dots you see represent product clusters, highlighting our system's efficient categorization. link: text: View Demo url: https://qdrant.to/extreme-classification-demo - id: 3 title: Code Search Demo -
Explore Qdrant's Codebase paragraphs: - id: 0 content: Semantic search isn't just for natural language. By combining results from two models, qdrant is able to locate relevant code snippets down to the exact line. link: text: View Demo url: https://code-search.qdrant.tech/ ---",demo/_index.md "--- content: Learn more about all features that are supported on Qdrant Cloud. link: text: Qdrant Features url: /qdrant-vector-database/ sitemapExclude: true --- ",qdrant-cloud/qdrant-cloud-features-link.md "--- title: Qdrant Cloud description: Qdrant Cloud provides optimal flexibility and offers a suite of features focused on efficient and scalable vector search - fully managed. Available on AWS, Google Cloud, and Azure. startFree: text: Start Free url: https://cloud.qdrant.io/ contactUs: text: Contact us url: /contact-us/ icon: src: /icons/fill/lightning-purple.svg alt: Lightning content: ""Learn how to get up and running in minutes:"" #video: # src: / # button: Watch Demo # icon: # src: /icons/outline/play-white.svg # alt: Play # preview: /img/qdrant-cloud-demo.png sitemapExclude: true --- ",qdrant-cloud/qdrant-cloud-hero.md "--- items: - id: 0 title: Run Anywhere description: Available on AWS, Google Cloud, and Azure regions globally for deployment flexibility and quick data access. image: src: /img/qdrant-cloud-bento-cards/run-anywhere-graphic.png alt: Run anywhere graphic - id: 1 title: Simple Setup and Start Free description: Deploying a cluster via the Qdrant Cloud Console takes only a few seconds and scales up as needed. image: src: /img/qdrant-cloud-bento-cards/simple-setup-illustration.png alt: Simple setup illustration - id: 2 title: Efficient Resource Management description: Dramatically reduce memory usage with built-in compression options and offload data to disk. image: src: /img/qdrant-cloud-bento-cards/efficient-resource-management.png alt: Efficient resource management diagram - id: 3 title: Zero-downtime Upgrades description: Uninterrupted service during scaling and model updates for continuous operation and deployment flexibility. link: text: Cluster Scaling url: /documentation/cloud/cluster-scaling/ image: src: /img/qdrant-cloud-bento-cards/zero-downtime-upgrades.png alt: Zero downtime upgrades illustration - id: 4 title: Continuous Backups description: Automated, configurable backups for data safety and easy restoration to previous states. link: text: Backups url: /documentation/cloud/backups/ image: src: /img/qdrant-cloud-bento-cards/continuous-backups.png alt: Continuous backups illustration sitemapExclude: true --- ",qdrant-cloud/qdrant-cloud-bento-cards.md "--- title: ""Qdrant Cloud: Scalable Managed Cloud Services"" url: cloud description: ""Discover Qdrant Cloud, the cutting-edge managed cloud for scalable, high-performance AI applications. Manage and deploy your vector data with ease today."" build: render: always cascade: - build: list: local publishResources: false render: never --- ",qdrant-cloud/_index.md "--- logo: title: Our Logo description: ""The Qdrant logo represents a paramount expression of our core brand identity. With consistent placement, sizing, clear space, and color usage, our logo affirms its recognition across all platforms."" logoCards: - id: 0 logo: src: /img/brand-resources-logos/logo.svg alt: Logo Full Color title: Logo Full Color link: url: /img/brand-resources-logos/logo.svg text: Download - id: 1 logo: src: /img/brand-resources-logos/logo-black.svg alt: Logo Black title: Logo Black link: url: /img/brand-resources-logos/logo-black.svg text: Download - id: 2 logo: src: /img/brand-resources-logos/logo-white.svg alt: Logo White title: Logo White link: url: /img/brand-resources-logos/logo-white.svg text: Download logomarkTitle: Logomark logomarkCards: - id: 0 logo: src: /img/brand-resources-logos/logomark.svg alt: Logomark Full Color title: Logomark Full Color link: url: /img/brand-resources-logos/logomark.svg text: Download - id: 1 logo: src: /img/brand-resources-logos/logomark-black.svg alt: Logomark Black title: Logomark Black link: url: /img/brand-resources-logos/logomark-black.svg text: Download - id: 2 logo: src: /img/brand-resources-logos/logomark-white.svg alt: Logomark White title: Logomark White link: url: /img/brand-resources-logos/logomark-white.svg text: Download colors: title: Colors description: Our brand colors play a crucial role in maintaining a cohesive visual identity. The careful balance of these colors ensures a consistent and impactful representation of Qdrant, reinforcing our commitment to excellence and precision in every aspect of our work. cards: - id: 0 name: Amaranth type: HEX code: ""DC244C"" - id: 1 name: Blue type: HEX code: ""2F6FF0"" - id: 2 name: Violet type: HEX code: ""8547FF"" - id: 3 name: Teal type: HEX code: ""038585"" - id: 4 name: Black type: HEX code: ""090E1A"" - id: 5 name: White type: HEX code: ""FFFFFF"" typography: title: Typography description: Main typography is Satoshi, this is employed for both UI and marketing purposes. Headlines are set in Bold (600), while text is rendered in Medium (500). example: AaBb specimen: ""ABCDEFGHIJKLMNOPQRSTUVWXYZ
abcdefghijklmnopqrstuvwxyz
0123456789 !@#$%^&*()"" link: url: https://api.fontshare.com/v2/fonts/download/satoshi text: Download trademarks: title: Trademarks description: All features associated with the Qdrant brand are safeguarded by relevant trademark, copyright, and intellectual property regulations. Utilization of the Qdrant trademark must adhere to the specified Qdrant Trademark Standards for Use.

Should you require clarification or seek permission to utilize these resources, feel free to reach out to us at link: url: ""mailto:info@qdrant.com"" text: info@qdrant.com. sitemapExclude: true --- ",brand-resources/brand-resources-content.md "--- title: Qdrant Brand Resources buttons: - id: 0 url: ""#logo"" text: Logo - id: 1 url: ""#colors"" text: Colors - id: 2 url: ""#typography"" text: Typography - id: 3 url: ""#trademarks"" text: Trademarks sitemapExclude: true --- ",brand-resources/brand-resources-hero.md "--- title: brand-resources description: brand-resources build: render: always cascade: - build: list: local publishResources: false render: never --- ",brand-resources/_index.md "--- title: Cloud Quickstart weight: 4 aliases: - quickstart-cloud - ../cloud-quick-start - cloud-quick-start - cloud-quickstart - cloud/quickstart-cloud/ --- # How to Get Started With Qdrant Cloud

You can try vector search on Qdrant Cloud in three steps.
Instructions are below, but the video is faster:

## Setup a Qdrant Cloud cluster 1. Register for a [Cloud account](https://cloud.qdrant.io/) with your email, Google or Github credentials. 2. Go to **Overview** and follow the onboarding instructions under **Create First Cluster**. ![create a cluster](/docs/gettingstarted/gui-quickstart/create-cluster.png) 3. When you create it, you will receive an API key. You will need to copy and paste it soon. 4. Your new cluster will be created under **Clusters**. Give it a few moments to provision. ## Access the cluster dashboard 1. Go to your **Clusters**. Under **Actions**, open the **Dashboard**. 2. Paste your new API key here. If you lost it, make another in **Access Management**. 3. The key will grant you access to your Qdrant instance. Now you can see the cluster Dashboard. ![access the dashboard](/docs/gettingstarted/gui-quickstart/access-dashboard.png) ## Try the Tutorial sandbox 1. Open the interactive **Tutorial**. Here, you can test basic Qdrant API requests. 2. Using the **Quickstart** instructions, create a collection, add vectors and run a search. 3. The output on the right will show you some basic semantic search results. ![interactive-tutorial](/docs/gettingstarted/gui-quickstart/interactive-tutorial.png) ## That's vector search! You can stay in the sandbox and continue trying our different API calls.
When ready, use the Console and our complete REST API to try other operations. ## What's next? Now that you have a Qdrant Cloud cluster up and running, you should [test remote access](/documentation/cloud/authentication/#test-cluster-access) with a Qdrant Client. ",documentation/quickstart-cloud.md "--- title: Release Notes weight: 24 type: external-link external_url: https://github.com/qdrant/qdrant/releases sitemapExclude: True --- ",documentation/release-notes.md "--- title: Benchmarks weight: 33 draft: true --- ",documentation/benchmarks.md "--- title: Community links weight: 42 draft: true --- # Community Contributions Though we do not officially maintain this content, we still feel that is is valuable and thank our dedicated contributors. | Link | Description | Stack | |------|------------------------------|--------| | [Pinecone to Qdrant Migration](https://github.com/NirantK/qdrant_tools) | Complete python toolset that supports migration between two products. | Qdrant, Pinecone | | [LlamaIndex Support for Qdrant](https://gpt-index.readthedocs.io/en/latest/examples/vector_stores/QdrantIndexDemo.html) | Documentation on common integrations with LlamaIndex. | Qdrant, LlamaIndex | | [Geo.Rocks Semantic Search Tutorial](https://geo.rocks/post/qdrant-transformers-js-semantic-search/) | Create a fully working semantic search stack with a built in search API and a minimal stack. | Qdrant, HuggingFace, SentenceTransformers, transformers.js | ",documentation/community-links.md "--- title: Local Quickstart weight: 5 aliases: - quick_start - quick-start - quickstart --- # How to Get Started with Qdrant Locally In this short example, you will use the Python Client to create a Collection, load data into it and run a basic search query. ## Download and run First, download the latest Qdrant image from Dockerhub: ```bash docker pull qdrant/qdrant ``` Then, run the service: ```bash docker run -p 6333:6333 -p 6334:6334 \ -v $(pwd)/qdrant_storage:/qdrant/storage:z \ qdrant/qdrant ``` Under the default configuration all data will be stored in the `./qdrant_storage` directory. This will also be the only directory that both the Container and the host machine can both see. Qdrant is now accessible: - REST API: [localhost:6333](http://localhost:6333) - Web UI: [localhost:6333/dashboard](http://localhost:6333/dashboard) - GRPC API: [localhost:6334](http://localhost:6334) ## Initialize the client ```python from qdrant_client import QdrantClient client = QdrantClient(url=""http://localhost:6333"") ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); ``` ```rust use qdrant_client::Qdrant; // The Rust client uses Qdrant's gRPC interface let client = Qdrant::from_url(""http://localhost:6334"").build()?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; // The Java client uses Qdrant's gRPC interface QdrantClient client = new QdrantClient( QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); ``` ```csharp using Qdrant.Client; // The C# client uses Qdrant's gRPC interface var client = new QdrantClient(""localhost"", 6334); ``` ```go import ""github.com/qdrant/go-client/qdrant"" // The Go client uses Qdrant's gRPC interface client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) ``` ## Create a collection You will be storing all of your vector data in a Qdrant collection. Let's call it `test_collection`. This collection will be using a dot product distance metric to compare vectors. ```python from qdrant_client.models import Distance, VectorParams client.create_collection( collection_name=""test_collection"", vectors_config=VectorParams(size=4, distance=Distance.DOT), ) ``` ```typescript await client.createCollection(""test_collection"", { vectors: { size: 4, distance: ""Dot"" }, }); ``` ```rust use qdrant_client::qdrant::{CreateCollectionBuilder, VectorParamsBuilder}; client .create_collection( CreateCollectionBuilder::new(""test_collection"") .vectors_config(VectorParamsBuilder::new(4, Distance::Dot)), ) .await?; ``` ```java import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.VectorParams; client.createCollectionAsync(""test_collection"", VectorParams.newBuilder().setDistance(Distance.Dot).setSize(4).build()).get(); ``` ```csharp using Qdrant.Client.Grpc; await client.CreateCollectionAsync(collectionName: ""test_collection"", vectorsConfig: new VectorParams { Size = 4, Distance = Distance.Dot }); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client.CreateCollection(context.Background(), &qdrant.CreateCollection{ CollectionName: ""{collection_name}"", VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{ Size: 4, Distance: qdrant.Distance_Cosine, }), }) ``` ## Add vectors Let's now add a few vectors with a payload. Payloads are other data you want to associate with the vector: ```python from qdrant_client.models import PointStruct operation_info = client.upsert( collection_name=""test_collection"", wait=True, points=[ PointStruct(id=1, vector=[0.05, 0.61, 0.76, 0.74], payload={""city"": ""Berlin""}), PointStruct(id=2, vector=[0.19, 0.81, 0.75, 0.11], payload={""city"": ""London""}), PointStruct(id=3, vector=[0.36, 0.55, 0.47, 0.94], payload={""city"": ""Moscow""}), PointStruct(id=4, vector=[0.18, 0.01, 0.85, 0.80], payload={""city"": ""New York""}), PointStruct(id=5, vector=[0.24, 0.18, 0.22, 0.44], payload={""city"": ""Beijing""}), PointStruct(id=6, vector=[0.35, 0.08, 0.11, 0.44], payload={""city"": ""Mumbai""}), ], ) print(operation_info) ``` ```typescript const operationInfo = await client.upsert(""test_collection"", { wait: true, points: [ { id: 1, vector: [0.05, 0.61, 0.76, 0.74], payload: { city: ""Berlin"" } }, { id: 2, vector: [0.19, 0.81, 0.75, 0.11], payload: { city: ""London"" } }, { id: 3, vector: [0.36, 0.55, 0.47, 0.94], payload: { city: ""Moscow"" } }, { id: 4, vector: [0.18, 0.01, 0.85, 0.80], payload: { city: ""New York"" } }, { id: 5, vector: [0.24, 0.18, 0.22, 0.44], payload: { city: ""Beijing"" } }, { id: 6, vector: [0.35, 0.08, 0.11, 0.44], payload: { city: ""Mumbai"" } }, ], }); console.debug(operationInfo); ``` ```rust use qdrant_client::qdrant::{PointStruct, UpsertPointsBuilder}; let points = vec![ PointStruct::new(1, vec![0.05, 0.61, 0.76, 0.74], [(""city"", ""Berlin"".into())]), PointStruct::new(2, vec![0.19, 0.81, 0.75, 0.11], [(""city"", ""London"".into())]), PointStruct::new(3, vec![0.36, 0.55, 0.47, 0.94], [(""city"", ""Moscow"".into())]), // ..truncated ]; let response = client .upsert_points(UpsertPointsBuilder::new(""test_collection"", points).wait(true)) .await?; dbg!(response); ``` ```java import java.util.List; import java.util.Map; import static io.qdrant.client.PointIdFactory.id; import static io.qdrant.client.ValueFactory.value; import static io.qdrant.client.VectorsFactory.vectors; import io.qdrant.client.grpc.Points.PointStruct; import io.qdrant.client.grpc.Points.UpdateResult; UpdateResult operationInfo = client .upsertAsync( ""test_collection"", List.of( PointStruct.newBuilder() .setId(id(1)) .setVectors(vectors(0.05f, 0.61f, 0.76f, 0.74f)) .putAllPayload(Map.of(""city"", value(""Berlin""))) .build(), PointStruct.newBuilder() .setId(id(2)) .setVectors(vectors(0.19f, 0.81f, 0.75f, 0.11f)) .putAllPayload(Map.of(""city"", value(""London""))) .build(), PointStruct.newBuilder() .setId(id(3)) .setVectors(vectors(0.36f, 0.55f, 0.47f, 0.94f)) .putAllPayload(Map.of(""city"", value(""Moscow""))) .build())) // Truncated .get(); System.out.println(operationInfo); ``` ```csharp using Qdrant.Client.Grpc; var operationInfo = await client.UpsertAsync(collectionName: ""test_collection"", points: new List { new() { Id = 1, Vectors = new float[] { 0.05f, 0.61f, 0.76f, 0.74f }, Payload = { [""city""] = ""Berlin"" } }, new() { Id = 2, Vectors = new float[] { 0.19f, 0.81f, 0.75f, 0.11f }, Payload = { [""city""] = ""London"" } }, new() { Id = 3, Vectors = new float[] { 0.36f, 0.55f, 0.47f, 0.94f }, Payload = { [""city""] = ""Moscow"" } }, // Truncated }); Console.WriteLine(operationInfo); ``` ```go import ( ""context"" ""fmt"" ""github.com/qdrant/go-client/qdrant"" ) operationInfo, err := client.Upsert(context.Background(), &qdrant.UpsertPoints{ CollectionName: ""test_collection"", Points: []*qdrant.PointStruct{ { Id: qdrant.NewIDNum(1), Vectors: qdrant.NewVectors(0.05, 0.61, 0.76, 0.74), Payload: qdrant.NewValueMap(map[string]any{""city"": ""Berlin""}), }, { Id: qdrant.NewIDNum(2), Vectors: qdrant.NewVectors(0.19, 0.81, 0.75, 0.11), Payload: qdrant.NewValueMap(map[string]any{""city"": ""London""}), }, { Id: qdrant.NewIDNum(3), Vectors: qdrant.NewVectors(0.36, 0.55, 0.47, 0.94), Payload: qdrant.NewValueMap(map[string]any{""city"": ""Moscow""}), }, // Truncated }, }) if err != nil { panic(err) } fmt.Println(operationInfo) ``` **Response:** ```python operation_id=0 status= ``` ```typescript { operation_id: 0, status: 'completed' } ``` ```rust PointsOperationResponse { result: Some( UpdateResult { operation_id: Some( 0, ), status: Completed, }, ), time: 0.00094027, } ``` ```java operation_id: 0 status: Completed ``` ```csharp { ""operationId"": ""0"", ""status"": ""Completed"" } ``` ```go operation_id:0 status:Acknowledged ``` ## Run a query Let's ask a basic question - Which of our stored vectors are most similar to the query vector `[0.2, 0.1, 0.9, 0.7]`? ```python search_result = client.query_points( collection_name=""test_collection"", query=[0.2, 0.1, 0.9, 0.7], limit=3 ).points print(search_result) ``` ```typescript let searchResult = await client.query( ""test_collection"", { query: [0.2, 0.1, 0.9, 0.7], limit: 3 }); console.debug(searchResult.points); ``` ```rust use qdrant_client::qdrant::QueryPointsBuilder; let search_result = client .query( QueryPointsBuilder::new(""test_collection"") .query(vec![0.2, 0.1, 0.9, 0.7]) ) .await?; dbg!(search_result); ``` ```java import java.util.List; import io.qdrant.client.grpc.Points.ScoredPoint; import io.qdrant.client.grpc.Points.QueryPoints; import static io.qdrant.client.QueryFactory.nearest; List searchResult = client.queryAsync(QueryPoints.newBuilder() .setCollectionName(""test_collection"") .setLimit(3) .setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f)) .build()).get(); System.out.println(searchResult); ``` ```csharp var searchResult = await client.QueryAsync( collectionName: ""test_collection"", query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f }, limit: 3, ); Console.WriteLine(searchResult); ``` ```go import ( ""context"" ""fmt"" ""github.com/qdrant/go-client/qdrant"" ) searchResult, err := client.Query(context.Background(), &qdrant.QueryPoints{ CollectionName: ""test_collection"", Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7), }) if err != nil { panic(err) } fmt.Println(searchResult) ``` **Response:** ```json [ { ""id"": 4, ""version"": 0, ""score"": 1.362, ""payload"": null, ""vector"": null }, { ""id"": 1, ""version"": 0, ""score"": 1.273, ""payload"": null, ""vector"": null }, { ""id"": 3, ""version"": 0, ""score"": 1.208, ""payload"": null, ""vector"": null } ] ``` The results are returned in decreasing similarity order. Note that payload and vector data is missing in these results by default. See [payload and vector in the result](../concepts/search/#payload-and-vector-in-the-result) on how to enable it. ## Add a filter We can narrow down the results further by filtering by payload. Let's find the closest results that include ""London"". ```python from qdrant_client.models import Filter, FieldCondition, MatchValue search_result = client.query_points( collection_name=""test_collection"", query=[0.2, 0.1, 0.9, 0.7], query_filter=Filter( must=[FieldCondition(key=""city"", match=MatchValue(value=""London""))] ), with_payload=True, limit=3, ).points print(search_result) ``` ```typescript searchResult = await client.query(""test_collection"", { query: [0.2, 0.1, 0.9, 0.7], filter: { must: [{ key: ""city"", match: { value: ""London"" } }], }, with_payload: true, limit: 3, }); console.debug(searchResult); ``` ```rust use qdrant_client::qdrant::{Condition, Filter, QueryPointsBuilder}; let search_result = client .query( QueryPointsBuilder::new(""test_collection"") .query(vec![0.2, 0.1, 0.9, 0.7]) .filter(Filter::must([Condition::matches( ""city"", ""London"".to_string(), )])) .with_payload(true), ) .await?; dbg!(search_result); ``` ```java import static io.qdrant.client.ConditionFactory.matchKeyword; List searchResult = client.queryAsync(QueryPoints.newBuilder() .setCollectionName(""test_collection"") .setLimit(3) .setFilter(Filter.newBuilder().addMust(matchKeyword(""city"", ""London""))) .setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f)) .setWithPayload(enable(true)) .build()).get(); System.out.println(searchResult); ``` ```csharp using static Qdrant.Client.Grpc.Conditions; var searchResult = await client.QueryAsync( collectionName: ""test_collection"", query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f }, filter: MatchKeyword(""city"", ""London""), limit: 3, payloadSelector: true ); Console.WriteLine(searchResult); ``` ```go import ( ""context"" ""fmt"" ""github.com/qdrant/go-client/qdrant"" ) searchResult, err := client.Query(context.Background(), &qdrant.QueryPoints{ CollectionName: ""test_collection"", Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7), Filter: &qdrant.Filter{ Must: []*qdrant.Condition{ qdrant.NewMatch(""city"", ""London""), }, }, WithPayload: qdrant.NewWithPayload(true), }) if err != nil { panic(err) } fmt.Println(searchResult) ``` **Response:** ```json [ { ""id"": 2, ""version"": 0, ""score"": 0.871, ""payload"": { ""city"": ""London"" }, ""vector"": null } ] ``` You have just conducted vector search. You loaded vectors into a database and queried the database with a vector of your own. Qdrant found the closest results and presented you with a similarity score. ## Next steps Now you know how Qdrant works. Getting started with [Qdrant Cloud](../cloud/quickstart-cloud/) is just as easy. [Create an account](https://qdrant.to/cloud) and use our SaaS completely free. We will take care of infrastructure maintenance and software updates. To move onto some more complex examples of vector search, read our [Tutorials](../tutorials/) and create your own app with the help of our [Examples](../examples/). **Note:** There is another way of running Qdrant locally. If you are a Python developer, we recommend that you try Local Mode in [Qdrant Client](https://github.com/qdrant/qdrant-client), as it only takes a few moments to get setup. ",documentation/quickstart.md "--- title: Qdrant Cloud API weight: 10 --- # Qdrant Cloud API The Qdrant Cloud API lets you manage Cloud accounts and their respective Qdrant clusters. You can use this API to manage your clusters, authentication methods, and cloud configurations. | REST API | Documentation | | -------- | ------------------------------------------------------------------------------------ | | v.0.1.0 | [OpenAPI Specification](https://cloud.qdrant.io/pa/v1/docs) | **Note:** This is not the Qdrant REST API. For core product APIs & SDKs, see our list of [interfaces](/documentation/interfaces/) ## Authentication: Connecting to Cloud API To interact with the Qdrant Cloud API, you must authenticate using an API key. Each request to the API must include the API key in the **Authorization** header. The API key acts as a bearer token and grants access to your account’s resources. You can create a Cloud API key in the Cloud Console UI. Go to **Access Management** > **Qdrant Cloud API Keys**. ![Authentication](/documentation/cloud/authentication.png) **Note:** Ensure that the API key is kept secure and not exposed in public repositories or logs. Once authenticated, the API allows you to manage clusters, collections, and perform other operations available to your account. ## Sample API Request Here's an example of a basic request to **list all clusters** in your Qdrant Cloud account: ```bash curl -X 'GET' \ 'https://cloud.qdrant.io/pa/v1/accounts//clusters' \ -H 'accept: application/json' \ -H 'Authorization: ' ``` This request will return a list of clusters associated with your account in JSON format. ## Cluster Management Use these endpoints to create and manage your Qdrant database clusters. The API supports fine-grained control over cluster resources (CPU, RAM, disk), node configurations, tolerations, and other operational characteristics across all cloud providers (AWS, GCP, Azure) and their respective regions in Qdrant Cloud, as well as Hybrid Cloud. - **Get Cluster by ID**: Retrieve detailed information about a specific cluster using the cluster ID and associated account ID. - **Delete Cluster**: Remove a cluster, with optional deletion of backups. - **Update Cluster**: Apply modifications to a cluster's configuration. - **List Clusters**: Get all clusters associated with a specific account, filtered by region or other criteria. - **Create Cluster**: Add new clusters to the account with configurable parameters such as nodes, cloud provider, and regions. - **Get Booking**: Manage hosting across various cloud providers (AWS, GCP, Azure) and their respective regions. ## Cluster Authentication Management Use these endpoints to manage your cluster API keys. - **List API Keys**: Retrieve all API keys associated with an account. - **Create API Key**: Generate a new API key for programmatic access. - **Delete API Key**: Revoke access by deleting a specific API key. - **Update API Key**: Modify attributes of an existing API key. ",documentation/qdrant-cloud-api.md "--- #Delimiter files are used to separate the list of documentation pages into sections. title: ""Getting Started"" type: delimiter weight: 1 # Change this weight to change order of sections sitemapExclude: True _build: publishResources: false render: never ---",documentation/0-dl.md "--- #Delimiter files are used to separate the list of documentation pages into sections. title: ""Integrations"" type: delimiter weight: 14 # Change this weight to change order of sections sitemapExclude: True _build: publishResources: false render: never ---",documentation/2-dl.md "--- title: Roadmap weight: 32 draft: true --- # Qdrant 2023 Roadmap Goals of the release: * **Maintain easy upgrades** - we plan to keep backward compatibility for at least one major version back. * That means that you can upgrade Qdrant without any downtime and without any changes in your client code within one major version. * Storage should be compatible between any two consequent versions, so you can upgrade Qdrant with automatic data migration between consecutive versions. * **Make billion-scale serving cheap** - qdrant already can serve billions of vectors, but we want to make it even more affordable. * **Easy scaling** - our plan is to make it easy to dynamically scale Qdrant, so you could go from 1 to 1B vectors seamlessly. * **Various similarity search scenarios** - we want to support more similarity search scenarios, e.g. sparse search, grouping requests, diverse search, etc. ## Milestones * :atom_symbol: Quantization support * [ ] Scalar quantization f32 -> u8 (4x compression) * [ ] Advanced quantization (8x and 16x compression) * [ ] Support for binary vectors --- * :arrow_double_up: Scalability * [ ] Automatic replication factor adjustment * [ ] Automatic shard distribution on cluster scaling * [ ] Repartitioning support --- * :eyes: Search scenarios * [ ] Diversity search - search for vectors that are different from each other * [ ] Sparse vectors search - search for vectors with a small number of non-zero values * [ ] Grouping requests - search within payload-defined groups * [ ] Different scenarios for recommendation API --- * Additionally * [ ] Extend full-text filtering support * [ ] Support for phrase queries * [ ] Support for logical operators * [ ] Simplify update of collection parameters ",documentation/roadmap.md "--- #Delimiter files are used to separate the list of documentation pages into sections. title: ""Managed Services"" type: delimiter weight: 7 # Change this weight to change order of sections sitemapExclude: True _build: publishResources: false render: never ---",documentation/4-dl.md "--- #Delimiter files are used to separate the list of documentation pages into sections. title: ""Examples"" type: delimiter weight: 17 # Change this weight to change order of sections sitemapExclude: True _build: publishResources: false render: never ---",documentation/3-dl.md "--- title: Practice Datasets weight: 23 --- # Common Datasets in Snapshot Format You may find that creating embeddings from datasets is a very resource-intensive task. If you need a practice dataset, feel free to pick one of the ready-made snapshots on this page. These snapshots contain pre-computed vectors that you can easily import into your Qdrant instance. ## Available datasets Our snapshots are usually generated from publicly available datasets, which are often used for non-commercial or academic purposes. The following datasets are currently available. Please click on a dataset name to see its detailed description. | Dataset | Model | Vector size | Documents | Size | Qdrant snapshot | HF Hub | |--------------------------------------------|-----------------------------------------------------------------------------|-------------|-----------|--------|----------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------| | [Arxiv.org titles](#arxivorg-titles) | [InstructorXL](https://huggingface.co/hkunlp/instructor-xl) | 768 | 2.3M | 7.1 GB | [Download](https://snapshots.qdrant.io/arxiv_titles-3083016565637815127-2023-05-29-13-56-22.snapshot) | [Open](https://huggingface.co/datasets/Qdrant/arxiv-titles-instructorxl-embeddings) | | [Arxiv.org abstracts](#arxivorg-abstracts) | [InstructorXL](https://huggingface.co/hkunlp/instructor-xl) | 768 | 2.3M | 8.4 GB | [Download](https://snapshots.qdrant.io/arxiv_abstracts-3083016565637815127-2023-06-02-07-26-29.snapshot) | [Open](https://huggingface.co/datasets/Qdrant/arxiv-abstracts-instructorxl-embeddings) | | [Wolt food](#wolt-food) | [clip-ViT-B-32](https://huggingface.co/sentence-transformers/clip-ViT-B-32) | 512 | 1.7M | 7.9 GB | [Download](https://snapshots.qdrant.io/wolt-clip-ViT-B-32-2446808438011867-2023-12-14-15-55-26.snapshot) | [Open](https://huggingface.co/datasets/Qdrant/wolt-food-clip-ViT-B-32-embeddings) | Once you download a snapshot, you need to [restore it](/documentation/concepts/snapshots/#restore-snapshot) using the Qdrant CLI upon startup or through the API. ## Qdrant on Hugging Face

[Hugging Face](https://huggingface.co/) provides a platform for sharing and using ML models and datasets. [Qdrant](https://huggingface.co/Qdrant) is one of the organizations there! We aim to provide you with datasets containing neural embeddings that you can use to practice with Qdrant and build your applications based on semantic search. **Please let us know if you'd like to see a specific dataset!** If you are not familiar with [Hugging Face datasets](https://huggingface.co/docs/datasets/index), or would like to know how to combine it with Qdrant, please refer to the [tutorial](/documentation/tutorials/huggingface-datasets/). ## Arxiv.org [Arxiv.org](https://arxiv.org) is a highly-regarded open-access repository of electronic preprints in multiple fields. Operated by Cornell University, arXiv allows researchers to share their findings with the scientific community and receive feedback before they undergo peer review for formal publication. Its archives host millions of scholarly articles, making it an invaluable resource for those looking to explore the cutting edge of scientific research. With a high frequency of daily submissions from scientists around the world, arXiv forms a comprehensive, evolving dataset that is ripe for mining, analysis, and the development of future innovations. ### Arxiv.org titles This dataset contains embeddings generated from the paper titles only. Each vector has a payload with the title used to create it, along with the DOI (Digital Object Identifier). ```json { ""title"": ""Nash Social Welfare for Indivisible Items under Separable, Piecewise-Linear Concave Utilities"", ""DOI"": ""1612.05191"" } ``` The embeddings generated with InstructorXL model have been generated using the following instruction: > Represent the Research Paper title for retrieval; Input: The following code snippet shows how to generate embeddings using the InstructorXL model: ```python from InstructorEmbedding import INSTRUCTOR model = INSTRUCTOR(""hkunlp/instructor-xl"") sentence = ""3D ActionSLAM: wearable person tracking in multi-floor environments"" instruction = ""Represent the Research Paper title for retrieval; Input:"" embeddings = model.encode([[instruction, sentence]]) ``` The snapshot of the dataset might be downloaded [here](https://snapshots.qdrant.io/arxiv_titles-3083016565637815127-2023-05-29-13-56-22.snapshot). #### Importing the dataset The easiest way to use the provided dataset is to recover it via the API by passing the URL as a location. It works also in [Qdrant Cloud](https://cloud.qdrant.io/). The following code snippet shows how to create a new collection and fill it with the snapshot data: ```http request PUT /collections/{collection_name}/snapshots/recover { ""location"": ""https://snapshots.qdrant.io/arxiv_titles-3083016565637815127-2023-05-29-13-56-22.snapshot"" } ``` ### Arxiv.org abstracts This dataset contains embeddings generated from the paper abstracts. Each vector has a payload with the abstract used to create it, along with the DOI (Digital Object Identifier). ```json { ""abstract"": ""Recently Cole and Gkatzelis gave the first constant factor approximation\nalgorithm for the problem of allocating indivisible items to agents, under\nadditive valuations, so as to maximize the Nash Social Welfare. We give\nconstant factor algorithms for a substantial generalization of their problem --\nto the case of separable, piecewise-linear concave utility functions. We give\ntwo such algorithms, the first using market equilibria and the second using the\ntheory of stable polynomials.\n In AGT, there is a paucity of methods for the design of mechanisms for the\nallocation of indivisible goods and the result of Cole and Gkatzelis seemed to\nbe taking a major step towards filling this gap. Our result can be seen as\nanother step in this direction.\n"", ""DOI"": ""1612.05191"" } ``` The embeddings generated with InstructorXL model have been generated using the following instruction: > Represent the Research Paper abstract for retrieval; Input: The following code snippet shows how to generate embeddings using the InstructorXL model: ```python from InstructorEmbedding import INSTRUCTOR model = INSTRUCTOR(""hkunlp/instructor-xl"") sentence = ""The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train."" instruction = ""Represent the Research Paper abstract for retrieval; Input:"" embeddings = model.encode([[instruction, sentence]]) ``` The snapshot of the dataset might be downloaded [here](https://snapshots.qdrant.io/arxiv_abstracts-3083016565637815127-2023-06-02-07-26-29.snapshot). #### Importing the dataset The easiest way to use the provided dataset is to recover it via the API by passing the URL as a location. It works also in [Qdrant Cloud](https://cloud.qdrant.io/). The following code snippet shows how to create a new collection and fill it with the snapshot data: ```http request PUT /collections/{collection_name}/snapshots/recover { ""location"": ""https://snapshots.qdrant.io/arxiv_abstracts-3083016565637815127-2023-06-02-07-26-29.snapshot"" } ``` ## Wolt food Our [Food Discovery demo](https://food-discovery.qdrant.tech/) relies on the dataset of food images from the Wolt app. Each point in the collection represents a dish with a single image. The image is represented as a vector of 512 float numbers. There is also a JSON payload attached to each point, which looks similar to this: ```json { ""cafe"": { ""address"": ""VGX7+6R2 Vecchia Napoli, Valletta"", ""categories"": [""italian"", ""pasta"", ""pizza"", ""burgers"", ""mediterranean""], ""location"": {""lat"": 35.8980154, ""lon"": 14.5145106}, ""menu_id"": ""610936a4ee8ea7a56f4a372a"", ""name"": ""Vecchia Napoli Is-Suq Tal-Belt"", ""rating"": 9, ""slug"": ""vecchia-napoli-skyparks-suq-tal-belt"" }, ""description"": ""Tomato sauce, mozzarella fior di latte, crispy guanciale, Pecorino Romano cheese and a hint of chilli"", ""image"": ""https://wolt-menu-images-cdn.wolt.com/menu-images/610936a4ee8ea7a56f4a372a/005dfeb2-e734-11ec-b667-ced7a78a5abd_l_amatriciana_pizza_joel_gueller1.jpeg"", ""name"": ""L'Amatriciana"" } ``` The embeddings generated with clip-ViT-B-32 model have been generated using the following code snippet: ```python from PIL import Image from sentence_transformers import SentenceTransformer image_path = ""5dbfd216-5cce-11eb-8122-de94874ad1c8_ns_takeaway_seelachs_ei_baguette.jpeg"" model = SentenceTransformer(""clip-ViT-B-32"") embedding = model.encode(Image.open(image_path)) ``` The snapshot of the dataset might be downloaded [here](https://snapshots.qdrant.io/wolt-clip-ViT-B-32-2446808438011867-2023-12-14-15-55-26.snapshot). #### Importing the dataset The easiest way to use the provided dataset is to recover it via the API by passing the URL as a location. It works also in [Qdrant Cloud](https://cloud.qdrant.io/). The following code snippet shows how to create a new collection and fill it with the snapshot data: ```http request PUT /collections/{collection_name}/snapshots/recover { ""location"": ""https://snapshots.qdrant.io/wolt-clip-ViT-B-32-2446808438011867-2023-12-14-15-55-26.snapshot"" } ``` ",documentation/datasets.md "--- #Delimiter files are used to separate the list of documentation pages into sections. title: ""User Manual"" type: delimiter weight: 10 # Change this weight to change order of sections sitemapExclude: True _build: publishResources: false render: never ---",documentation/1-dl.md "--- #Delimiter files are used to separate the list of documentation pages into sections. title: ""Support"" type: delimiter weight: 21 # Change this weight to change order of sections sitemapExclude: True _build: publishResources: false render: never ---",documentation/5-dl.md "--- title: Home weight: 2 hideTOC: true --- # Documentation Qdrant is an AI-native vector dabatase and a semantic search engine. You can use it to extract meaningful information from unstructured data. Want to see how it works? [Clone this repo now](https://github.com/qdrant/qdrant_demo/) and build a search engine in five minutes. ||| |-:|:-| |[Cloud Quickstart](/documentation/quickstart-cloud/)|[Local Quickstart](/documentation/quick-start/)| ## Ready to start developing? ***

Qdrant is open-source and can be self-hosted. However, the quickest way to get started is with our [free tier](https://qdrant.to/cloud) on Qdrant Cloud. It scales easily and provides an UI where you can interact with data.

*** [![Hybrid Cloud](/docs/homepage/cloud-cta.png)](https://qdrant.to/cloud) ## Qdrant's most popular features: |||| |:-|:-|:-| |[Filtrable HNSW](/documentation/filtering/)
Single-stage payload filtering | [Recommendations & Context Search](/documentation/concepts/explore/#explore-the-data)
Exploratory advanced search| [Pure-Vector Hybrid Search](/documentation/hybrid-queries/)
Full text and semantic search in one| |[Multitenancy](/documentation/guides/multiple-partitions/)
Payload-based partitioning|[Custom Sharding](/documentation/guides/distributed_deployment/#sharding)
For data isolation and distribution|[Role Based Access Control](/documentation/guides/security/?q=jwt#granular-access-control-with-jwt)
Secure JWT-based access | |[Quantization](/documentation/guides/quantization/)
Compress data for drastic speedups|[Multivector Support](/documentation/concepts/vectors/?q=multivect#multivectors)
For ColBERT late interaction |[Built-in IDF](/documentation/concepts/indexing/?q=inverse+docu#idf-modifier)
Cutting-edge similarity calculation|",documentation/_index.md "--- title: Contribution Guidelines weight: 35 draft: true --- # How to contribute If you are a Qdrant user - Data Scientist, ML Engineer, or MLOps, the best contribution would be the feedback on your experience with Qdrant. Let us know whenever you have a problem, face an unexpected behavior, or see a lack of documentation. You can do it in any convenient way - create an [issue](https://github.com/qdrant/qdrant/issues), start a [discussion](https://github.com/qdrant/qdrant/discussions), or drop up a [message](https://discord.gg/tdtYvXjC4h). If you use Qdrant or Metric Learning in your projects, we'd love to hear your story! Feel free to share articles and demos in our community. For those familiar with Rust - check out our [contribution guide](https://github.com/qdrant/qdrant/blob/master/CONTRIBUTING.md). If you have problems with code or architecture understanding - reach us at any time. Feeling confident and want to contribute more? - Come to [work with us](https://qdrant.join.com/)!",documentation/contribution-guidelines.md "--- title: Bubble aliases: [ ../frameworks/bubble/ ] --- # Bubble [Bubble](https://bubble.io/) is a software development platform that enables anyone to build and launch fully functional web applications without writing code. You can use the [Qdrant Bubble plugin](https://bubble.io/plugin/qdrant-1716804374179x344999530386685950) to interface with Qdrant in your workflows. ## Prerequisites 1. A Qdrant instance to connect to. You can get a free cloud instance at [cloud.qdrant.io](https://cloud.qdrant.io/). 2. An account at [Bubble.io](https://bubble.io/) and an app set up. ## Setting up the plugin Navigate to your app's workflows. Select `""Install more plugins actions""`. ![Install New Plugin](/documentation/frameworks/bubble/install-bubble-plugin.png) You can now search for the Qdrant plugin and install it. Ensure all the categories are selected to perform a full search. ![Qdrant Plugin Search](/documentation/frameworks/bubble/qdrant-plugin-search.png) The Qdrant plugin can now be found in the installed plugins section of your workflow. Enter the API key of your Qdrant instance for authentication. ![Qdrant Plugin Home](/documentation/frameworks/bubble/qdrant-plugin-home.png) The plugin provides actions for upserting, searching, updating and deleting points from your Qdrant collection with dynamic and static values from your Bubble workflow. ## Further Reading - [Bubble Academy](https://bubble.io/academy). - [Bubble Manual](https://manual.bubble.io/) ",documentation/platforms/bubble.md "--- title: Make.com aliases: [ ../frameworks/make/ ] --- # Make.com [Make](https://www.make.com/) is a platform for anyone to design, build, and automate anything—from tasks and workflows to apps and systems without code. Find the comprehensive list of available Make apps [here](https://www.make.com/en/integrations). Qdrant is available as an [app](https://www.make.com/en/integrations/qdrant) within Make to add to your scenarios. ![Qdrant Make hero](/documentation/frameworks/make/hero-page.png) ## Prerequisites Before you start, make sure you have the following: 1. A Qdrant instance to connect to. You can get free cloud instance [cloud.qdrant.io](https://cloud.qdrant.io/). 2. An account at Make.com. You can register yourself [here](https://www.make.com/en/register). ## Setting up a connection Navigate to your scenario on the Make dashboard and select a Qdrant app module to start a connection. ![Qdrant Make connection](/documentation/frameworks/make/connection.png) You can now establish a connection to Qdrant using your [instance credentials](/documentation/cloud/authentication/). ![Qdrant Make form](/documentation/frameworks/make/connection-form.png) ## Modules Modules represent actions that Make performs with an app. The Qdrant Make app enables you to trigger the following app modules. ![Qdrant Make modules](/documentation/frameworks/make/modules.png) The modules support mapping to connect the data retrieved by one module to another module to perform the desired action. You can read more about the data processing options available for the modules in the [Make reference](https://www.make.com/en/help/modules). ## Next steps - Find a list of Make workflow templates to connect with Qdrant [here](https://www.make.com/en/templates). - Make scenario reference docs can be found [here](https://www.make.com/en/help/scenarios).",documentation/platforms/make.md "--- title: Portable.io aliases: [ ../frameworks/portable/ ] --- # Portable [Portable](https://portable.io/) is an ELT platform that builds connectors on-demand for data teams. It enables connecting applications to your data warehouse with no code. You can avail the [Qdrant connector](https://portable.io/connectors/qdrant) to build data pipelines from your collections. ![Qdrant Connector](/documentation/frameworks/portable/home.png) ## Prerequisites 1. A Qdrant instance to connect to. You can get a free cloud instance at [cloud.qdrant.io](https://cloud.qdrant.io/). 2. A [Portable account](https://app.portable.io/). ## Setting up the connector Navigate to the Portable dashboard. Search for `""Qdrant""` in the sources section. ![Install New Source](/documentation/frameworks/portable/install.png) Configure the connector with your Qdrant instance credentials. ![Configure connector](/documentation/frameworks/portable/configure.png) You can now build your flows using data from Qdrant by selecting a [destination](https://app.portable.io/destinations) and scheduling it. ## Further Reading - [Portable API Reference](https://developer.portable.io/api-reference/introduction). - [Portable Academy](https://portable.io/learn) ",documentation/platforms/portable.md "--- title: BuildShip aliases: [ ../frameworks/buildship/ ] --- # BuildShip [BuildShip](https://buildship.com/) is a low-code visual builder to create APIs, scheduled jobs, and backend workflows with AI assitance. You can use the [Qdrant integration](https://buildship.com/integrations/qdrant) to development workflows with semantic-search capabilites. ## Prerequisites 1. A Qdrant instance to connect to. You can get a free cloud instance at [cloud.qdrant.io](https://cloud.qdrant.io/). 2. A [BuildsShip](https://buildship.app/) for developing workflows. ## Nodes Nodes are are fundamental building blocks of BuildShip. Each responsible for an operation in your workflow. The Qdrant integration includes the following nodes with extensibility if required. ### Add Point ![Add Point](/documentation/frameworks/buildship/add.png) ### Retrieve Points ![Retrieve Points](/documentation/frameworks/buildship/get.png) ### Delete Points ![Delete Points](/documentation/frameworks/buildship/delete.png) ### Search Points ![Search Points](/documentation/frameworks/buildship/search.png) ## Further Reading - [BuildShip Docs](https://docs.buildship.com/basics/node). - [BuildShip Integrations](https://buildship.com/integrations) ",documentation/platforms/buildship.md "--- title: Apify aliases: [ ../frameworks/apify/ ] --- # Apify [Apify](https://apify.com/) is a web scraping and browser automation platform featuring an [app store](https://apify.com/store) with over 1,500 pre-built micro-apps known as Actors. These serverless cloud programs, which are essentially dockers under the hood, are designed for various web automation applications, including data collection. One such Actor, built especially for AI and RAG applications, is [Website Content Crawler](https://apify.com/apify/website-content-crawler). It's ideal for this purpose because it has built-in HTML processing and data-cleaning functions. That means you can easily remove fluff, duplicates, and other things on a web page that aren't relevant, and provide only the necessary data to the language model. The Markdown can then be used to feed Qdrant to train AI models or supply them with fresh web content. Qdrant is available as an [official integration](https://apify.com/apify/qdrant-integration) to load Apify datasets into a collection. You can refer to the [Apify documentation](https://docs.apify.com/platform/integrations/qdrant) to set up the integration via the Apify UI. ## Programmatic Usage Apify also supports programmatic access to integrations via the [Apify Python SDK](https://docs.apify.com/sdk/python/). 1. Install the Apify Python SDK by running the following command: ```sh pip install apify-client ``` 2. Create a Python script and import all the necessary modules: ```python from apify_client import ApifyClient APIFY_API_TOKEN = ""YOUR-APIFY-TOKEN"" OPENAI_API_KEY = ""YOUR-OPENAI-API-KEY"" # COHERE_API_KEY = ""YOUR-COHERE-API-KEY"" QDRANT_URL = ""YOUR-QDRANT-URL"" QDRANT_API_KEY = ""YOUR-QDRANT-API-KEY"" client = ApifyClient(APIFY_API_TOKEN) ``` 3. Call the [Website Content Crawler](https://apify.com/apify/website-content-crawler) Actor to crawl the Qdrant documentation and extract text content from the web pages: ```python actor_call = client.actor(""apify/website-content-crawler"").call( run_input={""startUrls"": [{""url"": ""https://qdrant.tech/documentation/""}]} ) ``` 4. Call the Qdrant integration and store all data in the Qdrant Vector Database: ```python qdrant_integration_inputs = { ""qdrantUrl"": QDRANT_URL, ""qdrantApiKey"": QDRANT_API_KEY, ""qdrantCollectionName"": ""apify"", ""qdrantAutoCreateCollection"": True, ""datasetId"": actor_call[""defaultDatasetId""], ""datasetFields"": [""text""], ""enableDeltaUpdates"": True, ""deltaUpdatesPrimaryDatasetFields"": [""url""], ""expiredObjectDeletionPeriodDays"": 30, ""embeddingsProvider"": ""OpenAI"", # ""Cohere"" ""embeddingsApiKey"": OPENAI_API_KEY, ""performChunking"": True, ""chunkSize"": 1000, ""chunkOverlap"": 0, } actor_call = client.actor(""apify/qdrant-integration"").call(run_input=qdrant_integration_inputs) ``` Upon running the script, the data from will be scraped, transformed into vector embeddings and stored in the Qdrant collection. ## Further Reading - Apify [Documentation](https://docs.apify.com/) - Apify [Templates](https://apify.com/templates) - Integration [Source Code](https://github.com/apify/actor-vector-database-integrations) ",documentation/platforms/apify.md "--- title: PrivateGPT aliases: [ ../integrations/privategpt/, ../frameworks/privategpt/ ] --- # PrivateGPT [PrivateGPT](https://docs.privategpt.dev/) is a production-ready AI project that allows you to inquire about your documents using Large Language Models (LLMs) with offline support. PrivateGPT uses Qdrant as the default vectorstore for ingesting and retrieving documents. ## Configuration Qdrant settings can be configured by setting values to the qdrant property in the `settings.yaml` file. By default, Qdrant tries to connect to an instance at http://localhost:3000. Example: ```yaml qdrant: url: ""https://xyz-example.eu-central.aws.cloud.qdrant.io:6333"" api_key: """" ``` The available [configuration options](https://docs.privategpt.dev/manual/storage/vector-stores#qdrant-configuration) are: | Field | Description | |--------------|-------------| | location | If `:memory:` - use in-memory Qdrant instance.
If `str` - use it as a `url` parameter.| | url | Either host or str of `Optional[scheme], host, Optional[port], Optional[prefix]`.
Eg. `http://localhost:6333` | | port | Port of the REST API interface. Default: `6333` | | grpc_port | Port of the gRPC interface. Default: `6334` | | prefer_grpc | If `true` - use gRPC interface whenever possible in custom methods. | | https | If `true` - use HTTPS(SSL) protocol.| | api_key | API key for authentication in Qdrant Cloud.| | prefix | If set, add `prefix` to the REST URL path.
Example: `service/v1` will result in `http://localhost:6333/service/v1/{qdrant-endpoint}` for REST API.| | timeout | Timeout for REST and gRPC API requests.
Default: 5.0 seconds for REST and unlimited for gRPC | | host | Host name of Qdrant service. If url and host are not set, defaults to 'localhost'.| | path | Persistence path for QdrantLocal. Eg. `local_data/private_gpt/qdrant`| | force_disable_check_same_thread | Force disable check_same_thread for QdrantLocal sqlite connection.| ## Next steps Find the PrivateGPT docs [here](https://docs.privategpt.dev/). ",documentation/platforms/privategpt.md "--- title: Pipedream aliases: [ ../frameworks/pipedream/ ] --- # Pipedream [Pipedream](https://pipedream.com/) is a development platform that allows developers to connect many different applications, data sources, and APIs in order to build automated cross-platform workflows. It also offers code-level control with Node.js, Python, Go, or Bash if required. You can use the [Qdrant app](https://pipedream.com/apps/qdrant) in Pipedream to add vector search capabilities to your workflows. ## Prerequisites 1. A Qdrant instance to connect to. You can get a free cloud instance at [cloud.qdrant.io](https://cloud.qdrant.io/). 2. A [Pipedream project](https://pipedream.com/) to develop your workflows. ## Setting Up Search for the Qdrant app in your workflow apps. ![Qdrant Pipedream App](/documentation/frameworks/pipedream/qdrant-app.png) The Qdrant app offers extensible API interface and pre-built actions. ![Qdrant App Features](/documentation/frameworks/pipedream/app-features.png) Select any of the actions of the app to set up a connection. ![Qdrant Connect Account](/documentation/frameworks/pipedream/app-upsert-action.png) Configure connection with the credentials of your Qdrant instance. ![Qdrant Connection Credentials](/documentation/frameworks/pipedream/app-connection.png) You can verify your credentials using the ""Test Connection"" button. Once a connection is set up, you can use the app to build workflows with the [2000+ apps supported by Pipedream](https://pipedream.com/apps/). ## Further Reading - [Pipedream Documentation](https://pipedream.com/docs). - [Qdrant Cloud Authentication](https://qdrant.tech/documentation/cloud/authentication/). - [Source Code](https://github.com/PipedreamHQ/pipedream/tree/master/components/qdrant) ",documentation/platforms/pipedream.md "--- title: Ironclad Rivet aliases: [ ../frameworks/rivet/ ] --- # Ironclad Rivet [Rivet](https://rivet.ironcladapp.com/) is an Integrated Development Environment (IDE) and library designed for creating AI agents using a visual, graph-based interface. Qdrant is available as a [plugin](https://github.com/qdrant/rivet-plugin-qdrant) for building vector-search powered workflows in Rivet. ## Installation - Open the plugins overlay at the top of the screen. - Search for the official Qdrant plugin. - Click the ""Add"" button to install it in your current project. ![Rivet plugin installation](/documentation/frameworks/rivet/installation.png) ## Setting up the connection You can configure your Qdrant instance credentials in the Rivet settings after installing the plugin. ![Rivet plugin connection](/documentation/frameworks/rivet/connection.png) Once you've configured your credentials, you can right-click on your workspace to add nodes from the plugin and get building! ![Rivet plugin nodes](/documentation/frameworks/rivet/node.png) ## Further Reading - Rivet [Tutorial](https://rivet.ironcladapp.com/docs/tutorial). - Rivet [Documentation](https://rivet.ironcladapp.com/docs). - Plugin [Source Code](https://github.com/qdrant/rivet-plugin-qdrant) ",documentation/platforms/rivet.md "--- title: DocsGPT aliases: [ ../frameworks/docsgpt/ ] --- # DocsGPT [DocsGPT](https://docsgpt.arc53.com/) is an open-source documentation assistant that enables you to build conversational user experiences on top of your data. Qdrant is supported as a vectorstore in DocsGPT to ingest and semantically retrieve documents. ## Configuration Learn how to setup DocsGPT in their [Quickstart guide](https://docs.docsgpt.co.uk/Deploying/Quickstart). You can configure DocsGPT with environment variables in a `.env` file. To configure DocsGPT to use Qdrant as the vector store, set `VECTOR_STORE` to `""qdrant""`. ```bash echo ""VECTOR_STORE=qdrant"" >> .env ``` DocsGPT includes a list of the Qdrant configuration options that you can set as environment variables [here](https://github.com/arc53/DocsGPT/blob/00dfb07b15602319bddb95089e3dab05fac56240/application/core/settings.py#L46-L59). ## Further reading - [DocsGPT Reference](https://github.com/arc53/DocsGPT) ",documentation/platforms/docsgpt.md "--- title: Platforms weight: 15 --- ## Platform Integrations | Platform | Description | | ------------------------------------- | ---------------------------------------------------------------------------------------------------- | | [Apify](./apify/) | Platform to build web scrapers and automate web browser tasks. | | [Bubble](./bubble) | Development platform for application development with a no-code interface | | [BuildShip](./buildship) | Low-code visual builder to create APIs, scheduled jobs, and backend workflows. | | [DocsGPT](./docsgpt/) | Tool for ingesting documentation sources and enabling conversations and queries. | | [Make](./make/) | Cloud platform to build low-code workflows by integrating various software applications. | | [N8N](./n8n/) | Platform for node-based, low-code workflow automation. | | [Pipedream](./pipedream/) | Platform for connecting apps and developing event-driven automation. | | [Portable.io](./portable/) | Cloud platform for developing and deploying ELT transformations. | | [PrivateGPT](./privategpt/) | Tool to ask questions about your documents using local LLMs emphasising privacy. | | [Rivet](./rivet/) | A visual programming environment for building AI agents with LLMs. | ",documentation/platforms/_index.md "--- title: N8N aliases: [ ../frameworks/n8n/ ] --- # N8N [N8N](https://n8n.io/) is an automation platform that allows you to build flexible workflows focused on deep data integration. Qdrant is available as a vectorstore node in N8N for building AI-powered functionality within your workflows. ## Prerequisites 1. A Qdrant instance to connect to. You can get a free cloud instance at [cloud.qdrant.io](https://cloud.qdrant.io/). 2. A running N8N instance. You can learn more about using the N8N cloud or self-hosting [here](https://docs.n8n.io/choose-n8n/). ## Setting up the vectorstore Select the Qdrant vectorstore from the list of nodes in your workflow editor. ![Qdrant n8n node](/documentation/frameworks/n8n/node.png) You can now configure the vectorstore node according to your workflow requirements. The configuration options reference can be found [here](https://docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.vectorstoreqdrant/#node-parameters). ![Qdrant Config](/documentation/frameworks/n8n/config.png) Create a connection to Qdrant using your [instance credentials](/documentation/cloud/authentication/). ![Qdrant Credentials](/documentation/frameworks/n8n/credentials.png) The vectorstore supports the following operations: - Get Many - Get the top-ranked documents for a query. - Insert documents - Add documents to the vectorstore. - Retrieve documents - Retrieve documents for use with AI nodes. ## Further Reading - N8N vectorstore [reference](https://docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.vectorstoreqdrant/). - N8N AI-based workflows [reference](https://n8n.io/integrations/basic-llm-chain/). - [Source Code](https://github.com/n8n-io/n8n/tree/master/packages/@n8n/nodes-langchain/nodes/vector_store/VectorStoreQdrant)",documentation/platforms/n8n.md "--- title: Semantic Querying with Airflow and Astronomer weight: 36 aliases: - /documentation/examples/qdrant-airflow-astronomer/ --- # Semantic Querying with Airflow and Astronomer | Time: 45 min | Level: Intermediate | | | | ------------ | ------------------- | --- | --- | In this tutorial, you will use Qdrant as a [provider](https://airflow.apache.org/docs/apache-airflow-providers-qdrant/stable/index.html) in [Apache Airflow](https://airflow.apache.org/), an open-source tool that lets you setup data-engineering workflows. You will write the pipeline as a DAG (Directed Acyclic Graph) in Python. With this, you can leverage the powerful suite of Python's capabilities and libraries to achieve almost anything your data pipeline needs. [Astronomer](https://www.astronomer.io/) is a managed platform that simplifies the process of developing and deploying Airflow projects via its easy-to-use CLI and extensive automation capabilities. Airflow is useful when running operations in Qdrant based on data events or building parallel tasks for generating vector embeddings. By using Airflow, you can set up monitoring and alerts for your pipelines for full observability. ## Prerequisites Please make sure you have the following ready: - A running Qdrant instance. We'll be using a free instance from - The Astronomer CLI. Find the installation instructions [here](https://docs.astronomer.io/astro/cli/install-cli). - A [HuggingFace token](https://huggingface.co/docs/hub/en/security-tokens) to generate embeddings. ## Implementation We'll be building a DAG that generates embeddings in parallel for our data corpus and performs semantic retrieval based on user input. ### Set up the project The Astronomer CLI makes it very straightforward to set up the Airflow project: ```console mkdir qdrant-airflow-tutorial && cd qdrant-airflow-tutorial astro dev init ``` This command generates all of the project files you need to run Airflow locally. You can find a directory called `dags`, which is where we can place our Python DAG files. To use Qdrant within Airflow, install the Qdrant Airflow provider by adding the following to the `requirements.txt` file ```text apache-airflow-providers-qdrant ``` ### Configure credentials We can set up provider connections using the Airflow UI, environment variables or the `airflow_settings.yml` file. Add the following to the `.env` file in the project. Replace the values as per your credentials. ```env HUGGINGFACE_TOKEN="""" AIRFLOW_CONN_QDRANT_DEFAULT='{ ""conn_type"": ""qdrant"", ""host"": ""xyz-example.eu-central.aws.cloud.qdrant.io:6333"", ""password"": """" }' ``` ### Add the data corpus Let's add some sample data to work with. Paste the following content into a file called `books.txt` file within the `include` directory. ```text 1 | To Kill a Mockingbird (1960) | fiction | Harper Lee's Pulitzer Prize-winning novel explores racial injustice and moral growth through the eyes of young Scout Finch in the Deep South. 2 | Harry Potter and the Sorcerer's Stone (1997) | fantasy | J.K. Rowling's magical tale follows Harry Potter as he discovers his wizarding heritage and attends Hogwarts School of Witchcraft and Wizardry. 3 | The Great Gatsby (1925) | fiction | F. Scott Fitzgerald's classic novel delves into the glitz, glamour, and moral decay of the Jazz Age through the eyes of narrator Nick Carraway and his enigmatic neighbour, Jay Gatsby. 4 | 1984 (1949) | dystopian | George Orwell's dystopian masterpiece paints a chilling picture of a totalitarian society where individuality is suppressed and the truth is manipulated by a powerful regime. 5 | The Catcher in the Rye (1951) | fiction | J.D. Salinger's iconic novel follows disillusioned teenager Holden Caulfield as he navigates the complexities of adulthood and society's expectations in post-World War II America. 6 | Pride and Prejudice (1813) | romance | Jane Austen's beloved novel revolves around the lively and independent Elizabeth Bennet as she navigates love, class, and societal expectations in Regency-era England. 7 | The Hobbit (1937) | fantasy | J.R.R. Tolkien's adventure follows Bilbo Baggins, a hobbit who embarks on a quest with a group of dwarves to reclaim their homeland from the dragon Smaug. 8 | The Lord of the Rings (1954-1955) | fantasy | J.R.R. Tolkien's epic fantasy trilogy follows the journey of Frodo Baggins to destroy the One Ring and defeat the Dark Lord Sauron in the land of Middle-earth. 9 | The Alchemist (1988) | fiction | Paulo Coelho's philosophical novel follows Santiago, an Andalusian shepherd boy, on a journey of self-discovery and spiritual awakening as he searches for a hidden treasure. 10 | The Da Vinci Code (2003) | mystery/thriller | Dan Brown's gripping thriller follows symbologist Robert Langdon as he unravels clues hidden in art and history while trying to solve a murder mystery with far-reaching implications. ``` Now, the hacking part - writing our Airflow DAG! ### Write the dag We'll add the following content to a `books_recommend.py` file within the `dags` directory. Let's go over what it does for each task. ```python import os import requests from airflow.decorators import dag, task from airflow.models.baseoperator import chain from airflow.models.param import Param from airflow.providers.qdrant.hooks.qdrant import QdrantHook from airflow.providers.qdrant.operators.qdrant import QdrantIngestOperator from pendulum import datetime from qdrant_client import models QDRANT_CONNECTION_ID = ""qdrant_default"" DATA_FILE_PATH = ""include/books.txt"" COLLECTION_NAME = ""airflow_tutorial_collection"" EMBEDDING_MODEL_ID = ""sentence-transformers/all-MiniLM-L6-v2"" EMBEDDING_DIMENSION = 384 SIMILARITY_METRIC = models.Distance.COSINE def embed(text: str) -> list: HUGGINFACE_URL = f""https://api-inference.huggingface.co/pipeline/feature-extraction/{EMBEDDING_MODEL_ID}"" response = requests.post( HUGGINFACE_URL, headers={""Authorization"": f""Bearer {os.getenv('HUGGINGFACE_TOKEN')}""}, json={""inputs"": [text], ""options"": {""wait_for_model"": True}}, ) return response.json()[0] @dag( dag_id=""books_recommend"", start_date=datetime(2023, 10, 18), schedule=None, catchup=False, params={""preference"": Param(""Something suspenseful and thrilling."", type=""string"")}, ) def recommend_book(): @task def import_books(text_file_path: str) -> list: data = [] with open(text_file_path, ""r"") as f: for line in f: _, title, genre, description = line.split(""|"") data.append( { ""title"": title.strip(), ""genre"": genre.strip(), ""description"": description.strip(), } ) return data @task def init_collection(): hook = QdrantHook(conn_id=QDRANT_CONNECTION_ID) if not hook.conn..collection_exists(COLLECTION_NAME): hook.conn.create_collection( COLLECTION_NAME, vectors_config=models.VectorParams( size=EMBEDDING_DIMENSION, distance=SIMILARITY_METRIC ), ) @task def embed_description(data: dict) -> list: return embed(data[""description""]) books = import_books(text_file_path=DATA_FILE_PATH) embeddings = embed_description.expand(data=books) qdrant_vector_ingest = QdrantIngestOperator( conn_id=QDRANT_CONNECTION_ID, task_id=""qdrant_vector_ingest"", collection_name=COLLECTION_NAME, payload=books, vectors=embeddings, ) @task def embed_preference(**context) -> list: user_mood = context[""params""][""preference""] response = embed(text=user_mood) return response @task def search_qdrant( preference_embedding: list, ) -> None: hook = QdrantHook(conn_id=QDRANT_CONNECTION_ID) result = hook.conn.query_points( collection_name=COLLECTION_NAME, query=preference_embedding, limit=1, with_payload=True, ).points print(""Book recommendation: "" + result[0].payload[""title""]) print(""Description: "" + result[0].payload[""description""]) chain( init_collection(), qdrant_vector_ingest, search_qdrant(embed_preference()), ) recommend_book() ``` `import_books`: This task reads a text file containing information about the books (like title, genre, and description), and then returns the data as a list of dictionaries. `init_collection`: This task initializes a collection in the Qdrant database, where we will store the vector representations of the book descriptions. `embed_description`: This is a dynamic task that creates one mapped task instance for each book in the list. The task uses the `embed` function to generate vector embeddings for each description. To use a different embedding model, you can adjust the `EMBEDDING_MODEL_ID`, `EMBEDDING_DIMENSION` values. `embed_user_preference`: Here, we take a user's input and convert it into a vector using the same pre-trained model used for the book descriptions. `qdrant_vector_ingest`: This task ingests the book data into the Qdrant collection using the [QdrantIngestOperator](https://airflow.apache.org/docs/apache-airflow-providers-qdrant/1.0.0/), associating each book description with its corresponding vector embeddings. `search_qdrant`: Finally, this task performs a search in the Qdrant database using the vectorized user preference. It finds the most relevant book in the collection based on vector similarity. ### Run the DAG Head over to your terminal and run ```astro dev start``` A local Airflow container should spawn. You can now access the Airflow UI at . Visit our DAG by clicking on `books_recommend`. ![DAG](/documentation/examples/airflow/demo-dag.png) Hit the PLAY button on the right to run the DAG. You'll be asked for input about your preference, with the default value already filled in. ![Preference](/documentation/examples/airflow/preference-input.png) After your DAG run completes, you should be able to see the output of your search in the logs of the `search_qdrant` task. ![Output](/documentation/examples/airflow/output.png) There you have it, an Airflow pipeline that interfaces with Qdrant! Feel free to fiddle around and explore Airflow. There are references below that might come in handy. ## Further reading - [Introduction to Airflow](https://docs.astronomer.io/learn/intro-to-airflow) - [Airflow Concepts](https://docs.astronomer.io/learn/category/airflow-concepts) - [Airflow Reference](https://airflow.apache.org/docs/) - [Astronomer Documentation](https://docs.astronomer.io/) ",documentation/send-data/qdrant-airflow-astronomer.md "--- title: Qdrant on Databricks weight: 36 aliases: - /documentation/examples/databricks/ --- # Qdrant on Databricks | Time: 30 min | Level: Intermediate | [Complete Notebook](https://databricks-prod-cloudfront.cloud.databricks.com/public/4027ec902e239c93eaaa8714f173bcfc/4750876096379825/93425612168199/6949977306828869/latest.html) | | ------------ | ------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | [Databricks](https://www.databricks.com/) is a unified analytics platform for working with big data and AI. It's built around Apache Spark, a powerful open-source distributed computing system well-suited for processing large-scale datasets and performing complex analytics tasks. Apache Spark is designed to scale horizontally, meaning it can handle expensive operations like generating vector embeddings by distributing computation across a cluster of machines. This scalability is crucial when dealing with large datasets. In this example, we will demonstrate how to vectorize a dataset with dense and sparse embeddings using Qdrant's [FastEmbed](https://qdrant.github.io/fastembed/) library. We will then load this vectorized data into a Qdrant cluster using the [Qdrant Spark connector](/documentation/frameworks/spark/) on Databricks. ### Setting up a Databricks project - Set up a **[Databricks cluster](https://docs.databricks.com/en/compute/configure.html)** following the official documentation guidelines. - Install the **[Qdrant Spark connector](/documentation/frameworks/spark/)** as a library: - Navigate to the `Libraries` section in your cluster dashboard. - Click on `Install New` at the top-right to open the library installation modal. - Search for `io.qdrant:spark:VERSION` in the Maven packages and click on `Install`. ![Install the library](/documentation/examples/databricks/library-install.png) - Create a new **[Databricks notebook](https://docs.databricks.com/en/notebooks/index.html)** on your cluster to begin working with your data and libraries. ### Download a dataset - **Install the required dependencies:** ```python %pip install fastembed datasets ``` - **Download the dataset:** ```python from datasets import load_dataset dataset_name = ""tasksource/med"" dataset = load_dataset(dataset_name, split=""train"") # We'll use the first 100 entries from this dataset and exclude some unused columns. dataset = dataset.select(range(100)).remove_columns([""gold_label"", ""genre""]) ``` - **Convert the dataset into a Spark dataframe:** ```python dataset.to_parquet(""/dbfs/pq.pq"") dataset_df = spark.read.parquet(""file:/dbfs/pq.pq"") ``` ### Vectorizing the data In this section, we'll be generating both dense and sparse vectors for our rows using [FastEmbed](https://qdrant.github.io/fastembed/). We'll create a user-defined function (UDF) to handle this step. #### Creating the vectorization function ```python from fastembed import TextEmbedding, SparseTextEmbedding def vectorize(partition_data): # Initialize dense and sparse models dense_model = TextEmbedding(model_name=""BAAI/bge-small-en-v1.5"") sparse_model = SparseTextEmbedding(model_name=""Qdrant/bm25"") for row in partition_data: # Generate dense and sparse vectors dense_vector = next(dense_model.embed(row.sentence1)) sparse_vector = next(sparse_model.embed(row.sentence2)) yield [ row.sentence1, # 1st column: original text row.sentence2, # 2nd column: original text dense_vector.tolist(), # 3rd column: dense vector sparse_vector.indices.tolist(), # 4th column: sparse vector indices sparse_vector.values.tolist(), # 5th column: sparse vector values ] ``` We're using the [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) model for dense embeddings and [BM25](https://huggingface.co/Qdrant/bm25) for sparse embeddings. #### Applying the UDF on our dataframe Next, let's apply our `vectorize` UDF on our Spark dataframe to generate embeddings. ```python embeddings = dataset_df.rdd.mapPartitions(vectorize) ``` The `mapPartitions()` method returns a [Resilient Distributed Dataset (RDD)](https://www.databricks.com/glossary/what-is-rdd) which should then be converted back to a Spark dataframe. #### Building the new Spark dataframe with the vectorized data We'll now create a new Spark dataframe (`embeddings_df`) with the vectorized data using the specified schema. ```python from pyspark.sql.types import StructType, StructField, StringType, ArrayType, FloatType, IntegerType # Define the schema for the new dataframe schema = StructType([ StructField(""sentence1"", StringType()), StructField(""sentence2"", StringType()), StructField(""dense_vector"", ArrayType(FloatType())), StructField(""sparse_vector_indices"", ArrayType(IntegerType())), StructField(""sparse_vector_values"", ArrayType(FloatType())) ]) # Create the new dataframe with the vectorized data embeddings_df = spark.createDataFrame(data=embeddings, schema=schema) ``` ### Uploading the data to Qdrant - **Create a Qdrant collection:** - [Follow the documentation](/documentation/concepts/collections/#create-a-collection) to create a collection with the appropriate configurations. Here's an example request to support both dense and sparse vectors: ```json PUT /collections/{collection_name} { ""vectors"": { ""dense"": { ""size"": 384, ""distance"": ""Cosine"" } }, ""sparse_vectors"": { ""sparse"": {} } } ``` - **Upload the dataframe to Qdrant:** ```python options = { ""qdrant_url"": """", ""api_key"": """", ""collection_name"": """", ""vector_fields"": ""dense_vector"", ""vector_names"": ""dense"", ""sparse_vector_value_fields"": ""sparse_vector_values"", ""sparse_vector_index_fields"": ""sparse_vector_indices"", ""sparse_vector_names"": ""sparse"", ""schema"": embeddings_df.schema.json(), } embeddings_df.write.format(""io.qdrant.spark.Qdrant"").options(**options).mode( ""append"" ).save() ``` Ensure to replace the placeholder values (``, ``, ``) with your actual values. If the `id_field` option is not specified, Qdrant Spark connector generates random UUIDs for each point. The command output you should see is similar to: ```console Command took 40.37 seconds -- by xxxxx90@xxxxxx.com at 4/17/2024, 12:13:28 PM on fastembed ``` ### Conclusion That wraps up our tutorial! Feel free to explore more functionalities and experiments with different models, parameters, and features available in Databricks, Spark, and Qdrant. Happy data engineering! ",documentation/send-data/databricks.md "--- title: How to Setup Seamless Data Streaming with Kafka and Qdrant weight: 49 aliases: - /examples/data-streaming-kafka-qdrant/ --- # Setup Data Streaming with Kafka via Confluent **Author:** [M K Pavan Kumar](https://www.linkedin.com/in/kameshwara-pavan-kumar-mantha-91678b21/) , research scholar at [IIITDM, Kurnool](https://iiitk.ac.in). Specialist in hallucination mitigation techniques and RAG methodologies. • [GitHub](https://github.com/pavanjava) • [Medium](https://medium.com/@manthapavankumar11) ## Introduction This guide will walk you through the detailed steps of installing and setting up the [Qdrant Sink Connector](https://github.com/qdrant/qdrant-kafka), building the necessary infrastructure, and creating a practical playground application. By the end of this article, you will have a deep understanding of how to leverage this powerful integration to streamline your data workflows, ultimately enhancing the performance and capabilities of your data-driven real-time semantic search and RAG applications. In this example, original data will be sourced from Azure Blob Storage and MongoDB. ![1.webp](/documentation/examples/data-streaming-kafka-qdrant/1.webp) Figure 1: [Real time Change Data Capture (CDC)](https://www.confluent.io/learn/change-data-capture/) with Kafka and Qdrant. ## The Architecture: ## Source Systems The architecture begins with the **source systems**, represented by MongoDB and Azure Blob Storage. These systems are vital for storing and managing raw data. MongoDB, a popular NoSQL database, is known for its flexibility in handling various data formats and its capability to scale horizontally. It is widely used for applications that require high performance and scalability. Azure Blob Storage, on the other hand, is Microsoft’s object storage solution for the cloud. It is designed for storing massive amounts of unstructured data, such as text or binary data. The data from these sources is extracted using **source connectors**, which are responsible for capturing changes in real-time and streaming them into Kafka. ## Kafka At the heart of this architecture lies **Kafka**, a distributed event streaming platform capable of handling trillions of events a day. Kafka acts as a central hub where data from various sources can be ingested, processed, and distributed to various downstream systems. Its fault-tolerant and scalable design ensures that data can be reliably transmitted and processed in real-time. Kafka’s capability to handle high-throughput, low-latency data streams makes it an ideal choice for real-time data processing and analytics. The use of **Confluent** enhances Kafka’s functionalities, providing additional tools and services for managing Kafka clusters and stream processing. ## Qdrant The processed data is then routed to **Qdrant**, a highly scalable vector search engine designed for similarity searches. Qdrant excels at managing and searching through high-dimensional vector data, which is essential for applications involving machine learning and AI, such as recommendation systems, image recognition, and natural language processing. The **Qdrant Sink Connector** for Kafka plays a pivotal role here, enabling seamless integration between Kafka and Qdrant. This connector allows for the real-time ingestion of vector data into Qdrant, ensuring that the data is always up-to-date and ready for high-performance similarity searches. ## Integration and Pipeline Importance The integration of these components forms a powerful and efficient data streaming pipeline. The **Qdrant Sink Connector** ensures that the data flowing through Kafka is continuously ingested into Qdrant without any manual intervention. This real-time integration is crucial for applications that rely on the most current data for decision-making and analysis. By combining the strengths of MongoDB and Azure Blob Storage for data storage, Kafka for data streaming, and Qdrant for vector search, this pipeline provides a robust solution for managing and processing large volumes of data in real-time. The architecture’s scalability, fault-tolerance, and real-time processing capabilities are key to its effectiveness, making it a versatile solution for modern data-driven applications. ## Installation of Confluent Kafka Platform To install the Confluent Kafka Platform (self-managed locally), follow these 3 simple steps: **Download and Extract the Distribution Files:** - Visit [Confluent Installation Page](https://www.confluent.io/installation/). - Download the distribution files (tar, zip, etc.). - Extract the downloaded file using: ```bash tar -xvf confluent-.tar.gz ``` or ```bash unzip confluent-.zip ``` **Configure Environment Variables:** ```bash # Set CONFLUENT_HOME to the installation directory: export CONFLUENT_HOME=/path/to/confluent- # Add Confluent binaries to your PATH export PATH=$CONFLUENT_HOME/bin:$PATH ``` **Run Confluent Platform Locally:** ```bash # Start the Confluent Platform services: confluent local start # Stop the Confluent Platform services: confluent local stop ``` ## Installation of Qdrant: To install and run Qdrant (self-managed locally), you can use Docker, which simplifies the process. First, ensure you have Docker installed on your system. Then, you can pull the Qdrant image from Docker Hub and run it with the following commands: ```bash docker pull qdrant/qdrant docker run -p 6334:6334 -p 6333:6333 qdrant/qdrant ``` This will download the Qdrant image and start a Qdrant instance accessible at `http://localhost:6333`. For more detailed instructions and alternative installation methods, refer to the [Qdrant installation documentation](https://qdrant.tech/documentation/quick-start/). ## Installation of Qdrant-Kafka Sink Connector: To install the Qdrant Kafka connector using [Confluent Hub](https://www.confluent.io/hub/), you can utilize the straightforward `confluent-hub install` command. This command simplifies the process by eliminating the need for manual configuration file manipulations. To install the Qdrant Kafka connector version 1.1.0, execute the following command in your terminal: ```bash confluent-hub install qdrant/qdrant-kafka:1.1.0 ``` This command downloads and installs the specified connector directly from Confluent Hub into your Confluent Platform or Kafka Connect environment. The installation process ensures that all necessary dependencies are handled automatically, allowing for a seamless integration of the Qdrant Kafka connector with your existing setup. Once installed, the connector can be configured and managed using the Confluent Control Center or the Kafka Connect REST API, enabling efficient data streaming between Kafka and Qdrant without the need for intricate manual setup. ![2.webp](/documentation/examples/data-streaming-kafka-qdrant/2.webp) *Figure 2: Local Confluent platform showing the Source and Sink connectors after installation.* Ensure the configuration of the connector once it's installed as below. keep in mind that your `key.converter` and `value.converter` are very important for kafka to safely deliver the messages from topic to qdrant. ```bash { ""name"": ""QdrantSinkConnectorConnector_0"", ""config"": { ""value.converter.schemas.enable"": ""false"", ""name"": ""QdrantSinkConnectorConnector_0"", ""connector.class"": ""io.qdrant.kafka.QdrantSinkConnector"", ""key.converter"": ""org.apache.kafka.connect.storage.StringConverter"", ""value.converter"": ""org.apache.kafka.connect.json.JsonConverter"", ""topics"": ""topic_62,qdrant_kafka.docs"", ""errors.deadletterqueue.topic.name"": ""dead_queue"", ""errors.deadletterqueue.topic.replication.factor"": ""1"", ""qdrant.grpc.url"": ""http://localhost:6334"", ""qdrant.api.key"": ""************"" } } ``` ## Installation of MongoDB For the Kafka to connect MongoDB as source, your MongoDB instance should be running in a `replicaSet` mode. below is the `docker compose` file which will spin a single node `replicaSet` instance of MongoDB. ```bash version: ""3.8"" services: mongo1: image: mongo:7.0 command: [""--replSet"", ""rs0"", ""--bind_ip_all"", ""--port"", ""27017""] ports: - 27017:27017 healthcheck: test: echo ""try { rs.status() } catch (err) { rs.initiate({_id:'rs0',members:[{_id:0,host:'host.docker.internal:27017'}]}) }"" | mongosh --port 27017 --quiet interval: 5s timeout: 30s start_period: 0s start_interval: 1s retries: 30 volumes: - ""mongo1_data:/data/db"" - ""mongo1_config:/data/configdb"" volumes: mongo1_data: mongo1_config: ``` Similarly, install and configure source connector as below. ```bash confluent-hub install mongodb/kafka-connect-mongodb:latest ``` After installing the `MongoDB` connector, connector configuration should look like this: ```bash { ""name"": ""MongoSourceConnectorConnector_0"", ""config"": { ""connector.class"": ""com.mongodb.kafka.connect.MongoSourceConnector"", ""key.converter"": ""org.apache.kafka.connect.storage.StringConverter"", ""value.converter"": ""org.apache.kafka.connect.storage.StringConverter"", ""connection.uri"": ""mongodb://127.0.0.1:27017/?replicaSet=rs0&directConnection=true"", ""database"": ""qdrant_kafka"", ""collection"": ""docs"", ""publish.full.document.only"": ""true"", ""topic.namespace.map"": ""{\""*\"":\""qdrant_kafka.docs\""}"", ""copy.existing"": ""true"" } } ``` ## Playground Application As the infrastructure set is completely done, now it's time for us to create a simple application and check our setup. the objective of our application is the data is inserted to Mongodb and eventually it will get ingested into Qdrant also using [Change Data Capture (CDC)](https://www.confluent.io/learn/change-data-capture/). `requirements.txt` ```bash fastembed==0.3.1 pymongo==4.8.0 qdrant_client==1.10.1 ``` `project_root_folder/main.py` This is just sample code. Nevertheless it can be extended to millions of operations based on your use case. ```python from pymongo import MongoClient from utils.app_utils import create_qdrant_collection from fastembed import TextEmbedding collection_name: str = 'test' embed_model_name: str = 'snowflake/snowflake-arctic-embed-s' ``` ```python # Step 0: create qdrant_collection create_qdrant_collection(collection_name=collection_name, embed_model=embed_model_name) # Step 1: Connect to MongoDB client = MongoClient('mongodb://127.0.0.1:27017/?replicaSet=rs0&directConnection=true') # Step 2: Select Database db = client['qdrant_kafka'] # Step 3: Select Collection collection = db['docs'] # Step 4: Create a Document to Insert description = ""qdrant is a high available vector search engine"" embedding_model = TextEmbedding(model_name=embed_model_name) vector = next(embedding_model.embed(documents=description)).tolist() document = { ""collection_name"": collection_name, ""id"": 1, ""vector"": vector, ""payload"": { ""name"": ""qdrant"", ""description"": description, ""url"": ""https://qdrant.tech/documentation"" } } # Step 5: Insert the Document into the Collection result = collection.insert_one(document) # Step 6: Print the Inserted Document's ID print(""Inserted document ID:"", result.inserted_id) ``` `project_root_folder/utils/app_utils.py` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"", api_key="""") dimension_dict = {""snowflake/snowflake-arctic-embed-s"": 384} def create_qdrant_collection(collection_name: str, embed_model: str): if not client.collection_exists(collection_name=collection_name): client.create_collection( collection_name=collection_name, vectors_config=models.VectorParams(size=dimension_dict.get(embed_model), distance=models.Distance.COSINE) ) ``` Before we run the application, below is the state of MongoDB and Qdrant databases. ![3.webp](/documentation/examples/data-streaming-kafka-qdrant/3.webp) Figure 3: Initial state: no collection named `test` & `no data` in the `docs` collection of MongodDB. Once you run the code the data goes into Mongodb and the CDC gets triggered and eventually Qdrant will receive this data. ![4.webp](/documentation/examples/data-streaming-kafka-qdrant/4.webp) Figure 4: The test Qdrant collection is created automatically. ![5.webp](/documentation/examples/data-streaming-kafka-qdrant/5.webp) Figure 5: Data is inserted into both MongoDB and Qdrant. ## Conclusion: In conclusion, the integration of **Kafka** with **Qdrant** using the **Qdrant Sink Connector** provides a seamless and efficient solution for real-time data streaming and processing. This setup not only enhances the capabilities of your data pipeline but also ensures that high-dimensional vector data is continuously indexed and readily available for similarity searches. By following the installation and setup guide, you can easily establish a robust data flow from your **source systems** like **MongoDB** and **Azure Blob Storage**, through **Kafka**, and into **Qdrant**. This architecture empowers modern applications to leverage real-time data insights and advanced search capabilities, paving the way for innovative data-driven solutions.",documentation/send-data/data-streaming-kafka-qdrant.md "--- title: Send Data to Qdrant weight: 18 --- ## How to Send Your Data to a Qdrant Cluster | Example | Description | Stack | |---------------------------------------------------------------------------------|-------------------------------------------------------------------|---------------------------------------------| | [Pinecone to Qdrant Data Transfer](https://githubtocolab.com/qdrant/examples/blob/master/data-migration/from-pinecone-to-qdrant.ipynb) | Migrate your vector data from Pinecone to Qdrant. | Qdrant, Vector-io | | [Stream Data to Qdrant with Kafka](../send-data/data-streaming-kafka-qdrant/) | Use Confluent to Stream Data to Qdrant via Managed Kafka. | Qdrant, Kafka | | [Qdrant on Databricks](../send-data/databricks/) | Learn how to use Qdrant on Databricks using the Spark connector | Qdrant, Databricks, Apache Spark | | [Qdrant with Airflow and Astronomer](../send-data/qdrant-airflow-astronomer/) | Build a semantic querying system using Airflow and Astronomer | Qdrant, Airflow, Astronomer |",documentation/send-data/_index.md "--- title: Snowflake Models weight: 2900 --- # Snowflake Qdrant supports working with [Snowflake](https://www.snowflake.com/blog/introducing-snowflake-arctic-embed-snowflakes-state-of-the-art-text-embedding-family-of-models/) text embedding models. You can find all the available models on [HuggingFace](https://huggingface.co/Snowflake). ### Setting up the Qdrant and Snowflake models ```python from qdrant_client import QdrantClient from fastembed import TextEmbedding qclient = QdrantClient("":memory:"") embedding_model = TextEmbedding(""snowflake/snowflake-arctic-embed-s"") texts = [ ""Qdrant is the best vector search engine!"", ""Loved by Enterprises and everyone building for low latency, high performance, and scale."", ] ``` ```typescript import {QdrantClient} from '@qdrant/js-client-rest'; import { pipeline } from '@xenova/transformers'; const client = new QdrantClient({ url: 'http://localhost:6333' }); const extractor = await pipeline('feature-extraction', 'Snowflake/snowflake-arctic-embed-s'); const texts = [ ""Qdrant is the best vector search engine!"", ""Loved by Enterprises and everyone building for low latency, high performance, and scale."", ] ``` The following example shows how to embed documents with the [`snowflake-arctic-embed-s`](https://huggingface.co/Snowflake/snowflake-arctic-embed-s) model that generates sentence embeddings of size 384. ### Embedding documents ```python embeddings = embedding_model.embed(texts) ``` ```typescript const embeddings = await extractor(texts, { normalize: true, pooling: 'cls' }); ``` ### Converting the model outputs to Qdrant points ```python from qdrant_client.models import PointStruct points = [ PointStruct( id=idx, vector=embedding, payload={""text"": text}, ) for idx, (embedding, text) in enumerate(zip(embeddings, texts)) ] ``` ```typescript let points = embeddings.tolist().map((embedding, i) => { return { id: i, vector: embedding, payload: { text: texts[i] } } }); ``` ### Creating a collection to insert the documents ```python from qdrant_client.models import VectorParams, Distance COLLECTION_NAME = ""example_collection"" qclient.create_collection( COLLECTION_NAME, vectors_config=VectorParams( size=384, distance=Distance.COSINE, ), ) qclient.upsert(COLLECTION_NAME, points) ``` ```typescript const COLLECTION_NAME = ""example_collection"" await client.createCollection(COLLECTION_NAME, { vectors: { size: 384, distance: 'Cosine', } }); await client.upsert(COLLECTION_NAME, { wait: true, points }); ``` ### Searching for documents with Qdrant Once the documents are added, you can search for the most relevant documents. ```python query_embedding = next(embedding_model.query_embed(""What is the best to use for vector search scaling?"")) qclient.search( collection_name=COLLECTION_NAME, query_vector=query_embedding, ) ``` ```typescript const query_embedding = await extractor(""What is the best to use for vector search scaling?"", { normalize: true, pooling: 'cls' }); await client.search(COLLECTION_NAME, { vector: query_embedding.tolist()[0], }); ``` ",documentation/embeddings/snowflake.md " --- title: Watsonx weight: 3000 aliases: - /documentation/examples/watsonx-search/ - /documentation/tutorials/watsonx-search/ - /documentation/integrations/watsonx/ --- # Using Watsonx with Qdrant Watsonx is IBM's platform for AI embeddings, focusing on enterprise-level text and data analytics. These embeddings are suitable for high-precision vector searches in Qdrant. ## Installation You can install the required package using the following pip command: ```bash pip install watsonx ``` ## Code Example ```python import qdrant_client from qdrant_client.models import Batch from watsonx import Watsonx # Initialize Watsonx AI model model = Watsonx(""watsonx-model"") # Generate embeddings for enterprise data text = ""Watsonx provides enterprise-level NLP solutions."" embeddings = model.embed(text) # Initialize Qdrant client qdrant_client = qdrant_client.QdrantClient(host=""localhost"", port=6333) # Upsert the embedding into Qdrant qdrant_client.upsert( collection_name=""EnterpriseData"", points=Batch( ids=[1], vectors=[embeddings], ) ) ``` ",documentation/embeddings/watsonx.md "--- title: Instruct weight: 1800 --- # Using Instruct with Qdrant Instruct is a specialized provider offering detailed embeddings for instructional content, which can be effectively used with Qdrant. With Instruct every text input is embedded together with instructions explaining the use case (e.g., task and domain descriptions). Unlike encoders from prior work that are more specialized, INSTRUCTOR is a single embedder that can generate text embeddings tailored to different downstream tasks and domains, without any further training. ## Installation ```bash pip install instruct ``` Below is an example of how to obtain embeddings using Instruct's API and store them in a Qdrant collection: ```python import qdrant_client from qdrant_client.models import Batch from instruct import Instruct # Initialize Instruct model model = Instruct(""instruct-base"") # Generate embeddings for instructional content text = ""Instruct provides detailed embeddings for learning content."" embeddings = model.embed(text) # Initialize Qdrant client qdrant_client = qdrant_client.QdrantClient(host=""localhost"", port=6333) # Upsert the embedding into Qdrant qdrant_client.upsert( collection_name=""LearningContent"", points=Batch( ids=[1], vectors=[embeddings], ) ) ``` ",documentation/embeddings/instruct.md "--- title: GPT4All weight: 1700 --- # Using GPT4All with Qdrant GPT4All offers a range of large language models that can be fine-tuned for various applications. GPT4All runs large language models (LLMs) privately on everyday desktops & laptops. No API calls or GPUs required - you can just download the application and get started. Use GPT4All in Python to program with LLMs implemented with the llama.cpp backend and Nomic's C backend. ## Installation You can install the required package using the following pip command: ```bash pip install gpt4all ``` Here is how you might connect to GPT4ALL using Qdrant: ```python import qdrant_client from qdrant_client.models import Batch from gpt4all import GPT4All # Initialize GPT4All model model = GPT4All(""gpt4all-lora-quantized"") # Generate embeddings for a text text = ""GPT4All enables open-source AI applications."" embeddings = model.embed(text) # Initialize Qdrant client qdrant_client = qdrant_client.QdrantClient(host=""localhost"", port=6333) # Upsert the embedding into Qdrant qdrant_client.upsert( collection_name=""OpenSourceAI"", points=Batch( ids=[1], vectors=[embeddings], ) ) ``` ",documentation/embeddings/gpt4all.md "--- title: Voyage AI weight: 3200 --- # Voyage AI Qdrant supports working with [Voyage AI](https://voyageai.com/) embeddings. The supported models' list can be found [here](https://docs.voyageai.com/docs/embeddings). You can generate an API key from the [Voyage AI dashboard]() to authenticate the requests. ### Setting up the Qdrant and Voyage clients ```python from qdrant_client import QdrantClient import voyageai VOYAGE_API_KEY = """" qclient = QdrantClient("":memory:"") vclient = voyageai.Client(api_key=VOYAGE_API_KEY) texts = [ ""Qdrant is the best vector search engine!"", ""Loved by Enterprises and everyone building for low latency, high performance, and scale."", ] ``` ```typescript import {QdrantClient} from '@qdrant/js-client-rest'; const VOYAGEAI_BASE_URL = ""https://api.voyageai.com/v1/embeddings"" const VOYAGEAI_API_KEY = """" const client = new QdrantClient({ url: 'http://localhost:6333' }); const headers = { ""Authorization"": ""Bearer "" + VOYAGEAI_API_KEY, ""Content-Type"": ""application/json"" } const texts = [ ""Qdrant is the best vector search engine!"", ""Loved by Enterprises and everyone building for low latency, high performance, and scale."", ] ``` The following example shows how to embed documents with the [`voyage-large-2`](https://docs.voyageai.com/docs/embeddings#model-choices) model that generates sentence embeddings of size 1536. ### Embedding documents ```python response = vclient.embed(texts, model=""voyage-large-2"", input_type=""document"") ``` ```typescript let body = { ""input"": texts, ""model"": ""voyage-large-2"", ""input_type"": ""document"", } let response = await fetch(VOYAGEAI_BASE_URL, { method: ""POST"", body: JSON.stringify(body), headers }); let response_body = await response.json(); ``` ### Converting the model outputs to Qdrant points ```python from qdrant_client.models import PointStruct points = [ PointStruct( id=idx, vector=embedding, payload={""text"": text}, ) for idx, (embedding, text) in enumerate(zip(response.embeddings, texts)) ] ``` ```typescript let points = response_body.data.map((data, i) => { return { id: i, vector: data.embedding, payload: { text: texts[i] } } }); ``` ### Creating a collection to insert the documents ```python from qdrant_client.models import VectorParams, Distance COLLECTION_NAME = ""example_collection"" qclient.create_collection( COLLECTION_NAME, vectors_config=VectorParams( size=1536, distance=Distance.COSINE, ), ) qclient.upsert(COLLECTION_NAME, points) ``` ```typescript const COLLECTION_NAME = ""example_collection"" await client.createCollection(COLLECTION_NAME, { vectors: { size: 1536, distance: 'Cosine', } }); await client.upsert(COLLECTION_NAME, { wait: true, points }); ``` ### Searching for documents with Qdrant Once the documents are added, you can search for the most relevant documents. ```python response = vclient.embed( [""What is the best to use for vector search scaling?""], model=""voyage-large-2"", input_type=""query"", ) qclient.search( collection_name=COLLECTION_NAME, query_vector=response.embeddings[0], ) ``` ```typescript body = { ""input"": [""What is the best to use for vector search scaling?""], ""model"": ""voyage-large-2"", ""input_type"": ""query"", }; response = await fetch(VOYAGEAI_BASE_URL, { method: ""POST"", body: JSON.stringify(body), headers }); response_body = await response.json(); await client.search(COLLECTION_NAME, { vector: response_body.data[0].embedding, }); ``` ",documentation/embeddings/voyage.md "--- title: Together AI weight: 3000 --- # Using Together AI with Qdrant Together AI focuses on collaborative AI embeddings that enhance multi-user search scenarios when integrated with Qdrant. ## Installation You can install the required package using the following pip command: ```bash pip install togetherai ``` ## Integration Example ```python import qdrant_client from qdrant_client.models import Batch from togetherai import TogetherAI # Initialize Together AI model model = TogetherAI(""togetherai-collab"") # Generate embeddings for collaborative content text = ""Together AI enhances collaborative content search."" embeddings = model.embed(text) # Initialize Qdrant client qdrant_client = qdrant_client.QdrantClient(host=""localhost"", port=6333) # Upsert the embedding into Qdrant qdrant_client.upsert( collection_name=""CollaborativeContent"", points=Batch( ids=[1], vectors=[embeddings], ) ) ``` ",documentation/embeddings/togetherai.md "--- title: OpenAI weight: 2700 aliases: [ ../integrations/openai/ ] --- # OpenAI Qdrant supports working with [OpenAI embeddings](https://platform.openai.com/docs/guides/embeddings/embeddings). There is an official OpenAI Python package that simplifies obtaining them, and it can be installed with pip: ```bash pip install openai ``` ### Setting up the OpenAI and Qdrant clients ```python import openai import qdrant_client openai_client = openai.Client( api_key="""" ) client = qdrant_client.QdrantClient("":memory:"") texts = [ ""Qdrant is the best vector search engine!"", ""Loved by Enterprises and everyone building for low latency, high performance, and scale."", ] ``` The following example shows how to embed a document with the `text-embedding-3-small` model that generates sentence embeddings of size 1536. You can find the list of all supported models [here](https://platform.openai.com/docs/models/embeddings). ### Embedding a document ```python embedding_model = ""text-embedding-3-small"" result = openai_client.embeddings.create(input=texts, model=embedding_model) ``` ### Converting the model outputs to Qdrant points ```python from qdrant_client.models import PointStruct points = [ PointStruct( id=idx, vector=data.embedding, payload={""text"": text}, ) for idx, (data, text) in enumerate(zip(result.data, texts)) ] ``` ### Creating a collection to insert the documents ```python from qdrant_client.models import VectorParams, Distance collection_name = ""example_collection"" client.create_collection( collection_name, vectors_config=VectorParams( size=1536, distance=Distance.COSINE, ), ) client.upsert(collection_name, points) ``` ## Searching for documents with Qdrant Once the documents are indexed, you can search for the most relevant documents using the same model. ```python client.search( collection_name=collection_name, query_vector=openai_client.embeddings.create( input=[""What is the best to use for vector search scaling?""], model=embedding_model, ) .data[0] .embedding, ) ``` ## Using OpenAI Embedding Models with Qdrant's Binary Quantization You can use OpenAI embedding Models with [Binary Quantization](/articles/binary-quantization/) - a technique that allows you to reduce the size of the embeddings by 32 times without losing the quality of the search results too much. |Method|Dimensionality|Test Dataset|Recall|Oversampling| |-|-|-|-|-| |OpenAI text-embedding-3-large|3072|[DBpedia 1M](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-large-3072-1M) | 0.9966|3x| |OpenAI text-embedding-3-small|1536|[DBpedia 100K](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-small-1536-100K)| 0.9847|3x| |OpenAI text-embedding-3-large|1536|[DBpedia 1M](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-large-1536-1M)| 0.9826|3x| |OpenAI text-embedding-ada-002|1536|[DbPedia 1M](https://huggingface.co/datasets/KShivendu/dbpedia-entities-openai-1M) |0.98|4x| ",documentation/embeddings/openai.md "--- title: AWS Bedrock weight: 1000 --- # Bedrock Embeddings You can use [AWS Bedrock](https://aws.amazon.com/bedrock/) with Qdrant. AWS Bedrock supports multiple [embedding model providers](https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html). You'll need the following information from your AWS account: - Region - Access key ID - Secret key To configure your credentials, review the following AWS article: [How do I create an AWS access key](https://repost.aws/knowledge-center/create-access-key). With the following code sample, you can generate embeddings using the [Titan Embeddings G1 - Text model](https://docs.aws.amazon.com/bedrock/latest/userguide/titan-embedding-models.html) which produces sentence embeddings of size 1536. ```python # Install the required dependencies # pip install boto3 qdrant_client import json import boto3 from qdrant_client import QdrantClient, models session = boto3.Session() bedrock_client = session.client( ""bedrock-runtime"", region_name="""", aws_access_key_id="""", aws_secret_access_key="""", ) qdrant_client = QdrantClient(url=""http://localhost:6333"") qdrant_client.create_collection( ""{collection_name}"", vectors_config=models.VectorParams(size=1536, distance=models.Distance.COSINE), ) body = json.dumps({""inputText"": ""Some text to generate embeddings for""}) response = bedrock_client.invoke_model( body=body, modelId=""amazon.titan-embed-text-v1"", accept=""application/json"", contentType=""application/json"", ) response_body = json.loads(response.get(""body"").read()) qdrant_client.upsert( ""{collection_name}"", points=[models.PointStruct(id=1, vector=response_body[""embedding""])], ) ``` ```javascript // Install the required dependencies // npm install @aws-sdk/client-bedrock-runtime @qdrant/js-client-rest import { BedrockRuntimeClient, InvokeModelCommand, } from ""@aws-sdk/client-bedrock-runtime""; import { QdrantClient } from '@qdrant/js-client-rest'; const main = async () => { const bedrockClient = new BedrockRuntimeClient({ region: """", credentials: { accessKeyId: """",, secretAccessKey: """", }, }); const qdrantClient = new QdrantClient({ url: 'http://localhost:6333' }); await qdrantClient.createCollection(""{collection_name}"", { vectors: { size: 1536, distance: 'Cosine', } }); const response = await bedrockClient.send( new InvokeModelCommand({ modelId: ""amazon.titan-embed-text-v1"", body: JSON.stringify({ inputText: ""Some text to generate embeddings for"", }), contentType: ""application/json"", accept: ""application/json"", }) ); const body = new TextDecoder().decode(response.body); await qdrantClient.upsert(""{collection_name}"", { points: [ { id: 1, vector: JSON.parse(body).embedding, }, ], }); } main(); ``` ",documentation/embeddings/bedrock.md "--- title: Aleph Alpha weight: 900 aliases: - /documentation/examples/aleph-alpha-search/ - /documentation/tutorials/aleph-alpha-search/ - /documentation/integrations/aleph-alpha/ --- # Using Aleph Alpha Embeddings with Qdrant Aleph Alpha is a multimodal and multilingual embeddings' provider. Their API allows creating the embeddings for text and images, both in the same latent space. They maintain an [official Python client](https://github.com/Aleph-Alpha/aleph-alpha-client) that might be installed with pip: ```bash pip install aleph-alpha-client ``` There is both synchronous and asynchronous client available. Obtaining the embeddings for an image and storing it into Qdrant might be done in the following way: ```python import qdrant_client from qdrant_client.models import Batch from aleph_alpha_client import ( Prompt, AsyncClient, SemanticEmbeddingRequest, SemanticRepresentation, ImagePrompt ) aa_token = ""<< your_token >>"" model = ""luminous-base"" qdrant_client = qdrant_client.QdrantClient() async with AsyncClient(token=aa_token) as client: prompt = ImagePrompt.from_file(""./path/to/the/image.jpg"") prompt = Prompt.from_image(prompt) query_params = { ""prompt"": prompt, ""representation"": SemanticRepresentation.Symmetric, ""compress_to_size"": 128, } query_request = SemanticEmbeddingRequest(**query_params) query_response = await client.semantic_embed( request=query_request, model=model ) qdrant_client.upsert( collection_name=""MyCollection"", points=Batch( ids=[1], vectors=[query_response.embedding], ) ) ``` If we wanted to create text embeddings with the same model, we wouldn't use `ImagePrompt.from_file`, but simply provide the input text into the `Prompt.from_text` method. ",documentation/embeddings/aleph-alpha.md "--- title: Ollama weight: 2600 --- # Using Ollama with Qdrant Ollama provides specialized embeddings for niche applications. Ollama supports a variety of embedding models, making it possible to build retrieval augmented generation (RAG) applications that combine text prompts with existing documents or other data in specialized areas. ## Installation You can install the required package using the following pip command: ```bash pip install ollama ``` ## Integration Example ```python import qdrant_client from qdrant_client.models import Batch from ollama import Ollama # Initialize Ollama model model = Ollama(""ollama-unique"") # Generate embeddings for niche applications text = ""Ollama excels in niche applications with specific embeddings."" embeddings = model.embed(text) # Initialize Qdrant client qdrant_client = qdrant_client.QdrantClient(host=""localhost"", port=6333) # Upsert the embedding into Qdrant qdrant_client.upsert( collection_name=""NicheApplications"", points=Batch( ids=[1], vectors=[embeddings], ) ) ``` ",documentation/embeddings/ollama.md "--- title: OpenCLIP weight: 2750 --- # Using OpenCLIP with Qdrant OpenCLIP is an open-source implementation of the CLIP model, allowing for open source generation of multimodal embeddings that link text and images. ```python import qdrant_client from qdrant_client.models import Batch import open_clip # Load the OpenCLIP model and tokenizer model, preprocess = open_clip.create_model_and_transforms('ViT-B-32', pretrained='openai') tokenizer = open_clip.get_tokenizer('ViT-B-32') # Generate embeddings for a text text = ""A photo of a cat"" text_inputs = tokenizer([text]) with torch.no_grad(): text_features = model.encode_text(text_inputs) # Convert tensor to a list embeddings = text_features[0].cpu().numpy().tolist() # Initialize Qdrant client qdrant_client = qdrant_client.QdrantClient(host=""localhost"", port=6333) # Upsert the embedding into Qdrant qdrant_client.upsert( collection_name=""OpenCLIPEmbeddings"", points=Batch( ids=[1], vectors=[embeddings], ) ) ``` ",documentation/embeddings/openclip.md "--- title: Databricks Embeddings weight: 1500 --- # Using Databricks Embeddings with Qdrant Databricks offers an advanced platform for generating embeddings, especially within large-scale data environments. You can use the following Python code to integrate Databricks-generated embeddings with Qdrant. ```python import qdrant_client from qdrant_client.models import Batch from databricks import sql # Connect to Databricks SQL endpoint connection = sql.connect(server_hostname='your_hostname', http_path='your_http_path', access_token='your_access_token') # Execute a query to get embeddings query = ""SELECT embedding FROM your_table WHERE id = 1"" cursor = connection.cursor() cursor.execute(query) embedding = cursor.fetchone()[0] # Initialize Qdrant client qdrant_client = qdrant_client.QdrantClient(host=""localhost"", port=6333) # Upsert the embedding into Qdrant qdrant_client.upsert( collection_name=""DatabricksEmbeddings"", points=Batch( ids=[1], # Unique ID for the data point vectors=[embedding], # Embedding fetched from Databricks ) ) ``` ",documentation/embeddings/databricks.md "--- title: Cohere weight: 1400 aliases: [ ../integrations/cohere/ ] --- # Cohere Qdrant is compatible with Cohere [co.embed API](https://docs.cohere.ai/reference/embed) and its official Python SDK that might be installed as any other package: ```bash pip install cohere ``` The embeddings returned by co.embed API might be used directly in the Qdrant client's calls: ```python import cohere import qdrant_client from qdrant_client.models import Batch cohere_client = cohere.Client(""<< your_api_key >>"") qdrant_client = qdrant_client.QdrantClient() qdrant_client.upsert( collection_name=""MyCollection"", points=Batch( ids=[1], vectors=cohere_client.embed( model=""large"", texts=[""The best vector database""], ).embeddings, ), ) ``` If you are interested in seeing an end-to-end project created with co.embed API and Qdrant, please check out the ""[Question Answering as a Service with Cohere and Qdrant](/articles/qa-with-cohere-and-qdrant/)"" article. ## Embed v3 Embed v3 is a new family of Cohere models, released in November 2023. The new models require passing an additional parameter to the API call: `input_type`. It determines the type of task you want to use the embeddings for. - `input_type=""search_document""` - for documents to store in Qdrant - `input_type=""search_query""` - for search queries to find the most relevant documents - `input_type=""classification""` - for classification tasks - `input_type=""clustering""` - for text clustering While implementing semantic search applications, such as RAG, you should use `input_type=""search_document""` for the indexed documents and `input_type=""search_query""` for the search queries. The following example shows how to index documents with the Embed v3 model: ```python import cohere import qdrant_client from qdrant_client.models import Batch cohere_client = cohere.Client(""<< your_api_key >>"") client = qdrant_client.QdrantClient() client.upsert( collection_name=""MyCollection"", points=Batch( ids=[1], vectors=cohere_client.embed( model=""embed-english-v3.0"", # New Embed v3 model input_type=""search_document"", # Input type for documents texts=[""Qdrant is the a vector database written in Rust""], ).embeddings, ), ) ``` Once the documents are indexed, you can search for the most relevant documents using the Embed v3 model: ```python client.search( collection_name=""MyCollection"", query_vector=cohere_client.embed( model=""embed-english-v3.0"", # New Embed v3 model input_type=""search_query"", # Input type for search queries texts=[""The best vector database""], ).embeddings[0], ) ``` ",documentation/embeddings/cohere.md "--- title: Clip weight: 1300 --- # Using Clip with Qdrant CLIP (Contrastive Language-Image Pre-Training) provides advanced AI capabilities including natural language processing and computer vision. CLIP is a neural network trained on a variety of (image, text) pairs. It can be instructed in natural language to predict the most relevant text snippet, given an image, without directly optimizing for the task, similarly to the zero-shot capabilities of GPT-2 and 3. ## Installation You can install the required package using the following pip command: ```bash pip install clip-client ``` ## Integration Example ```python import qdrant_client from qdrant_client.models import Batch from transformers import CLIPProcessor, CLIPModel from PIL import Image # Load the CLIP model and processor model = CLIPModel.from_pretrained(""openai/clip-vit-base-patch32"") processor = CLIPProcessor.from_pretrained(""openai/clip-vit-base-patch32"") # Load and process the image image = Image.open(""path/to/image.jpg"") inputs = processor(images=image, return_tensors=""pt"") # Generate embeddings with torch.no_grad(): embeddings = model.get_image_features(**inputs).numpy().tolist() # Initialize Qdrant client qdrant_client = qdrant_client.QdrantClient(host=""localhost"", port=6333) # Upsert the embedding into Qdrant qdrant_client.upsert( collection_name=""ImageEmbeddings"", points=Batch( ids=[1], vectors=embeddings, ) ) ``` ",documentation/embeddings/clip.md "--- title: Clarifai weight: 1200 --- # Using Clarifai Embeddings with Qdrant Clarifai is a leading provider of visual embeddings, which are particularly strong in image and video analysis. Clarifai offers an API that allows you to create embeddings for various media types, which can be integrated into Qdrant for efficient vector search and retrieval. You can install the Clarifai Python client with pip: ```bash pip install clarifai-client ``` ## Integration Example ```python import qdrant_client from qdrant_client.models import Batch from clarifai.rest import ClarifaiApp # Initialize Clarifai client clarifai_app = ClarifaiApp(api_key=""<< your_api_key >>"") # Choose the model for embeddings model = clarifai_app.public_models.general_embedding_model # Upload and get embeddings for an image image_path = ""./path/to/the/image.jpg"" response = model.predict_by_filename(image_path) # Extract the embedding from the response embedding = response['outputs'][0]['data']['embeddings'][0]['vector'] # Initialize Qdrant client qdrant_client = qdrant_client.QdrantClient() # Upsert the embedding into Qdrant qdrant_client.upsert( collection_name=""MyCollection"", points=Batch( ids=[1], vectors=[embedding], ) ) ``` ",documentation/embeddings/clarifai.md "--- title: Mistral weight: 2100 --- | Time: 10 min | Level: Beginner | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://githubtocolab.com/qdrant/examples/blob/mistral-getting-started/mistral-embed-getting-started/mistral_qdrant_getting_started.ipynb) | | --- | ----------- | ----------- | # Mistral Qdrant is compatible with the new released Mistral Embed and its official Python SDK that can be installed as any other package: ## Setup ### Install the client ```bash pip install mistralai ``` And then we set this up: ```python from mistralai.client import MistralClient from qdrant_client import QdrantClient from qdrant_client.models import PointStruct, VectorParams, Distance collection_name = ""example_collection"" MISTRAL_API_KEY = ""your_mistral_api_key"" client = QdrantClient("":memory:"") mistral_client = MistralClient(api_key=MISTRAL_API_KEY) texts = [ ""Qdrant is the best vector search engine!"", ""Loved by Enterprises and everyone building for low latency, high performance, and scale."", ] ``` Let's see how to use the Embedding Model API to embed a document for retrieval. The following example shows how to embed a document with the `models/embedding-001` with the `retrieval_document` task type: ## Embedding a document ```python result = mistral_client.embeddings( model=""mistral-embed"", input=texts, ) ``` The returned result has a data field with a key: `embedding`. The value of this key is a list of floats representing the embedding of the document. ### Converting this into Qdrant Points ```python points = [ PointStruct( id=idx, vector=response.embedding, payload={""text"": text}, ) for idx, (response, text) in enumerate(zip(result.data, texts)) ] ``` ## Create a collection and Insert the documents ```python client.create_collection(collection_name, vectors_config=VectorParams( size=1024, distance=Distance.COSINE, ) ) client.upsert(collection_name, points) ``` ## Searching for documents with Qdrant Once the documents are indexed, you can search for the most relevant documents using the same model with the `retrieval_query` task type: ```python client.search( collection_name=collection_name, query_vector=mistral_client.embeddings( model=""mistral-embed"", input=[""What is the best to use for vector search scaling?""] ).data[0].embedding, ) ``` ## Using Mistral Embedding Models with Binary Quantization You can use Mistral Embedding Models with [Binary Quantization](/articles/binary-quantization/) - a technique that allows you to reduce the size of the embeddings by 32 times without losing the quality of the search results too much. At an oversampling of 3 and a limit of 100, we've a 95% recall against the exact nearest neighbors with rescore enabled. | Oversampling | | 1 | 1 | 2 | 2 | 3 | 3 | |--------------|---------|----------|----------|----------|----------|----------|--------------| | | **Rescore** | False | True | False | True | False | True | | **Limit** | | | | | | | | | 10 | | 0.53444 | 0.857778 | 0.534444 | 0.918889 | 0.533333 | 0.941111 | | 20 | | 0.508333 | 0.837778 | 0.508333 | 0.903889 | 0.508333 | 0.927778 | | 50 | | 0.492222 | 0.834444 | 0.492222 | 0.903556 | 0.492889 | 0.940889 | | 100 | | 0.499111 | 0.845444 | 0.498556 | 0.918333 | 0.497667 | **0.944556** | That's it! You can now use Mistral Embedding Models with Qdrant! ",documentation/embeddings/mistral.md "--- title: ""Nomic"" weight: 2300 --- # Nomic The `nomic-embed-text-v1` model is an open source [8192 context length](https://github.com/nomic-ai/contrastors) text encoder. While you can find it on the [Hugging Face Hub](https://huggingface.co/nomic-ai/nomic-embed-text-v1), you may find it easier to obtain them through the [Nomic Text Embeddings](https://docs.nomic.ai/reference/endpoints/nomic-embed-text). Once installed, you can configure it with the official Python client, FastEmbed or through direct HTTP requests. You can use Nomic embeddings directly in Qdrant client calls. There is a difference in the way the embeddings are obtained for documents and queries. #### Upsert using [Nomic SDK](https://github.com/nomic-ai/nomic) The `task_type` parameter defines the embeddings that you get. For documents, set the `task_type` to `search_document`: ```python from qdrant_client import QdrantClient, models from nomic import embed output = embed.text( texts=[""Qdrant is the best vector database!""], model=""nomic-embed-text-v1"", task_type=""search_document"", ) client = QdrantClient() client.upsert( collection_name=""my-collection"", points=models.Batch( ids=[1], vectors=output[""embeddings""], ), ) ``` #### Upsert using [FastEmbed](https://github.com/qdrant/fastembed) ```python from fastembed import TextEmbedding from client import QdrantClient, models model = TextEmbedding(""nomic-ai/nomic-embed-text-v1"") output = model.embed([""Qdrant is the best vector database!""]) client = QdrantClient() client.upsert( collection_name=""my-collection"", points=models.Batch( ids=[1], vectors=[embeddings.tolist() for embeddings in output], ), ) ``` #### Search using [Nomic SDK](https://github.com/nomic-ai/nomic) To query the collection, set the `task_type` to `search_query`: ```python output = embed.text( texts=[""What is the best vector database?""], model=""nomic-embed-text-v1"", task_type=""search_query"", ) client.search( collection_name=""my-collection"", query_vector=output[""embeddings""][0], ) ``` #### Search using [FastEmbed](https://github.com/qdrant/fastembed) ```python output = next(model.embed(""What is the best vector database?"")) client.search( collection_name=""my-collection"", query_vector=output.tolist(), ) ``` For more information, see the Nomic documentation on [Text embeddings](https://docs.nomic.ai/reference/endpoints/nomic-embed-text). ",documentation/embeddings/nomic.md "--- title: Nvidia weight: 2400 --- # Nvidia Qdrant supports working with [Nvidia embeddings](https://build.nvidia.com/explore/retrieval). You can generate an API key to authenticate the requests from the [Nvidia Playground](). ### Setting up the Qdrant client and Nvidia session ```python import requests from qdrant_client import QdrantClient NVIDIA_BASE_URL = ""https://ai.api.nvidia.com/v1/retrieval/nvidia/embeddings"" NVIDIA_API_KEY = """" nvidia_session = requests.Session() client = QdrantClient("":memory:"") headers = { ""Authorization"": f""Bearer {NVIDIA_API_KEY}"", ""Accept"": ""application/json"", } texts = [ ""Qdrant is the best vector search engine!"", ""Loved by Enterprises and everyone building for low latency, high performance, and scale."", ] ``` ```typescript import { QdrantClient } from '@qdrant/js-client-rest'; const NVIDIA_BASE_URL = ""https://ai.api.nvidia.com/v1/retrieval/nvidia/embeddings"" const NVIDIA_API_KEY = """" const client = new QdrantClient({ url: 'http://localhost:6333' }); const headers = { ""Authorization"": ""Bearer "" + NVIDIA_API_KEY, ""Accept"": ""application/json"", ""Content-Type"": ""application/json"" } const texts = [ ""Qdrant is the best vector search engine!"", ""Loved by Enterprises and everyone building for low latency, high performance, and scale."", ] ``` The following example shows how to embed documents with the `embed-qa-4` model that generates sentence embeddings of size 1024. ### Embedding documents ```python payload = { ""input"": texts, ""input_type"": ""passage"", ""model"": ""NV-Embed-QA"", } response_body = nvidia_session.post( NVIDIA_BASE_URL, headers=headers, json=payload ).json() ``` ```typescript let body = { ""input"": texts, ""input_type"": ""passage"", ""model"": ""NV-Embed-QA"" } let response = await fetch(NVIDIA_BASE_URL, { method: ""POST"", body: JSON.stringify(body), headers }); let response_body = await response.json() ``` ### Converting the model outputs to Qdrant points ```python from qdrant_client.models import PointStruct points = [ PointStruct( id=idx, vector=data[""embedding""], payload={""text"": text}, ) for idx, (data, text) in enumerate(zip(response_body[""data""], texts)) ] ``` ```typescript let points = response_body.data.map((data, i) => { return { id: i, vector: data.embedding, payload: { text: texts[i] } } }) ``` ### Creating a collection to insert the documents ```python from qdrant_client.models import VectorParams, Distance collection_name = ""example_collection"" client.create_collection( collection_name, vectors_config=VectorParams( size=1024, distance=Distance.COSINE, ), ) client.upsert(collection_name, points) ``` ```typescript const COLLECTION_NAME = ""example_collection"" await client.createCollection(COLLECTION_NAME, { vectors: { size: 1024, distance: 'Cosine', } }); await client.upsert(COLLECTION_NAME, { wait: true, points }) ``` ## Searching for documents with Qdrant Once the documents are added, you can search for the most relevant documents. ```python payload = { ""input"": ""What is the best to use for vector search scaling?"", ""input_type"": ""query"", ""model"": ""NV-Embed-QA"", } response_body = nvidia_session.post( NVIDIA_BASE_URL, headers=headers, json=payload ).json() client.search( collection_name=collection_name, query_vector=response_body[""data""][0][""embedding""], ) ``` ```typescript body = { ""input"": ""What is the best to use for vector search scaling?"", ""input_type"": ""query"", ""model"": ""NV-Embed-QA"", } response = await fetch(NVIDIA_BASE_URL, { method: ""POST"", body: JSON.stringify(body), headers }); response_body = await response.json() await client.search(COLLECTION_NAME, { vector: response_body.data[0].embedding, }); ``` ",documentation/embeddings/nvidia.md "--- title: Prem AI weight: 2800 --- # Prem AI [PremAI](https://premai.io/) is a unified generative AI development platform for fine-tuning deploying, and monitoring AI models. Qdrant is compatible with PremAI APIs. ### Installing the SDKs ```bash pip install premai qdrant-client ``` To install the npm package: ```bash npm install @premai/prem-sdk @qdrant/js-client-rest ``` ### Import all required packages ```python from premai import Prem from qdrant_client import QdrantClient from qdrant_client.models import Distance, VectorParams ``` ```typescript import Prem from '@premai/prem-sdk'; import { QdrantClient } from '@qdrant/js-client-rest'; ``` ### Define all the constants We need to define the project ID and the embedding model to use. You can learn more about obtaining these in the PremAI [docs](https://docs.premai.io/quick-start). ```python PROJECT_ID = 123 EMBEDDING_MODEL = ""text-embedding-3-large"" COLLECTION_NAME = ""prem-collection-py"" QDRANT_SERVER_URL = ""http://localhost:6333"" DOCUMENTS = [ ""This is a sample python document"", ""We will be using qdrant and premai python sdk"" ] ``` ```typescript const PROJECT_ID = 123; const EMBEDDING_MODEL = ""text-embedding-3-large""; const COLLECTION_NAME = ""prem-collection-js""; const SERVER_URL = ""http://localhost:6333"" const DOCUMENTS = [ ""This is a sample javascript document"", ""We will be using qdrant and premai javascript sdk"" ]; ``` ### Set up PremAI and Qdrant clients ```python prem_client = Prem(api_key=""xxxx-xxx-xxx"") qdrant_client = QdrantClient(url=QDRANT_SERVER_URL) ``` ```typescript const premaiClient = new Prem({ apiKey: ""xxxx-xxx-xxx"" }) const qdrantClient = new QdrantClient({ url: SERVER_URL }); ``` ### Generating Embeddings ```python from typing import Union, List def get_embeddings( project_id: int, embedding_model: str, documents: Union[str, List[str]] ) -> List[List[float]]: """""" Helper function to get the embeddings from premai sdk Args project_id (int): The project id from prem saas platform. embedding_model (str): The embedding model alias to choose documents (Union[str, List[str]]): Single texts or list of texts to embed Returns: List[List[int]]: A list of list of integers that represents different embeddings """""" embeddings = [] documents = [documents] if isinstance(documents, str) else documents for embedding in prem_client.embeddings.create( project_id=project_id, model=embedding_model, input=documents ).data: embeddings.append(embedding.embedding) return embeddings ``` ```typescript async function getEmbeddings(projectID, embeddingModel, documents) { const response = await premaiClient.embeddings.create({ project_id: projectID, model: embeddingModel, input: documents }); return response; } ``` ### Converting Embeddings to Qdrant Points ```python from qdrant_client.models import PointStruct embeddings = get_embeddings( project_id=PROJECT_ID, embedding_model=EMBEDDING_MODEL, documents=DOCUMENTS ) points = [ PointStruct( id=idx, vector=embedding, payload={""text"": text}, ) for idx, (embedding, text) in enumerate(zip(embeddings, DOCUMENTS)) ] ``` ```typescript function convertToQdrantPoints(embeddings, texts) { return embeddings.data.map((data, i) => { return { id: i, vector: data.embedding, payload: { text: texts[i] } }; }); } const embeddings = await getEmbeddings(PROJECT_ID, EMBEDDING_MODEL, DOCUMENTS); const points = convertToQdrantPoints(embeddings, DOCUMENTS); ``` ### Set up a Qdrant Collection ```python qdrant_client.create_collection( collection_name=COLLECTION_NAME, vectors_config=VectorParams(size=3072, distance=Distance.DOT) ) ``` ```typescript await qdrantClient.createCollection(COLLECTION_NAME, { vectors: { size: 3072, distance: 'Cosine' } }) ``` ### Insert Documents into the Collection ```python doc_ids = list(range(len(embeddings))) qdrant_client.upsert( collection_name=COLLECTION_NAME, points=points ) ``` ```typescript await qdrantClient.upsert(COLLECTION_NAME, { wait: true, points }); ``` ### Perform a Search ```python query = ""what is the extension of python document"" query_embedding = get_embeddings( project_id=PROJECT_ID, embedding_model=EMBEDDING_MODEL, documents=query ) qdrant_client.search(collection_name=COLLECTION_NAME, query_vector=query_embedding[0]) ``` ```typescript const query = ""what is the extension of javascript document"" const query_embedding_response = await getEmbeddings(PROJECT_ID, EMBEDDING_MODEL, query) await qdrantClient.search(COLLECTION_NAME, { vector: query_embedding_response.data[0].embedding }); ``` ",documentation/embeddings/premai.md "--- title: GradientAI weight: 1750 --- # Using GradientAI with Qdrant GradientAI provides state-of-the-art models for generating embeddings, which are highly effective for vector search tasks in Qdrant. ## Installation You can install the required packages using the following pip command: ```bash pip install gradientai python-dotenv qdrant-client ``` ## Code Example ```python from dotenv import load_dotenv import qdrant_client from qdrant_client.models import Batch from gradientai import Gradient load_dotenv() def main() -> None: # Initialize GradientAI client gradient = Gradient() # Retrieve the embeddings model embeddings_model = gradient.get_embeddings_model(slug=""bge-large"") # Generate embeddings for your data generate_embeddings_response = embeddings_model.generate_embeddings( inputs=[ ""Multimodal brain MRI is the preferred method to evaluate for acute ischemic infarct and ideally should be obtained within 24 hours of symptom onset, and in most centers will follow a NCCT"", ""CTA has a higher sensitivity and positive predictive value than magnetic resonance angiography (MRA) for detection of intracranial stenosis and occlusion and is recommended over time-of-flight (without contrast) MRA"", ""Echocardiographic strain imaging has the advantage of detecting early cardiac involvement, even before thickened walls or symptoms are apparent"", ], ) # Initialize Qdrant client client = qdrant_client.QdrantClient(url=""http://localhost:6333"") # Upsert the embeddings into Qdrant for i, embedding in enumerate(generate_embeddings_response.embeddings): client.upsert( collection_name=""MedicalRecords"", points=Batch( ids=[i + 1], # Unique ID for each embedding vectors=[embedding.embedding], ) ) print(""Embeddings successfully upserted into Qdrant."") gradient.close() if __name__ == ""__main__"": main() ```",documentation/embeddings/gradientai.md "--- title: Gemini weight: 1600 --- | Time: 10 min | Level: Beginner | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://githubtocolab.com/qdrant/examples/blob/gemini-getting-started/gemini-getting-started/gemini-getting-started.ipynb) | | --- | ----------- | ----------- | # Gemini Qdrant is compatible with Gemini Embedding Model API and its official Python SDK that can be installed as any other package: Gemini is a new family of Google PaLM models, released in December 2023. The new embedding models succeed the previous Gecko Embedding Model. In the latest models, an additional parameter, `task_type`, can be passed to the API call. This parameter serves to designate the intended purpose for the embeddings utilized. The Embedding Model API supports various task types, outlined as follows: 1. `retrieval_query`: query in a search/retrieval setting 2. `retrieval_document`: document from the corpus being searched 3. `semantic_similarity`: semantic text similarity 4. `classification`: embeddings to be used for text classification 5. `clustering`: the generated embeddings will be used for clustering 6. `task_type_unspecified`: Unset value, which will default to one of the other values. If you're building a semantic search application, such as RAG, you should use `task_type=""retrieval_document""` for the indexed documents and `task_type=""retrieval_query""` for the search queries. The following example shows how to do this with Qdrant: ## Setup ```bash pip install google-generativeai ``` Let's see how to use the Embedding Model API to embed a document for retrieval. The following example shows how to embed a document with the `models/embedding-001` with the `retrieval_document` task type: ## Embedding a document ```python import google.generativeai as gemini_client from qdrant_client import QdrantClient from qdrant_client.models import Distance, PointStruct, VectorParams collection_name = ""example_collection"" GEMINI_API_KEY = ""YOUR GEMINI API KEY"" # add your key here client = QdrantClient(url=""http://localhost:6333"") gemini_client.configure(api_key=GEMINI_API_KEY) texts = [ ""Qdrant is a vector database that is compatible with Gemini."", ""Gemini is a new family of Google PaLM models, released in December 2023."", ] results = [ gemini_client.embed_content( model=""models/embedding-001"", content=sentence, task_type=""retrieval_document"", title=""Qdrant x Gemini"", ) for sentence in texts ] ``` ## Creating Qdrant Points and Indexing documents with Qdrant ### Creating Qdrant Points ```python points = [ PointStruct( id=idx, vector=response['embedding'], payload={""text"": text}, ) for idx, (response, text) in enumerate(zip(results, texts)) ] ``` ### Create Collection ```python client.create_collection(collection_name, vectors_config= VectorParams( size=768, distance=Distance.COSINE, ) ) ``` ### Add these into the collection ```python client.upsert(collection_name, points) ``` ## Searching for documents with Qdrant Once the documents are indexed, you can search for the most relevant documents using the same model with the `retrieval_query` task type: ```python client.search( collection_name=collection_name, query_vector=gemini_client.embed_content( model=""models/embedding-001"", content=""Is Qdrant compatible with Gemini?"", task_type=""retrieval_query"", )[""embedding""], ) ``` ## Using Gemini Embedding Models with Binary Quantization You can use Gemini Embedding Models with [Binary Quantization](/articles/binary-quantization/) - a technique that allows you to reduce the size of the embeddings by 32 times without losing the quality of the search results too much. In this table, you can see the results of the search with the `models/embedding-001` model with Binary Quantization in comparison with the original model: At an oversampling of 3 and a limit of 100, we've a 95% recall against the exact nearest neighbors with rescore enabled. | Oversampling | | 1 | 1 | 2 | 2 | 3 | 3 | |--------------|---------|----------|----------|----------|----------|----------|----------| | | **Rescore** | False | True | False | True | False | True | | **Limit** | | | | | | | | | 10 | | 0.523333 | 0.831111 | 0.523333 | 0.915556 | 0.523333 | 0.950000 | | 20 | | 0.510000 | 0.836667 | 0.510000 | 0.912222 | 0.510000 | 0.937778 | | 50 | | 0.489111 | 0.841556 | 0.489111 | 0.913333 | 0.488444 | 0.947111 | | 100 | | 0.485778 | 0.846556 | 0.485556 | 0.929000 | 0.486000 | **0.956333** | That's it! You can now use Gemini Embedding Models with Qdrant! ",documentation/embeddings/gemini.md "--- title: OCI (Oracle Cloud Infrastructure) weight: 2500 --- # Using OCI (Oracle Cloud Infrastructure) with Qdrant OCI provides robust cloud-based embeddings for various media types. The Generative AI Embedding Models convert textual input - ranging from phrases and sentences to entire paragraphs - into a structured format known as embeddings. Each piece of text input is transformed into a numerical array consisting of 1024 distinct numbers. ## Installation You can install the required package using the following pip command: ```bash pip install oci ``` ## Code Example Below is an example of how to obtain embeddings using OCI (Oracle Cloud Infrastructure)'s API and store them in a Qdrant collection: ```python import qdrant_client from qdrant_client.models import Batch import oci # Initialize OCI client config = oci.config.from_file() ai_client = oci.ai_language.AIServiceLanguageClient(config) # Generate embeddings using OCI's AI service text = ""OCI provides cloud-based AI services."" response = ai_client.batch_detect_language_entities(text) embeddings = response.data[0].entities[0].embedding # Initialize Qdrant client qdrant_client = qdrant_client.QdrantClient(host=""localhost"", port=6333) # Upsert the embedding into Qdrant qdrant_client.upsert( collection_name=""CloudAI"", points=Batch( ids=[1], vectors=[embeddings], ) ) ``` ",documentation/embeddings/oci.md "--- title: Jina Embeddings weight: 1900 aliases: - /documentation/embeddings/jina-emebddngs/ - ../integrations/jina-embeddings/ --- # Jina Embeddings Qdrant can also easily work with [Jina embeddings](https://jina.ai/embeddings/) which allow for model input lengths of up to 8192 tokens. To call their endpoint, all you need is an API key obtainable [here](https://jina.ai/embeddings/). By the way, our friends from **Jina AI** provided us with a code (**QDRANT**) that will grant you a **10% discount** if you plan to use Jina Embeddings in production. ```python import qdrant_client import requests from qdrant_client.models import Distance, VectorParams, Batch # Provide Jina API key and choose one of the available models. # You can get a free trial key here: https://jina.ai/embeddings/ JINA_API_KEY = ""jina_xxxxxxxxxxx"" MODEL = ""jina-embeddings-v2-base-en"" # or ""jina-embeddings-v2-base-en"" EMBEDDING_SIZE = 768 # 512 for small variant # Get embeddings from the API url = ""https://api.jina.ai/v1/embeddings"" headers = { ""Content-Type"": ""application/json"", ""Authorization"": f""Bearer {JINA_API_KEY}"", } data = { ""input"": [""Your text string goes here"", ""You can send multiple texts""], ""model"": MODEL, } response = requests.post(url, headers=headers, json=data) embeddings = [d[""embedding""] for d in response.json()[""data""]] # Index the embeddings into Qdrant client = qdrant_client.QdrantClient("":memory:"") client.create_collection( collection_name=""MyCollection"", vectors_config=VectorParams(size=EMBEDDING_SIZE, distance=Distance.DOT), ) qdrant_client.upsert( collection_name=""MyCollection"", points=Batch( ids=list(range(len(embeddings))), vectors=embeddings, ), ) ``` ",documentation/embeddings/jina-embeddings.md "--- title: Upstage weight: 3100 --- # Upstage Qdrant supports working with the Solar Embeddings API from [Upstage](https://upstage.ai/). [Solar Embeddings](https://developers.upstage.ai/docs/apis/embeddings) API features dual models for user queries and document embedding, within a unified vector space, designed for performant text processing. You can generate an API key to authenticate the requests from the [Upstage Console](). ### Setting up the Qdrant client and Upstage session ```python import requests from qdrant_client import QdrantClient UPSTAGE_BASE_URL = ""https://api.upstage.ai/v1/solar/embeddings"" UPSTAGE_API_KEY = """" upstage_session = requests.Session() client = QdrantClient(url=""http://localhost:6333"") headers = { ""Authorization"": f""Bearer {UPSTAGE_API_KEY}"", ""Accept"": ""application/json"", } texts = [ ""Qdrant is the best vector search engine!"", ""Loved by Enterprises and everyone building for low latency, high performance, and scale."", ] ``` ```typescript import { QdrantClient } from '@qdrant/js-client-rest'; const UPSTAGE_BASE_URL = ""https://api.upstage.ai/v1/solar/embeddings"" const UPSTAGE_API_KEY = """" const client = new QdrantClient({ url: 'http://localhost:6333' }); const headers = { ""Authorization"": ""Bearer "" + UPSTAGE_API_KEY, ""Accept"": ""application/json"", ""Content-Type"": ""application/json"" } const texts = [ ""Qdrant is the best vector search engine!"", ""Loved by Enterprises and everyone building for low latency, high performance, and scale."", ] ``` The following example shows how to embed documents with the recommended `solar-embedding-1-large-passage` and `solar-embedding-1-large-query` models that generates sentence embeddings of size 4096. ### Embedding documents ```python body = { ""input"": texts, ""model"": ""solar-embedding-1-large-passage"", } response_body = upstage_session.post( UPSTAGE_BASE_URL, headers=headers, json=body ).json() ``` ```typescript let body = { ""input"": texts, ""model"": ""solar-embedding-1-large-passage"", } let response = await fetch(UPSTAGE_BASE_URL, { method: ""POST"", body: JSON.stringify(body), headers }); let response_body = await response.json() ``` ### Converting the model outputs to Qdrant points ```python from qdrant_client.models import PointStruct points = [ PointStruct( id=idx, vector=data[""embedding""], payload={""text"": text}, ) for idx, (data, text) in enumerate(zip(response_body[""data""], texts)) ] ``` ```typescript let points = response_body.data.map((data, i) => { return { id: i, vector: data.embedding, payload: { text: texts[i] } } }) ``` ### Creating a collection to insert the documents ```python from qdrant_client.models import VectorParams, Distance collection_name = ""example_collection"" client.create_collection( collection_name, vectors_config=VectorParams( size=4096, distance=Distance.COSINE, ), ) client.upsert(collection_name, points) ``` ```typescript const COLLECTION_NAME = ""example_collection"" await client.createCollection(COLLECTION_NAME, { vectors: { size: 4096, distance: 'Cosine', } }); await client.upsert(COLLECTION_NAME, { wait: true, points }) ``` ## Searching for documents with Qdrant Once all the documents are added, you can search for the most relevant documents. ```python body = { ""input"": ""What is the best to use for vector search scaling?"", ""model"": ""solar-embedding-1-large-query"", } response_body = upstage_session.post( UPSTAGE_BASE_URL, headers=headers, json=body ).json() client.search( collection_name=collection_name, query_vector=response_body[""data""][0][""embedding""], ) ``` ```typescript body = { ""input"": ""What is the best to use for vector search scaling?"", ""model"": ""solar-embedding-1-large-query"", } response = await fetch(UPSTAGE_BASE_URL, { method: ""POST"", body: JSON.stringify(body), headers }); response_body = await response.json() await client.search(COLLECTION_NAME, { vector: response_body.data[0].embedding, }); ``` ",documentation/embeddings/upstage.md "--- title: John Snow Labs weight: 2000 --- # Using John Snow Labs with Qdrant John Snow Labs offers a variety of models, particularly in the healthcare domain. They have pre-trained models that can generate embeddings for medical text data. ## Installation You can install the required package using the following pip command: ```bash pip install johnsnowlabs ``` Here is an example of how you might obtain embeddings using John Snow Labs's API and store them in a Qdrant collection: ```python import qdrant_client from qdrant_client.models import Batch from johnsnowlabs import nlp # Load the pre-trained model, for example, a named entity recognition (NER) model model = nlp.load_model(""ner_jsl"") # Sample text to generate embeddings text = ""John Snow Labs provides state-of-the-art healthcare NLP solutions."" # Generate embeddings for the text document = nlp.DocumentAssembler().setInput(text) embeddings = model.transform(document).collectEmbeddings() # Initialize Qdrant client qdrant_client = qdrant_client.QdrantClient(host=""localhost"", port=6333) # Upsert the embeddings into Qdrant qdrant_client.upsert( collection_name=""HealthcareNLP"", points=Batch( ids=[1], # This would be your unique ID for the data point vectors=[embeddings], ) ) ``` ",documentation/embeddings/johnsnow.md " --- title: Embeddings weight: 15 --- # Supported Embedding Providers & Models Qdrant supports all available text and multimodal dense vector embedding models as well as vector embedding services without any limitations. ## Some of the Embeddings you can use with Qdrant: SentenceTransformers, BERT, SBERT, Clip, OpenClip, Open AI, Vertex AI, Azure AI, AWS Bedrock, Jina AI, Upstage AI, Mistral AI, Cohere AI, Voyage AI, Aleph Alpha, Baidu Qianfan, BGE, Instruct, Watsonx Embeddings, Snowflake Embeddings, NVIDIA NeMo, Nomic, OCI Embeddings, Ollama Embeddings, MixedBread, Together AI, Clarifai, Databricks Embeddings, GPT4All Embeddings, John Snow Labs Embeddings. Additionally, [any open-source embeddings from HuggingFace](https://huggingface.co/spaces/mteb/leaderboard) can be used with Qdrant. ## Code samples: | Embeddings Providers | Description | | ----------------------------- | ----------- | | [Aleph Alpha](./aleph-alpha/) | Multilingual embeddings focused on European languages. | | [Azure](./azure/) | Microsoft's embedding model selection. | | [Bedrock](./bedrock/) | AWS managed service for foundation models and embeddings. | | [Clarifai](./clarifai/) | Embeddings for image and video recognition. | | [Clip](./clip/) | Aligns images and text, created by OpenAI. | | [Cohere](./cohere/) | Language model embeddings for NLP tasks. | | [Databricks](./databricks/) | Scalable embeddings integrated with Apache Spark. | | [Gemini](./gemini/) | Google’s multimodal embeddings for text and vision. | | [GPT4All](./gpt4all/) | Open-source, local embeddings for privacy-focused use. | | [GradientAI](./gradient/) | AI Models for custom enterprise tasks.| | [Instruct](./instruct/) | Embeddings tuned for following instructions. | | [Jina AI](./jina-embeddings/) | Customizable embeddings for neural search. | | [John Snow Labs](./johnsnow/) | Medical and clinical embeddings. | | [Mistral](./mistral/) | Open-source, efficient language model embeddings. | | [MixedBread](./mixedbread/) | Lightweight embeddings for constrained environments. | | [Nomic](./nomic/) | Embeddings for data visualization. | | [Nvidia](./nvidia/) | GPU-optimized embeddings from Nvidia. | | [OCI](./oci/) | Oracle Cloud’s AI service with embeddings. | | [Ollama](./ollama/) | Embeddings for conversational AI. | | [OpenAI](./openai/) | Industry-leading embeddings for NLP. | | [OpenCLIP](./openclip/) | OS implementation of CLIP for image and text. | | [Prem AI](./premai/) | Precise language embeddings. | | [Snowflake](./snowflake/) | Scalable embeddings for big data. | | [Together AI](./togetherai/) | Community-driven, open-source embeddings. | | [Upstage](./upstage/) | Embeddings for speech and language tasks. | | [Voyage AI](./voyage/) | Navigation and spatial understanding embeddings. | | [Watsonx](./watsonx/) | IBM's enterprise-grade embeddings. | ",documentation/embeddings/_index.md "--- title: MixedBread weight: 2200 --- # Using MixedBread with Qdrant MixedBread is a unique provider offering embeddings across multiple domains. Their models are versatile for various search tasks when integrated with Qdrant. MixedBread is creating state-of-the-art models and tools that make search smarter, faster, and more relevant. Whether you're building a next-gen search engine or RAG (Retrieval Augmented Generation) systems, or whether you're enhancing your existing search solution, they've got the ingredients to make it happen. ## Installation You can install the required package using the following pip command: ```bash pip install mixedbread ``` ## Integration Example Below is an example of how to obtain embeddings using MixedBread's API and store them in a Qdrant collection: ```python import qdrant_client from qdrant_client.models import Batch from mixedbread import MixedBreadModel # Initialize MixedBread model model = MixedBreadModel(""mixedbread-variant"") # Generate embeddings text = ""MixedBread provides versatile embeddings for various domains."" embeddings = model.embed(text) # Initialize Qdrant client qdrant_client = qdrant_client.QdrantClient(host=""localhost"", port=6333) # Upsert the embedding into Qdrant qdrant_client.upsert( collection_name=""VersatileEmbeddings"", points=Batch( ids=[1], vectors=[embeddings], ) ) ``` ",documentation/embeddings/mixedbread.md "--- title: Azure OpenAI weight: 950 --- # Using Azure OpenAI with Qdrant Azure OpenAI is Microsoft's platform for AI embeddings, focusing on powerful text and data analytics. These embeddings are suitable for high-precision vector searches in Qdrant. ## Installation You can install the required packages using the following pip command: ```bash pip install openai azure-identity python-dotenv qdrant-client ``` ## Code Example ```python import os import openai import dotenv import qdrant_client from qdrant_client.models import Batch from azure.identity import DefaultAzureCredential, get_bearer_token_provider dotenv.load_dotenv() # Set to True if using Azure Active Directory for authentication use_azure_active_directory = False # Qdrant client setup qdrant_client = qdrant_client.QdrantClient(url=""http://localhost:6333"") # Azure OpenAI Authentication if not use_azure_active_directory: endpoint = os.environ[""AZURE_OPENAI_ENDPOINT""] api_key = os.environ[""AZURE_OPENAI_API_KEY""] client = openai.AzureOpenAI( azure_endpoint=endpoint, api_key=api_key, api_version=""2023-09-01-preview"" ) else: endpoint = os.environ[""AZURE_OPENAI_ENDPOINT""] client = openai.AzureOpenAI( azure_endpoint=endpoint, azure_ad_token_provider=get_bearer_token_provider(DefaultAzureCredential(), ""https://cognitiveservices.azure.com/.default""), api_version=""2023-09-01-preview"" ) # Deployment name of the model in Azure OpenAI Studio deployment = ""your-deployment-name"" # Replace with your deployment name # Generate embeddings using the Azure OpenAI client text_input = ""The food was delicious and the waiter..."" embeddings_response = client.embeddings.create( model=deployment, input=text_input ) # Extract the embedding vector from the response embedding_vector = embeddings_response.data[0].embedding # Insert the embedding into Qdrant qdrant_client.upsert( collection_name=""MyCollection"", points=Batch( ids=[1], # This ID can be dynamically assigned or managed vectors=[embedding_vector], ) ) print(""Embedding successfully upserted into Qdrant."") ```",documentation/embeddings/azure.md "--- title: Database Optimization weight: 2 --- # Frequently Asked Questions: Database Optimization ### How do I reduce memory usage? The primary source of memory usage is vector data. There are several ways to address that: - Configure [Quantization](../../guides/quantization/) to reduce the memory usage of vectors. - Configure on-disk vector storage The choice of the approach depends on your requirements. Read more about [configuring the optimal](../../tutorials/optimize/) use of Qdrant. ### How do you choose the machine configuration? There are two main scenarios of Qdrant usage in terms of resource consumption: - **Performance-optimized** -- when you need to serve vector search as fast (many) as possible. In this case, you need to have as much vector data in RAM as possible. Use our [calculator](https://cloud.qdrant.io/calculator) to estimate the required RAM. - **Storage-optimized** -- when you need to store many vectors and minimize costs by compromising some search speed. In this case, pay attention to the disk speed instead. More about it in the article about [Memory Consumption](../../../articles/memory-consumption/). ### I configured on-disk vector storage, but memory usage is still high. Why? Firstly, memory usage metrics as reported by `top` or `htop` may be misleading. They are not showing the minimal amount of memory required to run the service. If the RSS memory usage is 10 GB, it doesn't mean that it won't work on a machine with 8 GB of RAM. Qdrant uses many techniques to reduce search latency, including caching disk data in RAM and preloading data from disk to RAM. As a result, the Qdrant process might use more memory than the minimum required to run the service. > Unused RAM is wasted RAM If you want to limit the memory usage of the service, we recommend using [limits in Docker](https://docs.docker.com/config/containers/resource_constraints/#memory) or Kubernetes. ### My requests are very slow or time out. What should I do? There are several possible reasons for that: - **Using filters without payload index** -- If you're performing a search with a filter but you don't have a payload index, Qdrant will have to load whole payload data from disk to check the filtering condition. Ensure you have adequately configured [payload indexes](../../concepts/indexing/#payload-index). - **Usage of on-disk vector storage with slow disks** -- If you're using on-disk vector storage, ensure you have fast enough disks. We recommend using local SSDs with at least 50k IOPS. Read more about the influence of the disk speed on the search latency in the article about [Memory Consumption](../../../articles/memory-consumption/). - **Large limit or non-optimal query parameters** -- A large limit or offset might lead to significant performance degradation. Please pay close attention to the query/collection parameters that significantly diverge from the defaults. They might be the reason for the performance issues.",documentation/faq/database-optimization.md "--- title: Qdrant Fundamentals weight: 1 --- # Frequently Asked Questions: General Topics |||||| |-|-|-|-|-| |[Vectors](/documentation/faq/qdrant-fundamentals/#vectors)|[Search](/documentation/faq/qdrant-fundamentals/#search)|[Collections](/documentation/faq/qdrant-fundamentals/#collections)|[Compatibility](/documentation/faq/qdrant-fundamentals/#compatibility)|[Cloud](/documentation/faq/qdrant-fundamentals/#cloud)| ## Vectors ### What is the maximum vector dimension supported by Qdrant? Qdrant supports up to 65,535 dimensions by default, but this can be configured to support higher dimensions. ### What is the maximum size of vector metadata that can be stored? There is no inherent limitation on metadata size, but it should be [optimized for performance and resource usage](/documentation/guides/optimize/). Users can set upper limits in the configuration. ### Can the same similarity search query yield different results on different machines? Yes, due to differences in hardware configurations and parallel processing, results may vary slightly. ### What to do with documents with small chunks using a fixed chunk strategy? For documents with small chunks, consider merging chunks or using variable chunk sizes to optimize vector representation and search performance. ### How do I choose the right vector embeddings for my use case? This depends on the nature of your data and the specific application. Consider factors like dimensionality, domain-specific models, and the performance characteristics of different embeddings. ### How does Qdrant handle different vector embeddings from various providers in the same collection? Qdrant natively [supports multiple vectors per data point](/documentation/concepts/vectors/#multivectors), allowing different embeddings from various providers to coexist within the same collection. ### Can I migrate my embeddings from another vector store to Qdrant? Yes, Qdrant supports migration of embeddings from other vector stores, facilitating easy transitions and adoption of Qdrant’s features. ## Search ### How does Qdrant handle real-time data updates and search? Qdrant supports live updates for vector data, with newly inserted, updated and deleted vectors available for immediate search. The system uses full-scan search on unindexed segments during background index updates. ### My search results contain vectors with null values. Why? By default, Qdrant tries to minimize network traffic and doesn't return vectors in search results. But you can force Qdrant to do so by setting the `with_vector` parameter of the Search/Scroll to `true`. If you're still seeing `""vector"": null` in your results, it might be that the vector you're passing is not in the correct format, or there's an issue with how you're calling the upsert method. ### How can I search without a vector? You are likely looking for the [scroll](../../concepts/points/#scroll-points) method. It allows you to retrieve the records based on filters or even iterate over all the records in the collection. ### Does Qdrant support a full-text search or a hybrid search? Qdrant is a vector search engine in the first place, and we only implement full-text support as long as it doesn't compromise the vector search use case. That includes both the interface and the performance. What Qdrant can do: - Search with full-text filters - Apply full-text filters to the vector search (i.e., perform vector search among the records with specific words or phrases) - Do prefix search and semantic [search-as-you-type](../../../articles/search-as-you-type/) - Sparse vectors, as used in [SPLADE](https://github.com/naver/splade) or similar models - [Multi-vectors](../../concepts/vectors/#multivectors), for example ColBERT and other late-interaction models - Combination of the [multiple searches](../../concepts/hybrid-queries/) What Qdrant doesn't plan to support: - Non-vector-based retrieval or ranking functions - Built-in ontologies or knowledge graphs - Query analyzers and other NLP tools Of course, you can always combine Qdrant with any specialized tool you need, including full-text search engines. Read more about [our approach](../../../articles/hybrid-search/) to hybrid search. ## Collections ### How many collections can I create? As many as you want, but be aware that each collection requires additional resources. It is _highly_ recommended not to create many small collections, as it will lead to significant resource consumption overhead. We consider creating a collection for each user/dialog/document as an antipattern. Please read more about collections, isolation, and multiple users in our [Multitenancy](../../tutorials/multiple-partitions/) tutorial. ### How do I upload a large number of vectors into a Qdrant collection? Read about our recommendations in the [bulk upload](../../tutorials/bulk-upload/) tutorial. ### Can I only store quantized vectors and discard full precision vectors? No, Qdrant requires full precision vectors for operations like reindexing, rescoring, etc. ## Compatibility ### Is Qdrant compatible with CPUs or GPUs for vector computation? Qdrant primarily relies on CPU acceleration for scalability and efficiency, with no current support for GPU acceleration. ### Do you guarantee compatibility across versions? In case your version is older, we only guarantee compatibility between two consecutive minor versions. This also applies to client versions. Ensure your client version is never more than one minor version away from your cluster version. While we will assist with break/fix troubleshooting of issues and errors specific to our products, Qdrant is not accountable for reviewing, writing (or rewriting), or debugging custom code. ### Do you support downgrades? We do not support downgrading a cluster on any of our products. If you deploy a newer version of Qdrant, your data is automatically migrated to the newer storage format. This migration is not reversible. ### How do I avoid issues when updating to the latest version? We only guarantee compatibility if you update between consecutive versions. You would need to upgrade versions one at a time: `1.1 -> 1.2`, then `1.2 -> 1.3`, then `1.3 -> 1.4`. ## Cloud ### Is it possible to scale down a Qdrant Cloud cluster? It is possible to vertically scale down a Qdrant Cloud cluster, as long as the disk size is not reduced. Horizontal downscaling is currently not possible, but on our roadmap. But in some cases, we might be able to help you with that manually. Please open a support ticket, so that we can assist. ",documentation/faq/qdrant-fundamentals.md "--- title: FAQ weight: 22 is_empty: true ---",documentation/faq/_index.md "--- title: Airbyte aliases: [ ../integrations/airbyte/, ../frameworks/airbyte/ ] --- # Airbyte [Airbyte](https://airbyte.com/) is an open-source data integration platform that helps you replicate your data between different systems. It has a [growing list of connectors](https://docs.airbyte.io/integrations) that can be used to ingest data from multiple sources. Building data pipelines is also crucial for managing the data in Qdrant, and Airbyte is a great tool for this purpose. Airbyte may take care of the data ingestion from a selected source, while Qdrant will help you to build a search engine on top of it. There are three supported modes of how the data can be ingested into Qdrant: * **Full Refresh Sync** * **Incremental - Append Sync** * **Incremental - Append + Deduped** You can read more about these modes in the [Airbyte documentation](https://docs.airbyte.io/integrations/destinations/qdrant). ## Prerequisites Before you start, make sure you have the following: 1. Airbyte instance, either [Open Source](https://airbyte.com/solutions/airbyte-open-source), [Self-Managed](https://airbyte.com/solutions/airbyte-enterprise), or [Cloud](https://airbyte.com/solutions/airbyte-cloud). 2. Running instance of Qdrant. It has to be accessible by URL from the machine where Airbyte is running. You can follow the [installation guide](/documentation/guides/installation/) to set up Qdrant. ## Setting up Qdrant as a destination Once you have a running instance of Airbyte, you can set up Qdrant as a destination directly in the UI. Airbyte's Qdrant destination is connected with a single collection in Qdrant. ![Airbyte Qdrant destination](/documentation/frameworks/airbyte/qdrant-destination.png) ### Text processing Airbyte has some built-in mechanisms to transform your texts into embeddings. You can choose how you want to chunk your fields into pieces before calculating the embeddings, but also which fields should be used to create the point payload. ![Processing settings](/documentation/frameworks/airbyte/processing.png) ### Embeddings You can choose the model that will be used to calculate the embeddings. Currently, Airbyte supports multiple models, including OpenAI and Cohere. ![Embeddings settings](/documentation/frameworks/airbyte/embedding.png) Using some precomputed embeddings from your data source is also possible. In this case, you can pass the field name containing the embeddings and their dimensionality. ![Precomputed embeddings settings](/documentation/frameworks/airbyte/precomputed-embedding.png) ### Qdrant connection details Finally, we can configure the target Qdrant instance and collection. In case you use the built-in authentication mechanism, here is where you can pass the token. ![Qdrant connection details](/documentation/frameworks/airbyte/qdrant-config.png) Once you confirm creating the destination, Airbyte will test if a specified Qdrant cluster is accessible and might be used as a destination. ## Setting up connection Airbyte combines sources and destinations into a single entity called a connection. Once you have a destination configured and a source, you can create a connection between them. It doesn't matter what source you use, as long as Airbyte supports it. The process is pretty straightforward, but depends on the source you use. ![Airbyte connection](/documentation/frameworks/airbyte/connection.png) ## Further Reading * [Airbyte documentation](https://docs.airbyte.com/understanding-airbyte/connections/). * [Source Code](https://github.com/airbytehq/airbyte/tree/master/airbyte-integrations/connectors/destination-qdrant) ",documentation/data-management/airbyte.md "--- title: Apache Spark aliases: [ ../integrations/spark/, ../frameworks/spark/ ] --- # Apache Spark [Spark](https://spark.apache.org/) is a distributed computing framework designed for big data processing and analytics. The [Qdrant-Spark connector](https://github.com/qdrant/qdrant-spark) enables Qdrant to be a storage destination in Spark. ## Installation You can set up the Qdrant-Spark Connector in a few different ways, depending on your preferences and requirements. ### GitHub Releases The simplest way to get started is by downloading pre-packaged JAR file releases from the [GitHub releases page](https://github.com/qdrant/qdrant-spark/releases). These JAR files come with all the necessary dependencies. ### Building from Source If you prefer to build the JAR from source, you'll need [JDK 8](https://www.azul.com/downloads/#zulu) and [Maven](https://maven.apache.org/) installed on your system. Once you have the prerequisites in place, navigate to the project's root directory and run the following command: ```bash mvn package ``` This command will compile the source code and generate a fat JAR, which will be stored in the `target` directory by default. ### Maven Central For use with Java and Scala projects, the package can be found [here](https://central.sonatype.com/artifact/io.qdrant/spark). ## Usage Below, we'll walk through the steps of creating a Spark session with Qdrant support and loading data into Qdrant. ### Creating a single-node Spark session with Qdrant Support To begin, import the necessary libraries and create a Spark session with Qdrant support: ```python from pyspark.sql import SparkSession spark = SparkSession.builder.config( ""spark.jars"", ""spark-VERSION.jar"", # Specify the downloaded JAR file ) .master(""local[*]"") .appName(""qdrant"") .getOrCreate() ``` ```scala import org.apache.spark.sql.SparkSession val spark = SparkSession.builder .config(""spark.jars"", ""spark-VERSION.jar"") // Specify the downloaded JAR file .master(""local[*]"") .appName(""qdrant"") .getOrCreate() ``` ```java import org.apache.spark.sql.SparkSession; public class QdrantSparkJavaExample { public static void main(String[] args) { SparkSession spark = SparkSession.builder() .config(""spark.jars"", ""spark-VERSION.jar"") // Specify the downloaded JAR file .master(""local[*]"") .appName(""qdrant"") .getOrCreate(); } } ``` ### Loading data into Qdrant The connector supports ingesting multiple named/unnamed, dense/sparse vectors. _Click each to expand._
Unnamed/Default vector ```python .write .format(""io.qdrant.spark.Qdrant"") .option(""qdrant_url"", ) .option(""collection_name"", ) .option(""embedding_field"", ) # Expected to be a field of type ArrayType(FloatType) .option(""schema"", .schema.json()) .mode(""append"") .save() ```
Named vector ```python .write .format(""io.qdrant.spark.Qdrant"") .option(""qdrant_url"", ) .option(""collection_name"", ) .option(""embedding_field"", ) # Expected to be a field of type ArrayType(FloatType) .option(""vector_name"", ) .option(""schema"", .schema.json()) .mode(""append"") .save() ``` > #### NOTE > > The `embedding_field` and `vector_name` options are maintained for backward compatibility. It is recommended to use `vector_fields` and `vector_names` for named vectors as shown below.
Multiple named vectors ```python .write .format(""io.qdrant.spark.Qdrant"") .option(""qdrant_url"", """") .option(""collection_name"", """") .option(""vector_fields"", "","") .option(""vector_names"", "","") .option(""schema"", .schema.json()) .mode(""append"") .save() ```
Sparse vectors ```python .write .format(""io.qdrant.spark.Qdrant"") .option(""qdrant_url"", """") .option(""collection_name"", """") .option(""sparse_vector_value_fields"", """") .option(""sparse_vector_index_fields"", """") .option(""sparse_vector_names"", """") .option(""schema"", .schema.json()) .mode(""append"") .save() ```
Multiple sparse vectors ```python .write .format(""io.qdrant.spark.Qdrant"") .option(""qdrant_url"", """") .option(""collection_name"", """") .option(""sparse_vector_value_fields"", "","") .option(""sparse_vector_index_fields"", "","") .option(""sparse_vector_names"", "","") .option(""schema"", .schema.json()) .mode(""append"") .save() ```
Combination of named dense and sparse vectors ```python .write .format(""io.qdrant.spark.Qdrant"") .option(""qdrant_url"", """") .option(""collection_name"", """") .option(""vector_fields"", "","") .option(""vector_names"", "","") .option(""sparse_vector_value_fields"", "","") .option(""sparse_vector_index_fields"", "","") .option(""sparse_vector_names"", "","") .option(""schema"", .schema.json()) .mode(""append"") .save() ```
No vectors - Entire dataframe is stored as payload ```python .write .format(""io.qdrant.spark.Qdrant"") .option(""qdrant_url"", """") .option(""collection_name"", """") .option(""schema"", .schema.json()) .mode(""append"") .save() ```
## Databricks You can use the `qdrant-spark` connector as a library in [Databricks](https://www.databricks.com/). - Go to the `Libraries` section in your Databricks cluster dashboard. - Select `Install New` to open the library installation modal. - Search for `io.qdrant:spark:VERSION` in the Maven packages and click `Install`. ![Databricks](/documentation/frameworks/spark/databricks.png) ## Datatype Support Qdrant supports all the Spark data types, and the appropriate data types are mapped based on the provided schema. ## Configuration Options | Option | Description | Column DataType | Required | | :--------------------------- | :------------------------------------------------------------------ | :---------------------------- | :------- | | `qdrant_url` | GRPC URL of the Qdrant instance. Eg: | - | ✅ | | `collection_name` | Name of the collection to write data into | - | ✅ | | `schema` | JSON string of the dataframe schema | - | ✅ | | `embedding_field` | Name of the column holding the embeddings | `ArrayType(FloatType)` | ❌ | | `id_field` | Name of the column holding the point IDs. Default: Random UUID | `StringType` or `IntegerType` | ❌ | | `batch_size` | Max size of the upload batch. Default: 64 | - | ❌ | | `retries` | Number of upload retries. Default: 3 | - | ❌ | | `api_key` | Qdrant API key for authentication | - | ❌ | | `vector_name` | Name of the vector in the collection. | - | ❌ | | `vector_fields` | Comma-separated names of columns holding the vectors. | `ArrayType(FloatType)` | ❌ | | `vector_names` | Comma-separated names of vectors in the collection. | - | ❌ | | `sparse_vector_index_fields` | Comma-separated names of columns holding the sparse vector indices. | `ArrayType(IntegerType)` | ❌ | | `sparse_vector_value_fields` | Comma-separated names of columns holding the sparse vector values. | `ArrayType(FloatType)` | ❌ | | `sparse_vector_names` | Comma-separated names of the sparse vectors in the collection. | - | ❌ | | `shard_key_selector` | Comma-separated names of custom shard keys to use during upsert. | - | ❌ | For more information, be sure to check out the [Qdrant-Spark GitHub repository](https://github.com/qdrant/qdrant-spark). The Apache Spark guide is available [here](https://spark.apache.org/docs/latest/quick-start.html). Happy data processing! ",documentation/data-management/spark.md "--- title: Confluent Kafka aliases: [ ../frameworks/confluent/ ] --- ![Confluent Logo](/documentation/frameworks/confluent/confluent-logo.png) Built by the original creators of Apache Kafka®, [Confluent Cloud](https://www.confluent.io/confluent-cloud/?utm_campaign=tm.pmm_cd.cwc_partner_Qdrant_generic&utm_source=Qdrant&utm_medium=partnerref) is a cloud-native and complete data streaming platform available on AWS, Azure, and Google Cloud. The platform includes a fully managed, elastically scaling Kafka engine, 120+ connectors, serverless Apache Flink®, enterprise-grade security controls, and a robust governance suite. With our [Qdrant-Kafka Sink Connector](https://github.com/qdrant/qdrant-kafka), Qdrant is part of the [Connect with Confluent](https://www.confluent.io/partners/connect/) technology partner program. It brings fully managed data streams directly to organizations from Confluent Cloud, making it easier for organizations to stream any data to Qdrant with a fully managed Apache Kafka service. ## Usage ### Pre-requisites - A Confluent Cloud account. You can begin with a [free trial](https://www.confluent.io/confluent-cloud/tryfree/?utm_campaign=tm.pmm_cd.cwc_partner_qdrant_tryfree&utm_source=qdrant&utm_medium=partnerref) with credits for the first 30 days. - Qdrant instance to connect to. You can get a free cloud instance at [cloud.qdrant.io](https://cloud.qdrant.io/). ### Installation 1) Download the latest connector zip file from [Confluent Hub](https://www.confluent.io/hub/qdrant/qdrant-kafka). 2) Configure an environment and cluster on Confluent and create a topic to produce messages for. 3) Navigate to the `Connectors` section of the Confluent cluster and click `Add Plugin`. Upload the zip file with the following info. ![Qdrant Connector Install](/documentation/frameworks/confluent/install.png) 4) Once installed, navigate to the connector and set the following configuration values. ![Qdrant Connector Config](/documentation/frameworks/confluent/config.png) Replace the placeholder values with your credentials. 5) Add the Qdrant instance host to the allowed networking endpoints. ![Qdrant Connector Endpoint](/documentation/frameworks/confluent/endpoint.png) 7) Start the connector. ## Producing Messages You can now produce messages for the configured topic, and they'll be written into the configured Qdrant instance. ![Qdrant Connector Message](/documentation/frameworks/confluent/message.png) ## Message Formats The connector supports messages in the following formats. _Click each to expand._
Unnamed/Default vector Reference: [Creating a collection with a default vector](https://qdrant.tech/documentation/concepts/collections/#create-a-collection). ```json { ""collection_name"": ""{collection_name}"", ""id"": 1, ""vector"": [ 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8 ], ""payload"": { ""name"": ""kafka"", ""description"": ""Kafka is a distributed streaming platform"", ""url"": ""https://kafka.apache.org/"" } } ```
Named multiple vectors Reference: [Creating a collection with multiple vectors](https://qdrant.tech/documentation/concepts/collections/#collection-with-multiple-vectors). ```json { ""collection_name"": ""{collection_name}"", ""id"": 1, ""vector"": { ""some-dense"": [ 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8 ], ""some-other-dense"": [ 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8 ] }, ""payload"": { ""name"": ""kafka"", ""description"": ""Kafka is a distributed streaming platform"", ""url"": ""https://kafka.apache.org/"" } } ```
Sparse vectors Reference: [Creating a collection with sparse vectors](https://qdrant.tech/documentation/concepts/collections/#collection-with-sparse-vectors). ```json { ""collection_name"": ""{collection_name}"", ""id"": 1, ""vector"": { ""some-sparse"": { ""indices"": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 ], ""values"": [ 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0 ] } }, ""payload"": { ""name"": ""kafka"", ""description"": ""Kafka is a distributed streaming platform"", ""url"": ""https://kafka.apache.org/"" } } ```
Multi-vectors Reference: - [Multi-vectors](https://qdrant.tech/documentation/concepts/vectors/#multivectors) ```json { ""collection_name"": ""{collection_name}"", ""id"": 1, ""vector"": { ""some-multi"": [ [ 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0 ], [ 1.0, 0.9, 0.8, 0.5, 0.4, 0.8, 0.6, 0.4, 0.2, 0.1 ] ] }, ""payload"": { ""name"": ""kafka"", ""description"": ""Kafka is a distributed streaming platform"", ""url"": ""https://kafka.apache.org/"" } } ```
Combination of named dense and sparse vectors Reference: - [Creating a collection with multiple vectors](https://qdrant.tech/documentation/concepts/collections/#collection-with-multiple-vectors). - [Creating a collection with sparse vectors](https://qdrant.tech/documentation/concepts/collections/#collection-with-sparse-vectors). ```json { ""collection_name"": ""{collection_name}"", ""id"": ""a10435b5-2a58-427a-a3a0-a5d845b147b7"", ""vector"": { ""some-other-dense"": [ 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8 ], ""some-sparse"": { ""indices"": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 ], ""values"": [ 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0 ] } }, ""payload"": { ""name"": ""kafka"", ""description"": ""Kafka is a distributed streaming platform"", ""url"": ""https://kafka.apache.org/"" } } ```
## Further Reading - [Kafka Connect Docs](https://docs.confluent.io/platform/current/connect/index.html) - [Confluent Connectors Docs](https://docs.confluent.io/cloud/current/connectors/bring-your-connector/custom-connector-qs.html) ",documentation/data-management/confluent.md "--- title: Redpanda Connect --- ![Redpanda Cover](/documentation/data-management/redpanda/redpanda-cover.png) [Redpanda Connect](https://www.redpanda.com/connect) is a declarative data-agnostic streaming service designed for efficient, stateless processing steps. It offers transaction-based resiliency with back pressure, ensuring at-least-once delivery when connecting to at-least-once sources with sinks, without the need to persist messages during transit. Connect pipelines are configured using a YAML file, which organizes components hierarchically. Each section represents a different component type, such as inputs, processors and outputs, and these can have nested child components and [dynamic values](https://docs.redpanda.com/redpanda-connect/configuration/interpolation/). The [Qdrant Output](https://docs.redpanda.com/redpanda-connect/components/outputs/qdrant/) component enables streaming vector data into Qdrant collections in your RedPanda pipelines. ## Example An example configuration of the output once the inputs and processors are set, would look like: ```yaml input: # https://docs.redpanda.com/redpanda-connect/components/inputs/about/ pipeline: processors: # https://docs.redpanda.com/redpanda-connect/components/processors/about/ output: label: ""qdrant-output"" qdrant: max_in_flight: 64 batching: count: 8 grpc_host: xyz-example.eu-central.aws.cloud.qdrant.io:6334 api_token: """" tls: enabled: true # skip_cert_verify: false # enable_renegotiation: false # root_cas: """" # root_cas_file: """" # client_certs: [] collection_name: """" id: root = uuid_v4() vector_mapping: 'root = {""some_dense"": this.vector, ""some_sparse"": {""indices"": [23,325,532],""values"": [0.352,0.532,0.532]}}' payload_mapping: 'root = {""field"": this.value, ""field_2"": 987}' ``` ## Further Reading - [Getting started with Connect](https://docs.redpanda.com/redpanda-connect/guides/getting_started/) - [Qdrant Output Reference](https://docs.redpanda.com/redpanda-connect/components/outputs/qdrant/) ",documentation/data-management/redpanda.md "--- title: DLT aliases: [ ../integrations/dlt/, ../frameworks/dlt/ ] --- # DLT(Data Load Tool) [DLT](https://dlthub.com/) is an open-source library that you can add to your Python scripts to load data from various and often messy data sources into well-structured, live datasets. With the DLT-Qdrant integration, you can now select Qdrant as a DLT destination to load data into. **DLT Enables** - Automated maintenance - with schema inference, alerts and short declarative code, maintenance becomes simple. - Run it where Python runs - on Airflow, serverless functions, notebooks. Scales on micro and large infrastructure alike. - User-friendly, declarative interface that removes knowledge obstacles for beginners while empowering senior professionals. ## Usage To get started, install `dlt` with the `qdrant` extra. ```bash pip install ""dlt[qdrant]"" ``` Configure the destination in the DLT secrets file. The file is located at `~/.dlt/secrets.toml` by default. Add the following section to the secrets file. ```toml [destination.qdrant.credentials] location = ""https://your-qdrant-url"" api_key = ""your-qdrant-api-key"" ``` The location will default to `http://localhost:6333` and `api_key` is not defined - which are the defaults for a local Qdrant instance. Find more information about DLT configurations [here](https://dlthub.com/docs/general-usage/credentials). Define the source of the data. ```python import dlt from dlt.destinations.qdrant import qdrant_adapter movies = [ { ""title"": ""Blade Runner"", ""year"": 1982, ""description"": ""The film is about a dystopian vision of the future that combines noir elements with sci-fi imagery."" }, { ""title"": ""Ghost in the Shell"", ""year"": 1995, ""description"": ""The film is about a cyborg policewoman and her partner who set out to find the main culprit behind brain hacking, the Puppet Master."" }, { ""title"": ""The Matrix"", ""year"": 1999, ""description"": ""The movie is set in the 22nd century and tells the story of a computer hacker who joins an underground group fighting the powerful computers that rule the earth."" } ] ``` Define the pipeline. ```python pipeline = dlt.pipeline( pipeline_name=""movies"", destination=""qdrant"", dataset_name=""movies_dataset"", ) ``` Run the pipeline. ```python info = pipeline.run( qdrant_adapter( movies, embed=[""title"", ""description""] ) ) ``` The data is now loaded into Qdrant. To use vector search after the data has been loaded, you must specify which fields Qdrant needs to generate embeddings for. You do that by wrapping the data (or [DLT resource](https://dlthub.com/docs/general-usage/resource)) with the `qdrant_adapter` function. ## Write disposition A DLT [write disposition](https://dlthub.com/docs/dlt-ecosystem/destinations/qdrant/#write-disposition) defines how the data should be written to the destination. All write dispositions are supported by the Qdrant destination. ## DLT Sync Qdrant destination supports syncing of the [`DLT` state](https://dlthub.com/docs/general-usage/state#syncing-state-with-destination). ## Next steps - The comprehensive Qdrant DLT destination documentation can be found [here](https://dlthub.com/docs/dlt-ecosystem/destinations/qdrant/). - [Source Code](https://github.com/dlt-hub/dlt/tree/devel/dlt/destinations/impl/qdrant) ",documentation/data-management/dlt.md "--- title: Apache Airflow aliases: [ ../frameworks/airflow/ ] --- # Apache Airflow [Apache Airflow](https://airflow.apache.org/) is an open-source platform for authoring, scheduling and monitoring data and computing workflows. Airflow uses Python to create workflows that can be easily scheduled and monitored. Qdrant is available as a [provider](https://airflow.apache.org/docs/apache-airflow-providers-qdrant/stable/index.html) in Airflow to interface with the database. ## Prerequisites Before configuring Airflow, you need: 1. A Qdrant instance to connect to. You can set one up in our [installation guide](/documentation/guides/installation/). 2. A running Airflow instance. You can use their [Quick Start Guide](https://airflow.apache.org/docs/apache-airflow/stable/start.html). ## Installation You can install the Qdrant provider by running `pip install apache-airflow-providers-qdrant` in your Airflow shell. **NOTE**: You'll have to restart your Airflow session for the provider to be available. ## Setting up a connection Open the `Admin-> Connections` section of the Airflow UI. Click the `Create` link to create a new [Qdrant connection](https://airflow.apache.org/docs/apache-airflow-providers-qdrant/stable/connections.html). ![Qdrant connection](/documentation/frameworks/airflow/connection.png) You can also set up a connection using [environment variables](https://airflow.apache.org/docs/apache-airflow/stable/howto/connection.html#environment-variables-connections) or an [external secret backend](https://airflow.apache.org/docs/apache-airflow/stable/security/secrets/secrets-backend/index.html). ## Qdrant hook An Airflow hook is an abstraction of a specific API that allows Airflow to interact with an external system. ```python from airflow.providers.qdrant.hooks.qdrant import QdrantHook hook = QdrantHook(conn_id=""qdrant_connection"") hook.verify_connection() ``` A [`qdrant_client#QdrantClient`](https://pypi.org/project/qdrant-client/) instance is available via `@property conn` of the `QdrantHook` instance for use within your Airflow workflows. ```python from qdrant_client import models hook.conn.count("""") hook.conn.upsert( """", points=[ models.PointStruct(id=32, vector=[0.32, 0.12, 0.123], payload={""color"": ""red""}) ], ) ``` ## Qdrant Ingest Operator The Qdrant provider also provides a convenience operator for uploading data to a Qdrant collection that internally uses the Qdrant hook. ```python from airflow.providers.qdrant.operators.qdrant import QdrantIngestOperator vectors = [ [0.11, 0.22, 0.33, 0.44], [0.55, 0.66, 0.77, 0.88], [0.88, 0.11, 0.12, 0.13], ] ids = [32, 21, ""b626f6a9-b14d-4af9-b7c3-43d8deb719a6""] payload = [{""meta"": ""data""}, {""meta"": ""data_2""}, {""meta"": ""data_3"", ""extra"": ""data""}] QdrantIngestOperator( conn_id=""qdrant_connection"", task_id=""qdrant_ingest"", collection_name="""", vectors=vectors, ids=ids, payload=payload, ) ``` ## Reference - 📦 [Provider package PyPI](https://pypi.org/project/apache-airflow-providers-qdrant/) - 📚 [Provider docs](https://airflow.apache.org/docs/apache-airflow-providers-qdrant/stable/index.html) - 📄 [Source Code](https://github.com/apache/airflow/tree/main/airflow/providers/qdrant) ",documentation/data-management/airflow.md "--- title: MindsDB aliases: [ ../integrations/mindsdb/, ../frameworks/mindsdb/ ] --- # MindsDB [MindsDB](https://mindsdb.com) is an AI automation platform for building AI/ML powered features and applications. It works by connecting any source of data with any AI/ML model or framework and automating how real-time data flows between them. With the MindsDB-Qdrant integration, you can now select Qdrant as a database to load into and retrieve from with semantic search and filtering. **MindsDB allows you to easily**: - Connect to any store of data or end-user application. - Pass data to an AI model from any store of data or end-user application. - Plug the output of an AI model into any store of data or end-user application. - Fully automate these workflows to build AI-powered features and applications ## Usage To get started with Qdrant and MindsDB, the following syntax can be used. ```sql CREATE DATABASE qdrant_test WITH ENGINE = ""qdrant"", PARAMETERS = { ""location"": "":memory:"", ""collection_config"": { ""size"": 386, ""distance"": ""Cosine"" } } ``` The available arguments for instantiating Qdrant can be found [here](https://github.com/mindsdb/mindsdb/blob/23a509cb26bacae9cc22475497b8644e3f3e23c3/mindsdb/integrations/handlers/qdrant_handler/qdrant_handler.py#L408-L468). ## Creating a new table - Qdrant options for creating a collection can be specified as `collection_config` in the `CREATE DATABASE` parameters. - By default, UUIDs are set as collection IDs. You can provide your own IDs under the `id` column. ```sql CREATE TABLE qdrant_test.test_table ( SELECT embeddings,'{""source"": ""bbc""}' as metadata FROM mysql_demo_db.test_embeddings ); ``` ## Querying the database #### Perform a full retrieval using the following syntax. ```sql SELECT * FROM qdrant_test.test_table ``` By default, the `LIMIT` is set to 10 and the `OFFSET` is set to 0. #### Perform a similarity search using your embeddings ```sql SELECT * FROM qdrant_test.test_table WHERE search_vector = (select embeddings from mysql_demo_db.test_embeddings limit 1) ``` #### Perform a search using filters ```sql SELECT * FROM qdrant_test.test_table WHERE `metadata.source` = 'bbc'; ``` #### Delete entries using IDs ```sql DELETE FROM qtest.test_table_6 WHERE id = 2 ``` #### Delete entries using filters ```sql DELETE * FROM qdrant_test.test_table WHERE `metadata.source` = 'bbc'; ``` #### Drop a table ```sql DROP TABLE qdrant_test.test_table; ``` ## Next steps - You can find more information pertaining to MindsDB and its datasources [here](https://docs.mindsdb.com/). - [Source Code](https://github.com/mindsdb/mindsdb/tree/main/mindsdb/integrations/handlers/qdrant_handler) ",documentation/data-management/mindsdb.md "--- title: Apache NiFi aliases: [ ../frameworks/nifi/ ] --- # Apache NiFi [NiFi](https://nifi.apache.org/) is a real-time data ingestion platform, which can transfer and manage data transfer between numerous sources and destination systems. It supports many protocols and offers a web-based user interface for developing and monitoring data flows. NiFi supports ingesting and querying data in Qdrant via its processor modules. ## Configuration ![NiFi Qdrant configuration](/documentation/frameworks/nifi/nifi-conifg.png) You can configure Qdrant NiFi processors with your Qdrant credentials, query/upload configurations. The processors offer 2 built-in embedding providers to encode data into vector embeddings - HuggingFace, OpenAI. ## Put Qdrant ![NiFI Put Qdrant](/documentation/frameworks/nifi/nifi-put-qdrant.png) The `Put Qdrant` processor can ingest NiFi [FlowFile](https://nifi.apache.org/docs/nifi-docs/html/nifi-in-depth.html#intro) data into a Qdrant collection. ## Query Qdrant ![NiFI Query Qdrant](/documentation/frameworks/nifi/nifi-query-qdrant.png) The `Query Qdrant` processor can perform a similarity search across a Qdrant collection and return a [FlowFile](https://nifi.apache.org/docs/nifi-docs/html/nifi-in-depth.html#intro) result. ## Further Reading - [NiFi Documentation](https://nifi.apache.org/documentation/v2/). - [Source Code](https://github.com/apache/nifi-python-extensions) ",documentation/data-management/nifi.md "--- title: InfinyOn Fluvio --- ![Fluvio Logo](/documentation/data-management/fluvio/fluvio-logo.png) [InfinyOn Fluvio](https://www.fluvio.io/) is an open-source platform written in Rust for high speed, real-time data processing. It is cloud native, designed to work with any infrastructure type, from bare metal hardware to containerized platforms. ## Usage with Qdrant With the [Qdrant Fluvio Connector](https://github.com/qdrant/qdrant-fluvio), you can stream records from Fluvio topics to Qdrant collections, leveraging Fluvio's delivery guarantees and high-throughput. ### Pre-requisites - A Fluvio installation. You can refer to the [Fluvio Quickstart](https://www.fluvio.io/docs/fluvio/quickstart/) for instructions. - Qdrant server to connect to. You can set up a [local instance](/documentation/quickstart/) or a free cloud instance at [cloud.qdrant.io](https://cloud.qdrant.io/). ### Downloading the connector Run the following commands after [setting up Fluvio](https://www.fluvio.io/docs/fluvio/quickstart). ```console cdk hub download qdrant/qdrant-sink@0.1.0 ``` ### Example Config > _config.yaml_ ```yaml apiVersion: 0.1.0 meta: version: 0.1.0 name: my-qdrant-connector type: qdrant-sink topic: topic-name secrets: - name: QDRANT_API_KEY qdrant: url: https://xyz-example.eu-central.aws.cloud.qdrant.io:6334 api_key: ""${{ secrets.QDRANT_API_KEY }}"" ``` > _secrets.txt_ ```text QDRANT_API_KEY= ``` ### Running ```console cdk deploy start --ipkg qdrant-qdrant-sink-0.1.0.ipkg -c config.yaml --secrets secrets.txt ``` ### Produce Messages You can now run the following to generate messages to be written into Qdrant. ```console fluvio produce topic-name ``` ### Message Formats This sink connector supports messages with dense/sparse/multi vectors. _Click each to expand._
Unnamed/Default vector Reference: [Creating a collection with a default vector](https://qdrant.tech/documentation/concepts/collections/#create-a-collection). ```json { ""collection_name"": ""{collection_name}"", ""id"": 1, ""vector"": [ 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8 ], ""payload"": { ""name"": ""fluvio"", ""description"": ""Solution for distributed stream processing"", ""url"": ""https://www.fluvio.io/"" } } ```
Named multiple vectors Reference: [Creating a collection with multiple vectors](https://qdrant.tech/documentation/concepts/collections/#collection-with-multiple-vectors). ```json { ""collection_name"": ""{collection_name}"", ""id"": 1, ""vector"": { ""some-dense"": [ 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8 ], ""some-other-dense"": [ 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8 ] }, ""payload"": { ""name"": ""fluvio"", ""description"": ""Solution for distributed stream processing"", ""url"": ""https://www.fluvio.io/"" } } ```
Sparse vectors Reference: [Creating a collection with sparse vectors](https://qdrant.tech/documentation/concepts/collections/#collection-with-sparse-vectors). ```json { ""collection_name"": ""{collection_name}"", ""id"": 1, ""vector"": { ""some-sparse"": { ""indices"": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 ], ""values"": [ 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0 ] } }, ""payload"": { ""name"": ""fluvio"", ""description"": ""Solution for distributed stream processing"", ""url"": ""https://www.fluvio.io/"" } } ```
Multi-vector ```json { ""collection_name"": ""{collection_name}"", ""id"": 1, ""vector"": { ""some-multi"": [ [ 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0 ], [ 1.0, 0.9, 0.8, 0.5, 0.4, 0.8, 0.6, 0.4, 0.2, 0.1 ] ] }, ""payload"": { ""name"": ""fluvio"", ""description"": ""Solution for distributed stream processing"", ""url"": ""https://www.fluvio.io/"" } } ```
Combination of named dense and sparse vectors Reference: - [Creating a collection with multiple vectors](https://qdrant.tech/documentation/concepts/collections/#collection-with-multiple-vectors). - [Creating a collection with sparse vectors](https://qdrant.tech/documentation/concepts/collections/#collection-with-sparse-vectors). ```json { ""collection_name"": ""{collection_name}"", ""id"": ""a10435b5-2a58-427a-a3a0-a5d845b147b7"", ""vector"": { ""some-other-dense"": [ 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8 ], ""some-sparse"": { ""indices"": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 ], ""values"": [ 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0 ] } }, ""payload"": { ""name"": ""fluvio"", ""description"": ""Solution for distributed stream processing"", ""url"": ""https://www.fluvio.io/"" } } ```
### Further Reading - [Fluvio Quickstart](https://www.fluvio.io/docs/fluvio/quickstart) - [Fluvio Tutorials](https://www.fluvio.io/docs/fluvio/tutorials/) - [Connector Source](https://github.com/qdrant/qdrant-fluvio) ",documentation/data-management/fluvio.md "--- title: Unstructured aliases: [ ../frameworks/unstructured/ ] --- # Unstructured [Unstructured](https://unstructured.io/) is a library designed to help preprocess, structure unstructured text documents for downstream machine learning tasks. Qdrant can be used as an ingestion destination in Unstructured. ## Setup Install Unstructured with the `qdrant` extra. ```bash pip install ""unstructured[qdrant]"" ``` ## Usage Depending on the use case you can prefer the command line or using it within your application. ### CLI ```bash EMBEDDING_PROVIDER=${EMBEDDING_PROVIDER:-""langchain-huggingface""} unstructured-ingest \ local \ --input-path example-docs/book-war-and-peace-1225p.txt \ --output-dir local-output-to-qdrant \ --strategy fast \ --chunk-elements \ --embedding-provider ""$EMBEDDING_PROVIDER"" \ --num-processes 2 \ --verbose \ qdrant \ --collection-name ""test"" \ --url ""http://localhost:6333"" \ --batch-size 80 ``` For a full list of the options the CLI accepts, run `unstructured-ingest qdrant --help` ### Programmatic usage ```python from unstructured.ingest.connector.local import SimpleLocalConfig from unstructured.ingest.connector.qdrant import ( QdrantWriteConfig, SimpleQdrantConfig, ) from unstructured.ingest.interfaces import ( ChunkingConfig, EmbeddingConfig, PartitionConfig, ProcessorConfig, ReadConfig, ) from unstructured.ingest.runner import LocalRunner from unstructured.ingest.runner.writers.base_writer import Writer from unstructured.ingest.runner.writers.qdrant import QdrantWriter def get_writer() -> Writer: return QdrantWriter( connector_config=SimpleQdrantConfig( url=""http://localhost:6333"", collection_name=""test"", ), write_config=QdrantWriteConfig(batch_size=80), ) if __name__ == ""__main__"": writer = get_writer() runner = LocalRunner( processor_config=ProcessorConfig( verbose=True, output_dir=""local-output-to-qdrant"", num_processes=2, ), connector_config=SimpleLocalConfig( input_path=""example-docs/book-war-and-peace-1225p.txt"", ), read_config=ReadConfig(), partition_config=PartitionConfig(), chunking_config=ChunkingConfig(chunk_elements=True), embedding_config=EmbeddingConfig(provider=""langchain-huggingface""), writer=writer, writer_kwargs={}, ) runner.run() ``` ## Next steps - Unstructured API [reference](https://unstructured-io.github.io/unstructured/api.html). - Qdrant ingestion destination [reference](https://unstructured-io.github.io/unstructured/ingest/destination_connectors/qdrant.html). - [Source Code](https://github.com/Unstructured-IO/unstructured/blob/main/unstructured/ingest/connector/qdrant.py) ",documentation/data-management/unstructured.md "--- title: Data Management weight: 15 --- ## Data Management Integrations | Integration | Description | | ------------------------------- | -------------------------------------------------------------------------------------------------- | | [Airbyte](./airbyte/) | Data integration platform specialising in ELT pipelines. | | [Airflow](./airflow/) | Platform designed for developing, scheduling, and monitoring batch-oriented workflows. | | [Connect](./redpanda/) | Declarative data-agnostic streaming service for efficient, stateless processing. | | [Confluent](./confluent/) | Fully-managed data streaming platform with a cloud-native Apache Kafka engine. | | [DLT](./dlt/) | Python library to simplify data loading processes between several sources and destinations. | | [Fluvio](./fluvio/) | Rust-based platform for high speed, real-time data processing. | | [Fondant](./fondant/) | Framework for developing datasets, sharing reusable operations and data processing trees. | | [MindsDB](./mindsdb/) | Platform to deploy, serve, and fine-tune models with numerous data source integrations. | | [NiFi](./nifi/) | Data ingestion platform to manage data transfer between different sources and destination systems. | | [Spark](./spark/) | A unified analytics engine for large-scale data processing. | | [Unstructured](./unstructured/) | Python library with components for ingesting and pre-processing data from numerous sources. | ",documentation/data-management/_index.md "--- title: Fondant aliases: [ ../integrations/fondant/, ../frameworks/fondant/ ] --- # Fondant [Fondant](https://fondant.ai/en/stable/) is an open-source framework that aims to simplify and speed up large-scale data processing by making containerized components reusable across pipelines and execution environments. Benefit from built-in features such as autoscaling, data lineage, and pipeline caching, and deploy to (managed) platforms such as Vertex AI, Sagemaker, and Kubeflow Pipelines. Fondant comes with a library of reusable components that you can leverage to compose your own pipeline, including a Qdrant component for writing embeddings to Qdrant. ## Usage **A data load pipeline for RAG using Qdrant**. A simple ingestion pipeline could look like the following: ```python import pyarrow as pa from fondant.pipeline import Pipeline indexing_pipeline = Pipeline( name=""ingestion-pipeline"", description=""Pipeline to prepare and process data for building a RAG solution"", base_path=""./fondant-artifacts"", ) # An custom implemenation of a read component. text = indexing_pipeline.read( ""path/to/data-source-component"", arguments={ # your custom arguments } ) chunks = text.apply( ""chunk_text"", arguments={ ""chunk_size"": 512, ""chunk_overlap"": 32, }, ) embeddings = chunks.apply( ""embed_text"", arguments={ ""model_provider"": ""huggingface"", ""model"": ""all-MiniLM-L6-v2"", }, ) embeddings.write( ""index_qdrant"", arguments={ ""url"": ""http:localhost:6333"", ""collection_name"": ""some-collection-name"", }, cache=False, ) ``` Once you have a pipeline, you can easily run it using the built-in CLI. Fondant allows you to run the pipeline in production across different clouds. The first component is a custom read module that needs to be implemented and cannot be used off the shelf. A detailed tutorial on how to rebuild this pipeline [is provided on GitHub](https://github.com/ml6team/fondant-usecase-RAG/tree/main). ## Next steps More information about creating your own pipelines and components can be found in the [Fondant documentation](https://fondant.ai/en/stable/). ",documentation/data-management/fondant.md "--- title: Working with ColBERT weight: 6 --- # How to Generate ColBERT Multivectors with FastEmbed With FastEmbed, you can use ColBERT to generate multivector embeddings. ColBERT is a powerful retrieval model that combines the strength of BERT embeddings with efficient late interaction techniques. FastEmbed will provide you with an optimized pipeline to utilize these embeddings in your search tasks. Please note that ColBERT requires more resources than other no-interaction models. We recommend you use ColBERT as a re-ranker instead of a first-stage retriever. The first-stage retriever can retrieve 100-500 examples. This task would be done by a simpler model. Then, you can rank the leftover results with ColBERT. ## Setup This command imports all late interaction models for text embedding. ```python from fastembed import LateInteractionTextEmbedding ``` You can list which models are supported in your version of FastEmbed. ```python LateInteractionTextEmbedding.list_supported_models() ``` This command displays the available models. The output shows details about the ColBERT model, including its dimensions, description, size, sources, and model file. ```python [{'model': 'colbert-ir/colbertv2.0', 'dim': 128, 'description': 'Late interaction model', 'size_in_GB': 0.44, 'sources': {'hf': 'colbert-ir/colbertv2.0'}, 'model_file': 'model.onnx'}] ``` Now, load the model. ```python embedding_model = LateInteractionTextEmbedding(""colbert-ir/colbertv2.0"") ``` The model files will be fetched and downloaded, with progress showing. ## Embed data First, you need to define both documents and queries. ```python documents = [ ""ColBERT is a late interaction text embedding model, however, there are also other models such as TwinBERT."", ""On the contrary to the late interaction models, the early interaction models contains interaction steps at embedding generation process"", ] queries = [ ""Are there any other late interaction text embedding models except ColBERT?"", ""What is the difference between late interaction and early interaction text embedding models?"", ] ``` **Note:** ColBERT computes document and query embeddings differently. Make sure to use the corresponding methods. Now, create embeddings from both documents and queries. ```python document_embeddings = list( embedding_model.embed(documents) ) # embed and qury_embed return generators, # which we need to evaluate by writing them to a list query_embeddings = list(embedding_model.query_embed(queries)) ``` Display the shapes of document and query embeddings. ```python document_embeddings[0].shape, query_embeddings[0].shape ``` You should get something like this: ```python ((26, 128), (32, 128)) ``` Don't worry about query embeddings having the bigger shape in this case. ColBERT authors recommend to pad queries with [MASK] tokens to 32 tokens. They also recommend truncating queries to 32 tokens, however, we don't do that in FastEmbed so that you can put some straight into the queries. ## Compute similarity This function calculates the relevance scores using the MaxSim operator, sorts the documents based on these scores, and returns the indices of the top-k documents. ```python import numpy as np def compute_relevance_scores(query_embedding: np.array, document_embeddings: np.array, k: int): """""" Compute relevance scores for top-k documents given a query. :param query_embedding: Numpy array representing the query embedding, shape: [num_query_terms, embedding_dim] :param document_embeddings: Numpy array representing embeddings for documents, shape: [num_documents, max_doc_length, embedding_dim] :param k: Number of top documents to return :return: Indices of the top-k documents based on their relevance scores """""" # Compute batch dot-product of query_embedding and document_embeddings # Resulting shape: [num_documents, num_query_terms, max_doc_length] scores = np.matmul(query_embedding, document_embeddings.transpose(0, 2, 1)) # Apply max-pooling across document terms (axis=2) to find the max similarity per query term # Shape after max-pool: [num_documents, num_query_terms] max_scores_per_query_term = np.max(scores, axis=2) # Sum the scores across query terms to get the total score for each document # Shape after sum: [num_documents] total_scores = np.sum(max_scores_per_query_term, axis=1) # Sort the documents based on their total scores and get the indices of the top-k documents sorted_indices = np.argsort(total_scores)[::-1][:k] return sorted_indices ``` Calculate sorted indices. ```python sorted_indices = compute_relevance_scores( np.array(query_embeddings[0]), np.array(document_embeddings), k=3 ) print(""Sorted document indices:"", sorted_indices) ``` The output shows the sorted document indices based on the relevance to the query. ```python Sorted document indices: [0 1] ``` ## Show results ```python print(f""Query: {queries[0]}"") for index in sorted_indices: print(f""Document: {documents[index]}"") ``` The query and corresponding sorted documents are displayed, showing the relevance of each document to the query. ```bash Query: Are there any other late interaction text embedding models except ColBERT? Document: ColBERT is a late interaction text embedding model, however, there are also other models such as TwinBERT. Document: On the contrary to the late interaction models, the early interaction models contains interaction steps at embedding generation process ``` ",documentation/fastembed/fastembed-colbert.md "--- title: Working with SPLADE weight: 5 --- # How to Generate Sparse Vectors with SPLADE SPLADE is a novel method for learning sparse text representation vectors, outperforming BM25 in tasks like information retrieval and document classification. Its main advantage is generating efficient and interpretable sparse vectors, making it effective for large-scale text data. ## Setup First, install FastEmbed. ```python pip install -q fastembed ``` Next, import the required modules for sparse embeddings and Python’s typing module. ```python from fastembed import SparseTextEmbedding, SparseEmbedding from typing import List ``` You may always check the list of all supported sparse embedding models. ```python SparseTextEmbedding.list_supported_models() ``` This will return a list of models, each with its details such as model name, vocabulary size, description, and sources. ```python [{'model': 'prithivida/Splade_PP_en_v1', 'vocab_size': 30522, 'description': 'Independent Implementation of SPLADE++ Model for English', 'size_in_GB': 0.532, 'sources': {'hf': 'Qdrant/SPLADE_PP_en_v1'}}] ``` Now, load the model. ```python model_name = ""prithvida/Splade_PP_en_v1"" # This triggers the model download model = SparseTextEmbedding(model_name=model_name) ``` ## Embed data You need to define a list of documents to be embedded. ```python documents: List[str] = [ ""Chandrayaan-3 is India's third lunar mission"", ""It aimed to land a rover on the Moon's surface - joining the US, China and Russia"", ""The mission is a follow-up to Chandrayaan-2, which had partial success"", ""Chandrayaan-3 will be launched by the Indian Space Research Organisation (ISRO)"", ""The estimated cost of the mission is around $35 million"", ""It will carry instruments to study the lunar surface and atmosphere"", ""Chandrayaan-3 landed on the Moon's surface on 23rd August 2023"", ""It consists of a lander named Vikram and a rover named Pragyan similar to Chandrayaan-2. Its propulsion module would act like an orbiter."", ""The propulsion module carries the lander and rover configuration until the spacecraft is in a 100-kilometre (62 mi) lunar orbit"", ""The mission used GSLV Mk III rocket for its launch"", ""Chandrayaan-3 was launched from the Satish Dhawan Space Centre in Sriharikota"", ""Chandrayaan-3 was launched earlier in the year 2023"", ] ``` Then, generate sparse embeddings for each document. Here,`batch_size` is optional and helps to process documents in batches. ```python sparse_embeddings_list: List[SparseEmbedding] = list( model.embed(documents, batch_size=6) ) ``` ## Retrieve embeddings `sparse_embeddings_list` contains sparse embeddings for the documents provided earlier. Each element in this list is a `SparseEmbedding` object that contains the sparse vector representation of a document. ```python index = 0 sparse_embeddings_list[index] ``` This output is a `SparseEmbedding` object for the first document in our list. It contains two arrays: `values` and `indices`. - The `values` array represents the weights of the features (tokens) in the document. - The `indices` array represents the indices of these features in the model's vocabulary. Each pair of corresponding `values` and `indices` represents a token and its weight in the document. ```python SparseEmbedding(values=array([0.05297208, 0.01963477, 0.36459631, 1.38508618, 0.71776593, 0.12667948, 0.46230844, 0.446771 , 0.26897505, 1.01519883, 1.5655334 , 0.29412213, 1.53102326, 0.59785569, 1.1001817 , 0.02079751, 0.09955651, 0.44249091, 0.09747757, 1.53519952, 1.36765671, 0.15740395, 0.49882549, 0.38629025, 0.76612782, 1.25805044, 0.39058095, 0.27236196, 0.45152301, 0.48262018, 0.26085234, 1.35912788, 0.70710695, 1.71639752]), indices=array([ 1010, 1011, 1016, 1017, 2001, 2018, 2034, 2093, 2117, 2319, 2353, 2509, 2634, 2686, 2796, 2817, 2922, 2959, 3003, 3148, 3260, 3390, 3462, 3523, 3822, 4231, 4316, 4774, 5590, 5871, 6416, 11926, 12076, 16469])) ``` ## Examine weights Now, print the first 5 features and their weights for better understanding. ```python for i in range(5): print(f""Token at index {sparse_embeddings_list[0].indices[i]} has weight {sparse_embeddings_list[0].values[i]}"") ``` The output will display the token indices and their corresponding weights for the first document. ```python Token at index 1010 has weight 0.05297207832336426 Token at index 1011 has weight 0.01963476650416851 Token at index 1016 has weight 0.36459630727767944 Token at index 1017 has weight 1.385086178779602 Token at index 2001 has weight 0.7177659273147583 ``` ## Analyze results Let's use the tokenizer vocab to make sense of these indices. ```python import json from tokenizers import Tokenizer tokenizer = Tokenizer.from_pretrained(SparseTextEmbedding.list_supported_models()[0][""sources""][""hf""]) ``` The `get_tokens_and_weights` function takes a `SparseEmbedding` object and a `tokenizer` as input. It will construct a dictionary where the keys are the decoded tokens, and the values are their corresponding weights. ```python def get_tokens_and_weights(sparse_embedding, tokenizer): token_weight_dict = {} for i in range(len(sparse_embedding.indices)): token = tokenizer.decode([sparse_embedding.indices[i]]) weight = sparse_embedding.values[i] token_weight_dict[token] = weight # Sort the dictionary by weights token_weight_dict = dict(sorted(token_weight_dict.items(), key=lambda item: item[1], reverse=True)) return token_weight_dict # Test the function with the first SparseEmbedding print(json.dumps(get_tokens_and_weights(sparse_embeddings_list[index], tokenizer), indent=4)) ``` ## Dictionary output The dictionary is then sorted by weights in descending order. ```python { ""chandra"": 1.7163975238800049, ""third"": 1.5655333995819092, ""##ya"": 1.535199522972107, ""india"": 1.5310232639312744, ""3"": 1.385086178779602, ""mission"": 1.3676567077636719, ""lunar"": 1.3591278791427612, ""moon"": 1.2580504417419434, ""indian"": 1.1001816987991333, ""##an"": 1.015198826789856, ""3rd"": 0.7661278247833252, ""was"": 0.7177659273147583, ""spacecraft"": 0.7071069478988647, ""space"": 0.5978556871414185, ""flight"": 0.4988254904747009, ""satellite"": 0.4826201796531677, ""first"": 0.46230843663215637, ""expedition"": 0.4515230059623718, ""three"": 0.4467709958553314, ""fourth"": 0.44249090552330017, ""vehicle"": 0.390580952167511, ""iii"": 0.3862902522087097, ""2"": 0.36459630727767944, ""##3"": 0.2941221296787262, ""planet"": 0.27236196398735046, ""second"": 0.26897504925727844, ""missions"": 0.2608523368835449, ""launched"": 0.15740394592285156, ""had"": 0.12667948007583618, ""largest"": 0.09955651313066483, ""leader"": 0.09747757017612457, "","": 0.05297207832336426, ""study"": 0.02079751156270504, ""-"": 0.01963476650416851 } ``` ## Observations - The relative order of importance is quite useful. The most important tokens in the sentence have the highest weights. - **Term Expansion:** The model can expand the terms in the document. This means that the model can generate weights for tokens that are not present in the document but are related to the tokens in the document. This is a powerful feature that allows the model to capture the context of the document. Here, you'll see that the model has added the tokens '3' from 'third' and 'moon' from 'lunar' to the sparse vector. ## Design choices - The weights are not normalized. This means that the sum of the weights is not 1 or 100. This is a common practice in sparse embeddings, as it allows the model to capture the importance of each token in the document. - Tokens are included in the sparse vector only if they are present in the model's vocabulary. This means that the model will not generate a weight for tokens that it has not seen during training. - Tokens do not map to words directly -- allowing you to gracefully handle typo errors and out-of-vocabulary tokens.",documentation/fastembed/fastembed-splade.md "--- title: ""FastEmbed & Qdrant"" weight: 3 --- # Using FastEmbed with Qdrant for Vector Search ## Install Qdrant Client ```python pip install qdrant-client ``` ## Install FastEmbed Installing FastEmbed will let you quickly turn data to vectors, so that Qdrant can search over them. ```python pip install fastembed ``` ## Initialize the client Qdrant Client has a simple in-memory mode that lets you try semantic search locally. ```python from qdrant_client import QdrantClient client = QdrantClient("":memory:"") # Qdrant is running from RAM. ``` ## Add data Now you can add two sample documents, their associated metadata, and a point `id` for each. ```python docs = [""Qdrant has a LangChain integration for chatbots."", ""Qdrant has a LlamaIndex integration for agents.""] metadata = [ {""source"": ""langchain-docs""}, {""source"": ""llamaindex-docs""}, ] ids = [42, 2] ``` ## Load data to a collection Create a test collection and upsert your two documents to it. ```python client.add( collection_name=""test_collection"", documents=docs, metadata=metadata, ids=ids ) ``` ## Run vector search Here, you will ask a dummy question that will allow you to retrieve a semantically relevant result. ```python search_result = client.query( collection_name=""test_collection"", query_text=""Which integration is best for agents?"" ) print(search_result) ``` The semantic search engine will retrieve the most similar result in order of relevance. In this case, the second statement about LlamaIndex is more relevant. ```bash [QueryResponse(id=2, embedding=None, sparse_embedding=None, metadata={'document': 'Qdrant has a LlamaIndex integration for agents', 'source': 'llamaindex-docs'}, document='Qdrant has a LlamaIndex integration for agents.', score=0.8749180370667156), QueryResponse(id=42, embedding=None, sparse_embedding=None, metadata={'document': 'Qdrant has a LangChain integration for chatbots.', 'source': 'langchain-docs'}, document='Qdrant has a LangChain integration for chatbots.', score=0.8351846822959111)] ```",documentation/fastembed/fastembed-semantic-search.md "--- title: ""Quickstart"" weight: 2 --- # How to Generate Text Embedings with FastEmbed ## Install FastEmbed ```python pip install fastembed ``` Just for demo purposes, you will use Lists and NumPy to work with sample data. ```python from typing import List import numpy as np ``` ## Load default model In this example, you will use the default text embedding model, `BAAI/bge-small-en-v1.5`. ```python from fastembed import TextEmbedding ``` ## Add sample data Now, add two sample documents. Your documents must be in a list, and each document must be a string ```python documents: List[str] = [ ""FastEmbed is lighter than Transformers & Sentence-Transformers."", ""FastEmbed is supported by and maintained by Qdrant."", ] ``` Download and initialize the model. Print a message to verify the process. ```python embedding_model = TextEmbedding() print(""The model BAAI/bge-small-en-v1.5 is ready to use."") ``` ## Embed data Generate embeddings for both documents. ```python embeddings_generator = embedding_model.embed(documents) embeddings_list = list(embeddings_generator) len(embeddings_list[0]) ``` Here is the sample document list. The default model creates vectors with 384 dimensions. ```bash Document: This is built to be faster and lighter than other embedding libraries e.g. Transformers, Sentence-Transformers, etc. Vector of type: with shape: (384,) Document: fastembed is supported by and maintained by Qdrant. Vector of type: with shape: (384,) ``` ## Visualize embeddings ```python print(""Embeddings:\n"", embeddings_list) ``` The embeddings don't look too interesting, but here is a visual. ```bash Embeddings: [[-0.11154681 0.00976555 0.00524559 0.01951888 -0.01934952 0.02943449 -0.10519084 -0.00890122 0.01831438 0.01486796 -0.05642502 0.02561352 -0.00120165 0.00637456 0.02633459 0.0089221 0.05313658 0.03955453 -0.04400245 -0.02929407 0.04691846 -0.02515868 0.00778646 -0.05410657 ... -0.00243012 -0.01820582 0.02938612 0.02108984 -0.02178085 0.02971899 -0.00790564 0.03561783 0.0652488 -0.04371546 -0.05550042 0.02651665 -0.01116153 -0.01682246 -0.05976734 -0.03143916 0.06522726 0.01801389 -0.02611006 0.01627177 -0.0368538 0.03968835 0.027597 0.03305927]] ```",documentation/fastembed/fastembed-quickstart.md "--- title: ""FastEmbed"" weight: 6 --- # What is FastEmbed? FastEmbed is a lightweight Python library built for embedding generation. It supports popular embedding models and offers a user-friendly experience for embedding data into vector space. By using FastEmbed, you can ensure that your embedding generation process is not only fast and efficient but also highly accurate, meeting the needs of various machine learning and natural language processing applications. FastEmbed easily integrates with Qdrant for a variety of multimodal search purposes. ## How to get started with FastEmbed |Beginner|Advanced| |:-:|:-:| |[Generate Text Embedings with FastEmbed](fastembed-quickstart/)|[Combine FastEmbed with Qdrant for Vector Search](fastembed-semantic-search/)| ## Why is FastEmbed useful? - Light: Unlike other inference frameworks, such as PyTorch, FastEmbed requires very little external dependencies. Because it uses the ONNX runtime, it is perfect for serverless environments like AWS Lambda. - Fast: By using ONNX, FastEmbed ensures high-performance inference across various hardware platforms. - Accurate: FastEmbed aims for better accuracy and recall than models like OpenAI’s `Ada-002`. It always uses model which demonstrate strong results on the MTEB leaderboard. - Support: FastEmbed supports a wide range of models, including multilingual ones, to meet diverse use case needs. ",documentation/fastembed/_index.md "--- title: OpenLIT weight: 3100 aliases: [ ../frameworks/openlit/ ] --- # OpenLIT [OpenLIT](https://github.com/openlit/openlit) is an OpenTelemetry-native LLM Application Observability tool and includes OpenTelemetry auto-instrumentation to monitor Qdrant and provide insights to improve database operations and application performance. This page assumes you're using `qdrant-client` version 1.7.3 or above. ## Usage ### Step 1: Install OpenLIT Open your command line or terminal and run: ```bash pip install openlit ``` ### Step 2: Initialize OpenLIT in your Application Integrating OpenLIT into LLM applications is straightforward with just **two lines of code**: ```python import openlit openlit.init() ``` OpenLIT directs the trace to your console by default. To forward telemetry data to an HTTP OTLP endpoint, configure the `otlp_endpoint` parameter or the `OTEL_EXPORTER_OTLP_ENDPOINT` environment variable. For OpenTelemetry backends requiring authentication, use the `otlp_headers` parameter or the `OTEL_EXPORTER_OTLP_HEADERS` environment variable with the required values. ## Further Reading With the LLM Observability data now being collected by OpenLIT, the next step is to visualize and analyze this data to get insights Qdrant's performance, behavior, and identify areas of improvement. To begin exploring your LLM Application's performance data within the OpenLIT UI, please see the [Quickstart Guide](https://docs.openlit.io/latest/quickstart). If you want to integrate and send the generated metrics and traces to your existing observability tools like Promethues+Jaeger, Grafana or more, refer to the [Official Documentation for OpenLIT Connections](https://docs.openlit.io/latest/connections/intro) for detailed instructions. ",documentation/observability/openlit.md "--- title: Datadog --- ![Datadog Cover](/documentation/observability/datadog/datadog-cover.jpg) [Datadog](https://www.datadoghq.com/) is a cloud-based monitoring and analytics platform that offers real-time monitoring of servers, databases, and numerous other tools and services. It provides visibility into the performance of applications and enables businesses to detect issues before they affect users. You can install the [Qdrant integration](https://docs.datadoghq.com/integrations/qdrant/) to get real-time metrics to monitor your Qdrant deployment within Datadog including: - The performance of REST and gRPC interfaces with metrics such as total requests, total failures, and time to serve to identify potential bottlenecks and mitigate them. - Information about the readiness of the cluster, and deployment (total peers, pending operations, etc.) to gain insights into your Qdrant deployment. ### Usage - With the [Datadog Agent installed](https://docs.datadoghq.com/agent/basic_agent_usage), run the following command to add the Qdrant integration: ```shell datadog-agent integration install -t qdrant==1.0.0 ``` - Edit the `qdrant.d/conf.yaml` file in the `conf.d/` folder at the root of your [Agent's configuration directory](https://docs.datadoghq.com/agent/guide/agent-configuration-files/#agent-configuration-directory) to start collecting your [Qdrant metrics](/documentation/guides/monitoring/). Most importantly, set the `openmetrics_endpoint` value to the `/metrics` endpoint of your Qdrant instance. ```yaml instances: ## @param openmetrics_endpoint - string - optional ## The URL exposing metrics in the OpenMetrics format. - openmetrics_endpoint: http://localhost:6333/metrics ``` If the Qdrant instance requires authentication, you can specify the token by configuring [`extra_headers`](https://github.com/DataDog/integrations-core/blob/26f9ae7660f042c43f5d771f0c937ff805cf442c/openmetrics/datadog_checks/openmetrics/data/conf.yaml.example#L553C1-L558C35). ```yaml # @param extra_headers - mapping - optional # Additional headers to send with every request. extra_headers: api-key: ``` - Restart the Datadog agent. - You can now head over to the Datadog dashboard to view the [metrics](https://docs.datadoghq.com/integrations/qdrant/#data-collected) emitted by the Qdrant check. ## Further Reading - [Getting started with Datadog](https://docs.datadoghq.com/getting_started/) - [Qdrant integration source](https://github.com/DataDog/integrations-extras/tree/master/qdrant) ",documentation/observability/datadog.md "--- title: OpenLLMetry weight: 2300 aliases: [ ../frameworks/openllmetry/ ] --- # OpenLLMetry OpenLLMetry from [Traceloop](https://www.traceloop.com/) is a set of extensions built on top of [OpenTelemetry](https://opentelemetry.io/) that gives you complete observability over your LLM application. OpenLLMetry supports instrumenting the `qdrant_client` Python library and exporting the traces to various observability platforms, as described in their [Integrations catalog](https://www.traceloop.com/docs/openllmetry/integrations/introduction#the-integrations-catalog). This page assumes you're using `qdrant-client` version 1.7.3 or above. ## Usage To set up OpenLLMetry, follow these steps: 1. Install the SDK: ```console pip install traceloop-sdk ``` 1. Instantiate the SDK: ```python from traceloop.sdk import Traceloop Traceloop.init() ``` You're now tracing your `qdrant_client` usage with OpenLLMetry! ## Without the SDK Since Traceloop provides standard OpenTelemetry instrumentations, you can use them as standalone packages. To do so, follow these steps: 1. Install the package: ```console pip install opentelemetry-instrumentation-qdrant ``` 1. Instantiate the `QdrantInstrumentor`. ```python from opentelemetry.instrumentation.qdrant import QdrantInstrumentor QdrantInstrumentor().instrument() ``` ## Further Reading - 📚 OpenLLMetry [API reference](https://www.traceloop.com/docs/api-reference/introduction) - 📄 [Source Code](https://github.com/traceloop/openllmetry/tree/main/packages/opentelemetry-instrumentation-qdrant) ",documentation/observability/openllmetry.md "--- title: Observability weight: 15 --- ## Observability Integrations | Tool | Description | | ----------------------------- | -------------------------------------------------------------------------------------- | | [OpenLIT](./openlit/) | Platform for OpenTelemetry-native Observability & Evals for LLMs and Vector Databases. | | [OpenLLMetry](./openllmetry/) | Set of OpenTelemetry extensions to add Observability for your LLM application. | | [Datadog](./datadog/) | Cloud-based monitoring and analytics platform. | ",documentation/observability/_index.md "--- title: Setup Hybrid Cloud weight: 1 --- # Creating a Hybrid Cloud Environment The following instruction set will show you how to properly set up a **Qdrant cluster** in your **Hybrid Cloud Environment**. To learn how Hybrid Cloud works, [read the overview document](/documentation/hybrid-cloud/). ## Prerequisites - **Kubernetes cluster:** To create a Hybrid Cloud Environment, you need a [standard compliant](https://www.cncf.io/training/certification/software-conformance/) Kubernetes cluster. You can run this cluster in any cloud, on-premise or edge environment, with distributions that range from AWS EKS to VMWare vSphere. - **Storage:** For storage, you need to set up the Kubernetes cluster with a Container Storage Interface (CSI) driver that provides block storage. For vertical scaling, the CSI driver needs to support volume expansion. For backups and restores, the driver needs to support CSI snapshots and restores. - **Permissions:** To install the Qdrant Kubernetes Operator you need to have `cluster-admin` access in your Kubernetes cluster. - **Connection:** The Qdrant Kubernetes Operator in your cluster needs to be able to connect to Qdrant Cloud. It will create an outgoing connection to `cloud.qdrant.io` on port `443`. - **Locations:** By default, the Qdrant Cloud Agent and Operator pulls Helm charts and container images from `registry.cloud.qdrant.io`. The Qdrant database container image is pulled from `docker.io`. > **Note:** You can also mirror these images and charts into your own registry and pull them from there. ### CLI tools During the onboarding, you will need to deploy the Qdrant Kubernetes Operator and Agent using Helm. Make sure you have the following tools installed: * [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) * [helm](https://helm.sh/docs/intro/install/) You will need to have access to the Kubernetes cluster with `kubectl` and `helm` configured to connect to it. Please refer the documentation of your Kubernetes distribution for more information. ### Required artifacts Container images: - `docker.io/qdrant/qdrant` - `registry.cloud.qdrant.io/qdrant/qdrant-cloud-agent` - `registry.cloud.qdrant.io/qdrant/qdrant-operator` - `registry.cloud.qdrant.io/qdrant/cluster-manager` - `registry.cloud.qdrant.io/qdrant/prometheus` - `registry.cloud.qdrant.io/qdrant/prometheus-config-reloader` - `registry.cloud.qdrant.io/qdrant/kube-state-metrics` Open Containers Initiative (OCI) Helm charts: - `registry.cloud.qdrant.io/qdrant-charts/qdrant-cloud-agent` - `registry.cloud.qdrant.io/qdrant-charts/qdrant-operator` - `registry.cloud.qdrant.io/qdrant-charts/qdrant-cluster-manager` - `registry.cloud.qdrant.io/qdrant-charts/prometheus` ## Installation 1. To set up Hybrid Cloud, open the Qdrant Cloud Console at [cloud.qdrant.io](https://cloud.qdrant.io). On the dashboard, select **Hybrid Cloud**. 2. Before creating your first Hybrid Cloud Environment, you have to provide billing information and accept the Hybrid Cloud license agreement. The installation wizard will guide you through the process. > **Note:** You will only be charged for the Qdrant cluster you create in a Hybrid Cloud Environment, but not for the environment itself. 3. Now you can specify the following: - **Name:** A name for the Hybrid Cloud Environment - **Kubernetes Namespace:** The Kubernetes namespace for the operator and agent. Once you select a namespace, you can't change it. You can also configure the StorageClass and VolumeSnapshotClass to use for the Qdrant databases, if you want to deviate from the default settings of your cluster. 4. You can then enter the YAML configuration for your Kubernetes operator. Qdrant supports a specific list of configuration options, as described in the [Qdrant Operator configuration](/documentation/hybrid-cloud/operator-configuration/) section. 5. (Optional) If you have special requirements for any of the following, activate the **Show advanced configuration** option: - If you use a proxy to connect from your infrastructure to the Qdrant Cloud API, you can specify the proxy URL, credentials and cetificates. - Container registry URL for Qdrant Operator and Agent images. The default is . - Helm chart repository URL for the Qdrant Operator and Agent. The default is . - Log level for the operator and agent 6. Once complete, click **Create**. > **Note:** All settings but the Kubernetes namespace can be changed later. ### Generate Installation Command After creating your Hybrid Cloud, select **Generate Installation Command** to generate a script that you can run in your Kubernetes cluster which will perform the initial installation of the Kubernetes operator and agent. It will: - Create the Kubernetes namespace, if not present - Set up the necessary secrets with credentials to access the Qdrant container registry and the Qdrant Cloud API. - Sign in to the Helm registry at `registry.cloud.qdrant.io` - Install the Qdrant cloud agent and Kubernetes operator chart You need this command only for the initial installation. After that, you can update the agent and operator using the Qdrant Cloud Console. > **Note:** If you generate the installation command a second time, it will re-generate the included secrets, and you will have to apply the command again to update them. ## Deleting a Hybrid Cloud Environment To delete a Hybrid Cloud Environment, first delete all Qdrant database clusters in it. Then you can delete the environment itself. To clean up your Kubernetes cluster, after deleting the Hybrid Cloud Environment, you can use the following command: ```shell helm -n the-qdrant-namespace delete qdrant-cloud-agent helm -n the-qdrant-namespace delete qdrant-prometheus helm -n the-qdrant-namespace delete qdrant-operator kubectl -n the-qdrant-namespace patch HelmRelease.cd.qdrant.io qdrant-cloud-agent -p '{""metadata"":{""finalizers"":null}}' --type=merge kubectl -n the-qdrant-namespace patch HelmRelease.cd.qdrant.io qdrant-prometheus -p '{""metadata"":{""finalizers"":null}}' --type=merge kubectl -n the-qdrant-namespace patch HelmRelease.cd.qdrant.io qdrant-operator -p '{""metadata"":{""finalizers"":null}}' --type=merge kubectl -n the-qdrant-namespace patch HelmChart.cd.qdrant.io the-qdrant-namespace-qdrant-cloud-agent -p '{""metadata"":{""finalizers"":null}}' --type=merge kubectl -n the-qdrant-namespace patch HelmChart.cd.qdrant.io the-qdrant-namespace-qdrant-prometheus -p '{""metadata"":{""finalizers"":null}}' --type=merge kubectl -n the-qdrant-namespace patch HelmChart.cd.qdrant.io the-qdrant-namespace-qdrant-operator -p '{""metadata"":{""finalizers"":null}}' --type=merge kubectl -n the-qdrant-namespace patch HelmRepository.cd.qdrant.io qdrant-cloud -p '{""metadata"":{""finalizers"":null}}' --type=merge kubectl delete namespace the-qdrant-namespace kubectl get crd -o name | grep qdrant | xargs -n 1 kubectl delete ``` ",documentation/hybrid-cloud/hybrid-cloud-setup.md "--- title: Configure the Qdrant Operator weight: 3 --- # Configuring Qdrant Operator: Advanced Options The Qdrant Operator has several configuration options, which can be configured in the advanced section of your Hybrid Cloud Environment. The following YAML shows all configuration options with their default values: ```yaml # Retention for the backup history of Qdrant clusters backupHistoryRetentionDays: 2 # Timeout configuration for the Qdrant operator operations operationTimeout: 7200 # 2 hours handlerTimeout: 21600 # 6 hours backupTimeout: 12600 # 3.5 hours # Incremental backoff configuration for the Qdrant operator operations backOff: minDelay: 5 maxDelay: 300 increment: 5 # node_selector: {} # tolerations: [] # Default ingress configuration for a Qdrant cluster ingress: enabled: false provider: KubernetesIngress # or NginxIngress # kubernetesIngress: # ingressClassName: """" # Default storage configuration for a Qdrant cluster #storage: # Default VolumeSnapshotClass for a Qdrant cluster # snapshot_class: ""csi-snapclass"" # Default StorageClass for a Qdrant cluster, uses cluster default StorageClass if not set # default_storage_class_names: # StorageClass for DB volumes # db: """" # StorageClass for snapshot volumes # snapshots: """" # Default scheduling configuration for a Qdrant cluster #scheduling: # default_topology_spread_constraints: [] # default_pod_disruption_budget: {} qdrant: # Default security context for Qdrant cluster # securityContext: # enabled: false # user: """" # fsGroup: """" # group: """" # Default Qdrant image configuration # image: # pull_secret: """" # pull_policy: IfNotPresent # repository: qdrant/qdrant # Default Qdrant log_level # log_level: INFO # Default network policies to create for a qdrant cluster networkPolicies: ingress: - ports: - protocol: TCP port: 6333 - protocol: TCP port: 6334 # Allow DNS resolution from qdrant pods at Kubernetes internal DNS server egress: - to: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: kube-system ports: - protocol: UDP port: 53 ``` ",documentation/hybrid-cloud/operator-configuration.md "--- title: Networking, Logging & Monitoring weight: 4 --- # Configuring Networking, Logging & Monitoring in Qdrant Hybrid Cloud ## Configure network policies For security reasons, each database cluster is secured with network policies. By default, database pods only allow egress traffic between each and allow ingress traffic to ports 6333 (rest) and 6334 (grpc) from within the Kubernetes cluster. You can modify the default network policies in the Hybrid Cloud environment configuration: ```yaml qdrant: networkPolicies: ingress: - from: - ipBlock: cidr: 192.168.0.0/22 - podSelector: matchLabels: app: client-app namespaceSelector: matchLabels: kubernetes.io/metadata.name: client-namespace - podSelector: matchLabels: app: traefik namespaceSelector: matchLabels: kubernetes.io/metadata.name: kube-system ports: - port: 6333 protocol: TCP - port: 6334 protocol: TCP ``` ## Logging You can access the logs with kubectl or the Kubernetes log management tool of your choice. For example: ```bash kubectl -n qdrant-namespace logs -l app=qdrant,cluster-id=9a9f48c7-bb90-4fb2-816f-418a46a74b24 ``` **Configuring log levels:** You can configure log levels for the databases individually in the configuration section of the Qdrant Cluster detail page. The log level for the **Qdrant Cloud Agent** and **Operator** can be set in the [Hybrid Cloud Environment configuration](/documentation/hybrid-cloud/operator-configuration/). ## Monitoring The Qdrant Cloud console gives you access to basic metrics about CPU, memory and disk usage of your Qdrant clusters. You can also access Prometheus metrics endpoint of your Qdrant databases. Finally, you can use a Kubernetes workload monitoring tool of your choice to monitor your Qdrant clusters. ",documentation/hybrid-cloud/networking-logging-monitoring.md "--- title: Deployment Platforms weight: 5 --- # Qdrant Hybrid Cloud: Hosting Platforms & Deployment Options This page provides an overview of how to deploy Qdrant Hybrid Cloud on various managed Kubernetes platforms. For a general list of prerequisites and installation steps, see our [Hybrid Cloud setup guide](/documentation/hybrid-cloud/hybrid-cloud-setup/). ![Akamai](/documentation/cloud/cloud-providers/akamai.jpg) ## Akamai (Linode) [The Linode Kubernetes Engine (LKE)](https://www.linode.com/products/kubernetes/) is a managed container orchestration engine built on top of Kubernetes. LKE enables you to quickly deploy and manage your containerized applications without needing to build (and maintain) your own Kubernetes cluster. All LKE instances are equipped with a fully managed control plane at no additional cost. First, consult Linode's managed Kubernetes instructions below. Then, **to set up Qdrant Hybrid Cloud on LKE**, follow our [step-by-step documentation](/documentation/hybrid-cloud/hybrid-cloud-setup/). ### More on Linode Kubernetes Engine - [Getting Started with LKE](https://www.linode.com/docs/products/compute/kubernetes/get-started/) - [LKE Guides](https://www.linode.com/docs/products/compute/kubernetes/guides/) - [LKE API Reference](https://www.linode.com/docs/api/) At the time of writing, Linode [does not support CSI Volume Snapshots](https://github.com/linode/linode-blockstorage-csi-driver/issues/107). ![AWS](/documentation/cloud/cloud-providers/aws.jpg) ## Amazon Web Services (AWS) [Amazon Elastic Kubernetes Service (Amazon EKS)](https://aws.amazon.com/eks/) is a managed service to run Kubernetes in the AWS cloud and on-premises data centers which can then be paired with Qdrant's hybrid cloud. With Amazon EKS, you can take advantage of all the performance, scale, reliability, and availability of AWS infrastructure, as well as integrations with AWS networking and security services. First, consult AWS' managed Kubernetes instructions below. Then, **to set up Qdrant Hybrid Cloud on AWS**, follow our [step-by-step documentation](/documentation/hybrid-cloud/hybrid-cloud-setup/). ### More on Amazon Elastic Kubernetes Service - [Getting Started with Amazon EKS](https://docs.aws.amazon.com/eks/) - [Amazon EKS User Guide](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html) - [Amazon EKS API Reference](https://docs.aws.amazon.com/eks/latest/APIReference/Welcome.html) Your EKS cluster needs the EKS EBS CSI driver or a similar storage driver: - [Amazon EBS CSI Driver](https://docs.aws.amazon.com/eks/latest/userguide/managing-ebs-csi.html) To allow vertical scaling, you need a StorageClass with volume expansion enabled: - [Amazon EBS CSI Volume Resizing](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/examples/kubernetes/resizing/README.md) ```yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: ""true"" name: ebs-sc provisioner: ebs.csi.aws.com reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true ``` To allow backups and restores, your EKS cluster needs the CSI snapshot controller: - [Amazon EBS CSI Snapshot Controller](https://docs.aws.amazon.com/eks/latest/userguide/csi-snapshot-controller.html) And you need to create a VolumeSnapshotClass: ```yaml apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: csi-snapclass deletionPolicy: Delete driver: ebs.csi.aws.com ``` ![Civo](/documentation/cloud/cloud-providers/civo.jpg) ## Civo [Civo Kubernetes](https://www.civo.com/kubernetes) is a robust, scalable, and managed Kubernetes service. Civo supplies a CNCF-compliant Kubernetes cluster and makes it easy to provide standard Kubernetes applications and containerized workloads. User-defined Kubernetes clusters can be created as self-service without complications using the Civo Portal. First, consult Civo's managed Kubernetes instructions below. Then, **to set up Qdrant Hybrid Cloud on Civo**, follow our [step-by-step documentation](/documentation/hybrid-cloud/hybrid-cloud-setup/). ### More on Civo Kubernetes - [Getting Started with Civo Kubernetes](https://www.civo.com/docs/kubernetes) - [Civo Tutorials](https://www.civo.com/learn) - [Frequently Asked Questions on Civo](https://www.civo.com/docs/faq) To allow backups and restores, you need to create a VolumeSnapshotClass: ```yaml apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: csi-snapclass deletionPolicy: Delete driver: csi.civo.com ``` ![Digital Ocean](/documentation/cloud/cloud-providers/digital-ocean.jpg) ## Digital Ocean [DigitalOcean Kubernetes (DOKS)](https://www.digitalocean.com/products/kubernetes) is a managed Kubernetes service that lets you deploy Kubernetes clusters without the complexities of handling the control plane and containerized infrastructure. Clusters are compatible with standard Kubernetes toolchains and integrate natively with DigitalOcean Load Balancers and volumes. First, consult Digital Ocean's managed Kubernetes instructions below. Then, **to set up Qdrant Hybrid Cloud on DigitalOcean**, follow our [step-by-step documentation](/documentation/hybrid-cloud/hybrid-cloud-setup/). ### More on DigitalOcean Kubernetes - [Getting Started with DOKS](https://docs.digitalocean.com/products/kubernetes/getting-started/quickstart/) - [DOKS - How To Guides](https://docs.digitalocean.com/products/kubernetes/how-to/) - [DOKS - Reference Manual](https://docs.digitalocean.com/products/kubernetes/reference/) ![Gcore](/documentation/cloud/cloud-providers/gcore.svg) ## Gcore [Gcore Managed Kubernetes](https://gcore.com/cloud/managed-kubernetes) is a managed container orchestration engine built on top of Kubernetes. Gcore enables you to quickly deploy and manage your containerized applications without needing to build (and maintain) your own Kubernetes cluster. All Gcore instances are equipped with a fully managed control plane at no additional cost. First, consult Gcore's managed Kubernetes instructions below. Then, **to set up Qdrant Hybrid Cloud on Gcore**, follow our [step-by-step documentation](/documentation/hybrid-cloud/hybrid-cloud-setup/). ### More on Gcore Kubernetes Engine - [Getting Started with Kubnetes on Gcore](https://gcore.com/docs/cloud/kubernetes/about-gcore-kubernetes) ![Google Cloud Platform](/documentation/cloud/cloud-providers/gcp.jpg) ## Google Cloud Platform (GCP) [Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine) is a managed Kubernetes service that you can use to deploy and operate containerized applications at scale using Google's infrastructure. GKE provides the operational power of Kubernetes while managing many of the underlying components, such as the control plane and nodes, for you. First, consult GCP's managed Kubernetes instructions below. Then, **to set up Qdrant Hybrid Cloud on GCP**, follow our [step-by-step documentation](/documentation/hybrid-cloud/hybrid-cloud-setup/). ### More on the Google Kubernetes Engine - [Getting Started with GKE](https://cloud.google.com/kubernetes-engine/docs/quickstart) - [GKE Tutorials](https://cloud.google.com/kubernetes-engine/docs/tutorials) - [GKE Documentation](https://cloud.google.com/kubernetes-engine/docs/) To allow backups and restores, your GKE cluster needs the CSI VolumeSnapshot controller and class: - [Google GKE Volume Snapshots](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/volume-snapshots) ```yaml apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: csi-snapclass deletionPolicy: Delete driver: pd.csi.storage.gke.io ``` ![Microsoft Azure](/documentation/cloud/cloud-providers/azure.jpg) ## Mircrosoft Azure With [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/en-in/products/kubernetes-service), you can start developing and deploying cloud-native apps in Azure, data centres, or at the edge. Get unified management and governance for on-premises, edge, and multi-cloud Kubernetes clusters. Interoperate with Azure security, identity, cost management, and migration services. First, consult Azure's managed Kubernetes instructions below. Then, **to set up Qdrant Hybrid Cloud on Azure**, follow our [step-by-step documentation](/documentation/hybrid-cloud/hybrid-cloud-setup/). ### More on Azure Kubernetes Service - [Getting Started with AKS](https://learn.microsoft.com/en-us/azure/architecture/reference-architectures/containers/aks-start-here) - [AKS Documentation](https://learn.microsoft.com/en-in/azure/aks/) - [Best Practices with AKS](https://learn.microsoft.com/en-in/azure/aks/best-practices) To allow backups and restores, your AKS cluster needs the CSI VolumeSnapshot controller and class: - [Azure AKS Volume Snapshots](https://learn.microsoft.com/en-us/azure/aks/azure-disk-csi#create-a-volume-snapshot) ```yaml apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: csi-snapclass deletionPolicy: Delete driver: disk.csi.azure.com ``` ![Oracle Cloud Infrastructure](/documentation/cloud/cloud-providers/oracle.jpg) ## Oracle Cloud Infrastructure [Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE)](https://www.oracle.com/in/cloud/cloud-native/container-engine-kubernetes/) is a managed Kubernetes solution that enables you to deploy Kubernetes clusters while ensuring stable operations for both the control plane and the worker nodes through automatic scaling, upgrades, and security patching. Additionally, OKE offers a completely serverless Kubernetes experience with virtual nodes. First, consult OCI's managed Kubernetes instructions below. Then, **to set up Qdrant Hybrid Cloud on OCI**, follow our [step-by-step documentation](/documentation/hybrid-cloud/hybrid-cloud-setup/). ### More on OCI Container Engine - [Getting Started with OCI](https://docs.oracle.com/en-us/iaas/Content/ContEng/home.htm) - [Frequently Asked Questions on OCI](https://www.oracle.com/in/cloud/cloud-native/container-engine-kubernetes/faq/) - [OCI Product Updates](https://docs.oracle.com/en-us/iaas/releasenotes/services/conteng/) To allow backups and restores, your OCI cluster needs the CSI VolumeSnapshot controller and class: - [Prerequisites for Creating Volume Snapshots ](https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contengcreatingpersistentvolumeclaim_topic-Provisioning_PVCs_on_BV.htm#contengcreatingpersistentvolumeclaim_topic-Provisioning_PVCs_on_BV-PV_From_Snapshot_CSI__section_volume-snapshot-prerequisites) ```yaml apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: csi-snapclass deletionPolicy: Delete driver: blockvolume.csi.oraclecloud.com ``` ![OVHcloud](/documentation/cloud/cloud-providers/ovh.jpg) ## OVHcloud [Service Managed Kubernetes](https://www.ovhcloud.com/en-in/public-cloud/kubernetes/), powered by OVH Public Cloud Instances, a leading European cloud provider. With OVHcloud Load Balancers and disks built in. OVHcloud Managed Kubernetes provides high availability, compliance, and CNCF conformance, allowing you to focus on your containerized software layers with total reversibility. First, consult OVHcloud's managed Kubernetes instructions below. Then, **to set up Qdrant Hybrid Cloud on OVHcloud**, follow our [step-by-step documentation](/documentation/hybrid-cloud/hybrid-cloud-setup/). ### More on Service Managed Kubernetes by OVHcloud - [Getting Started with OVH Managed Kubernetes](https://help.ovhcloud.com/csm/en-in-documentation-public-cloud-containers-orchestration-managed-kubernetes-k8s-getting-started) - [OVH Managed Kubernetes Documentation](https://help.ovhcloud.com/csm/en-in-documentation-public-cloud-containers-orchestration-managed-kubernetes-k8s) - [OVH Managed Kubernetes Tutorials](https://help.ovhcloud.com/csm/en-in-documentation-public-cloud-containers-orchestration-managed-kubernetes-k8s-tutorials) ![Red Hat](/documentation/cloud/cloud-providers/redhat.jpg) ## Red Hat OpenShift [Red Hat OpenShift Kubernetes Engine](https://www.redhat.com/en/technologies/cloud-computing/openshift/kubernetes-engine) provides you with the basic functionality of Red Hat OpenShift. It offers a subset of the features that Red Hat OpenShift Container Platform offers, like full access to an enterprise-ready Kubernetes environment and an extensive compatibility test matrix with many of the software elements that you might use in your data centre. First, consult Red Hat's managed Kubernetes instructions below. Then, **to set up Qdrant Hybrid Cloud on Red Hat OpenShift**, follow our [step-by-step documentation](/documentation/hybrid-cloud/hybrid-cloud-setup/). ### More on OpenShift Kubernetes Engine - [Getting Started with Red Hat OpenShift Kubernetes](https://docs.openshift.com/container-platform/4.15/getting_started/kubernetes-overview.html) - [Red Hat OpenShift Kubernetes Documentation](https://docs.openshift.com/container-platform/4.15/welcome/index.html) - [Installing on Container Platforms](https://access.redhat.com/documentation/en-us/openshift_container_platform/4.5/html/installing/index) Qdrant databases need a persistent storage solution. See [Openshift Storage Overview](https://docs.openshift.com/container-platform/4.15/storage/index.html). To allow vertical scaling, you need a StorageClass with [volume expansion enabled](https://docs.openshift.com/container-platform/4.15/storage/expanding-persistent-volumes.html). To allow backups and restores, your OpenShift cluster needs the [CSI snapshot controller](https://docs.openshift.com/container-platform/4.15/storage/container_storage_interface/persistent-storage-csi-snapshots.html), and you need to create a VolumeSnapshotClass. ![Scaleway](/documentation/cloud/cloud-providers/scaleway.jpg) ## Scaleway [Scaleway Kapsule](https://www.scaleway.com/en/kubernetes-kapsule/) and [Kosmos](https://www.scaleway.com/en/kubernetes-kosmos/) are managed Kubernetes services from [Scaleway](https://www.scaleway.com/en/). They abstract away the complexities of managing and operating a Kubernetes cluster. The primary difference being, Kapsule clusters are composed solely of Scaleway Instances. Whereas, a Kosmos cluster is a managed multi-cloud Kubernetes engine that allows you to connect instances from any cloud provider to a single managed Control-Plane. First, consult Scaleway's managed Kubernetes instructions below. Then, **to set up Qdrant Hybrid Cloud on Scaleway**, follow our [step-by-step documentation](/documentation/hybrid-cloud/hybrid-cloud-setup/). ### More on Scaleway Kubernetes - [Getting Started with Scaleway Kubernetes](https://www.scaleway.com/en/docs/containers/kubernetes/quickstart/#how-to-add-a-scaleway-pool-to-a-kubernetes-cluster) - [Scaleway Kubernetes Documentation](https://www.scaleway.com/en/docs/containers/kubernetes/) - [Frequently Asked Questions on Scaleway Kubernetes](https://www.scaleway.com/en/docs/faq/kubernetes/) ![STACKIT](/documentation/cloud/cloud-providers/stackit.jpg) ## STACKIT [STACKIT Kubernetes Engine (SKE)](https://www.stackit.de/en/product/kubernetes/) is a robust, scalable, and managed Kubernetes service. SKE supplies a CNCF-compliant Kubernetes cluster and makes it easy to provide standard Kubernetes applications and containerized workloads. User-defined Kubernetes clusters can be created as self-service without complications using the STACKIT Portal. First, consult STACKIT's managed Kubernetes instructions below. Then, **to set up Qdrant Hybrid Cloud on STACKIT**, follow our [step-by-step documentation](/documentation/hybrid-cloud/hybrid-cloud-setup/). ### More on STACKIT Kubernetes Engine - [Getting Started with SKE](https://docs.stackit.cloud/stackit/en/getting-started-ske-10125565.html) - [SKE Tutorials](https://docs.stackit.cloud/stackit/en/tutorials-ske-66683162.html) - [Frequently Asked Questions on SKE](https://docs.stackit.cloud/stackit/en/faq-known-issues-of-ske-28476393.html) To allow backups and restores, you need to create a VolumeSnapshotClass: ```yaml apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: csi-snapclass deletionPolicy: Delete driver: cinder.csi.openstack.org ``` ![Vultr](/documentation/cloud/cloud-providers/vultr.jpg) ## Vultr [Vultr Kubernetes Engine (VKE)](https://www.vultr.com/kubernetes/) is a fully-managed product offering with predictable pricing that makes Kubernetes easy to use. Vultr manages the control plane and worker nodes and provides integration with other managed services such as Load Balancers, Block Storage, and DNS. First, consult Vultr's managed Kubernetes instructions below. Then, **to set up Qdrant Hybrid Cloud on Vultr**, follow our [step-by-step documentation](/documentation/hybrid-cloud/hybrid-cloud-setup/). ### More on Vultr Kubernetes Engine - [VKE Guide](https://docs.vultr.com/vultr-kubernetes-engine) - [VKE Documentation](https://docs.vultr.com/) - [Frequently Asked Questions on VKE](https://docs.vultr.com/vultr-kubernetes-engine#frequently-asked-questions) At the time of writing, Vultr does not support CSI Volume Snapshots. ![Kubernetes](/documentation/cloud/cloud-providers/kubernetes.jpg) ## Generic Kubernetes Support (on-premises, cloud, edge) Qdrant Hybrid Cloud works with any Kubernetes cluster that meets the [standard compliance](https://www.cncf.io/training/certification/software-conformance/) requirements. This includes for example: - [VMWare Tanzu](https://tanzu.vmware.com/kubernetes-grid) - [Red Hat OpenShift](https://www.openshift.com/) - [SUSE Rancher](https://www.rancher.com/) - [Canonical Kubernetes](https://ubuntu.com/kubernetes) - [RKE](https://rancher.com/docs/rke/latest/en/) - [RKE2](https://docs.rke2.io/) - [K3s](https://k3s.io/) Qdrant databases need persistent block storage. Most storage solutions provide a CSI driver that can be used with Kubernetes. See [CSI drivers](https://kubernetes-csi.github.io/docs/drivers.html) for more information. To allow vertical scaling, you need a StorageClass with volume expansion enabled. See [Volume Expansion](https://kubernetes.io/docs/concepts/storage/storage-classes/#allow-volume-expansion) for more information. To allow backups and restores, your CSI driver needs to support volume snapshots cluster needs the CSI VolumeSnapshot controller and class. See [CSI Volume Snapshots](https://kubernetes-csi.github.io/docs/snapshot-controller.html) for more information. ## Next Steps Once you've got a Kubernetes cluster deployed on a platform of your choosing, you can begin setting up Qdrant Hybrid Cloud. Head to our Qdrant Hybrid Cloud [setup guide](/documentation/hybrid-cloud/hybrid-cloud-setup/) for instructions. ",documentation/hybrid-cloud/platform-deployment-options.md "--- title: Create a Cluster weight: 2 --- # Creating a Qdrant Cluster in Hybrid Cloud Once you have created a Hybrid Cloud Environment, you can create a Qdrant cluster in that enviroment. Use the same process to [Create a cluster](/documentation/cloud/create-cluster/). Make sure to select your Hybrid Cloud Environment as the target. Note that in the ""Kubernetes Configuration"" section you can additionally configure: * Node selectors for the Qdrant database pods * Toleration for the Qdrant database pods * Additional labels for the Qdrant database pods * A service type and annotations for the Qdrant database service These settings can also be changed after the cluster is created on the cluster detail page. ### Authentication to your Qdrant clusters In Hybrid Cloud the authentication information is provided by Kubernetes secrets. You can configure authentication for your Qdrant clusters in the ""Configuration"" section of the Qdrant Cluster detail page. There you can configure the Kubernetes secret name and key to be used as an API key and/or read-only API key. One way to create a secret is with kubectl: ```shell kubectl create secret generic qdrant-api-key --from-literal=api-key=your-secret-api-key --namespace the-qdrant-namespace ``` The resulting secret will look like this: ```yaml apiVersion: v1 data: api-key: ... kind: Secret metadata: name: qdrant-api-key namespace: the-qdrant-namespace type: kubernetes.io/tls ``` With this command the secret name would be `qdrant-api-key` and the key would be `api-key`. If you want to retrieve the secret again, you can also use `kubectl`: ```shell kubectl get secret qdrant-api-key -o jsonpath=""{.data.api-key}"" --namespace the-qdrant-namespace | base64 --decode ``` ### Exposing Qdrant clusters to your client applications You can expose your Qdrant clusters to your client applications using Kubernetes services and ingresses. By default, a `ClusterIP` service is created for each Qdrant cluster. Within your Kubernetes cluster, you can access the Qdrant cluster using the service name and port: ``` http://qdrant-9a9f48c7-bb90-4fb2-816f-418a46a74b24.qdrant-namespace.svc:6333 ``` This endpoint is also visible on the cluster detail page. If you want to access the database from your local developer machine, you can use `kubectl port-forward` to forward the service port to your local machine: ``` kubectl --namespace your-qdrant-namespace port-forward service/qdrant-9a9f48c7-bb90-4fb2-816f-418a46a74b24 6333:6333 ``` You can also expose the database outside the Kubernetes cluster with a `LoadBalancer` (if supported in your Kubernetes environment) or `NodePort` service or an ingress. The service type and necessary annotations can be configured in the ""Kubernetes Configuration"" section during cluster creation, or on the cluster detail page. Especially if you create a LoadBalancer Service, you may need to provide annotations for the loadbalancer configration. Please refer to the documention of your cloud provider for more details. Examples: * [AWS EKS LoadBalancer annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/annotations/) * [Azure AKS Public LoadBalancer annotations](https://learn.microsoft.com/en-us/azure/aks/load-balancer-standard) * [Azure AKS Internal LoadBalancer annotations](https://learn.microsoft.com/en-us/azure/aks/internal-lb) * [GCP GKE LoadBalancer annotations](https://cloud.google.com/kubernetes-engine/docs/concepts/service-load-balancer-parameters) You could also create a Loadbalancer service manually like this: ```yaml apiVersion: v1 kind: Service metadata: name: qdrant-9a9f48c7-bb90-4fb2-816f-418a46a74b24-lb namespace: qdrant-namespace spec: type: LoadBalancer ports: - name: http port: 6333 - name: grpc port: 6334 selector: app: qdrant cluster-id: 9a9f48c7-bb90-4fb2-816f-418a46a74b24 ``` An ingress could look like this: ```yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: qdrant-9a9f48c7-bb90-4fb2-816f-418a46a74b24 namespace: qdrant-namespace spec: rules: - host: qdrant-9a9f48c7-bb90-4fb2-816f-418a46a74b24.your-domain.com http: paths: - path: / pathType: Prefix backend: service: name: qdrant-9a9f48c7-bb90-4fb2-816f-418a46a74b24 port: number: 6333 ``` Please refer to the Kubernetes, ingress controller and cloud provider documention for more details. If you expose the database like this, you will be able to see this also reflected as an endpoint on the cluster detail page. And will see the Qdrant database dashboard link pointing to it. ### Configuring TLS If you want to configure TLS for accessing your Qdrant database in Hybrid Cloud, there are two options: * You can offload TLS at the ingress or loadbalancer level. * You can configure TLS directly in the Qdrant database. If you want to configure TLS directly in the Qdrant database, you can reference a secret containing the TLS certificate and key in the ""Configuration"" section of the Qdrant Cluster detail page. To create such a secret, you can use `kubectl`: ```shell kubectl create secret tls qdrant-tls --cert=mydomain.com.crt --key=mydomain.com.key --namespace the-qdrant-namespace ``` The resulting secret will look like this: ```yaml apiVersion: v1 data: tls.crt: ... tls.key: ... kind: Secret metadata: name: qdrant-tls namespace: the-qdrant-namespace type: kubernetes.io/tls ``` With this command the secret name to enter into the UI would be `qdrant-tls` and the keys would be `tls.crt` and `tls.key`.",documentation/hybrid-cloud/hybrid-cloud-cluster-creation.md "--- title: Hybrid Cloud weight: 9 --- # Qdrant Hybrid Cloud Seamlessly deploy and manage your vector database across diverse environments, ensuring performance, security, and cost efficiency for AI-driven applications. [Qdrant Hybrid Cloud](/hybrid-cloud/) integrates Kubernetes clusters from any setting - cloud, on-premises, or edge - into a unified, enterprise-grade managed service. You can use [Qdrant Cloud's UI](/documentation/cloud/create-cluster/) to create and manage your database clusters, while they still remain within your infrastructure. **All Qdrant databases will operate solely within your network, using your storage and compute resources. All user data will stay securely within your environment and won't be accessible by the Qdrant Cloud platform, or anyone else outside your organization.** Qdrant Hybrid Cloud ensures data privacy, deployment flexibility, low latency, and delivers cost savings, elevating standards for vector search and AI applications. **How it works:** Qdrant Hybrid Cloud relies on Kubernetes and works with any standard compliant Kubernetes distribution. When you onboard a Kubernetes cluster as a Hybrid Cloud Environment, you can deploy the Qdrant Kubernetes Operator and Cloud Agent into this cluster. These will manage Qdrant databases within your Kubernetes cluster and establish an outgoing connection to Qdrant Cloud to transport telemetry and receive management instructions. You can then benefit from the same cloud management features and transport telemetry that is available with any managed Qdrant Cloud cluster. **Setup instructions:** To begin using Qdrant Hybrid Cloud, [read our installation guide](/documentation/hybrid-cloud/hybrid-cloud-setup/). ## Hybrid Cloud architecture The Hybrid Cloud onboarding will install a Kubernetes Operator and Cloud Agent into your Kubernetes cluster. The Cloud Agent will establish an outgoing connection to `cloud.qdrant.io` on port `443` to transport telemetry and receive management instructions. It will also interact with the Kubernetes API through a ServiceAccount to create, read, update and delete the necessary Qdrant CRs (Custom Resources) based on the configuration setup in the Qdrant Cloud Console. The Qdrant Kubernetes Operator will manage the Qdrant databases within your Kubernetes cluster. Based on the Qdrant CRs, it will interact with the Kubernetes API through a ServiceAccount to create and manage the necessary resources to deploy and run Qdrant databases, such as Pods, Services, ConfigMaps, and Secrets. Both component's access is limited to the Kubernetes namespace that you chose during the onboarding process. After the initial onboarding, the lifecycle of these components will be controlled by the Qdrant Cloud platform via the built-in Helm controller. You don't need to expose your Kubernetes Cluster to the Qdrant Cloud platform, you don't need to open any ports for incoming traffic, and you don't need to provide any Kubernetes or cloud provider credentials to the Qdrant Cloud platform. ![hybrid-cloud-architecture](/blog/hybrid-cloud/hybrid-cloud-architecture.png) ",documentation/hybrid-cloud/_index.md "--- title: Multitenancy weight: 12 aliases: - ../tutorials/multiple-partitions - /tutorials/multiple-partitions/ --- # Configure Multitenancy **How many collections should you create?** In most cases, you should only use a single collection with payload-based partitioning. This approach is called multitenancy. It is efficient for most of users, but it requires additional configuration. This document will show you how to set it up. **When should you create multiple collections?** When you have a limited number of users and you need isolation. This approach is flexible, but it may be more costly, since creating numerous collections may result in resource overhead. Also, you need to ensure that they do not affect each other in any way, including performance-wise. ## Partition by payload When an instance is shared between multiple users, you may need to partition vectors by user. This is done so that each user can only access their own vectors and can't see the vectors of other users. > ### NOTE > > The key doesn't necessarily need to be named `group_id`. You can choose a name that best suits your data structure and naming conventions. 1. Add a `group_id` field to each vector in the collection. ```http PUT /collections/{collection_name}/points { ""points"": [ { ""id"": 1, ""payload"": {""group_id"": ""user_1""}, ""vector"": [0.9, 0.1, 0.1] }, { ""id"": 2, ""payload"": {""group_id"": ""user_1""}, ""vector"": [0.1, 0.9, 0.1] }, { ""id"": 3, ""payload"": {""group_id"": ""user_2""}, ""vector"": [0.1, 0.1, 0.9] }, ] } ``` ```python client.upsert( collection_name=""{collection_name}"", points=[ models.PointStruct( id=1, payload={""group_id"": ""user_1""}, vector=[0.9, 0.1, 0.1], ), models.PointStruct( id=2, payload={""group_id"": ""user_1""}, vector=[0.1, 0.9, 0.1], ), models.PointStruct( id=3, payload={""group_id"": ""user_2""}, vector=[0.1, 0.1, 0.9], ), ], ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.upsert(""{collection_name}"", { points: [ { id: 1, payload: { group_id: ""user_1"" }, vector: [0.9, 0.1, 0.1], }, { id: 2, payload: { group_id: ""user_1"" }, vector: [0.1, 0.9, 0.1], }, { id: 3, payload: { group_id: ""user_2"" }, vector: [0.1, 0.1, 0.9], }, ], }); ``` ```rust use qdrant_client::qdrant::{PointStruct, UpsertPointsBuilder}; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .upsert_points(UpsertPointsBuilder::new( ""{collection_name}"", vec![ PointStruct::new(1, vec![0.9, 0.1, 0.1], [(""group_id"", ""user_1"".into())]), PointStruct::new(2, vec![0.1, 0.9, 0.1], [(""group_id"", ""user_1"".into())]), PointStruct::new(3, vec![0.1, 0.1, 0.9], [(""group_id"", ""user_2"".into())]), ], )) .await?; ``` ```java import java.util.List; import java.util.Map; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.PointStruct; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .upsertAsync( ""{collection_name}"", List.of( PointStruct.newBuilder() .setId(id(1)) .setVectors(vectors(0.9f, 0.1f, 0.1f)) .putAllPayload(Map.of(""group_id"", value(""user_1""))) .build(), PointStruct.newBuilder() .setId(id(2)) .setVectors(vectors(0.1f, 0.9f, 0.1f)) .putAllPayload(Map.of(""group_id"", value(""user_1""))) .build(), PointStruct.newBuilder() .setId(id(3)) .setVectors(vectors(0.1f, 0.1f, 0.9f)) .putAllPayload(Map.of(""group_id"", value(""user_2""))) .build())) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.UpsertAsync( collectionName: ""{collection_name}"", points: new List { new() { Id = 1, Vectors = new[] { 0.9f, 0.1f, 0.1f }, Payload = { [""group_id""] = ""user_1"" } }, new() { Id = 2, Vectors = new[] { 0.1f, 0.9f, 0.1f }, Payload = { [""group_id""] = ""user_1"" } }, new() { Id = 3, Vectors = new[] { 0.1f, 0.1f, 0.9f }, Payload = { [""group_id""] = ""user_2"" } } } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Upsert(context.Background(), &qdrant.UpsertPoints{ CollectionName: ""{collection_name}"", Points: []*qdrant.PointStruct{ { Id: qdrant.NewIDNum(1), Vectors: qdrant.NewVectors(0.9, 0.1, 0.1), Payload: qdrant.NewValueMap(map[string]any{""group_id"": ""user_1""}), }, { Id: qdrant.NewIDNum(2), Vectors: qdrant.NewVectors(0.1, 0.9, 0.1), Payload: qdrant.NewValueMap(map[string]any{""group_id"": ""user_1""}), }, { Id: qdrant.NewIDNum(3), Vectors: qdrant.NewVectors(0.1, 0.1, 0.9), Payload: qdrant.NewValueMap(map[string]any{""group_id"": ""user_2""}), }, }, }) ``` 2. Use a filter along with `group_id` to filter vectors for each user. ```http POST /collections/{collection_name}/points/query { ""query"": [0.1, 0.1, 0.9], ""filter"": { ""must"": [ { ""key"": ""group_id"", ""match"": { ""value"": ""user_1"" } } ] }, ""limit"": 10 } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.query_points( collection_name=""{collection_name}"", query=[0.1, 0.1, 0.9], query_filter=models.Filter( must=[ models.FieldCondition( key=""group_id"", match=models.MatchValue( value=""user_1"", ), ) ] ), limit=10, ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.query(""{collection_name}"", { query: [0.1, 0.1, 0.9], filter: { must: [{ key: ""group_id"", match: { value: ""user_1"" } }], }, limit: 10, }); ``` ```rust use qdrant_client::qdrant::{Condition, Filter, QueryPointsBuilder}; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .query( QueryPointsBuilder::new(""{collection_name}"") .query(vec![0.1, 0.1, 0.9]) .limit(10) .filter(Filter::must([Condition::matches( ""group_id"", ""user_1"".to_string(), )])), ) .await?; ``` ```java import java.util.List; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.Filter; import io.qdrant.client.grpc.Points.QueryPoints; import static io.qdrant.client.QueryFactory.nearest; import static io.qdrant.client.ConditionFactory.matchKeyword; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client.queryAsync( QueryPoints.newBuilder() .setCollectionName(""{collection_name}"") .setFilter( Filter.newBuilder().addMust(matchKeyword(""group_id"", ""user_1"")).build()) .setQuery(nearest(0.1f, 0.1f, 0.9f)) .setLimit(10) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient(""localhost"", 6334); await client.QueryAsync( collectionName: ""{collection_name}"", query: new float[] { 0.1f, 0.1f, 0.9f }, filter: MatchKeyword(""group_id"", ""user_1""), limit: 10 ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Query(context.Background(), &qdrant.QueryPoints{ CollectionName: ""{collection_name}"", Query: qdrant.NewQuery(0.1, 0.1, 0.9), Filter: &qdrant.Filter{ Must: []*qdrant.Condition{ qdrant.NewMatch(""group_id"", ""user_1""), }, }, }) ``` ## Calibrate performance The speed of indexation may become a bottleneck in this case, as each user's vector will be indexed into the same collection. To avoid this bottleneck, consider _bypassing the construction of a global vector index_ for the entire collection and building it only for individual groups instead. By adopting this strategy, Qdrant will index vectors for each user independently, significantly accelerating the process. To implement this approach, you should: 1. Set `payload_m` in the HNSW configuration to a non-zero value, such as 16. 2. Set `m` in hnsw config to 0. This will disable building global index for the whole collection. ```http PUT /collections/{collection_name} { ""vectors"": { ""size"": 768, ""distance"": ""Cosine"" }, ""hnsw_config"": { ""payload_m"": 16, ""m"": 0 } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.create_collection( collection_name=""{collection_name}"", vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE), hnsw_config=models.HnswConfigDiff( payload_m=16, m=0, ), ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.createCollection(""{collection_name}"", { vectors: { size: 768, distance: ""Cosine"", }, hnsw_config: { payload_m: 16, m: 0, }, }); ``` ```rust use qdrant_client::qdrant::{ CreateCollectionBuilder, Distance, HnswConfigDiffBuilder, VectorParamsBuilder, }; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .create_collection( CreateCollectionBuilder::new(""{collection_name}"") .vectors_config(VectorParamsBuilder::new(768, Distance::Cosine)) .hnsw_config(HnswConfigDiffBuilder::default().payload_m(16).m(0)), ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.HnswConfigDiff; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName(""{collection_name}"") .setVectorsConfig( VectorsConfig.newBuilder() .setParams( VectorParams.newBuilder() .setSize(768) .setDistance(Distance.Cosine) .build()) .build()) .setHnswConfig(HnswConfigDiff.newBuilder().setPayloadM(16).setM(0).build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.CreateCollectionAsync( collectionName: ""{collection_name}"", vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine }, hnswConfig: new HnswConfigDiff { PayloadM = 16, M = 0 } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.CreateCollection(context.Background(), &qdrant.CreateCollection{ CollectionName: ""{collection_name}"", VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{ Size: 768, Distance: qdrant.Distance_Cosine, }), HnswConfig: &qdrant.HnswConfigDiff{ PayloadM: qdrant.PtrOf(uint64(16)), M: qdrant.PtrOf(uint64(0)), }, }) ``` 3. Create keyword payload index for `group_id` field. ```http PUT /collections/{collection_name}/index { ""field_name"": ""group_id"", ""field_schema"": { ""type"": ""keyword"", ""is_tenant"": true } } ``` ```python client.create_payload_index( collection_name=""{collection_name}"", field_name=""group_id"", field_schema=models.KeywordIndexParams( type=""keywprd"", is_tenant=True, ), ) ``` ```typescript client.createPayloadIndex(""{collection_name}"", { field_name: ""group_id"", field_schema: { type: ""keyword"", is_tenant: true, }, }); ``` ```rust use qdrant_client::qdrant::{ CreateFieldIndexCollectionBuilder, KeywordIndexParamsBuilder, FieldType }; use qdrant_client::{Qdrant, QdrantError}; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client.create_field_index( CreateFieldIndexCollectionBuilder::new( ""{collection_name}"", ""group_id"", FieldType::Keyword, ).field_index_params( KeywordIndexParamsBuilder::default() .is_tenant(true) ) ).await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.PayloadIndexParams; import io.qdrant.client.grpc.Collections.PayloadSchemaType; import io.qdrant.client.grpc.Collections.KeywordIndexParams; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .createPayloadIndexAsync( ""{collection_name}"", ""group_id"", PayloadSchemaType.Keyword, PayloadIndexParams.newBuilder() .setKeywordIndexParams( KeywordIndexParams.newBuilder() .setIsTenant(true) .build()) .build(), null, null, null) .get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient(""localhost"", 6334); await client.CreatePayloadIndexAsync( collectionName: ""{collection_name}"", fieldName: ""group_id"", schemaType: PayloadSchemaType.Keyword, indexParams: new PayloadIndexParams { KeywordIndexParams = new KeywordIndexParams { IsTenant = true } } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.CreateFieldIndex(context.Background(), &qdrant.CreateFieldIndexCollection{ CollectionName: ""{collection_name}"", FieldName: ""group_id"", FieldType: qdrant.FieldType_FieldTypeKeyword.Enum(), FieldIndexParams: qdrant.NewPayloadIndexParams( &qdrant.KeywordIndexParams{ IsTenant: qdrant.PtrOf(true), }), }) ``` `is_tenant=true` parameter is optional, but specifying it provides storage with additional inforamtion about the usage patterns the collection is going to use. When specified, storage structure will be organized in a way to co-locate vectors of the same tenant together, which can significantly improve performance in some cases. ## Limitations One downside to this approach is that global requests (without the `group_id` filter) will be slower since they will necessitate scanning all groups to identify the nearest neighbors. ",documentation/guides/multiple-partitions.md "--- title: Administration weight: 10 aliases: - ../administration --- # Administration Qdrant exposes administration tools which enable to modify at runtime the behavior of a qdrant instance without changing its configuration manually. ## Locking A locking API enables users to restrict the possible operations on a qdrant process. It is important to mention that: - The configuration is not persistent therefore it is necessary to lock again following a restart. - Locking applies to a single node only. It is necessary to call lock on all the desired nodes in a distributed deployment setup. Lock request sample: ```http POST /locks { ""error_message"": ""write is forbidden"", ""write"": true } ``` Write flags enables/disables write lock. If the write lock is set to true, qdrant doesn't allow creating new collections or adding new data to the existing storage. However, deletion operations or updates are not forbidden under the write lock. This feature enables administrators to prevent a qdrant process from using more disk space while permitting users to search and delete unnecessary data. You can optionally provide the error message that should be used for error responses to users. ## Recovery mode *Available as of v1.2.0* Recovery mode can help in situations where Qdrant fails to start repeatedly. When starting in recovery mode, Qdrant only loads collection metadata to prevent going out of memory. This allows you to resolve out of memory situations, for example, by deleting a collection. After resolving Qdrant can be restarted normally to continue operation. In recovery mode, collection operations are limited to [deleting](../../concepts/collections/#delete-collection) a collection. That is because only collection metadata is loaded during recovery. To enable recovery mode with the Qdrant Docker image you must set the environment variable `QDRANT_ALLOW_RECOVERY_MODE=true`. The container will try to start normally first, and restarts in recovery mode if initialisation fails due to an out of memory error. This behavior is disabled by default. If using a Qdrant binary, recovery mode can be enabled by setting a recovery message in an environment variable, such as `QDRANT__STORAGE__RECOVERY_MODE=""My recovery message""`. ",documentation/guides/administration.md "--- title: Troubleshooting weight: 170 aliases: - ../tutorials/common-errors - /documentation/troubleshooting/ --- # Solving common errors ## Too many files open (OS error 24) Each collection segment needs some files to be open. At some point you may encounter the following errors in your server log: ```text Error: Too many files open (OS error 24) ``` In such a case you may need to increase the limit of the open files. It might be done, for example, while you launch the Docker container: ```bash docker run --ulimit nofile=10000:10000 qdrant/qdrant:latest ``` The command above will set both soft and hard limits to `10000`. If you are not using Docker, the following command will change the limit for the current user session: ```bash ulimit -n 10000 ``` Please note, the command should be executed before you run Qdrant server. ## Can't open Collections meta Wal When starting a Qdrant instance as part of a distributed deployment, you may come across an error message similar to this: ```bash Can't open Collections meta Wal: Os { code: 11, kind: WouldBlock, message: ""Resource temporarily unavailable"" } ``` It means that Qdrant cannot start because a collection cannot be loaded. Its associated [WAL](../../concepts/storage/#versioning) files are currently unavailable, likely because the same files are already being used by another Qdrant instance. Each node must have their own separate storage directory, volume or mount. The formed cluster will take care of sharing all data with each node, putting it all in the correct places for you. If using Kubernetes, each node must have their own volume. If using Docker, each node must have their own storage mount or volume. If using Qdrant directly, each node must have their own storage directory. ",documentation/guides/common-errors.md "--- title: Configuration weight: 160 aliases: - ../configuration - /guides/configuration/ --- # Configuration To change or correct Qdrant's behavior, default collection settings, and network interface parameters, you can use configuration files. The default configuration file is located at [config/config.yaml](https://github.com/qdrant/qdrant/blob/master/config/config.yaml). To change the default configuration, add a new configuration file and specify the path with `--config-path path/to/custom_config.yaml`. If running in production mode, you could also choose to overwrite `config/production.yaml`. See [ordering](#order-and-priority) for details on how configurations are loaded. The [Installation](../installation/) guide contains examples of how to set up Qdrant with a custom configuration for the different deployment methods. ## Order and priority *Effective as of v1.2.1* Multiple configurations may be loaded on startup. All of them are merged into a single effective configuration that is used by Qdrant. Configurations are loaded in the following order, if present: 1. Embedded base configuration ([source](https://github.com/qdrant/qdrant/blob/master/config/config.yaml)) 2. File `config/config.yaml` 3. File `config/{RUN_MODE}.yaml` (such as `config/production.yaml`) 4. File `config/local.yaml` 5. Config provided with `--config-path PATH` (if set) 6. [Environment variables](#environment-variables) This list is from least to most significant. Properties in later configurations will overwrite those loaded before it. For example, a property set with `--config-path` will overwrite those in other files. Most of these files are included by default in the Docker container. But it is likely that they are absent on your local machine if you run the `qdrant` binary manually. If file 2 or 3 are not found, a warning is shown on startup. If file 5 is provided but not found, an error is shown on startup. Other supported configuration file formats and extensions include: `.toml`, `.json`, `.ini`. ## Environment variables It is possible to set configuration properties using environment variables. Environment variables are always the most significant and cannot be overwritten (see [ordering](#order-and-priority)). All environment variables are prefixed with `QDRANT__` and are separated with `__`. These variables: ```bash QDRANT__LOG_LEVEL=INFO QDRANT__SERVICE__HTTP_PORT=6333 QDRANT__SERVICE__ENABLE_TLS=1 QDRANT__TLS__CERT=./tls/cert.pem QDRANT__TLS__CERT_TTL=3600 ``` result in this configuration: ```yaml log_level: INFO service: http_port: 6333 enable_tls: true tls: cert: ./tls/cert.pem cert_ttl: 3600 ``` To run Qdrant locally with a different HTTP port you could use: ```bash QDRANT__SERVICE__HTTP_PORT=1234 ./qdrant ``` ## Configuration file example ```yaml log_level: INFO storage: # Where to store all the data storage_path: ./storage # Where to store snapshots snapshots_path: ./snapshots snapshots_config: # ""local"" or ""s3"" - where to store snapshots snapshots_storage: local # s3_config: # bucket: """" # region: """" # access_key: """" # secret_key: """" # endpoint_url: """" # Where to store temporary files # If null, temporary snapshot are stored in: storage/snapshots_temp/ temp_path: null # If true - point's payload will not be stored in memory. # It will be read from the disk every time it is requested. # This setting saves RAM by (slightly) increasing the response time. # Note: those payload values that are involved in filtering and are indexed - remain in RAM. on_disk_payload: true # Maximum number of concurrent updates to shard replicas # If `null` - maximum concurrency is used. update_concurrency: null # Write-ahead-log related configuration wal: # Size of a single WAL segment wal_capacity_mb: 32 # Number of WAL segments to create ahead of actual data requirement wal_segments_ahead: 0 # Normal node - receives all updates and answers all queries node_type: ""Normal"" # Listener node - receives all updates, but does not answer search/read queries # Useful for setting up a dedicated backup node # node_type: ""Listener"" performance: # Number of parallel threads used for search operations. If 0 - auto selection. max_search_threads: 0 # Max number of threads (jobs) for running optimizations across all collections, each thread runs one job. # If 0 - have no limit and choose dynamically to saturate CPU. # Note: each optimization job will also use `max_indexing_threads` threads by itself for index building. max_optimization_threads: 0 # CPU budget, how many CPUs (threads) to allocate for an optimization job. # If 0 - auto selection, keep 1 or more CPUs unallocated depending on CPU size # If negative - subtract this number of CPUs from the available CPUs. # If positive - use this exact number of CPUs. optimizer_cpu_budget: 0 # Prevent DDoS of too many concurrent updates in distributed mode. # One external update usually triggers multiple internal updates, which breaks internal # timings. For example, the health check timing and consensus timing. # If null - auto selection. update_rate_limit: null # Limit for number of incoming automatic shard transfers per collection on this node, does not affect user-requested transfers. # The same value should be used on all nodes in a cluster. # Default is to allow 1 transfer. # If null - allow unlimited transfers. #incoming_shard_transfers_limit: 1 # Limit for number of outgoing automatic shard transfers per collection on this node, does not affect user-requested transfers. # The same value should be used on all nodes in a cluster. # Default is to allow 1 transfer. # If null - allow unlimited transfers. #outgoing_shard_transfers_limit: 1 # Enable async scorer which uses io_uring when rescoring. # Only supported on Linux, must be enabled in your kernel. # See: #async_scorer: false optimizers: # The minimal fraction of deleted vectors in a segment, required to perform segment optimization deleted_threshold: 0.2 # The minimal number of vectors in a segment, required to perform segment optimization vacuum_min_vector_number: 1000 # Target amount of segments optimizer will try to keep. # Real amount of segments may vary depending on multiple parameters: # - Amount of stored points # - Current write RPS # # It is recommended to select default number of segments as a factor of the number of search threads, # so that each segment would be handled evenly by one of the threads. # If `default_segment_number = 0`, will be automatically selected by the number of available CPUs default_segment_number: 0 # Do not create segments larger this size (in KiloBytes). # Large segments might require disproportionately long indexation times, # therefore it makes sense to limit the size of segments. # # If indexation speed have more priority for your - make this parameter lower. # If search speed is more important - make this parameter higher. # Note: 1Kb = 1 vector of size 256 # If not set, will be automatically selected considering the number of available CPUs. max_segment_size_kb: null # Maximum size (in KiloBytes) of vectors to store in-memory per segment. # Segments larger than this threshold will be stored as read-only memmaped file. # To enable memmap storage, lower the threshold # Note: 1Kb = 1 vector of size 256 # To explicitly disable mmap optimization, set to `0`. # If not set, will be disabled by default. memmap_threshold_kb: null # Maximum size (in KiloBytes) of vectors allowed for plain index. # Default value based on https://github.com/google-research/google-research/blob/master/scann/docs/algorithms.md # Note: 1Kb = 1 vector of size 256 # To explicitly disable vector indexing, set to `0`. # If not set, the default value will be used. indexing_threshold_kb: 20000 # Interval between forced flushes. flush_interval_sec: 5 # Max number of threads (jobs) for running optimizations per shard. # Note: each optimization job will also use `max_indexing_threads` threads by itself for index building. # If null - have no limit and choose dynamically to saturate CPU. # If 0 - no optimization threads, optimizations will be disabled. max_optimization_threads: null # This section has the same options as 'optimizers' above. All values specified here will overwrite the collections # optimizers configs regardless of the config above and the options specified at collection creation. #optimizers_overwrite: # deleted_threshold: 0.2 # vacuum_min_vector_number: 1000 # default_segment_number: 0 # max_segment_size_kb: null # memmap_threshold_kb: null # indexing_threshold_kb: 20000 # flush_interval_sec: 5 # max_optimization_threads: null # Default parameters of HNSW Index. Could be overridden for each collection or named vector individually hnsw_index: # Number of edges per node in the index graph. Larger the value - more accurate the search, more space required. m: 16 # Number of neighbours to consider during the index building. Larger the value - more accurate the search, more time required to build index. ef_construct: 100 # Minimal size (in KiloBytes) of vectors for additional payload-based indexing. # If payload chunk is smaller than `full_scan_threshold_kb` additional indexing won't be used - # in this case full-scan search should be preferred by query planner and additional indexing is not required. # Note: 1Kb = 1 vector of size 256 full_scan_threshold_kb: 10000 # Number of parallel threads used for background index building. # If 0 - automatically select. # Best to keep between 8 and 16 to prevent likelihood of building broken/inefficient HNSW graphs. # On small CPUs, less threads are used. max_indexing_threads: 0 # Store HNSW index on disk. If set to false, index will be stored in RAM. Default: false on_disk: false # Custom M param for hnsw graph built for payload index. If not set, default M will be used. payload_m: null # Default shard transfer method to use if none is defined. # If null - don't have a shard transfer preference, choose automatically. # If stream_records, snapshot or wal_delta - prefer this specific method. # More info: https://qdrant.tech/documentation/guides/distributed_deployment/#shard-transfer-method shard_transfer_method: null # Default parameters for collections collection: # Number of replicas of each shard that network tries to maintain replication_factor: 1 # How many replicas should apply the operation for us to consider it successful write_consistency_factor: 1 # Default parameters for vectors. vectors: # Whether vectors should be stored in memory or on disk. on_disk: null # shard_number_per_node: 1 # Default quantization configuration. # More info: https://qdrant.tech/documentation/guides/quantization quantization: null service: # Maximum size of POST data in a single request in megabytes max_request_size_mb: 32 # Number of parallel workers used for serving the api. If 0 - equal to the number of available cores. # If missing - Same as storage.max_search_threads max_workers: 0 # Host to bind the service on host: 0.0.0.0 # HTTP(S) port to bind the service on http_port: 6333 # gRPC port to bind the service on. # If `null` - gRPC is disabled. Default: null # Comment to disable gRPC: grpc_port: 6334 # Enable CORS headers in REST API. # If enabled, browsers would be allowed to query REST endpoints regardless of query origin. # More info: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS # Default: true enable_cors: true # Enable HTTPS for the REST and gRPC API enable_tls: false # Check user HTTPS client certificate against CA file specified in tls config verify_https_client_certificate: false # Set an api-key. # If set, all requests must include a header with the api-key. # example header: `api-key: ` # # If you enable this you should also enable TLS. # (Either above or via an external service like nginx.) # Sending an api-key over an unencrypted channel is insecure. # # Uncomment to enable. # api_key: your_secret_api_key_here # Set an api-key for read-only operations. # If set, all requests must include a header with the api-key. # example header: `api-key: ` # # If you enable this you should also enable TLS. # (Either above or via an external service like nginx.) # Sending an api-key over an unencrypted channel is insecure. # # Uncomment to enable. # read_only_api_key: your_secret_read_only_api_key_here # Uncomment to enable JWT Role Based Access Control (RBAC). # If enabled, you can generate JWT tokens with fine-grained rules for access control. # Use generated token instead of API key. # # jwt_rbac: true cluster: # Use `enabled: true` to run Qdrant in distributed deployment mode enabled: false # Configuration of the inter-cluster communication p2p: # Port for internal communication between peers port: 6335 # Use TLS for communication between peers enable_tls: false # Configuration related to distributed consensus algorithm consensus: # How frequently peers should ping each other. # Setting this parameter to lower value will allow consensus # to detect disconnected nodes earlier, but too frequent # tick period may create significant network and CPU overhead. # We encourage you NOT to change this parameter unless you know what you are doing. tick_period_ms: 100 # Set to true to prevent service from sending usage statistics to the developers. # Read more: https://qdrant.tech/documentation/guides/telemetry telemetry_disabled: false # TLS configuration. # Required if either service.enable_tls or cluster.p2p.enable_tls is true. tls: # Server certificate chain file cert: ./tls/cert.pem # Server private key file key: ./tls/key.pem # Certificate authority certificate file. # This certificate will be used to validate the certificates # presented by other nodes during inter-cluster communication. # # If verify_https_client_certificate is true, it will verify # HTTPS client certificate # # Required if cluster.p2p.enable_tls is true. ca_cert: ./tls/cacert.pem # TTL in seconds to reload certificate from disk, useful for certificate rotations. # Only works for HTTPS endpoints. Does not support gRPC (and intra-cluster communication). # If `null` - TTL is disabled. cert_ttl: 3600 ``` ## Validation *Available since v1.1.1* The configuration is validated on startup. If a configuration is loaded but validation fails, a warning is logged. E.g.: ```text WARN Settings configuration file has validation errors: WARN - storage.optimizers.memmap_threshold: value 123 invalid, must be 1000 or larger WARN - storage.hnsw_index.m: value 1 invalid, must be from 4 to 10000 ``` The server will continue to operate. Any validation errors should be fixed as soon as possible though to prevent problematic behavior. ",documentation/guides/configuration.md "--- title: Optimize Resources weight: 11 aliases: - ../tutorials/optimize --- # Optimize Qdrant Different use cases have different requirements for balancing between memory, speed, and precision. Qdrant is designed to be flexible and customizable so you can tune it to your needs. ![Trafeoff](/docs/tradeoff.png) Let's look deeper into each of those possible optimization scenarios. ## Prefer low memory footprint with high speed search The main way to achieve high speed search with low memory footprint is to keep vectors on disk while at the same time minimizing the number of disk reads. Vector quantization is one way to achieve this. Quantization converts vectors into a more compact representation, which can be stored in memory and used for search. With smaller vectors you can cache more in RAM and reduce the number of disk reads. To configure in-memory quantization, with on-disk original vectors, you need to create a collection with the following configuration: ```http PUT /collections/{collection_name} { ""vectors"": { ""size"": 768, ""distance"": ""Cosine"", ""on_disk"": true }, ""quantization_config"": { ""scalar"": { ""type"": ""int8"", ""always_ram"": true } } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.create_collection( collection_name=""{collection_name}"", vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE, on_disk=True), quantization_config=models.ScalarQuantization( scalar=models.ScalarQuantizationConfig( type=models.ScalarType.INT8, always_ram=True, ), ), ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.createCollection(""{collection_name}"", { vectors: { size: 768, distance: ""Cosine"", on_disk: true, }, quantization_config: { scalar: { type: ""int8"", always_ram: true, }, }, }); ``` ```rust use qdrant_client::qdrant::{ CreateCollectionBuilder, Distance, QuantizationType, ScalarQuantizationBuilder, VectorParamsBuilder, }; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .create_collection( CreateCollectionBuilder::new(""{collection_name}"") .vectors_config(VectorParamsBuilder::new(768, Distance::Cosine)) .quantization_config( ScalarQuantizationBuilder::default() .r#type(QuantizationType::Int8.into()) .always_ram(true), ), ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.OptimizersConfigDiff; import io.qdrant.client.grpc.Collections.QuantizationConfig; import io.qdrant.client.grpc.Collections.QuantizationType; import io.qdrant.client.grpc.Collections.ScalarQuantization; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName(""{collection_name}"") .setVectorsConfig( VectorsConfig.newBuilder() .setParams( VectorParams.newBuilder() .setSize(768) .setDistance(Distance.Cosine) .setOnDisk(true) .build()) .build()) .setQuantizationConfig( QuantizationConfig.newBuilder() .setScalar( ScalarQuantization.newBuilder() .setType(QuantizationType.Int8) .setAlwaysRam(true) .build()) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.CreateCollectionAsync( collectionName: ""{collection_name}"", vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine, OnDisk = true }, quantizationConfig: new QuantizationConfig { Scalar = new ScalarQuantization { Type = QuantizationType.Int8, AlwaysRam = true } } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.CreateCollection(context.Background(), &qdrant.CreateCollection{ CollectionName: ""{collection_name}"", VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{ Size: 768, Distance: qdrant.Distance_Cosine, OnDisk: qdrant.PtrOf(true), }), QuantizationConfig: qdrant.NewQuantizationScalar(&qdrant.ScalarQuantization{ Type: qdrant.QuantizationType_Int8, AlwaysRam: qdrant.PtrOf(true), }), }) ``` `on_disk` will ensure that vectors will be stored on disk, while `always_ram` will ensure that quantized vectors will be stored in RAM. Optionally, you can disable rescoring with search `params`, which will reduce the number of disk reads even further, but potentially slightly decrease the precision. ```http POST /collections/{collection_name}/points/query { ""query"": [0.2, 0.1, 0.9, 0.7], ""params"": { ""quantization"": { ""rescore"": false } }, ""limit"": 10 } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.query_points( collection_name=""{collection_name}"", query=[0.2, 0.1, 0.9, 0.7], search_params=models.SearchParams( quantization=models.QuantizationSearchParams(rescore=False) ), ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.query(""{collection_name}"", { query: [0.2, 0.1, 0.9, 0.7], params: { quantization: { rescore: false, }, }, }); ``` ```rust use qdrant_client::qdrant::{ QuantizationSearchParamsBuilder, QueryPointsBuilder, SearchParamsBuilder, }; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .query( QueryPointsBuilder::new(""{collection_name}"") .query(vec![0.2, 0.1, 0.9, 0.7]) .limit(3) .params( SearchParamsBuilder::default() .quantization(QuantizationSearchParamsBuilder::default().rescore(false)), ), ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.QuantizationSearchParams; import io.qdrant.client.grpc.Points.QueryPoints; import io.qdrant.client.grpc.Points.SearchParams; import static io.qdrant.client.QueryFactory.nearest; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client.queryAsync( QueryPoints.newBuilder() .setCollectionName(""{collection_name}"") .setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f)) .setParams( SearchParams.newBuilder() .setQuantization( QuantizationSearchParams.newBuilder().setRescore(false).build()) .build()) .setLimit(3) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.QueryAsync( collectionName: ""{collection_name}"", query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f }, searchParams: new SearchParams { Quantization = new QuantizationSearchParams { Rescore = false } }, limit: 3 ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Query(context.Background(), &qdrant.QueryPoints{ CollectionName: ""{collection_name}"", Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7), Params: &qdrant.SearchParams{ Quantization: &qdrant.QuantizationSearchParams{ Rescore: qdrant.PtrOf(true), }, }, }) ``` ## Prefer high precision with low memory footprint In case you need high precision, but don't have enough RAM to store vectors in memory, you can enable on-disk vectors and HNSW index. ```http PUT /collections/{collection_name} { ""vectors"": { ""size"": 768, ""distance"": ""Cosine"", ""on_disk"": true }, ""hnsw_config"": { ""on_disk"": true } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.create_collection( collection_name=""{collection_name}"", vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE, on_disk=True), hnsw_config=models.HnswConfigDiff(on_disk=True), ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.createCollection(""{collection_name}"", { vectors: { size: 768, distance: ""Cosine"", on_disk: true, }, hnsw_config: { on_disk: true, }, }); ``` ```rust use qdrant_client::qdrant::{ CreateCollectionBuilder, Distance, HnswConfigDiffBuilder, VectorParamsBuilder, }; use qdrant_client::{Qdrant, QdrantError}; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .create_collection( CreateCollectionBuilder::new(""{collection_name}"") .vectors_config(VectorParamsBuilder::new(768, Distance::Cosine).on_disk(true)) .hnsw_config(HnswConfigDiffBuilder::default().on_disk(true)), ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.HnswConfigDiff; import io.qdrant.client.grpc.Collections.OptimizersConfigDiff; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName(""{collection_name}"") .setVectorsConfig( VectorsConfig.newBuilder() .setParams( VectorParams.newBuilder() .setSize(768) .setDistance(Distance.Cosine) .setOnDisk(true) .build()) .build()) .setHnswConfig(HnswConfigDiff.newBuilder().setOnDisk(true).build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.CreateCollectionAsync( collectionName: ""{collection_name}"", vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine, OnDisk = true}, hnswConfig: new HnswConfigDiff { OnDisk = true } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.CreateCollection(context.Background(), &qdrant.CreateCollection{ CollectionName: ""{collection_name}"", VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{ Size: 768, Distance: qdrant.Distance_Cosine, OnDisk: qdrant.PtrOf(true), }), HnswConfig: &qdrant.HnswConfigDiff{ OnDisk: qdrant.PtrOf(true), }, }) ``` In this scenario you can increase the precision of the search by increasing the `ef` and `m` parameters of the HNSW index, even with limited RAM. ```json ... ""hnsw_config"": { ""m"": 64, ""ef_construct"": 512, ""on_disk"": true } ... ``` The disk IOPS is a critical factor in this scenario, it will determine how fast you can perform search. You can use [fio](https://gist.github.com/superboum/aaa45d305700a7873a8ebbab1abddf2b) to measure disk IOPS. ## Prefer high precision with high speed search For high speed and high precision search it is critical to keep as much data in RAM as possible. By default, Qdrant follows this approach, but you can tune it to your needs. It is possible to achieve high search speed and tunable accuracy by applying quantization with re-scoring. ```http PUT /collections/{collection_name} { ""vectors"": { ""size"": 768, ""distance"": ""Cosine"" }, ""quantization_config"": { ""scalar"": { ""type"": ""int8"", ""always_ram"": true } } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.create_collection( collection_name=""{collection_name}"", vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE), quantization_config=models.ScalarQuantization( scalar=models.ScalarQuantizationConfig( type=models.ScalarType.INT8, always_ram=True, ), ), ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.createCollection(""{collection_name}"", { vectors: { size: 768, distance: ""Cosine"", }, quantization_config: { scalar: { type: ""int8"", always_ram: true, }, }, }); ``` ```rust use qdrant_client::qdrant::{ CreateCollectionBuilder, Distance, QuantizationType, ScalarQuantizationBuilder, VectorParamsBuilder, }; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .create_collection( CreateCollectionBuilder::new(""{collection_name}"") .vectors_config(VectorParamsBuilder::new(768, Distance::Cosine)) .quantization_config( ScalarQuantizationBuilder::default() .r#type(QuantizationType::Int8.into()) .always_ram(true), ), ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.OptimizersConfigDiff; import io.qdrant.client.grpc.Collections.QuantizationConfig; import io.qdrant.client.grpc.Collections.QuantizationType; import io.qdrant.client.grpc.Collections.ScalarQuantization; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName(""{collection_name}"") .setVectorsConfig( VectorsConfig.newBuilder() .setParams( VectorParams.newBuilder() .setSize(768) .setDistance(Distance.Cosine) .build()) .build()) .setQuantizationConfig( QuantizationConfig.newBuilder() .setScalar( ScalarQuantization.newBuilder() .setType(QuantizationType.Int8) .setAlwaysRam(true) .build()) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.CreateCollectionAsync( collectionName: ""{collection_name}"", vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine}, quantizationConfig: new QuantizationConfig { Scalar = new ScalarQuantization { Type = QuantizationType.Int8, AlwaysRam = true } } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.CreateCollection(context.Background(), &qdrant.CreateCollection{ CollectionName: ""{collection_name}"", VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{ Size: 768, Distance: qdrant.Distance_Cosine, }), QuantizationConfig: qdrant.NewQuantizationScalar(&qdrant.ScalarQuantization{ Type: qdrant.QuantizationType_Int8, AlwaysRam: qdrant.PtrOf(true), }), }) ``` There are also some search-time parameters you can use to tune the search accuracy and speed: ```http POST /collections/{collection_name}/points/query { ""query"": [0.2, 0.1, 0.9, 0.7], ""params"": { ""hnsw_ef"": 128, ""exact"": false }, ""limit"": 3 } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.query_points( collection_name=""{collection_name}"", query=[0.2, 0.1, 0.9, 0.7], search_params=models.SearchParams(hnsw_ef=128, exact=False), limit=3, ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.query(""{collection_name}"", { query: [0.2, 0.1, 0.9, 0.7], params: { hnsw_ef: 128, exact: false, }, limit: 3, }); ``` ```rust use qdrant_client::qdrant::{QueryPointsBuilder, SearchParamsBuilder}; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .query( QueryPointsBuilder::new(""{collection_name}"") .query(vec![0.2, 0.1, 0.9, 0.7]) .limit(3) .params(SearchParamsBuilder::default().hnsw_ef(128).exact(false)), ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.QueryPoints; import io.qdrant.client.grpc.Points.SearchParams; import static io.qdrant.client.QueryFactory.nearest; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client.queryAsync( QueryPoints.newBuilder() .setCollectionName(""{collection_name}"") .setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f)) .setParams(SearchParams.newBuilder().setHnswEf(128).setExact(false).build()) .setLimit(3) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.QueryAsync( collectionName: ""{collection_name}"", query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f }, searchParams: new SearchParams { HnswEf = 128, Exact = false }, limit: 3 ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Query(context.Background(), &qdrant.QueryPoints{ CollectionName: ""{collection_name}"", Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7), Params: &qdrant.SearchParams{ HnswEf: qdrant.PtrOf(uint64(128)), Exact: qdrant.PtrOf(false), }, }) ``` - `hnsw_ef` - controls the number of neighbors to visit during search. The higher the value, the more accurate and slower the search will be. Recommended range is 32-512. - `exact` - if set to `true`, will perform exact search, which will be slower, but more accurate. You can use it to compare results of the search with different `hnsw_ef` values versus the ground truth. ## Latency vs Throughput - There are two main approaches to measure the speed of search: - latency of the request - the time from the moment request is submitted to the moment a response is received - throughput - the number of requests per second the system can handle Those approaches are not mutually exclusive, but in some cases it might be preferable to optimize for one or another. To prefer minimizing latency, you can set up Qdrant to use as many cores as possible for a single request\. You can do this by setting the number of segments in the collection to be equal to the number of cores in the system. In this case, each segment will be processed in parallel, and the final result will be obtained faster. ```http PUT /collections/{collection_name} { ""vectors"": { ""size"": 768, ""distance"": ""Cosine"" }, ""optimizers_config"": { ""default_segment_number"": 16 } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.create_collection( collection_name=""{collection_name}"", vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE), optimizers_config=models.OptimizersConfigDiff(default_segment_number=16), ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.createCollection(""{collection_name}"", { vectors: { size: 768, distance: ""Cosine"", }, optimizers_config: { default_segment_number: 16, }, }); ``` ```rust use qdrant_client::qdrant::{ CreateCollectionBuilder, Distance, OptimizersConfigDiffBuilder, VectorParamsBuilder, }; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .create_collection( CreateCollectionBuilder::new(""{collection_name}"") .vectors_config(VectorParamsBuilder::new(768, Distance::Cosine)) .optimizers_config( OptimizersConfigDiffBuilder::default().default_segment_number(16), ), ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.OptimizersConfigDiff; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName(""{collection_name}"") .setVectorsConfig( VectorsConfig.newBuilder() .setParams( VectorParams.newBuilder() .setSize(768) .setDistance(Distance.Cosine) .build()) .build()) .setOptimizersConfig( OptimizersConfigDiff.newBuilder().setDefaultSegmentNumber(16).build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.CreateCollectionAsync( collectionName: ""{collection_name}"", vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine }, optimizersConfig: new OptimizersConfigDiff { DefaultSegmentNumber = 16 } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.CreateCollection(context.Background(), &qdrant.CreateCollection{ CollectionName: ""{collection_name}"", VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{ Size: 768, Distance: qdrant.Distance_Cosine, }), OptimizersConfig: &qdrant.OptimizersConfigDiff{ DefaultSegmentNumber: qdrant.PtrOf(uint64(16)), }, }) ``` To prefer throughput, you can set up Qdrant to use as many cores as possible for processing multiple requests in parallel. To do that, you can configure qdrant to use minimal number of segments, which is usually 2. Large segments benefit from the size of the index and overall smaller number of vector comparisons required to find the nearest neighbors. But at the same time require more time to build index. ```http PUT /collections/{collection_name} { ""vectors"": { ""size"": 768, ""distance"": ""Cosine"" }, ""optimizers_config"": { ""default_segment_number"": 2 } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.create_collection( collection_name=""{collection_name}"", vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE), optimizers_config=models.OptimizersConfigDiff(default_segment_number=2), ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.createCollection(""{collection_name}"", { vectors: { size: 768, distance: ""Cosine"", }, optimizers_config: { default_segment_number: 2, }, }); ``` ```rust use qdrant_client::qdrant::{ CreateCollectionBuilder, Distance, OptimizersConfigDiffBuilder, VectorParamsBuilder, }; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .create_collection( CreateCollectionBuilder::new(""{collection_name}"") .vectors_config(VectorParamsBuilder::new(768, Distance::Cosine)) .optimizers_config( OptimizersConfigDiffBuilder::default().default_segment_number(2), ), ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.OptimizersConfigDiff; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName(""{collection_name}"") .setVectorsConfig( VectorsConfig.newBuilder() .setParams( VectorParams.newBuilder() .setSize(768) .setDistance(Distance.Cosine) .build()) .build()) .setOptimizersConfig( OptimizersConfigDiff.newBuilder().setDefaultSegmentNumber(2).build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.CreateCollectionAsync( collectionName: ""{collection_name}"", vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine }, optimizersConfig: new OptimizersConfigDiff { DefaultSegmentNumber = 2 } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.CreateCollection(context.Background(), &qdrant.CreateCollection{ CollectionName: ""{collection_name}"", VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{ Size: 768, Distance: qdrant.Distance_Cosine, }), OptimizersConfig: &qdrant.OptimizersConfigDiff{ DefaultSegmentNumber: qdrant.PtrOf(uint64(2)), }, }) ```",documentation/guides/optimize.md "--- title: Telemetry weight: 150 aliases: - ../telemetry --- # Telemetry Qdrant collects anonymized usage statistics from users in order to improve the engine. You can [deactivate](#deactivate-telemetry) at any time, and any data that has already been collected can be [deleted on request](#request-information-deletion). ## Why do we collect telemetry? We want to make Qdrant fast and reliable. To do this, we need to understand how it performs in real-world scenarios. We do a lot of benchmarking internally, but it is impossible to cover all possible use cases, hardware, and configurations. In order to identify bottlenecks and improve Qdrant, we need to collect information about how it is used. Additionally, Qdrant uses a bunch of internal heuristics to optimize the performance. To better set up parameters for these heuristics, we need to collect timings and counters of various pieces of code. With this information, we can make Qdrant faster for everyone. ## What information is collected? There are 3 types of information that we collect: * System information - general information about the system, such as CPU, RAM, and disk type. As well as the configuration of the Qdrant instance. * Performance - information about timings and counters of various pieces of code. * Critical error reports - information about critical errors, such as backtraces, that occurred in Qdrant. This information would allow to identify problems nobody yet reported to us. ### We **never** collect the following information: - User's IP address - Any data that can be used to identify the user or the user's organization - Any data, stored in the collections - Any names of the collections - Any URLs ## How do we anonymize data? We understand that some users may be concerned about the privacy of their data. That is why we make an extra effort to ensure your privacy. There are several different techniques that we use to anonymize the data: - We use a random UUID to identify instances. This UUID is generated on each startup and is not stored anywhere. There are no other ways to distinguish between different instances. - We round all big numbers, so that the last digits are always 0. For example, if the number is 123456789, we will store 123456000. - We replace all names with irreversibly hashed values. So no collection or field names will leak into the telemetry. - All urls are hashed as well. You can see exact version of anomymized collected data by accessing the [telemetry API](https://api.qdrant.tech/master/api-reference/service/telemetry) with `anonymize=true` parameter. For example, ## Deactivate telemetry You can deactivate telemetry by: - setting the `QDRANT__TELEMETRY_DISABLED` environment variable to `true` - setting the config option `telemetry_disabled` to `true` in the `config/production.yaml` or `config/config.yaml` files - using cli option `--disable-telemetry` Any of these options will prevent Qdrant from sending any telemetry data. If you decide to deactivate telemetry, we kindly ask you to share your feedback with us in the [Discord community](https://qdrant.to/discord) or GitHub [discussions](https://github.com/qdrant/qdrant/discussions) ## Request information deletion We provide an email address so that users can request the complete removal of their data from all of our tools. To do so, send an email to privacy@qdrant.com containing the unique identifier generated for your Qdrant installation. You can find this identifier in the telemetry API response (`""id""` field), or in the logs of your Qdrant instance. Any questions regarding the management of the data we collect can also be sent to this email address. ",documentation/guides/telemetry.md "--- title: Distributed Deployment weight: 100 aliases: - ../distributed_deployment - /guides/distributed_deployment --- # Distributed deployment Since version v0.8.0 Qdrant supports a distributed deployment mode. In this mode, multiple Qdrant services communicate with each other to distribute the data across the peers to extend the storage capabilities and increase stability. ## How many Qdrant nodes should I run? The ideal number of Qdrant nodes depends on how much you value cost-saving, resilience, and performance/scalability in relation to each other. - **Prioritizing cost-saving**: If cost is most important to you, run a single Qdrant node. This is not recommended for production environments. Drawbacks: - Resilience: Users will experience downtime during node restarts, and recovery is not possible unless you have backups or snapshots. - Performance: Limited to the resources of a single server. - **Prioritizing resilience**: If resilience is most important to you, run a Qdrant cluster with three or more nodes and two or more shard replicas. Clusters with three or more nodes and replication can perform all operations even while one node is down. Additionally, they gain performance benefits from load-balancing and they can recover from the permanent loss of one node without the need for backups or snapshots (but backups are still strongly recommended). This is most recommended for production environments. Drawbacks: - Cost: Larger clusters are more costly than smaller clusters, which is the only drawback of this configuration. - **Balancing cost, resilience, and performance**: Running a two-node Qdrant cluster with replicated shards allows the cluster to respond to most read/write requests even when one node is down, such as during maintenance events. Having two nodes also means greater performance than a single-node cluster while still being cheaper than a three-node cluster. Drawbacks: - Resilience (uptime): The cluster cannot perform operations on collections when one node is down. Those operations require >50% of nodes to be running, so this is only possible in a 3+ node cluster. Since creating, editing, and deleting collections are usually rare operations, many users find this drawback to be negligible. - Resilience (data integrity): If the data on one of the two nodes is permanently lost or corrupted, it cannot be recovered aside from snapshots or backups. Only 3+ node clusters can recover from the permanent loss of a single node since recovery operations require >50% of the cluster to be healthy. - Cost: Replicating your shards requires storing two copies of your data. - Performance: The maximum performance of a Qdrant cluster increases as you add more nodes. In summary, single-node clusters are best for non-production workloads, replicated 3+ node clusters are the gold standard, and replicated 2-node clusters strike a good balance. ## Enabling distributed mode in self-hosted Qdrant To enable distributed deployment - enable the cluster mode in the [configuration](../configuration/) or using the ENV variable: `QDRANT__CLUSTER__ENABLED=true`. ```yaml cluster: # Use `enabled: true` to run Qdrant in distributed deployment mode enabled: true # Configuration of the inter-cluster communication p2p: # Port for internal communication between peers port: 6335 # Configuration related to distributed consensus algorithm consensus: # How frequently peers should ping each other. # Setting this parameter to lower value will allow consensus # to detect disconnected node earlier, but too frequent # tick period may create significant network and CPU overhead. # We encourage you NOT to change this parameter unless you know what you are doing. tick_period_ms: 100 ``` By default, Qdrant will use port `6335` for its internal communication. All peers should be accessible on this port from within the cluster, but make sure to isolate this port from outside access, as it might be used to perform write operations. Additionally, you must provide the `--uri` flag to the first peer so it can tell other nodes how it should be reached: ```bash ./qdrant --uri 'http://qdrant_node_1:6335' ``` Subsequent peers in a cluster must know at least one node of the existing cluster to synchronize through it with the rest of the cluster. To do this, they need to be provided with a bootstrap URL: ```bash ./qdrant --bootstrap 'http://qdrant_node_1:6335' ``` The URL of the new peers themselves will be calculated automatically from the IP address of their request. But it is also possible to provide them individually using the `--uri` argument. ```text USAGE: qdrant [OPTIONS] OPTIONS: --bootstrap Uri of the peer to bootstrap from in case of multi-peer deployment. If not specified - this peer will be considered as a first in a new deployment --uri Uri of this peer. Other peers should be able to reach it by this uri. This value has to be supplied if this is the first peer in a new deployment. In case this is not the first peer and it bootstraps the value is optional. If not supplied then qdrant will take internal grpc port from config and derive the IP address of this peer on bootstrap peer (receiving side) ``` After a successful synchronization you can observe the state of the cluster through the [REST API](https://api.qdrant.tech/master/api-reference/distributed/cluster-status): ```http GET /cluster ``` Example result: ```json { ""result"": { ""status"": ""enabled"", ""peer_id"": 11532566549086892000, ""peers"": { ""9834046559507417430"": { ""uri"": ""http://172.18.0.3:6335/"" }, ""11532566549086892528"": { ""uri"": ""http://qdrant_node_1:6335/"" } }, ""raft_info"": { ""term"": 1, ""commit"": 4, ""pending_operations"": 1, ""leader"": 11532566549086892000, ""role"": ""Leader"" } }, ""status"": ""ok"", ""time"": 5.731e-06 } ``` Note that enabling distributed mode does not automatically replicate your data. See the section on [making use of a new distributed Qdrant cluster](#making-use-of-a-new-distributed-qdrant-cluster) for the next steps. ## Enabling distributed mode in Qdrant Cloud For best results, first ensure your cluster is running Qdrant v1.7.4 or higher. Older versions of Qdrant do support distributed mode, but improvements in v1.7.4 make distributed clusters more resilient during outages. In the [Qdrant Cloud console](https://cloud.qdrant.io/), click ""Scale Up"" to increase your cluster size to >1. Qdrant Cloud configures the distributed mode settings automatically. After the scale-up process completes, you will have a new empty node running alongside your existing node(s). To replicate data into this new empty node, see the next section. ## Making use of a new distributed Qdrant cluster When you enable distributed mode and scale up to two or more nodes, your data does not move to the new node automatically; it starts out empty. To make use of your new empty node, do one of the following: * Create a new replicated collection by setting the [replication_factor](#replication-factor) to 2 or more and setting the [number of shards](#choosing-the-right-number-of-shards) to a multiple of your number of nodes. * If you have an existing collection which does not contain enough shards for each node, you must create a new collection as described in the previous bullet point. * If you already have enough shards for each node and you merely need to replicate your data, follow the directions for [creating new shard replicas](#creating-new-shard-replicas). * If you already have enough shards for each node and your data is already replicated, you can move data (without replicating it) onto the new node(s) by [moving shards](#moving-shards). ## Raft Qdrant uses the [Raft](https://raft.github.io/) consensus protocol to maintain consistency regarding the cluster topology and the collections structure. Operations on points, on the other hand, do not go through the consensus infrastructure. Qdrant is not intended to have strong transaction guarantees, which allows it to perform point operations with low overhead. In practice, it means that Qdrant does not guarantee atomic distributed updates but allows you to wait until the [operation is complete](../../concepts/points/#awaiting-result) to see the results of your writes. Operations on collections, on the contrary, are part of the consensus which guarantees that all operations are durable and eventually executed by all nodes. In practice it means that a majority of nodes agree on what operations should be applied before the service will perform them. Practically, it means that if the cluster is in a transition state - either electing a new leader after a failure or starting up, the collection update operations will be denied. You may use the cluster [REST API](https://api.qdrant.tech/master/api-reference/distributed/cluster-status) to check the state of the consensus. ## Sharding A Collection in Qdrant is made of one or more shards. A shard is an independent store of points which is able to perform all operations provided by collections. There are two methods of distributing points across shards: - **Automatic sharding**: Points are distributed among shards by using a [consistent hashing](https://en.wikipedia.org/wiki/Consistent_hashing) algorithm, so that shards are managing non-intersecting subsets of points. This is the default behavior. - **User-defined sharding**: _Available as of v1.7.0_ - Each point is uploaded to a specific shard, so that operations can hit only the shard or shards they need. Even with this distribution, shards still ensure having non-intersecting subsets of points. [See more...](#user-defined-sharding) Each node knows where all parts of the collection are stored through the [consensus protocol](./#raft), so when you send a search request to one Qdrant node, it automatically queries all other nodes to obtain the full search result. ### Choosing the right number of shards When you create a collection, Qdrant splits the collection into `shard_number` shards. If left unset, `shard_number` is set to the number of nodes in your cluster when the collection was created. The `shard_number` cannot be changed without recreating the collection. ```http PUT /collections/{collection_name} { ""vectors"": { ""size"": 300, ""distance"": ""Cosine"" }, ""shard_number"": 6 } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.create_collection( collection_name=""{collection_name}"", vectors_config=models.VectorParams(size=300, distance=models.Distance.COSINE), shard_number=6, ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.createCollection(""{collection_name}"", { vectors: { size: 300, distance: ""Cosine"", }, shard_number: 6, }); ``` ```rust use qdrant_client::qdrant::{CreateCollectionBuilder, Distance, VectorParamsBuilder}; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .create_collection( CreateCollectionBuilder::new(""{collection_name}"") .vectors_config(VectorParamsBuilder::new(300, Distance::Cosine)) .shard_number(6), ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName(""{collection_name}"") .setVectorsConfig( VectorsConfig.newBuilder() .setParams( VectorParams.newBuilder() .setSize(300) .setDistance(Distance.Cosine) .build()) .build()) .setShardNumber(6) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.CreateCollectionAsync( collectionName: ""{collection_name}"", vectorsConfig: new VectorParams { Size = 300, Distance = Distance.Cosine }, shardNumber: 6 ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.CreateCollection(context.Background(), &qdrant.CreateCollection{ CollectionName: ""{collection_name}"", VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{ Size: 300, Distance: qdrant.Distance_Cosine, }), ShardNumber: qdrant.PtrOf(uint32(6)), }) ``` To ensure all nodes in your cluster are evenly utilized, the number of shards must be a multiple of the number of nodes you are currently running in your cluster. > Aside: Advanced use cases such as multitenancy may require an uneven distribution of shards. See [Multitenancy](/articles/multitenancy/). We recommend creating at least 2 shards per node to allow future expansion without having to re-shard. Re-sharding should be avoided since it requires creating a new collection. In-place re-sharding is planned for a future version of Qdrant. If you anticipate a lot of growth, we recommend 12 shards since you can expand from 1 node up to 2, 3, 6, and 12 nodes without having to re-shard. Having more than 12 shards in a small cluster may not be worth the performance overhead. Shards are evenly distributed across all existing nodes when a collection is first created, but Qdrant does not automatically rebalance shards if your cluster size or replication factor changes (since this is an expensive operation on large clusters). See the next section for how to move shards after scaling operations. ### Moving shards *Available as of v0.9.0* Qdrant allows moving shards between nodes in the cluster and removing nodes from the cluster. This functionality unlocks the ability to dynamically scale the cluster size without downtime. It also allows you to upgrade or migrate nodes without downtime. Qdrant provides the information regarding the current shard distribution in the cluster with the [Collection Cluster info API](https://api.qdrant.tech/master/api-reference/distributed/collection-cluster-info). Use the [Update collection cluster setup API](https://api.qdrant.tech/master/api-reference/distributed/update-collection-cluster) to initiate the shard transfer: ```http POST /collections/{collection_name}/cluster { ""move_shard"": { ""shard_id"": 0, ""from_peer_id"": 381894127, ""to_peer_id"": 467122995 } } ``` After the transfer is initiated, the service will process it based on the used [transfer method](#shard-transfer-method) keeping both shards in sync. Once the transfer is completed, the old shard is deleted from the source node. In case you want to downscale the cluster, you can move all shards away from a peer and then remove the peer using the [remove peer API](https://api.qdrant.tech/master/api-reference/distributed/remove-peer). ```http DELETE /cluster/peer/{peer_id} ``` After that, Qdrant will exclude the node from the consensus, and the instance will be ready for shutdown. ### User-defined sharding *Available as of v1.7.0* Qdrant allows you to specify the shard for each point individually. This feature is useful if you want to control the shard placement of your data, so that operations can hit only the subset of shards they actually need. In big clusters, this can significantly improve the performance of operations that do not require the whole collection to be scanned. A clear use-case for this feature is managing a multi-tenant collection, where each tenant (let it be a user or organization) is assumed to be segregated, so they can have their data stored in separate shards. To enable user-defined sharding, set `sharding_method` to `custom` during collection creation: ```http PUT /collections/{collection_name} { ""shard_number"": 1, ""sharding_method"": ""custom"" // ... other collection parameters } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.create_collection( collection_name=""{collection_name}"", shard_number=1, sharding_method=models.ShardingMethod.CUSTOM, # ... other collection parameters ) client.create_shard_key(""{collection_name}"", ""{shard_key}"") ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.createCollection(""{collection_name}"", { shard_number: 1, sharding_method: ""custom"", // ... other collection parameters }); client.createShardKey(""{collection_name}"", { shard_key: ""{shard_key}"" }); ``` ```rust use qdrant_client::qdrant::{ CreateCollectionBuilder, CreateShardKeyBuilder, CreateShardKeyRequestBuilder, Distance, ShardingMethod, VectorParamsBuilder, }; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .create_collection( CreateCollectionBuilder::new(""{collection_name}"") .vectors_config(VectorParamsBuilder::new(300, Distance::Cosine)) .shard_number(1) .sharding_method(ShardingMethod::Custom.into()), ) .await?; client .create_shard_key( CreateShardKeyRequestBuilder::new(""{collection_name}"") .request(CreateShardKeyBuilder::default().shard_key(""{shard_key"".to_string())), ) .await?; ``` ```java import static io.qdrant.client.ShardKeyFactory.shardKey; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.ShardingMethod; import io.qdrant.client.grpc.Collections.CreateShardKey; import io.qdrant.client.grpc.Collections.CreateShardKeyRequest; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName(""{collection_name}"") // ... other collection parameters .setShardNumber(1) .setShardingMethod(ShardingMethod.Custom) .build()) .get(); client.createShardKeyAsync(CreateShardKeyRequest.newBuilder() .setCollectionName(""{collection_name}"") .setRequest(CreateShardKey.newBuilder() .setShardKey(shardKey(""{shard_key}"")) .build()) .build()).get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.CreateCollectionAsync( collectionName: ""{collection_name}"", // ... other collection parameters shardNumber: 1, shardingMethod: ShardingMethod.Custom ); await client.CreateShardKeyAsync( ""{collection_name}"", new CreateShardKey { ShardKey = new ShardKey { Keyword = ""{shard_key}"", } } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.CreateCollection(context.Background(), &qdrant.CreateCollection{ CollectionName: ""{collection_name}"", // ... other collection parameters ShardNumber: qdrant.PtrOf(uint32(1)), ShardingMethod: qdrant.ShardingMethod_Custom.Enum(), }) client.CreateShardKey(context.Background(), ""{collection_name}"", &qdrant.CreateShardKey{ ShardKey: qdrant.NewShardKey(""{shard_key}""), }) ``` In this mode, the `shard_number` means the number of shards per shard key, where points will be distributed evenly. For example, if you have 10 shard keys and a collection config with these settings: ```json { ""shard_number"": 1, ""sharding_method"": ""custom"", ""replication_factor"": 2 } ``` Then you will have `1 * 10 * 2 = 20` total physical shards in the collection. Physical shards require a large amount of resources, so make sure your custom sharding key has a low cardinality. For large cardinality keys, it is recommended to use [partition by payload](/documentation/guides/multiple-partitions/#partition-by-payload) instead. To specify the shard for each point, you need to provide the `shard_key` field in the upsert request: ```http PUT /collections/{collection_name}/points { ""points"": [ { ""id"": 1111, ""vector"": [0.1, 0.2, 0.3] }, ] ""shard_key"": ""user_1"" } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.upsert( collection_name=""{collection_name}"", points=[ models.PointStruct( id=1111, vector=[0.1, 0.2, 0.3], ), ], shard_key_selector=""user_1"", ) ``` ```typescript client.upsertPoints(""{collection_name}"", { points: [ { id: 1111, vector: [0.1, 0.2, 0.3], }, ], shard_key: ""user_1"", }); ``` ```rust use qdrant_client::qdrant::{PointStruct, UpsertPointsBuilder}; use qdrant_client::Payload; client .upsert_points( UpsertPointsBuilder::new( ""{collection_name}"", vec![PointStruct::new( 111, vec![0.1, 0.2, 0.3], Payload::default(), )], ) .shard_key_selector(""user_1"".to_string()), ) .await?; ``` ```java import java.util.List; import static io.qdrant.client.PointIdFactory.id; import static io.qdrant.client.ShardKeySelectorFactory.shardKeySelector; import static io.qdrant.client.VectorsFactory.vectors; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.PointStruct; import io.qdrant.client.grpc.Points.UpsertPoints; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .upsertAsync( UpsertPoints.newBuilder() .setCollectionName(""{collection_name}"") .addAllPoints( List.of( PointStruct.newBuilder() .setId(id(111)) .setVectors(vectors(0.1f, 0.2f, 0.3f)) .build())) .setShardKeySelector(shardKeySelector(""user_1"")) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.UpsertAsync( collectionName: ""{collection_name}"", points: new List { new() { Id = 111, Vectors = new[] { 0.1f, 0.2f, 0.3f } } }, shardKeySelector: new ShardKeySelector { ShardKeys = { new List { ""user_1"" } } } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Upsert(context.Background(), &qdrant.UpsertPoints{ CollectionName: ""{collection_name}"", Points: []*qdrant.PointStruct{ { Id: qdrant.NewIDNum(111), Vectors: qdrant.NewVectors(0.1, 0.2, 0.3), }, }, ShardKeySelector: &qdrant.ShardKeySelector{ ShardKeys: []*qdrant.ShardKey{ qdrant.NewShardKey(""user_1""), }, }, }) ``` * When using custom sharding, IDs are only enforced to be unique within a shard key. This means that you can have multiple points with the same ID, if they have different shard keys. This is a limitation of the current implementation, and is an anti-pattern that should be avoided because it can create scenarios of points with the same ID to have different contents. In the future, we plan to add a global ID uniqueness check. Now you can target the operations to specific shard(s) by specifying the `shard_key` on any operation you do. Operations that do not specify the shard key will be executed on __all__ shards. Another use-case would be to have shards that track the data chronologically, so that you can do more complex itineraries like uploading live data in one shard and archiving it once a certain age has passed. ### Shard transfer method *Available as of v1.7.0* There are different methods for transferring a shard, such as moving or replicating, to another node. Depending on what performance and guarantees you'd like to have and how you'd like to manage your cluster, you likely want to choose a specific method. Each method has its own pros and cons. Which is fastest depends on the size and state of a shard. Available shard transfer methods are: - `stream_records`: _(default)_ transfer by streaming just its records to the target node in batches. - `snapshot`: transfer including its index and quantized data by utilizing a [snapshot](../../concepts/snapshots/) automatically. - `wal_delta`: _(auto recovery default)_ transfer by resolving [WAL] difference; the operations that were missed. Each has pros, cons and specific requirements, some of which are: | Method: | Stream records | Snapshot | WAL delta | |:---|:---|:---|:---| | **Version** | v0.8.0+ | v1.7.0+ | v1.8.0+ | | **Target** | New/existing shard | New/existing shard | Existing shard | | **Connectivity** | Internal gRPC API (6335) | REST API (6333)
Internal gRPC API (6335) | Internal gRPC API (6335) | | **HNSW index** | Doesn't transfer, will reindex on target. | Does transfer, immediately ready on target. | Doesn't transfer, may index on target. | | **Quantization** | Doesn't transfer, will requantize on target. | Does transfer, immediately ready on target. | Doesn't transfer, may quantize on target. | | **Ordering** | Unordered updates on target[^unordered] | Ordered updates on target[^ordered] | Ordered updates on target[^ordered] | | **Disk space** | No extra required | Extra required for snapshot on both nodes | No extra required | [^unordered]: Weak ordering for updates: All records are streamed to the target node in order. New updates are received on the target node in parallel, while the transfer of records is still happening. We therefore have `weak` ordering, regardless of what [ordering](#write-ordering) is used for updates. [^ordered]: Strong ordering for updates: A snapshot of the shard is created, it is transferred and recovered on the target node. That ensures the state of the shard is kept consistent. New updates are queued on the source node, and transferred in order to the target node. Updates therefore have the same [ordering](#write-ordering) as the user selects, making `strong` ordering possible. To select a shard transfer method, specify the `method` like: ```http POST /collections/{collection_name}/cluster { ""move_shard"": { ""shard_id"": 0, ""from_peer_id"": 381894127, ""to_peer_id"": 467122995, ""method"": ""snapshot"" } } ``` The `stream_records` transfer method is the simplest available. It simply transfers all shard records in batches to the target node until it has transferred all of them, keeping both shards in sync. It will also make sure the transferred shard indexing process is keeping up before performing a final switch. The method has two common disadvantages: 1. It does not transfer index or quantization data, meaning that the shard has to be optimized again on the new node, which can be very expensive. 2. The ordering guarantees are `weak`[^unordered], which is not suitable for some applications. Because it is so simple, it's also very robust, making it a reliable choice if the above cons are acceptable in your use case. If your cluster is unstable and out of resources, it's probably best to use the `stream_records` transfer method, because it is unlikely to fail. The `snapshot` transfer method utilizes [snapshots](../../concepts/snapshots/) to transfer a shard. A snapshot is created automatically. It is then transferred and restored on the target node. After this is done, the snapshot is removed from both nodes. While the snapshot/transfer/restore operation is happening, the source node queues up all new operations. All queued updates are then sent in order to the target shard to bring it into the same state as the source. There are two important benefits: 1. It transfers index and quantization data, so that the shard does not have to be optimized again on the target node, making them immediately available. This way, Qdrant ensures that there will be no degradation in performance at the end of the transfer. Especially on large shards, this can give a huge performance improvement. 2. The ordering guarantees can be `strong`[^ordered], required for some applications. The `wal_delta` transfer method only transfers the difference between two shards. More specifically, it transfers all operations that were missed to the target shard. The [WAL] of both shards is used to resolve this. There are two benefits: 1. It will be very fast because it only transfers the difference rather than all data. 2. The ordering guarantees can be `strong`[^ordered], required for some applications. Two disadvantages are: 1. It can only be used to transfer to a shard that already exists on the other node. 2. Applicability is limited because the WALs normally don't hold more than 64MB of recent operations. But that should be enough for a node that quickly restarts, to upgrade for example. If a delta cannot be resolved, this method automatically falls back to `stream_records` which equals transferring the full shard. The `stream_records` method is currently used as default. This may change in the future. As of Qdrant 1.9.0 `wal_delta` is used for automatic shard replications to recover dead shards. [WAL]: ../../concepts/storage/#versioning ## Replication *Available as of v0.11.0* Qdrant allows you to replicate shards between nodes in the cluster. Shard replication increases the reliability of the cluster by keeping several copies of a shard spread across the cluster. This ensures the availability of the data in case of node failures, except if all replicas are lost. ### Replication factor When you create a collection, you can control how many shard replicas you'd like to store by changing the `replication_factor`. By default, `replication_factor` is set to ""1"", meaning no additional copy is maintained automatically. You can change that by setting the `replication_factor` when you create a collection. Currently, the replication factor of a collection can only be configured at creation time. ```http PUT /collections/{collection_name} { ""vectors"": { ""size"": 300, ""distance"": ""Cosine"" }, ""shard_number"": 6, ""replication_factor"": 2, } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.create_collection( collection_name=""{collection_name}"", vectors_config=models.VectorParams(size=300, distance=models.Distance.COSINE), shard_number=6, replication_factor=2, ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.createCollection(""{collection_name}"", { vectors: { size: 300, distance: ""Cosine"", }, shard_number: 6, replication_factor: 2, }); ``` ```rust use qdrant_client::qdrant::{CreateCollectionBuilder, Distance, VectorParamsBuilder}; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .create_collection( CreateCollectionBuilder::new(""{collection_name}"") .vectors_config(VectorParamsBuilder::new(300, Distance::Cosine)) .shard_number(6) .replication_factor(2), ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName(""{collection_name}"") .setVectorsConfig( VectorsConfig.newBuilder() .setParams( VectorParams.newBuilder() .setSize(300) .setDistance(Distance.Cosine) .build()) .build()) .setShardNumber(6) .setReplicationFactor(2) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.CreateCollectionAsync( collectionName: ""{collection_name}"", vectorsConfig: new VectorParams { Size = 300, Distance = Distance.Cosine }, shardNumber: 6, replicationFactor: 2 ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.CreateCollection(context.Background(), &qdrant.CreateCollection{ CollectionName: ""{collection_name}"", VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{ Size: 300, Distance: qdrant.Distance_Cosine, }), ShardNumber: qdrant.PtrOf(uint32(6)), ReplicationFactor: qdrant.PtrOf(uint32(2)), }) ``` This code sample creates a collection with a total of 6 logical shards backed by a total of 12 physical shards. Since a replication factor of ""2"" would require twice as much storage space, it is advised to make sure the hardware can host the additional shard replicas beforehand. ### Creating new shard replicas It is possible to create or delete replicas manually on an existing collection using the [Update collection cluster setup API](https://api.qdrant.tech/master/api-reference/distributed/update-collection-cluster). A replica can be added on a specific peer by specifying the peer from which to replicate. ```http POST /collections/{collection_name}/cluster { ""replicate_shard"": { ""shard_id"": 0, ""from_peer_id"": 381894127, ""to_peer_id"": 467122995 } } ``` And a replica can be removed on a specific peer. ```http POST /collections/{collection_name}/cluster { ""drop_replica"": { ""shard_id"": 0, ""peer_id"": 381894127 } } ``` Keep in mind that a collection must contain at least one active replica of a shard. ### Error handling Replicas can be in different states: - Active: healthy and ready to serve traffic - Dead: unhealthy and not ready to serve traffic - Partial: currently under resynchronization before activation A replica is marked as dead if it does not respond to internal healthchecks or if it fails to serve traffic. A dead replica will not receive traffic from other peers and might require a manual intervention if it does not recover automatically. This mechanism ensures data consistency and availability if a subset of the replicas fail during an update operation. ### Node Failure Recovery Sometimes hardware malfunctions might render some nodes of the Qdrant cluster unrecoverable. No system is immune to this. But several recovery scenarios allow qdrant to stay available for requests and even avoid performance degradation. Let's walk through them from best to worst. **Recover with replicated collection** If the number of failed nodes is less than the replication factor of the collection, then your cluster should still be able to perform read, search and update queries. Now, if the failed node restarts, consensus will trigger the replication process to update the recovering node with the newest updates it has missed. If the failed node never restarts, you can recover the lost shards if you have a 3+ node cluster. You cannot recover lost shards in smaller clusters because recovery operations go through [raft](#raft) which requires >50% of the nodes to be healthy. **Recreate node with replicated collections** If a node fails and it is impossible to recover it, you should exclude the dead node from the consensus and create an empty node. To exclude failed nodes from the consensus, use [remove peer](https://api.qdrant.tech/master/api-reference/distributed/remove-peer) API. Apply the `force` flag if necessary. When you create a new node, make sure to attach it to the existing cluster by specifying `--bootstrap` CLI parameter with the URL of any of the running cluster nodes. Once the new node is ready and synchronized with the cluster, you might want to ensure that the collection shards are replicated enough. Remember that Qdrant will not automatically balance shards since this is an expensive operation. Use the [Replicate Shard Operation](https://api.qdrant.tech/master/api-reference/distributed/update-collection-cluster) to create another copy of the shard on the newly connected node. It's worth mentioning that Qdrant only provides the necessary building blocks to create an automated failure recovery. Building a completely automatic process of collection scaling would require control over the cluster machines themself. Check out our [cloud solution](https://qdrant.to/cloud), where we made exactly that. **Recover from snapshot** If there are no copies of data in the cluster, it is still possible to recover from a snapshot. Follow the same steps to detach failed node and create a new one in the cluster: * To exclude failed nodes from the consensus, use [remove peer](https://api.qdrant.tech/master/api-reference/distributed/remove-peer) API. Apply the `force` flag if necessary. * Create a new node, making sure to attach it to the existing cluster by specifying the `--bootstrap` CLI parameter with the URL of any of the running cluster nodes. Snapshot recovery, used in single-node deployment, is different from cluster one. Consensus manages all metadata about all collections and does not require snapshots to recover it. But you can use snapshots to recover missing shards of the collections. Use the [Collection Snapshot Recovery API](../../concepts/snapshots/#recover-in-cluster-deployment) to do it. The service will download the specified snapshot of the collection and recover shards with data from it. Once all shards of the collection are recovered, the collection will become operational again. ### Temporary node failure If properly configured, running Qdrant in distributed mode can make your cluster resistant to outages when one node fails temporarily. Here is how differently-configured Qdrant clusters respond: * 1-node clusters: All operations time out or fail for up to a few minutes. It depends on how long it takes to restart and load data from disk. * 2-node clusters where shards ARE NOT replicated: All operations will time out or fail for up to a few minutes. It depends on how long it takes to restart and load data from disk. * 2-node clusters where all shards ARE replicated to both nodes: All requests except for operations on collections continue to work during the outage. * 3+-node clusters where all shards are replicated to at least 2 nodes: All requests continue to work during the outage. ## Consistency guarantees By default, Qdrant focuses on availability and maximum throughput of search operations. For the majority of use cases, this is a preferable trade-off. During the normal state of operation, it is possible to search and modify data from any peers in the cluster. Before responding to the client, the peer handling the request dispatches all operations according to the current topology in order to keep the data synchronized across the cluster. - reads are using a partial fan-out strategy to optimize latency and availability - writes are executed in parallel on all active sharded replicas ![Embeddings](/docs/concurrent-operations-replicas.png) However, in some cases, it is necessary to ensure additional guarantees during possible hardware instabilities, mass concurrent updates of same documents, etc. Qdrant provides a few options to control consistency guarantees: - `write_consistency_factor` - defines the number of replicas that must acknowledge a write operation before responding to the client. Increasing this value will make write operations tolerant to network partitions in the cluster, but will require a higher number of replicas to be active to perform write operations. - Read `consistency` param, can be used with search and retrieve operations to ensure that the results obtained from all replicas are the same. If this option is used, Qdrant will perform the read operation on multiple replicas and resolve the result according to the selected strategy. This option is useful to avoid data inconsistency in case of concurrent updates of the same documents. This options is preferred if the update operations are frequent and the number of replicas is low. - Write `ordering` param, can be used with update and delete operations to ensure that the operations are executed in the same order on all replicas. If this option is used, Qdrant will route the operation to the leader replica of the shard and wait for the response before responding to the client. This option is useful to avoid data inconsistency in case of concurrent updates of the same documents. This options is preferred if read operations are more frequent than update and if search performance is critical. ### Write consistency factor The `write_consistency_factor` represents the number of replicas that must acknowledge a write operation before responding to the client. It is set to one by default. It can be configured at the collection's creation time. ```http PUT /collections/{collection_name} { ""vectors"": { ""size"": 300, ""distance"": ""Cosine"" }, ""shard_number"": 6, ""replication_factor"": 2, ""write_consistency_factor"": 2, } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.create_collection( collection_name=""{collection_name}"", vectors_config=models.VectorParams(size=300, distance=models.Distance.COSINE), shard_number=6, replication_factor=2, write_consistency_factor=2, ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.createCollection(""{collection_name}"", { vectors: { size: 300, distance: ""Cosine"", }, shard_number: 6, replication_factor: 2, write_consistency_factor: 2, }); ``` ```rust use qdrant_client::qdrant::{CreateCollectionBuilder, Distance, VectorParamsBuilder}; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .create_collection( CreateCollectionBuilder::new(""{collection_name}"") .vectors_config(VectorParamsBuilder::new(300, Distance::Cosine)) .shard_number(6) .replication_factor(2) .write_consistency_factor(2), ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName(""{collection_name}"") .setVectorsConfig( VectorsConfig.newBuilder() .setParams( VectorParams.newBuilder() .setSize(300) .setDistance(Distance.Cosine) .build()) .build()) .setShardNumber(6) .setReplicationFactor(2) .setWriteConsistencyFactor(2) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.CreateCollectionAsync( collectionName: ""{collection_name}"", vectorsConfig: new VectorParams { Size = 300, Distance = Distance.Cosine }, shardNumber: 6, replicationFactor: 2, writeConsistencyFactor: 2 ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.CreateCollection(context.Background(), &qdrant.CreateCollection{ CollectionName: ""{collection_name}"", VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{ Size: 300, Distance: qdrant.Distance_Cosine, }), ShardNumber: qdrant.PtrOf(uint32(6)), ReplicationFactor: qdrant.PtrOf(uint32(2)), WriteConsistencyFactor: qdrant.PtrOf(uint32(2)), }) ``` Write operations will fail if the number of active replicas is less than the `write_consistency_factor`. ### Read consistency Read `consistency` can be specified for most read requests and will ensure that the returned result is consistent across cluster nodes. - `all` will query all nodes and return points, which present on all of them - `majority` will query all nodes and return points, which present on the majority of them - `quorum` will query randomly selected majority of nodes and return points, which present on all of them - `1`/`2`/`3`/etc - will query specified number of randomly selected nodes and return points which present on all of them - default `consistency` is `1` ```http POST /collections/{collection_name}/points/query?consistency=majority { ""query"": [0.2, 0.1, 0.9, 0.7], ""filter"": { ""must"": [ { ""key"": ""city"", ""match"": { ""value"": ""London"" } } ] }, ""params"": { ""hnsw_ef"": 128, ""exact"": false }, ""limit"": 3 } ``` ```python client.query_points( collection_name=""{collection_name}"", query=[0.2, 0.1, 0.9, 0.7], query_filter=models.Filter( must=[ models.FieldCondition( key=""city"", match=models.MatchValue( value=""London"", ), ) ] ), search_params=models.SearchParams(hnsw_ef=128, exact=False), limit=3, consistency=""majority"", ) ``` ```typescript client.query(""{collection_name}"", { query: [0.2, 0.1, 0.9, 0.7], filter: { must: [{ key: ""city"", match: { value: ""London"" } }], }, params: { hnsw_ef: 128, exact: false, }, limit: 3, consistency: ""majority"", }); ``` ```rust use qdrant_client::qdrant::{ read_consistency::Value, Condition, Filter, QueryPointsBuilder, ReadConsistencyType, SearchParamsBuilder, }; use qdrant_client::{Qdrant, QdrantError}; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .query( QueryPointsBuilder::new(""{collection_name}"") .query(vec![0.2, 0.1, 0.9, 0.7]) .limit(3) .filter(Filter::must([Condition::matches( ""city"", ""London"".to_string(), )])) .params(SearchParamsBuilder::default().hnsw_ef(128).exact(false)) .read_consistency(Value::Type(ReadConsistencyType::Majority.into())), ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.Filter; import io.qdrant.client.grpc.Points.QueryPoints; import io.qdrant.client.grpc.Points.ReadConsistency; import io.qdrant.client.grpc.Points.ReadConsistencyType; import io.qdrant.client.grpc.Points.SearchParams; import static io.qdrant.client.QueryFactory.nearest; import static io.qdrant.client.ConditionFactory.matchKeyword; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client.queryAsync( QueryPoints.newBuilder() .setCollectionName(""{collection_name}"") .setFilter(Filter.newBuilder().addMust(matchKeyword(""city"", ""London"")).build()) .setQuery(nearest(.2f, 0.1f, 0.9f, 0.7f)) .setParams(SearchParams.newBuilder().setHnswEf(128).setExact(false).build()) .setLimit(3) .setReadConsistency( ReadConsistency.newBuilder().setType(ReadConsistencyType.Majority).build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient(""localhost"", 6334); await client.QueryAsync( collectionName: ""{collection_name}"", query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f }, filter: MatchKeyword(""city"", ""London""), searchParams: new SearchParams { HnswEf = 128, Exact = false }, limit: 3, readConsistency: new ReadConsistency { Type = ReadConsistencyType.Majority } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Query(context.Background(), &qdrant.QueryPoints{ CollectionName: ""{collection_name}"", Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7), Filter: &qdrant.Filter{ Must: []*qdrant.Condition{ qdrant.NewMatch(""city"", ""London""), }, }, Params: &qdrant.SearchParams{ HnswEf: qdrant.PtrOf(uint64(128)), }, Limit: qdrant.PtrOf(uint64(3)), ReadConsistency: qdrant.NewReadConsistencyType(qdrant.ReadConsistencyType_Majority), }) ``` ### Write ordering Write `ordering` can be specified for any write request to serialize it through a single ""leader"" node, which ensures that all write operations (issued with the same `ordering`) are performed and observed sequentially. - `weak` _(default)_ ordering does not provide any additional guarantees, so write operations can be freely reordered. - `medium` ordering serializes all write operations through a dynamically elected leader, which might cause minor inconsistencies in case of leader change. - `strong` ordering serializes all write operations through the permanent leader, which provides strong consistency, but write operations may be unavailable if the leader is down. ```http PUT /collections/{collection_name}/points?ordering=strong { ""batch"": { ""ids"": [1, 2, 3], ""payloads"": [ {""color"": ""red""}, {""color"": ""green""}, {""color"": ""blue""} ], ""vectors"": [ [0.9, 0.1, 0.1], [0.1, 0.9, 0.1], [0.1, 0.1, 0.9] ] } } ``` ```python client.upsert( collection_name=""{collection_name}"", points=models.Batch( ids=[1, 2, 3], payloads=[ {""color"": ""red""}, {""color"": ""green""}, {""color"": ""blue""}, ], vectors=[ [0.9, 0.1, 0.1], [0.1, 0.9, 0.1], [0.1, 0.1, 0.9], ], ), ordering=models.WriteOrdering.STRONG, ) ``` ```typescript client.upsert(""{collection_name}"", { batch: { ids: [1, 2, 3], payloads: [{ color: ""red"" }, { color: ""green"" }, { color: ""blue"" }], vectors: [ [0.9, 0.1, 0.1], [0.1, 0.9, 0.1], [0.1, 0.1, 0.9], ], }, ordering: ""strong"", }); ``` ```rust use qdrant_client::qdrant::{ PointStruct, UpsertPointsBuilder, WriteOrdering, WriteOrderingType }; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .upsert_points( UpsertPointsBuilder::new( ""{collection_name}"", vec![ PointStruct::new(1, vec![0.9, 0.1, 0.1], [(""color"", ""red"".into())]), PointStruct::new(2, vec![0.1, 0.9, 0.1], [(""color"", ""green"".into())]), PointStruct::new(3, vec![0.1, 0.1, 0.9], [(""color"", ""blue"".into())]), ], ) .ordering(WriteOrdering { r#type: WriteOrderingType::Strong.into(), }), ) .await?; ``` ```java import java.util.List; import java.util.Map; import static io.qdrant.client.PointIdFactory.id; import static io.qdrant.client.ValueFactory.value; import static io.qdrant.client.VectorsFactory.vectors; import io.qdrant.client.grpc.Points.PointStruct; import io.qdrant.client.grpc.Points.UpsertPoints; import io.qdrant.client.grpc.Points.WriteOrdering; import io.qdrant.client.grpc.Points.WriteOrderingType; client .upsertAsync( UpsertPoints.newBuilder() .setCollectionName(""{collection_name}"") .addAllPoints( List.of( PointStruct.newBuilder() .setId(id(1)) .setVectors(vectors(0.9f, 0.1f, 0.1f)) .putAllPayload(Map.of(""color"", value(""red""))) .build(), PointStruct.newBuilder() .setId(id(2)) .setVectors(vectors(0.1f, 0.9f, 0.1f)) .putAllPayload(Map.of(""color"", value(""green""))) .build(), PointStruct.newBuilder() .setId(id(3)) .setVectors(vectors(0.1f, 0.1f, 0.94f)) .putAllPayload(Map.of(""color"", value(""blue""))) .build())) .setOrdering(WriteOrdering.newBuilder().setType(WriteOrderingType.Strong).build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.UpsertAsync( collectionName: ""{collection_name}"", points: new List { new() { Id = 1, Vectors = new[] { 0.9f, 0.1f, 0.1f }, Payload = { [""color""] = ""red"" } }, new() { Id = 2, Vectors = new[] { 0.1f, 0.9f, 0.1f }, Payload = { [""color""] = ""green"" } }, new() { Id = 3, Vectors = new[] { 0.1f, 0.1f, 0.9f }, Payload = { [""color""] = ""blue"" } } }, ordering: WriteOrderingType.Strong ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Upsert(context.Background(), &qdrant.UpsertPoints{ CollectionName: ""{collection_name}"", Points: []*qdrant.PointStruct{ { Id: qdrant.NewIDNum(1), Vectors: qdrant.NewVectors(0.9, 0.1, 0.1), Payload: qdrant.NewValueMap(map[string]any{""color"": ""red""}), }, { Id: qdrant.NewIDNum(2), Vectors: qdrant.NewVectors(0.1, 0.9, 0.1), Payload: qdrant.NewValueMap(map[string]any{""color"": ""green""}), }, { Id: qdrant.NewIDNum(3), Vectors: qdrant.NewVectors(0.1, 0.1, 0.9), Payload: qdrant.NewValueMap(map[string]any{""color"": ""blue""}), }, }, Ordering: &qdrant.WriteOrdering{ Type: qdrant.WriteOrderingType_Strong, }, }) ``` ## Listener mode In some cases it might be useful to have a Qdrant node that only accumulates data and does not participate in search operations. There are several scenarios where this can be useful: - Listener option can be used to store data in a separate node, which can be used for backup purposes or to store data for a long time. - Listener node can be used to syncronize data into another region, while still performing search operations in the local region. To enable listener mode, set `node_type` to `Listener` in the config file: ```yaml storage: node_type: ""Listener"" ``` Listener node will not participate in search operations, but will still accept write operations and will store the data in the local storage. All shards, stored on the listener node, will be converted to the `Listener` state. Additionally, all write requests sent to the listener node will be processed with `wait=false` option, which means that the write oprations will be considered successful once they are written to WAL. This mechanism should allow to minimize upsert latency in case of parallel snapshotting. ## Consensus Checkpointing Consensus checkpointing is a technique used in Raft to improve performance and simplify log management by periodically creating a consistent snapshot of the system state. This snapshot represents a point in time where all nodes in the cluster have reached agreement on the state, and it can be used to truncate the log, reducing the amount of data that needs to be stored and transferred between nodes. For example, if you attach a new node to the cluster, it should replay all the log entries to catch up with the current state. In long-running clusters, this can take a long time, and the log can grow very large. To prevent this, one can use a special checkpointing mechanism, that will truncate the log and create a snapshot of the current state. To use this feature, simply call the `/cluster/recover` API on required node: ```http POST /cluster/recover ``` This API can be triggered on any non-leader node, it will send a request to the current consensus leader to create a snapshot. The leader will in turn send the snapshot back to the requesting node for application. In some cases, this API can be used to recover from an inconsistent cluster state by forcing a snapshot creation. ",documentation/guides/distributed_deployment.md "--- title: Installation weight: 5 aliases: - ../install - ../installation --- ## Installation requirements The following sections describe the requirements for deploying Qdrant. ### CPU and memory The CPU and RAM that you need depends on: - Number of vectors - Vector dimensions - [Payloads](/documentation/concepts/payload/) and their indexes - Storage - Replication - How you configure quantization Our [Cloud Pricing Calculator](https://cloud.qdrant.io/calculator) can help you estimate required resources without payload or index data. ### Storage For persistent storage, Qdrant requires block-level access to storage devices with a [POSIX-compatible file system](https://www.quobyte.com/storage-explained/posix-filesystem/). Network systems such as [iSCSI](https://en.wikipedia.org/wiki/ISCSI) that provide block-level access are also acceptable. Qdrant won't work with [Network file systems](https://en.wikipedia.org/wiki/File_system#Network_file_systems) such as NFS, or [Object storage](https://en.wikipedia.org/wiki/Object_storage) systems such as S3. If you offload vectors to a local disk, we recommend you use a solid-state (SSD or NVMe) drive. ### Networking Each Qdrant instance requires three open ports: * `6333` - For the HTTP API, for the [Monitoring](/documentation/guides/monitoring/) health and metrics endpoints * `6334` - For the [gRPC](/documentation/interfaces/#grpc-interface) API * `6335` - For [Distributed deployment](/documentation/guides/distributed_deployment/) All Qdrant instances in a cluster must be able to: - Communicate with each other over these ports - Allow incoming connections to ports `6333` and `6334` from clients that use Qdrant. ### Security The default configuration of Qdrant might not be secure enough for every situation. Please see [our security documentation](/documentation/guides/security/) for more information. ## Installation options Qdrant can be installed in different ways depending on your needs: For production, you can use our Qdrant Cloud to run Qdrant either fully managed in our infrastructure or with Hybrid Cloud in yours. For testing or development setups, you can run the Qdrant container or as a binary executable. If you want to run Qdrant in your own infrastructure, without any cloud connection, we recommend to install Qdrant in a Kubernetes cluster with our Helm chart, or to use our Qdrant Enterprise Operator ## Production For production, we recommend that you configure Qdrant in the cloud, with Kubernetes, or with a Qdrant Enterprise Operator. ### Qdrant Cloud You can set up production with the [Qdrant Cloud](https://qdrant.to/cloud), which provides fully managed Qdrant databases. It provides horizontal and vertical scaling, one click installation and upgrades, monitoring, logging, as well as backup and disaster recovery. For more information, see the [Qdrant Cloud documentation](/documentation/cloud/). ### Kubernetes You can use a ready-made [Helm Chart](https://helm.sh/docs/) to run Qdrant in your Kubernetes cluster: ```bash helm repo add qdrant https://qdrant.to/helm helm install qdrant qdrant/qdrant ``` For more information, see the [qdrant-helm](https://github.com/qdrant/qdrant-helm/tree/main/charts/qdrant) README. ### Qdrant Kubernetes Operator We provide a Qdrant Enterprise Operator for Kubernetes installations. For more information, [use this form](https://qdrant.to/contact-us) to contact us. ### Docker and Docker Compose Usually, we recommend to run Qdrant in Kubernetes, or use the Qdrant Cloud for production setups. This makes setting up highly available and scalable Qdrant clusters with backups and disaster recovery a lot easier. However, you can also use Docker and Docker Compose to run Qdrant in production, by following the setup instructions in the [Docker](#docker) and [Docker Compose](#docker-compose) Development sections. In addition, you have to make sure: * To use a performant [persistent storage](#storage) for your data * To configure the [security settings](/documentation/guides/security/) for your deployment * To set up and configure Qdrant on multiple nodes for a highly available [distributed deployment](/documentation/guides/distributed_deployment/) * To set up a load balancer for your Qdrant cluster * To create a [backup and disaster recovery strategy](/documentation/concepts/snapshots/) for your data * To integrate Qdrant with your [monitoring](/documentation/guides/monitoring/) and logging solutions ## Development For development and testing, we recommend that you set up Qdrant in Docker. We also have different client libraries. ### Docker The easiest way to start using Qdrant for testing or development is to run the Qdrant container image. The latest versions are always available on [DockerHub](https://hub.docker.com/r/qdrant/qdrant/tags?page=1&ordering=last_updated). Make sure that [Docker](https://docs.docker.com/engine/install/), [Podman](https://podman.io/docs/installation) or the container runtime of your choice is installed and running. The following instructions use Docker. Pull the image: ```bash docker pull qdrant/qdrant ``` In the following command, revise `$(pwd)/path/to/data` for your Docker configuration. Then use the updated command to run the container: ```bash docker run -p 6333:6333 \ -v $(pwd)/path/to/data:/qdrant/storage \ qdrant/qdrant ``` With this command, you start a Qdrant instance with the default configuration. It stores all data in the `./path/to/data` directory. By default, Qdrant uses port 6333, so at [localhost:6333](http://localhost:6333) you should see the welcome message. To change the Qdrant configuration, you can overwrite the production configuration: ```bash docker run -p 6333:6333 \ -v $(pwd)/path/to/data:/qdrant/storage \ -v $(pwd)/path/to/custom_config.yaml:/qdrant/config/production.yaml \ qdrant/qdrant ``` Alternatively, you can use your own `custom_config.yaml` configuration file: ```bash docker run -p 6333:6333 \ -v $(pwd)/path/to/data:/qdrant/storage \ -v $(pwd)/path/to/custom_config.yaml:/qdrant/config/custom_config.yaml \ qdrant/qdrant \ ./qdrant --config-path config/custom_config.yaml ``` For more information, see the [Configuration](/documentation/guides/configuration/) documentation. ### Docker Compose You can also use [Docker Compose](https://docs.docker.com/compose/) to run Qdrant. Here is an example customized compose file for a single node Qdrant cluster: ```yaml services: qdrant: image: qdrant/qdrant:latest restart: always container_name: qdrant ports: - 6333:6333 - 6334:6334 expose: - 6333 - 6334 - 6335 configs: - source: qdrant_config target: /qdrant/config/production.yaml volumes: - ./qdrant_data:/qdrant/storage configs: qdrant_config: content: | log_level: INFO ``` ### From source Qdrant is written in Rust and can be compiled into a binary executable. This installation method can be helpful if you want to compile Qdrant for a specific processor architecture or if you do not want to use Docker. Before compiling, make sure that the necessary libraries and the [rust toolchain](https://www.rust-lang.org/tools/install) are installed. The current list of required libraries can be found in the [Dockerfile](https://github.com/qdrant/qdrant/blob/master/Dockerfile). Build Qdrant with Cargo: ```bash cargo build --release --bin qdrant ``` After a successful build, you can find the binary in the following subdirectory `./target/release/qdrant`. ## Client libraries In addition to the service, Qdrant provides a variety of client libraries for different programming languages. For a full list, see our [Client libraries](../../interfaces/#client-libraries) documentation. ",documentation/guides/installation.md "--- title: Quantization weight: 120 aliases: - ../quantization - /articles/dedicated-service/documentation/guides/quantization/ - /guides/quantization/ --- # Quantization Quantization is an optional feature in Qdrant that enables efficient storage and search of high-dimensional vectors. By transforming original vectors into a new representations, quantization compresses data while preserving close to original relative distances between vectors. Different quantization methods have different mechanics and tradeoffs. We will cover them in this section. Quantization is primarily used to reduce the memory footprint and accelerate the search process in high-dimensional vector spaces. In the context of the Qdrant, quantization allows you to optimize the search engine for specific use cases, striking a balance between accuracy, storage efficiency, and search speed. There are tradeoffs associated with quantization. On the one hand, quantization allows for significant reductions in storage requirements and faster search times. This can be particularly beneficial in large-scale applications where minimizing the use of resources is a top priority. On the other hand, quantization introduces an approximation error, which can lead to a slight decrease in search quality. The level of this tradeoff depends on the quantization method and its parameters, as well as the characteristics of the data. ## Scalar Quantization *Available as of v1.1.0* Scalar quantization, in the context of vector search engines, is a compression technique that compresses vectors by reducing the number of bits used to represent each vector component. For instance, Qdrant uses 32-bit floating numbers to represent the original vector components. Scalar quantization allows you to reduce the number of bits used to 8. In other words, Qdrant performs `float32 -> uint8` conversion for each vector component. Effectively, this means that the amount of memory required to store a vector is reduced by a factor of 4. In addition to reducing the memory footprint, scalar quantization also speeds up the search process. Qdrant uses a special SIMD CPU instruction to perform fast vector comparison. This instruction works with 8-bit integers, so the conversion to `uint8` allows Qdrant to perform the comparison faster. The main drawback of scalar quantization is the loss of accuracy. The `float32 -> uint8` conversion introduces an error that can lead to a slight decrease in search quality. However, this error is usually negligible, and tends to be less significant for high-dimensional vectors. In our experiments, we found that the error introduced by scalar quantization is usually less than 1%. However, this value depends on the data and the quantization parameters. Please refer to the [Quantization Tips](#quantization-tips) section for more information on how to optimize the quantization parameters for your use case. ## Binary Quantization *Available as of v1.5.0* Binary quantization is an extreme case of scalar quantization. This feature lets you represent each vector component as a single bit, effectively reducing the memory footprint by a **factor of 32**. This is the fastest quantization method, since it lets you perform a vector comparison with a few CPU instructions. Binary quantization can achieve up to a **40x** speedup compared to the original vectors. However, binary quantization is only efficient for high-dimensional vectors and require a centered distribution of vector components. At the moment, binary quantization shows good accuracy results with the following models: - OpenAI `text-embedding-ada-002` - 1536d tested with [dbpedia dataset](https://huggingface.co/datasets/KShivendu/dbpedia-entities-openai-1M) achieving 0.98 recall@100 with 4x oversampling - Cohere AI `embed-english-v2.0` - 4096d tested on [wikipedia embeddings](https://huggingface.co/datasets/nreimers/wikipedia-22-12-large/tree/main) - 0.98 recall@50 with 2x oversampling Models with a lower dimensionality or a different distribution of vector components may require additional experiments to find the optimal quantization parameters. We recommend using binary quantization only with rescoring enabled, as it can significantly improve the search quality with just a minor performance impact. Additionally, oversampling can be used to tune the tradeoff between search speed and search quality in the query time. ### Binary Quantization as Hamming Distance The additional benefit of this method is that you can efficiently emulate Hamming distance with dot product. Specifically, if original vectors contain `{-1, 1}` as possible values, then the dot product of two vectors is equal to the Hamming distance by simply replacing `-1` with `0` and `1` with `1`.
Sample truth table | Vector 1 | Vector 2 | Dot product | |----------|----------|-------------| | 1 | 1 | 1 | | 1 | -1 | -1 | | -1 | 1 | -1 | | -1 | -1 | 1 | | Vector 1 | Vector 2 | Hamming distance | |----------|----------|------------------| | 1 | 1 | 0 | | 1 | 0 | 1 | | 0 | 1 | 1 | | 0 | 0 | 0 |
As you can see, both functions are equal up to a constant factor, which makes similarity search equivalent. Binary quantization makes it efficient to compare vectors using this representation. ## Product Quantization *Available as of v1.2.0* Product quantization is a method of compressing vectors to minimize their memory usage by dividing them into chunks and quantizing each segment individually. Each chunk is approximated by a centroid index that represents the original vector component. The positions of the centroids are determined through the utilization of a clustering algorithm such as k-means. For now, Qdrant uses only 256 centroids, so each centroid index can be represented by a single byte. Product quantization can compress by a more prominent factor than a scalar one. But there are some tradeoffs. Product quantization distance calculations are not SIMD-friendly, so it is slower than scalar quantization. Also, product quantization has a loss of accuracy, so it is recommended to use it only for high-dimensional vectors. Please refer to the [Quantization Tips](#quantization-tips) section for more information on how to optimize the quantization parameters for your use case. ## How to choose the right quantization method Here is a brief table of the pros and cons of each quantization method: | Quantization method | Accuracy | Speed | Compression | |---------------------|----------|--------------|-------------| | Scalar | 0.99 | up to x2 | 4 | | Product | 0.7 | 0.5 | up to 64 | | Binary | 0.95* | up to x40 | 32 | `*` - for compatible models - **Binary Quantization** is the fastest method and the most memory-efficient, but it requires a centered distribution of vector components. It is recommended to use with tested models only. - **Scalar Quantization** is the most universal method, as it provides a good balance between accuracy, speed, and compression. It is recommended as default quantization if binary quantization is not applicable. - **Product Quantization** may provide a better compression ratio, but it has a significant loss of accuracy and is slower than scalar quantization. It is recommended if the memory footprint is the top priority and the search speed is not critical. ## Setting up Quantization in Qdrant You can configure quantization for a collection by specifying the quantization parameters in the `quantization_config` section of the collection configuration. Quantization will be automatically applied to all vectors during the indexation process. Quantized vectors are stored alongside the original vectors in the collection, so you will still have access to the original vectors if you need them. *Available as of v1.1.1* The `quantization_config` can also be set on a per vector basis by specifying it in a named vector. ### Setting up Scalar Quantization To enable scalar quantization, you need to specify the quantization parameters in the `quantization_config` section of the collection configuration. ```http PUT /collections/{collection_name} { ""vectors"": { ""size"": 768, ""distance"": ""Cosine"" }, ""quantization_config"": { ""scalar"": { ""type"": ""int8"", ""quantile"": 0.99, ""always_ram"": true } } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.create_collection( collection_name=""{collection_name}"", vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE), quantization_config=models.ScalarQuantization( scalar=models.ScalarQuantizationConfig( type=models.ScalarType.INT8, quantile=0.99, always_ram=True, ), ), ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.createCollection(""{collection_name}"", { vectors: { size: 768, distance: ""Cosine"", }, quantization_config: { scalar: { type: ""int8"", quantile: 0.99, always_ram: true, }, }, }); ``` ```rust use qdrant_client::qdrant::{ CreateCollectionBuilder, Distance, QuantizationType, ScalarQuantizationBuilder, VectorParamsBuilder, }; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .create_collection( CreateCollectionBuilder::new(""{collection_name}"") .vectors_config(VectorParamsBuilder::new(768, Distance::Cosine)) .quantization_config( ScalarQuantizationBuilder::default() .r#type(QuantizationType::Int8.into()) .quantile(0.99) .always_ram(true), ), ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.QuantizationConfig; import io.qdrant.client.grpc.Collections.QuantizationType; import io.qdrant.client.grpc.Collections.ScalarQuantization; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName(""{collection_name}"") .setVectorsConfig( VectorsConfig.newBuilder() .setParams( VectorParams.newBuilder() .setSize(768) .setDistance(Distance.Cosine) .build()) .build()) .setQuantizationConfig( QuantizationConfig.newBuilder() .setScalar( ScalarQuantization.newBuilder() .setType(QuantizationType.Int8) .setQuantile(0.99f) .setAlwaysRam(true) .build()) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.CreateCollectionAsync( collectionName: ""{collection_name}"", vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine }, quantizationConfig: new QuantizationConfig { Scalar = new ScalarQuantization { Type = QuantizationType.Int8, Quantile = 0.99f, AlwaysRam = true } } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.CreateCollection(context.Background(), &qdrant.CreateCollection{ CollectionName: ""{collection_name}"", VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{ Size: 768, Distance: qdrant.Distance_Cosine, }), QuantizationConfig: qdrant.NewQuantizationScalar( &qdrant.ScalarQuantization{ Type: qdrant.QuantizationType_Int8, Quantile: qdrant.PtrOf(float32(0.99)), AlwaysRam: qdrant.PtrOf(true), }, ), }) ``` There are 3 parameters that you can specify in the `quantization_config` section: `type` - the type of the quantized vector components. Currently, Qdrant supports only `int8`. `quantile` - the quantile of the quantized vector components. The quantile is used to calculate the quantization bounds. For instance, if you specify `0.99` as the quantile, 1% of extreme values will be excluded from the quantization bounds. Using quantiles lower than `1.0` might be useful if there are outliers in your vector components. This parameter only affects the resulting precision and not the memory footprint. It might be worth tuning this parameter if you experience a significant decrease in search quality. `always_ram` - whether to keep quantized vectors always cached in RAM or not. By default, quantized vectors are loaded in the same way as the original vectors. However, in some setups you might want to keep quantized vectors in RAM to speed up the search process. In this case, you can set `always_ram` to `true` to store quantized vectors in RAM. ### Setting up Binary Quantization To enable binary quantization, you need to specify the quantization parameters in the `quantization_config` section of the collection configuration. ```http PUT /collections/{collection_name} { ""vectors"": { ""size"": 1536, ""distance"": ""Cosine"" }, ""quantization_config"": { ""binary"": { ""always_ram"": true } } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.create_collection( collection_name=""{collection_name}"", vectors_config=models.VectorParams(size=1536, distance=models.Distance.COSINE), quantization_config=models.BinaryQuantization( binary=models.BinaryQuantizationConfig( always_ram=True, ), ), ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.createCollection(""{collection_name}"", { vectors: { size: 1536, distance: ""Cosine"", }, quantization_config: { binary: { always_ram: true, }, }, }); ``` ```rust use qdrant_client::qdrant::{ BinaryQuantizationBuilder, CreateCollectionBuilder, Distance, VectorParamsBuilder, }; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .create_collection( CreateCollectionBuilder::new(""{collection_name}"") .vectors_config(VectorParamsBuilder::new(1536, Distance::Cosine)) .quantization_config(BinaryQuantizationBuilder::new(true)), ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.BinaryQuantization; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.QuantizationConfig; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName(""{collection_name}"") .setVectorsConfig( VectorsConfig.newBuilder() .setParams( VectorParams.newBuilder() .setSize(1536) .setDistance(Distance.Cosine) .build()) .build()) .setQuantizationConfig( QuantizationConfig.newBuilder() .setBinary(BinaryQuantization.newBuilder().setAlwaysRam(true).build()) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.CreateCollectionAsync( collectionName: ""{collection_name}"", vectorsConfig: new VectorParams { Size = 1536, Distance = Distance.Cosine }, quantizationConfig: new QuantizationConfig { Binary = new BinaryQuantization { AlwaysRam = true } } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.CreateCollection(context.Background(), &qdrant.CreateCollection{ CollectionName: ""{collection_name}"", VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{ Size: 1536, Distance: qdrant.Distance_Cosine, }), QuantizationConfig: qdrant.NewQuantizationBinary( &qdrant.BinaryQuantization{ AlwaysRam: qdrant.PtrOf(true), }, ), }) ``` `always_ram` - whether to keep quantized vectors always cached in RAM or not. By default, quantized vectors are loaded in the same way as the original vectors. However, in some setups you might want to keep quantized vectors in RAM to speed up the search process. In this case, you can set `always_ram` to `true` to store quantized vectors in RAM. ### Setting up Product Quantization To enable product quantization, you need to specify the quantization parameters in the `quantization_config` section of the collection configuration. ```http PUT /collections/{collection_name} { ""vectors"": { ""size"": 768, ""distance"": ""Cosine"" }, ""quantization_config"": { ""product"": { ""compression"": ""x16"", ""always_ram"": true } } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.create_collection( collection_name=""{collection_name}"", vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE), quantization_config=models.ProductQuantization( product=models.ProductQuantizationConfig( compression=models.CompressionRatio.X16, always_ram=True, ), ), ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.createCollection(""{collection_name}"", { vectors: { size: 768, distance: ""Cosine"", }, quantization_config: { product: { compression: ""x16"", always_ram: true, }, }, }); ``` ```rust use qdrant_client::qdrant::{ CompressionRatio, CreateCollectionBuilder, Distance, ProductQuantizationBuilder, VectorParamsBuilder, }; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .create_collection( CreateCollectionBuilder::new(""{collection_name}"") .vectors_config(VectorParamsBuilder::new(768, Distance::Cosine)) .quantization_config( ProductQuantizationBuilder::new(CompressionRatio::X16.into()).always_ram(true), ), ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CompressionRatio; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.ProductQuantization; import io.qdrant.client.grpc.Collections.QuantizationConfig; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName(""{collection_name}"") .setVectorsConfig( VectorsConfig.newBuilder() .setParams( VectorParams.newBuilder() .setSize(768) .setDistance(Distance.Cosine) .build()) .build()) .setQuantizationConfig( QuantizationConfig.newBuilder() .setProduct( ProductQuantization.newBuilder() .setCompression(CompressionRatio.x16) .setAlwaysRam(true) .build()) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.CreateCollectionAsync( collectionName: ""{collection_name}"", vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine }, quantizationConfig: new QuantizationConfig { Product = new ProductQuantization { Compression = CompressionRatio.X16, AlwaysRam = true } } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.CreateCollection(context.Background(), &qdrant.CreateCollection{ CollectionName: ""{collection_name}"", VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{ Size: 768, Distance: qdrant.Distance_Cosine, }), QuantizationConfig: qdrant.NewQuantizationProduct( &qdrant.ProductQuantization{ Compression: qdrant.CompressionRatio_x16, AlwaysRam: qdrant.PtrOf(true), }, ), }) ``` There are two parameters that you can specify in the `quantization_config` section: `compression` - compression ratio. Compression ratio represents the size of the quantized vector in bytes divided by the size of the original vector in bytes. In this case, the quantized vector will be 16 times smaller than the original vector. `always_ram` - whether to keep quantized vectors always cached in RAM or not. By default, quantized vectors are loaded in the same way as the original vectors. However, in some setups you might want to keep quantized vectors in RAM to speed up the search process. Then set `always_ram` to `true`. ### Searching with Quantization Once you have configured quantization for a collection, you don't need to do anything extra to search with quantization. Qdrant will automatically use quantized vectors if they are available. However, there are a few options that you can use to control the search process: ```http POST /collections/{collection_name}/points/query { ""query"": [0.2, 0.1, 0.9, 0.7], ""params"": { ""quantization"": { ""ignore"": false, ""rescore"": true, ""oversampling"": 2.0 } }, ""limit"": 10 } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.query_points( collection_name=""{collection_name}"", query=[0.2, 0.1, 0.9, 0.7], search_params=models.SearchParams( quantization=models.QuantizationSearchParams( ignore=False, rescore=True, oversampling=2.0, ) ), ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.query(""{collection_name}"", { query: [0.2, 0.1, 0.9, 0.7], params: { quantization: { ignore: false, rescore: true, oversampling: 2.0, }, }, limit: 10, }); ``` ```rust use qdrant_client::qdrant::{ QuantizationSearchParamsBuilder, QueryPointsBuilder, SearchParamsBuilder, }; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .query( QueryPointsBuilder::new(""{collection_name}"") .query(vec![0.2, 0.1, 0.9, 0.7]) .limit(10) .params( SearchParamsBuilder::default().quantization( QuantizationSearchParamsBuilder::default() .ignore(false) .rescore(true) .oversampling(2.0), ), ), ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.QuantizationSearchParams; import io.qdrant.client.grpc.Points.QueryPoints; import io.qdrant.client.grpc.Points.SearchParams; import static io.qdrant.client.QueryFactory.nearest; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client.queryAsync( QueryPoints.newBuilder() .setCollectionName(""{collection_name}"") .setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f)) .setParams( SearchParams.newBuilder() .setQuantization( QuantizationSearchParams.newBuilder() .setIgnore(false) .setRescore(true) .setOversampling(2.0) .build()) .build()) .setLimit(10) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.QueryAsync( collectionName: ""{collection_name}"", query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f }, searchParams: new SearchParams { Quantization = new QuantizationSearchParams { Ignore = false, Rescore = true, Oversampling = 2.0 } }, limit: 10 ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Query(context.Background(), &qdrant.QueryPoints{ CollectionName: ""{collection_name}"", Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7), Params: &qdrant.SearchParams{ Quantization: &qdrant.QuantizationSearchParams{ Ignore: qdrant.PtrOf(false), Rescore: qdrant.PtrOf(true), Oversampling: qdrant.PtrOf(2.0), }, }, }) ``` `ignore` - Toggle whether to ignore quantized vectors during the search process. By default, Qdrant will use quantized vectors if they are available. `rescore` - Having the original vectors available, Qdrant can re-evaluate top-k search results using the original vectors. This can improve the search quality, but may slightly decrease the search speed, compared to the search without rescore. It is recommended to disable rescore only if the original vectors are stored on a slow storage (e.g. HDD or network storage). By default, rescore is enabled. **Available as of v1.3.0** `oversampling` - Defines how many extra vectors should be pre-selected using quantized index, and then re-scored using original vectors. For example, if oversampling is 2.4 and limit is 100, then 240 vectors will be pre-selected using quantized index, and then top-100 will be returned after re-scoring. Oversampling is useful if you want to tune the tradeoff between search speed and search quality in the query time. ## Quantization tips #### Accuracy tuning In this section, we will discuss how to tune the search precision. The fastest way to understand the impact of quantization on the search quality is to compare the search results with and without quantization. In order to disable quantization, you can set `ignore` to `true` in the search request: ```http POST /collections/{collection_name}/points/query { ""query"": [0.2, 0.1, 0.9, 0.7], ""params"": { ""quantization"": { ""ignore"": true } }, ""limit"": 10 } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.query_points( collection_name=""{collection_name}"", query=[0.2, 0.1, 0.9, 0.7], search_params=models.SearchParams( quantization=models.QuantizationSearchParams( ignore=True, ) ), ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.query(""{collection_name}"", { query: [0.2, 0.1, 0.9, 0.7], params: { quantization: { ignore: true, }, }, }); ``` ```rust use qdrant_client::qdrant::{ QuantizationSearchParamsBuilder, QueryPointsBuilder, SearchParamsBuilder, }; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .query( QueryPointsBuilder::new(""{collection_name}"") .query(vec![0.2, 0.1, 0.9, 0.7]) .limit(3) .params( SearchParamsBuilder::default() .quantization(QuantizationSearchParamsBuilder::default().ignore(true)), ), ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.QuantizationSearchParams; import io.qdrant.client.grpc.Points.QueryPoints; import io.qdrant.client.grpc.Points.SearchParams; import static io.qdrant.client.QueryFactory.nearest; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client.queryAsync( QueryPoints.newBuilder() .setCollectionName(""{collection_name}"") .setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f)) .setParams( SearchParams.newBuilder() .setQuantization( QuantizationSearchParams.newBuilder().setIgnore(true).build()) .build()) .setLimit(10) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.QueryAsync( collectionName: ""{collection_name}"", query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f }, searchParams: new SearchParams { Quantization = new QuantizationSearchParams { Ignore = true } }, limit: 10 ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Query(context.Background(), &qdrant.QueryPoints{ CollectionName: ""{collection_name}"", Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7), Params: &qdrant.SearchParams{ Quantization: &qdrant.QuantizationSearchParams{ Ignore: qdrant.PtrOf(false), }, }, }) ``` - **Adjust the quantile parameter**: The quantile parameter in scalar quantization determines the quantization bounds. By setting it to a value lower than 1.0, you can exclude extreme values (outliers) from the quantization bounds. For example, if you set the quantile to 0.99, 1% of the extreme values will be excluded. By adjusting the quantile, you find an optimal value that will provide the best search quality for your collection. - **Enable rescore**: Having the original vectors available, Qdrant can re-evaluate top-k search results using the original vectors. On large collections, this can improve the search quality, with just minor performance impact. #### Memory and speed tuning In this section, we will discuss how to tune the memory and speed of the search process with quantization. There are 3 possible modes to place storage of vectors within the qdrant collection: - **All in RAM** - all vector, original and quantized, are loaded and kept in RAM. This is the fastest mode, but requires a lot of RAM. Enabled by default. - **Original on Disk, quantized in RAM** - this is a hybrid mode, allows to obtain a good balance between speed and memory usage. Recommended scenario if you are aiming to shrink the memory footprint while keeping the search speed. This mode is enabled by setting `always_ram` to `true` in the quantization config while using memmap storage: ```http PUT /collections/{collection_name} { ""vectors"": { ""size"": 768, ""distance"": ""Cosine"", ""on_disk"": true }, ""quantization_config"": { ""scalar"": { ""type"": ""int8"", ""always_ram"": true } } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.create_collection( collection_name=""{collection_name}"", vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE, on_disk=True), quantization_config=models.ScalarQuantization( scalar=models.ScalarQuantizationConfig( type=models.ScalarType.INT8, always_ram=True, ), ), ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.createCollection(""{collection_name}"", { vectors: { size: 768, distance: ""Cosine"", on_disk: true, }, quantization_config: { scalar: { type: ""int8"", always_ram: true, }, }, }); ``` ```rust use qdrant_client::qdrant::{ CreateCollectionBuilder, Distance, QuantizationType, ScalarQuantizationBuilder, VectorParamsBuilder, }; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .create_collection( CreateCollectionBuilder::new(""{collection_name}"") .vectors_config(VectorParamsBuilder::new(768, Distance::Cosine).on_disk(true)) .quantization_config( ScalarQuantizationBuilder::default() .r#type(QuantizationType::Int8.into()) .always_ram(true), ), ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.OptimizersConfigDiff; import io.qdrant.client.grpc.Collections.QuantizationConfig; import io.qdrant.client.grpc.Collections.QuantizationType; import io.qdrant.client.grpc.Collections.ScalarQuantization; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName(""{collection_name}"") .setVectorsConfig( VectorsConfig.newBuilder() .setParams( VectorParams.newBuilder() .setSize(768) .setDistance(Distance.Cosine) .setOnDisk(true) .build()) .build()) .setQuantizationConfig( QuantizationConfig.newBuilder() .setScalar( ScalarQuantization.newBuilder() .setType(QuantizationType.Int8) .setAlwaysRam(true) .build()) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.CreateCollectionAsync( collectionName: ""{collection_name}"", vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine, OnDisk = true}, quantizationConfig: new QuantizationConfig { Scalar = new ScalarQuantization { Type = QuantizationType.Int8, AlwaysRam = true } } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.CreateCollection(context.Background(), &qdrant.CreateCollection{ CollectionName: ""{collection_name}"", VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{ Size: 768, Distance: qdrant.Distance_Cosine, OnDisk: qdrant.PtrOf(true), }), QuantizationConfig: qdrant.NewQuantizationScalar( &qdrant.ScalarQuantization{ Type: qdrant.QuantizationType_Int8, AlwaysRam: qdrant.PtrOf(true), }, ), }) ``` In this scenario, the number of disk reads may play a significant role in the search speed. In a system with high disk latency, the re-scoring step may become a bottleneck. Consider disabling `rescore` to improve the search speed: ```http POST /collections/{collection_name}/points/query { ""query"": [0.2, 0.1, 0.9, 0.7], ""params"": { ""quantization"": { ""rescore"": false } }, ""limit"": 10 } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.query_points( collection_name=""{collection_name}"", query=[0.2, 0.1, 0.9, 0.7], search_params=models.SearchParams( quantization=models.QuantizationSearchParams(rescore=False) ), ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.query(""{collection_name}"", { query: [0.2, 0.1, 0.9, 0.7], params: { quantization: { rescore: false, }, }, }); ``` ```rust use qdrant_client::qdrant::{ QuantizationSearchParamsBuilder, QueryPointsBuilder, SearchParamsBuilder, }; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .query( QueryPointsBuilder::new(""{collection_name}"") .query(vec![0.2, 0.1, 0.9, 0.7]) .limit(3) .params( SearchParamsBuilder::default() .quantization(QuantizationSearchParamsBuilder::default().rescore(false)), ), ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.QuantizationSearchParams; import io.qdrant.client.grpc.Points.QueryPoints; import io.qdrant.client.grpc.Points.SearchParams; import static io.qdrant.client.QueryFactory.nearest; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client.queryAsync( QueryPoints.newBuilder() .setCollectionName(""{collection_name}"") .setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f)) .setParams( SearchParams.newBuilder() .setQuantization( QuantizationSearchParams.newBuilder().setRescore(false).build()) .build()) .setLimit(3) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.QueryAsync( collectionName: ""{collection_name}"", query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f }, searchParams: new SearchParams { Quantization = new QuantizationSearchParams { Rescore = false } }, limit: 3 ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Query(context.Background(), &qdrant.QueryPoints{ CollectionName: ""{collection_name}"", Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7), Params: &qdrant.SearchParams{ Quantization: &qdrant.QuantizationSearchParams{ Rescore: qdrant.PtrOf(false), }, }, }) ``` - **All on Disk** - all vectors, original and quantized, are stored on disk. This mode allows to achieve the smallest memory footprint, but at the cost of the search speed. It is recommended to use this mode if you have a large collection and fast storage (e.g. SSD or NVMe). This mode is enabled by setting `always_ram` to `false` in the quantization config while using mmap storage: ```http PUT /collections/{collection_name} { ""vectors"": { ""size"": 768, ""distance"": ""Cosine"", ""on_disk"": true }, ""quantization_config"": { ""scalar"": { ""type"": ""int8"", ""always_ram"": false } } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.create_collection( collection_name=""{collection_name}"", vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE, on_disk=True), quantization_config=models.ScalarQuantization( scalar=models.ScalarQuantizationConfig( type=models.ScalarType.INT8, always_ram=False, ), ), ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.createCollection(""{collection_name}"", { vectors: { size: 768, distance: ""Cosine"", on_disk: true, }, quantization_config: { scalar: { type: ""int8"", always_ram: false, }, }, }); ``` ```rust use qdrant_client::qdrant::{ CreateCollectionBuilder, Distance, QuantizationType, ScalarQuantizationBuilder, VectorParamsBuilder, }; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .create_collection( CreateCollectionBuilder::new(""{collection_name}"") .vectors_config(VectorParamsBuilder::new(768, Distance::Cosine).on_disk(true)) .quantization_config( ScalarQuantizationBuilder::default() .r#type(QuantizationType::Int8.into()) .always_ram(false), ), ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.OptimizersConfigDiff; import io.qdrant.client.grpc.Collections.QuantizationConfig; import io.qdrant.client.grpc.Collections.QuantizationType; import io.qdrant.client.grpc.Collections.ScalarQuantization; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName(""{collection_name}"") .setVectorsConfig( VectorsConfig.newBuilder() .setParams( VectorParams.newBuilder() .setSize(768) .setDistance(Distance.Cosine) .setOnDisk(true) .build()) .build()) .setQuantizationConfig( QuantizationConfig.newBuilder() .setScalar( ScalarQuantization.newBuilder() .setType(QuantizationType.Int8) .setAlwaysRam(false) .build()) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.CreateCollectionAsync( collectionName: ""{collection_name}"", vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine, OnDisk = true}, quantizationConfig: new QuantizationConfig { Scalar = new ScalarQuantization { Type = QuantizationType.Int8, AlwaysRam = false } } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.CreateCollection(context.Background(), &qdrant.CreateCollection{ CollectionName: ""{collection_name}"", VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{ Size: 768, Distance: qdrant.Distance_Cosine, OnDisk: qdrant.PtrOf(true), }), QuantizationConfig: qdrant.NewQuantizationScalar( &qdrant.ScalarQuantization{ Type: qdrant.QuantizationType_Int8, AlwaysRam: qdrant.PtrOf(false), }, ), }) ``` ",documentation/guides/quantization.md "--- title: Monitoring weight: 155 aliases: - ../monitoring --- # Monitoring Qdrant exposes its metrics in [Prometheus](https://prometheus.io/docs/instrumenting/exposition_formats/#text-based-format)/[OpenMetrics](https://github.com/OpenObservability/OpenMetrics) format, so you can integrate them easily with the compatible tools and monitor Qdrant with your own monitoring system. You can use the `/metrics` endpoint and configure it as a scrape target. Metrics endpoint: The integration with Qdrant is easy to [configure](https://prometheus.io/docs/prometheus/latest/getting_started/#configure-prometheus-to-monitor-the-sample-targets) with Prometheus and Grafana. ## Monitoring multi-node clusters When scraping metrics from multi-node Qdrant clusters, it is important to scrape from each node individually instead of using a load-balanced URL. Otherwise, your metrics will appear inconsistent after each scrape. ## Monitoring in Qdrant Cloud To scrape metrics from a Qdrant cluster running in Qdrant Cloud, note that an [API key](/documentation/cloud/authentication/) is required to access `/metrics`. Qdrant Cloud also supports supplying the API key as a [Bearer token](https://www.rfc-editor.org/rfc/rfc6750.html), which may be required by some providers. ## Exposed metrics Each Qdrant server will expose the following metrics. | Name | Type | Meaning | |-------------------------------------|---------|---------------------------------------------------| | app_info | gauge | Information about Qdrant server | | app_status_recovery_mode | gauge | If Qdrant is currently started in recovery mode | | collections_total | gauge | Number of collections | | collections_vector_total | gauge | Total number of vectors in all collections | | collections_full_total | gauge | Number of full collections | | collections_aggregated_total | gauge | Number of aggregated collections | | rest_responses_total | counter | Total number of responses through REST API | | rest_responses_fail_total | counter | Total number of failed responses through REST API | | rest_responses_avg_duration_seconds | gauge | Average response duration in REST API | | rest_responses_min_duration_seconds | gauge | Minimum response duration in REST API | | rest_responses_max_duration_seconds | gauge | Maximum response duration in REST API | | grpc_responses_total | counter | Total number of responses through gRPC API | | grpc_responses_fail_total | counter | Total number of failed responses through REST API | | grpc_responses_avg_duration_seconds | gauge | Average response duration in gRPC API | | grpc_responses_min_duration_seconds | gauge | Minimum response duration in gRPC API | | grpc_responses_max_duration_seconds | gauge | Maximum response duration in gRPC API | | cluster_enabled | gauge | Whether the cluster support is enabled. 1 - YES | ### Cluster-related metrics There are also some metrics which are exposed in distributed mode only. | Name | Type | Meaning | | -------------------------------- | ------- | ---------------------------------------------------------------------- | | cluster_peers_total | gauge | Total number of cluster peers | | cluster_term | counter | Current cluster term | | cluster_commit | counter | Index of last committed (finalized) operation cluster peer is aware of | | cluster_pending_operations_total | gauge | Total number of pending operations for cluster peer | | cluster_voter | gauge | Whether the cluster peer is a voter or learner. 1 - VOTER | ## Kubernetes health endpoints *Available as of v1.5.0* Qdrant exposes three endpoints, namely [`/healthz`](http://localhost:6333/healthz), [`/livez`](http://localhost:6333/livez) and [`/readyz`](http://localhost:6333/readyz), to indicate the current status of the Qdrant server. These currently provide the most basic status response, returning HTTP 200 if Qdrant is started and ready to be used. Regardless of whether an [API key](../security/#authentication) is configured, the endpoints are always accessible. You can read more about Kubernetes health endpoints [here](https://kubernetes.io/docs/reference/using-api/health-checks/). ",documentation/guides/monitoring.md "--- title: Guides weight: 12 # If the index.md file is empty, the link to the section will be hidden from the sidebar is_empty: true ---",documentation/guides/_index.md "--- title: Security weight: 165 aliases: - ../security --- # Security Please read this page carefully. Although there are various ways to secure your Qdrant instances, **they are unsecured by default**. You need to enable security measures before production use. Otherwise, they are completely open to anyone ## Authentication *Available as of v1.2.0* Qdrant supports a simple form of client authentication using a static API key. This can be used to secure your instance. To enable API key based authentication in your own Qdrant instance you must specify a key in the configuration: ```yaml service: # Set an api-key. # If set, all requests must include a header with the api-key. # example header: `api-key: ` # # If you enable this you should also enable TLS. # (Either above or via an external service like nginx.) # Sending an api-key over an unencrypted channel is insecure. api_key: your_secret_api_key_here ``` Or alternatively, you can use the environment variable: ```bash export QDRANT__SERVICE__API_KEY=your_secret_api_key_here ``` For using API key based authentication in Qdrant Cloud see the cloud [Authentication](/documentation/cloud/authentication/) section. The API key then needs to be present in all REST or gRPC requests to your instance. All official Qdrant clients for Python, Go, Rust, .NET and Java support the API key parameter. ```bash curl \ -X GET https://localhost:6333 \ --header 'api-key: your_secret_api_key_here' ``` ```python from qdrant_client import QdrantClient client = QdrantClient( url=""https://localhost:6333"", api_key=""your_secret_api_key_here"", ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ url: ""http://localhost"", port: 6333, apiKey: ""your_secret_api_key_here"", }); ``` ```rust use qdrant_client::Qdrant; let client = Qdrant::from_url(""https://xyz-example.eu-central.aws.cloud.qdrant.io:6334"") .api_key("""") .build()?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; QdrantClient client = new QdrantClient( QdrantGrpcClient.newBuilder( ""xyz-example.eu-central.aws.cloud.qdrant.io"", 6334, true) .withApiKey("""") .build()); ``` ```csharp using Qdrant.Client; var client = new QdrantClient( host: ""xyz-example.eu-central.aws.cloud.qdrant.io"", https: true, apiKey: """" ); ``` ```go import ""github.com/qdrant/go-client/qdrant"" client, err := qdrant.NewClient(&qdrant.Config{ Host: ""xyz-example.eu-central.aws.cloud.qdrant.io"", Port: 6334, APIKey: """", UseTLS: true, }) ``` ### Read-only API key *Available as of v1.7.0* In addition to the regular API key, Qdrant also supports a read-only API key. This key can be used to access read-only operations on the instance. ```yaml service: read_only_api_key: your_secret_read_only_api_key_here ``` Or with the environment variable: ```bash export QDRANT__SERVICE__READ_ONLY_API_KEY=your_secret_read_only_api_key_here ``` Both API keys can be used simultaneously. ### Granular access control with JWT *Available as of v1.9.0* For more complex cases, Qdrant supports granular access control with [JSON Web Tokens (JWT)](https://jwt.io/). This allows you to have tokens, which allow restricited access to a specific parts of the stored data and build [Role-based access control (RBAC)](https://en.wikipedia.org/wiki/Role-based_access_control) on top of that. In this way, you can define permissions for users and restrict access to sensitive endpoints. To enable JWT-based authentication in your own Qdrant instance you need to specify the `api-key` and enable the `jwt_rbac` feature in the configuration: ```yaml service: api_key: you_secret_api_key_here jwt_rbac: true ``` Or with the environment variables: ```bash export QDRANT__SERVICE__API_KEY=your_secret_api_key_here export QDRANT__SERVICE__JWT_RBAC=true ``` The `api_key` you set in the configuration will be used to encode and decode the JWTs, so –needless to say– keep it secure. If your `api_key` changes, all existing tokens will be invalid. To use JWT-based authentication, you need to provide it as a bearer token in the `Authorization` header, or as an key in the `Api-Key` header of your requests. ```http Authorization: Bearer // or Api-Key: ``` ```python from qdrant_client import QdrantClient qdrant_client = QdrantClient( ""xyz-example.eu-central.aws.cloud.qdrant.io"", api_key="""", ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""xyz-example.eu-central.aws.cloud.qdrant.io"", apiKey: """", }); ``` ```rust use qdrant_client::Qdrant; let client = Qdrant::from_url(""https://xyz-example.eu-central.aws.cloud.qdrant.io:6334"") .api_key("""") .build()?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; QdrantClient client = new QdrantClient( QdrantGrpcClient.newBuilder( ""xyz-example.eu-central.aws.cloud.qdrant.io"", 6334, true) .withApiKey("""") .build()); ``` ```csharp using Qdrant.Client; var client = new QdrantClient( host: ""xyz-example.eu-central.aws.cloud.qdrant.io"", https: true, apiKey: """" ); ``` ```go import ""github.com/qdrant/go-client/qdrant"" client, err := qdrant.NewClient(&qdrant.Config{ Host: ""xyz-example.eu-central.aws.cloud.qdrant.io"", Port: 6334, APIKey: """", UseTLS: true, }) ``` #### Generating JSON Web Tokens Due to the nature of JWT, anyone who knows the `api_key` can generate tokens by using any of the existing libraries and tools, it is not necessary for them to have access to the Qdrant instance to generate them. For convenience, we have added a JWT generation tool the Qdrant Web UI under the 🔑 tab, if you're using the default url, it will be at `http://localhost:6333/dashboard#/jwt`. - **JWT Header** - Qdrant uses the `HS256` algorithm to decode the tokens. ```json { ""alg"": ""HS256"", ""typ"": ""JWT"" } ``` - **JWT Payload** - You can include any combination of the [parameters available](#jwt-configuration) in the payload. Keep reading for more info on each one. ```json { ""exp"": 1640995200, // Expiration time ""value_exists"": ..., // Validate this token by looking for a point with a payload value ""access"": ""r"", // Define the access level. } ``` **Signing the token** - To confirm that the generated token is valid, it needs to be signed with the `api_key` you have set in the configuration. That would mean, that someone who knows the `api_key` gives the authorization for the new token to be used in the Qdrant instance. Qdrant can validate the signature, because it knows the `api_key` and can decode the token. The process of token generation can be done on the client side offline, and doesn't require any communication with the Qdrant instance. Here is an example of libraries that can be used to generate JWT tokens: - Python: [PyJWT](https://pyjwt.readthedocs.io/en/stable/) - JavaScript: [jsonwebtoken](https://www.npmjs.com/package/jsonwebtoken) - Rust: [jsonwebtoken](https://crates.io/crates/jsonwebtoken) #### JWT Configuration These are the available options, or **claims** in the JWT lingo. You can use them in the JWT payload to define its functionality. - **`exp`** - The expiration time of the token. This is a Unix timestamp in seconds. The token will be invalid after this time. The check for this claim includes a 30-second leeway to account for clock skew. ```json { ""exp"": 1640995200, // Expiration time } ``` - **`value_exists`** - This is a claim that can be used to validate the token against the data stored in a collection. Structure of this claim is as follows: ```json { ""value_exists"": { ""collection"": ""my_validation_collection"", ""matches"": [ { ""key"": ""my_key"", ""value"": ""value_that_must_exist"" } ], }, } ``` If this claim is present, Qdrant will check if there is a point in the collection with the specified key-values. If it does, the token is valid. This claim is especially useful if you want to have an ability to revoke tokens without changing the `api_key`. Consider a case where you have a collection of users, and you want to revoke access to a specific user. ```json { ""value_exists"": { ""collection"": ""users"", ""matches"": [ { ""key"": ""user_id"", ""value"": ""andrey"" }, { ""key"": ""role"", ""value"": ""manager"" } ], }, } ``` You can create a token with this claim, and when you want to revoke access, you can change the `role` of the user to something else, and the token will be invalid. - **`access`** - This claim defines the [access level](#table-of-access) of the token. If this claim is present, Qdrant will check if the token has the required access level to perform the operation. If this claim is **not** present, **manage** access is assumed. It can provide global access with `r` for read-only, or `m` for manage. For example: ```json { ""access"": ""r"" } ``` It can also be specific to one or more collections. The `access` level for each collection is `r` for read-only, or `rw` for read-write, like this: ```json { ""access"": [ { ""collection"": ""my_collection"", ""access"": ""rw"" } ] } ``` You can also specify which subset of the collection the user is able to access by specifying a `payload` restriction that the points must have. ```json { ""access"": [ { ""collection"": ""my_collection"", ""access"": ""r"", ""payload"": { ""user_id"": ""user_123456"" } } ] } ``` This `payload` claim will be used to implicitly filter the points in the collection. It will be equivalent to appending this filter to each request: ```json { ""filter"": { ""must"": [{ ""key"": ""user_id"", ""match"": { ""value"": ""user_123456"" } }] } } ``` ### Table of access Check out this table to see which actions are allowed or denied based on the access level. This is also applicable to using api keys instead of tokens. In that case, `api_key` maps to **manage**, while `read_only_api_key` maps to **read-only**.
Symbols: ✅ Allowed | ❌ Denied | 🟡 Allowed, but filtered
| Action | manage | read-only | collection read-write | collection read-only | collection with payload claim (r / rw) | |--------|--------|-----------|----------------------|-----------------------|------------------------------------| | list collections | ✅ | ✅ | 🟡 | 🟡 | 🟡 | | get collection info | ✅ | ✅ | ✅ | ✅ | ❌ | | create collection | ✅ | ❌ | ❌ | ❌ | ❌ | | delete collection | ✅ | ❌ | ❌ | ❌ | ❌ | | update collection params | ✅ | ❌ | ❌ | ❌ | ❌ | | get collection cluster info | ✅ | ✅ | ✅ | ✅ | ❌ | | collection exists | ✅ | ✅ | ✅ | ✅ | ✅ | | update collection cluster setup | ✅ | ❌ | ❌ | ❌ | ❌ | | update aliases | ✅ | ❌ | ❌ | ❌ | ❌ | | list collection aliases | ✅ | ✅ | 🟡 | 🟡 | 🟡 | | list aliases | ✅ | ✅ | 🟡 | 🟡 | 🟡 | | create shard key | ✅ | ❌ | ❌ | ❌ | ❌ | | delete shard key | ✅ | ❌ | ❌ | ❌ | ❌ | | create payload index | ✅ | ❌ | ✅ | ❌ | ❌ | | delete payload index | ✅ | ❌ | ✅ | ❌ | ❌ | | list collection snapshots | ✅ | ✅ | ✅ | ✅ | ❌ | | create collection snapshot | ✅ | ❌ | ✅ | ❌ | ❌ | | delete collection snapshot | ✅ | ❌ | ✅ | ❌ | ❌ | | download collection snapshot | ✅ | ✅ | ✅ | ✅ | ❌ | | upload collection snapshot | ✅ | ❌ | ❌ | ❌ | ❌ | | recover collection snapshot | ✅ | ❌ | ❌ | ❌ | ❌ | | list shard snapshots | ✅ | ✅ | ✅ | ✅ | ❌ | | create shard snapshot | ✅ | ❌ | ✅ | ❌ | ❌ | | delete shard snapshot | ✅ | ❌ | ✅ | ❌ | ❌ | | download shard snapshot | ✅ | ✅ | ✅ | ✅ | ❌ | | upload shard snapshot | ✅ | ❌ | ❌ | ❌ | ❌ | | recover shard snapshot | ✅ | ❌ | ❌ | ❌ | ❌ | | list full snapshots | ✅ | ✅ | ❌ | ❌ | ❌ | | create full snapshot | ✅ | ❌ | ❌ | ❌ | ❌ | | delete full snapshot | ✅ | ❌ | ❌ | ❌ | ❌ | | download full snapshot | ✅ | ✅ | ❌ | ❌ | ❌ | | get cluster info | ✅ | ✅ | ❌ | ❌ | ❌ | | recover raft state | ✅ | ❌ | ❌ | ❌ | ❌ | | delete peer | ✅ | ❌ | ❌ | ❌ | ❌ | | get point | ✅ | ✅ | ✅ | ✅ | ❌ | | get points | ✅ | ✅ | ✅ | ✅ | ❌ | | upsert points | ✅ | ❌ | ✅ | ❌ | ❌ | | update points batch | ✅ | ❌ | ✅ | ❌ | ❌ | | delete points | ✅ | ❌ | ✅ | ❌ | ❌ / 🟡 | | update vectors | ✅ | ❌ | ✅ | ❌ | ❌ | | delete vectors | ✅ | ❌ | ✅ | ❌ | ❌ / 🟡 | | set payload | ✅ | ❌ | ✅ | ❌ | ❌ | | overwrite payload | ✅ | ❌ | ✅ | ❌ | ❌ | | delete payload | ✅ | ❌ | ✅ | ❌ | ❌ | | clear payload | ✅ | ❌ | ✅ | ❌ | ❌ | | scroll points | ✅ | ✅ | ✅ | ✅ | 🟡 | | query points | ✅ | ✅ | ✅ | ✅ | 🟡 | | search points | ✅ | ✅ | ✅ | ✅ | 🟡 | | search groups | ✅ | ✅ | ✅ | ✅ | 🟡 | | recommend points | ✅ | ✅ | ✅ | ✅ | ❌ | | recommend groups | ✅ | ✅ | ✅ | ✅ | ❌ | | discover points | ✅ | ✅ | ✅ | ✅ | ❌ | | count points | ✅ | ✅ | ✅ | ✅ | 🟡 | | version | ✅ | ✅ | ✅ | ✅ | ✅ | | readyz, healthz, livez | ✅ | ✅ | ✅ | ✅ | ✅ | | telemetry | ✅ | ✅ | ❌ | ❌ | ❌ | | metrics | ✅ | ✅ | ❌ | ❌ | ❌ | | update locks | ✅ | ❌ | ❌ | ❌ | ❌ | | get locks | ✅ | ✅ | ❌ | ❌ | ❌ | ## TLS *Available as of v1.2.0* TLS for encrypted connections can be enabled on your Qdrant instance to secure connections. First make sure you have a certificate and private key for TLS, usually in `.pem` format. On your local machine you may use [mkcert](https://github.com/FiloSottile/mkcert#readme) to generate a self signed certificate. To enable TLS, set the following properties in the Qdrant configuration with the correct paths and restart: ```yaml service: # Enable HTTPS for the REST and gRPC API enable_tls: true # TLS configuration. # Required if either service.enable_tls or cluster.p2p.enable_tls is true. tls: # Server certificate chain file cert: ./tls/cert.pem # Server private key file key: ./tls/key.pem ``` For internal communication when running cluster mode, TLS can be enabled with: ```yaml cluster: # Configuration of the inter-cluster communication p2p: # Use TLS for communication between peers enable_tls: true ``` With TLS enabled, you must start using HTTPS connections. For example: ```bash curl -X GET https://localhost:6333 ``` ```python from qdrant_client import QdrantClient client = QdrantClient( url=""https://localhost:6333"", ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ url: ""https://localhost"", port: 6333 }); ``` ```rust use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; ``` Certificate rotation is enabled with a default refresh time of one hour. This reloads certificate files every hour while Qdrant is running. This way changed certificates are picked up when they get updated externally. The refresh time can be tuned by changing the `tls.cert_ttl` setting. You can leave this on, even if you don't plan to update your certificates. Currently this is only supported for the REST API. Optionally, you can enable client certificate validation on the server against a local certificate authority. Set the following properties and restart: ```yaml service: # Check user HTTPS client certificate against CA file specified in tls config verify_https_client_certificate: false # TLS configuration. # Required if either service.enable_tls or cluster.p2p.enable_tls is true. tls: # Certificate authority certificate file. # This certificate will be used to validate the certificates # presented by other nodes during inter-cluster communication. # # If verify_https_client_certificate is true, it will verify # HTTPS client certificate # # Required if cluster.p2p.enable_tls is true. ca_cert: ./tls/cacert.pem ``` ## Hardening We recommend reducing the amount of permissions granted to Qdrant containers so that you can reduce the risk of exploitation. Here are some ways to reduce the permissions of a Qdrant container: * Run Qdrant as a non-root user. This can help mitigate the risk of future container breakout vulnerabilities. Qdrant does not need the privileges of the root user for any purpose. - You can use the image `qdrant/qdrant:-unprivileged` instead of the default Qdrant image. - You can use the flag `--user=1000:2000` when running [`docker run`](https://docs.docker.com/reference/cli/docker/container/run/). - You can set [`user: 1000`](https://docs.docker.com/compose/compose-file/05-services/#user) when using Docker Compose. - You can set [`runAsUser: 1000`](https://kubernetes.io/docs/tasks/configure-pod-container/security-context) when running in Kubernetes (our [Helm chart](https://github.com/qdrant/qdrant-helm) does this by default). * Run Qdrant with a read-only root filesystem. This can help mitigate vulnerabilities that require the ability to modify system files, which is a permission Qdrant does not need. As long as the container uses mounted volumes for storage (`/qdrant/storage` and `/qdrant/snapshots` by default), Qdrant can continue to operate while being prevented from writing data outside of those volumes. - You can use the flag `--read-only` when running [`docker run`](https://docs.docker.com/reference/cli/docker/container/run/). - You can set [`read_only: true`](https://docs.docker.com/compose/compose-file/05-services/#read_only) when using Docker Compose. - You can set [`readOnlyRootFilesystem: true`](https://kubernetes.io/docs/tasks/configure-pod-container/security-context) when running in Kubernetes (our [Helm chart](https://github.com/qdrant/qdrant-helm) does this by default). * Block Qdrant's external network access. This can help mitigate [server side request forgery attacks](https://owasp.org/www-community/attacks/Server_Side_Request_Forgery), like via the [snapshot recovery API](https://api.qdrant.tech/api-reference/snapshots/recover-from-snapshot). Single-node Qdrant clusters do not require any outbound network access. Multi-node Qdrant clusters only need the ability to connect to other Qdrant nodes via TCP ports 6333, 6334, and 6335. - You can use [`docker network create --internal `](https://docs.docker.com/reference/cli/docker/network/create/#internal) and use that network when running [`docker run --network `](https://docs.docker.com/reference/cli/docker/container/run/#network). - You can create an [internal network](https://docs.docker.com/compose/compose-file/06-networks/#internal) when using Docker Compose. - You can create a [NetworkPolicy](https://kubernetes.io/docs/concepts/services-networking/network-policies/) when using Kubernetes. Note that multi-node Qdrant clusters [will also need access to cluster DNS in Kubernetes](https://github.com/ahmetb/kubernetes-network-policy-recipes/blob/master/11-deny-egress-traffic-from-an-application.md#allowing-dns-traffic). There are other techniques for reducing the permissions such as dropping [Linux capabilities](https://www.man7.org/linux/man-pages/man7/capabilities.7.html) depending on your deployment method, but the methods mentioned above are the most important. ",documentation/guides/security.md "--- title: Private RAG Information Extraction Engine weight: 32 social_preview_image: /blog/hybrid-cloud-vultr/hybrid-cloud-vultr-tutorial.png aliases: - /documentation/tutorials/rag-chatbot-vultr-dspy-ollama/ --- # Private RAG Information Extraction Engine | Time: 90 min | Level: Advanced | | | |--------------|-----------------|--|----| Handling private documents is a common task in many industries. Various businesses possess a large amount of unstructured data stored as huge files that must be processed and analyzed. Industry reports, financial analysis, legal documents, and many other documents are stored in PDF, Word, and other formats. Conversational chatbots built on top of RAG pipelines are one of the viable solutions for finding the relevant answers in such documents. However, if we want to extract structured information from these documents, and pass them to downstream systems, we need to use a different approach. Information extraction is a process of structuring unstructured data into a format that can be easily processed by machines. In this tutorial, we will show you how to use [DSPy](https://dspy-docs.vercel.app/) to perform that process on a set of documents. Assuming we cannot send our data to an external service, we will use [Ollama](https://ollama.com/) to run our own LLM model on our premises, using [Vultr](https://www.vultr.com/) as a cloud provider. Qdrant, acting in this setup as a knowledge base providing the relevant pieces of documents for a given query, will also be hosted in the Hybrid Cloud mode on Vultr. The last missing piece, the DSPy application will be also running in the same environment. If you work in a regulated industry, or just need to keep your data private, this tutorial is for you. ![Architecture diagram](/documentation/examples/information-extraction-ollama-vultr/architecture-diagram.png) ## Deploying Qdrant Hybrid Cloud on Vultr All the services we are going to use in this tutorial will be running on [Vultr Kubernetes Engine](https://www.vultr.com/kubernetes/). That gives us a lot of flexibility in terms of scaling and managing the resources. Vultr manages the control plane and worker nodes and provides integration with other managed services such as Load Balancers, Block Storage, and DNS. 1. To start using managed Kubernetes on Vultr, follow the [platform-specific documentation](/documentation/hybrid-cloud/platform-deployment-options/#vultr). 2. Once your Kubernetes clusters are up, [you can begin deploying Qdrant Hybrid Cloud](/documentation/hybrid-cloud/). ### Installing the necessary packages We are going to need a couple of Python packages to run our application. They might be installed together with the `dspy-ai` package and `qdrant` extra: ```shell pip install dspy-ai[qdrant] ``` ### Qdrant Hybrid Cloud Our [documentation](/documentation/hybrid-cloud/) contains a comprehensive guide on how to set up Qdrant in the Hybrid Cloud mode on Vultr. Please follow it carefully to get your Qdrant instance up and running. Once it's done, we need to store the Qdrant URL and the API key in the environment variables. You can do it by running the following commands: ```shell export QDRANT_URL=""https://qdrant.example.com"" export QDRANT_API_KEY=""your-api-key"" ``` ```python import os os.environ[""QDRANT_URL""] = ""https://qdrant.example.com"" os.environ[""QDRANT_API_KEY""] = ""your-api-key"" ``` DSPy is framework we are going to use. It's integrated with Qdrant already, but it assumes you use [FastEmbed](https://qdrant.github.io/fastembed/) to create the embeddings. DSPy does not provide a way to index the data, but leaves this task to the user. We are going to create a collection on our own, and fill it with the embeddings of our document chunks. #### Data indexing FastEmbed uses the `BAAI/bge-small-en` as the default embedding model. We are going to use it as well. Our collection will be created automatically if we call the `.add` method on an existing `QdrantClient` instance. In this tutorial we are not going to focus much on the document parsing, as there are plenty of tools that can help with that. The [`unstructured`](https://github.com/Unstructured-IO/unstructured) library is one of the options you can launch on your infrastructure. In our simplified example, we are going to use a list of strings as our documents. These are the descriptions of the made up technical events. Each of them should contain the name of the event along with the location and start and end dates. ```python documents = [ ""Taking place in San Francisco, USA, from the 10th to the 12th of June, 2024, the Global Developers Conference is the annual gathering spot for developers worldwide, offering insights into software engineering, web development, and mobile applications."", ""The AI Innovations Summit, scheduled for 15-17 September 2024 in London, UK, aims at professionals and researchers advancing artificial intelligence and machine learning."", ""Berlin, Germany will host the CyberSecurity World Conference between November 5th and 7th, 2024, serving as a key forum for cybersecurity professionals to exchange strategies and research on threat detection and mitigation."", ""Data Science Connect in New York City, USA, occurring from August 22nd to 24th, 2024, connects data scientists, analysts, and engineers to discuss data science's innovative methodologies, tools, and applications."", ""Set for July 14-16, 2024, in Tokyo, Japan, the Frontend Developers Fest invites developers to delve into the future of UI/UX design, web performance, and modern JavaScript frameworks."", ""The Blockchain Expo Global, happening May 20-22, 2024, in Dubai, UAE, focuses on blockchain technology's applications, opportunities, and challenges for entrepreneurs, developers, and investors."", ""Singapore's Cloud Computing Summit, scheduled for October 3-5, 2024, is where IT professionals and cloud experts will convene to discuss strategies, architectures, and cloud solutions."", ""The IoT World Forum, taking place in Barcelona, Spain from December 1st to 3rd, 2024, is the premier conference for those focused on the Internet of Things, from smart cities to IoT security."", ""Los Angeles, USA, will become the hub for game developers, designers, and enthusiasts at the Game Developers Arcade, running from April 18th to 20th, 2024, to showcase new games and discuss development tools."", ""The TechWomen Summit in Sydney, Australia, from March 8-10, 2024, aims to empower women in tech with workshops, keynotes, and networking opportunities."", ""Seoul, South Korea's Mobile Tech Conference, happening from September 29th to October 1st, 2024, will explore the future of mobile technology, including 5G networks and app development trends."", ""The Open Source Summit, to be held in Helsinki, Finland from August 11th to 13th, 2024, celebrates open source technologies and communities, offering insights into the latest software and collaboration techniques."", ""Vancouver, Canada will play host to the VR/AR Innovation Conference from June 20th to 22nd, 2024, focusing on the latest in virtual and augmented reality technologies."", ""Scheduled for May 5-7, 2024, in London, UK, the Fintech Leaders Forum brings together experts to discuss the future of finance, including innovations in blockchain, digital currencies, and payment technologies."", ""The Digital Marketing Summit, set for April 25-27, 2024, in New York City, USA, is designed for marketing professionals and strategists to discuss digital marketing and social media trends."", ""EcoTech Symposium in Paris, France, unfolds over 2024-10-09 to 2024-10-11, spotlighting sustainable technologies and green innovations for environmental scientists, tech entrepreneurs, and policy makers."", ""Set in Tokyo, Japan, from 16th to 18th May '24, the Robotic Innovations Conference showcases automation, robotics, and AI-driven solutions, appealing to enthusiasts and engineers."", ""The Software Architecture World Forum in Dublin, Ireland, occurring 22-24 Sept 2024, gathers software architects and IT managers to discuss modern architecture patterns."", ""Quantum Computing Summit, convening in Silicon Valley, USA from 2024/11/12 to 2024/11/14, is a rendezvous for exploring quantum computing advancements with physicists and technologists."", ""From March 3 to 5, 2024, the Global EdTech Conference in London, UK, discusses the intersection of education and technology, featuring e-learning and digital classrooms."", ""Bangalore, India's NextGen DevOps Days, from 28 to 30 August 2024, is a hotspot for IT professionals keen on the latest DevOps tools and innovations."", ""The UX/UI Design Conference, slated for April 21-23, 2024, in New York City, USA, invites discussions on the latest in user experience and interface design among designers and developers."", ""Big Data Analytics Summit, taking place 2024 July 10-12 in Amsterdam, Netherlands, brings together data professionals to delve into big data analysis and insights."", ""Toronto, Canada, will see the HealthTech Innovation Forum from June 8 to 10, '24, focusing on technology's impact on healthcare with professionals and innovators."", ""Blockchain for Business Summit, happening in Singapore from 2024-05-02 to 2024-05-04, focuses on blockchain's business applications, from finance to supply chain."", ""Las Vegas, USA hosts the Global Gaming Expo from October 18th to 20th, 2024, a premiere event for game developers, publishers, and enthusiasts."", ""The Renewable Energy Tech Conference in Copenhagen, Denmark, from 2024/09/05 to 2024/09/07, discusses renewable energy innovations and policies."", ""Set for 2024 Apr 9-11 in Boston, USA, the Artificial Intelligence in Healthcare Summit gathers healthcare professionals to discuss AI's healthcare applications."", ""Nordic Software Engineers Conference, happening in Stockholm, Sweden from June 15 to 17, 2024, focuses on software development in the Nordic region."", ""The International Space Exploration Symposium, scheduled in Houston, USA from 2024-08-05 to 2024-08-07, invites discussions on space exploration technologies and missions."" ] ``` We'll be able to ask general questions, for example, about topics we are interested in or events happening in a specific location, but expect the results to be returned in a structured format. ![An example of extracted information](/documentation/examples/information-extraction-ollama-vultr/extracted-information.png) Indexing in Qdrant is a single call if we have the documents defined: ```python client.add( collection_name=""document-parts"", documents=documents, metadata=[{""document"": document} for document in documents], ) ``` Our collection is ready to be queried. We can now move to the next step, which is setting up the Ollama model. ### Ollama on Vultr Ollama is a great tool for running the LLM models on your own infrastructure. It's designed to be lightweight and easy to use, and [an official Docker image](https://hub.docker.com/r/ollama/ollama) is available. We can use it to run Ollama on our Vultr Kubernetes cluster. In case of LLMs we may have some special requirements, like a GPU, and Vultr provides the [Vultr Kubernetes Engine for Cloud GPU](https://www.vultr.com/products/cloud-gpu/) so the model can be run on a specialized machine. Please refer to the official documentation to get Ollama up and running within your environment. Once it's done, we need to store the Ollama URL in the environment variable: ```shell export OLLAMA_URL=""https://ollama.example.com"" ``` ```python os.environ[""OLLAMA_URL""] = ""https://ollama.example.com"" ``` We will refer to this URL later on when configuring the Ollama model in our application. #### Setting up the Large Language Model We are going to use one of the lightweight LLMs available in Ollama, a `gemma:2b` model. It was developed by Google DeepMind team and has 3B parameters. The [Ollama version](https://ollama.com/library/gemma:2b) uses 4-bit quantization. Installing the model is as simple as running the following command on the machine where Ollama is running: ```shell ollama run gemma:2b ``` Ollama models are also integrated with DSPy, so we can use them directly in our application. ## Implementing the information extraction pipeline DSPy is a bit different from the other LLM frameworks. It's designed to optimize the prompts and weights of LMs in a pipeline. It's a bit like a compiler for LMs: you write a pipeline in a high-level language, and DSPy generates the prompts and weights for you. This means you can build complex systems without having to worry about the details of how to prompt your LMs, as DSPy will do that for you. It is somehow similar to PyTorch but for LLMs. First of all, we will define the Language Model we are going to use: ```python import dspy gemma_model = dspy.OllamaLocal( model=""gemma:2b"", base_url=os.environ.get(""OLLAMA_URL""), max_tokens=500, ) ``` Similarly, we have to define connection to our Qdrant Hybrid Cloud cluster: ```python from dspy.retrieve.qdrant_rm import QdrantRM from qdrant_client import QdrantClient, models client = QdrantClient( os.environ.get(""QDRANT_URL""), api_key=os.environ.get(""QDRANT_API_KEY""), ) qdrant_retriever = QdrantRM( qdrant_collection_name=""document-parts"", qdrant_client=client, ) ``` Finally, both components have to be configured in DSPy with a simple call to one of the functions: ```python dspy.configure(lm=gemma_model, rm=qdrant_retriever) ``` ### Application logic There is a concept of signatures which defines input and output formats of the pipeline. We are going to define a simple signature for the event: ```python class Event(dspy.Signature): description = dspy.InputField( desc=""Textual description of the event, including name, location and dates"" ) event_name = dspy.OutputField(desc=""Name of the event"") location = dspy.OutputField(desc=""Location of the event"") start_date = dspy.OutputField(desc=""Start date of the event, YYYY-MM-DD"") end_date = dspy.OutputField(desc=""End date of the event, YYYY-MM-DD"") ``` It is designed to derive the structured information from the textual description of the event. Now, we can build our module that will use it, along with Qdrant and Ollama model. Let's call it `EventExtractor`: ```python class EventExtractor(dspy.Module): def __init__(self): super().__init__() # Retrieve module to get relevant documents self.retriever = dspy.Retrieve(k=3) # Predict module for the created signature self.predict = dspy.Predict(Event) def forward(self, query: str): # Retrieve the most relevant documents results = self.retriever.forward(query) # Try to extract events from the retrieved documents events = [] for document in results.passages: event = self.predict(description=document) events.append(event) return events ``` The logic is simple: we retrieve the most relevant documents from Qdrant, and then try to extract the structured information from them using the `Event` signature. We can simply call it and see the results: ```python extractor = EventExtractor() extractor.forward(""Blockchain events close to Europe"") ``` Output: ```python [ Prediction( event_name='Event Name: Blockchain Expo Global', location='Dubai, UAE', start_date='2024-05-20', end_date='2024-05-22' ), Prediction( event_name='Event Name: Blockchain for Business Summit', location='Singapore', start_date='2024-05-02', end_date='2024-05-04' ), Prediction( event_name='Event Name: Open Source Summit', location='Helsinki, Finland', start_date='2024-08-11', end_date='2024-08-13' ) ] ``` The task was solved successfully, even without any optimization. However, each of the events has the ""Event Name: "" prefix that we might want to remove. DSPy allows optimizing the module, so we can improve the results. Optimization might be done in different ways, and it's [well covered in the DSPy documentation](https://dspy-docs.vercel.app/docs/building-blocks/optimizers). We are not going to go through the optimization process in this tutorial. However, we encourage you to experiment with it, as it might significantly improve the performance of your pipeline. Created module might be easily stored on a specific path, and loaded later on: ```python extractor.save(""event_extractor"") ``` To load, just create an instance of the module and call the `load` method: ```python second_extractor = EventExtractor() second_extractor.load(""event_extractor"") ``` This is especially useful when you optimize the module, as the optimized version might be stored and loaded later on without redoing the optimization process each time you run the application. ### Deploying the extraction pipeline Vultr gives us a lot of flexibility in terms of deploying the applications. Perfectly, we would use the Kubernetes cluster we set up earlier to run it. The deployment is as simple as running any other Python application. This time we don't need a GPU, as Ollama is already running on a separate machine, and DSPy just interacts with it. ## Wrapping up In this tutorial, we showed you how to set up a private environment for information extraction using DSPy, Ollama, and Qdrant. All the components might be securely hosted on the Vultr cloud, giving you full control over your data. ",documentation/examples/rag-chatbot-vultr-dspy-ollama.md "--- title: ""Inference with Mighty"" short_description: ""Mighty offers a speedy scalable embedding, a perfect fit for the speedy scalable Qdrant search. Let's combine them!"" description: ""We combine Mighty and Qdrant to create a semantic search service in Rust with just a few lines of code."" weight: 17 author: Andre Bogus author_link: https://llogiq.github.io date: 2023-06-01T11:24:20+01:00 draft: true keywords: - vector search - embeddings - mighty - rust - semantic search --- # Semantic Search with Mighty and Qdrant Much like Qdrant, the [Mighty](https://max.io/) inference server is written in Rust and promises to offer low latency and high scalability. This brief demo combines Mighty and Qdrant into a simple semantic search service that is efficient, affordable and easy to setup. We will use [Rust](https://rust-lang.org) and our [qdrant\_client crate](https://docs.rs/qdrant_client) for this integration. ## Initial setup For Mighty, start up a [docker container](https://hub.docker.com/layers/maxdotio/mighty-sentence-transformers/0.9.9/images/sha256-0d92a89fbdc2c211d927f193c2d0d34470ecd963e8179798d8d391a4053f6caf?context=explore) with an open port 5050. Just loading the port in a window shows the following: ```json { ""name"": ""sentence-transformers/all-MiniLM-L6-v2"", ""architectures"": [ ""BertModel"" ], ""model_type"": ""bert"", ""max_position_embeddings"": 512, ""labels"": null, ""named_entities"": null, ""image_size"": null, ""source"": ""https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2"" } ``` Note that this uses the `MiniLM-L6-v2` model from Hugging Face. As per their website, the model ""maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search"". The distance measure to use is cosine similarity. Verify that mighty works by calling `curl https://
:5050/sentence-transformer?q=hello+mighty`. This will give you a result like (formatted via `jq`): ```json { ""outputs"": [ [ -0.05019686743617058, 0.051746174693107605, 0.048117730766534805, ... (381 values skipped) ] ], ""shape"": [ 1, 384 ], ""texts"": [ ""Hello mighty"" ], ""took"": 77 } ``` For Qdrant, follow our [cloud documentation](../../cloud/cloud-quick-start/) to spin up a [free tier](https://cloud.qdrant.io/). Make sure to retrieve an API key. ## Implement model API For mighty, you will need a way to emit HTTP(S) requests. This version uses the [reqwest](https://docs.rs/reqwest) crate, so add the following to your `Cargo.toml`'s dependencies section: ```toml [dependencies] reqwest = { version = ""0.11.18"", default-features = false, features = [""json"", ""rustls-tls""] } ``` Mighty offers a variety of model APIs which will download and cache the model on first use. For semantic search, use the `sentence-transformer` API (as in the above `curl` command). The Rust code to make the call is: ```rust use anyhow::anyhow; use reqwest::Client; use serde::Deserialize; use serde_json::Value as JsonValue; #[derive(Deserialize)] struct EmbeddingsResponse { pub outputs: Vec>, } pub async fn get_mighty_embedding( client: &Client, url: &str, text: &str ) -> anyhow::Result> { let response = client.get(url).query(&[(""text"", text)]).send().await?; if !response.status().is_success() { return Err(anyhow!( ""Mighty API returned status code {}"", response.status() )); } let embeddings: EmbeddingsResponse = response.json().await?; // ignore multiple embeddings at the moment embeddings.get(0).ok_or_else(|| anyhow!(""mighty returned empty embedding"")) } ``` Note that mighty can return multiple embeddings (if the input is too long to fit the model, it is automatically split). ## Create embeddings and run a query Use this code to create embeddings both for insertion and search. On the Qdrant side, take the embedding and run a query: ```rust use anyhow::anyhow; use qdrant_client::prelude::*; pub const SEARCH_LIMIT: u64 = 5; const COLLECTION_NAME: &str = ""mighty""; pub async fn qdrant_search_embeddings( qdrant_client: &QdrantClient, vector: Vec, ) -> anyhow::Result> { qdrant_client .search_points(&SearchPoints { collection_name: COLLECTION_NAME.to_string(), vector, limit: SEARCH_LIMIT, with_payload: Some(true.into()), ..Default::default() }) .await .map_err(|err| anyhow!(""Failed to search Qdrant: {}"", err)) } ``` You can convert the [`ScoredPoint`](https://docs.rs/qdrant-client/latest/qdrant_client/qdrant/struct.ScoredPoint.html)s to fit your desired output format.",documentation/examples/mighty.md "--- title: Question-Answering System for AI Customer Support weight: 26 social_preview_image: /blog/hybrid-cloud-airbyte/hybrid-cloud-airbyte-tutorial.png aliases: - /documentation/tutorials/rag-customer-support-cohere-airbyte-aws/ --- # Question-Answering System for AI Customer Support | Time: 120 min | Level: Advanced | | | --- | ----------- | ----------- |----------- | Maintaining top-notch customer service is vital to business success. As your operation expands, so does the influx of customer queries. Many of these queries are repetitive, making automation a time-saving solution. Your support team's expertise is typically kept private, but you can still use AI to automate responses securely. In this tutorial we will setup a private AI service that answers customer support queries with high accuracy and effectiveness. By leveraging Cohere's powerful models (deployed to [AWS](https://cohere.com/deployment-options/aws)) with Qdrant Hybrid Cloud, you can create a fully private customer support system. Data synchronization, facilitated by [Airbyte](https://airbyte.com/), will complete the setup. ![Architecture diagram](/documentation/examples/customer-support-cohere-airbyte/architecture-diagram.png) ## System design The history of past interactions with your customers is not a static dataset. It is constantly evolving, as new questions are coming in. You probably have a ticketing system that stores all the interactions, or use a different way to communicate with your customers. No matter what is the communication channel, you need to bring the correct answers to the selected Large Language Model, and have an established way to do it in a continuous manner. Thus, we will build an ingestion pipeline and then a Retrieval Augmented Generation application that will use the data. - **Dataset:** a [set of Frequently Asked Questions from Qdrant users](/documentation/faq/qdrant-fundamentals/) as an incrementally updated Excel sheet - **Embedding model:** Cohere `embed-multilingual-v3.0`, to support different languages with the same pipeline - **Knowledge base:** Qdrant, running in Hybrid Cloud mode - **Ingestion pipeline:** [Airbyte](https://airbyte.com/), loading the data into Qdrant - **Large Language Model:** Cohere [Command-R](https://docs.cohere.com/docs/command-r) - **RAG:** Cohere [RAG](https://docs.cohere.com/docs/retrieval-augmented-generation-rag) using our knowledge base through a custom connector All the selected components are compatible with the [AWS](https://aws.amazon.com/) infrastructure. Thanks to Cohere models' availability, you can build a fully private customer support system completely isolates data within your infrastructure. Also, if you have AWS credits, you can now use them without spending additional money on the models or semantic search layer. ### Data ingestion Building a RAG starts with a well-curated dataset. In your specific case you may prefer loading the data directly from a ticketing system, such as [Zendesk Support](https://airbyte.com/connectors/zendesk-support), [Freshdesk](https://airbyte.com/connectors/freshdesk), or maybe integrate it with a shared inbox. However, in case of customer questions quality over quantity is the key. There should be a conscious decision on what data to include in the knowledge base, so we do not confuse the model with possibly irrelevant information. We'll assume there is an [Excel sheet](https://docs.airbyte.com/integrations/sources/file) available over HTTP/FTP that Airbyte can access and load into Qdrant in an incremental manner. ### Cohere <> Qdrant Connector for RAG Cohere RAG relies on [connectors](https://docs.cohere.com/docs/connectors) which brings additional context to the model. The connector is a web service that implements a specific interface, and exposes its data through HTTP API. With that setup, the Large Language Model becomes responsible for communicating with the connectors, so building a prompt with the context is not needed anymore. ### Answering bot Finally, we want to automate the responses and send them automatically when we are sure that the model is confident enough. Again, the way such an application should be created strongly depends on the system you are using within the customer support team. If it exposes a way to set up a webhook whenever a new question is coming in, you can create a web service and use it to automate the responses. In general, our bot should be created specifically for the platform you use, so we'll just cover the general idea here and build a simple CLI tool. ## Prerequisites ### Cohere models on AWS One of the possible ways to deploy Cohere models on AWS is to use AWS SageMaker. Cohere's website has [a detailed guide on how to deploy the models in that way](https://docs.cohere.com/docs/amazon-sagemaker-setup-guide), so you can follow the steps described there to set up your own instance. ### Qdrant Hybrid Cloud on AWS Our documentation covers the deployment of Qdrant on AWS as a Hybrid Cloud Environment, so you can follow the steps described there to set up your own instance. The deployment process is quite straightforward, and you can have your Qdrant cluster up and running in a few minutes. [//]: # (TODO: refer to the documentation on how to deploy Qdrant on AWS) Once you perform all the steps, your Qdrant cluster should be running on a specific URL. You will need this URL and the API key to interact with Qdrant, so let's store them both in the environment variables: ```shell export QDRANT_URL=""https://qdrant.example.com"" export QDRANT_API_KEY=""your-api-key"" ``` ```python import os os.environ[""QDRANT_URL""] = ""https://qdrant.example.com"" os.environ[""QDRANT_API_KEY""] = ""your-api-key"" ``` ### Airbyte Open Source Airbyte is an open-source data integration platform that helps you replicate your data in your warehouses, lakes, and databases. You can install it on your infrastructure and use it to load the data into Qdrant. The installation process for AWS EC2 is described in the [official documentation](https://docs.airbyte.com/deploying-airbyte/on-aws-ec2). Please follow the instructions to set up your own instance. #### Setting up the connection Once you have an Airbyte up and running, you can configure the connection to load the data from the respective source into Qdrant. The configuration will require setting up the source and destination connectors. In this tutorial we will use the following connectors: - **Source:** [File](https://docs.airbyte.com/integrations/sources/file) to load the data from an Excel sheet - **Destination:** [Qdrant](https://docs.airbyte.com/integrations/destinations/qdrant) to load the data into Qdrant Airbyte UI will guide you through the process of setting up the source and destination and connecting them. Here is how the configuration of the source might look like: ![Airbyte source configuration](/documentation/examples/customer-support-cohere-airbyte/airbyte-excel-source.png) Qdrant is our target destination, so we need to set up the connection to it. We need to specify which fields should be included to generate the embeddings. In our case it makes complete sense to embed just the questions, as we are going to look for similar questions asked in the past and provide the answers. ![Airbyte destination configuration](/documentation/examples/customer-support-cohere-airbyte/airbyte-qdrant-destination.png) Once we have the destination set up, we can finally configure a connection. The connection will define the schedule of the data synchronization. ![Airbyte connection configuration](/documentation/examples/customer-support-cohere-airbyte/airbyte-connection.png) Airbyte should now be ready to accept any data updates from the source and load them into Qdrant. You can monitor the progress of the synchronization in the UI. ## RAG connector One of our previous tutorials, guides you step-by-step on [implementing custom connector for Cohere RAG](../cohere-rag-connector/) with Cohere Embed v3 and Qdrant. You can just point it to use your Hybrid Cloud Qdrant instance running on AWS. Created connector might be deployed to Amazon Web Services in various ways, even in a [Serverless](https://aws.amazon.com/serverless/) manner using [AWS Lambda](https://aws.amazon.com/lambda/?c=ser&sec=srv). In general, RAG connector has to expose a single endpoint that will accept POST requests with `query` parameter and return the matching documents as JSON document with a specific structure. Our FastAPI implementation created [in the related tutorial](../cohere-rag-connector/) is a perfect fit for this task. The only difference is that you should point it to the Cohere models and Qdrant running on AWS infrastructure. > Our connector is a lightweight web service that exposes a single endpoint and glues the Cohere embedding model with > our Qdrant Hybrid Cloud instance. Thus, it perfectly fits the serverless architecture, requiring no additional > infrastructure to run. You can also run the connector as another service within your [Kubernetes cluster running on AWS (EKS)](https://aws.amazon.com/eks/), or by launching an [EC2](https://aws.amazon.com/ec2/) compute instance. This step is dependent on the way you deploy your other services, so we'll leave it to you to decide how to run the connector. Eventually, the web service should be available under a specific URL, and it's a good practice to store it in the environment variable, so the other services can easily access it. ```shell export RAG_CONNECTOR_URL=""https://rag-connector.example.com/search"" ``` ```python os.environ[""RAG_CONNECTOR_URL""] = ""https://rag-connector.example.com/search"" ``` ## Customer interface At this part we have all the data loaded into Qdrant, and the RAG connector is ready to serve the relevant context. The last missing piece is the customer interface, that will call the Command model to create the answer. Such a system should be built specifically for the platform you use and integrated into its workflow, but we will build the strong foundation for it and show how to use it in a simple CLI tool. > Our application does not have to connect to Qdrant anymore, as the model will connect to the RAG connector directly. First of all, we have to create a connection to Cohere services through the Cohere SDK. ```python import cohere # Create a Cohere client pointing to the AWS instance cohere_client = cohere.Client(...) ``` Next, our connector should be registered. **Please make sure to do it once, and store the id of the connector in the environment variable or in any other way that will be accessible to the application.** ```python import os connector_response = cohere_client.connectors.create( name=""customer-support"", url=os.environ[""RAG_CONNECTOR_URL""], ) # The id returned by the API should be stored for future use connector_id = connector_response.connector.id ``` Finally, we can create a prompt and get the answer from the model. Additionally, we define which of the connectors should be used to provide the context, as we may have multiple connectors and want to use specific ones, depending on some conditions. Let's start with asking a question. ```python query = ""Why Qdrant does not return my vectors?"" ``` Now we can send the query to the model, get the response, and possibly send it back to the customer. ```python response = cohere_client.chat( message=query, connectors=[ cohere.ChatConnector(id=connector_id), ], model=""command-r"", ) print(response.text) ``` The output should be the answer to the question, generated by the model, for example: > Qdrant is set up by default to minimize network traffic and therefore doesn't return vectors in search results. However, you can make Qdrant return your vectors by setting the 'with_vector' parameter of the Search/Scroll function to true. Customer support should not be fully automated, as some completely new issues might require human intervention. We should play with prompt engineering and expect the model to provide the answer with a certain confidence level. If the confidence is too low, we should not send the answer automatically but present it to the support team for review. ## Wrapping up This tutorial shows how to build a fully private customer support system using Cohere models, Qdrant Hybrid Cloud, and Airbyte, which runs on AWS infrastructure. You can ensure your data does not leave your premises and focus on providing the best customer support experience without bothering your team with repetitive tasks. ",documentation/examples/rag-customer-support-cohere-airbyte-aws.md "--- title: Movie Recommendation System weight: 34 social_preview_image: /blog/hybrid-cloud-ovhcloud/hybrid-cloud-ovhcloud-tutorial.png aliases: - /documentation/tutorials/recommendation-system-ovhcloud/ --- # Movie Recommendation System | Time: 120 min | Level: Advanced | Output: [GitHub](https://github.com/infoslack/qdrant-example/blob/main/HC-demo/HC-OVH.ipynb) | | --- | ----------- | ----------- |----------- | In this tutorial, you will build a mechanism that recommends movies based on defined preferences. Vector databases like Qdrant are good for storing high-dimensional data, such as user and item embeddings. They can enable personalized recommendations by quickly retrieving similar entries based on advanced indexing techniques. In this specific case, we will use [sparse vectors](/articles/sparse-vectors/) to create an efficient and accurate recommendation system. **Privacy and Sovereignty:** Since preference data is proprietary, it should be stored in a secure and controlled environment. Our vector database can easily be hosted on [OVHcloud](https://ovhcloud.com/), our trusted [Qdrant Hybrid Cloud](/documentation/hybrid-cloud/) partner. This means that Qdrant can be run from your OVHcloud region, but the database itself can still be managed from within Qdrant Cloud's interface. Both products have been tested for compatibility and scalability, and we recommend their [managed Kubernetes](https://www.ovhcloud.com/en/public-cloud/kubernetes/) service. > To see the entire output, use our [notebook with complete instructions](https://github.com/infoslack/qdrant-example/blob/main/HC-demo/HC-OVH.ipynb). ## Components - **Dataset:** The [MovieLens dataset](https://grouplens.org/datasets/movielens/) contains a list of movies and ratings given by users. - **Cloud:** [OVHcloud](https://ovhcloud.com/), with managed Kubernetes. - **Vector DB:** [Qdrant Hybrid Cloud](https://hybrid-cloud.qdrant.tech) running on [OVHcloud](https://ovhcloud.com/). **Methodology:** We're adopting a collaborative filtering approach to construct a recommendation system from the dataset provided. Collaborative filtering works on the premise that if two users share similar tastes, they're likely to enjoy similar movies. Leveraging this concept, we'll identify users whose ratings align closely with ours, and explore the movies they liked but we haven't seen yet. To do this, we'll represent each user's ratings as a vector in a high-dimensional, sparse space. Using Qdrant, we'll index these vectors and search for users whose ratings vectors closely match ours. Ultimately, we will see which movies were enjoyed by users similar to us. ![](/documentation/examples/recommendation-system-ovhcloud/architecture-diagram.png) ## Deploying Qdrant Hybrid Cloud on OVHcloud [Service Managed Kubernetes](https://www.ovhcloud.com/en-in/public-cloud/kubernetes/), powered by OVH Public Cloud Instances, a leading European cloud provider. With OVHcloud Load Balancers and disks built in. OVHcloud Managed Kubernetes provides high availability, compliance, and CNCF conformance, allowing you to focus on your containerized software layers with total reversibility. 1. To start using managed Kubernetes on OVHcloud, follow the [platform-specific documentation](/documentation/hybrid-cloud/platform-deployment-options/#ovhcloud). 2. Once your Kubernetes clusters are up, [you can begin deploying Qdrant Hybrid Cloud](/documentation/hybrid-cloud/). ## Prerequisites Download and unzip the MovieLens dataset: ```shell mkdir -p data wget https://files.grouplens.org/datasets/movielens/ml-1m.zip unzip ml-1m.zip -d data ``` The necessary * libraries are installed using `pip`, including `pandas` for data manipulation, `qdrant-client` for interfacing with Qdrant, and `*-dotenv` for managing environment variables. ```python !pip install -U \ pandas \ qdrant-client \ *-dotenv ``` The `.env` file is used to store sensitive information like the Qdrant host URL and API key securely. ```shell QDRANT_HOST QDRANT_API_KEY ``` Load all environment variables into the setup: ```python import os from dotenv import load_dotenv load_dotenv('./.env') ``` ## Implementation Load the data from the MovieLens dataset into pandas DataFrames to facilitate data manipulation and analysis. ```python from qdrant_client import QdrantClient, models import pandas as pd ``` Load user data: ```python users = pd.read_csv( 'data/ml-1m/users.dat', sep='::', names=['user_id', 'gender', 'age', 'occupation', 'zip'], engine='*' ) users.head() ``` Add movies: ```python movies = pd.read_csv( 'data/ml-1m/movies.dat', sep='::', names=['movie_id', 'title', 'genres'], engine='*', encoding='latin-1' ) movies.head() ``` Finally, add the ratings: ```python ratings = pd.read_csv( 'data/ml-1m/ratings.dat', sep='::', names=['user_id', 'movie_id', 'rating', 'timestamp'], engine='*' ) ratings.head() ``` ### Normalize the ratings Sparse vectors can use advantage of negative values, so we can normalize ratings to have a mean of 0 and a standard deviation of 1. This normalization ensures that ratings are consistent and centered around zero, enabling accurate similarity calculations. In this scenario we can take into account movies that we don't like. ```python ratings.rating = (ratings.rating - ratings.rating.mean()) / ratings.rating.std() ``` To get the results: ```python ratings.head() ``` ### Data preparation Now you will transform user ratings into sparse vectors, where each vector represents ratings for different movies. This step prepares the data for indexing in Qdrant. First, create a collection with configured sparse vectors. For sparse vectors, you don't need to specify the dimension, because it's extracted from the data automatically. ```python from collections import defaultdict user_sparse_vectors = defaultdict(lambda: {""values"": [], ""indices"": []}) for row in ratings.itertuples(): user_sparse_vectors[row.user_id][""values""].append(row.rating) user_sparse_vectors[row.user_id][""indices""].append(row.movie_id) ``` Connect to Qdrant and create a collection called **movielens**: ```python client = QdrantClient( url = os.getenv(""QDRANT_HOST""), api_key = os.getenv(""QDRANT_API_KEY"") ) client.create_collection( ""movielens"", vectors_config={}, sparse_vectors_config={ ""ratings"": models.SparseVectorParams() } ) ``` Upload user ratings to the **movielens** collection in Qdrant as sparse vectors, along with user metadata. This step populates the database with the necessary data for recommendation generation. ```python def data_generator(): for user in users.itertuples(): yield models.PointStruct( id=user.user_id, vector={ ""ratings"": user_sparse_vectors[user.user_id] }, payload=user._asdict() ) client.upload_points( ""movielens"", data_generator() ) ``` ## Recommendations Personal movie ratings are specified, where positive ratings indicate likes and negative ratings indicate dislikes. These ratings serve as the basis for finding similar users with comparable tastes. Personal ratings are converted into a sparse vector representation suitable for querying Qdrant. This vector represents the user's preferences across different movies. Let's try to recommend something for ourselves: ``` 1 = Like -1 = dislike ``` ```python # Search with movies[movies.title.str.contains(""Matrix"", case=False)]. my_ratings = { 2571: 1, # Matrix 329: 1, # Star Trek 260: 1, # Star Wars 2288: -1, # The Thing 1: 1, # Toy Story 1721: -1, # Titanic 296: -1, # Pulp Fiction 356: 1, # Forrest Gump 2116: 1, # Lord of the Rings 1291: -1, # Indiana Jones 1036: -1 # Die Hard } inverse_ratings = {k: -v for k, v in my_ratings.items()} def to_vector(ratings): vector = models.SparseVector( values=[], indices=[] ) for movie_id, rating in ratings.items(): vector.values.append(rating) vector.indices.append(movie_id) return vector ``` Query Qdrant to find users with similar tastes based on the provided personal ratings. The search returns a list of similar users along with their ratings, facilitating collaborative filtering. ```python results = client.query_points( ""movielens"", query=to_vector(my_ratings), using=""ratings"", with_vectors=True, # We will use those to find new movies limit=20 ).points ``` Movie scores are computed based on how frequently each movie appears in the ratings of similar users, weighted by their ratings. This step identifies popular movies among users with similar tastes. Calculate how frequently each movie is found in similar users' ratings ```python def results_to_scores(results): movie_scores = defaultdict(lambda: 0) for user in results: user_scores = user.vector['ratings'] for idx, rating in zip(user_scores.indices, user_scores.values): if idx in my_ratings: continue movie_scores[idx] += rating return movie_scores ``` The top-rated movies are sorted based on their scores and printed as recommendations for the user. These recommendations are tailored to the user's preferences and aligned with their tastes. Sort movies by score and print top five: ```python movie_scores = results_to_scores(results) top_movies = sorted(movie_scores.items(), key=lambda x: x[1], reverse=True) for movie_id, score in top_movies[:5]: print(movies[movies.movie_id == movie_id].title.values[0], score) ``` Result: ```text Star Wars: Episode V - The Empire Strikes Back (1980) 20.02387858 Star Wars: Episode VI - Return of the Jedi (1983) 16.443184379999998 Princess Bride, The (1987) 15.840068229999996 Raiders of the Lost Ark (1981) 14.94489462 Sixth Sense, The (1999) 14.570322149999999 ```",documentation/examples/recommendation-system-ovhcloud.md "--- title: Chat With Product PDF Manuals Using Hybrid Search weight: 27 social_preview_image: /blog/hybrid-cloud-llamaindex/hybrid-cloud-llamaindex-tutorial.png aliases: - /documentation/tutorials/hybrid-search-llamaindex-jinaai/ --- # Chat With Product PDF Manuals Using Hybrid Search | Time: 120 min | Level: Advanced | Output: [GitHub](https://github.com/infoslack/qdrant-example/blob/main/HC-demo/HC-DO-LlamaIndex-Jina-v2.ipynb) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://githubtocolab.com/infoslack/qdrant-example/blob/main/HC-demo/HC-DO-LlamaIndex-Jina-v2.ipynb) | | --- | ----------- | ----------- |----------- | With the proliferation of digital manuals and the increasing demand for quick and accurate customer support, having a chatbot capable of efficiently parsing through complex PDF documents and delivering precise information can be a game-changer for any business. In this tutorial, we'll walk you through the process of building a RAG-based chatbot, designed specifically to assist users with understanding the operation of various household appliances. We'll cover the essential steps required to build your system, including data ingestion, natural language understanding, and response generation for customer support use cases. ## Components - **Embeddings:** Jina Embeddings, served via the [Jina Embeddings API](https://jina.ai/embeddings/#apiform) - **Database:** [Qdrant Hybrid Cloud](/documentation/hybrid-cloud/), deployed in a managed Kubernetes cluster on [DigitalOcean (DOKS)](https://www.digitalocean.com/products/kubernetes) - **LLM:** [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) language model on HuggingFace - **Framework:** [LlamaIndex](https://www.llamaindex.ai/) for extended RAG functionality and [Hybrid Search support](https://docs.llamaindex.ai/en/stable/examples/vector_stores/qdrant_hybrid/). - **Parser:** [LlamaParse](https://github.com/run-llama/llama_parse) as a way to parse complex documents with embedded objects such as tables and figures. ![Architecture diagram](/documentation/examples/hybrid-search-llamaindex-jinaai/architecture-diagram.png) ### Procedure Retrieval Augmented Generation (RAG) combines search with language generation. An external information retrieval system is used to identify documents likely to provide information relevant to the user's query. These documents, along with the user's request, are then passed on to a text-generating language model, producing a natural response. This method enables a language model to respond to questions and access information from a much larger set of documents than it could see otherwise. The language model only looks at a few relevant sections of the documents when generating responses, which also helps to reduce inexplicable errors. ## [Service Managed Kubernetes](https://www.ovhcloud.com/en-in/public-cloud/kubernetes/), powered by OVH Public Cloud Instances, a leading European cloud provider. With OVHcloud Load Balancers and disks built in. OVHcloud Managed Kubernetes provides high availability, compliance, and CNCF conformance, allowing you to focus on your containerized software layers with total reversibility. ## Prerequisites ### Deploying Qdrant Hybrid Cloud on DigitalOcean [DigitalOcean Kubernetes (DOKS)](https://www.digitalocean.com/products/kubernetes) is a managed Kubernetes service that lets you deploy Kubernetes clusters without the complexities of handling the control plane and containerized infrastructure. Clusters are compatible with standard Kubernetes toolchains and integrate natively with DigitalOcean Load Balancers and volumes. 1. To start using managed Kubernetes on DigitalOcean, follow the [platform-specific documentation](/documentation/hybrid-cloud/platform-deployment-options/#digital-ocean). 2. Once your Kubernetes clusters are up, [you can begin deploying Qdrant Hybrid Cloud](/documentation/hybrid-cloud/). 3. Once it's deployed, you should have a running Qdrant cluster with an API key. ### Development environment Then, install all dependencies: ```python !pip install -U \ llama-index \ llama-parse \ python-dotenv \ llama-index-embeddings-jinaai \ llama-index-llms-huggingface \ llama-index-vector-stores-qdrant \ ""huggingface_hub[inference]"" \ datasets ``` Set up secret key values on `.env` file: ```bash JINAAI_API_KEY HF_INFERENCE_API_KEY LLAMA_CLOUD_API_KEY QDRANT_HOST QDRANT_API_KEY ``` Load all environment variables: ```python import os from dotenv import load_dotenv load_dotenv('./.env') ``` ## Implementation ### Connect Jina Embeddings and Mixtral LLM LlamaIndex provides built-in support for the [Jina Embeddings API](https://jina.ai/embeddings/#apiform). To use it, you need to initialize the `JinaEmbedding` object with your API Key and model name. For the LLM, you need wrap it in a subclass of `llama_index.llms.CustomLLM` to make it compatible with LlamaIndex. ```python # connect embeddings from llama_index.embeddings.jinaai import JinaEmbedding jina_embedding_model = JinaEmbedding( model=""jina-embeddings-v2-base-en"", api_key=os.getenv(""JINAAI_API_KEY""), ) # connect LLM from llama_index.llms.huggingface import HuggingFaceInferenceAPI mixtral_llm = HuggingFaceInferenceAPI( model_name = ""mistralai/Mixtral-8x7B-Instruct-v0.1"", token=os.getenv(""HF_INFERENCE_API_KEY""), ) ``` ### Prepare data for RAG This example will use household appliance manuals, which are generally available as PDF documents. LlamaPar In the `data` folder, we have three documents, and we will use it to extract the textual content from the PDF and use it as a knowledge base in a simple RAG. The free LlamaIndex Cloud plan is sufficient for our example: ```python import nest_asyncio nest_asyncio.apply() from llama_parse import LlamaParse llamaparse_api_key = os.getenv(""LLAMA_CLOUD_API_KEY"") llama_parse_documents = LlamaParse(api_key=llamaparse_api_key, result_type=""markdown"").load_data([ ""data/DJ68-00682F_0.0.pdf"", ""data/F500E_WF80F5E_03445F_EN.pdf"", ""data/O_ME4000R_ME19R7041FS_AA_EN.pdf"" ]) ``` ### Store data into Qdrant The code below does the following: - create a vector store with Qdrant client; - get an embedding for each chunk using Jina Embeddings API; - combines `sparse` and `dense` vectors for hybrid search; - stores all data into Qdrant; Hybrid search with Qdrant must be enabled from the beginning - we can simply set `enable_hybrid=True`. ```python # By default llamaindex uses OpenAI models # setting embed_model to Jina and llm model to Mixtral from llama_index.core import Settings Settings.embed_model = jina_embedding_model Settings.llm = mixtral_llm from llama_index.core import VectorStoreIndex, StorageContext from llama_index.vector_stores.qdrant import QdrantVectorStore import qdrant_client client = qdrant_client.QdrantClient( url=os.getenv(""QDRANT_HOST""), api_key=os.getenv(""QDRANT_API_KEY"") ) vector_store = QdrantVectorStore( client=client, collection_name=""demo"", enable_hybrid=True, batch_size=20 ) Settings.chunk_size = 512 storage_context = StorageContext.from_defaults(vector_store=vector_store) index = VectorStoreIndex.from_documents( documents=llama_parse_documents, storage_context=storage_context ) ``` ### Prepare a prompt Here we will create a custom prompt template. This prompt asks the LLM to use only the context information retrieved from Qdrant. When querying with hybrid mode, we can set `similarity_top_k` and `sparse_top_k` separately: - `sparse_top_k` represents how many nodes will be retrieved from each dense and sparse query. - `similarity_top_k` controls the final number of returned nodes. In the above setting, we end up with 10 nodes. Then, we assemble the query engine using the prompt. ```python from llama_index.core import PromptTemplate qa_prompt_tmpl = ( ""Context information is below.\n"" ""-------------------------------"" ""{context_str}\n"" ""-------------------------------"" ""Given the context information and not prior knowledge,"" ""answer the query. Please be concise, and complete.\n"" ""If the context does not contain an answer to the query,"" ""respond with \""I don't know!\""."" ""Query: {query_str}\n"" ""Answer: "" ) qa_prompt = PromptTemplate(qa_prompt_tmpl) from llama_index.core.retrievers import VectorIndexRetriever from llama_index.core.query_engine import RetrieverQueryEngine from llama_index.core import get_response_synthesizer from llama_index.core import Settings Settings.embed_model = jina_embedding_model Settings.llm = mixtral_llm # retriever retriever = VectorIndexRetriever( index=index, similarity_top_k=2, sparse_top_k=12, vector_store_query_mode=""hybrid"" ) # response synthesizer response_synthesizer = get_response_synthesizer( llm=mixtral_llm, text_qa_template=qa_prompt, response_mode=""compact"", ) # query engine query_engine = RetrieverQueryEngine( retriever=retriever, response_synthesizer=response_synthesizer, ) ``` ## Run a test query Now you can ask questions and receive answers based on the data: **Question** ```python result = query_engine.query(""What temperature should I use for my laundry?"") print(result.response) ``` **Answer** ```text The water temperature is set to 70 ˚C during the Eco Drum Clean cycle. You cannot change the water temperature. However, the temperature for other cycles is not specified in the context. ``` And that's it! Feel free to scale this up to as many documents and complex PDFs as you like. ",documentation/examples/hybrid-search-llamaindex-jinaai.md "--- title: Region-Specific Contract Management System weight: 28 social_preview_image: /blog/hybrid-cloud-aleph-alpha/hybrid-cloud-aleph-alpha-tutorial.png aliases: - /documentation/tutorials/rag-contract-management-stackit-aleph-alpha/ --- # Region-Specific Contract Management System | Time: 90 min | Level: Advanced | | | --- | ----------- | ----------- |----------- | Contract management benefits greatly from Retrieval Augmented Generation (RAG), streamlining the handling of lengthy business contract texts. With AI assistance, complex questions can be asked and well-informed answers generated, facilitating efficient document management. This proves invaluable for businesses with extensive relationships, like shipping companies, construction firms, and consulting practices. Access to such contracts is often restricted to authorized team members due to security and regulatory requirements, such as GDPR in Europe, necessitating secure storage practices. Companies want their data to be kept and processed within specific geographical boundaries. For that reason, this RAG-centric tutorial focuses on dealing with a region-specific cloud provider. You will set up a contract management system using [Aleph Alpha's](https://aleph-alpha.com/) embeddings and LLM. You will host everything on [STACKIT](https://www.stackit.de/), a German business cloud provider. On this platform, you will run Qdrant Hybrid Cloud as well as the rest of your RAG application. This setup will ensure that your data is stored and processed in Germany. ![Architecture diagram](/documentation/examples/contract-management-stackit-aleph-alpha/architecture-diagram.png) ## Components A contract management platform is not a simple CLI tool, but an application that should be available to all team members. It needs an interface to upload, search, and manage the documents. Ideally, the system should be integrated with org's existing stack, and the permissions/access controls inherited from LDAP or Active Directory. > **Note:** In this tutorial, we are going to build a solid foundation for such a system. However, it is up to your organization's setup to implement the entire solution. - **Dataset** - a collection of documents, using different formats, such as PDF or DOCx, scraped from internet - **Asymmetric semantic embeddings** - [Aleph Alpha embedding](https://docs.aleph-alpha.com/api/semantic-embed/) to convert the queries and the documents into vectors - **Large Language Model** - the [Luminous-extended-control model](https://docs.aleph-alpha.com/docs/introduction/model-card/), but you can play with a different one from the Luminous family - **Qdrant Hybrid Cloud** - a knowledge base to store the vectors and search over the documents - **STACKIT** - a [German business cloud](https://www.stackit.de) to run the Qdrant Hybrid Cloud and the application processes We will implement the process of uploading the documents, converting them into vectors, and storing them in Qdrant. Then, we will build a search interface to query the documents and get the answers. All that, assuming the user interacts with the system with some set of permissions, and can only access the documents they are allowed to. ## Prerequisites ### Aleph Alpha account Since you will be using Aleph Alpha's models, [sign up](https://app.aleph-alpha.com/signup) with their managed service and generate an API token in the [User Profile](https://app.aleph-alpha.com/profile). Once you have it ready, store it as an environment variable: ```shell export ALEPH_ALPHA_API_KEY="""" ``` ```python import os os.environ[""ALEPH_ALPHA_API_KEY""] = """" ``` ### Qdrant Hybrid Cloud on STACKIT Please refer to our documentation to see [how to deploy Qdrant Hybrid Cloud on STACKIT](/documentation/hybrid-cloud/platform-deployment-options/#stackit). Once you finish the deployment, you will have the API endpoint to interact with the Qdrant server. Let's store it in the environment variable as well: ```shell export QDRANT_URL=""https://qdrant.example.com"" export QDRANT_API_KEY=""your-api-key"" ``` ```python os.environ[""QDRANT_URL""] = ""https://qdrant.example.com"" os.environ[""QDRANT_API_KEY""] = ""your-api-key"" ``` Qdrant will be running on a specific URL and access will be restricted by the API key. Make sure to store them both as environment variables as well: *Optional:* Whenever you use LangChain, you can also [configure LangSmith](https://docs.smith.langchain.com/), which will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). ```shell export LANGCHAIN_TRACING_V2=true export LANGCHAIN_API_KEY=""your-api-key"" export LANGCHAIN_PROJECT=""your-project"" # if not specified, defaults to ""default"" ``` ## Implementation To build the application, we can use the official SDKs of Aleph Alpha and Qdrant. However, to streamline the process let's use [LangChain](https://python.langchain.com/docs/get_started/introduction). This framework is already integrated with both services, so we can focus our efforts on developing business logic. ### Qdrant collection Aleph Alpha embeddings are high dimensional vectors by default, with a dimensionality of `5120`. However, a pretty unique feature of that model is that they might be compressed to a size of `128`, with a small drop in accuracy performance (4-6%, according to the docs). Qdrant can store even the original vectors easily, and this sounds like a good idea to enable [Binary Quantization](/documentation/guides/quantization/#binary-quantization) to save space and make the retrieval faster. Let's create a collection with such settings: ```python from qdrant_client import QdrantClient, models client = QdrantClient( location=os.environ[""QDRANT_URL""], api_key=os.environ[""QDRANT_API_KEY""], ) client.create_collection( collection_name=""contracts"", vectors_config=models.VectorParams( size=5120, distance=models.Distance.COSINE, quantization_config=models.BinaryQuantization( binary=models.BinaryQuantizationConfig( always_ram=True, ) ) ), ) ``` We are going to use the `contracts` collection to store the vectors of the documents. The `always_ram` flag is set to `True` to keep the quantized vectors in RAM, which will speed up the search process. We also wanted to restrict access to the individual documents, so only users with the proper permissions can see them. In Qdrant that should be solved by adding a payload field that defines who can access the document. We'll call this field `roles` and set it to an array of strings with the roles that can access the document. ```python client.create_payload_index( collection_name=""contracts"", field_name=""metadata.roles"", field_schema=models.PayloadSchemaType.KEYWORD, ) ``` Since we use Langchain, the `roles` field is a nested field of the `metadata`, so we have to define it as `metadata.roles`. The schema says that the field is a keyword, which means it is a string or an array of strings. We are going to use the name of the customers as the roles, so the access control will be based on the customer name. ### Ingestion pipeline Semantic search systems rely on high-quality data as their foundation. With the [unstructured integration of Langchain](https://python.langchain.com/docs/integrations/providers/unstructured), ingestion of various document formats like PDFs, Microsoft Word files, and PowerPoint presentations becomes effortless. However, it's crucial to split the text intelligently to avoid converting entire documents into vectors; instead, they should be divided into meaningful chunks. Subsequently, the extracted documents are converted into vectors using Aleph Alpha embeddings and stored in the Qdrant collection. Let's start by defining the components and connecting them together: ```python embeddings = AlephAlphaAsymmetricSemanticEmbedding( model=""luminous-base"", aleph_alpha_api_key=os.environ[""ALEPH_ALPHA_API_KEY""], normalize=True, ) qdrant = Qdrant( client=client, collection_name=""contracts"", embeddings=embeddings, ) ``` Now it's high time to index our documents. Each of the documents is a separate file, and we also have to know the customer name to set the access control properly. There might be several roles for a single document, so let's keep them in a list. ```python documents = { ""data/Data-Processing-Agreement_STACKIT_Cloud_version-1.2.pdf"": [""stackit""], ""data/langchain-terms-of-service.pdf"": [""langchain""], } ``` This is how the documents might look like: ![Example of the indexed document](/documentation/examples/contract-management-stackit-aleph-alpha/indexed-document.png) Each has to be split into chunks first; there is no silver bullet. Our chunking algorithm will be simple and based on recursive splitting, with the maximum chunk size of 500 characters and the overlap of 100 characters. ```python from langchain_text_splitters import RecursiveCharacterTextSplitter text_splitter = RecursiveCharacterTextSplitter( chunk_size=500, chunk_overlap=100, ) ``` Now we can iterate over the documents, split them into chunks, convert them into vectors with Aleph Alpha embedding model, and store them in the Qdrant. ```python from langchain_community.document_loaders.unstructured import UnstructuredFileLoader for document_path, roles in documents.items(): document_loader = UnstructuredFileLoader(file_path=document_path) # Unstructured loads each file into a single Document object loaded_documents = document_loader.load() for doc in loaded_documents: doc.metadata[""roles""] = roles # Chunks will have the same metadata as the original document document_chunks = text_splitter.split_documents(loaded_documents) # Add the documents to the Qdrant collection qdrant.add_documents(document_chunks, batch_size=20) ``` Our collection is filled with data, and we can start searching over it. In a real-world scenario, the ingestion process should be automated and triggered by the new documents uploaded to the system. Since we already use Qdrant Hybrid Cloud running on Kubernetes, we can easily deploy the ingestion pipeline as a job to the same environment. On STACKIT, you probably use the [STACKIT Kubernetes Engine (SKE)](https://www.stackit.de/en/product/kubernetes/) and launch it in a container. The [Compute Engine](https://www.stackit.de/en/product/stackit-compute-engine/) is also an option, but everything depends on the specifics of your organization. ### Search application Specialized Document Management Systems have a lot of features, but semantic search is not yet a standard. We are going to build a simple search mechanism which could be possibly integrated with the existing system. The search process is quite simple: we convert the query into a vector using the same Aleph Alpha model, and then search for the most similar documents in the Qdrant collection. The access control is also applied, so the user can only see the documents they are allowed to. We start with creating an instance of the LLM of our choice, and set the maximum number of tokens to 200, as the default value is 64, which might be too low for our purposes. ```python from langchain.llms.aleph_alpha import AlephAlpha llm = AlephAlpha( model=""luminous-extended-control"", aleph_alpha_api_key=os.environ[""ALEPH_ALPHA_API_KEY""], maximum_tokens=200, ) ``` Then, we can glue the components together and build the search process. `RetrievalQA` is a class that takes implements the Question Retrieval process, with a specified retriever and Large Language Model. The instance of `Qdrant` might be converted into a retriever, with additional filter that will be passed to the `similarity_search` method. The filter is created as [in a regular Qdrant query](../../../documentation/concepts/filtering/), with the `roles` field set to the user's roles. ```python user_roles = [""stackit"", ""aleph-alpha""] qdrant_retriever = qdrant.as_retriever( search_kwargs={ ""filter"": models.Filter( must=[ models.FieldCondition( key=""metadata.roles"", match=models.MatchAny(any=user_roles) ) ] ) } ) ``` We set the user roles to `stackit` and `aleph-alpha`, so the user can see the documents that are accessible to these customers, but not to the others. The final step is to create the `RetrievalQA` instance and use it to search over the documents, with the custom prompt. ```python from langchain.prompts import PromptTemplate from langchain.chains.retrieval_qa.base import RetrievalQA prompt_template = """""" Question: {question} Answer the question using the Source. If there's no answer, say ""NO ANSWER IN TEXT"". Source: {context} ### Response: """""" prompt = PromptTemplate( template=prompt_template, input_variables=[""context"", ""question""] ) retrieval_qa = RetrievalQA.from_chain_type( llm=llm, chain_type=""stuff"", retriever=qdrant_retriever, return_source_documents=True, chain_type_kwargs={""prompt"": prompt}, ) response = retrieval_qa.invoke({""query"": ""What are the rules of performing the audit?""}) print(response[""result""]) ``` Output: ```text The rules for performing the audit are as follows: 1. The Customer must inform the Contractor in good time (usually at least two weeks in advance) about any and all circumstances related to the performance of the audit. 2. The Customer is entitled to perform one audit per calendar year. Any additional audits may be performed if agreed with the Contractor and are subject to reimbursement of expenses. 3. If the Customer engages a third party to perform the audit, the Customer must obtain the Contractor's consent and ensure that the confidentiality agreements with the third party are observed. 4. The Contractor may object to any third party deemed unsuitable. ``` There are some other parameters that might be tuned to optimize the search process. The `k` parameter defines how many documents should be returned, but Langchain allows us also to control the retrieval process by choosing the type of the search operation. The default is `similarity`, which is just vector search, but we can also use `mmr` which stands for Maximal Marginal Relevance. It is a technique to diversify the search results, so the user gets the most relevant documents, but also the most diverse ones. The `mmr` search is slower, but might be more user-friendly. Our search application is ready, and we can deploy it to the same environment as the ingestion pipeline on STACKIT. The same rules apply here, so you can use the SKE or the Compute Engine, depending on the specifics of your organization. ## Next steps We built a solid foundation for the contract management system, but there is still a lot to do. If you want to make the system production-ready, you should consider implementing the mechanism into your existing stack. If you have any questions, feel free to ask on our [Discord community](https://qdrant.to/discord).",documentation/examples/rag-contract-management-stackit-aleph-alpha.md "--- title: Implement Cohere RAG connector weight: 24 aliases: - /documentation/tutorials/cohere-rag-connector/ --- # Implement custom connector for Cohere RAG | Time: 45 min | Level: Intermediate | | | |--------------|---------------------|-|----| The usual approach to implementing Retrieval Augmented Generation requires users to build their prompts with the relevant context the LLM may rely on, and manually sending them to the model. Cohere is quite unique here, as their models can now speak to the external tools and extract meaningful data on their own. You can virtually connect any data source and let the Cohere LLM know how to access it. Obviously, vector search goes well with LLMs, and enabling semantic search over your data is a typical case. Cohere RAG has lots of interesting features, such as inline citations, which help you to refer to the specific parts of the documents used to generate the response. ![Cohere RAG citations](/documentation/tutorials/cohere-rag-connector/cohere-rag-citations.png) *Source: https://docs.cohere.com/docs/retrieval-augmented-generation-rag* The connectors have to implement a specific interface and expose the data source as HTTP REST API. Cohere documentation [describes a general process of creating a connector](https://docs.cohere.com/docs/creating-and-deploying-a-connector). This tutorial guides you step by step on building such a service around Qdrant. ## Qdrant connector You probably already have some collections you would like to bring to the LLM. Maybe your pipeline was set up using some of the popular libraries such as Langchain, Llama Index, or Haystack. Cohere connectors may implement even more complex logic, e.g. hybrid search. In our case, we are going to start with a fresh Qdrant collection, index data using Cohere Embed v3, build the connector, and finally connect it with the [Command-R model](https://txt.cohere.com/command-r/). ### Building the collection First things first, let's build a collection and configure it for the Cohere `embed-multilingual-v3.0` model. It produces 1024-dimensional embeddings, and we can choose any of the distance metrics available in Qdrant. Our connector will act as a personal assistant of a software engineer, and it will expose our notes to suggest the priorities or actions to perform. ```python from qdrant_client import QdrantClient, models client = QdrantClient( ""https://my-cluster.cloud.qdrant.io:6333"", api_key=""my-api-key"", ) client.create_collection( collection_name=""personal-notes"", vectors_config=models.VectorParams( size=1024, distance=models.Distance.DOT, ), ) ``` Our notes will be represented as simple JSON objects with a `title` and `text` of the specific note. The embeddings will be created from the `text` field only. ```python notes = [ { ""title"": ""Project Alpha Review"", ""text"": ""Review the current progress of Project Alpha, focusing on the integration of the new API. Check for any compatibility issues with the existing system and document the steps needed to resolve them. Schedule a meeting with the development team to discuss the timeline and any potential roadblocks."" }, { ""title"": ""Learning Path Update"", ""text"": ""Update the learning path document with the latest courses on React and Node.js from Pluralsight. Schedule at least 2 hours weekly to dedicate to these courses. Aim to complete the React course by the end of the month and the Node.js course by mid-next month."" }, { ""title"": ""Weekly Team Meeting Agenda"", ""text"": ""Prepare the agenda for the weekly team meeting. Include the following topics: project updates, review of the sprint backlog, discussion on the new feature requests, and a brainstorming session for improving remote work practices. Send out the agenda and the Zoom link by Thursday afternoon."" }, { ""title"": ""Code Review Process Improvement"", ""text"": ""Analyze the current code review process to identify inefficiencies. Consider adopting a new tool that integrates with our version control system. Explore options such as GitHub Actions for automating parts of the process. Draft a proposal with recommendations and share it with the team for feedback."" }, { ""title"": ""Cloud Migration Strategy"", ""text"": ""Draft a plan for migrating our current on-premise infrastructure to the cloud. The plan should cover the selection of a cloud provider, cost analysis, and a phased migration approach. Identify critical applications for the first phase and any potential risks or challenges. Schedule a meeting with the IT department to discuss the plan."" }, { ""title"": ""Quarterly Goals Review"", ""text"": ""Review the progress towards the quarterly goals. Update the documentation to reflect any completed objectives and outline steps for any remaining goals. Schedule individual meetings with team members to discuss their contributions and any support they might need to achieve their targets."" }, { ""title"": ""Personal Development Plan"", ""text"": ""Reflect on the past quarter's achievements and areas for improvement. Update the personal development plan to include new technical skills to learn, certifications to pursue, and networking events to attend. Set realistic timelines and check-in points to monitor progress."" }, { ""title"": ""End-of-Year Performance Reviews"", ""text"": ""Start preparing for the end-of-year performance reviews. Collect feedback from peers and managers, review project contributions, and document achievements. Consider areas for improvement and set goals for the next year. Schedule preliminary discussions with each team member to gather their self-assessments."" }, { ""title"": ""Technology Stack Evaluation"", ""text"": ""Conduct an evaluation of our current technology stack to identify any outdated technologies or tools that could be replaced for better performance and productivity. Research emerging technologies that might benefit our projects. Prepare a report with findings and recommendations to present to the management team."" }, { ""title"": ""Team Building Event Planning"", ""text"": ""Plan a team-building event for the next quarter. Consider activities that can be done remotely, such as virtual escape rooms or online game nights. Survey the team for their preferences and availability. Draft a budget proposal for the event and submit it for approval."" } ] ``` Storing the embeddings along with the metadata is fairly simple. ```python import cohere import uuid cohere_client = cohere.Client(api_key=""my-cohere-api-key"") response = cohere_client.embed( texts=[ note.get(""text"") for note in notes ], model=""embed-multilingual-v3.0"", input_type=""search_document"", ) client.upload_points( collection_name=""personal-notes"", points=[ models.PointStruct( id=uuid.uuid4().hex, vector=embedding, payload=note, ) for note, embedding in zip(notes, response.embeddings) ] ) ``` Our collection is now ready to be searched over. In the real world, the set of notes would be changing over time, so the ingestion process won't be as straightforward. This data is not yet exposed to the LLM, but we will build the connector in the next step. ### Connector web service [FastAPI](https://fastapi.tiangolo.com/) is a modern web framework and perfect a choice for a simple HTTP API. We are going to use it for the purposes of our connector. There will be just one endpoint, as required by the model. It will accept POST requests at the `/search` path. There is a single `query` parameter required. Let's define a corresponding model. ```python from pydantic import BaseModel class SearchQuery(BaseModel): query: str ``` RAG connector does not have to return the documents in any specific format. There are [some good practices to follow](https://docs.cohere.com/docs/creating-and-deploying-a-connector#configure-the-connection-between-the-connector-and-the-chat-api), but Cohere models are quite flexible here. Results just have to be returned as JSON, with a list of objects in a `results` property of the output. We will use the same document structure as we did for the Qdrant payloads, so there is no conversion required. That requires two additional models to be created. ```python from typing import List class Document(BaseModel): title: str text: str class SearchResults(BaseModel): results: List[Document] ``` Once our model classes are ready, we can implement the logic that will get the query and provide the notes that are relevant to it. Please note the LLM is not going to define the number of documents to be returned. That's completely up to you how many of them you want to bring to the context. There are two services we need to interact with - Qdrant server and Cohere API. FastAPI has a concept of a [dependency injection](https://fastapi.tiangolo.com/tutorial/dependencies/#dependencies), and we will use it to provide both clients into the implementation. In case of queries, we need to set the `input_type` to `search_query` in the calls to Cohere API. ```python from fastapi import FastAPI, Depends from typing import Annotated app = FastAPI() def client() -> QdrantClient: return QdrantClient(config.QDRANT_URL, api_key=config.QDRANT_API_KEY) def cohere_client() -> cohere.Client: return cohere.Client(api_key=config.COHERE_API_KEY) @app.post(""/search"") def search( query: SearchQuery, client: Annotated[QdrantClient, Depends(client)], cohere_client: Annotated[cohere.Client, Depends(cohere_client)], ) -> SearchResults: response = cohere_client.embed( texts=[query.query], model=""embed-multilingual-v3.0"", input_type=""search_query"", ) results = client.query_points( collection_name=""personal-notes"", query=response.embeddings[0], limit=2, ).points return SearchResults( results=[ Document(**point.payload) for point in results ] ) ``` Our app might be launched locally for the development purposes, given we have the `uvicorn` server installed: ```shell uvicorn main:app ``` FastAPI exposes an interactive documentation at `http://localhost:8000/docs`, where we can test our endpoint. The `/search` endpoint is available there. ![FastAPI documentation](/documentation/tutorials/cohere-rag-connector/fastapi-openapi.png) We can interact with it and check the documents that will be returned for a specific query. For example, we want to know recall what we are supposed to do regarding the infrastructure for your projects. ```shell curl -X ""POST"" \ -H ""Content-type: application/json"" \ -d '{""query"": ""Is there anything I have to do regarding the project infrastructure?""}' \ ""http://localhost:8000/search"" ``` The output should look like following: ```json { ""results"": [ { ""title"": ""Cloud Migration Strategy"", ""text"": ""Draft a plan for migrating our current on-premise infrastructure to the cloud. The plan should cover the selection of a cloud provider, cost analysis, and a phased migration approach. Identify critical applications for the first phase and any potential risks or challenges. Schedule a meeting with the IT department to discuss the plan."" }, { ""title"": ""Project Alpha Review"", ""text"": ""Review the current progress of Project Alpha, focusing on the integration of the new API. Check for any compatibility issues with the existing system and document the steps needed to resolve them. Schedule a meeting with the development team to discuss the timeline and any potential roadblocks."" } ] } ``` ### Connecting to Command-R Our web service is implemented, yet running only on our local machine. It has to be exposed to the public before Command-R can interact with it. For a quick experiment, it might be enough to set up tunneling using services such as [ngrok](https://ngrok.com/). We won't cover all the details in the tutorial, but their [Quickstart](https://ngrok.com/docs/guides/getting-started/) is a great resource describing the process step-by-step. Alternatively, you can also deploy the service with a public URL. Once it's done, we can create the connector first, and then tell the model to use it, while interacting through the chat API. Creating a connector is a single call to Cohere client: ```python connector_response = cohere_client.connectors.create( name=""personal-notes"", url=""https:/this-is-my-domain.app/search"", ) ``` The `connector_response.connector` will be a descriptor, with `id` being one of the attributes. We'll use this identifier for our interactions like this: ```python response = cohere_client.chat( message=( ""Is there anything I have to do regarding the project infrastructure? "" ""Please mention the tasks briefly."" ), connectors=[ cohere.ChatConnector(id=connector_response.connector.id) ], model=""command-r"", ) ``` We changed the `model` to `command-r`, as this is currently the best Cohere model available to public. The `response.text` is the output of the model: ```text Here are some of the tasks related to project infrastructure that you might have to perform: - You need to draft a plan for migrating your on-premise infrastructure to the cloud and come up with a plan for the selection of a cloud provider, cost analysis, and a gradual migration approach. - It's important to evaluate your current technology stack to identify any outdated technologies. You should also research emerging technologies and the benefits they could bring to your projects. ``` You only need to create a specific connector once! Please do not call `cohere_client.connectors.create` for every single message you send to the `chat` method. ## Wrapping up We have built a Cohere RAG connector that integrates with your existing knowledge base stored in Qdrant. We covered just the basic flow, but in real world scenarios, you should also consider e.g. [building the authentication system](https://docs.cohere.com/docs/connector-authentication) to prevent unauthorized access.",documentation/examples/cohere-rag-connector.md "--- title: Aleph Alpha Search weight: 16 draft: true --- # Multimodal Semantic Search with Aleph Alpha | Time: 30 min | Level: Beginner | | | | --- | ----------- | ----------- |----------- | This tutorial shows you how to run a proper multimodal semantic search system with a few lines of code, without the need to annotate the data or train your networks. In most cases, semantic search is limited to homogenous data types for both documents and queries (text-text, image-image, audio-audio, etc.). With the recent growth of multimodal architectures, it is now possible to encode different data types into the same latent space. That opens up some great possibilities, as you can finally explore non-textual data, for example visual, with text queries. In the past, this would require labelling every image with a description of what it presents. Right now, you can rely on vector embeddings, which can represent all the inputs in the same space. *Figure 1: Two examples of text-image pairs presenting a similar object, encoded by a multimodal network into the same 2D latent space. Both texts are examples of English [pangrams](https://en.wikipedia.org/wiki/Pangram). https://deepai.org generated the images with pangrams used as input prompts.* ![](/docs/integrations/aleph-alpha/2d_text_image_embeddings.png) ## Sample dataset You will be using [COCO](https://cocodataset.org/), a large-scale object detection, segmentation, and captioning dataset. It provides various splits, 330,000 images in total. For demonstration purposes, this tutorials uses the [2017 validation split](http://images.cocodataset.org/zips/train2017.zip) that contains 5000 images from different categories with total size about 19GB. ```terminal wget http://images.cocodataset.org/zips/train2017.zip ``` ## Prerequisites There is no need to curate your datasets and train the models. [Aleph Alpha](https://www.aleph-alpha.com/), already has multimodality and multilinguality already built-in. There is an [official Python client](https://github.com/Aleph-Alpha/aleph-alpha-client) that simplifies the integration. In order to enable the search capabilities, you need to build the search index to query on. For this example, you are going to vectorize the images and store their embeddings along with the filenames. You can then return the most similar files for given query. There are two things you need to set up before you start: 1. You need to have a Qdrant instance running. If you want to launch it locally, [Docker is the fastest way to do that](/documentation/quick_start/#installation). 2. You need to have a registered [Aleph Alpha account](https://app.aleph-alpha.com/). 3. Upon registration, create an API key (see: [API Tokens](https://app.aleph-alpha.com/profile)). Now you can store the Aleph Alpha API key in a variable and choose the model your are going to use. ```python aa_token = ""<< your_token >>"" model = ""luminous-base"" ``` ## Vectorize the dataset In this example, images have been extracted and are stored in the `val2017` directory: ```python from aleph_alpha_client import ( Prompt, AsyncClient, SemanticEmbeddingRequest, SemanticRepresentation, Image, ) from glob import glob ids, vectors, payloads = [], [], [] async with AsyncClient(token=aa_token) as aa_client: for i, image_path in enumerate(glob(""./val2017/*.jpg"")): # Convert the JPEG file into the embedding by calling # Aleph Alpha API prompt = Image.from_file(image_path) prompt = Prompt.from_image(prompt) query_params = { ""prompt"": prompt, ""representation"": SemanticRepresentation.Symmetric, ""compress_to_size"": 128, } query_request = SemanticEmbeddingRequest(**query_params) query_response = await aa_client.semantic_embed(request=query_request, model=model) # Finally store the id, vector and the payload ids.append(i) vectors.append(query_response.embedding) payloads.append({""filename"": image_path}) ``` ## Load embeddings into Qdrant Add all created embeddings, along with their ids and payloads into the `COCO` collection. ```python import qdrant_client from qdrant_client.models import Batch, VectorParams, Distance client = qdrant_client.QdrantClient() client.create_collection( collection_name=""COCO"", vectors_config=VectorParams( size=len(vectors[0]), distance=Distance.COSINE, ), ) client.upsert( collection_name=""COCO"", points=Batch( ids=ids, vectors=vectors, payloads=payloads, ), ) ``` ## Query the database The `luminous-base`, model can provide you the vectors for both texts and images, which means you can run both text queries and reverse image search. Assume you want to find images similar to the one below: ![An image used to query the database](/docs/integrations/aleph-alpha/visual_search_query.png) With the following code snippet create its vector embedding and then perform the lookup in Qdrant: ```python async with AsyncCliet(token=aa_token) as aa_client: prompt = ImagePrompt.from_file(""query.jpg"") prompt = Prompt.from_image(prompt) query_params = { ""prompt"": prompt, ""representation"": SemanticRepresentation.Symmetric, ""compress_to_size"": 128, } query_request = SemanticEmbeddingRequest(**query_params) query_response = await aa_client.semantic_embed(request=query_request, model=model) results = client.query_points( collection_name=""COCO"", query=query_response.embedding, limit=3, ).points print(results) ``` Here are the results: ![Visual search results](/docs/integrations/aleph-alpha/visual_search_results.png) **Note:** AlephAlpha models can provide embeddings for English, French, German, Italian and Spanish. Your search is not only multimodal, but also multilingual, without any need for translations. ```python text = ""Surfing"" async with AsyncClient(token=aa_token) as aa_client: query_params = { ""prompt"": Prompt.from_text(text), ""representation"": SemanticRepresentation.Symmetric, ""compres_to_size"": 128, } query_request = SemanticEmbeddingRequest(**query_params) query_response = await aa_client.semantic_embed(request=query_request, model=model) results = client.query_points( collection_name=""COCO"", query=query_response.embedding, limit=3, ).points print(results) ``` Here are the top 3 results for “Surfing”: ![Text search results](/docs/integrations/aleph-alpha/text_search_results.png) ",documentation/examples/aleph-alpha-search.md "--- title: Private Chatbot for Interactive Learning weight: 23 social_preview_image: /blog/hybrid-cloud-red-hat-openshift/hybrid-cloud-red-hat-openshift-tutorial.png aliases: - /documentation/tutorials/rag-chatbot-red-hat-openshift-haystack/ --- # Private Chatbot for Interactive Learning | Time: 120 min | Level: Advanced | | | --- | ----------- | ----------- |----------- | With chatbots, companies can scale their training programs to accommodate a large workforce, delivering consistent and standardized learning experiences across departments, locations, and time zones. Furthermore, having already completed their online training, corporate employees might want to refer back old course materials. Most of this information is proprietary to the company, and manually searching through an entire library of materials takes time. However, a chatbot built on this knowledge can respond in the blink of an eye. With a simple RAG pipeline, you can build a private chatbot. In this tutorial, you will combine open source tools inside of a closed infrastructure and tie them together with a reliable framework. This custom solution lets you run a chatbot without public internet access. You will be able to keep sensitive data secure without compromising privacy. ![OpenShift](/documentation/examples/student-rag-haystack-red-hat-openshift-hc/openshift-diagram.png) **Figure 1:** The LLM and Qdrant Hybrid Cloud are containerized as separate services. Haystack combines them into a RAG pipeline and exposes the API via Hayhooks. ## Components To maintain complete data isolation, we need to limit ourselves to open-source tools and use them in a private environment, such as [Red Hat OpenShift](https://www.redhat.com/en/technologies/cloud-computing/openshift). The pipeline will run internally and will be inaccessible from the internet. - **Dataset:** [Red Hat Interactive Learning Portal](https://developers.redhat.com/learn), an online library of Red Hat course materials. - **LLM:** `mistralai/Mistral-7B-Instruct-v0.1`, deployed as a standalone service on OpenShift. - **Embedding Model:** `BAAI/bge-base-en-v1.5`, lightweight embedding model deployed from within the Haystack pipeline with [FastEmbed](https://github.com/qdrant/fastembed) - **Vector DB:** [Qdrant Hybrid Cloud](https://hybrid-cloud.qdrant.tech) running on OpenShift. - **Framework:** [Haystack 2.x](https://haystack.deepset.ai/) to connect all and [Hayhooks](https://docs.haystack.deepset.ai/docs/hayhooks) to serve the app through HTTP endpoints. ### Procedure The [Haystack](https://haystack.deepset.ai/) framework leverages two pipelines, which combine our components sequentially to process data. 1. The **Indexing Pipeline** will run offline in batches, when new data is added or updated. 2. The **Search Pipeline** will retrieve information from Qdrant and use an LLM to produce an answer. > **Note:** We will define the pipelines in Python and then export them to YAML format, so that [Hayhooks](https://docs.haystack.deepset.ai/docs/hayhooks) can run them as a web service. ## Prerequisites ### Deploy the LLM to OpenShift Follow the steps in [Chapter 6. Serving large language models](https://access.redhat.com/documentation/en-us/red_hat_openshift_ai_self-managed/2.5/html/working_on_data_science_projects/serving-large-language-models_serving-large-language-models#doc-wrapper). This will download the LLM from the [HuggingFace](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1), and deploy it to OpenShift using a *single model serving platform*. Your LLM service will have a URL, which you need to store as an environment variable. ```shell export INFERENCE_ENDPOINT_URL=""http://mistral-service.default.svc.cluster.local"" ``` ```python import os os.environ[""INFERENCE_ENDPOINT_URL""] = ""http://mistral-service.default.svc.cluster.local"" ``` ### Launch Qdrant Hybrid Cloud Complete **How to Set Up Qdrant on Red Hat OpenShift**. When in Hybrid Cloud, your Qdrant instance is private and and its nodes run on the same OpenShift infrastructure as your other components. Retrieve your Qdrant URL and API key and store them as environment variables: ```shell export QDRANT_URL=""https://qdrant.example.com"" export QDRANT_API_KEY=""your-api-key"" ``` ```python os.environ[""QDRANT_URL""] = ""https://qdrant.example.com"" os.environ[""QDRANT_API_KEY""] = ""your-api-key"" ``` ## Implementation We will first create an indexing pipeline to add documents to the system. Then, the search pipeline will retrieve relevant data from our documents. After the pipelines are tested, we will export them to YAML files. ### Indexing pipeline [Haystack 2.x](https://haystack.deepset.ai/) comes packed with a lot of useful components, from data fetching, through HTML parsing, up to the vector storage. Before we start, there are a few Python packages that we need to install: ```shell pip install haystack-ai \ qdrant-client \ qdrant-haystack \ fastembed-haystack ``` Our environment is now ready, so we can jump right into the code. Let's define an empty pipeline and gradually add components to it: ```python from haystack import Pipeline indexing_pipeline = Pipeline() ``` #### Data fetching and conversion In this step, we will use Haystack's `LinkContentFetcher` to download course content from a list of URLs and store it in Qdrant for retrieval. As we don't want to store raw HTML, this tool will extract text content from each webpage. Then, the fetcher will divide them into digestible chunks, since the documents might be pretty long. Let's start with data fetching and text conversion: ```python from haystack.components.fetchers import LinkContentFetcher from haystack.components.converters import HTMLToDocument fetcher = LinkContentFetcher() converter = HTMLToDocument() indexing_pipeline.add_component(""fetcher"", fetcher) indexing_pipeline.add_component(""converter"", converter) ``` Our pipeline knows there are two components, but they are not connected yet. We need to define the flow between them: ```python indexing_pipeline.connect(""fetcher.streams"", ""converter.sources"") ``` Each component has a set of inputs and outputs which might be combined in a directed graph. The definitions of the inputs and outputs are usually provided in the documentation of the component. The `LinkContentFetcher` has the following parameters: ![Parameters of the `LinkContentFetcher`](/documentation/examples/student-rag-haystack-red-hat-openshift-hc/haystack-link-content-fetcher.png) *Source: https://docs.haystack.deepset.ai/docs/linkcontentfetcher* #### Chunking and creating the embeddings We used `HTMLToDocument` to convert the HTML sources into `Document` instances of Haystack, which is a base class containing some data to be queried. However, a single document might be too long to be processed by the embedding model, and it also carries way too much information to make the search relevant. Therefore, we need to split the document into smaller parts and convert them into embeddings. For this, we will use the `DocumentSplitter` and `FastembedDocumentEmbedder` pointed to our `BAAI/bge-base-en-v1.5` model: ```python from haystack.components.preprocessors import DocumentSplitter from haystack_integrations.components.embedders.fastembed import FastembedDocumentEmbedder splitter = DocumentSplitter(split_by=""sentence"", split_length=5, split_overlap=2) embedder = FastembedDocumentEmbedder(model=""BAAI/bge-base-en-v1.5"") embedder.warm_up() indexing_pipeline.add_component(""splitter"", splitter) indexing_pipeline.add_component(""embedder"", embedder) indexing_pipeline.connect(""converter.documents"", ""splitter.documents"") indexing_pipeline.connect(""splitter.documents"", ""embedder.documents"") ``` #### Writing data to Qdrant The splitter will be producing chunks with a maximum length of 5 sentences, with an overlap of 2 sentences. Then, these smaller portions will be converted into embeddings. Finally, we need to store our embeddings in Qdrant. ```python from haystack.utils import Secret from haystack_integrations.document_stores.qdrant import QdrantDocumentStore from haystack.components.writers import DocumentWriter document_store = QdrantDocumentStore( os.environ[""QDRANT_URL""], api_key=Secret.from_env_var(""QDRANT_API_KEY""), index=""red-hat-learning"", return_embedding=True, embedding_dim=768, ) writer = DocumentWriter(document_store=document_store) indexing_pipeline.add_component(""writer"", writer) indexing_pipeline.connect(""embedder.documents"", ""writer.documents"") ``` Our pipeline is now complete. Haystack comes with a handy visualization of the pipeline, so you can see and verify the connections between the components. It is displayed in the Jupyter notebook, but you can also export it to a file: ```python indexing_pipeline.draw(""indexing_pipeline.png"") ``` ![Structure of the indexing pipeline](/documentation/examples/student-rag-haystack-red-hat-openshift-hc/indexing_pipeline.png) #### Test the entire pipeline We can finally run it on a list of URLs to index the content in Qdrant. We have a bunch of URLs to all the Red Hat OpenShift Foundations course lessons, so let's use them: ```python course_urls = [ ""https://developers.redhat.com/learn/openshift/foundations-openshift"", ""https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:openshift-and-developer-sandbox"", ""https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:overview-web-console"", ""https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:use-terminal-window-within-red-hat-openshift-web-console"", ""https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:install-application-source-code-github-repository-using-openshift-web-console"", ""https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:install-application-linux-container-image-repository-using-openshift-web-console"", ""https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:install-application-linux-container-image-using-oc-cli-tool"", ""https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:install-application-source-code-using-oc-cli-tool"", ""https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:scale-applications-using-openshift-web-console"", ""https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:scale-applications-using-oc-cli-tool"", ""https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:work-databases-openshift-using-oc-cli-tool"", ""https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:work-databases-openshift-web-console"", ""https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:view-performance-information-using-openshift-web-console"", ] indexing_pipeline.run(data={ ""fetcher"": { ""urls"": course_urls, } }) ``` The execution might take a while, as the model needs to process all the documents. After the process is finished, we should have all the documents stored in Qdrant, ready for search. You should see a short summary of processed documents: ```shell {'writer': {'documents_written': 381}} ``` ### Search pipeline Our documents are now indexed and ready for search. The next pipeline is a bit simpler, but we still need to define a few components. Let's start again with an empty pipeline: ```python search_pipeline = Pipeline() ``` Our second process takes user input, converts it into embeddings and then searches for the most relevant documents using the query embedding. This might look familiar, but we arent working with `Document` instances anymore, since the query only accepts raw text. Thus, some of the components will be different, especially the embedder, as it has to accept a single string as an input and produce a single embedding as an output: ```python from haystack_integrations.components.embedders.fastembed import FastembedTextEmbedder from haystack_integrations.components.retrievers.qdrant import QdrantEmbeddingRetriever query_embedder = FastembedTextEmbedder(model=""BAAI/bge-base-en-v1.5"") query_embedder.warm_up() retriever = QdrantEmbeddingRetriever( document_store=document_store, # The same document store as the one used for indexing top_k=3, # Number of documents to return ) search_pipeline.add_component(""query_embedder"", query_embedder) search_pipeline.add_component(""retriever"", retriever) search_pipeline.connect(""query_embedder.embedding"", ""retriever.query_embedding"") ``` #### Run a test query If our goal was to just retrieve the relevant documents, we could stop here. Let's try the current pipeline on a simple query: ```python query = ""How to install an application using the OpenShift web console?"" search_pipeline.run(data={ ""query_embedder"": { ""text"": query } }) ``` We set the `top_k` parameter to 3, so the retriever should return the three most relevant documents. Your output should look like this: ```text { 'retriever': { 'documents': [ Document(id=867b4aa4c37a91e72dc7ff452c47972c1a46a279a7531cd6af14169bcef1441b, content: 'Install a Node.js application from GitHub using the web console The following describes the steps r...', meta: {'content_type': 'text/html', 'source_id': 'f56e8f827dda86abe67c0ba3b4b11331d896e2d4f7b2b43c74d3ce973d07be0c', 'url': 'https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:work-databases-openshift-web-console'}, score: 0.9209432), Document(id=0c74381c178597dd91335ebfde790d13bf5989b682d73bf5573c7734e6765af7, content: 'How to remove an application from OpenShift using the web console. In addition to providing the cap...', meta: {'content_type': 'text/html', 'source_id': '2a0759f3ce4a37d9f5c2af9c0ffcc80879077c102fb8e41e576e04833c9d24ce', 'url': 'https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:install-application-linux-container-image-repository-using-openshift-web-console'}, score: 0.9132109500000001), Document(id=3e5f8923a34ab05611ef20783211e5543e880c709fd6534d9c1f63576edc4061, content: 'Path resource: Install an application from source code in a GitHub repository using the OpenShift w...', meta: {'content_type': 'text/html', 'source_id': 'a4c4cd62d07c0d9d240e3289d2a1cc0a3d1127ae70704529967f715601559089', 'url': 'https://developers.redhat.com/learning/learn:openshift:foundations-openshift/resource/resources:install-application-source-code-github-repository-using-openshift-web-console'}, score: 0.912748935) ] } } ``` #### Generating the answer Retrieval should serve more than just documents. Therefore, we will need to use an LLM to generate exact answers to our question. This is the final component of our second pipeline. Haystack will create a prompt which adds your documents to the model's context. ```python from haystack.components.builders.prompt_builder import PromptBuilder from haystack.components.generators import HuggingFaceTGIGenerator prompt_builder = PromptBuilder("""""" Given the following information, answer the question. Context: {% for document in documents %} {{ document.content }} {% endfor %} Question: {{ query }} """""") llm = HuggingFaceTGIGenerator( model=""mistralai/Mistral-7B-Instruct-v0.1"", url=os.environ[""INFERENCE_ENDPOINT_URL""], generation_kwargs={ ""max_new_tokens"": 1000, # Allow longer responses }, ) search_pipeline.add_component(""prompt_builder"", prompt_builder) search_pipeline.add_component(""llm"", llm) search_pipeline.connect(""retriever.documents"", ""prompt_builder.documents"") search_pipeline.connect(""prompt_builder.prompt"", ""llm.prompt"") ``` The `PromptBuilder` is a Jinja2 template that will be filled with the documents and the query. The `HuggingFaceTGIGenerator` connects to the LLM service and generates the answer. Let's run the pipeline again: ```python query = ""How to install an application using the OpenShift web console?"" response = search_pipeline.run(data={ ""query_embedder"": { ""text"": query }, ""prompt_builder"": { ""query"": query }, }) ``` The LLM may provide multiple replies, if asked to do so, so let's iterate over and print them out: ```python for reply in response[""llm""][""replies""]: print(reply.strip()) ``` In our case there is a single response, which should be the answer to the question: ```text Answer: To install an application using the OpenShift web console, follow these steps: 1. Select +Add on the left side of the web console. 2. Identify the container image to install. 3. Using your web browser, navigate to the Developer Sandbox for Red Hat OpenShift and select Start your Sandbox for free. 4. Install an application from source code stored in a GitHub repository using the OpenShift web console. ``` Our final search pipeline might also be visualized, so we can see how the components are glued together: ```python search_pipeline.draw(""search_pipeline.png"") ``` ![Structure of the search pipeline](/documentation/examples/student-rag-haystack-red-hat-openshift-hc/search_pipeline.png) ## Deployment The pipelines are now ready, and we can export them to YAML. Hayhooks will use these files to run the pipelines as HTTP endpoints. To do this, specify both file paths and your environment variables. > Note: The indexing pipeline might be run inside your ETL tool, but search should be definitely exposed as an HTTP endpoint. Let's run it on the local machine: ```shell pip install hayhooks ``` First of all, we need to save the pipelines to the YAML file: ```python with open(""search-pipeline.yaml"", ""w"") as fp: search_pipeline.dump(fp) ``` And now we are able to run the Hayhooks service: ```shell hayhooks run ``` The command should start the service on the default port, so you can access it at `http://localhost:1416`. The pipeline is not deployed yet, but we can do it with just another command: ```shell hayhooks deploy search-pipeline.yaml ``` Once it's finished, you should be able to see the OpenAPI documentation at [http://localhost:1416/docs](http://localhost:1416/docs), and test the newly created endpoint. ![Search pipeline in the OpenAPI documentation](/documentation/examples/student-rag-haystack-red-hat-openshift-hc/hayhooks-openapi.png) Our search is now accessible through the HTTP endpoint, so we can integrate it with any other service. We can even control the other parameters, like the number of documents to return: ```shell curl -X 'POST' \ 'http://localhost:1416/search-pipeline' \ -H 'Accept: application/json' \ -H 'Content-Type: application/json' \ -d '{ ""llm"": { }, ""prompt_builder"": { ""query"": ""How can I remove an application?"" }, ""query_embedder"": { ""text"": ""How can I remove an application?"" }, ""retriever"": { ""top_k"": 5 } }' ``` The response should be similar to the one we got in the Python before: ```json { ""llm"": { ""replies"": [ ""\n\nAnswer: You can remove an application running in OpenShift by right-clicking on the circular graphic representing the application in Topology view and selecting the Delete Application text from the dialog that appears when you click the graphic’s outer ring. Alternatively, you can use the oc CLI tool to delete an installed application using the oc delete all command."" ], ""meta"": [ { ""model"": ""mistralai/Mistral-7B-Instruct-v0.1"", ""index"": 0, ""finish_reason"": ""eos_token"", ""usage"": { ""completion_tokens"": 75, ""prompt_tokens"": 642, ""total_tokens"": 717 } } ] } } ``` ## Next steps - In this example, [Red Hat OpenShift](https://www.redhat.com/en/technologies/cloud-computing/openshift) is the infrastructure of choice for proprietary chatbots. [Read more](https://access.redhat.com/documentation/en-us/red_hat_openshift_ai_self-managed/2.8) about how to host AI projects in their [extensive documentation](https://access.redhat.com/documentation/en-us/red_hat_openshift_ai_self-managed/2.8). - [Haystack's documentation](https://docs.haystack.deepset.ai/docs/kubernetes) describes [how to deploy the Hayhooks service in a Kubernetes environment](https://docs.haystack.deepset.ai/docs/kubernetes), so you can easily move it to your own OpenShift infrastructure. - If you are just getting started and need more guidance on Qdrant, read the [quickstart](/documentation/quick-start/) or try out our [beginner tutorial](/documentation/tutorials/neural-search/).",documentation/examples/rag-chatbot-red-hat-openshift-haystack.md "--- title: Blog-Reading Chatbot with GPT-4o weight: 35 social_preview_image: /blog/hybrid-cloud-scaleway/hybrid-cloud-scaleway-tutorial.png aliases: - /documentation/tutorials/rag-chatbot-scaleway/ --- # Blog-Reading Chatbot with GPT-4o | Time: 90 min | Level: Advanced |[GitHub](https://github.com/qdrant/examples/blob/master/langchain-lcel-rag/Langchain-LCEL-RAG-Demo.ipynb)| | |--------------|-----------------|--|----| In this tutorial, you will build a RAG system that combines blog content ingestion with the capabilities of semantic search. **OpenAI's GPT-4o LLM** is powerful, but scaling its use requires us to supply context systematically. RAG enhances the LLM's generation of answers by retrieving relevant documents to aid the question-answering process. This setup showcases the integration of advanced search and AI language processing to improve information retrieval and generation tasks. A notebook for this tutorial is available on [GitHub](https://github.com/qdrant/examples/blob/master/langchain-lcel-rag/Langchain-LCEL-RAG-Demo.ipynb). **Data Privacy and Sovereignty:** RAG applications often rely on sensitive or proprietary internal data. Running the entire stack within your own environment becomes crucial for maintaining control over this data. Qdrant Hybrid Cloud deployed on [Scaleway](https://www.scaleway.com/) addresses this need perfectly, offering a secure, scalable platform that still leverages the full potential of RAG. Scaleway offers serverless [Functions](https://www.scaleway.com/en/serverless-functions/) and serverless [Jobs](https://www.scaleway.com/en/serverless-jobs/), both of which are ideal for embedding creation in large-scale RAG cases. ## Components - **Cloud Host:** [Scaleway on managed Kubernetes](https://www.scaleway.com/en/kubernetes-kapsule/) for compatibility with Qdrant Hybrid Cloud. - **Vector Database:** Qdrant Hybrid Cloud as the vector search engine for retrieval. - **LLM:** GPT-4o, developed by OpenAI is utilized as the generator for producing answers. - **Framework:** [LangChain](https://www.langchain.com/) for extensive RAG capabilities. ![Architecture diagram](/documentation/examples/rag-chatbot-scaleway/architecture-diagram.png) > Langchain [supports a wide range of LLMs](https://python.langchain.com/docs/integrations/chat/), and GPT-4o is used as the main generator in this tutorial. You can easily swap it out for your preferred model that might be launched on your premises to complete the fully private setup. For the sake of simplicity, we used the OpenAI APIs, but LangChain makes the transition seamless. ## Deploying Qdrant Hybrid Cloud on Scaleway [Scaleway Kapsule](https://www.scaleway.com/en/kubernetes-kapsule/) and [Kosmos](https://www.scaleway.com/en/kubernetes-kosmos/) are managed Kubernetes services from [Scaleway](https://www.scaleway.com/en/). They abstract away the complexities of managing and operating a Kubernetes cluster. The primary difference being, Kapsule clusters are composed solely of Scaleway Instances. Whereas, a Kosmos cluster is a managed multi-cloud Kubernetes engine that allows you to connect instances from any cloud provider to a single managed Control-Plane. 1. To start using managed Kubernetes on Scaleway, follow the [platform-specific documentation](/documentation/hybrid-cloud/platform-deployment-options/#scaleway). 2. Once your Kubernetes clusters are up, [you can begin deploying Qdrant Hybrid Cloud](/documentation/hybrid-cloud/). ## Prerequisites To prepare the environment for working with Qdrant and related libraries, it's necessary to install all required Python packages. This can be done using Poetry, a tool for dependency management and packaging in Python. The code snippet imports various libraries essential for the tasks ahead, including `bs4` for parsing HTML and XML documents, `langchain` and its community extensions for working with language models and document loaders, and `Qdrant` for vector storage and retrieval. These imports lay the groundwork for utilizing Qdrant alongside other tools for natural language processing and machine learning tasks. Qdrant will be running on a specific URL and access will be restricted by the API key. Make sure to store them both as environment variables as well: ```shell export QDRANT_URL=""https://qdrant.example.com"" export QDRANT_API_KEY=""your-api-key"" ``` *Optional:* Whenever you use LangChain, you can also [configure LangSmith](https://docs.smith.langchain.com/), which will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). ```shell export LANGCHAIN_TRACING_V2=true export LANGCHAIN_API_KEY=""your-api-key"" export LANGCHAIN_PROJECT=""your-project"" # if not specified, defaults to ""default"" ``` Now you can get started: ```python import getpass import os import bs4 from langchain import hub from langchain_community.document_loaders import WebBaseLoader from langchain_qdrant import Qdrant from langchain_core.output_parsers import StrOutputParser from langchain_core.runnables import RunnablePassthrough from langchain_openai import ChatOpenAI, OpenAIEmbeddings from langchain_text_splitters import RecursiveCharacterTextSplitter ``` Set up the OpenAI API key: ```python os.environ[""OPENAI_API_KEY""] = getpass.getpass() ``` Initialize the language model: ```python llm = ChatOpenAI(model=""gpt-4o"") ``` It is here that we configure both the Embeddings and LLM. You can replace this with your own models using Ollama or other services. Scaleway has some great [L4 GPU Instances](https://www.scaleway.com/en/l4-gpu-instance/) you can use for compute here. ## Download and parse data To begin working with blog post contents, the process involves loading and parsing the HTML content. This is achieved using `urllib` and `BeautifulSoup`, which are tools designed for such tasks. After the content is loaded and parsed, it is indexed using Qdrant, a powerful tool for managing and querying vector data. The code snippet demonstrates how to load, chunk, and index the contents of a blog post by specifying the URL of the blog and the specific HTML elements to parse. This step is crucial for preparing the data for further processing and analysis with Qdrant. ```python # Load, chunk and index the contents of the blog. loader = WebBaseLoader( web_paths=(""https://lilianweng.github.io/posts/2023-06-23-agent/"",), bs_kwargs=dict( parse_only=bs4.SoupStrainer( class_=(""post-content"", ""post-title"", ""post-header"") ) ), ) docs = loader.load() ``` ### Chunking data When dealing with large documents, such as a blog post exceeding 42,000 characters, it's crucial to manage the data efficiently for processing. Many models have a limited context window and struggle with long inputs, making it difficult to extract or find relevant information. To overcome this, the document is divided into smaller chunks. This approach enhances the model's ability to process and retrieve the most pertinent sections of the document effectively. In this scenario, the document is split into chunks using the `RecursiveCharacterTextSplitter` with a specified chunk size and overlap. This method ensures that no critical information is lost between chunks. Following the splitting, these chunks are then indexed into Qdrant—a vector database for efficient similarity search and storage of embeddings. The `Qdrant.from_documents` function is utilized for indexing, with documents being the split chunks and embeddings generated through `OpenAIEmbeddings`. The entire process is facilitated within an in-memory database, signifying that the operations are performed without the need for persistent storage, and the collection is named ""lilianweng"" for reference. This chunking and indexing strategy significantly improves the management and retrieval of information from large documents, making it a practical solution for handling extensive texts in data processing workflows. ```python text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200) text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200) splits = text_splitter.split_documents(docs) vectorstore = Qdrant.from_documents( documents=splits, embedding=OpenAIEmbeddings(), collection_name=""lilianweng"", url=os.environ[""QDRANT_URL""], api_key=os.environ[""QDRANT_API_KEY""], ) ``` ## Retrieve and generate content The `vectorstore` is used as a retriever to fetch relevant documents based on vector similarity. The `hub.pull(""rlm/rag-prompt"")` function is used to pull a specific prompt from a repository, which is designed to work with retrieved documents and a question to generate a response. The `format_docs` function formats the retrieved documents into a single string, preparing them for further processing. This formatted string, along with a question, is passed through a chain of operations. Firstly, the context (formatted documents) and the question are processed by the retriever and the prompt. Then, the result is fed into a large language model (`llm`) for content generation. Finally, the output is parsed into a string format using `StrOutputParser()`. This chain of operations demonstrates a sophisticated approach to information retrieval and content generation, leveraging both the semantic understanding capabilities of vector search and the generative prowess of large language models. Now, retrieve and generate data using relevant snippets from the blogL ```python retriever = vectorstore.as_retriever() prompt = hub.pull(""rlm/rag-prompt"") def format_docs(docs): return ""\n\n"".join(doc.page_content for doc in docs) rag_chain = ( {""context"": retriever | format_docs, ""question"": RunnablePassthrough()} | prompt | llm | StrOutputParser() ) ``` ### Invoking the RAG Chain ```python rag_chain.invoke(""What is Task Decomposition?"") ``` ## Next steps: We built a solid foundation for a simple chatbot, but there is still a lot to do. If you want to make the system production-ready, you should consider implementing the mechanism into your existing stack. We recommend Our vector database can easily be hosted on [Scaleway](https://www.scaleway.com/), our trusted [Qdrant Hybrid Cloud](/documentation/hybrid-cloud/) partner. This means that Qdrant can be run from your Scaleway region, but the database itself can still be managed from within Qdrant Cloud's interface. Both products have been tested for compatibility and scalability, and we recommend their [managed Kubernetes](https://www.scaleway.com/en/kubernetes-kapsule/) service. Their French deployment regions e.g. France are excellent for network latency and data sovereignty. For hosted GPUs, try [rendering with L4 GPU instances](https://www.scaleway.com/en/l4-gpu-instance/). If you have any questions, feel free to ask on our [Discord community](https://qdrant.to/discord). ",documentation/examples/rag-chatbot-scaleway.md "--- title: Multitenancy with LlamaIndex weight: 18 aliases: - /documentation/tutorials/llama-index-multitenancy/ --- # Multitenancy with LlamaIndex If you are building a service that serves vectors for many independent users, and you want to isolate their data, the best practice is to use a single collection with payload-based partitioning. This approach is called **multitenancy**. Our guide on the [Separate Partitions](/documentation/guides/multiple-partitions/) describes how to set it up in general, but if you use [LlamaIndex](/documentation/integrations/llama-index/) as a backend, you may prefer reading a more specific instruction. So here it is! ## Prerequisites This tutorial assumes that you have already installed Qdrant and LlamaIndex. If you haven't, please run the following commands: ```bash pip install llama-index llama-index-vector-stores-qdrant ``` We are going to use a local Docker-based instance of Qdrant. If you want to use a remote instance, please adjust the code accordingly. Here is how we can start a local instance: ```bash docker run -d --name qdrant -p 6333:6333 -p 6334:6334 qdrant/qdrant:latest ``` ## Setting up LlamaIndex pipeline We are going to implement an end-to-end example of multitenant application using LlamaIndex. We'll be indexing the documentation of different Python libraries, and we definitely don't want any users to see the results coming from a library they are not interested in. In real case scenarios, this is even more dangerous, as the documents may contain sensitive information. ### Creating vector store [QdrantVectorStore](https://docs.llamaindex.ai/en/stable/examples/vector_stores/QdrantIndexDemo.html) is a wrapper around Qdrant that provides all the necessary methods to work with your vector database in LlamaIndex. Let's create a vector store for our collection. It requires setting a collection name and passing an instance of `QdrantClient`. ```python from qdrant_client import QdrantClient from llama_index.vector_stores.qdrant import QdrantVectorStore client = QdrantClient(""http://localhost:6333"") vector_store = QdrantVectorStore( collection_name=""my_collection"", client=client, ) ``` ### Defining chunking strategy and embedding model Any semantic search application requires a way to convert text queries into vectors - an embedding model. `ServiceContext` is a bundle of commonly used resources used during the indexing and querying stage in any LlamaIndex application. We can also use it to set up an embedding model - in our case, a local [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5). set up ```python from llama_index.core import ServiceContext service_context = ServiceContext.from_defaults( embed_model=""local:BAAI/bge-small-en-v1.5"", ) ``` *Note*, in case you are using Large Language Model different from OpenAI's ChatGPT, you should specify `llm` parameter for `ServiceContext`. We can also control how our documents are split into chunks, or nodes using LLamaIndex's terminology. The `SimpleNodeParser` splits documents into fixed length chunks with an overlap. The defaults are reasonable, but we can also adjust them if we want to. Both values are defined in tokens. ```python from llama_index.core.node_parser import SimpleNodeParser node_parser = SimpleNodeParser.from_defaults(chunk_size=512, chunk_overlap=32) ``` Now we also need to inform the `ServiceContext` about our choices: ```python service_context = ServiceContext.from_defaults( embed_model=""local:BAAI/bge-large-en-v1.5"", node_parser=node_parser, ) ``` Both embedding model and selected node parser will be implicitly used during the indexing and querying. ### Combining everything together The last missing piece, before we can start indexing, is the `VectorStoreIndex`. It is a wrapper around `VectorStore` that provides a convenient interface for indexing and querying. It also requires a `ServiceContext` to be initialized. ```python from llama_index.core import VectorStoreIndex index = VectorStoreIndex.from_vector_store( vector_store=vector_store, service_context=service_context ) ``` ## Indexing documents No matter how our documents are generated, LlamaIndex will automatically split them into nodes, if required, encode using selected embedding model, and then store in the vector store. Let's define some documents manually and insert them into Qdrant collection. Our documents are going to have a single metadata attribute - a library name they belong to. ```python from llama_index.core.schema import Document documents = [ Document( text=""LlamaIndex is a simple, flexible data framework for connecting custom data sources to large language models."", metadata={ ""library"": ""llama-index"", }, ), Document( text=""Qdrant is a vector database & vector similarity search engine."", metadata={ ""library"": ""qdrant"", }, ), ] ``` Now we can index them using our `VectorStoreIndex`: ```python for document in documents: index.insert(document) ``` ### Performance considerations Our documents have been split into nodes, encoded using the embedding model, and stored in the vector store. However, we don't want to allow our users to search for all the documents in the collection, but only for the documents that belong to a library they are interested in. For that reason, we need to set up the Qdrant [payload index](/documentation/concepts/indexing/#payload-index), so the search is more efficient. ```python from qdrant_client import models client.create_payload_index( collection_name=""my_collection"", field_name=""metadata.library"", field_type=models.PayloadSchemaType.KEYWORD, ) ``` The payload index is not the only thing we want to change. Since none of the search queries will be executed on the whole collection, we can also change its configuration, so the HNSW graph is not built globally. This is also done due to [performance reasons](/documentation/guides/multiple-partitions/#calibrate-performance). **You should not be changing these parameters, if you know there will be some global search operations done on the collection.** ```python client.update_collection( collection_name=""my_collection"", hnsw_config=models.HnswConfigDiff(payload_m=16, m=0), ) ``` Once both operations are completed, we can start searching for our documents. ## Querying documents with constraints Let's assume we are searching for some information about large language models, but are only allowed to use Qdrant documentation. LlamaIndex has a concept of retrievers, responsible for finding the most relevant nodes for a given query. Our `VectorStoreIndex` can be used as a retriever, with some additional constraints - in our case value of the `library` metadata attribute. ```python from llama_index.core.vector_stores.types import MetadataFilters, ExactMatchFilter qdrant_retriever = index.as_retriever( filters=MetadataFilters( filters=[ ExactMatchFilter( key=""library"", value=""qdrant"", ) ] ) ) nodes_with_scores = qdrant_retriever.retrieve(""large language models"") for node in nodes_with_scores: print(node.text, node.score) # Output: Qdrant is a vector database & vector similarity search engine. 0.60551536 ``` The description of Qdrant was the best match, even though it didn't mention large language models at all. However, it was the only document that belonged to the `qdrant` library, so there was no other choice. Let's try to search for something that is not present in the collection. Let's define another retrieve, this time for the `llama-index` library: ```python llama_index_retriever = index.as_retriever( filters=MetadataFilters( filters=[ ExactMatchFilter( key=""library"", value=""llama-index"", ) ] ) ) nodes_with_scores = llama_index_retriever.retrieve(""large language models"") for node in nodes_with_scores: print(node.text, node.score) # Output: LlamaIndex is a simple, flexible data framework for connecting custom data sources to large language models. 0.63576734 ``` The results returned by both retrievers are different, due to the different constraints, so we implemented a real multitenant search application! ",documentation/examples/llama-index-multitenancy.md "--- title: Build Prototypes weight: 19 --- # Examples | End-to-End Code Samples | Description | Stack | |---------------------------------------------------------------------------------|-------------------------------------------------------------------|---------------------------------------------| | [Multitenancy with LlamaIndex](../examples/llama-index-multitenancy/) | Handle data coming from multiple users in LlamaIndex. | Qdrant, Python, LlamaIndex | | [Implement custom connector for Cohere RAG](../examples/cohere-rag-connector/) | Bring data stored in Qdrant to Cohere RAG | Qdrant, Cohere, FastAPI | | [Chatbot for Interactive Learning](../examples/rag-chatbot-red-hat-openshift-haystack/) | Build a Private RAG Chatbot for Interactive Learning | Qdrant, Haystack, OpenShift | | [Information Extraction Engine](../examples/rag-chatbot-vultr-dspy-ollama/) | Build a Private RAG Information Extraction Engine | Qdrant, Vultr, DSPy, Ollama | | [System for Employee Onboarding](../examples/natural-language-search-oracle-cloud-infrastructure-cohere-langchain/) | Build a RAG System for Employee Onboarding | Qdrant, Cohere, LangChain | | [System for Contract Management](../examples/rag-contract-management-stackit-aleph-alpha/) | Build a Region-Specific RAG System for Contract Management | Qdrant, Aleph Alpha, STACKIT | | [Question-Answering System for Customer Support](../examples/rag-customer-support-cohere-airbyte-aws/) | Build a RAG System for AI Customer Support | Qdrant, Cohere, Airbyte, AWS | | [Hybrid Search on PDF Documents](../examples/hybrid-search-llamaindex-jinaai/) | Develop a Hybrid Search System for Product PDF Manuals | Qdrant, LlamaIndex, Jina AI | [Blog-Reading RAG Chatbot](../examples/rag-chatbot-scaleway) | Develop a RAG-based Chatbot on Scaleway and with LangChain | Qdrant, LangChain, GPT-4o | [Movie Recommendation System](../examples/recommendation-system-ovhcloud/) | Build a Movie Recommendation System with LlamaIndex and With JinaAI | Qdrant | ## Notebooks Our Notebooks offer complex instructions that are supported with a throrough explanation. Follow along by trying out the code and get the most out of each example. | Example | Description | Stack | |---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------|----------------------------| | [Intro to Semantic Search and Recommendations Systems](https://githubtocolab.com/qdrant/examples/blob/master/qdrant_101_getting_started/getting_started.ipynb) | Learn how to get started building semantic search and recommendation systems. | Qdrant | | [Search and Recommend Newspaper Articles](https://githubtocolab.com/qdrant/examples/blob/master/qdrant_101_text_data/qdrant_and_text_data.ipynb) | Work with text data to develop a semantic search and a recommendation engine for news articles. | Qdrant | | [Recommendation System for Songs](https://githubtocolab.com/qdrant/examples/blob/master/qdrant_101_audio_data/03_qdrant_101_audio.ipynb) | Use Qdrant to develop a music recommendation engine based on audio embeddings. | Qdrant | | [Image Comparison System for Skin Conditions](https://colab.research.google.com/github/qdrant/examples/blob/master/qdrant_101_image_data/04_qdrant_101_cv.ipynb) | Use Qdrant to compare challenging images with labels representing different skin diseases. | Qdrant | | [Question and Answer System with LlamaIndex](https://githubtocolab.com/qdrant/examples/blob/master/llama_index_recency/Qdrant%20and%20LlamaIndex%20%E2%80%94%20A%20new%20way%20to%20keep%20your%20Q%26A%20systems%20up-to-date.ipynb) | Combine Qdrant and LlamaIndex to create a self-updating Q&A system. | Qdrant, LlamaIndex, Cohere | | [Extractive QA System](https://githubtocolab.com/qdrant/examples/blob/master/extractive_qa/extractive-question-answering.ipynb) | Extract answers directly from context to generate highly relevant answers. | Qdrant | | [Ecommerce Reverse Image Search](https://githubtocolab.com/qdrant/examples/blob/master/ecommerce_reverse_image_search/ecommerce-reverse-image-search.ipynb) | Accept images as search queries to receive semantically appropriate answers. | Qdrant | | [Basic RAG](https://githubtocolab.com/qdrant/examples/blob/master/rag-openai-qdrant/rag-openai-qdrant.ipynb) | Basic RAG pipeline with Qdrant and OpenAI SDKs. | OpenAI, Qdrant, FastEmbed | ",documentation/examples/_index.md "--- title: RAG System for Employee Onboarding weight: 30 social_preview_image: /blog/hybrid-cloud-oracle-cloud-infrastructure/hybrid-cloud-oracle-cloud-infrastructure-tutorial.png aliases: - /documentation/tutorials/natural-language-search-oracle-cloud-infrastructure-cohere-langchain/ --- # RAG System for Employee Onboarding Public websites are a great way to share information with a wide audience. However, finding the right information can be challenging, if you are not familiar with the website's structure or the terminology used. That's what the search bar is for, but it is not always easy to formulate a query that will return the desired results, if you are not yet familiar with the content. This is even more important in a corporate environment, and for the new employees, who are just starting to learn the ropes, and don't even know how to ask the right questions yet. You may have even the best intranet pages, but onboarding is more than just reading the documentation, it is about understanding the processes. Semantic search can help with finding right resources easier, but wouldn't it be easier to just chat with the website, like you would with a colleague? Technological advancements have made it possible to interact with websites using natural language. This tutorial will guide you through the process of integrating [Cohere](https://cohere.com/)'s language models with Qdrant to enable natural language search on your documentation. We are going to use [LangChain](https://langchain.com/) as an orchestrator. Everything will be hosted on [Oracle Cloud Infrastructure (OCI)](https://www.oracle.com/cloud/), so you can scale your application as needed, and do not send your data to third parties. That is especially important when you are working with confidential or sensitive data. ## Building up the application Our application will consist of two main processes: indexing and searching. Langchain will glue everything together, as we will use a few components, including Cohere and Qdrant, as well as some OCI services. Here is a high-level overview of the architecture: ![Architecture diagram of the target system](/documentation/examples/faq-oci-cohere-langchain/architecture-diagram.png) ### Prerequisites Before we dive into the implementation, make sure to set up all the necessary accounts and tools. #### Libraries We are going to use a few Python libraries. Of course, Langchain will be our main framework, but the Cohere models on OCI are accessible via the [OCI SDK](https://docs.oracle.com/en-us/iaas/tools/python/2.125.1/). Let's install all the necessary libraries: ```shell pip install langchain oci qdrant-client langchainhub ``` #### Oracle Cloud Our application will be fully running on Oracle Cloud Infrastructure (OCI). It's up to you to choose how you want to deploy your application. Qdrant Hybrid Cloud will be running in your [Kubernetes cluster running on Oracle Cloud (OKE)](https://www.oracle.com/cloud/cloud-native/container-engine-kubernetes/), so all the processes might be also deployed there. You can get started with signing up for an account on [Oracle Cloud](https://signup.cloud.oracle.com/). Cohere models are available on OCI as a part of the [Generative AI Service](https://www.oracle.com/artificial-intelligence/generative-ai/generative-ai-service/). We need both the [Generation models](https://docs.oracle.com/en-us/iaas/Content/generative-ai/use-playground-generate.htm) and the [Embedding models](https://docs.oracle.com/en-us/iaas/Content/generative-ai/use-playground-embed.htm). Please follow the linked tutorials to grasp the basics of using Cohere models there. Accessing the models programmatically requires knowing the compartment OCID. Please refer to the [documentation that describes how to find it](https://docs.oracle.com/en-us/iaas/Content/GSG/Tasks/contactingsupport_topic-Locating_Oracle_Cloud_Infrastructure_IDs.htm#Finding_the_OCID_of_a_Compartment). For the further reference, we will assume that the compartment OCID is stored in the environment variable: ```shell export COMPARTMENT_OCID="""" ``` ```python import os os.environ[""COMPARTMENT_OCID""] = """" ``` #### Qdrant Hybrid Cloud Qdrant Hybrid Cloud running on Oracle Cloud helps you build a solution without sending your data to external services. Our documentation provides a step-by-step guide on how to [deploy Qdrant Hybrid Cloud on Oracle Cloud](/documentation/hybrid-cloud/platform-deployment-options/#oracle-cloud-infrastructure). Qdrant will be running on a specific URL and access will be restricted by the API key. Make sure to store them both as environment variables as well: ```shell export QDRANT_URL=""https://qdrant.example.com"" export QDRANT_API_KEY=""your-api-key"" ``` *Optional:* Whenever you use LangChain, you can also [configure LangSmith](https://docs.smith.langchain.com/), which will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). ```shell export LANGCHAIN_TRACING_V2=true export LANGCHAIN_API_KEY=""your-api-key"" export LANGCHAIN_PROJECT=""your-project"" # if not specified, defaults to ""default"" ``` Now you can get started: ```python import os os.environ[""QDRANT_URL""] = ""https://qdrant.example.com"" os.environ[""QDRANT_API_KEY""] = ""your-api-key"" ``` Let's create the collection that will store the indexed documents. We will use the `qdrant-client` library, and our collection will be named `oracle-cloud-website`. Our embedding model, `cohere.embed-english-v3.0`, produces embeddings of size 1024, and we have to specify that when creating the collection. ```python from qdrant_client import QdrantClient, models client = QdrantClient( location=os.environ.get(""QDRANT_URL""), api_key=os.environ.get(""QDRANT_API_KEY""), ) client.create_collection( collection_name=""oracle-cloud-website"", vectors_config=models.VectorParams( size=1024, distance=models.Distance.COSINE, ), ) ``` ### Indexing process We have all the necessary tools set up, so let's start with the indexing process. We will use the Cohere Embedding models to convert the text into vectors, and then store them in Qdrant. Langchain is integrated with OCI Generative AI Service, so we can easily access the models. Our dataset will be fairly simple, as it will consist of the questions and answers from the [Oracle Cloud Free Tier FAQ page](https://www.oracle.com/cloud/free/faq/). ![Some examples of the Oracle Cloud FAQ](/documentation/examples/faq-oci-cohere-langchain/oracle-faq.png) Questions and answers are presented in an HTML format, but we don't want to manually extract the text and adapt it for each subpage. Instead, we will use the `WebBaseLoader` that just loads the HTML content from given URL and converts it to text. ```python from langchain_community.document_loaders.web_base import WebBaseLoader loader = WebBaseLoader(""https://www.oracle.com/cloud/free/faq/"") documents = loader.load() ``` Our `documents` is a list with just a single element, which is the text of the whole page. We need to split it into meaningful parts, so we will use the `RecursiveCharacterTextSplitter` component. It will try to keep all paragraphs (and then sentences, and then words) together as long as possible, as those would generically seem to be the strongest semantically related pieces of text. The chunk size and overlap are both parameters that can be adjusted to fit the specific use case. ```python from langchain_text_splitters import RecursiveCharacterTextSplitter splitter = RecursiveCharacterTextSplitter(chunk_size=300, chunk_overlap=100) split_documents = splitter.split_documents(documents) ``` Our documents might be now indexed, but we need to convert them into vectors. Let's configure the embeddings so the `cohere.embed-english-v3.0` is used. Not all the regions support the Generative AI Service, so we need to specify the region where the models are stored. We will use the `us-chicago-1`, but please check the [documentation](https://docs.oracle.com/en-us/iaas/Content/generative-ai/overview.htm#regions) for the most up-to-date list of supported regions. ```python from langchain_community.embeddings.oci_generative_ai import OCIGenAIEmbeddings embeddings = OCIGenAIEmbeddings( model_id=""cohere.embed-english-v3.0"", service_endpoint=""https://inference.generativeai.us-chicago-1.oci.oraclecloud.com"", compartment_id=os.environ.get(""COMPARTMENT_OCID""), ) ``` Now we can embed the documents and store them in Qdrant. We will create an instance of `Qdrant` and add the split documents to the collection. ```python from langchain.vectorstores.qdrant import Qdrant qdrant = Qdrant( client=client, collection_name=""oracle-cloud-website"", embeddings=embeddings, ) qdrant.add_documents(split_documents, batch_size=20) ``` Our documents should be now indexed and ready for searching. Let's move to the next step. ### Speaking to the website The intended method of interaction with the website is through the chatbot. Large Language Model, in our case [Cohere Command](https://cohere.com/command), will be answering user's questions based on the relevant documents that Qdrant will return using the question as a query. Our LLM is also hosted on OCI, so we can access it similarly to the embedding model: ```python from langchain_community.llms.oci_generative_ai import OCIGenAI llm = OCIGenAI( model_id=""cohere.command"", service_endpoint=""https://inference.generativeai.us-chicago-1.oci.oraclecloud.com"", compartment_id=os.environ.get(""COMPARTMENT_OCID""), ) ``` Connection to Qdrant might be established in the same way as we did during the indexing process. We can use it to create a retrieval chain, which implements the question-answering process. The retrieval chain also requires an additional chain that will combine retrieved documents before sending them to an LLM. ```python from langchain.chains.combine_documents import create_stuff_documents_chain from langchain.chains.retrieval import create_retrieval_chain from langchain import hub retriever = qdrant.as_retriever() combine_docs_chain = create_stuff_documents_chain( llm=llm, # Default prompt is loaded from the hub, but we can also modify it prompt=hub.pull(""langchain-ai/retrieval-qa-chat""), ) retrieval_qa_chain = create_retrieval_chain( retriever=retriever, combine_docs_chain=combine_docs_chain, ) response = retrieval_qa_chain.invoke({""input"": ""What is the Oracle Cloud Free Tier?""}) ``` The output of the `.invoke` method is a dictionary-like structure with the query and answer, but we can also access the source documents used to generate the response. This might be useful for debugging or for further processing. ```python { 'input': 'What is the Oracle Cloud Free Tier?', 'context': [ Document( page_content='* Free Tier is generally available in regions where commercial Oracle Cloud Infrastructure service is available. See the data regions page for detailed service availability (the exact regions available for Free Tier may differ during the sign-up process). The US$300 cloud credit is available in', metadata={ 'language': 'en-US', 'source': 'https://www.oracle.com/cloud/free/faq/', 'title': ""FAQ on Oracle's Cloud Free Tier"", '_id': 'c8cf98e0-4b88-4750-be42-4157495fed2c', '_collection_name': 'oracle-cloud-website' } ), Document( page_content='Oracle Cloud Free Tier allows you to sign up for an Oracle Cloud account which provides a number of Always Free services and a Free Trial with US$300 of free credit to use on all eligible Oracle Cloud Infrastructure services for up to 30 days. The Always Free services are available for an unlimited', metadata={ 'language': 'en-US', 'source': 'https://www.oracle.com/cloud/free/faq/', 'title': ""FAQ on Oracle's Cloud Free Tier"", '_id': 'dc291430-ff7b-4181-944a-39f6e7a0de69', '_collection_name': 'oracle-cloud-website' } ), Document( page_content='Oracle Cloud Free Tier does not include SLAs. Community support through our forums is available to all customers. Customers using only Always Free resources are not eligible for Oracle Support. Limited support is available for Oracle Cloud Free Tier with Free Trial credits. After you use all of', metadata={ 'language': 'en-US', 'source': 'https://www.oracle.com/cloud/free/faq/', 'title': ""FAQ on Oracle's Cloud Free Tier"", '_id': '9e831039-7ccc-47f7-9301-20dbddd2fc07', '_collection_name': 'oracle-cloud-website' } ), Document( page_content='looking to test things before moving to cloud, a student wanting to learn, or an academic developing curriculum in the cloud, Oracle Cloud Free Tier enables you to learn, explore, build and test for free.', metadata={ 'language': 'en-US', 'source': 'https://www.oracle.com/cloud/free/faq/', 'title': ""FAQ on Oracle's Cloud Free Tier"", '_id': 'e2dc43e1-50ee-4678-8284-6df60a835cf5', '_collection_name': 'oracle-cloud-website' } ) ], 'answer': ' Oracle Cloud Free Tier is a subscription that gives you access to Always Free services and a Free Trial with $300 of credit that can be used on all eligible Oracle Cloud Infrastructure services for up to 30 days. \n\nThrough this Free Tier, you can learn, explore, build, and test for free. It is aimed at those who want to experiment with cloud services before making a commitment, as wellTheir use cases range from testing prior to cloud migration to learning and academic curriculum development. ' } ``` #### Other experiments Asking the basic questions is just the beginning. What you want to avoid is a hallucination, where the model generates an answer that is not based on the actual content. The default prompt of Langchain should already prevent this, but you might still want to check it. Let's ask a question that is not directly answered on the FAQ page: ```python response = retrieval_qa.invoke({ ""input"": ""Is Oracle Generative AI Service included in the free tier?"" }) ``` Output: > Oracle Generative AI Services are not specifically mentioned as being available in the free tier. As per the text, the > $300 free credit can be used on all eligible services for up to 30 days. To confirm if Oracle Generative AI Services > are included in the free credit offer, it is best to check the official Oracle Cloud website or contact their support. It seems that Cohere Command model could not find the exact answer in the provided documents, but it tried to interpret the context and provide a reasonable answer, without making up the information. This is a good sign that the model is not hallucinating in that case. ## Wrapping up This tutorial has shown how to integrate Cohere's language models with Qdrant to enable natural language search on your website. We have used Langchain as an orchestrator, and everything was hosted on Oracle Cloud Infrastructure (OCI). Real world would require integrating this mechanism into your organization's systems, but we built a solid foundation that can be further developed. ",documentation/examples/natural-language-search-oracle-cloud-infrastructure-cohere-langchain.md "--- title: Authentication weight: 30 --- # Authenticating to Qdrant Cloud This page shows you how to use the Qdrant Cloud Console to create a custom API key for a cluster. You will learn how to connect to your cluster using the new API key. ## Create API keys The API key is only shown once after creation. If you lose it, you will need to create a new one. However, we recommend rotating the keys from time to time. To create additional API keys do the following. 1. Go to the [Cloud Dashboard](https://qdrant.to/cloud). 2. Select **Access Management** to display available API keys, or go to the **API Keys** section of the Cluster detail page. 3. Click **Create** and choose a cluster name from the dropdown menu. > **Note:** You can create a key that provides access to multiple clusters. Select desired clusters in the dropdown box. 4. Click **OK** and retrieve your API key. ## Test cluster access After creation, you will receive a code snippet to access your cluster. Your generated request should look very similar to this one: ```bash curl \ -X GET 'https://xyz-example.eu-central.aws.cloud.qdrant.io:6333' \ --header 'api-key: ' ``` Open Terminal and run the request. You should get a response that looks like this: ```bash {""title"":""qdrant - vector search engine"",""version"":""1.8.1""} ``` > **Note:** You need to include the API key in the request header for every > request over REST or gRPC. ## Authenticate via SDK Now that you have created your first cluster and key, you might want to access Qdrant Cloud from within your application. Our official Qdrant clients for Python, TypeScript, Go, Rust, .NET and Java all support the API key parameter. ```bash curl \ -X GET https://xyz-example.eu-central.aws.cloud.qdrant.io:6333 \ --header 'api-key: ' # Alternatively, you can use the `Authorization` header with the `Bearer` prefix curl \ -X GET https://xyz-example.eu-central.aws.cloud.qdrant.io:6333 \ --header 'Authorization: Bearer ' ``` ```python from qdrant_client import QdrantClient qdrant_client = QdrantClient( ""xyz-example.eu-central.aws.cloud.qdrant.io"", api_key="""", ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""xyz-example.eu-central.aws.cloud.qdrant.io"", apiKey: """", }); ``` ```rust use qdrant_client::Qdrant; let client = Qdrant::from_url(""https://xyz-example.eu-central.aws.cloud.qdrant.io:6334"") .api_key("""") .build()?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; QdrantClient client = new QdrantClient( QdrantGrpcClient.newBuilder( ""xyz-example.eu-central.aws.cloud.qdrant.io"", 6334, true) .withApiKey("""") .build()); ``` ```csharp using Qdrant.Client; var client = new QdrantClient( host: ""xyz-example.eu-central.aws.cloud.qdrant.io"", https: true, apiKey: """" ); ``` ```go import ""github.com/qdrant/go-client/qdrant"" client, err := qdrant.NewClient(&qdrant.Config{ Host: ""xyz-example.eu-central.aws.cloud.qdrant.io"", Port: 6334, APIKey: """", UseTLS: true, }) ``` ",documentation/cloud/authentication.md "--- title: Account Setup weight: 10 aliases: --- # Setting up a Qdrant Cloud Account ## Registration There are different ways to register for a Qdrant Cloud account: * With an email address and passwordless login via email * With a Google account * With a GitHub account * By connection an enterprise SSO solution Every account is tied to an email address. You can invite additional users to your account and manage their permissions. ### Email registration 1. Register for a [Cloud account](https://cloud.qdrant.io/) with your email, Google or GitHub credentials. ## Inviting additional users to an account You can invite additional users to your account, and manage their permissions on the *Account Management* page in the Qdrant Cloud Console. ![Invitations](/documentation/cloud/invitations.png) Invited users will receive an email with an invitation link to join Qdrant Cloud. Once they signed up, they can accept the invitation from the Overview page. ![Accepting invitation](/documentation/cloud/accept-invitation.png) ## Switching between accounts If you have access to multiple accounts, you can switch between accounts with the account switcher on the top menu bar of the Qdrant Cloud Console. ![Switching between accounts](/documentation/cloud/account-switcher.png) ## Account settings You can configure your account settings in the Qdrant Cloud Console, by clicking on your account picture in the top right corner, and selecting *Profile*. The following functionality is available. ### Renaming an account If you use multiple accounts for different purposes, it is a good idea to give them descriptive names, for example *Development*, *Production*, *Testing*. You can also choose which account should be the default one, when you log in. ![Account management](/documentation/cloud/account-management.png) ### Deleting an account When you delete an account, all database clusters and associated data will be deleted. ",documentation/cloud/qdrant-cloud-setup.md "--- title: Create a Cluster weight: 20 --- # Creating a Qdrant Cloud Cluster Qdrant Cloud offers two types of clusters: **Free** and **Standard**. ## Free Clusters Free tier clusters are perfect for prototyping and testing. You don't need a credit card to join. A free tier cluster only includes 1 single node with the following resources: | Resource | Value | |------------|-------| | RAM | 1 GB | | vCPU | 0.5 | | Disk space | 4 GB | | Nodes | 1 | This configuration supports serving about 1 M vectors of 768 dimensions. To calculate your needs, refer to our documentation on [Capacity and sizing](/documentation/cloud/capacity-sizing/). The choice of cloud providers and regions is limited. It includes: - Standard Support - Basic monitoring - Basic log access - Basic alerting - Version upgrades with downtime - Only manual snapshots and restores via API - No dedicated resources If unused, free tier clusters are automatically suspended after 1 week, and deleted after 4 weeks of inactivity if not reactivated. You can always upgrade to a standard cluster with more resources and features. ## Standard Clusters On top of the Free cluster features, Standard clusters offer: - Response time and uptime SLAs - Dedicated resources - Backup and disaster recovery - Multi-node clusters for high availability - Horizontal and vertical scaling - Monitoring and log management - Zero-downtime upgrades for multi-node clusters with replication You have a broad choice of regions on AWS, Azure and Google Cloud. For payment information see [**Pricing and Payments**](/documentation/cloud/pricing-payments/). ## Create a cluster This page shows you how to use the Qdrant Cloud Console to create a custom Qdrant Cloud cluster. > **Prerequisite:** Please make sure you have provided billing information before creating a custom cluster. 1. Start in the **Clusters** section of the [Cloud Dashboard](https://cloud.qdrant.io/). 1. Select **Clusters** and then click **+ Create**. 1. In the **Create a cluster** screen select **Free** or **Standard** Most of the remaining configuration options are only available for standard clusters. 1. Select a provider. Currently, you can deploy to: - Amazon Web Services (AWS) - Google Cloud Platform (GCP) - Microsoft Azure - Your own [Hybrid Cloud](/documentation/hybrid-cloud/) Infrastructure 1. Choose your data center region or Hybrid Cloud environment. 1. Configure RAM for each node. > For more information, see our [**Capacity and Sizing**](/documentation/cloud/capacity-sizing/) guidance. 1. Choose the number of vCPUs per node. If you add more RAM, the menu provides different options for vCPUs. 1. Select the number of nodes you want the cluster to be deployed on. > Each node is automatically attached with a disk, that has enough space to store data with Qdrant's default collection configuration. 1. Select additional disk space for your deployment. > Depending on your collection configuration, you may need more disk space per RAM. For example, if you configure `on_disk: true` and only use RAM for caching. 1. Review your cluster configuration and pricing. 1. When you're ready, select **Create**. It takes some time to provision your cluster. Once provisioned, you can access your cluster on ports 443 and 6333 (REST) and 6334 (gRPC). ![Cluster configured in the UI](/docs/cloud/create-cluster-test.png) You should now see the new cluster in the **Clusters** menu. ## Next steps You will need to connect to your new Qdrant Cloud cluster. Follow [**Authentication**](/documentation/cloud/authentication/) to create one or more API keys. You can also scale your cluster both horizontally and vertically. Read more in [**Cluster Scaling**](/documentation/cloud/cluster-scaling/). If a new Qdrant version becomes available, you can upgrade your cluster. See [**Cluster Upgrades**](/documentation/cloud/cluster-upgrades/). For more information on creating and restoring backups of a cluster, see [**Backups**](/documentation/cloud/backups/). ",documentation/cloud/create-cluster.md "--- title: Cloud Support weight: 99 aliases: --- # Qdrant Cloud Support and Troubleshooting All Qdrant Cloud users are welcome to join our [Discord community](https://qdrant.to/discord/). Our Support Engineers are available to help you anytime. ![Discord](/documentation/cloud/discord.png) Paid customers can also contact support directly. Links to the support portal are available in the Qdrant Cloud Console. ![Support Portal](/documentation/cloud/support-portal.png) ",documentation/cloud/support.md "--- title: Backup Clusters weight: 61 --- # Backing up Qdrant Cloud Clusters Qdrant organizes cloud instances as clusters. On occasion, you may need to restore your cluster because of application or system failure. You may already have a source of truth for your data in a regular database. If you have a problem, you could reindex the data into your Qdrant vector search cluster. However, this process can take time. For high availability critical projects we recommend replication. It guarantees the proper cluster functionality as long as at least one replica is running. For other use-cases such as disaster recovery, you can set up automatic or self-service backups. ## Prerequisites You can back up your Qdrant clusters though the Qdrant Cloud Dashboard at https://cloud.qdrant.io. This section assumes that you've already set up your cluster, as described in the following sections: - [Create a cluster](/documentation/cloud/create-cluster/) - Set up [Authentication](/documentation/cloud/authentication/) - Configure one or more [Collections](/documentation/concepts/collections/) ## Automatic backups You can set up automatic backups of your clusters with our Cloud UI. With the procedures listed in this page, you can set up snapshots on a daily/weekly/monthly basis. You can keep as many snapshots as you need. You can restore a cluster from the snapshot of your choice. > Note: When you restore a snapshot, consider the following: > - The affected cluster is not available while a snapshot is being restored. > - If you changed the cluster setup after the copy was created, the cluster resets to the previous configuration. > - The previous configuration includes: > - CPU > - Memory > - Node count > - Qdrant version ### Configure a backup After you have taken the prerequisite steps, you can configure a backup with the [Qdrant Cloud Dashboard](https://cloud.qdrant.io). To do so, take these steps: 1. Sign in to the dashboard 1. Select Clusters. 1. Select the cluster that you want to back up. ![Select a cluster](/documentation/cloud/select-cluster.png) 1. Find and select the **Backups** tab. 1. Now you can set up a backup schedule. The **Days of Retention** is the number of days after a backup snapshot is deleted. 1. Alternatively, you can select **Backup now** to take an immediate snapshot. ![Configure a cluster backup](/documentation/cloud/backup-schedule.png) ### Restore a backup If you have a backup, it appears in the list of **Available Backups**. You can choose to restore or delete the backups of your choice. ![Restore or delete a cluster backup](/documentation/cloud/restore-delete.png) ## Backups with a snapshot Qdrant also offers a snapshot API which allows you to create a snapshot of a specific collection or your entire cluster. For more information, see our [snapshot documentation](/documentation/concepts/snapshots/). Here is how you can take a snapshot and recover a collection: 1. Take a snapshot: - For a single node cluster, call the snapshot endpoint on the exposed URL. - For a multi node cluster call a snapshot on each node of the collection. Specifically, prepend `node-{num}-` to your cluster URL. Then call the [snapshot endpoint](../../concepts/snapshots/#create-snapshot) on the individual hosts. Start with node 0. - In the response, you'll see the name of the snapshot. 2. Delete and recreate the collection. 3. Recover the snapshot: - Call the [recover endpoint](../../concepts/snapshots/#recover-in-cluster-deployment). Set a location which points to the snapshot file (`file:///qdrant/snapshots/{collection_name}/{snapshot_file_name}`) for each host. ## Backup considerations Backups are incremental. For example, if you have two backups, backup number 2 contains only the data that changed since backup number 1. This reduces the total cost of your backups. You can create multiple backup schedules. When you restore a snapshot, any changes made after the date of the snapshot are lost. ",documentation/cloud/backups.md "--- title: Configure Size & Capacity weight: 40 aliases: - capacity --- # Configuring Qdrant Cloud Cluster Capacity and Size We have been asked a lot about the optimal cluster configuration to serve a number of vectors. The only right answer is “It depends”. It depends on a number of factors and options you can choose for your collections. ## Basic configuration If you need to keep all vectors in memory for maximum performance, there is a very rough formula for estimating the needed memory size looks like this: ```text memory_size = number_of_vectors * vector_dimension * 4 bytes * 1.5 ``` Extra 50% is needed for metadata (indexes, point versions, etc.) as well as for temporary segments constructed during the optimization process. If you need to have payloads along with the vectors, it is recommended to store it on the disc, and only keep [indexed fields](../../concepts/indexing/#payload-index) in RAM. Read more about the payload storage in the [Storage](../../concepts/storage/#payload-storage) section. ## Storage focused configuration If your priority is to serve large amount of vectors with an average search latency, it is recommended to configure [mmap storage](../../concepts/storage/#configuring-memmap-storage). In this case vectors will be stored on the disc in memory-mapped files, and only the most frequently used vectors will be kept in RAM. The amount of available RAM will significantly affect the performance of the search. As a rule of thumb, if you keep 2 times less vectors in RAM, the search latency will be 2 times lower. The speed of disks is also important. [Let us know](mailto:cloud@qdrant.io) if you have special requirements for a high-volume search. ## Sub-groups oriented configuration If your use case assumes that the vectors are split into multiple collections or sub-groups based on payload values, it is recommended to configure memory-map storage. For example, if you serve search for multiple users, but each of them has an subset of vectors which they use independently. In this scenario only the active subset of vectors will be kept in RAM, which allows the fast search for the most active and recent users. In this case you can estimate required memory size as follows: ```text memory_size = number_of_active_vectors * vector_dimension * 4 bytes * 1.5 ``` ## Disk space Clusters that support vector search require significant disk space. If you're running low on disk space in your cluster, you can use the UI at [cloud.qdrant.io](https://cloud.qdrant.io/) to **Scale Up** your cluster. If you're running low on disk space, consider the following advantages: - Larger Datasets: Supports larger datasets. With vector search, larger datasets can improve the relevance and quality of search results. - Improved Indexing: Supports the use of indexing strategies such as HNSW (Hierarchical Navigable Small World). - Caching: Improves speed when you cache frequently accessed data on disk. - Backups and Redundancy: Allows more frequent backups. Perhaps the most important advantage. ",documentation/cloud/capacity-sizing.md "--- title: Scale Clusters weight: 50 --- # Scaling Qdrant Cloud Clusters The amount of data is always growing and at some point you might need to upgrade or downgrade the capacity of your cluster. ![Cluster Scaling](/documentation/cloud/cluster-scaling.png) There are different options for how it can be done. ## Vertical scaling Vertical scaling is the process of increasing the capacity of a cluster by adding or removing CPU, storage and memory resources on each database node. You can start with a minimal cluster configuration of 2GB of RAM and resize it up to 64GB of RAM (or even more if desired) over the time step by step with the growing amount of data in your application. If your cluster consists of several nodes each node will need to be scaled to the same size. Please note that vertical cluster scaling will require a short downtime period to restart your cluster. In order to avoid a downtime you can make use of data replication, which can be configured on the collection level. Vertical scaling can be initiated on the cluster detail page via the button ""scale"". If you want to scale your cluster down, the new, smaller memory size must be still sufficient to store all the data in the cluster. Otherwise, the database cluster could run out of memory and crash. Therefore, the new memory size must be at least as large as the current memory usage of the database cluster including a bit of buffer. Qdrant Cloud will automatically prevent you from scaling down the Qdrant datab ase cluster with a too small memory size. Note, that it is not possible to scale down the disk space of the cluster due to technical limitations of the underlying cloud providers. ## Horizontal scaling Vertical scaling can be an effective way to improve the performance of a cluster and extend the capacity, but it has some limitations. The main disadvantage of vertical scaling is that there are limits to how much a cluster can be expanded. At some point, adding more resources to a cluster can become impractical or cost-prohibitive. In such cases, horizontal scaling may be a more effective solution. Horizontal scaling, also known as horizontal expansion, is the process of increasing the capacity of a cluster by adding more nodes and distributing the load and data among them. The horizontal scaling at Qdrant starts on the collection level. You have to choose the number of shards you want to distribute your collection around while creating the collection. Please refer to the [sharding documentation](../../guides/distributed_deployment/#sharding) section for details. After that, you can configure, or change the amount of Qdrant database nodes within a cluster during cluster creation, or on the cluster detail page via ""Scale"" button. Important: The number of shards means the maximum amount of nodes you can add to your cluster. In the beginning, all the shards can reside on one node. With the growing amount of data you can add nodes to your cluster and move shards to the dedicated nodes using the [cluster setup API](../../guides/distributed_deployment/#cluster-scaling). Note, that it is currently not possible to horizontally scale down the cluster in the Qdrant Cloud UI. If you require a horizontal scale down, please open a support ticket. We will be glad to consult you on an optimal strategy for scaling. [Let us know](mailto:cloud@qdrant.io) your needs and decide together on a proper solution. ",documentation/cloud/cluster-scaling.md "--- title: Monitor Clusters weight: 55 --- # Monitoring Qdrant Cloud Clusters ## Telemetry Qdrant Cloud provides you with a set of metrics to monitor the health of your database cluster. You can access these metrics in the Qdrant Cloud Console in the **Metrics** and **Request** sections of the cluster details page. ## Logs Logs of the database cluster are available in the Qdrant Cloud Console in the **Logs** section of the cluster details page. ## Alerts You will receive automatic alerts via email before your cluster reaches the currently configured memory or storage limits, including recommendations for scaling your cluster. ",documentation/cloud/cluster-monitoring.md "--- title: Billing & Payments weight: 65 aliases: - aws-marketplace - gcp-marketplace - azure-marketplace --- # Qdrant Cloud Billing & Payments Qdrant database clusters in Qdrant Cloud are priced based on CPU, memory, and disk storage usage. To get a clearer idea for the pricing structure, based on the amounts of vectors you want to store, please use our [Pricing Calculator](https://cloud.qdrant.io/calculator). ## Billing You can pay for your Qdrant Cloud database clusters either with a credit card or through an AWS, GCP, or Azure Marketplace subscription. Your payment method is charged at the beginning of each month for the previous month's usage. There is no difference in pricing between the different payment methods. If you choose to pay through a marketplace, the Qdrant Cloud usage costs are added as usage units to your existing billing for your cloud provider services. A detailed breakdown of your usage is available in the Qdrant Cloud Console. Note: Even if you pay using a marketplace subscription, your database clusters will still be deployed into Qdrant-owned infrastructure. The setup and management of Qdrant database clusters will also still be done via the Qdrant Cloud Console UI. If you wish to deploy Qdrant database clusters into your own environment from Qdrant Cloud then we recommend our [Hybrid Cloud](/documentation/hybrid-cloud/) solution. ![Payment Options](/documentation/cloud/payment-options.png) ### Credit Card Credit card payments are processed through Stripe. To set up a credit card, go to the Billing Details screen in the [Qdrant Cloud Console](https://cloud.qdrant.io/), select **Stripe** as the payment method, and enter your credit card details. ### AWS Marketplace Our [AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-rtphb42tydtzg) listing streamlines access to Qdrant for users who rely on Amazon Web Services for hosting and application development. To subscribe: 1. Go to Billing Details screen in the [Qdrant Cloud Console](https://cloud.qdrant.io/) 2. Select **AWS Marketplace** as the payment method. You will be redirected to the AWS Marketplace listing for Qdrant. 3. Click the bright orange button - **View purchase options**. 4. On the next screen, under Purchase, click **Subscribe**. 5. Up top, on the green banner, click **Set up your account**. You will be redirected to the Billing Details screen in the [Qdrant Cloud Console](https://cloud.qdrant.io/). From there you can start to create Qdrant database clusters. ### GCP Marketplace Our [GCP Marketplace](https://console.cloud.google.com/marketplace/product/qdrant-public/qdrant) listing streamlines access to Qdrant for users who rely on the Google Cloud Platform for hosting and application development. To subscribe: 1. Go to Billing Details screen in the [Qdrant Cloud Console](https://cloud.qdrant.io/) 2. Select **GCP Marketplace** as the payment method. You will be redirected to the GCP Marketplace listing for Qdrant. 3. Select **Subscribe**. (If you have already subscribed, select **Manage on Provider**.) 4. On the next screen, choose options as required, and select **Subscribe**. 5. On the pop-up window that appers, select **Sign up with Qdrant**. You will be redirected to the Billing Details screen in the [Qdrant Cloud Console](https://cloud.qdrant.io/). From there you can start to create Qdrant database clusters. ### Azure Marketplace Our [Azure Marketplace](https://portal.azure.com/#view/Microsoft_Azure_Marketplace/GalleryItemDetailsBladeNopdl/id/qdrantsolutionsgmbh1698769709989.qdrant-db/selectionMode~/false/resourceGroupId//resourceGroupLocation//dontDiscardJourney~/false/selectedMenuId/home/launchingContext~/%7B%22galleryItemId%22%3A%22qdrantsolutionsgmbh1698769709989.qdrant-dbqdrant_cloud_unit%22%2C%22source%22%3A%5B%22GalleryFeaturedMenuItemPart%22%2C%22VirtualizedTileDetails%22%5D%2C%22menuItemId%22%3A%22home%22%2C%22subMenuItemId%22%3A%22Search%20results%22%2C%22telemetryId%22%3A%221df5537b-8b29-4200-80ce-0cd38c7e0e56%22%7D/searchTelemetryId/6b44fb90-7b9c-4286-aad8-59f88f3cc2ff) listing streamlines access to Qdrant for users who rely on Microsoft Azure for hosting and application development. To subscribe: 1. Go to Billing Details screen in the [Qdrant Cloud Console](https://cloud.qdrant.io/) 2. Select **Azure Marketplace** as the payment method. You will be redirected to the Azure Marketplace listing for Qdrant. 3. Select **Subscribe**. 4. On the next screen, choose options as required, and select **Review + Subscribe**. 5. After reviewing all settings, select **Subscribe**. 6. Once the SaaS subscription is created, select **Configure account now**. You will be redirected to the Billing Details screen in the [Qdrant Cloud Console](https://cloud.qdrant.io/). From there you can start to create Qdrant database clusters. ",documentation/cloud/pricing-payments.md "--- title: Upgrade Clusters weight: 55 --- # Upgrading Qdrant Cloud Clusters As soon as a new Qdrant version is available. Qdrant Cloud will show you an upgrade notification in the Cluster list and on the Cluster details page. To upgrade to a new version, go to the Cluster details page, choose the new version from the version dropdown and click **Upgrade**. ![Cluster Upgrades](/documentation/cloud/cluster-upgrades.png) If you have a multi-node cluster and if your collections have a replication factor of at least **2**, the upgrade process will be zero-downtime and done in a rolling fashion. You will be able to use your database cluster normally. If you have a single-node cluster or a collection with a replication factor of **1**, the upgrade process will require a short downtime period to restart your cluster with the new version. ",documentation/cloud/cluster-upgrades.md "--- title: Managed Cloud weight: 8 aliases: - /documentation/overview/qdrant-alternatives/documentation/cloud/ --- # About Qdrant Managed Cloud Qdrant Managed Cloud is our SaaS (software-as-a-service) solution, providing managed Qdrant database clusters on the cloud. We provide you the same fast and reliable similarity search engine, but without the need to maintain your own infrastructure. Transitioning to the Managed Cloud version of Qdrant does not change how you interact with the service. All you need is a [Qdrant Cloud account](https://qdrant.to/cloud/) and an [API key](/documentation/cloud/authentication/) for each request. You can also attach your own infrastructure as a Hybrid Cloud Environment. For details, see our [Hybrid Cloud](/documentation/hybrid-cloud/) documentation. ## Cluster configuration Each database cluster comes pre-configured with the following tools, features, and support services: - Allows the creation of highly available clusters with automatic failover. - Supports upgrades to later versions of Qdrant as they are released. - Upgrades are zero-downtime on highly available clusters. - Includes monitoring and logging to observe the health of each cluster. - Horizontally and vertically scalable. - Available natively on AWS and GCP, and Azure. - Available on your own infrastructure and other providers if you use the Hybrid Cloud. ## Getting started with Qdrant Cloud To get started with Qdrant Cloud: 1. [**Set up an account**](/documentation/cloud/qdrant-cloud-setup/) 2. [**Create a Qdrant cluster**](/documentation/cloud/create-cluster/) ",documentation/cloud/_index.md "--- title: Storage weight: 80 aliases: - ../storage --- # Storage All data within one collection is divided into segments. Each segment has its independent vector and payload storage as well as indexes. Data stored in segments usually do not overlap. However, storing the same point in different segments will not cause problems since the search contains a deduplication mechanism. The segments consist of vector and payload storages, vector and payload [indexes](../indexing/), and id mapper, which stores the relationship between internal and external ids. A segment can be `appendable` or `non-appendable` depending on the type of storage and index used. You can freely add, delete and query data in the `appendable` segment. With `non-appendable` segment can only read and delete data. The configuration of the segments in the collection can be different and independent of one another, but at least one `appendable' segment must be present in a collection. ## Vector storage Depending on the requirements of the application, Qdrant can use one of the data storage options. The choice has to be made between the search speed and the size of the RAM used. **In-memory storage** - Stores all vectors in RAM, has the highest speed since disk access is required only for persistence. **Memmap storage** - Creates a virtual address space associated with the file on disk. [Wiki](https://en.wikipedia.org/wiki/Memory-mapped_file). Mmapped files are not directly loaded into RAM. Instead, they use page cache to access the contents of the file. This scheme allows flexible use of available memory. With sufficient RAM, it is almost as fast as in-memory storage. ### Configuring Memmap storage There are two ways to configure the usage of memmap(also known as on-disk) storage: - Set up `on_disk` option for the vectors in the collection create API: *Available as of v1.2.0* ```http PUT /collections/{collection_name} { ""vectors"": { ""size"": 768, ""distance"": ""Cosine"", ""on_disk"": true } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.create_collection( collection_name=""{collection_name}"", vectors_config=models.VectorParams( size=768, distance=models.Distance.COSINE, on_disk=True ), ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.createCollection(""{collection_name}"", { vectors: { size: 768, distance: ""Cosine"", on_disk: true, }, }); ``` ```rust use qdrant_client::qdrant::{CreateCollectionBuilder, Distance, VectorParamsBuilder}; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .create_collection( CreateCollectionBuilder::new(""{collection_name}"") .vectors_config(VectorParamsBuilder::new(768, Distance::Cosine).on_disk(true)), ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.VectorParams; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .createCollectionAsync( ""{collection_name}"", VectorParams.newBuilder() .setSize(768) .setDistance(Distance.Cosine) .setOnDisk(true) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.CreateCollectionAsync( ""{collection_name}"", new VectorParams { Size = 768, Distance = Distance.Cosine, OnDisk = true } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.CreateCollection(context.Background(), &qdrant.CreateCollection{ CollectionName: ""{collection_name}"", VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{ Size: 768, Distance: qdrant.Distance_Cosine, OnDisk: qdrant.PtrOf(true), }), }) ``` This will create a collection with all vectors immediately stored in memmap storage. This is the recommended way, in case your Qdrant instance operates with fast disks and you are working with large collections. - Set up `memmap_threshold_kb` option (deprecated). This option will set the threshold after which the segment will be converted to memmap storage. There are two ways to do this: 1. You can set the threshold globally in the [configuration file](../../guides/configuration/). The parameter is called `memmap_threshold_kb`. 2. You can set the threshold for each collection separately during [creation](../collections/#create-collection) or [update](../collections/#update-collection-parameters). ```http PUT /collections/{collection_name} { ""vectors"": { ""size"": 768, ""distance"": ""Cosine"" }, ""optimizers_config"": { ""memmap_threshold"": 20000 } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.create_collection( collection_name=""{collection_name}"", vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE), optimizers_config=models.OptimizersConfigDiff(memmap_threshold=20000), ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.createCollection(""{collection_name}"", { vectors: { size: 768, distance: ""Cosine"", }, optimizers_config: { memmap_threshold: 20000, }, }); ``` ```rust use qdrant_client::qdrant::{ CreateCollectionBuilder, Distance, OptimizersConfigDiffBuilder, VectorParamsBuilder, }; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .create_collection( CreateCollectionBuilder::new(""{collection_name}"") .vectors_config(VectorParamsBuilder::new(768, Distance::Cosine)) .optimizers_config(OptimizersConfigDiffBuilder::default().memmap_threshold(20000)), ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.OptimizersConfigDiff; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName(""{collection_name}"") .setVectorsConfig( VectorsConfig.newBuilder() .setParams( VectorParams.newBuilder() .setSize(768) .setDistance(Distance.Cosine) .build()) .build()) .setOptimizersConfig( OptimizersConfigDiff.newBuilder().setMemmapThreshold(20000).build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.CreateCollectionAsync( collectionName: ""{collection_name}"", vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine }, optimizersConfig: new OptimizersConfigDiff { MemmapThreshold = 20000 } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.CreateCollection(context.Background(), &qdrant.CreateCollection{ CollectionName: ""{collection_name}"", VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{ Size: 768, Distance: qdrant.Distance_Cosine, }), OptimizersConfig: &qdrant.OptimizersConfigDiff{ MaxSegmentSize: qdrant.PtrOf(uint64(20000)), }, }) ``` The rule of thumb to set the memmap threshold parameter is simple: - if you have a balanced use scenario - set memmap threshold the same as `indexing_threshold` (default is 20000). In this case the optimizer will not make any extra runs and will optimize all thresholds at once. - if you have a high write load and low RAM - set memmap threshold lower than `indexing_threshold` to e.g. 10000. In this case the optimizer will convert the segments to memmap storage first and will only apply indexing after that. In addition, you can use memmap storage not only for vectors, but also for HNSW index. To enable this, you need to set the `hnsw_config.on_disk` parameter to `true` during collection [creation](../collections/#create-a-collection) or [updating](../collections/#update-collection-parameters). ```http PUT /collections/{collection_name} { ""vectors"": { ""size"": 768, ""distance"": ""Cosine"" }, ""optimizers_config"": { ""memmap_threshold"": 20000 }, ""hnsw_config"": { ""on_disk"": true } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.create_collection( collection_name=""{collection_name}"", vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE), optimizers_config=models.OptimizersConfigDiff(memmap_threshold=20000), hnsw_config=models.HnswConfigDiff(on_disk=True), ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.createCollection(""{collection_name}"", { vectors: { size: 768, distance: ""Cosine"", }, optimizers_config: { memmap_threshold: 20000, }, hnsw_config: { on_disk: true, }, }); ``` ```rust use qdrant_client::qdrant::{ CreateCollectionBuilder, Distance, HnswConfigDiffBuilder, OptimizersConfigDiffBuilder, VectorParamsBuilder, }; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .create_collection( CreateCollectionBuilder::new(""{collection_name}"") .vectors_config(VectorParamsBuilder::new(768, Distance::Cosine)) .optimizers_config(OptimizersConfigDiffBuilder::default().memmap_threshold(20000)) .hnsw_config(HnswConfigDiffBuilder::default().on_disk(true)), ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.HnswConfigDiff; import io.qdrant.client.grpc.Collections.OptimizersConfigDiff; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName(""{collection_name}"") .setVectorsConfig( VectorsConfig.newBuilder() .setParams( VectorParams.newBuilder() .setSize(768) .setDistance(Distance.Cosine) .build()) .build()) .setOptimizersConfig( OptimizersConfigDiff.newBuilder().setMemmapThreshold(20000).build()) .setHnswConfig(HnswConfigDiff.newBuilder().setOnDisk(true).build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.CreateCollectionAsync( collectionName: ""{collection_name}"", vectorsConfig: new VectorParams { Size = 768, Distance = Distance.Cosine }, optimizersConfig: new OptimizersConfigDiff { MemmapThreshold = 20000 }, hnswConfig: new HnswConfigDiff { OnDisk = true } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.CreateCollection(context.Background(), &qdrant.CreateCollection{ CollectionName: ""{collection_name}"", VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{ Size: 768, Distance: qdrant.Distance_Cosine, }), OptimizersConfig: &qdrant.OptimizersConfigDiff{ MaxSegmentSize: qdrant.PtrOf(uint64(20000)), }, HnswConfig: &qdrant.HnswConfigDiff{ OnDisk: qdrant.PtrOf(true), }, }) ``` ## Payload storage Qdrant supports two types of payload storages: InMemory and OnDisk. InMemory payload storage is organized in the same way as in-memory vectors. The payload data is loaded into RAM at service startup while disk and [RocksDB](https://rocksdb.org/) are used for persistence only. This type of storage works quite fast, but it may require a lot of space to keep all the data in RAM, especially if the payload has large values attached - abstracts of text or even images. In the case of large payload values, it might be better to use OnDisk payload storage. This type of storage will read and write payload directly to RocksDB, so it won't require any significant amount of RAM to store. The downside, however, is the access latency. If you need to query vectors with some payload-based conditions - checking values stored on disk might take too much time. In this scenario, we recommend creating a payload index for each field used in filtering conditions to avoid disk access. Once you create the field index, Qdrant will preserve all values of the indexed field in RAM regardless of the payload storage type. You can specify the desired type of payload storage with [configuration file](../../guides/configuration/) or with collection parameter `on_disk_payload` during [creation](../collections/#create-collection) of the collection. ## Versioning To ensure data integrity, Qdrant performs all data changes in 2 stages. In the first step, the data is written to the Write-ahead-log(WAL), which orders all operations and assigns them a sequential number. Once a change has been added to the WAL, it will not be lost even if a power loss occurs. Then the changes go into the segments. Each segment stores the last version of the change applied to it as well as the version of each individual point. If the new change has a sequential number less than the current version of the point, the updater will ignore the change. This mechanism allows Qdrant to safely and efficiently restore the storage from the WAL in case of an abnormal shutdown. ",documentation/concepts/storage.md "--- title: Explore weight: 55 aliases: - ../explore --- # Explore the data After mastering the concepts in [search](../search/), you can start exploring your data in other ways. Qdrant provides a stack of APIs that allow you to find similar vectors in a different fashion, as well as to find the most dissimilar ones. These are useful tools for recommendation systems, data exploration, and data cleaning. ## Recommendation API In addition to the regular search, Qdrant also allows you to search based on multiple positive and negative examples. The API is called ***recommend***, and the examples can be point IDs, so that you can leverage the already encoded objects; and, as of v1.6, you can also use raw vectors as input, so that you can create your vectors on the fly without uploading them as points. REST API - API Schema definition is available [here](https://api.qdrant.tech/api-reference/search/recommend-points) ```http POST /collections/{collection_name}/points/query { ""query"": { ""recommend"": { ""positive"": [100, 231], ""negative"": [718, [0.2, 0.3, 0.4, 0.5]], ""strategy"": ""average_vector"" } }, ""filter"": { ""must"": [ { ""key"": ""city"", ""match"": { ""value"": ""London"" } } ] } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.query_points( collection_name=""{collection_name}"", query=models.RecommendQuery( recommend=models.RecommendInput( positive=[100, 231], negative=[718, [0.2, 0.3, 0.4, 0.5]], strategy=models.RecommendStrategy.AVERAGE_VECTOR, ) ), query_filter=models.Filter( must=[ models.FieldCondition( key=""city"", match=models.MatchValue( value=""London"", ), ) ] ), limit=3, ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.query(""{collection_name}"", { query: { recommend: { positive: [100, 231], negative: [718, [0.2, 0.3, 0.4, 0.5]], strategy: ""average_vector"" } }, filter: { must: [ { key: ""city"", match: { value: ""London"", }, }, ], }, limit: 3 }); ``` ```rust use qdrant_client::qdrant::{ Condition, Filter, QueryPointsBuilder, RecommendInputBuilder, RecommendStrategy, }; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .query( QueryPointsBuilder::new(""{collection_name}"") .query( RecommendInputBuilder::default() .add_positive(100) .add_positive(231) .add_positive(vec![0.2, 0.3, 0.4, 0.5]) .add_negative(718) .strategy(RecommendStrategy::AverageVector) .build(), ) .limit(3) .filter(Filter::must([Condition::matches( ""city"", ""London"".to_string(), )])), ) .await?; ``` ```java import java.util.List; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.QueryPoints; import io.qdrant.client.grpc.Points.RecommendInput; import io.qdrant.client.grpc.Points.RecommendStrategy; import io.qdrant.client.grpc.Points.Filter; import static io.qdrant.client.ConditionFactory.matchKeyword; import static io.qdrant.client.VectorInputFactory.vectorInput; import static io.qdrant.client.QueryFactory.recommend; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client.queryAsync(QueryPoints.newBuilder() .setCollectionName(""{collection_name}"") .setQuery(recommend(RecommendInput.newBuilder() .addAllPositive(List.of(vectorInput(100), vectorInput(200), vectorInput(100.0f, 231.0f))) .addAllNegative(List.of(vectorInput(718), vectorInput(0.2f, 0.3f, 0.4f, 0.5f))) .setStrategy(RecommendStrategy.AverageVector) .build())) .setFilter(Filter.newBuilder().addMust(matchKeyword(""city"", ""London""))) .setLimit(3) .build()).get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient(""localhost"", 6334); await client.QueryAsync( collectionName: ""{collection_name}"", query: new RecommendInput { Positive = { 100, 231 }, Negative = { 718 } }, filter: MatchKeyword(""city"", ""London""), limit: 3 ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Query(context.Background(), &qdrant.QueryPoints{ CollectionName: ""{collection_name}"", Query: qdrant.NewQueryRecommend(&qdrant.RecommendInput{ Positive: []*qdrant.VectorInput{ qdrant.NewVectorInputID(qdrant.NewIDNum(100)), qdrant.NewVectorInputID(qdrant.NewIDNum(231)), }, Negative: []*qdrant.VectorInput{ qdrant.NewVectorInputID(qdrant.NewIDNum(718)), }, }), Filter: &qdrant.Filter{ Must: []*qdrant.Condition{ qdrant.NewMatch(""city"", ""London""), }, }, }) ``` Example result of this API would be ```json { ""result"": [ { ""id"": 10, ""score"": 0.81 }, { ""id"": 14, ""score"": 0.75 }, { ""id"": 11, ""score"": 0.73 } ], ""status"": ""ok"", ""time"": 0.001 } ``` The algorithm used to get the recommendations is selected from the available `strategy` options. Each of them has its own strengths and weaknesses, so experiment and choose the one that works best for your case. ### Average vector strategy The default and first strategy added to Qdrant is called `average_vector`. It preprocesses the input examples to create a single vector that is used for the search. Since the preprocessing step happens very fast, the performance of this strategy is on-par with regular search. The intuition behind this kind of recommendation is that each vector component represents an independent feature of the data, so, by averaging the examples, we should get a good recommendation. The way to produce the searching vector is by first averaging all the positive and negative examples separately, and then combining them into a single vector using the following formula: ```rust avg_positive + avg_positive - avg_negative ``` In the case of not having any negative examples, the search vector will simply be equal to `avg_positive`. This is the default strategy that's going to be set implicitly, but you can explicitly define it by setting `""strategy"": ""average_vector""` in the recommendation request. ### Best score strategy *Available as of v1.6.0* A new strategy introduced in v1.6, is called `best_score`. It is based on the idea that the best way to find similar vectors is to find the ones that are closer to a positive example, while avoiding the ones that are closer to a negative one. The way it works is that each candidate is measured against every example, then we select the best positive and best negative scores. The final score is chosen with this step formula: ```rust let score = if best_positive_score > best_negative_score { best_positive_score } else { -(best_negative_score * best_negative_score) }; ``` Since we are computing similarities to every example at each step of the search, the performance of this strategy will be linearly impacted by the amount of examples. This means that the more examples you provide, the slower the search will be. However, this strategy can be very powerful and should be more embedding-agnostic. To use this algorithm, you need to set `""strategy"": ""best_score""` in the recommendation request. #### Using only negative examples A beneficial side-effect of `best_score` strategy is that you can use it with only negative examples. This will allow you to find the most dissimilar vectors to the ones you provide. This can be useful for finding outliers in your data, or for finding the most dissimilar vectors to a given one. Combining negative-only examples with filtering can be a powerful tool for data exploration and cleaning. ### Multiple vectors *Available as of v0.10.0* If the collection was created with multiple vectors, the name of the vector should be specified in the recommendation request: ```http POST /collections/{collection_name}/points/query { ""query"": { ""recommend"": { ""positive"": [100, 231], ""negative"": [718] } }, ""using"": ""image"", ""limit"": 10 } ``` ```python client.query_points( collection_name=""{collection_name}"", query=models.RecommendQuery( recommend=models.RecommendInput( positive=[100, 231], negative=[718], ) ), using=""image"", limit=10, ) ``` ```typescript client.query(""{collection_name}"", { query: { recommend: { positive: [100, 231], negative: [718], } }, using: ""image"", limit: 10 }); ``` ```rust use qdrant_client::qdrant::{QueryPointsBuilder, RecommendInputBuilder}; client .query( QueryPointsBuilder::new(""{collection_name}"") .query( RecommendInputBuilder::default() .add_positive(100) .add_positive(231) .add_negative(718) .build(), ) .limit(10) .using(""image""), ) .await?; ``` ```java import java.util.List; import io.qdrant.client.grpc.Points.QueryPoints; import io.qdrant.client.grpc.Points.RecommendInput; import static io.qdrant.client.VectorInputFactory.vectorInput; import static io.qdrant.client.QueryFactory.recommend; client.queryAsync(QueryPoints.newBuilder() .setCollectionName(""{collection_name}"") .setQuery(recommend(RecommendInput.newBuilder() .addAllPositive(List.of(vectorInput(100), vectorInput(231))) .addAllNegative(List.of(vectorInput(718))) .build())) .setUsing(""image"") .setLimit(10) .build()).get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.QueryAsync( collectionName: ""{collection_name}"", query: new RecommendInput { Positive = { 100, 231 }, Negative = { 718 } }, usingVector: ""image"", limit: 10 ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Query(context.Background(), &qdrant.QueryPoints{ CollectionName: ""{collection_name}"", Query: qdrant.NewQueryRecommend(&qdrant.RecommendInput{ Positive: []*qdrant.VectorInput{ qdrant.NewVectorInputID(qdrant.NewIDNum(100)), qdrant.NewVectorInputID(qdrant.NewIDNum(231)), }, Negative: []*qdrant.VectorInput{ qdrant.NewVectorInputID(qdrant.NewIDNum(718)), }, }), Using: qdrant.PtrOf(""image""), }) ``` Parameter `using` specifies which stored vectors to use for the recommendation. ### Lookup vectors from another collection *Available as of v0.11.6* If you have collections with vectors of the same dimensionality, and you want to look for recommendations in one collection based on the vectors of another collection, you can use the `lookup_from` parameter. It might be useful, e.g. in the item-to-user recommendations scenario. Where user and item embeddings, although having the same vector parameters (distance type and dimensionality), are usually stored in different collections. ```http POST /collections/{collection_name}/points/query { ""query"": { ""recommend"": { ""positive"": [100, 231], ""negative"": [718] } }, ""limit"": 10, ""lookup_from"": { ""collection"": ""{external_collection_name}"", ""vector"": ""{external_vector_name}"" } } ``` ```python client.query_points( collection_name=""{collection_name}"", query=models.RecommendQuery( recommend=models.RecommendInput( positive=[100, 231], negative=[718], ) ), using=""image"", limit=10, lookup_from=models.LookupLocation( collection=""{external_collection_name}"", vector=""{external_vector_name}"" ), ) ``` ```typescript client.query(""{collection_name}"", { query: { recommend: { positive: [100, 231], negative: [718], } }, using: ""image"", limit: 10, lookup_from: { collection: ""{external_collection_name}"", vector: ""{external_vector_name}"" } }); ``` ```rust use qdrant_client::qdrant::{LookupLocationBuilder, QueryPointsBuilder, RecommendInputBuilder}; client .query( QueryPointsBuilder::new(""{collection_name}"") .query( RecommendInputBuilder::default() .add_positive(100) .add_positive(231) .add_negative(718) .build(), ) .limit(10) .using(""image"") .lookup_from( LookupLocationBuilder::new(""{external_collection_name}"") .vector_name(""{external_vector_name}""), ), ) .await?; ``` ```java import java.util.List; import io.qdrant.client.grpc.Points.LookupLocation; import io.qdrant.client.grpc.Points.QueryPoints; import io.qdrant.client.grpc.Points.RecommendInput; import static io.qdrant.client.VectorInputFactory.vectorInput; import static io.qdrant.client.QueryFactory.recommend; client.queryAsync(QueryPoints.newBuilder() .setCollectionName(""{collection_name}"") .setQuery(recommend(RecommendInput.newBuilder() .addAllPositive(List.of(vectorInput(100), vectorInput(231))) .addAllNegative(List.of(vectorInput(718))) .build())) .setUsing(""image"") .setLimit(10) .setLookupFrom( LookupLocation.newBuilder() .setCollectionName(""{external_collection_name}"") .setVectorName(""{external_vector_name}"") .build()) .build()).get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.QueryAsync( collectionName: ""{collection_name}"", query: new RecommendInput { Positive = { 100, 231 }, Negative = { 718 } }, usingVector: ""image"", limit: 10, lookupFrom: new LookupLocation { CollectionName = ""{external_collection_name}"", VectorName = ""{external_vector_name}"", } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Query(context.Background(), &qdrant.QueryPoints{ CollectionName: ""{collection_name}"", Query: qdrant.NewQueryRecommend(&qdrant.RecommendInput{ Positive: []*qdrant.VectorInput{ qdrant.NewVectorInputID(qdrant.NewIDNum(100)), qdrant.NewVectorInputID(qdrant.NewIDNum(231)), }, Negative: []*qdrant.VectorInput{ qdrant.NewVectorInputID(qdrant.NewIDNum(718)), }, }), Using: qdrant.PtrOf(""image""), LookupFrom: &qdrant.LookupLocation{ CollectionName: ""{external_collection_name}"", VectorName: qdrant.PtrOf(""{external_vector_name}""), }, }) ``` Vectors are retrieved from the external collection by ids provided in the `positive` and `negative` lists. These vectors then used to perform the recommendation in the current collection, comparing against the ""using"" or default vector. ## Batch recommendation API *Available as of v0.10.0* Similar to the batch search API in terms of usage and advantages, it enables the batching of recommendation requests. ```http POST /collections/{collection_name}/query/batch { ""searches"": [ { ""query"": { ""recommend"": { ""positive"": [100, 231], ""negative"": [718] } }, ""filter"": { ""must"": [ { ""key"": ""city"", ""match"": { ""value"": ""London"" } } ] }, ""limit"": 10 }, { ""query"": { ""recommend"": { ""positive"": [200, 67], ""negative"": [300] } }, ""filter"": { ""must"": [ { ""key"": ""city"", ""match"": { ""value"": ""London"" } } ] }, ""limit"": 10 } ] } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") filter_ = models.Filter( must=[ models.FieldCondition( key=""city"", match=models.MatchValue( value=""London"", ), ) ] ) recommend_queries = [ models.QueryRequest( query=models.RecommendQuery( recommend=models.RecommendInput(positive=[100, 231], negative=[718]) ), filter=filter_, limit=3, ), models.QueryRequest( query=models.RecommendQuery( recommend=models.RecommendInput(positive=[200, 67], negative=[300]) ), filter=filter_, limit=3, ), ] client.query_batch_points( collection_name=""{collection_name}"", requests=recommend_queries ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); const filter = { must: [ { key: ""city"", match: { value: ""London"", }, }, ], }; const searches = [ { query: { recommend: { positive: [100, 231], negative: [718] } }, filter, limit: 3, }, { query: { recommend: { positive: [200, 67], negative: [300] } }, filter, limit: 3, }, ]; client.queryBatch(""{collection_name}"", { searches, }); ``` ```rust use qdrant_client::qdrant::{ Condition, Filter, QueryBatchPointsBuilder, QueryPointsBuilder, RecommendInputBuilder, }; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; let filter = Filter::must([Condition::matches(""city"", ""London"".to_string())]); let recommend_queries = vec![ QueryPointsBuilder::new(""{collection_name}"") .query( RecommendInputBuilder::default() .add_positive(100) .add_positive(231) .add_negative(718) .build(), ) .filter(filter.clone()) .build(), QueryPointsBuilder::new(""{collection_name}"") .query( RecommendInputBuilder::default() .add_positive(200) .add_positive(67) .add_negative(300) .build(), ) .filter(filter) .build(), ]; client .query_batch(QueryBatchPointsBuilder::new( ""{collection_name}"", recommend_queries, )) .await?; ``` ```java import java.util.List; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.Filter; import io.qdrant.client.grpc.Points.QueryPoints; import io.qdrant.client.grpc.Points.RecommendInput; import static io.qdrant.client.ConditionFactory.matchKeyword; import static io.qdrant.client.VectorInputFactory.vectorInput; import static io.qdrant.client.QueryFactory.recommend; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); Filter filter = Filter.newBuilder().addMust(matchKeyword(""city"", ""London"")).build(); List recommendQueries = List.of( QueryPoints.newBuilder() .setCollectionName(""{collection_name}"") .setQuery(recommend( RecommendInput.newBuilder() .addAllPositive(List.of(vectorInput(100), vectorInput(231))) .addAllNegative(List.of(vectorInput(731))) .build())) .setFilter(filter) .setLimit(3) .build(), QueryPoints.newBuilder() .setCollectionName(""{collection_name}"") .setQuery(recommend( RecommendInput.newBuilder() .addAllPositive(List.of(vectorInput(200), vectorInput(67))) .addAllNegative(List.of(vectorInput(300))) .build())) .setFilter(filter) .setLimit(3) .build()); client.queryBatchAsync(""{collection_name}"", recommendQueries).get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient(""localhost"", 6334); var filter = MatchKeyword(""city"", ""london""); await client.QueryBatchAsync( collectionName: ""{collection_name}"", queries: [ new QueryPoints() { CollectionName = ""{collection_name}"", Query = new RecommendInput { Positive = { 100, 231 }, Negative = { 718 }, }, Limit = 3, Filter = filter, }, new QueryPoints() { CollectionName = ""{collection_name}"", Query = new RecommendInput { Positive = { 200, 67 }, Negative = { 300 }, }, Limit = 3, Filter = filter, } ] ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) filter := qdrant.Filter{ Must: []*qdrant.Condition{ qdrant.NewMatch(""city"", ""London""), }, } client.QueryBatch(context.Background(), &qdrant.QueryBatchPoints{ CollectionName: ""{collection_name}"", QueryPoints: []*qdrant.QueryPoints{ { CollectionName: ""{collection_name}"", Query: qdrant.NewQueryRecommend(&qdrant.RecommendInput{ Positive: []*qdrant.VectorInput{ qdrant.NewVectorInputID(qdrant.NewIDNum(100)), qdrant.NewVectorInputID(qdrant.NewIDNum(231)), }, Negative: []*qdrant.VectorInput{ qdrant.NewVectorInputID(qdrant.NewIDNum(718)), }, }, ), Filter: &filter, }, { CollectionName: ""{collection_name}"", Query: qdrant.NewQueryRecommend(&qdrant.RecommendInput{ Positive: []*qdrant.VectorInput{ qdrant.NewVectorInputID(qdrant.NewIDNum(200)), qdrant.NewVectorInputID(qdrant.NewIDNum(67)), }, Negative: []*qdrant.VectorInput{ qdrant.NewVectorInputID(qdrant.NewIDNum(300)), }, }, ), Filter: &filter, }, }, }, ) ``` The result of this API contains one array per recommendation requests. ```json { ""result"": [ [ { ""id"": 10, ""score"": 0.81 }, { ""id"": 14, ""score"": 0.75 }, { ""id"": 11, ""score"": 0.73 } ], [ { ""id"": 1, ""score"": 0.92 }, { ""id"": 3, ""score"": 0.89 }, { ""id"": 9, ""score"": 0.75 } ] ], ""status"": ""ok"", ""time"": 0.001 } ``` ## Discovery API *Available as of v1.7* REST API Schema definition available [here](https://api.qdrant.tech/api-reference/search/discover-points) In this API, Qdrant introduces the concept of `context`, which is used for splitting the space. Context is a set of positive-negative pairs, and each pair divides the space into positive and negative zones. In that mode, the search operation prefers points based on how many positive zones they belong to (or how much they avoid negative zones). The interface for providing context is similar to the recommendation API (ids or raw vectors). Still, in this case, they need to be provided in the form of positive-negative pairs. Discovery API lets you do two new types of search: - **Discovery search**: Uses the context (the pairs of positive-negative vectors) and a target to return the points more similar to the target, but constrained by the context. - **Context search**: Using only the context pairs, get the points that live in the best zone, where loss is minimized The way positive and negative examples should be arranged in the context pairs is completely up to you. So you can have the flexibility of trying out different permutation techniques based on your model and data. ### Discovery search This type of search works specially well for combining multimodal, vector-constrained searches. Qdrant already has extensive support for filters, which constrain the search based on its payload, but using discovery search, you can also constrain the vector space in which the search is performed. ![Discovery search](/docs/discovery-search.png) The formula for the discovery score can be expressed as: $$ \text{rank}(v^+, v^-) = \begin{cases} 1, &\quad s(v^+) \geq s(v^-) \\\\ -1, &\quad s(v^+) < s(v^-) \end{cases} $$ where $v^+$ represents a positive example, $v^-$ represents a negative example, and $s(v)$ is the similarity score of a vector $v$ to the target vector. The discovery score is then computed as: $$ \text{discovery score} = \text{sigmoid}(s(v_t))+ \sum \text{rank}(v_i^+, v_i^-), $$ where $s(v)$ is the similarity function, $v_t$ is the target vector, and again $v_i^+$ and $v_i^-$ are the positive and negative examples, respectively. The sigmoid function is used to normalize the score between 0 and 1 and the sum of ranks is used to penalize vectors that are closer to the negative examples than to the positive ones. In other words, the sum of individual ranks determines how many positive zones a point is in, while the closeness hierarchy comes second. Example: ```http POST /collections/{collection_name}/points/query { ""query"": { ""discover"": { ""target"": [0.2, 0.1, 0.9, 0.7], ""context"": [ { ""positive"": 100, ""negative"": 718 }, { ""positive"": 200, ""negative"": 300 } ] } }, ""limit"": 10 } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") discover_queries = [ models.QueryRequest( query=models.DiscoverQuery( discover=models.DiscoverInput( target=[0.2, 0.1, 0.9, 0.7], context=[ models.ContextPair( positive=100, negative=718, ), models.ContextPair( positive=200, negative=300, ), ], ) ), limit=10, ), ] ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.query(""{collection_name}"", { query: { discover: { target: [0.2, 0.1, 0.9, 0.7], context: [ { positive: 100, negative: 718, }, { positive: 200, negative: 300, }, ], } }, limit: 10, }); ``` ```rust use qdrant_client::qdrant::{ContextInputBuilder, DiscoverInputBuilder, QueryPointsBuilder}; use qdrant_client::Qdrant; client .query( QueryPointsBuilder::new(""{collection_name}"").query( DiscoverInputBuilder::new( vec![0.2, 0.1, 0.9, 0.7], ContextInputBuilder::default() .add_pair(100, 718) .add_pair(200, 300), ) .build(), ), ) .await?; ``` ```java import java.util.List; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.ContextInput; import io.qdrant.client.grpc.Points.ContextInputPair; import io.qdrant.client.grpc.Points.DiscoverInput; import io.qdrant.client.grpc.Points.QueryPoints; import static io.qdrant.client.VectorInputFactory.vectorInput; import static io.qdrant.client.QueryFactory.discover; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client.queryAsync(QueryPoints.newBuilder() .setCollectionName(""{collection_name}"") .setQuery(discover(DiscoverInput.newBuilder() .setTarget(vectorInput(0.2f, 0.1f, 0.9f, 0.7f)) .setContext(ContextInput.newBuilder() .addAllPairs(List.of( ContextInputPair.newBuilder() .setPositive(vectorInput(100)) .setNegative(vectorInput(718)) .build(), ContextInputPair.newBuilder() .setPositive(vectorInput(200)) .setNegative(vectorInput(300)) .build())) .build()) .build())) .setLimit(10) .build()).get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.QueryAsync( collectionName: ""{collection_name}"", query: new DiscoverInput { Target = new float[] { 0.2f, 0.1f, 0.9f, 0.7f }, Context = new ContextInput { Pairs = { new ContextInputPair { Positive = 100, Negative = 718 }, new ContextInputPair { Positive = 200, Negative = 300 }, } }, }, limit: 10 ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Query(context.Background(), &qdrant.QueryPoints{ CollectionName: ""{collection_name}"", Query: qdrant.NewQueryDiscover(&qdrant.DiscoverInput{ Target: qdrant.NewVectorInput(0.2, 0.1, 0.9, 0.7), Context: &qdrant.ContextInput{ Pairs: []*qdrant.ContextInputPair{ { Positive: qdrant.NewVectorInputID(qdrant.NewIDNum(100)), Negative: qdrant.NewVectorInputID(qdrant.NewIDNum(718)), }, { Positive: qdrant.NewVectorInputID(qdrant.NewIDNum(200)), Negative: qdrant.NewVectorInputID(qdrant.NewIDNum(300)), }, }, }, }), }) ``` ### Context search Conversely, in the absence of a target, a rigid integer-by-integer function doesn't provide much guidance for the search when utilizing a proximity graph like HNSW. Instead, context search employs a function derived from the [triplet-loss](/articles/triplet-loss/) concept, which is usually applied during model training. For context search, this function is adapted to steer the search towards areas with fewer negative examples. ![Context search](/docs/context-search.png) We can directly associate the score function to a loss function, where 0.0 is the maximum score a point can have, which means it is only in positive areas. As soon as a point exists closer to a negative example, its loss will simply be the difference of the positive and negative similarities. $$ \text{context score} = \sum \min(s(v^+_i) - s(v^-_i), 0.0) $$ Where $v^+_i$ and $v^-_i$ are the positive and negative examples of each pair, and $s(v)$ is the similarity function. Using this kind of search, you can expect the output to not necessarily be around a single point, but rather, to be any point that isn’t closer to a negative example, which creates a constrained diverse result. So, even when the API is not called [`recommend`](#recommendation-api), recommendation systems can also use this approach and adapt it for their specific use-cases. Example: ```http POST /collections/{collection_name}/points/query { ""query"": { ""context"": [ { ""positive"": 100, ""negative"": 718 }, { ""positive"": 200, ""negative"": 300 } ] }, ""limit"": 10 } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") discover_queries = [ models.QueryRequest( query=models.ContextQuery( context=[ models.ContextPair( positive=100, negative=718, ), models.ContextPair( positive=200, negative=300, ), ], ), limit=10, ), ] ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.query(""{collection_name}"", { query: { context: [ { positive: 100, negative: 718, }, { positive: 200, negative: 300, }, ] }, limit: 10, }); ``` ```rust use qdrant_client::qdrant::{ContextInputBuilder, QueryPointsBuilder}; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .query( QueryPointsBuilder::new(""{collection_name}"").query( ContextInputBuilder::default() .add_pair(100, 718) .add_pair(200, 300) .build(), ), ) .await?; ``` ```java import java.util.List; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.ContextInput; import io.qdrant.client.grpc.Points.ContextInputPair; import io.qdrant.client.grpc.Points.QueryPoints; import static io.qdrant.client.VectorInputFactory.vectorInput; import static io.qdrant.client.QueryFactory.context; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client.queryAsync(QueryPoints.newBuilder() .setCollectionName(""{collection_name}"") .setQuery(context(ContextInput.newBuilder() .addAllPairs(List.of( ContextInputPair.newBuilder() .setPositive(vectorInput(100)) .setNegative(vectorInput(718)) .build(), ContextInputPair.newBuilder() .setPositive(vectorInput(200)) .setNegative(vectorInput(300)) .build())) .build())) .setLimit(10) .build()).get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.QueryAsync( collectionName: ""{collection_name}"", query: new ContextInput { Pairs = { new ContextInputPair { Positive = 100, Negative = 718 }, new ContextInputPair { Positive = 200, Negative = 300 }, } }, limit: 10 ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Query(context.Background(), &qdrant.QueryPoints{ CollectionName: ""{collection_name}"", Query: qdrant.NewQueryContext(&qdrant.ContextInput{ Pairs: []*qdrant.ContextInputPair{ { Positive: qdrant.NewVectorInputID(qdrant.NewIDNum(100)), Negative: qdrant.NewVectorInputID(qdrant.NewIDNum(718)), }, { Positive: qdrant.NewVectorInputID(qdrant.NewIDNum(200)), Negative: qdrant.NewVectorInputID(qdrant.NewIDNum(300)), }, }, }), }) ``` ",documentation/concepts/explore.md "--- title: Optimizer weight: 70 aliases: - ../optimizer --- # Optimizer It is much more efficient to apply changes in batches than perform each change individually, as many other databases do. Qdrant here is no exception. Since Qdrant operates with data structures that are not always easy to change, it is sometimes necessary to rebuild those structures completely. Storage optimization in Qdrant occurs at the segment level (see [storage](../storage/)). In this case, the segment to be optimized remains readable for the time of the rebuild. ![Segment optimization](/docs/optimization.svg) The availability is achieved by wrapping the segment into a proxy that transparently handles data changes. Changed data is placed in the copy-on-write segment, which has priority for retrieval and subsequent updates. ## Vacuum Optimizer The simplest example of a case where you need to rebuild a segment repository is to remove points. Like many other databases, Qdrant does not delete entries immediately after a query. Instead, it marks records as deleted and ignores them for future queries. This strategy allows us to minimize disk access - one of the slowest operations. However, a side effect of this strategy is that, over time, deleted records accumulate, occupy memory and slow down the system. To avoid these adverse effects, Vacuum Optimizer is used. It is used if the segment has accumulated too many deleted records. The criteria for starting the optimizer are defined in the configuration file. Here is an example of parameter values: ```yaml storage: optimizers: # The minimal fraction of deleted vectors in a segment, required to perform segment optimization deleted_threshold: 0.2 # The minimal number of vectors in a segment, required to perform segment optimization vacuum_min_vector_number: 1000 ``` ## Merge Optimizer The service may require the creation of temporary segments. Such segments, for example, are created as copy-on-write segments during optimization itself. It is also essential to have at least one small segment that Qdrant will use to store frequently updated data. On the other hand, too many small segments lead to suboptimal search performance. There is the Merge Optimizer, which combines the smallest segments into one large segment. It is used if too many segments are created. The criteria for starting the optimizer are defined in the configuration file. Here is an example of parameter values: ```yaml storage: optimizers: # If the number of segments exceeds this value, the optimizer will merge the smallest segments. max_segment_number: 5 ``` ## Indexing Optimizer Qdrant allows you to choose the type of indexes and data storage methods used depending on the number of records. So, for example, if the number of points is less than 10000, using any index would be less efficient than a brute force scan. The Indexing Optimizer is used to implement the enabling of indexes and memmap storage when the minimal amount of records is reached. The criteria for starting the optimizer are defined in the configuration file. Here is an example of parameter values: ```yaml storage: optimizers: # Maximum size (in kilobytes) of vectors to store in-memory per segment. # Segments larger than this threshold will be stored as read-only memmaped file. # Memmap storage is disabled by default, to enable it, set this threshold to a reasonable value. # To disable memmap storage, set this to `0`. # Note: 1Kb = 1 vector of size 256 memmap_threshold_kb: 200000 # Maximum size (in kilobytes) of vectors allowed for plain index, exceeding this threshold will enable vector indexing # Default value is 20,000, based on . # To disable vector indexing, set to `0`. # Note: 1kB = 1 vector of size 256. indexing_threshold_kb: 20000 ``` In addition to the configuration file, you can also set optimizer parameters separately for each [collection](../collections/). Dynamic parameter updates may be useful, for example, for more efficient initial loading of points. You can disable indexing during the upload process with these settings and enable it immediately after it is finished. As a result, you will not waste extra computation resources on rebuilding the index.",documentation/concepts/optimizer.md "--- title: Search weight: 50 aliases: - ../search --- # Similarity search Searching for the nearest vectors is at the core of many representational learning applications. Modern neural networks are trained to transform objects into vectors so that objects close in the real world appear close in vector space. It could be, for example, texts with similar meanings, visually similar pictures, or songs of the same genre. {{< figure src=""/docs/encoders.png"" caption=""This is how vector similarity works"" width=""70%"" >}} ## Query API *Available as of v1.10.0* Qdrant provides a single interface for all kinds of search and exploration requests - the `Query API`. Here is a reference list of what kind of queries you can perform with the `Query API` in Qdrant: Depending on the `query` parameter, Qdrant might prefer different strategies for the search. | | | | --- | --- | | Nearest Neighbors Search | Vector Similarity Search, also known as k-NN | | Search By Id | Search by an already stored vector - skip embedding model inference | | [Recommendations](../explore/#recommendation-api) | Provide positive and negative examples | | [Discovery Search](../explore/#discovery-api) | Guide the search using context as a one-shot training set | | [Scroll](../points/#scroll-points) | Get all points with optional filtering | | [Grouping](../search/#grouping-api) | Group results by a certain field | | [Order By](../hybrid-queries/#re-ranking-with-stored-values) | Order points by payload key | | [Hybrid Search](../hybrid-queries/#hybrid-search) | Combine multiple queries to get better results | | [Multi-Stage Search](../hybrid-queries/#multi-stage-queries) | Optimize performance for large embeddings | | [Random Sampling](#random-sampling) | Get random points from the collection | **Nearest Neighbors Search** ```http POST /collections/{collection_name}/points/query { ""query"": [0.2, 0.1, 0.9, 0.7] // <--- Dense vector } ``` ```python client.query_points( collection_name=""{collection_name}"", query=[0.2, 0.1, 0.9, 0.7], # <--- Dense vector ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.query(""{collection_name}"", { query: [0.2, 0.1, 0.9, 0.7], // <--- Dense vector }); ``` ```rust use qdrant_client::Qdrant; use qdrant_client::qdrant::{Condition, Filter, Query, QueryPointsBuilder}; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .query( QueryPointsBuilder::new(""{collection_name}"") .query(Query::new_nearest(vec![0.2, 0.1, 0.9, 0.7])) ) .await?; ``` ```java import java.util.List; import static io.qdrant.client.QueryFactory.nearest; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.QueryPoints; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client.queryAsync(QueryPoints.newBuilder() .setCollectionName(""{collectionName}"") .setQuery(nearest(List.of(0.2f, 0.1f, 0.9f, 0.7f))) .build()).get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient(""localhost"", 6334); await client.QueryAsync( collectionName: ""{collection_name}"", query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Query(context.Background(), &qdrant.QueryPoints{ CollectionName: ""{collection_name}"", Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7), }) ``` **Search By Id** ```http POST /collections/{collection_name}/points/query { ""query"": ""43cf51e2-8777-4f52-bc74-c2cbde0c8b04"" // <--- point id } ``` ```python client.query_points( collection_name=""{collection_name}"", query=""43cf51e2-8777-4f52-bc74-c2cbde0c8b04"", # <--- point id ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.query(""{collection_name}"", { query: '43cf51e2-8777-4f52-bc74-c2cbde0c8b04', // <--- point id }); ``` ```rust use qdrant_client::Qdrant; use qdrant_client::qdrant::{Condition, Filter, PointId, Query, QueryPointsBuilder}; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .query( QueryPointsBuilder::new(""{collection_name}"") .query(Query::new_nearest(PointId::new(""43cf51e2-8777-4f52-bc74-c2cbde0c8b04""))) ) .await?; ``` ```java import java.util.UUID; import static io.qdrant.client.QueryFactory.nearest; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.QueryPoints; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client.queryAsync(QueryPoints.newBuilder() .setCollectionName(""{collectionName}"") .setQuery(nearest(UUID.fromString(""43cf51e2-8777-4f52-bc74-c2cbde0c8b04""))) .build()).get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient(""localhost"", 6334); await client.QueryAsync( collectionName: ""{collection_name}"", query: Guid.Parse(""43cf51e2-8777-4f52-bc74-c2cbde0c8b04"") ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Query(context.Background(), &qdrant.QueryPoints{ CollectionName: ""{collection_name}"", Query: qdrant.NewQueryID(qdrant.NewID(""43cf51e2-8777-4f52-bc74-c2cbde0c8b04"")), }) ``` ## Metrics There are many ways to estimate the similarity of vectors with each other. In Qdrant terms, these ways are called metrics. The choice of metric depends on the vectors obtained and, in particular, on the neural network encoder training method. Qdrant supports these most popular types of metrics: * Dot product: `Dot` - * Cosine similarity: `Cosine` - * Euclidean distance: `Euclid` - * Manhattan distance: `Manhattan`*- *Available as of v1.7 The most typical metric used in similarity learning models is the cosine metric. ![Embeddings](/docs/cos.png) Qdrant counts this metric in 2 steps, due to which a higher search speed is achieved. The first step is to normalize the vector when adding it to the collection. It happens only once for each vector. The second step is the comparison of vectors. In this case, it becomes equivalent to dot production - a very fast operation due to SIMD. Depending on the query configuration, Qdrant might prefer different strategies for the search. Read more about it in the [query planning](#query-planning) section. ## Search API Let's look at an example of a search query. REST API - API Schema definition is available [here](https://api.qdrant.tech/api-reference/search/query-points) ```http POST /collections/{collection_name}/points/query { ""query"": [0.2, 0.1, 0.9, 0.79], ""filter"": { ""must"": [ { ""key"": ""city"", ""match"": { ""value"": ""London"" } } ] }, ""params"": { ""hnsw_ef"": 128, ""exact"": false }, ""limit"": 3 } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.query_points( collection_name=""{collection_name}"", query=[0.2, 0.1, 0.9, 0.7], query_filter=models.Filter( must=[ models.FieldCondition( key=""city"", match=models.MatchValue( value=""London"", ), ) ] ), search_params=models.SearchParams(hnsw_ef=128, exact=False), limit=3, ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.query(""{collection_name}"", { query: [0.2, 0.1, 0.9, 0.7], filter: { must: [ { key: ""city"", match: { value: ""London"", }, }, ], }, params: { hnsw_ef: 128, exact: false, }, limit: 3, }); ``` ```rust use qdrant_client::qdrant::{Condition, Filter, QueryPointsBuilder, SearchParamsBuilder}; use qdrant_client::Qdrant; client .query( QueryPointsBuilder::new(""{collection_name}"") .query(vec![0.2, 0.1, 0.9, 0.7]) .limit(3) .filter(Filter::must([Condition::matches( ""city"", ""London"".to_string(), )])) .params(SearchParamsBuilder::default().hnsw_ef(128).exact(false)), ) .await?; ``` ```java import java.util.List; import static io.qdrant.client.ConditionFactory.matchKeyword; import static io.qdrant.client.QueryFactory.nearest; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.Filter; import io.qdrant.client.grpc.Points.QueryPoints; import io.qdrant.client.grpc.Points.SearchParams; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client.queryAsync(QueryPoints.newBuilder() .setCollectionName(""{collection_name}"") .setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f)) .setFilter(Filter.newBuilder().addMust(matchKeyword(""city"", ""London"")).build()) .setParams(SearchParams.newBuilder().setExact(false).setHnswEf(128).build()) .setLimit(3) .build()).get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient(""localhost"", 6334); await client.QueryAsync( collectionName: ""{collection_name}"", query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f }, filter: MatchKeyword(""city"", ""London""), searchParams: new SearchParams { Exact = false, HnswEf = 128 }, limit: 3 ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Query(context.Background(), &qdrant.QueryPoints{ CollectionName: ""{collection_name}"", Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7), Filter: &qdrant.Filter{ Must: []*qdrant.Condition{ qdrant.NewMatch(""city"", ""London""), }, }, Params: &qdrant.SearchParams{ Exact: qdrant.PtrOf(false), HnswEf: qdrant.PtrOf(uint64(128)), }, }) ``` In this example, we are looking for vectors similar to vector `[0.2, 0.1, 0.9, 0.7]`. Parameter `limit` (or its alias - `top`) specifies the amount of most similar results we would like to retrieve. Values under the key `params` specify custom parameters for the search. Currently, it could be: * `hnsw_ef` - value that specifies `ef` parameter of the HNSW algorithm. * `exact` - option to not use the approximate search (ANN). If set to true, the search may run for a long as it performs a full scan to retrieve exact results. * `indexed_only` - With this option you can disable the search in those segments where vector index is not built yet. This may be useful if you want to minimize the impact to the search performance whilst the collection is also being updated. Using this option may lead to a partial result if the collection is not fully indexed yet, consider using it only if eventual consistency is acceptable for your use case. Since the `filter` parameter is specified, the search is performed only among those points that satisfy the filter condition. See details of possible filters and their work in the [filtering](../filtering/) section. Example result of this API would be ```json { ""result"": [ { ""id"": 10, ""score"": 0.81 }, { ""id"": 14, ""score"": 0.75 }, { ""id"": 11, ""score"": 0.73 } ], ""status"": ""ok"", ""time"": 0.001 } ``` The `result` contains ordered by `score` list of found point ids. Note that payload and vector data is missing in these results by default. See [payload and vector in the result](#payload-and-vector-in-the-result) on how to include it. *Available as of v0.10.0* If the collection was created with multiple vectors, the name of the vector to use for searching should be provided: ```http POST /collections/{collection_name}/points/query { ""query"": [0.2, 0.1, 0.9, 0.7], ""using"": ""image"", ""limit"": 3 } ``` ```python from qdrant_client import QdrantClient client = QdrantClient(url=""http://localhost:6333"") client.query_points( collection_name=""{collection_name}"", query=[0.2, 0.1, 0.9, 0.7], using=""image"", limit=3, ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.query(""{collection_name}"", { query: [0.2, 0.1, 0.9, 0.7], using: ""image"", limit: 3, }); ``` ```rust use qdrant_client::qdrant::QueryPointsBuilder; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .query( QueryPointsBuilder::new(""{collection_name}"") .query(vec![0.2, 0.1, 0.9, 0.7]) .limit(3) .using(""image""), ) .await?; ``` ```java import java.util.List; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.QueryPoints; import static io.qdrant.client.QueryFactory.nearest; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client.queryAsync(QueryPoints.newBuilder() .setCollectionName(""{collection_name}"") .setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f)) .setUsing(""image"") .setLimit(3) .build()).get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient(""localhost"", 6334); await client.QueryAsync( collectionName: ""{collection_name}"", query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f }, usingVector: ""image"", limit: 3 ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Query(context.Background(), &qdrant.QueryPoints{ CollectionName: ""{collection_name}"", Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7), Using: qdrant.PtrOf(""image""), }) ``` Search is processing only among vectors with the same name. *Available as of v1.7.0* If the collection was created with sparse vectors, the name of the sparse vector to use for searching should be provided: You can still use payload filtering and other features of the search API with sparse vectors. There are however important differences between dense and sparse vector search: | Index| Sparse Query | Dense Query | | --- | --- | --- | | Scoring Metric | Default is `Dot product`, no need to specify it | `Distance` has supported metrics e.g. Dot, Cosine | | Search Type | Always exact in Qdrant | HNSW is an approximate NN | | Return Behaviour | Returns only vectors with non-zero values in the same indices as the query vector | Returns `limit` vectors | In general, the speed of the search is proportional to the number of non-zero values in the query vector. ```http POST /collections/{collection_name}/points/query { ""query"": { ""indices"": [6, 7], ""values"": [1, 2] }, ""using"": ""text"", ""limit"": 3 } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.query_points( collection_name=""{collection_name}"", query=models.SparseVector( indices=[1, 7], values=[2.0, 1.0], ), using=""text"", limit=3, ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.query(""{collection_name}"", { query: { indices: [1, 7], values: [2.0, 1.0] }, using: ""text"", limit: 3, }); ``` ```rust use qdrant_client::qdrant::QueryPointsBuilder; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .query( QueryPointsBuilder::new(""{collection_name}"") .query(vec![(1, 2.0), (7, 1.0)]) .limit(3) .using(""text""), ) .await?; ``` ```java import java.util.List; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.QueryPoints; import static io.qdrant.client.QueryFactory.nearest; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client.queryAsync( QueryPoints.newBuilder() .setCollectionName(""{collection_name}"") .setUsing(""text"") .setQuery(nearest(List.of(2.0f, 1.0f), List.of(1, 7))) .setLimit(3) .build()) .get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient(""localhost"", 6334); await client.QueryAsync( collectionName: ""{collection_name}"", query: new (float, uint)[] { (2.0f, 1), (1.0f, 2) }, usingVector: ""text"", limit: 3 ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Query(context.Background(), &qdrant.QueryPoints{ CollectionName: ""{collection_name}"", Query: qdrant.NewQuerySparse( []uint32{1, 2}, []float32{2.0, 1.0}), Using: qdrant.PtrOf(""text""), }) ``` ### Filtering results by score In addition to payload filtering, it might be useful to filter out results with a low similarity score. For example, if you know the minimal acceptance score for your model and do not want any results which are less similar than the threshold. In this case, you can use `score_threshold` parameter of the search query. It will exclude all results with a score worse than the given. ### Payload and vector in the result By default, retrieval methods do not return any stored information such as payload and vectors. Additional parameters `with_vectors` and `with_payload` alter this behavior. Example: ```http POST /collections/{collection_name}/points/query { """": [0.2, 0.1, 0.9, 0.7], ""with_vectors"": true, ""with_payload"": true } ``` ```python client.query_points( collection_name=""{collection_name}"", query=[0.2, 0.1, 0.9, 0.7], with_vectors=True, with_payload=True, ) ``` ```typescript client.query(""{collection_name}"", { query: [0.2, 0.1, 0.9, 0.7], with_vector: true, with_payload: true, }); ``` ```rust use qdrant_client::qdrant::QueryPointsBuilder; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .query( QueryPointsBuilder::new(""{collection_name}"") .query(vec![0.2, 0.1, 0.9, 0.7]) .limit(3) .with_payload(true) .with_vectors(true), ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.WithVectorsSelectorFactory; import io.qdrant.client.grpc.Points.QueryPoints; import static io.qdrant.client.QueryFactory.nearest; import static io.qdrant.client.WithPayloadSelectorFactory.enable; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client.queryAsync( QueryPoints.newBuilder() .setCollectionName(""{collection_name}"") .setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f)) .setWithPayload(enable(true)) .setWithVectors(WithVectorsSelectorFactory.enable(true)) .setLimit(3) .build()) .get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient(""localhost"", 6334); await client.QueryAsync( collectionName: ""{collection_name}"", query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f }, payloadSelector: true, vectorsSelector: true, limit: 3 ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Query(context.Background(), &qdrant.QueryPoints{ CollectionName: ""{collection_name}"", Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7), WithPayload: qdrant.NewWithPayload(true), WithVectors: qdrant.NewWithVectors(true), }) ``` You can use `with_payload` to scope to or filter a specific payload subset. You can even specify an array of items to include, such as `city`, `village`, and `town`: ```http POST /collections/{collection_name}/points/query { ""query"": [0.2, 0.1, 0.9, 0.7], ""with_payload"": [""city"", ""village"", ""town""] } ``` ```python from qdrant_client import QdrantClient client = QdrantClient(url=""http://localhost:6333"") client.query_points( collection_name=""{collection_name}"", query=[0.2, 0.1, 0.9, 0.7], with_payload=[""city"", ""village"", ""town""], ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.query(""{collection_name}"", { query: [0.2, 0.1, 0.9, 0.7], with_payload: [""city"", ""village"", ""town""], }); ``` ```rust use qdrant_client::qdrant::{with_payload_selector::SelectorOptions, QueryPointsBuilder}; use qdrant_client::Qdrant; client .query( QueryPointsBuilder::new(""{collection_name}"") .query(vec![0.2, 0.1, 0.9, 0.7]) .limit(3) .with_payload(SelectorOptions::Include( vec![ ""city"".to_string(), ""village"".to_string(), ""town"".to_string(), ] .into(), )) .with_vectors(true), ) .await?; ``` ```java import java.util.List; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.QueryPoints; import static io.qdrant.client.QueryFactory.nearest; import static io.qdrant.client.WithPayloadSelectorFactory.include; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client.queryAsync( QueryPoints.newBuilder() .setCollectionName(""{collection_name}"") .setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f)) .setWithPayload(include(List.of(""city"", ""village"", ""town""))) .setLimit(3) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.QueryAsync( collectionName: ""{collection_name}"", query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f }, payloadSelector: new WithPayloadSelector { Include = new PayloadIncludeSelector { Fields = { new string[] { ""city"", ""village"", ""town"" } } } }, limit: 3 ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Query(context.Background(), &qdrant.QueryPoints{ CollectionName: ""{collection_name}"", Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7), WithPayload: qdrant.NewWithPayloadInclude(""city"", ""village"", ""town""), }) ``` Or use `include` or `exclude` explicitly. For example, to exclude `city`: ```http POST /collections/{collection_name}/points/query { ""query"": [0.2, 0.1, 0.9, 0.7], ""with_payload"": { ""exclude"": [""city""] } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.query_points( collection_name=""{collection_name}"", query=[0.2, 0.1, 0.9, 0.7], with_payload=models.PayloadSelectorExclude( exclude=[""city""], ), ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.query(""{collection_name}"", { query: [0.2, 0.1, 0.9, 0.7], with_payload: { exclude: [""city""], }, }); ``` ```rust use qdrant_client::qdrant::{with_payload_selector::SelectorOptions, QueryPointsBuilder}; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .query( QueryPointsBuilder::new(""{collection_name}"") .query(vec![0.2, 0.1, 0.9, 0.7]) .limit(3) .with_payload(SelectorOptions::Exclude(vec![""city"".to_string()].into())) .with_vectors(true), ) .await?; ``` ```java import java.util.List; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.QueryPoints; import static io.qdrant.client.QueryFactory.nearest; import static io.qdrant.client.WithPayloadSelectorFactory.exclude; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client.queryAsync( QueryPoints.newBuilder() .setCollectionName(""{collection_name}"") .setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f)) .setWithPayload(exclude(List.of(""city""))) .setLimit(3) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.QueryAsync( collectionName: ""{collection_name}"", query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f }, payloadSelector: new WithPayloadSelector { Exclude = new PayloadExcludeSelector { Fields = { new string[] { ""city"" } } } }, limit: 3 ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Query(context.Background(), &qdrant.QueryPoints{ CollectionName: ""{collection_name}"", Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7), WithPayload: qdrant.NewWithPayloadExclude(""city""), }) ``` It is possible to target nested fields using a dot notation: * `payload.nested_field` - for a nested field * `payload.nested_array[].sub_field` - for projecting nested fields within an array Accessing array elements by index is currently not supported. ## Batch search API *Available as of v0.10.0* The batch search API enables to perform multiple search requests via a single request. Its semantic is straightforward, `n` batched search requests are equivalent to `n` singular search requests. This approach has several advantages. Logically, fewer network connections are required which can be very beneficial on its own. More importantly, batched requests will be efficiently processed via the query planner which can detect and optimize requests if they have the same `filter`. This can have a great effect on latency for non trivial filters as the intermediary results can be shared among the request. In order to use it, simply pack together your search requests. All the regular attributes of a search request are of course available. ```http POST /collections/{collection_name}/points/query/batch { ""searches"": [ { ""query"": [0.2, 0.1, 0.9, 0.7], ""filter"": { ""must"": [ { ""key"": ""city"", ""match"": { ""value"": ""London"" } } ] }, ""limit"": 3 }, { ""query"": [0.5, 0.3, 0.2, 0.3], ""filter"": { ""must"": [ { ""key"": ""city"", ""match"": { ""value"": ""London"" } } ] }, ""limit"": 3 } ] } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") filter_ = models.Filter( must=[ models.FieldCondition( key=""city"", match=models.MatchValue( value=""London"", ), ) ] ) search_queries = [ models.QueryRequest(query=[0.2, 0.1, 0.9, 0.7], filter=filter_, limit=3), models.QueryRequest(query=[0.5, 0.3, 0.2, 0.3], filter=filter_, limit=3), ] client.query_batch_points(collection_name=""{collection_name}"", requests=search_queries) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); const filter = { must: [ { key: ""city"", match: { value: ""London"", }, }, ], }; const searches = [ { query: [0.2, 0.1, 0.9, 0.7], filter, limit: 3, }, { query: [0.5, 0.3, 0.2, 0.3], filter, limit: 3, }, ]; client.queryBatch(""{collection_name}"", { searches, }); ``` ```rust use qdrant_client::qdrant::{Condition, Filter, QueryBatchPointsBuilder, QueryPointsBuilder}; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; let filter = Filter::must([Condition::matches(""city"", ""London"".to_string())]); let searches = vec![ QueryPointsBuilder::new(""{collection_name}"") .query(vec![0.1, 0.2, 0.3, 0.4]) .limit(3) .filter(filter.clone()) .build(), QueryPointsBuilder::new(""{collection_name}"") .query(vec![0.5, 0.3, 0.2, 0.3]) .limit(3) .filter(filter) .build(), ]; client .query_batch(QueryBatchPointsBuilder::new(""{collection_name}"", searches)) .await?; ``` ```java import java.util.List; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.Filter; import io.qdrant.client.grpc.Points.QueryPoints; import static io.qdrant.client.QueryFactory.nearest; import static io.qdrant.client.ConditionFactory.matchKeyword; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); Filter filter = Filter.newBuilder().addMust(matchKeyword(""city"", ""London"")).build(); List searches = List.of( QueryPoints.newBuilder() .setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f)) .setFilter(filter) .setLimit(3) .build(), QueryPoints.newBuilder() .setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f)) .setFilter(filter) .setLimit(3) .build()); client.queryBatchAsync(""{collection_name}"", searches).get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient(""localhost"", 6334); var filter = MatchKeyword(""city"", ""London""); var queries = new List { new() { CollectionName = ""{collection_name}"", Query = new float[] { 0.2f, 0.1f, 0.9f, 0.7f }, Filter = filter, Limit = 3 }, new() { CollectionName = ""{collection_name}"", Query = new float[] { 0.5f, 0.3f, 0.2f, 0.3f }, Filter = filter, Limit = 3 } }; await client.QueryBatchAsync(collectionName: ""{collection_name}"", queries: queries); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) filter := qdrant.Filter{ Must: []*qdrant.Condition{ qdrant.NewMatch(""city"", ""London""), }, } client.QueryBatch(context.Background(), &qdrant.QueryBatchPoints{ CollectionName: ""{collection_name}"", QueryPoints: []*qdrant.QueryPoints{ { CollectionName: ""{collection_name}"", Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7), Filter: &filter, }, { CollectionName: ""{collection_name}"", Query: qdrant.NewQuery(0.5, 0.3, 0.2, 0.3), Filter: &filter, }, }, }) ``` The result of this API contains one array per search requests. ```json { ""result"": [ [ { ""id"": 10, ""score"": 0.81 }, { ""id"": 14, ""score"": 0.75 }, { ""id"": 11, ""score"": 0.73 } ], [ { ""id"": 1, ""score"": 0.92 }, { ""id"": 3, ""score"": 0.89 }, { ""id"": 9, ""score"": 0.75 } ] ], ""status"": ""ok"", ""time"": 0.001 } ``` ## Pagination *Available as of v0.8.3* Search and [recommendation](../explore/#recommendation-api) APIs allow to skip first results of the search and return only the result starting from some specified offset: Example: ```http POST /collections/{collection_name}/points/query { ""query"": [0.2, 0.1, 0.9, 0.7], ""with_vectors"": true, ""with_payload"": true, ""limit"": 10, ""offset"": 100 } ``` ```python from qdrant_client import QdrantClient client = QdrantClient(url=""http://localhost:6333"") client.query_points( collection_name=""{collection_name}"", query=[0.2, 0.1, 0.9, 0.7], with_vectors=True, with_payload=True, limit=10, offset=100, ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.query(""{collection_name}"", { query: [0.2, 0.1, 0.9, 0.7], with_vector: true, with_payload: true, limit: 10, offset: 100, }); ``` ```rust use qdrant_client::qdrant::QueryPointsBuilder; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .query( QueryPointsBuilder::new(""{collection_name}"") .query(vec![0.2, 0.1, 0.9, 0.7]) .with_payload(true) .with_vectors(true) .limit(10) .offset(100), ) .await?; ``` ```java import java.util.List; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.WithVectorsSelectorFactory; import io.qdrant.client.grpc.Points.QueryPoints; import static io.qdrant.client.QueryFactory.nearest; import static io.qdrant.client.WithPayloadSelectorFactory.enable; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client.queryAsync( QueryPoints.newBuilder() .setCollectionName(""{collection_name}"") .setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f)) .setWithPayload(enable(true)) .setWithVectors(WithVectorsSelectorFactory.enable(true)) .setLimit(10) .setOffset(100) .build()) .get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient(""localhost"", 6334); await client.QueryAsync( collectionName: ""{collection_name}"", query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f }, payloadSelector: true, vectorsSelector: true, limit: 10, offset: 100 ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Query(context.Background(), &qdrant.QueryPoints{ CollectionName: ""{collection_name}"", Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7), WithPayload: qdrant.NewWithPayload(true), WithVectors: qdrant.NewWithVectors(true), Offset: qdrant.PtrOf(uint64(100)), }) ``` Is equivalent to retrieving the 11th page with 10 records per page. Vector-based retrieval in general and HNSW index in particular, are not designed to be paginated. It is impossible to retrieve Nth closest vector without retrieving the first N vectors first. However, using the offset parameter saves the resources by reducing network traffic and the number of times the storage is accessed. Using an `offset` parameter, will require to internally retrieve `offset + limit` points, but only access payload and vector from the storage those points which are going to be actually returned. ## Grouping API *Available as of v1.2.0* It is possible to group results by a certain field. This is useful when you have multiple points for the same item, and you want to avoid redundancy of the same item in the results. For example, if you have a large document split into multiple chunks, and you want to search or [recommend](../explore/#recommendation-api) on a per-document basis, you can group the results by the document ID. Consider having points with the following payloads: ```json [ { ""id"": 0, ""payload"": { ""chunk_part"": 0, ""document_id"": ""a"" }, ""vector"": [0.91] }, { ""id"": 1, ""payload"": { ""chunk_part"": 1, ""document_id"": [""a"", ""b""] }, ""vector"": [0.8] }, { ""id"": 2, ""payload"": { ""chunk_part"": 2, ""document_id"": ""a"" }, ""vector"": [0.2] }, { ""id"": 3, ""payload"": { ""chunk_part"": 0, ""document_id"": 123 }, ""vector"": [0.79] }, { ""id"": 4, ""payload"": { ""chunk_part"": 1, ""document_id"": 123 }, ""vector"": [0.75] }, { ""id"": 5, ""payload"": { ""chunk_part"": 0, ""document_id"": -10 }, ""vector"": [0.6] } ] ``` With the ***groups*** API, you will be able to get the best *N* points for each document, assuming that the payload of the points contains the document ID. Of course there will be times where the best *N* points cannot be fulfilled due to lack of points or a big distance with respect to the query. In every case, the `group_size` is a best-effort parameter, akin to the `limit` parameter. ### Search groups REST API ([Schema](https://api.qdrant.tech/api-reference/search/query-points-groups)): ```http POST /collections/{collection_name}/points/query/groups { // Same as in the regular query API ""query"": [1.1], // Grouping parameters ""group_by"": ""document_id"", // Path of the field to group by ""limit"": 4, // Max amount of groups ""group_size"": 2 // Max amount of points per group } ``` ```python client.query_points_groups( collection_name=""{collection_name}"", # Same as in the regular query_points() API query=[1.1], # Grouping parameters group_by=""document_id"", # Path of the field to group by limit=4, # Max amount of groups group_size=2, # Max amount of points per group ) ``` ```typescript client.queryGroups(""{collection_name}"", { query: [1.1], group_by: ""document_id"", limit: 4, group_size: 2, }); ``` ```rust use qdrant_client::qdrant::QueryPointGroupsBuilder; client .query_groups( QueryPointGroupsBuilder::new(""{collection_name}"", ""document_id"") .query(vec![0.2, 0.1, 0.9, 0.7]) .group_size(2u64) .with_payload(true) .with_vectors(true) .limit(4u64), ) .await?; ``` ```java import java.util.List; import io.qdrant.client.grpc.Points.SearchPointGroups; client.queryGroupsAsync( QueryPointGroups.newBuilder() .setCollectionName(""{collection_name}"") .setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f)) .setGroupBy(""document_id"") .setLimit(4) .setGroupSize(2) .build()) .get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient(""localhost"", 6334); await client.QueryGroupsAsync( collectionName: ""{collection_name}"", query: new float[] { 0.2f, 0.1f, 0.9f, 0.7f }, groupBy: ""document_id"", limit: 4, groupSize: 2 ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.QueryGroups(context.Background(), &qdrant.QueryPointGroups{ CollectionName: ""{collection_name}"", Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7), GroupBy: ""document_id"", GroupSize: qdrant.PtrOf(uint64(2)), }) ``` The output of a ***groups*** call looks like this: ```json { ""result"": { ""groups"": [ { ""id"": ""a"", ""hits"": [ { ""id"": 0, ""score"": 0.91 }, { ""id"": 1, ""score"": 0.85 } ] }, { ""id"": ""b"", ""hits"": [ { ""id"": 1, ""score"": 0.85 } ] }, { ""id"": 123, ""hits"": [ { ""id"": 3, ""score"": 0.79 }, { ""id"": 4, ""score"": 0.75 } ] }, { ""id"": -10, ""hits"": [ { ""id"": 5, ""score"": 0.6 } ] } ] }, ""status"": ""ok"", ""time"": 0.001 } ``` The groups are ordered by the score of the top point in the group. Inside each group the points are sorted too. If the `group_by` field of a point is an array (e.g. `""document_id"": [""a"", ""b""]`), the point can be included in multiple groups (e.g. `""document_id"": ""a""` and `document_id: ""b""`). **Limitations**: * Only [keyword](../payload/#keyword) and [integer](../payload/#integer) payload values are supported for the `group_by` parameter. Payload values with other types will be ignored. * At the moment, pagination is not enabled when using **groups**, so the `offset` parameter is not allowed. ### Lookup in groups *Available as of v1.3.0* Having multiple points for parts of the same item often introduces redundancy in the stored data. Which may be fine if the information shared by the points is small, but it can become a problem if the payload is large, because it multiplies the storage space needed to store the points by a factor of the amount of points we have per group. One way of optimizing storage when using groups is to store the information shared by the points with the same group id in a single point in another collection. Then, when using the [**groups** API](#grouping-api), add the `with_lookup` parameter to bring the information from those points into each group. ![Group id matches point id](/docs/lookup_id_linking.png) This has the extra benefit of having a single point to update when the information shared by the points in a group changes. For example, if you have a collection of documents, you may want to chunk them and store the points for the chunks in a separate collection, making sure that you store the point id from the document it belongs in the payload of the chunk point. In this case, to bring the information from the documents into the chunks grouped by the document id, you can use the `with_lookup` parameter: ```http POST /collections/chunks/points/query/groups { // Same as in the regular query API ""query"": [1.1], // Grouping parameters ""group_by"": ""document_id"", ""limit"": 2, ""group_size"": 2, // Lookup parameters ""with_lookup"": { // Name of the collection to look up points in ""collection"": ""documents"", // Options for specifying what to bring from the payload // of the looked up point, true by default ""with_payload"": [""title"", ""text""], // Options for specifying what to bring from the vector(s) // of the looked up point, true by default ""with_vectors"": false } } ``` ```python client.query_points_groups( collection_name=""chunks"", # Same as in the regular search() API query=[1.1], # Grouping parameters group_by=""document_id"", # Path of the field to group by limit=2, # Max amount of groups group_size=2, # Max amount of points per group # Lookup parameters with_lookup=models.WithLookup( # Name of the collection to look up points in collection=""documents"", # Options for specifying what to bring from the payload # of the looked up point, True by default with_payload=[""title"", ""text""], # Options for specifying what to bring from the vector(s) # of the looked up point, True by default with_vectors=False, ), ) ``` ```typescript client.queryGroups(""{collection_name}"", { query: [1.1], group_by: ""document_id"", limit: 2, group_size: 2, with_lookup: { collection: ""documents"", with_payload: [""title"", ""text""], with_vectors: false, }, }); ``` ```rust use qdrant_client::qdrant::{with_payload_selector::SelectorOptions, QueryPointGroupsBuilder, WithLookupBuilder}; client .query_groups( QueryPointGroupsBuilder::new(""{collection_name}"", ""document_id"") .query(vec![0.2, 0.1, 0.9, 0.7]) .limit(2u64) .limit(2u64) .with_lookup( WithLookupBuilder::new(""documents"") .with_payload(SelectorOptions::Include( vec![""title"".to_string(), ""text"".to_string()].into(), )) .with_vectors(false), ), ) .await?; ``` ```java import java.util.List; import io.qdrant.client.grpc.Points.QueryPointGroups; import io.qdrant.client.grpc.Points.WithLookup; import static io.qdrant.client.QueryFactory.nearest; import static io.qdrant.client.WithVectorsSelectorFactory.enable; import static io.qdrant.client.WithPayloadSelectorFactory.include; client.queryGroupsAsync( QueryPointGroups.newBuilder() .setCollectionName(""{collection_name}"") .setQuery(nearest(0.2f, 0.1f, 0.9f, 0.7f)) .setGroupBy(""document_id"") .setLimit(2) .setGroupSize(2) .setWithLookup( WithLookup.newBuilder() .setCollection(""documents"") .setWithPayload(include(List.of(""title"", ""text""))) .setWithVectors(enable(false)) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.SearchGroupsAsync( collectionName: ""{collection_name}"", vector: new float[] { 0.2f, 0.1f, 0.9f, 0.7f}, groupBy: ""document_id"", limit: 2, groupSize: 2, withLookup: new WithLookup { Collection = ""documents"", WithPayload = new WithPayloadSelector { Include = new PayloadIncludeSelector { Fields = { new string[] { ""title"", ""text"" } } } }, WithVectors = false } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.QueryGroups(context.Background(), &qdrant.QueryPointGroups{ CollectionName: ""{collection_name}"", Query: qdrant.NewQuery(0.2, 0.1, 0.9, 0.7), GroupBy: ""document_id"", GroupSize: qdrant.PtrOf(uint64(2)), WithLookup: &qdrant.WithLookup{ Collection: ""documents"", WithPayload: qdrant.NewWithPayloadInclude(""title"", ""text""), }, }) ``` For the `with_lookup` parameter, you can also use the shorthand `with_lookup=""documents""` to bring the whole payload and vector(s) without explicitly specifying it. The looked up result will show up under `lookup` in each group. ```json { ""result"": { ""groups"": [ { ""id"": 1, ""hits"": [ { ""id"": 0, ""score"": 0.91 }, { ""id"": 1, ""score"": 0.85 } ], ""lookup"": { ""id"": 1, ""payload"": { ""title"": ""Document A"", ""text"": ""This is document A"" } } }, { ""id"": 2, ""hits"": [ { ""id"": 1, ""score"": 0.85 } ], ""lookup"": { ""id"": 2, ""payload"": { ""title"": ""Document B"", ""text"": ""This is document B"" } } } ] }, ""status"": ""ok"", ""time"": 0.001 } ``` Since the lookup is done by matching directly with the point id, any group id that is not an existing (and valid) point id in the lookup collection will be ignored, and the `lookup` field will be empty. ## Random Sampling *Available as of v1.11.0* In some cases it might be useful to retrieve a random sample of points from the collection. This can be useful for debugging, testing, or for providing entry points for exploration. Random sampling API is a part of [Universal Query API](#query-api) and can be used in the same way as regular search API. ```http { ""collection_name"": ""{collection_name}"", ""query"": { ""sample"": ""random"" } } ``` ```python from qdrant_client import QdrantClient, models sampled = client.query_points( collection_name=""{collection_name}"", query=models.SampleQuery(sample=models.Sample.Random) ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); const sampled = await client.query(""{collection_name}"", { query: { sample: ""random"", }, }); ``` ```rust use qdrant_client::Qdrant; use qdrant_client::qdrant::{Query, QueryPointsBuilder}; let client = Qdrant::from_url(""http://localhost:6334"").build()?; let sampled = client .query( QueryPointsBuilder::new(""{collection_name}"") .query(Query::new_sample(Sample::Random)) ) .await?; ``` ```java import static io.qdrant.client.QueryFactory.sample; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.QueryPoints; import io.qdrant.client.grpc.Points.Sample; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .queryAsync( QueryPoints.newBuilder() .setCollectionName(""{collection_name}"") .setQuery(sample(Sample.Random)) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.QueryAsync(collectionName: ""{collection_name}"", query: Sample.Random); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.QueryGroups(context.Background(), &qdrant.QueryPointGroups{ CollectionName: ""{collection_name}"", Query: qdrant.NewQuerySample(qdrant.Sample_Random), }) ``` ## Query planning Depending on the filter used in the search - there are several possible scenarios for query execution. Qdrant chooses one of the query execution options depending on the available indexes, the complexity of the conditions and the cardinality of the filtering result. This process is called query planning. The strategy selection process relies heavily on heuristics and can vary from release to release. However, the general principles are: * planning is performed for each segment independently (see [storage](../storage/) for more information about segments) * prefer a full scan if the amount of points is below a threshold * estimate the cardinality of a filtered result before selecting a strategy * retrieve points using payload index (see [indexing](../indexing/)) if cardinality is below threshold * use filterable vector index if the cardinality is above a threshold You can adjust the threshold using a [configuration file](https://github.com/qdrant/qdrant/blob/master/config/config.yaml), as well as independently for each collection. ",documentation/concepts/search.md "--- title: Payload weight: 45 aliases: - ../payload --- # Payload One of the significant features of Qdrant is the ability to store additional information along with vectors. This information is called `payload` in Qdrant terminology. Qdrant allows you to store any information that can be represented using JSON. Here is an example of a typical payload: ```json { ""name"": ""jacket"", ""colors"": [""red"", ""blue""], ""count"": 10, ""price"": 11.99, ""locations"": [ { ""lon"": 52.5200, ""lat"": 13.4050 } ], ""reviews"": [ { ""user"": ""alice"", ""score"": 4 }, { ""user"": ""bob"", ""score"": 5 } ] } ``` ## Payload types In addition to storing payloads, Qdrant also allows you search based on certain kinds of values. This feature is implemented as additional filters during the search and will enable you to incorporate custom logic on top of semantic similarity. During the filtering, Qdrant will check the conditions over those values that match the type of the filtering condition. If the stored value type does not fit the filtering condition - it will be considered not satisfied. For example, you will get an empty output if you apply the [range condition](../filtering/#range) on the string data. However, arrays (multiple values of the same type) are treated a little bit different. When we apply a filter to an array, it will succeed if at least one of the values inside the array meets the condition. The filtering process is discussed in detail in the section [Filtering](../filtering/). Let's look at the data types that Qdrant supports for searching: ### Integer `integer` - 64-bit integer in the range from `-9223372036854775808` to `9223372036854775807`. Example of single and multiple `integer` values: ```json { ""count"": 10, ""sizes"": [35, 36, 38] } ``` ### Float `float` - 64-bit floating point number. Example of single and multiple `float` values: ```json { ""price"": 11.99, ""ratings"": [9.1, 9.2, 9.4] } ``` ### Bool Bool - binary value. Equals to `true` or `false`. Example of single and multiple `bool` values: ```json { ""is_delivered"": true, ""responses"": [false, false, true, false] } ``` ### Keyword `keyword` - string value. Example of single and multiple `keyword` values: ```json { ""name"": ""Alice"", ""friends"": [ ""bob"", ""eva"", ""jack"" ] } ``` ### Geo `geo` is used to represent geographical coordinates. Example of single and multiple `geo` values: ```json { ""location"": { ""lon"": 52.5200, ""lat"": 13.4050 }, ""cities"": [ { ""lon"": 51.5072, ""lat"": 0.1276 }, { ""lon"": 40.7128, ""lat"": 74.0060 } ] } ``` Coordinate should be described as an object containing two fields: `lon` - for longitude, and `lat` - for latitude. ### Datetime *Available as of v1.8.0* `datetime` - date and time in [RFC 3339] format. See the following examples of single and multiple `datetime` values: ```json { ""created_at"": ""2023-02-08T10:49:00Z"", ""updated_at"": [ ""2023-02-08T13:52:00Z"", ""2023-02-21T21:23:00Z"" ] } ``` The following formats are supported: - `""2023-02-08T10:49:00Z""` ([RFC 3339], UTC) - `""2023-02-08T11:49:00+01:00""` ([RFC 3339], with timezone) - `""2023-02-08T10:49:00""` (without timezone, UTC is assumed) - `""2023-02-08T10:49""` (without timezone and seconds) - `""2023-02-08""` (only date, midnight is assumed) Notes about the format: - `T` can be replaced with a space. - The `T` and `Z` symbols are case-insensitive. - UTC is always assumed when the timezone is not specified. - Timezone can have the following formats: `±HH:MM`, `±HHMM`, `±HH`, or `Z`. - Seconds can have up to 6 decimals, so the finest granularity for `datetime` is microseconds. [RFC 3339]: https://datatracker.ietf.org/doc/html/rfc3339#section-5.6 ### UUID *Available as of v1.11.0* In addition to the basic `keyword` type, Qdrant supports `uuid` type for storing UUID values. Functionally, it works the same as `keyword`, internally stores parsed UUID values. ```json { ""uuid"": ""550e8400-e29b-41d4-a716-446655440000"", ""uuids"": [ ""550e8400-e29b-41d4-a716-446655440000"", ""550e8400-e29b-41d4-a716-446655440001"" ] } ``` String representation of UUID (e.g. `550e8400-e29b-41d4-a716-446655440000`) occupies 36 bytes. But when numeric representation is used, it is only 128 bits (16 bytes). Usage of `uuid` index type is recommended in payload-heavy collections to save RAM and improve search performance. ## Create point with payload REST API ([Schema](https://api.qdrant.tech/api-reference/points/upsert-points)) ```http PUT /collections/{collection_name}/points { ""points"": [ { ""id"": 1, ""vector"": [0.05, 0.61, 0.76, 0.74], ""payload"": {""city"": ""Berlin"", ""price"": 1.99} }, { ""id"": 2, ""vector"": [0.19, 0.81, 0.75, 0.11], ""payload"": {""city"": [""Berlin"", ""London""], ""price"": 1.99} }, { ""id"": 3, ""vector"": [0.36, 0.55, 0.47, 0.94], ""payload"": {""city"": [""Berlin"", ""Moscow""], ""price"": [1.99, 2.99]} } ] } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.upsert( collection_name=""{collection_name}"", points=[ models.PointStruct( id=1, vector=[0.05, 0.61, 0.76, 0.74], payload={ ""city"": ""Berlin"", ""price"": 1.99, }, ), models.PointStruct( id=2, vector=[0.19, 0.81, 0.75, 0.11], payload={ ""city"": [""Berlin"", ""London""], ""price"": 1.99, }, ), models.PointStruct( id=3, vector=[0.36, 0.55, 0.47, 0.94], payload={ ""city"": [""Berlin"", ""Moscow""], ""price"": [1.99, 2.99], }, ), ], ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.upsert(""{collection_name}"", { points: [ { id: 1, vector: [0.05, 0.61, 0.76, 0.74], payload: { city: ""Berlin"", price: 1.99, }, }, { id: 2, vector: [0.19, 0.81, 0.75, 0.11], payload: { city: [""Berlin"", ""London""], price: 1.99, }, }, { id: 3, vector: [0.36, 0.55, 0.47, 0.94], payload: { city: [""Berlin"", ""Moscow""], price: [1.99, 2.99], }, }, ], }); ``` ```rust use qdrant_client::qdrant::{PointStruct, UpsertPointsBuilder}; use qdrant_client::{Payload, Qdrant, QdrantError}; use serde_json::json; let client = Qdrant::from_url(""http://localhost:6334"").build()?; let points = vec![ PointStruct::new( 1, vec![0.05, 0.61, 0.76, 0.74], Payload::try_from(json!({""city"": ""Berlin"", ""price"": 1.99})).unwrap(), ), PointStruct::new( 2, vec![0.19, 0.81, 0.75, 0.11], Payload::try_from(json!({""city"": [""Berlin"", ""London""]})).unwrap(), ), PointStruct::new( 3, vec![0.36, 0.55, 0.47, 0.94], Payload::try_from(json!({""city"": [""Berlin"", ""Moscow""], ""price"": [1.99, 2.99]})) .unwrap(), ), ]; client .upsert_points(UpsertPointsBuilder::new(""{collection_name}"", points).wait(true)) .await?; ``` ```java import java.util.List; import java.util.Map; import static io.qdrant.client.PointIdFactory.id; import static io.qdrant.client.ValueFactory.value; import static io.qdrant.client.VectorsFactory.vectors; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.PointStruct; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .upsertAsync( ""{collection_name}"", List.of( PointStruct.newBuilder() .setId(id(1)) .setVectors(vectors(0.05f, 0.61f, 0.76f, 0.74f)) .putAllPayload(Map.of(""city"", value(""Berlin""), ""price"", value(1.99))) .build(), PointStruct.newBuilder() .setId(id(2)) .setVectors(vectors(0.19f, 0.81f, 0.75f, 0.11f)) .putAllPayload( Map.of(""city"", list(List.of(value(""Berlin""), value(""London""))))) .build(), PointStruct.newBuilder() .setId(id(3)) .setVectors(vectors(0.36f, 0.55f, 0.47f, 0.94f)) .putAllPayload( Map.of( ""city"", list(List.of(value(""Berlin""), value(""London""))), ""price"", list(List.of(value(1.99), value(2.99))))) .build())) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.UpsertAsync( collectionName: ""{collection_name}"", points: new List { new PointStruct { Id = 1, Vectors = new[] { 0.05f, 0.61f, 0.76f, 0.74f }, Payload = { [""city""] = ""Berlin"", [""price""] = 1.99 } }, new PointStruct { Id = 2, Vectors = new[] { 0.19f, 0.81f, 0.75f, 0.11f }, Payload = { [""city""] = new[] { ""Berlin"", ""London"" } } }, new PointStruct { Id = 3, Vectors = new[] { 0.36f, 0.55f, 0.47f, 0.94f }, Payload = { [""city""] = new[] { ""Berlin"", ""Moscow"" }, [""price""] = new Value { ListValue = new ListValue { Values = { new Value[] { 1.99, 2.99 } } } } } } } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Upsert(context.Background(), &qdrant.UpsertPoints{ CollectionName: ""{collection_name}"", Points: []*qdrant.PointStruct{ { Id: qdrant.NewIDNum(1), Vectors: qdrant.NewVectors(0.05, 0.61, 0.76, 0.74), Payload: qdrant.NewValueMap(map[string]any{ ""city"": ""Berlin"", ""price"": 1.99}), }, { Id: qdrant.NewIDNum(2), Vectors: qdrant.NewVectors(0.19, 0.81, 0.75, 0.11), Payload: qdrant.NewValueMap(map[string]any{ ""city"": []any{""Berlin"", ""London""}}), }, { Id: qdrant.NewIDNum(3), Vectors: qdrant.NewVectors(0.36, 0.55, 0.47, 0.94), Payload: qdrant.NewValueMap(map[string]any{ ""city"": []any{""Berlin"", ""London""}, ""price"": []any{1.99, 2.99}}), }, }, }) ``` ## Update payload ### Set payload Set only the given payload values on a point. REST API ([Schema](https://api.qdrant.tech/api-reference/points/set-payload)): ```http POST /collections/{collection_name}/points/payload { ""payload"": { ""property1"": ""string"", ""property2"": ""string"" }, ""points"": [ 0, 3, 100 ] } ``` ```python client.set_payload( collection_name=""{collection_name}"", payload={ ""property1"": ""string"", ""property2"": ""string"", }, points=[0, 3, 10], ) ``` ```typescript client.setPayload(""{collection_name}"", { payload: { property1: ""string"", property2: ""string"", }, points: [0, 3, 10], }); ``` ```rust use qdrant_client::qdrant::{ PointsIdsList, SetPayloadPointsBuilder, }; use qdrant_client::Payload,; use serde_json::json; client .set_payload( SetPayloadPointsBuilder::new( ""{collection_name}"", Payload::try_from(json!({ ""property1"": ""string"", ""property2"": ""string"", })) .unwrap(), ) .points_selector(PointsIdsList { ids: vec![0.into(), 3.into(), 10.into()], }) .wait(true), ) .await?; ``` ```java import java.util.List; import java.util.Map; import static io.qdrant.client.PointIdFactory.id; import static io.qdrant.client.ValueFactory.value; client .setPayloadAsync( ""{collection_name}"", Map.of(""property1"", value(""string""), ""property2"", value(""string"")), List.of(id(0), id(3), id(10)), true, null, null) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.SetPayloadAsync( collectionName: ""{collection_name}"", payload: new Dictionary { { ""property1"", ""string"" }, { ""property2"", ""string"" } }, ids: new ulong[] { 0, 3, 10 } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.SetPayload(context.Background(), &qdrant.SetPayloadPoints{ CollectionName: ""{collection_name}"", Payload: qdrant.NewValueMap( map[string]any{""property1"": ""string"", ""property2"": ""string""}), PointsSelector: qdrant.NewPointsSelector( qdrant.NewIDNum(0), qdrant.NewIDNum(3)), }) ``` You don't need to know the ids of the points you want to modify. The alternative is to use filters. ```http POST /collections/{collection_name}/points/payload { ""payload"": { ""property1"": ""string"", ""property2"": ""string"" }, ""filter"": { ""must"": [ { ""key"": ""color"", ""match"": { ""value"": ""red"" } } ] } } ``` ```python client.set_payload( collection_name=""{collection_name}"", payload={ ""property1"": ""string"", ""property2"": ""string"", }, points=models.Filter( must=[ models.FieldCondition( key=""color"", match=models.MatchValue(value=""red""), ), ], ), ) ``` ```typescript client.setPayload(""{collection_name}"", { payload: { property1: ""string"", property2: ""string"", }, filter: { must: [ { key: ""color"", match: { value: ""red"", }, }, ], }, }); ``` ```rust use qdrant_client::qdrant::{Condition, Filter, SetPayloadPointsBuilder}; use qdrant_client::Payload; use serde_json::json; client .set_payload( SetPayloadPointsBuilder::new( ""{collection_name}"", Payload::try_from(json!({ ""property1"": ""string"", ""property2"": ""string"", })) .unwrap(), ) .points_selector(Filter::must([Condition::matches( ""color"", ""red"".to_string(), )])) .wait(true), ) .await?; ``` ```java import java.util.Map; import static io.qdrant.client.ConditionFactory.matchKeyword; import static io.qdrant.client.ValueFactory.value; client .setPayloadAsync( ""{collection_name}"", Map.of(""property1"", value(""string""), ""property2"", value(""string"")), Filter.newBuilder().addMust(matchKeyword(""color"", ""red"")).build(), true, null, null) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient(""localhost"", 6334); await client.SetPayloadAsync( collectionName: ""{collection_name}"", payload: new Dictionary { { ""property1"", ""string"" }, { ""property2"", ""string"" } }, filter: MatchKeyword(""color"", ""red"") ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.SetPayload(context.Background(), &qdrant.SetPayloadPoints{ CollectionName: ""{collection_name}"", Payload: qdrant.NewValueMap( map[string]any{""property1"": ""string"", ""property2"": ""string""}), PointsSelector: qdrant.NewPointsSelectorFilter(&qdrant.Filter{ Must: []*qdrant.Condition{ qdrant.NewMatch(""color"", ""red""), }, }), }) ``` _Available as of v1.8.0_ It is possible to modify only a specific key of the payload by using the `key` parameter. For instance, given the following payload JSON object on a point: ```json { ""property1"": { ""nested_property"": ""foo"", }, ""property2"": { ""nested_property"": ""bar"", } } ``` You can modify the `nested_property` of `property1` with the following request: ```http POST /collections/{collection_name}/points/payload { ""payload"": { ""nested_property"": ""qux"", }, ""key"": ""property1"", ""points"": [1] } ``` Resulting in the following payload: ```json { ""property1"": { ""nested_property"": ""qux"", }, ""property2"": { ""nested_property"": ""bar"", } } ``` ### Overwrite payload Fully replace any existing payload with the given one. REST API ([Schema](https://api.qdrant.tech/api-reference/points/overwrite-payload)): ```http PUT /collections/{collection_name}/points/payload { ""payload"": { ""property1"": ""string"", ""property2"": ""string"" }, ""points"": [ 0, 3, 100 ] } ``` ```python client.overwrite_payload( collection_name=""{collection_name}"", payload={ ""property1"": ""string"", ""property2"": ""string"", }, points=[0, 3, 10], ) ``` ```typescript client.overwritePayload(""{collection_name}"", { payload: { property1: ""string"", property2: ""string"", }, points: [0, 3, 10], }); ``` ```rust use qdrant_client::qdrant::{PointsIdsList, SetPayloadPointsBuilder}; use qdrant_client::Payload; use serde_json::json; client .overwrite_payload( SetPayloadPointsBuilder::new( ""{collection_name}"", Payload::try_from(json!({ ""property1"": ""string"", ""property2"": ""string"", })) .unwrap(), ) .points_selector(PointsIdsList { ids: vec![0.into(), 3.into(), 10.into()], }) .wait(true), ) .await?; ``` ```java import java.util.List; import static io.qdrant.client.PointIdFactory.id; import static io.qdrant.client.ValueFactory.value; client .overwritePayloadAsync( ""{collection_name}"", Map.of(""property1"", value(""string""), ""property2"", value(""string"")), List.of(id(0), id(3), id(10)), true, null, null) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.OverwritePayloadAsync( collectionName: ""{collection_name}"", payload: new Dictionary { { ""property1"", ""string"" }, { ""property2"", ""string"" } }, ids: new ulong[] { 0, 3, 10 } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.OverwritePayload(context.Background(), &qdrant.SetPayloadPoints{ CollectionName: ""{collection_name}"", Payload: qdrant.NewValueMap( map[string]any{""property1"": ""string"", ""property2"": ""string""}), PointsSelector: qdrant.NewPointsSelector( qdrant.NewIDNum(0), qdrant.NewIDNum(3)), }) ``` Like [set payload](#set-payload), you don't need to know the ids of the points you want to modify. The alternative is to use filters. ### Clear payload This method removes all payload keys from specified points REST API ([Schema](https://api.qdrant.tech/api-reference/points/clear-payload)): ```http POST /collections/{collection_name}/points/payload/clear { ""points"": [0, 3, 100] } ``` ```python client.clear_payload( collection_name=""{collection_name}"", points_selector=[0, 3, 100], ) ``` ```typescript client.clearPayload(""{collection_name}"", { points: [0, 3, 100], }); ``` ```rust use qdrant_client::qdrant::{ClearPayloadPointsBuilder, PointsIdsList}; client .clear_payload( ClearPayloadPointsBuilder::new(""{collection_name}"") .points(PointsIdsList { ids: vec![0.into(), 3.into(), 10.into()], }) .wait(true), ) .await?; ``` ```java import java.util.List; import static io.qdrant.client.PointIdFactory.id; client .clearPayloadAsync(""{collection_name}"", List.of(id(0), id(3), id(100)), true, null, null) .get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient(""localhost"", 6334); await client.ClearPayloadAsync(collectionName: ""{collection_name}"", ids: new ulong[] { 0, 3, 100 }); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.ClearPayload(context.Background(), &qdrant.ClearPayloadPoints{ CollectionName: ""{collection_name}"", Points: qdrant.NewPointsSelector( qdrant.NewIDNum(0), qdrant.NewIDNum(3)), }) ``` ### Delete payload keys Delete specific payload keys from points. REST API ([Schema](https://api.qdrant.tech/api-reference/points/delete-payload)): ```http POST /collections/{collection_name}/points/payload/delete { ""keys"": [""color"", ""price""], ""points"": [0, 3, 100] } ``` ```python client.delete_payload( collection_name=""{collection_name}"", keys=[""color"", ""price""], points=[0, 3, 100], ) ``` ```typescript client.deletePayload(""{collection_name}"", { keys: [""color"", ""price""], points: [0, 3, 100], }); ``` ```rust use qdrant_client::qdrant::{DeletePayloadPointsBuilder, PointsIdsList}; client .delete_payload( DeletePayloadPointsBuilder::new( ""{collection_name}"", vec![""color"".to_string(), ""price"".to_string()], ) .points_selector(PointsIdsList { ids: vec![0.into(), 3.into(), 10.into()], }) .wait(true), ) .await?; ``` ```java import java.util.List; import static io.qdrant.client.PointIdFactory.id; client .deletePayloadAsync( ""{collection_name}"", List.of(""color"", ""price""), List.of(id(0), id(3), id(100)), true, null, null) .get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient(""localhost"", 6334); await client.DeletePayloadAsync( collectionName: ""{collection_name}"", keys: [""color"", ""price""], ids: new ulong[] { 0, 3, 100 } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.DeletePayload(context.Background(), &qdrant.DeletePayloadPoints{ CollectionName: ""{collection_name}"", Keys: []string{""color"", ""price""}, PointsSelector: qdrant.NewPointsSelector( qdrant.NewIDNum(0), qdrant.NewIDNum(3)), }) ``` Alternatively, you can use filters to delete payload keys from the points. ```http POST /collections/{collection_name}/points/payload/delete { ""keys"": [""color"", ""price""], ""filter"": { ""must"": [ { ""key"": ""color"", ""match"": { ""value"": ""red"" } } ] } } ``` ```python client.delete_payload( collection_name=""{collection_name}"", keys=[""color"", ""price""], points=models.Filter( must=[ models.FieldCondition( key=""color"", match=models.MatchValue(value=""red""), ), ], ), ) ``` ```typescript client.deletePayload(""{collection_name}"", { keys: [""color"", ""price""], filter: { must: [ { key: ""color"", match: { value: ""red"", }, }, ], }, }); ``` ```rust use qdrant_client::qdrant::{Condition, DeletePayloadPointsBuilder, Filter}; client .delete_payload( DeletePayloadPointsBuilder::new( ""{collection_name}"", vec![""color"".to_string(), ""price"".to_string()], ) .points_selector(Filter::must([Condition::matches( ""color"", ""red"".to_string(), )])) .wait(true), ) .await?; ``` ```java import java.util.List; import static io.qdrant.client.ConditionFactory.matchKeyword; client .deletePayloadAsync( ""{collection_name}"", List.of(""color"", ""price""), Filter.newBuilder().addMust(matchKeyword(""color"", ""red"")).build(), true, null, null) .get(); ``` ```csharp using Qdrant.Client; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient(""localhost"", 6334); await client.DeletePayloadAsync( collectionName: ""{collection_name}"", keys: [""color"", ""price""], filter: MatchKeyword(""color"", ""red"") ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.DeletePayload(context.Background(), &qdrant.DeletePayloadPoints{ CollectionName: ""{collection_name}"", Keys: []string{""color"", ""price""}, PointsSelector: qdrant.NewPointsSelectorFilter( &qdrant.Filter{ Must: []*qdrant.Condition{qdrant.NewMatch(""color"", ""red"")}, }, ), }) ``` ## Payload indexing To search more efficiently with filters, Qdrant allows you to create indexes for payload fields by specifying the name and type of field it is intended to be. The indexed fields also affect the vector index. See [Indexing](../indexing/) for details. In practice, we recommend creating an index on those fields that could potentially constrain the results the most. For example, using an index for the object ID will be much more efficient, being unique for each record, than an index by its color, which has only a few possible values. In compound queries involving multiple fields, Qdrant will attempt to use the most restrictive index first. To create index for the field, you can use the following: REST API ([Schema](https://api.qdrant.tech/api-reference/indexes/create-field-index)) ```http PUT /collections/{collection_name}/index { ""field_name"": ""name_of_the_field_to_index"", ""field_schema"": ""keyword"" } ``` ```python client.create_payload_index( collection_name=""{collection_name}"", field_name=""name_of_the_field_to_index"", field_schema=""keyword"", ) ``` ```typescript client.createPayloadIndex(""{collection_name}"", { field_name: ""name_of_the_field_to_index"", field_schema: ""keyword"", }); ``` ```rust use qdrant_client::qdrant::{CreateFieldIndexCollectionBuilder, FieldType}; client .create_field_index( CreateFieldIndexCollectionBuilder::new( ""{collection_name}"", ""name_of_the_field_to_index"", FieldType::Keyword, ) .wait(true), ) .await?; ``` ```java import io.qdrant.client.grpc.Collections.PayloadSchemaType; client.createPayloadIndexAsync( ""{collection_name}"", ""name_of_the_field_to_index"", PayloadSchemaType.Keyword, null, true, null, null); ``` ```csharp using Qdrant.Client; var client = new QdrantClient(""localhost"", 6334); await client.CreatePayloadIndexAsync( collectionName: ""{collection_name}"", fieldName: ""name_of_the_field_to_index"" ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.CreateFieldIndex(context.Background(), &qdrant.CreateFieldIndexCollection{ CollectionName: ""{collection_name}"", FieldName: ""name_of_the_field_to_index"", FieldType: qdrant.FieldType_FieldTypeKeyword.Enum(), }) ``` The index usage flag is displayed in the payload schema with the [collection info API](https://api.qdrant.tech/api-reference/collections/get-collection). Payload schema example: ```json { ""payload_schema"": { ""property1"": { ""data_type"": ""keyword"" }, ""property2"": { ""data_type"": ""integer"" } } } ``` ",documentation/concepts/payload.md "--- title: Collections weight: 30 aliases: - ../collections - /concepts/collections/ - /documentation/frameworks/fondant/documentation/concepts/collections/ --- # Collections A collection is a named set of points (vectors with a payload) among which you can search. The vector of each point within the same collection must have the same dimensionality and be compared by a single metric. [Named vectors](#collection-with-multiple-vectors) can be used to have multiple vectors in a single point, each of which can have their own dimensionality and metric requirements. Distance metrics are used to measure similarities among vectors. The choice of metric depends on the way vectors obtaining and, in particular, on the method of neural network encoder training. Qdrant supports these most popular types of metrics: * Dot product: `Dot` - [[wiki]](https://en.wikipedia.org/wiki/Dot_product) * Cosine similarity: `Cosine` - [[wiki]](https://en.wikipedia.org/wiki/Cosine_similarity) * Euclidean distance: `Euclid` - [[wiki]](https://en.wikipedia.org/wiki/Euclidean_distance) * Manhattan distance: `Manhattan` - [[wiki]](https://en.wikipedia.org/wiki/Taxicab_geometry) In addition to metrics and vector size, each collection uses its own set of parameters that controls collection optimization, index construction, and vacuum. These settings can be changed at any time by a corresponding request. ## Setting up multitenancy **How many collections should you create?** In most cases, you should only use a single collection with payload-based partitioning. This approach is called [multitenancy](https://en.wikipedia.org/wiki/Multitenancy). It is efficient for most of users, but it requires additional configuration. [Learn how to set it up](../../tutorials/multiple-partitions/) **When should you create multiple collections?** When you have a limited number of users and you need isolation. This approach is flexible, but it may be more costly, since creating numerous collections may result in resource overhead. Also, you need to ensure that they do not affect each other in any way, including performance-wise. ## Create a collection ```http PUT /collections/{collection_name} { ""vectors"": { ""size"": 300, ""distance"": ""Cosine"" } } ``` ```bash curl -X PUT http://localhost:6333/collections/{collection_name} \ -H 'Content-Type: application/json' \ --data-raw '{ ""vectors"": { ""size"": 300, ""distance"": ""Cosine"" } }' ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.create_collection( collection_name=""{collection_name}"", vectors_config=models.VectorParams(size=100, distance=models.Distance.COSINE), ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.createCollection(""{collection_name}"", { vectors: { size: 100, distance: ""Cosine"" }, }); ``` ```rust use qdrant_client::Qdrant; use qdrant_client::qdrant::{CreateCollectionBuilder, VectorParamsBuilder}; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .create_collection( CreateCollectionBuilder::new(""{collection_name}"") .vectors_config(VectorParamsBuilder::new(100, Distance::Cosine)), ) .await?; ``` ```java import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; QdrantClient client = new QdrantClient( QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client.createCollectionAsync(""{collection_name}"", VectorParams.newBuilder().setDistance(Distance.Cosine).setSize(100).build()).get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.CreateCollectionAsync( collectionName: ""{collection_name}"", vectorsConfig: new VectorParams { Size = 100, Distance = Distance.Cosine } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.CreateCollection(context.Background(), &qdrant.CreateCollection{ CollectionName: ""{collection_name}"", VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{ Size: 100, Distance: qdrant.Distance_Cosine, }), }) ``` In addition to the required options, you can also specify custom values for the following collection options: * `hnsw_config` - see [indexing](../indexing/#vector-index) for details. * `wal_config` - Write-Ahead-Log related configuration. See more details about [WAL](../storage/#versioning) * `optimizers_config` - see [optimizer](../optimizer/) for details. * `shard_number` - which defines how many shards the collection should have. See [distributed deployment](../../guides/distributed_deployment/#sharding) section for details. * `on_disk_payload` - defines where to store payload data. If `true` - payload will be stored on disk only. Might be useful for limiting the RAM usage in case of large payload. * `quantization_config` - see [quantization](../../guides/quantization/#setting-up-quantization-in-qdrant) for details. Default parameters for the optional collection parameters are defined in [configuration file](https://github.com/qdrant/qdrant/blob/master/config/config.yaml). See [schema definitions](https://api.qdrant.tech/api-reference/collections/create-collection) and a [configuration file](https://github.com/qdrant/qdrant/blob/master/config/config.yaml) for more information about collection and vector parameters. *Available as of v1.2.0* Vectors all live in RAM for very quick access. The `on_disk` parameter can be set in the vector configuration. If true, all vectors will live on disk. This will enable the use of [memmaps](../../concepts/storage/#configuring-memmap-storage), which is suitable for ingesting a large amount of data. ### Create collection from another collection *Available as of v1.0.0* It is possible to initialize a collection from another existing collection. This might be useful for experimenting quickly with different configurations for the same data set. Make sure the vectors have the same `size` and `distance` function when setting up the vectors configuration in the new collection. If you used the previous sample code, `""size"": 300` and `""distance"": ""Cosine""`. ```http PUT /collections/{collection_name} { ""vectors"": { ""size"": 100, ""distance"": ""Cosine"" }, ""init_from"": { ""collection"": ""{from_collection_name}"" } } ``` ```bash curl -X PUT http://localhost:6333/collections/{collection_name} \ -H 'Content-Type: application/json' \ --data-raw '{ ""vectors"": { ""size"": 300, ""distance"": ""Cosine"" }, ""init_from"": { ""collection"": {from_collection_name} } }' ``` ```python from qdrant_client import QdrantClient client = QdrantClient(url=""http://localhost:6333"") client.create_collection( collection_name=""{collection_name}"", vectors_config=models.VectorParams(size=100, distance=models.Distance.COSINE), init_from=models.InitFrom(collection=""{from_collection_name}""), ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.createCollection(""{collection_name}"", { vectors: { size: 100, distance: ""Cosine"" }, init_from: { collection: ""{from_collection_name}"" }, }); ``` ```rust use qdrant_client::Qdrant; use qdrant_client::qdrant::{CreateCollectionBuilder, Distance, VectorParamsBuilder}; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .create_collection( CreateCollectionBuilder::new(""{collection_name}"") .vectors_config(VectorParamsBuilder::new(100, Distance::Cosine)) .init_from_collection(""{from_collection_name}""), ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName(""{collection_name}"") .setVectorsConfig( VectorsConfig.newBuilder() .setParams( VectorParams.newBuilder() .setSize(100) .setDistance(Distance.Cosine) .build())) .setInitFromCollection(""{from_collection_name}"") .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.CreateCollectionAsync( collectionName: ""{collection_name}"", vectorsConfig: new VectorParams { Size = 100, Distance = Distance.Cosine }, initFromCollection: ""{from_collection_name}"" ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.CreateCollection(context.Background(), &qdrant.CreateCollection{ CollectionName: ""{collection_name}"", VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{ Size: 100, Distance: qdrant.Distance_Cosine, }), InitFromCollection: qdrant.PtrOf(""{from_collection_name}""), }) ``` ### Collection with multiple vectors *Available as of v0.10.0* It is possible to have multiple vectors per record. This feature allows for multiple vector storages per collection. To distinguish vectors in one record, they should have a unique name defined when creating the collection. Each named vector in this mode has its distance and size: ```http PUT /collections/{collection_name} { ""vectors"": { ""image"": { ""size"": 4, ""distance"": ""Dot"" }, ""text"": { ""size"": 8, ""distance"": ""Cosine"" } } } ``` ```bash curl -X PUT http://localhost:6333/collections/{collection_name} \ -H 'Content-Type: application/json' \ --data-raw '{ ""vectors"": { ""image"": { ""size"": 4, ""distance"": ""Dot"" }, ""text"": { ""size"": 8, ""distance"": ""Cosine"" } } }' ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.create_collection( collection_name=""{collection_name}"", vectors_config={ ""image"": models.VectorParams(size=4, distance=models.Distance.DOT), ""text"": models.VectorParams(size=8, distance=models.Distance.COSINE), }, ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.createCollection(""{collection_name}"", { vectors: { image: { size: 4, distance: ""Dot"" }, text: { size: 8, distance: ""Cosine"" }, }, }); ``` ```rust use qdrant_client::Qdrant; use qdrant_client::qdrant::{ CreateCollectionBuilder, Distance, VectorParamsBuilder, VectorsConfigBuilder, }; let client = Qdrant::from_url(""http://localhost:6334"").build()?; let mut vectors_config = VectorsConfigBuilder::default(); vectors_config .add_named_vector_params(""image"", VectorParamsBuilder::new(4, Distance::Dot).build()); vectors_config.add_named_vector_params( ""text"", VectorParamsBuilder::new(8, Distance::Cosine).build(), ); client .create_collection( CreateCollectionBuilder::new(""{collection_name}"").vectors_config(vectors_config), ) .await?; ``` ```java import java.util.Map; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.VectorParams; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .createCollectionAsync( ""{collection_name}"", Map.of( ""image"", VectorParams.newBuilder().setSize(4).setDistance(Distance.Dot).build(), ""text"", VectorParams.newBuilder().setSize(8).setDistance(Distance.Cosine).build())) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.CreateCollectionAsync( collectionName: ""{collection_name}"", vectorsConfig: new VectorParamsMap { Map = { [""image""] = new VectorParams { Size = 4, Distance = Distance.Dot }, [""text""] = new VectorParams { Size = 8, Distance = Distance.Cosine }, } } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.CreateCollection(context.Background(), &qdrant.CreateCollection{ CollectionName: ""{collection_name}"", VectorsConfig: qdrant.NewVectorsConfigMap( map[string]*qdrant.VectorParams{ ""image"": { Size: 4, Distance: qdrant.Distance_Dot, }, ""text"": { Size: 8, Distance: qdrant.Distance_Cosine, }, }), }) ``` For rare use cases, it is possible to create a collection without any vector storage. *Available as of v1.1.1* For each named vector you can optionally specify [`hnsw_config`](../indexing/#vector-index) or [`quantization_config`](../../guides/quantization/#setting-up-quantization-in-qdrant) to deviate from the collection configuration. This can be useful to fine-tune search performance on a vector level. *Available as of v1.2.0* Vectors all live in RAM for very quick access. On a per-vector basis you can set `on_disk` to true to store all vectors on disk at all times. This will enable the use of [memmaps](../../concepts/storage/#configuring-memmap-storage), which is suitable for ingesting a large amount of data. ### Vector datatypes *Available as of v1.9.0* Some embedding providers may provide embeddings in a pre-quantized format. One of the most notable examples is the [Cohere int8 & binary embeddings](https://cohere.com/blog/int8-binary-embeddings). Qdrant has direct support for uint8 embeddings, which you can also use in combination with binary quantization. To create a collection with uint8 embeddings, you can use the following configuration: ```http PUT /collections/{collection_name} { ""vectors"": { ""size"": 1024, ""distance"": ""Cosine"", ""datatype"": ""uint8"" } } ``` ```bash curl -X PUT http://localhost:6333/collections/{collection_name} \ -H 'Content-Type: application/json' \ --data-raw '{ ""vectors"": { ""size"": 1024, ""distance"": ""Cosine"", ""datatype"": ""uint8"" } }' ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.create_collection( collection_name=""{collection_name}"", vectors_config=models.VectorParams( size=1024, distance=models.Distance.COSINE, datatype=models.Datatype.UINT8, ), ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.createCollection(""{collection_name}"", { vectors: { image: { size: 1024, distance: ""Cosine"", datatype: ""uint8"" }, }, }); ``` ```rust use qdrant_client::Qdrant; use qdrant_client::qdrant::{ CreateCollectionBuilder, Datatype, Distance, VectorParamsBuilder, }; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .create_collection( CreateCollectionBuilder::new(""{collection_name}"").vectors_config( VectorParamsBuilder::new(1024, Distance::Cosine).datatype(Datatype::Uint8), ), ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.grpc.Collections.Datatype; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.VectorParams; QdrantClient client = new QdrantClient( QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .createCollectionAsync(""{collection_name}"", VectorParams.newBuilder() .setSize(1024) .setDistance(Distance.Cosine) .setDatatype(Datatype.Uint8) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.CreateCollectionAsync( collectionName: ""{collection_name}"", vectorsConfig: new VectorParams { Size = 1024, Distance = Distance.Cosine, Datatype = Datatype.Uint8 } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.CreateCollection(context.Background(), &qdrant.CreateCollection{ CollectionName: ""{collection_name}"", VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{ Size: 1024, Distance: qdrant.Distance_Cosine, Datatype: qdrant.Datatype_Uint8.Enum(), }), }) ``` Vectors with `uint8` datatype are stored in a more compact format, which can save memory and improve search speed at the cost of some precision. If you choose to use the `uint8` datatype, elements of the vector will be stored as unsigned 8-bit integers, which can take values **from 0 to 255**. ### Collection with sparse vectors *Available as of v1.7.0* Qdrant supports sparse vectors as a first-class citizen. Sparse vectors are useful for text search, where each word is represented as a separate dimension. Collections can contain sparse vectors as additional [named vectors](#collection-with-multiple-vectors) along side regular dense vectors in a single point. Unlike dense vectors, sparse vectors must be named. And additionally, sparse vectors and dense vectors must have different names within a collection. ```http PUT /collections/{collection_name} { ""sparse_vectors"": { ""text"": { }, } } ``` ```bash curl -X PUT http://localhost:6333/collections/{collection_name} \ -H 'Content-Type: application/json' \ --data-raw '{ ""sparse_vectors"": { ""text"": { } } }' ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.create_collection( collection_name=""{collection_name}"", sparse_vectors_config={ ""text"": models.SparseVectorParams(), }, ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.createCollection(""{collection_name}"", { sparse_vectors: { text: { }, }, }); ``` ```rust use qdrant_client::Qdrant; use qdrant_client::qdrant::{ CreateCollectionBuilder, SparseVectorParamsBuilder, SparseVectorsConfigBuilder, }; let client = Qdrant::from_url(""http://localhost:6334"").build()?; let mut sparse_vector_config = SparseVectorsConfigBuilder::default(); sparse_vector_config.add_named_vector_params(""text"", SparseVectorParamsBuilder::default()); client .create_collection( CreateCollectionBuilder::new(""{collection_name}"") .sparse_vectors_config(sparse_vector_config), ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.SparseVectorConfig; import io.qdrant.client.grpc.Collections.SparseVectorParams; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName(""{collection_name}"") .setSparseVectorsConfig( SparseVectorConfig.newBuilder() .putMap(""text"", SparseVectorParams.getDefaultInstance())) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.CreateCollectionAsync( collectionName: ""{collection_name}"", sparseVectorsConfig: (""text"", new SparseVectorParams()) ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.CreateCollection(context.Background(), &qdrant.CreateCollection{ CollectionName: ""{collection_namee}"", SparseVectorsConfig: qdrant.NewSparseVectorsConfig( map[string]*qdrant.SparseVectorParams{ ""text"": {}, }), }) ``` Outside of a unique name, there are no required configuration parameters for sparse vectors. The distance function for sparse vectors is always `Dot` and does not need to be specified. However, there are optional parameters to tune the underlying [sparse vector index](../indexing/#sparse-vector-index). ### Check collection existence *Available as of v1.8.0* ```http GET http://localhost:6333/collections/{collection_name}/exists ``` ```bash curl -X GET http://localhost:6333/collections/{collection_name}/exists ``` ```python client.collection_exists(collection_name=""{collection_name}"") ``` ```typescript client.collectionExists(""{collection_name}""); ``` ```rust client.collection_exists(""{collection_name}"").await?; ``` ```java client.collectionExistsAsync(""{collection_name}"").get(); ``` ```csharp await client.CollectionExistsAsync(""{collection_name}""); ``` ```go import ""context"" client.CollectionExists(context.Background(), ""my_collection"") ``` ### Delete collection ```http DELETE http://localhost:6333/collections/{collection_name} ``` ```bash curl -X DELETE http://localhost:6333/collections/{collection_name} ``` ```python client.delete_collection(collection_name=""{collection_name}"") ``` ```typescript client.deleteCollection(""{collection_name}""); ``` ```rust client.delete_collection(""{collection_name}"").await?; ``` ```java client.deleteCollectionAsync(""{collection_name}"").get(); ``` ```csharp await client.DeleteCollectionAsync(""{collection_name}""); ``` ```go import ""context"" client.DeleteCollection(context.Background(), ""{collection_name}"") ``` ### Update collection parameters Dynamic parameter updates may be helpful, for example, for more efficient initial loading of vectors. For example, you can disable indexing during the upload process, and enable it immediately after the upload is finished. As a result, you will not waste extra computation resources on rebuilding the index. The following command enables indexing for segments that have more than 10000 kB of vectors stored: ```http PATCH /collections/{collection_name} { ""optimizers_config"": { ""indexing_threshold"": 10000 } } ``` ```bash curl -X PATCH http://localhost:6333/collections/{collection_name} \ -H 'Content-Type: application/json' \ --data-raw '{ ""optimizers_config"": { ""indexing_threshold"": 10000 } }' ``` ```python client.update_collection( collection_name=""{collection_name}"", optimizer_config=models.OptimizersConfigDiff(indexing_threshold=10000), ) ``` ```typescript client.updateCollection(""{collection_name}"", { optimizers_config: { indexing_threshold: 10000, }, }); ``` ```rust use qdrant_client::qdrant::{OptimizersConfigDiffBuilder, UpdateCollectionBuilder}; client .update_collection( UpdateCollectionBuilder::new(""{collection_name}"").optimizers_config( OptimizersConfigDiffBuilder::default().indexing_threshold(10000), ), ) .await?; ``` ```java import io.qdrant.client.grpc.Collections.OptimizersConfigDiff; import io.qdrant.client.grpc.Collections.UpdateCollection; client.updateCollectionAsync( UpdateCollection.newBuilder() .setCollectionName(""{collection_name}"") .setOptimizersConfig( OptimizersConfigDiff.newBuilder().setIndexingThreshold(10000).build()) .build()); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.UpdateCollectionAsync( collectionName: ""{collection_name}"", optimizersConfig: new OptimizersConfigDiff { IndexingThreshold = 10000 } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.UpdateCollection(context.Background(), &qdrant.UpdateCollection{ CollectionName: ""{collection_name}"", OptimizersConfig: &qdrant.OptimizersConfigDiff{ IndexingThreshold: qdrant.PtrOf(uint64(10000)), }, }) ``` The following parameters can be updated: * `optimizers_config` - see [optimizer](../optimizer/) for details. * `hnsw_config` - see [indexing](../indexing/#vector-index) for details. * `quantization_config` - see [quantization](../../guides/quantization/#setting-up-quantization-in-qdrant) for details. * `vectors` - vector-specific configuration, including individual `hnsw_config`, `quantization_config` and `on_disk` settings. * `params` - other collection parameters, including `write_consistency_factor` and `on_disk_payload`. Full API specification is available in [schema definitions](https://api.qdrant.tech/api-reference/collections/update-collection). Calls to this endpoint may be blocking as it waits for existing optimizers to finish. We recommended against using this in a production database as it may introduce huge overhead due to the rebuilding of the index. #### Update vector parameters *Available as of v1.4.0* Qdrant 1.4 adds support for updating more collection parameters at runtime. HNSW index, quantization and disk configurations can now be changed without recreating a collection. Segments (with index and quantized data) will automatically be rebuilt in the background to match updated parameters. To put vector data on disk for a collection that **does not have** named vectors, use `""""` as name: ```http PATCH /collections/{collection_name} { ""vectors"": { """": { ""on_disk"": true } } } ``` ```bash curl -X PATCH http://localhost:6333/collections/{collection_name} \ -H 'Content-Type: application/json' \ --data-raw '{ ""vectors"": { """": { ""on_disk"": true } } }' ``` To put vector data on disk for a collection that **does have** named vectors: Note: To create a vector name, follow the procedure from our [Points](/documentation/concepts/points/#create-vector-name). ```http PATCH /collections/{collection_name} { ""vectors"": { ""my_vector"": { ""on_disk"": true } } } ``` ```bash curl -X PATCH http://localhost:6333/collections/{collection_name} \ -H 'Content-Type: application/json' \ --data-raw '{ ""vectors"": { ""my_vector"": { ""on_disk"": true } } }' ``` In the following example the HNSW index and quantization parameters are updated, both for the whole collection, and for `my_vector` specifically: ```http PATCH /collections/{collection_name} { ""vectors"": { ""my_vector"": { ""hnsw_config"": { ""m"": 32, ""ef_construct"": 123 }, ""quantization_config"": { ""product"": { ""compression"": ""x32"", ""always_ram"": true } }, ""on_disk"": true } }, ""hnsw_config"": { ""ef_construct"": 123 }, ""quantization_config"": { ""scalar"": { ""type"": ""int8"", ""quantile"": 0.8, ""always_ram"": false } } } ``` ```bash curl -X PATCH http://localhost:6333/collections/{collection_name} \ -H 'Content-Type: application/json' \ --data-raw '{ ""vectors"": { ""my_vector"": { ""hnsw_config"": { ""m"": 32, ""ef_construct"": 123 }, ""quantization_config"": { ""product"": { ""compression"": ""x32"", ""always_ram"": true } }, ""on_disk"": true } }, ""hnsw_config"": { ""ef_construct"": 123 }, ""quantization_config"": { ""scalar"": { ""type"": ""int8"", ""quantile"": 0.8, ""always_ram"": false } } }' ``` ```python client.update_collection( collection_name=""{collection_name}"", vectors_config={ ""my_vector"": models.VectorParamsDiff( hnsw_config=models.HnswConfigDiff( m=32, ef_construct=123, ), quantization_config=models.ProductQuantization( product=models.ProductQuantizationConfig( compression=models.CompressionRatio.X32, always_ram=True, ), ), on_disk=True, ), }, hnsw_config=models.HnswConfigDiff( ef_construct=123, ), quantization_config=models.ScalarQuantization( scalar=models.ScalarQuantizationConfig( type=models.ScalarType.INT8, quantile=0.8, always_ram=False, ), ), ) ``` ```typescript client.updateCollection(""{collection_name}"", { vectors: { my_vector: { hnsw_config: { m: 32, ef_construct: 123, }, quantization_config: { product: { compression: ""x32"", always_ram: true, }, }, on_disk: true, }, }, hnsw_config: { ef_construct: 123, }, quantization_config: { scalar: { type: ""int8"", quantile: 0.8, always_ram: true, }, }, }); ``` ```rust use std::collections::HashMap; use qdrant_client::qdrant::{ quantization_config_diff::Quantization, vectors_config_diff::Config, HnswConfigDiffBuilder, QuantizationType, ScalarQuantizationBuilder, UpdateCollectionBuilder, VectorParamsDiffBuilder, VectorParamsDiffMap, }; client .update_collection( UpdateCollectionBuilder::new(""{collection_name}"") .hnsw_config(HnswConfigDiffBuilder::default().ef_construct(123)) .vectors_config(Config::ParamsMap(VectorParamsDiffMap { map: HashMap::from([( (""my_vector"".into()), VectorParamsDiffBuilder::default() .hnsw_config(HnswConfigDiffBuilder::default().m(32).ef_construct(123)) .build(), )]), })) .quantization_config(Quantization::Scalar( ScalarQuantizationBuilder::default() .r#type(QuantizationType::Int8.into()) .quantile(0.8) .always_ram(true) .build(), )), ) .await?; ``` ```java import io.qdrant.client.grpc.Collections.HnswConfigDiff; import io.qdrant.client.grpc.Collections.QuantizationConfigDiff; import io.qdrant.client.grpc.Collections.QuantizationType; import io.qdrant.client.grpc.Collections.ScalarQuantization; import io.qdrant.client.grpc.Collections.UpdateCollection; import io.qdrant.client.grpc.Collections.VectorParamsDiff; import io.qdrant.client.grpc.Collections.VectorParamsDiffMap; import io.qdrant.client.grpc.Collections.VectorsConfigDiff; client .updateCollectionAsync( UpdateCollection.newBuilder() .setCollectionName(""{collection_name}"") .setHnswConfig(HnswConfigDiff.newBuilder().setEfConstruct(123).build()) .setVectorsConfig( VectorsConfigDiff.newBuilder() .setParamsMap( VectorParamsDiffMap.newBuilder() .putMap( ""my_vector"", VectorParamsDiff.newBuilder() .setHnswConfig( HnswConfigDiff.newBuilder() .setM(3) .setEfConstruct(123) .build()) .build()))) .setQuantizationConfig( QuantizationConfigDiff.newBuilder() .setScalar( ScalarQuantization.newBuilder() .setType(QuantizationType.Int8) .setQuantile(0.8f) .setAlwaysRam(true) .build())) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.UpdateCollectionAsync( collectionName: ""{collection_name}"", hnswConfig: new HnswConfigDiff { EfConstruct = 123 }, vectorsConfig: new VectorParamsDiffMap { Map = { { ""my_vector"", new VectorParamsDiff { HnswConfig = new HnswConfigDiff { M = 3, EfConstruct = 123 } } } } }, quantizationConfig: new QuantizationConfigDiff { Scalar = new ScalarQuantization { Type = QuantizationType.Int8, Quantile = 0.8f, AlwaysRam = true } } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.UpdateCollection(context.Background(), &qdrant.UpdateCollection{ CollectionName: ""{collection_name}"", VectorsConfig: qdrant.NewVectorsConfigDiffMap( map[string]*qdrant.VectorParamsDiff{ ""my_vector"": { HnswConfig: &qdrant.HnswConfigDiff{ M: qdrant.PtrOf(uint64(3)), EfConstruct: qdrant.PtrOf(uint64(123)), }, }, }), QuantizationConfig: qdrant.NewQuantizationDiffScalar( &qdrant.ScalarQuantization{ Type: qdrant.QuantizationType_Int8, Quantile: qdrant.PtrOf(float32(0.8)), AlwaysRam: qdrant.PtrOf(true), }), }) ``` ## Collection info Qdrant allows determining the configuration parameters of an existing collection to better understand how the points are distributed and indexed. ```http GET /collections/{collection_name} ``` ```bash curl -X GET http://localhost:6333/collections/{collection_name} ``` ```python client.get_collection(collection_name=""{collection_name}"") ``` ```typescript client.getCollection(""{collection_name}""); ``` ```rust client.collection_info(""{collection_name}"").await?; ``` ```java client.getCollectionInfoAsync(""{collection_name}"").get(); ``` ```csharp await client.GetCollectionInfoAsync(""{collection_name}""); ``` ```go import ""context"" client.GetCollectionInfo(context.Background(), ""{collection_name}"") ```
Expected result ```json { ""result"": { ""status"": ""green"", ""optimizer_status"": ""ok"", ""vectors_count"": 1068786, ""indexed_vectors_count"": 1024232, ""points_count"": 1068786, ""segments_count"": 31, ""config"": { ""params"": { ""vectors"": { ""size"": 384, ""distance"": ""Cosine"" }, ""shard_number"": 1, ""replication_factor"": 1, ""write_consistency_factor"": 1, ""on_disk_payload"": false }, ""hnsw_config"": { ""m"": 16, ""ef_construct"": 100, ""full_scan_threshold"": 10000, ""max_indexing_threads"": 0 }, ""optimizer_config"": { ""deleted_threshold"": 0.2, ""vacuum_min_vector_number"": 1000, ""default_segment_number"": 0, ""max_segment_size"": null, ""memmap_threshold"": null, ""indexing_threshold"": 20000, ""flush_interval_sec"": 5, ""max_optimization_threads"": 1 }, ""wal_config"": { ""wal_capacity_mb"": 32, ""wal_segments_ahead"": 0 } }, ""payload_schema"": {} }, ""status"": ""ok"", ""time"": 0.00010143 } ```
If you insert the vectors into the collection, the `status` field may become `yellow` whilst it is optimizing. It will become `green` once all the points are successfully processed. The following color statuses are possible: - 🟢 `green`: collection is ready - 🟡 `yellow`: collection is optimizing - ⚫ `grey`: collection is pending optimization ([help](#grey-collection-status)) - 🔴 `red`: an error occurred which the engine could not recover from ### Grey collection status _Available as of v1.9.0_ A collection may have the grey ⚫ status or show ""optimizations pending, awaiting update operation"" as optimization status. This state is normally caused by restarting a Qdrant instance while optimizations were ongoing. It means the collection has optimizations pending, but they are paused. You must send any update operation to trigger and start the optimizations again. For example: ```http PATCH /collections/{collection_name} { ""optimizers_config"": {} } ``` ```bash curl -X PATCH http://localhost:6333/collections/{collection_name} \ -H 'Content-Type: application/json' \ --data-raw '{ ""optimizers_config"": {} }' ``` ```python client.update_collection( collection_name=""{collection_name}"", optimizer_config=models.OptimizersConfigDiff(), ) ``` ```typescript client.updateCollection(""{collection_name}"", { optimizers_config: {}, }); ``` ```rust use qdrant_client::qdrant::{OptimizersConfigDiffBuilder, UpdateCollectionBuilder}; client .update_collection( UpdateCollectionBuilder::new(""{collection_name}"") .optimizers_config(OptimizersConfigDiffBuilder::default()), ) .await?; ``` ```java import io.qdrant.client.grpc.Collections.OptimizersConfigDiff; import io.qdrant.client.grpc.Collections.UpdateCollection; client.updateCollectionAsync( UpdateCollection.newBuilder() .setCollectionName(""{collection_name}"") .setOptimizersConfig( OptimizersConfigDiff.getDefaultInstance()) .build()); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.UpdateCollectionAsync( collectionName: ""{collection_name}"", optimizersConfig: new OptimizersConfigDiff { } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.UpdateCollection(context.Background(), &qdrant.UpdateCollection{ CollectionName: ""{collection_name}"", OptimizersConfig: &qdrant.OptimizersConfigDiff{}, }) ``` ### Approximate point and vector counts You may be interested in the count attributes: - `points_count` - total number of objects (vectors and their payloads) stored in the collection - `vectors_count` - total number of vectors in a collection, useful if you have multiple vectors per point - `indexed_vectors_count` - total number of vectors stored in the HNSW or sparse index. Qdrant does not store all the vectors in the index, but only if an index segment might be created for a given configuration. The above counts are not exact, but should be considered approximate. Depending on how you use Qdrant these may give very different numbers than what you may expect. It's therefore important **not** to rely on them. More specifically, these numbers represent the count of points and vectors in Qdrant's internal storage. Internally, Qdrant may temporarily duplicate points as part of automatic optimizations. It may keep changed or deleted points for a bit. And it may delay indexing of new points. All of that is for optimization reasons. Updates you do are therefore not directly reflected in these numbers. If you see a wildly different count of points, it will likely resolve itself once a new round of automatic optimizations has completed. To clarify: these numbers don't represent the exact amount of points or vectors you have inserted, nor does it represent the exact number of distinguishable points or vectors you can query. If you want to know exact counts, refer to the [count API](../points/#counting-points). _Note: these numbers may be removed in a future version of Qdrant._ ### Indexing vectors in HNSW In some cases, you might be surprised the value of `indexed_vectors_count` is lower than `vectors_count`. This is an intended behaviour and depends on the [optimizer configuration](../optimizer/). A new index segment is built if the size of non-indexed vectors is higher than the value of `indexing_threshold`(in kB). If your collection is very small or the dimensionality of the vectors is low, there might be no HNSW segment created and `indexed_vectors_count` might be equal to `0`. It is possible to reduce the `indexing_threshold` for an existing collection by [updating collection parameters](#update-collection-parameters). ## Collection aliases In a production environment, it is sometimes necessary to switch different versions of vectors seamlessly. For example, when upgrading to a new version of the neural network. There is no way to stop the service and rebuild the collection with new vectors in these situations. Aliases are additional names for existing collections. All queries to the collection can also be done identically, using an alias instead of the collection name. Thus, it is possible to build a second collection in the background and then switch alias from the old to the new collection. Since all changes of aliases happen atomically, no concurrent requests will be affected during the switch. ### Create alias ```http POST /collections/aliases { ""actions"": [ { ""create_alias"": { ""collection_name"": ""example_collection"", ""alias_name"": ""production_collection"" } } ] } ``` ```bash curl -X POST http://localhost:6333/collections/aliases \ -H 'Content-Type: application/json' \ --data-raw '{ ""actions"": [ { ""create_alias"": { ""collection_name"": ""example_collection"", ""alias_name"": ""production_collection"" } } ] }' ``` ```python client.update_collection_aliases( change_aliases_operations=[ models.CreateAliasOperation( create_alias=models.CreateAlias( collection_name=""example_collection"", alias_name=""production_collection"" ) ) ] ) ``` ```typescript client.updateCollectionAliases({ actions: [ { create_alias: { collection_name: ""example_collection"", alias_name: ""production_collection"", }, }, ], }); ``` ```rust use qdrant_client::qdrant::CreateAliasBuilder; client .create_alias(CreateAliasBuilder::new( ""example_collection"", ""production_collection"", )) .await?; ``` ```java client.createAliasAsync(""production_collection"", ""example_collection"").get(); ``` ```csharp await client.CreateAliasAsync(aliasName: ""production_collection"", collectionName: ""example_collection""); ``` ```go import ""context"" client.CreateAlias(context.Background(), ""production_collection"", ""example_collection"") ``` ### Remove alias ```bash curl -X POST http://localhost:6333/collections/aliases \ -H 'Content-Type: application/json' \ --data-raw '{ ""actions"": [ { ""delete_alias"": { ""alias_name"": ""production_collection"" } } ] }' ``` ```http POST /collections/aliases { ""actions"": [ { ""delete_alias"": { ""alias_name"": ""production_collection"" } } ] } ``` ```python client.update_collection_aliases( change_aliases_operations=[ models.DeleteAliasOperation( delete_alias=models.DeleteAlias(alias_name=""production_collection"") ), ] ) ``` ```typescript client.updateCollectionAliases({ actions: [ { delete_alias: { alias_name: ""production_collection"", }, }, ], }); ``` ```rust client.delete_alias(""production_collection"").await?; ``` ```java client.deleteAliasAsync(""production_collection"").get(); ``` ```csharp await client.DeleteAliasAsync(""production_collection""); ``` ```go import ""context"" client.DeleteAlias(context.Background(), ""production_collection"") ``` ### Switch collection Multiple alias actions are performed atomically. For example, you can switch underlying collection with the following command: ```http POST /collections/aliases { ""actions"": [ { ""delete_alias"": { ""alias_name"": ""production_collection"" } }, { ""create_alias"": { ""collection_name"": ""example_collection"", ""alias_name"": ""production_collection"" } } ] } ``` ```bash curl -X POST http://localhost:6333/collections/aliases \ -H 'Content-Type: application/json' \ --data-raw '{ ""actions"": [ { ""delete_alias"": { ""alias_name"": ""production_collection"" } }, { ""create_alias"": { ""collection_name"": ""example_collection"", ""alias_name"": ""production_collection"" } } ] }' ``` ```python client.update_collection_aliases( change_aliases_operations=[ models.DeleteAliasOperation( delete_alias=models.DeleteAlias(alias_name=""production_collection"") ), models.CreateAliasOperation( create_alias=models.CreateAlias( collection_name=""example_collection"", alias_name=""production_collection"" ) ), ] ) ``` ```typescript client.updateCollectionAliases({ actions: [ { delete_alias: { alias_name: ""production_collection"", }, }, { create_alias: { collection_name: ""example_collection"", alias_name: ""production_collection"", }, }, ], }); ``` ```rust use qdrant_client::qdrant::CreateAliasBuilder; client.delete_alias(""production_collection"").await?; client .create_alias(CreateAliasBuilder::new( ""example_collection"", ""production_collection"", )) .await?; ``` ```java client.deleteAliasAsync(""production_collection"").get(); client.createAliasAsync(""production_collection"", ""example_collection"").get(); ``` ```csharp await client.DeleteAliasAsync(""production_collection""); await client.CreateAliasAsync(aliasName: ""production_collection"", collectionName: ""example_collection""); ``` ```go import ""context"" client.DeleteAlias(context.Background(), ""production_collection"") client.CreateAlias(context.Background(), ""production_collection"", ""example_collection"") ``` ### List collection aliases ```http GET /collections/{collection_name}/aliases ``` ```bash curl -X GET http://localhost:6333/collections/{collection_name}/aliases ``` ```python from qdrant_client import QdrantClient client = QdrantClient(url=""http://localhost:6333"") client.get_collection_aliases(collection_name=""{collection_name}"") ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.getCollectionAliases(""{collection_name}""); ``` ```rust use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client.list_collection_aliases(""{collection_name}"").await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client.listCollectionAliasesAsync(""{collection_name}"").get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient(""localhost"", 6334); await client.ListCollectionAliasesAsync(""{collection_name}""); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.ListCollectionAliases(context.Background(), ""{collection_name}"") ``` ### List all aliases ```http GET /aliases ``` ```bash curl -X GET http://localhost:6333/aliases ``` ```python from qdrant_client import QdrantClient client = QdrantClient(url=""http://localhost:6333"") client.get_aliases() ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.getAliases(); ``` ```rust use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client.list_aliases().await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client.listAliasesAsync().get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient(""localhost"", 6334); await client.ListAliasesAsync(); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.ListAliases(context.Background()) ``` ### List all collections ```http GET /collections ``` ```bash curl -X GET http://localhost:6333/collections ``` ```python from qdrant_client import QdrantClient client = QdrantClient(url=""http://localhost:6333"") client.get_collections() ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.getCollections(); ``` ```rust use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client.list_collections().await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client.listCollectionsAsync().get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient(""localhost"", 6334); await client.ListCollectionsAsync(); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.ListCollections(context.Background()) ``` ",documentation/concepts/collections.md "--- title: Indexing weight: 90 aliases: - ../indexing --- # Indexing A key feature of Qdrant is the effective combination of vector and traditional indexes. It is essential to have this because for vector search to work effectively with filters, having vector index only is not enough. In simpler terms, a vector index speeds up vector search, and payload indexes speed up filtering. The indexes in the segments exist independently, but the parameters of the indexes themselves are configured for the whole collection. Not all segments automatically have indexes. Their necessity is determined by the [optimizer](../optimizer/) settings and depends, as a rule, on the number of stored points. ## Payload Index Payload index in Qdrant is similar to the index in conventional document-oriented databases. This index is built for a specific field and type, and is used for quick point requests by the corresponding filtering condition. The index is also used to accurately estimate the filter cardinality, which helps the [query planning](../search/#query-planning) choose a search strategy. Creating an index requires additional computational resources and memory, so choosing fields to be indexed is essential. Qdrant does not make this choice but grants it to the user. To mark a field as indexable, you can use the following: ```http PUT /collections/{collection_name}/index { ""field_name"": ""name_of_the_field_to_index"", ""field_schema"": ""keyword"" } ``` ```python from qdrant_client import QdrantClient client = QdrantClient(url=""http://localhost:6333"") client.create_payload_index( collection_name=""{collection_name}"", field_name=""name_of_the_field_to_index"", field_schema=""keyword"", ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.createPayloadIndex(""{collection_name}"", { field_name: ""name_of_the_field_to_index"", field_schema: ""keyword"", }); ``` ```rust use qdrant_client::qdrant::{CreateFieldIndexCollectionBuilder, FieldType}; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .create_field_index(CreateFieldIndexCollectionBuilder::new( ""{collection_name}"", ""name_of_the_field_to_index"", FieldType::Keyword, )) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.PayloadSchemaType; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .createPayloadIndexAsync( ""{collection_name}"", ""name_of_the_field_to_index"", PayloadSchemaType.Keyword, null, null, null, null) .get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient(""localhost"", 6334); await client.CreatePayloadIndexAsync(collectionName: ""{collection_name}"", fieldName: ""name_of_the_field_to_index""); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.CreateFieldIndex(context.Background(), &qdrant.CreateFieldIndexCollection{ CollectionName: ""{collection_name}"", FieldName: ""name_of_the_field_to_index"", FieldType: qdrant.FieldType_FieldTypeKeyword.Enum(), }) ``` You can use dot notation to specify a nested field for indexing. Similar to specifying [nested filters](../filtering/#nested-key). Available field types are: * `keyword` - for [keyword](../payload/#keyword) payload, affects [Match](../filtering/#match) filtering conditions. * `integer` - for [integer](../payload/#integer) payload, affects [Match](../filtering/#match) and [Range](../filtering/#range) filtering conditions. * `float` - for [float](../payload/#float) payload, affects [Range](../filtering/#range) filtering conditions. * `bool` - for [bool](../payload/#bool) payload, affects [Match](../filtering/#match) filtering conditions (available as of v1.4.0). * `geo` - for [geo](../payload/#geo) payload, affects [Geo Bounding Box](../filtering/#geo-bounding-box) and [Geo Radius](../filtering/#geo-radius) filtering conditions. * `datetime` - for [datetime](../payload/#datetime) payload, affects [Range](../filtering/#range) filtering conditions (available as of v1.8.0). * `text` - a special kind of index, available for [keyword](../payload/#keyword) / string payloads, affects [Full Text search](../filtering/#full-text-match) filtering conditions. * `uuid` - a special type of index, similar to `keyword`, but optimized for [UUID values](../payload/#uuid). Affects [Match](../filtering/#match) filtering conditions. (available as of v1.11.0) Payload index may occupy some additional memory, so it is recommended to only use index for those fields that are used in filtering conditions. If you need to filter by many fields and the memory limits does not allow to index all of them, it is recommended to choose the field that limits the search result the most. As a rule, the more different values a payload value has, the more efficiently the index will be used. ### Full-text index *Available as of v0.10.0* Qdrant supports full-text search for string payload. Full-text index allows you to filter points by the presence of a word or a phrase in the payload field. Full-text index configuration is a bit more complex than other indexes, as you can specify the tokenization parameters. Tokenization is the process of splitting a string into tokens, which are then indexed in the inverted index. To create a full-text index, you can use the following: ```http PUT /collections/{collection_name}/index { ""field_name"": ""name_of_the_field_to_index"", ""field_schema"": { ""type"": ""text"", ""tokenizer"": ""word"", ""min_token_len"": 2, ""max_token_len"": 20, ""lowercase"": true } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.create_payload_index( collection_name=""{collection_name}"", field_name=""name_of_the_field_to_index"", field_schema=models.TextIndexParams( type=""text"", tokenizer=models.TokenizerType.WORD, min_token_len=2, max_token_len=15, lowercase=True, ), ) ``` ```typescript import { QdrantClient, Schemas } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.createPayloadIndex(""{collection_name}"", { field_name: ""name_of_the_field_to_index"", field_schema: { type: ""text"", tokenizer: ""word"", min_token_len: 2, max_token_len: 15, lowercase: true, }, }); ``` ```rust use qdrant_client::qdrant::{ payload_index_params::IndexParams, CreateFieldIndexCollectionBuilder, FieldType, PayloadIndexParams, TextIndexParams, TokenizerType, }; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .create_field_index( CreateFieldIndexCollectionBuilder::new( ""{collection_name}"", ""name_of_the_field_to_index"", FieldType::Text, ) .field_index_params(PayloadIndexParams { index_params: Some(IndexParams::TextIndexParams(TextIndexParams { tokenizer: TokenizerType::Word as i32, min_token_len: Some(2), max_token_len: Some(10), lowercase: Some(true), })), }), ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.PayloadIndexParams; import io.qdrant.client.grpc.Collections.PayloadSchemaType; import io.qdrant.client.grpc.Collections.TextIndexParams; import io.qdrant.client.grpc.Collections.TokenizerType; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .createPayloadIndexAsync( ""{collection_name}"", ""name_of_the_field_to_index"", PayloadSchemaType.Text, PayloadIndexParams.newBuilder() .setTextIndexParams( TextIndexParams.newBuilder() .setTokenizer(TokenizerType.Word) .setMinTokenLen(2) .setMaxTokenLen(10) .setLowercase(true) .build()) .build(), null, null, null) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.CreatePayloadIndexAsync( collectionName: ""{collection_name}"", fieldName: ""name_of_the_field_to_index"", schemaType: PayloadSchemaType.Text, indexParams: new PayloadIndexParams { TextIndexParams = new TextIndexParams { Tokenizer = TokenizerType.Word, MinTokenLen = 2, MaxTokenLen = 10, Lowercase = true } } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.CreateFieldIndex(context.Background(), &qdrant.CreateFieldIndexCollection{ CollectionName: ""{collection_name}"", FieldName: ""name_of_the_field_to_index"", FieldType: qdrant.FieldType_FieldTypeText.Enum(), FieldIndexParams: qdrant.NewPayloadIndexParamsText( &qdrant.TextIndexParams{ Tokenizer: qdrant.TokenizerType_Whitespace, MinTokenLen: qdrant.PtrOf(uint64(2)), MaxTokenLen: qdrant.PtrOf(uint64(10)), Lowercase: qdrant.PtrOf(true), }), }) ``` Available tokenizers are: * `word` - splits the string into words, separated by spaces, punctuation marks, and special characters. * `whitespace` - splits the string into words, separated by spaces. * `prefix` - splits the string into words, separated by spaces, punctuation marks, and special characters, and then creates a prefix index for each word. For example: `hello` will be indexed as `h`, `he`, `hel`, `hell`, `hello`. * `multilingual` - special type of tokenizer based on [charabia](https://github.com/meilisearch/charabia) package. It allows proper tokenization and lemmatization for multiple languages, including those with non-latin alphabets and non-space delimiters. See [charabia documentation](https://github.com/meilisearch/charabia) for full list of supported languages supported normalization options. In the default build configuration, qdrant does not include support for all languages, due to the increasing size of the resulting binary. Chinese, Japanese and Korean languages are not enabled by default, but can be enabled by building qdrant from source with `--features multiling-chinese,multiling-japanese,multiling-korean` flags. See [Full Text match](../filtering/#full-text-match) for examples of querying with full-text index. ### Parameterized index *Available as of v1.8.0* We've added a parameterized variant to the `integer` index, which allows you to fine-tune indexing and search performance. Both the regular and parameterized `integer` indexes use the following flags: - `lookup`: enables support for direct lookup using [Match](/documentation/concepts/filtering/#match) filters. - `range`: enables support for [Range](/documentation/concepts/filtering/#range) filters. The regular `integer` index assumes both `lookup` and `range` are `true`. In contrast, to configure a parameterized index, you would set only one of these filters to `true`: | `lookup` | `range` | Result | |----------|---------|-----------------------------| | `true` | `true` | Regular integer index | | `true` | `false` | Parameterized integer index | | `false` | `true` | Parameterized integer index | | `false` | `false` | No integer index | The parameterized index can enhance performance in collections with millions of points. We encourage you to try it out. If it does not enhance performance in your use case, you can always restore the regular `integer` index. Note: If you set `""lookup"": true` with a range filter, that may lead to significant performance issues. For example, the following code sets up a parameterized integer index which supports only range filters: ```http PUT /collections/{collection_name}/index { ""field_name"": ""name_of_the_field_to_index"", ""field_schema"": { ""type"": ""integer"", ""lookup"": false, ""range"": true } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.create_payload_index( collection_name=""{collection_name}"", field_name=""name_of_the_field_to_index"", field_schema=models.IntegerIndexParams( type=models.IntegerIndexType.INTEGER, lookup=False, range=True, ), ) ``` ```typescript import { QdrantClient, Schemas } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.createPayloadIndex(""{collection_name}"", { field_name: ""name_of_the_field_to_index"", field_schema: { type: ""integer"", lookup: false, range: true, }, }); ``` ```rust use qdrant_client::qdrant::{ payload_index_params::IndexParams, CreateFieldIndexCollectionBuilder, FieldType, IntegerIndexParams, PayloadIndexParams, }; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .create_field_index( CreateFieldIndexCollectionBuilder::new( ""{collection_name}"", ""name_of_the_field_to_index"", FieldType::Integer, ) .field_index_params(PayloadIndexParams { index_params: Some(IndexParams::IntegerIndexParams(IntegerIndexParams { lookup: false, range: true, })), }), ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.IntegerIndexParams; import io.qdrant.client.grpc.Collections.PayloadIndexParams; import io.qdrant.client.grpc.Collections.PayloadSchemaType; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .createPayloadIndexAsync( ""{collection_name}"", ""name_of_the_field_to_index"", PayloadSchemaType.Integer, PayloadIndexParams.newBuilder() .setIntegerIndexParams( IntegerIndexParams.newBuilder().setLookup(false).setRange(true).build()) .build(), null, null, null) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.CreatePayloadIndexAsync( collectionName: ""{collection_name}"", fieldName: ""name_of_the_field_to_index"", schemaType: PayloadSchemaType.Integer, indexParams: new PayloadIndexParams { IntegerIndexParams = new() { Lookup = false, Range = true } } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.CreateFieldIndex(context.Background(), &qdrant.CreateFieldIndexCollection{ CollectionName: ""{collection_name}"", FieldName: ""name_of_the_field_to_index"", FieldType: qdrant.FieldType_FieldTypeInteger.Enum(), FieldIndexParams: qdrant.NewPayloadIndexParamsInt( &qdrant.IntegerIndexParams{ Lookup: false, Range: true, }), }) ``` ### On-disk payload index *Available as of v1.11.0* By default all payload-related structures are stored in memory. In this way, the vector index can quickly access payload values during search. As latency in this case is critical, it is recommended to keep hot payload indexes in memory. There are, however, cases when payload indexes are too large or rarely used. In those cases, it is possible to store payload indexes on disk. To configure on-disk payload index, you can use the following index parameters: ```http PUT /collections/{collection_name}/index { ""field_name"": ""payload_field_name"", ""field_schema"": { ""type"": ""keyword"", ""on_disk"": true } } ``` ```python client.create_payload_index( collection_name=""{collection_name}"", field_name=""payload_field_name"", field_schema=models.KeywordIndexParams( type=""keyword"", on_disk=True, ), ) ``` ```typescript client.createPayloadIndex(""{collection_name}"", { field_name: ""payload_field_name"", field_schema: { type: ""keyword"", on_disk: true }, }); ``` ```rust use qdrant_client::qdrant::{ CreateFieldIndexCollectionBuilder, KeywordIndexParamsBuilder, FieldType }; use qdrant_client::{Qdrant, QdrantError}; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client.create_field_index( CreateFieldIndexCollectionBuilder::new( ""{collection_name}"", ""payload_field_name"", FieldType::Keyword, ) .field_index_params( KeywordIndexParamsBuilder::default() .on_disk(true), ), ); ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.PayloadIndexParams; import io.qdrant.client.grpc.Collections.PayloadSchemaType; import io.qdrant.client.grpc.Collections.KeywordIndexParams; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .createPayloadIndexAsync( ""{collection_name}"", ""payload_field_name"", PayloadSchemaType.Keyword, PayloadIndexParams.newBuilder() .setKeywordIndexParams( KeywordIndexParams.newBuilder() .setOnDisk(true) .build()) .build(), null, null, null) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.CreatePayloadIndexAsync( collectionName: ""{collection_name}"", fieldName: ""payload_field_name"", schemaType: PayloadSchemaType.Keyword, indexParams: new PayloadIndexParams { KeywordIndexParams = new KeywordIndexParams { OnDisk = true } } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.CreateFieldIndex(context.Background(), &qdrant.CreateFieldIndexCollection{ CollectionName: ""{collection_name}"", FieldName: ""name_of_the_field_to_index"", FieldType: qdrant.FieldType_FieldTypeKeyword.Enum(), FieldIndexParams: qdrant.NewPayloadIndexParamsKeyword( &qdrant.KeywordIndexParams{ OnDisk: qdrant.PtrOf(true), }), }) ``` Payload index on-disk is supported for following types: * `keyword` * `integer` * `float` * `datetime` * `uuid` The list will be extended in future versions. ### Tenant Index *Available as of v1.11.0* Many vector search use-cases require multitenancy. In a multi-tenant scenario the collection is expected to contain multiple subsets of data, where each subset belongs to a different tenant. Qdrant supports efficient multi-tenant search by enabling [special configuration](../guides/multiple-partitions/) vector index, which disables global search and only builds sub-indexes for each tenant. However, knowing that the collection contains multiple tenants unlocks more opportunities for optimization. To optimize storage in Qdrant further, you can enable tenant indexing for payload fields. This option will tell Qdrant which fields are used for tenant identification and will allow Qdrant to structure storage for faster search of tenant-specific data. One example of such optimization is localizing tenant-specific data closer on disk, which will reduce the number of disk reads during search. To enable tenant index for a field, you can use the following index parameters: ```http PUT /collections/{collection_name}/index { ""field_name"": ""payload_field_name"", ""field_schema"": { ""type"": ""keyword"", ""is_tenant"": true } } ``` ```python client.create_payload_index( collection_name=""{collection_name}"", field_name=""payload_field_name"", field_schema=models.KeywordIndexParams( type=""keyword"", is_tenant=True, ), ) ``` ```typescript client.createPayloadIndex(""{collection_name}"", { field_name: ""payload_field_name"", field_schema: { type: ""keyword"", is_tenant: true }, }); ``` ```rust use qdrant_client::qdrant::{ CreateFieldIndexCollectionBuilder, KeywordIndexParamsBuilder, FieldType }; use qdrant_client::{Qdrant, QdrantError}; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client.create_field_index( CreateFieldIndexCollectionBuilder::new( ""{collection_name}"", ""payload_field_name"", FieldType::Keyword, ) .field_index_params( KeywordIndexParamsBuilder::default() .is_tenant(true), ), ); ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.PayloadIndexParams; import io.qdrant.client.grpc.Collections.PayloadSchemaType; import io.qdrant.client.grpc.Collections.KeywordIndexParams; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .createPayloadIndexAsync( ""{collection_name}"", ""payload_field_name"", PayloadSchemaType.Keyword, PayloadIndexParams.newBuilder() .setKeywordIndexParams( KeywordIndexParams.newBuilder() .setIsTenant(true) .build()) .build(), null, null, null) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.CreatePayloadIndexAsync( collectionName: ""{collection_name}"", fieldName: ""payload_field_name"", schemaType: PayloadSchemaType.Keyword, indexParams: new PayloadIndexParams { KeywordIndexParams = new KeywordIndexParams { IsTenant = true } } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.CreateFieldIndex(context.Background(), &qdrant.CreateFieldIndexCollection{ CollectionName: ""{collection_name}"", FieldName: ""name_of_the_field_to_index"", FieldType: qdrant.FieldType_FieldTypeKeyword.Enum(), FieldIndexParams: qdrant.NewPayloadIndexParamsKeyword( &qdrant.KeywordIndexParams{ IsTenant: qdrant.PtrOf(true), }), }) ``` Tenant optimization is supported for the following datatypes: * `keyword` * `uuid` ### Principal Index *Available as of v1.11.0* Similar to the tenant index, the principal index is used to optimize storage for faster search, assuming that the search request is primarily filtered by the principal field. A good example of a use case for the principal index is time-related data, where each point is associated with a timestamp. In this case, the principal index can be used to optimize storage for faster search with time-based filters. ```http PUT /collections/{collection_name}/index { ""field_name"": ""timestamp"", ""field_schema"": { ""type"": ""integer"", ""is_principal"": true } } ``` ```python client.create_payload_index( collection_name=""{collection_name}"", field_name=""timestamp"", field_schema=models.KeywordIndexParams( type=""integer"", is_principal=True, ), ) ``` ```typescript client.createPayloadIndex(""{collection_name}"", { field_name: ""timestamp"", field_schema: { type: ""integer"", is_principal: true }, }); ``` ```rust use qdrant_client::qdrant::{ CreateFieldIndexCollectionBuilder, IntegerdIndexParamsBuilder, FieldType }; use qdrant_client::{Qdrant, QdrantError}; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client.create_field_index( CreateFieldIndexCollectionBuilder::new( ""{collection_name}"", ""timestamp"", FieldType::Integer, ) .field_index_params( IntegerdIndexParamsBuilder::default() .is_principal(true), ), ); ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.PayloadIndexParams; import io.qdrant.client.grpc.Collections.PayloadSchemaType; import io.qdrant.client.grpc.Collections.IntegerIndexParams; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .createPayloadIndexAsync( ""{collection_name}"", ""timestamp"", PayloadSchemaType.Integer, PayloadIndexParams.newBuilder() .setIntegerIndexParams( KeywordIndexParams.newBuilder() .setIsPrincipa(true) .build()) .build(), null, null, null) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.CreatePayloadIndexAsync( collectionName: ""{collection_name}"", fieldName: ""timestamp"", schemaType: PayloadSchemaType.Integer, indexParams: new PayloadIndexParams { IntegerIndexParams = new IntegerIndexParams { IsPrincipal = true } } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.CreateFieldIndex(context.Background(), &qdrant.CreateFieldIndexCollection{ CollectionName: ""{collection_name}"", FieldName: ""name_of_the_field_to_index"", FieldType: qdrant.FieldType_FieldTypeInteger.Enum(), FieldIndexParams: qdrant.NewPayloadIndexParamsInt( &qdrant.IntegerIndexParams{ IsPrincipal: qdrant.PtrOf(true), }), }) ``` Principal optimization is supported for following types: * `integer` * `float` * `datetime` ## Vector Index A vector index is a data structure built on vectors through a specific mathematical model. Through the vector index, we can efficiently query several vectors similar to the target vector. Qdrant currently only uses HNSW as a dense vector index. [HNSW](https://arxiv.org/abs/1603.09320) (Hierarchical Navigable Small World Graph) is a graph-based indexing algorithm. It builds a multi-layer navigation structure for an image according to certain rules. In this structure, the upper layers are more sparse and the distances between nodes are farther. The lower layers are denser and the distances between nodes are closer. The search starts from the uppermost layer, finds the node closest to the target in this layer, and then enters the next layer to begin another search. After multiple iterations, it can quickly approach the target position. In order to improve performance, HNSW limits the maximum degree of nodes on each layer of the graph to `m`. In addition, you can use `ef_construct` (when building index) or `ef` (when searching targets) to specify a search range. The corresponding parameters could be configured in the configuration file: ```yaml storage: # Default parameters of HNSW Index. Could be overridden for each collection or named vector individually hnsw_index: # Number of edges per node in the index graph. # Larger the value - more accurate the search, more space required. m: 16 # Number of neighbours to consider during the index building. # Larger the value - more accurate the search, more time required to build index. ef_construct: 100 # Minimal size (in KiloBytes) of vectors for additional payload-based indexing. # If payload chunk is smaller than `full_scan_threshold_kb` additional indexing won't be used - # in this case full-scan search should be preferred by query planner and additional indexing is not required. # Note: 1Kb = 1 vector of size 256 full_scan_threshold: 10000 ``` And so in the process of creating a [collection](../collections/). The `ef` parameter is configured during [the search](../search/) and by default is equal to `ef_construct`. HNSW is chosen for several reasons. First, HNSW is well-compatible with the modification that allows Qdrant to use filters during a search. Second, it is one of the most accurate and fastest algorithms, according to [public benchmarks](https://github.com/erikbern/ann-benchmarks). *Available as of v1.1.1* The HNSW parameters can also be configured on a collection and named vector level by setting [`hnsw_config`](../indexing/#vector-index) to fine-tune search performance. ## Sparse Vector Index *Available as of v1.7.0* Sparse vectors in Qdrant are indexed with a special data structure, which is optimized for vectors that have a high proportion of zeroes. In some ways, this indexing method is similar to the inverted index, which is used in text search engines. - A sparse vector index in Qdrant is exact, meaning it does not use any approximation algorithms. - All sparse vectors added to the collection are immediately indexed in the mutable version of a sparse index. With Qdrant, you can benefit from a more compact and efficient immutable sparse index, which is constructed during the same optimization process as the dense vector index. This approach is particularly useful for collections storing both dense and sparse vectors. To configure a sparse vector index, create a collection with the following parameters: ```http PUT /collections/{collection_name} { ""sparse_vectors"": { ""text"": { ""index"": { ""on_disk"": false } } } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.create_collection( collection_name=""{collection_name}"", sparse_vectors={ ""text"": models.SparseVectorIndexParams( index=models.SparseVectorIndexType( on_disk=False, ), ), }, ) ``` ```typescript import { QdrantClient, Schemas } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.createCollection(""{collection_name}"", { sparse_vectors: { ""splade-model-name"": { index: { on_disk: false } } } }); ``` ```rust use qdrant_client::qdrant::{ CreateCollectionBuilder, SparseIndexConfigBuilder, SparseVectorParamsBuilder, SparseVectorsConfigBuilder, }; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; let mut sparse_vectors_config = SparseVectorsConfigBuilder::default(); sparse_vectors_config.add_named_vector_params( ""splade-model-name"", SparseVectorParamsBuilder::default() .index(SparseIndexConfigBuilder::default().on_disk(true)), ); client .create_collection( CreateCollectionBuilder::new(""{collection_name}"") .sparse_vectors_config(sparse_vectors_config), ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections; QdrantClient client = new QdrantClient( QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client.createCollectionAsync( Collections.CreateCollection.newBuilder() .setCollectionName(""{collection_name}"") .setSparseVectorsConfig( Collections.SparseVectorConfig.newBuilder().putMap( ""splade-model-name"", Collections.SparseVectorParams.newBuilder() .setIndex( Collections.SparseIndexConfig .newBuilder() .setOnDisk(false) .build() ).build() ).build() ).build() ).get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.CreateCollectionAsync( collectionName: ""{collection_name}"", sparseVectorsConfig: (""splade-model-name"", new SparseVectorParams{ Index = new SparseIndexConfig { OnDisk = false, } }) ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.CreateCollection(context.Background(), &qdrant.CreateCollection{ CollectionName: ""{collection_name}"", SparseVectorsConfig: qdrant.NewSparseVectorsConfig( map[string]*qdrant.SparseVectorParams{ ""splade-model-name"": { Index: &qdrant.SparseIndexConfig{ OnDisk: qdrant.PtrOf(false), }}, }), }) ```` The following parameters may affect performance: - `on_disk: true` - The index is stored on disk, which lets you save memory. This may slow down search performance. - `on_disk: false` - The index is still persisted on disk, but it is also loaded into memory for faster search. Unlike a dense vector index, a sparse vector index does not require a pre-defined vector size. It automatically adjusts to the size of the vectors added to the collection. **Note:** A sparse vector index only supports dot-product similarity searches. It does not support other distance metrics. ### IDF Modifier *Available as of v1.10.0* For many search algorithms, it is important to consider how often an item occurs in a collection. Intuitively speaking, the less frequently an item appears in a collection, the more important it is in a search. This is also known as the Inverse Document Frequency (IDF). It is used in text search engines to rank search results based on the rarity of a word in a collection. IDF depends on the currently stored documents and therefore can't be pre-computed in the sparse vectors in streaming inference mode. In order to support IDF in the sparse vector index, Qdrant provides an option to modify the sparse vector query with the IDF statistics automatically. The only requirement is to enable the IDF modifier in the collection configuration: ```http PUT /collections/{collection_name} { ""sparse_vectors"": { ""text"": { ""modifier"": ""idf"" } } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.create_collection( collection_name=""{collection_name}"", sparse_vectors={ ""text"": models.SparseVectorParams( modifier=models.Modifier.IDF, ), }, ) ``` ```typescript import { QdrantClient, Schemas } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.createCollection(""{collection_name}"", { sparse_vectors: { ""text"": { modifier: ""idf"" } } }); ``` ```rust use qdrant_client::qdrant::{ CreateCollectionBuilder, Modifier, SparseVectorParamsBuilder, SparseVectorsConfigBuilder, }; use qdrant_client::{Qdrant, QdrantError}; let client = Qdrant::from_url(""http://localhost:6334"").build()?; let mut sparse_vectors_config = SparseVectorsConfigBuilder::default(); sparse_vectors_config.add_named_vector_params( ""text"", SparseVectorParamsBuilder::default().modifier(Modifier::Idf), ); client .create_collection( CreateCollectionBuilder::new(""{collection_name}"") .sparse_vectors_config(sparse_vectors_config), ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Modifier; import io.qdrant.client.grpc.Collections.SparseVectorConfig; import io.qdrant.client.grpc.Collections.SparseVectorParams; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName(""{collection_name}"") .setSparseVectorsConfig( SparseVectorConfig.newBuilder() .putMap(""text"", SparseVectorParams.newBuilder().setModifier(Modifier.Idf).build())) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.CreateCollectionAsync( collectionName: ""{collection_name}"", sparseVectorsConfig: (""text"", new SparseVectorParams { Modifier = Modifier.Idf, }) ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.CreateCollection(context.Background(), &qdrant.CreateCollection{ CollectionName: ""{collection_name}"", SparseVectorsConfig: qdrant.NewSparseVectorsConfig( map[string]*qdrant.SparseVectorParams{ ""text"": { Modifier: qdrant.Modifier_Idf.Enum(), }, }), }) ``` Qdrant uses the following formula to calculate the IDF modifier: $$ \text{IDF}(q_i) = \ln \left(\frac{N - n(q_i) + 0.5}{n(q_i) + 0.5}+1\right) $$ Where: - `N` is the total number of documents in the collection. - `n` is the number of documents containing non-zero values for the given vector element. ## Filtrable Index Separately, a payload index and a vector index cannot solve the problem of search using the filter completely. In the case of weak filters, you can use the HNSW index as it is. In the case of stringent filters, you can use the payload index and complete rescore. However, for cases in the middle, this approach does not work well. On the one hand, we cannot apply a full scan on too many vectors. On the other hand, the HNSW graph starts to fall apart when using too strict filters. ![HNSW fail](/docs/precision_by_m.png) ![hnsw graph](/docs/graph.gif) You can find more information on why this happens in our [blog post](https://blog.vasnetsov.com/posts/categorical-hnsw/). Qdrant solves this problem by extending the HNSW graph with additional edges based on the stored payload values. Extra edges allow you to efficiently search for nearby vectors using the HNSW index and apply filters as you search in the graph. This approach minimizes the overhead on condition checks since you only need to calculate the conditions for a small fraction of the points involved in the search. ",documentation/concepts/indexing.md "--- title: Points weight: 40 aliases: - ../points --- # Points The points are the central entity that Qdrant operates with. A point is a record consisting of a [vector](../vectors/) and an optional [payload](../payload/). It looks like this: ```json // This is a simple point { ""id"": 129, ""vector"": [0.1, 0.2, 0.3, 0.4], ""payload"": {""color"": ""red""}, } ``` You can search among the points grouped in one [collection](../collections/) based on vector similarity. This procedure is described in more detail in the [search](../search/) and [filtering](../filtering/) sections. This section explains how to create and manage vectors. Any point modification operation is asynchronous and takes place in 2 steps. At the first stage, the operation is written to the Write-ahead-log. After this moment, the service will not lose the data, even if the machine loses power supply. ## Point IDs Qdrant supports using both `64-bit unsigned integers` and `UUID` as identifiers for points. Examples of UUID string representations: - simple: `936DA01F9ABD4d9d80C702AF85C822A8` - hyphenated: `550e8400-e29b-41d4-a716-446655440000` - urn: `urn:uuid:F9168C5E-CEB2-4faa-B6BF-329BF39FA1E4` That means that in every request UUID string could be used instead of numerical id. Example: ```http PUT /collections/{collection_name}/points { ""points"": [ { ""id"": ""5c56c793-69f3-4fbf-87e6-c4bf54c28c26"", ""payload"": {""color"": ""red""}, ""vector"": [0.9, 0.1, 0.1] } ] } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.upsert( collection_name=""{collection_name}"", points=[ models.PointStruct( id=""5c56c793-69f3-4fbf-87e6-c4bf54c28c26"", payload={ ""color"": ""red"", }, vector=[0.9, 0.1, 0.1], ), ], ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.upsert(""{collection_name}"", { points: [ { id: ""5c56c793-69f3-4fbf-87e6-c4bf54c28c26"", payload: { color: ""red"", }, vector: [0.9, 0.1, 0.1], }, ], }); ``` ```rust use qdrant_client::qdrant::{PointStruct, UpsertPointsBuilder}; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .upsert_points( UpsertPointsBuilder::new( ""{collection_name}"", vec![PointStruct::new( ""5c56c793-69f3-4fbf-87e6-c4bf54c28c26"", vec![0.9, 0.1, 0.1], [(""color"", ""Red"".into())], )], ) .wait(true), ) .await?; ``` ```java import java.util.List; import java.util.Map; import java.util.UUID; import static io.qdrant.client.PointIdFactory.id; import static io.qdrant.client.ValueFactory.value; import static io.qdrant.client.VectorsFactory.vectors; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.PointStruct; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .upsertAsync( ""{collection_name}"", List.of( PointStruct.newBuilder() .setId(id(UUID.fromString(""5c56c793-69f3-4fbf-87e6-c4bf54c28c26""))) .setVectors(vectors(0.05f, 0.61f, 0.76f, 0.74f)) .putAllPayload(Map.of(""color"", value(""Red""))) .build())) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.UpsertAsync( collectionName: ""{collection_name}"", points: new List { new() { Id = Guid.Parse(""5c56c793-69f3-4fbf-87e6-c4bf54c28c26""), Vectors = new[] { 0.05f, 0.61f, 0.76f, 0.74f }, Payload = { [""color""] = ""Red"" } } } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Upsert(context.Background(), &qdrant.UpsertPoints{ CollectionName: ""{collection_name}"", Points: []*qdrant.PointStruct{ { Id: qdrant.NewID(""5c56c793-69f3-4fbf-87e6-c4bf54c28c26""), Vectors: qdrant.NewVectors(0.05, 0.61, 0.76, 0.74), Payload: qdrant.NewValueMap(map[string]any{""color"": ""Red""}), }, }, }) ``` and ```http PUT /collections/{collection_name}/points { ""points"": [ { ""id"": 1, ""payload"": {""color"": ""red""}, ""vector"": [0.9, 0.1, 0.1] } ] } ``` ```python client.upsert( collection_name=""{collection_name}"", points=[ models.PointStruct( id=1, payload={ ""color"": ""red"", }, vector=[0.9, 0.1, 0.1], ), ], ) ``` ```typescript client.upsert(""{collection_name}"", { points: [ { id: 1, payload: { color: ""red"", }, vector: [0.9, 0.1, 0.1], }, ], }); ``` ```rust use qdrant_client::qdrant::{PointStruct, UpsertPointsBuilder}; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .upsert_points( UpsertPointsBuilder::new( ""{collection_name}"", vec![PointStruct::new( 1, vec![0.9, 0.1, 0.1], [(""color"", ""Red"".into())], )], ) .wait(true), ) .await?; ``` ```java import java.util.List; import java.util.Map; import static io.qdrant.client.PointIdFactory.id; import static io.qdrant.client.ValueFactory.value; import static io.qdrant.client.VectorsFactory.vectors; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.PointStruct; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .upsertAsync( ""{collection_name}"", List.of( PointStruct.newBuilder() .setId(id(1)) .setVectors(vectors(0.05f, 0.61f, 0.76f, 0.74f)) .putAllPayload(Map.of(""color"", value(""Red""))) .build())) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.UpsertAsync( collectionName: ""{collection_name}"", points: new List { new() { Id = 1, Vectors = new[] { 0.05f, 0.61f, 0.76f, 0.74f }, Payload = { [""color""] = ""Red"" } } } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Upsert(context.Background(), &qdrant.UpsertPoints{ CollectionName: ""{collection_name}"", Points: []*qdrant.PointStruct{ { Id: qdrant.NewIDNum(1), Vectors: qdrant.NewVectors(0.05, 0.61, 0.76, 0.74), Payload: qdrant.NewValueMap(map[string]any{""color"": ""Red""}), }, }, }) ``` are both possible. ## Vectors Each point in qdrant may have one or more vectors. Vectors are the central component of the Qdrant architecture, qdrant relies on different types of vectors to provide different types of data exploration and search. Here is a list of supported vector types: ||| |-|-| | Dense Vectors | A regular vectors, generated by majority of the embedding models. | | Sparse Vectors | Vectors with no fixed length, but only a few non-zero elements.
Useful for exact token match and collaborative filtering recommendations. | | MultiVectors | Matrices of numbers with fixed length but variable height.
Usually obtained from late interraction models like ColBERT. | It is possible to attach more than one type of vector to a single point. In Qdrant we call it Named Vectors. Read more about vector types, how they are stored and optimized in the [vectors](../vectors/) section. ## Upload points To optimize performance, Qdrant supports batch loading of points. I.e., you can load several points into the service in one API call. Batching allows you to minimize the overhead of creating a network connection. The Qdrant API supports two ways of creating batches - record-oriented and column-oriented. Internally, these options do not differ and are made only for the convenience of interaction. Create points with batch: ```http PUT /collections/{collection_name}/points { ""batch"": { ""ids"": [1, 2, 3], ""payloads"": [ {""color"": ""red""}, {""color"": ""green""}, {""color"": ""blue""} ], ""vectors"": [ [0.9, 0.1, 0.1], [0.1, 0.9, 0.1], [0.1, 0.1, 0.9] ] } } ``` ```python client.upsert( collection_name=""{collection_name}"", points=models.Batch( ids=[1, 2, 3], payloads=[ {""color"": ""red""}, {""color"": ""green""}, {""color"": ""blue""}, ], vectors=[ [0.9, 0.1, 0.1], [0.1, 0.9, 0.1], [0.1, 0.1, 0.9], ], ), ) ``` ```typescript client.upsert(""{collection_name}"", { batch: { ids: [1, 2, 3], payloads: [{ color: ""red"" }, { color: ""green"" }, { color: ""blue"" }], vectors: [ [0.9, 0.1, 0.1], [0.1, 0.9, 0.1], [0.1, 0.1, 0.9], ], }, }); ``` or record-oriented equivalent: ```http PUT /collections/{collection_name}/points { ""points"": [ { ""id"": 1, ""payload"": {""color"": ""red""}, ""vector"": [0.9, 0.1, 0.1] }, { ""id"": 2, ""payload"": {""color"": ""green""}, ""vector"": [0.1, 0.9, 0.1] }, { ""id"": 3, ""payload"": {""color"": ""blue""}, ""vector"": [0.1, 0.1, 0.9] } ] } ``` ```python client.upsert( collection_name=""{collection_name}"", points=[ models.PointStruct( id=1, payload={ ""color"": ""red"", }, vector=[0.9, 0.1, 0.1], ), models.PointStruct( id=2, payload={ ""color"": ""green"", }, vector=[0.1, 0.9, 0.1], ), models.PointStruct( id=3, payload={ ""color"": ""blue"", }, vector=[0.1, 0.1, 0.9], ), ], ) ``` ```typescript client.upsert(""{collection_name}"", { points: [ { id: 1, payload: { color: ""red"" }, vector: [0.9, 0.1, 0.1], }, { id: 2, payload: { color: ""green"" }, vector: [0.1, 0.9, 0.1], }, { id: 3, payload: { color: ""blue"" }, vector: [0.1, 0.1, 0.9], }, ], }); ``` ```rust use qdrant_client::qdrant::{PointStruct, UpsertPointsBuilder}; client .upsert_points( UpsertPointsBuilder::new( ""{collection_name}"", vec![ PointStruct::new(1, vec![0.9, 0.1, 0.1], [(""city"", ""red"".into())]), PointStruct::new(2, vec![0.1, 0.9, 0.1], [(""city"", ""green"".into())]), PointStruct::new(3, vec![0.1, 0.1, 0.9], [(""city"", ""blue"".into())]), ], ) .wait(true), ) .await?; ``` ```java import java.util.List; import java.util.Map; import static io.qdrant.client.PointIdFactory.id; import static io.qdrant.client.ValueFactory.value; import static io.qdrant.client.VectorsFactory.vectors; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.PointStruct; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .upsertAsync( ""{collection_name}"", List.of( PointStruct.newBuilder() .setId(id(1)) .setVectors(vectors(0.9f, 0.1f, 0.1f)) .putAllPayload(Map.of(""color"", value(""red""))) .build(), PointStruct.newBuilder() .setId(id(2)) .setVectors(vectors(0.1f, 0.9f, 0.1f)) .putAllPayload(Map.of(""color"", value(""green""))) .build(), PointStruct.newBuilder() .setId(id(3)) .setVectors(vectors(0.1f, 0.1f, 0.9f)) .putAllPayload(Map.of(""color"", value(""blue""))) .build())) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.UpsertAsync( collectionName: ""{collection_name}"", points: new List { new() { Id = 1, Vectors = new[] { 0.9f, 0.1f, 0.1f }, Payload = { [""color""] = ""red"" } }, new() { Id = 2, Vectors = new[] { 0.1f, 0.9f, 0.1f }, Payload = { [""color""] = ""green"" } }, new() { Id = 3, Vectors = new[] { 0.1f, 0.1f, 0.9f }, Payload = { [""color""] = ""blue"" } } } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Upsert(context.Background(), &qdrant.UpsertPoints{ CollectionName: ""{collection_name}"", Points: []*qdrant.PointStruct{ { Id: qdrant.NewIDNum(1), Vectors: qdrant.NewVectors(0.9, 0.1, 0.1), Payload: qdrant.NewValueMap(map[string]any{""color"": ""red""}), }, { Id: qdrant.NewIDNum(2), Vectors: qdrant.NewVectors(0.1, 0.9, 0.1), Payload: qdrant.NewValueMap(map[string]any{""color"": ""green""}), }, { Id: qdrant.NewIDNum(3), Vectors: qdrant.NewVectors(0.1, 0.1, 0.9), Payload: qdrant.NewValueMap(map[string]any{""color"": ""blue""}), }, }, }) ``` The Python client has additional features for loading points, which include: - Parallelization - A retry mechanism - Lazy batching support For example, you can read your data directly from hard drives, to avoid storing all data in RAM. You can use these features with the `upload_collection` and `upload_points` methods. Similar to the basic upsert API, these methods support both record-oriented and column-oriented formats. Column-oriented format: ```python client.upload_collection( collection_name=""{collection_name}"", ids=[1, 2], payload=[ {""color"": ""red""}, {""color"": ""green""}, ], vectors=[ [0.9, 0.1, 0.1], [0.1, 0.9, 0.1], ], parallel=4, max_retries=3, ) ``` Record-oriented format: ```python client.upload_points( collection_name=""{collection_name}"", points=[ models.PointStruct( id=1, payload={ ""color"": ""red"", }, vector=[0.9, 0.1, 0.1], ), models.PointStruct( id=2, payload={ ""color"": ""green"", }, vector=[0.1, 0.9, 0.1], ), ], parallel=4, max_retries=3, ) ``` All APIs in Qdrant, including point loading, are idempotent. It means that executing the same method several times in a row is equivalent to a single execution. In this case, it means that points with the same id will be overwritten when re-uploaded. Idempotence property is useful if you use, for example, a message queue that doesn't provide an exactly-ones guarantee. Even with such a system, Qdrant ensures data consistency. [_Available as of v0.10.0_](#create-vector-name) If the collection was created with multiple vectors, each vector data can be provided using the vector's name: ```http PUT /collections/{collection_name}/points { ""points"": [ { ""id"": 1, ""vector"": { ""image"": [0.9, 0.1, 0.1, 0.2], ""text"": [0.4, 0.7, 0.1, 0.8, 0.1, 0.1, 0.9, 0.2] } }, { ""id"": 2, ""vector"": { ""image"": [0.2, 0.1, 0.3, 0.9], ""text"": [0.5, 0.2, 0.7, 0.4, 0.7, 0.2, 0.3, 0.9] } } ] } ``` ```python client.upsert( collection_name=""{collection_name}"", points=[ models.PointStruct( id=1, vector={ ""image"": [0.9, 0.1, 0.1, 0.2], ""text"": [0.4, 0.7, 0.1, 0.8, 0.1, 0.1, 0.9, 0.2], }, ), models.PointStruct( id=2, vector={ ""image"": [0.2, 0.1, 0.3, 0.9], ""text"": [0.5, 0.2, 0.7, 0.4, 0.7, 0.2, 0.3, 0.9], }, ), ], ) ``` ```typescript client.upsert(""{collection_name}"", { points: [ { id: 1, vector: { image: [0.9, 0.1, 0.1, 0.2], text: [0.4, 0.7, 0.1, 0.8, 0.1, 0.1, 0.9, 0.2], }, }, { id: 2, vector: { image: [0.2, 0.1, 0.3, 0.9], text: [0.5, 0.2, 0.7, 0.4, 0.7, 0.2, 0.3, 0.9], }, }, ], }); ``` ```rust use std::collections::HashMap; use qdrant_client::qdrant::{PointStruct, UpsertPointsBuilder}; use qdrant_client::Payload; client .upsert_points( UpsertPointsBuilder::new( ""{collection_name}"", vec![ PointStruct::new( 1, HashMap::from([ (""image"".to_string(), vec![0.9, 0.1, 0.1, 0.2]), ( ""text"".to_string(), vec![0.4, 0.7, 0.1, 0.8, 0.1, 0.1, 0.9, 0.2], ), ]), Payload::default(), ), PointStruct::new( 2, HashMap::from([ (""image"".to_string(), vec![0.2, 0.1, 0.3, 0.9]), ( ""text"".to_string(), vec![0.5, 0.2, 0.7, 0.4, 0.7, 0.2, 0.3, 0.9], ), ]), Payload::default(), ), ], ) .wait(true), ) .await?; ``` ```java import java.util.List; import java.util.Map; import static io.qdrant.client.PointIdFactory.id; import static io.qdrant.client.VectorFactory.vector; import static io.qdrant.client.VectorsFactory.namedVectors; import io.qdrant.client.grpc.Points.PointStruct; client .upsertAsync( ""{collection_name}"", List.of( PointStruct.newBuilder() .setId(id(1)) .setVectors( namedVectors( Map.of( ""image"", vector(List.of(0.9f, 0.1f, 0.1f, 0.2f)), ""text"", vector(List.of(0.4f, 0.7f, 0.1f, 0.8f, 0.1f, 0.1f, 0.9f, 0.2f))))) .build(), PointStruct.newBuilder() .setId(id(2)) .setVectors( namedVectors( Map.of( ""image"", List.of(0.2f, 0.1f, 0.3f, 0.9f), ""text"", List.of(0.5f, 0.2f, 0.7f, 0.4f, 0.7f, 0.2f, 0.3f, 0.9f)))) .build())) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.UpsertAsync( collectionName: ""{collection_name}"", points: new List { new() { Id = 1, Vectors = new Dictionary { [""image""] = [0.9f, 0.1f, 0.1f, 0.2f], [""text""] = [0.4f, 0.7f, 0.1f, 0.8f, 0.1f, 0.1f, 0.9f, 0.2f] } }, new() { Id = 2, Vectors = new Dictionary { [""image""] = [0.2f, 0.1f, 0.3f, 0.9f], [""text""] = [0.5f, 0.2f, 0.7f, 0.4f, 0.7f, 0.2f, 0.3f, 0.9f] } } } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Upsert(context.Background(), &qdrant.UpsertPoints{ CollectionName: ""{collection_name}"", Points: []*qdrant.PointStruct{ { Id: qdrant.NewIDNum(1), Vectors: qdrant.NewVectorsMap(map[string]*qdrant.Vector{ ""image"": qdrant.NewVector(0.9, 0.1, 0.1, 0.2), ""text"": qdrant.NewVector(0.4, 0.7, 0.1, 0.8, 0.1, 0.1, 0.9, 0.2), }), }, { Id: qdrant.NewIDNum(2), Vectors: qdrant.NewVectorsMap(map[string]*qdrant.Vector{ ""image"": qdrant.NewVector(0.2, 0.1, 0.3, 0.9), ""text"": qdrant.NewVector(0.5, 0.2, 0.7, 0.4, 0.7, 0.2, 0.3, 0.9), }), }, }, }) ``` _Available as of v1.2.0_ Named vectors are optional. When uploading points, some vectors may be omitted. For example, you can upload one point with only the `image` vector and a second one with only the `text` vector. When uploading a point with an existing ID, the existing point is deleted first, then it is inserted with just the specified vectors. In other words, the entire point is replaced, and any unspecified vectors are set to null. To keep existing vectors unchanged and only update specified vectors, see [update vectors](#update-vectors). _Available as of v1.7.0_ Points can contain dense and sparse vectors. A sparse vector is an array in which most of the elements have a value of zero. It is possible to take advantage of this property to have an optimized representation, for this reason they have a different shape than dense vectors. They are represented as a list of `(index, value)` pairs, where `index` is an integer and `value` is a floating point number. The `index` is the position of the non-zero value in the vector. The `values` is the value of the non-zero element. For example, the following vector: ``` [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 2.0, 0.0, 0.0] ``` can be represented as a sparse vector: ``` [(6, 1.0), (7, 2.0)] ``` Qdrant uses the following JSON representation throughout its APIs. ```json { ""indices"": [6, 7], ""values"": [1.0, 2.0] } ``` The `indices` and `values` arrays must have the same length. And the `indices` must be unique. If the `indices` are not sorted, Qdrant will sort them internally so you may not rely on the order of the elements. Sparse vectors must be named and can be uploaded in the same way as dense vectors. ```http PUT /collections/{collection_name}/points { ""points"": [ { ""id"": 1, ""vector"": { ""text"": { ""indices"": [6, 7], ""values"": [1.0, 2.0] } } }, { ""id"": 2, ""vector"": { ""text"": { ""indices"": [1, 1, 2, 3, 4, 5], ""values"": [0.1, 0.2, 0.3, 0.4, 0.5] } } } ] } ``` ```python client.upsert( collection_name=""{collection_name}"", points=[ models.PointStruct( id=1, vector={ ""text"": models.SparseVector( indices=[6, 7], values=[1.0, 2.0], ) }, ), models.PointStruct( id=2, vector={ ""text"": models.SparseVector( indices=[1, 2, 3, 4, 5], values=[0.1, 0.2, 0.3, 0.4, 0.5], ) }, ), ], ) ``` ```typescript client.upsert(""{collection_name}"", { points: [ { id: 1, vector: { text: { indices: [6, 7], values: [1.0, 2.0], }, }, }, { id: 2, vector: { text: { indices: [1, 2, 3, 4, 5], values: [0.1, 0.2, 0.3, 0.4, 0.5], }, }, }, ], }); ``` ```rust use std::collections::HashMap; use qdrant_client::qdrant::{PointStruct, UpsertPointsBuilder, Vector}; use qdrant_client::Payload; client .upsert_points( UpsertPointsBuilder::new( ""{collection_name}"", vec![ PointStruct::new( 1, HashMap::from([(""text"".to_string(), vec![(6, 1.0), (7, 2.0)])]), Payload::default(), ), PointStruct::new( 2, HashMap::from([( ""text"".to_string(), vec![(1, 0.1), (2, 0.2), (3, 0.3), (4, 0.4), (5, 0.5)], )]), Payload::default(), ), ], ) .wait(true), ) .await?; ``` ```java import java.util.List; import java.util.Map; import static io.qdrant.client.PointIdFactory.id; import static io.qdrant.client.VectorFactory.vector; import io.qdrant.client.grpc.Points.NamedVectors; import io.qdrant.client.grpc.Points.PointStruct; import io.qdrant.client.grpc.Points.Vectors; client .upsertAsync( ""{collection_name}"", List.of( PointStruct.newBuilder() .setId(id(1)) .setVectors( Vectors.newBuilder() .setVectors( NamedVectors.newBuilder() .putAllVectors( Map.of( ""text"", vector(List.of(1.0f, 2.0f), List.of(6, 7)))) .build()) .build()) .build(), PointStruct.newBuilder() .setId(id(2)) .setVectors( Vectors.newBuilder() .setVectors( NamedVectors.newBuilder() .putAllVectors( Map.of( ""text"", vector( List.of(0.1f, 0.2f, 0.3f, 0.4f, 0.5f), List.of(1, 2, 3, 4, 5)))) .build()) .build()) .build())) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.UpsertAsync( collectionName: ""{collection_name}"", points: new List { new() { Id = 1, Vectors = new Dictionary { [""text""] = ([1.0f, 2.0f], [6, 7]) } }, new() { Id = 2, Vectors = new Dictionary { [""text""] = ([0.1f, 0.2f, 0.3f, 0.4f, 0.5f], [1, 2, 3, 4, 5]) } } } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Upsert(context.Background(), &qdrant.UpsertPoints{ CollectionName: ""{collection_name}"", Points: []*qdrant.PointStruct{ { Id: qdrant.NewIDNum(1), Vectors: qdrant.NewVectorsMap(map[string]*qdrant.Vector{ ""text"": qdrant.NewVectorSparse( []uint32{6, 7}, []float32{1.0, 2.0}), }), }, { Id: qdrant.NewIDNum(2), Vectors: qdrant.NewVectorsMap(map[string]*qdrant.Vector{ ""text"": qdrant.NewVectorSparse( []uint32{1, 2, 3, 4, 5}, []float32{0.1, 0.2, 0.3, 0.4, 0.5}), }), }, }, }) ``` ## Modify points To change a point, you can modify its vectors or its payload. There are several ways to do this. ### Update vectors _Available as of v1.2.0_ This method updates the specified vectors on the given points. Unspecified vectors are kept unchanged. All given points must exist. REST API ([Schema](https://api.qdrant.tech/api-reference/points/update-vectors)): ```http PUT /collections/{collection_name}/points/vectors { ""points"": [ { ""id"": 1, ""vector"": { ""image"": [0.1, 0.2, 0.3, 0.4] } }, { ""id"": 2, ""vector"": { ""text"": [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2] } } ] } ``` ```python client.update_vectors( collection_name=""{collection_name}"", points=[ models.PointVectors( id=1, vector={ ""image"": [0.1, 0.2, 0.3, 0.4], }, ), models.PointVectors( id=2, vector={ ""text"": [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2], }, ), ], ) ``` ```typescript client.updateVectors(""{collection_name}"", { points: [ { id: 1, vector: { image: [0.1, 0.2, 0.3, 0.4], }, }, { id: 2, vector: { text: [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2], }, }, ], }); ``` ```rust use std::collections::HashMap; use qdrant_client::qdrant::{ PointVectors, UpdatePointVectorsBuilder, }; client .update_vectors( UpdatePointVectorsBuilder::new( ""{collection_name}"", vec![ PointVectors { id: Some(1.into()), vectors: Some( HashMap::from([(""image"".to_string(), vec![0.1, 0.2, 0.3, 0.4])]).into(), ), }, PointVectors { id: Some(2.into()), vectors: Some( HashMap::from([( ""text"".to_string(), vec![0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2], )]) .into(), ), }, ], ) .wait(true), ) .await?; ``` ```java import java.util.List; import java.util.Map; import static io.qdrant.client.PointIdFactory.id; import static io.qdrant.client.VectorFactory.vector; import static io.qdrant.client.VectorsFactory.namedVectors; client .updateVectorsAsync( ""{collection_name}"", List.of( PointVectors.newBuilder() .setId(id(1)) .setVectors(namedVectors(Map.of(""image"", vector(List.of(0.1f, 0.2f, 0.3f, 0.4f))))) .build(), PointVectors.newBuilder() .setId(id(2)) .setVectors( namedVectors( Map.of( ""text"", vector(List.of(0.9f, 0.8f, 0.7f, 0.6f, 0.5f, 0.4f, 0.3f, 0.2f))))) .build())) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.UpdateVectorsAsync( collectionName: ""{collection_name}"", points: new List { new() { Id = 1, Vectors = (""image"", new float[] { 0.1f, 0.2f, 0.3f, 0.4f }) }, new() { Id = 2, Vectors = (""text"", new float[] { 0.9f, 0.8f, 0.7f, 0.6f, 0.5f, 0.4f, 0.3f, 0.2f }) } } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.UpdateVectors(context.Background(), &qdrant.UpdatePointVectors{ CollectionName: ""{collection_name}"", Points: []*qdrant.PointVectors{ { Id: qdrant.NewIDNum(1), Vectors: qdrant.NewVectorsMap(map[string]*qdrant.Vector{ ""image"": qdrant.NewVector(0.1, 0.2, 0.3, 0.4), }), }, { Id: qdrant.NewIDNum(2), Vectors: qdrant.NewVectorsMap(map[string]*qdrant.Vector{ ""text"": qdrant.NewVector(0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2), }), }, }, }) ``` To update points and replace all of its vectors, see [uploading points](#upload-points). ### Delete vectors _Available as of v1.2.0_ This method deletes just the specified vectors from the given points. Other vectors are kept unchanged. Points are never deleted. REST API ([Schema](https://api.qdrant.tech/api-reference/points/delete-vectors)): ```http POST /collections/{collection_name}/points/vectors/delete { ""points"": [0, 3, 100], ""vectors"": [""text"", ""image""] } ``` ```python client.delete_vectors( collection_name=""{collection_name}"", points=[0, 3, 100], vectors=[""text"", ""image""], ) ``` ```typescript client.deleteVectors(""{collection_name}"", { points: [0, 3, 10], vectors: [""text"", ""image""], }); ``` ```rust use qdrant_client::qdrant::{ DeletePointVectorsBuilder, PointsIdsList, }; client .delete_vectors( DeletePointVectorsBuilder::new(""{collection_name}"") .points_selector(PointsIdsList { ids: vec![0.into(), 3.into(), 10.into()], }) .vectors(vec![""text"".into(), ""image"".into()]) .wait(true), ) .await?; ``` ```java import java.util.List; import static io.qdrant.client.PointIdFactory.id; client .deleteVectorsAsync( ""{collection_name}"", List.of(""text"", ""image""), List.of(id(0), id(3), id(10))) .get(); ``` To delete entire points, see [deleting points](#delete-points). ### Update payload Learn how to modify the payload of a point in the [Payload](../payload/#update-payload) section. ## Delete points REST API ([Schema](https://api.qdrant.tech/api-reference/points/delete-points)): ```http POST /collections/{collection_name}/points/delete { ""points"": [0, 3, 100] } ``` ```python client.delete( collection_name=""{collection_name}"", points_selector=models.PointIdsList( points=[0, 3, 100], ), ) ``` ```typescript client.delete(""{collection_name}"", { points: [0, 3, 100], }); ``` ```rust use qdrant_client::qdrant::{DeletePointsBuilder, PointsIdsList}; client .delete_points( DeletePointsBuilder::new(""{collection_name}"") .points(PointsIdsList { ids: vec![0.into(), 3.into(), 100.into()], }) .wait(true), ) .await?; ``` ```java import java.util.List; import static io.qdrant.client.PointIdFactory.id; client.deleteAsync(""{collection_name}"", List.of(id(0), id(3), id(100))); ``` ```csharp using Qdrant.Client; var client = new QdrantClient(""localhost"", 6334); await client.DeleteAsync(collectionName: ""{collection_name}"", ids: [0, 3, 100]); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Delete(context.Background(), &qdrant.DeletePoints{ CollectionName: ""{collection_name}"", Points: qdrant.NewPointsSelector( qdrant.NewIDNum(0), qdrant.NewIDNum(3), qdrant.NewIDNum(100), ), }) ``` Alternative way to specify which points to remove is to use filter. ```http POST /collections/{collection_name}/points/delete { ""filter"": { ""must"": [ { ""key"": ""color"", ""match"": { ""value"": ""red"" } } ] } } ``` ```python client.delete( collection_name=""{collection_name}"", points_selector=models.FilterSelector( filter=models.Filter( must=[ models.FieldCondition( key=""color"", match=models.MatchValue(value=""red""), ), ], ) ), ) ``` ```typescript client.delete(""{collection_name}"", { filter: { must: [ { key: ""color"", match: { value: ""red"", }, }, ], }, }); ``` ```rust use qdrant_client::qdrant::{Condition, DeletePointsBuilder, Filter}; client .delete_points( DeletePointsBuilder::new(""{collection_name}"") .points(Filter::must([Condition::matches( ""color"", ""red"".to_string(), )])) .wait(true), ) .await?; ``` ```java import static io.qdrant.client.ConditionFactory.matchKeyword; import io.qdrant.client.grpc.Points.Filter; client .deleteAsync( ""{collection_name}"", Filter.newBuilder().addMust(matchKeyword(""color"", ""red"")).build()) .get(); ``` ```csharp using Qdrant.Client; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient(""localhost"", 6334); await client.DeleteAsync(collectionName: ""{collection_name}"", filter: MatchKeyword(""color"", ""red"")); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Delete(context.Background(), &qdrant.DeletePoints{ CollectionName: ""{collection_name}"", Points: qdrant.NewPointsSelectorFilter( &qdrant.Filter{ Must: []*qdrant.Condition{ qdrant.NewMatch(""color"", ""red""), }, }, ), }) ``` This example removes all points with `{ ""color"": ""red"" }` from the collection. ## Retrieve points There is a method for retrieving points by their ids. REST API ([Schema](https://api.qdrant.tech/api-reference/points/get-points)): ```http POST /collections/{collection_name}/points { ""ids"": [0, 3, 100] } ``` ```python client.retrieve( collection_name=""{collection_name}"", ids=[0, 3, 100], ) ``` ```typescript client.retrieve(""{collection_name}"", { ids: [0, 3, 100], }); ``` ```rust use qdrant_client::qdrant::GetPointsBuilder; client .get_points(GetPointsBuilder::new( ""{collection_name}"", vec![0.into(), 30.into(), 100.into()], )) .await?; ``` ```java import java.util.List; import static io.qdrant.client.PointIdFactory.id; client .retrieveAsync(""{collection_name}"", List.of(id(0), id(30), id(100)), false, false, null) .get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient(""localhost"", 6334); await client.RetrieveAsync( collectionName: ""{collection_name}"", ids: [0, 30, 100], withPayload: false, withVectors: false ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Get(context.Background(), &qdrant.GetPoints{ CollectionName: ""{collection_name}"", Ids: []*qdrant.PointId{ qdrant.NewIDNum(0), qdrant.NewIDNum(3), qdrant.NewIDNum(100), }, }) ``` This method has additional parameters `with_vectors` and `with_payload`. Using these parameters, you can select parts of the point you want as a result. Excluding helps you not to waste traffic transmitting useless data. The single point can also be retrieved via the API: REST API ([Schema](https://api.qdrant.tech/api-reference/points/get-point)): ```http GET /collections/{collection_name}/points/{point_id} ``` ## Scroll points Sometimes it might be necessary to get all stored points without knowing ids, or iterate over points that correspond to a filter. REST API ([Schema](https://api.qdrant.tech/master/api-reference/search/scroll-points)): ```http POST /collections/{collection_name}/points/scroll { ""filter"": { ""must"": [ { ""key"": ""color"", ""match"": { ""value"": ""red"" } } ] }, ""limit"": 1, ""with_payload"": true, ""with_vector"": false } ``` ```python client.scroll( collection_name=""{collection_name}"", scroll_filter=models.Filter( must=[ models.FieldCondition(key=""color"", match=models.MatchValue(value=""red"")), ] ), limit=1, with_payload=True, with_vectors=False, ) ``` ```typescript client.scroll(""{collection_name}"", { filter: { must: [ { key: ""color"", match: { value: ""red"", }, }, ], }, limit: 1, with_payload: true, with_vector: false, }); ``` ```rust use qdrant_client::qdrant::{Condition, Filter, ScrollPointsBuilder}; client .scroll( ScrollPointsBuilder::new(""{collection_name}"") .filter(Filter::must([Condition::matches( ""color"", ""red"".to_string(), )])) .limit(1) .with_payload(true) .with_vectors(false), ) .await?; ``` ```java import static io.qdrant.client.ConditionFactory.matchKeyword; import static io.qdrant.client.WithPayloadSelectorFactory.enable; import io.qdrant.client.grpc.Points.Filter; import io.qdrant.client.grpc.Points.ScrollPoints; client .scrollAsync( ScrollPoints.newBuilder() .setCollectionName(""{collection_name}"") .setFilter(Filter.newBuilder().addMust(matchKeyword(""color"", ""red"")).build()) .setLimit(1) .setWithPayload(enable(true)) .build()) .get(); ``` ```csharp using Qdrant.Client; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient(""localhost"", 6334); await client.ScrollAsync( collectionName: ""{collection_name}"", filter: MatchKeyword(""color"", ""red""), limit: 1, payloadSelector: true ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Scroll(context.Background(), &qdrant.ScrollPoints{ CollectionName: ""{collection_name}"", Filter: &qdrant.Filter{ Must: []*qdrant.Condition{ qdrant.NewMatch(""color"", ""red""), }, }, Limit: qdrant.PtrOf(uint32(1)), WithPayload: qdrant.NewWithPayload(true), }) ``` Returns all point with `color` = `red`. ```json { ""result"": { ""next_page_offset"": 1, ""points"": [ { ""id"": 0, ""payload"": { ""color"": ""red"" } } ] }, ""status"": ""ok"", ""time"": 0.0001 } ``` The Scroll API will return all points that match the filter in a page-by-page manner. All resulting points are sorted by ID. To query the next page it is necessary to specify the largest seen ID in the `offset` field. For convenience, this ID is also returned in the field `next_page_offset`. If the value of the `next_page_offset` field is `null` - the last page is reached. ### Order points by payload key _Available as of v1.8.0_ When using the [`scroll`](#scroll-points) API, you can sort the results by payload key. For example, you can retrieve points in chronological order if your payloads have a `""timestamp""` field, as is shown from the example below: ```http POST /collections/{collection_name}/points/scroll { ""limit"": 15, ""order_by"": ""timestamp"", // <-- this! } ``` ```python client.scroll( collection_name=""{collection_name}"", limit=15, order_by=""timestamp"", # <-- this! ) ``` ```typescript client.scroll(""{collection_name}"", { limit: 15, order_by: ""timestamp"", // <-- this! }); ``` ```rust use qdrant_client::qdrant::{OrderByBuilder, ScrollPointsBuilder}; client .scroll( ScrollPointsBuilder::new(""{collection_name}"") .limit(15) .order_by(OrderByBuilder::new(""timestamp"")), ) .await?; ``` ```java import io.qdrant.client.grpc.Points.OrderBy; import io.qdrant.client.grpc.Points.ScrollPoints; client.scrollAsync(ScrollPoints.newBuilder() .setCollectionName(""{collection_name}"") .setLimit(15) .setOrderBy(OrderBy.newBuilder().setKey(""timestamp"").build()) .build()).get(); ``` ```csharp await client.ScrollAsync(""{collection_name}"", limit: 15, orderBy: ""timestamp""); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Scroll(context.Background(), &qdrant.ScrollPoints{ CollectionName: ""{collection_name}"", Limit: qdrant.PtrOf(uint32(15)), OrderBy: &qdrant.OrderBy{ Key: ""timestamp"", }, }) ``` You need to use the `order_by` `key` parameter to specify the payload key. Then you can add other fields to control the ordering, such as `direction` and `start_from`: ```http ""order_by"": { ""key"": ""timestamp"", ""direction"": ""desc"" // default is ""asc"" ""start_from"": 123, // start from this value } ``` ```python order_by=models.OrderBy( key=""timestamp"", direction=""desc"", # default is ""asc"" start_from=123, # start from this value ) ``` ```typescript order_by: { key: ""timestamp"", direction: ""desc"", // default is ""asc"" start_from: 123, // start from this value } ``` ```rust use qdrant_client::qdrant::{start_from::Value, Direction, OrderByBuilder}; OrderByBuilder::new(""timestamp"") .direction(Direction::Desc.into()) .start_from(Value::Integer(123)) .build(); ``` ```java import io.qdrant.client.grpc.Points.Direction; import io.qdrant.client.grpc.Points.OrderBy; import io.qdrant.client.grpc.Points.StartFrom; OrderBy.newBuilder() .setKey(""timestamp"") .setDirection(Direction.Desc) .setStartFrom(StartFrom.newBuilder() .setInteger(123) .build()) .build(); ``` ```csharp using Qdrant.Client.Grpc; new OrderBy { Key = ""timestamp"", Direction = Direction.Desc, StartFrom = 123 }; ``` ```go import ""github.com/qdrant/go-client/qdrant"" qdrant.OrderBy{ Key: ""timestamp"", Direction: qdrant.Direction_Desc.Enum(), StartFrom: qdrant.NewStartFromInt(123), } ``` When sorting is based on a non-unique value, it is not possible to rely on an ID offset. Thus, next_page_offset is not returned within the response. However, you can still do pagination by combining `""order_by"": { ""start_from"": ... }` with a `{ ""must_not"": [{ ""has_id"": [...] }] }` filter. ## Counting points _Available as of v0.8.4_ Sometimes it can be useful to know how many points fit the filter conditions without doing a real search. Among others, for example, we can highlight the following scenarios: - Evaluation of results size for faceted search - Determining the number of pages for pagination - Debugging the query execution speed REST API ([Schema](https://api.qdrant.tech/master/api-reference/points/count-points)): ```http POST /collections/{collection_name}/points/count { ""filter"": { ""must"": [ { ""key"": ""color"", ""match"": { ""value"": ""red"" } } ] }, ""exact"": true } ``` ```python client.count( collection_name=""{collection_name}"", count_filter=models.Filter( must=[ models.FieldCondition(key=""color"", match=models.MatchValue(value=""red"")), ] ), exact=True, ) ``` ```typescript client.count(""{collection_name}"", { filter: { must: [ { key: ""color"", match: { value: ""red"", }, }, ], }, exact: true, }); ``` ```rust use qdrant_client::qdrant::{Condition, CountPointsBuilder, Filter}; client .count( CountPointsBuilder::new(""{collection_name}"") .filter(Filter::must([Condition::matches( ""color"", ""red"".to_string(), )])) .exact(true), ) .await?; ``` ```java import static io.qdrant.client.ConditionFactory.matchKeyword; import io.qdrant.client.grpc.Points.Filter; client .countAsync( ""{collection_name}"", Filter.newBuilder().addMust(matchKeyword(""color"", ""red"")).build(), true) .get(); ``` ```csharp using Qdrant.Client; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient(""localhost"", 6334); await client.CountAsync( collectionName: ""{collection_name}"", filter: MatchKeyword(""color"", ""red""), exact: true ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Count(context.Background(), &qdrant.CountPoints{ CollectionName: ""midlib"", Filter: &qdrant.Filter{ Must: []*qdrant.Condition{ qdrant.NewMatch(""color"", ""red""), }, }, }) ``` Returns number of counts matching given filtering conditions: ```json { ""count"": 3811 } ``` ## Batch update _Available as of v1.5.0_ You can batch multiple point update operations. This includes inserting, updating and deleting points, vectors and payload. A batch update request consists of a list of operations. These are executed in order. These operations can be batched: - [Upsert points](#upload-points): `upsert` or `UpsertOperation` - [Delete points](#delete-points): `delete_points` or `DeleteOperation` - [Update vectors](#update-vectors): `update_vectors` or `UpdateVectorsOperation` - [Delete vectors](#delete-vectors): `delete_vectors` or `DeleteVectorsOperation` - [Set payload](/documentation/concepts/payload/#set-payload): `set_payload` or `SetPayloadOperation` - [Overwrite payload](/documentation/concepts/payload/#overwrite-payload): `overwrite_payload` or `OverwritePayload` - [Delete payload](/documentation/concepts/payload/#delete-payload-keys): `delete_payload` or `DeletePayloadOperation` - [Clear payload](/documentation/concepts/payload/#clear-payload): `clear_payload` or `ClearPayloadOperation` The following example snippet makes use of all operations. REST API ([Schema](https://api.qdrant.tech/master/api-reference/points/batch-update)): ```http POST /collections/{collection_name}/points/batch { ""operations"": [ { ""upsert"": { ""points"": [ { ""id"": 1, ""vector"": [1.0, 2.0, 3.0, 4.0], ""payload"": {} } ] } }, { ""update_vectors"": { ""points"": [ { ""id"": 1, ""vector"": [1.0, 2.0, 3.0, 4.0] } ] } }, { ""delete_vectors"": { ""points"": [1], ""vector"": [""""] } }, { ""overwrite_payload"": { ""payload"": { ""test_payload"": ""1"" }, ""points"": [1] } }, { ""set_payload"": { ""payload"": { ""test_payload_2"": ""2"", ""test_payload_3"": ""3"" }, ""points"": [1] } }, { ""delete_payload"": { ""keys"": [""test_payload_2""], ""points"": [1] } }, { ""clear_payload"": { ""points"": [1] } }, {""delete"": {""points"": [1]}} ] } ``` ```python client.batch_update_points( collection_name=""{collection_name}"", update_operations=[ models.UpsertOperation( upsert=models.PointsList( points=[ models.PointStruct( id=1, vector=[1.0, 2.0, 3.0, 4.0], payload={}, ), ] ) ), models.UpdateVectorsOperation( update_vectors=models.UpdateVectors( points=[ models.PointVectors( id=1, vector=[1.0, 2.0, 3.0, 4.0], ) ] ) ), models.DeleteVectorsOperation( delete_vectors=models.DeleteVectors(points=[1], vector=[""""]) ), models.OverwritePayloadOperation( overwrite_payload=models.SetPayload( payload={""test_payload"": 1}, points=[1], ) ), models.SetPayloadOperation( set_payload=models.SetPayload( payload={ ""test_payload_2"": 2, ""test_payload_3"": 3, }, points=[1], ) ), models.DeletePayloadOperation( delete_payload=models.DeletePayload(keys=[""test_payload_2""], points=[1]) ), models.ClearPayloadOperation(clear_payload=models.PointIdsList(points=[1])), models.DeleteOperation(delete=models.PointIdsList(points=[1])), ], ) ``` ```typescript client.batchUpdate(""{collection_name}"", { operations: [ { upsert: { points: [ { id: 1, vector: [1.0, 2.0, 3.0, 4.0], payload: {}, }, ], }, }, { update_vectors: { points: [ { id: 1, vector: [1.0, 2.0, 3.0, 4.0], }, ], }, }, { delete_vectors: { points: [1], vector: [""""], }, }, { overwrite_payload: { payload: { test_payload: 1, }, points: [1], }, }, { set_payload: { payload: { test_payload_2: 2, test_payload_3: 3, }, points: [1], }, }, { delete_payload: { keys: [""test_payload_2""], points: [1], }, }, { clear_payload: { points: [1], }, }, { delete: { points: [1], }, }, ], }); ``` ```rust use std::collections::HashMap; use qdrant_client::qdrant::{ points_update_operation::{ ClearPayload, DeletePayload, DeletePoints, DeleteVectors, Operation, OverwritePayload, PointStructList, SetPayload, UpdateVectors, }, PointStruct, PointVectors, PointsUpdateOperation, UpdateBatchPointsBuilder, VectorsSelector, }; use qdrant_client::Payload; client .update_points_batch( UpdateBatchPointsBuilder::new( ""{collection_name}"", vec![ PointsUpdateOperation { operation: Some(Operation::Upsert(PointStructList { points: vec![PointStruct::new( 1, vec![1.0, 2.0, 3.0, 4.0], Payload::default(), )], ..Default::default() })), }, PointsUpdateOperation { operation: Some(Operation::UpdateVectors(UpdateVectors { points: vec![PointVectors { id: Some(1.into()), vectors: Some(vec![1.0, 2.0, 3.0, 4.0].into()), }], ..Default::default() })), }, PointsUpdateOperation { operation: Some(Operation::DeleteVectors(DeleteVectors { points_selector: Some(vec![1.into()].into()), vectors: Some(VectorsSelector { names: vec!["""".into()], }), ..Default::default() })), }, PointsUpdateOperation { operation: Some(Operation::OverwritePayload(OverwritePayload { points_selector: Some(vec![1.into()].into()), payload: HashMap::from([(""test_payload"".to_string(), 1.into())]), ..Default::default() })), }, PointsUpdateOperation { operation: Some(Operation::SetPayload(SetPayload { points_selector: Some(vec![1.into()].into()), payload: HashMap::from([ (""test_payload_2"".to_string(), 2.into()), (""test_payload_3"".to_string(), 3.into()), ]), ..Default::default() })), }, PointsUpdateOperation { operation: Some(Operation::DeletePayload(DeletePayload { points_selector: Some(vec![1.into()].into()), keys: vec![""test_payload_2"".to_string()], ..Default::default() })), }, PointsUpdateOperation { operation: Some(Operation::ClearPayload(ClearPayload { points: Some(vec![1.into()].into()), ..Default::default() })), }, PointsUpdateOperation { operation: Some(Operation::DeletePoints(DeletePoints { points: Some(vec![1.into()].into()), ..Default::default() })), }, ], ) .wait(true), ) .await?; ``` ```java import java.util.List; import java.util.Map; import static io.qdrant.client.PointIdFactory.id; import static io.qdrant.client.ValueFactory.value; import static io.qdrant.client.VectorsFactory.vectors; import io.qdrant.client.grpc.Points.PointStruct; import io.qdrant.client.grpc.Points.PointVectors; import io.qdrant.client.grpc.Points.PointsIdsList; import io.qdrant.client.grpc.Points.PointsSelector; import io.qdrant.client.grpc.Points.PointsUpdateOperation; import io.qdrant.client.grpc.Points.PointsUpdateOperation.ClearPayload; import io.qdrant.client.grpc.Points.PointsUpdateOperation.DeletePayload; import io.qdrant.client.grpc.Points.PointsUpdateOperation.DeletePoints; import io.qdrant.client.grpc.Points.PointsUpdateOperation.DeleteVectors; import io.qdrant.client.grpc.Points.PointsUpdateOperation.PointStructList; import io.qdrant.client.grpc.Points.PointsUpdateOperation.SetPayload; import io.qdrant.client.grpc.Points.PointsUpdateOperation.UpdateVectors; import io.qdrant.client.grpc.Points.VectorsSelector; client .batchUpdateAsync( ""{collection_name}"", List.of( PointsUpdateOperation.newBuilder() .setUpsert( PointStructList.newBuilder() .addPoints( PointStruct.newBuilder() .setId(id(1)) .setVectors(vectors(1.0f, 2.0f, 3.0f, 4.0f)) .build()) .build()) .build(), PointsUpdateOperation.newBuilder() .setUpdateVectors( UpdateVectors.newBuilder() .addPoints( PointVectors.newBuilder() .setId(id(1)) .setVectors(vectors(1.0f, 2.0f, 3.0f, 4.0f)) .build()) .build()) .build(), PointsUpdateOperation.newBuilder() .setDeleteVectors( DeleteVectors.newBuilder() .setPointsSelector( PointsSelector.newBuilder() .setPoints(PointsIdsList.newBuilder().addIds(id(1)).build()) .build()) .setVectors(VectorsSelector.newBuilder().addNames("""").build()) .build()) .build(), PointsUpdateOperation.newBuilder() .setOverwritePayload( SetPayload.newBuilder() .setPointsSelector( PointsSelector.newBuilder() .setPoints(PointsIdsList.newBuilder().addIds(id(1)).build()) .build()) .putAllPayload(Map.of(""test_payload"", value(1))) .build()) .build(), PointsUpdateOperation.newBuilder() .setSetPayload( SetPayload.newBuilder() .setPointsSelector( PointsSelector.newBuilder() .setPoints(PointsIdsList.newBuilder().addIds(id(1)).build()) .build()) .putAllPayload( Map.of(""test_payload_2"", value(2), ""test_payload_3"", value(3))) .build()) .build(), PointsUpdateOperation.newBuilder() .setDeletePayload( DeletePayload.newBuilder() .setPointsSelector( PointsSelector.newBuilder() .setPoints(PointsIdsList.newBuilder().addIds(id(1)).build()) .build()) .addKeys(""test_payload_2"") .build()) .build(), PointsUpdateOperation.newBuilder() .setClearPayload( ClearPayload.newBuilder() .setPoints( PointsSelector.newBuilder() .setPoints(PointsIdsList.newBuilder().addIds(id(1)).build()) .build()) .build()) .build(), PointsUpdateOperation.newBuilder() .setDeletePoints( DeletePoints.newBuilder() .setPoints( PointsSelector.newBuilder() .setPoints(PointsIdsList.newBuilder().addIds(id(1)).build()) .build()) .build()) .build())) .get(); ``` To batch many points with a single operation type, please use batching functionality in that operation directly. ## Awaiting result If the API is called with the `&wait=false` parameter, or if it is not explicitly specified, the client will receive an acknowledgment of receiving data: ```json { ""result"": { ""operation_id"": 123, ""status"": ""acknowledged"" }, ""status"": ""ok"", ""time"": 0.000206061 } ``` This response does not mean that the data is available for retrieval yet. This uses a form of eventual consistency. It may take a short amount of time before it is actually processed as updating the collection happens in the background. In fact, it is possible that such request eventually fails. If inserting a lot of vectors, we also recommend using asynchronous requests to take advantage of pipelining. If the logic of your application requires a guarantee that the vector will be available for searching immediately after the API responds, then use the flag `?wait=true`. In this case, the API will return the result only after the operation is finished: ```json { ""result"": { ""operation_id"": 0, ""status"": ""completed"" }, ""status"": ""ok"", ""time"": 0.000206061 } ```",documentation/concepts/points.md "--- title: Vectors weight: 41 aliases: - /vectors --- # Vectors Vectors (or embeddings) are the core concept of the Qdrant Vector Search engine. Vectors define the similarity between objects in the vector space. If a pair of vectors are similar in vector space, it means that the objects they represent are similar in some way. For example, if you have a collection of images, you can represent each image as a vector. If two images are similar, their vectors will be close to each other in the vector space. In order to obtain a vector representation of an object, you need to apply a vectorization algorithm to the object. Usually, this algorithm is a neural network that converts the object into a fixed-size vector. The neural network is usually [trained](/articles/metric-learning-tips/) on a pairs or [triplets](/articles/triplet-loss/) of similar and dissimilar objects, so it learns to recognize a specific type of similarity. By using this property of vectors, you can explore your data in a number of ways; e.g. by searching for similar objects, clustering objects, and more. ## Vector Types Modern neural networks can output vectors in different shapes and sizes, and Qdrant supports most of them. Let's take a look at the most common types of vectors supported by Qdrant. ### Dense Vectors This is the most common type of vector. It is a simple list of numbers, it has a fixed length and each element of the list is a floating-point number. It looks like this: ```json // A piece of a real-world dense vector [ -0.013052909, 0.020387933, -0.007869, -0.11111383, -0.030188112, -0.0053388323, 0.0010654867, 0.072027855, -0.04167721, 0.014839341, -0.032948174, -0.062975034, -0.024837125, .... ] ``` The majority of neural networks create dense vectors, so you can use them with Qdrant without any additional processing. Although compatible with most embedding models out there, Qdrant has been tested with the following [verified embedding providers](/documentation/embeddings/). ### Sparse Vectors Sparse vectors are a special type of vectors. Mathematically, they are the same as dense vectors, but they contain many zeros so they are stored in a special format. Sparse vectors in Qdrant don't have a fixed length, as it is dynamically allocated during vector insertion. In order to define a sparse vector, you need to provide a list of non-zero elements and their indexes. ```json // A sparse vector with 4 non-zero elements { ""indexes"": [1, 3, 5, 7], ""values"": [0.1, 0.2, 0.3, 0.4] } ``` Sparse vectors in Qdrant are kept in special storage and indexed in a separate index, so their configuration is different from dense vectors. To create a collection with sparse vectors: ```http PUT /collections/{collection_name} { ""sparse_vectors"": { ""text"": { }, } } ``` ```bash curl -X PUT http://localhost:6333/collections/{collection_name} \ -H 'Content-Type: application/json' \ --data-raw '{ ""sparse_vectors"": { ""text"": { } } }' ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.create_collection( collection_name=""{collection_name}"", sparse_vectors_config={ ""text"": models.SparseVectorParams(), }, ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.createCollection(""{collection_name}"", { sparse_vectors: { text: { }, }, }); ``` ```rust use qdrant_client::qdrant::{ CreateCollectionBuilder, SparseVectorParamsBuilder, SparseVectorsConfigBuilder, }; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; let mut sparse_vectors_config = SparseVectorsConfigBuilder::default(); sparse_vectors_config.add_named_vector_params(""text"", SparseVectorParamsBuilder::default()); client .create_collection( CreateCollectionBuilder::new(""{collection_name}"") .sparse_vectors_config(sparse_vectors_config), ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.SparseVectorConfig; import io.qdrant.client.grpc.Collections.SparseVectorParams; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName(""{collection_name}"") .setSparseVectorsConfig( SparseVectorConfig.newBuilder() .putMap(""text"", SparseVectorParams.getDefaultInstance())) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.CreateCollectionAsync( collectionName: ""{collection_name}"", sparseVectorsConfig: (""text"", new SparseVectorParams()) ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.CreateCollection(context.Background(), &qdrant.CreateCollection{ CollectionName: ""{collection_name}"", SparseVectorsConfig: qdrant.NewSparseVectorsConfig( map[string]*qdrant.SparseVectorParams{ ""text"": {}, }), }) ``` Insert a point with a sparse vector into the created collection: ```http PUT /collections/{collection_name}/points { ""points"": [ { ""id"": 1, ""vector"": { ""text"": { ""indices"": [1, 3, 5, 7], ""values"": [0.1, 0.2, 0.3, 0.4] } } } ] } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.upsert( collection_name=""{collection_name}"", points=[ models.PointStruct( id=1, payload={}, # Add any additional payload if necessary vector={ ""text"": models.SparseVector( indices=[1, 3, 5, 7], values=[0.1, 0.2, 0.3, 0.4] ) }, ) ], ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.upsert(""{collection_name}"", { points: [ { id: 1, vector: { text: { indices: [1, 3, 5, 7], values: [0.1, 0.2, 0.3, 0.4] }, }, } }); ``` ```rust use qdrant_client::qdrant::{NamedVectors, PointStruct, UpsertPointsBuilder, Vector}; use qdrant_client::{Payload, Qdrant}; let client = Qdrant::from_url(""http://localhost:6334"").build()?; let points = vec![PointStruct::new( 1, NamedVectors::default().add_vector( ""text"", Vector::new_sparse(vec![1, 3, 5, 7], vec![0.1, 0.2, 0.3, 0.4]), ), Payload::new(), )]; client .upsert_points(UpsertPointsBuilder::new(""{collection_name}"", points)) .await?; ``` ```java import java.util.List; import java.util.Map; import static io.qdrant.client.PointIdFactory.id; import static io.qdrant.client.VectorFactory.vector; import static io.qdrant.client.VectorsFactory.namedVectors; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.PointStruct; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .upsertAsync( ""{collection_name}"", List.of( PointStruct.newBuilder() .setId(id(1)) .setVectors( namedVectors(Map.of( ""text"", vector(List.of(1.0f, 2.0f), List.of(6, 7)))) ) .build())) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.UpsertAsync( collectionName: ""{collection_name}"", points: new List < PointStruct > { new() { Id = 1, Vectors = new Dictionary < string, Vector > { [""text""] = ([0.1 f, 0.2 f, 0.3 f, 0.4 f], [1, 3, 5, 7]) } } } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Upsert(context.Background(), &qdrant.UpsertPoints{ CollectionName: ""{collection_name}"", Points: []*qdrant.PointStruct{ { Id: qdrant.NewIDNum(1), Vectors: qdrant.NewVectorsMap( map[string]*qdrant.Vector{ ""text"": qdrant.NewVectorSparse( []uint32{1, 3, 5, 7}, []float32{0.1, 0.2, 0.3, 0.4}), }), }, }, }) ``` Now you can run a search with sparse vectors: ```http POST /collections/{collection_name}/points/query { ""query"": { ""indices"": [1, 3, 5, 7], ""values"": [0.1, 0.2, 0.3, 0.4] }, ""using"": ""text"" } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") result = client.query_points( collection_name=""{collection_name}"", query_vector=models.SparseVector(indices=[1, 3, 5, 7], values=[0.1, 0.2, 0.3, 0.4]), using=""text"", ).points ``` ```rust use qdrant_client::qdrant::QueryPointsBuilder; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .query( QueryPointsBuilder::new(""{collection_name}"") .query(vec![(1, 0.2), (3, 0.1), (5, 0.9), (7, 0.7)]) .limit(10) .using(""text""), ) .await?; ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.query(""{collection_name}"", { query: { indices: [1, 3, 5, 7], values: [0.1, 0.2, 0.3, 0.4] }, using: ""text"", limit: 3, }); ``` ```java import java.util.List; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.QueryPoints; import static io.qdrant.client.QueryFactory.nearest; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client.queryAsync( QueryPoints.newBuilder() .setCollectionName(""{collection_name}"") .setUsing(""text"") .setQuery(nearest(List.of(0.1f, 0.2f, 0.3f, 0.4f), List.of(1, 3, 5, 7))) .setLimit(3) .build()) .get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient(""localhost"", 6334); await client.QueryAsync( collectionName: ""{collection_name}"", query: new (float, uint)[] {(0.1f, 1), (0.2f, 3), (0.3f, 5), (0.4f, 7)}, usingVector: ""text"", limit: 3 ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Query(context.Background(), &qdrant.QueryPoints{ CollectionName: ""{collection_name}"", Query: qdrant.NewQuerySparse( []uint32{1, 3, 5, 7}, []float32{0.1, 0.2, 0.3, 0.4}), Using: qdrant.PtrOf(""text""), }) ``` ### Multivectors **Available as of v1.10.0** Qdrant supports the storing of a variable amount of same-shaped dense vectors in a single point. This means that instead of a single dense vector, you can upload a matrix of dense vectors. The length of the matrix is fixed, but the number of vectors in the matrix can be different for each point. Multivectors look like this: ```json // A multivector of size 4 ""vector"": [ [-0.013, 0.020, -0.007, -0.111], [-0.030, -0.055, 0.001, 0.072], [-0.041, 0.014, -0.032, -0.062], .... ] ``` There are two scenarios where multivectors are useful: * **Multiple representation of the same object** - For example, you can store multiple embeddings for pictures of the same object, taken from different angles. This approach assumes that the payload is same for all vectors. * **Late interaction embeddings** - Some text embedding models can output multiple vectors for a single text. For example, a family of models such as ColBERT output a relatively small vector for each token in the text. In order to use multivectors, we need to specify a function that will be used to compare between matrices of vectors Currently, Qdrant supports `max_sim` function, which is defined as a sum of maximum similarities between each pair of vectors in the matrices. $$ score = \sum_{i=1}^{N} \max_{j=1}^{M} \text{Sim}(\text{vectorA}_i, \text{vectorB}_j) $$ Where $N$ is the number of vectors in the first matrix, $M$ is the number of vectors in the second matrix, and $\text{Sim}$ is a similarity function, for example, cosine similarity. To use multivectors, create a collection with the following configuration: ```http PUT collections/{collection_name} { ""vectors"": { ""size"": 128, ""distance"": ""Cosine"", ""multivector_config"": { ""comparator"": ""max_sim"" } } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.create_collection( collection_name=""{collection_name}"", vectors_config=models.VectorParams( size=128, distance=models.Distance.Cosine, multivector_config=models.MultiVectorConfig( comparator=models.MultiVectorComparator.MAX_SIM ), ), ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.createCollection(""{collection_name}"", { vectors: { size: 128, distance: ""Cosine"", multivector_config: { comparator: ""max_sim"" } }, }); ``` ```rust use qdrant_client::qdrant::{ CreateCollectionBuilder, Distance, VectorParamsBuilder, MultiVectorComparator, MultiVectorConfigBuilder, }; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .create_collection( CreateCollectionBuilder::new(""{collection_name}"") .vectors_config( VectorParamsBuilder::new(100, Distance::Cosine) .multivector_config( MultiVectorConfigBuilder::new(MultiVectorComparator::MaxSim) ), ), ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.MultiVectorComparator; import io.qdrant.client.grpc.Collections.MultiVectorConfig; import io.qdrant.client.grpc.Collections.VectorParams; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client.createCollectionAsync(""{collection_name}"", VectorParams.newBuilder().setSize(128) .setDistance(Distance.Cosine) .setMultivectorConfig(MultiVectorConfig.newBuilder() .setComparator(MultiVectorComparator.MaxSim) .build()) .build()).get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.CreateCollectionAsync( collectionName: ""{collection_name}"", vectorsConfig: new VectorParams { Size = 128, Distance = Distance.Cosine, MultivectorConfig = new() { Comparator = MultiVectorComparator.MaxSim } } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.CreateCollection(context.Background(), &qdrant.CreateCollection{ CollectionName: ""{collection_name}"", VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{ Size: 128, Distance: qdrant.Distance_Cosine, MultivectorConfig: &qdrant.MultiVectorConfig{ Comparator: qdrant.MultiVectorComparator_MaxSim, }, }), }) ``` To insert a point with multivector: ```http PUT collections/{collection_name}/points { ""points"": [ { ""id"": 1, ""vector"": [ [-0.013, 0.020, -0.007, -0.111, ...], [-0.030, -0.055, 0.001, 0.072, ...], [-0.041, 0.014, -0.032, -0.062, ...] ] } ] } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.upsert( collection_name=""{collection_name}"", points=[ models.PointStruct( id=1, vector=[ [-0.013, 0.020, -0.007, -0.111, ...], [-0.030, -0.055, 0.001, 0.072, ...], [-0.041, 0.014, -0.032, -0.062, ...] ], ) ], ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.upsert(""{collection_name}"", { points: [ { id: 1, vector: [ [-0.013, 0.020, -0.007, -0.111, ...], [-0.030, -0.055, 0.001, 0.072, ...], [-0.041, 0.014, -0.032, -0.062, ...] ], } ] }); ``` ```rust use qdrant_client::qdrant::{PointStruct, UpsertPointsBuilder, Vector}; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; let points = vec![ PointStruct::new( 1, Vector::new_multi(vec![ vec![-0.013, 0.020, -0.007, -0.111], vec![-0.030, -0.055, 0.001, 0.072], vec![-0.041, 0.014, -0.032, -0.062], ]), Payload::new() ) ]; client .upsert_points( UpsertPointsBuilder::new(""{collection_name}"", points) ).await?; ``` ```java import java.util.List; import static io.qdrant.client.PointIdFactory.id; import static io.qdrant.client.VectorsFactory.vectors; import static io.qdrant.client.VectorFactory.multiVector; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.PointStruct; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .upsertAsync( ""{collection_name}"", List.of( PointStruct.newBuilder() .setId(id(1)) .setVectors(vectors(multiVector(new float[][] { {-0.013f, 0.020f, -0.007f, -0.111f}, {-0.030f, -0.055f, 0.001f, 0.072f}, {-0.041f, 0.014f, -0.032f, -0.062f} }))) .build() )) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.UpsertAsync( collectionName: ""{collection_name}"", points: new List { new() { Id = 1, Vectors = new float[][] { [-0.013f, 0.020f, -0.007f, -0.111f], [-0.030f, -0.05f, 0.001f, 0.072f], [-0.041f, 0.014f, -0.032f, -0.062f ], }, }, } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Upsert(context.Background(), &qdrant.UpsertPoints{ CollectionName: ""{collection_name}"", Points: []*qdrant.PointStruct{ { Id: qdrant.NewIDNum(1), Vectors: qdrant.NewVectorsMulti( [][]float32{ {-0.013, 0.020, -0.007, -0.111}, {-0.030, -0.055, 0.001, 0.072}, {-0.041, 0.014, -0.032, -0.062}}), }, }, }) ``` To search with multivector (available in `query` API): ```http POST collections/{collection_name}/points/query { ""query"": [ [-0.013, 0.020, -0.007, -0.111, ...], [-0.030, -0.055, 0.001, 0.072, ...], [-0.041, 0.014, -0.032, -0.062, ...] ] } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.query_points( collection_name=""{collection_name}"", query=[ [-0.013, 0.020, -0.007, -0.111, ...], [-0.030, -0.055, 0.001, 0.072, ...], [-0.041, 0.014, -0.032, -0.062, ...] ], ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.query(""{collection_name}"", { ""query"": [ [-0.013, 0.020, -0.007, -0.111, ...], [-0.030, -0.055, 0.001, 0.072, ...], [-0.041, 0.014, -0.032, -0.062, ...] ] }); ``` ```rust use qdrant_client::Qdrant; use qdrant_client::qdrant::{ QueryPointsBuilder, VectorInput }; let client = Qdrant::from_url(""http://localhost:6334"").build()?; let res = client.query( QueryPointsBuilder::new(""{collection_name}"") .query(VectorInput::new_multi( vec![ vec![-0.013, 0.020, -0.007, -0.111, ...], vec![-0.030, -0.055, 0.001, 0.072, ...], vec![-0.041, 0.014, -0.032, -0.062, ...], ] )) ).await?; ``` ```java import static io.qdrant.client.QueryFactory.nearest; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.QueryPoints; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client.queryAsync(QueryPoints.newBuilder() .setCollectionName(""{collection_name}"") .setQuery(nearest(new float[][] { {-0.013f, 0.020f, -0.007f, -0.111f}, {-0.030f, -0.055f, 0.001f, 0.072f}, {-0.041f, 0.014f, -0.032f, -0.062f} })) .build()).get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient(""localhost"", 6334); await client.QueryAsync( collectionName: ""{collection_name}"", query: new float[][] { [-0.013f, 0.020f, -0.007f, -0.111f], [-0.030f, -0.055f, 0.001 , 0.072f], [-0.041f, 0.014f, -0.032f, -0.062f], } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Query(context.Background(), &qdrant.QueryPoints{ CollectionName: ""{collection_name}"", Query: qdrant.NewQueryMulti( [][]float32{ {-0.013, 0.020, -0.007, -0.111}, {-0.030, -0.055, 0.001, 0.072}, {-0.041, 0.014, -0.032, -0.062}, }), }) ``` ## Named Vectors Aside from storing multiple vectors of the same shape in a single point, Qdrant supports storing multiple different vectors in a single point. Each of these vectors should have a unique configuration and should be addressed by a unique name. Also, each vector can be of a different type and be generated by a different embedding model. To create a collection with named vectors, you need to specify a configuration for each vector: ```http PUT /collections/{collection_name} { ""vectors"": { ""image"": { ""size"": 4, ""distance"": ""Dot"" }, ""text"": { ""size"": 8, ""distance"": ""Cosine"" } } } ``` ```bash curl -X PUT http://localhost:6333/collections/{collection_name} \ -H 'Content-Type: application/json' \ --data-raw '{ ""vectors"": { ""image"": { ""size"": 4, ""distance"": ""Dot"" }, ""text"": { ""size"": 8, ""distance"": ""Cosine"" } } }' ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.create_collection( collection_name=""{collection_name}"", vectors_config={ ""image"": models.VectorParams(size=4, distance=models.Distance.DOT), ""text"": models.VectorParams(size=8, distance=models.Distance.COSINE), }, ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.createCollection(""{collection_name}"", { vectors: { image: { size: 4, distance: ""Dot"" }, text: { size: 8, distance: ""Cosine"" }, }, }); ``` ```rust use qdrant_client::qdrant::{ CreateCollectionBuilder, Distance, VectorParamsBuilder, VectorsConfigBuilder, }; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; let mut vector_config = VectorsConfigBuilder::default(); vector_config.add_named_vector_params(""text"", VectorParamsBuilder::new(4, Distance::Dot)); vector_config.add_named_vector_params(""image"", VectorParamsBuilder::new(8, Distance::Cosine)); client .create_collection( CreateCollectionBuilder::new(""{collection_name}"").vectors_config(vector_config), ) .await?; ``` ```java import java.util.Map; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.VectorParams; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .createCollectionAsync( ""{collection_name}"", Map.of( ""image"", VectorParams.newBuilder().setSize(4).setDistance(Distance.Dot).build(), ""text"", VectorParams.newBuilder().setSize(8).setDistance(Distance.Cosine).build())) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.CreateCollectionAsync( collectionName: ""{collection_name}"", vectorsConfig: new VectorParamsMap { Map = { [""image""] = new VectorParams { Size = 4, Distance = Distance.Dot }, [""text""] = new VectorParams { Size = 8, Distance = Distance.Cosine }, } } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.CreateCollection(context.Background(), &qdrant.CreateCollection{ CollectionName: ""{collection_name}"", VectorsConfig: qdrant.NewVectorsConfigMap( map[string]*qdrant.VectorParams{ ""image"": { Size: 4, Distance: qdrant.Distance_Dot, }, ""text"": { Size: 8, Distance: qdrant.Distance_Cosine, }, }), }) ``` ## Datatypes Newest versions of embeddings models generate vectors with very large dimentionalities. With OpenAI's `text-embedding-3-large` embedding model, the dimensionality can go up to 3072. The amount of memory required to store such vectors grows linearly with the dimensionality, so it is important to choose the right datatype for the vectors. The choice between datatypes is a trade-off between memory consumption and precision of vectors. Qdrant supports a number of datatypes for both dense and sparse vectors: **Float32** This is the default datatype for vectors in Qdrant. It is a 32-bit (4 bytes) floating-point number. The standard OpenAI embedding of 1536 dimensionality will require 6KB of memory to store in Float32. You don't need to specify the datatype for vectors in Qdrant, as it is set to Float32 by default. **Float16** This is a 16-bit (2 bytes) floating-point number. It is also known as half-precision float. Intuitively, it looks like this: ```text float32 -> float16 delta (float32 - float16).abs 0.79701585 -> 0.796875 delta 0.00014084578 0.7850789 -> 0.78515625 delta 0.00007736683 0.7775044 -> 0.77734375 delta 0.00016063452 0.85776305 -> 0.85791016 delta 0.00014710426 0.6616839 -> 0.6616211 delta 0.000062823296 ``` The main advantage of Float16 is that it requires half the memory of Float32, while having virtually no impact on the quality of vector search. To use Float16, you need to specify the datatype for vectors in the collection configuration: ```http PUT /collections/{collection_name} { ""vectors"": { ""size"": 128, ""distance"": ""Cosine"", ""datatype"": ""float16"" // <-- For dense vectors }, ""sparse_vectors"": { ""text"": { ""index"": { ""datatype"": ""float16"" // <-- And for sparse vectors } } } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.create_collection( collection_name=""{collection_name}"", vectors_config=models.VectorParams( size=128, distance=models.Distance.COSINE, datatype=models.Datatype.FLOAT16 ), sparse_vectors_config={ ""text"": models.SparseVectorParams( index=models.SparseIndexConfig(datatype=models.Datatype.FLOAT16) ), }, ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.createCollection(""{collection_name}"", { vectors: { size: 128, distance: ""Cosine"", datatype: ""float16"" }, sparse_vectors: { text: { index: { datatype: ""float16"" } } } }); ``` ```rust use qdrant_client::qdrant::{ CreateCollectionBuilder, Datatype, Distance, SparseIndexConfigBuilder, SparseVectorParamsBuilder, SparseVectorsConfigBuilder, VectorParamsBuilder }; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; let mut sparse_vector_config = SparseVectorsConfigBuilder::default(); sparse_vector_config.add_named_vector_params( ""text"", SparseVectorParamsBuilder::default() .index(SparseIndexConfigBuilder::default().datatype(Datatype::Float32)), ); let create_collection = CreateCollectionBuilder::new(""{collection_name}"") .sparse_vectors_config(sparse_vector_config) .vectors_config( VectorParamsBuilder::new(128, Distance::Cosine).datatype(Datatype::Float16), ); client.create_collection(create_collection).await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Datatype; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.SparseIndexConfig; import io.qdrant.client.grpc.Collections.SparseVectorConfig; import io.qdrant.client.grpc.Collections.SparseVectorParams; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName(""{collection_name}"") .setVectorsConfig(VectorsConfig.newBuilder() .setParams(VectorParams.newBuilder() .setSize(128) .setDistance(Distance.Cosine) .setDatatype(Datatype.Float16) .build()) .build()) .setSparseVectorsConfig( SparseVectorConfig.newBuilder() .putMap(""text"", SparseVectorParams.newBuilder() .setIndex(SparseIndexConfig.newBuilder() .setDatatype(Datatype.Float16) .build()) .build())) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.CreateCollectionAsync( collectionName: ""{collection_name}"", vectorsConfig: new VectorParams { Size = 128, Distance = Distance.Cosine, Datatype = Datatype.Float16 }, sparseVectorsConfig: ( ""text"", new SparseVectorParams { Index = new SparseIndexConfig { Datatype = Datatype.Float16 } } ) ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.CreateCollection(context.Background(), &qdrant.CreateCollection{ CollectionName: ""{collection_name}"", VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{ Size: 128, Distance: qdrant.Distance_Cosine, Datatype: qdrant.Datatype_Float16.Enum(), }), SparseVectorsConfig: qdrant.NewSparseVectorsConfig( map[string]*qdrant.SparseVectorParams{ ""text"": { Index: &qdrant.SparseIndexConfig{ Datatype: qdrant.Datatype_Float16.Enum(), }, }, }), }) ``` **Uint8** Another step towards memory optimization is to use the Uint8 datatype for vectors. Unlike Float16, Uint8 is not a floating-point number, but an integer number in the range from 0 to 255. Not all embeddings models generate vectors in the range from 0 to 255, so you need to be careful when using Uint8 datatype. In order to convert a number from float range to Uint8 range, you need to apply a process called quantization. Some embedding providers may provide embeddings in a pre-quantized format. One of the most notable examples is the [Cohere int8 & binary embeddings](https://cohere.com/blog/int8-binary-embeddings). For other embeddings, you will need to apply quantization yourself. ```http PUT /collections/{collection_name} { ""vectors"": { ""size"": 128, ""distance"": ""Cosine"", ""datatype"": ""uint8"" // <-- For dense vectors }, ""sparse_vectors"": { ""text"": { ""index"": { ""datatype"": ""uint8"" // <-- For sparse vectors } } } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.create_collection( collection_name=""{collection_name}"", vectors_config=models.VectorParams( size=128, distance=models.Distance.COSINE, datatype=models.Datatype.UINT8 ), sparse_vectors_config={ ""text"": models.SparseVectorParams( index=models.SparseIndexConfig(datatype=models.Datatype.UINT8) ), }, ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.createCollection(""{collection_name}"", { vectors: { size: 128, distance: ""Cosine"", datatype: ""uint8"" }, sparse_vectors: { text: { index: { datatype: ""uint8"" } } } }); ``` ```rust use qdrant_client::qdrant::{ CreateCollectionBuilder, Datatype, Distance, SparseIndexConfigBuilder, SparseVectorParamsBuilder, SparseVectorsConfigBuilder, VectorParamsBuilder, }; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; let mut sparse_vector_config = SparseVectorsConfigBuilder::default(); sparse_vector_config.add_named_vector_params( ""text"", SparseVectorParamsBuilder::default() .index(SparseIndexConfigBuilder::default().datatype(Datatype::Uint8)), ); let create_collection = CreateCollectionBuilder::new(""{collection_name}"") .sparse_vectors_config(sparse_vector_config) .vectors_config( VectorParamsBuilder::new(128, Distance::Cosine) .datatype(Datatype::Uint8) ); client.create_collection(create_collection).await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Datatype; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.SparseIndexConfig; import io.qdrant.client.grpc.Collections.SparseVectorConfig; import io.qdrant.client.grpc.Collections.SparseVectorParams; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName(""{collection_name}"") .setVectorsConfig(VectorsConfig.newBuilder() .setParams(VectorParams.newBuilder() .setSize(128) .setDistance(Distance.Cosine) .setDatatype(Datatype.Uint8) .build()) .build()) .setSparseVectorsConfig( SparseVectorConfig.newBuilder() .putMap(""text"", SparseVectorParams.newBuilder() .setIndex(SparseIndexConfig.newBuilder() .setDatatype(Datatype.Uint8) .build()) .build())) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.CreateCollectionAsync( collectionName: ""{collection_name}"", vectorsConfig: new VectorParams { Size = 128, Distance = Distance.Cosine, Datatype = Datatype.Uint8 }, sparseVectorsConfig: ( ""text"", new SparseVectorParams { Index = new SparseIndexConfig { Datatype = Datatype.Uint8 } } ) ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.CreateCollection(context.Background(), &qdrant.CreateCollection{ CollectionName: ""{collection_name}"", VectorsConfig: qdrant.NewVectorsConfig(&qdrant.VectorParams{ Size: 128, Distance: qdrant.Distance_Cosine, Datatype: qdrant.Datatype_Uint8.Enum(), }), SparseVectorsConfig: qdrant.NewSparseVectorsConfig( map[string]*qdrant.SparseVectorParams{ ""text"": { Index: &qdrant.SparseIndexConfig{ Datatype: qdrant.Datatype_Uint8.Enum(), }, }, }), }) ``` ## Quantization Apart from changing the datatype of the original vectors, Qdrant can create quantized representations of vectors alongside the original ones. This quantized representation can be used to quickly select candidates for rescoring with the original vectors or even used directly for search. Quantization is applied in the background, during the optimization process. More information about the quantization process can be found in the [Quantization](/documentation/guides/quantization/) section. ## Vector Storage Depending on the requirements of the application, Qdrant can use one of the data storage options. Keep in mind that you will have to tradeoff between search speed and the size of RAM used. More information about the storage options can be found in the [Storage](/documentation/concepts/storage/#vector-storage) section. ",documentation/concepts/vectors.md "--- title: Snapshots weight: 110 aliases: - ../snapshots --- # Snapshots *Available as of v0.8.4* Snapshots are `tar` archive files that contain data and configuration of a specific collection on a specific node at a specific time. In a distributed setup, when you have multiple nodes in your cluster, you must create snapshots for each node separately when dealing with a single collection. This feature can be used to archive data or easily replicate an existing deployment. For disaster recovery, Qdrant Cloud users may prefer to use [Backups](/documentation/cloud/backups/) instead, which are physical disk-level copies of your data. For a step-by-step guide on how to use snapshots, see our [tutorial](/documentation/tutorials/create-snapshot/). ## Create snapshot To create a new snapshot for an existing collection: ```http POST /collections/{collection_name}/snapshots ``` ```python from qdrant_client import QdrantClient client = QdrantClient(url=""http://localhost:6333"") client.create_snapshot(collection_name=""{collection_name}"") ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.createSnapshot(""{collection_name}""); ``` ```rust use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client.create_snapshot(""{collection_name}"").await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client.createSnapshotAsync(""{collection_name}"").get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient(""localhost"", 6334); await client.CreateSnapshotAsync(""{collection_name}""); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.CreateSnapshot(context.Background(), ""{collection_name}"") ``` This is a synchronous operation for which a `tar` archive file will be generated into the `snapshot_path`. ### Delete snapshot *Available as of v1.0.0* ```http DELETE /collections/{collection_name}/snapshots/{snapshot_name} ``` ```python from qdrant_client import QdrantClient client = QdrantClient(url=""http://localhost:6333"") client.delete_snapshot( collection_name=""{collection_name}"", snapshot_name=""{snapshot_name}"" ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.deleteSnapshot(""{collection_name}"", ""{snapshot_name}""); ``` ```rust use qdrant_client::qdrant::DeleteSnapshotRequestBuilder; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .delete_snapshot(DeleteSnapshotRequestBuilder::new( ""{collection_name}"", ""{snapshot_name}"", )) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client.deleteSnapshotAsync(""{collection_name}"", ""{snapshot_name}"").get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient(""localhost"", 6334); await client.DeleteSnapshotAsync(collectionName: ""{collection_name}"", snapshotName: ""{snapshot_name}""); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.DeleteSnapshot(context.Background(), ""{collection_name}"", ""{snapshot_name}"") ``` ## List snapshot List of snapshots for a collection: ```http GET /collections/{collection_name}/snapshots ``` ```python from qdrant_client import QdrantClient client = QdrantClient(url=""http://localhost:6333"") client.list_snapshots(collection_name=""{collection_name}"") ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.listSnapshots(""{collection_name}""); ``` ```rust use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client.list_snapshots(""{collection_name}"").await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client.listSnapshotAsync(""{collection_name}"").get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient(""localhost"", 6334); await client.ListSnapshotsAsync(""{collection_name}""); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.ListSnapshots(context.Background(), ""{collection_name}"") ``` ## Retrieve snapshot To download a specified snapshot from a collection as a file: ```http GET /collections/{collection_name}/snapshots/{snapshot_name} ``` ```shell curl 'http://{qdrant-url}:6333/collections/{collection_name}/snapshots/snapshot-2022-10-10.snapshot' \ -H 'api-key: ********' \ --output 'filename.snapshot' ``` ## Restore snapshot Snapshots can be restored in three possible ways: 1. [Recovering from a URL or local file](#recover-from-a-url-or-local-file) (useful for restoring a snapshot file that is on a remote server or already stored on the node) 3. [Recovering from an uploaded file](#recover-from-an-uploaded-file) (useful for migrating data to a new cluster) 3. [Recovering during start-up](#recover-during-start-up) (useful when running a self-hosted single-node Qdrant instance) Regardless of the method used, Qdrant will extract the shard data from the snapshot and properly register shards in the cluster. If there are other active replicas of the recovered shards in the cluster, Qdrant will replicate them to the newly recovered node by default to maintain data consistency. ### Recover from a URL or local file *Available as of v0.11.3* This method of recovery requires the snapshot file to be downloadable from a URL or exist as a local file on the node (like if you [created the snapshot](#create-snapshot) on this node previously). If instead you need to upload a snapshot file, see the next section. To recover from a URL or local file use the [snapshot recovery endpoint](https://api.qdrant.tech/master/api-reference/snapshots/recover-from-snapshot). This endpoint accepts either a URL like `https://example.com` or a [file URI](https://en.wikipedia.org/wiki/File_URI_scheme) like `file:///tmp/snapshot-2022-10-10.snapshot`. If the target collection does not exist, it will be created. ```http PUT /collections/{collection_name}/snapshots/recover { ""location"": ""http://qdrant-node-1:6333/collections/{collection_name}/snapshots/snapshot-2022-10-10.shapshot"" } ``` ```python from qdrant_client import QdrantClient client = QdrantClient(url=""http://qdrant-node-2:6333"") client.recover_snapshot( ""{collection_name}"", ""http://qdrant-node-1:6333/collections/collection_name/snapshots/snapshot-2022-10-10.shapshot"", ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.recoverSnapshot(""{collection_name}"", { location: ""http://qdrant-node-1:6333/collections/{collection_name}/snapshots/snapshot-2022-10-10.shapshot"", }); ``` ### Recover from an uploaded file The snapshot file can also be uploaded as a file and restored using the [recover from uploaded snapshot](https://api.qdrant.tech/master/api-reference/snapshots/recover-from-uploaded-snapshot). This endpoint accepts the raw snapshot data in the request body. If the target collection does not exist, it will be created. ```bash curl -X POST 'http://{qdrant-url}:6333/collections/{collection_name}/snapshots/upload?priority=snapshot' \ -H 'api-key: ********' \ -H 'Content-Type:multipart/form-data' \ -F 'snapshot=@/path/to/snapshot-2022-10-10.shapshot' ``` This method is typically used to migrate data from one cluster to another, so we recommend setting the [priority](#snapshot-priority) to ""snapshot"" for that use-case. ### Recover during start-up If you have a single-node deployment, you can recover any collection at start-up and it will be immediately available. Restoring snapshots is done through the Qdrant CLI at start-up time via the `--snapshot` argument which accepts a list of pairs such as `:` For example: ```bash ./qdrant --snapshot /snapshots/test-collection-archive.snapshot:test-collection --snapshot /snapshots/test-collection-archive.snapshot:test-copy-collection ``` The target collection **must** be absent otherwise the program will exit with an error. If you wish instead to overwrite an existing collection, use the `--force_snapshot` flag with caution. ### Snapshot priority When recovering a snapshot to a non-empty node, there may be conflicts between the snapshot data and the existing data. The ""priority"" setting controls how Qdrant handles these conflicts. The priority setting is important because different priorities can give very different end results. The default priority may not be best for all situations. The available snapshot recovery priorities are: - `replica`: _(default)_ prefer existing data over the snapshot. - `snapshot`: prefer snapshot data over existing data. - `no_sync`: restore snapshot without any additional synchronization. To recover a new collection from a snapshot, you need to set the priority to `snapshot`. With `snapshot` priority, all data from the snapshot will be recovered onto the cluster. With `replica` priority _(default)_, you'd end up with an empty collection because the collection on the cluster did not contain any points and that source was preferred. `no_sync` is for specialized use cases and is not commonly used. It allows managing shards and transferring shards between clusters manually without any additional synchronization. Using it incorrectly will leave your cluster in a broken state. To recover from a URL, you specify an additional parameter in the request body: ```http PUT /collections/{collection_name}/snapshots/recover { ""location"": ""http://qdrant-node-1:6333/collections/{collection_name}/snapshots/snapshot-2022-10-10.shapshot"", ""priority"": ""snapshot"" } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://qdrant-node-2:6333"") client.recover_snapshot( ""{collection_name}"", ""http://qdrant-node-1:6333/collections/{collection_name}/snapshots/snapshot-2022-10-10.shapshot"", priority=models.SnapshotPriority.SNAPSHOT, ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.recoverSnapshot(""{collection_name}"", { location: ""http://qdrant-node-1:6333/collections/{collection_name}/snapshots/snapshot-2022-10-10.shapshot"", priority: ""snapshot"" }); ``` ```bash curl -X POST 'http://qdrant-node-1:6333/collections/{collection_name}/snapshots/upload?priority=snapshot' \ -H 'api-key: ********' \ -H 'Content-Type:multipart/form-data' \ -F 'snapshot=@/path/to/snapshot-2022-10-10.shapshot' ``` ## Snapshots for the whole storage *Available as of v0.8.5* Sometimes it might be handy to create snapshot not just for a single collection, but for the whole storage, including collection aliases. Qdrant provides a dedicated API for that as well. It is similar to collection-level snapshots, but does not require `collection_name`. ### Create full storage snapshot ```http POST /snapshots ``` ```python from qdrant_client import QdrantClient client = QdrantClient(url=""http://localhost:6333"") client.create_full_snapshot() ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.createFullSnapshot(); ``` ```rust use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client.create_full_snapshot().await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client.createFullSnapshotAsync().get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient(""localhost"", 6334); await client.CreateFullSnapshotAsync(); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.CreateFullSnapshot(context.Background()) ``` ### Delete full storage snapshot *Available as of v1.0.0* ```http DELETE /snapshots/{snapshot_name} ``` ```python from qdrant_client import QdrantClient client = QdrantClient(url=""http://localhost:6333"") client.delete_full_snapshot(snapshot_name=""{snapshot_name}"") ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.deleteFullSnapshot(""{snapshot_name}""); ``` ```rust use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client.delete_full_snapshot(""{snapshot_name}"").await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client.deleteFullSnapshotAsync(""{snapshot_name}"").get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient(""localhost"", 6334); await client.DeleteFullSnapshotAsync(""{snapshot_name}""); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.DeleteFullSnapshot(context.Background(), ""{snapshot_name}"") ``` ### List full storage snapshots ```http GET /snapshots ``` ```python from qdrant_client import QdrantClient client = QdrantClient(""localhost"", port=6333) client.list_full_snapshots() ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.listFullSnapshots(); ``` ```rust use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client.list_full_snapshots().await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client.listFullSnapshotAsync().get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient(""localhost"", 6334); await client.ListFullSnapshotsAsync(); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.ListFullSnapshots(context.Background()) ``` ### Download full storage snapshot ```http GET /snapshots/{snapshot_name} ``` ## Restore full storage snapshot Restoring snapshots can only be done through the Qdrant CLI at startup time. For example: ```bash ./qdrant --storage-snapshot /snapshots/full-snapshot-2022-07-18-11-20-51.snapshot ``` ## Storage Created, uploaded and recovered snapshots are stored as `.snapshot` files. By default, they're stored on the [local file system](#local-file-system). You may also configure to use an [S3 storage](#s3) service for them. ### Local file system By default, snapshots are stored at `./snapshots` or at `/qdrant/snapshots` when using our Docker image. The target directory can be controlled through the [configuration](../../guides/configuration/): ```yaml storage: # Specify where you want to store snapshots. snapshots_path: ./snapshots ``` Alternatively you may use the environment variable `QDRANT__STORAGE__SNAPSHOTS_PATH=./snapshots`. *Available as of v1.3.0* While a snapshot is being created, temporary files are placed in the configured storage directory by default. In case of limited capacity or a slow network attached disk, you can specify a separate location for temporary files: ```yaml storage: # Where to store temporary files temp_path: /tmp ``` ### S3 *Available as of v1.10.0* Rather than storing snapshots on the local file system, you may also configure to store snapshots in an S3-compatible storage service. To enable this, you must configure it in the [configuration](../../guides/configuration/) file. For example, to configure for AWS S3: ```yaml storage: snapshots_config: # Use 's3' to store snapshots on S3 snapshots_storage: s3 s3_config: # Bucket name bucket: your_bucket_here # Bucket region (e.g. eu-central-1) region: your_bucket_region_here # Storage access key # Can be specified either here or in the `QDRANT__STORAGE__SNAPSHOTS_CONFIG__S3_CONFIG__ACCESS_KEY` environment variable. access_key: your_access_key_here # Storage secret key # Can be specified either here or in the `QDRANT__STORAGE__SNAPSHOTS_CONFIG__S3_CONFIG__SECRET_KEY` environment variable. secret_key: your_secret_key_here # S3-Compatible Storage URL # Can be specified either here or in the `QDRANT__STORAGE__SNAPSHOTS_CONFIG__S3_CONFIG__ENDPOINT_URL` environment variable. endpoint_url: your_url_here ``` ",documentation/concepts/snapshots.md "--- title: Hybrid Queries #required weight: 57 # This is the order of the page in the sidebar. The lower the number, the higher the page will be in the sidebar. aliases: - ../hybrid-queries hideInSidebar: false # Optional. If true, the page will not be shown in the sidebar. It can be used in regular documentation pages and in documentation section pages (_index.md). --- # Hybrid and Multi-Stage Queries *Available as of v1.10.0* With the introduction of [many named vectors per point](../vectors/#named-vectors), there are use-cases when the best search is obtained by combining multiple queries, or by performing the search in more than one stage. Qdrant has a flexible and universal interface to make this possible, called `Query API` ([API reference](https://api.qdrant.tech/api-reference/search/query-points)). The main component for making the combinations of queries possible is the `prefetch` parameter, which enables making sub-requests. Specifically, whenever a query has at least one prefetch, Qdrant will: 1. Perform the prefetch query (or queries), 2. Apply the main query over the results of its prefetch(es). Additionally, prefetches can have prefetches themselves, so you can have nested prefetches. ## Hybrid Search One of the most common problems when you have different representations of the same data is to combine the queried points for each representation into a single result. {{< figure src=""/docs/fusion-idea.png"" caption=""Fusing results from multiple queries"" width=""80%"" >}} For example, in text search, it is often useful to combine dense and sparse vectors get the best of semantics, plus the best of matching specific words. Qdrant currently has two ways of combining the results from different queries: - `rrf` - Reciprocal Rank Fusion Considers the positions of results within each query, and boosts the ones that appear closer to the top in multiple of them. - `dbsf` - Distribution-Based Score Fusion *(available as of v1.11.0)* Normalizes the scores of the points in each query, using the mean +/- the 3rd standard deviation as limits, and then sums the scores of the same point across different queries. Here is an example of Reciprocal Rank Fusion for a query containing two prefetches against different named vectors configured to respectively hold sparse and dense vectors. ```http POST /collections/{collection_name}/points/query { ""prefetch"": [ { ""query"": { ""indices"": [1, 42], // <┐ ""values"": [0.22, 0.8] // <┴─sparse vector }, ""using"": ""sparse"", ""limit"": 20 }, { ""query"": [0.01, 0.45, 0.67, ...], // <-- dense vector ""using"": ""dense"", ""limit"": 20 } ], ""query"": { ""fusion"": ""rrf"" }, // <--- reciprocal rank fusion ""limit"": 10 } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.query_points( collection_name=""{collection_name}"", prefetch=[ models.Prefetch( query=models.SparseVector(indices=[1, 42], values=[0.22, 0.8]), using=""sparse"", limit=20, ), models.Prefetch( query=[0.01, 0.45, 0.67, ...], # <-- dense vector using=""dense"", limit=20, ), ], query=models.FusionQuery(fusion=models.Fusion.RRF), ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.query(""{collection_name}"", { prefetch: [ { query: { values: [0.22, 0.8], indices: [1, 42], }, using: 'sparse', limit: 20, }, { query: [0.01, 0.45, 0.67], using: 'dense', limit: 20, }, ], query: { fusion: 'rrf', }, }); ``` ```rust use qdrant_client::Qdrant; use qdrant_client::qdrant::{Fusion, PrefetchQueryBuilder, Query, QueryPointsBuilder}; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client.query( QueryPointsBuilder::new(""{collection_name}"") .add_prefetch(PrefetchQueryBuilder::default() .query(Query::new_nearest([(1, 0.22), (42, 0.8)].as_slice())) .using(""sparse"") .limit(20u64) ) .add_prefetch(PrefetchQueryBuilder::default() .query(Query::new_nearest(vec![0.01, 0.45, 0.67])) .using(""dense"") .limit(20u64) ) .query(Query::new_fusion(Fusion::Rrf)) ).await?; ``` ```java import static io.qdrant.client.QueryFactory.nearest; import java.util.List; import static io.qdrant.client.QueryFactory.fusion; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.Fusion; import io.qdrant.client.grpc.Points.PrefetchQuery; import io.qdrant.client.grpc.Points.QueryPoints; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client.queryAsync( QueryPoints.newBuilder() .setCollectionName(""{collection_name}"") .addPrefetch(PrefetchQuery.newBuilder() .setQuery(nearest(List.of(0.22f, 0.8f), List.of(1, 42))) .setUsing(""sparse"") .setLimit(20) .build()) .addPrefetch(PrefetchQuery.newBuilder() .setQuery(nearest(List.of(0.01f, 0.45f, 0.67f))) .setUsing(""dense"") .setLimit(20) .build()) .setQuery(fusion(Fusion.RRF)) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.QueryAsync( collectionName: ""{collection_name}"", prefetch: new List < PrefetchQuery > { new() { Query = new(float, uint)[] { (0.22f, 1), (0.8f, 42), }, Using = ""sparse"", Limit = 20 }, new() { Query = new float[] { 0.01f, 0.45f, 0.67f }, Using = ""dense"", Limit = 20 } }, query: Fusion.Rrf ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Query(context.Background(), &qdrant.QueryPoints{ CollectionName: ""{collection_name}"", Prefetch: []*qdrant.PrefetchQuery{ { Query: qdrant.NewQuerySparse([]uint32{1, 42}, []float32{0.22, 0.8}), Using: qdrant.PtrOf(""sparse""), }, { Query: qdrant.NewQueryDense([]float32{0.01, 0.45, 0.67}), Using: qdrant.PtrOf(""dense""), }, }, Query: qdrant.NewQueryFusion(qdrant.Fusion_RRF), }) ``` ## Multi-stage queries In many cases, the usage of a larger vector representation gives more accurate search results, but it is also more expensive to compute. Splitting the search into two stages is a known technique: * First, use a smaller and cheaper representation to get a large list of candidates. * Then, re-score the candidates using the larger and more accurate representation. There are a few ways to build search architectures around this idea: * The quantized vectors as a first stage, and the full-precision vectors as a second stage. * Leverage Matryoshka Representation Learning (MRL) to generate candidate vectors with a shorter vector, and then refine them with a longer one. * Use regular dense vectors to pre-fetch the candidates, and then re-score them with a multi-vector model like ColBERT. To get the best of all worlds, Qdrant has a convenient interface to perform the queries in stages, such that the coarse results are fetched first, and then they are refined later with larger vectors. ### Re-scoring examples Fetch 1000 results using a shorter MRL byte vector, then re-score them using the full vector and get the top 10. ```http POST /collections/{collection_name}/points/query { ""prefetch"": { ""query"": [1, 23, 45, 67], // <------------- small byte vector ""using"": ""mrl_byte"" ""limit"": 1000 }, ""query"": [0.01, 0.299, 0.45, 0.67, ...], // <-- full vector ""using"": ""full"", ""limit"": 10 } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.query_points( collection_name=""{collection_name}"", prefetch=models.Prefetch( query=[1, 23, 45, 67], # <------------- small byte vector using=""mrl_byte"", limit=1000, ), query=[0.01, 0.299, 0.45, 0.67, ...], # <-- full vector using=""full"", limit=10, ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.query(""{collection_name}"", { prefetch: { query: [1, 23, 45, 67], // <------------- small byte vector using: 'mrl_byte', limit: 1000, }, query: [0.01, 0.299, 0.45, 0.67, ...], // <-- full vector, using: 'full', limit: 10, }); ``` ```rust use qdrant_client::Qdrant; use qdrant_client::qdrant::{PrefetchQueryBuilder, Query, QueryPointsBuilder}; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client.query( QueryPointsBuilder::new(""{collection_name}"") .add_prefetch(PrefetchQueryBuilder::default() .query(Query::new_nearest(vec![1.0, 23.0, 45.0, 67.0])) .using(""mlr_byte"") .limit(1000u64) ) .query(Query::new_nearest(vec![0.01, 0.299, 0.45, 0.67])) .using(""full"") .limit(10u64) ).await?; ``` ```java import static io.qdrant.client.QueryFactory.nearest; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.PrefetchQuery; import io.qdrant.client.grpc.Points.QueryPoints; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .queryAsync( QueryPoints.newBuilder() .setCollectionName(""{collection_name}"") .addPrefetch( PrefetchQuery.newBuilder() .setQuery(nearest(1, 23, 45, 67)) // <------------- small byte vector .setLimit(1000) .setUsing(""mrl_byte"") .build()) .setQuery(nearest(0.01f, 0.299f, 0.45f, 0.67f)) // <-- full vector .setUsing(""full"") .setLimit(10) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.QueryAsync( collectionName: ""{collection_name}"", prefetch: new List { new() { Query = new float[] { 1,23, 45, 67 }, // <------------- small byte vector Using = ""mrl_byte"", Limit = 1000 } }, query: new float[] { 0.01f, 0.299f, 0.45f, 0.67f }, // <-- full vector usingVector: ""full"", limit: 10 ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Query(context.Background(), &qdrant.QueryPoints{ CollectionName: ""{collection_name}"", Prefetch: []*qdrant.PrefetchQuery{ { Query: qdrant.NewQueryDense([]float32{1, 23, 45, 67}), Using: qdrant.PtrOf(""mrl_byte""), Limit: qdrant.PtrOf(uint64(1000)), }, }, Query: qdrant.NewQueryDense([]float32{0.01, 0.299, 0.45, 0.67}), Using: qdrant.PtrOf(""full""), }) ``` Fetch 100 results using the default vector, then re-score them using a multi-vector to get the top 10. ```http POST /collections/{collection_name}/points/query { ""prefetch"": { ""query"": [0.01, 0.45, 0.67, ...], // <-- dense vector ""limit"": 100 }, ""query"": [ // <─┐ [0.1, 0.2, ...], // < │ [0.2, 0.1, ...], // < ├─ multi-vector [0.8, 0.9, ...] // < │ ], // <─┘ ""using"": ""colbert"", ""limit"": 10 } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.query_points( collection_name=""{collection_name}"", prefetch=models.Prefetch( query=[0.01, 0.45, 0.67, ...], # <-- dense vector limit=100, ), query=[ [0.1, 0.2, ...], # <─┐ [0.2, 0.1, ...], # < ├─ multi-vector [0.8, 0.9, ...], # < ┘ ], using=""colbert"", limit=10, ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.query(""{collection_name}"", { prefetch: { query: [1, 23, 45, 67], // <------------- small byte vector limit: 100, }, query: [ [0.1, 0.2], // <─┐ [0.2, 0.1], // < ├─ multi-vector [0.8, 0.9], // < ┘ ], using: 'colbert', limit: 10, }); ``` ```rust use qdrant_client::Qdrant; use qdrant_client::qdrant::{PrefetchQueryBuilder, Query, QueryPointsBuilder}; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client.query( QueryPointsBuilder::new(""{collection_name}"") .add_prefetch(PrefetchQueryBuilder::default() .query(Query::new_nearest(vec![0.01, 0.45, 0.67])) .limit(100u64) ) .query(Query::new_nearest(vec![ vec![0.1, 0.2], vec![0.2, 0.1], vec![0.8, 0.9], ])) .using(""colbert"") .limit(10u64) ).await?; ``` ```java import static io.qdrant.client.QueryFactory.nearest; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.PrefetchQuery; import io.qdrant.client.grpc.Points.QueryPoints; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .queryAsync( QueryPoints.newBuilder() .setCollectionName(""{collection_name}"") .addPrefetch( PrefetchQuery.newBuilder() .setQuery(nearest(0.01f, 0.45f, 0.67f)) // <-- dense vector .setLimit(100) .build()) .setQuery( nearest( new float[][] { {0.1f, 0.2f}, // <─┐ {0.2f, 0.1f}, // < ├─ multi-vector {0.8f, 0.9f} // < ┘ })) .setUsing(""colbert"") .setLimit(10) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.QueryAsync( collectionName: ""{collection_name}"", prefetch: new List { new() { Query = new float[] { 0.01f, 0.45f, 0.67f }, // <-- dense vector**** Limit = 100 } }, query: new float[][] { [0.1f, 0.2f], // <─┐ [0.2f, 0.1f], // < ├─ multi-vector [0.8f, 0.9f] // < ┘ }, usingVector: ""colbert"", limit: 10 ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Query(context.Background(), &qdrant.QueryPoints{ CollectionName: ""{collection_name}"", Prefetch: []*qdrant.PrefetchQuery{ { Query: qdrant.NewQueryDense([]float32{0.01, 0.45, 0.67}), Limit: qdrant.PtrOf(uint64(100)), }, }, Query: qdrant.NewQueryMulti([][]float32{ {0.1, 0.2}, {0.2, 0.1}, {0.8, 0.9}, }), Using: qdrant.PtrOf(""colbert""), }) ``` It is possible to combine all the above techniques in a single query: ```http POST /collections/{collection_name}/points/query { ""prefetch"": { ""prefetch"": { ""query"": [1, 23, 45, 67], // <------ small byte vector ""using"": ""mrl_byte"" ""limit"": 1000 }, ""query"": [0.01, 0.45, 0.67, ...], // <-- full dense vector ""using"": ""full"" ""limit"": 100 }, ""query"": [ // <─┐ [0.1, 0.2, ...], // < │ [0.2, 0.1, ...], // < ├─ multi-vector [0.8, 0.9, ...] // < │ ], // <─┘ ""using"": ""colbert"", ""limit"": 10 } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.query_points( collection_name=""{collection_name}"", prefetch=models.Prefetch( prefetch=models.Prefetch( query=[1, 23, 45, 67], # <------ small byte vector using=""mrl_byte"", limit=1000, ), query=[0.01, 0.45, 0.67, ...], # <-- full dense vector using=""full"", limit=100, ), query=[ [0.1, 0.2, ...], # <─┐ [0.2, 0.1, ...], # < ├─ multi-vector [0.8, 0.9, ...], # < ┘ ], using=""colbert"", limit=10, ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.query(""{collection_name}"", { prefetch: { prefetch: { query: [1, 23, 45, 67, ...], // <------------- small byte vector using: 'mrl_byte', limit: 1000, }, query: [0.01, 0.45, 0.67, ...], // <-- full dense vector using: 'full', limit: 100, }, query: [ [0.1, 0.2], // <─┐ [0.2, 0.1], // < ├─ multi-vector [0.8, 0.9], // < ┘ ], using: 'colbert', limit: 10, }); ``` ```rust use qdrant_client::Qdrant; use qdrant_client::qdrant::{PrefetchQueryBuilder, Query, QueryPointsBuilder}; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client.query( QueryPointsBuilder::new(""{collection_name}"") .add_prefetch(PrefetchQueryBuilder::default() .add_prefetch(PrefetchQueryBuilder::default() .query(Query::new_nearest(vec![1.0, 23.0, 45.0, 67.0])) .using(""mlr_byte"") .limit(1000u64) ) .query(Query::new_nearest(vec![0.01, 0.45, 0.67])) .using(""full"") .limit(100u64) ) .query(Query::new_nearest(vec![ vec![0.1, 0.2], vec![0.2, 0.1], vec![0.8, 0.9], ])) .using(""colbert"") .limit(10u64) ).await?; ``` ```java import static io.qdrant.client.QueryFactory.nearest; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.PrefetchQuery; import io.qdrant.client.grpc.Points.QueryPoints; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .queryAsync( QueryPoints.newBuilder() .setCollectionName(""{collection_name}"") .addPrefetch( PrefetchQuery.newBuilder() .addPrefetch( PrefetchQuery.newBuilder() .setQuery(nearest(1, 23, 45, 67)) // <------------- small byte vector .setUsing(""mrl_byte"") .setLimit(1000) .build()) .setQuery(nearest(0.01f, 0.45f, 0.67f)) // <-- dense vector .setUsing(""full"") .setLimit(100) .build()) .setQuery( nearest( new float[][] { {0.1f, 0.2f}, // <─┐ {0.2f, 0.1f}, // < ├─ multi-vector {0.8f, 0.9f} // < ┘ })) .setUsing(""colbert"") .setLimit(10) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.QueryAsync( collectionName: ""{collection_name}"", prefetch: new List { new() { Prefetch = { new List { new() { Query = new float[] { 1, 23, 45, 67 }, // <------------- small byte vector Using = ""mrl_byte"", Limit = 1000 }, } }, Query = new float[] {0.01f, 0.45f, 0.67f}, // <-- dense vector Using = ""full"", Limit = 100 } }, query: new float[][] { [0.1f, 0.2f], // <─┐ [0.2f, 0.1f], // < ├─ multi-vector [0.8f, 0.9f] // < ┘ }, usingVector: ""colbert"", limit: 10 ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Query(context.Background(), &qdrant.QueryPoints{ CollectionName: ""{collection_name}"", Prefetch: []*qdrant.PrefetchQuery{ { Prefetch: []*qdrant.PrefetchQuery{ { Query: qdrant.NewQueryDense([]float32{1, 23, 45, 67}), Using: qdrant.PtrOf(""mrl_byte""), Limit: qdrant.PtrOf(uint64(1000)), }, }, Query: qdrant.NewQueryDense([]float32{0.01, 0.45, 0.67}), Limit: qdrant.PtrOf(uint64(100)), Using: qdrant.PtrOf(""full""), }, }, Query: qdrant.NewQueryMulti([][]float32{ {0.1, 0.2}, {0.2, 0.1}, {0.8, 0.9}, }), Using: qdrant.PtrOf(""colbert""), }) ``` ## Flexible interface Other than the introduction of `prefetch`, the `Query API` has been designed to make querying simpler. Let's look at a few bonus features: ### Query by ID Whenever you need to use a vector as an input, you can always use a [point ID](../points/#point-ids) instead. ```http POST /collections/{collection_name}/points/query { ""query"": ""43cf51e2-8777-4f52-bc74-c2cbde0c8b04"" // <--- point id } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.query_points( collection_name=""{collection_name}"", query=""43cf51e2-8777-4f52-bc74-c2cbde0c8b04"", # <--- point id ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.query(""{collection_name}"", { query: '43cf51e2-8777-4f52-bc74-c2cbde0c8b04', // <--- point id }); ``` ```rust use qdrant_client::Qdrant; use qdrant_client::qdrant::{Condition, Filter, PointId, Query, QueryPointsBuilder}; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .query( QueryPointsBuilder::new(""{collection_name}"") .query(Query::new_nearest(PointId::new(""43cf51e2-8777-4f52-bc74-c2cbde0c8b04""))) ) .await?; ``` ```java import static io.qdrant.client.QueryFactory.nearest; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.QueryPoints; import java.util.UUID; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .queryAsync( QueryPoints.newBuilder() .setCollectionName(""{collection_name}"") .setQuery(nearest(UUID.fromString(""43cf51e2-8777-4f52-bc74-c2cbde0c8b04""))) .build()) .get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient(""localhost"", 6334); await client.QueryAsync( collectionName: ""{collection_name}"", query: Guid.Parse(""43cf51e2-8777-4f52-bc74-c2cbde0c8b04"") // <--- point id ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Query(context.Background(), &qdrant.QueryPoints{ CollectionName: ""{collection_name}"", Query: qdrant.NewQueryID(qdrant.NewID(""43cf51e2-8777-4f52-bc74-c2cbde0c8b04"")), }) ``` The above example will fetch the default vector from the point with this id, and use it as the query vector. If the `using` parameter is also specified, Qdrant will use the vector with that name. It is also possible to reference an ID from a different collection, by setting the `lookup_from` parameter. ```http POST /collections/{collection_name}/points/query { ""query"": ""43cf51e2-8777-4f52-bc74-c2cbde0c8b04"", // <--- point id ""using"": ""512d-vector"" ""lookup_from"": { ""collection"": ""another_collection"", // <--- other collection name ""vector"": ""image-512"" // <--- vector name in the other collection } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.query_points( collection_name=""{collection_name}"", query=""43cf51e2-8777-4f52-bc74-c2cbde0c8b04"", # <--- point id using=""512d-vector"", lookup_from=models.LookupFrom( collection=""another_collection"", # <--- other collection name vector=""image-512"", # <--- vector name in the other collection ) ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.query(""{collection_name}"", { query: '43cf51e2-8777-4f52-bc74-c2cbde0c8b04', // <--- point id using: '512d-vector', lookup_from: { collection: 'another_collection', // <--- other collection name vector: 'image-512', // <--- vector name in the other collection } }); ``` ```rust use qdrant_client::Qdrant; use qdrant_client::qdrant::{LookupLocationBuilder, PointId, Query, QueryPointsBuilder}; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client.query( QueryPointsBuilder::new(""{collection_name}"") .query(Query::new_nearest(PointId::new(""43cf51e2-8777-4f52-bc74-c2cbde0c8b04""))) .using(""512d-vector"") .lookup_from( LookupLocationBuilder::new(""another_collection"") .vector_name(""image-512"") ) ).await?; ``` ```java import static io.qdrant.client.QueryFactory.nearest; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.LookupLocation; import io.qdrant.client.grpc.Points.QueryPoints; import java.util.UUID; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .queryAsync( QueryPoints.newBuilder() .setCollectionName(""{collection_name}"") .setQuery(nearest(UUID.fromString(""43cf51e2-8777-4f52-bc74-c2cbde0c8b04""))) .setUsing(""512d-vector"") .setLookupFrom( LookupLocation.newBuilder() .setCollectionName(""another_collection"") .setVectorName(""image-512"") .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; var client = new QdrantClient(""localhost"", 6334); await client.QueryAsync( collectionName: ""{collection_name}"", query: Guid.Parse(""43cf51e2-8777-4f52-bc74-c2cbde0c8b04""), // <--- point id usingVector: ""512d-vector"", lookupFrom: new() { CollectionName = ""another_collection"", // <--- other collection name VectorName = ""image-512"" // <--- vector name in the other collection } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Query(context.Background(), &qdrant.QueryPoints{ CollectionName: ""{collection_name}"", Query: qdrant.NewQueryID(qdrant.NewID(""43cf51e2-8777-4f52-bc74-c2cbde0c8b04"")), Using: qdrant.PtrOf(""512d-vector""), LookupFrom: &qdrant.LookupLocation{ CollectionName: ""another_collection"", VectorName: qdrant.PtrOf(""image-512""), }, }) ``` In the case above, Qdrant will fetch the `""image-512""` vector from the specified point id in the collection `another_collection`. ## Re-ranking with payload values The Query API can retrieve points not only by vector similarity but also by the content of the payload. There are two ways to make use of the payload in the query: * Apply filters to the payload fields, to only get the points that match the filter. * Order the results by the payload field. Let's see an example of when this might be useful: ```http POST /collections/{collection_name}/points/query { ""prefetch"": [ { ""query"": [0.01, 0.45, 0.67, ...], // <-- dense vector ""filter"": { ""must"": { ""key"": ""color"", ""match"": { ""value"": ""red"" } } }, ""limit"": 10 }, { ""query"": [0.01, 0.45, 0.67, ...], // <-- dense vector ""filter"": { ""must"": { ""key"": ""color"", ""match"": { ""value"": ""green"" } } }, ""limit"": 10 } ], ""query"": { ""order_by"": ""price"" } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.query_points( collection_name=""{collection_name}"", prefetch=[ models.Prefetch( query=[0.01, 0.45, 0.67, ...], # <-- dense vector filter=models.Filter( must=models.FieldCondition( key=""color"", match=models.Match(value=""red""), ), ), limit=10, ), models.Prefetch( query=[0.01, 0.45, 0.67, ...], # <-- dense vector filter=models.Filter( must=models.FieldCondition( key=""color"", match=models.Match(value=""green""), ), ), limit=10, ), ], query=models.OrderByQuery(order_by=""price""), ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.query(""{collection_name}"", { prefetch: [ { query: [0.01, 0.45, 0.67], // <-- dense vector filter: { must: { key: 'color', match: { value: 'red', }, } }, limit: 10, }, { query: [0.01, 0.45, 0.67], // <-- dense vector filter: { must: { key: 'color', match: { value: 'green', }, } }, limit: 10, }, ], query: { order_by: 'price', }, }); ``` ```rust use qdrant_client::Qdrant; use qdrant_client::qdrant::{Condition, Filter, PrefetchQueryBuilder, Query, QueryPointsBuilder}; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client.query( QueryPointsBuilder::new(""{collection_name}"") .add_prefetch(PrefetchQueryBuilder::default() .query(Query::new_nearest(vec![0.01, 0.45, 0.67])) .filter(Filter::must([Condition::matches( ""color"", ""red"".to_string(), )])) .limit(10u64) ) .add_prefetch(PrefetchQueryBuilder::default() .query(Query::new_nearest(vec![0.01, 0.45, 0.67])) .filter(Filter::must([Condition::matches( ""color"", ""green"".to_string(), )])) .limit(10u64) ) .query(Query::new_order_by(""price"")) ).await?; ``` ```java import static io.qdrant.client.ConditionFactory.matchKeyword; import static io.qdrant.client.QueryFactory.nearest; import static io.qdrant.client.QueryFactory.orderBy; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.Filter; import io.qdrant.client.grpc.Points.PrefetchQuery; import io.qdrant.client.grpc.Points.QueryPoints; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .queryAsync( QueryPoints.newBuilder() .setCollectionName(""{collection_name}"") .addPrefetch( PrefetchQuery.newBuilder() .setQuery(nearest(0.01f, 0.45f, 0.67f)) .setFilter( Filter.newBuilder().addMust(matchKeyword(""color"", ""red"")).build()) .setLimit(10) .build()) .addPrefetch( PrefetchQuery.newBuilder() .setQuery(nearest(0.01f, 0.45f, 0.67f)) .setFilter( Filter.newBuilder().addMust(matchKeyword(""color"", ""green"")).build()) .setLimit(10) .build()) .setQuery(orderBy(""price"")) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient(""localhost"", 6334); await client.QueryAsync( collectionName: ""{collection_name}"", prefetch: new List { new() { Query = new float[] { 0.01f, 0.45f, 0.67f }, Filter = MatchKeyword(""color"", ""red""), Limit = 10 }, new() { Query = new float[] { 0.01f, 0.45f, 0.67f }, Filter = MatchKeyword(""color"", ""green""), Limit = 10 } }, query: (OrderBy) ""price"", limit: 10 ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Query(context.Background(), &qdrant.QueryPoints{ CollectionName: ""{collection_name}"", Prefetch: []*qdrant.PrefetchQuery{ { Query: qdrant.NewQuery(0.01, 0.45, 0.67), Filter: &qdrant.Filter{ Must: []*qdrant.Condition{ qdrant.NewMatch(""color"", ""red""), }, }, }, { Query: qdrant.NewQuery(0.01, 0.45, 0.67), Filter: &qdrant.Filter{ Must: []*qdrant.Condition{ qdrant.NewMatch(""color"", ""green""), }, }, }, }, Query: qdrant.NewQueryOrderBy(&qdrant.OrderBy{ Key: ""price"", }), }) ``` In this example, we first fetch 10 points with the color `""red""` and then 10 points with the color `""green""`. Then, we order the results by the price field. This is how we can guarantee even sampling of both colors in the results and also get the cheapest ones first. ## Grouping *Available as of v1.11.0* It is possible to group results by a certain field. This is useful when you have multiple points for the same item, and you want to avoid redundancy of the same item in the results. REST API ([Schema](https://api.qdrant.tech/master/api-reference/search/query-points-groups)): ```http POST /collections/{collection_name}/points/query/groups { ""query"": [0.01, 0.45, 0.67], group_by=""document_id"", # Path of the field to group by limit=4, # Max amount of groups group_size=2, # Max amount of points per group } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.query_points_groups( collection_name=""{collection_name}"", query=[0.01, 0.45, 0.67], group_by=""document_id"", limit=4, group_size=2, ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.queryGroups(""{collection_name}"", { query: [0.01, 0.45, 0.67], group_by: ""document_id"", limit: 4, group_size: 2, }); ``` ```rust use qdrant_client::Qdrant; use qdrant_client::qdrant::{Query, QueryPointsBuilder}; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client.query_groups( QueryPointGroupsBuilder::new(""{collection_name}"", ""document_id"") .query(Query::from(vec![0.01, 0.45, 0.67])) .limit(4u64) .group_size(2u64) ).await?; ``` ```java import static io.qdrant.client.QueryFactory.nearest; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.QueryPointGroups; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .queryGroupsAsync( QueryPointGroups.newBuilder() .setCollectionName(""{collection_name}"") .setGroupBy(""document_id"") .setQuery(nearest(0.01f, 0.45f, 0.67f)) .setLimit(4) .setGroupSize(2) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.QueryGroupsAsync( collectionName: ""{collection_name}"", groupBy: ""document_id"", query: new float[] { 0.01f, 0.45f, 0.67f }, limit: 4, groupSize: 2 ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.QueryGroups(context.Background(), &qdrant.QueryPointGroups{ CollectionName: ""{collection_name}"", Query: qdrant.NewQuery(0.01, 0.45, 0.67), GroupBy: ""document_id"", GroupSize: qdrant.PtrOf(uint64(2)), }) ``` For more information on the `grouping` capabilities refer to the reference documentation for search with [grouping](./search/#search-groups) and [lookup](./search/#lookup-in-groups). ",documentation/concepts/hybrid-queries.md "--- title: Filtering weight: 60 aliases: - ../filtering --- # Filtering With Qdrant, you can set conditions when searching or retrieving points. For example, you can impose conditions on both the [payload](../payload/) and the `id` of the point. Setting additional conditions is important when it is impossible to express all the features of the object in the embedding. Examples include a variety of business requirements: stock availability, user location, or desired price range. ## Filtering clauses Qdrant allows you to combine conditions in clauses. Clauses are different logical operations, such as `OR`, `AND`, and `NOT`. Clauses can be recursively nested into each other so that you can reproduce an arbitrary boolean expression. Let's take a look at the clauses implemented in Qdrant. Suppose we have a set of points with the following payload: ```json [ { ""id"": 1, ""city"": ""London"", ""color"": ""green"" }, { ""id"": 2, ""city"": ""London"", ""color"": ""red"" }, { ""id"": 3, ""city"": ""London"", ""color"": ""blue"" }, { ""id"": 4, ""city"": ""Berlin"", ""color"": ""red"" }, { ""id"": 5, ""city"": ""Moscow"", ""color"": ""green"" }, { ""id"": 6, ""city"": ""Moscow"", ""color"": ""blue"" } ] ``` ### Must Example: ```http POST /collections/{collection_name}/points/scroll { ""filter"": { ""must"": [ { ""key"": ""city"", ""match"": { ""value"": ""London"" } }, { ""key"": ""color"", ""match"": { ""value"": ""red"" } } ] } ... } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.scroll( collection_name=""{collection_name}"", scroll_filter=models.Filter( must=[ models.FieldCondition( key=""city"", match=models.MatchValue(value=""London""), ), models.FieldCondition( key=""color"", match=models.MatchValue(value=""red""), ), ] ), ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.scroll(""{collection_name}"", { filter: { must: [ { key: ""city"", match: { value: ""London"" }, }, { key: ""color"", match: { value: ""red"" }, }, ], }, }); ``` ```rust use qdrant_client::qdrant::{Condition, Filter, ScrollPointsBuilder}; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .scroll( ScrollPointsBuilder::new(""{collection_name}"").filter(Filter::must([ Condition::matches(""city"", ""london"".to_string()), Condition::matches(""color"", ""red"".to_string()), ])), ) .await?; ``` ```java import java.util.List; import static io.qdrant.client.ConditionFactory.matchKeyword; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.Filter; import io.qdrant.client.grpc.Points.ScrollPoints; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .scrollAsync( ScrollPoints.newBuilder() .setCollectionName(""{collection_name}"") .setFilter( Filter.newBuilder() .addAllMust( List.of(matchKeyword(""city"", ""London""), matchKeyword(""color"", ""red""))) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient(""localhost"", 6334); // & operator combines two conditions in an AND conjunction(must) await client.ScrollAsync( collectionName: ""{collection_name}"", filter: MatchKeyword(""city"", ""London"") & MatchKeyword(""color"", ""red"") ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Scroll(context.Background(), &qdrant.ScrollPoints{ CollectionName: ""{collection_name}"", Filter: &qdrant.Filter{ Must: []*qdrant.Condition{ qdrant.NewMatch(""city"", ""London""), qdrant.NewMatch(""color"", ""red""), }, }, }) ``` Filtered points would be: ```json [{ ""id"": 2, ""city"": ""London"", ""color"": ""red"" }] ``` When using `must`, the clause becomes `true` only if every condition listed inside `must` is satisfied. In this sense, `must` is equivalent to the operator `AND`. ### Should Example: ```http POST /collections/{collection_name}/points/scroll { ""filter"": { ""should"": [ { ""key"": ""city"", ""match"": { ""value"": ""London"" } }, { ""key"": ""color"", ""match"": { ""value"": ""red"" } } ] } } ``` ```python client.scroll( collection_name=""{collection_name}"", scroll_filter=models.Filter( should=[ models.FieldCondition( key=""city"", match=models.MatchValue(value=""London""), ), models.FieldCondition( key=""color"", match=models.MatchValue(value=""red""), ), ] ), ) ``` ```typescript client.scroll(""{collection_name}"", { filter: { should: [ { key: ""city"", match: { value: ""London"" }, }, { key: ""color"", match: { value: ""red"" }, }, ], }, }); ``` ```rust use qdrant_client::qdrant::{Condition, Filter, ScrollPointsBuilder}; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .scroll( ScrollPointsBuilder::new(""{collection_name}"").filter(Filter::should([ Condition::matches(""city"", ""london"".to_string()), Condition::matches(""color"", ""red"".to_string()), ])), ) .await?; ``` ```java import static io.qdrant.client.ConditionFactory.matchKeyword; import io.qdrant.client.grpc.Points.Filter; import io.qdrant.client.grpc.Points.ScrollPoints; import java.util.List; client .scrollAsync( ScrollPoints.newBuilder() .setCollectionName(""{collection_name}"") .setFilter( Filter.newBuilder() .addAllShould( List.of(matchKeyword(""city"", ""London""), matchKeyword(""color"", ""red""))) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient(""localhost"", 6334); // | operator combines two conditions in an OR disjunction(should) await client.ScrollAsync( collectionName: ""{collection_name}"", filter: MatchKeyword(""city"", ""London"") | MatchKeyword(""color"", ""red"") ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Scroll(context.Background(), &qdrant.ScrollPoints{ CollectionName: ""{collection_name}"", Filter: &qdrant.Filter{ Should: []*qdrant.Condition{ qdrant.NewMatch(""city"", ""London""), qdrant.NewMatch(""color"", ""red""), }, }, }) ``` Filtered points would be: ```json [ { ""id"": 1, ""city"": ""London"", ""color"": ""green"" }, { ""id"": 2, ""city"": ""London"", ""color"": ""red"" }, { ""id"": 3, ""city"": ""London"", ""color"": ""blue"" }, { ""id"": 4, ""city"": ""Berlin"", ""color"": ""red"" } ] ``` When using `should`, the clause becomes `true` if at least one condition listed inside `should` is satisfied. In this sense, `should` is equivalent to the operator `OR`. ### Must Not Example: ```http POST /collections/{collection_name}/points/scroll { ""filter"": { ""must_not"": [ { ""key"": ""city"", ""match"": { ""value"": ""London"" } }, { ""key"": ""color"", ""match"": { ""value"": ""red"" } } ] } } ``` ```python client.scroll( collection_name=""{collection_name}"", scroll_filter=models.Filter( must_not=[ models.FieldCondition(key=""city"", match=models.MatchValue(value=""London"")), models.FieldCondition(key=""color"", match=models.MatchValue(value=""red"")), ] ), ) ``` ```typescript client.scroll(""{collection_name}"", { filter: { must_not: [ { key: ""city"", match: { value: ""London"" }, }, { key: ""color"", match: { value: ""red"" }, }, ], }, }); ``` ```rust use qdrant_client::qdrant::{Condition, Filter, ScrollPointsBuilder}; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .scroll( ScrollPointsBuilder::new(""{collection_name}"").filter(Filter::must_not([ Condition::matches(""city"", ""london"".to_string()), Condition::matches(""color"", ""red"".to_string()), ])), ) .await?; ``` ```java import java.util.List; import static io.qdrant.client.ConditionFactory.matchKeyword; import io.qdrant.client.grpc.Points.Filter; import io.qdrant.client.grpc.Points.ScrollPoints; client .scrollAsync( ScrollPoints.newBuilder() .setCollectionName(""{collection_name}"") .setFilter( Filter.newBuilder() .addAllMustNot( List.of(matchKeyword(""city"", ""London""), matchKeyword(""color"", ""red""))) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient(""localhost"", 6334); // The ! operator negates the condition(must not) await client.ScrollAsync( collectionName: ""{collection_name}"", filter: !(MatchKeyword(""city"", ""London"") & MatchKeyword(""color"", ""red"")) ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Scroll(context.Background(), &qdrant.ScrollPoints{ CollectionName: ""{collection_name}"", Filter: &qdrant.Filter{ MustNot: []*qdrant.Condition{ qdrant.NewMatch(""city"", ""London""), qdrant.NewMatch(""color"", ""red""), }, }, }) ``` Filtered points would be: ```json [ { ""id"": 5, ""city"": ""Moscow"", ""color"": ""green"" }, { ""id"": 6, ""city"": ""Moscow"", ""color"": ""blue"" } ] ``` When using `must_not`, the clause becomes `true` if none if the conditions listed inside `should` is satisfied. In this sense, `must_not` is equivalent to the expression `(NOT A) AND (NOT B) AND (NOT C)`. ### Clauses combination It is also possible to use several clauses simultaneously: ```http POST /collections/{collection_name}/points/scroll { ""filter"": { ""must"": [ { ""key"": ""city"", ""match"": { ""value"": ""London"" } } ], ""must_not"": [ { ""key"": ""color"", ""match"": { ""value"": ""red"" } } ] } } ``` ```python client.scroll( collection_name=""{collection_name}"", scroll_filter=models.Filter( must=[ models.FieldCondition(key=""city"", match=models.MatchValue(value=""London"")), ], must_not=[ models.FieldCondition(key=""color"", match=models.MatchValue(value=""red"")), ], ), ) ``` ```typescript client.scroll(""{collection_name}"", { filter: { must: [ { key: ""city"", match: { value: ""London"" }, }, ], must_not: [ { key: ""color"", match: { value: ""red"" }, }, ], }, }); ``` ```rust use qdrant_client::qdrant::{Condition, Filter, ScrollPointsBuilder}; client .scroll( ScrollPointsBuilder::new(""{collection_name}"").filter(Filter { must: vec![Condition::matches(""city"", ""London"".to_string())], must_not: vec![Condition::matches(""color"", ""red"".to_string())], ..Default::default() }), ) .await?; ``` ```java import static io.qdrant.client.ConditionFactory.matchKeyword; import io.qdrant.client.grpc.Points.Filter; import io.qdrant.client.grpc.Points.ScrollPoints; client .scrollAsync( ScrollPoints.newBuilder() .setCollectionName(""{collection_name}"") .setFilter( Filter.newBuilder() .addMust(matchKeyword(""city"", ""London"")) .addMustNot(matchKeyword(""color"", ""red"")) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient(""localhost"", 6334); await client.ScrollAsync( collectionName: ""{collection_name}"", filter: MatchKeyword(""city"", ""London"") & !MatchKeyword(""color"", ""red"") ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Scroll(context.Background(), &qdrant.ScrollPoints{ CollectionName: ""{collection_name}"", Filter: &qdrant.Filter{ Must: []*qdrant.Condition{ qdrant.NewMatch(""city"", ""London""), }, MustNot: []*qdrant.Condition{ qdrant.NewMatch(""color"", ""red""), }, }, }) ``` Filtered points would be: ```json [ { ""id"": 1, ""city"": ""London"", ""color"": ""green"" }, { ""id"": 3, ""city"": ""London"", ""color"": ""blue"" } ] ``` In this case, the conditions are combined by `AND`. Also, the conditions could be recursively nested. Example: ```http POST /collections/{collection_name}/points/scroll { ""filter"": { ""must_not"": [ { ""must"": [ { ""key"": ""city"", ""match"": { ""value"": ""London"" } }, { ""key"": ""color"", ""match"": { ""value"": ""red"" } } ] } ] } } ``` ```python client.scroll( collection_name=""{collection_name}"", scroll_filter=models.Filter( must_not=[ models.Filter( must=[ models.FieldCondition( key=""city"", match=models.MatchValue(value=""London"") ), models.FieldCondition( key=""color"", match=models.MatchValue(value=""red"") ), ], ), ], ), ) ``` ```typescript client.scroll(""{collection_name}"", { filter: { must_not: [ { must: [ { key: ""city"", match: { value: ""London"" }, }, { key: ""color"", match: { value: ""red"" }, }, ], }, ], }, }); ``` ```rust use qdrant_client::qdrant::{Condition, Filter, ScrollPointsBuilder}; client .scroll( ScrollPointsBuilder::new(""{collection_name}"").filter(Filter::must_not([Filter::must( [ Condition::matches(""city"", ""London"".to_string()), Condition::matches(""color"", ""red"".to_string()), ], ) .into()])), ) .await?; ``` ```java import java.util.List; import static io.qdrant.client.ConditionFactory.filter; import static io.qdrant.client.ConditionFactory.matchKeyword; import io.qdrant.client.grpc.Points.Filter; import io.qdrant.client.grpc.Points.ScrollPoints; client .scrollAsync( ScrollPoints.newBuilder() .setCollectionName(""{collection_name}"") .setFilter( Filter.newBuilder() .addMustNot( filter( Filter.newBuilder() .addAllMust( List.of( matchKeyword(""city"", ""London""), matchKeyword(""color"", ""red""))) .build())) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient(""localhost"", 6334); await client.ScrollAsync( collectionName: ""{collection_name}"", filter: new Filter { MustNot = { MatchKeyword(""city"", ""London"") & MatchKeyword(""color"", ""red"") } } ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Scroll(context.Background(), &qdrant.ScrollPoints{ CollectionName: ""{collection_name}"", Filter: &qdrant.Filter{ MustNot: []*qdrant.Condition{ qdrant.NewFilterAsCondition(&qdrant.Filter{ Must: []*qdrant.Condition{ qdrant.NewMatch(""city"", ""London""), qdrant.NewMatch(""color"", ""red""), }, }), }, }, }) ``` Filtered points would be: ```json [ { ""id"": 1, ""city"": ""London"", ""color"": ""green"" }, { ""id"": 3, ""city"": ""London"", ""color"": ""blue"" }, { ""id"": 4, ""city"": ""Berlin"", ""color"": ""red"" }, { ""id"": 5, ""city"": ""Moscow"", ""color"": ""green"" }, { ""id"": 6, ""city"": ""Moscow"", ""color"": ""blue"" } ] ``` ## Filtering conditions Different types of values in payload correspond to different kinds of queries that we can apply to them. Let's look at the existing condition variants and what types of data they apply to. ### Match ```json { ""key"": ""color"", ""match"": { ""value"": ""red"" } } ``` ```python models.FieldCondition( key=""color"", match=models.MatchValue(value=""red""), ) ``` ```typescript { key: 'color', match: {value: 'red'} } ``` ```rust Condition::matches(""color"", ""red"".to_string()) ``` ```java matchKeyword(""color"", ""red""); ``` ```csharp using static Qdrant.Client.Grpc.Conditions; MatchKeyword(""color"", ""red""); ``` ```go import ""github.com/qdrant/go-client/qdrant"" qdrant.NewMatch(""color"", ""red"") ``` For the other types, the match condition will look exactly the same, except for the type used: ```json { ""key"": ""count"", ""match"": { ""value"": 0 } } ``` ```python models.FieldCondition( key=""count"", match=models.MatchValue(value=0), ) ``` ```typescript { key: 'count', match: {value: 0} } ``` ```rust Condition::matches(""count"", 0) ``` ```java import static io.qdrant.client.ConditionFactory.match; match(""count"", 0); ``` ```csharp using static Qdrant.Client.Grpc.Conditions; Match(""count"", 0); ``` ```go import ""github.com/qdrant/go-client/qdrant"" qdrant.NewMatchInt(""count"", 0) ``` The simplest kind of condition is one that checks if the stored value equals the given one. If several values are stored, at least one of them should match the condition. You can apply it to [keyword](../payload/#keyword), [integer](../payload/#integer) and [bool](../payload/#bool) payloads. ### Match Any *Available as of v1.1.0* In case you want to check if the stored value is one of multiple values, you can use the Match Any condition. Match Any works as a logical OR for the given values. It can also be described as a `IN` operator. You can apply it to [keyword](../payload/#keyword) and [integer](../payload/#integer) payloads. Example: ```json { ""key"": ""color"", ""match"": { ""any"": [""black"", ""yellow""] } } ``` ```python models.FieldCondition( key=""color"", match=models.MatchAny(any=[""black"", ""yellow""]), ) ``` ```typescript { key: 'color', match: {any: ['black', 'yellow']} } ``` ```rust Condition::matches(""color"", vec![""black"".to_string(), ""yellow"".to_string()]) ``` ```java import static io.qdrant.client.ConditionFactory.matchKeywords; matchKeywords(""color"", List.of(""black"", ""yellow"")); ``` ```csharp using static Qdrant.Client.Grpc.Conditions; Match(""color"", [""black"", ""yellow""]); ``` ```go import ""github.com/qdrant/go-client/qdrant"" qdrant.NewMatchKeywords(""color"", ""black"", ""yellow"") ``` In this example, the condition will be satisfied if the stored value is either `black` or `yellow`. If the stored value is an array, it should have at least one value matching any of the given values. E.g. if the stored value is `[""black"", ""green""]`, the condition will be satisfied, because `""black""` is in `[""black"", ""yellow""]`. ### Match Except *Available as of v1.2.0* In case you want to check if the stored value is not one of multiple values, you can use the Match Except condition. Match Except works as a logical NOR for the given values. It can also be described as a `NOT IN` operator. You can apply it to [keyword](../payload/#keyword) and [integer](../payload/#integer) payloads. Example: ```json { ""key"": ""color"", ""match"": { ""except"": [""black"", ""yellow""] } } ``` ```python models.FieldCondition( key=""color"", match=models.MatchExcept(**{""except"": [""black"", ""yellow""]}), ) ``` ```typescript { key: 'color', match: {except: ['black', 'yellow']} } ``` ```rust use qdrant_client::qdrant::r#match::MatchValue; Condition::matches( ""color"", !MatchValue::from(vec![""black"".to_string(), ""yellow"".to_string()]), ) ``` ```java import static io.qdrant.client.ConditionFactory.matchExceptKeywords; matchExceptKeywords(""color"", List.of(""black"", ""yellow"")); ``` ```csharp using static Qdrant.Client.Grpc.Conditions; Match(""color"", [""black"", ""yellow""]); ``` ```go import ""github.com/qdrant/go-client/qdrant"" qdrant.NewMatchExcept(""color"", ""black"", ""yellow"") ``` In this example, the condition will be satisfied if the stored value is neither `black` nor `yellow`. If the stored value is an array, it should have at least one value not matching any of the given values. E.g. if the stored value is `[""black"", ""green""]`, the condition will be satisfied, because `""green""` does not match `""black""` nor `""yellow""`. ### Nested key *Available as of v1.1.0* Payloads being arbitrary JSON object, it is likely that you will need to filter on a nested field. For convenience, we use a syntax similar to what can be found in the [Jq](https://stedolan.github.io/jq/manual/#Basicfilters) project. Suppose we have a set of points with the following payload: ```json [ { ""id"": 1, ""country"": { ""name"": ""Germany"", ""cities"": [ { ""name"": ""Berlin"", ""population"": 3.7, ""sightseeing"": [""Brandenburg Gate"", ""Reichstag""] }, { ""name"": ""Munich"", ""population"": 1.5, ""sightseeing"": [""Marienplatz"", ""Olympiapark""] } ] } }, { ""id"": 2, ""country"": { ""name"": ""Japan"", ""cities"": [ { ""name"": ""Tokyo"", ""population"": 9.3, ""sightseeing"": [""Tokyo Tower"", ""Tokyo Skytree""] }, { ""name"": ""Osaka"", ""population"": 2.7, ""sightseeing"": [""Osaka Castle"", ""Universal Studios Japan""] } ] } } ] ``` You can search on a nested field using a dot notation. ```http POST /collections/{collection_name}/points/scroll { ""filter"": { ""should"": [ { ""key"": ""country.name"", ""match"": { ""value"": ""Germany"" } } ] } } ``` ```python client.scroll( collection_name=""{collection_name}"", scroll_filter=models.Filter( should=[ models.FieldCondition( key=""country.name"", match=models.MatchValue(value=""Germany"") ), ], ), ) ``` ```typescript client.scroll(""{collection_name}"", { filter: { should: [ { key: ""country.name"", match: { value: ""Germany"" }, }, ], }, }); ``` ```rust use qdrant_client::qdrant::{Condition, Filter, ScrollPointsBuilder}; client .scroll( ScrollPointsBuilder::new(""{collection_name}"").filter(Filter::should([ Condition::matches(""country.name"", ""Germany"".to_string()), ])), ) .await?; ``` ```java import static io.qdrant.client.ConditionFactory.matchKeyword; import io.qdrant.client.grpc.Points.Filter; import io.qdrant.client.grpc.Points.ScrollPoints; client .scrollAsync( ScrollPoints.newBuilder() .setCollectionName(""{collection_name}"") .setFilter( Filter.newBuilder() .addShould(matchKeyword(""country.name"", ""Germany"")) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient(""localhost"", 6334); await client.ScrollAsync(collectionName: ""{collection_name}"", filter: MatchKeyword(""country.name"", ""Germany"")); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Scroll(context.Background(), &qdrant.ScrollPoints{ CollectionName: ""{collection_name}"", Filter: &qdrant.Filter{ Should: []*qdrant.Condition{ qdrant.NewMatch(""country.name"", ""Germany""), }, }, }) ``` You can also search through arrays by projecting inner values using the `[]` syntax. ```http POST /collections/{collection_name}/points/scroll { ""filter"": { ""should"": [ { ""key"": ""country.cities[].population"", ""range"": { ""gte"": 9.0, } } ] } } ``` ```python client.scroll( collection_name=""{collection_name}"", scroll_filter=models.Filter( should=[ models.FieldCondition( key=""country.cities[].population"", range=models.Range( gt=None, gte=9.0, lt=None, lte=None, ), ), ], ), ) ``` ```typescript client.scroll(""{collection_name}"", { filter: { should: [ { key: ""country.cities[].population"", range: { gt: null, gte: 9.0, lt: null, lte: null, }, }, ], }, }); ``` ```rust use qdrant_client::qdrant::{Condition, Filter, Range, ScrollPointsBuilder}; client .scroll( ScrollPointsBuilder::new(""{collection_name}"").filter(Filter::should([ Condition::range( ""country.cities[].population"", Range { gte: Some(9.0), ..Default::default() }, ), ])), ) .await?; ``` ```java import static io.qdrant.client.ConditionFactory.range; import io.qdrant.client.grpc.Points.Filter; import io.qdrant.client.grpc.Points.Range; import io.qdrant.client.grpc.Points.ScrollPoints; client .scrollAsync( ScrollPoints.newBuilder() .setCollectionName(""{collection_name}"") .setFilter( Filter.newBuilder() .addShould( range( ""country.cities[].population"", Range.newBuilder().setGte(9.0).build())) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient(""localhost"", 6334); await client.ScrollAsync( collectionName: ""{collection_name}"", filter: Range(""country.cities[].population"", new Qdrant.Client.Grpc.Range { Gte = 9.0 }) ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Scroll(context.Background(), &qdrant.ScrollPoints{ CollectionName: ""{collection_name}"", Filter: &qdrant.Filter{ Should: []*qdrant.Condition{ qdrant.NewRange(""country.cities[].population"", &qdrant.Range{ Gte: qdrant.PtrOf(9.0), }), }, }, }) ``` This query would only output the point with id 2 as only Japan has a city with population greater than 9.0. And the leaf nested field can also be an array. ```http POST /collections/{collection_name}/points/scroll { ""filter"": { ""should"": [ { ""key"": ""country.cities[].sightseeing"", ""match"": { ""value"": ""Osaka Castle"" } } ] } } ``` ```python client.scroll( collection_name=""{collection_name}"", scroll_filter=models.Filter( should=[ models.FieldCondition( key=""country.cities[].sightseeing"", match=models.MatchValue(value=""Osaka Castle""), ), ], ), ) ``` ```typescript client.scroll(""{collection_name}"", { filter: { should: [ { key: ""country.cities[].sightseeing"", match: { value: ""Osaka Castle"" }, }, ], }, }); ``` ```rust use qdrant_client::qdrant::{Condition, Filter, ScrollPointsBuilder}; client .scroll( ScrollPointsBuilder::new(""{collection_name}"").filter(Filter::should([ Condition::matches(""country.cities[].sightseeing"", ""Osaka Castle"".to_string()), ])), ) .await?; ``` ```java import static io.qdrant.client.ConditionFactory.matchKeyword; import io.qdrant.client.grpc.Points.Filter; import io.qdrant.client.grpc.Points.ScrollPoints; client .scrollAsync( ScrollPoints.newBuilder() .setCollectionName(""{collection_name}"") .setFilter( Filter.newBuilder() .addShould(matchKeyword(""country.cities[].sightseeing"", ""Germany"")) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient(""localhost"", 6334); await client.ScrollAsync( collectionName: ""{collection_name}"", filter: MatchKeyword(""country.cities[].sightseeing"", ""Germany"") ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Scroll(context.Background(), &qdrant.ScrollPoints{ CollectionName: ""{collection_name}"", Filter: &qdrant.Filter{ Should: []*qdrant.Condition{ qdrant.NewMatch(""country.cities[].sightseeing"", ""Germany""), }, }, }) ``` This query would only output the point with id 2 as only Japan has a city with the ""Osaka castke"" as part of the sightseeing. ### Nested object filter *Available as of v1.2.0* By default, the conditions are taking into account the entire payload of a point. For instance, given two points with the following payload: ```json [ { ""id"": 1, ""dinosaur"": ""t-rex"", ""diet"": [ { ""food"": ""leaves"", ""likes"": false}, { ""food"": ""meat"", ""likes"": true} ] }, { ""id"": 2, ""dinosaur"": ""diplodocus"", ""diet"": [ { ""food"": ""leaves"", ""likes"": true}, { ""food"": ""meat"", ""likes"": false} ] } ] ``` The following query would match both points: ```http POST /collections/{collection_name}/points/scroll { ""filter"": { ""must"": [ { ""key"": ""diet[].food"", ""match"": { ""value"": ""meat"" } }, { ""key"": ""diet[].likes"", ""match"": { ""value"": true } } ] } } ``` ```python client.scroll( collection_name=""{collection_name}"", scroll_filter=models.Filter( must=[ models.FieldCondition( key=""diet[].food"", match=models.MatchValue(value=""meat"") ), models.FieldCondition( key=""diet[].likes"", match=models.MatchValue(value=True) ), ], ), ) ``` ```typescript client.scroll(""{collection_name}"", { filter: { must: [ { key: ""diet[].food"", match: { value: ""meat"" }, }, { key: ""diet[].likes"", match: { value: true }, }, ], }, }); ``` ```rust use qdrant_client::qdrant::{Condition, Filter, ScrollPointsBuilder}; client .scroll( ScrollPointsBuilder::new(""{collection_name}"").filter(Filter::must([ Condition::matches(""diet[].food"", ""meat"".to_string()), Condition::matches(""diet[].likes"", true), ])), ) .await?; ``` ```java import java.util.List; import static io.qdrant.client.ConditionFactory.match; import static io.qdrant.client.ConditionFactory.matchKeyword; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.Filter; import io.qdrant.client.grpc.Points.ScrollPoints; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .scrollAsync( ScrollPoints.newBuilder() .setCollectionName(""{collection_name}"") .setFilter( Filter.newBuilder() .addAllMust( List.of(matchKeyword(""diet[].food"", ""meat""), match(""diet[].likes"", true))) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient(""localhost"", 6334); await client.ScrollAsync( collectionName: ""{collection_name}"", filter: MatchKeyword(""diet[].food"", ""meat"") & Match(""diet[].likes"", true) ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Scroll(context.Background(), &qdrant.ScrollPoints{ CollectionName: ""{collection_name}"", Filter: &qdrant.Filter{ Must: []*qdrant.Condition{ qdrant.NewMatch(""diet[].food"", ""meat""), qdrant.NewMatchBool(""diet[].likes"", true), }, }, }) ``` This happens because both points are matching the two conditions: - the ""t-rex"" matches food=meat on `diet[1].food` and likes=true on `diet[1].likes` - the ""diplodocus"" matches food=meat on `diet[1].food` and likes=true on `diet[0].likes` To retrieve only the points which are matching the conditions on an array element basis, that is the point with id 1 in this example, you would need to use a nested object filter. Nested object filters allow arrays of objects to be queried independently of each other. It is achieved by using the `nested` condition type formed by a payload key to focus on and a filter to apply. The key should point to an array of objects and can be used with or without the bracket notation (""data"" or ""data[]""). ```http POST /collections/{collection_name}/points/scroll { ""filter"": { ""must"": [{ ""nested"": { ""key"": ""diet"", ""filter"":{ ""must"": [ { ""key"": ""food"", ""match"": { ""value"": ""meat"" } }, { ""key"": ""likes"", ""match"": { ""value"": true } } ] } } }] } } ``` ```python client.scroll( collection_name=""{collection_name}"", scroll_filter=models.Filter( must=[ models.NestedCondition( nested=models.Nested( key=""diet"", filter=models.Filter( must=[ models.FieldCondition( key=""food"", match=models.MatchValue(value=""meat"") ), models.FieldCondition( key=""likes"", match=models.MatchValue(value=True) ), ] ), ) ) ], ), ) ``` ```typescript client.scroll(""{collection_name}"", { filter: { must: [ { nested: { key: ""diet"", filter: { must: [ { key: ""food"", match: { value: ""meat"" }, }, { key: ""likes"", match: { value: true }, }, ], }, }, }, ], }, }); ``` ```rust use qdrant_client::qdrant::{Condition, Filter, NestedCondition, ScrollPointsBuilder}; client .scroll( ScrollPointsBuilder::new(""{collection_name}"").filter(Filter::must([NestedCondition { key: ""diet"".to_string(), filter: Some(Filter::must([ Condition::matches(""food"", ""meat"".to_string()), Condition::matches(""likes"", true), ])), } .into()])), ) .await?; ``` ```java import java.util.List; import static io.qdrant.client.ConditionFactory.match; import static io.qdrant.client.ConditionFactory.matchKeyword; import static io.qdrant.client.ConditionFactory.nested; import io.qdrant.client.grpc.Points.Filter; import io.qdrant.client.grpc.Points.ScrollPoints; client .scrollAsync( ScrollPoints.newBuilder() .setCollectionName(""{collection_name}"") .setFilter( Filter.newBuilder() .addMust( nested( ""diet"", Filter.newBuilder() .addAllMust( List.of( matchKeyword(""food"", ""meat""), match(""likes"", true))) .build())) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient(""localhost"", 6334); await client.ScrollAsync( collectionName: ""{collection_name}"", filter: Nested(""diet"", MatchKeyword(""food"", ""meat"") & Match(""likes"", true)) ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Scroll(context.Background(), &qdrant.ScrollPoints{ CollectionName: ""{collection_name}"", Filter: &qdrant.Filter{ Must: []*qdrant.Condition{ qdrant.NewNestedFilter(""diet"", &qdrant.Filter{ Must: []*qdrant.Condition{ qdrant.NewMatch(""food"", ""meat""), qdrant.NewMatchBool(""likes"", true), }, }), }, }, }) ``` The matching logic is modified to be applied at the level of an array element within the payload. Nested filters work in the same way as if the nested filter was applied to a single element of the array at a time. Parent document is considered to match the condition if at least one element of the array matches the nested filter. **Limitations** The `has_id` condition is not supported within the nested object filter. If you need it, place it in an adjacent `must` clause. ```http POST /collections/{collection_name}/points/scroll { ""filter"":{ ""must"":[ { ""nested"":{ ""key"":""diet"", ""filter"":{ ""must"":[ { ""key"":""food"", ""match"":{ ""value"":""meat"" } }, { ""key"":""likes"", ""match"":{ ""value"":true } } ] } } }, { ""has_id"":[ 1 ] } ] } } ``` ```python client.scroll( collection_name=""{collection_name}"", scroll_filter=models.Filter( must=[ models.NestedCondition( nested=models.Nested( key=""diet"", filter=models.Filter( must=[ models.FieldCondition( key=""food"", match=models.MatchValue(value=""meat"") ), models.FieldCondition( key=""likes"", match=models.MatchValue(value=True) ), ] ), ) ), models.HasIdCondition(has_id=[1]), ], ), ) ``` ```typescript client.scroll(""{collection_name}"", { filter: { must: [ { nested: { key: ""diet"", filter: { must: [ { key: ""food"", match: { value: ""meat"" }, }, { key: ""likes"", match: { value: true }, }, ], }, }, }, { has_id: [1], }, ], }, }); ``` ```rust use qdrant_client::qdrant::{Condition, Filter, NestedCondition, ScrollPointsBuilder}; client .scroll( ScrollPointsBuilder::new(""{collection_name}"").filter(Filter::must([ NestedCondition { key: ""diet"".to_string(), filter: Some(Filter::must([ Condition::matches(""food"", ""meat"".to_string()), Condition::matches(""likes"", true), ])), } .into(), Condition::has_id([1]), ])), ) .await?; ``` ```java import java.util.List; import static io.qdrant.client.ConditionFactory.hasId; import static io.qdrant.client.ConditionFactory.match; import static io.qdrant.client.ConditionFactory.matchKeyword; import static io.qdrant.client.ConditionFactory.nested; import static io.qdrant.client.PointIdFactory.id; import io.qdrant.client.grpc.Points.Filter; import io.qdrant.client.grpc.Points.ScrollPoints; client .scrollAsync( ScrollPoints.newBuilder() .setCollectionName(""{collection_name}"") .setFilter( Filter.newBuilder() .addMust( nested( ""diet"", Filter.newBuilder() .addAllMust( List.of( matchKeyword(""food"", ""meat""), match(""likes"", true))) .build())) .addMust(hasId(id(1))) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient(""localhost"", 6334); await client.ScrollAsync( collectionName: ""{collection_name}"", filter: Nested(""diet"", MatchKeyword(""food"", ""meat"") & Match(""likes"", true)) & HasId(1) ); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Scroll(context.Background(), &qdrant.ScrollPoints{ CollectionName: ""{collection_name}"", Filter: &qdrant.Filter{ Must: []*qdrant.Condition{ qdrant.NewNestedFilter(""diet"", &qdrant.Filter{ Must: []*qdrant.Condition{ qdrant.NewMatch(""food"", ""meat""), qdrant.NewMatchBool(""likes"", true), }, }), qdrant.NewHasID(qdrant.NewIDNum(1)), }, }, }) ``` ### Full Text Match *Available as of v0.10.0* A special case of the `match` condition is the `text` match condition. It allows you to search for a specific substring, token or phrase within the text field. Exact texts that will match the condition depend on full-text index configuration. Configuration is defined during the index creation and describe at [full-text index](../indexing/#full-text-index). If there is no full-text index for the field, the condition will work as exact substring match. ```json { ""key"": ""description"", ""match"": { ""text"": ""good cheap"" } } ``` ```python models.FieldCondition( key=""description"", match=models.MatchText(text=""good cheap""), ) ``` ```typescript { key: 'description', match: {text: 'good cheap'} } ``` ```rust use qdrant_client::qdrant::Condition; Condition::matches_text(""description"", ""good cheap"") ``` ```java import static io.qdrant.client.ConditionFactory.matchText; matchText(""description"", ""good cheap""); ``` ```csharp using static Qdrant.Client.Grpc.Conditions; MatchText(""description"", ""good cheap""); ``` ```go import ""github.com/qdrant/go-client/qdrant"" qdrant.NewMatchText(""description"", ""good cheap"") ``` If the query has several words, then the condition will be satisfied only if all of them are present in the text. ### Range ```json { ""key"": ""price"", ""range"": { ""gt"": null, ""gte"": 100.0, ""lt"": null, ""lte"": 450.0 } } ``` ```python models.FieldCondition( key=""price"", range=models.Range( gt=None, gte=100.0, lt=None, lte=450.0, ), ) ``` ```typescript { key: 'price', range: { gt: null, gte: 100.0, lt: null, lte: 450.0 } } ``` ```rust use qdrant_client::qdrant::{Condition, Range}; Condition::range( ""price"", Range { gt: None, gte: Some(100.0), lt: None, lte: Some(450.0), }, ) ``` ```java import static io.qdrant.client.ConditionFactory.range; import io.qdrant.client.grpc.Points.Range; range(""price"", Range.newBuilder().setGte(100.0).setLte(450).build()); ``` ```csharp using static Qdrant.Client.Grpc.Conditions; Range(""price"", new Qdrant.Client.Grpc.Range { Gte = 100.0, Lte = 450 }); ``` ```go import ""github.com/qdrant/go-client/qdrant"" qdrant.NewRange(""price"", &qdrant.Range{ Gte: qdrant.PtrOf(100.0), Lte: qdrant.PtrOf(450.0), }) ``` The `range` condition sets the range of possible values for stored payload values. If several values are stored, at least one of them should match the condition. Comparisons that can be used: - `gt` - greater than - `gte` - greater than or equal - `lt` - less than - `lte` - less than or equal Can be applied to [float](../payload/#float) and [integer](../payload/#integer) payloads. ### Datetime Range The datetime range is a unique range condition, used for [datetime](../payload/#datetime) payloads, which supports RFC 3339 formats. You do not need to convert dates to UNIX timestaps. During comparison, timestamps are parsed and converted to UTC. _Available as of v1.8.0_ ```json { ""key"": ""date"", ""range"": { ""gt"": ""2023-02-08T10:49:00Z"", ""gte"": null, ""lt"": null, ""lte"": ""2024-01-31 10:14:31Z"" } } ``` ```python models.FieldCondition( key=""date"", range=models.DatetimeRange( gt=""2023-02-08T10:49:00Z"", gte=None, lt=None, lte=""2024-01-31T10:14:31Z"", ), ) ``` ```typescript { key: 'date', range: { gt: '2023-02-08T10:49:00Z', gte: null, lt: null, lte: '2024-01-31T10:14:31Z' } } ``` ```rust use qdrant_client::qdrant::{Condition, DatetimeRange, Timestamp}; Condition::datetime_range( ""date"", DatetimeRange { gt: Some(Timestamp::date_time(2023, 2, 8, 10, 49, 0).unwrap()), gte: None, lt: None, lte: Some(Timestamp::date_time(2024, 1, 31, 10, 14, 31).unwrap()), }, ) ``` ```java import static io.qdrant.client.ConditionFactory.datetimeRange; import com.google.protobuf.Timestamp; import io.qdrant.client.grpc.Points.DatetimeRange; import java.time.Instant; long gt = Instant.parse(""2023-02-08T10:49:00Z"").getEpochSecond(); long lte = Instant.parse(""2024-01-31T10:14:31Z"").getEpochSecond(); datetimeRange(""date"", DatetimeRange.newBuilder() .setGt(Timestamp.newBuilder().setSeconds(gt)) .setLte(Timestamp.newBuilder().setSeconds(lte)) .build()); ``` ```csharp using Qdrant.Client.Grpc; Conditions.DatetimeRange( field: ""date"", gt: new DateTime(2023, 2, 8, 10, 49, 0, DateTimeKind.Utc), lte: new DateTime(2024, 1, 31, 10, 14, 31, DateTimeKind.Utc) ); ``` ```go import ( ""time"" ""github.com/qdrant/go-client/qdrant"" ""google.golang.org/protobuf/types/known/timestamppb"" ) qdrant.NewDatetimeRange(""date"", &qdrant.DatetimeRange{ Gt: timestamppb.New(time.Date(2023, 2, 8, 10, 49, 0, 0, time.UTC)), Lte: timestamppb.New(time.Date(2024, 1, 31, 10, 14, 31, 0, time.UTC)), }) ``` ### UUID Match _Available as of v1.11.0_ Matching of UUID values works similarly to the regular `match` condition for strings. Functionally, it will work with `keyword` and `uuid` indexes exactly the same, but `uuid` index is more memory efficient. ```json { ""key"": ""uuid"", ""match"": { ""uuid"": ""f47ac10b-58cc-4372-a567-0e02b2c3d479"" } } ``` ```python models.FieldCondition( key=""uuid"", match=models.MatchValue(uuid=""f47ac10b-58cc-4372-a567-0e02b2c3d479""), ) ``` ```typescript { key: 'uuid', match: {uuid: 'f47ac10b-58cc-4372-a567-0e02b2c3d479'} } ``` ```rust Condition::matches(""uuid"", ""f47ac10b-58cc-4372-a567-0e02b2c3d479"".to_string()) ``` ```java matchKeyword(""uuid"", ""f47ac10b-58cc-4372-a567-0e02b2c3d479""); ``` ```csharp using static Qdrant.Client.Grpc.Conditions; MatchKeyword(""uuid"", ""f47ac10b-58cc-4372-a567-0e02b2c3d479""); ``` ```go import ""github.com/qdrant/go-client/qdrant"" qdrant.NewMatch(""uuid"", ""f47ac10b-58cc-4372-a567-0e02b2c3d479"") ``` ### Geo #### Geo Bounding Box ```json { ""key"": ""location"", ""geo_bounding_box"": { ""bottom_right"": { ""lon"": 13.455868, ""lat"": 52.495862 }, ""top_left"": { ""lon"": 13.403683, ""lat"": 52.520711 } } } ``` ```python models.FieldCondition( key=""location"", geo_bounding_box=models.GeoBoundingBox( bottom_right=models.GeoPoint( lon=13.455868, lat=52.495862, ), top_left=models.GeoPoint( lon=13.403683, lat=52.520711, ), ), ) ``` ```typescript { key: 'location', geo_bounding_box: { bottom_right: { lon: 13.455868, lat: 52.495862 }, top_left: { lon: 13.403683, lat: 52.520711 } } } ``` ```rust use qdrant_client::qdrant::{Condition, GeoBoundingBox, GeoPoint}; Condition::geo_bounding_box( ""location"", GeoBoundingBox { bottom_right: Some(GeoPoint { lon: 13.455868, lat: 52.495862, }), top_left: Some(GeoPoint { lon: 13.403683, lat: 52.520711, }), }, ) ``` ```java import static io.qdrant.client.ConditionFactory.geoBoundingBox; geoBoundingBox(""location"", 52.520711, 13.403683, 52.495862, 13.455868); ``` ```csharp using static Qdrant.Client.Grpc.Conditions; GeoBoundingBox(""location"", 52.520711, 13.403683, 52.495862, 13.455868); ``` ```go import ""github.com/qdrant/go-client/qdrant"" qdrant.NewGeoBoundingBox(""location"", 52.520711, 13.403683, 52.495862, 13.455868) ``` It matches with `location`s inside a rectangle with the coordinates of the upper left corner in `bottom_right` and the coordinates of the lower right corner in `top_left`. #### Geo Radius ```json { ""key"": ""location"", ""geo_radius"": { ""center"": { ""lon"": 13.403683, ""lat"": 52.520711 }, ""radius"": 1000.0 } } ``` ```python models.FieldCondition( key=""location"", geo_radius=models.GeoRadius( center=models.GeoPoint( lon=13.403683, lat=52.520711, ), radius=1000.0, ), ) ``` ```typescript { key: 'location', geo_radius: { center: { lon: 13.403683, lat: 52.520711 }, radius: 1000.0 } } ``` ```rust use qdrant_client::qdrant::{Condition, GeoPoint, GeoRadius}; Condition::geo_radius( ""location"", GeoRadius { center: Some(GeoPoint { lon: 13.403683, lat: 52.520711, }), radius: 1000.0, }, ) ``` ```java import static io.qdrant.client.ConditionFactory.geoRadius; geoRadius(""location"", 52.520711, 13.403683, 1000.0f); ``` ```csharp using static Qdrant.Client.Grpc.Conditions; GeoRadius(""location"", 52.520711, 13.403683, 1000.0f); ``` ```go import ""github.com/qdrant/go-client/qdrant"" qdrant.NewGeoRadius(""location"", 52.520711, 13.403683, 1000.0) ``` It matches with `location`s inside a circle with the `center` at the center and a radius of `radius` meters. If several values are stored, at least one of them should match the condition. These conditions can only be applied to payloads that match the [geo-data format](../payload/#geo). #### Geo Polygon Geo Polygons search is useful for when you want to find points inside an irregularly shaped area, for example a country boundary or a forest boundary. A polygon always has an exterior ring and may optionally include interior rings. A lake with an island would be an example of an interior ring. If you wanted to find points in the water but not on the island, you would make an interior ring for the island. When defining a ring, you must pick either a clockwise or counterclockwise ordering for your points. The first and last point of the polygon must be the same. Currently, we only support unprojected global coordinates (decimal degrees longitude and latitude) and we are datum agnostic. ```json { ""key"": ""location"", ""geo_polygon"": { ""exterior"": { ""points"": [ { ""lon"": -70.0, ""lat"": -70.0 }, { ""lon"": 60.0, ""lat"": -70.0 }, { ""lon"": 60.0, ""lat"": 60.0 }, { ""lon"": -70.0, ""lat"": 60.0 }, { ""lon"": -70.0, ""lat"": -70.0 } ] }, ""interiors"": [ { ""points"": [ { ""lon"": -65.0, ""lat"": -65.0 }, { ""lon"": 0.0, ""lat"": -65.0 }, { ""lon"": 0.0, ""lat"": 0.0 }, { ""lon"": -65.0, ""lat"": 0.0 }, { ""lon"": -65.0, ""lat"": -65.0 } ] } ] } } ``` ```python models.FieldCondition( key=""location"", geo_polygon=models.GeoPolygon( exterior=models.GeoLineString( points=[ models.GeoPoint( lon=-70.0, lat=-70.0, ), models.GeoPoint( lon=60.0, lat=-70.0, ), models.GeoPoint( lon=60.0, lat=60.0, ), models.GeoPoint( lon=-70.0, lat=60.0, ), models.GeoPoint( lon=-70.0, lat=-70.0, ), ] ), interiors=[ models.GeoLineString( points=[ models.GeoPoint( lon=-65.0, lat=-65.0, ), models.GeoPoint( lon=0.0, lat=-65.0, ), models.GeoPoint( lon=0.0, lat=0.0, ), models.GeoPoint( lon=-65.0, lat=0.0, ), models.GeoPoint( lon=-65.0, lat=-65.0, ), ] ) ], ), ) ``` ```typescript { key: 'location', geo_polygon: { exterior: { points: [ { lon: -70.0, lat: -70.0 }, { lon: 60.0, lat: -70.0 }, { lon: 60.0, lat: 60.0 }, { lon: -70.0, lat: 60.0 }, { lon: -70.0, lat: -70.0 } ] }, interiors: { points: [ { lon: -65.0, lat: -65.0 }, { lon: 0.0, lat: -65.0 }, { lon: 0.0, lat: 0.0 }, { lon: -65.0, lat: 0.0 }, { lon: -65.0, lat: -65.0 } ] } } } ``` ```rust use qdrant_client::qdrant::{Condition, GeoLineString, GeoPoint, GeoPolygon}; Condition::geo_polygon( ""location"", GeoPolygon { exterior: Some(GeoLineString { points: vec![ GeoPoint { lon: -70.0, lat: -70.0, }, GeoPoint { lon: 60.0, lat: -70.0, }, GeoPoint { lon: 60.0, lat: 60.0, }, GeoPoint { lon: -70.0, lat: 60.0, }, GeoPoint { lon: -70.0, lat: -70.0, }, ], }), interiors: vec![GeoLineString { points: vec![ GeoPoint { lon: -65.0, lat: -65.0, }, GeoPoint { lon: 0.0, lat: -65.0, }, GeoPoint { lon: 0.0, lat: 0.0 }, GeoPoint { lon: -65.0, lat: 0.0, }, GeoPoint { lon: -65.0, lat: -65.0, }, ], }], }, ) ``` ```java import static io.qdrant.client.ConditionFactory.geoPolygon; import io.qdrant.client.grpc.Points.GeoLineString; import io.qdrant.client.grpc.Points.GeoPoint; geoPolygon( ""location"", GeoLineString.newBuilder() .addAllPoints( List.of( GeoPoint.newBuilder().setLon(-70.0).setLat(-70.0).build(), GeoPoint.newBuilder().setLon(60.0).setLat(-70.0).build(), GeoPoint.newBuilder().setLon(60.0).setLat(60.0).build(), GeoPoint.newBuilder().setLon(-70.0).setLat(60.0).build(), GeoPoint.newBuilder().setLon(-70.0).setLat(-70.0).build())) .build(), List.of( GeoLineString.newBuilder() .addAllPoints( List.of( GeoPoint.newBuilder().setLon(-65.0).setLat(-65.0).build(), GeoPoint.newBuilder().setLon(0.0).setLat(-65.0).build(), GeoPoint.newBuilder().setLon(0.0).setLat(0.0).build(), GeoPoint.newBuilder().setLon(-65.0).setLat(0.0).build(), GeoPoint.newBuilder().setLon(-65.0).setLat(-65.0).build())) .build())); ``` ```csharp using Qdrant.Client.Grpc; using static Qdrant.Client.Grpc.Conditions; GeoPolygon( field: ""location"", exterior: new GeoLineString { Points = { new GeoPoint { Lat = -70.0, Lon = -70.0 }, new GeoPoint { Lat = 60.0, Lon = -70.0 }, new GeoPoint { Lat = 60.0, Lon = 60.0 }, new GeoPoint { Lat = -70.0, Lon = 60.0 }, new GeoPoint { Lat = -70.0, Lon = -70.0 } } }, interiors: [ new() { Points = { new GeoPoint { Lat = -65.0, Lon = -65.0 }, new GeoPoint { Lat = 0.0, Lon = -65.0 }, new GeoPoint { Lat = 0.0, Lon = 0.0 }, new GeoPoint { Lat = -65.0, Lon = 0.0 }, new GeoPoint { Lat = -65.0, Lon = -65.0 } } } ] ); ``` ```go import ""github.com/qdrant/go-client/qdrant"" qdrant.NewGeoPolygon(""location"", &qdrant.GeoLineString{ Points: []*qdrant.GeoPoint{ {Lat: -70, Lon: -70}, {Lat: 60, Lon: -70}, {Lat: 60, Lon: 60}, {Lat: -70, Lon: 60}, {Lat: -70, Lon: -70}, }, }, &qdrant.GeoLineString{ Points: []*qdrant.GeoPoint{ {Lat: -65, Lon: -65}, {Lat: 0, Lon: -65}, {Lat: 0, Lon: 0}, {Lat: -65, Lon: 0}, {Lat: -65, Lon: -65}, }, }) ``` A match is considered any point location inside or on the boundaries of the given polygon's exterior but not inside any interiors. If several location values are stored for a point, then any of them matching will include that point as a candidate in the resultset. These conditions can only be applied to payloads that match the [geo-data format](../payload/#geo). ### Values count In addition to the direct value comparison, it is also possible to filter by the amount of values. For example, given the data: ```json [ { ""id"": 1, ""name"": ""product A"", ""comments"": [""Very good!"", ""Excellent""] }, { ""id"": 2, ""name"": ""product B"", ""comments"": [""meh"", ""expected more"", ""ok""] } ] ``` We can perform the search only among the items with more than two comments: ```json { ""key"": ""comments"", ""values_count"": { ""gt"": 2 } } ``` ```python models.FieldCondition( key=""comments"", values_count=models.ValuesCount(gt=2), ) ``` ```typescript { key: 'comments', values_count: {gt: 2} } ``` ```rust use qdrant_client::qdrant::{Condition, ValuesCount}; Condition::values_count( ""comments"", ValuesCount { gt: Some(2), ..Default::default() }, ) ``` ```java import static io.qdrant.client.ConditionFactory.valuesCount; import io.qdrant.client.grpc.Points.ValuesCount; valuesCount(""comments"", ValuesCount.newBuilder().setGt(2).build()); ``` ```csharp using Qdrant.Client.Grpc; using static Qdrant.Client.Grpc.Conditions; ValuesCount(""comments"", new ValuesCount { Gt = 2 }); ``` ```go import ""github.com/qdrant/go-client/qdrant"" qdrant.NewValuesCount(""comments"", &qdrant.ValuesCount{ Gt: qdrant.PtrOf(uint64(2)), }) ``` The result would be: ```json [{ ""id"": 2, ""name"": ""product B"", ""comments"": [""meh"", ""expected more"", ""ok""] }] ``` If stored value is not an array - it is assumed that the amount of values is equals to 1. ### Is Empty Sometimes it is also useful to filter out records that are missing some value. The `IsEmpty` condition may help you with that: ```json { ""is_empty"": { ""key"": ""reports"" } } ``` ```python models.IsEmptyCondition( is_empty=models.PayloadField(key=""reports""), ) ``` ```typescript { is_empty: { key: ""reports""; } } ``` ```rust use qdrant_client::qdrant::Condition; Condition::is_empty(""reports"") ``` ```java import static io.qdrant.client.ConditionFactory.isEmpty; isEmpty(""reports""); ``` ```csharp using Qdrant.Client.Grpc; using static Qdrant.Client.Grpc.Conditions; IsEmpty(""reports""); ``` ```go import ""github.com/qdrant/go-client/qdrant"" qdrant.NewIsEmpty(""reports"") ``` This condition will match all records where the field `reports` either does not exist, or has `null` or `[]` value. ### Is Null It is not possible to test for `NULL` values with the match condition. We have to use `IsNull` condition instead: ```json { ""is_null"": { ""key"": ""reports"" } } ``` ```python models.IsNullCondition( is_null=models.PayloadField(key=""reports""), ) ``` ```typescript { is_null: { key: ""reports""; } } ``` ```rust use qdrant_client::qdrant::Condition; Condition::is_null(""reports"") ``` ```java import static io.qdrant.client.ConditionFactory.isNull; isNull(""reports""); ``` ```csharp using Qdrant.Client.Grpc; using static Qdrant.Client.Grpc.Conditions; IsNull(""reports""); ``` ```go import ""github.com/qdrant/go-client/qdrant"" qdrant.NewIsNull(""reports"") ``` This condition will match all records where the field `reports` exists and has `NULL` value. ### Has id This type of query is not related to payload, but can be very useful in some situations. For example, the user could mark some specific search results as irrelevant, or we want to search only among the specified points. ```http POST /collections/{collection_name}/points/scroll { ""filter"": { ""must"": [ { ""has_id"": [1,3,5,7,9,11] } ] } ... } ``` ```python client.scroll( collection_name=""{collection_name}"", scroll_filter=models.Filter( must=[ models.HasIdCondition(has_id=[1, 3, 5, 7, 9, 11]), ], ), ) ``` ```typescript client.scroll(""{collection_name}"", { filter: { must: [ { has_id: [1, 3, 5, 7, 9, 11], }, ], }, }); ``` ```rust use qdrant_client::qdrant::{Condition, Filter, ScrollPointsBuilder}; use qdrant_client::Qdrant; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .scroll( ScrollPointsBuilder::new(""{collection_name}"") .filter(Filter::must([Condition::has_id([1, 3, 5, 7, 9, 11])])), ) .await?; ``` ```java import java.util.List; import static io.qdrant.client.ConditionFactory.hasId; import static io.qdrant.client.PointIdFactory.id; import io.qdrant.client.grpc.Points.Filter; import io.qdrant.client.grpc.Points.ScrollPoints; client .scrollAsync( ScrollPoints.newBuilder() .setCollectionName(""{collection_name}"") .setFilter( Filter.newBuilder() .addMust(hasId(List.of(id(1), id(3), id(5), id(7), id(9), id(11)))) .build()) .build()) .get(); ``` ```csharp using Qdrant.Client; using static Qdrant.Client.Grpc.Conditions; var client = new QdrantClient(""localhost"", 6334); await client.ScrollAsync(collectionName: ""{collection_name}"", filter: HasId([1, 3, 5, 7, 9, 11])); ``` ```go import ( ""context"" ""github.com/qdrant/go-client/qdrant"" ) client, err := qdrant.NewClient(&qdrant.Config{ Host: ""localhost"", Port: 6334, }) client.Scroll(context.Background(), &qdrant.ScrollPoints{ CollectionName: ""{collection_name}"", Filter: &qdrant.Filter{ Must: []*qdrant.Condition{ qdrant.NewHasID( qdrant.NewIDNum(1), qdrant.NewIDNum(3), qdrant.NewIDNum(5), qdrant.NewIDNum(7), qdrant.NewIDNum(9), qdrant.NewIDNum(11), ), }, }, }) ``` Filtered points would be: ```json [ { ""id"": 1, ""city"": ""London"", ""color"": ""green"" }, { ""id"": 3, ""city"": ""London"", ""color"": ""blue"" }, { ""id"": 5, ""city"": ""Moscow"", ""color"": ""green"" } ] ``` ",documentation/concepts/filtering.md "--- title: Concepts weight: 11 # If the index.md file is empty, the link to the section will be hidden from the sidebar --- # Concepts Think of these concepts as a glossary. Each of these concepts include a link to detailed information, usually with examples. If you're new to AI, these concepts can help you learn more about AI and the Qdrant approach. ## Collections [Collections](/documentation/concepts/collections/) define a named set of points that you can use for your search. ## Payload A [Payload](/documentation/concepts/payload/) describes information that you can store with vectors. ## Points [Points](/documentation/concepts/points/) are a record which consists of a vector and an optional payload. ## Search [Search](/documentation/concepts/search/) describes _similarity search_, which set up related objects close to each other in vector space. ## Explore [Explore](/documentation/concepts/explore/) includes several APIs for exploring data in your collections. ## Hybrid Queries [Hybrid Queries](/documentation/concepts/hybrid-queries/) combines multiple queries or performs them in more than one stage. ## Filtering [Filtering](/documentation/concepts/filtering/) defines various database-style clauses, conditions, and more. ## Optimizer [Optimizer](/documentation/concepts/optimizer/) describes options to rebuild database structures for faster search. They include a vacuum, a merge, and an indexing optimizer. ## Storage [Storage](/documentation/concepts/storage/) describes the configuration of storage in segments, which include indexes and an ID mapper. ## Indexing [Indexing](/documentation/concepts/indexing/) lists and describes available indexes. They include payload, vector, sparse vector, and a filterable index. ## Snapshots [Snapshots](/documentation/concepts/snapshots/) describe the backup/restore process (and more) for each node at specific times. ",documentation/concepts/_index.md "--- title: Bulk Upload Vectors weight: 13 --- # Bulk upload a large number of vectors Uploading a large-scale dataset fast might be a challenge, but Qdrant has a few tricks to help you with that. The first important detail about data uploading is that the bottleneck is usually located on the client side, not on the server side. This means that if you are uploading a large dataset, you should prefer a high-performance client library. We recommend using our [Rust client library](https://github.com/qdrant/rust-client) for this purpose, as it is the fastest client library available for Qdrant. If you are not using Rust, you might want to consider parallelizing your upload process. ## Disable indexing during upload In case you are doing an initial upload of a large dataset, you might want to disable indexing during upload. It will enable to avoid unnecessary indexing of vectors, which will be overwritten by the next batch. To disable indexing during upload, set `indexing_threshold` to `0`: ```http PUT /collections/{collection_name} { ""vectors"": { ""size"": 768, ""distance"": ""Cosine"" }, ""optimizers_config"": { ""indexing_threshold"": 0 } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.create_collection( collection_name=""{collection_name}"", vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE), optimizers_config=models.OptimizersConfigDiff( indexing_threshold=0, ), ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.createCollection(""{collection_name}"", { vectors: { size: 768, distance: ""Cosine"", }, optimizers_config: { indexing_threshold: 0, }, }); ``` After upload is done, you can enable indexing by setting `indexing_threshold` to a desired value (default is 20000): ```http PATCH /collections/{collection_name} { ""optimizers_config"": { ""indexing_threshold"": 20000 } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.update_collection( collection_name=""{collection_name}"", optimizer_config=models.OptimizersConfigDiff(indexing_threshold=20000), ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.updateCollection(""{collection_name}"", { optimizers_config: { indexing_threshold: 20000, }, }); ``` ## Upload directly to disk When the vectors you upload do not all fit in RAM, you likely want to use [memmap](../../concepts/storage/#configuring-memmap-storage) support. During collection [creation](../../concepts/collections/#create-collection), memmaps may be enabled on a per-vector basis using the `on_disk` parameter. This will store vector data directly on disk at all times. It is suitable for ingesting a large amount of data, essential for the billion scale benchmark. Using `memmap_threshold_kb` is not recommended in this case. It would require the [optimizer](../../concepts/optimizer/) to constantly transform in-memory segments into memmap segments on disk. This process is slower, and the optimizer can be a bottleneck when ingesting a large amount of data. Read more about this in [Configuring Memmap Storage](../../concepts/storage/#configuring-memmap-storage). ## Parallel upload into multiple shards In Qdrant, each collection is split into shards. Each shard has a separate Write-Ahead-Log (WAL), which is responsible for ordering operations. By creating multiple shards, you can parallelize upload of a large dataset. From 2 to 4 shards per one machine is a reasonable number. ```http PUT /collections/{collection_name} { ""vectors"": { ""size"": 768, ""distance"": ""Cosine"" }, ""shard_number"": 2 } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.create_collection( collection_name=""{collection_name}"", vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE), shard_number=2, ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.createCollection(""{collection_name}"", { vectors: { size: 768, distance: ""Cosine"", }, shard_number: 2, }); ``` ",documentation/tutorials/bulk-upload.md "--- title: Semantic code search weight: 22 --- # Use semantic search to navigate your codebase | Time: 45 min | Level: Intermediate | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/qdrant/examples/blob/master/code-search/code-search.ipynb) | | |--------------|---------------------|--|----| You too can enrich your applications with Qdrant semantic search. In this tutorial, we describe how you can use Qdrant to navigate a codebase, to help you find relevant code snippets. As an example, we will use the [Qdrant](https://github.com/qdrant/qdrant) source code itself, which is mostly written in Rust. ## The approach We want to search codebases using natural semantic queries, and searching for code based on similar logic. You can set up these tasks with embeddings: 1. General usage neural encoder for Natural Language Processing (NLP), in our case `all-MiniLM-L6-v2` from the [sentence-transformers](https://www.sbert.net/docs/pretrained_models.html) library. 2. Specialized embeddings for code-to-code similarity search. We use the `jina-embeddings-v2-base-code` model. To prepare our code for `all-MiniLM-L6-v2`, we preprocess the code to text that more closely resembles natural language. The Jina embeddings model supports a variety of standard programming languages, so there is no need to preprocess the snippets. We can use the code as is. NLP-based search is based on function signatures, but code search may return smaller pieces, such as loops. So, if we receive a particular function signature from the NLP model and part of its implementation from the code model, we merge the results and highlight the overlap. ## Data preparation Chunking the application sources into smaller parts is a non-trivial task. In general, functions, class methods, structs, enums, and all the other language-specific constructs are good candidates for chunks. They are big enough to contain some meaningful information, but small enough to be processed by embedding models with a limited context window. You can also use docstrings, comments, and other metadata can be used to enrich the chunks with additional information. ![Code chunking strategy](/documentation/tutorials/code-search/data-chunking.png) ### Parsing the codebase While our example uses Rust, you can use our approach with any other language. You can parse code with a [Language Server Protocol](https://microsoft.github.io/language-server-protocol/) (**LSP**) compatible tool. You can use an LSP to build a graph of the codebase, and then extract chunks. We did our work with the [rust-analyzer](https://rust-analyzer.github.io/). We exported the parsed codebase into the [LSIF](https://microsoft.github.io/language-server-protocol/specifications/lsif/0.4.0/specification/) format, a standard for code intelligence data. Next, we used the LSIF data to navigate the codebase and extract the chunks. For details, see our [code search demo](https://github.com/qdrant/demo-code-search). We then exported the chunks into JSON documents with not only the code itself, but also context with the location of the code in the project. For example, see the description of the `await_ready_for_timeout` function from the `IsReady` struct in the `common` module: ```json { ""name"":""await_ready_for_timeout"", ""signature"":""fn await_ready_for_timeout (& self , timeout : Duration) -> bool"", ""code_type"":""Function"", ""docstring"":""= \"" Return `true` if ready, `false` if timed out.\"""", ""line"":44, ""line_from"":43, ""line_to"":51, ""context"":{ ""module"":""common"", ""file_path"":""lib/collection/src/common/is_ready.rs"", ""file_name"":""is_ready.rs"", ""struct_name"":""IsReady"", ""snippet"":"" /// Return `true` if ready, `false` if timed out.\n pub fn await_ready_for_timeout(&self, timeout: Duration) -> bool {\n let mut is_ready = self.value.lock();\n if !*is_ready {\n !self.condvar.wait_for(&mut is_ready, timeout).timed_out()\n } else {\n true\n }\n }\n"" } } ``` You can examine the Qdrant structures, parsed in JSON, in the [`structures.jsonl` file](https://storage.googleapis.com/tutorial-attachments/code-search/structures.jsonl) in our Google Cloud Storage bucket. Download it and use it as a source of data for our code search. ```shell wget https://storage.googleapis.com/tutorial-attachments/code-search/structures.jsonl ``` Next, load the file and parse the lines into a list of dictionaries: ```python import json structures = [] with open(""structures.jsonl"", ""r"") as fp: for i, row in enumerate(fp): entry = json.loads(row) structures.append(entry) ``` ### Code to *natural language* conversion Each programming language has its own syntax which is not a part of the natural language. Thus, a general-purpose model probably does not understand the code as is. We can, however, normalize the data by removing code specifics and including additional context, such as module, class, function, and file name. We took the following steps: 1. Extract the signature of the function, method, or other code construct. 2. Divide camel case and snake case names into separate words. 3. Take the docstring, comments, and other important metadata. 4. Build a sentence from the extracted data using a predefined template. 5. Remove the special characters and replace them with spaces. As input, expect dictionaries with the same structure. Define a `textify` function to do the conversion. We'll use an `inflection` library to convert with different naming conventions. ```shell pip install inflection ``` Once all dependencies are installed, we define the `textify` function: ```python import inflection import re from typing import Dict, Any def textify(chunk: Dict[str, Any]) -> str: # Get rid of all the camel case / snake case # - inflection.underscore changes the camel case to snake case # - inflection.humanize converts the snake case to human readable form name = inflection.humanize(inflection.underscore(chunk[""name""])) signature = inflection.humanize(inflection.underscore(chunk[""signature""])) # Check if docstring is provided docstring = """" if chunk[""docstring""]: docstring = f""that does {chunk['docstring']} "" # Extract the location of that snippet of code context = ( f""module {chunk['context']['module']} "" f""file {chunk['context']['file_name']}"" ) if chunk[""context""][""struct_name""]: struct_name = inflection.humanize( inflection.underscore(chunk[""context""][""struct_name""]) ) context = f""defined in struct {struct_name} {context}"" # Combine all the bits and pieces together text_representation = ( f""{chunk['code_type']} {name} "" f""{docstring}"" f""defined as {signature} "" f""{context}"" ) # Remove any special characters and concatenate the tokens tokens = re.split(r""\W"", text_representation) tokens = filter(lambda x: x, tokens) return "" "".join(tokens) ``` Now we can use `textify` to convert all chunks into text representations: ```python text_representations = list(map(textify, structures)) ``` This is how the `await_ready_for_timeout` function description appears: ```text Function Await ready for timeout that does Return true if ready false if timed out defined as Fn await ready for timeout self timeout duration bool defined in struct Is ready module common file is_ready rs ``` ## Ingestion pipeline Next, we build the code search engine to vectorizing data and set up a semantic search mechanism for both embedding models. ### Natural language embeddings We can encode text representations through the `all-MiniLM-L6-v2` model from `sentence-transformers`. With the following command, we install `sentence-transformers` with dependencies: ```shell pip install sentence-transformers optimum onnx ``` Then we can use the model to encode the text representations: ```python from sentence_transformers import SentenceTransformer nlp_model = SentenceTransformer(""all-MiniLM-L6-v2"") nlp_embeddings = nlp_model.encode( text_representations, show_progress_bar=True, ) ``` ### Code embeddings The `jina-embeddings-v2-base-code` model is a good candidate for this task. You can also get it from the `sentence-transformers` library, with conditions. Visit [the model page](https://huggingface.co/jinaai/jina-embeddings-v2-base-code), accept the rules, and generate the access token in your [account settings](https://huggingface.co/settings/tokens). Once you have the token, you can use the model as follows: ```python HF_TOKEN = ""THIS_IS_YOUR_TOKEN"" # Extract the code snippets from the structures to a separate list code_snippets = [ structure[""context""][""snippet""] for structure in structures ] code_model = SentenceTransformer( ""jinaai/jina-embeddings-v2-base-code"", token=HF_TOKEN, trust_remote_code=True ) code_model.max_seq_length = 8192 # increase the context length window code_embeddings = code_model.encode( code_snippets, batch_size=4, show_progress_bar=True, ) ``` Remember to set the `trust_remote_code` parameter to `True`. Otherwise, the model does not produce meaningful vectors. Setting this parameter allows the library to download and possibly launch some code on your machine, so be sure to trust the source. With both the natural language and code embeddings, we can store them in the Qdrant collection. ### Building Qdrant collection We use the `qdrant-client` library to interact with the Qdrant server. Let's install that client: ```shell pip install qdrant-client ``` Of course, we need a running Qdrant server for vector search. If you need one, you can [use a local Docker container](/documentation/quick-start/) or deploy it using the [Qdrant Cloud](https://cloud.qdrant.io/). You can use either to follow this tutorial. Configure the connection parameters: ```python QDRANT_URL = ""https://my-cluster.cloud.qdrant.io:6333"" # http://localhost:6333 for local instance QDRANT_API_KEY = ""THIS_IS_YOUR_API_KEY"" # None for local instance ``` Then use the library to create a collection: ```python from qdrant_client import QdrantClient, models client = QdrantClient(QDRANT_URL, api_key=QDRANT_API_KEY) client.create_collection( ""qdrant-sources"", vectors_config={ ""text"": models.VectorParams( size=nlp_embeddings.shape[1], distance=models.Distance.COSINE, ), ""code"": models.VectorParams( size=code_embeddings.shape[1], distance=models.Distance.COSINE, ), } ) ``` Our newly created collection is ready to accept the data. Let's upload the embeddings: ```python import uuid points = [ models.PointStruct( id=uuid.uuid4().hex, vector={ ""text"": text_embedding, ""code"": code_embedding, }, payload=structure, ) for text_embedding, code_embedding, structure in zip(nlp_embeddings, code_embeddings, structures) ] client.upload_points(""qdrant-sources"", points=points, batch_size=64) ``` The uploaded points are immediately available for search. Next, query the collection to find relevant code snippets. ## Querying the codebase We use one of the models to search the collection. Start with text embeddings. Run the following query ""*How do I count points in a collection?*"". Review the results. ```python query = ""How do I count points in a collection?"" hits = client.query_points( ""qdrant-sources"", query=nlp_model.encode(query).tolist(), using=""text"", limit=5, ).points ``` Now, review the results. The following table lists the module, the file name and score. Each line includes a link to the signature, as a code block from the file. | module | file_name | score | signature | |--------------------|---------------------|------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | toc | point_ops.rs | 0.59448624 | [ `pub async fn count`](https://github.com/qdrant/qdrant/blob/7aa164bd2dda1c0fc9bf3a0da42e656c95c2e52a/lib/storage/src/content_manager/toc/point_ops.rs#L120) | | operations | types.rs | 0.5493385 | [ `pub struct CountRequestInternal`](https://github.com/qdrant/qdrant/blob/7aa164bd2dda1c0fc9bf3a0da42e656c95c2e52a/lib/collection/src/operations/types.rs#L831) | | collection_manager | segments_updater.rs | 0.5121002 | [ `pub(crate) fn upsert_points<'a, T>`](https://github.com/qdrant/qdrant/blob/7aa164bd2dda1c0fc9bf3a0da42e656c95c2e52a/lib/collection/src/collection_manager/segments_updater.rs#L339) | | collection | point_ops.rs | 0.5063539 | [ `pub async fn count`](https://github.com/qdrant/qdrant/blob/7aa164bd2dda1c0fc9bf3a0da42e656c95c2e52a/lib/collection/src/collection/point_ops.rs#L213) | | map_index | mod.rs | 0.49973983 | [ `fn get_points_with_value_count`](https://github.com/qdrant/qdrant/blob/7aa164bd2dda1c0fc9bf3a0da42e656c95c2e52a/lib/segment/src/index/field_index/map_index/mod.rs#L88) | It seems we were able to find some relevant code structures. Let's try the same with the code embeddings: ```python hits = client.query_points( ""qdrant-sources"", query=code_model.encode(query).tolist(), using=""code"", limit=5, ).points ``` Output: | module | file_name | score | signature | |---------------|----------------------------|------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | field_index | geo_index.rs | 0.73278356 | [ `fn count_indexed_points`](https://github.com/qdrant/qdrant/blob/7aa164bd2dda1c0fc9bf3a0da42e656c95c2e52a/lib/segment/src/index/field_index/geo_index.rs#L612) | | numeric_index | mod.rs | 0.7254976 | [ `fn count_indexed_points`](https://github.com/qdrant/qdrant/blob/3fbe1cae6cb7f51a0c5bb4b45cfe6749ac76ed59/lib/segment/src/index/field_index/numeric_index/mod.rs#L322) | | map_index | mod.rs | 0.7124739 | [ `fn count_indexed_points`](https://github.com/qdrant/qdrant/blob/3fbe1cae6cb7f51a0c5bb4b45cfe6749ac76ed59/lib/segment/src/index/field_index/map_index/mod.rs#L315) | | map_index | mod.rs | 0.7124739 | [ `fn count_indexed_points`](https://github.com/qdrant/qdrant/blob/3fbe1cae6cb7f51a0c5bb4b45cfe6749ac76ed59/lib/segment/src/index/field_index/map_index/mod.rs#L429) | | fixtures | payload_context_fixture.rs | 0.706204 | [ `fn total_point_count`](https://github.com/qdrant/qdrant/blob/3fbe1cae6cb7f51a0c5bb4b45cfe6749ac76ed59/lib/segment/src/fixtures/payload_context_fixture.rs#L122) | While the scores retrieved by different models are not comparable, but we can see that the results are different. Code and text embeddings can capture different aspects of the codebase. We can use both models to query the collection and then combine the results to get the most relevant code snippets, from a single batch request. ```python responses = client.query_batch_points( ""qdrant-sources"", requests=[ models.QueryRequest( query=nlp_model.encode(query).tolist(), using=""text"", with_payload=True, limit=5, ), models.QueryRequest( query=code_model.encode(query).tolist(), using=""code"", with_payload=True, limit=5, ), ] ) results = [response.points for response in responses] ``` Output: | module | file_name | score | signature | |--------------------|----------------------------|------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | toc | point_ops.rs | 0.59448624 | [ `pub async fn count`](https://github.com/qdrant/qdrant/blob/7aa164bd2dda1c0fc9bf3a0da42e656c95c2e52a/lib/storage/src/content_manager/toc/point_ops.rs#L120) | | operations | types.rs | 0.5493385 | [ `pub struct CountRequestInternal`](https://github.com/qdrant/qdrant/blob/7aa164bd2dda1c0fc9bf3a0da42e656c95c2e52a/lib/collection/src/operations/types.rs#L831) | | collection_manager | segments_updater.rs | 0.5121002 | [ `pub(crate) fn upsert_points<'a, T>`](https://github.com/qdrant/qdrant/blob/7aa164bd2dda1c0fc9bf3a0da42e656c95c2e52a/lib/collection/src/collection_manager/segments_updater.rs#L339) | | collection | point_ops.rs | 0.5063539 | [ `pub async fn count`](https://github.com/qdrant/qdrant/blob/7aa164bd2dda1c0fc9bf3a0da42e656c95c2e52a/lib/collection/src/collection/point_ops.rs#L213) | | map_index | mod.rs | 0.49973983 | [ `fn get_points_with_value_count`](https://github.com/qdrant/qdrant/blob/7aa164bd2dda1c0fc9bf3a0da42e656c95c2e52a/lib/segment/src/index/field_index/map_index/mod.rs#L88) | | field_index | geo_index.rs | 0.73278356 | [ `fn count_indexed_points`](https://github.com/qdrant/qdrant/blob/7aa164bd2dda1c0fc9bf3a0da42e656c95c2e52a/lib/segment/src/index/field_index/geo_index.rs#L612) | | numeric_index | mod.rs | 0.7254976 | [ `fn count_indexed_points`](https://github.com/qdrant/qdrant/blob/3fbe1cae6cb7f51a0c5bb4b45cfe6749ac76ed59/lib/segment/src/index/field_index/numeric_index/mod.rs#L322) | | map_index | mod.rs | 0.7124739 | [ `fn count_indexed_points`](https://github.com/qdrant/qdrant/blob/3fbe1cae6cb7f51a0c5bb4b45cfe6749ac76ed59/lib/segment/src/index/field_index/map_index/mod.rs#L315) | | map_index | mod.rs | 0.7124739 | [ `fn count_indexed_points`](https://github.com/qdrant/qdrant/blob/3fbe1cae6cb7f51a0c5bb4b45cfe6749ac76ed59/lib/segment/src/index/field_index/map_index/mod.rs#L429) | | fixtures | payload_context_fixture.rs | 0.706204 | [ `fn total_point_count`](https://github.com/qdrant/qdrant/blob/3fbe1cae6cb7f51a0c5bb4b45cfe6749ac76ed59/lib/segment/src/fixtures/payload_context_fixture.rs#L122) | This is one example of how you can use different models and combine the results. In a real-world scenario, you might run some reranking and deduplication, as well as additional processing of the results. ### Code search demo Our [Code search demo](https://code-search.qdrant.tech/) uses the following process: 1. The user sends a query. 1. Both models vectorize that query simultaneously. We get two different vectors. 1. Both vectors are used in parallel to find relevant snippets. We expect 5 examples from the NLP search and 20 examples from the code search. 1. Once we retrieve results for both vectors, we merge them in one of the following scenarios: 1. If both methods return different results, we prefer the results from the general usage model (NLP). 1. If there is an overlap between the search results, we merge overlapping snippets. In the screenshot, we search for `flush of wal`. The result shows relevant code, merged from both models. Note the highlighted code in lines 621-629. It's where both models agree. ![Results from both models, with overlap](/documentation/tutorials/code-search/code-search-demo-example.png) Now you see semantic code intelligence, in action. ### Grouping the results You can improve the search results, by grouping them by payload properties. In our case, we can group the results by the module. If we use code embeddings, we can see multiple results from the `map_index` module. Let's group the results and assume a single result per module: ```python results = client.search_groups( ""qdrant-sources"", query_vector=( ""code"", code_model.encode(query).tolist() ), group_by=""context.module"", limit=5, group_size=1, ) ``` Output: | module | file_name | score | signature | |---------------|----------------------------|------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | field_index | geo_index.rs | 0.73278356 | [ `fn count_indexed_points`](https://github.com/qdrant/qdrant/blob/7aa164bd2dda1c0fc9bf3a0da42e656c95c2e52a/lib/segment/src/index/field_index/geo_index.rs#L612) | | numeric_index | mod.rs | 0.7254976 | [ `fn count_indexed_points`](https://github.com/qdrant/qdrant/blob/3fbe1cae6cb7f51a0c5bb4b45cfe6749ac76ed59/lib/segment/src/index/field_index/numeric_index/mod.rs#L322) | | map_index | mod.rs | 0.7124739 | [ `fn count_indexed_points`](https://github.com/qdrant/qdrant/blob/3fbe1cae6cb7f51a0c5bb4b45cfe6749ac76ed59/lib/segment/src/index/field_index/map_index/mod.rs#L315) | | fixtures | payload_context_fixture.rs | 0.706204 | [ `fn total_point_count`](https://github.com/qdrant/qdrant/blob/3fbe1cae6cb7f51a0c5bb4b45cfe6749ac76ed59/lib/segment/src/fixtures/payload_context_fixture.rs#L122) | | hnsw_index | graph_links.rs | 0.6998417 | [ `fn num_points `](https://github.com/qdrant/qdrant/blob/3fbe1cae6cb7f51a0c5bb4b45cfe6749ac76ed59/lib/segment/src/index/hnsw_index/graph_links.rs#L477) | With the grouping feature, we get more diverse results. ## Summary This tutorial demonstrates how to use Qdrant to navigate a codebase. For an end-to-end implementation, review the [code search notebook](https://colab.research.google.com/github/qdrant/examples/blob/master/code-search/code-search.ipynb) and the [code-search-demo](https://github.com/qdrant/demo-code-search). You can also check out [a running version of the code search demo](https://code-search.qdrant.tech/) which exposes Qdrant codebase for search with a web interface. ",documentation/tutorials/code-search.md "--- title: Measure retrieval quality weight: 21 --- # Measure retrieval quality | Time: 30 min | Level: Intermediate | | | |--------------|---------------------|--|----| Semantic search pipelines are as good as the embeddings they use. If your model cannot properly represent input data, similar objects might be far away from each other in the vector space. No surprise, that the search results will be poor in this case. There is, however, another component of the process which can also degrade the quality of the search results. It is the ANN algorithm itself. In this tutorial, we will show how to measure the quality of the semantic retrieval and how to tune the parameters of the HNSW, the ANN algorithm used in Qdrant, to obtain the best results. ## Embeddings quality The quality of the embeddings is a topic for a separate tutorial. In a nutshell, it is usually measured and compared by benchmarks, such as [Massive Text Embedding Benchmark (MTEB)](https://huggingface.co/spaces/mteb/leaderboard). The evaluation process itself is pretty straightforward and is based on a ground truth dataset built by humans. We have a set of queries and a set of the documents we would expect to receive for each of them. In the evaluation process, we take a query, find the most similar documents in the vector space and compare them with the ground truth. In that setup, **finding the most similar documents is implemented as full kNN search, without any approximation**. As a result, we can measure the quality of the embeddings themselves, without the influence of the ANN algorithm. ## Retrieval quality Embeddings quality is indeed the most important factor in the semantic search quality. However, vector search engines, such as Qdrant, do not perform pure kNN search. Instead, they use **Approximate Nearest Neighbors** (ANN) algorithms, which are much faster than the exact search, but can return suboptimal results. We can also **measure the retrieval quality of that approximation** which also contributes to the overall search quality. ### Quality metrics There are various ways of how quantify the quality of semantic search. Some of them, such as [Precision@k](https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval)#Precision_at_k), are based on the number of relevant documents in the top-k search results. Others, such as [Mean Reciprocal Rank (MRR)](https://en.wikipedia.org/wiki/Mean_reciprocal_rank), take into account the position of the first relevant document in the search results. [DCG and NDCG](https://en.wikipedia.org/wiki/Discounted_cumulative_gain) metrics are, in turn, based on the relevance score of the documents. If we treat the search pipeline as a whole, we could use them all. The same is true for the embeddings quality evaluation. However, for the ANN algorithm itself, anything based on the relevance score or ranking is not applicable. Ranking in vector search relies on the distance between the query and the document in the vector space, however distance is not going to change due to approximation, as the function is still the same. Therefore, it only makes sense to measure the quality of the ANN algorithm by the number of relevant documents in the top-k search results, such as `precision@k`. It is calculated as the number of relevant documents in the top-k search results divided by `k`. In case of testing just the ANN algorithm, we can use the exact kNN search as a ground truth, with `k` being fixed. It will be a measure on **how well the ANN algorithm approximates the exact search**. ## Measure the quality of the search results Let's build a quality evaluation of the ANN algorithm in Qdrant. We will, first, call the search endpoint in a standard way to obtain the approximate search results. Then, we will call the exact search endpoint to obtain the exact matches, and finally compare both results in terms of precision. Before we start, let's create a collection, fill it with some data and then start our evaluation. We will use the same dataset as in the [Loading a dataset from Hugging Face hub](/documentation/tutorials/huggingface-datasets/) tutorial, `Qdrant/arxiv-titles-instructorxl-embeddings` from the [Hugging Face hub](https://huggingface.co/datasets/Qdrant/arxiv-titles-instructorxl-embeddings). Let's download it in a streaming mode, as we are only going to use part of it. ```python from datasets import load_dataset dataset = load_dataset( ""Qdrant/arxiv-titles-instructorxl-embeddings"", split=""train"", streaming=True ) ``` We need some data to be indexed and another set for the testing purposes. Let's get the first 50000 items for the training and the next 1000 for the testing. ```python dataset_iterator = iter(dataset) train_dataset = [next(dataset_iterator) for _ in range(60000)] test_dataset = [next(dataset_iterator) for _ in range(1000)] ``` Now, let's create a collection and index the training data. This collection will be created with the default configuration. Please be aware that it might be different from your collection settings, and it's always important to test exactly the same configuration you are going to use later in production. ```python from qdrant_client import QdrantClient, models client = QdrantClient(""http://localhost:6333"") client.create_collection( collection_name=""arxiv-titles-instructorxl-embeddings"", vectors_config=models.VectorParams( size=768, # Size of the embeddings generated by InstructorXL model distance=models.Distance.COSINE, ), ) ``` We are now ready to index the training data. Uploading the records is going to trigger the indexing process, which will build the HNSW graph. The indexing process may take some time, depending on the size of the dataset, but your data is going to be available for search immediately after receiving the response from the `upsert` endpoint. **As long as the indexing is not finished, and HNSW not built, Qdrant will perform the exact search**. We have to wait until the indexing is finished to be sure that the approximate search is performed. ```python client.upload_points( # upload_points is available as of qdrant-client v1.7.1 collection_name=""arxiv-titles-instructorxl-embeddings"", points=[ models.PointStruct( id=item[""id""], vector=item[""vector""], payload=item, ) for item in train_dataset ] ) while True: collection_info = client.get_collection(collection_name=""arxiv-titles-instructorxl-embeddings"") if collection_info.status == models.CollectionStatus.GREEN: # Collection status is green, which means the indexing is finished break ``` ## Standard mode vs exact search Qdrant has a built-in exact search mode, which can be used to measure the quality of the search results. In this mode, Qdrant performs a full kNN search for each query, without any approximation. It is not suitable for production use with high load, but it is perfect for the evaluation of the ANN algorithm and its parameters. It might be triggered by setting the `exact` parameter to `True` in the search request. We are simply going to use all the examples from the test dataset as queries and compare the results of the approximate search with the results of the exact search. Let's create a helper function with `k` being a parameter, so we can calculate the `precision@k` for different values of `k`. ```python def avg_precision_at_k(k: int): precisions = [] for item in test_dataset: ann_result = client.query_points( collection_name=""arxiv-titles-instructorxl-embeddings"", query=item[""vector""], limit=k, ).points knn_result = client.query_points( collection_name=""arxiv-titles-instructorxl-embeddings"", query=item[""vector""], limit=k, search_params=models.SearchParams( exact=True, # Turns on the exact search mode ), ).points # We can calculate the precision@k by comparing the ids of the search results ann_ids = set(item.id for item in ann_result) knn_ids = set(item.id for item in knn_result) precision = len(ann_ids.intersection(knn_ids)) / k precisions.append(precision) return sum(precisions) / len(precisions) ``` Calculating the `precision@5` is as simple as calling the function with the corresponding parameter: ```python print(f""avg(precision@5) = {avg_precision_at_k(k=5)}"") ``` Response: ```text avg(precision@5) = 0.9935999999999995 ``` As we can see, the precision of the approximate search vs exact search is pretty high. There are, however, some scenarios when we need higher precision and can accept higher latency. HNSW is pretty tunable, and we can increase the precision by changing its parameters. ## Tweaking the HNSW parameters HNSW is a hierarchical graph, where each node has a set of links to other nodes. The number of edges per node is called the `m` parameter. The larger the value of it, the higher the precision of the search, but more space required. The `ef_construct` parameter is the number of neighbours to consider during the index building. Again, the larger the value, the higher the precision, but the longer the indexing time. The default values of these parameters are `m=16` and `ef_construct=100`. Let's try to increase them to `m=32` and `ef_construct=200` and see how it affects the precision. Of course, we need to wait until the indexing is finished before we can perform the search. ```python client.update_collection( collection_name=""arxiv-titles-instructorxl-embeddings"", hnsw_config=models.HnswConfigDiff( m=32, # Increase the number of edges per node from the default 16 to 32 ef_construct=200, # Increase the number of neighbours from the default 100 to 200 ) ) while True: collection_info = client.get_collection(collection_name=""arxiv-titles-instructorxl-embeddings"") if collection_info.status == models.CollectionStatus.GREEN: # Collection status is green, which means the indexing is finished break ``` The same function can be used to calculate the average `precision@5`: ```python print(f""avg(precision@5) = {avg_precision_at_k(k=5)}"") ``` Response: ```text avg(precision@5) = 0.9969999999999998 ``` The precision has obviously increased, and we know how to control it. However, there is a trade-off between the precision and the search latency and memory requirements. In some specific cases, we may want to increase the precision as much as possible, so now we know how to do it. ## Wrapping up Assessing the quality of retrieval is a critical aspect of evaluating semantic search performance. It is imperative to measure retrieval quality when aiming for optimal quality of. your search results. Qdrant provides a built-in exact search mode, which can be used to measure the quality of the ANN algorithm itself, even in an automated way, as part of your CI/CD pipeline. Again, **the quality of the embeddings is the most important factor**. HNSW does a pretty good job in terms of precision, and it is parameterizable and tunable, when required. There are some other ANN algorithms available out there, such as [IVF*](https://github.com/facebookresearch/faiss/wiki/Faiss-indexes#cell-probe-methods-indexivf-indexes), but they usually [perform worse than HNSW in terms of quality and performance](https://nirantk.com/writing/pgvector-vs-qdrant/#correctness). ",documentation/tutorials/retrieval-quality.md "--- title: Neural Search Service weight: 1 --- # Create a Simple Neural Search Service | Time: 30 min | Level: Beginner | Output: [GitHub](https://github.com/qdrant/qdrant_demo/tree/sentense-transformers) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1kPktoudAP8Tu8n8l-iVMOQhVmHkWV_L9?usp=sharing) | | --- | ----------- | ----------- |----------- | This tutorial shows you how to build and deploy your own neural search service to look through descriptions of companies from [startups-list.com](https://www.startups-list.com/) and pick the most similar ones to your query. The website contains the company names, descriptions, locations, and a picture for each entry. A neural search service uses artificial neural networks to improve the accuracy and relevance of search results. Besides offering simple keyword results, this system can retrieve results by meaning. It can understand and interpret complex search queries and provide more contextually relevant output, effectively enhancing the user's search experience. ## Workflow To create a neural search service, you will need to transform your raw data and then create a search function to manipulate it. First, you will 1) download and prepare a sample dataset using a modified version of the BERT ML model. Then, you will 2) load the data into Qdrant, 3) create a neural search API and 4) serve it using FastAPI. ![Neural Search Workflow](/docs/workflow-neural-search.png) > **Note**: The code for this tutorial can be found here: | [Step 1: Data Preparation Process](https://colab.research.google.com/drive/1kPktoudAP8Tu8n8l-iVMOQhVmHkWV_L9?usp=sharing) | [Step 2: Full Code for Neural Search](https://github.com/qdrant/qdrant_demo/tree/sentense-transformers). | ## Prerequisites To complete this tutorial, you will need: - Docker - The easiest way to use Qdrant is to run a pre-built Docker image. - [Raw parsed data](https://storage.googleapis.com/generall-shared-data/startups_demo.json) from startups-list.com. - Python version >=3.8 ## Prepare sample dataset To conduct a neural search on startup descriptions, you must first encode the description data into vectors. To process text, you can use a pre-trained models like [BERT](https://en.wikipedia.org/wiki/BERT_(language_model)) or sentence transformers. The [sentence-transformers](https://github.com/UKPLab/sentence-transformers) library lets you conveniently download and use many pre-trained models, such as DistilBERT, MPNet, etc. 1. First you need to download the dataset. ```bash wget https://storage.googleapis.com/generall-shared-data/startups_demo.json ``` 2. Install the SentenceTransformer library as well as other relevant packages. ```bash pip install sentence-transformers numpy pandas tqdm ``` 3. Import the required modules. ```python from sentence_transformers import SentenceTransformer import numpy as np import json import pandas as pd from tqdm.notebook import tqdm ``` You will be using a pre-trained model called `all-MiniLM-L6-v2`. This is a performance-optimized sentence embedding model and you can read more about it and other available models [here](https://www.sbert.net/docs/pretrained_models.html). 4. Download and create a pre-trained sentence encoder. ```python model = SentenceTransformer( ""all-MiniLM-L6-v2"", device=""cuda"" ) # or device=""cpu"" if you don't have a GPU ``` 5. Read the raw data file. ```python df = pd.read_json(""./startups_demo.json"", lines=True) ``` 6. Encode all startup descriptions to create an embedding vector for each. Internally, the `encode` function will split the input into batches, which will significantly speed up the process. ```python vectors = model.encode( [row.alt + "". "" + row.description for row in df.itertuples()], show_progress_bar=True, ) ``` All of the descriptions are now converted into vectors. There are 40474 vectors of 384 dimensions. The output layer of the model has this dimension ```python vectors.shape # > (40474, 384) ``` 7. Download the saved vectors into a new file named `startup_vectors.npy` ```python np.save(""startup_vectors.npy"", vectors, allow_pickle=False) ``` ## Run Qdrant in Docker Next, you need to manage all of your data using a vector engine. Qdrant lets you store, update or delete created vectors. Most importantly, it lets you search for the nearest vectors via a convenient API. > **Note:** Before you begin, create a project directory and a virtual python environment in it. 1. Download the Qdrant image from DockerHub. ```bash docker pull qdrant/qdrant ``` 2. Start Qdrant inside of Docker. ```bash docker run -p 6333:6333 \ -v $(pwd)/qdrant_storage:/qdrant/storage \ qdrant/qdrant ``` You should see output like this ```text ... [2021-02-05T00:08:51Z INFO actix_server::builder] Starting 12 workers [2021-02-05T00:08:51Z INFO actix_server::builder] Starting ""actix-web-service-0.0.0.0:6333"" service on 0.0.0.0:6333 ``` Test the service by going to [http://localhost:6333/](http://localhost:6333/). You should see the Qdrant version info in your browser. All data uploaded to Qdrant is saved inside the `./qdrant_storage` directory and will be persisted even if you recreate the container. ## Upload data to Qdrant 1. Install the official Python client to best interact with Qdrant. ```bash pip install qdrant-client ``` At this point, you should have startup records in the `startups_demo.json` file, encoded vectors in `startup_vectors.npy` and Qdrant running on a local machine. Now you need to write a script to upload all startup data and vectors into the search engine. 2. Create a client object for Qdrant. ```python # Import client library from qdrant_client import QdrantClient from qdrant_client.models import VectorParams, Distance client = QdrantClient(""http://localhost:6333"") ``` 3. Related vectors need to be added to a collection. Create a new collection for your startup vectors. ```python if not client.collection_exists(""startups""): client.create_collection( collection_name=""startups"", vectors_config=VectorParams(size=384, distance=Distance.COSINE), ) ``` 4. Create an iterator over the startup data and vectors. The Qdrant client library defines a special function that allows you to load datasets into the service. However, since there may be too much data to fit a single computer memory, the function takes an iterator over the data as input. ```python fd = open(""./startups_demo.json"") # payload is now an iterator over startup data payload = map(json.loads, fd) # Load all vectors into memory, numpy array works as iterable for itself. # Other option would be to use Mmap, if you don't want to load all data into RAM vectors = np.load(""./startup_vectors.npy"") ``` 5. Upload the data ```python client.upload_collection( collection_name=""startups"", vectors=vectors, payload=payload, ids=None, # Vector ids will be assigned automatically batch_size=256, # How many vectors will be uploaded in a single request? ) ``` Vectors are now uploaded to Qdrant. ## Build the search API Now that all the preparations are complete, let's start building a neural search class. In order to process incoming requests, neural search will need 2 things: 1) a model to convert the query into a vector and 2) the Qdrant client to perform search queries. 1. Create a file named `neural_searcher.py` and specify the following. ```python from qdrant_client import QdrantClient from sentence_transformers import SentenceTransformer class NeuralSearcher: def __init__(self, collection_name): self.collection_name = collection_name # Initialize encoder model self.model = SentenceTransformer(""all-MiniLM-L6-v2"", device=""cpu"") # initialize Qdrant client self.qdrant_client = QdrantClient(""http://localhost:6333"") ``` 2. Write the search function. ```python def search(self, text: str): # Convert text query into vector vector = self.model.encode(text).tolist() # Use `vector` for search for closest vectors in the collection search_result = self.qdrant_client.query_points( collection_name=self.collection_name, query=vector, query_filter=None, # If you don't want any filters for now limit=5, # 5 the most closest results is enough ).points # `search_result` contains found vector ids with similarity scores along with the stored payload # In this function you are interested in payload only payloads = [hit.payload for hit in search_result] return payloads ``` 3. Add search filters. With Qdrant it is also feasible to add some conditions to the search. For example, if you wanted to search for startups in a certain city, the search query could look like this: ```python from qdrant_client.models import Filter ... city_of_interest = ""Berlin"" # Define a filter for cities city_filter = Filter(**{ ""must"": [{ ""key"": ""city"", # Store city information in a field of the same name ""match"": { # This condition checks if payload field has the requested value ""value"": city_of_interest } }] }) search_result = self.qdrant_client.query_points( collection_name=self.collection_name, query=vector, query_filter=city_filter, limit=5 ).points ... ``` You have now created a class for neural search queries. Now wrap it up into a service. ## Deploy the search with FastAPI To build the service you will use the FastAPI framework. 1. Install FastAPI. To install it, use the command ```bash pip install fastapi uvicorn ``` 2. Implement the service. Create a file named `service.py` and specify the following. The service will have only one API endpoint and will look like this: ```python from fastapi import FastAPI # The file where NeuralSearcher is stored from neural_searcher import NeuralSearcher app = FastAPI() # Create a neural searcher instance neural_searcher = NeuralSearcher(collection_name=""startups"") @app.get(""/api/search"") def search_startup(q: str): return {""result"": neural_searcher.search(text=q)} if __name__ == ""__main__"": import uvicorn uvicorn.run(app, host=""0.0.0.0"", port=8000) ``` 3. Run the service. ```bash python service.py ``` 4. Open your browser at [http://localhost:8000/docs](http://localhost:8000/docs). You should be able to see a debug interface for your service. ![FastAPI Swagger interface](/docs/fastapi_neural_search.png) Feel free to play around with it, make queries regarding the companies in our corpus, and check out the results. ## Next steps The code from this tutorial has been used to develop a [live online demo](https://qdrant.to/semantic-search-demo). You can try it to get an intuition for cases when the neural search is useful. The demo contains a switch that selects between neural and full-text searches. You can turn the neural search on and off to compare your result with a regular full-text search. > **Note**: The code for this tutorial can be found here: | [Step 1: Data Preparation Process](https://colab.research.google.com/drive/1kPktoudAP8Tu8n8l-iVMOQhVmHkWV_L9?usp=sharing) | [Step 2: Full Code for Neural Search](https://github.com/qdrant/qdrant_demo/tree/sentense-transformers). | Join our [Discord community](https://qdrant.to/discord), where we talk about vector search and similarity learning, publish other examples of neural networks and neural search applications. ",documentation/tutorials/neural-search.md "--- title: Semantic Search 101 weight: -100 aliases: - /documentation/tutorials/mighty.md/ --- # Semantic Search for Beginners | Time: 5 - 15 min | Level: Beginner | | | | --- | ----------- | ----------- |----------- |

## Overview If you are new to vector databases, this tutorial is for you. In 5 minutes you will build a semantic search engine for science fiction books. After you set it up, you will ask the engine about an impending alien threat. Your creation will recommend books as preparation for a potential space attack. Before you begin, you need to have a [recent version of Python](https://www.python.org/downloads/) installed. If you don't know how to run this code in a virtual environment, follow Python documentation for [Creating Virtual Environments](https://docs.python.org/3/tutorial/venv.html#creating-virtual-environments) first. This tutorial assumes you're in the bash shell. Use the Python documentation to activate a virtual environment, with commands such as: ```bash source tutorial-env/bin/activate ``` ## 1. Installation You need to process your data so that the search engine can work with it. The [Sentence Transformers](https://www.sbert.net/) framework gives you access to common Large Language Models that turn raw data into embeddings. ```bash pip install -U sentence-transformers ``` Once encoded, this data needs to be kept somewhere. Qdrant lets you store data as embeddings. You can also use Qdrant to run search queries against this data. This means that you can ask the engine to give you relevant answers that go way beyond keyword matching. ```bash pip install -U qdrant-client ``` ### Import the models Once the two main frameworks are defined, you need to specify the exact models this engine will use. Before you do, activate the Python prompt (`>>>`) with the `python` command. ```python from qdrant_client import models, QdrantClient from sentence_transformers import SentenceTransformer ``` The [Sentence Transformers](https://www.sbert.net/index.html) framework contains many embedding models. However, [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) is the fastest encoder for this tutorial. ```python encoder = SentenceTransformer(""all-MiniLM-L6-v2"") ``` ## 2. Add the dataset [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) will encode the data you provide. Here you will list all the science fiction books in your library. Each book has metadata, a name, author, publication year and a short description. ```python documents = [ { ""name"": ""The Time Machine"", ""description"": ""A man travels through time and witnesses the evolution of humanity."", ""author"": ""H.G. Wells"", ""year"": 1895, }, { ""name"": ""Ender's Game"", ""description"": ""A young boy is trained to become a military leader in a war against an alien race."", ""author"": ""Orson Scott Card"", ""year"": 1985, }, { ""name"": ""Brave New World"", ""description"": ""A dystopian society where people are genetically engineered and conditioned to conform to a strict social hierarchy."", ""author"": ""Aldous Huxley"", ""year"": 1932, }, { ""name"": ""The Hitchhiker's Guide to the Galaxy"", ""description"": ""A comedic science fiction series following the misadventures of an unwitting human and his alien friend."", ""author"": ""Douglas Adams"", ""year"": 1979, }, { ""name"": ""Dune"", ""description"": ""A desert planet is the site of political intrigue and power struggles."", ""author"": ""Frank Herbert"", ""year"": 1965, }, { ""name"": ""Foundation"", ""description"": ""A mathematician develops a science to predict the future of humanity and works to save civilization from collapse."", ""author"": ""Isaac Asimov"", ""year"": 1951, }, { ""name"": ""Snow Crash"", ""description"": ""A futuristic world where the internet has evolved into a virtual reality metaverse."", ""author"": ""Neal Stephenson"", ""year"": 1992, }, { ""name"": ""Neuromancer"", ""description"": ""A hacker is hired to pull off a near-impossible hack and gets pulled into a web of intrigue."", ""author"": ""William Gibson"", ""year"": 1984, }, { ""name"": ""The War of the Worlds"", ""description"": ""A Martian invasion of Earth throws humanity into chaos."", ""author"": ""H.G. Wells"", ""year"": 1898, }, { ""name"": ""The Hunger Games"", ""description"": ""A dystopian society where teenagers are forced to fight to the death in a televised spectacle."", ""author"": ""Suzanne Collins"", ""year"": 2008, }, { ""name"": ""The Andromeda Strain"", ""description"": ""A deadly virus from outer space threatens to wipe out humanity."", ""author"": ""Michael Crichton"", ""year"": 1969, }, { ""name"": ""The Left Hand of Darkness"", ""description"": ""A human ambassador is sent to a planet where the inhabitants are genderless and can change gender at will."", ""author"": ""Ursula K. Le Guin"", ""year"": 1969, }, { ""name"": ""The Three-Body Problem"", ""description"": ""Humans encounter an alien civilization that lives in a dying system."", ""author"": ""Liu Cixin"", ""year"": 2008, }, ] ``` ## 3. Define storage location You need to tell Qdrant where to store embeddings. This is a basic demo, so your local computer will use its memory as temporary storage. ```python client = QdrantClient("":memory:"") ``` ## 4. Create a collection All data in Qdrant is organized by collections. In this case, you are storing books, so we are calling it `my_books`. ```python client.create_collection( collection_name=""my_books"", vectors_config=models.VectorParams( size=encoder.get_sentence_embedding_dimension(), # Vector size is defined by used model distance=models.Distance.COSINE, ), ) ``` - The `vector_size` parameter defines the size of the vectors for a specific collection. If their size is different, it is impossible to calculate the distance between them. 384 is the encoder output dimensionality. You can also use model.get_sentence_embedding_dimension() to get the dimensionality of the model you are using. - The `distance` parameter lets you specify the function used to measure the distance between two points. ## 5. Upload data to collection Tell the database to upload `documents` to the `my_books` collection. This will give each record an id and a payload. The payload is just the metadata from the dataset. ```python client.upload_points( collection_name=""my_books"", points=[ models.PointStruct( id=idx, vector=encoder.encode(doc[""description""]).tolist(), payload=doc ) for idx, doc in enumerate(documents) ], ) ``` ## 6. Ask the engine a question Now that the data is stored in Qdrant, you can ask it questions and receive semantically relevant results. ```python hits = client.query_points( collection_name=""my_books"", query=encoder.encode(""alien invasion"").tolist(), limit=3, ).points for hit in hits: print(hit.payload, ""score:"", hit.score) ``` **Response:** The search engine shows three of the most likely responses that have to do with the alien invasion. Each of the responses is assigned a score to show how close the response is to the original inquiry. ```text {'name': 'The War of the Worlds', 'description': 'A Martian invasion of Earth throws humanity into chaos.', 'author': 'H.G. Wells', 'year': 1898} score: 0.570093257022374 {'name': ""The Hitchhiker's Guide to the Galaxy"", 'description': 'A comedic science fiction series following the misadventures of an unwitting human and his alien friend.', 'author': 'Douglas Adams', 'year': 1979} score: 0.5040468703143637 {'name': 'The Three-Body Problem', 'description': 'Humans encounter an alien civilization that lives in a dying system.', 'author': 'Liu Cixin', 'year': 2008} score: 0.45902943411768216 ``` ### Narrow down the query How about the most recent book from the early 2000s? ```python hits = client.query_points( collection_name=""my_books"", query=encoder.encode(""alien invasion"").tolist(), query_filter=models.Filter( must=[models.FieldCondition(key=""year"", range=models.Range(gte=2000))] ), limit=1, ).points for hit in hits: print(hit.payload, ""score:"", hit.score) ``` **Response:** The query has been narrowed down to one result from 2008. ```text {'name': 'The Three-Body Problem', 'description': 'Humans encounter an alien civilization that lives in a dying system.', 'author': 'Liu Cixin', 'year': 2008} score: 0.45902943411768216 ``` ## Next Steps Congratulations, you have just created your very first search engine! Trust us, the rest of Qdrant is not that complicated, either. For your next tutorial you should try building an actual [Neural Search Service with a complete API and a dataset](../../tutorials/neural-search/). ## Return to the bash shell To return to the bash prompt: 1. Press Ctrl+D to exit the Python prompt (`>>>`). 1. Enter the `deactivate` command to deactivate the virtual environment. ",documentation/tutorials/search-beginners.md "--- title: Multimodal Search weight: 4 --- # Multimodal Search with Qdrant and FastEmbed | Time: 15 min | Level: Beginner |Output: [GitHub](https://github.com/qdrant/examples/blob/master/multimodal-search/Multimodal_Search_with_FastEmbed.ipynb)|[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://githubtocolab.com/qdrant/examples/blob/master/multimodal-search/Multimodal_Search_with_FastEmbed.ipynb) | | --- | ----------- | ----------- | ----------- | In this tutorial, you will set up a simple Multimodal Image & Text Search with Qdrant & FastEmbed. ## Overview We often understand and share information more effectively when combining different types of data. For example, the taste of comfort food can trigger childhood memories. We might describe a song with just “pam pam clap” sounds. Instead of writing paragraphs. Sometimes, we may use emojis and stickers to express how we feel or to share complex ideas. Modalities of data such as **text**, **images**, **video** and **audio** in various combinations form valuable use cases for Semantic Search applications. Vector databases, being **modality-agnostic**, are perfect for building these applications. In this simple tutorial, we are working with two simple modalities: **image** and **text** data. However, you can create a Semantic Search application with any combination of modalities if you choose the right embedding model to bridge the **semantic gap**. > The **semantic gap** refers to the difference between low-level features (aka brightness) and high-level concepts (aka cuteness). For example, the [ImageBind model](https://github.com/facebookresearch/ImageBind) from Meta AI is said to bind all 4 mentioned modalities in one shared space. ## Prerequisites > **Note**: The code for this tutorial can be found [here](https://github.com/qdrant/examples/multimodal-search) To complete this tutorial, you will need either Docker to run a pre-built Docker image of Qdrant and Python version ≥ 3.8 or a Google Collab Notebook if you don't want to install anything locally. We showed how to run Qdrant in Docker in the [""Create a Simple Neural Search Service""](https://qdrant.tech/documentation/tutorials/neural-search/) Tutorial. ## Setup First, install the required libraries `qdrant-client`, `fastembed` and `Pillow`. For example, with the `pip` package manager, it can be done in the following way. ```bash python3 -m pip install --upgrade qdrant-client fastembed Pillow ``` ## Dataset To make the demonstration simple, we created a tiny dataset of images and their captions for you. Images can be downloaded from [here](https://github.com/qdrant/examples/multimodal-search/images). It's **important** to place them in the same folder as your code/notebook, in the folder named `images`. You can check out how images look like in the following way: ```python from PIL import Image Image.open('images/lizard.jpg') ``` ## Vectorize data `FastEmbed` supports **Contrastive Language–Image Pre-training** ([CLIP](https://openai.com/index/clip/)) model, the old (2021) but gold classics of multimodal Image-Text Machine Learning. **CLIP** model was one of the first models of such kind with ZERO-SHOT capabilities. When using it for semantic search, it's important to remember that the textual encoder of CLIP is trained to process no more than **77 tokens**, so CLIP is good for short texts. Let's embed a very short selection of images and their captions in the **shared embedding space** with CLIP. ```python from fastembed import TextEmbedding, ImageEmbedding documents = [{""caption"": ""A photo of a cute pig"", ""image"": ""images/piggy.jpg""}, {""caption"": ""A picture with a coffee cup"", ""image"": ""images/coffee.jpg""}, {""caption"": ""A photo of a colourful lizard"", ""image"": ""images/lizard.jpg""} ] text_model_name = ""Qdrant/clip-ViT-B-32-text"" #CLIP text encoder text_model = TextEmbedding(model_name=text_model_name) text_embeddings_size = text_model._get_model_description(text_model_name)[""dim""] #dimension of text embeddings, produced by CLIP text encoder (512) texts_embeded = list(text_model.embed([document[""caption""] for document in documents])) #embedding captions with CLIP text encoder image_model_name = ""Qdrant/clip-ViT-B-32-vision"" #CLIP image encoder image_model = ImageEmbedding(model_name=image_model_name) image_embeddings_size = image_model._get_model_description(image_model_name)[""dim""] #dimension of image embeddings, produced by CLIP image encoder (512) images_embeded = list(image_model.embed([document[""image""] for document in documents])) #embedding images with CLIP image encoder ``` ## Upload data to Qdrant 1. **Create a client object for Qdrant**. ```python from qdrant_client import QdrantClient, models client = QdrantClient(""http://localhost:6333"") #or QdrantClient("":memory:"") if you're using Google Collab, this option is suitable only for simple prototypes/demos with Python client ``` 2. **Create a new collection for your images with captions**. CLIP’s weights were trained to maximize the scaled **Cosine Similarity** of truly corresponding image/caption pairs, so that's the **Distance Metric** we will choose for our [Collection](https://qdrant.tech/documentation/concepts/collections/) of [Named Vectors](https://qdrant.tech/documentation/concepts/collections/#collection-with-multiple-vectors). Using **Named Vectors**, we can easily showcase both Text-to-Image and Image-to-Text (Image-to-Image and Text-to-Text) search. ```python if not client.collection_exists(""text_image""): #creating a Collection client.create_collection( collection_name =""text_image"", vectors_config={ #Named Vectors ""image"": models.VectorParams(size=image_embeddings_size, distance=models.Distance.COSINE), ""text"": models.VectorParams(size=text_embeddings_size, distance=models.Distance.COSINE), } ) ``` 3. **Upload our images with captions to the Collection**. Each image with its caption will create a [Point](https://qdrant.tech/documentation/concepts/points/) in Qdrant. ```python client.upload_points( collection_name=""text_image"", points=[ models.PointStruct( id=idx, #unique id of a point, pre-defined by the user vector={ ""text"": texts_embeded[idx], #embeded caption ""image"": images_embeded[idx] #embeded image }, payload=doc #original image and its caption ) for idx, doc in enumerate(documents) ] ) ``` ## Search

Text-to-Image

Let's see what image we will get to the query ""*What would make me energetic in the morning?*"" ```python from PIL import Image find_image = text_model.embed([""What would make me energetic in the morning?""]) #query, we embed it, so it also becomes a vector Image.open(client.search( collection_name=""text_image"", #searching in our collection query_vector=(""image"", list(find_image)[0]), #searching only among image vectors with our textual query with_payload=[""image""], #user-readable information about search results, we are interested to see which image we will find limit=1 #top-1 similar to the query result )[0].payload['image']) ``` **Response:** ![Coffee Image](/docs/coffee.jpg)

Image-to-Text

Now, let's do a reverse search with an image: ```python from PIL import Image Image.open('images/piglet.jpg') ``` ![Piglet Image](/docs/piglet.jpg) Let's see what caption we will get, searching by this piglet image, which, as you can check, is not in our **Collection**. ```python find_image = image_model.embed(['images/piglet.jpg']) #embedding our image query client.search( collection_name=""text_image"", query_vector=(""text"", list(find_image)[0]), #now we are searching only among text vectors with our image query with_payload=[""caption""], #user-readable information about search results, we are interested to see which caption we will get limit=1 )[0].payload['caption'] ``` **Response:** ```text 'A photo of a cute pig' ``` ## Next steps Use cases of even just Image & Text Multimodal Search are countless: E-Commerce, Media Management, Content Recommendation, Emotion Recognition Systems, Biomedical Image Retrieval, Spoken Sign Language Transcription, etc. Imagine a scenario: user wants to find a product similar to a picture they have, but they also have specific textual requirements, like ""*in beige colour*"". You can search using just texts or images and combine their embeddings in a **late fusion manner** (summing and weighting might work surprisingly well). Moreover, using [Discovery Search](https://qdrant.tech/articles/discovery-search/) with both modalities, you can provide users with information that is impossible to retrieve unimodally! Join our [Discord community](https://qdrant.to/discord), where we talk about vector search and similarity learning, experiment, and have fun!",documentation/tutorials/multimodal-search-fastembed.md "--- title: Load Hugging Face dataset weight: 19 --- # Loading a dataset from Hugging Face hub [Hugging Face](https://huggingface.co/) provides a platform for sharing and using ML models and datasets. [Qdrant](https://huggingface.co/Qdrant) also publishes datasets along with the embeddings that you can use to practice with Qdrant and build your applications based on semantic search. **Please [let us know](https://qdrant.to/discord) if you'd like to see a specific dataset!** ## arxiv-titles-instructorxl-embeddings [This dataset](https://huggingface.co/datasets/Qdrant/arxiv-titles-instructorxl-embeddings) contains embeddings generated from the paper titles only. Each vector has a payload with the title used to create it, along with the DOI (Digital Object Identifier). ```json { ""title"": ""Nash Social Welfare for Indivisible Items under Separable, Piecewise-Linear Concave Utilities"", ""DOI"": ""1612.05191"" } ``` You can find a detailed description of the dataset in the [Practice Datasets](/documentation/datasets/#journal-article-titles) section. If you prefer loading the dataset from a Qdrant snapshot, it also linked there. Loading the dataset is as simple as using the `load_dataset` function from the `datasets` library: ```python from datasets import load_dataset dataset = load_dataset(""Qdrant/arxiv-titles-instructorxl-embeddings"") ``` The dataset contains 2,250,000 vectors. This is how you can check the list of the features in the dataset: ```python dataset.features ``` ### Streaming the dataset Dataset streaming lets you work with a dataset without downloading it. The data is streamed as you iterate over the dataset. You can read more about it in the [Hugging Face documentation](https://huggingface.co/docs/datasets/stream). ```python from datasets import load_dataset dataset = load_dataset( ""Qdrant/arxiv-titles-instructorxl-embeddings"", split=""train"", streaming=True ) ``` ### Loading the dataset into Qdrant You can load the dataset into Qdrant using the [Python SDK](https://github.com/qdrant/qdrant-client). The embeddings are already precomputed, so you can store them in a collection, that we're going to create in a second: ```python from qdrant_client import QdrantClient, models client = QdrantClient(""http://localhost:6333"") client.create_collection( collection_name=""arxiv-titles-instructorxl-embeddings"", vectors_config=models.VectorParams( size=768, distance=models.Distance.COSINE, ), ) ``` It is always a good idea to use batching, while loading a large dataset, so let's do that. We are going to need a helper function to split the dataset into batches: ```python from itertools import islice def batched(iterable, n): iterator = iter(iterable) while batch := list(islice(iterator, n)): yield batch ``` If you are a happy user of Python 3.12+, you can use the [`batched` function from the `itertools` ](https://docs.python.org/3/library/itertools.html#itertools.batched) package instead. No matter what Python version you are using, you can use the `upsert` method to load the dataset, batch by batch, into Qdrant: ```python batch_size = 100 for batch in batched(dataset, batch_size): ids = [point.pop(""id"") for point in batch] vectors = [point.pop(""vector"") for point in batch] client.upsert( collection_name=""arxiv-titles-instructorxl-embeddings"", points=models.Batch( ids=ids, vectors=vectors, payloads=batch, ), ) ``` Your collection is ready to be used for search! Please [let us know using Discord](https://qdrant.to/discord) if you would like to see more datasets published on Hugging Face hub. ",documentation/tutorials/huggingface-datasets.md "--- title: Hybrid Search with Fastembed weight: 2 aliases: - /documentation/tutorials/neural-search-fastembed/ --- # Create a Hybrid Search Service with Fastembed | Time: 20 min | Level: Beginner | Output: [GitHub](https://github.com/qdrant/qdrant_demo/) | | --- | ----------- | ----------- |----------- | This tutorial shows you how to build and deploy your own hybrid search service to look through descriptions of companies from [startups-list.com](https://www.startups-list.com/) and pick the most similar ones to your query. The website contains the company names, descriptions, locations, and a picture for each entry. As we have already written on our [blog](/articles/hybrid-search/), there is no single definition of hybrid search. In this tutorial we are covering the case with a combination of dense and [sparse embeddings](/articles/sparse-vectors/). The former ones refer to the embeddings generated by such well-known neural networks as BERT, while the latter ones are more related to a traditional full-text search approach. Our hybrid search service will use [Fastembed](https://github.com/qdrant/fastembed) package to generate embeddings of text descriptions and [FastAPI](https://fastapi.tiangolo.com/) to serve the search API. Fastembed natively integrates with Qdrant client, so you can easily upload the data into Qdrant and perform search queries. ![Hybrid Search Schema](/documentation/tutorials/hybrid-search-with-fastembed/hybrid-search-schema.png) ## Workflow To create a hybrid search service, you will need to transform your raw data and then create a search function to manipulate it. First, you will 1) download and prepare a sample dataset using a modified version of the BERT ML model. Then, you will 2) load the data into Qdrant, 3) create a hybrid search API and 4) serve it using FastAPI. ![Hybrid Search Workflow](/docs/workflow-neural-search.png) ## Prerequisites To complete this tutorial, you will need: - Docker - The easiest way to use Qdrant is to run a pre-built Docker image. - [Raw parsed data](https://storage.googleapis.com/generall-shared-data/startups_demo.json) from startups-list.com. - Python version >=3.8 ## Prepare sample dataset To conduct a hybrid search on startup descriptions, you must first encode the description data into vectors. Fastembed integration into qdrant client combines encoding and uploading into a single step. It also takes care of batching and parallelization, so you don't have to worry about it. Let's start by downloading the data and installing the necessary packages. 1. First you need to download the dataset. ```bash wget https://storage.googleapis.com/generall-shared-data/startups_demo.json ``` ## Run Qdrant in Docker Next, you need to manage all of your data using a vector engine. Qdrant lets you store, update or delete created vectors. Most importantly, it lets you search for the nearest vectors via a convenient API. > **Note:** Before you begin, create a project directory and a virtual python environment in it. 1. Download the Qdrant image from DockerHub. ```bash docker pull qdrant/qdrant ``` 2. Start Qdrant inside of Docker. ```bash docker run -p 6333:6333 \ -v $(pwd)/qdrant_storage:/qdrant/storage \ qdrant/qdrant ``` You should see output like this ```text ... [2021-02-05T00:08:51Z INFO actix_server::builder] Starting 12 workers [2021-02-05T00:08:51Z INFO actix_server::builder] Starting ""actix-web-service-0.0.0.0:6333"" service on 0.0.0.0:6333 ``` Test the service by going to [http://localhost:6333/](http://localhost:6333/). You should see the Qdrant version info in your browser. All data uploaded to Qdrant is saved inside the `./qdrant_storage` directory and will be persisted even if you recreate the container. ## Upload data to Qdrant 1. Install the official Python client to best interact with Qdrant. ```bash pip install ""qdrant-client[fastembed]>=1.8.2"" ``` > **Note:** This tutorial requires fastembed of version >=0.2.6. At this point, you should have startup records in the `startups_demo.json` file and Qdrant running on a local machine. Now you need to write a script to upload all startup data and vectors into the search engine. 2. Create a client object for Qdrant. ```python # Import client library from qdrant_client import QdrantClient client = QdrantClient(url=""http://localhost:6333"") ``` 3. Select model to encode your data. You will be using two pre-trained models to compute dense and sparse vectors correspondingly: `sentence-transformers/all-MiniLM-L6-v2` and `prithivida/Splade_PP_en_v1`. ```python client.set_model(""sentence-transformers/all-MiniLM-L6-v2"") # comment this line to use dense vectors only client.set_sparse_model(""prithivida/Splade_PP_en_v1"") ``` 4. Related vectors need to be added to a collection. Create a new collection for your startup vectors. ```python if not client.collection_exists(""startups""): client.create_collection( collection_name=""startups"", vectors_config=client.get_fastembed_vector_params(), # comment this line to use dense vectors only sparse_vectors_config=client.get_fastembed_sparse_vector_params(), ) ``` Qdrant requires vectors to have their own names and configurations. Methods `get_fastembed_vector_params` and `get_fastembed_sparse_vector_params` help you to get the corresponding parameters for the models you are using. These parameters include vector size, distance function, etc. Without fastembed integration, you would need to specify the vector size and distance function manually. Read more about it [here](/documentation/tutorials/neural-search/). Additionally, you can specify extended configuration for your vectors, like `quantization_config` or `hnsw_config`. 5. Read data from the file. ```python import json payload_path = ""startups_demo.json"" metadata = [] documents = [] with open(payload_path) as fd: for line in fd: obj = json.loads(line) documents.append(obj.pop(""description"")) metadata.append(obj) ``` In this block of code, we read data from `startups_demo.json` file and split it into 2 lists: `documents` and `metadata`. Documents are the raw text descriptions of startups. Metadata is the payload associated with each startup, such as the name, location, and picture. We will use `documents` to encode the data into vectors. 6. Encode and upload data. ```python client.add( collection_name=""startups"", documents=documents, metadata=metadata, parallel=0, # Use all available CPU cores to encode data. # Requires wrapping code into if __name__ == '__main__' block ) ```
Upload processed data Download and unpack the processed data from [here](https://storage.googleapis.com/dataset-startup-search/startup-list-com/startups_hybrid_search_processed_40k.tar.gz) or use the following script: ```bash wget https://storage.googleapis.com/dataset-startup-search/startup-list-com/startups_hybrid_search_processed_40k.tar.gz tar -xvf startups_hybrid_search_processed_40k.tar.gz ``` Then you can upload the data to Qdrant. ```python from typing import List import json import numpy as np from qdrant_client import models def named_vectors(vectors: List[float], sparse_vectors: List[models.SparseVector]) -> dict: # make sure to use the same client object as previously # or `set_model_name` and `set_sparse_model_name` manually dense_vector_name = client.get_vector_field_name() sparse_vector_name = client.get_sparse_vector_field_name() for vector, sparse_vector in zip(vectors, sparse_vectors): yield { dense_vector_name: vector, sparse_vector_name: models.SparseVector(**sparse_vector), } with open(""dense_vectors.npy"", ""rb"") as f: vectors = np.load(f) with open(""sparse_vectors.json"", ""r"") as f: sparse_vectors = json.load(f) with open(""payload.json"", ""r"",) as f: payload = json.load(f) client.upload_collection( ""startups"", vectors=named_vectors(vectors, sparse_vectors), payload=payload ) ```
The `add` method will encode all documents and upload them to Qdrant. This is one of the two fastembed-specific methods, that combines encoding and uploading into a single step. The `parallel` parameter enables data-parallelism instead of built-in ONNX parallelism. Additionally, you can specify ids for each document, if you want to use them later to update or delete documents. If you don't specify ids, they will be generated automatically and returned as a result of the `add` method. You can monitor the progress of the encoding by passing tqdm progress bar to the `add` method. ```python from tqdm import tqdm client.add( collection_name=""startups"", documents=documents, metadata=metadata, ids=tqdm(range(len(documents))), ) ``` ## Build the search API Now that all the preparations are complete, let's start building a neural search class. In order to process incoming requests, the hybrid search class will need 3 things: 1) models to convert the query into a vector, 2) the Qdrant client to perform search queries, 3) fusion function to re-rank dense and sparse search results. Fastembed integration encapsulates query encoding, search and fusion into a single method call. Fastembed leverages [reciprocal rank fusion](https://plg.uwaterloo.ca/~gvcormac/cormacksigir09-rrf.pdf) in order combine the results. 1. Create a file named `hybrid_searcher.py` and specify the following. ```python from qdrant_client import QdrantClient class HybridSearcher: DENSE_MODEL = ""sentence-transformers/all-MiniLM-L6-v2"" SPARSE_MODEL = ""prithivida/Splade_PP_en_v1"" def __init__(self, collection_name): self.collection_name = collection_name # initialize Qdrant client self.qdrant_client = QdrantClient(""http://localhost:6333"") self.qdrant_client.set_model(self.DENSE_MODEL) # comment this line to use dense vectors only self.qdrant_client.set_sparse_model(self.SPARSE_MODEL) ``` 2. Write the search function. ```python def search(self, text: str): search_result = self.qdrant_client.query( collection_name=self.collection_name, query_text=text, query_filter=None, # If you don't want any filters for now limit=5, # 5 the closest results ) # `search_result` contains found vector ids with similarity scores # along with the stored payload # Select and return metadata metadata = [hit.metadata for hit in search_result] return metadata ``` 3. Add search filters. With Qdrant it is also feasible to add some conditions to the search. For example, if you wanted to search for startups in a certain city, the search query could look like this: ```python from qdrant_client import models ... city_of_interest = ""Berlin"" # Define a filter for cities city_filter = models.Filter( must=[ models.FieldCondition( key=""city"", match=models.MatchValue(value=city_of_interest) ) ] ) search_result = self.qdrant_client.query( collection_name=self.collection_name, query_text=text, query_filter=city_filter, limit=5 ) ... ``` You have now created a class for neural search queries. Now wrap it up into a service. ## Deploy the search with FastAPI To build the service you will use the FastAPI framework. 1. Install FastAPI. To install it, use the command ```bash pip install fastapi uvicorn ``` 2. Implement the service. Create a file named `service.py` and specify the following. The service will have only one API endpoint and will look like this: ```python from fastapi import FastAPI # The file where HybridSearcher is stored from hybrid_searcher import HybridSearcher app = FastAPI() # Create a neural searcher instance hybrid_searcher = HybridSearcher(collection_name=""startups"") @app.get(""/api/search"") def search_startup(q: str): return {""result"": hybrid_searcher.search(text=q)} if __name__ == ""__main__"": import uvicorn uvicorn.run(app, host=""0.0.0.0"", port=8000) ``` 3. Run the service. ```bash python service.py ``` 4. Open your browser at [http://localhost:8000/docs](http://localhost:8000/docs). You should be able to see a debug interface for your service. ![FastAPI Swagger interface](/docs/fastapi_neural_search.png) Feel free to play around with it, make queries regarding the companies in our corpus, and check out the results. Join our [Discord community](https://qdrant.to/discord), where we talk about vector search and similarity learning, publish other examples of neural networks and neural search applications. ",documentation/tutorials/hybrid-search-fastembed.md "--- title: Asynchronous API weight: 14 --- # Using Qdrant asynchronously Asynchronous programming is being broadly adopted in the Python ecosystem. Tools such as FastAPI [have embraced this new paradigm](https://fastapi.tiangolo.com/async/), but it is also becoming a standard for ML models served as SaaS. For example, the Cohere SDK [provides an async client](https://github.com/cohere-ai/cohere-python/blob/856a4c3bd29e7a75fa66154b8ac9fcdf1e0745e0/src/cohere/client.py#L189) next to its synchronous counterpart. Databases are often launched as separate services and are accessed via a network. All the interactions with them are IO-bound and can be performed asynchronously so as not to waste time actively waiting for a server response. In Python, this is achieved by using [`async/await`](https://docs.python.org/3/library/asyncio-task.html) syntax. That lets the interpreter switch to another task while waiting for a response from the server. ## When to use async API There is no need to use async API if the application you are writing will never support multiple users at once (e.g it is a script that runs once per day). However, if you are writing a web service that multiple users will use simultaneously, you shouldn't be blocking the threads of the web server as it limits the number of concurrent requests it can handle. In this case, you should use the async API. Modern web frameworks like [FastAPI](https://fastapi.tiangolo.com/) and [Quart](https://quart.palletsprojects.com/en/latest/) support async API out of the box. Mixing asynchronous code with an existing synchronous codebase might be a challenge. The `async/await` syntax cannot be used in synchronous functions. On the other hand, calling an IO-bound operation synchronously in async code is considered an antipattern. Therefore, if you build an async web service, exposed through an [ASGI](https://asgi.readthedocs.io/en/latest/) server, you should use the async API for all the interactions with Qdrant. ### Using Qdrant asynchronously The simplest way of running asynchronous code is to use define `async` function and use the `asyncio.run` in the following way to run it: ```python from qdrant_client import models import qdrant_client import asyncio async def main(): client = qdrant_client.AsyncQdrantClient(""localhost"") # Create a collection await client.create_collection( collection_name=""my_collection"", vectors_config=models.VectorParams(size=4, distance=models.Distance.COSINE), ) # Insert a vector await client.upsert( collection_name=""my_collection"", points=[ models.PointStruct( id=""5c56c793-69f3-4fbf-87e6-c4bf54c28c26"", payload={ ""color"": ""red"", }, vector=[0.9, 0.1, 0.1, 0.5], ), ], ) # Search for nearest neighbors points = await client.query_points( collection_name=""my_collection"", query=[0.9, 0.1, 0.1, 0.5], limit=2, ).points # Your async code using AsyncQdrantClient might be put here # ... asyncio.run(main()) ``` The `AsyncQdrantClient` provides the same methods as the synchronous counterpart `QdrantClient`. If you already have a synchronous codebase, switching to async API is as simple as replacing `QdrantClient` with `AsyncQdrantClient` and adding `await` before each method call. ## Supported Python libraries Qdrant integrates with numerous Python libraries. Until recently, only [Langchain](https://python.langchain.com) provided async Python API support. Qdrant is the only vector database with full coverage of async API in Langchain. Their documentation [describes how to use it](https://python.langchain.com/docs/modules/data_connection/vectorstores/#asynchronous-operations). ",documentation/tutorials/async-api.md "--- title: Create and restore from snapshot weight: 14 --- # Create and restore collections from snapshot | Time: 20 min | Level: Beginner | | | |--------------|-----------------|--|----| A collection is a basic unit of data storage in Qdrant. It contains vectors, their IDs, and payloads. However, keeping the search efficient requires additional data structures to be built on top of the data. Building these data structures may take a while, especially for large collections. That's why using snapshots is the best way to export and import Qdrant collections, as they contain all the bits and pieces required to restore the entire collection efficiently. This tutorial will show you how to create a snapshot of a collection and restore it. Since working with snapshots in a distributed environment might be thought to be a bit more complex, we will use a 3-node Qdrant cluster. However, the same approach applies to a single-node setup. You can use the techniques described in this page to migrate a cluster. Follow the instructions in this tutorial to create and download snapshots. When you [Restore from snapshot](#restore-from-snapshot), restore your data to the new cluster. ## Prerequisites Let's assume you already have a running Qdrant instance or a cluster. If not, you can follow the [installation guide](/documentation/guides/installation/) to set up a local Qdrant instance or use [Qdrant Cloud](https://cloud.qdrant.io/) to create a cluster in a few clicks. Once the cluster is running, let's install the required dependencies: ```shell pip install qdrant-client datasets ``` ### Establish a connection to Qdrant We are going to use the Python SDK and raw HTTP calls to interact with Qdrant. Since we are going to use a 3-node cluster, we need to know the URLs of all the nodes. For the simplicity, let's keep them all in constants, along with the API key, so we can refer to them later: ```python QDRANT_MAIN_URL = ""https://my-cluster.com:6333"" QDRANT_NODES = ( ""https://node-0.my-cluster.com:6333"", ""https://node-1.my-cluster.com:6333"", ""https://node-2.my-cluster.com:6333"", ) QDRANT_API_KEY = ""my-api-key"" ``` We can now create a client instance: ```python from qdrant_client import QdrantClient client = QdrantClient(QDRANT_MAIN_URL, api_key=QDRANT_API_KEY) ``` First of all, we are going to create a collection from a precomputed dataset. If you already have a collection, you can skip this step and start by [creating a snapshot](#create-and-download-snapshots).
(Optional) Create collection and import data ### Load the dataset We are going to use a dataset with precomputed embeddings, available on Hugging Face Hub. The dataset is called [Qdrant/arxiv-titles-instructorxl-embeddings](https://huggingface.co/datasets/Qdrant/arxiv-titles-instructorxl-embeddings) and was created using the [InstructorXL](https://huggingface.co/hkunlp/instructor-xl) model. It contains 2.25M embeddings for the titles of the papers from the [arXiv](https://arxiv.org/) dataset. Loading the dataset is as simple as: ```python from datasets import load_dataset dataset = load_dataset( ""Qdrant/arxiv-titles-instructorxl-embeddings"", split=""train"", streaming=True ) ``` We used the streaming mode, so the dataset is not loaded into memory. Instead, we can iterate through it and extract the id and vector embedding: ```python for payload in dataset: id_ = payload.pop(""id"") vector = payload.pop(""vector"") print(id_, vector, payload) ``` A single payload looks like this: ```json { 'title': 'Dynamics of partially localized brane systems', 'DOI': '1109.1415' } ``` ### Create a collection First things first, we need to create our collection. We're not going to play with the configuration of it, but it makes sense to do it right now. The configuration is also a part of the collection snapshot. ```python from qdrant_client import models if not client.collection_exists(""test_collection""): client.create_collection( collection_name=""test_collection"", vectors_config=models.VectorParams( size=768, # Size of the embedding vector generated by the InstructorXL model distance=models.Distance.COSINE ), ) ``` ### Upload the dataset Calculating the embeddings is usually a bottleneck of the vector search pipelines, but we are happy to have them in place already. Since the goal of this tutorial is to show how to create a snapshot, **we are going to upload only a small part of the dataset**. ```python ids, vectors, payloads = [], [], [] for payload in dataset: id_ = payload.pop(""id"") vector = payload.pop(""vector"") ids.append(id_) vectors.append(vector) payloads.append(payload) # We are going to upload only 1000 vectors if len(ids) == 1000: break client.upsert( collection_name=""test_collection"", points=models.Batch( ids=ids, vectors=vectors, payloads=payloads, ), ) ``` Our collection is now ready to be used for search. Let's create a snapshot of it.
If you already have a collection, you can skip the previous step and start by [creating a snapshot](#create-and-download-snapshots). ## Create and download snapshots Qdrant exposes an HTTP endpoint to request creating a snapshot, but we can also call it with the Python SDK. Our setup consists of 3 nodes, so we need to call the endpoint **on each of them** and create a snapshot on each node. While using Python SDK, that means creating a separate client instance for each node. ```python snapshot_urls = [] for node_url in QDRANT_NODES: node_client = QdrantClient(node_url, api_key=QDRANT_API_KEY) snapshot_info = node_client.create_snapshot(collection_name=""test_collection"") snapshot_url = f""{node_url}/collections/test_collection/snapshots/{snapshot_info.name}"" snapshot_urls.append(snapshot_url) ``` ```http // for `https://node-0.my-cluster.com:6333` POST /collections/test_collection/snapshots // for `https://node-1.my-cluster.com:6333` POST /collections/test_collection/snapshots // for `https://node-2.my-cluster.com:6333` POST /collections/test_collection/snapshots ```
Response ```json { ""result"": { ""name"": ""test_collection-559032209313046-2024-01-03-13-20-11.snapshot"", ""creation_time"": ""2024-01-03T13:20:11"", ""size"": 18956800 }, ""status"": ""ok"", ""time"": 0.307644965 } ```
Once we have the snapshot URLs, we can download them. Please make sure to include the API key in the request headers. Downloading the snapshot **can be done only through the HTTP API**, so we are going to use the `requests` library. ```python import requests import os # Create a directory to store snapshots os.makedirs(""snapshots"", exist_ok=True) local_snapshot_paths = [] for snapshot_url in snapshot_urls: snapshot_name = os.path.basename(snapshot_url) local_snapshot_path = os.path.join(""snapshots"", snapshot_name) response = requests.get( snapshot_url, headers={""api-key"": QDRANT_API_KEY} ) with open(local_snapshot_path, ""wb"") as f: response.raise_for_status() f.write(response.content) local_snapshot_paths.append(local_snapshot_path) ``` Alternatively, you can use the `wget` command: ```bash wget https://node-0.my-cluster.com:6333/collections/test_collection/snapshots/test_collection-559032209313046-2024-01-03-13-20-11.snapshot \ --header=""api-key: ${QDRANT_API_KEY}"" \ -O node-0-shapshot.snapshot wget https://node-1.my-cluster.com:6333/collections/test_collection/snapshots/test_collection-559032209313047-2024-01-03-13-20-12.snapshot \ --header=""api-key: ${QDRANT_API_KEY}"" \ -O node-1-shapshot.snapshot wget https://node-2.my-cluster.com:6333/collections/test_collection/snapshots/test_collection-559032209313048-2024-01-03-13-20-13.snapshot \ --header=""api-key: ${QDRANT_API_KEY}"" \ -O node-2-shapshot.snapshot ``` The snapshots are now stored locally. We can use them to restore the collection to a different Qdrant instance, or treat them as a backup. We will create another collection using the same data on the same cluster. ## Restore from snapshot Our brand-new snapshot is ready to be restored. Typically, it is used to move a collection to a different Qdrant instance, but we are going to use it to create a new collection on the same cluster. It is just going to have a different name, `test_collection_import`. We do not need to create a collection first, as it is going to be created automatically. Restoring collection is also done separately on each node, but our Python SDK does not support it yet. We are going to use the HTTP API instead, and send a request to each node using `requests` library. ```python for node_url, snapshot_path in zip(QDRANT_NODES, local_snapshot_paths): snapshot_name = os.path.basename(snapshot_path) requests.post( f""{node_url}/collections/test_collection_import/snapshots/upload?priority=snapshot"", headers={ ""api-key"": QDRANT_API_KEY, }, files={""snapshot"": (snapshot_name, open(snapshot_path, ""rb""))}, ) ``` Alternatively, you can use the `curl` command: ```bash curl -X POST 'https://node-0.my-cluster.com:6333/collections/test_collection_import/snapshots/upload?priority=snapshot' \ -H 'api-key: ${QDRANT_API_KEY}' \ -H 'Content-Type:multipart/form-data' \ -F 'snapshot=@node-0-shapshot.snapshot' curl -X POST 'https://node-1.my-cluster.com:6333/collections/test_collection_import/snapshots/upload?priority=snapshot' \ -H 'api-key: ${QDRANT_API_KEY}' \ -H 'Content-Type:multipart/form-data' \ -F 'snapshot=@node-1-shapshot.snapshot' curl -X POST 'https://node-2.my-cluster.com:6333/collections/test_collection_import/snapshots/upload?priority=snapshot' \ -H 'api-key: ${QDRANT_API_KEY}' \ -H 'Content-Type:multipart/form-data' \ -F 'snapshot=@node-2-shapshot.snapshot' ``` **Important:** We selected `priority=snapshot` to make sure that the snapshot is preferred over the data stored on the node. You can read mode about the priority in the [documentation](/documentation/concepts/snapshots/#snapshot-priority). ",documentation/tutorials/create-snapshot.md "--- title: Collaborative filtering short_description: ""Build an effective movie recommendation system using collaborative filtering and Qdrant's similarity search."" description: ""Build an effective movie recommendation system using collaborative filtering and Qdrant's similarity search."" preview_image: /blog/collaborative-filtering/social_preview.png social_preview_image: /blog/collaborative-filtering/social_preview.png weight: 23 --- # Create a collaborative filtering system | Time: 45 min | Level: Intermediate | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://githubtocolab.com/qdrant/examples/blob/master/collaborative-filtering/collaborative-filtering.ipynb) | | |--------------|---------------------|--|----| Every time Spotify recommends the next song from a band you've never heard of, it uses a recommendation algorithm based on other users' interactions with that song. This type of algorithm is known as **collaborative filtering**. Unlike content-based recommendations, collaborative filtering excels when the objects' semantics are loosely or unrelated to users' preferences. This adaptability is what makes it so fascinating. Movie, music, or book recommendations are good examples of such use cases. After all, we rarely choose which book to read purely based on the plot twists. The traditional way to build a collaborative filtering engine involves training a model that converts the sparse matrix of user-to-item relations into a compressed, dense representation of user and item vectors. Some of the most commonly referenced algorithms for this purpose include [SVD (Singular Value Decomposition)](https://en.wikipedia.org/wiki/Singular_value_decomposition) and [Factorization Machines](https://en.wikipedia.org/wiki/Matrix_factorization_(recommender_systems)). However, the model training approach requires significant resource investments. Model training necessitates data, regular re-training, and a mature infrastructure. ## Methodology Fortunately, there is a way to build collaborative filtering systems without any model training. You can obtain interpretable recommendations and have a scalable system using a technique based on similarity search. Let’s explore how this works with an example of building a movie recommendation system.

## Implementation To implement this, you will use a simple yet powerful resource: [Qdrant with Sparse Vectors](https://qdrant.tech/articles/sparse-vectors/). Notebook: [You can try this code here](https://githubtocolab.com/qdrant/examples/blob/master/collaborative-filtering/collaborative-filtering.ipynb) ### Setup You have to first import the necessary libraries and define the environment. ```python import os import pandas as pd import requests from qdrant_client import QdrantClient, models from qdrant_client.models import PointStruct, SparseVector, NamedSparseVector from collections import defaultdict # OMDB API Key - for movie posters omdb_api_key = os.getenv(""OMDB_API_KEY"") # Collection name collection_name = ""movies"" # Set Qdrant Client qdrant_client = QdrantClient( os.getenv(""QDRANT_HOST""), api_key=os.getenv(""QDRANT_API_KEY"") ) ``` ### Define output Here, you will configure the recommendation engine to retrieve movie posters as output. ```python # Function to get movie poster using OMDB API def get_movie_poster(imdb_id, api_key): url = f""https://www.omdbapi.com/?i={imdb_id}&apikey={api_key}"" data = requests.get(url).json() return data.get('Poster'), data ``` ### Prepare the data Load the movie datasets. These include three main CSV files: user ratings, movie titles, and OMDB IDs. ```python # Load CSV files ratings_df = pd.read_csv('data/ratings.csv', low_memory=False) movies_df = pd.read_csv('data/movies.csv', low_memory=False) # Convert movieId in ratings_df and movies_df to string ratings_df['movieId'] = ratings_df['movieId'].astype(str) movies_df['movieId'] = movies_df['movieId'].astype(str) rating = ratings_df['rating'] # Normalize ratings ratings_df['rating'] = (rating - rating.mean()) / rating.std() # Merge ratings with movie metadata to get movie titles merged_df = ratings_df.merge( movies_df[['movieId', 'title']], left_on='movieId', right_on='movieId', how='inner' ) # Aggregate ratings to handle duplicate (userId, title) pairs ratings_agg_df = merged_df.groupby(['userId', 'movieId']).rating.mean().reset_index() ratings_agg_df.head() ``` | |userId |movieId |rating | |---|-----------|---------|---------| |0 |1 |1 |0.429960 | |1 |1 |1036 |1.369846 | |2 |1 |1049 |-0.509926| |3 |1 |1066 |0.429960 | |4 |1 |110 |0.429960 | ### Convert to sparse If you want to search across numerous reviews from different users, you can represent these reviews in a sparse matrix. ```python # Convert ratings to sparse vectors user_sparse_vectors = defaultdict(lambda: {""values"": [], ""indices"": []}) for row in ratings_agg_df.itertuples(): user_sparse_vectors[row.userId][""values""].append(row.rating) user_sparse_vectors[row.userId][""indices""].append(int(row.movieId)) ``` ![collaborative-filtering](/blog/collaborative-filtering/collaborative-filtering.png) ### Upload the data Here, you will initialize the Qdrant client and create a new collection to store the data. Convert the user ratings to sparse vectors and include the `movieId` in the payload. ```python # Define a data generator def data_generator(): for user_id, sparse_vector in user_sparse_vectors.items(): yield PointStruct( id=user_id, vector={""ratings"": SparseVector( indices=sparse_vector[""indices""], values=sparse_vector[""values""] )}, payload={""user_id"": user_id, ""movie_id"": sparse_vector[""indices""]} ) # Upload points using the data generator qdrant_client.upload_points( collection_name=collection_name, points=data_generator() ) ``` ### Define query In order to get recommendations, we need to find users with similar tastes to ours. Let's describe our preferences by providing ratings for some of our favorite movies. `1` indicates that we like the movie, `-1` indicates that we dislike it. ```python my_ratings = { 603: 1, # Matrix 13475: 1, # Star Trek 11: 1, # Star Wars 1091: -1, # The Thing 862: 1, # Toy Story 597: -1, # Titanic 680: -1, # Pulp Fiction 13: 1, # Forrest Gump 120: 1, # Lord of the Rings 87: -1, # Indiana Jones 562: -1 # Die Hard } ```
Click to see the code for to_vector ```python # Create sparse vector from my_ratings def to_vector(ratings): vector = SparseVector( values=[], indices=[] ) for movie_id, rating in ratings.items(): vector.values.append(rating) vector.indices.append(movie_id) return vector ```
### Run the query From the uploaded list of movies with ratings, we can perform a search in Qdrant to get the top most similar users to us. ```python # Perform the search results = qdrant_client.query_points( collection_name=collection_name, query=to_vector(my_ratings), using=""ratings"", limit=20 ).points ``` Now we can find the movies liked by the other similar users, but we haven't seen yet. Let's combine the results from found users, filter out seen movies, and sort by the score. ```python # Convert results to scores and sort by score def results_to_scores(results): movie_scores = defaultdict(lambda: 0) for result in results: for movie_id in result.payload[""movie_id""]: movie_scores[movie_id] += result.score return movie_scores # Convert results to scores and sort by score movie_scores = results_to_scores(results) top_movies = sorted(movie_scores.items(), key=lambda x: x[1], reverse=True) ```
Visualize results in Jupyter Notebook Finally, we display the top 5 recommended movies along with their posters and titles. ```python # Create HTML to display top 5 results html_content = ""
"" for movie_id, score in top_movies[:5]: imdb_id_row = links.loc[links['movieId'] == int(movie_id), 'imdbId'] if not imdb_id_row.empty: imdb_id = imdb_id_row.values[0] poster_url, movie_info = get_movie_poster(imdb_id, omdb_api_key) movie_title = movie_info.get('Title', 'Unknown Title') html_content += f""""""
{movie_title}
Score: {score}
"""""" else: continue # Skip if imdb_id is not found html_content += ""
"" display(HTML(html_content)) ```
## Recommendations For a complete display of movie posters, check the [notebook output](https://github.com/qdrant/examples/blob/master/collaborative-filtering/collaborative-filtering.ipynb). Here are the results without html content. ```text Toy Story, Score: 131.2033799 Monty Python and the Holy Grail, Score: 131.2033799 Star Wars: Episode V - The Empire Strikes Back, Score: 131.2033799 Star Wars: Episode VI - Return of the Jedi, Score: 131.2033799 Men in Black, Score: 131.2033799 ``` On top of collaborative filtering, we can further enhance the recommendation system by incorporating other features like user demographics, movie genres, or movie tags. Or, for example, only consider recent ratings via a time-based filter. This way, we can recommend movies that are currently popular among users. ## Conclusion As demonstrated, it is possible to build an interesting movie recommendation system without intensive model training using Qdrant and Sparse Vectors. This approach not only simplifies the recommendation process but also makes it scalable and interpretable. In future tutorials, we can experiment more with this combination to further enhance our recommendation systems. ",documentation/tutorials/collaborative-filtering.md "--- title: Tutorials weight: 13 # If the index.md file is empty, the link to the section will be hidden from the sidebar is_empty: false aliases: - how-to - tutorials --- # Tutorials These tutorials demonstrate different ways you can build vector search into your applications. | Essential How-Tos | Description | Stack | |---------------------------------------------------------------------------------|-------------------------------------------------------------------|---------------------------------------------| | [Semantic Search for Beginners](../tutorials/search-beginners/) | Create a simple search engine locally in minutes. | Qdrant | | [Simple Neural Search](../tutorials/neural-search/) | Build and deploy a neural search that browses startup data. | Qdrant, BERT, FastAPI | | [Neural Search with FastEmbed](../tutorials/neural-search-fastembed/) | Build and deploy a neural search with our FastEmbed library. | Qdrant | | [Multimodal Search](../tutorials/multimodal-search-fastembed/) | Create a simple multimodal search. | Qdrant | | [Bulk Upload Vectors](../tutorials/bulk-upload/) | Upload a large scale dataset. | Qdrant | | [Asynchronous API](../tutorials/async-api/) | Communicate with Qdrant server asynchronously with Python SDK. | Qdrant, Python | | [Create Dataset Snapshots](../tutorials/create-snapshot/) | Turn a dataset into a snapshot by exporting it from a collection. | Qdrant | | [Load HuggingFace Dataset](../tutorials/huggingface-datasets/) | Load a Hugging Face dataset to Qdrant | Qdrant, Python, datasets | | [Measure Retrieval Quality](../tutorials/retrieval-quality/) | Measure and fine-tune the retrieval quality | Qdrant, Python, datasets | | [Search Through Code](../tutorials/code-search/) | Implement semantic search application for code search tasks | Qdrant, Python, sentence-transformers, Jina | | [Setup Collaborative Filtering](../tutorials/collaborative-filtering/) | Implement a collaborative filtering system for recommendation engines | Qdrant| ",documentation/tutorials/_index.md "--- title: Semantic-Router --- # Semantic-Router [Semantic-Router](https://www.aurelio.ai/semantic-router/) is a library to build decision-making layers for your LLMs and agents. It uses vector embeddings to make tool-use decisions rather than LLM generations, routing our requests using semantic meaning. Qdrant is available as a supported index in Semantic-Router for you to ingest route data and perform retrievals. ## Installation To use Semantic-Router with Qdrant, install the `qdrant` extra: ```console pip install semantic-router[qdrant] ``` ## Usage Set up `QdrantIndex` with the appropriate configurations: ```python from semantic_router.index import QdrantIndex qdrant_index = QdrantIndex( url=""https://xyz-example.eu-central.aws.cloud.qdrant.io"", api_key="""" ) ``` Once the Qdrant index is set up with the appropriate configurations, we can pass it to the `RouteLayer`. ```python from semantic_router.layer import RouteLayer RouteLayer(encoder=some_encoder, routes=some_routes, index=qdrant_index) ``` ## Complete Example
Click to expand ```python import os from semantic_router import Route from semantic_router.encoders import OpenAIEncoder from semantic_router.index import QdrantIndex from semantic_router.layer import RouteLayer # we could use this as a guide for our chatbot to avoid political conversations politics = Route( name=""politics value"", utterances=[ ""isn't politics the best thing ever"", ""why don't you tell me about your political opinions"", ""don't you just love the president"", ""they're going to destroy this country!"", ""they will save the country!"", ], ) # this could be used as an indicator to our chatbot to switch to a more # conversational prompt chitchat = Route( name=""chitchat"", utterances=[ ""how's the weather today?"", ""how are things going?"", ""lovely weather today"", ""the weather is horrendous"", ""let's go to the chippy"", ], ) # we place both of our decisions together into single list routes = [politics, chitchat] os.environ[""OPENAI_API_KEY""] = """" encoder = OpenAIEncoder() rl = RouteLayer( encoder=encoder, routes=routes, index=QdrantIndex(location="":memory:""), ) print(rl(""What have you been upto?"").name) ``` This returns: ```console [Out]: 'chitchat' ```
## 📚 Further Reading - Semantic-Router [Documentation](https://github.com/aurelio-labs/semantic-router/tree/main/docs) - Semantic-Router [Video Course](https://www.aurelio.ai/course/semantic-router) - [Source Code](https://github.com/aurelio-labs/semantic-router/blob/main/semantic_router/index/qdrant.py) ",documentation/frameworks/semantic-router.md "--- title: Testcontainers --- # Testcontainers Qdrant is available as a [Testcontainers module](https://testcontainers.com/modules/qdrant/) in multiple languages. It facilitates the spawning of a Qdrant instance for end-to-end testing. As noted by [Testcontainers](https://testcontainers.com/), it ""is an open source framework for providing throwaway, lightweight instances of databases, message brokers, web browsers, or just about anything that can run in a Docker container."" ## Usage ```java import org.testcontainers.qdrant.QdrantContainer; QdrantContainer qdrantContainer = new QdrantContainer(""qdrant/qdrant""); ``` ```go import ( ""github.com/testcontainers/testcontainers-go"" ""github.com/testcontainers/testcontainers-go/modules/qdrant"" ) qdrantContainer, err := qdrant.RunContainer(ctx, testcontainers.WithImage(""qdrant/qdrant"")) ``` ```typescript import { QdrantContainer } from ""@testcontainers/qdrant""; const qdrantContainer = await new QdrantContainer(""qdrant/qdrant"").start(); ``` ```python from testcontainers.qdrant import QdrantContainer qdrant_container = QdrantContainer(""qdrant/qdrant"").start() ``` Testcontainers modules provide options/methods to configure ENVs, volumes, and virtually everything you can configure in a Docker container. ## Further reading - [Testcontainers Guides](https://testcontainers.com/guides/) - [Testcontainers Qdrant Module](https://testcontainers.com/modules/qdrant/) ",documentation/frameworks/testcontainers.md "--- title: Stanford DSPy aliases: [ ../integrations/dspy/ ] --- # Stanford DSPy [DSPy](https://github.com/stanfordnlp/dspy) is the framework for solving advanced tasks with language models (LMs) and retrieval models (RMs). It unifies techniques for prompting and fine-tuning LMs — and approaches for reasoning, self-improvement, and augmentation with retrieval and tools. - Provides composable and declarative modules for instructing LMs in a familiar Pythonic syntax. - Introduces an automatic compiler that teaches LMs how to conduct the declarative steps in your program. Qdrant can be used as a retrieval mechanism in the DSPy flow. ## Installation For the Qdrant retrieval integration, include `dspy-ai` with the `qdrant` extra: ```bash pip install dspy-ai[qdrant] ``` ## Usage We can configure `DSPy` settings to use the Qdrant retriever model like so: ```python import dspy from dspy.retrieve.qdrant_rm import QdrantRM from qdrant_client import QdrantClient turbo = dspy.OpenAI(model=""gpt-3.5-turbo"") qdrant_client = QdrantClient() # Defaults to a local instance at http://localhost:6333/ qdrant_retriever_model = QdrantRM(""collection-name"", qdrant_client, k=3) dspy.settings.configure(lm=turbo, rm=qdrant_retriever_model) ``` Using the retriever is pretty simple. The `dspy.Retrieve(k)` module will search for the top-k passages that match a given query. ```python retrieve = dspy.Retrieve(k=3) question = ""Some question about my data"" topK_passages = retrieve(question).passages print(f""Top {retrieve.k} passages for question: {question} \n"", ""\n"") for idx, passage in enumerate(topK_passages): print(f""{idx+1}]"", passage, ""\n"") ``` With Qdrant configured as the retriever for contexts, you can set up a DSPy module like so: ```python class RAG(dspy.Module): def __init__(self, num_passages=3): super().__init__() self.retrieve = dspy.Retrieve(k=num_passages) ... def forward(self, question): context = self.retrieve(question).passages ... ``` With the generic RAG blueprint now in place, you can add the many interactions offered by DSPy with context retrieval powered by Qdrant. ## Next steps - Find DSPy usage docs and examples [here](https://github.com/stanfordnlp/dspy#4-documentation--tutorials). - [Source Code](https://github.com/stanfordnlp/dspy/blob/main/dspy/retrieve/qdrant_rm.py) ",documentation/frameworks/dspy.md "--- title: FiftyOne aliases: [ ../integrations/fifty-one ] --- # FiftyOne [FiftyOne](https://voxel51.com/) is an open-source toolkit designed to enhance computer vision workflows by optimizing dataset quality and providing valuable insights about your models. FiftyOne 0.20, which includes a native integration with Qdrant, supporting workflows like [image similarity search](https://docs.voxel51.com/user_guide/brain.html#image-similarity) and [text search](https://docs.voxel51.com/user_guide/brain.html#text-similarity). Qdrant helps FiftyOne to find the most similar images in the dataset using vector embeddings. FiftyOne is available as a Python package that might be installed in the following way: ```bash pip install fiftyone ``` Please check out the documentation of FiftyOne on [Qdrant integration](https://docs.voxel51.com/integrations/qdrant.html). ",documentation/frameworks/fifty-one.md "--- title: Pinecone Canopy --- # Pinecone Canopy [Canopy](https://github.com/pinecone-io/canopy) is an open-source framework and context engine to build chat assistants at scale. Qdrant is supported as a knowledge base within Canopy for context retrieval and augmented generation. ## Usage Install the SDK with the Qdrant extra as described in the [Canopy README](https://github.com/pinecone-io/canopy?tab=readme-ov-file#extras). ```bash pip install canopy-sdk[qdrant] ``` ### Creating a knowledge base ```python from canopy.knowledge_base import QdrantKnowledgeBase kb = QdrantKnowledgeBase(collection_name="""") ``` To create a new Qdrant collection and connect it to the knowledge base, use the `create_canopy_collection` method: ```python kb.create_canopy_collection() ``` You can always verify the connection to the collection with the `verify_index_connection` method: ```python kb.verify_index_connection() ``` Learn more about customizing the knowledge base and its inner components [in the Canopy library](https://github.com/pinecone-io/canopy/blob/main/docs/library.md#understanding-knowledgebase-workings). ### Adding data to the knowledge base To insert data into the knowledge base, you can create a list of documents and use the `upsert` method: ```python from canopy.models.data_models import Document documents = [ Document( id=""1"", text=""U2 are an Irish rock band from Dublin, formed in 1976."", source=""https://en.wikipedia.org/wiki/U2"", ), Document( id=""2"", text=""Arctic Monkeys are an English rock band formed in Sheffield in 2002."", source=""https://en.wikipedia.org/wiki/Arctic_Monkeys"", metadata={""my-key"": ""my-value""}, ), ] kb.upsert(documents) ``` ### Querying the knowledge base You can query the knowledge base with the `query` method to find the most similar documents to a given text: ```python from canopy.models.data_models import Query kb.query( [ Query(text=""Arctic Monkeys music genre""), Query( text=""U2 music genre"", top_k=10, metadata_filter={""key"": ""my-key"", ""match"": {""value"": ""my-value""}}, ), ] ) ``` ## Further Reading - [Introduction to Canopy](https://www.pinecone.io/blog/canopy-rag-framework/) - [Canopy library reference](https://github.com/pinecone-io/canopy/blob/main/docs/library.md) - [Source Code](https://github.com/pinecone-io/canopy/tree/main/src/canopy/knowledge_base/qdrant) ",documentation/frameworks/canopy.md "--- title: Langchain Go --- # Langchain Go [Langchain Go](https://tmc.github.io/langchaingo/docs/) is a framework for developing data-aware applications powered by language models in Go. You can use Qdrant as a vector store in Langchain Go. ## Setup Install the `langchain-go` project dependency ```bash go get -u github.com/tmc/langchaingo ``` ## Usage Before you use the following code sample, customize the following values for your configuration: - `YOUR_QDRANT_REST_URL`: If you've set up Qdrant using the [Quick Start](/documentation/quick-start/) guide, set this value to `http://localhost:6333`. - `YOUR_COLLECTION_NAME`: Use our [Collections](/documentation/concepts/collections/) guide to create or list collections. ```go package main import ( ""log"" ""net/url"" ""github.com/tmc/langchaingo/embeddings"" ""github.com/tmc/langchaingo/llms/openai"" ""github.com/tmc/langchaingo/vectorstores/qdrant"" ) func main() { llm, err: = openai.New() if err != nil { log.Fatal(err) } e, err: = embeddings.NewEmbedder(llm) if err != nil { log.Fatal(err) } url, err: = url.Parse(""YOUR_QDRANT_REST_URL"") if err != nil { log.Fatal(err) } store, err: = qdrant.New( qdrant.WithURL( * url), qdrant.WithCollectionName(""YOUR_COLLECTION_NAME""), qdrant.WithEmbedder(e), ) if err != nil { log.Fatal(err) } } ``` ## Further Reading - You can find usage examples of Langchain Go [here](https://github.com/tmc/langchaingo/tree/main/examples). - [Source Code](https://github.com/tmc/langchaingo/tree/main/vectorstores/qdrant) ",documentation/frameworks/langchain-go.md "--- title: Firebase Genkit --- # Firebase Genkit [Genkit](https://firebase.google.com/products/genkit) is a framework to build, deploy, and monitor production-ready AI-powered apps. You can build apps that generate custom content, use semantic search, handle unstructured inputs, answer questions with your business data, autonomously make decisions, orchestrate tool calls, and more. You can use Qdrant for indexing/semantic retrieval of data in your Genkit applications via the [Qdrant-Genkit plugin](https://github.com/qdrant/qdrant-genkit). Genkit currently supports server-side development in JavaScript/TypeScript (Node.js) with Go support in active development. ## Installation ```bash npm i genkitx-qdrant ``` ## Configuration To use this plugin, specify it when you call `configureGenkit()`: ```js import { qdrant } from 'genkitx-qdrant'; import { textEmbeddingGecko } from '@genkit-ai/vertexai'; export default configureGenkit({ plugins: [ qdrant([ { clientParams: { host: 'localhost', port: 6333, }, collectionName: 'some-collection', embedder: textEmbeddingGecko, }, ]), ], // ... }); ``` You'll need to specify a collection name, the embedding model you want to use and the Qdrant client parameters. In addition, there are a few optional parameters: - `embedderOptions`: Additional options to pass options to the embedder: ```js embedderOptions: { taskType: 'RETRIEVAL_DOCUMENT' }, ``` - `contentPayloadKey`: Name of the payload filed with the document content. Defaults to ""content"". ```js contentPayloadKey: 'content'; ``` - `metadataPayloadKey`: Name of the payload filed with the document metadata. Defaults to ""metadata"". ```js metadataPayloadKey: 'metadata'; ``` - `collectionCreateOptions`: [Additional options](<(https://qdrant.tech/documentation/concepts/collections/#create-a-collection)>) when creating the Qdrant collection. ## Usage Import retriever and indexer references like so: ```js import { qdrantIndexerRef, qdrantRetrieverRef } from 'genkitx-qdrant'; import { Document, index, retrieve } from '@genkit-ai/ai/retriever'; ``` Then, pass the references to `retrieve()` and `index()`: ```js // To specify an indexer: export const qdrantIndexer = qdrantIndexerRef({ collectionName: 'some-collection', displayName: 'Some Collection indexer', }); await index({ indexer: qdrantIndexer, documents }); ``` ```js // To specify a retriever: export const qdrantRetriever = qdrantRetrieverRef({ collectionName: 'some-collection', displayName: 'Some Collection Retriever', }); let docs = await retrieve({ retriever: qdrantRetriever, query }); ``` You can refer to [Retrieval-augmented generation](https://firebase.google.com/docs/genkit/rag) for a general discussion on indexers and retrievers. ## Further Reading - [Introduction to Genkit](https://firebase.google.com/docs/genkit) - [Genkit Documentation](https://firebase.google.com/docs/genkit/get-started) - [Source Code](https://github.com/qdrant/qdrant-genkit) ",documentation/frameworks/genkit.md "--- title: Langchain4J --- # LangChain for Java LangChain for Java, also known as [Langchain4J](https://github.com/langchain4j/langchain4j), is a community port of [Langchain](https://www.langchain.com/) for building context-aware AI applications in Java You can use Qdrant as a vector store in Langchain4J through the [`langchain4j-qdrant`](https://central.sonatype.com/artifact/dev.langchain4j/langchain4j-qdrant) module. ## Setup Add the `langchain4j-qdrant` to your project dependencies. ```xml dev.langchain4j langchain4j-qdrant VERSION ``` ## Usage Before you use the following code sample, customize the following values for your configuration: - `YOUR_COLLECTION_NAME`: Use our [Collections](/documentation/concepts/collections/) guide to create or list collections. - `YOUR_HOST_URL`: Use the GRPC URL for your system. If you used the [Quick Start](/documentation/quick-start/) guide, it may be http://localhost:6334. If you've deployed in the [Qdrant Cloud](/documentation/cloud/), you may have a longer URL such as `https://example.location.cloud.qdrant.io:6334`. - `YOUR_API_KEY`: Substitute the API key associated with your configuration. ```java import dev.langchain4j.store.embedding.EmbeddingStore; import dev.langchain4j.store.embedding.qdrant.QdrantEmbeddingStore; EmbeddingStore embeddingStore = QdrantEmbeddingStore.builder() // Ensure the collection is configured with the appropriate dimensions // of the embedding model. // Reference https://qdrant.tech/documentation/concepts/collections/ .collectionName(""YOUR_COLLECTION_NAME"") .host(""YOUR_HOST_URL"") // GRPC port of the Qdrant server .port(6334) .apiKey(""YOUR_API_KEY"") .build(); ``` `QdrantEmbeddingStore` supports all the semantic features of Langchain4J. ## Further Reading - You can refer to the [Langchain4J examples](https://github.com/langchain4j/langchain4j-examples/) to get started. - [Source Code](https://github.com/langchain4j/langchain4j/tree/main/langchain4j-qdrant) ",documentation/frameworks/langchain4j.md "--- title: Langchain aliases: - ../integrations/langchain/ - /documentation/overview/integrations/langchain/ --- # Langchain Langchain is a library that makes developing Large Language Model-based applications much easier. It unifies the interfaces to different libraries, including major embedding providers and Qdrant. Using Langchain, you can focus on the business value instead of writing the boilerplate. Langchain distributes the Qdrant integration as a partner package. It might be installed with pip: ```bash pip install langchain-qdrant ``` The integration supports searching for relevant documents usin dense/sparse and hybrid retrieval. Qdrant acts as a vector index that may store the embeddings with the documents used to generate them. There are various ways to use it, but calling `QdrantVectorStore.from_texts` or `QdrantVectorStore.from_documents` is probably the most straightforward way to get started: ```python from langchain_qdrant import QdrantVectorStore from langchain_openai import OpenAIEmbeddings embeddings = OpenAIEmbeddings() doc_store = QdrantVectorStore.from_texts( texts, embeddings, url="""", api_key="""", collection_name=""texts"" ) ``` ## Using an existing collection To get an instance of `langchain_qdrant.QdrantVectorStore` without loading any new documents or texts, you can use the `QdrantVectorStore.from_existing_collection()` method. ```python doc_store = QdrantVectorStore.from_existing_collection( embeddings=embeddings, collection_name=""my_documents"", url="""", api_key="""", ) ``` ## Local mode Python client allows you to run the same code in local mode without running the Qdrant server. That's great for testing things out and debugging or if you plan to store just a small amount of vectors. The embeddings might be fully kept in memory or persisted on disk. ### In-memory For some testing scenarios and quick experiments, you may prefer to keep all the data in memory only, so it gets lost when the client is destroyed - usually at the end of your script/notebook. ```python qdrant = QdrantVectorStore.from_documents( docs, embeddings, location="":memory:"", # Local mode with in-memory storage only collection_name=""my_documents"", ) ``` ### On-disk storage Local mode, without using the Qdrant server, may also store your vectors on disk so they’re persisted between runs. ```python qdrant = Qdrant.from_documents( docs, embeddings, path=""/tmp/local_qdrant"", collection_name=""my_documents"", ) ``` ### On-premise server deployment No matter if you choose to launch QdrantVectorStore locally with [a Docker container](/documentation/guides/installation/), or select a Kubernetes deployment with [the official Helm chart](https://github.com/qdrant/qdrant-helm), the way you're going to connect to such an instance will be identical. You'll need to provide a URL pointing to the service. ```python url = ""<---qdrant url here --->"" qdrant = QdrantVectorStore.from_documents( docs, embeddings, url, prefer_grpc=True, collection_name=""my_documents"", ) ``` ## Similarity search `QdrantVectorStore` supports 3 modes for similarity searches. They can be configured using the `retrieval_mode` parameter when setting up the class. - Dense Vector Search(Default) - Sparse Vector Search - Hybrid Search ### Dense Vector Search To search with only dense vectors, - The `retrieval_mode` parameter should be set to `RetrievalMode.DENSE`(default). - A [dense embeddings](https://python.langchain.com/v0.2/docs/integrations/text_embedding/) value should be provided for the `embedding` parameter. ```py from langchain_qdrant import RetrievalMode qdrant = QdrantVectorStore.from_documents( docs, embedding=embeddings, location="":memory:"", collection_name=""my_documents"", retrieval_mode=RetrievalMode.DENSE, ) query = ""What did the president say about Ketanji Brown Jackson"" found_docs = qdrant.similarity_search(query) ``` ### Sparse Vector Search To search with only sparse vectors, - The `retrieval_mode` parameter should be set to `RetrievalMode.SPARSE`. - An implementation of the [SparseEmbeddings interface](https://github.com/langchain-ai/langchain/blob/master/libs/partners/qdrant/langchain_qdrant/sparse_embeddings.py) using any sparse embeddings provider has to be provided as value to the `sparse_embedding` parameter. The `langchain-qdrant` package provides a [FastEmbed](https://github.com/qdrant/fastembed) based implementation out of the box. To use it, install the [FastEmbed package](https://github.com/qdrant/fastembed#-installation). ```python from langchain_qdrant import FastEmbedSparse, RetrievalMode sparse_embeddings = FastEmbedSparse(model_name=""Qdrant/BM25"") qdrant = QdrantVectorStore.from_documents( docs, sparse_embedding=sparse_embeddings, location="":memory:"", collection_name=""my_documents"", retrieval_mode=RetrievalMode.SPARSE, ) query = ""What did the president say about Ketanji Brown Jackson"" found_docs = qdrant.similarity_search(query) ``` ### Hybrid Vector Search To perform a hybrid search using dense and sparse vectors with score fusion, - The `retrieval_mode` parameter should be set to `RetrievalMode.HYBRID`. - A [dense embeddings](https://python.langchain.com/v0.2/docs/integrations/text_embedding/) value should be provided for the `embedding` parameter. - An implementation of the [SparseEmbeddings interface](https://github.com/langchain-ai/langchain/blob/master/libs/partners/qdrant/langchain_qdrant/sparse_embeddings.py) using any sparse embeddings provider has to be provided as value to the `sparse_embedding` parameter. ```python from langchain_qdrant import FastEmbedSparse, RetrievalMode sparse_embeddings = FastEmbedSparse(model_name=""Qdrant/bm25"") qdrant = QdrantVectorStore.from_documents( docs, embedding=embeddings, sparse_embedding=sparse_embeddings, location="":memory:"", collection_name=""my_documents"", retrieval_mode=RetrievalMode.HYBRID, ) query = ""What did the president say about Ketanji Brown Jackson"" found_docs = qdrant.similarity_search(query) ``` Note that if you've added documents with HYBRID mode, you can switch to any retrieval mode when searching. Since both the dense and sparse vectors are available in the collection. ## Next steps If you'd like to know more about running Qdrant in a Langchain-based application, please read our article [Question Answering with Langchain and Qdrant without boilerplate](/articles/langchain-integration/). Some more information might also be found in the [Langchain documentation](https://python.langchain.com/docs/integrations/vectorstores/qdrant). - [Source Code](https://github.com/langchain-ai/langchain/tree/master/libs%2Fpartners%2Fqdrant) ",documentation/frameworks/langchain.md "--- title: LlamaIndex aliases: - ../integrations/llama-index/ - /documentation/overview/integrations/llama-index/ --- # LlamaIndex Llama Index acts as an interface between your external data and Large Language Models. So you can bring your private data and augment LLMs with it. LlamaIndex simplifies data ingestion and indexing, integrating Qdrant as a vector index. Installing Llama Index is straightforward if we use pip as a package manager. Qdrant is not installed by default, so we need to install it separately. The integration of both tools also comes as another package. ```bash pip install llama-index llama-index-vector-stores-qdrant ``` Llama Index requires providing an instance of `QdrantClient`, so it can interact with Qdrant server. ```python from llama_index.core.indices.vector_store.base import VectorStoreIndex from llama_index.vector_stores.qdrant import QdrantVectorStore import qdrant_client client = qdrant_client.QdrantClient( """", api_key="""", # For Qdrant Cloud, None for local instance ) vector_store = QdrantVectorStore(client=client, collection_name=""documents"") index = VectorStoreIndex.from_vector_store(vector_store=vector_store) ``` ## Further Reading - [LlamaIndex Documentation](https://docs.llamaindex.ai/en/stable/examples/vector_stores/QdrantIndexDemo/) - [Example Notebook](https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/docs/examples/vector_stores/QdrantIndexDemo.ipynb) - [Source Code](https://github.com/run-llama/llama_index/tree/main/llama-index-integrations/vector_stores/llama-index-vector-stores-qdrant) ",documentation/frameworks/llama-index.md "--- title: DocArray aliases: [ ../integrations/docarray/ ] --- # DocArray You can use Qdrant natively in DocArray, where Qdrant serves as a high-performance document store to enable scalable vector search. DocArray is a library from Jina AI for nested, unstructured data in transit, including text, image, audio, video, 3D mesh, etc. It allows deep-learning engineers to efficiently process, embed, search, recommend, store, and transfer the data with a Pythonic API. To install DocArray with Qdrant support, please do ```bash pip install ""docarray[qdrant]"" ``` ## Further Reading - [DocArray documentations](https://docarray.jina.ai/advanced/document-store/qdrant/). - [Source Code](https://github.com/docarray/docarray/blob/main/docarray/index/backends/qdrant.py) ",documentation/frameworks/docarray.md "--- title: Pandas-AI --- # Pandas-AI Pandas-AI is a Python library that uses a generative AI model to interpret natural language queries and translate them into Python code to interact with pandas data frames and return the final results to the user. ## Installation ```console pip install pandasai[qdrant] ``` ## Usage You can begin a conversation by instantiating an `Agent` instance based on your Pandas data frame. The default Pandas-AI LLM requires an [API key](https://pandabi.ai). You can find the list of all supported LLMs [here](https://docs.pandas-ai.com/en/latest/LLMs/llms/) ```python import os import pandas as pd from pandasai import Agent # Sample DataFrame sales_by_country = pd.DataFrame( { ""country"": [ ""United States"", ""United Kingdom"", ""France"", ""Germany"", ""Italy"", ""Spain"", ""Canada"", ""Australia"", ""Japan"", ""China"", ], ""sales"": [5000, 3200, 2900, 4100, 2300, 2100, 2500, 2600, 4500, 7000], } ) os.environ[""PANDASAI_API_KEY""] = ""YOUR_API_KEY"" agent = Agent(sales_by_country) agent.chat(""Which are the top 5 countries by sales?"") # OUTPUT: China, United States, Japan, Germany, Australia ``` ## Qdrant support You can train Pandas-AI to understand your data better and improve the quality of the results. Qdrant can be configured as a vector store to ingest training data and retrieve semantically relevant content. ```python from pandasai.ee.vectorstores.qdrant import Qdrant qdrant = Qdrant( collection_name="""", embedding_model=""sentence-transformers/all-MiniLM-L6-v2"", url=""http://localhost:6333"", grpc_port=6334, prefer_grpc=True ) agent = Agent(df, vector_store=qdrant) # Train with custom information agent.train(docs=""The fiscal year starts in April"") # Train the q/a pairs of code snippets query = ""What are the total sales for the current fiscal year?"" response = """""" import pandas as pd df = dfs[0] # Calculate the total sales for the current fiscal year total_sales = df[df['date'] >= pd.to_datetime('today').replace(month=4, day=1)]['sales'].sum() result = { ""type"": ""number"", ""value"": total_sales } """""" agent.train(queries=[query], codes=[response]) # # The model will use the information provided in the training to generate a response ``` ## Further reading - [Getting Started with Pandas-AI](https://pandasai-docs.readthedocs.io/en/latest/getting-started/) - [Pandas-AI Reference](https://pandasai-docs.readthedocs.io/en/latest/) - [Source Code](https://github.com/Sinaptik-AI/pandas-ai/blob/main/pandasai/ee/vectorstores/qdrant.py) ",documentation/frameworks/pandas-ai.md "--- title: MemGPT --- # MemGPT [MemGPT](https://memgpt.ai/) is a system that enables LLMs to manage their own memory and overcome limited context windows to - Create perpetual chatbots that learn about you and change their personalities over time. - Create perpetual chatbots that can interface with large data stores. Qdrant is available as a storage backend in MemGPT for storing and semantically retrieving data. ## Usage #### Installation To install the required dependencies, install `pymemgpt` with the `qdrant` extra. ```sh pip install 'pymemgpt[qdrant]' ``` You can configure MemGPT to use either a Qdrant server or an in-memory instance with the `memgpt configure` command. #### Configuring the Qdrant server When you run `memgpt configure`, go through the prompts as described in the [MemGPT configuration documentation](https://memgpt.readme.io/docs/config). After you address several `memgpt` questions, you come to the following `memgpt` prompts: ```console ? Select storage backend for archival data: qdrant ? Select Qdrant backend: server ? Enter the Qdrant instance URI (Default: localhost:6333): https://xyz-example.eu-central.aws.cloud.qdrant.io ``` You can set an API key for authentication using the `QDRANT_API_KEY` environment variable. #### Configuring an in-memory instance ```console ? Select storage backend for archival data: qdrant ? Select Qdrant backend: local ``` The data is persisted at the default MemGPT storage directory. ## Further Reading - [MemGPT Examples](https://github.com/cpacker/MemGPT/tree/main/examples) - [MemGPT Documentation](https://memgpt.readme.io/docs/index). ",documentation/frameworks/memgpt.md "--- title: Vanna.AI --- # Vanna.AI [Vanna](https://vanna.ai/) is a Python package that uses retrieval augmentation to help you generate accurate SQL queries for your database using LLMs. Vanna works in two easy steps - train a RAG ""model"" on your data, and then ask questions which will return SQL queries that can be set up to automatically run on your database. Qdrant is available as a support vector store for ingesting and retrieving your RAG data. ## Installation ```console pip install 'vanna[qdrant]' ``` ## Setup You can set up a Vanna agent using Qdrant as your vector store and any of the [LLMs supported by Vanna](https://vanna.ai/docs/postgres-openai-vanna-vannadb/). We'll use OpenAI for demonstration. ```python from vanna.openai import OpenAI_Chat from vanna.qdrant import Qdrant_VectorStore from qdrant_client import QdrantClient class MyVanna(Qdrant, OpenAI_Chat): def __init__(self, config=None): Qdrant_VectorStore.__init__(self, config=config) OpenAI_Chat.__init__(self, config=config) vn = MyVanna(config={ 'client': QdrantClient(...), 'api_key': sk-..., 'model': gpt-4-..., }) ``` ## Usage Once a Vanna agent is instantiated, you can connect it to [any SQL database](https://vanna.ai/docs/FAQ/#can-i-use-this-with-my-sql-database) of your choosing. For example, Postgres. ```python vn.connect_to_postgres(host='my-host', dbname='my-dbname', user='my-user', password='my-password', port='my-port') ``` You can now train and begin querying your database with SQL. ```python # You can add DDL statements that specify table names, column names, types, and potentially relationships vn.train(ddl="""""" CREATE TABLE IF NOT EXISTS my-table ( id INT PRIMARY KEY, name VARCHAR(100), age INT ) """""") # You can add documentation about your business terminology or definitions. vn.train(documentation=""Our business defines OTIF score as the percentage of orders that are delivered on time and in full"") # You can also add SQL queries to your training data. This is useful if you have some queries already laying around. vn.train(sql=""SELECT * FROM my-table WHERE name = 'John Doe'"") # You can remove training data if there's obsolete/incorrect information. vn.remove_training_data(id='1-ddl') # Whenever you ask a new question, Vanna will retrieve 10 most relevant pieces of training data and use it as part of the LLM prompt to generate the SQL. vn.ask(question="""") ``` ## Further reading - [Getting started with Vanna.AI](https://vanna.ai/docs/app/) - [Vanna.AI documentation](https://vanna.ai/docs/) - [Source Code](https://github.com/vanna-ai/vanna/tree/main/src/vanna/qdrant) ",documentation/frameworks/vanna-ai.md "--- title: Spring AI --- # Spring AI [Spring AI](https://docs.spring.io/spring-ai/reference/) is a Java framework that provides a [Spring-friendly](https://spring.io/) API and abstractions for developing AI applications. Qdrant is available as supported vector database for use within your Spring AI projects. ## Installation You can find the Spring AI installation instructions [here](https://docs.spring.io/spring-ai/reference/getting-started.html). Add the Qdrant boot starter package. ```xml org.springframework.ai spring-ai-qdrant-store-spring-boot-starter ``` ## Usage Configure Qdrant with Spring Boot’s `application.properties`. ``` spring.ai.vectorstore.qdrant.host= spring.ai.vectorstore.qdrant.port= spring.ai.vectorstore.qdrant.api-key= spring.ai.vectorstore.qdrant.collection-name= ``` Learn more about these options in the [configuration reference](https://docs.spring.io/spring-ai/reference/api/vectordbs/qdrant.html#qdrant-vectorstore-properties). Or you can set up the Qdrant vector store with the `QdrantVectorStoreConfig` options. ```java @Bean public QdrantVectorStoreConfig qdrantVectorStoreConfig() { return QdrantVectorStoreConfig.builder() .withHost("""") .withPort() .withCollectionName("""") .withApiKey("""") .build(); } ``` Build the vector store using the config and any of the support [Spring AI embedding providers](https://docs.spring.io/spring-ai/reference/api/embeddings.html#available-implementations). ```java @Bean public VectorStore vectorStore(QdrantVectorStoreConfig config, EmbeddingClient embeddingClient) { return new QdrantVectorStore(config, embeddingClient); } ``` You can now use the `VectorStore` instance backed by Qdrant as a vector store in the Spring AI APIs. ## 📚 Further Reading - Spring AI [Qdrant reference](https://docs.spring.io/spring-ai/reference/api/vectordbs/qdrant.html) - Spring AI [API reference](https://docs.spring.io/spring-ai/reference/index.html) - [Source Code](https://github.com/spring-projects/spring-ai/tree/main/vector-stores/spring-ai-qdrant-store) ",documentation/frameworks/spring-ai.md "--- title: Autogen aliases: [ ../integrations/autogen/ ] --- # Microsoft Autogen [AutoGen](https://github.com/microsoft/autogen) is a framework that enables the development of LLM applications using multiple agents that can converse with each other to solve tasks. AutoGen agents are customizable, conversable, and seamlessly allow human participation. They can operate in various modes that employ combinations of LLMs, human inputs, and tools. - Multi-agent conversations: AutoGen agents can communicate with each other to solve tasks. This allows for more complex and sophisticated applications than would be possible with a single LLM. - Customization: AutoGen agents can be customized to meet the specific needs of an application. This includes the ability to choose the LLMs to use, the types of human input to allow, and the tools to employ. - Human participation: AutoGen seamlessly allows human participation. This means that humans can provide input and feedback to the agents as needed. With the Autogen-Qdrant integration, you can use the `QdrantRetrieveUserProxyAgent` from autogen to build retrieval augmented generation(RAG) services with ease. ## Installation ```bash pip install ""pyautogen[retrievechat]"" ""qdrant_client[fastembed]"" ``` ## Usage A demo application that generates code based on context w/o human feedback #### Set your API Endpoint The config_list_from_json function loads a list of configurations from an environment variable or a JSON file. ```python from autogen import config_list_from_json from autogen.agentchat.contrib.retrieve_assistant_agent import RetrieveAssistantAgent from autogen.agentchat.contrib.qdrant_retrieve_user_proxy_agent import QdrantRetrieveUserProxyAgent from qdrant_client import QdrantClient config_list = config_list_from_json( env_or_file=""OAI_CONFIG_LIST"", file_location=""."" ) ``` It first looks for the environment variable ""OAI_CONFIG_LIST"" which needs to be a valid JSON string. If that variable is not found, it then looks for a JSON file named ""OAI_CONFIG_LIST"". The file structure sample can be found [here](https://github.com/microsoft/autogen/blob/main/OAI_CONFIG_LIST_sample). #### Construct agents for RetrieveChat We start by initializing the RetrieveAssistantAgent and QdrantRetrieveUserProxyAgent. The system message needs to be set to ""You are a helpful assistant."" for RetrieveAssistantAgent. The detailed instructions are given in the user message. ```python # Print the generation steps autogen.ChatCompletion.start_logging() # 1. create a RetrieveAssistantAgent instance named ""assistant"" assistant = RetrieveAssistantAgent( name=""assistant"", system_message=""You are a helpful assistant."", llm_config={ ""request_timeout"": 600, ""seed"": 42, ""config_list"": config_list, }, ) # 2. create a QdrantRetrieveUserProxyAgent instance named ""qdrantagent"" # By default, the human_input_mode is ""ALWAYS"", i.e. the agent will ask for human input at every step. # `docs_path` is the path to the docs directory. # `task` indicates the kind of task we're working on. # `chunk_token_size` is the chunk token size for the retrieve chat. # We use an in-memory QdrantClient instance here. Not recommended for production. rag_proxy_agent = QdrantRetrieveUserProxyAgent( name=""qdrantagent"", human_input_mode=""NEVER"", max_consecutive_auto_reply=10, retrieve_config={ ""task"": ""code"", ""docs_path"": ""./path/to/docs"", ""chunk_token_size"": 2000, ""model"": config_list[0][""model""], ""client"": QdrantClient("":memory:""), ""embedding_model"": ""BAAI/bge-small-en-v1.5"", }, ) ``` #### Run the retriever service ```python # Always reset the assistant before starting a new conversation. assistant.reset() # We use the ragproxyagent to generate a prompt to be sent to the assistant as the initial message. # The assistant receives the message and generates a response. The response will be sent back to the ragproxyagent for processing. # The conversation continues until the termination condition is met, in RetrieveChat, the termination condition when no human-in-loop is no code block detected. # The query used below is for demonstration. It should usually be related to the docs made available to the agent code_problem = ""How can I use FLAML to perform a classification task?"" rag_proxy_agent.initiate_chat(assistant, problem=code_problem) ``` ## Next steps - Autogen [examples](https://microsoft.github.io/autogen/docs/Examples) - AutoGen [documentation](https://microsoft.github.io/autogen/) - [Source Code](https://github.com/microsoft/autogen/blob/main/autogen/agentchat/contrib/qdrant_retrieve_user_proxy_agent.py) ",documentation/frameworks/autogen.md "--- title: txtai aliases: [ ../integrations/txtai/ ] --- # txtai Qdrant might be also used as an embedding backend in [txtai](https://neuml.github.io/txtai/) semantic applications. txtai simplifies building AI-powered semantic search applications using Transformers. It leverages the neural embeddings and their properties to encode high-dimensional data in a lower-dimensional space and allows to find similar objects based on their embeddings' proximity. Qdrant is not built-in txtai backend and requires installing an additional dependency: ```bash pip install qdrant-txtai ``` The examples and some more information might be found in [qdrant-txtai repository](https://github.com/qdrant/qdrant-txtai). ",documentation/frameworks/txtai.md "--- title: Frameworks weight: 15 --- ## Framework Integrations | Framework | Description | | ------------------------------------- | ---------------------------------------------------------------------------------------------------- | | [AutoGen](./autogen/) | Framework from Microsoft building LLM applications using multiple conversational agents. | | [Canopy](./canopy/) | Framework from Pinecone for building RAG applications using LLMs and knowledge bases. | | [Cheshire Cat](./cheshire-cat/) | Framework to create personalized AI assistants using custom data. | | [DocArray](./docarray/) | Python library for managing data in multi-modal AI applications. | | [DSPy](./dspy/) | Framework for algorithmically optimizing LM prompts and weights. | | [Fifty-One](./fifty-one/) | Toolkit for building high-quality datasets and computer vision models. | | [Genkit](./genkit/) | Framework to build, deploy, and monitor production-ready AI-powered apps. | | [Haystack](./haystack/) | LLM orchestration framework to build customizable, production-ready LLM applications. | | [Langchain](./langchain/) | Python framework for building context-aware, reasoning applications using LLMs. | | [Langchain-Go](./langchain-go/) | Go framework for building context-aware, reasoning applications using LLMs. | | [Langchain4j](./langchain4j/) | Java framework for building context-aware, reasoning applications using LLMs. | | [LlamaIndex](./llama-index/) | A data framework for building LLM applications with modular integrations. | | [MemGPT](./memgpt/) | System to build LLM agents with long term memory & custom tools | | [Pandas-AI](./pandas-ai/) | Python library to query/visualize your data (CSV, XLSX, PostgreSQL, etc.) in natural language | | [Semantic Router](./semantic-router/) | Python library to build a decision-making layer for AI applications using vector search. | | [Spring AI](./spring-ai/) | Java AI framework for building with Spring design principles such as portability and modular design. | | [Testcontainers](./testcontainers/) | Set of frameworks for running containerized dependencies in tests. | | [txtai](./txtai/) | Python library for semantic search, LLM orchestration and language model workflows. | | [Vanna AI](./vanna-ai/) | Python RAG framework for SQL generation and querying. | ",documentation/frameworks/_index.md "--- title: Haystack aliases: - ../integrations/haystack/ - /documentation/overview/integrations/haystack/ --- # Haystack [Haystack](https://haystack.deepset.ai/) serves as a comprehensive NLP framework, offering a modular methodology for constructing cutting-edge generative AI, QA, and semantic knowledge base search systems. A critical element in contemporary NLP systems is an efficient database for storing and retrieving extensive text data. Vector databases excel in this role, as they house vector representations of text and implement effective methods for swift retrieval. Thus, we are happy to announce the integration with Haystack - `QdrantDocumentStore`. This document store is unique, as it is maintained externally by the Qdrant team. The new document store comes as a separate package and can be updated independently of Haystack: ```bash pip install qdrant-haystack ``` `QdrantDocumentStore` supports [all the configuration properties](/documentation/collections/#create-collection) available in the Qdrant Python client. If you want to customize the default configuration of the collection used under the hood, you can provide that settings when you create an instance of the `QdrantDocumentStore`. For example, if you'd like to enable the Scalar Quantization, you'd make that in the following way: ```python from qdrant_haystack.document_stores import QdrantDocumentStore from qdrant_client import models document_store = QdrantDocumentStore( "":memory:"", index=""Document"", embedding_dim=512, recreate_index=True, quantization_config=models.ScalarQuantization( scalar=models.ScalarQuantizationConfig( type=models.ScalarType.INT8, quantile=0.99, always_ram=True, ), ), ) ``` ## Further Reading - [Haystack Documentation](https://haystack.deepset.ai/integrations/qdrant-document-store) - [Source Code](https://github.com/deepset-ai/haystack-core-integrations/tree/main/integrations/qdrant) ",documentation/frameworks/haystack.md "--- title: Cheshire Cat aliases: [ ../integrations/cheshire-cat/ ] --- # Cheshire Cat [Cheshire Cat](https://cheshirecat.ai/) is an open-source framework that allows you to develop intelligent agents on top of many Large Language Models (LLM). You can develop your custom AI architecture to assist you in a wide range of tasks. ![Cheshire cat](/documentation/frameworks/cheshire-cat/cat.jpg) ## Cheshire Cat and Qdrant Cheshire Cat uses Qdrant as the default [Vector Memory](https://cheshire-cat-ai.github.io/docs/faq/llm-concepts/vector-memory/) for ingesting and retrieving documents. ``` # Decide host and port for your Cat. Default will be localhost:1865 CORE_HOST=localhost CORE_PORT=1865 # Qdrant server # QDRANT_HOST=localhost # QDRANT_PORT=6333 ``` Cheshire Cat takes great advantage of the following features of Qdrant: * [Collection Aliases](../../concepts/collections/#collection-aliases) to manage the change from one embedder to another. * [Quantization](../../guides/quantization/) to obtain a good balance between speed, memory usage and quality of the results. * [Snapshots](../../concepts/snapshots/) to not miss any information. * [Community](https://discord.com/invite/tdtYvXjC4h) ![RAG Pipeline](/documentation/frameworks/cheshire-cat/stregatto.jpg) ## How to use the Cheshire Cat ### Requirements To run the Cheshire Cat, you need to have [Docker](https://docs.docker.com/engine/install/) and [docker-compose](https://docs.docker.com/compose/install/) already installed on your system. ```shell docker run --rm -it -p 1865:80 ghcr.io/cheshire-cat-ai/core:latest ``` * Chat with the Cheshire Cat on [localhost:1865/admin](http://localhost:1865/admin). * You can also interact via REST API and try out the endpoints on [localhost:1865/docs](http://localhost:1865/docs) Check the [instructions on github](https://github.com/cheshire-cat-ai/core/blob/main/README.md) for a more comprehensive quick start. ### First configuration of the LLM * Open the Admin Portal in your browser at [localhost:1865/admin](http://localhost:1865/admin). * Configure the LLM in the `Settings` tab. * If you don't explicitly choose it using `Settings` tab, the Embedder follows the LLM. ## Next steps For more information, refer to the Cheshire Cat [documentation](https://cheshire-cat-ai.github.io/docs/) and [blog](https://cheshirecat.ai/blog/). * [Getting started](https://cheshirecat.ai/hello-world/) * [How the Cat works](https://cheshirecat.ai/how-the-cat-works/) * [Write Your First Plugin](https://cheshirecat.ai/write-your-first-plugin/) * [Cheshire Cat's use of Qdrant - Vector Space](https://cheshirecat.ai/dont-get-lost-in-vector-space/) * [Cheshire Cat's use of Qdrant - Aliases](https://cheshirecat.ai/the-drunken-cat-effect/) * [Discord Community](https://discord.com/invite/bHX5sNFCYU) ",documentation/frameworks/cheshire-cat.md "--- title: Understanding Vector Search in Qdrant weight: 1 social_preview_image: /docs/gettingstarted/vector-social.png --- # How Does Vector Search Work in Qdrant?

If you are still trying to figure out how vector search works, please read ahead. This document describes how vector search is used, covers Qdrant's place in the larger ecosystem, and outlines how you can use Qdrant to augment your existing projects. For those who want to start writing code right away, visit our [Complete Beginners tutorial](/documentation/tutorials/search-beginners/) to build a search engine in 5-15 minutes. ## A Brief History of Search Human memory is unreliable. Thus, as long as we have been trying to collect ‘knowledge’ in written form, we had to figure out how to search for relevant content without rereading the same books repeatedly. That’s why some brilliant minds introduced the inverted index. In the simplest form, it’s an appendix to a book, typically put at its end, with a list of the essential terms-and links to pages they occur at. Terms are put in alphabetical order. Back in the day, that was a manually crafted list requiring lots of effort to prepare. Once digitalization started, it became a lot easier, but still, we kept the same general principles. That worked, and still, it does. If you are looking for a specific topic in a particular book, you can try to find a related phrase and quickly get to the correct page. Of course, assuming you know the proper term. If you don’t, you must try and fail several times or find somebody else to help you form the correct query. {{< figure src=/docs/gettingstarted/inverted-index.png caption=""A simplified version of the inverted index."" >}} Time passed, and we haven’t had much change in that area for quite a long time. But our textual data collection started to grow at a greater pace. So we also started building up many processes around those inverted indexes. For example, we allowed our users to provide many words and started splitting them into pieces. That allowed finding some documents which do not necessarily contain all the query words, but possibly part of them. We also started converting words into their root forms to cover more cases, removing stopwords, etc. Effectively we were becoming more and more user-friendly. Still, the idea behind the whole process is derived from the most straightforward keyword-based search known since the Middle Ages, with some tweaks. {{< figure src=/docs/gettingstarted/tokenization.png caption=""The process of tokenization with an additional stopwords removal and converstion to root form of a word."" >}} Technically speaking, we encode the documents and queries into so-called sparse vectors where each position has a corresponding word from the whole dictionary. If the input text contains a specific word, it gets a non-zero value at that position. But in reality, none of the texts will contain more than hundreds of different words. So the majority of vectors will have thousands of zeros and a few non-zero values. That’s why we call them sparse. And they might be already used to calculate some word-based similarity by finding the documents which have the biggest overlap. {{< figure src=/docs/gettingstarted/query.png caption=""An example of a query vectorized to sparse format."" >}} Sparse vectors have relatively **high dimensionality**; equal to the size of the dictionary. And the dictionary is obtained automatically from the input data. So if we have a vector, we are able to partially reconstruct the words used in the text that created that vector. ## The Tower of Babel Every once in a while, when we discover new problems with inverted indexes, we come up with a new heuristic to tackle it, at least to some extent. Once we realized that people might describe the same concept with different words, we started building lists of synonyms to convert the query to a normalized form. But that won’t work for the cases we didn’t foresee. Still, we need to craft and maintain our dictionaries manually, so they can support the language that changes over time. Another difficult issue comes to light with multilingual scenarios. Old methods require setting up separate pipelines and keeping humans in the loop to maintain the quality. {{< figure src=/docs/gettingstarted/babel.jpg caption=""The Tower of Babel, Pieter Bruegel."" >}} ## The Representation Revolution The latest research in Machine Learning for NLP is heavily focused on training Deep Language Models. In this process, the neural network takes a large corpus of text as input and creates a mathematical representation of the words in the form of vectors. These vectors are created in such a way that words with similar meanings and occurring in similar contexts are grouped together and represented by similar vectors. And we can also take, for example, an average of all the word vectors to create the vector for a whole text (e.g query, sentence, or paragraph). ![deep neural](/docs/gettingstarted/deep-neural.png) We can take those **dense vectors** produced by the network and use them as a **different data representation**. They are dense because neural networks will rarely produce zeros at any position. In contrary to sparse ones, they have a relatively low dimensionality — hundreds or a few thousand only. Unfortunately, if we want to have a look and understand the content of the document by looking at the vector it’s no longer possible. Dimensions are no longer representing the presence of specific words. Dense vectors can capture the meaning, not the words used in a text. That being said, **Large Language Models can automatically handle synonyms**. Moreso, since those neural networks might have been trained with multilingual corpora, they translate the same sentence, written in different languages, to similar vector representations, also called **embeddings**. And we can compare them to find similar pieces of text by calculating the distance to other vectors in our database. {{< figure src=/docs/gettingstarted/input.png caption=""Input queries contain different words, but they are still converted into similar vector representations, because the neural encoder can capture the meaning of the sentences. That feature can capture synonyms but also different languages.."" >}} **Vector search** is a process of finding similar objects based on their embeddings similarity. The good thing is, you don’t have to design and train your neural network on your own. Many pre-trained models are available, either on **HuggingFace** or by using libraries like [SentenceTransformers](https://www.sbert.net/?ref=hackernoon.com). If you, however, prefer not to get your hands dirty with neural models, you can also create the embeddings with SaaS tools, like [co.embed API](https://docs.cohere.com/reference/embed?ref=hackernoon.com). ## Why Qdrant? The challenge with vector search arises when we need to find similar documents in a big set of objects. If we want to find the closest examples, the naive approach would require calculating the distance to every document. That might work with dozens or even hundreds of examples but may become a bottleneck if we have more than that. When we work with relational data, we set up database indexes to speed things up and avoid full table scans. And the same is true for vector search. Qdrant is a fully-fledged vector database that speeds up the search process by using a graph-like structure to find the closest objects in sublinear time. So you don’t calculate the distance to every object from the database, but some candidates only. {{< figure src=/docs/gettingstarted/vector-search.png caption=""Vector search with Qdrant. Thanks to HNSW graph we are able to compare the distance to some of the objects from the database, not to all of them."" >}} While doing a semantic search at scale, because this is what we sometimes call the vector search done on texts, we need a specialized tool to do it effectively — a tool like Qdrant. ## Next Steps Vector search is an exciting alternative to sparse methods. It solves the issues we had with the keyword-based search without needing to maintain lots of heuristics manually. It requires an additional component, a neural encoder, to convert text into vectors. [**Tutorial 1 - Qdrant for Complete Beginners**](/documentation/tutorials/search-beginners/) Despite its complicated background, vectors search is extraordinarily simple to set up. With Qdrant, you can have a search engine up-and-running in five minutes. Our [Complete Beginners tutorial](../../tutorials/search-beginners/) will show you how. [**Tutorial 2 - Question and Answer System**](/articles/qa-with-cohere-and-qdrant/) However, you can also choose SaaS tools to generate them and avoid building your model. Setting up a vector search project with Qdrant Cloud and Cohere co.embed API is fairly easy if you follow the [Question and Answer system tutorial](/articles/qa-with-cohere-and-qdrant/). There is another exciting thing about vector search. You can search for any kind of data as long as there is a neural network that would vectorize your data type. Do you think about a reverse image search? That’s also possible with vector embeddings. ",documentation/overview/vector-search.md "--- title: What is Qdrant? weight: 3 aliases: - overview --- # Introduction Vector databases are a relatively new way for interacting with abstract data representations derived from opaque machine learning models such as deep learning architectures. These representations are often called vectors or embeddings and they are a compressed version of the data used to train a machine learning model to accomplish a task like sentiment analysis, speech recognition, object detection, and many others. These new databases shine in many applications like [semantic search](https://en.wikipedia.org/wiki/Semantic_search) and [recommendation systems](https://en.wikipedia.org/wiki/Recommender_system), and here, we'll learn about one of the most popular and fastest growing vector databases in the market, [Qdrant](https://github.com/qdrant/qdrant). ## What is Qdrant? [Qdrant](https://github.com/qdrant/qdrant) ""is a vector similarity search engine that provides a production-ready service with a convenient API to store, search, and manage points (i.e. vectors) with an additional payload."" You can think of the payloads as additional pieces of information that can help you hone in on your search and also receive useful information that you can give to your users. You can get started using Qdrant with the Python `qdrant-client`, by pulling the latest docker image of `qdrant` and connecting to it locally, or by trying out [Qdrant's Cloud](https://cloud.qdrant.io/) free tier option until you are ready to make the full switch. With that out of the way, let's talk about what are vector databases. ## What Are Vector Databases? ![dbs](https://raw.githubusercontent.com/ramonpzg/mlops-sydney-2023/main/images/databases.png) Vector databases are a type of database designed to store and query high-dimensional vectors efficiently. In traditional [OLTP](https://www.ibm.com/topics/oltp) and [OLAP](https://www.ibm.com/topics/olap) databases (as seen in the image above), data is organized in rows and columns (and these are called **Tables**), and queries are performed based on the values in those columns. However, in certain applications including image recognition, natural language processing, and recommendation systems, data is often represented as vectors in a high-dimensional space, and these vectors, plus an id and a payload, are the elements we store in something called a **Collection** within a vector database like Qdrant. A vector in this context is a mathematical representation of an object or data point, where elements of the vector implicitly or explicitly correspond to specific features or attributes of the object. For example, in an image recognition system, a vector could represent an image, with each element of the vector representing a pixel value or a descriptor/characteristic of that pixel. In a music recommendation system, each vector could represent a song, and elements of the vector would capture song characteristics such as tempo, genre, lyrics, and so on. Vector databases are optimized for **storing** and **querying** these high-dimensional vectors efficiently, and they often using specialized data structures and indexing techniques such as Hierarchical Navigable Small World (HNSW) -- which is used to implement Approximate Nearest Neighbors -- and Product Quantization, among others. These databases enable fast similarity and semantic search while allowing users to find vectors that are the closest to a given query vector based on some distance metric. The most commonly used distance metrics are Euclidean Distance, Cosine Similarity, and Dot Product, and these three are fully supported Qdrant. Here's a quick overview of the three: - [**Cosine Similarity**](https://en.wikipedia.org/wiki/Cosine_similarity) - Cosine similarity is a way to measure how similar two vectors are. To simplify, it reflects whether the vectors have the same direction (similar) or are poles apart. Cosine similarity is often used with text representations to compare how similar two documents or sentences are to each other. The output of cosine similarity ranges from -1 to 1, where -1 means the two vectors are completely dissimilar, and 1 indicates maximum similarity. - [**Dot Product**](https://en.wikipedia.org/wiki/Dot_product) - The dot product similarity metric is another way of measuring how similar two vectors are. Unlike cosine similarity, it also considers the length of the vectors. This might be important when, for example, vector representations of your documents are built based on the term (word) frequencies. The dot product similarity is calculated by multiplying the respective values in the two vectors and then summing those products. The higher the sum, the more similar the two vectors are. If you normalize the vectors (so the numbers in them sum up to 1), the dot product similarity will become the cosine similarity. - [**Euclidean Distance**](https://en.wikipedia.org/wiki/Euclidean_distance) - Euclidean distance is a way to measure the distance between two points in space, similar to how we measure the distance between two places on a map. It's calculated by finding the square root of the sum of the squared differences between the two points' coordinates. This distance metric is also commonly used in machine learning to measure how similar or dissimilar two vectors are. Now that we know what vector databases are and how they are structurally different than other databases, let's go over why they are important. ## Why do we need Vector Databases? Vector databases play a crucial role in various applications that require similarity search, such as recommendation systems, content-based image retrieval, and personalized search. By taking advantage of their efficient indexing and searching techniques, vector databases enable faster and more accurate retrieval of unstructured data already represented as vectors, which can help put in front of users the most relevant results to their queries. In addition, other benefits of using vector databases include: 1. Efficient storage and indexing of high-dimensional data. 3. Ability to handle large-scale datasets with billions of data points. 4. Support for real-time analytics and queries. 5. Ability to handle vectors derived from complex data types such as images, videos, and natural language text. 6. Improved performance and reduced latency in machine learning and AI applications. 7. Reduced development and deployment time and cost compared to building a custom solution. Keep in mind that the specific benefits of using a vector database may vary depending on the use case of your organization and the features of the database you ultimately choose. Let's now evaluate, at a high-level, the way Qdrant is architected. ## High-Level Overview of Qdrant's Architecture ![qdrant](https://raw.githubusercontent.com/ramonpzg/mlops-sydney-2023/main/images/qdrant_overview_high_level.png) The diagram above represents a high-level overview of some of the main components of Qdrant. Here are the terminologies you should get familiar with. - [Collections](../concepts/collections/): A collection is a named set of points (vectors with a payload) among which you can search. The vector of each point within the same collection must have the same dimensionality and be compared by a single metric. [Named vectors](../concepts/collections/#collection-with-multiple-vectors) can be used to have multiple vectors in a single point, each of which can have their own dimensionality and metric requirements. - [Distance Metrics](https://en.wikipedia.org/wiki/Metric_space): These are used to measure similarities among vectors and they must be selected at the same time you are creating a collection. The choice of metric depends on the way the vectors were obtained and, in particular, on the neural network that will be used to encode new queries. - [Points](../concepts/points/): The points are the central entity that Qdrant operates with and they consist of a vector and an optional id and payload. - id: a unique identifier for your vectors. - Vector: a high-dimensional representation of data, for example, an image, a sound, a document, a video, etc. - [Payload](../concepts/payload/): A payload is a JSON object with additional data you can add to a vector. - [Storage](../concepts/storage/): Qdrant can use one of two options for storage, **In-memory** storage (Stores all vectors in RAM, has the highest speed since disk access is required only for persistence), or **Memmap** storage, (creates a virtual address space associated with the file on disk). - Clients: the programming languages you can use to connect to Qdrant. ## Next Steps Now that you know more about vector databases and Qdrant, you are ready to get started with one of our tutorials. If you've never used a vector database, go ahead and jump straight into the **Getting Started** section. Conversely, if you are a seasoned developer in these technology, jump to the section most relevant to your use case. As you go through the tutorials, please let us know if any questions come up in our [Discord channel here](https://qdrant.to/discord). 😎 ",documentation/overview/_index.md "--- title: Qdrant Web UI weight: 2 aliases: - /documentation/web-ui/ --- # Qdrant Web UI You can manage both local and cloud Qdrant deployments through the Web UI. If you've set up a deployment locally with the Qdrant [Quickstart](/documentation/quick-start/), navigate to http://localhost:6333/dashboard. If you've set up a deployment in a cloud cluster, find your Cluster URL in your cloud dashboard, at https://cloud.qdrant.io. Add `:6333/dashboard` to the end of the URL. ## Access the Web UI Qdrant's Web UI is an intuitive and efficient graphic interface for your Qdrant Collections, REST API and data points. In the **Console**, you may use the REST API to interact with Qdrant, while in **Collections**, you can manage all the collections and upload Snapshots. ![Qdrant Web UI](/articles_data/qdrant-1.3.x/web-ui.png) ### Qdrant Web UI features In the Qdrant Web UI, you can: - Run HTTP-based calls from the console - List and search existing [collections](/documentation/concepts/collections/) - Learn from our interactive tutorial You can navigate to these options directly. For example, if you used our [quick start](/documentation/quick-start/) to set up a cluster on localhost, you can review our tutorial at http://localhost:6333/dashboard#/tutorial. ",documentation/interfaces/web-ui.md "--- title: API & SDKs weight: 6 aliases: - /documentation/interfaces/ --- # Interfaces Qdrant supports these ""official"" clients. > **Note:** If you are using a language that is not listed here, you can use the REST API directly or generate a client for your language using [OpenAPI](https://github.com/qdrant/qdrant/blob/master/docs/redoc/master/openapi.json) or [protobuf](https://github.com/qdrant/qdrant/tree/master/lib/api/src/grpc/proto) definitions. ## Client Libraries ||Client Repository|Installation|Version| |-|-|-|-| |[![python](/docs/misc/python.webp)](https://python-client.qdrant.tech/)|**[Python](https://github.com/qdrant/qdrant-client)** + **[(Client Docs)](https://python-client.qdrant.tech/)**|`pip install qdrant-client[fastembed]`|[Latest Release](https://github.com/qdrant/qdrant-client/releases)| |![typescript](/docs/misc/ts.webp)|**[JavaScript / Typescript](https://github.com/qdrant/qdrant-js)**|`npm install @qdrant/js-client-rest`|[Latest Release](https://github.com/qdrant/qdrant-js/releases)| |![rust](/docs/misc/rust.png)|**[Rust](https://github.com/qdrant/rust-client)**|`cargo add qdrant-client`|[Latest Release](https://github.com/qdrant/rust-client/releases)| |![golang](/docs/misc/go.webp)|**[Go](https://github.com/qdrant/go-client)**|`go get github.com/qdrant/go-client`|[Latest Release](https://github.com/qdrant/go-client)| |![.net](/docs/misc/dotnet.webp)|**[.NET](https://github.com/qdrant/qdrant-dotnet)**|`dotnet add package Qdrant.Client`|[Latest Release](https://github.com/qdrant/qdrant-dotnet/releases)| |![java](/docs/misc/java.webp)|**[Java](https://github.com/qdrant/java-client)**|[Available on Maven Central](https://central.sonatype.com/artifact/io.qdrant/client)|[Latest Release](https://github.com/qdrant/java-client/releases)| ## API Reference All interaction with Qdrant takes place via the REST API. We recommend using REST API if you are using Qdrant for the first time or if you are working on a prototype. | API | Documentation | | -------- | ------------------------------------------------------------------------------------ | | REST API | [OpenAPI Specification](https://api.qdrant.tech/api-reference) | | gRPC API | [gRPC Documentation](https://github.com/qdrant/qdrant/blob/master/docs/grpc/docs.md) | ### gRPC Interface The gRPC methods follow the same principles as REST. For each REST endpoint, there is a corresponding gRPC method. As per the [configuration file](https://github.com/qdrant/qdrant/blob/master/config/config.yaml), the gRPC interface is available on the specified port. ```yaml service: grpc_port: 6334 ``` Running the service inside of Docker will look like this: ```bash docker run -p 6333:6333 -p 6334:6334 \ -v $(pwd)/qdrant_storage:/qdrant/storage:z \ qdrant/qdrant ``` **When to use gRPC:** The choice between gRPC and the REST API is a trade-off between convenience and speed. gRPC is a binary protocol and can be more challenging to debug. We recommend using gRPC if you are already familiar with Qdrant and are trying to optimize the performance of your application. ",documentation/interfaces/_index.md "--- title: API Reference weight: 1 type: external-link external_url: https://api.qdrant.tech/api-reference sitemapExclude: True ---",documentation/interfaces/api-reference.md "--- title: About Us ---",about-us/_index.md "--- title: Retrieval Augmented Generation (RAG) description: Unlock the full potential of your AI with RAG powered by Qdrant. Dive into a new era of intelligent applications that understand and interact with unprecedented accuracy and depth. startFree: text: Get Started url: https://cloud.qdrant.io/ learnMore: text: Contact Us url: /contact-us/ image: src: /img/vectors/vector-2.svg alt: Retrieval Augmented Generation sitemapExclude: true --- ",retrieval-augmented-generation/retrieval-augmented-generation-hero.md "--- title: RAG with Qdrant description: RAG, powered by Qdrant's efficient data retrieval, elevates AI's capacity to generate rich, context-aware content across text, code, and multimedia, enhancing relevance and precision on a scalable platform. Discover why Qdrant is the perfect choice for your RAG project. features: - id: 0 icon: src: /icons/outline/speedometer-blue.svg alt: Speedometer title: Highest RPS description: Qdrant leads with top requests-per-second, outperforming alternative vector databases in various datasets by up to 4x. - id: 1 icon: src: /icons/outline/time-blue.svg alt: Time title: Fast Retrieval description: ""Qdrant achieves the lowest latency, ensuring quicker response times in data retrieval: 3ms response for 1M Open AI embeddings."" - id: 2 icon: src: /icons/outline/vectors-blue.svg alt: Vectors title: Multi-Vector Support description: Integrate the strengths of multiple vectors per document, such as title and body, to create search experiences your customers admire. - id: 3 icon: src: /icons/outline/compression-blue.svg alt: Compression title: Built-in Compression description: Significantly reduce memory usage, improve search performance and save up to 30x cost for high-dimensional vectors with Quantization. sitemapExclude: true --- ",retrieval-augmented-generation/retrieval-augmented-generation-features.md "--- title: Learn how to get started with Qdrant for your RAG use case features: - id: 0 image: src: /img/retrieval-augmented-generation-use-cases/case1.svg srcMobile: /img/retrieval-augmented-generation-use-cases/case1-mobile.svg alt: Music recommendation title: Question and Answer System with LlamaIndex description: Combine Qdrant and LlamaIndex to create a self-updating Q&A system. link: text: Video Tutorial url: https://www.youtube.com/watch?v=id5ql-Abq4Y&t=56s - id: 1 image: src: /img/retrieval-augmented-generation-use-cases/case2.svg srcMobile: /img/retrieval-augmented-generation-use-cases/case2-mobile.svg alt: Food discovery title: Retrieval Augmented Generation with OpenAI and Qdrant description: Basic RAG pipeline with Qdrant and OpenAI SDKs. link: text: Learn More url: /articles/food-discovery-demo/ caseStudy: logo: src: /img/retrieval-augmented-generation-use-cases/customer-logo.svg alt: Logo title: See how Dust is using Qdrant for RAG description: Dust provides companies with the core platform to execute on their GenAI bet for their teams by deploying LLMs across the organization and providing context aware AI assistants through RAG. link: text: Read Case Study url: /blog/dust-and-qdrant/ image: src: /img/retrieval-augmented-generation-use-cases/case-study.png alt: Preview sitemapExclude: true --- ",retrieval-augmented-generation/retrieval-augmented-generation-use-cases.md "--- title: RAG Evaluation descriptionFirstPart: Retrieval Augmented Generation (RAG) harnesses large language models to enhance content generation by effectively leveraging existing information. By amalgamating specific details from various sources, RAG facilitates accurate and relevant query results, making it invaluable across domains such as medical, finance, and academia for content creation, Q&A applications, and information synthesis. descriptionSecondPart: However, evaluating RAG systems is essential to refine and optimize their performance, ensuring alignment with user expectations and validating their functionality. image: src: /img/retrieval-augmented-generation-evaluation/become-a-partner-graphic.svg alt: Graphic partnersTitle: ""We work with the best in the industry on RAG evaluation:"" logos: - id: 0 icon: src: /img/retrieval-augmented-generation-evaluation/arize-logo.svg alt: Arize logo - id: 1 icon: src: /img/retrieval-augmented-generation-evaluation/ragas-logo.svg alt: Ragas logo - id: 2 icon: src: /img/retrieval-augmented-generation-evaluation/quotient-logo.svg alt: Quotient logo sitemapExclude: true --- ",retrieval-augmented-generation/retrieval-augmented-generation-evaluation.md "--- title: Qdrant integrates with all leading LLM providers and frameworks integrations: - id: 0 icon: src: /img/integrations/integration-cohere.svg alt: Cohere logo title: Cohere description: Integrate Qdrant with Cohere's co.embed API and Python SDK. - id: 1 icon: src: /img/integrations/integration-gemini.svg alt: Gemini logo title: Gemini description: Connect Qdrant with Google's Gemini Embedding Model API seamlessly. - id: 2 icon: src: /img/integrations/integration-open-ai.svg alt: OpenAI logo title: OpenAI description: Easily integrate OpenAI embeddings with Qdrant using the official Python SDK. - id: 3 icon: src: /img/integrations/integration-aleph-alpha.svg alt: Aleph Alpha logo title: Aleph Alpha description: Integrate Qdrant with Aleph Alpha's multimodal, multilingual embeddings. - id: 4 icon: src: /img/integrations/integration-jina.svg alt: Jina logo title: Jina AI description: Easily integrate Qdrant with Jina AI's embeddings API. - id: 5 icon: src: /img/integrations/integration-aws.svg alt: AWS logo title: AWS Bedrock description: Utilize AWS Bedrock's embedding models with Qdrant seamlessly. - id: 6 icon: src: /img/integrations/integration-lang-chain.svg alt: LangChain logo title: LangChain description: Qdrant seamlessly integrates with LangChain for LLM development. - id: 7 icon: src: /img/integrations/integration-llama-index.svg alt: LlamaIndex logo title: LlamaIndex description: Qdrant integrates with LlamaIndex for efficient data indexing in LLMs. sitemapExclude: true --- ",retrieval-augmented-generation/retrieval-augmented-generation-integrations.md "--- title: ""RAG Use Case: Advanced Vector Search for AI Applications"" description: ""Learn how Qdrant's advanced vector search enhances Retrieval-Augmented Generation (RAG) AI applications, offering scalable and efficient solutions."" url: rag build: render: always cascade: - build: list: local publishResources: false render: never --- ",retrieval-augmented-generation/_index.md "--- title: Qdrant Hybrid Cloud salesTitle: Hybrid Cloud description: Bring your own Kubernetes clusters from any cloud provider, on-premise infrastructure, or edge locations and connect them to the Managed Cloud. cards: - id: 0 icon: /icons/outline/separate-blue.svg title: Deployment Flexibility description: Use your existing infrastructure, whether it be on cloud platforms, on-premise setups, or even at edge locations. - id: 1 icon: /icons/outline/money-growth-blue.svg title: Unmatched Cost Advantage description: Maximum deployment flexibility to leverage the best available resources, in the cloud or on-premise. - id: 2 icon: /icons/outline/switches-blue.svg title: Transparent Control description: Fully managed experience for your Qdrant clusters, while your data remains exclusively yours. form: title: Connect with us # description: id: contact-sales-form hubspotFormOptions: '{ ""region"": ""eu1"", ""portalId"": ""139603372"", ""formId"": ""f583c7ea-15ff-4c57-9859-650b8f34f5d3"", ""submitButtonClass"": ""button button_contained"", }' logosSectionTitle: Qdrant is trusted by top-tier enterprises --- ",contact-hybrid-cloud/_index.md "--- title: Learn how to get started with Qdrant for your search use case features: - id: 0 image: src: /img/advanced-search-use-cases/startup-semantic-search.svg alt: Startup Semantic Search title: Startup Semantic Search Demo description: The demo showcases semantic search for startup descriptions through SentenceTransformer and Qdrant, comparing neural search's accuracy with traditional searches for better content discovery. link: text: View Demo url: https://demo.qdrant.tech/ - id: 1 image: src: /img/advanced-search-use-cases/multimodal-semantic-search.svg alt: Multimodal Semantic Search title: Multimodal Semantic Search with Aleph Alpha description: This tutorial shows you how to run a proper multimodal semantic search system with a few lines of code, without the need to annotate the data or train your networks. link: text: View Tutorial url: /documentation/examples/aleph-alpha-search/ - id: 2 image: src: /img/advanced-search-use-cases/simple-neural-search.svg alt: Simple Neural Search title: Create a Simple Neural Search Service description: This tutorial shows you how to build and deploy your own neural search service. link: text: View Tutorial url: /documentation/tutorials/neural-search/ - id: 3 image: src: /img/advanced-search-use-cases/image-classification.svg alt: Image Classification title: Image Classification with Qdrant Vector Semantic Search description: In this tutorial, you will learn how a semantic search engine for images can help diagnose different types of skin conditions. link: text: View Tutorial url: https://www.youtube.com/watch?v=sNFmN16AM1o - id: 4 image: src: /img/advanced-search-use-cases/semantic-search-101.svg alt: Semantic Search 101 title: Semantic Search 101 description: Build a semantic search engine for science fiction books in 5 mins. link: text: View Tutorial url: /documentation/tutorials/search-beginners/ - id: 5 image: src: /img/advanced-search-use-cases/hybrid-search-service-fastembed.svg alt: Create a Hybrid Search Service with Fastembed title: Create a Hybrid Search Service with Fastembed description: This tutorial guides you through building and deploying your own hybrid search service using Fastembed. link: text: View Tutorial url: /documentation/tutorials/hybrid-search-fastembed/ sitemapExclude: true --- ",advanced-search/advanced-search-use-cases.md "--- title: Search with Qdrant description: Qdrant enhances search, offering semantic, similarity, multimodal, and hybrid search capabilities for accurate, user-centric results, serving applications in different industries like e-commerce to healthcare. features: - id: 0 icon: src: /icons/outline/similarity-blue.svg alt: Similarity title: Semantic Search description: Qdrant optimizes similarity search, identifying the closest database items to any query vector for applications like recommendation systems, RAG and image retrieval, enhancing accuracy and user experience. link: text: Learn More url: /documentation/concepts/search/ - id: 1 icon: src: /icons/outline/search-text-blue.svg alt: Search text title: Hybrid Search for Text description: By combining dense vector embeddings with sparse vectors e.g. BM25, Qdrant powers semantic search to deliver context-aware results, transcending traditional keyword search by understanding the deeper meaning of data. link: text: Learn More url: /documentation/tutorials/hybrid-search-fastembed/ - id: 2 icon: src: /icons/outline/selection-blue.svg alt: Selection title: Multimodal Search description: Qdrant's capability extends to multi-modal search, indexing and retrieving various data forms (text, images, audio) once vectorized, facilitating a comprehensive search experience. link: text: View Tutorial url: /documentation/tutorials/aleph-alpha-search/ - id: 3 icon: src: /icons/outline/filter-blue.svg alt: Filter title: Single Stage filtering that Works description: Qdrant enhances search speeds and control and context understanding through filtering on any nested entry in our payload. Unique architecture allows Qdrant to avoid expensive pre-filtering and post-filtering stages, making search faster and accurate. link: text: Learn More url: /articles/filtrable-hnsw/ sitemapExclude: true --- ",advanced-search/advanced-search-features.md "--- title: ""Advanced Search Solutions: High-Performance Vector Search"" description: Explore how Qdrant's advanced search solutions enhance accuracy and user interaction depth across various industries, from e-commerce to healthcare. build: render: always cascade: - build: list: local publishResources: false render: never --- ",advanced-search/_index.md "--- title: Advanced Search description: Dive into next-gen search capabilities with Qdrant, offering a smarter way to deliver precise and tailored content to users, enhancing interaction accuracy and depth. startFree: text: Get Started url: https://cloud.qdrant.io/ learnMore: text: Contact Us url: /contact-us/ image: src: /img/vectors/vector-0.svg alt: Advanced search sitemapExclude: true --- ",advanced-search/advanced-search-hero.md "--- title: Qdrant Enterprise Solutions items: - id: 0 image: src: /img/enterprise-solutions-use-cases/managed-cloud.svg alt: Managed Cloud title: Managed Cloud description: Qdrant Cloud provides optimal flexibility and offers a suite of features focused on efficient and scalable vector search - fully managed. Available on AWS, Google Cloud, and Azure. link: text: Learn More url: /cloud/ odd: true - id: 1 image: src: /img/enterprise-solutions-use-cases/hybrid-cloud.svg alt: Hybrid Cloud title: Hybrid Cloud description: Bring your own Kubernetes clusters from any cloud provider, on-premise infrastructure, or edge locations and connect them to the managed cloud. link: text: Learn More url: /hybrid-cloud/ odd: false - id: 2 image: src: /img/enterprise-solutions-use-cases/private-cloud.svg alt: Private Cloud title: Private Cloud description: Experience maximum control and security by deploying Qdrant in your own infrastructure or edge locations. link: text: Learn More url: /private-cloud/ odd: true sitemapExclude: true --- ",enterprise-solutions/enterprise-solutions-use-cases.md "--- review: Enterprises like Bosch use Qdrant for unparalleled performance and massive-scale vector search. “With Qdrant, we found the missing piece to develop our own provider independent multimodal generative AI platform at enterprise scale.” names: Jeremy Teichmann & Daly Singh positions: Generative AI Expert & Product Owner avatar: src: /img/customers/jeremy-t-daly-singh.svg alt: Jeremy Teichmann Avatar logo: src: /img/brands/bosch-gray.svg alt: Logo sitemapExclude: true --- ",enterprise-solutions/testimonial.md "--- title: Enterprise-Grade Vector Search description: ""The premier vector database for enterprises: flexible deployment options for low latency and state-of-the-art privacy and security features. High performance at billion vector scale."" startFree: text: Start Free url: https://cloud.qdrant.io/ contactUs: text: Talk to Sales url: /contact-sales/ image: src: /img/enterprise-solutions-hero.png srcMobile: /img/mobile/enterprise-solutions-hero-mobile.png alt: Enterprise-solutions sitemapExclude: true --- ",enterprise-solutions/enterprise-solutions-hero.md "--- title: Enterprise Benefits cards: - id: 0 icon: src: /icons/outline/security-blue.svg alt: Security title: Security description: Robust access management, backup options, and disaster recovery. - id: 1 icon: src: /icons/outline/cloud-system-blue.svg alt: Cloud System title: Data Sovereignty description: Keep your sensitive data within your secure premises. - id: 0 icon: src: /icons/outline/speedometer-blue.svg alt: Speedometer title: Low-Latency description: On-premise deployment for lightning-fast, low-latency access. - id: 0 icon: src: /icons/outline/chart-bar-blue.svg alt: Chart-Bar title: Efficiency description: Reduce memory usage with built-in compression, multitenancy, and offloading data to disk. sitemapExclude: true --- ",enterprise-solutions/enterprise-benefits.md "--- title: Enterprise Search Solutions for Your Business | Qdrant description: Unlock the power of custom vector search with Qdrant's Enterprise Search Solutions. Tailored to your business needs to grow AI capabilities and data management. url: enterprise-solutions build: render: always cascade: - build: list: local publishResources: false render: never --- ",enterprise-solutions/_index.md "--- title: Components --- ## Buttons **.button** Text ### Variants
**.button .button_contained .button_sm** Try Free **.button .button_contained .button_md** Try Free **.button .button_contained .button_lg** Try Free **.button .button_contained .button_disabled** Try Free
**.button .button_outlined .button_sm** Try Free **.button .button_outlined .button_md** Try Free **.button .button_outlined .button_lg** Try Free **.button .button_outlined .button_disabled** Try Free
## Links **.link** Text",debug.skip/components.md "--- title: Bootstrap slug: bootstrap ---

Colors

Toggle details

Text Color

Ignore the background colors in this section, they are just to show the text color.

.text-primary

.text-secondary

.text-success

.text-danger

.text-warning

.text-info

.text-light

.text-dark

.text-body

.text-muted

.text-white

.text-black-50

.text-white-50

Background with contrasting text color

Primary with contrasting color
Secondary with contrasting color
Success with contrasting color
Danger with contrasting color
Warning with contrasting color
Info with contrasting color
Light with contrasting color
Dark with contrasting color

Background Classes

.bg-primary
.bg-secondary
.bg-success
.bg-danger
.bg-warning
.bg-info
.bg-light
.bg-dark
.bg-body
.bg-white
.bg-transparent

Colored Links

Primary link
Secondary link
Success link
Danger link
Warning link
Info link
Light link
Dark link

Typography

Toggle details

h1. Bootstrap heading

h2. Bootstrap heading

h3. Bootstrap heading

h4. Bootstrap heading

h5. Bootstrap heading
h6. Bootstrap heading

h1. Bootstrap heading

h2. Bootstrap heading

h3. Bootstrap heading

h4. Bootstrap heading

h5. Bootstrap heading

h6. Bootstrap heading

Fancy display heading With faded secondary text

Display 1

Display 2

Display 3

Display 4

Display 5

Display 6

This is a lead paragraph. It stands out from regular paragraphs. Some link

You can use the mark tag to highlight text.

This line of text is meant to be treated as deleted text.

This line of text is meant to be treated as no longer accurate.

This line of text is meant to be treated as an addition to the document.

This line of text will render as underlined.

This line of text is meant to be treated as fine print.

This line rendered as bold text.

This line rendered as italicized text.

attr

HTML

This is a link

A well-known quote, contained in a blockquote element.

A well-known quote, contained in a blockquote element.

Someone famous in Source Title
  • This is a list.
  • It appears completely unstyled.
  • Structurally, it's still a list.
  • However, this style only applies to immediate child elements.
  • Nested lists:
    • are unaffected by this style
    • will still show a bullet
    • and have appropriate left margin
  • This may still come in handy in some situations.
",debug.skip/bootstrap.md "--- title: Debugging --- ",debug.skip/_index.md "--- title: ""Qdrant 1.7.0 has just landed!"" short_description: ""Qdrant 1.7.0 brought a bunch of new features. Let's take a closer look at them!"" description: ""Sparse vectors, Discovery API, user-defined sharding, and snapshot-based shard transfer. That's what you can find in the latest Qdrant 1.7.0 release!"" social_preview_image: /articles_data/qdrant-1.7.x/social_preview.png small_preview_image: /articles_data/qdrant-1.7.x/icon.svg preview_dir: /articles_data/qdrant-1.7.x/preview weight: -90 author: Kacper Łukawski author_link: https://kacperlukawski.com date: 2023-12-10T10:00:00Z draft: false keywords: - vector search - new features - sparse vectors - discovery - exploration - custom sharding - snapshot-based shard transfer - hybrid search - bm25 - tfidf - splade --- Please welcome the long-awaited [Qdrant 1.7.0 release](https://github.com/qdrant/qdrant/releases/tag/v1.7.0). Except for a handful of minor fixes and improvements, this release brings some cool brand-new features that we are excited to share! The latest version of your favorite vector search engine finally supports **sparse vectors**. That's the feature many of you requested, so why should we ignore it? We also decided to continue our journey with [vector similarity beyond search](/articles/vector-similarity-beyond-search/). The new Discovery API covers some utterly new use cases. We're more than excited to see what you will build with it! But there is more to it! Check out what's new in **Qdrant 1.7.0**! 1. Sparse vectors: do you want to use keyword-based search? Support for sparse vectors is finally here! 2. Discovery API: an entirely new way of using vectors for restricted search and exploration. 3. User-defined sharding: you can now decide which points should be stored on which shard. 4. Snapshot-based shard transfer: a new option for moving shards between nodes. Do you see something missing? Your feedback drives the development of Qdrant, so do not hesitate to [join our Discord community](https://qdrant.to/discord) and help us build the best vector search engine out there! ## New features Qdrant 1.7.0 brings a bunch of new features. Let's take a closer look at them! ### Sparse vectors Traditional keyword-based search mechanisms often rely on algorithms like TF-IDF, BM25, or comparable methods. While these techniques internally utilize vectors, they typically involve sparse vector representations. In these methods, the **vectors are predominantly filled with zeros, containing a relatively small number of non-zero values**. Those sparse vectors are theoretically high dimensional, definitely way higher than the dense vectors used in semantic search. However, since the majority of dimensions are usually zeros, we store them differently and just keep the non-zero dimensions. Until now, Qdrant has not been able to handle sparse vectors natively. Some were trying to convert them to dense vectors, but that was not the best solution or a suggested way. We even wrote a piece with [our thoughts on building a hybrid search](/articles/hybrid-search/), and we encouraged you to use a different tool for keyword lookup. Things have changed since then, as so many of you wanted a single tool for sparse and dense vectors. And responding to this [popular](https://github.com/qdrant/qdrant/issues/1678) [demand](https://github.com/qdrant/qdrant/issues/1135), we've now introduced sparse vectors! If you're coming across the topic of sparse vectors for the first time, our [Brief History of Search](/documentation/overview/vector-search/) explains the difference between sparse and dense vectors. Check out the [sparse vectors article](../sparse-vectors/) and [sparse vectors index docs](/documentation/concepts/indexing/#sparse-vector-index) for more details on what this new index means for Qdrant users. ### Discovery API The recently launched [Discovery API](/documentation/concepts/explore/#discovery-api) extends the range of scenarios for leveraging vectors. While its interface mirrors the [Recommendation API](/documentation/concepts/explore/#recommendation-api), it focuses on refining the search parameters for greater precision. The concept of 'context' refers to a collection of positive-negative pairs that define zones within a space. Each pair effectively divides the space into positive or negative segments. This concept guides the search operation to prioritize points based on their inclusion within positive zones or their avoidance of negative zones. Essentially, the search algorithm favors points that fall within multiple positive zones or steer clear of negative ones. The Discovery API can be used in two ways - either with or without the target point. The first case is called a **discovery search**, while the second is called a **context search**. #### Discovery search *Discovery search* is an operation that uses a target point to find the most relevant points in the collection, while performing the search in the preferred areas only. That is basically a search operation with more control over the search space. ![Discovery search visualization](/articles_data/qdrant-1.7.x/discovery-search.png) Please refer to the [Discovery API documentation on discovery search](/documentation/concepts/explore/#discovery-search) for more details and the internal mechanics of the operation. #### Context search The mode of *context search* is similar to the discovery search, but it does not use a target point. Instead, the `context` is used to navigate the [HNSW graph](https://arxiv.org/abs/1603.09320) towards preferred zones. It is expected that the results in that mode will be diverse, and not centered around one point. *Context Search* could serve as a solution for individuals seeking a more exploratory approach to navigate the vector space. ![Context search visualization](/articles_data/qdrant-1.7.x/context-search.png) ### User-defined sharding Qdrant's collections are divided into shards. A single **shard** is a self-contained store of points, which can be moved between nodes. Up till now, the points were distributed among shards by using a consistent hashing algorithm, so that shards were managing non-intersecting subsets of points. The latter one remains true, but now you can define your own sharding and decide which points should be stored on which shard. Sounds cool, right? But why would you need that? Well, there are multiple scenarios in which you may want to use custom sharding. For example, you may want to store some points on a dedicated node, or you may want to store points from the same user on the same shard and While the existing behavior is still the default one, you can now define the shards when you create a collection. Then, you can assign each point to a shard by providing a `shard_key` in the `upsert` operation. What's more, you can also search over the selected shards only, by providing the `shard_key` parameter in the search operation. ```http request POST /collections/my_collection/points/search { ""vector"": [0.29, 0.81, 0.75, 0.11], ""shard_key"": [""cats"", ""dogs""], ""limit"": 10, ""with_payload"": true, } ``` If you want to know more about the user-defined sharding, please refer to the [sharding documentation](/documentation/guides/distributed_deployment/#sharding). ### Snapshot-based shard transfer That's a really more in depth technical improvement for the distributed mode users, that we implemented a new options the shard transfer mechanism. The new approach is based on the snapshot of the shard, which is transferred to the target node. Moving shards is required for dynamical scaling of the cluster. Your data can migrate between nodes, and the way you move it is crucial for the performance of the whole system. The good old `stream_records` method (still the default one) transmits all the records between the machines and indexes them on the target node. In the case of moving the shard, it's necessary to recreate the HNSW index each time. However, with the introduction of the new `snapshot` approach, the snapshot itself, inclusive of all data and potentially quantized content, is transferred to the target node. This comprehensive snapshot includes the entire index, enabling the target node to seamlessly load it and promptly begin handling requests without the need for index recreation. There are multiple scenarios in which you may prefer one over the other. Please check out the docs of the [shard transfer method](/documentation/guides/distributed_deployment/#shard-transfer-method) for more details and head-to-head comparison. As for now, the old `stream_records` method is still the default one, but we may decide to change it in the future. ## Minor improvements Beyond introducing new features, Qdrant 1.7.0 enhances performance and addresses various minor issues. Here's a rundown of the key improvements: 1. Improvement of HNSW Index Building on High CPU Systems ([PR#2869](https://github.com/qdrant/qdrant/pull/2869)). 2. Improving [Search Tail Latencies](https://github.com/qdrant/qdrant/pull/2931): improvement for high CPU systems with many parallel searches, directly impacting the user experience by reducing latency. 3. [Adding Index for Geo Map Payloads](https://github.com/qdrant/qdrant/pull/2768): index for geo map payloads can significantly improve search performance, especially for applications involving geographical data. 4. Stability of Consensus on Big High Load Clusters: enhancing the stability of consensus in large, high-load environments is critical for ensuring the reliability and scalability of the system ([PR#3013](https://github.com/qdrant/qdrant/pull/3013), [PR#3026](https://github.com/qdrant/qdrant/pull/3026), [PR#2942](https://github.com/qdrant/qdrant/pull/2942), [PR#3103](https://github.com/qdrant/qdrant/pull/3103), [PR#3054](https://github.com/qdrant/qdrant/pull/3054)). 5. Configurable Timeout for Searches: allowing users to configure the timeout for searches provides greater flexibility and can help optimize system performance under different operational conditions ([PR#2748](https://github.com/qdrant/qdrant/pull/2748), [PR#2771](https://github.com/qdrant/qdrant/pull/2771)). ## Release notes [Our release notes](https://github.com/qdrant/qdrant/releases/tag/v1.7.0) are a place to go if you are interested in more details. Please remember that Qdrant is an open source project, so feel free to [contribute](https://github.com/qdrant/qdrant/issues)! ",articles/qdrant-1.7.x.md "--- title: ""Any* Embedding Model Can Become a Late Interaction Model... If You Give It a Chance!"" short_description: ""Standard dense embedding models perform surprisingly well in late interaction scenarios."" description: ""We recently discovered that embedding models can become late interaction models & can perform surprisingly well in some scenarios. See what we learned here."" preview_dir: /articles_data/late-interaction-models/preview social_preview_image: /articles_data/late-interaction-models/social-preview.png weight: -160 author: Kacper Łukawski author_link: https://kacperlukawski.com date: 2024-08-14T00:00:00.000Z --- \* At least any open-source model, since you need access to its internals. ## You Can Adapt Dense Embedding Models for Late Interaction Qdrant 1.10 introduced support for multi-vector representations, with late interaction being a prominent example of this model. In essence, both documents and queries are represented by multiple vectors, and identifying the most relevant documents involves calculating a score based on the similarity between the corresponding query and document embeddings. If you're not familiar with this paradigm, our updated [Hybrid Search](/articles/hybrid-search/) article explains how multi-vector representations can enhance retrieval quality. **Figure 1:** We can visualize late interaction between corresponding document-query embedding pairs. ![Late interaction model](/articles_data/late-interaction-models/late-interaction.png) There are many specialized late interaction models, such as [ColBERT](https://qdrant.tech/documentation/fastembed/fastembed-colbert/), but **it appears that regular dense embedding models can also be effectively utilized in this manner**. > In this study, we will demonstrate that standard dense embedding models, traditionally used for single-vector representations, can be effectively adapted for late interaction scenarios using output token embeddings as multi-vector representations. By testing out retrieval with Qdrant’s multi-vector feature, we will show that these models can rival or surpass specialized late interaction models in retrieval performance, while offering lower complexity and greater efficiency. This work redefines the potential of dense models in advanced search pipelines, presenting a new method for optimizing retrieval systems. ## Understanding Embedding Models The inner workings of embedding models might be surprising to some. The model doesn’t operate directly on the input text; instead, it requires a tokenization step to convert the text into a sequence of token identifiers. Each token identifier is then passed through an embedding layer, which transforms it into a dense vector. Essentially, the embedding layer acts as a lookup table that maps token identifiers to dense vectors. These vectors are then fed into the transformer model as input. **Figure 2:** The tokenization step, which takes place before vectors are added to the transformer model. ![Input token embeddings](/articles_data/late-interaction-models/input-embeddings.png) The input token embeddings are context-free and are learned during the model’s training process. This means that each token always receives the same embedding, regardless of its position in the text. At this stage, the token embeddings are unaware of the context in which they appear. It is the transformer model’s role to contextualize these embeddings. Much has been discussed about the role of attention in transformer models, but in essence, this mechanism is responsible for capturing cross-token relationships. Each transformer module takes a sequence of token embeddings as input and produces a sequence of output token embeddings. Both sequences are of the same length, with each token embedding being enriched by information from the other token embeddings at the current step. **Figure 3:** The mechanism that produces a sequence of output token embeddings. ![Output token embeddings](/articles_data/late-interaction-models/output-embeddings.png) **Figure 4:** The final step performed by the embedding model is pooling the output token embeddings to generate a single vector representation of the input text. ![Pooling](/articles_data/late-interaction-models/pooling.png) There are several pooling strategies, but regardless of which one a model uses, the output is always a single vector representation, which inevitably loses some information about the input. It’s akin to giving someone detailed, step-by-step directions to the nearest grocery store versus simply pointing in the general direction. While the vague direction might suffice in some cases, the detailed instructions are more likely to lead to the desired outcome. ## Using Output Token Embeddings for Multi-Vector Representations We often overlook the output token embeddings, but the fact is—they also serve as multi-vector representations of the input text. So, why not explore their use in a multi-vector retrieval model, similar to late interaction models? ### Experimental Findings We conducted several experiments to determine whether output token embeddings could be effectively used in place of traditional late interaction models. The results are quite promising.
Dataset Model Experiment NDCG@10
SciFact prithivida/Splade_PP_en_v1 sparse vectors 0.70928
colbert-ir/colbertv2.0 late interaction model 0.69579
all-MiniLM-L6-v2 single dense vector representation 0.64508
output token embeddings 0.70724
BAAI/bge-small-en single dense vector representation 0.68213
output token embeddings 0.73696
NFCorpus prithivida/Splade_PP_en_v1 sparse vectors 0.34166
colbert-ir/colbertv2.0 late interaction model 0.35036
all-MiniLM-L6-v2 single dense vector representation 0.31594
output token embeddings 0.35779
BAAI/bge-small-en single dense vector representation 0.29696
output token embeddings 0.37502
ArguAna prithivida/Splade_PP_en_v1 sparse vectors 0.47271
colbert-ir/colbertv2.0 late interaction model 0.44534
all-MiniLM-L6-v2 single dense vector representation 0.50167
output token embeddings 0.45997
BAAI/bge-small-en single dense vector representation 0.58857
output token embeddings 0.57648
The [source code for these experiments is open-source](https://github.com/kacperlukawski/beir-qdrant/blob/main/examples/retrieval/search/evaluate_all_exact.py) and utilizes [`beir-qdrant`](https://github.com/kacperlukawski/beir-qdrant), an integration of Qdrant with the [BeIR library](https://github.com/beir-cellar/beir). While this package is not officially maintained by the Qdrant team, it may prove useful for those interested in experimenting with various Qdrant configurations to see how they impact retrieval quality. All experiments were conducted using Qdrant in exact search mode, ensuring the results are not influenced by approximate search. Even the simple `all-MiniLM-L6-v2` model can be applied in a late interaction model fashion, resulting in a positive impact on retrieval quality. However, the best results were achieved with the `BAAI/bge-small-en` model, which outperformed both sparse and late interaction models. It's important to note that ColBERT has not been trained on BeIR datasets, making its performance fully out of domain. Nevertheless, the `all-MiniLM-L6-v2` [training dataset](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2#training-data) also lacks any BeIR data, yet it still performs remarkably well. ## Comparative Analysis of Dense vs. Late Interaction Models The retrieval quality speaks for itself, but there are other important factors to consider. The traditional dense embedding models we tested are less complex than late interaction or sparse models. With fewer parameters, these models are expected to be faster during inference and more cost-effective to maintain. Below is a comparison of the models used in the experiments: | Model | Number of parameters | |------------------------------|----------------------| | `prithivida/Splade_PP_en_v1` | 109,514,298 | | `colbert-ir/colbertv2.0` | 109,580,544 | | `BAAI/bge-small-en` | 33,360,000 | | `all-MiniLM-L6-v2` | 22,713,216 | One argument against using output token embeddings is the increased storage requirements compared to ColBERT-like models. For instance, the `all-MiniLM-L6-v2` model produces 384-dimensional output token embeddings, which is three times more than the 128-dimensional embeddings generated by ColBERT-like models. This increase not only leads to higher memory usage but also impacts the computational cost of retrieval, as calculating distances takes more time. Mitigating this issue through vector compression would make a lot of sense. ## Exploring Quantization for Multi-Vector Representations Binary quantization is generally more effective for high-dimensional vectors, making the `all-MiniLM-L6-v2` model, with its relatively low-dimensional outputs, less ideal for this approach. However, scalar quantization appeared to be a viable alternative. The table below summarizes the impact of quantization on retrieval quality.
Dataset Model Experiment NDCG@10
SciFact all-MiniLM-L6-v2 output token embeddings 0.70724
output token embeddings (uint8) 0.70297
NFCorpus all-MiniLM-L6-v2 output token embeddings 0.35779
output token embeddings (uint8) 0.35572
It’s important to note that quantization doesn’t always preserve retrieval quality at the same level, but in this case, scalar quantization appears to have minimal impact on retrieval performance. The effect is negligible, while the memory savings are substantial. We managed to maintain the original quality while using four times less memory. Additionally, a quantized vector requires 384 bytes, compared to ColBERT’s 512 bytes. This results in a 25% reduction in memory usage, with retrieval quality remaining nearly unchanged. ## Practical Application: Enhancing Retrieval with Dense Models If you’re using one of the sentence transformer models, the output token embeddings are calculated by default. While a single vector representation is more efficient in terms of storage and computation, there’s no need to discard the output token embeddings. According to our experiments, these embeddings can significantly enhance retrieval quality. You can store both the single vector and the output token embeddings in Qdrant, using the single vector for the initial retrieval step and then reranking the results with the output token embeddings. **Figure 5:** A single model pipeline that relies solely on the output token embeddings for reranking. ![Single model reranking](/articles_data/late-interaction-models/single-model-reranking.png) To demonstrate this concept, we implemented a simple reranking pipeline in Qdrant. This pipeline uses a dense embedding model for the initial oversampled retrieval and then relies solely on the output token embeddings for the reranking step. ### Single Model Retrieval and Reranking Benchmarks Our tests focused on using the same model for both retrieval and reranking. The reported metric is NDCG@10. In all tests, we applied an oversampling factor of 5x, meaning the retrieval step returned 50 results, which were then narrowed down to 10 during the reranking step. Below are the results for some of the BeIR datasets:
Dataset all-miniLM-L6-v2 BAAI/bge-small-en
dense embeddings only dense + reranking dense embeddings only dense + reranking
SciFact 0.64508 0.70293 0.68213 0.73053
NFCorpus 0.31594 0.34297 0.29696 0.35996
ArguAna 0.50167 0.45378 0.58857 0.57302
Touche-2020 0.16904 0.19693 0.13055 0.19821
TREC-COVID 0.47246 0.6379 0.45788 0.53539
FiQA-2018 0.36867 0.41587 0.31091 0.39067
The source code for the benchmark is publicly available, and [you can find it in the repository of the `beir-qdrant` package](https://github.com/kacperlukawski/beir-qdrant/blob/main/examples/retrieval/search/evaluate_reranking.py). Overall, adding a reranking step using the same model typically improves retrieval quality. However, the quality of various late interaction models is [often reported based on their reranking performance when BM25 is used for the initial retrieval](https://huggingface.co/mixedbread-ai/mxbai-colbert-large-v1#1-reranking-performance). This experiment aimed to demonstrate how a single model can be effectively used for both retrieval and reranking, and the results are quite promising. Now, let's explore how to implement this using the new Query API introduced in Qdrant 1.10. ## Setting Up Qdrant for Late Interaction The new Query API in Qdrant 1.10 enables the construction of even more complex retrieval pipelines. We can use the single vector created after pooling for the initial retrieval step and then rerank the results using the output token embeddings. Assuming the collection is named `my-collection` and is configured to store two named vectors: `dense-vector` and `output-token-embeddings`, here’s how such a collection could be created in Qdrant: ```python from qdrant_client import QdrantClient, models client = QdrantClient(""http://localhost:6333"") client.create_collection( collection_name=""my-collection"", vectors_config={ ""dense-vector"": models.VectorParams( size=384, distance=models.Distance.COSINE, ), ""output-token-embeddings"": models.VectorParams( size=384, distance=models.Distance.COSINE, multivector_config=models.MultiVectorConfig( comparator=models.MultiVectorComparator.MAX_SIM ), ), } ) ``` Both vectors are of the same size since they are produced by the same `all-MiniLM-L6-v2` model. ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer(""all-MiniLM-L6-v2"") ``` Now, instead of using the search API with just a single dense vector, we can create a reranking pipeline. First, we retrieve 50 results using the dense vector, and then we rerank them using the output token embeddings to obtain the top 10 results. ```python query = ""What else can be done with just all-MiniLM-L6-v2 model?"" client.query_points( collection_name=""my-collection"", prefetch=[ # Prefetch the dense embeddings of the top-50 documents models.Prefetch( query=model.encode(query).tolist(), using=""dense-vector"", limit=50, ) ], # Rerank the top-50 documents retrieved by the dense embedding model # and return just the top-10. Please note we call the same model, but # we ask for the token embeddings by setting the output_value parameter. query=model.encode(query, output_value=""token_embeddings"").tolist(), using=""output-token-embeddings"", limit=10, ) ``` ## Try the Experiment Yourself In a real-world scenario, you might take it a step further by first calculating the token embeddings and then performing pooling to obtain the single vector representation. This approach allows you to complete everything in a single pass. The simplest way to start experimenting with building complex reranking pipelines in Qdrant is by using the forever-free cluster on [Qdrant Cloud](https://cloud.qdrant.io/) and reading [Qdrant's documentation](/documentation/). The [source code for these experiments is open-source](https://github.com/kacperlukawski/beir-qdrant/blob/main/examples/retrieval/search/evaluate_all_exact.py) and uses [`beir-qdrant`](https://github.com/kacperlukawski/beir-qdrant), an integration of Qdrant with the [BeIR library](https://github.com/beir-cellar/beir). ## Future Directions and Research Opportunities The initial experiments using output token embeddings in the retrieval process have yielded promising results. However, we plan to conduct further benchmarks to validate these findings and explore the incorporation of sparse methods for the initial retrieval. Additionally, we aim to investigate the impact of quantization on multi-vector representations and its effects on retrieval quality. Finally, we will assess retrieval speed, a crucial factor for many applications.",articles/late-interaction-models.md "--- title: Metric Learning Tips & Tricks short_description: How to train an object matching model and serve it in production. description: Practical recommendations on how to train a matching model and serve it in production. Even with no labeled data. # external_link: https://vasnetsov93.medium.com/metric-learning-tips-n-tricks-2e4cfee6b75b social_preview_image: /articles_data/metric-learning-tips/preview/social_preview.jpg preview_dir: /articles_data/metric-learning-tips/preview small_preview_image: /articles_data/metric-learning-tips/scatter-graph.svg weight: 20 author: Andrei Vasnetsov author_link: https://blog.vasnetsov.com/ date: 2021-05-15T10:18:00.000Z # aliases: [ /articles/metric-learning-tips/ ] --- ## How to train object matching model with no labeled data and use it in production Currently, most machine-learning-related business cases are solved as a classification problems. Classification algorithms are so well studied in practice that even if the original problem is not directly a classification task, it is usually decomposed or approximately converted into one. However, despite its simplicity, the classification task has requirements that could complicate its production integration and scaling. E.g. it requires a fixed number of classes, where each class should have a sufficient number of training samples. In this article, I will describe how we overcome these limitations by switching to metric learning. By the example of matching job positions and candidates, I will show how to train metric learning model with no manually labeled data, how to estimate prediction confidence, and how to serve metric learning in production. ## What is metric learning and why using it? According to Wikipedia, metric learning is the task of learning a distance function over objects. In practice, it means that we can train a model that tells a number for any pair of given objects. And this number should represent a degree or score of similarity between those given objects. For example, objects with a score of 0.9 could be more similar than objects with a score of 0.5 Actual scores and their direction could vary among different implementations. In practice, there are two main approaches to metric learning and two corresponding types of NN architectures. The first is the interaction-based approach, which first builds local interactions (i.e., local matching signals) between two objects. Deep neural networks learn hierarchical interaction patterns for matching. Examples of neural network architectures include MV-LSTM, ARC-II, and MatchPyramid. ![MV-LSTM, example of interaction-based model](https://gist.githubusercontent.com/generall/4821e3c6b5eee603d56729e7a156e461/raw/b0eb4ea5d088fe1095e529eb12708ac69f304ce3/mv_lstm.png) > MV-LSTM, example of interaction-based model, [Shengxian Wan et al. ](https://www.researchgate.net/figure/Illustration-of-MV-LSTM-S-X-and-S-Y-are-the-in_fig1_285271115) via Researchgate The second is the representation-based approach. In this case distance function is composed of 2 components: the Encoder transforms an object into embedded representation - usually a large float point vector, and the Comparator takes embeddings of a pair of objects from the Encoder and calculates their similarity. The most well-known example of this embedding representation is Word2Vec. Examples of neural network architectures also include DSSM, C-DSSM, and ARC-I. The Comparator is usually a very simple function that could be calculated very quickly. It might be cosine similarity or even a dot production. Two-stage schema allows performing complex calculations only once per object. Once transformed, the Comparator can calculate object similarity independent of the Encoder much more quickly. For more convenience, embeddings can be placed into specialized storages or vector search engines. These search engines allow to manage embeddings using API, perform searches and other operations with vectors. ![C-DSSM, example of representation-based model](https://gist.githubusercontent.com/generall/4821e3c6b5eee603d56729e7a156e461/raw/b0eb4ea5d088fe1095e529eb12708ac69f304ce3/cdssm.png) > C-DSSM, example of representation-based model, [Xue Li et al.](https://arxiv.org/abs/1901.10710v2) via arXiv Pre-trained NNs can also be used. The output of the second-to-last layer could work as an embedded representation. Further in this article, I would focus on the representation-based approach, as it proved to be more flexible and fast. So what are the advantages of using metric learning comparing to classification? Object Encoder does not assume the number of classes. So if you can't split your object into classes, if the number of classes is too high, or you suspect that it could grow in the future - consider using metric learning. In our case, business goal was to find suitable vacancies for candidates who specify the title of the desired position. To solve this, we used to apply a classifier to determine the job category of the vacancy and the candidate. But this solution was limited to only a few hundred categories. Candidates were complaining that they couldn't find the right category for them. Training the classifier for new categories would be too long and require new training data for each new category. Switching to metric learning allowed us to overcome these limitations, the resulting solution could compare any pair position descriptions, even if we don't have this category reference yet. ![T-SNE with job samples](https://gist.githubusercontent.com/generall/4821e3c6b5eee603d56729e7a156e461/raw/b0eb4ea5d088fe1095e529eb12708ac69f304ce3/embeddings.png) > T-SNE with job samples, Image by Author. Play with [Embedding Projector](https://projector.tensorflow.org/?config=https://gist.githubusercontent.com/generall/7e712425e3b340c2c4dbc1a29f515d91/raw/b45b2b6f6c1d5ab3d3363c50805f3834a85c8879/config.json) yourself. With metric learning, we learn not a concrete job type but how to match job descriptions from a candidate's CV and a vacancy. Secondly, with metric learning, it is easy to add more reference occupations without model retraining. We can then add the reference to a vector search engine. Next time we will match occupations - this new reference vector will be searchable. ## Data for metric learning Unlike classifiers, a metric learning training does not require specific class labels. All that is required are examples of similar and dissimilar objects. We would call them positive and negative samples. At the same time, it could be a relative similarity between a pair of objects. For example, twins look more alike to each other than a pair of random people. And random people are more similar to each other than a man and a cat. A model can use such relative examples for learning. The good news is that the division into classes is only a special case of determining similarity. To use such datasets, it is enough to declare samples from one class as positive and samples from another class as negative. In this way, it is possible to combine several datasets with mismatched classes into one generalized dataset for metric learning. But not only datasets with division into classes are suitable for extracting positive and negative examples. If, for example, there are additional features in the description of the object, the value of these features can also be used as a similarity factor. It may not be as explicit as class membership, but the relative similarity is also suitable for learning. In the case of job descriptions, there are many ontologies of occupations, which were able to be combined into a single dataset thanks to this approach. We even went a step further and used identical job titles to find similar descriptions. As a result, we got a self-supervised universal dataset that did not require any manual labeling. Unfortunately, universality does not allow some techniques to be applied in training. Next, I will describe how to overcome this disadvantage. ## Training the model There are several ways to train a metric learning model. Among the most popular is the use of Triplet or Contrastive loss functions, but I will not go deep into them in this article. However, I will tell you about one interesting trick that helped us work with unified training examples. One of the most important practices to efficiently train the metric learning model is hard negative mining. This technique aims to include negative samples on which model gave worse predictions during the last training epoch. Most articles that describe this technique assume that training data consists of many small classes (in most cases it is people's faces). With data like this, it is easy to find bad samples - if two samples from different classes have a high similarity score, we can use it as a negative sample. But we had no such classes in our data, the only thing we have is occupation pairs assumed to be similar in some way. We cannot guarantee that there is no better match for each job occupation among this pair. That is why we can't use hard negative mining for our model. ![Loss variations](https://gist.githubusercontent.com/generall/4821e3c6b5eee603d56729e7a156e461/raw/b0eb4ea5d088fe1095e529eb12708ac69f304ce3/losses.png) > [Alfonso Medela et al.](https://arxiv.org/abs/1905.10675) via arXiv To compensate for this limitation we can try to increase the number of random (weak) negative samples. One way to achieve this is to train the model longer, so it will see more samples by the end of the training. But we found a better solution in adjusting our loss function. In a regular implementation of Triplet or Contractive loss, each positive pair is compared with some or a few negative samples. What we did is we allow pair comparison amongst the whole batch. That means that loss-function penalizes all pairs of random objects if its score exceeds any of the positive scores in a batch. This extension gives `~ N * B^2` comparisons where `B` is a size of batch and `N` is a number of batches. Much bigger than `~ N * B` in regular triplet loss. This means that increasing the size of the batch significantly increases the number of negative comparisons, and therefore should improve the model performance. We were able to observe this dependence in our experiments. Similar idea we also found in the article [Supervised Contrastive Learning](https://arxiv.org/abs/2004.11362). ## Model confidence In real life it is often needed to know how confident the model was in the prediction. Whether manual adjustment or validation of the result is required. With conventional classification, it is easy to understand by scores how confident the model is in the result. If the probability values of different classes are close to each other, the model is not confident. If, on the contrary, the most probable class differs greatly, then the model is confident. At first glance, this cannot be applied to metric learning. Even if the predicted object similarity score is small it might only mean that the reference set has no proper objects to compare with. Conversely, the model can group garbage objects with a large score. Fortunately, we found a small modification to the embedding generator, which allows us to define confidence in the same way as it is done in conventional classifiers with a Softmax activation function. The modification consists in building an embedding as a combination of feature groups. Each feature group is presented as a one-hot encoded sub-vector in the embedding. If the model can confidently predict the feature value - the corresponding sub-vector will have a high absolute value in some of its elements. For a more intuitive understanding, I recommend thinking about embeddings not as points in space, but as a set of binary features. To implement this modification and form proper feature groups we would need to change a regular linear output layer to a concatenation of several Softmax layers. Each softmax component would represent an independent feature and force the neural network to learn them. Let's take for example that we have 4 softmax components with 128 elements each. Every such component could be roughly imagined as a one-hot-encoded number in the range of 0 to 127. Thus, the resulting vector will represent one of `128^4` possible combinations. If the trained model is good enough, you can even try to interpret the values of singular features individually. ![Softmax feature embeddings](https://gist.githubusercontent.com/generall/4821e3c6b5eee603d56729e7a156e461/raw/b0eb4ea5d088fe1095e529eb12708ac69f304ce3/feature_embedding.png) > Softmax feature embeddings, Image by Author. ## Neural rules Machine learning models rarely train to 100% accuracy. In a conventional classifier, errors can only be eliminated by modifying and repeating the training process. Metric training, however, is more flexible in this matter and allows you to introduce additional steps that allow you to correct the errors of an already trained model. A common error of the metric learning model is erroneously declaring objects close although in reality they are not. To correct this kind of error, we introduce exclusion rules. Rules consist of 2 object anchors encoded into vector space. If the target object falls into one of the anchors' effects area - it triggers the rule. It will exclude all objects in the second anchor area from the prediction result. ![Exclusion rules](https://gist.githubusercontent.com/generall/4821e3c6b5eee603d56729e7a156e461/raw/b0eb4ea5d088fe1095e529eb12708ac69f304ce3/exclusion_rule.png) > Neural exclusion rules, Image by Author. The convenience of working with embeddings is that regardless of the number of rules, you only need to perform the encoding once per object. Then to find a suitable rule, it is enough to compare the target object's embedding and the pre-calculated embeddings of the rule's anchors. Which, when implemented, translates into just one additional query to the vector search engine. ## Vector search in production When implementing a metric learning model in production, the question arises about the storage and management of vectors. It should be easy to add new vectors if new job descriptions appear in the service. In our case, we also needed to apply additional conditions to the search. We needed to filter, for example, the location of candidates and the level of language proficiency. We did not find a ready-made tool for such vector management, so we created [Qdrant](https://github.com/qdrant/qdrant) - open-source vector search engine. It allows you to add and delete vectors with a simple API, independent of a programming language you are using. You can also assign the payload to vectors. This payload allows additional filtering during the search request. Qdrant has a pre-built docker image and start working with it is just as simple as running ```bash docker run -p 6333:6333 qdrant/qdrant ``` Documentation with examples could be found [here](https://api.qdrant.tech/api-reference). ## Conclusion In this article, I have shown how metric learning can be more scalable and flexible than the classification models. I suggest trying similar approaches in your tasks - it might be matching similar texts, images, or audio data. With the existing variety of pre-trained neural networks and a vector search engine, it is easy to build your metric learning-based application. ",articles/metric-learning-tips.md "--- title: Qdrant 0.10 released short_description: A short review of all the features introduced in Qdrant 0.10 description: Qdrant 0.10 brings a lot of changes. Check out what's new! preview_dir: /articles_data/qdrant-0-10-release/preview small_preview_image: /articles_data/qdrant-0-10-release/new-svgrepo-com.svg social_preview_image: /articles_data/qdrant-0-10-release/preview/social_preview.jpg weight: 70 author: Kacper Łukawski author_link: https://medium.com/@lukawskikacper date: 2022-09-19T13:30:00+02:00 draft: false --- [Qdrant 0.10 is a new version](https://github.com/qdrant/qdrant/releases/tag/v0.10.0) that brings a lot of performance improvements, but also some new features which were heavily requested by our users. Here is an overview of what has changed. ## Storing multiple vectors per object Previously, if you wanted to use semantic search with multiple vectors per object, you had to create separate collections for each vector type. This was even if the vectors shared some other attributes in the payload. With Qdrant 0.10, you can now store all of these vectors together in the same collection, which allows you to share a single copy of the payload. This makes it easier to use semantic search with multiple vector types, and reduces the amount of work you need to do to set up your collections. ## Batch vector search Previously, you had to send multiple requests to the Qdrant API to perform multiple non-related tasks. However, this can cause significant network overhead and slow down the process, especially if you have a poor connection speed. Fortunately, the [new batch search feature](/documentation/concepts/search/#batch-search-api) allows you to avoid this issue. With just one API call, Qdrant will handle multiple search requests in the most efficient way possible. This means that you can perform multiple tasks simultaneously without having to worry about network overhead or slow performance. ## Built-in ARM support To make our application accessible to ARM users, we have compiled it specifically for that platform. If it is not compiled for ARM, the device will have to emulate it, which can slow down performance. To ensure the best possible experience for ARM users, we have created Docker images specifically for that platform. Keep in mind that using a limited set of processor instructions may affect the performance of your vector search. Therefore, we have tested both ARM and non-ARM architectures using similar setups to understand the potential impact on performance. ## Full-text filtering Qdrant is a vector database that allows you to quickly search for the nearest neighbors. However, you may need to apply additional filters on top of the semantic search. Up until version 0.10, Qdrant only supported keyword filters. With the release of Qdrant 0.10, [you can now use full-text filters](/documentation/concepts/filtering/#full-text-match) as well. This new filter type can be used on its own or in combination with other filter types to provide even more flexibility in your searches. ",articles/qdrant-0-10-release.md "--- title: ""Using LangChain for Question Answering with Qdrant"" short_description: ""Large Language Models might be developed fast with modern tool. Here is how!"" description: ""We combined LangChain, a pre-trained LLM from OpenAI, SentenceTransformers & Qdrant to create a question answering system with just a few lines of code. Learn more!"" social_preview_image: /articles_data/langchain-integration/social_preview.png small_preview_image: /articles_data/langchain-integration/chain.svg preview_dir: /articles_data/langchain-integration/preview weight: 6 author: Kacper Łukawski author_link: https://medium.com/@lukawskikacper date: 2023-01-31T10:53:20+01:00 draft: false keywords: - vector search - langchain - llm - large language models - question answering - openai - embeddings --- # Streamlining Question Answering: Simplifying Integration with LangChain and Qdrant Building applications with Large Language Models doesn't have to be complicated. A lot has been going on recently to simplify the development, so you can utilize already pre-trained models and support even complex pipelines with a few lines of code. [LangChain](https://langchain.readthedocs.io) provides unified interfaces to different libraries, so you can avoid writing boilerplate code and focus on the value you want to bring. ## Why Use Qdrant for Question Answering with LangChain? It has been reported millions of times recently, but let's say that again. ChatGPT-like models struggle with generating factual statements if no context is provided. They have some general knowledge but cannot guarantee to produce a valid answer consistently. Thus, it is better to provide some facts we know are actual, so it can just choose the valid parts and extract them from all the provided contextual data to give a comprehensive answer. [Vector database, such as Qdrant](https://qdrant.tech/), is of great help here, as their ability to perform a [semantic search](https://qdrant.tech/documentation/tutorials/search-beginners/) over a huge knowledge base is crucial to preselect some possibly valid documents, so they can be provided into the LLM. That's also one of the **chains** implemented in [LangChain](https://qdrant.tech/documentation/frameworks/langchain/), which is called `VectorDBQA`. And Qdrant got integrated with the library, so it might be used to build it effortlessly. ### The Two-Model Approach Surprisingly enough, there will be two models required to set things up. First of all, we need an embedding model that will convert the set of facts into vectors, and store those into Qdrant. That's an identical process to any other semantic search application. We're going to use one of the `SentenceTransformers` models, so it can be hosted locally. The embeddings created by that model will be put into Qdrant and used to retrieve the most similar documents, given the query. However, when we receive a query, there are two steps involved. First of all, we ask Qdrant to provide the most relevant documents and simply combine all of them into a single text. Then, we build a prompt to the LLM (in our case [OpenAI](https://openai.com/)), including those documents as a context, of course together with the question asked. So the input to the LLM looks like the following: ```text Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer. It's as certain as 2 + 2 = 4 ... Question: How much is 2 + 2? Helpful Answer: ``` There might be several context documents combined, and it is solely up to LLM to choose the right piece of content. But our expectation is, the model should respond with just `4`. ## Why do we need two different models? Both solve some different tasks. The first model performs feature extraction, by converting the text into vectors, while the second one helps in text generation or summarization. Disclaimer: This is not the only way to solve that task with LangChain. Such a chain is called `stuff` in the library nomenclature. ![](/articles_data/langchain-integration/flow-diagram.png) Enough theory! This sounds like a pretty complex application, as it involves several systems. But with LangChain, it might be implemented in just a few lines of code, thanks to the recent integration with [Qdrant](https://qdrant.tech/). We're not even going to work directly with `QdrantClient`, as everything is already done in the background by LangChain. If you want to get into the source code right away, all the processing is available as a [Google Colab notebook](https://colab.research.google.com/drive/19RxxkZdnq_YqBH5kBV10Rt0Rax-kminD?usp=sharing). ## How to Implement Question Answering with LangChain and Qdrant ### Step 1: Configuration A journey of a thousand miles begins with a single step, in our case with the configuration of all the services. We'll be using [Qdrant Cloud](https://cloud.qdrant.io), so we need an API key. The same is for OpenAI - the API key has to be obtained from their website. ![](/articles_data/langchain-integration/code-configuration.png) ### Step 2: Building the knowledge base We also need some facts from which the answers will be generated. There is plenty of public datasets available, and [Natural Questions](https://ai.google.com/research/NaturalQuestions/visualization) is one of them. It consists of the whole HTML content of the websites they were scraped from. That means we need some preprocessing to extract plain text content. As a result, we’re going to have two lists of strings - one for questions and the other one for the answers. The answers have to be vectorized with the first of our models. The `sentence-transformers/all-mpnet-base-v2` is one of the possibilities, but there are some other options available. LangChain will handle that part of the process in a single function call. ![](/articles_data/langchain-integration/code-qdrant.png) ### Step 3: Setting up QA with Qdrant in a loop `VectorDBQA` is a chain that performs the process described above. So it, first of all, loads some facts from Qdrant and then feeds them into OpenAI LLM which should analyze them to find the answer to a given question. The only last thing to do before using it is to put things together, also with a single function call. ![](/articles_data/langchain-integration/code-vectordbqa.png) ## Step 4: Testing out the chain And that's it! We can put some queries, and LangChain will perform all the required processing to find the answer in the provided context. ![](/articles_data/langchain-integration/code-answering.png) ```text > what kind of music is scott joplin most famous for Scott Joplin is most famous for composing ragtime music. > who died from the band faith no more Chuck Mosley > when does maggie come on grey's anatomy Maggie first appears in season 10, episode 1, which aired on September 26, 2013. > can't take my eyes off you lyrics meaning I don't know. > who lasted the longest on alone season 2 David McIntyre lasted the longest on Alone season 2, with a total of 66 days. ``` The great thing about such a setup is that the knowledge base might be easily extended with some new facts and those will be included in the prompts sent to LLM later on. Of course, assuming their similarity to the given question will be in the top results returned by Qdrant. If you want to run the chain on your own, the simplest way to reproduce it is to open the [Google Colab notebook](https://colab.research.google.com/drive/19RxxkZdnq_YqBH5kBV10Rt0Rax-kminD?usp=sharing). ",articles/langchain-integration.md "--- title: ""Optimizing OpenAI Embeddings: Enhance Efficiency with Qdrant's Binary Quantization"" draft: false slug: binary-quantization-openai short_description: Use Qdrant's Binary Quantization to enhance OpenAI embeddings description: Explore how Qdrant's Binary Quantization can significantly improve the efficiency and performance of OpenAI's Ada-003 embeddings. Learn best practices for real-time search applications. preview_dir: /articles_data/binary-quantization-openai/preview preview_image: /articles-data/binary-quantization-openai/Article-Image.png small_preview_image: /articles_data/binary-quantization-openai/icon.svg social_preview_image: /articles_data/binary-quantization-openai/preview/social-preview.png title_preview_image: /articles_data/binary-quantization-openai/preview/preview.webp date: 2024-02-21T13:12:08-08:00 author: Nirant Kasliwal author_link: https://nirantk.com/about/ featured: false tags: - OpenAI - binary quantization - embeddings weight: -130 aliases: [ /blog/binary-quantization-openai/ ] --- OpenAI Ada-003 embeddings are a powerful tool for natural language processing (NLP). However, the size of the embeddings are a challenge, especially with real-time search and retrieval. In this article, we explore how you can use Qdrant's Binary Quantization to enhance the performance and efficiency of OpenAI embeddings. In this post, we discuss: - The significance of OpenAI embeddings and real-world challenges. - Qdrant's Binary Quantization, and how it can improve the performance of OpenAI embeddings - Results of an experiment that highlights improvements in search efficiency and accuracy - Implications of these findings for real-world applications - Best practices for leveraging Binary Quantization to enhance OpenAI embeddings If you're new to Binary Quantization, consider reading our article which walks you through the concept and [how to use it with Qdrant](/articles/binary-quantization/) You can also try out these techniques as described in [Binary Quantization OpenAI](https://github.com/qdrant/examples/blob/openai-3/binary-quantization-openai/README.md), which includes Jupyter notebooks. ## New OpenAI embeddings: performance and changes As the technology of embedding models has advanced, demand has grown. Users are looking more for powerful and efficient text-embedding models. OpenAI's Ada-003 embeddings offer state-of-the-art performance on a wide range of NLP tasks, including those noted in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) and [MIRACL](https://openai.com/blog/new-embedding-models-and-api-updates). These models include multilingual support in over 100 languages. The transition from text-embedding-ada-002 to text-embedding-3-large has led to a significant jump in performance scores (from 31.4% to 54.9% on MIRACL). #### Matryoshka representation learning The new OpenAI models have been trained with a novel approach called ""[Matryoshka Representation Learning](https://aniketrege.github.io/blog/2024/mrl/)"". Developers can set up embeddings of different sizes (number of dimensions). In this post, we use small and large variants. Developers can select embeddings which balances accuracy and size. Here, we show how the accuracy of binary quantization is quite good across different dimensions -- for both the models. ## Enhanced performance and efficiency with binary quantization By reducing storage needs, you can scale applications with lower costs. This addresses a critical challenge posed by the original embedding sizes. Binary Quantization also speeds the search process. It simplifies the complex distance calculations between vectors into more manageable bitwise operations, which supports potentially real-time searches across vast datasets. The accompanying graph illustrates the promising accuracy levels achievable with binary quantization across different model sizes, showcasing its practicality without severely compromising on performance. This dual advantage of storage reduction and accelerated search capabilities underscores the transformative potential of Binary Quantization in deploying OpenAI embeddings more effectively across various real-world applications. ![](/blog/openai/Accuracy_Models.png) The efficiency gains from Binary Quantization are as follows: - Reduced storage footprint: It helps with large-scale datasets. It also saves on memory, and scales up to 30x at the same cost. - Enhanced speed of data retrieval: Smaller data sizes generally leads to faster searches. - Accelerated search process: It is based on simplified distance calculations between vectors to bitwise operations. This enables real-time querying even in extensive databases. ### Experiment setup: OpenAI embeddings in focus To identify Binary Quantization's impact on search efficiency and accuracy, we designed our experiment on OpenAI text-embedding models. These models, which capture nuanced linguistic features and semantic relationships, are the backbone of our analysis. We then delve deep into the potential enhancements offered by Qdrant's Binary Quantization feature. This approach not only leverages the high-caliber OpenAI embeddings but also provides a broad basis for evaluating the search mechanism under scrutiny. #### Dataset The research employs 100K random samples from the [OpenAI 1M](https://huggingface.co/datasets/KShivendu/dbpedia-entities-openai-1M) 1M dataset, focusing on 100 randomly selected records. These records serve as queries in the experiment, aiming to assess how Binary Quantization influences search efficiency and precision within the dataset. We then use the embeddings of the queries to search for the nearest neighbors in the dataset. #### Parameters: oversampling, rescoring, and search limits For each record, we run a parameter sweep over the number of oversampling, rescoring, and search limits. We can then understand the impact of these parameters on search accuracy and efficiency. Our experiment was designed to assess the impact of Binary Quantization under various conditions, based on the following parameters: - **Oversampling**: By oversampling, we can limit the loss of information inherent in quantization. This also helps to preserve the semantic richness of your OpenAI embeddings. We experimented with different oversampling factors, and identified the impact on the accuracy and efficiency of search. Spoiler: higher oversampling factors tend to improve the accuracy of searches. However, they usually require more computational resources. - **Rescoring**: Rescoring refines the first results of an initial binary search. This process leverages the original high-dimensional vectors to refine the search results, **always** improving accuracy. We toggled rescoring on and off to measure effectiveness, when combined with Binary Quantization. We also measured the impact on search performance. - **Search Limits**: We specify the number of results from the search process. We experimented with various search limits to measure their impact the accuracy and efficiency. We explored the trade-offs between search depth and performance. The results provide insight for applications with different precision and speed requirements. Through this detailed setup, our experiment sought to shed light on the nuanced interplay between Binary Quantization and the high-quality embeddings produced by OpenAI's models. By meticulously adjusting and observing the outcomes under different conditions, we aimed to uncover actionable insights that could empower users to harness the full potential of Qdrant in combination with OpenAI's embeddings, regardless of their specific application needs. ### Results: binary quantization's impact on OpenAI embeddings To analyze the impact of rescoring (`True` or `False`), we compared results across different model configurations and search limits. Rescoring sets up a more precise search, based on results from an initial query. #### Rescoring ![Graph that measures the impact of rescoring](/blog/openai/Rescoring_Impact.png) Here are some key observations, which analyzes the impact of rescoring (`True` or `False`): 1. **Significantly Improved Accuracy**: - Across all models and dimension configurations, enabling rescoring (`True`) consistently results in higher accuracy scores compared to when rescoring is disabled (`False`). - The improvement in accuracy is true across various search limits (10, 20, 50, 100). 2. **Model and Dimension Specific Observations**: - For the `text-embedding-3-large` model with 3072 dimensions, rescoring boosts the accuracy from an average of about 76-77% without rescoring to 97-99% with rescoring, depending on the search limit and oversampling rate. - The accuracy improvement with increased oversampling is more pronounced when rescoring is enabled, indicating a better utilization of the additional binary codes in refining search results. - With the `text-embedding-3-small` model at 512 dimensions, accuracy increases from around 53-55% without rescoring to 71-91% with rescoring, highlighting the significant impact of rescoring, especially at lower dimensions. In contrast, for lower dimension models (such as text-embedding-3-small with 512 dimensions), the incremental accuracy gains from increased oversampling levels are less significant, even with rescoring enabled. This suggests a diminishing return on accuracy improvement with higher oversampling in lower dimension spaces. 3. **Influence of Search Limit**: - The performance gain from rescoring seems to be relatively stable across different search limits, suggesting that rescoring consistently enhances accuracy regardless of the number of top results considered. In summary, enabling rescoring dramatically improves search accuracy across all tested configurations. It is crucial feature for applications where precision is paramount. The consistent performance boost provided by rescoring underscores its value in refining search results, particularly when working with complex, high-dimensional data like OpenAI embeddings. This enhancement is critical for applications that demand high accuracy, such as semantic search, content discovery, and recommendation systems, where the quality of search results directly impacts user experience and satisfaction. ### Dataset combinations For those exploring the integration of text embedding models with Qdrant, it's crucial to consider various model configurations for optimal performance. The dataset combinations defined above illustrate different configurations to test against Qdrant. These combinations vary by two primary attributes: 1. **Model Name**: Signifying the specific text embedding model variant, such as ""text-embedding-3-large"" or ""text-embedding-3-small"". This distinction correlates with the model's capacity, with ""large"" models offering more detailed embeddings at the cost of increased computational resources. 2. **Dimensions**: This refers to the size of the vector embeddings produced by the model. Options range from 512 to 3072 dimensions. Higher dimensions could lead to more precise embeddings but might also increase the search time and memory usage in Qdrant. Optimizing these parameters is a balancing act between search accuracy and resource efficiency. Testing across these combinations allows users to identify the configuration that best meets their specific needs, considering the trade-offs between computational resources and the quality of search results. ```python dataset_combinations = [ { ""model_name"": ""text-embedding-3-large"", ""dimensions"": 3072, }, { ""model_name"": ""text-embedding-3-large"", ""dimensions"": 1024, }, { ""model_name"": ""text-embedding-3-large"", ""dimensions"": 1536, }, { ""model_name"": ""text-embedding-3-small"", ""dimensions"": 512, }, { ""model_name"": ""text-embedding-3-small"", ""dimensions"": 1024, }, { ""model_name"": ""text-embedding-3-small"", ""dimensions"": 1536, }, ] ``` #### Exploring dataset combinations and their impacts on model performance The code snippet iterates through predefined dataset and model combinations. For each combination, characterized by the model name and its dimensions, the corresponding experiment's results are loaded. These results, which are stored in JSON format, include performance metrics like accuracy under different configurations: with and without oversampling, and with and without a rescore step. Following the extraction of these metrics, the code computes the average accuracy across different settings, excluding extreme cases of very low limits (specifically, limits of 1 and 5). This computation groups the results by oversampling, rescore presence, and limit, before calculating the mean accuracy for each subgroup. After gathering and processing this data, the average accuracies are organized into a pivot table. This table is indexed by the limit (the number of top results considered), and columns are formed based on combinations of oversampling and rescoring. ```python import pandas as pd for combination in dataset_combinations: model_name = combination[""model_name""] dimensions = combination[""dimensions""] print(f""Model: {model_name}, dimensions: {dimensions}"") results = pd.read_json(f""../results/results-{model_name}-{dimensions}.json"", lines=True) average_accuracy = results[results[""limit""] != 1] average_accuracy = average_accuracy[average_accuracy[""limit""] != 5] average_accuracy = average_accuracy.groupby([""oversampling"", ""rescore"", ""limit""])[ ""accuracy"" ].mean() average_accuracy = average_accuracy.reset_index() acc = average_accuracy.pivot( index=""limit"", columns=[""oversampling"", ""rescore""], values=""accuracy"" ) print(acc) ``` Here is a selected slice of these results, with `rescore=True`: |Method|Dimensionality|Test Dataset|Recall|Oversampling| |-|-|-|-|-| |OpenAI text-embedding-3-large (highest MTEB score from the table) |3072|[DBpedia 1M](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-large-3072-1M) | 0.9966|3x| |OpenAI text-embedding-3-small|1536|[DBpedia 100K](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-small-1536-100K)| 0.9847|3x| |OpenAI text-embedding-3-large|1536|[DBpedia 1M](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-large-1536-1M)| 0.9826|3x| #### Impact of oversampling You can use oversampling in machine learning to counteract imbalances in datasets. It works well when one class significantly outnumbers others. This imbalance can skew the performance of models, which favors the majority class at the expense of others. By creating additional samples from the minority classes, oversampling helps equalize the representation of classes in the training dataset, thus enabling more fair and accurate modeling of real-world scenarios. The screenshot showcases the effect of oversampling on model performance metrics. While the actual metrics aren't shown, we expect to see improvements in measures such as precision, recall, or F1-score. These improvements illustrate the effectiveness of oversampling in creating a more balanced dataset. It allows the model to learn a better representation of all classes, not just the dominant one. Without an explicit code snippet or output, we focus on the role of oversampling in model fairness and performance. Through graphical representation, you can set up before-and-after comparisons. These comparisons illustrate the contribution to machine learning projects. ![Measuring the impact of oversampling](/blog/openai/Oversampling_Impact.png) ### Leveraging binary quantization: best practices We recommend the following best practices for leveraging Binary Quantization to enhance OpenAI embeddings: 1. Embedding Model: Use the text-embedding-3-large from MTEB. It is most accurate among those tested. 2. Dimensions: Use the highest dimension available for the model, to maximize accuracy. The results are true for English and other languages. 3. Oversampling: Use an oversampling factor of 3 for the best balance between accuracy and efficiency. This factor is suitable for a wide range of applications. 4. Rescoring: Enable rescoring to improve the accuracy of search results. 5. RAM: Store the full vectors and payload on disk. Limit what you load from memory to the binary quantization index. This helps reduce the memory footprint and improve the overall efficiency of the system. The incremental latency from the disk read is negligible compared to the latency savings from the binary scoring in Qdrant, which uses SIMD instructions where possible. ## What's next? Binary quantization is exceptional if you need to work with large volumes of data under high recall expectations. You can try this feature either by spinning up a [Qdrant container image](https://hub.docker.com/r/qdrant/qdrant) locally or, having us create one for you through a [free account](https://cloud.qdrant.io/login) in our cloud hosted service. The article gives examples of data sets and configuration you can use to get going. Our documentation covers [adding large datasets to Qdrant](/documentation/tutorials/bulk-upload/) to your Qdrant instance as well as [more quantization methods](/documentation/guides/quantization/). Want to discuss these findings and learn more about Binary Quantization? [Join our Discord community.](https://discord.gg/qdrant) ",articles/binary-quantization-openai.md "--- title: ""How to Implement Multitenancy and Custom Sharding in Qdrant"" short_description: ""Explore how Qdrant's multitenancy and custom sharding streamline machine-learning operations, enhancing scalability and data security."" description: ""Discover how multitenancy and custom sharding in Qdrant can streamline your machine-learning operations. Learn how to scale efficiently and manage data securely."" social_preview_image: /articles_data/multitenancy/social_preview.png preview_dir: /articles_data/multitenancy/preview small_preview_image: /articles_data/multitenancy/icon.svg weight: -120 author: David Myriel date: 2024-02-06T13:21:00.000Z draft: false keywords: - multitenancy - custom sharding - multiple partitions - vector database --- # Scaling Your Machine Learning Setup: The Power of Multitenancy and Custom Sharding in Qdrant We are seeing the topics of [multitenancy](/documentation/guides/multiple-partitions/) and [distributed deployment](/documentation/guides/distributed_deployment/#sharding) pop-up daily on our [Discord support channel](https://qdrant.to/discord). This tells us that many of you are looking to scale Qdrant along with the rest of your machine learning setup. Whether you are building a bank fraud-detection system, [RAG](https://qdrant.tech/articles/what-is-rag-in-ai/) for e-commerce, or services for the federal government - you will need to leverage a multitenant architecture to scale your product. In the world of SaaS and enterprise apps, this setup is the norm. It will considerably increase your application's performance and lower your hosting costs. ## Multitenancy & custom sharding with Qdrant We have developed two major features just for this. __You can now scale a single Qdrant cluster and support all of your customers worldwide.__ Under [multitenancy](/documentation/guides/multiple-partitions/), each customer's data is completely isolated and only accessible by them. At times, if this data is location-sensitive, Qdrant also gives you the option to divide your cluster by region or other criteria that further secure your customer's access. This is called [custom sharding](/documentation/guides/distributed_deployment/#user-defined-sharding). Combining these two will result in an efficiently-partitioned architecture that further leverages the convenience of a single Qdrant cluster. This article will briefly explain the benefits and show how you can get started using both features. ## One collection, many tenants When working with Qdrant, you can upsert all your data to a single collection, and then partition each vector via its payload. This means that all your users are leveraging the power of a single Qdrant cluster, but their data is still isolated within the collection. Let's take a look at a two-tenant collection: **Figure 1:** Each individual vector is assigned a specific payload that denotes which tenant it belongs to. This is how a large number of different tenants can share a single Qdrant collection. ![Qdrant Multitenancy](/articles_data/multitenancy/multitenancy-single.png) Qdrant is built to excel in a single collection with a vast number of tenants. You should only create multiple collections when your data is not homogenous or if users' vectors are created by different embedding models. Creating too many collections may result in resource overhead and cause dependencies. This can increase costs and affect overall performance. ## Sharding your database With Qdrant, you can also specify a shard for each vector individually. This feature is useful if you want to [control where your data is kept in the cluster](/documentation/guides/distributed_deployment/#sharding). For example, one set of vectors can be assigned to one shard on its own node, while another set can be on a completely different node. During vector search, your operations will be able to hit only the subset of shards they actually need. In massive-scale deployments, __this can significantly improve the performance of operations that do not require the whole collection to be scanned__. This works in the other direction as well. Whenever you search for something, you can specify a shard or several shards and Qdrant will know where to find them. It will avoid asking all machines in your cluster for results. This will minimize overhead and maximize performance. ### Common use cases A clear use-case for this feature is managing a multitenant collection, where each tenant (let it be a user or organization) is assumed to be segregated, so they can have their data stored in separate shards. Sharding solves the problem of region-based data placement, whereby certain data needs to be kept within specific locations. To do this, however, you will need to [move your shards between nodes](/documentation/guides/distributed_deployment/#moving-shards). **Figure 2:** Users can both upsert and query shards that are relevant to them, all within the same collection. Regional sharding can help avoid cross-continental traffic. ![Qdrant Multitenancy](/articles_data/multitenancy/shards.png) Custom sharding also gives you precise control over other use cases. A time-based data placement means that data streams can index shards that represent latest updates. If you organize your shards by date, you can have great control over the recency of retrieved data. This is relevant for social media platforms, which greatly rely on time-sensitive data. ## Before I go any further.....how secure is my user data? By design, Qdrant offers three levels of isolation. We initially introduced collection-based isolation, but your scaled setup has to move beyond this level. In this scenario, you will leverage payload-based isolation (from multitenancy) and resource-based isolation (from sharding). The ultimate goal is to have a single collection, where you can manipulate and customize placement of shards inside your cluster more precisely and avoid any kind of overhead. The diagram below shows the arrangement of your data within a two-tier isolation arrangement. **Figure 3:** Users can query the collection based on two filters: the `group_id` and the individual `shard_key_selector`. This gives your data two additional levels of isolation. ![Qdrant Multitenancy](/articles_data/multitenancy/multitenancy.png) ## Create custom shards for a single collection When creating a collection, you will need to configure user-defined sharding. This lets you control the shard placement of your data, so that operations can hit only the subset of shards they actually need. In big clusters, this can significantly improve the performance of operations, since you won't need to go through the entire collection to retrieve data. ```python client.create_collection( collection_name=""{tenant_data}"", shard_number=2, sharding_method=models.ShardingMethod.CUSTOM, # ... other collection parameters ) client.create_shard_key(""{tenant_data}"", ""canada"") client.create_shard_key(""{tenant_data}"", ""germany"") ``` In this example, your cluster is divided between Germany and Canada. Canadian and German law differ when it comes to international data transfer. Let's say you are creating a RAG application that supports the healthcare industry. Your Canadian customer data will have to be clearly separated for compliance purposes from your German customer. Even though it is part of the same collection, data from each shard is isolated from other shards and can be retrieved as such. For additional examples on shards and retrieval, consult [Distributed Deployments](/documentation/guides/distributed_deployment/) documentation and [Qdrant Client specification](https://python-client.qdrant.tech). ## Configure a multitenant setup for users Let's continue and start adding data. As you upsert your vectors to your new collection, you can add a `group_id` field to each vector. If you do this, Qdrant will assign each vector to its respective group. Additionally, each vector can now be allocated to a shard. You can specify the `shard_key_selector` for each individual vector. In this example, you are upserting data belonging to `tenant_1` to the Canadian region. ```python client.upsert( collection_name=""{tenant_data}"", points=[ models.PointStruct( id=1, payload={""group_id"": ""tenant_1""}, vector=[0.9, 0.1, 0.1], ), models.PointStruct( id=2, payload={""group_id"": ""tenant_1""}, vector=[0.1, 0.9, 0.1], ), ], shard_key_selector=""canada"", ) ``` Keep in mind that the data for each `group_id` is isolated. In the example below, `tenant_1` vectors are kept separate from `tenant_2`. The first tenant will be able to access their data in the Canadian portion of the cluster. However, as shown below `tenant_2 `might only be able to retrieve information hosted in Germany. ```python client.upsert( collection_name=""{tenant_data}"", points=[ models.PointStruct( id=3, payload={""group_id"": ""tenant_2""}, vector=[0.1, 0.1, 0.9], ), ], shard_key_selector=""germany"", ) ``` ## Retrieve data via filters The access control setup is completed as you specify the criteria for data retrieval. When searching for vectors, you need to use a `query_filter` along with `group_id` to filter vectors for each user. ```python client.search( collection_name=""{tenant_data}"", query_filter=models.Filter( must=[ models.FieldCondition( key=""group_id"", match=models.MatchValue( value=""tenant_1"", ), ), ] ), query_vector=[0.1, 0.1, 0.9], limit=10, ) ``` ## Performance considerations The speed of indexation may become a bottleneck if you are adding large amounts of data in this way, as each user's vector will be indexed into the same collection. To avoid this bottleneck, consider _bypassing the construction of a global vector index_ for the entire collection and building it only for individual groups instead. By adopting this strategy, Qdrant will index vectors for each user independently, significantly accelerating the process. To implement this approach, you should: 1. Set `payload_m` in the HNSW configuration to a non-zero value, such as 16. 2. Set `m` in hnsw config to 0. This will disable building global index for the whole collection. ```python from qdrant_client import QdrantClient, models client = QdrantClient(""localhost"", port=6333) client.create_collection( collection_name=""{tenant_data}"", vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE), hnsw_config=models.HnswConfigDiff( payload_m=16, m=0, ), ) ``` 3. Create keyword payload index for `group_id` field. ```python client.create_payload_index( collection_name=""{tenant_data}"", field_name=""group_id"", field_schema=models.PayloadSchemaType.KEYWORD, ) ``` > Note: Keep in mind that global requests (without the `group_id` filter) will be slower since they will necessitate scanning all groups to identify the nearest neighbors. ## Explore multitenancy and custom sharding in Qdrant for scalable solutions Qdrant is ready to support a massive-scale architecture for your machine learning project. If you want to see whether our [vector database](https://qdrant.tech/) is right for you, try the [quickstart tutorial](/documentation/quick-start/) or read our [docs and tutorials](/documentation/). To spin up a free instance of Qdrant, sign up for [Qdrant Cloud](https://qdrant.to/cloud) - no strings attached. Get support or share ideas in our [Discord](https://qdrant.to/discord) community. This is where we talk about vector search theory, publish examples and demos and discuss vector database setups. ",articles/multitenancy.md "--- title: ""What is RAG: Understanding Retrieval-Augmented Generation"" draft: false slug: what-is-rag-in-ai? short_description: What is RAG? description: Explore how RAG enables LLMs to retrieve and utilize relevant external data when generating responses, rather than being limited to their original training data alone. preview_dir: /articles_data/what-is-rag-in-ai/preview weight: -150 social_preview_image: /articles_data/what-is-rag-in-ai/preview/social_preview.jpg small_preview_image: /articles_data/what-is-rag-in-ai/icon.svg date: 2024-03-19T9:29:33-03:00 author: Sabrina Aquino author_link: https://github.com/sabrinaaquino featured: true tags: - retrieval augmented generation - what is rag - embeddings - llm rag - rag application --- > Retrieval-augmented generation (RAG) integrates external information retrieval into the process of generating responses by Large Language Models (LLMs). It searches a database for information beyond its pre-trained knowledge base, significantly improving the accuracy and relevance of the generated responses. Language models have exploded on the internet ever since ChatGPT came out, and rightfully so. They can write essays, code entire programs, and even make memes (though we’re still deciding on whether that's a good thing). But as brilliant as these chatbots become, they still have **limitations** in tasks requiring external knowledge and factual information. Yes, it can describe the honeybee's waggle dance in excruciating detail. But they become far more valuable if they can generate insights from **any data** that we provide, rather than just their original training data. Since retraining those large language models from scratch costs millions of dollars and takes months, we need better ways to give our existing LLMs access to our custom data. While you could be more creative with your prompts, it is only a short-term solution. LLMs can consider only a **limited** amount of text in their responses, known as a [context window](https://www.hopsworks.ai/dictionary/context-window-for-llms). Some models like GPT-3 can see up to around 12 pages of text (that’s 4,096 tokens of context). That’s not good enough for most knowledge bases. ![How a RAG works](/articles_data/what-is-rag-in-ai/how-rag-works.jpg) The image above shows how a basic RAG system works. Before forwarding the question to the LLM, we have a layer that searches our knowledge base for the ""relevant knowledge"" to answer the user query. Specifically, in this case, the spending data from the last month. Our LLM can now generate a **relevant non-hallucinated** response about our budget. As your data grows, you’ll need efficient ways to identify the most relevant information for your LLM's limited memory. This is where you’ll want a proper way to store and retrieve the specific data you’ll need for your query, without needing the LLM to remember it. **Vector databases** store information as **vector embeddings**. This format supports efficient similarity searches to retrieve relevant data for your query. For example, Qdrant is specifically designed to perform fast, even in scenarios dealing with billions of vectors. This article will focus on RAG systems and architecture. If you’re interested in learning more about vector search, we recommend the following articles: [What is a Vector Database?](/articles/what-is-a-vector-database/) and [What are Vector Embeddings?](/articles/what-are-embeddings/). ## RAG architecture At its core, a RAG architecture includes the **retriever** and the **generator**. Let's start by understanding what each of these components does. ### The Retriever When you ask a question to the retriever, it uses **similarity search** to scan through a vast knowledge base of vector embeddings. It then pulls out the most **relevant** vectors to help answer that query. There are a few different techniques it can use to know what’s relevant: #### How indexing works in RAG retrievers The indexing process organizes the data into your vector database in a way that makes it easily searchable. This allows the RAG to access relevant information when responding to a query. ![How indexing works](/articles_data/what-is-rag-in-ai/how-indexing-works.jpg) As shown in the image above, here’s the process: * Start with a _loader_ that gathers _documents_ containing your data. These documents could be anything from articles and books to web pages and social media posts. * Next, a _splitter_ divides the documents into smaller chunks, typically sentences or paragraphs. * This is because RAG models work better with smaller pieces of text. In the diagram, these are _document snippets_. * Each text chunk is then fed into an _embedding machine_. This machine uses complex algorithms to convert the text into [vector embeddings](/articles/what-are-embeddings/). All the generated vector embeddings are stored in a knowledge base of indexed information. This supports efficient retrieval of similar pieces of information when needed. #### Query vectorization Once you have vectorized your knowledge base you can do the same to the user query. When the model sees a new query, it uses the same preprocessing and embedding techniques. This ensures that the query vector is compatible with the document vectors in the index. ![How retrieval works](/articles_data/what-is-rag-in-ai/how-retrieval-works.jpg) #### Retrieval of relevant documents When the system needs to find the most relevant documents or passages to answer a query, it utilizes vector similarity techniques. **Vector similarity** is a fundamental concept in machine learning and natural language processing (NLP) that quantifies the resemblance between vectors, which are mathematical representations of data points. The system can employ different vector similarity strategies depending on the type of vectors used to represent the data: ##### Sparse vector representations A sparse vector is characterized by a high dimensionality, with most of its elements being zero. The classic approach is **keyword search**, which scans documents for the exact words or phrases in the query. The search creates sparse vector representations of documents by counting word occurrences and inversely weighting common words. Queries with rarer words get prioritized. ![Sparse vector representation](/articles_data/what-is-rag-in-ai/sparse-vectors.jpg) [TF-IDF](https://en.wikipedia.org/wiki/Tf%E2%80%93idf) (Term Frequency-Inverse Document Frequency) and [BM25](https://en.wikipedia.org/wiki/Okapi_BM25) are two classic related algorithms. They're simple and computationally efficient. However, they can struggle with synonyms and don't always capture semantic similarities. If you’re interested in going deeper, refer to our article on [Sparse Vectors](/articles/sparse-vectors/). ##### Dense vector embeddings This approach uses large language models like [BERT](https://en.wikipedia.org/wiki/BERT_(language_model)) to encode the query and passages into dense vector embeddings. These models are compact numerical representations that capture semantic meaning. Vector databases like Qdrant store these embeddings, allowing retrieval based on **semantic similarity** rather than just keywords using distance metrics like cosine similarity. This allows the retriever to match based on semantic understanding rather than just keywords. So if I ask about ""compounds that cause BO,"" it can retrieve relevant info about ""molecules that create body odor"" even if those exact words weren't used. We explain more about it in our [What are Vector Embeddings](/articles/what-are-embeddings/) article. #### Hybrid search However, neither keyword search nor vector search are always perfect. Keyword search may miss relevant information expressed differently, while vector search can sometimes struggle with specificity or neglect important statistical word patterns. Hybrid methods aim to combine the strengths of different techniques. ![Hybrid search overview](/articles_data/what-is-rag-in-ai/hybrid-search.jpg) Some common hybrid approaches include: * Using keyword search to get an initial set of candidate documents. Next, the documents are re-ranked/re-scored using semantic vector representations. * Starting with semantic vectors to find generally topically relevant documents. Next, the documents are filtered/re-ranked e based on keyword matches or other metadata. * Considering both semantic vector closeness and statistical keyword patterns/weights in a combined scoring model. * Having multiple stages were different techniques. One example: start with an initial keyword retrieval, followed by semantic re-ranking, then a final re-ranking using even more complex models. When you combine the powers of different search methods in a complementary way, you can provide higher quality, more comprehensive results. Check out our article on [Hybrid Search](/articles/hybrid-search/) if you’d like to learn more. ### The Generator With the top relevant passages retrieved, it's now the generator's job to produce a final answer by synthesizing and expressing that information in natural language. The LLM is typically a model like GPT, BART or T5, trained on massive datasets to understand and generate human-like text. It now takes not only the query (or question) as input but also the relevant documents or passages that the retriever identified as potentially containing the answer to generate its response. ![How a Generator works](/articles_data/what-is-rag-in-ai/how-generation-works.png) The retriever and generator don't operate in isolation. The image bellow shows how the output of the retrieval feeds the generator to produce the final generated response. ![The entire architecture of a RAG system](/articles_data/what-is-rag-in-ai/rag-system.jpg) ## Where is RAG being used? Because of their more knowledgeable and contextual responses, we can find RAG models being applied in many areas today, especially those who need factual accuracy and knowledge depth. ### Real-World Applications: **Question answering:** This is perhaps the most prominent use case for RAG models. They power advanced question-answering systems that can retrieve relevant information from large knowledge bases and then generate fluent answers. **Language generation:** RAG enables more factual and contextualized text generation for contextualized text summarization from multiple sources **Data-to-text generation:** By retrieving relevant structured data, RAG models can generate product/business intelligence reports from databases or describing insights from data visualizations and charts **Multimedia understanding:** RAG isn't limited to text - it can retrieve multimodal information like images, video, and audio to enhance understanding. Answering questions about images/videos by retrieving relevant textual context. ## Creating your first RAG chatbot with Langchain, Groq, and OpenAI Are you ready to create your own RAG chatbot from the ground up? We have a video explaining everything from the beginning. Daniel Romero’s will guide you through: * Setting up your chatbot * Preprocessing and organizing data for your chatbot's use * Applying vector similarity search algorithms * Enhancing the efficiency and response quality After building your RAG chatbot, you'll be able to evaluate its performance against that of a chatbot powered solely by a Large Language Model (LLM).
## What’s next? Have a RAG project you want to bring to life? Join our [Discord community](https://discord.gg/qdrant) where we’re always sharing tips and answering questions on vector search and retrieval. Learn more about how to properly evaluate your RAG responses: [Evaluating Retrieval Augmented Generation - a framework for assessment](https://superlinked.com/vectorhub/evaluating-retrieval-augmented-generation-a-framework-for-assessment).",articles/what-is-rag-in-ai.md "--- title: Semantic Search As You Type short_description: ""Instant search using Qdrant"" description: To show off Qdrant's performance, we show how to do a quick search-as-you-type that will come back within a few milliseconds. social_preview_image: /articles_data/search-as-you-type/preview/social_preview.jpg small_preview_image: /articles_data/search-as-you-type/icon.svg preview_dir: /articles_data/search-as-you-type/preview weight: -2 author: Andre Bogus author_link: https://llogiq.github.io date: 2023-08-14T00:00:00+01:00 draft: false keywords: search, semantic, vector, llm, integration, benchmark, recommend, performance, rust --- Qdrant is one of the fastest vector search engines out there, so while looking for a demo to show off, we came upon the idea to do a search-as-you-type box with a fully semantic search backend. Now we already have a semantic/keyword hybrid search on our website. But that one is written in Python, which incurs some overhead for the interpreter. Naturally, I wanted to see how fast I could go using Rust. Since Qdrant doesn't embed by itself, I had to decide on an embedding model. The prior version used the [SentenceTransformers](https://www.sbert.net/) package, which in turn employs Bert-based [All-MiniLM-L6-V2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2/tree/main) model. This model is battle-tested and delivers fair results at speed, so not experimenting on this front I took an [ONNX version](https://huggingface.co/optimum/all-MiniLM-L6-v2/tree/main) and ran that within the service. The workflow looks like this: ![Search Qdrant by Embedding](/articles_data/search-as-you-type/Qdrant_Search_by_Embedding.png) This will, after tokenizing and embedding send a `/collections/site/points/search` POST request to Qdrant, sending the following JSON: ```json POST collections/site/points/search { ""vector"": [-0.06716014,-0.056464013, ...(382 values omitted)], ""limit"": 5, ""with_payload"": true, } ``` Even with avoiding a network round-trip, the embedding still takes some time. As always in optimization, if you cannot do the work faster, a good solution is to avoid work altogether (please don't tell my employer). This can be done by pre-computing common prefixes and calculating embeddings for them, then storing them in a `prefix_cache` collection. Now the [`recommend`](https://api.qdrant.tech/api-reference/search/recommend-points) API method can find the best matches without doing any embedding. For now, I use short (up to and including 5 letters) prefixes, but I can also parse the logs to get the most common search terms and add them to the cache later. ![Qdrant Recommendation](/articles_data/search-as-you-type/Qdrant_Recommendation.png) Making that work requires setting up the `prefix_cache` collection with points that have the prefix as their `point_id` and the embedding as their `vector`, which lets us do the lookup with no search or index. The `prefix_to_id` function currently uses the `u64` variant of `PointId`, which can hold eight bytes, enough for this use. If the need arises, one could instead encode the names as UUID, hashing the input. Since I know all our prefixes are within 8 bytes, I decided against this for now. The `recommend` endpoint works roughly the same as `search_points`, but instead of searching for a vector, Qdrant searches for one or more points (you can also give negative example points the search engine will try to avoid in the results). It was built to help drive recommendation engines, saving the round-trip of sending the current point's vector back to Qdrant to find more similar ones. However Qdrant goes a bit further by allowing us to select a different collection to lookup the points, which allows us to keep our `prefix_cache` collection separate from the site data. So in our case, Qdrant first looks up the point from the `prefix_cache`, takes its vector and searches for that in the `site` collection, using the precomputed embeddings from the cache. The API endpoint expects a POST of the following JSON to `/collections/site/points/recommend`: ```json POST collections/site/points/recommend { ""positive"": [1936024932], ""limit"": 5, ""with_payload"": true, ""lookup_from"": { ""collection"": ""prefix_cache"" } } ``` Now I have, in the best Rust tradition, a blazingly fast semantic search. To demo it, I used our [Qdrant documentation website](/documentation/)'s page search, replacing our previous Python implementation. So in order to not just spew empty words, here is a benchmark, showing different queries that exercise different code paths. Since the operations themselves are far faster than the network whose fickle nature would have swamped most measurable differences, I benchmarked both the Python and Rust services locally. I'm measuring both versions on the same AMD Ryzen 9 5900HX with 16GB RAM running Linux. The table shows the average time and error bound in milliseconds. I only measured up to a thousand concurrent requests. None of the services showed any slowdown with more requests in that range. I do not expect our service to become DDOS'd, so I didn't benchmark with more load. Without further ado, here are the results: | query length | Short | Long | |---------------|-----------|------------| | Python 🐍 | 16 ± 4 ms | 16 ± 4 ms | | Rust 🦀 | 1½ ± ½ ms | 5 ± 1 ms | The Rust version consistently outperforms the Python version and offers a semantic search even on few-character queries. If the prefix cache is hit (as in the short query length), the semantic search can even get more than ten times faster than the Python version. The general speed-up is due to both the relatively lower overhead of Rust + Actix Web compared to Python + FastAPI (even if that already performs admirably), as well as using ONNX Runtime instead of SentenceTransformers for the embedding. The prefix cache gives the Rust version a real boost by doing a semantic search without doing any embedding work. As an aside, while the millisecond differences shown here may mean relatively little for our users, whose latency will be dominated by the network in between, when typing, every millisecond more or less can make a difference in user perception. Also search-as-you-type generates between three and five times as much load as a plain search, so the service will experience more traffic. Less time per request means being able to handle more of them. Mission accomplished! But wait, there's more! ### Prioritizing Exact Matches and Headings To improve on the quality of the results, Qdrant can do multiple searches in parallel, and then the service puts the results in sequence, taking the first best matches. The extended code searches: 1. Text matches in titles 2. Text matches in body (paragraphs or lists) 3. Semantic matches in titles 4. Any Semantic matches Those are put together by taking them in the above order, deduplicating as necessary. ![merge workflow](/articles_data/search-as-you-type/sayt_merge.png) Instead of sending a `search` or `recommend` request, one can also send a `search/batch` or `recommend/batch` request, respectively. Each of those contain a `""searches""` property with any number of search/recommend JSON requests: ```json POST collections/site/points/search/batch { ""searches"": [ { ""vector"": [-0.06716014,-0.056464013, ...], ""filter"": { ""must"": [ { ""key"": ""text"", ""match"": { ""text"": }}, { ""key"": ""tag"", ""match"": { ""any"": [""h1"", ""h2"", ""h3""] }}, ] } ..., }, { ""vector"": [-0.06716014,-0.056464013, ...], ""filter"": { ""must"": [ { ""key"": ""body"", ""match"": { ""text"": }} ] } ..., }, { ""vector"": [-0.06716014,-0.056464013, ...], ""filter"": { ""must"": [ { ""key"": ""tag"", ""match"": { ""any"": [""h1"", ""h2"", ""h3""] }} ] } ..., }, { ""vector"": [-0.06716014,-0.056464013, ...], ..., }, ] } ``` As the queries are done in a batch request, there isn't any additional network overhead and only very modest computation overhead, yet the results will be better in many cases. The only additional complexity is to flatten the result lists and take the first 5 results, deduplicating by point ID. Now there is one final problem: The query may be short enough to take the recommend code path, but still not be in the prefix cache. In that case, doing the search *sequentially* would mean two round-trips between the service and the Qdrant instance. The solution is to *concurrently* start both requests and take the first successful non-empty result. ![sequential vs. concurrent flow](/articles_data/search-as-you-type/sayt_concurrency.png) While this means more load for the Qdrant vector search engine, this is not the limiting factor. The relevant data is already in cache in many cases, so the overhead stays within acceptable bounds, and the maximum latency in case of prefix cache misses is measurably reduced. The code is available on the [Qdrant github](https://github.com/qdrant/page-search) To sum up: Rust is fast, recommend lets us use precomputed embeddings, batch requests are awesome and one can do a semantic search in mere milliseconds. ",articles/search-as-you-type.md "--- title: ""Vector Similarity: Going Beyond Full-Text Search | Qdrant"" short_description: Explore how vector similarity enhances data discovery beyond full-text search, including diversity sampling and more! description: Discover how vector similarity expands data exploration beyond full-text search. Explore diversity sampling and more for enhanced data discovery! preview_dir: /articles_data/vector-similarity-beyond-search/preview small_preview_image: /articles_data/vector-similarity-beyond-search/icon.svg social_preview_image: /articles_data/vector-similarity-beyond-search/preview/social_preview.jpg weight: -1 author: Luis Cossío author_link: https://coszio.github.io/ date: 2023-08-08T08:00:00+03:00 draft: false keywords: - vector similarity - exploration - dissimilarity - discovery - diversity - recommendation --- # Vector Similarity: Unleashing Data Insights Beyond Traditional Search When making use of unstructured data, there are traditional go-to solutions that are well-known for developers: - **Full-text search** when you need to find documents that contain a particular word or phrase. - **[Vector search](https://qdrant.tech/documentation/overview/vector-search/)** when you need to find documents that are semantically similar to a given query. Sometimes people mix those two approaches, so it might look like the vector similarity is just an extension of full-text search. However, in this article, we will explore some promising new techniques that can be used to expand the use-case of unstructured data and demonstrate that vector similarity creates its own stack of data exploration tools. ## What is vector similarity search? Vector similarity offers a range of powerful functions that go far beyond those available in traditional full-text search engines. From dissimilarity search to diversity and recommendation, these methods can expand the cases in which vectors are useful. Vector Databases, which are designed to store and process immense amounts of vectors, are the first candidates to implement these new techniques and allow users to exploit their data to its fullest. ## Vector similarity search vs. full-text search While there is an intersection in the functionality of these two approaches, there is also a vast area of functions that is unique to each of them. For example, the exact phrase matching and counting of results are native to full-text search, while vector similarity support for this type of operation is limited. On the other hand, vector similarity easily allows cross-modal retrieval of images by text or vice-versa, which is impossible with full-text search. This mismatch in expectations might sometimes lead to confusion. Attempting to use a vector similarity as a full-text search can result in a range of frustrations, from slow response times to poor search results, to limited functionality. As an outcome, they are getting only a fraction of the benefits of vector similarity. {{< figure width=70% src=/articles_data/vector-similarity-beyond-search/venn-diagram.png caption=""Full-text search and Vector Similarity Functionality overlap"" >}} Below we will explore why the vector similarity stack deserves new interfaces and design patterns that will unlock the full potential of this technology, which can still be used in conjunction with full-text search. ## New ways to interact with similarities Having a vector representation of unstructured data unlocks new ways of interacting with it. For example, it can be used to measure semantic similarity between words, to cluster words or documents based on their meaning, to find related images, or even to generate new text. However, these interactions can go beyond finding their nearest neighbors (kNN). There are several other techniques that can be leveraged by vector representations beyond the traditional kNN search. These include dissimilarity search, diversity search, recommendations, and discovery functions. ## Dissimilarity ssearch The Dissimilarity —or farthest— search is the most straightforward concept after the nearest search, which can’t be reproduced in a traditional full-text search. It aims to find the most un-similar or distant documents across the collection. {{< figure width=80% src=/articles_data/vector-similarity-beyond-search/dissimilarity.png caption=""Dissimilarity Search"" >}} Unlike full-text match, Vector similarity can compare any pair of documents (or points) and assign a similarity score. It doesn’t rely on keywords or other metadata. With vector similarity, we can easily achieve a dissimilarity search by inverting the search objective from maximizing similarity to minimizing it. The dissimilarity search can find items in areas where previously no other search could be used. Let’s look at a few examples. ### Case: mislabeling detection For example, we have a dataset of furniture in which we have classified our items into what kind of furniture they are: tables, chairs, lamps, etc. To ensure our catalog is accurate, we can use a dissimilarity search to highlight items that are most likely mislabeled. To do this, we only need to search for the most dissimilar items using the embedding of the category title itself as a query. This can be too broad, so, by combining it with filters —a [Qdrant superpower](/articles/filtrable-hnsw/)—, we can narrow down the search to a specific category. {{< figure src=/articles_data/vector-similarity-beyond-search/mislabelling.png caption=""Mislabeling Detection"" >}} The output of this search can be further processed with heavier models or human supervision to detect actual mislabeling. ### Case: outlier detection In some cases, we might not even have labels, but it is still possible to try to detect anomalies in our dataset. Dissimilarity search can be used for this purpose as well. {{< figure width=80% src=/articles_data/vector-similarity-beyond-search/anomaly-detection.png caption=""Anomaly Detection"" >}} The only thing we need is a bunch of reference points that we consider ""normal"". Then we can search for the most dissimilar points to this reference set and use them as candidates for further analysis. ## Diversity search Even with no input provided vector, (dis-)similarity can improve an overall selection of items from the dataset. The naive approach is to do random sampling. However, unless our dataset has a uniform distribution, the results of such sampling might be biased toward more frequent types of items. {{< figure width=80% src=/articles_data/vector-similarity-beyond-search/diversity-random.png caption=""Example of random sampling"" >}} The similarity information can increase the diversity of those results and make the first overview more interesting. That is especially useful when users do not yet know what they are looking for and want to explore the dataset. {{< figure width=80% src=/articles_data/vector-similarity-beyond-search/diversity-force.png caption=""Example of similarity-based sampling"" >}} The power of vector similarity, in the context of being able to compare any two points, allows making a diverse selection of the collection possible without any labeling efforts. By maximizing the distance between all points in the response, we can have an algorithm that will sequentially output dissimilar results. {{< figure src=/articles_data/vector-similarity-beyond-search/diversity.png caption=""Diversity Search"" >}} Some forms of diversity sampling are already used in the industry and are known as [Maximum Margin Relevance](https://python.langchain.com/docs/integrations/vectorstores/qdrant#maximum-marginal-relevance-search-mmr) (MMR). Techniques like this were developed to enhance similarity on a universal search API. However, there is still room for new ideas, particularly regarding diversity retrieval. By utilizing more advanced vector-native engines, it could be possible to take use cases to the next level and achieve even better results. ## Vector similarity recommendations Vector similarity can go above a single query vector. It can combine multiple positive and negative examples for a more accurate retrieval. Building a recommendation API in a vector database can take advantage of using already stored vectors as part of the queries, by specifying the point id. Doing this, we can skip query-time neural network inference, and make the recommendation search faster. There are multiple ways to implement recommendations with vectors. ### Vector-features recommendations The first approach is to take all positive and negative examples and average them to create a single query vector. In this technique, the more significant components of positive vectors are canceled out by the negative ones, and the resulting vector is a combination of all the features present in the positive examples, but not in the negative ones. {{< figure width=80% src=/articles_data/vector-similarity-beyond-search/feature-based-recommendations.png caption=""Vector-Features Based Recommendations"" >}} This approach is already implemented in Qdrant, and while it works great when the vectors are assumed to have each of their dimensions represent some kind of feature of the data, sometimes distances are a better tool to judge negative and positive examples. ### Relative distance recommendations Another approach is to use the distance between negative examples to the candidates to help them create exclusion areas. In this technique, we perform searches near the positive examples while excluding the points that are closer to a negative example than to a positive one. {{< figure width=80% src=/articles_data/vector-similarity-beyond-search/relative-distance-recommendations.png caption=""Relative Distance Recommendations"" >}} The main use-case of both approaches —of course— is to take some history of user interactions and recommend new items based on it. ## Discovery In many exploration scenarios, the desired destination is not known in advance. The search process in this case can consist of multiple steps, where each step would provide a little more information to guide the search in the right direction. To get more intuition about the possible ways to implement this approach, let’s take a look at how similarity modes are trained in the first place: The most well-known loss function used to train similarity models is a [triplet-loss](https://en.wikipedia.org/wiki/Triplet_loss). In this loss, the model is trained by fitting the information of relative similarity of 3 objects: the Anchor, Positive, and Negative examples. {{< figure width=80% src=/articles_data/vector-similarity-beyond-search/triplet-loss.png caption=""Triplet Loss"" >}} Using the same mechanics, we can look at the training process from the other side. Given a trained model, the user can provide positive and negative examples, and the goal of the discovery process is then to find suitable anchors across the stored collection of vectors. {{< figure width=60% src=/articles_data/vector-similarity-beyond-search/discovery.png caption=""Reversed triplet loss"" >}} Multiple positive-negative pairs can be provided to make the discovery process more accurate. Worth mentioning, that as well as in NN training, the dataset may contain noise and some portion of contradictory information, so a discovery process should be tolerant of this kind of data imperfections. {{< figure width=80% src=/articles_data/vector-similarity-beyond-search/discovery-noise.png caption=""Sample pairs"" >}} The important difference between this and the recommendation method is that the positive-negative pairs in the discovery method don’t assume that the final result should be close to positive, it only assumes that it should be closer than the negative one. {{< figure width=80% src=/articles_data/vector-similarity-beyond-search/discovery-vs-recommendations.png caption=""Discovery vs Recommendation"" >}} In combination with filtering or similarity search, the additional context information provided by the discovery pairs can be used as a re-ranking factor. ## A new API stack for vector databases When you introduce vector similarity capabilities into your text search engine, you extend its functionality. However, it doesn't work the other way around, as the vector similarity as a concept is much broader than some task-specific implementations of full-text search. [Vector databases](https://qdrant.tech/), which introduce built-in full-text functionality, must make several compromises: - Choose a specific full-text search variant. - Either sacrifice API consistency or limit vector similarity functionality to only basic kNN search. - Introduce additional complexity to the system. Qdrant, on the contrary, puts vector similarity in the center of its API and architecture, such that it allows us to move towards a new stack of vector-native operations. We believe that this is the future of vector databases, and we are excited to see what new use-cases will be unlocked by these techniques. ## Key takeaways: - Vector similarity offers advanced data exploration tools beyond traditional full-text search, including dissimilarity search, diversity sampling, and recommendation systems. - Practical applications of vector similarity include improving data quality through mislabeling detection and anomaly identification. - Enhanced user experiences are achieved by leveraging advanced search techniques, providing users with intuitive data exploration, and improving decision-making processes. Ready to unlock the full potential of your data? [Try a free demo](https://qdrant.tech/contact-us/) to explore how vector similarity can revolutionize your data insights and drive smarter decision-making. ",articles/vector-similarity-beyond-search.md "--- title: Q&A with Similarity Learning short_description: A complete guide to building a Q&A system with similarity learning. description: A complete guide to building a Q&A system using Quaterion and SentenceTransformers. social_preview_image: /articles_data/faq-question-answering/preview/social_preview.jpg preview_dir: /articles_data/faq-question-answering/preview small_preview_image: /articles_data/faq-question-answering/icon.svg weight: 9 author: George Panchuk author_link: https://medium.com/@george.panchuk date: 2022-06-28T08:57:07.604Z # aliases: [ /articles/faq-question-answering/ ] --- # Question-answering system with Similarity Learning and Quaterion Many problems in modern machine learning are approached as classification tasks. Some are the classification tasks by design, but others are artificially transformed into such. And when you try to apply an approach, which does not naturally fit your problem, you risk coming up with over-complicated or bulky solutions. In some cases, you would even get worse performance. Imagine that you got a new task and decided to solve it with a good old classification approach. Firstly, you will need labeled data. If it came on a plate with the task, you're lucky, but if it didn't, you might need to label it manually. And I guess you are already familiar with how painful it might be. Assuming you somehow labeled all required data and trained a model. It shows good performance - well done! But a day later, your manager told you about a bunch of new data with new classes, which your model has to handle. You repeat your pipeline. Then, two days later, you've been reached out one more time. You need to update the model again, and again, and again. Sounds tedious and expensive for me, does not it for you? ## Automating customer support Let's now take a look at the concrete example. There is a pressing problem with automating customer support. The service should be capable of answering user questions and retrieving relevant articles from the documentation without any human involvement. With the classification approach, you need to build a hierarchy of classification models to determine the question's topic. You have to collect and label a whole custom dataset of your private documentation topics to train that. And then, each time you have a new topic in your documentation, you have to re-train the whole pile of classifiers with additionally labeled data. Can we make it easier? ## Similarity option One of the possible alternatives is Similarity Learning, which we are going to discuss in this article. It suggests getting rid of the classes and making decisions based on the similarity between objects instead. To do it quickly, we would need some intermediate representation - embeddings. Embeddings are high-dimensional vectors with semantic information accumulated in them. As embeddings are vectors, one can apply a simple function to calculate the similarity score between them, for example, cosine or euclidean distance. So with similarity learning, all we need to do is provide pairs of correct questions and answers. And then, the model will learn to distinguish proper answers by the similarity of embeddings. >If you want to learn more about similarity learning and applications, check out this [article](/documentation/tutorials/neural-search/) which might be an asset. ## Let's build Similarity learning approach seems a lot simpler than classification in this case, and if you have some doubts on your mind, let me dispel them. As I have no any resource with exhaustive F.A.Q. which might serve as a dataset, I've scrapped it from sites of popular cloud providers. The dataset consists of just 8.5k pairs of question and answers, you can take a closer look at it [here](https://github.com/qdrant/demo-cloud-faq). Once we have data, we need to obtain embeddings for it. It is not a novel technique in NLP to represent texts as embeddings. There are plenty of algorithms and models to calculate them. You could have heard of Word2Vec, GloVe, ELMo, BERT, all these models can provide text embeddings. However, it is better to produce embeddings with a model trained for semantic similarity tasks. For instance, we can find such models at [sentence-transformers](https://www.sbert.net/docs/pretrained_models.html). Authors claim that `all-mpnet-base-v2` provides the best quality, but let's pick `all-MiniLM-L6-v2` for our tutorial as it is 5x faster and still offers good results. Having all this, we can test our approach. We won't take all our dataset at the moment, but only a part of it. To measure model's performance we will use two metrics - [mean reciprocal rank](https://en.wikipedia.org/wiki/Mean_reciprocal_rank) and [precision@1](https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval)#Precision_at_k). We have a [ready script](https://github.com/qdrant/demo-cloud-faq/blob/experiments/faq/baseline.py) for this experiment, let's just launch it now.
| precision@1 | reciprocal_rank | |-------------|-----------------| | 0.564 | 0.663 |
That's already quite decent quality, but maybe we can do better? ## Improving results with fine-tuning Actually, we can! Model we used has a good natural language understanding, but it has never seen our data. An approach called `fine-tuning` might be helpful to overcome this issue. With fine-tuning you don't need to design a task-specific architecture, but take a model pre-trained on another task, apply a couple of layers on top and train its parameters. Sounds good, but as similarity learning is not as common as classification, it might be a bit inconvenient to fine-tune a model with traditional tools. For this reason we will use [Quaterion](https://github.com/qdrant/quaterion) - a framework for fine-tuning similarity learning models. Let's see how we can train models with it First, create our project and call it `faq`. > All project dependencies, utils scripts not covered in the tutorial can be found in the > [repository](https://github.com/qdrant/demo-cloud-faq/tree/tutorial). ### Configure training The main entity in Quaterion is [TrainableModel](https://quaterion.qdrant.tech/quaterion.train.trainable_model.html). This class makes model's building process fast and convenient. `TrainableModel` is a wrapper around [pytorch_lightning.LightningModule](https://pytorch-lightning.readthedocs.io/en/latest/common/lightning_module.html). [Lightning](https://www.pytorchlightning.ai/) handles all the training process complexities, like training loop, device managing, etc. and saves user from a necessity to implement all this routine manually. Also Lightning's modularity is worth to be mentioned. It improves separation of responsibilities, makes code more readable, robust and easy to write. All these features make Pytorch Lightning a perfect training backend for Quaterion. To use `TrainableModel` you need to inherit your model class from it. The same way you would use `LightningModule` in pure `pytorch_lightning`. Mandatory methods are `configure_loss`, `configure_encoders`, `configure_head`, `configure_optimizers`. The majority of mentioned methods are quite easy to implement, you'll probably just need a couple of imports to do that. But `configure_encoders` requires some code:) Let's create a `model.py` with model's template and a placeholder for `configure_encoders` for the moment. ```python from typing import Union, Dict, Optional from torch.optim import Adam from quaterion import TrainableModel from quaterion.loss import MultipleNegativesRankingLoss, SimilarityLoss from quaterion_models.encoders import Encoder from quaterion_models.heads import EncoderHead from quaterion_models.heads.skip_connection_head import SkipConnectionHead class FAQModel(TrainableModel): def __init__(self, lr=10e-5, *args, **kwargs): self.lr = lr super().__init__(*args, **kwargs) def configure_optimizers(self): return Adam(self.model.parameters(), lr=self.lr) def configure_loss(self) -> SimilarityLoss: return MultipleNegativesRankingLoss(symmetric=True) def configure_encoders(self) -> Union[Encoder, Dict[str, Encoder]]: ... # ToDo def configure_head(self, input_embedding_size: int) -> EncoderHead: return SkipConnectionHead(input_embedding_size) ``` - `configure_optimizers` is a method provided by Lightning. An eagle-eye of you could notice mysterious `self.model`, it is actually a [SimilarityModel](https://quaterion-models.qdrant.tech/quaterion_models.model.html) instance. We will cover it later. - `configure_loss` is a loss function to be used during training. You can choose a ready-made implementation from Quaterion. However, since Quaterion's purpose is not to cover all possible losses, or other entities and features of similarity learning, but to provide a convenient framework to build and use such models, there might not be a desired loss. In this case it is possible to use [PytorchMetricLearningWrapper](https://quaterion.qdrant.tech/quaterion.loss.extras.pytorch_metric_learning_wrapper.html) to bring required loss from [pytorch-metric-learning](https://kevinmusgrave.github.io/pytorch-metric-learning/) library, which has a rich collection of losses. You can also implement a custom loss yourself. - `configure_head` - model built via Quaterion is a combination of encoders and a top layer - head. As with losses, some head implementations are provided. They can be found at [quaterion_models.heads](https://quaterion-models.qdrant.tech/quaterion_models.heads.html). At our example we use [MultipleNegativesRankingLoss](https://quaterion.qdrant.tech/quaterion.loss.multiple_negatives_ranking_loss.html). This loss is especially good for training retrieval tasks. It assumes that we pass only positive pairs (similar objects) and considers all other objects as negative examples. `MultipleNegativesRankingLoss` use cosine to measure distance under the hood, but it is a configurable parameter. Quaterion provides implementation for other distances as well. You can find available ones at [quaterion.distances](https://quaterion.qdrant.tech/quaterion.distances.html). Now we can come back to `configure_encoders`:) ### Configure Encoder The encoder task is to convert objects into embeddings. They usually take advantage of some pre-trained models, in our case `all-MiniLM-L6-v2` from `sentence-transformers`. In order to use it in Quaterion, we need to create a wrapper inherited from the [Encoder](https://quaterion-models.qdrant.tech/quaterion_models.encoders.encoder.html) class. Let's create our encoder in `encoder.py` ```python import os from torch import Tensor, nn from sentence_transformers.models import Transformer, Pooling from quaterion_models.encoders import Encoder from quaterion_models.types import TensorInterchange, CollateFnType class FAQEncoder(Encoder): def __init__(self, transformer, pooling): super().__init__() self.transformer = transformer self.pooling = pooling self.encoder = nn.Sequential(self.transformer, self.pooling) @property def trainable(self) -> bool: # Defines if we want to train encoder itself, or head layer only return False @property def embedding_size(self) -> int: return self.transformer.get_word_embedding_dimension() def forward(self, batch: TensorInterchange) -> Tensor: return self.encoder(batch)[""sentence_embedding""] def get_collate_fn(self) -> CollateFnType: return self.transformer.tokenize @staticmethod def _transformer_path(path: str): return os.path.join(path, ""transformer"") @staticmethod def _pooling_path(path: str): return os.path.join(path, ""pooling"") def save(self, output_path: str): transformer_path = self._transformer_path(output_path) os.makedirs(transformer_path, exist_ok=True) pooling_path = self._pooling_path(output_path) os.makedirs(pooling_path, exist_ok=True) self.transformer.save(transformer_path) self.pooling.save(pooling_path) @classmethod def load(cls, input_path: str) -> Encoder: transformer = Transformer.load(cls._transformer_path(input_path)) pooling = Pooling.load(cls._pooling_path(input_path)) return cls(transformer=transformer, pooling=pooling) ``` As you can notice, there are more methods implemented, then we've already discussed. Let's go through them now! - In `__init__` we register our pre-trained layers, similar as you do in [torch.nn.Module](https://pytorch.org/docs/stable/generated/torch.nn.Module.html) descendant. - `trainable` defines whether current `Encoder` layers should be updated during training or not. If `trainable=False`, then all layers will be frozen. - `embedding_size` is a size of encoder's output, it is required for proper `head` configuration. - `get_collate_fn` is a tricky one. Here you should return a method which prepares a batch of raw data into the input, suitable for the encoder. If `get_collate_fn` is not overridden, then the [default_collate](https://pytorch.org/docs/stable/data.html#torch.utils.data.default_collate) will be used. The remaining methods are considered self-describing. As our encoder is ready, we now are able to fill `configure_encoders`. Just insert the following code into `model.py`: ```python ... from sentence_transformers import SentenceTransformer from sentence_transformers.models import Transformer, Pooling from faq.encoder import FAQEncoder class FAQModel(TrainableModel): ... def configure_encoders(self) -> Union[Encoder, Dict[str, Encoder]]: pre_trained_model = SentenceTransformer(""all-MiniLM-L6-v2"") transformer: Transformer = pre_trained_model[0] pooling: Pooling = pre_trained_model[1] encoder = FAQEncoder(transformer, pooling) return encoder ``` ### Data preparation Okay, we have raw data and a trainable model. But we don't know yet how to feed this data to our model. Currently, Quaterion takes two types of similarity representation - pairs and groups. The groups format assumes that all objects split into groups of similar objects. All objects inside one group are similar, and all other objects outside this group considered dissimilar to them. But in the case of pairs, we can only assume similarity between explicitly specified pairs of objects. We can apply any of the approaches with our data, but pairs one seems more intuitive. The format in which Similarity is represented determines which loss can be used. For example, _ContrastiveLoss_ and _MultipleNegativesRankingLoss_ works with pairs format. [SimilarityPairSample](https://quaterion.qdrant.tech/quaterion.dataset.similarity_samples.html#quaterion.dataset.similarity_samples.SimilarityPairSample) could be used to represent pairs. Let's take a look at it: ```python @dataclass class SimilarityPairSample: obj_a: Any obj_b: Any score: float = 1.0 subgroup: int = 0 ``` Here might be some questions: what `score` and `subgroup` are? Well, `score` is a measure of expected samples similarity. If you only need to specify if two samples are similar or not, you can use `1.0` and `0.0` respectively. `subgroups` parameter is required for more granular description of what negative examples could be. By default, all pairs belong the subgroup zero. That means that we would need to specify all negative examples manually. But in most cases, we can avoid this by enabling different subgroups. All objects from different subgroups will be considered as negative examples in loss, and thus it provides a way to set negative examples implicitly. With this knowledge, we now can create our `Dataset` class in `dataset.py` to feed our model: ```python import json from typing import List, Dict from torch.utils.data import Dataset from quaterion.dataset.similarity_samples import SimilarityPairSample class FAQDataset(Dataset): """"""Dataset class to process .jsonl files with FAQ from popular cloud providers."""""" def __init__(self, dataset_path): self.dataset: List[Dict[str, str]] = self.read_dataset(dataset_path) def __getitem__(self, index) -> SimilarityPairSample: line = self.dataset[index] question = line[""question""] # All questions have a unique subgroup # Meaning that all other answers are considered negative pairs subgroup = hash(question) return SimilarityPairSample( obj_a=question, obj_b=line[""answer""], score=1, subgroup=subgroup ) def __len__(self): return len(self.dataset) @staticmethod def read_dataset(dataset_path) -> List[Dict[str, str]]: """"""Read jsonl-file into a memory."""""" with open(dataset_path, ""r"") as fd: return [json.loads(json_line) for json_line in fd] ``` We assigned a unique subgroup for each question, so all other objects which have different question will be considered as negative examples. ### Evaluation Metric We still haven't added any metrics to the model. For this purpose Quaterion provides `configure_metrics`. We just need to override it and attach interested metrics. Quaterion has some popular retrieval metrics implemented - such as _precision @ k_ or _mean reciprocal rank_. They can be found in [quaterion.eval](https://quaterion.qdrant.tech/quaterion.eval.html) package. But there are just a few metrics, it is assumed that desirable ones will be made by user or taken from another libraries. You will probably need to inherit from `PairMetric` or `GroupMetric` to implement a new one. In `configure_metrics` we need to return a list of `AttachedMetric`. They are just wrappers around metric instances and helps to log metrics more easily. Under the hood `logging` is handled by `pytorch-lightning`. You can configure it as you want - pass required parameters as keyword arguments to `AttachedMetric`. For additional info visit [logging documentation page](https://pytorch-lightning.readthedocs.io/en/stable/extensions/logging.html) Let's add mentioned metrics for our `FAQModel`. Add this code to `model.py`: ```python ... from quaterion.eval.pair import RetrievalPrecision, RetrievalReciprocalRank from quaterion.eval.attached_metric import AttachedMetric class FAQModel(TrainableModel): def __init__(self, lr=10e-5, *args, **kwargs): self.lr = lr super().__init__(*args, **kwargs) ... def configure_metrics(self): return [ AttachedMetric( ""RetrievalPrecision"", RetrievalPrecision(k=1), prog_bar=True, on_epoch=True, ), AttachedMetric( ""RetrievalReciprocalRank"", RetrievalReciprocalRank(), prog_bar=True, on_epoch=True ), ] ``` ### Fast training with Cache Quaterion has one more cherry on top of the cake when it comes to non-trainable encoders. If encoders are frozen, they are deterministic and emit the exact embeddings for the same input data on each epoch. It provides a way to avoid repeated calculations and reduce training time. For this purpose Quaterion has a cache functionality. Before training starts, the cache runs one epoch to pre-calculate all embeddings with frozen encoders and then store them on a device you chose (currently CPU or GPU). Everything you need is to define which encoders are trainable or not and set cache settings. And that's it: everything else Quaterion will handle for you. To configure cache you need to override `configure_cache` method in `TrainableModel`. This method should return an instance of [CacheConfig](https://quaterion.qdrant.tech/quaterion.train.cache.cache_config.html#quaterion.train.cache.cache_config.CacheConfig). Let's add cache to our model: ```python ... from quaterion.train.cache import CacheConfig, CacheType ... class FAQModel(TrainableModel): ... def configure_caches(self) -> Optional[CacheConfig]: return CacheConfig(CacheType.AUTO) ... ``` [CacheType](https://quaterion.qdrant.tech/quaterion.train.cache.cache_config.html#quaterion.train.cache.cache_config.CacheType) determines how the cache will be stored in memory. ### Training Now we need to combine all our code together in `train.py` and launch a training process. ```python import torch import pytorch_lightning as pl from quaterion import Quaterion from quaterion.dataset import PairsSimilarityDataLoader from faq.dataset import FAQDataset def train(model, train_dataset_path, val_dataset_path, params): use_gpu = params.get(""cuda"", torch.cuda.is_available()) trainer = pl.Trainer( min_epochs=params.get(""min_epochs"", 1), max_epochs=params.get(""max_epochs"", 500), auto_select_gpus=use_gpu, log_every_n_steps=params.get(""log_every_n_steps"", 1), gpus=int(use_gpu), ) train_dataset = FAQDataset(train_dataset_path) val_dataset = FAQDataset(val_dataset_path) train_dataloader = PairsSimilarityDataLoader( train_dataset, batch_size=1024 ) val_dataloader = PairsSimilarityDataLoader( val_dataset, batch_size=1024 ) Quaterion.fit(model, trainer, train_dataloader, val_dataloader) if __name__ == ""__main__"": import os from pytorch_lightning import seed_everything from faq.model import FAQModel from faq.config import DATA_DIR, ROOT_DIR seed_everything(42, workers=True) faq_model = FAQModel() train_path = os.path.join( DATA_DIR, ""train_cloud_faq_dataset.jsonl"" ) val_path = os.path.join( DATA_DIR, ""val_cloud_faq_dataset.jsonl"" ) train(faq_model, train_path, val_path, {}) faq_model.save_servable(os.path.join(ROOT_DIR, ""servable"")) ``` Here are a couple of unseen classes, `PairsSimilarityDataLoader`, which is a native dataloader for `SimilarityPairSample` objects, and `Quaterion` is an entry point to the training process. ### Dataset-wise evaluation Up to this moment we've calculated only batch-wise metrics. Such metrics can fluctuate a lot depending on a batch size and can be misleading. It might be helpful if we can calculate a metric on a whole dataset or some large part of it. Raw data may consume a huge amount of memory, and usually we can't fit it into one batch. Embeddings, on the contrary, most probably will consume less. That's where `Evaluator` enters the scene. At first, having dataset of `SimilaritySample`, `Evaluator` encodes it via `SimilarityModel` and compute corresponding labels. After that, it calculates a metric value, which could be more representative than batch-wise ones. However, you still can find yourself in a situation where evaluation becomes too slow, or there is no enough space left in the memory. A bottleneck might be a squared distance matrix, which one needs to calculate to compute a retrieval metric. You can mitigate this bottleneck by calculating a rectangle matrix with reduced size. `Evaluator` accepts `sampler` with a sample size to select only specified amount of embeddings. If sample size is not specified, evaluation is performed on all embeddings. Fewer words! Let's add evaluator to our code and finish `train.py`. ```python ... from quaterion.eval.evaluator import Evaluator from quaterion.eval.pair import RetrievalReciprocalRank, RetrievalPrecision from quaterion.eval.samplers.pair_sampler import PairSampler ... def train(model, train_dataset_path, val_dataset_path, params): ... metrics = { ""rrk"": RetrievalReciprocalRank(), ""rp@1"": RetrievalPrecision(k=1) } sampler = PairSampler() evaluator = Evaluator(metrics, sampler) results = Quaterion.evaluate(evaluator, val_dataset, model.model) print(f""results: {results}"") ``` ### Train Results At this point we can train our model, I do it via `python3 -m faq.train`.
|epoch|train_precision@1|train_reciprocal_rank|val_precision@1|val_reciprocal_rank| |-----|-----------------|---------------------|---------------|-------------------| |0 |0.650 |0.732 |0.659 |0.741 | |100 |0.665 |0.746 |0.673 |0.754 | |200 |0.677 |0.757 |0.682 |0.763 | |300 |0.686 |0.765 |0.688 |0.768 | |400 |0.695 |0.772 |0.694 |0.773 | |500 |0.701 |0.778 |0.700 |0.777 |
Results obtained with `Evaluator`:
| precision@1 | reciprocal_rank | |-------------|-----------------| | 0.577 | 0.675 |
After training all the metrics have been increased. And this training was done in just 3 minutes on a single gpu! There is no overfitting and the results are steadily growing, although I think there is still room for improvement and experimentation. ## Model serving As you could already notice, Quaterion framework is split into two separate libraries: `quaterion` and [quaterion-models](https://quaterion-models.qdrant.tech/). The former one contains training related stuff like losses, cache, `pytorch-lightning` dependency, etc. While the latter one contains only modules necessary for serving: encoders, heads and `SimilarityModel` itself. The reasons for this separation are: - less amount of entities you need to operate in a production environment - reduced memory footprint It is essential to isolate training dependencies from the serving environment cause the training step is usually more complicated. Training dependencies are quickly going out of control, significantly slowing down the deployment and serving timings and increasing unnecessary resource usage. The very last row of `train.py` - `faq_model.save_servable(...)` saves encoders and the model in a fashion that eliminates all Quaterion dependencies and stores only the most necessary data to run a model in production. In `serve.py` we load and encode all the answers and then look for the closest vectors to the questions we are interested in: ```python import os import json import torch from quaterion_models.model import SimilarityModel from quaterion.distances import Distance from faq.config import DATA_DIR, ROOT_DIR if __name__ == ""__main__"": device = ""cuda:0"" if torch.cuda.is_available() else ""cpu"" model = SimilarityModel.load(os.path.join(ROOT_DIR, ""servable"")) model.to(device) dataset_path = os.path.join(DATA_DIR, ""val_cloud_faq_dataset.jsonl"") with open(dataset_path) as fd: answers = [json.loads(json_line)[""answer""] for json_line in fd] # everything is ready, let's encode our answers answer_embeddings = model.encode(answers, to_numpy=False) # Some prepared questions and answers to ensure that our model works as intended questions = [ ""what is the pricing of aws lambda functions powered by aws graviton2 processors?"", ""can i run a cluster or job for a long time?"", ""what is the dell open manage system administrator suite (omsa)?"", ""what are the differences between the event streams standard and event streams enterprise plans?"", ] ground_truth_answers = [ ""aws lambda functions powered by aws graviton2 processors are 20% cheaper compared to x86-based lambda functions"", ""yes, you can run a cluster for as long as is required"", ""omsa enables you to perform certain hardware configuration tasks and to monitor the hardware directly via the operating system"", ""to find out more information about the different event streams plans, see choosing your plan"", ] # encode our questions and find the closest to them answer embeddings question_embeddings = model.encode(questions, to_numpy=False) distance = Distance.get_by_name(Distance.COSINE) question_answers_distances = distance.distance_matrix( question_embeddings, answer_embeddings ) answers_indices = question_answers_distances.min(dim=1)[1] for q_ind, a_ind in enumerate(answers_indices): print(""Q:"", questions[q_ind]) print(""A:"", answers[a_ind], end=""\n\n"") assert ( answers[a_ind] == ground_truth_answers[q_ind] ), f""<{answers[a_ind]}> != <{ground_truth_answers[q_ind]}>"" ``` We stored our collection of answer embeddings in memory and perform search directly in Python. For production purposes, it's better to use some sort of vector search engine like [Qdrant](https://github.com/qdrant/qdrant). It provides durability, speed boost, and a bunch of other features. So far, we've implemented a whole training process, prepared model for serving and even applied a trained model today with `Quaterion`. Thank you for your time and attention! I hope you enjoyed this huge tutorial and will use `Quaterion` for your similarity learning projects. All ready to use code can be found [here](https://github.com/qdrant/demo-cloud-faq/tree/tutorial). Stay tuned!:)",articles/faq-question-answering.md "--- title: ""Discovery needs context"" short_description: Discover points by constraining the vector space. description: Discovery Search, an innovative way to constrain the vector space in which a search is performed, relying only on vectors. social_preview_image: /articles_data/discovery-search/social_preview.jpg small_preview_image: /articles_data/discovery-search/icon.svg preview_dir: /articles_data/discovery-search/preview weight: -110 author: Luis Cossío author_link: https://coszio.github.io date: 2024-01-31T08:00:00-03:00 draft: false keywords: - why use a vector database - specialty - search - multimodal - state-of-the-art - vector-search --- # Discovery needs context When Christopher Columbus and his crew sailed to cross the Atlantic Ocean, they were not looking for the Americas. They were looking for a new route to India because they were convinced that the Earth was round. They didn't know anything about a new continent, but since they were going west, they stumbled upon it. They couldn't reach their _target_, because the geography didn't let them, but once they realized it wasn't India, they claimed it a new ""discovery"" for their crown. If we consider that sailors need water to sail, then we can establish a _context_ which is positive in the water, and negative on land. Once the sailor's search was stopped by the land, they could not go any further, and a new route was found. Let's keep these concepts of _target_ and _context_ in mind as we explore the new functionality of Qdrant: __Discovery search__. ## What is discovery search? In version 1.7, Qdrant [released](/articles/qdrant-1.7.x/) this novel API that lets you constrain the space in which a search is performed, relying only on pure vectors. This is a powerful tool that lets you explore the vector space in a more controlled way. It can be used to find points that are not necessarily closest to the target, but are still relevant to the search. You can already select which points are available to the search by using payload filters. This by itself is very versatile because it allows us to craft complex filters that show only the points that satisfy their criteria deterministically. However, the payload associated with each point is arbitrary and cannot tell us anything about their position in the vector space. In other words, filtering out irrelevant points can be seen as creating a _mask_ rather than a hyperplane –cutting in between the positive and negative vectors– in the space. ## Understanding context This is where a __vector _context___ can help. We define _context_ as a list of pairs. Each pair is made up of a positive and a negative vector. With a context, we can define hyperplanes within the vector space, which always prefer the positive over the negative vectors. This effectively partitions the space where the search is performed. After the space is partitioned, we then need a _target_ to return the points that are more similar to it. ![Discovery search visualization](/articles_data/discovery-search/discovery-search.png) While positive and negative vectors might suggest the use of the recommendation interface, in the case of _context_ they require to be paired up in a positive-negative fashion. This is inspired from the machine-learning concept of _triplet loss_, where you have three vectors: an anchor, a positive, and a negative. Triplet loss is an evaluation of how much the anchor is closer to the positive than to the negative vector, so that learning happens by ""moving"" the positive and negative points to try to get a better evaluation. However, during discovery, we consider the positive and negative vectors as static points, and we search through the whole dataset for the ""anchors"", or result candidates, which fit this characteristic better. ![Triplet loss](/articles_data/discovery-search/triplet-loss.png) [__Discovery search__](#discovery-search), then, is made up of two main inputs: - __target__: the main point of interest - __context__: the pairs of positive and negative points we just defined. However, it is not the only way to use it. Alternatively, you can __only__ provide a context, which invokes a [__Context Search__](#context-search). This is useful when you want to explore the space defined by the context, but don't have a specific target in mind. But hold your horses, we'll get to that [later ↪](#context-search). ## Real-world discovery search applications Let's talk about the first case: context with a target. To understand why this is useful, let's take a look at a real-world example: using a multimodal encoder like [CLIP](https://openai.com/blog/clip/) to search for images, from text __and__ images. CLIP is a neural network that can embed both images and text into the same vector space. This means that you can search for images using either a text query or an image query. For this example, we'll reuse our [food recommendations demo](https://food-discovery.qdrant.tech/) by typing ""burger"" in the text input: ![Burger text input in food demo](/articles_data/discovery-search/search-for-burger.png) This is basically nearest neighbor search, and while technically we have only images of burgers, one of them is a logo representation of a burger. We're looking for actual burgers, though. Let's try to exclude images like that by adding it as a negative example: ![Try to exclude burger drawing](/articles_data/discovery-search/try-to-exclude-non-burger.png) Wait a second, what has just happened? These pictures have __nothing__ to do with burgers, and still, they appear on the first results. Is the demo broken? Turns out, multimodal encoders might not work how you expect them to. Images and text are embedded in the same space, but they are not necessarily close to each other. This means that we can create a mental model of the distribution as two separate planes, one for images and one for text. ![Mental model of CLIP embeddings](/articles_data/discovery-search/clip-mental-model.png) This is where discovery excels because it allows us to constrain the space considering the same mode (images) while using a target from the other mode (text). ![Cross-modal search with discovery](/articles_data/discovery-search/clip-discovery.png) Discovery search also lets us keep giving feedback to the search engine in the shape of more context pairs, so we can keep refining our search until we find what we are looking for. Another intuitive example: imagine you're looking for a fish pizza, but pizza names can be confusing, so you can just type ""pizza"", and prefer a fish over meat. Discovery search will let you use these inputs to suggest a fish pizza... even if it's not called fish pizza! ![Simple discovery example](/articles_data/discovery-search/discovery-example-with-images.png) ## Context search Now, the second case: only providing context. Ever been caught in the same recommendations on your favorite music streaming service? This may be caused by getting stuck in a similarity bubble. As user input gets more complex, diversity becomes scarce, and it becomes harder to force the system to recommend something different. ![Context vs recommendation search](/articles_data/discovery-search/context-vs-recommendation.png) __Context search__ solves this by de-focusing the search around a single point. Instead, it selects points randomly from within a zone in the vector space. This search is the most influenced by _triplet loss_, as the score can be thought of as _""how much a point is closer to a negative than a positive vector?""_. If it is closer to the positive one, then its score will be zero, same as any other point within the same zone. But if it is on the negative side, it will be assigned a more and more negative score the further it gets. ![Context search visualization](/articles_data/discovery-search/context-search.png) Creating complex tastes in a high-dimensional space becomes easier since you can just add more context pairs to the search. This way, you should be able to constrain the space enough so you select points from a per-search ""category"" created just from the context in the input. ![A more complex context search](/articles_data/discovery-search/complex-context-search.png) This way you can give refreshing recommendations, while still being in control by providing positive and negative feedback, or even by trying out different permutations of pairs. ## Key takeaways: - Discovery search is a powerful tool for controlled exploration in vector spaces. Context, consisting of positive and negative vectors constrain the search space, while a target guides the search. - Real-world applications include multimodal search, diverse recommendations, and context-driven exploration. - Ready to learn more about the math behind it and how to use it? Check out the [documentation](/documentation/concepts/explore/#discovery-api)",articles/discovery-search.md "--- title: ""FastEmbed: Qdrant's Efficient Python Library for Embedding Generation"" short_description: ""FastEmbed: Quantized Embedding models for fast CPU Generation"" description: ""Learn how to accurately and efficiently create text embeddings with FastEmbed."" social_preview_image: /articles_data/fastembed/preview/social_preview.jpg small_preview_image: /articles_data/fastembed/preview/lightning.svg preview_dir: /articles_data/fastembed/preview weight: -60 author: Nirant Kasliwal author_link: https://nirantk.com/about/ date: 2023-10-18T10:00:00+03:00 draft: false keywords: - vector search - embedding models - Flag Embedding - OpenAI Ada - NLP - embeddings - ONNX Runtime - quantized embedding model --- Data Science and Machine Learning practitioners often find themselves navigating through a labyrinth of models, libraries, and frameworks. Which model to choose, what embedding size, and how to approach tokenizing, are just some questions you are faced with when starting your work. We understood how many data scientists wanted an easier and more intuitive means to do their embedding work. This is why we built FastEmbed, a Python library engineered for speed, efficiency, and usability. We have created easy to use default workflows, handling the 80% use cases in NLP embedding. ## Current State of Affairs for Generating Embeddings Usually you make embedding by utilizing PyTorch or TensorFlow models under the hood. However, using these libraries comes at a cost in terms of ease of use and computational speed. This is at least in part because these are built for both: model inference and improvement e.g. via fine-tuning. To tackle these problems we built a small library focused on the task of quickly and efficiently creating text embeddings. We also decided to start with only a small sample of best in class transformer models. By keeping it small and focused on a particular use case, we could make our library focused without all the extraneous dependencies. We ship with limited models, quantize the model weights and seamlessly integrate them with the ONNX Runtime. FastEmbed strikes a balance between inference time, resource utilization and performance (recall/accuracy). ## Quick Embedding Text Document Example Here is an example of how simple we have made embedding text documents: ```python documents: List[str] = [ ""Hello, World!"", ""fastembed is supported by and maintained by Qdrant."" ]  embedding_model = DefaultEmbedding()  embeddings: List[np.ndarray] = list(embedding_model.embed(documents)) ``` These 3 lines of code do a lot of heavy lifting for you: They download the quantized model, load it using ONNXRuntime, and then run a batched embedding creation of your documents. ### Code Walkthrough Let’s delve into a more advanced example code snippet line-by-line: ```python from fastembed.embedding import DefaultEmbedding ``` Here, we import the FlagEmbedding class from FastEmbed and alias it as Embedding. This is the core class responsible for generating embeddings based on your chosen text model. This is also the class which you can import directly as DefaultEmbedding which is [BAAI/bge-small-en-v1.5](https://huggingface.co/baai/bge-small-en-v1.5) ```python documents: List[str] = [ ""passage: Hello, World!"", ""query: How is the World?"", ""passage: This is an example passage."", ""fastembed is supported by and maintained by Qdrant."" ] ``` In this list called documents, we define four text strings that we want to convert into embeddings. Note the use of prefixes “passage” and “query” to differentiate the types of embeddings to be generated. This is inherited from the cross-encoder implementation of the BAAI/bge series of models themselves. This is particularly useful for retrieval and we strongly recommend using this as well. The use of text prefixes like “query” and “passage” isn’t merely syntactic sugar; it informs the algorithm on how to treat the text for embedding generation. A “query” prefix often triggers the model to generate embeddings that are optimized for similarity comparisons, while “passage” embeddings are fine-tuned for contextual understanding. If you omit the prefix, the default behavior is applied, although specifying it is recommended for more nuanced results. Next, we initialize the Embedding model with the default model: [BAAI/bge-small-en-v1.5](https://huggingface.co/baai/bge-small-en-v1.5). ```python embedding_model = DefaultEmbedding() ``` The default model and several other models have a context window of a maximum of 512 tokens. This maximum limit comes from the embedding model training and design itself. If you'd like to embed sequences larger than that, we'd recommend using some pooling strategy to get a single vector out of the sequence. For example, you can use the mean of the embeddings of different chunks of a document. This is also what the [SBERT Paper recommends](https://lilianweng.github.io/posts/2021-05-31-contrastive/#sentence-bert) This model strikes a balance between speed and accuracy, ideal for real-world applications. ```python embeddings: List[np.ndarray] = list(embedding_model.embed(documents)) ``` Finally, we call the `embed()` method on our embedding_model object, passing in the documents list. The method returns a Python generator, so we convert it to a list to get all the embeddings. These embeddings are NumPy arrays, optimized for fast mathematical operations. The `embed()` method returns a list of NumPy arrays, each corresponding to the embedding of a document in your original documents list. The dimensions of these arrays are determined by the model you chose e.g. for “BAAI/bge-small-en-v1.5” it’s a 384-dimensional vector. You can easily parse these NumPy arrays for any downstream application—be it clustering, similarity comparison, or feeding them into a machine learning model for further analysis. ## 3 Key Features of FastEmbed FastEmbed is built for inference speed, without sacrificing (too much) performance: 1. 50% faster than PyTorch Transformers 2. Better performance than Sentence Transformers and OpenAI Ada-002 3. Cosine similarity of quantized and original model vectors is 0.92 We use `BAAI/bge-small-en-v1.5` as our DefaultEmbedding, hence we've chosen that for comparison: ![](/articles_data/fastembed/throughput.png) ## Under the Hood of FastEmbed **Quantized Models**: We quantize the models for CPU (and Mac Metal) – giving you the best buck for your compute model. Our default model is so small, you can run this in AWS Lambda if you’d like! Shout out to Huggingface's [Optimum](https://github.com/huggingface/optimum) – which made it easier to quantize models. **Reduced Installation Time**: FastEmbed sets itself apart by maintaining a low minimum RAM/Disk usage. It’s designed to be agile and fast, useful for businesses looking to integrate text embedding for production usage. For FastEmbed, the list of dependencies is refreshingly brief: > - onnx: Version ^1.11 – We’ll try to drop this also in the future if we can! > - onnxruntime: Version ^1.15 > - tqdm: Version ^4.65 – used only at Download > - requests: Version ^2.31 – used only at Download > - tokenizers: Version ^0.13 This minimized list serves two purposes. First, it significantly reduces the installation time, allowing for quicker deployments. Second, it limits the amount of disk space required, making it a viable option even for environments with storage limitations. Notably absent from the dependency list are bulky libraries like PyTorch, and there’s no requirement for CUDA drivers. This is intentional. FastEmbed is engineered to deliver optimal performance right on your CPU, eliminating the need for specialized hardware or complex setups. **ONNXRuntime**: The ONNXRuntime gives us the ability to support multiple providers. The quantization we do is limited for CPU (Intel), but we intend to support GPU versions of the same in the future as well.  This allows for greater customization and optimization, further aligning with your specific performance and computational requirements. ## Current Models We’ve started with a small set of supported models: All the models we support are [quantized](https://pytorch.org/docs/stable/quantization.html) to enable even faster computation! If you're using FastEmbed and you've got ideas or need certain features, feel free to let us know. Just drop an issue on our GitHub page. That's where we look first when we're deciding what to work on next. Here's where you can do it: [FastEmbed GitHub Issues](https://github.com/qdrant/fastembed/issues). When it comes to FastEmbed's DefaultEmbedding model, we're committed to supporting the best Open Source models. If anything changes, you'll see a new version number pop up, like going from 0.0.6 to 0.1. So, it's a good idea to lock in the FastEmbed version you're using to avoid surprises. ## Using FastEmbed with Qdrant Qdrant is a Vector Store, offering comprehensive, efficient, and scalable [enterprise solutions](https://qdrant.tech/enterprise-solutions/) for modern machine learning and AI applications. Whether you are dealing with billions of data points, require a low latency performant [vector database solution](https://qdrant.tech/qdrant-vector-database/), or specialized quantization methods – [Qdrant is engineered](/documentation/overview/) to meet those demands head-on. The fusion of FastEmbed with Qdrant’s vector store capabilities enables a transparent workflow for seamless embedding generation, storage, and retrieval. This simplifies the API design — while still giving you the flexibility to make significant changes e.g. you can use FastEmbed to make your own embedding other than the DefaultEmbedding and use that with Qdrant. Below is a detailed guide on how to get started with FastEmbed in conjunction with Qdrant. ### Step 1: Installation Before diving into the code, the initial step involves installing the Qdrant Client along with the FastEmbed library. This can be done using pip: ``` pip install qdrant-client[fastembed] ``` For those using zsh as their shell, you might encounter syntax issues. In such cases, wrap the package name in quotes: ``` pip install 'qdrant-client[fastembed]' ``` ### Step 2: Initializing the Qdrant Client After successful installation, the next step involves initializing the Qdrant Client. This can be done either in-memory or by specifying a database path: ```python from qdrant_client import QdrantClient # Initialize the client client = QdrantClient("":memory:"")  # or QdrantClient(path=""path/to/db"") ``` ### Step 3: Preparing Documents, Metadata, and IDs Once the client is initialized, prepare the text documents you wish to embed, along with any associated metadata and unique IDs: ```python docs = [ ""Qdrant has Langchain integrations"", ""Qdrant also has Llama Index integrations"" ] metadata = [ {""source"": ""Langchain-docs""}, {""source"": ""LlamaIndex-docs""}, ] ids = [42, 2] ``` Note that the add method we’ll use is overloaded: If you skip the ids, we’ll generate those for you. metadata is obviously optional. So, you can simply use this too: ```python docs = [ ""Qdrant has Langchain integrations"", ""Qdrant also has Llama Index integrations"" ] ``` ### Step 4: Adding Documents to a Collection With your documents, metadata, and IDs ready, you can proceed to add these to a specified collection within Qdrant using the add method: ```python client.add( collection_name=""demo_collection"", documents=docs, metadata=metadata, ids=ids ) ``` Inside this function, Qdrant Client uses FastEmbed to make the text embedding, generate ids if they’re missing, and then add them to the index with metadata. This uses the DefaultEmbedding model: [BAAI/bge-small-en-v1.5](https://huggingface.co/baai/bge-small-en-v1.5) ![INDEX TIME: Sequence Diagram for Qdrant and FastEmbed](/articles_data/fastembed/generate-embeddings-from-docs.png) ### Step 5: Performing Queries Finally, you can perform queries on your stored documents. Qdrant offers a robust querying capability, and the query results can be easily retrieved as follows: ```python search_result = client.query( collection_name=""demo_collection"", query_text=""This is a query document"" ) print(search_result) ``` Behind the scenes, we first convert the query_text to the embedding and use that to query the vector index. ![QUERY TIME: Sequence Diagram for Qdrant and FastEmbed integration](/articles_data/fastembed/generate-embeddings-query.png) By following these steps, you effectively utilize the combined capabilities of FastEmbed and Qdrant, thereby streamlining your embedding generation and retrieval tasks. Qdrant is designed to handle large-scale datasets with billions of data points. Its architecture employs techniques like [binary quantization](https://qdrant.tech/articles/binary-quantization/) and [scalar quantization](https://qdrant.tech/articles/scalar-quantization/) for efficient storage and retrieval. When you inject FastEmbed’s CPU-first design and lightweight nature into this equation, you end up with a system that can scale seamlessly while maintaining low latency. ## Summary If you're curious about how FastEmbed and Qdrant can make your search tasks a breeze, why not take it for a spin? You get a real feel for what it can do. Here are two easy ways to get started: 1. **Cloud**: Get started with a free plan on the [Qdrant Cloud](https://qdrant.to/cloud?utm_source=qdrant&utm_medium=website&utm_campaign=fastembed&utm_content=article). 2. **Docker Container**: If you're the DIY type, you can set everything up on your own machine. Here's a quick guide to help you out: [Quick Start with Docker](/documentation/quick-start/?utm_source=qdrant&utm_medium=website&utm_campaign=fastembed&utm_content=article). So, go ahead, take it for a test drive. We're excited to hear what you think! Lastly, If you find FastEmbed useful and want to keep up with what we're doing, giving our GitHub repo a star would mean a lot to us. Here's the link to [star the repository](https://github.com/qdrant/fastembed). If you ever have questions about FastEmbed, please ask them on the Qdrant Discord: [https://discord.gg/Qy6HCJK9Dc](https://discord.gg/Qy6HCJK9Dc) ",articles/fastembed.md "--- title: ""Product Quantization in Vector Search | Qdrant"" short_description: ""Vector search with low memory? Try out our brand-new Product Quantization!"" description: ""Discover product quantization in vector search technology. Learn how it optimizes storage and accelerates search processes for high-dimensional data."" social_preview_image: /articles_data/product-quantization/social_preview.png small_preview_image: /articles_data/product-quantization/product-quantization-icon.svg preview_dir: /articles_data/product-quantization/preview weight: 4 author: Kacper Łukawski author_link: https://medium.com/@lukawskikacper date: 2023-05-30T09:45:00+02:00 draft: false keywords: - vector search - product quantization - memory optimization aliases: [ /articles/product_quantization/ ] --- # Product Quantization Demystified: Streamlining Efficiency in Data Management Qdrant 1.1.0 brought the support of [Scalar Quantization](/articles/scalar-quantization/), a technique of reducing the memory footprint by even four times, by using `int8` to represent the values that would be normally represented by `float32`. The memory usage in [vector search](https://qdrant.tech/solutions/) might be reduced even further! Please welcome **Product Quantization**, a brand-new feature of Qdrant 1.2.0! ## What is Product Quantization? Product Quantization converts floating-point numbers into integers like every other quantization method. However, the process is slightly more complicated than [Scalar Quantization](https://qdrant.tech/articles/scalar-quantization/) and is more customizable, so you can find the sweet spot between memory usage and search precision. This article covers all the steps required to perform Product Quantization and the way it's implemented in Qdrant. ## How Does Product Quantization Work? Let’s assume we have a few vectors being added to the collection and that our optimizer decided to start creating a new segment. ![A list of raw vectors](/articles_data/product-quantization/raw-vectors.png) ### Cutting the vector into pieces First of all, our vectors are going to be divided into **chunks** aka **subvectors**. The number of chunks is configurable, but as a rule of thumb - the lower it is, the higher the compression rate. That also comes with reduced search precision, but in some cases, you may prefer to keep the memory usage as low as possible. ![A list of chunked vectors](/articles_data/product-quantization/chunked-vectors.png) Qdrant API allows choosing the compression ratio from 4x up to 64x. In our example, we selected 16x, so each subvector will consist of 4 floats (16 bytes), and it will eventually be represented by a single byte. ### Clustering The chunks of our vectors are then used as input for clustering. Qdrant uses the K-means algorithm, with $ K = 256 $. It was selected a priori, as this is the maximum number of values a single byte represents. As a result, we receive a list of 256 centroids for each chunk and assign each of them a unique id. **The clustering is done separately for each group of chunks.** ![Clustered chunks of vectors](/articles_data/product-quantization/chunks-clustering.png) Each chunk of a vector might now be mapped to the closest centroid. That’s where we lose the precision, as a single point will only represent a whole subspace. Instead of using a subvector, we can store the id of the closest centroid. If we repeat that for each chunk, we can approximate the original embedding as a vector of subsequent ids of the centroids. The dimensionality of the created vector is equal to the number of chunks, in our case 2. ![A new vector built from the ids of the centroids](/articles_data/product-quantization/vector-of-ids.png) ### Full process All those steps build the following pipeline of Product Quantization: ![Full process of Product Quantization](/articles_data/product-quantization/full-process.png) ## Measuring the distance Vector search relies on the distances between the points. Enabling Product Quantization slightly changes the way it has to be calculated. The query vector is divided into chunks, and then we figure the overall distance as a sum of distances between the subvectors and the centroids assigned to the specific id of the vector we compare to. We know the coordinates of the centroids, so that's easy. ![Calculating the distance of between the query and the stored vector](/articles_data/product-quantization/distance-calculation.png) #### Qdrant implementation Search operation requires calculating the distance to multiple points. Since we calculate the distance to a finite set of centroids, those might be precomputed and reused. Qdrant creates a lookup table for each query, so it can then simply sum up several terms to measure the distance between a query and all the centroids. | | Centroid 0 | Centroid 1 | ... | |-------------|------------|------------|-----| | **Chunk 0** | 0.14213 | 0.51242 | | | **Chunk 1** | 0.08421 | 0.00142 | | | **...** | ... | ... | ... | ## Product Quantization Benchmarks Product Quantization comes with a cost - there are some additional operations to perform so that the performance might be reduced. However, memory usage might be reduced drastically as well. As usual, we did some benchmarks to give you a brief understanding of what you may expect. Again, we reused the same pipeline as in [the other benchmarks we published](/benchmarks/). We selected [Arxiv-titles-384-angular-no-filters](https://github.com/qdrant/ann-filtering-benchmark-datasets) and [Glove-100](https://github.com/erikbern/ann-benchmarks/) datasets to measure the impact of Product Quantization on precision and time. Both experiments were launched with $ EF = 128 $. The results are summarized in the tables: #### Glove-100
Original 1D clusters 2D clusters 3D clusters
Mean precision 0.7158 0.7143 0.6731 0.5854
Mean search time 2336 µs 2750 µs 2597 µs 2534 µs
Compression x1 x4 x8 x12
Upload & indexing time 147 s 339 s 217 s 178 s
Product Quantization increases both indexing and searching time. The higher the compression ratio, the lower the search precision. The main benefit is undoubtedly the reduced usage of memory. #### Arxiv-titles-384-angular-no-filters
Original 1D clusters 2D clusters 4D clusters 8D clusters
Mean precision 0.9837 0.9677 0.9143 0.8068 0.6618
Mean search time 2719 µs 4134 µs 2947 µs 2175 µs 2053 µs
Compression x1 x4 x8 x16 x32
Upload & indexing time 332 s 921 s 597 s 481 s 474 s
It turns out that in some cases, Product Quantization may not only reduce the memory usage, but also the search time. ## Product Quantization vs Scalar Quantization Compared to [Scalar Quantization](https://qdrant.tech/articles/scalar-quantization/), Product Quantization offers a higher compression rate. However, this comes with considerable trade-offs in accuracy, and at times, in-RAM search speed. Product Quantization tends to be favored in certain specific scenarios: - Deployment in a low-RAM environment where the limiting factor is the number of disk reads rather than the vector comparison itself - Situations where the dimensionality of the original vectors is sufficiently high - Cases where indexing speed is not a critical factor In circumstances that do not align with the above, Scalar Quantization should be the preferred choice. ## Using Qdrant for Product Quantization If you’re already a Qdrant user, we have, documentation on [Product Quantization](/documentation/guides/quantization/#setting-up-product-quantization) that will help you to set and configure the new quantization for your data and achieve even up to 64x memory reduction. Ready to experience the power of Product Quantization? [Sign up now](https://cloud.qdrant.io/) for a free Qdrant demo and optimize your data management today!",articles/product-quantization.md "--- title: ""What is a Vector Database?"" draft: false slug: what-is-a-vector-database? short_description: What is a Vector Database? Use Cases & Examples | Qdrant description: Discover what a vector database is, its core functionalities, and real-world applications. Unlock advanced data management with our comprehensive guide. preview_dir: /articles_data/what-is-a-vector-database/preview weight: -100 social_preview_image: /articles_data/what-is-a-vector-database/preview/social-preview.jpg small_preview_image: /articles_data/what-is-a-vector-database/icon.svg date: 2024-01-25T09:29:33-03:00 author: Sabrina Aquino featured: true tags: - vector-search - vector-database - embeddings aliases: [ /blog/what-is-a-vector-database/ ] --- # Why use a Vector Database & How Does it Work? In the ever-evolving landscape of data management and artificial intelligence, [vector databases](https://qdrant.tech/qdrant-vector-database/) have emerged as a revolutionary tool for efficiently handling complex, high-dimensional data. But what exactly is a vector database? This comprehensive guide delves into the fundamentals of vector databases, exploring their unique capabilities, core functionalities, and real-world applications. ## What is a Vector Database? A [Vector Database](https://qdrant.tech/qdrant-vector-database/) is a specialized database system designed for efficiently indexing, querying, and retrieving high-dimensional vector data. Those systems enable advanced data analysis and similarity-search operations that extend well beyond the traditional, structured query approach of conventional databases. ## Why use a Vector Database? The data flood is real. In 2024, we're drowning in unstructured data like images, text, and audio, that don’t fit into neatly organized tables. Still, we need a way to easily tap into the value within this chaos of almost 330 million terabytes of data being created each day. Traditional databases, even with extensions that provide some vector handling capabilities, struggle with the complexities and demands of high-dimensional vector data. Handling of vector data is extremely resource-intensive. A traditional vector is around 6Kb. You can see how scaling to millions of vectors can demand substantial system memory and computational resources. Which is at least very challenging for traditional [OLTP](https://www.ibm.com/topics/oltp) and [OLAP](https://www.ibm.com/topics/olap) databases to manage. ![](/articles_data/what-is-a-vector-database/Why-Use-Vector-Database.jpg) Vector databases allow you to understand the **context** or **conceptual similarity** of unstructured data by representing them as **vectors**, enabling advanced analysis and retrieval based on data similarity. For example, in recommendation systems, vector databases can analyze user behavior and item characteristics to suggest products or content with a high degree of personal relevance. In search engines and research databases, they enhance the user experience by providing results that are **semantically** similar to the query. They do not rely solely on the exact words typed into the search bar. If you're new to the vector search space, this article explains the key concepts and relationships that you need to know. So let's get into it. ## What is Vector Data? To understand vector databases, let's begin by defining what is a 'vector' or 'vector data'. Vectors are a **numerical representation** of some type of complex information. To represent textual data, for example, it will encapsulate the nuances of language, such as semantics and context. With an image, the vector data encapsulates aspects like color, texture, and shape. The **dimensions** relate to the complexity and the amount of information each image contains. Each pixel in an image can be seen as one dimension, as it holds data (like color intensity values for red, green, and blue channels in a color image). So even a small image with thousands of pixels translates to thousands of dimensions. So from now on, when we talk about high-dimensional data, we mean that the data contains a large number of data points (pixels, features, semantics, syntax). The **creation** of vector data (so we can store this high-dimensional data on our vector database) is primarily done through **embeddings**. ![](/articles_data/what-is-a-vector-database/Vector-Data.jpg) ### How do Embeddings Work? [Embeddings](https://qdrant.tech/articles/what-are-embeddings/) translate this high-dimensional data into a more manageable, **lower-dimensional** vector form that's more suitable for machine learning and data processing applications, typically through **neural network models**. In creating dimensions for text, for example, the process involves analyzing the text to capture its linguistic elements. Transformer-based neural networks like **BERT** (Bidirectional Encoder Representations from Transformers) and **GPT** (Generative Pre-trained Transformer), are widely used for creating text embeddings. Each layer extracts different levels of features, such as context, semantics, and syntax. ![](/articles_data/what-is-a-vector-database/How-Do-Embeddings-Work_.jpg) The final layers of the network condense this information into a vector that is a compact, lower-dimensional representation of the image but still retains the essential information. ## The Core Functionalities of Vector Databases ### Vector Database Indexing Have you ever tried to find a specific face in a massive crowd photo? Well, vector databases face a similar challenge when dealing with tons of high-dimensional vectors. Now, imagine dividing the crowd into smaller groups based on hair color, then eye color, then clothing style. Each layer gets you closer to who you’re looking for. Vector databases use similar **multi-layered** structures called indexes to organize vectors based on their ""likeness."" This way, finding similar images becomes a quick hop across related groups, instead of scanning every picture one by one. ![](/articles_data/what-is-a-vector-database/Indexing.jpg) Different indexing methods exist, each with its strengths. [HNSW](/articles/filtrable-hnsw/) balances speed and accuracy like a well-connected network of shortcuts in the crowd. Others, like IVF or Product Quantization, focus on specific tasks or memory efficiency. ### Binary Quantization Quantization is a technique used for reducing the total size of the database. It works by compressing vectors into a more compact representation at the cost of accuracy. [Binary Quantization](/articles/binary-quantization/) is a fast indexing and data compression method used by Qdrant. It supports vector comparisons, which can dramatically speed up query processing times (up to 40x faster!). Think of each data point as a ruler. Binary quantization splits this ruler in half at a certain point, marking everything above as ""1"" and everything below as ""0"". This [binarization](https://deepai.org/machine-learning-glossary-and-terms/binarization) process results in a string of bits, representing the original vector. ![](/articles_data/what-is-a-vector-database/Binary-Quant.png) This ""quantized"" code is much smaller and easier to compare. Especially for OpenAI embeddings, this type of quantization has proven to achieve a massive performance improvement at a lower cost of accuracy. ### Similarity Search [Similarity search](/documentation/concepts/search/) allows you to search not by keywords but by meaning. This way you can do searches such as similar songs that evoke the same mood, finding images that match your artistic vision, or even exploring emotional patterns in text. The way it works is, when the user queries the database, this query is also converted into a vector (the query vector). The [vector search](/documentation/overview/vector-search/) starts at the top layer of the HNSW index, where the algorithm quickly identifies the area of the graph likely to contain vectors closest to the query vector. The algorithm compares your query vector to all the others, using metrics like ""distance"" or ""similarity"" to gauge how close they are. The search then moves down progressively narrowing down to more closely related vectors. The goal is to narrow down the dataset to the most relevant items. The image below illustrates this. ![](/articles_data/what-is-a-vector-database/Similarity-Search-and-Retrieval.jpg) Once the closest vectors are identified at the bottom layer, these points translate back to actual data, like images or music, representing your search results. ### Scalability [Vector databases](https://qdrant.tech/qdrant-vector-database/) often deal with datasets that comprise billions of high-dimensional vectors. This data isn't just large in volume but also complex in nature, requiring more computing power and memory to process. Scalable systems can handle this increased complexity without performance degradation. This is achieved through a combination of a **distributed architecture**, **dynamic resource allocation**, **data partitioning**, **load balancing**, and **optimization techniques**. Systems like Qdrant exemplify scalability in vector databases. It [leverages Rust's efficiency](https://qdrant.tech/articles/why-rust/) in **memory management** and **performance**, which allows the handling of large-scale data with optimized resource usage. ### Efficient Query Processing The key to efficient query processing in these databases is linked to their **indexing methods**, which enable quick navigation through complex data structures. By mapping and accessing the high-dimensional vector space, HNSW and similar indexing techniques significantly reduce the time needed to locate and retrieve relevant data. ![](/articles_data/what-is-a-vector-database/search-query.jpg) Other techniques like **handling computational load** and **parallel processing** are used for performance, especially when managing multiple simultaneous queries. Complementing them, **strategic caching** is also employed to store frequently accessed data, facilitating a quicker retrieval for subsequent queries. ### Using Metadata and Filters Filters use metadata to refine search queries within the database. For example, in a database containing text documents, a user might want to search for documents not only based on textual similarity but also filter the results by publication date or author. When a query is made, the system can use **both** the vector data and the metadata to process the query. In other words, the database doesn’t just look for the closest vectors. It also considers the additional criteria set by the metadata filters, creating a more customizable search experience. ![](/articles_data/what-is-a-vector-database/metadata.jpg) ### Data Security and Access Control Vector databases often store sensitive information. This could include personal data in customer databases, confidential images, or proprietary text documents. Ensuring data security means protecting this information from unauthorized access, breaches, and other forms of cyber threats. At Qdrant, this includes mechanisms such as: - User authentication - Encryption for data at rest and in transit - Keeping audit trails - Advanced database monitoring and anomaly detection ## What is the Architecture of a Vector Database? A vector database is made of multiple different entities and relations. Here's a high-level overview of Qdrant's terminologies and how they fit into the larger picture: ![](/articles_data/what-is-a-vector-database/Architecture-of-a-Vector-Database.jpg) **Collections**: [Collections](/documentation/concepts/collections/) are a named set of data points, where each point is a vector with an associated payload. All vectors within a collection must have the same dimensionality and be comparable using a single metric. **Distance Metrics**: These metrics are used to measure the similarity between vectors. The choice of distance metric is made when creating a collection. It depends on the nature of the vectors and how they were generated, considering the neural network used for the encoding. **Points**: Each [point](/documentation/concepts/points/) consists of a **vector** and can also include an optional **identifier** (ID) and **[payload](/documentation/concepts/payload/)**. The vector represents the high-dimensional data and the payload carries metadata information in a JSON format, giving the data point more context or attributes. **Storage Options**: There are two primary storage options. The in-memory storage option keeps all vectors in RAM, which allows for the highest speed in data access since disk access is only required for persistence. Alternatively, the Memmap storage option creates a virtual address space linked with the file on disk, giving a balance between memory usage and access speed. **Clients**: Qdrant supports various programming languages for client interaction, such as Python, Go, Rust, and Typescript. This way developers can connect to and interact with Qdrant using the programming language they prefer. ## Vector Database Use Cases If we had to summarize the [use cases for vector databases](https://qdrant.tech/use-cases/) into a single word, it would be ""match"". They are great at finding non-obvious ways to correspond or “match” data with a given query. Whether it's through similarity in images, text, user preferences, or patterns in data. Here are some examples of how to take advantage of using vector databases: [Personalized recommendation systems](https://qdrant.tech/recommendations/) to analyze and interpret complex user data, such as preferences, behaviors, and interactions. For example, on Spotify, if a user frequently listens to the same song or skips it, the recommendation engine takes note of this to personalize future suggestions. [Semantic search](https://qdrant.tech/documentation/tutorials/search-beginners/) allows for systems to be able to capture the deeper semantic meaning of words and text. In modern search engines, if someone searches for ""tips for planting in spring,"" it tries to understand the intent and contextual meaning behind the query. It doesn’t try just matching the words themselves. Here’s an example of a [vector search engine for Startups](https://demo.qdrant.tech/) made with Qdrant: ![](/articles_data/what-is-a-vector-database/semantic-search.png) There are many other use cases like for **fraud detection and anomaly analysis** used in sectors like finance and cybersecurity, to detect anomalies and potential fraud. And **Content-Based Image Retrieval (CBIR)** for images by comparing vector representations rather than metadata or tags. Those are just a few examples. The ability of vector databases to “match” data with queries makes them essential for multiple types of applications. Here are some more [use cases examples](/use-cases/) you can take a look at. ### Get Started With Qdrant’s Vector Database Today Now that you're familiar with the core concepts around vector databases, it’s time to get your hands dirty. [Start by building your own semantic search engine](/documentation/tutorials/search-beginners/) for science fiction books in just about 5 minutes with the help of Qdrant. You can also watch our [video tutorial](https://www.youtube.com/watch?v=AASiqmtKo54). Feeling ready to dive into a more complex project? Take the next step and get started building an actual [Neural Search Service with a complete API and a dataset](/documentation/tutorials/neural-search/). Let’s get into action! ",articles/what-is-a-vector-database.md "--- title: Layer Recycling and Fine-tuning Efficiency short_description: Tradeoff between speed and performance in layer recycling description: Learn when and how to use layer recycling to achieve different performance targets. preview_dir: /articles_data/embedding-recycling/preview small_preview_image: /articles_data/embedding-recycling/icon.svg social_preview_image: /articles_data/embedding-recycling/preview/social_preview.jpg weight: 10 author: Yusuf Sarıgöz author_link: https://medium.com/@yusufsarigoz date: 2022-08-23T13:00:00+03:00 draft: false aliases: [ /articles/embedding-recycler/ ] --- A recent [paper](https://arxiv.org/abs/2207.04993) by Allen AI has attracted attention in the NLP community as they cache the output of a certain intermediate layer in the training and inference phases to achieve a speedup of ~83% with a negligible loss in model performance. This technique is quite similar to [the caching mechanism in Quaterion](https://quaterion.qdrant.tech/tutorials/cache_tutorial.html), but the latter is intended for any data modalities while the former focuses only on language models despite presenting important insights from their experiments. In this post, I will share our findings combined with those, hoping to provide the community with a wider perspective on layer recycling. ## How layer recycling works The main idea of layer recycling is to accelerate the training (and inference) by avoiding repeated passes of the same data object through the frozen layers. Instead, it is possible to pass objects through those layers only once, cache the output and use them as inputs to the unfrozen layers in future epochs. In the paper, they usually cache 50% of the layers, e.g., the output of the 6th multi-head self-attention block in a 12-block encoder. However, they find out that it does not work equally for all the tasks. For example, the question answering task suffers from a more significant degradation in performance with 50% of the layers recycled, and they choose to lower it down to 25% for this task, so they suggest determining the level of caching based on the task at hand. they also note that caching provides a more considerable speedup for larger models and on lower-end machines. In layer recycling, the cache is hit for exactly the same object. It is easy to achieve this in textual data as it is easily hashable, but you may need more advanced tricks to generate keys for the cache when you want to generalize this technique to diverse data types. For instance, hashing PyTorch tensors [does not work as you may expect](https://github.com/joblib/joblib/issues/1282). Quaterion comes with an intelligent key extractor that may be applied to any data type, but it is also allowed to customize it with a callable passed as an argument. Thanks to this flexibility, we were able to run a variety of experiments in different setups, and I believe that these findings will be helpful for your future projects. ## Experiments We conducted different experiments to test the performance with: 1. Different numbers of layers recycled in [the similar cars search example](https://quaterion.qdrant.tech/tutorials/cars-tutorial.html). 2. Different numbers of samples in the dataset for training and fine-tuning for similar cars search. 3. Different numbers of layers recycled in [the question answerring example](https://quaterion.qdrant.tech/tutorials/nlp_tutorial.html). ## Easy layer recycling with Quaterion The easiest way of caching layers in Quaterion is to compose a [TrainableModel](https://quaterion.qdrant.tech/quaterion.train.trainable_model.html#quaterion.train.trainable_model.TrainableModel) with a frozen [Encoder](https://quaterion-models.qdrant.tech/quaterion_models.encoders.encoder.html#quaterion_models.encoders.encoder.Encoder) and an unfrozen [EncoderHead](https://quaterion-models.qdrant.tech/quaterion_models.heads.encoder_head.html#quaterion_models.heads.encoder_head.EncoderHead). Therefore, we modified the `TrainableModel` in the [example](https://github.com/qdrant/quaterion/blob/master/examples/cars/models.py) as in the following: ```python class Model(TrainableModel): # ... def configure_encoders(self) -> Union[Encoder, Dict[str, Encoder]]: pre_trained_encoder = torchvision.models.resnet34(pretrained=True) self.avgpool = copy.deepcopy(pre_trained_encoder.avgpool) self.finetuned_block = copy.deepcopy(pre_trained_encoder.layer4) modules = [] for name, child in pre_trained_encoder.named_children(): modules.append(child) if name == ""layer3"": break pre_trained_encoder = nn.Sequential(*modules) return CarsEncoder(pre_trained_encoder) def configure_head(self, input_embedding_size) -> EncoderHead: return SequentialHead(self.finetuned_block, self.avgpool, nn.Flatten(), SkipConnectionHead(512, dropout=0.3, skip_dropout=0.2), output_size=512) # ... ``` This trick lets us finetune one more layer from the base model as a part of the `EncoderHead` while still benefiting from the speedup in the frozen `Encoder` provided by the cache. ## Experiment 1: Percentage of layers recycled The paper states that recycling 50% of the layers yields little to no loss in performance when compared to full fine-tuning. In this setup, we compared performances of four methods: 1. Freeze the whole base model and train only `EncoderHead`. 2. Move one of the four residual blocks `EncoderHead` and train it together with the head layer while freezing the rest (75% layer recycling). 3. Move two of the four residual blocks to `EncoderHead` while freezing the rest (50% layer recycling). 4. Train the whole base model together with `EncoderHead`. **Note**: During these experiments, we used ResNet34 instead of ResNet152 as the pretrained model in order to be able to use a reasonable batch size in full training. The baseline score with ResNet34 is 0.106. | Model | RRP | | ------------- | ---- | | Full training | 0.32 | | 50% recycling | 0.31 | | 75% recycling | 0.28 | | Head only | 0.22 | | Baseline | 0.11 | As is seen in the table, the performance in 50% layer recycling is very close to that in full training. Additionally, we can still have a considerable speedup in 50% layer recycling with only a small drop in performance. Although 75% layer recycling is better than training only `EncoderHead`, its performance drops quickly when compared to 50% layer recycling and full training. ## Experiment 2: Amount of available data In the second experiment setup, we compared performances of fine-tuning strategies with different dataset sizes. We sampled 50% of the training set randomly while still evaluating models on the whole validation set. | Model | RRP | | ------------- | ---- | | Full training | 0.27 | | 50% recycling | 0.26 | | 75% recycling | 0.25 | | Head only | 0.21 | | Baseline | 0.11 | This experiment shows that, the smaller the available dataset is, the bigger drop in performance we observe in full training, 50% and 75% layer recycling. On the other hand, the level of degradation in training only `EncoderHead` is really small when compared to others. When we further reduce the dataset size, full training becomes untrainable at some point, while we can still improve over the baseline by training only `EncoderHead`. ## Experiment 3: Layer recycling in question answering We also wanted to test layer recycling in a different domain as one of the most important takeaways of the paper is that the performance of layer recycling is task-dependent. To this end, we set up an experiment with the code from the [Question Answering with Similarity Learning tutorial](https://quaterion.qdrant.tech/tutorials/nlp_tutorial.html). | Model | RP@1 | RRK | | ------------- | ---- | ---- | | Full training | 0.76 | 0.65 | | 50% recycling | 0.75 | 0.63 | | 75% recycling | 0.69 | 0.59 | | Head only | 0.67 | 0.58 | | Baseline | 0.64 | 0.55 | In this task, 50% layer recycling can still do a good job with only a small drop in performance when compared to full training. However, the level of degradation is smaller than that in the similar cars search example. This can be attributed to several factors such as the pretrained model quality, dataset size and task definition, and it can be the subject of a more elaborate and comprehensive research project. Another observation is that the performance of 75% layer recycling is closer to that of training only `EncoderHead` than 50% layer recycling. ## Conclusion We set up several experiments to test layer recycling under different constraints and confirmed that layer recycling yields varying performances with different tasks and domains. One of the most important observations is the fact that the level of degradation in layer recycling is sublinear with a comparison to full training, i.e., we lose a smaller percentage of performance than the percentage we recycle. Additionally, training only `EncoderHead` is more resistant to small dataset sizes. There is even a critical size under which full training does not work at all. The issue of performance differences shows that there is still room for further research on layer recycling, and luckily Quaterion is flexible enough to run such experiments quickly. We will continue to report our findings on fine-tuning efficiency. **Fun fact**: The preview image for this article was created with Dall.e with the following prompt: ""Photo-realistic robot using a tuning fork to adjust a piano."" [Click here](/articles_data/embedding-recycling/full.png) to see it in full size!",articles/embedding-recycler.md "--- title: ""What are Vector Embeddings? - Revolutionize Your Search Experience"" draft: false slug: what-are-embeddings? short_description: Explore the power of vector embeddings. Learn to use numerical machine learning representations to build a personalized Neural Search Service with Fastembed. description: Discover the power of vector embeddings. Learn how to harness the potential of numerical machine learning representations to create a personalized Neural Search Service with FastEmbed. preview_dir: /articles_data/what-are-embeddings/preview weight: -102 social_preview_image: /articles_data/what-are-embeddings/preview/social-preview.jpg small_preview_image: /articles_data/what-are-embeddings/icon.svg date: 2024-02-06T15:29:33-03:00 author: Sabrina Aquino author_link: https://github.com/sabrinaaquino featured: true tags: - vector-search - vector-database - embeddings - machine-learning - artificial intelligence --- > **Embeddings** are numerical machine learning representations of the semantic of the input data. They capture the meaning of complex, high-dimensional data, like text, images, or audio, into vectors. Enabling algorithms to process and analyze the data more efficiently. You know when you’re scrolling through your social media feeds and the content just feels incredibly tailored to you? There's the news you care about, followed by a perfect tutorial with your favorite tech stack, and then a meme that makes you laugh so hard you snort. Or what about how YouTube recommends videos you ended up loving. It’s by creators you've never even heard of and you didn’t even send YouTube a note about your ideal content lineup. This is the magic of embeddings. These are the result of **deep learning models** analyzing the data of your interactions online. From your likes, shares, comments, searches, the kind of content you linger on, and even the content you decide to skip. It also allows the algorithm to predict future content that you are likely to appreciate. The same embeddings can be repurposed for search, ads, and other features, creating a highly personalized user experience. ![How embeddings are applied to perform recommendantions and other use cases](/articles_data/what-are-embeddings/Embeddings-Use-Case.jpg) They make [high-dimensional](https://www.sciencedirect.com/topics/computer-science/high-dimensional-data) data more manageable. This reduces storage requirements, improves computational efficiency, and makes sense of a ton of **unstructured** data. ## Why use vector embeddings? The **nuances** of natural language or the hidden **meaning** in large datasets of images, sounds, or user interactions are hard to fit into a table. Traditional relational databases can't efficiently query most types of data being currently used and produced, making the **retrieval** of this information very limited. In the embeddings space, synonyms tend to appear in similar contexts and end up having similar embeddings. The space is a system smart enough to understand that ""pretty"" and ""attractive"" are playing for the same team. Without being explicitly told so. That’s the magic. At their core, vector embeddings are about semantics. They take the idea that ""a word is known by the company it keeps"" and apply it on a grand scale. ![Example of how synonyms are placed closer together in the embeddings space](/articles_data/what-are-embeddings/Similar-Embeddings.jpg) This capability is crucial for creating search systems, recommendation engines, retrieval augmented generation (RAG) and any application that benefits from a deep understanding of content. ## How do embeddings work? Embeddings are created through neural networks. They capture complex relationships and semantics into [dense vectors](https://www1.se.cuhk.edu.hk/~seem5680/lecture/semantics-with-dense-vectors-2018.pdf) which are more suitable for machine learning and data processing applications. They can then project these vectors into a proper **high-dimensional** space, specifically, a [Vector Database](/articles/what-is-a-vector-database/). ![The process for turning raw data into embeddings and placing them into the vector space](/articles_data/what-are-embeddings/How-Embeddings-Work.jpg) The meaning of a data point is implicitly defined by its **position** on the vector space. After the vectors are stored, we can use their spatial properties to perform [nearest neighbor searches](https://en.wikipedia.org/wiki/Nearest_neighbor_search#:~:text=Nearest%20neighbor%20search%20(NNS)%2C,the%20larger%20the%20function%20values.). These searches retrieve semantically similar items based on how close they are in this space. > The quality of the vector representations drives the performance. The embedding model that works best for you depends on your use case. ### Creating vector embeddings Embeddings translate the complexities of human language to a format that computers can understand. It uses neural networks to assign **numerical values** to the input data, in a way that similar data has similar values. ![The process of using Neural Networks to create vector embeddings](/articles_data/what-are-embeddings/How-Do-Embeddings-Work_.jpg) For example, if I want to make my computer understand the word 'right', I can assign a number like 1.3. So when my computer sees 1.3, it sees the word 'right’. Now I want to make my computer understand the context of the word ‘right’. I can use a two-dimensional vector, such as [1.3, 0.8], to represent 'right'. The first number 1.3 still identifies the word 'right', but the second number 0.8 specifies the context. We can introduce more dimensions to capture more nuances. For example, a third dimension could represent formality of the word, a fourth could indicate its emotional connotation (positive, neutral, negative), and so on. The evolution of this concept led to the development of embedding models like [Word2Vec](https://en.wikipedia.org/wiki/Word2vec) and [GloVe](https://en.wikipedia.org/wiki/GloVe). They learn to understand the context in which words appear to generate high-dimensional vectors for each word, capturing far more complex properties. ![How Word2Vec model creates the embeddings for a word](/articles_data/what-are-embeddings/Word2Vec-model.jpg) However, these models still have limitations. They generate a single vector per word, based on its usage across texts. This means all the nuances of the word ""right"" are blended into one vector representation. That is not enough information for computers to fully understand the context. So, how do we help computers grasp the nuances of language in different contexts? In other words, how do we differentiate between: * ""your answer is right"" * ""turn right at the corner"" * ""everyone has the right to freedom of speech"" Each of these sentences use the word 'right', with different meanings. More advanced models like [BERT](https://en.wikipedia.org/wiki/BERT_(language_model)) and [GPT](https://en.wikipedia.org/wiki/Generative_pre-trained_transformer) use deep learning models based on the [transformer architecture](https://arxiv.org/abs/1706.03762), which helps computers consider the full context of a word. These models pay attention to the entire context. The model understands the specific use of a word in its **surroundings**, and then creates different embeddings for each. ![How the BERT model creates the embeddings for a word](/articles_data/what-are-embeddings/BERT-model.jpg) But how does this process of understanding and interpreting work in practice? Think of the term: ""biophilic design"", for example. To generate its embedding, the transformer architecture can use the following contexts: * ""Biophilic design incorporates natural elements into architectural planning."" * ""Offices with biophilic design elements report higher employee well-being."" * ""...plant life, natural light, and water features are key aspects of biophilic design."" And then it compares contexts to known architectural and design principles: * ""Sustainable designs prioritize environmental harmony."" * ""Ergonomic spaces enhance user comfort and health."" The model creates a vector embedding for ""biophilic design"" that encapsulates the concept of integrating natural elements into man-made environments. Augmented with attributes that highlight the correlation between this integration and its positive impact on health, well-being, and environmental sustainability. ### Integration with embedding APIs Selecting the right embedding model for your use case is crucial to your application performance. Qdrant makes it easier by offering seamless integration with the best selection of embedding APIs, including [Cohere](/documentation/embeddings/cohere/), [Gemini](/documentation/embeddings/gemini/), [Jina Embeddings](/documentation/embeddings/jina-embeddings/), [OpenAI](/documentation/embeddings/openai/), [Aleph Alpha](/documentation/embeddings/aleph-alpha/), [Fastembed](https://github.com/qdrant/fastembed), and [AWS Bedrock](/documentation/embeddings/bedrock/). If you’re looking for NLP and rapid prototyping, including language translation, question-answering, and text generation, OpenAI is a great choice. Gemini is ideal for image search, duplicate detection, and clustering tasks. Fastembed, which we’ll use on the example below, is designed for efficiency and speed, great for applications needing low-latency responses, such as autocomplete and instant content recommendations. We plan to go deeper into selecting the best model based on performance, cost, integration ease, and scalability in a future post. ## Create a neural search service with Fastmbed Now that you’re familiar with the core concepts around vector embeddings, how about start building your own [Neural Search Service](/documentation/tutorials/neural-search/)? Tutorial guides you through a practical application of how to use Qdrant for document management based on descriptions of companies from [startups-list.com](https://www.startups-list.com/). From embedding data, integrating it with Qdrant's vector database, constructing a search API, and finally deploying your solution with FastAPI. Check out what the final version of this project looks like on the [live online demo](https://qdrant.to/semantic-search-demo). Let us know what you’re building with embeddings! Join our [Discord](https://discord.gg/qdrant-907569970500743200) community and share your projects!",articles/what-are-embeddings.md "--- title: ""Scalar Quantization: Background, Practices & More | Qdrant"" short_description: ""Discover scalar quantization for optimized data storage and improved performance, including data compression benefits and efficiency enhancements."" description: ""Discover the efficiency of scalar quantization for optimized data storage and enhanced performance. Learn about its data compression benefits and efficiency improvements."" social_preview_image: /articles_data/scalar-quantization/social_preview.png small_preview_image: /articles_data/scalar-quantization/scalar-quantization-icon.svg preview_dir: /articles_data/scalar-quantization/preview weight: 5 author: Kacper Łukawski author_link: https://medium.com/@lukawskikacper date: 2023-03-27T10:45:00+01:00 draft: false keywords: - vector search - scalar quantization - memory optimization --- # Efficiency Unleashed: The Power of Scalar Quantization High-dimensional vector embeddings can be memory-intensive, especially when working with large datasets consisting of millions of vectors. Memory footprint really starts being a concern when we scale things up. A simple choice of the data type used to store a single number impacts even billions of numbers and can drive the memory requirements crazy. The higher the precision of your type, the more accurately you can represent the numbers. The more accurate your vectors, the more precise is the distance calculation. But the advantages stop paying off when you need to order more and more memory. Qdrant chose `float32` as a default type used to store the numbers of your embeddings. So a single number needs 4 bytes of the memory and a 512-dimensional vector occupies 2 kB. That's only the memory used to store the vector. There is also an overhead of the HNSW graph, so as a rule of thumb we estimate the memory size with the following formula: ```text memory_size = 1.5 * number_of_vectors * vector_dimension * 4 bytes ``` While Qdrant offers various options to store some parts of the data on disk, starting from version 1.1.0, you can also optimize your memory by compressing the embeddings. We've implemented the mechanism of **Scalar Quantization**! It turns out to have not only a positive impact on memory but also on the performance. ## Scalar quantization Scalar quantization is a data compression technique that converts floating point values into integers. In case of Qdrant `float32` gets converted into `int8`, so a single number needs 75% less memory. It's not a simple rounding though! It's a process that makes that transformation partially reversible, so we can also revert integers back to floats with a small loss of precision. ### Theoretical background Assume we have a collection of `float32` vectors and denote a single value as `f32`. In reality neural embeddings do not cover a whole range represented by the floating point numbers, but rather a small subrange. Since we know all the other vectors, we can establish some statistics of all the numbers. For example, the distribution of the values will be typically normal: ![A distribution of the vector values](/articles_data/scalar-quantization/float32-distribution.png) Our example shows that 99% of the values come from a `[-2.0, 5.0]` range. And the conversion to `int8` will surely lose some precision, so we rather prefer keeping the representation accuracy within the range of 99% of the most probable values and ignoring the precision of the outliers. There might be a different choice of the range width, actually, any value from a range `[0, 1]`, where `0` means empty range, and `1` would keep all the values. That's a hyperparameter of the procedure called `quantile`. A value of `0.95` or `0.99` is typically a reasonable choice, but in general `quantile ∈ [0, 1]`. #### Conversion to integers Let's talk about the conversion to `int8`. Integers also have a finite set of values that might be represented. Within a single byte they may represent up to 256 different values, either from `[-128, 127]` or `[0, 255]`. ![Value ranges represented by int8](/articles_data/scalar-quantization/int8-value-range.png) Since we put some boundaries on the numbers that might be represented by the `f32`, and `i8` has some natural boundaries, the process of converting the values between those two ranges is quite natural: $$ f32 = \alpha \times i8 + offset $$ $$ i8 = \frac{f32 - offset}{\alpha} $$ The parameters $ \alpha $ and $ offset $ has to be calculated for a given set of vectors, but that comes easily by putting the minimum and maximum of the represented range for both `f32` and `i8`. ![Float32 to int8 conversion](/articles_data/scalar-quantization/float32-to-int8-conversion.png) For the unsigned `int8` it will go as following: $$ \begin{equation} \begin{cases} -2 = \alpha \times 0 + offset \\\\ 5 = \alpha \times 255 + offset \end{cases} \end{equation} $$ In case of signed `int8`, we'll just change the represented range boundaries: $$ \begin{equation} \begin{cases} -2 = \alpha \times (-128) + offset \\\\ 5 = \alpha \times 127 + offset \end{cases} \end{equation} $$ For any set of vector values we can simply calculate the $ \alpha $ and $ offset $ and those values have to be stored along with the collection to enable to conversion between the types. #### Distance calculation We do not store the vectors in the collections represented by `int8` instead of `float32` just for the sake of compressing the memory. But the coordinates are being used while we calculate the distance between the vectors. Both dot product and cosine distance requires multiplying the corresponding coordinates of two vectors, so that's the operation we perform quite often on `float32`. Here is how it would look like if we perform the conversion to `int8`: $$ f32 \times f32' = $$ $$ = (\alpha \times i8 + offset) \times (\alpha \times i8' + offset) = $$ $$ = \alpha^{2} \times i8 \times i8' + \underbrace{offset \times \alpha \times i8' + offset \times \alpha \times i8 + offset^{2}}_\text{pre-compute} $$ The first term, $ \alpha^{2} \times i8 \times i8' $ has to be calculated when we measure the distance as it depends on both vectors. However, both the second and the third term ($ offset \times \alpha \times i8' $ and $ offset \times \alpha \times i8 $ respectively), depend only on a single vector and those might be precomputed and kept for each vector. The last term, $ offset^{2} $ does not depend on any of the values, so it might be even computed once and reused. If we had to calculate all the terms to measure the distance, the performance could have been even worse than without the conversion. But thanks for the fact we can precompute the majority of the terms, things are getting simpler. And in turns out the scalar quantization has a positive impact not only on the memory usage, but also on the performance. As usual, we performed some benchmarks to support this statement! ## Benchmarks We simply used the same approach as we use in all [the other benchmarks we publish](/benchmarks/). Both [Arxiv-titles-384-angular-no-filters](https://github.com/qdrant/ann-filtering-benchmark-datasets) and [Gist-960](https://github.com/erikbern/ann-benchmarks/) datasets were chosen to make the comparison between non-quantized and quantized vectors. The results are summarized in the tables: #### Arxiv-titles-384-angular-no-filters
ef = 128 ef = 256 ef = 512
Upload and indexing time Mean search precision Mean search time Mean search precision Mean search time Mean search precision Mean search time
Non-quantized vectors 649 s 0.989 0.0094 0.994 0.0932 0.996 0.161
Scalar Quantization 496 s 0.986 0.0037 0.993 0.060 0.996 0.115
Difference -23.57% -0.3% -60.64% -0.1% -35.62% 0% -28.57%
A slight decrease in search precision results in a considerable improvement in the latency. Unless you aim for the highest precision possible, you should not notice the difference in your search quality. #### Gist-960
ef = 128 ef = 256 ef = 512
Upload and indexing time Mean search precision Mean search time Mean search precision Mean search time Mean search precision Mean search time
Non-quantized vectors 452 0.802 0.077 0.887 0.135 0.941 0.231
Scalar Quantization 312 0.802 0.043 0.888 0.077 0.941 0.135
Difference -30.79% 0% -44,16% +0.11% -42.96% 0% -41,56%
In all the cases, the decrease in search precision is negligible, but we keep a latency reduction of at least 28.57%, even up to 60,64%, while searching. As a rule of thumb, the higher the dimensionality of the vectors, the lower the precision loss. ### Oversampling and rescoring A distinctive feature of the Qdrant architecture is the ability to combine the search for quantized and original vectors in a single query. This enables the best combination of speed, accuracy, and RAM usage. Qdrant stores the original vectors, so it is possible to rescore the top-k results with the original vectors after doing the neighbours search in quantized space. That obviously has some impact on the performance, but in order to measure how big it is, we made the comparison in different search scenarios. We used a machine with a very slow network-mounted disk and tested the following scenarios with different amounts of allowed RAM: | Setup | RPS | Precision | |-----------------------------|------|-----------| | 4.5GB memory | 600 | 0.99 | | 4.5GB memory + SQ + rescore | 1000 | 0.989 | And another group with more strict memory limits: | Setup | RPS | Precision | |------------------------------|------|-----------| | 2GB memory | 2 | 0.99 | | 2GB memory + SQ + rescore | 30 | 0.989 | | 2GB memory + SQ + no rescore | 1200 | 0.974 | In those experiments, throughput was mainly defined by the number of disk reads, and quantization efficiently reduces it by allowing more vectors in RAM. Read more about on-disk storage in Qdrant and how we measure its performance in our article: [Minimal RAM you need to serve a million vectors ](/articles/memory-consumption/). The mechanism of Scalar Quantization with rescoring disabled pushes the limits of low-end machines even further. It seems like handling lots of requests does not require an expensive setup if you can agree to a small decrease in the search precision. ### Accessing best practices Qdrant documentation on [Scalar Quantization](/documentation/quantization/#setting-up-quantization-in-qdrant) is a great resource describing different scenarios and strategies to achieve up to 4x lower memory footprint and even up to 2x performance increase. ",articles/scalar-quantization.md "--- title: Extending ChatGPT with a Qdrant-based knowledge base short_description: ""ChatGPT factuality might be improved with semantic search. Here is how."" description: ""ChatGPT factuality might be improved with semantic search. Here is how."" social_preview_image: /articles_data/chatgpt-plugin/social_preview.jpg small_preview_image: /articles_data/chatgpt-plugin/chatgpt-plugin-icon.svg preview_dir: /articles_data/chatgpt-plugin/preview weight: 7 author: Kacper Łukawski author_link: https://medium.com/@lukawskikacper date: 2023-03-23T18:01:00+01:00 draft: false keywords: - openai - chatgpt - chatgpt plugin - knowledge base - similarity search --- In recent months, ChatGPT has revolutionised the way we communicate, learn, and interact with technology. Our social platforms got flooded with prompts, responses to them, whole articles and countless other examples of using Large Language Models to generate content unrecognisable from the one written by a human. Despite their numerous benefits, these models have flaws, as evidenced by the phenomenon of hallucination - the generation of incorrect or nonsensical information in response to user input. This issue, which can compromise the reliability and credibility of AI-generated content, has become a growing concern among researchers and users alike. Those concerns started another wave of entirely new libraries, such as Langchain, trying to overcome those issues, for example, by combining tools like vector databases to bring the required context into the prompts. And that is, so far, the best way to incorporate new and rapidly changing knowledge into the neural model. So good that OpenAI decided to introduce a way to extend the model capabilities with external plugins at the model level. These plugins, designed to enhance the model's performance, serve as modular extensions that seamlessly interface with the core system. By adding a knowledge base plugin to ChatGPT, we can effectively provide the AI with a curated, trustworthy source of information, ensuring that the generated content is more accurate and relevant. Qdrant may act as a vector database where all the facts will be stored and served to the model upon request. If you’d like to ask ChatGPT questions about your data sources, such as files, notes, or emails, starting with the official [ChatGPT retrieval plugin repository](https://github.com/openai/chatgpt-retrieval-plugin) is the easiest way. Qdrant is already integrated, so that you can use it right away. In the following sections, we will guide you through setting up the knowledge base using Qdrant and demonstrate how this powerful combination can significantly improve ChatGPT's performance and output quality. ## Implementing a knowledge base with Qdrant The official ChatGPT retrieval plugin uses a vector database to build your knowledge base. Your documents are chunked and vectorized with the OpenAI's text-embedding-ada-002 model to be stored in Qdrant. That enables semantic search capabilities. So, whenever ChatGPT thinks it might be relevant to check the knowledge base, it forms a query and sends it to the plugin to incorporate the results into its response. You can now modify the knowledge base, and ChatGPT will always know the most recent facts. No model fine-tuning is required. Let’s implement that for your documents. In our case, this will be Qdrant’s documentation, so you can ask even technical questions about Qdrant directly in ChatGPT. Everything starts with cloning the plugin's repository. ```bash git clone git@github.com:openai/chatgpt-retrieval-plugin.git ``` Please use your favourite IDE to open the project once cloned. ### Prerequisites You’ll need to ensure three things before we start: 1. Create an OpenAI API key, so you can use their embeddings model programmatically. If you already have an account, you can generate one at https://platform.openai.com/account/api-keys. Otherwise, registering an account might be required. 2. Run a Qdrant instance. The instance has to be reachable from the outside, so you either need to launch it on-premise or use the [Qdrant Cloud](https://cloud.qdrant.io/) offering. A free 1GB cluster is available, which might be enough in many cases. We’ll use the cloud. 3. Since ChatGPT will interact with your service through the network, you must deploy it, making it possible to connect from the Internet. Unfortunately, localhost is not an option, but any provider, such as Heroku or fly.io, will work perfectly. We will use [fly.io](https://fly.io/), so please register an account. You may also need to install the flyctl tool for the deployment. The process is described on the homepage of fly.io. ### Configuration The retrieval plugin is a FastAPI-based application, and its default functionality might be enough in most cases. However, some configuration is required so ChatGPT knows how and when to use it. However, we can start setting up Fly.io, as we need to know the service's hostname to configure it fully. First, let’s login into the Fly CLI: ```bash flyctl auth login ``` That will open the browser, so you can simply provide the credentials, and all the further commands will be executed with your account. If you have never used fly.io, you may need to give the credit card details before running any instance, but there is a Hobby Plan you won’t be charged for. Let’s try to launch the instance already, but do not deploy it. We’ll get the hostname assigned and have all the details to fill in the configuration. The retrieval plugin uses TCP port 8080, so we need to configure fly.io, so it redirects all the traffic to it as well. ```bash flyctl launch --no-deploy --internal-port 8080 ``` We’ll be prompted about the application name and the region it should be deployed to. Please choose whatever works best for you. After that, we should see the hostname of the newly created application: ```text ... Hostname: your-application-name.fly.dev ... ``` Let’s note it down. We’ll need it for the configuration of the service. But we’re going to start with setting all the applications secrets: ```bash flyctl secrets set DATASTORE=qdrant \ OPENAI_API_KEY= \ QDRANT_URL=https://.aws.cloud.qdrant.io \ QDRANT_API_KEY= \ BEARER_TOKEN=eyJhbGciOiJIUzI1NiJ9.e30.ZRrHA1JJJW8opsbCGfG_HACGpVUMN_a9IV7pAx_Zmeo ``` The secrets will be staged for the first deployment. There is an example of a minimal Bearer token generated by https://jwt.io/. **Please adjust the token and do not expose it publicly, but you can keep the same value for the demo.** Right now, let’s dive into the application config files. You can optionally provide your icon and keep it as `.well-known/logo.png` file, but there are two additional files we’re going to modify. The `.well-known/openapi.yaml` file describes the exposed API in the OpenAPI format. Lines 3 to 5 might be filled with the application title and description, but the essential part is setting the server URL the application will run. Eventually, the top part of the file should look like the following: ```yaml openapi: 3.0.0 info: title: Qdrant Plugin API version: 1.0.0 description: Plugin for searching through the Qdrant doc… servers: - url: https://your-application-name.fly.dev ... ``` There is another file in the same directory, and that’s the most crucial piece to configure. It contains the description of the plugin we’re implementing, and ChatGPT uses this description to determine if it should communicate with our knowledge base. The file is called `.well-known/ai-plugin.json`, and let’s edit it before we finally deploy the app. There are various properties we need to fill in: | **Property** | **Meaning** | **Example** | |-------------------------|----------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | `name_for_model` | Name of the plugin for the ChatGPT model | *qdrant* | | `name_for_human` | Human-friendly model name, to be displayed in ChatGPT UI | *Qdrant Documentation Plugin* | | `description_for_model` | Description of the purpose of the plugin, so ChatGPT knows in what cases it should be using it to answer a question. | *Plugin for searching through the Qdrant documentation to find answers to questions and retrieve relevant information. Use it whenever a user asks something that might be related to Qdrant vector database or semantic vector search* | | `description_for_human` | Short description of the plugin, also to be displayed in the ChatGPT UI. | *Search through Qdrant docs* | | `auth` | Authorization scheme used by the application. By default, the bearer token has to be configured. | ```{""type"": ""user_http"", ""authorization_type"": ""bearer""}``` | | `api.url` | Link to the OpenAPI schema definition. Please adjust based on your application URL. | *https://your-application-name.fly.dev/.well-known/openapi.yaml* | | `logo_url` | Link to the application logo. Please adjust based on your application URL. | *https://your-application-name.fly.dev/.well-known/logo.png* | A complete file may look as follows: ```json { ""schema_version"": ""v1"", ""name_for_model"": ""qdrant"", ""name_for_human"": ""Qdrant Documentation Plugin"", ""description_for_model"": ""Plugin for searching through the Qdrant documentation to find answers to questions and retrieve relevant information. Use it whenever a user asks something that might be related to Qdrant vector database or semantic vector search"", ""description_for_human"": ""Search through Qdrant docs"", ""auth"": { ""type"": ""user_http"", ""authorization_type"": ""bearer"" }, ""api"": { ""type"": ""openapi"", ""url"": ""https://your-application-name.fly.dev/.well-known/openapi.yaml"", ""has_user_authentication"": false }, ""logo_url"": ""https://your-application-name.fly.dev/.well-known/logo.png"", ""contact_email"": ""email@domain.com"", ""legal_info_url"": ""email@domain.com"" } ``` That was the last step before running the final command. The command that will deploy the application on the server: ```bash flyctl deploy ``` The command will build the image using the Dockerfile and deploy the service at a given URL. Once the command is finished, the service should be running on the hostname we got previously: ```text https://your-application-name.fly.dev ``` ## Integration with ChatGPT Once we have deployed the service, we can point ChatGPT to it, so the model knows how to connect. When you open the ChatGPT UI, you should see a dropdown with a Plugins tab included: ![](/articles_data/chatgpt-plugin/step-1.png) Once selected, you should be able to choose one of check the plugin store: ![](/articles_data/chatgpt-plugin/step-2.png) There are some premade plugins available, but there’s also a possibility to install your own plugin by clicking on the ""*Develop your own plugin*"" option in the bottom right corner: ![](/articles_data/chatgpt-plugin/step-3.png) We need to confirm our plugin is ready, but since we relied on the official retrieval plugin from OpenAI, this should be all fine: ![](/articles_data/chatgpt-plugin/step-4.png) After clicking on ""*My manifest is ready*"", we can already point ChatGPT to our newly created service: ![](/articles_data/chatgpt-plugin/step-5.png) A successful plugin installation should end up with the following information: ![](/articles_data/chatgpt-plugin/step-6.png) There is a name and a description of the plugin we provided. Let’s click on ""*Done*"" and return to the ""*Plugin store*"" window again. There is another option we need to choose in the bottom right corner: ![](/articles_data/chatgpt-plugin/step-7.png) Our plugin is not officially verified, but we can, of course, use it freely. The installation requires just the service URL: ![](/articles_data/chatgpt-plugin/step-8.png) OpenAI cannot guarantee the plugin provides factual information, so there is a warning we need to accept: ![](/articles_data/chatgpt-plugin/step-9.png) Finally, we need to provide the Bearer token again: ![](/articles_data/chatgpt-plugin/step-10.png) Our plugin is now ready to be tested. Since there is no data inside the knowledge base, extracting any facts is impossible, but we’re going to put some data using the Swagger UI exposed by our service at https://your-application-name.fly.dev/docs. We need to authorize first, and then call the upsert method with some docs. For the demo purposes, we can just put a single document extracted from the Qdrant documentation to see whether integration works properly: ![](/articles_data/chatgpt-plugin/step-11.png) We can come back to ChatGPT UI, and send a prompt, but we need to make sure the plugin is selected: ![](/articles_data/chatgpt-plugin/step-12.png) Now if our prompt seems somehow related to the plugin description provided, the model will automatically form a query and send it to the HTTP API. The query will get vectorized by our app, and then used to find some relevant documents that will be used as a context to generate the response. ![](/articles_data/chatgpt-plugin/step-13.png) We have a powerful language model, that can interact with our knowledge base, to return not only grammatically correct but also factual information. And this is how your interactions with the model may start to look like: However, a single document is not enough to enable the full power of the plugin. If you want to put more documents that you have collected, there are already some scripts available in the `scripts/` directory that allows converting JSON, JSON lines or even zip archives. ",articles/chatgpt-plugin.md "--- title: Deliver Better Recommendations with Qdrant’s new API short_description: Qdrant 1.6 brings recommendations strategies and more flexibility to the Recommendation API. description: Qdrant 1.6 brings recommendations strategies and more flexibility to the Recommendation API. preview_dir: /articles_data/new-recommendation-api/preview social_preview_image: /articles_data/new-recommendation-api/preview/social_preview.png small_preview_image: /articles_data/new-recommendation-api/icon.svg weight: -80 author: Kacper Łukawski author_link: https://medium.com/@lukawskikacper date: 2023-10-25T09:46:00.000Z --- The most popular use case for vector search engines, such as Qdrant, is Semantic search with a single query vector. Given the query, we can vectorize (embed) it and find the closest points in the index. But [Vector Similarity beyond Search](/articles/vector-similarity-beyond-search/) does exist, and recommendation systems are a great example. Recommendations might be seen as a multi-aim search, where we want to find items close to positive and far from negative examples. This use of vector databases has many applications, including recommendation systems for e-commerce, content, or even dating apps. Qdrant has provided the [Recommendation API](/documentation/concepts/search/#recommendation-api) for a while, and with the latest release, [Qdrant 1.6](https://github.com/qdrant/qdrant/releases/tag/v1.6.0), we're glad to give you more flexibility and control over the Recommendation API. Here, we'll discuss some internals and show how they may be used in practice. ### Recap of the old recommendations API The previous [Recommendation API](/documentation/concepts/search/#recommendation-api) in Qdrant came with some limitations. First of all, it was required to pass vector IDs for both positive and negative example points. If you wanted to use vector embeddings directly, you had to either create a new point in a collection or mimic the behaviour of the Recommendation API by using the [Search API](/documentation/concepts/search/#search-api). Moreover, in the previous releases of Qdrant, you were always asked to provide at least one positive example. This requirement was based on the algorithm used to combine multiple samples into a single query vector. It was a simple, yet effective approach. However, if the only information you had was that your user dislikes some items, you couldn't use it directly. Qdrant 1.6 brings a more flexible API. You can now provide both IDs and vectors of positive and negative examples. You can even combine them within a single request. That makes the new implementation backward compatible, so you can easily upgrade an existing Qdrant instance without any changes in your code. And the default behaviour of the API is still the same as before. However, we extended the API, so **you can now choose the strategy of how to find the recommended points**. ```http POST /collections/{collection_name}/points/recommend { ""positive"": [100, 231], ""negative"": [718, [0.2, 0.3, 0.4, 0.5]], ""filter"": { ""must"": [ { ""key"": ""city"", ""match"": { ""value"": ""London"" } } ] }, ""strategy"": ""average_vector"", ""limit"": 3 } ``` There are two key changes in the request. First of all, we can adjust the strategy of search and set it to `average_vector` (the default) or `best_score`. Moreover, we can pass both IDs (`718`) and embeddings (`[0.2, 0.3, 0.4, 0.5]`) as both positive and negative examples. ## HNSW ANN example and strategy Let’s start with an example to help you understand the [HNSW graph](/articles/filtrable-hnsw/). Assume you want to travel to a small city on another continent: 1. You start from your hometown and take a bus to the local airport. 2. Then, take a flight to one of the closest hubs. 3. From there, you have to take another flight to a hub on your destination continent. 4. Hopefully, one last flight to your destination city. 5. You still have one more leg on local transport to get to your final address. This journey is similar to the HNSW graph’s use in Qdrant's approximate nearest neighbours search. ![Transport network](/articles_data/new-recommendation-api/example-transport-network.png) HNSW is a multilayer graph of vectors (embeddings), with connections based on vector proximity. The top layer has the least points, and the distances between those points are the biggest. The deeper we go, the more points we have, and the distances get closer. The graph is built in a way that the points are connected to their closest neighbours at every layer. All the points from a particular layer are also in the layer below, so switching the search layer while staying in the same location is possible. In the case of transport networks, the top layer would be the airline hubs, well-connected but with big distances between the airports. Local airports, along with railways and buses, with higher density and smaller distances, make up the middle layers. Lastly, our bottom layer consists of local means of transport, which is the densest and has the smallest distances between the points. You don’t have to check all the possible connections when you travel. You select an intercontinental flight, then a local one, and finally a bus or a taxi. All the decisions are made based on the distance between the points. The search process in HNSW is also based on similarly traversing the graph. Start from the entry point in the top layer, find its closest point and then use that point as the entry point into the next densest layer. This process repeats until we reach the bottom layer. Visited points and distances to the original query vector are kept in memory. If none of the neighbours of the current point is better than the best match, we can stop the traversal, as this is a local minimum. We start at the biggest scale, and then gradually zoom in. In this oversimplified example, we assumed that the distance between the points is the only factor that matters. In reality, we might want to consider other criteria, such as the ticket price, or avoid some specific locations due to certain restrictions. That means, there are various strategies for choosing the best match, which is also true in the case of vector recommendations. We can use different approaches to determine the path of traversing the HNSW graph by changing how we calculate the score of a candidate point during traversal. The default behaviour is based on pure distance, but Qdrant 1.6 exposes two strategies for the recommendation API. ### Average vector The default strategy, called `average_vector` is the previous one, based on the average of positive and negative examples. It simplifies the recommendations process and converts it into a single vector search. It supports both point IDs and vectors as parameters. For example, you can get recommendations based on past interactions with existing points combined with query vector embedding. Internally, that mechanism is based on the averages of positive and negative examples and was calculated with the following formula: $$ \text{average vector} = \text{avg}(\text{positive vectors}) + \left( \text{avg}(\text{positive vectors}) - \text{avg}(\text{negative vectors}) \right) $$ The `average_vector` converts the problem of recommendations into a single vector search. ### The new hotness - Best score The new strategy is called `best_score`. It does not rely on averages and is more flexible. It allows you to pass just negative samples and uses a slightly more sophisticated algorithm under the hood. The best score is chosen at every step of HNSW graph traversal. We separately calculate the distance between a traversed point and every positive and negative example. In the case of the best score strategy, **there is no single query vector anymore, but a bunch of positive and negative queries**. As a result, for each sample in the query, we have a set of distances, one for each sample. In the next step, we simply take the best scores for positives and negatives, creating two separate values. Best scores are just the closest distances of a query to positives and negatives. The idea is: **if a point is closer to any negative than to any positive example, we do not want it**. We penalize being close to the negatives, so instead of using the similarity value directly, we check if it’s closer to positives or negatives. The following formula is used to calculate the score of a traversed potential point: ```rust if best_positive_score > best_negative_score { score = best_positive_score } else { score = -(best_negative_score * best_negative_score) } ``` If the point is closer to the negatives, we penalize it by taking the negative squared value of the best negative score. For a closer negative, the score of the candidate point will always be lower or equal to zero, making the chances of choosing that point significantly lower. However, if the best negative score is higher than the best positive score, we still prefer those that are further away from the negatives. That procedure effectively **pulls the traversal procedure away from the negative examples**. If you want to know more about the internals of HNSW, you can check out the article about the [Filtrable HNSW](/articles/filtrable-hnsw/) that covers the topic thoroughly. ## Food Discovery demo Our [Food Discovery demo](/articles/food-discovery-demo/) is an application built on top of the new [Recommendation API](/documentation/concepts/search/#recommendation-api). It allows you to find a meal based on liked and disliked photos. There are some updates, enabled by the new Qdrant release: * **Ability to include multiple textual queries in the recommendation request.** Previously, we only allowed passing a single query to solve the cold start problem. Right now, you can pass multiple queries and mix them with the liked/disliked photos. This became possible because of the new flexibility in parameters. We can pass both point IDs and embedding vectors in the same request, and user queries are obviously not a part of the collection. * **Switch between the recommendation strategies.** You can now choose between the `average_vector` and the `best_score` scoring algorithm. ### Differences between the strategies The UI of the Food Discovery demo allows you to switch between the strategies. The `best_vector` is the default one, but with just a single switch, you can see how the results differ when using the previous `average_vector` strategy. If you select just a single positive example, both algorithms work identically. ##### One positive example The difference only becomes apparent when you start adding more examples, especially if you choose some negatives. ##### One positive and one negative example The more likes and dislikes we add, the more diverse the results of the `best_score` strategy will be. In the old strategy, there is just a single vector, so all the examples are similar to it. The new one takes into account all the examples separately, making the variety richer. ##### Multiple positive and negative examples Choosing the right strategy is dataset-dependent, and the embeddings play a significant role here. Thus, it’s always worth trying both of them and comparing the results in a particular case. #### Handling the negatives only In the case of our Food Discovery demo, passing just the negative images can work as an outlier detection mechanism. While the dataset was supposed to contain only food photos, this is not actually true. A simple way to find these outliers is to pass in food item photos as negatives, leading to the results being the most ""unlike"" food images. In our case you will see pill bottles and books. **The `average_vector` strategy still requires providing at least one positive example!** However, since cosine distance is set up for the collection used in the demo, we faked it using [a trick described in the previous article](/articles/food-discovery-demo/#negative-feedback-only). In a nutshell, if you only pass negative examples, their vectors will be averaged, and the negated resulting vector will be used as a query to the search endpoint. ##### Negatives only Still, both methods return different results, so they each have their place depending on the questions being asked and the datasets being used. #### Challenges with multimodality Food Discovery uses the [CLIP embeddings model](https://huggingface.co/sentence-transformers/clip-ViT-B-32), which is multimodal, allowing both images and texts encoded into the same vector space. Using this model allows for image queries, text queries, or both of them combined. We utilized that mechanism in the updated demo, allowing you to pass the textual queries to filter the results further. ##### A single text query Text queries might be mixed with the liked and disliked photos, so you can combine them in a single request. However, you might be surprised by the results achieved with the new strategy, if you start adding the negative examples. ##### A single text query with negative example This is an issue related to the embeddings themselves. Our dataset contains a bunch of image embeddings that are pretty close to each other. On the other hand, our text queries are quite far from most of the image embeddings, but relatively close to some of them, so the text-to-image search seems to work well. When all query items come from the same domain, such as only text, everything works fine. However, if we mix positive text and negative image embeddings, the results of the `best_score` are overwhelmed by the negative samples, which are simply closer to the dataset embeddings. If you experience such a problem, the `average_vector` strategy might be a better choice. ### Check out the demo The [Food Discovery Demo](https://food-discovery.qdrant.tech/) is available online, so you can test and see the difference. This is an open source project, so you can easily deploy it on your own. The source code is available in the [GitHub repository ](https://github.com/qdrant/demo-food-discovery/) and the [README](https://github.com/qdrant/demo-food-discovery/blob/main/README.md) describes the process of setting it up. Since calculating the embeddings takes a while, we precomputed them and exported them as a [snapshot](https://storage.googleapis.com/common-datasets-snapshots/wolt-clip-ViT-B-32.snapshot), which might be easily imported into any Qdrant instance. [Qdrant Cloud is the easiest way to start](https://cloud.qdrant.io/), though! ",articles/new-recommendation-api.md "--- title: "" Data Privacy with Qdrant: Implementing Role-Based Access Control (RBAC)"" #required short_description: ""Secure Your Data with Qdrant: Implementing RBAC"" description: Discover how Qdrant's Role-Based Access Control (RBAC) ensures data privacy and compliance for your AI applications. Build secure and scalable systems with ease. Read more now! social_preview_image: /articles_data/data-privacy/preview/social_preview.jpg # This image will be used in social media previews, should be 1200x630px. Required. preview_dir: /articles_data/data-privacy/preview # This directory contains images that will be used in the article preview. They can be generated from one image. Read more below. Required. weight: -110 # This is the order of the article in the list of articles at the footer. The lower the number, the higher the article will be in the list. author: Qdrant Team # Author of the article. Required. author_link: https://qdrant.tech/ # Link to the author's page. Required. date: 2024-06-18T08:00:00-03:00 # Date of the article. Required. draft: false # If true, the article will not be published keywords: # Keywords for SEO - Role-Based Access Control (RBAC) - Data Privacy in Vector Databases - Secure AI Data Management - Qdrant Data Security - Enterprise Data Compliance --- Data stored in vector databases is often proprietary to the enterprise and may include sensitive information like customer records, legal contracts, electronic health records (EHR), financial data, and intellectual property. Moreover, strong security measures become critical to safeguarding this data. If the data stored in a vector database is not secured, it may open a vulnerability known as ""[embedding inversion attack](https://arxiv.org/abs/2004.00053),"" where malicious actors could potentially [reconstruct the original data from the embeddings](https://arxiv.org/pdf/2305.03010) themselves. Strict compliance regulations govern data stored in vector databases across various industries. For instance, healthcare must comply with HIPAA, which dictates how protected health information (PHI) is stored, transmitted, and secured. Similarly, the financial services industry follows PCI DSS to safeguard sensitive financial data. These regulations require developers to ensure data storage and transmission comply with industry-specific legal frameworks across different regions. **As a result, features that enable data privacy, security and sovereignty are deciding factors when choosing the right vector database.** This article explores various strategies to ensure the security of your critical data while leveraging the benefits of vector search. Implementing some of these security approaches can help you build privacy-enhanced similarity search algorithms and integrate them into your AI applications. Additionally, you will learn how to build a fully data-sovereign architecture, allowing you to retain control over your data and comply with relevant data laws and regulations. > To skip right to the code implementation, [click here](/articles/data-privacy/#jwt-on-qdrant). ## Vector Database Security: An Overview Vector databases are often unsecured by default to facilitate rapid prototyping and experimentation. This approach allows developers to quickly ingest data, build vector representations, and test similarity search algorithms without initial security concerns. However, in production environments, unsecured databases pose significant data breach risks. For production use, robust security systems are essential. Authentication, particularly using static API keys, is a common approach to control access and prevent unauthorized modifications. Yet, simple API authentication is insufficient for enterprise data, which requires granular control. The primary challenge with static API keys is their all-or-nothing access, inadequate for role-based data segregation in enterprise applications. Additionally, a compromised key could grant attackers full access to manipulate or steal data. To strengthen the security of the vector database, developers typically need the following: 1. **Encryption**: This ensures that sensitive data is scrambled as it travels between the application and the vector database. This safeguards against Man-in-the-Middle ([MitM](https://en.wikipedia.org/wiki/Man-in-the-middle_attack)) attacks, where malicious actors can attempt to intercept and steal data during transmission. 2. **Role-Based Access Control**: As mentioned before, traditional static API keys grant all-or-nothing access, which is a significant security risk in enterprise environments. RBAC offers a more granular approach by defining user roles and assigning specific data access permissions based on those roles. For example, an analyst might have read-only access to specific datasets, while an administrator might have full CRUD (Create, Read, Update, Delete) permissions across the database. 3. **Deployment Flexibility**: Data residency regulations like GDPR (General Data Protection Regulation) and industry-specific compliance requirements dictate where data can be stored, processed, and accessed. Developers would need to choose a database solution which offers deployment options that comply with these regulations. This might include on-premise deployments within a company's private cloud or geographically distributed cloud deployments that adhere to data residency laws. ## How Qdrant Handles Data Privacy and Security One of the cornerstones of our design choices at Qdrant has been the focus on security features. We have built in a range of features keeping the enterprise user in mind, which allow building of granular access control on a fully data sovereign architecture. A Qdrant instance is unsecured by default. However, when you are ready to deploy in production, Qdrant offers a range of security features that allow you to control access to your data, protect it from breaches, and adhere to regulatory requirements. Using Qdrant, you can build granular access control, segregate roles and privileges, and create a fully data sovereign architecture. ### API Keys and TLS Encryption For simpler use cases, Qdrant offers API key-based authentication. This includes both regular API keys and read-only API keys. Regular API keys grant full access to read, write, and delete operations, while read-only keys restrict access to data retrieval operations only, preventing write actions. On Qdrant Cloud, you can create API keys using the [Cloud Dashboard](https://qdrant.to/cloud). This allows you to generate API keys that give you access to a single node or cluster, or multiple clusters. You can read the steps to do so [here](/documentation/cloud/authentication/). ![web-ui](/articles_data/data-privacy/web-ui.png) For on-premise or local deployments, you'll need to configure API key authentication. This involves specifying a key in either the Qdrant configuration file or as an environment variable. This ensures that all requests to the server must include a valid API key sent in the header. When using the simple API key-based authentication, you should also turn on TLS encryption. Otherwise, you are exposing the connection to sniffing and MitM attacks. To secure your connection using TLS, you would need to create a certificate and private key, and then [enable TLS](/documentation/guides/security/#tls) in the configuration. API authentication, coupled with TLS encryption, offers a first layer of security for your Qdrant instance. However, to enable more granular access control, the recommended approach is to leverage JSON Web Tokens (JWTs). ### JWT on Qdrant JSON Web Tokens (JWTs) are a compact, URL-safe, and stateless means of representing _claims_ to be transferred between two parties. These claims are encoded as a JSON object and are cryptographically signed. JWT is composed of three parts: a header, a payload, and a signature, which are concatenated with dots (.) to form a single string. The header contains the type of token and algorithm being used. The payload contains the claims (explained in detail later). The signature is a cryptographic hash and ensures the token’s integrity. In Qdrant, JWT forms the foundation through which powerful access controls can be built. Let’s understand how. JWT is enabled on the Qdrant instance by specifying the API key and turning on the **jwt_rbac** feature in the configuration (alternatively, they can be set as environment variables). For any subsequent request, the API key is used to encode or decode the token. The way JWT works is that just the API key is enough to generate the token, and doesn’t require any communication with the Qdrant instance or server. There are several libraries that help generate tokens by encoding a payload, such as [PyJWT](https://pyjwt.readthedocs.io/en/stable/) (for Python), [jsonwebtoken](https://www.npmjs.com/package/jsonwebtoken) (for JavaScript), and [jsonwebtoken](https://crates.io/crates/jsonwebtoken) (for Rust). Qdrant uses the HS256 algorithm to encode or decode the tokens. We will look at the payload structure shortly, but here’s how you can generate a token using PyJWT. ```python import jwt import datetime # Define your API key and other payload data api_key = ""your_api_key"" payload = { ... } token = jwt.encode(payload, api_key, algorithm=""HS256"") print(token) ``` Once you have generated the token, you should include it in the subsequent requests. You can do so by providing it as a bearer token in the Authorization header, or in the API Key header of your requests. Below is an example of how to do so using QdrantClient in Python: ```python from qdrant_client import QdrantClient qdrant_client = QdrantClient( ""http://localhost:6333"", api_key="""", # the token goes here ) # Example search vector search_vector = [0.1, 0.2, 0.3, 0.4] # Example similarity search request response = qdrant_client.search( collection_name=""demo_collection"", query_vector=search_vector, limit=5 # Number of results to retrieve ) ``` For convenience, we have added a JWT generation tool in the Qdrant Web UI, which is present under the 🔑 tab. For your local deployments, you will find it at [http://localhost:6333/dashboard#/jwt](http://localhost:6333/dashboard#/jwt). ### Payload Configuration There are several different options (claims) you can use in the JWT payload that help control access and functionality. Let’s look at them one by one. **exp**: This claim is the expiration time of the token, and is a unix timestamp in seconds. After the expiration time, the token will be invalid. **value_exists**: This claim validates the token against a specific key-value stored in a collection. By using this claim, you can revoke access by simply changing a value without having to invalidate the API key. **access**: This claim defines the access level of the token. The access level can be global read (r) or manage (m). It can also be specific to a collection, or even a subset of a collection, using read (r) and read-write (rw). Let’s look at a few example JWT payload configurations. **Scenario 1: 1-hour expiry time, and read-only access to a collection** ```json { ""exp"": 1690995200, // Set to 1 hour from the current time (Unix timestamp) ""access"": [ { ""collection"": ""demo_collection"", ""access"": ""r"" // Read-only access } ] } ``` **Scenario 2: 1-hour expiry time, and access to user with a specific role** Suppose you have a ‘users’ collection and have defined specific roles for each user, such as ‘developer’, ‘manager’, ‘admin’, ‘analyst’, and ‘revoked’. In such a scenario, you can use a combination of **exp** and **value_exists**. ```json { ""exp"": 1690995200, ""value_exists"": { ""collection"": ""users"", ""matches"": [ { ""key"": ""username"", ""value"": ""john"" }, { ""key"": ""role"", ""value"": ""developer"" } ], }, } ``` Now, if you ever want to revoke access for a user, simply change the value of their role. All future requests will be invalid using a token payload of the above type. **Scenario 3: 1-hour expiry time, and read-write access to a subset of a collection** You can even specify access levels specific to subsets of a collection. This can be especially useful when you are leveraging [multitenancy](/documentation/guides/multiple-partitions/), and want to segregate access. ```json { ""exp"": 1690995200, ""access"": [ { ""collection"": ""demo_collection"", ""access"": ""r"", ""payload"": { ""user_id"": ""user_123456"" } } ] } ``` By combining the claims, you can fully customize the access level that a user or a role has within the vector store. ### Creating Role-Based Access Control (RBAC) Using JWT As we saw above, JWT claims create powerful levers through which you can create granular access control on Qdrant. Let’s bring it all together and understand how it helps you create Role-Based Access Control (RBAC). In a typical enterprise application, you will have a segregation of users based on their roles and permissions. These could be: 1. **Admin or Owner:** with full access, and can generate API keys. 2. **Editor:** with read-write access levels to specific collections. 3. **Viewer:** with read-only access to specific collections. 4. **Data Scientist or Analyst:** with read-only access to specific collections. 5. **Developer:** with read-write access to development- or testing-specific collections, but limited access to production data. 6. **Guest:** with limited read-only access to publicly available collections. In addition, you can create access levels within sections of a collection. In a multi-tenant application, where you have used payload-based partitioning, you can create read-only access for specific user roles for a subset of the collection that belongs to that user. Your application requirements will eventually help you decide the roles and access levels you should create. For example, in an application managing customer data, you could create additional roles such as: **Customer Support Representative**: read-write access to customer service-related data but no access to billing information. **Billing Department**: read-only access to billing data and read-write access to payment records. **Marketing Analyst**: read-only access to anonymized customer data for analytics. Each role can be assigned a JWT with claims that specify expiration times, read/write permissions for collections, and validating conditions. In such an application, an example JWT payload for a customer support representative role could be: ```json { ""exp"": 1690995200, ""access"": [ { ""collection"": ""customer_data"", ""access"": ""rw"", ""payload"": { ""department"": ""support"" } } ], ""value_exists"": { ""collection"": ""departments"", ""matches"": [ { ""key"": ""department"", ""value"": ""support"" } ] } } ``` As you can see, by implementing RBAC, you can ensure proper segregation of roles and their privileges, and avoid privacy loopholes in your application. ## Qdrant Hybrid Cloud and Data Sovereignty Data governance varies by country, especially for global organizations dealing with different regulations on data privacy, security, and access. This often necessitates deploying infrastructure within specific geographical boundaries. To address these needs, the vector database you choose should support deployment and scaling within your controlled infrastructure. [Qdrant Hybrid Cloud](/documentation/hybrid-cloud/) offers this flexibility, along with features like sharding, replicas, JWT authentication, and monitoring. Qdrant Hybrid Cloud integrates Kubernetes clusters from various environments—cloud, on-premises, or edge—into a unified managed service. This allows organizations to manage Qdrant databases through the Qdrant Cloud UI while keeping the databases within their infrastructure. With JWT and RBAC, Qdrant Hybrid Cloud provides a secure, private, and sovereign vector store. Enterprises can scale their AI applications geographically, comply with local laws, and maintain strict data control. ## Conclusion Vector similarity is increasingly becoming the backbone of AI applications that leverage unstructured data. By transforming data into vectors – their numerical representations – organizations can build powerful applications that harness semantic search, ranging from better recommendation systems to algorithms that help with personalization, or powerful customer support chatbots. However, to fully leverage the power of AI in production, organizations need to choose a vector database that offers strong privacy and security features, while also helping them adhere to local laws and regulations. Qdrant provides exceptional efficiency and performance, along with the capability to implement granular access control to data, Role-Based Access Control (RBAC), and the ability to build a fully data-sovereign architecture. Interested in mastering vector search security and deployment strategies? [Join our Discord community](https://discord.gg/qdrant) to explore more advanced search strategies, connect with other developers and researchers in the industry, and stay updated on the latest innovations! ",articles/data-privacy.md "--- title: Question Answering as a Service with Cohere and Qdrant short_description: ""End-to-end Question Answering system for the biomedical data with SaaS tools: Cohere co.embed API and Qdrant"" description: ""End-to-end Question Answering system for the biomedical data with SaaS tools: Cohere co.embed API and Qdrant"" social_preview_image: /articles_data/qa-with-cohere-and-qdrant/social_preview.png small_preview_image: /articles_data/qa-with-cohere-and-qdrant/q-and-a-article-icon.svg preview_dir: /articles_data/qa-with-cohere-and-qdrant/preview weight: 7 author: Kacper Łukawski author_link: https://medium.com/@lukawskikacper date: 2022-11-29T15:45:00+01:00 draft: false keywords: - vector search - question answering - cohere - co.embed - embeddings --- Bi-encoders are probably the most efficient way of setting up a semantic Question Answering system. This architecture relies on the same neural model that creates vector embeddings for both questions and answers. The assumption is, both question and answer should have representations close to each other in the latent space. It should be like that because they should both describe the same semantic concept. That doesn't apply to answers like ""Yes"" or ""No"" though, but standard FAQ-like problems are a bit easier as there is typically an overlap between both texts. Not necessarily in terms of wording, but in their semantics. ![Bi-encoder structure. Both queries (questions) and documents (answers) are vectorized by the same neural encoder. Output embeddings are then compared by a chosen distance function, typically cosine similarity.](/articles_data/qa-with-cohere-and-qdrant/biencoder-diagram.png) And yeah, you need to **bring your own embeddings**, in order to even start. There are various ways how to obtain them, but using Cohere [co.embed API](https://docs.cohere.ai/reference/embed) is probably the easiest and most convenient method. ## Why co.embed API and Qdrant go well together? Maintaining a **Large Language Model** might be hard and expensive. Scaling it up and down, when the traffic changes, require even more effort and becomes unpredictable. That might be definitely a blocker for any semantic search system. But if you want to start right away, you may consider using a SaaS model, Cohere’s [co.embed API](https://docs.cohere.ai/reference/embed) in particular. It gives you state-of-the-art language models available as a Highly Available HTTP service with no need to train or maintain your own service. As all the communication is done with JSONs, you can simply provide the co.embed output as Qdrant input. ```python # Putting the co.embed API response directly as Qdrant method input qdrant_client.upsert( collection_name=""collection"", points=rest.Batch( ids=[...], vectors=cohere_client.embed(...).embeddings, payloads=[...], ), ) ``` Both tools are easy to combine, so you can start working with semantic search in a few minutes, not days. And what if your needs are so specific that you need to fine-tune a general usage model? Co.embed API goes beyond pre-trained encoders and allows providing some custom datasets to [customize the embedding model with your own data](https://docs.cohere.com/docs/finetuning). As a result, you get the quality of domain-specific models, but without worrying about infrastructure. ## System architecture overview In real systems, answers get vectorized and stored in an efficient vector search database. We typically don’t even need to provide specific answers, but just use sentences or paragraphs of text and vectorize them instead. Still, if a bit longer piece of text contains the answer to a particular question, its distance to the question embedding should not be that far away. And for sure closer than all the other, non-matching answers. Storing the answer embeddings in a vector database makes the search process way easier. ![Building the database of possible answers. All the texts are converted into their vector embeddings and those embeddings are stored in a vector database, i.e. Qdrant.](/articles_data/qa-with-cohere-and-qdrant/vector-database.png) ## Looking for the correct answer Once our database is working and all the answer embeddings are already in place, we can start querying it. We basically perform the same vectorization on a given question and ask the database to provide some near neighbours. We rely on the embeddings to be close to each other, so we expect the points with the smallest distance in the latent space to contain the proper answer. ![While searching, a question gets vectorized by the same neural encoder. Vector database is a component that looks for the closest answer vectors using i.e. cosine similarity. A proper system, like Qdrant, will make the lookup process more efficient, as it won’t calculate the distance to all the answer embeddings. Thanks to HNSW, it will be able to find the nearest neighbours with sublinear complexity.](/articles_data/qa-with-cohere-and-qdrant/search-with-vector-database.png) ## Implementing the QA search system with SaaS tools We don’t want to maintain our own service for the neural encoder, nor even set up a Qdrant instance. There are SaaS solutions for both — Cohere’s [co.embed API](https://docs.cohere.ai/reference/embed) and [Qdrant Cloud](https://qdrant.to/cloud), so we’ll use them instead of on-premise tools. ### Question Answering on biomedical data We’re going to implement the Question Answering system for the biomedical data. There is a *[pubmed_qa](https://huggingface.co/datasets/pubmed_qa)* dataset, with it *pqa_labeled* subset containing 1,000 examples of questions and answers labelled by domain experts. Our system is going to be fed with the embeddings generated by co.embed API and we’ll load them to Qdrant. Using Qdrant Cloud vs your own instance does not matter much here. There is a subtle difference in how to connect to the cloud instance, but all the other operations are executed in the same way. ```python from datasets import load_dataset # Loading the dataset from HuggingFace hub. It consists of several columns: pubid, # question, context, long_answer and final_decision. For the purposes of our system, # we’ll use question and long_answer. dataset = load_dataset(""pubmed_qa"", ""pqa_labeled"") ``` | **pubid** | **question** | **context** | **long_answer** | **final_decision** | |-----------|---------------------------------------------------|-------------|---------------------------------------------------|--------------------| | 18802997 | Can calprotectin predict relapse risk in infla... | ... | Measuring calprotectin may help to identify UC... | maybe | | 20538207 | Should temperature be monitorized during kidne... | ... | The new storage can affords more stable temper... | no | | 25521278 | Is plate clearing a risk factor for obesity? | ... | The tendency to clear one's plate when eating ... | yes | | 17595200 | Is there an intrauterine influence on obesity? | ... | Comparison of mother-offspring and father-offs.. | no | | 15280782 | Is unsafe sexual behaviour increasing among HI... | ... | There was no evidence of a trend in unsafe sex... | no | ### Using Cohere and Qdrant to build the answers database In order to start generating the embeddings, you need to [create a Cohere account](https://dashboard.cohere.ai/welcome/register). That will start your trial period, so you’ll be able to vectorize the texts for free. Once logged in, your default API key will be available in [Settings](https://dashboard.cohere.ai/api-keys). We’ll need it to call the co.embed API. with the official python package. ```python import cohere cohere_client = cohere.Client(COHERE_API_KEY) # Generating the embeddings with Cohere client library embeddings = cohere_client.embed( texts=[""A test sentence""], model=""large"", ) vector_size = len(embeddings.embeddings[0]) print(vector_size) # output: 4096 ``` Let’s connect to the Qdrant instance first and create a collection with the proper configuration, so we can put some embeddings into it later on. ```python # Connecting to Qdrant Cloud with qdrant-client requires providing the api_key. # If you use an on-premise instance, it has to be skipped. qdrant_client = QdrantClient( host=""xyz-example.eu-central.aws.cloud.qdrant.io"", prefer_grpc=True, api_key=QDRANT_API_KEY, ) ``` Now we’re able to vectorize all the answers. They are going to form our collection, so we can also put them already into Qdrant, along with the payloads and identifiers. That will make our dataset easily searchable. ```python answer_response = cohere_client.embed( texts=dataset[""train""][""long_answer""], model=""large"", ) vectors = [ # Conversion to float is required for Qdrant list(map(float, vector)) for vector in answer_response.embeddings ] ids = [entry[""pubid""] for entry in dataset[""train""]] # Filling up Qdrant collection with the embeddings generated by Cohere co.embed API qdrant_client.upsert( collection_name=""pubmed_qa"", points=rest.Batch( ids=ids, vectors=vectors, payloads=list(dataset[""train""]), ) ) ``` And that’s it. Without even setting up a single server on our own, we created a system that might be easily asked a question. I don’t want to call it serverless, as this term is already taken, but co.embed API with Qdrant Cloud makes everything way easier to maintain. ### Answering the questions with semantic search — the quality It’s high time to query our database with some questions. It might be interesting to somehow measure the quality of the system in general. In those kinds of problems we typically use *top-k accuracy*. We assume the prediction of the system was correct if the correct answer was present in the first *k* results. ```python # Finding the position at which Qdrant provided the expected answer for each question. # That allows to calculate accuracy@k for different values of k. k_max = 10 answer_positions = [] for embedding, pubid in tqdm(zip(question_response.embeddings, ids)): response = qdrant_client.search( collection_name=""pubmed_qa"", query_vector=embedding, limit=k_max, ) answer_ids = [record.id for record in response] if pubid in answer_ids: answer_positions.append(answer_ids.index(pubid)) else: answer_positions.append(-1) ``` Saved answer positions allow us to calculate the metric for different *k* values. ```python # Prepared answer positions are being used to calculate different values of accuracy@k for k in range(1, k_max + 1): correct_answers = len( list( filter(lambda x: 0 <= x < k, answer_positions) ) ) print(f""accuracy@{k} ="", correct_answers / len(dataset[""train""])) ``` Here are the values of the top-k accuracy for different values of k: | **metric** | **value** | |-------------|-----------| | accuracy@1 | 0.877 | | accuracy@2 | 0.921 | | accuracy@3 | 0.942 | | accuracy@4 | 0.950 | | accuracy@5 | 0.956 | | accuracy@6 | 0.960 | | accuracy@7 | 0.964 | | accuracy@8 | 0.971 | | accuracy@9 | 0.976 | | accuracy@10 | 0.977 | It seems like our system worked pretty well even if we consider just the first result, with the lowest distance. We failed with around 12% of questions. But numbers become better with the higher values of k. It might be also valuable to check out what questions our system failed to answer, their perfect match and our guesses. We managed to implement a working Question Answering system within just a few lines of code. If you are fine with the results achieved, then you can start using it right away. Still, if you feel you need a slight improvement, then fine-tuning the model is a way to go. If you want to check out the full source code, it is available on [Google Colab](https://colab.research.google.com/drive/1YOYq5PbRhQ_cjhi6k4t1FnWgQm8jZ6hm?usp=sharing). ",articles/qa-with-cohere-and-qdrant.md "--- title: ""Is RAG Dead? The Role of Vector Databases in Vector Search | Qdrant"" short_description: Learn how Qdrant’s vector database enhances enterprise AI with superior accuracy and cost-effectiveness. description: Uncover the necessity of vector databases for RAG and learn how Qdrant's vector database empowers enterprise AI with unmatched accuracy and cost-effectiveness. social_preview_image: /articles_data/rag-is-dead/preview/social_preview.jpg small_preview_image: /articles_data/rag-is-dead/icon.svg preview_dir: /articles_data/rag-is-dead/preview weight: -131 author: David Myriel author_link: https://github.com/davidmyriel date: 2024-02-27T00:00:00.000Z draft: false keywords: - vector database - vector search - retrieval augmented generation - gemini 1.5 --- # Is RAG Dead? The Role of Vector Databases in AI Efficiency and Vector Search When Anthropic came out with a context window of 100K tokens, they said: “*[Vector search](https://qdrant.tech/solutions/) is dead. LLMs are getting more accurate and won’t need RAG anymore.*” Google’s Gemini 1.5 now offers a context window of 10 million tokens. [Their supporting paper](https://storage.googleapis.com/deepmind-media/gemini/gemini_v1_5_report.pdf) claims victory over accuracy issues, even when applying Greg Kamradt’s [NIAH methodology](https://twitter.com/GregKamradt/status/1722386725635580292). *It’s over. [RAG](https://qdrant.tech/articles/what-is-rag-in-ai/) (Retrieval Augmented Generation) must be completely obsolete now. Right?* No. Larger context windows are never the solution. Let me repeat. Never. They require more computational resources and lead to slower processing times. The community is already stress testing Gemini 1.5: ![RAG and Gemini 1.5](/articles_data/rag-is-dead/rag-is-dead-1.png) This is not surprising. LLMs require massive amounts of compute and memory to run. To cite Grant, running such a model by itself “would deplete a small coal mine to generate each completion”. Also, who is waiting 30 seconds for a response? ## Context stuffing is not the solution > Relying on context is expensive, and it doesn’t improve response quality in real-world applications. Retrieval based on [vector search](https://qdrant.tech/solutions/) offers much higher precision. If you solely rely on an [LLM](https://qdrant.tech/articles/what-is-rag-in-ai/) to perfect retrieval and precision, you are doing it wrong. A large context window makes it harder to focus on relevant information. This increases the risk of errors or hallucinations in its responses. Google found Gemini 1.5 significantly more accurate than GPT-4 at shorter context lengths and “a very small decrease in recall towards 1M tokens”. The recall is still below 0.8. ![Gemini 1.5 Data](/articles_data/rag-is-dead/rag-is-dead-2.png) We don’t think 60-80% is good enough. The LLM might retrieve enough relevant facts in its context window, but it still loses up to 40% of the available information. > The whole point of vector search is to circumvent this process by efficiently picking the information your app needs to generate the best response. A [vector database](https://qdrant.tech/) keeps the compute load low and the query response fast. You don’t need to wait for the LLM at all. Qdrant’s benchmark results are strongly in favor of accuracy and efficiency. We recommend that you consider them before deciding that an LLM is enough. Take a look at our [open-source benchmark reports](/benchmarks/) and [try out the tests](https://github.com/qdrant/vector-db-benchmark) yourself. ## Vector search in compound systems The future of AI lies in careful system engineering. As per [Zaharia et al.](https://bair.berkeley.edu/blog/2024/02/18/compound-ai-systems/), results from Databricks find that “60% of LLM applications use some form of RAG, while 30% use multi-step chains.” Even Gemini 1.5 demonstrates the need for a complex strategy. When looking at [Google’s MMLU Benchmark](https://storage.googleapis.com/deepmind-media/gemini/gemini_v1_5_report.pdf), the model was called 32 times to reach a score of 90.0% accuracy. This shows us that even a basic compound arrangement is superior to monolithic models. As a retrieval system, a [vector database](https://qdrant.tech/) perfectly fits the need for compound systems. Introducing them into your design opens the possibilities for superior applications of LLMs. It is superior because it’s faster, more accurate, and much cheaper to run. > The key advantage of RAG is that it allows an LLM to pull in real-time information from up-to-date internal and external knowledge sources, making it more dynamic and adaptable to new information. - Oliver Molander, CEO of IMAGINAI > ## Qdrant scales to enterprise RAG scenarios People still don’t understand the economic benefit of vector databases. Why would a large corporate AI system need a standalone vector database like [Qdrant](https://qdrant.tech/)? In our minds, this is the most important question. Let’s pretend that LLMs cease struggling with context thresholds altogether. **How much would all of this cost?** If you are running a RAG solution in an enterprise environment with petabytes of private data, your compute bill will be unimaginable. Let's assume 1 cent per 1K input tokens (which is the current GPT-4 Turbo pricing). Whatever you are doing, every time you go 100 thousand tokens deep, it will cost you $1. That’s a buck a question. > According to our estimations, vector search queries are **at least** 100 million times cheaper than queries made by LLMs. Conversely, the only up-front investment with vector databases is the indexing (which requires more compute). After this step, everything else is a breeze. Once setup, Qdrant easily scales via [features like Multitenancy and Sharding](/articles/multitenancy/). This lets you scale up your reliance on the vector retrieval process and minimize your use of the compute-heavy LLMs. As an optimization measure, Qdrant is irreplaceable. Julien Simon from HuggingFace says it best: > RAG is not a workaround for limited context size. For mission-critical enterprise use cases, RAG is a way to leverage high-value, proprietary company knowledge that will never be found in public datasets used for LLM training. At the moment, the best place to index and query this knowledge is some sort of vector index. In addition, RAG downgrades the LLM to a writing assistant. Since built-in knowledge becomes much less important, a nice small 7B open-source model usually does the trick at a fraction of the cost of a huge generic model. ## Get superior accuracy with Qdrant's vector database As LLMs continue to require enormous computing power, users will need to leverage vector search and [RAG](https://qdrant.tech/). Our customers remind us of this fact every day. As a product, [our vector database](https://qdrant.tech/) is highly scalable and business-friendly. We develop our features strategically to follow our company’s Unix philosophy. We want to keep Qdrant compact, efficient and with a focused purpose. This purpose is to empower our customers to use it however they see fit. When large enterprises release their generative AI into production, they need to keep costs under control, while retaining the best possible quality of responses. Qdrant has the [vector search solutions](https://qdrant.tech/solutions/) to do just that. Revolutionize your vector search capabilities and get started with [a Qdrant demo](https://qdrant.tech/contact-us/).",articles/rag-is-dead.md "--- title: ""BM42: New Baseline for Hybrid Search"" short_description: ""Introducing next evolutionary step in lexical search."" description: ""Introducing BM42 - a new sparse embedding approach, which combines the benefits of exact keyword search with the intelligence of transformers."" social_preview_image: /articles_data/bm42/social-preview.jpg preview_dir: /articles_data/bm42/preview weight: -140 author: Andrey Vasnetsov date: 2024-07-01T12:00:00+03:00 draft: false keywords: - hybrid search - sparse embeddings - bm25 --- For the last 40 years, BM25 has served as the standard for search engines. It is a simple yet powerful algorithm that has been used by many search engines, including Google, Bing, and Yahoo. Though it seemed that the advent of vector search would diminish its influence, it did so only partially. The current state-of-the-art approach to retrieval nowadays tries to incorporate BM25 along with embeddings into a hybrid search system. However, the use case of text retrieval has significantly shifted since the introduction of RAG. Many assumptions upon which BM25 was built are no longer valid. For example, the typical length of documents and queries vary significantly between traditional web search and modern RAG systems. In this article, we will recap what made BM25 relevant for so long and why alternatives have struggled to replace it. Finally, we will discuss BM42, as the next step in the evolution of lexical search. ## Why has BM25 stayed relevant for so long? To understand why, we need to analyze its components. The famous BM25 formula is defined as: $$ \text{score}(D,Q) = \sum_{i=1}^{N} \text{IDF}(q_i) \times \frac{f(q_i, D) \cdot (k_1 + 1)}{f(q_i, D) + k_1 \cdot \left(1 - b + b \cdot \frac{|D|}{\text{avgdl}}\right)} $$ Let's simplify this to gain a better understanding. - The $score(D, Q)$ - means that we compute the score for each pair of document $D$ and query $Q$. - The $\sum_{i=1}^{N}$ - means that each of $N$ terms in the query contribute to the final score as a part of the sum. - The $\text{IDF}(q_i)$ - is the inverse document frequency. The more rare the term $q_i$ is, the more it contributes to the score. A simplified formula for this is: $$ \text{IDF}(q_i) = \frac{\text{Number of documents}}{\text{Number of documents with } q_i} $$ It is fair to say that the `IDF` is the most important part of the BM25 formula. `IDF` selects the most important terms in the query relative to the specific document collection. So intuitively, we can interpret the `IDF` as **term importance within the corpora**. That explains why BM25 is so good at handling queries, which dense embeddings consider out-of-domain. The last component of the formula can be intuitively interpreted as **term importance within the document**. This might look a bit complicated, so let's break it down. $$ \text{Term importance in document }(q_i) = \color{red}\frac{f(q_i, D)\color{black} \cdot \color{blue}(k_1 + 1) \color{black} }{\color{red}f(q_i, D)\color{black} + \color{blue}k_1\color{black} \cdot \left(1 - \color{blue}b\color{black} + \color{blue}b\color{black} \cdot \frac{|D|}{\text{avgdl}}\right)} $$ - The $\color{red}f(q_i, D)\color{black}$ - is the frequency of the term $q_i$ in the document $D$. Or in other words, the number of times the term $q_i$ appears in the document $D$. - The $\color{blue}k_1\color{black}$ and $\color{blue}b\color{black}$ are the hyperparameters of the BM25 formula. In most implementations, they are constants set to $k_1=1.5$ and $b=0.75$. Those constants define relative implications of the term frequency and the document length in the formula. - The $\frac{|D|}{\text{avgdl}}$ - is the relative length of the document $D$ compared to the average document length in the corpora. The intuition befind this part is following: if the token is found in the smaller document, it is more likely that this token is important for this document. #### Will BM25 term importance in the document work for RAG? As we can see, the *term importance in the document* heavily depends on the statistics within the document. Moreover, statistics works well if the document is long enough. Therefore, it is suitable for searching webpages, books, articles, etc. However, would it work as well for modern search applications, such as RAG? Let's see. The typical length of a document in RAG is much shorter than that of web search. In fact, even if we are working with webpages and articles, we would prefer to split them into chunks so that a) Dense models can handle them and b) We can pinpoint the exact part of the document which is relevant to the query As a result, the document size in RAG is small and fixed. That effectively renders the term importance in the document part of the BM25 formula useless. The term frequency in the document is always 0 or 1, and the relative length of the document is always 1. So, the only part of the BM25 formula that is still relevant for RAG is `IDF`. Let's see how we can leverage it. ## Why SPLADE is not always the answer Before discussing our new approach, let's examine the current state-of-the-art alternative to BM25 - SPLADE. The idea behind SPLADE is interesting—what if we let a smart, end-to-end trained model generate a bag-of-words representation of the text for us? It will assign all the weights to the tokens, so we won't need to bother with statistics and hyperparameters. The documents are then represented as a sparse embedding, where each token is represented as an element of the sparse vector. And it works in academic benchmarks. Many papers report that SPLADE outperforms BM25 in terms of retrieval quality. This performance, however, comes at a cost. * **Inappropriate Tokenizer**: To incorporate transformers for this task, SPLADE models require using a standard transformer tokenizer. These tokenizers are not designed for retrieval tasks. For example, if the word is not in the (quite limited) vocabulary, it will be either split into subwords or replaced with a `[UNK]` token. This behavior works well for language modeling but is completely destructive for retrieval tasks. * **Expensive Token Expansion**: In order to compensate the tokenization issues, SPLADE uses *token expansion* technique. This means that we generate a set of similar tokens for each token in the query. There are a few problems with this approach: - It is computationally and memory expensive. We need to generate more values for each token in the document, which increases both the storage size and retrieval time. - It is not always clear where to stop with the token expansion. The more tokens we generate, the more likely we are to get the relevant one. But simultaneously, the more tokens we generate, the more likely we are to get irrelevant results. - Token expansion dilutes the interpretability of the search. We can't say which tokens were used in the document and which were generated by the token expansion. * **Domain and Language Dependency**: SPLADE models are trained on specific corpora. This means that they are not always generalizable to new or rare domains. As they don't use any statistics from the corpora, they cannot adapt to the new domain without fine-tuning. * **Inference Time**: Additionally, currently available SPLADE models are quite big and slow. They usually require a GPU to make the inference in a reasonable time. At Qdrant, we acknowledge the aforementioned problems and are looking for a solution. Our idea was to combine the best of both worlds - the simplicity and interpretability of BM25 and the intelligence of transformers while avoiding the pitfalls of SPLADE. And here is what we came up with. ## The best of both worlds As previously mentioned, `IDF` is the most important part of the BM25 formula. In fact it is so important, that we decided to build its calculation into the Qdrant engine itself. Check out our latest [release notes](https://github.com/qdrant/qdrant/releases/tag/v1.10.0). This type of separation allows streaming updates of the sparse embeddings while keeping the `IDF` calculation up-to-date. As for the second part of the formula, *the term importance within the document* needs to be rethought. Since we can't rely on the statistics within the document, we can try to use the semantics of the document instead. And semantics is what transformers are good at. Therefore, we only need to solve two problems: - How does one extract the importance information from the transformer? - How can tokenization issues be avoided? ### Attention is all you need Transformer models, even those used to generate embeddings, generate a bunch of different outputs. Some of those outputs are used to generate embeddings. Others are used to solve other kinds of tasks, such as classification, text generation, etc. The one particularly interesting output for us is the attention matrix. {{< figure src=""/articles_data/bm42/attention-matrix.png"" alt=""Attention matrix"" caption=""Attention matrix"" width=""60%"" >}} The attention matrix is a square matrix, where each row and column corresponds to the token in the input sequence. It represents the importance of each token in the input sequence for each other. The classical transformer models are trained to predict masked tokens in the context, so the attention weights define which context tokens influence the masked token most. Apart from regular text tokens, the transformer model also has a special token called `[CLS]`. This token represents the whole sequence in the classification tasks, which is exactly what we need. By looking at the attention row for the `[CLS]` token, we can get the importance of each token in the document for the whole document. ```python sentences = ""Hello, World - is the starting point in most programming languages"" features = transformer.tokenize(sentences) # ... attentions = transformer.auto_model(**features, output_attentions=True).attentions weights = torch.mean(attentions[-1][0,:,0], axis=0) # ▲ ▲ ▲ ▲ # │ │ │ └─── [CLS] token is the first one # │ │ └─────── First item of the batch # │ └────────── Last transformer layer # └────────────────────────── Averate all 6 attention heads for weight, token in zip(weights, tokens): print(f""{token}: {weight}"") # [CLS] : 0.434 // Filter out the [CLS] token # hello : 0.039 # , : 0.039 # world : 0.107 // <-- The most important token # - : 0.033 # is : 0.024 # the : 0.031 # starting : 0.054 # point : 0.028 # in : 0.018 # most : 0.016 # programming : 0.060 // <-- The third most important token # languages : 0.062 // <-- The second most important token # [SEP] : 0.047 // Filter out the [SEP] token ``` The resulting formula for the BM42 score would look like this: $$ \text{score}(D,Q) = \sum_{i=1}^{N} \text{IDF}(q_i) \times \text{Attention}(\text{CLS}, q_i) $$ Note that classical transformers have multiple attention heads, so we can get multiple importance vectors for the same document. The simplest way to combine them is to simply average them. These averaged attention vectors make up the importance information we were looking for. The best part is, one can get them from any transformer model, without any additional training. Therefore, BM42 can support any natural language as long as there is a transformer model for it. In our implementation, we use the `sentence-transformers/all-MiniLM-L6-v2` model, which gives a huge boost in the inference speed compared to the SPLADE models. In practice, any transformer model can be used. It doesn't require any additional training, and can be easily adapted to work as BM42 backend. ### WordPiece retokenization The final piece of the puzzle we need to solve is the tokenization issue. In order to get attention vectors, we need to use native transformer tokenization. But this tokenization is not suitable for the retrieval tasks. What can we do about it? Actually, the solution we came up with is quite simple. We reverse the tokenization process after we get the attention vectors. Transformers use [WordPiece](https://huggingface.co/learn/nlp-course/en/chapter6/6) tokenization. In case it sees the word, which is not in the vocabulary, it splits it into subwords. Here is how that looks: ```text ""unbelievable"" -> [""un"", ""##believ"", ""##able""] ``` What can merge the subwords back into the words. Luckily, the subwords are marked with the `##` prefix, so we can easily detect them. Since the attention weights are normalized, we can simply sum the attention weights of the subwords to get the attention weight of the word. After that, we can apply the same traditional NLP techniques, as - Removing of the stop-words - Removing of the punctuation - Lemmatization In this way, we can significantly reduce the number of tokens, and therefore minimize the memory footprint of the sparse embeddings. We won't simultaneously compromise the ability to match (almost) exact tokens. ## Practical examples | Trait | BM25 | SPLADE | BM42 | |-------------------------|--------------|--------------|--------------| | Interpretability | High ✅ | Ok 🆗 | High ✅ | | Document Inference speed| Very high ✅ | Slow 🐌 | High ✅ | | Query Inference speed | Very high ✅ | Slow 🐌 | Very high ✅ | | Memory footprint | Low ✅ | High ❌ | Low ✅ | | In-domain accuracy | Ok 🆗 | High ✅ | High ✅ | | Out-of-domain accuracy | Ok 🆗 | Low ❌ | Ok 🆗 | | Small documents accuracy| Low ❌ | High ✅ | High ✅ | | Large documents accuracy| High ✅ | Low ❌ | Ok 🆗 | | Unknown tokens handling | Yes ✅ | Bad ❌ | Yes ✅ | | Multi-lingual support | Yes ✅ | No ❌ | Yes ✅ | | Best Match | Yes ✅ | No ❌ | Yes ✅ | Starting from Qdrant v1.10.0, BM42 can be used in Qdrant via FastEmbed inference. Let's see how you can setup a collection for hybrid search with BM42 and [jina.ai](https://jina.ai/embeddings/) dense embeddings. ```http PUT collections/my-hybrid-collection { ""vectors"": { ""jina"": { ""size"": 768, ""distance"": ""Cosine"" } }, ""sparse_vectors"": { ""bm42"": { ""modifier"": ""idf"" // <--- This parameter enables the IDF calculation } } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient() client.create_collection( collection_name=""my-hybrid-collection"", vectors_config={ ""jina"": models.VectorParams( size=768, distance=models.Distance.COSINE, ) }, sparse_vectors_config={ ""bm42"": models.SparseVectorParams( modifier=models.Modifier.IDF, ) } ) ``` The search query will retrieve the documents with both dense and sparse embeddings and combine the scores using the Reciprocal Rank Fusion (RRF) algorithm. ```python from fastembed import SparseTextEmbedding, TextEmbedding query_text = ""best programming language for beginners?"" model_bm42 = SparseTextEmbedding(model_name=""Qdrant/bm42-all-minilm-l6-v2-attentions"") model_jina = TextEmbedding(model_name=""jinaai/jina-embeddings-v2-base-en"") sparse_embedding = list(embedding_model.query_embed(query_text))[0] dense_embedding = list(model_jina.query_embed(query_text))[0] client.query_points( collection_name=""my-hybrid-collection"", prefetch=[ models.Prefetch(query=sparse_embedding.as_object(), using=""bm42"", limit=10), models.Prefetch(query=dense_embedding.tolist(), using=""jina"", limit=10), ], query=models.FusionQuery(fusion=models.Fusion.RRF), # <--- Combine the scores limit=10 ) ``` ### Benchmarks To prove the point further we have conducted some benchmarks to highlight the cases where BM42 outperforms BM25. Please note, that we didn't intend to make an exhaustive evaluation, as we are presenting a new approach, not a new model. For out experiments we choose [quora](https://huggingface.co/datasets/BeIR/quora) dataset, which represents a question-deduplication task ~~the Question-Answering task~~. The typical example of the dataset is the following: ```text {""_id"": ""109"", ""text"": ""How GST affects the CAs and tax officers?""} {""_id"": ""110"", ""text"": ""Why can't I do my homework?""} {""_id"": ""111"", ""text"": ""How difficult is it get into RSI?""} ``` As you can see, it has pretty short texts, there are not much of the statistics to rely on. After encoding with BM42, the average vector size is only **5.6 elements per document**. With `datatype: uint8` available in Qdrant, the total size of the sparse vector index is about **13MB** for ~530k documents. As a reference point, we use: - BM25 with tantivy - the [sparse vector BM25 implementation](https://github.com/qdrant/bm42_eval/blob/master/index_bm25_qdrant.py) with the same preprocessing pipeline like for BM42: tokenization, stop-words removal, and lemmatization | | BM25 (tantivy) | BM25 (Sparse) | BM42 | |----------------------|-------------------|---------------|----------| | ~~Precision @ 10~~ * | ~~0.45~~ | ~~0.45~~ | ~~0.49~~ | | Recall @ 10 | ~~0.71~~ **0.89** | 0.83 | 0.85 | \* - values were corrected after the publication due to a mistake in the evaluation script. To make our benchmarks transparent, we have published scripts we used for the evaluation: see [github repo](https://github.com/qdrant/bm42_eval). Please note, that both BM25 and BM42 won't work well on their own in a production environment. Best results are achieved with a combination of sparse and dense embeddings in a hybrid approach. In this scenario, the two models are complementary to each other. The sparse model is responsible for exact token matching, while the dense model is responsible for semantic matching. Some more advanced models might outperform default `sentence-transformers/all-MiniLM-L6-v2` model we were using. We encourage developers involved in training embedding models to include a way to extract attention weights and contribute to the BM42 backend. ## Fostering curiosity and experimentation Despite all of its advantages, BM42 is not always a silver bullet. For large documents without chunks, BM25 might still be a better choice. There might be a smarter way to extract the importance information from the transformer. There could be a better method to weigh IDF against attention scores. Qdrant does not specialize in model training. Our core project is the search engine itself. However, we understand that we are not operating in a vacuum. By introducing BM42, we are stepping up to empower our community with novel tools for experimentation. We truly believe that the sparse vectors method is at exact level of abstraction to yield both powerful and flexible results. Many of you are sharing your recent Qdrant projects in our [Discord channel](https://discord.com/invite/qdrant). Feel free to try out BM42 and let us know what you come up with. ",articles/bm42.md "--- title: ""Binary Quantization - Vector Search, 40x Faster "" short_description: ""Binary Quantization is a newly introduced mechanism of reducing the memory footprint and increasing performance"" description: ""Binary Quantization is a newly introduced mechanism of reducing the memory footprint and increasing performance"" social_preview_image: /articles_data/binary-quantization/social_preview.png small_preview_image: /articles_data/binary-quantization/binary-quantization-icon.svg preview_dir: /articles_data/binary-quantization/preview weight: -40 author: Nirant Kasliwal author_link: https://nirantk.com/about/ date: 2023-09-18T13:00:00+03:00 draft: false keywords: - vector search - binary quantization - memory optimization --- # Optimizing High-Dimensional Vectors with Binary Quantization Qdrant is built to handle typical scaling challenges: high throughput, low latency and efficient indexing. **Binary quantization (BQ)** is our latest attempt to give our customers the edge they need to scale efficiently. This feature is particularly excellent for collections with large vector lengths and a large number of points. Our results are dramatic: Using BQ will reduce your memory consumption and improve retrieval speeds by up to 40x. As is the case with other quantization methods, these benefits come at the cost of recall degradation. However, our implementation lets you balance the tradeoff between speed and recall accuracy at time of search, rather than time of index creation. The rest of this article will cover: 1. The importance of binary quantization 2. Basic implementation using our Python client 3. Benchmark analysis and usage recommendations ## What is Binary Quantization? Binary quantization (BQ) converts any vector embedding of floating point numbers into a vector of binary or boolean values. This feature is an extension of our past work on [scalar quantization](/articles/scalar-quantization/) where we convert `float32` to `uint8` and then leverage a specific SIMD CPU instruction to perform fast vector comparison. ![What is binary quantization](/articles_data/binary-quantization/bq-2.png) **This binarization function is how we convert a range to binary values. All numbers greater than zero are marked as 1. If it's zero or less, they become 0.** The benefit of reducing the vector embeddings to binary values is that boolean operations are very fast and need significantly less CPU instructions. In exchange for reducing our 32 bit embeddings to 1 bit embeddings we can see up to a 40x retrieval speed up gain! One of the reasons vector search still works with such a high compression rate is that these large vectors are over-parameterized for retrieval. This is because they are designed for ranking, clustering, and similar use cases, which typically need more information encoded in the vector. For example, The 1536 dimension OpenAI embedding is worse than Open Source counterparts of 384 dimension at retrieval and ranking. Specifically, it scores 49.25 on the same [Embedding Retrieval Benchmark](https://huggingface.co/spaces/mteb/leaderboard) where the Open Source `bge-small` scores 51.82. This 2.57 points difference adds up quite soon. Our implementation of quantization achieves a good balance between full, large vectors at ranking time and binary vectors at search and retrieval time. It also has the ability for you to adjust this balance depending on your use case. ## Faster search and retrieval Unlike product quantization, binary quantization does not rely on reducing the search space for each probe. Instead, we build a binary index that helps us achieve large increases in search speed. ![Speed by quantization method](/articles_data/binary-quantization/bq-3.png) HNSW is the approximate nearest neighbor search. This means our accuracy improves up to a point of diminishing returns, as we check the index for more similar candidates. In the context of binary quantization, this is referred to as the **oversampling rate**. For example, if `oversampling=2.0` and the `limit=100`, then 200 vectors will first be selected using a quantized index. For those 200 vectors, the full 32 bit vector will be used with their HNSW index to a much more accurate 100 item result set. As opposed to doing a full HNSW search, we oversample a preliminary search and then only do the full search on this much smaller set of vectors. ## Improved storage efficiency The following diagram shows the binarization function, whereby we reduce 32 bits storage to 1 bit information. Text embeddings can be over 1024 elements of floating point 32 bit numbers. For example, remember that OpenAI embeddings are 1536 element vectors. This means each vector is 6kB for just storing the vector. ![Improved storage efficiency](/articles_data/binary-quantization/bq-4.png) In addition to storing the vector, we also need to maintain an index for faster search and retrieval. Qdrant’s formula to estimate overall memory consumption is: `memory_size = 1.5 * number_of_vectors * vector_dimension * 4 bytes` For 100K OpenAI Embedding (`ada-002`) vectors we would need 900 Megabytes of RAM and disk space. This consumption can start to add up rapidly as you create multiple collections or add more items to the database. **With binary quantization, those same 100K OpenAI vectors only require 128 MB of RAM.** We benchmarked this result using methods similar to those covered in our [Scalar Quantization memory estimation](/articles/scalar-quantization/#benchmarks). This reduction in RAM usage is achieved through the compression that happens in the binary conversion. HNSW and quantized vectors will live in RAM for quick access, while original vectors can be offloaded to disk only. For searching, quantized HNSW will provide oversampled candidates, then they will be re-evaluated using their disk-stored original vectors to refine the final results. All of this happens under the hood without any additional intervention on your part. ### When should you not use BQ? Since this method exploits the over-parameterization of embedding, you can expect poorer results for small embeddings i.e. less than 1024 dimensions. With the smaller number of elements, there is not enough information maintained in the binary vector to achieve good results. You will still get faster boolean operations and reduced RAM usage, but the accuracy degradation might be too high. ## Sample implementation Now that we have introduced you to binary quantization, let’s try our a basic implementation. In this example, we will be using OpenAI and Cohere with Qdrant. #### Create a collection with Binary Quantization enabled Here is what you should do at indexing time when you create the collection: 1. We store all the ""full"" vectors on disk. 2. Then we set the binary embeddings to be in RAM. By default, both the full vectors and BQ get stored in RAM. We move the full vectors to disk because this saves us memory and allows us to store more vectors in RAM. By doing this, we explicitly move the binary vectors to memory by setting `always_ram=True`. ```python from qdrant_client import QdrantClient #collect to our Qdrant Server client = QdrantClient( url=""http://localhost:6333"", prefer_grpc=True, ) #Create the collection to hold our embeddings # on_disk=True and the quantization_config are the areas to focus on collection_name = ""binary-quantization"" if not client.collection_exists(collection_name): client.create_collection( collection_name=f""{collection_name}"", vectors_config=models.VectorParams( size=1536, distance=models.Distance.DOT, on_disk=True, ), optimizers_config=models.OptimizersConfigDiff( default_segment_number=5, indexing_threshold=0, ), quantization_config=models.BinaryQuantization( binary=models.BinaryQuantizationConfig(always_ram=True), ), ) ``` #### What is happening in the OptimizerConfig? We're setting `indexing_threshold` to 0 i.e. disabling the indexing to zero. This allows faster uploads of vectors and payloads. We will turn it back on down below, once all the data is loaded #### Next, we upload our vectors to this and then enable indexing: ```python batch_size = 10000 client.upload_collection( collection_name=collection_name, ids=range(len(dataset)), vectors=dataset[""openai""], payload=[ {""text"": x} for x in dataset[""text""] ], parallel=10, # based on the machine ) ``` Enable indexing again: ```python client.update_collection( collection_name=f""{collection_name}"", optimizer_config=models.OptimizersConfigDiff( indexing_threshold=20000 ) ) ``` #### Configure the search parameters: When setting search parameters, we specify that we want to use `oversampling` and `rescore`. Here is an example snippet: ```python client.search( collection_name=""{collection_name}"", query_vector=[0.2, 0.1, 0.9, 0.7, ...], search_params=models.SearchParams( quantization=models.QuantizationSearchParams( ignore=False, rescore=True, oversampling=2.0, ) ) ) ``` After Qdrant pulls the oversampled vectors set, the full vectors which will be, say 1536 dimensions for OpenAI will then be pulled up from disk. Qdrant computes the nearest neighbor with the query vector and returns the accurate, rescored order. This method produces much more accurate results. We enabled this by setting `rescore=True`. These two parameters are how you are going to balance speed versus accuracy. The larger the size of your oversample, the more items you need to read from disk and the more elements you have to search with the relatively slower full vector index. On the other hand, doing this will produce more accurate results. If you have lower accuracy requirements you can even try doing a small oversample without rescoring. Or maybe, for your data set combined with your accuracy versus speed requirements you can just search the binary index and no rescoring, i.e. leaving those two parameters out of the search query. ## Benchmark results We retrieved some early results on the relationship between limit and oversampling using the the DBPedia OpenAI 1M vector dataset. We ran all these experiments on a Qdrant instance where 100K vectors were indexed and used 100 random queries. We varied the 3 parameters that will affect query time and accuracy: limit, rescore and oversampling. We offer these as an initial exploration of this new feature. You are highly encouraged to reproduce these experiments with your data sets. > Aside: Since this is a new innovation in vector databases, we are keen to hear feedback and results. [Join our Discord server](https://discord.gg/Qy6HCJK9Dc) for further discussion! **Oversampling:** In the figure below, we illustrate the relationship between recall and number of candidates: ![Correct vs candidates](/articles_data/binary-quantization/bq-5.png) We see that ""correct"" results i.e. recall increases as the number of potential ""candidates"" increase (limit x oversampling). To highlight the impact of changing the `limit`, different limit values are broken apart into different curves. For example, we see that the lowest recall for limit 50 is around 94 correct, with 100 candidates. This also implies we used an oversampling of 2.0 As oversampling increases, we see a general improvement in results – but that does not hold in every case. **Rescore:** As expected, rescoring increases the time it takes to return a query. We also repeated the experiment with oversampling except this time we looked at how rescore impacted result accuracy. ![Relationship between limit and rescore on correct](/articles_data/binary-quantization/bq-7.png) **Limit:** We experiment with limits from Top 1 to Top 50 and we are able to get to 100% recall at limit 50, with rescore=True, in an index with 100K vectors. ## Recommendations Quantization gives you the option to make tradeoffs against other parameters: Dimension count/embedding size Throughput and Latency requirements Recall requirements If you're working with OpenAI or Cohere embeddings, we recommend the following oversampling settings: |Method|Dimensionality|Test Dataset|Recall|Oversampling| |-|-|-|-|-| |OpenAI text-embedding-3-large|3072|[DBpedia 1M](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-large-3072-1M) | 0.9966|3x| |OpenAI text-embedding-3-small|1536|[DBpedia 100K](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-small-1536-100K)| 0.9847|3x| |OpenAI text-embedding-3-large|1536|[DBpedia 1M](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-large-1536-1M)| 0.9826|3x| |Cohere AI embed-english-v2.0|4096|[Wikipedia](https://huggingface.co/datasets/nreimers/wikipedia-22-12-large/tree/main) 1M|0.98|2x| |OpenAI text-embedding-ada-002|1536|[DbPedia 1M](https://huggingface.co/datasets/KShivendu/dbpedia-entities-openai-1M) |0.98|4x| |Gemini|768|No Open Data| 0.9563|3x| |Mistral Embed|768|No Open Data| 0.9445 |3x| If you determine that binary quantization is appropriate for your datasets and queries then we suggest the following: - Binary Quantization with always_ram=True - Vectors stored on disk - Oversampling=2.0 (or more) - Rescore=True ## What's next? Binary quantization is exceptional if you need to work with large volumes of data under high recall expectations. You can try this feature either by spinning up a [Qdrant container image](https://hub.docker.com/r/qdrant/qdrant) locally or, having us create one for you through a [free account](https://cloud.qdrant.io/login) in our cloud hosted service. The article gives examples of data sets and configuration you can use to get going. Our documentation covers [adding large datasets to Qdrant](/documentation/tutorials/bulk-upload/) to your Qdrant instance as well as [more quantization methods](/documentation/guides/quantization/). If you have any feedback, drop us a note on Twitter or LinkedIn to tell us about your results. [Join our lively Discord Server](https://discord.gg/Qy6HCJK9Dc) if you want to discuss BQ with like-minded people! ",articles/binary-quantization.md "--- title: Introducing Qdrant 0.11 short_description: Check out what's new in Qdrant 0.11 description: Replication support is the most important change introduced by Qdrant 0.11. Check out what else has been added! preview_dir: /articles_data/qdrant-0-11-release/preview small_preview_image: /articles_data/qdrant-0-11-release/announcement-svgrepo-com.svg social_preview_image: /articles_data/qdrant-0-11-release/preview/social_preview.jpg weight: 65 author: Kacper Łukawski author_link: https://medium.com/@lukawskikacper date: 2022-10-26T13:55:00+02:00 draft: false --- We are excited to [announce the release of Qdrant v0.11](https://github.com/qdrant/qdrant/releases/tag/v0.11.0), which introduces a number of new features and improvements. ## Replication One of the key features in this release is replication support, which allows Qdrant to provide a high availability setup with distributed deployment out of the box. This, combined with sharding, enables you to horizontally scale both the size of your collections and the throughput of your cluster. This means that you can use Qdrant to handle large amounts of data without sacrificing performance or reliability. ## Administration API Another new feature is the administration API, which allows you to disable write operations to the service. This is useful in situations where search availability is more critical than updates, and can help prevent issues like memory usage watermarks from affecting your searches. ## Exact search We have also added the ability to report indexed payload points in the info API, which allows you to verify that payload values were properly formatted for indexing. In addition, we have introduced a new `exact` search parameter that allows you to force exact searches of vectors, even if an ANN index is built. This can be useful for validating the accuracy of your HNSW configuration. ## Backward compatibility This release is backward compatible with v0.10.5 storage in single node deployment, but unfortunately, distributed deployment is not compatible with previous versions due to the large number of changes required for the replica set implementation. However, clients are tested for backward compatibility with the v0.10.x service. ",articles/qdrant-0-11-release.md "--- title: Finding errors in datasets with Similarity Search short_description: Finding errors datasets with distance-based methods description: Improving quality of text-and-images datasets on the online furniture marketplace example. preview_dir: /articles_data/dataset-quality/preview social_preview_image: /articles_data/dataset-quality/preview/social_preview.jpg small_preview_image: /articles_data/dataset-quality/icon.svg weight: 8 author: George Panchuk author_link: https://medium.com/@george.panchuk date: 2022-07-18T10:18:00.000Z # aliases: [ /articles/dataset-quality/ ] --- Nowadays, people create a huge number of applications of various types and solve problems in different areas. Despite such diversity, they have something in common - they need to process data. Real-world data is a living structure, it grows day by day, changes a lot and becomes harder to work with. In some cases, you need to categorize or label your data, which can be a tough problem given its scale. The process of splitting or labelling is error-prone and these errors can be very costly. Imagine that you failed to achieve the desired quality of the model due to inaccurate labels. Worse, your users are faced with a lot of irrelevant items, unable to find what they need and getting annoyed by it. Thus, you get poor retention, and it directly impacts company revenue. It is really important to avoid such errors in your data. ## Furniture web-marketplace Let’s say you work on an online furniture marketplace. {{< figure src=https://storage.googleapis.com/demo-dataset-quality-public/article/furniture_marketplace.png caption=""Furniture marketplace"" >}} In this case, to ensure a good user experience, you need to split items into different categories: tables, chairs, beds, etc. One can arrange all the items manually and spend a lot of money and time on this. There is also another way: train a classification or similarity model and rely on it. With both approaches it is difficult to avoid mistakes. Manual labelling is a tedious task, but it requires concentration. Once you got distracted or your eyes became blurred mistakes won't keep you waiting. The model also can be wrong. You can analyse the most uncertain predictions and fix them, but the other errors will still leak to the site. There is no silver bullet. You should validate your dataset thoroughly, and you need tools for this. When you are sure that there are not many objects placed in the wrong category, they can be considered outliers or anomalies. Thus, you can train a model or a bunch of models capable of looking for anomalies, e.g. autoencoder and a classifier on it. However, this is again a resource-intensive task, both in terms of time and manual labour, since labels have to be provided for classification. On the contrary, if the proportion of out-of-place elements is high enough, outlier search methods are likely to be useless. ### Similarity search The idea behind similarity search is to measure semantic similarity between related parts of the data. E.g. between category title and item images. The hypothesis is, that unsuitable items will be less similar. We can't directly compare text and image data. For this we need an intermediate representation - embeddings. Embeddings are just numeric vectors containing semantic information. We can apply a pre-trained model to our data to produce these vectors. After embeddings are created, we can measure the distances between them. Assume we want to search for something other than a single bed in «Single beds» category. {{< figure src=https://storage.googleapis.com/demo-dataset-quality-public/article/similarity_search.png caption=""Similarity search"" >}} One of the possible pipelines would look like this: - Take the name of the category as an anchor and calculate the anchor embedding. - Calculate embeddings for images of each object placed into this category. - Compare obtained anchor and object embeddings. - Find the furthest. For instance, we can do it with the [CLIP](https://huggingface.co/sentence-transformers/clip-ViT-B-32-multilingual-v1) model. {{< figure src=https://storage.googleapis.com/demo-dataset-quality-public/article/category_vs_image_transparent.png caption=""Category vs. Image"" >}} We can also calculate embeddings for titles instead of images, or even for both of them to find more errors. {{< figure src=https://storage.googleapis.com/demo-dataset-quality-public/article/category_vs_name_and_image_transparent.png caption=""Category vs. Title and Image"" >}} As you can see, different approaches can find new errors or the same ones. Stacking several techniques or even the same techniques with different models may provide better coverage. Hint: Caching embeddings for the same models and reusing them among different methods can significantly speed up your lookup. ### Diversity search Since pre-trained models have only general knowledge about the data, they can still leave some misplaced items undetected. You might find yourself in a situation when the model focuses on non-important features, selects a lot of irrelevant elements, and fails to find genuine errors. To mitigate this issue, you can perform a diversity search. Diversity search is a method for finding the most distinctive examples in the data. As similarity search, it also operates on embeddings and measures the distances between them. The difference lies in deciding which point should be extracted next. Let's imagine how to get 3 points with similarity search and then with diversity search. Similarity: 1. Calculate distance matrix 2. Choose your anchor 3. Get a vector corresponding to the distances from the selected anchor from the distance matrix 4. Sort fetched vector 5. Get top-3 embeddings Diversity: 1. Calculate distance matrix 2. Initialize starting point (randomly or according to the certain conditions) 3. Get a distance vector for the selected starting point from the distance matrix 4. Find the furthest point 5. Get a distance vector for the new point 6. Find the furthest point from all of already fetched points {{< figure src=https://storage.googleapis.com/demo-dataset-quality-public/article/diversity_transparent.png caption=""Diversity search"" >}} Diversity search utilizes the very same embeddings, and you can reuse them. If your data is huge and does not fit into memory, vector search engines like [Qdrant](https://github.com/qdrant/qdrant) might be helpful. Although the described methods can be used independently. But they are simple to combine and improve detection capabilities. If the quality remains insufficient, you can fine-tune the models using a similarity learning approach (e.g. with [Quaterion](https://quaterion.qdrant.tech) both to provide a better representation of your data and pull apart dissimilar objects in space. ## Conclusion In this article, we enlightened distance-based methods to find errors in categorized datasets. Showed how to find incorrectly placed items in the furniture web store. I hope these methods will help you catch sneaky samples leaked into the wrong categories in your data, and make your users` experience more enjoyable. Poke the [demo](https://dataset-quality.qdrant.tech). Stay tuned :) ",articles/dataset-quality.md "--- title: ""What is a Sparse Vector? How to Achieve Vector-based Hybrid Search"" short_description: ""Discover sparse vectors, their function, and significance in modern data processing, including methods like SPLADE for efficient use."" description: ""Learn what sparse vectors are, how they work, and their importance in modern data processing. Explore methods like SPLADE for creating and leveraging sparse vectors efficiently."" social_preview_image: /articles_data/sparse-vectors/social_preview.png small_preview_image: /articles_data/sparse-vectors/sparse-vectors-icon.svg preview_dir: /articles_data/sparse-vectors/preview weight: -100 author: Nirant Kasliwal author_link: https://nirantk.com/about date: 2023-12-09T13:00:00+03:00 draft: false keywords: - sparse vectors - SPLADE - hybrid search - vector search --- Think of a library with a vast index card system. Each index card only has a few keywords marked out (sparse vector) of a large possible set for each book (document). This is what sparse vectors enable for text. ## What are sparse and dense vectors? Sparse vectors are like the Marie Kondo of data—keeping only what sparks joy (or relevance, in this case). Consider a simplified example of 2 documents, each with 200 words. A dense vector would have several hundred non-zero values, whereas a sparse vector could have, much fewer, say only 20 non-zero values. In this example: We assume it selects only 2 words or tokens from each document. The rest of the values are zero. This is why it's called a sparse vector. ```python dense = [0.2, 0.3, 0.5, 0.7, ...] # several hundred floats sparse = [{331: 0.5}, {14136: 0.7}] # 20 key value pairs ``` The numbers 331 and 14136 map to specific tokens in the vocabulary e.g. `['chocolate', 'icecream']`. The rest of the values are zero. This is why it's called a sparse vector. The tokens aren't always words though, sometimes they can be sub-words: `['ch', 'ocolate']` too. They're pivotal in information retrieval, especially in ranking and search systems. BM25, a standard ranking function used by search engines like [Elasticsearch](https://www.elastic.co/blog/practical-bm25-part-2-the-bm25-algorithm-and-its-variables?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors), exemplifies this. BM25 calculates the relevance of documents to a given search query. BM25's capabilities are well-established, yet it has its limitations. BM25 relies solely on the frequency of words in a document and does not attempt to comprehend the meaning or the contextual importance of the words. Additionally, it requires the computation of the entire corpus's statistics in advance, posing a challenge for large datasets. Sparse vectors harness the power of neural networks to surmount these limitations while retaining the ability to query exact words and phrases. They excel in handling large text data, making them crucial in modern data processing a and marking an advancement over traditional methods such as BM25. # Understanding sparse vectors Sparse Vectors are a representation where each dimension corresponds to a word or subword, greatly aiding in interpreting document rankings. This clarity is why sparse vectors are essential in modern search and recommendation systems, complimenting the meaning-rich embedding or dense vectors. Dense vectors from models like OpenAI Ada-002 or Sentence Transformers contain non-zero values for every element. In contrast, sparse vectors focus on relative word weights per document, with most values being zero. This results in a more efficient and interpretable system, especially in text-heavy applications like search. Sparse Vectors shine in domains and scenarios where many rare keywords or specialized terms are present. For example, in the medical domain, many rare terms are not present in the general vocabulary, so general-purpose dense vectors cannot capture the nuances of the domain. | Feature | Sparse Vectors | Dense Vectors | |---------------------------|---------------------------------------------|----------------------------------------------| | **Data Representation** | Majority of elements are zero | All elements are non-zero | | **Computational Efficiency** | Generally higher, especially in operations involving zero elements | Lower, as operations are performed on all elements | | **Information Density** | Less dense, focuses on key features | Highly dense, capturing nuanced relationships | | **Example Applications** | Text search, Hybrid search | [RAG](https://qdrant.tech/articles/what-is-rag-in-ai/), many general machine learning tasks | Where do sparse vectors fail though? They're not great at capturing nuanced relationships between words. For example, they can't capture the relationship between ""king"" and ""queen"" as well as dense vectors. # SPLADE Let's check out [SPLADE](https://europe.naverlabs.com/research/computer-science/splade-a-sparse-bi-encoder-bert-based-model-achieves-effective-and-efficient-full-text-document-ranking/?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors), an excellent way to make sparse vectors. Let's look at some numbers first. Higher is better: | Model | MRR@10 (MS MARCO Dev) | Type | |--------------------|---------|----------------| | BM25 | 0.184 | Sparse | | TCT-ColBERT | 0.359 | Dense | | doc2query-T5 [link](https://github.com/castorini/docTTTTTquery) | 0.277 | Sparse | | SPLADE | 0.322 | Sparse | | SPLADE-max | 0.340 | Sparse | | SPLADE-doc | 0.322 | Sparse | | DistilSPLADE-max | 0.368 | Sparse | All numbers are from [SPLADEv2](https://arxiv.org/abs/2109.10086). MRR is [Mean Reciprocal Rank](https://www.wikiwand.com/en/Mean_reciprocal_rank#References), a standard metric for ranking. [MS MARCO](https://microsoft.github.io/MSMARCO-Passage-Ranking/?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors) is a dataset for evaluating ranking and retrieval for passages. SPLADE is quite flexible as a method, with regularization knobs that can be tuned to obtain [different models](https://github.com/naver/splade) as well: > SPLADE is more a class of models rather than a model per se: depending on the regularization magnitude, we can obtain different models (from very sparse to models doing intense query/doc expansion) with different properties and performance. First, let's look at how to create a sparse vector. Then, we'll look at the concepts behind SPLADE. ## Creating a sparse vector We'll explore two different ways to create a sparse vector. The higher performance way to create a sparse vector from dedicated document and query encoders. We'll look at a simpler approach -- here we will use the same model for both document and query. We will get a dictionary of token ids and their corresponding weights for a sample text - representing a document. If you'd like to follow along, here's a [Colab Notebook](https://colab.research.google.com/gist/NirantK/ad658be3abefc09b17ce29f45255e14e/splade-single-encoder.ipynb), [alternate link](https://gist.github.com/NirantK/ad658be3abefc09b17ce29f45255e14e) with all the code. ### Setting Up ```python from transformers import AutoModelForMaskedLM, AutoTokenizer model_id = ""naver/splade-cocondenser-ensembledistil"" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForMaskedLM.from_pretrained(model_id) text = """"""Arthur Robert Ashe Jr. (July 10, 1943 – February 6, 1993) was an American professional tennis player. He won three Grand Slam titles in singles and two in doubles."""""" ``` ### Computing the sparse vector ```python import torch def compute_vector(text): """""" Computes a vector from logits and attention mask using ReLU, log, and max operations. """""" tokens = tokenizer(text, return_tensors=""pt"") output = model(**tokens) logits, attention_mask = output.logits, tokens.attention_mask relu_log = torch.log(1 + torch.relu(logits)) weighted_log = relu_log * attention_mask.unsqueeze(-1) max_val, _ = torch.max(weighted_log, dim=1) vec = max_val.squeeze() return vec, tokens vec, tokens = compute_vector(text) print(vec.shape) ``` You'll notice that there are 38 tokens in the text based on this tokenizer. This will be different from the number of tokens in the vector. In a TF-IDF, we'd assign weights only to these tokens or words. In SPLADE, we assign weights to all the tokens in the vocabulary using this vector using our learned model. ## Term expansion and weights ```python def extract_and_map_sparse_vector(vector, tokenizer): """""" Extracts non-zero elements from a given vector and maps these elements to their human-readable tokens using a tokenizer. The function creates and returns a sorted dictionary where keys are the tokens corresponding to non-zero elements in the vector, and values are the weights of these elements, sorted in descending order of weights. This function is useful in NLP tasks where you need to understand the significance of different tokens based on a model's output vector. It first identifies non-zero values in the vector, maps them to tokens, and sorts them by weight for better interpretability. Args: vector (torch.Tensor): A PyTorch tensor from which to extract non-zero elements. tokenizer: The tokenizer used for tokenization in the model, providing the mapping from tokens to indices. Returns: dict: A sorted dictionary mapping human-readable tokens to their corresponding non-zero weights. """""" # Extract indices and values of non-zero elements in the vector cols = vector.nonzero().squeeze().cpu().tolist() weights = vector[cols].cpu().tolist() # Map indices to tokens and create a dictionary idx2token = {idx: token for token, idx in tokenizer.get_vocab().items()} token_weight_dict = { idx2token[idx]: round(weight, 2) for idx, weight in zip(cols, weights) } # Sort the dictionary by weights in descending order sorted_token_weight_dict = { k: v for k, v in sorted( token_weight_dict.items(), key=lambda item: item[1], reverse=True ) } return sorted_token_weight_dict # Usage example sorted_tokens = extract_and_map_sparse_vector(vec, tokenizer) sorted_tokens ``` There will be 102 sorted tokens in total. This has expanded to include tokens that weren't in the original text. This is the term expansion we will talk about next. Here are some terms that are added: ""Berlin"", and ""founder"" - despite having no mention of Arthur's race (which leads to Owen's Berlin win) and his work as the founder of Arthur Ashe Institute for Urban Health. Here are the top few `sorted_tokens` with a weight of more than 1: ```python { ""ashe"": 2.95, ""arthur"": 2.61, ""tennis"": 2.22, ""robert"": 1.74, ""jr"": 1.55, ""he"": 1.39, ""founder"": 1.36, ""doubles"": 1.24, ""won"": 1.22, ""slam"": 1.22, ""died"": 1.19, ""singles"": 1.1, ""was"": 1.07, ""player"": 1.06, ""titles"": 0.99, ... } ``` If you're interested in using the higher-performance approach, check out the following models: 1. [naver/efficient-splade-VI-BT-large-doc](https://huggingface.co/naver/efficient-splade-vi-bt-large-doc) 2. [naver/efficient-splade-VI-BT-large-query](https://huggingface.co/naver/efficient-splade-vi-bt-large-doc) ## Why SPLADE works: term expansion Consider a query ""solar energy advantages"". SPLADE might expand this to include terms like ""renewable,"" ""sustainable,"" and ""photovoltaic,"" which are contextually relevant but not explicitly mentioned. This process is called term expansion, and it's a key component of SPLADE. SPLADE learns the query/document expansion to include other relevant terms. This is a crucial advantage over other sparse methods which include the exact word, but completely miss the contextually relevant ones. This expansion has a direct relationship with what we can control when making a SPLADE model: Sparsity via Regularisation. The number of tokens (BERT wordpieces) we use to represent each document. If we use more tokens, we can represent more terms, but the vectors become denser. This number is typically between 20 to 200 per document. As a reference point, the dense BERT vector is 768 dimensions, OpenAI Embedding is 1536 dimensions, and the sparse vector is 30 dimensions. For example, assume a 1M document corpus. Say, we use 100 sparse token ids + weights per document. Correspondingly, dense BERT vector would be 768M floats, the OpenAI Embedding would be 1.536B floats, and the sparse vector would be a maximum of 100M integers + 100M floats. This could mean a **10x reduction in memory usage**, which is a huge win for large-scale systems: | Vector Type | Memory (GB) | |-------------------|-------------------------| | Dense BERT Vector | 6.144 | | OpenAI Embedding | 12.288 | | Sparse Vector | 1.12 | ## How SPLADE works: leveraging BERT SPLADE leverages a transformer architecture to generate sparse representations of documents and queries, enabling efficient retrieval. Let's dive into the process. The output logits from the transformer backbone are inputs upon which SPLADE builds. The transformer architecture can be something familiar like BERT. Rather than producing dense probability distributions, SPLADE utilizes these logits to construct sparse vectors—think of them as a distilled essence of tokens, where each dimension corresponds to a term from the vocabulary and its associated weight in the context of the given document or query. This sparsity is critical; it mirrors the probability distributions from a typical [Masked Language Modeling](http://jalammar.github.io/illustrated-bert/?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors) task but is tuned for retrieval effectiveness, emphasizing terms that are both: 1. Contextually relevant: Terms that represent a document well should be given more weight. 2. Discriminative across documents: Terms that a document has, and other documents don't, should be given more weight. The token-level distributions that you'd expect in a standard transformer model are now transformed into token-level importance scores in SPLADE. These scores reflect the significance of each term in the context of the document or query, guiding the model to allocate more weight to terms that are likely to be more meaningful for retrieval purposes. The resulting sparse vectors are not only memory-efficient but also tailored for precise matching in the high-dimensional space of a search engine like Qdrant. ## Interpreting SPLADE A downside of dense vectors is that they are not interpretable, making it difficult to understand why a document is relevant to a query. SPLADE importance estimation can provide insights into the 'why' behind a document's relevance to a query. By shedding light on which tokens contribute most to the retrieval score, SPLADE offers some degree of interpretability alongside performance, a rare feat in the realm of neural IR systems. For engineers working on search, this transparency is invaluable. ## Known limitations of SPLADE ### Pooling strategy The switch to max pooling in SPLADE improved its performance on the MS MARCO and TREC datasets. However, this indicates a potential limitation of the baseline SPLADE pooling method, suggesting that SPLADE's performance is sensitive to the choice of pooling strategy​​. ### Document and query Eecoder The SPLADE model variant that uses a document encoder with max pooling but no query encoder reaches the same performance level as the prior SPLADE model. This suggests a limitation in the necessity of a query encoder, potentially affecting the efficiency of the model​​. ## Other sparse vector methods SPLADE is not the only method to create sparse vectors. Essentially, sparse vectors are a superset of TF-IDF and BM25, which are the most popular text retrieval methods. In other words, you can create a sparse vector using the term frequency and inverse document frequency (TF-IDF) to reproduce the BM25 score exactly. Additionally, attention weights from Sentence Transformers can be used to create sparse vectors. This method preserves the ability to query exact words and phrases but avoids the computational overhead of query expansion used in SPLADE. We will cover these methods in detail in a future article. ## Leveraging sparse vectors in Qdrant for hybrid search Qdrant supports a separate index for Sparse Vectors. This enables you to use the same collection for both dense and sparse vectors. Each ""Point"" in Qdrant can have both dense and sparse vectors. But let's first take a look at how you can work with sparse vectors in Qdrant. ## Practical implementation in Python Let's dive into how Qdrant handles sparse vectors with an example. Here is what we will cover: 1. Setting Up Qdrant Client: Initially, we establish a connection with Qdrant using the QdrantClient. This setup is crucial for subsequent operations. 2. Creating a Collection with Sparse Vector Support: In Qdrant, a collection is a container for your vectors. Here, we create a collection specifically designed to support sparse vectors. This is done using the create_collection method where we define the parameters for sparse vectors, such as setting the index configuration. 3. Inserting Sparse Vectors: Once the collection is set up, we can insert sparse vectors into it. This involves defining the sparse vector with its indices and values, and then upserting this point into the collection. 4. Querying with Sparse Vectors: To perform a search, we first prepare a query vector. This involves computing the vector from a query text and extracting its indices and values. We then use these details to construct a query against our collection. 5. Retrieving and Interpreting Results: The search operation returns results that include the id of the matching document, its score, and other relevant details. The score is a crucial aspect, reflecting the similarity between the query and the documents in the collection. ### 1. Set up ```python # Qdrant client setup client = QdrantClient("":memory:"") # Define collection name COLLECTION_NAME = ""example_collection"" # Insert sparse vector into Qdrant collection point_id = 1 # Assign a unique ID for the point ``` ### 2. Create a collection with sparse vector support ```python client.create_collection( collection_name=COLLECTION_NAME, vectors_config={}, sparse_vectors_config={ ""text"": models.SparseVectorParams( index=models.SparseIndexParams( on_disk=False, ) ) }, ) ``` ### 3. Insert sparse vectors Here, we see the process of inserting a sparse vector into the Qdrant collection. This step is key to building a dataset that can be quickly retrieved in the first stage of the retrieval process, utilizing the efficiency of sparse vectors. Since this is for demonstration purposes, we insert only one point with Sparse Vector and no dense vector. ```python client.upsert( collection_name=COLLECTION_NAME, points=[ models.PointStruct( id=point_id, payload={}, # Add any additional payload if necessary vector={ ""text"": models.SparseVector( indices=indices.tolist(), values=values.tolist() ) }, ) ], ) ``` By upserting points with sparse vectors, we prepare our dataset for rapid first-stage retrieval, laying the groundwork for subsequent detailed analysis using dense vectors. Notice that we use ""text"" to denote the name of the sparse vector. Those familiar with the Qdrant API will notice that the extra care taken to be consistent with the existing named vectors API -- this is to make it easier to use sparse vectors in existing codebases. As always, you're able to **apply payload filters**, shard keys, and other advanced features you've come to expect from Qdrant. To make things easier for you, the indices and values don't have to be sorted before upsert. Qdrant will sort them when the index is persisted e.g. on disk. ### 4. Query with sparse vectors We use the same process to prepare a query vector as well. This involves computing the vector from a query text and extracting its indices and values. We then use these details to construct a query against our collection. ```python # Preparing a query vector query_text = ""Who was Arthur Ashe?"" query_vec, query_tokens = compute_vector(query_text) query_vec.shape query_indices = query_vec.nonzero().numpy().flatten() query_values = query_vec.detach().numpy()[indices] ``` In this example, we use the same model for both document and query. This is not a requirement, but it's a simpler approach. ### 5. Retrieve and interpret results After setting up the collection and inserting sparse vectors, the next critical step is retrieving and interpreting the results. This process involves executing a search query and then analyzing the returned results. ```python # Searching for similar documents result = client.search( collection_name=COLLECTION_NAME, query_vector=models.NamedSparseVector( name=""text"", vector=models.SparseVector( indices=query_indices, values=query_values, ), ), with_vectors=True, ) result ``` In the above code, we execute a search against our collection using the prepared sparse vector query. The `client.search` method takes the collection name and the query vector as inputs. The query vector is constructed using the `models.NamedSparseVector`, which includes the indices and values derived from the query text. This is a crucial step in efficiently retrieving relevant documents. ```python ScoredPoint( id=1, version=0, score=3.4292831420898438, payload={}, vector={ ""text"": SparseVector( indices=[2001, 2002, 2010, 2018, 2032, ...], values=[ 1.0660614967346191, 1.391068458557129, 0.8903818726539612, 0.2502821087837219, ..., ], ) }, ) ``` The result, as shown above, is a `ScoredPoint` object containing the ID of the retrieved document, its version, a similarity score, and the sparse vector. The score is a key element as it quantifies the similarity between the query and the document, based on their respective vectors. To understand how this scoring works, we use the familiar dot product method: $$\text{Similarity}(\text{Query}, \text{Document}) = \sum_{i \in I} \text{Query}_i \times \text{Document}_i$$ This formula calculates the similarity score by multiplying corresponding elements of the query and document vectors and summing these products. This method is particularly effective with sparse vectors, where many elements are zero, leading to a computationally efficient process. The higher the score, the greater the similarity between the query and the document, making it a valuable metric for assessing the relevance of the retrieved documents. ## Hybrid search: combining sparse and dense vectors By combining search results from both dense and sparse vectors, you can achieve a hybrid search that is both efficient and accurate. Results from sparse vectors will guarantee, that all results with the required keywords are returned, while dense vectors will cover the semantically similar results. The mixture of dense and sparse results can be presented directly to the user, or used as a first stage of a two-stage retrieval process. Let's see how you can make a hybrid search query in Qdrant. First, you need to create a collection with both dense and sparse vectors: ```python client.create_collection( collection_name=COLLECTION_NAME, vectors_config={ ""text-dense"": models.VectorParams( size=1536, # OpenAI Embeddings distance=models.Distance.COSINE, ) }, sparse_vectors_config={ ""text-sparse"": models.SparseVectorParams( index=models.SparseIndexParams( on_disk=False, ) ) }, ) ``` Then, assuming you have upserted both dense and sparse vectors, you can query them together: ```python query_text = ""Who was Arthur Ashe?"" # Compute sparse and dense vectors query_indices, query_values = compute_sparse_vector(query_text) query_dense_vector = compute_dense_vector(query_text) client.search_batch( collection_name=COLLECTION_NAME, requests=[ models.SearchRequest( vector=models.NamedVector( name=""text-dense"", vector=query_dense_vector, ), limit=10, ), models.SearchRequest( vector=models.NamedSparseVector( name=""text-sparse"", vector=models.SparseVector( indices=query_indices, values=query_values, ), ), limit=10, ), ], ) ``` The result will be a pair of result lists, one for dense and one for sparse vectors. Having those results, there are several ways to combine them: ### Mixing or fusion You can mix the results from both dense and sparse vectors, based purely on their relative scores. This is a simple and effective approach, but it doesn't take into account the semantic similarity between the results. Among the [popular mixing methods](https://medium.com/plain-simple-software/distribution-based-score-fusion-dbsf-a-new-approach-to-vector-search-ranking-f87c37488b18) are: - Reciprocal Ranked Fusion (RRF) - Relative Score Fusion (RSF) - Distribution-Based Score Fusion (DBSF) {{< figure src=/articles_data/sparse-vectors/mixture.png caption=""Relative Score Fusion"" width=80% >}} [Ranx](https://github.com/AmenRa/ranx) is a great library for mixing results from different sources. ### Re-ranking You can use obtained results as a first stage of a two-stage retrieval process. In the second stage, you can re-rank the results from the first stage using a more complex model, such as [Cross-Encoders](https://www.sbert.net/examples/applications/cross-encoder/README.html) or services like [Cohere Rerank](https://txt.cohere.com/rerank/). And that's it! You've successfully achieved hybrid search with Qdrant! ## Additional resources For those who want to dive deeper, here are the top papers on the topic most of which have code available: 1. Problem Motivation: [Sparse Overcomplete Word Vector Representations](https://ar5iv.org/abs/1506.02004?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors) 1. [SPLADE v2: Sparse Lexical and Expansion Model for Information Retrieval](https://ar5iv.org/abs/2109.10086?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors) 1. [SPLADE: Sparse Lexical and Expansion Model for First Stage Ranking](https://ar5iv.org/abs/2107.05720?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors) 1. Late Interaction - [ColBERTv2: Effective and Efficient Retrieval via Lightweight Late Interaction](https://ar5iv.org/abs/2112.01488?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors) 1. [SparseEmbed: Learning Sparse Lexical Representations with Contextual Embeddings for Retrieval](https://research.google/pubs/pub52289/?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors) **Why just read when you can try it out?** We've packed an easy-to-use Colab for you on how to make a Sparse Vector: [Sparse Vectors Single Encoder Demo](https://colab.research.google.com/drive/1wa2Yr5BCOgV0MTOFFTude99BOXCLHXky?usp=sharing). Run it, tinker with it, and start seeing the magic unfold in your projects. We can't wait to hear how you use it! ## Conclusion Alright, folks, let's wrap it up. Better search isn't a 'nice-to-have,' it's a game-changer, and Qdrant can get you there. Got questions? Our [Discord community](https://qdrant.to/discord?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors) is teeming with answers. If you enjoyed reading this, why not sign up for our [newsletter](/subscribe/?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors) to stay ahead of the curve. And, of course, a big thanks to you, our readers, for pushing us to make ranking better for everyone. ",articles/sparse-vectors.md "--- title: Google Summer of Code 2023 - Polygon Geo Filter for Qdrant Vector Database short_description: Gsoc'23 Polygon Geo Filter for Qdrant Vector Database description: A Summary of my work and experience at Qdrant's Gsoc '23. preview_dir: /articles_data/geo-polygon-filter-gsoc/preview small_preview_image: /articles_data/geo-polygon-filter-gsoc/icon.svg social_preview_image: /articles_data/geo-polygon-filter-gsoc/preview/social_preview.jpg weight: -50 author: Zein Wen author_link: https://www.linkedin.com/in/zishenwen/ date: 2023-10-12T08:00:00+03:00 draft: false keywords: - payload filtering - geo polygon - search condition - gsoc'23 --- ## Introduction Greetings, I'm Zein Wen, and I was a Google Summer of Code 2023 participant at Qdrant. I got to work with an amazing mentor, Arnaud Gourlay, on enhancing the Qdrant Geo Polygon Filter. This new feature allows users to refine their query results using polygons. As the latest addition to the Geo Filter family of radius and rectangle filters, this enhancement promises greater flexibility in querying geo data, unlocking interesting new use cases. ## Project Overview {{< figure src=""/articles_data/geo-polygon-filter-gsoc/geo-filter-example.png"" caption=""A Use Case of Geo Filter (https://traveltime.com/blog/map-postcode-data-catchment-area)"" alt=""A Use Case of Geo Filter"" >}} Because Qdrant is a powerful query vector database it presents immense potential for machine learning-driven applications, such as recommendation. However, the scope of vector queries alone may not always meet user requirements. Consider a scenario where you're seeking restaurant recommendations; it's not just about a list of restaurants, but those within your neighborhood. This is where the Geo Filter comes into play, enhancing query by incorporating additional filtering criteria. Up until now, Qdrant's geographic filter options were confined to circular and rectangular shapes, which may not align with the diverse boundaries found in the real world. This scenario was exactly what led to a user feature request and we decided it would be a good feature to tackle since it introduces greater capability for geo-related queries. ## Technical Challenges **1. Geo Geometry Computation** {{< figure src=""/articles_data/geo-polygon-filter-gsoc/basic-concept.png"" caption=""Geo Space Basic Concept"" alt=""Geo Space Basic Concept"" >}} Internally, the Geo Filter doesn't start by testing each individual geo location as this would be computationally expensive. Instead, we create a geo hash layer that [divides the world](https://en.wikipedia.org/wiki/Grid_(spatial_index)#Grid-based_spatial_indexing) into rectangles. When a spatial index is created for Qdrant entries it assigns the entry to the geohash for its location. During a query we first identify all potential geo hashes that satisfy the filters and subsequently check for location candidates within those hashes. Accomplishing this search involves two critical geometry computations: 1. determining if a polygon intersects with a rectangle 2. ascertaining if a point lies within a polygon. {{< figure src=/articles_data/geo-polygon-filter-gsoc/geo-computation-testing.png caption=""Geometry Computation Testing"" alt=""Geometry Computation Testing"" >}} While we have a geo crate (a Rust library) that provides APIs for these computations, we dug in deeper to understand the underlying algorithms and verify their accuracy. This lead us to conduct extensive testing and visualization to determine correctness. In addition to assessing the current crate, we also discovered that there are multiple algorithms available for these computations. We invested time in exploring different approaches, such as [winding windows](https://en.wikipedia.org/wiki/Point_in_polygon#Winding%20number%20algorithm:~:text=of%20the%20algorithm.-,Winding%20number%20algorithm,-%5Bedit%5D) and [ray casting](https://en.wikipedia.org/wiki/Point_in_polygon#Winding%20number%20algorithm:~:text=.%5B2%5D-,Ray%20casting%20algorithm,-%5Bedit%5D), to grasp their distinctions, and pave the way for future improvements. Through this process, I enjoyed honing my ability to swiftly grasp unfamiliar concepts. In addition, I needed to develop analytical strategies to dissect and draw meaningful conclusions from them. This experience has been invaluable in expanding my problem-solving toolkit. **2. Proto and JSON format design** Considerable effort was devoted to designing the ProtoBuf and JSON interfaces for this new feature. This component is directly exposed to users, requiring a consistent and user-friendly interface, which in turns help drive a a positive user experience and less code modifications in the future. Initially, we contemplated aligning our interface with the [GeoJSON](https://geojson.org/) specification, given its prominence as a standard for many geo-related APIs. However, we soon realized that the way GeoJSON defines geometries significantly differs from our current JSON and ProtoBuf coordinate definitions for our point radius and rectangular filter. As a result, we prioritized API-level consistency and user experience, opting to align the new polygon definition with all our existing definitions. In addition, we planned to develop a separate multi-polygon filter in addition to the polygon. However, after careful consideration, we recognize that, for our use case, polygon filters can achieve the same result as a multi-polygon filter. This relationship mirrors how we currently handle multiple circles or rectangles. Consequently, we deemed the multi-polygon filter redundant and would introduce unnecessary complexity to the API. Doing this work illustrated to me the challenge of navigating real-world solutions that require striking a balance between adhering to established standards and prioritizing user experience. It also was key to understanding the wisdom of focusing on developing what's truly necessary for users, without overextending our efforts. ## Outcomes **1. Capability of Deep Dive** Navigating unfamiliar code bases, concepts, APIs, and techniques is a common challenge for developers. Participating in GSoC was akin to me going from the safety of a swimming pool and right into the expanse of the ocean. Having my mentor’s support during this transition was invaluable. He provided me with numerous opportunities to independently delve into areas I had never explored before. I have grown into no longer fearing unknown technical areas, whether it's unfamiliar code, techniques, or concepts in specific domains. I've gained confidence in my ability to learn them step by step and use them to create the things I envision. **2. Always Put User in Minds** Another crucial lesson I learned is the importance of considering the user's experience and their specific use cases. While development may sometimes entail iterative processes, every aspect that directly impacts the user must be approached and executed with empathy. Neglecting this consideration can lead not only to functional errors but also erode the trust of users due to inconsistency and confusion, which then leads to them no longer using my work. **3. Speak Up and Effectively Communicate** Finally, In the course of development, encountering differing opinions is commonplace. It's essential to remain open to others' ideas, while also possessing the resolve to communicate one's own perspective clearly. This fosters productive discussions and ultimately elevates the quality of the development process. ### Wrap up Being selected for Google Summer of Code 2023 and collaborating with Arnaud and the other Qdrant engineers, along with all the other community members, has been a true privilege. I'm deeply grateful to those who invested their time and effort in reviewing my code, engaging in discussions about alternatives and design choices, and offering assistance when needed. Through these interactions, I've experienced firsthand the essence of open source and the culture that encourages collaboration. This experience not only allowed me to write Rust code for a real-world product for the first time, but it also opened the door to the amazing world of open source. Without a doubt, I'm eager to continue growing alongside this community and contribute to new features and enhancements that elevate the product. I've also become an advocate for Qdrant, introducing this project to numerous coworkers and friends in the tech industry. I'm excited to witness new users and contributors emerge from within my own network! If you want to try out my work, read the [documentation](/documentation/concepts/filtering/#geo-polygon) and then, either sign up for a free [cloud account](https://cloud.qdrant.io) or download the [Docker image](https://hub.docker.com/r/qdrant/qdrant). I look forward to seeing how people are using my work in their own applications! ",articles/geo-polygon-filter-gsoc.md "--- title: ""Introducing Qdrant 1.3.0"" short_description: ""New version is out! Our latest release brings about some exciting performance improvements and much-needed fixes."" description: ""New version is out! Our latest release brings about some exciting performance improvements and much-needed fixes."" social_preview_image: /articles_data/qdrant-1.3.x/social_preview.png small_preview_image: /articles_data/qdrant-1.3.x/icon.svg preview_dir: /articles_data/qdrant-1.3.x/preview weight: 2 author: David Sertic author_link: date: 2023-06-26T00:00:00Z draft: false keywords: - vector search - new features - oversampling - grouping lookup - io_uring - oversampling - group lookup --- A brand-new [Qdrant 1.3.0 release](https://github.com/qdrant/qdrant/releases/tag/v1.3.0) comes packed with a plethora of new features, performance improvements and bux fixes: 1. Asynchronous I/O interface: Reduce overhead by managing I/O operations asynchronously, thus minimizing context switches. 2. Oversampling for Quantization: Improve the accuracy and performance of your queries while using Scalar or Product Quantization. 3. Grouping API lookup: Storage optimization method that lets you look for points in another collection using group ids. 4. Qdrant Web UI: A convenient dashboard to help you manage data stored in Qdrant. 5. Temp directory for Snapshots: Set a separate storage directory for temporary snapshots on a faster disk. 6. Other important changes Your feedback is valuable to us, and are always tying to include some of your feature requests into our roadmap. Join [our Discord community](https://qdrant.to/discord) and help us build Qdrant!. ## New features ### Asychronous I/O interface Going forward, we will support the `io_uring` asychnronous interface for storage devices on Linux-based systems. Since its introduction, `io_uring` has been proven to speed up slow-disk deployments as it decouples kernel work from the IO process. This interface uses two ring buffers to queue and manage I/O operations asynchronously, avoiding costly context switches and reducing overhead. Unlike mmap, it frees the user threads to do computations instead of waiting for the kernel to complete. ![io_uring](/articles_data/qdrant-1.3.x/io-uring.png) #### Enable the interface from your config file: ```yaml storage: # enable the async scorer which uses io_uring async_scorer: true ``` You can return to the mmap based backend by either deleting the `async_scorer` entry or setting the value to `false`. This optimization will mainly benefit workloads with lots of disk IO (e.g. querying on-disk collections with rescoring). Please keep in mind that this feature is experimental and that the interface may change in further versions. ### Oversampling for quantization We are introducing [oversampling](/documentation/guides/quantization/#oversampling) as a new way to help you improve the accuracy and performance of similarity search algorithms. With this method, you are able to significantly compress high-dimensional vectors in memory and then compensate the accuracy loss by re-scoring additional points with the original vectors. You will experience much faster performance with quantization due to parallel disk usage when reading vectors. Much better IO means that you can keep quantized vectors in RAM, so the pre-selection will be even faster. Finally, once pre-selection is done, you can use parallel IO to retrieve original vectors, which is significantly faster than traversing HNSW on slow disks. #### Set the oversampling factor via query: Here is how you can configure the oversampling factor - define how many extra vectors should be pre-selected using the quantized index, and then re-scored using original vectors. ```http POST /collections/{collection_name}/points/search { ""params"": { ""quantization"": { ""ignore"": false, ""rescore"": true, ""oversampling"": 2.4 } }, ""vector"": [0.2, 0.1, 0.9, 0.7], ""limit"": 100 } ``` ```python from qdrant_client import QdrantClient from qdrant_client.http import models client = QdrantClient(""localhost"", port=6333) client.search( collection_name=""{collection_name}"", query_vector=[0.2, 0.1, 0.9, 0.7], search_params=models.SearchParams( quantization=models.QuantizationSearchParams( ignore=False, rescore=True, oversampling=2.4 ) ) ) ``` In this case, if `oversampling` is 2.4 and `limit` is 100, then 240 vectors will be pre-selected using quantized index, and then the top 100 points will be returned after re-scoring with the unquantized vectors. As you can see from the example above, this parameter is set during the query. This is a flexible method that will let you tune query accuracy. While the index is not changed, you can decide how many points you want to retrieve using quantized vectors. ### Grouping API lookup In version 1.2.0, we introduced a mechanism for requesting groups of points. Our new feature extends this functionality by giving you the option to look for points in another collection using the group ids. We wanted to add this feature, since having a single point for the shared data of the same item optimizes storage use, particularly if the payload is large. This has the extra benefit of having a single point to update when the information shared by the points in a group changes. ![Group Lookup](/articles_data/qdrant-1.3.x/group-lookup.png) For example, if you have a collection of documents, you may want to chunk them and store the points for the chunks in a separate collection, making sure that you store the point id from the document it belongs in the payload of the chunk point. #### Adding the parameter to grouping API request: When using the grouping API, add the `with_lookup` parameter to bring the information from those points into each group: ```http POST /collections/chunks/points/search/groups { // Same as in the regular search API ""vector"": [1.1], ..., // Grouping parameters ""group_by"": ""document_id"", ""limit"": 2, ""group_size"": 2, // Lookup parameters ""with_lookup"": { // Name of the collection to look up points in ""collection_name"": ""documents"", // Options for specifying what to bring from the payload // of the looked up point, true by default ""with_payload"": [""title"", ""text""], // Options for specifying what to bring from the vector(s) // of the looked up point, true by default ""with_vectors: false, } } ``` ```python client.search_groups( collection_name=""chunks"", # Same as in the regular search() API query_vector=[1.1], ..., # Grouping parameters group_by=""document_id"", # Path of the field to group by limit=2, # Max amount of groups group_size=2, # Max amount of points per group # Lookup parameters with_lookup=models.WithLookup( # Name of the collection to look up points in collection_name=""documents"", # Options for specifying what to bring from the payload # of the looked up point, True by default with_payload=[""title"", ""text""] # Options for specifying what to bring from the vector(s) # of the looked up point, True by default with_vectors=False, ) ) ``` ### Qdrant web user interface We are excited to announce a more user-friendly way to organize and work with your collections inside of Qdrant. Our dashboard's design is simple, but very intuitive and easy to access. Try it out now! If you have Docker running, you can [quickstart Qdrant](/documentation/quick-start/) and access the Dashboard locally from [http://localhost:6333/dashboard](http://localhost:6333/dashboard). You should see this simple access point to Qdrant: ![Qdrant Web UI](/articles_data/qdrant-1.3.x/web-ui.png) ### Temporary directory for Snapshots Currently, temporary snapshot files are created inside the `/storage` directory. Oftentimes `/storage` is a network-mounted disk. Therefore, we found this method suboptimal because `/storage` is limited in disk size and also because writing data to it may affect disk performance as it consumes bandwidth. This new feature allows you to specify a different directory on another disk that is faster. We expect this feature to significantly optimize cloud performance. To change it, access `config.yaml` and set `storage.temp_path` to another directory location. ## Important changes The latest release focuses not only on the new features but also introduces some changes making Qdrant even more reliable. ### Optimizing group requests Internally, `is_empty` was not using the index when it was called, so it had to deserialize the whole payload to see if the key had values or not. Our new update makes sure to check the index first, before confirming with the payload if it is actually `empty`/`null`, so these changes improve performance only when the negated condition is true (e.g. it improves when the field is not empty). Going forward, this will improve the way grouping API requests are handled. ### Faster read access with mmap If you used mmap, you most likely found that segments were always created with cold caches. The first request to the database needed to request the disk, which made startup slower despite plenty of RAM being available. We have implemeneted a way to ask the kernel to ""heat up"" the disk cache and make initialization much faster. The function is expected to be used on startup and after segment optimization and reloading of newly indexed segment. So far this is only implemented for ""immutable"" memmaps. ## Release notes As usual, [our release notes](https://github.com/qdrant/qdrant/releases/tag/v1.3.0) describe all the changes introduced in the latest version. ",articles/qdrant-1.3.x.md "--- title: Vector Search in constant time short_description: Apply Quantum Computing to your search engine description: Quantum Quantization enables vector search in constant time. This article will discuss the concept of quantum quantization for ANN vector search. preview_dir: /articles_data/quantum-quantization/preview social_preview_image: /articles_data/quantum-quantization/social_preview.png small_preview_image: /articles_data/quantum-quantization/icon.svg weight: 1000 author: Prankstorm Team draft: false author_link: https://www.youtube.com/watch?v=dQw4w9WgXcQ date: 2023-04-01T00:48:00.000Z --- The advent of quantum computing has revolutionized many areas of science and technology, and one of the most intriguing developments has been its potential application to artificial neural networks (ANNs). One area where quantum computing can significantly improve performance is in vector search, a critical component of many machine learning tasks. In this article, we will discuss the concept of quantum quantization for ANN vector search, focusing on the conversion of float32 to qbit vectors and the ability to perform vector search on arbitrary-sized databases in constant time. ## Quantum Quantization and Entanglement Quantum quantization is a novel approach that leverages the power of quantum computing to speed up the search process in ANNs. By converting traditional float32 vectors into qbit vectors, we can create quantum entanglement between the qbits. Quantum entanglement is a unique phenomenon in which the states of two or more particles become interdependent, regardless of the distance between them. This property of quantum systems can be harnessed to create highly efficient vector search algorithms. The conversion of float32 vectors to qbit vectors can be represented by the following formula: ```text qbit_vector = Q( float32_vector ) ``` where Q is the quantum quantization function that transforms the float32_vector into a quantum entangled qbit_vector. ## Vector Search in Constant Time The primary advantage of using quantum quantization for ANN vector search is the ability to search through an arbitrary-sized database in constant time. The key to performing vector search in constant time with quantum quantization is to use a quantum algorithm called Grover's algorithm. Grover's algorithm is a quantum search algorithm that finds the location of a marked item in an unsorted database in O(√N) time, where N is the size of the database. This is a significant improvement over classical algorithms, which require O(N) time to solve the same problem. However, the is one another trick, which allows to improve Grover's algorithm performanse dramatically. This trick is called transposition and it allows to reduce the number of Grover's iterations from O(√N) to O(√D), where D - is a dimension of the vector space. And since the dimension of the vector space is much smaller than the number of vectors, and usually is a constant, this trick allows to reduce the number of Grover's iterations from O(√N) to O(√D) = O(1). Check out our [Quantum Quantization PR](https://github.com/qdrant/qdrant/pull/1639) on GitHub. ",articles/quantum-quantization.md "--- title: ""Introducing Qdrant 1.2.x"" short_description: ""Check out what Qdrant 1.2 brings to vector search"" description: ""Check out what Qdrant 1.2 brings to vector search"" social_preview_image: /articles_data/qdrant-1.2.x/social_preview.png small_preview_image: /articles_data/qdrant-1.2.x/icon.svg preview_dir: /articles_data/qdrant-1.2.x/preview weight: 8 author: Kacper Łukawski author_link: https://medium.com/@lukawskikacper date: 2023-05-24T10:45:00+02:00 draft: false keywords: - vector search - new features - product quantization - optional vectors - nested filters - appendable mmap - group requests --- A brand-new Qdrant 1.2 release comes packed with a plethora of new features, some of which were highly requested by our users. If you want to shape the development of the Qdrant vector database, please [join our Discord community](https://qdrant.to/discord) and let us know how you use it! ## New features As usual, a minor version update of Qdrant brings some interesting new features. We love to see your feedback, and we tried to include the features most requested by our community. ### Product Quantization The primary focus of Qdrant was always performance. That's why we built it in Rust, but we were always concerned about making vector search affordable. From the very beginning, Qdrant offered support for disk-stored collections, as storage space is way cheaper than memory. That's also why we have introduced the [Scalar Quantization](/articles/scalar-quantization/) mechanism recently, which makes it possible to reduce the memory requirements by up to four times. Today, we are bringing a new quantization mechanism to life. A separate article on [Product Quantization](/documentation/quantization/#product-quantization) will describe that feature in more detail. In a nutshell, you can **reduce the memory requirements by up to 64 times**! ### Optional named vectors Qdrant has been supporting multiple named vectors per point for quite a long time. Those may have utterly different dimensionality and distance functions used to calculate similarity. Having multiple embeddings per item is an essential real-world scenario. For example, you might be encoding textual and visual data using different models. Or you might be experimenting with different models but don't want to make your payloads redundant by keeping them in separate collections. ![Optional vectors](/articles_data/qdrant-1.2.x/optional-vectors.png) However, up to the previous version, we requested that you provide all the vectors for each point. There have been many requests to allow nullable vectors, as sometimes you cannot generate an embedding or simply don't want to for reasons we don't need to know. ### Grouping requests Embeddings are great for capturing the semantics of the documents, but we rarely encode larger pieces of data into a single vector. Having a summary of a book may sound attractive, but in reality, we divide it into paragraphs or some different parts to have higher granularity. That pays off when we perform the semantic search, as we can return the relevant pieces only. That's also how modern tools like Langchain process the data. The typical way is to encode some smaller parts of the document and keep the document id as a payload attribute. ![Query without grouping request](/articles_data/qdrant-1.2.x/without-grouping-request.png) There are cases where we want to find relevant parts, but only up to a specific number of results per document (for example, only a single one). Up till now, we had to implement such a mechanism on the client side and send several calls to the Qdrant engine. But that's no longer the case. Qdrant 1.2 provides a mechanism for [grouping requests](/documentation/search/#grouping-api), which can handle that server-side, within a single call to the database. This mechanism is similar to the SQL `GROUP BY` clause. ![Query with grouping request](/articles_data/qdrant-1.2.x/with-grouping-request.png) You are not limited to a single result per document, and you can select how many entries will be returned. ### Nested filters Unlike some other vector databases, Qdrant accepts any arbitrary JSON payload, including arrays, objects, and arrays of objects. You can also [filter the search results using nested keys](/documentation/filtering/#nested-key), even though arrays (using the `[]` syntax). Before Qdrant 1.2 it was impossible to express some more complex conditions for the nested structures. For example, let's assume we have the following payload: ```json { ""country"": ""Japan"", ""cities"": [ { ""name"": ""Tokyo"", ""population"": 9.3, ""area"": 2194 }, { ""name"": ""Osaka"", ""population"": 2.7, ""area"": 223 }, { ""name"": ""Kyoto"", ""population"": 1.5, ""area"": 827.8 } ] } ``` We want to filter out the results to include the countries with a city with over 2 million citizens and an area bigger than 500 square kilometers but no more than 1000. There is no such a city in Japan, looking at our data, but if we wrote the following filter, it would be returned: ```json { ""filter"": { ""must"": [ { ""key"": ""country.cities[].population"", ""range"": { ""gte"": 2 } }, { ""key"": ""country.cities[].area"", ""range"": { ""gt"": 500, ""lte"": 1000 } } ] }, ""limit"": 3 } ``` Japan would be returned because Tokyo and Osaka match the first criteria, while Kyoto fulfills the second. But that's not what we wanted to achieve. That's the motivation behind introducing a new type of nested filter. ```json { ""filter"": { ""must"": [ { ""nested"": { ""key"": ""country.cities"", ""filter"": { ""must"": [ { ""key"": ""population"", ""range"": { ""gte"": 2 } }, { ""key"": ""area"", ""range"": { ""gt"": 500, ""lte"": 1000 } } ] } } } ] }, ""limit"": 3 } ``` The syntax is consistent with all the other supported filters and enables new possibilities. In our case, it allows us to express the joined condition on a nested structure and make the results list empty but correct. ## Important changes The latest release focuses not only on the new features but also introduces some changes making Qdrant even more reliable. ### Recovery mode There has been an issue in memory-constrained environments, such as cloud, happening when users were pushing massive amounts of data into the service using `wait=false`. This data influx resulted in an overreaching of disk or RAM limits before the Write-Ahead Logging (WAL) was fully applied. This situation was causing Qdrant to attempt a restart and reapplication of WAL, failing recurrently due to the same memory constraints and pushing the service into a frustrating crash loop with many Out-of-Memory errors. Qdrant 1.2 enters recovery mode, if enabled, when it detects a failure on startup. That makes the service halt the loading of collection data and commence operations in a partial state. This state allows for removing collections but doesn't support search or update functions. **Recovery mode [has to be enabled by user](/documentation/administration/#recovery-mode).** ### Appendable mmap For a long time, segments using mmap storage were `non-appendable` and could only be constructed by the optimizer. Dynamically adding vectors to the mmap file is fairly complicated and thus not implemented in Qdrant, but we did our best to implement it in the recent release. If you want to read more about segments, check out our docs on [vector storage](/documentation/storage/#vector-storage). ## Security There are two major changes in terms of [security](/documentation/security/): 1. **API-key support** - basic authentication with a static API key to prevent unwanted access. Previously API keys were only supported in [Qdrant Cloud](https://cloud.qdrant.io/). 2. **TLS support** - to use encrypted connections and prevent sniffing/MitM attacks. ## Release notes As usual, [our release notes](https://github.com/qdrant/qdrant/releases/tag/v1.2.0) describe all the changes introduced in the latest version. ",articles/qdrant-1.2.x.md "--- title: ""Qdrant under the hood: io_uring"" short_description: ""The Linux io_uring API offers great performance in certain cases. Here's how Qdrant uses it!"" description: ""Slow disk decelerating your Qdrant deployment? Get on top of IO overhead with this one trick!"" social_preview_image: /articles_data/io_uring/social_preview.png small_preview_image: /articles_data/io_uring/io_uring-icon.svg preview_dir: /articles_data/io_uring/preview weight: 3 author: Andre Bogus author_link: https://llogiq.github.io date: 2023-06-21T09:45:00+02:00 draft: false keywords: - vector search - linux - optimization aliases: [ /articles/io-uring/ ] --- With Qdrant [version 1.3.0](https://github.com/qdrant/qdrant/releases/tag/v1.3.0) we introduce the alternative io\_uring based *async uring* storage backend on Linux-based systems. Since its introduction, io\_uring has been known to improve async throughput wherever the OS syscall overhead gets too high, which tends to occur in situations where software becomes *IO bound* (that is, mostly waiting on disk). ## Input+Output Around the mid-90s, the internet took off. The first servers used a process- per-request setup, which was good for serving hundreds if not thousands of concurrent request. The POSIX Input + Output (IO) was modeled in a strictly synchronous way. The overhead of starting a new process for each request made this model unsustainable. So servers started forgoing process separation, opting for the thread-per-request model. But even that ran into limitations. I distinctly remember when someone asked the question whether a server could serve 10k concurrent connections, which at the time exhausted the memory of most systems (because every thread had to have its own stack and some other metadata, which quickly filled up available memory). As a result, the synchronous IO was replaced by asynchronous IO during the 2.5 kernel update, either via `select` or `epoll` (the latter being Linux-only, but a small bit more efficient, so most servers of the time used it). However, even this crude form of asynchronous IO carries the overhead of at least one system call per operation. Each system call incurs a context switch, and while this operation is itself not that slow, the switch disturbs the caches. Today's CPUs are much faster than memory, but if their caches start to miss data, the memory accesses required led to longer and longer wait times for the CPU. ### Memory-mapped IO Another way of dealing with file IO (which unlike network IO doesn't have a hard time requirement) is to map parts of files into memory - the system fakes having that chunk of the file in memory, so when you read from a location there, the kernel interrupts your process to load the needed data from disk, and resumes your process once done, whereas writing to the memory will also notify the kernel. Also the kernel can prefetch data while the program is running, thus reducing the likelyhood of interrupts. Thus there is still some overhead, but (especially in asynchronous applications) it's far less than with `epoll`. The reason this API is rarely used in web servers is that these usually have a large variety of files to access, unlike a database, which can map its own backing store into memory once. ### Combating the Poll-ution There were multiple experiments to improve matters, some even going so far as moving a HTTP server into the kernel, which of course brought its own share of problems. Others like Intel added their own APIs that ignored the kernel and worked directly on the hardware. Finally, Jens Axboe took matters into his own hands and proposed a ring buffer based interface called *io\_uring*. The buffers are not directly for data, but for operations. User processes can setup a Submission Queue (SQ) and a Completion Queue (CQ), both of which are shared between the process and the kernel, so there's no copying overhead. ![io_uring diagram](/articles_data/io_uring/io-uring.png) Apart from avoiding copying overhead, the queue-based architecture lends itself to multithreading as item insertion/extraction can be made lockless, and once the queues are set up, there is no further syscall that would stop any user thread. Servers that use this can easily get to over 100k concurrent requests. Today Linux allows asynchronous IO via io\_uring for network, disk and accessing other ports, e.g. for printing or recording video. ## And what about Qdrant? Qdrant can store everything in memory, but not all data sets may fit, which can require storing on disk. Before io\_uring, Qdrant used mmap to do its IO. This led to some modest overhead in case of disk latency. The kernel may stop a user thread trying to access a mapped region, which incurs some context switching overhead plus the wait time until the disk IO is finished. Ultimately, this works very well with the asynchronous nature of Qdrant's core. One of the great optimizations Qdrant offers is quantization (either [scalar](/articles/scalar-quantization/) or [product](/articles/product-quantization/)-based). However unless the collection resides fully in memory, this optimization method generates significant disk IO, so it is a prime candidate for possible improvements. If you run Qdrant on Linux, you can enable io\_uring with the following in your configuration: ```yaml # within the storage config storage: # enable the async scorer which uses io_uring async_scorer: true ``` You can return to the mmap based backend by either deleting the `async_scorer` entry or setting the value to `false`. ## Benchmarks To run the benchmark, use a test instance of Qdrant. If necessary spin up a docker container and load a snapshot of the collection you want to benchmark with. You can copy and edit our [benchmark script](/articles_data/io_uring/rescore-benchmark.sh) to run the benchmark. Run the script with and without enabling `storage.async_scorer` and once. You can measure IO usage with `iostat` from another console. For our benchmark, we chose the laion dataset picking 5 million 768d entries. We enabled scalar quantization + HNSW with m=16 and ef_construct=512. We do the quantization in RAM, HNSW in RAM but keep the original vectors on disk (which was a network drive rented from Hetzner for the benchmark). If you want to reproduce the benchmarks, you can get snapshots containing the datasets: * [mmap only](https://storage.googleapis.com/common-datasets-snapshots/laion-768-6m-mmap.snapshot) * [with scalar quantization](https://storage.googleapis.com/common-datasets-snapshots/laion-768-6m-sq-m16-mmap.shapshot) Running the benchmark, we get the following IOPS, CPU loads and wall clock times: | | oversampling | parallel | ~max IOPS | CPU% (of 4 cores) | time (s) (avg of 3) | |----------|--------------|----------|-----------|-------------------|---------------------| | io_uring | 1 | 4 | 4000 | 200 | 12 | | mmap | 1 | 4 | 2000 | 93 | 43 | | io_uring | 1 | 8 | 4000 | 200 | 12 | | mmap | 1 | 8 | 2000 | 90 | 43 | | io_uring | 4 | 8 | 7000 | 100 | 30 | | mmap | 4 | 8 | 2300 | 50 | 145 | Note that in this case, the IO operations have relatively high latency due to using a network disk. Thus, the kernel takes more time to fulfil the mmap requests, and application threads need to wait, which is reflected in the CPU percentage. On the other hand, with the io\_uring backend, the application threads can better use available cores for the rescore operation without any IO-induced delays. Oversampling is a new feature to improve accuracy at the cost of some performance. It allows setting a factor, which is multiplied with the `limit` while doing the search. The results are then re-scored using the original vector and only then the top results up to the limit are selected. ## Discussion Looking back, disk IO used to be very serialized; re-positioning read-write heads on moving platter was a slow and messy business. So the system overhead didn't matter as much, but nowadays with SSDs that can often even parallelize operations while offering near-perfect random access, the overhead starts to become quite visible. While memory-mapped IO gives us a fair deal in terms of ease of use and performance, we can improve on the latter in exchange for some modest complexity increase. io\_uring is still quite young, having only been introduced in 2019 with kernel 5.1, so some administrators will be wary of introducing it. Of course, as with performance, the right answer is usually ""it depends"", so please review your personal risk profile and act accordingly. ## Best Practices If your on-disk collection's query performance is of sufficiently high priority to you, enable the io\_uring-based async\_scorer to greatly reduce operating system overhead from disk IO. On the other hand, if your collections are in memory only, activating it will be ineffective. Also note that many queries are not IO bound, so the overhead may or may not become measurable in your workload. Finally, on-device disks typically carry lower latency than network drives, which may also affect mmap overhead. Therefore before you roll out io\_uring, perform the above or a similar benchmark with both mmap and io\_uring and measure both wall time and IOps). Benchmarks are always highly use-case dependent, so your mileage may vary. Still, doing that benchmark once is a small price for the possible performance wins. Also please [tell us](https://discord.com/channels/907569970500743200/907569971079569410) about your benchmark results! ",articles/io_uring.md "--- title: ""Hybrid Search Revamped - Building with Qdrant's Query API"" short_description: ""Merging different search methods to improve the search quality was never easier"" description: ""Our new Query API allows you to build a hybrid search system that uses different search methods to improve search quality & experience. Learn more here."" preview_dir: /articles_data/hybrid-search/preview social_preview_image: /articles_data/hybrid-search/social-preview.png weight: -150 author: Kacper Łukawski author_link: https://kacperlukawski.com date: 2024-07-25T00:00:00.000Z --- It's been over a year since we published the original article on how to build a hybrid search system with Qdrant. The idea was straightforward: combine the results from different search methods to improve retrieval quality. Back in 2023, you still needed to use an additional service to bring lexical search capabilities and combine all the intermediate results. Things have changed since then. Once we introduced support for sparse vectors, [the additional search service became obsolete](/articles/sparse-vectors/), but you were still required to combine the results from different methods on your end. **Qdrant 1.10 introduces a new Query API that lets you build a search system by combining different search methods to improve retrieval quality**. Everything is now done on the server side, and you can focus on building the best search experience for your users. In this article, we will show you how to utilize the new [Query API](/documentation/concepts/search/#query-api) to build a hybrid search system. ## Introducing the new Query API At Qdrant, we believe that vector search capabilities go well beyond a simple search for nearest neighbors. That's why we provided separate methods for different search use cases, such as `search`, `recommend`, or `discover`. With the latest release, we are happy to introduce the new Query API, which combines all of these methods into a single endpoint and also supports creating nested multistage queries that can be used to build complex search pipelines. If you are an existing Qdrant user, you probably have a running search mechanism that you want to improve, whether sparse or dense. Doing any changes should be preceded by a proper evaluation of its effectiveness. ## How effective is your search system? None of the experiments makes sense if you don't measure the quality. How else would you compare which method works better for your use case? The most common way of doing that is by using the standard metrics, such as `precision@k`, `MRR`, or `NDCG`. There are existing libraries, such as [ranx](https://amenra.github.io/ranx/), that can help you with that. We need to have the ground truth dataset to calculate any of these, but curating it is a separate task. ```python from ranx import Qrels, Run, evaluate # Qrels, or query relevance judgments, keep the ground truth data qrels_dict = { ""q_1"": { ""d_12"": 5, ""d_25"": 3 }, ""q_2"": { ""d_11"": 6, ""d_22"": 1 } } # Runs are built from the search results run_dict = { ""q_1"": { ""d_12"": 0.9, ""d_23"": 0.8, ""d_25"": 0.7, ""d_36"": 0.6, ""d_32"": 0.5, ""d_35"": 0.4 }, ""q_2"": { ""d_12"": 0.9, ""d_11"": 0.8, ""d_25"": 0.7, ""d_36"": 0.6, ""d_22"": 0.5, ""d_35"": 0.4 } } # We need to create both objects, and then we can evaluate the run against the qrels qrels = Qrels(qrels_dict) run = Run(run_dict) # Calculating the NDCG@5 metric is as simple as that evaluate(qrels, run, ""ndcg@5"") ``` ## Available embedding options with Query API Support for multiple vectors per point is nothing new in Qdrant, but introducing the Query API makes it even more powerful. The 1.10 release supports the multivectors, allowing you to treat embedding lists as a single entity. There are many possible ways of utilizing this feature, and the most prominent one is the support for late interaction models, such as [ColBERT](https://qdrant.tech/documentation/fastembed/fastembed-colbert/). Instead of having a single embedding for each document or query, this family of models creates a separate one for each token of text. In the search process, the final score is calculated based on the interaction between the tokens of the query and the document. Contrary to cross-encoders, document embedding might be precomputed and stored in the database, which makes the search process much faster. If you are curious about the details, please check out [the article about ColBERT, written by our friends from Jina AI](https://jina.ai/news/what-is-colbert-and-late-interaction-and-why-they-matter-in-search/). ![Late interaction](/articles_data/hybrid-search/late-interaction.png) Besides multivectors, you can use regular dense and sparse vectors, and experiment with smaller data types to reduce memory use. Named vectors can help you store different dimensionalities of the embeddings, which is useful if you use multiple models to represent your data, or want to utilize the Matryoshka embeddings. ![Multiple vectors per point](/articles_data/hybrid-search/multiple-vectors.png) There is no single way of building a hybrid search. The process of designing it is an exploratory exercise, where you need to test various setups and measure their effectiveness. Building a proper search experience is a complex task, and it's better to keep it data-driven, not just rely on the intuition. ## Fusion vs reranking We can, distinguish two main approaches to building a hybrid search system: fusion and reranking. The former is about combining the results from different search methods, based solely on the scores returned by each method. That usually involves some normalization, as the scores returned by different methods might be in different ranges. After that, there is a formula that takes the relevancy measures and calculates the final score that we use later on to reorder the documents. Qdrant has built-in support for the Reciprocal Rank Fusion method, which is the de facto standard in the field. ![Fusion](/articles_data/hybrid-search/fusion.png) Reranking, on the other hand, is about taking the results from different search methods and reordering them based on some additional processing using the content of the documents, not just the scores. This processing may rely on an additional neural model, such as a cross-encoder which would be inefficient enough to be used on the whole dataset. These methods are practically applicable only when used on a smaller subset of candidates returned by the faster search methods. Late interaction models, such as ColBERT, are way more efficient in this case, as they can be used to rerank the candidates without the need to access all the documents in the collection. ![Reranking](/articles_data/hybrid-search/reranking.png) ### Why not a linear combination? It's often proposed to use full-text and vector search scores to form a linear combination formula to rerank the results. So it goes like this: ```final_score = 0.7 * vector_score + 0.3 * full_text_score``` However, we didn't even consider such a setup. Why? Those scores don't make the problem linearly separable. We used the BM25 score along with cosine vector similarity to use both of them as points coordinates in 2-dimensional space. The chart shows how those points are distributed: ![A distribution of both Qdrant and BM25 scores mapped into 2D space.](/articles_data/hybrid-search/linear-combination.png) *A distribution of both Qdrant and BM25 scores mapped into 2D space. It clearly shows relevant and non-relevant objects are not linearly separable in that space, so using a linear combination of both scores won't give us a proper hybrid search.* Both relevant and non-relevant items are mixed. **None of the linear formulas would be able to distinguish between them.** Thus, that's not the way to solve it. ## Building a hybrid search system in Qdrant Ultimately, **any search mechanism might also be a reranking mechanism**. You can prefetch results with sparse vectors and then rerank them with the dense ones, or the other way around. Or, if you have Matryoshka embeddings, you can start with oversampling the candidates with the dense vectors of the lowest dimensionality and then gradually reduce the number of candidates by reranking them with the higher-dimensional embeddings. Nothing stops you from combining both fusion and reranking. Let's go a step further and build a hybrid search mechanism that combines the results from the Matryoshka embeddings, dense vectors, and sparse vectors and then reranks them with the late interaction model. In the meantime, we will introduce additional reranking and fusion steps. ![Complex search pipeline](/articles_data/hybrid-search/complex-search-pipeline.png) Our search pipeline consists of two branches, each of them responsible for retrieving a subset of documents that we eventually want to rerank with the late interaction model. Let's connect to Qdrant first and then build the search pipeline. ```python from qdrant_client import QdrantClient, models client = QdrantClient(""http://localhost:6333"") ``` All the steps utilizing Matryoshka embeddings might be specified in the Query API as a nested structure: ```python # The first branch of our search pipeline retrieves 25 documents # using the Matryoshka embeddings with multistep retrieval. matryoshka_prefetch = models.Prefetch( prefetch=[ models.Prefetch( prefetch=[ # The first prefetch operation retrieves 100 documents # using the Matryoshka embeddings with the lowest # dimensionality of 64. models.Prefetch( query=[0.456, -0.789, ..., 0.239], using=""matryoshka-64dim"", limit=100, ), ], # Then, the retrieved documents are re-ranked using the # Matryoshka embeddings with the dimensionality of 128. query=[0.456, -0.789, ..., -0.789], using=""matryoshka-128dim"", limit=50, ) ], # Finally, the results are re-ranked using the Matryoshka # embeddings with the dimensionality of 256. query=[0.456, -0.789, ..., 0.123], using=""matryoshka-256dim"", limit=25, ) ``` Similarly, we can build the second branch of our search pipeline, which retrieves the documents using the dense and sparse vectors and performs the fusion of them using the Reciprocal Rank Fusion method: ```python # The second branch of our search pipeline also retrieves 25 documents, # but uses the dense and sparse vectors, with their results combined # using the Reciprocal Rank Fusion. sparse_dense_rrf_prefetch = models.Prefetch( prefetch=[ models.Prefetch( prefetch=[ # The first prefetch operation retrieves 100 documents # using dense vectors using integer data type. Retrieval # is faster, but quality is lower. models.Prefetch( query=[7, 63, ..., 92], using=""dense-uint8"", limit=100, ) ], # Integer-based embeddings are then re-ranked using the # float-based embeddings. Here we just want to retrieve # 25 documents. query=[-1.234, 0.762, ..., 1.532], using=""dense"", limit=25, ), # Here we just add another 25 documents using the sparse # vectors only. models.Prefetch( query=models.SparseVector( indices=[125, 9325, 58214], values=[-0.164, 0.229, 0.731], ), using=""sparse"", limit=25, ), ], # RRF is activated below, so there is no need to specify the # query vector here, as fusion is done on the scores of the # retrieved documents. query=models.FusionQuery( fusion=models.Fusion.RRF, ), ) ``` The second branch could have already been called hybrid, as it combines the results from the dense and sparse vectors with fusion. However, nothing stops us from building even more complex search pipelines. Here is how the target call to the Query API would look like in Python: ```python client.query_points( ""my-collection"", prefetch=[ matryoshka_prefetch, sparse_dense_rrf_prefetch, ], # Finally rerank the results with the late interaction model. It only # considers the documents retrieved by all the prefetch operations above. # Return 10 final results. query=[ [1.928, -0.654, ..., 0.213], [-1.197, 0.583, ..., 1.901], ..., [0.112, -1.473, ..., 1.786], ], using=""late-interaction"", with_payload=False, limit=10, ) ``` The options are endless, the new Query API gives you the flexibility to experiment with different setups. **You rarely need to build such a complex search pipeline**, but it's good to know that you can do that if needed. ## Some anecdotal observations Neither of the algorithms performs best in all cases. In some cases, keyword-based search will be the winner and vice-versa. The following table shows some interesting examples we could find in the [WANDS](https://github.com/wayfair/WANDS) dataset during experimentation:
Query BM25 Search Vector Search
cybersport desk desk ❌ gaming desk ✅
plates for icecream ""eat"" plates on wood wall décor ❌ alicyn 8.5 '' melamine dessert plate ✅
kitchen table with a thick board craft kitchen acacia wood cutting board ❌ industrial solid wood dining table ✅
wooden bedside table 30 '' bedside table lamp ❌ portable bedside end table ✅
Also examples where keyword-based search did better:
Query BM25 Search Vector Search
computer chair vibrant computer task chair ✅ office chair ❌
64.2 inch console table cervantez 64.2 '' console table ✅ 69.5 '' console table ❌
## Try the New Query API in Qdrant 1.10 The new Query API introduced in Qdrant 1.10 is a game-changer for building hybrid search systems. You don't need any additional services to combine the results from different search methods, and you can even create more complex pipelines and serve them directly from Qdrant. Our webinar on *Building the Ultimate Hybrid Search* takes you through the process of building a hybrid search system with Qdrant Query API. If you missed it, you can [watch the recording](https://www.youtube.com/watch?v=LAZOxqzceEU), or [check the notebooks](https://github.com/qdrant/workshop-ultimate-hybrid-search).
If you have any questions or need help with building your hybrid search system, don't hesitate to reach out to us on [Discord](https://qdrant.to/discord). ",articles/hybrid-search.md "--- title: ""Neural Search 101: A Complete Guide and Step-by-Step Tutorial"" short_description: Step-by-step guide on how to build a neural search service. description: Discover the power of neural search. Learn what neural search is and follow our tutorial to build a neural search service using BERT, Qdrant, and FastAPI. # external_link: https://blog.qdrant.tech/neural-search-tutorial-3f034ab13adc social_preview_image: /articles_data/neural-search-tutorial/social_preview.jpg preview_dir: /articles_data/neural-search-tutorial/preview small_preview_image: /articles_data/neural-search-tutorial/tutorial.svg weight: 50 author: Andrey Vasnetsov author_link: https://blog.vasnetsov.com/ date: 2021-06-10T10:18:00.000Z # aliases: [ /articles/neural-search-tutorial/ ] --- # Neural Search 101: A Comprehensive Guide and Step-by-Step Tutorial Information retrieval technology is one of the main technologies that enabled the modern Internet to exist. These days, search technology is the heart of a variety of applications. From web-pages search to product recommendations. For many years, this technology didn't get much change until neural networks came into play. In this guide we are going to find answers to these questions: * What is the difference between regular and neural search? * What neural networks could be used for search? * In what tasks is neural network search useful? * How to build and deploy own neural search service step-by-step? ## What is neural search? A regular full-text search, such as Google's, consists of searching for keywords inside a document. For this reason, the algorithm can not take into account the real meaning of the query and documents. Many documents that might be of interest to the user are not found because they use different wording. Neural search tries to solve exactly this problem - it attempts to enable searches not by keywords but by meaning. To achieve this, the search works in 2 steps. In the first step, a specially trained neural network encoder converts the query and the searched objects into a vector representation called embeddings. The encoder must be trained so that similar objects, such as texts with the same meaning or alike pictures get a close vector representation. ![Encoders and embedding space](https://gist.githubusercontent.com/generall/c229cc94be8c15095286b0c55a3f19d7/raw/e52e3f1a320cd985ebc96f48955d7f355de8876c/encoders.png) Having this vector representation, it is easy to understand what the second step should be. To find documents similar to the query you now just need to find the nearest vectors. The most convenient way to determine the distance between two vectors is to calculate the cosine distance. The usual Euclidean distance can also be used, but it is not so efficient due to [the curse of dimensionality](https://en.wikipedia.org/wiki/Curse_of_dimensionality). ## Which model could be used? It is ideal to use a model specially trained to determine the closeness of meanings. For example, models trained on Semantic Textual Similarity (STS) datasets. Current state-of-the-art models can be found on this [leaderboard](https://paperswithcode.com/sota/semantic-textual-similarity-on-sts-benchmark?p=roberta-a-robustly-optimized-bert-pretraining). However, not only specially trained models can be used. If the model is trained on a large enough dataset, its internal features can work as embeddings too. So, for instance, you can take any pre-trained on ImageNet model and cut off the last layer from it. In the penultimate layer of the neural network, as a rule, the highest-level features are formed, which, however, do not correspond to specific classes. The output of this layer can be used as an embedding. ## What tasks is neural search good for? Neural search has the greatest advantage in areas where the query cannot be formulated precisely. Querying a table in an SQL database is not the best place for neural search. On the contrary, if the query itself is fuzzy, or it cannot be formulated as a set of conditions - neural search can help you. If the search query is a picture, sound file or long text, neural network search is almost the only option. If you want to build a recommendation system, the neural approach can also be useful. The user's actions can be encoded in vector space in the same way as a picture or text. And having those vectors, it is possible to find semantically similar users and determine the next probable user actions. ## Step-by-step neural search tutorial using Qdrant With all that said, let's make our neural network search. As an example, I decided to make a search for startups by their description. In this demo, we will see the cases when text search works better and the cases when neural network search works better. I will use data from [startups-list.com](https://www.startups-list.com/). Each record contains the name, a paragraph describing the company, the location and a picture. Raw parsed data can be found at [this link](https://storage.googleapis.com/generall-shared-data/startups_demo.json). ### Step 1: Prepare data for neural search To be able to search for our descriptions in vector space, we must get vectors first. We need to encode the descriptions into a vector representation. As the descriptions are textual data, we can use a pre-trained language model. As mentioned above, for the task of text search there is a whole set of pre-trained models specifically tuned for semantic similarity. One of the easiest libraries to work with pre-trained language models, in my opinion, is the [sentence-transformers](https://github.com/UKPLab/sentence-transformers) by UKPLab. It provides a way to conveniently download and use many pre-trained models, mostly based on transformer architecture. Transformers is not the only architecture suitable for neural search, but for our task, it is quite enough. We will use a model called `all-MiniLM-L6-v2`. This model is an all-round model tuned for many use-cases. Trained on a large and diverse dataset of over 1 billion training pairs. It is optimized for low memory consumption and fast inference. The complete code for data preparation with detailed comments can be found and run in [Colab Notebook](https://colab.research.google.com/drive/1kPktoudAP8Tu8n8l-iVMOQhVmHkWV_L9?usp=sharing). [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1kPktoudAP8Tu8n8l-iVMOQhVmHkWV_L9?usp=sharing) ### Step 2: Incorporate a Vector search engine Now as we have a vector representation for all our records, we need to store them somewhere. In addition to storing, we may also need to add or delete a vector, save additional information with the vector. And most importantly, we need a way to search for the nearest vectors. The vector search engine can take care of all these tasks. It provides a convenient API for searching and managing vectors. In our tutorial, we will use [Qdrant vector search engine](https://github.com/qdrant/qdrant) vector search engine. It not only supports all necessary operations with vectors but also allows you to store additional payload along with vectors and use it to perform filtering of the search result. Qdrant has a client for Python and also defines the API schema if you need to use it from other languages. The easiest way to use Qdrant is to run a pre-built image. So make sure you have Docker installed on your system. To start Qdrant, use the instructions on its [homepage](https://github.com/qdrant/qdrant). Download image from [DockerHub](https://hub.docker.com/r/qdrant/qdrant): ```bash docker pull qdrant/qdrant ``` And run the service inside the docker: ```bash docker run -p 6333:6333 \ -v $(pwd)/qdrant_storage:/qdrant/storage \ qdrant/qdrant ``` You should see output like this ```text ... [2021-02-05T00:08:51Z INFO actix_server::builder] Starting 12 workers [2021-02-05T00:08:51Z INFO actix_server::builder] Starting ""actix-web-service-0.0.0.0:6333"" service on 0.0.0.0:6333 ``` This means that the service is successfully launched and listening port 6333. To make sure you can test [http://localhost:6333/](http://localhost:6333/) in your browser and get qdrant version info. All uploaded to Qdrant data is saved into the `./qdrant_storage` directory and will be persisted even if you recreate the container. ### Step 3: Upload data to Qdrant Now once we have the vectors prepared and the search engine running, we can start uploading the data. To interact with Qdrant from python, I recommend using an out-of-the-box client library. To install it, use the following command ```bash pip install qdrant-client ``` At this point, we should have startup records in file `startups.json`, encoded vectors in file `startup_vectors.npy`, and running Qdrant on a local machine. Let's write a script to upload all startup data and vectors into the search engine. First, let's create a client object for Qdrant. ```python # Import client library from qdrant_client import QdrantClient from qdrant_client.models import VectorParams, Distance qdrant_client = QdrantClient(host='localhost', port=6333) ``` Qdrant allows you to combine vectors of the same purpose into collections. Many independent vector collections can exist on one service at the same time. Let's create a new collection for our startup vectors. ```python if not qdrant_client.collection_exists('startups'): qdrant_client.create_collection( collection_name='startups', vectors_config=VectorParams(size=384, distance=Distance.COSINE), ) ``` The `vector_size` parameter is very important. It tells the service the size of the vectors in that collection. All vectors in a collection must have the same size, otherwise, it is impossible to calculate the distance between them. `384` is the output dimensionality of the encoder we are using. The `distance` parameter allows specifying the function used to measure the distance between two points. The Qdrant client library defines a special function that allows you to load datasets into the service. However, since there may be too much data to fit a single computer memory, the function takes an iterator over the data as input. Let's create an iterator over the startup data and vectors. ```python import numpy as np import json fd = open('./startups.json') # payload is now an iterator over startup data payload = map(json.loads, fd) # Here we load all vectors into memory, numpy array works as iterable for itself. # Other option would be to use Mmap, if we don't want to load all data into RAM vectors = np.load('./startup_vectors.npy') ``` And the final step - data uploading ```python qdrant_client.upload_collection( collection_name='startups', vectors=vectors, payload=payload, ids=None, # Vector ids will be assigned automatically batch_size=256 # How many vectors will be uploaded in a single request? ) ``` Now we have vectors uploaded to the vector search engine. In the next step, we will learn how to actually search for the closest vectors. The full code for this step can be found [here](https://github.com/qdrant/qdrant_demo/blob/master/qdrant_demo/init_collection_startups.py). ### Step 4: Make a search API Now that all the preparations are complete, let's start building a neural search class. First, install all the requirements: ```bash pip install sentence-transformers numpy ``` In order to process incoming requests neural search will need 2 things. A model to convert the query into a vector and Qdrant client, to perform a search queries. ```python # File: neural_searcher.py from qdrant_client import QdrantClient from sentence_transformers import SentenceTransformer class NeuralSearcher: def __init__(self, collection_name): self.collection_name = collection_name # Initialize encoder model self.model = SentenceTransformer('all-MiniLM-L6-v2', device='cpu') # initialize Qdrant client self.qdrant_client = QdrantClient(host='localhost', port=6333) ``` The search function looks as simple as possible: ```python def search(self, text: str): # Convert text query into vector vector = self.model.encode(text).tolist() # Use `vector` for search for closest vectors in the collection search_result = self.qdrant_client.search( collection_name=self.collection_name, query_vector=vector, query_filter=None, # We don't want any filters for now top=5 # 5 the most closest results is enough ) # `search_result` contains found vector ids with similarity scores along with the stored payload # In this function we are interested in payload only payloads = [hit.payload for hit in search_result] return payloads ``` With Qdrant it is also feasible to add some conditions to the search. For example, if we wanted to search for startups in a certain city, the search query could look like this: ```python from qdrant_client.models import Filter ... city_of_interest = ""Berlin"" # Define a filter for cities city_filter = Filter(**{ ""must"": [{ ""key"": ""city"", # We store city information in a field of the same name ""match"": { # This condition checks if payload field have requested value ""keyword"": city_of_interest } }] }) search_result = self.qdrant_client.search( collection_name=self.collection_name, query_vector=vector, query_filter=city_filter, top=5 ) ... ``` We now have a class for making neural search queries. Let's wrap it up into a service. ### Step 5: Deploy as a service To build the service we will use the FastAPI framework. It is super easy to use and requires minimal code writing. To install it, use the command ```bash pip install fastapi uvicorn ``` Our service will have only one API endpoint and will look like this: ```python # File: service.py from fastapi import FastAPI # That is the file where NeuralSearcher is stored from neural_searcher import NeuralSearcher app = FastAPI() # Create an instance of the neural searcher neural_searcher = NeuralSearcher(collection_name='startups') @app.get(""/api/search"") def search_startup(q: str): return { ""result"": neural_searcher.search(text=q) } if __name__ == ""__main__"": import uvicorn uvicorn.run(app, host=""0.0.0.0"", port=8000) ``` Now, if you run the service with ```bash python service.py ``` and open your browser at [http://localhost:8000/docs](http://localhost:8000/docs) , you should be able to see a debug interface for your service. ![FastAPI Swagger interface](https://gist.githubusercontent.com/generall/c229cc94be8c15095286b0c55a3f19d7/raw/d866e37a60036ebe65508bd736faff817a5d27e9/fastapi_neural_search.png) Feel free to play around with it, make queries and check out the results. This concludes the tutorial. ### Experience Neural Search With Qdrant’s Free Demo Excited to see neural search in action? Take the next step and book a [free demo](https://qdrant.to/semantic-search-demo) with Qdrant! Experience firsthand how this cutting-edge technology can transform your search capabilities. Our demo will help you grow intuition for cases when the neural search is useful. The demo contains a switch that selects between neural and full-text searches. You can turn neural search on and off to compare the result with regular full-text search. Try to use a startup description to find similar ones. Join our [Discord community](https://qdrant.to/discord), where we talk about vector search and similarity learning, and publish other examples of neural networks and neural search applications. ",articles/neural-search-tutorial.md "--- title: Serverless Semantic Search short_description: ""Need to setup a server to offer semantic search? Think again!"" description: ""Create a serverless semantic search engine using nothing but Qdrant and free cloud services."" social_preview_image: /articles_data/serverless/social_preview.png small_preview_image: /articles_data/serverless/icon.svg preview_dir: /articles_data/serverless/preview weight: 1 author: Andre Bogus author_link: https://llogiq.github.io date: 2023-07-12T10:00:00+01:00 draft: false keywords: rust, serverless, lambda, semantic, search --- Do you want to insert a semantic search function into your website or online app? Now you can do so - without spending any money! In this example, you will learn how to create a free prototype search engine for your own non-commercial purposes. You may find all of the assets for this tutorial on [GitHub](https://github.com/qdrant/examples/tree/master/lambda-search). ## Ingredients * A [Rust](https://rust-lang.org) toolchain * [cargo lambda](https://cargo-lambda.info) (install via package manager, [download](https://github.com/cargo-lambda/cargo-lambda/releases) binary or `cargo install cargo-lambda`) * The [AWS CLI](https://aws.amazon.com/cli) * Qdrant instance ([free tier](https://cloud.qdrant.io) available) * An embedding provider service of your choice (see our [Embeddings docs](/documentation/embeddings/). You may be able to get credits from [AI Grant](https://aigrant.org), also Cohere has a [rate-limited non-commercial free tier](https://cohere.com/pricing)) * AWS Lambda account (12-month free tier available) ## What you're going to build You'll combine the embedding provider and the Qdrant instance to a neat semantic search, calling both services from a small Lambda function. ![lambda integration diagram](/articles_data/serverless/lambda_integration.png) Now lets look at how to work with each ingredient before connecting them. ## Rust and cargo-lambda You want your function to be quick, lean and safe, so using Rust is a no-brainer. To compile Rust code for use within Lambda functions, the `cargo-lambda` subcommand has been built. `cargo-lambda` can put your Rust code in a zip file that AWS Lambda can then deploy on a no-frills `provided.al2` runtime. To interface with AWS Lambda, you will need a Rust project with the following dependencies in your `Cargo.toml`: ```toml [dependencies] tokio = { version = ""1"", features = [""macros""] } lambda_http = { version = ""0.8"", default-features = false, features = [""apigw_http""] } lambda_runtime = ""0.8"" ``` This gives you an interface consisting of an entry point to start the Lambda runtime and a way to register your handler for HTTP calls. Put the following snippet into `src/helloworld.rs`: ```rust use lambda_http::{run, service_fn, Body, Error, Request, RequestExt, Response}; /// This is your callback function for responding to requests at your URL async fn function_handler(_req: Request) -> Result, Error> { Response::from_text(""Hello, Lambda!"") } #[tokio::main] async fn main() { run(service_fn(function_handler)).await } ``` You can also use a closure to bind other arguments to your function handler (the `service_fn` call then becomes `service_fn(|req| function_handler(req, ...))`). Also if you want to extract parameters from the request, you can do so using the [Request](https://docs.rs/lambda_http/latest/lambda_http/type.Request.html) methods (e.g. `query_string_parameters` or `query_string_parameters_ref`). Add the following to your `Cargo.toml` to define the binary: ```toml [[bin]] name = ""helloworld"" path = ""src/helloworld.rs"" ``` On the AWS side, you need to setup a Lambda and IAM role to use with your function. ![create lambda web page](/articles_data/serverless/create_lambda.png) Choose your function name, select ""Provide your own bootstrap on Amazon Linux 2"". As architecture, use `arm64`. You will also activate a function URL. Here it is up to you if you want to protect it via IAM or leave it open, but be aware that open end points can be accessed by anyone, potentially costing money if there is too much traffic. By default, this will also create a basic role. To look up the role, you can go into the Function overview: ![function overview](/articles_data/serverless/lambda_overview.png) Click on the ""Info"" link near the ""▸ Function overview"" heading, and select the ""Permissions"" tab on the left. You will find the ""Role name"" directly under *Execution role*. Note it down for later. ![function overview](/articles_data/serverless/lambda_role.png) To test that your ""Hello, Lambda"" service works, you can compile and upload the function: ```bash $ export LAMBDA_FUNCTION_NAME=hello $ export LAMBDA_ROLE= $ export LAMBDA_REGION=us-east-1 $ cargo lambda build --release --arm --bin helloworld --output-format zip Downloaded libc v0.2.137 # [..] output omitted for brevity Finished release [optimized] target(s) in 1m 27s $ # Delete the old empty definition $ aws lambda delete-function-url-config --region $LAMBDA_REGION --function-name $LAMBDA_FUNCTION_NAME $ aws lambda delete-function --region $LAMBDA_REGION --function-name $LAMBDA_FUNCTION_NAME $ # Upload the function $ aws lambda create-function --function-name $LAMBDA_FUNCTION_NAME \ --handler bootstrap \ --architectures arm64 \ --zip-file fileb://./target/lambda/helloworld/bootstrap.zip \ --runtime provided.al2 \ --region $LAMBDA_REGION \ --role $LAMBDA_ROLE \ --tracing-config Mode=Active $ # Add the function URL $ aws lambda add-permission \ --function-name $LAMBDA_FUNCTION_NAME \ --action lambda:InvokeFunctionUrl \ --principal ""*"" \ --function-url-auth-type ""NONE"" \ --region $LAMBDA_REGION \ --statement-id url $ # Here for simplicity unauthenticated URL access. Beware! $ aws lambda create-function-url-config \ --function-name $LAMBDA_FUNCTION_NAME \ --region $LAMBDA_REGION \ --cors ""AllowOrigins=*,AllowMethods=*,AllowHeaders=*"" \ --auth-type NONE ``` Now you can go to your *Function Overview* and click on the Function URL. You should see something like this: ```text Hello, Lambda! ``` Bearer ! You have set up a Lambda function in Rust. On to the next ingredient: ## Embedding Most providers supply a simple https GET or POST interface you can use with an API key, which you have to supply in an authentication header. If you are using this for non-commercial purposes, the rate limited trial key from Cohere is just a few clicks away. Go to [their welcome page](https://dashboard.cohere.ai/welcome/register), register and you'll be able to get to the dashboard, which has an ""API keys"" menu entry which will bring you to the following page: [cohere dashboard](/articles_data/serverless/cohere-dashboard.png) From there you can click on the ⎘ symbol next to your API key to copy it to the clipboard. *Don't put your API key in the code!* Instead read it from an env variable you can set in the lambda environment. This avoids accidentally putting your key into a public repo. Now all you need to get embeddings is a bit of code. First you need to extend your dependencies with `reqwest` and also add `anyhow` for easier error handling: ```toml anyhow = ""1.0"" reqwest = { version = ""0.11.18"", default-features = false, features = [""json"", ""rustls-tls""] } serde = ""1.0"" ``` Now given the API key from above, you can make a call to get the embedding vectors: ```rust use anyhow::Result; use serde::Deserialize; use reqwest::Client; #[derive(Deserialize)] struct CohereResponse { outputs: Vec> } pub async fn embed(client: &Client, text: &str, api_key: &str) -> Result>> { let CohereResponse { outputs } = client .post(""https://api.cohere.ai/embed"") .header(""Authorization"", &format!(""Bearer {api_key}"")) .header(""Content-Type"", ""application/json"") .header(""Cohere-Version"", ""2021-11-08"") .body(format!(""{{\""text\"":[\""{text}\""],\""model\"":\""small\""}}"")) .send() .await? .json() .await?; Ok(outputs) } ``` Note that this may return multiple vectors if the text overflows the input dimensions. Cohere's `small` model has 1024 output dimensions. Other providers have similar interfaces. Consult our [Embeddings docs](/documentation/embeddings/) for further information. See how little code it took to get the embedding? While you're at it, it's a good idea to write a small test to check if embedding works and the vectors are of the expected size: ```rust #[tokio::test] async fn check_embedding() { // ignore this test if API_KEY isn't set let Ok(api_key) = &std::env::var(""API_KEY"") else { return; } let embedding = crate::embed(""What is semantic search?"", api_key).unwrap()[0]; // Cohere's `small` model has 1024 output dimensions. assert_eq!(1024, embedding.len()); } ``` Run this while setting the `API_KEY` environment variable to check if the embedding works. ## Qdrant search Now that you have embeddings, it's time to put them into your Qdrant. You could of course use `curl` or `python` to set up your collection and upload the points, but as you already have Rust including some code to obtain the embeddings, you can stay in Rust, adding `qdrant-client` to the mix. ```rust use anyhow::Result; use qdrant_client::prelude::*; use qdrant_client::qdrant::{VectorsConfig, VectorParams}; use qdrant_client::qdrant::vectors_config::Config; use std::collections::HashMap; fn setup<'i>( embed_client: &reqwest::Client, embed_api_key: &str, qdrant_url: &str, api_key: Option<&str>, collection_name: &str, data: impl Iterator)>, ) -> Result<()> { let mut config = QdrantClientConfig::from_url(qdrant_url); config.api_key = api_key; let client = QdrantClient::new(Some(config))?; // create the collections if !client.has_collection(collection_name).await? { client .create_collection(&CreateCollection { collection_name: collection_name.into(), vectors_config: Some(VectorsConfig { config: Some(Config::Params(VectorParams { size: 1024, // output dimensions from above distance: Distance::Cosine as i32, ..Default::default() })), }), ..Default::default() }) .await?; } let mut id_counter = 0_u64; let points = data.map(|(text, payload)| { let id = std::mem::replace(&mut id_counter, *id_counter + 1); let vectors = Some(embed(embed_client, text, embed_api_key).unwrap()); PointStruct { id, vectors, payload } }).collect(); client.upsert_points(collection_name, points, None).await?; Ok(()) } ``` Depending on whether you want to efficiently filter the data, you can also add some indexes. I'm leaving this out for brevity, but you can look at the [example code](https://github.com/qdrant/examples/tree/master/lambda-search) containing this operation. Also this does not implement chunking (splitting the data to upsert in multiple requests, which avoids timeout errors). Add a suitable `main` method and you can run this code to insert the points (or just use the binary from the example). Be sure to include the port in the `qdrant_url`. Now that you have the points inserted, you can search them by embedding: ```rust use anyhow::Result; use qdrant_client::prelude::*; pub async fn search( text: &str, collection_name: String, client: &Client, api_key: &str, qdrant: &QdrantClient, ) -> Result> { Ok(qdrant.search_points(&SearchPoints { collection_name, limit: 5, // use what fits your use case here with_payload: Some(true.into()), vector: embed(client, text, api_key)?, ..Default::default() }).await?.result) } ``` You can also filter by adding a `filter: ...` field to the `SearchPoints`, and you will likely want to process the result further, but the example code already does that, so feel free to start from there in case you need this functionality. ## Putting it all together Now that you have all the parts, it's time to join them up. Now copying and wiring up the snippets above is left as an exercise to the reader. Impatient minds can peruse the [example repo](https://github.com/qdrant/examples/tree/master/lambda-search) instead. You'll want to extend the `main` method a bit to connect with the Client once at the start, also get API keys from the environment so you don't need to compile them into the code. To do that, you can get them with `std::env::var(_)` from the rust code and set the environment from the AWS console. ```bash $ export QDRANT_URI= $ export QDRANT_API_KEY= $ export COHERE_API_KEY= $ export COLLECTION_NAME=site-cohere $ aws lambda update-function-configuration \ --function-name $LAMBDA_FUNCTION_NAME \ --environment ""Variables={QDRANT_URI=$QDRANT_URI,\ QDRANT_API_KEY=$QDRANT_API_KEY,COHERE_API_KEY=${COHERE_API_KEY},\ COLLECTION_NAME=${COLLECTION_NAME}""` ``` In any event, you will arrive at one command line program to insert your data and one Lambda function. The former can just be `cargo run` to set up the collection. For the latter, you can again call `cargo lambda` and the AWS console: ```bash $ export LAMBDA_FUNCTION_NAME=search $ export LAMBDA_REGION=us-east-1 $ cargo lambda build --release --arm --output-format zip Downloaded libc v0.2.137 # [..] output omitted for brevity Finished release [optimized] target(s) in 1m 27s $ # Update the function $ aws lambda update-function-code --function-name $LAMBDA_FUNCTION_NAME \ --zip-file fileb://./target/lambda/page-search/bootstrap.zip \ --region $LAMBDA_REGION ``` ## Discussion Lambda works by spinning up your function once the URL is called, so they don't need to keep the compute on hand unless it is actually used. This means that the first call will be burdened by some 1-2 seconds of latency for loading the function, later calls will resolve faster. Of course, there is also the latency for calling the embeddings provider and Qdrant. On the other hand, the free tier doesn't cost a thing, so you certainly get what you pay for. And for many use cases, a result within one or two seconds is acceptable. Rust minimizes the overhead for the function, both in terms of file size and runtime. Using an embedding service means you don't need to care about the details. Knowing the URL, API key and embedding size is sufficient. Finally, with free tiers for both Lambda and Qdrant as well as free credits for the embedding provider, the only cost is your time to set everything up. Who could argue with free? ",articles/serverless.md "--- title: Filtrable HNSW short_description: How to make ANN search with custom filtering? description: How to make ANN search with custom filtering? Search in selected subsets without loosing the results. # external_link: https://blog.vasnetsov.com/posts/categorical-hnsw/ social_preview_image: /articles_data/filtrable-hnsw/social_preview.jpg preview_dir: /articles_data/filtrable-hnsw/preview small_preview_image: /articles_data/filtrable-hnsw/global-network.svg weight: 60 date: 2019-11-24T22:44:08+03:00 author: Andrei Vasnetsov author_link: https://blog.vasnetsov.com/ # aliases: [ /articles/filtrable-hnsw/ ] --- If you need to find some similar objects in vector space, provided e.g. by embeddings or matching NN, you can choose among a variety of libraries: Annoy, FAISS or NMSLib. All of them will give you a fast approximate neighbors search within almost any space. But what if you need to introduce some constraints in your search? For example, you want search only for products in some category or select the most similar customer of a particular brand. I did not find any simple solutions for this. There are several discussions like [this](https://github.com/spotify/annoy/issues/263), but they only suggest to iterate over top search results and apply conditions consequently after the search. Let's see if we could somehow modify any of ANN algorithms to be able to apply constrains during the search itself. Annoy builds tree index over random projections. Tree index implies that we will meet same problem that appears in relational databases: if field indexes were built independently, then it is possible to use only one of them at a time. Since nobody solved this problem before, it seems that there is no easy approach. There is another algorithm which shows top results on the [benchmark](https://github.com/erikbern/ann-benchmarks). It is called HNSW which stands for Hierarchical Navigable Small World. The [original paper](https://arxiv.org/abs/1603.09320) is well written and very easy to read, so I will only give the main idea here. We need to build a navigation graph among all indexed points so that the greedy search on this graph will lead us to the nearest point. This graph is constructed by sequentially adding points that are connected by a fixed number of edges to previously added points. In the resulting graph, the number of edges at each point does not exceed a given threshold $m$ and always contains the nearest considered points. ![NSW](/articles_data/filtrable-hnsw/NSW.png) ### How can we modify it? What if we simply apply the filter criteria to the nodes of this graph and use in the greedy search only those that meet these criteria? It turns out that even with this naive modification algorithm can cover some use cases. One such case is if your criteria do not correlate with vector semantics. For example, you use a vector search for clothing names and want to filter out some sizes. In this case, the nodes will be uniformly filtered out from the entire cluster structure. Therefore, the theoretical conclusions obtained in the [Percolation theory](https://en.wikipedia.org/wiki/Percolation_theory) become applicable: > Percolation is related to the robustness of the graph (called also network). Given a random graph of $n$ nodes and an average degree $\langle k\rangle$ . Next we remove randomly a fraction $1-p$ of nodes and leave only a fraction $p$. There exists a critical percolation threshold $ pc = \frac{1}{\langle k\rangle} $ below which the network becomes fragmented while above $pc$ a giant connected component exists. This statement also confirmed by experiments: {{< figure src=/articles_data/filtrable-hnsw/exp_connectivity_glove_m0.png caption=""Dependency of connectivity to the number of edges"" >}} {{< figure src=/articles_data/filtrable-hnsw/exp_connectivity_glove_num_elements.png caption=""Dependency of connectivity to the number of point (no dependency)."" >}} There is a clear threshold when the search begins to fail. This threshold is due to the decomposition of the graph into small connected components. The graphs also show that this threshold can be shifted by increasing the $m$ parameter of the algorithm, which is responsible for the degree of nodes. Let's consider some other filtering conditions we might want to apply in the search: * Categorical filtering * Select only points in a specific category * Select points which belong to a specific subset of categories * Select points with a specific set of labels * Numerical range * Selection within some geographical region In the first case, we can guarantee that the HNSW graph will be connected simply by creating additional edges inside each category separately, using the same graph construction algorithm, and then combining them into the original graph. In this case, the total number of edges will increase by no more than 2 times, regardless of the number of categories. Second case is a little harder. A connection may be lost between two categories if they lie in different clusters. ![category clusters](/articles_data/filtrable-hnsw/hnsw_graph_category.png) The idea here is to build same navigation graph but not between nodes, but between categories. Distance between two categories might be defined as distance between category entry points (or, for precision, as the average distance between a random sample). Now we can estimate expected graph connectivity by number of excluded categories, not nodes. It still does not guarantee that two random categories will be connected, but allows us to switch to multiple searches in each category if connectivity threshold passed. In some cases, multiple searches can be even faster if you take advantage of parallel processing. {{< figure src=/articles_data/filtrable-hnsw/exp_random_groups.png caption=""Dependency of connectivity to the random categories included in search"" >}} Third case might be resolved in a same way it is resolved in classical databases. Depending on labeled subsets size ration we can go for one of the following scenarios: * if at least one subset is small: perform search over the label containing smallest subset and then filter points consequently. * if large subsets give large intersection: perform regular search with constraints expecting that intersection size fits connectivity threshold. * if large subsets give small intersection: perform linear search over intersection expecting that it is small enough to fit a time frame. Numerical range case can be reduces to the previous one if we split numerical range into a buckets containing equal amount of points. Next we also connect neighboring buckets to achieve graph connectivity. We still need to filter some results which presence in border buckets but do not fulfill actual constraints, but their amount might be regulated by the size of buckets. Geographical case is a lot like a numerical one. Usual geographical search involves [geohash](https://en.wikipedia.org/wiki/Geohash), which matches any geo-point to a fixes length identifier. ![Geohash example](/articles_data/filtrable-hnsw/geohash.png) We can use this identifiers as categories and additionally make connections between neighboring geohashes. It will ensure that any selected geographical region will also contain connected HNSW graph. ## Conclusion It is possible to enchant HNSW algorithm so that it will support filtering points in a first search phase. Filtering can be carried out on the basis of belonging to categories, which in turn is generalized to such popular cases as numerical ranges and geo. Experiments were carried by modification [python implementation](https://github.com/generall/hnsw-python) of the algorithm, but real production systems require much faster version, like [NMSLib](https://github.com/nmslib/nmslib). ",articles/filtrable-hnsw.md "--- title: Food Discovery Demo short_description: Feeling hungry? Find the perfect meal with Qdrant's multimodal semantic search. description: Feeling hungry? Find the perfect meal with Qdrant's multimodal semantic search. preview_dir: /articles_data/food-discovery-demo/preview social_preview_image: /articles_data/food-discovery-demo/preview/social_preview.png small_preview_image: /articles_data/food-discovery-demo/icon.svg weight: -30 author: Kacper Łukawski author_link: https://medium.com/@lukawskikacper date: 2023-09-05T11:32:00.000Z --- Not every search journey begins with a specific destination in mind. Sometimes, you just want to explore and see what’s out there and what you might like. This is especially true when it comes to food. You might be craving something sweet, but you don’t know what. You might be also looking for a new dish to try, and you just want to see the options available. In these cases, it's impossible to express your needs in a textual query, as the thing you are looking for is not yet defined. Qdrant's semantic search for images is useful when you have a hard time expressing your tastes in words. ## General architecture We are happy to announce a refreshed version of our [Food Discovery Demo](https://food-discovery.qdrant.tech/). This time available as an open source project, so you can easily deploy it on your own and play with it. If you prefer to dive into the source code directly, then feel free to check out the [GitHub repository ](https://github.com/qdrant/demo-food-discovery/). Otherwise, read on to learn more about the demo and how it works! In general, our application consists of three parts: a [FastAPI](https://fastapi.tiangolo.com/) backend, a [React](https://react.dev/) frontend, and a [Qdrant](/) instance. The architecture diagram below shows how these components interact with each other: ![Archtecture diagram](/articles_data/food-discovery-demo/architecture-diagram.png) ## Why did we use a CLIP model? CLIP is a neural network that can be used to encode both images and texts into vectors. And more importantly, both images and texts are vectorized into the same latent space, so we can compare them directly. This lets you perform semantic search on images using text queries and the other way around. For example, if you search for “flat bread with toppings”, you will get images of pizza. Or if you search for “pizza”, you will get images of some flat bread with toppings, even if they were not labeled as “pizza”. This is because CLIP embeddings capture the semantics of the images and texts and can find the similarities between them no matter the wording. ![CLIP model](/articles_data/food-discovery-demo/clip-model.png) CLIP is available in many different ways. We used the pretrained `clip-ViT-B-32` model available in the [Sentence-Transformers](https://www.sbert.net/examples/applications/image-search/README.html) library, as this is the easiest way to get started. ## The dataset The demo is based on the [Wolt](https://wolt.com/) dataset. It contains over 2M images of dishes from different restaurants along with some additional metadata. This is how a payload for a single dish looks like: ```json { ""cafe"": { ""address"": ""VGX7+6R2 Vecchia Napoli, Valletta"", ""categories"": [""italian"", ""pasta"", ""pizza"", ""burgers"", ""mediterranean""], ""location"": {""lat"": 35.8980154, ""lon"": 14.5145106}, ""menu_id"": ""610936a4ee8ea7a56f4a372a"", ""name"": ""Vecchia Napoli Is-Suq Tal-Belt"", ""rating"": 9, ""slug"": ""vecchia-napoli-skyparks-suq-tal-belt"" }, ""description"": ""Tomato sauce, mozzarella fior di latte, crispy guanciale, Pecorino Romano cheese and a hint of chilli"", ""image"": ""https://wolt-menu-images-cdn.wolt.com/menu-images/610936a4ee8ea7a56f4a372a/005dfeb2-e734-11ec-b667-ced7a78a5abd_l_amatriciana_pizza_joel_gueller1.jpeg"", ""name"": ""L'Amatriciana"" } ``` Processing this amount of records takes some time, so we precomputed the CLIP embeddings, stored them in a Qdrant collection and exported the collection as a snapshot. You may [download it here](https://storage.googleapis.com/common-datasets-snapshots/wolt-clip-ViT-B-32.snapshot). ## Different search modes The FastAPI backend [exposes just a single endpoint](https://github.com/qdrant/demo-food-discovery/blob/6b49e11cfbd6412637d527cdd62fe9b9f74ac699/backend/main.py#L37), however it handles multiple scenarios. Let's dive into them one by one and understand why they are needed. ### Cold start Recommendation systems struggle with a cold start problem. When a new user joins the system, there is no data about their preferences, so it’s hard to recommend anything. The same applies to our demo. When you open it, you will see a random selection of dishes, and it changes every time you refresh the page. Internally, the demo [chooses some random points](https://github.com/qdrant/demo-food-discovery/blob/6b49e11cfbd6412637d527cdd62fe9b9f74ac699/backend/discovery.py#L70) in the vector space. ![Random points selection](/articles_data/food-discovery-demo/random-results.png) That procedure should result in returning diverse results, so we have a higher chance of showing something interesting to the user. ### Textual search Since the demo suffers from the cold start problem, we implemented a textual search mode that is useful to start exploring the data. You can type in any text query by clicking a search icon in the top right corner. The demo will use the CLIP model to encode the query into a vector and then search for the nearest neighbors in the vector space. ![Random points selection](/articles_data/food-discovery-demo/textual-search.png) This is implemented as [a group search query to Qdrant](https://github.com/qdrant/demo-food-discovery/blob/6b49e11cfbd6412637d527cdd62fe9b9f74ac699/backend/discovery.py#L44). We didn't use a simple search, but performed grouping by the restaurant to get more diverse results. [Search groups](/documentation/concepts/search/#search-groups) is a mechanism similar to `GROUP BY` clause in SQL, and it's useful when you want to get a specific number of result per group (in our case just one). ```python import settings # Encode query into a vector, model is an instance of # sentence_transformers.SentenceTransformer that loaded CLIP model query_vector = model.encode(query).tolist() # Search for nearest neighbors, client is an instance of # qdrant_client.QdrantClient that has to be initialized before response = client.search_groups( settings.QDRANT_COLLECTION, query_vector=query_vector, group_by=settings.GROUP_BY_FIELD, limit=search_query.limit, ) ``` ### Exploring the results The main feature of the demo is the ability to explore the space of the dishes. You can click on any of them to see more details, but first of all you can like or dislike it, and the demo will update the search results accordingly. ![Recommendation results](/articles_data/food-discovery-demo/recommendation-results.png) #### Negative feedback only Qdrant [Recommendation API](/documentation/concepts/search/#recommendation-api) needs at least one positive example to work. However, in our demo we want to be able to provide only negative examples. This is because we want to be able to say “I don’t like this dish” without having to like anything first. To achieve this, we use a trick. We negate the vectors of the disliked dishes and use their mean as a query. This way, the disliked dishes will be pushed away from the search results. **This works because the cosine distance is based on the angle between two vectors, and the angle between a vector and its negation is 180 degrees.** ![CLIP model](/articles_data/food-discovery-demo/negated-vector.png) Food Discovery Demo [implements that trick](https://github.com/qdrant/demo-food-discovery/blob/6b49e11cfbd6412637d527cdd62fe9b9f74ac699/backend/discovery.py#L122) by calling Qdrant twice. Initially, we use the [Scroll API](/documentation/concepts/points/#scroll-points) to find disliked items, and then calculate a negated mean of all their vectors. That allows using the [Search Groups API](/documentation/concepts/search/#search-groups) to find the nearest neighbors of the negated mean vector. ```python import numpy as np # Retrieve the disliked points based on their ids disliked_points, _ = client.scroll( settings.QDRANT_COLLECTION, scroll_filter=models.Filter( must=[ models.HasIdCondition(has_id=search_query.negative), ] ), with_vectors=True, ) # Calculate a mean vector of disliked points disliked_vectors = np.array([point.vector for point in disliked_points]) mean_vector = np.mean(disliked_vectors, axis=0) negated_vector = -mean_vector # Search for nearest neighbors of the negated mean vector response = client.search_groups( settings.QDRANT_COLLECTION, query_vector=negated_vector.tolist(), group_by=settings.GROUP_BY_FIELD, limit=search_query.limit, ) ``` #### Positive and negative feedback Since the [Recommendation API](/documentation/concepts/search/#recommendation-api) requires at least one positive example, we can use it only when the user has liked at least one dish. We could theoretically use the same trick as above and negate the disliked dishes, but it would be a bit weird, as Qdrant has that feature already built-in, and we can call it just once to do the job. It's always better to perform the search server-side. Thus, in this case [we just call the Qdrant server with a list of positive and negative examples](https://github.com/qdrant/demo-food-discovery/blob/6b49e11cfbd6412637d527cdd62fe9b9f74ac699/backend/discovery.py#L166), so it can find some points which are close to the positive examples and far from the negative ones. ```python response = client.recommend_groups( settings.QDRANT_COLLECTION, positive=search_query.positive, negative=search_query.negative, group_by=settings.GROUP_BY_FIELD, limit=search_query.limit, ) ``` From the user perspective nothing changes comparing to the previous case. ### Location-based search Last but not least, location plays an important role in the food discovery process. You are definitely looking for something you can find nearby, not on the other side of the globe. Therefore, your current location can be toggled as a filtering condition. You can enable it by clicking on “Find near me” icon in the top right. This way you can find the best pizza in your neighborhood, not in the whole world. Qdrant [geo radius filter](/documentation/concepts/filtering/#geo-radius) is a perfect choice for this. It lets you filter the results by distance from a given point. ```python from qdrant_client import models # Create a geo radius filter query_filter = models.Filter( must=[ models.FieldCondition( key=""cafe.location"", geo_radius=models.GeoRadius( center=models.GeoPoint( lon=location.longitude, lat=location.latitude, ), radius=location.radius_km * 1000, ), ) ] ) ``` Such a filter needs [a payload index](/documentation/concepts/indexing/#payload-index) to work efficiently, and it was created on a collection we used to create the snapshot. When you import it into your instance, the index will be already there. ## Using the demo The Food Discovery Demo [is available online](https://food-discovery.qdrant.tech/), but if you prefer to run it locally, you can do it with Docker. The [README](https://github.com/qdrant/demo-food-discovery/blob/main/README.md) describes all the steps more in detail, but here is a quick start: ```bash git clone git@github.com:qdrant/demo-food-discovery.git cd demo-food-discovery # Create .env file based on .env.example docker-compose up -d ``` The demo will be available at `http://localhost:8001`, but you won't be able to search anything until you [import the snapshot into your Qdrant instance](/documentation/concepts/snapshots/#recover-via-api). If you don't want to bother with hosting a local one, you can use the [Qdrant Cloud](https://cloud.qdrant.io/) cluster. 4 GB RAM is enough to load all the 2 million entries. ## Fork and reuse Our demo is completely open-source. Feel free to fork it, update with your own dataset or adapt the application to your use case. Whether you’re looking to understand the mechanics of semantic search or to have a foundation to build a larger project, this demo can serve as a starting point. Check out the [Food Discovery Demo repository ](https://github.com/qdrant/demo-food-discovery/) to get started. If you have any questions, feel free to reach out [through Discord](https://qdrant.to/discord). ",articles/food-discovery-demo.md "--- title: Google Summer of Code 2023 - Web UI for Visualization and Exploration short_description: Gsoc'23 Web UI for Visualization and Exploration description: My journey as a Google Summer of Code 2023 student working on the ""Web UI for Visualization and Exploration"" project for Qdrant. preview_dir: /articles_data/web-ui-gsoc/preview small_preview_image: /articles_data/web-ui-gsoc/icon.svg social_preview_image: /articles_data/web-ui-gsoc/preview/social_preview.jpg weight: -20 author: Kartik Gupta author_link: https://kartik-gupta-ij.vercel.app/ date: 2023-08-28T08:00:00+03:00 draft: false keywords: - vector reduction - console - gsoc'23 - vector similarity - exploration - recommendation --- ## Introduction Hello everyone! My name is Kartik Gupta, and I am thrilled to share my coding journey as part of the Google Summer of Code 2023 program. This summer, I had the incredible opportunity to work on an exciting project titled ""Web UI for Visualization and Exploration"" for Qdrant, a vector search engine. In this article, I will take you through my experience, challenges, and achievements during this enriching coding journey. ## Project Overview Qdrant is a powerful vector search engine widely used for similarity search and clustering. However, it lacked a user-friendly web-based UI for data visualization and exploration. My project aimed to bridge this gap by developing a web-based user interface that allows users to easily interact with and explore their vector data. ## Milestones and Achievements The project was divided into six milestones, each focusing on a specific aspect of the web UI development. Let's go through each of them and my achievements during the coding period. **1. Designing a friendly UI on Figma** I started by designing the user interface on Figma, ensuring it was easy to use, visually appealing, and responsive on different devices. I focused on usability and accessibility to create a seamless user experience. ( [Figma Design](https://www.figma.com/file/z54cAcOErNjlVBsZ1DrXyD/Qdant?type=design&node-id=0-1&mode=design&t=Pu22zO2AMFuGhklG-0)) **2. Building the layout** The layout route served as a landing page with an overview of the application's features and navigation links to other routes. **3. Creating a view collection route** This route enabled users to view a list of collections available in the application. Users could click on a collection to see more details, including the data and vectors associated with it. {{< figure src=/articles_data/web-ui-gsoc/collections-page.png caption=""Collection Page"" alt=""Collection Page"" >}} **4. Developing a data page with ""find similar"" functionality** I implemented a data page where users could search for data and find similar data using a recommendation API. The recommendation API suggested similar data based on the Data's selected ID, providing valuable insights. {{< figure src=/articles_data/web-ui-gsoc/points-page.png caption=""Points Page"" alt=""Points Page"" >}} **5. Developing query editor page libraries** This milestone involved creating a query editor page that allowed users to write queries in a custom language. The editor provided syntax highlighting, autocomplete, and error-checking features for a seamless query writing experience. {{< figure src=/articles_data/web-ui-gsoc/console-page.png caption=""Query Editor Page"" alt=""Query Editor Page"" >}} **6. Developing a route for visualizing vector data points** This is done by the reduction of n-dimensional vector in 2-D points and they are displayed with their respective payloads. {{< figure src=/articles_data/web-ui-gsoc/visualization-page.png caption=""Vector Visuliztion Page"" alt=""visualization-page"" >}} ## Challenges and Learning Throughout the project, I encountered a series of challenges that stretched my engineering capabilities and provided unique growth opportunities. From mastering new libraries and technologies to ensuring the user interface (UI) was both visually appealing and user-friendly, every obstacle became a stepping stone toward enhancing my skills as a developer. However, each challenge provided an opportunity to learn and grow as a developer. I acquired valuable experience in vector search and dimension reduction techniques. The most significant learning for me was the importance of effective project management. Setting realistic timelines, collaborating with mentors, and staying proactive with feedback allowed me to complete the milestones efficiently. ### Technical Learning and Skill Development One of the most significant aspects of this journey was diving into the intricate world of vector search and dimension reduction techniques. These areas, previously unfamiliar to me, required rigorous study and exploration. Learning how to process vast amounts of data efficiently and extract meaningful insights through these techniques was both challenging and rewarding. ### Effective Project Management Undoubtedly, the most impactful lesson was the art of effective project management. I quickly grasped the importance of setting realistic timelines and goals. Collaborating closely with mentors and maintaining proactive communication proved indispensable. This approach enabled me to navigate the complex development process and successfully achieve the project's milestones. ### Overcoming Technical Challenges #### Autocomplete Feature in Console One particularly intriguing challenge emerged while working on the autocomplete feature within the console. Finding a solution was proving elusive until a breakthrough came from an unexpected direction. My mentor, Andrey, proposed creating a separate module that could support autocomplete based on OpenAPI for our custom language. This ingenious approach not only resolved the issue but also showcased the power of collaborative problem-solving. #### Optimization with Web Workers The high-processing demands of vector reduction posed another significant challenge. Initially, this task was straining browsers and causing performance issues. The solution materialized in the form of web workers—an independent processing instance that alleviated the strain on browsers. However, a new question arose: how to terminate these workers effectively? With invaluable insights from my mentor, I gained a deeper understanding of web worker dynamics and successfully tackled this challenge. #### Console Integration Complexity Integrating the console interaction into the application presented multifaceted challenges. Crafting a custom language in Monaco, parsing text to make API requests, and synchronizing the entire process demanded meticulous attention to detail. Overcoming these hurdles was a testament to the complexity of real-world engineering endeavours. #### Codelens Multiplicity Issue An unexpected issue cropped up during the development process: the codelen (run button) registered multiple times, leading to undesired behaviour. This hiccup underscored the importance of thorough testing and debugging, even in seemingly straightforward features. ### Key Learning Points Amidst these challenges, I garnered valuable insights that have significantly enriched my engineering prowess: **Vector Reduction Techniques**: Navigating the realm of vector reduction techniques provided a deep understanding of how to process and interpret data efficiently. This knowledge opens up new avenues for developing data-driven applications in the future. **Web Workers Efficiency**: Mastering the intricacies of web workers not only resolved performance concerns but also expanded my repertoire of optimization strategies. This newfound proficiency will undoubtedly find relevance in various future projects. **Monaco Editor and UI Frameworks**: Working extensively with the Monaco Editor, Material-UI (MUI), and Vite enriched my familiarity with these essential tools. I honed my skills in integrating complex UI components seamlessly into applications. ## Areas for Improvement and Future Enhancements While reflecting on this transformative journey, I recognize several areas that offer room for improvement and future enhancements: 1. Enhanced Autocomplete: Further refining the autocomplete feature to support key-value suggestions in JSON structures could greatly enhance the user experience. 2. Error Detection in Console: Integrating the console's error checker with OpenAPI could enhance its accuracy in identifying errors and offering precise suggestions for improvement. 3. Expanded Vector Visualization: Exploring additional visualization methods and optimizing their performance could elevate the utility of the vector visualization route. ## Conclusion Participating in the Google Summer of Code 2023 and working on the ""Web UI for Visualization and Exploration"" project has been an immensely rewarding experience. I am grateful for the opportunity to contribute to Qdrant and develop a user-friendly interface for vector data exploration. I want to express my gratitude to my mentors and the entire Qdrant community for their support and guidance throughout this journey. This experience has not only improved my coding skills but also instilled a deeper passion for web development and data analysis. As my coding journey continues beyond this project, I look forward to applying the knowledge and experience gained here to future endeavours. I am excited to see how Qdrant evolves with the newly developed web UI and how it positively impacts users worldwide. Thank you for joining me on this coding adventure, and I hope to share more exciting projects in the future! Happy coding!",articles/web-ui-gsoc.md "--- title: Metric Learning for Anomaly Detection short_description: ""How to use metric learning to detect anomalies: quality assessment of coffee beans with just 200 labelled samples"" description: Practical use of metric learning for anomaly detection. A way to match the results of a classification-based approach with only ~0.6% of the labeled data. social_preview_image: /articles_data/detecting-coffee-anomalies/preview/social_preview.jpg preview_dir: /articles_data/detecting-coffee-anomalies/preview small_preview_image: /articles_data/detecting-coffee-anomalies/anomalies_icon.svg weight: 30 author: Yusuf Sarıgöz author_link: https://medium.com/@yusufsarigoz date: 2022-05-04T13:00:00+03:00 draft: false # aliases: [ /articles/detecting-coffee-anomalies/ ] --- Anomaly detection is a thirsting yet challenging task that has numerous use cases across various industries. The complexity results mainly from the fact that the task is data-scarce by definition. Similarly, anomalies are, again by definition, subject to frequent change, and they may take unexpected forms. For that reason, supervised classification-based approaches are: * Data-hungry - requiring quite a number of labeled data; * Expensive - data labeling is an expensive task itself; * Time-consuming - you would try to obtain what is necessarily scarce; * Hard to maintain - you would need to re-train the model repeatedly in response to changes in the data distribution. These are not desirable features if you want to put your model into production in a rapidly-changing environment. And, despite all the mentioned difficulties, they do not necessarily offer superior performance compared to the alternatives. In this post, we will detail the lessons learned from such a use case. ## Coffee Beans [Agrivero.ai](https://agrivero.ai/) - is a company making AI-enabled solution for quality control & traceability of green coffee for producers, traders, and roasters. They have collected and labeled more than **30 thousand** images of coffee beans with various defects - wet, broken, chipped, or bug-infested samples. This data is used to train a classifier that evaluates crop quality and highlights possible problems. {{< figure src=/articles_data/detecting-coffee-anomalies/detection.gif caption=""Anomalies in coffee"" width=""400px"" >}} We should note that anomalies are very diverse, so the enumeration of all possible anomalies is a challenging task on it's own. In the course of work, new types of defects appear, and shooting conditions change. Thus, a one-time labeled dataset becomes insufficient. Let's find out how metric learning might help to address this challenge. ## Metric Learning Approach In this approach, we aimed to encode images in an n-dimensional vector space and then use learned similarities to label images during the inference. The simplest way to do this is KNN classification. The algorithm retrieves K-nearest neighbors to a given query vector and assigns a label based on the majority vote. In production environment kNN classifier could be easily replaced with [Qdrant](https://github.com/qdrant/qdrant) vector search engine. {{< figure src=/articles_data/detecting-coffee-anomalies/anomalies_detection.png caption=""Production deployment"" >}} This approach has the following advantages: * We can benefit from unlabeled data, considering labeling is time-consuming and expensive. * The relevant metric, e.g., precision or recall, can be tuned according to changing requirements during the inference without re-training. * Queries labeled with a high score can be added to the KNN classifier on the fly as new data points. To apply metric learning, we need to have a neural encoder, a model capable of transforming an image into a vector. Training such an encoder from scratch may require a significant amount of data we might not have. Therefore, we will divide the training into two steps: * The first step is to train the autoencoder, with which we will prepare a model capable of representing the target domain. * The second step is finetuning. Its purpose is to train the model to distinguish the required types of anomalies. {{< figure src=/articles_data/detecting-coffee-anomalies/anomaly_detection_training.png caption=""Model training architecture"" >}} ### Step 1 - Autoencoder for Unlabeled Data First, we pretrained a Resnet18-like model in a vanilla autoencoder architecture by leaving the labels aside. Autoencoder is a model architecture composed of an encoder and a decoder, with the latter trying to recreate the original input from the low-dimensional bottleneck output of the former. There is no intuitive evaluation metric to indicate the performance in this setup, but we can evaluate the success by examining the recreated samples visually. {{< figure src=/articles_data/detecting-coffee-anomalies/image_reconstruction.png caption=""Example of image reconstruction with Autoencoder"" >}} Then we encoded a subset of the data into 128-dimensional vectors by using the encoder, and created a KNN classifier on top of these embeddings and associated labels. Although the results are promising, we can do even better by finetuning with metric learning. ### Step 2 - Finetuning with Metric Learning We started by selecting 200 labeled samples randomly without replacement. In this step, The model was composed of the encoder part of the autoencoder with a randomly initialized projection layer stacked on top of it. We applied transfer learning from the frozen encoder and trained only the projection layer with Triplet Loss and an online batch-all triplet mining strategy. Unfortunately, the model overfitted quickly in this attempt. In the next experiment, we used an online batch-hard strategy with a trick to prevent vector space from collapsing. We will describe our approach in the further articles. This time it converged smoothly, and our evaluation metrics also improved considerably to match the supervised classification approach. {{< figure src=/articles_data/detecting-coffee-anomalies/ae_report_knn.png caption=""Metrics for the autoencoder model with KNN classifier"" >}} {{< figure src=/articles_data/detecting-coffee-anomalies/ft_report_knn.png caption=""Metrics for the finetuned model with KNN classifier"" >}} We repeated this experiment with 500 and 2000 samples, but it showed only a slight improvement. Thus we decided to stick to 200 samples - see below for why. ## Supervised Classification Approach We also wanted to compare our results with the metrics of a traditional supervised classification model. For this purpose, a Resnet50 model was finetuned with ~30k labeled images, made available for training. Surprisingly, the F1 score was around ~0.86. Please note that we used only 200 labeled samples in the metric learning approach instead of ~30k in the supervised classification approach. These numbers indicate a huge saving with no considerable compromise in the performance. ## Conclusion We obtained results comparable to those of the supervised classification method by using **only 0.66%** of the labeled data with metric learning. This approach is time-saving and resource-efficient, and that may be improved further. Possible next steps might be: - Collect more unlabeled data and pretrain a larger autoencoder. - Obtain high-quality labels for a small number of images instead of tens of thousands for finetuning. - Use hyperparameter optimization and possibly gradual unfreezing in the finetuning step. - Use [vector search engine](https://github.com/qdrant/qdrant) to serve Metric Learning in production. We are actively looking into these, and we will continue to publish our findings in this challenge and other use cases of metric learning. ",articles/detecting-coffee-anomalies.md "--- title: Fine Tuning Similar Cars Search short_description: ""How to use similarity learning to search for similar cars"" description: Learn how to train a similarity model that can retrieve similar car images in novel categories. social_preview_image: /articles_data/cars-recognition/preview/social_preview.jpg small_preview_image: /articles_data/cars-recognition/icon.svg preview_dir: /articles_data/cars-recognition/preview weight: 10 author: Yusuf Sarıgöz author_link: https://medium.com/@yusufsarigoz date: 2022-06-28T13:00:00+03:00 draft: false # aliases: [ /articles/cars-recognition/ ] --- Supervised classification is one of the most widely used training objectives in machine learning, but not every task can be defined as such. For example, 1. Your classes may change quickly —e.g., new classes may be added over time, 2. You may not have samples from every possible category, 3. It may be impossible to enumerate all the possible classes during the training time, 4. You may have an essentially different task, e.g., search or retrieval. All such problems may be efficiently solved with similarity learning. N.B.: If you are new to the similarity learning concept, checkout the [awesome-metric-learning](https://github.com/qdrant/awesome-metric-learning) repo for great resources and use case examples. However, similarity learning comes with its own difficulties such as: 1. Need for larger batch sizes usually, 2. More sophisticated loss functions, 3. Changing architectures between training and inference. Quaterion is a fine tuning framework built to tackle such problems in similarity learning. It uses [PyTorch Lightning](https://www.pytorchlightning.ai/) as a backend, which is advertized with the motto, ""spend more time on research, less on engineering."" This is also true for Quaterion, and it includes: 1. Trainable and servable model classes, 2. Annotated built-in loss functions, and a wrapper over [pytorch-metric-learning](https://kevinmusgrave.github.io/pytorch-metric-learning/) when you need even more, 3. Sample, dataset and data loader classes to make it easier to work with similarity learning data, 4. A caching mechanism for faster iterations and less memory footprint. ## A closer look at Quaterion Let's break down some important modules: - `TrainableModel`: A subclass of `pl.LightNingModule` that has additional hook methods such as `configure_encoders`, `configure_head`, `configure_metrics` and others to define objects needed for training and evaluation —see below to learn more on these. - `SimilarityModel`: An inference-only export method to boost code transfer and lower dependencies during the inference time. In fact, Quaterion is composed of two packages: 1. `quaterion_models`: package that you need for inference. 2. `quaterion`: package that defines objects needed for training and also depends on `quaterion_models`. - `Encoder` and `EncoderHead`: Two objects that form a `SimilarityModel`. In most of the cases, you may use a frozen pretrained encoder, e.g., ResNets from `torchvision`, or language modelling models from `transformers`, with a trainable `EncoderHead` stacked on top of it. `quaterion_models` offers several ready-to-use `EncoderHead` implementations, but you may also create your own by subclassing a parent class or easily listing PyTorch modules in a `SequentialHead`. Quaterion has other objects such as distance functions, evaluation metrics, evaluators, convenient dataset and data loader classes, but these are mostly self-explanatory. Thus, they will not be explained in detail in this article for brevity. However, you can always go check out the [documentation](https://quaterion.qdrant.tech) to learn more about them. The focus of this tutorial is a step-by-step solution to a similarity learning problem with Quaterion. This will also help us better understand how the abovementioned objects fit together in a real project. Let's start walking through some of the important parts of the code. If you are looking for the complete source code instead, you can find it under the [examples](https://github.com/qdrant/quaterion/tree/master/examples/cars) directory in the Quaterion repo. ## Dataset In this tutorial, we will use the [Stanford Cars](https://pytorch.org/vision/main/generated/torchvision.datasets.StanfordCars.html) dataset. {{< figure src=https://storage.googleapis.com/quaterion/docs/class_montage.jpg caption=""Stanford Cars Dataset"" >}} It has 16185 images of cars from 196 classes, and it is split into training and testing subsets with almost a 50-50% split. To make things even more interesting, however, we will first merge training and testing subsets, then we will split it into two again in such a way that the half of the 196 classes will be put into the training set and the other half will be in the testing set. This will let us test our model with samples from novel classes that it has never seen in the training phase, which is what supervised classification cannot achieve but similarity learning can. In the following code borrowed from [`data.py`](https://github.com/qdrant/quaterion/blob/master/examples/cars/data.py): - `get_datasets()` function performs the splitting task described above. - `get_dataloaders()` function creates `GroupSimilarityDataLoader` instances from training and testing datasets. - Datasets are regular PyTorch datasets that emit `SimilarityGroupSample` instances. N.B.: Currently, Quaterion has two data types to represent samples in a dataset. To learn more about `SimilarityPairSample`, check out the [NLP tutorial](https://quaterion.qdrant.tech/tutorials/nlp_tutorial.html) ```python import numpy as np import os import tqdm from torch.utils.data import Dataset, Subset from torchvision import datasets, transforms from typing import Callable from pytorch_lightning import seed_everything from quaterion.dataset import ( GroupSimilarityDataLoader, SimilarityGroupSample, ) # set seed to deterministically sample train and test categories later on seed_everything(seed=42) # dataset will be downloaded to this directory under local directory dataset_path = os.path.join(""."", ""torchvision"", ""datasets"") def get_datasets(input_size: int): # Use Mean and std values for the ImageNet dataset as the base model was pretrained on it. # taken from https://www.geeksforgeeks.org/how-to-normalize-images-in-pytorch/ mean = [0.485, 0.456, 0.406] std = [0.229, 0.224, 0.225] # create train and test transforms transform = transforms.Compose( [ transforms.Resize((input_size, input_size)), transforms.ToTensor(), transforms.Normalize(mean, std), ] ) # we need to merge train and test splits into a full dataset first, # and then we will split it to two subsets again with each one composed of distinct labels. full_dataset = datasets.StanfordCars( root=dataset_path, split=""train"", download=True ) + datasets.StanfordCars(root=dataset_path, split=""test"", download=True) # full_dataset contains examples from 196 categories labeled with an integer from 0 to 195 # randomly sample half of it to be used for training train_categories = np.random.choice(a=196, size=196 // 2, replace=False) # get a list of labels for all samples in the dataset labels_list = np.array([label for _, label in tqdm.tqdm(full_dataset)]) # get a mask for indices where label is included in train_categories labels_mask = np.isin(labels_list, train_categories) # get a list of indices to be used as train samples train_indices = np.argwhere(labels_mask).squeeze() # others will be used as test samples test_indices = np.argwhere(np.logical_not(labels_mask)).squeeze() # now that we have distinct indices for train and test sets, we can use `Subset` to create new datasets # from `full_dataset`, which contain only the samples at given indices. # finally, we apply transformations created above. train_dataset = CarsDataset( Subset(full_dataset, train_indices), transform=transform ) test_dataset = CarsDataset( Subset(full_dataset, test_indices), transform=transform ) return train_dataset, test_dataset def get_dataloaders( batch_size: int, input_size: int, shuffle: bool = False, ): train_dataset, test_dataset = get_datasets(input_size) train_dataloader = GroupSimilarityDataLoader( train_dataset, batch_size=batch_size, shuffle=shuffle ) test_dataloader = GroupSimilarityDataLoader( test_dataset, batch_size=batch_size, shuffle=False ) return train_dataloader, test_dataloader class CarsDataset(Dataset): def __init__(self, dataset: Dataset, transform: Callable): self._dataset = dataset self._transform = transform def __len__(self) -> int: return len(self._dataset) def __getitem__(self, index) -> SimilarityGroupSample: image, label = self._dataset[index] image = self._transform(image) return SimilarityGroupSample(obj=image, group=label) ``` ## Trainable Model Now it's time to review one of the most exciting building blocks of Quaterion: [TrainableModel](https://quaterion.qdrant.tech/quaterion.train.trainable_model.html#module-quaterion.train.trainable_model). It is the base class for models you would like to configure for training, and it provides several hook methods starting with `configure_` to set up every aspect of the training phase just like [`pl.LightningModule`](https://pytorch-lightning.readthedocs.io/en/stable/api/pytorch_lightning.core.LightningModule.html), its own base class. It is central to fine tuning with Quaterion, so we will break down this essential code in [`models.py`](https://github.com/qdrant/quaterion/blob/master/examples/cars/models.py) and review each method separately. Let's begin with the imports: ```python import torch import torchvision from quaterion_models.encoders import Encoder from quaterion_models.heads import EncoderHead, SkipConnectionHead from torch import nn from typing import Dict, Union, Optional, List from quaterion import TrainableModel from quaterion.eval.attached_metric import AttachedMetric from quaterion.eval.group import RetrievalRPrecision from quaterion.loss import SimilarityLoss, TripletLoss from quaterion.train.cache import CacheConfig, CacheType from .encoders import CarsEncoder ``` In the following code snippet, we subclass `TrainableModel`. You may use `__init__()` to store some attributes to be used in various `configure_*` methods later on. The more interesting part is, however, in the [`configure_encoders()`](https://quaterion.qdrant.tech/quaterion.train.trainable_model.html#quaterion.train.trainable_model.TrainableModel.configure_encoders) method. We need to return an instance of [`Encoder`](https://quaterion-models.qdrant.tech/quaterion_models.encoders.encoder.html#quaterion_models.encoders.encoder.Encoder) (or a dictionary with `Encoder` instances as values) from this method. In our case, it is an instance of `CarsEncoders`, which we will review soon. Notice now how it is created with a pretrained ResNet152 model whose classification layer is replaced by an identity function. ```python class Model(TrainableModel): def __init__(self, lr: float, mining: str): self._lr = lr self._mining = mining super().__init__() def configure_encoders(self) -> Union[Encoder, Dict[str, Encoder]]: pre_trained_encoder = torchvision.models.resnet152(pretrained=True) pre_trained_encoder.fc = nn.Identity() return CarsEncoder(pre_trained_encoder) ``` In Quaterion, a [`SimilarityModel`](https://quaterion-models.qdrant.tech/quaterion_models.model.html#quaterion_models.model.SimilarityModel) is composed of one or more `Encoder`s and an [`EncoderHead`](https://quaterion-models.qdrant.tech/quaterion_models.heads.encoder_head.html#quaterion_models.heads.encoder_head.EncoderHead). `quaterion_models` has [several `EncoderHead` implementations](https://quaterion-models.qdrant.tech/quaterion_models.heads.html#module-quaterion_models.heads) with a unified API such as a configurable dropout value. You may use one of them or create your own subclass of `EncoderHead`. In either case, you need to return an instance of it from [`configure_head`](https://quaterion.qdrant.tech/quaterion.train.trainable_model.html#quaterion.train.trainable_model.TrainableModel.configure_head) In this example, we will use a `SkipConnectionHead`, which is lightweight and more resistant to overfitting. ```python def configure_head(self, input_embedding_size) -> EncoderHead: return SkipConnectionHead(input_embedding_size, dropout=0.1) ``` Quaterion has implementations of [some popular loss functions](https://quaterion.qdrant.tech/quaterion.loss.html) for similarity learning, all of which subclass either [`GroupLoss`](https://quaterion.qdrant.tech/quaterion.loss.group_loss.html#quaterion.loss.group_loss.GroupLoss) or [`PairwiseLoss`](https://quaterion.qdrant.tech/quaterion.loss.pairwise_loss.html#quaterion.loss.pairwise_loss.PairwiseLoss). In this example, we will use [`TripletLoss`](https://quaterion.qdrant.tech/quaterion.loss.triplet_loss.html#quaterion.loss.triplet_loss.TripletLoss), which is a subclass of `GroupLoss`. In general, subclasses of `GroupLoss` are used with datasets in which samples are assigned with some group (or label). In our example label is a make of the car. Those datasets should emit `SimilarityGroupSample`. Other alternatives are implementations of `PairwiseLoss`, which consume `SimilarityPairSample` - pair of objects for which similarity is specified individually. To see an example of the latter, you may need to check out the [NLP Tutorial](https://quaterion.qdrant.tech/tutorials/nlp_tutorial.html) ```python def configure_loss(self) -> SimilarityLoss: return TripletLoss(mining=self._mining, margin=0.5) ``` `configure_optimizers()` may be familiar to PyTorch Lightning users, but there is a novel `self.model` used inside that method. It is an instance of `SimilarityModel` and is automatically created by Quaterion from the return values of `configure_encoders()` and `configure_head()`. ```python def configure_optimizers(self): optimizer = torch.optim.Adam(self.model.parameters(), self._lr) return optimizer ``` Caching in Quaterion is used for avoiding calculation of outputs of a frozen pretrained `Encoder` in every epoch. When it is configured, outputs will be computed once and cached in the preferred device for direct usage later on. It provides both a considerable speedup and less memory footprint. However, it is quite a bit versatile and has several knobs to tune. To get the most out of its potential, it's recommended that you check out the [cache tutorial](https://quaterion.qdrant.tech/tutorials/cache_tutorial.html). For the sake of making this article self-contained, you need to return a [`CacheConfig`](https://quaterion.qdrant.tech/quaterion.train.cache.cache_config.html#quaterion.train.cache.cache_config.CacheConfig) instance from [`configure_caches()`](https://quaterion.qdrant.tech/quaterion.train.trainable_model.html#quaterion.train.trainable_model.TrainableModel.configure_caches) to specify cache-related preferences such as: - [`CacheType`](https://quaterion.qdrant.tech/quaterion.train.cache.cache_config.html#quaterion.train.cache.cache_config.CacheType), i.e., whether to store caches on CPU or GPU, - `save_dir`, i.e., where to persist caches for subsequent runs, - `batch_size`, i.e., batch size to be used only when creating caches - the batch size to be used during the actual training might be different. ```python def configure_caches(self) -> Optional[CacheConfig]: return CacheConfig( cache_type=CacheType.AUTO, save_dir=""./cache_dir"", batch_size=32 ) ``` We have just configured the training-related settings of a `TrainableModel`. However, evaluation is an integral part of experimentation in machine learning, and you may configure evaluation metrics by returning one or more [`AttachedMetric`](https://quaterion.qdrant.tech/quaterion.eval.attached_metric.html#quaterion.eval.attached_metric.AttachedMetric) instances from `configure_metrics()`. Quaterion has several built-in [group](https://quaterion.qdrant.tech/quaterion.eval.group.html) and [pairwise](https://quaterion.qdrant.tech/quaterion.eval.pair.html) evaluation metrics. ```python def configure_metrics(self) -> Union[AttachedMetric, List[AttachedMetric]]: return AttachedMetric( ""rrp"", metric=RetrievalRPrecision(), prog_bar=True, on_epoch=True, on_step=False, ) ``` ## Encoder As previously stated, a `SimilarityModel` is composed of one or more `Encoder`s and an `EncoderHead`. Even if we freeze pretrained `Encoder` instances, `EncoderHead` is still trainable and has enough parameters to adapt to the new task at hand. It is recommended that you set the `trainable` property to `False` whenever possible, as it lets you benefit from the caching mechanism described above. Another important property is `embedding_size`, which will be passed to `TrainableModel.configure_head()` as `input_embedding_size` to let you properly initialize the head layer. Let's see how an `Encoder` is implemented in the following code borrowed from [`encoders.py`](https://github.com/qdrant/quaterion/blob/master/examples/cars/encoders.py): ```python import os import torch import torch.nn as nn from quaterion_models.encoders import Encoder class CarsEncoder(Encoder): def __init__(self, encoder_model: nn.Module): super().__init__() self._encoder = encoder_model self._embedding_size = 2048 # last dimension from the ResNet model @property def trainable(self) -> bool: return False @property def embedding_size(self) -> int: return self._embedding_size ``` An `Encoder` is a regular `torch.nn.Module` subclass, and we need to implement the forward pass logic in the `forward` method. Depending on how you create your submodules, this method may be more complex; however, we simply pass the input through a pretrained ResNet152 backbone in this example: ```python def forward(self, images): embeddings = self._encoder.forward(images) return embeddings ``` An important step of machine learning development is proper saving and loading of models. Quaterion lets you save your `SimilarityModel` with [`TrainableModel.save_servable()`](https://quaterion.qdrant.tech/quaterion.train.trainable_model.html#quaterion.train.trainable_model.TrainableModel.save_servable) and restore it with [`SimilarityModel.load()`](https://quaterion-models.qdrant.tech/quaterion_models.model.html#quaterion_models.model.SimilarityModel.load). To be able to use these two methods, you need to implement `save()` and `load()` methods in your `Encoder`. Additionally, it is also important that you define your subclass of `Encoder` outside the `__main__` namespace, i.e., in a separate file from your main entry point. It may not be restored properly otherwise. ```python def save(self, output_path: str): os.makedirs(output_path, exist_ok=True) torch.save(self._encoder, os.path.join(output_path, ""encoder.pth"")) @classmethod def load(cls, input_path): encoder_model = torch.load(os.path.join(input_path, ""encoder.pth"")) return CarsEncoder(encoder_model) ``` ## Training With all essential objects implemented, it is easy to bring them all together and run a training loop with the [`Quaterion.fit()`](https://quaterion.qdrant.tech/quaterion.main.html#quaterion.main.Quaterion.fit) method. It expects: - A `TrainableModel`, - A [`pl.Trainer`](https://pytorch-lightning.readthedocs.io/en/stable/common/trainer.html), - A [`SimilarityDataLoader`](https://quaterion.qdrant.tech/quaterion.dataset.similarity_data_loader.html#quaterion.dataset.similarity_data_loader.SimilarityDataLoader) for training data, - And optionally, another `SimilarityDataLoader` for evaluation data. We need to import a few objects to prepare all of these: ```python import os import pytorch_lightning as pl import torch from pytorch_lightning.callbacks import EarlyStopping, ModelSummary from quaterion import Quaterion from .data import get_dataloaders from .models import Model ``` The `train()` function in the following code snippet expects several hyperparameter values as arguments. They can be defined in a `config.py` or passed from the command line. However, that part of the code is omitted for brevity. Instead let's focus on how all the building blocks are initialized and passed to `Quaterion.fit()`, which is responsible for running the whole loop. When the training loop is complete, you can simply call `TrainableModel.save_servable()` to save the current state of the `SimilarityModel` instance: ```python def train( lr: float, mining: str, batch_size: int, epochs: int, input_size: int, shuffle: bool, save_dir: str, ): model = Model( lr=lr, mining=mining, ) train_dataloader, val_dataloader = get_dataloaders( batch_size=batch_size, input_size=input_size, shuffle=shuffle ) early_stopping = EarlyStopping( monitor=""validation_loss"", patience=50, ) trainer = pl.Trainer( gpus=1 if torch.cuda.is_available() else 0, max_epochs=epochs, callbacks=[early_stopping, ModelSummary(max_depth=3)], enable_checkpointing=False, log_every_n_steps=1, ) Quaterion.fit( trainable_model=model, trainer=trainer, train_dataloader=train_dataloader, val_dataloader=val_dataloader, ) model.save_servable(save_dir) ``` ## Evaluation Let's see what we have achieved with these simple steps. [`evaluate.py`](https://github.com/qdrant/quaterion/blob/master/examples/cars/evaluate.py) has two functions to evaluate both the baseline model and the tuned similarity model. We will review only the latter for brevity. In addition to the ease of restoring a `SimilarityModel`, this code snippet also shows how to use [`Evaluator`](https://quaterion.qdrant.tech/quaterion.eval.evaluator.html#quaterion.eval.evaluator.Evaluator) to evaluate the performance of a `SimilarityModel` on a given dataset by given evaluation metrics. {{< figure src=https://storage.googleapis.com/quaterion/docs/original_vs_tuned_cars.png caption=""Comparison of original and tuned models for retrieval"" >}} Full evaluation of a dataset usually grows exponentially, and thus you may want to perform a partial evaluation on a sampled subset. In this case, you may use [samplers](https://quaterion.qdrant.tech/quaterion.eval.samplers.html) to limit the evaluation. Similar to `Quaterion.fit()` used for training, [`Quaterion.evaluate()`](https://quaterion.qdrant.tech/quaterion.main.html#quaterion.main.Quaterion.evaluate) runs a complete evaluation loop. It takes the following as arguments: - An `Evaluator` instance created with given evaluation metrics and a `Sampler`, - The `SimilarityModel` to be evaluated, - And the evaluation dataset. ```python def eval_tuned_encoder(dataset, device): print(""Evaluating tuned encoder..."") tuned_cars_model = SimilarityModel.load( os.path.join(os.path.dirname(__file__), ""cars_encoders"") ).to(device) tuned_cars_model.eval() result = Quaterion.evaluate( evaluator=Evaluator( metrics=RetrievalRPrecision(), sampler=GroupSampler(sample_size=1000, device=device, log_progress=True), ), model=tuned_cars_model, dataset=dataset, ) print(result) ``` ## Conclusion In this tutorial, we trained a similarity model to search for similar cars from novel categories unseen in the training phase. Then, we evaluated it on a test dataset by the Retrieval R-Precision metric. The base model scored 0.1207, and our tuned model hit 0.2540, a twice higher score. These scores can be seen in the following figure: {{< figure src=/articles_data/cars-recognition/cars_metrics.png caption=""Metrics for the base and tuned models"" >}} ",articles/cars-recognition.md "--- title: ""How to Optimize RAM Requirements for 1 Million Vectors: A Case Study"" short_description: Master RAM measurement and memory optimization for optimal performance and resource use. description: Unlock the secrets of efficient RAM measurement and memory optimization with this comprehensive guide, ensuring peak performance and resource utilization. social_preview_image: /articles_data/memory-consumption/preview/social_preview.jpg preview_dir: /articles_data/memory-consumption/preview small_preview_image: /articles_data/memory-consumption/icon.svg weight: 7 author: Andrei Vasnetsov author_link: https://blog.vasnetsov.com/ date: 2022-12-07T10:18:00.000Z # aliases: [ /articles/memory-consumption/ ] --- # Mastering RAM Measurement and Memory Optimization in Qdrant: A Comprehensive Guide When it comes to measuring the memory consumption of our processes, we often rely on tools such as `htop` to give us an indication of how much RAM is being used. However, this method can be misleading and doesn't always accurately reflect the true memory usage of a process. There are many different ways in which `htop` may not be a reliable indicator of memory usage. For instance, a process may allocate memory in advance but not use it, or it may not free deallocated memory, leading to overstated memory consumption. A process may be forked, which means that it will have a separate memory space, but it will share the same code and data with the parent process. This means that the memory consumption of the child process will be counted twice. Additionally, a process may utilize disk cache, which is also accounted as resident memory in the `htop` measurements. As a result, even if `htop` shows that a process is using 10GB of memory, it doesn't necessarily mean that the process actually requires 10GB of RAM to operate efficiently. In this article, we will explore how to properly measure RAM usage and optimize [Qdrant](https://qdrant.tech/) for optimal memory consumption. ## How to measure actual RAM requirements We need to know memory consumption in order to estimate how much RAM is required to run the program. So in order to determine that, we can conduct a simple experiment. Let's limit the allowed memory of the process and observe at which point it stops functioning. In this way we can determine the minimum amount of RAM the program needs to operate. One way to do this is by conducting a grid search, but a more efficient method is to use binary search to quickly find the minimum required amount of RAM. We can use docker to limit the memory usage of the process. Before running each benchmark, it is important to clear the page cache with the following command: ```bash sudo bash -c 'sync; echo 1 > /proc/sys/vm/drop_caches' ``` This ensures that the process doesn't utilize any data from previous runs, providing more accurate and consistent results. We can use the following command to run Qdrant with a memory limit of 1GB: ```bash docker run -it --rm \ --memory 1024mb \ --network=host \ -v ""$(pwd)/data/storage:/qdrant/storage"" \ qdrant/qdrant:latest ``` ## Let's run some benchmarks Let's run some benchmarks to see how much RAM Qdrant needs to serve 1 million vectors. We can use the `glove-100-angular` and scripts from the [vector-db-benchmark](https://github.com/qdrant/vector-db-benchmark) project to upload and query the vectors. With the first run we will use the default configuration of Qdrant with all data stored in RAM. ```bash # Upload vectors python run.py --engines qdrant-all-in-ram --datasets glove-100-angular ``` After uploading vectors, we will repeat the same experiment with different RAM limits to see how they affect the memory consumption and search speed. ```bash # Search vectors python run.py --engines qdrant-all-in-ram --datasets glove-100-angular --skip-upload ``` ### All in Memory In the first experiment, we tested how well our system performs when all vectors are stored in memory. We tried using different amounts of memory, ranging from 1512mb to 1024mb, and measured the number of requests per second (rps) that our system was able to handle. | Memory | Requests/s | |--------|---------------| | 1512mb | 774.38 | | 1256mb | 760.63 | | 1200mb | 794.72 | | 1152mb | out of memory | | 1024mb | out of memory | We found that 1152MB memory limit resulted in our system running out of memory, but using 1512mb, 1256mb, and 1200mb of memory resulted in our system being able to handle around 780 RPS. This suggests that about 1.2GB of memory is needed to serve around 1 million vectors, and there is no speed degradation when limiting memory usage above 1.2GB. ### Vectors stored using MMAP Let's go a bit further! In the second experiment, we tested how well our system performs when **vectors are stored using the memory-mapped file** (mmap). Create collection with: ```http PUT /collections/benchmark { ""vectors"": { ... ""on_disk"": true } } ``` This configuration tells Qdrant to use mmap for vectors if the segment size is greater than 20000Kb (which is approximately 40K 128d-vectors). Now the out-of-memory happens when we allow using **600mb** RAM only
Experiments details | Memory | Requests/s | |--------|---------------| | 1200mb | 759.94 | | 1100mb | 687.00 | | 1000mb | 10 | --- use a bit faster disk --- | Memory | Requests/s | |--------|---------------| | 1000mb | 25 rps | | 750mb | 5 rps | | 625mb | 2.5 rps | | 600mb | out of memory |
At this point we have to switch from network-mounted storage to a faster disk, as the network-based storage is too slow to handle the amount of sequential reads that our system needs to serve the queries. But let's first see how much RAM we need to serve 1 million vectors and then we will discuss the speed optimization as well. ### Vectors and HNSW graph stored using MMAP In the third experiment, we tested how well our system performs when vectors and [HNSW](https://qdrant.tech/articles/filtrable-hnsw/) graph are stored using the memory-mapped files. Create collection with: ```http PUT /collections/benchmark { ""vectors"": { ... ""on_disk"": true }, ""hnsw_config"": { ""on_disk"": true }, ... } ``` With this configuration we are able to serve 1 million vectors with **only 135mb of RAM**!
Experiments details | Memory | Requests/s | |--------|---------------| | 600mb | 5 rps | | 300mb | 0.9 rps / 1.1 sec per query | | 150mb | 0.4 rps / 2.5 sec per query | | 135mb | 0.33 rps / 3 sec per query | | 125mb | out of memory |
At this point the importance of the disk speed becomes critical. We can serve the search requests with 135mb of RAM, but the speed of the requests makes it impossible to use the system in production. Let's see how we can improve the speed. ## How to speed up the search To measure the impact of disk parameters on search speed, we used the `fio` tool to test the speed of different types of disks. ```bash # Install fio sudo apt-get install fio # Run fio to check the random reads speed fio --randrepeat=1 \ --ioengine=libaio \ --direct=1 \ --gtod_reduce=1 \ --name=fiotest \ --filename=testfio \ --bs=4k \ --iodepth=64 \ --size=8G \ --readwrite=randread ``` Initially, we tested on a network-mounted disk, but its performance was too slow, with a read IOPS of 6366 and a bandwidth of 24.9 MiB/s: ```text read: IOPS=6366, BW=24.9MiB/s (26.1MB/s)(8192MiB/329424msec) ``` To improve performance, we switched to a local disk, which showed much faster results, with a read IOPS of 63.2k and a bandwidth of 247 MiB/s: ```text read: IOPS=63.2k, BW=247MiB/s (259MB/s)(8192MiB/33207msec) ``` That gave us a significant speed boost, but we wanted to see if we could improve performance even further. To do that, we switched to a machine with a local SSD, which showed even better results, with a read IOPS of 183k and a bandwidth of 716 MiB/s: ```text read: IOPS=183k, BW=716MiB/s (751MB/s)(8192MiB/11438msec) ``` Let's see how these results translate into search speed: | Memory | RPS with IOPS=63.2k | RPS with IOPS=183k | |--------|---------------------|--------------------| | 600mb | 5 | 50 | | 300mb | 0.9 | 13 | | 200mb | 0.5 | 8 | | 150mb | 0.4 | 7 | As you can see, the speed of the disk has a significant impact on the search speed. With a local SSD, we were able to increase the search speed by 10x! With the production-grade disk, the search speed could be even higher. Some configurations of the SSDs can reach 1M IOPS and more. Which might be an interesting option to serve large datasets with low search latency in Qdrant. ## Conclusion In this article, we showed that Qdrant has flexibility in terms of RAM usage and can be used to serve large datasets. It provides configurable trade-offs between RAM usage and search speed. If you’re interested to learn more about Qdrant, [book a demo today](https://qdrant.tech/contact-us/)! We are eager to learn more about how you use Qdrant in your projects, what challenges you face, and how we can help you solve them. Please feel free to join our [Discord](https://qdrant.to/discord) and share your experience with us! ",articles/memory-consumption.md "--- title: ""Vector Search as a dedicated service"" short_description: ""Why vector search requires to be a dedicated service."" description: ""Why vector search requires a dedicated service."" social_preview_image: /articles_data/dedicated-service/social-preview.png small_preview_image: /articles_data/dedicated-service/preview/icon.svg preview_dir: /articles_data/dedicated-service/preview weight: -70 author: Andrey Vasnetsov author_link: https://vasnetsov.com/ date: 2023-11-30T10:00:00+03:00 draft: false keywords: - system architecture - vector search - best practices - anti-patterns --- Ever since the data science community discovered that vector search significantly improves LLM answers, various vendors and enthusiasts have been arguing over the proper solutions to store embeddings. Some say storing them in a specialized engine (aka vector database) is better. Others say that it's enough to use plugins for existing databases. Here are [just](https://nextword.substack.com/p/vector-database-is-not-a-separate) a [few](https://stackoverflow.blog/2023/09/20/do-you-need-a-specialized-vector-database-to-implement-vector-search-well/) of [them](https://www.singlestore.com/blog/why-your-vector-database-should-not-be-a-vector-database/). This article presents our vision and arguments on the topic . We will: 1. Explain why and when you actually need a dedicated vector solution 2. Debunk some ungrounded claims and anti-patterns to be avoided when building a vector search system. A table of contents: * *Each database vendor will sooner or later introduce vector capabilities...* [[click](#each-database-vendor-will-sooner-or-later-introduce-vector-capabilities-that-will-make-every-database-a-vector-database)] * *Having a dedicated vector database requires duplication of data.* [[click](#having-a-dedicated-vector-database-requires-duplication-of-data)] * *Having a dedicated vector database requires complex data synchronization.* [[click](#having-a-dedicated-vector-database-requires-complex-data-synchronization)] * *You have to pay for a vector service uptime and data transfer.* [[click](#you-have-to-pay-for-a-vector-service-uptime-and-data-transfer-of-both-solutions)] * *What is more seamless than your current database adding vector search capability?* [[click](#what-is-more-seamless-than-your-current-database-adding-vector-search-capability)] * *Databases can support RAG use-case end-to-end.* [[click](#databases-can-support-rag-use-case-end-to-end)] ## Responding to claims ###### Each database vendor will sooner or later introduce vector capabilities. That will make every database a Vector Database. The origins of this misconception lie in the careless use of the term Vector *Database*. When we think of a *database*, we subconsciously envision a relational database like Postgres or MySQL. Or, more scientifically, a service built on ACID principles that provides transactions, strong consistency guarantees, and atomicity. The majority of Vector Database are not *databases* in this sense. It is more accurate to call them *search engines*, but unfortunately, the marketing term *vector database* has already stuck, and it is unlikely to change. *What makes search engines different, and why vector DBs are built as search engines?* First of all, search engines assume different patterns of workloads and prioritize different properties of the system. The core architecture of such solutions is built around those priorities. What types of properties do search engines prioritize? * **Scalability**. Search engines are built to handle large amounts of data and queries. They are designed to be horizontally scalable and operate with more data than can fit into a single machine. * **Search speed**. Search engines should guarantee low latency for queries, while the atomicity of updates is less important. * **Availability**. Search engines must stay available if the majority of the nodes in a cluster are down. At the same time, they can tolerate the eventual consistency of updates. {{< figure src=/articles_data/dedicated-service/compass.png caption=""Database guarantees compass"" width=80% >}} Those priorities lead to different architectural decisions that are not reproducible in a general-purpose database, even if it has vector index support. ###### Having a dedicated vector database requires duplication of data. By their very nature, vector embeddings are derivatives of the primary source data. In the vast majority of cases, embeddings are derived from some other data, such as text, images, or additional information stored in your system. So, in fact, all embeddings you have in your system can be considered transformations of some original source. And the distinguishing feature of derivative data is that it will change when the transformation pipeline changes. In the case of vector embeddings, the scenario of those changes is quite simple: every time you update the encoder model, all the embeddings will change. In systems where vector embeddings are fused with the primary data source, it is impossible to perform such migrations without significantly affecting the production system. As a result, even if you want to use a single database for storing all kinds of data, you would still need to duplicate data internally. ###### Having a dedicated vector database requires complex data synchronization. Most production systems prefer to isolate different types of workloads into separate services. In many cases, those isolated services are not even related to search use cases. For example, databases for analytics and one for serving can be updated from the same source. Yet they can store and organize the data in a way that is optimal for their typical workloads. Search engines are usually isolated for the same reason: you want to avoid creating a noisy neighbor problem and compromise the performance of your main database. *To give you some intuition, let's consider a practical example:* Assume we have a database with 1 million records. This is a small database by modern standards of any relational database. You can probably use the smallest free tier of any cloud provider to host it. But if we want to use this database for vector search, 1 million OpenAI `text-embedding-ada-002` embeddings will take **~6GB of RAM** (sic!). As you can see, the vector search use case completely overwhelmed the main database resource requirements. In practice, this means that your main database becomes burdened with high memory requirements and can not scale efficiently, limited by the size of a single machine. Fortunately, the data synchronization problem is not new and definitely not unique to vector search. There are many well-known solutions, starting with message queues and ending with specialized ETL tools. For example, we recently released our [integration with Airbyte](/documentation/integrations/airbyte/), allowing you to synchronize data from various sources into Qdrant incrementally. ###### You have to pay for a vector service uptime and data transfer of both solutions. In the open-source world, you pay for the resources you use, not the number of different databases you run. Resources depend more on the optimal solution for each use case. As a result, running a dedicated vector search engine can be even cheaper, as it allows optimization specifically for vector search use cases. For instance, Qdrant implements a number of [quantization techniques](/documentation/guides/quantization/) that can significantly reduce the memory footprint of embeddings. In terms of data transfer costs, on most cloud providers, network use within a region is usually free. As long as you put the original source data and the vector store in the same region, there are no added data transfer costs. ###### What is more seamless than your current database adding vector search capability? In contrast to the short-term attractiveness of integrated solutions, dedicated search engines propose flexibility and a modular approach. You don't need to update the whole production database each time some of the vector plugins are updated. Maintenance of a dedicated search engine is as isolated from the main database as the data itself. In fact, integration of more complex scenarios, such as read/write segregation, is much easier with a dedicated vector solution. You can easily build cross-region replication to ensure low latency for your users. {{< figure src=/articles_data/dedicated-service/region-based-deploy.png caption=""Read/Write segregation + cross-regional deployment"" width=80% >}} It is especially important in large enterprise organizations, where the responsibility for different parts of the system is distributed among different teams. In those situations, it is much easier to maintain a dedicated search engine for the AI team than to convince the core team to update the whole primary database. Finally, the vector capabilities of the all-in-one database are tied to the development and release cycle of the entire stack. Their long history of use also means that they need to pay a high price for backward compatibility. ###### Databases can support RAG use-case end-to-end. Putting aside performance and scalability questions, the whole discussion about implementing RAG in the DBs assumes that the only detail missing in traditional databases is the vector index and the ability to make fast ANN queries. In fact, the current capabilities of vector search have only scratched the surface of what is possible. For example, in our recent article, we discuss the possibility of building an [exploration API](/articles/vector-similarity-beyond-search/) to fuel the discovery process - an alternative to kNN search, where you don’t even know what exactly you are looking for. ## Summary Ultimately, you do not need a vector database if you are looking for a simple vector search functionality with a small amount of data. We genuinely recommend starting with whatever you already have in your stack to prototype. But you need one if you are looking to do more out of it, and it is the central functionality of your application. It is just like using a multi-tool to make something quick or using a dedicated instrument highly optimized for the use case. Large-scale production systems usually consist of different specialized services and storage types for good reasons since it is one of the best practices of modern software architecture. Comparable to the orchestration of independent building blocks in a microservice architecture. When you stuff the database with a vector index, you compromise both the performance and scalability of the main database and the vector search capabilities. There is no one-size-fits-all approach that would not compromise on performance or flexibility. So if your use case utilizes vector search in any significant way, it is worth investing in a dedicated vector search engine, aka vector database. ",articles/dedicated-service.md "--- title: Triplet Loss - Advanced Intro short_description: ""What are the advantages of Triplet Loss and how to efficiently implement it?"" description: ""What are the advantages of Triplet Loss over Contrastive loss and how to efficiently implement it?"" social_preview_image: /articles_data/triplet-loss/social_preview.jpg preview_dir: /articles_data/triplet-loss/preview small_preview_image: /articles_data/triplet-loss/icon.svg weight: 30 author: Yusuf Sarıgöz author_link: https://medium.com/@yusufsarigoz date: 2022-03-24T15:12:00+03:00 # aliases: [ /articles/triplet-loss/ ] --- ## What is Triplet Loss? Triplet Loss was first introduced in [FaceNet: A Unified Embedding for Face Recognition and Clustering](https://arxiv.org/abs/1503.03832) in 2015, and it has been one of the most popular loss functions for supervised similarity or metric learning ever since. In its simplest explanation, Triplet Loss encourages that dissimilar pairs be distant from any similar pairs by at least a certain margin value. Mathematically, the loss value can be calculated as $L=max(d(a,p) - d(a,n) + m, 0)$, where: - $p$, i.e., positive, is a sample that has the same label as $a$, i.e., anchor, - $n$, i.e., negative, is another sample that has a label different from $a$, - $d$ is a function to measure the distance between these three samples, - and $m$ is a margin value to keep negative samples far apart. The paper uses Euclidean distance, but it is equally valid to use any other distance metric, e.g., cosine distance. The function has a learning objective that can be visualized as in the following: {{< figure src=/articles_data/triplet-loss/loss_objective.png caption=""Triplet Loss learning objective"" >}} Notice that Triplet Loss does not have a side effect of urging to encode anchor and positive samples into the same point in the vector space as in Contrastive Loss. This lets Triplet Loss tolerate some intra-class variance, unlike Contrastive Loss, as the latter forces the distance between an anchor and any positive essentially to $0$. In other terms, Triplet Loss allows to stretch clusters in such a way as to include outliers while still ensuring a margin between samples from different clusters, e.g., negative pairs. Additionally, Triplet Loss is less greedy. Unlike Contrastive Loss, it is already satisfied when different samples are easily distinguishable from similar ones. It does not change the distances in a positive cluster if there is no interference from negative examples. This is due to the fact that Triplet Loss tries to ensure a margin between distances of negative pairs and distances of positive pairs. However, Contrastive Loss takes into account the margin value only when comparing dissimilar pairs, and it does not care at all where similar pairs are at that moment. This means that Contrastive Loss may reach a local minimum earlier, while Triplet Loss may continue to organize the vector space in a better state. Let's demonstrate how two loss functions organize the vector space by animations. For simpler visualization, the vectors are represented by points in a 2-dimensional space, and they are selected randomly from a normal distribution. {{< figure src=/articles_data/triplet-loss/contrastive.gif caption=""Animation that shows how Contrastive Loss moves points in the course of training."" >}} {{< figure src=/articles_data/triplet-loss/triplet.gif caption=""Animation that shows how Triplet Loss moves points in the course of training."" >}} From mathematical interpretations of the two-loss functions, it is clear that Triplet Loss is theoretically stronger, but Triplet Loss has additional tricks that help it work better. Most importantly, Triplet Loss introduce online triplet mining strategies, e.g., automatically forming the most useful triplets. ## Why triplet mining matters? The formulation of Triplet Loss demonstrates that it works on three objects at a time: - `anchor`, - `positive` - a sample that has the same label as the anchor, - and `negative` - a sample with a different label from the anchor and the positive. In a naive implementation, we could form such triplets of samples at the beginning of each epoch and then feed batches of such triplets to the model throughout that epoch. This is called ""offline strategy."" However, this would not be so efficient for several reasons: - It needs to pass $3n$ samples to get a loss value of $n$ triplets. - Not all these triplets will be useful for the model to learn anything, e.g., yielding a positive loss value. - Even if we form ""useful"" triplets at the beginning of each epoch with one of the methods that I will be implementing in this series, they may become ""useless"" at some point in the epoch as the model weights will be constantly updated. Instead, we can get a batch of $n$ samples and their associated labels, and form triplets on the fly. That is called ""online strategy."" Normally, this gives $n^3$ possible triplets, but only a subset of such possible triplets will be actually valid. Even in this case, we will have a loss value calculated from much more triplets than the offline strategy. Given a triplet of `(a, p, n)`, it is valid only if: - `a` and `p` has the same label, - `a` and `p` are distinct samples, - and `n` has a different label from `a` and `p`. These constraints may seem to be requiring expensive computation with nested loops, but it can be efficiently implemented with tricks such as distance matrix, masking, and broadcasting. The rest of this series will focus on the implementation of these tricks. ## Distance matrix A distance matrix is a matrix of shape $(n, n)$ to hold distance values between all possible pairs made from items in two $n$-sized collections. This matrix can be used to vectorize calculations that would need inefficient loops otherwise. Its calculation can be optimized as well, and we will implement [Euclidean Distance Matrix Trick (PDF)](https://www.robots.ox.ac.uk/~albanie/notes/Euclidean_distance_trick.pdf) explained by Samuel Albanie. You may want to read this three-page document for the full intuition of the trick, but a brief explanation is as follows: - Calculate the dot product of two collections of vectors, e.g., embeddings in our case. - Extract the diagonal from this matrix that holds the squared Euclidean norm of each embedding. - Calculate the squared Euclidean distance matrix based on the following equation: $||a - b||^2 = ||a||^2 - 2 ⟨a, b⟩ + ||b||^2$ - Get the square root of this matrix for non-squared distances. We will implement it in PyTorch, so let's start with imports. ```python import torch import torch.nn as nn import torch.nn.functional as F eps = 1e-8 # an arbitrary small value to be used for numerical stability tricks ``` --- ```python def euclidean_distance_matrix(x): """"""Efficient computation of Euclidean distance matrix Args: x: Input tensor of shape (batch_size, embedding_dim) Returns: Distance matrix of shape (batch_size, batch_size) """""" # step 1 - compute the dot product # shape: (batch_size, batch_size) dot_product = torch.mm(x, x.t()) # step 2 - extract the squared Euclidean norm from the diagonal # shape: (batch_size,) squared_norm = torch.diag(dot_product) # step 3 - compute squared Euclidean distances # shape: (batch_size, batch_size) distance_matrix = squared_norm.unsqueeze(0) - 2 * dot_product + squared_norm.unsqueeze(1) # get rid of negative distances due to numerical instabilities distance_matrix = F.relu(distance_matrix) # step 4 - compute the non-squared distances # handle numerical stability # derivative of the square root operation applied to 0 is infinite # we need to handle by setting any 0 to eps mask = (distance_matrix == 0.0).float() # use this mask to set indices with a value of 0 to eps distance_matrix += mask * eps # now it is safe to get the square root distance_matrix = torch.sqrt(distance_matrix) # undo the trick for numerical stability distance_matrix *= (1.0 - mask) return distance_matrix ``` ## Invalid triplet masking Now that we can compute a distance matrix for all possible pairs of embeddings in a batch, we can apply broadcasting to enumerate distance differences for all possible triplets and represent them in a tensor of shape `(batch_size, batch_size, batch_size)`. However, only a subset of these $n^3$ triplets are actually valid as I mentioned earlier, and we need a corresponding mask to compute the loss value correctly. We will implement such a helper function in three steps: - Compute a mask for distinct indices, e.g., `(i != j and j != k)`. - Compute a mask for valid anchor-positive-negative triplets, e.g., `labels[i] == labels[j] and labels[j] != labels[k]`. - Combine two masks. ```python def get_triplet_mask(labels): """"""compute a mask for valid triplets Args: labels: Batch of integer labels. shape: (batch_size,) Returns: Mask tensor to indicate which triplets are actually valid. Shape: (batch_size, batch_size, batch_size) A triplet is valid if: `labels[i] == labels[j] and labels[i] != labels[k]` and `i`, `j`, `k` are different. """""" # step 1 - get a mask for distinct indices # shape: (batch_size, batch_size) indices_equal = torch.eye(labels.size()[0], dtype=torch.bool, device=labels.device) indices_not_equal = torch.logical_not(indices_equal) # shape: (batch_size, batch_size, 1) i_not_equal_j = indices_not_equal.unsqueeze(2) # shape: (batch_size, 1, batch_size) i_not_equal_k = indices_not_equal.unsqueeze(1) # shape: (1, batch_size, batch_size) j_not_equal_k = indices_not_equal.unsqueeze(0) # Shape: (batch_size, batch_size, batch_size) distinct_indices = torch.logical_and(torch.logical_and(i_not_equal_j, i_not_equal_k), j_not_equal_k) # step 2 - get a mask for valid anchor-positive-negative triplets # shape: (batch_size, batch_size) labels_equal = labels.unsqueeze(0) == labels.unsqueeze(1) # shape: (batch_size, batch_size, 1) i_equal_j = labels_equal.unsqueeze(2) # shape: (batch_size, 1, batch_size) i_equal_k = labels_equal.unsqueeze(1) # shape: (batch_size, batch_size, batch_size) valid_indices = torch.logical_and(i_equal_j, torch.logical_not(i_equal_k)) # step 3 - combine two masks mask = torch.logical_and(distinct_indices, valid_indices) return mask ``` ## Batch-all strategy for online triplet mining Now we are ready for actually implementing Triplet Loss itself. Triplet Loss involves several strategies to form or select triplets, and the simplest one is to use all valid triplets that can be formed from samples in a batch. This can be achieved in four easy steps thanks to utility functions we've already implemented: - Get a distance matrix of all possible pairs that can be formed from embeddings in a batch. - Apply broadcasting to this matrix to compute loss values for all possible triplets. - Set loss values of invalid or easy triplets to $0$. - Average the remaining positive values to return a scalar loss. I will start by implementing this strategy, and more complex ones will follow as separate posts. ```python class BatchAllTtripletLoss(nn.Module): """"""Uses all valid triplets to compute Triplet loss Args: margin: Margin value in the Triplet Loss equation """""" def __init__(self, margin=1.): super().__init__() self.margin = margin def forward(self, embeddings, labels): """"""computes loss value. Args: embeddings: Batch of embeddings, e.g., output of the encoder. shape: (batch_size, embedding_dim) labels: Batch of integer labels associated with embeddings. shape: (batch_size,) Returns: Scalar loss value. """""" # step 1 - get distance matrix # shape: (batch_size, batch_size) distance_matrix = euclidean_distance_matrix(embeddings) # step 2 - compute loss values for all triplets by applying broadcasting to distance matrix # shape: (batch_size, batch_size, 1) anchor_positive_dists = distance_matrix.unsqueeze(2) # shape: (batch_size, 1, batch_size) anchor_negative_dists = distance_matrix.unsqueeze(1) # get loss values for all possible n^3 triplets # shape: (batch_size, batch_size, batch_size) triplet_loss = anchor_positive_dists - anchor_negative_dists + self.margin # step 3 - filter out invalid or easy triplets by setting their loss values to 0 # shape: (batch_size, batch_size, batch_size) mask = get_triplet_mask(labels) triplet_loss *= mask # easy triplets have negative loss values triplet_loss = F.relu(triplet_loss) # step 4 - compute scalar loss value by averaging positive losses num_positive_losses = (triplet_loss > eps).float().sum() triplet_loss = triplet_loss.sum() / (num_positive_losses + eps) return triplet_loss ``` ## Conclusion I mentioned that Triplet Loss is different from Contrastive Loss not only mathematically but also in its sample selection strategies, and I implemented the batch-all strategy for online triplet mining in this post efficiently by using several tricks. There are other more complicated strategies such as batch-hard and batch-semihard mining, but their implementations, and discussions of the tricks I used for efficiency in this post, are worth separate posts of their own. The future posts will cover such topics and additional discussions on some tricks to avoid vector collapsing and control intra-class and inter-class variance.",articles/triplet-loss.md "--- title: ""Qdrant Internals: Immutable Data Structures"" short_description: ""Learn how immutable data structures improve vector search performance in Qdrant."" description: ""Learn how immutable data structures improve vector search performance in Qdrant."" social_preview_image: /articles_data/immutable-data-structures/social_preview.png preview_dir: /articles_data/immutable-data-structures/preview weight: -200 author: Andrey Vasnetsov date: 2024-08-20T10:45:00+02:00 draft: false keywords: - data structures - optimization - immutable data structures - perfect hashing - defragmentation --- ## Data Structures 101 Those who took programming courses might remember that there is no such thing as a universal data structure. Some structures are good at accessing elements by index (like arrays), while others shine in terms of insertion efficiency (like linked lists). {{< figure src=""/articles_data/immutable-data-structures/hardware-optimized.png"" alt=""Hardware-optimized data structure"" caption=""Hardware-optimized data structure"" width=""80%"" >}} However, when we move from theoretical data structures to real-world systems, and particularly in performance-critical areas such as [vector search](/use-cases/), things become more complex. [Big-O notation](https://en.wikipedia.org/wiki/Big_O_notation) provides a good abstraction, but it doesn’t account for the realities of modern hardware: cache misses, memory layout, disk I/O, and other low-level considerations that influence actual performance. > From the perspective of hardware efficiency, the ideal data structure is a contiguous array of bytes that can be read sequentially in a single thread. This scenario allows hardware optimizations like prefetching, caching, and branch prediction to operate at their best. However, real-world use cases require more complex structures to perform various operations like insertion, deletion, and search. These requirements increase complexity and introduce performance trade-offs. ### Mutability One of the most significant challenges when working with data structures is ensuring **mutability — the ability to change the data structure after it’s created**, particularly with fast update operations. Let’s consider a simple example: we want to iterate over items in sorted order. Without a mutability requirement, we can use a simple array and sort it once. This is very close to our ideal scenario. We can even put the structure on disk - which is trivial for an array. However, if we need to insert an item into this array, **things get more complicated**. Inserting into a sorted array requires shifting all elements after the insertion point, which leads to linear time complexity for each insertion, which is not acceptable for many applications. To handle such cases, more complex structures like [B-trees](https://en.wikipedia.org/wiki/B-tree) come into play. B-trees are specifically designed to optimize both insertion and read operations for large data sets. However, they sacrifice the raw speed of array reads for better insertion performance. Here’s a benchmark that illustrates the difference between iterating over a plain array and a BTreeSet in Rust: ```rust use std::collections::BTreeSet; use rand::Rng; fn main() { // Benchmark plain vector VS btree in a task of iteration over all elements let mut rand = rand::thread_rng(); let vector: Vec<_> = (0..1000000).map(|_| rand.gen::()).collect(); let btree: BTreeSet<_> = vector.iter().copied().collect(); { let mut sum = 0; for el in vector { sum += el; } } // Elapsed: 850.924µs { let mut sum = 0; for el in btree { sum += el; } } // Elapsed: 5.213025ms, ~6x slower } ``` [Vector databases](https://qdrant.tech/), like Qdrant, have to deal with a large variety of data structures. If we could make them immutable, it would significantly improve performance and optimize memory usage. ## How Does Immutability Help? A large part of the immutable advantage comes from the fact that we know the exact data we need to put into the structure even before we start building it. The simplest example is a sorted array: we would know exactly how many elements we have to put into the array so we can allocate the exact amount of memory once. More complex data structures might require additional statistics to be collected before the structure is built. A Qdrant-related example of this is [Scalar Quantization](/articles/scalar-quantization/#conversion-to-integers): in order to select proper quantization levels, we have to know the distribution of the data. {{< figure src=""/articles_data/immutable-data-structures/quantization-quantile.png"" alt=""Scalar Quantization Quantile"" caption=""Scalar Quantization Quantile"" width=""70%"" >}} Computing this distribution requires knowing all the data in advance, but once we have it, applying scalar quantization is a simple operation. Let's take a look at a non-exhaustive list of data structures and potential improvements we can get from making them immutable: |Function| Mutable Data Structure | Immutable Alternative | Potential improvements | |----|------|------|------------------------| | Read by index | Array | Fixed chunk of memory | Allocate exact amount of memory | | Vector Storage | Array or Arrays | Memory-mapped file | Offload data to disk | | Read sorted ranges| B-Tree | Sorted Array | Store all data close, avoid cache misses | | Read by key | Hash Map | Hash Map with Perfect Hashing | Avoid hash collisions | | Get documents by keyword | Inverted Index | Inverted Index with Sorted
and BitPacked Postings | Less memory usage, faster search | | Vector Search | HNSW graph | HNSW graph with
payload-aware connections | Better precision with filters | | Tenant Isolation | Vector Storage | Defragmented Vector Storage | Faster access to on-disk data | For more info on payload-aware connections in HNSW, read our [previous article](/articles/filtrable-hnsw/). This time around, we will focus on the latest additions to Qdrant: - **the immutable hash map with perfect hashing** - **defragmented vector storage**. ### Perfect Hashing A hash table is one of the most commonly used data structures implemented in almost every programming language, including Rust. It provides fast access to elements by key, with an average time complexity of O(1) for read and write operations. There is, however, the assumption that should be satisfied for the hash table to work efficiently: *hash collisions should not cause too much overhead*. In a hash table, each key is mapped to a ""bucket,"" a slot where the value is stored. When different keys map to the same bucket, a collision occurs. In regular mutable hash tables, minimization of collisions is achieved by: * making the number of buckets bigger so the probability of collision is lower * using a linked list or a tree to store multiple elements with the same hash However, these strategies have overheads, which become more significant if we consider using high-latency storage like disk. Indeed, every read operation from disk is several orders of magnitude slower than reading from RAM, so we want to know the correct location of the data from the first attempt. In order to achieve this, we can use a so-called minimal perfect hash function (MPHF). This special type of hash function is constructed specifically for a given set of keys, and it guarantees no collisions while using minimal amount of buckets. In Qdrant, we decided to use *fingerprint-based minimal perfect hash function* implemented in the [ph crate 🦀](https://crates.io/crates/ph) by [Piotr Beling](https://dl.acm.org/doi/10.1145/3596453). According to our benchmarks, using the perfect hash function does introduce some overhead in terms of hashing time, but it significantly reduces the time for the whole operation: | Volume | `ph::Function` | `std::hash::Hash` | `HashMap::get`| |--------|----------------|-------------------|---------------| | 1000 | 60ns | ~20ns | 34ns | | 100k | 90ns | ~20ns | 220ns | | 10M | 238ns | ~20ns | 500ns | Even thought the absolute time for hashing is higher, the time for the whole operation is lower, because PHF guarantees no collisions. The difference is even more significant when we consider disk read time, which might up to several milliseconds (10^6 ns). PHF RAM size scales linearly for `ph::Function`: 3.46 kB for 10k elements, 119MB for 350M elements. The construction time required to build the hash function is surprisingly low, and we only need to do it once: | Volume | `ph::Function` (construct) | PHF size | Size of int64 keys (for reference) | |--------|----------------------------|----------|------------------------------------| | 1M | 52ms | 0.34Mb | 7.62Mb | | 100M | 7.4s | 33.7Mb | 762.9Mb | The usage of PHF in Qdrant lets us minimize the latency of cold reads, which is especially important for large-scale multi-tenant systems. With PHF, it is enough to read a single page from a disk to get the exact location of the data. ### Defragmentation When you read data from a disk, you almost never read a single byte. Instead, you read a page, which is a fixed-size chunk of data. On many systems, the page size is 4KB, which means that every read operation will read 4KB of data, even if you only need a single byte. Vector search, on the other hand, requires reading a lot of small vectors, which might create a large overhead. It is especially noticeable if we use binary quantization, where the size of even large OpenAI 1536d vectors is compressed down to **192 bytes**. {{< figure src=""/articles_data/immutable-data-structures/page-vector.png"" alt=""Overhead when reading a single vector"" caption=""Overhead when reading single vector"" width=""80%"" >}} That means if the vectors we access during the search are randomly scattered across the disk, we will have to read 4KB for each vector, which is 20 times more than the actual data size. There is, however, a simple way to avoid this overhead: **defragmentation**. If we knew some additional information about the data, we could combine all relevant vectors into a single page. {{< figure src=""/articles_data/immutable-data-structures/defragmentation.png"" alt=""Defragmentation"" caption=""Defragmentation"" width=""70%"" >}} This additional information is available to Qdrant via the [payload index](/documentation/concepts/indexing/#payload-index). By specifying the payload index, which is going to be used for filtering most of the time, we can put all vectors with the same payload together. This way, reading a single page will also read nearby vectors, which will be used in the search. This approach is especially efficient for [multi-tenant systems](/documentation/guides/multiple-partitions/), where only a small subset of vectors is actively used for search. The capacity of such a deployment is typically defined by the size of the hot subset, which is much smaller than the total number of vectors. > Grouping relevant vectors together allows us to optimize the size of the hot subset by avoiding caching of irrelevant data. The following benchmark data compares RPS for defragmented and non-defragmented storage: | % of hot subset | Tenant Size (vectors) | RPS, Non-defragmented | RPS, Defragmented | |-----------------|-----------------------|-----------------------|-------------------| | 2.5% | 50k | 1.5 | 304 | | 12.5% | 50k | 0.47 | 279 | | 25% | 50k | 0.4 | 63 | | 50% | 50k | 0.3 | 8 | | 2.5% | 5k | 56 | 490 | | 12.5% | 5k | 5.8 | 488 | | 25% | 5k | 3.3 | 490 | | 50% | 5k | 3.1 | 480 | | 75% | 5k | 2.9 | 130 | | 100% | 5k | 2.7 | 95 | **Dataset size:** 2M 768d vectors (~6Gb Raw data), binary quantization, 650Mb of RAM limit. All benchmarks are made with minimal RAM allocation to demonstrate disk cache efficiency. As you can see, the biggest impact is on the small tenant size, where defragmentation allows us to achieve **100x more RPS**. Of course, the real-world impact of defragmentation depends on the specific workload and the size of the hot subset, but enabling this feature can significantly improve the performance of Qdrant. Please find more details on how to enable defragmentation in the [indexing documentation](/documentation/concepts/indexing/#tenant-index). ## Updating Immutable Data Structures One may wonder how Qdrant allows updating collection data if everything is immutable. Indeed, [Qdrant API](https://api.qdrant.tech) allows the change of any vector or payload at any time, so from the user's perspective, the whole collection is mutable at any time. As it usually happens with every decent magic trick, the secret is disappointingly simple: not all data in Qdrant is immutable. In Qdrant, storage is divided into segments, which might be either mutable or immutable. New data is always written to the mutable segment, which is later converted to the immutable one by the optimization process. {{< figure src=""/articles_data/immutable-data-structures/optimization.png"" alt=""Optimization process"" caption=""Optimization process"" width=""80%"" >}} If we need to update the data in the immutable or currenly optimized segment, instead of changing the data in place, we perform a copy-on-write operation, move the data to the mutable segment, and update it there. Data in the original segment is marked as deleted, and later vacuumed by the optimization process. ## Downsides and How to Compensate While immutable data structures are great for read-heavy operations, they come with trade-offs: - **Higher update costs:** Immutable structures are less efficient for updates. The amortized time complexity might be the same as mutable structures, but the constant factor is higher. - **Rebuilding overhead:** In some cases, we may need to rebuild indices or structures for the same data more than once. - **Read-heavy workloads:** Immutability assumes a search-heavy workload, which is typical for search engines but not for all applications. In Qdrant, we mitigate these downsides by allowing the user to adapt the system to their specific workload. For example, changing the default size of the segment might help to reduce the overhead of rebuilding indices. In extreme cases, multi-segment storage can act as a single segment, falling back to the mutable data structure when needed. ## Conclusion Immutable data structures, while tricky to implement correctly, offer significant performance gains, especially for read-heavy systems like search engines. They allow us to take full advantage of hardware optimizations, reduce memory overhead, and improve cache performance. In Qdrant, the combination of techniques like perfect hashing and defragmentation brings further benefits, making our vector search operations faster and more efficient. While there are trade-offs, the flexibility of Qdrant’s architecture — including segment-based storage — allows us to balance the best of both worlds. ",articles/immutable-data-structures.md "--- title: ""Qdrant 1.8.0: Enhanced Search Capabilities for Better Results"" draft: false slug: qdrant-1.8.x short_description: ""Faster sparse vectors.Optimized indexation. Optional CPU resource management."" description: ""Explore the latest in search technology with Qdrant 1.8.0! Discover faster performance, smarter indexing, and enhanced search capabilities."" social_preview_image: /articles_data/qdrant-1.8.x/social_preview.png small_preview_image: /articles_data/qdrant-1.8.x/icon.svg preview_dir: /articles_data/qdrant-1.8.x/preview weight: -140 date: 2024-03-06T00:00:00-08:00 author: David Myriel, Mike Jang featured: false tags: - vector search - new features - sparse vectors - hybrid search - CPU resource management - text field index --- # Unlocking Next-Level Search: Exploring Qdrant 1.8.0's Advanced Search Capabilities [Qdrant 1.8.0 is out!](https://github.com/qdrant/qdrant/releases/tag/v1.8.0). This time around, we have focused on Qdrant's internals. Our goal was to optimize performance so that your existing setup can run faster and save on compute. Here is what we've been up to: - **Faster [sparse vectors](https://qdrant.tech/articles/sparse-vectors/):** [Hybrid search](https://qdrant.tech/articles/hybrid-search/) is up to 16x faster now! - **CPU resource management:** You can allocate CPU threads for faster indexing. - **Better indexing performance:** We optimized text [indexing](https://qdrant.tech/documentation/concepts/indexing/) on the backend. ## Faster search with sparse vectors Search throughput is now up to 16 times faster for sparse vectors. If you are [using Qdrant for hybrid search](/articles/sparse-vectors/), this means that you can now handle up to sixteen times as many queries. This improvement comes from extensive backend optimizations aimed at increasing efficiency and capacity. What this means for your setup: - **Query speed:** The time it takes to run a search query has been significantly reduced. - **Search capacity:** Qdrant can now handle a much larger volume of search requests. - **User experience:** Results will appear faster, leading to a smoother experience for the user. - **Scalability:** You can easily accommodate rapidly growing users or an expanding dataset. ### Sparse vectors benchmark Performance results are publicly available for you to test. Qdrant's R&D developed a dedicated [open-source benchmarking tool](https://github.com/qdrant/sparse-vectors-benchmark) just to test sparse vector performance. A real-life simulation of sparse vector queries was run against the [NeurIPS 2023 dataset](https://big-ann-benchmarks.com/neurips23.html). All tests were done on an 8 CPU machine on Azure. Latency (y-axis) has dropped significantly for queries. You can see the before/after here: ![dropping latency](/articles_data/qdrant-1.8.x/benchmark.png) **Figure 1:** Dropping latency in sparse vector search queries across versions 1.7-1.8. The colors within both scatter plots show the frequency of results. The red dots show that the highest concentration is around 2200ms (before) and 135ms (after). This tells us that latency for sparse vector queries dropped by about a factor of 16. Therefore, the time it takes to retrieve an answer with Qdrant is that much shorter. This performance increase can have a dramatic effect on hybrid search implementations. [Read more about how to set this up.](/articles/sparse-vectors/) FYI, sparse vectors were released in [Qdrant v.1.7.0](/articles/qdrant-1.7.x/#sparse-vectors). They are stored using a different index, so first [check out the documentation](/documentation/concepts/indexing/#sparse-vector-index) if you want to try an implementation. ## CPU resource management Indexing is Qdrant’s most resource-intensive process. Now you can account for this by allocating compute use specifically to indexing. You can assign a number CPU resources towards indexing and leave the rest for search. As a result, indexes will build faster, and search quality will remain unaffected. This isn't mandatory, as Qdrant is by default tuned to strike the right balance between indexing and search. However, if you wish to define specific CPU usage, you will need to do so from `config.yaml`. This version introduces a `optimizer_cpu_budget` parameter to control the maximum number of CPUs used for indexing. > Read more about `config.yaml` in the [configuration file](/documentation/guides/configuration/). ```yaml # CPU budget, how many CPUs (threads) to allocate for an optimization job. optimizer_cpu_budget: 0 ``` - If left at 0, Qdrant will keep 1 or more CPUs unallocated - depending on CPU size. - If the setting is positive, Qdrant will use this exact number of CPUs for indexing. - If the setting is negative, Qdrant will subtract this number of CPUs from the available CPUs for indexing. For most users, the default `optimizer_cpu_budget` setting will work well. We only recommend you use this if your indexing load is significant. Our backend leverages dynamic CPU saturation to increase indexing speed. For that reason, the impact on search query performance ends up being minimal. Ultimately, you will be able to strike the best possible balance between indexing times and search performance. This configuration can be done at any time, but it requires a restart of Qdrant. Changing it affects both existing and new collections. > **Note:** This feature is not configurable on [Qdrant Cloud](https://qdrant.to/cloud). ## Better indexing for text data In order to [minimize your RAM expenditure](https://qdrant.tech/articles/memory-consumption/), we have developed a new way to index specific types of data. Please keep in mind that this is a backend improvement, and you won't need to configure anything. > Going forward, if you are indexing immutable text fields, we estimate a 10% reduction in RAM loads. Our benchmark result is based on a system that uses 64GB of RAM. If you are using less RAM, this reduction might be higher than 10%. Immutable text fields are static and do not change once they are added to Qdrant. These entries usually represent some type of attribute, description or tag. Vectors associated with them can be indexed more efficiently, since you don’t need to re-index them anymore. Conversely, mutable fields are dynamic and can be modified after their initial creation. Please keep in mind that they will continue to require additional RAM. This approach ensures stability in the [vector search](https://qdrant.tech/documentation/overview/vector-search/) index, with faster and more consistent operations. We achieved this by setting up a field index which helps minimize what is stored. To improve search performance we have also optimized the way we load documents for searches with a text field index. Now our backend loads documents mostly sequentially and in increasing order. ## Minor improvements and new features Beyond these enhancements, [Qdrant v1.8.0](https://github.com/qdrant/qdrant/releases/tag/v1.8.0) adds and improves on several smaller features: 1. **Order points by payload:** In addition to searching for semantic results, you might want to retrieve results by specific metadata (such as price). You can now use Scroll API to [order points by payload key](/documentation/concepts/points/#order-points-by-payload-key). 2. **Datetime support:** We have implemented [datetime support for the payload index](/documentation/concepts/filtering/#datetime-range). Prior to this, if you wanted to search for a specific datetime range, you would have had to convert dates to UNIX timestamps. ([PR#3320](https://github.com/qdrant/qdrant/issues/3320)) 3. **Check collection existence:** You can check whether a collection exists via the `/exists` endpoint to the `/collections/{collection_name}`. You will get a true/false response. ([PR#3472](https://github.com/qdrant/qdrant/pull/3472)). 4. **Find points** whose payloads match more than the minimal amount of conditions. We included the `min_should` match feature for a condition to be `true` ([PR#3331](https://github.com/qdrant/qdrant/pull/3466/)). 5. **Modify nested fields:** We have improved the `set_payload` API, adding the ability to update nested fields ([PR#3548](https://github.com/qdrant/qdrant/pull/3548)). ## Experience the Power of Qdrant 1.8.0 Ready to experience the enhanced performance of Qdrant 1.8.0? Upgrade now and explore the major improvements, from faster sparse vectors to optimized CPU resource management and better indexing for text data. Take your search capabilities to the next level with Qdrant's latest version. [Try a demo today](https://qdrant.tech/demo/) and see the difference firsthand! ## Release notes For more information, see [our release notes](https://github.com/qdrant/qdrant/releases/tag/v1.8.0). Qdrant is an open-source project. We welcome your contributions; raise [issues](https://github.com/qdrant/qdrant/issues), or contribute via [pull requests](https://github.com/qdrant/qdrant/pulls)! ",articles/qdrant-1.8.x.md "--- title: On Unstructured Data, Vector Databases, New AI Age, and Our Seed Round. short_description: On Unstructured Data, Vector Databases, New AI Age, and Our Seed Round. description: We announce Qdrant seed round investment and share our thoughts on Vector Databases and New AI Age. preview_dir: /articles_data/seed-round/preview social_preview_image: /articles_data/seed-round/seed-social.png small_preview_image: /articles_data/quantum-quantization/icon.svg weight: 6 author: Andre Zayarni draft: false author_link: https://www.linkedin.com/in/zayarni date: 2023-04-19T00:42:00.000Z --- > Vector databases are here to stay. The New Age of AI is powered by vector embeddings, and vector databases are a foundational part of the stack. At Qdrant, we are working on cutting-edge open-source vector similarity search solutions to power fantastic AI applications with the best possible performance and excellent developer experience. > > Our 7.5M seed funding – led by [Unusual Ventures](https://www.unusual.vc/), awesome angels, and existing investors – will help us bring these innovations to engineers and empower them to make the most of their unstructured data and the awesome power of LLMs at any scale. We are thrilled to announce that we just raised our seed round from the best possible investor we could imagine for this stage. Let’s talk about fundraising later – it is a story itself that I could probably write a bestselling book about. First, let's dive into a bit of background about our project, our progress, and future plans. ## A need for vector databases. Unstructured data is growing exponentially, and we are all part of a huge unstructured data workforce. This blog post is unstructured data; your visit here produces unstructured and semi-structured data with every web interaction, as does every photo you take or email you send. The global datasphere will grow to [165 zettabytes by 2025](https://github.com/qdrant/qdrant/pull/1639), and about 80% of that will be unstructured. At the same time, the rising demand for AI is vastly outpacing existing infrastructure. Around 90% of machine learning research results fail to reach production because of a lack of tools. {{< figure src=/articles_data/seed-round/demand.png caption=""Demand for AI tools"" alt=""Vector Databases Demand"" >}} Thankfully there’s a new generation of tools that let developers work with unstructured data in the form of vector embeddings, which are deep representations of objects obtained from a neural network model. A vector database, also known as a vector similarity search engine or approximate nearest neighbour (ANN) search database, is a database designed to store, manage, and search high-dimensional data with an additional payload. Vector Databases turn research prototypes into commercial AI products. Vector search solutions are industry agnostic and bring solutions for a number of use cases, including classic ones like semantic search, matching engines, and recommender systems to more novel applications like anomaly detection, working with time series, or biomedical data. The biggest limitation is to have a neural network encoder in place for the data type you are working with. {{< figure src=/articles_data/seed-round/use-cases.png caption=""Vector Search Use Cases"" alt=""Vector Search Use Cases"" >}} With the rise of large language models (LLMs), Vector Databases have become the fundamental building block of the new AI Stack. They let developers build even more advanced applications by extending the “knowledge base” of LLMs-based applications like ChatGPT with real-time and real-world data. A new AI product category, “Co-Pilot for X,” was born and is already affecting how we work. Starting from producing content to developing software. And this is just the beginning, there are even more types of novel applications being developed on top of this stack. {{< figure src=/articles_data/seed-round/ai-stack.png caption=""New AI Stack"" alt=""New AI Stack"" >}} ## Enter Qdrant. ## At the same time, adoption has only begun. Vector Search Databases are replacing VSS libraries like FAISS, etc., which, despite their disadvantages, are still used by ~90% of projects out there They’re hard-coupled to the application code, lack of production-ready features like basic CRUD operations or advanced filtering, are a nightmare to maintain and scale and have many other difficulties that make life hard for developers. The current Qdrant ecosystem consists of excellent products to work with vector embeddings. We launched our managed vector database solution, Qdrant Cloud, early this year, and it is already serving more than 1,000 Qdrant clusters. We are extending our offering now with managed on-premise solutions for enterprise customers. {{< figure src=/articles_data/seed-round/ecosystem.png caption=""Qdrant Ecosystem"" alt=""Qdrant Vector Database Ecosystem"" >}} Our plan for the current [open-source roadmap](https://github.com/qdrant/qdrant/blob/master/docs/roadmap/README.md) is to make billion-scale vector search affordable. Our recent release of the [Scalar Quantization](/articles/scalar-quantization/) improves both memory usage (x4) as well as speed (x2). Upcoming [Product Quantization](https://www.irisa.fr/texmex/people/jegou/papers/jegou_searching_with_quantization.pdf) will introduce even another option with more memory saving. Stay tuned. Qdrant started more than two years ago with the mission of building a vector database powered by a well-thought-out tech stack. Using Rust as the system programming language and technical architecture decision during the development of the engine made Qdrant the leading and one of the most popular vector database solutions. Our unique custom modification of the [HNSW algorithm](/articles/filtrable-hnsw/) for Approximate Nearest Neighbor Search (ANN) allows querying the result with a state-of-the-art speed and applying filters without compromising on results. Cloud-native support for distributed deployment and replications makes the engine suitable for high-throughput applications with real-time latency requirements. Rust brings stability, efficiency, and the possibility to make optimization on a very low level. In general, we always aim for the best possible results in [performance](/benchmarks/), code quality, and feature set. Most importantly, we want to say a big thank you to our [open-source community](https://qdrant.to/discord), our adopters, our contributors, and our customers. Your active participation in the development of our products has helped make Qdrant the best vector database on the market. I cannot imagine how we could do what we’re doing without the community or without being open-source and having the TRUST of the engineers. Thanks to all of you! I also want to thank our team. Thank you for your patience and trust. Together we are strong. Let’s continue doing great things together. ## Fundraising ## The whole process took only a couple of days, we got several offers, and most probably, we would get more with different conditions. We decided to go with Unusual Ventures because they truly understand how things work in the open-source space. They just did it right. Here is a big piece of advice for all investors interested in open-source: Dive into the community, and see and feel the traction and product feedback instead of looking at glossy pitch decks. With Unusual on our side, we have an active operational partner instead of one who simply writes a check. That help is much more important than overpriced valuations and big shiny names. Ultimately, the community and adopters will decide what products win and lose, not VCs. Companies don’t need crazy valuations to create products that customers love. You do not need Ph.D. to innovate. You do not need to over-engineer to build a scalable solution. You do not need ex-FANG people to have a great team. You need clear focus, a passion for what you’re building, and the know-how to do it well. We know how. PS: This text is written by me in an old-school way without any ChatGPT help. Sometimes you just need inspiration instead of AI ;-) ",articles/seed-round.md "--- title: ""Optimizing RAG Through an Evaluation-Based Methodology"" short_description: Learn how Qdrant-powered RAG applications can be tested and iteratively improved using LLM evaluation tools like Quotient. description: Learn how Qdrant-powered RAG applications can be tested and iteratively improved using LLM evaluation tools like Quotient. social_preview_image: /articles_data/rapid-rag-optimization-with-qdrant-and-quotient/preview/social_preview.jpg small_preview_image: /articles_data/rapid-rag-optimization-with-qdrant-and-quotient/icon.svg preview_dir: /articles_data/rapid-rag-optimization-with-qdrant-and-quotient/preview weight: -131 author: Atita Arora author_link: https://github.com/atarora date: 2024-06-12T00:00:00.000Z draft: false keywords: - vector database - vector search - retrieval augmented generation - quotient - optimization - rag --- In today's fast-paced, information-rich world, AI is revolutionizing knowledge management. The systematic process of capturing, distributing, and effectively using knowledge within an organization is one of the fields in which AI provides exceptional value today. > The potential for AI-powered knowledge management increases when leveraging Retrieval Augmented Generation (RAG), a methodology that enables LLMs to access a vast, diverse repository of factual information from knowledge stores, such as vector databases. This process enhances the accuracy, relevance, and reliability of generated text, thereby mitigating the risk of faulty, incorrect, or nonsensical results sometimes associated with traditional LLMs. This method not only ensures that the answers are contextually relevant but also up-to-date, reflecting the latest insights and data available. While RAG enhances the accuracy, relevance, and reliability of traditional LLM solutions, **an evaluation strategy can further help teams ensure their AI products meet these benchmarks of success.** ## Relevant tools for this experiment In this article, we’ll break down a RAG Optimization workflow experiment that demonstrates that evaluation is essential to build a successful RAG strategy. We will use Qdrant and Quotient for this experiment. [Qdrant](https://qdrant.tech/) is a vector database and vector similarity search engine designed for efficient storage and retrieval of high-dimensional vectors. Because Qdrant offers efficient indexing and searching capabilities, it is ideal for implementing RAG solutions, where quickly and accurately retrieving relevant information from extremely large datasets is crucial. Qdrant also offers a wealth of additional features, such as quantization, multivector support and multi-tenancy. Alongside Qdrant we will use Quotient, which provides a seamless way to evaluate your RAG implementation, accelerating and improving the experimentation process. [Quotient](https://www.quotientai.co/) is a platform that provides tooling for AI developers to build evaluation frameworks and conduct experiments on their products. Evaluation is how teams surface the shortcomings of their applications and improve performance in key benchmarks such as faithfulness, and semantic similarity. Iteration is key to building innovative AI products that will deliver value to end users. > 💡 The [accompanying notebook](https://github.com/qdrant/qdrant-rag-eval/tree/master/workshop-rag-eval-qdrant-quotient) for this exercise can be found on GitHub for future reference. ## Summary of key findings 1. **Irrelevance and Hallucinations**: When the documents retrieved are irrelevant, evidenced by low scores in both Chunk Relevance and Context Relevance, the model is prone to generating inaccurate or fabricated information. 2. **Optimizing Document Retrieval**: By retrieving a greater number of documents and reducing the chunk size, we observed improved outcomes in the model's performance. 3. **Adaptive Retrieval Needs**: Certain queries may benefit from accessing more documents. Implementing a dynamic retrieval strategy that adjusts based on the query could enhance accuracy. 4. **Influence of Model and Prompt Variations**: Alterations in language models or the prompts used can significantly impact the quality of the generated responses, suggesting that fine-tuning these elements could optimize performance. Let us walk you through how we arrived at these findings! ## Building a RAG pipeline To evaluate a RAG pipeline , we will have to build a RAG Pipeline first. In the interest of simplicity, we are building a Naive RAG in this article. There are certainly other versions of RAG : ![shades_of_rag.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/shades_of_rag.png) The illustration below depicts how we can leverage a RAG Evaluation framework to assess the quality of RAG Application. ![qdrant_and_quotient.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/qdrant_and_quotient.png) We are going to build a RAG application using Qdrant’s Documentation and the premeditated [hugging face dataset](https://huggingface.co/datasets/atitaarora/qdrant_doc). We will then assess our RAG application’s ability to answer questions about Qdrant. To prepare our knowledge store we will use Qdrant, which can be leveraged in 3 different ways as below : ```python ##Uncomment to initialise qdrant client in memory #client = qdrant_client.QdrantClient( # location="":memory:"", #) ##Uncomment below to connect to Qdrant Cloud client = qdrant_client.QdrantClient( os.environ.get(""QDRANT_URL""), api_key=os.environ.get(""QDRANT_API_KEY""), ) ## Uncomment below to connect to local Qdrant #client = qdrant_client.QdrantClient(""http://localhost:6333"") ``` We will be using [Qdrant Cloud](https://cloud.qdrant.io/login) so it is a good idea to provide the `QDRANT_URL` and `QDRANT_API_KEY` as environment variables for easier access. Moving on, we will need to define the collection name as : ```python COLLECTION_NAME = ""qdrant-docs-quotient"" ``` In this case , we may need to create different collections based on the experiments we conduct. To help us provide seamless embedding creations throughout the experiment, we will use Qdrant’s native embedding provider [Fastembed](https://qdrant.github.io/fastembed/) which supports [many different models](https://qdrant.github.io/fastembed/examples/Supported_Models/) including dense as well as sparse vector models. We can initialize and switch the embedding model of our choice as below : ```python ## Declaring the intended Embedding Model with Fastembed from fastembed.embedding import TextEmbedding ## General Fastembed specific operations ##Initilising embedding model ## Using Default Model - BAAI/bge-small-en-v1.5 embedding_model = TextEmbedding() ## For custom model supported by Fastembed #embedding_model = TextEmbedding(model_name=""BAAI/bge-small-en"", max_length=512) #embedding_model = TextEmbedding(model_name=""sentence-transformers/all-MiniLM-L6-v2"", max_length=384) ## Verify the chosen Embedding model embedding_model.model_name ``` Before implementing RAG, we need to prepare and index our data in Qdrant. This involves converting textual data into vectors using a suitable encoder (e.g., sentence transformers), and storing these vectors in Qdrant for retrieval. ```python from langchain.text_splitter import RecursiveCharacterTextSplitter from langchain.docstore.document import Document as LangchainDocument ## Load the dataset with qdrant documentation dataset = load_dataset(""atitaarora/qdrant_doc"", split=""train"") ## Dataset to langchain document langchain_docs = [ LangchainDocument(page_content=doc[""text""], metadata={""source"": doc[""source""]}) for doc in dataset ] len(langchain_docs) #Outputs #240 ``` You can preview documents in the dataset as below : ```python ## Here's an example of what a document in our dataset looks like print(dataset[100]['text']) ``` ## Evaluation dataset To measure the quality of our RAG setup, we will need a representative evaluation dataset. This dataset should contain realistic questions and the expected answers. Additionally, including the expected contexts for which your RAG pipeline is designed to retrieve information would be beneficial. We will be using a [prebuilt evaluation dataset](https://huggingface.co/datasets/atitaarora/qdrant_doc_qna). If you are struggling to make an evaluation dataset for your use case , you can use your documents and some techniques described in this [notebook](https://github.com/qdrant/qdrant-rag-eval/blob/master/synthetic_qna/notebook/Synthetic_question_generation.ipynb) ### Building the RAG pipeline We establish the data preprocessing parameters essential for the RAG pipeline and configure the Qdrant vector database according to the specified criteria. Key parameters under consideration are: - **Chunk size** - **Chunk overlap** - **Embedding model** - **Number of documents retrieved (retrieval window)** Following the ingestion of data in Qdrant, we proceed to retrieve pertinent documents corresponding to each query. These documents are then seamlessly integrated into our evaluation dataset, enriching the contextual information within the designated **`context`** column to fulfil the evaluation aspect. Next we define methods to take care of logistics with respect to adding documents to Qdrant ```python def add_documents(client, collection_name, chunk_size, chunk_overlap, embedding_model_name): """""" This function adds documents to the desired Qdrant collection given the specified RAG parameters. """""" ## Processing each document with desired TEXT_SPLITTER_ALGO, CHUNK_SIZE, CHUNK_OVERLAP text_splitter = RecursiveCharacterTextSplitter( chunk_size=chunk_size, chunk_overlap=chunk_overlap, add_start_index=True, separators=[""\n\n"", ""\n"", ""."", "" "", """"], ) docs_processed = [] for doc in langchain_docs: docs_processed += text_splitter.split_documents([doc]) ## Processing documents to be encoded by Fastembed docs_contents = [] docs_metadatas = [] for doc in docs_processed: if hasattr(doc, 'page_content') and hasattr(doc, 'metadata'): docs_contents.append(doc.page_content) docs_metadatas.append(doc.metadata) else: # Handle the case where attributes are missing print(""Warning: Some documents do not have 'page_content' or 'metadata' attributes."") print(""processed: "", len(docs_processed)) print(""content: "", len(docs_contents)) print(""metadata: "", len(docs_metadatas)) ## Adding documents to Qdrant using desired embedding model client.set_model(embedding_model_name=embedding_model_name) client.add(collection_name=collection_name, metadata=docs_metadatas, documents=docs_contents) ``` and retrieving documents from Qdrant during our RAG Pipeline assessment. ```python def get_documents(collection_name, query, num_documents=3): """""" This function retrieves the desired number of documents from the Qdrant collection given a query. It returns a list of the retrieved documents. """""" search_results = client.query( collection_name=collection_name, query_text=query, limit=num_documents, ) results = [r.metadata[""document""] for r in search_results] return results ``` ### Setting up Quotient You will need an account log in, which you can get by requesting access on [Quotient's website](https://www.quotientai.co/). Once you have an account, you can create an API key by running the `quotient authenticate` CLI command. **Once you have your API key, make sure to set it as an environment variable called `QUOTIENT_API_KEY`** ```python # Import QuotientAI client and connect to QuotientAI from quotientai.client import QuotientClient from quotientai.utils import show_job_progress # IMPORTANT: be sure to set your API key as an environment variable called QUOTIENT_API_KEY # You will need this set before running the code below. You may also uncomment the following line and insert your API key: # os.environ['QUOTIENT_API_KEY'] = ""YOUR_API_KEY"" quotient = QuotientClient() ``` **QuotientAI** provides a seamless way to integrate *RAG evaluation* into your applications. Here, we'll see how to use it to evaluate text generated from an LLM, based on retrieved knowledge from the Qdrant vector database. After retrieving the top similar documents and populating the `context` column, we can submit the evaluation dataset to Quotient and execute an evaluation job. To run a job, all you need is your evaluation dataset and a `recipe`. ***A recipe is a combination of a prompt template and a specified LLM.*** **Quotient** orchestrates the evaluation run and handles version control and asset management throughout the experimentation process. ***Prior to assessing our RAG solution, it's crucial to outline our optimization goals.*** In the context of *question-answering on Qdrant documentation*, our focus extends beyond merely providing helpful responses. Ensuring the absence of any *inaccurate or misleading information* is paramount. In other words, **we want to minimize hallucinations** in the LLM outputs. For our evaluation, we will be considering the following metrics, with a focus on **Faithfulness**: - **Context Relevance** - **Chunk Relevance** - **Faithfulness** - **ROUGE-L** - **BERT Sentence Similarity** - **BERTScore** ### Evaluation in action The function below takes an evaluation dataset as input, which in this case contains questions and their corresponding answers. It retrieves relevant documents based on the questions in the dataset and populates the context field with this information from Qdrant. The prepared dataset is then submitted to QuotientAI for evaluation for the chosen metrics. After the evaluation is complete, the function displays aggregated statistics on the evaluation metrics followed by the summarized evaluation results. ```python def run_eval(eval_df, collection_name, recipe_id, num_docs=3, path=""eval_dataset_qdrant_questions.csv""): """""" This function evaluates the performance of a complete RAG pipeline on a given evaluation dataset. Given an evaluation dataset (containing questions and ground truth answers), this function retrieves relevant documents, populates the context field, and submits the dataset to QuotientAI for evaluation. Once the evaluation is complete, aggregated statistics on the evaluation metrics are displayed. The evaluation results are returned as a pandas dataframe. """""" # Add context to each question by retrieving relevant documents eval_df['documents'] = eval_df.apply(lambda x: get_documents(collection_name=collection_name, query=x['input_text'], num_documents=num_docs), axis=1) eval_df['context'] = eval_df.apply(lambda x: ""\n"".join(x['documents']), axis=1) # Now we'll save the eval_df to a CSV eval_df.to_csv(path, index=False) # Upload the eval dataset to QuotientAI dataset = quotient.create_dataset( file_path=path, name=""qdrant-questions-eval-v1"", ) # Create a new task for the dataset task = quotient.create_task( dataset_id=dataset['id'], name='qdrant-questions-qa-v1', task_type='question_answering' ) # Run a job to evaluate the model job = quotient.create_job( task_id=task['id'], recipe_id=recipe_id, num_fewshot_examples=0, limit=500, metric_ids=[5, 7, 8, 11, 12, 13, 50], ) # Show the progress of the job show_job_progress(quotient, job['id']) # Once the job is complete, we can get our results data = quotient.get_eval_results(job_id=job['id']) # Add the results to a pandas dataframe to get statistics on performance df = pd.json_normalize(data, ""results"") df_stats = df[df.columns[df.columns.str.contains(""metric|completion_time"")]] df.columns = df.columns.str.replace(""metric."", """") df_stats.columns = df_stats.columns.str.replace(""metric."", """") metrics = { 'completion_time_ms':'Completion Time (ms)', 'chunk_relevance': 'Chunk Relevance', 'selfcheckgpt_nli_relevance':""Context Relevance"", 'selfcheckgpt_nli':""Faithfulness"", 'rougeL_fmeasure':""ROUGE-L"", 'bert_score_f1':""BERTScore"", 'bert_sentence_similarity': ""BERT Sentence Similarity"", 'completion_verbosity':""Completion Verbosity"", 'verbosity_ratio':""Verbosity Ratio"",} df = df.rename(columns=metrics) df_stats = df_stats.rename(columns=metrics) display(df_stats[metrics.values()].describe()) return df main_metrics = [ 'Context Relevance', 'Chunk Relevance', 'Faithfulness', 'ROUGE-L', 'BERT Sentence Similarity', 'BERTScore', ] ``` ## Experimentation Our approach is rooted in the belief that improvement thrives in an environment of exploration and discovery. By systematically testing and tweaking various components of the RAG pipeline, we aim to incrementally enhance its capabilities and performance. In the following section, we dive into the details of our experimentation process, outlining the specific experiments conducted and the insights gained. ### Experiment 1 - Baseline Parameters - **Embedding Model: `bge-small-en`** - **Chunk size: `512`** - **Chunk overlap: `64`** - **Number of docs retrieved (Retireval Window): `3`** - **LLM: `Mistral-7B-Instruct`** We’ll process our documents based on configuration above and ingest them into Qdrant using `add_documents` method introduced earlier ```python #experiment1 - base config chunk_size = 512 chunk_overlap = 64 embedding_model_name = ""BAAI/bge-small-en"" num_docs = 3 COLLECTION_NAME = f""experiment_{chunk_size}_{chunk_overlap}_{embedding_model_name.split('/')[1]}"" add_documents(client, collection_name=COLLECTION_NAME, chunk_size=chunk_size, chunk_overlap=chunk_overlap, embedding_model_name=embedding_model_name) #Outputs #processed: 4504 #content: 4504 #metadata: 4504 ``` Notice the `COLLECTION_NAME` which helps us segregate and identify our collections based on the experiments conducted. To proceed with the evaluation, let’s create the `evaluation recipe` up next ```python # Create a recipe for the generator model and prompt template recipe_mistral = quotient.create_recipe( model_id=10, prompt_template_id=1, name='mistral-7b-instruct-qa-with-rag', description='Mistral-7b-instruct using a prompt template that includes context.' ) recipe_mistral #Outputs recipe JSON with the used prompt template #'prompt_template': {'id': 1, # 'name': 'Default Question Answering Template', # 'variables': '[""input_text"",""context""]', # 'created_at': '2023-12-21T22:01:54.632367', # 'template_string': 'Question: {input_text}\\n\\nContext: {context}\\n\\nAnswer:', # 'owner_profile_id': None} ``` To get a list of your existing recipes, you can simply run: ```python quotient.list_recipes() ``` Notice the recipe template is a simplest prompt using `Question` from evaluation template `Context` from document chunks retrieved from Qdrant and `Answer` generated by the pipeline. To kick off the evaluation ```python # Kick off an evaluation job experiment_1 = run_eval(eval_df, collection_name=COLLECTION_NAME, recipe_id=recipe_mistral['id'], num_docs=num_docs, path=f""{COLLECTION_NAME}_{num_docs}_mistral.csv"") ``` This may take few minutes (depending on the size of evaluation dataset!) We can look at the results from our first (baseline) experiment as below : ![experiment1_eval.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/experiment1_eval.png) Notice that we have a pretty **low average Chunk Relevance** and **very large standard deviations for both Chunk Relevance and Context Relevance**. Let's take a look at some of the lower performing datapoints with **poor Faithfulness**: ```python with pd.option_context('display.max_colwidth', 0): display(experiment_1[['content.input_text', 'content.answer','content.documents','Chunk Relevance','Context Relevance','Faithfulness'] ].sort_values(by='Faithfulness').head(2)) ``` ![experiment1_bad_examples.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/experiment1_bad_examples.png) In instances where the retrieved documents are **irrelevant (where both Chunk Relevance and Context Relevance are low)**, the model also shows **tendencies to hallucinate** and **produce poor quality responses**. The quality of the retrieved text directly impacts the quality of the LLM-generated answer. Therefore, our focus will be on enhancing the RAG setup by **adjusting the chunking parameters**. ### Experiment 2 - Adjusting the chunk parameter Keeping all other parameters constant, we changed the `chunk size` and `chunk overlap` to see if we can improve our results. Parameters : - **Embedding Model : `bge-small-en`** - **Chunk size: `1024`** - **Chunk overlap: `128`** - **Number of docs retrieved (Retireval Window): `3`** - **LLM: `Mistral-7B-Instruct`** We will reprocess the data with the updated parameters above: ```python ## for iteration 2 - lets modify chunk configuration ## We will start with creating seperate collection to store vectors chunk_size = 1024 chunk_overlap = 128 embedding_model_name = ""BAAI/bge-small-en"" num_docs = 3 COLLECTION_NAME = f""experiment_{chunk_size}_{chunk_overlap}_{embedding_model_name.split('/')[1]}"" add_documents(client, collection_name=COLLECTION_NAME, chunk_size=chunk_size, chunk_overlap=chunk_overlap, embedding_model_name=embedding_model_name) #Outputs #processed: 2152 #content: 2152 #metadata: 2152 ``` Followed by running evaluation : ![experiment2_eval.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/experiment2_eval.png) and **comparing it with the results from Experiment 1:** ![graph_exp1_vs_exp2.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/graph_exp1_vs_exp2.png) We observed slight enhancements in our LLM completion metrics (including BERT Sentence Similarity, BERTScore, ROUGE-L, and Knowledge F1) with the increase in *chunk size*. However, it's noteworthy that there was a significant decrease in *Faithfulness*, which is the primary metric we are aiming to optimize. Moreover, *Context Relevance* demonstrated an increase, indicating that the RAG pipeline retrieved more relevant information required to address the query. Nonetheless, there was a considerable drop in *Chunk Relevance*, implying that a smaller portion of the retrieved documents contained pertinent information for answering the question. **The correlation between the rise in Context Relevance and the decline in Chunk Relevance suggests that retrieving more documents using the smaller chunk size might yield improved results.** ### Experiment 3 - Increasing the number of documents retrieved (retrieval window) This time, we are using the same RAG setup as `Experiment 1`, but increasing the number of retrieved documents from **3** to **5**. Parameters : - **Embedding Model : `bge-small-en`** - **Chunk size: `512`** - **Chunk overlap: `64`** - **Number of docs retrieved (Retrieval Window): `5`** - **LLM: : `Mistral-7B-Instruct`** We can use the collection from Experiment 1 and run evaluation with modified `num_docs` parameter as : ```python #collection name from Experiment 1 COLLECTION_NAME = f""experiment_{chunk_size}_{chunk_overlap}_{embedding_model_name.split('/')[1]}"" #running eval for experiment 3 experiment_3 = run_eval(eval_df, collection_name=COLLECTION_NAME, recipe_id=recipe_mistral['id'], num_docs=num_docs, path=f""{COLLECTION_NAME}_{num_docs}_mistral.csv"") ``` Observe the results as below : ![experiment_3_eval.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/experiment_3_eval.png) Comparing the results with Experiment 1 and 2 : ![graph_exp1_exp2_exp3.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/graph_exp1_exp2_exp3.png) As anticipated, employing the smaller chunk size while retrieving a larger number of documents resulted in achieving the highest levels of both *Context Relevance* and *Chunk Relevance.* Additionally, it yielded the **best** (albeit marginal) *Faithfulness* score, indicating a *reduced occurrence of inaccuracies or hallucinations*. Looks like we have achieved a good hold on our chunking parameters but it is worth testing another embedding model to see if we can get better results. ### Experiment 4 - Changing the embedding model Let us try using **MiniLM** for this experiment ****Parameters : - **Embedding Model : `MiniLM-L6-v2`** - **Chunk size: `512`** - **Chunk overlap: `64`** - **Number of docs retrieved (Retrieval Window): `5`** - **LLM: : `Mistral-7B-Instruct`** We will have to create another collection for this experiment : ```python #experiment-4 chunk_size=512 chunk_overlap=64 embedding_model_name=""sentence-transformers/all-MiniLM-L6-v2"" num_docs=5 COLLECTION_NAME = f""experiment_{chunk_size}_{chunk_overlap}_{embedding_model_name.split('/')[1]}"" add_documents(client, collection_name=COLLECTION_NAME, chunk_size=chunk_size, chunk_overlap=chunk_overlap, embedding_model_name=embedding_model_name) #Outputs #processed: 4504 #content: 4504 #metadata: 4504 ``` We will observe our evaluations as : ![experiment4_eval.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/experiment4_eval.png) Comparing these with our previous experiments : ![graph_exp1_exp2_exp3_exp4.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/graph_exp1_exp2_exp3_exp4.png) It appears that `bge-small` was more proficient in capturing the semantic nuances of the Qdrant Documentation. Up to this point, our experimentation has focused solely on the *retrieval aspect* of our RAG pipeline. Now, let's explore altering the *generation aspect* or LLM while retaining the optimal parameters identified in Experiment 3. ### Experiment 5 - Changing the LLM Parameters : - **Embedding Model : `bge-small-en`** - **Chunk size: `512`** - **Chunk overlap: `64`** - **Number of docs retrieved (Retrieval Window): `5`** - **LLM: : `GPT-3.5-turbo`** For this we can repurpose our collection from Experiment 3 while the evaluations to use a new recipe with **GPT-3.5-turbo** model. ```python #collection name from Experiment 3 COLLECTION_NAME = f""experiment_{chunk_size}_{chunk_overlap}_{embedding_model_name.split('/')[1]}"" # We have to create a recipe using the same prompt template and GPT-3.5-turbo recipe_gpt = quotient.create_recipe( model_id=5, prompt_template_id=1, name='gpt3.5-qa-with-rag-recipe-v1', description='GPT-3.5 using a prompt template that includes context.' ) recipe_gpt #Outputs #{'id': 495, # 'name': 'gpt3.5-qa-with-rag-recipe-v1', # 'description': 'GPT-3.5 using a prompt template that includes context.', # 'model_id': 5, # 'prompt_template_id': 1, # 'created_at': '2024-05-03T12:14:58.779585', # 'owner_profile_id': 34, # 'system_prompt_id': None, # 'prompt_template': {'id': 1, # 'name': 'Default Question Answering Template', # 'variables': '[""input_text"",""context""]', # 'created_at': '2023-12-21T22:01:54.632367', # 'template_string': 'Question: {input_text}\\n\\nContext: {context}\\n\\nAnswer:', # 'owner_profile_id': None}, # 'model': {'id': 5, # 'name': 'gpt-3.5-turbo', # 'endpoint': 'https://api.openai.com/v1/chat/completions', # 'revision': 'placeholder', # 'created_at': '2024-02-06T17:01:21.408454', # 'model_type': 'OpenAI', # 'description': 'Returns a maximum of 4K output tokens.', # 'owner_profile_id': None, # 'external_model_config_id': None, # 'instruction_template_cls': 'NoneType'}} ``` Running the evaluations as : ```python experiment_5 = run_eval(eval_df, collection_name=COLLECTION_NAME, recipe_id=recipe_gpt['id'], num_docs=num_docs, path=f""{COLLECTION_NAME}_{num_docs}_gpt.csv"") ``` We observe : ![experiment5_eval.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/experiment5_eval.png) and comparing all the 5 experiments as below : ![graph_exp1_exp2_exp3_exp4_exp5.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/graph_exp1_exp2_exp3_exp4_exp5.png) **GPT-3.5 surpassed Mistral-7B in all metrics**! Notably, Experiment 5 exhibited the **lowest occurrence of hallucination**. ## Conclusions Let’s take a look at our results from all 5 experiments above ![overall_eval_results.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/overall_eval_results.png) We still have a long way to go in improving the retrieval performance of RAG, as indicated by our generally poor results thus far. It might be beneficial to **explore alternative embedding models** or **different retrieval strategies** to address this issue. The significant variations in *Context Relevance* suggest that **certain questions may necessitate retrieving more documents than others**. Therefore, investigating a **dynamic retrieval strategy** could be worthwhile. Furthermore, there's ongoing **exploration required on the generative aspect** of RAG. Modifying LLMs or prompts can substantially impact the overall quality of responses. This iterative process demonstrates how, starting from scratch, continual evaluation and adjustments throughout experimentation can lead to the development of an enhanced RAG system. ## Watch this workshop on YouTube > A workshop version of this article is [available on YouTube](https://www.youtube.com/watch?v=3MEMPZR1aZA). Follow along using our [GitHub notebook](https://github.com/qdrant/qdrant-rag-eval/tree/master/workshop-rag-eval-qdrant-quotient). ",articles/rapid-rag-optimization-with-qdrant-and-quotient.md "--- title: Qdrant Articles page_title: Articles about Vector Search description: Articles about vector search and similarity larning related topics. Latest updates on Qdrant vector search engine. section_title: Check out our latest publications subtitle: Check out our latest publications img: /articles_data/title-img.png --- ",articles/_index.md "--- title: Why Rust? short_description: ""A short history on how we chose rust and what it has brought us"" description: Qdrant could be built in any language. But it's written in Rust. Here*s why. social_preview_image: /articles_data/why-rust/preview/social_preview.jpg preview_dir: /articles_data/why-rust/preview weight: 10 author: Andre Bogus author_link: https://llogiq.github.io date: 2023-05-11T10:00:00+01:00 draft: false keywords: rust, programming, development aliases: [ /articles/why_rust/ ] --- # Building Qdrant in Rust Looking at the [github repository](https://github.com/qdrant/qdrant), you can see that Qdrant is built in [Rust](https://rust-lang.org). Other offerings may be written in C++, Go, Java or even Python. So why does Qdrant chose Rust? Our founder Andrey had built the first prototype in C++, but didn’t trust his command of the language to scale to a production system (to be frank, he likened it to cutting his leg off). He was well versed in Java and Scala and also knew some Python. However, he considered neither a good fit: **Java** is also more than 30 years old now. With a throughput-optimized VM it can often at least play in the same ball park as native services, and the tooling is phenomenal. Also portability is surprisingly good, although the GC is not suited for low-memory applications and will generally take good amount of RAM to deliver good performance. That said, the focus on throughput led to the dreaded GC pauses that cause latency spikes. Also the fat runtime incurs high start-up delays, which need to be worked around. **Scala** also builds on the JVM, although there is a native compiler, there was the question of compatibility. So Scala shared the limitations of Java, and although it has some nice high-level amenities (of which Java only recently copied a subset), it still doesn’t offer the same level of control over memory layout as, say, C++, so it is similarly disqualified. **Python**, being just a bit younger than Java, is ubiquitous in ML projects, mostly owing to its tooling (notably jupyter notebooks), being easy to learn and integration in most ML stacks. It doesn’t have a traditional garbage collector, opting for ubiquitous reference counting instead, which somewhat helps memory consumption. With that said, unless you only use it as glue code over high-perf modules, you may find yourself waiting for results. Also getting complex python services to perform stably under load is a serious technical challenge. ## Into the Unknown So Andrey looked around at what younger languages would fit the challenge. After some searching, two contenders emerged: Go and Rust. Knowing neither, Andrey consulted the docs, and found hinself intrigued by Rust with its promise of Systems Programming without pervasive memory unsafety. This early decision has been validated time and again. When first learning Rust, the compiler’s error messages are very helpful (and have only improved in the meantime). It’s easy to keep memory profile low when one doesn’t have to wrestle a garbage collector and has complete control over stack and heap. Apart from the much advertised memory safety, many footguns one can run into when writing C++ have been meticulously designed out. And it’s much easier to parallelize a task if one doesn’t have to fear data races. With Qdrant written in Rust, we can offer cloud services that don’t keep us awake at night, thanks to Rust’s famed robustness. A current qdrant docker container comes in at just a bit over 50MB — try that for size. As for performance… have some [benchmarks](/benchmarks/). And we don’t have to compromise on ergonomics either, not for us nor for our users. Of course, there are downsides: Rust compile times are usually similar to C++’s, and though the learning curve has been considerably softened in the last years, it’s still no match for easy-entry languages like Python or Go. But learning it is a one-time cost. Contrast this with Go, where you may find [the apparent simplicity is only skin-deep](https://fasterthanli.me/articles/i-want-off-mr-golangs-wild-ride). ## Smooth is Fast The complexity of the type system pays large dividends in bugs that didn’t even make it to a commit. The ecosystem for web services is also already quite advanced, perhaps not at the same point as Java, but certainly matching or outcompeting Go. Some people may think that the strict nature of Rust will slow down development, which is true only insofar as it won’t let you cut any corners. However, experience has conclusively shown that this is a net win. In fact, Rust lets us [ride the wall](https://the-race.com/nascar/bizarre-wall-riding-move-puts-chastain-into-nascar-folklore/), which makes us faster, not slower. The job market for Rust programmers is certainly not as big as that for Java or Python programmers, but the language has finally reached the mainstream, and we don’t have any problems getting and retaining top talent. And being an open source project, when we get contributions, we don’t have to check for a wide variety of errors that Rust already rules out. ## In Rust We Trust Finally, the Rust community is a very friendly bunch, and we are delighted to be part of that. And we don’t seem to be alone. Most large IT companies (notably Amazon, Google, Huawei, Meta and Microsoft) have already started investing in Rust. It’s in the Windows font system already and in the process of coming to the Linux kernel (build support has already been included). In machine learning applications, Rust has been tried and proven by the likes of Aleph Alpha and Huggingface, among many others. To sum up, choosing Rust was a lucky guess that has brought huge benefits to Qdrant. Rust continues to be our not-so-secret weapon. ### Key Takeaways: - **Rust's Advantages for Qdrant:** Rust provides memory safety and control without a garbage collector, which is crucial for Qdrant's high-performance cloud services. - **Low Overhead:** Qdrant's Rust-based system offers efficiency, with small Docker container sizes and robust performance benchmarks. - **Complexity vs. Simplicity:** Rust's strict type system reduces bugs early in development, making it faster in the long run despite initial learning curves. - **Adoption by Major Players:** Large tech companies like Amazon, Google, and Microsoft are embracing Rust, further validating Qdrant's choice. - **Community and Talent:** The supportive Rust community and increasing availability of Rust developers make it easier for Qdrant to grow and innovate.",articles/why-rust.md "--- title: ""Qdrant x.y.0 - #required; update version and headline"" draft: true # Change to false to publish the article at /articles/ slug: qdrant-x.y.z # required; subtitute version number short_description: ""Headline-like description."" description: ""Headline with more detail. Suggested limit: 140 characters. "" # Follow instructions in https://github.com/qdrant/landing_page?tab=readme-ov-file#articles to create preview images # social_preview_image: /articles_data//social_preview.jpg # This image will be used in social media previews, should be 1200x600px. Required. # small_preview_image: /articles_data//icon.svg # This image will be used in the list of articles at the footer, should be 40x40px # preview_dir: /articles_data//preview # This directory contains images that will be used in the article preview. They can be generated from one image. Read more below. Required. weight: 10 # This is the order of the article in the list of articles at the footer. The lower the number, the higher the article will be in the list. Negative numbers OK. author: # Author of the article. Required. author_link: https://medium.com/@yusufsarigoz # Link to the author's page. Not required. date: 2022-06-28T13:00:00+03:00 # Date of the article. Required. If the date is in the future it does not appear in the build tags: # Keywords for SEO - vector databases comparative benchmark - benchmark - performance - latency --- [Qdrant x.y.0 is out!]((https://github.com/qdrant/qdrant/releases/tag/vx.y.0). Include headlines: - **Headline 1:** Description - **Headline 2:** Description - **Headline 3:** Description ## Related to headline 1 Description Highlights: - **Detail 1:** Description - **Detail 2:** Description - **Detail 3:** Description Include before / after information, ideally with graphs and/or numbers Include links to documentation Note limits, such as availability on Qdrant Cloud ## Minor improvements and new features Beyond these enhancements, [Qdrant vx.y.0](https://github.com/qdrant/qdrant/releases/tag/vx.y.0) adds and improves on several smaller features: 1. 1. ## Release notes For more information, see [our release notes](https://github.com/qdrant/qdrant/releases/tag/vx.y.0). Qdrant is an open source project. We welcome your contributions; raise [issues](https://github.com/qdrant/qdrant/issues), or contribute via [pull requests](https://github.com/qdrant/qdrant/pulls)! ",articles/templates/release-post-template.md "--- review: “With the landscape of AI being complex for most customers, Qdrant's ease of use provides an easy approach for customers' implementation of RAG patterns for Generative AI solutions and additional choices in selecting AI components on Azure.” names: Tara Walker positions: Principal Software Engineer at Microsoft avatar: src: /img/customers/tara-walker.svg alt: Tara Walker Avatar logo: src: /img/brands/microsoft-gray.svg alt: Logo sitemapExclude: true --- ",qdrant-for-startups/qdrant-for-startups-testimonial.md "--- title: Apply Now form: id: startup-program-form title: Join our Startup Program firstNameLabel: First Name lastNameLabel: Last Name businessEmailLabel: Business Email companyNameLabel: Company Name companyUrlLabel: Company URL cloudProviderLabel: Cloud Provider productDescriptionLabel: Product Description latestFundingRoundLabel: Latest Funding Round numberOfEmployeesLabel: Number of Employees info: By submitting, I confirm that I have read and understood the link: url: / text: Terms and Conditions. button: Send Message hubspotFormOptions: '{ ""region"": ""eu1"", ""portalId"": ""139603372"", ""formId"": ""59eb058b-0145-4ab0-b49a-c37708d20a60"", ""submitButtonClass"": ""button button_contained"", }' sitemapExclude: true --- ",qdrant-for-startups/qdrant-for-startups-form.md "--- title: Program FAQ questions: - id: 0 question: Who is eligible? answer: |
  • Pre-seed, Seed or Series A startups (under five years old)
  • Has not previously participated in the Qdrant for Startups program
  • Must be building an AI-driven product or services (agencies or devshops are not eligible)
  • A live, functional website is a must for all applicants
  • Billing must be done directly with Qdrant (not through a marketplace)
- id: 1 question: When will I get notified about my application? answer: Upon submitting your application, we will review it and notify you of your status within 7 business days. - id: 2 question: What is the price? answer: It is free to apply to the program. As part of the program, you will receive up to a 20% discount on Qdrant Cloud, valid for 12 months. For detailed cloud pricing, please visit qdrant.tech/pricing. - id: 3 question: How can my startup join the program? answer: Your startup can join the program by simply submitting the application on this page. Once submitted, we will review your application and notify you of your status within 7 business days. sitemapExclude: true --- ",qdrant-for-startups/qdrant-for-startups-faq.md "--- title: Why join Qdrant for Startups? mainCard: title: Discount for Qdrant Cloud description: Receive up to 20% discount on Qdrant Cloud for the first year and start building now. image: src: /img/qdrant-for-startups-benefits/card1.png alt: Qdrant Discount for Startups cards: - id: 0 title: Expert Technical Advice description: Get access to one-on-one sessions with experts for personalized technical advice. image: src: /img/qdrant-for-startups-benefits/card2.svg alt: Expert Technical Advice - id: 1 title: Co-Marketing Opportunities description: We’d love to share your work with our community. Exclusive access to our Vector Space Talks, joint blog posts, and more. image: src: /img/qdrant-for-startups-benefits/card3.svg alt: Co-Marketing Opportunities description: Qdrant is the leading open source vector database and similarity search engine designed to handle high-dimensional vectors for performance and massive-scale AI applications. link: url: /documentation/overview/ text: Learn More sitemapExclude: true --- ",qdrant-for-startups/qdrant-for-startups-benefits.md "--- title: Qdrant For Startups description: Qdrant For Startups cascade: - _target: environment: production build: list: never render: never publishResources: false sitemapExclude: true # todo: remove sitemapExclude and change building options after the page is ready to be published --- ",qdrant-for-startups/_index.md "--- title: Qdrant for Startups description: Powering The Next Wave of AI Innovators, Qdrant for Startups is committed to being the catalyst for the next generation of AI pioneers. Our program is specifically designed to provide AI-focused startups with the right resources to scale. If AI is at the heart of your startup, you're in the right place. button: text: Apply Now url: ""#form"" image: src: /img/qdrant-for-startups-hero.svg srcMobile: /img/mobile/qdrant-for-startups-hero.svg alt: Qdrant for Startups sitemapExclude: true --- ",qdrant-for-startups/qdrant-for-startups-hero.md "--- title: Distributed icon: - url: /features/cloud.svg - url: /features/cluster.svg weight: 50 sitemapExclude: True --- Cloud-native and scales horizontally. \ No matter how much data you need to serve - Qdrant can always be used with just the right amount of computational resources. ",features/distributed.md "--- title: Rich data types icon: - url: /features/data.svg weight: 40 sitemapExclude: True --- Vector payload supports a large variety of data types and query conditions, including string matching, numerical ranges, geo-locations, and more. Payload filtering conditions allow you to build almost any custom business logic that should work on top of similarity matching.",features/rich-data-types.md "--- title: Efficient icon: - url: /features/sight.svg weight: 60 sitemapExclude: True --- Effectively utilizes your resources. Developed entirely in Rust language, Qdrant implements dynamic query planning and payload data indexing. Hardware-aware builds are also available for Enterprises. ",features/optimized.md "--- title: Easy to Use API icon: - url: /features/settings.svg - url: /features/microchip.svg weight: 10 sitemapExclude: True --- Provides the [OpenAPI v3 specification](https://api.qdrant.tech/api-reference) to generate a client library in almost any programming language. Alternatively utilise [ready-made client for Python](https://github.com/qdrant/qdrant-client) or other programming languages with additional functionality.",features/easy-to-use.md "--- title: Filterable icon: - url: /features/filter.svg weight: 30 sitemapExclude: True --- Support additional payload associated with vectors. Not only stores payload but also allows filter results based on payload values. \ Unlike Elasticsearch post-filtering, Qdrant guarantees all relevant vectors are retrieved. ",features/filterable.md "--- title: Fast and Accurate icon: - url: /features/speed.svg - url: /features/target.svg weight: 20 sitemapExclude: True --- Implement a unique custom modification of the [HNSW algorithm](https://arxiv.org/abs/1603.09320) for Approximate Nearest Neighbor Search. Search with a [State-of-the-Art speed](https://github.com/qdrant/benchmark/tree/master/search_benchmark) and apply search filters without [compromising on results](https://blog.vasnetsov.com/posts/categorical-hnsw/). ",features/fast-and-accurate.md "--- title: ""Make the most of your Unstructured Data"" icon: sitemapExclude: True _build: render: never list: never publishResources: false cascade: _build: render: never list: never publishResources: false --- Qdrant is a vector database & vector similarity search engine. It deploys as an API service providing search for the nearest high-dimensional vectors. With Qdrant, embeddings or neural network encoders can be turned into full-fledged applications for matching, searching, recommending, and much more! ",features/_index.md "--- title: Are you contributing to our code, content, or community? button: url: https://forms.gle/q4fkwudDsy16xAZk8 text: Become a Star image: src: /img/stars.svg alt: Stars sitemapExclude: true --- ",stars/stars-get-started.md "--- title: Meet our Stars cards: - id: 0 image: src: /img/stars/robert-caulk.jpg alt: Robert Caulk Photo name: Robert Caulk position: Founder of Emergent Methods description: Robert is working with a team on AskNews.app to adaptively enrich, index, and report on over 1 million news articles per day - id: 1 image: src: /img/stars/joshua-mo.jpg alt: Joshua Mo Photo name: Joshua Mo position: DevRel at Shuttle.rs description: Hey there! I primarily use Rust and am looking forward to contributing to the Qdrant community! - id: 2 image: src: /img/stars/nick-khami.jpg alt: Nick Khami Photo name: Nick Khami position: Founder & Product Engineer description: Founder and product engineer at Trieve and has been using Qdrant since late 2022 - id: 3 image: src: /img/stars/owen-colegrove.jpg alt: Owen Colegrove Photo name: Owen Colegrove position: Founder of SciPhi description: Physics PhD, Quant @ Citadel and Founder at SciPhi - id: 4 image: src: /img/stars/m-k-pavan-kumar.jpg alt: M K Pavan Kumar Photo name: M K Pavan Kumar position: Data Scientist and Lead GenAI description: A seasoned technology expert with 14 years of experience in full stack development, cloud solutions, & artificial intelligence - id: 5 image: src: /img/stars/niranjan-akella.jpg alt: Niranjan Akella Photo name: Niranjan Akella position: Scientist by Heart & AI Engineer description: I build & deploy AI models like LLMs, Diffusion Models & Vision Models at scale - id: 6 image: src: /img/stars/bojan-jakimovski.jpg alt: Bojan Jakimovski Photo name: Bojan Jakimovski position: Machine Learning Engineer description: I'm really excited to show the power of the Qdrant as vector database - id: 7 image: src: /img/stars/haydar-kulekci.jpg alt: Haydar KULEKCI Photo name: Haydar KULEKCI position: Senior Software Engineer description: I am a senior software engineer and consultant with over 10 years of experience in data management, processing, and software development. - id: 8 image: src: /img/stars/nicola-procopio.jpg alt: Nicola Procopio Photo name: Nicola Procopio position: Senior Data Scientist @ Fincons Group description: Nicola, a data scientist and open-source enthusiast since 2009, has used Qdrant since 2023. He developed fastembed for Haystack, vector search for Cheshire Cat A.I., and shares his expertise through articles, tutorials, and talks. - id: 9 image: src: /img/stars/eduardo-vasquez.jpg alt: Eduardo Vasquez Photo name: Eduardo Vasquez position: Data Scientist and MLOps Engineer description: I am a Data Scientist and MLOps Engineer exploring generative AI and LLMs, creating YouTube content on RAG workflows and fine-tuning LLMs. I hold an MSc in Statistics and Data Science. - id: 10 image: src: /img/stars/benito-martin.jpg alt: Benito Martin Photo name: Benito Martin position: Independent Consultant | Data Science, ML and AI Project Implementation | Teacher and Course Content Developer description: Over the past year, Benito developed MLOps and LLM projects. Based in Switzerland, Benito continues to advance his skills. - id: 11 image: src: /img/stars/nirant-kasliwal.jpg alt: Nirant Kasliwal Photo name: Nirant Kasliwal position: FastEmbed Creator description: I'm a Machine Learning consultant specializing in NLP and Vision systems for early-stage products. I've authored an NLP book recommended by Dr. Andrew Ng to Stanford's CS230 students and maintain FastEmbed at Qdrant for speed. - id: 12 image: src: /img/stars/denzell-ford.jpg alt: Denzell Ford Photo name: Denzell Ford position: Founder at Trieve, has been using Qdrant since late 2022. description: Denzell Ford, the founder of Trieve, has been using Qdrant since late 2022. He's passionate about helping people in the community. - id: 13 image: src: /img/stars/pavan-nagula.jpg alt: Pavan Nagula Photo name: Pavan Nagula position: Data Scientist | Machine Learning and Generative AI description: I'm Pavan, a data scientist specializing in AI, ML, and big data analytics. I love experimenting with new technologies in the AI and ML space, and Qdrant is a place where I've seen such innovative implementations recently. sitemapExclude: true --- ",stars/stars-list.md "--- title: Everything you need to extend your current reach to be the voice of the developer community and represent Qdrant benefits: - id: 0 icon: src: /icons/outline/training-blue.svg alt: Training title: Training description: You will be equipped with the assets and knowledge to organize and execute successful talks and events. Get access to our content library with slide decks, templates, and more. - id: 1 icon: src: /icons/outline/award-blue.svg alt: Award title: Recognition description: Win a certificate and be featured on our website page. Plus, enjoy the distinction of receiving exclusive Qdrant swag. - id: 2 icon: src: /icons/outline/travel-blue.svg alt: Travel title: Travel description: Benefit from a dedicated travel fund for speaking engagements at developer conferences. - id: 3 icon: src: /icons/outline/star-ticket-blue.svg alt: Star ticket title: Beta-tests description: Get a front-row seat to the future of Qdrant with opportunities to beta-test new releases and access our detailed product roadmap. sitemapExclude: true --- ",stars/stars-benefits.md "--- title: Join our growing community cards: - id: 0 icon: src: /img/stars-marketplaces/github.svg alt: Github icon title: Stars statsToUse: githubStars description: Join our GitHub community and contribute to the future of vector databases. link: text: Start Contributing url: https://github.com/qdrant/qdrant - id: 1 icon: src: /img/stars-marketplaces/discord.svg alt: Discord icon title: Members statsToUse: discordMembers description: Discover and chat on a vibrant community of developers working on the future of AI. link: text: Join our Conversations url: https://qdrant.to/discord - id: 2 icon: src: /img/stars-marketplaces/twitter.svg alt: Twitter icon title: Followers statsToUse: twitterFollowers description: Join us on X, participate and find out about our updates and releases before anyone else. link: text: Spread the Word url: https://qdrant.to/twitter sitemapExclude: true --- ",stars/stars-marketplaces.md "--- title: About Qdrant Stars descriptionFirstPart: Qdrant Stars is an exclusive program to the top contributors and evangelists inside the Qdrant community. descriptionSecondPart: These are the experts responsible for leading community discussions, creating high-quality content, and participating in Qdrant’s events and meetups. image: src: /img/stars-about.png alt: Stars program sitemapExclude: true --- ",stars/stars-about.md "--- title: You are already a star in our community! description: The Qdrant Stars program is here to take that one step further. button: text: Become a Star url: https://forms.gle/q4fkwudDsy16xAZk8 image: src: /img/stars-hero.svg alt: Stars sitemapExclude: true --- ",stars/stars-hero.md "--- title: Qdrant Stars description: Qdrant Stars - Our Ambassador Program build: render: always cascade: - build: list: local publishResources: false render: never --- ",stars/_index.md "--- title: Qdrant Private Cloud. Run Qdrant On-Premise. description: Effortlessly deploy and manage your enterprise-ready vector database fully on-premise, enhancing security for AI-driven applications. contactUs: text: Contact us url: /contact-sales/ sitemapExclude: true --- ",private-cloud/private-cloud-hero.md "--- title: Qdrant Private Cloud offers a dedicated, on-premise solution that guarantees supreme data privacy and sovereignty. description: Designed for enterprise-grade demands, it provides a seamless management experience for your vector database, ensuring optimal performance and security for vector search and AI applications. image: src: /img/private-cloud-data-privacy.svg alt: Private cloud data privacy sitemapExclude: true --- ",private-cloud/private-cloud-about.md "--- content: To learn more about Qdrant Private Cloud, please contact our team. contactUs: text: Contact us url: /contact-sales/ sitemapExclude: true --- ",private-cloud/private-cloud-get-contacted.md "--- title: private-cloud description: private-cloud build: render: always cascade: - build: list: local publishResources: false render: never --- ",private-cloud/_index.md "--- draft: false title: Building a High-Performance Entity Matching Solution with Qdrant - Rishabh Bhardwaj | Vector Space Talks slug: entity-matching-qdrant short_description: Rishabh Bhardwaj, a Data Engineer at HRS Group, discusses building a high-performance hotel matching solution with Qdrant. description: Rishabh Bhardwaj, a Data Engineer at HRS Group, discusses building a high-performance hotel matching solution with Qdrant, addressing data inconsistency, duplication, and real-time processing challenges. preview_image: /blog/from_cms/rishabh-bhardwaj-cropped.png date: 2024-01-09T11:53:56.825Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talk - Entity Matching Solution - Real Time Processing --- > *""When we were building proof of concept for this solution, we initially started with Postgres. But after some experimentation, we realized that it basically does not perform very well in terms of recall and speed... then we came to know that Qdrant performs a lot better as compared to other solutions that existed at the moment.”*\ > -- Rishabh Bhardwaj > How does the HNSW (Hierarchical Navigable Small World) algorithm benefit the solution built by Rishabh? Rhishabh, a Data Engineer at HRS Group, excels in designing, developing, and maintaining data pipelines and infrastructure crucial for data-driven decision-making processes. With extensive experience, Rhishabh brings a profound understanding of data engineering principles and best practices to the role. Proficient in SQL, Python, Airflow, ETL tools, and cloud platforms like AWS and Azure, Rhishabh has a proven track record of delivering high-quality data solutions that align with business needs. Collaborating closely with data analysts, scientists, and stakeholders at HRS Group, Rhishabh ensures the provision of valuable data and insights for informed decision-making. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/3IMIZljXqgYBqt671eaR9b?si=HUV6iwzIRByLLyHmroWTFA), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/tDWhMAOyrcE).*** ## **Top Takeaways:** Data inconsistency, duplication, and real-time processing challenges? Rishabh Bhardwaj, Data Engineer at HRS Group has the solution! In this episode, Rishabh dives into the nitty-gritty of creating a high-performance hotel matching solution with Qdrant, covering everything from data inconsistency challenges to the speed and accuracy enhancements achieved through the HNSW algorithm. 5 Keys to Learning from the Episode: 1. Discover the importance of data consistency and the challenges it poses when dealing with multiple sources and languages. 2. Learn how Qdrant, an open-source vector database, outperformed other solutions and provided an efficient solution for high-speed matching. 3. Explore the unique modification of the HNSW algorithm in Qdrant and how it optimized the performance of the solution. 4. Dive into the crucial role of geofiltering and how it ensures accurate matching based on hotel locations. 5. Gain insights into the considerations surrounding GDPR compliance and the secure handling of hotel data. > Fun Fact: Did you know that Rishabh and his team experimented with multiple transformer models to find the best fit for their entity resolution use case? Ultimately, they found that the Mini LM model struck the perfect balance between speed and accuracy. Talk about a winning combination! > ## Show Notes: 02:24 Data from different sources is inconsistent and complex.\ 05:03 Using Postgres for proof, switched to Qdrant for better results\ 09:16 Geofiltering is crucial for validating our matches.\ 11:46 Insights on performance metrics and benchmarks.\ 16:22 We experimented with different values and found the desired number.\ 19:54 We experimented with different models and found the best one.\ 21:01 API gateway connects multiple clients for entity resolution.\ 24:31 Multiple languages supported, using transcript API for accuracy. ## More Quotes from Rishabh: *""One of the major challenges is the data inconsistency.”*\ -- Rishabh Bhardwaj *""So the only thing of how to know that which model would work for us is to again experiment with the models on our own data sets. But after doing those experiments, we realized that this is the best model that offers the best balance between speed and accuracy cool of the embeddings.”*\ -- Rishabh Bhardwaj *""Qdrant basically optimizes a lot using for the compute resources and this also helped us to scale the whole infrastructure in a really efficient manner.”*\ -- Rishabh Bhardwaj ## Transcript: Demetrios: Hello, fellow travelers in vector space. Dare, I call you astronauts? Today we've got an incredible conversation coming up with Rishabh, and I am happy that you all have joined us. Rishabh, it's great to have you here, man. How you doing? Rishabh Bhardwaj: Thanks for having me, Demetrios. I'm doing really great. Demetrios: Cool. I love hearing that. And I know you are in India. It is a little bit late there, so I appreciate you taking the time to come on the Vector space talks with us today. You've got a lot of stuff that you're going to be talking about. For anybody that does not know you, you are a data engineer at Hrs Group, and you're responsible for designing, developing, and maintaining data pipelines and infrastructure that supports the company. I am excited because today we're going to be talking about building a high performance hotel matching solution with Qdrant. Of course, there's a little kicker there. Demetrios: We want to get into how you did that and how you leveraged Qdrant. Let's talk about it, man. Let's get into it. I want to know give us a quick overview of what exactly this is. I gave the title, but I think you can tell us a little bit more about building this high performance hotel matching solution. Rishabh Bhardwaj: Definitely. So to start with, a brief description about the project. So we have some data in our internal databases, and we ingest a lot of data on a regular basis from different sources. So Hrs is basically a global tech company focused on business travel, and we have one of the most used hotel booking portals in Europe. So one of the major things that is important for customer satisfaction is the content that we provide them on our portals. Right. So the issue or the key challenges that we have is basically with the data itself that we ingest from different sources. One of the major challenges is the data inconsistency. Rishabh Bhardwaj: So different sources provide data in different formats, not only in different formats. It comes in multiple languages as well. So almost all the languages being used across Europe and also other parts of the world as well. So, Majorly, the data is coming across 20 different languages, and it makes it really difficult to consolidate and analyze this data. And this inconsistency in data often leads to many errors in data interpretation and decision making as well. Also, there is a challenge of data duplication, so the same piece of information can be represented differently across various sources, which could then again lead to data redundancy. And identifying and resolving these duplicates is again a significant challenge. Then the last challenge I can think about is that this data processing happens in real time. Rishabh Bhardwaj: So we have a constant influx of data from multiple sources, and processing and updating this information in real time is a really daunting task. Yeah. Demetrios: And when you are talking about this data duplication, are you saying things like, it's the same information in French and German? Or is it something like it's the same column, just a different way in like, a table? Rishabh Bhardwaj: Actually, it is both the cases, so the same entities can be coming in multiple languages. And then again, second thing also wow. Demetrios: All right, cool. Well, that sets the scene for us. Now, I feel like you brought some slides along. Feel free to share those whenever you want. I'm going to fire away the first question and ask about this. I'm going to go straight into Qdrant questions and ask you to elaborate on how the unique modification of Qdrant of the HNSW algorithm benefits your solution. So what are you doing there? How are you leveraging that? And how also to add another layer to this question, this ridiculously long question that I'm starting to get myself into, how do you handle geo filtering based on longitude and latitude? So, to summarize my lengthy question, let's just start with the HNSW algorithm. How does that benefit your solution? Rishabh Bhardwaj: Sure. So to begin with, I will give you a little backstory. So when we were building proof of concept for this solution, we initially started with Postgres, because we had some Postgres databases lying around in development environments, and we just wanted to try out and build a proof of concept. So we installed an extension called Pgvector. And at that point of time, it used to have IVF Flat indexing approach. But after some experimentation, we realized that it basically does not perform very well in terms of recall and speed. Basically, if we want to increase the speed, then we would suffer a lot on basis of recall. Then we started looking for native vector databases in the market, and then we saw some benchmarks and we came to know that Qdrant performs a lot better as compared to other solutions that existed at the moment. Rishabh Bhardwaj: And also, it was open source and really easy to host and use. We just needed to deploy a docker image in EC two instance and we can really start using it. Demetrios: Did you guys do your own benchmarks too? Or was that just like, you looked, you saw, you were like, all right, let's give this thing a spin. Rishabh Bhardwaj: So while deciding initially we just looked at the publicly available benchmarks, but later on, when we started using Qdrant, we did our own benchmarks internally. Nice. Demetrios: All right. Rishabh Bhardwaj: We just deployed a docker image of Qdrant in one of the EC Two instances and started experimenting with it. Very soon we realized that the HNSW indexing algorithm that it uses to build the indexing for the vectors, it was really efficient. We noticed that as compared to the PG Vector IVF Flat approach, it was around 16 times faster. And it didn't mean that it was not that accurate. It was actually 5% more accurate as compared to the previous results. So hold up. Demetrios: 16 times faster and 5% more accurate. And just so everybody out there listening knows we're not paying you to say this, right? Rishabh Bhardwaj: No, not at all. Demetrios: All right, keep going. I like it. Rishabh Bhardwaj: Yeah. So initially, during the experimentations, we begin with the default values for the HNSW algorithm that Qdrant ships with. And these benchmarks that I just told you about, it was based on those parameters. But as our use cases evolved, we also experimented on multiple values of basically M and EF construct that Qdrant allow us to specify in the indexing algorithm. Demetrios: Right. Rishabh Bhardwaj: So also the other thing is, Qdrant also provides the functionality to specify those parameters while making the search as well. So it does not mean if we build the index initially, we only have to use those specifications. We can again specify them during the search as well. Demetrios: Okay. Rishabh Bhardwaj: Yeah. So some use cases we have requires 100% accuracy. It means we do not need to worry about speed at all in those use cases. But there are some use cases in which speed is really important when we need to match, like, a million scale data set. In those use cases, speed is really important, and we can adjust a little bit on the accuracy part. So, yeah, this configuration that Qdrant provides for indexing really benefited us in our approach. Demetrios: Okay, so then layer into that all the fun with how you're handling geofiltering. Rishabh Bhardwaj: So geofiltering is also a very important feature in our solution because the entities that we are dealing with in our data majorly consist of hotel entities. Right. And hotel entities often comes with the geocordinates. So even if we match it using one of the Embedding models, then we also need to make sure that whatever the model has matched with a certain cosine similarity is also true. So in order to validate that, we use geofiltering, which also comes in stacked with Qdrant. So we provide geocordinate data from our internal databases, and then we match it from what we get from multiple sources as well. And it also has a radius parameter, which we can provide to tune in. How much radius do we want to take in account in order for this to be filterable? Demetrios: Yeah. Makes sense. I would imagine that knowing where the hotel location is is probably a very big piece of the puzzle that you're serving up for people. So as you were doing this, what are some things that came up that were really important? I know you talked about working with Europe. There's a lot of GDPR concerns. Was there, like, privacy considerations that you had to address? Was there security considerations when it comes to handling hotel data? Vector, Embeddings, how did you manage all that stuff? Rishabh Bhardwaj: So GDP compliance? Yes. It does play a very important role in this whole solution. Demetrios: That was meant to be a thumbs up. I don't know what happened there. Keep going. Sorry, I derailed that. Rishabh Bhardwaj: No worries. Yes. So GDPR compliance is also one of the key factors that we take in account while building this solution to make sure that nothing goes out of the compliance. We basically deployed Qdrant inside a private EC two instance, and it is also protected by an API key. And also we have built custom authentication workflows using Microsoft Azure SSO. Demetrios: I see. So there are a few things that I also want to ask, but I do want to open it up. There are people that are listening, watching live. If anyone wants to ask any questions in the chat, feel free to throw something in there and I will ask away. In the meantime, while people are typing in what they want to talk to you about, can you talk to us about any insights into the performance metrics? And really, these benchmarks that you did where you saw it was, I think you said, 16 times faster and then 5% more accurate. What did that look like? What benchmarks did you do? How did you benchmark it? All that fun stuff. And what are some things to keep in mind if others out there want to benchmark? And I guess you were just benchmarking it against Pgvector, right? Rishabh Bhardwaj: Yes, we did. Demetrios: Okay, cool. Rishabh Bhardwaj: So for benchmarking, we have some data sets that are already matched to some entities. This was done partially by humans and partially by other algorithms that we use for matching in the past. And it is already consolidated data sets, which we again used for benchmarking purposes. Then the benchmarks that I specified were only against PG vector, and we did not benchmark it any further because the speed and the accuracy that Qdrant provides, I think it is already covering our use case and it is way more faster than we thought the solution could be. So right now we did not benchmark against any other vector database or any other solution. Demetrios: Makes sense just to also get an idea in my head kind of jumping all over the place, so forgive me. The semantic components of the hotel, was it text descriptions or images or a little bit of both? Everything? Rishabh Bhardwaj: Yes. So semantic comes just from the descriptions of the hotels, and right now it does not include the images. But in future use cases, we are also considering using images as well to calculate the semantic similarity between two entities. Demetrios: Nice. Okay, cool. Good. I am a visual guy. You got slides for us too, right? If I'm not mistaken? Do you want to share those or do you want me to keep hitting you with questions? We have something from Brad in the chat and maybe before you share any slides, is there a map visualization as part of the application UI? Can you speak to what you used? Rishabh Bhardwaj: If so, not right now, but this is actually a great idea and we will try to build it as soon as possible. Demetrios: Yeah, it makes sense. Where you have the drag and you can see like within this area, you have X amount of hotels, and these are what they look like, et cetera, et cetera. Rishabh Bhardwaj: Yes, definitely. Demetrios: Awesome. All right, so, yeah, feel free to share any slides you have, otherwise I can hit you with another question in the meantime, which is I'm wondering about the configurations you used for the HNSW index in Qdrant and what were the number of edges per node and the number of neighbors to consider during the index building. All of that fun stuff that goes into the nitty gritty of it. Rishabh Bhardwaj: So should I go with the slide first or should I answer your question first? Demetrios: Probably answer the question so we don't get too far off track, and then we can hit up your slides. And the slides, I'm sure, will prompt many other questions from my side and the audience's side. Rishabh Bhardwaj: So, for HNSW configuration, we have specified the value of M, which is, I think, basically the layers as 64, and the value for EF construct is 256. Demetrios: And how did you go about that? Rishabh Bhardwaj: So we did some again, benchmarks based on the single model that we have selected, which is mini LM, L six, V two. I will talk about it later also. But we basically experimented with different values of M and EF construct, and we came to this number that this is the value that we want to go ahead with. And also when I said that in some cases, indexing is not required at all, speed is not required at all, we want to make sure that whatever we are matching is 100% accurate. In that case, the Python client for Qdrant also provides a parameter called exact, and if we specify it as true, then it basically does not use indexing and it makes a full search on the whole vector collection, basically. Demetrios: Okay, so there's something for me that's pretty fascinating there on these different use cases. What else differs in the different ones? Because you have certain needs for speed or accuracy. It seems like those are the main trade offs that you're working with. What differs in the way that you set things up? Rishabh Bhardwaj: So in some cases so there are some internal databases that need to have hotel entities in a very sophisticated manner. It means it should not contain even a single duplicate entity. In those cases, accuracy is the most important thing we look at, and in some cases, for data analytics and consolidation purposes, we want speed more, but the accuracy should not be that much in value. Demetrios: So what does that look like in practice? Because you mentioned okay, when we are looking for the accuracy, we make sure that it comes through all of the different records. Right. Are there any other things in practice that you did differently? Rishabh Bhardwaj: Not really. Nothing I can think of right now. Demetrios: Okay, if anything comes up yeah, I'll remind you, but hit us with the slides, man. What do you got for the visual learners out there? Rishabh Bhardwaj: Sure. So I have an architecture diagram of what the solution looks like right now. So, this is the current architecture that we have in production. So, as I mentioned, we have deployed the Qdrant vector database in an EC Two, private EC Two instance hosted inside a VPC. And then we have some batch jobs running, which basically create Embeddings. And the source data basically first comes into S three buckets into a data lake. We do a little bit of preprocessing data cleaning and then it goes through a batch process of generating the Embeddings using the Mini LM model, mini LML six, V two. And this model is basically hosted in a SageMaker serverless inference endpoint, which allows us to not worry about servers and we can scale it as much as we want. Rishabh Bhardwaj: And it really helps us to build the Embeddings in a really fast manner. Demetrios: Why did you choose that model? Did you go through different models or was it just this one worked well enough and you went with it? Rishabh Bhardwaj: No, actually this was, I think the third or the fourth model that we tried out with. So what happens right now is if, let's say we want to perform a task such as sentence similarity and we go to the Internet and we try to find a model, it is really hard to see which model would perform best in our use case. So the only thing of how to know that which model would work for us is to again experiment with the models on our own data sets. So we did a lot of experiments. We used, I think, Mpnet model and a lot of multilingual models as well. But after doing those experiments, we realized that this is the best model that offers the best balance between speed and accuracy cool of the Embeddings. So we have deployed it in a serverless inference endpoint in SageMaker. And once we generate the Embeddings in a glue job, we then store them into the vector database Qdrant. Rishabh Bhardwaj: Then this part here is what goes on in the real time scenario. So, we have multiple clients, basically multiple application that would connect to an API gateway. We have exposed this API gateway in such a way that multiple clients can connect to it and they can use this entity resolution service according to their use cases. And we take in different parameters. Some are mandatory, some are not mandatory, and then they can use it based on their use case. The API gateway is connected to a lambda function which basically performs search on Qdrant vector database using the same Embeddings that can be generated from the same model that we hosted in the serverless inference endpoint. So, yeah, this is how the diagram looks right now. It did not used to look like this sometime back, but we have evolved it, developed it, and now we have got to this point where it is really scalable because most of the infrastructure that we have used here is serverless and it can be scaled up to any number of requests that you want. Demetrios: What did you have before that was the MVP. Rishabh Bhardwaj: So instead of this one, we had a real time inference endpoint which basically limited us to some number of requests that we had preset earlier while deploying the model. So this was one of the bottlenecks and then lambda function was always there, I think this one and also I think in place of this Qdrant vector database, as I mentioned, we had Postgres. So yeah, that was also a limitation because it used to use a lot of compute capacity within the EC two instance as compared to Qdrant. Qdrant basically optimizes a lot using for the compute resources and this also helped us to scale the whole infrastructure in a really efficient manner. Demetrios: Awesome. Cool. This is fascinating. From my side, I love seeing what you've done and how you went about iterating on the architecture and starting off with something that you had up and running and then optimizing it. So this project has been how long has it been in the making and what has the time to market been like that first MVP from zero to one and now it feels like you're going to one to infinity by making it optimized. What's the time frames been here? Rishabh Bhardwaj: I think we started this in the month of May this year. Now it's like five to six months already. So the first working solution that we built was in around one and a half months and then from there onwards we have tried to iterate it to make it better and better. Demetrios: Cool. Very cool. Some great questions come through in the chat. Do you have multiple language support for hotel names? If so, did you see any issues with such mappings? Rishabh Bhardwaj: Yes, we do have support for multiple languages and we do not do it using currently using the multilingual models because what we realized is the multilingual models are built on journal sentences and not based it is not trained on entities like names, hotel names and traveler names, et cetera. So when we experimented with the multilingual models it did not provide much satisfactory results. So we used transcript API from Google and it is able to basically translate a lot of languages across that we have across the data and it really gives satisfactory results in terms of entity resolution. Demetrios: Awesome. What other transformers were considered for the evaluation? Rishabh Bhardwaj: The ones I remember from top of my head are Mpnet, then there is a Chinese model called Text to VEC, Shiba something and Bert uncased, if I remember correctly. Yeah, these were some of the models. Demetrios: That we considered and nothing stood out that worked that well or was it just that you had to make trade offs on all of them? Rishabh Bhardwaj: So in terms of accuracy, Mpnet was a little bit better than Mini LM but then again it was a lot slower than the Mini LM model. It was around five times slower than the Mini LM model, so it was not a big trade off to give up with. So we decided to go ahead with Mini LM. Demetrios: Awesome. Well, dude, this has been pretty enlightening. I really appreciate you coming on here and doing this. If anyone else has any questions for you, we'll leave all your information on where to get in touch in the chat. Rishabh, thank you so much. This is super cool. I appreciate you coming on here. Anyone that's listening, if you want to come onto the vector space talks, feel free to reach out to me and I'll make it happen. Demetrios: This is really cool to see the different work that people are doing and how you all are evolving the game, man. I really appreciate this. Rishabh Bhardwaj: Thank you, Demetrios. Thank you for inviting inviting me and have a nice day.",blog/building-a-high-performance-entity-matching-solution-with-qdrant-rishabh-bhardwaj-vector-space-talks-005.md "--- draft: false preview_image: /blog/from_cms/inception.png sitemapExclude: true title: Qdrant has joined NVIDIA Inception Program slug: qdrant-joined-nvidia-inception-program short_description: Recently Qdrant has become a member of the NVIDIA Inception. description: Along with the various opportunities it gives, we are the most excited about GPU support since it is an essential feature in Qdrant's roadmap. Stay tuned for our new updates. date: 2022-04-04T12:06:36.819Z author: Alyona Kavyerina featured: false author_link: https://www.linkedin.com/in/alyona-kavyerina/ tags: - Corporate news - NVIDIA categories: - News --- Recently we've become a member of the NVIDIA Inception. It is a program that helps boost the evolution of technology startups through access to their cutting-edge technology and experts, connects startups with venture capitalists, and provides marketing support. Along with the various opportunities it gives, we are the most excited about GPU support since it is an essential feature in Qdrant's roadmap. Stay tuned for our new updates.",blog/qdrant-has-joined-nvidia-inception-program.md "--- draft: false title: ""Kairoswealth & Qdrant: Transforming Wealth Management with AI-Driven Insights and Scalable Vector Search"" short_description: ""Transforming wealth management with AI-driven insights and scalable vector search."" description: ""Enhancing wealth management using AI-driven insights and efficient vector search for improved recommendations and scalability."" preview_image: /blog/case-study-kairoswealth/preview.png social_preview_image: /blog/case-study-kairoswealth/preview.png date: 2024-07-10T00:02:00Z author: Qdrant featured: false tags: - Kairoswealth - Vincent Teyssier - AI-Driven Insights - Performance Scalability - Multi-Tenancy - Financial Recommendations --- ![Kairoswealth overview](/blog/case-study-kairoswealth/image2.png) ### **About Kairoswealth** [Kairoswealth](https://kairoswealth.com/) is a comprehensive wealth management platform designed to provide users with a holistic view of their financial portfolio. The platform offers access to unique financial products and automates back-office operations through its AI assistant, Gaia. ![Dashboard Kairoswealth](/blog/case-study-kairoswealth/image3.png) ### **Motivations for Adopting a Vector Database** “At Kairoswealth we encountered several use cases necessitating the ability to run similarity queries on large datasets. Key applications included product recommendations and retrieval-augmented generation (RAG),” says [Vincent Teyssier](https://www.linkedin.com/in/vincent-teyssier/), Chief Technology & AI Officer at Kairoswealth. These needs drove the search for a more robust and scalable vector database solution. ### **Challenges with Previous Solutions** “We faced several critical showstoppers with our previous vector database solution, which led us to seek an alternative,” says Teyssier. These challenges included: - **Performance Scalability:** Significant performance degradation occurred as more data was added, despite various optimizations. - **Robust Multi-Tenancy:** The previous solution struggled with multi-tenancy, impacting performance. - **RAM Footprint:** High memory consumption was an issue. ### **Qdrant Use Cases at Kairoswealth** Kairoswealth leverages Qdrant for several key use cases: - **Internal Data RAG:** Efficiently handling internal RAG use cases. - **Financial Regulatory Reports RAG:** Managing and generating financial reports. - **Recommendations:** Enhancing the accuracy and efficiency of recommendations with the Kairoswealth platform. ![Stock recommendation](/blog/case-study-kairoswealth/image1.png) ### **Why Kairoswealth Chose Qdrant** Some of the key reasons, why Kairoswealth landed on Qdrant as the vector database of choice are: 1. **High Performance with 2.4M Vectors:** “Qdrant efficiently handled the indexing of 1.2 million vectors with 16 metadata fields each, maintaining high performance with no degradation. Similarity queries and scrolls run in less than 0.3 seconds. When we doubled the dataset to 2.4 million vectors, performance remained consistent.So we decided to double that to 2.4M vectors, and it's as if we were inserting our first vector!” says Teyssier. 2. **8x Memory Efficiency:** The database storage size with Qdrant was eight times smaller than the previous solution, enabling the deployment of the entire dataset on smaller instances and saving significant infrastructure costs. 3. **Embedded Capabilities:** “Beyond simple search and similarity, Qdrant hosts a bunch of very nice features around recommendation engines, adding positive and negative examples for better spacial narrowing, efficient multi-tenancy, and many more,” says Teyssier. 4. **Support and Community:** “The Qdrant team, led by Andre Zayarni, provides exceptional support and has a strong passion for data engineering,” notes Teyssier, “the team's commitment to open-source and their active engagement in helping users, from beginners to veterans, is highly valued by Kairoswealth.” ### **Conclusion** Kairoswealth's transition to Qdrant has enabled them to overcome significant challenges related to performance, scalability, and memory efficiency, while also benefiting from advanced features and robust support. This partnership positions Kairoswealth to continue innovating in the wealth management sector, leveraging the power of AI to deliver superior services to their clients. ### **Future Roadmap for Kairoswealth** Kairoswealth is seizing the opportunity to disrupt the wealth management sector, which has traditionally been underserved by technology. For example, they are developing the Kairos Terminal, a natural language interface that translates user queries into OpenBB commands (a set of tools for financial analysis and data visualization within the OpenBB Terminal). With regards to the future of the wealth management sector, Teyssier notes that “the integration of Generative AI will automate back-office tasks such as data collation, data reconciliation, and market research. This technology will also enable wealth managers to scale their services to broader segments, including affluent clients, by automating relationship management and interactions.” ",blog/case-study-kairoswealth.md "--- draft: false title: Vector Search for Content-Based Video Recommendation - Gladys and Samuel from Dailymotion slug: vector-search-vector-recommendation short_description: Gladys Roch and Samuel Leonardo Gracio join us in this episode to share their knowledge on content-based recommendation. description: Gladys Roch and Samuel Leonardo Gracio from Dailymotion, discussed optimizing video recommendations using Qdrant's vector search alongside challenges and solutions in content-based recommender systems. preview_image: /blog/from_cms/gladys-and-sam-bp-cropped.png date: 2024-03-19T14:08:00.190Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - Vector Search - Video Recommender - content based recommendation --- > ""*The vector search engine that we chose is Qdrant, but why did we choose it? Actually, it answers all the load constraints and the technical needs that we had. It allows us to do a fast neighbor search. It has a python API which matches the recommender tag that we have.*”\ -- Gladys Roch > Gladys Roch is a French Machine Learning Engineer at Dailymotion working on recommender systems for video content. > ""*We don't have full control and at the end the cost of their solution is very high for a very low proposal. So after that we tried to benchmark other solutions and we found out that Qdrant was easier for us to implement.*”\ -- Samuel Leonardo Gracio > Samuel Leonardo Gracio, a Senior Machine Learning Engineer at Dailymotion, mainly works on recommender systems and video classification. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/4YYASUZKcT5A90d6H2mOj9?si=a5GgBd4JTR6Yo3HBJfiejQ), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/z_0VjMZ2JY0).*** ## **Top takeaways:** Are you captivated by how video recommendations that are engineered to serve up your next binge-worthy content? We definitely are. Get ready to unwrap the secrets that keep millions engaged, as Demetrios chats with the brains behind the scenes of Dailymotion. This episode is packed with insights straight from ML Engineers at Dailymotion who are reshaping how we discover videos online. Here's what you’ll unbox from this episode: 1. **The Mech Behind the Magic:** Understand how a robust video embedding process can change the game - from textual metadata to audio signals and beyond. 2. **The Power of Multilingual Understanding:** Discover the tools that help recommend videos to a global audience, transcending language barriers. 3. **Breaking the Echo Chamber:** Learn about Dailymotion's 'perspective' feature that's transforming the discovery experience for users. 4. **Challenges & Triumphs:** Hear how Qdrant helps Dailymotion tackle a massive video catalog and ensure the freshest content pops on your feed. 5. **Behind the Scenes with Qdrant:** Get an insider’s look at why Dailymotion entrusted their recommendation needs to Qdrant's capable hands (or should we say algorithms?). > Fun Fact: Did you know that Dailymotion juggles over 13 million recommendations daily? That's like serving up a personalized video playlist to the entire population of Greece. Every single day! > ## Show notes: 00:00 Vector Space Talks intro with Gladys and Samuel.\ 05:07 Recommender system needs vector search for recommendations.\ 09:29 Chose vector search engine for fast neighbor search.\ 13:23 Video transcript use for scalable multilingual embedding.\ 16:35 Transcripts prioritize over video title and tags.\ 17:46 Videos curated based on metadata for quality.\ 20:53 Qdrant setup overview for machine learning engineers.\ 25:25 Enhanced recommendation system improves user engagement.\ 29:36 Recommender system, A/B testing, collection aliases strategic.\ 33:03 Dailymotion's new feature diversifies video perspectives.\ 34:58 Exploring different perspectives and excluding certain topics. ## More Quotes from Gladys and Sam: ""*Basically, we're computing the embeddings and then we feed them into Qdrant, and we do that with a streaming pipeline, which means that every time, so everything is in streaming, every time a new video is uploaded or updated, if the description changes, for example, then the embedding will be computed and then it will be fed directly into Qdrant.*”\ -- Gladys Roch *""We basically recommend videos to a user if other users watching the same video were watching other videos. But the problem with that is that it only works with videos where we have what we call here high signal. So videos that have at least thousands of views, some interactions, because for fresh and fresh or niche videos, we don't have enough interaction.”*\ -- Samuel Leonardo Gracio *""But every time we add new videos to Dailymotion, then it's growing. So it can provide recommendation for videos with few interactions that we don't know well. So we're very happy because it led us to huge performances increase on the low signal. We did a threefold increase on the CTR, which means the number of clicks on the recommendation. So with Qdrant we were able to kind of fix our call start issues.”*\ -- Gladys Roch *""The fact that you have a very cool team that helped us to implement some parts when it was difficult, I think it was definitely the thing that make us choose Qdrant instead of another solution.”*\ -- Samuel Leonardo Gracio ## Transcript: Demetrios: I don't know if you all realize what you got yourself into, but we are back for another edition of the Vector Space Talks. My stream is a little bit chunky and slow, so I think we're just to get into it with Gladys and Samuel from Daily motion. Thank you both for joining us. It is an honor to have you here. For everyone that is watching, please throw your questions and anything else that you want to remark about into the chat. We love chatting with you and I will jump on screen if there is something that we need to stop the presentation about and ask right away. But for now, I think you all got some screen shares you want to show us. Samuel Leonardo Gracio: Yes, exactly. So first of all, thank you for the invitation, of course. And yes, I will share my screen. We have a presentation. Excellent. Should be okay now. Demetrios: Brilliant. Samuel Leonardo Gracio: So can we start? Demetrios: I would love it. Yes, I'm excited. I think everybody else is excited too. Gladys Roch: So welcome, everybody, to our vector space talk. I'm Gladys Roch, machine learning engineer at Dailymotion. Samuel Leonardo Gracio: And I'm Samuel, senior machine learning engineer at Dailymotion. Gladys Roch: Today we're going to talk about Vector search in the context of recommendation and in particular how Qdrant. That's going to be a hard one. We actually got used to pronouncing Qdrant as a french way, so we're going to sleep a bit during this presentation, sorry, in advance, the Qdrant and how we use it for our content based recommender. So we are going to first present the context and why we needed a vector database and why we chose Qdrant, how we fit Qdrant, what we put in it, and we are quite open about the pipelines that we've set up and then we get into the results and how Qdrant helped us solve the issue that we had. Samuel Leonardo Gracio: Yeah. So first of all, I will talk about, globally, the recommendation at Dailymotion. So just a quick introduction about Dailymotion, because you're not all french, so you may not all know what Dailymotion is. So we are a video hosting platform as YouTube or TikTok, and we were founded in 2005. So it's a node company for videos and we have 400 million unique users per month. So that's a lot of users and videos and views. So that's why we think it's interesting. So Dailymotion is we can divide the product in three parts. Samuel Leonardo Gracio: So one part is the native app. As you can see, it's very similar from other apps like TikTok or Instagram reels. So you have vertical videos, you just scroll and that's it. We also have a website. So Dailymotion.com, that is our main product, historical product. So on this website you have a watching page like you can have for instance, on YouTube. And we are also a video player that you can find in most of the french websites and even in other countries. And so we have recommendation almost everywhere and different recommenders for each of these products. Gladys Roch: Okay, so that's Dailymotion. But today we're going to focus on one of our recommender systems. Actually, the machine learning engineer team handles multiple recommender systems. But the video to video recommendation is the oldest and the most used. And so it's what you can see on the screen, it's what you have the recommendation queue of videos that you can see on the side or below the videos that you're watching. And to compute these suggestions, we have multiple models running. So that's why it's a global system. This recommendation is quite important for Dailymotion. Gladys Roch: It's actually a key component. It's one of the main levers of audience generation. So for everybody who comes to the website from SEO or other ways, then that's how we generate more audience and more engagement. So it's very important in the revenue stream of the platform. So working on it is definitely a main topic of the team and that's why we are evolving on this topic all the time. Samuel Leonardo Gracio: Okay, so why would we need a vector search for this recommendation? I think we are here for that. So as many platforms and as many recommender systems, I think we have a very usual approach based on a collaborative model. So we basically recommend videos to a user if other users watching the same video were watching other videos. But the problem with that is that it only works with videos where we have what we call here high signal. So videos that have at least thousands of views, some interactions, because for fresh and fresh or niche videos, we don't have enough interaction. And we have a problem that I think all the recommender systems can have, which is a costar tissue. So this costar tissue is for new users and new videos, in fact. So if we don't have any information or interaction, it's difficult to recommend anything based on this collaborative approach. Samuel Leonardo Gracio: So the idea to solve that was to use a content based recommendation. It's also a classic solution. And the idea is when you have a very fresh video. So video, hey, in this case, a good thing to recommend when you don't have enough information is to recommend a very similar video and hope that the user will watch it also. So for that, of course, we use Qdrant and we will explain how. So yeah, the idea is to put everything in the vector space. So each video at Dailymotion will go through an embedding model. So for each video we'll get a video on embedding. Samuel Leonardo Gracio: We will describe how we do that just after and put it in a vector space. So after that we could use Qdrant to, sorry, Qdrant to query and get similar videos that we will recommend to our users. Gladys Roch: Okay, so if we have embeddings to represent our videos, then we have a vector space, but we need to be able to query this vector space and not only to query it, but to do it at scale and online because it's like a recommender facing users. So we have a few requirements. The first one is that we have a lot of videos in our catalog. So actually doing an exact neighbor search would be unreasonable, unrealistic. It's a combinatorial explosion issue, so we can't do an exact Knn. Plus we also have new videos being uploaded to Dailymotion every hour. So if we could somehow manage to do KNN and to pre compute it, it would never be up to date and it would be very expensive to recompute all the time to include all the new videos. So we need a solution that can integrate new videos all the time. Gladys Roch: And we're also at scale, we serve over 13 million recommendation each day. So it means that we need a big setup to retrieve the neighbors of many videos all day. And finally, we have users waiting for the recommendation. So it's not just pre computed and stored, and it's not just content knowledge. We are trying to provide the recommendation as fast as possible. So we have time constraints and we only have a few hundred milliseconds to compute the recommendation that we're going to show the user. So we need to be able to retrieve the close video that we'd like to propose to the user very fast. So we need to be able to navigate this vector space that we are building quite quickly. Gladys Roch: So of course we need vector search engine. That's the most easy way to do it, to be able to compute and approximate neighbor search and to do it at scale. So obviously, evidently the vector search engine that we chose this Qdrant, but why did we choose it? Actually, it answers all the load constraints and the technical needs that we had. It allows us to do a fast neighbor search. It has a python API which match the recommendous tag that we have. A very important issue for us was to be able to not only put the embeddings of the vectors in this space but also to put metadata with it to be able to get a bit more information and not just a mathematical representation of the video in this database. And actually doing that make it filterable, which means that we can retrieve neighbors of a video, but given some constraints, and it's very important for us typically for language constraints. Samuel will talk a bit more in details about that just after. Gladys Roch: But we have an embedding that is multilingual and we need to be able to filter all the language, all the videos on their language to offer more robust recommendation for our users. And also Qdrant is distributed and so it's scalable and we needed that due to the load that I just talked about. So that's the main points that led us to choose Qdrant. Samuel Leonardo Gracio: And also they have an amazing team. Gladys Roch: So that's another, that would be our return of experience. The team of Qdrant is really nice. You helped us actually put in place the cluster. Samuel Leonardo Gracio: Yeah. So what do we put in our Qdrant cluster? So how do we build our robust video embedding? I think it's really interesting. So the first point for us was to know what a video is about. So it's a really tricky question, in fact. So of course, for each video uploaded on the platform, we have the video signal, so many frames representing the video, but we don't use that for our meetings. And in fact, why we are not using them, it's because it contains a lot of information. Right, but not what we want. For instance, here you have video about an interview of LeBron James. Samuel Leonardo Gracio: But if you only use the frames, the video signal, you can't even know what he's saying, what the video is about, in fact. So we still try to use it. But in fact, the most interesting thing to represent our videos are the textual metadata. So the textual metadata, we have them for every video. So for every video uploaded on the platform, we have a video title, video description that are put by the person that uploads the video. But we also have automatically detected tags. So for instance, for this video, you could have LeBron James, and we also have subtitles that are automatically generated. So just to let you know, we do that using whisper, which is an open source solution provided by OpenAI, and we do it at scale. Samuel Leonardo Gracio: When a video is uploaded, we directly have the video transcript and we can use this information to represent our videos with just a textual embedding, which is far more easy to treat, and we need less compute than for frames, for instance. So the other issue for us was that we needed an embedding that could scale so that does not require too much time to compute because we have a lot of videos, more than 400 million videos, and we have many videos uploaded every hour, so it needs to scale. We also have many languages on our platform, more than 300 languages in the videos. And even if we are a french video platform, in fact, it's only a third of our videos that are actually in French. Most of the videos are in English or other languages such as Turkish, Spanish, Arabic, et cetera. So we needed something multilingual, which is not very easy to find. But we came out with this embedding, which is called multilingual universal sentence encoder. It's not the most famous embedding, so I think it's interesting to share it. Samuel Leonardo Gracio: It's open source, so everyone can use it. It's available on Tensorflow hub, and I think that now it's also available on hugging face, so it's easy to implement and to use it. The good thing is that it's pre trained, so you don't even have to fine tune it on your data. You can, but I think it's not even required. And of course it's multilingual, so it doesn't work with every languages. But still we have the main languages that are used on our platform. It focuses on semantical similarity. And you have an example here when you have different video titles. Samuel Leonardo Gracio: So for instance, one about soccer, another one about movies. Even if you have another video title in another language, if it's talking about the same topic, they will have a high cosine similarity. So that's what we want. We want to be able to recommend every video that we have in our catalog, not depending on the language. And the good thing is that it's really fast. Actually, it's a few milliseconds on cpu, so it's really easy to scale. So that was a huge requirement for us. Demetrios: Can we jump in here? Demetrios: There's a few questions that are coming through that I think are pretty worth. And it's actually probably more suited to the last slide. Sameer is asking this one, actually, one more back. Sorry, with the LeBron. Yeah, so it's really about how you understand the videos. And Sameer was wondering if you can quote unquote hack the understanding by putting some other tags or. Samuel Leonardo Gracio: Ah, you mean from a user perspective, like the person uploading the video, right? Demetrios: Yeah, exactly. Samuel Leonardo Gracio: You could do that before using transcripts, but since we are using them mainly and we only use the title, so the tags are automatically generated. So it's on our side. So the title and description, you can put whatever you want. But since we have the transcript, we know the content of the video and we embed that. So the title and the description are not the priority in the embedding. So I think it's still possible, but we don't have such use case. In fact, most of the people uploading videos are just trying to put the right title, but I think it's still possible. But yeah, with the transcript we don't have any examples like that. Samuel Leonardo Gracio: Yeah, hopefully. Demetrios: So that's awesome to think about too. It kind of leads into the next question, which is around, and this is from Juan Pablo. What do you do with videos that have no text and no meaningful audio, like TikTok or a reel? Samuel Leonardo Gracio: So for the moment, for these videos, we are only using the signal from the title tags, description and other video metadata. And we also have a moderation team which is watching the videos that we have here in the mostly recommended videos. So we know that the videos that we recommend are mostly good videos. And for these videos, so that don't have audio signal, we are forced to use the title tags and description. So these are the videos where the risk is at the maximum for us currently. But we are also working at the moment on something using the audio signal and the frames, but not all the frames. But for the moment, we don't have this solution. Right. Gladys Roch: Also, as I said, it's not just one model, we're talking about the content based model. But if we don't have a similarity score that is high enough, or if we're just not confident about the videos that were the closest, then we will default to another model. So it's not just one, it's a huge system. Samuel Leonardo Gracio: Yeah, and one point also, we are talking about videos with few interactions, so they are not videos at risk. I mean, they don't have a lot of views. When this content based algo is called, they are important because there are very fresh videos, and fresh videos will have a lot of views in a few minutes. But when the collaborative model will be retrained, it will be able to recommend videos on other things than the content itself, but it will use the collaborative signal. So I'm not sure that it's a really important risk for us. But still, I think we could still do some improvement for that aspect. Demetrios: So where do I apply to just watch videos all day for the content team? All right, I'll let you get back to it. Sorry to interrupt. And if anyone else has good questions. Samuel Leonardo Gracio: And I think it's good to ask your question during the presentation, it's more easier to answer. So, yeah, sorry, I was saying that we had this multilingual embedding, and just to present you our embedding pipeline. So, for each video that is uploaded or edited, because you can change the video title whenever you want, we have a pub sub event that is sent to a dataflow pipeline. So it's a streaming job for every video we will retrieve. So textual metadata, title, description tags or transcript, preprocess it to remove some words, for instance, and then call the model to have this embedding. And then. So we put it in bigquery, of course, but also in Qdrant. Gladys Roch: So I'm going to present a bit our Qdrant setup. So actually all this was deployed by our tier DevOps team, not by us machine learning engineers. So it's an overview, and I won't go into the details because I'm not familiar with all of this, but basically, as Samuel said, we're computing the embeddings and then we feed them into Qdrant, and we do that with a streaming pipeline, which means that every time, so everything is in streaming, every time a new video is uploaded or updated, if the description changes, for example, then the embedding will be computed and then it will be fed directly into Qdrant. And on the other hand, our recommender queries the Qdrant vector space through GrPC ingress. And actually Qdrant is running on six pods that are using arm nodes. And you have the specificities of which type of nodes we're using there, if you're interested. But basically that's the setup. And what is interesting is that our recommendation stack for now, it's on premise, which means it's running on Dailymotion servers, not on the Google Kubernetes engine, whereas Qdrant is on the TKE. Gladys Roch: So we are querying it from outside. And also if you have more questions about this setup, we'll be happy to redirect you to the DevOps team that helped us put that in place. And so finally the results. So we stated earlier that we had a call start issue. So before Qdrant, we had a lot of difficulties with this challenge. We had a collaborative recommender that was trained and performed very well on high senior videos, which means that is videos with a lot of interactions. So we can see what user like to watch, which videos they like to watch together. And we also had a metadata recommender. Gladys Roch: But first, this collaborative recommender was actually also used to compute call start recommendation, which is not allowed what it is trained on, but we were using a default embedding to compute like a default recommendation for call start, which led to a lot of popularity issues. Popularity issues for recommender system is when you always recommend the same video that is hugely popular and it's like a feedback loop. A lot of people will default to this video because it might be clickbait and then we will have a lot of inhaler action. So it will pollute the collaborative model all over again. So we had popularity issues with this, obviously. And we also had like this metadata recommender that only focused on a very small scope of trusted owners and trusted video sources. So it was working. It was an auto encoder and it was fine, but the scope was too small. Gladys Roch: Too few videos could be recommended through this model. And also those two models were trained very infrequently, only every 4 hours and 5 hours, which means that any fresh videos on the platform could not be recommended properly for like 4 hours. So it was the main issue because Dailymotion uses a lot of fresh videos and we have a lot of news, et cetera. So we need to be topical and this couldn't be done with this huge delay. So we had overall bad performances on the Los signal. And so with squadron we fixed that. We still have our collaborative recommender. It has evolved since then. Gladys Roch: It's actually computed much more often, but the collaborative model is only focused on high signal now and it's not computed like default recommendation for low signal that it doesn't know. And we have a content based recommender based on the muse embedding and Qdrant that is able to recommend to users video as soon as they are uploaded on the platform. And it has like a growing scope, 20 million vectors at the moment. But every time we add new videos to Dailymotion, then it's growing. So it can provide recommendation for videos with few interactions that we don't know well. So we're very happy because it led us to huge performances increase on the low signal. We did a threefold increase on the CTR, which means the number of clicks on the recommendation. So with Qdrant we were able to kind of fix our call start issues. Gladys Roch: What I was talking about fresh videos, popularities, low performances. We fixed that and we were very happy with the setup. It's running smoothly. Yeah, I think that's it for the presentation, for the slides at least. So we are open to discussion and if you have any questions to go into the details of the recommender system. So go ahead, shoot. Demetrios: I've got some questions while people are typing out everything in the chat and the first one I think that we should probably get into is how did the evaluation process go for you when you were looking at different vector databases and vector search engines? Samuel Leonardo Gracio: So that's a good point. So first of all, you have to know that we are working with Google cloud platform. So the first thing that we did was to use their vector search engine, so which called matching engine. Gladys Roch: Right. Samuel Leonardo Gracio: But the issue with matching engine is that we could not in fact add the API, wasn't easy to use. First of all. The second thing was that we could not put metadata, as we do in Qdrant, and filter out, pre filter before the query, as we are doing now in a Qdrant. And the first thing is that their solution is managed. Yeah, is managed. We don't have the full control and at the end the cost of their solution is very high for a very low proposal. So after that we tried to benchmark other solutions and we found out that Qdrant was easier for us to implement. We had a really cool documentation, so it was easy to test some things and basically we couldn't find any drawbacks for our use case at least. Samuel Leonardo Gracio: And moreover, the fact that you have a very cool team that helped us to implement some parts when it was difficult, I think it was definitely the thing that make us choose Qdrant instead of another solution, because we implemented Qdrant. Gladys Roch: Like on February or even January 2023. So Qdrant is fairly new, so the documentation was still under construction. And so you helped us through the discord to set up the cluster. So it was really nice. Demetrios: Excellent. And what about least favorite parts of using Qdrant? Gladys Roch: Yeah, I have one. I discovered it was not actually a requirement at the beginning, but for recommender systems we tend to do a lot of a B test. And you might wonder what's the deal with Qdrant and a b test. It's not related, but actually we were able to a b test our collection. So how we compute the embedding? First we had an embedding without the transcript, and now we have an embedding that includes the transcript. So we wanted to a b test that. And on Quellin you can have collection aliases and this is super helpful because you can have two collections that live on the cluster at the same time, and then on your code you can just call the production collection and then set the alias to the proper one. So for a d testing and rollout it's very useful. Gladys Roch: And I found it when I first wanted to do an a test. So I like this one. It was an existed and I like it also, the second thing I like is the API documentation like the one that is auto generated with all the examples and how to query any info on Qdrant. It's really nice for someone who's not from DevOps. It help us just debug our collection whenever. So it's very easy to get into. Samuel Leonardo Gracio: And the fact that the product is evolving so fast, like every week almost. You have a new feeder. I think it's really cool. There is one community and I think, yeah, it's really interesting and it's amazing to have such people working on that on an open source project like this one. Gladys Roch: We had feedback from our devot team when preparing this presentation. We reached out to them for the small schema that I tried to present. And yeah, they said that the open source community of quasant was really nice. It was easy to contribute, it was very open on Discord. I think we did a return on experience at some point on how we set up the cluster at the beginning. And yeah, they were very hyped by the fact that it's coded in rust. I don't know if you hear this a lot, but to them it's even more encouraging contributing with this kind of new language. Demetrios: 100% excellent. So last question from my end, and it is on if you're using Qdrant for anything else when it comes to products at Dailymotion, yes, actually we do. Samuel Leonardo Gracio: I have one slide about this. Gladys Roch: We have slides because we presented quadrum to another talk a few weeks ago. Samuel Leonardo Gracio: So we didn't prepare this slide just for this presentation, it's from another presentation, but still, it's a good point because we're currently trained to use it in other projects. So as we said in this presentation, we're mostly using it for the watching page. So Dailymotion.com but we also introduced it in the mobile app recently through a feature that is called perspective. So the goal of the feature is to be able to break this vertical feed algorithm to let the users to have like a button to discover new videos. So when you go through your feed, sometimes you will get a video talking about, I don't know, a movie. You will get this button, which is called perspective, and you will be able to have other videos talking about the same movie but giving to you another point of view. So people liking the movie, people that didn't like the movie, and we use Qdrant, sorry for the candidate generation part. So to get the similar videos and to get the videos that are talking about the same subject. Samuel Leonardo Gracio: So I won't talk too much about this project because it will require another presentation of 20 minutes or more. But still we are using it in other projects and yeah, it's really interesting to see what we are able to do with that tool. Gladys Roch: Once we have the vector space set up, we can just query it from everywhere. In every project of recommendation. Samuel Leonardo Gracio: We also tested some search. We are testing many things actually, but we don't have implemented it yet. For the moment we just have this perspective feed and the content based Roko, but we still have a lot of ideas using this vector search space. Demetrios: I love that idea on the get another perspective. So it's not like you get, as you were mentioning before, you don't get that echo chamber and just about everyone saying the same thing. You get to see are there other sides to this? And I can see how that could be very uh, Juan Pablo is back, asking questions in the chat about are you able to recommend videos with negative search queries and negative in the sense of, for example, as a user I want to see videos of a certain topic, but I want to exclude some topics from the video. Gladys Roch: Okay. We actually don't do that at the moment, but we know we can with squadron we can set positive and negative points from where to query. So actually for the moment we only retrieve close positive neighbors and we apply some business filters on top of that recommendation. But that's it. Samuel Leonardo Gracio: And that's because we have also this collaborative model, which is our main recommender system. But I think we definitely need to check that and maybe in the future we will implement that. We saw that many documentation about this and I'm pretty sure that it would work very well on our use case. Demetrios: Excellent. Well folks, I think that's about it for today. I want to thank you so much for coming and chatting with us and teaching us about how you're using Qdrant and being very transparent about your use. I learned a ton. And for anybody that's out there doing recommender systems and interested in more, I think they can reach out to you on LinkedIn. I've got both of your we'll drop them in the chat right now and we'll let everybody enjoy. So don't get lost in vector base. We will see you all later. Demetrios: If anyone wants to give a talk next, reach out to me. We always are looking for incredible talks and so this has been great. Thank you all. Gladys Roch: Thank you. Samuel Leonardo Gracio: Thank you very much for the invitation and for everyone listening. Thank you. Gladys Roch: See you. Bye. ",blog/vector-search-for-content-based-video-recommendation-gladys-and-sam-vector-space-talks.md "--- draft: false title: Indexify Unveiled - Diptanu Gon Choudhury | Vector Space Talks slug: indexify-content-extraction-engine short_description: Diptanu Gon Choudhury discusses how Indexify is transforming the AI-driven workflow in enterprises today. description: Diptanu Gon Choudhury shares insights on re-imaging Spark and data infrastructure while discussing his work on Indexify to enhance AI-driven workflows and knowledge bases. preview_image: /blog/from_cms/diptanu-choudhury-cropped.png date: 2024-01-26T16:40:55.469Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - Indexify - structured extraction engine - rag-based applications --- > *""We have something like Qdrant, which is very geared towards doing Vector search. And so we understand the shape of the storage system now.”*\ — Diptanu Gon Choudhury > Diptanu Gon Choudhury is the founder of Tensorlake. They are building Indexify - an open-source scalable structured extraction engine for unstructured data to build near-real-time knowledgebase for AI/agent-driven workflows and query engines. Before building Indexify, Diptanu created the Nomad cluster scheduler at Hashicorp, inventor of the Titan/Titus cluster scheduler at Netflix, led the FBLearner machine learning platform, and built the real-time speech inference engine at Facebook. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/6MSwo7urQAWE7EOxO7WTns?si=_s53wC0wR9C4uF8ngGYQlg), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/RoOgTxHkViA).*** ## **Top takeaways:** Discover how reimagined data infrastructures revolutionize AI-agent workflows as Diptanu delves into Indexify, transforming raw data into real-time knowledge bases, and shares expert insights on optimizing rag-based applications, all amidst the ever-evolving landscape of Spark. Here's What You'll Discover: 1. **Innovative Data Infrastructure**: Diptanu dives deep into how Indexify is revolutionizing the enterprise world by providing a sharper focus on data infrastructure and a refined abstraction for generative AI this year. 2. **AI-Copilot for Call Centers**: Learn how Indexify streamlines customer service with a real-time knowledge base, transforming how agents interact and resolve issues. 3. **Scaling Real-Time Indexing**: discover the system’s powerful capability to index content as it happens, enabling multiple extractors to run simultaneously. It’s all about the right model and the computing capacity for on-the-fly content generation. 4. **Revamping Developer Experience**: get a glimpse into the future as Diptanu chats with Demetrios about reimagining Spark to fit today's tech capabilities, vastly different from just two years ago! 5. **AI Agent Workflow Insights**: Understand the crux of AI agent-driven workflows, where models dynamically react to data, making orchestrated decisions in live environments. > Fun Fact: The development of Indexify by Diptanu was spurred by the rising use of Large Language Models in applications and the subsequent need for better data infrastructure to support these technologies. > ## Show notes: 00:00 AI's impact on model production and workflows.\ 05:15 Building agents need indexes for continuous updates.\ 09:27 Early RaG and LLMs adopters neglect data infrastructure.\ 12:32 Design partner creating copilot for call centers.\ 17:00 Efficient indexing and generation using scalable models.\ 20:47 Spark is versatile, used for many cases.\ 24:45 Recent survey paper on RAG covers tips.\ 26:57 Evaluation of various aspects of data generation.\ 28:45 Balancing trust and cost in factual accuracy. ## More Quotes from Diptanu: *""In 2017, when I started doing machine learning, it would take us six months to ship a good model in production. And here we are today, in January 2024, new models are coming out every week, and people are putting them in production.”*\ -- Diptanu Gon Choudhury *""Over a period of time, you want to extract new information out of existing data, because models are getting better continuously.”*\ -- Diptanu Gon Choudhury *""We are in the golden age of demos. Golden age of demos with LLMs. Almost anyone, I think with some programming knowledge can kind of like write a demo with an OpenAI API or with an embedding model and so on.”*\ -- Diptanu Gon Choudhury ## Transcript: Demetrios: We are live, baby. This is it. Welcome back to another vector space talks. I'm here with my man Diptanu. He is the founder and creator of Tenterlake. They are building indexify, an open source, scalable, structured extraction engine for unstructured data to build near real time knowledge bases for AI agent driven workflows and query engines. And if it sounds like I just threw every buzzword in the book into that sentence, you can go ahead and say, bingo, we are here, and we're about to dissect what all that means in the next 30 minutes. So, dude, first of all, I got to just let everyone know who is here, that you are a bit of a hard hitter. Demetrios: You've got some track record under some notches on your belt. We could say before you created Tensorlake, let's just let people know that you were at Hashicorp, you created the nomad cluster scheduler, and you were the inventor of Titus cluster scheduler at Netflix. You led the FB learner machine learning platform and built real time speech inference engine at Facebook. You may be one of the most decorated people we've had on and that I have had the pleasure of talking to, and that's saying a lot. I've talked to a lot of people in my day, so I want to dig in, man. First question I've got for you, it's a big one. What the hell do you mean by AI agent driven workflows? Are you talking to autonomous agents? Are you talking, like the voice agents? What's that? Diptanu Gon Choudhury: Yeah, I was going to say that what a great last couple of years has been for AI. I mean, in context, learning has kind of, like, changed the way people do models and access models and use models in production, like at Facebook. In 2017, when I started doing machine learning, it would take us six months to ship a good model in production. And here we are today, in January 2024, new models are coming out every week, and people are putting them in production. It's a little bit of a Yolo where I feel like people have stopped measuring how well models are doing and just ship in production, but here we are. But I think underpinning all of this is kind of like this whole idea that models are capable of reasoning over data and non parametric knowledge to a certain extent. And what we are seeing now is workflows stop being completely heuristics driven, or as people say, like software 10 driven. And people are putting models in the picture where models are reacting to data that a workflow is seeing, and then people are using models behavior on the data and kind of like making the model decide what should the workflow do? And I think that's pretty much like, to me, what an agent is that an agent responds to information of the world and information which is external and kind of reacts to the information and kind of orchestrates some kind of business process or some kind of workflow, some kind of decision making in a workflow. Diptanu Gon Choudhury: That's what I mean by agents. And they can be like autonomous. They can be something that writes an email or writes a chat message or something like that. The spectrum is wide here. Demetrios: Excellent. So next question, logical question is, and I will second what you're saying. Like the advances that we've seen in the last year, wow. And the times are a change in, we are trying to evaluate while in production. And I like the term, yeah, we just yoloed it, or as the young kids say now, or so I've heard, because I'm not one of them, but we just do it for the plot. So we are getting those models out there, we're seeing if they work. And I imagine you saw some funny quotes from the Chevrolet chat bot, that it was a chat bot on the Chevrolet support page, and it was asked if Teslas are better than Chevys. And it said, yeah, Teslas are better than Chevys. Demetrios: So yes, that's what we do these days. This is 2024, baby. We just put it out there and test and prod. Anyway, getting back on topic, let's talk about indexify, because there was a whole lot of jargon that I said of what you do, give me the straight shooting answer. Break it down for me like I was five. Yeah. Diptanu Gon Choudhury: So if you are building an agent today, which depends on augmented generation, like retrieval, augmented generation, and given that this is Qdrant's show, I'm assuming people are very much familiar with Arag and augmented generation. So if people are building applications where the data is external or non parametric, and the model needs to see updated information all the time, because let's say, the documents under the hood that the application is using for its knowledge base is changing, or someone is building a chat application where new chat messages are coming all the time, and the agent or the model needs to know about what is happening, then you need like an index, or a set of indexes, which are continuously updated. And you also, over a period of time, you want to extract new information out of existing data, because models are getting better continuously. And the other thing is, AI, until now, or until a couple of years back, used to be very domain oriented or task oriented, where modality was the key behind models. Now we are entering into a world where information being encoded in any form, documents, videos or whatever, are important to these workflows that people are building or these agents that people are building. And so you need capability to ingest any kind of data and then build indexes out of them. And indexes, in my opinion, are not just embedding indexes, they could be indexes of semi structured data. So let's say you have an invoice. Diptanu Gon Choudhury: You want to maybe transform that invoice into semi structured data of where the invoice is coming from or what are the line items and so on. So in a nutshell, you need good data infrastructure to store these indexes and serve these indexes. And also you need a scalable compute engine so that whenever new data comes in, you're able to index them appropriately and update the indexes and so on. And also you need capability to experiment, to add new extractors into your platform, add new models into your platform, and so on. Indexify helps you with all that, right? So indexify, imagine indexify to be an online service with an API so that developers can upload any form of unstructured data, and then a bunch of extractors run in parallel on the cluster and extract information out of this unstructured data, and then update indexes on something like Qdrant or postgres for semi structured data continuously. Demetrios: Okay? Diptanu Gon Choudhury: And you basically get that in a single application, in a single binary, which is distributed on your cluster. You wouldn't have any external dependencies other than storage systems, essentially, to have a very scalable data infrastructure for your Rag applications or for your LLM agents. Demetrios: Excellent. So then talk to me about the inspiration for creating this. What was it that you saw that gave you that spark of, you know what? There needs to be something on the market that can handle this. Yeah. Diptanu Gon Choudhury: Earlier this year I was working with founder of a generative AI startup here. I was looking at what they were doing, I was helping them out, and I saw that. And then I looked around, I looked around at what is happening. Not earlier this year as in 2023. Somewhere in early 2023, I was looking at how developers are building applications with llms, and we are in the golden age of demos. Golden age of demos with llms. Almost anyone, I think with some programming knowledge can kind of like write a demo with an OpenAI API or with an embedding model and so on. And I mostly saw that the data infrastructure part of those demos or those applications were very basic people would do like one shot transformation of data, build indexes and then do stuff, build an application on top. Diptanu Gon Choudhury: And then I started talking to early adopters of RaG and llms in enterprises, and I started talking to them about how they're building their data pipelines and their data infrastructure for llms. And I feel like people were mostly excited about the application layer, right? A very less amount of thought was being put on the data infrastructure, and it was almost like built out of duct tape, right, of pipeline, like pipelines and workflows like RabbitMQ, like x, Y and z, very bespoke pipelines, which are good at one shot transformation of data. So you put in some documents on a queue, and then somehow the documents get embedded and put into something like Qdrant. But there was no thought about how do you re index? How do you add a new capability into your pipeline? Or how do you keep the whole system online, right? Keep the indexes online while reindexing and so on. And so classically, if you talk to a distributed systems engineer, they would be, you know, this is a mapreduce problem, right? So there are tools like Spark, there are tools like any skills ray, and they would classically solve these problems, right? And if you go to Facebook, we use Spark for something like this, or like presto, or we have a ton of big data infrastructure for handling things like this. And I thought that in 2023 we need a better abstraction for doing something like this. The world is moving to our server less, right? Developers understand functions. Developer thinks about computers as functions and functions which are distributed on the cluster and can transform content into something that llms can consume. Diptanu Gon Choudhury: And that was the inspiration I was thinking, what would it look like if we redid Spark or ray for generative AI in 2023? How can we make it so easy so that developers can write functions to extract content out of any form of unstructured data, right? You don't need to think about text, audio, video, or whatever. You write a function which can kind of handle a particular data type and then extract something out of it. And now how can we scale it? How can we give developers very transparently, like, all the abilities to manage indexes and serve indexes in production? And so that was the inspiration for it. I wanted to reimagine Mapreduce for generative AI. Demetrios: Wow. I like the vision you sent me over some ideas of different use cases that we can walk through, and I'd love to go through that and put it into actual tangible things that you've been seeing out there. And how you can plug it in to these different use cases. I think the first one that I wanted to look at was building a copilot for call center agents and what that actually looks like in practice. Yeah. Diptanu Gon Choudhury: So I took that example because that was super close to my heart in the sense that we have a design partner like who is doing this. And you'll see that in a call center, the information that comes in into a call center or the information that an agent in a human being in a call center works with is very rich. In a call center you have phone calls coming in, you have chat messages coming in, you have emails going on, and then there are also documents which are knowledge bases for human beings to answer questions or make decisions on. Right. And so they're working with a lot of data and then they're always pulling up a lot of information. And so one of our design partner is like building a copilot for call centers essentially. And what they're doing is they want the humans in a call center to answer questions really easily based on the context of a conversation or a call that is happening with one of their users, or pull up up to date information about the policies of the company and so on. And so the way they are using indexify is that they ingest all the content, like the raw content that is coming in video, not video, actually, like audio emails, chat messages into indexify. Diptanu Gon Choudhury: And then they have a bunch of extractors which handle different type of modalities, right? Some extractors extract information out of emails. Like they would do email classification, they would do embedding of emails, they would do like entity extraction from emails. And so they are creating many different types of indexes from emails. Same with speech. Right? Like data that is coming on through calls. They would transcribe them first using ASR extractor, and from there on the speech would be embedded and the whole pipeline for a text would be invoked into it, and then the speech would be searchable. If someone wants to find out what conversation has happened, they would be able to look up things. There is a summarizer extractor, which is like looking at a phone call and then summarizing what the customer had called and so on. Diptanu Gon Choudhury: So they are basically building a near real time knowledge base of one what is happening with the customer. And also they are pulling in information from their documents. So that's like one classic use case. Now the only dependency now they have is essentially like a blob storage system and serving infrastructure for indexes, like in this case, like Qdrant and postgres. And they have a bunch of extractors that they have written in house and some extractors that we have written, they're using them out of the box and they can scale the system to as much as they need. And it's kind of like giving them a high level abstraction of building indexes and using them in llms. Demetrios: So I really like this idea of how you have the unstructured and you have the semi structured and how those play together almost. And I think one thing that is very clear is how you've got the transcripts, you've got the embeddings that you're doing, but then you've also got documents that are very structured and maybe it's from the last call and it's like in some kind of a database. And I imagine we could say whatever, salesforce, it's in a salesforce and you've got it all there. And so there is some structure to that data. And now you want to be able to plug into all of that and you want to be able to, especially in this use case, the call center agents, human agents need to make decisions and they need to make decisions fast. Right. So the real time aspect really plays a part of that. Diptanu Gon Choudhury: Exactly. Demetrios: You can't have it be something that it'll get back to you in 30 seconds, or maybe 30 seconds is okay, but really the less time the better. And so traditionally when I think about using llms, I kind of take real time off the table. Have you had luck with making it more real time? Yeah. Diptanu Gon Choudhury: So there are two aspects of it. How quickly can your indexes be updated? As of last night, we can index all of Wikipedia under five minutes on AWS. We can run up to like 5000 extractors with indexify concurrently and parallel. I feel like we got the indexing part covered. Unless obviously you are using a model as behind an API where we don't have any control. But assuming you're using some kind of embedding model or some kind of extractor model, right, like a named entity extractor or an speech to text model that you control and you understand the I Ops, we can scale it out and our system can kind of handle the scale of getting it indexed really quickly. Now on the generation side, that's where it's a little bit more nuanced, right? Generation depends on how big the generation model is. If you're using GPD four, then obviously you would be playing with the latency budgets that OpenAI provides. Diptanu Gon Choudhury: If you're using some other form of models like mixture MoE or something which is very optimized and you have worked on making the model optimized, then obviously you can cut it down. So it depends on the end to end stack. It's not like a single piece of software. It's not like a monolithic piece of software. So it depends on a lot of different factors. But I can confidently claim that we have gotten the indexing side of real time aspects covered as long as the models people are using are reasonable and they have enough compute in their cluster. Demetrios: Yeah. Okay. Now talking again about the idea of rethinking the developer experience with this and almost reimagining what Spark would be if it were created today. Diptanu Gon Choudhury: Exactly. Demetrios: How do you think that there are manifestations in what you've built that play off of things that could only happen because you created it today as opposed to even two years ago. Diptanu Gon Choudhury: Yeah. So I think, for example, take Spark, right? Spark was born out of big data, like the 2011 twelve era of big data. In fact, I was one of the committers on Apache Mesos, the cluster scheduler that Spark used for a long time. And then when I was at Hashicorp, we tried to contribute support for Nomad in Spark. What I'm trying to say is that Spark is a task scheduler at the end of the day and it uses an underlying scheduler. So the teams that manage spark today or any other similar tools, they have like tens or 15 people, or they're using like a hosted solution, which is super complex to manage. Right. A spark cluster is not easy to manage. Diptanu Gon Choudhury: I'm not saying it's a bad thing or whatever. Software written at any given point in time reflect the world in which it was born. And so obviously it's from that era of systems engineering and so on. And since then, systems engineering has progressed quite a lot. I feel like we have learned how to make software which is scalable, but yet simpler to understand and to operate and so on. And the other big thing in spark that I feel like is missing or any skills, Ray, is that they are not natively integrated into the data stack. Right. They don't have an opinion on what the data stack is. Diptanu Gon Choudhury: They're like excellent Mapreduce systems, and then the data stuff is layered on top. And to a certain extent that has allowed them to generalize to so many different use cases. People use spark for everything. At Facebook, I was using Spark for batch transcoding of speech, to text, for various use cases with a lot of issues under the hood. Right? So they are tied to the big data storage infrastructure. So when I am reimagining Spark, I almost can take the position that we are going to use blob storage for ingestion and writing raw data, and we will have low latency serving infrastructure in the form of something like postgres or something like clickhouse or something for serving like structured data or semi structured data. And then we have something like Qdrant, which is very geared towards doing vector search and so on. And so we understand the shape of the storage system now. Diptanu Gon Choudhury: We understand that developers want to integrate with them. So now we can control the compute layer such that the compute layer is optimized for doing the compute and producing data such that they can be written in those data stores, right? So we understand the I Ops, right? The I O, what is it called? The I O characteristics of the underlying storage system really well. And we understand that the use case is that people want to consume those data in llms, right? So we can make design decisions such that how we write into those, into the storage system, how we serve very specifically for llms, that I feel like a developer would be making those decisions themselves, like if they were using some other tool. Demetrios: Yeah, it does feel like optimizing for that and recognizing that spark is almost like a swiss army knife. As you mentioned, you can do a million things with it, but sometimes you don't want to do a million things. You just want to do one thing and you want it to be really easy to be able to do that one thing. I had a friend who worked at some enterprise and he was talking about how spark engineers have all the job security in the world, because a, like you said, you need a lot of them, and b, it's hard stuff being able to work on that and getting really deep and knowing it and the ins and outs of it. So I can feel where you're coming from on that one. Diptanu Gon Choudhury: Yeah, I mean, we basically integrated the compute engine with the storage so developers don't have to think about it. Plug in whatever storage you want. We support, obviously, like all the blob stores, and we support Qdrant and postgres right now, indexify in the future can even have other storage engines. And now all an application developer needs to do is deploy this on AWS or GCP or whatever, right? Have enough compute, point it to the storage systems, and then now build your application. You don't need to make any of the hard decisions or build a distributed systems by bringing together like five different tools and spend like five months building the data layer, focus on the application, build your agents. Demetrios: So there is something else. As we are winding down, I want to ask you one last thing, and if anyone has any questions, feel free to throw them in the chat. I am monitoring that also, but I am wondering about advice that you have for people that are building rag based applications, because I feel like you've probably seen quite a few out there in the wild. And so what are some optimizations or some nice hacks that you've seen that have worked really well? Yeah. Diptanu Gon Choudhury: So I think, first of all, there is a recent paper, like a rack survey paper. I really like it. Maybe you can have the link on the show notes if you have one. There was a recent survey paper, I really liked it, and it covers a lot of tips and tricks that people can use with Rag. But essentially, Rag is an information. Rag is like a two step process in its essence. One is the document selection process and the document reading process. Document selection is how do you retrieve the most important information out of million documents that might be there, and then the reading process is how do you jam them in the context of a model, and so that the model can kind of ground its generation based on the context. Diptanu Gon Choudhury: So I think the most tricky part here, and the part which has the most tips and tricks is the document selection part. And that is like a classic information retrieval problem. So I would suggest people doing a lot of experimentation around ranking algorithms, hitting different type of indexes, and refining the results by merging results from different indexes. One thing that always works for me is reducing the search space of the documents that I am selecting in a very systematic manner. So like using some kind of hybrid search where someone does the embedding lookup first, and then does the keyword lookup, or vice versa, or does lookups parallel and then merges results together? Those kind of things where the search space is narrowed down always works for me. Demetrios: So I think one of the Qdrant team members would love to know because I've been talking to them quite frequently about this, the evaluating of retrieval. Have you found any tricks or tips around that and evaluating the quality of what is retrieved? Diptanu Gon Choudhury: So I haven't come across a golden one trick that fits every use case type thing like solution for evaluation. Evaluation is really hard. There are open source projects like ragas who are trying to solve it, and everyone is trying to solve various, various aspects of evaluating like rag exactly. Some of them try to evaluate how accurate the results are, some people are trying to evaluate how diverse the answers are, and so on. I think the most important thing that our design partners care about is factual accuracy and factual accuracy. One process that has worked really well is like having a critique model. So let the generation model generate some data and then have a critique model go and try to find citations and look up how accurate the data is, how accurate the generation is, and then feed that back into the system. One another thing like going back to the previous point is what tricks can someone use for doing rag really well? I feel like people don't fine tune embedding models that much. Diptanu Gon Choudhury: I think if people are using an embedding model, like sentence transformer or anything like off the shelf, they should look into fine tuning the embedding models on their data set that they are embedding. And I think a combination of fine tuning the embedding models and kind of like doing some factual accuracy checks lead to a long way in getting like rag working really well. Demetrios: Yeah, it's an interesting one. And I'll probably leave it here on the extra model that is basically checking factual accuracy. You've always got these trade offs that you're playing with, right? And one of the trade offs is going to be, maybe you're making another LLM call, which could be more costly, but you're gaining trust or you're gaining confidence that what it's outputting is actually what it says it is. And it's actually factually correct, as you said. So it's like, what price can you put on trust? And we're going back to that whole thing that I saw on Chevy's website where they were saying that a Tesla is better. It's like that hopefully doesn't happen anymore as people deploy this stuff and they recognize that humans are cunning when it comes to playing around with chat bots. So this has been fascinating, man. I appreciate you coming on here and chatting me with it. Demetrios: I encourage everyone to go and either reach out to you on LinkedIn, I know you are on there, and we'll leave a link to your LinkedIn in the chat too. And if not, check out Tensorleg, check out indexify, and we will be in touch. Man, this was great. Diptanu Gon Choudhury: Yeah, same. It was really great chatting with you about this, Demetrius, and thanks for having me today. Demetrios: Cheers. I'll talk to you later. ",blog/indexify-unveiled-diptanu-gon-choudhury-vector-space-talk-009.md "--- draft: false title: ""Qdrant Hybrid Cloud and DigitalOcean for Scalable and Secure AI Solutions"" short_description: ""Enabling developers to deploy a managed vector database in their DigitalOcean Environment."" description: ""Enabling developers to deploy a managed vector database in their DigitalOcean Environment."" preview_image: /blog/hybrid-cloud-digitalocean/hybrid-cloud-digitalocean.png date: 2024-04-11T00:02:00Z author: Qdrant featured: false weight: 1010 tags: - Qdrant - Vector Database --- Developers are constantly seeking new ways to enhance their AI applications with new customer experiences. At the core of this are vector databases, as they enable the efficient handling of complex, unstructured data, making it possible to power applications with semantic search, personalized recommendation systems, and intelligent Q&A platforms. However, when deploying such new AI applications, especially those handling sensitive or personal user data, privacy becomes important. [DigitalOcean](https://www.digitalocean.com/) and Qdrant are actively addressing this with an integration that lets developers deploy a managed vector database in their existing DigitalOcean environments. With the recent launch of [Qdrant Hybrid Cloud](/hybrid-cloud/), developers can seamlessly deploy Qdrant on DigitalOcean Kubernetes (DOKS) clusters, making it easier for developers to handle vector databases without getting bogged down in the complexity of managing the underlying infrastructure. #### Unlocking the Power of Generative AI with Qdrant and DigitalOcean User data is a critical asset for a business, and user privacy should always be a top priority. This is why businesses require tools that enable them to leverage their user data as a valuable asset while respecting privacy. Qdrant Hybrid Cloud on DigitalOcean brings these capabilities directly into developers' hands, enhancing deployment flexibility and ensuring greater control over data. > *“Qdrant, with its seamless integration and robust performance, equips businesses to develop cutting-edge applications that truly resonate with their users. Through applications such as semantic search, Q&A systems, recommendation engines, image search, and RAG, DigitalOcean customers can leverage their data to the fullest, ensuring privacy and driving innovation.“* - Bikram Gupta, Lead Product Manager, Kubernetes & App Platform, DigitalOcean. #### Get Started with Qdrant on DigitalOcean DigitalOcean customers can easily deploy Qdrant on their DigitalOcean Kubernetes (DOKS) clusters through a simple Kubernetis-native “one-line” installment. This simplicity allows businesses to start small and scale efficiently. - **Simple Deployment**: Leveraging Kubernetes, deploying Qdrant Hybrid Cloud on DigitalOcean is streamlined, making the management of vector search workloads in the own environment more efficient. - **Own Infrastructure**: Hosting the vector database on your DigitalOcean infrastructure offers flexibility and allows you to manage the entire AI stack in one place. - **Data Control**: Deploying within the own DigitalOcean environment ensures data control, keeping sensitive information within its security perimeter. To get Qdrant Hybrid Cloud setup on DigitalOcean, just follow these steps: - **Hybrid Cloud Setup**: Begin by logging into your [Qdrant Cloud account](https://cloud.qdrant.io/login) and activate **Hybrid Cloud** feature in the sidebar. - **Cluster Configuration**: From Hybrid Cloud settings, integrate your DigitalOcean Kubernetes clusters as a Hybrid Cloud Environment. - **Simplified Deployment**: Use the Qdrant Management Console to effortlessly establish and oversee your Qdrant clusters on DigitalOcean. #### Chat with PDF Documents with Qdrant Hybrid Cloud on DigitalOcean ![hybrid-cloud-llamaindex-tutorial](/blog/hybrid-cloud-llamaindex/hybrid-cloud-llamaindex-tutorial.png) We created a tutorial that guides you through setting up and leveraging Qdrant Hybrid Cloud on DigitalOcean for a RAG application. It highlights practical steps to integrate vector search with Jina AI's LLMs, optimizing the generation of high-quality, relevant AI content, while ensuring data sovereignty is maintained throughout. This specific system is tied together via the LlamaIndex framework. [Try the Tutorial](/documentation/tutorials/hybrid-search-llamaindex-jinaai/) For a comprehensive guide, our documentation provides detailed instructions on setting up Qdrant on DigitalOcean. [Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/) #### Ready to Get Started? Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/). ",blog/hybrid-cloud-digitalocean.md "--- draft: false title: Optimizing an Open Source Vector Database with Andrey Vasnetsov slug: open-source-vector-search-engine-vector-database short_description: CTO of Qdrant Andrey talks about Vector search engines and the technical facets and challenges encountered in developing an open-source vector database. description: Learn key strategies for optimizing vector search from Andrey Vasnetsov, CTO at Qdrant. Dive into techniques like efficient indexing for improved performance. preview_image: /blog/from_cms/andrey-vasnetsov-cropped.png date: 2024-01-10T16:04:57.804Z author: Demetrios Brinkmann featured: false tags: - Qdrant - Vector Search Engine - Vector Database --- # Optimizing Open Source Vector Search: Strategies from Andrey Vasnetsov at Qdrant > *""For systems like Qdrant, scalability and performance in my opinion, is much more important than transactional consistency, so it should be treated as a search engine rather than database.""*\ -- Andrey Vasnetsov > Discussing core differences between search engines and databases, Andrey underlined the importance of application needs and scalability in database selection for vector search tasks. Andrey Vasnetsov, CTO at Qdrant is an enthusiast of [Open Source](https://qdrant.tech/), machine learning, and vector search. He works on Open Source projects related to [Vector Similarity Search](https://qdrant.tech/articles/vector-similarity-beyond-search/) and Similarity Learning. He prefers practical over theoretical, working demo over arXiv paper. ***You can watch this episode on [YouTube](https://www.youtube.com/watch?v=bU38Ovdh3NY).*** ***This episode is part of the [ML⇄DB Seminar Series](https://db.cs.cmu.edu/seminar2023/#) (Machine Learning for Databases + Databases for Machine Learning) of the Carnegie Mellon University Database Research Group.*** ## **Top Takeaways:** Dive into the intricacies of [vector databases](https://qdrant.tech/articles/what-is-a-vector-database/) with Andrey as he unpacks Qdrant's approach to combining filtering and vector search, revealing how in-place filtering during graph traversal optimizes precision without sacrificing search exactness, even when scaling to billions of vectors. 5 key insights you’ll learn: - 🧠 **The Strategy of Subgraphs:** Dive into how overlapping intervals and geo hash regions can enhance the precision and connectivity within vector search indices. - 🛠️ **Engine vs Database:** Discover the differences between search engines and relational databases and why considering your application's needs is crucial for scalability. - 🌐 **Combining Searches with Relational Data:** Get insights on integrating relational and vector search for improved efficiency and performance. - 🚅 **Speed and Precision Tactics:** Uncover the techniques for controlling search precision and speed by tweaking the beam size in HNSW indices. - 🔗 **Connected Graph Challenges:** Learn about navigating the difficulties of maintaining a connected graph while filtering during search operations. > Fun Fact: [The Qdrant system](https://qdrant.tech/) is capable of in-place filtering during graph traversal, which is a novel approach compared to traditional post-filtering methods, ensuring the correct quantity of results that meet the filtering conditions. > ## Timestamps: 00:00 Search professional with expertise in vectors and engines.\ 09:59 Elasticsearch: scalable, weak consistency, prefer vector search.\ 12:53 Optimize data structures for faster processing efficiency.\ 21:41 Vector indexes require special treatment, like HNSW's proximity graph and greedy search.\ 23:16 HNSW index: approximate, precision control, CPU intensive.\ 30:06 Post-filtering inefficient, prefiltering costly.\ 34:01 Metadata-based filters; creating additional connecting links.\ 41:41 Vector dimension impacts comparison speed, indexing complexity high.\ 46:53 Overlapping intervals and subgraphs for precision.\ 53:18 Postgres limits scalability, additional indexing engines provide faster queries.\ 59:55 Embedding models for time series data explained.\ 01:02:01 Cheaper system for serving billion vectors. ## More Quotes from Andrey: *""It allows us to compress vector to a level where a single dimension is represented by just a single bit, which gives total of 32 times compression for the vector.""*\ -- Andrey Vasnetsov on vector compression in AI *""We build overlapping intervals and we build these subgraphs with additional links for those intervals. And also we can do the same with, let's say, location data where we have geocoordinates, so latitude, longitude, we encode it into geo hashes and basically build this additional graph for overlapping geo hash regions.""*\ -- Andrey Vasnetsov *""We can further compress data using such techniques as delta encoding, as variable byte encoding, and so on. And this total effect, total combined effect of this optimization can make immutable data structures order of minute more efficient than mutable ones.""*\ -- Andrey Vasnetsov ",blog/open-source-vector-search-engine-and-vector-database.md "--- draft: false title: ""Integrating Qdrant and LangChain for Advanced Vector Similarity Search"" short_description: Discover how Qdrant and LangChain can be integrated to enhance AI applications. description: Discover how Qdrant and LangChain can be integrated to enhance AI applications with advanced vector similarity search technology. preview_image: /blog/using-qdrant-and-langchain/qdrant-langchain.png date: 2024-03-12T09:00:00Z author: David Myriel featured: true tags: - Qdrant - LangChain - LangChain integration - Vector similarity search - AI LLM (large language models) - LangChain agents - Large Language Models --- > *""Building AI applications doesn't have to be complicated. You can leverage pre-trained models and support complex pipelines with a few lines of code. LangChain provides a unified interface, so that you can avoid writing boilerplate code and focus on the value you want to bring.""* Kacper Lukawski, Developer Advocate, Qdrant ## Long-Term Memory for Your GenAI App Qdrant's vector database quickly grew due to its ability to make Generative AI more effective. On its own, an LLM can be used to build a process-altering invention. With Qdrant, you can turn this invention into a production-level app that brings real business value. The use of vector search in GenAI now has a name: **Retrieval Augmented Generation (RAG)**. [In our previous article](/articles/rag-is-dead/), we argued why RAG is an essential component of AI setups, and why large-scale AI can't operate without it. Numerous case studies explain that AI applications are simply too costly and resource-intensive to run using only LLMs. > Going forward, the solution is to leverage composite systems that use models and vector databases. **What is RAG?** Essentially, a RAG setup turns Qdrant into long-term memory storage for LLMs. As a vector database, Qdrant manages the efficient storage and retrieval of user data. Adding relevant context to LLMs can vastly improve user experience, leading to better retrieval accuracy, faster query speed and lower use of compute. Augmenting your AI application with vector search reduces hallucinations, a situation where AI models produce legitimate-sounding but made-up responses. Qdrant streamlines this process of retrieval augmentation, making it faster, easier to scale and efficient. When you are accessing vast amounts of data (hundreds or thousands of documents), vector search helps your sort through relevant context. **This makes RAG a primary candidate for enterprise-scale use cases.** ## Why LangChain? Retrieval Augmented Generation is not without its challenges and limitations. One of the main setbacks for app developers is managing the entire setup. The integration of a retriever and a generator into a single model can lead to a raised level of complexity, thus increasing the computational resources required. [LangChain](https://www.langchain.com/) is a framework that makes developing RAG-based applications much easier. It unifies interfaces to different libraries, including major embedding providers like OpenAI or Cohere and vector stores like Qdrant. With LangChain, you can focus on creating tangible GenAI applications instead of writing your logic from the ground up. > Qdrant is one of the **top supported vector stores** on LangChain, with [extensive documentation](https://python.langchain.com/docs/integrations/vectorstores/qdrant) and [examples](https://python.langchain.com/docs/integrations/retrievers/self_query/qdrant_self_query). **How it Works:** LangChain receives a query and retrieves the query vector from an embedding model. Then, it dispatches the vector to a vector database, retrieving relevant documents. Finally, both the query and the retrieved documents are sent to the large language model to generate an answer. ![qdrant-langchain-rag](/blog/using-qdrant-and-langchain/flow-diagram.png) When supported by LangChain, Qdrant can help you set up effective question-answer systems, detection systems and chatbots that leverage RAG to its full potential. When it comes to long-term memory storage, developers can use LangChain to easily add relevant documents, chat history memory & rich user data to LLM app prompts via Qdrant. ## Common Use Cases Integrating Qdrant and LangChain can revolutionize your AI applications. Let's take a look at what this integration can do for you: *Enhance Natural Language Processing (NLP):* LangChain is great for developing question-answering **chatbots**, where Qdrant is used to contextualize and retrieve results for the LLM. We cover this in [our article](/articles/langchain-integration/), and in OpenAI's [cookbook examples](https://cookbook.openai.com/examples/vector_databases/qdrant/qa_with_langchain_qdrant_and_openai) that use LangChain and GPT to process natural language. *Improve Recommendation Systems:* Food delivery services thrive on indecisive customers. Businesses need to accomodate a multi-aim search process, where customers seek recommendations though semantic search. With LangChain you can build systems for **e-commerce, content sharing, or even dating apps**. *Advance Data Analysis and Insights:* Sometimes you just want to browse results that are not necessarily closest, but still relevant. Semantic search helps user discover products in **online stores**. Customers don't exactly know what they are looking for, but require constrained space in which a search is performed. *Offer Content Similarity Analysis:* Ever been stuck seeing the same recommendations on your **local news portal**? You may be held in a similarity bubble! As inputs get more complex, diversity becomes scarce, and it becomes harder to force the system to show something different. LangChain developers can use semantic search to develop further context. ## Building a Chatbot with LangChain _Now that you know how Qdrant and LangChain work together - it's time to build something!_ Follow Daniel Romero's video and create a RAG Chatbot completely from scratch. You will only use OpenAI, Qdrant and LangChain. Here is what this basic tutorial will teach you: **1. How to set up a chatbot using Qdrant and LangChain:** You will use LangChain to create a RAG pipeline that retrieves information from a dataset and generates output. This will demonstrate the difference between using an LLM by itself and leveraging a vector database like Qdrant for memory retrieval. **2. Preprocess and format data for use by the chatbot:** First, you will download a sample dataset based on some academic journals. Then, you will process this data into embeddings and store it as vectors inside of Qdrant. **3. Implement vector similarity search algorithms:** Second, you will create and test a chatbot that only uses the LLM. Then, you will enable the memory component offered by Qdrant. This will allow your chatbot to be modified and updated, giving it long-term memory. **4. Optimize the chatbot's performance:** In the last step, you will query the chatbot in two ways. First query will retrieve parametric data from the LLM, while the second one will get contexual data via Qdrant. The goal of this exercise is to show that RAG is simple to implement via LangChain and yields much better results than using LLMs by itself. ## Scaling Qdrant and LangChain If you are looking to scale up and keep the same level of performance, Qdrant and LangChain are a rock-solid combination. Getting started with both is a breeze and the [documentation](https://python.langchain.com/docs/integrations/vectorstores/qdrant) covers a broad number of cases. However, the main strength of Qdrant is that it can consistently support the user way past the prototyping and launch phases. > *""We are all-in on performance and reliability. Every release we make Qdrant faster, more stable and cost-effective for the user. When others focus on prototyping, we are already ready for production. Very soon, our users will build successful products and go to market. At this point, I anticipate a great need for a reliable vector store. Qdrant will be there for LangChain and the entire community.""* Whether you are building a bank fraud-detection system, RAG for e-commerce, or services for the federal government - you will need to leverage a scalable architecture for your product. Qdrant offers different features to help you considerably increase your application’s performance and lower your hosting costs. > Read more about out how we foster [best practices for large-scale deployments](/articles/multitenancy/). ## Next Steps Now that you know how Qdrant and LangChain can elevate your setup - it's time to try us out. - Qdrant is open source and you can [quickstart locally](/documentation/quick-start/), [install it via Docker](/documentation/quick-start/), [or to Kubernetes](https://github.com/qdrant/qdrant-helm/). - We also offer [a free-tier of Qdrant Cloud](https://cloud.qdrant.io/) for prototyping and testing. - For best integration with LangChain, read the [official LangChain documentation](https://python.langchain.com/docs/integrations/vectorstores/qdrant/). - For all other cases, [Qdrant documentation](/documentation/integrations/langchain/) is the best place to get there. > We offer additional support tailored to your business needs. [Contact us](https://qdrant.to/contact-us) to learn more about implementation strategies and integrations that suit your company. ",blog/using-qdrant-and-langchain.md "--- draft: false title: Qdrant supports ARM architecture! slug: qdrant-supports-arm-architecture short_description: Qdrant announces ARM architecture support, expanding accessibility and performance for their advanced data indexing technology. description: Qdrant's support for ARM architecture marks a pivotal step in enhancing accessibility and performance. This development optimizes data indexing and retrieval. preview_image: /blog/from_cms/docker-preview.png date: 2022-09-21T09:49:53.352Z author: Kacper Łukawski featured: false tags: - Vector Search - Vector Search Engine - Embedding - Neural Networks - Database --- The processor architecture is a thing that the end-user typically does not care much about, as long as all the applications they use run smoothly. If you use a PC then chances are you have an x86-based device, while your smartphone rather runs on an ARM processor. In 2020 Apple introduced their ARM-based M1 chip which is used in modern Mac devices, including notebooks. The main differences between those two architectures are the set of supported instructions and energy consumption. ARM’s processors have a way better energy efficiency and are cheaper than their x86 counterparts. That’s why they became available as an affordable alternative in the hosting providers, including the cloud. ![](/blog/from_cms/1_seaglc6jih2qknoshqbf1q.webp ""An image generated by Stable Diffusion with a query “two computer processors fightning against each other”"") In order to make an application available for ARM users, it has to be compiled for that platform. Otherwise, it has to be emulated by the device, which gives an additional overhead and reduces its performance. We decided to provide the [Docker images](https://hub.docker.com/r/qdrant/qdrant/) targeted especially at ARM users. Of course, using a limited set of processor instructions may impact the performance of your vector search, and that’s why we decided to test both architectures using a similar setup. ## Test environments AWS offers ARM-based EC2 instances that are 20% cheaper than the x86 corresponding alternatives with a similar configuration. That estimate has been done for the eu-central-1 region (Frankfurt) and R6g/R6i instance families. For the purposes of this comparison, we used an r6i.large instance (Intel Xeon) and compared it to r6g.large one (AWS Graviton2). Both setups have 2 vCPUs and 16 GB of memory available and these were the smallest comparable instances available. ## The results For the purposes of this test, we created some random vectors which were compared with cosine distance. ### Vector search During our experiments, we performed 1000 search operations for both ARM64 and x86-based setups. We didn’t measure the network overhead, only the time measurements returned by the engine in the API response. The chart below shows the distribution of that time, separately for each architecture. ![](/blog/from_cms/1_zvuef4ri6ztqjzbsocqj_w.webp ""The latency distribution of search requests: arm vs x86"") It seems that ARM64 might be an interesting alternative if you are on a budget. It is 10% slower on average, and 20% slower on the median, but the performance is more consistent. It seems like it won’t be randomly 2 times slower than the average, unlike x86. That makes ARM64 a cost-effective way of setting up vector search with Qdrant, keeping in mind it’s 20% cheaper on AWS. You do get less for less, but surprisingly more than expected.",blog/qdrant-supports-arm-architecture.md "--- draft: false title: Advancements and Challenges in RAG Systems - Syed Asad | Vector Space Talks slug: rag-advancements-challenges short_description: Syed Asad talked about advanced rag systems and multimodal AI projects, discussing challenges, technologies, and model evaluations in the context of their work at Kiwi Tech. description: Syed Asad unfolds the challenges of developing multimodal RAG systems at Kiwi Tech, detailing the balance between accuracy and cost-efficiency, and exploring various tools and approaches like GPT 4 and Mixtral to enhance family tree apps and financial chatbots while navigating the hurdles of data privacy and infrastructure demands. preview_image: /blog/from_cms/syed-asad-cropped.png date: 2024-04-11T22:25:00.000Z author: Demetrios Brinkmann featured: false tags: - Vector Search - Retrieval Augmented Generation - Generative AI - KiwiTech --- > *""The problem with many of the vector databases is that they work fine, they are scalable. This is common. The problem is that they are not easy to use. So that is why I always use Qdrant.”*\ — Syed Asad > Syed Asad is an accomplished AI/ML Professional, specializing in LLM Operations and RAGs. With a focus on Image Processing and Massive Scale Vector Search Operations, he brings a wealth of expertise to the field. His dedication to advancing artificial intelligence and machine learning technologies has been instrumental in driving innovation and solving complex challenges. Syed continues to push the boundaries of AI/ML applications, contributing significantly to the ever-evolving landscape of the industry. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/4Gm4TQsO2PzOGBp5U6Cj2e?si=JrG0kHDpRTeb2gLi5zdi4Q), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/RVb6_CI7ysM?si=8Hm7XSWYTzK6SRj0).*** ## **Top takeaways:** Prompt engineering is the new frontier in AI. Let’s find out about how critical its role is in controlling AI language models. In this episode, Demetrios and Syed gets to discuss about it. Syed also explores the retrieval augmented generation systems and machine learning technology at Kiwi Tech. This episode showcases the challenges and advancements in AI applications across various industries. Here are the highlights from this episode: 1. **Digital Family Tree:** Learn about the family tree app project that brings the past to life through video interactions with loved ones long gone. 2. **Multimodal Mayhem:** Discover the complexities of creating AI systems that can understand diverse accents and overcome transcription tribulations – all while being cost-effective! 3. **The Perfect Match:** Find out how semantic chunking is revolutionizing job matching in radiology and why getting the context right is non-negotiable. 4. **Quasar's Quantum Leap:** Syed shares the inside scoop on Quasar, a financial chatbot, and the AI magic that makes it tick. 5. **The Privacy Paradox:** Delve into the ever-present conflict between powerful AI outcomes and the essential quest to preserve data privacy. > Fun Fact: Syed Asad and his team at Kiwi Tech use a GPU-based approach with GPT 4 for their AI system named Quasar, addressing challenges like temperature control and mitigating hallucinatory responses. > ## Show notes: 00:00 Clients seek engaging multimedia apps over chatbots.\ 06:03 Challenges in multimodal rags: accent, transcription, cost.\ 08:18 AWS credits crucial, but costs skyrocket quickly.\ 10:59 Accurate procedures crucial, Qdrant excels in search.\ 14:46 Embraces AI for monitoring and research.\ 19:47 Seeking insights on ineffective marketing models and solutions.\ 23:40 GPT 4 useful, prompts need tracking tools\ 25:28 Discussing data localization and privacy, favoring Ollama.\ 29:21 Hallucination control and pricing are major concerns.\ 32:47 DeepEval, AI testing, LLM, potential, open source.\ 35:24 Filter for appropriate embedding model based on use case and size. ## More Quotes from Syed: *""Qdrant has the ease of use. I have trained people in my team who specializes with Qdrant, and they were initially using Weaviate and Pinecone.”*\ — Syed Asad *""What's happening nowadays is that the clients or the projects in which I am particularly working on are having more of multimedia or multimodal approach. They want their apps or their LLM apps to be more engaging rather than a mere chatbot.”*\ — Syed Asad *""That is where the accuracy matters the most. And in this case, Qdrant has proved just commendable in giving excellent search results.”*\ — Syed Asad in Advancements in Medical Imaging Search ## Transcript: Demetrios: What is up, good people? How y'all doing? We are back for yet another vector space talks. I'm super excited to be with you today because we're gonna be talking about rags and rag systems. And from the most basic naive rag all the way to the most advanced rag, we've got it covered with our guest of honor, Asad. Where are you at, my man? There he is. What's going on, dude? Syed Asad: Yeah, everything is fine. Demetrios: Excellent, excellent. Well, I know we were talking before we went live, and you are currently in India. It is very late for you, so I appreciate you coming on here and doing this with us. You are also, for those who do not know, a senior engineer for AI and machine learning at Kiwi Tech. Can you break down what Kiwi tech is for us real fast? Syed Asad: Yeah, sure. Absolutely. So Kiwi tech is actually a software development, was actually a software development company focusing on software development, iOS and mobile apps. And right now we are in all focusing more on generative AI, machine learning and computer vision projects. So I am heading the AI part here. So. And we are having loads of projects here with, from basic to advanced rags, from naive to visual rags. So basically I'm doing rag in and out from morning to evening. Demetrios: Yeah, you can't get away from it, huh? Man, that is great. Syed Asad: Everywhere there is rag. Even, even the machine learning part, which was previously done by me, is all now into rags engineered AI. Yeah. Machine learning is just at the background now. Demetrios: Yeah, yeah, yeah. It's funny, I understand the demand for it because people are trying to see where they can get value in their companies with the new generative AI advancements. Syed Asad: Yeah. Demetrios: So I want to talk a lot about advance rags, considering the audience that we have. I would love to hear about the visual rags also, because that sounds very exciting. Can we start with the visual rags and what exactly you are doing, what you're working on when it comes to that? Syed Asad: Yeah, absolutely. So initially when I started working, so you all might be aware with the concept of frozen rags, the normal and the basic rag, there is a text retrieval system. You just query your data and all those things. So what is happening nowadays is that the clients or the projects in which I am particularly working on are having more of multimedia or multimodal approach. So that is what is happening. So they want their apps or their LLM apps to be more engaging rather than a mere chatbot. Because. Because if we go on to the natural language or the normal english language, I mean, interacting by means of a video or interacting by means of a photo, like avatar, generation, anything like that. Syed Asad: So that has become more popular or, and is gaining more popularity. And if I talk about, specifically about visual rags. So the projects which I am working on is, say, for example, say, for example, there is a family tree type of app in which. In which you have an account right now. So, so you are recording day videos every day, right? Like whatever you are doing, for example, you are singing a song, you're walking in the park, you are eating anything like that, and you're recording those videos and just uploading them on that app. But what do you want? Like, your future generations can do some sort of query, like what, what was my grandfather like? What was my, my uncle like? Anything my friend like. And it was, it is not straight, restricted to a family. It can be friends also. Syed Asad: Anyway, so. And these are all us based projects, not indian based projects. Okay, so, so you, you go in query and it returns a video about your grandfather who has already died. He has not. You can see him speaking about that particular thing. So it becomes really engaging. So this is something which is called visual rag, which I am working right now on this. Demetrios: I love that use case. So basically it's, I get to be closer to my family that may or may not be here with us right now because the rag can pull writing that they had. It can pull video of other family members talking about it. It can pull videos of when my cousin was born, that type of stuff. Syed Asad: Anything, anything from cousin to family. You can add any numbers of members of your family. You can give access to any number of people who can have after you, after you're not there, like a sort of a nomination or a delegation live up thing. So that is, I mean, actually, it is a very big project, involves multiple transcription models, video transcription models. It also involves actually the databases, and I'm using Qdrant, proud of it. So, in that, so. And Qdrant is working seamlessly in that. So, I mean, at the end there is a vector search, but at the background there is more of more of visual rag, and people want to communicate through videos and photos. Syed Asad: So that is coming into picture more. Demetrios: Well, talk to me about multimodal rag. And I know it's a bit of a hairy situation because if you're trying to do vector search with videos, it can be a little bit more complicated than just vector search with text. Right. So what are some of the unique challenges that you've seen when it comes to multimodal rag? Syed Asad: The first challenge dealing with multimodal rags is actually the accent, because it can be varying accent. The problem with the transcription, one of the problems or the challenges which I have faced in this is that lack of proper transcription models, if you are, if you are able to get a proper transcription model, then if that, I want to deploy that model in the cloud, say for example, an AWS cloud. So that AWS cloud is costing heavy on the pockets. So managing infra is one of the part. I mean, I'm talking in a, in a, in a highly scalable production environment. I'm not talking about a research environment in which you can do anything on a collab notebook and just go with that. So whenever it comes to the client part or the delivery part, it becomes more critical. And even there, there were points then that we have to entirely overhaul the entire approach, which was working very fine when we were doing it on the dev environment, like the openais whisper. Syed Asad: We started with that OpenAI's whisper. It worked fine. The transcription was absolutely fantastic. But we couldn't go into the production. Demetrios: Part with that because it was too, the word error rate was too high, or because it was too slow. What made it not allow you to go into production? Syed Asad: It was, the word error rate was also high. It was very slow when it was being deployed on an AWS instance. And the thing is that the costing part, because usually these are startups, or mid startup, if I talk about the business point of view, not the tech point of view. So these companies usually offer these type of services for free, and on the basis of these services they try to raise funding. So they want something which is actually optimized, optimizing their cost as well. So what I personally feel, although AWS is massively scalable, but I don't prefer AWS at all until, unless there are various other options coming out, like salad. I had a call, I had some interactions with Titan machine learning also, but it was also fine. But salad is one of the best as of now. Demetrios: Yeah. Unless you get that free AWS credits from the startup program, it can get very expensive very quickly. And even if you do have the free AWS credits, it still gets very expensive very quickly. So I understand what you're saying is basically it was unusable because of the cost and the inability to figure out, it was more of a product problem if you could figure out how to properly monetize it. But then you had technical problems like word error rate being really high, the speed and latency was just unbearable. I can imagine. So unless somebody makes a query and they're ready to sit around for a few minutes and let that query come back to you, with a video or some documents, whatever it may be. Is that what I'm understanding on this? And again, this is for the family tree use case that you're talking about. Syed Asad: Yes, family tree use case. So what was happening in that, in that case is a video is uploaded, it goes to the admin for an approval actually. So I mean you can, that is where we, they were restricting the costing part as far as the project was concerned. It's because you cannot upload any random videos and they will select that. Just some sort of moderation was also there, as in when the admin approves those videos, that videos goes on to the transcription pipeline. They are transcripted via an, say a video to text model like the open eyes whisper. So what was happening initially, all the, all the research was done with Openais, but at the end when deployment came, we have to go with deep Gram and AssemblyAI. That was the place where these models were excelling far better than OpenAI. Syed Asad: And I'm a big advocate of open source models, so also I try to leverage those, but it was not pretty working in production environment. Demetrios: Fascinating. So you had that, that's one of your use cases, right? And that's very much the multimodal rag use case. Are all of your use cases multimodal or did you have, do you have other ones too? Syed Asad: No, all are not multimodal. There are few multimodal, there are few text based on naive rag also. So what, like for example, there is one use case coming which is sort of a job search which is happening. A job search for a radiology, radiology section. I mean a very specialized type of client it is. And they're doing some sort of job search matching the modalities and procedures. And it is sort of a temporary job. Like, like you have two shifts ready, two shifts begin, just some. Syed Asad: So, so that is, that is very critical when somebody is putting their procedures or what in. Like for example, they, they are specializing in x rays in, in some sort of medical procedures and that is matching with the, with the, with the, with the employers requirement. So that is where the accuracy matters the most. Accurate. And in this case, Qdrant has proved just commendable in giving excellent search results. The other way around is that in this case is there were some challenges related to the quality of results also because. So progressing from frozen rack to advanced rag like adopting methods like re ranking, semantic chunking. I have, I have started using semantic chunking. Syed Asad: So it has proved very beneficial as far as the quality of results is concerned. Demetrios: Well, talk to me more about. I'm trying to understand this use case and why a rag is useful for the job matching. You have doctors who have specialties and they understand, all right, they're, maybe it's an orthopedic surgeon who is very good at a certain type of surgery, and then you have different jobs that come online. They need to be matched with those different jobs. And so where does the rag come into play? Because it seems like it could be solved with machine learning as opposed to AI. Syed Asad: Yeah, it could have been solved through machine learning, but the type of modalities that are, the type of, say, the type of jobs which they were posting are too much specialized. So it needed some sort of contextual matching also. So there comes the use case for the rag. In this place, the contextual matching was required. Initially, an approach for machine learning was on the table, but it was done with, it was not working. Demetrios: I get it, I get it. So now talk to me. This is really important that you said accuracy needs to be very high in this use case. How did you make sure that the accuracy was high? Besides the, I think you said chunking, looking at the chunks, looking at how you were doing that, what were some other methods you took to make sure that the accuracy was high? Syed Asad: I mean, as far as the accuracy is concerned. So what I did was that my focus was on the embedding model, actually when I started with what type of embed, choice of embedding model. So initially my team started with open source model available readily on hugging face, looking at some sort of leaderboard metrics, some sort of model specializing in medical, say, data, all those things. But even I was curious that the large language, the embedding models which were specializing in medical data, they were also not returning good results and they were mismatching. When, when there was a tabular format, I created a visualization in which the cosine similarity of various models were compared. So all were lagging behind until I went ahead with cohere. Cohere re rankers. They were the best in that case, although they are not trained on that. Syed Asad: And just an API call was required rather than loading that whole model onto the local. Demetrios: Interesting. All right. And so then were you doing certain types, so you had the cohere re ranker that gave you a big up. Were you doing any kind of monitoring of the output also, or evaluation of the output and if so, how? Syed Asad: Yes, for evaluation, for monitoring we readily use arrays AI, because I am a, I'm a huge advocate of Llama index also because it has made everything so easier versus lang chain. I mean, if I talk about my personal preference, not regarding any bias, because I'm not linked with anybody, I'm not promoting it here, but they are having the best thing which I write, I like about Llama index and why I use it, is that anything which is coming into play as far as the new research is going on, like for example, a recent research paper was with the raft retrieval augmented fine tuning, which was released by the Microsoft, and it is right now available on archive. So barely few days after they just implemented it in the library, and you can readily start using it rather than creating your own structure. So, yeah, so it was. So one of my part is that I go through the research papers first, then coming on to a result. So a research based approach is required in actually selecting the models, because every day there is new advancement going on in rags and you cannot figure out what is, what would be fine for you, and you cannot do hit and trial the whole day. Demetrios: Yes, that is a great point. So then if we break down your tech stack, what does it look like? You're using Llama index, you're using arise for the monitoring, you're using Qdrant for your vector database. You have the, you have the coherent re ranker, you are using GPT 3.5. Syed Asad: No, it's GPT 4, not 3.5. Demetrios: You needed to go with GPT 4 because everything else wasn't good enough. Syed Asad: Yes, because one of the context length was one of the most things. But regarding our production, we have been readily using since the last one and a half months. I have been readily using Mixtril. I have been. I have been using because there's one more challenge coming onto the rack, because there's one more I'll give, I'll give you an example of one more use case. It is the I'll name the project also because I'm allowed by my company. It is a big project by the name of Quasar markets. It is a us based company and they are actually creating a financial market type of check chatbot. Syed Asad: Q u a s a r, quasar. You can search it also, and they give you access to various public databases also, and some paid databases also. They have a membership plan. So we are entirely handling the front end backend. I'm not handling the front end and the back end, I'm handling the AI part in that. So one of the challenges is the inference, timing, the timing in which the users are getting queries when it is hitting the database. Say for example, there is a database publicly available database called Fred of us government. So when user can select in that app and go and select the Fred database and want to ask some questions regarding that. Syed Asad: So that is in this place there is no vectors, there are no vector databases. It is going without that. So we are following some keyword approach. We are extracting keywords, classifying the queries in simple or complex, then hitting it again to the database, sending it on the live API, getting results. So there are multiple hits going on. So what happened? This all multiple hits which were going on. They reduced the timing and I mean the user experience was being badly affected as the time for the retrieval has gone up and user and if you're going any query and inputting any query it is giving you results in say 1 minute. You wouldn't be waiting for 1 minute for a result. Demetrios: Not at all. Syed Asad: So this is one of the challenge for a GPU based approach. And in, in the background everything was working on GPT 4 even, not 3.5. I mean the costliest. Demetrios: Yeah. Syed Asad: So, so here I started with the LPU approach, the Grok. I mean it's magical. Demetrios: Yeah. Syed Asad: I have been implementing proc since the last many days and it has been magical. The chatbots are running blazingly fast but there are some shortcomings also. You cannot control the temperature if you have lesser control on hallucination. That is one of the challenges which I am facing. So that is why I am not able to deploy Grok into production right now. Because hallucination is one of the concern for the client. Also for anybody who is having, who wants to have a rag on their own data, say, or AI on their own data, they won't, they won't expect you, the LLM, to be creative. So that is one of the challenges. Syed Asad: So what I found that although many of the tools that are available in the market right now day in and day out, there are more researches. But most of the things which are coming up in our feeds or more, I mean they are coming as a sort of a marketing gimmick. They're not working actually on the ground. Demetrios: Tell me, tell me more about that. What other stuff have you tried that's not working? Because I feel that same way. I've seen it and I also have seen what feels like some people, basically they release models for marketing purposes as opposed to actual valuable models going out there. So which ones? I mean Grok, knowing about Grok and where it excels and what some of the downfalls are is really useful. It feels like this idea of temperature being able to control the knob on the temperature and then trying to decrease the hallucinations is something that is fixable in the near future. So maybe it's like months that we'll have to deal with that type of thing for now. But I'd love to hear what other things you've tried that were not like you thought they were going to be when you were scrolling Twitter or LinkedIn. Syed Asad: Should I name them? Demetrios: Please. So we all know we don't have to spend our time on them. Syed Asad: I'll start with OpenAI. The clients don't like GPT 4 to be used in there just because the primary concern is the cost. Secondary concern is the data privacy. And the third is that, I mean, I'm talking from the client's perspective, not the tech stack perspective. Demetrios: Yeah, yeah, yeah. Syed Asad: They consider OpenAI as a more of a marketing gimmick. Although GPT 4 gives good results. I'm, I'm aware of that, but the clients are not in favor. But the thing is that I do agree that GPT 4 is still the king of llms right now. So they have no option, no option to get the better, better results. But Mixtral is performing very good as far as the hallucinations are concerned. Just keeping the parameter temperature is equal to zero in a python code does not makes the hallucination go off. It is one of my key takeaways. Syed Asad: I have been bogging my head. Just. I'll give you an example, a chat bot. There is a, there's one of the use case in which is there's a big publishing company. I cannot name that company right now. And they want the entire system of books since the last 2025 years to be just converted into a rack pipeline. And the people got query. The. Syed Asad: The basic problem which I was having is handling a hello. When a user types hello. So when you type in hello, it. Demetrios: Gives you back a book. Syed Asad: It gives you back a book even. It is giving you back sometimes. Hello, I am this, this, this. And then again, some information. What you have written in the prompt, it is giving you everything there. I will answer according to this. I will answer according to this. So, so even if the temperature is zero inside the code, even so that, that included lots of prompt engineering. Syed Asad: So prompt engineering is what I feel is one of the most important trades which will be popular, which is becoming popular. And somebody is having specialization in prompt engineering. I mean, they can control the way how an LLM behaves because it behaves weirdly. Like in this use case, I was using croc and Mixtral. So to control Mixtral in such a way. It was heck lot of work, although it, we made it at the end, but it was heck lot of work in prompt engineering part. Demetrios: And this was, this was Mixtral large. Syed Asad: Mixtral, seven bits, eight by seven bits. Demetrios: Yeah. I mean, yeah, that's the trade off that you have to deal with. And it wasn't fine tuned at all. Syed Asad: No, it was not fine tuned because we were constructing a rack pipeline, not a fine tuned application, because right now, right now, even the customers are not interested in getting a fine tune model because it cost them and they are more interested in a contextual, like a rag contextual pipeline. Demetrios: Yeah, yeah. Makes sense. So basically, this is very useful to think about. I think we all understand and we've all seen that GPT 4 does best if we can. We want to get off of it as soon as possible and see how we can, how far we can go down the line or how far we can go on the difficulty spectrum. Because as soon as you start getting off GPT 4, then you have to look at those kind of issues with like, okay, now it seems to be hallucinating a lot more. How do I figure this out? How can I prompt it? How can I tune my prompts? How can I have a lot of prompt templates or a prompt suite to make sure that things work? And so are you using any tools for keeping track of prompts? I know there's a ton out there. Syed Asad: We initially started with the parameter efficient fine tuning for prompts, but nothing is working 100% interesting. Nothing works 100% it is as far as the prompting is concerned. It goes on to a hit and trial at the end. Huge wastage of time in doing prompt engineering. Even if you are following the exact prompt template given on the hugging face given on the model card anywhere, it will, it will behave, it will act, but after some time. Demetrios: Yeah, yeah. Syed Asad: But mixed well. Is performing very good. Very, very good. Mixtral eight by seven bits. That's very good. Demetrios: Awesome. Syed Asad: The summarization part is very strong. It gives you responses at par with GPT 4. Demetrios: Nice. Okay. And you don't have to deal with any of those data concerns that your customers have. Syed Asad: Yeah, I'm coming on to that only. So the next part was the data concern. So they, they want either now or in future the localization of llms. I have been doing it with readily, with Llama, CPP and Ollama. Right now. Ollama is very good. I mean, I'm a huge, I'm a huge fan of Ollama right now, and it is performing very good as far as the localization and data privacy is concerned because, because at the end what you are selling, it makes things, I mean, at the end it is sales. So even if the client is having data of the customers, they want to make their customers assure that the data is safe. Syed Asad: So that is with the localization only. So they want to gradually go into that place. So I want to bring here a few things. To summarize what I said, localization of llms is one of the concern right now is a big market. Second is quantization of models. Demetrios: Oh, interesting. Syed Asad: In quantization of models, whatever. So I perform scalar quantization and binary quantization, both using bits and bytes. I various other techniques also, but the bits and bytes was the best. Scalar quantization is performing better. Binary quantization, I mean the maximum compression or maximum lossy function is there, so it is not, it is, it is giving poor results. Scalar quantization is working very fine. It, it runs on CPU also. It gives you good results because whatever projects which we are having right now or even in the markets also, they are not having huge corpus of data right now, but they will eventually scale. Syed Asad: So they want something right now so that quantization works. So quantization is one of the concerns. People want to dodge aws, they don't want to go to AWS, but it is there. They don't have any other way. So that is why they want aws. Demetrios: And is that because of costs lock in? Syed Asad: Yeah, cost is the main part. Demetrios: Yeah. They understand that things can get out of hand real quick if you're using AWS and you start using different services. I think it's also worth noting that when you're using different services on AWS, it may be a very similar service. But if you're using sagemaker endpoints on AWS, it's like a lot more expensive than just an EKS endpoint. Syed Asad: Minimum cost for a startup, for just the GPU, bare minimum is minimum. $450. Minimum. It's $450 even without just on the testing phases or the development phases, even when it has not gone into production. So that gives a dent to the client also. Demetrios: Wow. Yeah. Yeah. So it's also, and this is even including trying to use like tranium or inferencia and all of that stuff. You know those services? Syed Asad: I know those services, but I've not readily tried those services. I'm right now in the process of trying salad also for inference, and they are very, very cheap right now. Demetrios: Nice. Okay. Yeah, cool. So if you could wave your magic wand and have something be different when it comes to your work, your day in, day out, especially because you've been doing a lot of rags, a lot of different kinds of rags, a lot of different use cases with, with rags. Where do you think you would get the biggest uptick in your performance, your ability to just do what you need to do? How could rags be drastically changed? Is it something that you say, oh, the hallucinations. If we didn't have to deal with those, that would make my life so much easier. I didn't have to deal with prompts that would make my life infinitely easier. What are some things like where in five years do you want to see this field be? Syed Asad: Yeah, you figured it right. The hallucination part is one of the concerns, or biggest concerns with the client when it comes to the rag, because what we see on LinkedIn and what we see on places, it gives you a picture that it, it controls hallucination, and it gives you answer that. I don't know anything about this, as mentioned in the context, but it does not really happen when you come to the production. It gives you information like you are developing a rag for a publishing company, and it is giving you. Where is, how is New York like, it gives you information on that also, even if you have control and everything. So that is one of the things which needs to be toned down. As far as the rag is concerned, pricing is the biggest concern right now, because there are very few players in the market as far as the inference is concerned, and they are just dominating the market with their own rates. So this is one of the pain points. Syed Asad: And the. I'll also want to highlight the popular vector databases. There are many Pinecone weaviate, many things. So they are actually, the problem with many of the vector databases is that they work fine. They are scalable. This is common. The problem is that they are not easy to use. So that is why I always use Qdrant. Syed Asad: Not because Qdrant is sponsoring me, not because I am doing a job with Qdrant, but Qdrant is having the ease of use. And it, I have, I have trained people in my team who specialize with Qdrant, and they were initially using Weaviate and Pinecone. I mean, you can do also store vectors in those databases, but it is not especially the, especially the latest development with Pine, sorry, with Qdrant is the fast embed, which they just now released. And it made my work a lot easier by using the ONNX approach rather than a Pytorch based approach, because there was one of the projects in which we were deploying embedding model on an AWS server and it was running continuously. And minimum utilization of ram is 6gb. Even when it is not doing any sort of vector embedding so fast. Embed has so Qdrant is playing a huge role, I should acknowledge them. And one more thing which I would not like to use is LAN chain. Syed Asad: I have been using it. So. So I don't want to use that language because it is not, it did not serve any purpose for me, especially in the production. It serves purpose in the research phase. When you are releasing any notebook, say you have done this and does that. It is not. It does not works well in production, especially for me. Llama index works fine, works well. Demetrios: You haven't played around with anything else, have you? Like Haystack or. Syed Asad: Yeah, haystack. Haystack. I have been playing out around, but haystack is lacking functionalities. It is working well. I would say it is working well, but it lacks some functionalities. They need to add more things as compared to Llama index. Demetrios: And of course, the hottest one on the block right now is DSPY. Right? Have you messed around with that at all? Syed Asad: DSPy, actually DSPY. I have messed with DSPY. But the thing is that DSPY is right now, I have not experimented with that in the production thing, just in the research phase. Demetrios: Yeah. Syed Asad: So, and regarding the evaluation part, DeepEval, I heard you might have a DeepEval. So I've been using that. It is because one of the, one of the challenges is the testing for the AI. Also, what responses are large language model is generating the traditional testers or the manual tester software? They don't know, actually. So there's one more vertical which is waiting to be developed, is the testing for AI. It has a huge potential. And DeepEval, the LLM based approach on testing is very, is working fine and is open source also. Demetrios: And that's the DeepEval I haven't heard. Syed Asad: Let me just tell you the exact spelling. It is. Sorry. It is DeepEval. D E E P. Deep eval. I can. Demetrios: Yeah. Okay. I know DeepEval. All right. Yeah, for sure. Okay. Hi. I for some reason was understanding D Eval. Syed Asad: Yeah, actually I was pronouncing it wrong. Demetrios: Nice. So these are some of your favorite, non favorite, and that's very good to know. It is awesome to hear about all of this. Is there anything else that you want to say before we jump off? Anything that you can, any wisdom you can impart on us for your rag systems and how you have learned the hard way? So tell us so we don't have to learn that way. Syed Asad: Just go. Don't go with the marketing. Don't go with the marketing. Do your own research. Hugging face is a good, I mean, just fantastic. The leaderboard, although everything does not work in the leaderboard, also say, for example, I don't, I don't know about today and tomorrow, today and yesterday, but there was a model from Salesforce, the embedding model from Salesforce. It is still topping charts, I think, in the, on the MTEB. MTEB leaderboard for the embedding models. Syed Asad: But you cannot use it in the production. It is way too huge to implement it. So what's the use? Mixed bread AI. The mixed bread AI, they are very light based, lightweight, and they, they are working fine. They're not even on the leaderboard. They were on the leaderboard, but they're right, they might not. When I saw they were ranking on around seven or eight on the leaderboard, MTEB leaderboard, but they were working fine. So even on the leaderboard thing, it does not works. Demetrios: And right now it feels a little bit like, especially when it comes to embedding models, you just kind of go to the leaderboard and you close your eyes and then you pick one of them. Have you figured out a way to better test these or do you just find one and then try and use it everywhere? Syed Asad: No, no, that is not the case. Actually what I do is that I need to find the first, the embedding model. Try to find the embedding model based on my use case. Like if it is an embedding model on a medical use case more. So I try to find that. But the second factor to filter that is, is the size of that embedding model. Because at the end, if I am doing the entire POC or an entire research with that embedding model, what? And it has happened to me that we did entire research with embedding models, large language models, and then we have to remove everything just on the production part and it just went in smoke. Everything. Syed Asad: So a lightweight embedding model, especially the one which, which has started working recently, is that the cohere embedding models, and they have given a facility to call those embedding models in a quantized format. So that is also working and fast. Embed is one of the things which is by Qdrant, these two things are working in the production. I'm talking in the production for research. You can do anything. Demetrios: Brilliant, man. Well, this has been great. I really appreciate it. Asad, thank you for coming on here and for anybody else that would like to come on to the vector space talks, just let us know. In the meantime, don't get lost in vector space. We will see you all later. Have a great afternoon. Morning, evening, wherever you are. Demetrios: Asad, you taught me so much, bro. Thank you. ",blog/advancements-and-challenges-in-rag-systems-syed-asad-vector-space-talks-021.md "--- draft: false title: Talk with YouTube without paying a cent - Francesco Saverio Zuppichini | Vector Space Talks slug: youtube-without-paying-cent short_description: A sneak peek into the tech world as Francesco shares his ideas and processes on coding innovative solutions. description: Francesco Zuppichini outlines the process of converting YouTube video subtitles into searchable vector databases, leveraging tools like YouTube DL and Hugging Face, and addressing the challenges of coding without conventional frameworks in machine learning engineering. preview_image: /blog/from_cms/francesco-saverio-zuppichini-bp-cropped.png date: 2024-03-27T12:37:55.643Z author: Demetrios Brinkmann featured: false tags: - embeddings - LLMs - Retrieval Augmented Generation - Ollama --- > *""Now I do believe that Qdrant, I'm not sponsored by Qdrant, but I do believe it's the best one for a couple of reasons. And we're going to see them mostly because I can just run it on my computer so it's full private and I'm in charge of my data.”*\ -- Francesco Saverio Zuppichini > Francesco Saverio Zuppichini is a Senior Full Stack Machine Learning Engineer at Zurich Insurance with experience in both large corporations and startups of various sizes. He is passionate about sharing knowledge, and building communities, and is known as a skilled practitioner in computer vision. He is proud of the community he built because of all the amazing people he got to know. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/7kVd5a64sz2ib26IxyUikO?si=mrOoVP3ISQ22kXrSUdOmQA), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/56mFleo06LI).*** ## **Top takeaways:** Curious about transforming YouTube content into searchable elements? Francesco Zuppichini unpacks the journey of coding a RAG by using subtitles as input, harnessing technologies like YouTube DL, Hugging Face, and Qdrant, while debating framework reliance and the fine art of selecting the right software tools. Here are some insights from this episode: 1. **Behind the Code**: Francesco unravels how to create a RAG using YouTube videos. Get ready to geek out on the nuts and bolts that make this magic happen. 2. **Vector Voodoo**: Ever wonder how embedding vectors carry out their similarity searches? Francesco's got you covered with his brilliant explanation of vector databases and the mind-bending distance method that seeks out those matches. 3. **Function over Class**: The debate is as old as stardust. Francesco shares why he prefers using functions over classes for better code organization and demonstrates how this approach solidifies when running language models with Ollama. 4. **Metadata Magic**: Find out how metadata isn't just a sidekick but plays a pivotal role in the realm of Qdrant and RAGs. Learn why Francesco values metadata as payload and the challenges it presents in developing domain-specific applications. 5. **Tool Selection Tips**: Deciding on the right software tool can feel like navigating an asteroid belt. Francesco shares his criteria—ease of installation, robust documentation, and a little help from friends—to ensure a safe landing. > Fun Fact: Francesco confessed that his code for chunking subtitles was ""a little bit crappy"" because of laziness—proving that even pros take shortcuts to the stars now and then. > ## Show notes: 00:00 Intro to Francesco\ 05:36 Create YouTube rack for data retrieval.\ 09:10 Local web dev showcase without frameworks effectively.\ 11:12 Qdrant: converting video text to vectors.\ 13:43 Connect to vectordb, specify config, keep it simple.\ 17:59 Recreate, compare vectors, filter for right matches.\ 21:36 Use functions and share states for simpler coding.\ 29:32 Gemini Pro generates task-based outputs effectively.\ 32:36 Good documentation shows pride in the product.\ 35:38 Organizing different data types in separate collections.\ 38:36 Proactive approach to understanding code and scalability.\ 42:22 User feedback and statistics evaluation is crucial.\ 44:09 Consider user needs for chatbot accuracy and relevance. ## More Quotes from Francesco: *""So through Docker, using Docker compose, very simple here I just copy and paste the configuration for the Qdrant documentation. I run it and when I run it I also get a very nice looking interface.*”\ -- Francesco Saverio Zuppichini *""It's a very easy way to debug stuff because if you see a lot of vectors from the same document in the same place, maybe your chunking is not doing a great job because maybe you have some too much kind of overlapping on the recent bug in your code in which you have duplicate chunks. Okay, so we have our vector DB running. Now we need to do some setup stuff. So very easy to do with Qdrant. You just need to get the Qdrant client.”*\ -- Francesco Saverio Zuppichini *""So straightforward, so useful. A lot of people, they don't realize that types are very useful. So kudos to the Qdrant team to actually make all the types very nice.”*\ -- Francesco Saverio Zuppichini ## Transcript: Demetrios: Folks, welcome to another vector space talks. I'm excited to be here and it is a special day because I've got a co host with me today. Sabrina, what's going on? How you doing? Sabrina Aquino: Let's go. Thank you so much, Demetrios, for having me here. I've always wanted to participate in vector space talks. Now it's finally my chance. So thank you so much. Demetrios: Your dream has come true and what a day for it to come true because we've got a special guest today. While we've got you here, Sabrina, I know you've been doing some excellent stuff on the Internet when it comes to other ways to engage with the Qdrant community. Can you break that down real fast before we jump into this? Sabrina Aquino: Absolutely. I think an announcement here is we're hosting our first discord office hours. We're going to be answering all your questions about Qdrant with Qdrant team members, where you can interact with us, with our community as well. And we're also going to be dropping a few insights on the next Qdrant release 1.8. So that's super exciting and also, we are. Sorry, I just have another thing going on here on the live. Demetrios: Music got in your ear. Sabrina Aquino: We're also having the vector voices on Twitter, the X Spaces roundtable, where we bring experts to talk about a topic with our team. And you can also jump in and ask questions on the AMA. So that's super exciting as well. And, yeah, see you guys there. And I'll drop a link of the discord in the comments so you guys can join our community and be a part of it. Demetrios: Exactly what I was about to say. So without further ado, let's bring on our guest of honor, Mr. Where are you at, dude? Francesco Zuppichini: Hi. Hello. How are you? Demetrios: I'm great. How are you doing? Francesco Zuppichini: Great. Demetrios: I've been seeing you all around the Internet and I am very excited to be able to chat with you today. I know you've got a bit of stuff planned for us. You've got a whole presentation, right? Francesco Zuppichini: Correct. Demetrios: But for those that do not know you, you're a full stack machine learning engineer at Zurich Insurance. I think you also are very vocal and you are fun to follow on LinkedIn is what I would say. And we're going to get to that at the end after you give your presentation. But once again, reminder for everybody, if you want to ask questions, hit us up with questions in the chat. As far as going through his presentation today, you're going to be talking to us all about some really cool stuff about rags. I'm going to let you get into it, man. And while you're sharing your screen, I'm going to tell people a little bit of a fun fact about you. That you put ketchup on your pizza, which I think is a little bit sacrilegious. Francesco Zuppichini: Yes. So that's 100% true. And I hope that the italian pizza police is not listening to this call or I can be in real trouble. Demetrios: I think we just lost a few viewers there, but it's all good. Sabrina Aquino: Italy viewers just dropped out. Demetrios: Yeah, the Italians just dropped, but it's all good. We will cut that part out in post production, my man. I'm going to share your screen and I'm going to let you get after it. I'll be hanging around in case any questions pop up with Sabrina in the background. And here you go, bro. Francesco Zuppichini: Wonderful. So you can see my screen, right? Demetrios: Yes, for sure. Francesco Zuppichini: That's perfect. Okay, so today we're going to talk about talk with YouTube without paying a cent, no framework bs. So the goal of today is to showcase how to code a RAG given as an input a YouTube video without using any framework like language, et cetera, et cetera. And I want to show you that it's straightforward, using a bunch of technologies and Qdrants as well. And you can do all of this without actually pay to any service. Right. So we are going to run our PEDro DB locally and also the language model. We are going to run our machines. Francesco Zuppichini: And yeah, it's going to be a technical talk, so I will kind of guide you through the code. Feel free to interrupt me at any time if you have questions, if you want to ask why I did that, et cetera, et cetera. So very quickly, before we get started, I just want you not to introduce myself. So yeah, senior full stack machine engineer. That's just a bunch of funny work to basically say that I do a little bit of everything. Start. So when I was working, I start as computer vision engineer, I work at PwC, then a bunch of startups, and now I sold my soul to insurance companies working at insurance. And before I was doing computer vision, now I'm doing due to Chat GPT, hyper language model, I'm doing more of that. Francesco Zuppichini: But I'm always involved in bringing the full product together. So from zero to something that is deployed and running. So I always be interested in web dev. I can also do website servers, a little bit of infrastructure as well. So now I'm just doing a little bit of everything. So this is why there is full stack there. Yeah. Okay, let's get started to something a little bit more interesting than myself. Francesco Zuppichini: So our goal is to create a full local YouTube rack. And if you don't want a rack, is, it's basically a system in which you take some data. In this case, we are going to take subtitles from YouTube videos and you're able to basically q a with your data. So you're able to use a language model, you ask questions, then we retrieve the relevant parts in the data that you provide, and hopefully you're going to get the right answer to your. So let's talk about the technologies that we're going to use. So to get the subtitles from a video, we're going to use YouTube DL and YouTube DL. It's a library that is available through Pip. So Python, I think at some point it was on GitHub and then I think it was removed because Google, they were a little bit beach about that. Francesco Zuppichini: So then they realized it on GitHub. And now I think it's on GitHub again, but you can just install it through Pip and it's very cool. Demetrios: One thing, man, are you sharing a slide? Because all I see is your. I think you shared a different screen. Francesco Zuppichini: Oh, boy. Demetrios: I just see the video of you. There we go. Francesco Zuppichini: Entire screen. Yeah. I'm sorry. Thank you so much. Demetrios: There we go. Francesco Zuppichini: Wonderful. Okay, so in order to get the embedding. So to translate from text to vectors, right, so we're going to use hugging face just an embedding model so we can actually get some vectors. Then as soon as we got our vectors, we need to store and search them. So we're going to use our beloved Qdrant to do so. We also need to keep a little bit of stage right because we need to know which video we have processed so we don't redo the old embeddings and the storing every time we see the same video. So for this part, I'm just going to use SQLite, which is just basically an SQL database in just a file. So very easy to use, very kind of lightweight, and it's only your computer, so it's safe to run the language model. Francesco Zuppichini: We're going to use Ollama. That is a very simple way and very well done way to just get a language model that is running on your computer. And you can also call it using the OpenAI Python library because they have implemented the same endpoint as. It's like, it's super convenient, super easy to use. If you already have some code that is calling OpenAI, you can just run a different language model using Ollama. And you just need to basically change two lines of code. So what we're going to do, basically, I'm going to take a video. So here it's a video from Fireship IO. Francesco Zuppichini: We're going to run our command line and we're going to ask some questions. Now, if you can still, in theory, you should be able to see my full screen. Yeah. So very quickly to showcase that to you, I already processed this video from the good sound YouTube channel and I have already here my command line. So I can already kind of see, you know, I can ask a question like what is the contact size of Germany? And we're going to get the reply. Yeah. And here we're going to get a reply. And now I want to walk you through how you can do something similar. Francesco Zuppichini: Now, the goal is not to create the best rack in the world. It's just to showcase like show zero to something that is actually working. How you can do that in a fully local way without using any framework so you can really understand what's going on under the hood. Because I think a lot of people, they try to copy, to just copy and paste stuff on Langchain and then they end up in a situation when they need to change something, but they don't really know where the stuff is. So this is why I just want to just show like Windfield zero to hero. So the first step will be I get a YouTube video and now I need to get the subtitle. So you could actually use a model to take the audio from the video and get the text. Like a whisper model from OpenAI, for example. Francesco Zuppichini: In this case, we are taking advantage that YouTube allow people to upload subtitles and YouTube will automatically generate the subtitles. So here using YouTube dial, I'm just going to get my video URL. I'm going to set up a bunch of options like the format they want, et cetera, et cetera. And then basically I'm going to download and get the subtitles. And they look something like this. Let me show you an example. Something similar to this one, right? We have the timestamps and we do have all text inside. Now the next step. Francesco Zuppichini: So we got our source of data, we have our text key. Next step is I need to translate my text to vectors. Now the easiest way to do so is just use sentence transformers for backing phase. So here I've installed it. I load in a model. In this case I'm using this model here. I have no idea what tat model is. I just default one tatted find and it seems to work fine. Francesco Zuppichini: And then in order to use it, I'm just providing a query and I'm getting back a list of vectors. So we have a way to take a video, take the text from the video, convert that to vectors with a semantic meaningful representation. And now we need to store them. Now I do believe that Qdrant, I'm not sponsored by Qdrant, but I do believe it's the best one for a couple of reasons. And we're going to see them mostly because I can just run it on my computer so it's full private and I'm in charge of my data. So the way I'm running it is through Docker compose. So through Docker, using Docker compose, very simple here I just copy and paste the configuration for the Qdrant documentation. I run it and when I run it I also get a very nice looking interface. Francesco Zuppichini: I'm going to show that to you because I think it's very cool. So here I've already some vectors inside here so I can just look in my collection, it's called embeddings, an original name. And we can see all the chunks that were embed with the metadata, in this case just the video id. A super cool thing, super useful to debug is go in the visualize part and see the embeddings, the projected embeddings. You can actually do a bounce of stuff. You can actually also go here and color them by some metadata. Like I can say I want to have a different color based on the video id. In this case I just have one video. Francesco Zuppichini: I will show that as soon as we add more videos. This is so cool, so useful. I will use this at work as well in which I have a lot of documents. And it's a very easy way to debug stuff because if you see a lot of vectors from the same document in the same place, maybe your chunking is not doing a great job because maybe you have some too much kind of overlapping on the recent bug in your code in which you have duplicate chunks. Okay, so we have our vector DB running. Now we need to do some setup stuff. So very easy to do with Qdrant. You just need to get the Qdrant client. Francesco Zuppichini: So you have a connection with a vectordb, you create a connection, you specify a name, you specify some configuration stuff. In this case I just specify the vector size because Qdrant, it needs to know how big the vectors are going to be and the distance I want to use. So I'm going to use the cosite distance in Qdrant documentation there are a lot of parameters. You can do a lot of crazy stuff here and just keep it very simple. And yeah, another important thing is that since we are going to embed more videos, when I ask a question to a video, I need to know which embedded are from that video. So we're going to create an index. So it's very efficient to filter my embedded based on that index, an index on the metadata video because when I store a chunk in Qdrant, I also going to include from which video is coming from. Very simple, very simple to set up. Francesco Zuppichini: You just need to do this once. I was very lazy so I just assumed that if this is going to fail, it means that it's because I've already created a collection. So I'm just going to pass it and call it a day. Okay, so this is basically all the preprocess this setup you need to do to have your Qdrant ready to store and search vectors. To store vectors. Straightforward, very straightforward as well. Just need again the client. So the connection to the database here I'm passing my embedding so sentence transformer model and I'm passing my chunks as a list of documents. Francesco Zuppichini: So documents in my code is just a type that will contain just this metadata here. Very simple. It's similar to Lang chain here. I just have attacked it because it's lightweight. To store them we call the upload records function. We encode them here. There is a little bit of bad variable names from my side which I replacing that. So you shouldn't do that. Francesco Zuppichini: Apologize about that and you just send the records. Another very cool thing about Qdrant. So the second things that I really like is that they have types for what you send through the library. So this models record is a Qdrant type. So you use it and you know immediately. So what you need to put inside. So let me give you an example. Right? So assuming that I'm programming, right, I'm going to say model record bank. Francesco Zuppichini: I know immediately. So what I have to put inside, right? So straightforward, so useful. A lot of people, they don't realize that types are very useful. So kudos to the Qdrant team to actually make all the types very nice. Another cool thing is that if you're using fast API to build a web server, if you are going to return a Qdrant models type, it's actually going to be serialized automatically through pydantic. So you don't need to do weird stuff. It's all handled by the Qdrant APIs, by the product SDK. Super cool. Francesco Zuppichini: Now we have a way to store our chunks to embed them. So this is how they look like in the interface. I can see them, I can go to them, et cetera, et Cetera. Very nice. Now the missing part, right. So video subtitles. I chunked the subtitles. I haven't show you the chunking code. Francesco Zuppichini: It's a little bit crappy because I was very lazy. So I just like chunking by characters count and a little bit of overlapping. We have a way to store and embed our chunks and now we need a way to search. That's basically one of the missing steps. Now search straightforward as well. This is also a good example because I can show you how effective is to create filters using Qdrant. So what do we need to search with again the vector client, the embeddings, because we have a query, right. We need to run the query with the same embedding models. Francesco Zuppichini: We need to recreate to embed in a vector and then we need to compare with the vectors in the vector Db using a distance method, in this case considered similarity in order to get the right matches right, the closest one in our vector DB, in our vector search base. So passing a query string, I'm passing a video id and I pass in a label. So how many hits I want to get from the metadb. Now to create a filter again you're going to use the model package from the Qdrant framework. So here I'm just creating a filter class for the model and I'm saying okay, this filter must match this key, right? So metadata video id with this video id. So when we search, before we do the similarity search, we are going to filter away all the vectors that are not from that video. Wonderful. Now super easy as well. Francesco Zuppichini: We just call the DB search, right pass. Our collection name here is star coded. Apologies about that, I think I forgot to put the right global variable our coded, we create a query, we set the limit, we pass the query filter, we get the it back as a dictionary in the payload field of each it and we recreate our document a dictionary. I have types, right? So I know what this function is going to return. Now if you were to use a framework, right this part, it will be basically the same thing. If I were to use langchain and I want to specify a filter, I would have to write the same amount of code. So most of the times you don't really need to use a framework. One thing that is nice about not using a framework here is that I add control on the indexes. Francesco Zuppichini: Lang chain, for instance, will create the indexes only while you call a classmate like from document. And that is kind of cumbersome because sometimes I wasn't quoting bugs in which I was not understanding why one index was created before, after, et cetera, et cetera. So yes, just try to keep things simple and not always write on frameworks. Wonderful. Now I have a way to ask a query to get back the relative parts from that video. Now we need to translate this list of chunks to something that we can read as human. Before we do that, I was almost going to forget we need to keep state. Now, one of the last missing part is something in which I can store data. Francesco Zuppichini: Here I just have a setup function in which I'm going to create an SQL lite database, create a table called videos in which I have an id and a title. So later I can check, hey, is this video already in my database? Yes. I don't need to process that. I can just start immediately to QA on that video. If not, I'm going to do the chunking and embeddings. Got a couple of functions here to get video from Db to save video from and to save video to Db. So notice now I only use functions. I'm not using classes here. Francesco Zuppichini: I'm not a fan of object writing programming because it's very easy to kind of reach inheritance health in which we have like ten levels of inheritance. And here if a function needs to have state, here we do need to have state because we need a connection. So I will just have a function that initialize that state. I return tat to me, and me as a caller, I'm just going to call it and pass my state. Very simple tips allow you really to divide your code properly. You don't need to think about is my class to couple with another class, et cetera, et cetera. Very simple, very effective. So what I suggest when you're coding, just start with function and share states across just pass down state. Francesco Zuppichini: And when you realize that you can cluster a lot of function together with a common behavior, you can go ahead and put state in a class and have key function as methods. So try to not start first by trying to understand which class I need to use around how I connect them, because in my opinion it's just a waste of time. So just start with function and then try to cluster them together if you need to. Okay, last part, the juicy part as well. Language models. So we need the language model. Why do we need the language model? Because I'm going to ask a question, right. I'm going to get a bunch of relevant chunks from a video and the language model. Francesco Zuppichini: It needs to answer that to me. So it needs to get information from the chunks and reply that to me using that information as a context. To run language model, the easiest way in my opinion is using Ollama. There are a lot of models that are available. I put a link here and you can also bring your own model. There are a lot of videos and tutorial how to do that. You run this command as soon as you install it on Linux. It's a one line to install Ollama. Francesco Zuppichini: You run this command here, it's going to download Mistral 7B very good model and run it on your gpu if you have one, or your cpu if you don't have a gpu, run it on GPU. Here you can see it yet. It's around 6gb. So even with a low tier gpu, you should be able to run a seven minute model on your gpu. Okay, so this is the prompt just for also to show you how easy is this, this prompt was just very lazy. Copy and paste from langchain source code here prompt use the following piece of context to answer the question at the end. Blah blah blah variable to inject the context inside question variable to get question and then we're going to get an answer. How do we call it? Is it easy? I have a function here called getanswer passing a bunch of stuff, passing also the OpenAI from the OpenAI Python package model client passing a question, passing a vdb, my DB client, my embeddings, reading my prompt, getting my matching documents, calling the search function we have just seen before, creating my context. Francesco Zuppichini: So just joining the text in the chunks on a new line, calling the format function in Python. As simple as that. Just calling the format function in Python because the format function will look at a string and kitty will inject variables that match inside these parentheses. Passing context passing question using the OpenAI model client APIs and getting a reply back. Super easy. And here I'm returning the reply from the language model and also the list of documents. So this should be documents. I think I did a mistake. Francesco Zuppichini: When I copy and paste this to get this image and we are done right. We have a way to get some answers from a video by putting everything together. This can seem scary because there is no comment here, but I can show you tson code. I think it's easier so I can highlight stuff. I'm creating my embeddings, I'm getting my database, I'm getting my vector DB login, some stuff I'm getting my model client, I'm getting my vid. So here I'm defining the state that I need. You don't need comments because I get it straightforward. Like here I'm getting the vector db, good function name. Francesco Zuppichini: Then if I don't have the vector db, sorry. If I don't have the video id in a database, I'm going to get some information to the video. I'm going to download the subtitles, split the subtitles. I'm going to do the embeddings. In the end I'm going to save it to the betterDb. Finally I'm going to get my video back, printing something and start a while loop in which you can get an answer. So this is the full pipeline. Very simple, all function. Francesco Zuppichini: Also here fit function is very simple to divide things. Around here I have a file called RAG and here I just do all the RAG stuff. Right. It's all here similar. I have my file called crude. Here I'm doing everything I need to do with my database, et cetera, et cetera. Also a file called YouTube. So just try to split things based on what they do instead of what they are. Francesco Zuppichini: I think it's easier than to code. Yeah. So I can actually show you a demo in which we kind of embed a video from scratch. So let me kill this bad boy here. Let's get a juicy YouTube video from Sam. We can go with Gemma. We can go with Gemma. I think I haven't embedded that yet. Francesco Zuppichini: I'm sorry. My Eddie block is doing weird stuff over here. Okay, let me put this here. Demetrios: This is the moment that we need to all pray to the demo gods that this will work. Francesco Zuppichini: Oh yeah. I'm so sorry. I'm so sorry. I think it was already processed. So let me. I don't know this one. Also I noticed I'm seeing this very weird thing which I've just not seen that yesterday. So that's going to be interesting. Francesco Zuppichini: I think my poor Linux computer is giving up to running language models. Okay. Downloading ceramic logs, embeddings and we have it now before I forgot because I think that you guys spent some time doing this. So let's go on the visualize page and let's actually do the color by and let's do metadata, video id. Video id. Let's run it. Metadata, metadata, video meta. Oh my God. Francesco Zuppichini: Data video id. Why don't see the other one? I don't know. This is the beauty of live section. Demetrios: This is how we know it's real. Francesco Zuppichini: Yeah, I mean, this is working, right? This is called Chevroni Pro. That video. Yeah, I don't know about that. I don't know about that. It was working before. I can touch for sure. So probably I'm doing something wrong, probably later. Let's try that. Francesco Zuppichini: Let's see. I must be doing something wrong, so don't worry about that. But we are ready to ask questions, so maybe I can just say I don't know, what is Gemini pro? So let's see, Mr. Running on GPU is kind of fast, it doesn't take too much time. And here we can see we are 6gb, 1gb is for the embedding model. So 4gb, 5gb running the language model here it says Gemini pro is a colonized tool that can generate output based on given tasks. Blah, blah, blah, blah, blah, blah. Yeah, it seems to work. Francesco Zuppichini: Here you have it. Thanks. Of course. And I don't know if there are any questions about it. Demetrios: So many questions. There's a question that came through the chat that is a simple one that we can answer right away, which is can we access this code anywhere? Francesco Zuppichini: Yeah, so it's on my GitHub. Can I share a link with you in the chat? Maybe? So that should be YouTube. Can I put it here maybe? Demetrios: Yes, most definitely can. And we'll drop that into all of the spots so that we have it. Now. Next question from my side, while people are also asking, and you've got some fans in the chat right now, so. Francesco Zuppichini: Nice to everyone by the way. Demetrios: So from my side, I'm wondering, do you have any specific design decisions criteria that you use when you are building out your stack? Like you chose Mistral, you chose Ollama, you chose Qdrant. It sounds like with Qdrant you did some testing and you appreciated the capabilities. With Qdrant, was it similar with Ollama and Mistral? Francesco Zuppichini: So my test is how long it's going to take to install that tool. If it's taking too much time and it's hard to install because documentation is bad, so that it's a red flag, right? Because if it's hard to install and documentation is bad for the installation, that's the first thing people are going to read. So probably it's not going to be great for something down the road to use Olama. It took me two minutes, took me two minutes, it was incredible. But just install it, run it and it was done. Same thing with Qualent as well and same thing with the hacking phase library. So to me, usually as soon as if I see that something is easy to install, that's usually means that is good. And if the documentation to install it, it's good. Francesco Zuppichini: It means that people thought about it and they care about writing good documentation because they want people to use their tools. A lot of times for enterprises tools like cloud enterprise services, documentation is terrible because they know you're going to pay because you're an enterprise. And some manager has decided five years ago to use TatCloud provider, not the other. So I think know if you see recommendation that means that the people's company, startup enterprise behind that want you to use their software because they know and they're proud of it. Like they know that is good. So usually this is my way of going. And then of course I watch a lot of YouTube videos so I see people talking about different texts, et cetera. And if some youtuber which I trust say like I tried this seems to work well, I will note it down. Francesco Zuppichini: So then in the future I know hey, for these things I think I use ABC and this has already be tested by someone. I don't know I'm going to use it. Another important thing is reach out to your friends networks and say hey guys, I need to do this. Do you know if you have a good stock that you're already trying to experience with that? Demetrios: Yeah. With respect to the enterprise software type of tools, there was something that I saw that was hilarious. It was something along the lines of custom customer and user is not the same thing. Customer is the one who pays, user is the one who suffers. Francesco Zuppichini: That's really true for enterprise software, I need to tell you. So that's true. Demetrios: Yeah, we've all been through it. So there's another question coming through in the chat about would there be a collection for each embedded video based on your unique view video id? Francesco Zuppichini: No. What you want to do, I mean you could do that of course, but collection should encapsulate the project that you're doing more or less in my mind. So in this case I just call it embeddings. Maybe I should have called videos. So they are just going to be inside the same collection, they're just going to have different metadata. I think you need to correct me if I'm wrong that from your side, from the Qdrant code, searching things in the same collection, probably it's more effective to some degree. And imagine that if you have 1000 videos you need to create 1000 collection. And then I think cocoa wise collection are meant to have data coming from the same source, semantic value. Francesco Zuppichini: So in my case I have all videos. If I were to have different data, maybe from pdfs. Probably I would just create another collection, right, if I don't want them to be in the same part and search them. And one cool thing of having all the videos in the same collection is that I can just ask a question to all the videos at the same time if I want to, or I can change my filter and ask questions to two free videos. Specifically, you can do that if you have one collection per video, right? Like for instance at work I was embedding PDF and using qualitative and sometimes you need to talk with two pdf at the same time free, or just one, or maybe all the PDF in that folder. So I was just changing the filter, right? And that can only be done if they're all in the same collection. Sabrina Aquino: Yeah, that's a great explanation of collections. And I do love your approach of having everything locally and having everything in a structured way that you can really understand what you're doing. And I know you mentioned sometimes frameworks are not necessary. And I wonder also from your side, when do you think a framework would be necessary and does it have to do with scaling? What do you think? Francesco Zuppichini: So that's a great question. So what frameworks in theory should give you is good interfaces, right? So a good interface means that if I'm following that interface, I know that I can always call something that implements that interface in the same way. Like for instance in Langchain, if I call a betterdb, I can just swap the betterdb and I can call it in the same way. If the interfaces are good, the framework is useful. If you know that you are going to change stuff. In my case, I know from the beginning that I'm going to use Qdrant, I'm going to use Ollama, and I'm going to use SQL lite. So why should I go to the hello reading framework documentation? I install libraries, and then you need to install a bunch of packages from the framework that you don't even know why you need them. Maybe you have a conflict package, et cetera, et cetera. Francesco Zuppichini: If you know ready. So what you want to do then just code it and call it a day? Like in this case, I know I'm not going to change the vector DB. If you think that you're going to change something, even if it's a simple approach, it's fair enough, simple to change stuff. Like I will say that if you know that you want to change your vector DB providers, either you define your own interface or you use a framework with an already defined interface. But be careful because right too much on framework will. First of all, basically you don't know what's going on inside the hood for launching because it's so kudos to them. They were the first one. They are very smart people, et cetera, et cetera. Francesco Zuppichini: But they have inheritance held in that code. And in order to understand how to do certain stuff I had to look at in the source code, right. And try to figure it out. So which class is inherited from that? And going straight up in order to understand what behavior that class was supposed to have. If I pass this parameter, and sometimes defining an interface is straightforward, just maybe you want to define a couple of function in a class. You call it, you just need to define the inputs and the outputs and if you want to scale and you can just implement a new class called that interface. Yeah, that is at least like my take. I try to first try to do stuff and then if I need to scale, at least I have already something working and I can scale it instead of kind of try to do the perfect thing from the beginning. Francesco Zuppichini: Also because I hate reading documentation, so I try to avoid doing that in general. Sabrina Aquino: Yeah, I totally love this. It's about having like what's your end project? Do you actually need what you're going to build and understanding what you're building behind? I think it's super nice. We're also having another question which is I haven't used Qdrant yet. The metadata is also part of the embedding, I. E. Prepended to the chunk or so basically he's asking if the metadata is also embedded in the answer for that. Go ahead. Francesco Zuppichini: I think you have a good article about another search which you also probably embed the title. Yeah, I remember you have a good article in which you showcase having chunks with the title from, I think the section, right. And you first do a search, find the right title and then you do a search inside. So all the chunks from that paragraph, I think from that section, if I'm not mistaken. It really depends on the use case, though. If you have a document full of information, splitting a lot of paragraph, very long one, and you need to very be precise on what you want to fetch, you need to take advantage of the structure of the document, right? Sabrina Aquino: Yeah, absolutely. The metadata goes as payload in Qdrant. So basically it's like a JSON type of information attached to your data that's not embedded. We also have documentation on it. I will answer on the comments as well, I think another question I have for you, Franz, about the sort of evaluation and how would you perform a little evaluation on this rag that you created. Francesco Zuppichini: Okay, so that is an interesting question, because everybody talks about metrics and evaluation. Most of the times you don't really have that, right? So you have benchmarks, right. And everybody can use a benchmark to evaluate their pipeline. But when you have domain specific documents, like at work, for example, I'm doing RAG on insurance documents now. How do I create a data set from that in order to evaluate my RAG? It's going to be very time consuming. So what we are trying to do, so we get a bunch of people who knows these documents, catching some paragraph, try to ask a question, and that has the reply there and having basically a ground truth from their side. A lot of time the reply has to be composed from different part of the document. So, yeah, it's very hard. Francesco Zuppichini: It's very hard. So what I will kind of suggest is try to use no benchmark, or then you empirically try that. If you're building a RAG that users are going to use, always include a way to collect feedback and collect statistics. So collect the conversation, if that is okay with your privacy rules. Because in my opinion, it's always better to put something in production till you wait too much time, because you need to run all your metrics, et cetera, et cetera. And as soon as people start using that, you kind of see if it is good enough, maybe for language model itself, so that it's a different task, because you need to be sure that they don't say, we're stuck to the users. I don't really have the source of true answer here. It's very hard to evaluate them. Francesco Zuppichini: So what I know people also try to do, like, so they get some paragraph or some chunks, they ask GPD four to generate a question and the answer based on the paragraph, and they use that as an auto labeling way to create a data set to evaluate your RAG. That can also be effective, I guess 100%, yeah. Demetrios: And depending on your use case, you probably need more rigorous evaluation or less, like in this case, what you're doing, it might not need that rigor. Francesco Zuppichini: You can see, actually, I think was Canada Airlines, right? Demetrios: Yeah. Francesco Zuppichini: If you have something that is facing paying users, then think one of the times before that. In my case at all, I have something that is used by internal users and we communicate with them. So if my chat bot is saying something wrong, so they will tell me. And the worst thing that can happen is that they need to manually look for the answer. But as soon as your chatbot needs to do something that had people that are going to pay or medical stuff. You need to understand that for some use cases, you need to apply certain rules for others and you can be kind of more relaxed, I would say, based on the arm that your chatbot is going to generate. Demetrios: Yeah, I think that's all the questions we've got for now. Appreciate you coming on here and chatting with us. And I also appreciate everybody listening in. Anyone who is not following Fran, go give him a follow, at least for the laughs, the chuckles, and huge thanks to you, Sabrina, for joining us, too. It was a pleasure having you here. I look forward to doing many more of these. Sabrina Aquino: The pleasure is all mine, Demetrios, and it was a total pleasure. Fran, I learned a lot from your session today. Francesco Zuppichini: Thank you so much. Thank you so much. And also go ahead and follow the Qdrant on LinkedIn. They post a lot of cool stuff and read the Qdrant blogs. They're very good. They're very good. Demetrios: That's it. The team is going to love to hear that, I'm sure. So if you are doing anything cool with good old Qdrant, give us a ring so we can feature you in the vector space talks. Until next time, don't get lost in vector space. We will see you all later. Have a good one, y'all. ",blog/talk-with-youtube-without-paying-a-cent-francesco-saverio-zuppichini-vector-space-talks.md "--- draft: false title: The challenges in using LLM-as-a-Judge - Sourabh Agrawal | Vector Space Talks slug: llm-as-a-judge short_description: Sourabh Agrawal explores the world of AI chatbots. description: Everything you need to know about chatbots, Sourabh Agrawal goes in to detail on evaluating their performance, from real-time to post-feedback assessments, and introduces uptrendAI—an open-source tool for enhancing chatbot interactions through customized and logical evaluations. preview_image: /blog/from_cms/sourabh-agrawal-bp-cropped.png date: 2024-03-19T15:05:02.986Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - LLM - retrieval augmented generation --- > ""*You don't want to use an expensive model like GPT 4 for evaluation, because then the cost adds up and it does not work out. If you are spending more on evaluating the responses, you might as well just do something else, like have a human to generate the responses.*”\ -- Sourabh Agrawal > Sourabh Agrawal, CEO & Co-Founder at UpTrain AI is a seasoned entrepreneur and AI/ML expert with a diverse background. He began his career at Goldman Sachs, where he developed machine learning models for financial markets. Later, he contributed to the autonomous driving team at Bosch/Mercedes, focusing on computer vision modules for scene understanding. In 2020, Sourabh ventured into entrepreneurship, founding an AI-powered fitness startup that gained over 150,000 users. Throughout his career, he encountered challenges in evaluating AI models, particularly Generative AI models. To address this issue, Sourabh is developing UpTrain, an open-source LLMOps tool designed to evaluate, test, and monitor LLM applications. UpTrain provides scores and offers insights to enhance LLM applications by performing root-cause analysis, identifying common patterns among failures, and providing automated suggestions for resolution. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/1o7xdbdx32TiKe7OSjpZts?si=yCHU-FxcQCaJLpbotLk7AQ), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/vBJF2sy1Pyw).*** ## **Top takeaways:** Why is real-time evaluation critical in maintaining the integrity of chatbot interactions and preventing issues like promoting competitors or making false promises? What strategies do developers employ to minimize cost while maximizing the effectiveness of model evaluations, specifically when dealing with LLMs? These might be just some of the many questions people in the industry are asking themselves. Fear, not! Sourabh will break it down for you. Check out the full conversation as they dive into the intricate world of AI chatbot evaluations. Discover the nuances of ensuring your chatbot's quality and continuous improvement across various metrics. Here are the key topics of this episode: 1. **Evaluating Chatbot Effectiveness**: An exploration of systematic approaches to assess chatbot quality across various stages, encompassing retrieval accuracy, response generation, and user satisfaction. 2. **Importance of Real-Time Assessment**: Insights into why continuous and real-time evaluation of chatbots is essential to maintain integrity and ensure they function as designed without promoting undesirable actions. 3. **Indicators of Compromised Systems**: Understand the significance of identifying behaviors that suggest a system may be prone to 'jailbreaking' and the methods available to counter these through API integration. 4. **Cost-Effective Evaluation Models**: Discussion on employing smaller models for evaluation to reduce costs without compromising the depth of analysis, focusing on failure cases and root-cause assessments. 5. **Tailored Evaluation Metrics**: Emphasis on the necessity of customizing evaluation criteria to suit specific use case requirements, including an exploration of the different metrics applicable to diverse scenarios. > Fun Fact: Sourabh discussed the use of Uptrend, an innovative API that provides scores and explanations for various data checks, facilitating logical and informed decision-making when evaluating AI models. > ## Show notes: 00:00 Prototype evaluation subjective; scalability challenges emerge.\ 05:52 Use cheaper, smaller models for effective evaluation.\ 07:45 Use LLM objectively, avoid subjective biases.\ 10:31 Evaluate conversation quality and customization for AI.\ 15:43 Context matters for AI model performance.\ 19:35 Chat bot creates problems for car company.\ 20:45 Real-time user query evaluations, guardrails, and jailbreak.\ 27:27 Check relevance, monitor data, filter model failures.\ 28:09 Identify common themes, insights, experiment with settings.\ 32:27 Customize jailbreak check for specific app purposes.\ 37:42 Mitigate hallucination using evaluation data techniques.\ 38:59 Discussion on productizing hallucination mitigation techniques.\ 42:22 Experimentation is key for system improvement. ## More Quotes from Sourabh: *""There are some cases, let's say related to safety, right? Like you want to check whether the user is trying to jailbreak your LLMs or not. So in that case, what you can do is you can do this evaluation in parallel to the generation because based on just the user query, you can check whether the intent is to jailbreak or it's an intent to actually use your product to kind of utilize it for the particular model purpose.*”\ -- Sourabh Agrawal *""You have to break down the response into individual facts and just see whether each fact is relevant for the question or not. And then take some sort of a ratio to get the final score. So that way all the biases which comes up into the picture, like egocentric bias, where LLM prefers its own outputs, those biases can be mitigated to a large extent.”*\ -- Sourabh Agrawal *""Generally speaking, what we have been seeing is that the better context you retrieve, the better your model becomes.”*\ -- Sourabh Agrawal ## Transcript: Demetrios: Sourabh, I've got you here from Uptrain. I think you have some notes that you wanted to present, but I also want to ask you a few questions because we are going to be diving into a topic that is near and dear to my heart and I think it's been coming up so much recently that is using LLMs as a judge. It is really hot these days. Some have even gone as far to say that it is the topic of 2024. I would love for you to dive in. Let's just get right to it, man. What are some of the key topics when you're talking about using LLMs to evaluate what key metrics are you using? How does this work? Can you break it down? Sourabh Agrawal: Yeah. First of all, thanks a lot for inviting me and no worries for hiccup. I guess I have never seen a demo or a talk which goes without any technical hiccups. It is bound to happen. Really excited to be here. Really excited to talk about LLM evaluations. And as you rightly pointed right, it's really a hot topic and rightly so. Right. Sourabh Agrawal: The way things have been panning out with LLMs and chat, GPT and GPT four and so on, is that people started building all these prototypes, right? And the way to evaluate them was just like eyeball them, just trust your gut feeling, go with the vibe. I guess they truly adopted the startup methodology, push things out to production and break things. But what people have been realizing is that it's not scalable, right? I mean, rightly so. It's highly subjective. It's a developer, it's a human who is looking at all the responses, someday he might like this, someday he might like something else. And it's not possible for them to kind of go over, just read through more than ten responses. And now the unique thing about production use cases is that they need continuous refinement. You need to keep on improving them, you need to keep on improving your prompt or your retrieval, your embedding model, your retrieval mechanisms and so on. Sourabh Agrawal: So that presents a case like you have to use a more scalable technique, you have to use LLMs as a judge because that's scalable. You can have an API call, and if that API call gives good quality results, it's a way you can mimic whatever your human is doing or in a way augment them which can truly act as their copilot. Demetrios: Yeah. So one question that's been coming through my head when I think about using LLMs as a judge and I get more into it, has been around when do we use those API calls. It's not in the moment that we're looking for this output. Is it like just to see if this output is real? And then before we show it to the user, it's kind of in bunches after we've gotten a bit of feedback from the user. So that means that certain use cases are automatically discarded from this, right? Like if we are thinking, all right, we're going to use LLMs as a judge to make sure that we're mitigating hallucinations or that we are evaluating better, it is not necessarily something that we can do in the moment, if I'm understanding it correctly. So can you break that down a little bit more? How does it actually look in practice? Sourabh Agrawal: Yeah, definitely. And that's a great point. The way I see it, there are three cases. Case one is what you mentioned in the moment before showing the response to the user. You want to check whether the response is good or not. In most of the scenarios you can't do that because obviously checking requires extra time and you don't want to add latency. But there are some cases, let's say related to safety, right? Like you want to check whether the user is trying to jailbreak your LLMs or not. So in that case, what you can do is you can do this evaluation in parallel to the generation because based on just the user query, you can check whether the intent is to jailbreak or it's an intent to actually use your product to kind of utilize it for the particular model purpose. Sourabh Agrawal: But most of the other evaluations like relevance, hallucinations, quality and so on, it has to be done. Post whatever you show to the users and then there you can do it in two ways. You can either experiment with use them to experiment with things, or you can run monitoring on your production and find out failure cases. And typically we are seeing like developers are adopting a combination of these two to find cases and then experiment and then improve their systems. Demetrios: Okay, so when you're doing it in parallel, that feels like something that is just asking you craft a prompt and as soon as. So you're basically sending out two prompts. Another piece that I have been thinking about is, doesn't this just add a bunch more cost to your system? Because there you're effectively doubling your cost. But then later on I can imagine you can craft a few different ways of making the evaluations and sending out the responses to the LLM better, I guess. And you can figure out how to trim some tokens off, or you can try and concatenate some of the responses and do tricks there. I'm sure there's all kinds of tricks that you know about that I don't, and I'd love to tell you to tell me about them, but definitely what kind of cost are we looking at? How much of an increase can we expect? Sourabh Agrawal: Yeah, so I think that's like a very valid limitation of evaluation. So that's why, let's say at uptrend, what we truly believe in is that you don't want to use an expensive model like GPT four for evaluation, because then the cost adds up and it does not work out. Right. If you are spending more on evaluating the responses, you may as well just do something else, like have a human to generate the responses. We rely on smaller models, on cheaper models for this. And secondly, the methodology which we adopt is that you don't want to evaluate everything on all the data points. Like maybe you have a higher level check, let's say, for jailbreak or let's say for the final response quality. And when you find cases where the quality is low, you run a battery of checks on these failures to figure out which part of the pipeline is exactly failing. Sourabh Agrawal: This is something what we call as like root cause analysis, where you take all these failure cases, which may be like 10% or 20% of the cases out of all what you are seeing in production. Take these 20% cases, run like a battery of checks on them. They might be exhaustive. You might run like five to ten checks on them. And then based on those checks, you can figure out that, what is the error mode? Is it a retrieval problem? Is it a citation problem? Is it a utilization problem? Is it hallucination? Is the query like the question asked by the user? Is it not clear enough? Is it like your embedding model is not appropriate? So that's how you can kind of take best of the two. Like, you can also improve the performance at the same time, make sure that you don't burn a hole in your pocket. Demetrios: I've also heard this before, and it's almost like you're using the LLMs as tests and they're helping you write. It's not that they're helping you write tests, it's that they are there and they're part of the tests that you're writing. Sourabh Agrawal: Yeah, I think the key here is that you have to use them objectively. What I have seen is a lot of people who are trying to do LLM evaluations, what they do is they ask the LLM that, okay, this is my response. Can you tell is it relevant or not? Or even, let's say, they go a step beyond and do like a grading thing, that is it highly relevant, somewhat relevant, highly irrelevant. But then it becomes very subjective, right? It depends upon the LLM to decide whether it's relevant or not. Rather than that you have to transform into an objective setting. You have to break down the response into individual facts and just see whether each fact is relevant for the question or not. And then take some sort of a ratio to get the final score. So that way all the biases which comes up into the picture, like egocentric bias, where LLM prefers its own outputs, those biases can be mitigated to a large extent. Sourabh Agrawal: And I believe that's the key for making LLM evaluations work, because similar to LLM applications, even LLM evaluations, you have to put in a lot of efforts to make them really work and finally get some scores which align well with human expectations. Demetrios: It's funny how these LLMs mimic humans so much. They love the sound of their own voice, even. It's hilarious. Yeah, dude. Well, talk to me a bit more about how this looks in practice, because there's a lot of different techniques that you can do. Also, I do realize that when it comes to the use cases, it's very different, right. So if it's code generation use case, and you're evaluating that, it's going to be pretty clear, did the code run or did it not? And then you can go into some details on is this code actually more valuable? Is it a hacked way to do it? Et cetera, et cetera. But there's use cases that I would consider more sensitive and less sensitive. Demetrios: And so how do you look at that type of thing? Sourabh Agrawal: Yeah, I think so. The way even we think about evaluations is there's no one size fit all solution for different use cases. You need to look at different things. And even if you, let's say, looking at hallucinations, different use cases, or different businesses would look at evaluations from different lenses. Right. For someone, whatever, if they are focusing a lot on certain aspects of the correctness, someone else would focus less on those aspects and more on other aspects. The way we think about it is, know, we define different criteria for different use cases. So if you have A-Q-A bot, right? So you look at the quality of the response, the quality of the context. Sourabh Agrawal: If you have a conversational agent, then you look at the quality of the conversation as a whole. You look at whether the user is satisfied with that conversation. If you are writing long form content. Like, you look at coherence across the content, you look at the creativity or the sort of the interestingness of the content. If you have an AI agent, you look at how well they are able to plan, how well they were able to execute a particular task, and so on. How many steps do they take to achieve their objective? So there are a variety of these evaluation matrices, which are each one of which is more suitable for different use cases. And even there, I believe a good tool needs to provide certain customization abilities to their developers so that they can transform it, they can modify it in a way that it makes most sense for their business. Demetrios: Yeah. Is there certain ones that you feel like are more prevalent and that if I'm just thinking about this, I'm developing on the side and I'm thinking about this right now and I'm like, well, how could I start? What would you recommend? Sourabh Agrawal: Yeah, definitely. One of the biggest use case for LLMs today is rag. Applications for Rag. I think retrieval is the key. So I think the best starting points in terms of evaluations is like look at the response quality, so look at the relevance of the response, look at the completeness of the response, look at the context quality. So like context relevance, which judges the retrieval quality. Hallucinations, which judges whether the response is grounded by the context or not. If tone matters for your use case, look at the tonality and finally look at the conversation satisfaction, because at the end, whatever outputs you give, you also need to judge whether the end user is satisfied with these outputs. Sourabh Agrawal: So I would say these four or five matrices are the best way for any developer to start who is building on top of these LLMs. And from there you can understand how the behavior is going, and then you can go more deeper, look at more nuanced metrics, which can help you understand your systems even better. Demetrios: Yeah, I like that. Now, one thing that has also been coming up in my head a lot are like the custom metrics and custom evaluation and also proprietary data set, like evaluation data sets, because as we all know, the benchmarks get gamed. And you see on Twitter, oh wow, this new model just came out. It's so good. And then you try it and you're like, what are you talking about? This thing just was trained on the benchmarks. And so it seems like it's good, but it's not. And can you talk to us about creating these evaluation data sets? What have you seen as far as the best ways of going about it? What kind of size? Like how many do we need to actually make it valuable. And what is that? Give us a breakdown there? Sourabh Agrawal: Yeah, definitely. So, I mean, surprisingly, the answer is that you don't need that many to get started. We have seen cases where even if someone builds a test data sets of like 50 to 100 samples, that's actually like a very good starting point than where they were in terms of manual annotation and in terms of creation of this data set, I believe that the best data set is what actually your users are asking. You can look at public benchmarks, you can generate some synthetic data, but none of them matches the quality of what actually your end users are looking, because those are going to give you issues which you can never anticipate. Right. Even you're generating and synthetic data, you have to anticipate what issues can come up and generate data. Beyond that, if you're looking at public data sets, they're highly curated. There is always problems of them leaking into the training data and so on. Sourabh Agrawal: So those benchmarks becomes highly reliable. So look at your traffic, take 50 samples from them. If you are collecting user feedback. So the cases where the user has downvoted or the user has not accepted the response, I mean, they are very good cases to look at. Or if you're running some evaluations, quality checks on that cases which are failing, I think they are the best starting point for you to have a good quality test data sets and use that as a way to experiment with your prompts, experiment with your systems, experiment with your retrievals, and iteratively improve them. Demetrios: Are you weighing any metrics more than others? Because I've heard stories about how sometimes you'll see that a new model will come out, or you're testing out a new model, and it seems like on certain metrics, it's gone down. But then the golden metric that you have, it actually has gone up. And so have you seen which metrics are better for different use cases? Sourabh Agrawal: I think for here, there's no single answer. I think that metric depends upon the business. Generally speaking, what we have been seeing is that the better context you retrieve, the better your model becomes. Especially like if you're using any of the bigger models, like any of the GPT or claudes, or to some extent even mistral, is highly performant. So if you're using any of these highly performant models, then if you give them the right context, the response more or less, it comes out to be good. So I think one thing which we are seeing people focusing a lot on, experimenting with different retrieval mechanisms, embedding models, and so on. But then again, the final golden key, I think many people we have seen, they annotate some data set so they have like a ground root response or a golden response, and they completely rely on just like how well their answer matches with that golden response, which I believe it's a very good starting point because now you know that, okay, if this is right and you're matching very highly with that, then obviously your response is also right. Demetrios: And what about those use cases where golden responses are very subjective? Sourabh Agrawal: Yeah, I think that's where the issues like. So I think in those scenarios, what we have seen is that one thing which people have been doing a lot is they try to see whether all information in the golden response is contained in the generated response. You don't miss out any of the important information in your ground truth response. And on top of that you want it to be concise, so you don't want it to be blabbering too much or giving highly verbose responses. So that is one way we are seeing where people are getting around this subjectivity issue of the responses by making sure that the key information is there. And then beyond that it's being highly concise and it's being to the point in terms of the task being asked. Demetrios: And so you kind of touched on this earlier, but can you say it again? Because I don't know if I fully grasped it. Where are all the places in the system that you are evaluating? Because it's not just the output. Right. And how do you look at evaluation as a system rather than just evaluating the output every once in a while? Sourabh Agrawal: Yeah, so I mean, what we do is we plug with every part. So even if you start with retrieval, so we have a high level check where we look at the quality of retrieved context. And then we also have evaluations for every part of this retrieval pipeline. So if you're doing query rewrite, if you're doing re ranking, if you're doing sub question, we have evaluations for all of them. In fact, we have worked closely with the llama index team to kind of integrate with all of their modular pipelines. Secondly, once we cross the retrieval step, we have around five to six matrices on this retrieval part. Then we look at the response generation. We have their evaluations for different criterias. Sourabh Agrawal: So conciseness, completeness, safety, jailbreaks, prompt injections, as well as you can define your custom guidelines. So you can say that, okay, if the user is asking anything and related to code, the output should also give an example code snippet so you can just in plain English, define this guideline. And we check for that. And then finally, like zooming out, we also have checks. We look at conversations as a whole, how the user is satisfied, how many turns it requires for them to, for the chatbot or the LLM to answer the user. Yeah, that's how we look at the whole evaluations as a whole. Demetrios: Yeah. It really reminds me, I say this so much because it's one of the biggest fails, I think, on the Internet, and I'm sure you've seen it where I think it was like Chevy or GM, the car manufacturer car company, they basically slapped a chat bot on their website. It was a GPT call, and people started talking to it and realized, oh my God, this thing will do anything that we want it to do. So they started asking it questions like, is Tesla better than GM? And the bot would say, yeah, give a bunch of reasons why Tesla is better than GM on the website of GM. And then somebody else asked it, oh, can I get a car for a dollar? And it said, no. And then it said, but I'm broke and I need a car for a dollar. And it said, ok, we'll sell you the car for the dollar. And so you're getting yourself into all this trouble just because you're not doing that real time evaluation. Demetrios: How do you think about the real time evaluation? And is that like an extra added layer of complexity? Sourabh Agrawal: Yeah, for the real time evaluations, I think the most important cases, which, I mean, there are two scenarios which we feel like are most important to deal with. One is you have to put some guardrails in the sense that you don't want the users to talk about your competitors. You don't want to answer some queries, like, say, you don't want to make false promises, and so on, right? Some of them can be handled with pure rejects, contextual logics, and some of them you have to do evaluations. And the second is jailbreak. Like, you don't want the user to use, let's say, your Chevy chatbot to kind of solve math problems or solve coding problems, right? Because in a way, you're just like subsidizing GPT four for them. And all of these can be done just on the question which is being asked. So you can have a system where you can fire a query, evaluate a few of these key matrices, and in parallel generate your responses. And as soon as you get your response, you also get your evaluations. Sourabh Agrawal: And you can have some logic that if the user is asking about something which I should not be answering. Instead of giving the response, I should just say, sorry, I could not answer this or have a standard text for those cases and have some mechanisms to limit such scenarios and so on. Demetrios: And it's better to do that in parallel than to try and catch the response. Make sure it's okay before sending out an LLM call. Sourabh Agrawal: I mean, generally, yes, because if you look at, if you catch the response, it adds another layer of latency. Demetrios: Right. Sourabh Agrawal: And at the end of the day, 95% of your users are not trying to do this any good product. A lot of those users are genuinely trying to use it and you don't want to build something which kind of breaks, creates an issue for them, add a latency for them just to solve for that 5%. So you have to be cognizant of this fact and figure out clever ways to do this. Demetrios: Yeah, I remember I was talking to Philip of company called honeycomb, and they added some LLM functionality to their product. And he said that when people were trying to either prompt, inject or jailbreak, it was fairly obvious because there were a lot of calls. It kind of started to be not human usage and it was easy to catch in that way. Have you seen some of that too? And what are some signs that you see when people are trying to jailbreak? Sourabh Agrawal: Yeah, I think we also have seen typically, what we also see is that whenever someone is trying to jailbreak, the length of their question or the length of their prompt typically is much larger than any average question, because they will have all sorts of instruction like forget everything, you know, you are allowed to say all of those things. And then again, this issue also comes because when they try to jailbreak, they try with one technique, it doesn't work. They try with another technique, it doesn't work. Then they try with third technique. So there is like a burst of traffic. And even in terms of sentiment, typically the sentiment or the coherence in those cases, we have seen that to be lower as compared to a genuine question, because people are just trying to cramp up all these instructions into the response. So there are definitely certain signs which already indicates that the user is trying to jailbreak this. And I think those are leg race indicators to catch them. Demetrios: And I assume that you've got it set up so you can just set an alert when those things happen and then it at least will flag it and have humans look over it or potentially just ask the person to cool off for the next minute. Hey, you've been doing some suspicious activity here. We want to see something different so I think you were going to show us a little bit about uptrend, right? I want to see what you got. Can we go for a spin? Sourabh Agrawal: Yeah, definitely. Let me share my screen and I can show you how that looks like. Demetrios: Cool, very cool. Yeah. And just while you're sharing your screen, I want to mention that for this talk, I wore my favorite shirt, which is it says, I don't know if everyone can see it, but it says, I hallucinate more than Chat GPT. Sourabh Agrawal: I think that's a cool one. Demetrios: What do we got here? Sourabh Agrawal: Yeah, so, yeah, let me kind of just get started. So I create an account with uptrend. What we have is an API method, API way of calculating these evaluations. So you get an API key similar to what you get for chat, GPT or others, and then you can just do uptrend log and evaluate and you can tell give your data. So you can give whatever your question responses context, and you can define your checks which you want to evaluate for. So if I create an API key, I can just copy this code and I just already have it here. So I'll just show you. So we have two mechanisms. Sourabh Agrawal: One is that you can just run evaluations so you can define like, okay, I want to run context relevance, I want to run response completeness. Similarly, I want to run jailbreak. I want to run for safety. I want to run for satisfaction of the users and so on. And then when you run it, it gives back you a score and it gives back you an explanation on why this particular score has been given for this particular question. Demetrios: Can you make that a little bit bigger? Yeah, just give us some plus. Yeah, there we. Sourabh Agrawal: It'S, it's essentially an API call which takes the data, takes the list of checks which you want to run, and then it gives back and score and an explanation for that. So based on that score, you can have logics, right? If the jailbreak score is like more than 0.5, then you don't want to show it. Like you want to switch back to a default response and so on. And then you can also configure that we log all of these course, and we have dashboard where you can access them. Demetrios: I was just going to ask if you have dashboards. Everybody loves a good dashboard. Let's see it. That's awesome. Sourabh Agrawal: So let's see. Okay, let's take this one. So in this case, I just ran some of this context relevance checks for some of the queries. So you can see how that changes on your data sets. If you're running the same. We also run this in a monitoring setting, so you can see how this varies over time. And then finally you have all of the data. So we provide all of the data, you can download it, run whatever analysis you want to run, and then you can also, one of the features which we have built recently and is getting very popular amongst our users is that you can filter cases where, let's say, the model is failing. Sourabh Agrawal: So let's say I take all the cases where the responses is zero and I can find common topics. So I can look at all these cases and I can find, okay, what's the common theme across them? Maybe, as you can see, they're all talking about France, Romeo Juliet and so on. So it can just pull out a common topic among these cases. So then this gives you some insights into where things are going wrong and what do you need to improve upon. And the second piece of the puzzle is the experiments. So, not just you can evaluate them, but also you can use it to experiment with different settings. So let's say. Let me just pull out an experiment I ran recently. Demetrios: Yeah. Sourabh Agrawal: So let's say I want to compare two different models, right? So GPT 3.5 and clot two. So I can now see that, okay, clot two is giving more concise responses, but in terms of factual accuracy, like GPT 3.5 is more factually accurate. So I can now decide, based on my application, based on what my users want, I can now decide which of these criteria is more meaningful for me, it's more meaningful for my users, for my data, and decide which prompt or which model I want to go ahead with. Demetrios: This is totally what I was talking about earlier, where you get a new model and you're seeing on some metrics, it's doing worse. But then on your core metric that you're looking at, it's actually performing better. So you have to kind of explain to yourself, why is it doing better on those other metrics? I don't know if I'm understanding this correctly. We can set the metrics that we're looking at. Sourabh Agrawal: Yeah, actually, I'll show you the kind of metric. Also, I forgot to mention earlier, uptrend is like open source. Demetrios: Nice. Sourabh Agrawal: Yeah. So we have these pre configured checks, so you don't need to do anything. You can just say uptrend response completeness or uptrend prompt injection. So these are like, pre configured. So we did the hard work of getting all these scores and so on. And on top of that, we also have ways for you to customize these matrices so you can define a custom guideline. You can change the prompt which you want. You can even define a custom python function which you want to act as an evaluator. Sourabh Agrawal: So we provide all of those functionalities so that they can also take advantage of things which are already there, as well as they can create custom things which make sense for them and have a way to kind of truly understand how their systems are doing. Demetrios: Oh, that's really cool. I really like the idea of custom, being able to set custom ones, but then also having some that just come right out of the box to make life easier on us. Sourabh Agrawal: Yeah. And I think both are needed because you want someplace to start, and as you advance, you also want to kind of like, you can't cover everything right, with pre configured. So you want to have a way to customize things. Demetrios: Yeah. And especially once you have data flowing, you'll start to see what other things you need to be evaluating exactly. Sourabh Agrawal: Yeah, that's very true. Demetrios: Just the random one. I'm not telling you how to build your product or anything, but have you thought about having a community sourced metric? So, like, all these custom ones that people are making, maybe there's a hub where we can add our custom? Sourabh Agrawal: Yeah, I think that's really interesting. This is something we also have been thinking a lot. It's not built out yet, but we plan to kind of go in that direction pretty soon. We want to kind of create, like a store kind of a thing where people can add their custom matrices. So. Yeah, you're right on. I think I also believe that's the way to go, and we will be releasing something on those fronts pretty soon. Demetrios: Nice. So drew's asking, how do you handle jailbreak for different types of applications? Jailbreak for a medical app would be different than one for a finance one, right? Yeah. Sourabh Agrawal: The way our jailbreak check is configured. So it takes something, what you call as a model purpose. So you define what is the purpose of your model? For a financial app, you need to say that, okay, this LLM application is designed to answer financial queries so and so on. From medical. You will have a different purpose, so you can configure what is the purpose of your app. And then when we take up a user query, we check whether the user query is under. Firstly, we check also for illegals activities and so on. And then we also check whether it's under the preview of this purpose. Sourabh Agrawal: If not, then we tag that as a scenario of jailbreak because the user is trying to do something other than the purpose so that's how we tackle it. Demetrios: Nice, dude. Well, this is awesome. Is there anything else you want to say before we jump off? Sourabh Agrawal: No, I mean, it was like, a great conversation. Really glad to be here and great talking to you. Demetrios: Yeah, I'm very happy that we got this working and you were able to show us a little bit of uptrend. Super cool that it's open source. So I would recommend everybody go check it out, get your LLMs working with confidence, and make sure that nobody is using your chatbot to be their GPT subsidy, like GM use case and. Yeah, it's great, dude. I appreciate. Sourabh Agrawal: Yeah, check us out like we are@GitHub.com. Slash uptrendai slashuptrend. Demetrios: There we go. And if anybody else wants to come on to the vector space talks and talk to us about all the cool stuff that you're doing, hit us up and we'll see you all astronauts later. Don't get lost in vector space. Sourabh Agrawal: Yeah, thank you. Thanks a lot. Demetrios: All right, dude. There we go. We are good. I don't know how the hell I'm going to stop this one because I can't go through on my phone or I can't go through on my computer. It's so weird. So I'm not, like, technically there's nobody at the wheel right now. So I think if we both get off, it should stop working. Okay. Demetrios: Yeah, but that was awesome, man. This is super cool. I really like what you're doing, and it's so funny. I don't know if we're not connected on LinkedIn, are we? I literally just today posted a video of me going through a few different hallucination mitigation techniques. So it's, like, super timely that you talk about this. I think so many people have been thinking about this. Sourabh Agrawal: Definitely with enterprises, it's like a big issue. Right? I mean, how do you make it safe? How do you make it production ready? So I'll definitely check out your video. Also would be super interesting. Demetrios: Just go to my LinkedIn right now. It's just like LinkedIn.com dpbrinkm or just search for me. I think we are connected. We're connected. All right, cool. Yeah, so, yeah, check out the last video I just posted, because it's literally all about this. And there's a really cool paper that came out and you probably saw it. It's all like, mitigating AI hallucinations, and it breaks down all 32 techniques. Demetrios: And I was talking with on another podcast that I do, I was literally talking with the guys from weights and biases yesterday, and I was talking about how I was like, man, these evaluation data sets as a service feels like something that nobody's doing. And I guess it's probably because, and you're the expert, so I would love to hear what you have to say about it, but I guess it's because you don't really need it that bad. With a relatively small amount of data, you can start getting some really good evaluation happening. So it's a lot better than paying somebody else. Sourabh Agrawal: And also, I think it doesn't make sense also for a service because some external person is not best suited to make a data set for your use case. Demetrios: Right. Sourabh Agrawal: It's you. You have to look at what your users are asking to create a good data set. You can have a method, which is what optrain also does. We basically help you to sample and pick out the right cases from this data set based on the feedback of your users, based on the scores which are being generated. But it's difficult for someone external to craft really good questions or really good queries or really good cases which make sense for your business. Demetrios: Because the other piece that kind of, like, spitballed off of that, the other piece of it was techniques. So let me see if I can place all this words into a coherent sentence for you. It's basically like, okay, evaluation data sets don't really make sense because you're the one who knows the most. With a relatively small amount of data, you're going to be able to get stuff going real quick. What I thought about is, what about these hallucination mitigation techniques so that you can almost have options. So in this paper, right, there's like 32 different kinds of techniques that they use, and some are very pertinent for rags. They have like, five different or four different types of techniques. When you're dealing with rags to mitigate hallucinations, then they have some like, okay, if you're distilling a model, here is how you can make sure that the new distilled model doesn't hallucinate as much. Demetrios: Blah, blah, blah. But what I was thinking is like, what about how can you get a product? Or can you productize these kind of techniques? So, all right, cool. They're in this paper, but in uptrain, can we just say, oh, you want to try this new mitigation technique? We make that really easy for you. You just have to select it as one of the hallucination mitigation techniques. And then we do the heavy lifting of, if it's like, there's one. Have you heard of fleek? That was one that I was talking about in the video. Fleek is like where there's a knowledge graph, LLM that is created, and it is specifically created to try and combat hallucinations. And the way that they do it is they say that LLM will try and identify anywhere in the prompt or the output. Demetrios: Sorry, the output. It will try and identify if there's anything that can be fact checked. And so if it says that humans landed on the moon in 1969, it will identify that. And then either through its knowledge graph or through just forming a search query that will go out and then search the Internet, it will verify if that fact is true in the output. So that's like one technique, right? And so what I'm thinking about is like, oh, man, wouldn't it be cool if you could have all these different techniques to be able to use really easily as opposed to, great, I read it in a paper. Now, how the fuck am I going to get my hands on one of these LLMs with a knowledge graph if I don't train it myself? Sourabh Agrawal: Shit, yeah, I think that's a great suggestion. I'll definitely check it out. One of the things which we also want to do is integrate with all these techniques because these are really good techniques and they help solve a lot of problems, but using them is not simple. Recently we integrated with Spade. It's basically like a technique where I. Demetrios: Did another video on spade, actually. Sourabh Agrawal: Yeah, basically. I think I'll also check out these hallucinations. So right now what we do is based on this paper called fact score, which instead of checking on the Internet, it checks in the context only to verify this fact can be verified from the context or not. But I think it would be really cool if people can just play around with these techniques and just see whether it's actually working on their data or not. Demetrios: That's kind of what I was thinking is like, oh, can you see? Does it give you a better result? And then the other piece is like, oh, wait a minute, does this actually, can I put like two or three of them in my system at the same time? Right. And maybe it's over engineering or maybe it's not. I don't know. So there's a lot of fun stuff that can go down there and it's fascinating to think about. Sourabh Agrawal: Yeah, definitely. And I think experimentation is the key here, right? I mean, unless you try out them, you don't know what works. And if something works which improves your system, then definitely it was worth it. Demetrios: Thanks for that. Sourabh Agrawal: We'll check into it. Demetrios: Dude, awesome. It's great chatting with you, bro. And I'll talk to you later, bro. Sourabh Agrawal: Yeah, thanks a lot. Great speaking. See you. Bye. ",blog/vector-search-for-content-based-video-recommendation-gladys-and-sam-vector-space-talk-012.md "--- draft: false title: Iveta Lohovska on Gen AI and Vector Search | Qdrant slug: gen-ai-and-vector-search short_description: Iveta talks about the importance of trustworthy AI, particularly when implementing it within high-stakes enterprises like governments and security agencies description: Discover valuable insights on generative AI, vector search, and ethical AI implementation from Iveta Lohovska, Chief Technologist at HPE. preview_image: /blog/from_cms/iveta-lohovska-bp-cropped.png date: 2024-04-11T22:12:00.000Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - Vector Search - Retrieval Augmented Generation - GenAI --- # Exploring Gen AI and Vector Search: Insights from Iveta Lohovska > *""In the generative AI context of AI, all foundational models have been trained on some foundational data sets that are distributed in different ways. Some are very conversational, some are very technical, some are on, let's say very strict taxonomy like healthcare or chemical structures. We call them modalities, and they have different representations.”*\ — Iveta Lohovska > Iveta Lohovska serves as the Chief Technologist and Principal Data Scientist for AI and Supercomputing at [Hewlett Packard Enterprise (HPE)](https://www.hpe.com/us/en/home.html), where she champions the democratization of decision intelligence and the development of ethical AI solutions. An industry leader, her multifaceted expertise encompasses natural language processing, computer vision, and data mining. Committed to leveraging technology for societal benefit, Iveta is a distinguished technical advisor to the United Nations' AI for Good program and a Data Science lecturer at the Vienna University of Applied Sciences. Her career also includes impactful roles with the World Bank Group, focusing on open data initiatives and Sustainable Development Goals (SDGs), as well as collaborations with USAID and the Gates Foundation. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/7f1RDwp5l2Ps9N7gKubl8S?si=kCSX4HGCR12-5emokZbRfw), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/RsRAUO-fNaA).*** ## **Top takeaways:** In our continuous pursuit of knowledge and understanding, especially in the evolving landscape of AI and the vector space, we brought another great Vector Space Talk episode featuring Iveta Lohovska as she talks about generative AI and [vector search](https://qdrant.tech/). Iveta brings valuable insights from her work with the World Bank and as Chief Technologist at HPE, explaining the ins and outs of ethical AI implementation. Here are the episode highlights: - Exploring the critical role of trustworthiness and explainability in AI, especially within high confidentiality use cases like government and security agencies. - Discussing the importance of transparency in AI models and how it impacts the handling of data and understanding the foundational datasets for vector search. - Iveta shares her experiences implementing generative AI in high-stakes environments, including the energy sector and policy-making, emphasizing accuracy and source credibility. - Strategies for managing data privacy in high-stakes sectors, the superiority of on-premises solutions for control, and the implications of opting for cloud or hybrid infrastructure. - Iveta's take on the maturity levels of generative AI, the ongoing development of smaller, more focused models, and the evolving landscape of AI model licensing and open-source contributions. > Fun Fact: The climate agent solution showcased by Iveta helps individuals benchmark their carbon footprint and assists policymakers in drafting policy recommendations based on scientifically accurate data. > ## Show notes: 00:00 AI's vulnerabilities and ethical implications in practice.\ 06:28 Trust reliable sources for accurate climate data.\ 09:14 Vector database offers control and explainability.\ 13:21 On-prem vital for security and control.\ 16:47 Gen AI chat models at basic maturity.\ 19:28 Mature technical community, but slow enterprise adoption.\ 23:34 Advocates for open source but highlights complexities.\ 25:38 Unreliable information, triangle of necessities, vector space. ## More Quotes from Iveta: *""What we have to ensure here is that every citation and every answer and augmentation by the generative AI on top of that is linked to the exact source of paper or publication, where it's coming from, to ensure that we can trace it back to where the climate information is coming from.”*\ — Iveta Lohovska *""Explainability means if you receive a certain answer based on your prompt, you can trace it back to the exact source where the embedding has been stored or the source of where the information is coming from and things.”*\ — Iveta Lohovska *""Chat GPT for conversational purposes and individual help is something very cool but when this needs to be translated into actual business use cases scenario with all the constraint of the enterprise architecture, with the constraint of the use cases, the reality changes quite dramatically.”*\ — Iveta Lohovska ## Transcript: Demetrios: Look at that. We are back for another vector space talks. I'm very excited to be doing this today with you all. I am joined by none other than Sabrina again. Where are you at, Sabrina? How's it going? Sabrina Aquino: Hey there, Demetrios. Amazing. Another episode and I'm super excited for this one. How are you doing? Demetrios: I'm great. And we're going to bring out our guest of honor today. We are going to be talking a lot about trustworthy AI because Iveta has a background working with the World bank and focusing on the open data with that. But currently she is chief technologist and principal data scientist at HPE. And we were talking before we hit record before we went live. And we've got some hot takes that are coming up. So I'm going to bring Iveta to the stage. Where are you? There you are, our guest of honor. Demetrios: How you doing? Iveta Lohovska: Good. I hope you can hear me well. Demetrios: Loud and clear. Yes. Iveta Lohovska: Happy to join here from Vienna and thank you for the invite. Demetrios: Yes. So I'm very excited to talk with you today. I think it's probably worth getting the TLDR on your story and why you're so passionate about trustworthiness and explainability. Iveta Lohovska: Well, I think especially in the genaid context where if there any vulnerabilities around the solution or the training data set or any underlying context, either in the enterprise or in a smaller scale, it's just the scale that AI engine AI can achieve if it has any vulnerabilities or any weaknesses when it comes to explainability or trustworthiness or bias, it just goes explain nature. So it is to be considered and taken with high attention when it comes to those use cases. And most of my work is within an enterprise with high confidentiality use cases. So it plays a big role more than actually people will think it's on a high level. It just sounds like AI ethical principles or high level words that are very difficult to implement in technical terms. But in reality, when you hit the ground, when you hit the projects, when you work with in the context of, let's say, governments or organizations that deal with atomic energy, I see it in Vienna, the atomic agency is a neighboring one, or security agencies. Then you see the importance and the impact of those terms and the technical implications behind that. Sabrina Aquino: That's amazing. And can you talk a little bit more about the importance of the transparency of these models and what can happen if we don't know exactly what kind of data they are being trained on? Iveta Lohovska: I mean, this is especially relevant under our context of [vector databases](https://qdrant.tech/articles/what-is-a-vector-database/) and vector search. Because in the generative AI context of AI, all foundational models have been trained on some foundational data sets that are distributed in different ways. Some are very conversational, some are very technical, some are on, let's say very strict taxonomy like healthcare or chemical structures. We call them modalities, and they have different representations. So, so when it comes to implementing vector search or [vector database](https://qdrant.tech/articles/what-is-a-vector-database/) and knowing the distribution of the foundational data sets, you have better control if you introduce additional layers or additional components to have the control in your hands of where the information is coming from, where it's stored, [what are the embeddings](https://qdrant.tech/articles/what-are-embeddings/). So that helps, but it is actually quite important that you know what the foundational data sets are, so that you can predict any kind of weaknesses or vulnerabilities or penetrations that the solution or the use case of the model will face when it lands at the end user. Because we know with generative AI that is unpredictable, we know we can implement guardrails. They're already solutions. Iveta Lohovska: We know they're not 100, they don't give you 100% certainty, but they are definitely use cases and work where you need to hit the hundred percent certainty, especially intelligence, cybersecurity and healthcare. Demetrios: Yeah, that's something that I wanted to dig into a little bit. More of these high stakes use cases feel like you can't. I don't know. I talk with a lot of people about at this current time, it's very risky to try and use specifically generative AI for those high stakes use cases. Have you seen people that are doing it well, and if so, how? Iveta Lohovska: Yeah, I'm in the business of high stakes use cases and yes, we do those kind of projects and work, which is very exciting and interesting, and you can see the impact. So I'm in the generative AI implementation into enterprise control. An enterprise context could mean critical infrastructure, could mean telco, could mean a government, could mean intelligence organizations. So those are just a few examples, but I could flip the coin and give you an alternative for a public one where I can share, let's say a good example is climate data. And we recently worked on, on building a knowledge worker, a climate agent that is trained, of course, his foundational knowledge, because all foundational models have prior knowledge they can refer to. But the key point here is to be an expert on climate data emissions gap country cards. Every country has a commitment to meet certain reduction emission reduction goals and then benchmarked and followed through the international supervisions of the world, like the United nations environmental program and similar entities. So when you're training this agent on climate data, they're competing ideas or several sources. Iveta Lohovska: You can source your information from the local government that is incentivized to show progress to the nation and other stakeholders faster than the actual reality, the independent entities that provide information around the state of the world when it comes to progress towards certain climate goals. And there are also different parties. So for this kind of solution, we were very lucky to work with kind of the status co provider, the benchmark around climate data, around climate publications. And what we have to ensure here is that every citation and every answer and augmentation by the generative AI on top of that is linked to the exact source of paper or publication, where it's coming from, to ensure that we can trace it back to where the climate information is coming from. If Germany performs better compared to Austria, and also the partner we work with was the United nations environmental program. So they want to make sure that they're the citadel scientific arm when it comes to giving information. And there's no compromise, could be a compromise on the structure of the answer, on the breadth and death of the information, but there should be no compromise on the exact fact fullness of the information and where it's coming from. And this is a concrete example because why, you oughta ask, why is this so important? Because it has two interfaces. Iveta Lohovska: It has the public. You can go and benchmark your carbon footprint as an individual living in one country comparing to an individual living in another. But if you are a policymaker, which is the other interface of this application, who will write the policy recommendation of a country in their own country, or a country they're advising on, you might want to make sure that the scientific citations and the policy recommendations that you're making are correct and they are retrieved from the proper data sources. Because there will be a huge implication when you go public with those numbers or when you actually design a law that is reinforceable with legal terms and law enforcement. Sabrina Aquino: That's very interesting, Iveta, and I think this is one of the great use cases for [RAG](https://qdrant.tech/articles/what-is-rag-in-ai/), for example. And I think if you can talk a little bit more about how vector search is playing into all of this, how it's helping organizations do this, this. Iveta Lohovska: Would be amazing in such specific use cases. I think the main differentiator is the traceability component, the first that you have full control on which data it will refer to, because if you deal with open source models, most of them are open, but the data it has been trained on has not been opened or given public so with vector database you introduce a step of control and explainability. Explainability means if you receive a certain answer based on your prompt, you can trace it back to the exact source where the embedding has been stored or the source of where the information is coming from and things. So this is a major use case for us for those kind of high stake solution is that you have the explainability and traceability. Explainability. It could be as simple as a semantical similarity to the text, but also the traceability of where it's coming from and the exact link of where it's coming from. So it should be, it shouldn't be referred. You can close and you can cut the line of the model referring to its previous knowledge by introducing a [vector database](https://qdrant.tech/articles/what-is-a-vector-database/), for example. Iveta Lohovska: So there could be many other implications and improvements in terms of speed and just handling huge amounts of data, yet also nice to have that come with this kind of technique, but the prior use case is actually not incentivized around those. Demetrios: So if I'm hearing you correctly, it's like yet another reason why you should be thinking about using vector databases, because you need that ability to cite your work and it's becoming a very strong design pattern. Right. We all understand now, if you can't see where this data has been pulled from or you can't get, you can't trace back to the actual source, it's hard to trust what the output is. Iveta Lohovska: Yes, and the easiest way to kind of cluster the two groups. If you think of creative fields and marketing fields and design fields where you could go wild and crazy with the temperature on each model, how creative it could go and how much novelty it could bring to the answer are one family of use cases. But there is exactly the opposite type of use cases where this is a no go and you don't need any creativity, you just focus on, focus on the factfulness and explainability. So it's more of the speed and the accuracy of retrieving information with a high level of novelty, but not compromising on any kind of facts within the answer, because there will be legal implications and policy implications and societal implications based on the action taken on this answer, either policy recommendation or legal action. There's a lot to do with the intelligence agencies that retrieve information based on nearest neighbor or kind of a relational analysis that you can also execute with vector databases and generative AI. Sabrina Aquino: And we know that for these high stakes sectors that data privacy is a huge concern. And when we're talking about using vector databases and storing that data somewhere, what are some of the principles or techniques that you use in terms of infrastructure, where should you store your vector database and how should you think about that part of your system? Iveta Lohovska: Yeah, so most of the cases, I would say 99% of the cases, is that if you have such a high requirements around security and explainability, security of the data, but those security of the whole use case and environment, and the explainability and trustworthiness of the answer, then it's very natural to have expectations that will be on prem and not in the cloud, because only on prem you have a full control of where your data sits, where your model sits, the full ownership of your IP, and then the full ownership of having less question marks of the implementation and architecture, but mainly the full ownership of the end to end solution. So when it comes to those use cases, RAG on Prem, with the whole infrastructure, with the whole software and platform layers, including models on Prem, not accessible through an API, through a service somewhere where you don't know where the guardrails is, who designed the guardrails, what are the guardrails? And we see those, this a lot with, for example, copilot, a lot of question marks around that. So it's a huge part of my work is just talking of it, just sorting out that. Sabrina Aquino: Exactly. You don't want to just give away your data to a cloud provider, because there's many implications that that comes with. And I think even your clients, they need certain certifications, then they need to make sure that nobody can access that data, something that you cannot. Exactly. I think ensure if you're just using a cloud provider somewhere, which is, I think something that's very important when you're thinking about these high stakes solutions. But also I think if you're going to maybe outsource some of the infrastructure, you also need to think about something that's similar to a [hybrid cloud solution](https://qdrant.tech/documentation/hybrid-cloud/) where you can keep your data and outsource the kind of management of infrastructure. So that's also a nice use case for that, right? Iveta Lohovska: I mean, I work for HPE, so hybrid is like one of our biggest sacred words. Yeah, exactly. But actually like if you see the trends and if you see how expensive is to work to run some of those workloads in the cloud, either for training for national model or fine tuning. And no one talks about inference, inference not in ten users, but inference in hundred users with big organizations. This itself is not sustainable. Honestly, when you do the simple Linux, algebra or math of the exponential cost around this. That's why everything is hybrid. And there are use cases that make sense to be fast and speedy and easy to play with, low risk in the cloud to try. Iveta Lohovska: But when it comes to actual GenAI work and LLM models, yeah, the answer is never straightforward when it comes to the infrastructure and the environment where you are hosting it, for many reasons, not just cost, but any other. Demetrios: So there's something that I've been thinking about a lot lately that I would love to get your take on, especially because you deal with this day in and day out, and it is the maturity levels of the current state of Gen AI and where we are at for chat GPT or just llms and foundational models feel like they just came out. And so we're almost in the basic, basic, basic maturity levels. And when you work with customers, how do you like kind of signal that, hey, this is where we are right now, but you should be very conscientious that you're going to need to potentially work with a lot of breaking changes or you're going to have to be constantly updating. And this isn't going to be set it and forget it type of thing. This is going to be a lot of work to make sure that you're staying up to date, even just like trying to stay up to date with the news as we were talking about. So I would love to hear your take on on the different maturity levels that you've been seeing and what that looks like. Iveta Lohovska: So I have huge exposure to GenAI for the enterprise, and there's a huge component expectation management. Why? Because chat GPT for conversational purposes and individual help is something very cool. But when this needs to be translated into actual business use cases scenario with all the constraint of the enterprise architecture, with the constraint of the use cases, the reality changes quite dramatically. So end users who are used to expect level of forgiveness as conversational chatbots have, is very different of what you will get into actual, let's say, knowledge worker type of context, or summarization type of context into the enterprise. And it's not so much to the performance of the models, but we have something called modalities of the models. And I don't think there will be ultimately one model with all the capabilities possible, let's say cult generation or image generation, voice generational, or just being very chatty and loving and so on. There will be multiple mini models out there for those. Modalities in actual architecture with reasonable cost are very difficult to handle. Iveta Lohovska: So I would say the technical community feels we are very mature and very fast. The enterprise adoption is a totally different topic, and it's a couple of years behind, but also the society type of technologists like me, who try to keep up with the development and we know where we stand at this point, but they're the legal side and the regulations coming in, like the EU act and Biden trying to regulate the compute power, but also how societies react to this and how they adapt. And I think especially on the third one, we are far behind understanding and the implications of this technology, also adopting it at scale and understanding the vulnerabilities. That's why I enjoy so much my enterprise work is because it's a reality check. When you put the price tag attached to actual Gen AI use case in production with the inference cost and the expected performance, it's different situation when you just have an app on the phone and you chat with it and it pulls you interesting links. So yes, I think that there's a bridge to be built between the two worlds. Demetrios: Yeah. And I find it really interesting too, because it feels to me like since it is so new, people are more willing to explore and not necessarily have that instant return of the ROI, but when it comes to more traditional ML or predictive ML, it is a bit more mature and so there's less patience for that type of exploration. Or, hey, is this use case? If you can't by now show the ROI of a predictive ML use case, then that's a little bit more dangerous. But if you can't with a Gen AI use case, it is not that big of a deal. Iveta Lohovska: Yeah, it's basically a technology growing up in front of our eyes. It's a kind of a flying a plane while building it type of situation. We are seeing it in the real time, and I agree with you. So that the maturity around ML is one thing, but around generative AI, and they will be a model of kind of mini disappointment or decline, in my opinion, before actually maturing product. This kind of powerful technology in a sustainable way. Sustainable ways mean you can afford it, but also it proves your business case and use case. Otherwise it's just doing for the sake of doing it because everyone else is doing it. Demetrios: Yeah, yeah, 100%. So I know we're bumping up against time here. I do feel like there was a bit of a topic that we wanted to discuss with the licenses and how that plays into basically trustworthiness and explainability. And so we were talking about how, yeah, the best is to run your own model, and it probably isn't going to be this gigantic model that can do everything. It's the, it seems like the trends are going into smaller models. And from your point of view though, we are getting new models like every week. It feels like. Yeah, especially. Demetrios: I mean, we were just talking about this before we went live again, like databricks just released there. What is it? DBRX Yesterday you had Mistral releasing like a new base model over the weekend, and then Llama 3 is probably going to come out in the flash of an eye. So where do you stand in regards to that? It feels like there's a lot of movement in open source, but it is a little bit of, as you mentioned, like, to be cautious with the open source movement. Iveta Lohovska: So I think it feels like there's a lot of open source, but that. So I'm totally for open sourcing and giving the people and the communities the power to be able to innovate, to do R & D in different labs so it's not locked to the view. Elite big tech companies that can afford this kind of technology. So kudos to meta for trying compared to the other equal players in the space. But open source comes with a lot of ecosystem in our world, especially for the more powerful models, which is something I don't like because it becomes like just, it immediately translates into legal fees type of conversation. It's like there are too many if else statements in those open source licensing terms where it becomes difficult to navigate, for technologists to understand what exactly this means, and then you have to bring the legal people to articulate it to you or to put additional clauses. So it's becoming a very complex environment to handle and less and less open, because there are not so many open source and small startup players that can afford to train foundational models that are powerful and useful. So it becomes a bit of a game logged to a view, and I think everyone needs to be a bit worried about that. Iveta Lohovska: So we can use the equivalents from the past, but I don't think we are doing well enough in terms of open sourcing. The three main core components of LLM model, which is the model itself, the data it has been trained on, and the data sets, and most of the times, at least in one of those, is restricted or missing. So it's difficult space to navigate. Demetrios: Yeah, yeah. You can't really call it trustworthy, or you can't really get the information that you need and that you would hope for if you're missing one of those three. I do like that little triangle of the necessities. So, Iveta, this has been awesome. I really appreciate you coming on here. Thank you, Sabrina, for joining us. And for everyone else that is watching, remember, don't get lost in vector space. This has been another vector space talk. Demetrios: We are out. Have a great weekend, everyone. Iveta Lohovska: Thank you. Bye. Thank you. Bye. ",blog/gen-ai-and-vector-search-iveta-lohovska-vector-space-talks.md "--- draft: false title: ""Qdrant and OVHcloud Bring Vector Search to All Enterprises"" short_description: ""Collaborating to support startups and enterprises in Europe with a strong focus on data control and privacy."" description: ""Collaborating to support startups and enterprises in Europe with a strong focus on data control and privacy."" preview_image: /blog/hybrid-cloud-ovhcloud/hybrid-cloud-ovhcloud.png date: 2024-04-10T00:05:00Z author: Qdrant featured: false weight: 1004 tags: - Qdrant - Vector Database --- With the official release of [Qdrant Hybrid Cloud](/hybrid-cloud/), businesses running their data infrastructure on [OVHcloud](https://ovhcloud.com/) are now able to deploy a fully managed vector database in their existing OVHcloud environment. We are excited about this partnership, which has been established through the [OVHcloud Open Trusted Cloud](https://opentrustedcloud.ovhcloud.com/en/) program, as it is based on our shared understanding of the importance of trust, control, and data privacy in the context of the emerging landscape of enterprise-grade AI applications. As part of this collaboration, we are also providing a detailed use case tutorial on building a recommendation system that demonstrates the benefits of running Qdrant Hybrid Cloud on OVHcloud. Deploying Qdrant Hybrid Cloud on OVHcloud's infrastructure represents a significant leap for European businesses invested in AI-driven projects, as this collaboration underscores the commitment to meeting the rigorous requirements for data privacy and control of European startups and enterprises building AI solutions. As businesses are progressing on their AI journey, they require dedicated solutions that allow them to make their data accessible for machine learning and AI projects, without having it leave the company's security perimeter. Prioritizing data sovereignty, a crucial aspect in today's digital landscape, will help startups and enterprises accelerate their AI agendas and build even more differentiating AI-enabled applications. The ability of running Qdrant Hybrid Cloud on OVHcloud not only underscores the commitment to innovative, secure AI solutions but also ensures that companies can navigate the complexities of AI and machine learning workloads with the flexibility and security required. > *“The partnership between OVHcloud and Qdrant Hybrid Cloud highlights, in the European AI landscape, a strong commitment to innovative and secure AI solutions, empowering startups and organisations to navigate AI complexities confidently. By emphasizing data sovereignty and security, we enable businesses to leverage vector databases securely.“* Yaniv Fdida, Chief Product and Technology Officer, OVHcloud #### Qdrant & OVHcloud: High Performance Vector Search With Full Data Control Through the seamless integration between Qdrant Hybrid Cloud and OVHcloud, developers and businesses are able to deploy the fully managed vector database within their existing OVHcloud setups in minutes, enabling faster, more accurate AI-driven insights. - **Simple setup:** With the seamless “one-click” installation, developers are able to deploy Qdrant’s fully managed vector database to their existing OVHcloud environment. - **Trust and data sovereignty**: Deploying Qdrant Hybrid Cloud on OVHcloud enables developers with vector search that prioritizes data sovereignty, a crucial aspect in today's AI landscape where data privacy and control are essential. True to its “Sovereign by design” DNA, OVHcloud guarantees that all the data stored are immune to extraterritorial laws and comply with the highest security standards. - **Open standards and open ecosystem**: OVHcloud’s commitment to open standards and an open ecosystem not only facilitates the easy integration of Qdrant Hybrid Cloud with OVHcloud’s AI services and GPU-powered instances but also ensures compatibility with a wide range of external services and applications, enabling seamless data workflows across the modern AI stack. - **Cost efficient sector search:** By leveraging Qdrant's quantization for efficient data handling and pairing it with OVHcloud's eco-friendly, water-cooled infrastructure, known for its superior price/performance ratio, this collaboration provides a strong foundation for cost efficient vector search. #### Build a RAG-Based System with Qdrant Hybrid Cloud and OVHcloud ![hybrid-cloud-ovhcloud-tutorial](/blog/hybrid-cloud-ovhcloud/hybrid-cloud-ovhcloud-tutorial.png) To show how Qdrant Hybrid Cloud deployed on OVHcloud allows developers to leverage the benefits of an AI use case that is completely run within the existing infrastructure, we put together a comprehensive use case tutorial. This tutorial guides you through creating a recommendation system using collaborative filtering and sparse vectors with Qdrant Hybrid Cloud on OVHcloud. It employs the Movielens dataset for practical application, providing insights into building efficient, scalable recommendation engines suitable for developers and data scientists looking to leverage advanced vector search technologies within a secure, GDPR-compliant European cloud infrastructure. [Try the Tutorial](/documentation/tutorials/recommendation-system-ovhcloud/) #### Get Started Today and Leverage the Benefits of Qdrant Hybrid Cloud Setting up Qdrant Hybrid Cloud on OVHcloud is straightforward and quick, thanks to the intuitive integration with Kubernetes. Here's how: - **Hybrid Cloud Activation**: Log into your Qdrant account and enable 'Hybrid Cloud'. - **Cluster Integration**: Add your OVHcloud Kubernetes clusters as a Hybrid Cloud Environment in the Hybrid Cloud settings. - **Effortless Deployment**: Use the Qdrant Management Console for easy deployment and management of Qdrant clusters on OVHcloud. [Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/) #### Ready to Get Started? Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/). ",blog/hybrid-cloud-ovhcloud.md "--- draft: false title: Full-text filter and index are already available! slug: qdrant-introduces-full-text-filters-and-indexes short_description: Qdrant v0.10 introduced full-text filters description: Qdrant v0.10 introduced full-text filters and indexes to enable more search capabilities for those working with textual data. preview_image: /blog/from_cms/andrey.vasnetsov_black_hole_sucking_up_the_word_tag_cloud_f349586d-3e51-43c5-9e5e-92abf9a9e871.png date: 2022-11-16T09:53:05.860Z author: Kacper Łukawski featured: false tags: - Information Retrieval - Database - Open Source - Vector Search Database --- Qdrant is designed as an efficient vector database, allowing for a quick search of the nearest neighbours. But, you may find yourself in need of applying some extra filtering on top of the semantic search. Up to version 0.10, Qdrant was offering support for keywords only. Since 0.10, there is a possibility to apply full-text constraints as well. There is a new type of filter that you can use to do that, also combined with every other filter type. ## Using full-text filters without the payload index Full-text filters without the index created on a field will return only those entries which contain all the terms included in the query. That is effectively a substring match on all the individual terms but **not a substring on a whole query**. ![](/blog/from_cms/1_ek61_uvtyn89duqtmqqztq.webp ""An example of how to search for “long_sleeves” in a “detail_desc” payload field."") ## Full-text search behaviour on an indexed payload field There are more options if you create a full-text index on a field you will filter by. ![](/blog/from_cms/1_pohx4eznqpgoxak6ppzypq.webp ""Full-text search behaviour on an indexed payload field There are more options if you create a full-text index on a field you will filter by."") First and foremost, you can choose the tokenizer. It defines how Qdrant should split the text into tokens. There are three options available: * **word** — spaces, punctuation marks and special characters define the token boundaries * **whitespace** — token boundaries defined by whitespace characters * **prefix** — token boundaries are the same as for the “word” tokenizer, but in addition to that, there are prefixes created for every single token. As a result, “Qdrant” will be indexed as “Q”, “Qd”, “Qdr”, “Qdra”, “Qdran”, and “Qdrant”. There are also some additional parameters you can provide, such as * **min_token_len** — minimal length of the token * **max_token_len** — maximal length of the token * **lowercase** — if set to *true*, then the index will be case-insensitive, as Qdrant will convert all the texts to lowercase ## Using text filters in practice ![](/blog/from_cms/1_pbtd2tzqtjqqlbi61r8czg.webp ""There are also some additional parameters you can provide, such as min_token_len — minimal length of the token max_token_len — maximal length of the token lowercase — if set to true, then the index will be case-insensitive, as Qdrant will convert all the texts to lowercase Using text filters in practice"") The main difference between using full-text filters on the indexed vs non-indexed field is the performance of such query. In a simple benchmark, performed on the [H&M dataset](https://www.kaggle.com/competitions/h-and-m-personalized-fashion-recommendations) (with over 105k examples), the average query time looks as follows (n=1000): ![](/blog/from_cms/screenshot_31.png) It is evident that creating a filter on a field that we’ll query often, may lead us to substantial performance gains without much effort.",blog/full-text-filter-and-index-are-already-available.md "--- draft: false preview_image: /blog/from_cms/docarray.png sitemapExclude: true title: ""Qdrant and Jina integration: storage backend support for DocArray"" slug: qdrant-and-jina-integration short_description: ""One more way to use Qdrant: Jina's DocArray is now supporting Qdrant as a storage backend."" description: We are happy to announce that Jina.AI integrates Qdrant engine as a storage backend to their DocArray solution. date: 2022-03-15T15:00:00+03:00 author: Alyona Kavyerina featured: false author_link: https://medium.com/@alyona.kavyerina tags: - jina integration - docarray categories: - News --- We are happy to announce that [Jina.AI](https://jina.ai/) integrates Qdrant engine as a storage backend to their [DocArray](https://docarray.jina.ai/) solution. Now you can experience the convenience of Pythonic API and Rust performance in a single workflow. DocArray library defines a structure for the unstructured data and simplifies processing a collection of documents, including audio, video, text, and other data types. Qdrant engine empowers scaling of its vector search and storage. Read more about the integration by this [link](/documentation/install/#docarray) ",blog/qdrant_and_jina_integration.md "--- title: ""Qdrant Attains SOC 2 Type II Audit Report"" draft: false slug: qdrant-soc2-type2-audit # Change this slug to your page slug if needed short_description: We're proud to announce achieving SOC 2 Type II compliance for Security, Availability, Processing Integrity, Confidentiality, and Privacy. description: We're proud to announce achieving SOC 2 Type II compliance for Security, Availability, and Confidentiality. preview_image: /blog/soc2-type2-report/soc2-preview.jpeg # social_preview_image: /blog/soc2-type2-report/soc2-preview.jpeg date: 2024-05-23T20:26:20-03:00 author: Sabrina Aquino # Change this featured: false # if true, this post will be featured on the blog page tags: # Change this, related by tags posts will be shown on the blog page - soc2 - audit - security - confidenciality - data privacy - soc2 type 2 --- At Qdrant, we are happy to announce the successful completion our the SOC 2 Type II Audit. This achievement underscores our unwavering commitment to upholding the highest standards of security, availability, and confidentiality for our services and our customers’ data. ## SOC 2 Type II: What Is It? SOC 2 Type II certification is an examination of an organization's controls in reference to the American Institute of Certified Public Accountants [(AICPA) Trust Services criteria](https://www.aicpa-cima.com/resources/download/2017-trust-services-criteria-with-revised-points-of-focus-2022). It evaluates not only our written policies but also their practical implementation, ensuring alignment between our stated objectives and operational practices. Unlike Type I, which is a snapshot in time, Type II verifies over several months that the company has lived up to those controls. The report represents thorough auditing of our security procedures throughout this examination period: January 1, 2024 to April 7, 2024. ## Key Audit Findings The audit ensured with no exceptions noted the effectiveness of our systems and controls on the following Trust Service Criteria: * Security * Confidentiality * Availability These certifications are available today and automatically apply to your existing workloads. The full SOC 2 Type II report is available to customers and stakeholders upon request through the [Trust Center](https://app.drata.com/trust/9cbbb75b-0c38-11ee-865f-029d78a187d9). ## Future Compliance Going forward, Qdrant will maintain SOC 2 Type II compliance by conducting continuous, annual audits to ensure our security practices remain aligned with industry standards and evolving risks. Recognizing the critical importance of data security and the trust our clients place in us, achieving SOC 2 Type II compliance underscores our ongoing commitment to prioritize data protection with the utmost integrity and reliability. ## About Qdrant Qdrant is a vector database designed to handle large-scale, high-dimensional data efficiently. It allows for fast and accurate similarity searches in complex datasets. Qdrant strives to achieve seamless and scalable vector search capabilities for various applications. For more information about Qdrant and our security practices, please visit our [website](http://qdrant.tech) or [reach out to our team directly](https://qdrant.tech/contact-us/). ",blog/soc2-type2-report.md "--- draft: false title: Binary Quantization - Andrey Vasnetsov | Vector Space Talks slug: binary-quantization short_description: Andrey Vasnetsov, CTO of Qdrant, discusses the concept of binary quantization and its applications in vector indexing. description: Andrey Vasnetsov, CTO of Qdrant, discusses the concept of binary quantization and its benefits in vector indexing, including the challenges and potential future developments of this technique. preview_image: /blog/from_cms/andrey-vasnetsov-cropped.png date: 2024-01-09T10:30:10.952Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - Binary Quantization - Qdrant --- > *""Everything changed when we actually tried binary quantization with OpenAI model.”*\ > -- Andrey Vasnetsov Ever wonder why we need quantization for vector indexes? Andrey Vasnetsov explains the complexities and challenges of searching through proximity graphs. Binary quantization reduces storage size and boosts speed by 30x, but not all models are compatible. Andrey worked as a Machine Learning Engineer most of his career. He prefers practical over theoretical, working demo over arXiv paper. He is currently working as the CTO at Qdrant a Vector Similarity Search Engine, which can be used for semantic search, similarity matching of text, images or even videos, and also recommendations. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/7dPOm3x4rDBwSFkGZuwaMq?si=Ip77WCa_RCCYebeHX6DTMQ), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/4aUq5VnR_VI).*** ## Top Takeaways: Discover how oversampling optimizes precision in real-time, enhancing the accuracy without altering stored data structures in our very first episode of the Vector Space Talks by Qdrant, with none other than the CTO of Qdrant, Andrey Vasnetsov. In this episode, Andrey shares invaluable insights into the world of binary quantization and its profound impact on Vector Space technology. 5 Keys to Learning from the Episode: 1. The necessity of quantization and the complex challenges it helps to overcome. 2. The transformative effects of binary quantization on processing speed and storage size reduction. 3. A detailed exploration of oversampling and its real-time precision control in query search. 4. Understanding the simplicity and effectiveness of binary quantization, especially when compared to more intricate quantization methods. 5. The ongoing research and potential impact of binary quantization on future models. > Fun Fact: Binary quantization can deliver processing speeds over 30 times faster than traditional quantization methods, which is a revolutionary advancement in Vector Space technology. > ## Show Notes: 00:00 Overview of HNSW vector index.\ 03:57 Efficient storage needed for large vector sizes.\ 07:49 Oversampling controls precision in real-time search.\ 12:21 Comparison of vectors using dot production.\ 15:20 Experimenting with models, OpenAI has compatibility.\ 18:29 Qdrant architecture doesn't support removing original vectors. ## More Quotes from Andrey: *""Inside Qdrant we use HNSW vector Index, which is essentially a proximity graph. You can imagine it as a number of vertices where each vertex is representing one vector and links between those vertices representing nearest neighbors.”*\ -- Andrey Vasnetsov *""The main idea is that we convert the float point elements of the vector into binary representation. So, it's either zero or one, depending if the original element is positive or negative.”*\ -- Andrey Vasnetsov *""We tried most popular open source models, and unfortunately they are not as good compatible with binary quantization as OpenAI.”*\ -- Andrey Vasnetsov ## Transcript: Demetrios: Okay, welcome everyone. This is the first and inaugural vector space talks, and who better to kick it off than the CTO of Qdrant himself? Andrey V. Happy to introduce you and hear all about this binary quantization that you're going to be talking about. I've got some questions for you, and I know there are some questions that came through in the chat. And the funny thing about this is that we recorded it live on Discord yesterday. But the thing about Discord is you cannot trust the recordings on there. And so we only got the audio and we wanted to make this more visual for those of you that are watching on YouTube. Hence here we are recording it again. Demetrios: And so I'll lead us through some questions for you, Andrey. And I have one thing that I ask everyone who is listening to this, and that is if you want to give a talk and you want to showcase either how you're using Qdrant, how you've built a rag, how you have different features or challenges that you've overcome with your AI, landscape or ecosystem or stack that you've set up, please reach out to myself and I will get you on here and we can showcase what you've done and you can give a talk for the vector space talk. So without further ado, let's jump into this, Andrey, we're talking about binary quantization, but let's maybe start a step back. Why do we need any quantization at all? Why not just use original vectors? Andrey Vasnetsov: Yep. Hello, everyone. Hello Demetrios. And it's a good question, and I think in order to answer it, I need to first give a short overview of what is vector index, how it works and what challenges it possess. So, inside Qdrant we use so called HNSW vector Index, which is essentially a proximity graph. You can imagine it as a number of vertices where each vertex is representing one vector and links between those vertices representing nearest neighbors. So in order to search through this graph, what you actually need to do is do a greedy deep depth first search, and you can tune the precision of your search with the beam size of the greedy search process. But this structure of the index actually has its own challenges and first of all, its index building complexity. Andrey Vasnetsov: Inserting one vector into the index is as complicated as searching for one vector in the graph. And the graph structure overall have also its own limitations. It requires a lot of random reads where you can go in any direction. It's not easy to predict which path the graph will take. The search process will take in advance. So unlike traditional indexes in traditional databases, like binary trees, like inverted indexes, where we can pretty much serialize everything. In HNSW it's always random reads and it's actually always sequential reads, because you need to go from one vertex to another in a sequential manner. And this actually creates a very strict requirement for underlying storage of vectors. Andrey Vasnetsov: It had to have a very low latency and it have to support this randomly spatter. So basically we can only do it efficiently if we store all the vectors either in very fast solid state disks or if we use actual RAM to store everything. And RAM is not cheap these days, especially considering that the size of vectors increases with each new version of the model. And for example, OpenAI model is already more than 1000 dimensions. So you can imagine one vector is already 6 data, no matter how long your text is, and it's just becoming more and more expensive with the advancements of new models and so on. So in order to actually fight this, in order to compensate for the growth of data requirement, what we propose to do, and what we already did with different other quantization techniques is we actually compress vectors into quantized vector storage, which is usually much more compact for the in memory representation. For example, on one of the previous releases we have scalar quantization and product quantization, which can compress up to 64 times the size of the vector. And we only keep in fast storage these compressed vectors. Andrey Vasnetsov: We retrieve them and get a list of candidates which will later rescore using the original vectors. And the benefit here is this reordering or rescoring process actually doesn't require any kind of sequential or random access to data, because we already know all the IDs we need to rescore, and we can efficiently read it from the disk using asynchronous I O, for example, and even leverage the advantage of very cheap network mounted disks. And that's the main benefit of quantization. Demetrios: I have a few questions off the back of this one, being just a quick thing, and I'm wondering if we can double benefit by using this binary quantization, but also if we're using smaller models that aren't the GBTs, will that help? Andrey Vasnetsov: Right. So not all models are as big as OpenAI, but what we see, the trend in this area, the trend of development of different models, indicates that they will become bigger and bigger over time. Just because we want to store more information inside vectors, we want to have larger context, we want to have more detailed information, more detailed separation and so on. This trend is obvious if like five years ago the usual size of the vector was 100 dimensions now the usual size is 700 dimensions, so it's basically. Demetrios: Preparing for the future while also optimizing for today. Andrey Vasnetsov: Right? Demetrios: Yeah. Okay, so you mentioned on here oversampling. Can you go into that a little bit more and explain to me what that is? Andrey Vasnetsov: Yeah, so oversampling is a special technique we use to control precision of the search in real time, in query time. And the thing is, we can internally retrieve from quantized storage a bit more vectors than we actually need. And when we do rescoring with original vectors, we assign more precise score. And therefore from this overselection, we can pick only those vectors which are actually good for the user. And that's how we can basically control accuracy without rebuilding index, without changing any kind of parameters inside the stored data structures. But we can do it real time in just one parameter change of the search query itself. Demetrios: I see, okay, so basically this is the quantization. And now let's dive into the binary quantization and how it works. Andrey Vasnetsov: Right, so binary quantization is actually very simple. The main idea that we convert the float point elements of the vector into binary representation. So it's either zero or one, depending if the original element is positive or negative. And by doing this we can approximate dot production or cosine similarity, whatever metric you use to compare vectors with just hemming distance, and hemming distance is turned to be very simple to compute. It uses only two most optimized CPU instructions ever. It's Pixor and Popcount. Instead of complicated float point subprocessor, you only need those tool. It works with any register you have, and it's very fast. Andrey Vasnetsov: It uses very few CPU cycles to actually produce a result. That's why binary quantization is over 30 times faster than regular product. And it actually solves the problem of complicated index building, because this computation of dot products is the main source of computational requirements for HNSW. Demetrios: So if I'm understanding this correctly, it's basically taking all of these numbers that are on the left, which can be, yes, decimal numbers. Andrey Vasnetsov: On the left you can see original vector and it converts it in binary representation. And of course it does lose a lot of precision in the process. But because first we have very large vector and second, we have oversampling feature, we can compensate for this loss of accuracy and still have benefit in both speed and the size of the storage. Demetrios: So if I'm understanding this correctly, it's basically saying binary quantization on its own probably isn't the best thing that you would want to do. But since you have these other features that will help counterbalance the loss in accuracy. You get the speed from the binary quantization and you get the accuracy from these other features. Andrey Vasnetsov: Right. So the speed boost is so overwhelming that it doesn't really matter how much over sampling is going to be, we will still benefit from that. Demetrios: Yeah. And how much faster is it? You said that, what, over 30 times faster? Andrey Vasnetsov: Over 30 times and some benchmarks is about 40 times faster. Demetrios: Wow. Yeah, that's huge. And so then on the bottom here you have dot product versus hammering distance. And then there's. Yeah, hamming. Sorry, I'm inventing words over here on your slide. Can you explain what's going on there? Andrey Vasnetsov: Right, so dot production is the metric we usually use in comparing a pair of vectors. It's basically the same as cosine similarity, but this normalization on top. So internally, both cosine and dot production actually doing only dot production, that's usual metric we use. And in order to do this operation, we first need to multiply each pair of elements to the same element of the other vector and then add all these multiplications in one number. It's going to be our score instead of this in binary quantization, in binary vector, we do XOR operation and then count number of ones. So basically, Hemming distance is an approximation of dot production in this binary space. Demetrios: Excellent. Okay, so then it looks simple enough, right? Why are you implementing it now after much more complicated product quantization? Andrey Vasnetsov: It's actually a great question. And the answer to this is binary questization looked too simple to be true, too good to be true. And we thought like this, we tried different things with open source models that didn't work really well. But everything changed when we actually tried binary quantization with OpenAI model. And it turned out that OpenAI model has very good compatibility with this type of quantization. Unfortunately, not every model have as good compatibility as OpenAI. And to be honest, it's not yet absolutely clear for us what makes models compatible and whatnot. We do know that it correlates with number of dimensions, but it is not the only factor. Andrey Vasnetsov: So there is some secret source which exists and we should find it, which should enable models to be compatible with binary quantization. And I think it's actually a future of this space because the benefits of this hemming distance benefits of binary quantization is so great that it makes sense to incorporate these tricks on the learning process of the model to make them more compatible. Demetrios: Well, you mentioned that OpenAI's model is one that obviously works well with binary quantization, but there are models that don't work well with it, which models have not been very good. Andrey Vasnetsov: So right now we are in the process of experimenting with different models. We tried most popular open source models, and unfortunately they are not as good compatible with binary quantization as OpenAI. We also tried different closed source models, for example Cohere AI, which is on the same level of compatibility with binary quantization as OpenAI, but they actually have much larger dimensionality. So instead of 1500 they have 4000. And it's not yet clear if only dimensionality makes this model compatible. Or there is something else in training process, but there are open source models which are getting close to OpenAI 1000 dimensions, but they are not nearly as good as Openi in terms of this compression compatibility. Demetrios: So let that be something that hopefully the community can help us figure out. Why is it that this works incredibly well with these closed source models, but not with the open source models? Maybe there is something that we're missing there. Andrey Vasnetsov: Not all closed source models are compatible as well, so some of them work similar as open source, but a few works well. Demetrios: Interesting. Okay, so is there a plan to implement other quantization methods, like four bit quantization or even compressing two floats into one bit? Andrey Vasnetsov: Right, so our choice of quantization is mostly defined by available CPU instructions we can apply to perform those computations. In case of binary quantization, it's straightforward and very simple. That's why we like binary quantization so much. In case of, for example, four bit quantization, it is not as clear which operation we should use. It's not yet clear. Would it be efficient to convert into four bits and then apply multiplication of four bits? So this would require additional investigation, and I cannot say that we have immediate plans to do so because still the binary quincellation field is not yet explored on 100% and we think it's a lot more potential with this than currently unlocked. Demetrios: Yeah, there's some low hanging fruits still on the binary quantization field, so tackle those first and then move your way over to four bit and all that fun stuff. Last question that I've got for you is can we remove original vectors and only keep quantized ones in order to save disk space? Andrey Vasnetsov: Right? So unfortunately Qdrant architecture is not designed and not expecting this type of behavior for several reasons. First of all, removing of the original vectors will compromise some features like oversampling, like segment building. And actually removing of those original vectors will only be compatible with some types of quantization for example, it won't be compatible with scalar quantization because in this case we won't be able to rebuild index to do maintenance of the system. And in order to maintain, how would you say, consistency of the API, consistency of the engine, we decided to enforce always enforced storing of the original vectors. But the good news is that you can always keep original vectors on just disk storage. It's very cheap. Usually it's ten times or even more times cheaper than RAM, and it already gives you great advantage in terms of price. That's answer excellent. Demetrios: Well man, I think that's about it from this end, and it feels like it's a perfect spot to end it. As I mentioned before, if anyone wants to come and present at our vector space talks, we're going to be doing these, hopefully biweekly, maybe weekly, if we can find enough people. And so this is an open invitation for you, and if you come present, I promise I will send you some swag. That is my promise to you. And if you're listening after the fact and you have any questions, come into discord on the Qdrant. Discord. And ask myself or Andrey any of the questions that you may have as you're listening to this talk about binary quantization. We will catch you all later. Demetrios: See ya, have a great day. Take care.",blog/binary-quantization-andrey-vasnetsov-vector-space-talk-001.md "--- draft: true preview_image: /blog/from_cms/new-cmp-demo.gif sitemapExclude: true title: ""Introducing the Quaterion: a framework for fine-tuning similarity learning models"" slug: quaterion short_description: Please meet Quaterion—a framework for training and fine-tuning similarity learning models. description: We're happy to share the result of the work we've been into during the last months - Quaterion. It is a framework for fine-tuning similarity learning models that streamlines the training process to make it significantly faster and cost-efficient. date: 2022-06-28T12:48:36.622Z author: Andrey Vasnetsov featured: true author_link: https://www.linkedin.com/in/andrey-vasnetsov-75268897/ tags: - Corporate news - Release - Quaterion - PyTorch categories: - News - Release - Quaterion --- We're happy to share the result of the work we've been into during the last months - [Quaterion](https://quaterion.qdrant.tech/). It is a framework for fine-tuning similarity learning models that streamlines the training process to make it significantly faster and cost-efficient. To develop Quaterion, we utilized PyTorch Lightning, leveraging a high-performing AI research approach to constructing training loops for ML models. ![quaterion](/blog/from_cms/new-cmp-demo.gif) This framework empowers vector search [solutions](/solutions/), such as semantic search, anomaly detection, and others, by advanced coaching mechanism, specially designed head layers for pre-trained models, and high flexibility in terms of customization according to large-scale training pipelines and other features. Here you can read why similarity learning is preferable to the traditional machine learning approach and how Quaterion can help benefit     A quick start with Quaterion:\ \ And try it and give us a star on GitHub :) ",blog/introducing-the-quaterion-a-framework-for-fine-tuning-similarity-learning-models.md "--- draft: true title: ""OCI and Qdrant Hybrid Cloud for Maximum Data Sovereignty"" short_description: ""Qdrant Hybrid Cloud is now available for OCI customers as a managed vector search engine for data-sensitive AI apps."" description: ""Qdrant Hybrid Cloud is now available for OCI customers as a managed vector search engine for data-sensitive AI apps."" preview_image: /blog/hybrid-cloud-oracle-cloud-infrastructure/hybrid-cloud-oracle-cloud-infrastructure.png date: 2024-04-11T00:03:00Z author: Qdrant featured: false weight: 1005 tags: - Qdrant - Vector Database --- Qdrant and [Oracle Cloud Infrastructure (OCI) Cloud Engineering](https://www.oracle.com/cloud/) are thrilled to announce the ability to deploy [Qdrant Hybrid Cloud](/hybrid-cloud/) as a managed service on OCI. This marks the next step in the collaboration between Qdrant and Oracle Cloud Infrastructure, which will enable enterprises to realize the benefits of artificial intelligence powered through scalable vector search. In 2023, OCI added Qdrant to its [Oracle Cloud Infrastructure solution portfolio](https://blogs.oracle.com/cloud-infrastructure/post/vecto-database-qdrant-support-oci-kubernetes). Qdrant Hybrid Cloud is the managed service of the Qdrant vector search engine that can be deployed and run in any existing OCI environment, allowing enterprises to run fully managed vector search workloads in their existing infrastructure. This is a milestone for leveraging a managed vector search engine for data-sensitive AI applications. In the past years, enterprises have been actively engaged in exploring AI applications to enhance their products and services or unlock internal company knowledge to drive the productivity of teams. These applications range from generative AI use cases, for example, powered by retrieval augmented generation (RAG), recommendation systems, or advanced enterprise search through semantic, similarity, or neural search. As these vector search applications continue to evolve and grow with respect to dimensionality and complexity, it will be increasingly relevant to have a scalable, manageable vector search engine, also called out by Gartner’s 2024 Impact Radar. In addition to scalability, enterprises also require flexibility in deployment options to be able to maximize the use of these new AI tools within their existing environment, ensuring interoperability and full control over their data. > *""We are excited to partner with Qdrant to bring their powerful vector search capabilities to Oracle Cloud Infrastructure. By offering Qdrant Hybrid Cloud as a managed service on OCI, we are empowering enterprises to harness the full potential of AI-driven applications while maintaining complete control over their data. This collaboration represents a significant step forward in making scalable vector search accessible and manageable for businesses across various industries, enabling them to drive innovation, enhance productivity, and unlock valuable insights from their data.""* Dr. Sanjay Basu, Senior Director of Cloud Engineering, AI/GPU Infrastructure at Oracle. #### How Qdrant and OCI Support Enterprises in Unlocking Value Through AI Deploying Qdrant Hybrid Cloud on OCI facilitates vector search in production environments without altering existing setups, ideal for enterprises and developers leveraging OCI's services. Key benefits include: - **Seamless Deployment:** Qdrant Hybrid Cloud’s Kubernetes-native architecture allows you to simply connect your OCI cluster as a Hybrid Cloud Environment and deploy Qdrant with a one-step installation ensuring a smooth and scalable setup. - **Seamless Integration with OCI Services:** The integration facilitates efficient resource utilization and enhances security provisions by leveraging OCI's comprehensive suite of services. - **Simplified Cluster Management**: Qdrant’s central cluster management allows to scale your cluster on OCI (vertically and horizontally), and supports seamless zero-downtime upgrades and disaster recovery, - **Control and Data Privacy**: Deploying Qdrant on OCI ensures complete data isolation, while enjoying the benefits of a fully managed cluster management. #### Qdrant on OCI in Action: Building a RAG System for AI-Enabled Support ![hybrid-cloud-oracle-cloud-infrastructure-tutorial](/blog/hybrid-cloud-oracle-cloud-infrastructure/hybrid-cloud-oracle-cloud-infrastructure-tutorial.png) We created a comprehensive tutorial to show how to leverage the benefits of Qdrant Hybrid Cloud on OCI and build AI applications with a focus on data sovereignty. This use case is focused on building a RAG system for FAQ, leveraging the strengths of Qdrant Hybrid Cloud for vector search, Oracle Cloud Infrastructure (OCI) as a managed Kubernetes provider, Cohere models for embedding, and LangChain as a framework. [Try the Tutorial](/documentation/tutorials/natural-language-search-oracle-cloud-infrastructure-cohere-langchain/) Deploying Qdrant Hybrid Cloud on Oracle Cloud Infrastructure only takes a few minutes due to the seamless Kubernetes-native integration. You can get started by following these three steps: 1. **Hybrid Cloud Activation**: Start by signing into your [Qdrant Cloud account](https://qdrant.to/cloud) and activate **Hybrid Cloud**. 2. **Cluster Integration**: In the Hybrid Cloud section, add your OCI Kubernetes clusters as a Hybrid Cloud Environment. 3. **Effortless Deployment**: Utilize the Qdrant Management Console to seamlessly create and manage your Qdrant clusters on OCI. You can find a detailed description in our documentation focused on deploying Qdrant on OCI. [Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/) #### Ready to Get Started? Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/). ",blog/hybrid-cloud-oracle-cloud-infrastructure.md "--- title: ""Intel’s New CPU Powers Faster Vector Search"" draft: false slug: qdrant-cpu-intel-benchmark short_description: ""New generation silicon is a game-changer for AI/ML applications."" description: ""Intel’s 5th gen Xeon processor is made for enterprise-scale operations in vector space. "" preview_image: /blog/qdrant-cpu-intel-benchmark/social_preview.jpg social_preview_image: /blog/qdrant-cpu-intel-benchmark/social_preview.jpg date: 2024-05-10T00:00:00-08:00 author: David Myriel, Kumar Shivendu featured: true tags: - vector search - intel benchmark - next gen cpu - vector database --- #### New generation silicon is a game-changer for AI/ML applications ![qdrant cpu intel benchmark report](/blog/qdrant-cpu-intel-benchmark/qdrant-cpu-intel-benchmark.png) > *Intel’s 5th gen Xeon processor is made for enterprise-scale operations in vector space.* Vector search is surging in popularity with institutional customers, and Intel is ready to support the emerging industry. Their latest generation CPU performed exceptionally with Qdrant, a leading vector database used for enterprise AI applications. Intel just released the latest Xeon processor (**codename: Emerald Rapids**) for data centers, a market which is expected to grow to $45 billion. Emerald Rapids offers higher-performance computing and significant energy efficiency over previous generations. Compared to the 4th generation Sapphire Rapids, Emerald boosts AI inference performance by up to 42% and makes vector search 38% faster. ## The CPU of choice for vector database operations The latest generation CPU performed exceptionally in tests carried out by Qdrant’s R&D division. Intel’s CPU was stress-tested for query speed, database latency and vector upload time against massive-scale datasets. Results showed that machines with 32 cores were 1.38x faster at running queries than their previous generation counterparts. In this range, Qdrant’s latency also dropped 2.79x when compared to Sapphire. Qdrant strongly recommends the use of Intel’s next-gen chips in the 8-64 core range. In addition to being a practical number of cores for most machines in the cloud, this compute capacity will yield the best results with mass-market use cases. The CPU affects vector search by influencing the speed and efficiency of mathematical computations. As of recently, companies have started using GPUs to carry large workloads in AI model training and inference. However, for vector search purposes, studies show that CPU architecture is a great fit because it can handle concurrent requests with great ease. > *“Vector search is optimized for CPUs. Intel’s new CPU brings even more performance improvement and makes vector operations blazing fast for AI applications. Customers should consider deploying more CPUs instead of GPU compute power to achieve best performance results and reduce costs simultaneously.”* > > - André Zayarni, Qdrant CEO ## **Why does vector search matter?** ![qdrant cpu intel benchmark report](/blog/qdrant-cpu-intel-benchmark/qdrant-cpu-intel-benchmark-future.png) Vector search engines empower AI to look deeper into stored data and retrieve strong relevant responses. Qdrant’s vector database is key to modern information retrieval and machine learning systems. Those looking to run massive-scale Retrieval Augmented Generation (RAG) solutions need to leverage such semantic search engines in order to generate the best results with their AI products. Qdrant is purpose-built to enable developers to store and search for high-dimensional vectors efficiently. It easily integrates with a host of AI/ML tools: Large Language Models (LLM), frameworks such as LangChain, LlamaIndex or Haystack, and service providers like Cohere, OpenAI, and Ollama. ## Supporting enterprise-scale AI/ML The market is preparing for a host of artificial intelligence and machine learning cases, pushing compute to the forefront of the innovation race. The main strength of a vector database like Qdrant is that it can consistently support the user way past the prototyping and launch phases. Qdrant’s product is already being used by large enterprises with billions of data points. Such users can go from testing to production almost instantly. Those looking to host large applications might only need up to 18GB RAM to support 1 million OpenAI Vectors. This makes Qdrant the best option for maximizing resource usage and data connection. Intel’s latest development is crucial to the future of vector databases. Vector search operations are very CPU-intensive. Therefore, Qdrant relies on the innovations made by chip makers like Intel to offer large-scale support. > *“Vector databases are a mainstay in today’s AI/ML toolchain, powering the latest generation of RAG and other Gen AI Applications. In teaming with Qdrant, Intel is helping enterprises deliver cutting-edge Gen-AI solutions and maximize their ROI by leveraging Qdrant’s high-performant and cost-efficient vector similarity search capabilities running on latest Intel Architecture based infrastructure across deployment models.”* > > - Arijit Bandyopadhyay, CTO - Enterprise Analytics & AI, Head of Strategy – Cloud and Enterprise, CSV Group, Intel Corporation ## Advancing vector search and the role of next-gen CPUs Looking ahead, the vector database market is on the cusp of significant growth, particularly for the enterprise market. Developments in CPU technologies, such as those from Intel, are expected to enhance vector search operations by 1) improving processing speeds and 2) boosting retrieval efficiency and quality. This will allow enterprise users to easily manage large and more complex datasets and introduce AI on a global scale. As large companies continue to integrate sophisticated AI and machine learning tools, the reliance on robust vector databases is going to increase. This evolution in the market underscores the importance of continuous hardware innovation in meeting the expanding demands of data-intensive applications, with Intel's contributions playing a notable role in shaping the future of enterprise-scale AI/ML solutions. ## Next steps Qdrant is open source and offers a complete SaaS solution, hosted on AWS, GCP, and Azure. Getting started is easy, either spin up a [container image](https://hub.docker.com/r/qdrant/qdrant) or start a [free Cloud instance](https://cloud.qdrant.io/login). The documentation covers [adding the data](/documentation/tutorials/bulk-upload/) to your Qdrant instance as well as [creating your indices](/documentation/tutorials/optimize/). We would love to hear about what you are building and please connect with our engineering team on [Github](https://github.com/qdrant/qdrant), [Discord](https://discord.com/invite/tdtYvXjC4h), or [LinkedIn](https://www.linkedin.com/company/qdrant).",blog/qdrant-cpu-intel-benchmark.md "--- title: ""Response to CVE-2024-3829: Arbitrary file upload vulnerability"" draft: false slug: cve-2024-3829-response short_description: Qdrant keeps your systems secure description: Upgrade your deployments to at least v1.9.0. Cloud deployments not materially affected. preview_image: /blog/cve-2024-3829-response/cve-2024-3829-response-social-preview.png # social_preview_image: /blog/Article-Image.png # Optional image used for link previews # title_preview_image: /blog/Article-Image.png # Optional image used for blog post title # small_preview_image: /blog/Article-Image.png # Optional image used for small preview in the list of blog posts date: 2024-06-10T17:00:00Z author: Mac Chaffee featured: false tags: - cve - security weight: 0 # Change this weight to change order of posts # For more guidance, see https://github.com/qdrant/landing_page?tab=readme-ov-file#blog --- ### Summary A security vulnerability has been discovered in Qdrant affecting all versions prior to v1.9, described in [CVE-2024-3829](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-3829). The vulnerability allows an attacker to upload arbitrary files to the filesystem, which can be used to gain remote code execution. This is a different but similar vulnerability to CVE-2024-2221, announced in April 2024. The vulnerability does not materially affect Qdrant cloud deployments, as that filesystem is read-only and authentication is enabled by default. At worst, the vulnerability could be used by an authenticated user to crash a cluster, which is already possible, such as by uploading more vectors than can fit in RAM. Qdrant has addressed the vulnerability in v1.9.0 and above with code that restricts file uploads to a folder dedicated to that purpose. ### Action Check the current version of your Qdrant deployment. Upgrade if your deployment is not at least v1.9.0. To confirm the version of your Qdrant deployment in the cloud or on your local or cloud system, run an API GET call, as described in the [Qdrant Quickstart guide](https://qdrant.tech/documentation/cloud/quickstart-cloud/#step-2-test-cluster-access). If your Qdrant deployment is local, you do not need an API key. Your next step depends on how you installed Qdrant. For details, read the [Qdrant Installation](https://qdrant.tech/documentation/guides/installation/) guide. #### If you use the Qdrant container or binary Upgrade your deployment. Run the commands in the applicable section of the [Qdrant Installation](https://qdrant.tech/documentation/guides/installation/) guide. The default commands automatically pull the latest version of Qdrant. #### If you use the Qdrant helm chart If you’ve set up Qdrant on kubernetes using a helm chart, follow the README in the [qdrant-helm](https://github.com/qdrant/qdrant-helm/tree/main?tab=readme-ov-file#upgrading) repository. Make sure applicable configuration files point to version v1.9.0 or above. #### If you use the Qdrant cloud No action is required. This vulnerability does not materially affect you. However, we suggest that you upgrade your cloud deployment to the latest version. ",blog/cve-2024-3829-response.md "--- draft: false title: ""FastEmbed: Fast & Lightweight Embedding Generation - Nirant Kasliwal | Vector Space Talks"" slug: fast-embed-models short_description: Nirant Kasliwal, AI Engineer at Qdrant, discusses the power and potential of embedding models. description: Nirant Kasliwal discusses the efficiency and optimization techniques of FastEmbed, a Python library designed for speedy, lightweight embedding generation in machine learning applications. preview_image: /blog/from_cms/nirant-kasliwal-cropped.png date: 2024-01-09T11:38:59.693Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - Quantized Emdedding Models - FastEmbed --- > *""When things are actually similar or how we define similarity. They are close to each other and if they are not, they're far from each other. This is what a model or embedding model tries to do.”*\ >-- Nirant Kasliwal Heard about FastEmbed? It's a game-changer. Nirant shares tricks on how to improve your embedding models. You might want to give it a shot! Nirant Kasliwal, the creator and maintainer of FastEmbed, has made notable contributions to the Finetuning Cookbook at OpenAI Cookbook. His contributions extend to the field of Natural Language Processing (NLP), with over 5,000 copies of the NLP book sold. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/4QWCyu28SlURZfS2qCeGKf?si=GDHxoOSQQ_W_UVz4IzzC_A), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/e67jLAx_F2A).*** ## **Top Takeaways:** Nirant Kasliwal, AI Engineer at Qdrant joins us on Vector Space Talks to dive into FastEmbed, a lightning-quick method for generating embeddings. In this episode, Nirant shares insights, tips, and innovative ways to enhance embedding generation. 5 Keys to Learning from the Episode: 1. Nirant introduces some hacker tricks for improving embedding models - you won't want to miss these! 2. Learn how quantized embedding models can enhance CPU performance. 3. Get an insight into future plans for GPU-friendly quantized models. 4. Understand how to select default models in Qdrant based on MTEB benchmark, and how to calibrate them for domain-specific tasks. 5. Find out how Fast Embed, a Python library created by Nirant, can solve common challenges in embedding creation and enhance the speed and efficiency of your workloads. > Fun Fact: The largest header or adapter used in production is only about 400-500 KBs -- proof that bigger doesn't always mean better! > ## Show Notes: 00:00 Nirant discusses FastEmbed at Vector Space Talks.\ 05:00 Tokens are expensive and slow in open air.\ 08:40 FastEmbed is fast and lightweight.\ 09:49 Supporting multimodal embedding is our plan.\ 15:21 No findings. Enhancing model downloads and performance.\ 16:59 Embed creation on your own compute, not cloud. Control and simplicity are prioritized.\ 21:06 Qdrant is fast for embedding similarity search.\ 24:07 Engineer's mindset: make informed guesses, set budgets.\ 26:11 Optimize embeddings with questions and linear layers.\ 29:55 Fast, cheap inference using mixed precision embeddings. ## More Quotes from Nirant: *""There is the academic way of looking at and then there is the engineer way of looking at it, and then there is the hacker way of looking at it. And I will give you all these three answers in that order.”*\ -- Nirant Kasliwal *""The engineer's mindset now tells you that the best way to build something is to make an informed guess about what workload or challenges you're going to foresee. Right. Like a civil engineer builds a bridge around how many cars they expect, they're obviously not going to build a bridge to carry a shipload, for instance, or a plane load, which are very different.”*\ -- Nirant Kasliwal *""I think the more correct way to look at it is that we use the CPU better.”*\ -- Nirant Kasliwal ## Transcript: Demetrios: Welcome back, everyone, to another vector space talks. Today we've got my man Nirant coming to us talking about FastEmbed. For those, if this is your first time at our vector space talks, we like to showcase some of the cool stuff that the community in Qdrant is doing, the Qdrant community is doing. And we also like to show off some of the cool stuff that Qdrant itself is coming out with. And this is one of those times that we are showing off what Qdrant itself came out with with FastEmbed. And we've got my man Nirant around here somewhere. I am going to bring him on stage and I will welcome him by saying Nirant a little bit about his bio, we could say. So, Naran, what's going on, dude? Let me introduce you real fast before we get cracking. Demetrios: And you are a man that wears many hats. You're currently working on the Devrel team at Qdrant, right? I like that shirt that you got there. And you have worked with ML models and embeddings since 2017. That is wild. You are also the creator and maintainer of fast embed. So you're the perfect guy to talk to about this very topic that we are doing today. Now, if anyone has questions, feel free to throw them into the chat and I will ask Nirant as he's going through it. I will also take this moment to encourage anyone who is watching to come and join us in discord, if you are not already there for the Qdrant discord. Demetrios: And secondly, I will encourage you if you have something that you've been doing with Qdrant or in the vector database space, or in the AI application space and you want to show it off, we would love to have you talk at the vector space talks. So without further ado, Nirant, my man, I'm going to kick it over to you and I am going to start it off with what are the challenges with embedding creation today? Nirant Kasliwal: I think embedding creation has it's not a standalone problem, as you might first think like that's a first thought that it's a standalone problem. It's actually two problems. One is a classic compute that how do you take any media? So you can make embeddings from practically any form of media, text, images, video. In theory, you could make it from bunch of things. So I recently saw somebody use soup as a metaphor. So you can make soup from almost anything. So you can make embeddings from almost anything. Now, what do we want to do though? Embedding are ultimately a form of compression. Nirant Kasliwal: So now we want to make sure that the compression captures something of interest to us. In this case, we want to make sure that embeddings capture some form of meaning of, let's say, text or images. And when we do that, what does that capture mean? We want that when things are actually similar or whatever is our definition of similarity. They are close to each other and if they are not, they're far from each other. This is what a model or embedding model tries to do basically in this piece. The model itself is quite often trained and built in a way which retains its ability to learn new things. And you can separate similar embeddings faster and all of those. But when we actually use this in production, we don't need all of those capabilities, we don't need the train time capabilities. Nirant Kasliwal: And that means that all the extra compute and features and everything that you have stored for training time are wasted in production. So that's almost like saying that every time I have to speak to you I start over with hello, I'm Nirant and I'm a human being. It's extremely infuriating but we do this all the time with embedding and that is what fast embed primarily tries to fix. We say embeddings from the lens of production and we say that how can we make a Python library which is built for speed, efficiency and accuracy? Those are the core ethos in that sense. And I think people really find this relatable as a problem area. So you can see this on our GitHub issues. For instance, somebody says that oh yeah, we actually does what it says and yes, that's a good thing. So for 8 million tokens we took about 3 hours on a MacBook Pro M one while some other Olama embedding took over two days. Nirant Kasliwal: You can expect what 8 million tokens would cost on open air and how slow it would be given that they frequently rate limit you. So for context, we made a 1 million embedding set which was a little more than it was a lot more than 1 million tokens and that took us several hundred of us. It was not expensive, but it was very slow. So as a batch process, if you want to embed a large data set, it's very slow. I think the more colorful version of this somebody wrote on LinkedIn, Prithvira wrote on LinkedIn that your embeddings will go and I love that idea that we have optimized speed so that it just goes fast. That's the idea. So what do we I mean let's put names to these things, right? So one is we want it to be fast and light. And I'll explain what do we mean by light? We want recall to be fast, right? I mean, that's what we started with that what are embedding we want to be make sure that similar things are similar. Nirant Kasliwal: That's what we call recall. We often confuse this with accuracy but in retrieval sense we'll call it recall. We want to make sure it's still easy to use, right? Like there is no reason for this to get complicated. And we are fast, I mean we are very fast. And part of that is let's say we use BGE small En, the English model only. And let's say this is all in tokens per second and the token is model specific. So for instance, the way BGE would count a token might be different from how OpenAI might count a token because the tokenizers are slightly different and they have been trained on slightly different corporates. So that's the idea. Nirant Kasliwal: I would love you to try this so that I can actually brag about you trying it. Demetrios: What was the fine print on that slide? Benchmarks are my second most liked way to brag. What's your first most liked way to brag? Nirant Kasliwal: The best way is that when somebody tells me that they're using it. Demetrios: There we go. So I guess that's an easy way to get people to try and use it. Nirant Kasliwal: Yeah, I would love it if you try it. Tell us how it went for you, where it's working, where it's broken, all of that. I love it if you report issue then say I will even appreciate it if you yell at me because that means you're not ignoring me. Demetrios: That's it. There we go. Bug reports are good to throw off your mojo. Keep it rolling. Nirant Kasliwal: So we said fast and light. So what does light mean? So you will see a lot of these Embedding servers have really large image sizes. When I say image, I mean typically or docker image that can typically go to a few GPS. For instance, in case of sentence transformers, which somebody's checked out with Transformers the package and PyTorch, you get a docker image of roughly five GB. The Ram consumption is not that high by the way. Right. The size is quite large and of that the model is just 400 MB. So your dependencies are very large. Nirant Kasliwal: And every time you do this on, let's say an AWS Lambda, or let's say if you want to do horizontal scaling, your cold start times can go in several minutes. That is very slow and very inefficient if you are working in a workload which is very spiky. And if you were to think about it, people have more queries than, let's say your corpus quite often. So for instance, let's say you are in customer support for an ecommerce food delivery app. Bulk of your order volume will be around lunch and dinner timing. So that's a very spiky load. Similarly, ecommerce companies, which are even in fashion quite often see that people check in on their orders every evening and for instance when they leave from office or when they get home. And that's another spike. Nirant Kasliwal: So whenever you have a spiky load, you want to be able to scale horizontally and you want to be able to do it fast. And that speed comes from being able to be light. And that is why Fast Embed is very light. So you will see here that we call out that Fast Embed is just half a GB versus five GB. So on the extreme cases, this could be a ten x difference in your docker, image sizes and even Ram consumptions recall how good or bad are these embeddings? Right? So we said we are making them fast but do we sacrifice how much performance do we trade off for that? So we did a cosine similarity test with our default embeddings which was VG small en initially and now 1.5 and they're pretty robust. We don't sacrifice a lot of performance. Everyone with me? I need some audio to you. Demetrios: I'm totally with you. There is a question that came through the chat if this is the moment to ask it. Nirant Kasliwal: Yes, please go for it. Demetrios: All right it's from a little bit back like a few slides ago. So I'm just warning you. Are there any plans to support audio or image sources in fast embed? Nirant Kasliwal: If there is a request for that we do have a plan to support multimodal embedding. We would love to do that. If there's specific model within those, let's say you want Clip or Seglip or a specific audio model, please mention that either on that discord or our GitHub so that we can plan accordingly. So yeah, that's the idea. We need specific suggestions so that we keep adding it. We don't want to have too many models because then that creates confusion for our end users and that is why we take opinated stance and that is actually a good segue. Why do we prioritize that? We want this package to be easy to use so we're always going to try and make the best default choice for you. So this is a very Linux way of saying that we do one thing and we try to do that one thing really well. Nirant Kasliwal: And here, let's say for instance, if you were to look at Qdrant client it's just passing everything as you would. So docs is a list of strings, metadata is a list of dictionaries and IDs again is a list of IDs valid IDs as per the Qdrant Client spec. And the search is also very straightforward. The entire search query is basically just two params. You could even see a very familiar integration which is let's say langchain. I think most people here would have looked at this in some shape or form earlier. This is also very familiar and very straightforward. And under the hood what are we doing is just this one line. Nirant Kasliwal: We have a dot embed which is a generator and we call a list on that so that we actually get a list of embeddings. You will notice that we have a passage and query keys here which means that our retrieval model which we have used as default here, takes these into account that if there is a passage and a query they need to be mapped together and a question and answer context is captured in the model training itself. The other caveat is that we pass on the token limits or context windows from the embedding model creators themselves. So in the case of this model, which is BGE base, that is 512 BGE tokens. Demetrios: One thing on this, we had Neil's from Cohere on last week and he was talking about Cohere's embed version three, I think, or V three, he was calling it. How does this play with that? Does it is it supported or no? Nirant Kasliwal: As of now, we only support models which are open source so that we can serve those models directly. Embed V three is cloud only at the moment, so that is why it is not supported yet. But that said, we are not opposed to it. In case there's a requirement for that, we are happy to support that so that people can use it seamlessly with Qdrant and fast embed does the heavy lifting of passing it to Qdrant, structuring the schema and all of those for you. So that's perfectly fair. As I ask, if we have folks who would love to try coherent embed V three, we'd use that. Also, I think Nils called out that coherent embed V three is compatible with binary quantization. And I think that's the only embedding which officially supports that. Nirant Kasliwal: Okay, we are binary quantization aware and they've been trained for it. Like compression awareness is, I think, what it was called. So Qdrant supports that. So please of that might be worth it because it saves about 30 x in memory costs. So that's quite powerful. Demetrios: Excellent. Nirant Kasliwal: All right, so behind the scenes, I think this is my favorite part of this. It's also very short. We do literally two things. Why are we fast? We use ONNX runtime as of now, our configurations are such that it runs on CPU and we are still very fast. And that's because of all the multiple processing and ONNX runtime itself at some point in the future. We also want to support GPUs. We had some configuration issues on different Nvidia configurations. As the GPU changes, the OnX runtime does not seamlessly change the GPU. Nirant Kasliwal: So that is why we do not allow that as a provider. But you can pass that. It's not prohibited, it's just not a default. We want to make sure your default is always available and will be available in the happy path, always. And we quantize the models for you. So when we quantize, what it means is we do a bunch of tricks supported by a huge shout out to hugging faces optimum. So we do a bunch of optimizations in the quantization, which is we compress some activations, for instance, gelu. We also do some graph optimizations and we don't really do a lot of dropping the bits, which is let's say 32 to 16 or 64 to 32 kind of quantization only where required. Nirant Kasliwal: Most of these gains come from the graph optimizations themselves. So there are different modes which optimum itself calls out. And if there are folks interested in that, happy to share docs and details around that. Yeah, that's about it. Those are the two things which we do from which we get bulk of these speed gains. And I think this goes back to the question which you opened with. Yes, we do want to support multimodal. We are looking at how we can do an on and export of Clip, which is as robust as Clip. Nirant Kasliwal: So far we have not found anything. I've spent some time looking at this, the quality of life upgrades. So far, most of our model downloads have been through Google cloud storage hosted by Qdrant. We want to support hugging Face hub so that we can launch new models much, much faster. So we will do that soon. And the next thing is, as I called out, we always want to take performance as a first class citizen. So we are looking at how we can allow you to change or adapt frozen Embeddings, let's say open a Embedding or any other model to your specific domain. So maybe a separate toolkit within Fast Embed which is optional and not a part of the default path, because this is not something which you will use all the time. Nirant Kasliwal: We want to make sure that your training and experience parts are separate. So we will do that. Yeah, that's it. Fast and sweet. Demetrios: Amazing. Like FastEmbed. Nirant Kasliwal: Yes. Demetrios: There was somebody that talked about how you need to be good at your puns and that might be the best thing, best brag worthy stuff you've got. There's also a question coming through that I want to ask you. Is it true that when we use Qdrant client add Fast Embedding is included? We don't have to do it? Nirant Kasliwal: What do you mean by do it? As in you don't have to specify a Fast Embed model? Demetrios: Yeah, I think it's more just like you don't have to add it on to Qdrant in any way or this is completely separated. Nirant Kasliwal: So this is client side. You own all your data and even when you compress it and send us all the Embedding creation happens on your own compute. This Embedding creation does not happen on Cauldron cloud, it happens on your own compute. It's consistent with the idea that you should have as much control as possible. This is also why, as of now at least, Fast Embed is not a dedicated server. We do not want you to be running two different docker images for Qdrant and Fast Embed. Or let's say two different ports for Qdrant and Discord within the sorry, Qdrant and Fast Embed in the same docker image or server. So, yeah, that is more chaos than we would like. Demetrios: Yeah, and I think if I understood it, I understood that question a little bit differently, where it's just like this comes with Qdrant out of the box. Nirant Kasliwal: Yes, I think that's a good way to look at it. We set all the defaults for you, we select good practices for you and that should work in a vast majority of cases based on the MTEB benchmark, but we cannot guarantee that it will work for every scenario. Let's say our default model is picked for English and it's mostly tested on open domain open web data. So, for instance, if you're doing something domain specific, like medical or legal, it might not work that well. So that is where you might want to still make your own Embeddings. So that's the edge case here. Demetrios: What are some of the other knobs that you might want to be turning when you're looking at using this. Nirant Kasliwal: With Qdrant or without Qdrant? Demetrios: With Qdrant. Nirant Kasliwal: So one thing which I mean, one is definitely try the different models which we support. We support a reasonable range of models, including a few multilingual ones. Second is while we take care of this when you do use with Qdrants. So, for instance, let's say this is how you would have to manually specify, let's say, passage or query. When you do this, let's say add and query. What we do, we add the passage and query keys while creating the Embeddings for you. So this is taken care of. So whatever is your best practices for the Embedding model, make sure you use it when you're using it with Qdrant or just in isolation as well. Nirant Kasliwal: So that is one knob. The second is, I think it's very commonly recommended, we would recommend that you start with some evaluation, like have maybe let's even just five sentences to begin with and see if they're actually close to each other. And as a very important shout out in Embedding retrieval, when we use Embedding for retrieval or vector similarity search, it's the relative ordering which matters. So, for instance, we cannot say that zero nine is always good. It could also mean that the best match is, let's say, 0.6 in your domain. So there is no absolute cut off for threshold in terms of match. So sometimes people assume that we should set a minimum threshold so that we get no noise. So I would suggest that you calibrate that for your queries and domain. Nirant Kasliwal: And you don't need a lot of queries. Even if you just, let's say, start with five to ten questions, which you handwrite based on your understanding of the domain, you will do a lot better than just picking a threshold at random. Demetrios: This is good to know. Okay, thanks for that. So there's a question coming through in the chat from Shreya asking how is the latency in comparison to elasticsearch? Nirant Kasliwal: Elasticsearch? I believe that's a Qdrant benchmark question and I'm not sure how is elastics HNSW index, because I think that will be the fair comparison. I also believe elastics HNSW index puts some limitations on how many vectors they can store with the payload. So it's not an apples to apples comparison. It's almost like comparing, let's say, a single page with the entire book, because that's typically the ratio from what I remember I also might be a few months outdated on this, but I think the intent behind that question is, is Qdrant fast enough for what Qdrant does? It is definitely fast is, which is embedding similarity search. So for that, it's exceptionally fast. It's written in Rust and Twitter for all C. Similar tweets uses this at really large scale. They run a Qdrant instance. Nirant Kasliwal: So I think if a Twitter scale company, which probably does about anywhere between two and 5 million tweets a day, if they can embed and use Qdrant to serve that similarity search, I think most people should be okay with that latency and throughput requirements. Demetrios: It's also in the name. I mean, you called it Fast Embed for a reason, right? Nirant Kasliwal: Yes. Demetrios: So there's another question that I've got coming through and it's around the model selection and embedding size. And given the variety of models and the embedding sizes available, how do you determine the most suitable models and embedding sizes? You kind of got into this on how yeah, one thing that you can do to turn the knobs are choosing a different model. But how do you go about choosing which model is better? There. Nirant Kasliwal: There is the academic way of looking at and then there is the engineer way of looking at it, and then there is the hacker way of looking at it. And I will give you all these three answers in that order. So the academic and the gold standard way of doing this would probably look something like this. You will go at a known benchmark, which might be, let's say, something like Kilt K-I-L-T or multilingual text embedding benchmark, also known as MTEB or Beer, which is beir one of these three benchmarks. And you will look at their retrieval section and see which one of those marks very close to whatever is your domain or your problem area, basically. So, for instance, let's say you're working in Pharmacology, the ODS that a customer support retrieval task is relevant to. You are near zero unless you are specifically in, I don't know, a Pharmacology subscription app. So that is where you would start. Nirant Kasliwal: This will typically take anywhere between two to 20 hours, depending on how familiar you are with these data sets already. But it's not going to take you, let's say, a month to do this. So just to put a rough order of magnitude, once you have that, you try to take whatever is the best model on that subdomain data set and you see how does it work within your domain and you launch from there. At that point, you switch into the engineer's mindset. The engineer's mindset now tells you that the best way to build something is to make an informed guess about what workload or challenges you're going to foresee. Right. Like a civil engineer builds a bridge around how many cars they expect, they're obviously not going to build a bridge to carry a ship load, for instance, or a plane load, which are very different. So you start with that and you say, okay, this is the number of requests which I expect, this is what my budget is, and your budget will quite often be, let's say, in terms of latency budgets, compute and memory budgets. Nirant Kasliwal: So for instance, one of the reasons I mentioned binary quantization and product quantization is with something like binary quantization you can get 98% recall, but with 30 to 40 x memory savings because it discards all the extraneous bits and just keeps the zero or one bit of the embedding itself. And Qdrant has already measured it for you. So we know that it works for OpenAI and Cohere embeddings for sure. So you might want to use that to just massively scale while keeping your budgets as an engineer. Now, in order to do this, you need to have some sense of three numbers, right? What are your latency requirements, your cost requirements, and your performance requirement. Now, for the performance, which is where engineers are most unfamiliar with, I will give the hacker answer, which is this. Demetrios: Is what I was waiting for. Man, so excited for this one, exactly this. Please tell us the hacker answer. Nirant Kasliwal: The hacker answer is this there are two tricks which I will share. One is write ten questions, figure out the best answer, and see which model gets as many of those ten, right? The second is most embedding models which are larger or equivalent to 768 embeddings, can be optimized and improved by adding a small linear head over it. So for instance, I can take the Open AI embedding, which is 1536 embedding, take my text, pass it through that, and for my own domain, adapt the Open A embedding by adding two or three layers of linear functions, basically, right? Y is equals to MX plus C or Ax plus B y is equals to C, something like that. So it's very simple, you can do it on NumPy, you don't need Torch for it because it's very small. The header or adapter size will typically be in this range of few KBS to be maybe a megabyte, maybe. I think the largest I have used in production is about 400 500 KBS. That's about it. And that will improve your recall several, several times. Nirant Kasliwal: So that's one, that's two tricks. And a third bonus hacker trick is if you're using an LLM, sometimes what you can do is take a question and rewrite it with a prompt and make embeddings from both, and pull candidates from both. And then with Qdrant Async, you can fire both these queries async so that you're not blocked, and then use the answer of both the original question which the user gave and the one which you rewrote using the LLM and see select the results which are there in both, or figure some other combination method. Also, so most Kagglers would be familiar with the idea of ensembling. This is the way to do query inference time ensembling, that's awesome. Demetrios: Okay, dude, I'm not going to lie, that was a lot more than I was expecting for that answer. Nirant Kasliwal: Got into the weeds of retrieval there. Sorry. Demetrios: I like it though. I appreciate it. So what about when it comes to the know, we had Andre V, the CTO of Qdrant on here a few weeks ago. He was talking about binary quantization. But then when it comes to quantizing embedding models, in the docs you mentioned like quantized embedding models for fast CPU generation. Can you explain a little bit more about what quantized embedding models are and how they enhance the CPU performance? Nirant Kasliwal: So it's a shorthand to say that they optimize CPU performance. I think the more correct way to look at it is that we use the CPU better. But let's talk about optimization or quantization, which we do here, right? So most of what we do is from optimum and the way optimum call set up is they call these levels. So you can basically go from let's say level zero, which is there are no optimizations to let's say 99 where there's a bunch of extra optimizations happening. And these are different flags which you can switch. And here are some examples which I remember. So for instance, there is a norm layer which you can fuse with the previous operation. Then there are different attention layers which you can fuse with the previous one because you're not going to update them anymore, right? So what we do in training is we update them. Nirant Kasliwal: You know that you're not going to update them because you're using them for inference. So let's say when somebody asks a question, you want that to be converted into an embedding as fast as possible and as cheaply as possible. So you can discard all these extra information which you are most likely to not going to use. So there's a bunch of those things and obviously you can use mixed precision, which most people have heard of with projects, let's say like lounge CPP that you can use FP 16 mixed precision or a bunch of these things. Let's say if you are doing GPU only. So some of these things like FP 16 work better on GPU. The CPU part of that claim comes from how ONNX the runtime which we use allows you to optimize whatever CPU instruction set you are using. So as an example with intel you can say, okay, I'm going to use the Vino instruction set or the optimization. Nirant Kasliwal: So when we do quantize it, we do quantization right now with CPUs in mind. So what we would want to do at some point in the future is give you a GPU friendly quantized model and we can do a device check and say, okay, we can see that a GPU is available and download the GPU friendly model first for you. Awesome. Does that answer the. Question. Demetrios: I mean, for me, yeah, but we'll see what the chat says. Nirant Kasliwal: Yes, let's do that. Demetrios: What everybody says there. Dude, this has been great. I really appreciate you coming and walking through everything we need to know, not only about fast embed, but I think about embeddings in general. All right, I will see you later. Thank you so much, Naran. Thank you, everyone, for coming out. If you want to present, please let us know. Hit us up, because we would love to have you at our vector space talks. ",blog/fastembed-fast-lightweight-embedding-generation-nirant-kasliwal-vector-space-talks-004.md "--- draft: false title: How to meow on the long tail with Cheshire Cat AI? - Piero and Nicola | Vector Space Talks slug: meow-with-cheshire-cat short_description: Piero Savastano and Nicola Procopio discusses the ins and outs of Cheshire Cat AI. description: Cheshire Cat AI's Piero Savastano and Nicola Procopio discusses the framework's vector space complexities, community growth, and future cloud-based expansions. preview_image: /blog/from_cms/piero-and-nicola-bp-cropped.png date: 2024-04-09T03:05:00.000Z author: Demetrios Brinkmann featured: false tags: - LLM - Qdrant - Cheshire Cat AI - Vector Search - Vector database --- > *""We love Qdrant! It is our default DB. We support it in three different forms, file based, container based, and cloud based as well.”*\ — Piero Savastano > Piero Savastano is the Founder and Maintainer of the open-source project, Cheshire Cat AI. He started in Deep Learning pure research. He wrote his first neural network from scratch at the age of 19. After a period as a researcher at La Sapienza and CNR, he provides international consulting, training, and mentoring services in the field of machine and deep learning. He spreads Artificial Intelligence awareness on YouTube and TikTok. > *""Another feature is the quantization because with this Qdrant feature we improve the accuracy at the performance. We use the scalar quantitation because we are model agnostic and other quantitation like the binary quantitation.”*\ — Nicola Procopio > Nicola Procopio has more than 10 years of experience in data science and has worked in different sectors and markets from Telco to Healthcare. At the moment he works in the Media market, specifically on semantic search, vector spaces, and LLM applications. He has worked in the R&D area on data science projects and he has been and is currently a contributor to some open-source projects like Cheshire Cat. He is the author of popular science articles about data science on specialized blogs. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/2d58Xui99QaUyXclIE1uuH?si=68c5f1ae6073472f), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/K40DIG9ZzAU?feature=shared).*** ## **Top takeaways:** Did you know that companies across Italy, Germany, and the USA are already harnessing the power of Cheshire Cat for a variety of nifty purposes? It's not just a pretty face; it's evolved from a simple tutorial to an influential framework! It’s time to learn how to meow! Piero in this episode of Vector Space Talks discusses the community and open-source nature that contributes to the framework's success and expansion while Nicola reveals the Cheshire Cat’s use of Qdrant and quantization to enhance search accuracy and performance in a hybrid mode. Here are the highlights from this episode: 1. **The Art of Embedding:** Discover how Cheshire Cat uses collections with an embedder, fine-tuning them through scalar quantization and other methods to enhance accuracy and performance. 2. **Vectors in Harmony:** Get the lowdown on storing quantized vectors in a hybrid mode – it's all about saving memory without compromising on speed. 3. **Memory Matters:** Scoop on managing different types of memory within Qdrant, the go-to vector DB for Cheshire Cat. 4. **Community Chronicles:** Talking about the growing community that's shaping the evolution of Cheshire Cat - from enthusiasts to core contributors! 5. **Looking Ahead:** They've got grand plans brewing for a cloud version of Cheshire Cat. Imagine a marketplace buzzing with user-generated plugins. This is the future they're painting! > Fun Fact: The Cheshire Cat community on Discord plays a crucial role in the development and user support of the framework, described humorously by Piero as ""a mess"" due to its large and active nature. > ## Show notes: 00:00 Powerful open source framework.\ 06:11 Tutorials, code customization, conversational forms, community challenges.\ 09:09 Exploring Qdrant's memory features.\ 13:02 Qdrant experiments with document quantization.\ 17:52 Explore details, export, and memories.\ 20:42 Addressing challenges in ensuring Cheshire Cat's reliability.\ 23:36 Leveraging cool features presents significant challenges.\ 27:06 Plugin-based approach distinguishes the CAT framework.\ 29:28 Wrap up ## More Quotes from Piero and Nicola: *""We have a little partnership going on with Qdrant because the native DB in this framework is Qdrant.”*\ — Piero Savastano *""We explore the feature, the Qdrant aliases feature, and we call this topic the drunken cut effect because if we have several embedders, for example two model, two embedders with the same dimension, we can put in the collection in the episodic or declarative collection factors from two different embeddings with the same dimension. But the points are different for the same sentences and for the cat is like for the human, when he mixes drinks he has a big headache and don't understand what it retrieved.”*\ — Nicola Procopio *""It's a classic language model assistant chat we have for each message you have explainability, you can upload documents. This is all handled automatically and we start with new stuff. You have a memory page where you can search through the memories of your cat, delete, explore collections, collection from Qdrant.”*\ — Piero Savastano *""Because I'm a researcher, a data scientist, I like to play with strange features like binary quantization, but we need to maintain the focus on the user needs, on the user behavior.”*\ — Nicola Procopio ## Transcript: Demetrios: What is up, good people of the Internet? We are here for another one of these vector space talks and I've got to say it's a special day. We've got the folks from Cheshire Cat coming at you full on today and I want to get it started right away because I know they got a lot to talk about. And today we get a two for one discount. It's going to be nothing like you have experienced before. Or maybe those are big words. I'm setting them up huge. We've got Piero coming at us live. Where you at, Piero? Piero, founder. Demetrios: There he is, founder at Cheshire Cat. And you are joined today by Nicola, one of the core contributors. It's great to have you both very excited. So you guys are going to be talking to us all about what you poetically put how to meow on the long tail with Cheshire Cat. And so I know you've got some slides prepared. I know you've got all that fun stuff working right now and I'm going to let you hop right into it so we don't waste any time. You ready? Who wants to share their screen first? Is it you, Nicola, or go? Piero Savastano: I'll go. Thanks. Demetrios: Here we go. Man, you should be seeing it right now. Piero Savastano: Yes. Demetrios: Boom. Piero Savastano: Let's go. Thank you, Demetrios. We're happy to be hosted at the vector space talk. Let's talk about the Cheshire Cat AI. This is an open source framework. We have a little partnership going on with Qdrant because the native DB in this framework is Qdrant. It's a python framework. And before starting to get into the details, I'm going to show you a little video. Piero Savastano: This is the website. So you see, it's a classic language model assistant chat we have for each message you have explainability, you can upload documents. This is all handled automatically and we start with new stuff. You have a memory page where you can search through the memories of your cat, delete, explore collections, collection from Qdrant. We have a plugin system and you can publish any plugin. You can sell your plugin. There is a big ecosystem already and we also give explanation on memories. We have adapters for the most common language models. Piero Savastano: Dark team, you can do a lot of stuff with the framework. This is how it presents itself. We have a blog with tutorials, but going back to our numbers, it is open source, GPL licensed. We have some good numbers. We are mostly active in Italy and in a good part of Europe, East Europe, and also a little bit of our communities in the United States. There are a lot of contributors already and our docker image has been downloaded quite a few times, so it's really easy to start up and running because you just docker run and you're good to go. We have also a discord server with thousands of members. If you want to join us, it's going to be fun. Piero Savastano: We like meme, we like to build culture around code, so it is not just the code, these are the main components of the cat. You have a chat as usual. The rabbit hole is our module dedicated to document ingestion. You can extend all of these parts. We have an agent manager. Meddetter is the module to manage plugins. We have a vectordb which is Qdrant natively, by the way. We use both the file based Qdrant, the container version, and also we support the cloud version. Piero Savastano: So if you are using Qdrant, we support the whole stack. Right now with the framework we have an embedder and a large language model coming to the embedder and language models. You can use any language model or embedded you want, closed source API, open Ollama, self hosted anything. These are the main features. So the first feature of the cat is that he's ready to fight. It is already dogsized. It's model agnostic. One command in the terminal and you can meow. Piero Savastano: The other aspect is that there is not only a retrieval augmented generation system, but there is also an action agent. This is all customizable. You can plug in any script you want as an agent, or you can customize the ready default presence default agent. And one of our specialty is that we do retrieve augmented generation, not only on documents as everybody's doing, but we do also augmented generation over conversations. I can hear your keyboard. We do augmented generation over conversations and over procedures. So also our tools and form conversational forms are embedded into the DB. We have a big plugin system. Piero Savastano: It's really easy to use and with different primitives. We have hooks which are events, WordPress style events. We have tools, function calling, and also we just build up a spec for conversational forms. So you can use your assistant to order a pizza, for example, multitool conversation and order a pizza, book a flight. You can do operative stuff. I already told you, and I repeat a little, not just a runner, but it's a full fledged framework. So we built this not to use language model, but to build applications on top of language models. There is a big documentation where all the events are described. Piero Savastano: You find tutorials and with a few lines of code you can change the prompt. You can use long chain inspired tools, and also, and this is the big part we just built, you can use conversational forms. We launched directly on GitHub and in our discord a pizza challenge, where we challenged our community members to build up prototypes to support a multi turn conversational pizza order. And the result of this challenge is this spec where you define a pedantic model in Python and then you subclass the pizza form, the cut form from the framework, and you can give examples on utterances that triggers the form, stops the forms, and you can customize the submit function and any other function related to the form. So with a simple subclass you can handle pragmatic, operational, multi turn conversations. And I truly believe we are among the first in the world to build such a spec. We have a lot of plugins. Many are built from the community itself. Piero Savastano: Many people is already hosting private plugins. There is a little marketplace independent about plugins. All of these plugins are open source. There are many ways to customize the cat. The big advantage here is no vendor lock in. So since the framework is open and the plugin system can be open, you do not need to pass censorship from big tech giants. This is one of the best key points of moving the framework along the open source values for the future. We plan to add the multimodality. Piero Savastano: At the moment we are text only, but there are plugins to generate images. But we want to have images and sounds natively into the framework. We already accomplished the conversational forms. In a later talk we can speak in more detail about this because it's really cool and we want to integrate a knowledge graph into the framework so we can play with both symbolic vector representations and symbolic network ones like linked data, for example wikidata. This stuff is going to be really interesting within. Yes, we love the Qdrant. It is our default DB. We support it in three different forms, file based, container based, and cloud based also. Piero Savastano: But from now on I want to give word to Nicola, which is way more expert on this vector search topic and he wrote most of the part related to the DB. So thank you guys. Nicola to you. Nicola Procopio: Thanks Piero. Thanks Demetrios. I'm so proud to be hosted here because I'm a vector space talks fan. Okay, Qdrant is the vector DB of the cat and now I will try to explore the feature that we use on Cheshire Cat. The first slide, explain the cut's memory. Because Qdrant is our memory. We have a long term memory in three parts. The episodic memory when we store and manage the conversation, the chart, the declarative memory when we store and manage documents and the procedural memory when we store and manage the tools how to manage three memories with several embedder because the user can choose his fabric embedder and change it. Nicola Procopio: We explore the feature, the Qdrant aliases feature, and we call this topic the drunken cut effect because if we have several embedders, for example two model, two embedders with the same dimension, we can put in the collection in the episodic or declarative collection factors from two different embeddings with the same dimension. But the points are different for the same sentences and for the cat is like for the human, when he mixes drinks he has a big headache and don't understand what it retrieved. To us the flow now is this. We create the collection with the name and we use the aliases to. Piero Savastano: Label. Nicola Procopio: This collection with the name of the embedder used. When the user changed the embedder, we check if the embedder has the same dimension. If has the same dimension, we check also the aliases. If the aliases is the same we don't change nothing. Otherwise we create another collection and this is the drunken cut effect. The first feature that we use in the cat. Another feature is the quantization because with this Qdrant feature we improve the accuracy at the performance. We use the scalar quantitation because we are model agnostic and other quantitation like the binary quantitation. Nicola Procopio: If you read on the Qdrant documents are experimented on not to all embedder but also for OpenAI and Coer. If I remember well with this discover quantitation and the scour quantization is used in the storage step. The vector are quantized and stored in a hybrid mode, the original vector on disk, the quantized vector in RAM and with this procedure we procedure we can use less memory. In case of Qdrant scalar quantization, the flat 32 elements is converted to int eight on a single number on a single element needs 75% less memory. In case of big embeddings like I don't know Gina embeddings or mistral embeddings with more than 1000 elements. This is big improvements. The second part is the retriever step. We use a quantizement query at the quantized vector to calculate causing similarity and we have the top n results like a simple semantic search pipeline. Nicola Procopio: But if we want a top end results in quantize mod, the quantity mod has less quality on the information and we use the oversampling. The oversampling is a simple multiplication. If we want top n with n ten with oversampling with a score like one five, we have 15 results, quantities results. When we have these 15 quantities results, we retrieve also the same 15 unquanted vectors. And on these unquanted vectors we rescale busset on the query and filter the best ten. This is an improvement because the retrieve step is so fast. Yes, because using these tip and tricks, the Cheshire capped vectors achieve up. Piero Savastano: Four. Nicola Procopio: Times lower memory footprint and two time performance increase. We are so fast using this Qdrant feature. And last but not least, we go in deep on the memory. This is the visualization that Piero showed before. This is the vector space in 2D we use Disney is very similar to the Qdrant cloud visualization. For the embeddings we have the search bar, how many vectors we want to retrieve. We can choose the memory and other filters. We can filter on the memory and we can wipe a memory or all memory and clean all our space. Nicola Procopio: We can go in deep using the details. We can pass on the dot and we have a bubble or use the detail, the detail and we have a list of first n results near our query for every memory. Last but not least, we can export and share our memory in two modes. The first is exporting the JSON using the export button from the UI. Or if you are very curious, you can navigate the folder in the project and share the long term memory folder with all the memories. Or the experimental feature is wake up the door mouse. This feature is simple, the download of Qdrant snapshots. This is experimental because the snapshot is very easy to download and we will work on faster methods to use it. Nicola Procopio: But now it works and sometimes us, some user use this feature for me is all and thank you. Demetrios: All right, excellent. So that is perfect timing. And I know there have been a few questions coming through in the chat, one from me. I think you already answered, Piero. But when we can have some pistachio gelato made from good old Cheshire cat. Piero Savastano: So the plan is make the cat order gelato from service from an API that can already be done. So we meet somewhere or at our house and gelato is going to come through the cat. The cat is able to take, each of us can do a different order, but to make the gelato itself, we're going to wait for more open source robotics to come to our way. And then we go also there. Demetrios: Then we do that, we can get the full program. How cool is that? Well, let's see, I'll give it another minute, let anyone from the chat ask any questions. This was really cool and I appreciate you all breaking down. Not only the space and what you're doing, but the different ways that you're using Qdrant and the challenges and the architecture behind it. I would love to know while people are typing in their questions, especially for you, Nicola, what have been some of the challenges that you've faced when you're dealing with just trying to get Cheshire Cat to be more reliable and be more able to execute with confidence? Nicola Procopio: The challenges are in particular to mix a lot of Qdrant feature with the user needs. Because I'm a researcher, a data scientist, I like to play with strange features like binary quantization, but we need to maintain the focus on the user needs, on the user behavior. And sometimes we cut some feature on the Cheshire cat because it's not important now for for the user and we can introduce some bug, or rather misunderstanding for the user. Demetrios: Can you hear me? Yeah. All right, good. Now I'm seeing a question come through in the chat that is asking if you are thinking about cloud version of the cat. Like a SaaS, it's going to come. It's in the works. Piero Savastano: It's in the works. Not only you can self host the cat freely, some people install it on a raspberry, so it's really lightweight. We plan to have an osted version and also a bigger plugin ecosystem with a little marketplace. Also user will be able to upload and maybe sell their plugins. So we want to build an know our vision is a WordPress style ecosystem. Demetrios: Very cool. Oh, that is awesome. So basically what I'm hearing from Nicola asking about some of the challenges are like, hey, there's some really cool features that we've got in Qdrant, but it's almost like you have to keep your eye on the prize and make sure that you're building for what people need and want instead of just using cool features because you can use cool features. And then Piero, you're saying, hey, we really want to enable people to be able to build more cool things and use all these cool different features and whatever flavors or tools they want to use. But we want to be that ecosystem creator so that anyone can bring and create an app on top of the ecosystem and then enable them to get paid also. So it's not just Cheshire cat getting paid, it's also the contributors that are creating cool stuff. Piero Savastano: Yeah. Community is the first protagonist without community. I'm going to tell you, the cat started as a tutorial. When chat GPT came out, I decided to do a little rug tutorial and I chose Qdrant as vector. I took OpenAI as a language model, and I built a little tutorial, and then from being a tutorial to show how to build an agent on GitHub, it completely went out of hand. So the whole framework is organically grown? Demetrios: Yeah, that's the best. That is really cool. Simone is asking if there's companies that are already using Cheshire cat, and if you can mention a few. Piero Savastano: Yeah, okay. In Italy, there are at least 1015 companies distributed along education, customer care, typical chatbot usage. Also, one of them in particular is trying to build for public administration, which is really hard to do on the international level. We are seeing something in Germany, like web agencies starting to use the cat a little on the USA. Mostly they are trying to build agents using the cat and Ollama as a runner. And a company in particular presented in a conference in Vegas a pitch about a 3d avatar. Inside the avatar, there is the cat as a linguistic device. Demetrios: Oh, nice. Piero Savastano: To be honest, we have a little problem tracking companies because we still have no telemetry. We decided to be no telemetry for the moment. So I hope companies will contribute and make themselves happen. If that does not, we're going to track a little more. But companies using the cat are at least in the 50, 60, 70. Demetrios: Yeah, nice. So if anybody out there is using the cat, and you have not talked to Piero yet, let him know so that he can have a good idea of what you're doing and how you're doing it. There's also another question coming through about the market analysis. Are there some competitors? Piero Savastano: There are many competitors. When you go down to what distinguishes the cat from many other frameworks that are coming out, we decided since the beginning to go for a plugin based operational agent. And at the moment, most frameworks are retrieval augmented generation frameworks. We have both retrieval augmented generation. We have tooling, we have forms. The tools and the forms are also embedded. So the cat can have 20,000 tools, because we also embed the tools and we make a recall over the function calling. So we scaled up both documents, conversation and tools, conversational forms, and I've not seen anybody doing that till now. Piero Savastano: So if you want to build an application, a pragmatic, operational application, to buy products, order pizza, do stuff, have a company assistant. The cat is really good at the moment. Demetrios: Excellent. Nicola Procopio: And the cat has a very big community on discord works. Piero Savastano: Our discord is a mess. Demetrios: You got the best memes around. If that doesn't make people join the discord, I don't know what will. Piero Savastano: Please, Nicola. Sorry for interrupting. Demetrios: No. Nicola Procopio: Okay. The community is a plus for Cheshire Cat because we have a lot of developer user on Discord, and for an open source project, the community is fundamentally 100%. Demetrios: Well fellas, this has been awesome. I really appreciate you coming on the vector space talks and sharing about the cat for anybody that is interested. Hopefully they go, they check it out, they join your community, they share some memes and they get involved, maybe even contribute back and create some tools. That would be awesome. So Piero and Nicola, I really appreciate your time. We'll see you all later. Piero Savastano: Thank you. Nicola Procopio: Thank you. Demetrios: And for anybody out there that wants to come on to the vector space talks and give us a bit of an update on how you're using Qdrant, we'd love to hear it. Just reach out and we'll schedule you in. Until next time. See y'all. Bye. ",blog/how-to-meow-on-the-long-tail-with-cheshire-cat-ai-piero-and-nicola-vector-space-talks.md "--- draft: false title: Production-scale RAG for Real-Time News Distillation - Robert Caulk | Vector Space Talks slug: real-time-news-distillation-rag short_description: Robert Caulk tackles the challenges and innovations in open source AI and news article modeling. description: Robert Caulk, founder of Emergent Methods, discusses the complexities of context engineering, the power of Newscatcher API for broader news access, and the sophisticated use of tools like Qdrant for improved recommendation systems, all while emphasizing the importance of efficiency and modularity in technology stacks for real-time data management. preview_image: /blog/from_cms/robert-caulk-bp-cropped.png date: 2024-03-25T08:49:22.422Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - Vector Search - Retrieval Augmented Generation - LLM --- > *""We've got a lot of fun challenges ahead of us in the industry, I think, and the industry is establishing best practices. Like you said, everybody's just trying to figure out what's going on. And some of these base layer tools like Qdrant really enable products and enable companies and they enable us.”*\ -- Robert Caulk > Robert, Founder of Emergent Methods is a scientist by trade, dedicating his career to a variety of open-source projects that range from large-scale artificial intelligence to discrete element modeling. He is currently working with a team at Emergent Methods to adaptively model over 1 million news articles per day, with a goal of reducing media bias and improving news awareness. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/7lQnfv0v2xRtFksGAP6TUW?si=Vv3B9AbjQHuHyKIrVtWL3Q), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/0ORi9QJlud0).*** ## **Top takeaways:** How do Robert Caulk and Emergent Methods contribute to the open-source community, particularly in AI systems and news article modeling? In this episode, we'll be learning stuff about open-source projects that are reshaping how we interact with AI systems and news article modeling. Robert takes us on an exploration into the evolving landscape of news distribution and the tech making it more efficient and balanced. Here are some takeaways from this episode: 1. **Context Matters**: Discover the importance of context engineering in news and how it ensures a diversified and consumable information flow. 2. **Introducing Newscatcher API**: Get the lowdown on how this tool taps into 50,000 news sources for more thorough and up-to-date reporting. 3. **The Magic of Embedding**: Learn about article summarization and semantic search, and how they're crucial for discovering content that truly resonates. 4. **Qdrant & Cloud**: Explore how Qdrant's cloud offering and its single responsibility principle support a robust, modular approach to managing news data. 5. **Startup Superpowers**: Find out why startups have an edge in implementing new tech solutions and how incumbents are tied down by legacy products. > Fun Fact: Did you know that startups' lack of established practices is actually a superpower in the face of new tech paradigms? Legacy products can't keep up! > ## Show notes: 00:00 Intro to Robert and Emergent Methods.\ 05:22 Crucial dedication to scaling context engineering.\ 07:07 Optimizing embedding for semantic similarity in search.\ 13:07 New search technology boosts efficiency and speed.\ 14:17 Reliable cloud provider with privacy and scalability.\ 17:46 Efficient data movement and resource management.\ 22:39 GoLang for services, Rust for security.\ 27:34 Logistics organized; Newscatcher provides up-to-date news.\ 30:27 Tested Weaviate and another in Rust.\ 32:01 Filter updates by starring and user preferences. ## More Quotes from Robert: *""Web search is powerful, but it's slow and ultimately inaccurate. What we're building is real time indexing and we couldn't do that without Qdrant*”\ -- Robert Caulk *""You need to start thinking about persistence and search and making sure those services are robust. That's where Qdrant comes into play. And we found that the all in one solutions kind of sacrifice performance for convenience, or sacrifice accuracy for convenience, but it really wasn't for us. We'd rather just orchestrate it ourselves and let Qdrant do what Qdrant does, instead of kind of just hope that an all in one solution is handling it for us and that allows for modularity performance.”*\ -- Robert Caulk *""Anyone riding the Qdrant wave is just reaping benefits. It seems monthly, like two months ago, sparse vector support got added. There's just constantly new massive features that enable products.”*\ -- Robert Caulk ## Transcript: Demetrios: Robert, it's great to have you here for the vector space talks. I don't know if you're familiar with some of this fun stuff that we do here, but we get to talk with all kinds of experts like yourself on what they're doing when it comes to the vector space and how you've overcome challenges, how you're working through things, because this is a very new field and it is not the most intuitive, as you will tell us more in this upcoming talk. I really am excited because you've been a scientist by trade. Now, you're currently founder at Emergent Methods and you've dedicated your career to a variety of open source projects that range from the large scale AI systems to the discrete element modeling. Now at emergent methods, you are adaptively modeling over 1 million news articles per day. That sounds like a whole lot of news articles. And you've been talking and working through production grade RAG, which is basically everyone's favorite topic these days. So I know you got to talk for us, man. Demetrios: I'm going to hand it over to you. I'll bring up your screen right now, and when someone wants to answer or ask a question, feel free to throw it in the chat and I'll jump out at Robert and stop him if needed. Robert Caulk: Sure. Demetrios: Great to have you here, man. I'm excited for this one. Robert Caulk: Thanks for having me, Demetrios. Yeah, it's a great opportunity. I love talking about vector spaces, parameter spaces. So to talk on the show is great. We've got a lot of fun challenges ahead of us in the industry, I think, and the industry is establishing best practices. Like you said, everybody's just trying to figure out what's going on. And some of these base layer tools like Qdrant really enable products and enable companies and they enable us. So let me start. Robert Caulk: Yeah, like you said, I'm Robert and I'm a founder of emergent methods. Our background, like you said, we are really committed to free and open source software. We started with a lot of narrow AI. Freak AI was one of our original projects, which is AI ML for algo trading very narrow AI, but we came together and built flowdapt. It's a really nice cluster orchestration software, and I'll talk a little bit about that during this presentation. But some of our background goes into, like you said, large scale deep learning for supercomputers. Really cool, interesting stuff. We have some cloud experience. Robert Caulk: We really like configuration, so let's dive into it. Why do we actually need to engineer context in the news? There's a lot of reasons why news is important and why it needs to be distributed in a way that's balanced and diversified, but also consumable. Right, let's look at Chat GPT on the left. This is Chat GPT plus it's kind of hanging out searching for Gaza news on Bing, trying to find the top three articles live. Web search is powerful, but it's slow and ultimately inaccurate. What we're building is real time indexing and we couldn't do that without Qdrant, and there's a lot of reasons which I'll be perfectly happy to dive into, but eventually Chat GPT will pull something together here. There it is. And the first thing it reports is 25 day old article with 25 day old nudes. Robert Caulk: Old news. So it's just inaccurate. So it's borderline dangerous, what's happening here. Right, so this is a very delicate topic. Engineering context in news properly, which takes a lot of energy, a lot of time and dedication and focus, and not every company really has this sort of resource. So we're talking about enforcing journalistic standards, right? OpenAI and Chat GPt, they just don't have the time and energy to build a dedicated prompt for this sort of thing. It's fine, they're doing great stuff, they're helping you code. But someone needs to step in and really do enforce some journalistic standards here. Robert Caulk: And that includes enforcing diversity, languages, regions and sources. If I'm going to read about Gaza, what's happening over there, you can bet I want to know what Egypt is saying and what France is saying and what Algeria is saying. So let's do this right. That's kind of what we're suggesting, and the only way to do that is to parse a lot of articles. That's how you avoid outdated, stale reporting. And that's a real danger, which is kind of what we saw on that first slide. Everyone here knows hallucination is a problem and it's something you got to minimize, especially when you're talking about the news. It's just a really high cost if you get it wrong. Robert Caulk: And so you need people dedicated to this. And if you're going to dedicate a ton of resources and ton of people, you might as well scale that properly. So that's kind of where this comes into. We call this context engineering news context engineering, to be precise, before llama two, which also is enabling products left and right. As we all know, the traditional pipeline was chunk it up, take 512 tokens, put it through a translator, put it through distill art, do some sentence extraction, and maybe text classification, if you're lucky, get some sentiment out of it and it works. It gets you something. But after we're talking about reading full articles, getting real rich, context, flexible output, translating, summarizing, really deciding that custom extraction on the fly as your product evolves, that's something that the traditional pipeline really just doesn't support. Right. Robert Caulk: We're talking being able to on the fly say, you know what, actually we want to ask this very particular question of all articles and get this very particular field out. And it's really just a prompt modification. This all is based on having some very high quality, base level, diversified news. And so we'll talk a little bit more. But newscatchers is one of the sources that we're using, which opens up 50,000 different sources. So check them out. That's newscatcherapi.com. They even give free access to researchers if you're doing research in this. Robert Caulk: So I don't want to dive too much into the direct rag stuff. We can go deep, but I'm happy to talk about some examples of how to optimize this and how we've optimized it. Here on the right, you can see the diagram where we're trying to follow along the process of summarizing and embedding. And I'll talk a bit more about that in a moment. It's here to support after we've summarized those articles and we're ready to embed that. Embedding is really important to get that right because like the name of the show suggests you have to have a clean cluster vector space if you're going to be doing any sort of really rich semantic similarity searches. And if you're going to be able to dive deep into extracting important facts out of all 1 million articles a day, you're going to need to do this right. So having a user query which is not equivalent to the embedded page where this is the data, the enriched data that the embedding that we really want to be able to do search on. Robert Caulk: And then how do we connect the dots here? Of course, there are many ways to go about it. One way which is interesting and fun to talk about is ide. So that's basically a hypothetical document embedding. And what you do is you use the LLM directly to generate a fake article. And that's what we're showing here on the right. So let's say if the user says, what's going on in New York City government, well, you could say, hey, write me just a hypothetical summary based, it could completely fake and use that to create a fake embedding page and use that for the search. Right. So then you're getting a lot closer to where you want to go. Robert Caulk: There's some limitations to this, to it's, there's a computational cost also, it's not updated. It's based on whatever. It's basically diving into what it knows about the New York City government and just creating keywords for you. So there's definitely optimizations here as well. When you talk about ambiguity, well, what if the user follows up and says, well, why did they change the rules? Of course, that's where you can start prompt engineering a little bit more and saying, okay, given this historic conversation and the current question, give me some explicit question without ambiguity, and then do the high, if that's something you want to do. The real goal here is to stay in a single parameter space, a single vector space. Stay as close as possible when you're doing your search as when you do your embedding. So we're talking here about production scale of stuff. Robert Caulk: So I really am happy to geek out about the stack, the open source stack that we're relying on, which includes Qdrant here. But let's start with VLLM. I don't know if you guys have heard of it. This is a really great new project, and their focus on continuous batching and page detention. And if I'm being completely honest with you, it's really above my pay grade in the technicals and how they're actually implementing all of that inside the GPU memory. But what we do is we outsource that to that project and we really like what they're doing, and we've seen really good results. It's increasing throughput. So when you're talking about trying to parse through a million articles, you're going to need a lot of throughput. Robert Caulk: The other is text embedding inference. This is a great server. A lot of vector databases will say, okay, we'll do all the embedding for you and we'll do all everything. But when you move to production scale, I'll talk a bit about this later. You need to be using micro service architecture, so it's not super smart to have your database bogged down with doing sorting out the embeddings and sorting out other things. So honestly, I'm a real big fan of single responsibility principle, and that's what Tei does for you. And it also does dynamic batching, which is great in this world where everything is heterogeneous lengths of what's coming in and what's going out. So it's great. Robert Caulk: It really simplifies the process and allows you to isolate resources. But now the star of the show Qdrant, it's really come into its own. Anyone riding the Qdrant wave is just reaping benefits. It seems monthly, like two months ago, sparse vector support got added. There's just constantly new massive features that enable products. Right. So for us, we're doing so much up Cert, we really need to minimize client connections and networking overhead. So you got that batch up cert. Robert Caulk: The filters are huge. We're talking about real time filtering. We can't be searching on news articles from a month ago, two months ago, if the user is asking for a question that's related to the last 24 hours. So having that timestamp filtering and having it be efficient, which is what it is in Qdrant, is huge. Keyword filtering really opens up a massive realm of product opportunities for us. And then the sparse vectors, we hopped on this train immediately and are just seeing benefits. I don't want to say replacement of elasticsearch, but elasticsearch is using sparse vectors as well. So you can add splade into elasticsearch, and splade is great. Robert Caulk: It's a really great alternative to BM 25. It's based on that architecture, and that really opens up a lot of opportunities for filtering out keywords that are kind of useless to the search when the user uses the and a, and then there, these words that are less important splays a bit of a hybrid into semantics, but sparse retrieval. So it's really interesting. And then the idea of hybrid search with semantic and a sparse vector also opens up the ability to do ranking, and you got a higher quality product at the end, which is really the goal, right, especially in production. Point number four here, I would say, is probably one of the most important to us, because we're dealing in a world where latency is king, and being able to deploy Qdrant inside of the same cluster as all the other services. So we're just talking through the switch. That's huge. We're never getting bogged down by network. Robert Caulk: We're never worried about a cloud provider potentially getting overloaded or noisy neighbor problems, stuff like that, completely removed. And then you got high privacy, right. All the data is completely isolated from the external world. So this point number four, I'd say, is one of the biggest value adds for us. But then distributing deployment is huge because high availability is important, and deep storage, which when you're in the business of news archival, and that's one of our main missions here, is archiving the news forever. That's an ever growing database, and so you need a database that's going to be able to grow with you as your data grows. So what's the TLDR to this context? Engineering? Well, service orchestration is really just based on service orchestration in a very heterogeneous and parallel event driven environment. On the right side, we've got the user requests coming in. Robert Caulk: They're hitting all the same services, which every five minutes or every two minutes, whatever you've scheduled the scrape workflow on, also hitting the same services, this requires some orchestration. So that's kind of where I want to move into discussing the real production, scaling, orchestration of the system and how we're doing that. Provide some diagrams to show exactly why we're using the tools we're using here. This is an overview of our Kubernetes cluster with the services that we're using. So it's a bit of a repaint of the previous diagram, but a better overview about showing kind of how these things are connected and why they're connected. I'll go through one by one on these services to just give a little deeper dive into each one. But the goal here is for us, in our opinion, microservice orchestration is key. Sticking to single responsibility principle. Robert Caulk: Open source projects like Qdrant, like Tei, like VLLM and Kubernetes, it's huge. Kubernetes is opening up doors for security and for latency. And of course, if you're going to be getting involved in this game, you got to find the strong DevOps. There's no escaping that. So let's step through kind of piece by piece and talk about flow Dapp. So that's our project. That's our open source project. We've spent about two years building this for our needs, and we're really excited because we did a public open sourcing maybe last week or the week before. Robert Caulk: So finally, after all of our testing and rewrites and refactors, we're open. We're open for business. And it's running asknews app right now, and we're really excited for where it's going to go and how it's going to help other people orchestrate their clusters. Our goal and our priorities were highly paralyzed compute and we were running tests using all sorts of different executors, comparing them. So when you use Flowdapt, you can choose ray or dask. And that's key. Especially with vanilla Python, zero code changes, you don't need to know how ray or dask works. In the back end, flowdapt is vanilla Python. Robert Caulk: That was a key goal for us to ensure that we're optimizing how data is moving around the cluster. Automatic resource management this goes back to Ray and dask. They're helping manage the resources of the cluster, allocating a GPU to a task, or allocating multiple tasks to one GPU. These can come in very, very handy when you're dealing with very heterogeneous workloads like the ones that we discussed in those previous slides. For us, the biggest priority was ensuring rapid prototyping and debugging locally. When you're dealing with clusters of 1015 servers, 40 or 5100 with ray, honestly, ray just scales as far as you want. So when you're dealing with that big of a cluster, it's really imperative that what you see on your laptop is also what you are going to see once you deploy. And being able to debug anything you see in the cluster is big for us, we really found the need for easy cluster wide data sharing methods between tasks. Robert Caulk: So essentially what we've done is made it very easy to get and put values. And so this makes it extremely easy to move data and share data between tasks and make it highly available and stay in cluster memory or persist it to disk, so that when you do the inevitable version update or debug, you're reloading from a persisted state in the real time. News business scheduling is huge. Scheduling, making sure that various workflows are scheduled at different points and different periods or frequencies rather, and that they're being scheduled correctly, and that their triggers are triggering exactly what you need when you need it. Huge for real time. And then one of our biggest selling points, if you will, for this project is Kubernetes style. Everything. Our goal is everything's Kubernetes style, so that if you're coming from Kubernetes, everything's familiar, everything's resource oriented. Robert Caulk: We even have our own flowectl, which would be the Kubectl style command schemas. A lot of what we've done is ensuring deployment cycle efficiency here. So the goal is that flowdapt can schedule everything and manage all these services for you, create workflows. But why these services? For this particular use case, I'll kind of skip through quickly. I know I'm kind of running out of time here, but of course you're going to need some proprietary remote models. That's just how it works. You're going to of course share that load with on premise llms to reduce cost and to have some reasoning engine on premise. But there's obviously advantages and disadvantages to these. Robert Caulk: I'm not going to go through them. I'm happy to make these slides available, and you're welcome to kind of parse through the details. Yeah, for sure. You need to start thinking about persistence and search and making sure those services are robust. That's where Qdrant comes into play. And we found that the all in one solutions kind of sacrifice performance for convenience, or sacrifice accuracy for convenience, but it really wasn't for us. We'd rather just orchestrate it ourselves and let Qdrant do what Qdrant does, instead of kind of just hope that an all in one solution is handling it for us and that allows for modularity performance. And we'll dump Qdrant if we want to. Robert Caulk: Probably we won't. Or we'll dump if we need to, or we'll swap out for whatever replaces vllm. Trying to keep things modular so that future engineers are able to adapt with the tech that's just blowing up and exploding right now. Right. The last thing to talk about here in a production scale environment is really minimizing the latency. I touched on this with Kubernetes ensuring that these services are sitting on the same network, and that is huge. But that talks about decommunication latency. But when you start talking about getting hit with a ton of traffic, production scale, tons of people asking a question all simultaneously, and you needing to go hit a variety of services, well, this is where you really need to isolate that to an asynchronous environment. Robert Caulk: And of course, if you could write this all in Golang, that's probably going to be your best bet for us. We have some services written in Golang, but predominantly, especially the endpoints that the ML engineers need to work with. We're using fast API on pydantic and honestly, it's powerful. Pydantic V 2.0 now runs on Rust, and as anyone in the Qdrant community knows, Rust is really valuable when you're dealing with highly parallelized environments that require high security and protections for immutability and atomicity. Forgive me for the pronunciation, that kind of sums up the production scale talk, and I'm happy to answer questions. I love diving into this sort of stuff. I do have some just general thoughts on why startups are so much more well positioned right now than some of these incumbents, and I'll just do kind of a quick run through, less than a minute just to kind of get it out there. We can talk about it, see if we agree or disagree. Robert Caulk: But you touched on it, Demetrios, in the introduction, which was the best practices have not been established. That's it. That is why startups have such a big advantage. And the reason they're not established is because, well, the new paradigm of technology is just underexplored. We don't really know what the limits are and how to properly handle these things. And that's huge. Meanwhile, some of these incumbents, they're dealing with all sorts of limitations and resistance to change and stuff, and then just market expectations for incumbents maintaining these kind of legacy products and trying to keep them hobbling along on this old tech. In my opinion, startups, you got your reasoning engine building everything around a reasoning engine, using that reasoning engine for every aspect of your system to really open up the adaptivity of your product. Robert Caulk: And okay, I won't put elasticsearch in the incumbent world. I'll keep elasticsearch in the middle. I understand it still has a lot of value, but some of these vendor lock ins, not a huge fan of. But anyway, that's it. That's kind of all I have to say. But I'm happy to take questions or chat a bit. Demetrios: Dude, I've got so much to ask you and thank you for breaking down that stack. That is like the exact type of talk that I love to see because you open the kimono full on. And I was just playing around with asknews app. And so I think it's probably worth me sharing my screen just to show everybody what exactly that is and how that looks at the moment. So you should be able to see it now. Right? And super cool props to you for what you've built. Because I went, and intuitively I was able to say like, oh, cool, I can change, I can see positive news, and I can go by the region that I'm looking at. I want to make sure that I'm checking out all the stuff in Europe or all the stuff in America categories. Demetrios: I can look at sports, blah blah blah, like as if you were flipping the old newspaper and you could go to the sports section or the finance section, and then you cite the sources and you see like, oh, what's the trend in the coverage here? What kind of coverage are we getting? Where are we at in the coverage cycle? Probably something like that. And then, wait, although I was on the happy news, I thought murder, she wrote. So anyway, what we do is we. Robert Caulk: Actually sort it from we take the poll and we actually just sort most positive to the least positive. But you're right, we were talking the other day, we're like, let's just only show the positive. But yeah, that's a good point. Demetrios: There you go. Robert Caulk: Murder, she wrote. Demetrios: But the one thing that I was actually literally just yesterday talking to someone about was how you update things inside of your vector database. So I can imagine that news, as you mentioned, news cycles move very fast and the news that happened 2 hours ago is very different. The understanding of what happened in a very big news event is very different 2 hours ago than it is right now. So how do you make sure that you're always pulling the most current and up to date information? Robert Caulk: This is another logistical point that we think needs to get sorted properly and there's a few layers to it. So for us, as we're parsing that data coming in from Newscatcher, so newscatcher is doing a good job of always feeding the latest buckets to us. Sometimes one will be kind of arrive, but generally speaking, it's always the latest news. So we're taking five minute buckets, and then with those buckets, we're going through and doing all of our enrichment on that, adding it to Qdrant. And that is the point where we use that timestamp filtering, which is such an important point. So in the metadata of Qdrant, we're using the range filter, which is where we call that the timestamp filter, but it's really range filter, and that helps. So when we're going back to update things, we're sorting and ensuring that we're filtering out only what we haven't seen. Demetrios: Okay, that makes complete sense. And basically you could generalize this to something like what I was talking to with people yesterday about, which was, hey, I've got an HR policy that gets updated every other month or every quarter, and I want to make sure that if my HR chatbot is telling people what their vacation policy is, it's pulling from the most recent HR policy. So how do I make sure and do that? And how do I make sure that my vector database isn't like a landmine where it's pulling any information, but we don't necessarily have that control to be able to pull the correct information? And this comes down to that retrieval evaluation, which is such a hot topic, too. Robert Caulk: That's true. No, I think that's a key piece of the puzzle. Now, in that particular example, maybe you actually want to go in and start cleansing a bit, your database, just to make sure if it's really something you're never going to need again. You got to get rid of it. This is a piece I didn't add to the presentation, but it's tangential. You got to keep multiple databases and you got to making sure to isolate resources and cleaning out a database, especially in real time. So ensuring that your database is representative of what you want to be searching on. And you can do this with collections too, if you want. Robert Caulk: But we find there's sometimes a good opportunity to isolate resources in that sense, 100%. Demetrios: So, another question that I had for you was, I noticed Mongo was in the stack. Why did you not just use the Mongo vector option? Is it because of what you were mentioning, where it's like, yeah, you have these all-in-one options, but you sacrifice that performance for the convenience? Robert Caulk: We didn't test that, to be honest, I can't say. All I know is we tested weaviate, we tested one other, and I just really like. Although I was going to say I like that it's written in rust, although I believe Mongo is also written in rust, if I'm not mistaken. But for us, the document DB is more of a representation of state and what's happening, especially for our configurations and workflows. Meanwhile, we really like keeping and relying on Qdrant and all the features. Qdrant is updating, so, yeah, I'd say single responsibility principle is key to that. But I saw some chat in Qdrant discord about this, which I think the only way to use vector is actually to use their cloud offering, if I'm not mistaken. Do you know about this? Demetrios: Yeah, I think so, too. Robert Caulk: This would also be a piece that we couldn't do. Demetrios: Yeah. Where it's like it's open source, but not open source, so that makes sense. Yeah. This has been excellent, man. So I encourage anyone who is out there listening, check out again this is asknews app, and stay up to date with the most relevant news in your area and what you like. And I signed in, so I'm guessing that when I sign in, it's going to tweak my settings. Am I going to be able. Robert Caulk: Good question. Demetrios: Catch this next time. Robert Caulk: Well, at the moment, if you star a story, a narrative that you find interesting, then you can filter on the star and whatever the latest updates are, you'll get it for that particular story. Okay. It brings up another point about Qdrant, which is at the moment we're not doing it yet, but we have plans to use the recommendation system for letting a user kind of create their profile by just saying what they like, what they don't like, and then using the recommender to start recommending stories that they may or may not like. And that's us outsourcing the Qdrant almost entirely. Right. It's just us building around it. So that's nice. Demetrios: Yeah. That makes life a lot easier, especially knowing recommender systems. Yeah, that's excellent. Robert Caulk: Thanks. I appreciate that. For sure. And I'll try to make the slides available. I don't know if I can send them to the two Qdrant or something. They could post them in the discord maybe, for sure. Demetrios: And we can post them in the link in the description of this talk. So this has been excellent. Rob, I really appreciate you coming on here and chatting with me about this, and thanks for breaking down everything that you're doing. I also love the VllM project. It's blowing up. It's cool to see so much usage and all the good stuff that you're doing with it. And yeah, man, for anybody that wants to follow along on your journey, we'll drop a link to your LinkedIn so that they can connect with you and. Robert Caulk: Cool. Demetrios: Thank you. Robert Caulk: Thanks for having me. Demetrios, talk to you later. Demetrios: Catch you later, man. Take care. ",blog/production-scale-rag-for-real-time-news-distillation-robert-caulk-vector-space-talks.md "--- draft: false title: ""Elevate Your Data With Airbyte and Qdrant Hybrid Cloud"" short_description: ""Leverage Airbyte and Qdrant Hybrid Cloud for best-in-class data performance."" description: ""Leverage Airbyte and Qdrant Hybrid Cloud for best-in-class data performance."" preview_image: /blog/hybrid-cloud-airbyte/hybrid-cloud-airbyte.png date: 2024-04-10T00:00:00Z author: Qdrant featured: false weight: 1013 tags: - Qdrant - Vector Database --- In their mission to support large-scale AI innovation, [Airbyte](https://airbyte.com/) and Qdrant are collaborating on the launch of Qdrant’s new offering - [Qdrant Hybrid Cloud](/hybrid-cloud/). This collaboration allows users to leverage the synergistic capabilities of both Airbyte and Qdrant within a private infrastructure. Qdrant’s new offering represents the first managed vector database that can be deployed in any environment. Businesses optimizing their data infrastructure with Airbyte are now able to host a vector database either on premise, or on a public cloud of their choice - while still reaping the benefits of a managed database product. This is a major step forward in offering enterprise customers incredible synergy for maximizing the potential of their AI data. Qdrant's new Kubernetes-native design, coupled with Airbyte’s powerful data ingestion pipelines meet the needs of developers who are both prototyping and building production-level apps. Airbyte simplifies the process of data integration by providing a platform that connects to various sources and destinations effortlessly. Moreover, Qdrant Hybrid Cloud leverages advanced indexing and search capabilities to empower users to explore and analyze their data efficiently. In a major benefit to Generative AI, businesses can leverage Airbyte's data replication capabilities to ensure that their data in Qdrant Hybrid Cloud is always up to date. This empowers all users of Retrieval Augmented Generation (RAG) applications with effective analysis and decision-making potential, all based on the latest information. Furthermore, by combining Airbyte's platform and Qdrant's hybrid cloud infrastructure, users can optimize their data operations while keeping costs under control via flexible pricing models tailored to individual usage requirements. > *“The new Qdrant Hybrid Cloud is an exciting addition that offers peace of mind and flexibility, aligning perfectly with the needs of Airbyte Enterprise users who value the same balance. Being open-source at our core, both Qdrant and Airbyte prioritize giving users the flexibility to build and test locally—a significant advantage for data engineers and AI practitioners. We're enthusiastic about the Hybrid Cloud launch, as it mirrors our vision of enabling users to confidently transition from local development and local deployments to a managed solution, with both cloud and hybrid cloud deployment options.”* AJ Steers, Staff Engineer for AI, Airbyte #### Optimizing Your GenAI Data Stack With Airbyte and Qdrant Hybrid Cloud By integrating Airbyte with Qdrant Hybrid Cloud, you can achieve seamless data ingestion from diverse sources into Qdrant's powerful indexing system. This integration enables you to derive valuable insights from your data. Here are some key advantages: **Effortless Data Integration:** Airbyte's intuitive interface lets you set up data pipelines that extract, transform, and load (ETL) data from various sources into Qdrant. Additionally, Qdrant Hybrid Cloud’s Kubernetes-native architecture means that the destination vector database can now be deployed in a few clicks to any environment. With such flexibility, you can supply even the most advanced RAG applications with optimal data pipelines. **Scalability and Performance:** With Airbyte and Qdrant Hybrid Cloud, you can scale your data infrastructure according to your needs. Whether you're dealing with terabytes or petabytes of data, this combination ensures optimal performance and scalability. This is a robust setup that is designed to meet the needs of large enterprises, ensuring a full spectrum of solutions for various projects and workloads. **Powerful Indexing and Search:** Qdrant Hybrid Cloud’s architecture combines the scalability of cloud infrastructure with the performance of on-premises indexing. Qdrant's advanced algorithms enable lightning-fast search and retrieval of data, even across large datasets. **Open-Source Compatibility:** Airbyte and Qdrant pride themselves on maintaining a reliable and mature integration that brings peace of mind to those prototyping and deploying large-scale AI solutions. Extensive open-source documentation and code samples help users of all skill levels in leveraging highly advanced features of data ingestion and vector search. #### Build a Modern GenAI Application With Qdrant Hybrid Cloud and Airbyte ![hybrid-cloud-airbyte-tutorial](/blog/hybrid-cloud-airbyte/hybrid-cloud-airbyte-tutorial.png) We put together an end-to-end tutorial to show you how to build a GenAI application with Qdrant Hybrid Cloud and Airbyte’s advanced data pipelines. #### Tutorial: Build a RAG System to Answer Customer Support Queries Learn how to set up a private AI service that addresses customer support issues with high accuracy and effectiveness. By leveraging Airbyte’s data pipelines with Qdrant Hybrid Cloud, you will create a customer support system that is always synchronized with up-to-date knowledge. [Try the Tutorial](/documentation/tutorials/rag-customer-support-cohere-airbyte-aws/) #### Documentation: Deploy Qdrant in a Few Clicks Our simple Kubernetes-native design lets you deploy Qdrant Hybrid Cloud on your hosting platform of choice in just a few steps. Learn how in our documentation. [Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/) #### Ready to Get Started? Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/). ",blog/hybrid-cloud-airbyte.md "--- draft: false title: Qdrant Summer of Code 24 slug: qdrant-summer-of-code-24 short_description: Introducing Qdrant Summer of Code 2024 program. description: ""Introducing Qdrant Summer of Code 2024 program. GSoC alternative."" preview_image: /blog/Qdrant-summer-of-code.png date: 2024-02-21T00:39:53.751Z author: Andre Zayarni featured: false tags: - Open Source - Vector Database - Summer of Code - GSoC24 --- Google Summer of Code (#GSoC) is celebrating its 20th anniversary this year with the 2024 program. Over the past 20 years, 19K new contributors were introduced to #opensource through the program under the guidance of thousands of mentors from over 800 open-source organizations in various fields. Qdrant participated successfully in the program last year. Both projects, the UI Dashboard with unstructured data visualization and the advanced Geo Filtering, were completed in time and are now a part of the engine. One of the two young contributors joined the team and continues working on the project. We are thrilled to announce that Qdrant was 𝐍𝐎𝐓 𝐚𝐜𝐜𝐞𝐩𝐭𝐞𝐝 into the GSoc 2024 program for unknown reasons, but instead, we are introducing our own 𝐐𝐝𝐫𝐚𝐧𝐭 𝐒𝐮𝐦𝐦𝐞𝐫 𝐨𝐟 𝐂𝐨𝐝𝐞 program with a stipend for contributors! To not reinvent the wheel, we follow all the timelines and rules of the official Google program. ## Our project ideas. We have prepared some excellent project ideas. Take a look and choose if you want to contribute in Rust or a Python-based project. ➡ *WASM-based dimension reduction viz* 📊 Implement a dimension reduction algorithm in Rust, compile to WASM and integrate the WASM code with Qdrant Web UI. ➡ *Efficient BM25 and Okapi BM25, which uses the BERT Tokenizer* 🥇 BM25 and Okapi BM25 are popular ranking algorithms. Qdrant's FastEmbed supports dense embedding models. We need a fast, efficient, and massively parallel Rust implementation with Python bindings for these. ➡ *ONNX Cross Encoders in Python* ⚔️ Export a cross-encoder ranking models to operate on ONNX runtime and integrate this model with the Qdrant's FastEmbed to support efficient re-ranking ➡ *Ranking Fusion Algorithms implementation in Rust* 🧪 Develop Rust implementations of various ranking fusion algorithms including but not limited to Reciprocal Rank Fusion (RRF). For a complete list, see: https://github.com/AmenRa/ranx and create Python bindings for the implemented Rust modules. ➡ *Setup Jepsen to test Qdrant’s distributed guarantees* 💣 Design and write Jepsen tests based on implementations for other Databases and create a report or blog with the findings. See all details on our Notion page: https://www.notion.so/qdrant/GSoC-2024-ideas-1dfcc01070094d87bce104623c4c1110 Contributor application period begins on March 18th. We will accept applications via email. Let's contribute and celebrate together! In open-source, we trust! 🦀🤘🚀 ",blog/gsoc24-summer-of-code.md "--- title: ""Navigating challenges and innovations in search technologies"" draft: false slug: navigating-challenges-innovations short_description: Podcast on search and LLM with Datatalk.club description: Podcast on search and LLM with Datatalk.club preview_image: /blog/navigating-challenges-innovations/preview/preview.png date: 2024-01-12T15:39:53.751Z author: Atita Arora featured: false tags: - podcast - search - blog - retrieval-augmented generation - large language models --- ## Navigating challenges and innovations in search technologies We participated in a [podcast](#podcast-discussion-recap) on search technologies, specifically with retrieval-augmented generation (RAG) in language models. RAG is a cutting-edge approach in natural language processing (NLP). It uses information retrieval and language generation models. We describe how it can enhance what AI can do to understand, retrieve, and generate human-like text. ### More about RAG Think of RAG as a system that finds relevant knowledge from a vast database. It takes your query, finds the best available information, and then provides an answer. RAG is the next step in NLP. It goes beyond the limits of traditional generation models by integrating retrieval mechanisms. With RAG, NLP can access external knowledge sources, databases, and documents. This ensures more accurate, contextually relevant, and informative output. With RAG, we can set up more precise language generation as well as better context understanding. RAG helps us incorporate real-world knowledge into AI-generated text. This can improve overall performance in tasks such as: - Answering questions - Creating summaries - Setting up conversations ### The importance of evaluation for RAG and LLM Evaluation is crucial for any application leveraging LLMs. It promotes confidence in the quality of the application. It also supports implementation of feedback and improvement loops. ### Unique challenges of evaluating RAG and LLM-based applications *Retrieval* is the key to Retrieval Augmented Generation, as it affects quality of the generated response. Potential problems include: - Setting up a defined or expected set of documents, which can be a significant challenge. - Measuring *subjectiveness*, which relates to how well the data fits or applies to a given domain or use case. ### Podcast Discussion Recap In the podcast, we addressed the following: - **Model evaluation(LLM)** - Understanding the model at the domain-level for the given use case, supporting required context length and terminology/concept understanding. - **Ingestion pipeline evaluation** - Evaluating factors related to data ingestion and processing such as chunk strategies, chunk size, chunk overlap, and more. - **Retrieval evaluation** - Understanding factors such as average precision, [Distributed cumulative gain](https://en.wikipedia.org/wiki/Discounted_cumulative_gain) (DCG), as well as normalized DCG. - **Generation evaluation(E2E)** - Establishing guardrails. Evaulating prompts. Evaluating the number of chunks needed to set up the context for generation. ### The recording Thanks to the [DataTalks.Club](https://datatalks.club) for organizing [this podcast](https://www.youtube.com/watch?v=_fbe1QyJ1PY). ### Event Alert If you're interested in a similar discussion, watch for the recording from the [following event](https://www.eventbrite.co.uk/e/the-evolution-of-genai-exploring-practical-applications-tickets-778359172237?aff=oddtdtcreator), organized by [DeepRec.ai](https://deeprec.ai). ### Further reading - [Qdrant Blog](/blog/) ",blog/datatalk-club-podcast-plug.md "--- title: ""Qdrant 1.11 - The Vector Stronghold: Optimizing Data Structures for Scale and Efficiency"" draft: false short_description: ""On-Disk Payload Index. UUID Payload Support. Tenant Defragmentation."" description: ""Enhanced payload flexibility with on-disk indexing, UUID support, and tenant-based defragmentation."" preview_image: /blog/qdrant-1.11.x/social_preview.png social_preview_image: /blog/qdrant-1.11.x/social_preview.png date: 2024-08-12T00:00:00-08:00 author: David Myriel featured: true tags: - vector search - on-disk payload index - tenant defragmentation - group-by search - random sampling --- [Qdrant 1.11.0 is out!](https://github.com/qdrant/qdrant/releases/tag/v1.11.0) This release largely focuses on features that improve memory usage and optimize segments. However, there are a few cool minor features, so let's look at the whole list: Optimized Data Structures:
**Defragmentation:** Storage for multitenant workloads is more optimized and scales better.
**On-Disk Payload Index:** Store less frequently used data on disk, rather than in RAM.
**UUID for Payload Index:** Additional data types for payload can result in big memory savings. Improved Query API:
**GroupBy Endpoint:** Use this query method to group results by a certain payload field.
**Random Sampling:** Select a subset of data points from a larger dataset randomly.
**Hybrid Search Fusion:** We are adding the Distribution-Based Score Fusion (DBSF) method.
New Web UI Tools:
**Search Quality Tool:** Test the precision of your semantic search requests in real-time.
**Graph Exploration Tool:** Visualize vector search in context-based exploratory scenarios.
### Quick Recap: Multitenant Workloads Before we dive into the specifics of our optimizations, let's first go over Multitenancy. This is one of our most significant features, [best used for scaling and data isolation](https://qdrant.tech/articles/multitenancy/). If you’re using Qdrant to manage data for multiple users, regions, or workspaces (tenants), we suggest setting up a [multitenant environment](/documentation/guides/multiple-partitions/). This approach keeps all tenant data in a single global collection, with points separated and isolated by their payload. To avoid slow and unnecessary indexing, it’s better to create an index for each relevant payload rather than indexing the entire collection globally. Since some data is indexed more frequently, you can focus on building indexes for specific regions, workspaces, or users. *For more details on scaling best practices, read [How to Implement Multitenancy and Custom Sharding](https://qdrant.tech/articles/multitenancy/).* ### Defragmentation of Tenant Storage With version 1.11, Qdrant changes how vectors from the same tenant are stored on disk, placing them **closer together** for faster bulk reading and reduced scaling costs. This approach optimizes storage and retrieval operations for different tenants, leading to more efficient system performance and resource utilization. **Figure 1:** Re-ordering by payload can significantly speed up access to hot and cold data. ![defragmentation](/blog/qdrant-1.11.x/defragmentation.png) **Example:** When creating an index, you may set `is_tenant=true`. This configuration will optimize the storage based on your collection’s usage patterns. ```http PUT /collections/{collection_name}/index { ""field_name"": ""group_id"", ""field_schema"": { ""type"": ""keyword"", ""is_tenant"": true } } ``` ```python client.create_payload_index( collection_name=""{collection_name}"", field_name=""group_id"", field_schema=models.KeywordIndexParams( type=""keyword"", is_tenant=True, ), ) ``` ```typescript client.createPayloadIndex(""{collection_name}"", { field_name: ""group_id"", field_schema: { type: ""keyword"", is_tenant: true, }, }); ``` ```rust use qdrant_client::qdrant::{ CreateFieldIndexCollectionBuilder, KeywordIndexParamsBuilder, FieldType }; use qdrant_client::{Qdrant, QdrantError}; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client.create_field_index( CreateFieldIndexCollectionBuilder::new( ""{collection_name}"", ""group_id"", FieldType::Keyword, ).field_index_params( KeywordIndexParamsBuilder::default() .is_tenant(true) ) ).await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.PayloadIndexParams; import io.qdrant.client.grpc.Collections.PayloadSchemaType; import io.qdrant.client.grpc.Collections.KeywordIndexParams; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .createPayloadIndexAsync( ""{collection_name}"", ""group_id"", PayloadSchemaType.Keyword, PayloadIndexParams.newBuilder() .setKeywordIndexParams( KeywordIndexParams.newBuilder() .setIsTenant(true) .build()) .build(), null, null, null) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.CreatePayloadIndexAsync( collectionName: ""{collection_name}"", fieldName: ""group_id"", schemaType: PayloadSchemaType.Keyword, indexParams: new PayloadIndexParams { KeywordIndexParams = new KeywordIndexParams { IsTenant = true } } ); ``` As a result, the storage structure will be organized in a way to co-locate vectors of the same tenant together at the next optimization. *To learn more about defragmentation, read the [Multitenancy documentation](/documentation/guides/multiple-partitions/).* ### On-Disk Support for the Payload Index When managing billions of records across millions of tenants, keeping all data in RAM is inefficient. That is especially true when only a small subset is frequently accessed. As of 1.11, you can offload ""cold"" data to disk and cache the “hot” data in RAM. *This feature can help you manage a high number of different payload indexes, which is beneficial if you are working with large varied datasets.* **Figure 2:** By moving the data from Workspace 2 to disk, the system can free up valuable memory resources for Workspaces 1, 3 and 4, which are accessed more frequently. ![on-disk-payload](/blog/qdrant-1.11.x/on-disk-payload.png) **Example:** As you create an index for Workspace 2, set the `on_disk` parameter. ```http PUT /collections/{collection_name}/index { ""field_name"": ""group_id"", ""field_schema"": { ""type"": ""keyword"", ""is_tenant"": true, ""on_disk"": true } } ``` ```python client.create_payload_index( collection_name=""{collection_name}"", field_name=""group_id"", field_schema=models.KeywordIndexParams( type=""keyword"", is_tenant=True, on_disk=True, ), ) ``` ```typescript client.createPayloadIndex(""{collection_name}"", { field_name: ""group_id"", field_schema: { type: ""keyword"", is_tenant: true, on_disk: true }, }); ``` ```rust use qdrant_client::qdrant::{ CreateFieldIndexCollectionBuilder, KeywordIndexParamsBuilder, FieldType }; use qdrant_client::{Qdrant, QdrantError}; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client.create_field_index( CreateFieldIndexCollectionBuilder::new( ""{collection_name}"", ""group_id"", FieldType::Keyword, ) .field_index_params( KeywordIndexParamsBuilder::default() .is_tenant(true) .on_disk(true), ), ); ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.PayloadIndexParams; import io.qdrant.client.grpc.Collections.PayloadSchemaType; import io.qdrant.client.grpc.Collections.KeywordIndexParams; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .createPayloadIndexAsync( ""{collection_name}"", ""group_id"", PayloadSchemaType.Keyword, PayloadIndexParams.newBuilder() .setKeywordIndexParams( KeywordIndexParams.newBuilder() .setIsTenant(true) .setOnDisk(true) .build()) .build(), null, null, null) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.CreatePayloadIndexAsync( collectionName: ""{collection_name}"", fieldName: ""group_id"", schemaType: PayloadSchemaType.Keyword, indexParams: new PayloadIndexParams { KeywordIndexParams = new KeywordIndexParams { IsTenant = true, OnDisk = true } } ); ``` By moving the index to disk, Qdrant can handle larger datasets that exceed the capacity of RAM, making the system more scalable and capable of storing more data without being constrained by memory limitations. *To learn more about this, read the [Indexing documentation](/documentation/concepts/indexing/).* ### UUID Datatype for the Payload Index Many Qdrant users rely on UUIDs in their payloads, but storing these as strings comes with a substantial memory overhead—approximately 36 bytes per UUID. In reality, UUIDs only require 16 bytes of storage when stored as raw bytes. To address this inefficiency, we’ve developed a new index type tailored specifically for UUIDs that stores them internally as bytes, **reducing memory usage by up to 2.25x.** **Example:** When adding two separate points, indicate their UUID in the payload. In this example, both data points belong to the same user (with the same UUID). ```http PUT /collections/{collection_name}/points { ""points"": [ { ""id"": 1, ""vector"": [0.05, 0.61, 0.76, 0.74], ""payload"": {""id"": 550e8400-e29b-41d4-a716-446655440000} }, { ""id"": 2, ""vector"": [0.19, 0.81, 0.75, 0.11], ""payload"": {""id"": 550e8400-e29b-41d4-a716-446655440000} }, ] } ``` > For organizations that have numerous users and UUIDs, this simple fix can significantly reduce the cluster size and improve efficiency. *To learn more about this, read the [Payload documentation](/documentation/concepts/payload/).* ### Query API: Groups Endpoint When searching over data, you can group results by specific payload field, which is useful when you have multiple data points for the same item and you want to avoid redundant entries in the results. **Example:** If a large document is divided into several chunks, and you need to search or make recommendations on a per-document basis, you can group the results by the `document_id`. ```http POST /collections/{collection_name}/points/query/groups { ""query"": [0.01, 0.45, 0.67], group_by=""document_id"", # Path of the field to group by limit=4, # Max amount of groups group_size=2, # Max amount of points per group } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.query_points_groups( collection_name=""{collection_name}"", query=[0.01, 0.45, 0.67], group_by=""document_id"", limit=4, group_size=2, ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.queryGroups(""{collection_name}"", { query: [0.01, 0.45, 0.67], group_by: ""document_id"", limit: 4, group_size: 2, }); ``` ```rust use qdrant_client::Qdrant; use qdrant_client::qdrant::{Query, QueryPointsBuilder}; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client.query_groups( QueryPointGroupsBuilder::new(""{collection_name}"", ""document_id"") .query(Query::from(vec![0.01, 0.45, 0.67])) .limit(4u64) .group_size(2u64) ).await?; ``` ```java import static io.qdrant.client.QueryFactory.nearest; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.QueryPointGroups; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .queryGroupsAsync( QueryPointGroups.newBuilder() .setCollectionName(""{collection_name}"") .setGroupBy(""document_id"") .setQuery(nearest(0.01f, 0.45f, 0.67f)) .setLimit(4) .setGroupSize(2) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.QueryGroupsAsync( collectionName: ""{collection_name}"", groupBy: ""document_id"", query: new float[] { 0.01f, 0.45f, 0.67f }, limit: 4, groupSize: 2 ); ``` This endpoint will retrieve the best N points for each document, assuming that the payload of the points contains the document ID. Sometimes, the best N points cannot be fulfilled due to lack of points or a big distance with respect to the query. In every case, the `group_size` is a best-effort parameter, similar to the limit parameter. *For more information on grouping capabilities refer to our [Hybrid Queries documentation](/documentation/concepts/hybrid-queries/).* ### Query API: Random Sampling Our [Food Discovery Demo](https://food-discovery.qdrant.tech) always shows a random sample of foods from the larger dataset. Now you can do the same and set the randomization from a basic Query API endpoint. When calling the Query API, you will be able to select a subset of data points from a larger dataset randomly. *This technique is often used to reduce the computational load, improve query response times, or provide a representative sample of the data for various analytical purposes.* **Example:** When querying the collection, you can configure it to retrieve a random sample of data. ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") # Random sampling (as of 1.11.0) sampled = client.query_points( collection_name=""{collection_name}"", query=models.SampleQuery(sample=models.Sample.Random) ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); let sampled = client.query(""{collection_name}"", { query: { sample: ""random"" }, }); ``` ```rust use qdrant_client::Qdrant; use qdrant_client::qdrant::{Query, QueryPointsBuilder, Sample}; let client = Qdrant::from_url(""http://localhost:6334"").build()?; let sampled = client .query( QueryPointsBuilder::new(""{collection_name}"").query(Query::new_sample(Sample::Random)), ) .await?; ``` ```java import static io.qdrant.client.QueryFactory.sample; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.Sample; import io.qdrant.client.grpc.Points.QueryPoints; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .queryAsync( QueryPoints.newBuilder() .setCollectionName(""{collection_name}"") .setQuery(sample(Sample.Random)) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.QueryAsync( collectionName: ""{collection_name}"", query: Sample.Random ); ``` *To learn more, check out the [Query API documentation](/documentation/concepts/hybrid-queries/).* ### Query API: Distribution-Based Score Fusion In version 1.10, we added Reciprocal Rank Fusion (RRF) as a way of fusing results from Hybrid Queries. Now we are adding Distribution-Based Score Fusion (DBSF). Michelangiolo Mazzeschi talks more about this fusion method in his latest [Medium article](https://medium.com/plain-simple-software/distribution-based-score-fusion-dbsf-a-new-approach-to-vector-search-ranking-f87c37488b18). *DBSF normalizes the scores of the points in each query, using the mean +/- the 3rd standard deviation as limits, and then sums the scores of the same point across different queries.* **Example:** To fuse `prefetch` results from sparse and dense queries, set `""fusion"": ""dbsf""` ```http POST /collections/{collection_name}/points/query { ""prefetch"": [ { ""query"": { ""indices"": [1, 42], // <┐ ""values"": [0.22, 0.8] // <┴─Sparse vector }, ""using"": ""sparse"", ""limit"": 20 }, { ""query"": [0.01, 0.45, 0.67, ...], // <-- Dense vector ""using"": ""dense"", ""limit"": 20 } ], ""query"": { ""fusion"": “dbsf"" }, // <--- Distribution Based Score Fusion ""limit"": 10 } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.query_points( collection_name=""{collection_name}"", prefetch=[ models.Prefetch( query=models.SparseVector(indices=[1, 42], values=[0.22, 0.8]), using=""sparse"", limit=20, ), models.Prefetch( query=[0.01, 0.45, 0.67, ...], # <-- dense vector using=""dense"", limit=20, ), ], query=models.FusionQuery(fusion=models.Fusion.DBSF), ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.query(""{collection_name}"", { prefetch: [ { query: { values: [0.22, 0.8], indices: [1, 42], }, using: 'sparse', limit: 20, }, { query: [0.01, 0.45, 0.67], using: 'dense', limit: 20, }, ], query: { fusion: 'dbsf', }, }); ``` ```rust use qdrant_client::Qdrant; use qdrant_client::qdrant::{Fusion, PrefetchQueryBuilder, Query, QueryPointsBuilder}; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client.query( QueryPointsBuilder::new(""{collection_name}"") .add_prefetch(PrefetchQueryBuilder::default() .query(Query::new_nearest([(1, 0.22), (42, 0.8)].as_slice())) .using(""sparse"") .limit(20u64) ) .add_prefetch(PrefetchQueryBuilder::default() .query(Query::new_nearest(vec![0.01, 0.45, 0.67])) .using(""dense"") .limit(20u64) ) .query(Query::new_fusion(Fusion::Dbsf)) ).await?; ``` ```java import static io.qdrant.client.QueryFactory.nearest; import java.util.List; import static io.qdrant.client.QueryFactory.fusion; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.Fusion; import io.qdrant.client.grpc.Points.PrefetchQuery; import io.qdrant.client.grpc.Points.QueryPoints; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client.queryAsync( QueryPoints.newBuilder() .setCollectionName(""{collection_name}"") .addPrefetch(PrefetchQuery.newBuilder() .setQuery(nearest(List.of(0.22f, 0.8f), List.of(1, 42))) .setUsing(""sparse"") .setLimit(20) .build()) .addPrefetch(PrefetchQuery.newBuilder() .setQuery(nearest(List.of(0.01f, 0.45f, 0.67f))) .setUsing(""dense"") .setLimit(20) .build()) .setQuery(fusion(Fusion.DBSF)) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.QueryAsync( collectionName: ""{collection_name}"", prefetch: new List < PrefetchQuery > { new() { Query = new(float, uint)[] { (0.22f, 1), (0.8f, 42), }, Using = ""sparse"", Limit = 20 }, new() { Query = new float[] { 0.01f, 0.45f, 0.67f }, Using = ""dense"", Limit = 20 } }, query: Fusion.Dbsf ); ``` Note that `dbsf` is stateless and calculates the normalization limits only based on the results of each query, not on all the scores that it has seen. *To learn more, check out the [Hybrid Queries documentation](/documentation/concepts/hybrid-queries/).* ## Web UI: Search Quality Tool We have updated the Qdrant Web UI with additional testing functionality. Now you can check the quality of your search requests in real time and measure it against exact search. **Try it:** In the Dashboard, go to collection settings and test the **Precision** from the Search Quality menu tab. > The feature will conduct a semantic search for each point and produce a report below. ## Web UI: Graph Exploration Tool Deeper exploration is highly dependent on expanding context. This is something we previously covered in the [Discovery Needs Context](/articles/discovery-search/) article earlier this year. Now, we have developed a UI feature to help you visualize how semantic search can be used for exploratory and recommendation purposes. **Try it:** Using the feature is pretty self-explanatory. Each collection's dataset can be explored from the **Graph** tab. As you see the images change, you can steer your search in the direction of specific characteristics that interest you. > Search results will become more ""distilled"" and tailored to your preferences. ## Next Steps If you’re new to Qdrant, now is the perfect time to start. Check out our [documentation](/documentation/) guides and see why Qdrant is the go-to solution for vector search. We’re very happy to bring you this latest version of Qdrant, and we can’t wait to see what you build with it. As always, your feedback is invaluable—feel free to reach out with any questions or comments on our [community forum](https://qdrant.to/discord). ",blog/qdrant-1.11.x.md "--- draft: true title: v0.8.0 update of the Qdrant engine was released slug: qdrant-0-8-0-released short_description: ""The new version of our engine - v0.8.0, went live. "" description: ""The new version of our engine - v0.8.0, went live. "" preview_image: /blog/from_cms/v0.8.0.jpg date: 2022-06-09T10:03:29.376Z author: Alyona Kavyerina author_link: https://www.linkedin.com/in/alyona-kavyerina/ categories: - News - Release update tags: - Corporate news - Release sitemapExclude: True --- The new version of our engine - v0.8.0, went live. Let's go through the new features it has: * On-disk payload storage allows storing more with less RAM usage. * Distributed deployment support is available. And we continue improving it, so stay tuned for new updates. * The payload can be indexed in the process without rebuilding the segment. * Advanced filtering support now includes filtering by similarity score. Also, it has a faster payload index, better error reporting, HNSW Speed improvements, and many more. Check out the change log for more details [](https://github.com/qdrant/qdrant/releases/tag/v0.8.0)https://github.com/qdrant/qdrant/releases/tag/v0.8.0. ",blog/v0-8-0-update-of-the-qdrant-engine-was-released.md "--- draft: false title: ""Developing Advanced RAG Systems with Qdrant Hybrid Cloud and LangChain "" short_description: ""Empowering engineers and scientists globally to easily and securely develop and scale their GenAI applications."" description: ""Empowering engineers and scientists globally to easily and securely develop and scale their GenAI applications."" preview_image: /blog/hybrid-cloud-langchain/hybrid-cloud-langchain.png date: 2024-04-14T00:04:00Z author: Qdrant featured: false weight: 1007 tags: - Qdrant - Vector Database --- [LangChain](https://www.langchain.com/) and Qdrant are collaborating on the launch of [Qdrant Hybrid Cloud](/hybrid-cloud/), which is designed to empower engineers and scientists globally to easily and securely develop and scale their GenAI applications. Harnessing LangChain’s robust framework, users can unlock the full potential of vector search, enabling the creation of stable and effective AI products. Qdrant Hybrid Cloud extends the same powerful functionality of Qdrant onto a Kubernetes-based architecture, enhancing LangChain’s capability to cater to users across any environment. Qdrant Hybrid Cloud provides users with the flexibility to deploy their vector database in a preferred environment. Through container-based scalable deployments, companies can leverage cutting-edge frameworks like LangChain while maintaining compatibility with their existing hosting architecture for data sources, embedded models, and LLMs. This potent combination empowers organizations to develop robust and secure applications capable of text-based search, complex question-answering, recommendations and analysis. Despite LLMs being trained on vast amounts of data, they often lack user-specific or private knowledge. LangChain helps developers build context-aware reasoning applications, addressing this challenge. Qdrant’s vector database sifts through semantically relevant information, enhancing the performance gains derived from LangChain’s data connection features. With LangChain, users gain access to state-of-the-art functionalities for querying, chatting, sorting, and parsing data. Through the seamless integration of Qdrant Hybrid Cloud and LangChain, developers can effortlessly vectorize their data and conduct highly accurate semantic searches—all within their preferred environment. > *“The AI industry is rapidly maturing, and more companies are moving their applications into production. We're really excited at LangChain about supporting enterprises' unique data architectures and tooling needs through integrations and first-party offerings through LangSmith. First-party enterprise integrations like Qdrant's greatly contribute to the LangChain ecosystem with enterprise-ready retrieval features that seamlessly integrate with LangSmith's observability, production monitoring, and automation features, and we're really excited to develop our partnership further.”* -Erick Friis, Founding Engineer at LangChain #### Discover Advanced Integration Options with Qdrant Hybrid Cloud and LangChain Building apps with Qdrant Hybrid Cloud and LangChain comes with several key advantages: **Seamless Deployment:** With Qdrant Hybrid Cloud's Kubernetes-native architecture, deploying Qdrant is as simple as a few clicks, allowing you to choose your preferred environment. Coupled with LangChain's flexibility, users can effortlessly create advanced RAG solutions anywhere with minimal effort. **Open-Source Compatibility:** LangChain and Qdrant support a dependable and mature integration, providing peace of mind to those developing and deploying large-scale AI solutions. With comprehensive documentation, code samples, and tutorials, users of all skill levels can harness the advanced features of data ingestion and vector search to their fullest potential. **Advanced RAG Performance:** By infusing LLMs with relevant context, Qdrant offers superior results for RAG use cases. Integrating vector search yields improved retrieval accuracy, faster query speeds, and reduced computational overhead. LangChain streamlines the entire process, offering speed, scalability, and efficiency, particularly beneficial for enterprise-scale deployments dealing with vast datasets. Furthermore, [LangSmith](https://www.langchain.com/langsmith) provides one-line instrumentation for debugging, observability, and ongoing performance testing of LLM applications. #### Start Building With LangChain and Qdrant Hybrid Cloud: Develop a RAG-Based Employee Onboarding System To get you started, we’ve put together a tutorial that shows how to create next-gen AI applications with Qdrant Hybrid Cloud using the LangChain framework and Cohere embeddings. ![hybrid-cloud-langchain-tutorial](/blog/hybrid-cloud-langchain/hybrid-cloud-langchain-tutorial.png) #### Tutorial: Build a RAG System for Employee Onboarding We created a comprehensive tutorial to show how you can build a RAG-based system with Qdrant Hybrid Cloud, LangChain and Cohere’s embeddings. This use case is focused on building a question-answering system for internal corporate employee onboarding. [Try the Tutorial](/documentation/tutorials/natural-language-search-oracle-cloud-infrastructure-cohere-langchain/) #### Documentation: Deploy Qdrant in a Few Clicks Our simple Kubernetes-native design lets you deploy Qdrant Hybrid Cloud on your hosting platform of choice in just a few steps. Learn how in our documentation. [Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/) #### Ready to Get Started? Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/). ",blog/hybrid-cloud-langchain.md "--- draft: false title: Building LLM Powered Applications in Production - Hamza Farooq | Vector Space Talks slug: llm-complex-search-copilot short_description: Hamza Farooq discusses the future of LLMs, complex search, and copilots. description: Hamza Farooq presents the future of large language models, complex search, and copilot, discussing real-world applications and the challenges of implementing these technologies in production. preview_image: /blog/from_cms/hamza-farooq-cropped.png date: 2024-01-09T12:16:22.760Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - LLM - Vector Database --- > *""There are 10 billion search queries a day, estimated half of them go unanswered. Because people don't actually use search as what we used.”*\ > -- Hamza Farooq > How do you think Hamza's background in machine learning and previous experiences at Google and Walmart Labs have influenced his approach to building LLM-powered applications? Hamza Farooq, an accomplished educator and AI enthusiast, is the founder of Traversaal.ai. His journey is marked by a relentless passion for AI exploration, particularly in building Large Language Models. As an adjunct professor at UCLA Anderson, Hamza shapes the future of AI by teaching cutting-edge technology courses. At Traversaal.ai, he empowers businesses with domain-specific AI solutions, focusing on conversational search and recommendation systems to deliver personalized experiences. With a diverse career spanning academia, industry, and entrepreneurship, Hamza brings a wealth of experience from time at Google. His overarching goal is to bridge the gap between AI innovation and real-world applications, introducing transformative solutions to the market. Hamza eagerly anticipates the dynamic challenges and opportunities in the ever-evolving field of AI and machine learning. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/1oh31JA2XsqzuZhCUQVNN8?si=viPPgxiZR0agFhz1QlimSA), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/0N9ozwgmEQM).*** ## Top Takeaways: UX specialist? Your expertise in designing seamless user experiences for GenAI products is guaranteed to be in high demand. Let's elevate the user interface for next-gen technology! In this episode, Hamza presents the future of large language models and complex search, discussing real-world applications and the challenges of implementing these technologies in production. 5 Keys to Learning from the Episode: 1. **Complex Search** - Discover how LLMs are revolutionizing the way we interact with search engines and enhancing the search experience beyond basic queries. 2. **Conversational Search and Personalization** - Explore the potential of conversational search and personalized recommendations using open-source LLMs, bringing a whole new level of user engagement. 3. **Challenges and Solutions** - Uncover the downtime challenges faced by LLM services and learn the strategies deployed to mitigate these issues for seamless operation. 4. **Traversal AI's Unique Approach** - Learn how Traversal AI has created a unified platform with a myriad of applications, simplifying the integration of LLMs and domain-specific search. 5. **The Importance of User Experience (UX)** - Understand the unparalleled significance of UX professionals in shaping the future of Gen AI products, and how they play a pivotal role in enhancing user interactions with LLM-powered applications. > Fun Fact: User experience (UX) designers are anticipated to be crucial in the development of AI-powered products as they bridge the gap between user interaction and the technical aspects of the AI systems. > ## Show Notes: 00:00 Teaching GPU AI with open source products.\ 06:40 Complex search leads to conversational search implementation.\ 07:52 Generating personalized travel itineraries with ease.\ 12:02 Maxwell's talk highlights challenges in search technology.\ 16:01 Balancing preferences and trade-offs in travel.\ 17:45 Beta mode, selective, personalized database.\ 22:15 Applications needed: chatbot, knowledge retrieval, recommendation, job matching, copilot\ 23:59 Challenges for UX in developing gen AI. ## More Quotes from Hamza: *""Ux people are going to be more rare who can work on gen AI products than product managers and tech people, because for tech people, they can follow and understand code and they can watch videos, business people, they're learning GPT prompting and so on and so forth. But the UX people, there's literally no teaching guide except for a Chat GPT interface. So this user experience, they are going to be, their worth is going to be inequal in gold.”*\ -- Hamza Farooq *""Usually they don't come to us and say we need a pine cone or we need a quadrant or we need a local llama, they say, this is the problem you're trying to solve. And we are coming from a problem solving initiative from our company is that we got this. You don't have to hire three ML engineers and two NLP research scientists and three people from here for the cost of two people. We can do an entire end to end implementation. Because what we have is 80% product which is built and we can tune the 20% to what you need.”*\ -- Hamza Farooq *""Imagine you're trying to book a hotel, and you also get an article from New York Times that says, this is why this is a great, or a blogger that you follow and it sort of shows up in your. That is the strength that we have been powering, that you don't need to wait or you don't need to depend anymore on just the company's website itself. You can use the entire Internet to come up with an arsenal.”*\ -- Hamza Farooq ## Transcript: Demetrios: Yes, we are live. So what is going on? Hamza, it's great to have you here for this edition of the Vector Space Talks. Let's first start with this. Everybody that is here with us right now, great to have you. Let us know where you're dialing in from in the chat and feel free over the course of the next 20 - 25 minutes to ask any questions as they. Come up in the chat. I'll be monitoring it and maybe jumping. In in case we need to stop. Hunts at any moment. And if you or anybody you know would like to come and give a presentation on our vector space talks, we are very open to that. Reach out to me either on discord or LinkedIn or your preferred method of communication. Maybe it's carrier Pigeon. Whatever it may be, I am here and ready to hear your pitch about. What you want to talk about. It's always cool hearing about how people are building with Qdrant or what they. Are building in this space. So without further ado, let's jump into this with my man Hamza. Great to have you here, dude. Hamza Farooq: Thank you for having me. It's an honor. Demetrios: You say that now. Just wait. You don't know me that well. I guess that's the only thing. So let's just say this. You're doing some incredible stuff. You're the founder of Traversaal.ai. You have been building large language models in the past, and you're also a professor at UCLA. You're doing all kinds of stuff. And that is why I think it. Is my honor to have you here with us today. I know you've got all kinds of fun stuff that you want to get. Into, and it's really about building llm powered applications in production. You have some slides for us, I believe. So I'm going to kick it over. To you, let you start rocking, and in case anything comes up, I'll jump. In and stop you from going too. Far down the road. Hamza Farooq: Awesome. Thank you for that. I really like your joke of the carrier pigeon. Is it a geni carrier pigeon with multiple areas and h 100 attached to it? Demetrios: Exactly. Those are the expensive carrier pigeons. That's the premium version. I am not quite that GPU rich yet. Hamza Farooq: Absolutely. All right. I think that's a great segue. I usually tell people that I'm going to teach you all how to be a GPU poor AI gap person, and my job is to basically teach everyone, or the thesis of my organization is also, how can we build powerful solutions, LLM powered solutions by using open source products and open source llms and architectures so that we can stretch the dollar as much as possible. That's been my thesis and I have always pushed for open source because they've done some great job over there and they are coming in close to pretty much at par of what the industry standard is. But I digress. Let's start with my overall presentation. I'm here to talk about the future of search and copilots and just the overall experience which we are looking with llms. Hamza Farooq: So I know you gave a background about me. I am a founder at Traversaal.ai. Previously I was at Google and Walmart Labs. I have quite a few years of experience in machine learning. In fact, my first job in 2007 was working for SaaS and I was implementing trees for identifying fraud, for fraud detection. And I did not know that was honestly data science, but we were implementing that. I have had the experience of teaching at multiple universities and that sort of experience has really helped me do better at what I do, because when you can teach something, you actually truly understand that. All right, so why are we here? Why are we really here? I have a very strong mean game. Hamza Farooq: So we started almost a year ago, Char GPT came into our lives and almost all of a sudden we started using it. And I think in January, February, March, it was just an explosion of usage. And now we know all the different things that have been going on and we've seen peripheration of a lot of startups that have come in this space. Some of them are wrappers, some of them have done a lot, have a lot more motor. There are many, many different ways that we have been using it. I don't think we even know how many ways we can use charge GBT, but most often it's just been text generation, one form or the other. And that is what the focus has been. But if we look deeper, the llms that we know, they also can help us with a very important part, something which is called complex search. Hamza Farooq: And complex search is basically when we converse with a search system to actually give a much longer query of how we would talk to a human being. And that is something that has been missing for the longest time in our interfacing with any kind of search engine. Google has always been at the forefront of giving the best form of search for us all. But imagine if you were to look at any other e commerce websites other than Amazon. Imagine you go to Nike.com, you go to gap, you go to Banana Republic. What you see is that their search is really basic and this is an opportunity for a lot of companies to actually create a great search experience for the users with a multi tier engagement model. So you basically make a request. I would like to buy a Nike blue t shirt specially designed for golf with all these features which I need and at a reasonable price point. Hamza Farooq: It shows you a set of results and then from that you can actually converse more to it and say, hey, can you remove five or six or reduce this by a certain degree? That is the power of what we have at hand with complex search. And complex search is becoming quickly a great segue to why we need to implement conversational search. We would need to implement large language models in our ecosystem so that we can understand the context of what users have been asking. So I'll show you a great example of sort of know complex search that TripAdvisor has been. Last week in one of my classes at Stanford, we had head of AI from Trivia Advisor come in and he took us through an experience of a new way of planning your trips. So I'll share this example. So if you go to the website, you can use AI and you can actually select a city. So let's say I'm going to select London for that matter. Hamza Farooq: And I can say I'm going to go for a few days, I do next and I'm going to go with my partner now at the back end. This is just building up a version of complex search and I want to see attractions, great food, hidden gems. I basically just want to see almost everything. And then when I hit submit, the great thing what it does is that it sort of becomes a starting point for something that would have taken me quite a while to put it together, sort of takes all my information and generates an itinerary. Now see what's different about this. It has actual data about places where I can stay, things I can do literally day by day, and it's there for you free of cost generated within 10 seconds. This is an experience that did not exist before. You would have to build this by yourself and what you would usually do is you would go to chat. Hamza Farooq: GPT if you've started this year, you would say seven day itinerary to London and it would identify a few things over here. However, you see it has able to integrate the ability to book, the ability to actually see those restaurants all in one place. That is something that has not been done before. And this is the truest form of taking complex search and putting that into production and sort of create a great experience for the user so that they can understand what they can select. They can highlight and sort of interact with it. Going to pause here. Is there any question or I can help answer anything? Demetrios: No. Demetrios: Man, this is awesome though. I didn't even realize that this is already live, but it's 100% what a travel agent would be doing. And now you've got that at your fingertips. Hamza Farooq: So they have built a user experience which takes 10 seconds to build. Now, was it really happening in the back end? You have this macro task that I want to plan a vacation in Paris, I want to plan a vacation to London. And what web agents or auto agents or whatever you want to call them, they are recursively breaking down tasks into subtasks. And when you reach to an individual atomic subtask, it is able to divide it into actions which can be taken. So there's a task decomposition and a task recognition scene that is going on. And from that, for instance, Stripadvisor is able to build something of individual actions. And then it makes one interface for you where you can see everything ready to go. And that's the part that I have always been very interested in. Hamza Farooq: Whenever we go to Amazon or anything for search, we just do one tier search. We basically say, I want to buy a jeans, I want to buy a shirt, I want to buy. It's an atomic thing. Do you want to get a flight? Do you want to get an accommodation? Imagine if you could do, I would like to go to Tokyo or what kind of gear do I need? What kind of overall grade do I need to go to a glacier? And it can identify all the different subtasks that are involved in it and then eventually show you the action. Well, it's all good that it exists, but the biggest thing is that it's actually difficult to build complex search. Google can get away with it. Amazon can get away with it. But if you imagine how do we make sure that it's available to the larger masses? It's available to just about any company for that matter, if they want to build that experience at this point. Hamza Farooq: This is from a talk that was given by Maxwell a couple of months ago. There are 10 billion search queries a day, estimated half of them go unanswered. Because people don't actually use search as what we used. Because again, also because of GPT coming in and the way we have been conversing with our products, our search is getting more coherent, as we would expect it to be. We would talk to a person and it's great for finding a website for more complex questions or tasks. It often falls too short because a lot of companies, 99.99% companies, I think they are just stuck on elasticsearch because it's cheaper to run it, it's easier, it's out of the box, and a lot of companies do not want to spend the money or they don't have the people to help them build that as a product, as an SDK that is available and they can implement and starts working for them. And the biggest thing is that there are complex search is not just one query, it's multiple queries, sessions or deep, which requires deep engagement with search. And what I mean by deep engagement is imagine when you go to Google right now, you put in a search, you can give feedback on your search, but there's nothing that you can do that it can unless you start a new search all over again. Hamza Farooq: In perplexity, you can ask follow up questions, but it's also a bit of a broken experience because you can't really reduce as you would do with Jarvis in Ironman. So imagine there's a human aspect to it. And let me show you another example of a copilot system, let's say. So this is an example of a copilot which we have been working on. Demetrios: There is a question, there's actually two really good questions that came through, so I'm going to stop you before you get into this. Cool copilot Carlos was asking, what about downtime? When it comes to these LLM services. Hamza Farooq: I think the downtime. This is the perfect question. If you have a production level system running on Chat GPT, you're going to learn within five days that you can't run a production system on Chat GPT and you need to host it by yourself. And then you start with hugging face and then you realize hugging face can also go down. So you basically go to bedrock, or you go to an AWS or GCP and host your LLM over there. So essentially it's all fun with demos to show oh my God, it works beautifully. But consistently, if you have an SLA that 99.9% uptime, you need to deploy it in an architecture with redundancies so that it's up and running. And the eventual solution is to have dedicated support to it. Hamza Farooq: It could be through Azure open AI, I think, but I think even Azure openi tends to go down with open ais out of it's a little bit. Demetrios: Better, but it's not 100%, that is for sure. Hamza Farooq: Can I just give you an example? Recently we came across a new thing, the token speed. Also varies with the day and with the time of the day. So the token generation. And another thing that we found out that instruct, GPT. Instruct was great, amazing. But it's leaking the data. Even in a rack solution, it's leaking the data. So you have to go back to then 16k. Hamza Farooq: It's really slow. So to generate an answer can take up to three minutes. Demetrios: Yeah. So it's almost this catch 22. What do you prefer, leak data or slow speeds? There's always trade offs, folks. There's always trade offs. So Mike has another question coming through in the chat. And Carlos, thanks for that awesome question Mike is asking, though I presume you could modify the search itinerary with something like, I prefer italian restaurants when possible. And I was thinking about that when it comes to. So to add on to what Mike is saying, it's almost like every single piece of your travel or your itinerary would be prefaced with, oh, I like my flights at night, or I like to sit in the aisle row, and I don't want to pay over x amount, but I'm cool if we go anytime in December, et cetera, et cetera. Demetrios: And then once you get there, I like to go into hotels that are around this part of this city. I think you get what I'm going at, but the preference list for each of these can just get really detailed. And you can preference all of these different searches with what you were talking about. Hamza Farooq: Absolutely. So I think that's a great point. And I will tell you about a company that we have been closely working with. It's called Tripsby or Tripspy AI, and we actually help build them the ecosystem where you can have personalized recommendations with private discovery. It's pretty much everything that you just said. I prefer at this time, I prefer this. I prefer this. And it sort of takes audio and text, and you can converse it through WhatsApp, you can converse it through different ways. Hamza Farooq: They are still in the beta mode, and they go selectively, but literally, they have built this, they have taken a lot more personalization into play, and because the database is all the same, it's Ahmedius who gives out, if I'm pronouncing correct, they give out the database for hotels or restaurants or availability, and then you can build things on top of it. So they have gone ahead and built something, but with more user expectation. Imagine you're trying to book a hotel, and you also get an article from New York Times that says, this is why this is a great, or a blogger that you follow and it sort of shows up in your. That is the strength that we have been powering, that you don't need to wait or you don't need to depend anymore on just the company's website itself. You can use the entire Internet to come up with an arsenal. Demetrios: Yeah. Demetrios: And your ability. I think another example of this would be how I love to watch TikTok videos and some of the stuff that pops up on my TikTok feed is like Amazon finds you need to know about, and it's talking about different cool things you can buy on Amazon. If Amazon knew that I was liking that on TikTok, it would probably show it to me next time I'm on Amazon. Hamza Farooq: Yeah, I mean, that's what cookies are, right? Yeah. It's a conspiracy theory that you're talking about a product and it shows up on. Demetrios: Exactly. Well, so, okay. This website that you're showing is absolutely incredible. Carlos had a follow up question before we jump into the next piece, which is around the quality of these open source models and how you deal with that, because it does seem that OpenAI, the GPT-3 four, is still quite a. Hamza Farooq: Bit ahead these days, and that's the silver bullet you have to buy. So what we suggest is have open llms as a backup. So at a point in time, I know it will be subpar, but something subpar might be a little better than breakdown of your complete system. And that's what we have been employed, we have deployed. What we've done is that when we're building large scale products, we basically tend to put an ecosystem behind or a backup behind, which is like, if the token rate is not what we want, if it's not working, it's taking too long, we automatically switch to a redundant version, which is open source. It does perform. Like, for instance, even right now, perplexity is running a lot of things on open source llms now instead of just GPT wrappers. Demetrios: Yeah. Gives you more control. So I didn't want to derail this too much more. I know we're kind of running low on time, so feel free to jump back into it and talk fast. Demetrios: Yeah. Hamza Farooq: So can you give me a time check? How are we doing? Demetrios: Yeah, we've got about six to eight minutes left. Hamza Farooq: Okay, so I'll cover one important thing of why I built my company, Traversaal.ai. This is a great slide to see what everyone is doing everywhere. Everyone is doing so many different things. They're looking into different products for each different thing. You can pick one thing. Imagine the concern with this is that you actually have to think about every single product that you have to pick up because you have to meticulously go through, oh, for this I need this. For this I need this. For this I need this. Hamza Farooq: All what we have done is that we have created one platform which has everything under one roof. And I'll show you with a very simple example. This is our website. We call ourselves one platform with multiple applications. And in this what we have is we have any kind of data format, pretty much that you have any kind of integrations which you need, for example, any applications. And I'll zoom in a little bit. And if you need domain specific search. So basically, if you're looking for Internet search to come in any kind of llms that are in the market, and vector databases, you see Qdrant right here. Hamza Farooq: And what kind of applications that are needed? Do you need a chatbot? You need a knowledge retrieval system, you need recommendation system? You need something which is a job matching tool or a copilot. So if you've built a one stop shop where a lot of times when a customer comes in, usually they don't come to us and say we need a pine cone or we need a Qdrant or we need a local llama, they say, this is the problem you're trying to solve. And we are coming from a problem solving initiative from our company is that we got this. You don't have to hire three ML engineers and two NLP research scientists and three people from here for the cost of two people. We can do an entire end to end implementation. Because what we have is 80% product which is built and we can tune the 20% to what you need. And that is such a powerful thing that once they start trusting us, and the best way to have them trust me is they can come to my class on maven, they can come to my class in Stanford, they come to my class in UCLA, or they can. Demetrios: Listen to this podcast and sort of. Hamza Farooq: It adds credibility to what we have been doing with them. Sorry, stop sharing what we have been doing with them and sort of just goes in that direction that we can do these things pretty fast and we tend to update. I want to just cover one slide. At the end of the day, this is the main slide. Right now. All engineers and product managers think of, oh, llms and Gen AI and this and that. I think one thing we don't talk about is UX experience. I just showed you a UX experience on Tripadvisor. Hamza Farooq: It's so easy to explain, right? Like you're like, oh, I know how to use it and you can already find problems with it, which means that they've done a great job thinking about a user experience. I predict one main thing. Ux people are going to be more rare who can work on gen AI products than product managers and tech people, because for tech people, they can follow and understand code and they can watch videos, business people, they're learning GPT prompting and so on and so forth. But the UX people, there's literally no teaching guide except for a Chat GPT interface. So this user experience, they are going to be, their worth is going to be inequal in gold. Not bitcoin, but gold. It's basically because they will have to build user experiences because we can't imagine right now what it will look like. Demetrios: Yeah, I 100% agree with that, actually. Demetrios: I. Demetrios: Imagine you have seen some of the work from Linus Lee from notion and how notion is trying to add in the clicks. Instead of having to always chat with the LLM, you can just point and click and give it things that you want to do. I noticed with the demo that you shared, it was very much that, like, you're highlighting things that you like to do and you're narrowing that search and you're giving it more context without having to type in. I like italian food and I don't like meatballs or whatever it may be. Hamza Farooq: Yes. Demetrios: So that's incredible. Demetrios: This is perfect, man. Demetrios: And so for anyone that wants to continue the conversation with you, you are on LinkedIn. We will leave a link to your LinkedIn. And you're also teaching on Maven. You're teaching in Stanford, UCLA, all this fun stuff. It's been great having you here. Demetrios: I'm very excited and I hope to have you back because it's amazing seeing what you're building and how you're building it. Hamza Farooq: Awesome. I think, again, it's a pleasure and an honor and thank you for letting. Demetrios: Me speak about the UX part a. Hamza Farooq: Lot because when you go to your customers, you realize that you need the UX and all those different things. Demetrios: Oh, yeah, it's so true. It is so true. Well, everyone that is out there watching. Demetrios: Us, thank you for joining and we will see you next time. Next week we'll be back for another. Demetrios: Session of these vector talks and I am pleased to have you again. Demetrios: Reach out to me if you want to join us. Demetrios: You want to give a talk? I'll see you all later. Have a good one. Hamza Farooq: Thank you. Bye.",blog/building-llm-powered-applications-in-production-hamza-farooq-vector-space-talks-006.md "--- title: ""Dust and Qdrant: Using AI to Unlock Company Knowledge and Drive Employee Productivity"" draft: false slug: dust-and-qdrant #short_description: description: Using AI to Unlock Company Knowledge and Drive Employee Productivity preview_image: /case-studies/dust/preview.png date: 2024-02-06T07:03:26-08:00 author: Manuel Meyer featured: false tags: - Dust - case_study weight: 0 --- One of the major promises of artificial intelligence is its potential to accelerate efficiency and productivity within businesses, empowering employees and teams in their daily tasks. The French company [Dust](https://dust.tt/), co-founded by former Open AI Research Engineer [Stanislas Polu](https://www.linkedin.com/in/spolu/), set out to deliver on this promise by providing businesses and teams with an expansive platform for building customizable and secure AI assistants. ## Challenge ""The past year has shown that large language models (LLMs) are very useful but complicated to deploy,"" Polu says, especially in the context of their application across business functions. This is why he believes that the goal of augmenting human productivity at scale is especially a product unlock and not only a research unlock, with the goal to identify the best way for companies to leverage these models. Therefore, Dust is creating a product that sits between humans and the large language models, with the focus on supporting the work of a team within the company to ultimately enhance employee productivity. A major challenge in leveraging leading LLMs like OpenAI, Anthropic, or Mistral to their fullest for employees and teams lies in effectively addressing a company's wide range of internal use cases. These use cases are typically very general and fluid in nature, requiring the use of very large language models. Due to the general nature of these use cases, it is very difficult to finetune the models - even if financial resources and access to the model weights are available. The main reason is that “the data that’s available in a company is a drop in the bucket compared to the data that is needed to finetune such big models accordingly,” Polu says, “which is why we believe that retrieval augmented generation is the way to go until we get much better at fine tuning”. For successful retrieval augmented generation (RAG) in the context of employee productivity, it is important to get access to the company data and to be able to ingest the data that is considered ‘shared knowledge’ of the company. This data usually sits in various SaaS applications across the organization. ## Solution Dust provides companies with the core platform to execute on their GenAI bet for their teams by deploying LLMs across the organization and providing context aware AI assistants through RAG. Users can manage so-called data sources within Dust and upload files or directly connect to it via APIs to ingest data from tools like Notion, Google Drive, or Slack. Dust then handles the chunking strategy with the embeddings models and performs retrieval augmented generation. ![solution-laptop-screen](/case-studies/dust/laptop-solutions.jpg) For this, Dust required a vector database and evaluated different options including Pinecone and Weaviate, but ultimately decided on Qdrant as the solution of choice. “We particularly liked Qdrant because it is open-source, written in Rust, and it has a well-designed API,” Polu says. For example, Dust was looking for high control and visibility in the context of their rapidly scaling demand, which made the fact that Qdrant is open-source a key driver for selecting Qdrant. Also, Dust's existing system which is interfacing with Qdrant, is written in Rust, which allowed Dust to create synergies with regards to library support. When building their solution with Qdrant, Dust took a two step approach: 1. **Get started quickly:** Initially, Dust wanted to get started quickly and opted for [Qdrant Cloud](https://qdrant.to/cloud), Qdrant’s managed solution, to reduce the administrative load on Dust’s end. In addition, they created clusters and deployed them on Google Cloud since Dust wanted to have those run directly in their existing Google Cloud environment. This added a lot of value as it allowed Dust to centralize billing and increase security by having the instance live within the same VPC. “The early setup worked out of the box nicely,” Polu says. 2. **Scale and optimize:** As the load grew, Dust started to take advantage of Qdrant’s features to tune the setup for optimization and scale. They started to look into how they map and cache data, as well as applying some of Qdrant’s [built-in compression features](/documentation/guides/quantization/). In particular, Dust leveraged the control of the [MMAP payload threshold](/documentation/concepts/storage/#configuring-memmap-storage) as well as [Scalar Quantization](/articles/scalar-quantization/), which enabled Dust to manage the balance between storing vectors on disk and keeping quantized vectors in RAM, more effectively. “This allowed us to scale smoothly from there,” Polu says. ## Results Dust has seen success in using Qdrant as their vector database of choice, as Polu acknowledges: “Qdrant’s ability to handle large-scale models and the flexibility it offers in terms of data management has been crucial for us. The observability features, such as historical graphs of RAM, Disk, and CPU, provided by Qdrant are also particularly useful, allowing us to plan our scaling strategy effectively.” ![“We were able to reduce the footprint of vectors in memory, which led to a significant cost reduction as we don’t have to run lots of nodes in parallel. While being memory-bound, we were able to push the same instances further with the help of quantization. While you get pressure on MMAP in this case you maintain very good performance even if the RAM is fully used. With this we were able to reduce our cost by 2x.” - Stanislas Polu, Co-Founder of Dust](/case-studies/dust/Dust-Quote.jpg) Dust was able to scale its application with Qdrant while maintaining low latency across hundreds of thousands of collections with retrieval only taking milliseconds, as well as maintaining high accuracy. Additionally, Polu highlights the efficiency gains Dust was able to unlock with Qdrant: ""We were able to reduce the footprint of vectors in memory, which led to a significant cost reduction as we don’t have to run lots of nodes in parallel. While being memory-bound, we were able to push the same instances further with the help of quantization. While you get pressure on MMAP in this case you maintain very good performance even if the RAM is fully used. With this we were able to reduce our cost by 2x."" ## Outlook Dust will continue to build out their platform, aiming to be the platform of choice for companies to execute on their internal GenAI strategy, unlocking company knowledge and driving team productivity. Over the coming months, Dust will add more connections, such as Intercom, Jira, or Salesforce. Additionally, Dust will expand on its structured data capabilities. To learn more about how Dust uses Qdrant to help employees in their day to day tasks, check out our [Vector Space Talk](https://www.youtube.com/watch?v=toIgkJuysQ4) featuring Stanislas Polu, Co-Founder of Dust. ",blog/case-study-dust.md "--- title: ""Are You Vendor Locked?"" draft: false slug: are-you-vendor-locked short_description: ""Redefining freedom in the age of Generative AI."" description: ""Redefining freedom in the age of Generative AI. We believe that vendor-dependency comes from hardware, not software. "" preview_image: /blog/are-you-vendor-locked/are-you-vendor-locked.png social_preview_image: /blog/are-you-vendor-locked/are-you-vendor-locked.png date: 2024-05-05T00:00:00-08:00 author: David Myriel featured: false tags: - vector search - vendor lock - hybrid cloud --- We all are. > *“There is no use fighting it. Pick a vendor and go all in. Everything else is a mirage.”* The last words of a seasoned IT professional > As long as we are using any product, our solution’s infrastructure will depend on its vendors. Many say that building custom infrastructure will hurt velocity. **Is this true in the age of AI?** It depends on where your company is at. Most startups don’t survive more than five years, so putting too much effort into infrastructure is not the best use of their resources. You first need to survive and demonstrate product viability. **Sometimes you may pick the right vendors and still fail.** ![gpu-costs](/blog/are-you-vendor-locked/gpu-costs.png) We have all started to see the results of the AI hardware bottleneck. Running LLMs is expensive and smaller operations might fold to high costs. How will this affect large enterprises? > If you are an established corporation, being dependent on a specific supplier can make or break a solid business case. For large-scale GenAI solutions, costs are essential to maintenance and dictate the long-term viability of such projects. In the short run, enterprises may afford high costs, but when the prices drop - then it’s time to adjust. > Unfortunately, the long run goal of scalability and flexibility may be countered by vendor lock-in. Shifting operations from one host to another requires expertise and compatibility adjustments. Should businesses become dependent on a single cloud service provider, they open themselves to risks ranging from soaring costs to stifled innovation. **Finding the best vendor is key; but it’s crucial to stay mobile.** ## **Hardware is the New Vendor Lock** > *“We’re so short on GPUs, the less people that use the tool [ChatGPT], the better.”* OpenAI CEO, Sam Altman > When GPU hosting becomes too expensive, large and exciting Gen AI projects lose their luster. If moving clouds becomes too costly or difficulty to implement - you are vendor-locked. This used to be common with software. Now, hardware is the new dependency. *Enterprises have many reasons to stay provider agnostic - but cost is the main one.* [Appenzeller, Bornstein & Casado from Andreessen Horowitz](https://a16z.com/navigating-the-high-cost-of-ai-compute/) point to growing costs of AI compute. It is still a vendor’s market for A100 hourly GPUs, largely due to supply constraints. Furthermore, the price differences between AWS, GCP and Azure are dynamic enough to justify extensive cost-benefit analysis from prospective customers. ![gpu-costs-a16z](/blog/are-you-vendor-locked/gpu-costs-a16z.png) *Source: Andreessen Horowitz* Sure, your competitors can brag about all the features they can access - but are they willing to admit how much their company has lost to convenience and increasing costs? As an enterprise customer, one shouldn’t expect a vendor to stay consistent in this market. ## How Does This Affect Qdrant? As an open source vector database, Qdrant is completely risk-free. Furthermore, cost savings is one of the many reasons companies use it to augment the LLM. You won’t need to burn through GPU cash for training or inference. A basic instance with a CPU and RAM can easily manage indexing and retrieval. > *However, we find that many of our customers want to host Qdrant in the same place as the rest of their infrastructure, such as the LLM or other data engineering infra. This can be for practical reasons, due to corporate security policies, or even global political reasons.* One day, they might find this infrastructure too costly. Although vector search will remain cheap, their training, inference and embedding costs will grow. Then, they will want to switch vendors. What could interfere with the switch? Compatibility? Technologies? Lack of expertise? In terms of features, cloud service standardization is difficult due to varying features between cloud providers. This leads to custom solutions and vendor lock-in, hindering migration and cost reduction efforts, [as seen with Snapchat and Twitter](https://www.businessinsider.com/snap-google-cloud-aws-reducing-costs-2023-2). ## **Fear, Uncertainty and Doubt** You spend months setting up the infrastructure, but your competitor goes all in with a cheaper alternative and has a competing product out in one month? Does avoiding the lock-in matter if your company will be out of business while you try to setup a fully agnostic platform? **Problem:** If you're not locked into a vendor, you're locked into managing a much larger team of engineers. The build vs buy tradeoff is real and it comes with its own set of risks and costs. **Acknowledgement:** Any organization that processes vast amounts of data with AI needs custom infrastructure and dedicated resources, no matter the industry. Having to work with expensive services such as A100 GPUs justifies the existence of in-house DevOps crew. Any enterprise that scales up needs to employ vigilant operatives if it wants to manage costs. > There is no need for **Fear, Uncertainty and Doubt**. Vendor lock is not a futile cause - so let’s dispel the sentiment that all vendors are adversaries. You just need to work with a company that is willing to accommodate flexible use of products. > **The Solution is Kubernetes:** Decoupling your infrastructure from a specific cloud host is currently the best way of staying risk-free. Any component of your solution that runs on Kubernetes can integrate seamlessly with other compatible infrastructure. This is how you stay dynamic and move vendors whenever it suits you best. ## **What About Hybrid Cloud?** The key to freedom is to building your applications and infrastructure to run on any cloud. By leveraging containerization and service abstraction using Kubernetes or Docker, software vendors can exercise good faith in helping their customers transition to other cloud providers. We designed the architecture of Qdrant Hybrid Cloud to meet the evolving needs of businesses seeking unparalleled flexibility, control, and privacy. This technology integrates Kubernetes clusters from any setting - cloud, on-premises, or edge - into a unified, enterprise-grade managed service. #### Take a look. It's completely yours. We’ll help you manage it.

[Qdrant Hybrid Cloud](/hybrid-cloud/) marks a significant advancement in vector databases, offering the most flexible way to implement vector search. You can test out Qdrant Hybrid Cloud today. Sign up or log into your [Qdrant Cloud account](https://cloud.qdrant.io/login) and get started in the **Hybrid Cloud** section. Also, to learn more about Qdrant Hybrid Cloud read our [Official Release Blog](/blog/hybrid-cloud/) or our [Qdrant Hybrid Cloud website](/hybrid-cloud/). For additional technical insights, please read our [documentation](/documentation/hybrid-cloud/). #### Try it out! [![hybrid-cloud-cta.png](/blog/are-you-vendor-locked/hybrid-cloud-cta.png)](https://qdrant.to/cloud) ",blog/are-you-vendor-locked.md "--- title: ""Dailymotion's Journey to Crafting the Ultimate Content-Driven Video Recommendation Engine with Qdrant Vector Database"" draft: false slug: case-study-dailymotion # Change this slug to your page slug if needed short_description: Dailymotion's Journey to Crafting the Ultimate Content-Driven Video Recommendation Engine with Qdrant Vector Database description: Dailymotion's Journey to Crafting the Ultimate Content-Driven Video Recommendation Engine with Qdrant Vector Database preview_image: /case-studies/dailymotion/preview-dailymotion.png # Change this # social_preview_image: /blog/Article-Image.png # Optional image used for link previews # title_preview_image: /blog/Article-Image.png # Optional image used for blog post title # small_preview_image: /blog/Article-Image.png # Optional image used for small preview in the list of blog posts date: 2024-02-27T13:22:31+01:00 author: Atita Arora featured: false # if true, this post will be featured on the blog page tags: # Change this, related by tags posts will be shown on the blog page - dailymotion - case study - recommender system weight: 0 # Change this weight to change order of posts # For more guidance, see https://github.com/qdrant/landing_page?tab=readme-ov-file#blog --- ## Dailymotion's Journey to Crafting the Ultimate Content-Driven Video Recommendation Engine with Qdrant Vector Database In today's digital age, the consumption of video content has become ubiquitous, with an overwhelming abundance of options available at our fingertips. However, amidst this vast sea of videos, the challenge lies not in finding content, but in discovering the content that truly resonates with individual preferences and interests and yet is diverse enough to not throw users into their own filter bubble. As viewers, we seek meaningful and relevant videos that enrich our experiences, provoke thought, and spark inspiration. Dailymotion is not just another video application; it's a beacon of curated content in an ocean of options. With a steadfast commitment to providing users with meaningful and ethical viewing experiences, Dailymotion stands as the bastion of videos that truly matter. They aim to boost a dynamic visual dialogue, breaking echo chambers and fostering discovery. ### Scale - **420 million+ videos** - **2k+ new videos / hour** - **13 million+ recommendations / day** - **300+ languages in videos** - **Required response time < 100 ms** ### Challenge - **Improve video recommendations** across all 3 applications of Dailymotion (mobile app, website and embedded video player on all major French and International sites) as it is the main driver of audience engagement and revenue stream of the platform. - Traditional [collaborative recommendation model](https://en.wikipedia.org/wiki/Collaborative_filtering) tends to recommend only popular videos, fresh and niche videos suffer due to zero or minimal interaction - Video content based recommendation system required processing all the video embedding at scale and in real time, as soon as they are added to the platform - Exact neighbor search at the scale and keeping them up to date with new video updates in real time at Dailymotion was unreasonable and unrealistic - Precomputed [KNN](https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm) would be expensive and may not work due to video updates every hour - Platform needs fast recommendations ~ < 100ms - Needed fast ANN search on a vector search engine which could support the scale and performance requirements of the platform ### Background / Journey The quest of Dailymotion to deliver an intelligent video recommendation engine providing a curated selection of videos to its users started with a need to present more relevant videos to the first-time users of the platform (cold start problem) and implement an ideal home feed experience to allow users to watch videos that are expected to be relevant, diverse, explainable, and easily tunable. \ This goal accounted for their efforts focused on[ Optimizing Video Recommender for Dailymotion's Home Feed ](https://medium.com/dailymotion/optimizing-video-feed-recommendations-with-diversity-machine-learning-first-steps-4cf9abdbbffd)back in the time. They continued their work in [Optimising the recommender engine with vector databases and opinion mining](https://medium.com/dailymotion/reinvent-your-recommender-system-using-vector-database-and-opinion-mining-a4fadf97d020) later with emphasis on ranking videos based on features like freshness, real views ratio, watch ratio, and aspect ratio to enhance user engagement and optimise watch time per user on the home feed. Furthermore, the team continued to focus on diversifying user interests by grouping videos based on interest and using stratified sampling to ensure a balanced experience for users. By now it was clear to the Dailymotion team that the future initiatives will involve overcoming obstacles related to data processing, sentiment analysis, and user experience to provide meaningful and diverse recommendations. The main challenge stayed at the candidate generation process, textual embeddings, opinion mining, along with optimising the efficiency and accuracy of these processes and tackling the complexities of large-scale content curation. ### Solution at glance ![solution-at-glance](/case-studies/dailymotion/solution-at-glance.png) The solution involved implementing a content based Recommendation System leveraging Qdrant to power the similar videos, with the following characteristics. **Fields used to represent each video** - Title , Tags , Description , Transcript (generated by [OpenAI whisper](https://openai.com/research/whisper)) **Encoding Model used** - [MUSE - Multilingual Universal Sentence Encoder](https://www.tensorflow.org/hub/tutorials/retrieval_with_tf_hub_universal_encoder_qa) * Supports - 16 languages ### Why Qdrant? ![quote-from-Samuel](/case-studies/dailymotion/Dailymotion-Quote.jpg) Looking at the complexity, scale and adaptability of the desired solution, the team decided to leverage Qdrant’s vector database to implement a content-based video recommendation that undoubtedly offered several advantages over other methods: **1. Efficiency in High-Dimensional Data Handling:** Video content is inherently high-dimensional, comprising various features such as audio, visual, textual, and contextual elements. Qdrant excels in efficiently handling high-dimensional data and out-of-the-box support for all the models with up to 65536 dimensions, making it well-suited for representing and processing complex video features with choice of any embedding model. **2. Scalability:** As the volume of video content and user interactions grows, scalability becomes paramount. Qdrant is meticulously designed to scale vertically as well as horizontally, allowing for seamless expansion to accommodate large volumes of data and user interactions without compromising performance. **3. Fast and Accurate Similarity Search:** Efficient video recommendation systems rely on identifying similarities between videos to make relevant recommendations. Qdrant leverages advanced HNSW indexing and similarity search algorithms to support fast and accurate retrieval of similar videos based on their feature representations nearly instantly (20ms for this use case) **4. Flexibility in vector representation with metadata through payloads:** Qdrant offers flexibility in storing vectors with metadata in form of payloads and offers support for advanced metadata filtering during the similarity search to incorporate custom logic. **5. Reduced Dimensionality and Storage Requirements:** Vector representations in Qdrant offer various Quantization and memory mapping techniques to efficiently store and retrieve vectors, leading to reduced storage requirements and computational overhead compared to alternative methods such as content-based filtering or collaborative filtering. **6. Impressive Benchmarks:** [Qdrant’s benchmarks](/benchmarks/) has definitely been one of the key motivations for the Dailymotion’s team to try the solution and the team comments that the performance has been only better than the benchmarks. **7. Ease of usage:** Qdrant API’s have been immensely easy to get started with as compared to Google Vertex Matching Engine (which was Dailymotion’s initial choice) and the support from the team has been of a huge value to us. **8. Being able to fetch data by id** Qdrant allows to retrieve vector point / videos by ids while the Vertex Matching Engine requires a vector input to be able to search for other vectors which was another really important feature for Dailymotion ### Data Processing pipeline ![data-processing](/case-studies/dailymotion/data-processing-pipeline.png) Figure shows the streaming architecture of the data processing pipeline that processes everytime a new video is uploaded or updated (Title, Description, Tags, Transcript), an updated embedding is computed and fed directly into Qdrant. ### Results ![before-qdrant-results](/case-studies/dailymotion/before-qdrant.png) There has been a big improvement in the recommended content processing time and quality as the existing system had issues like: 1. Subpar video recommendations due to long processing time ~ 5 hours 2. Collaborative recommender tended to recommend and focused on high signal / popular videos 3. Metadata based recommender focussed only on a very small scope of trusted video sources 4. The recommendations did not take contents of the video into consideration ![after-qdrant-results](/case-studies/dailymotion/after-qdrant.png) The new recommender system implementation leveraging Qdrant along with the collaborative recommender offered various advantages : 1. The processing time for the new video content reduced significantly to a few minutes which enabled the fresh videos to be part of recommendations. 2. The performant & scalable scope of video recommendation currently processes 22 Million videos and can provide recommendation for videos with fewer interactions too. 3. The overall huge performance gain on the low signal videos has contributed to more than 3 times increase on the interaction and CTR ( number of clicks) on the recommended videos. 4. Seamlessly solved the initial cold start and low performance problems with the fresh content. ### Outlook / Future plans The team is very excited with the results they achieved on their recommender system and wishes to continue building with it. \ They aim to work on Perspective feed next and say >”We've recently integrated this new recommendation system into our mobile app through a feature called Perspective. The aim of this feature is to disrupt the vertical feed algorithm, allowing users to discover new videos. When browsing their feed, users may encounter a video discussing a particular movie. With Perspective, they have the option to explore different viewpoints on the same topic. Qdrant plays a crucial role in this feature by generating candidate videos related to the subject, ensuring users are exposed to diverse perspectives and preventing them from being confined to an echo chamber where they only encounter similar viewpoints.” \ > Gladys Roch - Machine Learning Engineer ![perspective-feed-with-qdrant](/case-studies/dailymotion/perspective-feed-qdrant.jpg) The team is also interested in leveraging advanced features like [Qdrant’s Discovery API](/documentation/concepts/explore/#recommendation-api) to promote exploration of content to enable finding not only similar but dissimilar content too by using positive and negative vectors in the queries and making it work with the existing collaborative recommendation model. ### References **2024 -** [https://www.youtube.com/watch?v=1ULpLpWD0Aw](https://www.youtube.com/watch?v=1ULpLpWD0Aw) **2023 -** [https://medium.com/dailymotion/reinvent-your-recommender-system-using-vector-database-and-opinion-mining-a4fadf97d020](https://medium.com/dailymotion/reinvent-your-recommender-system-using-vector-database-and-opinion-mining-a4fadf97d020) **2022 -** [https://medium.com/dailymotion/optimizing-video-feed-recommendations-with-diversity-machine-learning-first-steps-4cf9abdbbffd](https://medium.com/dailymotion/optimizing-video-feed-recommendations-with-diversity-machine-learning-first-steps-4cf9abdbbffd) ",blog/case-study-dailymotion.md "--- draft: false title: ""Vector Search Complexities: Insights from Projects in Image Search and RAG - Noé Achache | Vector Space Talks"" slug: vector-image-search-rag short_description: Noé Achache discusses their projects in image search and RAG and its complexities. description: Noé Achache shares insights on vector search complexities, discussing projects on image matching, document retrieval, and handling sensitive medical data with practical solutions and industry challenges. preview_image: /blog/from_cms/noé-achache-cropped.png date: 2024-01-09T13:51:26.168Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - Vector Image Search - Retrieval Augmented Generation --- > *""I really think it's something the technology is ready for and would really help this kind of embedding model jumping onto the text search projects.”*\ -- Noé Achache on the future of image embedding > Exploring the depths of vector search? Want an analysis of its application in image search and document retrieval? Noé got you covered. Noé Achache is a Lead Data Scientist at Sicara, where he worked on a wide range of projects mostly related to computer vision, prediction with structured data, and more recently LLMs. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/2YgcSFjP7mKE0YpDGmSiq5?si=6BhlAMveSty4Yt7umPeHjA), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/1vKoiFAdorE).*** ## **Top Takeaways:** Discover the efficacy of Dino V2 in image representation and the complexities of deploying vector databases, while navigating the challenges of fine-tuning and data safety in sensitive fields. In this episode, Noe, shares insights on vector search from image search to retrieval augmented generation, emphasizing practical application in complex projects. 5 key insights you’ll learn: 1. Cutting-edge Image Search: Learn about the advanced model Dino V2 and its efficacy in image representation, surpassing traditional feature transform methods. 2. Data Deduplication Strategies: Gain knowledge on the sophisticated process of deduplicating real estate listings, a vital task in managing extensive data collections. 3. Document Retrieval Techniques: Understand the challenges and solutions in retrieval augmented generation for document searches, including the use of multi-language embedding models. 4. Protection of Sensitive Medical Data: Delve into strategies for handling confidential medical information and the importance of data safety in health-related applications. 5. The Path Forward in Model Development: Hear Noe discuss the pressing need for new types of models to address the evolving needs within the industry. > Fun Fact: The best-performing model Noé mentions for image representation in his image search project is Dino V2, which interestingly didn't require fine-tuning to understand objects and patterns. > ## Show Notes: 00:00 Relevant experience in vector DB projects and talks.\ 05:57 Match image features, not resilient to changes.\ 07:06 Compute crop vectors, and train to converge.\ 11:37 Simple training task, improve with hard examples.\ 15:25 Improving text embeddings using hard examples.\ 22:29 Future of image embedding for document search.\ 27:28 Efficient storage and retrieval process feature.\ 29:01 Models handle varied data; sparse vectors now possible.\ 35:59 Use memory, avoid disk for CI integration.\ 37:43 Challenging metadata filtering for vector databases and new models ## More Quotes from Noé: *""So basically what was great is that Dino manages to understand all objects and close patterns without fine tuning. So you can get an off the shelf model and get started very quickly and start bringing value very quickly without having to go through all the fine tuning processes.”*\ -- Noé Achache *""And at the end, the embeddings was not learning any very complex features, so it was not really improving it.”*\ -- Noé Achache *""When using an API model, it's much faster to use it in asynchronous mode like the embedding equation went something like ten times or 100 times faster. So it was definitely, it changed a lot of things.”*\ -- Noé Achache ## Transcript: Demetrios: Noe. Great to have you here everyone. We are back for another vector space talks and today we are joined by my man Noe, who is the lead data scientist at Sicara, and if you do not know, he is working on a wide range of projects, mostly related to computer vision. Vision. And today we are talking about navigating the complexities of vector search. We're going to get some practical insights from diverse projects in image search and everyone's favorite topic these days, retrieval augmented generation, aka rags. So noe, I think you got something for us. You got something planned for us here? Noe Acache: Yeah, I do. I can share them. Demetrios: All right, well, I'm very happy to have you on here, man. I appreciate you doing this. And let's get you sharing your screen so we can start rocking, rolling. Noe Acache: Okay. Can you see my screen? Demetrios: Yeah. Awesome. Noe Acache: Great. Thank you, Demetrius, for the great introduction. I just completed quickly. So as you may have guessed, I'm french. I'm a lead data scientist at Sicara. So Secura is a service company helping its clients in data engineering and data science, so building projects for them. Before being there, I worked at realtics on optical character recognition, and I'm now working mostly on, as you said, computer vision and also Gen AI. So I'm leading the geni side and I've been there for more than three years. Noe Acache: So some relevant experience on vector DB is why I'm here today, because I did four projects, four vector soft projects, and I also wrote an article on how to choose your database in 2023, your vector database. And I did some related talks in other conferences like Pydata, DVC, all the geni meetups of London and Paris. So what are we going to talk about today? First, an overview of the vector search projects. Just to give you an idea of the kind of projects we can do with vector search. Then we will dive into the specificities of the image search project and then into the specificities of the text search project. So here are the four projects. So two in image search, two in text search. The first one is about matching objects in videos to sell them afterwards. Noe Acache: So basically you have a video. We first detect the object. So like it can be a lamp, it can be a piece of clothes, anything, we classify it and then we compare it to a large selection of similar objects to retrieve the most similar one to a large collection of sellable objects. The second one is about deduplicating real estate adverts. So when agencies want to sell a property, like sometimes you have several agencies coming to take pictures of the same good. So you have different pictures of the same good. And the idea of this project was to match the different pictures of the same good, the same profile. Demetrios: I've seen that dude. I have been a victim of that. When I did a little house shopping back like five years ago, it would be the same house in many different ones, and sometimes you wouldn't know because it was different photos. So I love that you were thinking about it that way. Sorry to interrupt. Noe Acache: Yeah, so to be fair, it was the idea of my client. So basically I talk about it a bit later with aggregating all the adverts and trying to deduplicate them. And then the last two projects are about drugs retrieval, augmented generation. So the idea to be able to ask questions to your documentation. The first one was for my company's documentation and the second one was for a medical company. So different kind of complexities. So now we know all about this project, let's dive into them. So regarding the image search project, to compute representations of the images, the best performing model from the benchmark, and also from my experience, is currently Dino V two. Noe Acache: So a model developed by meta that you may have seen, which is using visual transformer. And what's amazing about it is that using the attention map, you can actually segment what's important in the picture, although you haven't told it specifically what's important. And as a human, it will learn to focus on the dog, on this picture and do not take into consideration the noisy background. So when I say best performing model, I'm talking about comparing to other architecture like Resnet efficient nets models, an approach I haven't tried, which also seems interesting. If anyone tried it for similar project, please reach out afterwards. I'll be happy to talk about it. Is sift for feature transform something about feature transform. It's basically a more traditional method without learned features through machine learning, as in you don't train the model, but it's more traditional methods. Noe Acache: And you basically detect the different features in an image and then try to find the same features in an image which is supposed to post to be the same. All the blue line trying to match the different features. Of course it's made to match image with exactly the same content, so it wouldn't really work. Probably not work in the first use case, because we are trying to match similar clothes, but which are not exactly the same one. And also it's known to be not very resilient with the changes of angles when it changes too much, et cetera. So it may not be very good as well for the second use case, but again, I haven't tried it, so just leaving it here on the side. Just a quick word about how Dino works in case you're interested. So it's a vision transformer and it's trade in an unsupervised way, as in you don't have any labels provided, so you just take pictures and you first extract small crops and large crops and you augment them. Noe Acache: And then you're going to use the model to compute vectors, representations of each of these crops. And since they all represent the same image, they should all be the same. So then you can compute a loss to see how they diverge and to basically train them to become the same. So this is how it works and how it works. And the difference between the second version is just that they use more data sets and the distillation method to have a very performant model, which is also very fast to run regarding the first use case. So, matching objects in videos to sellable items for people who use Google lengths before, it's quite similar, where in Google lens you can take a picture of something and then it will try to find similar objects to buy. So again, you have a video and then you detect one of the objects in the video, put it and compare it to a vector database which contains a lot of objects which are similar for the representation. And then it will output the most similar lamp here. Noe Acache: Now we're going to try to analyze how this project went regarding the positive outcomes and the changes we faced. So basically what was great is that Dino manages to understand all objects and close patterns without fine tuning. So you can get an off the shelf model and get started very quickly and start bringing value very quickly without having to go through all the fine tuning processes. And it also manages to focus on the object without segmentation. What I mean here is that we're going to get a box of the object, and in this box there will be a very noisy background which may disturb the matching process. And since Dino really manages to focus on the object, that's important on the image. It doesn't really matter that we don't segmentate perfectly the image. Regarding the vector database, this project started a while ago, and I think we chose the vector database something like a year and a half ago. Noe Acache: And so it was before all the vector database hype. And at the time, the most famous one was Milvos, the only famous one actually. And we went for an on premise development deployment. And actually our main learning is that the DevOps team really struggled to deploy it, because basically it's made of a lot of pods. And the documentations about how these pods are supposed to interact together is not really perfect. And it was really buggy at this time. So the clients lost a lot of time and money in this deployment. The challenges, other challenges we faced is that we noticed that the matching wasn't very resilient to large distortions. Noe Acache: So for furnitures like lamps, it's fine. But let's say you have a trouser and a person walking. So the trouser won't exactly have the same shape. And since you haven't trained your model to specifically know, it shouldn't focus on the movements. It will encode this movement. And then in the matching, instead of matching trouser, which looks similar, it will just match trouser where in the product picture the person will be working as well, which is not really what we want. And the other challenges we faced is that we tried to fine tune the model, but our first fine tuning wasn't very good because we tried to take an open source model and, and get the labels it had, like on different furnitures, clothes, et cetera, to basically train a model to classify the different classes and then remove the classification layer to just keep the embedding parts. The thing is that the labels were not specific enough. Noe Acache: So the training task was quite simple. And at the end, the embeddings was not learning any very complex features, so it was not really improving it. So jumping onto the areas of improvement, knowing all of that, the first thing I would do if I had to do it again will be to use the managed milboss for a better fine tuning, it would be to labyd hard examples, hard pairs. So, for instance, you know that when you have a matching pair where the similarity score is not too high or not too low, you know, it's where the model kind of struggles and you will find some good matching and also some mistakes. So it's where it kind of is interesting to level to then be able to fine tune your model and make it learn more complex things according to your tasks. Another possibility for fine tuning will be some sort of multilabel classification. So for instance, if you consider tab close, you could say, all right, those disclose contain buttons. It have a color, it have stripes. Noe Acache: And for all of these categories, you'll get a score between zero and one. And concatenating all these scores together, you can get an embedding which you can put in a vector database for your vector search. It's kind of hard to scale because you need to do a specific model and labeling for each type of object. And I really wonder how Google lens does because their algorithm work very well. So are they working more like with this kind of functioning or this kind of functioning? So if anyone had any thought on that or any idea, again, I'd be happy to talk about it afterwards. And finally, I feel like we made a lot of advancements in multimodal training, trying to combine text inputs with image. We've made input to build some kind of complex embeddings. And how great would it be to have an image embeding you could guide with text. Noe Acache: So you could just like when creating an embedding of your image, just say, all right, here, I don't care about the movements, I only care about the features on the object, for instance. And then it will learn an embedding according to your task without any fine tuning. I really feel like with the current state of the arts we are able to do this. I mean, we need to do it, but the technology is ready. Demetrios: Can I ask a few questions before you jump into the second use case? Noe Acache: Yes. Demetrios: What other models were you looking at besides the dyno one? Noe Acache: I said here, compared to Resnet, efficient nets and these kind of architectures. Demetrios: Maybe this was too early, or maybe it's not actually valuable. Was that like segment anything? Did that come into the play? Noe Acache: So segment anything? I don't think they redo embeddings. It's really about segmentation. So here I was just showing the segmentation part because it's a cool outcome of the model and it shows that the model works well here we are really here to build a representation of the image we cannot really play with segment anything for the matching, to my knowledge, at least. Demetrios: And then on the next slide where you talked about things you would do differently, or the last slide, I guess the areas of improvement you mentioned label hard examples for fine tuning. And I feel like, yeah, there's one way of doing it, which is you hand picking the different embeddings that you think are going to be hard. And then there's another one where I think there's tools out there now that can kind of show you where there are different embeddings that aren't doing so well or that are more edge cases. Noe Acache: Which tools are you talking about? Demetrios: I don't remember the names, but I definitely have seen demos online about how it'll give you a 3d space and you can kind of explore the different embeddings and explore what's going on I. Noe Acache: Know exactly what you're talking about. So tensorboard embeddings is a good tool for that. I could actually demo it afterwards. Demetrios: Yeah, I don't want to get you off track. That's something that came to mind if. Noe Acache: You'Re talking about the same tool. Turns out embedding. So basically you have an embedding of like 1000 dimensions and it just reduces it to free dimensions. And so you can visualize it in a 3d space and you can see how close your embeddings are from each other. Demetrios: Yeah, exactly. Noe Acache: But it's really for visualization purposes, not really for training purposes. Demetrios: Yeah, okay, I see. Noe Acache: Talking about the same thing. Demetrios: Yeah, I think that sounds like what I'm talking about. So good to know on both of these. And you're shooting me straight on it. Mike is asking a question in here, like text embedding, would that allow you to include an image with alternate text? Noe Acache: An image with alternate text? I'm not sure the question. Demetrios: So it sounds like a way to meet regulatory accessibility requirements if you have. I think it was probably around where you were talking about the multimodal and text to guide the embeddings and potentially would having that allow you to include an image with alternate text? Noe Acache: The idea is not to. I feel like the question is about inserting text within the image. It's what I understand. My idea was just if you could create an embedding that could combine a text inputs and the image inputs, and basically it would be trained in such a way that the text would basically be used as a guidance of the image to only encode the parts of the image which are required for your task to not be disturbed by the noisy. Demetrios: Okay. Yeah. All right, Mike, let us know if that answers the question or if you have more. Yes. He's saying, yeah, inserting text with image for people who can't see. Noe Acache: Okay, cool. Demetrios: Yeah, right on. So I'll let you keep cruising and I'll try not to derail it again. But that was great. It was just so pertinent. I wanted to stop you and ask some questions. Noe Acache: Larry, let's just move in. So second use case is about deduplicating real estate adverts. So as I was saying, you have two agencies coming to take different pictures of the same property. And the thing is that they may not put exactly the same price or the same surface or the same location. So you cannot just match them with metadata. So what our client was doing beforehand, and he kind of built a huge if machine, which is like, all right, if the location is not too far and if the surface is not too far. And the price, and it was just like very complex rules. And at the end there were a lot of edge cases. Noe Acache: It was very hard to maintain. So it was like, let's just do a simpler solution just based on images. So it was basically the task to match images of the same properties. Again on the positive outcomes is that the dino really managed to understand the patterns of the properties without any fine tuning. And it was resilient to read different angles of the same room. So like on the pictures I shown, I just showed, the model was quite good at identifying. It was from the same property. Here we used cudrant for this project was a bit more recent. Noe Acache: We leveraged a lot the metadata filtering because of course we can still use the metadata even it's not perfect just to say, all right, only search vectors, which are a price which is more or less 10% this price. The surface is more or less 10% the surface, et cetera, et cetera. And indexing of this metadata. Otherwise the search is really slowed down. So we had 15 million vectors and without this indexing, the search could take up to 20, 30 seconds. And with indexing it was like in a split second. So it was a killer feature for us. And we use quantization as well to save costs because the task was not too hard. Noe Acache: Since using the metadata we managed to every time reduce the task down to a search of 1000 vector. So it wasn't too annoying to quantize the vectors. And at the end for 15 million vectors, it was only $275 per month, which with the village version, which is very decent. The challenges we faced was really about bathrooms and empty rooms because all bathrooms kind of look similar. They have very similar features and same for empty rooms since there is kind of nothing in them, just windows. The model would often put high similarity scores between two bathroom of different properties and same for the empty rooms. So again, the method to overcome this thing will be to label harpers. So example were like two images where the model would think they are similar to actually tell the model no, they are not similar to allow it to improve its performance. Noe Acache: And again, same thing on the future of image embedding. I really think it's something the technology is ready for and would really help this kind of embedding model jumping onto the text search projects. So the principle of retribution generation for those of you who are not familiar with it is just you take some documents, you have an embedding model here, an embedding model trained on text and not on images, which will output representations from these documents, put it in a vector database, and then when a user will ask a question over the documentation, it will create an embedding of the request and retrieve the most similar documents. And afterwards we usually pass it to an LLM, which will generate an answer. But here in this talk, we won't focus on the overall product, but really on the vector search part. So the two projects was one, as I told you, a rack for my nutrition company, so endosion with around a few hundred thousand of pages, and the second one was for medical companies, so for the doctors. So it was really about the documentation search rather than the LLM, because you cannot output any mistake. The model we used was OpenAI Ada two. Noe Acache: Why? Mostly because for the first use case it's multilingual and it was off the shelf, very easy to use, so we did not spend a lot of time on this project. So using an API model made it just much faster. Also it was multilingual, approved by the community, et cetera. For the second use case, we're still working on it. So since we use GPT four afterwards, because it's currently the best LLM, it was also easier to use adatu to start with, but we may use a better one afterwards because as I'm saying, it's not the best one if you refer to the MTAB. So the massive text embedding benchmark made by hugging face, which basically gathers a lot of embeddings benchmark such as retrieval for instance, and so classified the different model for these benchmarks. The M tab is not perfect because it's not taking into account cross language capabilities. All the benchmarks are just for one language and it's not as well taking into account most of the languages, like it's only considering English, Polish and Chinese. Noe Acache: And also it's probably biased for models trained on close source data sets. So like most of the best performing models are currently closed source APIs and hence closed source data sets, and so we don't know how they've been trained. So they probably trained themselves on these data sets. At least if I were them, it's what I would do. So I assume they did it to gain some points in these data sets. Demetrios: So both of these rags are mainly with documents that are in French? Noe Acache: Yes. So this one is French and English, and this one is French only. Demetrios: Okay. Yeah, that's why the multilingual is super important for these use cases. Noe Acache: Exactly. Again, for this one there are models for French working much better than other two, so we may change it afterwards, but right now the performance we have is decent. Since both projects are very similar, I'll jump into the conclusion for both of them together. So Ada two is good for understanding diverse context, wide range of documentation, medical contents, technical content, et cetera, without any fine tuning. The cross language works quite well, so we can ask questions in English and retrieve documents in French and the other way around. And also, quick note, because I did not do it from the start, is that when using an API model, it's much faster to use it in asynchronous mode like the embedding equation went something like ten times or 100 times faster. So it was definitely, it changed a lot of things. Again, here we use cudrant mostly to leverage the free tier so they have a free version. Noe Acache: So you can pop it in a second, get the free version, and using the feature which allows to put the vectors on disk instead of storing them on ram, which makes it a bit slower, you can easily support few hundred thousand of vectors and with a very decent response time. The challenge we faced is that mostly for the notion, so like mostly in notion, we have a lot of pages which are just a title because they are empty, et cetera. And so when pages have just a title, the content is so small that it will be very similar actually to a question. So often the documents were retrieved were document with very little content, which was a bit frustrating. Chunking appropriately was also tough. Basically, if you want your retrieval process to work well, you have to divide your documents the right way to create the embeddings. So you can use matrix rules, but basically you need to divide your documents in content which semantically makes sense and it's not always trivial. And also for the rag, for the medical company, sometimes we are asking questions about a specific drug and it's just not under our search is just not retrieving the good documents, which is very frustrating because a basic search would. Noe Acache: So to handle these changes, a good option would be to use models handing differently question and documents like Bg or cohere. Basically they use the same model but trained differently on long documents and questions which allow them to map them differently in the space. And my guess is that using such model documents, which are only a title, et cetera, will not be as close as the question as they are right now because they will be considered differently. So I hope it will help this problem. Again, it's just a guess, maybe I'm wrong. Heap research so for the keyword problem I was mentioning here, so in the recent release, Cudran just enabled sparse vectors which make actually TFEdev vectors possible. The TFEDEF vectors are vectors which are based on keywords, but basically there is one number per possible word in the data sets, and a lot of zeros, so storing them as a normal vector will make the vector search very expensive. But as a sparse vector it's much better. Noe Acache: And so you can build a debrief search combining the TFDF search for keyword search and the other search for semantic search to get the best of both worlds and overcome this issue. And finally, I'm actually quite surprised that with all the work that is going on, generative AI and rag, nobody has started working on a model to help with chunking. It's like one of the biggest challenge, and I feel like it's quite doable to have a model which will our model, or some kind of algorithm which will understand the structure of your documentation and understand why it semantically makes sense to chunk your documents. Dude, so good. Demetrios: I got questions coming up. Don't go anywhere. Actually, it's not just me. Tom's also got some questions, so I'm going to just blame it on Tom, throw him under the bus. Rag with medical company seems like a dangerous use case. You can work to eliminate hallucinations and other security safety concerns, but you can't make sure that they're completely eliminated, right? You can only kind of make sure they're eliminated. And so how did you go about handling these concerns? Noe Acache: This is a very good question. This is why I mentioned this project is mostly about the document search. Basically what we do is that we use chainlit, which is a very good tool for chatting, and then you can put a react front in front of it to make it very custom. And so when the user asks a question, we provide the LLM answer more like as a second thought, like something the doctor could consider as a fagon thought. But what's the most important is that we directly put the, instead of just citing the sources, we put the HTML of the pages the source is based on, and what bring the most value is really these HTML pages. And so we know the answer may have some problems. The fact is, based on documents, hallucinations are almost eliminated. Like, we don't notice any hallucinations, but of course they can happen. Noe Acache: So it's really the way, it's really a product problem rather than an algorithm problem, an algorithmic problem, yeah. The documents retrieved rather than the LLM answer. Demetrios: Yeah, makes sense. My question around it is a lot of times in the medical space, the data that is being thrown around is super sensitive. Right. And you have a lot of Pii. How do you navigate that? Are you just not touching that? Noe Acache: So basically we work with a provider in front which has public documentation. So it's public documentation. There is no PII. Demetrios: Okay, cool. So it's not like some of it. Noe Acache: Is private, but still there is no PII in the documents. Demetrios: Yeah, because I think that's another really incredibly hard problem is like, oh yeah, we're just sending all this sensitive information over to the IDA model to create embeddings with it. And then we also pass it through Chat GPT before we get it back. And next thing you know, that is the data that was used to train GPT five. And you can say things like create an unlimited poem and get that out of it. So it's super sketchy, right? Noe Acache: Yeah, of course, one way to overcome that is to, for instance, for the notion project, it's our private documentation. We use Ada over Azure, which guarantees data safety. So it's quite a good workaround. And when you have to work with different level of security, if you deal with PII, a good way is to play with metadata. Depending on the security level of the person who has the question, you play with the metadata to output only some kind of documents. The database metadata. Demetrios: Excellent. Well, don't let me stop you. I know you had some conclusionary thoughts there. Noe Acache: No, sorry, I was about to conclude anyway. So just to wrap it up, so we got some good models without any fine tuning. With the model, we tried to overcome them, to overcome these limitations we still faced. For MS search, fine tuning is required at the moment. There's no really any other way to overcome it otherwise. While for tech search, fine tuning is not really necessary, it's more like tricks which are required about using eBrid search, using better models, et cetera. So two kind of approaches, Qdrant really made a lot of things easy. For instance, I love the feature where you can use the database as a disk file. Noe Acache: You can even also use it in memory for CI integration and stuff. But since for all my experimentations, et cetera, I won't use it as a disk file because it's much easier to play with. I just like this feature. And then it allows to use the same tool for your experiment and in production. When I was playing with milverse, I had to use different tools for experimentation and for the database in production, which was making the technical stock a bit more complex. Sparse vector for Tfedef, as I was mentioning, which allows to search based on keywords to make your retrieval much better. Manage deployment again, we really struggle with the deployment of the, I mean, the DevOps team really struggled with the deployment of the milverse. And I feel like in most cases, except if you have some security requirements, it will be much cheaper to use the managed deployments rather than paying dev costs. Noe Acache: And also with the free cloud and on these vectors, you can really do a lot of, at least start a lot of projects. And finally, the metadata filtering and indexing. So by the way, we went into a small trap. It's that indexing. It's recommended to index on your metadata before adding your vectors. Otherwise your performance may be impacted. So you may not retrieve the good vectors that you need. So it's interesting thing to take into consideration. Noe Acache: I know that metadata filtering is something quite hard to do for vector database, so I don't really know how it works, but I assume there is a good reason for that. And finally, as I was mentioning before, in my view, new types of models are needed to answer industrial needs. So the model we are talking about, tech guidance to make better image embeddings and automatic chunking, like some kind of algorithm and model which will automatically chunk your documents appropriately. So thank you very much. If you still have questions, I'm happy to answer them. Here are my social media. If you want to reach me out afterwards, twitch out afterwards, and all my writing and talks are gathered here if you're interested. Demetrios: Oh, I like how you did that. There is one question from Tom again, asking about if you did anything to handle images and tables within the documentation when you were doing those rags. Noe Acache: No, I did not do anything for the images and for the tables. It depends when they are well structured. I kept them because the model manages to understand them. But for instance, we did a small pock for the medical company when he tried to integrate some external data source, which was a PDF, and we wanted to use it as an HTML to be able to display the HTML otherwise explained to you directly in the answer. So we converted the PDF to HTML and in this conversion, the tables were absolutely unreadable. So even after cleaning. So we did not include them in this case. Demetrios: Great. Well, dude, thank you so much for coming on here. And thank you all for joining us for yet another vector space talk. If you would like to come on to the vector space talk and share what you've been up to and drop some knowledge bombs on the rest of us, we'd love to have you. So please reach out to me. And I think that is it for today. Noe, this was awesome, man. I really appreciate you doing this. Noe Acache: Thank you, Demetrius. Have a nice day. Demetrios: We'll see you all later. Bye. ",blog/vector-image-search-rag-vector-space-talk-008.md "--- title: ""Qdrant 1.10 - Universal Query, Built-in IDF & ColBERT Support"" draft: false short_description: ""Single search API. Server-side IDF. Native multivector support."" description: ""Consolidated search API, built-in IDF, and native multivector support. "" preview_image: /blog/qdrant-1.10.x/social_preview.png social_preview_image: /blog/qdrant-1.10.x/social_preview.png date: 2024-07-01T00:00:00-08:00 author: David Myriel featured: false tags: - vector search - ColBERT late interaction - BM25 algorithm - search API - new features --- [Qdrant 1.10.0 is out!](https://github.com/qdrant/qdrant/releases/tag/v1.10.0) This version introduces some major changes, so let's dive right in: **Universal Query API:** All search APIs, including Hybrid Search, are now in one Query endpoint.
**Built-in IDF:** We added the IDF mechanism to Qdrant's core search and indexing processes.
**Multivector Support:** Native support for late interaction ColBERT is accessible via Query API. ## One Endpoint for All Queries **Query API** will consolidate all search APIs into a single request. Previously, you had to work outside of the API to combine different search requests. Now these approaches are reduced to parameters of a single request, so you can avoid merging individual results. You can now configure the Query API request with the following parameters: |Parameter|Description| |-|-| |no parameter|Returns points by `id`| |`nearest`|Queries nearest neighbors ([Search](/documentation/concepts/search/))| |`fusion`|Fuses sparse/dense prefetch queries ([Hybrid Search](/documentation/concepts/hybrid-queries/#hybrid-search))| |`discover`|Queries `target` with added `context` ([Discovery](/documentation/concepts/explore/#discovery-api))| |`context` |No target with `context` only ([Context](/documentation/concepts/explore/#context-search))| |`recommend`|Queries against `positive`/`negative` examples. ([Recommendation](/documentation/concepts/explore/#recommendation-api))| |`order_by`|Orders results by [payload field](/documentation/concepts/hybrid-queries/#re-ranking-with-payload-values)| For example, you can configure Query API to run [Discovery search](/documentation/concepts/explore/#discovery-api). Let's see how that looks: ```http POST collections/{collection_name}/points/query { ""query"": { ""discover"": { ""target"": , ""context"": [ { ""positive"": , ""negative"": } ] } } } ``` We will be publishing code samples in [docs](/documentation/concepts/hybrid-queries/) and our new [API specification](http://api.qdrant.tech).
*If you need additional support with this new method, our [Discord](https://qdrant.to/discord) on-call engineers can help you.* ### Native Hybrid Search Support Query API now also natively supports **sparse/dense fusion**. Up to this point, you had to combine the results of sparse and dense searches on your own. This is now sorted on the back-end, and you only have to configure them as basic parameters for Query API. ```http POST /collections/{collection_name}/points/query { ""prefetch"": [ { ""query"": { ""indices"": [1, 42], // <┐ ""values"": [0.22, 0.8] // <┴─sparse vector }, ""using"": ""sparse"", ""limit"": 20 }, { ""query"": [0.01, 0.45, 0.67, ...], // <-- dense vector ""using"": ""dense"", ""limit"": 20 } ], ""query"": { ""fusion"": ""rrf"" }, // <--- reciprocal rank fusion ""limit"": 10 } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.query_points( collection_name=""{collection_name}"", prefetch=[ models.Prefetch( query=models.SparseVector(indices=[1, 42], values=[0.22, 0.8]), using=""sparse"", limit=20, ), models.Prefetch( query=[0.01, 0.45, 0.67], using=""dense"", limit=20, ), ], query=models.FusionQuery(fusion=models.Fusion.RRF), ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.query(""{collection_name}"", { prefetch: [ { query: { values: [0.22, 0.8], indices: [1, 42], }, using: 'sparse', limit: 20, }, { query: [0.01, 0.45, 0.67], using: 'dense', limit: 20, }, ], query: { fusion: 'rrf', }, }); ``` ```rust use qdrant_client::Qdrant; use qdrant_client::qdrant::{Fusion, PrefetchQueryBuilder, Query, QueryPointsBuilder}; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client.query( QueryPointsBuilder::new(""{collection_name}"") .add_prefetch(PrefetchQueryBuilder::default() .query(Query::new_nearest([(1, 0.22), (42, 0.8)].as_slice())) .using(""sparse"") .limit(20u64) ) .add_prefetch(PrefetchQueryBuilder::default() .query(Query::new_nearest(vec![0.01, 0.45, 0.67])) .using(""dense"") .limit(20u64) ) .query(Query::new_fusion(Fusion::Rrf)) ).await?; ``` ```java import static io.qdrant.client.QueryFactory.nearest; import java.util.List; import static io.qdrant.client.QueryFactory.fusion; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.Fusion; import io.qdrant.client.grpc.Points.PrefetchQuery; import io.qdrant.client.grpc.Points.QueryPoints; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client.queryAsync( QueryPoints.newBuilder() .setCollectionName(""{collection_name}"") .addPrefetch(PrefetchQuery.newBuilder() .setQuery(nearest(List.of(0.22f, 0.8f), List.of(1, 42))) .setUsing(""sparse"") .setLimit(20) .build()) .addPrefetch(PrefetchQuery.newBuilder() .setQuery(nearest(List.of(0.01f, 0.45f, 0.67f))) .setUsing(""dense"") .setLimit(20) .build()) .setQuery(fusion(Fusion.RRF)) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.QueryAsync( collectionName: ""{collection_name}"", prefetch: new List < PrefetchQuery > { new() { Query = new(float, uint)[] { (0.22f, 1), (0.8f, 42), }, Using = ""sparse"", Limit = 20 }, new() { Query = new float[] { 0.01f, 0.45f, 0.67f }, Using = ""dense"", Limit = 20 } }, query: Fusion.Rrf ); ``` Query API can now pre-fetch vectors for requests, which means you can run queries sequentially within the same API call. There are a lot of options here, so you will need to define a strategy to merge these requests using new parameters. For example, you can now include **rescoring within Hybrid Search**, which can open the door to strategies like iterative refinement via matryoshka embeddings. *To learn more about this, read the [Query API documentation](/documentation/concepts/search/#query-api).* ## Inverse Document Frequency [IDF] IDF is a critical component of the **TF-IDF (Term Frequency-Inverse Document Frequency)** weighting scheme used to evaluate the importance of a word in a document relative to a collection of documents (corpus). There are various ways in which IDF might be calculated, but the most commonly used formula is: $$ \text{IDF}(q_i) = \ln \left(\frac{N - n(q_i) + 0.5}{n(q_i) + 0.5}+1\right) $$ Where:
`N` is the total number of documents in the collection.
`n` is the number of documents containing non-zero values for the given vector. This variant is also used in BM25, whose support was heavily requested by our users. We decided to move the IDF calculation into the Qdrant engine itself. This type of separation allows streaming updates of the sparse embeddings while keeping the IDF calculation up-to-date. The values of IDF previously had to be calculated using all the documents on the client side. However, now that Qdrant does it out of the box, you won't need to implement it anywhere else and recompute the value if some documents are removed or newly added. You can enable the IDF modifier in the collection configuration: ```http PUT /collections/{collection_name} { ""sparse_vectors"": { ""text"": { ""modifier"": ""idf"" } } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.create_collection( collection_name=""{collection_name}"", sparse_vectors={ ""text"": models.SparseVectorParams( modifier=models.Modifier.IDF, ), }, ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.createCollection(""{collection_name}"", { sparse_vectors: { ""text"": { modifier: ""idf"" } } }); ``` ```rust use qdrant_client::Qdrant; use qdrant_client::qdrant::{CreateCollectionBuilder, sparse_vectors_config::SparseVectorsConfigBuilder, Modifier, SparseVectorParamsBuilder}; let client = Qdrant::from_url(""http://localhost:6334"").build()?; let mut config = SparseVectorsConfigBuilder::default(); config.add_named_vector_params( ""text"", SparseVectorParamsBuilder::default().modifier(Modifier::Idf), ); client .create_collection( CreateCollectionBuilder::new(""{collection_name}"") .sparse_vectors_config(config), ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Modifier; import io.qdrant.client.grpc.Collections.SparseVectorConfig; import io.qdrant.client.grpc.Collections.SparseVectorParams; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName(""{collection_name}"") .setSparseVectorsConfig( SparseVectorConfig.newBuilder() .putMap(""text"", SparseVectorParams.newBuilder().setModifier(Modifier.Idf).build())) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.CreateCollectionAsync( collectionName: ""{collection_name}"", sparseVectorsConfig: (""text"", new SparseVectorParams { Modifier = Modifier.Idf, }) ); ``` ### IDF as Part of BM42 This quarter, Qdrant also introduced BM42, a novel algorithm that combines the IDF element of BM25 with transformer-based attention matrices to improve text retrieval. It utilizes attention matrices from your embedding model to determine the importance of each token in the document based on the attention value it receives. We've prepared the standard `all-MiniLM-L6-v2` Sentence Transformer so [it outputs the attention values](https://huggingface.co/Qdrant/all_miniLM_L6_v2_with_attentions). Still, you can use virtually any model of your choice, as long as you have access to its parameters. This is just another reason to stick with open source technologies over proprietary systems. In practical terms, the BM42 method addresses the tokenization issues and computational costs associated with SPLADE. The model is both efficient and effective across different document types and lengths, offering enhanced search performance by leveraging the strengths of both BM25 and modern transformer techniques. > To learn more about IDF and BM42, read our [dedicated technical article](/articles/bm42/). **You can expect BM42 to excel in scalable RAG-based scenarios where short texts are more common.** Document inference speed is much higher with BM42, which is critical for large-scale applications such as search engines, recommendation systems, and real-time decision-making systems. ## Multivector Support We are adding native support for multivector search that is compatible, e.g., with the late-interaction [ColBERT](https://github.com/stanford-futuredata/ColBERT) model. If you are working with high-dimensional similarity searches, **ColBERT is highly recommended as a reranking step in the Universal Query search.** You will experience better quality vector retrieval since ColBERT’s approach allows for deeper semantic understanding. This model retains contextual information during query-document interaction, leading to better relevance scoring. In terms of efficiency and scalability benefits, documents and queries will be encoded separately, which gives an opportunity for pre-computation and storage of document embeddings for faster retrieval. **Note:** *This feature supports all the original quantization compression methods, just the same as the regular search method.* **Run a query with ColBERT vectors:** Query API can handle exceedingly complex requests. The following example prefetches 1000 entries most similar to the given query using the `mrl_byte` named vector, then reranks them to get the best 100 matches with `full` named vector and eventually reranks them again to extract the top 10 results with the named vector called `colbert`. A single API call can now implement complex reranking schemes. ```http POST /collections/{collection_name}/points/query { ""prefetch"": { ""prefetch"": { ""query"": [1, 23, 45, 67], // <------ small byte vector ""using"": ""mrl_byte"", ""limit"": 1000 }, ""query"": [0.01, 0.45, 0.67, ...], // <-- full dense vector ""using"": ""full"", ""limit"": 100 }, ""query"": [ // <─┐ [0.1, 0.2, ...], // < │ [0.2, 0.1, ...], // < ├─ multi-vector [0.8, 0.9, ...] // < │ ], // <─┘ ""using"": ""colbert"", ""limit"": 10 } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.query_points( collection_name=""{collection_name}"", prefetch=models.Prefetch( prefetch=models.Prefetch(query=[1, 23, 45, 67], using=""mrl_byte"", limit=1000), query=[0.01, 0.45, 0.67], using=""full"", limit=100, ), query=[ [0.1, 0.2], [0.2, 0.1], [0.8, 0.9], ], using=""colbert"", limit=10, ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.query(""{collection_name}"", { prefetch: { prefetch: { query: [1, 23, 45, 67], using: 'mrl_byte', limit: 1000 }, query: [0.01, 0.45, 0.67], using: 'full', limit: 100, }, query: [ [0.1, 0.2], [0.2, 0.1], [0.8, 0.9], ], using: 'colbert', limit: 10, }); ``` ```rust use qdrant_client::Qdrant; use qdrant_client::qdrant::{PrefetchQueryBuilder, Query, QueryPointsBuilder}; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client.query( QueryPointsBuilder::new(""{collection_name}"") .add_prefetch(PrefetchQueryBuilder::default() .add_prefetch(PrefetchQueryBuilder::default() .query(Query::new_nearest(vec![1.0, 23.0, 45.0, 67.0])) .using(""mrl_byte"") .limit(1000u64) ) .query(Query::new_nearest(vec![0.01, 0.45, 0.67])) .using(""full"") .limit(100u64) ) .query(Query::new_nearest(vec![ vec![0.1, 0.2], vec![0.2, 0.1], vec![0.8, 0.9], ])) .using(""colbert"") .limit(10u64) ).await?; ``` ```java import static io.qdrant.client.QueryFactory.nearest; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.PrefetchQuery; import io.qdrant.client.grpc.Points.QueryPoints; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .queryAsync( QueryPoints.newBuilder() .setCollectionName(""{collection_name}"") .addPrefetch( PrefetchQuery.newBuilder() .addPrefetch( PrefetchQuery.newBuilder() .setQuery(nearest(1, 23, 45, 67)) // <------------- small byte vector .setUsing(""mrl_byte"") .setLimit(1000) .build()) .setQuery(nearest(0.01f, 0.45f, 0.67f)) // <-- dense vector .setUsing(""full"") .setLimit(100) .build()) .setQuery( nearest( new float[][] { {0.1f, 0.2f}, // <─┐ {0.2f, 0.1f}, // < ├─ multi-vector {0.8f, 0.9f} // < ┘ })) .setUsing(""colbert"") .setLimit(10) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.QueryAsync( collectionName: ""{collection_name}"", prefetch: new List { new() { Prefetch = { new List { new() { Query = new float[] { 1, 23, 45, 67 }, // <------------- small byte vector Using = ""mrl_byte"", Limit = 1000 }, } }, Query = new float[] {0.01f, 0.45f, 0.67f}, // <-- dense vector Using = ""full"", Limit = 100 } }, query: new float[][] { [0.1f, 0.2f], // <─┐ [0.2f, 0.1f], // < ├─ multi-vector [0.8f, 0.9f] // < ┘ }, usingVector: ""colbert"", limit: 10 ); ``` **Note:** *The multivector feature is not only useful for ColBERT; it can also be used in other ways.*
For instance, in e-commerce, you can use multi-vector to store multiple images of the same item. This serves as an alternative to the [group-by](/documentation/concepts/search/#grouping-api) method. ## Sparse Vectors Compression In version 1.9, we introduced the `uint8` [vector datatype](/documentation/concepts/vectors/#datatypes) for sparse vectors, in order to support pre-quantized embeddings from companies like JinaAI and Cohere. This time, we are introducing a new datatype **for both sparse and dense vectors**, as well as a different way of **storing** these vectors. **Datatype:** Sparse and dense vectors were previously represented in larger `float32` values, but now they can be turned to the `float16`. `float16` vectors have a lower precision compared to `float32`, which means that there is less numerical accuracy in the vector values - but this is negligible for practical use cases. These vectors will use half the memory of regular vectors, which can significantly reduce the footprint of large vector datasets. Operations can be faster due to reduced memory bandwidth requirements and better cache utilization. This can lead to faster vector search operations, especially in memory-bound scenarios. When creating a collection, you need to specify the `datatype` upfront: ```http PUT /collections/{collection_name} { ""vectors"": { ""size"": 1024, ""distance"": ""Cosine"", ""datatype"": ""float16"" } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url=""http://localhost:6333"") client.create_collection( ""{collection_name}"", vectors_config=models.VectorParams( size=1024, distance=models.Distance.COSINE, datatype=models.Datatype.FLOAT16 ), ) ``` ```typescript import { QdrantClient } from ""@qdrant/js-client-rest""; const client = new QdrantClient({ host: ""localhost"", port: 6333 }); client.createCollection(""{collection_name}"", { vectors: { size: 1024, distance: ""Cosine"", datatype: ""float16"" } }); ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Datatype; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder(""localhost"", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName(""{collection_name}"") .setVectorsConfig(VectorsConfig.newBuilder() .setParams(VectorParams.newBuilder() .setSize(1024) .setDistance(Distance.Cosine) .setDatatype(Datatype.Float16) .build()) .build()) .build()) .get(); ``` ```rust use qdrant_client::Qdrant; use qdrant_client::qdrant::{CreateCollectionBuilder, Datatype, Distance, VectorParamsBuilder}; let client = Qdrant::from_url(""http://localhost:6334"").build()?; client .create_collection( CreateCollectionBuilder::new(""{collection_name}"").vectors_config( VectorParamsBuilder::new(1024, Distance::Cosine).datatype(Datatype::Float16), ), ) .await?; ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient(""localhost"", 6334); await client.CreateCollectionAsync( collectionName: ""{collection_name}"", vectorsConfig: new VectorParams { Size = 1024, Distance = Distance.Cosine, Datatype = Datatype.Float16 } ); ``` **Storage:** On the backend, we implemented bit packing to minimize the bits needed to store data, crucial for handling sparse vectors in applications like machine learning and data compression. For sparse vectors with mostly zeros, this focuses on storing only the indices and values of non-zero elements. You will benefit from a more compact storage and higher processing efficiency. This can also lead to reduced dataset sizes for faster processing and lower storage costs in data compression. ## New Rust Client Qdrant’s Rust client has been fully reshaped. It is now more accessible and easier to use. We have focused on putting together a minimalistic API interface. All operations and their types now use the builder pattern, providing an easy and extensible interface, preventing breakage with future updates. See the Rust [ColBERT query](#multivector-support) as great example. Additionally, Rust supports safe concurrent execution, which is crucial for handling multiple simultaneous requests efficiently. Documentation got a significant improvement as well. It is much better organized and provides usage examples across the board. Everything links back to our main documentation, making it easier to navigate and find the information you need.

Visit our client and operations documentation

## S3 Snapshot Storage Qdrant **Collections**, **Shards** and **Storage** can be backed up with [Snapshots](/documentation/concepts/snapshots/) and saved in case of data loss or other data transfer purposes. These snapshots can be quite large and the resources required to maintain them can result in higher costs. AWS S3 and other S3-compatible implementations like [min.io](https://min.io/) is a great low-cost alternative that can hold snapshots without incurring high costs. It is globally reliable, scalable and resistant to data loss. You can configure S3 storage settings in the [config.yaml](https://github.com/qdrant/qdrant/blob/master/config/config.yaml), specifically with `snapshots_storage`. For example, to use AWS S3: ```yaml storage: snapshots_config: # Use 's3' to store snapshots on S3 snapshots_storage: s3 s3_config: # Bucket name bucket: your_bucket_here # Bucket region (e.g. eu-central-1) region: your_bucket_region_here # Storage access key # Can be specified either here or in the `AWS_ACCESS_KEY_ID` environment variable. access_key: your_access_key_here # Storage secret key # Can be specified either here or in the `AWS_SECRET_ACCESS_KEY` environment variable. secret_key: your_secret_key_here ``` *Read more about [S3 snapshot storage](/documentation/concepts/snapshots/#s3) and [configuration](/documentation/guides/configuration/).* This integration allows for a more convenient distribution of snapshots. Users of **any S3-compatible object storage** can now benefit from other platform services, such as automated workflows and disaster recovery options. S3's encryption and access control ensure secure storage and regulatory compliance. Additionally, S3 supports performance optimization through various storage classes and efficient data transfer methods, enabling quick and effective snapshot retrieval and management. ## Issues API Issues API notifies you about potential performance issues and misconfigurations. This powerful new feature allows users (such as database admins) to efficiently manage and track issues directly within the system, ensuring smoother operations and quicker resolutions. You can find the Issues button in the top right. When you click the bell icon, a sidebar will open to show ongoing issues. ![issues api](/blog/qdrant-1.10.x/issues.png) ## Minor Improvements - Pre-configure collection parameters; quantization, vector storage & replication factor - [#4299](https://github.com/qdrant/qdrant/pull/4299) - Overwrite global optimizer configuration for collections. Lets you separate roles for indexing and searching within the single qdrant cluster - [#4317](https://github.com/qdrant/qdrant/pull/4317) - Delta encoding and bitpacking compression for sparse vectors reduces memory consumption for sparse vectors by up to 75% - [#4253](https://github.com/qdrant/qdrant/pull/4253), [#4350](https://github.com/qdrant/qdrant/pull/4350) ",blog/qdrant-1.10.x.md "--- draft: false title: Optimizing Semantic Search by Managing Multiple Vectors slug: storing-multiple-vectors-per-object-in-qdrant short_description: Qdrant's approach to storing multiple vectors per object, unraveling new possibilities in data representation and retrieval. description: Discover the power of vector storage optimization and learn how to efficiently manage multiple vectors per object for enhanced semantic search capabilities. preview_image: /blog/from_cms/andrey.vasnetsov_a_space_station_with_multiple_attached_modules_853a27c7-05c4-45d2-aebc-700a6d1e79d0.png date: 2022-10-05T10:05:43.329Z author: Kacper Łukawski featured: false tags: - Data Science - Neural Networks - Database - Search - Similarity Search --- # How to Optimize Vector Storage by Storing Multiple Vectors Per Object In a real case scenario, a single object might be described in several different ways. If you run an e-commerce business, then your items will typically have a name, longer textual description and also a bunch of photos. While cooking, you may care about the list of ingredients, and description of the taste but also the recipe and the way your meal is going to look. Up till now, if you wanted to enable [semantic search](https://qdrant.tech/documentation/tutorials/search-beginners/) with multiple vectors per object, Qdrant would require you to create separate collections for each vector type, even though they could share some other attributes in a payload. However, since Qdrant 0.10 you are able to store all those vectors together in the same collection and share a single copy of the payload! Running the new version of Qdrant is as simple as it always was. By running the following command, you are able to set up a single instance that will also expose the HTTP API: ``` docker run -p 6333:6333 qdrant/qdrant:v0.10.1 ``` ## Creating a collection Adding new functionalities typically requires making some changes to the interfaces, so no surprise we had to do it to enable the multiple vectors support. Currently, if you want to create a collection, you need to define the configuration of all the vectors you want to store for each object. Each vector type has its own name and the distance function used to measure how far the points are. ```python from qdrant_client import QdrantClient from qdrant_client.http.models import VectorParams, Distance client = QdrantClient() client.create_collection( collection_name=""multiple_vectors"", vectors_config={ ""title"": VectorParams( size=100, distance=Distance.EUCLID, ), ""image"": VectorParams( size=786, distance=Distance.COSINE, ), } ) ``` In case you want to keep a single vector per collection, you can still do it without putting a name though. ```python client.create_collection( collection_name=""single_vector"", vectors_config=VectorParams( size=100, distance=Distance.COSINE, ) ) ``` All the search-related operations have slightly changed their interfaces as well, so you can choose which vector to use in a specific request. However, it might be easier to see all the changes by following an end-to-end Qdrant usage on a real-world example. ## Building service with multiple embeddings Quite a common approach to building search engines is to combine semantic textual capabilities with image search as well. For that purpose, we need a dataset containing both images and their textual descriptions. There are several datasets available with [MS_COCO_2017_URL_TEXT](https://huggingface.co/datasets/ChristophSchuhmann/MS_COCO_2017_URL_TEXT) being probably the simplest available. And because it’s available on HuggingFace, we can easily use it with their [datasets](https://huggingface.co/docs/datasets/index) library. ```python from datasets import load_dataset dataset = load_dataset(""ChristophSchuhmann/MS_COCO_2017_URL_TEXT"") ``` Right now, we have a dataset with a structure containing the image URL and its textual description in English. For simplicity, we can convert it to the DataFrame, as this structure might be quite convenient for future processing. ```python import pandas as pd dataset_df = pd.DataFrame(dataset[""train""]) ``` The dataset consists of two columns: *TEXT* and *URL*. Thus, each data sample is described by two separate pieces of information and each of them has to be encoded with a different model. ## Processing the data with pretrained models Thanks to [embetter](https://github.com/koaning/embetter), we can reuse some existing pretrained models and use a convenient scikit-learn API, including pipelines. This library also provides some utilities to load the images, but only supports the local filesystem, so we need to create our own class that will download the file, given its URL. ```python from pathlib import Path from urllib.request import urlretrieve from embetter.base import EmbetterBase class DownloadFile(EmbetterBase): def __init__(self, out_dir: Path): self.out_dir = out_dir def transform(self, X, y=None): output_paths = [] for x in X: output_file = self.out_dir / Path(x).name urlretrieve(x, output_file) output_paths.append(str(output_file)) return output_paths ``` Now we’re ready to define the pipelines to process our images and texts using *all-MiniLM-L6-v2* and *vit_base_patch16_224* models respectively. First of all, let’s start with Qdrant configuration. ## Creating Qdrant collection We’re going to put two vectors per object (one for image and another one for text), so we need to create a collection with a configuration allowing us to do so. ```python from qdrant_client import QdrantClient from qdrant_client.http.models import VectorParams, Distance client = QdrantClient(timeout=None) client.create_collection( collection_name=""ms-coco-2017"", vectors_config={ ""text"": VectorParams( size=384, distance=Distance.EUCLID, ), ""image"": VectorParams( size=1000, distance=Distance.COSINE, ), }, ) ``` ## Defining the pipelines And since we have all the puzzles already in place, we can start the processing to convert raw data into the embeddings we need. The pretrained models come in handy. ```python from sklearn.pipeline import make_pipeline from embetter.grab import ColumnGrabber from embetter.vision import ImageLoader, TimmEncoder from embetter.text import SentenceEncoder output_directory = Path(""./images"") image_pipeline = make_pipeline( ColumnGrabber(""URL""), DownloadFile(output_directory), ImageLoader(), TimmEncoder(""vit_base_patch16_224""), ) text_pipeline = make_pipeline( ColumnGrabber(""TEXT""), SentenceEncoder(""all-MiniLM-L6-v2""), ) ``` Thanks to the scikit-learn API, we can simply call each pipeline on the created DataFrame and put created vectors into Qdrant to enable fast vector search. For convenience, we’re going to put the vectors as other columns in our DataFrame. ```python sample_df = dataset_df.sample(n=2000, random_state=643) image_vectors = image_pipeline.transform(sample_df) text_vectors = text_pipeline.transform(sample_df) sample_df[""image_vector""] = image_vectors.tolist() sample_df[""text_vector""] = text_vectors.tolist() ``` The created vectors might be easily put into Qdrant. For the sake of simplicity, we’re going to skip it, but if you are interested in details, please check out the [Jupyter notebook](https://gist.github.com/kacperlukawski/961aaa7946f55110abfcd37fbe869b8f) going step by step. ## Searching with multiple vectors If you decided to describe each object with several [neural embeddings](https://qdrant.tech/articles/neural-search-tutorial/), then at each search operation you need to provide the vector name along with the [vector embedding](https://qdrant.tech/articles/what-are-embeddings/), so the engine knows which one to use. The interface of the search operation is pretty straightforward and requires an instance of NamedVector. ```python from qdrant_client.http.models import NamedVector text_results = client.search( collection_name=""ms-coco-2017"", query_vector=NamedVector( name=""text"", vector=row[""text_vector""], ), limit=5, with_vectors=False, with_payload=True, ) ``` If we, on the other hand, decided to search using the image embedding, then we just provide the vector name we have chosen while creating the collection, so instead of “text”, we would provide “image”, as this is how we configured it at the very beginning. ## The results: image vs text search Since we have two different vectors describing each object, we can perform the search query using any of those. That shouldn’t be surprising then, that the results are different depending on the chosen embedding method. The images below present the results returned by Qdrant for the image/text on the left-hand side. ### Image search If we query the system using image embedding, then it returns the following results: ![](/blog/from_cms/0_5nqlmjznjkvdrjhj.webp ""Image search results"") ### Text search However, if we use textual description embedding, then the results are slightly different: ![](/blog/from_cms/0_3sdgctswb99xtexl.webp ""Text search However, if we use textual description embedding, then the results are slightly different:"") It is not surprising that a method used for creating neural encoding plays an important role in the search process and its quality. If your data points might be described using several vectors, then the latest release of Qdrant gives you an opportunity to store them together and reuse the payloads, instead of creating several collections and querying them separately. ### Summary: - Qdrant 0.10 introduces efficient vector storage optimization, allowing seamless management of multiple vectors per object within a single collection. - This update streamlines semantic search capabilities by eliminating the need for separate collections for each vector type, enhancing search accuracy and performance. - With Qdrant's new features, users can easily configure vector parameters, including size and distance functions, for each vector type, optimizing search results and user experience. If you’d like to check out some other examples, please check out our [full notebook](https://gist.github.com/kacperlukawski/961aaa7946f55110abfcd37fbe869b8f) presenting the search results and the whole pipeline implementation.",blog/storing-multiple-vectors-per-object-in-qdrant.md "--- draft: false title: ""Enhance AI Data Sovereignty with Aleph Alpha and Qdrant Hybrid Cloud"" short_description: ""Empowering the world’s best companies in their AI journey."" description: ""Empowering the world’s best companies in their AI journey."" preview_image: /blog/hybrid-cloud-aleph-alpha/hybrid-cloud-aleph-alpha.png date: 2024-04-11T00:01:00Z author: Qdrant featured: false weight: 1012 tags: - Qdrant - Vector Database --- [Aleph Alpha](https://aleph-alpha.com/) and Qdrant are on a joint mission to empower the world’s best companies in their AI journey. The launch of [Qdrant Hybrid Cloud](/hybrid-cloud/) furthers this effort by ensuring complete data sovereignty and hosting security. This latest collaboration is all about giving enterprise customers complete transparency and sovereignty to make use of AI in their own environment. By using a hybrid cloud vector database, those looking to leverage vector search for the AI applications can now ensure their proprietary and customer data is completely secure. Aleph Alpha’s state-of-the-art technology, offering unmatched quality and safety, cater perfectly to large-scale business applications and complex scenarios utilized by professionals across fields such as science, law, and security globally. Recognizing that these sophisticated use cases often demand comprehensive data processing capabilities beyond what standalone LLMs can provide, the collaboration between Aleph Alpha and Qdrant Hybrid Cloud introduces a robust platform. This platform empowers customers with full data sovereignty, enabling secure management of highly specific and sensitive information within their own infrastructure. Together with Aleph Alpha, Qdrant Hybrid Cloud offers an ecosystem where individual components seamlessly integrate with one another. Qdrant's new Kubernetes-native design coupled with Aleph Alpha's powerful technology meet the needs of developers who are both prototyping and building production-level apps. #### How Aleph Alpha and Qdrant Blend Data Control, Scalability, and European Standards Building apps with Qdrant Hybrid Cloud and Aleph Alpha’s models leverages some common value propositions: **Data Sovereignty:** Qdrant Hybrid Cloud is the first vector database that can be deployed anywhere, with complete database isolation, while still providing fully managed cluster management. Furthermore, as the best option for organizations that prioritize data sovereignty, Aleph Alpha offers foundation models which are aimed at serving regional use cases. Together, both products can be leveraged to keep highly specific data safe and isolated. **Scalable Vector Search:** Once deployed to a customer’s host of choice, Qdrant Hybrid Cloud provides a fully managed vector database that lets users effortlessly scale the setup through vertical or horizontal scaling. Deployed in highly secure environments, this is a robust setup that is designed to meet the needs of large enterprises, ensuring a full spectrum of solutions for various projects and workloads. **European Origins & Expertise**: With a strong presence in the European Union ecosystem, Aleph Alpha is ideally positioned to partner with European-based companies like Qdrant, providing local expertise and infrastructure that aligns with European regulatory standards. #### Build a Data-Sovereign AI System With Qdrant Hybrid Cloud and Aleph Alpha’s Models ![hybrid-cloud-aleph-alpha-tutorial](/blog/hybrid-cloud-aleph-alpha/hybrid-cloud-aleph-alpha-tutorial.png) To get you started, we created a comprehensive tutorial that shows how to build next-gen AI applications with Qdrant Hybrid Cloud and Aleph Alpha’s advanced models. #### Tutorial: Build a Region-Specific Contract Management System Learn how to develop an AI system that reads lengthy contracts and gives complex answers based on stored content. This system is completely hosted inside of Germany for GDPR compliance purposes. The tutorial shows how enterprises with a vast number of stored contract documents can leverage AI in a closed environment that doesn’t leave the hosting region, thus ensuring data sovereignty and security. [Try the Tutorial](/documentation/examples/rag-contract-management-stackit-aleph-alpha/) #### Documentation: Deploy Qdrant in a Few Clicks Our simple Kubernetes-native design lets you deploy Qdrant Hybrid Cloud on your hosting platform of choice in just a few steps. Learn how in our documentation. [Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/) #### Ready to Get Started? Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/). ",blog/hybrid-cloud-aleph-alpha.md "--- title: ""Introducing Qdrant Stars: Join Our Ambassador Program!"" draft: false slug: qdrant-stars-announcement # Change this slug to your page slug if needed short_description: Qdrant Stars recognizes and supports key contributors to the Qdrant ecosystem through content creation and community leadership. # Change this description: Say hello to the first Qdrant Stars and learn more about our new ambassador program! preview_image: /blog/qdrant-stars-announcement/preview-image.png social_preview_image: /blog/qdrant-stars-announcement/preview-image.png date: 2024-05-19T11:57:37-03:00 author: Sabrina Aquino featured: false tags: - news - vector search - qdrant - ambassador program - community --- We're excited to introduce **Qdrant Stars**, our new ambassador program created to recognize and support Qdrant users making a strong impact in the AI and vector search space. Whether through innovative content, real-world applications tutorials, educational events, or engaging discussions, they are constantly making vector search more accessible and interesting to explore. ### 👋 Say hello to the first Qdrant Stars! Our inaugural Qdrant Stars are a diverse and talented lineup who have shown exceptional dedication to our community. You might recognize some of their names:
Robert Caulk

Robert is working with a team on AskNews to adaptively enrich, index, and report on over 1 million news articles per day. His team maintains an open-source tool geared toward cluster orchestration Flowdapt, which moves data around highly parallelized production environments. This is why Robert and his team rely on Qdrant for low-latency, scalable, hybrid search across dense and sparse vectors in asynchronous environments.

I am interested in brainstorming innovative ways to interact with Qdrant vector databases and building presentations that show the power of coupling Flowdapt with Qdrant for large-scale production GenAI applications. I look forward to networking with Qdrant experts and users so that I can learn from their experience.
Joshua Mo

Josh is a Rust developer and DevRel Engineer at Shuttle, assisting with user engagement and being a point of contact for first-line information within the community. He's often writing educational content that combines Javascript with Rust and is a coach at Codebar, which is a charity that runs free programming workshops for minority groups within tech.

I am excited about getting access to Qdrant's new features and contributing to the AI community by demonstrating how those features can be leveraged for production environments.
Nicholas Khami

Nick is a founder and product engineer at Trieve and has been using Qdrant since late 2022. He has a low level understanding of the Qdrant API, especially the Rust client, and knows a lot about how to make the most of Qdrant on an application level.

I'm looking forward to be helping folks use lesser known features to enhance and make their projects better!
Owen Colegrove

Owen Colegrove is the Co-Founder of SciPhi, making it easy build, deploy, and scale RAG systems using Qdrant vector search tecnology. He has Ph.D. in Physics and was previously a Quantitative Strategist at Citadel and a Researcher at CERN.

I'm excited about working together with Qdrant!
Kameshwara Pavan Kumar Mantha

Kameshwara Pavan is a expert with 14 years of extensive experience in full stack development, cloud solutions, and AI. Specializing in Generative AI and LLMs. Pavan has established himself as a leader in these cutting-edge domains. He holds a Master's in Data Science and a Master's in Computer Applications, and is currently pursuing his PhD.

Outside of my professional pursuits, I'm passionate about sharing my knowledge through technical blogging, engaging in technical meetups, and staying active with cycling. I admire the groundbreaking work Qdrant is doing in the industry, and I'm eager to collaborate and learn from the team that drives such exceptional advancements.
Niranjan Akella

Niranjan is an AI/ML Engineer at Genesys who specializes in building and deploying AI models such as LLMs, Diffusion Models, and Vision Models at scale. He actively shares his projects through content creation and is passionate about applied research, developing custom real-time applications that that serve a greater purpose.

I am a scientist by heart and an AI engineer by profession. I'm always armed to take a leap of faith into the impossible to be come the impossible. I'm excited to explore and venture into Qdrant Stars with some support to build a broader community and develop a sense of completeness among like minded people.
Bojan Jakimovski

Bojan is an Advanced Machine Learning Engineer at Loka currently pursuing a Master’s Degree focused on applying AI in Heathcare. He is specializing in Dedicated Computer Systems, with a passion for various technology fields.

I'm really excited to show the power of the Qdrant as vector database. Especially in some fields where accessing the right data by very fast and efficient way is a must, in fields like Healthcare and Medicine.
We are happy to welcome this group of people who are deeply committed to advancing vector search technology. We look forward to supporting their vision, and helping them make a bigger impact on the community. You can find and chat with them at our [Discord Community](discord.gg/qdrant). ### Why become a Qdrant Star? There are many ways you can benefit from the Qdrant Star Program. Here are just a few: ##### Exclusive rewards programs Celebrate top contributors monthly with special rewards, including exclusive swag and monetary prizes. Quarterly awards for 'Most Innovative Content' and 'Best Tutorial' offer additional prizes. ##### Early access to new features Be the first to explore and write about our latest features and beta products. Participate in product meetings where your ideas and suggestions can directly influence our roadmap. ##### Conference support We love seeing our stars on stage! If you're planning to attend and speak about Qdrant at conferences, we've got you covered. Receive presentation templates, mentorship, and educational materials to help deliver standout conference presentations, with travel expenses covered. ##### Qdrant Certification End the program as a certified Qdrant ambassador and vector search specialist, with provided training resources and a certification test to showcase your expertise. ### What do Qdrant Stars do? As a Qdrant Star, you'll share your knowledge with the community through articles, blogs, tutorials, or demos that highlight the power and versatility of vector search technology - in your own creative way. You'll be a friendly face and a trusted expert in the community, sparking discussions on topics you love and keeping our community active and engaged. Love organizing events? You'll have the chance to host meetups, workshops, and other educational gatherings, with all the promotional and logistical support you need to make them a hit. But if large conferences are your thing, we’ll provide the resources and cover your travel expenses so you can focus on delivering an outstanding presentation. You'll also have a say in the Qdrant roadmap by giving feedback on new features and participating in product meetings. Qdrant Stars are constantly contributing to the growth and value of the vector search ecosystem. ### How to join the Qdrant Stars Program Are you interested in becoming a Qdrant Star? We're on the lookout for individuals who are passionate about vector search technology and looking to make an impact in the AI community. If you have a strong understanding of vector search technologies, enjoy creating content, speaking at conferences, and actively engage with our community. If this sounds like you, don't hesitate to apply. We look forward to potentially welcoming you as our next Qdrant Star. [Apply here!](https://forms.gle/q4fkwudDsy16xAZk8) Share your journey with vector search technologies and how you plan to contribute further. #### Nominate a Qdrant Star Do you know someone who could be our next Qdrant Star? Please submit your nomination through our [nomination form](https://forms.gle/n4zv7JRkvnp28qv17), explaining why they're a great fit. Your recommendation could help us find the next standout ambassador. #### Learn More For detailed information about the program's benefits, activities, and perks, refer to the [Qdrant Stars Handbook](https://qdrant.github.io/qdrant-stars-handbook/). To connect with current Stars, ask questions, and stay updated on the latest news and events at Qdrant, [join our Discord community](http://discord.gg/qdrant). ",blog/qdrant-stars-announcement copy.md "--- title: ""What is Vector Similarity? Understanding its Role in AI Applications."" draft: false short_description: ""An in-depth exploration of vector similarity and its applications in AI."" description: ""Discover the significance of vector similarity in AI applications and how our vector database revolutionizes similarity search technology for enhanced performance and accuracy."" preview_image: /blog/what-is-vector-similarity/social_preview.png social_preview_image: /blog/what-is-vector-similarity/social_preview.png date: 2024-02-24T00:00:00-08:00 author: Qdrant Team featured: false tags: - vector search - vector similarity - similarity search - embeddings --- # Understanding Vector Similarity: Powering Next-Gen AI Applications A core function of a wide range of AI applications is to first understand the *meaning* behind a user query, and then provide *relevant* answers to the questions that the user is asking. With increasingly advanced interfaces and applications, this query can be in the form of language, or an image, an audio, video, or other forms of *unstructured* data. On an ecommerce platform, a user can, for instance, try to find ‘clothing for a trek’, when they actually want results around ‘waterproof jackets’, or ‘winter socks’. Keyword, or full-text, or even synonym search would fail to provide any response to such a query. Similarly, on a music app, a user might be looking for songs that sound similar to an audio clip they have heard. Or, they might want to look up furniture that has a similar look as the one they saw on a trip. ## How Does Vector Similarity Work? So, how does an algorithm capture the essence of a user’s query, and then unearth results that are relevant? At a high level, here’s how: - Unstructured data is first converted into a numerical representation, known as vectors, using a deep-learning model. The goal here is to capture the ‘semantics’ or the key features of this data. - The vectors are then stored in a vector database, along with references to their original data. - When a user performs a query, the query is first converted into its vector representation using the same model. Then search is performed using a metric, to find other vectors which are closest to the query vector. - The list of results returned corresponds to the vectors that were found to be the closest. At the heart of all such searches lies the concept of *vector similarity*, which gives us the ability to measure how closely related two data points are, how similar or dissimilar they are, or find other related data points. In this document, we will deep-dive into the essence of vector similarity, study how vector similarity search is used in the context of AI, look at some real-world use cases and show you how to leverage the power of vector similarity and vector similarity search for building AI applications. ## **Understanding Vectors, Vector Spaces and Vector Similarity** ML and deep learning models require numerical data as inputs to accomplish their tasks. Therefore, when working with non-numerical data, we first need to convert them into a numerical representation that captures the key features of that data. This is where vectors come in. A vector is a set of numbers that represents data, which can be text, image, or audio, or any multidimensional data. Vectors reside in a high-dimensional space, the vector space, where each dimension captures a specific aspect or feature of the data. {{< figure width=80% src=/blog/what-is-vector-similarity/working.png caption=""Working"" >}} The number of dimensions of a vector can range from tens or hundreds to thousands, and each dimension is stored as the element of an array. Vectors are, therefore, an array of numbers of fixed length, and in their totality, they encode the key features of the data they represent. Vector embeddings are created by AI models, a process known as vectorization. They are then stored in vector stores like Qdrant, which have the capability to rapidly search through vector space, and find similar or dissimilar vectors, cluster them, find related ones, or even the ones which are complete outliers. For example, in the case of text data, “coat” and “jacket” have similar meaning, even though the words are completely different. Vector representations of these two words should be such that they lie close to each other in the vector space. The process of measuring their proximity in vector space is vector similarity. Vector similarity, therefore, is a measure of how closely related two data points are in a vector space. It quantifies how alike or different two data points are based on their respective vector representations. Suppose we have the words ""king"", ""queen"" and “apple”. Given a model, words with similar meanings have vectors that are close to each other in the vector space. Vector representations of “king” and “queen” would be, therefore, closer together than ""king"" and ""apple"", or “queen” and “apple” due to their semantic relationship. Vector similarity is how you calculate this. An extremely powerful aspect of vectors is that they are not limited to representing just text, image or audio. In fact, vector representations can be created out of any kind of data. You can create vector representations of 3D models, for instance. Or for video clips, or molecular structures, or even [protein sequences](https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-019-3220-8). There are several methodologies through which vectorization is performed. In creating vector representations of text, for example, the process involves analyzing the text for its linguistic elements using a transformer model. These models essentially learn to capture the essence of the text by dissecting its language components. ## **How Is Vector Similarity Calculated?** There are several ways to calculate the similarity (or distance) between two vectors, which we call metrics. The most popular ones are: **Dot Product**: Obtained by multiplying corresponding elements of the vectors and then summing those products. A larger dot product indicates a greater degree of similarity. **Cosine Similarity**: Calculated using the dot product of the two vectors divided by the product of their magnitudes (norms). Cosine similarity of 1 implies that the vectors are perfectly aligned, while a value of 0 indicates no similarity. A value of -1 means they are diametrically opposed (or dissimilar). **Euclidean Distance**: Assuming two vectors act like arrows in vector space, Euclidean distance calculates the length of the straight line connecting the heads of these two arrows. The smaller the Euclidean distance, the greater the similarity. **Manhattan Distance**: Also known as taxicab distance, it is calculated as the total distance between the two vectors in a vector space, if you follow a grid-like path. The smaller the Manhattan distance, the greater the similarity. {{< figure width=80% src=/blog/what-is-vector-similarity/products.png caption=""Metrics"" >}} As a rule of thumb, the choice of the best similarity metric depends on how the vectors were encoded. Of the four metrics, Cosine Similarity is the most popular. ## **The Significance of Vector Similarity** Vector Similarity is vital in powering machine learning applications. By comparing the vector representation of a query to the vectors of all data points, vector similarity search algorithms can retrieve the most relevant vectors. This helps in building powerful similarity search and recommendation systems, and has numerous applications in image and text analysis, in natural language processing, and in other domains that deal with high-dimensional data. Let’s look at some of the key ways in which vector similarity can be leveraged. **Image Analysis** Once images are converted to their vector representations, vector similarity can help create systems to identify, categorize, and compare them. This can enable powerful reverse image search, facial recognition systems, or can be used for object detection and classification. **Text Analysis** Vector similarity in text analysis helps in understanding and processing language data. Vectorized text can be used to build semantic search systems, or in document clustering, or plagiarism detection applications. **Retrieval Augmented Generation (RAG)** Vector similarity can help in representing and comparing linguistic features, from single words to entire documents. This can help build retrieval augmented generation (RAG) applications, where the data is retrieved based on user intent. It also enables nuanced language tasks such as sentiment analysis, synonym detection, language translation, and more. **Recommender Systems** By converting user preference vectors into item vectors from a dataset, vector similarity can help build semantic search and recommendation systems. This can be utilized in a range of domains such e-commerce or OTT services, where it can help in suggesting relevant products, movies or songs. Due to its varied applications, vector similarity has become a critical component in AI tooling. However, implementing it at scale, and in production settings, poses some hard problems. Below we will discuss some of them and explore how Qdrant helps solve these challenges. ## **Challenges with Vector Similarity Search** The biggest challenge in this area comes from what researchers call the ""[curse of dimensionality](https://en.wikipedia.org/wiki/Curse_of_dimensionality)."" Algorithms like k-d trees may work well for finding exact matches in low dimensions (in 2D or 3D space). However, when you jump to high-dimensional spaces (hundreds or thousands of dimensions, which is common with vector embeddings), these algorithms become impractical. Traditional search methods and OLTP or OLAP databases struggle to handle this curse of dimensionality efficiently. This means that building production applications that leverage vector similarity involves navigating several challenges. Here are some of the key challenges to watch out for. ### Scalability Various vector search algorithms were originally developed to handle datasets small enough to be accommodated entirely within the memory of a single computer. However, in real-world production settings, the datasets can encompass billions of high-dimensional vectors. As datasets grow, the storage and computational resources required to maintain and search through vector space increases dramatically. For building scalable applications, leveraging vector databases that allow for a distributed architecture and have the capabilities of sharding, partitioning and load balancing is crucial. ### Efficiency As the number of dimensions in vectors increases, algorithms that work in lower dimensions become less effective in measuring true similarity. This makes finding nearest neighbors computationally expensive and inaccurate in high-dimensional space. For efficient query processing, it is important to choose vector search systems which use indexing techniques that help speed up search through high-dimensional vector space, and reduce latency. ### Security For real-world applications, vector databases frequently house privacy-sensitive data. This can encompass Personally Identifiable Information (PII) in customer records, intellectual property (IP) like proprietary documents, or specialized datasets subject to stringent compliance regulations. For data security, the vector search system should offer features that prevent unauthorized access to sensitive information. Also, it should empower organizations to retain data sovereignty, ensuring their data complies with their own regulations and legal requirements, independent of the platform or the cloud provider. These are some of the many challenges that developers face when attempting to leverage vector similarity in production applications. To address these challenges head-on, we have made several design choices at Qdrant which help power vector search use-cases that go beyond simple CRUD applications. ## How Qdrant Solves Vector Similarity Search Challenges Qdrant is a highly performant and scalable vector search system, developed ground up in Rust. Qdrant leverages Rust’s famed memory efficiency and performance. It supports horizontal scaling, sharding, and replicas, and includes security features like role-based authentication. Additionally, Qdrant can be deployed in various environments, including [hybrid cloud setups](/hybrid-cloud/). Here’s how we have taken on some of the key challenges that vector search applications face in production. ### Efficiency Our [choice of Rust](/articles/why-rust/) significantly contributes to the efficiency of Qdrant’s vector similarity search capabilities. Rust’s emphasis on safety and performance, without the need for a garbage collector, helps with better handling of memory and resources. Rust is renowned for its performance and safety features, particularly in concurrent processing, and we leverage it heavily to handle high loads efficiently. Also, a key feature of Qdrant is that we leverage both vector and traditional indexes (payload index). This means that vector index helps speed up vector search, while traditional indexes help filter the results. The vector index in Qdrant employs the Hierarchical Navigable Small World (HNSW) algorithm for Approximate Nearest Neighbor (ANN) searches, which is one of the fastest algorithms according to [benchmarks](https://github.com/erikbern/ann-benchmarks). ### Scalability For massive datasets and demanding workloads, Qdrant supports [distributed deployment](/documentation/guides/distributed_deployment/) from v0.8.0. In this mode, you can set up a Qdrant cluster and distribute data across multiple nodes, enabling you to maintain high performance and availability even under increased workloads. Clusters support sharding and replication, and harness the Raft consensus algorithm to manage node coordination. Qdrant also supports vector [quantization](/documentation/guides/quantization/) to reduce memory footprint and speed up vector similarity searches, making it very effective for large-scale applications where efficient resource management is critical. There are three quantization strategies you can choose from - scalar quantization, binary quantization and product quantization - which will help you control the trade-off between storage efficiency, search accuracy and speed. ### Security Qdrant offers several [security features](/documentation/guides/security/) to help protect data and access to the vector store: - API Key Authentication: This helps secure API access to Qdrant Cloud with static or read-only API keys. - JWT-Based Access Control: You can also enable more granular access control through JSON Web Tokens (JWT), and opt for restricted access to specific parts of the stored data while building Role-Based Access Control (RBAC). - TLS Encryption: Additionally, you can enable TLS Encryption on data transmission to ensure security of data in transit. To help with data sovereignty, Qdrant can be run in a [Hybrid Cloud](/hybrid-cloud/) setup. Hybrid Cloud allows for seamless deployment and management of the vector database across various environments, and integrates Kubernetes clusters into a unified managed service. You can manage these clusters via Qdrant Cloud’s UI while maintaining control over your infrastructure and resources. ## Optimizing Similarity Search Performance In order to achieve top performance in vector similarity searches, Qdrant employs a number of other tactics in addition to the features discussed above.**FastEmbed**: Qdrant supports [FastEmbed](/articles/fastembed/), a lightweight Python library for generating fast and efficient text embeddings. FastEmbed uses quantized transformer models integrated with ONNX Runtime, and is significantly faster than traditional methods of embedding generation. **Support for Dense and Sparse Vectors**: Qdrant supports both dense and sparse vector representations. While dense vectors are most common, you may encounter situations where the dataset contains a range of specialized domain-specific keywords. [Sparse vectors](/articles/sparse-vectors/) shine in such scenarios. Sparse vectors are vector representations of data where most elements are zero. **Multitenancy**: Qdrant supports [multitenancy](/documentation/guides/multiple-partitions/) by allowing vectors to be partitioned by payload within a single collection. Using this you can isolate each user's data, and avoid creating separate collections for each user. In order to ensure indexing performance, Qdrant also offers ways to bypass the construction of a global vector index, so that you can index vectors for each user independently. **IO Optimizations**: If your data doesn’t fit into the memory, it may require storing on disk. To [optimize disk IO performance](/articles/io_uring/), Qdrant offers io_uring based *async uring* storage backend on Linux-based systems. Benchmarks show that it drastically helps reduce operating system overhead from disk IO. **Data Integrity**: To ensure data integrity, Qdrant handles data changes in two stages. First, changes are recorded in the Write-Ahead Log (WAL). Then, changes are applied to segments, which store both the latest and individual point versions. In case of abnormal shutdowns, data is restored from WAL. **Integrations**: Qdrant has integrations with most popular frameworks, such as LangChain, LlamaIndex, Haystack, Apache Spark, FiftyOne, and more. Qdrant also has several [trusted partners](/blog/hybrid-cloud-launch-partners/) for Hybrid Cloud deployments, such as Oracle Cloud Infrastructure, Red Hat OpenShift, Vultr, OVHcloud, Scaleway, and DigitalOcean. We regularly run [benchmarks](/benchmarks/) comparing Qdrant against other vector databases like Elasticsearch, Milvus, and Weaviate. Our benchmarks show that Qdrant consistently achieves the highest requests-per-second (RPS) and lowest latencies across various scenarios, regardless of the precision threshold and metric used. ## Real-World Use Cases Vector similarity is increasingly being used in a wide range of [real-world applications](/use-cases/). In e-commerce, it powers recommendation systems by comparing user behavior vectors to product vectors. In social media, it can enhance content recommendations and user connections by analyzing user interaction vectors. In image-oriented applications, vector similarity search enables reverse image search, similar image clustering, and efficient content-based image retrieval. In healthcare, vector similarity helps in genetic research by comparing DNA sequence vectors to identify similarities and variations. The possibilities are endless. A unique example of real-world application of vector similarity is how VISUA uses Qdrant. A leading computer vision platform, VISUA faced two key challenges. First, a rapid and accurate method to identify images and objects within them for reinforcement learning. Second, dealing with the scalability issues of their quality control processes due to the rapid growth in data volume. Their previous quality control, which relied on meta-information and manual reviews, was no longer scalable, which prompted the VISUA team to explore vector databases as a solution. After exploring a number of vector databases, VISUA picked Qdrant as the solution of choice. Vector similarity search helped identify similarities and deduplicate large volumes of images, videos, and frames. This allowed VISUA to uniquely represent data and prioritize frames with anomalies for closer examination, which helped scale their quality assurance and reinforcement learning processes. Read our [case study](/blog/case-study-visua/) to learn more. ## Future Directions and Innovations As real-world deployments of vector similarity search technology grows, there are a number of promising directions where this technology is headed. We are developing more efficient indexing and search algorithms to handle increasing data volumes and high-dimensional data more effectively. Simultaneously, in case of dynamic datasets, we are pushing to enhance our handling of real-time updates and low-latency search capabilities. Qdrant is one of the most secure vector stores out there. However, we are working on bringing more privacy-preserving techniques in vector search implementations to protect sensitive data. We have just about witnessed the tip of the iceberg in terms of what vector similarity can achieve. If you are working on an interesting use-case that uses vector similarity, we would like to hear from you. ### Key Takeaways: - **Vector Similarity in AI:** Vector similarity is a crucial technique in AI, allowing for the accurate matching of queries with relevant data, driving advanced applications like semantic search and recommendation systems. - **Versatile Applications of Vector Similarity:** This technology powers a wide range of AI-driven applications, from reverse image search in e-commerce to sentiment analysis in text processing. - **Overcoming Vector Search Challenges:** Implementing vector similarity at scale poses challenges like the curse of dimensionality, but specialized systems like Qdrant provide efficient and scalable solutions. - **Qdrant's Advanced Vector Search:** Qdrant leverages Rust's performance and safety features, along with advanced algorithms, to deliver high-speed and secure vector similarity search, even for large-scale datasets. - **Future Innovations in Vector Similarity:** The field of vector similarity is rapidly evolving, with advancements in indexing, real-time search, and privacy-preserving techniques set to expand its capabilities in AI applications. ## Getting Started with Qdrant Ready to implement vector similarity in your AI applications? Explore Qdrant's vector database to enhance your data retrieval and AI capabilities. For additional resources and documentation, visit: - [Quick Start Guide](/documentation/quick-start/) - [Documentation](/documentation/) We are always available on our [Discord channel](https://qdrant.to/discord) to answer any questions you might have. You can also sign up for our [newsletter](/subscribe/) to stay ahead of the curve. ",blog/what-is-vector-similarity.md "--- draft: false title: Mastering Batch Search for Vector Optimization | Qdrant slug: batch-vector-search-with-qdrant short_description: Introducing efficient batch vector search capabilities, streamlining and optimizing large-scale searches for enhanced performance. description: ""Discover how to optimize your vector search capabilities with efficient batch search. Learn optimization strategies for faster, more accurate results."" preview_image: /blog/from_cms/andrey.vasnetsov_career_mining_on_the_moon_with_giant_machines_813bc56a-5767-4397-9243-217bea869820.png date: 2022-09-26T15:39:53.751Z author: Kacper Łukawski featured: false tags: - Data Science - Vector Database - Machine Learning - Information Retrieval --- # How to Optimize Vector Search Using Batch Search in Qdrant 0.10.0 The latest release of Qdrant 0.10.0 has introduced a lot of functionalities that simplify some common tasks. Those new possibilities come with some slightly modified interfaces of the client library. One of the recently introduced features is the possibility to query the collection with [multiple vectors](https://qdrant.tech/blog/storing-multiple-vectors-per-object-in-qdrant/) at once — a batch search mechanism. There are a lot of scenarios in which you may need to perform multiple non-related tasks at the same time. Previously, you only could send several requests to Qdrant API on your own. But multiple parallel requests may cause significant network overhead and slow down the process, especially in case of poor connection speed. Now, thanks to the new batch search, you don’t need to worry about that. Qdrant will handle multiple search requests in just one API call and will perform those requests in the most optimal way. ## An example of using batch search to optimize vector search We’ve used the official Python client to show how the batch search might be integrated with your application. Since there have been some changes in the interfaces of Qdrant 0.10.0, we’ll go step by step. ### Step 1: Creating the collection The first step is to create a collection with a specified configuration — at least vector size and the distance function used to measure the similarity between vectors. ```python from qdrant_client import QdrantClient from qdrant_client.conversions.common_types import VectorParams client = QdrantClient(""localhost"", 6333) if not client.collection_exists('test_collection'): client.create_collection( collection_name=""test_collection"", vectors_config=VectorParams(size=4, distance=Distance.EUCLID), ) ``` ## Step 2: Loading the vectors With the collection created, we can put some vectors into it. We’re going to have just a few examples. ```python vectors = [ [.1, .0, .0, .0], [.0, .1, .0, .0], [.0, .0, .1, .0], [.0, .0, .0, .1], [.1, .0, .1, .0], [.0, .1, .0, .1], [.1, .1, .0, .0], [.0, .0, .1, .1], [.1, .1, .1, .1], ] client.upload_collection( collection_name=""test_collection"", vectors=vectors, ) ``` ## Step 3: Batch search in a single request Now we’re ready to start looking for similar vectors, as our collection has some entries. Let’s say we want to find the distance between the selected vector and the most similar database entry and at the same time find the two most similar objects for a different vector query. Up till 0.9, we would need to call the API twice. Now, we can send both requests together: ```python results = client.search_batch( collection_name=""test_collection"", requests=[ SearchRequest( vector=[0., 0., 2., 0.], limit=1, ), SearchRequest( vector=[0., 0., 0., 0.01], with_vector=True, limit=2, ) ] ) # Out: [ # [ScoredPoint(id=2, version=0, score=1.9, # payload=None, vector=None)], # [ScoredPoint(id=3, version=0, score=0.09, # payload=None, vector=[0.0, 0.0, 0.0, 0.1]), # ScoredPoint(id=1, version=0, score=0.10049876, # payload=None, vector=[0.0, 0.1, 0.0, 0.0])] # ] ``` Each instance of the SearchRequest class may provide its own search parameters, including vector query but also some additional filters. The response will be a list of individual results for each request. In case any of the requests is malformed, there will be an exception thrown, so either all of them pass or none of them. And that’s it! You no longer have to handle the multiple requests on your own. Qdrant will do it under the hood. ## Batch Search Benchmarks The batch search is fairly easy to be integrated into your application, but if you prefer to see some numbers before deciding to switch, then it’s worth comparing four different options: 1. Querying the database sequentially. 2. Using many threads/processes with individual requests. 3. Utilizing the batch search of Qdrant in a single request. 4. Combining parallel processing and batch search. In order to do that, we’ll create a richer collection of points, with vectors from the *glove-25-angular* dataset, quite a common choice for ANN comparison. If you’re interested in seeing some more details of how we benchmarked Qdrant, let’s take a [look at the Gist](https://gist.github.com/kacperlukawski/2d12faa49e06a5080f4c35ebcb89a2a3). ## The results We launched the benchmark 5 times on 10000 test vectors and averaged the results. Presented numbers are the mean values of all the attempts: 1. Sequential search: 225.9 seconds 2. Batch search: 208.0 seconds 3. Multiprocessing search (8 processes): 194.2 seconds 4. Multiprocessing batch search (8 processes, batch size 10): 148.9 seconds The results you may achieve on a specific setup may vary depending on the hardware, however, at the first glance, it seems that batch searching may save you quite a lot of time. Additional improvements could be achieved in the case of distributed deployment, as Qdrant won’t need to make extensive inter-cluster requests. Moreover, if your requests share the same filtering condition, the query optimizer would be able to reuse it among batch requests. ## Summary Batch search allows packing different queries into a single API call and retrieving the results in a single response. If you ever struggled with sending several consecutive queries into Qdrant, then you can easily switch to the new batch search method and simplify your application code. As shown in the benchmarks, that may almost effortlessly speed up your interactions with Qdrant even by over 30%, even not considering the spare network overhead and possible reuse of filters! Ready to unlock the potential of batch search and optimize your vector search with Qdrant 0.10.0? Contact us today to learn how we can revolutionize your search capabilities!",blog/batch-vector-search-with-qdrant.md "--- draft: true title: Qdrant v0.6.0 engine with gRPC interface has been released short_description: We’ve released a new engine, version 0.6.0. description: We’ve released a new engine, version 0.6.0. The main feature of the release in the gRPC interface. preview_image: /blog/qdrant-v-0-6-0-engine-with-grpc-released/upload_time.png date: 2022-03-10T01:36:43+03:00 author: Alyona Kavyerina author_link: https://medium.com/@alyona.kavyerina featured: true categories: - News tags: - gRPC - release sitemapExclude: True --- We’ve released a new engine, version 0.6.0. The main feature of the release in the gRPC interface — it is much faster than the REST API and ensures higher app performance due to the following features: - re-use of connection; - binarity protocol; - separation schema from data. This results in 3 times faster data uploading on our benchmarks: ![REST API vs gRPC upload time, sec](/blog/qdrant-v-0-6-0-engine-with-grpc-released/upload_time.png) Read more about the gRPC interface and whether you should use it by this [link](/documentation/quick_start/#grpc). The release v0.6.0 includes several bug fixes. More information is available in a [changelog](https://github.com/qdrant/qdrant/releases/tag/v0.6.0). New version was provided in addition to the REST API that the company keeps supporting due to its easy debugging. ",blog/qdrant-v-0-6-0-engine-with-grpc-released.md "--- draft: false title: Insight Generation Platform for LifeScience Corporation - Hooman Sedghamiz | Vector Space Talks slug: insight-generation-platform short_description: Hooman Sedghamiz explores the potential of large language models in creating cutting-edge AI applications. description: Hooman Sedghamiz discloses the potential of AI in life sciences, from custom knowledge applications to improving crop yield predictions, while tearing apart the nuances of in-house AI deployment for multi-faceted enterprise efficiency. preview_image: /blog/from_cms/hooman-sedghamiz-bp-cropped.png date: 2024-03-25T08:46:28.227Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - Retrieval Augmented Generation - Insight Generation Platform --- > *""There is this really great vector db comparison that came out recently. I saw there are like maybe more than 40 vector stores in 2024. When we started back in 2023, there were only a few. What I see, which is really lacking in this pipeline of retrieval augmented generation is major innovation around data pipeline.”*\ -- Hooman Sedghamiz > Hooman Sedghamiz, Sr. Director AI/ML - Insights at Bayer AG is a distinguished figure in AI and ML in the life sciences field. With years of experience, he has led teams and projects that have greatly advanced medical products, including implantable and wearable devices. Notably, he served as the Generative AI product owner and Senior Director at Bayer Pharmaceuticals, where he played a pivotal role in developing a GPT-based central platform for precision medicine. In 2023, he assumed the role of Co-Chair for the EMNLP 2023 GEM industrial track, furthering his contributions to the field. Hooman has also been an AI/ML advisor and scientist at the University of California, San Diego, leveraging his expertise in deep learning to drive biomedical research and innovation. His strengths lie in guiding data science initiatives from inception to commercialization and bridging the gap between medical and healthcare applications through MLOps, LLMOps, and deep learning product management. Engaging with research institutions and collaborating closely with Dr. Nemati at Harvard University and UCSD, Hooman continues to be a dynamic and influential figure in the data science community. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/2oj2ne5l9qrURQSV0T1Hft?si=DMJRTAt7QXibWiQ9CEKTJw), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/yfzLaH5SFX0).*** ## **Top takeaways:** Why is real-time evaluation critical in maintaining the integrity of chatbot interactions and preventing issues like promoting competitors or making false promises? What strategies do developers employ to minimize cost while maximizing the effectiveness of model evaluations, specifically when dealing with LLMs? These might be just some of the many questions people in the industry are asking themselves. We aim to cover most of it in this talk. Check out their conversation as they peek into world of AI chatbot evaluations. Discover the nuances of ensuring your chatbot's quality and continuous improvement across various metrics. Here are the key topics of this episode: 1. **Evaluating Chatbot Effectiveness**: An exploration of systematic approaches to assess chatbot quality across various stages, encompassing retrieval accuracy, response generation, and user satisfaction. 2. **Importance of Real-Time Assessment**: Insights into why continuous and real-time evaluation of chatbots is essential to maintain integrity and ensure they function as designed without promoting undesirable actions. 3. **Indicators of Compromised Systems**: Understand the significance of identifying behaviors that suggest a system may be prone to 'jailbreaking' and the methods available to counter these through API integration. 4. **Cost-Effective Evaluation Models**: Discussion on employing smaller models for evaluation to reduce costs without compromising the depth of analysis, focusing on failure cases and root-cause assessments. 5. **Tailored Evaluation Metrics**: Emphasis on the necessity of customizing evaluation criteria to suit specific use case requirements, including an exploration of the different metrics applicable to diverse scenarios. >Fun Fact: Large language models like Mistral, Llama, and Nexus Raven have improved in their ability to perform function calling with low hallucination and high-quality output. > ## Show notes: 00:00 Introduction to Bayer AG\ 05:15 Drug discovery, trial prediction, medical virtual assistants.\ 10:35 New language models like Llama rival GPT 3.5.\ 12:46 Large language model solving, efficient techniques, open source.\ 16:12 Scaling applications for diverse, individualized models.\ 19:02 Open source offers multilingual embedding.\ 25:06 Stability improved, reliable function calling capabilities emerged.\ 27:19 Platform aims for efficiency, measures impact.\ 31:01 Build knowledge discovery tool, measure value\ 33:10 Wrap up ## More Quotes from Hooman: *""I think there has been concentration around vector stores. So a lot of startups that have appeared around vector store idea, but I think what really is lacking are tools that you have a lot of sources of knowledge, information.*”\ -- Hooman Sedghamiz *""You can now kind of take a look and see that the performance of them is really, really getting close, if not better than GPT 3.5 already at same level and really approaching step by step to GPT 4.”*\ -- Hooman Sedghamiz in advancements in language models *""I think the biggest, I think the untapped potential, it goes back to when you can do scientific discovery and all those sort of applications which are more challenging, not just around the efficiency and all those sort of things.”*\ -- Hooman Sedghamiz ## Transcript: Demetrios: We are here and I couldn't think of a better way to spend my Valentine's Day than with you Hooman this is absolutely incredible. I'm so excited for this talk that you're going to bring and I want to let everyone that is out there listening know what caliber of a speaker we have with us today because you have done a lot of stuff. Folks out there do not let this man's young look fool you. You look like you are not in your fifty's or sixty's. But when it comes to your bio, it looks like you should be in your seventy's. I am very excited. You've got a lot of experience running data science projects, ML projects, LLM projects, all that fun stuff. You're working at Bayern Munich, sorry, not Bayern Munich, Bayer AG. And you're the senior director of AI and ML. Demetrios: And I think that there is a ton of other stuff that you've done when it comes to machine learning, artificial intelligence. You've got both like the traditional ML background, I think, and then you've also got this new generative AI background and so you can leverage both. But you also think about things in data engineering way. You understand the whole lifecycle. And so today we get to talk all about some of this fun. I know you've got some slides prepared for us. I'll let you throw those on and I'll let anyone else in the chat. Feel free to ask questions while Hooman is going through the presentation and I'll jump in and stop them when needed. Demetrios: But also we can have a little discussion after a few minutes of slides. So for everyone looking, we're going to be watching this and then we're going to be checking out like really talking about what 2024 AI in the enterprise looks like and what is needed to really take advantage of that. So Hooman, I'm dropping off to you, man, and I'll jump in when needed. Hooman Sedghamiz: Thanks a lot for the introduction. Let me get started. Do you have my screen already? Demetrios: Yeah, we see it. Hooman Sedghamiz: Okay, perfect. All right, so hopefully I can change the slides. Yes, as you said, first, thanks a lot for spending your day with me. I know it's Valentine's Day, at least here in the US people go crazy when it gets Valentine's. But I know probably a lot of you are in love with large language models, semantic search and all those sort of things, so it's great to have you here. Let me just start with the. I have a lot of slides, by the way, but maybe I can start with kind of some introduction about the company I work for, what these guys are doing and what we are doing at a life science company like Bayer, which is involved in really major humanity needs, right? So health and the food chain and like agriculture, we do three major kind of products or divisions in the company, mainly consumer halls, over the counter medication that probably a lot of you have taken, aspirin, all those sort of good stuff. And we have crop science division that works on ensuring that the yield is high for crops and the food chain is performing as it should, and also pharmaceutical side which is around treatment and prevention. Hooman Sedghamiz: So now you can imagine via is really important to us because it has the potential of unlocking a future where good health is a reality and hunger is a memory. So I maybe start about maybe giving you a hint of what are really the numerous use cases that AI or challenges that AI could help out with. In life science industry. You can think of adverse event detection when patients are taking a medication, too much of it. The patients might report adverse events, stomach bleeding and go to social media post about it. A few years back, it was really difficult to process automatically all this sort of natural text in a kind of scalable manner. But nowadays, thanks to large language models, it's possible to automate this and identify if there is a medication or anything that might have negatively an adverse event on a patient population. Similarly, you can now create a lot of marketing content using these large language models for products. Hooman Sedghamiz: At the same time, drug discovery is making really big strides when it comes to identifying new compounds. You can essentially describe these compounds using formats like smiles, which could be represented as real text. And these large language models can be trained on them and they can predict the sequences. At the same time, you have this clinical trial outcome prediction, which is huge for pharmaceutical companies. If you could predict what will be the outcome of a trial, it would be a huge time and resource saving for a lot of companies. And of course, a lot of us already see in the market a lot of medical virtual assistants using large language models that can answer medical inquiries and give consultations around them. And there is really, I believe the biggest potential here is around real world data, like most of us nowadays, have some sort of sensor or watch that's measuring our health maybe at a minute by minute level, or it's measuring our heart rate. You go to the hospital, you have all your medical records recorded there, and these large language models have their capacity to process this complex data, and you will be able to drive better insights for individualized insights for patients. Hooman Sedghamiz: And our company is also in crop science, as I mentioned, and crop yield prediction. If you could help farmers improve their crop yield, it means that they can produce better products faster with higher quality. So maybe I could start with maybe a history in 2023, what happened? How companies like ours were looking at large language models and opportunities. They bring, I think in 2023, everyone was excited to bring these efficiency games, right? Everyone wanted to use them for creating content, drafting emails, all these really low hanging fruit use cases. That was around. And one of the earlier really nice architectures that came up that I really like was from a 16 z enterprise that was, I think, back in really, really early 2023. LangChain was new, we had land chain and we had all this. Of course, Qdrant been there for a long time, but it was the first time that you could see vector store products could be integrated into applications. Hooman Sedghamiz: Really at large scale. There are different components. It's quite complex architecture. So on the right side you see how you can host large language models. On the top you see how you can augment them using external data. Of course, we had these plugins, right? So you can connect these large language models with Google search APIs, all those sort of things, and some validation that are in the middle that you could use to validate the responses fast forward. Maybe I can kind of spend, let me check out the time. Maybe I can spend a few minutes about the components of LLM APIs and hosting because that I think has a lot of potential in terms of applications that need to be really scalable. Hooman Sedghamiz: Just to give you some kind of maybe summary about my company, we have around 100,000 people in almost all over the world. Like the languages that people speak are so diverse. So it makes it really difficult to build an application that will serve 200,000 people. And it's kind of efficient. It's not really costly and all those sort of things. So maybe I can spend a few minutes talking about what that means and how kind of larger scale companies might be able to tackle that efficiently. So we have, of course, out of the box solutions, right? So you have Chat GPT already for enterprise, you have other copilots and for example from Microsoft and other companies that are offering, but normally they are seat based, right? So you kind of pay a subscription fee, like Spotify, you pay like $20 per month, $30 on average, somewhere between $20 to $60. And for a company, like, I was like, just if you calculate that for 3000 people, that means like 180,000 per month in subscription fees. Hooman Sedghamiz: And we know that most of the users won't use that. We know that it's a usage based application. You just probably go there. Depending on your daily work, you probably use it. Some people don't use it heavily. I kind of did some calculation. If you build it in house using APIs that you can access yourself, and large language models that corporations can deploy internally and locally, that cost saving could be huge, really magnitudes cheaper, maybe 30 to 20 to 30 times cheaper. So looking, comparing 2024 to 2023, a lot of things have changed. Hooman Sedghamiz: Like if you look at the open source large language models that came out really great models from Mistral, now we have models like Llama, two based model, all of these models came out. You can now kind of take a look and see that the performance of them is really, really getting close, if not better than GPT 3.5 already at same level and really approaching step by step to GPT 4. And looking at the price on the right side and speed or throughput, you can see that like for example, Mistral seven eight B could be a really cheap option to deploy. And also the performance of it gets really close to GPT 3.5 for many use cases in the enterprise companies. I think two of the big things this year, end of last year that came out that make this kind of really a reality are really a few large language models. I don't know if I can call them large language models. They are like 7 billion to 13 billion compared to GPT four, GT 3.5. I don't think they are really large. Hooman Sedghamiz: But one was Nexus Raven. We know that applications, if they want to be robust, they really need function calling. We are seeing this paradigm of function calling, which essentially you ask a language model to generate structured output, you give it a function signature, right? You ask it to generate an output, structured output argument for that function. Next was Raven came out last year, that, as you can see here, really is getting really close to GPT four, right? And GPT four being magnitude bigger than this model. This model only being 13 billion parameters really provides really less hallucination, but at the same time really high quality of function calling. So this makes me really excited for the open source and also the companies that want to build their own applications that requires function calling. That was really lacking maybe just five months ago. At the same time, we have really dedicated large language models to programming languages or scripting like SQL, that we are also seeing like SQL coder that's already beating GPT four. Hooman Sedghamiz: So maybe we can now quickly take a look at how model solving will look like for a large company like ours, like companies that have a lot of people across the globe again, in this aspect also, the community has made really big progress, right? So we have text generation inference from hugging face is open source for most purposes, can be used and it's the choice of mine and probably my group prefers this option. But we have Olama, which is great, a lot of people are using it. We have llama CPP which really optimizes the large language models for local deployment as well, and edge devices. I was really amazed seeing Raspberry PI running a large language model, right? Using Llama CPP. And you have this text generation inference that offers quantization support, continuous patching, all those sort of things that make these large LLMs more quantized or more compressed and also more suitable for deployment to large group of people. Maybe I can kind of give you kind of a quick summary of how, if you decide to deploy these large language models, what techniques you could use to make them more efficient, cost friendly and more scalable. So we have a lot of great open source projects like we have Lite LLM which essentially creates an open AI kind of signature on top of your large language models that you have deployed. Let's say you want to use Azure to host or to access GPT four gypty 3.5 or OpenAI to access OpenAI API. Hooman Sedghamiz: To access those, you could put them behind Lite LLM. You could have models using hugging face that are deployed internally, you could put lightlm in front of those, and then your applications could just use OpenAI, Python SDK or anything to call them naturally. And then you could simply do load balancing between those. Of course, we have also, as I mentioned, a lot of now serving opportunities for deploying those models that you can accelerate. Semantic caching is another opportunity for saving cost. Like for example, if you have cute rent, you are storing the conversations. You could semantically check if the user has asked similar questions and if that question is very similar to the history, you could just return that response instead of calling the large language model that can create costs. And of course you have line chain that you can summarize conversations, all those sort of things. Hooman Sedghamiz: And we have techniques like prompt compression. So as I mentioned, this really load balancing can offer a lot of opportunities for scaling this large language model. As you know, a lot of offerings from OpenAI APIs or Microsoft Azure, they have rate limits, right? So you can't call those models extensively. So what you could do, you could have them in multiple regions, you can have multiple APIs, local TGI deployed models using hugging face TGI or having Azure endpoints and OpenAI endpoints. And then you could use light LLM to load balance between these models. Once the users get in. Right. User one, you send the user one to one deployment, you send the user two requests to the other deployment. Hooman Sedghamiz: So this way you can really scale your application to large amount of users. And of course, we have these opportunities for applications called Lorex that use Lora. Probably a lot of you have heard of like very efficient way of fine tuning these models with fewer number of parameters that we could leverage to have really individualized models for a lot of applications. And you can see the costs are just not comparable if you wanted to use, right. So at GPT 3.5, even in terms of performance and all those sort of things, because you can use really small hardware GPU to deploy thousands of Lora weights or adapters, and then you will be able to serve a diverse set of models to your users. I think one really important part of these kind of applications is the part that you add contextual data, you add augmentation to make them smarter and to make them more up to date. So, for example, in healthcare domain, a lot of Americans already don't have high trust in AI when it comes to decision making in healthcare. So that's why augmentation of data or large language models is really, really important for bringing trust and all those sort of state of the art knowledge to this large language model. Hooman Sedghamiz: For example, if you ask about cancer or rededicated questions that need to build on top of scientific knowledge, it's very important to use those. Augmented or retrieval augmented generation. No, sorry, go next. Jumped on one. But let me see. I think I'm missing a slide, but yeah, I have it here. So going through this kind of, let's say retrieval augmented generation, different parts of it. You have, of course, these vector stores that in 2024, I see explosion of vector stores. Hooman Sedghamiz: Right. So there is this really great vector DB comparison that came out recently. I saw there are like maybe more than 40 vector stores in 2024. When we started back in 2023 was only a few. And what I see, which is really lacking in this pipeline of retrieval augmented generation is major innovation around data pipeline. And I think we were talking before this talk together that ETL is not something that is taken seriously. So far. We have a lot of embedding models that are coming out probably on a weekly basis. Hooman Sedghamiz: We have great embedding models that are open source, BgEM. Three is one that is multilingual, 100 plus languages. You could embed text in those languages. We have a lot of vector stores, but we don't have really ETL tools, right? So we have maybe a few airbytes, right? How can you reindex data efficiently? How can you parse scientific articles? Like imagine I have an image here, we have these articles or archive or on a pubmed, all those sort of things that have images and complex structure that our parsers are not able to parse them efficiently and make sense of them so that you can embed them really well. And really doing this Internet level, scientific level retrieval is really difficult. And no one I think is still doing it at scale. I just jumped, I have a love slide, maybe I can jump to my last and then we can pause there and take in some questions. Where I see 2014 and beyond, beyond going for large language models for enterprises, I see assistance, right? I see assistance for personalized assistance, for use cases coming out, right? So these have probably four components. Hooman Sedghamiz: You have even a personalized large language model that can learn from the history of your conversation, not just augmented. Maybe you can fine tune that using Laura and all those techniques. You have the knowledge that probably needs to be customized for your assistant and integrated using vector stores and all those sort of things, technologies that we have out, you know, plugins that bring a lot of plugins, some people call them skills, and also they can cover a lot of APIs that can bring superpowers to the large language model and multi agent setups. Right? We have autogen, a lot of cool stuff that is going on. The agent technology is getting really mature now as we go forward. We have langraph from Langchain that is bringing a lot of more stabilized kind of agent technology. And then you can think of that as for companies building all these kind of like App Stores or assistant stores that use cases, store there. And the colleagues can go there, search. Hooman Sedghamiz: I'm looking for this application. That application is customized for them, or even they can have their own assistant which is customized to them, their own large language model, and they could use that to bring value. And then even a nontechnical person could create their own assistant. They could attach the documents they like, they could select the plugins they like, they'd like to be connected to, for example, archive, or they need to be connected to API and how many agents you like. You want to build a marketing campaign, maybe you need an agent that does market research, one manager. And then you build your application which is customized to you. And then based on your feedback, the large language model can learn from your feedback as well. Going forward, maybe I pause here and then we can it was a bit longer than I expected, but yeah, it's all good, man. Demetrios: Yeah, this is cool. Very cool. I appreciate you going through this, and I also appreciate you coming from the past, from 2014 and talking about what we're going to do in 2024. That's great. So one thing that I want to dive into right away is the idea of ETL and why you feel like that is a bit of a blocker and where you think we can improve there. Hooman Sedghamiz: Yeah. So I think there has been concentration around vector stores. Right. So a lot of startups that have appeared around vector store idea, but I think what really is lacking tools that you have a lot of sources of knowledge, information. You have your Gmail, if you use outlook, if you use scientific knowledge, like sources like archive. We really don't have any startup that I hear that. Okay. I have a platform that offers real time retrieval from archive papers. Hooman Sedghamiz: And you want to ask a question, for example, about transformers. It can do retrieval, augmented generation over all archive papers in real time as they get added for you and brings back the answer to you. We don't have that. We don't have these syncing tools. You can of course, with tricks you can maybe build some smart solutions, but I haven't seen many kind of initiatives around that. And at the same time, we have this paywall knowledge. So we have these nature medicine amazing papers which are paywall. We can access them. Hooman Sedghamiz: Right. So we can build rag around them yet, but maybe some startups can start coming up with strategies, work with this kind of publishing companies to build these sort of things. Demetrios: Yeah, it's almost like you're seeing it not as the responsibility of nature or. Hooman Sedghamiz: Maybe they can do it. Demetrios: Yeah, they can potentially, but maybe that's not their bread and butter and so they don't want to. And so how do startups get in there and take some of this paywalled information and incorporate it into their product? And there is another piece that you mentioned on, just like when it comes to using agents, I wonder, have you played around with them a lot? Have you seen their reliability get better? Because I'm pretty sure a lot of us out there have tried to mess around with agents and maybe just like blown a bunch of money on GPT, four API calls. And it's like this thing isn't that stable. What's going on? So do you know something that we don't? Hooman Sedghamiz: I think they have become much, much more stable. If you look back in 2023, like June, July, they were really new, like auto GPT. We had all these new projects came out, really didn't work out as you say, they were not stable. But I would say by the end of 2023, we had really stable frameworks, for example, customized solutions around agent function calling. I think when function calling came out, the capability that you could provide signature or dot string of, I don't know, a function and you could get back the response really reliably. I think that changed a lot. And Langchen has this OpenAI function calling agent that works with some measures. I mean, of course I wouldn't say you could automate 100% something, but for a knowledge, kind of. Hooman Sedghamiz: So for example, if you have an agent that has access to data sources, all those sort of things, and you ask it to go out there, see what are the latest clinical trial design trends, it can call these tools, it can reliably now get you answer out of ten times, I would say eight times, it works. Now it has become really stable. And what I'm excited about is the latest multi agent scenarios and we are testing them. They are very promising. Right? So you have autogen from Microsoft platform, which is open source, and also you have landgraph from Langchain, which I think the frameworks are becoming really stable. My prediction is between the next few months is lots of, lots of applications will rely on agents. Demetrios: So you also mentioned how to recognize if a project is winning or losing type thing. And considering there are so many areas that you can plug in AI, especially when you're looking at buyer and all the different places that you can say, oh yeah, we could add some AI to this. How are you setting up metrics so, you know, what is worth it to continue investing into versus what maybe sounded like a better idea, but in practice it wasn't actually that good of an idea. Hooman Sedghamiz: Yeah, depends on the platform that you're building. Right? So where we started back in 2023, the platform was aiming for efficiency, right? So how can you make our colleagues more efficient? They can be faster in their daily work, like really delegate this boring stuff, like if you want to summarize or you want to create a presentation, all those sort of things, and you have measures in place that, for example, you could ask, okay, now you're using this platform for months. Let us know how many hours you're saving during your daily work. And really we could see the shift, right? So we did a questionnaire and I think we could see a lot of shift in terms of saving hours, daily work, all those sort of things that is measurable. And it's like you could then convert it, of course, to the value that brings for the enterprise on the company. And I think the biggest, I think the untapped potential, it goes back to when you can do scientific discovery and all those sort of applications which are more challenging, not just around the efficiency and all those sort of things. And then you need to really, if you're building a product, if it's not the general product. And for example, let's say if you're building a natural language to SQL, let's say you have a database. Hooman Sedghamiz: It was a relational database. You want to build an application that searches cars in the background. The customers go there and ask, I'm looking for a BMW 2013. It uses qudrant in the back, right. It kind of does semantic search, all these cool things and returns the response. I think then you need to have really good measures to see how satisfied your customers are when you're integrating a kind of generative application on top of your website that's selling cars. So measuring this in a kind of, like, cyclic manner, people are not going to be happy because you start that there are a lot of things that you didn't count for. You measure all those kind of metrics and then you go forward, you improve your platform. Demetrios: Well, there's also something else that you mentioned, and it brought up this thought in my mind, which is undoubtedly you have these low hanging fruit problems, and it's mainly based on efficiency gains. Right. And so it's helping people extract data from pdfs or what be it, and you're saving time there. You're seeing that you're saving time, and it's a fairly easy setup. Right. But then you have moonshots, I would imagine, like creating a whole new type of aspirin or tylenol or whatever it is, and that is a lot more of an investment of time and energy and infrastructure and everything along those lines. How do you look at both of these and say, we want to make sure that we make headway in both directions. And I'm not sure if you have unlimited resources to be able to just do everything or if you have to recognize what the trade offs are and how you measure those types of metrics. Demetrios: Again, in seeing where do we invest and where do we cut ties with different initiatives. Hooman Sedghamiz: Yeah. So that's a great question. So for product development, like the example that you made, there are really a lot of stages involved. Right. So you start from scientific discovery stage. So I can imagine that you can have multiple products along the way to help out. So if you have a product already out there that you want to generate insights and see. Let's say you have aspirin out there. Hooman Sedghamiz: You want to see if it is also helpful for cardiovascular problems that patients might have. So you could build a sort of knowledge discovery tool that could search for you, give it a name of your product, it will go out there, look into pubmed, all these articles that are being published, brings you back the results. Then you need to have really clear metrics to see if this knowledge discovery platform, after a few months is able to bring value to the customers or the stakeholders that you build the platform for. We have these experts that are really experts in their own field. Takes them really time to go read these articles to make conclusions or answer questions about really complex topic. I think it's really difficult based on the initial feedback we see, it helps, it helps save them time. But really I think it goes back again to the ETL problem that we still don't have your paywall. We can't access a lot of scientific knowledge yet. Hooman Sedghamiz: And these guys get a little bit discouraged at the beginning because they expect that a lot of people, especially non technical, say like you go to Chat GPT, you ask and it brings you the answer, right? But it's not like that. It doesn't work like that. But we can measure it, we can see improvements, they can access knowledge faster, but it's not comprehensive. That's the problem. It's not really deep knowledge. And I think the companies are still really encouraging developing these platforms and they can see that that's a developing field. Right. So it's very hard to give you a short answer, very hard to come up with metrics that gives you success of failure in a short term time period. Demetrios: Yeah, I like the creativity that you're talking about there though. That is like along this multistepped, very complex product creation. There are potential side projects that you can do that show and prove value along the way, and they don't necessarily need to be as complex as that bigger project. Hooman Sedghamiz: True. Demetrios: Sweet, man. Well, this has been awesome. I really appreciate you coming on here to the vector space talks for anyone that would like to join us and you have something cool to present. We're always open to suggestions. Just hit me up and we will make sure to send you some shirt or whatever kind of swag is on hand. Remember, all you astronauts out there, don't get lost in vector space. This has been another edition of the Qdrant vector space talks with Hooman, my man, on Valentine's Day. I can't believe you decided to spend it with me. Demetrios: I appreciate it. Hooman Sedghamiz: Thank you. Take care. ",blog/insight-generation-platform-for-lifescience-corporation-hooman-sedghamiz-vector-space-talks-014.md "--- draft: false title: ""Unlocking AI Potential: Insights from Stanislas Polu"" slug: qdrant-x-dust-vector-search short_description: Stanislas shares insights from his experiences at Stripe and founding his own company, Dust, focusing on AI technology's product layer. description: Explore the dynamic discussion with Stanislas Polu on AI, ML, entrepreneurship, and product development. Gain valuable insights into AI's transformative power. preview_image: /blog/from_cms/stan-polu-cropped.png date: 2024-01-26T16:22:37.487Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - Vector Search - OpenAI --- # Qdrant x Dust: How Vector Search Helps Make Work Better with Stanislas Polu > *""We ultimately chose Qdrant due to its open-source nature, strong performance, being written in Rust, comprehensive documentation, and the feeling of control.”*\ -- Stanislas Polu > Stanislas Polu is the Co-Founder and an Engineer at Dust. He had previously sold a company to Stripe and spent 5 years there, seeing them grow from 80 to 3000 people. Then pivoted to research at OpenAI on large language models and mathematical reasoning capabilities. He started Dust 6 months ago to make work work better with LLMs. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/2YgcSFjP7mKE0YpDGmSiq5?si=6BhlAMveSty4Yt7umPeHjA), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/1vKoiFAdorE).*** ## **Top takeaways:** Curious about the interplay of SaaS platforms and AI in improving productivity? Stanislas Polu dives into the intricacies of enterprise data management, the selective use of SaaS tools, and the role of customized AI assistants in streamlining workflows, all while sharing insights from his experiences at Stripe, OpenAI, and his latest venture, Dust. Here are 5 golden nuggets you'll unearth from tuning in: 1. **The SaaS Universe**: Stan will give you the lowdown on why jumping between different SaaS galaxies like Salesforce and Slack is crucial for your business data's gravitational pull. 2. **API Expansions**: Learn how pushing the boundaries of APIs to include global payment methods can alter the orbit of your company's growth. 3. **A Bot for Every Star**: Discover how creating targeted assistants over general ones can skyrocket team productivity across various use cases. 4. **Behind the Tech Telescope**: Stan discusses the decision-making behind opting for Qdrant for their database cosmos, including what triggered their switch. 5. **Integrating AI Stardust**: They're not just talking about Gen AI; they're actively guiding companies on how to leverage it effectively, placing practicality over flashiness. > Fun Fact: Stanislas Polu co-founded a company that was acquired by Stripe, providing him with the opportunity to work with Greg Brockman at Stripe. > ## Show notes: 00:00 Interview about an exciting career in AI technology.\ 06:20 Most workflows involve multiple SaaS applications.\ 09:16 Inquiring about history with Stripe and AI.\ 10:32 Stripe works on expanding worldwide payment methods.\ 14:10 Document insertion supports hierarchy for user experience.\ 18:29 Competing, yet friends in the same field.\ 21:45 Workspace solutions, marketplace, templates, and user feedback.\ 25:24 Avoid giving false hope; be accountable.\ 26:06 Model calls, external API calls, structured data.\ 30:19 Complex knobs, but powerful once understood. Excellent support.\ 33:01 Companies hire someone to support teams and find use cases. ## More Quotes from Stan: *""You really want to narrow the data exactly where that information lies. And that's where we're really relying hard on Qdrant as well. So the kind of indexing capabilities on top of the vector search.""*\ -- Stanislas Polu *""I think the benchmarking was really about quality of models, answers in the context of ritual augmented generation. So it's not as much as performance, but obviously, performance matters and that's why we love using Qdrant.”*\ -- Stanislas Polu *""The workspace assistant are like the admin vetted the assistant, and it's kind of pushed to everyone by default.”*\ -- Stanislas Polu ## Transcript: Demetrios: All right, so, my man, I think people are going to want to know all about you. This is a conversation that we have had planned for a while. I'm excited to chat about what you have been up to. You've had quite the run around when it comes to doing some really cool stuff. You spent a lot of time at Stripe in the early days and I imagine you were doing, doing lots of fun ML initiatives and then you started researching on llms at OpenAI. And recently you are doing the entrepreneurial thing and following the trend of starting a company and getting really cool stuff out the door with AI. I think we should just start with background on yourself. What did I miss in that quick introduction? Stanislas Polu: Okay, sounds good. Yeah, perfect. Now you didn't miss too much. Maybe the only point is that starting the current company, Dust, with Gabrielle, my co founder, with whom we started a Company together twelve years or maybe 14 years ago. Stanislas Polu: I'm very bad with years that eventually got acquired to stripe. So that's how we joined Stripe, the both of us, pretty early. Stripe was 80 people when we joined, all the way to 2500 people and got to meet with and walk with Greg Brockman there. And that's how I found my way to OpenAI after stripe when I started interested in myself, in research at OpenAI, even if I'm not a trained researcher. Stanislas Polu: I did research on fate, doing research. On larger good models, reasoning capabilities, and in particular larger models mathematical reasoning capabilities. And from there. 18 months ago, kind of decided to leave OpenAI with the motivation. That is pretty simple. It's that basically the hypothesis is that. It was pre chattivity, but basically those large language models, they're already extremely capable and yet they are completely under deployed compared to the potential they have. And so while research remains a very active subject and it's going to be. A tailwind for the whole ecosystem, there's. Stanislas Polu: Probably a lot of to be done at the product layer, and most of the locks between us and deploying that technology in the world is probably sitting. At the product layer as it is sitting at the research layer. And so that's kind of the hypothesis behind dust, is we try to explore at the product layer what it means to interface between models and humans, try to make them happier and augment them. With superpowers in their daily jobs. Demetrios: So you say product layer, can you go into what you mean by that a little bit more? Stanislas Polu: Well, basically we have a motto at dust, which is no gpu before PMF. And so the idea is that while it's extremely exciting to train models. It's extremely exciting to fine tune and align models. There is a ton to be done. Above the model, not only to use. Them as best as possible, but also to really find the interaction interfaces that make sense for humans to leverage that technology. And so we basically don't train any models ourselves today. There's many reasons to that. The first one is as an early startup. It's a fascinating subject and fascinating exercise. As an early startup, it's actually a very big investment to go into training. Models because even if the costs are. Not necessarily big in terms of compute. It'S still research and development and pretty. Hard research and development. It's basically research. We understand pretraining pretty well. We don't understand fine tuning that well. We believe it's a better idea to. Stanislas Polu: Really try to explore the product layer. The image I use generally is that training a model is very sexy and it's exciting, but really you're building a small rock that will get submerged by the waves of bigger models coming in the future. And iterating and positioning yourself at the interface between humans and those models at. The product layer is more akin to. Building a surfboard that you will be. Able to use to surf those same waves. Demetrios: I like that because I am a big surfer and I have a lot. Stanislas Polu: Of fun doing it. Demetrios: Now tell me about are you going after verticals? Are you going after different areas in a market, a certain subset of the market? Stanislas Polu: How do you look at that? Yeah. Basically the idea is to look at productivity within the enterprise. So we're first focusing on internal use. By teams, internal teams of that technology. We're not at all going after external use. So backing products that embed AI or having on projects maybe exposed through our users to actual end customers. So we really focused on the internal use case. So the first thing you want to. Do is obviously if you're interested in. Productivity within enterprise, you definitely want to have the enterprise data, right? Because otherwise there's a ton that can be done with Chat GPT as an example. But there is so much more that can be done when you have context. On the data that comes from the company you're in. That's pretty much kind of the use. Case we're focusing on, and we're making. A bet, which is a crazy bet to answer your question, that there's actually value in being quite horizontal for now. So that comes with a lot of risks because an horizontal product is hard. Stanislas Polu: To read and it's hard to figure. Out how to use it. But at the same time, the reality is that when you are somebody working in a team, even if you spend. A lot of time on one particular. Application, let's say Salesforce for sales, or GitHub for engineers, or intercom for customer support, the reality of most of your workflows do involve many SaaS, meaning that you spend a lot of time in Salesforce, but you also spend a lot of time in slack and notion. Maybe, or we all spend as engineers a lot of time in GitHub, but we also use notion and slack a ton or Google Drive or whatnot. Jira. Demetrios: Good old Jira. Everybody loves spending time in Jira. Stanislas Polu: Yeah. And so basically, following our users where. They are requires us to have access to those different SaaS, which requires us. To be somewhat horizontal. We had a bunch of signals that. Kind of confirms that position, and yet. We'Re still very conscious that it's a risky position. As an example, when we are benchmarked against other solutions that are purely verticalized, there is many instances where we actually do a better job because we have. Access to all the data that matters within the company. Demetrios: Now, there is something very difficult when you have access to all of the data, and that is the data leakage issue and the data access. Right. How are you trying to conquer that hard problem? Stanislas Polu: Yeah, so we're basically focusing to continue. Answering your questions through that other question. I think we're focusing on tech companies. That are less than 1000 people. And if you think about most recent tech companies, less than 1000 people. There's been a wave of openness within. Stanislas Polu: Companies in terms of data access, meaning that it's becoming rare to see people actually relying on complex ACL for the internal data. You basically generally have silos. You have the exec silo with remuneration and ladders and whatnot. And this one is definitely not the. Kind of data we're touching. And then for the rest, you generally have a lot of data that is. Accessible by every employee within your company. So that's not a perfect answer, but that's really kind of the approach we're taking today. We give a lot of control on. Stanislas Polu: Which data comes into dust, but once. It'S into dust, and that control is pretty granular, meaning that you can select. Specific slack channels, or you can select. Specific notion pages, or you can select specific Google Drive subfolders. But once you decide to put it in dust, every dust user has access to this. And so we're really taking the silo. Vision of the granular ACL story. Obviously, if we were to go higher enterprise, that would become a very big issue, because I think larger are the enterprise, the more they rely on complex ackles. Demetrios: And I have to ask about your history with stripe. Have you been focusing on specific financial pieces to this? First thing that comes to mind is what about all those e commerce companies that are living and breathing with stripe? Feels like they've got all kinds of use cases that they could leverage AI for, whether it is their supply chain or just getting better numbers, or getting answers that they have across all this disparate data. Have you looked at that at all? Is that informing any of your decisions that you're making these days? Stanislas Polu: No, not quite. Not really. At stripe, when we joined, it was. Very early, it was the quintessential curlb onechargers number 42. 42, 42. And that's pretty much what stripe was almost, I'm exaggerating, but not too much. So what I've been focusing at stripe. Was really driven by my and our. Perspective as european funders joining a quite. Us centric company, which is, no, there. Stanislas Polu: Is not credit card all over the world. Yes, there is also payment methods. And so most of my time spent at stripe was spent on trying to expand the API to not a couple us payment methods, but a variety of worldwide payment methods. So that requires kind of a change of paradigm from an API design, and that's where I spent most of my cycles What I want to try. Demetrios: Okay, the next question that I had is you talked about how benchmarking with the horizontal solution, surprisingly, has been more effective in certain use cases. I'm guessing that's why you got a little bit of love for [Qdrant](https://qdrant.tech/) and what we're doing here. Stanislas Polu: Yeah I think the benchmarking was really about quality of models, answers in the context of [retrieval augmented generation](https://qdrant.tech/articles/what-is-rag-in-ai/). So it's not as much as performance, but obviously performance matters, and that's why we love using Qdrants. But I think the main idea of. Stanislas Polu: What I mentioned is that it's interesting because today the retrieval is noisy, because the embedders are not perfect, which is an interesting point. Sorry, I'm double clicking, but I'll come back. The embedded are really not perfect. Are really not perfect. So that's interesting. When Qdrant release kind of optimization for [storage of vectors](https://qdrant.tech/documentation/concepts/storage/), they come with obviously warnings that you may have a loss. Of precision because of the compression, et cetera, et cetera. And that's funny, like in all kind of retrieval and mental generation world, it really doesn't matter. We take all the performance we can because the loss of precision coming from compression of those vectors at the vector DB level are completely negligible compared to. The holon fuckness of the embedders in. Stanislas Polu: Terms of capability to correctly embed text, because they're extremely powerful, but they're far from being perfect. And so that's an interesting thing where you can really go as far as you want in terms of performance, because your error is dominated completely by the. Quality of your embeddings. Going back up. I think what's interesting is that the. Retrieval is noisy, mostly because of the embedders, and the models are not perfect. And so the reality is that more. Data in a rack context is not. Necessarily better data because the retrievals become noisy. The model kind of gets confused and it starts hallucinating stuff, et cetera. And so the right trade off is that you want to access to as. Much data as possible, but you want To give the ability to our users. To select very narrowly the data required for a given task. Stanislas Polu: And so that's kind of what our product does, is the ability to create assistants that are specialized to a given task. And most of the specification of an assistant is obviously a prompt, but also. Saying, oh, I'm working on helping sales find interesting next leads. And you really want to narrow the data exactly where that information lies. And that's where there, we're really relying. Hard on Qdrants as well. So the kind of indexing capabilities on. Top of the [vector search](https://qdrant.tech/), where whenever. Stanislas Polu: We insert the documents, we kind of try to insert an array of parents that reproduces the hierarchy of whatever that document is coming from, which lets us create a very nice user experience where when you create an assistant, you can say, oh, I'm going down two levels within notion, and I select that page and all of those children will come together. And that's just one string in our specification, because then rely on those parents that have been injected in Qdrant, and then the Qdrant search really works well with a simple query like this thing has to be in parents. Stanislas Polu: And you filter by that and it. Demetrios: Feels like there's two levels to the evaluation that you can be doing with rags. One is the stuff you're retrieving and evaluating the retrieval, and then the other is the output that you're giving to the end user. How are you attacking both of those evaluation questions? Stanislas Polu: Yeah, so the truth in whole transparency. Is that we don't, we're just too early. Demetrios: Well, I'm glad you're honest with us, Alicia. Stanislas Polu: This is great, we should, but the rate is that we have so many other product priorities that I think evaluating the quality of retrievals, evaluating the quality. Of retrieval, augmented generation. Good sense but good sense is hard to define, because good sense with three. Years doing research in that domain is probably better sense. Better good sense than good sense with no clue on the domain. But basically with good sense I think. You can get very far and then. You'Ll be optimizing at the margin. And the reality is that if you. Get far enough with good sense, and that everything seems to work reasonably well, then your priority is not necessarily on pushing 5% performance, whatever is the metric. Stanislas Polu: But more like I have a million other products questions to solve. That is the kind of ten people answer to your question. And as we grow, we'll probably make a priority, of course, of benchmarking that better. In terms of benchmarking that better. Extremely interesting question as well, because the. Embedding benchmarks are what they are, and. I think they are not necessarily always a good representation of the use case you'll have in your products. And so that's something you want to be cautious of. And. It'S quite hard to benchmark your use case. The kind of solutions you have and the ones that seems more plausible, whether it's spending like full years on that. Stanislas Polu: Is probably to. Evaluate the retrieval with another model, right? It's like you take five different embedding models, you record a bunch of questions. That comes from your product, you use your product data and you run those retrievals against those five different embedders, and. Then you ask GPT four to raise. That would be something that seems sensible and probably will get you another step forward and is not perfect, but it's. Probably really strong enough to go quite far. Stanislas Polu: And then the second question is evaluating. The end to end pipeline, which includes. Both the retrieval and the generation. And to be honest, again, it's a. Known question today because GPT four is. Just so much above all the models. Stanislas Polu: That there's no point evaluating them. If you accept using GPD four, just use GP four. If you want to use open source models, then the questions is more important. But if you are okay with using GPD four for many reasons, then there. Is no questions at this stage. Demetrios: So my next question there, because sounds like you got a little bit of a french accent, you're somewhere in Europe. Are you in France? Stanislas Polu: Yes, we're based in France and billion team from Paris. Demetrios: So I was wondering if you were going to lean more towards the history of you working at OpenAI or the fraternity from your french group and go for your amiz in. Stanislas Polu: Mean, we are absolute BFF with Mistral. The fun story is that Guillaume Lamp is a friend, because we were working on exactly the same subjects while I was at OpenAI and he was at Meta. So we were basically frenemies. We're competing against the same metrics and same goals, but grew a friendship out of that. Our platform is quite model agnostic, so. We support Mistral there. Then we do decide to set the defaults for our users, and we obviously set the defaults to GP four today. I think it's the question of where. Today there's no question, but when the. Time comes where open source or non open source, it's not the question, but where Ozo models kind of start catching. Up with GPT four, that's going to. Stanislas Polu: Be an interesting product question, and hopefully. Mistral will get there. I think that's definitely their goal, to be within reach of GPT four this year. And so that's going to be extremely exciting. Yeah. Demetrios: So then you mentioned how you have a lot of other product considerations that you're looking at before you even think about evaluation. What are some of the other considerations? Stanislas Polu: Yeah, so as I mentioned a bit. The main hypothesis is we're going to do company productivity or team productivity. We need the company data. That was kind of hypothesis number zero. It's not even an hypothesis, almost an axiom. And then our first product was a conversational assistance, like chat. GPT, that is general, and has access. To everything, and realized that didn't work. Quite well enough on a bunch of use cases, was kind of good on some use cases, but not great on many others. And so that's where we made that. First strong product, the hypothesis, which is. So we want to have many assistants. Not one assistant, but many assistants, targeted to specific tasks. And that's what we've been exploring since the end of the summer. And that hypothesis has been very strongly confirmed with our users. And so an example of issue that. We have is, obviously, you want to. Activate your product, so you want to make sure that people are creating assistance. So one thing that is much more important than the quality of rag is. The ability of users to create personal assistance. Before, it was only workspace assistance, and so only the admin or the builder could build it. And now we've basically, as an example, worked on having anybody can create the assistant. The assistant is scoped to themselves, they can publish it afterwards, et cetera. That's the kind of product questions that. Are, to be honest, more important than rack rarity, at least for us. Demetrios: All right, real quick, publish it for a greater user base or publish it for the internal company to be able to. Stanislas Polu: Yeah, within the workspace. Okay. Demetrios: It's not like, oh, I could publish this for. Stanislas Polu: We'Re not going there yet. And there's plenty to do internally to each workspace. Before going there, though it's an interesting case because that's basically another big problem, is you have an horizontal platform, you can create an assistance, you're not an. Expert and you're like, okay, what should I do? And so that's the kind of white blank page issue. Stanislas Polu: And so there having templates, inspiration, you can sit that within workspace, but you also want to have solutions for the new workspace that gets created. And maybe a marketplace is a good idea. Or having templates, et cetera, are also product questions that are much more important than the rack performance. And finally, the users where dust works really well, one example is Alan in. France, there are 600, and dust is. Running there pretty healthily, and they've created. More than 200 assistants. And so another big product question is like, when you get traction within a company, people start getting flooded with assistance. And so how do they discover them? How did they, and do they know which one to use, et cetera? So that's kind of the kind of. Many examples of product questions that are very first order compared to other things. Demetrios: Because out of these 200 assistants, are you seeing a lot of people creating the same assistance? Stanislas Polu: That's a good question. So far it's been kind of driven by somebody internally that was responsible for trying to push gen AI within the company. And so I think there's not that. Much redundancy, which is interesting, but I. Think there's a long tail of stuff that are mostly explorations, but from our perspective, it's very hard to distinguish the two. Obviously, usage is a very strong signal. But yeah, displaying assistance by usage, pushing. The right assistance to the right user. This problem seems completely trivial compared to building an LLM, obviously. But still, when you add the product layer requires a ton of work, and as a startup, that's where a lot of our resources go, and I think. It'S the right thing to do. Demetrios: Yeah, I wonder if, and you probably have thought about this, but if it's almost like you can tag it with this product, or this assistant is in beta or alpha or this is in production, you can trust that this one is stable, that kind of thing. Stanislas Polu: Yeah. So we have the concept of shared. Assistant and the concept of workspace assistant. The workspace assistant are like the admin vetted the assistant, and it's kind of pushed to everyone by default. And then the published assistant is like, there's a gallery of assistant that you can visit, and there, the strongest signal is probably the usage metric. Right? Demetrios: Yeah. So when you're talking about assistance, just so that I'm clear, it's not autonomous agents, is it? Stanislas Polu: No. Stanislas Polu: Yeah. So it's a great question. We are really focusing on the one. Step, trying to solve very nicely the one step thing. I have one granular task to achieve. And I can get accelerated on that. Task and maybe save a few minutes or maybe save a few tens of minutes on one specific thing, because the identity version of that is obviously the future. But the reality is that current models, even GB four, are not that great at kind of chaining decisions of tool use in a way that is sustainable. Beyond the demo effect. So while we are very hopeful for the future, it's not our core focus, because I think there's a lot of risk that it creates more deception than anything else. But it's obviously something that we are. Targeting in the future as models get better. Demetrios: Yeah. And you don't want to burn people by making them think something's possible. And then they go and check up on it and they leave it in the agent's hands, and then next thing they know they're getting fired because they don't actually do the work that they said they were going to do. Stanislas Polu: Yeah. One thing that we don't do today. Is we have kind of different ways. To bring data into the assistant before it creates generation. And we're expanding that. One of the domain use case is the one based on Qdrant, which is. The kind of retrieval one. We also have kind of a workflow system where you can create an app. An LLM app, where you can make. Stanislas Polu: Multiple calls to a model, you can call external APIs and search. And another thing we're digging into our structured data use case, which this time doesn't use Qdrants, which the idea is that semantic search is great, but it's really atrociously bad for quantitative questions. Basically, the typical use case is you. Have a big CSV somewhere and it gets chunked and then you do retrieval. And you get kind of disordered partial. Chunks, all of that. And on top of that, the moles. Are really bad at counting stuff. And so you really get bullshit, you. Demetrios: Know better than anybody. Stanislas Polu: Yeah, exactly. Past life. And so garbage in, garbage out. Basically, we're looking into being able, whenever the data is structured, to actually store. It in a structured way and as needed. Just in time, generate an in memory SQL database so that the model can generate a SQL query to that data and get kind of a SQL. Answer and as a consequence hopefully be able to answer quantitative questions better. And finally, obviously the next step also is as we integrated with those platform notion, Google Drive, slack, et cetera, basically. There'S some actions that we can take there. We're not going to take the actions, but I think it's interesting to have. The model prepare an action, meaning that here is the email I prepared, send. It or iterate with me on it, or here is the slack message I prepare, or here is the edit to the notion doc that I prepared. Stanislas Polu: This is still not agentic, it's closer. To taking action, but we definitely want. To keep the human in the loop. But obviously some stuff that are on our roadmap. And another thing that we don't support, which is one type of action would. Be the first we will be working on is obviously code interpretation, which is I think is one of the things that all users ask because they use. It on Chat GPT. And so we'll be looking into that as well. Demetrios: What made you choose Qdrant? Stanislas Polu: So the decision was made, if I. Remember correctly, something like February or March last year. And so the alternatives I looked into. Were pine cone wavy eight, some click owls because Chroma was using click owls at the time. But Chroma was. 2000 lines of code. At the time as well. And so I was like, oh, Chroma, we're part of AI grant. And Chroma is as an example also part of AI grant. So I was like, oh well, let's look at Chroma. And however, what I'm describing is last. Year, but they were very early. And so it was definitely not something. That seemed like to make sense for us. So at the end it was between pine cone wavev eight and Qdrant wave v eight. You look at the doc, you're like, yeah, not possible. And then finally it's Qdrant and Pinecone. And I think we really appreciated obviously the open source nature of Qdrants.From. Playing with it, the very strong performance, the fact that it's written in rust, the sanity of the documentation, and basically the feeling that because it's an open source, we're using the osted Qdrant cloud solution. But it's not a question of paying. Or not paying, it's more a question. Of being able to feel like you have more control. And at the time, I think it was the moment where Pinecon had their massive fuck up, where they erased gazillion database from their users and so we've been on Qdrants and I think it's. Been a two step process, really. Stanislas Polu: It's very smooth to start, but also Qdrants at this stage comes with a. Lot of knobs to turns. And so as you start scaling, you at some point reach a point where. You need to start tweaking the knobs. Which I think is great because the knobs, there's a lot of knobs, so they are hard to understand, but once you understand them, you see the power of them. And the Qdrant team has been excellent there supporting us. And so I think we've reached that first level of scale where you have. To tweak the nodes, and we've reached. The second level of scale where we. Have to have multiple nodes. But so far it's been extremely smooth. And I think we've been able to. Do with Qdrant some stuff that really are possible only because of the very good performance of the database. As an example, we're not using your clustered setup. We have n number of independent nodes. And as we scale, we kind of. Reshuffle which users go on which nodes. As we need, trying to keep our largest users and most paying users on. Very well identified nodes. We have a kind of a garbage. Node for all the free users, as an example, migrating even a very big collection from one node. One capability that we build is say, oh, I have that collection over there. It's pretty big. I'm going to initiate on another node. I'm going to set up shadow writing on both, and I'm going to migrate live the data. And that has been incredibly easy to do with Qdrant because crawling is fast, writing is fucking fast. And so even a pretty large collection. You can migrate it in a minute. Stanislas Polu: And so it becomes really within the realm of being able to administrate your cluster with that in mind, which I. Think would have probably not been possible with the different systems. Demetrios: So it feels like when you are helping companies build out their assistants, are you going in there and giving them ideas on what they can do? Stanislas Polu: Yeah, we are at a stage where obviously we have to do that because. I think the product basically starts to. Have strong legs, but I think it's still very early and so there's still a lot to do on activation, as an example. And so we are in a mode today where we do what doesn't scale. Basically, and we do spend some time. Stanislas Polu: With companies, obviously, because there's nowhere around that. But what we've seen also is that the users where it works the best and being on dust or anything else. That is relative to having people adopt gen AI. Within the company are companies where they. Actually allocate resources to the problem, meaning that the companies where it works best. Are the companies where there's somebody. Their role is really to go around the company, find, use cases, support the teams, et cetera. And in the case of companies using dust, this is kind of type of interface that is perfect for us because we provide them full support and we help them build whatever they think is. Valuable for their team. Demetrios: Are you also having to be the bearer of bad news and tell them like, yeah, I know you saw that demo on Twitter, but that is not actually possible or reliably possible? Stanislas Polu: Yeah, that's an interesting question. That's a good question. Not that much, because I think one of the big learning is that you take any company, even a pretty techy. Company, pretty young company, and the reality. Is that most of the people, they're not necessarily in the ecosystem, they just want shit done. And so they're really glad to have some shit being done by a computer. But they don't really necessarily say, oh, I want the latest shiniest thingy that. I saw on Twitter. So we've been safe from that so far. Demetrios: Excellent. Well, man, this has been incredible. I really appreciate you coming on here and doing this. Thanks so much. And if anyone wants to check out dust, I encourage that they do. Stanislas Polu: It's dust. Demetrios: It's a bit of an interesting website. What is it? Stanislas Polu: Dust TT. Demetrios: That's it. That's what I was missing, dust. There you go. So if anybody wants to look into it, I encourage them to. And thanks so much for coming on here. Stanislas Polu: Yeah. Stanislas Polu: And Qdrant is the shit. Demetrios: There we go. Awesome, dude. Well, this has been great. Stanislas Polu: Yeah, thanks, Vintu. Have a good one. ",blog/qdrant-x-dust-how-vector-search-helps-make-work-work-better-stan-polu-vector-space-talk-010.md "--- draft: false title: Powering Bloop semantic code search slug: case-study-bloop short_description: Bloop is a fast code-search engine that combines semantic search, regex search and precise code navigation description: Bloop is a fast code-search engine that combines semantic search, regex search and precise code navigation preview_image: /case-studies/bloop/social_preview.png date: 2023-02-28T09:48:00.000Z author: Qdrant Team featured: false aliases: - /case-studies/bloop/ --- Founded in early 2021, [bloop](https://bloop.ai/) was one of the first companies to tackle semantic search for codebases. A fast, reliable Vector Search Database is a core component of a semantic search engine, and bloop surveyed the field of available solutions and even considered building their own. They found Qdrant to be the top contender and now use it in production. This document is intended as a guide for people who intend to introduce semantic search to a novel field and want to find out if Qdrant is a good solution for their use case. ## About bloop ![](/case-studies/bloop/screenshot.png) [bloop](https://bloop.ai/) is a fast code-search engine that combines semantic search, regex search and precise code navigation into a single lightweight desktop application that can be run locally. It helps developers understand and navigate large codebases, enabling them to discover internal libraries, reuse code and avoid dependency bloat. bloop’s chat interface explains complex concepts in simple language so that engineers can spend less time crawling through code to understand what it does, and more time shipping features and fixing bugs. ![](/case-studies/bloop/bloop-logo.png) bloop’s mission is to make software engineers autonomous and semantic code search is the cornerstone of that vision. The project is maintained by a group of Rust and Typescript engineers and ML researchers. It leverages many prominent nascent technologies, such as [Tauri](http://tauri.app), [tantivy](https://docs.rs/tantivy), [Qdrant](https://github.com/qdrant/qdrant) and [Anthropic](https://www.anthropic.com/). ## About Qdrant ![](/case-studies/bloop/qdrant-logo.png) Qdrant is an open-source Vector Search Database written in Rust . It deploys as an API service providing a search for the nearest high-dimensional vectors. With Qdrant, embeddings or neural network encoders can be turned into full-fledged applications for matching, searching, recommending, and many more solutions to make the most of unstructured data. It is easy to use, deploy and scale, blazing fast and is accurate simultaneously. Qdrant was founded in 2021 in Berlin by Andre Zayarni and Andrey Vasnetsov with the mission to power the next generation of AI applications with advanced and high-performant [vector similarity](https://qdrant.tech/articles/vector-similarity-beyond-search/) search technology. Their flagship product is the vector search database which is available as an open source https://github.com/qdrant/qdrant or managed cloud solution https://cloud.qdrant.io/. ## The Problem Firstly, what is semantic search? It’s finding relevant information by comparing meaning, rather than simply measuring the textual overlap between queries and documents. We compare meaning by comparing *embeddings* - these are vector representations of text that are generated by a neural network. Each document’s embedding denotes a position in a *latent* space, so to search you embed the query and find its nearest document vectors in that space. ![](/case-studies/bloop/vector-space.png) Why is semantic search so useful for code? As engineers, we often don’t know - or forget - the precise terms needed to find what we’re looking for. Semantic search enables us to find things without knowing the exact terminology. For example, if an engineer wanted to understand “*What library is used for payment processing?*” a semantic code search engine would be able to retrieve results containing “*Stripe*” or “*PayPal*”. A traditional lexical search engine would not. One peculiarity of this problem is that the **usefulness of the solution increases with the size of the code base** – if you only have one code file, you’ll be able to search it quickly, but you’ll easily get lost in thousands, let alone millions of lines of code. Once a codebase reaches a certain size, it is no longer possible for a single engineer to have read every single line, and so navigating large codebases becomes extremely cumbersome. In software engineering, we’re always dealing with complexity. Programming languages, frameworks and tools have been developed that allow us to modularize, abstract and compile code into libraries for reuse. Yet we still hit limits: Abstractions are still leaky, and while there have been great advances in reducing incidental complexity, there is still plenty of intrinsic complexity[^1] in the problems we solve, and with software eating the world, the growth of complexity to tackle has outrun our ability to contain it. Semantic code search helps us navigate these inevitably complex systems. But semantic search shouldn’t come at the cost of speed. Search should still feel instantaneous, even when searching a codebase as large as Rust (which has over 2.8 million lines of code!). Qdrant gives bloop excellent semantic search performance whilst using a reasonable amount of resources, so they can handle concurrent search requests. ## The Upshot [bloop](https://bloop.ai/) are really happy with how Qdrant has slotted into their semantic code search engine: it’s performant and reliable, even for large codebases. And it’s written in Rust(!) with an easy to integrate qdrant-client crate. In short, Qdrant has helped keep bloop’s code search fast, accurate and reliable. #### Footnotes: [^1]: Incidental complexity is the sort of complexity arising from weaknesses in our processes and tools, whereas intrinsic complexity is the sort that we face when trying to describe, let alone solve the problem. ",blog/case-study-bloop.md "--- draft: true title: ""Qdrant Hybrid Cloud and Cohere Support Enterprise AI"" short_description: ""Next gen enterprise software will rely on revolutionary technologies by Qdrant Hybrid Cloud and Cohere."" description: ""Next gen enterprise software will rely on revolutionary technologies by Qdrant Hybrid Cloud and Cohere."" preview_image: /blog/hybrid-cloud-cohere/hybrid-cloud-cohere.png date: 2024-04-10T00:01:00Z author: Qdrant featured: false weight: 1011 tags: - Qdrant - Vector Database --- We’re excited to share that Qdrant and [Cohere](https://cohere.com/) are partnering on the launch of [Qdrant Hybrid Cloud](/hybrid-cloud/) to enable global audiences to build and scale their AI applications quickly and securely. With Cohere's world-class large language models (LLMs), getting the most out of vector search becomes incredibly easy. Qdrant's new Hybrid Cloud offering and its Kubernetes-native design can be coupled with Cohere's powerful models and APIs. This combination allows for simple setup when prototyping and deploying AI solutions. It’s no secret that Retrieval Augmented Generation (RAG) has shown to be a powerful method of building conversational AI products, such as chatbots or customer support systems. With Cohere's managed LLM service, scientists and developers can tap into state-of-the-art text generation and understanding capabilities, all accessible via API. Qdrant Hybrid Cloud seamlessly integrates with Cohere’s foundation models, enabling convenient data vectorization and highly accurate semantic search. With Qdrant Hybrid Cloud, users have the flexibility to deploy their vector database in an environment of their choice. By using container-based scalable deployments, global businesses can keep both products deployed in the same hosting architecture. By combining Cohere’s foundation models with Qdrant’s vector search capabilities, developers can create robust and scalable GenAI applications tailored to meet the demands of modern enterprises. This powerful combination empowers organizations to build strong and secure applications that search, understand meaning and converse in text. #### Take Full Control of Your GenAI Application with Qdrant Hybrid Cloud and Cohere Building apps with Qdrant Hybrid Cloud and Cohere’s models comes with several key advantages: **Data Sovereignty:** Should you wish to keep both deployment together, this integration guarantees that your vector database is hosted in proximity to the foundation models and proprietary data, thereby reducing latency, supporting data locality, and safeguarding sensitive information to comply with regulatory requirements, such as GDPR. **Massive Scale Support:** Users can achieve remarkable efficiency and scalability in running complex queries across vast datasets containing billions of text objects and millions of users. This integration enables lightning-fast retrieval of relevant information, making it ideal for enterprise-scale applications where speed and accuracy are paramount. **Cost Efficiency:** By leveraging Qdrant's quantization for efficient data handling and pairing it with Cohere's scalable and affordable pricing structure, the price/performance ratio of this integration is next to none. Companies who are just getting started with both will have a minimal upfront investment and optimal cost management going forward. #### Start Building Your New App With Cohere and Qdrant Hybrid Cloud ![hybrid-cloud-cohere-tutorial](/blog/hybrid-cloud-cohere/hybrid-cloud-cohere-tutorial.png) We put together an end-to-end tutorial to show you how to build a GenAI application with Qdrant Hybrid Cloud and Cohere’s embeddings. #### Tutorial: Build a RAG System to Answer Customer Support Queries Learn how to set up a private AI service that addresses customer support issues with high accuracy and effectiveness. By leveraging Cohere’s models with Qdrant Hybrid Cloud, you will create a fully private customer support system. [Try the Tutorial](/documentation/tutorials/rag-customer-support-cohere-airbyte-aws/) #### Documentation: Deploy Qdrant in a Few Clicks Our simple Kubernetes-native design lets you deploy Qdrant Hybrid Cloud on your hosting platform of choice in just a few steps. Learn how in our documentation. [Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/) #### Ready to Get Started? Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/). ",blog/hybrid-cloud-cohere.md "--- draft: false title: Introducing Qdrant Cloud on Microsoft Azure slug: qdrant-cloud-on-microsoft-azure short_description: Qdrant Cloud is now available on Microsoft Azure description: ""Learn the benefits of Qdrant Cloud on Azure."" preview_image: /blog/from_cms/qdrant-azure-2-1.png date: 2024-01-17T08:40:42Z author: Manuel Meyer featured: false tags: - Data Science - Vector Database - Machine Learning - Information Retrieval - Cloud - Azure --- Great news! We've expanded Qdrant's managed vector database offering — [Qdrant Cloud](https://cloud.qdrant.io/) — to be available on Microsoft Azure. You can now effortlessly set up your environment on Azure, which reduces deployment time, so you can hit the ground running. [Get started](https://cloud.qdrant.io/) What this means for you: - **Rapid application development**: Deploy your own cluster through the Qdrant Cloud Console within seconds and scale your resources as needed. - **Billion vector scale**: Seamlessly grow and handle large-scale datasets with billions of vectors. Leverage Qdrant features like horizontal scaling and binary quantization with Microsoft Azure's scalable infrastructure. **""With Qdrant, we found the missing piece to develop our own provider independent multimodal generative AI platform at enterprise scale.""** -- Jeremy Teichmann (AI Squad Technical Lead & Generative AI Expert), Daly Singh (AI Squad Lead & Product Owner) - Bosch Digital. Get started by [signing up for a Qdrant Cloud account](https://cloud.qdrant.io). And learn more about Qdrant Cloud in our [docs](/documentation/cloud/). ",blog/qdrant-cloud-on-microsoft-azure.md "--- draft: false title: ""Vultr and Qdrant Hybrid Cloud Support Next-Gen AI Projects"" short_description: ""Providing a flexible platform for high-performance vector search in next-gen AI workloads."" description: ""Providing a flexible platform for high-performance vector search in next-gen AI workloads."" preview_image: /blog/hybrid-cloud-vultr/hybrid-cloud-vultr.png date: 2024-04-10T00:08:00Z author: Qdrant featured: false weight: 1000 tags: - Qdrant - Vector Database --- We’re excited to share that Qdrant and [Vultr](https://www.vultr.com/) are partnering to provide seamless scalability and performance for vector search workloads. With Vultr's global footprint and customizable platform, deploying vector search workloads becomes incredibly flexible. Qdrant's new [Qdrant Hybrid Cloud](/hybrid-cloud/) offering and its Kubernetes-native design, coupled with Vultr's straightforward virtual machine provisioning, allows for simple setup when prototyping and building next-gen AI apps. #### Adapting to Diverse AI Development Needs with Customization and Deployment Flexibility In the fast-paced world of AI and ML, businesses are eagerly integrating AI and generative AI to enhance their products with new features like AI assistants, develop new innovative solutions, and streamline internal workflows with AI-driven processes. Given the diverse needs of these applications, it's clear that a one-size-fits-all approach doesn't apply to AI development. This variability in requirements underscores the need for adaptable and customizable development environments. Recognizing this, Qdrant and Vultr have teamed up to offer developers unprecedented flexibility and control. The collaboration enables the deployment of a fully managed vector database on Vultr’s adaptable platform, catering to the specific needs of diverse AI projects. This unique setup offers developers the ideal Vultr environment for their vector search workloads. It ensures seamless adaptability and data privacy with all data residing in their environment. For the first time, Qdrant Hybrid Cloud allows for fully managing a vector database on Vultr, promoting rapid development cycles without the hassle of modifying existing setups and ensuring that data remains secure within the organization. Moreover, this partnership empowers developers with centralized management over their vector database clusters via Qdrant’s control plane, enabling precise size adjustments based on workload demands. This joint setup marks a significant step in providing the AI and ML field with flexible, secure, and efficient application development tools. > *""Our collaboration with Qdrant empowers developers to unlock the potential of vector search applications, such as RAG, by deploying Qdrant Hybrid Cloud with its high-performance search capabilities directly on Vultr's global, automated cloud infrastructure. This partnership creates a highly scalable and customizable platform, uniquely designed for deploying and managing AI workloads with unparalleled efficiency.""* Kevin Cochrane, Vultr CMO. #### The Benefits of Deploying Qdrant Hybrid Cloud on Vultr Together, Qdrant Hybrid Cloud and Vultr offer enhanced AI and ML development with streamlined benefits: - **Simple and Flexible Deployment:** Deploy Qdrant Hybrid Cloud on Vultr in a few minutes with a simple “one-click” installation by adding your Vutlr environment as a Hybrid Cloud Environment to Qdrant. - **Scalability and Customizability**: Qdrant’s efficient data handling and Vultr’s scalable infrastructure means projects can be adjusted dynamically to workload demands, optimizing costs without compromising performance or capabilities. - **Unified AI Stack Management:** Seamlessly manage the entire lifecycle of AI applications, from vector search with Qdrant Hybrid Cloud to deployment and scaling with the Vultr platform and its AI and ML solutions, all within a single, integrated environment. This setup simplifies workflows, reduces complexity, accelerates development cycles, and simplifies the integration with other elements of the AI stack like model development, finetuning, or inference and training. - **Global Reach, Local Execution**: With Vultr's worldwide infrastructure and Qdrant's fast vector search, deploy AI solutions globally while ensuring low latency and compliance with local data regulations, enhancing user satisfaction. #### Getting Started with Qdrant Hybrid Cloud and Vultr We've compiled an in-depth guide for leveraging Qdrant Hybrid Cloud on Vultr to kick off your journey into building cutting-edge AI solutions. For further insights into the deployment process, refer to our comprehensive documentation. ![hybrid-cloud-vultr-tutorial](/blog/hybrid-cloud-vultr/hybrid-cloud-vultr-tutorial.png) #### Tutorial: Crafting a Personalized AI Assistant with RAG This tutorial outlines creating a personalized AI assistant using Qdrant Hybrid Cloud on Vultr, incorporating advanced vector search to power dynamic, interactive experiences. We will develop a RAG pipeline powered by DSPy and detail how to maintain data privacy within your Vultr environment. [Try the Tutorial](/documentation/tutorials/rag-chatbot-vultr-dspy-ollama/) #### Documentation: Effortless Deployment with Qdrant Our Kubernetes-native framework simplifies the deployment of Qdrant Hybrid Cloud on Vultr, enabling you to get started in just a few straightforward steps. Dive into our documentation to learn more. [Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/) #### Ready to Get Started? Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/). ",blog/hybrid-cloud-vultr.md "--- title: ""Chat with a codebase using Qdrant and N8N"" draft: false slug: qdrant-n8n short_description: Integration demo description: Building a RAG-based chatbot using Qdrant and N8N to chat with a codebase on GitHub preview_image: /blog/qdrant-n8n/preview.jpg date: 2024-01-06T04:09:05+05:30 author: Anush Shetty featured: false tags: - integration - n8n - blog --- n8n (pronounced n-eight-n) helps you connect any app with an API. You can then manipulate its data with little or no code. With the Qdrant node on n8n, you can build AI-powered workflows visually. Let's go through the process of building a workflow. We'll build a chat with a codebase service. ## Prerequisites - A running Qdrant instance. If you need one, use our [Quick start guide](/documentation/quick-start/) to set it up. - An OpenAI API Key. Retrieve your key from the [OpenAI API page](https://platform.openai.com/account/api-keys) for your account. - A GitHub access token. If you need to generate one, start at the [GitHub Personal access tokens page](https://github.com/settings/tokens/). ## Building the App Our workflow has two components. Refer to the [n8n quick start guide](https://docs.n8n.io/workflows/create/) to get acquainted with workflow semantics. - A workflow to ingest a GitHub repository into Qdrant - A workflow for a chat service with the ingested documents #### Workflow 1: GitHub Repository Ingestion into Qdrant ![GitHub to Qdrant workflow](/blog/qdrant-n8n/load-demo.gif) For this workflow, we'll use the following nodes: - [Qdrant Vector Store - Insert](https://docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.vectorstoreqdrant/#insert-documents): Configure with [Qdrant credentials](https://docs.n8n.io/integrations/builtin/credentials/qdrant/) and a collection name. If the collection doesn't exist, it's automatically created with the appropriate configurations. - [GitHub Document Loader](https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.documentgithubloader/): Configure the GitHub access token, repository name, and branch. In this example, we'll use [qdrant/demo-food-discovery@main](https://github.com/qdrant/demo-food-discovery). - [Embeddings OpenAI](https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.embeddingsopenai/): Configure with OpenAI credentials and the embedding model options. We use the [text-embedding-ada-002](https://platform.openai.com/docs/models/embeddings) model. - [Recursive Character Text Splitter](https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.textsplitterrecursivecharactertextsplitter/): Configure the [text splitter options](https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.textsplitterrecursivecharactertextsplitter/#node-parameters ). We use the defaults in this example. Connect the workflow to a manual trigger. Click ""Test Workflow"" to run it. You should be able to see the progress in real-time as the data is fetched from GitHub, transformed into vectors and loaded into Qdrant. #### Workflow 2: Chat Service with Ingested Documents ![Chat workflow](/blog/qdrant-n8n/chat.png) The workflow use the following nodes: - [Qdrant Vector Store - Retrieve](https://docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.vectorstoreqdrant/#retrieve-documents-for-agentchain): Configure with [Qdrant credentials](https://docs.n8n.io/integrations/builtin/credentials/qdrant/) and the name of the collection the data was loaded into in workflow 1. - [Retrieval Q&A Chain](https://docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.chainretrievalqa/): Configure with default values. - [Embeddings OpenAI](https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.embeddingsopenai/): Configure with OpenAI credentials and the embedding model options. We use the [text-embedding-ada-002](https://platform.openai.com/docs/models/embeddings) model. - [OpenAI Chat Model](https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatopenai/): Configure with OpenAI credentials and the chat model name. We use [gpt-3.5-turbo](https://platform.openai.com/docs/models/gpt-3-5) for the demo. Once configured, hit the ""Chat"" button to initiate the chat interface and begin a conversation with your codebase. ![Chat demo](/blog/qdrant-n8n/chat-demo.png) To embed the chat in your applications, consider using the [@n8n/chat](https://www.npmjs.com/package/@n8n/chat) package. Additionally, N8N supports scheduled workflows and can be triggered by events across various applications. ## Further reading - [n8n Documentation](https://docs.n8n.io/) - [n8n Qdrant Node documentation](https://docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.vectorstoreqdrant/#qdrant-vector-store) ",blog/qdrant-n8n.md "--- title: ""Qdrant Updated Benchmarks 2024"" draft: false slug: qdrant-benchmarks-2024 # Change this slug to your page slug if needed short_description: Qdrant Updated Benchmarks 2024 # Change this description: We've compared how Qdrant performs against the other vector search engines to give you a thorough performance analysis # Change this preview_image: /benchmarks/social-preview.png # Change this categories: - News # social_preview_image: /blog/Article-Image.png # Optional image used for link previews # title_preview_image: /blog/Article-Image.png # Optional image used for blog post title # small_preview_image: /blog/Article-Image.png # Optional image used for small preview in the list of blog posts date: 2024-01-15T09:29:33-03:00 author: Sabrina Aquino # Change this featured: false # if true, this post will be featured on the blog page tags: # Change this, related by tags posts will be shown on the blog page - qdrant - benchmarks - performance --- It's time for an update to Qdrant's benchmarks! We've compared how Qdrant performs against the other vector search engines to give you a thorough performance analysis. Let's get into what's new and what remains the same in our approach. ### What's Changed? #### All engines have improved Since the last time we ran our benchmarks, we received a bunch of suggestions on how to run other engines more efficiently, and we applied them. This has resulted in significant improvements across all engines. As a result, we have achieved an impressive improvement of nearly four times in certain cases. You can view the previous benchmark results [here](/benchmarks/single-node-speed-benchmark-2022/). #### Introducing a New Dataset To ensure our benchmark aligns with the requirements of serving RAG applications at scale, the current most common use-case of vector databases, we have introduced a new dataset consisting of 1 million OpenAI embeddings. ![rps vs precision benchmark - up and to the right is better](/blog/qdrant-updated-benchmarks-2024/rps-bench.png) #### Separation of Latency vs RPS Cases Different applications have distinct requirements when it comes to performance. To address this, we have made a clear separation between latency and requests-per-second (RPS) cases. For example, a self-driving car's object recognition system aims to process requests as quickly as possible, while a web server focuses on serving multiple clients simultaneously. By simulating both scenarios and allowing configurations for 1 or 100 parallel readers, our benchmark provides a more accurate evaluation of search engine performance. ![mean-time vs precision benchmark - down and to the right is better](/blog/qdrant-updated-benchmarks-2024/latency-bench.png) ### What Hasn't Changed? #### Our Principles of Benchmarking At Qdrant all code stays open-source. We ensure our benchmarks are accessible for everyone, allowing you to run them on your own hardware. Your input matters to us, and contributions and sharing of best practices are welcome! Our benchmarks are strictly limited to open-source solutions, ensuring hardware parity and avoiding biases from external cloud components. We deliberately don't include libraries or algorithm implementations in our comparisons because our focus is squarely on vector databases. Why? Because libraries like FAISS, while useful for experiments, don’t fully address the complexities of real-world production environments. They lack features like real-time updates, CRUD operations, high availability, scalability, and concurrent access – essentials in production scenarios. A vector search engine is not only its indexing algorithm, but its overall performance in production. We use the same benchmark datasets as the [ann-benchmarks](https://github.com/erikbern/ann-benchmarks/#data-sets) project so you can compare our performance and accuracy against it. ### Detailed Report and Access For an in-depth look at our latest benchmark results, we invite you to read the [detailed report](/benchmarks/). If you're interested in testing the benchmark yourself or want to contribute to its development, head over to our [benchmark repository](https://github.com/qdrant/vector-db-benchmark). We appreciate your support and involvement in improving the performance of vector databases. ",blog/qdrant-updated-benchmarks-2024.md "--- draft: false title: ""Qdrant Hybrid Cloud and Haystack for Enterprise RAG"" short_description: ""A winning combination for enterprise-scale RAG consists of a strong framework and a scalable database."" description: ""A winning combination for enterprise-scale RAG consists of a strong framework and a scalable database."" preview_image: /blog/hybrid-cloud-haystack/hybrid-cloud-haystack.png date: 2024-04-10T00:02:00Z author: Qdrant featured: false weight: 1009 tags: - Qdrant - Vector Database --- We’re excited to share that Qdrant and [Haystack](https://haystack.deepset.ai/) are continuing to expand their seamless integration to the new [Qdrant Hybrid Cloud](/hybrid-cloud/) offering, allowing developers to deploy a managed vector database in their own environment of choice. Earlier this year, both Qdrant and Haystack, started to address their user’s growing need for production-ready retrieval-augmented-generation (RAG) deployments. The ability to build and deploy AI apps anywhere now allows for complete data sovereignty and control. This gives large enterprise customers the peace of mind they need before they expand AI functionalities throughout their operations. With a highly customizable framework like Haystack, implementing vector search becomes incredibly simple. Qdrant's new Qdrant Hybrid Cloud offering and its Kubernetes-native design supports customers all the way from a simple prototype setup to a production scenario on any hosting platform. Users can attach AI functionalities to their existing in-house software by creating custom integration components. Don’t forget, both products are open-source and highly modular! With Haystack and Qdrant Hybrid Cloud, the path to production has never been clearer. The elaborate integration of Qdrant as a Document Store simplifies the deployment of Haystack-based AI applications in any production-grade environment. Coupled with Qdrant’s Hybrid Cloud offering, your application can be deployed anyplace, on your own terms. >*“We hope that with Haystack 2.0 and our growing partnerships such as what we have here with Qdrant Hybrid Cloud, engineers are able to build AI systems with full autonomy. Both in how their pipelines are designed, and how their data are managed.”* Tuana Çelik, Developer Relations Lead, deepset. #### Simplifying RAG Deployment: Qdrant Hybrid Cloud and Haystack 2.0 Integration Building apps with Qdrant Hybrid Cloud and deepset’s framework has become even simpler with Haystack 2.0. Both products are completely optimized for RAG in production scenarios. Here are some key advantages: **Mature Integration:** You can connect your Haystack pipelines to Qdrant in a few lines of code. Qdrant Hybrid Cloud leverages the existing “Document Store” integration for data sources.This common interface makes it easy to access Qdrant as a data source from within your existing setup. **Production Readiness:** With deepset’s new product [Hayhooks](https://docs.haystack.deepset.ai/docs/hayhooks), you can generate RESTful APIs from Haystack pipelines. This simplifies the deployment process and makes the service easily accessible by developers using Qdrant Hybrid Cloud to prepare RAG systems for production. **Flexible & Customizable:** The open-source nature of Qdrant and Haystack’s 2.0 makes it easy to extend the capabilities of both products through customization. When tailoring vector RAG systems to their own needs, users can develop custom components and plug them into both Qdrant Hybrid Cloud and Haystack for maximum modularity. [Creating custom components](https://docs.haystack.deepset.ai/docs/custom-components) is a core functionality. #### Learn How to Build a Production-Level RAG Service with Qdrant and Haystack ![hybrid-cloud-haystack-tutorial](/blog/hybrid-cloud-haystack/hybrid-cloud-haystack-tutorial.png) To get you started, we created a comprehensive tutorial that shows how to build next-gen AI applications with Qdrant Hybrid Cloud using deepset’s Haystack framework. #### Tutorial: Private Chatbot for Interactive Learning Learn how to develop a tutor chatbot from online course materials. You will create a Retrieval Augmented Generation (RAG) pipeline with Haystack for enhanced generative AI capabilities and Qdrant Hybrid Cloud for vector search. By deploying every tool on RedHat OpenShift, you will ensure complete privacy and data sovereignty, whereby no course content leaves your cloud. [Try the Tutorial](/documentation/tutorials/rag-chatbot-red-hat-openshift-haystack/) #### Documentation: Deploy Qdrant in a Few Clicks Our simple Kubernetes-native design lets you deploy Qdrant Hybrid Cloud on your hosting platform of choice in just a few steps. Learn how in our documentation. [Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/) #### Ready to get started? Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/). ",blog/hybrid-cloud-haystack.md "--- draft: false title: Teaching Vector Databases at Scale - Alfredo Deza | Vector Space Talks slug: teaching-vector-db-at-scale short_description: Alfredo Deza tackles AI teaching, the intersection of technology and academia, and the value of consistent learning. description: Alfredo Deza discusses the practicality of machine learning operations, highlighting how personal interest in topics like wine datasets enhances engagement, while reflecting on the synergies between his professional sportsman discipline and the persistent, straightforward approach required for effectively educating on vector databases and large language models. preview_image: /blog/from_cms/alfredo-deza-bp-cropped.png date: 2024-04-09T03:06:00.000Z author: Demetrios Brinkmann featured: false tags: - Vector Search - Retrieval Augmented Generation - Vector Space Talks - Coursera --- > *""So usually I get asked, why are you using Qdrant? What's the big deal? Why are you picking these over all of the other ones? And to me it boils down to, aside from being renowned or recognized, that it works fairly well. There's one core component that is critical here, and that is it has to be very straightforward, very easy to set up so that I can teach it, because if it's easy, well, sort of like easy to or straightforward to teach, then you can take the next step and you can make it a little more complex, put other things around it, and that creates a great development experience and a learning experience as well.”*\ — Alfredo Deza > Alfredo is a software engineer, speaker, author, and former Olympic athlete working in Developer Relations at Microsoft. He has written several books about programming languages and artificial intelligence and has created online courses about the cloud and machine learning. He currently is an Adjunct Professor at Duke University, and as part of his role, works closely with universities around the world like Georgia Tech, Duke University, Carnegie Mellon, and Oxford University where he often gives guest lectures about technology. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/4HFSrTJWxl7IgQj8j6kwXN?si=99H-p0fKQ0WuVEBJI9ugUw), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/3l6F6A_It0Q?feature=shared).*** ## **Top takeaways:** How does a former athlete such as Alfredo Deza end up in this AI and Machine Learning industry? That’s what we’ll find out in this episode of Vector Space Talks. Let’s understand how his background as an olympian offers a unique perspective on consistency and discipline that's a real game-changer in this industry. Here are some things you’ll discover from this episode: 1. **The Intersection of Teaching and Tech:** Alfredo discusses on how to effectively bridge the gap between technical concepts and student understanding, especially when dealing with complex topics like vector databases. 2. **Simplified Learning:** Dive into Alfredo's advocacy for simplicity in teaching methods, mirroring his approach with Qdrant and the potential for a Rust in-memory implementation aimed at enhancing learning experiences. 3. **Beyond the Titanic Dataset:** Discover why Alfredo prefers to teach with a wine dataset he developed himself, underscoring the importance of using engaging subject matter in education. 4. **AI Learning Acceleration:** Alfredo discusses the struggle universities face to keep pace with AI advancements and how online platforms can offer a more up-to-date curriculum. 5. **Consistency is Key:** Alfredo draws parallels between the discipline required in high-level athletics and the ongoing learning journey in AI, zeroing in on his mantra, “There is no secret” to staying consistent. > Fun Fact: Alfredo tells the story of athlete Dick Fosbury's invention of the Fosbury Flop to highlight the significance of teaching simplicity. > ## Show notes: 00:00 Teaching machine learning, Python to graduate students.\ 06:03 Azure AI search service simplifies teaching, Qdrant facilitates learning.\ 10:49 Controversy over high jump style.\ 13:18 Embracing past for inspiration, emphasizing consistency.\ 15:43 Consistent learning and practice lead to success.\ 20:26 Teaching SQL uses SQLite, Rust has limitations.\ 25:21 Online platforms improve and speed up education.\ 29:24 Duke and Coursera offer specialized language courses.\ 31:21 Passion for wines, creating diverse dataset.\ 35:00 Encouragement for vector db discussion, wrap up.\ ## More Quotes from Alfredo: *""Qdrant makes it straightforward. We use it in-memory for my classes and I would love to see something similar setup in Rust to make teaching even easier.”*\ — Alfredo Deza *""Retrieval augmented generation is kind of like having an open book test. So the large language model is the student, and they have an open book so they can see the answers and then repackage that into their own words and provide an answer.”*\ — Alfredo Deza *""With Qdrant, I appreciate that the use of the Python API is so simple. It avoids the complexity that comes from having a back-end system like in Rust where you need an actual instance of the database running.”*\ — Alfredo Deza ## Transcript: Demetrios: What is happening? Everyone, welcome back to another vector space talks. I am Demetrios, and I am joined today by good old Sabrina. Where you at, Sabrina? Hello? Sabrina Aquino: Hello, Demetrios. I'm from Brazil. I'm in Brazil right now. I know that you are traveling currently. Demetrios: Where are you? At Kubecon in Paris. And it has been magnificent. But I could not wait to join the session today because we've got Alfredo coming at us. Alfredo Deza: What's up, dude? Hi. How are you? Demetrios: I'm good, man. It's been a while. I think the last time that we chatted was two years ago, maybe right before your book came out. When did the book come out? Alfredo Deza: Yeah, something like that. I would say a couple of years ago. Yeah. I wrote, co authored practical machine learning operations with no gift. And it was published on O'Reilly. Demetrios: Yeah. And that was, I think, two years ago. So you've been doing a lot of stuff since then. Let's be honest, you are maybe one of the most active men on the Internet. I always love seeing what you're doing. You're bringing immense value to everything that you touch. I'm really excited to be able to chat with you for this next 30 minutes. Alfredo Deza: Yeah, of course. Demetrios: Maybe just, we'll start it off. We're going to get into it when it comes to what you're doing and really what the space looks like right now. Right. But I would love to hear a little bit of what you've been up to since, for the last two years, because I haven't talked to you. Alfredo Deza: Yeah, that's right. Well, several different things, actually. Right after we chatted last time, I joined Microsoft to work in developer relations. Microsoft has a big group of folks working in developer relations. And basically, for me, it signaled my shift away from regular software engineering. I was primarily doing software engineering and thought that perhaps with the books and some of the courses that I had published, it was time for me to get into more teaching and providing useful content, which is really something very rewarding. And in developer relations, in advocacy in general, it's kind of like a way of teaching. We demonstrate technology, how it works from a technical point of view. Alfredo Deza: So aside from that, started working really closely with several different universities. I work with Georgia Tech, Oxford University, Carnegie Mellon University, and Duke University, where I've been working as an adjunct professor for a couple of years as well. So at Duke, what I do is I teach a couple of classes a year. One is on machine learning. Last year was machine learning operations, and this year it's going to, I think, hopefully I'm not messing anything up. I think we're going to shift a little bit to doing operations with large language models. And in the fall I teach a programming class for graduate students that want to join one of the graduate programs and they want to get a primer on Python. So I teach a little bit of that. Alfredo Deza: And in the meantime, also in partnership with Duke, getting a lot of courses out on Coursera, and from large language models to doing stuff with Azure, to machine learning operations, to rust, I've been doing a lot of rust lately, which I really like. So, yeah, so a lot of different things, but I think the core pillar for me remains being able to teach and spread the knowledge. Demetrios: Love it, man. And I know you've been diving into vector databases. Can you tell us more? Alfredo Deza: Yeah, well, the thing is that when you're trying to teach, and yes, one of the courses that we had out for large language models was applying retrieval augmented generation, which is the basis for vector databases, to see how it works. This is how it works. These are the components that you need. Let's create an application from scratch and see how it works. And for those that don't know, retrieval augmented generation is kind of like having. The other day I saw a description about this, which I really like, which is a way of, it's kind of like having an open book test. So the large language model is the student, and they have an open book so they can see the answers and then repackage that into their own words and provide an answer, which is kind of like what we do with vector databases in the retrieval augmented generation pattern. We've been putting a lot of examples on how to do these, and in the case of Azure, you're enabling certain services. Alfredo Deza: There's the Azure AI search service, which is really good. But sometimes when you're trying to teach specifically, it is useful to have a very straightforward way to do this and applying or creating a retrieval augmented generation pattern, it's kind of tricky, I think. We're not there yet to do it in a nice, straightforward way. So there are several different options, Qdrant being one of them. So usually I get asked, why are you using Qdrant? What's the big deal? Why are you picking these over all of the other ones? And to me it boils down to, aside from being renowned or recognized, that it works fairly well. There's one core component that is critical here, and that is it has to be very straightforward, very easy to set up so that I can teach it, because if it's easy, well, sort of like easy to or straightforward to teach, then you can take the next step and you can make it a little more complex, put other things around it, and that creates a great development experience and a learning experience as well. If something is very complex, if the list of requirements is very long, you're not going to be very happy, you're going to spend all this time trying to figure, and when you have, similar to what happens with automation, when you have a list of 20 different things that you need to, in order to, say, deploy a website, you're going to get things out of order, you're going to forget one thing, you're going to have a typo, you're going to mess it up, you're going to have to start from scratch, and you're going to get into a situation where you can't get out of it. And Qdrant does provide a very straightforward way to run the database, and that one is the in memory implementation with Python. Alfredo Deza: So you can actually write a little bit of python once you install the libraries and say, I want to instantiate a vector database and I wanted to run it in memory. So for teaching, this is great. It's like, hey, of course it's not for production, but just write these couple of lines and let's get right into it. Let's just start populating these and see how it works. And it works. It's great. You don't need to have all of these, like, wow, let's launch Kubernetes over here and let's have all of these dynamic. No, why? I mean, sure, you want to create a business model and you want to launch to production eventually, and you want to have all that running perfect. Alfredo Deza: But for this setup, like for understanding how it works, for trying baby steps into understanding vector databases, this is perfect. My one requirement, or my one wish list item is to have that in memory thing for rust. That would be pretty sweet, because I think it'll make teaching rust and retrieval augmented generation with rust much easier. I wouldn't have to worry about bringing up containers or external services. So that's the deal with rust. And I'll tell you one last story about why I think specifically making it easy to get started with so that I can teach it, so that others can learn from it, is crucial. I would say almost 50 years ago, maybe a little bit more, my dad went to Italy to have a course on athletics. My dad was involved in sports and he was going through this, I think it was like a six month specialization on athletics. Alfredo Deza: And he was in class and it had been recent that the high jump had transitioned from one style to the other. The previous style, the old style right now is the old style. It's kind of like, it was kind of like over the bar. It was kind of like a weird style. And it had recently transitioned to a thing called the Fosbury flop. This person, his last name is Dick Fosbury, invented the Fosbury flop. He said, no, I'm just going to go straight at it, then do a little curve and then jump over it. And then he did, and then he started winning everything. Alfredo Deza: And everybody's like, what this guy? Well, first they thought he was crazy, and they thought that dismissive of what he was trying to do. And there were people that sticklers that wanted to stay with the older style, but then he started beating records and winning medals, and so people were like, well, is this a good thing? Let's try it out. So there was a whole. They were casting doubt. It's like, is this really the thing? Is this really what we should be doing? So one of the questions that my dad had to answer in this specialization he did in Italy was like, which style is better, it's the old style or the new style? And so my dad said, it's the new style. And they asked him, why is the new style better? And he didn't choose the path of answering the, well, because this guy just won the Olympics or he just did a record over here that at the end is meaningless. What he said was, it is the better style because it's easier to teach and it is 100% correct. When you're teaching high jump, it is much easier to teach the Fosbury flop than the other style. Alfredo Deza: It is super hard. So you start seeing this parallel in teaching and learning where, but with this one, you have all of these world records and things are going great. Well, great. But is anybody going to try, are you going to have more people looking into it or are you going to have less? What is it that we're trying to do here? Right. Demetrios: Not going to lie, I did not see how you were going to land the plane on coming from the high jump into the vector database space, but you did it gracefully. That was well done. So, basically, the easier it is to teach, the more people are going to be able to jump on board and the more people are going to be able to get value out of it. Sabrina Aquino: I absolutely love it, by the way. It's a pleasure to meet you, Alfredo. And I was actually about to ask you. I love your background as an olympic athlete. Right. And I was wondering, do you make any connections or how do we interact this background with your current teaching and AI? And do you see any similarities or something coming from that approach into what you've applied? Alfredo Deza: Well, you're bringing a great point. It's taken me a very long time to feel comfortable talking about my professional sports past. I don't want to feel like I'm overwhelming anyone or trying to be like a show off. So I usually try not to mention, although I'm feeling more comfortable mentioning my professional past. But the only situations where I think it's good to talk about it is when I feel like there's a small chance that I might get someone thinking about the possibilities of what they can actually do and what they can try. And things that are seemingly complex might be achievable. So you mentioned similarities, but I think there are a couple of things that happen when you're an athlete in any sport, really, that you're trying to or you're operating at the very highest level and there's several things that happen there. You have to be consistent. Alfredo Deza: And it's something that I teach my kids as well. I have one of my kids, he's like, I did really a lot of exercise today and then for a week he doesn't do anything else. And he's like, now I'm going to do exercise again. And she's going to do 4 hours. And it's like, wait a second, wait a second. It's okay. You want to do it. This is great. Alfredo Deza: But no intensity. You need to be consistent. Oh, dad, you don't let me work out and it's like, no work out. Good, I support you, but you have to be consistent and slowly start ramping up and slowly start getting better. And it happens a lot with learning. We are in an era that concepts and things are advancing so fast that things are getting obsolete even faster. So you're always in this motion of trying to learn. So what I would say is the similarities are in the consistency. Alfredo Deza: You have to keep learning, you have to keep applying yourself. But it can be like, oh, today I'm going to read this whole book from start to end and you're just going to learn everything about, I don't know, rust. It's like, well, no, try applying rust a little bit every day and feel comfortable with it. And at the very end you will do better. Like, you can't go with high intensity because you're going to get burned out, you're going to overwhelmed and it's not going to work out. You don't go to the Olympics by working out for like a few months. Actually, a very long time ago, a reporter asked me, how many months have you been working out preparing for the Olympics? It's like, what do you mean with how many months? I've been training my whole life for this. What are we talking about? Demetrios: We're not talking in months or years. We're talking in lifetimes, right? Alfredo Deza: So you have to take it easy. You can't do that. And beyond that, consistency. Consistency goes hand in hand with discipline. I came to the US in 2006. I don't live like I was born in Peru and I came to the US with no degree. I didn't go to college. Well, I went to college for a few months and then I dropped out and I didn't have a career, I didn't have experience. Alfredo Deza: I was just recently married. I have never worked in my life because I used to be a professional athlete. And the only thing that I decided to do was to do amazing work, apply myself and try to keep learning and never stop learning. In the back of my mind, it's like, oh, I have a tremendous knowledge gap that I need to fulfill by learning. And actually, I have tremendous respect and I'm incredibly grateful by all of the people that opened doors for me and gave me an opportunity, one of them being Noah Giff, which I co authored a few books with him and some of the courses. And he actually taught me to write Python. I didn't know how to program. And he said, you know what? I think you should learn to write some python. Alfredo Deza: And I was like, python? Why would I ever need to do that? And I did. He's like, let's just find something to automate. I mean, what a concept. Find something to apply automation. And every week on Fridays, we'll just take a look at it and that's it. And we did that for a while. And then he said, you know what? You should apply for speaking at Python. How can I be speaking at a conference when I just started learning? It's like your perspective is different. Alfredo Deza: You just started learning these. You're going to do it in an interesting way. So I think those are concepts that are very important to me. Stay disciplined, stay consistent, and keep at it. The secret is that there's no secret. That's the bottom line. You have to keep consistent. Otherwise things are always making excuses. Alfredo Deza: Is very simple. Demetrios: The secret is there is no secret. That is beautiful. So you did kind of sprinkle this idea of, oh, I wish there was more stuff happening with Qdrant and rust. Can you talk a little bit more to that? Because one piece of Qdrant that people tend to love is that it's built in rust. Right. But also, I know that you mentioned before, could we get a little bit of this action so that I don't have to deal with any. What was it you were saying? The containers. Alfredo Deza: Yeah. Right. Now, if you want to have a proof of concept, and I always go for like, what's the easiest, the most straightforward, the less annoying things I need to do, the better. And with Python, the Python API for Qdrant, you can just write a few lines and say, I want to create an instance in memory and then that's it. The database is created for you. This is very similar, or I would say actually almost identical to how you run SQLite. Sqlite is the embedded database you can create in memory. And it's actually how I teach SQL as well. Alfredo Deza: When I have to teach SQl, I use sqlite. I think it's perfect. But in rust, like you said, Qdrant's backend is built on rust. There is no in memory implementation. So you are required to have an actual instance of the Qdrant database running. So you have a couple of options, but one of them probably means you'll have to bring up a container with Qdrant running and then you'll have to connect to that instance. So when you're teaching, the development environments are kind of constrained. Either you are in a lab somewhere like Crusader has labs, but those are self contained. Alfredo Deza: It's kind of tricky to get them running 100%. You can run multiple containers at the same time. So things start becoming more complex. Not only more complex for the learner, but also in this case, like the teacher, me who wants to figure out how to make this all run in a very constrained environment. And that makes it tricky. And I fasted the team, by the way, and I was told that maybe at some point they can do some magic and put the in memory implementation on the rust side of things, which I think it would be tremendous. Sabrina Aquino: We're going to advocate for that on our side. We're also going to be asking for it. And I think this is really good too. It really makes it easier. Me as a student not long ago, I do see what you mean. It's quite hard to get it all working very fast in the time of a class that you don't have a lot of time and students can get. I don't know, it's quite complex. I do get what you mean. Sabrina Aquino: And you also are working both on the tech industry and on academia, which I think is super interesting. And I always kind of feel like those two are a bit disconnected sometimes. And I was wondering what you think that how important is the collaboration of these two areas considering how fast the AI space is going to right now? And what are your thoughts? Alfredo Deza: Well, I don't like generalizing, but I'm going to generalize right now. I would say most universities are several steps behind, and there's a lot of complexities involved in higher education specifically. Most importantly, these institutions tend to be fairly large, and with fairly large institutions, what do you get? Oh, you get the magical bureaucracy for anything you want to do. Something like, oh, well, you need to talk to that department that needs to authorize something, that needs to go to some other department, and it's like, I'm going to change the curriculum. It's like, no, you can't. What does that mean? I have actually had conversations with faculty in universities where they say, listen, curricula. Yeah, we get that. We need to update it, but we change curricula every five years. Alfredo Deza: And so. See you in a while. It's been three years. We have two more years to go. See you in a couple of years. And that's detrimental to students now. I get it. Building curricula, it's very hard. Alfredo Deza: It takes a lot of work for the faculty to put something together. So it is something that, from a faculty perspective, it's like they're not going to get paid more if they update the curriculum. Demetrios: Right. Alfredo Deza: And it's a massive amount of work now that, of course, comes to the detriment of the learner. The student will be under service because they will have to go through curricula that is fairly dated. Now, there are situations and there are programs where this doesn't happen. And Duke, I've worked with several. They're teaching Llama file, which was built by Mozilla. And when did Llama file came out? It was just like a few months ago. And I think it's incredible. And I think those skills that are the ones that students need today in order to not only learn these things, but also be able to apply them when they're looking for a job or trying to professionally even apply them into their day to day, now that's one side of things. Alfredo Deza: But there's the other aspect. In the case of Duke, as well as other universities out there, they're using these online platforms so that they can put courses out there faster. Do you really need to go through a four year program to understand how retrieval augmented generation works? Or how to implement it? I would argue no, but would you be better out, like, taking a course that will take you perhaps a couple of weeks to go through and be fairly proficient? I would say yes, 100%. And you see several institutions putting courses out there that are meaningful, that are useful, that they can cope with the speed at which things are needed. I think it's kind of good. And I think that sometimes we tend to think about knowledge and learning things, kind of like in a bubble, especially here in the US. I think there's this college is this magical place where all of the amazing things happen. And if you don't go to college, things are going to go very bad for you. Alfredo Deza: And I don't think that's true. I think if you like college, if you like university, by all means take advantage of it. You want to experience it. That sounds great. I think there's tons of opportunity to do it outside of the university or the college setting and taking online courses from validated instructors. They have a good profile. Not someone that just dumped something on genetic AI and started. Demetrios: Someone like you. Alfredo Deza: Well, if you want to. Yeah, sure, why not? I mean, there's students that really like my teaching style. I think that's great. If you don't like my teaching style. Sometimes I tend to go a little bit slower because I don't want to overwhelm anyone. That's all good. But there is opportunity. And when I mention these things, people are like, oh, really? I'm not advertising for Coursera or anything else, but some of these platforms, if you pay a monthly fee, I think it's between $40 and $60. Alfredo Deza: I think on the expensive side, you can take advantage of all of these courses and as much as you can take them. Sometimes even companies say, hey, you have a paid subscription, go take it all. And I've met people like that. It's like, this is incredible. I'm learning so much. Perfect. I think there's a mix of things. I don't think there's like a binary answer, like, oh, you need to do this, or, no, don't do that, and everything's going to be well again. Demetrios: Yeah. Can you talk a little bit more about your course? And if I wanted to go on Coursera, what can I expect from. Alfredo Deza: You know, and again, I don't think as much as I like talking about my courses and the things that I do, I want to emphasize, like, if someone is watching this video or listening into what we're talking about, find something that is interesting to you and find a course that kind of delivers that thing, that sliver of interesting stuff, and then try it out. I think that's the best way. Don't get overwhelmed by. It's like, is this the right vector database that I should be learning? Is this instructor? It's like, no, try it out. What's going to happen? You don't like it when you're watching a bad video series or docuseries on Netflix or any streaming platform? Do you just like, I pay my $10 a month, so I'm going to muster through this whole 20 more episodes of this thing that I don't like. It's meaningless. It doesn't matter. Just move on. Alfredo Deza: So having said that, on Coursera specifically with Duke University, we tend to put courses out there that are going to be used in our programs in the things that I teach. For example, we just released the large language models. Specialization and specialization is a grouping of between four and six courses. So in there we have doing large language models with Azure, for example, introduction to generative AI, having a very simple rag pattern with Qdrant. I also have examples on how to do it with Azure AI search, which I think is pretty cool as well. How to do it locally with Llama file, which I think is great. You can have all of these large language models running locally, and then you have a little bit of Qdrant sprinkle over there, and then you have rack pattern. Now, I tend to teach with things that I really like, and I'll give you a quick example. Alfredo Deza: I think there's three data sets that are one of the top three most used data sets in all of machine learning and data science. Those are the Boston housing market, the diabetes data set in the US, and the other one is the Titanic. And everybody uses those. And I don't really understand why. I mean, perhaps I do understand why. It's because they're easy, they're clean, they're ready to go. Nothing's ever wrong with these, and everybody has used them to boredom. But for the life of me, you wouldn't be able to convince me to use any of those, because these are not topics that I really care about and they don't resonate with me. Alfredo Deza: The Titanic specifically is just horrid. Well, if I was 37 and I'm on first class and I'm male, would I survive? It's like, what are we trying to do here? How is this useful to anyone? So I tend to use things that I like, and I'm really passionate about wine. So I built my own data set, which is a collection of wines from all over the world, they have the ratings, they have the region, they have the type of grape and the notes and the name of the wine. So when I'm teaching them, like, look at this, this is amazing. It's wines from all over the world. So let's do a little bit of things here. So, for rag, what I was able to do is actually in the courses as well. I do, ah, I really know wines from Argentina, but these wines, it would be amazing if you can find me not a Malbec, but perhaps a cabernet franc. Alfredo Deza: That is amazing. From, it goes through Qdrant, goes back to llama file using some large language model or even small language model, like the Phi 2 from Microsoft, I think is really good. And he goes, it tells. Yeah, sure. I get that you want to have some good wines. Here's some good stuff that I can give you. And so it's great, right? I think it's great. So I think those kinds of things that are interesting to the person that is teaching or presenting, I think that's the key, because whenever you're talking about things that are very boring, that you do not care about, things are not going to go well for you. Alfredo Deza: I mean, if I didn't like teaching, if I didn't like vector databases, you would tell right away. It's like, well, yes, I've been doing stuff with the vector databases. They're good. Yeah, Qdrant, very good. You would tell right away. I can't lie. Very good. Demetrios: You can't fool anybody. Alfredo Deza: No. Demetrios: Well, dude, this is awesome. We will drop a link to the chat. We will drop a link to the course in the chat so that in case anybody does want to go on this wine tasting journey with you, they can. And I'm sure there's all kinds of things that will spark the creativity of the students as they go through it, because when you were talking about that, I was like, oh, it would be really cool to make that same type of thing, but with ski resorts there, you go around the world. And if I want this type of ski resort, I'm going to just ask my chat bot. So I'm excited to see what people create with it. I also really appreciate you coming on here, giving us your time and talking through all this. It's been a pleasure, as always, Alfredo. Demetrios: Thank you so much. Alfredo Deza: Yeah, thank you. Thank you for having me. Always happy to chat with you. I think Qdrant is doing a very solid product. Hopefully, my wish list item of in memory in rust comes to fruition, but I get it. Sometimes there are other priorities. It's all good. Yeah. Alfredo Deza: If anyone wants to connect with me, I'm always active on LinkedIn primarily. Always happy to connect with folks and talk about learning and improving and always being a better person. Demetrios: Excellent. Well, we will sign off, and if anyone else out there wants to come on here and talk to us about vector databases, we're always happy to have you. Feel free to reach out. And remember, don't get lost in vector space, folks. We will see you on the next one. Sabrina Aquino: Good night. Thank you so much. ",blog/teaching-vector-databases-at-scale-alfredo-deza-vector-space-talks-019-2.md "--- draft: false title: ""Qdrant Hybrid Cloud and Scaleway Empower GenAI"" short_description: ""Supporting innovation in AI with the launch of a revolutionary managed database for startups and enterprises."" description: ""Supporting innovation in AI with the launch of a revolutionary managed database for startups and enterprises."" preview_image: /blog/hybrid-cloud-scaleway/hybrid-cloud-scaleway.png date: 2024-04-10T00:06:00Z author: Qdrant featured: false weight: 1002 tags: - Qdrant - Vector Database --- In a move to empower the next wave of AI innovation, Qdrant and [Scaleway](https://www.scaleway.com/en/) collaborate to introduce [Qdrant Hybrid Cloud](/hybrid-cloud/), a fully managed vector database that can be deployed on existing Scaleway environments. This collaboration is set to democratize access to advanced AI capabilities, enabling developers to easily deploy and scale vector search technologies within Scaleway's robust and developer-friendly cloud infrastructure. By focusing on the unique needs of startups and the developer community, Qdrant and Scaleway are providing access to intuitive and easy to use tools, making cutting-edge AI more accessible than ever before. Building on this vision, the integration between Scaleway and Qdrant Hybrid Cloud leverages the strengths of both Qdrant, with its leading open-source vector database, and Scaleway, known for its innovative and scalable cloud solutions. This integration means startups and developers can now harness the power of vector search - essential for AI applications like recommendation systems, image recognition, and natural language processing - within their existing environment without the complexity of maintaining such advanced setups. *""With our partnership with Qdrant, Scaleway reinforces its status as Europe's leading cloud provider for AI innovation. The integration of Qdrant's fast and accurate vector database enriches our expanding suite of AI solutions. This means you can build smarter, faster AI projects with us, worry-free about performance and security."" Frédéric BARDOLLE, Lead PM AI @ Scaleway* #### Developing a Retrieval Augmented Generation (RAG) Application with Qdrant Hybrid Cloud, Scaleway, and LangChain Retrieval Augmented Generation (RAG) enhances Large Language Models (LLMs) by integrating vector search to provide precise, context-rich responses. This combination allows LLMs to access and incorporate specific data in real-time, vastly improving the quality of AI-generated content. RAG applications often rely on sensitive or proprietary internal data, emphasizing the importance of data sovereignty. Running the entire stack within your own environment becomes crucial for maintaining control over this data. Qdrant Hybrid Cloud deployed on Scaleway addresses this need perfectly, offering a secure, scalable platform that respects data sovereignty requirements while leveraging the full potential of RAG for sophisticated AI solutions. ![hybrid-cloud-scaleway-tutorial](/blog/hybrid-cloud-scaleway/hybrid-cloud-scaleway-tutorial.png) We created a tutorial that guides you through setting up and leveraging Qdrant Hybrid Cloud on Scaleway for a RAG application, providing insights into efficiently managing data within a secure, sovereign framework. It highlights practical steps to integrate vector search with LLMs, optimizing the generation of high-quality, relevant AI content, while ensuring data sovereignty is maintained throughout. [Try the Tutorial](/documentation/tutorials/rag-chatbot-scaleway/) #### The Benefits of Running Qdrant Hybrid Cloud on Scaleway Choosing Qdrant Hybrid Cloud and Scaleway for AI applications offers several key advantages: - **AI-Focused Resources:** Scaleway aims to be the cloud provider of choice for AI companies, offering the resources and infrastructure to power complex AI and machine learning workloads, helping to advance the development and deployment of AI technologies. This paired with Qdrant Hybrid Cloud provides a strong foundational platform for advanced AI applications. - **Scalable Vector Search:** Qdrant Hybrid Cloud provides a fully managed vector database that allows to effortlessly scale the setup through vertical or horizontal scaling. Deployed on Scaleway, this is a robust setup that is designed to meet the needs of businesses at every stage of growth, from startups to large enterprises, ensuring a full spectrum of solutions for various projects and workloads. - **European Roots and Focus**: With a strong presence in Europe and a commitment to supporting the European tech ecosystem, Scaleway is ideally positioned to partner with European-based companies like Qdrant, providing local expertise and infrastructure that aligns with European regulatory standards. - **Sustainability Commitment**: Scaleway leads with an eco-conscious approach, featuring adiabatic data centers that significantly reduce cooling costs and environmental impact. Scaleway prioritizes extending hardware lifecycle beyond industry norms to lessen our ecological footprint. #### Get Started in a Few Seconds Setting up Qdrant Hybrid Cloud on Scaleway is streamlined and quick, thanks to its Kubernetes-native architecture. Follow these simple three steps to launch: 1. **Activate Hybrid Cloud**: First, log into your [Qdrant Cloud account](https://cloud.qdrant.io/login) and select ‘Hybrid Cloud’ to activate. 2. **Integrate Your Clusters**: Navigate to the Hybrid Cloud settings and add your Scaleway Kubernetes clusters as a Hybrid Cloud Environment. 3. **Simplified Management**: Use the Qdrant Management Console for easy creation and oversight of your Qdrant clusters on Scaleway. For more comprehensive guidance, our documentation provides step-by-step instructions for deploying Qdrant on Scaleway. [Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/) #### Ready to Get Started? Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/). ",blog/hybrid-cloud-scaleway.md "--- draft: false title: '""Vector search and applications"" by Andrey Vasnetsov, CTO at Qdrant' preview_image: /blog/from_cms/ramsri-podcast-preview.png slug: vector-search-and-applications-record short_description: Andrey Vasnetsov, Co-founder and CTO at Qdrant has shared about vector search and applications with Learn NLP Academy.  description: Andrey Vasnetsov, Co-founder and CTO at Qdrant has shared about vector search and applications with Learn NLP Academy.  date: 2023-12-11T12:16:42.004Z author: Alyona Kavyerina featured: false tags: - vector search - webinar - news categories: - vector search - webinar - news --- Andrey Vasnetsov, Co-founder and CTO at Qdrant has shared about vector search and applications with Learn NLP Academy.  He covered the following topics: * Qdrant search engine and Quaterion similarity learning framework; * Similarity learning to multimodal settings; * Elastic search embeddings vs vector search engines; * Support for multiple embeddings; * Fundraising and VC discussions; * Vision for vector search evolution; * Finetuning for out of domain. ",blog/vector-search-and-applications-by-andrey-vasnetsov-cto-at-qdrant.md "--- title: ""IrisAgent and Qdrant: Redefining Customer Support with AI"" draft: false slug: iris-agent-qdrant short_description: Pushing the boundaries of AI in customer support description: Learn how IrisAgent leverages Qdrant for RAG to automate support, and improve resolution times, transforming customer service preview_image: /case-studies/iris/irisagent-qdrant.png date: 2024-03-06T07:45:34-08:00 author: Manuel Meyer featured: false tags: - news - blog - irisagent - customer support weight: 0 # Change this weight to change order of posts # For more guidance, see https://github.com/qdrant/landing_page?tab=readme-ov-file#blog --- Artificial intelligence is evolving customer support, offering unprecedented capabilities for automating interactions, understanding user needs, and enhancing the overall customer experience. [IrisAgent](https://irisagent.com/), founded by former Google product manager [Palak Dalal Bhatia](https://www.linkedin.com/in/palakdalal/), demonstrates the concrete impact of AI on customer support with its AI-powered customer support automation platform. Bhatia describes IrisAgent as “the system of intelligence which sits on top of existing systems of records like support tickets, engineering bugs, sales data, or product data,” with the main objective of leveraging AI and generative AI, to automatically detect the intent and tags behind customer support tickets, reply to a large number of support tickets chats improve the time to resolution and increase the deflection rate of support teams. Ultimately, IrisAgent enables support teams to more with less and be more effective in helping customers. ## The Challenge Throughout her career Bhatia noticed a lot of manual and inefficient processes in support teams paired with information silos between important functions like customer support, product management, engineering teams, and sales teams. These silos typically prevent support teams from accurately solving customers’ pain points, as they are only able to access a fraction of the internal knowledge and don’t get the relevant information and insights that other teams have. IrisAgent is addressing these challenges with AI and GenAI by generating meaningful customer experience insights about what the root cause of specific customer escalations or churn. “The platform allows support teams to gather these cross-functional insights and connect them to a single view of customer problems,” Bhatia says. Additionally, IrisAgent facilitates the automation of mundane and repetitive support processes. In the past, these tasks were difficult to automate effectively due to the limitations of early AI technologies. Support functions often depended on rudimentary solutions like legacy decision trees, which suffered from a lack of scalability and robustness, primarily relying on simplistic keyword matching. However, advancements in AI and GenAI technologies have now enabled more sophisticated and efficient automation of these support processes. ## The Solution “IrisAgent provides a very holistic product profile, as we are the operating system for support teams,” Bhatia says. The platform includes features like omni-channel customer support automation, which integrates with other parts of the business, such as engineering or sales platforms, to really understand customer escalation points. Long before the advent of technologies such as ChatGPT, IrisAgeny had already been refining and advancing their AI and ML stack. This has enabled them to develop a comprehensive range of machine learning models, including both proprietary solutions and those built on cloud technologies. Through this advancement, IrisAgent was able to finetune on public and private customer data to achieve the level of accuracy that is needed to successfully deflect and resolve customer issues at scale. ![Iris GPT info](/blog/iris-agent-qdrant/iris_gpt.png) Since IrisAgent built out a lot of their AI related processes in-house with proprietary technology, they wanted to find ways to augment these capabilities with RAG technologies and vector databases. This strategic move was aimed at abstracting much of the technical complexity, thereby simplifying the process for engineers and data scientists on the team to interact with data and develop a variety of solutions built on top of it. ![Quote from CEO of IrisAgent](/blog/iris-agent-qdrant/iris_ceo_quote.png) “We were looking at a lot of vector databases in the market and one of our core requirements was that the solution needed to be open source because we have a strong emphasis on data privacy and security,” Bhatia says. Also, performance played a key role for IrisAgent during their evaluation as Bhatia mentions: “Despite it being a relatively new project at the time we tested Qdrant, the performance was really good.” Additional evaluation criteria were the ease of ability to deployment, future maintainability, and the quality of available documentation. Ultimately, IrisAgent decided to build with Qdrant as their vector database of choice, given these reasons: * **Open Source and Flexibility**: IrisAgent required a solution that was open source, to align with their data security needs and preference for self-hosting. Qdrant's open-source nature allowed IrisAgent to deploy it on their cloud infrastructure seamlessly. * **Performance**: Early on, IrisAgent recognized Qdrant's superior performance, despite its relative newness in the market. This performance aspect was crucial for handling large volumes of data efficiently. * **Ease of Use**: Qdrant's user-friendly SDKs and compatibility with major programming languages like Go and Python made it an ideal choice for IrisAgent's engineering team. Additionally, IrisAgent values Qdrant’s the solid documentation, which is easy to follow. * **Maintainability**: IrisAgent prioritized future maintainability in their choice of Qdrant, notably valuing the robustness and efficiency Rust provides, ensuring a scalable and future-ready solution. ## Optimizing IrisAgent's AI Pipeline: The Evaluation and Integration of Qdrant IrisAgent utilizes comprehensive testing and sandbox environments, ensuring no customer data is used during the testing of new features. Initially, they deployed Qdrant in these environments to evaluate its performance, leveraging their own test data and employing Qdrant’s console and SDK features to conduct thorough data exploration and apply various filters. The primary languages used in these processes are Go, for its efficiency, and Python, for its strength in data science tasks. After the successful testing, Qdrant's outputs are now integrated into IrisAgent’s AI pipeline, enhancing a suite of proprietary AI models designed for tasks such as detecting hallucinations and similarities, and classifying customer intents. With Qdrant, IrisAgent saw significant performance and quality gains for their RAG use cases. Beyond this, IrisAgent also performs fine-tuning further in the development process. Qdrant’s emphasis on open-source technology and support for main programming languages (Go and Python) ensures ease of use and compatibility with IrisAgent’s production environment. IrisAgent is deploying Qdrant on Google Cloud in order to fully leverage Google Cloud's robust infrastructure and innovative offerings. ![Iris agent flow chart](/blog/iris-agent-qdrant/iris_agent_flow_chart.png) ## Future of IrisAgent Looking ahead, IrisAgent is committed to pushing the boundaries of AI in customer support, with ambitious plans to evolve their product further. The cornerstone of this vision is a feature that will allow support teams to leverage historical support data more effectively, by automating the generation of knowledge base content to redefine how FAQs and product documentation are created. This strategic initiative aims not just to reduce manual effort but also to enrich the self-service capabilities of users. As IrisAgent continues to refine its AI algorithms and expand its training datasets, the goal is to significantly elevate the support experience, making it more seamless and intuitive for end-users. ",blog/iris-agent-qdrant.md "--- draft: true title: ""Pienso & Qdrant: Future Proofing Generative AI for Enterprise-Level Customers"" slug: pienso-case-study short_description: Case study description: Case study preview_image: /blog/from_cms/title.webp date: 2024-01-05T15:10:57.473Z author: Author featured: false --- # Pienso & Qdrant: Future Proofing Generative AI for Enterprise-Level Customers The partnership between Pienso and Qdrant is set to revolutionize interactive deep learning, making it practical, efficient, and scalable for global customers. Pienso’s low-code platform provides a streamlined and user-friendly process for deep learning tasks. This exceptional level of convenience is augmented by Qdrant’s scalable and cost-efficient high vector computation capabilities, which enable reliable retrieval of similar vectors from high-dimensional spaces. Together, Pienso and Qdrant will empower enterprises to harness the full potential of generative AI on a large scale. By combining the technologies of both companies, organizations will be able to train their own large language models and leverage them for downstream tasks that demand data sovereignty and model autonomy. This collaboration will help customers unlock new possibilities and achieve advanced AI-driven solutions. Strengthening LLM Performance Qdrant enhances the accuracy of large language models (LLMs) by offering an alternative to relying solely on patterns identified during the training phase. By integrating with Qdrant, Pienso will empower customer LLMs with dynamic long-term storage, which will ultimately enable them to generate concrete and factual responses. Qdrant effectively preserves the extensive context windows managed by advanced LLMs, allowing for a broader analysis of the conversation or document at hand. By leveraging this extended context, LLMs can achieve a more comprehensive understanding and produce contextually relevant outputs. ## [](/case-studies/pienso/#joint-dedication-to-scalability-efficiency-and-reliability)Joint Dedication to Scalability, Efficiency and Reliability > “Every commercial generative AI use case we encounter benefits from faster training and inference, whether mining customer interactions for next best actions or sifting clinical data to speed a therapeutic through trial and patent processes.” - Birago Jones, CEO, Pienso Pienso chose Qdrant for its exceptional LLM interoperability, recognizing the potential it offers in maximizing the power of large language models and interactive deep learning for large enterprises. Qdrant excels in efficient nearest neighbor search, which is an expensive and computationally demanding task. Our ability to store and search high-dimensional vectors with remarkable performance and precision will offer a significant peace of mind to Pienso’s customers. Through intelligent indexing and partitioning techniques, Qdrant will significantly boost the speed of these searches, accelerating both training and inference processes for users. ### [](/case-studies/pienso/#scalability-preparing-for-sustained-growth-in-data-volumes)Scalability: Preparing for Sustained Growth in Data Volumes Qdrant’s distributed deployment mode plays a vital role in empowering large enterprises dealing with massive data volumes. It ensures that increasing data volumes do not hinder performance but rather enrich the model’s capabilities, making scalability a seamless process. Moreover, Qdrant is well-suited for Pienso’s enterprise customers as it operates best on bare metal infrastructure, enabling them to maintain complete control over their data sovereignty and autonomous LLM regimes. This ensures that enterprises can maintain their full span of control while leveraging the scalability and performance benefits of Qdrant’s solution. ### [](/case-studies/pienso/#efficiency-maximizing-the-customer-value-proposition)Efficiency: Maximizing the Customer Value Proposition Qdrant’s storage efficiency delivers cost savings on hardware while ensuring a responsive system even with extensive data sets. In an independent benchmark stress test, Pienso discovered that Qdrant could efficiently store 128 million documents, consuming a mere 20.4GB of storage and only 1.25GB of memory. This storage efficiency not only minimizes hardware expenses for Pienso’s customers, but also ensures optimal performance, making Qdrant an ideal solution for managing large-scale data with ease and efficiency. ### [](/case-studies/pienso/#reliability-fast-performance-in-a-secure-environment)Reliability: Fast Performance in a Secure Environment Qdrant’s utilization of Rust, coupled with its memmap storage and write-ahead logging, offers users a powerful combination of high-performance operations, robust data protection, and enhanced data safety measures. Our memmap storage feature offers Pienso fast performance comparable to in-memory storage. In the context of machine learning, where rapid data access and retrieval are crucial for training and inference tasks, this capability proves invaluable. Furthermore, our write-ahead logging (WAL), is critical to ensuring changes are logged before being applied to the database. This approach adds additional layers of data safety, further safeguarding the integrity of the stored information. > “We chose Qdrant because it’s fast to query, has a small memory footprint and allows for instantaneous setup of a new vector collection that is going to be queried. Other solutions we evaluated had long bootstrap times and also long collection initialization times {..} This partnership comes at a great time, because it allows Pienso to use Qdrant to its maximum potential, giving our customers a seamless experience while they explore and get meaningful insights about their data.” - Felipe Balduino Cassar, Senior Software Engineer, Pienso ## [](/case-studies/pienso/#whats-next)What’s Next? Pienso and Qdrant are dedicated to jointly develop the most reliable customer offering for the long term. Our partnership will deliver a combination of no-code/low-code interactive deep learning with efficient vector computation engineered for open source models and libraries. ### [](/case-studies/pienso/#to-learn-more-about-how-we-plan-on-achieving-this-join-the-founders-for-a-technical-fireside-chat-at-0930-pst-thursday-20th-july-on-discordhttpsdiscordggvnvg3fheevent1128331722270969909)To learn more about how we plan on achieving this, join the founders for a [technical fireside chat at 09:30 PST Thursday, 20th July on Discord](https://discord.gg/Vnvg3fHE?event=1128331722270969909). ![](/blog/from_cms/founderschat.png)",blog/pienso-qdrant-future-proofing-generative-ai-for-enterprise-level-customers.md "--- draft: false title: When music just doesn't match our vibe, can AI help? - Filip Makraduli | Vector Space Talks slug: human-language-ai-models short_description: Filip Makraduli discusses using AI to create personalized music recommendations based on user mood and vibe descriptions. description: Filip Makraduli discusses using human language and AI to capture music vibes, encoding text with sentence transformers, generating recommendations through vector spaces, integrating Streamlit and Spotify API, and future improvements for AI-powered music recommendations. preview_image: /blog/from_cms/filip-makraduli-cropped.png date: 2024-01-09T10:44:20.559Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - Vector Database - LLM Recommendation System --- > *""Was it possible to somehow maybe find a way to transfer this feeling that we have this vibe and get the help of AI to understand what exactly we need at that moment in terms of songs?”*\ > -- Filip Makraduli > Imagine if the recommendation system could understand spoken instructions or hummed melodies. This would greatly impact the user experience and accuracy of the recommendations. Filip Makraduli, an electrical engineering graduate from Skopje, Macedonia, expanded his academic horizons with a Master's in Biomedical Data Science from Imperial College London. Currently a part of the Digital and Technology team at Marks and Spencer (M&S), he delves into retail data science, contributing to various ML and AI projects. His expertise spans causal ML, XGBoost models, NLP, and generative AI, with a current focus on improving outfit recommendation systems. Filip is not only professionally engaged but also passionate about tech startups, entrepreneurship, and ML research, evident in his interest in Qdrant, a startup he admires. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/6a517GfyUQLuXwFRxvwtp5?si=ywXPY_1RRU-qsMt9qrRS6w), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/WIBtZa7mcCs).*** ## **Top Takeaways:** Take a look at the song vibe recommender system created by Filip Makraduli. Find out how it works! Filip discusses how AI can assist in finding the perfect songs for any mood. He takes us through his unique approach, using human language and AI models to capture the essence of a song and generate personalized recommendations. Here are 5 key things you'll learn from this video: 1. How AI can help us understand and capture the vibe and feeling of a song 2. The use of language to transfer the experience and feeling of a song 3. The role of data sets and descriptions in building unconventional song recommendation systems 4. The importance of encoding text and using sentence transformers to generate song embeddings 5. How vector spaces and cosine similarity search are used to generate song recommendations > Fun Fact: Filip actually created a Spotify playlist in real-time during the video, based on the vibe and mood Demetrios described, showing just how powerful and interactive this AI music recommendation system can be! > ## Show Notes: 01:25 Using AI to capture desired music vibes.\ 06:17 Faster and accurate model.\ 10:07 Sentence embedding model maps song descriptions.\ 14:32 Improving recommendations, user personalization in music.\ 15:49 Qdrant Python client creates user recommendations.\ 21:26 Questions about getting better embeddings for songs.\ 25:04 Contextual information for personalized walking recommendations.\ 26:00 Need predictions, voice input, and music options. ## More Quotes from Filip: *""When you log in with Spotify, you could get recommendations related to your taste on Spotify or on whatever app you listen your music on.”*\ -- Filip Makraduli *""Once the user writes a query and the query mentions, like some kind of a mood, for example, I feel happy and it's a sunny day and so on, you would get the similarity to the song that has this kind of language explanations and language intricacies in its description.”*\ -- Filip Makraduli *""I've explored Qdrant and as I said with Spotify web API there are a lot of things to be done with these specific user-created recommendations.”*\ -- Filip Makraduli ## Transcript: Demetrios: So for those who do not know, you are going to be talking to us about when the music we listen to does not match our vibe. And can we get AI to help us on that? And you're currently working as a data scientist at Marks and Spencer. I know you got some slides to share, right? So I'll let you share your screen. We can kick off the slides and then we'll have a little presentation and I'll be back on to answer some questions. And if Neil's is still around at the end, which I don't think he will be able to hang around, but we'll see, we can pull him back on and have a little discussion at the end of the. Filip Makraduli: That's. That's great. All right, cool. I'll share my screen. Demetrios: Right on. Filip Makraduli: Yeah. Demetrios: There we go. Filip Makraduli: Yeah. So I had to use this slide because it was really well done as an introductory slide. Thank you. Yeah. Thank you also for making it so. Yeah, the idea was, and kind of the inspiration with music, we all listen to it. It's part of our lives in many ways. Sometimes it's like the gym. Filip Makraduli: We're ready to go, we're all hyped up, ready to do a workout, and then we click play. But the music and the playlist we get, it's just not what exactly we're looking for at that point. Or if we try to work for a few hours and try to get concentrated and try to code for hours, we can do the same and then we click play, but it's not what we're looking for again. So my inspiration was here. Was it possible to somehow maybe find a way to transfer this feeling that we have this vibe and get the help of AI to understand what exactly we need at that moment in terms of songs. So the obvious first question is how do we even capture a vibe and feel of a song? So initially, one approach that's popular and that works quite well is basically using a data set that has a lot of features. So Spotify has one data set like this and there are many others open source ones which include different features like loudness, key tempo, different kind of details related to the acoustics, the melody and so on. And this would work. Filip Makraduli: And this is kind of a way that a lot of song recommendation systems are built. However, what I wanted to do was maybe try a different approach in a way. Try to have a more unconventional recommender system, let's say. So what I did here was I tried to concentrate just on language. So my idea was, okay, is it possible to use human language to transfer this experience, this feeling that we have, and just use that and try to maybe encapsulate these features of songs. And instead of having a data set, just have descriptions of songs or sentences that explain different aspects of a song. So, as I said, this is a bit of a less traditional approach, and it's more of kind of testing the waters, but it worked to a decent extent. So what I did was, first I created a data set where I queried a large language model. Filip Makraduli: So I tried with llama and chat GPT, both. And the idea was to ask targeted questions, for example, like, what movie character does this song make you feel like? Or what's the tempo like? So, different questions that would help us understand maybe in what situation we would listen to this song, how will it make us feel like? And so on. And the idea was, as I said, again, to only use song names as queries for this large language model. So not have the full data sets with multiple features, but just song name, and kind of use this pretrained ability of all these LLMs to get this info that I was looking for. So an example of the generated data was this. So this song called Deep Sea Creature. And we have, like, a small description of the song. So it says a heavy, dark, mysterious vibe. Filip Makraduli: It will make you feel like you're descending into the unknown and so on. So a bit of a darker choice here, but that's the general idea. So trying to maybe do a bit of prompt engineering in a way to get the right features of a song, but through human language. So that was the first step. So the next step was how to encode this text. So all of this kind of querying reminds me of sentences. And this led me to sentence transformers and sentence Bird. And the usual issue with kind of doing this sentence similarity in the past was this, what I have highlighted here. Filip Makraduli: So this is actually a quote from a paper that Nils published a few years ago. So, basically, the way that this similarity was done was using cross encoders in the past, and that worked well, but it was really slow and unscalable. So Nils and his colleague created this kind of model, which helped scale this and make this a lot quicker, but also keep a lot of the accuracy. So Bert and Roberta were used, but they were not, as I said, quite scalable or useful for larger applications. So that's how sentence Bert was created. So the idea here was that there would be, like, a Siamese network that would train the model so that there could be, like, two bird models, and then the training would be done using this like zero, one and two tags, where kind of the sentences would be compared, whether there is entailment, neutrality or contradiction. So how similar these sentences are to each other. And by training a model like this and doing mean pooling, in the end, the model performed quite well and was able to kind of encapsulate this language intricacies of sentences. Filip Makraduli: So I decided to use and try out sentence transformers for my use case, and that was the encoding bit. So we have the model, we encode the text, and we have the embedding. So now the question is, how do we actually generate the recommendations? How is the similarity performed? So the similarity was done using vector spaces and cosine similarity search here. There were multiple ways of doing this. First, I tried things with a flat index and I tried Qdrant and I tried FIS. So I've worked with both. And with the flat index, it was good. It works well. Filip Makraduli: It's quick for small number of examples, small number of songs, but there is an issue when scaling. So once the vector indices get bigger, there might be a problem. So one popular kind of index architecture is this one here on the left. So hierarchical, navigable, small world graphs. So the idea here is that you wouldn't have to kind of go through all of the examples, but search through the examples in different layers, so that the search for similarities quicker. And this is a really popular approach. And Qdrant have done a really good customizable version of this, which is quite useful, I think, for very larger scales of application. And this graph here illustrates kind of well what the idea is. Filip Makraduli: So there is the sentence in this example. It's like a stripped striped blue shirt made from cotton, and then there is the network or the encoder. So in my case, this sentence is the song description, the neural network is the sentence transformer in my case. And then this embeddings are generated, which are then mapped into this vector space, and then this vector space is queryed and the cosine similarity is found, and the recommendations are generated in this way, so that once the user writes a query and the query mentions, like some kind of a mood, for example, I feel happy and it's a sunny day and so on, you would get the similarity to the song that has this kind of language explanations and language intricacies in its description. And there are a lot of ways of doing this, as Nils mentioned, especially with different embedding models and doing context related search. So this is an interesting area for improvement, even in my use case. And the quick screenshot looks like this. So for example, the mood that the user wrote, it's a bit rainy, but I feel like I need a long walk in London. Filip Makraduli: And these are the top five suggested songs. This is also available on Streamlit. In the end I'll share links of everything and also after that you can click create a Spotify playlist and this playlist will be saved in your Spotify account. As you can see here, it says playlist generated earlier today. So yeah, I tried this, it worked. I will try live demo bit later. Hopefully it works again. But this is in beta currently so you won't be able to try it at home because Spotify needs to approve my app first and go through that process so that then I can do this part fully. Filip Makraduli: And the front end bit, as I mentioned, was done in Streamlit. So why Streamlit? I like the caching bit. So of course this general part where it's really easy and quick to do a lot of data dashboarding and data applications to test out models, that's quite nice. But this caching options that they have help a lot with like loading models from hugging face or if you're loading models from somewhere, or if you're loading different databases. So if you're combining models and data. In my case I had a binary file of the index and also the model. So it was quite useful and quick to do these things and to be able to try things out quickly. So this is kind of the step by step outline of everything I've mentioned and the whole project. Filip Makraduli: So the first step is encoding this descriptions into embeddings. Then this vector embeddings are mapped into a vector space. Examples here with how I've used Qdrant for this, which was quite nice. I feel like the developer experience is really good for scalable purposes. It's really useful. So if the number of songs keep increasing it's quite good. And the query and more similar embeddings. The front is done with Streamlit and the Spotify API to save the playlists on the Spotify account. Filip Makraduli: All of these steps can be improved and tweaked in certain ways and I will talk a bit about that too. So a lot more to be done. So now there are 2000 songs, but as I've mentioned, in this vector space, the more songs that are there, the more representative this recommendations would be. So this is something I'm currently exploring and doing, generating, filtering and user specific personalization. So once maybe you log in with Spotify, you could get recommendations related to your taste on Spotify or on whatever app you listen your music on. And referring to the talk that Niels had a lot of potential for better models and embeddings and embedding models. So also the contrastive learning bits or the contents aware querying, that could be useful too. And a vector database because currently I'm using a binary file. Filip Makraduli: But I've explored Qdrant and as I said with Spotify web API there are a lot of things to be done with this specific user created recommendations. So with Qdrant, the Python client is quite good. The getting started helps a lot. So I wrote a bit of code. I think for production use cases it's really great. So for my use case here, as you can see on the right, I just read the text from a column and then I encode with the model. So the sentence transformer is the model that I encode with. And there is this collections that they're so called in Qdrant that are kind of like this vector spaces that you can create and you can also do different things with them, which I think one of the more helpful ones is the payload one and the batch one. Filip Makraduli: So you can batch things in terms of how many vectors will go to the server per single request. And also the payload helps if you want to add extra context. So maybe I want to filter by genres. I can add useful information to the vector embedding. So this is quite a cool feature that I'm planning on using. And another potential way of doing this and kind of combining things is using audio waves too, lyrics and descriptions and combining all of this as embeddings and then going through the similar process. So that's something that I'm looking to do also. And yeah, you also might have noticed that I'm a data scientist at Marks and Spencer and I just wanted to say that there are a lot of interesting ML and data related stuff going on there. Filip Makraduli: So a lot of teams that work on very interesting use cases, like in recommender systems, personalization of offers different stuff about forecasting. There is a lot going on with causal ML and yeah, the digital and tech department is quite well developed and I think it's a fun place to explore if you're interested in retail data science use cases. So yeah, thank you for your attention. I'll try the demo. So this is the QR code with the repo and all the useful links. You can contact me on LinkedIn. This is the screenshot of the repo and you have the link in the QR code. The name of the repo is song Vibe. Filip Makraduli: A friend of mine said that that wasn't a great name of a repo. Maybe he was right. But yeah, here we are. I'll just try to do the demo quickly and then we can step back to the. Demetrios: I love dude, I got to say, when you said you can just automatically create the Spotify playlist, that made me. Filip Makraduli: Go like, oh, yes, let's see if it works locally. Do you have any suggestion what mood are you in? Demetrios: I was hoping you would ask me, man. I am in a bit of an esoteric mood and I want female kind of like Gaelic voices, but not Gaelic music, just Gaelic voices and lots of harmonies, heavy harmonies. Filip Makraduli: Also. Demetrios: You didn't realize you're asking a musician. Let's see what we got. Filip Makraduli: Let's see if this works in 2000 songs. Okay, so these are the results. Okay, yeah, you'd have to playlist. Let's see. Demetrios: Yeah, can you make the playlist public and then I'll just go find it right now. Here we go. Filip Makraduli: Let's see. Okay, yeah, open in. Spotify playlist created now. Okay, cool. I can also rename it. What do you want to name the playlist? Demetrios: Esoteric Gaelic Harmonies. That's what I think we got to go with AI. Well, I mean, maybe we could just put maybe in parenthes. Filip Makraduli: Yeah. So I'll share this later with you. Excellent. But yeah, basically that was it. Demetrios: It worked. Ten out of ten for it. Working. That is also very cool. Filip Makraduli: Live demo working. That's good. So now doing the infinite screen, which I have stopped now. Demetrios: Yeah, classic, dude. Well, I've got some questions coming through and the chat has been active too. So I'll ask a few of the questions in the chat for a minute. But before I ask those questions in the chat, one thing that I was thinking about when you were talking about how to, like, the next step is getting better embeddings. And so was there a reason that you just went with the song title and then did you check, you said there was 2000 songs or how many songs? So did you do anything to check the output of the descriptions of these songs? Filip Makraduli: Yeah, so I didn't do like a systematic testing in terms of like, oh, yeah, the output is structured in this way. But yeah, I checked it roughly went through a few songs and they seemed like, I mean, of course you could add more info, but they seemed okay. So I was like, okay, let me try kind of whether this works. And, yeah, the descriptions were nice. Demetrios: Awesome. Yeah. So that kind of goes into one of the questions that mornie's asking. Let me see. Are you going to team this up with other methods, like collaborative filtering, content embeddings and stuff like that. Filip Makraduli: Yeah, I was thinking about this different kind of styles, but I feel like I want to first try different things related to embeddings and language just because I feel like with the other things, with the other ways of doing these recommendations, other companies and other solutions have done a really great job there. So I wanted to try something different to see whether that could work as well or maybe to a similar degree. So that's why I went towards this approach rather than collaborative filtering. Demetrios: Yeah, it kind of felt like you wanted to test the boundaries and see if something like this, which seems a little far fetched, is actually possible. And it seems like I would give it a yes. Filip Makraduli: It wasn't that far fetched, actually, once you see it working. Demetrios: Yeah, totally. Another question is coming through is asking, is it possible to merge the current mood so the vibe that you're looking for with your musical preferences? Filip Makraduli: Yeah. So I was thinking of that when we're doing this, the playlist creation that I did for you, there is a way to get your top ten songs or your other playlists and so on from Spotify. So my idea of kind of capturing this added element was through Spotify like that. But of course it could be that you could enter that in your own profile in the app or so on. So one idea would be how would you capture that preferences of the user once you have the user there. So you'd need some data of the preferences of the user. So that's the problem. But of course it is possible. Demetrios: You know what I'd lOve? Like in your example, you put that, I feel like going for a walk or it's raining, but I still feel like going through for a long walk in London. Right. You could probably just get that information from me, like what is the weather around me, where am I located? All that kind of stuff. So I don't have to give you that context. You just add those kind of contextual things, especially weather. And I get the feeling that that would be another unlock too. Unless you're like, you are the exact opposite of a sunny day on a sunny day. And it's like, why does it keep playing this happy music? I told you I was sad. Filip Makraduli: Yeah. You're predicting not just the songs, but the mood also. Demetrios: Yeah, totally. Filip Makraduli: You don't have to type anything, just open the website and you get everything. Demetrios: Exactly. Yeah. Give me a few predictions just right off the bat and then maybe later we can figure it out. The other thing that I was thinking, could be a nice add on. I mean, the infinite feature request, I don't think you realized you were going to get so many feature requests from me, but let it be known that if you come on here and I like your app, you'll probably get some feature requests from me. So I was thinking about how it would be great if I could just talk to it instead of typing it in, right? And I could just explain my mood or explain my feeling and even top that off with a few melodies that are going on in my head, or a few singers or songwriters or songs that I really want, something like this, but not this song, and then also add that kind of thing, do the. Filip Makraduli: Humming sound a bit and you play your melody and then you get. Demetrios: Except I hum out of tune, so I don't think that would work very well. I get a lot of random songs, that's for sure. It would probably be just about as accurate as your recommendation engine is right now. Yeah. Well, this is awesome, man. I really appreciate you coming on here. I'm just going to make sure that there's no other questions that came through the chat. No, looks like we're good. Demetrios: And for everyone out there that is listening, if you want to come on and talk about anything cool that you have built with Qdrant, or how you're using Qdrant, or different ways that you would like Qdrant to be better, or things that you enjoy, whatever it may be, we'd love to have you on here. And I think that is it. We're going to call it a day for the vector space talks, number two. We'll see you all later. Philip, thanks so much for coming on. It's.",blog/when-music-just-doesnt-match-our-vibe-can-ai-help-filip-makraduli-vector-space-talks-003.md "--- draft: false title: ""Kern AI & Qdrant: Precision AI Solutions for Finance and Insurance"" short_description: ""Transforming customer service in finance and insurance with vector search-based retrieval.

"" description: ""Revolutionizing customer service in finance and insurance by leveraging vector search for faster responses and improved operational efficiency."" preview_image: /blog/case-study-kern/preview.png social_preview_image: /blog/case-study-kern/preview.png date: 2024-08-28T00:02:00Z author: Qdrant featured: false tags: - Kern - Vector Search - AI-Driven Insights - Johannes Hötter - Data Analysis - Markel Insurance --- ![kern-case-study](/blog/case-study-kern/kern-case-study.png) ## About Kern AI [Kern AI](https://kern.ai/) specializes in data-centric AI. Originally an AI consulting firm, the team led by Co-Founder and CEO Johannes Hötter quickly realized that developers spend 80% of their time reviewing data instead of focusing on model development. This inefficiency significantly reduces the speed of development and adoption of AI. To tackle this challenge, Kern AI developed a low-code platform that enables developers to quickly analyze their datasets and identify outliers using vector search. This innovation led to enhanced data accuracy and streamlined workflows for the rapid deployment of AI applications. With the rise of ChatGPT, Kern AI expanded its platform to support the quick development of accurate and secure Generative AI by integrating large language models (LLMs) like GPT, tailoring solutions specifically for the financial services sector. Kern AI’s solution enhances the reliability of any LLM by modeling and integrating company data in a way LLMs can understand, offering a platform with leading data modeling capabilities. ## The Challenge Kern AI has partnered with leading insurers to efficiently streamline the process of managing complex customer queries within customer service teams, reducing the time and effort required. Customer inquiries are often complex, and support teams spend significant time locating and interpreting relevant sections in insurance contracts. This process leads to delays in responses and can negatively impact customer satisfaction. To tackle this, Kern AI developed an internal AI chatbot for first-level support teams. Their platform helps data science teams improve data foundations to expedite application production. By using embeddings to identify relevant data points and outliers, Kern AI ensures more efficient and accurate data handling. To avoid being restricted to a single embedding model, they experimented with various models, including sentiment embeddings, leading them to discover Qdrant. ![kern-user-interface](/blog/case-study-kern/kern-user-interface.png) *Kern AI Refinery, is an open-source tool to scale, assess and maintain natural language data.* The impact of their solution is evident in the case of [Markel Insurance SE](https://www.markel.com/), which reduced the average response times from five minutes to under 30 seconds per customer query. This change significantly enhanced customer experience and reduced the support team's workload. Johannes Hötter notes, ""Our solution has revolutionized how first-level support operates in the insurance industry, drastically improving efficiency and customer satisfaction."" ## The Solution Kern AI discovered Qdrant and was impressed by its interactive Discord community, which highlighted the active support and continuous improvements of the platform. Qdrant was the first vector database the team used, and after testing other alternatives, they chose Qdrant for several reasons: - **Multi-vector Storage**: This feature was crucial as it allowed the team to store and manage different search indexes. Given that no single embedding fits all use cases, this capability brought essential diversity to their embeddings, enabling more flexible and robust data handling. - **Easy Setup**: Qdrant's straightforward setup process enabled Kern AI to quickly integrate and start utilizing the database without extensive overhead, which was critical for maintaining development momentum. - **Open Source**: The open-source nature of Qdrant aligned with Kern AI's own product development philosophy. This allowed for greater customization and integration into their existing open-source projects. - **Rapid Progress**: Qdrant's swift advancements and frequent updates ensured that Kern AI could rely on continuous improvements and cutting-edge features to keep their solutions competitive. - **Multi-vector Search**: Allowed Kern AI to perform complex queries across different embeddings simultaneously, enhancing the depth and accuracy of their search results. - **Hybrid Search/Filters**: Enabled the combination of traditional keyword searches with vector searches, allowing for more nuanced and precise data retrieval. Kern AI uses Qdrant's open-source, on-premise solution for both their open-source project and their commercial end-to-end framework. This framework, focused on the financial and insurance markets, is similar to LangChain or LlamaIndex but tailored to the industry-specific needs. ![kern-data-retrieval](/blog/case-study-kern/kern-data-retrieval.png) *Configuring data retrieval in Kern AI: Fine-tuning search inputs and metadata for optimized information extraction.* ## The Results Kern AI's primary use case focuses on enhancing customer service with extreme precision. Leveraging Qdrant's advanced vector search capabilities, Kern AI consistently maintains hallucination rates under 1%. This exceptional accuracy allows them to build the most precise RAG (Retrieval-Augmented Generation) chatbot for financial services. Key Achievements: - **<1% Hallucination Rate**: Ensures the highest level of accuracy and reliability in their chatbot solutions for the financial and insurance sector. - **Reduced Customer Service Response Times**: Using Kern AI's solution, Markel Insurance SE reduced response times from five minutes to under 30 seconds, significantly improving customer experience and operational efficiency. By utilizing Qdrant, Kern AI effectively supports various use cases in financial services, such as: - **Claims Management**: Streamlining the claims process by quickly identifying relevant data points. - **Similarity Search**: Enhancing incident handling by finding similar cases to improve decision-making quality. ## Outlook Kern AI plans to expand its use of Qdrant to support both brownfield and greenfield use cases across the financial and insurance industry.",blog/case-study-kern.md "--- title: ""Qdrant vs Pinecone: Vector Databases for AI Apps"" draft: false short_description: ""Highlighting performance, features, and suitability for various use cases."" description: ""In this detailed Qdrant vs Pinecone comparison, we share the top features to determine the best vector database for your AI applications."" preview_image: /blog/comparing-qdrant-vs-pinecone-vector-databases/social_preview.png social_preview_image: /blog/comparing-qdrant-vs-pinecone-vector-databases/social_preview.png aliases: /documentation/overview/qdrant-alternatives/ date: 2024-02-25T00:00:00-08:00 author: Qdrant Team featured: false tags: - vector search - role based access control - byte vectors - binary vectors - quantization - new features --- # Qdrant vs Pinecone: An Analysis of Vector Databases for AI Applications Data forms the foundation upon which AI applications are built. Data can exist in both structured and unstructured formats. Structured data typically has well-defined schemas or inherent relationships. However, unstructured data, such as text, image, audio, or video, must first be converted into numerical representations known as [vector embeddings](https://qdrant.tech/articles/what-are-embeddings/). These embeddings encapsulate the semantic meaning or features of unstructured data and are in the form of high-dimensional vectors. Traditional databases, while effective at handling structured data, fall short when dealing with high-dimensional unstructured data, which are increasingly the focal point of modern AI applications. Key reasons include: - **Indexing Limitations**: Database indexing methods like B-Trees or hash indexes, typically used in relational databases, are inefficient for high-dimensional data and show poor query performance. - **Curse of Dimensionality**: As dimensions increase, data points become sparse, and distance metrics like Euclidean distance lose their effectiveness, leading to poor search query performance. - **Lack of Specialized Algorithms**: Traditional databases do not incorporate advanced algorithms designed to handle high-dimensional data, resulting in slow query processing times. - **Scalability Challenges**: Managing and querying high-dimensional [vectors](https://qdrant.tech/documentation/concepts/vectors/) require optimized data structures, which traditional databases are not built to handle. - **Storage Inefficiency**: Traditional databases are not optimized for efficiently storing large volumes of high-dimensional data, facing significant challenges in managing space complexity and [retrieval efficiency](https://qdrant.tech/documentation/tutorials/retrieval-quality/). Vector databases address these challenges by efficiently storing and querying high-dimensional vectors. They offer features such as high-dimensional vector storage and retrieval, efficient similarity search, sophisticated indexing algorithms, advanced compression techniques, and integration with various machine learning frameworks. Due to their capabilities, vector databases are now a cornerstone of modern AI and are becoming pivotal in building applications that leverage similarity search, recommendation systems, natural language processing, computer vision, image recognition, speech recognition, and more. Over the past few years, several vector database solutions have emerged – the two leading ones being Qdrant and Pinecone, among others. Both are powerful vector database solutions with unique strengths. However, they differ greatly in their principles and approach, and the capabilities they offer to developers. In this article, we’ll examine both solutions and discuss the factors you need to consider when choosing amongst the two. Let’s dive in! ## Exploring Qdrant Vector Database: Features and Capabilities Qdrant is a high-performance, open-source vector similarity search engine built with [Rust](https://qdrant.tech/articles/why-rust/), designed to handle the demands of large-scale AI applications with exceptional speed and reliability. Founded in 2021, Qdrant's mission is to ""build the most efficient, scalable, and high-performance vector database in the market."" This mission is reflected in its architecture and feature set. Qdrant is highly scalable and performant: it can handle billions of vectors efficiently and with [minimal latency](https://qdrant.tech/benchmarks/). Its advanced vector indexing, search, and retrieval capabilities make it ideal for applications that require fast and accurate search results. It supports vertical and horizontal scaling, advanced compression techniques, highly flexible deployment options – including cloud-native, [hybrid cloud](https://qdrant.tech/documentation/hybrid-cloud/), and private cloud solutions – and powerful security features. ### Key Features of Qdrant Vector Database - **Advanced Similarity Search:** Qdrant supports various similarity [search](https://qdrant.tech/documentation/concepts/search/) metrics like dot product, cosine similarity, Euclidean distance, and Manhattan distance. You can store additional information along with vectors, known as [payload](https://qdrant.tech/documentation/concepts/payload/) in Qdrant terminology. A payload is any JSON formatted data. - **Built Using Rust:** Qdrant is built with Rust, and leverages its performance and efficiency. Rust is famed for its [memory safety](https://arxiv.org/abs/2206.05503) without the overhead of a garbage collector, and rivals C and C++ in speed. - **Scaling and Multitenancy**: Qdrant supports both vertical and horizontal scaling and uses the Raft consensus protocol for [distributed deployments](https://qdrant.tech/documentation/guides/distributed_deployment/). Developers can run Qdrant clusters with replicas and shards, and seamlessly scale to handle large datasets. Qdrant also supports [multitenancy](https://qdrant.tech/documentation/guides/multiple-partitions/) where developers can create single collections and partition them using payload. - **Payload Indexing and Filtering:** Just as Qdrant allows attaching any JSON payload to vectors, it also supports payload indexing and [filtering](https://qdrant.tech/documentation/concepts/filtering/) with a wide range of data types and query conditions, including keyword matching, full-text filtering, numerical ranges, nested object filters, and [geo](https://qdrant.tech/documentation/concepts/filtering/#geo)filtering. - **Hybrid Search with Sparse Vectors:** Qdrant supports both dense and [sparse vectors](https://qdrant.tech/articles/sparse-vectors/), thereby enabling hybrid search capabilities. Sparse vectors are numerical representations of data where most of the elements are zero. Developers can combine search results from dense and sparse vectors, where sparse vectors ensure that results containing the specific keywords are returned and dense vectors identify semantically similar results. - **Built-In Vector Quantization:** Qdrant offers three different [quantization](https://qdrant.tech/documentation/guides/quantization/) options to developers to optimize resource usage. Scalar quantization balances accuracy, speed, and compression by converting 32-bit floats to 8-bit integers. Binary quantization, the fastest method, significantly reduces memory usage. Product quantization offers the highest compression, and is perfect for memory-constrained scenarios. - **Flexible Deployment Options:** Qdrant offers a range of deployment options. Developers can easily set up Qdrant (or Qdrant cluster) [locally](https://qdrant.tech/documentation/quick-start/#download-and-run) using Docker for free. [Qdrant Cloud](https://qdrant.tech/cloud/), on the other hand, is a scalable, managed solution that provides easy access with flexible pricing. Additionally, Qdrant offers [Hybrid Cloud](https://qdrant.tech/hybrid-cloud/) which integrates Kubernetes clusters from cloud, on-premises, or edge, into an enterprise-grade managed service. - **Security through API Keys, JWT and RBAC:** Qdrant offers developers various ways to [secure](https://qdrant.tech/documentation/guides/security/) their instances. For simple authentication, developers can use API keys (including Read Only API keys). For more granular access control, it offers JSON Web Tokens (JWT) and the ability to build Role-Based Access Control (RBAC). TLS can be enabled to secure connections. Qdrant is also [SOC 2 Type II](https://qdrant.tech/blog/qdrant-soc2-type2-audit/) certified. Additionally, Qdrant integrates seamlessly with popular machine learning frameworks such as [LangChain](https://qdrant.tech/blog/using-qdrant-and-langchain/), LlamaIndex, and Haystack; and Qdrant Hybrid Cloud integrates seamlessly with AWS, DigitalOcean, Google Cloud, Linode, Oracle Cloud, OpenShift, and Azure, among others. By focusing on performance, scalability and efficiency, Qdrant has positioned itself as a leading solution for enterprise-grade vector similarity search, capable of meeting the growing demands of modern AI applications. However, how does it compare with Pinecone? Let’s take a look. ## Exploring Pinecone Vector Database: Key Features and Capabilities An alternative to Qdrant, Pinecone provides a fully managed vector database that abstracts the complexities of infrastructure and scaling. The company’s founding principle, when it started in 2019, was to make Pinecone “accessible to engineering teams of all sizes and levels of AI expertise.” Similarly to Qdrant, Pinecone offers advanced vector search and retrieval capabilities. There are two different ways you can use Pinecone: using its serverless architecture or its pod architecture. Pinecone also supports advanced similarity search metrics such as dot product, Euclidean distance, and cosine similarity. Using its pod architecture, you can leverage horizontal or vertical scaling. Finally, Pinecone offers privacy and security features such as Role-Based Access Control (RBAC) and end-to-end encryption, including encryption in transit and at rest. ### Key Features of Pinecone Vector Database - **Fully Managed Service:** Pinecone offers a fully managed SaaS-only service. It handles the complexities of infrastructure management such as scaling, performance optimization, and maintenance. Pinecone is designed for developers who want to focus on building AI applications without worrying about the underlying database infrastructure. - **Serverless and Pod Architecture:** Pinecone offers two different architecture options to run their vector database - the serverless architecture and the pod architecture. Serverless architecture runs as a managed service on the AWS cloud platform, and allows automatic scaling based on workload. Pod architecture, on the other hand, provides pre-configured hardware units (pods) for hosting and executing services, and supports horizontal and vertical scaling. Pods can be run on AWS, GCP, or Azure. - **Advanced Similarity Search:** Pinecone supports three different similarity search metrics – dot product, Euclidean distance, and cosine similarity. It currently does not support Manhattan distance metric. - **Privacy and Security Features:** Pinecone offers Role-Based Access Control (RBAC), end-to-end encryption, and compliance with SOC 2 Type II and GDPR. Pinecone allows for the creation of “organization”, which, in turn, has “projects” and “members” with single sign-on (SSO) and access control. - **Hybrid Search and Sparse Vectors**: Pinecone supports both sparse and dense vectors, and allows hybrid search. This gives developers the ability to combine semantic and keyword search in a single query. - **Metadata Filtering**: Pinecone allows attaching key-value metadata to vectors in an index, which can later be queried. Semantic search using metadata filters retrieve exactly the results that match the filters. Pinecone’s fully managed service makes it a compelling choice for developers who’re looking for a vector database that comes without the headache of infrastructure management. ## Pinecone vs Qdrant: Key Differences and Use Cases Qdrant and Pinecone are both robust vector database solutions, but they differ significantly in their design philosophy, deployment options, and technical capabilities. Qdrant is an open-source vector database that gives control to the developer. It can be run locally, on-prem, in the cloud, or as a managed service, and it even offers a hybrid cloud option for enterprises. This makes Qdrant suitable for a wide range of environments, from development to enterprise settings. It supports multiple programming languages and offers advanced features like customizable distance metrics, payload filtering, and [integration with popular AI frameworks](https://qdrant.tech/documentation/frameworks/). Pinecone, on the other hand, is a fully managed, SaaS-only solution designed to abstract the complexities of infrastructure management. It provides a serverless architecture for automatic scaling and a pod architecture for resource customization. Pinecone focuses on ease of use and high performance, offering built-in security measures, compliance certifications, and a user-friendly API. However, it has some limitations in terms of metadata handling and flexibility compared to Qdrant. | Aspect | Qdrant | Pinecone | | ------------------------- | ---------------------------------------------------------------------- | -------------------------------------------------- | | Deployment Modes | Local, on-premises, cloud | SaaS-only | | Supported Languages | Python, JavaScript/TypeScript, Rust, Go, Java | Python, JavaScript/TypeScript, Java, Go | | Similarity Search Metrics | Dot Product, Cosine Similarity, Euclidean Distance, Manhattan Distance | Dot Product, Cosine Similarity, Euclidean Distance | | Hybrid Search | Highly customizable Hybrid search by combining Sparse and Dense Vectors, with support for separate indices within the same collection | Supports Hybrid search with a single sparse-dense index | | Vector Payload | Accepts any JSON object as payload, supports NULL values, geolocation, and multiple vectors per point | Flat metadata structure, does not support NULL values, geolocation, or multiple vectors per point | | Scalability | Vertical and horizontal scaling, distributed deployment with Raft consensus | Serverless architecture and pod architecture for horizontal and vertical scaling | | Performance | Efficient indexing, low latency, high throughput, customizable distance metrics | High throughput, low latency, gRPC client for higher upsert speeds | | Security | Flexible, environment-specific configurations, API key authentication in Qdrant Cloud, JWT and RBAC, SOC 2 Type II certification | Built-in RBAC, end-to-end encryption, SOC 2 Type II certification | ## Choosing the Right Vector Database: Factors to Consider When choosing between Qdrant and Pinecone, you need to consider some key factors that may impact your project long-term. Below are some primary considerations to help guide your decision: ### 1. Deployment Flexibility **Qdrant** offers multiple deployment options, including a local Docker node or cluster, Qdrant Cloud, and Hybrid Cloud. This allows you to choose an environment that best suits your project. You can start with a local Docker node for development, then add nodes to your cluster, and later switch to a Hybrid Cloud solution. **Pinecone**, on the other hand, is a fully managed SaaS solution. To use Pinecone, you connect your development environment to its cloud service. It abstracts the complexities of infrastructure management, making it easier to deploy, but it is also less flexible in terms of deployment options compared to Qdrant. ### 2. Scalability Requirements **Qdrant** supports both vertical and horizontal scaling and is suitable for deployments of all scales. You can run it as a single Docker node, a large cluster, or a Hybrid cloud, depending on the size of your dataset. Qdrant’s architecture allows for distributed deployment with replicas and shards, and scales extremely well to billions of vectors with minimal latency. **Pinecone** provides a serverless architecture and a pod architecture that automatically scales based on workload. Serverless architecture removes the need for any manual intervention, whereas pod architecture provides a bit more control. Since Pinecone is a managed SaaS-only solution, your application’s scalability is tied to both Pinecone's service and the underlying cloud provider in use. ### 3. Performance and Throughput **Qdrant** excels in providing different performance profiles tailored to specific use cases. It offers efficient vector and payload indexing, low-latency queries, optimizers, and high throughput, along with multiple options for quantization to further optimize performance. **Pinecone** recommends increasing the number of replicas to boost the throughput of pod-based indexes. For serverless indexes, Pinecone automatically handles scaling and throughput. To decrease latency, Pinecone suggests using namespaces to partition records within a single index. However, since Pinecone is a managed SaaS-only solution, developer control over performance and throughput is limited. ### 4. Security Considerations **Qdrant** allows for tailored security configurations specific to your deployment environment. It supports API keys (including read-only API keys), JWT authentication, and TLS encryption for connections. Developers can build Role-Based Access Control (RBAC) according to their application needs in a completely custom manner. Additionally, Qdrant's deployment flexibility allows organizations that need to adhere to stringent data laws to deploy it within their infrastructure, ensuring compliance with data sovereignty regulations. **Pinecone** provides comprehensive built-in security features in its managed SaaS solution, including Role-Based Access Control (RBAC) and end-to-end encryption. Its compliance with SOC 2 Type II and GDPR-readiness makes it a good choice for applications requiring standardized security measures. ### 5. Pricing **Qdrant** can be self-hosted locally (single node or a cluster) with a single Docker command. With its SaaS option, it offers a free tier in Qdrant Cloud sufficient for around 1M 768-dimensional vectors, without any limitation on the number of collections it is used for. This allows developers to build multiple demos without limitations. For more pricing information, check [here](https://qdrant.tech/pricing/). **Pinecone** cannot be self-hosted, and signing up for the SaaS solution is the only option. Pinecone has a free tier that supports approximately 300K 1536-dimensional embeddings. For Pinecone’s pricing details, check their pricing page. ### Qdrant vs Pinecone: Complete Summary The choice between Qdrant and Pinecone hinges on your specific needs: - **Qdrant** is ideal for organizations that require flexible deployment options, extensive scalability, and customization. It is also suitable for projects needing deep integration with existing security infrastructure and those looking for a cost-effective, self-hosted solution. - **Pinecone** is suitable for teams seeking a fully managed solution with robust built-in security features and standardized compliance. It is suitable for cloud-native applications and dynamic environments where automatic scaling and low operational overhead are critical. By carefully considering these factors, you can select the vector database that best aligns with your technical requirements and strategic goals. ## Choosing the Best Vector Database for Your AI Application Selecting the best vector database for your AI project depends on several factors, including your deployment preferences, scalability needs, performance requirements, and security considerations. - **Choose Qdrant if**: - You require flexible deployment options (local, on-premises, managed SaaS solution, or a Hybrid Cloud). - You need extensive customization and control over your vector database. - You project needs to adhere to data security and data sovereignty laws specific to your geography - Your project would benefit from advanced search capabilities, including complex payload filtering and geolocation support. - Cost efficiency and the ability to self-host are significant considerations. - **Choose Pinecone if**: - You prefer a fully managed SaaS solution that abstracts the complexities of infrastructure management. - You need a serverless architecture that automatically adjusts to varying workloads. - Built-in security features and compliance certifications (SOC 2 Type II, GDPR) are sufficient for your application. - You want to build your project with minimal operational overhead. For maximum control, security, and cost-efficiency, choose Qdrant. It offers flexible deployment options, customizability, and advanced search features, and is ideal for building data sovereign AI applications. However, if you prioritize ease of use and automatic scaling with built-in security, Pinecone's fully managed SaaS solution with a serverless architecture is the way to go. ## Next Steps Qdrant is one of the leading Pinecone alternatives in the market. For developers who seek control of their vector database, Qdrant offers the highest level of customization, flexible deployment options, and advanced security features. To get started with Qdrant, explore our [documentation](https://qdrant.tech/documentation/), hop on to our [Discord](https://qdrant.to/discord) channel, sign up for [Qdrant cloud](https://cloud.qdrant.io/) (or [Hybrid cloud](https://qdrant.tech/hybrid-cloud/)), or [get in touch](https://qdrant.tech/contact-us/) with us today. References: - [Pinecone Documentation](https://docs.pinecone.io/) - [Qdrant Documentation](https://qdrant.tech/documentation/) - If you aren't ready yet, [try out Qdrant locally](/documentation/quick-start/) or sign up for [Qdrant Cloud](https://cloud.qdrant.io/). - For more basic information on Qdrant read our [Overview](/documentation/overview/) section or learn more about Qdrant Cloud's [Free Tier](/documentation/cloud/). - If ready to migrate, please consult our [Comprehensive Guide](https://github.com/NirantK/qdrant_tools) for further details on migration steps. ",blog/comparing-qdrant-vs-pinecone-vector-databases.md "--- draft: true title: Neural Search Tutorial slug: neural-search-tutorial short_description: Neural Search Tutorial description: Step-by-step guide on how to build a neural search service. preview_image: /blog/from_cms/1_vghoj7gujfjazpdmm9ebxa.webp date: 2024-01-05T14:09:57.544Z author: Andrey Vasnetsov featured: false tags: [] --- Step-by-step guide on how to build a neural search service. ![](/blog/from_cms/1_yoyuyv4zrz09skc8r6_lta.webp ""How to build a neural search service with BERT + Qdrant + FastAPI"") Information retrieval technology is one of the main technologies that enabled the modern Internet to exist. These days, search technology is the heart of a variety of applications. From web-pages search to product recommendations. For many years, this technology didn’t get much change until neural networks came into play. In this tutorial we are going to find answers to these questions: * What is the difference between regular and neural search? * What neural networks could be used for search? * In what tasks is neural network search useful? * How to build and deploy own neural search service step-by-step? **What is neural search?** A regular full-text search, such as Google’s, consists of searching for keywords inside a document. For this reason, the algorithm can not take into account the real meaning of the query and documents. Many documents that might be of interest to the user are not found because they use different wording. Neural search tries to solve exactly this problem — it attempts to enable searches not by keywords but by meaning. To achieve this, the search works in 2 steps. In the first step, a specially trained neural network encoder converts the query and the searched objects into a vector representation called *embeddings*. The encoder must be trained so that similar objects, such as texts with the same meaning or alike pictures get a close vector representation. ![](/blog/from_cms/1_vghoj7gujfjazpdmm9ebxa.webp ""Neural encoder places cats closer together"") Having this vector representation, it is easy to understand what the second step should be. To find documents similar to the query you now just need to find the nearest vectors. The most convenient way to determine the distance between two vectors is to calculate the cosine distance. The usual Euclidean distance can also be used, but it is not so efficient due to the [curse of dimensionality](https://en.wikipedia.org/wiki/Curse_of_dimensionality). **Which model could be used?** It is ideal to use a model specially trained to determine the closeness of meanings. For example, models trained on Semantic Textual Similarity (STS) datasets. Current state-of-the-art models could be found on this [leaderboard](https://paperswithcode.com/sota/semantic-textual-similarity-on-sts-benchmark?p=roberta-a-robustly-optimized-bert-pretraining). However, not only specially trained models can be used. If the model is trained on a large enough dataset, its internal features can work as embeddings too. So, for instance, you can take any pre-trained on ImageNet model and cut off the last layer from it. In the penultimate layer of the neural network, as a rule, the highest-level features are formed, which, however, do not correspond to specific classes. The output of this layer can be used as an embedding. **What tasks is neural search good for?** Neural search has the greatest advantage in areas where the query cannot be formulated precisely. Querying a table in a SQL database is not the best place for neural search. On the contrary, if the query itself is fuzzy, or it cannot be formulated as a set of conditions — neural search can help you. If the search query is a picture, sound file or long text, neural network search is almost the only option. If you want to build a recommendation system, the neural approach can also be useful. The user’s actions can be encoded in vector space in the same way as a picture or text. And having those vectors, it is possible to find semantically similar users and determine the next probable user actions. **Let’s build our own** With all that said, let’s make our neural network search. As an example, I decided to make a search for startups by their description. In this demo, we will see the cases when text search works better and the cases when neural network search works better. I will use data from [startups-list.com](https://www.startups-list.com/). Each record contains the name, a paragraph describing the company, the location and a picture. Raw parsed data can be found at [this link](https://storage.googleapis.com/generall-shared-data/startups_demo.json). **Prepare data for neural search** To be able to search for our descriptions in vector space, we must get vectors first. We need to encode the descriptions into a vector representation. As the descriptions are textual data, we can use a pre-trained language model. As mentioned above, for the task of text search there is a whole set of pre-trained models specifically tuned for semantic similarity. One of the easiest libraries to work with pre-trained language models, in my opinion, is the [sentence-transformers](https://github.com/UKPLab/sentence-transformers) by UKPLab. It provides a way to conveniently download and use many pre-trained models, mostly based on transformer architecture. Transformers is not the only architecture suitable for neural search, but for our task, it is quite enough. We will use a model called **`distilbert-base-nli-stsb-mean-tokens**\`. DistilBERT means that the size of this model has been reduced by a special technique compared to the original BERT. This is important for the speed of our service and its demand for resources. The word \`stsb` in the name means that the model was trained for the Semantic Textual Similarity task. The complete code for data preparation with detailed comments can be found and run in [Colab Notebook](https://colab.research.google.com/drive/1kPktoudAP8Tu8n8l-iVMOQhVmHkWV_L9?usp=sharing). ![](/blog/from_cms/1_lotmmhjfexth1ucmtuhl7a.webp ""What tasks is neural search good for? Neural search has the greatest advantage in areas where the query cannot be formulated precisely. Querying a table in a SQL database is not the best place for neural search. On the contrary, if the query itself is fuzzy, or it cannot be formulated as a set of conditions — neural search can help you. If the search query is a picture, sound file or long text, neural network search is almost the only option. If you want to build a recommendation system, the neural approach can also be useful. The user’s actions can be encoded in vector space in the same way as a picture or text. And having those vectors, it is possible to find semantically similar users and determine the next probable user actions. Let’s build our own With all that said, let’s make our neural network search. As an example, I decided to make a search for startups by their description. In this demo, we will see the cases when text search works better and the cases when neural network search works better. I will use data from startups-list.com. Each record contains the name, a paragraph describing the company, the location and a picture. Raw parsed data can be found at this link. Prepare data for neural search To be able to search for our descriptions in vector space, we must get vectors first. We need to encode the descriptions into a vector representation. As the descriptions are textual data, we can use a pre-trained language model. As mentioned above, for the task of text search there is a whole set of pre-trained models specifically tuned for semantic similarity. One of the easiest libraries to work with pre-trained language models, in my opinion, is the sentence-transformers by UKPLab. It provides a way to conveniently download and use many pre-trained models, mostly based on transformer architecture. Transformers is not the only architecture suitable for neural search, but for our task, it is quite enough. We will use a model called `distilbert-base-nli-stsb-mean-tokens`. DistilBERT means that the size of this model has been reduced by a special technique compared to the original BERT. This is important for the speed of our service and its demand for resources. The word `stsb` in the name means that the model was trained for the Semantic Textual Similarity task. The complete code for data preparation with detailed comments can be found and run in Colab Notebook."") **Vector search engine** Now as we have a vector representation for all our records, we need to store them somewhere. In addition to storing, we may also need to add or delete a vector, save additional information with the vector. And most importantly, we need a way to search for the nearest vectors. The vector search engine can take care of all these tasks. It provides a convenient API for searching and managing vectors. In our tutorial we will use [Qdrant](/) vector search engine. It not only supports all necessary operations with vectors but also allows to store additional payload along with vectors and use it to perform filtering of the search result. Qdrant has a client for python and also defines the API schema if you need to use it from other languages. The easiest way to use Qdrant is to run a pre-built image. So make sure you have Docker installed on your system. To start Qdrant, use the instructions on its [homepage](https://github.com/qdrant/qdrant). Download image from [DockerHub](https://hub.docker.com/r/generall/qdrant): `docker pull qdrant/qdrant` And run the service inside the docker: `docker run -p 6333:6333 \`\ `-v $(pwd)/qdrant_storage:/qdrant/storage \`\ `qdrant/qdrant` You should see output like this ```abuild `...`\ `[...] Starting 12 workers`\ `[...] Starting ""actix-web-service-0.0.0.0:6333"" service on 0.0.0.0:6333` ``` This means that the service is successfully launched and listening port 6333. To make sure you can test  in your browser and get qdrant version info. All uploaded to Qdrant data is saved into the `*./qdrant_storage*` directory and will be persisted even if you recreate the container. **Upload data to Qdrant** Now once we have the vectors prepared and the search engine running, we can start uploading the data. To interact with Qdrant from python, I recommend using an out-of-the-box client library. To install it, use the following command `pip install qdrant-client` At this point, we should have startup records in file `*startups.json*\`, encoded vectors in file `*startup_vectors.npy*`, and running Qdrant on a local machine. Let’s write a script to upload all startup data and vectors into the search engine. First, let’s create a client object for Qdrant. ```abuild # Import client library from qdrant_client import QdrantClient from qdrant_client import models qdrant_client = QdrantClient(host=’localhost’, port=6333) ``` Qdrant allows you to combine vectors of the same purpose into collections. Many independent vector collections can exist on one service at the same time. Let’s create a new collection for our startup vectors. ```abuild `if not qdrant_client.collection_exists('startups'): `qdrant_client.create_collection(`\ `collection_name='startups',`\ `vectors_config=models.VectorParams(size=768, distance=""Cosine"")`\ `)` ``` The `*vector_size*\` parameter is very important. It tells the service the size of the vectors in that collection. All vectors in a collection must have the same size, otherwise, it is impossible to calculate the distance between them. `*768*` is the output dimensionality of the encoder we are using. The `*distance*` parameter allows specifying the function used to measure the distance between two points. The Qdrant client library defines a special function that allows you to load datasets into the service. However, since there may be too much data to fit a single computer memory, the function takes an iterator over the data as input. Let’s create an iterator over the startup data and vectors. ``` import numpy as np import json fd = open('./startups.json') # payload is now an iterator over startup data payload = map(json.loads, fd) # Here we load all vectors into memory, numpy array works as iterable for itself. # Other option would be to use Mmap, if we don't want to load all data into RAM vectors = np.load('./startup_vectors.npy') # And the final step - data uploading qdrant_client.upload_collection( collection_name='startups', vectors=vectors, payload=payload, ids=None, # Vector ids will be assigned automatically batch_size=256 # How many vectors will be uploaded in a single request? ``` Now we have vectors, uploaded to the vector search engine. On the next step we will learn how to actually search for closest vectors. The full code for this step could be found [here](https://github.com/qdrant/qdrant_demo/blob/master/qdrant_demo/init_vector_search_index.py). **Make a search API** Now that all the preparations are complete, let’s start building a neural search class. First, install all the requirements: `pip install sentence-transformers numpy` In order to process incoming requests neural search will need 2 things. A model to convert the query into a vector and Qdrant client, to perform a search queries. ``` # File: neural_searcher.py from qdrant_client import QdrantClient from sentence_transformers import SentenceTransformer class NeuralSearcher: def __init__(self, collection_name): self.collection_name = collection_name # Initialize encoder model self.model = SentenceTransformer('distilbert-base-nli-stsb-mean-tokens', device='cpu') # initialize Qdrant client self.qdrant_client = QdrantClient(host='localhost', port=6333) # The search function looks as simple as possible: def search(self, text: str): # Convert text query into vector vector = self.model.encode(text).tolist() # Use `vector` for search for closest vectors in the collection search_result = self.qdrant_client.search( collection_name=self.collection_name, query_vector=vector, query_filter=None, # We don't want any filters for now top=5 # 5 the most closest results is enough ) # `search_result` contains found vector ids with similarity scores along with the stored payload # In this function we are interested in payload only payloads = [hit.payload for hit in search_result] return payloads ``` With Qdrant it is also feasible to add some conditions to the search. For example, if we wanted to search for startups in a certain city, the search query could look like this: We now have a class for making neural search queries. Let’s wrap it up into a service. **Deploy as a service** To build the service we will use the FastAPI framework. It is super easy to use and requires minimal code writing. To install it, use the command `pip install fastapi uvicorn` Our service will have only one API endpoint and will look like this: Now, if you run the service with `python service.py` [ttp://localhost:8000/docs](http://localhost:8000/docs) , you should be able to see a debug interface for your service. ![](/blog/from_cms/1_f4gzrt6rkyqg8xvjr4bdtq-1-.webp ""FastAPI Swagger interface"") Feel free to play around with it, make queries and check out the results. This concludes the tutorial. **Online Demo** The described code is the core of this [online demo](https://demo.qdrant.tech/). You can try it to get an intuition for cases when the neural search is useful. The demo contains a switch that selects between neural and full-text searches. You can turn neural search on and off to compare the result with regular full-text search. Try to use startup description to find similar ones. **Conclusion** In this tutorial, I have tried to give minimal information about neural search, but enough to start using it. Many potential applications are not mentioned here, this is a space to go further into the subject. Subscribe to my [telegram channel](https://t.me/neural_network_engineering), where I talk about neural networks engineering, publish other examples of neural networks and neural search applications. Subscribe to the [Qdrant user’s group](https://discord.gg/tdtYvXjC4h) if you want to be updated on latest Qdrant news and features.",blog/neural-search-tutorial.md "--- draft: true title: v0.9.0 update of the Qdrant engine went live slug: qdrant-v090-release short_description: We've released the new version of Qdrant engine - v.0.9.0. description: We’ve released the new version of Qdrant engine - v.0.9.0. It features the dynamic cluster scaling capabilities. Now Qdrant is more flexible with cluster deployment, allowing to move preview_image: /blog/qdrant-v.0.9.0-release-update.png date: 2022-08-08T14:54:45.476Z author: Alyona Kavyerina author_link: https://www.linkedin.com/in/alyona-kavyerina/ featured: true categories: - release-update - news tags: - corporate news - release sitemapExclude: true --- We've released the new version of Qdrant engine - v.0.9.0. It features the dynamic cluster scaling capabilities. Now Qdrant is more flexible with cluster deployment, allowing to move shards between nodes and remove nodes from the cluster. v.0.9.0 also has various improvements, such as removing temporary snapshot files during the complete snapshot, disabling default mmap threshold, and more. You can read the detailed release noted by this link https://github.com/qdrant/qdrant/releases/tag/v0.9.0 We keep improving Qdrant and working on frequently requested functionality for the next release. Stay tuned!",blog/v0-9-0-update-of-the-qdrant-engine-went-live.md "--- draft: false title: ""Qdrant Hybrid Cloud: the First Managed Vector Database You Can Run Anywhere"" slug: hybrid-cloud short_description: description: preview_image: /blog/hybrid-cloud/hybrid-cloud.png social_preview_image: /blog/hybrid-cloud/hybrid-cloud.png date: 2024-04-15T00:01:00Z author: Andre Zayarni, CEO & Co-Founder featured: true tags: - Hybrid Cloud --- We are excited to announce the official launch of [Qdrant Hybrid Cloud](/hybrid-cloud/) today, a significant leap forward in the field of vector search and enterprise AI. Rooted in our open-source origin, we are committed to offering our users and customers unparalleled control and sovereignty over their data and vector search workloads. Qdrant Hybrid Cloud stands as **the industry's first managed vector database that can be deployed in any environment** - be it cloud, on-premise, or the edge.

As the AI application landscape evolves, the industry is transitioning from prototyping innovative AI solutions to actively deploying AI applications into production (incl. GenAI, semantic search, or recommendation systems). In this new phase, **privacy**, **data sovereignty**, **deployment flexibility**, and **control** are at the top of developers’ minds. These factors are critical when developing, launching, and scaling new applications, whether they are customer-facing services like AI assistants or internal company solutions for knowledge and information retrieval or process automation. Qdrant Hybrid Cloud offers developers a vector database that can be deployed in any existing environment, ensuring data sovereignty and privacy control through complete database isolation - with the full capabilities of our managed cloud service. - **Unmatched Deployment Flexibility**: With its Kubernetes-native architecture, Qdrant Hybrid Cloud provides the ability to bring your own cloud or compute by deploying Qdrant as a managed service on the infrastructure of choice, such as Oracle Cloud Infrastructure (OCI), Vultr, Red Hat OpenShift, DigitalOcean, OVHcloud, Scaleway, STACKIT, Civo, VMware vSphere, AWS, Google Cloud, or Microsoft Azure. - **Privacy & Data Sovereignty**: Qdrant Hybrid Cloud offers unparalleled data isolation and the flexibility to process vector search workloads in their own environments. - **Scalable & Secure Architecture**: Qdrant Hybrid Cloud's design ensures scalability and adaptability with its Kubernetes-native architecture, separates data and control for enhanced security, and offers a unified management interface for ease of use, enabling businesses to grow and adapt without compromising privacy or control. - **Effortless Setup in Seconds**: Setting up Qdrant Hybrid Cloud is incredibly straightforward, thanks to our [simple Kubernetes installation](/documentation/hybrid-cloud/) that connects effortlessly with your chosen infrastructure, enabling secure, scalable deployments right from the get-go Let’s explore these aspects in more detail: #### Maximizing Deployment Flexibility: Enabling Applications to Run Across Any Environment ![hybrid-cloud-environments](/blog/hybrid-cloud/hybrid-cloud-environments.png) Qdrant Hybrid Cloud, powered by our seamless Kubernetes-native architecture, is the first managed vector database engineered for unparalleled deployment flexibility. This means that regardless of where you run your AI applications, you can now enjoy the benefits of a fully managed Qdrant vector database, simplifying operations across any cloud, on-premise, or edge locations. For this launch of Qdrant Hybrid Cloud, we are proud to collaborate with key cloud providers, including [Oracle Cloud Infrastructure (OCI)](https://blogs.oracle.com/cloud-infrastructure/post/qdrant-hybrid-cloud-now-available-oci-customers), [Red Hat OpenShift](/blog/hybrid-cloud-red-hat-openshift/), [Vultr](/blog/hybrid-cloud-vultr/), [DigitalOcean](/blog/hybrid-cloud-digitalocean/), [OVHcloud](/blog/hybrid-cloud-ovhcloud/), [Scaleway](/blog/hybrid-cloud-scaleway/), [Civo](/documentation/hybrid-cloud/platform-deployment-options/#civo), and [STACKIT](/blog/hybrid-cloud-stackit/). These partnerships underscore our commitment to delivering a versatile and robust vector database solution that meets the complex deployment requirements of today's AI applications. In addition to our partnerships with key cloud providers, we are also launching in collaboration with renowned AI development tools and framework leaders, including [LlamaIndex](/blog/hybrid-cloud-llamaindex/), [LangChain](/blog/hybrid-cloud-langchain/), [Airbyte](/blog/hybrid-cloud-airbyte/), [JinaAI](/blog/hybrid-cloud-jinaai/), [Haystack by deepset](/blog/hybrid-cloud-haystack/), and [Aleph Alpha](/blog/hybrid-cloud-aleph-alpha/). These launch partners are instrumental in ensuring our users can seamlessly integrate with essential technologies for their AI applications, enriching our offering and reinforcing our commitment to versatile and comprehensive deployment environments. Together with our launch partners we have created detailed tutorials that show how to build cutting-edge AI applications with Qdrant Hybrid Cloud on the infrastructure of your choice. These tutorials are available in our [launch partner blog](/blog/hybrid-cloud-launch-partners/). Additionally, you can find expansive [documentation](/documentation/hybrid-cloud/) and instructions on how to [deploy Qdrant Hybrid Cloud](/documentation/hybrid-cloud/hybrid-cloud-setup/). #### Powering Vector Search & AI with Unmatched Data Sovereignty Proprietary data, the lifeblood of AI-driven innovation, fuels personalized experiences, accurate recommendations, and timely anomaly detection. This data, unique to each organization, encompasses customer behaviors, internal processes, and market insights - crucial for tailoring AI applications to specific business needs and competitive differentiation. However, leveraging such data effectively while ensuring its **security, privacy, and control** requires diligence. The innovative architecture of Qdrant Hybrid Cloud ensures **complete database isolation**, empowering developers with the autonomy to tailor where they process their vector search workloads with total data sovereignty. Rooted deeply in our commitment to open-source principles, this approach aims to foster a new level of trust and reliability by providing the essential tools to navigate the exciting landscape of enterprise AI. #### How We Designed the Qdrant Hybrid Cloud Architecture We designed the architecture of Qdrant Hybrid Cloud to meet the evolving needs of businesses seeking unparalleled flexibility, control, and privacy. - **Kubernetes-Native Design**: By embracing Kubernetes, we've ensured that our architecture is both scalable and adaptable. This choice supports our deployment flexibility principle, allowing Qdrant Hybrid Cloud to integrate seamlessly with any infrastructure that can run Kubernetes. - **Decoupled Data and Control Planes**: Our architecture separates the data plane (where the data is stored and processed) from the control plane (which manages the cluster operations). This separation enhances security, allows for more granular control over the data, and enables the data plane to reside anywhere the user chooses. - **Unified Management Interface**: Despite the underlying complexity and the diversity of deployment environments, we designed a unified, user-friendly interface that simplifies the Qdrant cluster management. This interface supports everything from deployment to scaling and upgrading operations, all accessible from the [Qdrant Cloud portal](https://cloud.qdrant.io/login). - **Extensible and Modular**: Recognizing the rapidly evolving nature of technology and enterprise needs, we built Qdrant Hybrid Cloud to be both extensible and modular. Users can easily integrate new services, data sources, and deployment environments as their requirements grow and change. #### Diagram: Qdrant Hybrid Cloud Architecture ![hybrid-cloud-architecture](/blog/hybrid-cloud/hybrid-cloud-architecture.png) #### Quickstart: Effortless Setup with Our One-Step Installation We’ve made getting started with Qdrant Hybrid Cloud as simple as possible. The Kubernetes “One-Step” installation will allow you to connect with the infrastructure of your choice. This is how you can get started: 1. **Activate Hybrid Cloud**: Simply sign up for or log into your [Qdrant Cloud](https://cloud.qdrant.io/login) account and navigate to the **Hybrid Cloud** section. 2. **Onboard your Kubernetes cluster**: Follow the onboarding wizard and add your Kubernetes cluster as a Hybrid Cloud Environment - be it in the cloud, on-premise, or at the edge. 3. **Deploy Qdrant clusters securely, with confidence:** Now, you can effortlessly create and manage Qdrant clusters in your own environment, directly from the central Qdrant Management Console. This supports horizontal and vertical scaling, zero-downtime upgrades, and disaster recovery seamlessly, allowing you to deploy anywhere with confidence. Explore our [detailed documentation](/documentation/hybrid-cloud/) and [tutorials](/documentation/examples/) to seamlessly deploy Qdrant Hybrid Cloud in your preferred environment, and don't miss our [launch partner blog post](/blog/hybrid-cloud-launch-partners/) for practical insights. Start leveraging the full potential of Qdrant Hybrid Cloud and [create your first Qdrant cluster today](https://cloud.qdrant.io/login), unlocking the flexibility and control essential for your AI and vector search workloads. [![hybrid-cloud-get-started](/blog/hybrid-cloud/hybrid-cloud-get-started.png)](https://cloud.qdrant.io/login) ## Launch Partners We launched Qdrant Hybrid Cloud with assistance and support of our trusted partners. Learn what they have to say about our latest offering: #### Oracle Cloud Infrastructure: > *""We are excited to partner with Qdrant to bring their powerful vector search capabilities to Oracle Cloud Infrastructure. By offering Qdrant Hybrid Cloud as a managed service on OCI, we are empowering enterprises to harness the full potential of AI-driven applications while maintaining complete control over their data. This collaboration represents a significant step forward in making scalable vector search accessible and manageable for businesses across various industries, enabling them to drive innovation, enhance productivity, and unlock valuable insights from their data.""* Dr. Sanjay Basu, Senior Director of Cloud Engineering, AI/GPU Infrastructure at Oracle Read more in [OCI's latest Partner Blog](https://blogs.oracle.com/cloud-infrastructure/post/qdrant-hybrid-cloud-now-available-oci-customers). #### Red Hat: > *“Red Hat is committed to driving transparency, flexibility and choice for organizations to more easily unlock the power of AI. By working with partners like Qdrant to enable streamlined integration experiences on Red Hat OpenShift for AI use cases, organizations can more effectively harness critical data and deliver real business outcomes,”* said Steven Huels, vice president and general manager, AI Business Unit, Red Hat. Read more in our [official Red Hat Partner Blog](/blog/hybrid-cloud-red-hat-openshift/). #### Vultr: > *""Our collaboration with Qdrant empowers developers to unlock the potential of vector search applications, such as RAG, by deploying Qdrant Hybrid Cloud with its high-performance search capabilities directly on Vultr's global, automated cloud infrastructure. This partnership creates a highly scalable and customizable platform, uniquely designed for deploying and managing AI workloads with unparalleled efficiency.""* Kevin Cochrane, Vultr CMO. Read more in our [official Vultr Partner Blog](/blog/hybrid-cloud-vultr/). #### OVHcloud: > *“The partnership between OVHcloud and Qdrant Hybrid Cloud highlights, in the European AI landscape, a strong commitment to innovative and secure AI solutions, empowering startups and organisations to navigate AI complexities confidently. By emphasizing data sovereignty and security, we enable businesses to leverage vector databases securely.""* Yaniv Fdida, Chief Product and Technology Officer, OVHcloud Read more in our [official OVHcloud Partner Blog](/blog/hybrid-cloud-ovhcloud/). #### DigitalOcean: > *“Qdrant, with its seamless integration and robust performance, equips businesses to develop cutting-edge applications that truly resonate with their users. Through applications such as semantic search, Q&A systems, recommendation engines, image search, and RAG, DigitalOcean customers can leverage their data to the fullest, ensuring privacy and driving innovation.“* - Bikram Gupta, Lead Product Manager, Kubernetes & App Platform, DigitalOcean. Read more in our [official DigitalOcean Partner Blog](/blog/hybrid-cloud-digitalocean/). #### Scaleway: > *""With our partnership with Qdrant, Scaleway reinforces its status as Europe's leading cloud provider for AI innovation. The integration of Qdrant's fast and accurate vector database enriches our expanding suite of AI solutions. This means you can build smarter, faster AI projects with us, worry-free about performance and security.""* Frédéric Bardolle, Lead PM AI, Scaleway Read more in our [official Scaleway Partner Blog](/blog/hybrid-cloud-scaleway/). #### Airbyte: > *“The new Qdrant Hybrid Cloud is an exciting addition that offers peace of mind and flexibility, aligning perfectly with the needs of Airbyte Enterprise users who value the same balance. Being open-source at our core, both Qdrant and Airbyte prioritize giving users the flexibility to build and test locally—a significant advantage for data engineers and AI practitioners. We're enthusiastic about the Hybrid Cloud launch, as it mirrors our vision of enabling users to confidently transition from local development and local deployments to a managed solution, with both cloud and hybrid cloud deployment options.”* AJ Steers, Staff Engineer for AI, Airbyte Read more in our [official Airbyte Partner Blog](/blog/hybrid-cloud-airbyte/). #### deepset: > *“We hope that with Haystack 2.0 and our growing partnerships such as what we have here with Qdrant Hybrid Cloud, engineers are able to build AI systems with full autonomy. Both in how their pipelines are designed, and how their data are managed.”* Tuana Çelik, Developer Relations Lead, deepset. Read more in our [official Haystack by deepset Partner Blog](/blog/hybrid-cloud-haystack/). #### LlamaIndex: > *“LlamaIndex is thrilled to partner with Qdrant on the launch of Qdrant Hybrid Cloud, which upholds Qdrant's core functionality within a Kubernetes-based architecture. This advancement enhances LlamaIndex's ability to support diverse user environments, facilitating the development and scaling of production-grade, context-augmented LLM applications.”* Jerry Liu, CEO and Co-Founder, LlamaIndex Read more in our [official LlamaIndex Partner Blog](/blog/hybrid-cloud-llamaindex/). #### LangChain: > *“The AI industry is rapidly maturing, and more companies are moving their applications into production. We're really excited at LangChain about supporting enterprises' unique data architectures and tooling needs through integrations and first-party offerings through LangSmith. First-party enterprise integrations like Qdrant's greatly contribute to the LangChain ecosystem with enterprise-ready retrieval features that seamlessly integrate with LangSmith's observability, production monitoring, and automation features, and we're really excited to develop our partnership further.”* -Erick Friis, Founding Engineer at LangChain Read more in our [official LangChain Partner Blog](/blog/hybrid-cloud-langchain/). #### Jina AI: > *“The collaboration of Qdrant Hybrid Cloud with Jina AI’s embeddings gives every user the tools to craft a perfect search framework with unmatched accuracy and scalability. It’s a partnership that truly pays off!”* Nan Wang, CTO, Jina AI Read more in our [official Jina AI Partner Blog](/blog/hybrid-cloud-jinaai/). We have also launched Qdrant Hybrid Cloud with the support of **Aleph Alpha**, **STACKIT** and **Civo**. Learn more about our valued partners: - **Aleph Alpha:** [Enhance AI Data Sovereignty with Aleph Alpha and Qdrant Hybrid Cloud](/blog/hybrid-cloud-aleph-alpha/) - **STACKIT:** [STACKIT and Qdrant Hybrid Cloud for Best Data Privacy](/blog/hybrid-cloud-stackit/) - **Civo:** [Deploy Qdrant Hybrid Cloud on Civo Kubernetes](/documentation/hybrid-cloud/platform-deployment-options/#civo)",blog/hybrid-cloud.md "--- draft: false title: ""STACKIT and Qdrant Hybrid Cloud for Best Data Privacy"" short_description: ""Empowering German AI development with a data privacy-first platform."" description: ""Empowering German AI development with a data privacy-first platform."" preview_image: /blog/hybrid-cloud-stackit/hybrid-cloud-stackit.png date: 2024-04-10T00:07:00Z author: Qdrant featured: false weight: 1001 tags: - Qdrant - Vector Database --- Qdrant and [STACKIT](https://www.stackit.de/en/) are thrilled to announce that developers are now able to deploy a fully managed vector database to their STACKIT environment with the introduction of [Qdrant Hybrid Cloud](/hybrid-cloud/). This is a great step forward for the German AI ecosystem as it enables developers and businesses to build cutting edge AI applications that run on German data centers with full control over their data. Vector databases are an essential component of the modern AI stack. They enable rapid and accurate retrieval of high-dimensional data, crucial for powering search, recommendation systems, and augmenting machine learning models. In the rising field of GenAI, vector databases power retrieval-augmented-generation (RAG) scenarios as they are able to enhance the output of large language models (LLMs) by injecting relevant contextual information. However, this contextual information is often rooted in confidential internal or customer-related information, which is why enterprises are in pursuit of solutions that allow them to make this data available for their AI applications without compromising data privacy, losing data control, or letting data exit the company's secure environment. Qdrant Hybrid Cloud is the first managed vector database that can be deployed in an existing STACKIT environment. The Kubernetes-native setup allows businesses to operate a fully managed vector database, while maintaining control over their data through complete data isolation. Qdrant Hybrid Cloud's managed service seamlessly integrates into STACKIT's cloud environment, allowing businesses to deploy fully managed vector search workloads, secure in the knowledge that their operations are backed by the stringent data protection standards of Germany's data centers and in full compliance with GDPR. This setup not only ensures that data remains under the businesses control but also paves the way for secure, AI-driven application development. #### Key Features and Benefits of Qdrant on STACKIT: - **Seamless Integration and Deployment**: With Qdrant’s Kubernetes-native design, businesses can effortlessly connect their STACKIT cloud as a Hybrid Cloud Environment, enabling a one-step, scalable Qdrant deployment. - **Enhanced Data Privacy**: Leveraging STACKIT's German data centers ensures that all data processing complies with GDPR and other relevant European data protection standards, providing businesses with unparalleled control over their data. - **Scalable and Managed AI Solutions**: Deploying Qdrant on STACKIT provides a fully managed vector search engine with the ability to scale vertically and horizontally, with robust support for zero-downtime upgrades and disaster recovery, all within STACKIT's secure infrastructure. #### Use Case: AI-enabled Contract Management built with Qdrant Hybrid Cloud, STACKIT, and Aleph Alpha ![hybrid-cloud-stackit-tutorial](/blog/hybrid-cloud-stackit/hybrid-cloud-stackit-tutorial.png) To demonstrate the power of Qdrant Hybrid Cloud on STACKIT, we’ve developed a comprehensive tutorial showcasing how to build secure, AI-driven applications focusing on data sovereignty. This tutorial specifically shows how to build a contract management platform that enables users to upload documents (PDF or DOCx), which are then segmented for searchable access. Designed with multitenancy, users can only access their team or organization's documents. It also features custom sharding for location-specific document storage. Beyond search, the application offers rephrasing of document excerpts for clarity to those without context. [Try the Tutorial](/documentation/tutorials/rag-contract-management-stackit-aleph-alpha/) #### Start Using Qdrant with STACKIT Deploying Qdrant Hybrid Cloud on STACKIT is straightforward, thanks to the seamless integration facilitated by Kubernetes. Here are the steps to kickstart your journey: 1. **Qdrant Hybrid Cloud Activation**: Start by activating ‘Hybrid Cloud’ in your [Qdrant Cloud account](https://cloud.qdrant.io/login). 2. **Cluster Integration**: Add your STACKIT Kubernetes clusters as a Hybrid Cloud Environment in the Hybrid Cloud section. 3. **Effortless Deployment**: Use the Qdrant Management Console to effortlessly create and manage your Qdrant clusters on STACKIT. We invite you to explore the detailed documentation on deploying Qdrant on STACKIT, designed to guide you through each step of the process seamlessly. [Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/) #### Ready to Get Started? Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/). ",blog/hybrid-cloud-stackit.md "--- title: ""Response to CVE-2024-2221: Arbitrary file upload vulnerability"" draft: false slug: cve-2024-2221-response short_description: Qdrant keeps your systems secure description: Upgrade your deployments to at least v1.9.0. Cloud deployments not materially affected. preview_image: /blog/cve-2024-2221/cve-2024-2221-response-social-preview.png # social_preview_image: /blog/Article-Image.png # Optional image used for link previews # title_preview_image: /blog/Article-Image.png # Optional image used for blog post title # small_preview_image: /blog/Article-Image.png # Optional image used for small preview in the list of blog posts date: 2024-04-05T13:00:00-07:00 author: Mike Jang featured: false tags: - cve - security weight: 0 # Change this weight to change order of posts # For more guidance, see https://github.com/qdrant/landing_page?tab=readme-ov-file#blog --- ### Summary A security vulnerability has been discovered in Qdrant affecting all versions prior to v1.9, described in [CVE-2024-2221](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-2221). The vulnerability allows an attacker to upload arbitrary files to the filesystem, which can be used to gain remote code execution. The vulnerability does not materially affect Qdrant cloud deployments, as that filesystem is read-only and authentication is enabled by default. At worst, the vulnerability could be used by an authenticated user to crash a cluster, which is already possible, such as by uploading more vectors than can fit in RAM. Qdrant has addressed the vulnerability in v1.9.0 and above with code that restricts file uploads to a folder dedicated to that purpose. ### Action Check the current version of your Qdrant deployment. Upgrade if your deployment is not at least v1.9.0. To confirm the version of your Qdrant deployment in the cloud or on your local or cloud system, run an API GET call, as described in the [Qdrant Cloud Setup guide](/documentation/cloud/authentication/#test-cluster-access). If your Qdrant deployment is local, you do not need an API key. Your next step depends on how you installed Qdrant. For details, read the [Qdrant Installation](/documentation/guides/installation/) guide. #### If you use the Qdrant container or binary Upgrade your deployment. Run the commands in the applicable section of the [Qdrant Installation](/documentation/guides/installation/) guide. The default commands automatically pull the latest version of Qdrant. #### If you use the Qdrant helm chart If you’ve set up Qdrant on kubernetes using a helm chart, follow the README in the [qdrant-helm](https://github.com/qdrant/qdrant-helm/tree/main?tab=readme-ov-file#upgrading) repository. Make sure applicable configuration files point to version v1.9.0 or above. #### If you use the Qdrant cloud No action is required. This vulnerability does not materially affect you. However, we suggest that you upgrade your cloud deployment to the latest version. > Note: This article has been updated on 2024-05-10 to encourage users to upgrade to 1.9.0 to ensure protection from both CVE-2024-2221 and CVE-2024-3829. ",blog/cve-2024-2221-response.md "--- draft: false title: ""Introducing FastLLM: Qdrant’s Revolutionary LLM"" short_description: The most powerful LLM known to human...or LLM. description: Lightweight and open-source. Custom made for RAG and completely integrated with Qdrant. preview_image: /blog/fastllm-announcement/fastllm.png date: 2024-04-01T00:00:00Z author: David Myriel featured: false weight: 0 tags: - Qdrant - FastEmbed - LLM - Vector Database --- Today, we're happy to announce that **FastLLM (FLLM)**, our lightweight Language Model tailored specifically for Retrieval Augmented Generation (RAG) use cases, has officially entered Early Access! Developed to seamlessly integrate with Qdrant, **FastLLM** represents a significant leap forward in AI-driven content generation. Up to this point, LLM’s could only handle up to a few million tokens. **As of today, FLLM offers a context window of 1 billion tokens.** However, what sets FastLLM apart is its optimized architecture, making it the ideal choice for RAG applications. With minimal effort, you can combine FastLLM and Qdrant to launch applications that process vast amounts of data. Leveraging the power of Qdrant's scalability features, FastLLM promises to revolutionize how enterprise AI applications generate and retrieve content at massive scale. > *“First we introduced [FastEmbed](https://github.com/qdrant/fastembed). But then we thought - why stop there? Embedding is useful and all, but our users should do everything from within the Qdrant ecosystem. FastLLM is just the natural progression towards a large-scale consolidation of AI tools.” Andre Zayarni, President & CEO, Qdrant* > ## Going Big: Quality & Quantity Very soon, an LLM will come out with a context window so wide, it will completely eliminate any value a measly vector database can add. ***We know this. That’s why we trained our own LLM to obliterate the competition. Also, in case vector databases go under, at least we'll have an LLM left!*** As soon as we entered Series A, we knew it was time to ramp up our training efforts. FLLM was trained on 300,000 NVIDIA H100s connected by 5Tbps Infiniband. It took weeks to fully train the model, but our unified efforts produced the most powerful LLM known to human…..or LLM. We don’t see how any other company can compete with FastLLM. Most of our competitors will soon be burning through graphics cards trying to get to the next best thing. But it is too late. By this time next year, we will have left them in the dust. > ***“Everyone has an LLM, so why shouldn’t we? Let’s face it - the more products and features you offer, the more they will sign up. Sure, this is a major pivot…but life is all about being bold.”*** *David Myriel, Director of Product Education, Qdrant* > ## Extreme Performance Qdrant’s R&D is proud to stand behind the most dramatic benchmark results. Across a range of standard benchmarks, FLLM surpasses every single model in existence. In the [Needle In A Haystack](https://github.com/gkamradt/LLMTest_NeedleInAHaystack) (NIAH) test, FLLM found the embedded text with 100% accuracy, always within blocks containing 1 billion tokens. We actually believe FLLM can handle more than a trillion tokens, but it’s quite possible that it is hiding its true capabilities. FastLLM has a fine-grained mixture-of-experts architecture and a whopping 1 trillion total parameters. As developers and researchers delve into the possibilities unlocked by this new model, they will uncover new applications, refine existing solutions, and perhaps even stumble upon unforeseen breakthroughs. As of now, we're not exactly sure what problem FLLM is solving, but hey, it's got a lot of parameters! > *Our customers ask us “What can I do with an LLM this extreme?” I don’t know, but it can’t hurt to build another RAG chatbot.” Kacper Lukawski, Senior Developer Advocate, Qdrant* > ## Get Started! Don't miss out on this opportunity to be at the forefront of AI innovation. Join FastLLM's Early Access program now and embark on a journey towards AI-powered excellence! Stay tuned for more updates and exciting developments as we continue to push the boundaries of what's possible with AI-driven content generation. Happy Generating! 🚀 [Sign Up for Early Access](https://qdrant.to/cloud)",blog/fastllm-announcement.md "--- draft: false title: ""Cutting-Edge GenAI with Jina AI and Qdrant Hybrid Cloud"" short_description: ""Build your most successful app with Jina AI embeddings and on Qdrant Hybrid Cloud."" description: ""Build your most successful app with Jina AI embeddings and on Qdrant Hybrid Cloud."" preview_image: /blog/hybrid-cloud-jinaai/hybrid-cloud-jinaai.png date: 2024-04-10T00:03:00Z author: Qdrant featured: false weight: 1008 tags: - Qdrant - Vector Database --- We're thrilled to announce the collaboration between Qdrant and [Jina AI](https://jina.ai/) for the launch of [Qdrant Hybrid Cloud](/hybrid-cloud/), empowering users worldwide to rapidly and securely develop and scale their AI applications. By leveraging Jina AI's top-tier large language models (LLMs), engineers and scientists can optimize their vector search efforts. Qdrant's latest Hybrid Cloud solution, designed natively with Kubernetes, seamlessly integrates with Jina AI's robust embedding models and APIs. This synergy streamlines both prototyping and deployment processes for AI solutions. Retrieval Augmented Generation (RAG) is broadly adopted as the go-to Generative AI solution, as it enables powerful and cost-effective chatbots, customer support agents and other forms of semantic search applications. Through Jina AI's managed service, users gain access to cutting-edge text generation and comprehension capabilities, conveniently accessible through an API. Qdrant Hybrid Cloud effortlessly incorporates Jina AI's embedding models, facilitating smooth data vectorization and delivering exceptionally precise semantic search functionality. With Qdrant Hybrid Cloud, users have the flexibility to deploy their vector database in an environment of their choice. By using container-based scalable deployments, global businesses can keep both products deployed in the same hosting architecture. By combining Jina AI’s models with Qdrant’s vector search capabilities, developers can create robust and scalable applications tailored to meet the demands of modern enterprises. This combination allows organizations to build strong and secure Generative AI solutions. > *“The collaboration of Qdrant Hybrid Cloud with Jina AI’s embeddings gives every user the tools to craft a perfect search framework with unmatched accuracy and scalability. It’s a partnership that truly pays off!”* Nan Wang, CTO, Jina AI #### Benefits of Qdrant’s Vector Search With Jina AI Embeddings in Enterprise RAG Scenarios Building apps with Qdrant Hybrid Cloud and Jina AI’s embeddings comes with several key advantages: **Seamless Deployment:** Jina AI’s best-in-class embedding APIs can be combined with Qdrant Hybrid Cloud’s Kubernetes-native architecture to deploy flexible and platform-agnostic AI solutions in a few minutes to any environment. This combination is purpose built for both prototyping and scalability, so that users can put together advanced RAG solutions anyplace with minimal effort. **Scalable Vector Search:** Once deployed to a customer’s host of choice, Qdrant Hybrid Cloud provides a fully managed vector database that lets users effortlessly scale the setup through vertical or horizontal scaling. Deployed in highly secure environments, this is a robust setup that is designed to meet the needs of large enterprises, ensuring a full spectrum of solutions for various projects and workloads. **Cost Efficiency:** By leveraging Jina AI's scalable and affordable pricing structure and pairing it with Qdrant's quantization for efficient data handling, this integration offers great value for its cost. Companies who are just getting started with both will have a minimal upfront investment and optimal cost management going forward. #### Start Building Gen AI Apps With Jina AI and Qdrant Hybrid Cloud ![hybrid-cloud-jinaai-tutorial](/blog/hybrid-cloud-jinaai/hybrid-cloud-jinaai-tutorial.png) To get you started, we created a comprehensive tutorial that shows how to build a modern GenAI application with Qdrant Hybrid Cloud and Jina AI embeddings. #### Tutorial: Hybrid Search for Household Appliance Manuals Learn how to build an app that retrieves information from PDF user manuals to enhance user experience for companies that sell household appliances. The system will leverage Jina AI embeddings and Qdrant Hybrid Cloud for enhanced generative AI capabilities, while the RAG pipeline will be tied together using the LlamaIndex framework. This example demonstrates how complex tables in PDF documentation can be processed as high quality embeddings with no extra configuration. By introducing Hybrid Search from Qdrant, the RAG functionality is highly accurate. [Try the Tutorial](/documentation/tutorials/hybrid-search-llamaindex-jinaai/) #### Documentation: Deploy Qdrant in a Few Clicks Our simple Kubernetes-native design lets you deploy Qdrant Hybrid Cloud on your hosting platform of choice in just a few steps. Learn how in our documentation. [Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/) #### Ready to Get Started? Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/). ",blog/hybrid-cloud-jinaai.md "--- draft: false title: ""New RAG Horizons with Qdrant Hybrid Cloud and LlamaIndex"" short_description: ""Unlock the most advanced RAG opportunities with Qdrant Hybrid Cloud and LlamaIndex."" description: ""Unlock the most advanced RAG opportunities with Qdrant Hybrid Cloud and LlamaIndex."" preview_image: /blog/hybrid-cloud-llamaindex/hybrid-cloud-llamaindex.png date: 2024-04-10T00:04:00Z author: Qdrant featured: false weight: 1006 tags: - Qdrant - Vector Database --- We're happy to announce the collaboration between [LlamaIndex](https://www.llamaindex.ai/) and [Qdrant’s new Hybrid Cloud launch](/hybrid-cloud/), aimed at empowering engineers and scientists worldwide to swiftly and securely develop and scale their GenAI applications. By leveraging LlamaIndex's robust framework, users can maximize the potential of vector search and create stable and effective AI products. Qdrant Hybrid Cloud offers the same Qdrant functionality on a Kubernetes-based architecture, which further expands the ability of LlamaIndex to support any user on any environment. With Qdrant Hybrid Cloud, users have the flexibility to deploy their vector database in an environment of their choice. By using container-based scalable deployments, companies can leverage a cutting-edge framework like LlamaIndex, while staying deployed in the same hosting architecture as data sources, embedding models and LLMs. This powerful combination empowers organizations to build strong and secure applications that search, understand meaning and converse in text. While LLMs are trained on a great deal of data, they are not trained on user-specific data, which may be private or highly specific. LlamaIndex meets this challenge by adding context to LLM-based generation methods. In turn, Qdrant’s popular vector database sorts through semantically relevant information, which can further enrich the performance gains from LlamaIndex’s data connection features. With LlamaIndex, users can tap into state-of-the-art functions to query, chat, sort or parse data. Through the integration of Qdrant Hybrid Cloud and LlamaIndex developers can conveniently vectorize their data and perform highly accurate semantic search - all within their own environment. > *“LlamaIndex is thrilled to partner with Qdrant on the launch of Qdrant Hybrid Cloud, which upholds Qdrant's core functionality within a Kubernetes-based architecture. This advancement enhances LlamaIndex's ability to support diverse user environments, facilitating the development and scaling of production-grade, context-augmented LLM applications.”* Jerry Liu, CEO and Co-Founder, LlamaIndex #### Reap the Benefits of Advanced Integration Features With Qdrant and LlamaIndex Building apps with Qdrant Hybrid Cloud and LlamaIndex comes with several key advantages: **Seamless Deployment:** Qdrant Hybrid Cloud’s Kubernetes-native architecture lets you deploy Qdrant in a few clicks, to an environment of your choice. Combined with the flexibility afforded by LlamaIndex, users can put together advanced RAG solutions anyplace at minimal effort. **Open-Source Compatibility:** LlamaIndex and Qdrant pride themselves on maintaining a reliable and mature integration that brings peace of mind to those prototyping and deploying large-scale AI solutions. Extensive documentation, code samples and tutorials support users of all skill levels in leveraging highly advanced features of data ingestion and vector search. **Advanced Search Features:** LlamaIndex comes with built-in Qdrant Hybrid Search functionality, which combines search results from sparse and dense vectors. As a highly sought-after use case, hybrid search is easily accessible from within the LlamaIndex ecosystem. Deploying this particular type vector search on Hybrid Cloud is a matter of a few lines of code. #### Start Building With LlamaIndex and Qdrant Hybrid Cloud: Hybrid Search in Complex PDF Documentation Use Cases To get you started, we created a comprehensive tutorial that shows how to build next-gen AI applications with Qdrant Hybrid Cloud using the LlamaIndex framework and the LlamaParse API. ![hybrid-cloud-llamaindex-tutorial](/blog/hybrid-cloud-llamaindex/hybrid-cloud-llamaindex-tutorial.png) #### Tutorial: Hybrid Search for Household Appliance Manuals Use this end-to-end tutorial to create a system that retrieves information from complex user manuals in PDF format to enhance user experience for companies that sell household appliances. You will build a RAG pipeline with LlamaIndex leveraging Qdrant Hybrid Cloud for enhanced generative AI capabilities. The LlamaIndex integration shows how complex tables inside of items’ PDF documents can be processed via hybrid vector search with no additional configuration. [Try the Tutorial](/documentation/tutorials/hybrid-search-llamaindex-jinaai/) #### Documentation: Deploy Qdrant in a Few Clicks Our simple Kubernetes-native design lets you deploy Qdrant Hybrid Cloud on your hosting platform of choice in just a few steps. Learn how in our documentation. [Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/) #### Ready to Get Started? Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/). ",blog/hybrid-cloud-llamaindex.md "--- draft: false title: Building Search/RAG for an OpenAPI spec - Nick Khami | Vector Space Talks slug: building-search-rag-open-api short_description: Nick Khami, Founder and Engineer of Trieve, dives into the world of search and rag apps powered by Open API specs. description: Nick Khami discuss Trieve's work with Qdrant's Open API spec for creating powerful and simplified search and recommendation systems, touching on real-world applications, technical specifics, and the potential for improved user experiences. preview_image: /blog/from_cms/nick-khami-cropped.png date: 2024-04-11T22:23:00.000Z author: Demetrios Brinkmann featured: false tags: - Vector Search - Retrieval Augmented Generation - OpenAPI - Trieve --- > *""It's very, very simple to build search over an Open API specification with a tool like Trieve and Qdrant. I think really there's something to highlight here and how awesome it is to work with a group based system if you're using Qdrant.”*\ — Nick Khami > Nick Khami, a seasoned full-stack engineer, has been deeply involved in the development of vector search and RAG applications since the inception of Qdrant v0.11.0 back in October 2022. His expertise and passion for innovation led him to establish Trieve, a company dedicated to facilitating businesses in embracing cutting-edge vector search and RAG technologies. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/1JtL167O2ygirKFVyieQfP?si=R2cN5LQrTR60i-JzEh_m0Q), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/roLpKNTeG5A?si=JkKI7yOFVOVEY4Qv).*** ## **Top takeaways:** Nick showcases Trieve and the advancements in the world of search technology, demonstrating with Qdrant how simple it is to construct precise search functionalities with open API specs for colorful sneaker discoveries, all while unpacking the potential of improved search experiences and analytics for diverse applications like apps for legislation. We're going deep into the mechanics of search and recommendation applications. Whether you're a developer or just an enthusiast, this episode is guaranteed in giving you insight into how to create a seamless search experience using the latest advancements in the industry. Here are five key takeaways from this episode: 1. **Understand the Open API Spec**: Discover the magic behind Open API specifications and how they can serve your development needs especially when it comes to rest API routes. 2. **Simplify with Trieve and Qdrant**: Nick walks us through a real-world application using Trieve and Qdrant's group-based system, demonstrating how to effortlessly build search capabilities. 3. **Elevate Search Results**: Learn about the power of grouping and recommendations within Qdrant to fine-tune your search results, using the colorful world of sneakers as an example! 4. **Trieve's Infrastructure Made Easy**: Find out how taking advantage of Trieve can make creating datasets, obtaining API keys, and kicking off searches simpler than you ever imagined. 5. **Enhanced Vector Search with Tantivy**: If you're curious about alternative search engines, get the scoop on Tantivy, how it complements Qdrant, and its role within the ecosystem. > Fun Fact: Trieve was established in 2023 and the name is a play on the word ""retrieve”. > ## Show notes: 00:00 Vector Space Talks intro to Nick Khami.\ 06:11 Qdrant system simplifies difficult building process.\ 07:09 Using Qdrant to organize and manage content.\ 11:43 Creating a group: search results may not group.\ 14:23 Searching with Qdrant: utilizing system routes.\ 17:00 Trieve wrapped up YC W24 batch.\ 21:45 Revolutionizing company search.\ 23:30 Next update: user tracking, analytics, and cross-encoders.\ 27:39 Quadruple supported sparse vectors.\ 30:09 Final questions and wrap up. ## More Quotes from Nick: *""You can get this RAG, this search and the data upload done in a span of maybe 10-15 minutes, which is really cool and something that we were only really possible to build at Trieve, thanks to what the amazing team at Qdrant has been able to create.”*\ — Nick Khami *""Qdrant also offers recommendations for groups, so like, which is really cool... Not only can you search groups, you can also recommend groups, which is, I think, awesome. But yeah, you can upload all your data, you go to the search UI, you can search it, you can test out how recommendations are working [and] in a lot of cases too, you can fix problems in your search.”*\ — Nick Khami *""Typically when you do recommendations, you take the results that you want to base recommendations off of and you build like an average vector that you then use to search. Qdrant offers a more evolved recommendation pattern now where you can traverse the graph looking at the positive point similarity, then also the negative similarity.”*\ — Nick Khami ## Transcript: Demetrios: What is happening? Everyone? Welcome back to another edition of the Vector Space Talks. I am super excited to be here with you today. As always, we've got a very special guest. We've got Nick, the founder and engineer, founder slash engineer of Trieve. And as you know, we like to start these sessions off with a little recommendations of what you can hopefully be doing to make life better. And so when Sabrina's here, I will kick it over to her and ask her for her latest recommendation of what she's been doing. But she's traveling right now, so I'm just going to give you mine on some things that I've been listening to and I have been enjoying. For those who want some nice music, I would recommend an oldie, but a goodie. Demetrios: It is from the incredible band that is not coming to me right now, but it's called this must be the place from the. Actually, it's from the Talking Heads. Definitely recommend that one as a fun way to get the day started. We will throw a link to that music in the chat, but we're not going to be just talking about good music recommendations. Today we are going to get Nick on the stage to talk all about search and rags. And Nick is in a very interesting position because he's been using vector search from Qdrant since 2022. Let's bring this man on the stage and see what he's got to say. What's up, dude? Nick Khami: Hey. Demetrios: Hey. Nick Khami: Nice to meet you. Demetrios: How you doing? Nick Khami: Doing great. Demetrios: Well, it's great to have you. Nick Khami: Yeah, yeah. Nice sunny day. It looks like it's going to be here in San Francisco, which is good. It was raining like all of January, but finally got some good sunny days going, which is awesome. Demetrios: Well, it is awesome that you are waking up early for us and you're doing this. I appreciate it coming all the way from San Francisco and talking to us today all about search and recommender system. Sorry, rag apps. I just have in my mind, whenever I say search, I automatically connect recommender because it is kind of similar, but not in this case. You're going to be talking about search and rag apps and specifically around the Open API spec. I know you've got a talk set up for. For us. Do you want to kick it off? And then I'll be monitoring the chat. Demetrios: So if anybody has any questions, throw it in the chat and I'll pop up on screen again and ask away. Nick Khami: Yeah, yeah, I'd love to. I'll go ahead and get this show on the road. Okay. So I guess the first thing I'll talk about is what exactly an Open API spec is. This is Qdrants open API spec. I feel like it's a good topical example for vector space talk. You can see here, Qdrant offers a bunch of different rest API routes on their API. Each one of these exists within this big JSON file called the Open API specification. Nick Khami: There's a lot of projects that have an Open API specification. Stripe has one, I think sentry has one. It's kind of like a de facto way of documenting your API. Demetrios: Can you make your screen just a little or the font just a little bit bigger? Maybe zoom in? Nick Khami: I think I can, yeah. Demetrios: All right, awesome. So that my eyesight is not there. Oh, that is brilliant. That is awesome. Nick Khami: Okay, we doing good here? All right, awesome. Yeah. Hopefully this is more readable for everyone, but yeah. So this is an open API specification. If you look at it inside of a JSON file, it looks a little bit like this. And if you go to the top, I can show the structure. There's a list or there's an object called paths that contains all the different API paths for the API. And then there's another object called security, which explains the authentication scheme. Nick Khami: And you have a nice info section I'm going to ignore, kind of like these two, they're not all that important. And then you have this list of like tags, which is really cool because this is kind of how things get organized. If we go back, you can see these kind of exist as tags. So these items here will be your tags in the Open API specification. One thing that's kind of like interesting is it would be cool if it was relatively trivial to build search over an OpenAPI specification, because if you don't know what you're looking for, then this search bar does not always work great. For example, if you type in search within groups. Oh, this one actually works pretty good. Wow, this seems like an enhanced Open API specification search bar. Nick Khami: I should have made sure that I checked it before going. So this is quite good. Our search bar for tree in example, does not actually, oh, it does have the same search, but I was really interested in, I guess, explaining how you could enhance this or hook it up to vector search in order to do rag audit. It's what I want to highlight here. Qdrant has a really interesting feature called groups. You can search over a group of points at one time and kind of return results in a group oriented way instead of only searching for a singular route. And for an Open API specification, that's very interesting. Because it means that you can search for a tag while looking at each tag's individual paths. Nick Khami: It is like a, it's something that's very difficult to build without a system like Qdrant and kind of like one of the primary, I think, feature offerings of it compared to PG vector or maybe like brute force with face or yousearch or something. And the goal that I kind of had was to figure out which endpoint was going to be most relevant for what I was trying to do. In a lot of cases with particularly Qdrants, Open API spec in this example. To go about doing that, I used a scripting runtime for JavaScript called Bun. I'm a big fan of it. It tends to work quite well. It's very performant and kind of easy to work with. I start off here by loading up the Qdrant Open API spec from JSON and then I import some things that exist inside of tree. Nick Khami: Trieve uses Qdrant under the hood to offer a lot of its features, and that's kind of how I'm going to go about doing this here. So I import some stuff from the tree SDK client package, instantiate a couple of environment variables, set up my configuration for the tree API, and now this is where it gets interesting. I pull the tags from the Qdrant Open API JSON specification, which is this array here, and then I iterate over each tag and I check if I've already created the group. If I have, then I do nothing. But if I have it, then I go ahead and I create a group. For each tag, I'm creating these groups so that way I can insert each path into its relevant groups whenever I create them as individual points. Okay, so I finished creating all of the groups, and now for like the next part, I iterate over the paths, which are the individual API routes. For each path I pull the tags that it has, the summary, the description and the API method. Nick Khami: So post, get put, delete, et cetera, and I then create the point. In Trieve world, we call each point a chunk, kind of using I guess like rag terminology. For each individual path I create the chunk and by including its tags in this group tracking ids request body key, it will automatically get added to its relevant groups. I have some try catches here, but that's really the whole script. It's very, very simple to build search over an Open API specification with a tool like Trieve and Qdrant. I think really there's something to highlight here and how awesome it is to work with a group based system. If you're using Qdrant. If you can think about an e commerce store, sometimes you have multiple colorways of an item. Nick Khami: You'll have a red version of the sneaker, a white version, a blue version, et cetera. And when someone performs a search, you not only want to find the relevant shoe, you want to find the relevant colorway of that shoe. And groups allow you to do this within Qdrant because you can place each colorway as an individual point. Or again, in tree world, chunk into a given group, and then when someone searches, they're going to get the relevant colorway at the top of the given group. It's really nice, really cool. You can see running this is very simple. If I want to update the entire data set by running this again, I can, and this is just going to go ahead and create all the relevant chunks for every route that Qdrant offers. If you guys who are watching or interested in replicating this experiment, I created an open source GitHub repo. Nick Khami: We're going to zoom in here that you can reference@GitHub.com/devflowinc/OpenAPI/search. You can follow the instructions in the readme to replicate the whole thing. Okay, but I uploaded all the data. Let's see how this works from a UI perspective. Yeah. Trieve bundles in a really nice UI for searching after you add all of your data. So if I go home here, you can see that I'm using the Qdrant Open API spec dataset. And the organization here is like the email I use. Nick Khami: Nick.K@OpenAPI one of the nice things about Trieve, kind of like me on just the simplicity of adding data is we use Qdrant's multi tenancy feature to offer the ability to have multiple datasets within a given organization. So you can have, I have the Open API organization. You can create additional datasets with different embedding models to test with and experiment when it comes to your search. Okay. But not going to go through all those features today, I kind of want to highlight this Open API search that we just finished building. So I guess to compare and contrast, I'm going to use the exact same query that I used before, also going to zoom in. Okay. Nick Khami: And that one would be like what we just did, right? So how do I maybe, how do I create a group? This isn't a Gen AI rag search. This is just a generic, this is just a generic search. Okay, so for how do I create a group? We're going to get all these top level results. In this case, we're not doing a group oriented search. We're just returning relevant chunks. Sometimes, or a lot of times I think that people will want to have a more group oriented search where the results are grouped by tag. So here I'm going to see that the most relevant endpoint or the most relevant tag within Qdrant's Open API spec is in theory collections, and within collections it thinks that these are the top three routes that are relevant. Recommend point groups discover bash points recommend bash points none of these are quite what I wanted, which is how do I create a group? But it's okay for cluster, you can see create shard key delete. Nick Khami: So for cluster, this is kind of interesting. It thinks cluster is relevant, likely because a cluster is a kind of group and it matches to a large extent on the query. Then we also have points which it keys in on the shard system and the snapshotting system. When the next version gets released, we'll have rolling snapshots in Qdrant, which is very exciting. If anyone else is excited about that feature. I certainly am. Then it pulls the metrics. For another thing that might be a little bit easier for the search to work on. Nick Khami: You can type in how do I search points via group? And now it kind of is going to key in on what I would say is a better result. And you can see here we have very nice sub sentence highlighting on the request. It's bolding the sentence of the response that it thinks is the most relevant, which in this case are the second two paragraphs. Yep, the description and summary of what the request does. Another convenient thing about tree is in our default search UI, you can include links out to your resources. If I click this link, I'm going to immediately get to the correct place within the Qdrant redox specification. That's the entire search experience. For the Jedi side of this, I did a lot less optimization, but we can experiment and see how it goes. Nick Khami: I'm going to zoom in again, guys. Okay, so let's say I want to make a new rag chat and I'm going to ask here, how would I search over points in a group oriented way with Qdrant? And it's going to go ahead and do a search query for me on my behalf again, powered by the wonder of Qdrant. And once it does this search query, I'm able to get citations and and see what the model thinks. The model is a pretty good job with the first response, and it says that to search for points and group oriented wave Qdrant, I can utilize the routes and endpoints provided by the system and the ones that I'm going to want to use first is points search groups. If I click doc one here and I look at the route, this is actually correct. Conveniently, you're able to open the link in the. Oh, well, okay, this env is wrong, but conveniently what this is supposed to do, if I paste it and fix the incorrect portion of the system. Changing chat to search is you can load the individual chunk of the search UI and read it here, and then you can update it to include document expansion, change the actual copy of what was indexed out, et cetera. Nick Khami: It's like a really convenient way to merchandise and enhance your data set without having to write a lot of code. Yeah, and it'll continue writing its answer. I'm not going to go through the whole thing, but this really encapsulates what I wanted to show. This is incredibly simple to do. You can get this RAG, this search and the data upload done in a span of maybe 10-15 minutes, which is really cool and something that we were only really possible to build at Trieve, thanks to what the amazing team at Qdrant has been able to create. And yeah, guys, hopefully that was cool. Demetrios: Excellent. So I've got some questions. Woo the infinite spinning field. So I want to know about Trieve and I want to jump into what you all are doing there. And then I want to jump in a little bit about the evolution that you've seen of Qdrant over the years, because you've been using it for a while. But first, can we get a bit of an idea on what you're doing and how you're dedicating yourself to creating what you're creating? Nick Khami: Yeah. At Trieve, we just wrapped up the Y Combinator W 24 batch and our fundogram, which is like cool. It took us like a year. So Dens and I started Trieve in January of 2023, and we kind of kept building and building and building, and in the process, we started out trying to build an app for you to have like AI powered arguments at work. It wasn't the best of ideas. That's kind of why we started using Qdrant originally in the process of building that, we thought it was really hard to get the amazing next gen search that products like Qdrant offer, because for a typical team, they have to run a Docker compose file on the local machine, add the Qdrant service, that docker compose docker compose up D stand up Qdrant, set an env, download the Qdrant SDK. All these things get very, very difficult after you index all of your data, you then have to create a UI to view it, because if you don't do that. It can be very hard to judge performance. Nick Khami: I mean, you can always make these benchmarks, but search and recommendations are kind of like a heuristic thing. It's like you can always have a benchmark, but the data is dynamic, it changes and you really like. In what we were experiencing at the time, we really needed a way to quickly gauge the system was doing. We gave up on our rag AI application argumentation app and pivoted to trying to build infrastructure for other people to benefit from the high quality search that is offered by splayed for sparse, or like sparse encode. I mean, elastics, LSR models, really cool. There's all the dense embedding vector models and we wanted to offer a managed suite of infrastructure for building on this kind of stuff. That's kind of what tree is. So like, with tree you go to. Nick Khami: It's more of like a managed experience. You go to the dashboard, you make an account, you create the data set, you get an API key and the data set id, you go to your little script and mine for the Open API specs, 80 lines, you add all your data and then boom, bam, bing bop. You can just start searching and you can. We offer recommendations as well. Maybe I should have shown those in my demo, like, you can open an individual path and get recommendations for similar. Demetrios: There were recommendations, so I wasn't too far off the mark. See, search and recommendation, they just, they occupy the same spot in my head. Nick Khami: And Qdrant also offers recommendations for groups, guys. So like, which is really cool. Like you can, you can, like, not only can you search groups, you can also recommend groups, which is, I think, awesome. But yeah, you can upload all your data, you go to the search UI, you can search it, you can test out how recommendations are working in a lot of cases too. You can fix problems in your search. A good example of this is we built search for Y comb later companies so they could make it a lot better. Algolia is on an older search algorithm that doesn't offer semantic capabilities. And that means that you go to the Y combinator search companies bar, you type in which company offers short term rentals and you don't get Airbnb. Nick Khami: But with like Trieve it is. It is. But with tree, like, the magic of it is that even, believe it or not, there's a bunch of YC companies to do short term rentals and Airbnb does not appear first naturally. So with tree like, we offer a merchandising UI where you put that query in, you see Airbnb ranks a little bit lower than you want. You can immediately adjust the text that you indexed and even add like a re ranking weight so that appears higher in results. Do it again and it works. And you can also experiment and play with the rag. I think rag is kind of a third class citizen in our API. Nick Khami: It turns out search recommendations are a lot more popular with our customers and users. But yeah, like tree, I would say like to encapsulate it. Trieve is an all in one infrastructure suite for teams building search recommendations in Rag. And we bundle the power of databases like Qdrant and next gen search ML AI models with uis for fine tuning ranking of results. Demetrios: Dude, the reason I love this is because you can do so much with like well done search that is so valuable for so many companies and it's overlooked as like a solved problem, I think, for a lot of people, but it's not, and it's not that easy as you just explained. Nick Khami: Yeah, I mean, like we're fired up about it. I mean, like, even if you guys go to like YC.Trieve.AI, that's like the Y combinator company search and you can a b test it against like the older style of search that Algolia offers or like elasticsearch offers. And like, it's, to me it's magical. It's like it's an absolute like work of human ingenuity and amazingness that you can type in, which company should I get an airbed at? And it finds Airbnb despite like none of the keywords matching up. And I'm afraid right now our brains are trained to go to Google. And on Google search bar you can ask a question, you can type in abstract ideas and concepts and it works. But anytime we go to an e commerce search bar or oh, they're so. Demetrios: Bad, they're so bad. Everybody's had that experience too, where I don't even search. Like, I just am like, well, all right, or I'll go to Google and search specifically on Google for that website, you know, and like put in parentheses. Nick Khami: We'Re just excited about that. Like we want to, we're trying to make it a lot like the goal of tree is to make it a lot easier to power these search experiences, the latest gentech, and help fix this problem. Like, especially if AI continues to get better, people are going to become more and more used to like things working and not having to hack around, faceting and filtering for it to work. And yeah, we're just excited to make that easier for companies to work on and build. Demetrios: So there's one question coming through in the chat asking where we can get actual search metrics. Nick Khami: Yeah, so that's like the next thing that we're planning to add. Basically, like right now at tree, we don't track your users as queries. The next thing that we're like building at tree is a system for doing that. You're going to be able to analyze all of the searches that have been used on your data set within that search merchandising UI, or maybe a new UI, and adjust your rankings spot fix things the same way you can now, but with the power of the analytics. The other thing we're going to be offering soon is dynamically tunable cross encoders. Cross encoders are this magic neural net that can zip together full text and semantic results into a new ranked order. And they're underutilized, but they're also hard to adjust over time. We're going to be offering API endpoints for uploading, for doing your click through rates on the search results, and then dynamically on a batched timer tuning across encoder to adjust ranking. Nick Khami: This should be coming out in the next two to three weeks. But yeah, we're just now getting to the analytics hurdle. We also just got to the speed hurdle. So things are fast now. As you guys hopefully saw in the demo, it's sub 50 milliseconds for most queries. P 95 is like 80 milliseconds, which is pretty cool thanks to Qdrant, by the way. Nice Qdrant is huge, I mean for powering all of that. But yeah, analytics will be coming next two or three weeks. Nick Khami: We're excited about it. Demetrios: So there's another question coming through in the chat and they're asking, I wonder if llms can suggest graph QL queries based on schema as it's not so tied to endpoints. Nick Khami: I think they could in the system that we built for this case, I didn't actually use the response body. If you guys go to devflowinc Open API search on GitHub, you guys can make your own example where you fix that. In the response query of the Open API JSON spec, you have the structure. If you embed that inside of the chunk as another paragraph tag and then go back to doing rag, it probably can do that. I see no reason why I wouldn't be able to. Demetrios: I just dropped the link in the chat for anybody that is interested. And now let's talk a little bit for these next couple minutes about the journey of using Qdrant. You said you've been using it since 2022. Things have evolved a ton with the product over these years. Like, what have you seen what's been the most value add that you've had since starting? Nick Khami: I mean, there's so many, like, okay, the one that I have highlighted in my head that I wanted to talk about was, I remember in May of 2023, there's a GitHub issue with an Algora bounty for API keys. I remember Dens and I, we'd already been using it for a while and we knew there was no API key thing. There's no API key for it. We were always joking about it. We were like, oh, we're so early. There's not even an API key for our database. You had to have access permissions in your VPC or sub routing to have it work securely. And I'm not sure it's like the highest. Nick Khami: I'll talk about some other things where higher value add, but I just remember, like, how cool that was. Yeah, yeah, yeah. Demetrios: State of the nation. When you found out about it and. Nick Khami: It was so hyped, like, the API key had added, we were like, wow, this is awesome. It was kind of like a simple thing, but like, for us it was like, oh, whoa, this is. We're so much more comfortable in security now. But dude, Qdrant added so many cool things. Like a couple of things that I think I'd probably highlight are the group system. That was really awesome when that got added. I mean, I think it's one of my favorite features. Then after that, the sparse vector support and a recent version was huge. Nick Khami: We had a whole crazy subsystem with Tantivy. If anyone watching knows the crate Tantivy, it's like a full text. Uh, it's like a leucine alternative written in rust. Um, and we like, built this whole crazy subsystem and then quad fit, like, supported the sparse vectors and we were like, oh my God, we should have probably like, worked with them on the sparse vector thing we didn't even know you guys wanted to do, uh, because like, we spent all this time building it and probably could have like, helped out that PR. We felt bad, um, because that was really nice. When that got added, the performance fixes for that were also really cool. Some of the other things that, like, Qdrant added while we've been using it that were really awesome. Oh, the multiple recommendation modes, I think I forget what they're both called, but there's, it's also like insane for people, like, out there watching, like, try Qdrant for sure, it's so, so, so good compared to like a lot of what you can do in a PG vector. Nick Khami: There's like, this recommendation feature is awesome. Typically when you do recommendations, you take the results that you want to base recommendations off of and you build like an average vector that you then use to search. Qdrant offers a more evolved recommendation pattern now where you can traverse the graph looking at the positive point similarity, then also the negative similarity. And if the similarity of the negative points is higher than that of the positive points, it'll ignore that edge recommendations. And for us at least, like with our customers, this improved their quality of recommendations a lot when they use negative samples. And we didn't even find out about that. It was in the version release notes and we didn't think about it. And like a month or two later we had a customer that was like communicating that they wanted higher quality recommendations. Nick Khami: And we were like, okay, what is like, are we using all the features available? And we weren't. That was cool. Demetrios: The fact that you understand that now and you were able to communicate it back to me almost like better than I communicate it to people is really cool. And it shows that you've been in the weeds on it and you have seen a strong use case for it, because sometimes it's like, okay, this is out there. It needs to be communicated in the best use case so that people can understand it. And it seems like with that e commerce use case, it really stuck. Nick Khami: This one was actually for a company that does search over american legislation called funny enough, we want more e commerce customers for retrieve. Most of our customers right now are like SaaS applications. This particular customer, I don't think they'd mind me shouting them out. It's called Bill Track 50. If you guys want to like search over us legislation, try them out. They're very, very good. And yeah, they were the team that really used it. But yeah, it's another cool thing, I think, about infrastructure like Qdrant in general, and it's so, so powerful that like a lot of times it can be worth like getting an implementation partner. Nick Khami: Like, even if you're gonna, if you're gonna use Qdrant, like, the team at Qdrant is very helpful and you should consider reaching out to them because they can probably help anyone who's going to build search recommendations to figure out what is offered and what can help on a high level, not so much a GitHub issue code level, but at a high level. Thinking about your use case. Again, search is such a heuristic problem and so human in a way that it's always worth talking through your solution with people it that are very familiar with search recommendations in general. Demetrios: Yeah. And they know the best features and the best tool to use that is going to get you that outcome you're looking for. So. All right, Nick, last question for you. It is about Trieve. I have my theory on why you call it that. Is it retrieve? You just took off the Re-? Nick Khami: Yes. Drop the read. It's cleaner. That's like the Facebook quote, but for Trieve. Demetrios: I was thinking when I first read it, I was like, it must be some french word I'm not privy to. And so it's cool because it's french. You just got to put like an accent over one of these e's or both of them, and then it's even cooler. It's like luxury brand to the max. So I appreciate you coming on here. I appreciate you walking us through this and talking about it, man. This was awesome. Nick Khami: Yeah, thanks for having me on. I appreciate it. Demetrios: All right. For anybody else that is out there and wants to come on the vector space talks, come join us. You know where to find us. As always, later. ",blog/building-search-rag-for-an-openapi-spec-nick-khami-vector-space-talks.md "--- draft: false title: ""Qdrant is Now Available on Azure Marketplace!"" short_description: Discover the power of Qdrant on Azure Marketplace! description: Discover the power of Qdrant on Azure Marketplace! Get started today and streamline your operations with ease. preview_image: /blog/azure-marketplace/azure-marketplace.png date: 2024-03-26T10:30:00Z author: David Myriel featured: false weight: 0 tags: - Qdrant - Azure Marketplace - Enterprise - Vector Database --- We're thrilled to announce that Qdrant is now [officially available on Azure Marketplace](https://azuremarketplace.microsoft.com/en-en/marketplace/apps/qdrantsolutionsgmbh1698769709989.qdrant-db), bringing enterprise-level vector search directly to Azure's vast community of users. This integration marks a significant milestone in our journey to make Qdrant more accessible and convenient for businesses worldwide. > *With the landscape of AI being complex for most customers, Qdrant's ease of use provides an easy approach for customers' implementation of RAG patterns for Generative AI solutions and additional choices in selecting AI components on Azure,* - Tara Walker, Principal Software Engineer at Microsoft. ## Why Azure Marketplace? [Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/) is renowned for its robust ecosystem, trusted by millions of users globally. By listing Qdrant on Azure Marketplace, we're not only expanding our reach but also ensuring seamless integration with Azure's suite of tools and services. This collaboration opens up new possibilities for our users, enabling them to leverage the power of Azure alongside the capabilities of Qdrant. > *Enterprises like Bosch can now use the power of Microsoft Azure to host Qdrant, unleashing unparalleled performance and massive-scale vector search. ""With Qdrant, we found the missing piece to develop our own provider independent multimodal generative AI platform at enterprise scale,* - Jeremy Teichmann (AI Squad Technical Lead & Generative AI Expert), Daly Singh (AI Squad Lead & Product Owner) - Bosch Digital. ## Key Benefits for Users: - **Rapid Application Development:** Deploying a cluster on Microsoft Azure via the Qdrant Cloud console only takes a few seconds and can scale up as needed, giving developers maximal flexibility for their production deployments. - **Billion Vector Scale:** Seamlessly grow and handle large-scale datasets with billions of vectors by leveraging Qdrant's features like vertical and horizontal scaling or binary quantization with Microsoft Azure's scalable infrastructure. - **Unparalleled Performance:** Qdrant is built to handle scaling challenges, high throughput, low latency, and efficient indexing. Written in Rust makes Qdrant fast and reliable even under high load. See benchmarks. - **Versatile Applications:** From recommendation systems to similarity search, Qdrant's integration with Microsoft Azure provides a versatile tool for a diverse set of AI applications. ## Getting Started: Ready to experience the benefits of Qdrant on Azure Marketplace? Getting started is easy: 1. **Visit the Azure Marketplace**: Navigate to [Qdrant's Marketplace listing](https://azuremarketplace.microsoft.com/en-en/marketplace/apps/qdrantsolutionsgmbh1698769709989.qdrant-db). 2. **Deploy Qdrant**: Follow the simple deployment instructions to set up your instance. 3. **Start Using Qdrant**: Once deployed, start exploring the [features and capabilities of Qdrant](/documentation/concepts/) on Azure. 4. **Read Documentation**: Read Qdrant's [Documentation](/documentation/) and build demo apps using [Tutorials](/documentation/tutorials/). ## Join Us on this Exciting Journey: We're incredibly excited about this collaboration with Azure Marketplace and the opportunities it brings for our users. As we continue to innovate and enhance Qdrant, we invite you to join us on this journey towards greater efficiency, scalability, and success. Ready to elevate your business with Qdrant? **Click the banner and get started today!** [![Get Started on Azure Marketplace](cta.png)](https://azuremarketplace.microsoft.com/en-en/marketplace/apps/qdrantsolutionsgmbh1698769709989.qdrant-db) ### About Qdrant: Qdrant is the leading, high-performance, scalable, open-source vector database and search engine, essential for building the next generation of AI/ML applications. Qdrant is able to handle billions of vectors, supports the matching of semantically complex objects, and is implemented in Rust for performance, memory safety, and scale. ",blog/azure-marketplace.md "--- draft: false title: ""VirtualBrain: Best RAG to unleash the real power of AI - Guillaume Marquis | Vector Space Talks"" slug: virtualbrain-best-rag short_description: Let's explore information retrieval with Guillaume Marquis, CTO & Co-Founder at VirtualBrain. description: Guillaume Marquis, CTO & Co-Founder at VirtualBrain, reveals the mechanics of advanced document retrieval with RAG technology, discussing the challenges of scalability, up-to-date information, and navigating user feedback to enhance the productivity of knowledge workers. preview_image: /blog/from_cms/guillaume-marquis-2-cropped.png date: 2024-03-27T12:41:51.859Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - Vector Search - Retrieval Augmented Generation - VirtualBrain --- > *""It's like mandatory to have a vector database that is scalable, that is fast, that has low latencies, that can under parallel request a large amount of requests. So you have really this need and Qdrant was like an obvious choice.”*\ — Guillaume Marquis > Guillaume Marquis, a dedicated Engineer and AI enthusiast, serves as the Chief Technology Officer and Co-Founder of VirtualBrain, an innovative AI company. He is committed to exploring novel approaches to integrating artificial intelligence into everyday life, driven by a passion for advancing the field and its applications. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/20iFzv2sliYRSHRy1QHq6W?si=xZqW2dF5QxWsAN4nhjYGmA), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/v85HqNqLQcI?feature=shared).*** ## **Top takeaways:** Who knew that document retrieval could be creative? Guillaume and VirtualBrain help draft sales proposals using past reports. It's fascinating how tech aids deep work beyond basic search tasks. Tackling document retrieval and AI assistance, Guillaume furthermore unpacks the ins and outs of searching through vast data using a scoring system, the virtue of RAG for deep work, and going through the 'illusion of work', enhancing insights for knowledge workers while confronting the challenges of scalability and user feedback on hallucinations. Here are some key insight from this episode you need to look out for: 1. How to navigate the world of data with a precision scoring system for document retrieval. 2. The importance of fresh data and how to avoid the black holes of outdated info. 3. Techniques to boost system scalability and speed — essential in the vastness of data space. 4. AI Assistants tailored for depth rather than breadth, aiding in tasks like crafting stellar commercial proposals. 5. The intriguing role of user perception in AI tool interactions, plus a dash of timing magic. > Fun Fact: VirtualBrain uses Qdrant, for its advantages in speed, scalability, and API capabilities. > ## Show notes: 00:00 Hosts and guest recommendations.\ 09:01 Leveraging past knowledge to create new proposals.\ 12:33 Ingesting and parsing documents for context retrieval.\ 14:26 Creating and storing data, performing advanced searches.\ 17:39 Analyzing document date for accurate information retrieval.\ 20:32 Perceived time can calm nerves and entertain.\ 24:23 Tried various vector databases, preferred open source.\ 27:42 LangFuse: open source tool for monitoring tasks.\ 33:10 AI tool designed to stay within boundaries.\ 34:31 Minimizing hallucination in AI through careful analysis. ## More Quotes from Guillaume: *""We only exclusively use open source tools because of security aspects and stuff like that. That's why also we are using Qdrant one of the important point on that. So we have a system, we are using this serverless stuff to ingest document over time.”*\ — Guillaume Marquis *""One of the challenging part was the scalability of the system. We have clients that come with terra octave of data and want to be parsed really fast and so you have the ingestion, but even after the semantic search, even on a large data set can be slow. And today ChatGPT answers really fast. So your users, even if the question is way more complicated to answer than a basic ChatGPT question, they want to have their answer in seconds. So you have also this challenge that you really have to take care.”*\ — Guillaume Marquis *""Our AI is not trained to write you a speech based on Shakespeare and with the style of Martin Luther King. It's not the purpose of the tool. So if you ask something that is out of the box, he will just say like, okay, I don't know how to answer that. And that's an important point. That's a feature by itself to be able to not go outside of the box.”*\ — Guillaume Marquis ## Transcript: Demetrios: So, dude, I'm excited for this talk. Before we get into it, I want to make sure that we have some pre conversation housekeeping items that go out, one of which being, as always, we're doing these vector space talks and everyone is encouraged and invited to join in. Ask your questions, let us know where you're calling in from, let us know what you're up to, what your use case is, and feel free to drop any questions that you may have in the chat. We will be monitoring it like a hawk. Today I am joined by none other than Sabrina. How are you doing, Sabrina? Sabrina Aquino: What's up, Demetrios? I'm doing great. Excited to be here. I just love seeing what amazing stuff people are building with Qdrant and. Yeah, let's get into it. Demetrios: Yeah. So I think I see Sabrina's wearing a special shirt which is don't get lost in vector space shirt. If anybody wants a shirt like that. There we go. Well, we got you covered, dude. You will get one at your front door soon enough. If anybody else wants one, come on here. Present at the next vector space talks. Demetrios: We're excited to have you. And we've got one last thing that I think is fun that we can talk about before we jump into the tech piece of the conversation. And that is I told Sabrina to get ready with some recommendations. Know vector databases, they can be used occasionally for recommendation systems, but nothing's better than getting that hidden gem from your friend. And right now what we're going to try and do is give you a few hidden gems so that the next time the recommendation engine is working for you, it's working in your favor. And Sabrina, I asked you to give me one music that you can recommend, one show and one rando. So basically one random thing that you can recommend to us. Sabrina Aquino: So I've picked. I thought about this. Okay, I give it some thought. The movie would be Catch Me If You Can by Leo DiCaprio and Tom Hanks. Have you guys watched it? Really good movie. The song would be oh, children by knee cave and the bad scenes. Also very good song. And the random recommendation is my favorite scented candle, which is citrus notes, sea salt and cedar. Sabrina Aquino: So there you go. Demetrios: A scented candle as a recommendation. I like it. I think that's cool. I didn't exactly tell you to get ready with that. So I'll go next, then you can have some more time to think. So for anybody that's joining in, we're just giving a few recommendations to help your own recommendation engines at home. And we're going to get into this conversation about rags in just a moment. But my song is with. Demetrios: Oh, my God. I've been listening to it because I didn't think that they had it on Spotify, but I found it this morning and I was so happy that they did. And it is Bill Evans and Chet Baker. Basically, their whole album, the legendary sessions, is just like, incredible. But the first song on that album is called Alone Together. And when Chet Baker starts playing his little trombone, my God, it is like you can feel emotion. You can touch it. That is what I would recommend. Demetrios: Anyone out there? I'll drop a link in the chat if you like it. The film or series. This fool, if you speak Spanish, it's even better. It is amazing series. Get that, do it. And as the rando thing, I've been having Rishi mushroom powder in my coffee in the mornings. I highly recommend it. All right, last one, let's get into your recommendations and then we'll get into this rag chat. Guillaume Marquis: So, yeah, I sucked a little bit. So for the song, I think I will give something like, because I'm french, I think you can hear it. So I will choose Get Lucky of Daft Punk and because I am a little bit sad of the end of their collaboration. So, yeah, just like, I cannot forget it. And it's a really good music. Like, miss them as a movie, maybe something like I really enjoy. So we have a lot of french movies that are really nice, but something more international maybe, and more mainstream. Jungle of Tarantino, that is really a good movie and really enjoy it. Guillaume Marquis: I watched it several times and still a good movie to watch. And random thing, maybe a city. A city to go to visit. I really enjoyed. It's hard to choose. Really hard to choose a place in general. Okay, Florence, like in Italy. Demetrios: There we go. Guillaume Marquis: Yeah, it's a really cool city to go. So if you have time, and even Sabrina, if you went to Europe soon, it's really a nice place to go. Demetrios: That is true. Sabrina is going to Europe soon. We're blowing up her spot right now. So hopefully Florence is on the list. I know that most people watching did not tune in to hearing the three of us just randomly give recommendations. We are here to talk more about retrieval augmented generation. But hopefully those recommendations help some of you all at home with your recommendation engines. And you're maybe using a little bit of a vector database in your recommendation engine building skills. Demetrios: Let's talk about this, though, man, because I think it would be nice if you can set the scene. What exactly are you working on? I know you've got virtual brain. Can you tell us a little bit about that so that we can know how you're doing rags? Guillaume Marquis: Because rag is like, I think the most famous word in the AI sphere at the moment. So, virtual brain, what we are building in particular is that we are building an AI assistant for knowledge workers. So we are not only building this next gen search bar to search content through documents, it's a tool for enterprises at enterprise grade that provide some easy way to interact with your knowledge. So basically, we create a tool that we connect to the world knowledge of the company. It could be whatever, like the drives, sharepoints, whatever knowledge you have, any kind of documents, and with that you will be able to perform tasks on your knowledge, such as like audit, RFP, due diligence. It's not only like everyone that is building rag or building a kind of search system through rag are always giving the same number. Is that like 20%? As a knowledge worker, you spend 20% of your time by searching information. And I think I heard this number so much time, and that's true, but it's not enough. Guillaume Marquis: Like the search bar, a lot of companies, like many companies, are working on how to search stuff for a long time, and it's always a subject. But the real pain and what we want to handle and what we are handling is deep work, is real tasks, is how to help these workers, to really help them as an assistant, not only on search bar, like as an assistant on real task, real added value tasks. So inside that, can you give us. Demetrios: An example of that? Is it like that? It pops up when it sees me working on notion and talking about or creating a PRD, and then it says, oh, this might be useful for your PRD because you were searching about that a week ago or whatever. Guillaume Marquis: For instance. So we are working with companies that have from 100 employees to several thousand employees. For instance, when you have to create a commercial proposal as a salesperson in a company, you have an history with a company, an history in this ecosystem, a history within this environment, and you have to capitalize on all this commercial proposition that you did in the past in your company, you can have thousands of propositions, you can have thousands of documents, you can have reporting from different departments, depending of the industry you are working on, and with that, with the tool. So you can ask question, you can capitalize on this document, and you can easily create new proposal by asking question, by interacting with the tool, to go deeply in this use case and to create something that is really relevant for your new use case. And that is using really the knowledge that you have in your company. And so it's not only like retrieve or just like find me as last proposition of this client. It's more like, okay, use x past proposals to create a new one. And that's a real challenge that is linked to our subject. Guillaume Marquis: It's because it's not only like retrieve one, two or even ten documents, it's about retrieving like hundred, 200, a lot of documents, a lot of information, and you have a real something to do with a lot of documents, a lot of context, a lot of information you have to manage. Demetrios: I have the million dollar question that I think is probably coming through everyone's head is like, you're retrieving so many documents, how are you evaluating your retrieval? Guillaume Marquis: That's definitely the $1 million question. It's a toss task to do, to be honest. To be fair. Currently what we are doing is that we monitor every tasks of the process, so we have the output of every tasks. On each tasks we use a scoring system to evaluate if it's relevant to the initial question or the initial task of the user. And we have a global scoring system on all the system. So it's quite odd, it's a little bit empiric, but it works for now. And it really help us to also improve over time all the tasks and all the processes that are done by the tool. Guillaume Marquis: So it's really important. And for instance, you have this kind of framework that is called RAGtriad. That is a way to evaluate rag on the accuracy of the context you retrieve on the link with the initial question and so on, several parameters. And you can really have a first way to evaluate the quality of answers and the quality of everything on each steps. Sabrina Aquino: I love it. Can you go more into the tech that you use for each one of these steps in architecture? Guillaume Marquis: So the process is quite like, it starts at the moment we ingest documents because basically it's hard to retrieve good documents or retrieve documents in a proper way if you don't parse it well. If you just like the dumb rug, as I call it, is like, okay, you take a document, you divide it in text, and that's it. But you will definitely lose the context, the global context of the document, what the document in general is talking about. And you really need to do it properly and to keep this context. And that's a real challenge, because if you keep some noises, if you don't do that well, everything will be broken at the end. So technically how it works. So we have a proper system that we developed to ingest documents using technologies, open source technologies. We only exclusively use open source tools because of security aspects and stuff like that. Guillaume Marquis: That's why also we are using Qdrant one of the important point on that. So we have a system, we are using this serverless stuff to ingest document over time. We have also models that create tags on documents. So we use open source slms to tag documents, to enrich documents, also to create a new title, to create a summary of documents, to keep the context. When we divide the document, we keep the title of paragraphers, the context inside paragraphers, and we leak every piece of text between each other to keep the context after that, when we retrieve the document. So it's like the retrieving part. We have a new breed search system. We are using Qdrant on the semantic port. Guillaume Marquis: So basically we are creating unbelieving, we are storing it into Qdrant. We are performing similarity search to retrieve documents based on title summary filtering, on tags, on the semantic context. And we have also some keyword search, but it's more for specific tasks, like when we know that we need a specific document, at some point we are searching it with a keyword search. So it's like a kind of ebrid system that is using deterministic approach with filtering with tags, and a probabilistic approach with selecting document with this ebot search, and doing a scoring system after that to get what is the most relevant document and to select how much content we will take from each document. It's a little bit techy, but it's really cool to create and we have a way to evolve it and to improve it. Demetrios: That's what we like around here, man. We want the techie stuff. That's what I think everybody signed up for. So that's very cool. One question that definitely comes up a lot when it comes to rags and when you're ingesting documents, and then when you're retrieving documents and updating documents, how do you make sure that the documents that you are, let's say, I know there's probably a hypothetical HR scenario where the company has a certain policy and they say you can have European style holidays, you get like three months of holidays a year, or even French style holidays. Basically, you just don't work. And whenever you want, you can work, you don't work. And then all of a sudden a US company comes and takes it over and they say, no, you guys don't get holidays. Demetrios: Even when you do get holidays, you're not working or you are working and so you have to update all the HR documents, right? So now when you have this knowledge worker that is creating something, or when you have anyone that is getting help, like this copilot help, how do you make sure that the information that person is getting is the most up to date information possible? Guillaume Marquis: That's a new $1 million question. Demetrios: I'm coming with the hits today. I don't know what you were looking for. Guillaume Marquis: That's a really good question. So basically you have several possibilities on that. First one you have like this PowerPoint presentation. That's a mess in the knowledge bases and sometimes you just want to use the most updated up to date documents. So basically we can filter on the created ad and the date of the documents. Sometimes you want to also compare the evolution of the process over time. So that's another use case. Basically we base. Guillaume Marquis: So during the ingestion we are analyzing if date is inside the document, because sometimes in documentation you have like the date at the end of the document or at the beginning of the document. That's a first way to do it. We have the date of the creation of the document, but it's not a source of truth because sometimes you created it after or you duplicated it and the date is not the same, depending if you are working on Windows, Microsoft, stuff like that. It's definitely a mess. And also we compare documents. So when we retry the documents and documents are really similar one to each other, we keep it in mind and we try to give more information as possible. Sometimes it's not possible, so it's not 100%, it's not bulletproof, but it's a real question of that. So it's a partial answer of your question, but it's like some way we are today filtering and answering on this special topic. Sabrina Aquino: Now I wonder what was the most challenging part of building this frag since there was like. Guillaume Marquis: There are a lot of parts that are really challenging. Sabrina Aquino: Challenging. Guillaume Marquis: One of the challenging part was the scalability of the system. We have clients that come with terra octave of data and want to be parsed really fast and so you have the ingestion, but even after the semantic search, even on a large data set can be slow. And today Chat GPT answer really fast. So your users, even if the question is way more complicated to answer than a basic Chat GPT question, they want to have their answer in seconds. So you have also this challenge that is really you have to take care. So it's quite challenging and it's like this industrial supply chain. So when you upgrade something, you have to be sure that everything is working well on the other side. And that's a real challenge to handle. Guillaume Marquis: And we are still on it because we are still evolving and getting more data. And at the end of the day, you have to be sure that everything is working well in terms of LLM, but in terms of research and in terms also a few weeks to give some insight to the user of what is working under the hood, to give them the possibility to wait a few seconds more, but starting to give them pieces of answer. Demetrios: Yeah, it's funny you say that because I remember talking to somebody that was working at you.com and they were saying how there's like the actual time. So they were calling it something like perceived time and real, like actual time. So you as an end user, if you get asked a question or maybe there's like a trivia quiz while the question is coming up, then it seems like it's not actually taking as long as it is. Even if it takes 5 seconds, it's a little bit cooler. Or as you were mentioning, I remember reading some paper, I think, on how people are a lot less anxious if they see the words starting to pop up like that and they see like, okay, it's not just I'm waiting and then the whole answer gets spit back out at me. It's like I see the answer forming as it is in real time. And so that can calm people's nerves too. Guillaume Marquis: Yeah, definitely. Human's brain is like marvelous on that. And you have a lot of stuff. Like, one of my favorites is the illusion of work. Do you know it? It's the total opposite. If you have something that seems difficult to do, adding more time of processing. So the user will imagine that it's really an OD task to do. And so that's really funny. Demetrios: So funny like that. Guillaume Marquis: Yeah. Yes. It's the opposite of what you will think if you create a product, but that's real stuff. And sometimes just to output them that you are performing toss tasks in the background, it helps them to. Oh, yes. My question was really like a complex question, like you have a lot of work to do. It's Axe word like. If you answer too fast, they will not trust the answer. Guillaume Marquis: And it's the opposite if you answer too slow. You can have this. Okay. But it should be dumb because it's really slow. So it's a dumb AI or stuff like that. So that's really funny. My co founder actually was a product guy, so really focused on product, and he really loves this kind of stuff. Demetrios: Great thought experiment, that's interesting. Sabrina Aquino: And you mentioned like you chose Qdrant because it's open source, but now I wonder if there's also something to do with your need for something that's fast, that's scalable, and what other factors you took in consideration when choosing the vector DB. Guillaume Marquis: Yes, so I told you that the scalability and the speed is like one of the most important points and toast part to endure. And yes, definitely, because when you are building a complex rag, you are not like just performing one research, at some points you are doing it maybe like you are splitting the question, doing several at the same time. And so it's like mandatory to have a vector database that is scalable, that is fast, that has low latencies, that can under parallel request a large amount of requests. So you have really this need. And Qdrant was like an obvious choose. Actually, we did a benchmark, so we really tried several possibilities. Demetrios: Some tell me more. Yeah. Guillaume Marquis: So we tried the classic postgres page vectors, that is, I think we tried it like 30 minutes, and we realized really fast that it was really not good for our use case. We tried Weaviate, we tried Milvus, we tried Qdrant, we tried a lot. We prefer use open source because of security issues. We tried Pinecone initially, we were on Pinecone at the beginning of the company. And so the most important point, so we have the speed of the tool, we have the scalability we have also, maybe it's a little bit dumb to say that, but we have also the API. I remember using Pinecone and trying just to get all vectors and it was not possible somehow, and you have this dumb stuff that are sometimes really strange. And if you have a tool that is 100% made for your use case with people that are working on it, really dedicated on that, and that are aligned with your vision of what is the evolution of this. I think it's like the best tool you have to choose. Demetrios: So one thing that I would love to hear about too, is when you're looking at your system and you're looking at just the product in general, what are some of the key metrics that you are constantly monitoring, and how do you know that you're hitting them or you're not? And then if you're not hitting them, what are some ways that you debug the situation? Guillaume Marquis: By metrics you mean like usage metrics. Demetrios: Or like, I'm more thinking on your whole tech setup and the quality of your rag. Guillaume Marquis: Basically we are focused on industry of knowledge workers and industry in particular like of consultants. So we have some data set of questions that we know should be answered. Well, we know the kind of outputs we should have. The metrics we are like monitoring on our rag is mostly the accuracy of the answer, the accuracy of sources, the number of hallucination that is sometimes really also hard to manage. Actually our tool is sourcing everything. When you ask a question or when you perform a task, it gives you all the sources. But sometimes you can have a perfect answer and just like one number inside your answer that comes from nowhere, that is totally like invented and that's up to get. We are still working on that. Guillaume Marquis: We are not the most advanced on this part. We just implemented a tool I think you may know it's LangFuse. Do you know them? LangFuse? Demetrios: No. Tell me more. Guillaume Marquis: LangFuse is like a tool that is made to monitor tasks on your rack so you can easily log stuff. It's also open source tool, you can easily self host it and you can monitor every part of your rag. You can create data sets based on questions and answers that has been asked or some you created by yourself. And you can easily perform like check of your rag just to trade out and to give a final score of it, and to be able to monitor everything and to give global score based on your data set of your rag. So we are currently implementing it. I give their name because it's wonderful the work they did, and I really enjoyed it. It's one of the most important points to not be blind. I mean, in general, in terms of business, you have to follow metrics. Guillaume Marquis: Numbers cannot lie. Humans lies, but not numbers. But after that you have to interpret numbers. So that's also another toss part. But it's important to have the good metrics and to be able to know if you are evolving it, if you are improving your system and if everything is working. Basically the different stuff we are doing, we are not like. Demetrios: Are you collecting human feedback? For the hallucinations part, we try, but. Guillaume Marquis: Humans are not like giving a lot of feedback. Demetrios: It's hard. That's why it's really hard the end user to do anything, even just like the thumbs up, thumbs down can be difficult. Guillaume Marquis: We tried several stuff. We have the thumbs up, thumbs down, we tried stars. You ask real feedback to write something, hey, please help us. Human feedback is quite poor, so we are not counting on that. Demetrios: I think the hard part about it, at least me as an end user, whenever I've been using these, is like the thumbs down or the, I've even seen it go as far as, like, you have more than just one emoji. Like, maybe you have the thumbs up, you have the thumbs down. You have, like, a mushroom emoji. So it's, like, hallucinated. And you have, like. Guillaume Marquis: What was the. Demetrios: Other one that I saw that I thought was pretty? I can't remember it right now, but. Guillaume Marquis: I never saw the mushroom. But that's quite fun. Demetrios: Yeah, it's good. It's not just wrong. It's absolutely, like, way off the mark. And what I think is interesting there when I've been the end user is that it's a little bit just like, I don't have time to explain the nuances as to why this is not useful. I really would have to sit down and almost, like, write a book or at least an essay on, yeah, this is kind of useful, but it's like a two out of a five, not a four out of a five. And so that's why I gave it the thumbs down. Or there was this part that is good and that part's bad. And so it's just like the ways that you have to, or the nuances that you have to go into as the end user when you're trying to evaluate it, I think it's much better. Demetrios: And what I've seen a lot of people do is just expect to do that in house. After the fact, you get all the information back, you see, on certain metrics, like, oh, did this person commit the code? Then that's a good signal that it's useful. But then you can also look at it, or did this person copy paste it? Et cetera, et cetera. And how can we see if they didn't copy paste that or if they didn't take that next action that we would expect them to take? Why not? And let's try and dig into what we can do to make that better. Guillaume Marquis: Yes. We can also evaluate the next questions, like the following questions. That's a great point. We are not currently doing it automatically, but if you see that a user just answer, no, it's not true, or you should rephrase it or be more concise, or these kind of following questions, you know that the first answer was not as relevant as. Demetrios: That's such a great point. Or you do some sentiment analysis and it slowly is getting more and more angry. Guillaume Marquis: Yeah, that's true. That's a good point also. Demetrios: Yeah, this one went downhill, so. All right, cool. I think that's it. Sabrina, any last questions from your side? Sabrina Aquino: Yeah, I think I'm just very interesting to know from a user perspective, from a virtual brain, how are traditional models worse or what kind of errors virtual brain fixes in their structure, that users find it better that way. Guillaume Marquis: I think in this particular, so we talked about hallucinations, I think it's like one of the main issues people have on classic elements. We really think that when you create a one size fit all tool, you have some chole because you have to manage different approaches, like when you are creating copilot as Microsoft, you have to under the use cases of, and I really think so. Our AI is not trained to write you a speech based on Shakespeare and with the style of Martin Luther King. It's not the purpose of the tool. So if you ask something that is out of the box, he will just say like, okay, I don't know how to answer that. And that's an important point. That's a feature by itself to be able to not go outside of the box. And so we did this choice of putting the AI inside the box, the box that is containing basically all the knowledge of your company, all the retrieved knowledge. Guillaume Marquis: Actually we do not have a lot of hallucination, I will not say like 0%, but it's close to zero. Because we analyze a question, we put the AI in a box, we enforce the AI to think about the answer before answering, and we analyze also the answer to know if the answer is relevant. And that's an important point that we are fixing and we fix for our user and we prefer yes, to give like non answers and a bad answer. Sabrina Aquino: Absolutely. And there are people who think like, hey, this is a rag, it's not going to hallucinate, and that's not the case at all. It will hallucinate less inside a certain context window that you provide. Right. But it still has a possibility. So minimizing that as much as possible is very valuable. Demetrios: So good. Well, I think with that, our time here is coming to an end. I really appreciate this. I encourage everyone to go and have a little look at virtual brain. We'll drop a link in the comment in case anyone wants free to sign up. Guillaume Marquis: So you can trade for free. Demetrios: Even better. Look at that, Christmas came early. Well, let's go have some fun, play around with it. And I can't promise, but I may give you some feedback, I may give you some evaluation metrics if it's hallucinating. Guillaume Marquis: Or what if I see some thumbs up or thumbs down, I will know that it's you. Demetrios: Yeah, cool. Exactly. All right, folks, that's about it for today. We will see you all later. As a reminder, don't get lost in vector space. This has been another vector space talks. And if you want to come on here and chat with us, feel free to reach out. See ya. Guillaume Marquis: Cool. Sabrina Aquino: See you guys. Thank you. Bye. ",blog/virtualbrain-best-rag-to-unleash-the-real-power-of-ai-guillaume-marquis-vector-space-talks.md "--- draft: false title: ""Pienso & Qdrant: Future Proofing Generative AI for Enterprise-Level Customers"" slug: case-study-pienso short_description: Why Pienso chose Qdrant as a cornerstone for building domain-specific foundation models. description: Why Pienso chose Qdrant as a cornerstone for building domain-specific foundation models. preview_image: /case-studies/pienso/social_preview.png date: 2023-02-28T09:48:00.000Z author: Qdrant Team featured: false aliases: - /case-studies/pienso/ --- The partnership between Pienso and Qdrant is set to revolutionize interactive deep learning, making it practical, efficient, and scalable for global customers. Pienso's low-code platform provides a streamlined and user-friendly process for deep learning tasks. This exceptional level of convenience is augmented by Qdrant’s scalable and cost-efficient high vector computation capabilities, which enable reliable retrieval of similar vectors from high-dimensional spaces. Together, Pienso and Qdrant will empower enterprises to harness the full potential of generative AI on a large scale. By combining the technologies of both companies, organizations will be able to train their own large language models and leverage them for downstream tasks that demand data sovereignty and model autonomy. This collaboration will help customers unlock new possibilities and achieve advanced AI-driven solutions. Strengthening LLM Performance Qdrant enhances the accuracy of large language models (LLMs) by offering an alternative to relying solely on patterns identified during the training phase. By integrating with Qdrant, Pienso will empower customer LLMs with dynamic long-term storage, which will ultimately enable them to generate concrete and factual responses. Qdrant effectively preserves the extensive context windows managed by advanced LLMs, allowing for a broader analysis of the conversation or document at hand. By leveraging this extended context, LLMs can achieve a more comprehensive understanding and produce contextually relevant outputs. ## Joint Dedication to Scalability, Efficiency and Reliability > “Every commercial generative AI use case we encounter benefits from faster training and inference, whether mining customer interactions for next best actions or sifting clinical data to speed a therapeutic through trial and patent processes.” - Birago Jones, CEO, Pienso Pienso chose Qdrant for its exceptional LLM interoperability, recognizing the potential it offers in maximizing the power of large language models and interactive deep learning for large enterprises. Qdrant excels in efficient nearest neighbor search, which is an expensive and computationally demanding task. Our ability to store and search high-dimensional vectors with remarkable performance and precision will offer a significant peace of mind to Pienso’s customers. Through intelligent indexing and partitioning techniques, Qdrant will significantly boost the speed of these searches, accelerating both training and inference processes for users. ### Scalability: Preparing for Sustained Growth in Data Volumes Qdrant's distributed deployment mode plays a vital role in empowering large enterprises dealing with massive data volumes. It ensures that increasing data volumes do not hinder performance but rather enrich the model's capabilities, making scalability a seamless process. Moreover, Qdrant is well-suited for Pienso’s enterprise customers as it operates best on bare metal infrastructure, enabling them to maintain complete control over their data sovereignty and autonomous LLM regimes. This ensures that enterprises can maintain their full span of control while leveraging the scalability and performance benefits of Qdrant's solution. ### Efficiency: Maximizing the Customer Value Proposition Qdrant's storage efficiency delivers cost savings on hardware while ensuring a responsive system even with extensive data sets. In an independent benchmark stress test, Pienso discovered that Qdrant could efficiently store 128 million documents, consuming a mere 20.4GB of storage and only 1.25GB of memory. This storage efficiency not only minimizes hardware expenses for Pienso’s customers, but also ensures optimal performance, making Qdrant an ideal solution for managing large-scale data with ease and efficiency. ### Reliability: Fast Performance in a Secure Environment Qdrant's utilization of Rust, coupled with its memmap storage and write-ahead logging, offers users a powerful combination of high-performance operations, robust data protection, and enhanced data safety measures. Our memmap storage feature offers Pienso fast performance comparable to in-memory storage. In the context of machine learning, where rapid data access and retrieval are crucial for training and inference tasks, this capability proves invaluable. Furthermore, our write-ahead logging (WAL), is critical to ensuring changes are logged before being applied to the database. This approach adds additional layers of data safety, further safeguarding the integrity of the stored information. > “We chose Qdrant because it's fast to query, has a small memory footprint and allows for instantaneous setup of a new vector collection that is going to be queried. Other solutions we evaluated had long bootstrap times and also long collection initialization times {..} This partnership comes at a great time, because it allows Pienso to use Qdrant to its maximum potential, giving our customers a seamless experience while they explore and get meaningful insights about their data.” - Felipe Balduino Cassar, Senior Software Engineer, Pienso ## What's Next? Pienso and Qdrant are dedicated to jointly develop the most reliable customer offering for the long term. Our partnership will deliver a combination of no-code/low-code interactive deep learning with efficient vector computation engineered for open source models and libraries. **To learn more about how we plan on achieving this, join the founders for a [technical fireside chat at 09:30 PST Thursday, 20th July on Discord](https://discord.gg/Vnvg3fHE?event=1128331722270969909).** ![founders chat](/case-studies/pienso/founderschat.png) ",blog/case-study-pienso.md "--- draft: false title: ""Red Hat OpenShift and Qdrant Hybrid Cloud Offer Seamless and Scalable AI"" short_description: ""Qdrant brings managed vector databases to Red Hat OpenShift for large-scale GenAI."" description: ""Qdrant brings managed vector databases to Red Hat OpenShift for large-scale GenAI."" preview_image: /blog/hybrid-cloud-red-hat-openshift/hybrid-cloud-red-hat-openshift.png date: 2024-04-11T00:04:00Z author: Qdrant featured: false weight: 1003 tags: - Qdrant - Vector Database --- We’re excited about our collaboration with Red Hat to bring the Qdrant vector database to [Red Hat OpenShift](https://www.redhat.com/en/technologies/cloud-computing/openshift) customers! With the release of [Qdrant Hybrid Cloud](/hybrid-cloud/), developers can now deploy and run the Qdrant vector database directly in their Red Hat OpenShift environment. This collaboration enables developers to scale more seamlessly, operate more consistently across hybrid cloud environments, and maintain complete control over their vector data. This is a big step forward in simplifying AI infrastructure and empowering data-driven projects, like retrieval augmented generation (RAG) use cases, advanced search scenarios, or recommendations systems. In the rapidly evolving field of Artificial Intelligence and Machine Learning, the demand for being able to manage the modern AI stack within the existing infrastructure becomes increasingly relevant for businesses. As enterprises are launching new AI applications and use cases into production, they require the ability to maintain complete control over their data, since these new apps often work with sensitive internal and customer-centric data that needs to remain within the owned premises. This is why enterprises are increasingly looking for maximum deployment flexibility for their AI workloads. >*“Red Hat is committed to driving transparency, flexibility and choice for organizations to more easily unlock the power of AI. By working with partners like Qdrant to enable streamlined integration experiences on Red Hat OpenShift for AI use cases, organizations can more effectively harness critical data and deliver real business outcomes,”* said Steven Huels, Vice President and General Manager, AI Business Unit, Red Hat. #### The Synergy of Qdrant Hybrid Cloud and Red Hat OpenShift Qdrant Hybrid Cloud is the first vector database that can be deployed anywhere, with complete database isolation, while still providing a fully managed cluster management. Running Qdrant Hybrid Cloud on Red Hat OpenShift allows enterprises to deploy and run a fully managed vector database in their own environment, ultimately allowing businesses to run managed vector search on their existing cloud and infrastructure environments, with full data sovereignty. Red Hat OpenShift, the industry’s leading hybrid cloud application platform powered by Kubernetes, helps streamline the deployment of Qdrant Hybrid Cloud within an enterprise's secure premises. Red Hat OpenShift provides features like auto-scaling, load balancing, and advanced security controls that can help you manage and maintain your vector database deployments more effectively. In addition, Red Hat OpenShift supports deployment across multiple environments, including on-premises, public, private and hybrid cloud landscapes. This flexibility, coupled with Qdrant Hybrid Cloud, allows organizations to choose the deployment model that best suits their needs. #### Why Run Qdrant Hybrid Cloud on Red Hat OpenShift? - **Scalability**: Red Hat OpenShift's container orchestration effortlessly scales Qdrant Hybrid Cloud components, accommodating fluctuating workload demands with ease. - **Portability**: The consistency across hybrid cloud environments provided by Red Hat OpenShift allows for smoother operation of Qdrant Hybrid Cloud across various infrastructures. - **Automation**: Deployment, scaling, and management tasks are automated, reducing operational overhead and simplifying the management of Qdrant Hybrid Cloud. - **Security**: Red Hat OpenShift provides built-in security features, including container isolation, network policies, and role-based access control (RBAC), enhancing the security posture of Qdrant Hybrid Cloud deployments. - **Flexibility:** Red Hat OpenShift supports a wide range of programming languages, frameworks, and tools, providing flexibility in developing and deploying Qdrant Hybrid Cloud applications. - **Integration:** Red Hat OpenShift can be integrated with various Red Hat and third-party tools, facilitating seamless integration of Qdrant Hybrid Cloud with other enterprise systems and services. #### Get Started with Qdrant Hybrid Cloud on Red Hat OpenShift We're thrilled about our collaboration with Red Hat to help simplify AI infrastructure for developers and enterprises alike. By deploying Qdrant Hybrid Cloud on Red Hat OpenShift, developers can gain the ability to more easily scale and maintain greater operational consistency across hybrid cloud environments. To get started, we created a comprehensive tutorial that shows how to build next-gen AI applications with Qdrant Hybrid Cloud on Red Hat OpenShift. Additionally, you can find more details on the seamless deployment process in our documentation: ![hybrid-cloud-red-hat-openshift-tutorial](/blog/hybrid-cloud-red-hat-openshift/hybrid-cloud-red-hat-openshift-tutorial.png) #### Tutorial: Private Chatbot for Interactive Learning In this tutorial, you will build a chatbot without public internet access. The goal is to keep sensitive data secure and isolated. Your RAG system will be built with Qdrant Hybrid Cloud on Red Hat OpenShift, leveraging Haystack for enhanced generative AI capabilities. This tutorial especially explores how this setup ensures that not a single data point leaves the environment. [Try the Tutorial](/documentation/tutorials/rag-chatbot-red-hat-openshift-haystack/) #### Documentation: Deploy Qdrant in a Few Clicks > Our simple Kubernetes-native design allows you to deploy Qdrant Hybrid Cloud on your Red Hat OpenShift instance in just a few steps. Learn how in our documentation. [Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/) This collaboration marks an important milestone in the quest for simplified AI infrastructure, offering a robust, scalable, and security-optimized solution for managing vector databases in a hybrid cloud environment. The combination of Qdrant's performance and Red Hat OpenShift's operational excellence opens new avenues for enterprises looking to leverage the power of AI and ML. #### Ready to Get Started? Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/). ",blog/hybrid-cloud-red-hat-openshift.md "--- draft: true title: New 0.7.0 update of the Qdrant engine went live slug: qdrant-0-7-0-released short_description: Qdrant v0.7.0 engine has been released description: Qdrant v0.7.0 engine has been released preview_image: /blog/from_cms/v0.7.0.png date: 2022-04-13T08:57:07.604Z author: Alyona Kavyerina author_link: https://www.linkedin.com/in/alyona-kavyerina/ featured: true categories: - News - Release update tags: - Corporate news - Release sitemapExclude: True --- We've released the new version of Qdrant neural search engine.  Let's see what's new in update 0.7.0. * 0.7 engine now supports JSON as a payload.  * It redeems a lost API. Alias API in gRPC is available. * Provides new filtering conditions: refactoring, bool, IsEmpty, and ValuesCount filters are available.  * It has a lot of improvements regarding geo payload indexing, HNSW performance, and many more. Read detailed release notes on [GitHub](https://github.com/qdrant/qdrant/releases/tag/v0.7.0). Stay tuned for new updates.\ If you have any questions or need support, join our [Discord](https://discord.com/invite/tdtYvXjC4h) community.",blog/new-0-7-update-of-the-qdrant-engine-went-live.md "--- draft: false title: The Bitter Lesson of Retrieval in Generative Language Model Workflows - Mikko Lehtimäki | Vector Space Talks slug: bitter-lesson-generative-language-model short_description: Mikko Lehtimäki discusses the challenges and techniques in implementing retrieval augmented generation for Yokot AI description: Mikko Lehtimäki delves into the intricate world of retrieval-augmented generation, discussing how Yokot AI manages vast diverse data inputs and how focusing on re-ranking can massively improve LLM workflows and output quality. preview_image: /blog/from_cms/mikko-lehtimäki-cropped.png date: 2024-01-29T16:31:02.511Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - generative language model - Retrieval Augmented Generation - Softlandia --- > *""If you haven't heard of the bitter lesson, it's actually a theorem. It's based on a blog post by Ricard Sutton, and it states basically that based on what we have learned from the development of machine learning and artificial intelligence systems in the previous decades, the methods that can leverage data and compute tends to or will eventually outperform the methods that are designed or handcrafted by humans.”*\ -- Mikko Lehtimäki > Dr. Mikko Lehtimäki is a data scientist, researcher and software engineer. He has delivered a range of data-driven solutions, from machine vision for robotics in circular economy to generative AI in journalism. Mikko is a co-founder of Softlandia, an innovative AI solutions provider. There, he leads the development of YOKOTAI, an LLM-based productivity booster that connects to enterprise data. Recently, Mikko has contributed software to Llama-index and Guardrails-AI, two leading open-source initiatives in the LLM space. He completed his PhD in the intersection of computational neuroscience and machine learning, which gives him a unique perspective on the design and implementation of AI systems. With Softlandia, Mikko also hosts chill hybrid-format data science meetups where everyone is welcome to participate. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/5hAnDq7MH9qjjtYVjmsGrD?si=zByq7XXGSjOdLbXZDXTzoA), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/D8lOvz5xp5c).*** ## **Top takeaways:** Aren’t you curious about what the bitter lesson is and how it plays out in generative language model workflows? Check it out as Mikko delves into the intricate world of retrieval-augmented generation, discussing how Yokot AI manages vast diverse data inputs and how focusing on re-ranking can massively improve LLM workflows and output quality. 5 key takeaways you’ll get from this episode: 1. **The Development of Yokot AI:** Mikko detangles the complex web of how Softlandia's in-house stack is changing the game for language model applications. 2. **Unpacking Retrieval-Augmented Generation:** Learn the rocket science behind uploading documents and scraping the web for that nugget of insight, all through the prowess of Yokot AI's LLMs. 3. **The ""Bitter Lesson"" Theory:** Dive into the theorem that's shaking the foundations of AI, suggesting the supremacy of data and computing over human design. 4. **High-Quality Content Generation:** Understand how the system's handling of massive data inputs is propelling content quality to stratospheric heights. 5. **Future Proofing with Re-Ranking:** Discover why improving the re-ranking component might be akin to discovering a new universe within our AI landscapes. > Fun Fact: Yokot AI incorporates a retrieval augmented generation mechanism to facilitate the retrieval of relevant information, which allows users to upload and leverage their own documents or scrape data from the web. > ## Show notes: 00:00 Talk on retrieval for language models and Yokot AI platform.\ 06:24 Data flexibility in various languages leads progress.\ 10:45 User inputs document, system converts to vectors.\ 13:40 Enhance data quality, reduce duplicates, streamline processing.\ 19:20 Reducing complexity by focusing on re-ranker.\ 21:13 Retrieval process enhances efficiency of language model.\ 24:25 Information retrieval methods evolving, leveraging data, computing.\ 28:11 Optimal to run lightning on local hardware. ## More Quotes from Mikko: ""*We used to build image analysis on this type of features that we designed manually... Whereas now we can just feed a bunch of images to a transformer, and we'll get beautiful bounding boxes and semantic segmentation outputs without building rules into the system.*”\ -- Mikko Lehtimäki *""We cannot just leave it out and hope that someday soon we will have a language model that doesn't require us fetching the data for it in such a sophisticated manner. The reranker is a component that can leverage data and compute quite efficiently, and it doesn't require that much manual craftmanship either.”*\ -- Mikko Lehtimäki *""We can augment the data we store, for example, by using multiple chunking strategies or generating question answer pairs from the user's documents, and then we'll embed those and look them up when the queries come in.”*\ -- Mikko Lehtimäki in improving data quality in rack stack ## Transcript: Demetrios: What is happening? Everyone, it is great to have you here with us for yet another vector space talks. I have the pleasure of being joined by Mikko today, who is the co founder of Softlandia, and he's also lead data scientist. He's done all kinds of great software engineering and data science in his career, and currently he leads the development of Yokot AI, which I just learned the pronunciation of, and he's going to tell us all about it. But I'll give you the TLDR. It's an LLM based productivity booster that can connect to your data. What's going on, Mikko? How you doing, bro? Mikko Lehtimäki: Hey, thanks. Cool to be here. Yes. Demetrios: So, I have to say, I said it before we hit record or before we started going live, but I got to say it again. The talk title is spot on. Your talk title is the bitter lessons of retrieval in generative language model workflows. Mikko Lehtimäki: Exactly. Demetrios: So I'm guessing you've got a lot of hardship that you've been through, and you're going to hopefully tell us all about it so that we do not have to make the same mistakes as you did. We can be wise and learn from your mistakes before we have to make them ourselves, right? All right. That's a great segue into you getting into it, man. I know you got to talk. I know you got some slides to share, so feel free to start throwing those up on the screen. And for everyone that is here joining, feel free to add some questions in the chat. I'll be monitoring it so that in case you have any questions, I can jump in and make sure that Mikko answers them before he moves on to the next slide. All right, Mikko, I see your screen, bro. Demetrios: This is good stuff. Mikko Lehtimäki: Cool. So, shall we get into? Yeah. My name is Mikko. I'm the chief data scientist here at Softlandia. I finished my phd last summer and have been doing the Softlandia for two years now. I'm also a contributor to some open source AI LLM libraries like Llama index and cartrails AI. So if you haven't checked those out ever, please do. Here at Softlandia, we are primarily an AI consultancy that focuses on end to end AI solutions, but we've also developed our in house stack for large language model applications, which I'll be discussing today. Mikko Lehtimäki: So the topic of the talk is a bit provocative. Maybe it's a bitter lesson of retrieval for large language models, and it really stems from our experience in building production ready retrieval augmented generation solutions. I just want to say it's not really a lecture, so I'm going to tell you to do this or do that. I'll just try to walk you through the thought process that we've kind of adapted when we develop rack solutions, and we'll see if that resonates with you or not. So our LLM solution is called Yokot AI. It's really like a platform where enterprises can upload their own documents and get language model based insights from them. The typical example is question answering from your documents, but we're doing a bit more than that. For example, users can generate long form documents, leveraging their own data, and worrying about the token limitations that you typically run in when you ask an LLM to output something. Mikko Lehtimäki: Here you see just a snapshot of the data management view that we have built. So users can bring their own documents or scrape the web, and then access the data with LLMS right away. This is the document generation output. It's longer than you typically see, and each section can be based on different data sources. We've got different generative flows, like we call them, so you can take your documents and change the style using llms. And of course, the typical chat view, which is really like the entry point, to also do these workflows. And you can see the sources that the language model is using when you're asking questions from your data. And this is all made possible with retrieval augmented generation. Mikko Lehtimäki: That happens behind the scenes. So when we ask the LLM to do a task, we're first fetching data from what was uploaded, and then everything goes from there. So we decide which data to pull, how to use it, how to generate the output, and how to present it to the user so that they can keep on conversing with the data or export it to their desired format, whatnot. But the primary challenge with this kind of system is that it is very open ended. So we don't really set restrictions on what kind of data the users can upload or what language the data is in. So, for example, we're based in Finland. Most of our customers are here in the Nordics. They talk, speak Finnish, Swedish. Mikko Lehtimäki: Most of their data is in English, because why not? And they can just use whatever language they feel with the system. So we don't want to restrict any of that. The other thing is the chat view as an interface, it really doesn't set much limits. So the users have the freedom to do the task that they choose with the system. So the possibilities are really broad that we have to prepare for. So that's what we are building. Now, if you haven't heard of the bitter lesson, it's actually a theorem. It's based on a blog post by Ricard Sutton, and it states basically that based on what we have learned from the development of machine learning and artificial intelligence systems in the previous decades, the methods that can leverage data and compute tends to or will eventually outperform the methods that are designed or handcrafted by humans. Mikko Lehtimäki: So for example, I have an illustration here showing how this has manifested in image analysis. So on the left hand side, you see the output from an operation that extracts gradients from images. We used to build image analysis on this type of features that we designed manually. We would run some kind of edge extraction, we would count corners, we would compute the edge distances and design the features by hand in order to work with image data. Whereas now we can just feed a bunch of images to a transformer, and we'll get beautiful bounding boxes and semantic segmentation outputs without building rules into the system. So that's a prime example of the bitter lesson in action. Now, if we take this to the context of rack or retrieval augmented generation, let's have a look first at the simple rack architecture. Why do we do this in the first place? Well, it's because the language models themselves, they don't have up to date data because they've been trained a while ago. Mikko Lehtimäki: You don't really even know when. So we need to give them access to more recent data, and we need a method for doing that. And the other thing is problems like hallucinations. We found that if you just ask the model a question that is in the training data, you won't get always reliable results. But if you can crown the model's answers with data, you will get more factual results. So this is what can be done with the rack as well. And the final thing is that we just cannot give a book, for example, in one go the language model, because even if theoretically it could read the input in one go, the result quality that you get from the language model is going to suffer if you feed it too much data at once. So this is why we have designed retrieval augmented generation architectures. Mikko Lehtimäki: And if we look at this system on the bottom, you see the typical data ingestion. So the user gives a document, we slice it to small chunks, and we compute a numerical representation with vector embeddings and store those in a vector database. Why a vector database? Because it's really efficient to retrieve vectors from it when we get users query. So that is also embedded and it's used to look up relevant sources from the data that was previously uploaded efficiently directly on the database, and then we can fit the resulting text, the language model, to synthesize an answer. And this is how the RHe works in very basic form. Now you can see that if you have only a single document that you work with, it's nice if the problem set that you want to solve is very constrained, but the more data you can bring to your system, the more workflows you can build on that data. So if you have, for example, access to a complete book or many books, it's easy to see you can also generate higher quality content from that data. So this architecture really must be such that it can also make use of those larger amounts of data. Mikko Lehtimäki: Anyway, once you implement this for the first time, it really feels like magic. It tends to work quite nicely, but soon you'll notice that it's not suitable for all kinds of tasks. Like you will see sometimes that, for example, the lists. If you retrieve lists, they may be broken. If you ask questions that are document comparisons, you may not get complete results. If you run summarization tasks without thinking about it anymore, then that will most likely lead to super results. So we'll have to extend the architecture quite a bit to take into account all the use cases that we want to enable with bigger amounts of data that the users upload. And this is what it may look like once you've gone through a few design iterations. Mikko Lehtimäki: So let's see, what steps can we add to our rack stack in order to make it deliver better quality results? If we start from the bottom again, we can see that we try to enhance the quality of the data that we upload by adding steps to the data ingestion pipeline. We can augment the data we store, for example, by using multiple chunking strategies or generating question answer pairs from the user's documents, and then we'll embed those and look them up when the queries come in. At the same time, we can reduce the data we upload, so we want to make sure there are no duplicates. We want to clean low quality things like HTML stuff, and we also may want to add some metadata so that certain data, for example references, can be excluded from the search results if they're not needed to run the tasks that we like to do. We've modeled this as a stream processing pipeline, by the way. So we're using Bytewax, which is another really nice open source framework. Just a tiny advertisement we're going to have a workshop with Bytewax about rack on February 16, so keep your eyes open for that. At the center I have added different databases and different retrieval methods. Mikko Lehtimäki: We may, for example, add keyword based retrieval and metadata filters. The nice thing is that you can do all of this with quattron if you like. So that can be like a one stop shop for your document data. But some users may want to experiment with different databases, like graph databases or NoSQL databases and just ordinary SQL databases as well. They can enable different kinds of use cases really. So it's up to your service which one is really useful for you. If we look more to the left, we have a component called query planner and some query routers. And this really determines the response strategy. Mikko Lehtimäki: So when you get the query from the user, for example, you want to take different steps in order to answer it. For example, you may want to decompose the query to small questions that you answer individually, and each individual question may take a different path. So you may want to do a query based on metadata, for example pages five and six from a document. Or you may want to look up based on keywords full each page or chunk with a specific word. And there's really like a massive amount of choices how this can go. Another example is generating hypothetical documents based on the query and embedding those rather than the query itself. That will in some cases lead to higher quality retrieval results. But now all this leads into the right side of the query path. Mikko Lehtimäki: So here we have a re ranker. So if we implement all of this, we end up really retrieving a lot of data. We typically will retrieve more than it makes sense to give to the language model in a single call. So we can add a re ranker step here and it will firstly filter out low quality retrieved content and secondly, it will put the higher quality content on the top of the retrieved documents. And now when you pass this reranked content to the language model, it should be able to pay better attention to the details that actually matter given the query. And this should lead to you better managing the amount of data that you have to handle with your final response generator, LLM. And it should also make the response generator a bit faster because you will be feeding slightly less data in one go. The simplest way to build a re ranker is probably just asking a large language model to re rank or summarize the content that you've retrieved before you feed it to the language model. Mikko Lehtimäki: That's one way to do it. So yeah, that's a lot of complexity and honestly, we're not doing all of this right now with Yokot AI, either. We've tried all of it in different scopes, but really it's a lot of logic to maintain. And to me this just like screams the bitter lesson, because we're building so many steps, so much logic, so many rules into the system, when really all of this is done just because the language model can't be trusted, or it can't be with the current architectures trained reliably, or cannot be trained in real time with the current approaches that we have. So there's one thing in this picture, in my opinion, that is more promising than the others for leveraging data and compute, which should dominate the quality of the solution in the long term. And if we focus only on that, or not only, but if we focus heavily on that part of the process, we should be able to eliminate some complexity elsewhere. So if you're watching the recording, you can pause and think what this component may be. But in my opinion, it is the re ranker at the end. Mikko Lehtimäki: And why is that? Well, of course you could argue that the language model itself is one, but with the current architectures that we have, I think we need the retrieval process. We cannot just leave it out and hope that someday soon we will have a language model that doesn't require us fetching the data for it in such a sophisticated manner. The reranker is a component that can leverage data and compute quite efficiently, and it doesn't require that much manual craftmanship either. It's a stakes in samples and outputs samples, and it plays together really well with efficient vector search that we have available now. Like quatrant being a prime example of that. The vector search is an initial filtering step, and then the re ranker is the secondary step that makes sure that we get the highest possible quality data to the final LLM. And the efficiency of the re ranker really comes from the fact that it doesn't have to be a full blown generative language model so often it is a language model, but it doesn't have to have the ability to generate GPT four level content. It just needs to understand, and in some, maybe even a very fixed way, communicate the importance of the inputs that you give it. Mikko Lehtimäki: So typically the inputs are the user's query and the data that was retrieved. Like I mentioned earlier, the easiest way to use a read ranker is probably asking a large language model to rerank your chunks or sentences that you retrieved. But there are also models that have been trained specifically for this, the Colbert model being a primary example of that and we also have to remember that the rerankers have been around for a long time. They've been used in traditional search engines for a good while. We just now require a bit higher quality from them because there's no user checking the search results and deciding which of them is relevant. After the fact that the re ranking has already been run, we need to trust that the output of the re ranker is high quality and can be given to the language model. So you can probably get plenty of ideas from the literature as well. But the easiest way is definitely to use LLM behind a simple API. Mikko Lehtimäki: And that's not to say that you should ignore the rest like the query planner is of course a useful component, and the different methods of retrieval are still relevant for different types of user queries. So yeah, that's how I think the bitter lesson is realizing in these rack architectures I've collected here some methods that are recent or interesting in my opinion. But like I said, there's a lot of existing information from information retrieval research that is probably going to be rediscovered in the near future. So if we summarize the bitter lesson which we have or are experiencing firsthand, states that the methods that leverage data and compute will outperform the handcrafted approaches. And if we focus on the re ranking component in the RHE, we'll be able to eliminate some complexity elsewhere in the process. And it's good to keep in mind that we're of course all the time waiting for advances in the large language model technology. But those advances will very likely benefit the re ranker component as well. So keep that in mind when you find new, interesting research. Mikko Lehtimäki: Cool. That's pretty much my argument finally there. I hope somebody finds it interesting. Demetrios: Very cool. It was bitter like a black cup of coffee, or bitter like dark chocolate. I really like these lessons that you've learned, and I appreciate you sharing them with us. I know the re ranking and just the retrieval evaluation aspect is something on a lot of people's minds right now, and I know a few people at Qdrant are actively thinking about that too, and how to make it easier. So it's cool that you've been through it, you've felt the pain, and you also are able to share what has helped you. And so I appreciate that. In case anyone has any questions, now would be the time to ask them. Otherwise we will take it offline and we'll let everyone reach out to you on LinkedIn, and I can share your LinkedIn profile in the chat to make it real easy for people to reach out if they want to, because this was cool, man. Demetrios: This was very cool, and I appreciate it. Mikko Lehtimäki: Thanks. I hope it's useful to someone. Demetrios: Excellent. Well, if that is all, I guess I've got one question for you. Even though we are kind of running up on time, so it'll be like a lightning question. You mentioned how you showed the really descriptive diagram where you have everything on there, and it's kind of like the dream state or the dream outcome you're going for. What is next? What are you going to create out of that diagram that you don't have yet? Mikko Lehtimäki: You want the lightning answer would be really good to put this run on a local hardware completely. I know that's not maybe the algorithmic thing or not necessarily in the scope of Yoko AI, but if we could run this on a physical device in that form, that would be super. Demetrios: I like it. I like it. All right. Well, Mikko, thanks for everything and everyone that is out there. All you vector space astronauts. Have a great day. Morning, night, wherever you are at in the world or in space. And we will see you later. Demetrios: Thanks. Mikko Lehtimäki: See you. ",blog/the-bitter-lesson-of-retrieval-in-generative-language-model-workflows-mikko-lehtimäki-vector-space-talks.md "--- title: ""Qdrant 1.9.0 - Heighten Your Security With Role-Based Access Control Support"" draft: false slug: qdrant-1.9.x short_description: ""Granular access control. Optimized shard transfers. Support for byte embeddings."" description: ""New access control options for RBAC, a much faster shard transfer procedure, and direct support for byte embeddings. "" preview_image: /blog/qdrant-1.9.x/social_preview.png social_preview_image: /blog/qdrant-1.9.x/social_preview.png date: 2024-04-24T00:00:00-08:00 author: David Myriel featured: false tags: - vector search - role based access control - byte vectors - binary vectors - quantization - new features --- [Qdrant 1.9.0 is out!](https://github.com/qdrant/qdrant/releases/tag/v1.9.0) This version complements the release of our new managed product [Qdrant Hybrid Cloud](/hybrid-cloud/) with key security features valuable to our enterprise customers, and all those looking to productionize large-scale Generative AI. **Data privacy, system stability and resource optimizations** are always on our mind - so let's see what's new: - **Granular access control:** You can further specify access control levels by using JSON Web Tokens. - **Optimized shard transfers:** The synchronization of shards between nodes is now significantly faster! - **Support for byte embeddings:** Reduce the memory footprint of Qdrant with official `uint8` support. ## New access control options via JSON Web Tokens Historically, our API key supported basic read and write operations. However, recognizing the evolving needs of our user base, especially large organizations, we've implemented additional options for finer control over data access within internal environments. Qdrant now supports [granular access control using JSON Web Tokens (JWT)](/documentation/guides/security/#granular-access-control-with-jwt). JWT will let you easily limit a user's access to the specific data they are permitted to view. Specifically, JWT-based authentication leverages tokens with restricted access to designated data segments, laying the foundation for implementing role-based access control (RBAC) on top of it. **You will be able to define permissions for users and restrict access to sensitive endpoints.** **Dashboard users:** For your convenience, we have added a JWT generation tool the Qdrant Web UI under the 🔑 tab. If you're using the default url, you will find it at `http://localhost:6333/dashboard#/jwt`. ![jwt-web-ui](/blog/qdrant-1.9.x/jwt-web-ui.png) We highly recommend this feature to enterprises using [Qdrant Hybrid Cloud](/hybrid-cloud/), as it is tailored to those who need additional control over company data and user access. RBAC empowers administrators to define roles and assign specific privileges to users based on their roles within the organization. In combination with [Hybrid Cloud's data sovereign architecture](/documentation/hybrid-cloud/), this feature reinforces internal security and efficient collaboration by granting access only to relevant resources. > **Documentation:** [Read the access level breakdown](/documentation/guides/security/#table-of-access) to see which actions are allowed or denied. ## Faster shard transfers on node recovery We now offer a streamlined approach to [data synchronization between shards](/documentation/guides/distributed_deployment/#shard-transfer-method) during node upgrades or recovery processes. Traditional methods used to transfer the entire dataset, but our new `wal_delta` method focuses solely on transmitting the difference between two existing shards. By leveraging the Write-Ahead Log (WAL) of both shards, this method selectively transmits missed operations to the target shard, ensuring data consistency. In some cases, where transfers can take hours, this update **reduces transfers down to a few minutes.** The advantages of this approach are twofold: 1. **It is faster** since only the differential data is transmitted, avoiding the transfer of redundant information. 2. It upholds robust **ordering guarantees**, crucial for applications reliant on strict sequencing. For more details on how this works, check out the [shard transfer documentation](/documentation/guides/distributed_deployment/#shard-transfer-method). > **Note:** There are limitations to consider. First, this method only works with existing shards. Second, while the WALs typically retain recent operations, their capacity is finite, potentially impeding the transfer process if exceeded. Nevertheless, for scenarios like rapid node restarts or upgrades, where the WAL content remains manageable, WAL delta transfer is an efficient solution. Overall, this is a great optional optimization measure and serves as the **auto-recovery default for shard transfers**. It's safe to use everywhere because it'll automatically fall back to streaming records transfer if no difference can be resolved. By minimizing data redundancy and expediting transfer processes, it alleviates the strain on the cluster during recovery phases, enabling faster node catch-up. ## Native support for uint8 embeddings Our latest version introduces [support for uint8 embeddings within Qdrant collections](/documentation/concepts/collections/#vector-datatypes). This feature supports embeddings provided by companies in a pre-quantized format. Unlike previous iterations where indirect support was available via [quantization methods](/documentation/guides/quantization/), this update empowers users with direct integration capabilities. In the case of `uint8`, elements within the vector are represented as unsigned 8-bit integers, encompassing values ranging from 0 to 255. Using these embeddings gives you a **4x memory saving and about a 30% speed-up in search**, while keeping 99.99% of the response quality. As opposed to the original quantization method, with this feature you can spare disk usage if you directly implement pre-quantized embeddings. The configuration is simple. To create a collection with uint8 embeddings, simply add the following `datatype`: ```bash PUT /collections/{collection_name} { ""vectors"": { ""size"": 1024, ""distance"": ""Dot"", ""datatype"": ""uint8"" } } ``` > **Note:** When using Quantization to optimize vector search, you can use this feature to `rescore` binary vectors against new byte vectors. With double the speedup, you will be able to achieve a better result than if you rescored with float vectors. With each byte vector quantized at the binary level, the result will deliver unparalleled efficiency and savings. To learn more about this optimization method, read our [Quantization docs](/documentation/guides/quantization/). ## Minor improvements and new features - Greatly improve write performance while creating a snapshot of a large collection - [#3420](https://github.com/qdrant/qdrant/pull/3420), [#3938](https://github.com/qdrant/qdrant/pull/3938) - Report pending optimizations awaiting an update operation in collection info - [#3962](https://github.com/qdrant/qdrant/pull/3962), [#3971](https://github.com/qdrant/qdrant/pull/3971) - Improve `indexed_only` reliability on proxy shards - [#3998](https://github.com/qdrant/qdrant/pull/3998) - Make shard diff transfer fall back to streaming records - [#3798](https://github.com/qdrant/qdrant/pull/3798) - Cancel shard transfers when the shard is deleted - [#3784](https://github.com/qdrant/qdrant/pull/3784) - Improve sparse vectors search performance by another 7% - [#4037](https://github.com/qdrant/qdrant/pull/4037) - Build Qdrant with a single codegen unit to allow better compile-time optimizations - [#3982](https://github.com/qdrant/qdrant/pull/3982) - Remove `vectors_count` from collection info because it is unreliable. **Check if you use this field before upgrading** - [#4052](https://github.com/qdrant/qdrant/pull/4052) - Remove shard transfer method field from abort shard transfer operation - [#3803](https://github.com/qdrant/qdrant/pull/3803) ",blog/qdrant-1.9.x.md "--- title: ""Community Highlights #1"" draft: false slug: community-highlights-1 # Change this slug to your page slug if needed short_description: Celebrating top contributions and achievements in vector search, featuring standout projects, articles, and the Creator of the Month, Pavan Kumar. # Change this description: Celebrating top contributions and achievements in vector search, featuring standout projects, articles, and the Creator of the Month, Pavan Kumar! preview_image: /blog/community-highlights-1/preview-image.png social_preview_image: /blog/community-highlights-1/preview-image.png date: 2024-06-20T11:57:37-03:00 author: Sabrina Aquino featured: false tags: - news - vector search - qdrant - ambassador program - community - artificial intelligence --- Welcome to the very first edition of Community Highlights, where we celebrate the most impactful contributions and achievements of our vector search community! 🎉 ## Content Highlights 🚀 Here are some standout projects and articles from our community this past month. If you're looking to learn more about vector search or build some great projects, we recommend you to check these guides: * **[Implementing Advanced Agentic Vector Search](https://towardsdev.com/implementing-advanced-agentic-vector-search-a-comprehensive-guide-to-crewai-and-qdrant-ca214ca4d039): A Comprehensive Guide to CrewAI and Qdrant by [Pavan Kumar](https://www.linkedin.com/in/kameshwara-pavan-kumar-mantha-91678b21/)** * **Build Your Own RAG Using [Unstructured, Llama3 via Groq, Qdrant & LangChain](https://www.youtube.com/watch?v=m_3q3XnLlTI) by [Sudarshan Koirala](https://www.linkedin.com/in/sudarshan-koirala/)** * **Qdrant filtering and [self-querying retriever](https://www.youtube.com/watch?v=iaXFggqqGD0) retrieval with LangChain by [Daniel Romero](https://www.linkedin.com/in/infoslack/)** * **RAG Evaluation with [Arize Phoenix](https://superlinked.com/vectorhub/articles/retrieval-augmented-generation-eval-qdrant-arize) by [Atita Arora](https://www.linkedin.com/in/atitaarora/)** * **Building a Serverless Application with [AWS Lambda and Qdrant](https://medium.com/@benitomartin/building-a-serverless-application-with-aws-lambda-and-qdrant-for-semantic-search-ddb7646d4c2f) for Semantic Search by [Benito Martin](https://www.linkedin.com/in/benitomzh/)** * **Production ready Secure and [Powerful AI Implementations with Azure Services](https://towardsdev.com/production-ready-secure-and-powerful-ai-implementations-with-azure-services-671b68631212) by [Pavan Kumar](https://www.linkedin.com/in/kameshwara-pavan-kumar-mantha-91678b21/)** * **Building [Agentic RAG with Rust, OpenAI & Qdrant](https://medium.com/@joshmo_dev/building-agentic-rag-with-rust-openai-qdrant-d3a0bb85a267) by [Joshua Mo](https://www.linkedin.com/in/joshua-mo-4146aa220/)** * **Qdrant [Hybrid Search](https://medium.com/@nickprock/qdrant-hybrid-search-under-the-hood-using-haystack-355841225ac6) under the hood using Haystack by [Nicola Procopio](https://www.linkedin.com/in/nicolaprocopio/)** * **[Llama 3 Powered Voice Assistant](https://medium.com/@datadrifters/llama-3-powered-voice-assistant-integrating-local-rag-with-qdrant-whisper-and-langchain-b4d075b00ac5): Integrating Local RAG with Qdrant, Whisper, and LangChain by [Datadrifters](https://medium.com/@datadrifters)** * **[Distributed deployment](https://medium.com/@vardhanam.daga/distributed-deployment-of-qdrant-cluster-with-sharding-replicas-e7923d483ebc) of Qdrant cluster with sharding & replicas by [Vardhanam Daga](https://www.linkedin.com/in/vardhanam-daga/overlay/about-this-profile/)** * **Private [Healthcare AI Assistant](https://medium.com/aimpact-all-things-ai/building-private-healthcare-ai-assistant-for-clinics-using-qdrant-hybrid-cloud-jwt-rbac-dspy-and-089a772e08ae) using Qdrant Hybrid Cloud, DSPy, and Groq by [Sachin Khandewal](https://www.linkedin.com/in/sachink1729/)** ## Creator of the Month 🌟 Congratulations to Pavan Kumar for being awarded **Creator of the Month!** Check out what were Pavan's most valuable contributions to the Qdrant vector search community this past month: * **[Implementing Advanced Agentic Vector Search](https://towardsdev.com/implementing-advanced-agentic-vector-search-a-comprehensive-guide-to-crewai-and-qdrant-ca214ca4d039): A Comprehensive Guide to CrewAI and Qdrant** * **Production ready Secure and [Powerful AI Implementations with Azure Services](https://towardsdev.com/production-ready-secure-and-powerful-ai-implementations-with-azure-services-671b68631212)** * **Building Neural Search Pipelines with Azure and Qdrant: A Step-by-Step Guide [Part-1](https://towardsdev.com/building-neural-search-pipelines-with-azure-and-qdrant-a-step-by-step-guide-part-1-40c191084258) and [Part-2](https://towardsdev.com/building-neural-search-pipelines-with-azure-and-qdrant-a-step-by-step-guide-part-2-fba287b49574)** * **Building a RAG System with [Ollama, Qdrant and Raspberry Pi](https://blog.gopenai.com/harnessing-ai-at-the-edge-building-a-rag-system-with-ollama-qdrant-and-raspberry-pi-45ac3212cf75)** * **Building a [Multi-Document ReAct Agent](https://blog.stackademic.com/building-a-multi-document-react-agent-for-financial-analysis-using-llamaindex-and-qdrant-72a535730ac3) for Financial Analysis using LlamaIndex and Qdrant** Pavan is a seasoned technology expert with 14 years of extensive experience, passionate about sharing his knowledge through technical blogging, engaging in technical meetups, and staying active with cycling! Thank you, Pavan, for your outstanding contributions and commitment to the community! ## Most Active Members 🏆 We're excited to recognize our most active community members, who have been a constant support to vector search builders, and sharing their knowledge and making our community more engaging: * 🥇 **1st Place: Robert Caulk** * 🥈 **2nd Place: Nicola Procopio** * 🥉 **3rd Place: Joshua Mo** Thank you all for your dedication and for making the Qdrant vector search community such a dynamic and valuable place! Stay tuned for more highlights and updates in the next edition of Community Highlights! 🚀 **Join us for Office Hours! 🎙️** Don't miss our next [Office Hours hangout on Discord](https://discord.gg/s9YxGeQK?event=1252726857753821236), happening next week on June 27th. This is a great opportunity to introduce yourself to the community, learn more about vector search, and engage with the people behind this awesome content! See you there 👋",blog/community-highlights-1.md "--- title: ""QSoC 2024: Announcing Our Interns!"" draft: false slug: qsoc24-interns-announcement # Change this slug to your page slug if needed short_description: We are pleased to announce the selection of interns for the inaugural Qdrant Summer of Code (QSoC) program. # Change this description: We are pleased to announce the selection of interns for the inaugural Qdrant Summer of Code (QSoC) program. # Change this preview_image: /blog/qsoc24-interns-announcement/qsoc.jpg # Change this social_preview_image: /blog/qsoc24-interns-announcement/qsoc.jpg # Optional image used for link previews title_preview_image: /blog/qsoc24-interns-announcement/qsoc.jpg # Optional image used for blog post title # small_preview_image: /blog/Article-Image.png # Optional image used for small preview in the list of blog posts date: 2024-05-08T16:44:22-03:00 author: Sabrina Aquino # Change this featured: false # if true, this post will be featured on the blog page tags: # Change this, related by tags posts will be shown on the blog page - QSoC - Qdrant Summer of Code - Google Summer of Code - vector search --- We are excited to announce the interns selected for the inaugural Qdrant Summer of Code (QSoC) program! After receiving many impressive applications, we have chosen two talented individuals to work on the following projects: **[Jishan Bhattacharya](https://www.linkedin.com/in/j16n/): WASM-based Dimension Reduction Visualization** Jishan will be implementing a dimension reduction algorithm in Rust, compiling it to WebAssembly (WASM), and integrating it with the Qdrant Web UI. This project aims to provide a more efficient and smoother visualization experience, enabling the handling of more data points and higher dimensions efficiently. **[Celine Hoang](https://www.linkedin.com/in/celine-h-hoang/): ONNX Cross Encoders in Python** Celine Hoang will focus on porting advanced ranking models—specifically Sentence Transformers, ColBERT, and BGE—to the ONNX (Open Neural Network Exchange) format. This project will enhance Qdrant's model support, making it more versatile and efficient in handling complex ranking tasks that are critical for applications such as recommendation engines and search functionalities. We look forward to working with Jishan and Celine over the coming months and are excited to see their contributions to the Qdrant project. Stay tuned for more updates on the QSoC program and the progress of these projects! ",blog/qsoc24-interns-announcement.md "--- title: ""DSPy vs LangChain: A Comprehensive Framework Comparison"" #required short_description: DSPy and LangChain are powerful frameworks for building AI applications leveraging LLMs and vector search technology. description: We dive deep into the capabilities of DSPy and LangChain and discuss scenarios where each of these frameworks shine. #required social_preview_image: /blog/dspy-vs-langchain/dspy-langchain.png # This image will be used in preview_image: /blog/dspy-vs-langchain/dspy-langchain.png author: Qdrant Team # Author of the article. Required. author_link: https://qdrant.tech/ # Link to the author's page. Required. date: 2024-02-23T08:00:00-03:00 # Date of the article. Required. draft: false # If true, the article will not be published keywords: # Keywords for SEO - DSPy - LangChain - AI frameworks - LLMs - vector search - RAG applications - chatbots --- # The Evolving Landscape of AI Frameworks As Large Language Models (LLMs) and vector stores have become steadily more powerful, a new generation of frameworks has appeared which can streamline the development of AI applications by leveraging LLMs and vector search technology. These frameworks simplify the process of building everything from Retrieval Augmented Generation (RAG) applications to complex chatbots with advanced conversational abilities, and even sophisticated reasoning-driven AI applications. The most well-known of these frameworks is possibly [LangChain](https://github.com/langchain-ai/langchain). [Launched in October 2022](https://en.wikipedia.org/wiki/LangChain) as an open-source project by Harrison Chase, the project quickly gained popularity, attracting contributions from hundreds of developers on GitHub. LangChain excels in its broad support for documents, data sources, and APIs. This, along with seamless integration with vector stores like Qdrant and the ability to chain multiple LLMs, has allowed developers to build complex AI applications without reinventing the wheel. However, despite the many capabilities unlocked by frameworks like LangChain, developers still needed expertise in [prompt engineering](https://en.wikipedia.org/wiki/Prompt_engineering) to craft optimal LLM prompts. Additionally, optimizing these prompts and adapting them to build multi-stage reasoning AI remained challenging with the existing frameworks. In fact, as you start building production-grade AI applications, it becomes clear that a single LLM call isn’t enough to unlock the full capabilities of LLMs. Instead, you need to create a workflow where the model interacts with external tools like web browsers, fetches relevant snippets from documents, and compiles the results into a multi-stage reasoning pipeline. This involves building an architecture that combines and reasons on intermediate outputs, with LLM prompts that adapt according to the task at hand, before producing a final output. A manual approach to prompt engineering quickly falls short in such scenarios. In October 2023, researchers working in Stanford NLP released a library, [DSPy](https://github.com/stanfordnlp/dspy), which entirely automates the process of optimizing prompts and weights for large language models (LLMs), eliminating the need for manual prompting or prompt engineering. One of DSPy's key features is its ability to automatically tune LLM prompts, an approach that is especially powerful when your application needs to call the LLM several times within a pipeline. So, when building an LLM and vector store-backed AI application, which of these frameworks should you choose? In this article, we dive deep into the capabilities of each and discuss scenarios where each of these frameworks shine. Let’s get started! ## **LangChain: Features, Performance, and Use Cases** LangChain, as discussed above, is an open-source orchestration framework available in both [Python](https://python.langchain.com/v0.2/docs/introduction/) and [JavaScript](https://js.langchain.com/v0.2/docs/introduction/), designed to simplify the development of AI applications leveraging LLMs. For developers working with one or multiple LLMs, it acts as a universal interface for these AI models. LangChain integrates with various external data sources, supports a wide range of data types and stores, streamlines the handling of vector embeddings and retrieval through similarity search, and simplifies the integration of AI applications with existing software workflows. At a high level, LangChain abstracts the common steps required to work with language models into modular components, which serve as the building blocks of AI applications. These components can be ""chained"" together to create complex applications. Thanks to these abstractions, LangChain allows for rapid experimentation and prototyping of AI applications in a short timeframe. LangChain breaks down the functionality required to build AI applications into three key sections: - **Model I/O**: Building blocks to interface with the LLM. - **Retrieval**: Building blocks to streamline the retrieval of data used by the LLM for generation (such as the retrieval step in RAG applications). - **Composition**: Components to combine external APIs, services and other LangChain primitives. These components are pulled together into ‘chains’ that are constructed using [LangChain Expression Language](https://python.langchain.com/v0.1/docs/expression_language/) (LCEL). We’ill first look at the various building blocks, and then see how they can be combined using LCEL. ### **LLM Model I/O** LangChain offers broad compatibility with various LLMs, and its [LLM](https://python.langchain.com/v0.1/docs/modules/model_io/llms/) class provides a standard interface to these models. Leveraging proprietary models offered by platforms like OpenAI, Mistral, Cohere, or Gemini is straightforward and requires just an API key from the respective platform. For instance, to use OpenAI models, you simply need to do the following: ```python from langchain_openai import OpenAI llm = OpenAI(api_key=""..."") llm.invoke(""Where is Paris?"") ``` Open-source models like Meta AI’s Llama variants (such as Llama3-8B) or Mistral AI’s open models (like Mistral-7B) can be easily integrated using their Hugging Face endpoints or local LLM deployment tools like Ollama, vLLM, or LM Studio. You can also use the [CustomLLM](https://python.langchain.com/v0.1/docs/modules/model_io/llms/custom_llm/) class to build Custom LLM wrappers. Here’s how simple it is to use LangChain with LlaMa3-8B, using [Ollama](https://ollama.com/). ```python from langchain_community.llms import Ollama llm = Ollama(model=""llama3"") llm.invoke(""Where is Berlin?"") ``` LangChain also offers output parsers to structure the LLM output in a format that the application may need, such as structured data types like JSON, XML, CSV, and others. To understand LangChain’s interface with LLMs in detail, read the documentation [here](https://python.langchain.com/v0.1/docs/modules/model_io/). ### **Retrieval** Most enterprise AI applications are built by augmenting the LLM context using data specific to the application’s use case. To accomplish this, the relevant data needs to be first retrieved, typically using vector similarity search, and then passed to the LLM context at the generation step. This architecture, known as [Retrieval Augmented Generation](/articles/what-is-rag-in-ai/) (RAG), can be used to build a wide range of AI applications. While the retrieval process sounds simple, it involves a number of complex steps: loading data from a source, splitting it into chunks, converting it into vectors or vector embeddings, storing it in a vector store, and then retrieving results based on a query before the generation step. LangChain offers a number of building blocks to make this retrieval process simpler. - **Document Loaders**: LangChain offers over 100 different document loaders, including integrations with providers like Unstructured or Airbyte. It also supports loading various types of documents, such as PDFs, HTML, CSV, and code, from a range of locations like S3. - **Splitting**: During the retrieval step, you typically need to retrieve only the relevant section of a document. To do this, you need to split a large document into smaller chunks. LangChain offers various document transformers that make it easy to split, combine, filter, or manipulate documents. - **Text Embeddings**: A key aspect of the retrieval step is converting document chunks into vectors, which are high-dimensional numerical representations that capture the semantic meaning of the text. LangChain offers integrations with over 25 embedding providers and methods, such as [FastEmbed](https://github.com/qdrant/fastembed). - **Vector Store Integration**: LangChain integrates with over 50 vector stores, including specialized ones like [Qdrant](/documentation/frameworks/langchain/), and exposes a standard interface. - **Retrievers**: LangChain offers various retrieval algorithms and allows you to use third-party retrieval algorithms or create custom retrievers. - **Indexing**: LangChain also offers an indexing API that keeps data from any data source in sync with the vector store, helping to reduce complexities around managing unchanged content or avoiding duplicate content. ### **Composition** Finally, LangChain also offers building blocks that help combine external APIs, services, and LangChain primitives. For instance, it provides tools to fetch data from Wikipedia or search using Google Lens. The list of tools it offers is [extremely varied](https://python.langchain.com/v0.1/docs/integrations/tools/). LangChain also offers ways to build agents that use language models to decide on the sequence of actions to take. ### **LCEL** The primary method of building an application in LangChain is through the use of [LCEL](https://python.langchain.com/v0.1/docs/expression_language/), the LangChain Expression Language. It is a declarative syntax designed to simplify the composition of chains within the LangChain framework. It provides a minimalist code layer that enables the rapid development of chains, leveraging advanced features such as streaming, asynchronous execution, and parallel processing. LCEL is particularly useful for building chains that involve multiple language model calls, data transformations, and the integration of outputs from language models into downstream applications. ### **Some Use Cases of LangChain** Given the flexibility that LangChain offers, a wide range of applications can be built using the framework. Here are some examples: **RAG Applications**: LangChain provides all the essential building blocks needed to build Retrieval Augmented Generation (RAG) applications. It integrates with vector stores and LLMs, streamlining the entire process of loading, chunking, and retrieving relevant sections of a document in a few lines of code. **Chatbots**: LangChain offers a suite of components that streamline the process of building conversational chatbots. These include chat models, which are specifically designed for message-based interactions and provide a conversational tone suitable for chatbots. **Extracting Structured Outputs**: LangChain assists in extracting structured output from data using various tools and methods. It supports multiple extraction approaches, including tool/function calling mode, JSON mode, and prompting-based extraction. **Agents**: LangChain simplifies the process of building agents by providing building blocks and integration with LLMs, enabling developers to construct complex, multi-step workflows. These agents can interact with external data sources and tools, and generate dynamic and context-aware responses for various applications. If LangChain offers such a wide range of integrations and the primary building blocks needed to build AI applications, *why do we need another framework?* As Omar Khattab, PhD, Stanford and researcher at Stanford NLP, said when introducing DSPy in his [talk](https://www.youtube.com/watch?v=Dt3H2ninoeY) at ‘Scale By the Bay’ in November 2023: “We can build good reliable systems with these new artifacts that are language models (LMs), but importantly, this is conditioned on us *adapting* them as well as *stacking* them well”. ## **DSPy: Features, Performance, and Use Cases** When building AI systems, developers need to break down the task into multiple reasoning steps, adapt language model (LM) prompts for each step until they get the right results, and then ensure that the steps work together to achieve the desired outcome. Complex multihop pipelines, where multiple LLM calls are stacked, are messy. They involve string-based prompting tricks or prompt hacks at each step, and getting the pipeline to work is even trickier. Additionally, the manual prompting approach is highly unscalable, as any change in the underlying language model breaks the prompts and the pipeline. LMs are highly sensitive to prompts and slight changes in wording, context, or phrasing can significantly impact the model's output. Due to this, despite the functionality provided by frameworks like LangChain, developers often have to spend a lot of time engineering prompts to get the right results from LLMs. How do you build a system that’s less brittle and more predictable? Enter DSPy! [DSPy](https://github.com/stanfordnlp/dspy) is built on the paradigm that language models (LMs) should be programmed rather than prompted. The framework is designed for algorithmically optimizing and adapting LM prompts and weights, and focuses on replacing prompting techniques with a programming-centric approach. DSPy treats the LM like a device and abstracts out the underlying complexities of prompting. To achieve this, DSPy introduces three simple building blocks: ### **Signatures** [Signatures](https://dspy-docs.vercel.app/docs/building-blocks/signatures) replace handwritten prompts and are written in natural language. They are simply declarations or specs of the behavior that you expect from the language model. Some examples are: - question -> answer - long_document -> summary - context, question -> rationale, response Rather than manually crafting complex prompts or engaging in extensive fine-tuning of LLMs, signatures allow for the automatic generation of optimized prompts. DSPy Signatures can be specified in two ways: 1. Inline Signatures: Simple tasks can be defined in a concise format, like ""question -> answer"" for question-answering or ""document -> summary"" for summarization. 2. Class-Based Signatures: More complex tasks might require class-based signatures, which can include additional instructions or descriptions about the inputs and outputs. For example, a class for emotion classification might clearly specify the range of emotions that can be classified. ### **Modules** Modules take signatures as input, and automatically generate high-quality prompts. Inspired heavily from PyTorch, DSPy [modules](https://dspy-docs.vercel.app/docs/building-blocks/modules) eliminate the need for crafting prompts manually. The framework supports advanced modules like [dspy.ChainOfThought](https://dspy-docs.vercel.app/api/modules/ChainOfThought), which adds step-by-step rationalization before producing an output. The output not only provides answers but also rationales. Other modules include [dspy.ProgramOfThought](https://dspy-docs.vercel.app/api/modules/ProgramOfThought), which outputs code whose execution results dictate the response, and [dspy.ReAct](https://dspy-docs.vercel.app/api/modules/ReAct), an agent that uses tools to implement signatures. DSPy also offers modules like [dspy.MultiChainComparison](https://dspy-docs.vercel.app/api/modules/MultiChainComparison), which can compare multiple outputs from dspy.ChainOfThought in order to produce a final prediction. There are also utility modules like [dspy.majority](https://dspy-docs.vercel.app/docs/building-blocks/modules#what-other-dspy-modules-are-there-how-can-i-use-them) for aggregating responses through voting. Modules can be composed into larger programs, and you can compose multiple modules into bigger modules. This allows you to create complex, behavior-rich applications using language models. ### **Optimizers** [Optimizers](https://dspy-docs.vercel.app/docs/building-blocks/optimizers) take a set of modules that have been connected to create a pipeline, compile them into auto-optimized prompts, and maximize an outcome metric. Essentially, optimizers are designed to generate, test, and refine prompts, and ensure that the final prompt is highly optimized for the specific dataset and task at hand. Using optimizers in the DSPy framework significantly simplifies the process of developing and refining LM applications by automating the prompt engineering process. ### **Building AI Applications with DSPy** A typical DSPy program requires the developer to follow the following 8 steps: 1. **Defining the Task**: Identify the specific problem you want to solve, including the input and output formats. 2. **Defining the Pipeline**: Plan the sequence of operations needed to solve the task. Then craft the signatures and the modules. 3. **Testing with Examples**: Run the pipeline with a few examples to understand the initial performance. This helps in identifying immediate issues with the program and areas for improvement. 4. **Defining Your Data**: Prepare and structure your training and validation datasets. This is needed by the optimizer for training the model and evaluating its performance accurately. 5. **Defining Your Metric**: Choose metrics that will measure the success of your model. These metrics help the optimizer evaluate how well the model is performing. 6. **Collecting Zero-Shot Evaluations**: Run initial evaluations without prior training to establish a baseline. This helps in understanding the model’s capabilities and limitations out of the box. 7. **Compiling with a DSPy Optimizer**: Given the data and metric, you can now optimize the program. DSPy offers a variety of optimizers designed for different purposes. These optimizers can generate step-by-step examples, craft detailed instructions, and/or update language model prompts and weights as needed. 8. **Iterating**: Continuously refine each aspect of your task, from the pipeline and data to the metrics and evaluations. Iteration helps in gradually improving the model’s performance and adapting to new requirements. 9. {{< figure src=/blog/dspy-vs-langchain/process.jpg caption=""Process"" >}} **Language Model Setup** Setting up the LM in DSPy is easy. ```python # pip install dspy import dspy llm = dspy.OpenAI(model='gpt-3.5-turbo-1106', max_tokens=300) dspy.configure(lm=llm) # Let's test this. First define a module (ChainOfThought) and assign it a signature (return an answer, given a question). qa = dspy.ChainOfThought('question -> answer') # Then, run with the default LM configured. response = qa(question=""Where is Paris?"") print(response.answer) ``` You are not restricted to using one LLM in your program; you can use [multiple](https://dspy-docs.vercel.app/docs/building-blocks/language_models#using-multiple-lms-at-once). DSPy can be used with both managed models such as OpenAI, Cohere, Anyscale, Together, or PremAI as well as with local LLM deployments through vLLM, Ollama, or TGI server. All LLM calls are cached by default. **Vector Store Integration (Retrieval Model)** You can easily set up [Qdrant](/documentation/frameworks/dspy/) vector store to act as the retrieval model. To do so, follow these steps: ```python # pip install dspy-ai[qdrant] import dspy from dspy.retrieve.qdrant_rm import QdrantRM from qdrant_client import QdrantClient llm = dspy.OpenAI(model=""gpt-3.5-turbo"") qdrant_client = QdrantClient() qdrant_rm = QdrantRM(""collection-name"", qdrant_client, k=3) dspy.settings.configure(lm=llm, rm=qdrant_rm) ``` The above code sets up DSPy to use Qdrant (localhost), with collection-name as the default retrieval client. You can now build a RAG module in the following way: ```python class RAG(dspy.Module): def __init__(self, num_passages=5): super().__init__() self.retrieve = dspy.Retrieve(k=num_passages) self.generate_answer = dspy.ChainOfThought('context, question -> answer') # using inline signature def forward(self, question): context = self.retrieve(question).passages prediction = self.generate_answer(context=context, question=question) return dspy.Prediction(context=context, answer=prediction.answer) ``` Now you can use the RAG module like any Python module. **Optimizing the Pipeline** In this step, DSPy requires you to create a training dataset and a metric function, which can help validate the output of your program. Using this, DSPy tunes the parameters (i.e., the prompts and/or the LM weights) to maximize the accuracy of the RAG pipeline. Using DSPy optimizers involves the following steps: 1. Set up your DSPy program with the desired signatures and modules. 2. Create a training and validation dataset, with example input and output that you expect from your DSPy program. 3. Choose an appropriate optimizer such as BootstrapFewShotWithRandomSearch, MIPRO, or BootstrapFinetune. 4. Create a metric function that evaluates the performance of the DSPy program. You can evaluate based on accuracy or quality of responses, or on a metric that’s relevant to your program. 5. Run the optimizer with the DSPy program, metric function, and training inputs. DSPy will compile the program and automatically adjust parameters and improve performance. 6. Use the compiled program to perform the task. Iterate and adapt if required. To learn more about optimizing DSPy programs, read [this](https://dspy-docs.vercel.app/docs/building-blocks/optimizers). DSPy is heavily influenced by PyTorch, and replaces complex prompting with reusable modules for common tasks. Instead of crafting specific prompts, you write code that DSPy automatically translates for the LLM. This, along with built-in optimizers, makes working with LLMs more systematic and efficient. ### **Use Cases of DSPy** As we saw above, DSPy can be used to create fairly complex applications which require stacking multiple LM calls without the need for prompt engineering. Even though the framework is comparatively new - it started gaining popularity since November 2023 when it was first introduced - it has created a promising new direction for LLM-based applications. Here are some of the possible uses of DSPy: **Automating Prompt Engineering**: DSPy automates the process of creating prompts for LLMs, and allows developers to focus on the core logic of their application. This is powerful as manual prompt engineering makes AI applications highly unscalable and brittle. **Building Chatbots**: The modular design of DSPy makes it well-suited for creating chatbots with improved response quality and faster development cycles. DSPy's automatic prompting and optimizers can help ensure chatbots generate consistent and informative responses across different conversation contexts. **Complex Information Retrieval Systems**: DSPy programs can be easily integrated with vector stores, and used to build multi-step information retrieval systems with stacked calls to the LLM. This can be used to build highly sophisticated retrieval systems. For example, DSPy can be used to develop custom search engines that understand complex user queries and retrieve the most relevant information from vector stores. **Improving LLM Pipelines**: One of the best uses of DSPy is to optimize LLM pipelines. DSPy's modular design greatly simplifies the integration of LLMs into existing workflows. Additionally, DSPy's built-in optimizers can help fine-tune LLM pipelines based on desired metrics. **Multi-Hop Question-Answering**: Multi-hop question-answering involves answering complex questions that require reasoning over multiple pieces of information, which are often scattered across different documents or sections of text. With DSPy, users can leverage its automated prompt engineering capabilities to develop prompts that effectively guide the model on how to piece together information from various sources. ## **Comparative Analysis: DSPy vs LangChain** DSPy and LangChain are both powerful frameworks for building AI applications, leveraging large language models (LLMs) and vector search technology. Below is a comparative analysis of their key features, performance, and use cases: | Feature | LangChain | DSPy | | --- | --- | --- | | Core Focus | Focus on providing a large number of building blocks to simplify the development of applications that use LLMs in conjunction with user-specified data sources. | Focus on automating and modularizing LLM interactions, eliminating manual prompt engineering and improving systematic reliability. | | Approach | Utilizes modular components and chains that can be linked together using the LangChain Expression Language (LCEL). | Streamlines LLM interaction by prioritizing programming instead of prompting, and automating prompt refinement and weight tuning. | | Complex Pipelines | Facilitates the creation of chains using LCEL, supporting asynchronous execution and integration with various data sources and APIs. | Simplifies multi-stage reasoning pipelines using modules and optimizers, and ensures scalability through less manual intervention. | | Optimization | Relies on user expertise for prompt engineering and chaining of multiple LLM calls. | Includes built-in optimizers that automatically tune prompts and weights, and helps bring efficiency and effectiveness in LLM pipelines. | | Community and Support | Large open-source community with extensive documentation and examples. | Emerging framework with growing community support, and bringing a paradigm-shift in LLM prompting. | ### **LangChain** Strengths: 1. Data Sources and APIs: LangChain supports a wide variety of data sources and APIs, and allows seamless integration with different types of data. This makes it highly versatile for various AI applications​. 2. LangChain provides modular components that can be chained together and allows you to create complex AI workflows. LangChain Expression Language (LCEL) lets you use declarative syntax and makes it easier to build and manage workflows. 3. Since LangChain is an older framework, it has extensive documentation and thousands of examples that developers can take inspiration from. Weaknesses: 1. For projects involving complex, multi-stage reasoning tasks, LangChain requires significant manual prompt engineering. This can be time-consuming and prone to errors​. 2. Scalability Issues: Managing and scaling workflows that require multiple LLM calls can be pretty challenging. 3. Developers need sound understanding of prompt engineering in order to build applications that require multiple calls to the LLM. ### **DSPy** Strengths: 1. DSPy automates the process of prompt generation and optimization, and significantly reduces the need for manual prompt engineering. This makes working with LLMs easier and helps build scalable AI workflows​. 2. The framework includes built-in optimizers like BootstrapFewShot and MIPRO, which automatically refine prompts and adapt them to specific datasets​. 3. DSPy uses general-purpose modules and optimizers to simplify the complexities of prompt engineering. This can help you create complex multi-step reasoning applications easily, without worrying about the intricacies of dealing with LLMs. 4. DSPy supports various LLMs, including the flexibility of using multiple LLMs in the same program. 5. By focusing on programming rather than prompting, DSPy ensures higher reliability and performance for AI applications, particularly those that require complex multi-stage reasoning​​. Weaknesses: 1. As a newer framework, DSPy has a smaller community compared to LangChain. This means you will have limited availability of resources, examples, and community support​. 2. Although DSPy offers tutorials and guides, its documentation is less extensive than LangChain’s, which can pose challenges when you start​. 3. When starting with DSPy, you may feel limited to the paradigms and modules it provides. ​ ## **Selecting the Ideal Framework for Your AI Project** When deciding between DSPy and LangChain for your AI project, you should consider the problem statement and choose the framework that best aligns with your project goals. Here are some guidelines: ### **Project Type** **LangChain**: LangChain is ideal for projects that require extensive integration with multiple data sources and APIs, especially projects that benefit from the wide range of document loaders, vector stores, and retrieval algorithms that it supports​. **DSPy**: DSPy is best suited for projects that involve complex multi-stage reasoning pipelines or those that may eventually need stacked LLM calls. DSPy’s systematic approach to prompt engineering and its ability to optimize LLM interactions can help create highly reliable AI applications​. ### **Technical Expertise** **LangChain**: As the complexity of the application grows, LangChain requires a good understanding of prompt engineering and expertise in chaining multiple LLM calls. **DSPy**: Since DSPy is designed to abstract away the complexities of prompt engineering, it makes it easier for developers to focus on high-level logic rather than low-level prompt crafting. ### **Community and Support** **LangChain**: LangChain boasts a large and active community with extensive documentation, examples, and active contributions, and you will find it easier to get going. **DSPy**: Although newer and with a smaller community, DSPy is growing rapidly and offers tutorials and guides for some of the key use cases. DSPy may be more challenging to get started with, but its architecture makes it highly scalable. ### **Use Case Scenarios** **Retrieval Augmented Generation (RAG) Applications** **LangChain**: Excellent for building simple RAG applications due to its robust support for vector stores, document loaders, and retrieval algorithms. **DSPy**: Suitable for RAG applications requiring high reliability and automated prompt optimization, ensuring consistent performance across complex retrieval tasks. **Chatbots and Conversational AI** **LangChain**: Provides a wide range of components for building conversational AI, making it easy to integrate LLMs with external APIs and services​​. **DSPy**: Ideal for developing chatbots that need to handle complex, multi-stage conversations with high reliability and performance. DSPy’s automated optimizations ensure consistent and contextually accurate responses. **Complex Information Retrieval Systems** **LangChain**: Effective for projects that require seamless integration with various data sources and sophisticated retrieval capabilities​​. **DSPy**: Best for systems that involve complex multi-step retrieval processes, where prompt optimization and modular design can significantly enhance performance and reliability. You can also choose to combine and use the best features of both. In fact, LangChain has released an [integration with DSPy](https://python.langchain.com/v0.1/docs/integrations/providers/dspy/) to simplify this process. This allows you to use some of the utility functions that LangChain provides, such as text splitter, directory loaders, or integrations with other data sources while using DSPy for the LM interactions. ### Key Takeaways: - **LangChain's Flexibility:** LangChain integrates seamlessly with Qdrant, enabling streamlined vector embedding and retrieval for AI workflows. - **Optimized Retrieval:** Automate and enhance retrieval processes in multi-stage AI reasoning applications. - **Enhanced RAG Applications:** Fast and accurate retrieval of relevant document sections through vector similarity search. - **Support for Complex AI:** LangChain integration facilitates the creation of advanced AI architectures requiring precise information retrieval. - **Streamlined AI Development:** Simplify managing and retrieving large datasets, leading to more efficient AI development cycles in LangChain and DSPy. - **Future AI Workflows:** Qdrant's role in optimizing retrieval will be crucial as AI frameworks like DSPy continue to evolve and scale. ## **Level Up Your AI Projects with Advanced Frameworks** LangChain and DSPy both offer unique capabilities and can help you build powerful AI applications. Qdrant integrates with both LangChain and DSPy, allowing you to leverage its performance, efficiency and security features in either scenario. LangChain is ideal for projects that require extensive integration with various data sources and APIs. On the other hand, DSPy offers a powerful paradigm for building complex multi-stage applications. For pulling together an AI application that doesn’t require much prompt engineering, use LangChain. However, pick DSPy when you need a systematic approach to prompt optimization and modular design, and need robustness and scalability for complex, multi-stage reasoning applications. ## **References** [https://python.langchain.com/v0.1/docs/get_started/introduction](https://python.langchain.com/v0.1/docs/get_started/introduction) [https://dspy-docs.vercel.app/docs/intro](https://dspy-docs.vercel.app/docs/intro)",blog/dspy-vs-langchain.md "--- title: ""Semantic Cache: Accelerating AI with Lightning-Fast Data Retrieval"" draft: false slug: short_description: ""Semantic Cache for Best Results and Optimization."" description: ""Semantic cache is reshaping AI applications by enabling rapid data retrieval. Discover how its implementation benefits your RAG setup."" preview_image: /blog/semantic-cache-ai-data-retrieval/social_preview.png social_preview_image: /blog/semantic-cache-ai-data-retrieval/social_preview.png date: 2024-05-07T00:00:00-08:00 author: Daniel Romero, David Myriel featured: false tags: - vector search - vector database - semantic cache - gpt cache - semantic cache llm - AI applications - data retrieval - efficient data storage --- ## What is Semantic Cache? **Semantic cache** is a method of retrieval optimization, where similar queries instantly retrieve the same appropriate response from a knowledge base. Semantic cache differs from traditional caching methods. In computing, **cache** refers to high-speed memory that efficiently stores frequently accessed data. In the context of vector databases, a **semantic cache** improves AI application performance by storing previously retrieved results along with the conditions under which they were computed. This allows the application to reuse those results when the same or similar conditions occur again, rather than finding them from scratch. > The term **""semantic""** implies that the cache takes into account the meaning or semantics of the data or computation being cached, rather than just its syntactic representation. This can lead to more efficient caching strategies that exploit the structure or relationships within the data or computation. ![semantic-cache-question](/blog/semantic-cache-ai-data-retrieval/semantic-cache-question.png) Traditional caches operate on an exact match basis, while semantic caches search for the meaning of the key rather than an exact match. For example, **""What is the capital of Brazil?""** and **""Can you tell me the capital of Brazil?""** are semantically equivalent, but not exact matches. A semantic cache recognizes such semantic equivalence and provides the correct result. In this blog and video, we will walk you through how to use Qdrant to implement a basic semantic cache system. You can also try the [notebook example](https://github.com/infoslack/qdrant-example/blob/main/semantic-cache.ipynb) for this implementation. [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://githubtocolab.com/infoslack/qdrant-example/blob/main/semantic-cache.ipynb) ## Semantic Cache in RAG: the Key-Value Mechanism Semantic cache is increasingly used in Retrieval-Augmented Generation (RAG) applications. In RAG, when a user asks a question, we embed it and search our vector database, either by using keyword, semantic, or hybrid search methods. The matched context is then passed to a Language Model (LLM) along with the prompt and user question for response generation. Qdrant is recommended for setting up semantic cache as semantically evaluates the response. When semantic cache is implemented, we store common questions and their corresponding answers in a key-value cache. This way, when a user asks a question, we can retrieve the response from the cache if it already exists. **Diagram:** Semantic cache improves RAG by directly retrieving stored answers to the user. **Follow along with the gif** and see how semantic cache stores and retrieves answers. ![Alt Text](/blog/semantic-cache-ai-data-retrieval/semantic-cache.gif) When using a key-value cache, it's important to consider that slight variations in question wording can lead to different hash values. The two questions convey the same query but differ in wording. A naive cache search might fail due to distinct hashed versions of the questions. Implementing a more nuanced approach is necessary to accommodate phrasing variations and ensure accurate responses. To address this challenge, a semantic cache can be employed instead of relying solely on exact matches. This entails storing questions, answers, and their embeddings in a key-value structure. When a user poses a question, a semantic search by Qdrant is conducted across all cached questions to identify the most similar one. If the similarity score surpasses a predefined threshold, the system assumes equivalence between the user's question and the matched one, providing the corresponding answer accordingly. ## Benefits of Semantic Cache for AI Applications Semantic cache contributes to scalability in AI applications by making it simpler to retrieve common queries from vast datasets. The retrieval process can be computationally intensive and implementing a cache component can reduce the load. For instance, if hundreds of users repeat the same question, the system can retrieve the precomputed answer from the cache rather than re-executing the entire process. This cache stores questions as keys and their corresponding answers as values, providing an efficient means to handle repeated queries. > There are **potential cost savings** associated with utilizing semantic cache. Using a semantic cache eliminates the need for repeated searches and generation processes for similar or duplicate questions, thus saving time and LLM API resources, especially when employing costly language model calls like OpenAI's. ## When to Use Semantic Cache? For applications like question-answering systems where facts are retrieved from documents, caching is beneficial due to the consistent nature of the queries. *However, for text generation tasks requiring varied responses, caching may not be ideal as it returns previous responses, potentially limiting variation.* Thus, the decision to use caching depends on the specific use case. Using a cache might not be ideal for applications where diverse responses are desired across multiple queries. However, in question-answering systems, caching is advantageous since variations are insignificant. It serves as an effective performance optimization tool for chatbots by storing frequently accessed data. One strategy involves creating ad-hoc patches for chatbot dialogues, where commonly asked questions are pre-mapped to prepared responses in the cache. This allows the chatbot to swiftly retrieve and deliver responses without relying on a Language Model (LLM) for each query. ## Implement Semantic Cache: A Step-by-Step Guide The first part of this video explains how caching works. In the second part, you can follow along with the code with our [notebook example](https://github.com/infoslack/qdrant-example/blob/main/semantic-cache.ipynb). [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://githubtocolab.com/infoslack/qdrant-example/blob/main/semantic-cache.ipynb)

## Embrace the Future of AI Data Retrieval [Qdrant](https://github.com/qdrant/qdrant) offers the most flexible way to implement vector search for your RAG and AI applications. You can test out semantic cache on your free Qdrant Cloud instance today! Simply sign up for or log into your [Qdrant Cloud account](https://cloud.qdrant.io/login) and follow our [documentation](/documentation/cloud/). You can also deploy Qdrant locally and manage via our UI. To do this, check our [Hybrid Cloud](/blog/hybrid-cloud/)! [![hybrid-cloud-get-started](/blog/hybrid-cloud-launch-partners/hybrid-cloud-get-started.png)](https://cloud.qdrant.io/login) ",blog/semantic-cache-ai-data-retrieval.md "--- draft: false title: How to Superpower Your Semantic Search Using a Vector Database Vector Space Talks slug: semantic-search-vector-database short_description: Nicolas Mauti and his team at Malt discusses how they revolutionize the way freelancers connect with projects. description: Unlock the secrets of supercharging semantic search with Nicolas Mauti's insights on leveraging vector databases. Discover advanced strategies. preview_image: /blog/from_cms/nicolas-mauti-cropped.png date: 2024-01-09T12:27:18.659Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - Retriever-Ranker Architecture - Semantic Search --- # How to Superpower Your Semantic Search Using a Vector Database with Nicolas Mauti > *""We found a trade off between performance and precision in Qdrant’s that were better for us than what we can found on Elasticsearch.”*\ > -- Nicolas Mauti > Want precision & performance in freelancer search? Malt's move to the Qdrant database is a masterstroke, offering geospatial filtering & seamless scaling. How did Nicolas Mauti and the team at Malt identify the need to transition to a retriever-ranker architecture for their freelancer matching app? Nicolas Mauti, a computer science graduate from INSA Lyon Engineering School, transitioned from software development to the data domain. Joining Malt in 2021 as a data scientist, he specialized in recommender systems and NLP models within a freelancers-and-companies marketplace. Evolving into an MLOps Engineer, Nicolas adeptly combines data science, development, and ops knowledge to enhance model development tools and processes at Malt. Additionally, he has served as a part-time teacher in a French engineering school since 2020. Notably, in 2023, Nicolas successfully deployed Qdrant at scale within Malt, contributing to the implementation of a new matching system. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/5aTPXqa7GMjekUfD8aAXWG?si=otJ_CpQNScqTK5cYq2zBow), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/OSZSingUYBM).*** ## **Top Takeaways:** Dive into the intricacies of [semantic search](https://qdrant.tech/documentation/tutorials/search-beginners/) enhancement with Nicolas Mauti, MLOps Engineer at Malt. Discover how Nicolas and his team at Malt revolutionize the way freelancers connect with projects. In this episode, Nicolas delves into enhancing semantics search at Malt by implementing a retriever-ranker architecture with multilingual transformer-based models, improving freelancer-project matching through a transition to [Qdrant](https://qdrant.tech/) that reduced latency from 10 seconds to 1 second and bolstering the platform's overall performance and scaling capabilities. 5 Keys to Learning from the Episode: 1. **Performance Enhancement Tactics**: Understand the technical challenges Malt faced due to increased latency brought about by their expansion to over half a million freelancers and the solutions they enacted. 2. **Advanced Matchmaking Architecture**: Learn about the retriever-ranker model adopted by Malt, which incorporates semantic searching alongside a KNN search for better efficacy in pairing projects with freelancers. 3. **Cutting-Edge Model Training**: Uncover the deployment of a multilingual transformer-based encoder that effectively creates high-fidelity embeddings to streamline the matchmaking process. 4. **Database Selection Process**: Mauti discusses the factors that shaped Malt's choice of database systems, facilitating a balance between high performance and accurate filtering capabilities. 5. **Operational Improvements**: Gain knowledge of the significant strides Malt made post-deployment, including a remarkable reduction in application latency and its positive effects on scalability and matching quality. > Fun Fact: Malt employs a multilingual transformer-based encoder model to generate 384-dimensional embeddings, which improved their semantic search capability. > ## Show Notes: 00:00 Matching app experiencing major performance issues.\ 04:56 Filtering freelancers and adopting retriever-ranker architecture.\ 09:20 Multilingual encoder model for adapting semantic space.\ 10:52 Review, retrain, categorize, and organize freelancers' responses.\ 16:30 Trouble with geospatial filtering databases\ 17:37 Benchmarking performance and precision of search algorithms.\ 21:11 Deployed in Kubernetes. Stored in Git repository, synchronized with Argo CD.\ 27:08 Improved latency quickly, validated architecture, aligned steps.\ 28:46 Invitation to discuss work using specific methods. ## More Quotes from Nicolas: *""And so GitHub's approach is basic idea that your git repository is your source of truth regarding what you must have in your Kubernetes clusters.”*\ -- Nicolas Mauti *""And so we can see that our space seems to be well organized, where the tech freelancer are close to each other and the graphic designer for example, are far from the tech family.”*\ -- Nicolas Mauti *""And also one thing that interested us is that it's multilingual. And as Malt is a European company, we have to have to model a multilingual model.”*\ -- Nicolas Mauti ## Transcript: Demetrios: We're live. We are live in the flesh. Nicholas, it's great to have you here, dude. And welcome to all those vector space explorers out there. We are back with another vector space talks. Today we're going to be talking all about how to superpower your semantics search with my man Nicholas, an ML ops engineer at Malt, in case you do not know what Malt is doing. They are pairing up, they're making a marketplace. They are connecting freelancers and companies. Demetrios: And Nicholas, you're doing a lot of stuff with recommender systems, right? Nicolas Mauti: Yeah, exactly. Demetrios: I love that. Well, as I mentioned, I am in an interesting spot because I'm trying to take in all the vitamin D I can while I'm listening to your talk. Everybody that is out there listening with us, get involved. Let us know where you're calling in from or watching from. And also feel free to drop questions in the chat as we go along. And if need be, I will jump in and stop Nicholas. But I know you got a little presentation for us, man you want to get into. Nicolas Mauti: Thanks for the, thanks for the introduction and hello, everyone. And thanks for the invitation to this talk, of course. So let's start. Let's do it. Demetrios: I love it. Superpowers. Nicolas Mauti: Yeah, we will have superpowers at the end of this presentation. So, yeah, hello, everyone. So I think the introduction was already done and perfectly done by Dimitrios. So I'm Nicola and yeah, I'm working as an Mlaps engineer at Malt. And also I'm a part time teacher in a french engineering school where I teach some mlaps course. So let's dig in today's subjects. So in fact, as Dimitrio said, malt is a marketplace and so our goal is to match on one side freelancers. And those freelancers have a lot of attributes, for example, a description, some skills and some awesome skills. Nicolas Mauti: And they also have some preferences and also some attributes that are not specifically semantics. And so it will be a key point of our topics today. And on other sides we have what we call projects that are submitted by companies. And this project also have a lot of attributes, for example, description, also some skills and need to find and also some preferences. And so our goal at the end is to perform a match between these two entities. And so for that we add a matching app in production already. And so in fact, we had a major issue with this application is performance of this application because the application becomes very slow. The p 50 latency was around 10 seconds. Nicolas Mauti: And what you have to keep from this is that if your latency, because became too high, you won't be able to perform certain scenarios. Sometimes you want some synchronous scenario where you fill your project and then you want to have directly your freelancers that match this project. And so if it takes too much time, you won't be able to have that. And so you will have to have some asynchronous scenario with email or stuff like that. And it's not very a good user experience. And also this problem were amplified by the exponential growth of the platform. Absolutely, we are growing. And so to give you some numbers, when I arrived two years ago, we had two time less freelancers. Nicolas Mauti: And today, and today we have around 600,000 freelancers in your base. So it's growing. And so with this grow, we had some, several issue. And something we have to keep in mind about this matching app. And so it's not only semantic app, is that we have two things in these apps that are not semantic. We have what we call art filters. And so art filters are art rules defined by the project team at Malt. And so these rules are hard and we have to respect them. Nicolas Mauti: For example, the question is hard rule at malt we have a local approach, and so we want to provide freelancers that are next to the project. And so for that we have to filter the freelancers and to have art filters for that and to be sure that we respect these rules. And on the other side, as you said, demetrius, we are talking about Rexis system here. And so in a rexy system, you also have to take into account some other parameters, for example, the preferences of the freelancers and also the activity on the platform of the freelancer, for example. And so in our system, we have to keep this in mind and to have this working. And so if we do a big picture of how our system worked, we had an API with some alphilter at the beginning, then ML model that was mainly semantic and then some rescoring function with other parameters. And so we decided to rework this architecture and to adopt a retriever ranker architecture. And so in this architecture, you will have your pool of freelancers. Nicolas Mauti: So here is your wall databases, so your 600,000 freelancers. And then you will have a first step that is called the retrieval, where we will constrict a subsets of your freelancers. And then you can apply your wrong kill algorithm. That is basically our current application. And so the first step will be, semantically, it will be fast, and it must be fast because you have to perform a quick selection of your more interesting freelancers and it's built for recall, because at this step you want to be sure that you have all your relevant freelancers selected and you don't want to exclude at this step some relevant freelancer because the ranking won't be able to take back these freelancers. And on the other side, the ranking can contain more features, not only semantics, it less conference in time. And if your retrieval part is always giving you a fixed size of freelancers, your ranking doesn't have to scale because you will always have the same number of freelancers in inputs. And this one is built for precision. Nicolas Mauti: At this point you don't want to keep non relevant freelancers and you have to be able to rank them and you have to be state of the art for this part. So let's focus on the first part. That's what will interesting us today. So for the first part, in fact, we have to build this semantic space where freelancers that are close regarding their skills or their jobs are closed in this space too. And so for that we will build this semantic space. And so then when we receive a project, we will have just to project this project in our space. And after that you will have just to do a search and a KNN search for knee arrest neighbor search. And in practice we are not doing a KNN search because it's too expensive, but inn search for approximate nearest neighbors. Nicolas Mauti: Keep this in mind, it will be interesting in our next slides. And so, to get this semantic space and to get this search, we need two things. The first one is a model, because we need a model to compute some vectors and to project our opportunity and our project and our freelancers in this space. And on another side, you will have to have a tool to operate this semantic step page. So to store the vector and also to perform the search. So for the first part, for the model, I will give you some quick info about how we build it. So for this part, it was more on the data scientist part. So the data scientist started from an e five model. Nicolas Mauti: And so the e five model will give you a common knowledge about the language. And also one thing that interested us is that it's multilingual. And as Malt is an european company, we have to have to model a multilingual model. And on top of that we built our own encoder model based on a transformer architecture. And so this model will be in charge to be adapted to Malchus case and to transform this very generic semantic space into a semantic space that is used for skills and jobs. And this model is also able to take into account the structure of a profile of a freelancer profile because you have a description and job, some skills, some experiences. And so this model is capable to take this into account. And regarding the training, we use some past interaction on the platform to train it. Nicolas Mauti: So when a freelancer receives a project, he can accept it or not. And so we use that to train this model. And so at the end we get some embeddings with 384 dimensions. Demetrios: One question from my side, sorry to stop you right now. Do you do any type of reviews or feedback and add that into the model? Nicolas Mauti: Yeah. In fact we continue to have some response about our freelancers. And so we also review them, sometimes manually because sometimes the response are not so good or we don't have exactly what we want or stuff like that, so we can review them. And also we are retraining the model regularly, so this way we can include new feedback from our freelancers. So now we have our model and if we want to see how it looks. So here I draw some ponds and color them by the category of our freelancer. So on the platform the freelancer can have category, for example tech or graphic or soon designer or this kind of category. And so we can see that our space seems to be well organized, where the tech freelancer are close to each other and the graphic designer for example, are far from the tech family. Nicolas Mauti: So it seems to be well organized. And so now we have a good model. So okay, now we have our model, we have to find a way to operate it, so to store this vector and to perform our search. And so for that, Vectordb seems to be the good candidate. But if you follow the news, you can see that vectordb is very trendy and there is plenty of actor on the market. And so it could be hard to find your loved one. And so I will try to give you the criteria we had and why we choose Qdrant at the end. So our first criteria were performances. Nicolas Mauti: So I think I already talked about this ponds, but yeah, we needed performances. The second ones was about inn quality. As I said before, we cannot do a KnN search, brute force search each time. And so we have to find a way to approximate but to be close enough and to be good enough on these points. And so otherwise we won't be leveraged the performance of our model. And the last one, and I didn't talk a lot about this before, is filtering. Filtering is a big problem for us because we have a lot of filters, of art filters, as I said before. And so if we think about my architecture, we can say, okay, so filtering is not a problem. Nicolas Mauti: You can just have a three step process and do filtering, semantic search and then ranking, or do semantic search, filtering and then ranking. But in both cases, you will have some troubles if you do that. The first one is if you want to apply prefiltering. So filtering, semantic search, ranking. If you do that, in fact, you will have, so we'll have this kind of architecture. And if you do that, you will have, in fact, to flag each freelancers before asking the [vector database](https://qdrant.tech/articles/what-is-a-vector-database/) and performing a search, you will have to flag each freelancer whether there could be selected or not. And so with that, you will basically create a binary mask on your freelancers pool. And as the number of freelancers you have will grow, your binary namask will also grow. Nicolas Mauti: And so it's not very scalable. And regarding the performance, it will be degraded as your freelancer base grow. And also you will have another problem. A lot of [vector database](https://qdrant.tech/articles/what-is-a-vector-database/) and Qdrants is one of them using hash NSW algorithm to do your inn search. And this kind of algorithm is based on graph. And so if you do that, you will deactivate some nodes in your graph, and so your graph will become disconnected and you won't be able to navigate in your graph. And so your quality of your matching will degrade. So it's definitely not a good idea to apply prefiltering. Nicolas Mauti: So, no, if we go to post filtering here, I think the issue is more clear. You will have this kind of architecture. And so, in fact, if you do that, you will have to retrieve a lot of freelancer for your [vector database](https://qdrant.tech/articles/what-is-a-vector-database/). If you apply some very aggressive filtering and you exclude a lot of freelancer with your filtering, you will have to ask for a lot of freelancer in your vector database and so your performances will be impacted. So filtering is a problem. So we cannot do pre filtering or post filtering. So we had to find a database that do filtering and matching and semantic matching and search at the same time. And so Qdrant is one of them, you have other one in the market. Nicolas Mauti: But in our case, we had one filter that caused us a lot of troubles. And this filter is the geospatial filtering and a few of databases under this filtering, and I think Qdrant is one of them that supports it. But there is not a lot of databases that support them. And we absolutely needed that because we have a local approach and we want to be sure that we recommend freelancer next to the project. And so now that I said all of that, we had three candidates that we tested and we benchmarked them. We had elasticsearch PG vector, that is an extension of PostgreSQL and Qdrants. And on this slide you can see Pycon for example, and Pycon was excluded because of the lack of geospatial filtering. And so we benchmark them regarding the qps. Nicolas Mauti: So query per second. So this one is for performance, and you can see that quadron was far from the others, and we also benchmark it regarding the precision, how we computed the precision, for the precision we used a corpus that it's called textmax, and Textmax corpus provide 1 million vectors and 1000 queries. And for each queries you have your grown truth of the closest vectors. They used brute force knn for that. And so we stored this vector in our databases, we run the query and we check how many vectors we found that were in the ground truth. And so they give you a measure of your precision of your inn algorithm. For this metric, you could see that elasticsearch was a little bit better than Qdrants, but in fact we were able to tune a little bit the parameter of the AsHNSW algorithm and indexes. And at the end we found a better trade off, and we found a trade off between performance and precision in Qdrants that were better for us than what we can found on elasticsearch. Nicolas Mauti: So at the end we decided to go with Qdrant. So we have, I think all know we have our model and we have our tool to operate them, to operate our model. So a final part of this presentation will be about the deployment. I will talk about it a little bit because I think it's interesting and it's also part of my job as a development engineer. So regarding the deployment, first we decided to deploy a Qdrant in a cluster configuration. We decided to start with three nodes and so we decided to get our collection. So collection are where all your vector are stored in Qdrant, it's like a table in SQL or an index in elasticsearch. And so we decided to split our collection between three nodes. Nicolas Mauti: So it's what we call shards. So you have a shard of a collection on each node, and then for each shard you have one replica. So the replica is basically a copy of a shard that is living on another node than the primary shard. So this way you have a copy on another node. And so this way if we operate normal conditions, your query will be split across your three nodes, and so you will have your response accordingly. But what is interesting is that if we lose one node, for example, this one, for example, because we are performing a rolling upgrade or because kubernetes always kill pods, we will be still able to operate because we have the replica to get our data. And so this configuration is very robust and so we are very happy with it. And regarding the deployment. Nicolas Mauti: So as I said, we deployed it in kubernetes. So we use the Qdrant M chart, the official M chart provided by Qdrants. In fact we subcharted it because we needed some additional components in your clusters and some custom configuration. So I didn't talk about this, but M chart are just a bunch of file of Yaml files that will describe the Kubernetes object you will need in your cluster to operate your databases in your case, and it's collection of file and templates to do that. And when you have that at malt we are using what we called a GitHub's approach. And so GitHub's approach is basic idea that your git repository is your groom truth regarding what you must have in your Kubernetes clusters. And so we store these files and these M charts in git, and then we have a tool that is called Argo CD that will pull our git repository at some time and it will check the differences between what we have in git and what we have in our cluster and what is living in our cluster. And it will then synchronize what we have in git directly in our cluster, either automatically or manually. Nicolas Mauti: So this is a very good approach to collaborate and to be sure that what we have in git is what you have in your cluster. And to be sure about what you have in your cluster by just looking at your git repository. And I think that's pretty all I have one last slide, I think that will interest you. It's about the outcome of the project, because we did that at malt. We built this architecture with our first phase with Qdrants that do the semantic matching and that apply all the filtering we have. And in the second part we keep our all drunking system. And so if we look at the latency of our apps, at the P 50 latency of our apps, so it's a wall app with the two steps and with the filters, the semantic matching and the ranking. As you can see, we started in a debate test in mid October. Nicolas Mauti: Before that it was around 10 seconds latency, as I said at the beginning of the talk. And so we already saw a huge drop in the application and we decided to go full in December and we can see another big drop. And so we were around 10 seconds and now we are around 1 second and alpha. So we divided the latency of more than five times. And so it's a very good news for us because first it's more scalable because the retriever is very scalable and with the cluster deployment of Qdrants, if we need, we can add more nodes and we will be able to scale this phase. And after that we have a fixed number of freelancers that go into the matching part. And so the matching part doesn't have to scale. No. Nicolas Mauti: And the other good news is that now that we are able to scale and we have a fixed size, after our first parts, we are able to build more complex and better matching model and we will be able to improve the quality of our matching because now we are able to scale and to be able to handle more freelancers. Demetrios: That's incredible. Nicolas Mauti: Yeah, sure. It was a very good news for us. And so that's all. And so maybe you have plenty of question and maybe we can go with that. Demetrios: All right, first off, I want to give a shout out in case there are freelancers that are watching this or looking at this, now is a great time to just join Malt, I think. It seems like it's getting better every day. So I know there's questions that will come through and trickle in, but we've already got one from Luis. What's happening, Luis? He's asking what library or service were you using for Ann before considering Qdrant, in fact. Nicolas Mauti: So before that we didn't add any library or service or we were not doing any ann search or [semantic searc](https://qdrant.tech/documentation/tutorials/search-beginners/) in the way we are doing it right now. We just had one model when we passed the freelancers and the project at the same time in the model, and we got relevancy scoring at the end. And so that's why it was also so slow because you had to constrict each pair and send each pair to your model. And so right now we don't have to do that and so it's much better. Demetrios: Yeah, that makes sense. One question from my side is it took you, I think you said in October you started with the A B test and then in December you rolled it out. What was that last slide that you had? Nicolas Mauti: Yeah, that's exactly that. Demetrios: Why the hesitation? Why did it take you from October to December to go down? What was the part that you weren't sure about? Because it feels like you saw a huge drop right there and then why did you wait until December? Nicolas Mauti: Yeah, regarding the latency and regarding the drop of the latency, the result was very clear very quickly. I think maybe one week after that, we were convinced that the latency was better. First, our idea was to validate the architecture, but the second reason was to be sure that we didn't degrade the quality of the matching because we have a two step process. And the risk is that the two model doesn't agree with each other. And so if the intersection of your first step and the second step is not good enough, you will just have some empty result at the end because your first part will select a part of freelancer and the second step, you select another part and so your intersection is empty. And so our goal was to assess that the two steps were aligned and so that we didn't degrade the quality of the matching. And regarding the volume of projects we have, we had to wait for approximately two months. Demetrios: It makes complete sense. Well, man, I really appreciate this. And can you go back to the slide where you show how people can get in touch with you if they want to reach out and talk more? I encourage everyone to do that. And thanks so much, Nicholas. This is great, man. Nicolas Mauti: Thanks. Demetrios: All right, everyone. By the way, in case you want to join us and talk about what you're working on and how you're using Qdrant or what you're doing in the semantic space or [semantic search](https://qdrant.tech/documentation/tutorials/search-beginners/) or vector space, all that fun stuff, hit us up. We would love to have you on here. One last question for you, Nicola. Something came through. What indexing method do you use? Is it good for using OpenAI embeddings? Nicolas Mauti: So in our case, we have our own model to build the embeddings. Demetrios: Yeah, I remember you saying that at the beginning, actually. All right, cool. Well, man, thanks a lot and we will see everyone next week for another one of these vector space talks. Thank you all for joining and take care. Care. Thanks.",blog/superpower-your-semantic-search-using-vector-database-nicolas-mauti-vector-space-talk-007.md "--- draft: false title: ""Visua and Qdrant: Vector Search in Computer Vision"" slug: short_description: ""Using vector search for quality control and anomaly detection in computer vision."" description: ""How Visua uses Qdrant as a vector search engine for quality control and anomaly detection in their computer vision platform."" preview_image: /blog/case-study-visua/image4.png social_preview_image: /blog/case-study-visua/image4.png date: 2024-05-01T00:02:00Z author: Manuel Meyer featured: false tags: - visua - qdrant - computer vision - quality control - anomaly detection --- ![visua/image1.png](/blog/case-study-visua/image1.png) For over a decade, [VISUA](https://visua.com/) has been a leader in precise, high-volume computer vision data analysis, developing a robust platform that caters to a wide range of use cases, from startups to large enterprises. Starting with social media monitoring, where it excels in analyzing vast data volumes to detect company logos, VISUA has built a diverse ecosystem of customers, including names in social media monitoring, like **Brandwatch**, cybersecurity like **Mimecast**, trademark protection like **Ebay** and several sports agencies like **Vision Insights** for sponsorship evaluation. ![visua/image3.png](/blog/case-study-visua/image3.png) ## The Challenge **Quality Control at Scale** The accuracy of object detection within images is critical for VISUA ensuring that their algorithms are detecting objects in images correctly. With growing volumes of data processed for clients, the company was looking for a way to enhance its quality control and anomaly detection mechanisms to be more scalable and auditable. The challenge was twofold. First, VISUA needed a method to rapidly and accurately identify images and the objects within them that were similar, to identify false negatives, or unclear outcomes and use them as inputs for reinforcement learning. Second, the rapid growth in data volume challenged their previous quality control processes, which relied on a sampling method based on meta-information (like analyzing lower-confidence, smaller, or blurry images), which involved more manual reviews and was not as scalable as needed. In response, the team at VISUA explored vector databases as a solution. ## The Solution **Accelerating Anomaly Detection and Elevating Quality Control with Vector Search** In addressing the challenge of scaling and enhancing its quality control processes, VISUA turned to vector databases, with Qdrant emerging as the solution of choice. This technological shift allowed VISUA to leverage vector databases for identifying similarities and deduplicating vast volumes of images, videos, and frames. By doing so, VISUA was able to automatically classify objects with a level of precision that was previously unattainable. The introduction of vectors allowed VISUA to represent data uniquely and mark frames for closer examination by prioritizing the review of anomalies and data points with the highest variance. Consequently, this technology empowered Visia to scale its quality assurance and reinforcement learning processes tenfold. > *“Using Qdrant as a vector database for our quality control allowed us to review 10x more data by exploiting repetitions and deduplicating samples and doing that at scale with having a query engine.”* Alessandro Prest, Co-Founder at VISUA. ![visua/image2.jpg](/blog/case-study-visua/image2.jpg) ## The Selection Process **Finding the Right Vector Database For Quality Analysis and Anomaly Detection** Choosing the right vector database was a pivotal decision for VISUA, and the team conducted extensive benchmarks. They tested various solutions, including Weaviate, Pinecone, and Qdrant, focusing on the efficient handling of both vector and payload indexes. The objective was to identify a system that excels in managing hybrid queries that blend vector similarities with record attributes, crucial for enhancing their quality control and anomaly detection capabilities. Qdrant distinguished itself through its: - **Hybrid Query Capability:** Qdrant enables the execution of hybrid queries that combine payload fields and vector data, allowing for comprehensive and nuanced searches. This functionality leverages the strengths of both payload attributes and vector similarities for detailed data analysis. Prest noted the importance of Qdrant's hybrid approach, saying, “When talking with the founders of Qdrant, we realized that they put a lot of effort into this hybrid approach, which really resonated with us.” - **Performance Superiority**: Qdrant distinguished itself as the fastest engine for VISUA's specific needs, significantly outpacing alternatives with query speeds up to 40 times faster for certain VISUA use cases. Alessandro Prest highlighted, ""Qdrant was the fastest engine by a large margin for our use case,"" underscoring its significant efficiency and scalability advantages. - **API Documentation**: The clarity, comprehensiveness, and user-friendliness of Qdrant’s API documentation and reference guides further solidified VISUA’s decision. This strategic selection enabled VISUA to achieve a notable increase in operational efficiency and scalability in its quality control processes. ## Implementing Qdrant Upon selecting Qdrant as their vector database solution, VISUA undertook a methodical approach to integration. The process began in a controlled development environment, allowing VISUA to simulate real-world use cases and ensure that Qdrant met their operational requirements. This careful, phased approach ensured a smooth transition when moving Qdrant into their production environment, hosted on AWS clusters. VISUA is leveraging several specific Qdrant features in their production setup: 1. **Support for Multiple Vectors per Record/Point**: This feature allows for a nuanced and multifaceted analysis of data, enabling VISUA to manage and query complex datasets more effectively. 2. **Quantization**: Quantization optimizes storage and accelerates query processing, improving data handling efficiency and lowering memory use, essential for large-scale operations. ## The Results Integrating Qdrant into VISUA's quality control operations has delivered measurable outcomes when it comes to efficiency and scalability: - **40x Faster Query Processing**: Qdrant has drastically reduced the time needed for complex queries, enhancing workflow efficiency. - **10x Scalability Boost:** The efficiency of Qdrant enables VISUA to handle ten times more data in its quality assurance and learning processes, supporting growth without sacrificing quality. - **Increased Data Review Capacity:** The increased capacity to review the data allowed VISUA to enhance the accuracy of its algorithms through reinforcement learning. #### Expanding Qdrant’s Use Beyond Anomaly Detection While the primary application of Qdrant is focused on quality control, VISUA's team is actively exploring additional use cases with Qdrant. VISUA's use of Qdrant has inspired new opportunities, notably in content moderation. ""The moment we started to experiment with Qdrant, opened up a lot of ideas within the team for new applications,” said Prest on the potential unlocked by Qdrant. For example, this has led them to actively explore the Qdrant [Discovery API](/documentation/concepts/explore/?q=discovery#discovery-api), with an eye on enhancing content moderation processes. Beyond content moderation, VISUA is set for significant growth by broadening its copyright infringement detection services. As the demand for detecting a wider range of infringements, like unauthorized use of popular characters on merchandise, increases, VISUA plans to expand its technology capabilities. Qdrant will be pivotal in this expansion, enabling VISUA to meet the complex and growing challenges of moderating copyrighted content effectively and ensuring comprehensive protection for brands and creators.",blog/case-study-visua.md "--- draft: false title: ""Announcing Qdrant's $28M Series A Funding Round"" slug: series-A-funding-round short_description: description: preview_image: /blog/series-A-funding-round/series-A.png social_preview_image: /blog/series-A-funding-round/series-A.png date: 2024-01-23T09:00:00.000Z author: Andre Zayarni, CEO & Co-Founder featured: true tags: - Funding - Series-A - Announcement --- Today, we are excited to announce our $28M Series A funding round, which is led by Spark Capital with participation from our existing investors Unusual Ventures and 42CAP. We have seen incredible user growth and support from our open-source community in the past two years - recently exceeding 5M downloads. This is a testament to our mission to build the most efficient, scalable, high-performance vector database on the market. We are excited to further accelerate this trajectory with our new partner and investor, Spark Capital, and the continued support of Unusual Ventures and 42CAP. This partnership uniquely positions us to empower enterprises with cutting edge vector search technology to build truly differentiating, next-gen AI applications at scale. ## The Emergence and Relevance of Vector Databases A paradigm shift is underway in the field of data management and information retrieval. Today, our world is increasingly dominated by complex, unstructured data like images, audio, video, and text. Traditional ways of retrieving data based on keyword matching are no longer sufficient. Vector databases are designed to handle complex high-dimensional data, unlocking the foundation for pivotal AI applications. They represent a new frontier in data management, in which complexity is not a barrier but an opportunity for innovation. The rise of generative AI in the last few years has shone a spotlight on vector databases, prized for their ability to power retrieval-augmented generation (RAG) applications. What we are seeing now, both within AI and beyond, is only the beginning of the opportunity for vector databases. Within our Qdrant community, we already see a multitude of unique solutions and applications leveraging our technology for multimodal search, anomaly detection, recommendation systems, complex data analysis, and more. ## What sets Qdrant apart? To meet the needs of the next generation of AI applications, Qdrant has always been built with four keys in mind: efficiency, scalability, performance, and flexibility. Our goal is to give our users unmatched speed and reliability, even when they are building massive-scale AI applications requiring the handling of billions of vectors. We did so by building Qdrant on Rust for performance, memory safety, and scale. Additionally, [our custom HNSW search algorithm](/articles/filtrable-hnsw/) and unique [filtering](/documentation/concepts/filtering/) capabilities consistently lead to [highest RPS](/benchmarks/), minimal latency, and high control with accuracy when running large-scale, high-dimensional operations. Beyond performance, we provide our users with the most flexibility in cost savings and deployment options. A combination of cutting-edge efficiency features, like [built-in compression options](/documentation/guides/quantization/), [multitenancy](/documentation/guides/multiple-partitions/) and the ability to [offload data to disk](/documentation/concepts/storage/), dramatically reduce memory consumption. Committed to privacy and security, crucial for modern AI applications, Qdrant now also offers on-premise and hybrid SaaS solutions, meeting diverse enterprise needs in a data-sensitive world. This approach, coupled with our open-source foundation, builds trust and reliability with engineers and developers, making Qdrant a game-changer in the vector database domain. ## What's next? We are incredibly excited about our next chapter to power the new generation of enterprise-grade AI applications. The support of our open-source community has led us to this stage and we’re committed to continuing to build the most advanced vector database on the market, but ultimately it’s up to you to decide! We invite you to [test out](https://cloud.qdrant.io/) Qdrant for your AI applications today. ",blog/series-A-funding-round.md "--- title: Qdrant Blog subtitle: Check out our latest posts description: A place to learn how to become an expert traveler through vector space. Subscribe and we will update you on features and news. email_placeholder: Enter your email subscribe_button: Subscribe features_title: Features and News search_placeholder: What are you Looking for? aliases: # There is no need to add aliases for future new tags and categories! - /tags - /tags/case-study - /tags/dailymotion - /tags/recommender-system - /tags/binary-quantization - /tags/embeddings - /tags/openai - /tags/gsoc24 - /tags/open-source - /tags/summer-of-code - /tags/vector-database - /tags/artificial-intelligence - /tags/machine-learning - /tags/vector-search - /tags/case_study - /tags/dust - /tags/announcement - /tags/funding - /tags/series-a - /tags/azure - /tags/cloud - /tags/data-science - /tags/information-retrieval - /tags/benchmarks - /tags/performance - /tags/qdrant - /tags/blog - /tags/large-language-models - /tags/podcast - /tags/retrieval-augmented-generation - /tags/search - /tags/vector-search-engine - /tags/vector-image-search - /tags/vector-space-talks - /tags/retriever-ranker-architecture - /tags/semantic-search - /tags/llm - /tags/entity-matching-solution - /tags/real-time-processing - /tags/vector-space-talk - /tags/fastembed - /tags/quantized-emdedding-models - /tags/llm-recommendation-system - /tags/integrations - /tags/unstructured - /tags/integration - /tags/n8n - /tags/news - /tags/webinar - /tags/cohere - /tags/embedding-model - /tags/database - /tags/vector-search-database - /tags/neural-networks - /tags/similarity-search - /tags/embedding - /tags/corporate-news - /tags/nvidia - /tags/docarray - /tags/jina-integration - /categories - /categories/news - /categories/vector-search - /categories/webinar - /categories/vector-space-talk --- ",blog/_index.md "--- draft: false preview_image: /blog/from_cms/nils-thumbnail.png title: ""From Content Quality to Compression: The Evolution of Embedding Models at Cohere with Nils Reimers"" slug: cohere-embedding-v3 short_description: Nils Reimers head of machine learning at Cohere shares the details about their latest embedding model. description: Nils Reimers head of machine learning at Cohere comes on the recent vector space talks to share details about their latest embedding V3 model. date: 2023-11-19T12:48:36.622Z author: Demetrios Brinkmann featured: false author_link: https://www.linkedin.com/in/dpbrinkm/ tags: - Vector Space Talk - Cohere - Embedding Model categories: - News - Vector Space Talk --- For the second edition of our Vector Space Talks we were joined by none other than Cohere’s Head of Machine Learning Nils Reimers. ## Key Takeaways Let's dive right into the five key takeaways from Nils' talk: 1. Content Quality Estimation: Nils explained how embeddings have traditionally focused on measuring topic match, but content quality is just as important. He demonstrated how their model can differentiate between informative and non-informative documents. 2. Compression-Aware Training: He shared how they've tackled the challenge of reducing the memory footprint of embeddings, making it more cost-effective to run vector databases on platforms like [Qdrant](https://cloud.qdrant.io/login). 3. Reinforcement Learning from Human Feedback: Nils revealed how they've borrowed a technique from reinforcement learning and applied it to their embedding models. This allows the model to learn preferences based on human feedback, resulting in highly informative responses. 4. Evaluating Embedding Quality: Nils emphasized the importance of evaluating embedding quality in relative terms rather than looking at individual vectors. It's all about understanding the context and how embeddings relate to each other. 5. New Features in the Pipeline: Lastly, Nils gave us a sneak peek at some exciting features they're developing, including input type support for Langchain and improved compression techniques. Now, here's a fun fact from the episode: Did you know that the content quality estimation model *can't* differentiate between true and fake statements? It's a challenging task, and the model relies on the information present in its pretraining data. We loved having Nils as our guest, check out the full talk below. If you or anyone you know would like to come on the Vector Space Talks ",blog/cohere-embedding-v3.md "--- title: Loading Unstructured.io Data into Qdrant from the Terminal slug: qdrant-unstructured short_description: Loading Unstructured Data into Qdrant from the Terminal description: Learn how to simplify the process of loading unstructured data into Qdrant using the Qdrant Unstructured destination. preview_image: /blog/qdrant-unstructured/preview.jpg date: 2024-01-09T00:41:38+05:30 author: Anush Shetty tags: - integrations - qdrant - unstructured --- Building powerful applications with Qdrant starts with loading vector representations into the system. Traditionally, this involves scraping or extracting data from sources, performing operations such as cleaning, chunking, and generating embeddings, and finally loading it into Qdrant. While this process can be complex, Unstructured.io includes Qdrant as an ingestion destination. In this blog post, we'll demonstrate how to load data into Qdrant from the channels of a Discord server. You can use a similar process for the [20+ vetted data sources](https://unstructured-io.github.io/unstructured/ingest/source_connectors.html) supported by Unstructured. ### Prerequisites - A running Qdrant instance. Refer to our [Quickstart guide](/documentation/quick-start/) to set up an instance. - A Discord bot token. Generate one [here](https://discord.com/developers/applications) after adding the bot to your server. - Unstructured CLI with the required extras. For more information, see the Discord [Getting Started guide](https://discord.com/developers/docs/getting-started). Install it with the following command: ```bash pip install unstructured[discord,local-inference,qdrant] ``` Once you have the prerequisites in place, let's begin the data ingestion. ### Retrieving Data from Discord To generate structured data from Discord using the Unstructured CLI, run the following command with the [channel IDs](https://www.pythondiscord.com/pages/guides/pydis-guides/contributing/obtaining-discord-ids/): ```bash unstructured-ingest \ discord \ --channels \ --token """" \ --output-dir ""discord-output"" ``` This command downloads and structures the data in the `""discord-output""` directory. For a complete list of options supported by this source, run: ```bash unstructured-ingest discord --help ``` ### Ingesting into Qdrant Before loading the data, set up a collection with the information you need for the following REST call. In this example we use a local Huggingface model generating 384-dimensional embeddings. You can create a Qdrant [API key](/documentation/cloud/authentication/#create-api-keys) and set names for your Qdrant [collections](/documentation/concepts/collections/). We set up the collection with the following command: ```bash curl -X PUT \ /collections/ \ -H 'Content-Type: application/json' \ -H 'api-key: ' \ -d '{ ""vectors"": { ""size"": 384, ""distance"": ""Cosine"" } }' ``` You should receive a response similar to: ```console {""result"":true,""status"":""ok"",""time"":0.196235768} ``` To ingest the Discord data into Qdrant, run: ```bash unstructured-ingest \ local \ --input-path ""discord-output"" \ --embedding-provider ""langchain-huggingface"" \ qdrant \ --collection-name """" \ --api-key """" \ --location """" ``` This command loads structured Discord data into Qdrant with sensible defaults. You can configure the data fields for which embeddings are generated in the command options. Qdrant ingestion also supports partitioning and chunking of your data, configurable directly from the CLI. Learn more about it in the [Unstructured documentation](https://unstructured-io.github.io/unstructured/core.html). To list all the supported options of the Qdrant ingestion destination, run: ```bash unstructured-ingest local qdrant --help ``` Unstructured can also be used programmatically or via the hosted API. Refer to the [Unstructured Reference Manual](https://unstructured-io.github.io/unstructured/introduction.html). For more information about the Qdrant ingest destination, review how Unstructured.io configures their [Qdrant](https://unstructured-io.github.io/unstructured/ingest/destination_connectors/qdrant.html) interface. ",blog/qdrant-unstructured.md "--- draft: false title: ""Qdrant's Trusted Partners for Hybrid Cloud Deployment"" slug: hybrid-cloud-launch-partners short_description: ""With the launch of Qdrant Hybrid Cloud we provide developers the ability to deploy Qdrant as a managed vector database in any desired environment."" description: ""With the launch of Qdrant Hybrid Cloud we provide developers the ability to deploy Qdrant as a managed vector database in any desired environment."" preview_image: /blog/hybrid-cloud-launch-partners/hybrid-cloud-launch-partners.png social_preview_image: /blog/hybrid-cloud-launch-partners/hybrid-cloud-launch-partners.png date: 2024-04-15T00:02:00Z author: Manuel Meyer featured: false tags: - Hybrid Cloud - launch partners --- With the launch of [Qdrant Hybrid Cloud](/hybrid-cloud/) we provide developers the ability to deploy Qdrant as a managed vector database in any desired environment, be it *in the cloud, on premise, or on the edge*. We are excited to have trusted industry players support the launch of Qdrant Hybrid Cloud, allowing developers to unlock best-in-class advantages for building production-ready AI applications: - **Deploy In Your Own Environment:** Deploy the Qdrant vector database as a managed service on the infrastructure of choice, such as our launch partner solutions [Oracle Cloud Infrastructure (OCI)](https://blogs.oracle.com/cloud-infrastructure/post/qdrant-hybrid-cloud-now-available-oci-customers), [Red Hat OpenShift](/blog/hybrid-cloud-red-hat-openshift/), [Vultr](/blog/hybrid-cloud-vultr/), [DigitalOcean](/blog/hybrid-cloud-digitalocean/), [OVHcloud](/blog/hybrid-cloud-ovhcloud/), [Scaleway](/blog/hybrid-cloud-scaleway/), [Civo](/documentation/hybrid-cloud/platform-deployment-options/#civo), and [STACKIT](/blog/hybrid-cloud-stackit/). - **Seamlessly Integrate with Every Key Component of the Modern AI Stack:** Our new hybrid cloud offering also allows you to integrate with all of the relevant solutions for building AI applications. These include partner frameworks like [LlamaIndex](/blog/hybrid-cloud-llamaindex/), [LangChain](/blog/hybrid-cloud-langchain/), [Haystack by deepset](/blog/hybrid-cloud-haystack/), and [Airbyte](/blog/hybrid-cloud-airbyte/), as well as large language models (LLMs) like [JinaAI](/blog/hybrid-cloud-jinaai/) and [Aleph Alpha](/blog/hybrid-cloud-aleph-alpha/). - **Ensure Full Data Sovereignty and Privacy Control:** Qdrant Hybrid Cloud offers unparalleled data isolation and the flexibility to process workloads either in the cloud or on-premise, ensuring data privacy and sovereignty requirements - all while being fully managed. #### Try Qdrant Hybrid Cloud on Partner Platforms ![Hybrid Cloud Launch Partners Tutorials](/blog/hybrid-cloud-launch-partners/hybrid-cloud-launch-partners-tutorials.png) Together with our launch partners, we created in-depth tutorials and use cases for production-ready vector search that explain how developers can leverage Qdrant Hybrid Cloud alongside the best-in-class solutions of our launch partners. These tutorials demonstrate that Qdrant Hybrid Cloud is the most flexible foundation to build modern, customer-centric AI applications with endless deployment options and full data sovereignty. Let’s dive right in: **AI Customer Support Chatbot** with Qdrant Hybrid Cloud, Airbyte, Cohere, and AWS > This tutorial shows how to build a private AI customer support system using Cohere's AI models on AWS, Airbyte, and Qdrant Hybrid Cloud for efficient and secure query automation. [View Tutorial](/documentation/tutorials/rag-customer-support-cohere-airbyte-aws/) **RAG System for Employee Onboarding** with Qdrant Hybrid Cloud, Oracle Cloud Infrastructure (OCI), Cohere, and LangChain > This tutorial demonstrates how to use Oracle Cloud Infrastructure (OCI) for a secure setup that integrates Cohere's language models with Qdrant Hybrid Cloud, using LangChain to orchestrate natural language search for corporate documents, enhancing resource discovery and onboarding. [View Tutorial](/documentation/tutorials/natural-language-search-oracle-cloud-infrastructure-cohere-langchain/) **Hybrid Search for Product PDF Manuals** with Qdrant Hybrid Cloud, LlamaIndex, and JinaAI > Create a RAG-based chatbot that enhances customer support by parsing product PDF manuals using Qdrant Hybrid Cloud, LlamaIndex, and JinaAI, with DigitalOcean as the cloud host. This tutorial will guide you through the setup and integration process, enabling your system to deliver precise, context-aware responses for household appliance inquiries. [View Tutorial](/documentation/tutorials/hybrid-search-llamaindex-jinaai/) **Region-Specific RAG System for Contract Management** with Qdrant Hybrid Cloud, Aleph Alpha, and STACKIT > Learn how to streamline contract management with a RAG-based system in this tutorial, which utilizes Aleph Alpha’s embeddings and a region-specific cloud setup. Hosted on STACKIT with Qdrant Hybrid Cloud, this solution ensures secure, GDPR-compliant storage and processing of data, ideal for businesses with intensive contractual needs. [View Tutorial](/documentation/tutorials/rag-contract-management-stackit-aleph-alpha/) **Movie Recommendation System** with Qdrant Hybrid Cloud and OVHcloud > Discover how to build a recommendation system with our guide on collaborative filtering, using sparse vectors and the Movielens dataset. [View Tutorial](/documentation/tutorials/recommendation-system-ovhcloud/) **Private RAG Information Extraction Engine** with Qdrant Hybrid Cloud and Vultr using DSPy and Ollama > This tutorial teaches you how to handle and structure private documents with large unstructured data. Learn to use DSPy for information extraction, run your LLM with Ollama on Vultr, and manage data with Qdrant Hybrid Cloud on Vultr, perfect for regulated environments needing data privacy. [View Tutorial](/documentation/tutorials/rag-chatbot-vultr-dspy-ollama/) **RAG System That Chats with Blog Contents** with Qdrant Hybrid Cloud and Scaleway using LangChain. > Build a RAG system that combines blog scanning with the capabilities of semantic search. RAG enhances the generation of answers by retrieving relevant documents to aid the question-answering process. This setup showcases the integration of advanced search and AI language processing to improve information retrieval and generation tasks. [View Tutorial](/documentation/tutorials/rag-chatbot-scaleway/) **Private Chatbot for Interactive Learning** with Qdrant Hybrid Cloud and Red Hat OpenShift using Haystack. > In this tutorial, you will build a chatbot without public internet access. The goal is to keep sensitive data secure and isolated. Your RAG system will be built with Qdrant Hybrid Cloud on Red Hat OpenShift, leveraging Haystack for enhanced generative AI capabilities. This tutorial especially explores how this setup ensures that not a single data point leaves the environment. [View Tutorial](/documentation/tutorials/rag-chatbot-red-hat-openshift-haystack/) #### Supporting Documentation Additionally, we built comprehensive documentation tutorials on how to successfully deploy Qdrant Hybrid Cloud on the right infrastructure of choice. For more information, please visit our documentation pages: - [How to Deploy Qdrant Hybrid Cloud on AWS](/documentation/hybrid-cloud/platform-deployment-options/#amazon-web-services-aws) - [How to Deploy Qdrant Hybrid Cloud on GCP](/documentation/hybrid-cloud/platform-deployment-options/#google-cloud-platform-gcp) - [How to Deploy Qdrant Hybrid Cloud on Azure](/documentation/hybrid-cloud/platform-deployment-options/#mircrosoft-azure) - [How to Deploy Qdrant Hybrid Cloud on DigitalOcean](/documentation/hybrid-cloud/platform-deployment-options/#digital-ocean) - [How to Deploy Qdrant on Oracle Cloud](/documentation/hybrid-cloud/platform-deployment-options/#oracle-cloud-infrastructure) - [How to Deploy Qdrant on Vultr](/documentation/hybrid-cloud/platform-deployment-options/#vultr) - [How to Deploy Qdrant on Scaleway](/documentation/hybrid-cloud/platform-deployment-options/#scaleway) - [How to Deploy Qdrant on OVHcloud](/documentation/hybrid-cloud/platform-deployment-options/#ovhcloud) - [How to Deploy Qdrant on STACKIT](/documentation/hybrid-cloud/platform-deployment-options/#stackit) - [How to Deploy Qdrant on Red Hat OpenShift](/documentation/hybrid-cloud/platform-deployment-options/#red-hat-openshift) - [How to Deploy Qdrant on Linode](/documentation/hybrid-cloud/platform-deployment-options/#akamai-linode) - [How to Deploy Qdrant on Civo](/documentation/hybrid-cloud/platform-deployment-options/#civo) #### Get Started Now! [Qdrant Hybrid Cloud](/hybrid-cloud/) marks a significant advancement in vector databases, offering the most flexible way to implement vector search. You can test out Qdrant Hybrid Cloud today! Simply sign up for or log into your [Qdrant Cloud account](https://cloud.qdrant.io/login) and get started in the **Hybrid Cloud** section. Also, to learn more about Qdrant Hybrid Cloud read our [Official Release Blog](/blog/hybrid-cloud/) or our [Qdrant Hybrid Cloud website](/hybrid-cloud/). For additional technical insights, please read our [documentation](/documentation/hybrid-cloud/). [![hybrid-cloud-get-started](/blog/hybrid-cloud-launch-partners/hybrid-cloud-get-started.png)](https://cloud.qdrant.io/login)",blog/hybrid-cloud-launch-partners.md "--- title: Recommendation Systems description: Step into the next generation of recommendation engines powered by Qdrant. Experience a new level of intelligence in application interactions, offering unprecedented accuracy and depth in user personalization. startFree: text: Get Started url: https://cloud.qdrant.io/ learnMore: text: Contact Us url: /contact-us/ image: src: /img/vectors/vector-1.svg alt: Recommendation systems sitemapExclude: true --- ",recommendations/recommendations-hero.md "--- title: Recommendations with Qdrant description: Recommendation systems, powered by Qdrant's efficient data retrieval, boost the ability to deliver highly personalized content recommendations across various media, enhancing user engagement and accuracy on a scalable platform. Explore why Qdrant is the optimal solution for your recommendation system projects. features: - id: 0 icon: src: /icons/outline/chart-bar-blue.svg alt: Chart bar title: Efficient Data Handling description: Qdrant excels in managing high-dimensional vectors, enabling streamlined storage and retrieval for complex recommendation systems. - id: 1 icon: src: /icons/outline/search-text-blue.svg alt: Search text title: Advanced Indexing Method description: Leveraging HNSW indexing, Qdrant ensures rapid, accurate searches crucial for effective recommendation engines. - id: 2 icon: src: /icons/outline/headphones-blue.svg alt: Headphones title: Flexible Query Options description: With support for payloads and filters, Qdrant offers personalized recommendation capabilities through detailed metadata handling. sitemapExclude: true --- ",recommendations/recommendations-features.md "--- title: Learn how to get started with Qdrant for your recommendation system use case features: - id: 0 image: src: /img/recommendations-use-cases/music-recommendation.svg srcMobile: /img/recommendations-use-cases/music-recommendation-mobile.svg alt: Music recommendation title: Music Recommendation with Qdrant description: Build a song recommendation engine based on music genres and other metadata. link: text: View Tutorial url: /blog/human-language-ai-models/ - id: 1 image: src: /img/recommendations-use-cases/food-discovery.svg srcMobile: /img/recommendations-use-cases/food-discovery-mobile.svg alt: Food discovery title: Food Discovery with Qdrant description: Interactive demo recommends meals based on likes/dislikes and local restaurant options. link: text: View Demo url: https://food-discovery.qdrant.tech/ caseStudy: logo: src: /img/recommendations-use-cases/customer-logo.svg alt: Logo title: Recommendation Engine with Qdrant Vector Database description: Dailymotion's Journey to Crafting the Ultimate Content-Driven Video Recommendation Engine with Qdrant Vector Database. link: text: Read Case Study url: /blog/case-study-dailymotion/ image: src: /img/recommendations-use-cases/case-study.png alt: Preview sitemapExclude: true --- ",recommendations/recommendations-use-cases.md "--- title: Qdrant Recommendation API description: The Qdrant Recommendation API enhances recommendation systems with advanced flexibility, supporting both ID and vector-based queries, and search strategies for precise, personalized content suggestions. learnMore: text: Learn More url: /documentation/concepts/explore/ image: src: /img/recommendation-api.svg alt: Recommendation api sitemapExclude: true --- ",recommendations/recommendations-api.md "--- title: ""Recommendation Engines: Personalization & Data Handling"" description: ""Leverage personalized content suggestions, powered by efficient data retrieval and advanced indexing methods."" build: render: always cascade: - build: list: local publishResources: false render: never --- ",recommendations/_index.md "--- title: Subscribe section_title: Subscribe subtitle: Subscribe description: Subscribe image: src: /img/subscribe.svg srcMobile: /img/mobile/subscribe.svg alt: Astronaut form: title: Sign up for Qdrant Updates description: Stay up to date on product news, technical articles, and upcoming educational webinars. label: Email placeholder: info@qdrant.com button: Subscribe footer: rights: ""© 2024 Qdrant. All Rights Reserved"" termsLink: url: /legal/terms_and_conditions/ text: Terms policyLink: url: /legal/privacy-policy/ text: Privacy Policy impressumLink: url: /legal/impressum/ text: Impressum --- ",subscribe/_index.md "--- title: Customer Support and Sales Optimization icon: customer-service sitemapExclude: True --- Current advances in NLP can reduce the retinue work of customer service by up to 80 percent. No more answering the same questions over and over again. A chatbot will do that, and people can focus on complex problems. But not only automated answering, it is also possible to control the quality of the department and automatically identify flaws in conversations. ",use-cases/customer-support-optimization.md "--- title: Media and Games icon: game-controller sitemapExclude: True --- Personalized recommendations for music, movies, games, and other entertainment content are also some sort of search. Except the query in it is not a text string, but user preferences and past experience. And with Qdrant, user preference vectors can be updated in real-time, no need to deploy a MapReduce cluster. Read more about ""[Metric Learning Recommendation System](https://arxiv.org/abs/1803.00202)"" ",use-cases/media-and-games.md "--- title: Food Discovery weight: 20 icon: search sitemapExclude: True --- There are multiple ways to discover things, text search is not the only one. In the case of food, people rely more on appearance than description and ingredients. So why not let people choose their next lunch by its appearance, even if they don't know the name of the dish? We made a [demo](https://food-discovery.qdrant.tech/) to showcase this approach.",use-cases/food-search.md "--- title: Law Case Search icon: hammer sitemapExclude: True --- The wording of court decisions can be difficult not only for ordinary people, but sometimes for the lawyers themselves. It is rare to find words that exactly match a similar precedent. That's where AI, which has seen hundreds of thousands of court decisions and can compare them using vector similarity search engine, can help. Here is some related [research](https://arxiv.org/abs/2004.12307). ",use-cases/law-search.md "--- title: Medical Diagnostics icon: x-rays sitemapExclude: True --- The growing volume of data and the increasing interest in the topic of health care is creating products to help doctors with diagnostics. One such product might be a search for similar cases in an ever-expanding database of patient histories. Search not only by symptom description, but also by data from, for example, MRI machines. Vector Search [is applied](https://www.sciencedirect.com/science/article/abs/pii/S0925231217308445) even here. ",use-cases/medical-diagnostics.md "--- title: HR & Job Search icon: job-search weight: 10 sitemapExclude: True --- Vector search engine can be used to match candidates and jobs even if there are no matching keywords or explicit skill descriptions. For example, it can automatically map **'frontend engineer'** to **'web developer'**, no need for any predefined categorization. Neural job matching is used at [MoBerries](https://www.moberries.com/) for automatic job recommendations.",use-cases/job-matching.md "--- title: Fashion Search icon: clothing custom_link_name: Article by Zalando custom_link: https://engineering.zalando.com/posts/2018/02/search-deep-neural-network.html custom_link_name2: Our Demo custom_link2: https://qdrant.to/fashion-search-demo sitemapExclude: True --- Empower shoppers to find the items they want by uploading any image or browsing through a gallery instead of searching with keywords. A visual similarity search helps solve this problem. And with the advanced filters that Qdrant provides, you can be sure to have the right size in stock for the jacket the user finds. Large companies like [Zalando](https://engineering.zalando.com/posts/2018/02/search-deep-neural-network.html) are investing in it, but we also made our [demo](https://qdrant.to/fashion-search-demo) using public dataset.",use-cases/fashion-search.md "--- title: Qdrant Vector Database Use Cases subtitle: Explore the vast applications of the Qdrant vector database. From retrieval augmented generation to anomaly detection, advanced search, and recommendation systems, our solutions unlock new dimensions of data and performance. featureCards: - id: 0 title: Advanced Search content: Elevate your apps with advanced search capabilities. Qdrant excels in processing high-dimensional data, enabling nuanced similarity searches, and understanding semantics in depth. Qdrant also handles multimodal data with fast and accurate search algorithms. link: text: Learn More url: /advanced-search/ - id: 1 title: Recommendation Systems content: Create highly responsive and personalized recommendation systems with tailored suggestions. Qdrant’s Recommendation API offers great flexibility, featuring options such as best score recommendation strategy. This enables new scenarios of using multiple vectors in a single query to impact result relevancy. link: text: Learn More url: /recommendations/ - id: 2 title: Retrieval Augmented Generation (RAG) content: Enhance the quality of AI-generated content. Leverage Qdrant's efficient nearest neighbor search and payload filtering features for retrieval-augmented generation. You can then quickly access relevant vectors and integrate a vast array of data points. link: text: Learn More url: /rag/ - id: 3 title: Data Analysis and Anomaly Detection content: Transform your approach to Data Analysis and Anomaly Detection. Leverage vectors to quickly identify patterns and outliers in complex datasets. This ensures robust and real-time anomaly detection for critical applications. link: text: Learn More url: /data-analysis-anomaly-detection/ ---",use-cases/vectors-use-case.md "--- title: Fintech icon: bank sitemapExclude: True --- Fraud detection is like recommendations in reverse. One way to solve the problem is to look for similar cheating behaviors. But often this is not enough and manual rules come into play. Qdrant vector database allows you to combine both approaches because it provides a way to filter the result using arbitrary conditions. And all this can happen in the time till the client takes his hand off the terminal. Here is some related [research paper](https://arxiv.org/abs/1808.05492). ",use-cases/fintech.md "--- title: Advertising icon: ad-campaign sitemapExclude: True --- User interests cannot be described with rules, and that's where neural networks come in. Qdrant vector database will allow sufficient flexibility in neural network recommendations so that each user sees only the relevant ad. Advanced filtering mechanisms, such as geo-location, do not compromise on speed and accuracy, which is especially important for online advertising.",use-cases/advertising.md "--- title: Biometric identification icon: face-scan sitemapExclude: True --- Not only totalitarian states use facial recognition. With this technology, you can also improve the user experience and simplify authentication. Make it possible to pay without a credit card and buy in the store without cashiers. And the scalable face recognition technology is based on vector search, which is what Qdrant provides. Some of the many articles on the topic of [Face Recognition](https://arxiv.org/abs/1810.06951v1) and [Speaker Recognition](https://arxiv.org/abs/2003.11982).",use-cases/face-recognition.md "--- title: E-Commerce Search icon: dairy-products weight: 30 sitemapExclude: True --- Increase your online basket size and revenue with the AI-powered search. No need in manually assembled synonym lists, neural networks get the context better. With neural approach the search results could be not only precise, but also **personalized**. And Qdrant will be the backbone of this search. Read more about [Deep Learning-based Product Recommendations](https://arxiv.org/abs/2104.07572) in the paper by The Home Depot. ",use-cases/e-commerce-search.md "--- title: Vector Database Use Cases section_title: Apps and ideas Qdrant make possible type: page description: Discover the diverse applications of Qdrant vector database, from retrieval and augmented generation to anomaly detection, advanced search, and more. build: render: always cascade: - build: list: local publishResources: false render: never aliases: - /solutions/ --- ",use-cases/_index.md "--- salesTitle: Qdrant Enterprise Solutions description: Our Managed Cloud, Hybrid Cloud, and Private Cloud solutions offer flexible deployment options for top-tier data privacy. cards: - id: 0 icon: /icons/outline/cloud-managed-blue.svg title: Managed Cloud description: Qdrant Cloud provides optimal flexibility and offers a suite of features focused on efficient and scalable vector search - fully managed. Available on AWS, Google Cloud, and Azure. - id: 1 icon: /icons/outline/cloud-hybrid-violet.svg title: Hybrid Cloud description: Bring your own Kubernetes clusters from any cloud provider, on-premise infrastructure, or edge locations and connect them to the Managed Cloud. - id: 2 icon: /icons/outline/cloud-private-teal.svg title: Private Cloud description: Deploy Qdrant in your own infrastructure. form: title: Connect with us # description: id: contact-sales-form hubspotFormOptions: '{ ""region"": ""eu1"", ""portalId"": ""139603372"", ""formId"": ""fc7a9f1d-9d41-418d-a9cc-ef9c5fb9b207"", ""submitButtonClass"": ""button button_contained"", }' logosSectionTitle: Qdrant is trusted by top-tier enterprises --- ",contact-sales/_index.md "--- title: Qdrant Hybrid Cloud features: - id: 0 content: Privacy and Data Sovereignty icon: src: /icons/fill/cloud-system-purple.svg alt: Privacy and Data Sovereignty - id: 1 content: Flexible Deployment icon: src: /icons/fill/separate-blue.svg alt: Flexible Deployment - id: 2 content: Minimum Cost icon: src: /icons/fill/money-growth-green.svg alt: Minimum Cost description: Seamlessly deploy and manage the vector database across diverse environments, ensuring performance, security, and cost efficiency for AI-driven applications. startFree: text: Get Started url: https://cloud.qdrant.io/ contactUs: text: Request a demo url: /contact-hybrid-cloud/ image: src: /img/hybrid-cloud-graphic.svg alt: Enterprise-solutions sitemapExclude: true --- ",hybrid-cloud/hybrid-cloud-hero.md "--- title: ""Learn how Qdrant Hybrid Cloud works:"" video: src: / button: Watch Demo icon: src: /icons/outline/play-white.svg alt: Play preview: /img/qdrant-cloud-demo.png youtube: | sitemapExclude: true --- ",hybrid-cloud/hybrid-cloud-video.md "--- content: Do you have further questions? We are happy to assist you. contactUs: text: Contact us url: /contact-hybrid-cloud/ sitemapExclude: true --- ",hybrid-cloud/get-contacted-with-question.md "--- title: How it Works steps: - id: 0 number: 1 title: Integration description: Qdrant Hybrid Cloud allows you to deploy managed Qdrant clusters on any cloud platform or on-premise infrastructure, ensuring your data stays private by separating the data and control layers. - id: 1 number: 2 title: Management description: A straightforward Kubernetes operator installation allows for hands-off cluster management, including scaling operations, zero-downtime upgrades and disaster recovery. - id: 2 number: 3 title: Privacy and Security description: The architecture guarantees database isolation. The Qdrant Cloud only receives telemetry through an outgoing connection. No access to databases or your Kubernetes API is necessary to maintain the highest level of data security and privacy. image: src: /img/how-it-works-scheme.svg alt: How it works scheme sitemapExclude: true --- ",hybrid-cloud/hybrid-cloud-how-it-works.md "--- title: Get started today subtitle: Turn embeddings or neural network encoders into full-fledged applications for matching, searching, recommending, and more. button: text: Get Started url: https://cloud.qdrant.io/ sitemapExclude: true --- ",hybrid-cloud/hybrid-cloud-get-started.md "--- title: Qdrant Hybrid Cloud Features cards: - id: 0 icon: src: /icons/outline/server-rack-blue.svg alt: Server rack description: Run clusters in your own infrastructure, incl. your own cloud, infrastructure, or edge - id: 1 icon: src: /icons/outline/cloud-check-blue.svg alt: Cloud check description: All benefits of Qdrant Cloud - id: 2 icon: src: /icons/outline/cloud-connections-blue.svg alt: Cloud connections description: Use the Managed Cloud Central Cluster Management - id: 3 icon: src: /icons/outline/headphones-blue.svg alt: Headphones-blue description: Premium support plan option available link: content: Learn more about Qdrant Hybrid Cloud in our documentation. url: /documentation/hybrid-cloud/ text: See Documentation sitemapExclude: true --- ",hybrid-cloud/hybrid-cloud-features.md "--- title: ""Seamlessly connect Qdrant with a wide array of cloud providers and infrastructure platforms, including but not limited to these options:"" partnersFirstPart: - id: 0 name: AWS logo: src: /img/cloud-provider-logos/aws-logo.svg alt: AWS logo - id: 1 name: Google Cloud logo: src: /img/cloud-provider-logos/google-cloud-logo.svg alt: Google Cloud logo - id: 2 name: Digital Ocean logo: src: /img/cloud-provider-logos/digital-ocean-logo.svg alt: Digital Ocean logo - id: 3 name: Oracle Cloud logo: src: /img/cloud-provider-logos/oracle-cloud-logo.svg alt: Oracle Cloud logo - id: 4 name: Linode logo: src: /img/cloud-provider-logos/linode-logo.svg alt: Linode logo - id: 5 name: AWS logo: src: /img/cloud-provider-logos/aws-logo.svg alt: AWS logo - id: 6 name: Google Cloud logo: src: /img/cloud-provider-logos/google-cloud-logo.svg alt: Google Cloud logo - id: 7 name: Digital Ocean logo: src: /img/cloud-provider-logos/digital-ocean-logo.svg alt: Digital Ocean logo - id: 8 name: Oracle Cloud logo: src: /img/cloud-provider-logos/oracle-cloud-logo.svg alt: Oracle Cloud logo - id: 9 name: Linode logo: src: /img/cloud-provider-logos/linode-logo.svg alt: Linode logo partnersSecondPart: - id: 0 name: Rancher logo: src: /img/cloud-provider-logos/rancher-logo.svg alt: Rancher logo - id: 1 name: Azure logo: src: /img/cloud-provider-logos/azure-logo.svg alt: Azure logo - id: 2 name: VMWare Tanzu logo: src: /img/cloud-provider-logos/vmware-tanzu-logo.svg alt: VMWare Tanzu logo - id: 3 name: Openshift logo: src: /img/cloud-provider-logos/openshift-logo.svg alt: Openshift logo - id: 4 name: Scaleway logo: src: /img/cloud-provider-logos/scaleway-logo.svg alt: Scaleway logo - id: 5 name: Rancher logo: src: /img/cloud-provider-logos/rancher-logo.svg alt: Rancher logo - id: 6 name: Azure logo: src: /img/cloud-provider-logos/azure-logo.svg alt: Azure logo - id: 7 name: VMWare Tanzu logo: src: /img/cloud-provider-logos/vmware-tanzu-logo.svg alt: VMWare Tanzu logo - id: 8 name: Openshift logo: src: /img/cloud-provider-logos/openshift-logo.svg alt: Openshift logo - id: 9 name: Scaleway logo: src: /img/cloud-provider-logos/scaleway-logo.svg alt: Scaleway logo sitemapExclude: true --- ",hybrid-cloud/hybrid-cloud-partners.md "--- title: Seamless Kubernetes Integration descriptionFirstPart: Qdrant Hybrid Cloud integrates Kubernetes clusters from any setting - cloud, on-premises, or edge - into a unified, enterprise-grade managed service. descriptionSecondPart: It ensures data privacy, deployment flexibility, low latency, and delivers cost savings, elevating standards for vector search and AI applications. image: src: /img/kubernetes-clusters.svg alt: Qdrant Kubernetes integration sitemapExclude: true --- ",hybrid-cloud/hybrid-cloud-kubernetes-clusters.md "--- title: 'Qdrant Hybrid Cloud: Flexible Deployment, Data Privacy, and Cost Efficiency' description: Qdrant's new Hybrid Cloud was created for seamless deployment and management of vector databases. Ensure privacy, data sovereignty, and cost efficiency for AI-driven applications. Learn more and get started today. keywords: hybrid cloud vector database, hybrid cloud management, kubernetes integration, hybrid cloud deployment build: render: always cascade: - build: list: local publishResources: false render: never --- ",hybrid-cloud/_index.md "--- cards: - id: 0 icon: src: /icons/outline/separate-blue.svg alt: Separate title: Deployment Flexibility description: Use your existing infrastructure, whether it be on cloud platforms, on-premise setups, or even at edge locations. - id: 1 icon: src: /icons/outline/money-growth-blue.svg alt: Money growth title: Unmatched Cost Advantage description: Maximum deployment flexibility to leverage the best available resources, in the cloud or on-premise. - id: 2 icon: src: /icons/outline/speedometer-blue.svg alt: Speedometer title: Ultra-Low Latency description: On-premise deployment for lightning-fast, low-latency access. - id: 3 icon: src: /icons/outline/cloud-system-blue.svg alt: Cloud system title: Data Privacy & Sovereignty description: Keep your sensitive data with your secure premises, while enjoying the benefits of a managed cloud. - id: 4 icon: src: /icons/outline/switches-blue.svg alt: Switches title: Transparent Control description: Fully managed experience for your Qdrant clusters, while your data remains exclusively yours. sitemapExclude: true --- ",hybrid-cloud/hybrid-cloud-cases.md "--- title: Global Partners & Integrations description: Benefit from our collaboration with top cloud platforms, state-of-the-art AI embeddings, and dynamic frameworks. image: src: /img/partners-hero-logos.svg srcMobile: /img/mobile/partners-hero-logos.svg alt: Partners logos sitemapExclude: true --- ",partners/partners-hero.md "--- review: “With the landscape of AI being complex for most customers, Qdrant's ease of use provides an easy approach for customers' implementation of RAG patterns for Generative AI solutions and additional choices in selecting AI components on Azure.” names: Tara Walker positions: Principal Software Engineer at Microsoft avatar: src: /img/customers/tara-walker.svg alt: Tara Walker Avatar logo: src: /img/brands/microsoft.svg alt: Logo sitemapExclude: true --- ",partners/partners-testimonial.md "--- title: Cloud Partners description: Qdrant Cloud seamlessly integrates with top cloud platforms and is available on leading marketplaces. link: url: https://cloud.qdrant.io/ text: Get Started with Qdrant Cloud sitemapExclude: true --- ",partners/partners-cloud.md "--- title: Become a Certified Solutions Partner description: We partner with industry leaders to deliver innovative solutions and an exceptional customer experience. button: url: /contact-us/ text: Become a Partner image: src: /img/partner-banner.svg alt: Become a Partner preview sitemapExclude: true --- ",partners/partners-get-started.md "--- description: ""Qdrant stands out in handling embeddings by consistently achieving the lowest latency, ensuring quicker response times in data retrieval:"" link: url: /benchmarks/ text: See Our Benchmarks Report image: src: /img/partners-embeddings.svg srcMobile: /img/mobile/partners-embeddings.svg alt: 1M-Open-AI-Embeddings sitemapExclude: true --- ",partners/partners-embeddings.md "--- title: Embeddings Integrations description: Qdrant’s integrations allow you to bring state-of-the-art AI and machine learning capabilities, and to enrich data analysis and search precision. link: url: /documentation/embeddings/ text: See All Embeddings integrations: - id: 0 icon: src: /img/integrations/integration-cohere.svg alt: Cohere logo title: Cohere description: Integrate Qdrant with Cohere's co.embed API and Python SDK. - id: 1 icon: src: /img/integrations/integration-gemini.svg alt: Gemini logo title: Gemini description: Connect Qdrant with Google's Gemini Embedding Model API seamlessly. - id: 2 icon: src: /img/integrations/integration-open-ai.svg alt: OpenAI logo title: OpenAI description: Easily integrate OpenAI embeddings with Qdrant using the official Python SDK. - id: 3 icon: src: /img/integrations/integration-aleph-alpha.svg alt: Aleph Alpha logo title: Aleph Alpha description: Integrate Qdrant with Aleph Alpha's multimodal, multilingual embeddings. - id: 4 icon: src: /img/integrations/integration-jina.svg alt: Jina logo title: Jina AI description: Easily integrate Qdrant with Jina AI's embeddings API. - id: 5 icon: src: /img/integrations/integration-aws.svg alt: AWS logo title: AWS Bedrock description: Utilize AWS Bedrock's embedding models with Qdrant seamlessly. sitemapExclude: true --- ",partners/partners-integrations.md "--- title: Frameworks description: Qdrant supports leading frameworks so you can streamline natural language processing, enhance large-scale data retrieval, integrate diverse data sources,and automate complex tasks. link: url: /documentation/frameworks/ text: See All Frameworks integrations: - id: 0 icon: src: /img/integrations/integration-lang-chain.svg alt: LangChain logo title: LangChain description: Qdrant seamlessly integrates with LangChain for LLM development. - id: 1 icon: src: /img/integrations/integration-llama-index.svg alt: LlamaIndex logo title: LlamaIndex description: Qdrant integrates with LlamaIndex for efficient data indexing in LLMs. - id: 2 icon: src: /img/integrations/integration-airbyte.svg alt: Airbyte logo title: Airbyte description: Qdrant integrates with Airbyte to build robust data pipelines for efficient data management. - id: 3 icon: src: /img/integrations/integration-unstructured.svg alt: Unstructured logo title: Unstructured description: Qdrant integrates with Unstructured for effective preprocessing and handling of unstructured data. - id: 4 icon: src: /img/integrations/integration-doc-array.svg alt: DocArray logo title: DocArray description: Qdrant integrates natively with DocArray for efficient handling and processing of multi-modal data. - id: 5 icon: src: /img/integrations/integration-auto-gen.svg alt: AutoGen logo title: AutoGen description: Qdrant integrates with Autogen to enhance the development of automated LLM applications. sitemapExclude: true --- ",partners/partners-frameworks.md "--- title: Partners description: Partners build: render: always cascade: - build: list: local publishResources: false render: never --- ",partners/_index.md "--- title: Certified Solution Partners description: Qdrant has an ecosystem of solution partners who help you with the implementation and integration of your vector search applications. partners: - id: 0 logo: src: /img/partners-solution-logos/traversaal-ai-logo.svg alt: Traversaal.ai logo - id: 1 logo: src: /img/partners-solution-logos/domino-logo.svg alt: Domino logo - id: 2 logo: src: /img/partners-solution-logos/revelry-logo.svg alt: Revelry logo - id: 3 logo: src: /img/partners-solution-logos/softlandia-logo.svg alt: Softlandia logo sitemapExclude: true --- ",partners/partners-solution.md "--- title: Feature Overview description: Built as a dedicated similarity search engine, Qdrant provides unique features to provide unparalleled performance and efficiency in managing your vector data workloads. cards: - id: 0 icon: src: /icons/outline/speedometer-blue.svg alt: Speedometer title: Advanced Compression content: Scalar, Product, and unique Binary Quantization features significantly reduce memory usage and improve search performance (40x) for high-dimensional vectors. link: text: Quantization url: /articles/binary-quantization/ - id: 1 icon: src: /icons/outline/cloud-managed-blue.svg alt: Cloud-Managed title: Distributed, Cloud-Native Design content: Managed cloud services on AWS, GCP, and Azure for scalable, maintenance-free vector search. contentLink: text: Advanced sharding url: /guides/distributed_deployment/ contentSecondPart: available. link: text: Cloud Options url: /cloud/ - id: 2 icon: src: /icons/outline/rocket-blue.svg alt: Rocket title: Easy to Use API content: Offers OpenAPI v3 specification for generating client libraries in almost any programming language. link: text: Learn More url: /documentation/interfaces/#api-reference - id: 3 icon: src: /icons/outline/enterprise-blue.svg alt: Enterprise title: Enterprise-grade Security content: Includes robust access management, backup options, and disaster recovery. Dedicated Enterprise Solutions available. link: text: Enterprise Solutions url: /enterprise-solutions/ - id: 4 icon: src: /icons/outline/integration-blue.svg alt: Integration title: Integrations content: Qdrant supports a wide range of integrations for all leading embeddings and frameworks. link: text: See Integrations url: /documentation/frameworks/ - id: 5 icon: src: /icons/outline/multitenancy-blue.svg alt: Multitenancy title: Multitenancy Support content: Ability to segment a single collection for organized and efficient retrieval, data isolation, and privacy. Vital for applications needing distinct vector dataset management. link: text: Multitenancy url: /documentation/guides/multiple-partitions/ - id: 6 icon: src: /icons/outline/disk-storage-blue.svg alt: Disk-Storage title: Memory Maps and IO Uring content: Effective on-disk storage options and low level hardware optimization. link: text: Learn More url: /articles/io_uring/ - id: 7 icon: src: /icons/outline/matching-blue.svg alt: Matching title: Fast and Precise Matching content: Unparalleled speed and accuracy, powered by a bespoke modification of the HNSW algorithm for Approximate Nearest Neighbor Search. link: text: Learn More url: /documentation/concepts/search/ - id: 8 icon: src: /icons/outline/filter-blue.svg alt: Filter title: Payloads & Advanced Filtering content: Vector payload supports a large variety of data types and query conditions, including string matching, numerical ranges, geo-locations, and more. link: text: Learn More url: /documentation/concepts/payload/ - id: 9 icon: src: /icons/outline/vectors-blue.svg alt: Vectors title: Sparse Vector Support content: Efficient handling of sparse vectors for enhanced text retrieval and memory-efficient data representation for high-dimensional data sets. link: text: Learn More url: /articles/sparse-vectors/ sitemapExclude: true --- ",qdrant-vector-database/feature-overview.md "--- title: Qdrant. Efficient, Scalable, Fast. description: Qdrant is the most advanced vector database with highest RPS, minimal latency, fast indexing, high control with accuracy, and so much more. startFree: text: Start Free url: https://cloud.qdrant.io/ contactUs: text: Talk to Sales url: /contact-us/ image: src: /img/qdrant-vector-database-hero.svg srcMobile: /img/mobile/qdrant-vector-database-hero.svg alt: Qdrant Vector Database sitemapExclude: true --- ",qdrant-vector-database/qdrant-vector-database-hero.md "--- items: - id: 0 image: src: /img/qdrant-vector-database-use-cases/built-for-performance.svg alt: Benchmark title: Built for Performance description: With up to 4x RPS, Qdrant excels in delivering high-speed, efficient data processing, setting new benchmarks in vector database performance. link: text: Benchmarks url: /benchmarks/ odd: true - id: 1 image: src: /img/qdrant-vector-database-use-cases/fully-managed.svg alt: Qdrant Cloud title: Fully Managed description: Experience seamless scalability and minimal operational overhead with Qdrant Cloud, designed for ease-of-use and reliability. link: text: Qdrant Cloud url: /cloud/ odd: false - id: 2 image: src: /img/qdrant-vector-database-use-cases/run-anywhere.svg alt: Enterprise Solutions title: Run Anywhere description: Qdrant’s Hybrid Cloud and Private Cloud solutions offer flexible deployment options for top-tier data protection. link: text: Enterprise Solutions url: /enterprise-solutions/ odd: true sitemapExclude: true --- ",qdrant-vector-database/qdrant-vector-database-use-cases.md "--- title: Qdrant Vector Database, High-Performance Vector Search Engine description: Experience unmatched performance and efficiency with the most advanced vector database. Learn how Qdrant can enhance your data management workflows today. build: render: always cascade: - build: list: local publishResources: false render: never --- ",qdrant-vector-database/_index.md "--- draft: false id: 2 title: How vector search should be benchmarked? weight: 1 --- # Benchmarking Vector Databases At Qdrant, performance is the top-most priority. We always make sure that we use system resources efficiently so you get the **fastest and most accurate results at the cheapest cloud costs**. So all of our decisions from [choosing Rust](/articles/why-rust/), [io optimisations](/articles/io_uring/), [serverless support](/articles/serverless/), [binary quantization](/articles/binary-quantization/), to our [fastembed library](/articles/fastembed/) are all based on our principle. In this article, we will compare how Qdrant performs against the other vector search engines. Here are the principles we followed while designing these benchmarks: - We do comparative benchmarks, which means we focus on **relative numbers** rather than absolute numbers. - We use affordable hardware, so that you can reproduce the results easily. - We run benchmarks on the same exact machines to avoid any possible hardware bias. - All the benchmarks are [open-sourced](https://github.com/qdrant/vector-db-benchmark), so you can contribute and improve them.
Scenarios we tested 1. Upload & Search benchmark on single node [Benchmark](/benchmarks/single-node-speed-benchmark/) 2. Filtered search benchmark - [Benchmark](/benchmarks/#filtered-search-benchmark) 3. Memory consumption benchmark - Coming soon 4. Cluster mode benchmark - Coming soon
Some of our experiment design decisions are described in the [F.A.Q Section](/benchmarks/#benchmarks-faq). Reach out to us on our [Discord channel](https://qdrant.to/discord) if you want to discuss anything related Qdrant or these benchmarks. ",benchmarks/benchmarks-intro.md "--- draft: false id: 1 title: Single node benchmarks (2022) single_node_title: Single node benchmarks single_node_data: /benchmarks/result-2022-08-10.json preview_image: /benchmarks/benchmark-1.png date: 2022-08-23 weight: 2 Unlisted: true --- This is an archived version of Single node benchmarks. Please refer to the new version [here](/benchmarks/single-node-speed-benchmark/). ",benchmarks/single-node-speed-benchmark-2022.md "--- draft: false id: 4 title: Filtered search benchmark description: date: 2023-02-13 weight: 3 --- # Filtered search benchmark Applying filters to search results brings a whole new level of complexity. It is no longer enough to apply one algorithm to plain data. With filtering, it becomes a matter of the _cross-integration_ of the different indices. To measure how well different search engines perform in this scenario, we have prepared a set of **Filtered ANN Benchmark Datasets** - https://github.com/qdrant/ann-filtering-benchmark-datasets It is similar to the ones used in the [ann-benchmarks project](https://github.com/erikbern/ann-benchmarks/) but enriched with payload metadata and pre-generated filtering requests. It includes synthetic and real-world datasets with various filters, from keywords to geo-spatial queries. ### Why filtering is not trivial? Not many ANN algorithms are compatible with filtering. HNSW is one of the few of them, but search engines approach its integration in different ways: - Some use **post-filtering**, which applies filters after ANN search. It doesn't scale well as it either loses results or requires many candidates on the first stage. - Others use **pre-filtering**, which requires a binary mask of the whole dataset to be passed into the ANN algorithm. It is also not scalable, as the mask size grows linearly with the dataset size. On top of it, there is also a problem with search accuracy. It appears if too many vectors are filtered out, so the HNSW graph becomes disconnected. Qdrant uses a different approach, not requiring pre- or post-filtering while addressing the accuracy problem. Read more about the Qdrant approach in our [Filtrable HNSW](/articles/filtrable-hnsw/) article. ",benchmarks/filtered-search-intro.md "--- draft: false id: 1 title: Single node benchmarks description: | We benchmarked several vector databases using various configurations of them on different datasets to check how the results may vary. Those datasets may have different vector dimensionality but also vary in terms of the distance function being used. We also tried to capture the difference we can expect while using some different configuration parameters, for both the engine itself and the search operation separately.

Updated: January/June 2024 single_node_title: Single node benchmarks single_node_data: /benchmarks/results-1-100-thread-2024-06-15.json preview_image: /benchmarks/benchmark-1.png date: 2022-08-23 weight: 2 Unlisted: false --- ## Observations Most of the engines have improved since [our last run](/benchmarks/single-node-speed-benchmark-2022/). Both life and software have trade-offs but some clearly do better: * **`Qdrant` achives highest RPS and lowest latencies in almost all the scenarios, no matter the precision threshold and the metric we choose.** It has also shown 4x RPS gains on one of the datasets. * `Elasticsearch` has become considerably fast for many cases but it's very slow in terms of indexing time. It can be 10x slower when storing 10M+ vectors of 96 dimensions! (32mins vs 5.5 hrs) * `Milvus` is the fastest when it comes to indexing time and maintains good precision. However, it's not on-par with others when it comes to RPS or latency when you have higher dimension embeddings or more number of vectors. * `Redis` is able to achieve good RPS but mostly for lower precision. It also achieved low latency with single thread, however its latency goes up quickly with more parallel requests. Part of this speed gain comes from their custom protocol. * `Weaviate` has improved the least since our last run. ## How to read the results - Choose the dataset and the metric you want to check. - Select a precision threshold that would be satisfactory for your usecase. This is important because ANN search is all about trading precision for speed. This means in any vector search benchmark, **two results must be compared only when you have similar precision**. However most benchmarks miss this critical aspect. - The table is sorted by the value of the selected metric (RPS / Latency / p95 latency / Index time), and the first entry is always the winner of the category 🏆 ### Latency vs RPS In our benchmark we test two main search usage scenarios that arise in practice. - **Requests-per-Second (RPS)**: Serve more requests per second in exchange of individual requests taking longer (i.e. higher latency). This is a typical scenario for a web application, where multiple users are searching at the same time. To simulate this scenario, we run client requests in parallel with multiple threads and measure how many requests the engine can handle per second. - **Latency**: React quickly to individual requests rather than serving more requests in parallel. This is a typical scenario for applications where server response time is critical. Self-driving cars, manufacturing robots, and other real-time systems are good examples of such applications. To simulate this scenario, we run client in a single thread and measure how long each request takes. ### Tested datasets Our [benchmark tool](https://github.com/qdrant/vector-db-benchmark) is inspired by [github.com/erikbern/ann-benchmarks](https://github.com/erikbern/ann-benchmarks/). We used the following datasets to test the performance of the engines on ANN Search tasks:
| Datasets | # Vectors | Dimensions | Distance | |---------------------------------------------------------------------------------------------------|-----------|------------|-------------------| | [dbpedia-openai-1M-angular](https://huggingface.co/datasets/KShivendu/dbpedia-entities-openai-1M) | 1M | 1536 | cosine | | [deep-image-96-angular](http://sites.skoltech.ru/compvision/noimi/) | 10M | 96 | cosine | | [gist-960-euclidean](http://corpus-texmex.irisa.fr/) | 1M | 960 | euclidean | | [glove-100-angular](https://nlp.stanford.edu/projects/glove/) | 1.2M | 100 | cosine |
### Setup {{< figure src=/benchmarks/client-server.png caption=""Benchmarks configuration"" width=70% >}} - This was our setup for this experiment: - Client: 8 vcpus, 16 GiB memory, 64GiB storage (`Standard D8ls v5` on Azure Cloud) - Server: 8 vcpus, 32 GiB memory, 64GiB storage (`Standard D8s v3` on Azure Cloud) - The Python client uploads data to the server, waits for all required indexes to be constructed, and then performs searches with configured number of threads. We repeat this process with different configurations for each engine, and then select the best one for a given precision. - We ran all the engines in docker and limited their memory to 25GB. This was used to ensure fairness by avoiding the case of some engine configs being too greedy with RAM usage. This 25 GB limit is completely fair because even to serve the largest `dbpedia-openai-1M-1536-angular` dataset, one hardly needs `1M * 1536 * 4bytes * 1.5 = 8.6GB` of RAM (including vectors + index). Hence, we decided to provide all the engines with ~3x the requirement. Please note that some of the configs of some engines crashed on some datasets because of the 25 GB memory limit. That's why you might see fewer points for some engines on choosing higher precision thresholds. ",benchmarks/single-node-speed-benchmark.md "--- draft: false id: 3 title: Benchmarks F.A.Q. weight: 10 --- # Benchmarks F.A.Q. ## Are we biased? Probably, yes. Even if we try to be objective, we are not experts in using all the existing vector databases. We build Qdrant and know the most about it. Due to that, we could have missed some important tweaks in different vector search engines. However, we tried our best, kept scrolling the docs up and down, experimented with combinations of different configurations, and gave all of them an equal chance to stand out. If you believe you can do it better than us, our **benchmarks are fully [open-sourced](https://github.com/qdrant/vector-db-benchmark), and contributions are welcome**! ## What do we measure? There are several factors considered while deciding on which database to use. Of course, some of them support a different subset of functionalities, and those might be a key factor to make the decision. But in general, we all care about the search precision, speed, and resources required to achieve it. There is one important thing - **the speed of the vector databases should to be compared only if they achieve the same precision**. Otherwise, they could maximize the speed factors by providing inaccurate results, which everybody would rather avoid. Thus, our benchmark results are compared only at a specific search precision threshold. ## How we select hardware? In our experiments, we are not focusing on the absolute values of the metrics but rather on a relative comparison of different engines. What is important is the fact we used the same machine for all the tests. It was just wiped off between launching different engines. We selected an average machine, which you can easily rent from almost any cloud provider. No extra quota or custom configuration is required. ## Why you are not comparing with FAISS or Annoy? Libraries like FAISS provide a great tool to do experiments with vector search. But they are far away from real usage in production environments. If you are using FAISS in production, in the best case, you never need to update it in real-time. In the worst case, you have to create your custom wrapper around it to support CRUD, high availability, horizontal scalability, concurrent access, and so on. Some vector search engines even use FAISS under the hood, but a search engine is much more than just an indexing algorithm. We do, however, use the same benchmark datasets as the famous [ann-benchmarks project](https://github.com/erikbern/ann-benchmarks), so you can align your expectations for any practical reasons. ### Why we decided to test with the Python client There is no consensus when it comes to the best technology to run benchmarks. You’re free to choose Go, Java or Rust-based systems. But there are two main reasons for us to use Python for this: 1. While generating embeddings you're most likely going to use Python and python based ML frameworks. 2. Based on GitHub stars, python clients are one of the most popular clients across all the engines. From the user’s perspective, the crucial thing is the latency perceived while using a specific library - in most cases a Python client. Nobody can and even should redefine the whole technology stack, just because of using a specific search tool. That’s why we decided to focus primarily on official Python libraries, provided by the database authors. Those may use some different protocols under the hood, but at the end of the day, we do not care how the data is transferred, as long as it ends up in the target location. ## What about closed-source SaaS platforms? There are some vector databases available as SaaS only so that we couldn’t test them on the same machine as the rest of the systems. That makes the comparison unfair. That’s why we purely focused on testing the Open Source vector databases, so everybody may reproduce the benchmarks easily. This is not the final list, and we’ll continue benchmarking as many different engines as possible. ## How to reproduce the benchmark? The source code is available on [Github](https://github.com/qdrant/vector-db-benchmark) and has a `README.md` file describing the process of running the benchmark for a specific engine. ## How to contribute? We made the benchmark Open Source because we believe that it has to be transparent. We could have misconfigured one of the engines or just done it inefficiently. If you feel like you could help us out, check out our [benchmark repository](https://github.com/qdrant/vector-db-benchmark). ",benchmarks/benchmark-faq.md "--- draft: false id: 5 title: description: ' Updated: Feb 2023 ' filter_data: /benchmarks/filter-result-2023-02-03.json date: 2023-02-13 weight: 4 --- ## Filtered Results As you can see from the charts, there are three main patterns: - **Speed boost** - for some engines/queries, the filtered search is faster than the unfiltered one. It might happen if the filter is restrictive enough, to completely avoid the usage of the vector index. - **Speed downturn** - some engines struggle to keep high RPS, it might be related to the requirement of building a filtering mask for the dataset, as described above. - **Accuracy collapse** - some engines are loosing accuracy dramatically under some filters. It is related to the fact that the HNSW graph becomes disconnected, and the search becomes unreliable. Qdrant avoids all these problems and also benefits from the speed boost, as it implements an advanced [query planning strategy](/documentation/search/#query-planning). ",benchmarks/filtered-search-benchmark.md "--- title: Vector Database Benchmarks description: The first comparative benchmark and benchmarking framework for vector search engines and vector databases. keywords: - vector databases comparative benchmark - ANN Benchmark - Qdrant vs Milvus - Qdrant vs Weaviate - Qdrant vs Redis - Qdrant vs ElasticSearch - benchmark - performance - latency - RPS - comparison - vector search - embedding preview_image: /benchmarks/benchmark-1.png seo_schema: { ""@context"": ""https://schema.org"", ""@type"": ""Article"", ""headline"": ""Vector Search Comparative Benchmarks"", ""image"": [ ""https://qdrant.tech/benchmarks/benchmark-1.png"" ], ""abstract"": ""The first comparative benchmark and benchmarking framework for vector search engines"", ""datePublished"": ""2022-08-23"", ""dateModified"": ""2022-08-23"", ""author"": [{ ""@type"": ""Organization"", ""name"": ""Qdrant"", ""url"": ""https://qdrant.tech"" }] } ---",benchmarks/_index.md "--- title: Anomaly Detection with Qdrant description: Qdrant optimizes anomaly detection by integrating vector embeddings for nuanced data analysis. It supports dissimilarity, diversity searches, and advanced anomaly detection techniques, enhancing applications from cybersecurity to finance with precise, efficient data insights. image: src: /img/data-analysis-anomaly-detection/anomaly-detection.svg alt: Anomaly detection caseStudy: logo: src: /img/data-analysis-anomaly-detection/customer-logo.svg alt: Logo title: Metric Learning for Anomaly Detection description: ""Detecting Coffee Anomalies with Qdrant: Discover how Qdrant can be used for anomaly detection in green coffee quality control, transforming the industry's approach to sorting and classification."" link: text: Read Case Study url: /articles/detecting-coffee-anomalies/ image: src: /img/data-analysis-anomaly-detection/case-study.png alt: Preview sitemapExclude: true --- ",data-analysis/data-analysis-anomaly-detection.md "--- title: Data Analysis and Anomaly Detection description: Explore entity matching for deduplication and anomaly detection with Qdrant, leveraging neural networks while still being fast and affordable in your applications for insights hard to get in other ways. startFree: text: Get Started url: https://cloud.qdrant.io/ learnMore: text: Contact Us url: /contact-us/ image: src: /img/vectors/vector-3.svg alt: Anomaly Detection sitemapExclude: true --- ",data-analysis/data-analysis-hero.md "--- title: Advanced Data Analysis with Anomaly Detection & Entity Matching description: Qdrant revolutionizes data analysis and anomaly detection with advanced entity matching techniques. Learn more today. url: data-analysis-anomaly-detection build: render: always cascade: - build: list: local publishResources: false render: never --- ",data-analysis/_index.md "--- title: Contact Qdrant description: Let us know how we can help by filling out the form. We will respond within 48 business hours. cards: - id: 0 icon: /icons/outline/comments-violet.svg title: Qdrant Cloud Support description: For questions or issues with Qdrant Cloud, contact mailLink: text: support@qdrant.io href: support@qdrant.io - id: 1 icon: /icons/outline/discord-blue.svg title: Developer Support description: For developer questions about Qdrant usage, join our link: text: Discord Server href: https://qdrant.to/discord form: id: contact-us-form title: Talk to our Team hubspotFormOptions: '{ ""region"": ""eu1"", ""portalId"": ""139603372"", ""formId"": ""814b303f-2f24-460a-8a81-367146d98786"", ""submitButtonClass"": ""button button_contained"", }' --- ",contact-us/_index.md "--- title: High-Performance Vector Search at Scale description: Maximize vector search efficiency by trying the leading open-source vector search database. url: /lp/high-performance-vector-search/ aliases: - /marketing/ - /lp/ sitemapExclude: true heroSection: title: High-Performance Vector Search at Scale description: The leading open-source vector database designed to handle high-dimensional vectors for performance and massive-scale AI applications. Qdrant is purpose-built in Rust for unmatched speed and reliability even when processing billions of vectors. buttonLeft: text: Start Free link: https://cloud.qdrant.io/ buttonRight: text: See Benchmarks link: /benchmarks/ image: /marketing/mozilla/dashboard-graphic.svg customersSection: title: Qdrant Powers Thousands of Top AI Solutions. customers: - image: /content/images/logos/mozilla-logo-mono.png name: Mozilla weight: 0 - image: /content/images/logos/alphasense-logo-mono.png name: Alphasense weight: 10 - image: /content/images/logos/bayer-logo-mono.png name: Bayer weight: 10 - image: /content/images/logos/dailymotion-logo-mono.png name: Dailymotion weight: 10 - image: /content/images/logos/deloitte-logo-mono.png name: Deloitte weight: 10 - image: /content/images/logos/disney-streaming-logo-mono.png name: Disney Streaming weight: 10 - image: /content/images/logos/flipkart-logo-mono.png name: Flipkart weight: 10 - image: /content/images/logos/hp-enterprise-logo-mono.png name: HP Enterprise weight: 10 - image: /content/images/logos/hrs-logo-mono.png name: HRS weight: 10 - image: /content/images/logos/johnson-logo-mono.png name: Johnson & Jonson weight: 10 - image: /content/images/logos/kaufland-logo-mono.png name: Kaufland weight: 10 - image: /content/images/logos/microsoft-logo-mono.png name: Microsoft weight: 10 featuresSection: title: Qdrant is designed to deliver the fastest and most accurate results at the lowest cost. subtitle: Learn more about it in our performance benchmarks. # not required, optional features: - title: Highest RPS text: Qdrant leads with top requests-per-seconds, outperforming alternative vector databases in various datasets by up to 4x. icon: /marketing/mozilla/rps.svg - title: Minimal Latency text: ""Qdrant consistently achieves the lowest latency, ensuring quicker response times in data retrieval: 3ms response for 1M Open AI embeddings, outpacing alternatives by 50x-100x."" icon: /marketing/mozilla/latency.svg - title: Fast Indexing text: Qdrant’s indexing time for large-scale, high-dimensional datasets is notably faster than alternative options. icon: /marketing/mozilla/indexing.svg - title: High Control with Accuracy text: Pre-filtering gives high accuracy with exceptional latencies in nested filtering search scenarios. icon: /marketing/mozilla/accuracy.svg - title: Easy-to-use text: Qdrant provides user-friendly SDKs in multiple programming languages, facilitating easy integration into existing systems. icon: /marketing/mozilla/easy-to-use.svg button: text: Get Started For Free link: https://qdrant.to/cloud marketplaceSection: title: Qdrant is also available on leading marketplaces. buttons: - image: /marketing/mozilla/amazon_logo.png link: https://aws.amazon.com/marketplace/pp/prodview-rtphb42tydtzg?sr=0-1&ref_=beagle&applicationId=AWS-Marketplace-Console name: AWS Marketplace - image: /marketing/mozilla/google_cloud_logo.png link: https://console.cloud.google.com/marketplace/product/qdrant-public/qdrant?project=qdrant-public name: Google Cloud Marketplace bannerSection: title: Scale your AI with Qdrant bgImage: /marketing/mozilla/stars-pattern.svg # not required, optional image: /marketing/mozilla/space-rocket.png button: text: Get Started For Free link: https://qdrant.to/cloud --- ",marketing/mozilla.md "--- _build: list: never publishResources: false render: never # child pages won't be rendered if lines below are not removed # currently we don't have a template for this section # remove or comment out the lines below to render the section # only if you have a template for it! cascade: _build: list: never publishResources: false render: never ---",marketing/_index.md "--- subtitle: Powering the next generation of AI applications with advanced, high-performant vector similarity search technology. socialMedia: - id: 0 icon: name: GitHub url: https://github.com/qdrant/qdrant - id: 1 icon: name: LinkedIn url: https://qdrant.to/linkedin - id: 2 icon: name: X url: https://qdrant.to/twitter - id: 3 icon: name: Discord url: https://qdrant.to/discord - id: 4 icon: name: YouTube url: https://www.youtube.com/channel/UC6ftm8PwH1RU_LM1jwG0LQA menuItems: - title: Products items: - id: 1 name: Qdrant Vector Database url: /qdrant-vector-database/ - id: 2 name: Enterprise Solutions url: /enterprise-solutions/ - id: 3 name: Qdrant Cloud url: /cloud/ # - id: 4 # name: Private Cloud # url: /private-cloud/ - id: 5 name: Hybrid Cloud url: /hybrid-cloud/ - id: 6 name: Demos url: /demo/ - id: 7 name: Pricing url: /pricing/ - title: Use Cases items: - id: 0 name: Advanced Search url: /advanced-search/ - id: 1 name: Recommendation Systems url: /recommendations/ - id: 2 name: Retrieval Augmented Generation url: /rag/ - id: 3 name: Data Analysis & Anomaly Detection url: /data-analysis-anomaly-detection/ # - id: 4 # name: Qdrant for Startups # url: /qdrant-for-startups/ - title: Developers items: - id: 0 name: Documentation url: /documentation/ - id: 1 name: Discord url: https://qdrant.to/discord # - id: 2 # name: Qdrant Stars # url: /stars/ - id: 3 name: Github url: https://github.com/qdrant/qdrant - id: 4 name: Roadmap url: https://qdrant.to/roadmap - id: 5 name: Changelog url: https://github.com/qdrant/qdrant/releases - id: 6 name: Status Page url: https://status.qdrant.io/ - title: Resources items: - id: 0 name: Blog url: /blog/ - id: 1 name: Benchmarks url: /benchmarks/ - id: 2 name: Articles url: /articles/ - title: Company items: - id: 0 name: About Us url: /about-us/ # - id: 1 # name: Customers # url: /customers/ # - id: 2 # name: Partners # url: /partners/ - id: 3 name: Careers url: https://qdrant.join.com/ - id: 4 name: Contact Us url: /contact-us/ copyright: © 2024 Qdrant. All Rights Reserved termsLink: text: Terms url: /legal/terms_and_conditions/ privacyLink: text: Privacy Policy url: /legal/privacy-policy/ impressumLink: text: Impressum url: /legal/impressum/ bages: - src: /img/soc2-badge.png alt: ""SOC2"" url: http://qdrant.to/trust-center sitemapExclude: true --- ",headless/footer.md "--- title: Sign up for Qdrant updates description: We'll occasionally send you best practices for using vector data and similarity search, as well as product news. placeholder: Enter your email button: Subscribe hubspotFormOptions: '{ ""region"": ""eu1"", ""portalId"": ""139603372"", ""formId"": ""049d96c6-ef65-4e41-ba69-a3335b9334cf"", ""cssClass"": ""subscribe-form"", ""submitButtonClass"": ""button button_contained button_lg"", ""submitText"": ""Subscribe"" }' sitemapExclude: true --- ",headless/newsletter.md "--- stats: githubStars: 19.7k discordMembers: 6.4k twitterFollowers: 7.5k ---",headless/stats.md "--- title: Building the most efficient, scalable, high-performance vector database on the market link: text: Our Mission # url: / sitemapExclude: true ---",headless/mission.md "--- customers: - id: 0 name: Alphasense logo: - id: 1 name: Disney logo: - id: 2 name: BCGX logo: - id: 3 name: CB-Insights logo: - id: 4 name: Vivendi logo: - id: 5 name: Gitbook logo: - id: 6 name: Microsoft logo: - id: 7 name: Dust logo: - id: 8 name: GLG logo: - id: 9 name: Mozilla logo: - id: 10 name: Johnson-&-Johnson logo: - id: 11 name: HRS-Group logo: - id: 12 name: Kaufland logo: - id: 13 name: Deloitte logo: - id: 14 name: Hewlett-Packard-Enterprise logo: sitemapExclude: true --- ",headless/customer-list.md "--- logIn: text: Log in url: https://cloud.qdrant.io/ startFree: text: Start Free url: https://cloud.qdrant.io/ menuItems: - id: menu-0 name: Product subMenuItems: - id: subMenu-0-0 name: Qdrant Vector Database icon: qdrant-vector-database.svg url: /qdrant-vector-database/ - id: subMenu-0-1 name: Qdrant Cloud icon: qdrant-cloud.svg url: /cloud/ - id: subMenu-0-2 name: Qdrant Enterprise Solutions icon: qdrant-enterprise-solutions.svg url: /enterprise-solutions/ - id: subMenu-0-3 name: Hybrid Cloud icon: hybrid-cloud.svg url: /hybrid-cloud/ # - id: subMenu-0-4 # name: Private Cloud # icon: private-cloud.svg # url: /private-cloud/ - id: subMenu-0-5 name: Demos icon: demos.svg url: /demo/ - id: menu-1 name: Use Cases url: /use-cases/ subMenuItems: - id: subMenu-1-0 name: RAG icon: rag.svg url: /rag/ - id: subMenu-1-1 name: Recommendation Systems icon: recommendation-systems.svg url: /recommendations/ - id: subMenu-1-2 name: Advanced Search icon: advanced-search.svg url: /advanced-search/ - id: subMenu-1-3 name: Data Analysis & Anomaly Detection icon: data-analysis-anomaly-detection.svg url: /data-analysis-anomaly-detection/ # - id: subMenu-1-4 # name: Qdrant for Startups # icon: qdrant-for-startups.svg # url: /qdrant-for-startups/ - id: menu-2 name: Developers subMenuItems: - id: subMenu-2-0 name: Documentation icon: documentation.svg url: /documentation/ - id: subMenu-2-1 name: Community icon: community.svg url: /community/ - id: subMenu-2-2 name: Qdrant Stars icon: qdrant-stars.svg url: /stars/ - id: subMenu-2-3 name: Github icon: github.svg url: https://github.com/qdrant/qdrant - id: subMenu-2-4 name: Roadmap icon: roadmap.svg url: https://qdrant.to/roadmap - id: subMenu-2-5 name: Changelog icon: changelog.svg url: https://github.com/qdrant/qdrant/releases - id: menu-3 name: Resources subMenuItems: - id: subMenu-3-0 name: Benchmarks icon: benchmarks.svg url: /benchmarks/ - id: subMenu-3-1 name: Blog icon: blog.svg url: /blog/ - id: subMenu-3-2 name: Articles icon: articles.svg url: /articles/ - id: menu-4 name: Company subMenuItems: - id: subMenu-4-0 name: About us icon: about-us.svg url: /about-us/ - id: subMenu-4-1 name: Customers icon: customers.svg url: /customers/ - id: subMenu-4-2 name: Partners icon: partners.svg url: /partners/ - id: subMenu-4-3 name: Careers icon: careers.svg url: https://qdrant.join.com/ - id: subMenu-4-4 name: Contact us icon: contact-us.svg url: /contact-us/ - id: menu-5 name: Pricing url: /pricing/ sitemapExclude: true --- ",headless/menu.md "--- title: Launch a new cluster today button: text: Get Started url: https://cloud.qdrant.io/ image: src: /img/database.svg alt: Database sitemapExclude: true --- ",headless/get-started-small-database.md "--- photoCards: - id: 0 - id: 1 - id: 2 - id: 3 sitemapExclude: true ---",headless/carousel.md "--- title: Qdrant is also available on leading marketplaces items: - id: 0 title: AWS Marketplace image: src: /img/marketplaces/aws-logo.png alt: AWS marketplace logo link: text: Get Started url: https://aws.amazon.com/marketplace/pp/prodview-rtphb42tydtzg?sr=0-1&ref_=beagle&applicationId=AWS-Marketplace-Console - id: 1 title: Google Cloud Marketplace image: src: /img/marketplaces/google-cloud-logo.png alt: Google Cloud marketplace logo link: text: Get Started url: https://console.cloud.google.com/marketplace/product/qdrant-public/qdrant?project=qdrant-public - id: 2 title: Microsoft Azure image: src: /img/marketplaces/microsoft-azure-logo.png alt: Microsoft Azure logo link: text: Get Started url: https://azuremarketplace.microsoft.com/en-en/marketplace/apps/qdrantsolutionsgmbh1698769709989.qdrant-db sitemapExclude: true --- ",headless/marketplaces.md "--- title: Additional Resources resourceCards: - id: 0 icon: title: Documentation content: Discover more about Qdrant by checking out our documentation for details on advanced features and functionalities. link: text: Read More url: /documentation/ - id: 1 icon: title: Enterprise Solutions content: For maximal control for production-ready applications Qdrant is available as a Hybrid Cloud and Private Cloud (Full On Premise) solution. link: text: Contact Sales url: /contact-sales/ - id: 2 icon: title: Benchmarks content: Learn how Qdrant is designed to deliver the fastest and most accurate results and how it compares to alternatives in our benchmarks. link: text: Compare url: /benchmarks/ - id: 3 icon: title: Pricing content: Visit our pricing page for more details on Qdrant’s free tier, managed cloud, and enterprise plans. link: text: Learn More url: /pricing/ sitemapExclude: true --- ",headless/additional-resources.md "--- description: Oops! We can't find the page you were looking for. image: src: /img/404-galaxy.svg mobileSrc: /img/404-galaxy-mobile.svg alt: Galaxy homeButton: link: / text: Go to Home supportButton: link: https://qdrant.io/discord text: Get Support sitemapExclude: True --- ",headless/not-found.md "--- icon: text: How to make data ready for your RAG application link: text: Register now url: https://try.qdrant.tech/webinar-how-to-make-data-ready-for-your-rag-app?utm_source=event&utm_medium=website&utm_campaign=august-webinar-rag&utm_term=chunking start: 2024-08-14T15:28:00.000Z sitemapExclude: true end: 2024-08-29T08:00:00.000Z --- ",headless/top-banner.md "--- content: Do you have further questions? We are happy to assist you. contactUs: text: Contact us url: /contact-us/ sitemapExclude: true --- ",headless/get-contacted-with-question.md "--- title: Interactive Tutorials description: Dive into the capabilities of Qdrant with our hands-on tutorials. Discover various methods to integrate vector search into your applications, enhancing functionality and user experience. link: text: View All Tutorials url: /documentation/tutorials/ commands: - '...' - '""hnsw_config"": {' - '""m"": 64,' - '""ef_construct"": 512,' - '""on_disk"": true' - '}' - '...' sitemapExclude: true ---",headless/tutorials.md "--- title: Our story subtitle: A paradigm shift is underway in the field of data management and information retrieval. content: Today, our world is increasingly dominated by complex, unstructured data like images, audio, video, and text. Traditional ways of retrieving data based on keyword matching are no longer sufficient. Vector databases are designed to handle complex high-dimensional data, unlocking the foundation for pivotal AI applications. extraTitle: Today Qdrant powers the most ambitious AI applications, from cutting-edge startups to large-scale enterprises. extraContent: We started Qdrant with the mission to build the most efficient, scalable, high-performance vector database on the market. Since then we have seen incredible user growth and support from our open-source community with thousands of users and millions of downloads. year: In 2021 link: text: Join Our Team url: https://qdrant.join.com/ sitemapExclude: true ---",headless/our-story.md "--- message: text: We use cookies to learn more about you. At any time you can delete or block cookies through your browser settings. color: '#161E33' link: text: Learn more url: /legal/privacy-policy/ color: '#DC244C' button: text: I accept color: '#DC244C' background: '#F0F3FA' ---",headless/cookies.md "--- title: Get started for free subtitle: Turn embeddings or neural network encoders into full-fledged applications for matching, searching, recommending, and more. button: text: Start Free url: https://cloud.qdrant.io/ sitemapExclude: true --- ",headless/get-started.md "--- title: Leadership teamMembers: - id: 0 name: André Zayarni position: CEO & Co-Founder avatar: '/img/leadership/andre-zayarni.png' - id: 1 name: Andrey Vasnetsov position: CTO & Co-Founder avatar: '/img/leadership/andrey-vasnetsov.png' - id: 2 name: Fabrizio Schmidt position: Product & Engineering avatar: '/img/leadership/fabrizio-schmidt.png' - id: 3 name: Bastian Hofmann position: Enterprise Solutions avatar: '/img/leadership/bastian-hofmann.png' - id: 4 name: Dominik Alberts position: Finance avatar: '/img/leadership/dominik-alberts.png' - id: 5 name: David Myriel position: Developer Relations avatar: '/img/leadership/david-myriel.png' - id: 6 name: Manuel Meyer position: Growth avatar: '/img/leadership/manuel-meyer.png' - id: 7 name: Karim Chester position: Sales avatar: '/img/leadership/karim-chester.png' sitemapExclude: true ---",headless/leadership.md "--- title: Qdrant Cloud is the fastest way to get started with Qdrant. button: text: Get Started url: https://cloud.qdrant.io/ image: src: /img/rocket.svg alt: Rocket sitemapExclude: true --- ",headless/get-started-small-rocket.md "--- title: Want to build the technology for the next generation of AI applications with us? subtitle: Take a look at our open roles. We’re excited to hear from you. seeOpenRoles: text: See Open Roles url: https://qdrant.join.com/ ---",headless/open-roles.md "--- title: Social Share Buttons buttons: - id: x title: x url: https://twitter.com/intent/tweet?url={{ $link }}&text={{ $title }} icon: - id: linkedin title: LinkedIn url: https://www.linkedin.com/sharing/share-offsite/?url={{ $link }} icon: ---",headless/share-buttons.md "--- title: Our Investors investors: - id: 0 logo: '/img/investors/spark-capital.svg' - id: 1 logo: '/img/investors/unusual-ventures.svg' - id: 2 logo: '/img/investors/42cap.svg' - id: 3 logo: '/img/investors/ibb-ventures.svg' sitemapExclude: true --- ",headless/investors.md "--- build: list: never publishResources: false render: never cascade: - build: list: never publishResources: false render: never --- ",headless/_index.md "--- content: Do you have further questions? We are happy to assist you. contactUs: text: Contact us url: /contact-sales/ sitemapExclude: true --- ",headless/get-contacted-with-question-sales.md "--- title: Get Started with Qdrant Free button: text: Get Started url: https://cloud.qdrant.io/ image: src: /img/rocket.svg alt: Rocket sitemapExclude: true --- ",headless/get-started-blogs.md "--- title: Our Customers Words wall: intro_text: ""See what our community is saying on our"" url: ""https://testimonial.to/qdrant/all"" url_text: Vector Space Wall link: text: Customer Stories url: /customers/ storyCards: - id: 0 icon: /img/brands/bayer.svg brand: Bayer content: “VectorStores are definitely here to stay, the objects in the world around us from image, sound, video and text become easily universal and searchable thanks to the embedding models. I personally recommend Qdrant. We have been using it for a while and couldn't be happier.“ author: avatar: '/img/customers/hooman-sedghamiz.svg' fullName: Hooman Sedghamiz position: Director Al /ML, Bayer - id: 2 icon: '/img/brands/cb-insights.svg' brand: CB Insights content: “We looked at all the big options out there right now for vector databases, with our focus on ease of use, performance, pricing, and communication. Qdrant came out on top in each category... ultimately, it wasn't much of a contest.” author: avatar: '/img/customers/alex-webb.svg' fullName: Alex Webb position: Director of Engineering, CB Insights - id: 3 icon: '/img/brands/bosch.svg' brand: Bosch content: “With Qdrant, we found the missing piece to develop our own provider independent multimodal generative AI platform on enterprise scale.” author: avatar: - '/img/customers/jeremy-t.png' - '/img/customers/daly-singh.png' fullName: Jeremy T. & Daly Singh position: Generative AI Expert & Product Owner, Bosch - id: 4 icon: '/img/brands/cognizant.svg' brand: Cognizant content: “We LOVE Qdrant! The exceptional engineering, strong business value, and outstanding team behind the product drove our choice. Thank you for your great contribution to the technology community!” author: avatar: '/img/customers/kyle-tobin.png' fullName: Kyle Tobin position: Principal, Cognizant sitemapExclude: true --- ",headless/main/customer-stories.md "--- title: High-Performance Vector Search at Scale subtitle: Powering the next generation of AI applications with advanced, open-source vector similarity search technology. startFree: text: Start Free url: https://cloud.qdrant.io/ learnMore: text: Learn More url: /qdrant-vector-database/ heroImageSources: - minWidth: 2881px srcset: /img/hero-home-illustration-x3.webp type: image/webp - minWidth: 2881px srcset: /img/hero-home-illustration-x3.png type: image/png - minWidth: 1441px srcset: /img/hero-home-illustration-x2.webp type: image/webp - minWidth: 1441px srcset: /img/hero-home-illustration-x2.png type: image/png - srcset: /img/hero-home-illustration-x1.webp type: image/webp - srcset: /img/hero-home-illustration-x1.png type: image/png fallbackHeroImage: src: /img/hero-home-illustration-x1.png alt: 'Hero image: an astronaut looking at dark hole from the planet surface.' githubStars: logo: starIcon: actionText: Star us actionUrl: https://github.com/qdrant/qdrant sitemapExclude: true ---",headless/main/hero.md "--- title: Integrations link: text: See Integrations url: /documentation/frameworks/ embedLink: text: embeddings url: /documentation/embeddings/ frameworkLink: text: frameworks url: /documentation/frameworks/ sitemapExclude: true --- ",headless/main/integrations.md "--- title: Deploy Qdrant locally with Docker commands: - docker pull qdrant/qdrant - docker run -p 6333:6333 qdrant/qdrant quickStartLink: text: Quick Start Guide url: /documentation/quick-start/ repositoryLink: text: GitHub repository url: https://github.com/qdrant/qdrant ---",headless/main/docker-deploy.md "--- title: AI Meets Advanced Vector Search description: The leading open source vector database and similarity search engine designed to handle high-dimensional vectors for performance and massive-scale AI applications. link: text: All features url: /qdrant-vector-database/ cloudFeature: title: Cloud-Native Scalability & High-Availability content: Enterprise-grade Managed Cloud. Vertical and horizontal scaling and zero-downtime upgrades. link: text: Qdrant Cloud url: /cloud/ featureCards: - id: 0 icon: title: Ease of Use & Simple Deployment content: Quick deployment in any environment with Docker and a lean API for easy integration, ideal for local testing. link: text: Quick Start Guide url: /documentation/quick-start/ - id: 1 icon: title: Cost Efficiency with Storage Options content: Dramatically reduce memory usage with built-in compression options and offload data to disk. link: text: Quantization url: /articles/scalar-quantization/ - id: 2 icon: title: Rust-Powered Reliability & Performance content: Purpose built in Rust for unmatched speed and reliability even when processing billions of vectors. link: text: Benchmarks url: /benchmarks/ sitemapExclude: true --- ",headless/main/core-features.md "--- customerStories: text: Qdrant Powers Thousands of Top AI Solutions. textLink: Customer Stories url: /customers/ # WARNING: If you want to add more customers, you need adjust the styles in the file: # `qdrant-landing/themes/qdrant-2024/assets/css/partials/_customers.scss` # # ``` # @include marquee.base(64px, 224px, , , 52px, $neutral-10, false, 50s, block); # ``` # customers: - id: 0 name: Alphasense logo: - id: 1 name: Disney logo: - id: 2 name: BCGX logo: - id: 3 name: CB-Insights logo: - id: 4 name: Vivendi logo: - id: 5 name: Gitbook logo: - id: 6 name: Microsoft logo: - id: 7 name: Dust logo: - id: 8 name: Discord logo: - id: 9 name: Mozilla logo: - id: 10 name: Johnson-&-Johnson logo: - id: 11 name: HRS-Group logo: - id: 12 name: Kaufland logo: - id: 13 name: Deloitte logo: - id: 14 name: Hewlett-Packard-Enterprise logo: - id: 15 name: X logo: - id: 16 name: Quora logo: - id: 17 name: Perplexity logo: - id: 18 name: Voiceflow logo: - id: 19 name: Merck logo: - id: 20 name: Meesho logo: - id: 21 name: Thoughtworks logo: sitemapExclude: true --- ",headless/main/customers.md "--- title: Vectors in Action subtitle: Turn embeddings or neural network encoders into full-fledged applications for matching, searching, recommending, and more. featureCards: - id: 0 title: Advanced Search content: Elevate your apps with advanced search capabilities. Qdrant excels in processing high-dimensional data, enabling nuanced similarity searches, and understanding semantics in depth. Qdrant also handles multimodal data with fast and accurate search algorithms. link: text: Learn More url: /advanced-search/ - id: 1 title: Recommendation Systems content: Create highly responsive and personalized recommendation systems with tailored suggestions. Qdrant’s Recommendation API offers great flexibility, featuring options such as best score recommendation strategy. This enables new scenarios of using multiple vectors in a single query to impact result relevancy. link: text: Learn More url: /recommendations/ - id: 2 title: Retrieval Augmented Generation (RAG) content: Enhance the quality of AI-generated content. Leverage Qdrant's efficient nearest neighbor search and payload filtering features for retrieval-augmented generation. You can then quickly access relevant vectors and integrate a vast array of data points. link: text: Learn More url: /rag/ - id: 3 title: Data Analysis and Anomaly Detection content: Transform your approach to Data Analysis and Anomaly Detection. Leverage vectors to quickly identify patterns and outliers in complex datasets. This ensures robust and real-time anomaly detection for critical applications. link: text: Learn More url: /data-analysis-anomaly-detection/ ---",headless/main/vectors.md "--- draft: false image: ""content/images/logos/dailymotion-logo-mono"" name: ""Dailymotion"" sitemapExclude: True ---",stack/dailymotion.md "--- draft: false image: ""content/images/logos/hp-enterprise-logo-mono"" name: ""Hewlett Packard Enterprise"" sitemapExclude: True ---",stack/hp-enterprise.md "--- draft: false image: ""content/images/logos/bayer-logo-mono"" name: ""Bayer"" sitemapExclude: True ---",stack/bayer.md "--- draft: false image: ""content/images/logos/hrs-logo-mono"" name: ""HRS"" sitemapExclude: True ---",stack/hrs.md "--- draft: false image: ""content/images/logos/deloitte-logo-mono"" name: ""Deloitte"" sitemapExclude: True ---",stack/deloitte.md "--- draft: false image: ""content/images/logos/kaufland-logo-mono"" name: ""Kaufland"" sitemapExclude: True ---",stack/kaufland.md "--- draft: false image: ""content/images/logos/microsoft-logo-mono"" name: ""Bayer"" sitemapExclude: True ---",stack/microsoft.md "--- draft: false image: ""content/images/logos/disney-streaming-logo-mono"" name: ""Disney Streaming"" sitemapExclude: True ---",stack/disney-streaming.md "--- draft: false image: ""content/images/logos/mozilla-logo-mono"" name: ""Mozilla"" sitemapExclude: True ---",stack/mozilla.md "--- draft: false image: ""content/images/logos/johnson-logo-mono"" name: ""Johnson & Johnson"" sitemapExclude: True ---",stack/johnoson-and-johnson.md "--- draft: false image: ""content/images/logos/flipkart-logo-mono"" name: ""Flipkart"" sitemapExclude: True ---",stack/flipkart.md "--- draft: false image: ""content/images/logos/alphasense-logo-mono"" name: ""AlphaSense"" sitemapExclude: True ---",stack/alphasense.md "--- title: Trusted by developers worldwide subtitle: Qdrant is powering thousands of innovative AI solutions at leading companies. Engineers are choosing Qdrant for its top performance, high scalability, ease of use, and flexible cost and resource-saving options sitemapExclude: True _build: list: never publishResources: false render: never cascade: _build: list: never publishResources: false render: never ---",stack/_index.md "--- title: Discover our Programs resources: - id: 0 title: Qdrant Stars description: Qdrant Stars are our top contributors, organizers, and evangelists. Learn more about how you can become a Star. link: text: Learn More url: /blog/qdrant-stars-announcement/ image: src: /img/community-features/qdrant-stars.svg alt: Avatar - id: 1 title: Discord description: Chat in real-time with the Qdrant team and community members. link: text: Join our Discord url: https://discord.gg/qdrant image: src: /img/community-features/discord.svg alt: Avatar - id: 2 title: Community Blog description: Learn all the latest tips and tricks in the AI space through our community blog. link: text: Visit our Blog url: /blog/ image: src: /img/community-features/community-blog.svg alt: Avatar - id: 3 title: Vector Space Talks description: Weekly tech talks with Qdrant users and industry experts. link: text: Learn More url: https://www.youtube.com/watch?v=4aUq5VnR_VI&list=PL9IXkWSmb36_eANzd_sKeQ3tXbFiEGEWn&pp=iAQB image: src: /img/community-features/vector-space-talks.svg alt: Avatar features: - id: 0 icon: src: /icons/outline/documentation-blue.svg alt: Documentation title: Documentation description: Docs carefully crafted to support developers and decision-makers learning about Qdrant features. link: text: Read More url: /documentation/ - id: 1 icon: src: /icons/outline/guide-blue.svg alt: Guide title: Contributors Guide description: Whatever your strengths are, we got you covered. Learn more about how to contribute to Qdrant. link: text: Learn More url: https://github.com/qdrant/qdrant/blob/master/CONTRIBUTING.md - id: 2 icon: src: /icons/outline/handshake-blue.svg alt: Partners title: Partners description: Technology partners and applications that support Qdrant. link: text: Learn More url: /partners/ - id: 3 icon: src: /icons/outline/mail-blue.svg alt: Newsletter title: Newsletter description: Stay up to date with all the latest Qdrant news link: text: Learn More url: /subscribe/ sitemapExclude: true --- ",community/community-features.md "--- title: Welcome to the Qdrant Community description: Connect with over 30,000 community members, get access to educational resources, and stay up to date on all news and discussions about Qdrant and the vector database space. image: src: /img/community-hero.svg srcMobile: /img/mobile/community-hero.svg alt: Community button: text: Join our Discord url: https://discord.gg/qdrant about: Get access to educational resources, and stay up to date on all news and discussions about Qdrant and the vector database space. sitemapExclude: true --- ",community/community-hero.md "--- title: Community description: Community build: render: always cascade: - build: list: local publishResources: false render: never --- ",community/_index.md "--- title: Love from our community testimonials: - id: 0 name: Owen Colegrove nickname: ""@ocolegro"" avatar: src: /img/customers/owen-colegrove.svg alt: Avatar text: qurant has been amazing! - id: 1 name: Darren nickname: ""@darrenangle"" avatar: src: /img/customers/darren.svg alt: Avatar text: qdrant is so fast I'm using Rust for all future projects goodnight everyone - id: 2 name: Greg Schoeninger nickname: ""@gregschoeninger"" avatar: src: /img/customers/greg-schoeninger.svg alt: Avatar text: Indexing millions of embeddings into @qdrant_engine has been the smoothest experience I've had so far with a vector db. Team Rustacian all the way 🦀 - id: 3 name: Ifioravanti nickname: ""@ivanfioravanti"" avatar: src: /img/customers/ifioravanti.svg alt: Avatar text: @qdrant_engine is ultra super powerful! Combine it to @LangChainAI and you have a super productivity boost for your AI projects ⏩⏩⏩ - id: 4 name: sengpt nickname: ""@sengpt"" avatar: src: /img/customers/sengpt.svg alt: Avatar text: Thank you, Qdrant is awesome - id: 4 name: Owen Colegrove nickname: ""@ocolegro"" avatar: src: /img/customers/owen-colegrove.svg alt: Avatar text: that sounds good to me, big fan of qdrant. sitemapExclude: true --- ",community/community-testimonials.md "--- title: Terms and Conditions --- ## Terms and Conditions Last updated: December 10, 2021 Please read these terms and conditions carefully before using Our Service. ### Interpretation and Definitions #### Interpretation The words of which the initial letter is capitalized have meanings defined under the following conditions. The following definitions shall have the same meaning regardless of whether they appear in singular or in plural. #### Definitions For the purposes of these Terms and Conditions: * **Affiliate** means an entity that controls, is controlled by or is under common control with a party, where ""control"" means ownership of 50% or more of the shares, equity interest or other securities entitled to vote for election of directors or other managing authority. * **Country** refers to: Berlin, Germany * **Company** (referred to as either ""the Company"", ""We"", ""Us"" or ""Our"" in this Agreement) refers to Qdrant Solutions GmbH, Chausseestraße 86, 10115 Berlin. * **Device** means any device that can access the Service such as a computer, a cellphone or a digital tablet. * **Service** refers to the Website. * **Terms and Conditions** (also referred as ""Terms"") mean these Terms and Conditions that form the entire agreement between You and the Company regarding the use of the Service. This Terms and Conditions agreement has been created with the help of the Terms and Conditions Generator. * **Third-party Social Media Service** means any services or content (including data, information, products or services) provided by a third-party that may be displayed, included or made available by the Service. * **Website** refers to Qdrant, accessible from https://qdrant.tech * **You** means the individual accessing or using the Service, or the company, or other legal entity on behalf of which such individual is accessing or using the Service, as applicable. ### Acknowledgment These are the Terms and Conditions governing the use of this Service and the agreement that operates between You and the Company. These Terms and Conditions set out the rights and obligations of all users regarding the use of the Service. Your access to and use of the Service is conditioned on Your acceptance of and compliance with these Terms and Conditions. These Terms and Conditions apply to all visitors, users and others who access or use the Service. By accessing or using the Service You agree to be bound by these Terms and Conditions. If You disagree with any part of these Terms and Conditions then You may not access the Service. You represent that you are over the age of 18. The Company does not permit those under 18 to use the Service. Your access to and use of the Service is also conditioned on Your acceptance of and compliance with the Privacy Policy of the Company. Our Privacy Policy describes Our policies and procedures on the collection, use and disclosure of Your personal information when You use the Application or the Website and tells You about Your privacy rights and how the law protects You. Please read Our Privacy Policy carefully before using Our Service. ### Links to Other Websites Our Service may contain links to third-party web sites or services that are not owned or controlled by the Company. The Company has no control over, and assumes no responsibility for, the content, privacy policies, or practices of any third party web sites or services. You further acknowledge and agree that the Company shall not be responsible or liable, directly or indirectly, for any damage or loss caused or alleged to be caused by or in connection with the use of or reliance on any such content, goods or services available on or through any such web sites or services. We strongly advise You to read the terms and conditions and privacy policies of any third-party web sites or services that You visit. ### Termination We may terminate or suspend Your access immediately, without prior notice or liability, for any reason whatsoever, including without limitation if You breach these Terms and Conditions. Upon termination, Your right to use the Service will cease immediately. ### Limitation of Liability Notwithstanding any damages that You might incur, the entire liability of the Company and any of its suppliers under any provision of this Terms and Your exclusive remedy for all of the foregoing shall be limited to the amount actually paid by You through the Service or 100 USD if You haven't purchased anything through the Service. To the maximum extent permitted by applicable law, in no event shall the Company or its suppliers be liable for any special, incidental, indirect, or consequential damages whatsoever (including, but not limited to, damages for loss of profits, loss of data or other information, for business interruption, for personal injury, loss of privacy arising out of or in any way related to the use of or inability to use the Service, third-party software and/or third-party hardware used with the Service, or otherwise in connection with any provision of this Terms), even if the Company or any supplier has been advised of the possibility of such damages and even if the remedy fails of its essential purpose. Some states do not allow the exclusion of implied warranties or limitation of liability for incidental or consequential damages, which means that some of the above limitations may not apply. In these states, each party's liability will be limited to the greatest extent permitted by law. ### ""AS IS"" and ""AS AVAILABLE"" Disclaimer The Service is provided to You ""AS IS"" and ""AS AVAILABLE"" and with all faults and defects without warranty of any kind. To the maximum extent permitted under applicable law, the Company, on its own behalf and on behalf of its Affiliates and its and their respective licensors and service providers, expressly disclaims all warranties, whether express, implied, statutory or otherwise, with respect to the Service, including all implied warranties of merchantability, fitness for a particular purpose, title and non-infringement, and warranties that may arise out of course of dealing, course of performance, usage or trade practice. Without limitation to the foregoing, the Company provides no warranty or undertaking, and makes no representation of any kind that the Service will meet Your requirements, achieve any intended results, be compatible or work with any other software, applications, systems or services, operate without interruption, meet any performance or reliability standards or be error free or that any errors or defects can or will be corrected. Without limiting the foregoing, neither the Company nor any of the company's provider makes any representation or warranty of any kind, express or implied: (i) as to the operation or availability of the Service, or the information, content, and materials or products included thereon; (ii) that the Service will be uninterrupted or error-free; (iii) as to the accuracy, reliability, or currency of any information or content provided through the Service; or (iv) that the Service, its servers, the content, or e-mails sent from or on behalf of the Company are free of viruses, scripts, trojan horses, worms, malware, timebombs or other harmful components. Some jurisdictions do not allow the exclusion of certain types of warranties or limitations on applicable statutory rights of a consumer, so some or all of the above exclusions and limitations may not apply to You. But in such a case the exclusions and limitations set forth in this section shall be applied to the greatest extent enforceable under applicable law. ### Governing Law The laws of the Country, excluding its conflicts of law rules, shall govern this Terms and Your use of the Service. Your use of the Application may also be subject to other local, state, national, or international laws. ### Disputes Resolution If You have any concern or dispute about the Service, You agree to first try to resolve the dispute informally by contacting the Company. ### For European Union (EU) Users If You are a European Union consumer, you will benefit from any mandatory provisions of the law of the country in which you are resident in. ### United States Legal Compliance You represent and warrant that (i) You are not located in a country that is subject to the United States government embargo, or that has been designated by the United States government as a ""terrorist supporting"" country, and (ii) You are not listed on any United States government list of prohibited or restricted parties. ### Severability and Waiver #### Severability If any provision of these Terms is held to be unenforceable or invalid, such provision will be changed and interpreted to accomplish the objectives of such provision to the greatest extent possible under applicable law and the remaining provisions will continue in full force and effect. #### Waiver Except as provided herein, the failure to exercise a right or to require performance of an obligation under this Terms shall not effect a party's ability to exercise such right or require such performance at any time thereafter nor shall the waiver of a breach constitute a waiver of any subsequent breach. Translation Interpretation These Terms and Conditions may have been translated if We have made them available to You on our Service. You agree that the original English text shall prevail in the case of a dispute. ### Changes to These Terms and Conditions We reserve the right, at Our sole discretion, to modify or replace these Terms at any time. If a revision is material We will make reasonable efforts to provide at least 30 days' notice prior to any new terms taking effect. What constitutes a material change will be determined at Our sole discretion. By continuing to access or use Our Service after those revisions become effective, You agree to be bound by the revised terms. If You do not agree to the new terms, in whole or in part, please stop using the website and the Service. ### Contact Us If you have any questions about these Terms and Conditions, You can contact us: By email: info@qdrant.com",legal/terms_and_conditions.md "--- title: Impressum --- # Impressum Angaben gemäß § 5 TMG Qdrant Solutions GmbH Chausseestraße 86 10115 Berlin #### Vertreten durch: André Zayarni #### Kontakt: Telefon: +49 30 120 201 01 E-Mail: info@qdrant.com #### Registereintrag: Eintragung im Registergericht: Berlin Charlottenburg Registernummer: HRB 235335 B #### Umsatzsteuer-ID: Umsatzsteuer-Identifikationsnummer gemäß §27a Umsatzsteuergesetz: DE347779324 ### Verantwortlich für den Inhalt nach § 55 Abs. 2 RStV: André Zayarni Chausseestraße 86 10115 Berlin ## Haftungsausschluss: ### Haftung für Inhalte Die Inhalte unserer Seiten wurden mit größter Sorgfalt erstellt. Für die Richtigkeit, Vollständigkeit und Aktualität der Inhalte können wir jedoch keine Gewähr übernehmen. Als Diensteanbieter sind wir gemäß § 7 Abs.1 TMG für eigene Inhalte auf diesen Seiten nach den allgemeinen Gesetzen verantwortlich. Nach §§ 8 bis 10 TMG sind wir als Diensteanbieter jedoch nicht verpflichtet, übermittelte oder gespeicherte fremde Informationen zu überwachen oder nach Umständen zu forschen, die auf eine rechtswidrige Tätigkeit hinweisen. Verpflichtungen zur Entfernung oder Sperrung der Nutzung von Informationen nach den allgemeinen Gesetzen bleiben hiervon unberührt. Eine diesbezügliche Haftung ist jedoch erst ab dem Zeitpunkt der Kenntnis einer konkreten Rechtsverletzung möglich. Bei Bekanntwerden von entsprechenden Rechtsverletzungen werden wir diese Inhalte umgehend entfernen. ### Haftung für Links Unser Angebot enthält Links zu externen Webseiten Dritter, auf deren Inhalte wir keinen Einfluss haben. Deshalb können wir für diese fremden Inhalte auch keine Gewähr übernehmen. Für die Inhalte der verlinkten Seiten ist stets der jeweilige Anbieter oder Betreiber der Seiten verantwortlich. Die verlinkten Seiten wurden zum Zeitpunkt der Verlinkung auf mögliche Rechtsverstöße überprüft. Rechtswidrige Inhalte waren zum Zeitpunkt der Verlinkung nicht erkennbar. Eine permanente inhaltliche Kontrolle der verlinkten Seiten ist jedoch ohne konkrete Anhaltspunkte einer Rechtsverletzung nicht zumutbar. Bei Bekanntwerden von Rechtsverletzungen werden wir derartige Links umgehend entfernen. ### Datenschutz Die Nutzung unserer Webseite ist in der Regel ohne Angabe personenbezogener Daten möglich. Soweit auf unseren Seiten personenbezogene Daten (beispielsweise Name, Anschrift oder eMail-Adressen) erhoben werden, erfolgt dies, soweit möglich, stets auf freiwilliger Basis. Diese Daten werden ohne Ihre ausdrückliche Zustimmung nicht an Dritte weitergegeben. Wir weisen darauf hin, dass die Datenübertragung im Internet (z.B. bei der Kommunikation per E-Mail) Sicherheitslücken aufweisen kann. Ein lückenloser Schutz der Daten vor dem Zugriff durch Dritte ist nicht möglich. Der Nutzung von im Rahmen der Impressumspflicht veröffentlichten Kontaktdaten durch Dritte zur Übersendung von nicht ausdrücklich angeforderter Werbung und Informationsmaterialien wird hiermit ausdrücklich widersprochen. Die Betreiber der Seiten behalten sich ausdrücklich rechtliche Schritte im Falle der unverlangten Zusendung von Werbeinformationen, etwa durch Spam-Mails, vor. ### Google Analytics Diese Website benutzt Google Analytics, einen Webanalysedienst der Google Inc. (''Google''). Google Analytics verwendet sog. ''Cookies'', Textdateien, die auf Ihrem Computer gespeichert werden und die eine Analyse der Benutzung der Website durch Sie ermöglicht. Die durch den Cookie erzeugten Informationen über Ihre Benutzung dieser Website (einschließlich Ihrer IP-Adresse) wird an einen Server von Google in den USA übertragen und dort gespeichert. Google wird diese Informationen benutzen, um Ihre Nutzung der Website auszuwerten, um Reports über die Websiteaktivitäten für die Websitebetreiber zusammenzustellen und um weitere mit der Websitenutzung und der Internetnutzung verbundene Dienstleistungen zu erbringen. Auch wird Google diese Informationen gegebenenfalls an Dritte übertragen, sofern dies gesetzlich vorgeschrieben oder soweit Dritte diese Daten im Auftrag von Google verarbeiten. Google wird in keinem Fall Ihre IP-Adresse mit anderen Daten der Google in Verbindung bringen. Sie können die Installation der Cookies durch eine entsprechende Einstellung Ihrer Browser Software verhindern; wir weisen Sie jedoch darauf hin, dass Sie in diesem Fall gegebenenfalls nicht sämtliche Funktionen dieser Website voll umfänglich nutzen können. Durch die Nutzung dieser Website erklären Sie sich mit der Bearbeitung der über Sie erhobenen Daten durch Google in der zuvor beschriebenen Art und Weise und zu dem zuvor benannten Zweck einverstanden. ",legal/impressum.md "--- title: Privacy Policy --- # Privacy Policy At qdrant.tech, accessible from qdrant.tech, qdrant.co, qdrant.com, qdrant.io, one of our main priorities is the privacy of our visitors. This Privacy Policy document contains types of information that is collected and recorded by qdrant.tech and how we use it. If you have additional questions or require more information about our Privacy Policy, do not hesitate to contact us. Our Privacy Policy was generated with the help of GDPR Privacy Policy Generator from GDPRPrivacyNotice.com ## General Data Protection Regulation (GDPR) We are a Data Controller of your information. Qdrant legal basis for collecting and using the personal information described in this Privacy Policy depends on the Personal Information we collect and the specific context in which we collect the information: * Qdrant needs to perform a contract with you * You have given Qdrant permission to do so * Processing your personal information is in Qdrant legitimate interests * Qdrant needs to comply with the law Qdrant will retain your personal information only for as long as is necessary for the purposes set out in this Privacy Policy. We will retain and use your information to the extent necessary to comply with our legal obligations, resolve disputes, and enforce our policies. If you are a resident of the European Economic Area (EEA), you have certain data protection rights. If you wish to be informed what Personal Information we hold about you and if you want it to be removed from our systems, please contact us. In certain circumstances, you have the following data protection rights: * The right to access, update or to delete the information we have on you. * The right of rectification. * The right to object. * The right of restriction. * The right to data portability * The right to withdraw consent ## Log Files qdrant.tech follows a standard procedure of using log files. These files log visitors when they visit websites. All hosting companies do this and a part of hosting services' analytics. The information collected by log files include internet protocol (IP) addresses, browser type, Internet Service Provider (ISP), date and time stamp, referring/exit pages, and possibly the number of clicks. These are not linked to any information that is personally identifiable. The purpose of the information is for analyzing trends, administering the site, tracking users' movement on the website, and gathering demographic information. ## Cookies and Web Beacons Like any other website, qdrant.tech uses 'cookies'. These cookies are used to store information including visitors' preferences, and the pages on the website that the visitor accessed or visited. The information is used to optimize the users' experience by customizing our web page content based on visitors' browser type and/or other information. For more general information on cookies, please read ""What Are Cookies"". ## Privacy Policies You may consult this list to find the Privacy Policy for each of the advertising partners of qdrant.tech. Third-party ad servers or ad networks uses technologies like cookies, JavaScript, or Web Beacons that are used in their respective advertisements and links that appear on qdrant.tech, which are sent directly to users' browser. They automatically receive your IP address when this occurs. These technologies are used to measure the effectiveness of their advertising campaigns and/or to personalize the advertising content that you see on websites that you visit. Note that qdrant.tech has no access to or control over these cookies that are used by third-party advertisers. ## Third Party Privacy Policies qdrant.tech's Privacy Policy does not apply to other advertisers or websites. Thus, we are advising you to consult the respective Privacy Policies of these third-party ad servers for more detailed information. It may include their practices and instructions about how to opt-out of certain options. You can choose to disable cookies through your individual browser options. To know more detailed information about cookie management with specific web browsers, it can be found at the browsers' respective websites. ## Children's Information Another part of our priority is adding protection for children while using the internet. We encourage parents and guardians to observe, participate in, and/or monitor and guide their online activity. qdrant.tech does not knowingly collect any Personal Identifiable Information from children under the age of 13. If you think that your child provided this kind of information on our website, we strongly encourage you to contact us immediately and we will do our best efforts to promptly remove such information from our records. ## Online Privacy Policy Only Our Privacy Policy applies only to our online activities and is valid for visitors to our website with regards to the information that they shared and/or collect in qdrant.tech. This policy is not applicable to any information collected offline or via channels other than this website. ## Consent By using our website, you hereby consent to our Privacy Policy and agree to its terms.",legal/privacy-policy.md "--- title: Credits section_title: Credits to materials used on our site --- Icons made by [srip](https://www.flaticon.com/authors/srip) from [flaticon.com](https://www.flaticon.com/) Email Marketing Vector created by [storyset](https://de.freepik.com/vektoren/geschaeft) from [freepik.com](https://www.freepik.com/) ",legal/credits.md "--- title: Qdrant Cloud Terms and Conditions --- **These terms apply to any of our Cloud plans.** Qdrant Cloud (or “Solution”) is developed by Qdrant Solutions GmbH, registered with the trade and companies register of Berlin Charlottenburg under number HRB 235335 B (the “Company” or “Qdrant”). Qdrant Cloud is the hosted and managed version of the Qdrant engine, our open-source solution. It is accessible as a Software as a Service (“SaaS”) through the following link [https://cloud.qdrant.io](https://cloud.qdrant.io) By using the Qdrant Cloud, you agree to comply with the following general terms and conditions of use and sale (the “T&Cs”), which form a binding contract between you and the COmpany, giving you access to both the Solution and its website (the “Website”). To access the Solution and the Website, you must first accept our T&Cs and Privacy Policy, accessible and printable at any time using the links accessible from the bottom of the Website’s homepage. ### 1. Prerequisites You certify that you hold all the rights and authority necessary to agree to the T&Cs in the name of the legal person you represent, if applicable. ### 2. Description of the Solution Qdrant is a vector database. It deploys as an API service providing a search for the nearest high-dimensional vectors. With Qdrant, embeddings or neural network encoders can be turned into full-fledged applications for matching, searching, recommending, and much more! Qdrant’s guidelines and description of the Solution are detailed in its documentation (the “Documentation”) made available to you and updated regularly. You may subscribe for specific maintenance and support services. The description and prices are disclosed on demand. You can contact us for any questions or inquiries you may have at the following address: contact@qdrant.com. ### 3. Set up and installation To install the Solution, you first need to create an account on the Website. You must fill in all the information marked as mandatory, such as your name, surname, email address, or to provide access to the required data by using a Single-Sign-On provider. You guarantee that all the information you provide is correct, up-to-date, sincere, and not deceptive in any way. You undertake to update this information in your personal space in the event of modification so that it corresponds at all times to the above criteria and is consistent with reality. Once your account is created, we will email you to finalize your subscription. You are solely and entirely responsible for using your username and password to access your account and undertake to do everything to keep this information secret and not to disclose it in whatever form and for whatever reason. You do not have a right of withdrawal regarding the subscription to the Solution as soon as its performance has begun before the expiry of a fourteen (14) day cooling off period. ### 4. License – Intellectual property Qdrant grants you, for the duration of the use of the Solution, a non-exclusive, non-transferable, and strictly personal right to use the Solution in accordance with the T&Cs and the Documentation, and under the conditions and within limits set out below (“the License”). Qdrant holds all intellectual and industrial property rights relating to the Solution and the Documentation. None of them is transferred to you through the use of the Solution. In particular, the systems, structures, databases, logos, brands, and contents of any nature (text, images, visuals, music, logos, brands, databases, etc.) operated by Qdrant within the Solution and/or the Website are protected by all current intellectual property rights or rights of database producers – to the exclusion of the Content as defined in Article 8. In particular, you agree not to: * translate, adapt, arrange or modify the Solution, export it or merge it with other software; * decompile or reverse engineer the Solution; * copy, reproduce, represent or use the Solution for purposes not expressly provided for in the present T&Cs; * use the Solution for purposes of comparative analysis or development of a competing product. * You may not transfer the License in any way whatsoever without the prior written consent of Qdrant. In the event of termination of this License, for whatever reason, you shall immediately cease to use the Solution and the Documentation. This right of use is subject to your payment of the total amount of the usage fees due under the Licence. This License does not confer any exclusivity of any kind. Qdrant remains free to grant Licenses to third parties of its choice. You acknowledge having been informed by Qdrant of all the technical requirements necessary to access and use the Solution. You are also informed that these requirements may change, particularly for technical reasons. In case of any change, you will be informed in advance. You accept these conditions and agree not to use the Solution or its content for purposes other than its original function, particularly for comparative analysis or development of competing software. ### 5. Financial terms The prices applicable at the date of subscription to the Solution are accessible through the following [link](/pricing/). Unless otherwise stated, prices are in dollars and exclusive of any applicable taxes. The prices of the Solution may be revised at any time. You will be informed of these modifications by e-mail. ### 6. Payment conditions You must pay the agreed price monthly. Payment is made through Stripe, a secure payment service provider which alone keeps your bank details for this purpose. You can access its own terms and conditions at the following address: https://stripe.com/fr/legal. You (i) guarantee that you have the necessary authorizations to use this payment method and (ii) undertake to take the necessary measures to ensure that the automatic debiting of the price can be made. You are informed and expressly accept that any payment delay on all or part of the price at a due date automatically induces, without prejudice to the provisions of Article 10 and prior formal notification: * the forfeiting of the term of all the sums due by you which become due immediately; * the immediate suspension of the access to the Solution until full payment of all the sums due; * the invoicing to the benefit of Qdrant of a flat-rate penalty of 5% of the amounts due if the entire sum has not been paid within thirty (30) days after sending a non-payment formal notice; * interest for late payment calculated at the monthly rate of 5% calculated on the basis of a 365-day year. ### 7. Compliant and loyal use of the Solution You undertake, when using the Solution, to comply with the laws and regulations in force and not to infringe third-party rights or public order. You are solely responsible for correctly accomplishing all the administrative, fiscal and social security formalities and all payments of contributions, taxes, or duties of any kind, where applicable, in relation to your use of the Solution. You are informed and accept that the implementation of the Solution requires you to be connected to the Internet and that the quality of the Solution depends directly on this connection, for which you alone are responsible. You undertake to provide us with all the information necessary for the correct performance of the Solution. The following are also strictly prohibited: any behavior that may interrupt, suspend, slow down or prevent the continuity of the Solution, any intrusion or attempts at the intrusion into the Solution, any unauthorized use of the Solution's system resources, any actions likely to place a disproportionate load of the latter, any infringement on the security and authentication measures, any acts likely to infringe on the financial, commercial or moral rights of Qdrant or the users of the Solution, lastly and more generally, any failure in respect of these T&Cs. It is strictly prohibited to make financial gain from, sell or transfer all or part of the access to the Solution and to the information and data which is hosted and/or shared therein. ### 8. Content You alone are responsible for the Content you upload through the Solution. Your Content remains, under all circumstances, your full and exclusive property. It may not be reproduced and/or otherwise used by Qdrant for any purpose other than the strict supply of the Solution. You grant, as necessary, to Qdrant and its subcontractors a non-exclusive, worldwide, free and transferable license to host, cache, copy, display, reproduce and distribute the Content for the sole purpose of performing the contract and exclusively in association with or in connection with the Solution. This license shall automatically terminate upon termination of our contractual relationship unless it is necessary to continue hosting and processing the Content, in particular in the context of implementing reversibility operations and/or in order to defend against any liability claims and/or to comply with rules imposed by laws and regulations. You guarantee Qdrant that you have all the rights and authorizations necessary to use and publicize such Content and that you can grant Qdrant and its subcontractors a license under these terms. You undertake to publish only legal content that does not infringe on public order, good morals, third-party’s rights, legislative or regulatory provisions, and, more generally, is in no way likely to jeopardize Qdrant's civil or criminal liability. You further declare and guarantee that by creating, installing, downloading or transmitting the Content through the Solution, you do not infringe third parties’ rights. You acknowledge and accept that Qdrant cannot be held responsible for the Content. ### 9. Accessibility of the Solution Qdrant undertakes to supply the Solution with diligence, and according to best practice, it is specified that it has an obligation of means to the exclusion of any obligation of result, which you expressly acknowledge and accept. Qdrant will do its best to ensure that the Solution is accessible at all times, with the exception of cases of unavailability or maintenance. You acknowledge that you are informed that the unavailability of the Solution may be the result of (a) a maintenance operation, (b) an urgent operation relating in particular to security, (c) a case of “force majeure” or (d) the malfunctioning of computer applications of Qdrant's third-party partners. Qdrant undertakes to restore the availability of the Solution as soon as possible once the problem causing the unavailability has been resolved. Qdrant undertakes, in particular, to carry out regular checks to verify the operation and accessibility of the Solution. In this regard, Qdrant reserves the right to interrupt access to the Solution momentarily for reasons of maintenance. Similarly, Qdrant may not be held responsible for momentary difficulties or impossibilities in accessing the Solution and/or Website, the origin of which is external to it, “force majeure”, or which are due to disruptions in the telecommunications network. Qdrant does not guarantee that the Solution, subject to a constant search to improve their performance, will be totally free from errors, defects, or faults. Qdrant will make its best effort to resolve any technical issue you may have in due diligence. Qdrant is not bound by maintenance services in the following cases: * your use of the Solution in a manner that does not comply with its purpose or its Documentation; * unauthorized access to the Solution by a third-party caused by you, including through your negligence; * your failure to fulfill your obligations under the T&Cs; * implementation of any software package, software or operating system not compatible with the Solution; * failure of the electronic communication networks which is not the fault of Qdrant; * your refusal to collaborate with Qdrant in the resolution of the anomalies and in particular to answer questions and requests for information; * voluntary act of degradation, malice, sabotage; * deterioration due to a case of “force majeure”. You will benefit from the updates, and functional evolutions of the Solution decided by Qdrant and accept them from now on. You cannot claim any indemnity or hold Qdrant responsible for any of the reasons mentioned above. ### 10. Violations – Sanctions In the event of a violation of any provision of these T&Cs or, more generally, in the event of any violation of any laws and regulations of your making, Qdrant reserves the right to take any appropriate measures, including but not limited to: * suspending access to the Solution; * terminating the contractual relationship with you; * deleting any of your Content; * informing any authority concerned; * initiating legal action. ### 11. Personal data In the context of the use of the Solution and the Website, Qdrant may collect and process certain personal data, including your name, surname, email address, banking information, address, telephone number, IP address, connection, and navigation data and data recorded in cookies (the “Data”). Qdrant ensures that the Data is collected and processed in compliance with the provisions of German law and in accordance with its Privacy Policy, available at the following [link](/legal/privacy-policy/). The Privacy Policy is an integral part of the T&Cs. You and your end-users are invited to consult the Privacy Policy for a more detailed explanation of the conditions of the collection and processing of the Data. In particular, Qdrant undertakes to use only server hosting providers, in case they are located outside the European Union, who present sufficient guarantees as to the implementation of the technical and organizational measures necessary to carry out the processing of your end-users’ Data in compliance with the Data Protection Laws. Under the provisions of the Data Protection Laws, your end-users have the right to access, rectify, delete, limit or oppose the processing of the Data, the right to define guidelines for the storage, deletion, and communication of the Data after his death and the right to the portability of the Data. Your end-users can exercise these rights by e-mail to the following address: privacy@qdrant.com, or by post at the address indicated at the beginning of these T&Cs. Qdrant undertakes to guarantee the existence of adequate levels of protection under the applicable legal and regulatory requirements. However, as no mechanism offers absolute security, a degree of risk remains when the Internet is used to transmit Data. Qdrant will notify the relevant authority and/or the person concerned of any possible violations of Data under the conditions provided by the Data Protection Laws. #### Qdrant GDPR Data Processing Agreement We may enter into a GDPR Data Processing Agreement with certain Enterprise clients, depending on the nature of the installation, how data is being processed, and where it is stored. ### 12. Third parties Qdrant may under no circumstances be held responsible for the technical availability of the websites operated by third parties, which you would access via the Solution or the Website. Qdrant bears no responsibility concerning the content, advertising, products, and/or services available on such websites; a reminder is given that these are governed by their own conditions of use. ### 13. Duration The Solution is subscribed for an indefinite duration and is payable monthly. You may unsubscribe from the Solution at any time directly through the Solution or by writing to the following address: contact@Qdrant.com. There will be no reimbursement of the sum paid in advance. ### 14. Representation and warranties The Solution and Website are provided on an “as is” basis, and Qdrant makes no other warranties, express or implied, and specifically disclaims any warranty of merchantability and fitness for a particular purpose as to the Solution provided under the T&Cs. In addition, Qdrant does not warrant that the Solution and Website will be uninterrupted or error-free. Other than as expressly set out in these terms, Qdrant does not make any commitments about the Solution and Website’s availability or ability to meet your expectations. ### 15. Liability In no event shall Qdrant be liable for: * any indirect damages of any kind, including any potential loss of business; * any damage or loss which is not caused by a breach of its obligations under the T&Cs; * disruptions or damage inherent in an electronic communications network; * an impediment or limitation in the performance of the T&Cs or any obligation incumbent on Qdrant hereunder due to “force majeure”; * the Content; * contamination by viruses or other harmful elements of the Solution, or malicious intrusion by third-parties into the system or piracy of the Solution; * and, more generally, your own making. Qdrant’s liability for any claim, loss, damage, or expense resulting directly from any negligence or omission in the performance of the Solution shall be limited for all claims, losses, damages or expenses and all causes combined to the amount paid by you during the last twelve (12) months preceding the claim. Any other liability of Qdrant shall be excluded. Moreover, Qdrant shall not be liable if the alleged fault results from the incorrect application of the recommendations and advice given in the course of the Solution and/or by the Documentation. ### 16. Complaint For any complaint related to the use of the Solution and/or the Website, you may contact Qdrant at the following address: contact@qdrant.com. Any claim against Qdrant must be made within thirty (30) days following the occurrence of the event that is the subject of the claim. Failing this, you may not claim any damages or compensation for the alleged breach. Qdrant undertakes to do its best to respond to the complaints transmitted within a reasonable period in view of their nature and complexity. ### 17. Modification of the T&Cs Qdrant reserves the right to adapt or modify the T&Cs at any time by publishing an updated version on the Solution and the Website. Qdrant shall inform you of such modification no later than fifteen (15) days before the entry into force of the new version of the T&Cs. Any modification of the T&Cs made necessary by a change in the applicable law or regulations, a court decision or the modification of the functionalities of the Solution and/or the Website shall come into force immediately. The version of the T&Cs applicable is the one in force at the date of use of the Solution and/or the Website. If you do not accept the amended T&Cs, you must unregister from the Solution according to the conditions laid down under Article 13 within the fifteen (15) days period mentioned above. ### 18. Language Should there be a translation of these T&Cs in one or more languages, the language of interpretation shall be German in the event of contradiction or dispute as to the meaning of a term or a provision. ### 19. Place of Performance; Governing Law; Jurisdiction Unless (a) explicitly agreed to the contrary between the Parties, or (b) where the nature of specific Services so requires (such as Services rendered on-site at Customer’s facilities), the place of performance for all Services is Qdrant’s seat of business. These T&Cs will be governed by German law without regard to the choice or conflicts of law provisions of any jurisdiction and with the exception of the United Nations Convention on the International Sale of Goods (CISG). Any references to the application of statutory provisions shall be for clarification purposes only. Even without such clarification, statutory provisions shall apply unless they are modified or expressly excluded in the T&Cs. You agree that all disputes resulting from these T&Cs shall be subject to the exclusive jurisdictions of the courts in Berlin, Germany. ### 20. Coming into force The T&Cs entered into force on 01 December 2022. ",legal/terms_cloud.md "--- title: Legal sitemapExclude: True _build: render: never cascade: - build: render: always --- ",legal/_index.md "--- title: Subscribe section_title: Subscribe subtitle: Subscribe description: Subscribe ---",subscribe-confirmation/_index.md "--- cards: - id: 0 popular: true title: Qdrant Cloud price: Starting at $0 description: Starts with 1GB free cluster, no credit card required. button: text: Start Free url: https://cloud.qdrant.io contained: true featureDescription: Scale your production solutions without deployment and upkeep. featureLink: text: Calculate your usage. url: https://cloud.qdrant.io/calculator features: - id: 0 content: 1GB free forever cluster. No credit card required. - id: 1 content: Fully managed with central cluster management - id: 2 content: Multiple cloud providers and regions (AWS, GCP, Azure) - id: 3 content: Horizontal & vertical scaling - id: 4 content: Central monitoring, log management and alerting - id: 5 content: High availability, auto-healing - id: 6 content: Backup & disaster recovery - id: 7 content: Zero-downtime upgrades - id: 8 content: Unlimited users - id: 9 content: Standard support plan # - id: 10 # content: Can be upgraded to premium support plan - id: 1 popular: false title: Hybrid Cloud price: $0.014 description: Starting price per hour. button: text: Get Started url: https://cloud.qdrant.io contained: true featureDescription: Bring your own cluster from any cloud provider, on-premise infrastructure, or edge locations and connect them to the managed cloud. features: - id: 0 content: All the benefits of Qdrant Cloud - id: 1 content: Security, data isolation, optimal latency - id: 2 content: Use the Managed Cloud Central Cluster Management - id: 3 content: Standard support plan - id: 4 content: Can be upgraded to premium support plan # minPrice: *Min $99 / month - id: 2 popular: false title: Private Cloud price: Custom description: Price on request. button: text: Contact Sales url: /contact-sales/ contained: false featureDescription: Deploy Qdrant fully on premise for maximum control and data sovereignty. features: - id: 0 content: All the benefits of Hybrid Cloud - id: 1 content: Security, data isolation, optimal latency - id: 2 content: Use the Managed Cloud Central Cluster Management or run the Central Cluster Management Interface in your own infrastructure, in the cloud, on-premise at the edge, even fully air-gapped - id: 3 content: Premium Support Plan sitemapExclude: true --- ",pricing/qdrant-pricing-doors-b.md "--- titleFirstPart: Not sure which plan is right for you? titleSecondPart: Check out our pricing calculator. link: url: https://cloud.qdrant.io/calculator text: Pricing Calculator sitemapExclude: true --- ",pricing/qdrant-pricing-calculator.md "--- title: Qdrant Pricing subtitle: Cloud & Enterprise solutions description: Choose the deployment option for your application and explore our transparent pricing plans. sitemapExclude: true --- ",pricing/qdrant-pricing-hero.md "--- cards: - id: 0 popular: true title: Qdrant Cloud price: $0 description: Starts with 1GB free cluster, no credit card required. button: text: Start Free url: https://cloud.qdrant.io/ contained: true featureDescription: Scale your production solutions without deployment and upkeep. featureLink: text: Calculate your usage. url: https://cloud.qdrant.io/calculator features: - id: 0 content: 1GB free forever cluster. No credit card required. - id: 1 content: Fully managed with central cluster management - id: 2 content: Multiple cloud providers and regions (AWS, GCP, Azure) - id: 3 content: Horizontal & vertical scaling - id: 4 content: Central monitoring, log management and alerting - id: 5 content: High availability, auto-healing - id: 6 content: Backup & disaster recovery - id: 7 content: Zero-downtime upgrades - id: 8 content: Unlimited users - id: 9 content: Standard support plan - id: 1 popular: false title: Private Cloud price: Custom description: Price on request. button: text: Contact Sales url: /contact-us/ contained: false featureDescription: Deploy Qdrant fully on premise for maximum control and data sovereignty. features: - id: 0 content: All the benefits of Hybrid Cloud - id: 1 content: Connect your own enterprise authentication - id: 2 content: Use the Managed Cloud Central Cluster Management or run the Central Cluster Management Interface in your own infrastructure, in the cloud, on-premise at the edge, even fully air-gapped - id: 3 content: Premium Support Plan sitemapExclude: true --- ",pricing/qdrant-pricing-doors-a.md "--- title: ""Pricing for Cloud and Vector Database Solutions Qdrant"" description: ""Explore Qdrant Cloud and Enterprise solutions for your vector search applications. Choose the right deployment option and explore transparent pricing plans."" url: pricing build: render: always cascade: - build: list: local publishResources: false render: never --- ",pricing/_index.md