--- title: Learn how to get started with Qdrant for your search use case features: - id: 0 image: src: /img/advanced-search-use-cases/startup-semantic-search.svg alt: Startup Semantic Search title: Startup Semantic Search Demo description: The demo showcases semantic search for startup descriptions through SentenceTransformer and Qdrant, comparing neural search's accuracy with traditional searches for better content discovery. link: text: View Demo url: https://demo.qdrant.tech/ - id: 1 image: src: /img/advanced-search-use-cases/multimodal-semantic-search.svg alt: Multimodal Semantic Search title: Multimodal Semantic Search with Aleph Alpha description: This tutorial shows you how to run a proper multimodal semantic search system with a few lines of code, without the need to annotate the data or train your networks. link: text: View Tutorial url: /documentation/examples/aleph-alpha-search/ - id: 2 image: src: /img/advanced-search-use-cases/simple-neural-search.svg alt: Simple Neural Search title: Create a Simple Neural Search Service description: This tutorial shows you how to build and deploy your own neural search service. link: text: View Tutorial url: /documentation/tutorials/neural-search/ - id: 3 image: src: /img/advanced-search-use-cases/image-classification.svg alt: Image Classification title: Image Classification with Qdrant Vector Semantic Search description: In this tutorial, you will learn how a semantic search engine for images can help diagnose different types of skin conditions. link: text: View Tutorial url: https://www.youtube.com/watch?v=sNFmN16AM1o - id: 4 image: src: /img/advanced-search-use-cases/semantic-search-101.svg alt: Semantic Search 101 title: Semantic Search 101 description: Build a semantic search engine for science fiction books in 5 mins. link: text: View Tutorial url: /documentation/tutorials/search-beginners/ - id: 5 image: src: /img/advanced-search-use-cases/hybrid-search-service-fastembed.svg alt: Create a Hybrid Search Service with Fastembed title: Create a Hybrid Search Service with Fastembed description: This tutorial guides you through building and deploying your own hybrid search service using Fastembed. link: text: View Tutorial url: /documentation/tutorials/hybrid-search-fastembed/ sitemapExclude: true ---
advanced-search/advanced-search-use-cases.md
--- title: Search with Qdrant description: Qdrant enhances search, offering semantic, similarity, multimodal, and hybrid search capabilities for accurate, user-centric results, serving applications across industries from e-commerce to healthcare. features: - id: 0 icon: src: /icons/outline/similarity-blue.svg alt: Similarity title: Semantic Search description: Qdrant optimizes similarity search, identifying the closest database items to any query vector for applications like recommendation systems, RAG, and image retrieval, enhancing accuracy and user experience. link: text: Learn More url: /documentation/concepts/search/ - id: 1 icon: src: /icons/outline/search-text-blue.svg alt: Search text title: Hybrid Search for Text description: By combining dense vector embeddings with sparse vectors (e.g. BM25), Qdrant powers semantic search to deliver context-aware results, transcending traditional keyword search by understanding the deeper meaning of data. link: text: Learn More url: /documentation/tutorials/hybrid-search-fastembed/ - id: 2 icon: src: /icons/outline/selection-blue.svg alt: Selection title: Multimodal Search description: Qdrant's capability extends to multimodal search, indexing and retrieving various data forms (text, images, audio) once vectorized, facilitating a comprehensive search experience. link: text: View Tutorial url: /documentation/tutorials/aleph-alpha-search/ - id: 3 icon: src: /icons/outline/filter-blue.svg alt: Filter title: Single-Stage Filtering that Works description: Qdrant enhances search speed, control, and context understanding through filtering on any nested entry in the payload. Its unique architecture allows Qdrant to avoid expensive pre-filtering and post-filtering stages, making search both faster and more accurate. link: text: Learn More url: /articles/filtrable-hnsw/ sitemapExclude: true ---
advanced-search/advanced-search-features.md
--- title: "Advanced Search Solutions: High-Performance Vector Search" description: Explore how Qdrant's advanced search solutions enhance accuracy and user interaction depth across various industries, from e-commerce to healthcare. build: render: always cascade: - build: list: local publishResources: false render: never ---
advanced-search/_index.md
--- title: Advanced Search description: Dive into next-gen search capabilities with Qdrant, offering a smarter way to deliver precise and tailored content to users, enhancing interaction accuracy and depth. startFree: text: Get Started url: https://cloud.qdrant.io/ learnMore: text: Contact Us url: /contact-us/ image: src: /img/vectors/vector-0.svg alt: Advanced search sitemapExclude: true ---
advanced-search/advanced-search-hero.md
--- title: Qdrant Enterprise Solutions items: - id: 0 image: src: /img/enterprise-solutions-use-cases/managed-cloud.svg alt: Managed Cloud title: Managed Cloud description: Qdrant Cloud provides optimal flexibility and offers a suite of features focused on efficient and scalable vector search - fully managed. Available on AWS, Google Cloud, and Azure. link: text: Learn More url: /cloud/ odd: true - id: 1 image: src: /img/enterprise-solutions-use-cases/hybrid-cloud.svg alt: Hybrid Cloud title: Hybrid Cloud description: Bring your own Kubernetes clusters from any cloud provider, on-premise infrastructure, or edge locations and connect them to the managed cloud. link: text: Learn More url: /hybrid-cloud/ odd: false - id: 2 image: src: /img/enterprise-solutions-use-cases/private-cloud.svg alt: Private Cloud title: Private Cloud description: Experience maximum control and security by deploying Qdrant in your own infrastructure or edge locations. link: text: Learn More url: /private-cloud/ odd: true sitemapExclude: true ---
enterprise-solutions/enterprise-solutions-use-cases.md
--- review: Enterprises like Bosch use Qdrant for unparalleled performance and massive-scale vector search. “With Qdrant, we found the missing piece to develop our own provider independent multimodal generative AI platform at enterprise scale.” names: Jeremy Teichmann & Daly Singh positions: Generative AI Expert & Product Owner avatar: src: /img/customers/jeremy-t-daly-singh.svg alt: Jeremy Teichmann Avatar logo: src: /img/brands/bosch-gray.svg alt: Logo sitemapExclude: true ---
enterprise-solutions/testimonial.md
--- title: Enterprise-Grade Vector Search description: "The premier vector database for enterprises: flexible deployment options for low latency and state-of-the-art privacy and security features. High performance at billion vector scale." startFree: text: Start Free url: https://cloud.qdrant.io/ contactUs: text: Talk to Sales url: /contact-sales/ image: src: /img/enterprise-solutions-hero.png srcMobile: /img/mobile/enterprise-solutions-hero-mobile.png alt: Enterprise-solutions sitemapExclude: true ---
enterprise-solutions/enterprise-solutions-hero.md
--- title: Enterprise Benefits cards: - id: 0 icon: src: /icons/outline/security-blue.svg alt: Security title: Security description: Robust access management, backup options, and disaster recovery. - id: 1 icon: src: /icons/outline/cloud-system-blue.svg alt: Cloud System title: Data Sovereignty description: Keep your sensitive data within your secure premises. - id: 2 icon: src: /icons/outline/speedometer-blue.svg alt: Speedometer title: Low-Latency description: On-premise deployment for lightning-fast, low-latency access. - id: 3 icon: src: /icons/outline/chart-bar-blue.svg alt: Chart-Bar title: Efficiency description: Reduce memory usage with built-in compression, multitenancy, and offloading data to disk. sitemapExclude: true ---
enterprise-solutions/enterprise-benefits.md
--- title: Enterprise Search Solutions for Your Business | Qdrant description: Unlock the power of custom vector search with Qdrant's Enterprise Search Solutions. Tailored to your business needs to grow AI capabilities and data management. url: enterprise-solutions build: render: always cascade: - build: list: local publishResources: false render: never ---
enterprise-solutions/_index.md
--- title: Components --- ## Buttons **.button** <a href="#" class="button button_contained">Text</a> <button class="button button_outlined">Text</button> <button class="button button_contained" disabled>Text</button> ### Variants <div class="row"> <div class="col-4 p-4"> **.button .button_contained .button_sm** <a href="#" class="button button_contained button_sm">Try Free</a> **.button .button_contained .button_md** <a href="#" class="button button_contained button_md">Try Free</a> **.button .button_contained .button_lg** <a href="#" class="button button_contained button_lg">Try Free</a> **.button .button_contained .button_disabled** <a href="#" class="button button_contained button_disabled">Try Free</a> </div> <div class="col-4 text-bg-dark p-4"> **.button .button_outlined .button_sm** <a href="#" class="button button_outlined button_sm">Try Free</a> **.button .button_outlined .button_md** <a href="#" class="button button_outlined button_md">Try Free</a> **.button .button_outlined .button_lg** <a href="#" class="button button_outlined button_lg">Try Free</a> **.button .button_outlined .button_disabled** <a href="#" class="button button_outlined button_disabled">Try Free</a> </div> </div> ## Links **.link** <a href="#" class="link">Text</a>
debug.skip/components.md
--- title: Bootstrap slug: bootstrap --- <h2>Colors</h2> <details> <summary>Toggle details</summary> <h3>Text Color</h3> <p>Ignore the background colors in this section, they are just to show the text color.</p> <p class="text-primary">.text-primary</p> <p class="text-secondary">.text-secondary</p> <p class="text-success">.text-success</p> <p class="text-danger">.text-danger</p> <p class="text-warning bg-dark">.text-warning</p> <p class="text-info bg-dark">.text-info</p> <p class="text-light bg-dark">.text-light</p> <p class="text-dark">.text-dark</p> <p class="text-body">.text-body</p> <p class="text-muted">.text-muted</p> <p class="text-white bg-dark">.text-white</p> <p class="text-black-50">.text-black-50</p> <p class="text-white-50 bg-dark">.text-white-50</p> <h3>Background with contrasting text color</h3> <div class="text-bg-primary p-3">Primary with contrasting color</div> <div class="text-bg-secondary p-3">Secondary with contrasting color</div> <div class="text-bg-success p-3">Success with contrasting color</div> <div class="text-bg-danger p-3">Danger with contrasting color</div> <div class="text-bg-warning p-3">Warning with contrasting color</div> <div class="text-bg-info p-3">Info with contrasting color</div> <div class="text-bg-light p-3">Light with contrasting color</div> <div class="text-bg-dark p-3">Dark with contrasting color</div> <h3>Background Classes</h3> <div class="p-3 mb-2 bg-primary text-white">.bg-primary</div> <div class="p-3 mb-2 bg-secondary text-white">.bg-secondary</div> <div class="p-3 mb-2 bg-success text-white">.bg-success</div> <div class="p-3 mb-2 bg-danger text-white">.bg-danger</div> <div class="p-3 mb-2 bg-warning text-dark">.bg-warning</div> <div class="p-3 mb-2 bg-info text-dark">.bg-info</div> <div class="p-3 mb-2 bg-light text-dark">.bg-light</div> <div class="p-3 mb-2 bg-dark text-white">.bg-dark</div> <div class="p-3 mb-2 bg-body text-dark">.bg-body</div> <div class="p-3 mb-2 bg-white text-dark">.bg-white</div> <div class="p-3 mb-2 bg-transparent text-dark">.bg-transparent</div> <h3>Colored Links</h3> <a href="#" class="link-primary">Primary link</a><br> <a href="#" class="link-secondary">Secondary link</a><br> <a href="#" class="link-success">Success link</a><br> <a href="#" class="link-danger">Danger link</a><br> <a href="#" class="link-warning">Warning link</a><br> <a href="#" class="link-info">Info link</a><br> <a href="#" class="link-light">Light link</a><br> <a href="#" class="link-dark">Dark link</a><br> </details> <h2>Typography</h2> <details> <summary>Toggle details</summary> <h1>h1. Bootstrap heading</h1> <h2>h2. Bootstrap heading</h2> <h3>h3. Bootstrap heading</h3> <h4>h4. Bootstrap heading</h4> <h5>h5. Bootstrap heading</h5> <h6>h6. Bootstrap heading</h6> <p class="h1">h1. Bootstrap heading</p> <p class="h2">h2. Bootstrap heading</p> <p class="h3">h3. Bootstrap heading</p> <p class="h4">h4. Bootstrap heading</p> <p class="h5">h5. Bootstrap heading</p> <p class="h6">h6. Bootstrap heading</p> <h3> Fancy display heading <small class="text-muted">With faded secondary text</small> </h3> <h1 class="display-1">Display 1</h1> <h1 class="display-2">Display 2</h1> <h1 class="display-3">Display 3</h1> <h1 class="display-4">Display 4</h1> <h1 class="display-5">Display 5</h1> <h1 class="display-6">Display 6</h1> <p class="lead"> This is a lead paragraph. It stands out from regular paragraphs. 
<a href="#">Some link</a> </p> <p>You can use the mark tag to <mark>highlight</mark> text.</p> <p><del>This line of text is meant to be treated as deleted text.</del></p> <p><s>This line of text is meant to be treated as no longer accurate.</s></p> <p><ins>This line of text is meant to be treated as an addition to the document.</ins></p> <p><u>This line of text will render as underlined.</u></p> <p><small>This line of text is meant to be treated as fine print.</small></p> <p><strong>This line rendered as bold text.</strong></p> <p><em>This line rendered as italicized text.</em></p> <p><abbr title="attribute">attr</abbr></p> <p><abbr title="HyperText Markup Language" class="initialism">HTML</abbr></p> <p><a href="#">This is a link</a></p> <blockquote class="blockquote"> <p>A well-known quote, contained in a blockquote element.</p> </blockquote> <figure> <blockquote class="blockquote"> <p>A well-known quote, contained in a blockquote element.</p> </blockquote> <figcaption class="blockquote-footer"> Someone famous in <cite title="Source Title">Source Title</cite> </figcaption> </figure> <ul class="list-unstyled"> <li>This is a list.</li> <li>It appears completely unstyled.</li> <li>Structurally, it's still a list.</li> <li>However, this style only applies to immediate child elements.</li> <li>Nested lists: <ul> <li>are unaffected by this style</li> <li>will still show a bullet</li> <li>and have appropriate left margin</li> </ul> </li> <li>This may still come in handy in some situations.</li> </ul> </details>
debug.skip/bootstrap.md
--- title: Debugging ---
debug.skip/_index.md
--- title: "Qdrant 1.7.0 has just landed!" short_description: "Qdrant 1.7.0 brought a bunch of new features. Let's take a closer look at them!" description: "Sparse vectors, Discovery API, user-defined sharding, and snapshot-based shard transfer. That's what you can find in the latest Qdrant 1.7.0 release!" social_preview_image: /articles_data/qdrant-1.7.x/social_preview.png small_preview_image: /articles_data/qdrant-1.7.x/icon.svg preview_dir: /articles_data/qdrant-1.7.x/preview weight: -90 author: Kacper Łukawski author_link: https://kacperlukawski.com date: 2023-12-10T10:00:00Z draft: false keywords: - vector search - new features - sparse vectors - discovery - exploration - custom sharding - snapshot-based shard transfer - hybrid search - bm25 - tfidf - splade --- Please welcome the long-awaited [Qdrant 1.7.0 release](https://github.com/qdrant/qdrant/releases/tag/v1.7.0). Except for a handful of minor fixes and improvements, this release brings some cool brand-new features that we are excited to share! The latest version of your favorite vector search engine finally supports **sparse vectors**. That's the feature many of you requested, so why should we ignore it? We also decided to continue our journey with [vector similarity beyond search](/articles/vector-similarity-beyond-search/). The new Discovery API covers some utterly new use cases. We're more than excited to see what you will build with it! But there is more to it! Check out what's new in **Qdrant 1.7.0**! 1. Sparse vectors: do you want to use keyword-based search? Support for sparse vectors is finally here! 2. Discovery API: an entirely new way of using vectors for restricted search and exploration. 3. User-defined sharding: you can now decide which points should be stored on which shard. 4. Snapshot-based shard transfer: a new option for moving shards between nodes. Do you see something missing? Your feedback drives the development of Qdrant, so do not hesitate to [join our Discord community](https://qdrant.to/discord) and help us build the best vector search engine out there! ## New features Qdrant 1.7.0 brings a bunch of new features. Let's take a closer look at them! ### Sparse vectors Traditional keyword-based search mechanisms often rely on algorithms like TF-IDF, BM25, or comparable methods. While these techniques internally utilize vectors, they typically involve sparse vector representations. In these methods, the **vectors are predominantly filled with zeros, containing a relatively small number of non-zero values**. Those sparse vectors are theoretically high dimensional, definitely way higher than the dense vectors used in semantic search. However, since the majority of dimensions are usually zeros, we store them differently and just keep the non-zero dimensions. Until now, Qdrant has not been able to handle sparse vectors natively. Some were trying to convert them to dense vectors, but that was not the best solution or a suggested way. We even wrote a piece with [our thoughts on building a hybrid search](/articles/hybrid-search/), and we encouraged you to use a different tool for keyword lookup. Things have changed since then, as so many of you wanted a single tool for sparse and dense vectors. And responding to this [popular](https://github.com/qdrant/qdrant/issues/1678) [demand](https://github.com/qdrant/qdrant/issues/1135), we've now introduced sparse vectors! 
If you're coming across the topic of sparse vectors for the first time, our [Brief History of Search](/documentation/overview/vector-search/) explains the difference between sparse and dense vectors.

Check out the [sparse vectors article](../sparse-vectors/) and [sparse vectors index docs](/documentation/concepts/indexing/#sparse-vector-index) for more details on what this new index means for Qdrant users.

### Discovery API

The recently launched [Discovery API](/documentation/concepts/explore/#discovery-api) extends the range of scenarios for leveraging vectors. While its interface mirrors the [Recommendation API](/documentation/concepts/explore/#recommendation-api), it focuses on refining the search parameters for greater precision. The concept of 'context' refers to a collection of positive-negative pairs that define zones within a space. Each pair effectively divides the space into positive and negative segments. This concept guides the search operation to prioritize points based on their inclusion within positive zones or their avoidance of negative zones. Essentially, the search algorithm favors points that fall within multiple positive zones or steer clear of negative ones.

The Discovery API can be used in two ways: either with or without a target point. The first case is called a **discovery search**, while the second is called a **context search**.

#### Discovery search

*Discovery search* is an operation that uses a target point to find the most relevant points in the collection, while performing the search in the preferred areas only. That is basically a search operation with more control over the search space.

![Discovery search visualization](/articles_data/qdrant-1.7.x/discovery-search.png)

Please refer to the [Discovery API documentation on discovery search](/documentation/concepts/explore/#discovery-search) for more details and the internal mechanics of the operation.

#### Context search

The mode of *context search* is similar to the discovery search, but it does not use a target point. Instead, the `context` is used to navigate the [HNSW graph](https://arxiv.org/abs/1603.09320) towards preferred zones. It is expected that the results in that mode will be diverse, and not centered around one point. *Context search* could serve as a solution for individuals seeking a more exploratory approach to navigating the vector space.

![Context search visualization](/articles_data/qdrant-1.7.x/context-search.png)

### User-defined sharding

Qdrant's collections are divided into shards. A single **shard** is a self-contained store of points, which can be moved between nodes. Until now, the points were distributed among shards using a consistent hashing algorithm, so that shards managed non-intersecting subsets of points. The latter remains true, but now you can define your own sharding and decide which points should be stored on which shard. Sounds cool, right? But why would you need that? Well, there are multiple scenarios in which you may want to use custom sharding. For example, you may want to store some points on a dedicated node, or you may want to store points from the same user on the same shard.

While the existing behavior is still the default one, you can now define the shards when you create a collection. Then, you can assign each point to a shard by providing a `shard_key` in the `upsert` operation. What's more, you can also search over the selected shards only, by providing the `shard_key` parameter in the search operation, as shown in the examples below.
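As a rough sketch, creating a custom-sharded collection and upserting into a specific shard could look like this. The collection and shard key names are made up, and the calls follow the Python client shipped alongside Qdrant 1.7:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient("http://localhost:6333")

# Enable user-defined sharding when creating the collection
client.create_collection(
    collection_name="my_collection",
    vectors_config=models.VectorParams(size=4, distance=models.Distance.DOT),
    sharding_method=models.ShardingMethod.CUSTOM,
)

# Create the shard keys you want to route points to
client.create_shard_key(collection_name="my_collection", shard_key="cats")
client.create_shard_key(collection_name="my_collection", shard_key="dogs")

# Points upserted with a shard key selector end up on the corresponding shard
client.upsert(
    collection_name="my_collection",
    points=[
        models.PointStruct(id=1, vector=[0.29, 0.81, 0.75, 0.11], payload={"name": "Barsik"}),
    ],
    shard_key_selector="cats",
)
```

The HTTP request below then restricts the search to the `cats` and `dogs` shards only.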
```http
POST /collections/my_collection/points/search
{
    "vector": [0.29, 0.81, 0.75, 0.11],
    "shard_key": ["cats", "dogs"],
    "limit": 10,
    "with_payload": true
}
```

If you want to know more about user-defined sharding, please refer to the [sharding documentation](/documentation/guides/distributed_deployment/#sharding).

### Snapshot-based shard transfer

This one is a more low-level technical improvement for users of the distributed mode: we implemented a new option for the shard transfer mechanism. The new approach is based on a snapshot of the shard, which is transferred to the target node.

Moving shards is required for dynamic scaling of the cluster. Your data can migrate between nodes, and the way you move it is crucial for the performance of the whole system. The good old `stream_records` method (still the default one) transmits all the records between the machines and indexes them on the target node. In the case of moving the shard, it's necessary to recreate the HNSW index each time. However, with the introduction of the new `snapshot` approach, the snapshot itself, inclusive of all data and potentially quantized content, is transferred to the target node. This comprehensive snapshot includes the entire index, enabling the target node to seamlessly load it and promptly begin handling requests without the need for index recreation.

There are multiple scenarios in which you may prefer one over the other. Please check out the docs of the [shard transfer method](/documentation/guides/distributed_deployment/#shard-transfer-method) for more details and a head-to-head comparison. As for now, the old `stream_records` method is still the default one, but we may decide to change it in the future.

## Minor improvements

Beyond introducing new features, Qdrant 1.7.0 enhances performance and addresses various minor issues. Here's a rundown of the key improvements:

1. Improvement of HNSW Index Building on High CPU Systems ([PR#2869](https://github.com/qdrant/qdrant/pull/2869)).
2. Improving [Search Tail Latencies](https://github.com/qdrant/qdrant/pull/2931): an improvement for high CPU systems with many parallel searches, directly impacting the user experience by reducing latency.
3. [Adding Index for Geo Map Payloads](https://github.com/qdrant/qdrant/pull/2768): an index for geo map payloads can significantly improve search performance, especially for applications involving geographical data.
4. Stability of Consensus on Big High Load Clusters: enhancing the stability of consensus in large, high-load environments is critical for ensuring the reliability and scalability of the system ([PR#3013](https://github.com/qdrant/qdrant/pull/3013), [PR#3026](https://github.com/qdrant/qdrant/pull/3026), [PR#2942](https://github.com/qdrant/qdrant/pull/2942), [PR#3103](https://github.com/qdrant/qdrant/pull/3103), [PR#3054](https://github.com/qdrant/qdrant/pull/3054)).
5. Configurable Timeout for Searches: allowing users to configure the timeout for searches provides greater flexibility and can help optimize system performance under different operational conditions ([PR#2748](https://github.com/qdrant/qdrant/pull/2748), [PR#2771](https://github.com/qdrant/qdrant/pull/2771)).

## Release notes

[Our release notes](https://github.com/qdrant/qdrant/releases/tag/v1.7.0) are the place to go if you are interested in more details. Please remember that Qdrant is an open source project, so feel free to [contribute](https://github.com/qdrant/qdrant/issues)!
articles/qdrant-1.7.x.md
--- title: "Any* Embedding Model Can Become a Late Interaction Model... If You Give It a Chance!" short_description: "Standard dense embedding models perform surprisingly well in late interaction scenarios." description: "We recently discovered that embedding models can become late interaction models & can perform surprisingly well in some scenarios. See what we learned here." preview_dir: /articles_data/late-interaction-models/preview social_preview_image: /articles_data/late-interaction-models/social-preview.png weight: -160 author: Kacper Łukawski author_link: https://kacperlukawski.com date: 2024-08-14T00:00:00.000Z --- \* At least any open-source model, since you need access to its internals. ## You Can Adapt Dense Embedding Models for Late Interaction Qdrant 1.10 introduced support for multi-vector representations, with late interaction being a prominent example of this model. In essence, both documents and queries are represented by multiple vectors, and identifying the most relevant documents involves calculating a score based on the similarity between the corresponding query and document embeddings. If you're not familiar with this paradigm, our updated [Hybrid Search](/articles/hybrid-search/) article explains how multi-vector representations can enhance retrieval quality. **Figure 1:** We can visualize late interaction between corresponding document-query embedding pairs. ![Late interaction model](/articles_data/late-interaction-models/late-interaction.png) There are many specialized late interaction models, such as [ColBERT](https://qdrant.tech/documentation/fastembed/fastembed-colbert/), but **it appears that regular dense embedding models can also be effectively utilized in this manner**. > In this study, we will demonstrate that standard dense embedding models, traditionally used for single-vector representations, can be effectively adapted for late interaction scenarios using output token embeddings as multi-vector representations. By testing out retrieval with Qdrant’s multi-vector feature, we will show that these models can rival or surpass specialized late interaction models in retrieval performance, while offering lower complexity and greater efficiency. This work redefines the potential of dense models in advanced search pipelines, presenting a new method for optimizing retrieval systems. ## Understanding Embedding Models The inner workings of embedding models might be surprising to some. The model doesn’t operate directly on the input text; instead, it requires a tokenization step to convert the text into a sequence of token identifiers. Each token identifier is then passed through an embedding layer, which transforms it into a dense vector. Essentially, the embedding layer acts as a lookup table that maps token identifiers to dense vectors. These vectors are then fed into the transformer model as input. **Figure 2:** The tokenization step, which takes place before vectors are added to the transformer model. ![Input token embeddings](/articles_data/late-interaction-models/input-embeddings.png) The input token embeddings are context-free and are learned during the model’s training process. This means that each token always receives the same embedding, regardless of its position in the text. At this stage, the token embeddings are unaware of the context in which they appear. It is the transformer model’s role to contextualize these embeddings. 
Much has been discussed about the role of attention in transformer models, but in essence, this mechanism is responsible for capturing cross-token relationships. Each transformer module takes a sequence of token embeddings as input and produces a sequence of output token embeddings. Both sequences are of the same length, with each token embedding being enriched by information from the other token embeddings at the current step. **Figure 3:** The mechanism that produces a sequence of output token embeddings. ![Output token embeddings](/articles_data/late-interaction-models/output-embeddings.png) **Figure 4:** The final step performed by the embedding model is pooling the output token embeddings to generate a single vector representation of the input text. ![Pooling](/articles_data/late-interaction-models/pooling.png) There are several pooling strategies, but regardless of which one a model uses, the output is always a single vector representation, which inevitably loses some information about the input. It’s akin to giving someone detailed, step-by-step directions to the nearest grocery store versus simply pointing in the general direction. While the vague direction might suffice in some cases, the detailed instructions are more likely to lead to the desired outcome. ## Using Output Token Embeddings for Multi-Vector Representations We often overlook the output token embeddings, but the fact is—they also serve as multi-vector representations of the input text. So, why not explore their use in a multi-vector retrieval model, similar to late interaction models? ### Experimental Findings We conducted several experiments to determine whether output token embeddings could be effectively used in place of traditional late interaction models. The results are quite promising. 
<table> <thead> <tr> <th>Dataset</th> <th>Model</th> <th>Experiment</th> <th>NDCG@10</th> </tr> </thead> <tbody> <tr> <th rowspan="6">SciFact</th> <td><code>prithivida/Splade_PP_en_v1</code></td> <td>sparse vectors</td> <td>0.70928</td> </tr> <tr> <td><code>colbert-ir/colbertv2.0</code></td> <td>late interaction model</td> <td>0.69579</td> </tr> <tr> <td rowspan="2"><code>all-MiniLM-L6-v2</code></td> <td>single dense vector representation</td> <td>0.64508</td> </tr> <tr> <td>output token embeddings</td> <td>0.70724</td> </tr> <tr> <td rowspan="2"><code>BAAI/bge-small-en</code></td> <td>single dense vector representation</td> <td>0.68213</td> </tr> <tr> <td>output token embeddings</td> <td><u>0.73696</u></td> </tr> <tr> <td colspan="4"></td> </tr> <tr> <th rowspan="6">NFCorpus</th> <td><code>prithivida/Splade_PP_en_v1</code></td> <td>sparse vectors</td> <td>0.34166</td> </tr> <tr> <td><code>colbert-ir/colbertv2.0</code></td> <td>late interaction model</td> <td>0.35036</td> </tr> <tr> <td rowspan="2"><code>all-MiniLM-L6-v2</code></td> <td>single dense vector representation</td> <td>0.31594</td> </tr> <tr> <td>output token embeddings</td> <td>0.35779</td> </tr> <tr> <td rowspan="2"><code>BAAI/bge-small-en</code></td> <td>single dense vector representation</td> <td>0.29696</td> </tr> <tr> <td>output token embeddings</td> <td><u>0.37502</u></td> </tr> <tr> <td colspan="4"></td> </tr> <tr> <th rowspan="6">ArguAna</th> <td><code>prithivida/Splade_PP_en_v1</code></td> <td>sparse vectors</td> <td>0.47271</td> </tr> <tr> <td><code>colbert-ir/colbertv2.0</code></td> <td>late interaction model</td> <td>0.44534</td> </tr> <tr> <td rowspan="2"><code>all-MiniLM-L6-v2</code></td> <td>single dense vector representation</td> <td>0.50167</td> </tr> <tr> <td>output token embeddings</td> <td>0.45997</td> </tr> <tr> <td rowspan="2"><code>BAAI/bge-small-en</code></td> <td>single dense vector representation</td> <td><u>0.58857</u></td> </tr> <tr> <td>output token embeddings</td> <td>0.57648</td> </tr> </tbody> </table> The [source code for these experiments is open-source](https://github.com/kacperlukawski/beir-qdrant/blob/main/examples/retrieval/search/evaluate_all_exact.py) and utilizes [`beir-qdrant`](https://github.com/kacperlukawski/beir-qdrant), an integration of Qdrant with the [BeIR library](https://github.com/beir-cellar/beir). While this package is not officially maintained by the Qdrant team, it may prove useful for those interested in experimenting with various Qdrant configurations to see how they impact retrieval quality. All experiments were conducted using Qdrant in exact search mode, ensuring the results are not influenced by approximate search. Even the simple `all-MiniLM-L6-v2` model can be applied in a late interaction model fashion, resulting in a positive impact on retrieval quality. However, the best results were achieved with the `BAAI/bge-small-en` model, which outperformed both sparse and late interaction models. It's important to note that ColBERT has not been trained on BeIR datasets, making its performance fully out of domain. Nevertheless, the `all-MiniLM-L6-v2` [training dataset](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2#training-data) also lacks any BeIR data, yet it still performs remarkably well. ## Comparative Analysis of Dense vs. Late Interaction Models The retrieval quality speaks for itself, but there are other important factors to consider. The traditional dense embedding models we tested are less complex than late interaction or sparse models. 
With fewer parameters, these models are expected to be faster during inference and more cost-effective to maintain. Below is a comparison of the models used in the experiments: | Model | Number of parameters | |------------------------------|----------------------| | `prithivida/Splade_PP_en_v1` | 109,514,298 | | `colbert-ir/colbertv2.0` | 109,580,544 | | `BAAI/bge-small-en` | 33,360,000 | | `all-MiniLM-L6-v2` | 22,713,216 | One argument against using output token embeddings is the increased storage requirements compared to ColBERT-like models. For instance, the `all-MiniLM-L6-v2` model produces 384-dimensional output token embeddings, which is three times more than the 128-dimensional embeddings generated by ColBERT-like models. This increase not only leads to higher memory usage but also impacts the computational cost of retrieval, as calculating distances takes more time. Mitigating this issue through vector compression would make a lot of sense. ## Exploring Quantization for Multi-Vector Representations Binary quantization is generally more effective for high-dimensional vectors, making the `all-MiniLM-L6-v2` model, with its relatively low-dimensional outputs, less ideal for this approach. However, scalar quantization appeared to be a viable alternative. The table below summarizes the impact of quantization on retrieval quality. <table> <thead> <tr> <th>Dataset</th> <th>Model</th> <th>Experiment</th> <th>NDCG@10</th> </tr> </thead> <tbody> <tr> <th rowspan="2">SciFact</th> <td rowspan="2"><code>all-MiniLM-L6-v2</code></td> <td>output token embeddings</td> <td>0.70724</td> </tr> <tr> <td>output token embeddings (uint8)</td> <td>0.70297</td> </tr> <tr> <td colspan="4"></td> </tr> <tr> <th rowspan="2">NFCorpus</th> <td rowspan="2"><code>all-MiniLM-L6-v2</code></td> <td>output token embeddings</td> <td>0.35779</td> </tr> <tr> <td>output token embeddings (uint8)</td> <td>0.35572</td> </tr> </tbody> </table> It’s important to note that quantization doesn’t always preserve retrieval quality at the same level, but in this case, scalar quantization appears to have minimal impact on retrieval performance. The effect is negligible, while the memory savings are substantial. We managed to maintain the original quality while using four times less memory. Additionally, a quantized vector requires 384 bytes, compared to ColBERT’s 512 bytes. This results in a 25% reduction in memory usage, with retrieval quality remaining nearly unchanged. ## Practical Application: Enhancing Retrieval with Dense Models If you’re using one of the sentence transformer models, the output token embeddings are calculated by default. While a single vector representation is more efficient in terms of storage and computation, there’s no need to discard the output token embeddings. According to our experiments, these embeddings can significantly enhance retrieval quality. You can store both the single vector and the output token embeddings in Qdrant, using the single vector for the initial retrieval step and then reranking the results with the output token embeddings. **Figure 5:** A single model pipeline that relies solely on the output token embeddings for reranking. ![Single model reranking](/articles_data/late-interaction-models/single-model-reranking.png) To demonstrate this concept, we implemented a simple reranking pipeline in Qdrant. This pipeline uses a dense embedding model for the initial oversampled retrieval and then relies solely on the output token embeddings for the reranking step. 
### Single Model Retrieval and Reranking Benchmarks Our tests focused on using the same model for both retrieval and reranking. The reported metric is NDCG@10. In all tests, we applied an oversampling factor of 5x, meaning the retrieval step returned 50 results, which were then narrowed down to 10 during the reranking step. Below are the results for some of the BeIR datasets: <table> <thead> <tr> <th rowspan="2">Dataset</th> <th colspan="2"><code>all-miniLM-L6-v2</code></th> <th colspan="2"><code>BAAI/bge-small-en</code></th> </tr> <tr> <th>dense embeddings only</th> <th>dense + reranking</th> <th>dense embeddings only</th> <th>dense + reranking</th> </tr> </thead> <tbody> <tr> <th>SciFact</th> <td>0.64508</td> <td>0.70293</td> <td>0.68213</td> <td><u>0.73053</u></td> </tr> <tr> <th>NFCorpus</th> <td>0.31594</td> <td>0.34297</td> <td>0.29696</td> <td><u>0.35996</u></td> </tr> <tr> <th>ArguAna</th> <td>0.50167</td> <td>0.45378</td> <td><u>0.58857</u></td> <td>0.57302</td> </tr> <tr> <th>Touche-2020</th> <td>0.16904</td> <td>0.19693</td> <td>0.13055</td> <td><u>0.19821</u></td> </tr> <tr> <th>TREC-COVID</th> <td>0.47246</td> <td><u>0.6379</u></td> <td>0.45788</td> <td>0.53539</td> </tr> <tr> <th>FiQA-2018</th> <td>0.36867</td> <td><u>0.41587</u></td> <td>0.31091</td> <td>0.39067</td> </tr> </tbody> </table> The source code for the benchmark is publicly available, and [you can find it in the repository of the `beir-qdrant` package](https://github.com/kacperlukawski/beir-qdrant/blob/main/examples/retrieval/search/evaluate_reranking.py). Overall, adding a reranking step using the same model typically improves retrieval quality. However, the quality of various late interaction models is [often reported based on their reranking performance when BM25 is used for the initial retrieval](https://huggingface.co/mixedbread-ai/mxbai-colbert-large-v1#1-reranking-performance). This experiment aimed to demonstrate how a single model can be effectively used for both retrieval and reranking, and the results are quite promising. Now, let's explore how to implement this using the new Query API introduced in Qdrant 1.10. ## Setting Up Qdrant for Late Interaction The new Query API in Qdrant 1.10 enables the construction of even more complex retrieval pipelines. We can use the single vector created after pooling for the initial retrieval step and then rerank the results using the output token embeddings. Assuming the collection is named `my-collection` and is configured to store two named vectors: `dense-vector` and `output-token-embeddings`, here’s how such a collection could be created in Qdrant: ```python from qdrant_client import QdrantClient, models client = QdrantClient("http://localhost:6333") client.create_collection( collection_name="my-collection", vectors_config={ "dense-vector": models.VectorParams( size=384, distance=models.Distance.COSINE, ), "output-token-embeddings": models.VectorParams( size=384, distance=models.Distance.COSINE, multivector_config=models.MultiVectorConfig( comparator=models.MultiVectorComparator.MAX_SIM ), ), } ) ``` Both vectors are of the same size since they are produced by the same `all-MiniLM-L6-v2` model. ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer("all-MiniLM-L6-v2") ``` Now, instead of using the search API with just a single dense vector, we can create a reranking pipeline. First, we retrieve 50 results using the dense vector, and then we rerank them using the output token embeddings to obtain the top 10 results. 
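Before querying, each document has to be indexed with both representations. Here is a minimal sketch reusing the `client` and `model` defined above; the document list is made up purely for illustration:

```python
documents = [
    "Qdrant supports multi-vector representations since version 1.10.",
    "Sentence transformers can expose their output token embeddings.",
]  # hypothetical corpus

client.upsert(
    collection_name="my-collection",
    points=[
        models.PointStruct(
            id=idx,
            vector={
                # pooled single vector used for the initial retrieval
                "dense-vector": model.encode(doc).tolist(),
                # output token embeddings used for reranking (multivector)
                "output-token-embeddings": model.encode(
                    doc, output_value="token_embeddings"
                ).tolist(),
            },
            payload={"text": doc},
        )
        for idx, doc in enumerate(documents)
    ],
)
```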
```python query = "What else can be done with just all-MiniLM-L6-v2 model?" client.query_points( collection_name="my-collection", prefetch=[ # Prefetch the dense embeddings of the top-50 documents models.Prefetch( query=model.encode(query).tolist(), using="dense-vector", limit=50, ) ], # Rerank the top-50 documents retrieved by the dense embedding model # and return just the top-10. Please note we call the same model, but # we ask for the token embeddings by setting the output_value parameter. query=model.encode(query, output_value="token_embeddings").tolist(), using="output-token-embeddings", limit=10, ) ``` ## Try the Experiment Yourself In a real-world scenario, you might take it a step further by first calculating the token embeddings and then performing pooling to obtain the single vector representation. This approach allows you to complete everything in a single pass. The simplest way to start experimenting with building complex reranking pipelines in Qdrant is by using the forever-free cluster on [Qdrant Cloud](https://cloud.qdrant.io/) and reading [Qdrant's documentation](/documentation/). The [source code for these experiments is open-source](https://github.com/kacperlukawski/beir-qdrant/blob/main/examples/retrieval/search/evaluate_all_exact.py) and uses [`beir-qdrant`](https://github.com/kacperlukawski/beir-qdrant), an integration of Qdrant with the [BeIR library](https://github.com/beir-cellar/beir). ## Future Directions and Research Opportunities The initial experiments using output token embeddings in the retrieval process have yielded promising results. However, we plan to conduct further benchmarks to validate these findings and explore the incorporation of sparse methods for the initial retrieval. Additionally, we aim to investigate the impact of quantization on multi-vector representations and its effects on retrieval quality. Finally, we will assess retrieval speed, a crucial factor for many applications.
articles/late-interaction-models.md
---
title: Metric Learning Tips & Tricks
short_description: How to train an object matching model and serve it in production.
description: Practical recommendations on how to train a matching model and serve it in production. Even with no labeled data.
# external_link: https://vasnetsov93.medium.com/metric-learning-tips-n-tricks-2e4cfee6b75b
social_preview_image: /articles_data/metric-learning-tips/preview/social_preview.jpg
preview_dir: /articles_data/metric-learning-tips/preview
small_preview_image: /articles_data/metric-learning-tips/scatter-graph.svg
weight: 20
author: Andrei Vasnetsov
author_link: https://blog.vasnetsov.com/
date: 2021-05-15T10:18:00.000Z
# aliases: [ /articles/metric-learning-tips/ ]
---

## How to train an object matching model with no labeled data and use it in production

Currently, most machine-learning-related business cases are solved as classification problems. Classification algorithms are so well studied in practice that even if the original problem is not directly a classification task, it is usually decomposed or approximately converted into one.

However, despite its simplicity, the classification task has requirements that could complicate its production integration and scaling. For example, it requires a fixed number of classes, where each class should have a sufficient number of training samples.

In this article, I will describe how we overcame these limitations by switching to metric learning. Using the example of matching job positions and candidates, I will show how to train a metric learning model with no manually labeled data, how to estimate prediction confidence, and how to serve metric learning in production.

## What is metric learning and why use it?

According to Wikipedia, metric learning is the task of learning a distance function over objects. In practice, it means that we can train a model that returns a number for any pair of given objects, and this number should represent a degree or score of similarity between those objects. For example, objects with a score of 0.9 could be more similar than objects with a score of 0.5. Actual scores and their direction could vary among different implementations.

In practice, there are two main approaches to metric learning and two corresponding types of NN architectures.

The first is the interaction-based approach, which first builds local interactions (i.e., local matching signals) between two objects. Deep neural networks learn hierarchical interaction patterns for matching. Examples of neural network architectures include MV-LSTM, ARC-II, and MatchPyramid.

![MV-LSTM, example of interaction-based model](https://gist.githubusercontent.com/generall/4821e3c6b5eee603d56729e7a156e461/raw/b0eb4ea5d088fe1095e529eb12708ac69f304ce3/mv_lstm.png)

> MV-LSTM, example of interaction-based model, [Shengxian Wan et al.](https://www.researchgate.net/figure/Illustration-of-MV-LSTM-S-X-and-S-Y-are-the-in_fig1_285271115) via Researchgate

The second is the representation-based approach. In this case, the distance function is composed of two components: the Encoder transforms an object into an embedded representation, usually a large floating-point vector, and the Comparator takes the embeddings of a pair of objects from the Encoder and calculates their similarity. The most well-known example of this embedding representation is Word2Vec. Examples of neural network architectures also include DSSM, C-DSSM, and ARC-I.

The Comparator is usually a very simple function that can be calculated very quickly.
It might be cosine similarity or even a dot product. This two-stage schema allows performing complex calculations only once per object. Once objects are transformed, the Comparator can calculate their similarity much more quickly, independently of the Encoder. For more convenience, embeddings can be placed into specialized storages or vector search engines. These search engines allow you to manage embeddings via an API, perform searches, and run other operations with vectors.

![C-DSSM, example of representation-based model](https://gist.githubusercontent.com/generall/4821e3c6b5eee603d56729e7a156e461/raw/b0eb4ea5d088fe1095e529eb12708ac69f304ce3/cdssm.png)

> C-DSSM, example of representation-based model, [Xue Li et al.](https://arxiv.org/abs/1901.10710v2) via arXiv

Pre-trained NNs can also be used. The output of the second-to-last layer can work as an embedded representation.

Further in this article, I will focus on the representation-based approach, as it proved to be more flexible and fast.

So what are the advantages of using metric learning compared to classification?

The Object Encoder does not assume a fixed number of classes. So if you can't split your objects into classes, if the number of classes is too high, or you suspect that it could grow in the future, consider using metric learning.

In our case, the business goal was to find suitable vacancies for candidates who specify the title of the desired position. To solve this, we used to apply a classifier to determine the job category of the vacancy and the candidate. But this solution was limited to only a few hundred categories. Candidates were complaining that they couldn't find the right category for them. Training the classifier for new categories would take too long and would require new training data for each new category.

Switching to metric learning allowed us to overcome these limitations: the resulting solution could compare any pair of position descriptions, even if we don't have a category reference for them yet.

![T-SNE with job samples](https://gist.githubusercontent.com/generall/4821e3c6b5eee603d56729e7a156e461/raw/b0eb4ea5d088fe1095e529eb12708ac69f304ce3/embeddings.png)

> T-SNE with job samples, Image by Author. Play with [Embedding Projector](https://projector.tensorflow.org/?config=https://gist.githubusercontent.com/generall/7e712425e3b340c2c4dbc1a29f515d91/raw/b45b2b6f6c1d5ab3d3363c50805f3834a85c8879/config.json) yourself.

With metric learning, we learn not a concrete job type but how to match job descriptions from a candidate's CV and a vacancy. Moreover, with metric learning, it is easy to add more reference occupations without retraining the model. We can then add the reference to a vector search engine, and the next time we match occupations, this new reference vector will be searchable.

## Data for metric learning

Unlike classifiers, metric learning training does not require specific class labels. All that is required are examples of similar and dissimilar objects. We will call them positive and negative samples.

At the same time, it could be a relative similarity between a pair of objects. For example, twins look more alike than a pair of random people, and random people are more similar to each other than a man and a cat. A model can use such relative examples for learning.

The good news is that the division into classes is only a special case of determining similarity. To use such datasets, it is enough to declare samples from one class as positive and samples from another class as negative.
In this way, it is possible to combine several datasets with mismatched classes into one generalized dataset for metric learning.

But datasets with a division into classes are not the only suitable source of positive and negative examples. If, for example, there are additional features in the description of the object, the values of these features can also be used as a similarity factor. It may not be as explicit as class membership, but the relative similarity is also suitable for learning.

In the case of job descriptions, there are many ontologies of occupations, which we were able to combine into a single dataset thanks to this approach. We even went a step further and used identical job titles to find similar descriptions.

As a result, we got a self-supervised universal dataset that did not require any manual labeling.

Unfortunately, this universality does not allow some techniques to be applied in training. Next, I will describe how to overcome this disadvantage.

## Training the model

There are several ways to train a metric learning model. Among the most popular are the Triplet and Contrastive loss functions, but I will not go deep into them in this article. However, I will tell you about one interesting trick that helped us work with unified training examples.

One of the most important practices for efficiently training a metric learning model is hard negative mining. This technique aims to include negative samples on which the model gave the worst predictions during the last training epoch. Most articles that describe this technique assume that the training data consists of many small classes (in most cases, people's faces). With data like this, it is easy to find bad samples: if two samples from different classes have a high similarity score, we can use the pair as a negative sample.

But we had no such classes in our data; the only thing we have is occupation pairs assumed to be similar in some way. We cannot guarantee that there is no better match for a given occupation somewhere else in the dataset than the one it is paired with. That is why we can't use hard negative mining for our model.

![Loss variations](https://gist.githubusercontent.com/generall/4821e3c6b5eee603d56729e7a156e461/raw/b0eb4ea5d088fe1095e529eb12708ac69f304ce3/losses.png)

> [Alfonso Medela et al.](https://arxiv.org/abs/1905.10675) via arXiv

To compensate for this limitation, we can try to increase the number of random (weak) negative samples. One way to achieve this is to train the model longer, so it will see more samples by the end of the training. But we found a better solution in adjusting our loss function. In a regular implementation of Triplet or Contrastive loss, each positive pair is compared with only one or a few negative samples. What we did is allow pair comparison across the whole batch. That means the loss function penalizes any pair of random objects whose score exceeds any of the positive scores in the batch.

This extension gives `~ N * B^2` comparisons, where `B` is the size of a batch and `N` is the number of batches, much more than the `~ N * B` comparisons in regular triplet loss. This means that increasing the size of the batch significantly increases the number of negative comparisons and therefore should improve the model's performance. We were able to observe this dependence in our experiments. A similar idea can also be found in the paper [Supervised Contrastive Learning](https://arxiv.org/abs/2004.11362).
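As an illustration only, and not the exact loss we used in production, a batch-wide contrastive-style penalty can be sketched in PyTorch like this:

```python
import torch
import torch.nn.functional as F


def batch_wide_contrastive_loss(
    anchors: torch.Tensor, positives: torch.Tensor, margin: float = 0.2
) -> torch.Tensor:
    """Every non-matching pair in the batch acts as a weak negative for every positive pair."""
    anchors = F.normalize(anchors, dim=-1)        # (B, dim)
    positives = F.normalize(positives, dim=-1)    # (B, dim)
    scores = anchors @ positives.T                # (B, B) cosine similarity matrix
    positive_scores = scores.diag().unsqueeze(1)  # (B, 1) scores of the true pairs
    off_diagonal = ~torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
    negative_scores = scores[off_diagonal].view(scores.size(0), -1)  # (B, B - 1)
    # Penalize every negative pair whose score comes within `margin` of the positive score
    return F.relu(negative_scores - positive_scores + margin).mean()
```

With a batch of `B` anchors and their positives, each batch contributes roughly `B^2` comparisons, which is exactly the effect described above: larger batches mean more weak negatives per positive pair.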
## Model confidence

In real life, it is often necessary to know how confident the model is in its prediction, for example, to decide whether manual adjustment or validation of the result is required.

With conventional classification, it is easy to understand from the scores how confident the model is in the result. If the probability values of different classes are close to each other, the model is not confident. If, on the contrary, the most probable class differs greatly from the others, then the model is confident.

At first glance, this cannot be applied to metric learning. Even if the predicted object similarity score is small, it might only mean that the reference set has no proper objects to compare with. Conversely, the model can group garbage objects with a large score.

Fortunately, we found a small modification to the embedding generator which allows us to define confidence in the same way as it is done in conventional classifiers with a Softmax activation function. The modification consists of building an embedding as a combination of feature groups. Each feature group is represented as a one-hot encoded sub-vector in the embedding. If the model can confidently predict the feature value, the corresponding sub-vector will have a high absolute value in some of its elements. For a more intuitive understanding, I recommend thinking about embeddings not as points in space, but as a set of binary features.

To implement this modification and form proper feature groups, we need to change the regular linear output layer to a concatenation of several Softmax layers. Each softmax component represents an independent feature and forces the neural network to learn it.

Let's say, for example, that we have 4 softmax components with 128 elements each. Every such component could be roughly imagined as a one-hot-encoded number in the range of 0 to 127. Thus, the resulting vector will represent one of `128^4` possible combinations. If the trained model is good enough, you can even try to interpret the values of individual features.

![Softmax feature embeddings](https://gist.githubusercontent.com/generall/4821e3c6b5eee603d56729e7a156e461/raw/b0eb4ea5d088fe1095e529eb12708ac69f304ce3/feature_embedding.png)

> Softmax feature embeddings, Image by Author.

## Neural rules

Machine learning models rarely train to 100% accuracy. In a conventional classifier, errors can only be eliminated by modifying and repeating the training process. Metric learning, however, is more flexible in this matter and allows you to introduce additional steps that correct the errors of an already trained model.

A common error of a metric learning model is erroneously declaring objects close although in reality they are not. To correct this kind of error, we introduce exclusion rules.

Rules consist of 2 object anchors encoded into the vector space. If the target object falls into the effect area of one of the anchors, it triggers the rule, which will exclude all objects in the second anchor's area from the prediction result.

![Exclusion rules](https://gist.githubusercontent.com/generall/4821e3c6b5eee603d56729e7a156e461/raw/b0eb4ea5d088fe1095e529eb12708ac69f304ce3/exclusion_rule.png)

> Neural exclusion rules, Image by Author.

The convenience of working with embeddings is that regardless of the number of rules, you only need to perform the encoding once per object. Then, to find a suitable rule, it is enough to compare the target object's embedding with the pre-calculated embeddings of the rules' anchors. Which, when implemented, translates into just one additional query to the vector search engine.
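A minimal sketch of how such a rule check could be applied on top of search results follows. The threshold, rule structure, and helper names are illustrative, not the production implementation:

```python
import numpy as np


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))


def apply_exclusion_rules(target_emb, candidates, rules, threshold=0.8):
    """Drop candidates that fall into the excluded anchor's area of any triggered rule.

    `candidates` is a list of (id, embedding) pairs returned by the search engine,
    `rules` is a list of (trigger_anchor, excluded_anchor) embedding pairs.
    """
    # A rule triggers when the target object is close enough to its trigger anchor
    triggered = [excluded for trigger, excluded in rules if cosine(target_emb, trigger) > threshold]
    # Keep only candidates that are far from every excluded anchor of the triggered rules
    return [
        (candidate_id, emb)
        for candidate_id, emb in candidates
        if all(cosine(emb, excluded) <= threshold for excluded in triggered)
    ]
```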
## Vector search in production

When implementing a metric learning model in production, the question of vector storage and management arises. It should be easy to add new vectors if new job descriptions appear in the service.

In our case, we also needed to apply additional conditions to the search. We needed to filter, for example, by the location of candidates and the level of language proficiency.

We did not find a ready-made tool for such vector management, so we created [Qdrant](https://github.com/qdrant/qdrant) - an open-source vector search engine.

It allows you to add and delete vectors with a simple API, independent of the programming language you are using. You can also assign a payload to vectors. This payload allows additional filtering during the search request.

Qdrant has a pre-built Docker image, and starting to work with it is as simple as running

```bash
docker run -p 6333:6333 qdrant/qdrant
```

Documentation with examples can be found [here](https://api.qdrant.tech/api-reference).

## Conclusion

In this article, I have shown how metric learning can be more scalable and flexible than classification models. I suggest trying similar approaches in your tasks - it might be matching similar texts, images, or audio data. With the existing variety of pre-trained neural networks and a vector search engine, it is easy to build your metric learning-based application.
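For example, here is a minimal sketch of the payload-based filtering described above, using the Python client. The collection name, vector size, placeholder vectors, and payload fields are illustrative, not the production setup:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient("localhost", port=6333)

client.create_collection(
    collection_name="occupations",
    vectors_config=models.VectorParams(size=4, distance=models.Distance.COSINE),
)

# In practice, the vectors come from the metric learning model;
# tiny placeholder vectors are used here to keep the example runnable.
client.upsert(
    collection_name="occupations",
    points=[
        models.PointStruct(
            id=1,
            vector=[0.1, 0.9, 0.1, 0.4],
            payload={"location": "Berlin", "language_level": "B2"},
        ),
    ],
)

# Search for similar job descriptions, filtered by location and language level.
hits = client.search(
    collection_name="occupations",
    query_vector=[0.2, 0.8, 0.1, 0.3],
    query_filter=models.Filter(
        must=[
            models.FieldCondition(key="location", match=models.MatchValue(value="Berlin")),
            models.FieldCondition(key="language_level", match=models.MatchValue(value="B2")),
        ]
    ),
    limit=10,
)
```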
articles/metric-learning-tips.md
--- title: Qdrant 0.10 released short_description: A short review of all the features introduced in Qdrant 0.10 description: Qdrant 0.10 brings a lot of changes. Check out what's new! preview_dir: /articles_data/qdrant-0-10-release/preview small_preview_image: /articles_data/qdrant-0-10-release/new-svgrepo-com.svg social_preview_image: /articles_data/qdrant-0-10-release/preview/social_preview.jpg weight: 70 author: Kacper Łukawski author_link: https://medium.com/@lukawskikacper date: 2022-09-19T13:30:00+02:00 draft: false --- [Qdrant 0.10 is a new version](https://github.com/qdrant/qdrant/releases/tag/v0.10.0) that brings a lot of performance improvements, but also some new features which were heavily requested by our users. Here is an overview of what has changed. ## Storing multiple vectors per object Previously, if you wanted to use semantic search with multiple vectors per object, you had to create separate collections for each vector type. This was even if the vectors shared some other attributes in the payload. With Qdrant 0.10, you can now store all of these vectors together in the same collection, which allows you to share a single copy of the payload. This makes it easier to use semantic search with multiple vector types, and reduces the amount of work you need to do to set up your collections. ## Batch vector search Previously, you had to send multiple requests to the Qdrant API to perform multiple non-related tasks. However, this can cause significant network overhead and slow down the process, especially if you have a poor connection speed. Fortunately, the [new batch search feature](/documentation/concepts/search/#batch-search-api) allows you to avoid this issue. With just one API call, Qdrant will handle multiple search requests in the most efficient way possible. This means that you can perform multiple tasks simultaneously without having to worry about network overhead or slow performance. ## Built-in ARM support To make our application accessible to ARM users, we have compiled it specifically for that platform. If it is not compiled for ARM, the device will have to emulate it, which can slow down performance. To ensure the best possible experience for ARM users, we have created Docker images specifically for that platform. Keep in mind that using a limited set of processor instructions may affect the performance of your vector search. Therefore, we have tested both ARM and non-ARM architectures using similar setups to understand the potential impact on performance. ## Full-text filtering Qdrant is a vector database that allows you to quickly search for the nearest neighbors. However, you may need to apply additional filters on top of the semantic search. Up until version 0.10, Qdrant only supported keyword filters. With the release of Qdrant 0.10, [you can now use full-text filters](/documentation/concepts/filtering/#full-text-match) as well. This new filter type can be used on its own or in combination with other filter types to provide even more flexibility in your searches.
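As an illustration, here is a hedged sketch of how a full-text filter can be combined with a vector search using the Python client. The collection name, field name, and query vector are made up for the example, and the payload field needs a full-text index:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient("localhost", port=6333)

# Full-text filtering requires a text index on the payload field.
client.create_payload_index(
    collection_name="articles",
    field_name="content",
    field_schema=models.TextIndexParams(
        type="text",
        tokenizer=models.TokenizerType.WORD,
        lowercase=True,
    ),
)

# Combine semantic search with a full-text condition on the payload.
hits = client.search(
    collection_name="articles",
    query_vector=[0.2, 0.1, 0.9, 0.7],
    query_filter=models.Filter(
        must=[
            models.FieldCondition(
                key="content",
                match=models.MatchText(text="vector search"),
            )
        ]
    ),
    limit=5,
)
```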
articles/qdrant-0-10-release.md
---
title: "Using LangChain for Question Answering with Qdrant"
short_description: "Large Language Models might be developed fast with modern tools. Here is how!"
description: "We combined LangChain, a pre-trained LLM from OpenAI, SentenceTransformers & Qdrant to create a question answering system with just a few lines of code. Learn more!"
social_preview_image: /articles_data/langchain-integration/social_preview.png
small_preview_image: /articles_data/langchain-integration/chain.svg
preview_dir: /articles_data/langchain-integration/preview
weight: 6
author: Kacper Łukawski
author_link: https://medium.com/@lukawskikacper
date: 2023-01-31T10:53:20+01:00
draft: false
keywords:
  - vector search
  - langchain
  - llm
  - large language models
  - question answering
  - openai
  - embeddings
---

# Streamlining Question Answering: Simplifying Integration with LangChain and Qdrant

Building applications with Large Language Models doesn't have to be complicated. A lot has been going on recently to simplify the development, so you can utilize already pre-trained models and support even complex pipelines with a few lines of code. [LangChain](https://langchain.readthedocs.io) provides unified interfaces to different libraries, so you can avoid writing boilerplate code and focus on the value you want to bring.

## Why Use Qdrant for Question Answering with LangChain?

It has been reported millions of times recently, but let's say that again. ChatGPT-like models struggle with generating factual statements if no context is provided. They have some general knowledge but cannot guarantee to produce a valid answer consistently. Thus, it is better to provide some facts we know to be true, so the model can choose the valid parts and extract them from all the provided contextual data to give a comprehensive answer. [Vector databases, such as Qdrant](https://qdrant.tech/), are of great help here, as their ability to perform a [semantic search](https://qdrant.tech/documentation/tutorials/search-beginners/) over a huge knowledge base is crucial to preselect some possibly valid documents, so they can be provided to the LLM. That's also one of the **chains** implemented in [LangChain](https://qdrant.tech/documentation/frameworks/langchain/), called `VectorDBQA`. Qdrant is integrated with the library, so this chain can be built effortlessly.

### The Two-Model Approach

Surprisingly enough, two models are required to set things up. First of all, we need an embedding model that will convert the set of facts into vectors and store them in Qdrant. That's an identical process to any other semantic search application. We're going to use one of the `SentenceTransformers` models, so it can be hosted locally. The embeddings created by that model will be put into Qdrant and used to retrieve the most similar documents, given the query.

However, when we receive a query, there are two steps involved. First of all, we ask Qdrant to provide the most relevant documents and simply combine all of them into a single text. Then, we build a prompt for the LLM (in our case [OpenAI](https://openai.com/)), including those documents as context, together with the question asked. So the input to the LLM looks like the following:

```text
Use the following pieces of context to answer the question at the end. If you don't know the answer,
just say that you don't know, don't try to make up an answer.

It's as certain as 2 + 2 = 4

...

Question: How much is 2 + 2?
Helpful Answer:
```

There might be several context documents combined, and it is solely up to the LLM to choose the right piece of content. But our expectation is that the model will respond with just `4`.

## Why do we need two different models?

They solve different tasks. The first model performs feature extraction by converting the text into vectors, while the second one helps in text generation or summarization. Disclaimer: This is not the only way to solve that task with LangChain. Such a chain is called `stuff` in the library nomenclature.

![](/articles_data/langchain-integration/flow-diagram.png)

Enough theory! This sounds like a pretty complex application, as it involves several systems. But with LangChain, it might be implemented in just a few lines of code, thanks to the recent integration with [Qdrant](https://qdrant.tech/). We're not even going to work directly with `QdrantClient`, as everything is already done in the background by LangChain. If you want to get into the source code right away, all the processing is available as a [Google Colab notebook](https://colab.research.google.com/drive/19RxxkZdnq_YqBH5kBV10Rt0Rax-kminD?usp=sharing).

## How to Implement Question Answering with LangChain and Qdrant

### Step 1: Configuration

A journey of a thousand miles begins with a single step, in our case with the configuration of all the services. We'll be using [Qdrant Cloud](https://cloud.qdrant.io), so we need an API key. The same goes for OpenAI - the API key has to be obtained from their website.

![](/articles_data/langchain-integration/code-configuration.png)

### Step 2: Building the knowledge base

We also need some facts from which the answers will be generated. There are plenty of public datasets available, and [Natural Questions](https://ai.google.com/research/NaturalQuestions/visualization) is one of them. It consists of the whole HTML content of the websites it was scraped from. That means we need some preprocessing to extract plain text content. As a result, we're going to have two lists of strings - one for questions and the other one for the answers.

The answers have to be vectorized with the first of our models. The `sentence-transformers/all-mpnet-base-v2` model is one of the possibilities, but there are some other options available. LangChain will handle that part of the process in a single function call.

![](/articles_data/langchain-integration/code-qdrant.png)

### Step 3: Setting up QA with Qdrant in a loop

`VectorDBQA` is a chain that performs the process described above. So it, first of all, loads some facts from Qdrant and then feeds them into the OpenAI LLM, which should analyze them to find the answer to a given question. The last thing to do before using it is to put things together, also with a single function call.

![](/articles_data/langchain-integration/code-vectordbqa.png)

### Step 4: Testing out the chain

And that's it! We can now ask some questions, and LangChain will perform all the required processing to find the answer in the provided context.

![](/articles_data/langchain-integration/code-answering.png)

```text
> what kind of music is scott joplin most famous for

Scott Joplin is most famous for composing ragtime music.

> who died from the band faith no more

Chuck Mosley

> when does maggie come on grey's anatomy

Maggie first appears in season 10, episode 1, which aired on September 26, 2013.

> can't take my eyes off you lyrics meaning

I don't know.
> who lasted the longest on alone season 2

David McIntyre lasted the longest on Alone season 2, with a total of 66 days.
```

The great thing about such a setup is that the knowledge base can be easily extended with new facts, and those will be included in the prompts sent to the LLM later on - provided, of course, that they are similar enough to a given question to rank among the top results returned by Qdrant. If you want to run the chain on your own, the simplest way to reproduce it is to open the [Google Colab notebook](https://colab.research.google.com/drive/19RxxkZdnq_YqBH5kBV10Rt0Rax-kminD?usp=sharing).
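For reference, the core of the chain described above can be sketched in a few lines of Python. Treat this as a hedged approximation: import paths and function signatures have changed across LangChain releases, the Qdrant URL and API keys are placeholders, and dataset preprocessing is omitted:

```python
from langchain.chains import VectorDBQA
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.llms import OpenAI
from langchain.vectorstores import Qdrant

# Plain-text answers extracted from the Natural Questions dataset.
answers = ["Scott Joplin was an American composer known as the King of Ragtime.", "..."]

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")

# LangChain creates the collection and uploads the embedded documents in one call.
doc_store = Qdrant.from_texts(
    answers,
    embeddings,
    url="https://your-cluster.cloud.qdrant.io",  # hypothetical Qdrant Cloud URL
    api_key="your-qdrant-api-key",
    collection_name="natural-questions",
)

qa = VectorDBQA.from_chain_type(
    llm=OpenAI(openai_api_key="your-openai-api-key"),
    chain_type="stuff",
    vectorstore=doc_store,
)

print(qa.run("what kind of music is scott joplin most famous for"))
```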
articles/langchain-integration.md
---
title: "Optimizing OpenAI Embeddings: Enhance Efficiency with Qdrant's Binary Quantization"
draft: false
slug: binary-quantization-openai
short_description: Use Qdrant's Binary Quantization to enhance OpenAI embeddings
description: Explore how Qdrant's Binary Quantization can significantly improve the efficiency and performance of OpenAI's Ada-003 embeddings. Learn best practices for real-time search applications.
preview_dir: /articles_data/binary-quantization-openai/preview
preview_image: /articles-data/binary-quantization-openai/Article-Image.png
small_preview_image: /articles_data/binary-quantization-openai/icon.svg
social_preview_image: /articles_data/binary-quantization-openai/preview/social-preview.png
title_preview_image: /articles_data/binary-quantization-openai/preview/preview.webp
date: 2024-02-21T13:12:08-08:00
author: Nirant Kasliwal
author_link: https://nirantk.com/about/
featured: false
tags:
  - OpenAI
  - binary quantization
  - embeddings
weight: -130
aliases: [ /blog/binary-quantization-openai/ ]
---

OpenAI Ada-003 embeddings are a powerful tool for natural language processing (NLP). However, the size of the embeddings is a challenge, especially with real-time search and retrieval. In this article, we explore how you can use Qdrant's Binary Quantization to enhance the performance and efficiency of OpenAI embeddings.

In this post, we discuss:

- The significance of OpenAI embeddings and real-world challenges.
- Qdrant's Binary Quantization, and how it can improve the performance of OpenAI embeddings
- Results of an experiment that highlights improvements in search efficiency and accuracy
- Implications of these findings for real-world applications
- Best practices for leveraging Binary Quantization to enhance OpenAI embeddings

If you're new to Binary Quantization, consider reading our article which walks you through the concept and [how to use it with Qdrant](/articles/binary-quantization/).

You can also try out these techniques as described in [Binary Quantization OpenAI](https://github.com/qdrant/examples/blob/openai-3/binary-quantization-openai/README.md), which includes Jupyter notebooks.

## New OpenAI embeddings: performance and changes

As the technology of embedding models has advanced, demand has grown. Users are increasingly looking for powerful and efficient text-embedding models. OpenAI's Ada-003 embeddings offer state-of-the-art performance on a wide range of NLP tasks, including those noted in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) and [MIRACL](https://openai.com/blog/new-embedding-models-and-api-updates).

These models include multilingual support in over 100 languages. The transition from text-embedding-ada-002 to text-embedding-3-large has led to a significant jump in performance scores (from 31.4% to 54.9% on MIRACL).

#### Matryoshka representation learning

The new OpenAI models have been trained with a novel approach called "[Matryoshka Representation Learning](https://aniketrege.github.io/blog/2024/mrl/)". Developers can set up embeddings of different sizes (number of dimensions). In this post, we use the small and large variants. Developers can select the variant that balances accuracy and size. Here, we show that the accuracy of binary quantization is quite good across different dimensions - for both models.

## Enhanced performance and efficiency with binary quantization

By reducing storage needs, you can scale applications with lower costs. This addresses a critical challenge posed by the original embedding sizes.
Binary Quantization also speeds up the search process. It simplifies the complex distance calculations between vectors into more manageable bitwise operations, which supports potentially real-time searches across vast datasets.

The accompanying graph illustrates the promising accuracy levels achievable with binary quantization across different model sizes, showcasing its practicality without severely compromising on performance. This dual advantage of storage reduction and accelerated search capabilities underscores the transformative potential of Binary Quantization in deploying OpenAI embeddings more effectively across various real-world applications.

![](/blog/openai/Accuracy_Models.png)

The efficiency gains from Binary Quantization are as follows:

- Reduced storage footprint: It helps with large-scale datasets. It also saves on memory, and scales up to 30x at the same cost.
- Enhanced speed of data retrieval: Smaller data sizes generally lead to faster searches.
- Accelerated search process: Complex distance calculations between vectors are simplified into bitwise operations. This enables real-time querying even in extensive databases.

### Experiment setup: OpenAI embeddings in focus

To measure Binary Quantization's impact on search efficiency and accuracy, we designed our experiment around OpenAI text-embedding models. These models, which capture nuanced linguistic features and semantic relationships, are the backbone of our analysis. We then delve deep into the potential enhancements offered by Qdrant's Binary Quantization feature. This approach not only leverages the high-caliber OpenAI embeddings but also provides a broad basis for evaluating the search mechanism under scrutiny.

#### Dataset

The research employs 100K random samples from the [OpenAI 1M](https://huggingface.co/datasets/KShivendu/dbpedia-entities-openai-1M) dataset, focusing on 100 randomly selected records. These records serve as queries in the experiment, aiming to assess how Binary Quantization influences search efficiency and precision within the dataset. We then use the embeddings of the queries to search for the nearest neighbors in the dataset.

#### Parameters: oversampling, rescoring, and search limits

For each record, we run a parameter sweep over oversampling factors, rescoring, and search limits. We can then understand the impact of these parameters on search accuracy and efficiency. Our experiment was designed to assess the impact of Binary Quantization under various conditions, based on the following parameters:

- **Oversampling**: By oversampling, we can limit the loss of information inherent in quantization. This also helps to preserve the semantic richness of your OpenAI embeddings. We experimented with different oversampling factors and identified the impact on the accuracy and efficiency of search. Spoiler: higher oversampling factors tend to improve the accuracy of searches. However, they usually require more computational resources.

- **Rescoring**: Rescoring refines the first results of an initial binary search. This process leverages the original high-dimensional vectors to refine the search results, **always** improving accuracy. We toggled rescoring on and off to measure its effectiveness when combined with Binary Quantization. We also measured the impact on search performance.

- **Search Limits**: We specify the number of results returned from the search process. We experimented with various search limits to measure their impact on accuracy and efficiency.
We explored the trade-offs between search depth and performance. The results provide insight for applications with different precision and speed requirements.

Through this detailed setup, our experiment sought to shed light on the nuanced interplay between Binary Quantization and the high-quality embeddings produced by OpenAI's models. By meticulously adjusting and observing the outcomes under different conditions, we aimed to uncover actionable insights that could empower users to harness the full potential of Qdrant in combination with OpenAI's embeddings, regardless of their specific application needs.

### Results: binary quantization's impact on OpenAI embeddings

To analyze the impact of rescoring (`True` or `False`), we compared results across different model configurations and search limits. Rescoring sets up a more precise search, based on results from an initial query.

#### Rescoring

![Graph that measures the impact of rescoring](/blog/openai/Rescoring_Impact.png)

Here are some key observations on the impact of rescoring (`True` or `False`):

1. **Significantly Improved Accuracy**:
   - Across all models and dimension configurations, enabling rescoring (`True`) consistently results in higher accuracy scores compared to when rescoring is disabled (`False`).
   - The improvement in accuracy holds across various search limits (10, 20, 50, 100).
2. **Model and Dimension Specific Observations**:
   - For the `text-embedding-3-large` model with 3072 dimensions, rescoring boosts the accuracy from an average of about 76-77% without rescoring to 97-99% with rescoring, depending on the search limit and oversampling rate.
   - The accuracy improvement with increased oversampling is more pronounced when rescoring is enabled, indicating a better utilization of the additional binary codes in refining search results.
   - With the `text-embedding-3-small` model at 512 dimensions, accuracy increases from around 53-55% without rescoring to 71-91% with rescoring, highlighting the significant impact of rescoring, especially at lower dimensions.

   In contrast, for lower dimension models (such as text-embedding-3-small with 512 dimensions), the incremental accuracy gains from increased oversampling levels are less significant, even with rescoring enabled. This suggests a diminishing return on accuracy improvement with higher oversampling in lower dimension spaces.
3. **Influence of Search Limit**:
   - The performance gain from rescoring is relatively stable across different search limits, suggesting that rescoring consistently enhances accuracy regardless of the number of top results considered.

In summary, enabling rescoring dramatically improves search accuracy across all tested configurations. It is a crucial feature for applications where precision is paramount. The consistent performance boost provided by rescoring underscores its value in refining search results, particularly when working with complex, high-dimensional data like OpenAI embeddings. This enhancement is critical for applications that demand high accuracy, such as semantic search, content discovery, and recommendation systems, where the quality of search results directly impacts user experience and satisfaction.

### Dataset combinations

For those exploring the integration of text embedding models with Qdrant, it's crucial to consider various model configurations for optimal performance. The dataset combinations defined below illustrate different configurations to test against Qdrant.
These combinations vary by two primary attributes: 1. **Model Name**: Signifying the specific text embedding model variant, such as "text-embedding-3-large" or "text-embedding-3-small". This distinction correlates with the model's capacity, with "large" models offering more detailed embeddings at the cost of increased computational resources. 2. **Dimensions**: This refers to the size of the vector embeddings produced by the model. Options range from 512 to 3072 dimensions. Higher dimensions could lead to more precise embeddings but might also increase the search time and memory usage in Qdrant. Optimizing these parameters is a balancing act between search accuracy and resource efficiency. Testing across these combinations allows users to identify the configuration that best meets their specific needs, considering the trade-offs between computational resources and the quality of search results. ```python dataset_combinations = [ { "model_name": "text-embedding-3-large", "dimensions": 3072, }, { "model_name": "text-embedding-3-large", "dimensions": 1024, }, { "model_name": "text-embedding-3-large", "dimensions": 1536, }, { "model_name": "text-embedding-3-small", "dimensions": 512, }, { "model_name": "text-embedding-3-small", "dimensions": 1024, }, { "model_name": "text-embedding-3-small", "dimensions": 1536, }, ] ``` #### Exploring dataset combinations and their impacts on model performance The code snippet iterates through predefined dataset and model combinations. For each combination, characterized by the model name and its dimensions, the corresponding experiment's results are loaded. These results, which are stored in JSON format, include performance metrics like accuracy under different configurations: with and without oversampling, and with and without a rescore step. Following the extraction of these metrics, the code computes the average accuracy across different settings, excluding extreme cases of very low limits (specifically, limits of 1 and 5). This computation groups the results by oversampling, rescore presence, and limit, before calculating the mean accuracy for each subgroup. After gathering and processing this data, the average accuracies are organized into a pivot table. This table is indexed by the limit (the number of top results considered), and columns are formed based on combinations of oversampling and rescoring. 
```python
import pandas as pd

for combination in dataset_combinations:
    model_name = combination["model_name"]
    dimensions = combination["dimensions"]
    print(f"Model: {model_name}, dimensions: {dimensions}")
    results = pd.read_json(f"../results/results-{model_name}-{dimensions}.json", lines=True)
    average_accuracy = results[results["limit"] != 1]
    average_accuracy = average_accuracy[average_accuracy["limit"] != 5]
    average_accuracy = average_accuracy.groupby(["oversampling", "rescore", "limit"])[
        "accuracy"
    ].mean()
    average_accuracy = average_accuracy.reset_index()
    acc = average_accuracy.pivot(
        index="limit", columns=["oversampling", "rescore"], values="accuracy"
    )
    print(acc)
```

Here is a selected slice of these results, with `rescore=True`:

|Method|Dimensionality|Test Dataset|Recall|Oversampling|
|-|-|-|-|-|
|OpenAI text-embedding-3-large (highest MTEB score from the table) |3072|[DBpedia 1M](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-large-3072-1M) | 0.9966|3x|
|OpenAI text-embedding-3-small|1536|[DBpedia 100K](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-small-1536-100K)| 0.9847|3x|
|OpenAI text-embedding-3-large|1536|[DBpedia 1M](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-large-1536-1M)| 0.9826|3x|

#### Impact of oversampling

You can use oversampling in machine learning to counteract imbalances in datasets. It works well when one class significantly outnumbers others. This imbalance can skew the performance of models, which favors the majority class at the expense of others. By creating additional samples from the minority classes, oversampling helps equalize the representation of classes in the training dataset, thus enabling more fair and accurate modeling of real-world scenarios.

The screenshot showcases the effect of oversampling on model performance metrics. While the actual metrics aren't shown, we expect to see improvements in measures such as precision, recall, or F1-score. These improvements illustrate the effectiveness of oversampling in creating a more balanced dataset. It allows the model to learn a better representation of all classes, not just the dominant one. Without an explicit code snippet or output, we focus on the role of oversampling in model fairness and performance. Through graphical representation, you can set up before-and-after comparisons. These comparisons illustrate the contribution to machine learning projects.

![Measuring the impact of oversampling](/blog/openai/Oversampling_Impact.png)

### Leveraging binary quantization: best practices

We recommend the following best practices for leveraging Binary Quantization to enhance OpenAI embeddings:

1. Embedding Model: Use the text-embedding-3-large from MTEB. It is the most accurate among those tested.
2. Dimensions: Use the highest dimension available for the model, to maximize accuracy. The results hold for English and other languages.
3. Oversampling: Use an oversampling factor of 3 for the best balance between accuracy and efficiency. This factor is suitable for a wide range of applications.
4. Rescoring: Enable rescoring to improve the accuracy of search results.
5. RAM: Store the full vectors and payload on disk. Limit what you load from memory to the binary quantization index. This helps reduce the memory footprint and improve the overall efficiency of the system. A minimal configuration sketch following these recommendations is shown below.
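To make these recommendations concrete, here is a small configuration sketch using the Python client. The collection name and placeholder query vector are illustrative; the quantization and search parameters mirror the settings discussed above:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient("localhost", port=6333)

# Full vectors live on disk; only the binary-quantized index is kept in RAM.
client.create_collection(
    collection_name="openai-large",
    vectors_config=models.VectorParams(
        size=3072,
        distance=models.Distance.COSINE,
        on_disk=True,
    ),
    quantization_config=models.BinaryQuantization(
        binary=models.BinaryQuantizationConfig(always_ram=True),
    ),
)

query_embedding = [0.0] * 3072  # placeholder for a real OpenAI query embedding

# Query with 3x oversampling and rescoring against the original vectors.
hits = client.search(
    collection_name="openai-large",
    query_vector=query_embedding,
    limit=100,
    search_params=models.SearchParams(
        quantization=models.QuantizationSearchParams(
            ignore=False,
            rescore=True,
            oversampling=3.0,
        )
    ),
)
```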
The incremental latency from the disk read is negligible compared to the latency savings from the binary scoring in Qdrant, which uses SIMD instructions where possible.

## What's next?

Binary quantization is exceptional if you need to work with large volumes of data under high recall expectations. You can try this feature either by spinning up a [Qdrant container image](https://hub.docker.com/r/qdrant/qdrant) locally or by having us create one for you through a [free account](https://cloud.qdrant.io/login) in our cloud-hosted service.

The article gives examples of datasets and configurations you can use to get going. Our documentation covers [adding large datasets](/documentation/tutorials/bulk-upload/) to your Qdrant instance as well as [more quantization methods](/documentation/guides/quantization/).

Want to discuss these findings and learn more about Binary Quantization? [Join our Discord community.](https://discord.gg/qdrant)
articles/binary-quantization-openai.md
---
title: "How to Implement Multitenancy and Custom Sharding in Qdrant"
short_description: "Explore how Qdrant's multitenancy and custom sharding streamline machine-learning operations, enhancing scalability and data security."
description: "Discover how multitenancy and custom sharding in Qdrant can streamline your machine-learning operations. Learn how to scale efficiently and manage data securely."
social_preview_image: /articles_data/multitenancy/social_preview.png
preview_dir: /articles_data/multitenancy/preview
small_preview_image: /articles_data/multitenancy/icon.svg
weight: -120
author: David Myriel
date: 2024-02-06T13:21:00.000Z
draft: false
keywords:
  - multitenancy
  - custom sharding
  - multiple partitions
  - vector database
---

# Scaling Your Machine Learning Setup: The Power of Multitenancy and Custom Sharding in Qdrant

We are seeing the topics of [multitenancy](/documentation/guides/multiple-partitions/) and [distributed deployment](/documentation/guides/distributed_deployment/#sharding) pop up daily on our [Discord support channel](https://qdrant.to/discord). This tells us that many of you are looking to scale Qdrant along with the rest of your machine learning setup.

Whether you are building a bank fraud-detection system, [RAG](https://qdrant.tech/articles/what-is-rag-in-ai/) for e-commerce, or services for the federal government - you will need to leverage a multitenant architecture to scale your product. In the world of SaaS and enterprise apps, this setup is the norm. It will considerably increase your application's performance and lower your hosting costs.

## Multitenancy & custom sharding with Qdrant

We have developed two major features just for this. __You can now scale a single Qdrant cluster and support all of your customers worldwide.__ Under [multitenancy](/documentation/guides/multiple-partitions/), each customer's data is completely isolated and only accessible by them. At times, if this data is location-sensitive, Qdrant also gives you the option to divide your cluster by region or other criteria that further secure your customers' access. This is called [custom sharding](/documentation/guides/distributed_deployment/#user-defined-sharding).

Combining these two will result in an efficiently-partitioned architecture that further leverages the convenience of a single Qdrant cluster. This article will briefly explain the benefits and show how you can get started using both features.

## One collection, many tenants

When working with Qdrant, you can upsert all your data to a single collection, and then partition each vector via its payload. This means that all your users are leveraging the power of a single Qdrant cluster, but their data is still isolated within the collection. Let's take a look at a two-tenant collection:

**Figure 1:** Each individual vector is assigned a specific payload that denotes which tenant it belongs to. This is how a large number of different tenants can share a single Qdrant collection.

![Qdrant Multitenancy](/articles_data/multitenancy/multitenancy-single.png)

Qdrant is built to excel in a single collection with a vast number of tenants. You should only create multiple collections when your data is not homogeneous or if users' vectors are created by different embedding models. Creating too many collections may result in resource overhead and cause dependencies. This can increase costs and affect overall performance.

## Sharding your database

With Qdrant, you can also specify a shard for each vector individually.
This feature is useful if you want to [control where your data is kept in the cluster](/documentation/guides/distributed_deployment/#sharding). For example, one set of vectors can be assigned to one shard on its own node, while another set can be on a completely different node.

During vector search, your operations will be able to hit only the subset of shards they actually need. In massive-scale deployments, __this can significantly improve the performance of operations that do not require the whole collection to be scanned__.

This works in the other direction as well. Whenever you search for something, you can specify a shard or several shards and Qdrant will know where to find them. It will avoid asking all machines in your cluster for results. This will minimize overhead and maximize performance.

### Common use cases

A clear use-case for this feature is managing a multitenant collection, where each tenant (be it a user or an organization) is assumed to be segregated, so they can have their data stored in separate shards.

Sharding solves the problem of region-based data placement, whereby certain data needs to be kept within specific locations. To do this, however, you will need to [move your shards between nodes](/documentation/guides/distributed_deployment/#moving-shards).

**Figure 2:** Users can both upsert and query shards that are relevant to them, all within the same collection. Regional sharding can help avoid cross-continental traffic.

![Qdrant Multitenancy](/articles_data/multitenancy/shards.png)

Custom sharding also gives you precise control over other use cases. A time-based data placement means that data streams can index shards that represent the latest updates. If you organize your shards by date, you can have great control over the recency of retrieved data. This is relevant for social media platforms, which greatly rely on time-sensitive data.

## Before I go any further... how secure is my user data?

By design, Qdrant offers three levels of isolation. We initially introduced collection-based isolation, but your scaled setup has to move beyond this level. In this scenario, you will leverage payload-based isolation (from multitenancy) and resource-based isolation (from sharding). The ultimate goal is to have a single collection, where you can manipulate and customize placement of shards inside your cluster more precisely and avoid any kind of overhead. The diagram below shows the arrangement of your data within a two-tier isolation setup.

**Figure 3:** Users can query the collection based on two filters: the `group_id` and the individual `shard_key_selector`. This gives your data two additional levels of isolation.

![Qdrant Multitenancy](/articles_data/multitenancy/multitenancy.png)

## Create custom shards for a single collection

When creating a collection, you will need to configure user-defined sharding. This lets you control the shard placement of your data, so that operations can hit only the subset of shards they actually need. In big clusters, this can significantly improve the performance of operations, since you won't need to go through the entire collection to retrieve data.

```python
client.create_collection(
    collection_name="{tenant_data}",
    shard_number=2,
    sharding_method=models.ShardingMethod.CUSTOM,
    # ... other collection parameters
)
client.create_shard_key("{tenant_data}", "canada")
client.create_shard_key("{tenant_data}", "germany")
```

In this example, your cluster is divided between Germany and Canada.
Canadian and German law differ when it comes to international data transfer. Let's say you are creating a RAG application that supports the healthcare industry. Your Canadian customer data will have to be clearly separated from your German customer data for compliance purposes. Even though it is part of the same collection, data from each shard is isolated from other shards and can be retrieved as such. For additional examples on shards and retrieval, consult the [Distributed Deployments](/documentation/guides/distributed_deployment/) documentation and the [Qdrant Client specification](https://python-client.qdrant.tech).

## Configure a multitenant setup for users

Let's continue and start adding data. As you upsert your vectors to your new collection, you can add a `group_id` field to each vector. If you do this, Qdrant will assign each vector to its respective group.

Additionally, each vector can now be allocated to a shard. You can specify the `shard_key_selector` for each individual vector. In this example, you are upserting data belonging to `tenant_1` to the Canadian region.

```python
client.upsert(
    collection_name="{tenant_data}",
    points=[
        models.PointStruct(
            id=1,
            payload={"group_id": "tenant_1"},
            vector=[0.9, 0.1, 0.1],
        ),
        models.PointStruct(
            id=2,
            payload={"group_id": "tenant_1"},
            vector=[0.1, 0.9, 0.1],
        ),
    ],
    shard_key_selector="canada",
)
```

Keep in mind that the data for each `group_id` is isolated. In the example below, `tenant_1` vectors are kept separate from `tenant_2`. The first tenant will be able to access their data in the Canadian portion of the cluster. However, as shown below, `tenant_2` might only be able to retrieve information hosted in Germany.

```python
client.upsert(
    collection_name="{tenant_data}",
    points=[
        models.PointStruct(
            id=3,
            payload={"group_id": "tenant_2"},
            vector=[0.1, 0.1, 0.9],
        ),
    ],
    shard_key_selector="germany",
)
```

## Retrieve data via filters

The access-control setup is complete once you specify the criteria for data retrieval. When searching for vectors, you need to use a `query_filter` along with `group_id` to filter vectors for each user.

```python
client.search(
    collection_name="{tenant_data}",
    query_filter=models.Filter(
        must=[
            models.FieldCondition(
                key="group_id",
                match=models.MatchValue(
                    value="tenant_1",
                ),
            ),
        ]
    ),
    query_vector=[0.1, 0.1, 0.9],
    limit=10,
)
```

## Performance considerations

The speed of indexation may become a bottleneck if you are adding large amounts of data in this way, as each user's vector will be indexed into the same collection. To avoid this bottleneck, consider _bypassing the construction of a global vector index_ for the entire collection and building it only for individual groups instead.

By adopting this strategy, Qdrant will index vectors for each user independently, significantly accelerating the process.

To implement this approach, you should:

1. Set `payload_m` in the HNSW configuration to a non-zero value, such as 16.
2. Set `m` in the HNSW config to 0. This will disable building a global index for the whole collection.

```python
from qdrant_client import QdrantClient, models

client = QdrantClient("localhost", port=6333)

client.create_collection(
    collection_name="{tenant_data}",
    vectors_config=models.VectorParams(size=768, distance=models.Distance.COSINE),
    hnsw_config=models.HnswConfigDiff(
        payload_m=16,
        m=0,
    ),
)
```

3. Create a keyword payload index for the `group_id` field.
```python client.create_payload_index( collection_name="{tenant_data}", field_name="group_id", field_schema=models.PayloadSchemaType.KEYWORD, ) ``` > Note: Keep in mind that global requests (without the `group_id` filter) will be slower since they will necessitate scanning all groups to identify the nearest neighbors. ## Explore multitenancy and custom sharding in Qdrant for scalable solutions Qdrant is ready to support a massive-scale architecture for your machine learning project. If you want to see whether our [vector database](https://qdrant.tech/) is right for you, try the [quickstart tutorial](/documentation/quick-start/) or read our [docs and tutorials](/documentation/). To spin up a free instance of Qdrant, sign up for [Qdrant Cloud](https://qdrant.to/cloud) - no strings attached. Get support or share ideas in our [Discord](https://qdrant.to/discord) community. This is where we talk about vector search theory, publish examples and demos and discuss vector database setups.
articles/multitenancy.md
--- title: "What is RAG: Understanding Retrieval-Augmented Generation" draft: false slug: what-is-rag-in-ai? short_description: What is RAG? description: Explore how RAG enables LLMs to retrieve and utilize relevant external data when generating responses, rather than being limited to their original training data alone. preview_dir: /articles_data/what-is-rag-in-ai/preview weight: -150 social_preview_image: /articles_data/what-is-rag-in-ai/preview/social_preview.jpg small_preview_image: /articles_data/what-is-rag-in-ai/icon.svg date: 2024-03-19T9:29:33-03:00 author: Sabrina Aquino author_link: https://github.com/sabrinaaquino featured: true tags: - retrieval augmented generation - what is rag - embeddings - llm rag - rag application --- > Retrieval-augmented generation (RAG) integrates external information retrieval into the process of generating responses by Large Language Models (LLMs). It searches a database for information beyond its pre-trained knowledge base, significantly improving the accuracy and relevance of the generated responses. Language models have exploded on the internet ever since ChatGPT came out, and rightfully so. They can write essays, code entire programs, and even make memes (though we’re still deciding on whether that's a good thing). But as brilliant as these chatbots become, they still have **limitations** in tasks requiring external knowledge and factual information. Yes, it can describe the honeybee's waggle dance in excruciating detail. But they become far more valuable if they can generate insights from **any data** that we provide, rather than just their original training data. Since retraining those large language models from scratch costs millions of dollars and takes months, we need better ways to give our existing LLMs access to our custom data. While you could be more creative with your prompts, it is only a short-term solution. LLMs can consider only a **limited** amount of text in their responses, known as a [context window](https://www.hopsworks.ai/dictionary/context-window-for-llms). Some models like GPT-3 can see up to around 12 pages of text (that’s 4,096 tokens of context). That’s not good enough for most knowledge bases. ![How a RAG works](/articles_data/what-is-rag-in-ai/how-rag-works.jpg) The image above shows how a basic RAG system works. Before forwarding the question to the LLM, we have a layer that searches our knowledge base for the "relevant knowledge" to answer the user query. Specifically, in this case, the spending data from the last month. Our LLM can now generate a **relevant non-hallucinated** response about our budget. As your data grows, you’ll need efficient ways to identify the most relevant information for your LLM's limited memory. This is where you’ll want a proper way to store and retrieve the specific data you’ll need for your query, without needing the LLM to remember it. **Vector databases** store information as **vector embeddings**. This format supports efficient similarity searches to retrieve relevant data for your query. For example, Qdrant is specifically designed to perform fast, even in scenarios dealing with billions of vectors. This article will focus on RAG systems and architecture. If you’re interested in learning more about vector search, we recommend the following articles: [What is a Vector Database?](/articles/what-is-a-vector-database/) and [What are Vector Embeddings?](/articles/what-are-embeddings/). ## RAG architecture At its core, a RAG architecture includes the **retriever** and the **generator**. 
Let's start by understanding what each of these components does. ### The Retriever When you ask a question to the retriever, it uses **similarity search** to scan through a vast knowledge base of vector embeddings. It then pulls out the most **relevant** vectors to help answer that query. There are a few different techniques it can use to know what’s relevant: #### How indexing works in RAG retrievers The indexing process organizes the data into your vector database in a way that makes it easily searchable. This allows the RAG to access relevant information when responding to a query. ![How indexing works](/articles_data/what-is-rag-in-ai/how-indexing-works.jpg) As shown in the image above, here’s the process: * Start with a _loader_ that gathers _documents_ containing your data. These documents could be anything from articles and books to web pages and social media posts. * Next, a _splitter_ divides the documents into smaller chunks, typically sentences or paragraphs. * This is because RAG models work better with smaller pieces of text. In the diagram, these are _document snippets_. * Each text chunk is then fed into an _embedding machine_. This machine uses complex algorithms to convert the text into [vector embeddings](/articles/what-are-embeddings/). All the generated vector embeddings are stored in a knowledge base of indexed information. This supports efficient retrieval of similar pieces of information when needed. #### Query vectorization Once you have vectorized your knowledge base you can do the same to the user query. When the model sees a new query, it uses the same preprocessing and embedding techniques. This ensures that the query vector is compatible with the document vectors in the index. ![How retrieval works](/articles_data/what-is-rag-in-ai/how-retrieval-works.jpg) #### Retrieval of relevant documents When the system needs to find the most relevant documents or passages to answer a query, it utilizes vector similarity techniques. **Vector similarity** is a fundamental concept in machine learning and natural language processing (NLP) that quantifies the resemblance between vectors, which are mathematical representations of data points. The system can employ different vector similarity strategies depending on the type of vectors used to represent the data: ##### Sparse vector representations A sparse vector is characterized by a high dimensionality, with most of its elements being zero. The classic approach is **keyword search**, which scans documents for the exact words or phrases in the query. The search creates sparse vector representations of documents by counting word occurrences and inversely weighting common words. Queries with rarer words get prioritized. ![Sparse vector representation](/articles_data/what-is-rag-in-ai/sparse-vectors.jpg) [TF-IDF](https://en.wikipedia.org/wiki/Tf%E2%80%93idf) (Term Frequency-Inverse Document Frequency) and [BM25](https://en.wikipedia.org/wiki/Okapi_BM25) are two classic related algorithms. They're simple and computationally efficient. However, they can struggle with synonyms and don't always capture semantic similarities. If you’re interested in going deeper, refer to our article on [Sparse Vectors](/articles/sparse-vectors/). ##### Dense vector embeddings This approach uses large language models like [BERT](https://en.wikipedia.org/wiki/BERT_(language_model)) to encode the query and passages into dense vector embeddings. These models are compact numerical representations that capture semantic meaning. 
Vector databases like Qdrant store these embeddings, allowing retrieval based on **semantic similarity** rather than just keywords, using distance metrics like cosine similarity.

This allows the retriever to match based on semantic understanding rather than just keywords. So if I ask about "compounds that cause BO," it can retrieve relevant info about "molecules that create body odor" even if those exact words weren't used.

We explain more about it in our [What are Vector Embeddings](/articles/what-are-embeddings/) article.

#### Hybrid search

However, neither keyword search nor vector search is always perfect. Keyword search may miss relevant information expressed differently, while vector search can sometimes struggle with specificity or neglect important statistical word patterns. Hybrid methods aim to combine the strengths of different techniques.

![Hybrid search overview](/articles_data/what-is-rag-in-ai/hybrid-search.jpg)

Some common hybrid approaches include:

* Using keyword search to get an initial set of candidate documents. Next, the documents are re-ranked/re-scored using semantic vector representations.
* Starting with semantic vectors to find generally topically relevant documents. Next, the documents are filtered/re-ranked based on keyword matches or other metadata.
* Considering both semantic vector closeness and statistical keyword patterns/weights in a combined scoring model.
* Having multiple stages where different techniques are applied. One example: start with an initial keyword retrieval, followed by semantic re-ranking, then a final re-ranking using even more complex models.

When you combine the powers of different search methods in a complementary way, you can provide higher quality, more comprehensive results. Check out our article on [Hybrid Search](/articles/hybrid-search/) if you'd like to learn more.

### The Generator

With the top relevant passages retrieved, it's now the generator's job to produce a final answer by synthesizing and expressing that information in natural language.

The LLM is typically a model like GPT, BART or T5, trained on massive datasets to understand and generate human-like text. It now takes not only the query (or question) as input but also the relevant documents or passages that the retriever identified as potentially containing the answer to generate its response.

![How a Generator works](/articles_data/what-is-rag-in-ai/how-generation-works.png)

The retriever and generator don't operate in isolation. The image below shows how the output of the retrieval feeds the generator to produce the final generated response.

![The entire architecture of a RAG system](/articles_data/what-is-rag-in-ai/rag-system.jpg)

## Where is RAG being used?

Because of their more knowledgeable and contextual responses, we can find RAG models being applied in many areas today, especially those that need factual accuracy and knowledge depth.

### Real-World Applications:

**Question answering:** This is perhaps the most prominent use case for RAG models. They power advanced question-answering systems that can retrieve relevant information from large knowledge bases and then generate fluent answers.
**Language generation:** RAG enables more factual and contextualized text generation, such as contextualized summarization from multiple sources.

**Data-to-text generation:** By retrieving relevant structured data, RAG models can generate product or business intelligence reports from databases, or describe insights from data visualizations and charts.

**Multimedia understanding:** RAG isn't limited to text - it can retrieve multimodal information like images, video, and audio to enhance understanding, for example by answering questions about images or videos after retrieving relevant textual context.

## Creating your first RAG chatbot with Langchain, Groq, and OpenAI

Are you ready to create your own RAG chatbot from the ground up? We have a video explaining everything from the beginning. Daniel Romero will guide you through:

* Setting up your chatbot
* Preprocessing and organizing data for your chatbot's use
* Applying vector similarity search algorithms
* Enhancing the efficiency and response quality

After building your RAG chatbot, you'll be able to evaluate its performance against that of a chatbot powered solely by a Large Language Model (LLM).

<div style="max-width: 640px; margin: 0 auto; padding-bottom: 1em">
<div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;">
<iframe width="100%" height="100%" src="https://www.youtube.com/embed/O60-KuZZeQA" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;"></iframe>
</div>
</div>

## What's next?

Have a RAG project you want to bring to life? Join our [Discord community](https://discord.gg/qdrant) where we're always sharing tips and answering questions on vector search and retrieval.

Learn more about how to properly evaluate your RAG responses: [Evaluating Retrieval Augmented Generation - a framework for assessment](https://superlinked.com/vectorhub/evaluating-retrieval-augmented-generation-a-framework-for-assessment).
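If you prefer starting from code rather than video, below is a deliberately small sketch of the retrieve-then-generate loop described in this article. The model names, collection name, payload field, and prompt are illustrative, and it assumes a Qdrant collection that has already been populated with embedded documents:

```python
from openai import OpenAI
from qdrant_client import QdrantClient

openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
qdrant = QdrantClient("localhost", port=6333)


def answer(question: str) -> str:
    # 1. Vectorize the user query with the same model used to index the knowledge base.
    query_vector = openai_client.embeddings.create(
        model="text-embedding-3-small",
        input=question,
    ).data[0].embedding

    # 2. Retrieve the most relevant document snippets from the vector database.
    hits = qdrant.search(
        collection_name="knowledge-base",
        query_vector=query_vector,
        limit=5,
    )
    context = "\n".join(hit.payload["text"] for hit in hits)

    # 3. Generate a grounded answer with the retrieved context in the prompt.
    completion = openai_client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return completion.choices[0].message.content


print(answer("How much did we spend on marketing last month?"))
```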
articles/what-is-rag-in-ai.md
---
title: Semantic Search As You Type
short_description: "Instant search using Qdrant"
description: To show off Qdrant's performance, we show how to do a quick search-as-you-type that will come back within a few milliseconds.
social_preview_image: /articles_data/search-as-you-type/preview/social_preview.jpg
small_preview_image: /articles_data/search-as-you-type/icon.svg
preview_dir: /articles_data/search-as-you-type/preview
weight: -2
author: Andre Bogus
author_link: https://llogiq.github.io
date: 2023-08-14T00:00:00+01:00
draft: false
keywords: search, semantic, vector, llm, integration, benchmark, recommend, performance, rust
---

Qdrant is one of the fastest vector search engines out there, so while looking for a demo to show off, we came upon the idea to do a search-as-you-type box with a fully semantic search backend. Now we already have a semantic/keyword hybrid search on our website. But that one is written in Python, which incurs some overhead for the interpreter. Naturally, I wanted to see how fast I could go using Rust.

Since Qdrant doesn't embed by itself, I had to decide on an embedding model. The prior version used the [SentenceTransformers](https://www.sbert.net/) package, which in turn employs the BERT-based [All-MiniLM-L6-V2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2/tree/main) model. This model is battle-tested and delivers fair results at speed, so, not wanting to experiment on this front, I took an [ONNX version](https://huggingface.co/optimum/all-MiniLM-L6-v2/tree/main) and ran that within the service.

The workflow looks like this:

![Search Qdrant by Embedding](/articles_data/search-as-you-type/Qdrant_Search_by_Embedding.png)

This will, after tokenizing and embedding, send a POST request to Qdrant's `/collections/site/points/search` endpoint with the following JSON:

```json
POST collections/site/points/search
{
  "vector": [-0.06716014, -0.056464013, ...(382 values omitted)],
  "limit": 5,
  "with_payload": true
}
```

Even when avoiding a network round-trip, the embedding still takes some time. As always in optimization, if you cannot do the work faster, a good solution is to avoid work altogether (please don't tell my employer). This can be done by pre-computing common prefixes and calculating embeddings for them, then storing them in a `prefix_cache` collection. Now the [`recommend`](https://api.qdrant.tech/api-reference/search/recommend-points) API method can find the best matches without doing any embedding. For now, I use short (up to and including 5 letters) prefixes, but I can also parse the logs to get the most common search terms and add them to the cache later.

![Qdrant Recommendation](/articles_data/search-as-you-type/Qdrant_Recommendation.png)

Making that work requires setting up the `prefix_cache` collection with points that have the prefix as their `point_id` and the embedding as their `vector`, which lets us do the lookup with no search or index. The `prefix_to_id` function currently uses the `u64` variant of `PointId`, which can hold eight bytes, enough for this use. If the need arises, one could instead encode the names as a UUID, hashing the input. Since I know all our prefixes are within 8 bytes, I decided against this for now.

The `recommend` endpoint works roughly the same as `search_points`, but instead of searching for a vector, Qdrant searches for one or more points (you can also give negative example points that the search engine will try to avoid in the results).
It was built to help drive recommendation engines, saving the round-trip of sending the current point's vector back to Qdrant to find more similar ones. However, Qdrant goes a bit further by allowing us to select a different collection to look up the points, which allows us to keep our `prefix_cache` collection separate from the site data. So in our case, Qdrant first looks up the point from the `prefix_cache`, takes its vector and searches for that in the `site` collection, using the precomputed embeddings from the cache. The API endpoint expects a POST of the following JSON to `/collections/site/points/recommend`:

```json
POST collections/site/points/recommend
{
  "positive": [1936024932],
  "limit": 5,
  "with_payload": true,
  "lookup_from": {
    "collection": "prefix_cache"
  }
}
```

Now I have, in the best Rust tradition, a blazingly fast semantic search.

To demo it, I used our [Qdrant documentation website](/documentation/)'s page search, replacing our previous Python implementation. So in order to not just spew empty words, here is a benchmark, showing different queries that exercise different code paths.

Since the operations themselves are far faster than the network, whose fickle nature would have swamped most measurable differences, I benchmarked both the Python and Rust services locally. I'm measuring both versions on the same AMD Ryzen 9 5900HX with 16GB RAM running Linux. The table shows the average time and error bound in milliseconds. I only measured up to a thousand concurrent requests. None of the services showed any slowdown with more requests in that range. I do not expect our service to be DDOS'd, so I didn't benchmark with more load.

Without further ado, here are the results:

| query length | Short | Long |
|---------------|-----------|------------|
| Python 🐍 | 16 ± 4 ms | 16 ± 4 ms |
| Rust 🦀 | 1½ ± ½ ms | 5 ± 1 ms |

The Rust version consistently outperforms the Python version and offers a semantic search even on few-character queries. If the prefix cache is hit (as in the short query length), the semantic search can even get more than ten times faster than the Python version. The general speed-up is due both to the relatively lower overhead of Rust + Actix Web compared to Python + FastAPI (even if the latter already performs admirably) and to using ONNX Runtime instead of SentenceTransformers for the embedding. The prefix cache gives the Rust version a real boost by doing a semantic search without doing any embedding work.

As an aside, while the millisecond differences shown here may mean relatively little for our users, whose latency will be dominated by the network in between, when typing, every millisecond more or less can make a difference in user perception. Also, search-as-you-type generates between three and five times as much load as a plain search, so the service will experience more traffic. Less time per request means being able to handle more of them.

Mission accomplished! But wait, there's more!

### Prioritizing Exact Matches and Headings

To improve the quality of the results, Qdrant can do multiple searches in parallel, and then the service puts the results in sequence, taking the first best matches. The extended code searches:

1. Text matches in titles
2. Text matches in body (paragraphs or lists)
3. Semantic matches in titles
4. Any semantic matches

Those are put together by taking them in the above order, deduplicating as necessary.
![merge workflow](/articles_data/search-as-you-type/sayt_merge.png) Instead of sending a `search` or `recommend` request, one can also send a `search/batch` or `recommend/batch` request, respectively. Each of those contain a `"searches"` property with any number of search/recommend JSON requests: ```json POST collections/site/points/search/batch { "searches": [ { "vector": [-0.06716014,-0.056464013, ...], "filter": { "must": [ { "key": "text", "match": { "text": <query> }}, { "key": "tag", "match": { "any": ["h1", "h2", "h3"] }}, ] } ..., }, { "vector": [-0.06716014,-0.056464013, ...], "filter": { "must": [ { "key": "body", "match": { "text": <query> }} ] } ..., }, { "vector": [-0.06716014,-0.056464013, ...], "filter": { "must": [ { "key": "tag", "match": { "any": ["h1", "h2", "h3"] }} ] } ..., }, { "vector": [-0.06716014,-0.056464013, ...], ..., }, ] } ``` As the queries are done in a batch request, there isn't any additional network overhead and only very modest computation overhead, yet the results will be better in many cases. The only additional complexity is to flatten the result lists and take the first 5 results, deduplicating by point ID. Now there is one final problem: The query may be short enough to take the recommend code path, but still not be in the prefix cache. In that case, doing the search *sequentially* would mean two round-trips between the service and the Qdrant instance. The solution is to *concurrently* start both requests and take the first successful non-empty result. ![sequential vs. concurrent flow](/articles_data/search-as-you-type/sayt_concurrency.png) While this means more load for the Qdrant vector search engine, this is not the limiting factor. The relevant data is already in cache in many cases, so the overhead stays within acceptable bounds, and the maximum latency in case of prefix cache misses is measurably reduced. The code is available on the [Qdrant github](https://github.com/qdrant/page-search) To sum up: Rust is fast, recommend lets us use precomputed embeddings, batch requests are awesome and one can do a semantic search in mere milliseconds.
--- title: "Vector Similarity: Going Beyond Full-Text Search | Qdrant" short_description: Explore how vector similarity enhances data discovery beyond full-text search, including diversity sampling and more! description: Discover how vector similarity expands data exploration beyond full-text search. Explore diversity sampling and more for enhanced data discovery! preview_dir: /articles_data/vector-similarity-beyond-search/preview small_preview_image: /articles_data/vector-similarity-beyond-search/icon.svg social_preview_image: /articles_data/vector-similarity-beyond-search/preview/social_preview.jpg weight: -1 author: Luis Cossío author_link: https://coszio.github.io/ date: 2023-08-08T08:00:00+03:00 draft: false keywords: - vector similarity - exploration - dissimilarity - discovery - diversity - recommendation --- # Vector Similarity: Unleashing Data Insights Beyond Traditional Search When making use of unstructured data, there are traditional go-to solutions that are well-known for developers: - **Full-text search** when you need to find documents that contain a particular word or phrase. - **[Vector search](https://qdrant.tech/documentation/overview/vector-search/)** when you need to find documents that are semantically similar to a given query. Sometimes people mix those two approaches, so it might look like the vector similarity is just an extension of full-text search. However, in this article, we will explore some promising new techniques that can be used to expand the use-case of unstructured data and demonstrate that vector similarity creates its own stack of data exploration tools. ## What is vector similarity search? Vector similarity offers a range of powerful functions that go far beyond those available in traditional full-text search engines. From dissimilarity search to diversity and recommendation, these methods can expand the cases in which vectors are useful. Vector Databases, which are designed to store and process immense amounts of vectors, are the first candidates to implement these new techniques and allow users to exploit their data to its fullest. ## Vector similarity search vs. full-text search While there is an intersection in the functionality of these two approaches, there is also a vast area of functions that is unique to each of them. For example, the exact phrase matching and counting of results are native to full-text search, while vector similarity support for this type of operation is limited. On the other hand, vector similarity easily allows cross-modal retrieval of images by text or vice-versa, which is impossible with full-text search. This mismatch in expectations might sometimes lead to confusion. Attempting to use a vector similarity as a full-text search can result in a range of frustrations, from slow response times to poor search results, to limited functionality. As an outcome, they are getting only a fraction of the benefits of vector similarity. {{< figure width=70% src=/articles_data/vector-similarity-beyond-search/venn-diagram.png caption="Full-text search and Vector Similarity Functionality overlap" >}} Below we will explore why the vector similarity stack deserves new interfaces and design patterns that will unlock the full potential of this technology, which can still be used in conjunction with full-text search. ## New ways to interact with similarities Having a vector representation of unstructured data unlocks new ways of interacting with it. 
For example, it can be used to measure semantic similarity between words, to cluster words or documents based on their meaning, to find related images, or even to generate new text. However, these interactions can go beyond finding their nearest neighbors (kNN).

There are several other techniques that can be leveraged by vector representations beyond the traditional kNN search. These include dissimilarity search, diversity search, recommendations, and discovery functions.

## Dissimilarity search

The dissimilarity —or farthest— search is the most straightforward concept after the nearest search, and it cannot be reproduced in a traditional full-text search. It aims to find the most dissimilar, or most distant, documents across the collection.

{{< figure width=80% src=/articles_data/vector-similarity-beyond-search/dissimilarity.png caption="Dissimilarity Search" >}}

Unlike full-text match, vector similarity can compare any pair of documents (or points) and assign a similarity score. It doesn't rely on keywords or other metadata. With vector similarity, we can easily achieve a dissimilarity search by inverting the search objective from maximizing similarity to minimizing it. The dissimilarity search can find items in areas where no other search could previously be used. Let's look at a few examples.

### Case: mislabeling detection

Suppose we have a dataset of furniture in which we have classified our items by what kind of furniture they are: tables, chairs, lamps, etc. To ensure our catalog is accurate, we can use a dissimilarity search to highlight items that are most likely mislabeled.

To do this, we only need to search for the most dissimilar items using the embedding of the category title itself as a query. This can be too broad, so, by combining it with filters —a [Qdrant superpower](/articles/filtrable-hnsw/)—, we can narrow down the search to a specific category.

{{< figure src=/articles_data/vector-similarity-beyond-search/mislabelling.png caption="Mislabeling Detection" >}}

The output of this search can be further processed with heavier models or human supervision to detect actual mislabeling.

### Case: outlier detection

In some cases, we might not even have labels, but it is still possible to try to detect anomalies in our dataset. Dissimilarity search can be used for this purpose as well.

{{< figure width=80% src=/articles_data/vector-similarity-beyond-search/anomaly-detection.png caption="Anomaly Detection" >}}

The only thing we need is a set of reference points that we consider "normal". Then we can search for the points most dissimilar to this reference set and use them as candidates for further analysis.

## Diversity search

Even with no input vector provided, (dis-)similarity can improve the overall selection of items from the dataset. The naive approach is to do random sampling. However, unless our dataset has a uniform distribution, the results of such sampling might be biased toward more frequent types of items.

{{< figure width=80% src=/articles_data/vector-similarity-beyond-search/diversity-random.png caption="Example of random sampling" >}}

The similarity information can increase the diversity of those results and make the first overview more interesting. That is especially useful when users do not yet know what they are looking for and want to explore the dataset.
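To make this concrete, here is a small, self-contained sketch of one way to use similarity information for sampling: greedily picking each next item as the one least similar to everything selected so far. This is plain NumPy over a matrix of embeddings, not a Qdrant API call, and the function and variable names are illustrative only.

```python
import numpy as np

def diverse_sample(vectors, k):
    """Greedy farthest-point sampling over cosine similarity.

    vectors: (n, d) array of L2-normalized embeddings.
    Returns the indices of k points spread out across the collection.
    """
    n = vectors.shape[0]
    selected = [int(np.random.randint(n))]       # start from a random point
    # For every point, track its highest similarity to the current selection.
    max_sim = vectors @ vectors[selected[0]]
    for _ in range(k - 1):
        candidate = int(np.argmin(max_sim))      # least similar to what we already have
        selected.append(candidate)
        max_sim = np.maximum(max_sim, vectors @ vectors[candidate])
    return selected
```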
{{< figure width=80% src=/articles_data/vector-similarity-beyond-search/diversity-force.png caption="Example of similarity-based sampling" >}} The power of vector similarity, in the context of being able to compare any two points, allows making a diverse selection of the collection possible without any labeling efforts. By maximizing the distance between all points in the response, we can have an algorithm that will sequentially output dissimilar results. {{< figure src=/articles_data/vector-similarity-beyond-search/diversity.png caption="Diversity Search" >}} Some forms of diversity sampling are already used in the industry and are known as [Maximum Margin Relevance](https://python.langchain.com/docs/integrations/vectorstores/qdrant#maximum-marginal-relevance-search-mmr) (MMR). Techniques like this were developed to enhance similarity on a universal search API. However, there is still room for new ideas, particularly regarding diversity retrieval. By utilizing more advanced vector-native engines, it could be possible to take use cases to the next level and achieve even better results. ## Vector similarity recommendations Vector similarity can go above a single query vector. It can combine multiple positive and negative examples for a more accurate retrieval. Building a recommendation API in a vector database can take advantage of using already stored vectors as part of the queries, by specifying the point id. Doing this, we can skip query-time neural network inference, and make the recommendation search faster. There are multiple ways to implement recommendations with vectors. ### Vector-features recommendations The first approach is to take all positive and negative examples and average them to create a single query vector. In this technique, the more significant components of positive vectors are canceled out by the negative ones, and the resulting vector is a combination of all the features present in the positive examples, but not in the negative ones. {{< figure width=80% src=/articles_data/vector-similarity-beyond-search/feature-based-recommendations.png caption="Vector-Features Based Recommendations" >}} This approach is already implemented in Qdrant, and while it works great when the vectors are assumed to have each of their dimensions represent some kind of feature of the data, sometimes distances are a better tool to judge negative and positive examples. ### Relative distance recommendations Another approach is to use the distance between negative examples to the candidates to help them create exclusion areas. In this technique, we perform searches near the positive examples while excluding the points that are closer to a negative example than to a positive one. {{< figure width=80% src=/articles_data/vector-similarity-beyond-search/relative-distance-recommendations.png caption="Relative Distance Recommendations" >}} The main use-case of both approaches —of course— is to take some history of user interactions and recommend new items based on it. ## Discovery In many exploration scenarios, the desired destination is not known in advance. The search process in this case can consist of multiple steps, where each step would provide a little more information to guide the search in the right direction. To get more intuition about the possible ways to implement this approach, let’s take a look at how similarity modes are trained in the first place: The most well-known loss function used to train similarity models is a [triplet-loss](https://en.wikipedia.org/wiki/Triplet_loss). 
In this loss, the model is trained by fitting the information of relative similarity of 3 objects: the Anchor, Positive, and Negative examples. {{< figure width=80% src=/articles_data/vector-similarity-beyond-search/triplet-loss.png caption="Triplet Loss" >}} Using the same mechanics, we can look at the training process from the other side. Given a trained model, the user can provide positive and negative examples, and the goal of the discovery process is then to find suitable anchors across the stored collection of vectors. <!-- ToDo: image where we know positive and nagative --> {{< figure width=60% src=/articles_data/vector-similarity-beyond-search/discovery.png caption="Reversed triplet loss" >}} Multiple positive-negative pairs can be provided to make the discovery process more accurate. Worth mentioning, that as well as in NN training, the dataset may contain noise and some portion of contradictory information, so a discovery process should be tolerant of this kind of data imperfections. <!-- Image with multiple pairs --> {{< figure width=80% src=/articles_data/vector-similarity-beyond-search/discovery-noise.png caption="Sample pairs" >}} The important difference between this and the recommendation method is that the positive-negative pairs in the discovery method don’t assume that the final result should be close to positive, it only assumes that it should be closer than the negative one. {{< figure width=80% src=/articles_data/vector-similarity-beyond-search/discovery-vs-recommendations.png caption="Discovery vs Recommendation" >}} In combination with filtering or similarity search, the additional context information provided by the discovery pairs can be used as a re-ranking factor. ## A new API stack for vector databases When you introduce vector similarity capabilities into your text search engine, you extend its functionality. However, it doesn't work the other way around, as the vector similarity as a concept is much broader than some task-specific implementations of full-text search. [Vector databases](https://qdrant.tech/), which introduce built-in full-text functionality, must make several compromises: - Choose a specific full-text search variant. - Either sacrifice API consistency or limit vector similarity functionality to only basic kNN search. - Introduce additional complexity to the system. Qdrant, on the contrary, puts vector similarity in the center of its API and architecture, such that it allows us to move towards a new stack of vector-native operations. We believe that this is the future of vector databases, and we are excited to see what new use-cases will be unlocked by these techniques. ## Key takeaways: - Vector similarity offers advanced data exploration tools beyond traditional full-text search, including dissimilarity search, diversity sampling, and recommendation systems. - Practical applications of vector similarity include improving data quality through mislabeling detection and anomaly identification. - Enhanced user experiences are achieved by leveraging advanced search techniques, providing users with intuitive data exploration, and improving decision-making processes. Ready to unlock the full potential of your data? [Try a free demo](https://qdrant.tech/contact-us/) to explore how vector similarity can revolutionize your data insights and drive smarter decision-making.
--- title: Q&A with Similarity Learning short_description: A complete guide to building a Q&A system with similarity learning. description: A complete guide to building a Q&A system using Quaterion and SentenceTransformers. social_preview_image: /articles_data/faq-question-answering/preview/social_preview.jpg preview_dir: /articles_data/faq-question-answering/preview small_preview_image: /articles_data/faq-question-answering/icon.svg weight: 9 author: George Panchuk author_link: https://medium.com/@george.panchuk date: 2022-06-28T08:57:07.604Z # aliases: [ /articles/faq-question-answering/ ] --- # Question-answering system with Similarity Learning and Quaterion Many problems in modern machine learning are approached as classification tasks. Some are the classification tasks by design, but others are artificially transformed into such. And when you try to apply an approach, which does not naturally fit your problem, you risk coming up with over-complicated or bulky solutions. In some cases, you would even get worse performance. Imagine that you got a new task and decided to solve it with a good old classification approach. Firstly, you will need labeled data. If it came on a plate with the task, you're lucky, but if it didn't, you might need to label it manually. And I guess you are already familiar with how painful it might be. Assuming you somehow labeled all required data and trained a model. It shows good performance - well done! But a day later, your manager told you about a bunch of new data with new classes, which your model has to handle. You repeat your pipeline. Then, two days later, you've been reached out one more time. You need to update the model again, and again, and again. Sounds tedious and expensive for me, does not it for you? ## Automating customer support Let's now take a look at the concrete example. There is a pressing problem with automating customer support. The service should be capable of answering user questions and retrieving relevant articles from the documentation without any human involvement. With the classification approach, you need to build a hierarchy of classification models to determine the question's topic. You have to collect and label a whole custom dataset of your private documentation topics to train that. And then, each time you have a new topic in your documentation, you have to re-train the whole pile of classifiers with additionally labeled data. Can we make it easier? ## Similarity option One of the possible alternatives is Similarity Learning, which we are going to discuss in this article. It suggests getting rid of the classes and making decisions based on the similarity between objects instead. To do it quickly, we would need some intermediate representation - embeddings. Embeddings are high-dimensional vectors with semantic information accumulated in them. As embeddings are vectors, one can apply a simple function to calculate the similarity score between them, for example, cosine or euclidean distance. So with similarity learning, all we need to do is provide pairs of correct questions and answers. And then, the model will learn to distinguish proper answers by the similarity of embeddings. >If you want to learn more about similarity learning and applications, check out this [article](/documentation/tutorials/neural-search/) which might be an asset. ## Let's build Similarity learning approach seems a lot simpler than classification in this case, and if you have some doubts on your mind, let me dispel them. 
Since I had no existing resource with an exhaustive F.A.Q. that could serve as a dataset, I scraped one from the sites of popular cloud providers. The dataset consists of just 8.5k question-answer pairs; you can take a closer look at it [here](https://github.com/qdrant/demo-cloud-faq).

Once we have the data, we need to obtain embeddings for it. Representing texts as embeddings is not a novel technique in NLP, and there are plenty of algorithms and models to calculate them. You may have heard of Word2Vec, GloVe, ELMo, or BERT; all of these models can provide text embeddings.

However, it is better to produce embeddings with a model trained for semantic similarity tasks. For instance, we can find such models at [sentence-transformers](https://www.sbert.net/docs/pretrained_models.html). The authors claim that `all-mpnet-base-v2` provides the best quality, but let's pick `all-MiniLM-L6-v2` for our tutorial, as it is 5x faster and still offers good results.

Having all this, we can test our approach. We won't take the whole dataset at the moment, only a part of it. To measure the model's performance we will use two metrics - [mean reciprocal rank](https://en.wikipedia.org/wiki/Mean_reciprocal_rank) and [precision@1](https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval)#Precision_at_k). We have a [ready script](https://github.com/qdrant/demo-cloud-faq/blob/experiments/faq/baseline.py) for this experiment, so let's just launch it now.

<div class="table-responsive">

| precision@1 | reciprocal_rank |
|-------------|-----------------|
| 0.564       | 0.663           |

</div>

That's already quite decent quality, but maybe we can do better?

## Improving results with fine-tuning

Actually, we can! The model we used has good natural language understanding, but it has never seen our data. An approach called `fine-tuning` might be helpful to overcome this issue. With fine-tuning, you don't need to design a task-specific architecture; you take a model pre-trained on another task, apply a couple of layers on top, and train its parameters.

Sounds good, but as similarity learning is not as common as classification, it might be a bit inconvenient to fine-tune a model with traditional tools. For this reason we will use [Quaterion](https://github.com/qdrant/quaterion) - a framework for fine-tuning similarity learning models.

Let's see how we can train models with it. First, create our project and call it `faq`.

> All project dependencies and utility scripts not covered in the tutorial can be found in the
> [repository](https://github.com/qdrant/demo-cloud-faq/tree/tutorial).

### Configure training

The main entity in Quaterion is [TrainableModel](https://quaterion.qdrant.tech/quaterion.train.trainable_model.html). This class makes the model-building process fast and convenient.

`TrainableModel` is a wrapper around [pytorch_lightning.LightningModule](https://pytorch-lightning.readthedocs.io/en/latest/common/lightning_module.html). [Lightning](https://www.pytorchlightning.ai/) handles all the training process complexities, like the training loop, device management, etc., and saves the user from having to implement all this routine manually. Lightning's modularity is also worth mentioning: it improves separation of responsibilities and makes code more readable, robust, and easy to write. All these features make PyTorch Lightning a perfect training backend for Quaterion.

To use `TrainableModel`, you need to inherit your model class from it, the same way you would use `LightningModule` in pure `pytorch_lightning`.
Mandatory methods are `configure_loss`, `configure_encoders`, `configure_head`, `configure_optimizers`. The majority of mentioned methods are quite easy to implement, you'll probably just need a couple of imports to do that. But `configure_encoders` requires some code:) Let's create a `model.py` with model's template and a placeholder for `configure_encoders` for the moment. ```python from typing import Union, Dict, Optional from torch.optim import Adam from quaterion import TrainableModel from quaterion.loss import MultipleNegativesRankingLoss, SimilarityLoss from quaterion_models.encoders import Encoder from quaterion_models.heads import EncoderHead from quaterion_models.heads.skip_connection_head import SkipConnectionHead class FAQModel(TrainableModel): def __init__(self, lr=10e-5, *args, **kwargs): self.lr = lr super().__init__(*args, **kwargs) def configure_optimizers(self): return Adam(self.model.parameters(), lr=self.lr) def configure_loss(self) -> SimilarityLoss: return MultipleNegativesRankingLoss(symmetric=True) def configure_encoders(self) -> Union[Encoder, Dict[str, Encoder]]: ... # ToDo def configure_head(self, input_embedding_size: int) -> EncoderHead: return SkipConnectionHead(input_embedding_size) ``` - `configure_optimizers` is a method provided by Lightning. An eagle-eye of you could notice mysterious `self.model`, it is actually a [SimilarityModel](https://quaterion-models.qdrant.tech/quaterion_models.model.html) instance. We will cover it later. - `configure_loss` is a loss function to be used during training. You can choose a ready-made implementation from Quaterion. However, since Quaterion's purpose is not to cover all possible losses, or other entities and features of similarity learning, but to provide a convenient framework to build and use such models, there might not be a desired loss. In this case it is possible to use [PytorchMetricLearningWrapper](https://quaterion.qdrant.tech/quaterion.loss.extras.pytorch_metric_learning_wrapper.html) to bring required loss from [pytorch-metric-learning](https://kevinmusgrave.github.io/pytorch-metric-learning/) library, which has a rich collection of losses. You can also implement a custom loss yourself. - `configure_head` - model built via Quaterion is a combination of encoders and a top layer - head. As with losses, some head implementations are provided. They can be found at [quaterion_models.heads](https://quaterion-models.qdrant.tech/quaterion_models.heads.html). At our example we use [MultipleNegativesRankingLoss](https://quaterion.qdrant.tech/quaterion.loss.multiple_negatives_ranking_loss.html). This loss is especially good for training retrieval tasks. It assumes that we pass only positive pairs (similar objects) and considers all other objects as negative examples. `MultipleNegativesRankingLoss` use cosine to measure distance under the hood, but it is a configurable parameter. Quaterion provides implementation for other distances as well. You can find available ones at [quaterion.distances](https://quaterion.qdrant.tech/quaterion.distances.html). Now we can come back to `configure_encoders`:) ### Configure Encoder The encoder task is to convert objects into embeddings. They usually take advantage of some pre-trained models, in our case `all-MiniLM-L6-v2` from `sentence-transformers`. In order to use it in Quaterion, we need to create a wrapper inherited from the [Encoder](https://quaterion-models.qdrant.tech/quaterion_models.encoders.encoder.html) class. 
Let's create our encoder in `encoder.py` ```python import os from torch import Tensor, nn from sentence_transformers.models import Transformer, Pooling from quaterion_models.encoders import Encoder from quaterion_models.types import TensorInterchange, CollateFnType class FAQEncoder(Encoder): def __init__(self, transformer, pooling): super().__init__() self.transformer = transformer self.pooling = pooling self.encoder = nn.Sequential(self.transformer, self.pooling) @property def trainable(self) -> bool: # Defines if we want to train encoder itself, or head layer only return False @property def embedding_size(self) -> int: return self.transformer.get_word_embedding_dimension() def forward(self, batch: TensorInterchange) -> Tensor: return self.encoder(batch)["sentence_embedding"] def get_collate_fn(self) -> CollateFnType: return self.transformer.tokenize @staticmethod def _transformer_path(path: str): return os.path.join(path, "transformer") @staticmethod def _pooling_path(path: str): return os.path.join(path, "pooling") def save(self, output_path: str): transformer_path = self._transformer_path(output_path) os.makedirs(transformer_path, exist_ok=True) pooling_path = self._pooling_path(output_path) os.makedirs(pooling_path, exist_ok=True) self.transformer.save(transformer_path) self.pooling.save(pooling_path) @classmethod def load(cls, input_path: str) -> Encoder: transformer = Transformer.load(cls._transformer_path(input_path)) pooling = Pooling.load(cls._pooling_path(input_path)) return cls(transformer=transformer, pooling=pooling) ``` As you can notice, there are more methods implemented, then we've already discussed. Let's go through them now! - In `__init__` we register our pre-trained layers, similar as you do in [torch.nn.Module](https://pytorch.org/docs/stable/generated/torch.nn.Module.html) descendant. - `trainable` defines whether current `Encoder` layers should be updated during training or not. If `trainable=False`, then all layers will be frozen. - `embedding_size` is a size of encoder's output, it is required for proper `head` configuration. - `get_collate_fn` is a tricky one. Here you should return a method which prepares a batch of raw data into the input, suitable for the encoder. If `get_collate_fn` is not overridden, then the [default_collate](https://pytorch.org/docs/stable/data.html#torch.utils.data.default_collate) will be used. The remaining methods are considered self-describing. As our encoder is ready, we now are able to fill `configure_encoders`. Just insert the following code into `model.py`: ```python ... from sentence_transformers import SentenceTransformer from sentence_transformers.models import Transformer, Pooling from faq.encoder import FAQEncoder class FAQModel(TrainableModel): ... def configure_encoders(self) -> Union[Encoder, Dict[str, Encoder]]: pre_trained_model = SentenceTransformer("all-MiniLM-L6-v2") transformer: Transformer = pre_trained_model[0] pooling: Pooling = pre_trained_model[1] encoder = FAQEncoder(transformer, pooling) return encoder ``` ### Data preparation Okay, we have raw data and a trainable model. But we don't know yet how to feed this data to our model. Currently, Quaterion takes two types of similarity representation - pairs and groups. The groups format assumes that all objects split into groups of similar objects. All objects inside one group are similar, and all other objects outside this group considered dissimilar to them. But in the case of pairs, we can only assume similarity between explicitly specified pairs of objects. 
We can apply any of the approaches with our data, but pairs one seems more intuitive. The format in which Similarity is represented determines which loss can be used. For example, _ContrastiveLoss_ and _MultipleNegativesRankingLoss_ works with pairs format. [SimilarityPairSample](https://quaterion.qdrant.tech/quaterion.dataset.similarity_samples.html#quaterion.dataset.similarity_samples.SimilarityPairSample) could be used to represent pairs. Let's take a look at it: ```python @dataclass class SimilarityPairSample: obj_a: Any obj_b: Any score: float = 1.0 subgroup: int = 0 ``` Here might be some questions: what `score` and `subgroup` are? Well, `score` is a measure of expected samples similarity. If you only need to specify if two samples are similar or not, you can use `1.0` and `0.0` respectively. `subgroups` parameter is required for more granular description of what negative examples could be. By default, all pairs belong the subgroup zero. That means that we would need to specify all negative examples manually. But in most cases, we can avoid this by enabling different subgroups. All objects from different subgroups will be considered as negative examples in loss, and thus it provides a way to set negative examples implicitly. With this knowledge, we now can create our `Dataset` class in `dataset.py` to feed our model: ```python import json from typing import List, Dict from torch.utils.data import Dataset from quaterion.dataset.similarity_samples import SimilarityPairSample class FAQDataset(Dataset): """Dataset class to process .jsonl files with FAQ from popular cloud providers.""" def __init__(self, dataset_path): self.dataset: List[Dict[str, str]] = self.read_dataset(dataset_path) def __getitem__(self, index) -> SimilarityPairSample: line = self.dataset[index] question = line["question"] # All questions have a unique subgroup # Meaning that all other answers are considered negative pairs subgroup = hash(question) return SimilarityPairSample( obj_a=question, obj_b=line["answer"], score=1, subgroup=subgroup ) def __len__(self): return len(self.dataset) @staticmethod def read_dataset(dataset_path) -> List[Dict[str, str]]: """Read jsonl-file into a memory.""" with open(dataset_path, "r") as fd: return [json.loads(json_line) for json_line in fd] ``` We assigned a unique subgroup for each question, so all other objects which have different question will be considered as negative examples. ### Evaluation Metric We still haven't added any metrics to the model. For this purpose Quaterion provides `configure_metrics`. We just need to override it and attach interested metrics. Quaterion has some popular retrieval metrics implemented - such as _precision @ k_ or _mean reciprocal rank_. They can be found in [quaterion.eval](https://quaterion.qdrant.tech/quaterion.eval.html) package. But there are just a few metrics, it is assumed that desirable ones will be made by user or taken from another libraries. You will probably need to inherit from `PairMetric` or `GroupMetric` to implement a new one. In `configure_metrics` we need to return a list of `AttachedMetric`. They are just wrappers around metric instances and helps to log metrics more easily. Under the hood `logging` is handled by `pytorch-lightning`. You can configure it as you want - pass required parameters as keyword arguments to `AttachedMetric`. For additional info visit [logging documentation page](https://pytorch-lightning.readthedocs.io/en/stable/extensions/logging.html) Let's add mentioned metrics for our `FAQModel`. 
Add this code to `model.py`: ```python ... from quaterion.eval.pair import RetrievalPrecision, RetrievalReciprocalRank from quaterion.eval.attached_metric import AttachedMetric class FAQModel(TrainableModel): def __init__(self, lr=10e-5, *args, **kwargs): self.lr = lr super().__init__(*args, **kwargs) ... def configure_metrics(self): return [ AttachedMetric( "RetrievalPrecision", RetrievalPrecision(k=1), prog_bar=True, on_epoch=True, ), AttachedMetric( "RetrievalReciprocalRank", RetrievalReciprocalRank(), prog_bar=True, on_epoch=True ), ] ``` ### Fast training with Cache Quaterion has one more cherry on top of the cake when it comes to non-trainable encoders. If encoders are frozen, they are deterministic and emit the exact embeddings for the same input data on each epoch. It provides a way to avoid repeated calculations and reduce training time. For this purpose Quaterion has a cache functionality. Before training starts, the cache runs one epoch to pre-calculate all embeddings with frozen encoders and then store them on a device you chose (currently CPU or GPU). Everything you need is to define which encoders are trainable or not and set cache settings. And that's it: everything else Quaterion will handle for you. To configure cache you need to override `configure_cache` method in `TrainableModel`. This method should return an instance of [CacheConfig](https://quaterion.qdrant.tech/quaterion.train.cache.cache_config.html#quaterion.train.cache.cache_config.CacheConfig). Let's add cache to our model: ```python ... from quaterion.train.cache import CacheConfig, CacheType ... class FAQModel(TrainableModel): ... def configure_caches(self) -> Optional[CacheConfig]: return CacheConfig(CacheType.AUTO) ... ``` [CacheType](https://quaterion.qdrant.tech/quaterion.train.cache.cache_config.html#quaterion.train.cache.cache_config.CacheType) determines how the cache will be stored in memory. ### Training Now we need to combine all our code together in `train.py` and launch a training process. ```python import torch import pytorch_lightning as pl from quaterion import Quaterion from quaterion.dataset import PairsSimilarityDataLoader from faq.dataset import FAQDataset def train(model, train_dataset_path, val_dataset_path, params): use_gpu = params.get("cuda", torch.cuda.is_available()) trainer = pl.Trainer( min_epochs=params.get("min_epochs", 1), max_epochs=params.get("max_epochs", 500), auto_select_gpus=use_gpu, log_every_n_steps=params.get("log_every_n_steps", 1), gpus=int(use_gpu), ) train_dataset = FAQDataset(train_dataset_path) val_dataset = FAQDataset(val_dataset_path) train_dataloader = PairsSimilarityDataLoader( train_dataset, batch_size=1024 ) val_dataloader = PairsSimilarityDataLoader( val_dataset, batch_size=1024 ) Quaterion.fit(model, trainer, train_dataloader, val_dataloader) if __name__ == "__main__": import os from pytorch_lightning import seed_everything from faq.model import FAQModel from faq.config import DATA_DIR, ROOT_DIR seed_everything(42, workers=True) faq_model = FAQModel() train_path = os.path.join( DATA_DIR, "train_cloud_faq_dataset.jsonl" ) val_path = os.path.join( DATA_DIR, "val_cloud_faq_dataset.jsonl" ) train(faq_model, train_path, val_path, {}) faq_model.save_servable(os.path.join(ROOT_DIR, "servable")) ``` Here are a couple of unseen classes, `PairsSimilarityDataLoader`, which is a native dataloader for `SimilarityPairSample` objects, and `Quaterion` is an entry point to the training process. 
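As a small aside, the `train` function above reads its settings from the `params` dict, falling back to the defaults shown in the code. If you want a quick sanity check of the whole pipeline, a hypothetical variation of the last call in the `__main__` block could override a few of them, for example:

```python
# Hypothetical override of the call from the __main__ block above:
# a short CPU-only run to verify that the pipeline works end to end.
train(
    faq_model,
    train_path,
    val_path,
    {
        "cuda": False,            # force CPU even if a GPU is available
        "max_epochs": 5,          # the script defaults to 500
        "log_every_n_steps": 10,
    },
)
```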
### Dataset-wise evaluation Up to this moment we've calculated only batch-wise metrics. Such metrics can fluctuate a lot depending on a batch size and can be misleading. It might be helpful if we can calculate a metric on a whole dataset or some large part of it. Raw data may consume a huge amount of memory, and usually we can't fit it into one batch. Embeddings, on the contrary, most probably will consume less. That's where `Evaluator` enters the scene. At first, having dataset of `SimilaritySample`, `Evaluator` encodes it via `SimilarityModel` and compute corresponding labels. After that, it calculates a metric value, which could be more representative than batch-wise ones. However, you still can find yourself in a situation where evaluation becomes too slow, or there is no enough space left in the memory. A bottleneck might be a squared distance matrix, which one needs to calculate to compute a retrieval metric. You can mitigate this bottleneck by calculating a rectangle matrix with reduced size. `Evaluator` accepts `sampler` with a sample size to select only specified amount of embeddings. If sample size is not specified, evaluation is performed on all embeddings. Fewer words! Let's add evaluator to our code and finish `train.py`. ```python ... from quaterion.eval.evaluator import Evaluator from quaterion.eval.pair import RetrievalReciprocalRank, RetrievalPrecision from quaterion.eval.samplers.pair_sampler import PairSampler ... def train(model, train_dataset_path, val_dataset_path, params): ... metrics = { "rrk": RetrievalReciprocalRank(), "rp@1": RetrievalPrecision(k=1) } sampler = PairSampler() evaluator = Evaluator(metrics, sampler) results = Quaterion.evaluate(evaluator, val_dataset, model.model) print(f"results: {results}") ``` ### Train Results At this point we can train our model, I do it via `python3 -m faq.train`. <div class="table-responsive"> |epoch|train_precision@1|train_reciprocal_rank|val_precision@1|val_reciprocal_rank| |-----|-----------------|---------------------|---------------|-------------------| |0 |0.650 |0.732 |0.659 |0.741 | |100 |0.665 |0.746 |0.673 |0.754 | |200 |0.677 |0.757 |0.682 |0.763 | |300 |0.686 |0.765 |0.688 |0.768 | |400 |0.695 |0.772 |0.694 |0.773 | |500 |0.701 |0.778 |0.700 |0.777 | </div> Results obtained with `Evaluator`: <div class="table-responsive"> | precision@1 | reciprocal_rank | |-------------|-----------------| | 0.577 | 0.675 | </div> After training all the metrics have been increased. And this training was done in just 3 minutes on a single gpu! There is no overfitting and the results are steadily growing, although I think there is still room for improvement and experimentation. ## Model serving As you could already notice, Quaterion framework is split into two separate libraries: `quaterion` and [quaterion-models](https://quaterion-models.qdrant.tech/). The former one contains training related stuff like losses, cache, `pytorch-lightning` dependency, etc. While the latter one contains only modules necessary for serving: encoders, heads and `SimilarityModel` itself. The reasons for this separation are: - less amount of entities you need to operate in a production environment - reduced memory footprint It is essential to isolate training dependencies from the serving environment cause the training step is usually more complicated. Training dependencies are quickly going out of control, significantly slowing down the deployment and serving timings and increasing unnecessary resource usage. 
The very last row of `train.py` - `faq_model.save_servable(...)` saves encoders and the model in a fashion that eliminates all Quaterion dependencies and stores only the most necessary data to run a model in production. In `serve.py` we load and encode all the answers and then look for the closest vectors to the questions we are interested in: ```python import os import json import torch from quaterion_models.model import SimilarityModel from quaterion.distances import Distance from faq.config import DATA_DIR, ROOT_DIR if __name__ == "__main__": device = "cuda:0" if torch.cuda.is_available() else "cpu" model = SimilarityModel.load(os.path.join(ROOT_DIR, "servable")) model.to(device) dataset_path = os.path.join(DATA_DIR, "val_cloud_faq_dataset.jsonl") with open(dataset_path) as fd: answers = [json.loads(json_line)["answer"] for json_line in fd] # everything is ready, let's encode our answers answer_embeddings = model.encode(answers, to_numpy=False) # Some prepared questions and answers to ensure that our model works as intended questions = [ "what is the pricing of aws lambda functions powered by aws graviton2 processors?", "can i run a cluster or job for a long time?", "what is the dell open manage system administrator suite (omsa)?", "what are the differences between the event streams standard and event streams enterprise plans?", ] ground_truth_answers = [ "aws lambda functions powered by aws graviton2 processors are 20% cheaper compared to x86-based lambda functions", "yes, you can run a cluster for as long as is required", "omsa enables you to perform certain hardware configuration tasks and to monitor the hardware directly via the operating system", "to find out more information about the different event streams plans, see choosing your plan", ] # encode our questions and find the closest to them answer embeddings question_embeddings = model.encode(questions, to_numpy=False) distance = Distance.get_by_name(Distance.COSINE) question_answers_distances = distance.distance_matrix( question_embeddings, answer_embeddings ) answers_indices = question_answers_distances.min(dim=1)[1] for q_ind, a_ind in enumerate(answers_indices): print("Q:", questions[q_ind]) print("A:", answers[a_ind], end="\n\n") assert ( answers[a_ind] == ground_truth_answers[q_ind] ), f"<{answers[a_ind]}> != <{ground_truth_answers[q_ind]}>" ``` We stored our collection of answer embeddings in memory and perform search directly in Python. For production purposes, it's better to use some sort of vector search engine like [Qdrant](https://github.com/qdrant/qdrant). It provides durability, speed boost, and a bunch of other features. So far, we've implemented a whole training process, prepared model for serving and even applied a trained model today with `Quaterion`. Thank you for your time and attention! I hope you enjoyed this huge tutorial and will use `Quaterion` for your similarity learning projects. All ready to use code can be found [here](https://github.com/qdrant/demo-cloud-faq/tree/tutorial). Stay tuned!:)
--- title: "Discovery needs context" short_description: Discover points by constraining the vector space. description: Discovery Search, an innovative way to constrain the vector space in which a search is performed, relying only on vectors. social_preview_image: /articles_data/discovery-search/social_preview.jpg small_preview_image: /articles_data/discovery-search/icon.svg preview_dir: /articles_data/discovery-search/preview weight: -110 author: Luis Cossío author_link: https://coszio.github.io date: 2024-01-31T08:00:00-03:00 draft: false keywords: - why use a vector database - specialty - search - multimodal - state-of-the-art - vector-search --- # Discovery needs context When Christopher Columbus and his crew sailed to cross the Atlantic Ocean, they were not looking for the Americas. They were looking for a new route to India because they were convinced that the Earth was round. They didn't know anything about a new continent, but since they were going west, they stumbled upon it. They couldn't reach their _target_, because the geography didn't let them, but once they realized it wasn't India, they claimed it a new "discovery" for their crown. If we consider that sailors need water to sail, then we can establish a _context_ which is positive in the water, and negative on land. Once the sailor's search was stopped by the land, they could not go any further, and a new route was found. Let's keep these concepts of _target_ and _context_ in mind as we explore the new functionality of Qdrant: __Discovery search__. ## What is discovery search? In version 1.7, Qdrant [released](/articles/qdrant-1.7.x/) this novel API that lets you constrain the space in which a search is performed, relying only on pure vectors. This is a powerful tool that lets you explore the vector space in a more controlled way. It can be used to find points that are not necessarily closest to the target, but are still relevant to the search. You can already select which points are available to the search by using payload filters. This by itself is very versatile because it allows us to craft complex filters that show only the points that satisfy their criteria deterministically. However, the payload associated with each point is arbitrary and cannot tell us anything about their position in the vector space. In other words, filtering out irrelevant points can be seen as creating a _mask_ rather than a hyperplane –cutting in between the positive and negative vectors– in the space. ## Understanding context This is where a __vector _context___ can help. We define _context_ as a list of pairs. Each pair is made up of a positive and a negative vector. With a context, we can define hyperplanes within the vector space, which always prefer the positive over the negative vectors. This effectively partitions the space where the search is performed. After the space is partitioned, we then need a _target_ to return the points that are more similar to it. ![Discovery search visualization](/articles_data/discovery-search/discovery-search.png) While positive and negative vectors might suggest the use of the <a href="/documentation/concepts/explore/#recommendation-api" target="_blank">recommendation interface</a>, in the case of _context_ they require to be paired up in a positive-negative fashion. This is inspired from the machine-learning concept of <a href="https://en.wikipedia.org/wiki/Triplet_loss" target="_blank">_triplet loss_</a>, where you have three vectors: an anchor, a positive, and a negative. 
Triplet loss is an evaluation of how much the anchor is closer to the positive than to the negative vector, so that learning happens by "moving" the positive and negative points to try to get a better evaluation. However, during discovery, we consider the positive and negative vectors as static points, and we search through the whole dataset for the "anchors", or result candidates, which fit this characteristic better. ![Triplet loss](/articles_data/discovery-search/triplet-loss.png) [__Discovery search__](#discovery-search), then, is made up of two main inputs: - __target__: the main point of interest - __context__: the pairs of positive and negative points we just defined. However, it is not the only way to use it. Alternatively, you can __only__ provide a context, which invokes a [__Context Search__](#context-search). This is useful when you want to explore the space defined by the context, but don't have a specific target in mind. But hold your horses, we'll get to that [later ↪](#context-search). ## Real-world discovery search applications Let's talk about the first case: context with a target. To understand why this is useful, let's take a look at a real-world example: using a multimodal encoder like [CLIP](https://openai.com/blog/clip/) to search for images, from text __and__ images. CLIP is a neural network that can embed both images and text into the same vector space. This means that you can search for images using either a text query or an image query. For this example, we'll reuse our [food recommendations demo](https://food-discovery.qdrant.tech/) by typing "burger" in the text input: ![Burger text input in food demo](/articles_data/discovery-search/search-for-burger.png) This is basically nearest neighbor search, and while technically we have only images of burgers, one of them is a logo representation of a burger. We're looking for actual burgers, though. Let's try to exclude images like that by adding it as a negative example: ![Try to exclude burger drawing](/articles_data/discovery-search/try-to-exclude-non-burger.png) Wait a second, what has just happened? These pictures have __nothing__ to do with burgers, and still, they appear on the first results. Is the demo broken? Turns out, multimodal encoders <a href="https://modalitygap.readthedocs.io/en/latest/" target="_blank">might not work how you expect them to</a>. Images and text are embedded in the same space, but they are not necessarily close to each other. This means that we can create a mental model of the distribution as two separate planes, one for images and one for text. ![Mental model of CLIP embeddings](/articles_data/discovery-search/clip-mental-model.png) This is where discovery excels because it allows us to constrain the space considering the same mode (images) while using a target from the other mode (text). ![Cross-modal search with discovery](/articles_data/discovery-search/clip-discovery.png) Discovery search also lets us keep giving feedback to the search engine in the shape of more context pairs, so we can keep refining our search until we find what we are looking for. Another intuitive example: imagine you're looking for a fish pizza, but pizza names can be confusing, so you can just type "pizza", and prefer a fish over meat. Discovery search will let you use these inputs to suggest a fish pizza... even if it's not called fish pizza! ![Simple discovery example](/articles_data/discovery-search/discovery-example-with-images.png) ## Context search Now, the second case: only providing context. 
Ever been caught in the same recommendations on your favorite music streaming service? This may be caused by getting stuck in a similarity bubble. As user input gets more complex, diversity becomes scarce, and it becomes harder to force the system to recommend something different. ![Context vs recommendation search](/articles_data/discovery-search/context-vs-recommendation.png) __Context search__ solves this by de-focusing the search around a single point. Instead, it selects points randomly from within a zone in the vector space. This search is the most influenced by _triplet loss_, as the score can be thought of as _"how much a point is closer to a negative than a positive vector?"_. If it is closer to the positive one, then its score will be zero, same as any other point within the same zone. But if it is on the negative side, it will be assigned a more and more negative score the further it gets. ![Context search visualization](/articles_data/discovery-search/context-search.png) Creating complex tastes in a high-dimensional space becomes easier since you can just add more context pairs to the search. This way, you should be able to constrain the space enough so you select points from a per-search "category" created just from the context in the input. ![A more complex context search](/articles_data/discovery-search/complex-context-search.png) This way you can give refreshing recommendations, while still being in control by providing positive and negative feedback, or even by trying out different permutations of pairs. ## Key takeaways: - Discovery search is a powerful tool for controlled exploration in vector spaces. Context, consisting of positive and negative vectors constrain the search space, while a target guides the search. - Real-world applications include multimodal search, diverse recommendations, and context-driven exploration. - Ready to learn more about the math behind it and how to use it? Check out the [documentation](/documentation/concepts/explore/#discovery-api)
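If you want a concrete feel for the scoring idea described above before diving into the docs, here is a tiny, self-contained NumPy sketch of the concept (not Qdrant's actual implementation; all names and vectors are made up for illustration). A candidate gets a zero contribution from a pair while it stays on the positive side, and an increasingly negative contribution the deeper it sits on the negative side:

```python
import numpy as np

def context_score(candidate, pairs):
    """Score a candidate vector against (positive, negative) context pairs.

    Vectors are assumed to be L2-normalized, so a dot product is a cosine similarity.
    A pair contributes 0 when the candidate is closer to its positive vector,
    and a negative value that grows as the candidate drifts toward the negative one.
    """
    score = 0.0
    for positive, negative in pairs:
        score += min(np.dot(candidate, positive) - np.dot(candidate, negative), 0.0)
    return score

# Toy example: the first candidate sits on the positive side, the second does not.
pos, neg = np.array([1.0, 0.0]), np.array([0.0, 1.0])
for cand in (np.array([0.9, 0.1]), np.array([0.1, 0.9])):
    cand = cand / np.linalg.norm(cand)
    print(round(context_score(cand, [(pos, neg)]), 3))  # 0.0, then about -0.883
```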
--- title: "FastEmbed: Qdrant's Efficient Python Library for Embedding Generation" short_description: "FastEmbed: Quantized Embedding models for fast CPU Generation" description: "Learn how to accurately and efficiently create text embeddings with FastEmbed." social_preview_image: /articles_data/fastembed/preview/social_preview.jpg small_preview_image: /articles_data/fastembed/preview/lightning.svg preview_dir: /articles_data/fastembed/preview weight: -60 author: Nirant Kasliwal author_link: https://nirantk.com/about/ date: 2023-10-18T10:00:00+03:00 draft: false keywords: - vector search - embedding models - Flag Embedding - OpenAI Ada - NLP - embeddings - ONNX Runtime - quantized embedding model --- Data Science and Machine Learning practitioners often find themselves navigating through a labyrinth of models, libraries, and frameworks. Which model to choose, what embedding size, and how to approach tokenizing, are just some questions you are faced with when starting your work. We understood how many data scientists wanted an easier and more intuitive means to do their embedding work. This is why we built FastEmbed, a Python library engineered for speed, efficiency, and usability. We have created easy to use default workflows, handling the 80% use cases in NLP embedding. ## Current State of Affairs for Generating Embeddings Usually you make embedding by utilizing PyTorch or TensorFlow models under the hood. However, using these libraries comes at a cost in terms of ease of use and computational speed. This is at least in part because these are built for both: model inference and improvement e.g. via fine-tuning. To tackle these problems we built a small library focused on the task of quickly and efficiently creating text embeddings. We also decided to start with only a small sample of best in class transformer models. By keeping it small and focused on a particular use case, we could make our library focused without all the extraneous dependencies. We ship with limited models, quantize the model weights and seamlessly integrate them with the ONNX Runtime. FastEmbed strikes a balance between inference time, resource utilization and performance (recall/accuracy). ## Quick Embedding Text Document Example Here is an example of how simple we have made embedding text documents: ```python documents: List[str] = [ "Hello, World!", "fastembed is supported by and maintained by Qdrant." ]  embedding_model = DefaultEmbedding()  embeddings: List[np.ndarray] = list(embedding_model.embed(documents)) ``` These 3 lines of code do a lot of heavy lifting for you: They download the quantized model, load it using ONNXRuntime, and then run a batched embedding creation of your documents. ### Code Walkthrough Let’s delve into a more advanced example code snippet line-by-line: ```python from fastembed.embedding import DefaultEmbedding ``` Here, we import the FlagEmbedding class from FastEmbed and alias it as Embedding. This is the core class responsible for generating embeddings based on your chosen text model. This is also the class which you can import directly as DefaultEmbedding which is [BAAI/bge-small-en-v1.5](https://huggingface.co/baai/bge-small-en-v1.5) ```python documents: List[str] = [ "passage: Hello, World!", "query: How is the World?", "passage: This is an example passage.", "fastembed is supported by and maintained by Qdrant." ] ``` In this list called documents, we define four text strings that we want to convert into embeddings. 
Note the use of prefixes “passage” and “query” to differentiate the types of embeddings to be generated. This is inherited from the cross-encoder implementation of the BAAI/bge series of models themselves. This is particularly useful for retrieval and we strongly recommend using this as well. The use of text prefixes like “query” and “passage” isn’t merely syntactic sugar; it informs the algorithm on how to treat the text for embedding generation. A “query” prefix often triggers the model to generate embeddings that are optimized for similarity comparisons, while “passage” embeddings are fine-tuned for contextual understanding. If you omit the prefix, the default behavior is applied, although specifying it is recommended for more nuanced results. Next, we initialize the Embedding model with the default model: [BAAI/bge-small-en-v1.5](https://huggingface.co/baai/bge-small-en-v1.5). ```python embedding_model = DefaultEmbedding() ``` The default model and several other models have a context window of a maximum of 512 tokens. This maximum limit comes from the embedding model training and design itself. If you'd like to embed sequences larger than that, we'd recommend using some pooling strategy to get a single vector out of the sequence. For example, you can use the mean of the embeddings of different chunks of a document. This is also what the [SBERT Paper recommends](https://lilianweng.github.io/posts/2021-05-31-contrastive/#sentence-bert) This model strikes a balance between speed and accuracy, ideal for real-world applications. ```python embeddings: List[np.ndarray] = list(embedding_model.embed(documents)) ``` Finally, we call the `embed()` method on our embedding_model object, passing in the documents list. The method returns a Python generator, so we convert it to a list to get all the embeddings. These embeddings are NumPy arrays, optimized for fast mathematical operations. The `embed()` method returns a list of NumPy arrays, each corresponding to the embedding of a document in your original documents list. The dimensions of these arrays are determined by the model you chose e.g. for “BAAI/bge-small-en-v1.5” it’s a 384-dimensional vector. You can easily parse these NumPy arrays for any downstream application—be it clustering, similarity comparison, or feeding them into a machine learning model for further analysis. ## 3 Key Features of FastEmbed FastEmbed is built for inference speed, without sacrificing (too much) performance: 1. 50% faster than PyTorch Transformers 2. Better performance than Sentence Transformers and OpenAI Ada-002 3. Cosine similarity of quantized and original model vectors is 0.92 We use `BAAI/bge-small-en-v1.5` as our DefaultEmbedding, hence we've chosen that for comparison: ![](/articles_data/fastembed/throughput.png) ## Under the Hood of FastEmbed **Quantized Models**: We quantize the models for CPU (and Mac Metal) – giving you the best buck for your compute model. Our default model is so small, you can run this in AWS Lambda if you’d like! Shout out to Huggingface's [Optimum](https://github.com/huggingface/optimum) – which made it easier to quantize models. **Reduced Installation Time**: FastEmbed sets itself apart by maintaining a low minimum RAM/Disk usage. It’s designed to be agile and fast, useful for businesses looking to integrate text embedding for production usage. For FastEmbed, the list of dependencies is refreshingly brief: > - onnx: Version ^1.11 – We’ll try to drop this also in the future if we can! 
> - onnxruntime: Version ^1.15 > - tqdm: Version ^4.65 – used only at Download > - requests: Version ^2.31 – used only at Download > - tokenizers: Version ^0.13 This minimized list serves two purposes. First, it significantly reduces the installation time, allowing for quicker deployments. Second, it limits the amount of disk space required, making it a viable option even for environments with storage limitations. Notably absent from the dependency list are bulky libraries like PyTorch, and there’s no requirement for CUDA drivers. This is intentional. FastEmbed is engineered to deliver optimal performance right on your CPU, eliminating the need for specialized hardware or complex setups. **ONNXRuntime**: The ONNXRuntime gives us the ability to support multiple providers. The quantization we do is limited for CPU (Intel), but we intend to support GPU versions of the same in the future as well.  This allows for greater customization and optimization, further aligning with your specific performance and computational requirements. ## Current Models We’ve started with a small set of supported models: All the models we support are [quantized](https://pytorch.org/docs/stable/quantization.html) to enable even faster computation! If you're using FastEmbed and you've got ideas or need certain features, feel free to let us know. Just drop an issue on our GitHub page. That's where we look first when we're deciding what to work on next. Here's where you can do it: [FastEmbed GitHub Issues](https://github.com/qdrant/fastembed/issues). When it comes to FastEmbed's DefaultEmbedding model, we're committed to supporting the best Open Source models. If anything changes, you'll see a new version number pop up, like going from 0.0.6 to 0.1. So, it's a good idea to lock in the FastEmbed version you're using to avoid surprises. ## Using FastEmbed with Qdrant Qdrant is a Vector Store, offering comprehensive, efficient, and scalable [enterprise solutions](https://qdrant.tech/enterprise-solutions/) for modern machine learning and AI applications. Whether you are dealing with billions of data points, require a low latency performant [vector database solution](https://qdrant.tech/qdrant-vector-database/), or specialized quantization methods – [Qdrant is engineered](/documentation/overview/) to meet those demands head-on. The fusion of FastEmbed with Qdrant’s vector store capabilities enables a transparent workflow for seamless embedding generation, storage, and retrieval. This simplifies the API design — while still giving you the flexibility to make significant changes e.g. you can use FastEmbed to make your own embedding other than the DefaultEmbedding and use that with Qdrant. Below is a detailed guide on how to get started with FastEmbed in conjunction with Qdrant. ### Step 1: Installation Before diving into the code, the initial step involves installing the Qdrant Client along with the FastEmbed library. This can be done using pip: ``` pip install qdrant-client[fastembed] ``` For those using zsh as their shell, you might encounter syntax issues. In such cases, wrap the package name in quotes: ``` pip install 'qdrant-client[fastembed]' ``` ### Step 2: Initializing the Qdrant Client After successful installation, the next step involves initializing the Qdrant Client. 
This can be done either in-memory or by specifying a database path: ```python from qdrant_client import QdrantClient # Initialize the client client = QdrantClient(":memory:")  # or QdrantClient(path="path/to/db") ``` ### Step 3: Preparing Documents, Metadata, and IDs Once the client is initialized, prepare the text documents you wish to embed, along with any associated metadata and unique IDs: ```python docs = [ "Qdrant has Langchain integrations", "Qdrant also has Llama Index integrations" ] metadata = [ {"source": "Langchain-docs"}, {"source": "LlamaIndex-docs"}, ] ids = [42, 2] ``` Note that the add method we’ll use is overloaded: If you skip the ids, we’ll generate those for you. metadata is obviously optional. So, you can simply use this too: ```python docs = [ "Qdrant has Langchain integrations", "Qdrant also has Llama Index integrations" ] ``` ### Step 4: Adding Documents to a Collection With your documents, metadata, and IDs ready, you can proceed to add these to a specified collection within Qdrant using the add method: ```python client.add( collection_name="demo_collection", documents=docs, metadata=metadata, ids=ids ) ``` Inside this function, Qdrant Client uses FastEmbed to make the text embedding, generate ids if they’re missing, and then add them to the index with metadata. This uses the DefaultEmbedding model: [BAAI/bge-small-en-v1.5](https://huggingface.co/baai/bge-small-en-v1.5) ![INDEX TIME: Sequence Diagram for Qdrant and FastEmbed](/articles_data/fastembed/generate-embeddings-from-docs.png) ### Step 5: Performing Queries Finally, you can perform queries on your stored documents. Qdrant offers a robust querying capability, and the query results can be easily retrieved as follows: ```python search_result = client.query( collection_name="demo_collection", query_text="This is a query document" ) print(search_result) ``` Behind the scenes, we first convert the query_text to the embedding and use that to query the vector index. ![QUERY TIME: Sequence Diagram for Qdrant and FastEmbed integration](/articles_data/fastembed/generate-embeddings-query.png) By following these steps, you effectively utilize the combined capabilities of FastEmbed and Qdrant, thereby streamlining your embedding generation and retrieval tasks. Qdrant is designed to handle large-scale datasets with billions of data points. Its architecture employs techniques like [binary quantization](https://qdrant.tech/articles/binary-quantization/) and [scalar quantization](https://qdrant.tech/articles/scalar-quantization/) for efficient storage and retrieval. When you inject FastEmbed’s CPU-first design and lightweight nature into this equation, you end up with a system that can scale seamlessly while maintaining low latency. ## Summary If you're curious about how FastEmbed and Qdrant can make your search tasks a breeze, why not take it for a spin? You get a real feel for what it can do. Here are two easy ways to get started: 1. **Cloud**: Get started with a free plan on the [Qdrant Cloud](https://qdrant.to/cloud?utm_source=qdrant&utm_medium=website&utm_campaign=fastembed&utm_content=article). 2. **Docker Container**: If you're the DIY type, you can set everything up on your own machine. Here's a quick guide to help you out: [Quick Start with Docker](/documentation/quick-start/?utm_source=qdrant&utm_medium=website&utm_campaign=fastembed&utm_content=article). So, go ahead, take it for a test drive. We're excited to hear what you think! 
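If you would like the whole walkthrough above in a single snippet, here is a minimal end-to-end sketch. The collection name and example texts are illustrative; it uses the same `add()` and `query()` calls shown step by step above:

```python
from qdrant_client import QdrantClient

# In-memory instance for quick experiments; use a path or server URL for persistence
client = QdrantClient(":memory:")

docs = [
    "Qdrant has Langchain integrations",
    "Qdrant also has Llama Index integrations",
]
metadata = [
    {"source": "Langchain-docs"},
    {"source": "LlamaIndex-docs"},
]

# FastEmbed generates the embeddings behind the scenes (DefaultEmbedding: BAAI/bge-small-en-v1.5)
client.add(
    collection_name="demo_collection",
    documents=docs,
    metadata=metadata,
)

hits = client.query(
    collection_name="demo_collection",
    query_text="Which integrations does Qdrant offer?",
)
print(hits)
```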
Lastly, if you find FastEmbed useful and want to keep up with what we're doing, giving our GitHub repo a star would mean a lot to us. Here's the link to [star the repository](https://github.com/qdrant/fastembed).

If you ever have questions about FastEmbed, please ask them on the Qdrant Discord: [https://discord.gg/Qy6HCJK9Dc](https://discord.gg/Qy6HCJK9Dc)
articles/fastembed.md
--- title: "Product Quantization in Vector Search | Qdrant" short_description: "Vector search with low memory? Try out our brand-new Product Quantization!" description: "Discover product quantization in vector search technology. Learn how it optimizes storage and accelerates search processes for high-dimensional data." social_preview_image: /articles_data/product-quantization/social_preview.png small_preview_image: /articles_data/product-quantization/product-quantization-icon.svg preview_dir: /articles_data/product-quantization/preview weight: 4 author: Kacper Łukawski author_link: https://medium.com/@lukawskikacper date: 2023-05-30T09:45:00+02:00 draft: false keywords: - vector search - product quantization - memory optimization aliases: [ /articles/product_quantization/ ] --- # Product Quantization Demystified: Streamlining Efficiency in Data Management Qdrant 1.1.0 brought the support of [Scalar Quantization](/articles/scalar-quantization/), a technique of reducing the memory footprint by even four times, by using `int8` to represent the values that would be normally represented by `float32`. The memory usage in [vector search](https://qdrant.tech/solutions/) might be reduced even further! Please welcome **Product Quantization**, a brand-new feature of Qdrant 1.2.0! ## What is Product Quantization? Product Quantization converts floating-point numbers into integers like every other quantization method. However, the process is slightly more complicated than [Scalar Quantization](https://qdrant.tech/articles/scalar-quantization/) and is more customizable, so you can find the sweet spot between memory usage and search precision. This article covers all the steps required to perform Product Quantization and the way it's implemented in Qdrant. ## How Does Product Quantization Work? Let’s assume we have a few vectors being added to the collection and that our optimizer decided to start creating a new segment. ![A list of raw vectors](/articles_data/product-quantization/raw-vectors.png) ### Cutting the vector into pieces First of all, our vectors are going to be divided into **chunks** aka **subvectors**. The number of chunks is configurable, but as a rule of thumb - the lower it is, the higher the compression rate. That also comes with reduced search precision, but in some cases, you may prefer to keep the memory usage as low as possible. ![A list of chunked vectors](/articles_data/product-quantization/chunked-vectors.png) Qdrant API allows choosing the compression ratio from 4x up to 64x. In our example, we selected 16x, so each subvector will consist of 4 floats (16 bytes), and it will eventually be represented by a single byte. ### Clustering The chunks of our vectors are then used as input for clustering. Qdrant uses the K-means algorithm, with $ K = 256 $. It was selected a priori, as this is the maximum number of values a single byte represents. As a result, we receive a list of 256 centroids for each chunk and assign each of them a unique id. **The clustering is done separately for each group of chunks.** ![Clustered chunks of vectors](/articles_data/product-quantization/chunks-clustering.png) Each chunk of a vector might now be mapped to the closest centroid. That’s where we lose the precision, as a single point will only represent a whole subspace. Instead of using a subvector, we can store the id of the closest centroid. If we repeat that for each chunk, we can approximate the original embedding as a vector of subsequent ids of the centroids. 
The dimensionality of the created vector is equal to the number of chunks, in our case 2. ![A new vector built from the ids of the centroids](/articles_data/product-quantization/vector-of-ids.png) ### Full process All those steps build the following pipeline of Product Quantization: ![Full process of Product Quantization](/articles_data/product-quantization/full-process.png) ## Measuring the distance Vector search relies on the distances between the points. Enabling Product Quantization slightly changes the way it has to be calculated. The query vector is divided into chunks, and then we figure the overall distance as a sum of distances between the subvectors and the centroids assigned to the specific id of the vector we compare to. We know the coordinates of the centroids, so that's easy. ![Calculating the distance of between the query and the stored vector](/articles_data/product-quantization/distance-calculation.png) #### Qdrant implementation Search operation requires calculating the distance to multiple points. Since we calculate the distance to a finite set of centroids, those might be precomputed and reused. Qdrant creates a lookup table for each query, so it can then simply sum up several terms to measure the distance between a query and all the centroids. | | Centroid 0 | Centroid 1 | ... | |-------------|------------|------------|-----| | **Chunk 0** | 0.14213 | 0.51242 | | | **Chunk 1** | 0.08421 | 0.00142 | | | **...** | ... | ... | ... | ## Product Quantization Benchmarks Product Quantization comes with a cost - there are some additional operations to perform so that the performance might be reduced. However, memory usage might be reduced drastically as well. As usual, we did some benchmarks to give you a brief understanding of what you may expect. Again, we reused the same pipeline as in [the other benchmarks we published](/benchmarks/). We selected [Arxiv-titles-384-angular-no-filters](https://github.com/qdrant/ann-filtering-benchmark-datasets) and [Glove-100](https://github.com/erikbern/ann-benchmarks/) datasets to measure the impact of Product Quantization on precision and time. Both experiments were launched with $ EF = 128 $. The results are summarized in the tables: #### Glove-100 <table> <thead> <tr> <th></th> <th>Original</th> <th>1D clusters</th> <th>2D clusters</th> <th>3D clusters</th> </tr> </thead> <tbody> <tr> <th>Mean precision</th> <td>0.7158</td> <td>0.7143</td> <td>0.6731</td> <td>0.5854</td> </tr> <tr> <th>Mean search time</th> <td>2336 µs</td> <td>2750 µs</td> <td>2597 µs</td> <td>2534 µs</td> </tr> <tr> <th>Compression</th> <td>x1</td> <td>x4</td> <td>x8</td> <td>x12</td> </tr> <tr> <th>Upload & indexing time</th> <td>147 s</td> <td>339 s</td> <td>217 s</td> <td>178 s</td> </tr> </tbody> </table> Product Quantization increases both indexing and searching time. The higher the compression ratio, the lower the search precision. The main benefit is undoubtedly the reduced usage of memory. 
#### Arxiv-titles-384-angular-no-filters <table> <thead> <tr> <th></th> <th>Original</th> <th>1D clusters</th> <th>2D clusters</th> <th>4D clusters</th> <th>8D clusters</th> </tr> </thead> <tbody> <tr> <th>Mean precision</th> <td>0.9837</td> <td>0.9677</td> <td>0.9143</td> <td>0.8068</td> <td>0.6618</td> </tr> <tr> <th>Mean search time</th> <td>2719 µs</td> <td>4134 µs</td> <td>2947 µs</td> <td>2175 µs</td> <td>2053 µs</td> </tr> <tr> <th>Compression</th> <td>x1</td> <td>x4</td> <td>x8</td> <td>x16</td> <td>x32</td> </tr> <tr> <th>Upload & indexing time</th> <td>332 s</td> <td>921 s</td> <td>597 s</td> <td>481 s</td> <td>474 s</td> </tr> </tbody> </table> It turns out that in some cases, Product Quantization may not only reduce the memory usage, but also the search time. ## Product Quantization vs Scalar Quantization Compared to [Scalar Quantization](https://qdrant.tech/articles/scalar-quantization/), Product Quantization offers a higher compression rate. However, this comes with considerable trade-offs in accuracy, and at times, in-RAM search speed. Product Quantization tends to be favored in certain specific scenarios: - Deployment in a low-RAM environment where the limiting factor is the number of disk reads rather than the vector comparison itself - Situations where the dimensionality of the original vectors is sufficiently high - Cases where indexing speed is not a critical factor In circumstances that do not align with the above, Scalar Quantization should be the preferred choice. ## Using Qdrant for Product Quantization If you’re already a Qdrant user, we have, documentation on [Product Quantization](/documentation/guides/quantization/#setting-up-product-quantization) that will help you to set and configure the new quantization for your data and achieve even up to 64x memory reduction. Ready to experience the power of Product Quantization? [Sign up now](https://cloud.qdrant.io/) for a free Qdrant demo and optimize your data management today!
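For reference, here is a minimal sketch of what enabling Product Quantization can look like with the Python client. The collection name and vector size are illustrative, and the exact options may vary between client versions, so treat the documentation linked above as the source of truth:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")  # assumes a local Qdrant instance

client.create_collection(
    collection_name="pq_collection",  # illustrative name
    vectors_config=models.VectorParams(size=384, distance=models.Distance.COSINE),
    quantization_config=models.ProductQuantization(
        product=models.ProductQuantizationConfig(
            compression=models.CompressionRatio.X16,  # 16x compression, as in the example above
            always_ram=True,  # keep quantized vectors in RAM; originals may stay on disk
        )
    ),
)
```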
articles/product-quantization.md
--- title: "What is a Vector Database?" draft: false slug: what-is-a-vector-database? short_description: What is a Vector Database? Use Cases & Examples | Qdrant description: Discover what a vector database is, its core functionalities, and real-world applications. Unlock advanced data management with our comprehensive guide. preview_dir: /articles_data/what-is-a-vector-database/preview weight: -100 social_preview_image: /articles_data/what-is-a-vector-database/preview/social-preview.jpg small_preview_image: /articles_data/what-is-a-vector-database/icon.svg date: 2024-01-25T09:29:33-03:00 author: Sabrina Aquino featured: true tags: - vector-search - vector-database - embeddings aliases: [ /blog/what-is-a-vector-database/ ] --- # Why use a Vector Database & How Does it Work? In the ever-evolving landscape of data management and artificial intelligence, [vector databases](https://qdrant.tech/qdrant-vector-database/) have emerged as a revolutionary tool for efficiently handling complex, high-dimensional data. But what exactly is a vector database? This comprehensive guide delves into the fundamentals of vector databases, exploring their unique capabilities, core functionalities, and real-world applications. ## What is a Vector Database? A [Vector Database](https://qdrant.tech/qdrant-vector-database/) is a specialized database system designed for efficiently indexing, querying, and retrieving high-dimensional vector data. Those systems enable advanced data analysis and similarity-search operations that extend well beyond the traditional, structured query approach of conventional databases. ## Why use a Vector Database? The data flood is real. In 2024, we're drowning in unstructured data like images, text, and audio, that don’t fit into neatly organized tables. Still, we need a way to easily tap into the value within this chaos of almost 330 million terabytes of data being created each day. Traditional databases, even with extensions that provide some vector handling capabilities, struggle with the complexities and demands of high-dimensional vector data. Handling of vector data is extremely resource-intensive. A traditional vector is around 6Kb. You can see how scaling to millions of vectors can demand substantial system memory and computational resources. Which is at least very challenging for traditional [OLTP](https://www.ibm.com/topics/oltp) and [OLAP](https://www.ibm.com/topics/olap) databases to manage. ![](/articles_data/what-is-a-vector-database/Why-Use-Vector-Database.jpg) Vector databases allow you to understand the **context** or **conceptual similarity** of unstructured data by representing them as **vectors**, enabling advanced analysis and retrieval based on data similarity. For example, in recommendation systems, vector databases can analyze user behavior and item characteristics to suggest products or content with a high degree of personal relevance. In search engines and research databases, they enhance the user experience by providing results that are **semantically** similar to the query. They do not rely solely on the exact words typed into the search bar. If you're new to the vector search space, this article explains the key concepts and relationships that you need to know. So let's get into it. ## What is Vector Data? To understand vector databases, let's begin by defining what is a 'vector' or 'vector data'. Vectors are a **numerical representation** of some type of complex information. 
To represent textual data, for example, it will encapsulate the nuances of language, such as semantics and context. With an image, the vector data encapsulates aspects like color, texture, and shape. The **dimensions** relate to the complexity and the amount of information each image contains. Each pixel in an image can be seen as one dimension, as it holds data (like color intensity values for red, green, and blue channels in a color image). So even a small image with thousands of pixels translates to thousands of dimensions. So from now on, when we talk about high-dimensional data, we mean that the data contains a large number of data points (pixels, features, semantics, syntax). The **creation** of vector data (so we can store this high-dimensional data on our vector database) is primarily done through **embeddings**. ![](/articles_data/what-is-a-vector-database/Vector-Data.jpg) ### How do Embeddings Work? [Embeddings](https://qdrant.tech/articles/what-are-embeddings/) translate this high-dimensional data into a more manageable, **lower-dimensional** vector form that's more suitable for machine learning and data processing applications, typically through **neural network models**. In creating dimensions for text, for example, the process involves analyzing the text to capture its linguistic elements. Transformer-based neural networks like **BERT** (Bidirectional Encoder Representations from Transformers) and **GPT** (Generative Pre-trained Transformer), are widely used for creating text embeddings. Each layer extracts different levels of features, such as context, semantics, and syntax. ![](/articles_data/what-is-a-vector-database/How-Do-Embeddings-Work_.jpg) The final layers of the network condense this information into a vector that is a compact, lower-dimensional representation of the image but still retains the essential information. ## The Core Functionalities of Vector Databases ### Vector Database Indexing Have you ever tried to find a specific face in a massive crowd photo? Well, vector databases face a similar challenge when dealing with tons of high-dimensional vectors. Now, imagine dividing the crowd into smaller groups based on hair color, then eye color, then clothing style. Each layer gets you closer to who you’re looking for. Vector databases use similar **multi-layered** structures called indexes to organize vectors based on their "likeness." This way, finding similar images becomes a quick hop across related groups, instead of scanning every picture one by one. ![](/articles_data/what-is-a-vector-database/Indexing.jpg) Different indexing methods exist, each with its strengths. [HNSW](/articles/filtrable-hnsw/) balances speed and accuracy like a well-connected network of shortcuts in the crowd. Others, like IVF or Product Quantization, focus on specific tasks or memory efficiency. ### Binary Quantization Quantization is a technique used for reducing the total size of the database. It works by compressing vectors into a more compact representation at the cost of accuracy. [Binary Quantization](/articles/binary-quantization/) is a fast indexing and data compression method used by Qdrant. It supports vector comparisons, which can dramatically speed up query processing times (up to 40x faster!). Think of each data point as a ruler. Binary quantization splits this ruler in half at a certain point, marking everything above as "1" and everything below as "0". 
This [binarization](https://deepai.org/machine-learning-glossary-and-terms/binarization) process results in a string of bits, representing the original vector. ![](/articles_data/what-is-a-vector-database/Binary-Quant.png) This "quantized" code is much smaller and easier to compare. Especially for OpenAI embeddings, this type of quantization has proven to achieve a massive performance improvement at a lower cost of accuracy. ### Similarity Search [Similarity search](/documentation/concepts/search/) allows you to search not by keywords but by meaning. This way you can do searches such as similar songs that evoke the same mood, finding images that match your artistic vision, or even exploring emotional patterns in text. The way it works is, when the user queries the database, this query is also converted into a vector (the query vector). The [vector search](/documentation/overview/vector-search/) starts at the top layer of the HNSW index, where the algorithm quickly identifies the area of the graph likely to contain vectors closest to the query vector. The algorithm compares your query vector to all the others, using metrics like "distance" or "similarity" to gauge how close they are. The search then moves down progressively narrowing down to more closely related vectors. The goal is to narrow down the dataset to the most relevant items. The image below illustrates this. ![](/articles_data/what-is-a-vector-database/Similarity-Search-and-Retrieval.jpg) Once the closest vectors are identified at the bottom layer, these points translate back to actual data, like images or music, representing your search results. ### Scalability [Vector databases](https://qdrant.tech/qdrant-vector-database/) often deal with datasets that comprise billions of high-dimensional vectors. This data isn't just large in volume but also complex in nature, requiring more computing power and memory to process. Scalable systems can handle this increased complexity without performance degradation. This is achieved through a combination of a **distributed architecture**, **dynamic resource allocation**, **data partitioning**, **load balancing**, and **optimization techniques**. Systems like Qdrant exemplify scalability in vector databases. It [leverages Rust's efficiency](https://qdrant.tech/articles/why-rust/) in **memory management** and **performance**, which allows the handling of large-scale data with optimized resource usage. ### Efficient Query Processing The key to efficient query processing in these databases is linked to their **indexing methods**, which enable quick navigation through complex data structures. By mapping and accessing the high-dimensional vector space, HNSW and similar indexing techniques significantly reduce the time needed to locate and retrieve relevant data. ![](/articles_data/what-is-a-vector-database/search-query.jpg) Other techniques like **handling computational load** and **parallel processing** are used for performance, especially when managing multiple simultaneous queries. Complementing them, **strategic caching** is also employed to store frequently accessed data, facilitating a quicker retrieval for subsequent queries. ### Using Metadata and Filters Filters use metadata to refine search queries within the database. For example, in a database containing text documents, a user might want to search for documents not only based on textual similarity but also filter the results by publication date or author. 
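To make this concrete, here is a small, hypothetical sketch of a filtered search with Qdrant's Python client. The collection, field names, and values are made up for illustration:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")  # assumes a running Qdrant instance

results = client.search(
    collection_name="documents",           # illustrative collection
    query_vector=[0.2, 0.1, 0.9, 0.7],     # normally produced by your embedding model
    query_filter=models.Filter(
        must=[
            # Only return points whose payload matches these conditions
            models.FieldCondition(key="author", match=models.MatchValue(value="Jane Doe")),
            models.FieldCondition(key="year", range=models.Range(gte=2023)),
        ]
    ),
    limit=5,
)
```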
When a query is made, the system can use **both** the vector data and the metadata to process the query. In other words, the database doesn’t just look for the closest vectors. It also considers the additional criteria set by the metadata filters, creating a more customizable search experience. ![](/articles_data/what-is-a-vector-database/metadata.jpg) ### Data Security and Access Control Vector databases often store sensitive information. This could include personal data in customer databases, confidential images, or proprietary text documents. Ensuring data security means protecting this information from unauthorized access, breaches, and other forms of cyber threats. At Qdrant, this includes mechanisms such as: - User authentication - Encryption for data at rest and in transit - Keeping audit trails - Advanced database monitoring and anomaly detection ## What is the Architecture of a Vector Database? A vector database is made of multiple different entities and relations. Here's a high-level overview of Qdrant's terminologies and how they fit into the larger picture: ![](/articles_data/what-is-a-vector-database/Architecture-of-a-Vector-Database.jpg) **Collections**: [Collections](/documentation/concepts/collections/) are a named set of data points, where each point is a vector with an associated payload. All vectors within a collection must have the same dimensionality and be comparable using a single metric. **Distance Metrics**: These metrics are used to measure the similarity between vectors. The choice of distance metric is made when creating a collection. It depends on the nature of the vectors and how they were generated, considering the neural network used for the encoding. **Points**: Each [point](/documentation/concepts/points/) consists of a **vector** and can also include an optional **identifier** (ID) and **[payload](/documentation/concepts/payload/)**. The vector represents the high-dimensional data and the payload carries metadata information in a JSON format, giving the data point more context or attributes. **Storage Options**: There are two primary storage options. The in-memory storage option keeps all vectors in RAM, which allows for the highest speed in data access since disk access is only required for persistence. Alternatively, the Memmap storage option creates a virtual address space linked with the file on disk, giving a balance between memory usage and access speed. **Clients**: Qdrant supports various programming languages for client interaction, such as Python, Go, Rust, and Typescript. This way developers can connect to and interact with Qdrant using the programming language they prefer. ## Vector Database Use Cases If we had to summarize the [use cases for vector databases](https://qdrant.tech/use-cases/) into a single word, it would be "match". They are great at finding non-obvious ways to correspond or “match” data with a given query. Whether it's through similarity in images, text, user preferences, or patterns in data. Here are some examples of how to take advantage of using vector databases: [Personalized recommendation systems](https://qdrant.tech/recommendations/) to analyze and interpret complex user data, such as preferences, behaviors, and interactions. For example, on Spotify, if a user frequently listens to the same song or skips it, the recommendation engine takes note of this to personalize future suggestions. 
[Semantic search](https://qdrant.tech/documentation/tutorials/search-beginners/) allows systems to capture the deeper semantic meaning of words and text. When someone searches a modern search engine for "tips for planting in spring," the engine tries to understand the intent and contextual meaning behind the query rather than just matching the words themselves.

Here’s an example of a [vector search engine for Startups](https://demo.qdrant.tech/) made with Qdrant:

![](/articles_data/what-is-a-vector-database/semantic-search.png)

There are many other use cases, such as **fraud detection and anomaly analysis** in sectors like finance and cybersecurity, or **Content-Based Image Retrieval (CBIR)**, which compares vector representations of images rather than their metadata or tags.

Those are just a few examples. The ability of vector databases to “match” data with queries makes them essential for multiple types of applications. Here are some more [use cases examples](/use-cases/) you can take a look at.

### Get Started With Qdrant’s Vector Database Today

Now that you're familiar with the core concepts around vector databases, it’s time to get your hands dirty. [Start by building your own semantic search engine](/documentation/tutorials/search-beginners/) for science fiction books in just about 5 minutes with the help of Qdrant. You can also watch our [video tutorial](https://www.youtube.com/watch?v=AASiqmtKo54).

Feeling ready to dive into a more complex project? Take the next step and get started building an actual [Neural Search Service with a complete API and a dataset](/documentation/tutorials/neural-search/). Let’s get into action!
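If you would like to see the building blocks described above (collections, points, payloads) in code before diving into the tutorials, here is a minimal, illustrative sketch using the Python client. All names, vectors, and payload values are made up:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(":memory:")  # in-memory mode, handy for experiments

# A collection holds points whose vectors share one dimensionality and distance metric
client.create_collection(
    collection_name="books",
    vectors_config=models.VectorParams(size=4, distance=models.Distance.COSINE),
)

# Each point has an ID, a vector, and an optional JSON payload (metadata)
client.upsert(
    collection_name="books",
    points=[
        models.PointStruct(
            id=1,
            vector=[0.05, 0.61, 0.76, 0.74],
            payload={"title": "Dune", "year": 1965},
        ),
    ],
)
```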
articles/what-is-a-vector-database.md
---
title: Layer Recycling and Fine-tuning Efficiency
short_description: Tradeoff between speed and performance in layer recycling
description: Learn when and how to use layer recycling to achieve different performance targets.
preview_dir: /articles_data/embedding-recycling/preview
small_preview_image: /articles_data/embedding-recycling/icon.svg
social_preview_image: /articles_data/embedding-recycling/preview/social_preview.jpg
weight: 10
author: Yusuf Sarıgöz
author_link: https://medium.com/@yusufsarigoz
date: 2022-08-23T13:00:00+03:00
draft: false
aliases: [ /articles/embedding-recycler/ ]
---

A recent [paper](https://arxiv.org/abs/2207.04993) by Allen AI has attracted attention in the NLP community: by caching the output of a certain intermediate layer during training and inference, the authors achieve a speedup of ~83% with a negligible loss in model performance. This technique is quite similar to [the caching mechanism in Quaterion](https://quaterion.qdrant.tech/tutorials/cache_tutorial.html), but the latter is intended for any data modality, while the former focuses only on language models despite presenting important insights from their experiments. In this post, I will share our findings combined with theirs, hoping to provide the community with a wider perspective on layer recycling.

## How layer recycling works

The main idea of layer recycling is to accelerate training (and inference) by avoiding repeated passes of the same data object through the frozen layers. Instead, it is possible to pass objects through those layers only once, cache the output, and use it as the input to the unfrozen layers in future epochs. In the paper, the authors usually cache 50% of the layers, e.g., the output of the 6th multi-head self-attention block in a 12-block encoder. However, they find that it does not work equally well for all tasks. For example, the question answering task suffers from a more significant degradation in performance with 50% of the layers recycled, so they lower the ratio to 25% for this task and suggest determining the level of caching based on the task at hand. They also note that caching provides a more considerable speedup for larger models and on lower-end machines.

In layer recycling, the cache is hit only for exactly the same object. This is easy to achieve with textual data, as it is easily hashable, but you may need more advanced tricks to generate cache keys when you want to generalize this technique to diverse data types. For instance, hashing PyTorch tensors [does not work as you may expect](https://github.com/joblib/joblib/issues/1282). Quaterion comes with an intelligent key extractor that may be applied to any data type, and you can also customize it with a callable passed as an argument. Thanks to this flexibility, we were able to run a variety of experiments in different setups, and I believe that these findings will be helpful for your future projects. A minimal illustration of the caching idea is sketched right after the experiment list below.

## Experiments

We conducted different experiments to test the performance with:

1. Different numbers of layers recycled in [the similar cars search example](https://quaterion.qdrant.tech/tutorials/cars-tutorial.html).
2. Different numbers of samples in the dataset for training and fine-tuning for similar cars search.
3. Different numbers of layers recycled in [the question answering example](https://quaterion.qdrant.tech/tutorials/nlp_tutorial.html).
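Here is the minimal, framework-agnostic sketch of the caching idea mentioned above. It is purely illustrative and is not Quaterion's actual implementation; it only shows why caching the frozen layers' output avoids repeated forward passes:

```python
import torch
import torch.nn as nn

# Layers we "recycle": frozen, so their output never changes for a given input
frozen = nn.Sequential(nn.Linear(128, 64), nn.ReLU()).eval()
for p in frozen.parameters():
    p.requires_grad = False

# Layers we keep training on top of the cached output
head = nn.Linear(64, 16)

cache = {}  # maps a hashable key of the data object to the frozen layers' output

def encode(key, x):
    # Run the frozen layers only once per object; later epochs hit the cache
    if key not in cache:
        with torch.no_grad():
            cache[key] = frozen(x)
    return head(cache[key])

# First call computes and caches; subsequent calls with the same key skip the frozen pass
out = encode("sample-42", torch.randn(1, 128))
```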
## Easy layer recycling with Quaterion The easiest way of caching layers in Quaterion is to compose a [TrainableModel](https://quaterion.qdrant.tech/quaterion.train.trainable_model.html#quaterion.train.trainable_model.TrainableModel) with a frozen [Encoder](https://quaterion-models.qdrant.tech/quaterion_models.encoders.encoder.html#quaterion_models.encoders.encoder.Encoder) and an unfrozen [EncoderHead](https://quaterion-models.qdrant.tech/quaterion_models.heads.encoder_head.html#quaterion_models.heads.encoder_head.EncoderHead). Therefore, we modified the `TrainableModel` in the [example](https://github.com/qdrant/quaterion/blob/master/examples/cars/models.py) as in the following: ```python class Model(TrainableModel): # ... def configure_encoders(self) -> Union[Encoder, Dict[str, Encoder]]: pre_trained_encoder = torchvision.models.resnet34(pretrained=True) self.avgpool = copy.deepcopy(pre_trained_encoder.avgpool) self.finetuned_block = copy.deepcopy(pre_trained_encoder.layer4) modules = [] for name, child in pre_trained_encoder.named_children(): modules.append(child) if name == "layer3": break pre_trained_encoder = nn.Sequential(*modules) return CarsEncoder(pre_trained_encoder) def configure_head(self, input_embedding_size) -> EncoderHead: return SequentialHead(self.finetuned_block, self.avgpool, nn.Flatten(), SkipConnectionHead(512, dropout=0.3, skip_dropout=0.2), output_size=512) # ... ``` This trick lets us finetune one more layer from the base model as a part of the `EncoderHead` while still benefiting from the speedup in the frozen `Encoder` provided by the cache. ## Experiment 1: Percentage of layers recycled The paper states that recycling 50% of the layers yields little to no loss in performance when compared to full fine-tuning. In this setup, we compared performances of four methods: 1. Freeze the whole base model and train only `EncoderHead`. 2. Move one of the four residual blocks `EncoderHead` and train it together with the head layer while freezing the rest (75% layer recycling). 3. Move two of the four residual blocks to `EncoderHead` while freezing the rest (50% layer recycling). 4. Train the whole base model together with `EncoderHead`. **Note**: During these experiments, we used ResNet34 instead of ResNet152 as the pretrained model in order to be able to use a reasonable batch size in full training. The baseline score with ResNet34 is 0.106. | Model | RRP | | ------------- | ---- | | Full training | 0.32 | | 50% recycling | 0.31 | | 75% recycling | 0.28 | | Head only | 0.22 | | Baseline | 0.11 | As is seen in the table, the performance in 50% layer recycling is very close to that in full training. Additionally, we can still have a considerable speedup in 50% layer recycling with only a small drop in performance. Although 75% layer recycling is better than training only `EncoderHead`, its performance drops quickly when compared to 50% layer recycling and full training. ## Experiment 2: Amount of available data In the second experiment setup, we compared performances of fine-tuning strategies with different dataset sizes. We sampled 50% of the training set randomly while still evaluating models on the whole validation set. | Model | RRP | | ------------- | ---- | | Full training | 0.27 | | 50% recycling | 0.26 | | 75% recycling | 0.25 | | Head only | 0.21 | | Baseline | 0.11 | This experiment shows that, the smaller the available dataset is, the bigger drop in performance we observe in full training, 50% and 75% layer recycling. 
On the other hand, the level of degradation when training only `EncoderHead` is much smaller than in the other setups. When we further reduce the dataset size, full training becomes untrainable at some point, while we can still improve over the baseline by training only `EncoderHead`.

## Experiment 3: Layer recycling in question answering

We also wanted to test layer recycling in a different domain, as one of the most important takeaways of the paper is that the performance of layer recycling is task-dependent. To this end, we set up an experiment with the code from the [Question Answering with Similarity Learning tutorial](https://quaterion.qdrant.tech/tutorials/nlp_tutorial.html).

| Model | RP@1 | RRK |
| ------------- | ---- | ---- |
| Full training | 0.76 | 0.65 |
| 50% recycling | 0.75 | 0.63 |
| 75% recycling | 0.69 | 0.59 |
| Head only | 0.67 | 0.58 |
| Baseline | 0.64 | 0.55 |

In this task, 50% layer recycling can still do a good job with only a small drop in performance when compared to full training. In fact, the level of degradation is even smaller than in the similar cars search example. This can be attributed to several factors, such as the pretrained model quality, dataset size, and task definition, and it could be the subject of a more elaborate and comprehensive research project. Another observation is that the performance of 75% layer recycling is closer to that of training only `EncoderHead` than to that of 50% layer recycling.

## Conclusion

We set up several experiments to test layer recycling under different constraints and confirmed that layer recycling yields varying performance across tasks and domains. One of the most important observations is that the level of degradation in layer recycling is sublinear compared to full training, i.e., we lose a smaller percentage of performance than the percentage of layers we recycle. Additionally, training only `EncoderHead` is more resistant to small dataset sizes. There is even a critical size under which full training does not work at all. These performance differences show that there is still room for further research on layer recycling, and luckily Quaterion is flexible enough to run such experiments quickly. We will continue to report our findings on fine-tuning efficiency.

**Fun fact**: The preview image for this article was created with DALL·E with the following prompt: "Photo-realistic robot using a tuning fork to adjust a piano." [Click here](/articles_data/embedding-recycling/full.png) to see it in full size!
articles/embedding-recycler.md
--- title: "What are Vector Embeddings? - Revolutionize Your Search Experience" draft: false slug: what-are-embeddings? short_description: Explore the power of vector embeddings. Learn to use numerical machine learning representations to build a personalized Neural Search Service with Fastembed. description: Discover the power of vector embeddings. Learn how to harness the potential of numerical machine learning representations to create a personalized Neural Search Service with FastEmbed. preview_dir: /articles_data/what-are-embeddings/preview weight: -102 social_preview_image: /articles_data/what-are-embeddings/preview/social-preview.jpg small_preview_image: /articles_data/what-are-embeddings/icon.svg date: 2024-02-06T15:29:33-03:00 author: Sabrina Aquino author_link: https://github.com/sabrinaaquino featured: true tags: - vector-search - vector-database - embeddings - machine-learning - artificial intelligence --- > **Embeddings** are numerical machine learning representations of the semantic of the input data. They capture the meaning of complex, high-dimensional data, like text, images, or audio, into vectors. Enabling algorithms to process and analyze the data more efficiently. You know when you’re scrolling through your social media feeds and the content just feels incredibly tailored to you? There's the news you care about, followed by a perfect tutorial with your favorite tech stack, and then a meme that makes you laugh so hard you snort. Or what about how YouTube recommends videos you ended up loving. It’s by creators you've never even heard of and you didn’t even send YouTube a note about your ideal content lineup. This is the magic of embeddings. These are the result of **deep learning models** analyzing the data of your interactions online. From your likes, shares, comments, searches, the kind of content you linger on, and even the content you decide to skip. It also allows the algorithm to predict future content that you are likely to appreciate. The same embeddings can be repurposed for search, ads, and other features, creating a highly personalized user experience. ![How embeddings are applied to perform recommendantions and other use cases](/articles_data/what-are-embeddings/Embeddings-Use-Case.jpg) They make [high-dimensional](https://www.sciencedirect.com/topics/computer-science/high-dimensional-data) data more manageable. This reduces storage requirements, improves computational efficiency, and makes sense of a ton of **unstructured** data. ## Why use vector embeddings? The **nuances** of natural language or the hidden **meaning** in large datasets of images, sounds, or user interactions are hard to fit into a table. Traditional relational databases can't efficiently query most types of data being currently used and produced, making the **retrieval** of this information very limited. In the embeddings space, synonyms tend to appear in similar contexts and end up having similar embeddings. The space is a system smart enough to understand that "pretty" and "attractive" are playing for the same team. Without being explicitly told so. That’s the magic. At their core, vector embeddings are about semantics. They take the idea that "a word is known by the company it keeps" and apply it on a grand scale. 
![Example of how synonyms are placed closer together in the embeddings space](/articles_data/what-are-embeddings/Similar-Embeddings.jpg) This capability is crucial for creating search systems, recommendation engines, retrieval augmented generation (RAG) and any application that benefits from a deep understanding of content. ## How do embeddings work? Embeddings are created through neural networks. They capture complex relationships and semantics into [dense vectors](https://www1.se.cuhk.edu.hk/~seem5680/lecture/semantics-with-dense-vectors-2018.pdf) which are more suitable for machine learning and data processing applications. They can then project these vectors into a proper **high-dimensional** space, specifically, a [Vector Database](/articles/what-is-a-vector-database/). ![The process for turning raw data into embeddings and placing them into the vector space](/articles_data/what-are-embeddings/How-Embeddings-Work.jpg) The meaning of a data point is implicitly defined by its **position** on the vector space. After the vectors are stored, we can use their spatial properties to perform [nearest neighbor searches](https://en.wikipedia.org/wiki/Nearest_neighbor_search#:~:text=Nearest%20neighbor%20search%20(NNS)%2C,the%20larger%20the%20function%20values.). These searches retrieve semantically similar items based on how close they are in this space. > The quality of the vector representations drives the performance. The embedding model that works best for you depends on your use case. ### Creating vector embeddings Embeddings translate the complexities of human language to a format that computers can understand. It uses neural networks to assign **numerical values** to the input data, in a way that similar data has similar values. ![The process of using Neural Networks to create vector embeddings](/articles_data/what-are-embeddings/How-Do-Embeddings-Work_.jpg) For example, if I want to make my computer understand the word 'right', I can assign a number like 1.3. So when my computer sees 1.3, it sees the word 'right’. Now I want to make my computer understand the context of the word ‘right’. I can use a two-dimensional vector, such as [1.3, 0.8], to represent 'right'. The first number 1.3 still identifies the word 'right', but the second number 0.8 specifies the context. We can introduce more dimensions to capture more nuances. For example, a third dimension could represent formality of the word, a fourth could indicate its emotional connotation (positive, neutral, negative), and so on. The evolution of this concept led to the development of embedding models like [Word2Vec](https://en.wikipedia.org/wiki/Word2vec) and [GloVe](https://en.wikipedia.org/wiki/GloVe). They learn to understand the context in which words appear to generate high-dimensional vectors for each word, capturing far more complex properties. ![How Word2Vec model creates the embeddings for a word](/articles_data/what-are-embeddings/Word2Vec-model.jpg) However, these models still have limitations. They generate a single vector per word, based on its usage across texts. This means all the nuances of the word "right" are blended into one vector representation. That is not enough information for computers to fully understand the context. So, how do we help computers grasp the nuances of language in different contexts? 
In other words, how do we differentiate between:

* "your answer is right"
* "turn right at the corner"
* "everyone has the right to freedom of speech"

Each of these sentences uses the word 'right' with a different meaning.

More advanced models like [BERT](https://en.wikipedia.org/wiki/BERT_(language_model)) and [GPT](https://en.wikipedia.org/wiki/Generative_pre-trained_transformer) are deep learning models based on the [transformer architecture](https://arxiv.org/abs/1706.03762), which helps computers consider the full context of a word. These models pay attention to the entire context, understand the specific use of a word in its **surroundings**, and then create a different embedding for each usage.

![How the BERT model creates the embeddings for a word](/articles_data/what-are-embeddings/BERT-model.jpg)

But how does this process of understanding and interpreting work in practice? Think of the term "biophilic design", for example. To generate its embedding, the transformer architecture can use the following contexts:

* "Biophilic design incorporates natural elements into architectural planning."
* "Offices with biophilic design elements report higher employee well-being."
* "...plant life, natural light, and water features are key aspects of biophilic design."

And then it compares these contexts to known architectural and design principles:

* "Sustainable designs prioritize environmental harmony."
* "Ergonomic spaces enhance user comfort and health."

The model creates a vector embedding for "biophilic design" that encapsulates the concept of integrating natural elements into man-made environments, augmented with attributes that highlight the correlation between this integration and its positive impact on health, well-being, and environmental sustainability.

### Integration with embedding APIs

Selecting the right embedding model for your use case is crucial to your application performance. Qdrant makes it easier by offering seamless integration with the best selection of embedding APIs, including [Cohere](/documentation/embeddings/cohere/), [Gemini](/documentation/embeddings/gemini/), [Jina Embeddings](/documentation/embeddings/jina-embeddings/), [OpenAI](/documentation/embeddings/openai/), [Aleph Alpha](/documentation/embeddings/aleph-alpha/), [Fastembed](https://github.com/qdrant/fastembed), and [AWS Bedrock](/documentation/embeddings/bedrock/).

If you're looking for NLP and rapid prototyping, including language translation, question-answering, and text generation, OpenAI is a great choice. Gemini is ideal for image search, duplicate detection, and clustering tasks. Fastembed, which we'll use in the example below, is designed for efficiency and speed, great for applications needing low-latency responses, such as autocomplete and instant content recommendations.

We plan to go deeper into selecting the best model based on performance, cost, integration ease, and scalability in a future post.

## Create a neural search service with Fastembed

Now that you're familiar with the core concepts around vector embeddings, why not start building your own [Neural Search Service](/documentation/tutorials/neural-search/)?

The tutorial guides you through a practical application of how to use Qdrant for document management based on descriptions of companies from [startups-list.com](https://www.startups-list.com/). It covers everything from embedding the data and integrating it with Qdrant's vector database to constructing a search API and finally deploying your solution with FastAPI.
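As a small taste of the tutorial's first step, here is a sketch of turning a couple of invented startup descriptions into embeddings with FastEmbed's default model. The import path follows Qdrant's FastEmbed article; newer library versions may expose the class differently:

```python
from typing import List

import numpy as np
from fastembed.embedding import DefaultEmbedding  # BAAI/bge-small-en-v1.5 under the hood

descriptions: List[str] = [
    "AI-powered platform that summarizes legal contracts.",      # made-up examples
    "Marketplace connecting local farmers with restaurants.",
]

embedding_model = DefaultEmbedding()
embeddings: List[np.ndarray] = list(embedding_model.embed(descriptions))

print(embeddings[0].shape)  # 384-dimensional vectors for this model
```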
Check out what the final version of this project looks like on the [live online demo](https://qdrant.to/semantic-search-demo). Let us know what you’re building with embeddings! Join our [Discord](https://discord.gg/qdrant-907569970500743200) community and share your projects!
articles/what-are-embeddings.md
--- title: "Scalar Quantization: Background, Practices & More | Qdrant" short_description: "Discover scalar quantization for optimized data storage and improved performance, including data compression benefits and efficiency enhancements." description: "Discover the efficiency of scalar quantization for optimized data storage and enhanced performance. Learn about its data compression benefits and efficiency improvements." social_preview_image: /articles_data/scalar-quantization/social_preview.png small_preview_image: /articles_data/scalar-quantization/scalar-quantization-icon.svg preview_dir: /articles_data/scalar-quantization/preview weight: 5 author: Kacper Łukawski author_link: https://medium.com/@lukawskikacper date: 2023-03-27T10:45:00+01:00 draft: false keywords: - vector search - scalar quantization - memory optimization --- # Efficiency Unleashed: The Power of Scalar Quantization High-dimensional vector embeddings can be memory-intensive, especially when working with large datasets consisting of millions of vectors. Memory footprint really starts being a concern when we scale things up. A simple choice of the data type used to store a single number impacts even billions of numbers and can drive the memory requirements crazy. The higher the precision of your type, the more accurately you can represent the numbers. The more accurate your vectors, the more precise is the distance calculation. But the advantages stop paying off when you need to order more and more memory. Qdrant chose `float32` as a default type used to store the numbers of your embeddings. So a single number needs 4 bytes of the memory and a 512-dimensional vector occupies 2 kB. That's only the memory used to store the vector. There is also an overhead of the HNSW graph, so as a rule of thumb we estimate the memory size with the following formula: ```text memory_size = 1.5 * number_of_vectors * vector_dimension * 4 bytes ``` While Qdrant offers various options to store some parts of the data on disk, starting from version 1.1.0, you can also optimize your memory by compressing the embeddings. We've implemented the mechanism of **Scalar Quantization**! It turns out to have not only a positive impact on memory but also on the performance. ## Scalar quantization Scalar quantization is a data compression technique that converts floating point values into integers. In case of Qdrant `float32` gets converted into `int8`, so a single number needs 75% less memory. It's not a simple rounding though! It's a process that makes that transformation partially reversible, so we can also revert integers back to floats with a small loss of precision. ### Theoretical background Assume we have a collection of `float32` vectors and denote a single value as `f32`. In reality neural embeddings do not cover a whole range represented by the floating point numbers, but rather a small subrange. Since we know all the other vectors, we can establish some statistics of all the numbers. For example, the distribution of the values will be typically normal: ![A distribution of the vector values](/articles_data/scalar-quantization/float32-distribution.png) Our example shows that 99% of the values come from a `[-2.0, 5.0]` range. And the conversion to `int8` will surely lose some precision, so we rather prefer keeping the representation accuracy within the range of 99% of the most probable values and ignoring the precision of the outliers. 
There might be a different choice of the range width: any value from the range `[0, 1]`, where `0` means an empty range and `1` keeps all the values. This is a hyperparameter of the procedure, called `quantile`. A value of `0.95` or `0.99` is typically a reasonable choice, but in general `quantile ∈ [0, 1]`.

#### Conversion to integers

Let's talk about the conversion to `int8`. Integers also have a finite set of values that might be represented. Within a single byte they may represent up to 256 different values, either from `[-128, 127]` or `[0, 255]`.

![Value ranges represented by int8](/articles_data/scalar-quantization/int8-value-range.png)

Since we put some boundaries on the numbers that might be represented by the `f32`, and `i8` has some natural boundaries, the process of converting the values between those two ranges is quite natural:

$$ f32 = \alpha \times i8 + offset $$

$$ i8 = \frac{f32 - offset}{\alpha} $$

The parameters $ \alpha $ and $ offset $ have to be calculated for a given set of vectors, but that comes easily by matching the minimum and maximum of the represented range for both `f32` and `i8`.

![Float32 to int8 conversion](/articles_data/scalar-quantization/float32-to-int8-conversion.png)

For the unsigned `int8` it goes as follows:

$$ \begin{equation} \begin{cases} -2 = \alpha \times 0 + offset \\\\ 5 = \alpha \times 255 + offset \end{cases} \end{equation} $$

In the case of signed `int8`, we just change the represented range boundaries:

$$ \begin{equation} \begin{cases} -2 = \alpha \times (-128) + offset \\\\ 5 = \alpha \times 127 + offset \end{cases} \end{equation} $$

For any set of vector values we can simply calculate the $ \alpha $ and $ offset $, and those values have to be stored along with the collection to enable the conversion between the types.

#### Distance calculation

We do not store the vectors in the collections represented by `int8` instead of `float32` just for the sake of compressing the memory. The coordinates are also used while we calculate the distance between the vectors. Both dot product and cosine distance require multiplying the corresponding coordinates of two vectors, so that's the operation we perform quite often on `float32`. Here is how it looks if we perform the conversion to `int8`:

$$ f32 \times f32' = $$

$$ = (\alpha \times i8 + offset) \times (\alpha \times i8' + offset) = $$

$$ = \alpha^{2} \times i8 \times i8' + \underbrace{offset \times \alpha \times i8' + offset \times \alpha \times i8 + offset^{2}}_\text{pre-compute} $$

The first term, $ \alpha^{2} \times i8 \times i8' $, has to be calculated when we measure the distance, as it depends on both vectors. However, the second and the third terms ($ offset \times \alpha \times i8' $ and $ offset \times \alpha \times i8 $, respectively) depend only on a single vector, so they might be precomputed and kept for each vector. The last term, $ offset^{2} $, does not depend on any of the values, so it might even be computed once and reused.

If we had to calculate all the terms to measure the distance, the performance could be even worse than without the conversion. But thanks to the fact that we can precompute the majority of the terms, things get simpler. And it turns out that scalar quantization has a positive impact not only on the memory usage, but also on the performance. As usual, we performed some benchmarks to support this statement!

## Benchmarks

We simply used the same approach as we use in all [the other benchmarks we publish](/benchmarks/).
Both [Arxiv-titles-384-angular-no-filters](https://github.com/qdrant/ann-filtering-benchmark-datasets) and [Gist-960](https://github.com/erikbern/ann-benchmarks/) datasets were chosen to make the comparison between non-quantized and quantized vectors. The results are summarized in the tables: #### Arxiv-titles-384-angular-no-filters <table> <thead> <tr> <th colspan="2"></th> <th colspan="2">ef = 128</th> <th colspan="2">ef = 256</th> <th colspan="2">ef = 512</th> </tr> <tr> <th></th> <th><small>Upload and indexing time</small></th> <th><small>Mean search precision</small></th> <th><small>Mean search time</small></th> <th><small>Mean search precision</small></th> <th><small>Mean search time</small></th> <th><small>Mean search precision</small></th> <th><small>Mean search time</small></th> </tr> </thead> <tbody> <tr> <th>Non-quantized vectors</th> <td>649 s</td> <td>0.989</td> <td>0.0094</td> <td>0.994</td> <td>0.0932</td> <td>0.996</td> <td>0.161</td> </tr> <tr> <th>Scalar Quantization</th> <td>496 s</td> <td>0.986</td> <td>0.0037</td> <td>0.993</td> <td>0.060</td> <td>0.996</td> <td>0.115</td> </tr> <tr> <td>Difference</td> <td><span style="color: green;">-23.57%</span></td> <td><span style="color: red;">-0.3%</span></td> <td><span style="color: green;">-60.64%</span></td> <td><span style="color: red;">-0.1%</span></td> <td><span style="color: green;">-35.62%</span></td> <td>0%</td> <td><span style="color: green;">-28.57%</span></td> </tr> </tbody> </table> A slight decrease in search precision results in a considerable improvement in the latency. Unless you aim for the highest precision possible, you should not notice the difference in your search quality. #### Gist-960 <table> <thead> <tr> <th colspan="2"></th> <th colspan="2">ef = 128</th> <th colspan="2">ef = 256</th> <th colspan="2">ef = 512</th> </tr> <tr> <th></th> <th><small>Upload and indexing time</small></th> <th><small>Mean search precision</small></th> <th><small>Mean search time</small></th> <th><small>Mean search precision</small></th> <th><small>Mean search time</small></th> <th><small>Mean search precision</small></th> <th><small>Mean search time</small></th> </tr> </thead> <tbody> <tr> <th>Non-quantized vectors</th> <td>452 s</td> <td>0.802</td> <td>0.077</td> <td>0.887</td> <td>0.135</td> <td>0.941</td> <td>0.231</td> </tr> <tr> <th>Scalar Quantization</th> <td>312 s</td> <td>0.802</td> <td>0.043</td> <td>0.888</td> <td>0.077</td> <td>0.941</td> <td>0.135</td> </tr> <tr> <td>Difference</td> <td><span style="color: green;">-30.79%</span></td> <td>0%</td> <td><span style="color: green;">-44.16%</span></td> <td><span style="color: green;">+0.11%</span></td> <td><span style="color: green;">-42.96%</span></td> <td>0%</td> <td><span style="color: green;">-41.56%</span></td> </tr> </tbody> </table> In all the cases, the decrease in search precision is negligible, but we keep a latency reduction of at least 28.57%, even up to 60.64%, while searching. As a rule of thumb, the higher the dimensionality of the vectors, the lower the precision loss. ### Oversampling and rescoring A distinctive feature of the Qdrant architecture is the ability to combine the search for quantized and original vectors in a single query. This enables the best combination of speed, accuracy, and RAM usage. Qdrant stores the original vectors, so it is possible to rescore the top-k results with the original vectors after doing the neighbours search in quantized space. 
That obviously has some impact on the performance, but in order to measure how big it is, we made the comparison in different search scenarios. We used a machine with a very slow network-mounted disk and tested the following scenarios with different amounts of allowed RAM: | Setup | RPS | Precision | |-----------------------------|------|-----------| | 4.5GB memory | 600 | 0.99 | | 4.5GB memory + SQ + rescore | 1000 | 0.989 | And another group with more strict memory limits: | Setup | RPS | Precision | |------------------------------|------|-----------| | 2GB memory | 2 | 0.99 | | 2GB memory + SQ + rescore | 30 | 0.989 | | 2GB memory + SQ + no rescore | 1200 | 0.974 | In those experiments, throughput was mainly defined by the number of disk reads, and quantization efficiently reduces it by allowing more vectors in RAM. Read more about on-disk storage in Qdrant and how we measure its performance in our article: [Minimal RAM you need to serve a million vectors ](/articles/memory-consumption/). The mechanism of Scalar Quantization with rescoring disabled pushes the limits of low-end machines even further. It seems like handling lots of requests does not require an expensive setup if you can agree to a small decrease in the search precision. ### Accessing best practices Qdrant documentation on [Scalar Quantization](/documentation/quantization/#setting-up-quantization-in-qdrant) is a great resource describing different scenarios and strategies to achieve up to 4x lower memory footprint and even up to 2x performance increase.
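To give you a concrete starting point, here is a minimal sketch of enabling scalar quantization with the Python client and controlling rescoring at query time. The collection name, vector size, and oversampling value are placeholders, and the exact parameter availability may vary slightly between client and server versions:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient("http://localhost:6333")

# Create a collection with int8 scalar quantization enabled,
# keeping 99% of the value range and storing the quantized vectors in RAM.
client.create_collection(
    collection_name="demo_collection",
    vectors_config=models.VectorParams(size=384, distance=models.Distance.COSINE),
    quantization_config=models.ScalarQuantization(
        scalar=models.ScalarQuantizationConfig(
            type=models.ScalarType.INT8,
            quantile=0.99,
            always_ram=True,
        )
    ),
)

# At query time, oversample the quantized candidates and rescore them
# with the original float32 vectors.
results = client.search(
    collection_name="demo_collection",
    query_vector=[0.0] * 384,  # replace with a real embedding
    limit=10,
    search_params=models.SearchParams(
        quantization=models.QuantizationSearchParams(
            rescore=True,
            oversampling=2.0,
        )
    ),
)
```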
articles/scalar-quantization.md
--- title: Extending ChatGPT with a Qdrant-based knowledge base short_description: "ChatGPT factuality might be improved with semantic search. Here is how." description: "ChatGPT factuality might be improved with semantic search. Here is how." social_preview_image: /articles_data/chatgpt-plugin/social_preview.jpg small_preview_image: /articles_data/chatgpt-plugin/chatgpt-plugin-icon.svg preview_dir: /articles_data/chatgpt-plugin/preview weight: 7 author: Kacper Łukawski author_link: https://medium.com/@lukawskikacper date: 2023-03-23T18:01:00+01:00 draft: false keywords: - openai - chatgpt - chatgpt plugin - knowledge base - similarity search --- In recent months, ChatGPT has revolutionised the way we communicate, learn, and interact with technology. Our social platforms got flooded with prompts, responses to them, whole articles and countless other examples of using Large Language Models to generate content unrecognisable from the one written by a human. Despite their numerous benefits, these models have flaws, as evidenced by the phenomenon of hallucination - the generation of incorrect or nonsensical information in response to user input. This issue, which can compromise the reliability and credibility of AI-generated content, has become a growing concern among researchers and users alike. Those concerns started another wave of entirely new libraries, such as Langchain, trying to overcome those issues, for example, by combining tools like vector databases to bring the required context into the prompts. And that is, so far, the best way to incorporate new and rapidly changing knowledge into the neural model. So good that OpenAI decided to introduce a way to extend the model capabilities with external plugins at the model level. These plugins, designed to enhance the model's performance, serve as modular extensions that seamlessly interface with the core system. By adding a knowledge base plugin to ChatGPT, we can effectively provide the AI with a curated, trustworthy source of information, ensuring that the generated content is more accurate and relevant. Qdrant may act as a vector database where all the facts will be stored and served to the model upon request. If you’d like to ask ChatGPT questions about your data sources, such as files, notes, or emails, starting with the official [ChatGPT retrieval plugin repository](https://github.com/openai/chatgpt-retrieval-plugin) is the easiest way. Qdrant is already integrated, so that you can use it right away. In the following sections, we will guide you through setting up the knowledge base using Qdrant and demonstrate how this powerful combination can significantly improve ChatGPT's performance and output quality. ## Implementing a knowledge base with Qdrant The official ChatGPT retrieval plugin uses a vector database to build your knowledge base. Your documents are chunked and vectorized with the OpenAI's text-embedding-ada-002 model to be stored in Qdrant. That enables semantic search capabilities. So, whenever ChatGPT thinks it might be relevant to check the knowledge base, it forms a query and sends it to the plugin to incorporate the results into its response. You can now modify the knowledge base, and ChatGPT will always know the most recent facts. No model fine-tuning is required. Let’s implement that for your documents. In our case, this will be Qdrant’s documentation, so you can ask even technical questions about Qdrant directly in ChatGPT. Everything starts with cloning the plugin's repository. 
```bash git clone [email protected]:openai/chatgpt-retrieval-plugin.git ``` Please use your favourite IDE to open the project once cloned. ### Prerequisites You’ll need to ensure three things before we start: 1. Create an OpenAI API key, so you can use their embeddings model programmatically. If you already have an account, you can generate one at https://platform.openai.com/account/api-keys. Otherwise, registering an account might be required. 2. Run a Qdrant instance. The instance has to be reachable from the outside, so you either need to launch it on-premise or use the [Qdrant Cloud](https://cloud.qdrant.io/) offering. A free 1GB cluster is available, which might be enough in many cases. We’ll use the cloud. 3. Since ChatGPT will interact with your service through the network, you must deploy it, making it possible to connect from the Internet. Unfortunately, localhost is not an option, but any provider, such as Heroku or fly.io, will work perfectly. We will use [fly.io](https://fly.io/), so please register an account. You may also need to install the flyctl tool for the deployment. The process is described on the homepage of fly.io. ### Configuration The retrieval plugin is a FastAPI-based application, and its default functionality might be enough in most cases. However, some configuration is required so ChatGPT knows how and when to use it. However, we can start setting up Fly.io, as we need to know the service's hostname to configure it fully. First, let’s login into the Fly CLI: ```bash flyctl auth login ``` That will open the browser, so you can simply provide the credentials, and all the further commands will be executed with your account. If you have never used fly.io, you may need to give the credit card details before running any instance, but there is a Hobby Plan you won’t be charged for. Let’s try to launch the instance already, but do not deploy it. We’ll get the hostname assigned and have all the details to fill in the configuration. The retrieval plugin uses TCP port 8080, so we need to configure fly.io, so it redirects all the traffic to it as well. ```bash flyctl launch --no-deploy --internal-port 8080 ``` We’ll be prompted about the application name and the region it should be deployed to. Please choose whatever works best for you. After that, we should see the hostname of the newly created application: ```text ... Hostname: your-application-name.fly.dev ... ``` Let’s note it down. We’ll need it for the configuration of the service. But we’re going to start with setting all the applications secrets: ```bash flyctl secrets set DATASTORE=qdrant \ OPENAI_API_KEY=<your-openai-api-key> \ QDRANT_URL=https://<your-qdrant-instance>.aws.cloud.qdrant.io \ QDRANT_API_KEY=<your-qdrant-api-key> \ BEARER_TOKEN=eyJhbGciOiJIUzI1NiJ9.e30.ZRrHA1JJJW8opsbCGfG_HACGpVUMN_a9IV7pAx_Zmeo ``` The secrets will be staged for the first deployment. There is an example of a minimal Bearer token generated by https://jwt.io/. **Please adjust the token and do not expose it publicly, but you can keep the same value for the demo.** Right now, let’s dive into the application config files. You can optionally provide your icon and keep it as `.well-known/logo.png` file, but there are two additional files we’re going to modify. The `.well-known/openapi.yaml` file describes the exposed API in the OpenAPI format. Lines 3 to 5 might be filled with the application title and description, but the essential part is setting the server URL the application will run. 
Eventually, the top part of the file should look like the following: ```yaml openapi: 3.0.0 info: title: Qdrant Plugin API version: 1.0.0 description: Plugin for searching through the Qdrant doc… servers: - url: https://your-application-name.fly.dev ... ``` There is another file in the same directory, and that’s the most crucial piece to configure. It contains the description of the plugin we’re implementing, and ChatGPT uses this description to determine if it should communicate with our knowledge base. The file is called `.well-known/ai-plugin.json`, and let’s edit it before we finally deploy the app. There are various properties we need to fill in: | **Property** | **Meaning** | **Example** | |-------------------------|----------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | `name_for_model` | Name of the plugin for the ChatGPT model | *qdrant* | | `name_for_human` | Human-friendly model name, to be displayed in ChatGPT UI | *Qdrant Documentation Plugin* | | `description_for_model` | Description of the purpose of the plugin, so ChatGPT knows in what cases it should be using it to answer a question. | *Plugin for searching through the Qdrant documentation to find answers to questions and retrieve relevant information. Use it whenever a user asks something that might be related to Qdrant vector database or semantic vector search* | | `description_for_human` | Short description of the plugin, also to be displayed in the ChatGPT UI. | *Search through Qdrant docs* | | `auth` | Authorization scheme used by the application. By default, the bearer token has to be configured. | ```{"type": "user_http", "authorization_type": "bearer"}``` | | `api.url` | Link to the OpenAPI schema definition. Please adjust based on your application URL. | *https://your-application-name.fly.dev/.well-known/openapi.yaml* | | `logo_url` | Link to the application logo. Please adjust based on your application URL. | *https://your-application-name.fly.dev/.well-known/logo.png* | A complete file may look as follows: ```json { "schema_version": "v1", "name_for_model": "qdrant", "name_for_human": "Qdrant Documentation Plugin", "description_for_model": "Plugin for searching through the Qdrant documentation to find answers to questions and retrieve relevant information. Use it whenever a user asks something that might be related to Qdrant vector database or semantic vector search", "description_for_human": "Search through Qdrant docs", "auth": { "type": "user_http", "authorization_type": "bearer" }, "api": { "type": "openapi", "url": "https://your-application-name.fly.dev/.well-known/openapi.yaml", "has_user_authentication": false }, "logo_url": "https://your-application-name.fly.dev/.well-known/logo.png", "contact_email": "[email protected]", "legal_info_url": "[email protected]" } ``` That was the last step before running the final command. The command that will deploy the application on the server: ```bash flyctl deploy ``` The command will build the image using the Dockerfile and deploy the service at a given URL. 
Once the command is finished, the service should be running on the hostname we got previously: ```text https://your-application-name.fly.dev ``` ## Integration with ChatGPT Once we have deployed the service, we can point ChatGPT to it, so the model knows how to connect. When you open the ChatGPT UI, you should see a dropdown with a Plugins tab included: ![](/articles_data/chatgpt-plugin/step-1.png) Once selected, you should be able to choose one of the available plugins or check the plugin store: ![](/articles_data/chatgpt-plugin/step-2.png) There are some premade plugins available, but you can also install your own plugin by clicking on the "*Develop your own plugin*" option in the bottom right corner: ![](/articles_data/chatgpt-plugin/step-3.png) We need to confirm our plugin is ready, but since we relied on the official retrieval plugin from OpenAI, this should be all fine: ![](/articles_data/chatgpt-plugin/step-4.png) After clicking on "*My manifest is ready*", we can already point ChatGPT to our newly created service: ![](/articles_data/chatgpt-plugin/step-5.png) A successful plugin installation should end up with the following information: ![](/articles_data/chatgpt-plugin/step-6.png) There is a name and a description of the plugin we provided. Let’s click on "*Done*" and return to the "*Plugin store*" window again. There is another option we need to choose in the bottom right corner: ![](/articles_data/chatgpt-plugin/step-7.png) Our plugin is not officially verified, but we can, of course, use it freely. The installation requires just the service URL: ![](/articles_data/chatgpt-plugin/step-8.png) OpenAI cannot guarantee the plugin provides factual information, so there is a warning we need to accept: ![](/articles_data/chatgpt-plugin/step-9.png) Finally, we need to provide the Bearer token again: ![](/articles_data/chatgpt-plugin/step-10.png) Our plugin is now ready to be tested. Since there is no data inside the knowledge base, extracting any facts is impossible, but we’re going to put some data using the Swagger UI exposed by our service at https://your-application-name.fly.dev/docs. We need to authorize first, and then call the upsert method with some docs. For demo purposes, we can just put a single document extracted from the Qdrant documentation to see whether the integration works properly: ![](/articles_data/chatgpt-plugin/step-11.png) We can come back to the ChatGPT UI and send a prompt, but we need to make sure the plugin is selected: ![](/articles_data/chatgpt-plugin/step-12.png) Now, if our prompt seems somehow related to the plugin description provided, the model will automatically form a query and send it to the HTTP API. The query will get vectorized by our app, and then used to find some relevant documents that will be used as context to generate the response. ![](/articles_data/chatgpt-plugin/step-13.png) We have a powerful language model that can interact with our knowledge base to return not only grammatically correct but also factual information. And this is how your interactions with the model may start to look: <iframe width="560" height="315" src="https://www.youtube.com/embed/fQUGuHEYeog" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> However, a single document is not enough to enable the full power of the plugin. 
If you want to add more documents that you have collected, there are already some scripts available in the `scripts/` directory that allow you to convert JSON, JSON Lines, or even zip archives.
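If you prefer to script the ingestion instead, the same `/upsert` endpoint used from the Swagger UI can be called directly. Below is a rough sketch with the `requests` library; the hostname and Bearer token are the ones configured earlier, and the exact request schema may differ between plugin versions:

```python
import requests

ENDPOINT = "https://your-application-name.fly.dev/upsert"
BEARER_TOKEN = "<your-bearer-token>"

documents = [
    {
        "id": "qdrant-quickstart",
        "text": "Qdrant is a vector similarity search engine and vector database...",
        "metadata": {"source": "file", "url": "https://qdrant.tech/documentation/"},
    },
    # ... more documents
]

response = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
    json={"documents": documents},
)
response.raise_for_status()
print(response.json())
```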
articles/chatgpt-plugin.md
--- title: Deliver Better Recommendations with Qdrant’s new API short_description: Qdrant 1.6 brings recommendations strategies and more flexibility to the Recommendation API. description: Qdrant 1.6 brings recommendations strategies and more flexibility to the Recommendation API. preview_dir: /articles_data/new-recommendation-api/preview social_preview_image: /articles_data/new-recommendation-api/preview/social_preview.png small_preview_image: /articles_data/new-recommendation-api/icon.svg weight: -80 author: Kacper Łukawski author_link: https://medium.com/@lukawskikacper date: 2023-10-25T09:46:00.000Z --- The most popular use case for vector search engines, such as Qdrant, is Semantic search with a single query vector. Given the query, we can vectorize (embed) it and find the closest points in the index. But [Vector Similarity beyond Search](/articles/vector-similarity-beyond-search/) does exist, and recommendation systems are a great example. Recommendations might be seen as a multi-aim search, where we want to find items close to positive and far from negative examples. This use of vector databases has many applications, including recommendation systems for e-commerce, content, or even dating apps. Qdrant has provided the [Recommendation API](/documentation/concepts/search/#recommendation-api) for a while, and with the latest release, [Qdrant 1.6](https://github.com/qdrant/qdrant/releases/tag/v1.6.0), we're glad to give you more flexibility and control over the Recommendation API. Here, we'll discuss some internals and show how they may be used in practice. ### Recap of the old recommendations API The previous [Recommendation API](/documentation/concepts/search/#recommendation-api) in Qdrant came with some limitations. First of all, it was required to pass vector IDs for both positive and negative example points. If you wanted to use vector embeddings directly, you had to either create a new point in a collection or mimic the behaviour of the Recommendation API by using the [Search API](/documentation/concepts/search/#search-api). Moreover, in the previous releases of Qdrant, you were always asked to provide at least one positive example. This requirement was based on the algorithm used to combine multiple samples into a single query vector. It was a simple, yet effective approach. However, if the only information you had was that your user dislikes some items, you couldn't use it directly. Qdrant 1.6 brings a more flexible API. You can now provide both IDs and vectors of positive and negative examples. You can even combine them within a single request. That makes the new implementation backward compatible, so you can easily upgrade an existing Qdrant instance without any changes in your code. And the default behaviour of the API is still the same as before. However, we extended the API, so **you can now choose the strategy of how to find the recommended points**. ```http POST /collections/{collection_name}/points/recommend { "positive": [100, 231], "negative": [718, [0.2, 0.3, 0.4, 0.5]], "filter": { "must": [ { "key": "city", "match": { "value": "London" } } ] }, "strategy": "average_vector", "limit": 3 } ``` There are two key changes in the request. First of all, we can adjust the strategy of search and set it to `average_vector` (the default) or `best_score`. Moreover, we can pass both IDs (`718`) and embeddings (`[0.2, 0.3, 0.4, 0.5]`) as both positive and negative examples. 
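For reference, the same request can be sent with the Python client (qdrant-client 1.6 or newer); the collection name and the example vector below are placeholders:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient("http://localhost:6333")

results = client.recommend(
    collection_name="my_collection",
    positive=[100, 231],                   # point IDs the user liked
    negative=[718, [0.2, 0.3, 0.4, 0.5]],  # a point ID and a raw embedding
    query_filter=models.Filter(
        must=[
            models.FieldCondition(
                key="city",
                match=models.MatchValue(value="London"),
            )
        ]
    ),
    strategy=models.RecommendStrategy.AVERAGE_VECTOR,  # or BEST_SCORE
    limit=3,
)
```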
## HNSW ANN example and strategy Let’s start with an example to help you understand the [HNSW graph](/articles/filtrable-hnsw/). Assume you want to travel to a small city on another continent: 1. You start from your hometown and take a bus to the local airport. 2. Then, take a flight to one of the closest hubs. 3. From there, you have to take another flight to a hub on your destination continent. 4. Hopefully, one last flight to your destination city. 5. You still have one more leg on local transport to get to your final address. This journey is similar to the HNSW graph’s use in Qdrant's approximate nearest neighbours search. ![Transport network](/articles_data/new-recommendation-api/example-transport-network.png) HNSW is a multilayer graph of vectors (embeddings), with connections based on vector proximity. The top layer has the least points, and the distances between those points are the biggest. The deeper we go, the more points we have, and the distances get closer. The graph is built in a way that the points are connected to their closest neighbours at every layer. All the points from a particular layer are also in the layer below, so switching the search layer while staying in the same location is possible. In the case of transport networks, the top layer would be the airline hubs, well-connected but with big distances between the airports. Local airports, along with railways and buses, with higher density and smaller distances, make up the middle layers. Lastly, our bottom layer consists of local means of transport, which is the densest and has the smallest distances between the points. You don’t have to check all the possible connections when you travel. You select an intercontinental flight, then a local one, and finally a bus or a taxi. All the decisions are made based on the distance between the points. The search process in HNSW is also based on similarly traversing the graph. Start from the entry point in the top layer, find its closest point and then use that point as the entry point into the next densest layer. This process repeats until we reach the bottom layer. Visited points and distances to the original query vector are kept in memory. If none of the neighbours of the current point is better than the best match, we can stop the traversal, as this is a local minimum. We start at the biggest scale, and then gradually zoom in. In this oversimplified example, we assumed that the distance between the points is the only factor that matters. In reality, we might want to consider other criteria, such as the ticket price, or avoid some specific locations due to certain restrictions. That means, there are various strategies for choosing the best match, which is also true in the case of vector recommendations. We can use different approaches to determine the path of traversing the HNSW graph by changing how we calculate the score of a candidate point during traversal. The default behaviour is based on pure distance, but Qdrant 1.6 exposes two strategies for the recommendation API. ### Average vector The default strategy, called `average_vector` is the previous one, based on the average of positive and negative examples. It simplifies the recommendations process and converts it into a single vector search. It supports both point IDs and vectors as parameters. For example, you can get recommendations based on past interactions with existing points combined with query vector embedding. 
Internally, that mechanism is based on the averages of the positive and negative examples and is calculated with the following formula: $$ \text{average vector} = \text{avg}(\text{positive vectors}) + \left( \text{avg}(\text{positive vectors}) - \text{avg}(\text{negative vectors}) \right) $$ The `average_vector` strategy converts the problem of recommendations into a single vector search. ### The new hotness - Best score The new strategy is called `best_score`. It does not rely on averages and is more flexible. It allows you to pass just negative samples and uses a slightly more sophisticated algorithm under the hood. The best score is chosen at every step of HNSW graph traversal. We separately calculate the distance between a traversed point and every positive and negative example. In the case of the best score strategy, **there is no single query vector anymore, but a bunch of positive and negative queries**. As a result, for each traversed point we get a set of distances, one for each positive and negative example. In the next step, we simply take the best scores for the positives and the negatives, creating two separate values. The best scores are just the closest distances of the traversed point to the positives and to the negatives. The idea is: **if a point is closer to any negative than to any positive example, we do not want it**. We penalize being close to the negatives, so instead of using the similarity value directly, we check if it’s closer to positives or negatives. The following formula is used to calculate the score of a traversed potential point: ```rust if best_positive_score > best_negative_score { score = best_positive_score } else { score = -(best_negative_score * best_negative_score) } ``` If the point is closer to the negatives, we penalize it by taking the negative squared value of the best negative score. For a closer negative, the score of the candidate point will always be lower or equal to zero, making the chances of choosing that point significantly lower. However, even among the points that are closer to the negatives, we still prefer those that are further away from them. That procedure effectively **pulls the traversal away from the negative examples**. If you want to know more about the internals of HNSW, you can check out the article about the [Filtrable HNSW](/articles/filtrable-hnsw/) that covers the topic thoroughly. ## Food Discovery demo Our [Food Discovery demo](/articles/food-discovery-demo/) is an application built on top of the new [Recommendation API](/documentation/concepts/search/#recommendation-api). It allows you to find a meal based on liked and disliked photos. There are some updates, enabled by the new Qdrant release: * **Ability to include multiple textual queries in the recommendation request.** Previously, we only allowed passing a single query to solve the cold start problem. Right now, you can pass multiple queries and mix them with the liked/disliked photos. This became possible because of the new flexibility in parameters. We can pass both point IDs and embedding vectors in the same request, and user queries are obviously not a part of the collection. * **Switch between the recommendation strategies.** You can now choose between the `average_vector` and the `best_score` scoring algorithm. ### Differences between the strategies The UI of the Food Discovery demo allows you to switch between the strategies. The `best_score` strategy is the default one, but with just a single switch, you can see how the results differ when using the previous `average_vector` strategy. 
If you select just a single positive example, both algorithms work identically. ##### One positive example <video autoplay="true" loop="true" width="100%" controls><source src="/articles_data/new-recommendation-api/one-positive.mp4" type="video/mp4"></video> The difference only becomes apparent when you start adding more examples, especially if you choose some negatives. ##### One positive and one negative example <video autoplay="true" loop="true" width="100%" controls><source src="/articles_data/new-recommendation-api/one-positive-one-negative.mp4" type="video/mp4"></video> The more likes and dislikes we add, the more diverse the results of the `best_score` strategy will be. In the old strategy, there is just a single vector, so all the examples are similar to it. The new one takes into account all the examples separately, making the variety richer. ##### Multiple positive and negative examples <video autoplay="true" loop="true" width="100%" controls><source src="/articles_data/new-recommendation-api/multiple.mp4" type="video/mp4"></video> Choosing the right strategy is dataset-dependent, and the embeddings play a significant role here. Thus, it’s always worth trying both of them and comparing the results in a particular case. #### Handling the negatives only In the case of our Food Discovery demo, passing just the negative images can work as an outlier detection mechanism. While the dataset was supposed to contain only food photos, this is not actually true. A simple way to find these outliers is to pass in food item photos as negatives, leading to the results being the most "unlike" food images. In our case you will see pill bottles and books. **The `average_vector` strategy still requires providing at least one positive example!** However, since cosine distance is set up for the collection used in the demo, we faked it using [a trick described in the previous article](/articles/food-discovery-demo/#negative-feedback-only). In a nutshell, if you only pass negative examples, their vectors will be averaged, and the negated resulting vector will be used as a query to the search endpoint. ##### Negatives only <video autoplay="true" loop="true" width="100%" controls><source src="/articles_data/new-recommendation-api/negatives-only.mp4" type="video/mp4"></video> Still, both methods return different results, so they each have their place depending on the questions being asked and the datasets being used. #### Challenges with multimodality Food Discovery uses the [CLIP embeddings model](https://huggingface.co/sentence-transformers/clip-ViT-B-32), which is multimodal, allowing both images and texts encoded into the same vector space. Using this model allows for image queries, text queries, or both of them combined. We utilized that mechanism in the updated demo, allowing you to pass the textual queries to filter the results further. ##### A single text query <video autoplay="true" loop="true" width="100%" controls><source src="/articles_data/new-recommendation-api/text-query.mp4" type="video/mp4"></video> Text queries might be mixed with the liked and disliked photos, so you can combine them in a single request. However, you might be surprised by the results achieved with the new strategy, if you start adding the negative examples. ##### A single text query with negative example <video autoplay="true" loop="true" width="100%" controls><source src="/articles_data/new-recommendation-api/text-query-with-negative.mp4" type="video/mp4"></video> This is an issue related to the embeddings themselves. 
Our dataset contains a bunch of image embeddings that are pretty close to each other. On the other hand, our text queries are quite far from most of the image embeddings, but relatively close to some of them, so the text-to-image search seems to work well. When all query items come from the same domain, such as only text, everything works fine. However, if we mix positive text and negative image embeddings, the results of the `best_score` are overwhelmed by the negative samples, which are simply closer to the dataset embeddings. If you experience such a problem, the `average_vector` strategy might be a better choice. ### Check out the demo The [Food Discovery Demo](https://food-discovery.qdrant.tech/) is available online, so you can test and see the difference. This is an open source project, so you can easily deploy it on your own. The source code is available in the [GitHub repository ](https://github.com/qdrant/demo-food-discovery/) and the [README](https://github.com/qdrant/demo-food-discovery/blob/main/README.md) describes the process of setting it up. Since calculating the embeddings takes a while, we precomputed them and exported them as a [snapshot](https://storage.googleapis.com/common-datasets-snapshots/wolt-clip-ViT-B-32.snapshot), which might be easily imported into any Qdrant instance. [Qdrant Cloud is the easiest way to start](https://cloud.qdrant.io/), though!
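If you would rather point the demo at your own Qdrant instance, the precomputed snapshot can be restored with the Python client. A minimal sketch; the collection name is an assumption, so check the demo's README for the name it expects:

```python
from qdrant_client import QdrantClient

client = QdrantClient("http://localhost:6333")

# Restore the precomputed CLIP embeddings from the published snapshot.
client.recover_snapshot(
    collection_name="food",  # assumed name, adjust to what the demo expects
    location="https://storage.googleapis.com/common-datasets-snapshots/wolt-clip-ViT-B-32.snapshot",
)
```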
articles/new-recommendation-api.md
--- title: " Data Privacy with Qdrant: Implementing Role-Based Access Control (RBAC)" #required short_description: "Secure Your Data with Qdrant: Implementing RBAC" description: Discover how Qdrant's Role-Based Access Control (RBAC) ensures data privacy and compliance for your AI applications. Build secure and scalable systems with ease. Read more now! social_preview_image: /articles_data/data-privacy/preview/social_preview.jpg # This image will be used in social media previews, should be 1200x630px. Required. preview_dir: /articles_data/data-privacy/preview # This directory contains images that will be used in the article preview. They can be generated from one image. Read more below. Required. weight: -110 # This is the order of the article in the list of articles at the footer. The lower the number, the higher the article will be in the list. author: Qdrant Team # Author of the article. Required. author_link: https://qdrant.tech/ # Link to the author's page. Required. date: 2024-06-18T08:00:00-03:00 # Date of the article. Required. draft: false # If true, the article will not be published keywords: # Keywords for SEO - Role-Based Access Control (RBAC) - Data Privacy in Vector Databases - Secure AI Data Management - Qdrant Data Security - Enterprise Data Compliance --- Data stored in vector databases is often proprietary to the enterprise and may include sensitive information like customer records, legal contracts, electronic health records (EHR), financial data, and intellectual property. Moreover, strong security measures become critical to safeguarding this data. If the data stored in a vector database is not secured, it may open a vulnerability known as "[embedding inversion attack](https://arxiv.org/abs/2004.00053)," where malicious actors could potentially [reconstruct the original data from the embeddings](https://arxiv.org/pdf/2305.03010) themselves. Strict compliance regulations govern data stored in vector databases across various industries. For instance, healthcare must comply with HIPAA, which dictates how protected health information (PHI) is stored, transmitted, and secured. Similarly, the financial services industry follows PCI DSS to safeguard sensitive financial data. These regulations require developers to ensure data storage and transmission comply with industry-specific legal frameworks across different regions. **As a result, features that enable data privacy, security and sovereignty are deciding factors when choosing the right vector database.** This article explores various strategies to ensure the security of your critical data while leveraging the benefits of vector search. Implementing some of these security approaches can help you build privacy-enhanced similarity search algorithms and integrate them into your AI applications. Additionally, you will learn how to build a fully data-sovereign architecture, allowing you to retain control over your data and comply with relevant data laws and regulations. > To skip right to the code implementation, [click here](/articles/data-privacy/#jwt-on-qdrant). ## Vector Database Security: An Overview Vector databases are often unsecured by default to facilitate rapid prototyping and experimentation. This approach allows developers to quickly ingest data, build vector representations, and test similarity search algorithms without initial security concerns. However, in production environments, unsecured databases pose significant data breach risks. For production use, robust security systems are essential. 
Authentication, particularly using static API keys, is a common approach to control access and prevent unauthorized modifications. Yet, simple API authentication is insufficient for enterprise data, which requires granular control. The primary challenge with static API keys is their all-or-nothing access, inadequate for role-based data segregation in enterprise applications. Additionally, a compromised key could grant attackers full access to manipulate or steal data. To strengthen the security of the vector database, developers typically need the following: 1. **Encryption**: This ensures that sensitive data is scrambled as it travels between the application and the vector database. This safeguards against Man-in-the-Middle ([MitM](https://en.wikipedia.org/wiki/Man-in-the-middle_attack)) attacks, where malicious actors can attempt to intercept and steal data during transmission. 2. **Role-Based Access Control**: As mentioned before, traditional static API keys grant all-or-nothing access, which is a significant security risk in enterprise environments. RBAC offers a more granular approach by defining user roles and assigning specific data access permissions based on those roles. For example, an analyst might have read-only access to specific datasets, while an administrator might have full CRUD (Create, Read, Update, Delete) permissions across the database. 3. **Deployment Flexibility**: Data residency regulations like GDPR (General Data Protection Regulation) and industry-specific compliance requirements dictate where data can be stored, processed, and accessed. Developers would need to choose a database solution which offers deployment options that comply with these regulations. This might include on-premise deployments within a company's private cloud or geographically distributed cloud deployments that adhere to data residency laws. ## How Qdrant Handles Data Privacy and Security One of the cornerstones of our design choices at Qdrant has been the focus on security features. We have built in a range of features keeping the enterprise user in mind, which allow building of granular access control on a fully data sovereign architecture. A Qdrant instance is unsecured by default. However, when you are ready to deploy in production, Qdrant offers a range of security features that allow you to control access to your data, protect it from breaches, and adhere to regulatory requirements. Using Qdrant, you can build granular access control, segregate roles and privileges, and create a fully data sovereign architecture. ### API Keys and TLS Encryption For simpler use cases, Qdrant offers API key-based authentication. This includes both regular API keys and read-only API keys. Regular API keys grant full access to read, write, and delete operations, while read-only keys restrict access to data retrieval operations only, preventing write actions. On Qdrant Cloud, you can create API keys using the [Cloud Dashboard](https://qdrant.to/cloud). This allows you to generate API keys that give you access to a single node or cluster, or multiple clusters. You can read the steps to do so [here](/documentation/cloud/authentication/). ![web-ui](/articles_data/data-privacy/web-ui.png) For on-premise or local deployments, you'll need to configure API key authentication. This involves specifying a key in either the Qdrant configuration file or as an environment variable. This ensures that all requests to the server must include a valid API key sent in the header. 
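As a quick sketch, a locally running instance can be started with an API key, and the client then has to pass the same key with every request. The key below is obviously a placeholder:

```python
from qdrant_client import QdrantClient

# Assuming the server was started with an API key, e.g.:
#   docker run -p 6333:6333 -e QDRANT__SERVICE__API_KEY=my-secret-api-key qdrant/qdrant
client = QdrantClient(
    url="http://localhost:6333",
    api_key="my-secret-api-key",  # sent as the `api-key` header with every request
)

print(client.get_collections())
```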
When using the simple API key-based authentication, you should also turn on TLS encryption. Otherwise, you are exposing the connection to sniffing and MitM attacks. To secure your connection using TLS, you would need to create a certificate and private key, and then [enable TLS](/documentation/guides/security/#tls) in the configuration. API authentication, coupled with TLS encryption, offers a first layer of security for your Qdrant instance. However, to enable more granular access control, the recommended approach is to leverage JSON Web Tokens (JWTs). ### JWT on Qdrant JSON Web Tokens (JWTs) are a compact, URL-safe, and stateless means of representing _claims_ to be transferred between two parties. These claims are encoded as a JSON object and are cryptographically signed. JWT is composed of three parts: a header, a payload, and a signature, which are concatenated with dots (.) to form a single string. The header contains the type of token and algorithm being used. The payload contains the claims (explained in detail later). The signature is a cryptographic hash and ensures the token’s integrity. In Qdrant, JWT forms the foundation through which powerful access controls can be built. Let’s understand how. JWT is enabled on the Qdrant instance by specifying the API key and turning on the **jwt_rbac** feature in the configuration (alternatively, they can be set as environment variables). For any subsequent request, the API key is used to encode or decode the token. The way JWT works is that just the API key is enough to generate the token, and doesn’t require any communication with the Qdrant instance or server. There are several libraries that help generate tokens by encoding a payload, such as [PyJWT](https://pyjwt.readthedocs.io/en/stable/) (for Python), [jsonwebtoken](https://www.npmjs.com/package/jsonwebtoken) (for JavaScript), and [jsonwebtoken](https://crates.io/crates/jsonwebtoken) (for Rust). Qdrant uses the HS256 algorithm to encode or decode the tokens. We will look at the payload structure shortly, but here’s how you can generate a token using PyJWT. ```python import jwt import datetime # Define your API key and other payload data api_key = "your_api_key" payload = { ... } token = jwt.encode(payload, api_key, algorithm="HS256") print(token) ``` Once you have generated the token, you should include it in the subsequent requests. You can do so by providing it as a bearer token in the Authorization header, or in the API Key header of your requests. Below is an example of how to do so using QdrantClient in Python: ```python from qdrant_client import QdrantClient qdrant_client = QdrantClient( "http://localhost:6333", api_key="<JWT>", # the token goes here ) # Example search vector search_vector = [0.1, 0.2, 0.3, 0.4] # Example similarity search request response = qdrant_client.search( collection_name="demo_collection", query_vector=search_vector, limit=5 # Number of results to retrieve ) ``` For convenience, we have added a JWT generation tool in the Qdrant Web UI, which is present under the 🔑 tab. For your local deployments, you will find it at [http://localhost:6333/dashboard#/jwt](http://localhost:6333/dashboard#/jwt). ### Payload Configuration There are several different options (claims) you can use in the JWT payload that help control access and functionality. Let’s look at them one by one. **exp**: This claim is the expiration time of the token, and is a unix timestamp in seconds. After the expiration time, the token will be invalid. 
**value_exists**: This claim validates the token against a specific key-value stored in a collection. By using this claim, you can revoke access by simply changing a value without having to invalidate the API key. **access**: This claim defines the access level of the token. The access level can be global read (r) or manage (m). It can also be specific to a collection, or even a subset of a collection, using read (r) and read-write (rw). Let’s look at a few example JWT payload configurations. **Scenario 1: 1-hour expiry time, and read-only access to a collection** ```json { "exp": 1690995200, // Set to 1 hour from the current time (Unix timestamp) "access": [ { "collection": "demo_collection", "access": "r" // Read-only access } ] } ``` **Scenario 2: 1-hour expiry time, and access to user with a specific role** Suppose you have a ‘users’ collection and have defined specific roles for each user, such as ‘developer’, ‘manager’, ‘admin’, ‘analyst’, and ‘revoked’. In such a scenario, you can use a combination of **exp** and **value_exists**. ```json { "exp": 1690995200, "value_exists": { "collection": "users", "matches": [ { "key": "username", "value": "john" }, { "key": "role", "value": "developer" } ], }, } ``` Now, if you ever want to revoke access for a user, simply change the value of their role. All future requests will be invalid using a token payload of the above type. **Scenario 3: 1-hour expiry time, and read-write access to a subset of a collection** You can even specify access levels specific to subsets of a collection. This can be especially useful when you are leveraging [multitenancy](/documentation/guides/multiple-partitions/), and want to segregate access. ```json { "exp": 1690995200, "access": [ { "collection": "demo_collection", "access": "r", "payload": { "user_id": "user_123456" } } ] } ``` By combining the claims, you can fully customize the access level that a user or a role has within the vector store. ### Creating Role-Based Access Control (RBAC) Using JWT As we saw above, JWT claims create powerful levers through which you can create granular access control on Qdrant. Let’s bring it all together and understand how it helps you create Role-Based Access Control (RBAC). In a typical enterprise application, you will have a segregation of users based on their roles and permissions. These could be: 1. **Admin or Owner:** with full access, and can generate API keys. 2. **Editor:** with read-write access levels to specific collections. 3. **Viewer:** with read-only access to specific collections. 4. **Data Scientist or Analyst:** with read-only access to specific collections. 5. **Developer:** with read-write access to development- or testing-specific collections, but limited access to production data. 6. **Guest:** with limited read-only access to publicly available collections. In addition, you can create access levels within sections of a collection. In a multi-tenant application, where you have used payload-based partitioning, you can create read-only access for specific user roles for a subset of the collection that belongs to that user. Your application requirements will eventually help you decide the roles and access levels you should create. For example, in an application managing customer data, you could create additional roles such as: **Customer Support Representative**: read-write access to customer service-related data but no access to billing information. **Billing Department**: read-only access to billing data and read-write access to payment records. 
**Marketing Analyst**: read-only access to anonymized customer data for analytics. Each role can be assigned a JWT with claims that specify expiration times, read/write permissions for collections, and validating conditions. In such an application, an example JWT payload for a customer support representative role could be: ```json { "exp": 1690995200, "access": [ { "collection": "customer_data", "access": "rw", "payload": { "department": "support" } } ], "value_exists": { "collection": "departments", "matches": [ { "key": "department", "value": "support" } ] } } ``` As you can see, by implementing RBAC, you can ensure proper segregation of roles and their privileges, and avoid privacy loopholes in your application. ## Qdrant Hybrid Cloud and Data Sovereignty Data governance varies by country, especially for global organizations dealing with different regulations on data privacy, security, and access. This often necessitates deploying infrastructure within specific geographical boundaries. To address these needs, the vector database you choose should support deployment and scaling within your controlled infrastructure. [Qdrant Hybrid Cloud](/documentation/hybrid-cloud/) offers this flexibility, along with features like sharding, replicas, JWT authentication, and monitoring. Qdrant Hybrid Cloud integrates Kubernetes clusters from various environments—cloud, on-premises, or edge—into a unified managed service. This allows organizations to manage Qdrant databases through the Qdrant Cloud UI while keeping the databases within their infrastructure. With JWT and RBAC, Qdrant Hybrid Cloud provides a secure, private, and sovereign vector store. Enterprises can scale their AI applications geographically, comply with local laws, and maintain strict data control. ## Conclusion Vector similarity is increasingly becoming the backbone of AI applications that leverage unstructured data. By transforming data into vectors – their numerical representations – organizations can build powerful applications that harness semantic search, ranging from better recommendation systems to algorithms that help with personalization, or powerful customer support chatbots. However, to fully leverage the power of AI in production, organizations need to choose a vector database that offers strong privacy and security features, while also helping them adhere to local laws and regulations. Qdrant provides exceptional efficiency and performance, along with the capability to implement granular access control to data, Role-Based Access Control (RBAC), and the ability to build a fully data-sovereign architecture. Interested in mastering vector search security and deployment strategies? [Join our Discord community](https://discord.gg/qdrant) to explore more advanced search strategies, connect with other developers and researchers in the industry, and stay updated on the latest innovations!
articles/data-privacy.md
--- title: Question Answering as a Service with Cohere and Qdrant short_description: "End-to-end Question Answering system for the biomedical data with SaaS tools: Cohere co.embed API and Qdrant" description: "End-to-end Question Answering system for the biomedical data with SaaS tools: Cohere co.embed API and Qdrant" social_preview_image: /articles_data/qa-with-cohere-and-qdrant/social_preview.png small_preview_image: /articles_data/qa-with-cohere-and-qdrant/q-and-a-article-icon.svg preview_dir: /articles_data/qa-with-cohere-and-qdrant/preview weight: 7 author: Kacper Łukawski author_link: https://medium.com/@lukawskikacper date: 2022-11-29T15:45:00+01:00 draft: false keywords: - vector search - question answering - cohere - co.embed - embeddings --- Bi-encoders are probably the most efficient way of setting up a semantic Question Answering system. This architecture relies on the same neural model that creates vector embeddings for both questions and answers. The assumption is, both question and answer should have representations close to each other in the latent space. It should be like that because they should both describe the same semantic concept. That doesn't apply to answers like "Yes" or "No" though, but standard FAQ-like problems are a bit easier as there is typically an overlap between both texts. Not necessarily in terms of wording, but in their semantics. ![Bi-encoder structure. Both queries (questions) and documents (answers) are vectorized by the same neural encoder. Output embeddings are then compared by a chosen distance function, typically cosine similarity.](/articles_data/qa-with-cohere-and-qdrant/biencoder-diagram.png) And yeah, you need to **bring your own embeddings**, in order to even start. There are various ways how to obtain them, but using Cohere [co.embed API](https://docs.cohere.ai/reference/embed) is probably the easiest and most convenient method. ## Why co.embed API and Qdrant go well together? Maintaining a **Large Language Model** might be hard and expensive. Scaling it up and down, when the traffic changes, require even more effort and becomes unpredictable. That might be definitely a blocker for any semantic search system. But if you want to start right away, you may consider using a SaaS model, Cohere’s [co.embed API](https://docs.cohere.ai/reference/embed) in particular. It gives you state-of-the-art language models available as a Highly Available HTTP service with no need to train or maintain your own service. As all the communication is done with JSONs, you can simply provide the co.embed output as Qdrant input. ```python # Putting the co.embed API response directly as Qdrant method input qdrant_client.upsert( collection_name="collection", points=rest.Batch( ids=[...], vectors=cohere_client.embed(...).embeddings, payloads=[...], ), ) ``` Both tools are easy to combine, so you can start working with semantic search in a few minutes, not days. And what if your needs are so specific that you need to fine-tune a general usage model? Co.embed API goes beyond pre-trained encoders and allows providing some custom datasets to [customize the embedding model with your own data](https://docs.cohere.com/docs/finetuning). As a result, you get the quality of domain-specific models, but without worrying about infrastructure. ## System architecture overview In real systems, answers get vectorized and stored in an efficient vector search database. 
We typically don’t even need to provide specific answers, but just use sentences or paragraphs of text and vectorize them instead. Still, if a bit longer piece of text contains the answer to a particular question, its distance to the question embedding should not be that far away. And for sure closer than all the other, non-matching answers. Storing the answer embeddings in a vector database makes the search process way easier. ![Building the database of possible answers. All the texts are converted into their vector embeddings and those embeddings are stored in a vector database, i.e. Qdrant.](/articles_data/qa-with-cohere-and-qdrant/vector-database.png) ## Looking for the correct answer Once our database is working and all the answer embeddings are already in place, we can start querying it. We basically perform the same vectorization on a given question and ask the database to provide some near neighbours. We rely on the embeddings to be close to each other, so we expect the points with the smallest distance in the latent space to contain the proper answer. ![While searching, a question gets vectorized by the same neural encoder. Vector database is a component that looks for the closest answer vectors using i.e. cosine similarity. A proper system, like Qdrant, will make the lookup process more efficient, as it won’t calculate the distance to all the answer embeddings. Thanks to HNSW, it will be able to find the nearest neighbours with sublinear complexity.](/articles_data/qa-with-cohere-and-qdrant/search-with-vector-database.png) ## Implementing the QA search system with SaaS tools We don’t want to maintain our own service for the neural encoder, nor even set up a Qdrant instance. There are SaaS solutions for both — Cohere’s [co.embed API](https://docs.cohere.ai/reference/embed) and [Qdrant Cloud](https://qdrant.to/cloud), so we’ll use them instead of on-premise tools. ### Question Answering on biomedical data We’re going to implement the Question Answering system for the biomedical data. There is a *[pubmed_qa](https://huggingface.co/datasets/pubmed_qa)* dataset, with it *pqa_labeled* subset containing 1,000 examples of questions and answers labelled by domain experts. Our system is going to be fed with the embeddings generated by co.embed API and we’ll load them to Qdrant. Using Qdrant Cloud vs your own instance does not matter much here. There is a subtle difference in how to connect to the cloud instance, but all the other operations are executed in the same way. ```python from datasets import load_dataset # Loading the dataset from HuggingFace hub. It consists of several columns: pubid, # question, context, long_answer and final_decision. For the purposes of our system, # we’ll use question and long_answer. dataset = load_dataset("pubmed_qa", "pqa_labeled") ``` | **pubid** | **question** | **context** | **long_answer** | **final_decision** | |-----------|---------------------------------------------------|-------------|---------------------------------------------------|--------------------| | 18802997 | Can calprotectin predict relapse risk in infla... | ... | Measuring calprotectin may help to identify UC... | maybe | | 20538207 | Should temperature be monitorized during kidne... | ... | The new storage can affords more stable temper... | no | | 25521278 | Is plate clearing a risk factor for obesity? | ... | The tendency to clear one's plate when eating ... | yes | | 17595200 | Is there an intrauterine influence on obesity? | ... | Comparison of mother-offspring and father-offs.. 
| 15280782 | Is unsafe sexual behaviour increasing among HI... | ... | There was no evidence of a trend in unsafe sex... | no |

### Using Cohere and Qdrant to build the answers database

In order to start generating the embeddings, you need to [create a Cohere account](https://dashboard.cohere.ai/welcome/register). That will start your trial period, so you’ll be able to vectorize the texts for free. Once logged in, your default API key will be available in [Settings](https://dashboard.cohere.ai/api-keys). We’ll need it to call the co.embed API with the official Python package.

```python
import cohere

cohere_client = cohere.Client(COHERE_API_KEY)

# Generating the embeddings with Cohere client library
embeddings = cohere_client.embed(
    texts=["A test sentence"],
    model="large",
)
vector_size = len(embeddings.embeddings[0])
print(vector_size)  # output: 4096
```

Let’s connect to the Qdrant instance first, so we can create a collection with the proper configuration and put some embeddings into it later on.

```python
from qdrant_client import QdrantClient

# Connecting to Qdrant Cloud with qdrant-client requires providing the api_key.
# If you use an on-premise instance, it has to be skipped.
qdrant_client = QdrantClient(
    host="xyz-example.eu-central.aws.cloud.qdrant.io",
    prefer_grpc=True,
    api_key=QDRANT_API_KEY,
)
```

Now we’re able to vectorize all the answers. They are going to form our collection, so we can also put them already into Qdrant, along with the payloads and identifiers. That will make our dataset easily searchable.

```python
from qdrant_client.http import models as rest

answer_response = cohere_client.embed(
    texts=dataset["train"]["long_answer"],
    model="large",
)
vectors = [
    # Conversion to float is required for Qdrant
    list(map(float, vector))
    for vector in answer_response.embeddings
]
ids = [entry["pubid"] for entry in dataset["train"]]

# Filling up Qdrant collection with the embeddings generated by Cohere co.embed API
qdrant_client.upsert(
    collection_name="pubmed_qa",
    points=rest.Batch(
        ids=ids,
        vectors=vectors,
        payloads=list(dataset["train"]),
    )
)
```

And that’s it. Without even setting up a single server on our own, we created a system that can easily be asked a question. I don’t want to call it serverless, as this term is already taken, but the co.embed API with Qdrant Cloud makes everything way easier to maintain.

### Answering the questions with semantic search — the quality

It’s high time to query our database with some questions. It might be interesting to measure the quality of the system in general. For this kind of problem, we typically use *top-k accuracy*. We assume the prediction of the system was correct if the correct answer was present in the first *k* results.

```python
from tqdm import tqdm

# Vectorizing all the questions with the same Cohere model, so that questions
# and answers are compared in the same embedding space
question_response = cohere_client.embed(
    texts=dataset["train"]["question"],
    model="large",
)

# Finding the position at which Qdrant provided the expected answer for each question.
# That allows to calculate accuracy@k for different values of k.
k_max = 10
answer_positions = []
for embedding, pubid in tqdm(zip(question_response.embeddings, ids)):
    response = qdrant_client.search(
        collection_name="pubmed_qa",
        query_vector=embedding,
        limit=k_max,
    )

    answer_ids = [record.id for record in response]
    if pubid in answer_ids:
        answer_positions.append(answer_ids.index(pubid))
    else:
        answer_positions.append(-1)
```

Saved answer positions allow us to calculate the metric for different *k* values.
```python # Prepared answer positions are being used to calculate different values of accuracy@k for k in range(1, k_max + 1): correct_answers = len( list( filter(lambda x: 0 <= x < k, answer_positions) ) ) print(f"accuracy@{k} =", correct_answers / len(dataset["train"])) ``` Here are the values of the top-k accuracy for different values of k: | **metric** | **value** | |-------------|-----------| | accuracy@1 | 0.877 | | accuracy@2 | 0.921 | | accuracy@3 | 0.942 | | accuracy@4 | 0.950 | | accuracy@5 | 0.956 | | accuracy@6 | 0.960 | | accuracy@7 | 0.964 | | accuracy@8 | 0.971 | | accuracy@9 | 0.976 | | accuracy@10 | 0.977 | It seems like our system worked pretty well even if we consider just the first result, with the lowest distance. We failed with around 12% of questions. But numbers become better with the higher values of k. It might be also valuable to check out what questions our system failed to answer, their perfect match and our guesses. We managed to implement a working Question Answering system within just a few lines of code. If you are fine with the results achieved, then you can start using it right away. Still, if you feel you need a slight improvement, then fine-tuning the model is a way to go. If you want to check out the full source code, it is available on [Google Colab](https://colab.research.google.com/drive/1YOYq5PbRhQ_cjhi6k4t1FnWgQm8jZ6hm?usp=sharing).
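For reference, here is a minimal sketch of how a single question could be asked against the collection built above. It reuses the `cohere_client` and `qdrant_client` objects from the previous snippets and takes one of the questions from the dataset sample as an example; this is an illustration, not part of the original walkthrough.

```python
question = "Is plate clearing a risk factor for obesity?"

# Vectorize the question with the same Cohere model used for the answers
question_embedding = cohere_client.embed(
    texts=[question],
    model="large",
).embeddings[0]

# Ask Qdrant for the closest answer embeddings
results = qdrant_client.search(
    collection_name="pubmed_qa",
    query_vector=list(map(float, question_embedding)),
    limit=3,
)

for point in results:
    print(point.score, point.payload["long_answer"])
```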
articles/qa-with-cohere-and-qdrant.md
--- title: "Is RAG Dead? The Role of Vector Databases in Vector Search | Qdrant" short_description: Learn how Qdrant’s vector database enhances enterprise AI with superior accuracy and cost-effectiveness. description: Uncover the necessity of vector databases for RAG and learn how Qdrant's vector database empowers enterprise AI with unmatched accuracy and cost-effectiveness. social_preview_image: /articles_data/rag-is-dead/preview/social_preview.jpg small_preview_image: /articles_data/rag-is-dead/icon.svg preview_dir: /articles_data/rag-is-dead/preview weight: -131 author: David Myriel author_link: https://github.com/davidmyriel date: 2024-02-27T00:00:00.000Z draft: false keywords: - vector database - vector search - retrieval augmented generation - gemini 1.5 --- # Is RAG Dead? The Role of Vector Databases in AI Efficiency and Vector Search When Anthropic came out with a context window of 100K tokens, they said: “*[Vector search](https://qdrant.tech/solutions/) is dead. LLMs are getting more accurate and won’t need RAG anymore.*” Google’s Gemini 1.5 now offers a context window of 10 million tokens. [Their supporting paper](https://storage.googleapis.com/deepmind-media/gemini/gemini_v1_5_report.pdf) claims victory over accuracy issues, even when applying Greg Kamradt’s [NIAH methodology](https://twitter.com/GregKamradt/status/1722386725635580292). *It’s over. [RAG](https://qdrant.tech/articles/what-is-rag-in-ai/) (Retrieval Augmented Generation) must be completely obsolete now. Right?* No. Larger context windows are never the solution. Let me repeat. Never. They require more computational resources and lead to slower processing times. The community is already stress testing Gemini 1.5: ![RAG and Gemini 1.5](/articles_data/rag-is-dead/rag-is-dead-1.png) This is not surprising. LLMs require massive amounts of compute and memory to run. To cite Grant, running such a model by itself “would deplete a small coal mine to generate each completion”. Also, who is waiting 30 seconds for a response? ## Context stuffing is not the solution > Relying on context is expensive, and it doesn’t improve response quality in real-world applications. Retrieval based on [vector search](https://qdrant.tech/solutions/) offers much higher precision. If you solely rely on an [LLM](https://qdrant.tech/articles/what-is-rag-in-ai/) to perfect retrieval and precision, you are doing it wrong. A large context window makes it harder to focus on relevant information. This increases the risk of errors or hallucinations in its responses. Google found Gemini 1.5 significantly more accurate than GPT-4 at shorter context lengths and “a very small decrease in recall towards 1M tokens”. The recall is still below 0.8. ![Gemini 1.5 Data](/articles_data/rag-is-dead/rag-is-dead-2.png) We don’t think 60-80% is good enough. The LLM might retrieve enough relevant facts in its context window, but it still loses up to 40% of the available information. > The whole point of vector search is to circumvent this process by efficiently picking the information your app needs to generate the best response. A [vector database](https://qdrant.tech/) keeps the compute load low and the query response fast. You don’t need to wait for the LLM at all. Qdrant’s benchmark results are strongly in favor of accuracy and efficiency. We recommend that you consider them before deciding that an LLM is enough. Take a look at our [open-source benchmark reports](/benchmarks/) and [try out the tests](https://github.com/qdrant/vector-db-benchmark) yourself. 
## Vector search in compound systems The future of AI lies in careful system engineering. As per [Zaharia et al.](https://bair.berkeley.edu/blog/2024/02/18/compound-ai-systems/), results from Databricks find that “60% of LLM applications use some form of RAG, while 30% use multi-step chains.” Even Gemini 1.5 demonstrates the need for a complex strategy. When looking at [Google’s MMLU Benchmark](https://storage.googleapis.com/deepmind-media/gemini/gemini_v1_5_report.pdf), the model was called 32 times to reach a score of 90.0% accuracy. This shows us that even a basic compound arrangement is superior to monolithic models. As a retrieval system, a [vector database](https://qdrant.tech/) perfectly fits the need for compound systems. Introducing them into your design opens the possibilities for superior applications of LLMs. It is superior because it’s faster, more accurate, and much cheaper to run. > The key advantage of RAG is that it allows an LLM to pull in real-time information from up-to-date internal and external knowledge sources, making it more dynamic and adaptable to new information. - Oliver Molander, CEO of IMAGINAI > ## Qdrant scales to enterprise RAG scenarios People still don’t understand the economic benefit of vector databases. Why would a large corporate AI system need a standalone vector database like [Qdrant](https://qdrant.tech/)? In our minds, this is the most important question. Let’s pretend that LLMs cease struggling with context thresholds altogether. **How much would all of this cost?** If you are running a RAG solution in an enterprise environment with petabytes of private data, your compute bill will be unimaginable. Let's assume 1 cent per 1K input tokens (which is the current GPT-4 Turbo pricing). Whatever you are doing, every time you go 100 thousand tokens deep, it will cost you $1. That’s a buck a question. > According to our estimations, vector search queries are **at least** 100 million times cheaper than queries made by LLMs. Conversely, the only up-front investment with vector databases is the indexing (which requires more compute). After this step, everything else is a breeze. Once setup, Qdrant easily scales via [features like Multitenancy and Sharding](/articles/multitenancy/). This lets you scale up your reliance on the vector retrieval process and minimize your use of the compute-heavy LLMs. As an optimization measure, Qdrant is irreplaceable. Julien Simon from HuggingFace says it best: > RAG is not a workaround for limited context size. For mission-critical enterprise use cases, RAG is a way to leverage high-value, proprietary company knowledge that will never be found in public datasets used for LLM training. At the moment, the best place to index and query this knowledge is some sort of vector index. In addition, RAG downgrades the LLM to a writing assistant. Since built-in knowledge becomes much less important, a nice small 7B open-source model usually does the trick at a fraction of the cost of a huge generic model. ## Get superior accuracy with Qdrant's vector database As LLMs continue to require enormous computing power, users will need to leverage vector search and [RAG](https://qdrant.tech/). Our customers remind us of this fact every day. As a product, [our vector database](https://qdrant.tech/) is highly scalable and business-friendly. We develop our features strategically to follow our company’s Unix philosophy. We want to keep Qdrant compact, efficient and with a focused purpose. 
This purpose is to empower our customers to use it however they see fit. When large enterprises release their generative AI into production, they need to keep costs under control, while retaining the best possible quality of responses. Qdrant has the [vector search solutions](https://qdrant.tech/solutions/) to do just that. Revolutionize your vector search capabilities and get started with [a Qdrant demo](https://qdrant.tech/contact-us/).
articles/rag-is-dead.md
--- title: "BM42: New Baseline for Hybrid Search" short_description: "Introducing next evolutionary step in lexical search." description: "Introducing BM42 - a new sparse embedding approach, which combines the benefits of exact keyword search with the intelligence of transformers." social_preview_image: /articles_data/bm42/social-preview.jpg preview_dir: /articles_data/bm42/preview weight: -140 author: Andrey Vasnetsov date: 2024-07-01T12:00:00+03:00 draft: false keywords: - hybrid search - sparse embeddings - bm25 --- <aside role="status"> Please note that the benchmark section of this article was updated after the publication due to a mistake in the evaluation script. BM42 does not outperform BM25 implementation of other vendors. Please consider BM42 as an experimental approach, which requires further research and development before it can be used in production. </aside> For the last 40 years, BM25 has served as the standard for search engines. It is a simple yet powerful algorithm that has been used by many search engines, including Google, Bing, and Yahoo. Though it seemed that the advent of vector search would diminish its influence, it did so only partially. The current state-of-the-art approach to retrieval nowadays tries to incorporate BM25 along with embeddings into a hybrid search system. However, the use case of text retrieval has significantly shifted since the introduction of RAG. Many assumptions upon which BM25 was built are no longer valid. For example, the typical length of documents and queries vary significantly between traditional web search and modern RAG systems. In this article, we will recap what made BM25 relevant for so long and why alternatives have struggled to replace it. Finally, we will discuss BM42, as the next step in the evolution of lexical search. ## Why has BM25 stayed relevant for so long? To understand why, we need to analyze its components. The famous BM25 formula is defined as: $$ \text{score}(D,Q) = \sum_{i=1}^{N} \text{IDF}(q_i) \times \frac{f(q_i, D) \cdot (k_1 + 1)}{f(q_i, D) + k_1 \cdot \left(1 - b + b \cdot \frac{|D|}{\text{avgdl}}\right)} $$ Let's simplify this to gain a better understanding. - The $score(D, Q)$ - means that we compute the score for each pair of document $D$ and query $Q$. - The $\sum_{i=1}^{N}$ - means that each of $N$ terms in the query contribute to the final score as a part of the sum. - The $\text{IDF}(q_i)$ - is the inverse document frequency. The more rare the term $q_i$ is, the more it contributes to the score. A simplified formula for this is: $$ \text{IDF}(q_i) = \frac{\text{Number of documents}}{\text{Number of documents with } q_i} $$ It is fair to say that the `IDF` is the most important part of the BM25 formula. `IDF` selects the most important terms in the query relative to the specific document collection. So intuitively, we can interpret the `IDF` as **term importance within the corpora**. That explains why BM25 is so good at handling queries, which dense embeddings consider out-of-domain. The last component of the formula can be intuitively interpreted as **term importance within the document**. This might look a bit complicated, so let's break it down. 
$$
\text{Term importance in document }(q_i) = \color{red}\frac{f(q_i, D)\color{black} \cdot \color{blue}(k_1 + 1) \color{black} }{\color{red}f(q_i, D)\color{black} + \color{blue}k_1\color{black} \cdot \left(1 - \color{blue}b\color{black} + \color{blue}b\color{black} \cdot \frac{|D|}{\text{avgdl}}\right)}
$$

- The $\color{red}f(q_i, D)\color{black}$ - is the frequency of the term $q_i$ in the document $D$. Or in other words, the number of times the term $q_i$ appears in the document $D$.
- The $\color{blue}k_1\color{black}$ and $\color{blue}b\color{black}$ are the hyperparameters of the BM25 formula. In most implementations, they are constants set to $k_1=1.5$ and $b=0.75$. Those constants define the relative impact of the term frequency and the document length in the formula.
- The $\frac{|D|}{\text{avgdl}}$ - is the relative length of the document $D$ compared to the average document length in the corpora. The intuition behind this part is the following: if the token is found in a smaller document, it is more likely to be important for that document.

#### Will BM25 term importance in the document work for RAG?

As we can see, the *term importance in the document* heavily depends on the statistics within the document. Moreover, such statistics only work well if the document is long enough. Therefore, it is suitable for searching webpages, books, articles, etc. However, would it work as well for modern search applications, such as RAG? Let's see.

The typical length of a document in RAG is much shorter than that of web search. In fact, even if we are working with webpages and articles, we would prefer to split them into chunks so that

a) Dense models can handle them and
b) We can pinpoint the exact part of the document which is relevant to the query

As a result, the document size in RAG is small and fixed.

That effectively renders the *term importance in the document* part of the BM25 formula useless. The term frequency in the document is always 0 or 1, and the relative length of the document is always 1.

So, the only part of the BM25 formula that is still relevant for RAG is `IDF`. Let's see how we can leverage it.

## Why SPLADE is not always the answer

Before discussing our new approach, let's examine the current state-of-the-art alternative to BM25 - SPLADE. The idea behind SPLADE is interesting—what if we let a smart, end-to-end trained model generate a bag-of-words representation of the text for us? It will assign all the weights to the tokens, so we won't need to bother with statistics and hyperparameters. The documents are then represented as a sparse embedding, where each token is represented as an element of the sparse vector.

And it works in academic benchmarks. Many papers report that SPLADE outperforms BM25 in terms of retrieval quality. This performance, however, comes at a cost.

* **Inappropriate Tokenizer**: To incorporate transformers for this task, SPLADE models require using a standard transformer tokenizer. These tokenizers are not designed for retrieval tasks. For example, if a word is not in the (quite limited) vocabulary, it will be either split into subwords or replaced with a `[UNK]` token. This behavior works well for language modeling but is completely destructive for retrieval tasks.

* **Expensive Token Expansion**: In order to compensate for the tokenization issues, SPLADE uses a *token expansion* technique. This means that we generate a set of similar tokens for each token in the query.
There are a few problems with this approach: - It is computationally and memory expensive. We need to generate more values for each token in the document, which increases both the storage size and retrieval time. - It is not always clear where to stop with the token expansion. The more tokens we generate, the more likely we are to get the relevant one. But simultaneously, the more tokens we generate, the more likely we are to get irrelevant results. - Token expansion dilutes the interpretability of the search. We can't say which tokens were used in the document and which were generated by the token expansion. * **Domain and Language Dependency**: SPLADE models are trained on specific corpora. This means that they are not always generalizable to new or rare domains. As they don't use any statistics from the corpora, they cannot adapt to the new domain without fine-tuning. * **Inference Time**: Additionally, currently available SPLADE models are quite big and slow. They usually require a GPU to make the inference in a reasonable time. At Qdrant, we acknowledge the aforementioned problems and are looking for a solution. Our idea was to combine the best of both worlds - the simplicity and interpretability of BM25 and the intelligence of transformers while avoiding the pitfalls of SPLADE. And here is what we came up with. ## The best of both worlds As previously mentioned, `IDF` is the most important part of the BM25 formula. In fact it is so important, that we decided to build its calculation into the Qdrant engine itself. Check out our latest [release notes](https://github.com/qdrant/qdrant/releases/tag/v1.10.0). This type of separation allows streaming updates of the sparse embeddings while keeping the `IDF` calculation up-to-date. As for the second part of the formula, *the term importance within the document* needs to be rethought. Since we can't rely on the statistics within the document, we can try to use the semantics of the document instead. And semantics is what transformers are good at. Therefore, we only need to solve two problems: - How does one extract the importance information from the transformer? - How can tokenization issues be avoided? ### Attention is all you need Transformer models, even those used to generate embeddings, generate a bunch of different outputs. Some of those outputs are used to generate embeddings. Others are used to solve other kinds of tasks, such as classification, text generation, etc. The one particularly interesting output for us is the attention matrix. {{< figure src="/articles_data/bm42/attention-matrix.png" alt="Attention matrix" caption="Attention matrix" width="60%" >}} The attention matrix is a square matrix, where each row and column corresponds to the token in the input sequence. It represents the importance of each token in the input sequence for each other. The classical transformer models are trained to predict masked tokens in the context, so the attention weights define which context tokens influence the masked token most. Apart from regular text tokens, the transformer model also has a special token called `[CLS]`. This token represents the whole sequence in the classification tasks, which is exactly what we need. By looking at the attention row for the `[CLS]` token, we can get the importance of each token in the document for the whole document. ```python sentences = "Hello, World - is the starting point in most programming languages" features = transformer.tokenize(sentences) # ... 
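import torch  # required for torch.mean below

# Note (assumption, not shown in the original snippet): `transformer` is the first
# module of a SentenceTransformer model, which exposes the underlying HuggingFace
# model as `auto_model` and its tokenizer as `tokenizer`, e.g.:
#
#   from sentence_transformers import SentenceTransformer
#   transformer = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")[0]

# Human-readable tokens, used in the printout below
tokens = transformer.tokenizer.convert_ids_to_tokens(features["input_ids"][0])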
attentions = transformer.auto_model(**features, output_attentions=True).attentions

weights = torch.mean(attentions[-1][0,:,0], axis=0)
#            ▲                   ▲  ▲   ▲
#            │                   │  │   └─── [CLS] token is the first one
#            │                   │  └─────── First item of the batch
#            │                   └────────── Last transformer layer
#            └────────────────────────────── Average all 6 attention heads

for weight, token in zip(weights, tokens):
    print(f"{token}: {weight}")

# [CLS]       : 0.434 // Filter out the [CLS] token
# hello       : 0.039
# ,           : 0.039
# world       : 0.107 // <-- The most important token
# -           : 0.033
# is          : 0.024
# the         : 0.031
# starting    : 0.054
# point       : 0.028
# in          : 0.018
# most        : 0.016
# programming : 0.060 // <-- The third most important token
# languages   : 0.062 // <-- The second most important token
# [SEP]       : 0.047 // Filter out the [SEP] token
```

The resulting formula for the BM42 score would look like this:

$$
\text{score}(D,Q) = \sum_{i=1}^{N} \text{IDF}(q_i) \times \text{Attention}(\text{CLS}, q_i)
$$

Note that classical transformers have multiple attention heads, so we can get multiple importance vectors for the same document. The simplest way to combine them is to simply average them.

These averaged attention vectors make up the importance information we were looking for. The best part is, one can get them from any transformer model, without any additional training. Therefore, BM42 can support any natural language as long as there is a transformer model for it.

In our implementation, we use the `sentence-transformers/all-MiniLM-L6-v2` model, which gives a huge boost in the inference speed compared to the SPLADE models. In practice, any transformer model can be used. It doesn't require any additional training, and can be easily adapted to work as a BM42 backend.

### WordPiece retokenization

The final piece of the puzzle we need to solve is the tokenization issue. In order to get attention vectors, we need to use native transformer tokenization. But this tokenization is not suitable for retrieval tasks. What can we do about it?

Actually, the solution we came up with is quite simple. We reverse the tokenization process after we get the attention vectors.

Transformers use [WordPiece](https://huggingface.co/learn/nlp-course/en/chapter6/6) tokenization. If it sees a word that is not in the vocabulary, it splits it into subwords. Here is how that looks:

```text
"unbelievable" -> ["un", "##believ", "##able"]
```

We can then merge the subwords back into whole words. Luckily, the subwords are marked with the `##` prefix, so we can easily detect them. Since the attention weights are normalized, we can simply sum the attention weights of the subwords to get the attention weight of the whole word.

After that, we can apply the same traditional NLP techniques, such as:

- Removing stop-words
- Removing punctuation
- Lemmatization

In this way, we can significantly reduce the number of tokens, and therefore minimize the memory footprint of the sparse embeddings, without compromising the ability to match (almost) exact tokens.
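To make this concrete, here is a minimal, self-contained sketch (not part of the original article) of merging `##`-prefixed WordPiece subwords back into whole words by summing their weights. The tokens and weights below are illustrative, not taken from the model output above.

```python
def merge_subword_weights(tokens, weights):
    """Merge WordPiece subwords (marked with '##') back into whole words,
    summing their attention weights."""
    merged = []  # list of [word, weight] pairs
    for token, weight in zip(tokens, weights):
        if token.startswith("##") and merged:
            merged[-1][0] += token[2:]  # append the subword text to the previous word
            merged[-1][1] += weight     # accumulate its weight
        else:
            merged.append([token, weight])
    return merged


print(merge_subword_weights(
    ["un", "##believ", "##able", "story"],
    [0.05, 0.04, 0.03, 0.07],
))
# [['unbelievable', 0.12], ['story', 0.07]]  (up to floating point rounding)
```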
## Practical examples

| Trait                    | BM25         | SPLADE       | BM42         |
|--------------------------|--------------|--------------|--------------|
| Interpretability         | High ✅      | Ok 🆗        | High ✅      |
| Document Inference speed | Very high ✅ | Slow 🐌      | High ✅      |
| Query Inference speed    | Very high ✅ | Slow 🐌      | Very high ✅ |
| Memory footprint         | Low ✅       | High ❌      | Low ✅       |
| In-domain accuracy       | Ok 🆗        | High ✅      | High ✅      |
| Out-of-domain accuracy   | Ok 🆗        | Low ❌       | Ok 🆗        |
| Small documents accuracy | Low ❌       | High ✅      | High ✅      |
| Large documents accuracy | High ✅      | Low ❌       | Ok 🆗        |
| Unknown tokens handling  | Yes ✅       | Bad ❌       | Yes ✅       |
| Multi-lingual support    | Yes ✅       | No ❌        | Yes ✅       |
| Best Match               | Yes ✅       | No ❌        | Yes ✅       |

Starting from Qdrant v1.10.0, BM42 can be used in Qdrant via FastEmbed inference.

Let's see how you can set up a collection for hybrid search with BM42 and [jina.ai](https://jina.ai/embeddings/) dense embeddings.

```http
PUT collections/my-hybrid-collection
{
  "vectors": {
    "jina": {
      "size": 768,
      "distance": "Cosine"
    }
  },
  "sparse_vectors": {
    "bm42": {
      "modifier": "idf" // <--- This parameter enables the IDF calculation
    }
  }
}
```

```python
from qdrant_client import QdrantClient, models

client = QdrantClient()

client.create_collection(
    collection_name="my-hybrid-collection",
    vectors_config={
        "jina": models.VectorParams(
            size=768,
            distance=models.Distance.COSINE,
        )
    },
    sparse_vectors_config={
        "bm42": models.SparseVectorParams(
            modifier=models.Modifier.IDF,
        )
    }
)
```

The search query will retrieve the documents with both dense and sparse embeddings and combine the scores using the Reciprocal Rank Fusion (RRF) algorithm.

```python
from fastembed import SparseTextEmbedding, TextEmbedding

query_text = "best programming language for beginners?"

model_bm42 = SparseTextEmbedding(model_name="Qdrant/bm42-all-minilm-l6-v2-attentions")
model_jina = TextEmbedding(model_name="jinaai/jina-embeddings-v2-base-en")

sparse_embedding = list(model_bm42.query_embed(query_text))[0]
dense_embedding = list(model_jina.query_embed(query_text))[0]

client.query_points(
    collection_name="my-hybrid-collection",
    prefetch=[
        models.Prefetch(query=sparse_embedding.as_object(), using="bm42", limit=10),
        models.Prefetch(query=dense_embedding.tolist(), using="jina", limit=10),
    ],
    query=models.FusionQuery(fusion=models.Fusion.RRF),  # <--- Combine the scores
    limit=10
)
```

### Benchmarks

To prove the point further, we have conducted some benchmarks to highlight the cases where BM42 outperforms BM25. Please note that we didn't intend to make an exhaustive evaluation, as we are presenting a new approach, not a new model.

For our experiments, we chose the [quora](https://huggingface.co/datasets/BeIR/quora) dataset, which represents a question-deduplication task ~~the Question-Answering task~~.

A typical example from the dataset looks like this:

```text
{"_id": "109", "text": "How GST affects the CAs and tax officers?"}
{"_id": "110", "text": "Why can't I do my homework?"}
{"_id": "111", "text": "How difficult is it get into RSI?"}
```

As you can see, the texts are pretty short, so there is not much statistical signal to rely on.

After encoding with BM42, the average vector size is only **5.6 elements per document**.

With `datatype: uint8` available in Qdrant, the total size of the sparse vector index is about **13MB** for ~530k documents.
As a reference point, we use: - BM25 with tantivy - the [sparse vector BM25 implementation](https://github.com/qdrant/bm42_eval/blob/master/index_bm25_qdrant.py) with the same preprocessing pipeline like for BM42: tokenization, stop-words removal, and lemmatization | | BM25 (tantivy) | BM25 (Sparse) | BM42 | |----------------------|-------------------|---------------|----------| | ~~Precision @ 10~~ * | ~~0.45~~ | ~~0.45~~ | ~~0.49~~ | | Recall @ 10 | ~~0.71~~ **0.89** | 0.83 | 0.85 | \* - values were corrected after the publication due to a mistake in the evaluation script. <aside role="status"> When used properly, BM25 with tantivy achieves the best results. Our initial implementation performed wrong character escaping that led to understating the value of <code>recall@10</code> for tantivy. </aside> To make our benchmarks transparent, we have published scripts we used for the evaluation: see [github repo](https://github.com/qdrant/bm42_eval). Please note, that both BM25 and BM42 won't work well on their own in a production environment. Best results are achieved with a combination of sparse and dense embeddings in a hybrid approach. In this scenario, the two models are complementary to each other. The sparse model is responsible for exact token matching, while the dense model is responsible for semantic matching. Some more advanced models might outperform default `sentence-transformers/all-MiniLM-L6-v2` model we were using. We encourage developers involved in training embedding models to include a way to extract attention weights and contribute to the BM42 backend. ## Fostering curiosity and experimentation Despite all of its advantages, BM42 is not always a silver bullet. For large documents without chunks, BM25 might still be a better choice. There might be a smarter way to extract the importance information from the transformer. There could be a better method to weigh IDF against attention scores. Qdrant does not specialize in model training. Our core project is the search engine itself. However, we understand that we are not operating in a vacuum. By introducing BM42, we are stepping up to empower our community with novel tools for experimentation. We truly believe that the sparse vectors method is at exact level of abstraction to yield both powerful and flexible results. Many of you are sharing your recent Qdrant projects in our [Discord channel](https://discord.com/invite/qdrant). Feel free to try out BM42 and let us know what you come up with.
articles/bm42.md
--- title: "Binary Quantization - Vector Search, 40x Faster " short_description: "Binary Quantization is a newly introduced mechanism of reducing the memory footprint and increasing performance" description: "Binary Quantization is a newly introduced mechanism of reducing the memory footprint and increasing performance" social_preview_image: /articles_data/binary-quantization/social_preview.png small_preview_image: /articles_data/binary-quantization/binary-quantization-icon.svg preview_dir: /articles_data/binary-quantization/preview weight: -40 author: Nirant Kasliwal author_link: https://nirantk.com/about/ date: 2023-09-18T13:00:00+03:00 draft: false keywords: - vector search - binary quantization - memory optimization --- # Optimizing High-Dimensional Vectors with Binary Quantization Qdrant is built to handle typical scaling challenges: high throughput, low latency and efficient indexing. **Binary quantization (BQ)** is our latest attempt to give our customers the edge they need to scale efficiently. This feature is particularly excellent for collections with large vector lengths and a large number of points. Our results are dramatic: Using BQ will reduce your memory consumption and improve retrieval speeds by up to 40x. As is the case with other quantization methods, these benefits come at the cost of recall degradation. However, our implementation lets you balance the tradeoff between speed and recall accuracy at time of search, rather than time of index creation. The rest of this article will cover: 1. The importance of binary quantization 2. Basic implementation using our Python client 3. Benchmark analysis and usage recommendations ## What is Binary Quantization? Binary quantization (BQ) converts any vector embedding of floating point numbers into a vector of binary or boolean values. This feature is an extension of our past work on [scalar quantization](/articles/scalar-quantization/) where we convert `float32` to `uint8` and then leverage a specific SIMD CPU instruction to perform fast vector comparison. ![What is binary quantization](/articles_data/binary-quantization/bq-2.png) **This binarization function is how we convert a range to binary values. All numbers greater than zero are marked as 1. If it's zero or less, they become 0.** The benefit of reducing the vector embeddings to binary values is that boolean operations are very fast and need significantly less CPU instructions. In exchange for reducing our 32 bit embeddings to 1 bit embeddings we can see up to a 40x retrieval speed up gain! One of the reasons vector search still works with such a high compression rate is that these large vectors are over-parameterized for retrieval. This is because they are designed for ranking, clustering, and similar use cases, which typically need more information encoded in the vector. For example, The 1536 dimension OpenAI embedding is worse than Open Source counterparts of 384 dimension at retrieval and ranking. Specifically, it scores 49.25 on the same [Embedding Retrieval Benchmark](https://huggingface.co/spaces/mteb/leaderboard) where the Open Source `bge-small` scores 51.82. This 2.57 points difference adds up quite soon. Our implementation of quantization achieves a good balance between full, large vectors at ranking time and binary vectors at search and retrieval time. It also has the ability for you to adjust this balance depending on your use case. 
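To make the binarization function concrete, here is a toy sketch in Python (an illustration only, not Qdrant's internal implementation) of converting a float vector into bits and comparing two binary vectors with cheap element-wise operations:

```python
import numpy as np

# Binarization as described above: values greater than zero become 1,
# zero or negative values become 0.
embedding = np.array([0.24, -0.03, 0.7, 0.0, -1.2, 0.5], dtype=np.float32)
binary = (embedding > 0).astype(np.uint8)
print(binary)  # [1 0 1 0 0 1]

# Comparing two binary vectors then reduces to counting matching bits,
# which is much cheaper than a floating point dot product.
other = np.array([1, 0, 1, 1, 0, 1], dtype=np.uint8)
print(int(np.sum(binary == other)))  # 5 matching positions out of 6
```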
## Faster search and retrieval Unlike product quantization, binary quantization does not rely on reducing the search space for each probe. Instead, we build a binary index that helps us achieve large increases in search speed. ![Speed by quantization method](/articles_data/binary-quantization/bq-3.png) HNSW is the approximate nearest neighbor search. This means our accuracy improves up to a point of diminishing returns, as we check the index for more similar candidates. In the context of binary quantization, this is referred to as the **oversampling rate**. For example, if `oversampling=2.0` and the `limit=100`, then 200 vectors will first be selected using a quantized index. For those 200 vectors, the full 32 bit vector will be used with their HNSW index to a much more accurate 100 item result set. As opposed to doing a full HNSW search, we oversample a preliminary search and then only do the full search on this much smaller set of vectors. ## Improved storage efficiency The following diagram shows the binarization function, whereby we reduce 32 bits storage to 1 bit information. Text embeddings can be over 1024 elements of floating point 32 bit numbers. For example, remember that OpenAI embeddings are 1536 element vectors. This means each vector is 6kB for just storing the vector. ![Improved storage efficiency](/articles_data/binary-quantization/bq-4.png) In addition to storing the vector, we also need to maintain an index for faster search and retrieval. Qdrant’s formula to estimate overall memory consumption is: `memory_size = 1.5 * number_of_vectors * vector_dimension * 4 bytes` For 100K OpenAI Embedding (`ada-002`) vectors we would need 900 Megabytes of RAM and disk space. This consumption can start to add up rapidly as you create multiple collections or add more items to the database. **With binary quantization, those same 100K OpenAI vectors only require 128 MB of RAM.** We benchmarked this result using methods similar to those covered in our [Scalar Quantization memory estimation](/articles/scalar-quantization/#benchmarks). This reduction in RAM usage is achieved through the compression that happens in the binary conversion. HNSW and quantized vectors will live in RAM for quick access, while original vectors can be offloaded to disk only. For searching, quantized HNSW will provide oversampled candidates, then they will be re-evaluated using their disk-stored original vectors to refine the final results. All of this happens under the hood without any additional intervention on your part. ### When should you not use BQ? Since this method exploits the over-parameterization of embedding, you can expect poorer results for small embeddings i.e. less than 1024 dimensions. With the smaller number of elements, there is not enough information maintained in the binary vector to achieve good results. You will still get faster boolean operations and reduced RAM usage, but the accuracy degradation might be too high. ## Sample implementation Now that we have introduced you to binary quantization, let’s try our a basic implementation. In this example, we will be using OpenAI and Cohere with Qdrant. #### Create a collection with Binary Quantization enabled Here is what you should do at indexing time when you create the collection: 1. We store all the "full" vectors on disk. 2. Then we set the binary embeddings to be in RAM. By default, both the full vectors and BQ get stored in RAM. We move the full vectors to disk because this saves us memory and allows us to store more vectors in RAM. 
By doing this, we explicitly move the binary vectors to memory by setting `always_ram=True`.

```python
from qdrant_client import QdrantClient, models

# Connect to our Qdrant server
client = QdrantClient(
    url="http://localhost:6333",
    prefer_grpc=True,
)

# Create the collection to hold our embeddings
# on_disk=True and the quantization_config are the areas to focus on
collection_name = "binary-quantization"
if not client.collection_exists(collection_name):
    client.create_collection(
        collection_name=f"{collection_name}",
        vectors_config=models.VectorParams(
            size=1536,
            distance=models.Distance.DOT,
            on_disk=True,
        ),
        optimizers_config=models.OptimizersConfigDiff(
            default_segment_number=5,
            indexing_threshold=0,
        ),
        quantization_config=models.BinaryQuantization(
            binary=models.BinaryQuantizationConfig(always_ram=True),
        ),
    )
```

#### What is happening in the OptimizerConfig?

We're setting `indexing_threshold` to 0, i.e. disabling indexing during the upload. This allows faster uploads of vectors and payloads. We will turn it back on below, once all the data is loaded.

#### Next, we upload our vectors to this collection and then re-enable indexing:

```python
batch_size = 10000
client.upload_collection(
    collection_name=collection_name,
    ids=range(len(dataset)),
    vectors=dataset["openai"],
    payload=[
        {"text": x} for x in dataset["text"]
    ],
    parallel=10,  # based on the machine
    batch_size=batch_size,
)
```

Enable indexing again:

```python
client.update_collection(
    collection_name=f"{collection_name}",
    optimizer_config=models.OptimizersConfigDiff(
        indexing_threshold=20000
    )
)
```

#### Configure the search parameters:

When setting search parameters, we specify that we want to use `oversampling` and `rescore`. Here is an example snippet:

```python
client.search(
    collection_name=collection_name,
    query_vector=[0.2, 0.1, 0.9, 0.7, ...],
    search_params=models.SearchParams(
        quantization=models.QuantizationSearchParams(
            ignore=False,
            rescore=True,
            oversampling=2.0,
        )
    )
)
```

After Qdrant pulls the oversampled vector set, the full vectors (say, 1536 dimensions for OpenAI) are then pulled up from disk. Qdrant computes the nearest neighbors with the query vector and returns the accurate, rescored order. This method produces much more accurate results. We enabled this by setting `rescore=True`.

These two parameters are how you are going to balance speed versus accuracy. The larger the size of your oversample, the more items you need to read from disk and the more elements you have to search with the relatively slower full vector index. On the other hand, doing this will produce more accurate results.

If you have lower accuracy requirements, you can even try doing a small oversample without rescoring. Or, depending on your dataset and your accuracy versus speed requirements, you can search just the binary index with no rescoring at all, i.e. leave those two parameters out of the search query.

## Benchmark results

We retrieved some early results on the relationship between limit and oversampling using the DBPedia OpenAI 1M vector dataset. We ran all these experiments on a Qdrant instance where 100K vectors were indexed and used 100 random queries.

We varied the 3 parameters that will affect query time and accuracy: limit, rescore and oversampling. We offer these as an initial exploration of this new feature. You are highly encouraged to reproduce these experiments with your data sets.

> Aside: Since this is a new innovation in vector databases, we are keen to hear feedback and results.
> [Join our Discord server](https://discord.gg/Qy6HCJK9Dc) for further discussion!

**Oversampling:**
In the figure below, we illustrate the relationship between recall and number of candidates:

![Correct vs candidates](/articles_data/binary-quantization/bq-5.png)

We see that the number of "correct" results, i.e. recall, increases as the number of potential "candidates" increases (limit x oversampling). To highlight the impact of changing the `limit`, different limit values are broken apart into different curves. For example, we see that the lowest recall for limit 50 is around 94 correct results, with 100 candidates. This also implies we used an oversampling of 2.0.

As oversampling increases, we see a general improvement in results – but that does not hold in every case.

**Rescore:**
As expected, rescoring increases the time it takes to return a query. We also repeated the experiment with oversampling, except this time we looked at how rescoring impacted result accuracy.

![Relationship between limit and rescore on correct](/articles_data/binary-quantization/bq-7.png)

**Limit:**
We experimented with limits from Top 1 to Top 50, and we were able to get to 100% recall at limit 50, with rescore=True, in an index with 100K vectors.

## Recommendations

Quantization gives you the option to make tradeoffs against other parameters:

- Dimension count / embedding size
- Throughput and latency requirements
- Recall requirements

If you're working with OpenAI or Cohere embeddings, we recommend the following oversampling settings:

|Method|Dimensionality|Test Dataset|Recall|Oversampling|
|-|-|-|-|-|
|OpenAI text-embedding-3-large|3072|[DBpedia 1M](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-large-3072-1M) | 0.9966|3x|
|OpenAI text-embedding-3-small|1536|[DBpedia 100K](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-small-1536-100K)| 0.9847|3x|
|OpenAI text-embedding-3-large|1536|[DBpedia 1M](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-large-1536-1M)| 0.9826|3x|
|Cohere AI embed-english-v2.0|4096|[Wikipedia](https://huggingface.co/datasets/nreimers/wikipedia-22-12-large/tree/main) 1M|0.98|2x|
|OpenAI text-embedding-ada-002|1536|[DbPedia 1M](https://huggingface.co/datasets/KShivendu/dbpedia-entities-openai-1M) |0.98|4x|
|Gemini|768|No Open Data| 0.9563|3x|
|Mistral Embed|768|No Open Data| 0.9445 |3x|

If you determine that binary quantization is appropriate for your datasets and queries, then we suggest the following:

- Binary Quantization with always_ram=True
- Vectors stored on disk
- Oversampling=2.0 (or more)
- Rescore=True

## What's next?

Binary quantization is exceptional if you need to work with large volumes of data under high recall expectations. You can try this feature either by spinning up a [Qdrant container image](https://hub.docker.com/r/qdrant/qdrant) locally, or by having us create one for you through a [free account](https://cloud.qdrant.io/login) in our cloud hosted service.

The article gives examples of datasets and configurations you can use to get going. Our documentation covers [adding large datasets to Qdrant](/documentation/tutorials/bulk-upload/) as well as [more quantization methods](/documentation/guides/quantization/).

If you have any feedback, drop us a note on Twitter or LinkedIn to tell us about your results. [Join our lively Discord Server](https://discord.gg/Qy6HCJK9Dc) if you want to discuss BQ with like-minded people!
articles/binary-quantization.md
--- title: Introducing Qdrant 0.11 short_description: Check out what's new in Qdrant 0.11 description: Replication support is the most important change introduced by Qdrant 0.11. Check out what else has been added! preview_dir: /articles_data/qdrant-0-11-release/preview small_preview_image: /articles_data/qdrant-0-11-release/announcement-svgrepo-com.svg social_preview_image: /articles_data/qdrant-0-11-release/preview/social_preview.jpg weight: 65 author: Kacper Łukawski author_link: https://medium.com/@lukawskikacper date: 2022-10-26T13:55:00+02:00 draft: false --- We are excited to [announce the release of Qdrant v0.11](https://github.com/qdrant/qdrant/releases/tag/v0.11.0), which introduces a number of new features and improvements. ## Replication One of the key features in this release is replication support, which allows Qdrant to provide a high availability setup with distributed deployment out of the box. This, combined with sharding, enables you to horizontally scale both the size of your collections and the throughput of your cluster. This means that you can use Qdrant to handle large amounts of data without sacrificing performance or reliability. ## Administration API Another new feature is the administration API, which allows you to disable write operations to the service. This is useful in situations where search availability is more critical than updates, and can help prevent issues like memory usage watermarks from affecting your searches. ## Exact search We have also added the ability to report indexed payload points in the info API, which allows you to verify that payload values were properly formatted for indexing. In addition, we have introduced a new `exact` search parameter that allows you to force exact searches of vectors, even if an ANN index is built. This can be useful for validating the accuracy of your HNSW configuration. ## Backward compatibility This release is backward compatible with v0.10.5 storage in single node deployment, but unfortunately, distributed deployment is not compatible with previous versions due to the large number of changes required for the replica set implementation. However, clients are tested for backward compatibility with the v0.10.x service.
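For reference, the new `exact` search parameter described above can be passed at query time. A minimal sketch with the Python client might look like the following; the collection name and query vector are placeholders:

```python
from qdrant_client import QdrantClient
from qdrant_client.http import models

client = QdrantClient("localhost", port=6333)

# Force a full, exact scan instead of the approximate HNSW search,
# e.g. to validate the accuracy of your HNSW configuration
client.search(
    collection_name="my_collection",
    query_vector=[0.1, 0.2, 0.3, 0.4],
    search_params=models.SearchParams(exact=True),
    limit=10,
)
```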
articles/qdrant-0-11-release.md
--- title: Finding errors in datasets with Similarity Search short_description: Finding errors datasets with distance-based methods description: Improving quality of text-and-images datasets on the online furniture marketplace example. preview_dir: /articles_data/dataset-quality/preview social_preview_image: /articles_data/dataset-quality/preview/social_preview.jpg small_preview_image: /articles_data/dataset-quality/icon.svg weight: 8 author: George Panchuk author_link: https://medium.com/@george.panchuk date: 2022-07-18T10:18:00.000Z # aliases: [ /articles/dataset-quality/ ] --- Nowadays, people create a huge number of applications of various types and solve problems in different areas. Despite such diversity, they have something in common - they need to process data. Real-world data is a living structure, it grows day by day, changes a lot and becomes harder to work with. In some cases, you need to categorize or label your data, which can be a tough problem given its scale. The process of splitting or labelling is error-prone and these errors can be very costly. Imagine that you failed to achieve the desired quality of the model due to inaccurate labels. Worse, your users are faced with a lot of irrelevant items, unable to find what they need and getting annoyed by it. Thus, you get poor retention, and it directly impacts company revenue. It is really important to avoid such errors in your data. ## Furniture web-marketplace Let’s say you work on an online furniture marketplace. {{< figure src=https://storage.googleapis.com/demo-dataset-quality-public/article/furniture_marketplace.png caption="Furniture marketplace" >}} In this case, to ensure a good user experience, you need to split items into different categories: tables, chairs, beds, etc. One can arrange all the items manually and spend a lot of money and time on this. There is also another way: train a classification or similarity model and rely on it. With both approaches it is difficult to avoid mistakes. Manual labelling is a tedious task, but it requires concentration. Once you got distracted or your eyes became blurred mistakes won't keep you waiting. The model also can be wrong. You can analyse the most uncertain predictions and fix them, but the other errors will still leak to the site. There is no silver bullet. You should validate your dataset thoroughly, and you need tools for this. When you are sure that there are not many objects placed in the wrong category, they can be considered outliers or anomalies. Thus, you can train a model or a bunch of models capable of looking for anomalies, e.g. autoencoder and a classifier on it. However, this is again a resource-intensive task, both in terms of time and manual labour, since labels have to be provided for classification. On the contrary, if the proportion of out-of-place elements is high enough, outlier search methods are likely to be useless. ### Similarity search The idea behind similarity search is to measure semantic similarity between related parts of the data. E.g. between category title and item images. The hypothesis is, that unsuitable items will be less similar. We can't directly compare text and image data. For this we need an intermediate representation - embeddings. Embeddings are just numeric vectors containing semantic information. We can apply a pre-trained model to our data to produce these vectors. After embeddings are created, we can measure the distances between them. 
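As an illustration (not part of the original article), here is a minimal sketch of producing such embeddings with a pre-trained CLIP model from `sentence-transformers` and measuring the similarity between a category title and item images; the model choice and file names are placeholders:

```python
from PIL import Image
from sentence_transformers import SentenceTransformer, util

# A CLIP model that can encode both texts and images into the same space
model = SentenceTransformer("clip-ViT-B-32")

anchor_embedding = model.encode("single bed")  # category title as the anchor
item_embeddings = model.encode([
    Image.open("item_1.jpg"),  # placeholder image paths
    Image.open("item_2.jpg"),
])

# The lower the similarity to the anchor, the more suspicious the item
scores = util.cos_sim(anchor_embedding, item_embeddings)
print(scores)
```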
Assume we want to search for something other than a single bed in the «Single beds» category.

{{< figure src=https://storage.googleapis.com/demo-dataset-quality-public/article/similarity_search.png caption="Similarity search" >}}

One of the possible pipelines would look like this:

- Take the name of the category as an anchor and calculate the anchor embedding.
- Calculate embeddings for images of each object placed into this category.
- Compare obtained anchor and object embeddings.
- Find the furthest.

For instance, we can do it with the [CLIP](https://huggingface.co/sentence-transformers/clip-ViT-B-32-multilingual-v1) model.

{{< figure src=https://storage.googleapis.com/demo-dataset-quality-public/article/category_vs_image_transparent.png caption="Category vs. Image" >}}

We can also calculate embeddings for titles instead of images, or even for both of them to find more errors.

{{< figure src=https://storage.googleapis.com/demo-dataset-quality-public/article/category_vs_name_and_image_transparent.png caption="Category vs. Title and Image" >}}

As you can see, different approaches can find new errors or the same ones. Stacking several techniques or even the same techniques with different models may provide better coverage. Hint: Caching embeddings for the same models and reusing them among different methods can significantly speed up your lookup.

### Diversity search

Since pre-trained models have only general knowledge about the data, they can still leave some misplaced items undetected. You might find yourself in a situation when the model focuses on non-important features, selects a lot of irrelevant elements, and fails to find genuine errors. To mitigate this issue, you can perform a diversity search.

Diversity search is a method for finding the most distinctive examples in the data. Like similarity search, it also operates on embeddings and measures the distances between them. The difference lies in deciding which point should be extracted next.

Let's imagine how to get 3 points with similarity search and then with diversity search.

Similarity:

1. Calculate the distance matrix
2. Choose your anchor
3. Get a vector corresponding to the distances from the selected anchor from the distance matrix
4. Sort the fetched vector
5. Get the top-3 embeddings

Diversity:

1. Calculate the distance matrix
2. Initialize the starting point (randomly or according to certain conditions)
3. Get a distance vector for the selected starting point from the distance matrix
4. Find the furthest point
5. Get a distance vector for the new point
6. Find the furthest point from all of the already fetched points

{{< figure src=https://storage.googleapis.com/demo-dataset-quality-public/article/diversity_transparent.png caption="Diversity search" >}}

Diversity search utilizes the very same embeddings, and you can reuse them. If your data is huge and does not fit into memory, vector search engines like [Qdrant](https://github.com/qdrant/qdrant) might be helpful.

The described methods can be used independently, but they are simple to combine and together improve detection capabilities. If the quality remains insufficient, you can fine-tune the models using a similarity learning approach (e.g. with [Quaterion](https://quaterion.qdrant.tech)), both to provide a better representation of your data and to pull apart dissimilar objects in space.

## Conclusion

In this article, we described distance-based methods for finding errors in categorized datasets and showed how to find incorrectly placed items in a furniture web store.
I hope these methods will help you catch sneaky samples leaked into the wrong categories in your data, and make your users' experience more enjoyable.

Poke the [demo](https://dataset-quality.qdrant.tech).

Stay tuned :)
articles/dataset-quality.md
---
title: "What is a Sparse Vector? How to Achieve Vector-based Hybrid Search"
short_description: "Discover sparse vectors, their function, and significance in modern data processing, including methods like SPLADE for efficient use."
description: "Learn what sparse vectors are, how they work, and their importance in modern data processing. Explore methods like SPLADE for creating and leveraging sparse vectors efficiently."
social_preview_image: /articles_data/sparse-vectors/social_preview.png
small_preview_image: /articles_data/sparse-vectors/sparse-vectors-icon.svg
preview_dir: /articles_data/sparse-vectors/preview
weight: -100
author: Nirant Kasliwal
author_link: https://nirantk.com/about
date: 2023-12-09T13:00:00+03:00
draft: false
keywords:
- sparse vectors
- SPLADE
- hybrid search
- vector search
---

Think of a library with a vast index card system. Each index card only has a few keywords marked out (sparse vector) of a large possible set for each book (document). This is what sparse vectors enable for text.

## What are sparse and dense vectors?

Sparse vectors are like the Marie Kondo of data—keeping only what sparks joy (or relevance, in this case).

Consider a simplified example of 2 documents, each with 200 words. A dense vector would have several hundred non-zero values, whereas a sparse vector could have much fewer, say, only 20 non-zero values.

In this example, we assume the sparse encoding selects only 2 words or tokens from each document.

```python
dense = [0.2, 0.3, 0.5, 0.7, ...]  # several hundred floats
sparse = [{331: 0.5}, {14136: 0.7}]  # 20 key value pairs
```

The numbers 331 and 14136 map to specific tokens in the vocabulary e.g. `['chocolate', 'icecream']`. The rest of the values are zero. This is why it's called a sparse vector.

The tokens aren't always words though, sometimes they can be sub-words: `['ch', 'ocolate']` too.

They're pivotal in information retrieval, especially in ranking and search systems. BM25, a standard ranking function used by search engines like [Elasticsearch](https://www.elastic.co/blog/practical-bm25-part-2-the-bm25-algorithm-and-its-variables?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors), exemplifies this. BM25 calculates the relevance of documents to a given search query.

BM25's capabilities are well-established, yet it has its limitations.

BM25 relies solely on the frequency of words in a document and does not attempt to comprehend the meaning or the contextual importance of the words. Additionally, it requires the computation of the entire corpus's statistics in advance, posing a challenge for large datasets.

Sparse vectors harness the power of neural networks to surmount these limitations while retaining the ability to query exact words and phrases. They excel in handling large text data, making them crucial in modern data processing and marking an advancement over traditional methods such as BM25.

# Understanding sparse vectors

Sparse vectors are a representation where each dimension corresponds to a word or subword, greatly aiding in interpreting document rankings. This clarity is why sparse vectors are essential in modern search and recommendation systems, complementing the meaning-rich embeddings, or dense vectors.

Dense vectors from models like OpenAI Ada-002 or Sentence Transformers contain non-zero values for every element.
In contrast, sparse vectors focus on relative word weights per document, with most values being zero. This results in a more efficient and interpretable system, especially in text-heavy applications like search.

Sparse vectors shine in domains and scenarios where many rare keywords or specialized terms are present. For example, in the medical domain, many rare terms are not present in the general vocabulary, so general-purpose dense vectors cannot capture the nuances of the domain.

| Feature | Sparse Vectors | Dense Vectors |
|---------------------------|---------------------------------------------|----------------------------------------------|
| **Data Representation** | Majority of elements are zero | All elements are non-zero |
| **Computational Efficiency** | Generally higher, especially in operations involving zero elements | Lower, as operations are performed on all elements |
| **Information Density** | Less dense, focuses on key features | Highly dense, capturing nuanced relationships |
| **Example Applications** | Text search, Hybrid search | [RAG](https://qdrant.tech/articles/what-is-rag-in-ai/), many general machine learning tasks |

Where do sparse vectors fail though? They're not great at capturing nuanced relationships between words. For example, they can't capture the relationship between "king" and "queen" as well as dense vectors.

# SPLADE

Let's check out [SPLADE](https://europe.naverlabs.com/research/computer-science/splade-a-sparse-bi-encoder-bert-based-model-achieves-effective-and-efficient-full-text-document-ranking/?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors), an excellent way to make sparse vectors. Let's look at some numbers first. Higher is better:

| Model | MRR@10 (MS MARCO Dev) | Type |
|--------------------|---------|----------------|
| BM25 | 0.184 | Sparse |
| TCT-ColBERT | 0.359 | Dense |
| doc2query-T5 [link](https://github.com/castorini/docTTTTTquery) | 0.277 | Sparse |
| SPLADE | 0.322 | Sparse |
| SPLADE-max | 0.340 | Sparse |
| SPLADE-doc | 0.322 | Sparse |
| DistilSPLADE-max | 0.368 | Sparse |

All numbers are from [SPLADEv2](https://arxiv.org/abs/2109.10086). MRR is [Mean Reciprocal Rank](https://www.wikiwand.com/en/Mean_reciprocal_rank#References), a standard metric for ranking. [MS MARCO](https://microsoft.github.io/MSMARCO-Passage-Ranking/?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors) is a dataset for evaluating ranking and retrieval for passages.

SPLADE is quite flexible as a method, with regularization knobs that can be tuned to obtain [different models](https://github.com/naver/splade) as well:

> SPLADE is more a class of models rather than a model per se: depending on the regularization magnitude, we can obtain different models (from very sparse to models doing intense query/doc expansion) with different properties and performance.

First, let's look at how to create a sparse vector. Then, we'll look at the concepts behind SPLADE.

## Creating a sparse vector

We'll explore two different ways to create a sparse vector. The higher-performance way is to use dedicated document and query encoders. Here, we'll look at the simpler approach and use the same model for both document and query. We will get a dictionary of token ids and their corresponding weights for a sample text representing a document.
If you'd like to follow along, here's a [Colab Notebook](https://colab.research.google.com/gist/NirantK/ad658be3abefc09b17ce29f45255e14e/splade-single-encoder.ipynb), [alternate link](https://gist.github.com/NirantK/ad658be3abefc09b17ce29f45255e14e) with all the code. ### Setting Up ```python from transformers import AutoModelForMaskedLM, AutoTokenizer model_id = "naver/splade-cocondenser-ensembledistil" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForMaskedLM.from_pretrained(model_id) text = """Arthur Robert Ashe Jr. (July 10, 1943 – February 6, 1993) was an American professional tennis player. He won three Grand Slam titles in singles and two in doubles.""" ``` ### Computing the sparse vector ```python import torch def compute_vector(text): """ Computes a vector from logits and attention mask using ReLU, log, and max operations. """ tokens = tokenizer(text, return_tensors="pt") output = model(**tokens) logits, attention_mask = output.logits, tokens.attention_mask relu_log = torch.log(1 + torch.relu(logits)) weighted_log = relu_log * attention_mask.unsqueeze(-1) max_val, _ = torch.max(weighted_log, dim=1) vec = max_val.squeeze() return vec, tokens vec, tokens = compute_vector(text) print(vec.shape) ``` You'll notice that there are 38 tokens in the text based on this tokenizer. This will be different from the number of tokens in the vector. In a TF-IDF, we'd assign weights only to these tokens or words. In SPLADE, we assign weights to all the tokens in the vocabulary using this vector using our learned model. ## Term expansion and weights ```python def extract_and_map_sparse_vector(vector, tokenizer): """ Extracts non-zero elements from a given vector and maps these elements to their human-readable tokens using a tokenizer. The function creates and returns a sorted dictionary where keys are the tokens corresponding to non-zero elements in the vector, and values are the weights of these elements, sorted in descending order of weights. This function is useful in NLP tasks where you need to understand the significance of different tokens based on a model's output vector. It first identifies non-zero values in the vector, maps them to tokens, and sorts them by weight for better interpretability. Args: vector (torch.Tensor): A PyTorch tensor from which to extract non-zero elements. tokenizer: The tokenizer used for tokenization in the model, providing the mapping from tokens to indices. Returns: dict: A sorted dictionary mapping human-readable tokens to their corresponding non-zero weights. """ # Extract indices and values of non-zero elements in the vector cols = vector.nonzero().squeeze().cpu().tolist() weights = vector[cols].cpu().tolist() # Map indices to tokens and create a dictionary idx2token = {idx: token for token, idx in tokenizer.get_vocab().items()} token_weight_dict = { idx2token[idx]: round(weight, 2) for idx, weight in zip(cols, weights) } # Sort the dictionary by weights in descending order sorted_token_weight_dict = { k: v for k, v in sorted( token_weight_dict.items(), key=lambda item: item[1], reverse=True ) } return sorted_token_weight_dict # Usage example sorted_tokens = extract_and_map_sparse_vector(vec, tokenizer) sorted_tokens ``` There will be 102 sorted tokens in total. This has expanded to include tokens that weren't in the original text. This is the term expansion we will talk about next. 
Here are some terms that are added: "Berlin" and "founder" - even though the original text mentions neither Arthur's race (the association that leads to Owens' Berlin win) nor his work as the founder of the Arthur Ashe Institute for Urban Health.

Here are the top few `sorted_tokens`, with weights of roughly 1 or more:

```python
{
    "ashe": 2.95,
    "arthur": 2.61,
    "tennis": 2.22,
    "robert": 1.74,
    "jr": 1.55,
    "he": 1.39,
    "founder": 1.36,
    "doubles": 1.24,
    "won": 1.22,
    "slam": 1.22,
    "died": 1.19,
    "singles": 1.1,
    "was": 1.07,
    "player": 1.06,
    "titles": 0.99,
    ...
}
```

If you're interested in using the higher-performance approach, check out the following models:

1. [naver/efficient-splade-VI-BT-large-doc](https://huggingface.co/naver/efficient-splade-vi-bt-large-doc)
2. [naver/efficient-splade-VI-BT-large-query](https://huggingface.co/naver/efficient-splade-vi-bt-large-query)

## Why SPLADE works: term expansion

Consider a query "solar energy advantages". SPLADE might expand this to include terms like "renewable," "sustainable," and "photovoltaic," which are contextually relevant but not explicitly mentioned. This process is called term expansion, and it's a key component of SPLADE.

SPLADE learns the query/document expansion to include other relevant terms. This is a crucial advantage over other sparse methods which include the exact word, but completely miss the contextually relevant ones.

This expansion has a direct relationship with what we can control when making a SPLADE model: sparsity via regularisation, i.e. the number of tokens (BERT wordpieces) we use to represent each document. If we use more tokens, we can represent more terms, but the vectors become denser. This number is typically between 20 to 200 per document. As a reference point, the dense BERT vector has 768 dimensions and the OpenAI embedding has 1,536 dimensions, while the sparse vector only stores its non-zero entries, typically a few dozen to a couple of hundred per document.

For example, assume a 1M document corpus. Say, we use 100 sparse token ids + weights per document. Correspondingly, the dense BERT vectors would take 768M floats, the OpenAI embeddings 1.536B floats, and the sparse vectors at most 100M integers + 100M floats. This could mean a **10x reduction in memory usage**, which is a huge win for large-scale systems:

| Vector Type | Memory (GB) |
|-------------------|-------------------------|
| Dense BERT Vector | 6.144 |
| OpenAI Embedding | 12.288 |
| Sparse Vector | 1.12 |

## How SPLADE works: leveraging BERT

SPLADE leverages a transformer architecture to generate sparse representations of documents and queries, enabling efficient retrieval. Let's dive into the process.

The output logits from the transformer backbone are inputs upon which SPLADE builds. The transformer architecture can be something familiar like BERT. Rather than producing dense probability distributions, SPLADE utilizes these logits to construct sparse vectors—think of them as a distilled essence of tokens, where each dimension corresponds to a term from the vocabulary and its associated weight in the context of the given document or query.

This sparsity is critical; it mirrors the probability distributions from a typical [Masked Language Modeling](http://jalammar.github.io/illustrated-bert/?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors) task but is tuned for retrieval effectiveness, emphasizing terms that are both:

1. Contextually relevant: Terms that represent a document well should be given more weight.
2. Discriminative across documents: Terms that a document has, and other documents don't, should be given more weight.

The token-level distributions that you'd expect in a standard transformer model are now transformed into token-level importance scores in SPLADE. These scores reflect the significance of each term in the context of the document or query, guiding the model to allocate more weight to terms that are likely to be more meaningful for retrieval purposes.

The resulting sparse vectors are not only memory-efficient but also tailored for precise matching in the high-dimensional space of a search engine like Qdrant.

## Interpreting SPLADE

A downside of dense vectors is that they are not interpretable, making it difficult to understand why a document is relevant to a query.

SPLADE importance estimation can provide insights into the 'why' behind a document's relevance to a query. By shedding light on which tokens contribute most to the retrieval score, SPLADE offers some degree of interpretability alongside performance, a rare feat in the realm of neural IR systems. For engineers working on search, this transparency is invaluable.

## Known limitations of SPLADE

### Pooling strategy

The switch to max pooling in SPLADE improved its performance on the MS MARCO and TREC datasets. However, this indicates a potential limitation of the baseline SPLADE pooling method, suggesting that SPLADE's performance is sensitive to the choice of pooling strategy.

### Document and query encoder

The SPLADE model variant that uses a document encoder with max pooling but no query encoder reaches the same performance level as the prior SPLADE model. This calls into question how necessary a separate query encoder really is, and has implications for the efficiency of the model.

## Other sparse vector methods

SPLADE is not the only method to create sparse vectors.

Essentially, sparse vectors are a superset of TF-IDF and BM25, which are the most popular text retrieval methods. In other words, you can create a sparse vector using the term frequency and inverse document frequency (TF-IDF) to reproduce the BM25 score exactly.

Additionally, attention weights from Sentence Transformers can be used to create sparse vectors. This method preserves the ability to query exact words and phrases but avoids the computational overhead of query expansion used in SPLADE.

We will cover these methods in detail in a future article.

## Leveraging sparse vectors in Qdrant for hybrid search

Qdrant supports a separate index for sparse vectors. This enables you to use the same collection for both dense and sparse vectors. Each "Point" in Qdrant can have both dense and sparse vectors.

But let's first take a look at how you can work with sparse vectors in Qdrant.

## Practical implementation in Python

Let's dive into how Qdrant handles sparse vectors with an example. Here is what we will cover:

1. Setting Up Qdrant Client: Initially, we establish a connection with Qdrant using the QdrantClient. This setup is crucial for subsequent operations.
2. Creating a Collection with Sparse Vector Support: In Qdrant, a collection is a container for your vectors. Here, we create a collection specifically designed to support sparse vectors. This is done using the create_collection method where we define the parameters for sparse vectors, such as setting the index configuration.
3. Inserting Sparse Vectors: Once the collection is set up, we can insert sparse vectors into it.
   This involves defining the sparse vector with its indices and values, and then upserting this point into the collection.
4. Querying with Sparse Vectors: To perform a search, we first prepare a query vector. This involves computing the vector from a query text and extracting its indices and values. We then use these details to construct a query against our collection.
5. Retrieving and Interpreting Results: The search operation returns results that include the id of the matching document, its score, and other relevant details. The score is a crucial aspect, reflecting the similarity between the query and the documents in the collection.

### 1. Set up

```python
from qdrant_client import QdrantClient, models

# Qdrant client setup
client = QdrantClient(":memory:")

# Define collection name
COLLECTION_NAME = "example_collection"

# Insert sparse vector into Qdrant collection
point_id = 1  # Assign a unique ID for the point

# Non-zero indices and values of the document vector computed earlier
# with compute_vector(text)
indices = vec.nonzero().numpy().flatten()
values = vec.detach().numpy()[indices]
```

### 2. Create a collection with sparse vector support

```python
client.create_collection(
    collection_name=COLLECTION_NAME,
    vectors_config={},
    sparse_vectors_config={
        "text": models.SparseVectorParams(
            index=models.SparseIndexParams(
                on_disk=False,
            )
        )
    },
)
```

### 3. Insert sparse vectors

Here, we see the process of inserting a sparse vector into the Qdrant collection. This step is key to building a dataset that can be quickly retrieved in the first stage of the retrieval process, utilizing the efficiency of sparse vectors. Since this is for demonstration purposes, we insert only one point with a sparse vector and no dense vector.

```python
client.upsert(
    collection_name=COLLECTION_NAME,
    points=[
        models.PointStruct(
            id=point_id,
            payload={},  # Add any additional payload if necessary
            vector={
                "text": models.SparseVector(
                    indices=indices.tolist(), values=values.tolist()
                )
            },
        )
    ],
)
```

By upserting points with sparse vectors, we prepare our dataset for rapid first-stage retrieval, laying the groundwork for subsequent detailed analysis using dense vectors.

Notice that we use "text" to denote the name of the sparse vector. Those familiar with the Qdrant API will notice the extra care taken to be consistent with the existing named vectors API -- this makes it easier to use sparse vectors in existing codebases. As always, you're able to **apply payload filters**, shard keys, and other advanced features you've come to expect from Qdrant. To make things easier for you, the indices and values don't have to be sorted before upsert. Qdrant will sort them when the index is persisted, e.g. on disk.

### 4. Query with sparse vectors

We use the same process to prepare a query vector as well. This involves computing the vector from a query text and extracting its indices and values. We then use these details to construct a query against our collection.

```python
# Preparing a query vector
query_text = "Who was Arthur Ashe?"
query_vec, query_tokens = compute_vector(query_text)
query_vec.shape

query_indices = query_vec.nonzero().numpy().flatten()
query_values = query_vec.detach().numpy()[query_indices]
```

In this example, we use the same model for both document and query. This is not a requirement, but it's a simpler approach.

### 5. Retrieve and interpret results

After setting up the collection and inserting sparse vectors, the next critical step is retrieving and interpreting the results. This process involves executing a search query and then analyzing the returned results.
```python # Searching for similar documents result = client.search( collection_name=COLLECTION_NAME, query_vector=models.NamedSparseVector( name="text", vector=models.SparseVector( indices=query_indices, values=query_values, ), ), with_vectors=True, ) result ``` In the above code, we execute a search against our collection using the prepared sparse vector query. The `client.search` method takes the collection name and the query vector as inputs. The query vector is constructed using the `models.NamedSparseVector`, which includes the indices and values derived from the query text. This is a crucial step in efficiently retrieving relevant documents. ```python ScoredPoint( id=1, version=0, score=3.4292831420898438, payload={}, vector={ "text": SparseVector( indices=[2001, 2002, 2010, 2018, 2032, ...], values=[ 1.0660614967346191, 1.391068458557129, 0.8903818726539612, 0.2502821087837219, ..., ], ) }, ) ``` The result, as shown above, is a `ScoredPoint` object containing the ID of the retrieved document, its version, a similarity score, and the sparse vector. The score is a key element as it quantifies the similarity between the query and the document, based on their respective vectors. To understand how this scoring works, we use the familiar dot product method: $$\text{Similarity}(\text{Query}, \text{Document}) = \sum_{i \in I} \text{Query}_i \times \text{Document}_i$$ This formula calculates the similarity score by multiplying corresponding elements of the query and document vectors and summing these products. This method is particularly effective with sparse vectors, where many elements are zero, leading to a computationally efficient process. The higher the score, the greater the similarity between the query and the document, making it a valuable metric for assessing the relevance of the retrieved documents. ## Hybrid search: combining sparse and dense vectors By combining search results from both dense and sparse vectors, you can achieve a hybrid search that is both efficient and accurate. Results from sparse vectors will guarantee, that all results with the required keywords are returned, while dense vectors will cover the semantically similar results. The mixture of dense and sparse results can be presented directly to the user, or used as a first stage of a two-stage retrieval process. Let's see how you can make a hybrid search query in Qdrant. First, you need to create a collection with both dense and sparse vectors: ```python client.create_collection( collection_name=COLLECTION_NAME, vectors_config={ "text-dense": models.VectorParams( size=1536, # OpenAI Embeddings distance=models.Distance.COSINE, ) }, sparse_vectors_config={ "text-sparse": models.SparseVectorParams( index=models.SparseIndexParams( on_disk=False, ) ) }, ) ``` Then, assuming you have upserted both dense and sparse vectors, you can query them together: ```python query_text = "Who was Arthur Ashe?" # Compute sparse and dense vectors query_indices, query_values = compute_sparse_vector(query_text) query_dense_vector = compute_dense_vector(query_text) client.search_batch( collection_name=COLLECTION_NAME, requests=[ models.SearchRequest( vector=models.NamedVector( name="text-dense", vector=query_dense_vector, ), limit=10, ), models.SearchRequest( vector=models.NamedSparseVector( name="text-sparse", vector=models.SparseVector( indices=query_indices, values=query_values, ), ), limit=10, ), ], ) ``` The result will be a pair of result lists, one for dense and one for sparse vectors. 
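With that pair of lists in hand, one simple way to merge them (the fusion approaches are covered in the next section) is Reciprocal Rank Fusion (RRF). The helper below is only an illustrative sketch, not part of the Qdrant client; it assumes each hit exposes an `id` attribute, as Qdrant's `ScoredPoint` does:

```python
def reciprocal_rank_fusion(result_lists, k=60):
    """Fuse several ranked result lists into a single ranking.

    Each result only needs an `id` attribute; `k` is the usual RRF
    smoothing constant that dampens the influence of the top ranks.
    """
    fused_scores = {}
    for results in result_lists:
        for rank, point in enumerate(results):
            fused_scores[point.id] = fused_scores.get(point.id, 0.0) + 1.0 / (k + rank + 1)
    # Highest fused score first
    return sorted(fused_scores.items(), key=lambda item: item[1], reverse=True)


# dense_hits, sparse_hits = client.search_batch(...)  # as shown above
# fused = reciprocal_rank_fusion([dense_hits, sparse_hits])
```

Because RRF only looks at ranks, it side-steps the fact that dense and sparse scores live on very different scales.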
Having those results, there are several ways to combine them: ### Mixing or fusion You can mix the results from both dense and sparse vectors, based purely on their relative scores. This is a simple and effective approach, but it doesn't take into account the semantic similarity between the results. Among the [popular mixing methods](https://medium.com/plain-simple-software/distribution-based-score-fusion-dbsf-a-new-approach-to-vector-search-ranking-f87c37488b18) are: - Reciprocal Ranked Fusion (RRF) - Relative Score Fusion (RSF) - Distribution-Based Score Fusion (DBSF) {{< figure src=/articles_data/sparse-vectors/mixture.png caption="Relative Score Fusion" width=80% >}} [Ranx](https://github.com/AmenRa/ranx) is a great library for mixing results from different sources. ### Re-ranking You can use obtained results as a first stage of a two-stage retrieval process. In the second stage, you can re-rank the results from the first stage using a more complex model, such as [Cross-Encoders](https://www.sbert.net/examples/applications/cross-encoder/README.html) or services like [Cohere Rerank](https://txt.cohere.com/rerank/). And that's it! You've successfully achieved hybrid search with Qdrant! ## Additional resources For those who want to dive deeper, here are the top papers on the topic most of which have code available: 1. Problem Motivation: [Sparse Overcomplete Word Vector Representations](https://ar5iv.org/abs/1506.02004?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors) 1. [SPLADE v2: Sparse Lexical and Expansion Model for Information Retrieval](https://ar5iv.org/abs/2109.10086?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors) 1. [SPLADE: Sparse Lexical and Expansion Model for First Stage Ranking](https://ar5iv.org/abs/2107.05720?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors) 1. Late Interaction - [ColBERTv2: Effective and Efficient Retrieval via Lightweight Late Interaction](https://ar5iv.org/abs/2112.01488?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors) 1. [SparseEmbed: Learning Sparse Lexical Representations with Contextual Embeddings for Retrieval](https://research.google/pubs/pub52289/?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors) **Why just read when you can try it out?** We've packed an easy-to-use Colab for you on how to make a Sparse Vector: [Sparse Vectors Single Encoder Demo](https://colab.research.google.com/drive/1wa2Yr5BCOgV0MTOFFTude99BOXCLHXky?usp=sharing). Run it, tinker with it, and start seeing the magic unfold in your projects. We can't wait to hear how you use it! ## Conclusion Alright, folks, let's wrap it up. Better search isn't a 'nice-to-have,' it's a game-changer, and Qdrant can get you there. Got questions? Our [Discord community](https://qdrant.to/discord?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors) is teeming with answers. If you enjoyed reading this, why not sign up for our [newsletter](/subscribe/?utm_source=qdrant&utm_medium=website&utm_campaign=sparse-vectors&utm_content=article&utm_term=sparse-vectors) to stay ahead of the curve. And, of course, a big thanks to you, our readers, for pushing us to make ranking better for everyone.
articles/sparse-vectors.md
---
title: Google Summer of Code 2023 - Polygon Geo Filter for Qdrant Vector Database
short_description: Gsoc'23 Polygon Geo Filter for Qdrant Vector Database
description: A Summary of my work and experience at Qdrant's Gsoc '23.
preview_dir: /articles_data/geo-polygon-filter-gsoc/preview
small_preview_image: /articles_data/geo-polygon-filter-gsoc/icon.svg
social_preview_image: /articles_data/geo-polygon-filter-gsoc/preview/social_preview.jpg
weight: -50
author: Zein Wen
author_link: https://www.linkedin.com/in/zishenwen/
date: 2023-10-12T08:00:00+03:00
draft: false
keywords:
  - payload filtering
  - geo polygon
  - search condition
  - gsoc'23
---

## Introduction

Greetings, I'm Zein Wen, and I was a Google Summer of Code 2023 participant at Qdrant. I got to work with an amazing mentor, Arnaud Gourlay, on enhancing the Qdrant Geo Polygon Filter. This new feature allows users to refine their query results using polygons. As the latest addition to the Geo Filter family of radius and rectangle filters, this enhancement promises greater flexibility in querying geo data, unlocking interesting new use cases.

## Project Overview

{{< figure src="/articles_data/geo-polygon-filter-gsoc/geo-filter-example.png" caption="A Use Case of Geo Filter (https://traveltime.com/blog/map-postcode-data-catchment-area)" alt="A Use Case of Geo Filter" >}}

Because Qdrant is a powerful vector database, it presents immense potential for machine learning-driven applications, such as recommendation. However, the scope of vector queries alone may not always meet user requirements. Consider a scenario where you're seeking restaurant recommendations; it's not just about a list of restaurants, but those within your neighborhood. This is where the Geo Filter comes into play, enhancing queries by incorporating additional filtering criteria. Up until now, Qdrant's geographic filter options were confined to circular and rectangular shapes, which may not align with the diverse boundaries found in the real world. This scenario was exactly what led to a user feature request, and we decided it would be a good feature to tackle since it introduces greater capability for geo-related queries.

## Technical Challenges

**1. Geo Geometry Computation**

{{< figure src="/articles_data/geo-polygon-filter-gsoc/basic-concept.png" caption="Geo Space Basic Concept" alt="Geo Space Basic Concept" >}}

Internally, the Geo Filter doesn't start by testing each individual geo location, as this would be computationally expensive. Instead, we create a geo hash layer that [divides the world](https://en.wikipedia.org/wiki/Grid_(spatial_index)#Grid-based_spatial_indexing) into rectangles. When a spatial index is created for Qdrant entries, it assigns each entry to the geohash for its location. During a query, we first identify all potential geo hashes that satisfy the filters and subsequently check for location candidates within those hashes. Accomplishing this search involves two critical geometry computations:

1. determining if a polygon intersects with a rectangle
2. ascertaining if a point lies within a polygon

{{< figure src=/articles_data/geo-polygon-filter-gsoc/geo-computation-testing.png caption="Geometry Computation Testing" alt="Geometry Computation Testing" >}}

While we have a geo crate (a Rust library) that provides APIs for these computations, we dug in deeper to understand the underlying algorithms and verify their accuracy. This led us to conduct extensive testing and visualization to determine correctness.
In addition to assessing the current crate, we also discovered that there are multiple algorithms available for these computations. We invested time in exploring different approaches, such as [winding number](https://en.wikipedia.org/wiki/Point_in_polygon#Winding%20number%20algorithm:~:text=of%20the%20algorithm.-,Winding%20number%20algorithm,-%5Bedit%5D) and [ray casting](https://en.wikipedia.org/wiki/Point_in_polygon#Winding%20number%20algorithm:~:text=.%5B2%5D-,Ray%20casting%20algorithm,-%5Bedit%5D), to grasp their distinctions and pave the way for future improvements.

Through this process, I enjoyed honing my ability to swiftly grasp unfamiliar concepts. In addition, I needed to develop analytical strategies to dissect and draw meaningful conclusions from them. This experience has been invaluable in expanding my problem-solving toolkit.

**2. Proto and JSON format design**

Considerable effort was devoted to designing the ProtoBuf and JSON interfaces for this new feature. This component is directly exposed to users, requiring a consistent and user-friendly interface, which in turn helps drive a positive user experience and fewer code modifications in the future.

Initially, we contemplated aligning our interface with the [GeoJSON](https://geojson.org/) specification, given its prominence as a standard for many geo-related APIs. However, we soon realized that the way GeoJSON defines geometries significantly differs from our current JSON and ProtoBuf coordinate definitions for our point radius and rectangular filters. As a result, we prioritized API-level consistency and user experience, opting to align the new polygon definition with all our existing definitions.

In addition, we planned to develop a separate multi-polygon filter alongside the polygon filter. However, after careful consideration, we recognized that, for our use case, polygon filters can achieve the same result as a multi-polygon filter. This relationship mirrors how we currently handle multiple circles or rectangles. Consequently, we deemed the multi-polygon filter redundant, as it would only introduce unnecessary complexity to the API.

Doing this work illustrated to me the challenge of navigating real-world solutions that require striking a balance between adhering to established standards and prioritizing user experience. It was also key to understanding the wisdom of focusing on developing what's truly necessary for users, without overextending our efforts.

## Outcomes

**1. Capability of Deep Dive**

Navigating unfamiliar code bases, concepts, APIs, and techniques is a common challenge for developers. Participating in GSoC was akin to going from the safety of a swimming pool right into the expanse of the ocean. Having my mentor's support during this transition was invaluable. He provided me with numerous opportunities to independently delve into areas I had never explored before. I have grown to no longer fear unknown technical areas, whether it's unfamiliar code, techniques, or concepts in specific domains. I've gained confidence in my ability to learn them step by step and use them to create the things I envision.

**2. Always Put Users in Mind**

Another crucial lesson I learned is the importance of considering the user's experience and their specific use cases. While development may sometimes entail iterative processes, every aspect that directly impacts the user must be approached and executed with empathy.
Neglecting this consideration can lead not only to functional errors but also erode the trust of users due to inconsistency and confusion, which then leads to them no longer using my work. **3. Speak Up and Effectively Communicate** Finally, In the course of development, encountering differing opinions is commonplace. It's essential to remain open to others' ideas, while also possessing the resolve to communicate one's own perspective clearly. This fosters productive discussions and ultimately elevates the quality of the development process. ### Wrap up Being selected for Google Summer of Code 2023 and collaborating with Arnaud and the other Qdrant engineers, along with all the other community members, has been a true privilege. I'm deeply grateful to those who invested their time and effort in reviewing my code, engaging in discussions about alternatives and design choices, and offering assistance when needed. Through these interactions, I've experienced firsthand the essence of open source and the culture that encourages collaboration. This experience not only allowed me to write Rust code for a real-world product for the first time, but it also opened the door to the amazing world of open source. Without a doubt, I'm eager to continue growing alongside this community and contribute to new features and enhancements that elevate the product. I've also become an advocate for Qdrant, introducing this project to numerous coworkers and friends in the tech industry. I'm excited to witness new users and contributors emerge from within my own network! If you want to try out my work, read the [documentation](/documentation/concepts/filtering/#geo-polygon) and then, either sign up for a free [cloud account](https://cloud.qdrant.io) or download the [Docker image](https://hub.docker.com/r/qdrant/qdrant). I look forward to seeing how people are using my work in their own applications!
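To give a flavor of what using the feature looks like, here is a rough sketch of a polygon filter with the Python client. The collection name, payload key, query vector, and coordinates are all made up for illustration; the [filtering documentation](/documentation/concepts/filtering/#geo-polygon) linked above remains the authoritative reference:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient("localhost", port=6333)

# Hypothetical collection with a geo payload field called "location"
hits = client.search(
    collection_name="restaurants",
    query_vector=[0.2, 0.1, 0.9, 0.7],
    query_filter=models.Filter(
        must=[
            models.FieldCondition(
                key="location",
                geo_polygon=models.GeoPolygon(
                    # The exterior ring is closed: first and last points match
                    exterior=models.GeoLineString(
                        points=[
                            models.GeoPoint(lon=-74.02, lat=40.70),
                            models.GeoPoint(lon=-73.93, lat=40.70),
                            models.GeoPoint(lon=-73.93, lat=40.80),
                            models.GeoPoint(lon=-74.02, lat=40.80),
                            models.GeoPoint(lon=-74.02, lat=40.70),
                        ]
                    ),
                    # Optional holes to exclude from the polygon's area
                    interiors=[],
                ),
            )
        ]
    ),
    limit=10,
)
```

Only points whose `location` payload falls inside the polygon are considered for the vector search results.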
articles/geo-polygon-filter-gsoc.md
--- title: "Introducing Qdrant 1.3.0" short_description: "New version is out! Our latest release brings about some exciting performance improvements and much-needed fixes." description: "New version is out! Our latest release brings about some exciting performance improvements and much-needed fixes." social_preview_image: /articles_data/qdrant-1.3.x/social_preview.png small_preview_image: /articles_data/qdrant-1.3.x/icon.svg preview_dir: /articles_data/qdrant-1.3.x/preview weight: 2 author: David Sertic author_link: date: 2023-06-26T00:00:00Z draft: false keywords: - vector search - new features - oversampling - grouping lookup - io_uring - oversampling - group lookup --- A brand-new [Qdrant 1.3.0 release](https://github.com/qdrant/qdrant/releases/tag/v1.3.0) comes packed with a plethora of new features, performance improvements and bux fixes: 1. Asynchronous I/O interface: Reduce overhead by managing I/O operations asynchronously, thus minimizing context switches. 2. Oversampling for Quantization: Improve the accuracy and performance of your queries while using Scalar or Product Quantization. 3. Grouping API lookup: Storage optimization method that lets you look for points in another collection using group ids. 4. Qdrant Web UI: A convenient dashboard to help you manage data stored in Qdrant. 5. Temp directory for Snapshots: Set a separate storage directory for temporary snapshots on a faster disk. 6. Other important changes Your feedback is valuable to us, and are always tying to include some of your feature requests into our roadmap. Join [our Discord community](https://qdrant.to/discord) and help us build Qdrant!. ## New features ### Asychronous I/O interface Going forward, we will support the `io_uring` asychnronous interface for storage devices on Linux-based systems. Since its introduction, `io_uring` has been proven to speed up slow-disk deployments as it decouples kernel work from the IO process. <aside role="status">This experimental feature works on Linux kernels > 5.4 </aside> This interface uses two ring buffers to queue and manage I/O operations asynchronously, avoiding costly context switches and reducing overhead. Unlike mmap, it frees the user threads to do computations instead of waiting for the kernel to complete. ![io_uring](/articles_data/qdrant-1.3.x/io-uring.png) #### Enable the interface from your config file: ```yaml storage: # enable the async scorer which uses io_uring async_scorer: true ``` You can return to the mmap based backend by either deleting the `async_scorer` entry or setting the value to `false`. This optimization will mainly benefit workloads with lots of disk IO (e.g. querying on-disk collections with rescoring). Please keep in mind that this feature is experimental and that the interface may change in further versions. ### Oversampling for quantization We are introducing [oversampling](/documentation/guides/quantization/#oversampling) as a new way to help you improve the accuracy and performance of similarity search algorithms. With this method, you are able to significantly compress high-dimensional vectors in memory and then compensate the accuracy loss by re-scoring additional points with the original vectors. You will experience much faster performance with quantization due to parallel disk usage when reading vectors. Much better IO means that you can keep quantized vectors in RAM, so the pre-selection will be even faster. 
Finally, once pre-selection is done, you can use parallel IO to retrieve original vectors, which is significantly faster than traversing HNSW on slow disks. #### Set the oversampling factor via query: Here is how you can configure the oversampling factor - define how many extra vectors should be pre-selected using the quantized index, and then re-scored using original vectors. ```http POST /collections/{collection_name}/points/search { "params": { "quantization": { "ignore": false, "rescore": true, "oversampling": 2.4 } }, "vector": [0.2, 0.1, 0.9, 0.7], "limit": 100 } ``` ```python from qdrant_client import QdrantClient from qdrant_client.http import models client = QdrantClient("localhost", port=6333) client.search( collection_name="{collection_name}", query_vector=[0.2, 0.1, 0.9, 0.7], search_params=models.SearchParams( quantization=models.QuantizationSearchParams( ignore=False, rescore=True, oversampling=2.4 ) ) ) ``` In this case, if `oversampling` is 2.4 and `limit` is 100, then 240 vectors will be pre-selected using quantized index, and then the top 100 points will be returned after re-scoring with the unquantized vectors. As you can see from the example above, this parameter is set during the query. This is a flexible method that will let you tune query accuracy. While the index is not changed, you can decide how many points you want to retrieve using quantized vectors. ### Grouping API lookup In version 1.2.0, we introduced a mechanism for requesting groups of points. Our new feature extends this functionality by giving you the option to look for points in another collection using the group ids. We wanted to add this feature, since having a single point for the shared data of the same item optimizes storage use, particularly if the payload is large. This has the extra benefit of having a single point to update when the information shared by the points in a group changes. ![Group Lookup](/articles_data/qdrant-1.3.x/group-lookup.png) For example, if you have a collection of documents, you may want to chunk them and store the points for the chunks in a separate collection, making sure that you store the point id from the document it belongs in the payload of the chunk point. 
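As a loose illustration of that layout (the collection name, ids, payload key, and toy 1-dimensional vectors below are invented for brevity), the chunk points used by the lookup request shown next could be stored like this, with each chunk carrying the id of its parent document in the payload:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(":memory:")

# Toy "chunks" collection with 1-dimensional vectors, purely for illustration
client.create_collection(
    collection_name="chunks",
    vectors_config=models.VectorParams(size=1, distance=models.Distance.COSINE),
)

# Each chunk stores the id of the document it belongs to in its payload,
# so the grouping API can group chunks by document and look the document up
# in a separate "documents" collection.
client.upsert(
    collection_name="chunks",
    points=[
        models.PointStruct(id=1, vector=[0.9], payload={"document_id": 100}),
        models.PointStruct(id=2, vector=[0.8], payload={"document_id": 100}),
        models.PointStruct(id=3, vector=[0.1], payload={"document_id": 200}),
    ],
)
```

The hypothetical `documents` collection would then hold one point per document, with point ids matching the `document_id` values used above (Qdrant point ids must be unsigned integers or UUIDs).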
#### Adding the parameter to grouping API request:

When using the grouping API, add the `with_lookup` parameter to bring the information from those points into each group:

```http
POST /collections/chunks/points/search/groups

{
    // Same as in the regular search API
    "vector": [1.1],
    ...,

    // Grouping parameters
    "group_by": "document_id",
    "limit": 2,
    "group_size": 2,

    // Lookup parameters
    "with_lookup": {
        // Name of the collection to look up points in
        "collection_name": "documents",

        // Options for specifying what to bring from the payload
        // of the looked up point, true by default
        "with_payload": ["title", "text"],

        // Options for specifying what to bring from the vector(s)
        // of the looked up point, true by default
        "with_vectors": false
    }
}
```

```python
client.search_groups(
    collection_name="chunks",
    # Same as in the regular search() API
    query_vector=[1.1],
    ...,
    # Grouping parameters
    group_by="document_id",  # Path of the field to group by
    limit=2,  # Max amount of groups
    group_size=2,  # Max amount of points per group
    # Lookup parameters
    with_lookup=models.WithLookup(
        # Name of the collection to look up points in
        collection_name="documents",
        # Options for specifying what to bring from the payload
        # of the looked up point, True by default
        with_payload=["title", "text"],
        # Options for specifying what to bring from the vector(s)
        # of the looked up point, True by default
        with_vectors=False,
    ),
)
```

### Qdrant web user interface

We are excited to announce a more user-friendly way to organize and work with your collections inside of Qdrant. Our dashboard's design is simple, but very intuitive and easy to access.

Try it out now! If you have Docker running, you can [quickstart Qdrant](/documentation/quick-start/) and access the Dashboard locally from [http://localhost:6333/dashboard](http://localhost:6333/dashboard). You should see this simple access point to Qdrant:

![Qdrant Web UI](/articles_data/qdrant-1.3.x/web-ui.png)

### Temporary directory for Snapshots

Currently, temporary snapshot files are created inside the `/storage` directory. Oftentimes `/storage` is a network-mounted disk. Therefore, we found this method suboptimal because `/storage` is limited in disk size and also because writing data to it may affect disk performance as it consumes bandwidth. This new feature allows you to specify a different directory on another disk that is faster. We expect this feature to significantly optimize cloud performance.

To change it, access `config.yaml` and set `storage.temp_path` to another directory location.

## Important changes

The latest release focuses not only on the new features but also introduces some changes making Qdrant even more reliable.

### Optimizing group requests

Internally, `is_empty` was not using the index when it was called, so it had to deserialize the whole payload to see if the key had values or not. Our new update makes sure to check the index first, before confirming with the payload if it is actually `empty`/`null`, so these changes improve performance only when the negated condition is true (e.g. it improves when the field is not empty). Going forward, this will improve the way grouping API requests are handled.

### Faster read access with mmap

If you used mmap, you most likely found that segments were always created with cold caches. The first request to the database needed to request the disk, which made startup slower despite plenty of RAM being available. We have implemented a way to ask the kernel to "heat up" the disk cache and make initialization much faster.
The function is expected to be used on startup, after segment optimization, and when reloading newly indexed segments. So far, this is only implemented for "immutable" memmaps.

## Release notes

As usual, [our release notes](https://github.com/qdrant/qdrant/releases/tag/v1.3.0) describe all the changes introduced in the latest version.
articles/qdrant-1.3.x.md
---
title: Vector Search in constant time
short_description: Apply Quantum Computing to your search engine
description: Quantum Quantization enables vector search in constant time. This article will discuss the concept of quantum quantization for ANN vector search.
preview_dir: /articles_data/quantum-quantization/preview
social_preview_image: /articles_data/quantum-quantization/social_preview.png
small_preview_image: /articles_data/quantum-quantization/icon.svg
weight: 1000
author: Prankstorm Team
draft: false
author_link: https://www.youtube.com/watch?v=dQw4w9WgXcQ
date: 2023-04-01T00:48:00.000Z
---

The advent of quantum computing has revolutionized many areas of science and technology, and one of the most intriguing developments has been its potential application to artificial neural networks (ANNs). One area where quantum computing can significantly improve performance is in vector search, a critical component of many machine learning tasks. In this article, we will discuss the concept of quantum quantization for ANN vector search, focusing on the conversion of float32 to qbit vectors and the ability to perform vector search on arbitrary-sized databases in constant time.

## Quantum Quantization and Entanglement

Quantum quantization is a novel approach that leverages the power of quantum computing to speed up the search process in ANNs. By converting traditional float32 vectors into qbit vectors, we can create quantum entanglement between the qbits. Quantum entanglement is a unique phenomenon in which the states of two or more particles become interdependent, regardless of the distance between them. This property of quantum systems can be harnessed to create highly efficient vector search algorithms.

The conversion of float32 vectors to qbit vectors can be represented by the following formula:

```text
qbit_vector = Q( float32_vector )
```

where Q is the quantum quantization function that transforms the float32_vector into a quantum entangled qbit_vector.

## Vector Search in Constant Time

The primary advantage of using quantum quantization for ANN vector search is the ability to search through an arbitrary-sized database in constant time.

The key to performing vector search in constant time with quantum quantization is to use a quantum algorithm called Grover's algorithm. Grover's algorithm is a quantum search algorithm that finds the location of a marked item in an unsorted database in O(√N) time, where N is the size of the database. This is a significant improvement over classical algorithms, which require O(N) time to solve the same problem.

However, there is one other trick which allows us to improve Grover's algorithm performance dramatically. This trick is called transposition, and it allows us to reduce the number of Grover's iterations from O(√N) to O(√D), where D is the dimension of the vector space. And since the dimension of the vector space is much smaller than the number of vectors, and usually is a constant, this effectively brings the number of Grover's iterations down from O(√N) to O(√D) = O(1).

Check out our [Quantum Quantization PR](https://github.com/qdrant/qdrant/pull/1639) on GitHub.
articles/quantum-quantization.md
--- title: "Introducing Qdrant 1.2.x" short_description: "Check out what Qdrant 1.2 brings to vector search" description: "Check out what Qdrant 1.2 brings to vector search" social_preview_image: /articles_data/qdrant-1.2.x/social_preview.png small_preview_image: /articles_data/qdrant-1.2.x/icon.svg preview_dir: /articles_data/qdrant-1.2.x/preview weight: 8 author: Kacper Łukawski author_link: https://medium.com/@lukawskikacper date: 2023-05-24T10:45:00+02:00 draft: false keywords: - vector search - new features - product quantization - optional vectors - nested filters - appendable mmap - group requests --- A brand-new Qdrant 1.2 release comes packed with a plethora of new features, some of which were highly requested by our users. If you want to shape the development of the Qdrant vector database, please [join our Discord community](https://qdrant.to/discord) and let us know how you use it! ## New features As usual, a minor version update of Qdrant brings some interesting new features. We love to see your feedback, and we tried to include the features most requested by our community. ### Product Quantization The primary focus of Qdrant was always performance. That's why we built it in Rust, but we were always concerned about making vector search affordable. From the very beginning, Qdrant offered support for disk-stored collections, as storage space is way cheaper than memory. That's also why we have introduced the [Scalar Quantization](/articles/scalar-quantization/) mechanism recently, which makes it possible to reduce the memory requirements by up to four times. Today, we are bringing a new quantization mechanism to life. A separate article on [Product Quantization](/documentation/quantization/#product-quantization) will describe that feature in more detail. In a nutshell, you can **reduce the memory requirements by up to 64 times**! ### Optional named vectors Qdrant has been supporting multiple named vectors per point for quite a long time. Those may have utterly different dimensionality and distance functions used to calculate similarity. Having multiple embeddings per item is an essential real-world scenario. For example, you might be encoding textual and visual data using different models. Or you might be experimenting with different models but don't want to make your payloads redundant by keeping them in separate collections. ![Optional vectors](/articles_data/qdrant-1.2.x/optional-vectors.png) However, up to the previous version, we requested that you provide all the vectors for each point. There have been many requests to allow nullable vectors, as sometimes you cannot generate an embedding or simply don't want to for reasons we don't need to know. ### Grouping requests Embeddings are great for capturing the semantics of the documents, but we rarely encode larger pieces of data into a single vector. Having a summary of a book may sound attractive, but in reality, we divide it into paragraphs or some different parts to have higher granularity. That pays off when we perform the semantic search, as we can return the relevant pieces only. That's also how modern tools like Langchain process the data. The typical way is to encode some smaller parts of the document and keep the document id as a payload attribute. ![Query without grouping request](/articles_data/qdrant-1.2.x/without-grouping-request.png) There are cases where we want to find relevant parts, but only up to a specific number of results per document (for example, only a single one). 
Up till now, we had to implement such a mechanism on the client side and send several calls to the Qdrant engine. But that's no longer the case. Qdrant 1.2 provides a mechanism for [grouping requests](/documentation/search/#grouping-api), which can handle that server-side, within a single call to the database. This mechanism is similar to the SQL `GROUP BY` clause.

![Query with grouping request](/articles_data/qdrant-1.2.x/with-grouping-request.png)

You are not limited to a single result per document, and you can select how many entries will be returned.

### Nested filters

Unlike some other vector databases, Qdrant accepts any arbitrary JSON payload, including arrays, objects, and arrays of objects. You can also [filter the search results using nested keys](/documentation/filtering/#nested-key), even through arrays (using the `[]` syntax).

Before Qdrant 1.2, it was impossible to express some more complex conditions on nested structures. For example, let's assume we have the following payload:

```json
{
  "country": {
    "name": "Japan",
    "cities": [
      {
        "name": "Tokyo",
        "population": 9.3,
        "area": 2194
      },
      {
        "name": "Osaka",
        "population": 2.7,
        "area": 223
      },
      {
        "name": "Kyoto",
        "population": 1.5,
        "area": 827.8
      }
    ]
  }
}
```

We want to filter the results to include only countries that have a city with over 2 million citizens and an area bigger than 500 square kilometers but no more than 1000. Looking at our data, there is no such city in Japan, but if we wrote the following filter, Japan would still be returned:

```json
{
  "filter": {
    "must": [
      {
        "key": "country.cities[].population",
        "range": {
          "gte": 2
        }
      },
      {
        "key": "country.cities[].area",
        "range": {
          "gt": 500,
          "lte": 1000
        }
      }
    ]
  },
  "limit": 3
}
```

Japan would be returned because Tokyo and Osaka match the first criterion, while Kyoto fulfills the second. But that's not what we wanted to achieve. That's the motivation behind introducing a new type of nested filter.

```json
{
  "filter": {
    "must": [
      {
        "nested": {
          "key": "country.cities",
          "filter": {
            "must": [
              {
                "key": "population",
                "range": {
                  "gte": 2
                }
              },
              {
                "key": "area",
                "range": {
                  "gt": 500,
                  "lte": 1000
                }
              }
            ]
          }
        }
      }
    ]
  },
  "limit": 3
}
```

The syntax is consistent with all the other supported filters and enables new possibilities. In our case, it allows us to express the joined condition on a nested structure and make the results list empty but correct.

## Important changes

The latest release focuses not only on the new features but also introduces some changes making Qdrant even more reliable.

### Recovery mode

There has been an issue in memory-constrained environments, such as cloud deployments, that occurred when users were pushing massive amounts of data into the service using `wait=false`. This data influx resulted in exceeding disk or RAM limits before the Write-Ahead Logging (WAL) was fully applied. This situation was causing Qdrant to attempt a restart and reapplication of WAL, failing recurrently due to the same memory constraints and pushing the service into a frustrating crash loop with many Out-of-Memory errors.

Qdrant 1.2 enters recovery mode, if enabled, when it detects a failure on startup. That makes the service halt the loading of collection data and commence operations in a partial state. This state allows for removing collections but doesn't support search or update functions. **Recovery mode [has to be enabled by the user](/documentation/administration/#recovery-mode).**

### Appendable mmap

For a long time, segments using mmap storage were `non-appendable` and could only be constructed by the optimizer.
Dynamically adding vectors to an mmap file is fairly complicated, which is why it had not been implemented in Qdrant before; in the recent release, we did our best to implement it. If you want to read more about segments, check out our docs on [vector storage](/documentation/storage/#vector-storage).

## Security

There are two major changes in terms of [security](/documentation/security/):

1. **API-key support** - basic authentication with a static API key to prevent unwanted access. Previously API keys were only supported in [Qdrant Cloud](https://cloud.qdrant.io/).
2. **TLS support** - to use encrypted connections and prevent sniffing/MitM attacks.

## Release notes

As usual, [our release notes](https://github.com/qdrant/qdrant/releases/tag/v1.2.0) describe all the changes introduced in the latest version.
articles/qdrant-1.2.x.md
--- title: "Qdrant under the hood: io_uring" short_description: "The Linux io_uring API offers great performance in certain cases. Here's how Qdrant uses it!" description: "Slow disk decelerating your Qdrant deployment? Get on top of IO overhead with this one trick!" social_preview_image: /articles_data/io_uring/social_preview.png small_preview_image: /articles_data/io_uring/io_uring-icon.svg preview_dir: /articles_data/io_uring/preview weight: 3 author: Andre Bogus author_link: https://llogiq.github.io date: 2023-06-21T09:45:00+02:00 draft: false keywords: - vector search - linux - optimization aliases: [ /articles/io-uring/ ] --- With Qdrant [version 1.3.0](https://github.com/qdrant/qdrant/releases/tag/v1.3.0) we introduce the alternative io\_uring based *async uring* storage backend on Linux-based systems. Since its introduction, io\_uring has been known to improve async throughput wherever the OS syscall overhead gets too high, which tends to occur in situations where software becomes *IO bound* (that is, mostly waiting on disk). ## Input+Output Around the mid-90s, the internet took off. The first servers used a process- per-request setup, which was good for serving hundreds if not thousands of concurrent request. The POSIX Input + Output (IO) was modeled in a strictly synchronous way. The overhead of starting a new process for each request made this model unsustainable. So servers started forgoing process separation, opting for the thread-per-request model. But even that ran into limitations. I distinctly remember when someone asked the question whether a server could serve 10k concurrent connections, which at the time exhausted the memory of most systems (because every thread had to have its own stack and some other metadata, which quickly filled up available memory). As a result, the synchronous IO was replaced by asynchronous IO during the 2.5 kernel update, either via `select` or `epoll` (the latter being Linux-only, but a small bit more efficient, so most servers of the time used it). However, even this crude form of asynchronous IO carries the overhead of at least one system call per operation. Each system call incurs a context switch, and while this operation is itself not that slow, the switch disturbs the caches. Today's CPUs are much faster than memory, but if their caches start to miss data, the memory accesses required led to longer and longer wait times for the CPU. ### Memory-mapped IO Another way of dealing with file IO (which unlike network IO doesn't have a hard time requirement) is to map parts of files into memory - the system fakes having that chunk of the file in memory, so when you read from a location there, the kernel interrupts your process to load the needed data from disk, and resumes your process once done, whereas writing to the memory will also notify the kernel. Also the kernel can prefetch data while the program is running, thus reducing the likelyhood of interrupts. Thus there is still some overhead, but (especially in asynchronous applications) it's far less than with `epoll`. The reason this API is rarely used in web servers is that these usually have a large variety of files to access, unlike a database, which can map its own backing store into memory once. ### Combating the Poll-ution There were multiple experiments to improve matters, some even going so far as moving a HTTP server into the kernel, which of course brought its own share of problems. Others like Intel added their own APIs that ignored the kernel and worked directly on the hardware. 
Finally, Jens Axboe took matters into his own hands and proposed a ring buffer based interface called *io\_uring*. The buffers are not directly for data, but for operations. User processes can setup a Submission Queue (SQ) and a Completion Queue (CQ), both of which are shared between the process and the kernel, so there's no copying overhead. ![io_uring diagram](/articles_data/io_uring/io-uring.png) Apart from avoiding copying overhead, the queue-based architecture lends itself to multithreading as item insertion/extraction can be made lockless, and once the queues are set up, there is no further syscall that would stop any user thread. Servers that use this can easily get to over 100k concurrent requests. Today Linux allows asynchronous IO via io\_uring for network, disk and accessing other ports, e.g. for printing or recording video. ## And what about Qdrant? Qdrant can store everything in memory, but not all data sets may fit, which can require storing on disk. Before io\_uring, Qdrant used mmap to do its IO. This led to some modest overhead in case of disk latency. The kernel may stop a user thread trying to access a mapped region, which incurs some context switching overhead plus the wait time until the disk IO is finished. Ultimately, this works very well with the asynchronous nature of Qdrant's core. One of the great optimizations Qdrant offers is quantization (either [scalar](/articles/scalar-quantization/) or [product](/articles/product-quantization/)-based). However unless the collection resides fully in memory, this optimization method generates significant disk IO, so it is a prime candidate for possible improvements. If you run Qdrant on Linux, you can enable io\_uring with the following in your configuration: ```yaml # within the storage config storage: # enable the async scorer which uses io_uring async_scorer: true ``` You can return to the mmap based backend by either deleting the `async_scorer` entry or setting the value to `false`. ## Benchmarks To run the benchmark, use a test instance of Qdrant. If necessary spin up a docker container and load a snapshot of the collection you want to benchmark with. You can copy and edit our [benchmark script](/articles_data/io_uring/rescore-benchmark.sh) to run the benchmark. Run the script with and without enabling `storage.async_scorer` and once. You can measure IO usage with `iostat` from another console. For our benchmark, we chose the laion dataset picking 5 million 768d entries. We enabled scalar quantization + HNSW with m=16 and ef_construct=512. We do the quantization in RAM, HNSW in RAM but keep the original vectors on disk (which was a network drive rented from Hetzner for the benchmark). 
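As an aside, the `iostat` tool mentioned above is the easiest way to confirm that the benchmark is actually IO bound. A minimal invocation from a second console could look like this (assuming the `sysstat` package that ships `iostat` is installed; the device names in the output will depend on your machine):

```bash
# Report extended per-device statistics (IOPS, throughput, utilization)
# once per second until interrupted.
iostat -x 1
```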
If you want to reproduce the benchmarks, you can get snapshots containing the datasets: * [mmap only](https://storage.googleapis.com/common-datasets-snapshots/laion-768-6m-mmap.snapshot) * [with scalar quantization](https://storage.googleapis.com/common-datasets-snapshots/laion-768-6m-sq-m16-mmap.shapshot) Running the benchmark, we get the following IOPS, CPU loads and wall clock times: | | oversampling | parallel | ~max IOPS | CPU% (of 4 cores) | time (s) (avg of 3) | |----------|--------------|----------|-----------|-------------------|---------------------| | io_uring | 1 | 4 | 4000 | 200 | 12 | | mmap | 1 | 4 | 2000 | 93 | 43 | | io_uring | 1 | 8 | 4000 | 200 | 12 | | mmap | 1 | 8 | 2000 | 90 | 43 | | io_uring | 4 | 8 | 7000 | 100 | 30 | | mmap | 4 | 8 | 2300 | 50 | 145 | Note that in this case, the IO operations have relatively high latency due to using a network disk. Thus, the kernel takes more time to fulfil the mmap requests, and application threads need to wait, which is reflected in the CPU percentage. On the other hand, with the io\_uring backend, the application threads can better use available cores for the rescore operation without any IO-induced delays. Oversampling is a new feature to improve accuracy at the cost of some performance. It allows setting a factor, which is multiplied with the `limit` while doing the search. The results are then re-scored using the original vector and only then the top results up to the limit are selected. ## Discussion Looking back, disk IO used to be very serialized; re-positioning read-write heads on moving platter was a slow and messy business. So the system overhead didn't matter as much, but nowadays with SSDs that can often even parallelize operations while offering near-perfect random access, the overhead starts to become quite visible. While memory-mapped IO gives us a fair deal in terms of ease of use and performance, we can improve on the latter in exchange for some modest complexity increase. io\_uring is still quite young, having only been introduced in 2019 with kernel 5.1, so some administrators will be wary of introducing it. Of course, as with performance, the right answer is usually "it depends", so please review your personal risk profile and act accordingly. ## Best Practices If your on-disk collection's query performance is of sufficiently high priority to you, enable the io\_uring-based async\_scorer to greatly reduce operating system overhead from disk IO. On the other hand, if your collections are in memory only, activating it will be ineffective. Also note that many queries are not IO bound, so the overhead may or may not become measurable in your workload. Finally, on-device disks typically carry lower latency than network drives, which may also affect mmap overhead. Therefore before you roll out io\_uring, perform the above or a similar benchmark with both mmap and io\_uring and measure both wall time and IOps). Benchmarks are always highly use-case dependent, so your mileage may vary. Still, doing that benchmark once is a small price for the possible performance wins. Also please [tell us](https://discord.com/channels/907569970500743200/907569971079569410) about your benchmark results!
articles/io_uring.md
--- title: "Hybrid Search Revamped - Building with Qdrant's Query API" short_description: "Merging different search methods to improve the search quality was never easier" description: "Our new Query API allows you to build a hybrid search system that uses different search methods to improve search quality & experience. Learn more here." preview_dir: /articles_data/hybrid-search/preview social_preview_image: /articles_data/hybrid-search/social-preview.png weight: -150 author: Kacper Łukawski author_link: https://kacperlukawski.com date: 2024-07-25T00:00:00.000Z --- It's been over a year since we published the original article on how to build a hybrid search system with Qdrant. The idea was straightforward: combine the results from different search methods to improve retrieval quality. Back in 2023, you still needed to use an additional service to bring lexical search capabilities and combine all the intermediate results. Things have changed since then. Once we introduced support for sparse vectors, [the additional search service became obsolete](/articles/sparse-vectors/), but you were still required to combine the results from different methods on your end. **Qdrant 1.10 introduces a new Query API that lets you build a search system by combining different search methods to improve retrieval quality**. Everything is now done on the server side, and you can focus on building the best search experience for your users. In this article, we will show you how to utilize the new [Query API](/documentation/concepts/search/#query-api) to build a hybrid search system. ## Introducing the new Query API At Qdrant, we believe that vector search capabilities go well beyond a simple search for nearest neighbors. That's why we provided separate methods for different search use cases, such as `search`, `recommend`, or `discover`. With the latest release, we are happy to introduce the new Query API, which combines all of these methods into a single endpoint and also supports creating nested multistage queries that can be used to build complex search pipelines. If you are an existing Qdrant user, you probably have a running search mechanism that you want to improve, whether sparse or dense. Doing any changes should be preceded by a proper evaluation of its effectiveness. ## How effective is your search system? None of the experiments makes sense if you don't measure the quality. How else would you compare which method works better for your use case? The most common way of doing that is by using the standard metrics, such as `precision@k`, `MRR`, or `NDCG`. There are existing libraries, such as [ranx](https://amenra.github.io/ranx/), that can help you with that. We need to have the ground truth dataset to calculate any of these, but curating it is a separate task. 
```python from ranx import Qrels, Run, evaluate # Qrels, or query relevance judgments, keep the ground truth data qrels_dict = { "q_1": { "d_12": 5, "d_25": 3 }, "q_2": { "d_11": 6, "d_22": 1 } } # Runs are built from the search results run_dict = { "q_1": { "d_12": 0.9, "d_23": 0.8, "d_25": 0.7, "d_36": 0.6, "d_32": 0.5, "d_35": 0.4 }, "q_2": { "d_12": 0.9, "d_11": 0.8, "d_25": 0.7, "d_36": 0.6, "d_22": 0.5, "d_35": 0.4 } } # We need to create both objects, and then we can evaluate the run against the qrels qrels = Qrels(qrels_dict) run = Run(run_dict) # Calculating the NDCG@5 metric is as simple as that evaluate(qrels, run, "ndcg@5") ``` ## Available embedding options with Query API Support for multiple vectors per point is nothing new in Qdrant, but introducing the Query API makes it even more powerful. The 1.10 release supports the multivectors, allowing you to treat embedding lists as a single entity. There are many possible ways of utilizing this feature, and the most prominent one is the support for late interaction models, such as [ColBERT](https://qdrant.tech/documentation/fastembed/fastembed-colbert/). Instead of having a single embedding for each document or query, this family of models creates a separate one for each token of text. In the search process, the final score is calculated based on the interaction between the tokens of the query and the document. Contrary to cross-encoders, document embedding might be precomputed and stored in the database, which makes the search process much faster. If you are curious about the details, please check out [the article about ColBERT, written by our friends from Jina AI](https://jina.ai/news/what-is-colbert-and-late-interaction-and-why-they-matter-in-search/). ![Late interaction](/articles_data/hybrid-search/late-interaction.png) Besides multivectors, you can use regular dense and sparse vectors, and experiment with smaller data types to reduce memory use. Named vectors can help you store different dimensionalities of the embeddings, which is useful if you use multiple models to represent your data, or want to utilize the Matryoshka embeddings. ![Multiple vectors per point](/articles_data/hybrid-search/multiple-vectors.png) There is no single way of building a hybrid search. The process of designing it is an exploratory exercise, where you need to test various setups and measure their effectiveness. Building a proper search experience is a complex task, and it's better to keep it data-driven, not just rely on the intuition. ## Fusion vs reranking We can, distinguish two main approaches to building a hybrid search system: fusion and reranking. The former is about combining the results from different search methods, based solely on the scores returned by each method. That usually involves some normalization, as the scores returned by different methods might be in different ranges. After that, there is a formula that takes the relevancy measures and calculates the final score that we use later on to reorder the documents. Qdrant has built-in support for the Reciprocal Rank Fusion method, which is the de facto standard in the field. ![Fusion](/articles_data/hybrid-search/fusion.png) Reranking, on the other hand, is about taking the results from different search methods and reordering them based on some additional processing using the content of the documents, not just the scores. This processing may rely on an additional neural model, such as a cross-encoder which would be inefficient enough to be used on the whole dataset. 
These methods are practically applicable only when used on a smaller subset of candidates returned by the faster search methods. Late interaction models, such as ColBERT, are way more efficient in this case, as they can be used to rerank the candidates without the need to access all the documents in the collection. ![Reranking](/articles_data/hybrid-search/reranking.png) ### Why not a linear combination? It's often proposed to use full-text and vector search scores to form a linear combination formula to rerank the results. So it goes like this: ```final_score = 0.7 * vector_score + 0.3 * full_text_score``` However, we didn't even consider such a setup. Why? Those scores don't make the problem linearly separable. We used the BM25 score along with cosine vector similarity to use both of them as points coordinates in 2-dimensional space. The chart shows how those points are distributed: ![A distribution of both Qdrant and BM25 scores mapped into 2D space.](/articles_data/hybrid-search/linear-combination.png) *A distribution of both Qdrant and BM25 scores mapped into 2D space. It clearly shows relevant and non-relevant objects are not linearly separable in that space, so using a linear combination of both scores won't give us a proper hybrid search.* Both relevant and non-relevant items are mixed. **None of the linear formulas would be able to distinguish between them.** Thus, that's not the way to solve it. ## Building a hybrid search system in Qdrant Ultimately, **any search mechanism might also be a reranking mechanism**. You can prefetch results with sparse vectors and then rerank them with the dense ones, or the other way around. Or, if you have Matryoshka embeddings, you can start with oversampling the candidates with the dense vectors of the lowest dimensionality and then gradually reduce the number of candidates by reranking them with the higher-dimensional embeddings. Nothing stops you from combining both fusion and reranking. Let's go a step further and build a hybrid search mechanism that combines the results from the Matryoshka embeddings, dense vectors, and sparse vectors and then reranks them with the late interaction model. In the meantime, we will introduce additional reranking and fusion steps. ![Complex search pipeline](/articles_data/hybrid-search/complex-search-pipeline.png) Our search pipeline consists of two branches, each of them responsible for retrieving a subset of documents that we eventually want to rerank with the late interaction model. Let's connect to Qdrant first and then build the search pipeline. ```python from qdrant_client import QdrantClient, models client = QdrantClient("http://localhost:6333") ``` All the steps utilizing Matryoshka embeddings might be specified in the Query API as a nested structure: ```python # The first branch of our search pipeline retrieves 25 documents # using the Matryoshka embeddings with multistep retrieval. matryoshka_prefetch = models.Prefetch( prefetch=[ models.Prefetch( prefetch=[ # The first prefetch operation retrieves 100 documents # using the Matryoshka embeddings with the lowest # dimensionality of 64. models.Prefetch( query=[0.456, -0.789, ..., 0.239], using="matryoshka-64dim", limit=100, ), ], # Then, the retrieved documents are re-ranked using the # Matryoshka embeddings with the dimensionality of 128. query=[0.456, -0.789, ..., -0.789], using="matryoshka-128dim", limit=50, ) ], # Finally, the results are re-ranked using the Matryoshka # embeddings with the dimensionality of 256. 
query=[0.456, -0.789, ..., 0.123], using="matryoshka-256dim", limit=25, ) ``` Similarly, we can build the second branch of our search pipeline, which retrieves the documents using the dense and sparse vectors and performs the fusion of them using the Reciprocal Rank Fusion method: ```python # The second branch of our search pipeline also retrieves 25 documents, # but uses the dense and sparse vectors, with their results combined # using the Reciprocal Rank Fusion. sparse_dense_rrf_prefetch = models.Prefetch( prefetch=[ models.Prefetch( prefetch=[ # The first prefetch operation retrieves 100 documents # using dense vectors using integer data type. Retrieval # is faster, but quality is lower. models.Prefetch( query=[7, 63, ..., 92], using="dense-uint8", limit=100, ) ], # Integer-based embeddings are then re-ranked using the # float-based embeddings. Here we just want to retrieve # 25 documents. query=[-1.234, 0.762, ..., 1.532], using="dense", limit=25, ), # Here we just add another 25 documents using the sparse # vectors only. models.Prefetch( query=models.SparseVector( indices=[125, 9325, 58214], values=[-0.164, 0.229, 0.731], ), using="sparse", limit=25, ), ], # RRF is activated below, so there is no need to specify the # query vector here, as fusion is done on the scores of the # retrieved documents. query=models.FusionQuery( fusion=models.Fusion.RRF, ), ) ``` The second branch could have already been called hybrid, as it combines the results from the dense and sparse vectors with fusion. However, nothing stops us from building even more complex search pipelines. Here is how the target call to the Query API would look like in Python: ```python client.query_points( "my-collection", prefetch=[ matryoshka_prefetch, sparse_dense_rrf_prefetch, ], # Finally rerank the results with the late interaction model. It only # considers the documents retrieved by all the prefetch operations above. # Return 10 final results. query=[ [1.928, -0.654, ..., 0.213], [-1.197, 0.583, ..., 1.901], ..., [0.112, -1.473, ..., 1.786], ], using="late-interaction", with_payload=False, limit=10, ) ``` The options are endless, the new Query API gives you the flexibility to experiment with different setups. **You rarely need to build such a complex search pipeline**, but it's good to know that you can do that if needed. ## Some anecdotal observations Neither of the algorithms performs best in all cases. In some cases, keyword-based search will be the winner and vice-versa. 
The following table shows some interesting examples we could find in the [WANDS](https://github.com/wayfair/WANDS) dataset during experimentation: <table> <thead> <th>Query</th> <th>BM25 Search</th> <th>Vector Search</th> </thead> <tbody> <tr> <th>cybersport desk</th> <td>desk ❌</td> <td>gaming desk ✅</td> </tr> <tr> <th>plates for icecream</th> <td>"eat" plates on wood wall décor ❌</td> <td>alicyn 8.5 '' melamine dessert plate ✅</td> </tr> <tr> <th>kitchen table with a thick board</th> <td>craft kitchen acacia wood cutting board ❌</td> <td>industrial solid wood dining table ✅</td> </tr> <tr> <th>wooden bedside table</th> <td>30 '' bedside table lamp ❌</td> <td>portable bedside end table ✅</td> </tr> </tbody> </table> Also examples where keyword-based search did better: <table> <thead> <th>Query</th> <th>BM25 Search</th> <th>Vector Search</th> </thead> <tbody> <tr> <th>computer chair</th> <td>vibrant computer task chair ✅</td> <td>office chair ❌</td> </tr> <tr> <th>64.2 inch console table</th> <td>cervantez 64.2 '' console table ✅</td> <td>69.5 '' console table ❌</td> </tr> </tbody> </table> ## Try the New Query API in Qdrant 1.10 The new Query API introduced in Qdrant 1.10 is a game-changer for building hybrid search systems. You don't need any additional services to combine the results from different search methods, and you can even create more complex pipelines and serve them directly from Qdrant. Our webinar on *Building the Ultimate Hybrid Search* takes you through the process of building a hybrid search system with Qdrant Query API. If you missed it, you can [watch the recording](https://www.youtube.com/watch?v=LAZOxqzceEU), or [check the notebooks](https://github.com/qdrant/workshop-ultimate-hybrid-search). <div style="max-width: 640px; margin: 0 auto; padding-bottom: 1em"> <div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;"> <iframe width="100%" height="100%" src="https://www.youtube.com/embed/LAZOxqzceEU" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;"></iframe> </div> </div> If you have any questions or need help with building your hybrid search system, don't hesitate to reach out to us on [Discord](https://qdrant.to/discord).
articles/hybrid-search.md
--- title: "Neural Search 101: A Complete Guide and Step-by-Step Tutorial" short_description: Step-by-step guide on how to build a neural search service. description: Discover the power of neural search. Learn what neural search is and follow our tutorial to build a neural search service using BERT, Qdrant, and FastAPI. # external_link: https://blog.qdrant.tech/neural-search-tutorial-3f034ab13adc social_preview_image: /articles_data/neural-search-tutorial/social_preview.jpg preview_dir: /articles_data/neural-search-tutorial/preview small_preview_image: /articles_data/neural-search-tutorial/tutorial.svg weight: 50 author: Andrey Vasnetsov author_link: https://blog.vasnetsov.com/ date: 2021-06-10T10:18:00.000Z # aliases: [ /articles/neural-search-tutorial/ ] --- # Neural Search 101: A Comprehensive Guide and Step-by-Step Tutorial Information retrieval technology is one of the main technologies that enabled the modern Internet to exist. These days, search technology is the heart of a variety of applications. From web-pages search to product recommendations. For many years, this technology didn't get much change until neural networks came into play. In this guide we are going to find answers to these questions: * What is the difference between regular and neural search? * What neural networks could be used for search? * In what tasks is neural network search useful? * How to build and deploy own neural search service step-by-step? ## What is neural search? A regular full-text search, such as Google's, consists of searching for keywords inside a document. For this reason, the algorithm can not take into account the real meaning of the query and documents. Many documents that might be of interest to the user are not found because they use different wording. Neural search tries to solve exactly this problem - it attempts to enable searches not by keywords but by meaning. To achieve this, the search works in 2 steps. In the first step, a specially trained neural network encoder converts the query and the searched objects into a vector representation called embeddings. The encoder must be trained so that similar objects, such as texts with the same meaning or alike pictures get a close vector representation. ![Encoders and embedding space](https://gist.githubusercontent.com/generall/c229cc94be8c15095286b0c55a3f19d7/raw/e52e3f1a320cd985ebc96f48955d7f355de8876c/encoders.png) Having this vector representation, it is easy to understand what the second step should be. To find documents similar to the query you now just need to find the nearest vectors. The most convenient way to determine the distance between two vectors is to calculate the cosine distance. The usual Euclidean distance can also be used, but it is not so efficient due to [the curse of dimensionality](https://en.wikipedia.org/wiki/Curse_of_dimensionality). ## Which model could be used? It is ideal to use a model specially trained to determine the closeness of meanings. For example, models trained on Semantic Textual Similarity (STS) datasets. Current state-of-the-art models can be found on this [leaderboard](https://paperswithcode.com/sota/semantic-textual-similarity-on-sts-benchmark?p=roberta-a-robustly-optimized-bert-pretraining). However, not only specially trained models can be used. If the model is trained on a large enough dataset, its internal features can work as embeddings too. So, for instance, you can take any pre-trained on ImageNet model and cut off the last layer from it. 
In the penultimate layer of the neural network, as a rule, the highest-level features are formed, which, however, do not correspond to specific classes. The output of this layer can be used as an embedding.

## What tasks is neural search good for?

Neural search has the greatest advantage in areas where the query cannot be formulated precisely. Querying a table in an SQL database is not the best place for neural search. On the contrary, if the query itself is fuzzy, or it cannot be formulated as a set of conditions, neural search can help you.

If the search query is a picture, a sound file, or a long text, neural network search is almost the only option.

If you want to build a recommendation system, the neural approach can also be useful. The user's actions can be encoded in vector space in the same way as a picture or text. And having those vectors, it is possible to find semantically similar users and determine the next probable user actions.

## Step-by-step neural search tutorial using Qdrant

With all that said, let's build our own neural search. As an example, I decided to make a search for startups by their description. In this demo, we will see the cases when text search works better and the cases when neural network search works better.

I will use data from [startups-list.com](https://www.startups-list.com/). Each record contains the name, a paragraph describing the company, the location and a picture. Raw parsed data can be found at [this link](https://storage.googleapis.com/generall-shared-data/startups_demo.json).

### Step 1: Prepare data for neural search

To be able to search for our descriptions in vector space, we must get vectors first. We need to encode the descriptions into a vector representation. As the descriptions are textual data, we can use a pre-trained language model. As mentioned above, for the task of text search there is a whole set of pre-trained models specifically tuned for semantic similarity.

One of the easiest libraries to work with pre-trained language models, in my opinion, is [sentence-transformers](https://github.com/UKPLab/sentence-transformers) by UKPLab. It provides a way to conveniently download and use many pre-trained models, mostly based on the transformer architecture. Transformers is not the only architecture suitable for neural search, but for our task it is quite enough.

We will use a model called `all-MiniLM-L6-v2`. It is an all-round model tuned for many use cases, trained on a large and diverse dataset of over 1 billion training pairs, and optimized for low memory consumption and fast inference.

The complete code for data preparation with detailed comments can be found and run in this [Colab Notebook](https://colab.research.google.com/drive/1kPktoudAP8Tu8n8l-iVMOQhVmHkWV_L9?usp=sharing).

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1kPktoudAP8Tu8n8l-iVMOQhVmHkWV_L9?usp=sharing)

### Step 2: Incorporate a Vector search engine

Now that we have a vector representation for all our records, we need to store them somewhere. In addition to storing, we may also need to add or delete vectors and save additional information with each vector. And most importantly, we need a way to search for the nearest vectors.

The vector search engine can take care of all these tasks. It provides a convenient API for searching and managing vectors. In our tutorial, we will use the [Qdrant vector search engine](https://github.com/qdrant/qdrant).
It not only supports all necessary operations with vectors but also allows you to store additional payload along with vectors and use it to perform filtering of the search result. Qdrant has a client for Python and also defines the API schema if you need to use it from other languages. The easiest way to use Qdrant is to run a pre-built image. So make sure you have Docker installed on your system. To start Qdrant, use the instructions on its [homepage](https://github.com/qdrant/qdrant). Download image from [DockerHub](https://hub.docker.com/r/qdrant/qdrant): ```bash docker pull qdrant/qdrant ``` And run the service inside the docker: ```bash docker run -p 6333:6333 \ -v $(pwd)/qdrant_storage:/qdrant/storage \ qdrant/qdrant ``` You should see output like this ```text ... [2021-02-05T00:08:51Z INFO actix_server::builder] Starting 12 workers [2021-02-05T00:08:51Z INFO actix_server::builder] Starting "actix-web-service-0.0.0.0:6333" service on 0.0.0.0:6333 ``` This means that the service is successfully launched and listening port 6333. To make sure you can test [http://localhost:6333/](http://localhost:6333/) in your browser and get qdrant version info. All uploaded to Qdrant data is saved into the `./qdrant_storage` directory and will be persisted even if you recreate the container. ### Step 3: Upload data to Qdrant Now once we have the vectors prepared and the search engine running, we can start uploading the data. To interact with Qdrant from python, I recommend using an out-of-the-box client library. To install it, use the following command ```bash pip install qdrant-client ``` At this point, we should have startup records in file `startups.json`, encoded vectors in file `startup_vectors.npy`, and running Qdrant on a local machine. Let's write a script to upload all startup data and vectors into the search engine. First, let's create a client object for Qdrant. ```python # Import client library from qdrant_client import QdrantClient from qdrant_client.models import VectorParams, Distance qdrant_client = QdrantClient(host='localhost', port=6333) ``` Qdrant allows you to combine vectors of the same purpose into collections. Many independent vector collections can exist on one service at the same time. Let's create a new collection for our startup vectors. ```python if not qdrant_client.collection_exists('startups'): qdrant_client.create_collection( collection_name='startups', vectors_config=VectorParams(size=384, distance=Distance.COSINE), ) ``` The `vector_size` parameter is very important. It tells the service the size of the vectors in that collection. All vectors in a collection must have the same size, otherwise, it is impossible to calculate the distance between them. `384` is the output dimensionality of the encoder we are using. The `distance` parameter allows specifying the function used to measure the distance between two points. The Qdrant client library defines a special function that allows you to load datasets into the service. However, since there may be too much data to fit a single computer memory, the function takes an iterator over the data as input. Let's create an iterator over the startup data and vectors. ```python import numpy as np import json fd = open('./startups.json') # payload is now an iterator over startup data payload = map(json.loads, fd) # Here we load all vectors into memory, numpy array works as iterable for itself. 
# Other option would be to use Mmap, if we don't want to load all data into RAM vectors = np.load('./startup_vectors.npy') ``` And the final step - data uploading ```python qdrant_client.upload_collection( collection_name='startups', vectors=vectors, payload=payload, ids=None, # Vector ids will be assigned automatically batch_size=256 # How many vectors will be uploaded in a single request? ) ``` Now we have vectors uploaded to the vector search engine. In the next step, we will learn how to actually search for the closest vectors. The full code for this step can be found [here](https://github.com/qdrant/qdrant_demo/blob/master/qdrant_demo/init_collection_startups.py). ### Step 4: Make a search API Now that all the preparations are complete, let's start building a neural search class. First, install all the requirements: ```bash pip install sentence-transformers numpy ``` In order to process incoming requests neural search will need 2 things. A model to convert the query into a vector and Qdrant client, to perform a search queries. ```python # File: neural_searcher.py from qdrant_client import QdrantClient from sentence_transformers import SentenceTransformer class NeuralSearcher: def __init__(self, collection_name): self.collection_name = collection_name # Initialize encoder model self.model = SentenceTransformer('all-MiniLM-L6-v2', device='cpu') # initialize Qdrant client self.qdrant_client = QdrantClient(host='localhost', port=6333) ``` The search function looks as simple as possible: ```python def search(self, text: str): # Convert text query into vector vector = self.model.encode(text).tolist() # Use `vector` for search for closest vectors in the collection search_result = self.qdrant_client.search( collection_name=self.collection_name, query_vector=vector, query_filter=None, # We don't want any filters for now top=5 # 5 the most closest results is enough ) # `search_result` contains found vector ids with similarity scores along with the stored payload # In this function we are interested in payload only payloads = [hit.payload for hit in search_result] return payloads ``` With Qdrant it is also feasible to add some conditions to the search. For example, if we wanted to search for startups in a certain city, the search query could look like this: ```python from qdrant_client.models import Filter ... city_of_interest = "Berlin" # Define a filter for cities city_filter = Filter(**{ "must": [{ "key": "city", # We store city information in a field of the same name "match": { # This condition checks if payload field have requested value "keyword": city_of_interest } }] }) search_result = self.qdrant_client.search( collection_name=self.collection_name, query_vector=vector, query_filter=city_filter, top=5 ) ... ``` We now have a class for making neural search queries. Let's wrap it up into a service. ### Step 5: Deploy as a service To build the service we will use the FastAPI framework. It is super easy to use and requires minimal code writing. 
To install it, use the command ```bash pip install fastapi uvicorn ``` Our service will have only one API endpoint and will look like this: ```python # File: service.py from fastapi import FastAPI # That is the file where NeuralSearcher is stored from neural_searcher import NeuralSearcher app = FastAPI() # Create an instance of the neural searcher neural_searcher = NeuralSearcher(collection_name='startups') @app.get("/api/search") def search_startup(q: str): return { "result": neural_searcher.search(text=q) } if __name__ == "__main__": import uvicorn uvicorn.run(app, host="0.0.0.0", port=8000) ``` Now, if you run the service with ```bash python service.py ``` and open your browser at [http://localhost:8000/docs](http://localhost:8000/docs) , you should be able to see a debug interface for your service. ![FastAPI Swagger interface](https://gist.githubusercontent.com/generall/c229cc94be8c15095286b0c55a3f19d7/raw/d866e37a60036ebe65508bd736faff817a5d27e9/fastapi_neural_search.png) Feel free to play around with it, make queries and check out the results. This concludes the tutorial. ### Experience Neural Search With Qdrant’s Free Demo Excited to see neural search in action? Take the next step and book a [free demo](https://qdrant.to/semantic-search-demo) with Qdrant! Experience firsthand how this cutting-edge technology can transform your search capabilities. Our demo will help you grow intuition for cases when the neural search is useful. The demo contains a switch that selects between neural and full-text searches. You can turn neural search on and off to compare the result with regular full-text search. Try to use a startup description to find similar ones. Join our [Discord community](https://qdrant.to/discord), where we talk about vector search and similarity learning, and publish other examples of neural networks and neural search applications.
articles/neural-search-tutorial.md
--- title: Serverless Semantic Search short_description: "Need to setup a server to offer semantic search? Think again!" description: "Create a serverless semantic search engine using nothing but Qdrant and free cloud services." social_preview_image: /articles_data/serverless/social_preview.png small_preview_image: /articles_data/serverless/icon.svg preview_dir: /articles_data/serverless/preview weight: 1 author: Andre Bogus author_link: https://llogiq.github.io date: 2023-07-12T10:00:00+01:00 draft: false keywords: rust, serverless, lambda, semantic, search --- Do you want to insert a semantic search function into your website or online app? Now you can do so - without spending any money! In this example, you will learn how to create a free prototype search engine for your own non-commercial purposes. You may find all of the assets for this tutorial on [GitHub](https://github.com/qdrant/examples/tree/master/lambda-search). ## Ingredients * A [Rust](https://rust-lang.org) toolchain * [cargo lambda](https://cargo-lambda.info) (install via package manager, [download](https://github.com/cargo-lambda/cargo-lambda/releases) binary or `cargo install cargo-lambda`) * The [AWS CLI](https://aws.amazon.com/cli) * Qdrant instance ([free tier](https://cloud.qdrant.io) available) * An embedding provider service of your choice (see our [Embeddings docs](/documentation/embeddings/). You may be able to get credits from [AI Grant](https://aigrant.org), also Cohere has a [rate-limited non-commercial free tier](https://cohere.com/pricing)) * AWS Lambda account (12-month free tier available) ## What you're going to build You'll combine the embedding provider and the Qdrant instance to a neat semantic search, calling both services from a small Lambda function. ![lambda integration diagram](/articles_data/serverless/lambda_integration.png) Now lets look at how to work with each ingredient before connecting them. ## Rust and cargo-lambda You want your function to be quick, lean and safe, so using Rust is a no-brainer. To compile Rust code for use within Lambda functions, the `cargo-lambda` subcommand has been built. `cargo-lambda` can put your Rust code in a zip file that AWS Lambda can then deploy on a no-frills `provided.al2` runtime. To interface with AWS Lambda, you will need a Rust project with the following dependencies in your `Cargo.toml`: ```toml [dependencies] tokio = { version = "1", features = ["macros"] } lambda_http = { version = "0.8", default-features = false, features = ["apigw_http"] } lambda_runtime = "0.8" ``` This gives you an interface consisting of an entry point to start the Lambda runtime and a way to register your handler for HTTP calls. Put the following snippet into `src/helloworld.rs`: ```rust use lambda_http::{run, service_fn, Body, Error, Request, RequestExt, Response}; /// This is your callback function for responding to requests at your URL async fn function_handler(_req: Request) -> Result<Response<Body>, Error> { Response::from_text("Hello, Lambda!") } #[tokio::main] async fn main() { run(service_fn(function_handler)).await } ``` You can also use a closure to bind other arguments to your function handler (the `service_fn` call then becomes `service_fn(|req| function_handler(req, ...))`). Also if you want to extract parameters from the request, you can do so using the [Request](https://docs.rs/lambda_http/latest/lambda_http/type.Request.html) methods (e.g. `query_string_parameters` or `query_string_parameters_ref`). 
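As a side note, the search handler you will build later needs the user's query from the URL. A minimal sketch of reading a query-string parameter with the `RequestExt` methods mentioned above might look like the following; the `q` parameter name and the response text are only illustrative, not part of the final code:

```rust
use lambda_http::{Body, Error, Request, RequestExt, Response};

/// Sketch: read an optional `q` query-string parameter from the request.
async fn function_handler(req: Request) -> Result<Response<Body>, Error> {
    let query = req
        .query_string_parameters_ref()
        .and_then(|params| params.first("q"))
        .unwrap_or("")
        .to_owned();
    Response::from_text(format!("You asked for: {query}"))
}
```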
Add the following to your `Cargo.toml` to define the binary:

```toml
[[bin]]
name = "helloworld"
path = "src/helloworld.rs"
```

On the AWS side, you need to set up a Lambda and an IAM role to use with your function.

![create lambda web page](/articles_data/serverless/create_lambda.png)

Choose your function name and select "Provide your own bootstrap on Amazon Linux 2". As the architecture, use `arm64`. You will also activate a function URL. Here it is up to you if you want to protect it via IAM or leave it open, but be aware that open endpoints can be accessed by anyone, potentially costing money if there is too much traffic.

By default, this will also create a basic role. To look up the role, you can go into the Function overview:

![function overview](/articles_data/serverless/lambda_overview.png)

Click on the "Info" link near the "▸ Function overview" heading, and select the "Permissions" tab on the left. You will find the "Role name" directly under *Execution role*. Note it down for later.

![function overview](/articles_data/serverless/lambda_role.png)

To test that your "Hello, Lambda" service works, you can compile and upload the function:

```bash
$ export LAMBDA_FUNCTION_NAME=hello
$ export LAMBDA_ROLE=<role name from lambda web ui>
$ export LAMBDA_REGION=us-east-1
$ cargo lambda build --release --arm --bin helloworld --output-format zip
  Downloaded libc v0.2.137
# [..] output omitted for brevity
  Finished release [optimized] target(s) in 1m 27s
$ # Delete the old empty definition
$ aws lambda delete-function-url-config --region $LAMBDA_REGION --function-name $LAMBDA_FUNCTION_NAME
$ aws lambda delete-function --region $LAMBDA_REGION --function-name $LAMBDA_FUNCTION_NAME
$ # Upload the function
$ aws lambda create-function --function-name $LAMBDA_FUNCTION_NAME \
    --handler bootstrap \
    --architectures arm64 \
    --zip-file fileb://./target/lambda/helloworld/bootstrap.zip \
    --runtime provided.al2 \
    --region $LAMBDA_REGION \
    --role $LAMBDA_ROLE \
    --tracing-config Mode=Active
$ # Add the function URL
$ aws lambda add-permission \
    --function-name $LAMBDA_FUNCTION_NAME \
    --action lambda:InvokeFunctionUrl \
    --principal "*" \
    --function-url-auth-type "NONE" \
    --region $LAMBDA_REGION \
    --statement-id url
$ # Here for simplicity unauthenticated URL access. Beware!
$ aws lambda create-function-url-config \
    --function-name $LAMBDA_FUNCTION_NAME \
    --region $LAMBDA_REGION \
    --cors "AllowOrigins=*,AllowMethods=*,AllowHeaders=*" \
    --auth-type NONE
```

Now you can go to your *Function Overview* and click on the Function URL. You should see something like this:

```text
Hello, Lambda!
```

Congratulations! You have set up a Lambda function in Rust. On to the next ingredient:

## Embedding

Most providers supply a simple HTTPS GET or POST interface you can use with an API key, which you have to supply in an authentication header. If you are using this for non-commercial purposes, the rate-limited trial key from Cohere is just a few clicks away. Go to [their welcome page](https://dashboard.cohere.ai/welcome/register), register, and you'll be able to get to the dashboard, which has an "API keys" menu entry that will bring you to the following page:

![cohere dashboard](/articles_data/serverless/cohere-dashboard.png)

From there you can click on the ⎘ symbol next to your API key to copy it to the clipboard. *Don't put your API key in the code!* Instead, read it from an environment variable you can set in the Lambda environment. This avoids accidentally putting your key into a public repo. Now all you need to get embeddings is a bit of code.
First you need to extend your dependencies with `reqwest` and also add `anyhow` for easier error handling: ```toml anyhow = "1.0" reqwest = { version = "0.11.18", default-features = false, features = ["json", "rustls-tls"] } serde = "1.0" ``` Now given the API key from above, you can make a call to get the embedding vectors: ```rust use anyhow::Result; use serde::Deserialize; use reqwest::Client; #[derive(Deserialize)] struct CohereResponse { outputs: Vec<Vec<f32>> } pub async fn embed(client: &Client, text: &str, api_key: &str) -> Result<Vec<Vec<f32>>> { let CohereResponse { outputs } = client .post("https://api.cohere.ai/embed") .header("Authorization", &format!("Bearer {api_key}")) .header("Content-Type", "application/json") .header("Cohere-Version", "2021-11-08") .body(format!("{{\"text\":[\"{text}\"],\"model\":\"small\"}}")) .send() .await? .json() .await?; Ok(outputs) } ``` Note that this may return multiple vectors if the text overflows the input dimensions. Cohere's `small` model has 1024 output dimensions. Other providers have similar interfaces. Consult our [Embeddings docs](/documentation/embeddings/) for further information. See how little code it took to get the embedding? While you're at it, it's a good idea to write a small test to check if embedding works and the vectors are of the expected size: ```rust #[tokio::test] async fn check_embedding() { // ignore this test if API_KEY isn't set let Ok(api_key) = &std::env::var("API_KEY") else { return; } let embedding = crate::embed("What is semantic search?", api_key).unwrap()[0]; // Cohere's `small` model has 1024 output dimensions. assert_eq!(1024, embedding.len()); } ``` Run this while setting the `API_KEY` environment variable to check if the embedding works. ## Qdrant search Now that you have embeddings, it's time to put them into your Qdrant. You could of course use `curl` or `python` to set up your collection and upload the points, but as you already have Rust including some code to obtain the embeddings, you can stay in Rust, adding `qdrant-client` to the mix. ```rust use anyhow::Result; use qdrant_client::prelude::*; use qdrant_client::qdrant::{VectorsConfig, VectorParams}; use qdrant_client::qdrant::vectors_config::Config; use std::collections::HashMap; fn setup<'i>( embed_client: &reqwest::Client, embed_api_key: &str, qdrant_url: &str, api_key: Option<&str>, collection_name: &str, data: impl Iterator<Item = (&'i str, HashMap<String, Value>)>, ) -> Result<()> { let mut config = QdrantClientConfig::from_url(qdrant_url); config.api_key = api_key; let client = QdrantClient::new(Some(config))?; // create the collections if !client.has_collection(collection_name).await? { client .create_collection(&CreateCollection { collection_name: collection_name.into(), vectors_config: Some(VectorsConfig { config: Some(Config::Params(VectorParams { size: 1024, // output dimensions from above distance: Distance::Cosine as i32, ..Default::default() })), }), ..Default::default() }) .await?; } let mut id_counter = 0_u64; let points = data.map(|(text, payload)| { let id = std::mem::replace(&mut id_counter, *id_counter + 1); let vectors = Some(embed(embed_client, text, embed_api_key).unwrap()); PointStruct { id, vectors, payload } }).collect(); client.upsert_points(collection_name, points, None).await?; Ok(()) } ``` Depending on whether you want to efficiently filter the data, you can also add some indexes. 
I'm leaving this out for brevity, but you can look at the [example code](https://github.com/qdrant/examples/tree/master/lambda-search) containing this operation. Also this does not implement chunking (splitting the data to upsert in multiple requests, which avoids timeout errors). Add a suitable `main` method and you can run this code to insert the points (or just use the binary from the example). Be sure to include the port in the `qdrant_url`. Now that you have the points inserted, you can search them by embedding: ```rust use anyhow::Result; use qdrant_client::prelude::*; pub async fn search( text: &str, collection_name: String, client: &Client, api_key: &str, qdrant: &QdrantClient, ) -> Result<Vec<ScoredPoint>> { Ok(qdrant.search_points(&SearchPoints { collection_name, limit: 5, // use what fits your use case here with_payload: Some(true.into()), vector: embed(client, text, api_key)?, ..Default::default() }).await?.result) } ``` You can also filter by adding a `filter: ...` field to the `SearchPoints`, and you will likely want to process the result further, but the example code already does that, so feel free to start from there in case you need this functionality. ## Putting it all together Now that you have all the parts, it's time to join them up. Now copying and wiring up the snippets above is left as an exercise to the reader. Impatient minds can peruse the [example repo](https://github.com/qdrant/examples/tree/master/lambda-search) instead. You'll want to extend the `main` method a bit to connect with the Client once at the start, also get API keys from the environment so you don't need to compile them into the code. To do that, you can get them with `std::env::var(_)` from the rust code and set the environment from the AWS console. ```bash $ export QDRANT_URI=<qour Qdrant instance URI including port> $ export QDRANT_API_KEY=<your Qdrant API key> $ export COHERE_API_KEY=<your Cohere API key> $ export COLLECTION_NAME=site-cohere $ aws lambda update-function-configuration \ --function-name $LAMBDA_FUNCTION_NAME \ --environment "Variables={QDRANT_URI=$QDRANT_URI,\ QDRANT_API_KEY=$QDRANT_API_KEY,COHERE_API_KEY=${COHERE_API_KEY},\ COLLECTION_NAME=${COLLECTION_NAME}"` ``` In any event, you will arrive at one command line program to insert your data and one Lambda function. The former can just be `cargo run` to set up the collection. For the latter, you can again call `cargo lambda` and the AWS console: ```bash $ export LAMBDA_FUNCTION_NAME=search $ export LAMBDA_REGION=us-east-1 $ cargo lambda build --release --arm --output-format zip Downloaded libc v0.2.137 # [..] output omitted for brevity Finished release [optimized] target(s) in 1m 27s $ # Update the function $ aws lambda update-function-code --function-name $LAMBDA_FUNCTION_NAME \ --zip-file fileb://./target/lambda/page-search/bootstrap.zip \ --region $LAMBDA_REGION ``` ## Discussion Lambda works by spinning up your function once the URL is called, so they don't need to keep the compute on hand unless it is actually used. This means that the first call will be burdened by some 1-2 seconds of latency for loading the function, later calls will resolve faster. Of course, there is also the latency for calling the embeddings provider and Qdrant. On the other hand, the free tier doesn't cost a thing, so you certainly get what you pay for. And for many use cases, a result within one or two seconds is acceptable. Rust minimizes the overhead for the function, both in terms of file size and runtime. 
Using an embedding service means you don't need to care about the details. Knowing the URL, API key and embedding size is sufficient. Finally, with free tiers for both Lambda and Qdrant as well as free credits for the embedding provider, the only cost is your time to set everything up. Who could argue with free?
articles/serverless.md
--- title: Filtrable HNSW short_description: How to make ANN search with custom filtering? description: How to make ANN search with custom filtering? Search in selected subsets without loosing the results. # external_link: https://blog.vasnetsov.com/posts/categorical-hnsw/ social_preview_image: /articles_data/filtrable-hnsw/social_preview.jpg preview_dir: /articles_data/filtrable-hnsw/preview small_preview_image: /articles_data/filtrable-hnsw/global-network.svg weight: 60 date: 2019-11-24T22:44:08+03:00 author: Andrei Vasnetsov author_link: https://blog.vasnetsov.com/ # aliases: [ /articles/filtrable-hnsw/ ] --- If you need to find some similar objects in vector space, provided e.g. by embeddings or matching NN, you can choose among a variety of libraries: Annoy, FAISS or NMSLib. All of them will give you a fast approximate neighbors search within almost any space. But what if you need to introduce some constraints in your search? For example, you want search only for products in some category or select the most similar customer of a particular brand. I did not find any simple solutions for this. There are several discussions like [this](https://github.com/spotify/annoy/issues/263), but they only suggest to iterate over top search results and apply conditions consequently after the search. Let's see if we could somehow modify any of ANN algorithms to be able to apply constrains during the search itself. Annoy builds tree index over random projections. Tree index implies that we will meet same problem that appears in relational databases: if field indexes were built independently, then it is possible to use only one of them at a time. Since nobody solved this problem before, it seems that there is no easy approach. There is another algorithm which shows top results on the [benchmark](https://github.com/erikbern/ann-benchmarks). It is called HNSW which stands for Hierarchical Navigable Small World. The [original paper](https://arxiv.org/abs/1603.09320) is well written and very easy to read, so I will only give the main idea here. We need to build a navigation graph among all indexed points so that the greedy search on this graph will lead us to the nearest point. This graph is constructed by sequentially adding points that are connected by a fixed number of edges to previously added points. In the resulting graph, the number of edges at each point does not exceed a given threshold $m$ and always contains the nearest considered points. ![NSW](/articles_data/filtrable-hnsw/NSW.png) ### How can we modify it? What if we simply apply the filter criteria to the nodes of this graph and use in the greedy search only those that meet these criteria? It turns out that even with this naive modification algorithm can cover some use cases. One such case is if your criteria do not correlate with vector semantics. For example, you use a vector search for clothing names and want to filter out some sizes. In this case, the nodes will be uniformly filtered out from the entire cluster structure. Therefore, the theoretical conclusions obtained in the [Percolation theory](https://en.wikipedia.org/wiki/Percolation_theory) become applicable: > Percolation is related to the robustness of the graph (called also network). Given a random graph of $n$ nodes and an average degree $\langle k\rangle$ . Next we remove randomly a fraction $1-p$ of nodes and leave only a fraction $p$. 
There exists a critical percolation threshold $ pc = \frac{1}{\langle k\rangle} $ below which the network becomes fragmented while above $pc$ a giant connected component exists. This statement also confirmed by experiments: {{< figure src=/articles_data/filtrable-hnsw/exp_connectivity_glove_m0.png caption="Dependency of connectivity to the number of edges" >}} {{< figure src=/articles_data/filtrable-hnsw/exp_connectivity_glove_num_elements.png caption="Dependency of connectivity to the number of point (no dependency)." >}} There is a clear threshold when the search begins to fail. This threshold is due to the decomposition of the graph into small connected components. The graphs also show that this threshold can be shifted by increasing the $m$ parameter of the algorithm, which is responsible for the degree of nodes. Let's consider some other filtering conditions we might want to apply in the search: * Categorical filtering * Select only points in a specific category * Select points which belong to a specific subset of categories * Select points with a specific set of labels * Numerical range * Selection within some geographical region In the first case, we can guarantee that the HNSW graph will be connected simply by creating additional edges inside each category separately, using the same graph construction algorithm, and then combining them into the original graph. In this case, the total number of edges will increase by no more than 2 times, regardless of the number of categories. Second case is a little harder. A connection may be lost between two categories if they lie in different clusters. ![category clusters](/articles_data/filtrable-hnsw/hnsw_graph_category.png) The idea here is to build same navigation graph but not between nodes, but between categories. Distance between two categories might be defined as distance between category entry points (or, for precision, as the average distance between a random sample). Now we can estimate expected graph connectivity by number of excluded categories, not nodes. It still does not guarantee that two random categories will be connected, but allows us to switch to multiple searches in each category if connectivity threshold passed. In some cases, multiple searches can be even faster if you take advantage of parallel processing. {{< figure src=/articles_data/filtrable-hnsw/exp_random_groups.png caption="Dependency of connectivity to the random categories included in search" >}} Third case might be resolved in a same way it is resolved in classical databases. Depending on labeled subsets size ration we can go for one of the following scenarios: * if at least one subset is small: perform search over the label containing smallest subset and then filter points consequently. * if large subsets give large intersection: perform regular search with constraints expecting that intersection size fits connectivity threshold. * if large subsets give small intersection: perform linear search over intersection expecting that it is small enough to fit a time frame. Numerical range case can be reduces to the previous one if we split numerical range into a buckets containing equal amount of points. Next we also connect neighboring buckets to achieve graph connectivity. We still need to filter some results which presence in border buckets but do not fulfill actual constraints, but their amount might be regulated by the size of buckets. Geographical case is a lot like a numerical one. 
The geographical case is a lot like the numerical one. A typical geographical search involves a [geohash](https://en.wikipedia.org/wiki/Geohash), which maps any geo-point to a fixed-length identifier.

![Geohash example](/articles_data/filtrable-hnsw/geohash.png)

We can use these identifiers as categories and additionally create connections between neighboring geohashes. This ensures that any selected geographical region will also contain a connected HNSW subgraph.

## Conclusion

It is possible to enhance the HNSW algorithm so that it supports filtering points during the first search phase. Filtering can be carried out based on category membership, which in turn generalizes to such popular cases as numerical ranges and geo search.

The experiments were carried out with a modified [Python implementation](https://github.com/generall/hnsw-python) of the algorithm, but real production systems require a much faster implementation, such as [NMSLib](https://github.com/nmslib/nmslib).
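In Qdrant, this filtering capability is exposed through the `query_filter` parameter of a search request, so the condition is applied during graph traversal rather than as a post-processing step. A minimal sketch with the Python client; the collection name, payload field and query vector are purely illustrative:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(host="localhost", port=6333)

hits = client.search(
    collection_name="products",             # illustrative collection
    query_vector=[0.05, 0.61, 0.76, 0.74],  # your embedding goes here
    query_filter=models.Filter(
        must=[
            models.FieldCondition(
                key="category",             # illustrative payload field
                match=models.MatchValue(value="shoes"),
            )
        ]
    ),
    limit=5,
)
```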
articles/filtrable-hnsw.md
--- title: Food Discovery Demo short_description: Feeling hungry? Find the perfect meal with Qdrant's multimodal semantic search. description: Feeling hungry? Find the perfect meal with Qdrant's multimodal semantic search. preview_dir: /articles_data/food-discovery-demo/preview social_preview_image: /articles_data/food-discovery-demo/preview/social_preview.png small_preview_image: /articles_data/food-discovery-demo/icon.svg weight: -30 author: Kacper Łukawski author_link: https://medium.com/@lukawskikacper date: 2023-09-05T11:32:00.000Z --- Not every search journey begins with a specific destination in mind. Sometimes, you just want to explore and see what’s out there and what you might like. This is especially true when it comes to food. You might be craving something sweet, but you don’t know what. You might be also looking for a new dish to try, and you just want to see the options available. In these cases, it's impossible to express your needs in a textual query, as the thing you are looking for is not yet defined. Qdrant's semantic search for images is useful when you have a hard time expressing your tastes in words. ## General architecture We are happy to announce a refreshed version of our [Food Discovery Demo](https://food-discovery.qdrant.tech/). This time available as an open source project, so you can easily deploy it on your own and play with it. If you prefer to dive into the source code directly, then feel free to check out the [GitHub repository ](https://github.com/qdrant/demo-food-discovery/). Otherwise, read on to learn more about the demo and how it works! In general, our application consists of three parts: a [FastAPI](https://fastapi.tiangolo.com/) backend, a [React](https://react.dev/) frontend, and a [Qdrant](/) instance. The architecture diagram below shows how these components interact with each other: ![Archtecture diagram](/articles_data/food-discovery-demo/architecture-diagram.png) ## Why did we use a CLIP model? CLIP is a neural network that can be used to encode both images and texts into vectors. And more importantly, both images and texts are vectorized into the same latent space, so we can compare them directly. This lets you perform semantic search on images using text queries and the other way around. For example, if you search for “flat bread with toppings”, you will get images of pizza. Or if you search for “pizza”, you will get images of some flat bread with toppings, even if they were not labeled as “pizza”. This is because CLIP embeddings capture the semantics of the images and texts and can find the similarities between them no matter the wording. ![CLIP model](/articles_data/food-discovery-demo/clip-model.png) CLIP is available in many different ways. We used the pretrained `clip-ViT-B-32` model available in the [Sentence-Transformers](https://www.sbert.net/examples/applications/image-search/README.html) library, as this is the easiest way to get started. ## The dataset The demo is based on the [Wolt](https://wolt.com/) dataset. It contains over 2M images of dishes from different restaurants along with some additional metadata. 
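Each of these images was vectorized with the CLIP model described above before being stored in Qdrant. For reference, the encoding step with Sentence-Transformers takes only a few lines; the file name and query text below are illustrative:

```python
from PIL import Image
from sentence_transformers import SentenceTransformer

# clip-ViT-B-32 maps both images and texts into the same 512-dimensional space
model = SentenceTransformer("clip-ViT-B-32")

image_embedding = model.encode(Image.open("l_amatriciana_pizza.jpeg"))  # shape: (512,)
text_embedding = model.encode("flat bread with toppings")               # shape: (512,)
```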
This is how a payload for a single dish looks like: ```json { "cafe": { "address": "VGX7+6R2 Vecchia Napoli, Valletta", "categories": ["italian", "pasta", "pizza", "burgers", "mediterranean"], "location": {"lat": 35.8980154, "lon": 14.5145106}, "menu_id": "610936a4ee8ea7a56f4a372a", "name": "Vecchia Napoli Is-Suq Tal-Belt", "rating": 9, "slug": "vecchia-napoli-skyparks-suq-tal-belt" }, "description": "Tomato sauce, mozzarella fior di latte, crispy guanciale, Pecorino Romano cheese and a hint of chilli", "image": "https://wolt-menu-images-cdn.wolt.com/menu-images/610936a4ee8ea7a56f4a372a/005dfeb2-e734-11ec-b667-ced7a78a5abd_l_amatriciana_pizza_joel_gueller1.jpeg", "name": "L'Amatriciana" } ``` Processing this amount of records takes some time, so we precomputed the CLIP embeddings, stored them in a Qdrant collection and exported the collection as a snapshot. You may [download it here](https://storage.googleapis.com/common-datasets-snapshots/wolt-clip-ViT-B-32.snapshot). ## Different search modes The FastAPI backend [exposes just a single endpoint](https://github.com/qdrant/demo-food-discovery/blob/6b49e11cfbd6412637d527cdd62fe9b9f74ac699/backend/main.py#L37), however it handles multiple scenarios. Let's dive into them one by one and understand why they are needed. ### Cold start Recommendation systems struggle with a cold start problem. When a new user joins the system, there is no data about their preferences, so it’s hard to recommend anything. The same applies to our demo. When you open it, you will see a random selection of dishes, and it changes every time you refresh the page. Internally, the demo [chooses some random points](https://github.com/qdrant/demo-food-discovery/blob/6b49e11cfbd6412637d527cdd62fe9b9f74ac699/backend/discovery.py#L70) in the vector space. ![Random points selection](/articles_data/food-discovery-demo/random-results.png) That procedure should result in returning diverse results, so we have a higher chance of showing something interesting to the user. ### Textual search Since the demo suffers from the cold start problem, we implemented a textual search mode that is useful to start exploring the data. You can type in any text query by clicking a search icon in the top right corner. The demo will use the CLIP model to encode the query into a vector and then search for the nearest neighbors in the vector space. ![Random points selection](/articles_data/food-discovery-demo/textual-search.png) This is implemented as [a group search query to Qdrant](https://github.com/qdrant/demo-food-discovery/blob/6b49e11cfbd6412637d527cdd62fe9b9f74ac699/backend/discovery.py#L44). We didn't use a simple search, but performed grouping by the restaurant to get more diverse results. [Search groups](/documentation/concepts/search/#search-groups) is a mechanism similar to `GROUP BY` clause in SQL, and it's useful when you want to get a specific number of result per group (in our case just one). ```python import settings # Encode query into a vector, model is an instance of # sentence_transformers.SentenceTransformer that loaded CLIP model query_vector = model.encode(query).tolist() # Search for nearest neighbors, client is an instance of # qdrant_client.QdrantClient that has to be initialized before response = client.search_groups( settings.QDRANT_COLLECTION, query_vector=query_vector, group_by=settings.GROUP_BY_FIELD, limit=search_query.limit, ) ``` ### Exploring the results The main feature of the demo is the ability to explore the space of the dishes. 
You can click on any of them to see more details, but first of all you can like or dislike it, and the demo will update the search results accordingly. ![Recommendation results](/articles_data/food-discovery-demo/recommendation-results.png) #### Negative feedback only Qdrant [Recommendation API](/documentation/concepts/search/#recommendation-api) needs at least one positive example to work. However, in our demo we want to be able to provide only negative examples. This is because we want to be able to say “I don’t like this dish” without having to like anything first. To achieve this, we use a trick. We negate the vectors of the disliked dishes and use their mean as a query. This way, the disliked dishes will be pushed away from the search results. **This works because the cosine distance is based on the angle between two vectors, and the angle between a vector and its negation is 180 degrees.** ![CLIP model](/articles_data/food-discovery-demo/negated-vector.png) Food Discovery Demo [implements that trick](https://github.com/qdrant/demo-food-discovery/blob/6b49e11cfbd6412637d527cdd62fe9b9f74ac699/backend/discovery.py#L122) by calling Qdrant twice. Initially, we use the [Scroll API](/documentation/concepts/points/#scroll-points) to find disliked items, and then calculate a negated mean of all their vectors. That allows using the [Search Groups API](/documentation/concepts/search/#search-groups) to find the nearest neighbors of the negated mean vector. ```python import numpy as np # Retrieve the disliked points based on their ids disliked_points, _ = client.scroll( settings.QDRANT_COLLECTION, scroll_filter=models.Filter( must=[ models.HasIdCondition(has_id=search_query.negative), ] ), with_vectors=True, ) # Calculate a mean vector of disliked points disliked_vectors = np.array([point.vector for point in disliked_points]) mean_vector = np.mean(disliked_vectors, axis=0) negated_vector = -mean_vector # Search for nearest neighbors of the negated mean vector response = client.search_groups( settings.QDRANT_COLLECTION, query_vector=negated_vector.tolist(), group_by=settings.GROUP_BY_FIELD, limit=search_query.limit, ) ``` #### Positive and negative feedback Since the [Recommendation API](/documentation/concepts/search/#recommendation-api) requires at least one positive example, we can use it only when the user has liked at least one dish. We could theoretically use the same trick as above and negate the disliked dishes, but it would be a bit weird, as Qdrant has that feature already built-in, and we can call it just once to do the job. It's always better to perform the search server-side. Thus, in this case [we just call the Qdrant server with a list of positive and negative examples](https://github.com/qdrant/demo-food-discovery/blob/6b49e11cfbd6412637d527cdd62fe9b9f74ac699/backend/discovery.py#L166), so it can find some points which are close to the positive examples and far from the negative ones. ```python response = client.recommend_groups( settings.QDRANT_COLLECTION, positive=search_query.positive, negative=search_query.negative, group_by=settings.GROUP_BY_FIELD, limit=search_query.limit, ) ``` From the user perspective nothing changes comparing to the previous case. ### Location-based search Last but not least, location plays an important role in the food discovery process. You are definitely looking for something you can find nearby, not on the other side of the globe. Therefore, your current location can be toggled as a filtering condition. 
You can enable it by clicking on “Find near me” icon in the top right. This way you can find the best pizza in your neighborhood, not in the whole world. Qdrant [geo radius filter](/documentation/concepts/filtering/#geo-radius) is a perfect choice for this. It lets you filter the results by distance from a given point. ```python from qdrant_client import models # Create a geo radius filter query_filter = models.Filter( must=[ models.FieldCondition( key="cafe.location", geo_radius=models.GeoRadius( center=models.GeoPoint( lon=location.longitude, lat=location.latitude, ), radius=location.radius_km * 1000, ), ) ] ) ``` Such a filter needs [a payload index](/documentation/concepts/indexing/#payload-index) to work efficiently, and it was created on a collection we used to create the snapshot. When you import it into your instance, the index will be already there. ## Using the demo The Food Discovery Demo [is available online](https://food-discovery.qdrant.tech/), but if you prefer to run it locally, you can do it with Docker. The [README](https://github.com/qdrant/demo-food-discovery/blob/main/README.md) describes all the steps more in detail, but here is a quick start: ```bash git clone [email protected]:qdrant/demo-food-discovery.git cd demo-food-discovery # Create .env file based on .env.example docker-compose up -d ``` The demo will be available at `http://localhost:8001`, but you won't be able to search anything until you [import the snapshot into your Qdrant instance](/documentation/concepts/snapshots/#recover-via-api). If you don't want to bother with hosting a local one, you can use the [Qdrant Cloud](https://cloud.qdrant.io/) cluster. 4 GB RAM is enough to load all the 2 million entries. ## Fork and reuse Our demo is completely open-source. Feel free to fork it, update with your own dataset or adapt the application to your use case. Whether you’re looking to understand the mechanics of semantic search or to have a foundation to build a larger project, this demo can serve as a starting point. Check out the [Food Discovery Demo repository ](https://github.com/qdrant/demo-food-discovery/) to get started. If you have any questions, feel free to reach out [through Discord](https://qdrant.to/discord).
articles/food-discovery-demo.md
--- title: Google Summer of Code 2023 - Web UI for Visualization and Exploration short_description: Gsoc'23 Web UI for Visualization and Exploration description: My journey as a Google Summer of Code 2023 student working on the "Web UI for Visualization and Exploration" project for Qdrant. preview_dir: /articles_data/web-ui-gsoc/preview small_preview_image: /articles_data/web-ui-gsoc/icon.svg social_preview_image: /articles_data/web-ui-gsoc/preview/social_preview.jpg weight: -20 author: Kartik Gupta author_link: https://kartik-gupta-ij.vercel.app/ date: 2023-08-28T08:00:00+03:00 draft: false keywords: - vector reduction - console - gsoc'23 - vector similarity - exploration - recommendation --- ## Introduction Hello everyone! My name is Kartik Gupta, and I am thrilled to share my coding journey as part of the Google Summer of Code 2023 program. This summer, I had the incredible opportunity to work on an exciting project titled "Web UI for Visualization and Exploration" for Qdrant, a vector search engine. In this article, I will take you through my experience, challenges, and achievements during this enriching coding journey. ## Project Overview Qdrant is a powerful vector search engine widely used for similarity search and clustering. However, it lacked a user-friendly web-based UI for data visualization and exploration. My project aimed to bridge this gap by developing a web-based user interface that allows users to easily interact with and explore their vector data. ## Milestones and Achievements The project was divided into six milestones, each focusing on a specific aspect of the web UI development. Let's go through each of them and my achievements during the coding period. **1. Designing a friendly UI on Figma** I started by designing the user interface on Figma, ensuring it was easy to use, visually appealing, and responsive on different devices. I focused on usability and accessibility to create a seamless user experience. ( [Figma Design](https://www.figma.com/file/z54cAcOErNjlVBsZ1DrXyD/Qdant?type=design&node-id=0-1&mode=design&t=Pu22zO2AMFuGhklG-0)) **2. Building the layout** The layout route served as a landing page with an overview of the application's features and navigation links to other routes. **3. Creating a view collection route** This route enabled users to view a list of collections available in the application. Users could click on a collection to see more details, including the data and vectors associated with it. {{< figure src=/articles_data/web-ui-gsoc/collections-page.png caption="Collection Page" alt="Collection Page" >}} **4. Developing a data page with "find similar" functionality** I implemented a data page where users could search for data and find similar data using a recommendation API. The recommendation API suggested similar data based on the Data's selected ID, providing valuable insights. {{< figure src=/articles_data/web-ui-gsoc/points-page.png caption="Points Page" alt="Points Page" >}} **5. Developing query editor page libraries** This milestone involved creating a query editor page that allowed users to write queries in a custom language. The editor provided syntax highlighting, autocomplete, and error-checking features for a seamless query writing experience. {{< figure src=/articles_data/web-ui-gsoc/console-page.png caption="Query Editor Page" alt="Query Editor Page" >}} **6. 
Developing a route for visualizing vector data points** This is done by the reduction of n-dimensional vector in 2-D points and they are displayed with their respective payloads. {{< figure src=/articles_data/web-ui-gsoc/visualization-page.png caption="Vector Visuliztion Page" alt="visualization-page" >}} ## Challenges and Learning Throughout the project, I encountered a series of challenges that stretched my engineering capabilities and provided unique growth opportunities. From mastering new libraries and technologies to ensuring the user interface (UI) was both visually appealing and user-friendly, every obstacle became a stepping stone toward enhancing my skills as a developer. However, each challenge provided an opportunity to learn and grow as a developer. I acquired valuable experience in vector search and dimension reduction techniques. The most significant learning for me was the importance of effective project management. Setting realistic timelines, collaborating with mentors, and staying proactive with feedback allowed me to complete the milestones efficiently. ### Technical Learning and Skill Development One of the most significant aspects of this journey was diving into the intricate world of vector search and dimension reduction techniques. These areas, previously unfamiliar to me, required rigorous study and exploration. Learning how to process vast amounts of data efficiently and extract meaningful insights through these techniques was both challenging and rewarding. ### Effective Project Management Undoubtedly, the most impactful lesson was the art of effective project management. I quickly grasped the importance of setting realistic timelines and goals. Collaborating closely with mentors and maintaining proactive communication proved indispensable. This approach enabled me to navigate the complex development process and successfully achieve the project's milestones. ### Overcoming Technical Challenges #### Autocomplete Feature in Console One particularly intriguing challenge emerged while working on the autocomplete feature within the console. Finding a solution was proving elusive until a breakthrough came from an unexpected direction. My mentor, Andrey, proposed creating a separate module that could support autocomplete based on OpenAPI for our custom language. This ingenious approach not only resolved the issue but also showcased the power of collaborative problem-solving. #### Optimization with Web Workers The high-processing demands of vector reduction posed another significant challenge. Initially, this task was straining browsers and causing performance issues. The solution materialized in the form of web workers—an independent processing instance that alleviated the strain on browsers. However, a new question arose: how to terminate these workers effectively? With invaluable insights from my mentor, I gained a deeper understanding of web worker dynamics and successfully tackled this challenge. #### Console Integration Complexity Integrating the console interaction into the application presented multifaceted challenges. Crafting a custom language in Monaco, parsing text to make API requests, and synchronizing the entire process demanded meticulous attention to detail. Overcoming these hurdles was a testament to the complexity of real-world engineering endeavours. #### Codelens Multiplicity Issue An unexpected issue cropped up during the development process: the codelen (run button) registered multiple times, leading to undesired behaviour. 
This hiccup underscored the importance of thorough testing and debugging, even in seemingly straightforward features. ### Key Learning Points Amidst these challenges, I garnered valuable insights that have significantly enriched my engineering prowess: **Vector Reduction Techniques**: Navigating the realm of vector reduction techniques provided a deep understanding of how to process and interpret data efficiently. This knowledge opens up new avenues for developing data-driven applications in the future. **Web Workers Efficiency**: Mastering the intricacies of web workers not only resolved performance concerns but also expanded my repertoire of optimization strategies. This newfound proficiency will undoubtedly find relevance in various future projects. **Monaco Editor and UI Frameworks**: Working extensively with the Monaco Editor, Material-UI (MUI), and Vite enriched my familiarity with these essential tools. I honed my skills in integrating complex UI components seamlessly into applications. ## Areas for Improvement and Future Enhancements While reflecting on this transformative journey, I recognize several areas that offer room for improvement and future enhancements: 1. Enhanced Autocomplete: Further refining the autocomplete feature to support key-value suggestions in JSON structures could greatly enhance the user experience. 2. Error Detection in Console: Integrating the console's error checker with OpenAPI could enhance its accuracy in identifying errors and offering precise suggestions for improvement. 3. Expanded Vector Visualization: Exploring additional visualization methods and optimizing their performance could elevate the utility of the vector visualization route. ## Conclusion Participating in the Google Summer of Code 2023 and working on the "Web UI for Visualization and Exploration" project has been an immensely rewarding experience. I am grateful for the opportunity to contribute to Qdrant and develop a user-friendly interface for vector data exploration. I want to express my gratitude to my mentors and the entire Qdrant community for their support and guidance throughout this journey. This experience has not only improved my coding skills but also instilled a deeper passion for web development and data analysis. As my coding journey continues beyond this project, I look forward to applying the knowledge and experience gained here to future endeavours. I am excited to see how Qdrant evolves with the newly developed web UI and how it positively impacts users worldwide. Thank you for joining me on this coding adventure, and I hope to share more exciting projects in the future! Happy coding!
articles/web-ui-gsoc.md
--- title: Metric Learning for Anomaly Detection short_description: "How to use metric learning to detect anomalies: quality assessment of coffee beans with just 200 labelled samples" description: Practical use of metric learning for anomaly detection. A way to match the results of a classification-based approach with only ~0.6% of the labeled data. social_preview_image: /articles_data/detecting-coffee-anomalies/preview/social_preview.jpg preview_dir: /articles_data/detecting-coffee-anomalies/preview small_preview_image: /articles_data/detecting-coffee-anomalies/anomalies_icon.svg weight: 30 author: Yusuf Sarıgöz author_link: https://medium.com/@yusufsarigoz date: 2022-05-04T13:00:00+03:00 draft: false # aliases: [ /articles/detecting-coffee-anomalies/ ] --- Anomaly detection is a thirsting yet challenging task that has numerous use cases across various industries. The complexity results mainly from the fact that the task is data-scarce by definition. Similarly, anomalies are, again by definition, subject to frequent change, and they may take unexpected forms. For that reason, supervised classification-based approaches are: * Data-hungry - requiring quite a number of labeled data; * Expensive - data labeling is an expensive task itself; * Time-consuming - you would try to obtain what is necessarily scarce; * Hard to maintain - you would need to re-train the model repeatedly in response to changes in the data distribution. These are not desirable features if you want to put your model into production in a rapidly-changing environment. And, despite all the mentioned difficulties, they do not necessarily offer superior performance compared to the alternatives. In this post, we will detail the lessons learned from such a use case. ## Coffee Beans [Agrivero.ai](https://agrivero.ai/) - is a company making AI-enabled solution for quality control & traceability of green coffee for producers, traders, and roasters. They have collected and labeled more than **30 thousand** images of coffee beans with various defects - wet, broken, chipped, or bug-infested samples. This data is used to train a classifier that evaluates crop quality and highlights possible problems. {{< figure src=/articles_data/detecting-coffee-anomalies/detection.gif caption="Anomalies in coffee" width="400px" >}} We should note that anomalies are very diverse, so the enumeration of all possible anomalies is a challenging task on it's own. In the course of work, new types of defects appear, and shooting conditions change. Thus, a one-time labeled dataset becomes insufficient. Let's find out how metric learning might help to address this challenge. ## Metric Learning Approach In this approach, we aimed to encode images in an n-dimensional vector space and then use learned similarities to label images during the inference. The simplest way to do this is KNN classification. The algorithm retrieves K-nearest neighbors to a given query vector and assigns a label based on the majority vote. In production environment kNN classifier could be easily replaced with [Qdrant](https://github.com/qdrant/qdrant) vector search engine. {{< figure src=/articles_data/detecting-coffee-anomalies/anomalies_detection.png caption="Production deployment" >}} This approach has the following advantages: * We can benefit from unlabeled data, considering labeling is time-consuming and expensive. * The relevant metric, e.g., precision or recall, can be tuned according to changing requirements during the inference without re-training. 
* Queries labeled with a high score can be added to the KNN classifier on the fly as new data points. To apply metric learning, we need to have a neural encoder, a model capable of transforming an image into a vector. Training such an encoder from scratch may require a significant amount of data we might not have. Therefore, we will divide the training into two steps: * The first step is to train the autoencoder, with which we will prepare a model capable of representing the target domain. * The second step is finetuning. Its purpose is to train the model to distinguish the required types of anomalies. {{< figure src=/articles_data/detecting-coffee-anomalies/anomaly_detection_training.png caption="Model training architecture" >}} ### Step 1 - Autoencoder for Unlabeled Data First, we pretrained a Resnet18-like model in a vanilla autoencoder architecture by leaving the labels aside. Autoencoder is a model architecture composed of an encoder and a decoder, with the latter trying to recreate the original input from the low-dimensional bottleneck output of the former. There is no intuitive evaluation metric to indicate the performance in this setup, but we can evaluate the success by examining the recreated samples visually. {{< figure src=/articles_data/detecting-coffee-anomalies/image_reconstruction.png caption="Example of image reconstruction with Autoencoder" >}} Then we encoded a subset of the data into 128-dimensional vectors by using the encoder, and created a KNN classifier on top of these embeddings and associated labels. Although the results are promising, we can do even better by finetuning with metric learning. ### Step 2 - Finetuning with Metric Learning We started by selecting 200 labeled samples randomly without replacement. In this step, The model was composed of the encoder part of the autoencoder with a randomly initialized projection layer stacked on top of it. We applied transfer learning from the frozen encoder and trained only the projection layer with Triplet Loss and an online batch-all triplet mining strategy. Unfortunately, the model overfitted quickly in this attempt. In the next experiment, we used an online batch-hard strategy with a trick to prevent vector space from collapsing. We will describe our approach in the further articles. This time it converged smoothly, and our evaluation metrics also improved considerably to match the supervised classification approach. {{< figure src=/articles_data/detecting-coffee-anomalies/ae_report_knn.png caption="Metrics for the autoencoder model with KNN classifier" >}} {{< figure src=/articles_data/detecting-coffee-anomalies/ft_report_knn.png caption="Metrics for the finetuned model with KNN classifier" >}} We repeated this experiment with 500 and 2000 samples, but it showed only a slight improvement. Thus we decided to stick to 200 samples - see below for why. ## Supervised Classification Approach We also wanted to compare our results with the metrics of a traditional supervised classification model. For this purpose, a Resnet50 model was finetuned with ~30k labeled images, made available for training. Surprisingly, the F1 score was around ~0.86. Please note that we used only 200 labeled samples in the metric learning approach instead of ~30k in the supervised classification approach. These numbers indicate a huge saving with no considerable compromise in the performance. 
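As a side note, the production setup sketched earlier, in which the kNN classifier is replaced by a vector search engine, boils down to retrieving the K nearest labeled embeddings and taking a majority vote. A minimal illustration with the Qdrant Python client; the collection name and the `label` payload field are hypothetical:

```python
from collections import Counter

from qdrant_client import QdrantClient

client = QdrantClient(host="localhost", port=6333)

def classify(embedding, k: int = 10) -> str:
    """kNN classification on top of vector search: majority vote over the labels
    of the k nearest reference embeddings."""
    hits = client.search(
        collection_name="coffee_beans",  # hypothetical collection of labeled embeddings
        query_vector=list(embedding),    # 128-dimensional vector from the encoder
        limit=k,
        with_payload=True,
    )
    votes = Counter(hit.payload["label"] for hit in hits)  # hypothetical payload field
    return votes.most_common(1)[0][0]
```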
## Conclusion

We obtained results comparable to those of the supervised classification method by using **only 0.66%** of the labeled data with metric learning. This approach is time-saving and resource-efficient, and it may be improved further. Possible next steps might be:

- Collect more unlabeled data and pretrain a larger autoencoder.
- Obtain high-quality labels for a small number of images instead of tens of thousands for finetuning.
- Use hyperparameter optimization and possibly gradual unfreezing in the finetuning step.
- Use a [vector search engine](https://github.com/qdrant/qdrant) to serve metric learning in production.

We are actively looking into these, and we will continue to publish our findings on this challenge and other use cases of metric learning.
articles/detecting-coffee-anomalies.md
--- title: Fine Tuning Similar Cars Search short_description: "How to use similarity learning to search for similar cars" description: Learn how to train a similarity model that can retrieve similar car images in novel categories. social_preview_image: /articles_data/cars-recognition/preview/social_preview.jpg small_preview_image: /articles_data/cars-recognition/icon.svg preview_dir: /articles_data/cars-recognition/preview weight: 10 author: Yusuf Sarıgöz author_link: https://medium.com/@yusufsarigoz date: 2022-06-28T13:00:00+03:00 draft: false # aliases: [ /articles/cars-recognition/ ] --- Supervised classification is one of the most widely used training objectives in machine learning, but not every task can be defined as such. For example, 1. Your classes may change quickly —e.g., new classes may be added over time, 2. You may not have samples from every possible category, 3. It may be impossible to enumerate all the possible classes during the training time, 4. You may have an essentially different task, e.g., search or retrieval. All such problems may be efficiently solved with similarity learning. N.B.: If you are new to the similarity learning concept, checkout the [awesome-metric-learning](https://github.com/qdrant/awesome-metric-learning) repo for great resources and use case examples. However, similarity learning comes with its own difficulties such as: 1. Need for larger batch sizes usually, 2. More sophisticated loss functions, 3. Changing architectures between training and inference. Quaterion is a fine tuning framework built to tackle such problems in similarity learning. It uses [PyTorch Lightning](https://www.pytorchlightning.ai/) as a backend, which is advertized with the motto, "spend more time on research, less on engineering." This is also true for Quaterion, and it includes: 1. Trainable and servable model classes, 2. Annotated built-in loss functions, and a wrapper over [pytorch-metric-learning](https://kevinmusgrave.github.io/pytorch-metric-learning/) when you need even more, 3. Sample, dataset and data loader classes to make it easier to work with similarity learning data, 4. A caching mechanism for faster iterations and less memory footprint. ## A closer look at Quaterion Let's break down some important modules: - `TrainableModel`: A subclass of `pl.LightNingModule` that has additional hook methods such as `configure_encoders`, `configure_head`, `configure_metrics` and others to define objects needed for training and evaluation —see below to learn more on these. - `SimilarityModel`: An inference-only export method to boost code transfer and lower dependencies during the inference time. In fact, Quaterion is composed of two packages: 1. `quaterion_models`: package that you need for inference. 2. `quaterion`: package that defines objects needed for training and also depends on `quaterion_models`. - `Encoder` and `EncoderHead`: Two objects that form a `SimilarityModel`. In most of the cases, you may use a frozen pretrained encoder, e.g., ResNets from `torchvision`, or language modelling models from `transformers`, with a trainable `EncoderHead` stacked on top of it. `quaterion_models` offers several ready-to-use `EncoderHead` implementations, but you may also create your own by subclassing a parent class or easily listing PyTorch modules in a `SequentialHead`. Quaterion has other objects such as distance functions, evaluation metrics, evaluators, convenient dataset and data loader classes, but these are mostly self-explanatory. 
Thus, they will not be explained in detail in this article for brevity. However, you can always go check out the [documentation](https://quaterion.qdrant.tech) to learn more about them. The focus of this tutorial is a step-by-step solution to a similarity learning problem with Quaterion. This will also help us better understand how the abovementioned objects fit together in a real project. Let's start walking through some of the important parts of the code. If you are looking for the complete source code instead, you can find it under the [examples](https://github.com/qdrant/quaterion/tree/master/examples/cars) directory in the Quaterion repo. ## Dataset In this tutorial, we will use the [Stanford Cars](https://pytorch.org/vision/main/generated/torchvision.datasets.StanfordCars.html) dataset. {{< figure src=https://storage.googleapis.com/quaterion/docs/class_montage.jpg caption="Stanford Cars Dataset" >}} It has 16185 images of cars from 196 classes, and it is split into training and testing subsets with almost a 50-50% split. To make things even more interesting, however, we will first merge training and testing subsets, then we will split it into two again in such a way that the half of the 196 classes will be put into the training set and the other half will be in the testing set. This will let us test our model with samples from novel classes that it has never seen in the training phase, which is what supervised classification cannot achieve but similarity learning can. In the following code borrowed from [`data.py`](https://github.com/qdrant/quaterion/blob/master/examples/cars/data.py): - `get_datasets()` function performs the splitting task described above. - `get_dataloaders()` function creates `GroupSimilarityDataLoader` instances from training and testing datasets. - Datasets are regular PyTorch datasets that emit `SimilarityGroupSample` instances. N.B.: Currently, Quaterion has two data types to represent samples in a dataset. To learn more about `SimilarityPairSample`, check out the [NLP tutorial](https://quaterion.qdrant.tech/tutorials/nlp_tutorial.html) ```python import numpy as np import os import tqdm from torch.utils.data import Dataset, Subset from torchvision import datasets, transforms from typing import Callable from pytorch_lightning import seed_everything from quaterion.dataset import ( GroupSimilarityDataLoader, SimilarityGroupSample, ) # set seed to deterministically sample train and test categories later on seed_everything(seed=42) # dataset will be downloaded to this directory under local directory dataset_path = os.path.join(".", "torchvision", "datasets") def get_datasets(input_size: int): # Use Mean and std values for the ImageNet dataset as the base model was pretrained on it. # taken from https://www.geeksforgeeks.org/how-to-normalize-images-in-pytorch/ mean = [0.485, 0.456, 0.406] std = [0.229, 0.224, 0.225] # create train and test transforms transform = transforms.Compose( [ transforms.Resize((input_size, input_size)), transforms.ToTensor(), transforms.Normalize(mean, std), ] ) # we need to merge train and test splits into a full dataset first, # and then we will split it to two subsets again with each one composed of distinct labels. 
full_dataset = datasets.StanfordCars( root=dataset_path, split="train", download=True ) + datasets.StanfordCars(root=dataset_path, split="test", download=True) # full_dataset contains examples from 196 categories labeled with an integer from 0 to 195 # randomly sample half of it to be used for training train_categories = np.random.choice(a=196, size=196 // 2, replace=False) # get a list of labels for all samples in the dataset labels_list = np.array([label for _, label in tqdm.tqdm(full_dataset)]) # get a mask for indices where label is included in train_categories labels_mask = np.isin(labels_list, train_categories) # get a list of indices to be used as train samples train_indices = np.argwhere(labels_mask).squeeze() # others will be used as test samples test_indices = np.argwhere(np.logical_not(labels_mask)).squeeze() # now that we have distinct indices for train and test sets, we can use `Subset` to create new datasets # from `full_dataset`, which contain only the samples at given indices. # finally, we apply transformations created above. train_dataset = CarsDataset( Subset(full_dataset, train_indices), transform=transform ) test_dataset = CarsDataset( Subset(full_dataset, test_indices), transform=transform ) return train_dataset, test_dataset def get_dataloaders( batch_size: int, input_size: int, shuffle: bool = False, ): train_dataset, test_dataset = get_datasets(input_size) train_dataloader = GroupSimilarityDataLoader( train_dataset, batch_size=batch_size, shuffle=shuffle ) test_dataloader = GroupSimilarityDataLoader( test_dataset, batch_size=batch_size, shuffle=False ) return train_dataloader, test_dataloader class CarsDataset(Dataset): def __init__(self, dataset: Dataset, transform: Callable): self._dataset = dataset self._transform = transform def __len__(self) -> int: return len(self._dataset) def __getitem__(self, index) -> SimilarityGroupSample: image, label = self._dataset[index] image = self._transform(image) return SimilarityGroupSample(obj=image, group=label) ``` ## Trainable Model Now it's time to review one of the most exciting building blocks of Quaterion: [TrainableModel](https://quaterion.qdrant.tech/quaterion.train.trainable_model.html#module-quaterion.train.trainable_model). It is the base class for models you would like to configure for training, and it provides several hook methods starting with `configure_` to set up every aspect of the training phase just like [`pl.LightningModule`](https://pytorch-lightning.readthedocs.io/en/stable/api/pytorch_lightning.core.LightningModule.html), its own base class. It is central to fine tuning with Quaterion, so we will break down this essential code in [`models.py`](https://github.com/qdrant/quaterion/blob/master/examples/cars/models.py) and review each method separately. Let's begin with the imports: ```python import torch import torchvision from quaterion_models.encoders import Encoder from quaterion_models.heads import EncoderHead, SkipConnectionHead from torch import nn from typing import Dict, Union, Optional, List from quaterion import TrainableModel from quaterion.eval.attached_metric import AttachedMetric from quaterion.eval.group import RetrievalRPrecision from quaterion.loss import SimilarityLoss, TripletLoss from quaterion.train.cache import CacheConfig, CacheType from .encoders import CarsEncoder ``` In the following code snippet, we subclass `TrainableModel`. You may use `__init__()` to store some attributes to be used in various `configure_*` methods later on. 
The more interesting part is, however, in the [`configure_encoders()`](https://quaterion.qdrant.tech/quaterion.train.trainable_model.html#quaterion.train.trainable_model.TrainableModel.configure_encoders) method. We need to return an instance of [`Encoder`](https://quaterion-models.qdrant.tech/quaterion_models.encoders.encoder.html#quaterion_models.encoders.encoder.Encoder) (or a dictionary with `Encoder` instances as values) from this method. In our case, it is an instance of `CarsEncoders`, which we will review soon. Notice now how it is created with a pretrained ResNet152 model whose classification layer is replaced by an identity function. ```python class Model(TrainableModel): def __init__(self, lr: float, mining: str): self._lr = lr self._mining = mining super().__init__() def configure_encoders(self) -> Union[Encoder, Dict[str, Encoder]]: pre_trained_encoder = torchvision.models.resnet152(pretrained=True) pre_trained_encoder.fc = nn.Identity() return CarsEncoder(pre_trained_encoder) ``` In Quaterion, a [`SimilarityModel`](https://quaterion-models.qdrant.tech/quaterion_models.model.html#quaterion_models.model.SimilarityModel) is composed of one or more `Encoder`s and an [`EncoderHead`](https://quaterion-models.qdrant.tech/quaterion_models.heads.encoder_head.html#quaterion_models.heads.encoder_head.EncoderHead). `quaterion_models` has [several `EncoderHead` implementations](https://quaterion-models.qdrant.tech/quaterion_models.heads.html#module-quaterion_models.heads) with a unified API such as a configurable dropout value. You may use one of them or create your own subclass of `EncoderHead`. In either case, you need to return an instance of it from [`configure_head`](https://quaterion.qdrant.tech/quaterion.train.trainable_model.html#quaterion.train.trainable_model.TrainableModel.configure_head) In this example, we will use a `SkipConnectionHead`, which is lightweight and more resistant to overfitting. ```python def configure_head(self, input_embedding_size) -> EncoderHead: return SkipConnectionHead(input_embedding_size, dropout=0.1) ``` Quaterion has implementations of [some popular loss functions](https://quaterion.qdrant.tech/quaterion.loss.html) for similarity learning, all of which subclass either [`GroupLoss`](https://quaterion.qdrant.tech/quaterion.loss.group_loss.html#quaterion.loss.group_loss.GroupLoss) or [`PairwiseLoss`](https://quaterion.qdrant.tech/quaterion.loss.pairwise_loss.html#quaterion.loss.pairwise_loss.PairwiseLoss). In this example, we will use [`TripletLoss`](https://quaterion.qdrant.tech/quaterion.loss.triplet_loss.html#quaterion.loss.triplet_loss.TripletLoss), which is a subclass of `GroupLoss`. In general, subclasses of `GroupLoss` are used with datasets in which samples are assigned with some group (or label). In our example label is a make of the car. Those datasets should emit `SimilarityGroupSample`. Other alternatives are implementations of `PairwiseLoss`, which consume `SimilarityPairSample` - pair of objects for which similarity is specified individually. To see an example of the latter, you may need to check out the [NLP Tutorial](https://quaterion.qdrant.tech/tutorials/nlp_tutorial.html) ```python def configure_loss(self) -> SimilarityLoss: return TripletLoss(mining=self._mining, margin=0.5) ``` `configure_optimizers()` may be familiar to PyTorch Lightning users, but there is a novel `self.model` used inside that method. 
It is an instance of `SimilarityModel` and is automatically created by Quaterion from the return values of `configure_encoders()` and `configure_head()`. ```python def configure_optimizers(self): optimizer = torch.optim.Adam(self.model.parameters(), self._lr) return optimizer ``` Caching in Quaterion is used for avoiding calculation of outputs of a frozen pretrained `Encoder` in every epoch. When it is configured, outputs will be computed once and cached in the preferred device for direct usage later on. It provides both a considerable speedup and less memory footprint. However, it is quite a bit versatile and has several knobs to tune. To get the most out of its potential, it's recommended that you check out the [cache tutorial](https://quaterion.qdrant.tech/tutorials/cache_tutorial.html). For the sake of making this article self-contained, you need to return a [`CacheConfig`](https://quaterion.qdrant.tech/quaterion.train.cache.cache_config.html#quaterion.train.cache.cache_config.CacheConfig) instance from [`configure_caches()`](https://quaterion.qdrant.tech/quaterion.train.trainable_model.html#quaterion.train.trainable_model.TrainableModel.configure_caches) to specify cache-related preferences such as: - [`CacheType`](https://quaterion.qdrant.tech/quaterion.train.cache.cache_config.html#quaterion.train.cache.cache_config.CacheType), i.e., whether to store caches on CPU or GPU, - `save_dir`, i.e., where to persist caches for subsequent runs, - `batch_size`, i.e., batch size to be used only when creating caches - the batch size to be used during the actual training might be different. ```python def configure_caches(self) -> Optional[CacheConfig]: return CacheConfig( cache_type=CacheType.AUTO, save_dir="./cache_dir", batch_size=32 ) ``` We have just configured the training-related settings of a `TrainableModel`. However, evaluation is an integral part of experimentation in machine learning, and you may configure evaluation metrics by returning one or more [`AttachedMetric`](https://quaterion.qdrant.tech/quaterion.eval.attached_metric.html#quaterion.eval.attached_metric.AttachedMetric) instances from `configure_metrics()`. Quaterion has several built-in [group](https://quaterion.qdrant.tech/quaterion.eval.group.html) and [pairwise](https://quaterion.qdrant.tech/quaterion.eval.pair.html) evaluation metrics. ```python def configure_metrics(self) -> Union[AttachedMetric, List[AttachedMetric]]: return AttachedMetric( "rrp", metric=RetrievalRPrecision(), prog_bar=True, on_epoch=True, on_step=False, ) ``` ## Encoder As previously stated, a `SimilarityModel` is composed of one or more `Encoder`s and an `EncoderHead`. Even if we freeze pretrained `Encoder` instances, `EncoderHead` is still trainable and has enough parameters to adapt to the new task at hand. It is recommended that you set the `trainable` property to `False` whenever possible, as it lets you benefit from the caching mechanism described above. Another important property is `embedding_size`, which will be passed to `TrainableModel.configure_head()` as `input_embedding_size` to let you properly initialize the head layer. 
Let's see how an `Encoder` is implemented in the following code borrowed from [`encoders.py`](https://github.com/qdrant/quaterion/blob/master/examples/cars/encoders.py): ```python import os import torch import torch.nn as nn from quaterion_models.encoders import Encoder class CarsEncoder(Encoder): def __init__(self, encoder_model: nn.Module): super().__init__() self._encoder = encoder_model self._embedding_size = 2048 # last dimension from the ResNet model @property def trainable(self) -> bool: return False @property def embedding_size(self) -> int: return self._embedding_size ``` An `Encoder` is a regular `torch.nn.Module` subclass, and we need to implement the forward pass logic in the `forward` method. Depending on how you create your submodules, this method may be more complex; however, we simply pass the input through a pretrained ResNet152 backbone in this example: ```python def forward(self, images): embeddings = self._encoder.forward(images) return embeddings ``` An important step of machine learning development is proper saving and loading of models. Quaterion lets you save your `SimilarityModel` with [`TrainableModel.save_servable()`](https://quaterion.qdrant.tech/quaterion.train.trainable_model.html#quaterion.train.trainable_model.TrainableModel.save_servable) and restore it with [`SimilarityModel.load()`](https://quaterion-models.qdrant.tech/quaterion_models.model.html#quaterion_models.model.SimilarityModel.load). To be able to use these two methods, you need to implement `save()` and `load()` methods in your `Encoder`. Additionally, it is also important that you define your subclass of `Encoder` outside the `__main__` namespace, i.e., in a separate file from your main entry point. It may not be restored properly otherwise. ```python def save(self, output_path: str): os.makedirs(output_path, exist_ok=True) torch.save(self._encoder, os.path.join(output_path, "encoder.pth")) @classmethod def load(cls, input_path): encoder_model = torch.load(os.path.join(input_path, "encoder.pth")) return CarsEncoder(encoder_model) ``` ## Training With all essential objects implemented, it is easy to bring them all together and run a training loop with the [`Quaterion.fit()`](https://quaterion.qdrant.tech/quaterion.main.html#quaterion.main.Quaterion.fit) method. It expects: - A `TrainableModel`, - A [`pl.Trainer`](https://pytorch-lightning.readthedocs.io/en/stable/common/trainer.html), - A [`SimilarityDataLoader`](https://quaterion.qdrant.tech/quaterion.dataset.similarity_data_loader.html#quaterion.dataset.similarity_data_loader.SimilarityDataLoader) for training data, - And optionally, another `SimilarityDataLoader` for evaluation data. We need to import a few objects to prepare all of these: ```python import os import pytorch_lightning as pl import torch from pytorch_lightning.callbacks import EarlyStopping, ModelSummary from quaterion import Quaterion from .data import get_dataloaders from .models import Model ``` The `train()` function in the following code snippet expects several hyperparameter values as arguments. They can be defined in a `config.py` or passed from the command line. However, that part of the code is omitted for brevity. Instead let's focus on how all the building blocks are initialized and passed to `Quaterion.fit()`, which is responsible for running the whole loop. 
When the training loop is complete, you can simply call `TrainableModel.save_servable()` to save the current state of the `SimilarityModel` instance: ```python def train( lr: float, mining: str, batch_size: int, epochs: int, input_size: int, shuffle: bool, save_dir: str, ): model = Model( lr=lr, mining=mining, ) train_dataloader, val_dataloader = get_dataloaders( batch_size=batch_size, input_size=input_size, shuffle=shuffle ) early_stopping = EarlyStopping( monitor="validation_loss", patience=50, ) trainer = pl.Trainer( gpus=1 if torch.cuda.is_available() else 0, max_epochs=epochs, callbacks=[early_stopping, ModelSummary(max_depth=3)], enable_checkpointing=False, log_every_n_steps=1, ) Quaterion.fit( trainable_model=model, trainer=trainer, train_dataloader=train_dataloader, val_dataloader=val_dataloader, ) model.save_servable(save_dir) ``` ## Evaluation Let's see what we have achieved with these simple steps. [`evaluate.py`](https://github.com/qdrant/quaterion/blob/master/examples/cars/evaluate.py) has two functions to evaluate both the baseline model and the tuned similarity model. We will review only the latter for brevity. In addition to the ease of restoring a `SimilarityModel`, this code snippet also shows how to use [`Evaluator`](https://quaterion.qdrant.tech/quaterion.eval.evaluator.html#quaterion.eval.evaluator.Evaluator) to evaluate the performance of a `SimilarityModel` on a given dataset by given evaluation metrics. {{< figure src=https://storage.googleapis.com/quaterion/docs/original_vs_tuned_cars.png caption="Comparison of original and tuned models for retrieval" >}} Full evaluation of a dataset usually grows exponentially, and thus you may want to perform a partial evaluation on a sampled subset. In this case, you may use [samplers](https://quaterion.qdrant.tech/quaterion.eval.samplers.html) to limit the evaluation. Similar to `Quaterion.fit()` used for training, [`Quaterion.evaluate()`](https://quaterion.qdrant.tech/quaterion.main.html#quaterion.main.Quaterion.evaluate) runs a complete evaluation loop. It takes the following as arguments: - An `Evaluator` instance created with given evaluation metrics and a `Sampler`, - The `SimilarityModel` to be evaluated, - And the evaluation dataset. ```python def eval_tuned_encoder(dataset, device): print("Evaluating tuned encoder...") tuned_cars_model = SimilarityModel.load( os.path.join(os.path.dirname(__file__), "cars_encoders") ).to(device) tuned_cars_model.eval() result = Quaterion.evaluate( evaluator=Evaluator( metrics=RetrievalRPrecision(), sampler=GroupSampler(sample_size=1000, device=device, log_progress=True), ), model=tuned_cars_model, dataset=dataset, ) print(result) ``` ## Conclusion In this tutorial, we trained a similarity model to search for similar cars from novel categories unseen in the training phase. Then, we evaluated it on a test dataset by the Retrieval R-Precision metric. The base model scored 0.1207, and our tuned model hit 0.2540, a twice higher score. These scores can be seen in the following figure: {{< figure src=/articles_data/cars-recognition/cars_metrics.png caption="Metrics for the base and tuned models" >}}
articles/cars-recognition.md
--- title: "How to Optimize RAM Requirements for 1 Million Vectors: A Case Study" short_description: Master RAM measurement and memory optimization for optimal performance and resource use. description: Unlock the secrets of efficient RAM measurement and memory optimization with this comprehensive guide, ensuring peak performance and resource utilization. social_preview_image: /articles_data/memory-consumption/preview/social_preview.jpg preview_dir: /articles_data/memory-consumption/preview small_preview_image: /articles_data/memory-consumption/icon.svg weight: 7 author: Andrei Vasnetsov author_link: https://blog.vasnetsov.com/ date: 2022-12-07T10:18:00.000Z # aliases: [ /articles/memory-consumption/ ] --- <!-- 1. How people usually measure memory and why it might be misleading 2. How to properly measure memory 3. Try different configurations of Qdrant and see how they affect the memory consumption and search speed 4. Conclusion --> <!-- Introduction: 1. We are used to measure memory consumption by looking into `htop`. But it could be misleading. 2. There are multiple reasons why it is wrong: 1. Process may allocate memory, but not use it. 2. Process may not free deallocated memory. 3. Process might be forked and memory is shared between processes. 3. Process may use disk cache. 3. As a result, if you see `10GB` memory consumption in `htop`, it doesn't mean that your process actually needs `10GB` of RAM to work. --> # Mastering RAM Measurement and Memory Optimization in Qdrant: A Comprehensive Guide When it comes to measuring the memory consumption of our processes, we often rely on tools such as `htop` to give us an indication of how much RAM is being used. However, this method can be misleading and doesn't always accurately reflect the true memory usage of a process. There are many different ways in which `htop` may not be a reliable indicator of memory usage. For instance, a process may allocate memory in advance but not use it, or it may not free deallocated memory, leading to overstated memory consumption. A process may be forked, which means that it will have a separate memory space, but it will share the same code and data with the parent process. This means that the memory consumption of the child process will be counted twice. Additionally, a process may utilize disk cache, which is also accounted as resident memory in the `htop` measurements. As a result, even if `htop` shows that a process is using 10GB of memory, it doesn't necessarily mean that the process actually requires 10GB of RAM to operate efficiently. In this article, we will explore how to properly measure RAM usage and optimize [Qdrant](https://qdrant.tech/) for optimal memory consumption. ## How to measure actual RAM requirements <!-- 1. We need to know how much RAM we need to have for the program to work, so why not just do a straightforward experiment. 2. Let's limit the allowed memory of the process and see at which point the process will working. 3. We can do a grid search, but it is better to apply binary search to find the minimum amount of RAM more quickly. 4. We will use docker to limit the memory usage of the process. 5. Before running docker we will use ``` # Ensure that there is no data in page cache before each benchmark run sudo bash -c 'sync; echo 1 > /proc/sys/vm/drop_caches' ``` to clear the page between runs and make sure that the process doesn't use of the previous runs. --> We need to know memory consumption in order to estimate how much RAM is required to run the program. 
So in order to determine that, we can conduct a simple experiment. Let's limit the allowed memory of the process and observe at which point it stops functioning. In this way we can determine the minimum amount of RAM the program needs to operate. One way to do this is by conducting a grid search, but a more efficient method is to use binary search to quickly find the minimum required amount of RAM. We can use docker to limit the memory usage of the process. Before running each benchmark, it is important to clear the page cache with the following command: ```bash sudo bash -c 'sync; echo 1 > /proc/sys/vm/drop_caches' ``` This ensures that the process doesn't utilize any data from previous runs, providing more accurate and consistent results. We can use the following command to run Qdrant with a memory limit of 1GB: ```bash docker run -it --rm \ --memory 1024mb \ --network=host \ -v "$(pwd)/data/storage:/qdrant/storage" \ qdrant/qdrant:latest ``` ## Let's run some benchmarks Let's run some benchmarks to see how much RAM Qdrant needs to serve 1 million vectors. We can use the `glove-100-angular` and scripts from the [vector-db-benchmark](https://github.com/qdrant/vector-db-benchmark) project to upload and query the vectors. With the first run we will use the default configuration of Qdrant with all data stored in RAM. ```bash # Upload vectors python run.py --engines qdrant-all-in-ram --datasets glove-100-angular ``` After uploading vectors, we will repeat the same experiment with different RAM limits to see how they affect the memory consumption and search speed. ```bash # Search vectors python run.py --engines qdrant-all-in-ram --datasets glove-100-angular --skip-upload ``` <!-- Experiment results: All in memory: 1024mb - out of memory 1512mb - 774.38 rps 1256mb - 760.63 rps 1152mb - out of memory 1200mb - 794.72it/s Conclusion: about 1.2GB is needed to serve ~1 million vectors, no speed degradation with limiting memory above 1.2GB MMAP for vectors: 1200mb - 759.94 rps 1100mb - 687.00 rps 1000mb - 10 rps --- use a bit faster disk --- 1000mb - 25 rps 500mb - out of memory 750mb - 5 rps 625mb - 2.5 rps 575mb - out of memory 600mb - out of memory We can go even lower by using mmap not only for vectors, but also for the index. MMAP for vectors and HNSW graph: 600mb - 5 rps 300mb - 0.9 rps / 1.1 sec per query 150mb - 0.4 rps / 2.5 sec per query 75mb - out of memory 110mb - out of memory 125mb - out of memory 135mb - 0.33 rps / 3 sec per query --> ### All in Memory In the first experiment, we tested how well our system performs when all vectors are stored in memory. We tried using different amounts of memory, ranging from 1512mb to 1024mb, and measured the number of requests per second (rps) that our system was able to handle. | Memory | Requests/s | |--------|---------------| | 1512mb | 774.38 | | 1256mb | 760.63 | | 1200mb | 794.72 | | 1152mb | out of memory | | 1024mb | out of memory | We found that 1152MB memory limit resulted in our system running out of memory, but using 1512mb, 1256mb, and 1200mb of memory resulted in our system being able to handle around 780 RPS. This suggests that about 1.2GB of memory is needed to serve around 1 million vectors, and there is no speed degradation when limiting memory usage above 1.2GB. ### Vectors stored using MMAP Let's go a bit further! In the second experiment, we tested how well our system performs when **vectors are stored using the memory-mapped file** (mmap). Create collection with: ```http PUT /collections/benchmark { "vectors": { ... 
"on_disk": true } } ``` This configuration tells Qdrant to use mmap for vectors if the segment size is greater than 20000Kb (which is approximately 40K 128d-vectors). Now the out-of-memory happens when we allow using **600mb** RAM only <details> <summary>Experiments details</summary> | Memory | Requests/s | |--------|---------------| | 1200mb | 759.94 | | 1100mb | 687.00 | | 1000mb | 10 | --- use a bit faster disk --- | Memory | Requests/s | |--------|---------------| | 1000mb | 25 rps | | 750mb | 5 rps | | 625mb | 2.5 rps | | 600mb | out of memory | </details> At this point we have to switch from network-mounted storage to a faster disk, as the network-based storage is too slow to handle the amount of sequential reads that our system needs to serve the queries. But let's first see how much RAM we need to serve 1 million vectors and then we will discuss the speed optimization as well. ### Vectors and HNSW graph stored using MMAP In the third experiment, we tested how well our system performs when vectors and [HNSW](https://qdrant.tech/articles/filtrable-hnsw/) graph are stored using the memory-mapped files. Create collection with: ```http PUT /collections/benchmark { "vectors": { ... "on_disk": true }, "hnsw_config": { "on_disk": true }, ... } ``` With this configuration we are able to serve 1 million vectors with **only 135mb of RAM**! <details> <summary>Experiments details</summary> | Memory | Requests/s | |--------|---------------| | 600mb | 5 rps | | 300mb | 0.9 rps / 1.1 sec per query | | 150mb | 0.4 rps / 2.5 sec per query | | 135mb | 0.33 rps / 3 sec per query | | 125mb | out of memory | </details> At this point the importance of the disk speed becomes critical. We can serve the search requests with 135mb of RAM, but the speed of the requests makes it impossible to use the system in production. Let's see how we can improve the speed. ## How to speed up the search <!-- We need to look into disk parameters and see how they affect the search speed. Let's measure the disk speed with `fio`: ``` fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=fiotest --filename=testfio --bs=4k --iodepth=64 --size=8G --readwrite=randread ``` Initially we tested on network-mounted disk, but it was too slow: ``` read: IOPS=6366, BW=24.9MiB/s (26.1MB/s)(8192MiB/329424msec) ``` So we switched to default local disk: ``` read: IOPS=63.2k, BW=247MiB/s (259MB/s)(8192MiB/33207msec) ``` Let's now try it on a machine with local SSD and see if it affects the search speed: ``` read: IOPS=183k, BW=716MiB/s (751MB/s)(8192MiB/11438msec) ``` We can use faster disk to speed up the search. Here are the results: 600mb - 50 rps 300mb - 13 rps 200md - 8 rps 150mb - 7 rps --> To measure the impact of disk parameters on search speed, we used the `fio` tool to test the speed of different types of disks. 
```bash # Install fio sudo apt-get install fio # Run fio to check the random reads speed fio --randrepeat=1 \ --ioengine=libaio \ --direct=1 \ --gtod_reduce=1 \ --name=fiotest \ --filename=testfio \ --bs=4k \ --iodepth=64 \ --size=8G \ --readwrite=randread ``` Initially, we tested on a network-mounted disk, but its performance was too slow, with a read IOPS of 6366 and a bandwidth of 24.9 MiB/s: ```text read: IOPS=6366, BW=24.9MiB/s (26.1MB/s)(8192MiB/329424msec) ``` To improve performance, we switched to a local disk, which showed much faster results, with a read IOPS of 63.2k and a bandwidth of 247 MiB/s: ```text read: IOPS=63.2k, BW=247MiB/s (259MB/s)(8192MiB/33207msec) ``` That gave us a significant speed boost, but we wanted to see if we could improve performance even further. To do that, we switched to a machine with a local SSD, which showed even better results, with a read IOPS of 183k and a bandwidth of 716 MiB/s: ```text read: IOPS=183k, BW=716MiB/s (751MB/s)(8192MiB/11438msec) ``` Let's see how these results translate into search speed: | Memory | RPS with IOPS=63.2k | RPS with IOPS=183k | |--------|---------------------|--------------------| | 600mb | 5 | 50 | | 300mb | 0.9 | 13 | | 200mb | 0.5 | 8 | | 150mb | 0.4 | 7 | As you can see, the speed of the disk has a significant impact on the search speed. With a local SSD, we were able to increase the search speed by 10x! With the production-grade disk, the search speed could be even higher. Some configurations of the SSDs can reach 1M IOPS and more. Which might be an interesting option to serve large datasets with low search latency in Qdrant. ## Conclusion In this article, we showed that Qdrant has flexibility in terms of RAM usage and can be used to serve large datasets. It provides configurable trade-offs between RAM usage and search speed. If you’re interested to learn more about Qdrant, [book a demo today](https://qdrant.tech/contact-us/)! We are eager to learn more about how you use Qdrant in your projects, what challenges you face, and how we can help you solve them. Please feel free to join our [Discord](https://qdrant.to/discord) and share your experience with us!
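As a practical footnote, the binary search over memory limits described at the beginning of this article is easy to script. The sketch below assumes a hypothetical `run_benchmark.sh` wrapper that starts the Qdrant container with the given `--memory` limit, runs the query benchmark, and exits with a non-zero status on failure or out-of-memory; adjust it to your own setup.

```bash
#!/usr/bin/env bash
# Minimal sketch of a binary search for the smallest working memory limit.
# Assumption: ./run_benchmark.sh wraps the `docker run` and `python run.py`
# commands shown earlier and fails (non-zero exit code) when the container
# runs out of memory.
low=64    # a limit known to fail, in MB
high=2048 # a limit known to work, in MB

while (( high - low > 16 )); do
    mid=$(( (low + high) / 2 ))
    # Clear the page cache so previous runs do not skew the result
    sudo bash -c 'sync; echo 1 > /proc/sys/vm/drop_caches'
    if ./run_benchmark.sh --memory "${mid}mb"; then
        high=$mid   # the benchmark succeeded, try a smaller limit
    else
        low=$mid    # out of memory, raise the lower bound
    fi
done

echo "Minimal working memory limit is roughly ${high}mb"
```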
articles/memory-consumption.md
--- title: "Vector Search as a dedicated service" short_description: "Why vector search requires to be a dedicated service." description: "Why vector search requires a dedicated service." social_preview_image: /articles_data/dedicated-service/social-preview.png small_preview_image: /articles_data/dedicated-service/preview/icon.svg preview_dir: /articles_data/dedicated-service/preview weight: -70 author: Andrey Vasnetsov author_link: https://vasnetsov.com/ date: 2023-11-30T10:00:00+03:00 draft: false keywords: - system architecture - vector search - best practices - anti-patterns --- Ever since the data science community discovered that vector search significantly improves LLM answers, various vendors and enthusiasts have been arguing over the proper solutions to store embeddings. Some say storing them in a specialized engine (aka vector database) is better. Others say that it's enough to use plugins for existing databases. Here are [just](https://nextword.substack.com/p/vector-database-is-not-a-separate) a [few](https://stackoverflow.blog/2023/09/20/do-you-need-a-specialized-vector-database-to-implement-vector-search-well/) of [them](https://www.singlestore.com/blog/why-your-vector-database-should-not-be-a-vector-database/). This article presents our vision and arguments on the topic . We will: 1. Explain why and when you actually need a dedicated vector solution 2. Debunk some ungrounded claims and anti-patterns to be avoided when building a vector search system. A table of contents: * *Each database vendor will sooner or later introduce vector capabilities...* [[click](#each-database-vendor-will-sooner-or-later-introduce-vector-capabilities-that-will-make-every-database-a-vector-database)] * *Having a dedicated vector database requires duplication of data.* [[click](#having-a-dedicated-vector-database-requires-duplication-of-data)] * *Having a dedicated vector database requires complex data synchronization.* [[click](#having-a-dedicated-vector-database-requires-complex-data-synchronization)] * *You have to pay for a vector service uptime and data transfer.* [[click](#you-have-to-pay-for-a-vector-service-uptime-and-data-transfer-of-both-solutions)] * *What is more seamless than your current database adding vector search capability?* [[click](#what-is-more-seamless-than-your-current-database-adding-vector-search-capability)] * *Databases can support RAG use-case end-to-end.* [[click](#databases-can-support-rag-use-case-end-to-end)] ## Responding to claims ###### Each database vendor will sooner or later introduce vector capabilities. That will make every database a Vector Database. The origins of this misconception lie in the careless use of the term Vector *Database*. When we think of a *database*, we subconsciously envision a relational database like Postgres or MySQL. Or, more scientifically, a service built on ACID principles that provides transactions, strong consistency guarantees, and atomicity. The majority of Vector Database are not *databases* in this sense. It is more accurate to call them *search engines*, but unfortunately, the marketing term *vector database* has already stuck, and it is unlikely to change. *What makes search engines different, and why vector DBs are built as search engines?* First of all, search engines assume different patterns of workloads and prioritize different properties of the system. The core architecture of such solutions is built around those priorities. What types of properties do search engines prioritize? * **Scalability**. 
Search engines are built to handle large amounts of data and queries. They are designed to be horizontally scalable and operate with more data than can fit into a single machine. * **Search speed**. Search engines should guarantee low latency for queries, while the atomicity of updates is less important. * **Availability**. Search engines must stay available if the majority of the nodes in a cluster are down. At the same time, they can tolerate the eventual consistency of updates. {{< figure src=/articles_data/dedicated-service/compass.png caption="Database guarantees compass" width=80% >}} Those priorities lead to different architectural decisions that are not reproducible in a general-purpose database, even if it has vector index support. ###### Having a dedicated vector database requires duplication of data. By their very nature, vector embeddings are derivatives of the primary source data. In the vast majority of cases, embeddings are derived from some other data, such as text, images, or additional information stored in your system. So, in fact, all embeddings you have in your system can be considered transformations of some original source. And the distinguishing feature of derivative data is that it will change when the transformation pipeline changes. In the case of vector embeddings, the scenario of those changes is quite simple: every time you update the encoder model, all the embeddings will change. In systems where vector embeddings are fused with the primary data source, it is impossible to perform such migrations without significantly affecting the production system. As a result, even if you want to use a single database for storing all kinds of data, you would still need to duplicate data internally. ###### Having a dedicated vector database requires complex data synchronization. Most production systems prefer to isolate different types of workloads into separate services. In many cases, those isolated services are not even related to search use cases. For example, databases for analytics and one for serving can be updated from the same source. Yet they can store and organize the data in a way that is optimal for their typical workloads. Search engines are usually isolated for the same reason: you want to avoid creating a noisy neighbor problem and compromise the performance of your main database. *To give you some intuition, let's consider a practical example:* Assume we have a database with 1 million records. This is a small database by modern standards of any relational database. You can probably use the smallest free tier of any cloud provider to host it. But if we want to use this database for vector search, 1 million OpenAI `text-embedding-ada-002` embeddings will take **~6GB of RAM** (sic!). As you can see, the vector search use case completely overwhelmed the main database resource requirements. In practice, this means that your main database becomes burdened with high memory requirements and can not scale efficiently, limited by the size of a single machine. Fortunately, the data synchronization problem is not new and definitely not unique to vector search. There are many well-known solutions, starting with message queues and ending with specialized ETL tools. For example, we recently released our [integration with Airbyte](/documentation/integrations/airbyte/), allowing you to synchronize data from various sources into Qdrant incrementally. ###### You have to pay for a vector service uptime and data transfer of both solutions. 
In the open-source world, you pay for the resources you use, not the number of different databases you run. Resources depend more on the optimal solution for each use case. As a result, running a dedicated vector search engine can be even cheaper, as it allows optimization specifically for vector search use cases. For instance, Qdrant implements a number of [quantization techniques](/documentation/guides/quantization/) that can significantly reduce the memory footprint of embeddings. In terms of data transfer costs, on most cloud providers, network use within a region is usually free. As long as you put the original source data and the vector store in the same region, there are no added data transfer costs. ###### What is more seamless than your current database adding vector search capability? In contrast to the short-term attractiveness of integrated solutions, dedicated search engines propose flexibility and a modular approach. You don't need to update the whole production database each time some of the vector plugins are updated. Maintenance of a dedicated search engine is as isolated from the main database as the data itself. In fact, integration of more complex scenarios, such as read/write segregation, is much easier with a dedicated vector solution. You can easily build cross-region replication to ensure low latency for your users. {{< figure src=/articles_data/dedicated-service/region-based-deploy.png caption="Read/Write segregation + cross-regional deployment" width=80% >}} It is especially important in large enterprise organizations, where the responsibility for different parts of the system is distributed among different teams. In those situations, it is much easier to maintain a dedicated search engine for the AI team than to convince the core team to update the whole primary database. Finally, the vector capabilities of the all-in-one database are tied to the development and release cycle of the entire stack. Their long history of use also means that they need to pay a high price for backward compatibility. ###### Databases can support RAG use-case end-to-end. Putting aside performance and scalability questions, the whole discussion about implementing RAG in the DBs assumes that the only detail missing in traditional databases is the vector index and the ability to make fast ANN queries. In fact, the current capabilities of vector search have only scratched the surface of what is possible. For example, in our recent article, we discuss the possibility of building an [exploration API](/articles/vector-similarity-beyond-search/) to fuel the discovery process - an alternative to kNN search, where you don’t even know what exactly you are looking for. ## Summary Ultimately, you do not need a vector database if you are looking for a simple vector search functionality with a small amount of data. We genuinely recommend starting with whatever you already have in your stack to prototype. But you need one if you are looking to do more out of it, and it is the central functionality of your application. It is just like using a multi-tool to make something quick or using a dedicated instrument highly optimized for the use case. Large-scale production systems usually consist of different specialized services and storage types for good reasons since it is one of the best practices of modern software architecture. Comparable to the orchestration of independent building blocks in a microservice architecture. 
When you stuff a general-purpose database with a vector index, you compromise both the performance and scalability of the main database and the quality of its vector search. There is no one-size-fits-all approach that does not trade away performance or flexibility. So if your use case relies on vector search in any significant way, it is worth investing in a dedicated vector search engine, aka a vector database.
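For reference, the ~6 GB figure quoted earlier is simple arithmetic: 1,000,000 vectors × 1,536 dimensions × 4 bytes per `float32` value ≈ 6.1 GB (about 5.7 GiB) of raw vector data alone, before any index overhead.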
articles/dedicated-service.md
--- title: Triplet Loss - Advanced Intro short_description: "What are the advantages of Triplet Loss and how to efficiently implement it?" description: "What are the advantages of Triplet Loss over Contrastive loss and how to efficiently implement it?" social_preview_image: /articles_data/triplet-loss/social_preview.jpg preview_dir: /articles_data/triplet-loss/preview small_preview_image: /articles_data/triplet-loss/icon.svg weight: 30 author: Yusuf Sarıgöz author_link: https://medium.com/@yusufsarigoz date: 2022-03-24T15:12:00+03:00 # aliases: [ /articles/triplet-loss/ ] --- ## What is Triplet Loss? Triplet Loss was first introduced in [FaceNet: A Unified Embedding for Face Recognition and Clustering](https://arxiv.org/abs/1503.03832) in 2015, and it has been one of the most popular loss functions for supervised similarity or metric learning ever since. In its simplest explanation, Triplet Loss encourages that dissimilar pairs be distant from any similar pairs by at least a certain margin value. Mathematically, the loss value can be calculated as $L=max(d(a,p) - d(a,n) + m, 0)$, where: - $p$, i.e., positive, is a sample that has the same label as $a$, i.e., anchor, - $n$, i.e., negative, is another sample that has a label different from $a$, - $d$ is a function to measure the distance between these three samples, - and $m$ is a margin value to keep negative samples far apart. The paper uses Euclidean distance, but it is equally valid to use any other distance metric, e.g., cosine distance. The function has a learning objective that can be visualized as in the following: {{< figure src=/articles_data/triplet-loss/loss_objective.png caption="Triplet Loss learning objective" >}} Notice that Triplet Loss does not have a side effect of urging to encode anchor and positive samples into the same point in the vector space as in Contrastive Loss. This lets Triplet Loss tolerate some intra-class variance, unlike Contrastive Loss, as the latter forces the distance between an anchor and any positive essentially to $0$. In other terms, Triplet Loss allows to stretch clusters in such a way as to include outliers while still ensuring a margin between samples from different clusters, e.g., negative pairs. Additionally, Triplet Loss is less greedy. Unlike Contrastive Loss, it is already satisfied when different samples are easily distinguishable from similar ones. It does not change the distances in a positive cluster if there is no interference from negative examples. This is due to the fact that Triplet Loss tries to ensure a margin between distances of negative pairs and distances of positive pairs. However, Contrastive Loss takes into account the margin value only when comparing dissimilar pairs, and it does not care at all where similar pairs are at that moment. This means that Contrastive Loss may reach a local minimum earlier, while Triplet Loss may continue to organize the vector space in a better state. Let's demonstrate how two loss functions organize the vector space by animations. For simpler visualization, the vectors are represented by points in a 2-dimensional space, and they are selected randomly from a normal distribution. {{< figure src=/articles_data/triplet-loss/contrastive.gif caption="Animation that shows how Contrastive Loss moves points in the course of training." >}} {{< figure src=/articles_data/triplet-loss/triplet.gif caption="Animation that shows how Triplet Loss moves points in the course of training." 
>}} From mathematical interpretations of the two-loss functions, it is clear that Triplet Loss is theoretically stronger, but Triplet Loss has additional tricks that help it work better. Most importantly, Triplet Loss introduce online triplet mining strategies, e.g., automatically forming the most useful triplets. ## Why triplet mining matters? The formulation of Triplet Loss demonstrates that it works on three objects at a time: - `anchor`, - `positive` - a sample that has the same label as the anchor, - and `negative` - a sample with a different label from the anchor and the positive. In a naive implementation, we could form such triplets of samples at the beginning of each epoch and then feed batches of such triplets to the model throughout that epoch. This is called "offline strategy." However, this would not be so efficient for several reasons: - It needs to pass $3n$ samples to get a loss value of $n$ triplets. - Not all these triplets will be useful for the model to learn anything, e.g., yielding a positive loss value. - Even if we form "useful" triplets at the beginning of each epoch with one of the methods that I will be implementing in this series, they may become "useless" at some point in the epoch as the model weights will be constantly updated. Instead, we can get a batch of $n$ samples and their associated labels, and form triplets on the fly. That is called "online strategy." Normally, this gives $n^3$ possible triplets, but only a subset of such possible triplets will be actually valid. Even in this case, we will have a loss value calculated from much more triplets than the offline strategy. Given a triplet of `(a, p, n)`, it is valid only if: - `a` and `p` has the same label, - `a` and `p` are distinct samples, - and `n` has a different label from `a` and `p`. These constraints may seem to be requiring expensive computation with nested loops, but it can be efficiently implemented with tricks such as distance matrix, masking, and broadcasting. The rest of this series will focus on the implementation of these tricks. ## Distance matrix A distance matrix is a matrix of shape $(n, n)$ to hold distance values between all possible pairs made from items in two $n$-sized collections. This matrix can be used to vectorize calculations that would need inefficient loops otherwise. Its calculation can be optimized as well, and we will implement [Euclidean Distance Matrix Trick (PDF)](https://www.robots.ox.ac.uk/~albanie/notes/Euclidean_distance_trick.pdf) explained by Samuel Albanie. You may want to read this three-page document for the full intuition of the trick, but a brief explanation is as follows: - Calculate the dot product of two collections of vectors, e.g., embeddings in our case. - Extract the diagonal from this matrix that holds the squared Euclidean norm of each embedding. - Calculate the squared Euclidean distance matrix based on the following equation: $||a - b||^2 = ||a||^2 - 2 ⟨a, b⟩ + ||b||^2$ - Get the square root of this matrix for non-squared distances. We will implement it in PyTorch, so let's start with imports. 
```python import torch import torch.nn as nn import torch.nn.functional as F eps = 1e-8 # an arbitrary small value to be used for numerical stability tricks ``` --- ```python def euclidean_distance_matrix(x): """Efficient computation of Euclidean distance matrix Args: x: Input tensor of shape (batch_size, embedding_dim) Returns: Distance matrix of shape (batch_size, batch_size) """ # step 1 - compute the dot product # shape: (batch_size, batch_size) dot_product = torch.mm(x, x.t()) # step 2 - extract the squared Euclidean norm from the diagonal # shape: (batch_size,) squared_norm = torch.diag(dot_product) # step 3 - compute squared Euclidean distances # shape: (batch_size, batch_size) distance_matrix = squared_norm.unsqueeze(0) - 2 * dot_product + squared_norm.unsqueeze(1) # get rid of negative distances due to numerical instabilities distance_matrix = F.relu(distance_matrix) # step 4 - compute the non-squared distances # handle numerical stability # derivative of the square root operation applied to 0 is infinite # we need to handle by setting any 0 to eps mask = (distance_matrix == 0.0).float() # use this mask to set indices with a value of 0 to eps distance_matrix += mask * eps # now it is safe to get the square root distance_matrix = torch.sqrt(distance_matrix) # undo the trick for numerical stability distance_matrix *= (1.0 - mask) return distance_matrix ``` ## Invalid triplet masking Now that we can compute a distance matrix for all possible pairs of embeddings in a batch, we can apply broadcasting to enumerate distance differences for all possible triplets and represent them in a tensor of shape `(batch_size, batch_size, batch_size)`. However, only a subset of these $n^3$ triplets are actually valid as I mentioned earlier, and we need a corresponding mask to compute the loss value correctly. We will implement such a helper function in three steps: - Compute a mask for distinct indices, e.g., `(i != j and j != k)`. - Compute a mask for valid anchor-positive-negative triplets, e.g., `labels[i] == labels[j] and labels[j] != labels[k]`. - Combine two masks. ```python def get_triplet_mask(labels): """compute a mask for valid triplets Args: labels: Batch of integer labels. shape: (batch_size,) Returns: Mask tensor to indicate which triplets are actually valid. Shape: (batch_size, batch_size, batch_size) A triplet is valid if: `labels[i] == labels[j] and labels[i] != labels[k]` and `i`, `j`, `k` are different. 
""" # step 1 - get a mask for distinct indices # shape: (batch_size, batch_size) indices_equal = torch.eye(labels.size()[0], dtype=torch.bool, device=labels.device) indices_not_equal = torch.logical_not(indices_equal) # shape: (batch_size, batch_size, 1) i_not_equal_j = indices_not_equal.unsqueeze(2) # shape: (batch_size, 1, batch_size) i_not_equal_k = indices_not_equal.unsqueeze(1) # shape: (1, batch_size, batch_size) j_not_equal_k = indices_not_equal.unsqueeze(0) # Shape: (batch_size, batch_size, batch_size) distinct_indices = torch.logical_and(torch.logical_and(i_not_equal_j, i_not_equal_k), j_not_equal_k) # step 2 - get a mask for valid anchor-positive-negative triplets # shape: (batch_size, batch_size) labels_equal = labels.unsqueeze(0) == labels.unsqueeze(1) # shape: (batch_size, batch_size, 1) i_equal_j = labels_equal.unsqueeze(2) # shape: (batch_size, 1, batch_size) i_equal_k = labels_equal.unsqueeze(1) # shape: (batch_size, batch_size, batch_size) valid_indices = torch.logical_and(i_equal_j, torch.logical_not(i_equal_k)) # step 3 - combine two masks mask = torch.logical_and(distinct_indices, valid_indices) return mask ``` ## Batch-all strategy for online triplet mining Now we are ready for actually implementing Triplet Loss itself. Triplet Loss involves several strategies to form or select triplets, and the simplest one is to use all valid triplets that can be formed from samples in a batch. This can be achieved in four easy steps thanks to utility functions we've already implemented: - Get a distance matrix of all possible pairs that can be formed from embeddings in a batch. - Apply broadcasting to this matrix to compute loss values for all possible triplets. - Set loss values of invalid or easy triplets to $0$. - Average the remaining positive values to return a scalar loss. I will start by implementing this strategy, and more complex ones will follow as separate posts. ```python class BatchAllTtripletLoss(nn.Module): """Uses all valid triplets to compute Triplet loss Args: margin: Margin value in the Triplet Loss equation """ def __init__(self, margin=1.): super().__init__() self.margin = margin def forward(self, embeddings, labels): """computes loss value. Args: embeddings: Batch of embeddings, e.g., output of the encoder. shape: (batch_size, embedding_dim) labels: Batch of integer labels associated with embeddings. shape: (batch_size,) Returns: Scalar loss value. 
""" # step 1 - get distance matrix # shape: (batch_size, batch_size) distance_matrix = euclidean_distance_matrix(embeddings) # step 2 - compute loss values for all triplets by applying broadcasting to distance matrix # shape: (batch_size, batch_size, 1) anchor_positive_dists = distance_matrix.unsqueeze(2) # shape: (batch_size, 1, batch_size) anchor_negative_dists = distance_matrix.unsqueeze(1) # get loss values for all possible n^3 triplets # shape: (batch_size, batch_size, batch_size) triplet_loss = anchor_positive_dists - anchor_negative_dists + self.margin # step 3 - filter out invalid or easy triplets by setting their loss values to 0 # shape: (batch_size, batch_size, batch_size) mask = get_triplet_mask(labels) triplet_loss *= mask # easy triplets have negative loss values triplet_loss = F.relu(triplet_loss) # step 4 - compute scalar loss value by averaging positive losses num_positive_losses = (triplet_loss > eps).float().sum() triplet_loss = triplet_loss.sum() / (num_positive_losses + eps) return triplet_loss ``` ## Conclusion I mentioned that Triplet Loss is different from Contrastive Loss not only mathematically but also in its sample selection strategies, and I implemented the batch-all strategy for online triplet mining in this post efficiently by using several tricks. There are other more complicated strategies such as batch-hard and batch-semihard mining, but their implementations, and discussions of the tricks I used for efficiency in this post, are worth separate posts of their own. The future posts will cover such topics and additional discussions on some tricks to avoid vector collapsing and control intra-class and inter-class variance.
articles/triplet-loss.md
--- title: "Qdrant Internals: Immutable Data Structures" short_description: "Learn how immutable data structures improve vector search performance in Qdrant." description: "Learn how immutable data structures improve vector search performance in Qdrant." social_preview_image: /articles_data/immutable-data-structures/social_preview.png preview_dir: /articles_data/immutable-data-structures/preview weight: -200 author: Andrey Vasnetsov date: 2024-08-20T10:45:00+02:00 draft: false keywords: - data structures - optimization - immutable data structures - perfect hashing - defragmentation --- ## Data Structures 101 Those who took programming courses might remember that there is no such thing as a universal data structure. Some structures are good at accessing elements by index (like arrays), while others shine in terms of insertion efficiency (like linked lists). {{< figure src="/articles_data/immutable-data-structures/hardware-optimized.png" alt="Hardware-optimized data structure" caption="Hardware-optimized data structure" width="80%" >}} However, when we move from theoretical data structures to real-world systems, and particularly in performance-critical areas such as [vector search](/use-cases/), things become more complex. [Big-O notation](https://en.wikipedia.org/wiki/Big_O_notation) provides a good abstraction, but it doesn’t account for the realities of modern hardware: cache misses, memory layout, disk I/O, and other low-level considerations that influence actual performance. > From the perspective of hardware efficiency, the ideal data structure is a contiguous array of bytes that can be read sequentially in a single thread. This scenario allows hardware optimizations like prefetching, caching, and branch prediction to operate at their best. However, real-world use cases require more complex structures to perform various operations like insertion, deletion, and search. These requirements increase complexity and introduce performance trade-offs. ### Mutability One of the most significant challenges when working with data structures is ensuring **mutability — the ability to change the data structure after it’s created**, particularly with fast update operations. Let’s consider a simple example: we want to iterate over items in sorted order. Without a mutability requirement, we can use a simple array and sort it once. This is very close to our ideal scenario. We can even put the structure on disk - which is trivial for an array. However, if we need to insert an item into this array, **things get more complicated**. Inserting into a sorted array requires shifting all elements after the insertion point, which leads to linear time complexity for each insertion, which is not acceptable for many applications. To handle such cases, more complex structures like [B-trees](https://en.wikipedia.org/wiki/B-tree) come into play. B-trees are specifically designed to optimize both insertion and read operations for large data sets. However, they sacrifice the raw speed of array reads for better insertion performance. 
Here’s a benchmark that illustrates the difference between iterating over a plain array and a BTreeSet in Rust: ```rust use std::collections::BTreeSet; use rand::Rng; fn main() { // Benchmark plain vector VS btree in a task of iteration over all elements let mut rand = rand::thread_rng(); let vector: Vec<_> = (0..1000000).map(|_| rand.gen::<u64>()).collect(); let btree: BTreeSet<_> = vector.iter().copied().collect(); { let mut sum = 0; for el in vector { sum += el; } } // Elapsed: 850.924µs { let mut sum = 0; for el in btree { sum += el; } } // Elapsed: 5.213025ms, ~6x slower } ``` [Vector databases](https://qdrant.tech/), like Qdrant, have to deal with a large variety of data structures. If we could make them immutable, it would significantly improve performance and optimize memory usage. ## How Does Immutability Help? A large part of the immutable advantage comes from the fact that we know the exact data we need to put into the structure even before we start building it. The simplest example is a sorted array: we would know exactly how many elements we have to put into the array so we can allocate the exact amount of memory once. More complex data structures might require additional statistics to be collected before the structure is built. A Qdrant-related example of this is [Scalar Quantization](/articles/scalar-quantization/#conversion-to-integers): in order to select proper quantization levels, we have to know the distribution of the data. {{< figure src="/articles_data/immutable-data-structures/quantization-quantile.png" alt="Scalar Quantization Quantile" caption="Scalar Quantization Quantile" width="70%" >}} Computing this distribution requires knowing all the data in advance, but once we have it, applying scalar quantization is a simple operation. Let's take a look at a non-exhaustive list of data structures and potential improvements we can get from making them immutable: |Function| Mutable Data Structure | Immutable Alternative | Potential improvements | |----|------|------|------------------------| | Read by index | Array | Fixed chunk of memory | Allocate exact amount of memory | | Vector Storage | Array or Arrays | Memory-mapped file | Offload data to disk | | Read sorted ranges| B-Tree | Sorted Array | Store all data close, avoid cache misses | | Read by key | Hash Map | Hash Map with Perfect Hashing | Avoid hash collisions | | Get documents by keyword | Inverted Index | Inverted Index with Sorted </br> and BitPacked Postings | Less memory usage, faster search | | Vector Search | HNSW graph | HNSW graph with </br> payload-aware connections | Better precision with filters | | Tenant Isolation | Vector Storage | Defragmented Vector Storage | Faster access to on-disk data | For more info on payload-aware connections in HNSW, read our [previous article](/articles/filtrable-hnsw/). This time around, we will focus on the latest additions to Qdrant: - **the immutable hash map with perfect hashing** - **defragmented vector storage**. ### Perfect Hashing A hash table is one of the most commonly used data structures implemented in almost every programming language, including Rust. It provides fast access to elements by key, with an average time complexity of O(1) for read and write operations. There is, however, the assumption that should be satisfied for the hash table to work efficiently: *hash collisions should not cause too much overhead*. In a hash table, each key is mapped to a "bucket," a slot where the value is stored. 
When different keys map to the same bucket, a collision occurs. In regular mutable hash tables, minimization of collisions is achieved by: * making the number of buckets bigger so the probability of collision is lower * using a linked list or a tree to store multiple elements with the same hash However, these strategies have overheads, which become more significant if we consider using high-latency storage like disk. Indeed, every read operation from disk is several orders of magnitude slower than reading from RAM, so we want to know the correct location of the data from the first attempt. In order to achieve this, we can use a so-called minimal perfect hash function (MPHF). This special type of hash function is constructed specifically for a given set of keys, and it guarantees no collisions while using minimal amount of buckets. In Qdrant, we decided to use *fingerprint-based minimal perfect hash function* implemented in the [ph crate 🦀](https://crates.io/crates/ph) by [Piotr Beling](https://dl.acm.org/doi/10.1145/3596453). According to our benchmarks, using the perfect hash function does introduce some overhead in terms of hashing time, but it significantly reduces the time for the whole operation: | Volume | `ph::Function` | `std::hash::Hash` | `HashMap::get`| |--------|----------------|-------------------|---------------| | 1000 | 60ns | ~20ns | 34ns | | 100k | 90ns | ~20ns | 220ns | | 10M | 238ns | ~20ns | 500ns | Even thought the absolute time for hashing is higher, the time for the whole operation is lower, because PHF guarantees no collisions. The difference is even more significant when we consider disk read time, which might up to several milliseconds (10^6 ns). PHF RAM size scales linearly for `ph::Function`: 3.46 kB for 10k elements, 119MB for 350M elements. The construction time required to build the hash function is surprisingly low, and we only need to do it once: | Volume | `ph::Function` (construct) | PHF size | Size of int64 keys (for reference) | |--------|----------------------------|----------|------------------------------------| | 1M | 52ms | 0.34Mb | 7.62Mb | | 100M | 7.4s | 33.7Mb | 762.9Mb | The usage of PHF in Qdrant lets us minimize the latency of cold reads, which is especially important for large-scale multi-tenant systems. With PHF, it is enough to read a single page from a disk to get the exact location of the data. ### Defragmentation When you read data from a disk, you almost never read a single byte. Instead, you read a page, which is a fixed-size chunk of data. On many systems, the page size is 4KB, which means that every read operation will read 4KB of data, even if you only need a single byte. Vector search, on the other hand, requires reading a lot of small vectors, which might create a large overhead. It is especially noticeable if we use binary quantization, where the size of even large OpenAI 1536d vectors is compressed down to **192 bytes**. {{< figure src="/articles_data/immutable-data-structures/page-vector.png" alt="Overhead when reading a single vector" caption="Overhead when reading single vector" width="80%" >}} That means if the vectors we access during the search are randomly scattered across the disk, we will have to read 4KB for each vector, which is 20 times more than the actual data size. There is, however, a simple way to avoid this overhead: **defragmentation**. If we knew some additional information about the data, we could combine all relevant vectors into a single page. 
{{< figure src="/articles_data/immutable-data-structures/defragmentation.png" alt="Defragmentation" caption="Defragmentation" width="70%" >}} This additional information is available to Qdrant via the [payload index](/documentation/concepts/indexing/#payload-index). By specifying the payload index, which is going to be used for filtering most of the time, we can put all vectors with the same payload together. This way, reading a single page will also read nearby vectors, which will be used in the search. This approach is especially efficient for [multi-tenant systems](/documentation/guides/multiple-partitions/), where only a small subset of vectors is actively used for search. The capacity of such a deployment is typically defined by the size of the hot subset, which is much smaller than the total number of vectors. > Grouping relevant vectors together allows us to optimize the size of the hot subset by avoiding caching of irrelevant data. The following benchmark data compares RPS for defragmented and non-defragmented storage: | % of hot subset | Tenant Size (vectors) | RPS, Non-defragmented | RPS, Defragmented | |-----------------|-----------------------|-----------------------|-------------------| | 2.5% | 50k | 1.5 | 304 | | 12.5% | 50k | 0.47 | 279 | | 25% | 50k | 0.4 | 63 | | 50% | 50k | 0.3 | 8 | | 2.5% | 5k | 56 | 490 | | 12.5% | 5k | 5.8 | 488 | | 25% | 5k | 3.3 | 490 | | 50% | 5k | 3.1 | 480 | | 75% | 5k | 2.9 | 130 | | 100% | 5k | 2.7 | 95 | **Dataset size:** 2M 768d vectors (~6Gb Raw data), binary quantization, 650Mb of RAM limit. All benchmarks are made with minimal RAM allocation to demonstrate disk cache efficiency. As you can see, the biggest impact is on the small tenant size, where defragmentation allows us to achieve **100x more RPS**. Of course, the real-world impact of defragmentation depends on the specific workload and the size of the hot subset, but enabling this feature can significantly improve the performance of Qdrant. Please find more details on how to enable defragmentation in the [indexing documentation](/documentation/concepts/indexing/#tenant-index). ## Updating Immutable Data Structures One may wonder how Qdrant allows updating collection data if everything is immutable. Indeed, [Qdrant API](https://api.qdrant.tech) allows the change of any vector or payload at any time, so from the user's perspective, the whole collection is mutable at any time. As it usually happens with every decent magic trick, the secret is disappointingly simple: not all data in Qdrant is immutable. In Qdrant, storage is divided into segments, which might be either mutable or immutable. New data is always written to the mutable segment, which is later converted to the immutable one by the optimization process. {{< figure src="/articles_data/immutable-data-structures/optimization.png" alt="Optimization process" caption="Optimization process" width="80%" >}} If we need to update the data in the immutable or currenly optimized segment, instead of changing the data in place, we perform a copy-on-write operation, move the data to the mutable segment, and update it there. Data in the original segment is marked as deleted, and later vacuumed by the optimization process. ## Downsides and How to Compensate While immutable data structures are great for read-heavy operations, they come with trade-offs: - **Higher update costs:** Immutable structures are less efficient for updates. The amortized time complexity might be the same as mutable structures, but the constant factor is higher. 
- **Rebuilding overhead:** In some cases, we may need to rebuild indices or structures for the same data more than once. - **Read-heavy workloads:** Immutability assumes a search-heavy workload, which is typical for search engines but not for all applications. In Qdrant, we mitigate these downsides by allowing the user to adapt the system to their specific workload. For example, changing the default size of the segment might help to reduce the overhead of rebuilding indices. In extreme cases, multi-segment storage can act as a single segment, falling back to the mutable data structure when needed. ## Conclusion Immutable data structures, while tricky to implement correctly, offer significant performance gains, especially for read-heavy systems like search engines. They allow us to take full advantage of hardware optimizations, reduce memory overhead, and improve cache performance. In Qdrant, the combination of techniques like perfect hashing and defragmentation brings further benefits, making our vector search operations faster and more efficient. While there are trade-offs, the flexibility of Qdrant’s architecture — including segment-based storage — allows us to balance the best of both worlds.
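To make the "sorted array instead of a B-tree" row from the table above more tangible, here is a small, self-contained Rust sketch of an index that is built once from fully known data and never mutated afterwards. It is purely illustrative and not Qdrant's actual implementation.

```rust
/// Illustrative immutable index: built once from data known in advance.
struct ImmutableIndex {
    keys: Vec<u64>, // sorted, allocated exactly once, contiguous in memory
}

impl ImmutableIndex {
    fn build(mut keys: Vec<u64>) -> Self {
        keys.sort_unstable();
        keys.shrink_to_fit(); // exact allocation, no spare capacity
        Self { keys }
    }

    fn contains(&self, key: u64) -> bool {
        self.keys.binary_search(&key).is_ok()
    }

    /// Sorted range read: a sequential scan over a single allocation.
    fn range(&self, from: u64, to: u64) -> &[u64] {
        let start = self.keys.partition_point(|&k| k < from);
        let end = self.keys.partition_point(|&k| k < to);
        &self.keys[start..end]
    }
}

fn main() {
    let index = ImmutableIndex::build(vec![42, 7, 13, 99, 7]);
    assert!(index.contains(13));
    println!("range [10, 100): {:?}", index.range(10, 100));
}
```

Because the keys are sorted and stored contiguously, range reads here are sequential scans over a single allocation, which is exactly the hardware-friendly access pattern discussed at the start of this article.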
articles/immutable-data-structures.md
--- title: "Qdrant 1.8.0: Enhanced Search Capabilities for Better Results" draft: false slug: qdrant-1.8.x short_description: "Faster sparse vectors.Optimized indexation. Optional CPU resource management." description: "Explore the latest in search technology with Qdrant 1.8.0! Discover faster performance, smarter indexing, and enhanced search capabilities." social_preview_image: /articles_data/qdrant-1.8.x/social_preview.png small_preview_image: /articles_data/qdrant-1.8.x/icon.svg preview_dir: /articles_data/qdrant-1.8.x/preview weight: -140 date: 2024-03-06T00:00:00-08:00 author: David Myriel, Mike Jang featured: false tags: - vector search - new features - sparse vectors - hybrid search - CPU resource management - text field index --- # Unlocking Next-Level Search: Exploring Qdrant 1.8.0's Advanced Search Capabilities [Qdrant 1.8.0 is out!](https://github.com/qdrant/qdrant/releases/tag/v1.8.0). This time around, we have focused on Qdrant's internals. Our goal was to optimize performance so that your existing setup can run faster and save on compute. Here is what we've been up to: - **Faster [sparse vectors](https://qdrant.tech/articles/sparse-vectors/):** [Hybrid search](https://qdrant.tech/articles/hybrid-search/) is up to 16x faster now! - **CPU resource management:** You can allocate CPU threads for faster indexing. - **Better indexing performance:** We optimized text [indexing](https://qdrant.tech/documentation/concepts/indexing/) on the backend. ## Faster search with sparse vectors Search throughput is now up to 16 times faster for sparse vectors. If you are [using Qdrant for hybrid search](/articles/sparse-vectors/), this means that you can now handle up to sixteen times as many queries. This improvement comes from extensive backend optimizations aimed at increasing efficiency and capacity. What this means for your setup: - **Query speed:** The time it takes to run a search query has been significantly reduced. - **Search capacity:** Qdrant can now handle a much larger volume of search requests. - **User experience:** Results will appear faster, leading to a smoother experience for the user. - **Scalability:** You can easily accommodate rapidly growing users or an expanding dataset. ### Sparse vectors benchmark Performance results are publicly available for you to test. Qdrant's R&D developed a dedicated [open-source benchmarking tool](https://github.com/qdrant/sparse-vectors-benchmark) just to test sparse vector performance. A real-life simulation of sparse vector queries was run against the [NeurIPS 2023 dataset](https://big-ann-benchmarks.com/neurips23.html). All tests were done on an 8 CPU machine on Azure. Latency (y-axis) has dropped significantly for queries. You can see the before/after here: ![dropping latency](/articles_data/qdrant-1.8.x/benchmark.png) **Figure 1:** Dropping latency in sparse vector search queries across versions 1.7-1.8. The colors within both scatter plots show the frequency of results. The red dots show that the highest concentration is around 2200ms (before) and 135ms (after). This tells us that latency for sparse vector queries dropped by about a factor of 16. Therefore, the time it takes to retrieve an answer with Qdrant is that much shorter. This performance increase can have a dramatic effect on hybrid search implementations. [Read more about how to set this up.](/articles/sparse-vectors/) FYI, sparse vectors were released in [Qdrant v.1.7.0](/articles/qdrant-1.7.x/#sparse-vectors). 
They are stored using a different index, so first [check out the documentation](/documentation/concepts/indexing/#sparse-vector-index) if you want to try an implementation. ## CPU resource management Indexing is Qdrant’s most resource-intensive process. Now you can account for this by allocating compute use specifically to indexing. You can assign a number CPU resources towards indexing and leave the rest for search. As a result, indexes will build faster, and search quality will remain unaffected. This isn't mandatory, as Qdrant is by default tuned to strike the right balance between indexing and search. However, if you wish to define specific CPU usage, you will need to do so from `config.yaml`. This version introduces a `optimizer_cpu_budget` parameter to control the maximum number of CPUs used for indexing. > Read more about `config.yaml` in the [configuration file](/documentation/guides/configuration/). ```yaml # CPU budget, how many CPUs (threads) to allocate for an optimization job. optimizer_cpu_budget: 0 ``` - If left at 0, Qdrant will keep 1 or more CPUs unallocated - depending on CPU size. - If the setting is positive, Qdrant will use this exact number of CPUs for indexing. - If the setting is negative, Qdrant will subtract this number of CPUs from the available CPUs for indexing. For most users, the default `optimizer_cpu_budget` setting will work well. We only recommend you use this if your indexing load is significant. Our backend leverages dynamic CPU saturation to increase indexing speed. For that reason, the impact on search query performance ends up being minimal. Ultimately, you will be able to strike the best possible balance between indexing times and search performance. This configuration can be done at any time, but it requires a restart of Qdrant. Changing it affects both existing and new collections. > **Note:** This feature is not configurable on [Qdrant Cloud](https://qdrant.to/cloud). ## Better indexing for text data In order to [minimize your RAM expenditure](https://qdrant.tech/articles/memory-consumption/), we have developed a new way to index specific types of data. Please keep in mind that this is a backend improvement, and you won't need to configure anything. > Going forward, if you are indexing immutable text fields, we estimate a 10% reduction in RAM loads. Our benchmark result is based on a system that uses 64GB of RAM. If you are using less RAM, this reduction might be higher than 10%. Immutable text fields are static and do not change once they are added to Qdrant. These entries usually represent some type of attribute, description or tag. Vectors associated with them can be indexed more efficiently, since you don’t need to re-index them anymore. Conversely, mutable fields are dynamic and can be modified after their initial creation. Please keep in mind that they will continue to require additional RAM. This approach ensures stability in the [vector search](https://qdrant.tech/documentation/overview/vector-search/) index, with faster and more consistent operations. We achieved this by setting up a field index which helps minimize what is stored. To improve search performance we have also optimized the way we load documents for searches with a text field index. Now our backend loads documents mostly sequentially and in increasing order. ## Minor improvements and new features Beyond these enhancements, [Qdrant v1.8.0](https://github.com/qdrant/qdrant/releases/tag/v1.8.0) adds and improves on several smaller features: 1. 
**Order points by payload:** In addition to searching for semantic results, you might want to retrieve results by specific metadata (such as price). You can now use the Scroll API to [order points by payload key](/documentation/concepts/points/#order-points-by-payload-key). 2. **Datetime support:** We have implemented [datetime support for the payload index](/documentation/concepts/filtering/#datetime-range). Prior to this, if you wanted to search for a specific datetime range, you would have had to convert dates to UNIX timestamps. ([PR#3320](https://github.com/qdrant/qdrant/issues/3320)) 3. **Check collection existence:** You can check whether a collection exists by appending `/exists` to `/collections/{collection_name}`. The endpoint returns a true/false response; see the example requests at the end of this post. ([PR#3472](https://github.com/qdrant/qdrant/pull/3472)). 4. **Minimum should match:** Find points whose payloads match at least a minimal number of conditions, using the new `min_should` feature ([PR#3331](https://github.com/qdrant/qdrant/pull/3466/)). 5. **Modify nested fields:** We have improved the `set_payload` API, adding the ability to update nested fields ([PR#3548](https://github.com/qdrant/qdrant/pull/3548)). ## Experience the Power of Qdrant 1.8.0 Ready to experience the enhanced performance of Qdrant 1.8.0? Upgrade now and explore the major improvements, from faster sparse vectors to optimized CPU resource management and better indexing for text data. Take your search capabilities to the next level with Qdrant's latest version. [Try a demo today](https://qdrant.tech/demo/) and see the difference firsthand! ## Release notes For more information, see [our release notes](https://github.com/qdrant/qdrant/releases/tag/v1.8.0). Qdrant is an open-source project. We welcome your contributions; raise [issues](https://github.com/qdrant/qdrant/issues), or contribute via [pull requests](https://github.com/qdrant/qdrant/pulls)!
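For reference, here is roughly what the requests for two of the features above look like: the collection existence check and payload-ordered scrolling. Treat these as illustrative sketches and check the linked documentation for the exact parameters; ordering by a payload key also requires a payload index on that field.

```http
GET /collections/{collection_name}/exists
```

```http
POST /collections/{collection_name}/points/scroll
{
  "limit": 10,
  "order_by": {
    "key": "price",
    "direction": "desc"
  }
}
```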
articles/qdrant-1.8.x.md
--- title: On Unstructured Data, Vector Databases, New AI Age, and Our Seed Round. short_description: On Unstructured Data, Vector Databases, New AI Age, and Our Seed Round. description: We announce Qdrant seed round investment and share our thoughts on Vector Databases and New AI Age. preview_dir: /articles_data/seed-round/preview social_preview_image: /articles_data/seed-round/seed-social.png small_preview_image: /articles_data/quantum-quantization/icon.svg weight: 6 author: Andre Zayarni draft: false author_link: https://www.linkedin.com/in/zayarni date: 2023-04-19T00:42:00.000Z --- > Vector databases are here to stay. The New Age of AI is powered by vector embeddings, and vector databases are a foundational part of the stack. At Qdrant, we are working on cutting-edge open-source vector similarity search solutions to power fantastic AI applications with the best possible performance and excellent developer experience. > > Our 7.5M seed funding – led by [Unusual Ventures](https://www.unusual.vc/), awesome angels, and existing investors – will help us bring these innovations to engineers and empower them to make the most of their unstructured data and the awesome power of LLMs at any scale. We are thrilled to announce that we just raised our seed round from the best possible investor we could imagine for this stage. Let’s talk about fundraising later – it is a story itself that I could probably write a bestselling book about. First, let's dive into a bit of background about our project, our progress, and future plans. ## A need for vector databases. Unstructured data is growing exponentially, and we are all part of a huge unstructured data workforce. This blog post is unstructured data; your visit here produces unstructured and semi-structured data with every web interaction, as does every photo you take or email you send. The global datasphere will grow to [165 zettabytes by 2025](https://github.com/qdrant/qdrant/pull/1639), and about 80% of that will be unstructured. At the same time, the rising demand for AI is vastly outpacing existing infrastructure. Around 90% of machine learning research results fail to reach production because of a lack of tools. {{< figure src=/articles_data/seed-round/demand.png caption="Demand for AI tools" alt="Vector Databases Demand" >}} Thankfully there’s a new generation of tools that let developers work with unstructured data in the form of vector embeddings, which are deep representations of objects obtained from a neural network model. A vector database, also known as a vector similarity search engine or approximate nearest neighbour (ANN) search database, is a database designed to store, manage, and search high-dimensional data with an additional payload. Vector Databases turn research prototypes into commercial AI products. Vector search solutions are industry agnostic and bring solutions for a number of use cases, including classic ones like semantic search, matching engines, and recommender systems to more novel applications like anomaly detection, working with time series, or biomedical data. The biggest limitation is to have a neural network encoder in place for the data type you are working with. {{< figure src=/articles_data/seed-round/use-cases.png caption="Vector Search Use Cases" alt="Vector Search Use Cases" >}} With the rise of large language models (LLMs), Vector Databases have become the fundamental building block of the new AI Stack. 
They let developers build even more advanced applications by extending the “knowledge base” of LLM-based applications like ChatGPT with real-time and real-world data. A new AI product category, “Co-Pilot for X,” was born and is already affecting how we work, from producing content to developing software. And this is just the beginning: even more types of novel applications are being developed on top of this stack. {{< figure src=/articles_data/seed-round/ai-stack.png caption="New AI Stack" alt="New AI Stack" >}} ## Enter Qdrant. ## At the same time, adoption has only begun. Vector Search Databases are replacing VSS libraries like FAISS, etc., which, despite their disadvantages, are still used by ~90% of projects out there. They’re hard-coupled to the application code, lack production-ready features like basic CRUD operations or advanced filtering, are a nightmare to maintain and scale, and have many other difficulties that make life hard for developers. The current Qdrant ecosystem consists of excellent products to work with vector embeddings. We launched our managed vector database solution, Qdrant Cloud, early this year, and it is already serving more than 1,000 Qdrant clusters. We are extending our offering now with managed on-premise solutions for enterprise customers. {{< figure src=/articles_data/seed-round/ecosystem.png caption="Qdrant Ecosystem" alt="Qdrant Vector Database Ecosystem" >}} Our plan for the current [open-source roadmap](https://github.com/qdrant/qdrant/blob/master/docs/roadmap/README.md) is to make billion-scale vector search affordable. Our recent release of [Scalar Quantization](/articles/scalar-quantization/) improves both memory usage (x4) and speed (x2). The upcoming [Product Quantization](https://www.irisa.fr/texmex/people/jegou/papers/jegou_searching_with_quantization.pdf) will introduce yet another option with even greater memory savings. Stay tuned. Qdrant started more than two years ago with the mission of building a vector database powered by a well-thought-out tech stack. Using Rust as the systems programming language, along with sound technical architecture decisions during the development of the engine, made Qdrant one of the leading and most popular vector database solutions. Our unique custom modification of the [HNSW algorithm](/articles/filtrable-hnsw/) for Approximate Nearest Neighbor Search (ANN) allows querying results at state-of-the-art speed and applying filters without compromising on quality. Cloud-native support for distributed deployment and replication makes the engine suitable for high-throughput applications with real-time latency requirements. Rust brings stability, efficiency, and the possibility to optimize at a very low level. In general, we always aim for the best possible results in [performance](/benchmarks/), code quality, and feature set. Most importantly, we want to say a big thank you to our [open-source community](https://qdrant.to/discord), our adopters, our contributors, and our customers. Your active participation in the development of our products has helped make Qdrant the best vector database on the market. I cannot imagine how we could do what we’re doing without the community or without being open-source and having the TRUST of the engineers. Thanks to all of you! I also want to thank our team. Thank you for your patience and trust. Together we are strong. Let’s continue doing great things together.
## Fundraising ## The whole process took only a couple of days; we got several offers, and most probably, we would have gotten more under different conditions. We decided to go with Unusual Ventures because they truly understand how things work in the open-source space. They just did it right. Here is a big piece of advice for all investors interested in open-source: Dive into the community, and see and feel the traction and product feedback instead of looking at glossy pitch decks. With Unusual on our side, we have an active operational partner instead of one who simply writes a check. That help is much more important than overpriced valuations and big shiny names. Ultimately, the community and adopters will decide which products win and lose, not VCs. Companies don’t need crazy valuations to create products that customers love. You do not need a Ph.D. to innovate. You do not need to over-engineer to build a scalable solution. You do not need ex-FANG people to have a great team. You need clear focus, a passion for what you’re building, and the know-how to do it well. We know how. PS: This text was written by me in an old-school way, without any ChatGPT help. Sometimes you just need inspiration instead of AI ;-)
articles/seed-round.md
--- title: "Optimizing RAG Through an Evaluation-Based Methodology" short_description: Learn how Qdrant-powered RAG applications can be tested and iteratively improved using LLM evaluation tools like Quotient. description: Learn how Qdrant-powered RAG applications can be tested and iteratively improved using LLM evaluation tools like Quotient. social_preview_image: /articles_data/rapid-rag-optimization-with-qdrant-and-quotient/preview/social_preview.jpg small_preview_image: /articles_data/rapid-rag-optimization-with-qdrant-and-quotient/icon.svg preview_dir: /articles_data/rapid-rag-optimization-with-qdrant-and-quotient/preview weight: -131 author: Atita Arora author_link: https://github.com/atarora date: 2024-06-12T00:00:00.000Z draft: false keywords: - vector database - vector search - retrieval augmented generation - quotient - optimization - rag --- In today's fast-paced, information-rich world, AI is revolutionizing knowledge management. The systematic process of capturing, distributing, and effectively using knowledge within an organization is one of the fields in which AI provides exceptional value today. > The potential for AI-powered knowledge management increases when leveraging Retrieval Augmented Generation (RAG), a methodology that enables LLMs to access a vast, diverse repository of factual information from knowledge stores, such as vector databases. This process enhances the accuracy, relevance, and reliability of generated text, thereby mitigating the risk of faulty, incorrect, or nonsensical results sometimes associated with traditional LLMs. This method not only ensures that the answers are contextually relevant but also up-to-date, reflecting the latest insights and data available. While RAG enhances the accuracy, relevance, and reliability of traditional LLM solutions, **an evaluation strategy can further help teams ensure their AI products meet these benchmarks of success.** ## Relevant tools for this experiment In this article, we’ll break down a RAG Optimization workflow experiment that demonstrates that evaluation is essential to build a successful RAG strategy. We will use Qdrant and Quotient for this experiment. [Qdrant](https://qdrant.tech/) is a vector database and vector similarity search engine designed for efficient storage and retrieval of high-dimensional vectors. Because Qdrant offers efficient indexing and searching capabilities, it is ideal for implementing RAG solutions, where quickly and accurately retrieving relevant information from extremely large datasets is crucial. Qdrant also offers a wealth of additional features, such as quantization, multivector support and multi-tenancy. Alongside Qdrant we will use Quotient, which provides a seamless way to evaluate your RAG implementation, accelerating and improving the experimentation process. [Quotient](https://www.quotientai.co/) is a platform that provides tooling for AI developers to build evaluation frameworks and conduct experiments on their products. Evaluation is how teams surface the shortcomings of their applications and improve performance in key benchmarks such as faithfulness, and semantic similarity. Iteration is key to building innovative AI products that will deliver value to end users. > 💡 The [accompanying notebook](https://github.com/qdrant/qdrant-rag-eval/tree/master/workshop-rag-eval-qdrant-quotient) for this exercise can be found on GitHub for future reference. ## Summary of key findings 1. 
**Irrelevance and Hallucinations**: When the documents retrieved are irrelevant, evidenced by low scores in both Chunk Relevance and Context Relevance, the model is prone to generating inaccurate or fabricated information. 2. **Optimizing Document Retrieval**: By retrieving a greater number of documents and reducing the chunk size, we observed improved outcomes in the model's performance. 3. **Adaptive Retrieval Needs**: Certain queries may benefit from accessing more documents. Implementing a dynamic retrieval strategy that adjusts based on the query could enhance accuracy. 4. **Influence of Model and Prompt Variations**: Alterations in language models or the prompts used can significantly impact the quality of the generated responses, suggesting that fine-tuning these elements could optimize performance. Let us walk you through how we arrived at these findings! ## Building a RAG pipeline To evaluate a RAG pipeline , we will have to build a RAG Pipeline first. In the interest of simplicity, we are building a Naive RAG in this article. There are certainly other versions of RAG : ![shades_of_rag.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/shades_of_rag.png) The illustration below depicts how we can leverage a RAG Evaluation framework to assess the quality of RAG Application. ![qdrant_and_quotient.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/qdrant_and_quotient.png) We are going to build a RAG application using Qdrant’s Documentation and the premeditated [hugging face dataset](https://huggingface.co/datasets/atitaarora/qdrant_doc). We will then assess our RAG application’s ability to answer questions about Qdrant. To prepare our knowledge store we will use Qdrant, which can be leveraged in 3 different ways as below : ```python ##Uncomment to initialise qdrant client in memory #client = qdrant_client.QdrantClient( # location=":memory:", #) ##Uncomment below to connect to Qdrant Cloud client = qdrant_client.QdrantClient( os.environ.get("QDRANT_URL"), api_key=os.environ.get("QDRANT_API_KEY"), ) ## Uncomment below to connect to local Qdrant #client = qdrant_client.QdrantClient("http://localhost:6333") ``` We will be using [Qdrant Cloud](https://cloud.qdrant.io/login) so it is a good idea to provide the `QDRANT_URL` and `QDRANT_API_KEY` as environment variables for easier access. Moving on, we will need to define the collection name as : ```python COLLECTION_NAME = "qdrant-docs-quotient" ``` In this case , we may need to create different collections based on the experiments we conduct. To help us provide seamless embedding creations throughout the experiment, we will use Qdrant’s native embedding provider [Fastembed](https://qdrant.github.io/fastembed/) which supports [many different models](https://qdrant.github.io/fastembed/examples/Supported_Models/) including dense as well as sparse vector models. 
We can initialize and switch the embedding model of our choice as below : ```python ## Declaring the intended Embedding Model with Fastembed from fastembed.embedding import TextEmbedding ## General Fastembed specific operations ##Initilising embedding model ## Using Default Model - BAAI/bge-small-en-v1.5 embedding_model = TextEmbedding() ## For custom model supported by Fastembed #embedding_model = TextEmbedding(model_name="BAAI/bge-small-en", max_length=512) #embedding_model = TextEmbedding(model_name="sentence-transformers/all-MiniLM-L6-v2", max_length=384) ## Verify the chosen Embedding model embedding_model.model_name ``` Before implementing RAG, we need to prepare and index our data in Qdrant. This involves converting textual data into vectors using a suitable encoder (e.g., sentence transformers), and storing these vectors in Qdrant for retrieval. ```python from langchain.text_splitter import RecursiveCharacterTextSplitter from langchain.docstore.document import Document as LangchainDocument ## Load the dataset with qdrant documentation dataset = load_dataset("atitaarora/qdrant_doc", split="train") ## Dataset to langchain document langchain_docs = [ LangchainDocument(page_content=doc["text"], metadata={"source": doc["source"]}) for doc in dataset ] len(langchain_docs) #Outputs #240 ``` You can preview documents in the dataset as below : ```python ## Here's an example of what a document in our dataset looks like print(dataset[100]['text']) ``` ## Evaluation dataset To measure the quality of our RAG setup, we will need a representative evaluation dataset. This dataset should contain realistic questions and the expected answers. Additionally, including the expected contexts for which your RAG pipeline is designed to retrieve information would be beneficial. We will be using a [prebuilt evaluation dataset](https://huggingface.co/datasets/atitaarora/qdrant_doc_qna). If you are struggling to make an evaluation dataset for your use case , you can use your documents and some techniques described in this [notebook](https://github.com/qdrant/qdrant-rag-eval/blob/master/synthetic_qna/notebook/Synthetic_question_generation.ipynb) ### Building the RAG pipeline We establish the data preprocessing parameters essential for the RAG pipeline and configure the Qdrant vector database according to the specified criteria. Key parameters under consideration are: - **Chunk size** - **Chunk overlap** - **Embedding model** - **Number of documents retrieved (retrieval window)** Following the ingestion of data in Qdrant, we proceed to retrieve pertinent documents corresponding to each query. These documents are then seamlessly integrated into our evaluation dataset, enriching the contextual information within the designated **`context`** column to fulfil the evaluation aspect. Next we define methods to take care of logistics with respect to adding documents to Qdrant ```python def add_documents(client, collection_name, chunk_size, chunk_overlap, embedding_model_name): """ This function adds documents to the desired Qdrant collection given the specified RAG parameters. 
""" ## Processing each document with desired TEXT_SPLITTER_ALGO, CHUNK_SIZE, CHUNK_OVERLAP text_splitter = RecursiveCharacterTextSplitter( chunk_size=chunk_size, chunk_overlap=chunk_overlap, add_start_index=True, separators=["\n\n", "\n", ".", " ", ""], ) docs_processed = [] for doc in langchain_docs: docs_processed += text_splitter.split_documents([doc]) ## Processing documents to be encoded by Fastembed docs_contents = [] docs_metadatas = [] for doc in docs_processed: if hasattr(doc, 'page_content') and hasattr(doc, 'metadata'): docs_contents.append(doc.page_content) docs_metadatas.append(doc.metadata) else: # Handle the case where attributes are missing print("Warning: Some documents do not have 'page_content' or 'metadata' attributes.") print("processed: ", len(docs_processed)) print("content: ", len(docs_contents)) print("metadata: ", len(docs_metadatas)) ## Adding documents to Qdrant using desired embedding model client.set_model(embedding_model_name=embedding_model_name) client.add(collection_name=collection_name, metadata=docs_metadatas, documents=docs_contents) ``` and retrieving documents from Qdrant during our RAG Pipeline assessment. ```python def get_documents(collection_name, query, num_documents=3): """ This function retrieves the desired number of documents from the Qdrant collection given a query. It returns a list of the retrieved documents. """ search_results = client.query( collection_name=collection_name, query_text=query, limit=num_documents, ) results = [r.metadata["document"] for r in search_results] return results ``` ### Setting up Quotient You will need an account log in, which you can get by requesting access on [Quotient's website](https://www.quotientai.co/). Once you have an account, you can create an API key by running the `quotient authenticate` CLI command. <aside> 💡 Be sure to save your API key, since it will only be displayed once (Note: you will not have to repeat this step again until your API key expires). </aside> **Once you have your API key, make sure to set it as an environment variable called `QUOTIENT_API_KEY`** ```python # Import QuotientAI client and connect to QuotientAI from quotientai.client import QuotientClient from quotientai.utils import show_job_progress # IMPORTANT: be sure to set your API key as an environment variable called QUOTIENT_API_KEY # You will need this set before running the code below. You may also uncomment the following line and insert your API key: # os.environ['QUOTIENT_API_KEY'] = "YOUR_API_KEY" quotient = QuotientClient() ``` **QuotientAI** provides a seamless way to integrate *RAG evaluation* into your applications. Here, we'll see how to use it to evaluate text generated from an LLM, based on retrieved knowledge from the Qdrant vector database. After retrieving the top similar documents and populating the `context` column, we can submit the evaluation dataset to Quotient and execute an evaluation job. To run a job, all you need is your evaluation dataset and a `recipe`. ***A recipe is a combination of a prompt template and a specified LLM.*** **Quotient** orchestrates the evaluation run and handles version control and asset management throughout the experimentation process. ***Prior to assessing our RAG solution, it's crucial to outline our optimization goals.*** In the context of *question-answering on Qdrant documentation*, our focus extends beyond merely providing helpful responses. Ensuring the absence of any *inaccurate or misleading information* is paramount. 
In other words, **we want to minimize hallucinations** in the LLM outputs. For our evaluation, we will be considering the following metrics, with a focus on **Faithfulness**: - **Context Relevance** - **Chunk Relevance** - **Faithfulness** - **ROUGE-L** - **BERT Sentence Similarity** - **BERTScore** ### Evaluation in action The function below takes an evaluation dataset as input, which in this case contains questions and their corresponding answers. It retrieves relevant documents based on the questions in the dataset and populates the context field with this information from Qdrant. The prepared dataset is then submitted to QuotientAI for evaluation for the chosen metrics. After the evaluation is complete, the function displays aggregated statistics on the evaluation metrics followed by the summarized evaluation results. ```python def run_eval(eval_df, collection_name, recipe_id, num_docs=3, path="eval_dataset_qdrant_questions.csv"): """ This function evaluates the performance of a complete RAG pipeline on a given evaluation dataset. Given an evaluation dataset (containing questions and ground truth answers), this function retrieves relevant documents, populates the context field, and submits the dataset to QuotientAI for evaluation. Once the evaluation is complete, aggregated statistics on the evaluation metrics are displayed. The evaluation results are returned as a pandas dataframe. """ # Add context to each question by retrieving relevant documents eval_df['documents'] = eval_df.apply(lambda x: get_documents(collection_name=collection_name, query=x['input_text'], num_documents=num_docs), axis=1) eval_df['context'] = eval_df.apply(lambda x: "\n".join(x['documents']), axis=1) # Now we'll save the eval_df to a CSV eval_df.to_csv(path, index=False) # Upload the eval dataset to QuotientAI dataset = quotient.create_dataset( file_path=path, name="qdrant-questions-eval-v1", ) # Create a new task for the dataset task = quotient.create_task( dataset_id=dataset['id'], name='qdrant-questions-qa-v1', task_type='question_answering' ) # Run a job to evaluate the model job = quotient.create_job( task_id=task['id'], recipe_id=recipe_id, num_fewshot_examples=0, limit=500, metric_ids=[5, 7, 8, 11, 12, 13, 50], ) # Show the progress of the job show_job_progress(quotient, job['id']) # Once the job is complete, we can get our results data = quotient.get_eval_results(job_id=job['id']) # Add the results to a pandas dataframe to get statistics on performance df = pd.json_normalize(data, "results") df_stats = df[df.columns[df.columns.str.contains("metric|completion_time")]] df.columns = df.columns.str.replace("metric.", "") df_stats.columns = df_stats.columns.str.replace("metric.", "") metrics = { 'completion_time_ms':'Completion Time (ms)', 'chunk_relevance': 'Chunk Relevance', 'selfcheckgpt_nli_relevance':"Context Relevance", 'selfcheckgpt_nli':"Faithfulness", 'rougeL_fmeasure':"ROUGE-L", 'bert_score_f1':"BERTScore", 'bert_sentence_similarity': "BERT Sentence Similarity", 'completion_verbosity':"Completion Verbosity", 'verbosity_ratio':"Verbosity Ratio",} df = df.rename(columns=metrics) df_stats = df_stats.rename(columns=metrics) display(df_stats[metrics.values()].describe()) return df main_metrics = [ 'Context Relevance', 'Chunk Relevance', 'Faithfulness', 'ROUGE-L', 'BERT Sentence Similarity', 'BERTScore', ] ``` ## Experimentation Our approach is rooted in the belief that improvement thrives in an environment of exploration and discovery. 
By systematically testing and tweaking various components of the RAG pipeline, we aim to incrementally enhance its capabilities and performance. In the following section, we dive into the details of our experimentation process, outlining the specific experiments conducted and the insights gained. ### Experiment 1 - Baseline Parameters - **Embedding Model: `bge-small-en`** - **Chunk size: `512`** - **Chunk overlap: `64`** - **Number of docs retrieved (Retrieval Window): `3`** - **LLM: `Mistral-7B-Instruct`** We’ll process our documents based on the configuration above and ingest them into Qdrant using the `add_documents` method introduced earlier: ```python #experiment1 - base config chunk_size = 512 chunk_overlap = 64 embedding_model_name = "BAAI/bge-small-en" num_docs = 3 COLLECTION_NAME = f"experiment_{chunk_size}_{chunk_overlap}_{embedding_model_name.split('/')[1]}" add_documents(client, collection_name=COLLECTION_NAME, chunk_size=chunk_size, chunk_overlap=chunk_overlap, embedding_model_name=embedding_model_name) #Outputs #processed: 4504 #content: 4504 #metadata: 4504 ``` Notice the `COLLECTION_NAME`, which helps us segregate and identify our collections based on the experiments conducted. To proceed with the evaluation, let’s create the `evaluation recipe` next ```python # Create a recipe for the generator model and prompt template recipe_mistral = quotient.create_recipe( model_id=10, prompt_template_id=1, name='mistral-7b-instruct-qa-with-rag', description='Mistral-7b-instruct using a prompt template that includes context.' ) recipe_mistral #Outputs recipe JSON with the used prompt template #'prompt_template': {'id': 1, # 'name': 'Default Question Answering Template', # 'variables': '["input_text","context"]', # 'created_at': '2023-12-21T22:01:54.632367', # 'template_string': 'Question: {input_text}\\n\\nContext: {context}\\n\\nAnswer:', # 'owner_profile_id': None} ``` To get a list of your existing recipes, you can simply run: ```python quotient.list_recipes() ``` Notice that the recipe template is the simplest possible prompt, using the `Question` from the evaluation dataset, the `Context` from the document chunks retrieved from Qdrant, and the `Answer` generated by the pipeline. To kick off the evaluation: ```python # Kick off an evaluation job experiment_1 = run_eval(eval_df, collection_name=COLLECTION_NAME, recipe_id=recipe_mistral['id'], num_docs=num_docs, path=f"{COLLECTION_NAME}_{num_docs}_mistral.csv") ``` This may take a few minutes (depending on the size of the evaluation dataset!) We can look at the results from our first (baseline) experiment below: ![experiment1_eval.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/experiment1_eval.png) Notice that we have a pretty **low average Chunk Relevance** and **very large standard deviations for both Chunk Relevance and Context Relevance**. Let's take a look at some of the lower-performing datapoints with **poor Faithfulness**: ```python with pd.option_context('display.max_colwidth', 0): display(experiment_1[['content.input_text', 'content.answer','content.documents','Chunk Relevance','Context Relevance','Faithfulness'] ].sort_values(by='Faithfulness').head(2)) ``` ![experiment1_bad_examples.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/experiment1_bad_examples.png) In instances where the retrieved documents are **irrelevant (where both Chunk Relevance and Context Relevance are low)**, the model also shows **tendencies to hallucinate** and **produce poor quality responses**.
The quality of the retrieved text directly impacts the quality of the LLM-generated answer. Therefore, our focus will be on enhancing the RAG setup by **adjusting the chunking parameters**. ### Experiment 2 - Adjusting the chunking parameters Keeping all other parameters constant, we changed the `chunk size` and `chunk overlap` to see if we can improve our results. Parameters: - **Embedding Model: `bge-small-en`** - **Chunk size: `1024`** - **Chunk overlap: `128`** - **Number of docs retrieved (Retrieval Window): `3`** - **LLM: `Mistral-7B-Instruct`** We will reprocess the data with the updated parameters above: ```python ## for iteration 2 - let's modify the chunk configuration ## We will start by creating a separate collection to store the vectors chunk_size = 1024 chunk_overlap = 128 embedding_model_name = "BAAI/bge-small-en" num_docs = 3 COLLECTION_NAME = f"experiment_{chunk_size}_{chunk_overlap}_{embedding_model_name.split('/')[1]}" add_documents(client, collection_name=COLLECTION_NAME, chunk_size=chunk_size, chunk_overlap=chunk_overlap, embedding_model_name=embedding_model_name) #Outputs #processed: 2152 #content: 2152 #metadata: 2152 ``` Followed by running the evaluation: ![experiment2_eval.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/experiment2_eval.png) and **comparing it with the results from Experiment 1:** ![graph_exp1_vs_exp2.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/graph_exp1_vs_exp2.png) We observed slight enhancements in our LLM completion metrics (including BERT Sentence Similarity, BERTScore, ROUGE-L, and Knowledge F1) with the increase in *chunk size*. However, it's noteworthy that there was a significant decrease in *Faithfulness*, which is the primary metric we are aiming to optimize. Moreover, *Context Relevance* demonstrated an increase, indicating that the RAG pipeline retrieved more of the relevant information required to address the query. Nonetheless, there was a considerable drop in *Chunk Relevance*, implying that a smaller portion of the retrieved documents contained pertinent information for answering the question. **The correlation between the rise in Context Relevance and the decline in Chunk Relevance suggests that retrieving more documents using the smaller chunk size might yield improved results.** ### Experiment 3 - Increasing the number of documents retrieved (retrieval window) This time, we are using the same RAG setup as `Experiment 1`, but increasing the number of retrieved documents from **3** to **5**.
Parameters: - **Embedding Model: `bge-small-en`** - **Chunk size: `512`** - **Chunk overlap: `64`** - **Number of docs retrieved (Retrieval Window): `5`** - **LLM: `Mistral-7B-Instruct`** We can use the collection from Experiment 1 and run the evaluation with the modified `num_docs` parameter as: ```python #collection name from Experiment 1 COLLECTION_NAME = f"experiment_{chunk_size}_{chunk_overlap}_{embedding_model_name.split('/')[1]}" #running eval for experiment 3 experiment_3 = run_eval(eval_df, collection_name=COLLECTION_NAME, recipe_id=recipe_mistral['id'], num_docs=num_docs, path=f"{COLLECTION_NAME}_{num_docs}_mistral.csv") ``` Observe the results below: ![experiment_3_eval.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/experiment_3_eval.png) Comparing the results with Experiments 1 and 2: ![graph_exp1_exp2_exp3.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/graph_exp1_exp2_exp3.png) As anticipated, employing the smaller chunk size while retrieving a larger number of documents resulted in achieving the highest levels of both *Context Relevance* and *Chunk Relevance.* Additionally, it yielded the **best** (albeit marginal) *Faithfulness* score, indicating a *reduced occurrence of inaccuracies or hallucinations*. It looks like we have a good handle on our chunking parameters, but it is worth testing another embedding model to see if we can get better results. ### Experiment 4 - Changing the embedding model Let us try using **MiniLM** for this experiment. Parameters: - **Embedding Model: `MiniLM-L6-v2`** - **Chunk size: `512`** - **Chunk overlap: `64`** - **Number of docs retrieved (Retrieval Window): `5`** - **LLM: `Mistral-7B-Instruct`** We will have to create another collection for this experiment: ```python #experiment-4 chunk_size=512 chunk_overlap=64 embedding_model_name="sentence-transformers/all-MiniLM-L6-v2" num_docs=5 COLLECTION_NAME = f"experiment_{chunk_size}_{chunk_overlap}_{embedding_model_name.split('/')[1]}" add_documents(client, collection_name=COLLECTION_NAME, chunk_size=chunk_size, chunk_overlap=chunk_overlap, embedding_model_name=embedding_model_name) #Outputs #processed: 4504 #content: 4504 #metadata: 4504 ``` We can observe our evaluation results below: ![experiment4_eval.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/experiment4_eval.png) Comparing these with our previous experiments: ![graph_exp1_exp2_exp3_exp4.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/graph_exp1_exp2_exp3_exp4.png) It appears that `bge-small` was more proficient in capturing the semantic nuances of the Qdrant Documentation. Up to this point, our experimentation has focused solely on the *retrieval aspect* of our RAG pipeline. Now, let's explore altering the *generation aspect*, or LLM, while retaining the optimal parameters identified in Experiment 3. ### Experiment 5 - Changing the LLM Parameters: - **Embedding Model: `bge-small-en`** - **Chunk size: `512`** - **Chunk overlap: `64`** - **Number of docs retrieved (Retrieval Window): `5`** - **LLM: `GPT-3.5-turbo`** For this, we can repurpose our collection from Experiment 3, while the evaluation uses a new recipe with the **GPT-3.5-turbo** model.
```python #collection name from Experiment 3 COLLECTION_NAME = f"experiment_{chunk_size}_{chunk_overlap}_{embedding_model_name.split('/')[1]}" # We have to create a recipe using the same prompt template and GPT-3.5-turbo recipe_gpt = quotient.create_recipe( model_id=5, prompt_template_id=1, name='gpt3.5-qa-with-rag-recipe-v1', description='GPT-3.5 using a prompt template that includes context.' ) recipe_gpt #Outputs #{'id': 495, # 'name': 'gpt3.5-qa-with-rag-recipe-v1', # 'description': 'GPT-3.5 using a prompt template that includes context.', # 'model_id': 5, # 'prompt_template_id': 1, # 'created_at': '2024-05-03T12:14:58.779585', # 'owner_profile_id': 34, # 'system_prompt_id': None, # 'prompt_template': {'id': 1, # 'name': 'Default Question Answering Template', # 'variables': '["input_text","context"]', # 'created_at': '2023-12-21T22:01:54.632367', # 'template_string': 'Question: {input_text}\\n\\nContext: {context}\\n\\nAnswer:', # 'owner_profile_id': None}, # 'model': {'id': 5, # 'name': 'gpt-3.5-turbo', # 'endpoint': 'https://api.openai.com/v1/chat/completions', # 'revision': 'placeholder', # 'created_at': '2024-02-06T17:01:21.408454', # 'model_type': 'OpenAI', # 'description': 'Returns a maximum of 4K output tokens.', # 'owner_profile_id': None, # 'external_model_config_id': None, # 'instruction_template_cls': 'NoneType'}} ``` Running the evaluations as : ```python experiment_5 = run_eval(eval_df, collection_name=COLLECTION_NAME, recipe_id=recipe_gpt['id'], num_docs=num_docs, path=f"{COLLECTION_NAME}_{num_docs}_gpt.csv") ``` We observe : ![experiment5_eval.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/experiment5_eval.png) and comparing all the 5 experiments as below : ![graph_exp1_exp2_exp3_exp4_exp5.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/graph_exp1_exp2_exp3_exp4_exp5.png) **GPT-3.5 surpassed Mistral-7B in all metrics**! Notably, Experiment 5 exhibited the **lowest occurrence of hallucination**. ## Conclusions Let’s take a look at our results from all 5 experiments above ![overall_eval_results.png](/articles_data/rapid-rag-optimization-with-qdrant-and-quotient/overall_eval_results.png) We still have a long way to go in improving the retrieval performance of RAG, as indicated by our generally poor results thus far. It might be beneficial to **explore alternative embedding models** or **different retrieval strategies** to address this issue. The significant variations in *Context Relevance* suggest that **certain questions may necessitate retrieving more documents than others**. Therefore, investigating a **dynamic retrieval strategy** could be worthwhile. Furthermore, there's ongoing **exploration required on the generative aspect** of RAG. Modifying LLMs or prompts can substantially impact the overall quality of responses. This iterative process demonstrates how, starting from scratch, continual evaluation and adjustments throughout experimentation can lead to the development of an enhanced RAG system. ## Watch this workshop on YouTube > A workshop version of this article is [available on YouTube](https://www.youtube.com/watch?v=3MEMPZR1aZA). Follow along using our [GitHub notebook](https://github.com/qdrant/qdrant-rag-eval/tree/master/workshop-rag-eval-qdrant-quotient). 
<iframe width="560" height="315" src="https://www.youtube.com/embed/3MEMPZR1aZA?si=n38oTBMtH3LNCTzd" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
articles/rapid-rag-optimization-with-qdrant-and-quotient.md
--- title: Qdrant Articles page_title: Articles about Vector Search description: Articles about vector search and similarity learning related topics. Latest updates on Qdrant vector search engine. section_title: Check out our latest publications subtitle: Check out our latest publications img: /articles_data/title-img.png ---
articles/_index.md
--- title: Why Rust? short_description: "A short history on how we chose Rust and what it has brought us" description: Qdrant could be built in any language. But it's written in Rust. Here's why. social_preview_image: /articles_data/why-rust/preview/social_preview.jpg preview_dir: /articles_data/why-rust/preview weight: 10 author: Andre Bogus author_link: https://llogiq.github.io date: 2023-05-11T10:00:00+01:00 draft: false keywords: rust, programming, development aliases: [ /articles/why_rust/ ] --- # Building Qdrant in Rust Looking at the [github repository](https://github.com/qdrant/qdrant), you can see that Qdrant is built in [Rust](https://rust-lang.org). Other offerings may be written in C++, Go, Java or even Python. So why did Qdrant choose Rust? Our founder Andrey had built the first prototype in C++, but didn’t trust his command of the language to scale to a production system (to be frank, he likened it to cutting his leg off). He was well versed in Java and Scala and also knew some Python. However, he considered neither a good fit: **Java** is nearly 30 years old now. With a throughput-optimized VM it can often at least play in the same ballpark as native services, and the tooling is phenomenal. Also, portability is surprisingly good, although the GC is not suited for low-memory applications and will generally take a good amount of RAM to deliver good performance. That said, the focus on throughput led to the dreaded GC pauses that cause latency spikes. Also, the fat runtime incurs high start-up delays, which need to be worked around. **Scala** also builds on the JVM; although there is a native compiler, there was the question of compatibility. So Scala shared the limitations of Java, and although it has some nice high-level amenities (of which Java only recently copied a subset), it still doesn’t offer the same level of control over memory layout as, say, C++, so it is similarly disqualified. **Python**, being just a bit older than Java, is ubiquitous in ML projects, mostly owing to its tooling (notably Jupyter notebooks), its ease of learning, and its integration into most ML stacks. It doesn’t have a traditional garbage collector, opting for ubiquitous reference counting instead, which somewhat helps memory consumption. With that said, unless you only use it as glue code over high-perf modules, you may find yourself waiting for results. Also, getting complex Python services to perform stably under load is a serious technical challenge. ## Into the Unknown So Andrey looked around at what younger languages would fit the challenge. After some searching, two contenders emerged: Go and Rust. Knowing neither, Andrey consulted the docs, and found himself intrigued by Rust with its promise of Systems Programming without pervasive memory unsafety. This early decision has been validated time and again. When first learning Rust, the compiler’s error messages are very helpful (and have only improved in the meantime). It’s easy to keep the memory profile low when one doesn’t have to wrestle a garbage collector and has complete control over stack and heap. Apart from the much advertised memory safety, many footguns one can run into when writing C++ have been meticulously designed out. And it’s much easier to parallelize a task if one doesn’t have to fear data races. With Qdrant written in Rust, we can offer cloud services that don’t keep us awake at night, thanks to Rust’s famed robustness. A current Qdrant Docker container comes in at just a bit over 50MB — try that for size.
As for performance… have some [benchmarks](/benchmarks/). And we don’t have to compromise on ergonomics either, not for us nor for our users. Of course, there are downsides: Rust compile times are usually similar to C++’s, and though the learning curve has been considerably softened in the last years, it’s still no match for easy-entry languages like Python or Go. But learning it is a one-time cost. Contrast this with Go, where you may find [the apparent simplicity is only skin-deep](https://fasterthanli.me/articles/i-want-off-mr-golangs-wild-ride). ## Smooth is Fast The complexity of the type system pays large dividends in bugs that didn’t even make it to a commit. The ecosystem for web services is also already quite advanced, perhaps not at the same point as Java, but certainly matching or outcompeting Go. Some people may think that the strict nature of Rust will slow down development, which is true only insofar as it won’t let you cut any corners. However, experience has conclusively shown that this is a net win. In fact, Rust lets us [ride the wall](https://the-race.com/nascar/bizarre-wall-riding-move-puts-chastain-into-nascar-folklore/), which makes us faster, not slower. The job market for Rust programmers is certainly not as big as that for Java or Python programmers, but the language has finally reached the mainstream, and we don’t have any problems getting and retaining top talent. And being an open source project, when we get contributions, we don’t have to check for a wide variety of errors that Rust already rules out. ## In Rust We Trust Finally, the Rust community is a very friendly bunch, and we are delighted to be part of that. And we don’t seem to be alone. Most large IT companies (notably Amazon, Google, Huawei, Meta and Microsoft) have already started investing in Rust. It’s in the Windows font system already and in the process of coming to the Linux kernel (build support has already been included). In machine learning applications, Rust has been tried and proven by the likes of Aleph Alpha and Huggingface, among many others. To sum up, choosing Rust was a lucky guess that has brought huge benefits to Qdrant. Rust continues to be our not-so-secret weapon. ### Key Takeaways: - **Rust's Advantages for Qdrant:** Rust provides memory safety and control without a garbage collector, which is crucial for Qdrant's high-performance cloud services. - **Low Overhead:** Qdrant's Rust-based system offers efficiency, with small Docker container sizes and robust performance benchmarks. - **Complexity vs. Simplicity:** Rust's strict type system reduces bugs early in development, making it faster in the long run despite initial learning curves. - **Adoption by Major Players:** Large tech companies like Amazon, Google, and Microsoft are embracing Rust, further validating Qdrant's choice. - **Community and Talent:** The supportive Rust community and increasing availability of Rust developers make it easier for Qdrant to grow and innovate.
articles/why-rust.md
--- title: "Qdrant x.y.0 - <include headline> #required; update version and headline" draft: true # Change to false to publish the article at /articles/ slug: qdrant-x.y.z # required; subtitute version number short_description: "Headline-like description." description: "Headline with more detail. Suggested limit: 140 characters. " # Follow instructions in https://github.com/qdrant/landing_page?tab=readme-ov-file#articles to create preview images # social_preview_image: /articles_data/<slug>/social_preview.jpg # This image will be used in social media previews, should be 1200x600px. Required. # small_preview_image: /articles_data/<slug>/icon.svg # This image will be used in the list of articles at the footer, should be 40x40px # preview_dir: /articles_data/<slug>/preview # This directory contains images that will be used in the article preview. They can be generated from one image. Read more below. Required. weight: 10 # This is the order of the article in the list of articles at the footer. The lower the number, the higher the article will be in the list. Negative numbers OK. author: <name> # Author of the article. Required. author_link: https://medium.com/@yusufsarigoz # Link to the author's page. Not required. date: 2022-06-28T13:00:00+03:00 # Date of the article. Required. If the date is in the future it does not appear in the build tags: # Keywords for SEO - vector databases comparative benchmark - benchmark - performance - latency --- [Qdrant x.y.0 is out!]((https://github.com/qdrant/qdrant/releases/tag/vx.y.0). Include headlines: - **Headline 1:** Description - **Headline 2:** Description - **Headline 3:** Description ## Related to headline 1 Description Highlights: - **Detail 1:** Description - **Detail 2:** Description - **Detail 3:** Description Include before / after information, ideally with graphs and/or numbers Include links to documentation Note limits, such as availability on Qdrant Cloud ## Minor improvements and new features Beyond these enhancements, [Qdrant vx.y.0](https://github.com/qdrant/qdrant/releases/tag/vx.y.0) adds and improves on several smaller features: 1. 1. ## Release notes For more information, see [our release notes](https://github.com/qdrant/qdrant/releases/tag/vx.y.0). Qdrant is an open source project. We welcome your contributions; raise [issues](https://github.com/qdrant/qdrant/issues), or contribute via [pull requests](https://github.com/qdrant/qdrant/pulls)!
articles/templates/release-post-template.md
--- review: “With the landscape of AI being complex for most customers, Qdrant's ease of use provides an easy approach for customers' implementation of RAG patterns for Generative AI solutions and additional choices in selecting AI components on Azure.” names: Tara Walker positions: Principal Software Engineer at Microsoft avatar: src: /img/customers/tara-walker.svg alt: Tara Walker Avatar logo: src: /img/brands/microsoft-gray.svg alt: Logo sitemapExclude: true ---
qdrant-for-startups/qdrant-for-startups-testimonial.md
--- title: Apply Now form: id: startup-program-form title: Join our Startup Program firstNameLabel: First Name lastNameLabel: Last Name businessEmailLabel: Business Email companyNameLabel: Company Name companyUrlLabel: Company URL cloudProviderLabel: Cloud Provider productDescriptionLabel: Product Description latestFundingRoundLabel: Latest Funding Round numberOfEmployeesLabel: Number of Employees info: By submitting, I confirm that I have read and understood the link: url: / text: Terms and Conditions. button: Send Message hubspotFormOptions: '{ "region": "eu1", "portalId": "139603372", "formId": "59eb058b-0145-4ab0-b49a-c37708d20a60", "submitButtonClass": "button button_contained", }' sitemapExclude: true ---
qdrant-for-startups/qdrant-for-startups-form.md
--- title: Program FAQ questions: - id: 0 question: Who is eligible? answer: | <ul> <li>Pre-seed, Seed or Series A startups (under five years old)</li> <li>Has not previously participated in the Qdrant for Startups program</li> <li>Must be building an AI-driven product or services (agencies or devshops are not eligible)</li> <li>A live, functional website is a must for all applicants</li> <li>Billing must be done directly with Qdrant (not through a marketplace)</li> </ul> - id: 1 question: When will I get notified about my application? answer: Upon submitting your application, we will review it and notify you of your status within 7 business days. - id: 2 question: What is the price? answer: It is free to apply to the program. As part of the program, you will receive up to a 20% discount on Qdrant Cloud, valid for 12 months. For detailed cloud pricing, please visit qdrant.tech/pricing. - id: 3 question: How can my startup join the program? answer: Your startup can join the program by simply submitting the application on this page. Once submitted, we will review your application and notify you of your status within 7 business days. sitemapExclude: true ---
qdrant-for-startups/qdrant-for-startups-faq.md
--- title: Why join Qdrant for Startups? mainCard: title: Discount for Qdrant Cloud description: Receive up to <strong>20%</strong> discount on <a href="https://cloud.qdrant.io/" target="_blank">Qdrant Cloud</a> for the first year and start building now. image: src: /img/qdrant-for-startups-benefits/card1.png alt: Qdrant Discount for Startups cards: - id: 0 title: Expert Technical Advice description: Get access to one-on-one sessions with experts for personalized technical advice. image: src: /img/qdrant-for-startups-benefits/card2.svg alt: Expert Technical Advice - id: 1 title: Co-Marketing Opportunities description: We’d love to share your work with our community. Exclusive access to our Vector Space Talks, joint blog posts, and more. image: src: /img/qdrant-for-startups-benefits/card3.svg alt: Co-Marketing Opportunities description: Qdrant is the leading open source vector database and similarity search engine designed to handle high-dimensional vectors for performance and massive-scale AI applications. link: url: /documentation/overview/ text: Learn More sitemapExclude: true ---
qdrant-for-startups/qdrant-for-startups-benefits.md
--- title: Qdrant For Startups description: Qdrant For Startups cascade: - _target: environment: production build: list: never render: never publishResources: false sitemapExclude: true # todo: remove sitemapExclude and change building options after the page is ready to be published ---
qdrant-for-startups/_index.md
--- title: Qdrant for Startups description: Powering The Next Wave of AI Innovators, Qdrant for Startups is committed to being the catalyst for the next generation of AI pioneers. Our program is specifically designed to provide AI-focused startups with the right resources to scale. If AI is at the heart of your startup, you're in the right place. button: text: Apply Now url: "#form" image: src: /img/qdrant-for-startups-hero.svg srcMobile: /img/mobile/qdrant-for-startups-hero.svg alt: Qdrant for Startups sitemapExclude: true ---
qdrant-for-startups/qdrant-for-startups-hero.md
--- title: Distributed icon: - url: /features/cloud.svg - url: /features/cluster.svg weight: 50 sitemapExclude: True --- Cloud-native and scales horizontally. \ No matter how much data you need to serve - Qdrant can always be used with just the right amount of computational resources.
features/distributed.md
--- title: Rich data types icon: - url: /features/data.svg weight: 40 sitemapExclude: True --- Vector payload supports a large variety of data types and query conditions, including string matching, numerical ranges, geo-locations, and more. Payload filtering conditions allow you to build almost any custom business logic that should work on top of similarity matching.
features/rich-data-types.md
--- title: Efficient icon: - url: /features/sight.svg weight: 60 sitemapExclude: True --- Effectively utilizes your resources. Developed entirely in Rust language, Qdrant implements dynamic query planning and payload data indexing. Hardware-aware builds are also available for Enterprises.
features/optimized.md
--- title: Easy to Use API icon: - url: /features/settings.svg - url: /features/microchip.svg weight: 10 sitemapExclude: True --- Provides the [OpenAPI v3 specification](https://api.qdrant.tech/api-reference) to generate a client library in almost any programming language. Alternatively utilise [ready-made client for Python](https://github.com/qdrant/qdrant-client) or other programming languages with additional functionality.
features/easy-to-use.md
--- title: Filterable icon: - url: /features/filter.svg weight: 30 sitemapExclude: True --- Support additional payload associated with vectors. Not only stores payload but also allows filter results based on payload&nbsp;values. \ Unlike Elasticsearch post-filtering, Qdrant guarantees all relevant vectors are retrieved.
features/filterable.md
--- title: Fast and Accurate icon: - url: /features/speed.svg - url: /features/target.svg weight: 20 sitemapExclude: True --- Implement a unique custom modification of the [HNSW algorithm](https://arxiv.org/abs/1603.09320) for Approximate Nearest Neighbor Search. Search with a [State-of-the-Art speed](https://github.com/qdrant/benchmark/tree/master/search_benchmark) and apply search filters without [compromising on results](https://blog.vasnetsov.com/posts/categorical-hnsw/).
features/fast-and-accurate.md
--- title: "Make the most of your Unstructured Data" icon: sitemapExclude: True _build: render: never list: never publishResources: false cascade: _build: render: never list: never publishResources: false --- Qdrant is a vector database & vector similarity search engine. It deploys as an API service providing search for the nearest high-dimensional vectors. With Qdrant, embeddings or neural network encoders can be turned into full-fledged applications for matching, searching, recommending, and much more!
features/_index.md
--- title: Are you contributing to our code, content, or community? button: url: https://forms.gle/q4fkwudDsy16xAZk8 text: Become a Star image: src: /img/stars.svg alt: Stars sitemapExclude: true ---
stars/stars-get-started.md
--- title: Meet our Stars cards: - id: 0 image: src: /img/stars/robert-caulk.jpg alt: Robert Caulk Photo name: Robert Caulk position: Founder of Emergent Methods description: Robert is working with a team on AskNews.app to adaptively enrich, index, and report on over 1 million news articles per day - id: 1 image: src: /img/stars/joshua-mo.jpg alt: Joshua Mo Photo name: Joshua Mo position: DevRel at Shuttle.rs description: Hey there! I primarily use Rust and am looking forward to contributing to the Qdrant community! - id: 2 image: src: /img/stars/nick-khami.jpg alt: Nick Khami Photo name: Nick Khami position: Founder & Product Engineer description: Founder and product engineer at Trieve and has been using Qdrant since late 2022 - id: 3 image: src: /img/stars/owen-colegrove.jpg alt: Owen Colegrove Photo name: Owen Colegrove position: Founder of SciPhi description: Physics PhD, Quant @ Citadel and Founder at SciPhi - id: 4 image: src: /img/stars/m-k-pavan-kumar.jpg alt: M K Pavan Kumar Photo name: M K Pavan Kumar position: Data Scientist and Lead GenAI description: A seasoned technology expert with 14 years of experience in full stack development, cloud solutions, & artificial intelligence - id: 5 image: src: /img/stars/niranjan-akella.jpg alt: Niranjan Akella Photo name: Niranjan Akella position: Scientist by Heart & AI Engineer description: I build & deploy AI models like LLMs, Diffusion Models & Vision Models at scale - id: 6 image: src: /img/stars/bojan-jakimovski.jpg alt: Bojan Jakimovski Photo name: Bojan Jakimovski position: Machine Learning Engineer description: I'm really excited to show the power of the Qdrant as vector database - id: 7 image: src: /img/stars/haydar-kulekci.jpg alt: Haydar KULEKCI Photo name: Haydar KULEKCI position: Senior Software Engineer description: I am a senior software engineer and consultant with over 10 years of experience in data management, processing, and software development. - id: 8 image: src: /img/stars/nicola-procopio.jpg alt: Nicola Procopio Photo name: Nicola Procopio position: Senior Data Scientist @ Fincons Group description: Nicola, a data scientist and open-source enthusiast since 2009, has used Qdrant since 2023. He developed fastembed for Haystack, vector search for Cheshire Cat A.I., and shares his expertise through articles, tutorials, and talks. - id: 9 image: src: /img/stars/eduardo-vasquez.jpg alt: Eduardo Vasquez Photo name: Eduardo Vasquez position: Data Scientist and MLOps Engineer description: I am a Data Scientist and MLOps Engineer exploring generative AI and LLMs, creating YouTube content on RAG workflows and fine-tuning LLMs. I hold an MSc in Statistics and Data Science. - id: 10 image: src: /img/stars/benito-martin.jpg alt: Benito Martin Photo name: Benito Martin position: Independent Consultant | Data Science, ML and AI Project Implementation | Teacher and Course Content Developer description: Over the past year, Benito developed MLOps and LLM projects. Based in Switzerland, Benito continues to advance his skills. - id: 11 image: src: /img/stars/nirant-kasliwal.jpg alt: Nirant Kasliwal Photo name: Nirant Kasliwal position: FastEmbed Creator description: I'm a Machine Learning consultant specializing in NLP and Vision systems for early-stage products. I've authored an NLP book recommended by Dr. Andrew Ng to Stanford's CS230 students and maintain FastEmbed at Qdrant for speed. 
- id: 12 image: src: /img/stars/denzell-ford.jpg alt: Denzell Ford Photo name: Denzell Ford position: Founder at Trieve, has been using Qdrant since late 2022. description: Denzell Ford, the founder of Trieve, has been using Qdrant since late 2022. He's passionate about helping people in the community. - id: 13 image: src: /img/stars/pavan-nagula.jpg alt: Pavan Nagula Photo name: Pavan Nagula position: Data Scientist | Machine Learning and Generative AI description: I'm Pavan, a data scientist specializing in AI, ML, and big data analytics. I love experimenting with new technologies in the AI and ML space, and Qdrant is a place where I've seen such innovative implementations recently. sitemapExclude: true ---
stars/stars-list.md
--- title: Everything you need to extend your current reach to be the voice of the developer community and represent Qdrant benefits: - id: 0 icon: src: /icons/outline/training-blue.svg alt: Training title: Training description: You will be equipped with the assets and knowledge to organize and execute successful talks and events. Get access to our content library with slide decks, templates, and more. - id: 1 icon: src: /icons/outline/award-blue.svg alt: Award title: Recognition description: Win a certificate and be featured on our website page. Plus, enjoy the distinction of receiving exclusive Qdrant swag. - id: 2 icon: src: /icons/outline/travel-blue.svg alt: Travel title: Travel description: Benefit from a dedicated travel fund for speaking engagements at developer conferences. - id: 3 icon: src: /icons/outline/star-ticket-blue.svg alt: Star ticket title: Beta-tests description: Get a front-row seat to the future of Qdrant with opportunities to beta-test new releases and access our detailed product roadmap. sitemapExclude: true ---
stars/stars-benefits.md
--- title: Join our growing community cards: - id: 0 icon: src: /img/stars-marketplaces/github.svg alt: Github icon title: Stars statsToUse: githubStars description: Join our GitHub community and contribute to the future of vector databases. link: text: Start Contributing url: https://github.com/qdrant/qdrant - id: 1 icon: src: /img/stars-marketplaces/discord.svg alt: Discord icon title: Members statsToUse: discordMembers description: Discover and chat on a vibrant community of developers working on the future of AI. link: text: Join our Conversations url: https://qdrant.to/discord - id: 2 icon: src: /img/stars-marketplaces/twitter.svg alt: Twitter icon title: Followers statsToUse: twitterFollowers description: Join us on X, participate and find out about our updates and releases before anyone else. link: text: Spread the Word url: https://qdrant.to/twitter sitemapExclude: true ---
stars/stars-marketplaces.md
--- title: About Qdrant Stars descriptionFirstPart: Qdrant Stars is an exclusive program for the top contributors and evangelists inside the Qdrant community. descriptionSecondPart: These are the experts responsible for leading community discussions, creating high-quality content, and participating in Qdrant’s events and meetups. image: src: /img/stars-about.png alt: Stars program sitemapExclude: true ---
stars/stars-about.md
--- title: You are already a star in our community! description: The Qdrant Stars program is here to take that one step further. button: text: Become a Star url: https://forms.gle/q4fkwudDsy16xAZk8 image: src: /img/stars-hero.svg alt: Stars sitemapExclude: true ---
stars/stars-hero.md
--- title: Qdrant Stars description: Qdrant Stars - Our Ambassador Program build: render: always cascade: - build: list: local publishResources: false render: never ---
stars/_index.md
--- title: Qdrant Private Cloud. Run Qdrant On-Premise. description: Effortlessly deploy and manage your enterprise-ready vector database fully on-premise, enhancing security for AI-driven applications. contactUs: text: Contact us url: /contact-sales/ sitemapExclude: true ---
private-cloud/private-cloud-hero.md
--- title: Qdrant Private Cloud offers a dedicated, on-premise solution that guarantees supreme data privacy and sovereignty. description: Designed for enterprise-grade demands, it provides a seamless management experience for your vector database, ensuring optimal performance and security for vector search and AI applications. image: src: /img/private-cloud-data-privacy.svg alt: Private cloud data privacy sitemapExclude: true ---
private-cloud/private-cloud-about.md
--- content: To learn more about Qdrant Private Cloud, please contact our team. contactUs: text: Contact us url: /contact-sales/ sitemapExclude: true ---
private-cloud/private-cloud-get-contacted.md
--- title: private-cloud description: private-cloud build: render: always cascade: - build: list: local publishResources: false render: never ---
private-cloud/_index.md
--- draft: false title: Building a High-Performance Entity Matching Solution with Qdrant - Rishabh Bhardwaj | Vector Space Talks slug: entity-matching-qdrant short_description: Rishabh Bhardwaj, a Data Engineer at HRS Group, discusses building a high-performance hotel matching solution with Qdrant. description: Rishabh Bhardwaj, a Data Engineer at HRS Group, discusses building a high-performance hotel matching solution with Qdrant, addressing data inconsistency, duplication, and real-time processing challenges. preview_image: /blog/from_cms/rishabh-bhardwaj-cropped.png date: 2024-01-09T11:53:56.825Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talk - Entity Matching Solution - Real Time Processing --- > *"When we were building proof of concept for this solution, we initially started with Postgres. But after some experimentation, we realized that it basically does not perform very well in terms of recall and speed... then we came to know that Qdrant performs a lot better as compared to other solutions that existed at the moment.”*\ > -- Rishabh Bhardwaj > How does the HNSW (Hierarchical Navigable Small World) algorithm benefit the solution built by Rishabh? Rishabh, a Data Engineer at HRS Group, excels in designing, developing, and maintaining data pipelines and infrastructure crucial for data-driven decision-making processes. With extensive experience, Rishabh brings a profound understanding of data engineering principles and best practices to the role. Proficient in SQL, Python, Airflow, ETL tools, and cloud platforms like AWS and Azure, Rishabh has a proven track record of delivering high-quality data solutions that align with business needs. Collaborating closely with data analysts, scientists, and stakeholders at HRS Group, Rishabh ensures the provision of valuable data and insights for informed decision-making. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/3IMIZljXqgYBqt671eaR9b?si=HUV6iwzIRByLLyHmroWTFA), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/tDWhMAOyrcE).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/tDWhMAOyrcE?si=-LVPtwvJTyyvaSv3" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> <iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/Building-a-High-Performance-Entity-Matching-Solution-with-Qdrant---Rishabh-Bhardwaj--Vector-Space-Talks-005-e2cbu7e/a-aaldc8e" height="102px" width="400px" frameborder="0" scrolling="no"></iframe> ## **Top Takeaways:** Data inconsistency, duplication, and real-time processing challenges? Rishabh Bhardwaj, Data Engineer at HRS Group, has the solution! In this episode, Rishabh dives into the nitty-gritty of creating a high-performance hotel matching solution with Qdrant, covering everything from data inconsistency challenges to the speed and accuracy enhancements achieved through the HNSW algorithm. 5 Keys to Learning from the Episode: 1. Discover the importance of data consistency and the challenges it poses when dealing with multiple sources and languages. 2. Learn how Qdrant, an open-source vector database, outperformed other solutions and provided an efficient solution for high-speed matching. 3. Explore the unique modification of the HNSW algorithm in Qdrant and how it optimized the performance of the solution. 4. 
Dive into the crucial role of geofiltering and how it ensures accurate matching based on hotel locations. 5. Gain insights into the considerations surrounding GDPR compliance and the secure handling of hotel data. > Fun Fact: Did you know that Rishabh and his team experimented with multiple transformer models to find the best fit for their entity resolution use case? Ultimately, they found that the Mini LM model struck the perfect balance between speed and accuracy. Talk about a winning combination! > ## Show Notes: 02:24 Data from different sources is inconsistent and complex.\ 05:03 Using Postgres for proof, switched to Qdrant for better results\ 09:16 Geofiltering is crucial for validating our matches.\ 11:46 Insights on performance metrics and benchmarks.\ 16:22 We experimented with different values and found the desired number.\ 19:54 We experimented with different models and found the best one.\ 21:01 API gateway connects multiple clients for entity resolution.\ 24:31 Multiple languages supported, using transcript API for accuracy. ## More Quotes from Rishabh: *"One of the major challenges is the data inconsistency.”*\ -- Rishabh Bhardwaj *"So the only thing of how to know that which model would work for us is to again experiment with the models on our own data sets. But after doing those experiments, we realized that this is the best model that offers the best balance between speed and accuracy cool of the embeddings.”*\ -- Rishabh Bhardwaj *"Qdrant basically optimizes a lot using for the compute resources and this also helped us to scale the whole infrastructure in a really efficient manner.”*\ -- Rishabh Bhardwaj ## Transcript: Demetrios: Hello, fellow travelers in vector space. Dare, I call you astronauts? Today we've got an incredible conversation coming up with Rishabh, and I am happy that you all have joined us. Rishabh, it's great to have you here, man. How you doing? Rishabh Bhardwaj: Thanks for having me, Demetrios. I'm doing really great. Demetrios: Cool. I love hearing that. And I know you are in India. It is a little bit late there, so I appreciate you taking the time to come on the Vector space talks with us today. You've got a lot of stuff that you're going to be talking about. For anybody that does not know you, you are a data engineer at Hrs Group, and you're responsible for designing, developing, and maintaining data pipelines and infrastructure that supports the company. I am excited because today we're going to be talking about building a high performance hotel matching solution with Qdrant. Of course, there's a little kicker there. Demetrios: We want to get into how you did that and how you leveraged Qdrant. Let's talk about it, man. Let's get into it. I want to know give us a quick overview of what exactly this is. I gave the title, but I think you can tell us a little bit more about building this high performance hotel matching solution. Rishabh Bhardwaj: Definitely. So to start with, a brief description about the project. So we have some data in our internal databases, and we ingest a lot of data on a regular basis from different sources. So Hrs is basically a global tech company focused on business travel, and we have one of the most used hotel booking portals in Europe. So one of the major things that is important for customer satisfaction is the content that we provide them on our portals. Right. So the issue or the key challenges that we have is basically with the data itself that we ingest from different sources. 
One of the major challenges is the data inconsistency. Rishabh Bhardwaj: So different sources provide data in different formats, not only in different formats. It comes in multiple languages as well. So almost all the languages being used across Europe and also other parts of the world as well. So, Majorly, the data is coming across 20 different languages, and it makes it really difficult to consolidate and analyze this data. And this inconsistency in data often leads to many errors in data interpretation and decision making as well. Also, there is a challenge of data duplication, so the same piece of information can be represented differently across various sources, which could then again lead to data redundancy. And identifying and resolving these duplicates is again a significant challenge. Then the last challenge I can think about is that this data processing happens in real time. Rishabh Bhardwaj: So we have a constant influx of data from multiple sources, and processing and updating this information in real time is a really daunting task. Yeah. Demetrios: And when you are talking about this data duplication, are you saying things like, it's the same information in French and German? Or is it something like it's the same column, just a different way in like, a table? Rishabh Bhardwaj: Actually, it is both the cases, so the same entities can be coming in multiple languages. And then again, second thing also wow. Demetrios: All right, cool. Well, that sets the scene for us. Now, I feel like you brought some slides along. Feel free to share those whenever you want. I'm going to fire away the first question and ask about this. I'm going to go straight into Qdrant questions and ask you to elaborate on how the unique modification of Qdrant of the HNSW algorithm benefits your solution. So what are you doing there? How are you leveraging that? And how also to add another layer to this question, this ridiculously long question that I'm starting to get myself into, how do you handle geo filtering based on longitude and latitude? So, to summarize my lengthy question, let's just start with the HNSW algorithm. How does that benefit your solution? Rishabh Bhardwaj: Sure. So to begin with, I will give you a little backstory. So when we were building proof of concept for this solution, we initially started with Postgres, because we had some Postgres databases lying around in development environments, and we just wanted to try out and build a proof of concept. So we installed an extension called Pgvector. And at that point of time, it used to have IVF Flat indexing approach. But after some experimentation, we realized that it basically does not perform very well in terms of recall and speed. Basically, if we want to increase the speed, then we would suffer a lot on basis of recall. Then we started looking for native vector databases in the market, and then we saw some benchmarks and we came to know that Qdrant performs a lot better as compared to other solutions that existed at the moment. Rishabh Bhardwaj: And also, it was open source and really easy to host and use. We just needed to deploy a docker image in EC two instance and we can really start using it. Demetrios: Did you guys do your own benchmarks too? Or was that just like, you looked, you saw, you were like, all right, let's give this thing a spin. Rishabh Bhardwaj: So while deciding initially we just looked at the publicly available benchmarks, but later on, when we started using Qdrant, we did our own benchmarks internally. Nice. 
Demetrios: All right. Rishabh Bhardwaj: We just deployed a docker image of Qdrant in one of the EC Two instances and started experimenting with it. Very soon we realized that the HNSW indexing algorithm that it uses to build the indexing for the vectors, it was really efficient. We noticed that as compared to the PG Vector IVF Flat approach, it was around 16 times faster. And it didn't mean that it was not that accurate. It was actually 5% more accurate as compared to the previous results. So hold up. Demetrios: 16 times faster and 5% more accurate. And just so everybody out there listening knows we're not paying you to say this, right? Rishabh Bhardwaj: No, not at all. Demetrios: All right, keep going. I like it. Rishabh Bhardwaj: Yeah. So initially, during the experimentations, we begin with the default values for the HNSW algorithm that Qdrant ships with. And these benchmarks that I just told you about, it was based on those parameters. But as our use cases evolved, we also experimented on multiple values of basically M and EF construct that Qdrant allow us to specify in the indexing algorithm. Demetrios: Right. Rishabh Bhardwaj: So also the other thing is, Qdrant also provides the functionality to specify those parameters while making the search as well. So it does not mean if we build the index initially, we only have to use those specifications. We can again specify them during the search as well. Demetrios: Okay. Rishabh Bhardwaj: Yeah. So some use cases we have requires 100% accuracy. It means we do not need to worry about speed at all in those use cases. But there are some use cases in which speed is really important when we need to match, like, a million scale data set. In those use cases, speed is really important, and we can adjust a little bit on the accuracy part. So, yeah, this configuration that Qdrant provides for indexing really benefited us in our approach. Demetrios: Okay, so then layer into that all the fun with how you're handling geofiltering. Rishabh Bhardwaj: So geofiltering is also a very important feature in our solution because the entities that we are dealing with in our data majorly consist of hotel entities. Right. And hotel entities often comes with the geocordinates. So even if we match it using one of the Embedding models, then we also need to make sure that whatever the model has matched with a certain cosine similarity is also true. So in order to validate that, we use geofiltering, which also comes in stacked with Qdrant. So we provide geocordinate data from our internal databases, and then we match it from what we get from multiple sources as well. And it also has a radius parameter, which we can provide to tune in. How much radius do we want to take in account in order for this to be filterable? Demetrios: Yeah. Makes sense. I would imagine that knowing where the hotel location is is probably a very big piece of the puzzle that you're serving up for people. So as you were doing this, what are some things that came up that were really important? I know you talked about working with Europe. There's a lot of GDPR concerns. Was there, like, privacy considerations that you had to address? Was there security considerations when it comes to handling hotel data? Vector, Embeddings, how did you manage all that stuff? Rishabh Bhardwaj: So GDP compliance? Yes. It does play a very important role in this whole solution. Demetrios: That was meant to be a thumbs up. I don't know what happened there. Keep going. Sorry, I derailed that. Rishabh Bhardwaj: No worries. 
Yes. So GDPR compliance is also one of the key factors that we take in account while building this solution to make sure that nothing goes out of the compliance. We basically deployed Qdrant inside a private EC two instance, and it is also protected by an API key. And also we have built custom authentication workflows using Microsoft Azure SSO. Demetrios: I see. So there are a few things that I also want to ask, but I do want to open it up. There are people that are listening, watching live. If anyone wants to ask any questions in the chat, feel free to throw something in there and I will ask away. In the meantime, while people are typing in what they want to talk to you about, can you talk to us about any insights into the performance metrics? And really, these benchmarks that you did where you saw it was, I think you said, 16 times faster and then 5% more accurate. What did that look like? What benchmarks did you do? How did you benchmark it? All that fun stuff. And what are some things to keep in mind if others out there want to benchmark? And I guess you were just benchmarking it against Pgvector, right? Rishabh Bhardwaj: Yes, we did. Demetrios: Okay, cool. Rishabh Bhardwaj: So for benchmarking, we have some data sets that are already matched to some entities. This was done partially by humans and partially by other algorithms that we use for matching in the past. And it is already consolidated data sets, which we again used for benchmarking purposes. Then the benchmarks that I specified were only against PG vector, and we did not benchmark it any further because the speed and the accuracy that Qdrant provides, I think it is already covering our use case and it is way more faster than we thought the solution could be. So right now we did not benchmark against any other vector database or any other solution. Demetrios: Makes sense just to also get an idea in my head kind of jumping all over the place, so forgive me. The semantic components of the hotel, was it text descriptions or images or a little bit of both? Everything? Rishabh Bhardwaj: Yes. So semantic comes just from the descriptions of the hotels, and right now it does not include the images. But in future use cases, we are also considering using images as well to calculate the semantic similarity between two entities. Demetrios: Nice. Okay, cool. Good. I am a visual guy. You got slides for us too, right? If I'm not mistaken? Do you want to share those or do you want me to keep hitting you with questions? We have something from Brad in the chat and maybe before you share any slides, is there a map visualization as part of the application UI? Can you speak to what you used? Rishabh Bhardwaj: If so, not right now, but this is actually a great idea and we will try to build it as soon as possible. Demetrios: Yeah, it makes sense. Where you have the drag and you can see like within this area, you have X amount of hotels, and these are what they look like, et cetera, et cetera. Rishabh Bhardwaj: Yes, definitely. Demetrios: Awesome. All right, so, yeah, feel free to share any slides you have, otherwise I can hit you with another question in the meantime, which is I'm wondering about the configurations you used for the HNSW index in Qdrant and what were the number of edges per node and the number of neighbors to consider during the index building. All of that fun stuff that goes into the nitty gritty of it. Rishabh Bhardwaj: So should I go with the slide first or should I answer your question first? 
Demetrios: Probably answer the question so we don't get too far off track, and then we can hit up your slides. And the slides, I'm sure, will prompt many other questions from my side and the audience's side. Rishabh Bhardwaj: So, for HNSW configuration, we have specified the value of M, which is, I think, basically the layers as 64, and the value for EF construct is 256. Demetrios: And how did you go about that? Rishabh Bhardwaj: So we did some again, benchmarks based on the single model that we have selected, which is mini LM, L six, V two. I will talk about it later also. But we basically experimented with different values of M and EF construct, and we came to this number that this is the value that we want to go ahead with. And also when I said that in some cases, indexing is not required at all, speed is not required at all, we want to make sure that whatever we are matching is 100% accurate. In that case, the Python client for Qdrant also provides a parameter called exact, and if we specify it as true, then it basically does not use indexing and it makes a full search on the whole vector collection, basically. Demetrios: Okay, so there's something for me that's pretty fascinating there on these different use cases. What else differs in the different ones? Because you have certain needs for speed or accuracy. It seems like those are the main trade offs that you're working with. What differs in the way that you set things up? Rishabh Bhardwaj: So in some cases so there are some internal databases that need to have hotel entities in a very sophisticated manner. It means it should not contain even a single duplicate entity. In those cases, accuracy is the most important thing we look at, and in some cases, for data analytics and consolidation purposes, we want speed more, but the accuracy should not be that much in value. Demetrios: So what does that look like in practice? Because you mentioned okay, when we are looking for the accuracy, we make sure that it comes through all of the different records. Right. Are there any other things in practice that you did differently? Rishabh Bhardwaj: Not really. Nothing I can think of right now. Demetrios: Okay, if anything comes up yeah, I'll remind you, but hit us with the slides, man. What do you got for the visual learners out there? Rishabh Bhardwaj: Sure. So I have an architecture diagram of what the solution looks like right now. So, this is the current architecture that we have in production. So, as I mentioned, we have deployed the Qdrant vector database in an EC Two, private EC Two instance hosted inside a VPC. And then we have some batch jobs running, which basically create Embeddings. And the source data basically first comes into S three buckets into a data lake. We do a little bit of preprocessing data cleaning and then it goes through a batch process of generating the Embeddings using the Mini LM model, mini LML six, V two. And this model is basically hosted in a SageMaker serverless inference endpoint, which allows us to not worry about servers and we can scale it as much as we want. Rishabh Bhardwaj: And it really helps us to build the Embeddings in a really fast manner. Demetrios: Why did you choose that model? Did you go through different models or was it just this one worked well enough and you went with it? Rishabh Bhardwaj: No, actually this was, I think the third or the fourth model that we tried out with. 
So what happens right now is if, let's say we want to perform a task such as sentence similarity and we go to the Internet and we try to find a model, it is really hard to see which model would perform best in our use case. So the only thing of how to know that which model would work for us is to again experiment with the models on our own data sets. So we did a lot of experiments. We used, I think, Mpnet model and a lot of multilingual models as well. But after doing those experiments, we realized that this is the best model that offers the best balance between speed and accuracy cool of the Embeddings. So we have deployed it in a serverless inference endpoint in SageMaker. And once we generate the Embeddings in a glue job, we then store them into the vector database Qdrant. Rishabh Bhardwaj: Then this part here is what goes on in the real time scenario. So, we have multiple clients, basically multiple application that would connect to an API gateway. We have exposed this API gateway in such a way that multiple clients can connect to it and they can use this entity resolution service according to their use cases. And we take in different parameters. Some are mandatory, some are not mandatory, and then they can use it based on their use case. The API gateway is connected to a lambda function which basically performs search on Qdrant vector database using the same Embeddings that can be generated from the same model that we hosted in the serverless inference endpoint. So, yeah, this is how the diagram looks right now. It did not used to look like this sometime back, but we have evolved it, developed it, and now we have got to this point where it is really scalable because most of the infrastructure that we have used here is serverless and it can be scaled up to any number of requests that you want. Demetrios: What did you have before that was the MVP. Rishabh Bhardwaj: So instead of this one, we had a real time inference endpoint which basically limited us to some number of requests that we had preset earlier while deploying the model. So this was one of the bottlenecks and then lambda function was always there, I think this one and also I think in place of this Qdrant vector database, as I mentioned, we had Postgres. So yeah, that was also a limitation because it used to use a lot of compute capacity within the EC two instance as compared to Qdrant. Qdrant basically optimizes a lot using for the compute resources and this also helped us to scale the whole infrastructure in a really efficient manner. Demetrios: Awesome. Cool. This is fascinating. From my side, I love seeing what you've done and how you went about iterating on the architecture and starting off with something that you had up and running and then optimizing it. So this project has been how long has it been in the making and what has the time to market been like that first MVP from zero to one and now it feels like you're going to one to infinity by making it optimized. What's the time frames been here? Rishabh Bhardwaj: I think we started this in the month of May this year. Now it's like five to six months already. So the first working solution that we built was in around one and a half months and then from there onwards we have tried to iterate it to make it better and better. Demetrios: Cool. Very cool. Some great questions come through in the chat. Do you have multiple language support for hotel names? If so, did you see any issues with such mappings? 
Rishabh Bhardwaj: Yes, we do have support for multiple languages and we do not currently do it using the multilingual models, because what we realized is the multilingual models are built on general sentences and not trained on entities like names, hotel names and traveler names, et cetera. So when we experimented with the multilingual models it did not provide much satisfactory results. So we used the Translate API from Google and it is able to basically translate a lot of the languages that we have across the data and it really gives satisfactory results in terms of entity resolution. Demetrios: Awesome. What other transformers were considered for the evaluation? Rishabh Bhardwaj: The ones I remember from top of my head are Mpnet, then there is a Chinese model called Text to VEC, Shiba something and Bert uncased, if I remember correctly. Yeah, these were some of the models that we considered. Demetrios: And nothing stood out that worked that well, or was it just that you had to make trade offs on all of them? Rishabh Bhardwaj: So in terms of accuracy, Mpnet was a little bit better than Mini LM but then again it was a lot slower than the Mini LM model. It was around five times slower than the Mini LM model, so it was not a big trade off to give up with. So we decided to go ahead with Mini LM. Demetrios: Awesome. Well, dude, this has been pretty enlightening. I really appreciate you coming on here and doing this. If anyone else has any questions for you, we'll leave all your information on where to get in touch in the chat. Rishabh, thank you so much. This is super cool. I appreciate you coming on here. Anyone that's listening, if you want to come onto the vector space talks, feel free to reach out to me and I'll make it happen. Demetrios: This is really cool to see the different work that people are doing and how you all are evolving the game, man. I really appreciate this. Rishabh Bhardwaj: Thank you, Demetrios. Thank you for inviting me and have a nice day.
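The talk references several concrete Qdrant settings: an HNSW index built with `m=64` and `ef_construct=256`, an `exact` search flag for the accuracy-critical flows, and a geo-radius filter on hotel coordinates. Below is a minimal sketch of how those pieces fit together in the Qdrant Python client; the collection name, URL, coordinates, radius, and the 384-dimensional vector size (typical for MiniLM-L6-v2) are illustrative assumptions, not values shared in the episode.

```python
from qdrant_client import QdrantClient, models

# Hypothetical client and collection; the talk does not share names or credentials.
client = QdrantClient(url="http://localhost:6333")

# HNSW parameters mentioned in the talk: m=64, ef_construct=256.
client.create_collection(
    collection_name="hotels",
    vectors_config=models.VectorParams(size=384, distance=models.Distance.COSINE),
    hnsw_config=models.HnswConfigDiff(m=64, ef_construct=256),
)

# Search with a geo-radius filter on coordinates stored in the payload, and force
# an exact (non-indexed) scan for the use cases where recall matters more than speed.
hits = client.search(
    collection_name="hotels",
    query_vector=[0.0] * 384,  # placeholder for a real MiniLM embedding
    query_filter=models.Filter(
        must=[
            models.FieldCondition(
                key="location",
                geo_radius=models.GeoRadius(
                    center=models.GeoPoint(lat=52.52, lon=13.405),  # example coordinates
                    radius=500.0,  # meters
                ),
            )
        ]
    ),
    search_params=models.SearchParams(exact=True),  # bypass HNSW for full-precision matching
    limit=10,
)
```

Dropping `exact=True` (or tuning `hnsw_ef` at query time) trades a little recall for the speed needed on million-scale matching runs, which mirrors the two modes Rishabh describes.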
blog/building-a-high-performance-entity-matching-solution-with-qdrant-rishabh-bhardwaj-vector-space-talks-005.md
--- draft: false preview_image: /blog/from_cms/inception.png sitemapExclude: true title: Qdrant has joined NVIDIA Inception Program slug: qdrant-joined-nvidia-inception-program short_description: Recently Qdrant has become a member of NVIDIA Inception. description: Along with the various opportunities it gives, we are most excited about GPU support since it is an essential feature in Qdrant's roadmap. Stay tuned for our new updates. date: 2022-04-04T12:06:36.819Z author: Alyona Kavyerina featured: false author_link: https://www.linkedin.com/in/alyona-kavyerina/ tags: - Corporate news - NVIDIA categories: - News --- Recently we've become a member of NVIDIA Inception. It is a program that helps boost the evolution of technology startups through access to their cutting-edge technology and experts, connects startups with venture capitalists, and provides marketing support. Along with the various opportunities it gives, we are most excited about GPU support since it is an essential feature in Qdrant's roadmap. Stay tuned for our new updates.
blog/qdrant-has-joined-nvidia-inception-program.md
--- draft: false title: "Kairoswealth & Qdrant: Transforming Wealth Management with AI-Driven Insights and Scalable Vector Search" short_description: "Transforming wealth management with AI-driven insights and scalable vector search." description: "Enhancing wealth management using AI-driven insights and efficient vector search for improved recommendations and scalability." preview_image: /blog/case-study-kairoswealth/preview.png social_preview_image: /blog/case-study-kairoswealth/preview.png date: 2024-07-10T00:02:00Z author: Qdrant featured: false tags: - Kairoswealth - Vincent Teyssier - AI-Driven Insights - Performance Scalability - Multi-Tenancy - Financial Recommendations --- ![Kairoswealth overview](/blog/case-study-kairoswealth/image2.png) ### **About Kairoswealth** [Kairoswealth](https://kairoswealth.com/) is a comprehensive wealth management platform designed to provide users with a holistic view of their financial portfolio. The platform offers access to unique financial products and automates back-office operations through its AI assistant, Gaia. ![Dashboard Kairoswealth](/blog/case-study-kairoswealth/image3.png) ### **Motivations for Adopting a Vector Database** “At Kairoswealth we encountered several use cases necessitating the ability to run similarity queries on large datasets. Key applications included product recommendations and retrieval-augmented generation (RAG),” says [Vincent Teyssier](https://www.linkedin.com/in/vincent-teyssier/), Chief Technology & AI Officer at Kairoswealth. These needs drove the search for a more robust and scalable vector database solution. ### **Challenges with Previous Solutions** “We faced several critical showstoppers with our previous vector database solution, which led us to seek an alternative,” says Teyssier. These challenges included: - **Performance Scalability:** Significant performance degradation occurred as more data was added, despite various optimizations. - **Robust Multi-Tenancy:** The previous solution struggled with multi-tenancy, impacting performance. - **RAM Footprint:** High memory consumption was an issue. ### **Qdrant Use Cases at Kairoswealth** Kairoswealth leverages Qdrant for several key use cases: - **Internal Data RAG:** Efficiently handling internal RAG use cases. - **Financial Regulatory Reports RAG:** Managing and generating financial reports. - **Recommendations:** Enhancing the accuracy and efficiency of recommendations with the Kairoswealth platform. ![Stock recommendation](/blog/case-study-kairoswealth/image1.png) ### **Why Kairoswealth Chose Qdrant** Some of the key reasons, why Kairoswealth landed on Qdrant as the vector database of choice are: 1. **High Performance with 2.4M Vectors:** “Qdrant efficiently handled the indexing of 1.2 million vectors with 16 metadata fields each, maintaining high performance with no degradation. Similarity queries and scrolls run in less than 0.3 seconds. When we doubled the dataset to 2.4 million vectors, performance remained consistent.So we decided to double that to 2.4M vectors, and it's as if we were inserting our first vector!” says Teyssier. 2. **8x Memory Efficiency:** The database storage size with Qdrant was eight times smaller than the previous solution, enabling the deployment of the entire dataset on smaller instances and saving significant infrastructure costs. 3. 
**Embedded Capabilities:** “Beyond simple search and similarity, Qdrant hosts a bunch of very nice features around recommendation engines, adding positive and negative examples for better spatial narrowing, efficient multi-tenancy, and many more,” says Teyssier. 4. **Support and Community:** “The Qdrant team, led by Andre Zayarni, provides exceptional support and has a strong passion for data engineering,” notes Teyssier, “the team's commitment to open-source and their active engagement in helping users, from beginners to veterans, is highly valued by Kairoswealth.” ### **Conclusion** Kairoswealth's transition to Qdrant has enabled them to overcome significant challenges related to performance, scalability, and memory efficiency, while also benefiting from advanced features and robust support. This partnership positions Kairoswealth to continue innovating in the wealth management sector, leveraging the power of AI to deliver superior services to their clients. ### **Future Roadmap for Kairoswealth** Kairoswealth is seizing the opportunity to disrupt the wealth management sector, which has traditionally been underserved by technology. For example, they are developing the Kairos Terminal, a natural language interface that translates user queries into OpenBB commands (a set of tools for financial analysis and data visualization within the OpenBB Terminal). With regard to the future of the wealth management sector, Teyssier notes that “the integration of Generative AI will automate back-office tasks such as data collation, data reconciliation, and market research. This technology will also enable wealth managers to scale their services to broader segments, including affluent clients, by automating relationship management and interactions.”
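Teyssier's mention of recommendation features with positive and negative examples refers to Qdrant's recommendation API. As a rough illustration of what that looks like in the Python client, here is a minimal sketch; the collection name, point IDs, and the `tenant_id` payload field are hypothetical and not taken from Kairoswealth's actual setup.

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# Recommend items similar to ones a user engaged with (positive examples),
# while steering away from ones they dismissed (negative examples).
recommendations = client.recommend(
    collection_name="instruments",   # hypothetical collection of financial products
    positive=[101, 205],             # IDs of liked items
    negative=[307],                  # IDs of dismissed items
    query_filter=models.Filter(
        must=[
            models.FieldCondition(
                key="tenant_id",     # payload-based partitioning for multi-tenancy
                match=models.MatchValue(value="client-42"),
            )
        ]
    ),
    limit=5,
)
```

The payload filter in the sketch is one common way to keep each tenant's data isolated within a single collection, which is the kind of multi-tenancy the case study highlights.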
blog/case-study-kairoswealth.md
--- draft: false title: Vector Search for Content-Based Video Recommendation - Gladys and Samuel from Dailymotion slug: vector-search-vector-recommendation short_description: Gladys Roch and Samuel Leonardo Gracio join us in this episode to share their knowledge on content-based recommendation. description: Gladys Roch and Samuel Leonardo Gracio from Dailymotion discuss optimizing video recommendations using Qdrant's vector search, alongside challenges and solutions in content-based recommender systems. preview_image: /blog/from_cms/gladys-and-sam-bp-cropped.png date: 2024-03-19T14:08:00.190Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - Vector Search - Video Recommender - content based recommendation --- > "*The vector search engine that we chose is Qdrant, but why did we choose it? Actually, it answers all the load constraints and the technical needs that we had. It allows us to do a fast neighbor search. It has a python API which matches the recommender stack that we have.*”\ -- Gladys Roch > Gladys Roch is a French Machine Learning Engineer at Dailymotion working on recommender systems for video content. > "*We don't have full control and at the end the cost of their solution is very high for a very low proposal. So after that we tried to benchmark other solutions and we found out that Qdrant was easier for us to implement.*”\ -- Samuel Leonardo Gracio > Samuel Leonardo Gracio, a Senior Machine Learning Engineer at Dailymotion, mainly works on recommender systems and video classification. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/4YYASUZKcT5A90d6H2mOj9?si=a5GgBd4JTR6Yo3HBJfiejQ), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/z_0VjMZ2JY0).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/z_0VjMZ2JY0?si=buv9aSN0Uh09Y6Qx" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> <iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/Vector-Search-for-Content-Based-Video-Recommendation---Gladys-and-Sam--Vector-Space-Talk-012-e2f9hmm/a-aatvqtr" height="102px" width="400px" frameborder="0" scrolling="no"></iframe> ## **Top takeaways:** Are you captivated by how video recommendations are engineered to serve up your next binge-worthy content? We definitely are. Get ready to unwrap the secrets that keep millions engaged, as Demetrios chats with the brains behind the scenes of Dailymotion. This episode is packed with insights straight from ML Engineers at Dailymotion who are reshaping how we discover videos online. Here's what you’ll unbox from this episode: 1. **The Mech Behind the Magic:** Understand how a robust video embedding process can change the game - from textual metadata to audio signals and beyond. 2. **The Power of Multilingual Understanding:** Discover the tools that help recommend videos to a global audience, transcending language barriers. 3. **Breaking the Echo Chamber:** Learn about Dailymotion's 'perspective' feature that's transforming the discovery experience for users. 4. **Challenges & Triumphs:** Hear how Qdrant helps Dailymotion tackle a massive video catalog and ensure the freshest content pops on your feed. 5. 
**Behind the Scenes with Qdrant:** Get an insider’s look at why Dailymotion entrusted their recommendation needs to Qdrant's capable hands (or should we say algorithms?). > Fun Fact: Did you know that Dailymotion juggles over 13 million recommendations daily? That's like serving up a personalized video playlist to the entire population of Greece. Every single day! > ## Show notes: 00:00 Vector Space Talks intro with Gladys and Samuel.\ 05:07 Recommender system needs vector search for recommendations.\ 09:29 Chose vector search engine for fast neighbor search.\ 13:23 Video transcript use for scalable multilingual embedding.\ 16:35 Transcripts prioritize over video title and tags.\ 17:46 Videos curated based on metadata for quality.\ 20:53 Qdrant setup overview for machine learning engineers.\ 25:25 Enhanced recommendation system improves user engagement.\ 29:36 Recommender system, A/B testing, collection aliases strategic.\ 33:03 Dailymotion's new feature diversifies video perspectives.\ 34:58 Exploring different perspectives and excluding certain topics. ## More Quotes from Gladys and Sam: "*Basically, we're computing the embeddings and then we feed them into Qdrant, and we do that with a streaming pipeline, which means that every time, so everything is in streaming, every time a new video is uploaded or updated, if the description changes, for example, then the embedding will be computed and then it will be fed directly into Qdrant.*”\ -- Gladys Roch *"We basically recommend videos to a user if other users watching the same video were watching other videos. But the problem with that is that it only works with videos where we have what we call here high signal. So videos that have at least thousands of views, some interactions, because for fresh or niche videos, we don't have enough interaction.”*\ -- Samuel Leonardo Gracio *"But every time we add new videos to Dailymotion, then it's growing. So it can provide recommendation for videos with few interactions that we don't know well. So we're very happy because it led us to huge performances increase on the low signal. We did a threefold increase on the CTR, which means the number of clicks on the recommendation. So with Qdrant we were able to kind of fix our cold start issues.”*\ -- Gladys Roch *"The fact that you have a very cool team that helped us to implement some parts when it was difficult, I think it was definitely the thing that make us choose Qdrant instead of another solution.”*\ -- Samuel Leonardo Gracio ## Transcript: Demetrios: I don't know if you all realize what you got yourself into, but we are back for another edition of the Vector Space Talks. My stream is a little bit chunky and slow, so I think we're just going to get into it with Gladys and Samuel from Dailymotion. Thank you both for joining us. It is an honor to have you here. For everyone that is watching, please throw your questions and anything else that you want to remark about into the chat. We love chatting with you and I will jump on screen if there is something that we need to stop the presentation about and ask right away. But for now, I think you all got some screen shares you want to show us. Samuel Leonardo Gracio: Yes, exactly. So first of all, thank you for the invitation, of course. And yes, I will share my screen. We have a presentation. Excellent. Should be okay now. Demetrios: Brilliant. Samuel Leonardo Gracio: So can we start? Demetrios: I would love it. Yes, I'm excited. I think everybody else is excited too. 
Gladys Roch: So welcome, everybody, to our vector space talk. I'm Gladys Roch, machine learning engineer at Dailymotion. Samuel Leonardo Gracio: And I'm Samuel, senior machine learning engineer at Dailymotion. Gladys Roch: Today we're going to talk about Vector search in the context of recommendation and in particular how Qdrant. That's going to be a hard one. We actually got used to pronouncing Qdrant as a french way, so we're going to sleep a bit during this presentation, sorry, in advance, the Qdrant and how we use it for our content based recommender. So we are going to first present the context and why we needed a vector database and why we chose Qdrant, how we fit Qdrant, what we put in it, and we are quite open about the pipelines that we've set up and then we get into the results and how Qdrant helped us solve the issue that we had. Samuel Leonardo Gracio: Yeah. So first of all, I will talk about, globally, the recommendation at Dailymotion. So just a quick introduction about Dailymotion, because you're not all french, so you may not all know what Dailymotion is. So we are a video hosting platform as YouTube or TikTok, and we were founded in 2005. So it's a node company for videos and we have 400 million unique users per month. So that's a lot of users and videos and views. So that's why we think it's interesting. So Dailymotion is we can divide the product in three parts. Samuel Leonardo Gracio: So one part is the native app. As you can see, it's very similar from other apps like TikTok or Instagram reels. So you have vertical videos, you just scroll and that's it. We also have a website. So Dailymotion.com, that is our main product, historical product. So on this website you have a watching page like you can have for instance, on YouTube. And we are also a video player that you can find in most of the french websites and even in other countries. And so we have recommendation almost everywhere and different recommenders for each of these products. Gladys Roch: Okay, so that's Dailymotion. But today we're going to focus on one of our recommender systems. Actually, the machine learning engineer team handles multiple recommender systems. But the video to video recommendation is the oldest and the most used. And so it's what you can see on the screen, it's what you have the recommendation queue of videos that you can see on the side or below the videos that you're watching. And to compute these suggestions, we have multiple models running. So that's why it's a global system. This recommendation is quite important for Dailymotion. Gladys Roch: It's actually a key component. It's one of the main levers of audience generation. So for everybody who comes to the website from SEO or other ways, then that's how we generate more audience and more engagement. So it's very important in the revenue stream of the platform. So working on it is definitely a main topic of the team and that's why we are evolving on this topic all the time. Samuel Leonardo Gracio: Okay, so why would we need a vector search for this recommendation? I think we are here for that. So as many platforms and as many recommender systems, I think we have a very usual approach based on a collaborative model. So we basically recommend videos to a user if other users watching the same video were watching other videos. But the problem with that is that it only works with videos where we have what we call here high signal. 
So videos that have at least thousands of views, some interactions, because for fresh or niche videos, we don't have enough interaction. And we have a problem that I think all the recommender systems can have, which is a cold start issue. So this cold start issue is for new users and new videos, in fact. So if we don't have any information or interaction, it's difficult to recommend anything based on this collaborative approach. Samuel Leonardo Gracio: So the idea to solve that was to use a content based recommendation. It's also a classic solution. And the idea is when you have a very fresh video. So video A, in this case, a good thing to recommend when you don't have enough information is to recommend a very similar video and hope that the user will watch it also. So for that, of course, we use Qdrant and we will explain how. So yeah, the idea is to put everything in the vector space. So each video at Dailymotion will go through an embedding model. So for each video we'll get a video embedding. Samuel Leonardo Gracio: We will describe how we do that just after and put it in a vector space. So after that we could use Qdrant to, sorry, Qdrant to query and get similar videos that we will recommend to our users. Gladys Roch: Okay, so if we have embeddings to represent our videos, then we have a vector space, but we need to be able to query this vector space and not only to query it, but to do it at scale and online because it's like a recommender facing users. So we have a few requirements. The first one is that we have a lot of videos in our catalog. So actually doing an exact neighbor search would be unreasonable, unrealistic. It's a combinatorial explosion issue, so we can't do an exact KNN. Plus we also have new videos being uploaded to Dailymotion every hour. So if we could somehow manage to do KNN and to pre compute it, it would never be up to date and it would be very expensive to recompute all the time to include all the new videos. So we need a solution that can integrate new videos all the time. Gladys Roch: And we're also at scale, we serve over 13 million recommendations each day. So it means that we need a big setup to retrieve the neighbors of many videos all day. And finally, we have users waiting for the recommendation. So it's not just pre computed and stored, and it's not just content knowledge. We are trying to provide the recommendation as fast as possible. So we have time constraints and we only have a few hundred milliseconds to compute the recommendation that we're going to show the user. So we need to be able to retrieve the close video that we'd like to propose to the user very fast. So we need to be able to navigate this vector space that we are building quite quickly. Gladys Roch: So of course we need a vector search engine. That's the most easy way to do it, to be able to compute an approximate neighbor search and to do it at scale. So obviously, evidently the vector search engine that we chose is Qdrant, but why did we choose it? Actually, it answers all the load constraints and the technical needs that we had. It allows us to do a fast neighbor search. It has a python API which matches the recommender stack that we have. A very important issue for us was to be able to not only put the embeddings of the vectors in this space but also to put metadata with it to be able to get a bit more information and not just a mathematical representation of the video in this database.
And actually doing that make it filterable, which means that we can retrieve neighbors of a video, but given some constraints, and it's very important for us typically for language constraints. Samuel will talk a bit more in details about that just after. Gladys Roch: But we have an embedding that is multilingual and we need to be able to filter all the language, all the videos on their language to offer more robust recommendation for our users. And also Qdrant is distributed and so it's scalable and we needed that due to the load that I just talked about. So that's the main points that led us to choose Qdrant. Samuel Leonardo Gracio: And also they have an amazing team. Gladys Roch: So that's another, that would be our return of experience. The team of Qdrant is really nice. You helped us actually put in place the cluster. Samuel Leonardo Gracio: Yeah. So what do we put in our Qdrant cluster? So how do we build our robust video embedding? I think it's really interesting. So the first point for us was to know what a video is about. So it's a really tricky question, in fact. So of course, for each video uploaded on the platform, we have the video signal, so many frames representing the video, but we don't use that for our meetings. And in fact, why we are not using them, it's because it contains a lot of information. Right, but not what we want. For instance, here you have video about an interview of LeBron James. Samuel Leonardo Gracio: But if you only use the frames, the video signal, you can't even know what he's saying, what the video is about, in fact. So we still try to use it. But in fact, the most interesting thing to represent our videos are the textual metadata. So the textual metadata, we have them for every video. So for every video uploaded on the platform, we have a video title, video description that are put by the person that uploads the video. But we also have automatically detected tags. So for instance, for this video, you could have LeBron James, and we also have subtitles that are automatically generated. So just to let you know, we do that using whisper, which is an open source solution provided by OpenAI, and we do it at scale. Samuel Leonardo Gracio: When a video is uploaded, we directly have the video transcript and we can use this information to represent our videos with just a textual embedding, which is far more easy to treat, and we need less compute than for frames, for instance. So the other issue for us was that we needed an embedding that could scale so that does not require too much time to compute because we have a lot of videos, more than 400 million videos, and we have many videos uploaded every hour, so it needs to scale. We also have many languages on our platform, more than 300 languages in the videos. And even if we are a french video platform, in fact, it's only a third of our videos that are actually in French. Most of the videos are in English or other languages such as Turkish, Spanish, Arabic, et cetera. So we needed something multilingual, which is not very easy to find. But we came out with this embedding, which is called multilingual universal sentence encoder. It's not the most famous embedding, so I think it's interesting to share it. Samuel Leonardo Gracio: It's open source, so everyone can use it. It's available on Tensorflow hub, and I think that now it's also available on hugging face, so it's easy to implement and to use it. The good thing is that it's pre trained, so you don't even have to fine tune it on your data. 
You can, but I think it's not even required. And of course it's multilingual, so it doesn't work with every languages. But still we have the main languages that are used on our platform. It focuses on semantical similarity. And you have an example here when you have different video titles. Samuel Leonardo Gracio: So for instance, one about soccer, another one about movies. Even if you have another video title in another language, if it's talking about the same topic, they will have a high cosine similarity. So that's what we want. We want to be able to recommend every video that we have in our catalog, not depending on the language. And the good thing is that it's really fast. Actually, it's a few milliseconds on cpu, so it's really easy to scale. So that was a huge requirement for us. Demetrios: Can we jump in here? Demetrios: There's a few questions that are coming through that I think are pretty worth. And it's actually probably more suited to the last slide. Sameer is asking this one, actually, one more back. Sorry, with the LeBron. Yeah, so it's really about how you understand the videos. And Sameer was wondering if you can quote unquote hack the understanding by putting some other tags or. Samuel Leonardo Gracio: Ah, you mean from a user perspective, like the person uploading the video, right? Demetrios: Yeah, exactly. Samuel Leonardo Gracio: You could do that before using transcripts, but since we are using them mainly and we only use the title, so the tags are automatically generated. So it's on our side. So the title and description, you can put whatever you want. But since we have the transcript, we know the content of the video and we embed that. So the title and the description are not the priority in the embedding. So I think it's still possible, but we don't have such use case. In fact, most of the people uploading videos are just trying to put the right title, but I think it's still possible. But yeah, with the transcript we don't have any examples like that. Samuel Leonardo Gracio: Yeah, hopefully. Demetrios: So that's awesome to think about too. It kind of leads into the next question, which is around, and this is from Juan Pablo. What do you do with videos that have no text and no meaningful audio, like TikTok or a reel? Samuel Leonardo Gracio: So for the moment, for these videos, we are only using the signal from the title tags, description and other video metadata. And we also have a moderation team which is watching the videos that we have here in the mostly recommended videos. So we know that the videos that we recommend are mostly good videos. And for these videos, so that don't have audio signal, we are forced to use the title tags and description. So these are the videos where the risk is at the maximum for us currently. But we are also working at the moment on something using the audio signal and the frames, but not all the frames. But for the moment, we don't have this solution. Right. Gladys Roch: Also, as I said, it's not just one model, we're talking about the content based model. But if we don't have a similarity score that is high enough, or if we're just not confident about the videos that were the closest, then we will default to another model. So it's not just one, it's a huge system. Samuel Leonardo Gracio: Yeah, and one point also, we are talking about videos with few interactions, so they are not videos at risk. I mean, they don't have a lot of views. 
When this content based algo is called, they are important because there are very fresh videos, and fresh videos will have a lot of views in a few minutes. But when the collaborative model will be retrained, it will be able to recommend videos on other things than the content itself, but it will use the collaborative signal. So I'm not sure that it's a really important risk for us. But still, I think we could still do some improvement for that aspect. Demetrios: So where do I apply to just watch videos all day for the content team? All right, I'll let you get back to it. Sorry to interrupt. And if anyone else has good questions. Samuel Leonardo Gracio: And I think it's good to ask your question during the presentation, it's more easier to answer. So, yeah, sorry, I was saying that we had this multilingual embedding, and just to present you our embedding pipeline. So, for each video that is uploaded or edited, because you can change the video title whenever you want, we have a pub sub event that is sent to a dataflow pipeline. So it's a streaming job for every video we will retrieve. So textual metadata, title, description tags or transcript, preprocess it to remove some words, for instance, and then call the model to have this embedding. And then. So we put it in bigquery, of course, but also in Qdrant. Gladys Roch: So I'm going to present a bit our Qdrant setup. So actually all this was deployed by our DevOps team, not by us machine learning engineers. So it's an overview, and I won't go into the details because I'm not familiar with all of this, but basically, as Samuel said, we're computing the embeddings and then we feed them into Qdrant, and we do that with a streaming pipeline, which means that every time, so everything is in streaming, every time a new video is uploaded or updated, if the description changes, for example, then the embedding will be computed and then it will be fed directly into Qdrant. And on the other hand, our recommender queries the Qdrant vector space through a gRPC ingress. And actually Qdrant is running on six pods that are using arm nodes. And you have the specificities of which type of nodes we're using there, if you're interested. But basically that's the setup. And what is interesting is that our recommendation stack for now, it's on premise, which means it's running on Dailymotion servers, not on the Google Kubernetes Engine, whereas Qdrant is on the GKE. Gladys Roch: So we are querying it from outside. And also if you have more questions about this setup, we'll be happy to redirect you to the DevOps team that helped us put that in place. And so finally the results. So we stated earlier that we had a cold start issue. So before Qdrant, we had a lot of difficulties with this challenge. We had a collaborative recommender that was trained and performed very well on high signal videos, which means that is videos with a lot of interactions. So we can see what users like to watch, which videos they like to watch together. And we also had a metadata recommender. Gladys Roch: But first, this collaborative recommender was actually also used to compute cold start recommendations, which is not what it is trained on, but we were using a default embedding to compute like a default recommendation for cold start, which led to a lot of popularity issues. Popularity issues for recommender systems are when you always recommend the same video that is hugely popular and it's like a feedback loop.
A lot of people will default to this video because it might be clickbait and then we will have a lot of interaction. So it will pollute the collaborative model all over again. So we had popularity issues with this, obviously. And we also had this metadata recommender that only focused on a very small scope of trusted owners and trusted video sources. So it was working. It was an autoencoder and it was fine, but the scope was too small.

Gladys Roch: Too few videos could be recommended through this model. And also those two models were trained very infrequently, only every 4 hours and 5 hours, which means that any fresh videos on the platform could not be recommended properly for like 4 hours. So it was the main issue because Dailymotion uses a lot of fresh videos and we have a lot of news, et cetera. So we need to be topical and this couldn't be done with this huge delay. So we had overall bad performances on the low signal. And so with Qdrant we fixed that. We still have our collaborative recommender. It has evolved since then.

Gladys Roch: It's actually computed much more often, but the collaborative model is only focused on high signal now and it's not computing a default recommendation for low signal that it doesn't know. And we have a content-based recommender based on the MUSE embedding and Qdrant that is able to recommend videos to users as soon as they are uploaded on the platform. And it has a growing scope, 20 million vectors at the moment. But every time we add new videos to Dailymotion, then it's growing. So it can provide recommendations for videos with few interactions that we don't know well. So we're very happy because it led us to a huge performance increase on the low signal. We did a threefold increase on the CTR, which means the number of clicks on the recommendation. So with Qdrant we were able to kind of fix our cold start issues.

Gladys Roch: What I was talking about: fresh videos, popularity, low performances. We fixed that and we were very happy with the setup. It's running smoothly. Yeah, I think that's it for the presentation, for the slides at least. So we are open to discussion and if you have any questions to go into the details of the recommender system, go ahead, shoot.

Demetrios: I've got some questions while people are typing out everything in the chat, and the first one I think that we should probably get into is how did the evaluation process go for you when you were looking at different vector databases and vector search engines?

Samuel Leonardo Gracio: So that's a good point. So first of all, you have to know that we are working with Google Cloud Platform. So the first thing that we did was to use their vector search engine, which is called Matching Engine.

Gladys Roch: Right.

Samuel Leonardo Gracio: But the issue with Matching Engine is that, first of all, the API wasn't easy to use. The second thing was that we could not put metadata, as we do in Qdrant, and filter out, pre-filter before the query, as we are doing now in Qdrant. And another thing is that their solution is managed. We don't have the full control, and at the end the cost of their solution is very high for a very low proposal. So after that we tried to benchmark other solutions and we found out that Qdrant was easier for us to implement. There was really cool documentation, so it was easy to test some things and basically we couldn't find any drawbacks for our use case at least.
Samuel Leonardo Gracio: And moreover, the fact that you have a very cool team that helped us to implement some parts when it was difficult, I think it was definitely the thing that made us choose Qdrant instead of another solution, because we implemented Qdrant.

Gladys Roch: Like in February or even January 2023. So Qdrant is fairly new, so the documentation was still under construction. And so you helped us through Discord to set up the cluster. So it was really nice.

Demetrios: Excellent. And what about least favorite parts of using Qdrant?

Gladys Roch: Yeah, I have one. I discovered it was not actually a requirement at the beginning, but for recommender systems we tend to do a lot of A/B tests. And you might wonder what's the deal with Qdrant and A/B tests. It's not related, but actually we were able to A/B test our collection. So how we compute the embedding: first we had an embedding without the transcript, and now we have an embedding that includes the transcript. So we wanted to A/B test that. And in Qdrant you can have collection aliases, and this is super helpful because you can have two collections that live on the cluster at the same time, and then in your code you can just call the production collection and then set the alias to the proper one. So for A/B testing and rollout it's very useful.

Gladys Roch: And I found it when I first wanted to do an A/B test. So I like this one. It already existed and I liked it. Also, the second thing I like is the API documentation, like the one that is auto-generated with all the examples and how to query any info on Qdrant. It's really nice for someone who's not from DevOps. It helps us just debug our collection whenever. So it's very easy to get into.

Samuel Leonardo Gracio: And the fact that the product is evolving so fast, like every week almost you have a new feature, I think it's really cool. There is a great community and I think, yeah, it's really interesting and it's amazing to have such people working on an open source project like this one.

Gladys Roch: We had feedback from our DevOps team when preparing this presentation. We reached out to them for the small schema that I tried to present. And yeah, they said that the open source community of Qdrant was really nice. It was easy to contribute, it was very open on Discord. I think we did a return on experience at some point on how we set up the cluster at the beginning. And yeah, they were very hyped by the fact that it's coded in Rust. I don't know if you hear this a lot, but to them it's even more encouraging to contribute with this kind of new language.

Demetrios: 100% excellent. So last question from my end, and it is on if you're using Qdrant for anything else when it comes to products at Dailymotion, yes, actually we do.

Samuel Leonardo Gracio: I have one slide about this.

Gladys Roch: We have slides because we presented Qdrant at another talk a few weeks ago.

Samuel Leonardo Gracio: So we didn't prepare this slide just for this presentation, it's from another presentation, but still, it's a good point because we're currently trying to use it in other projects. So as we said in this presentation, we're mostly using it for the watching page, so Dailymotion.com, but we also introduced it in the mobile app recently through a feature that is called Perspective. So the goal of the feature is to be able to break this vertical feed algorithm, to let the users have a button to discover new videos.
So when you go through your feed, sometimes you will get a video talking about, I don't know, a movie. You will get this button, which is called Perspective, and you will be able to have other videos talking about the same movie but giving you another point of view. So people liking the movie, people that didn't like the movie, and we use Qdrant, sorry, for the candidate generation part, so to get the similar videos and to get the videos that are talking about the same subject.

Samuel Leonardo Gracio: So I won't talk too much about this project because it would require another presentation of 20 minutes or more. But still we are using it in other projects and yeah, it's really interesting to see what we are able to do with that tool.

Gladys Roch: Once we have the vector space set up, we can just query it from everywhere, in every recommendation project.

Samuel Leonardo Gracio: We also tested some search. We are testing many things actually, but we haven't implemented it yet. For the moment we just have this Perspective feed and the content-based reco, but we still have a lot of ideas using this vector search space.

Demetrios: I love that idea on the get another perspective. So it's not like you get, as you were mentioning before, you don't get that echo chamber and just about everyone saying the same thing. You get to see: are there other sides to this? And I can see how that could be very uh, Juan Pablo is back, asking questions in the chat about are you able to recommend videos with negative search queries, and negative in the sense of, for example, as a user I want to see videos of a certain topic, but I want to exclude some topics from the video.

Gladys Roch: Okay. We actually don't do that at the moment, but we know that with Qdrant we can set positive and negative points from where to query. So actually for the moment we only retrieve close positive neighbors and we apply some business filters on top of that recommendation. But that's it.

Samuel Leonardo Gracio: And that's because we have also this collaborative model, which is our main recommender system. But I think we definitely need to check that and maybe in the future we will implement that. We saw there is documentation about this and I'm pretty sure that it would work very well on our use case.

Demetrios: Excellent. Well folks, I think that's about it for today. I want to thank you so much for coming and chatting with us and teaching us about how you're using Qdrant and being very transparent about your use. I learned a ton. And for anybody that's out there doing recommender systems and interested in more, I think they can reach out to you on LinkedIn. I've got both of your profiles; we'll drop them in the chat right now and we'll let everybody enjoy. So don't get lost in vector space. We will see you all later.

Demetrios: If anyone wants to give a talk next, reach out to me. We always are looking for incredible talks and so this has been great. Thank you all.

Gladys Roch: Thank you.

Samuel Leonardo Gracio: Thank you very much for the invitation and for everyone listening. Thank you.

Gladys Roch: See you. Bye.
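For reference, the collection-alias pattern Gladys describes for A/B testing and rollouts looks roughly like this with the Qdrant Python client. This is a minimal sketch: the collection and alias names are hypothetical, and the query vector is a stand-in for a real video embedding.

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# Two index variants live side by side: one built without transcripts,
# one built with them (hypothetical collection names).
# Point the production alias at the variant currently being served.
client.update_collection_aliases(
    change_aliases_operations=[
        models.CreateAliasOperation(
            create_alias=models.CreateAlias(
                collection_name="video_embeddings_with_transcript",
                alias_name="video_embeddings_production",
            )
        )
    ]
)

# Application code always queries the alias, so switching the experiment
# (or rolling back) is a metadata change rather than a code deploy.
query_embedding = [0.1] * 512  # stand-in for a real video embedding
hits = client.search(
    collection_name="video_embeddings_production",
    query_vector=query_embedding,
    limit=10,
)
```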
blog/vector-search-for-content-based-video-recommendation-gladys-and-sam-vector-space-talks.md
--- draft: false title: Indexify Unveiled - Diptanu Gon Choudhury | Vector Space Talks slug: indexify-content-extraction-engine short_description: Diptanu Gon Choudhury discusses how Indexify is transforming the AI-driven workflow in enterprises today. description: Diptanu Gon Choudhury shares insights on re-imaging Spark and data infrastructure while discussing his work on Indexify to enhance AI-driven workflows and knowledge bases. preview_image: /blog/from_cms/diptanu-choudhury-cropped.png date: 2024-01-26T16:40:55.469Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - Indexify - structured extraction engine - rag-based applications --- > *"We have something like Qdrant, which is very geared towards doing Vector search. And so we understand the shape of the storage system now.”*\ — Diptanu Gon Choudhury > Diptanu Gon Choudhury is the founder of Tensorlake. They are building Indexify - an open-source scalable structured extraction engine for unstructured data to build near-real-time knowledgebase for AI/agent-driven workflows and query engines. Before building Indexify, Diptanu created the Nomad cluster scheduler at Hashicorp, inventor of the Titan/Titus cluster scheduler at Netflix, led the FBLearner machine learning platform, and built the real-time speech inference engine at Facebook. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/6MSwo7urQAWE7EOxO7WTns?si=_s53wC0wR9C4uF8ngGYQlg), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/RoOgTxHkViA).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/RoOgTxHkViA?si=r0EjWlssjFDVrzo6" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> <iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/Indexify-Unveiled-A-Scalable-and-Near-Real-time-Content-Extraction-Engine-for-Multimodal-Unstructured-Data---Diptanu-Gon-Choudhury--Vector-Space-Talk-009-e2el8qc/a-aas4nil" height="102px" width="400px" frameborder="0" scrolling="no"></iframe> ## **Top takeaways:** Discover how reimagined data infrastructures revolutionize AI-agent workflows as Diptanu delves into Indexify, transforming raw data into real-time knowledge bases, and shares expert insights on optimizing rag-based applications, all amidst the ever-evolving landscape of Spark. Here's What You'll Discover: 1. **Innovative Data Infrastructure**: Diptanu dives deep into how Indexify is revolutionizing the enterprise world by providing a sharper focus on data infrastructure and a refined abstraction for generative AI this year. 2. **AI-Copilot for Call Centers**: Learn how Indexify streamlines customer service with a real-time knowledge base, transforming how agents interact and resolve issues. 3. **Scaling Real-Time Indexing**: discover the system’s powerful capability to index content as it happens, enabling multiple extractors to run simultaneously. It’s all about the right model and the computing capacity for on-the-fly content generation. 4. **Revamping Developer Experience**: get a glimpse into the future as Diptanu chats with Demetrios about reimagining Spark to fit today's tech capabilities, vastly different from just two years ago! 5. **AI Agent Workflow Insights**: Understand the crux of AI agent-driven workflows, where models dynamically react to data, making orchestrated decisions in live environments. 
> Fun Fact: The development of Indexify by Diptanu was spurred by the rising use of Large Language Models in applications and the subsequent need for better data infrastructure to support these technologies.
>

## Show notes:

00:00 AI's impact on model production and workflows.\
05:15 Building agents need indexes for continuous updates.\
09:27 Early RAG and LLM adopters neglect data infrastructure.\
12:32 Design partner creating copilot for call centers.\
17:00 Efficient indexing and generation using scalable models.\
20:47 Spark is versatile, used for many cases.\
24:45 Recent survey paper on RAG covers tips.\
26:57 Evaluation of various aspects of data generation.\
28:45 Balancing trust and cost in factual accuracy.

## More Quotes from Diptanu:

*"In 2017, when I started doing machine learning, it would take us six months to ship a good model in production. And here we are today, in January 2024, new models are coming out every week, and people are putting them in production.”*\
-- Diptanu Gon Choudhury

*"Over a period of time, you want to extract new information out of existing data, because models are getting better continuously.”*\
-- Diptanu Gon Choudhury

*"We are in the golden age of demos. Golden age of demos with LLMs. Almost anyone, I think with some programming knowledge can kind of like write a demo with an OpenAI API or with an embedding model and so on.”*\
-- Diptanu Gon Choudhury

## Transcript:

Demetrios: We are live, baby. This is it. Welcome back to another Vector Space Talks. I'm here with my man Diptanu. He is the founder and creator of Tensorlake. They are building Indexify, an open source, scalable, structured extraction engine for unstructured data to build near real time knowledge bases for AI agent driven workflows and query engines. And if it sounds like I just threw every buzzword in the book into that sentence, you can go ahead and say, bingo, we are here, and we're about to dissect what all that means in the next 30 minutes. So, dude, first of all, I got to just let everyone know who is here, that you are a bit of a hard hitter.

Demetrios: You've got some track record under some notches on your belt. We could say before you created Tensorlake, let's just let people know that you were at Hashicorp, you created the Nomad cluster scheduler, and you were the inventor of the Titus cluster scheduler at Netflix. You led the FBLearner machine learning platform and built the real-time speech inference engine at Facebook. You may be one of the most decorated people we've had on and that I have had the pleasure of talking to, and that's saying a lot. I've talked to a lot of people in my day, so I want to dig in, man. First question I've got for you, it's a big one. What the hell do you mean by AI agent driven workflows? Are you talking about autonomous agents? Are you talking, like, the voice agents? What's that?

Diptanu Gon Choudhury: Yeah, I was going to say that what a great last couple of years has been for AI. I mean, in-context learning has kind of, like, changed the way people do models and access models and use models in production, like at Facebook. In 2017, when I started doing machine learning, it would take us six months to ship a good model in production. And here we are today, in January 2024, new models are coming out every week, and people are putting them in production. It's a little bit of a YOLO where I feel like people have stopped measuring how well models are doing and just ship in production, but here we are.
But I think underpinning all of this is kind of like this whole idea that models are capable of reasoning over data and non-parametric knowledge to a certain extent. And what we are seeing now is workflows stop being completely heuristics-driven, or as people say, software 1.0 driven. And people are putting models in the picture where models are reacting to data that a workflow is seeing, and then people are using the model's behavior on the data and kind of like making the model decide what should the workflow do. And I think that's pretty much, to me, what an agent is: an agent responds to information of the world and information which is external and kind of reacts to the information and kind of orchestrates some kind of business process or some kind of workflow, some kind of decision making in a workflow.

Diptanu Gon Choudhury: That's what I mean by agents. And they can be like autonomous. They can be something that writes an email or writes a chat message or something like that. The spectrum is wide here.

Demetrios: Excellent. So next question, logical question is, and I will second what you're saying. Like the advances that we've seen in the last year, wow. And the times are a-changing; we are trying to evaluate while in production. And I like the term, yeah, we just yoloed it, or as the young kids say now, or so I've heard, because I'm not one of them, but we just do it for the plot. So we are getting those models out there, we're seeing if they work. And I imagine you saw some funny quotes from the Chevrolet chatbot; it was a chatbot on the Chevrolet support page, and it was asked if Teslas are better than Chevys. And it said, yeah, Teslas are better than Chevys.

Demetrios: So yes, that's what we do these days. This is 2024, baby. We just put it out there and test in prod. Anyway, getting back on topic, let's talk about Indexify, because there was a whole lot of jargon in what I said about what you do. Give me the straight-shooting answer. Break it down for me like I was five. Yeah.

Diptanu Gon Choudhury: So if you are building an agent today, which depends on augmented generation, like retrieval augmented generation, and given that this is Qdrant's show, I'm assuming people are very much familiar with RAG and augmented generation. So if people are building applications where the data is external or non-parametric, and the model needs to see updated information all the time, because let's say the documents under the hood that the application is using for its knowledge base are changing, or someone is building a chat application where new chat messages are coming all the time, and the agent or the model needs to know about what is happening, then you need like an index, or a set of indexes, which are continuously updated. And also, over a period of time, you want to extract new information out of existing data, because models are getting better continuously. And the other thing is, AI, until now, or until a couple of years back, used to be very domain oriented or task oriented, where modality was the key behind models. Now we are entering into a world where information encoded in any form, documents, videos or whatever, is important to these workflows that people are building or these agents that people are building. And so you need capability to ingest any kind of data and then build indexes out of them. And indexes, in my opinion, are not just embedding indexes, they could be indexes of semi-structured data. So let's say you have an invoice.
Diptanu Gon Choudhury: You want to maybe transform that invoice into semi-structured data of where the invoice is coming from or what are the line items and so on. So in a nutshell, you need good data infrastructure to store these indexes and serve these indexes. And also you need a scalable compute engine so that whenever new data comes in, you're able to index it appropriately and update the indexes and so on. And also you need capability to experiment, to add new extractors into your platform, add new models into your platform, and so on. Indexify helps you with all that, right? So imagine Indexify to be an online service with an API so that developers can upload any form of unstructured data, and then a bunch of extractors run in parallel on the cluster and extract information out of this unstructured data, and then update indexes on something like Qdrant, or Postgres for semi-structured data, continuously.

Demetrios: Okay?

Diptanu Gon Choudhury: And you basically get that in a single application, in a single binary, which is distributed on your cluster. You wouldn't have any external dependencies other than storage systems, essentially, to have a very scalable data infrastructure for your RAG applications or for your LLM agents.

Demetrios: Excellent. So then talk to me about the inspiration for creating this. What was it that you saw that gave you that spark of, you know what? There needs to be something on the market that can handle this. Yeah.

Diptanu Gon Choudhury: Earlier this year I was working with the founder of a generative AI startup here. I was looking at what they were doing, I was helping them out, and I saw that. And then I looked around, I looked around at what is happening. Not earlier this year as in 2023. Somewhere in early 2023, I was looking at how developers are building applications with LLMs, and we are in the golden age of demos. Golden age of demos with LLMs. Almost anyone, I think, with some programming knowledge can kind of like write a demo with an OpenAI API or with an embedding model and so on. And I mostly saw that the data infrastructure part of those demos or those applications was very basic. People would do like one-shot transformation of data, build indexes and then do stuff, build an application on top.

Diptanu Gon Choudhury: And then I started talking to early adopters of RAG and LLMs in enterprises, and I started talking to them about how they're building their data pipelines and their data infrastructure for LLMs. And I feel like people were mostly excited about the application layer, right? Very little thought was being put on the data infrastructure, and it was almost like built out of duct tape, right, of pipelines and workflows, like RabbitMQ, like X, Y and Z, very bespoke pipelines, which are good at one-shot transformation of data. So you put in some documents on a queue, and then somehow the documents get embedded and put into something like Qdrant. But there was no thought about how do you re-index? How do you add a new capability into your pipeline? Or how do you keep the whole system online, right? Keep the indexes online while reindexing and so on. And so classically, if you talk to a distributed systems engineer, they would be, you know, this is a MapReduce problem, right? So there are tools like Spark, there are tools like Anyscale's Ray, and they would classically solve these problems, right?
And if you go to Facebook, we use Spark for something like this, or like Presto, or we have a ton of big data infrastructure for handling things like this. And I thought that in 2023 we need a better abstraction for doing something like this. The world is moving towards serverless, right? Developers understand functions. Developers think about compute as functions, and functions which are distributed on the cluster and can transform content into something that LLMs can consume.

Diptanu Gon Choudhury: And that was the inspiration. I was thinking, what would it look like if we redid Spark or Ray for generative AI in 2023? How can we make it so easy so that developers can write functions to extract content out of any form of unstructured data, right? You don't need to think about text, audio, video, or whatever. You write a function which can kind of handle a particular data type and then extract something out of it. And now how can we scale it? How can we give developers, very transparently, all the abilities to manage indexes and serve indexes in production? And so that was the inspiration for it. I wanted to reimagine MapReduce for generative AI.

Demetrios: Wow. I like the vision. You sent me over some ideas of different use cases that we can walk through, and I'd love to go through that and put it into actual tangible things that you've been seeing out there, and how you can plug it in to these different use cases. I think the first one that I wanted to look at was building a copilot for call center agents and what that actually looks like in practice. Yeah.

Diptanu Gon Choudhury: So I took that example because that was super close to my heart, in the sense that we have a design partner who is doing this. And you'll see that in a call center, the information that comes into a call center, or the information that an agent, a human being in a call center, works with, is very rich. In a call center you have phone calls coming in, you have chat messages coming in, you have emails going on, and then there are also documents which are knowledge bases for human beings to answer questions or make decisions on. Right. And so they're working with a lot of data and then they're always pulling up a lot of information. And so one of our design partners is building a copilot for call centers essentially. And what they're doing is they want the humans in a call center to answer questions really easily based on the context of a conversation or a call that is happening with one of their users, or pull up up-to-date information about the policies of the company and so on. And so the way they are using Indexify is that they ingest all the content, like the raw content that is coming in (video, not video actually, like audio, emails, chat messages), into Indexify.

Diptanu Gon Choudhury: And then they have a bunch of extractors which handle different types of modalities, right? Some extractors extract information out of emails. Like they would do email classification, they would do embedding of emails, they would do entity extraction from emails. And so they are creating many different types of indexes from emails. Same with speech. Right? Like data that is coming in through calls. They would transcribe it first using an ASR extractor, and from there on the speech would be embedded and the whole pipeline for text would be invoked on it, and then the speech would be searchable. If someone wants to find out what conversation has happened, they would be able to look up things.
There is a summarizer extractor, which is like looking at a phone call and then summarizing what the customer had called about and so on.

Diptanu Gon Choudhury: So they are basically building a near real-time knowledge base of what is happening with the customer. And also they are pulling in information from their documents. So that's like one classic use case. Now the only dependency they have is essentially like a blob storage system and serving infrastructure for indexes, like in this case, Qdrant and Postgres. And they have a bunch of extractors that they have written in-house and some extractors that we have written, they're using them out of the box, and they can scale the system to as much as they need. And it's kind of like giving them a high-level abstraction for building indexes and using them with LLMs.

Demetrios: So I really like this idea of how you have the unstructured and you have the semi-structured and how those play together almost. And I think one thing that is very clear is how you've got the transcripts, you've got the embeddings that you're doing, but then you've also got documents that are very structured, and maybe it's from the last call and it's like in some kind of a database. And I imagine we could say, whatever, Salesforce; it's in a Salesforce and you've got it all there. And so there is some structure to that data. And now you want to be able to plug into all of that and you want to be able to, especially in this use case, the call center agents, human agents need to make decisions and they need to make decisions fast. Right. So the real-time aspect really plays a part of that.

Diptanu Gon Choudhury: Exactly.

Demetrios: You can't have it be something that it'll get back to you in 30 seconds, or maybe 30 seconds is okay, but really the less time the better. And so traditionally when I think about using LLMs, I kind of take real time off the table. Have you had luck with making it more real time? Yeah.

Diptanu Gon Choudhury: So there are two aspects of it. How quickly can your indexes be updated? As of last night, we can index all of Wikipedia under five minutes on AWS. We can run up to like 5000 extractors with Indexify concurrently and in parallel. I feel like we got the indexing part covered. Unless obviously you are using a model behind an API where we don't have any control. But assuming you're using some kind of embedding model or some kind of extractor model, right, like a named entity extractor or a speech-to-text model that you control and you understand the IOPS, we can scale it out and our system can kind of handle the scale of getting it indexed really quickly. Now on the generation side, that's where it's a little bit more nuanced, right? Generation depends on how big the generation model is. If you're using GPT-4, then obviously you would be playing with the latency budgets that OpenAI provides.

Diptanu Gon Choudhury: If you're using some other form of models, like a mixture-of-experts (MoE) model or something which is very optimized and you have worked on making the model optimized, then obviously you can cut it down. So it depends on the end-to-end stack. It's not like a single piece of software. It's not like a monolithic piece of software. So it depends on a lot of different factors. But I can confidently claim that we have gotten the indexing side of real-time aspects covered as long as the models people are using are reasonable and they have enough compute in their cluster.

Demetrios: Yeah. Okay.
Now talking again about the idea of rethinking the developer experience with this and almost reimagining what Spark would be if it were created today.

Diptanu Gon Choudhury: Exactly.

Demetrios: How do you think that there are manifestations in what you've built that play off of things that could only happen because you created it today as opposed to even two years ago?

Diptanu Gon Choudhury: Yeah. So I think, for example, take Spark, right? Spark was born out of big data, like the 2011-2012 era of big data. In fact, I was one of the committers on Apache Mesos, the cluster scheduler that Spark used for a long time. And then when I was at Hashicorp, we tried to contribute support for Nomad in Spark. What I'm trying to say is that Spark is a task scheduler at the end of the day and it uses an underlying scheduler. So the teams that manage Spark today or any other similar tools, they have like ten or 15 people, or they're using like a hosted solution, which is super complex to manage. Right. A Spark cluster is not easy to manage.

Diptanu Gon Choudhury: I'm not saying it's a bad thing or whatever. Software written at any given point in time reflects the world in which it was born. And so obviously it's from that era of systems engineering and so on. And since then, systems engineering has progressed quite a lot. I feel like we have learned how to make software which is scalable, but yet simpler to understand and to operate and so on. And the other big thing in Spark, or Anyscale's Ray, that I feel is missing is that they are not natively integrated into the data stack. Right. They don't have an opinion on what the data stack is.

Diptanu Gon Choudhury: They're like excellent MapReduce systems, and then the data stuff is layered on top. And to a certain extent that has allowed them to generalize to so many different use cases. People use Spark for everything. At Facebook, I was using Spark for batch transcoding of speech to text, for various use cases, with a lot of issues under the hood. Right? So they are tied to the big data storage infrastructure. So when I am reimagining Spark, I can almost take the position that we are going to use blob storage for ingestion and writing raw data, and we will have low-latency serving infrastructure in the form of something like Postgres, or something like ClickHouse, for serving structured data or semi-structured data. And then we have something like Qdrant, which is very geared towards doing vector search and so on. And so we understand the shape of the storage system now.

Diptanu Gon Choudhury: We understand that developers want to integrate with them. So now we can control the compute layer such that the compute layer is optimized for doing the compute and producing data such that it can be written into those data stores, right? So we understand the IOPS, right? The I/O, what is it called, the I/O characteristics of the underlying storage system really well. And we understand that the use case is that people want to consume that data in LLMs, right? So we can make design decisions around how we write into the storage system, how we serve, very specifically for LLMs, decisions that I feel a developer would otherwise be making themselves if they were using some other tool.
You just want to do one thing and you want it to be really easy to be able to do that one thing. I had a friend who worked at some enterprise and he was talking about how Spark engineers have all the job security in the world, because, A, like you said, you need a lot of them, and, B, it's hard stuff: being able to work on that and getting really deep and knowing the ins and outs of it. So I can feel where you're coming from on that one.

Diptanu Gon Choudhury: Yeah, I mean, we basically integrated the compute engine with the storage so developers don't have to think about it. Plug in whatever storage you want. We support, obviously, all the blob stores, and we support Qdrant and Postgres right now; Indexify in the future can even have other storage engines. And now all an application developer needs to do is deploy this on AWS or GCP or whatever, right? Have enough compute, point it to the storage systems, and then build your application. You don't need to make any of the hard decisions or build a distributed system by bringing together like five different tools and spend like five months building the data layer. Focus on the application, build your agents.

Demetrios: So there is something else. As we are winding down, I want to ask you one last thing, and if anyone has any questions, feel free to throw them in the chat. I am monitoring that also, but I am wondering about advice that you have for people that are building RAG-based applications, because I feel like you've probably seen quite a few out there in the wild. And so what are some optimizations or some nice hacks that you've seen that have worked really well? Yeah.

Diptanu Gon Choudhury: So I think, first of all, there is a recent paper, like a RAG survey paper. I really like it. Maybe you can have the link in the show notes if you have one. There was a recent survey paper, I really liked it, and it covers a lot of tips and tricks that people can use with RAG. But essentially, RAG is an information... RAG is like a two-step process in its essence. One is the document selection process and the document reading process. Document selection is how do you retrieve the most important information out of the millions of documents that might be there, and then the reading process is how do you jam them into the context of a model, so that the model can kind of ground its generation based on the context.

Diptanu Gon Choudhury: So I think the most tricky part here, and the part which has the most tips and tricks, is the document selection part. And that is like a classic information retrieval problem. So I would suggest people do a lot of experimentation around ranking algorithms, hitting different types of indexes, and refining the results by merging results from different indexes. One thing that always works for me is reducing the search space of the documents that I am selecting in a very systematic manner. So like using some kind of hybrid search where someone does the embedding lookup first, and then does the keyword lookup, or vice versa, or does lookups in parallel and then merges results together. Those kinds of things where the search space is narrowed down always work for me.

Demetrios: So I think one of the Qdrant team members would love to know, because I've been talking to them quite frequently about this, about evaluating retrieval. Have you found any tricks or tips around that and evaluating the quality of what is retrieved?
Diptanu Gon Choudhury: So I haven't come across a golden, one-trick-fits-every-use-case type of solution for evaluation. Evaluation is really hard. There are open source projects like Ragas that are trying to solve it, and everyone is trying to solve various aspects of evaluating RAG. Some of them try to evaluate how accurate the results are, some people are trying to evaluate how diverse the answers are, and so on. I think the most important thing that our design partners care about is factual accuracy, and for factual accuracy, one process that has worked really well is having a critique model. So let the generation model generate some data and then have a critique model go and try to find citations and look up how accurate the data is, how accurate the generation is, and then feed that back into the system. Another thing, going back to the previous point about what tricks someone can use for doing RAG really well: I feel like people don't fine-tune embedding models that much.

Diptanu Gon Choudhury: I think if people are using an embedding model, like a sentence transformer or anything off the shelf, they should look into fine-tuning the embedding models on the dataset that they are embedding. And I think a combination of fine-tuning the embedding models and doing some factual accuracy checks goes a long way in getting RAG working really well.

Demetrios: Yeah, it's an interesting one. And I'll probably leave it here on the extra model that is basically checking factual accuracy. You've always got these trade-offs that you're playing with, right? And one of the trade-offs is going to be, maybe you're making another LLM call, which could be more costly, but you're gaining trust, or you're gaining confidence that what it's outputting is actually what it says it is, and it's actually factually correct, as you said. So it's like, what price can you put on trust? And we're going back to that whole thing that I saw on Chevy's website where they were saying that a Tesla is better. It's like that hopefully doesn't happen anymore as people deploy this stuff and they recognize that humans are cunning when it comes to playing around with chatbots. So this has been fascinating, man. I appreciate you coming on here and chatting with me about it.

Demetrios: I encourage everyone to go and either reach out to you on LinkedIn, I know you are on there, and we'll leave a link to your LinkedIn in the chat too. And if not, check out Tensorlake, check out Indexify, and we will be in touch. Man, this was great.

Diptanu Gon Choudhury: Yeah, same. It was really great chatting with you about this, Demetrios, and thanks for having me today.

Demetrios: Cheers. I'll talk to you later.
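As a companion to the document-selection tips above, here is a small, illustrative Python sketch of the hybrid approach Diptanu mentions: run a dense (embedding) lookup and a keyword lookup separately, then merge the two ranked lists with reciprocal rank fusion. The retrievers and document IDs are placeholders; only the merge logic is shown.

```python
from collections import defaultdict

def reciprocal_rank_fusion(result_lists, k=60):
    """Merge several ranked lists of document IDs into a single ranking.

    Each input list is ordered best-first; k dampens the influence of any
    single list (60 is a commonly used default).
    """
    scores = defaultdict(float)
    for results in result_lists:
        for rank, doc_id in enumerate(results):
            scores[doc_id] += 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# Placeholder results: best-first document IDs from two different retrievers.
dense_hits = ["doc_7", "doc_2", "doc_9"]    # e.g. a vector similarity search in Qdrant
keyword_hits = ["doc_2", "doc_5", "doc_7"]  # e.g. a BM25 / full-text index lookup

fused = reciprocal_rank_fusion([dense_hits, keyword_hits])
print(fused)  # documents ranked highly by both retrievers come first
```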
blog/indexify-unveiled-diptanu-gon-choudhury-vector-space-talk-009.md
--- draft: false title: "Qdrant Hybrid Cloud and DigitalOcean for Scalable and Secure AI Solutions" short_description: "Enabling developers to deploy a managed vector database in their DigitalOcean Environment." description: "Enabling developers to deploy a managed vector database in their DigitalOcean Environment." preview_image: /blog/hybrid-cloud-digitalocean/hybrid-cloud-digitalocean.png date: 2024-04-11T00:02:00Z author: Qdrant featured: false weight: 1010 tags: - Qdrant - Vector Database --- Developers are constantly seeking new ways to enhance their AI applications with new customer experiences. At the core of this are vector databases, as they enable the efficient handling of complex, unstructured data, making it possible to power applications with semantic search, personalized recommendation systems, and intelligent Q&A platforms. However, when deploying such new AI applications, especially those handling sensitive or personal user data, privacy becomes important. [DigitalOcean](https://www.digitalocean.com/) and Qdrant are actively addressing this with an integration that lets developers deploy a managed vector database in their existing DigitalOcean environments. With the recent launch of [Qdrant Hybrid Cloud](/hybrid-cloud/), developers can seamlessly deploy Qdrant on DigitalOcean Kubernetes (DOKS) clusters, making it easier for developers to handle vector databases without getting bogged down in the complexity of managing the underlying infrastructure. #### Unlocking the Power of Generative AI with Qdrant and DigitalOcean User data is a critical asset for a business, and user privacy should always be a top priority. This is why businesses require tools that enable them to leverage their user data as a valuable asset while respecting privacy. Qdrant Hybrid Cloud on DigitalOcean brings these capabilities directly into developers' hands, enhancing deployment flexibility and ensuring greater control over data. > *“Qdrant, with its seamless integration and robust performance, equips businesses to develop cutting-edge applications that truly resonate with their users. Through applications such as semantic search, Q&A systems, recommendation engines, image search, and RAG, DigitalOcean customers can leverage their data to the fullest, ensuring privacy and driving innovation.“* - Bikram Gupta, Lead Product Manager, Kubernetes & App Platform, DigitalOcean. #### Get Started with Qdrant on DigitalOcean DigitalOcean customers can easily deploy Qdrant on their DigitalOcean Kubernetes (DOKS) clusters through a simple Kubernetis-native “one-line” installment. This simplicity allows businesses to start small and scale efficiently. - **Simple Deployment**: Leveraging Kubernetes, deploying Qdrant Hybrid Cloud on DigitalOcean is streamlined, making the management of vector search workloads in the own environment more efficient. - **Own Infrastructure**: Hosting the vector database on your DigitalOcean infrastructure offers flexibility and allows you to manage the entire AI stack in one place. - **Data Control**: Deploying within the own DigitalOcean environment ensures data control, keeping sensitive information within its security perimeter. To get Qdrant Hybrid Cloud setup on DigitalOcean, just follow these steps: - **Hybrid Cloud Setup**: Begin by logging into your [Qdrant Cloud account](https://cloud.qdrant.io/login) and activate **Hybrid Cloud** feature in the sidebar. 
- **Cluster Configuration**: From Hybrid Cloud settings, integrate your DigitalOcean Kubernetes clusters as a Hybrid Cloud Environment. - **Simplified Deployment**: Use the Qdrant Management Console to effortlessly establish and oversee your Qdrant clusters on DigitalOcean. #### Chat with PDF Documents with Qdrant Hybrid Cloud on DigitalOcean ![hybrid-cloud-llamaindex-tutorial](/blog/hybrid-cloud-llamaindex/hybrid-cloud-llamaindex-tutorial.png) We created a tutorial that guides you through setting up and leveraging Qdrant Hybrid Cloud on DigitalOcean for a RAG application. It highlights practical steps to integrate vector search with Jina AI's LLMs, optimizing the generation of high-quality, relevant AI content, while ensuring data sovereignty is maintained throughout. This specific system is tied together via the LlamaIndex framework. [Try the Tutorial](/documentation/tutorials/hybrid-search-llamaindex-jinaai/) For a comprehensive guide, our documentation provides detailed instructions on setting up Qdrant on DigitalOcean. [Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/) #### Ready to Get Started? Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/).
blog/hybrid-cloud-digitalocean.md
--- draft: false title: Optimizing an Open Source Vector Database with Andrey Vasnetsov slug: open-source-vector-search-engine-vector-database short_description: CTO of Qdrant Andrey talks about Vector search engines and the technical facets and challenges encountered in developing an open-source vector database. description: Learn key strategies for optimizing vector search from Andrey Vasnetsov, CTO at Qdrant. Dive into techniques like efficient indexing for improved performance. preview_image: /blog/from_cms/andrey-vasnetsov-cropped.png date: 2024-01-10T16:04:57.804Z author: Demetrios Brinkmann featured: false tags: - Qdrant - Vector Search Engine - Vector Database --- # Optimizing Open Source Vector Search: Strategies from Andrey Vasnetsov at Qdrant > *"For systems like Qdrant, scalability and performance in my opinion, is much more important than transactional consistency, so it should be treated as a search engine rather than database."*\ -- Andrey Vasnetsov > Discussing core differences between search engines and databases, Andrey underlined the importance of application needs and scalability in database selection for vector search tasks. Andrey Vasnetsov, CTO at Qdrant is an enthusiast of [Open Source](https://qdrant.tech/), machine learning, and vector search. He works on Open Source projects related to [Vector Similarity Search](https://qdrant.tech/articles/vector-similarity-beyond-search/) and Similarity Learning. He prefers practical over theoretical, working demo over arXiv paper. ***You can watch this episode on [YouTube](https://www.youtube.com/watch?v=bU38Ovdh3NY).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/bU38Ovdh3NY?si=GiRluTu_c-4jESMj" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> ***This episode is part of the [ML⇄DB Seminar Series](https://db.cs.cmu.edu/seminar2023/#) (Machine Learning for Databases + Databases for Machine Learning) of the Carnegie Mellon University Database Research Group.*** ## **Top Takeaways:** Dive into the intricacies of [vector databases](https://qdrant.tech/articles/what-is-a-vector-database/) with Andrey as he unpacks Qdrant's approach to combining filtering and vector search, revealing how in-place filtering during graph traversal optimizes precision without sacrificing search exactness, even when scaling to billions of vectors. 5 key insights you’ll learn: - 🧠 **The Strategy of Subgraphs:** Dive into how overlapping intervals and geo hash regions can enhance the precision and connectivity within vector search indices. - 🛠️ **Engine vs Database:** Discover the differences between search engines and relational databases and why considering your application's needs is crucial for scalability. - 🌐 **Combining Searches with Relational Data:** Get insights on integrating relational and vector search for improved efficiency and performance. - 🚅 **Speed and Precision Tactics:** Uncover the techniques for controlling search precision and speed by tweaking the beam size in HNSW indices. - 🔗 **Connected Graph Challenges:** Learn about navigating the difficulties of maintaining a connected graph while filtering during search operations. > Fun Fact: [The Qdrant system](https://qdrant.tech/) is capable of in-place filtering during graph traversal, which is a novel approach compared to traditional post-filtering methods, ensuring the correct quantity of results that meet the filtering conditions. 
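To make the filtering discussion concrete, here is a minimal sketch with the Qdrant Python client of a search where the payload filter is applied as part of the vector query itself rather than as a separate post-processing step. The collection name, payload field, and query vector are placeholders.

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

hits = client.search(
    collection_name="my_collection",        # placeholder collection
    query_vector=[0.2, 0.1, 0.9, 0.7],      # placeholder query embedding
    query_filter=models.Filter(             # condition checked during graph traversal,
        must=[                              # not by post-filtering the result list
            models.FieldCondition(
                key="city",
                match=models.MatchValue(value="Berlin"),
            )
        ]
    ),
    search_params=models.SearchParams(hnsw_ef=128),  # larger beam = higher precision, more CPU
    limit=5,
)
```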
> ## Timestamps:

00:00 Search professional with expertise in vectors and engines.\
09:59 Elasticsearch: scalable, weak consistency, prefer vector search.\
12:53 Optimize data structures for faster processing efficiency.\
21:41 Vector indexes require special treatment, like HNSW's proximity graph and greedy search.\
23:16 HNSW index: approximate, precision control, CPU intensive.\
30:06 Post-filtering inefficient, prefiltering costly.\
34:01 Metadata-based filters; creating additional connecting links.\
41:41 Vector dimension impacts comparison speed, indexing complexity high.\
46:53 Overlapping intervals and subgraphs for precision.\
53:18 Postgres limits scalability, additional indexing engines provide faster queries.\
59:55 Embedding models for time series data explained.\
01:02:01 Cheaper system for serving billion vectors.

## More Quotes from Andrey:

*"It allows us to compress a vector to a level where a single dimension is represented by just a single bit, which gives a total of 32 times compression for the vector."*\
-- Andrey Vasnetsov on vector compression in AI

*"We build overlapping intervals and we build these subgraphs with additional links for those intervals. And also we can do the same with, let's say, location data where we have geocoordinates, so latitude, longitude, we encode it into geo hashes and basically build this additional graph for overlapping geo hash regions."*\
-- Andrey Vasnetsov

*"We can further compress data using such techniques as delta encoding, as variable byte encoding, and so on. And this total effect, total combined effect of this optimization can make immutable data structures an order of magnitude more efficient than mutable ones."*\
-- Andrey Vasnetsov
blog/open-source-vector-search-engine-and-vector-database.md
--- draft: false title: "Integrating Qdrant and LangChain for Advanced Vector Similarity Search" short_description: Discover how Qdrant and LangChain can be integrated to enhance AI applications. description: Discover how Qdrant and LangChain can be integrated to enhance AI applications with advanced vector similarity search technology. preview_image: /blog/using-qdrant-and-langchain/qdrant-langchain.png date: 2024-03-12T09:00:00Z author: David Myriel featured: true tags: - Qdrant - LangChain - LangChain integration - Vector similarity search - AI LLM (large language models) - LangChain agents - Large Language Models --- > *"Building AI applications doesn't have to be complicated. You can leverage pre-trained models and support complex pipelines with a few lines of code. LangChain provides a unified interface, so that you can avoid writing boilerplate code and focus on the value you want to bring."* Kacper Lukawski, Developer Advocate, Qdrant ## Long-Term Memory for Your GenAI App Qdrant's vector database quickly grew due to its ability to make Generative AI more effective. On its own, an LLM can be used to build a process-altering invention. With Qdrant, you can turn this invention into a production-level app that brings real business value. The use of vector search in GenAI now has a name: **Retrieval Augmented Generation (RAG)**. [In our previous article](/articles/rag-is-dead/), we argued why RAG is an essential component of AI setups, and why large-scale AI can't operate without it. Numerous case studies explain that AI applications are simply too costly and resource-intensive to run using only LLMs. > Going forward, the solution is to leverage composite systems that use models and vector databases. **What is RAG?** Essentially, a RAG setup turns Qdrant into long-term memory storage for LLMs. As a vector database, Qdrant manages the efficient storage and retrieval of user data. Adding relevant context to LLMs can vastly improve user experience, leading to better retrieval accuracy, faster query speed and lower use of compute. Augmenting your AI application with vector search reduces hallucinations, a situation where AI models produce legitimate-sounding but made-up responses. Qdrant streamlines this process of retrieval augmentation, making it faster, easier to scale and efficient. When you are accessing vast amounts of data (hundreds or thousands of documents), vector search helps your sort through relevant context. **This makes RAG a primary candidate for enterprise-scale use cases.** ## Why LangChain? Retrieval Augmented Generation is not without its challenges and limitations. One of the main setbacks for app developers is managing the entire setup. The integration of a retriever and a generator into a single model can lead to a raised level of complexity, thus increasing the computational resources required. [LangChain](https://www.langchain.com/) is a framework that makes developing RAG-based applications much easier. It unifies interfaces to different libraries, including major embedding providers like OpenAI or Cohere and vector stores like Qdrant. With LangChain, you can focus on creating tangible GenAI applications instead of writing your logic from the ground up. > Qdrant is one of the **top supported vector stores** on LangChain, with [extensive documentation](https://python.langchain.com/docs/integrations/vectorstores/qdrant) and [examples](https://python.langchain.com/docs/integrations/retrievers/self_query/qdrant_self_query). 
**How it Works:** LangChain receives a query and retrieves the query vector from an embedding model. Then, it dispatches the vector to a vector database, retrieving relevant documents. Finally, both the query and the retrieved documents are sent to the large language model to generate an answer. ![qdrant-langchain-rag](/blog/using-qdrant-and-langchain/flow-diagram.png) When supported by LangChain, Qdrant can help you set up effective question-answer systems, detection systems and chatbots that leverage RAG to its full potential. When it comes to long-term memory storage, developers can use LangChain to easily add relevant documents, chat history memory & rich user data to LLM app prompts via Qdrant. ## Common Use Cases Integrating Qdrant and LangChain can revolutionize your AI applications. Let's take a look at what this integration can do for you: *Enhance Natural Language Processing (NLP):* LangChain is great for developing question-answering **chatbots**, where Qdrant is used to contextualize and retrieve results for the LLM. We cover this in [our article](/articles/langchain-integration/), and in OpenAI's [cookbook examples](https://cookbook.openai.com/examples/vector_databases/qdrant/qa_with_langchain_qdrant_and_openai) that use LangChain and GPT to process natural language. *Improve Recommendation Systems:* Food delivery services thrive on indecisive customers. Businesses need to accomodate a multi-aim search process, where customers seek recommendations though semantic search. With LangChain you can build systems for **e-commerce, content sharing, or even dating apps**. *Advance Data Analysis and Insights:* Sometimes you just want to browse results that are not necessarily closest, but still relevant. Semantic search helps user discover products in **online stores**. Customers don't exactly know what they are looking for, but require constrained space in which a search is performed. *Offer Content Similarity Analysis:* Ever been stuck seeing the same recommendations on your **local news portal**? You may be held in a similarity bubble! As inputs get more complex, diversity becomes scarce, and it becomes harder to force the system to show something different. LangChain developers can use semantic search to develop further context. ## Building a Chatbot with LangChain _Now that you know how Qdrant and LangChain work together - it's time to build something!_ Follow Daniel Romero's video and create a RAG Chatbot completely from scratch. You will only use OpenAI, Qdrant and LangChain. Here is what this basic tutorial will teach you: **1. How to set up a chatbot using Qdrant and LangChain:** You will use LangChain to create a RAG pipeline that retrieves information from a dataset and generates output. This will demonstrate the difference between using an LLM by itself and leveraging a vector database like Qdrant for memory retrieval. **2. Preprocess and format data for use by the chatbot:** First, you will download a sample dataset based on some academic journals. Then, you will process this data into embeddings and store it as vectors inside of Qdrant. **3. Implement vector similarity search algorithms:** Second, you will create and test a chatbot that only uses the LLM. Then, you will enable the memory component offered by Qdrant. This will allow your chatbot to be modified and updated, giving it long-term memory. **4. Optimize the chatbot's performance:** In the last step, you will query the chatbot in two ways. 
The first query will retrieve parametric data from the LLM, while the second will get contextual data via Qdrant. The goal of this exercise is to show that RAG is simple to implement via LangChain and yields much better results than using an LLM on its own.

<iframe width="560" height="315" src="https://www.youtube.com/embed/O60-KuZZeQA?si=jkDsyJ52qA4ivXUy" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

## Scaling Qdrant and LangChain

If you are looking to scale up and keep the same level of performance, Qdrant and LangChain are a rock-solid combination. Getting started with both is a breeze, and the [documentation](https://python.langchain.com/docs/integrations/vectorstores/qdrant) covers a broad range of cases. However, the main strength of Qdrant is that it can consistently support the user well past the prototyping and launch phases.

> *"We are all-in on performance and reliability. Every release we make Qdrant faster, more stable and cost-effective for the user. When others focus on prototyping, we are already ready for production. Very soon, our users will build successful products and go to market. At this point, I anticipate a great need for a reliable vector store. Qdrant will be there for LangChain and the entire community."*

Whether you are building a bank fraud-detection system, RAG for e-commerce, or services for the federal government - you will need to leverage a scalable architecture for your product. Qdrant offers different features to help you considerably increase your application's performance and lower your hosting costs.

> Read more about how we foster [best practices for large-scale deployments](/articles/multitenancy/).

## Next Steps

Now that you know how Qdrant and LangChain can elevate your setup - it's time to try us out.

- Qdrant is open source: you can [quickstart locally](/documentation/quick-start/), [install it via Docker](/documentation/quick-start/), or [deploy it to Kubernetes](https://github.com/qdrant/qdrant-helm/).
- We also offer [a free tier of Qdrant Cloud](https://cloud.qdrant.io/) for prototyping and testing.
- For the best integration with LangChain, read the [official LangChain documentation](https://python.langchain.com/docs/integrations/vectorstores/qdrant/).
- For all other cases, the [Qdrant documentation](/documentation/integrations/langchain/) is the best place to start.

> We offer additional support tailored to your business needs. [Contact us](https://qdrant.to/contact-us) to learn more about implementation strategies and integrations that suit your company.
blog/using-qdrant-and-langchain.md
---
draft: false
title: Qdrant supports ARM architecture!
slug: qdrant-supports-arm-architecture
short_description: Qdrant announces ARM architecture support, expanding accessibility and performance for their advanced data indexing technology.
description: Qdrant's support for ARM architecture marks a pivotal step in enhancing accessibility and performance. This development optimizes data indexing and retrieval.
preview_image: /blog/from_cms/docker-preview.png
date: 2022-09-21T09:49:53.352Z
author: Kacper Łukawski
featured: false
tags:
- Vector Search
- Vector Search Engine
- Embedding
- Neural Networks
- Database
---

The processor architecture is something the end user typically does not care much about, as long as all the applications they use run smoothly. If you use a PC, chances are it is an x86-based device, while your smartphone most likely runs on an ARM processor. In 2020 Apple introduced their ARM-based M1 chip, which is used in modern Mac devices, including notebooks. The main differences between the two architectures are the set of supported instructions and energy consumption. ARM processors have much better energy efficiency and are cheaper than their x86 counterparts. That's why they have become available as an affordable alternative at hosting providers, including the cloud.

![](/blog/from_cms/1_seaglc6jih2qknoshqbf1q.webp "An image generated by Stable Diffusion with the query 'two computer processors fighting against each other'")

In order to make an application available for ARM users, it has to be compiled for that platform. Otherwise, it has to be emulated by the device, which adds overhead and reduces performance. We decided to provide [Docker images](https://hub.docker.com/r/qdrant/qdrant/) targeted especially at ARM users. Of course, using a limited set of processor instructions may impact the performance of your vector search, and that's why we decided to test both architectures using a similar setup.

## Test environments

AWS offers ARM-based EC2 instances that are 20% cheaper than the corresponding x86 alternatives with a similar configuration. That estimate was made for the eu-central-1 region (Frankfurt) and the R6g/R6i instance families. For the purposes of this comparison, we used an r6i.large instance (Intel Xeon) and compared it to an r6g.large instance (AWS Graviton2). Both setups have 2 vCPUs and 16 GB of memory, and they were the smallest comparable instances available.

## The results

For the purposes of this test, we created some random vectors which were compared with cosine distance.

### Vector search

During our experiments, we performed 1000 search operations for both the ARM64 and x86-based setups. We didn't measure the network overhead, only the time measurements returned by the engine in the API response. The chart below shows the distribution of that time, separately for each architecture.

![](/blog/from_cms/1_zvuef4ri6ztqjzbsocqj_w.webp "The latency distribution of search requests: arm vs x86")

It seems that ARM64 might be an interesting alternative if you are on a budget. It is 10% slower on average and 20% slower on the median, but its performance is more consistent: unlike x86, it is unlikely to randomly take twice the average time. That makes ARM64 a cost-effective way of setting up vector search with Qdrant, keeping in mind it's 20% cheaper on AWS. You do get less for less, but surprisingly more than expected.
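If you want to run a similar comparison on your own hardware, a rough sketch of the measurement loop is shown below. It assumes a local Qdrant instance with a collection of random cosine-distance vectors created beforehand; the collection name and vector size are placeholders, and it relies on the engine-reported `time` field of the REST search response, so network overhead stays out of the measurement.

```python
# A rough sketch of the benchmark loop described above, assuming Qdrant is
# running at localhost:6333 and a collection named "benchmark" already holds
# random vectors configured for cosine distance. Names and sizes are placeholders.
import random
import requests

QDRANT_URL = "http://localhost:6333"
COLLECTION = "benchmark"
VECTOR_SIZE = 256
N_REQUESTS = 1000

engine_times = []
for _ in range(N_REQUESTS):
    query_vector = [random.random() for _ in range(VECTOR_SIZE)]
    response = requests.post(
        f"{QDRANT_URL}/collections/{COLLECTION}/points/search",
        json={"vector": query_vector, "limit": 10},
    ).json()
    # Keep only the engine-side processing time reported in the response,
    # so network overhead is excluded from the measurement.
    engine_times.append(response["time"])

engine_times.sort()
print("mean:  ", sum(engine_times) / len(engine_times))
print("median:", engine_times[len(engine_times) // 2])
```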
blog/qdrant-supports-arm-architecture.md
--- draft: false title: Advancements and Challenges in RAG Systems - Syed Asad | Vector Space Talks slug: rag-advancements-challenges short_description: Syed Asad talked about advanced rag systems and multimodal AI projects, discussing challenges, technologies, and model evaluations in the context of their work at Kiwi Tech. description: Syed Asad unfolds the challenges of developing multimodal RAG systems at Kiwi Tech, detailing the balance between accuracy and cost-efficiency, and exploring various tools and approaches like GPT 4 and Mixtral to enhance family tree apps and financial chatbots while navigating the hurdles of data privacy and infrastructure demands. preview_image: /blog/from_cms/syed-asad-cropped.png date: 2024-04-11T22:25:00.000Z author: Demetrios Brinkmann featured: false tags: - Vector Search - Retrieval Augmented Generation - Generative AI - KiwiTech --- > *"The problem with many of the vector databases is that they work fine, they are scalable. This is common. The problem is that they are not easy to use. So that is why I always use Qdrant.”*\ — Syed Asad > Syed Asad is an accomplished AI/ML Professional, specializing in LLM Operations and RAGs. With a focus on Image Processing and Massive Scale Vector Search Operations, he brings a wealth of expertise to the field. His dedication to advancing artificial intelligence and machine learning technologies has been instrumental in driving innovation and solving complex challenges. Syed continues to push the boundaries of AI/ML applications, contributing significantly to the ever-evolving landscape of the industry. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/4Gm4TQsO2PzOGBp5U6Cj2e?si=JrG0kHDpRTeb2gLi5zdi4Q), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/RVb6_CI7ysM?si=8Hm7XSWYTzK6SRj0).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/RVb6_CI7ysM?si=8Hm7XSWYTzK6SRj0" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe> <iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/Advancements-and-Challenges-in-RAG-Systems---Syed-Asad--Vector-Space-Talks-021-e2i112h/a-ab4vnl8" height="102px" width="400px" frameborder="0" scrolling="no"></iframe> ## **Top takeaways:** Prompt engineering is the new frontier in AI. Let’s find out about how critical its role is in controlling AI language models. In this episode, Demetrios and Syed gets to discuss about it. Syed also explores the retrieval augmented generation systems and machine learning technology at Kiwi Tech. This episode showcases the challenges and advancements in AI applications across various industries. Here are the highlights from this episode: 1. **Digital Family Tree:** Learn about the family tree app project that brings the past to life through video interactions with loved ones long gone. 2. **Multimodal Mayhem:** Discover the complexities of creating AI systems that can understand diverse accents and overcome transcription tribulations – all while being cost-effective! 3. **The Perfect Match:** Find out how semantic chunking is revolutionizing job matching in radiology and why getting the context right is non-negotiable. 4. **Quasar's Quantum Leap:** Syed shares the inside scoop on Quasar, a financial chatbot, and the AI magic that makes it tick. 5. 
**The Privacy Paradox:** Delve into the ever-present conflict between powerful AI outcomes and the essential quest to preserve data privacy. > Fun Fact: Syed Asad and his team at Kiwi Tech use a GPU-based approach with GPT 4 for their AI system named Quasar, addressing challenges like temperature control and mitigating hallucinatory responses. > ## Show notes: 00:00 Clients seek engaging multimedia apps over chatbots.\ 06:03 Challenges in multimodal rags: accent, transcription, cost.\ 08:18 AWS credits crucial, but costs skyrocket quickly.\ 10:59 Accurate procedures crucial, Qdrant excels in search.\ 14:46 Embraces AI for monitoring and research.\ 19:47 Seeking insights on ineffective marketing models and solutions.\ 23:40 GPT 4 useful, prompts need tracking tools\ 25:28 Discussing data localization and privacy, favoring Ollama.\ 29:21 Hallucination control and pricing are major concerns.\ 32:47 DeepEval, AI testing, LLM, potential, open source.\ 35:24 Filter for appropriate embedding model based on use case and size. ## More Quotes from Syed: *"Qdrant has the ease of use. I have trained people in my team who specializes with Qdrant, and they were initially using Weaviate and Pinecone.”*\ — Syed Asad *"What's happening nowadays is that the clients or the projects in which I am particularly working on are having more of multimedia or multimodal approach. They want their apps or their LLM apps to be more engaging rather than a mere chatbot.”*\ — Syed Asad *"That is where the accuracy matters the most. And in this case, Qdrant has proved just commendable in giving excellent search results.”*\ — Syed Asad in Advancements in Medical Imaging Search ## Transcript: Demetrios: What is up, good people? How y'all doing? We are back for yet another vector space talks. I'm super excited to be with you today because we're gonna be talking about rags and rag systems. And from the most basic naive rag all the way to the most advanced rag, we've got it covered with our guest of honor, Asad. Where are you at, my man? There he is. What's going on, dude? Syed Asad: Yeah, everything is fine. Demetrios: Excellent, excellent. Well, I know we were talking before we went live, and you are currently in India. It is very late for you, so I appreciate you coming on here and doing this with us. You are also, for those who do not know, a senior engineer for AI and machine learning at Kiwi Tech. Can you break down what Kiwi tech is for us real fast? Syed Asad: Yeah, sure. Absolutely. So Kiwi tech is actually a software development, was actually a software development company focusing on software development, iOS and mobile apps. And right now we are in all focusing more on generative AI, machine learning and computer vision projects. So I am heading the AI part here. So. And we are having loads of projects here with, from basic to advanced rags, from naive to visual rags. So basically I'm doing rag in and out from morning to evening. Demetrios: Yeah, you can't get away from it, huh? Man, that is great. Syed Asad: Everywhere there is rag. Even, even the machine learning part, which was previously done by me, is all now into rags engineered AI. Yeah. Machine learning is just at the background now. Demetrios: Yeah, yeah, yeah. It's funny, I understand the demand for it because people are trying to see where they can get value in their companies with the new generative AI advancements. Syed Asad: Yeah. Demetrios: So I want to talk a lot about advance rags, considering the audience that we have. 
I would love to hear about the visual rags also, because that sounds very exciting. Can we start with the visual rags and what exactly you are doing, what you're working on when it comes to that? Syed Asad: Yeah, absolutely. So initially when I started working, so you all might be aware with the concept of frozen rags, the normal and the basic rag, there is a text retrieval system. You just query your data and all those things. So what is happening nowadays is that the clients or the projects in which I am particularly working on are having more of multimedia or multimodal approach. So that is what is happening. So they want their apps or their LLM apps to be more engaging rather than a mere chatbot. Because. Because if we go on to the natural language or the normal english language, I mean, interacting by means of a video or interacting by means of a photo, like avatar, generation, anything like that. Syed Asad: So that has become more popular or, and is gaining more popularity. And if I talk about, specifically about visual rags. So the projects which I am working on is, say, for example, say, for example, there is a family tree type of app in which. In which you have an account right now. So, so you are recording day videos every day, right? Like whatever you are doing, for example, you are singing a song, you're walking in the park, you are eating anything like that, and you're recording those videos and just uploading them on that app. But what do you want? Like, your future generations can do some sort of query, like what, what was my grandfather like? What was my, my uncle like? Anything my friend like. And it was, it is not straight, restricted to a family. It can be friends also. Syed Asad: Anyway, so. And these are all us based projects, not indian based projects. Okay, so, so you, you go in query and it returns a video about your grandfather who has already died. He has not. You can see him speaking about that particular thing. So it becomes really engaging. So this is something which is called visual rag, which I am working right now on this. Demetrios: I love that use case. So basically it's, I get to be closer to my family that may or may not be here with us right now because the rag can pull writing that they had. It can pull video of other family members talking about it. It can pull videos of when my cousin was born, that type of stuff. Syed Asad: Anything, anything from cousin to family. You can add any numbers of members of your family. You can give access to any number of people who can have after you, after you're not there, like a sort of a nomination or a delegation live up thing. So that is, I mean, actually, it is a very big project, involves multiple transcription models, video transcription models. It also involves actually the databases, and I'm using Qdrant, proud of it. So, in that, so. And Qdrant is working seamlessly in that. So, I mean, at the end there is a vector search, but at the background there is more of more of visual rag, and people want to communicate through videos and photos. Syed Asad: So that is coming into picture more. Demetrios: Well, talk to me about multimodal rag. And I know it's a bit of a hairy situation because if you're trying to do vector search with videos, it can be a little bit more complicated than just vector search with text. Right. So what are some of the unique challenges that you've seen when it comes to multimodal rag? 
Syed Asad: The first challenge dealing with multimodal rags is actually the accent, because it can be varying accent. The problem with the transcription, one of the problems or the challenges which I have faced in this is that lack of proper transcription models, if you are, if you are able to get a proper transcription model, then if that, I want to deploy that model in the cloud, say for example, an AWS cloud. So that AWS cloud is costing heavy on the pockets. So managing infra is one of the part. I mean, I'm talking in a, in a, in a highly scalable production environment. I'm not talking about a research environment in which you can do anything on a collab notebook and just go with that. So whenever it comes to the client part or the delivery part, it becomes more critical. And even there, there were points then that we have to entirely overhaul the entire approach, which was working very fine when we were doing it on the dev environment, like the openais whisper. Syed Asad: We started with that OpenAI's whisper. It worked fine. The transcription was absolutely fantastic. But we couldn't go into the production. Demetrios: Part with that because it was too, the word error rate was too high, or because it was too slow. What made it not allow you to go into production? Syed Asad: It was, the word error rate was also high. It was very slow when it was being deployed on an AWS instance. And the thing is that the costing part, because usually these are startups, or mid startup, if I talk about the business point of view, not the tech point of view. So these companies usually offer these type of services for free, and on the basis of these services they try to raise funding. So they want something which is actually optimized, optimizing their cost as well. So what I personally feel, although AWS is massively scalable, but I don't prefer AWS at all until, unless there are various other options coming out, like salad. I had a call, I had some interactions with Titan machine learning also, but it was also fine. But salad is one of the best as of now. Demetrios: Yeah. Unless you get that free AWS credits from the startup program, it can get very expensive very quickly. And even if you do have the free AWS credits, it still gets very expensive very quickly. So I understand what you're saying is basically it was unusable because of the cost and the inability to figure out, it was more of a product problem if you could figure out how to properly monetize it. But then you had technical problems like word error rate being really high, the speed and latency was just unbearable. I can imagine. So unless somebody makes a query and they're ready to sit around for a few minutes and let that query come back to you, with a video or some documents, whatever it may be. Is that what I'm understanding on this? And again, this is for the family tree use case that you're talking about. Syed Asad: Yes, family tree use case. So what was happening in that, in that case is a video is uploaded, it goes to the admin for an approval actually. So I mean you can, that is where we, they were restricting the costing part as far as the project was concerned. It's because you cannot upload any random videos and they will select that. Just some sort of moderation was also there, as in when the admin approves those videos, that videos goes on to the transcription pipeline. They are transcripted via an, say a video to text model like the open eyes whisper. 
So what was happening initially, all the, all the research was done with Openais, but at the end when deployment came, we have to go with deep Gram and AssemblyAI. That was the place where these models were excelling far better than OpenAI. Syed Asad: And I'm a big advocate of open source models, so also I try to leverage those, but it was not pretty working in production environment. Demetrios: Fascinating. So you had that, that's one of your use cases, right? And that's very much the multimodal rag use case. Are all of your use cases multimodal or did you have, do you have other ones too? Syed Asad: No, all are not multimodal. There are few multimodal, there are few text based on naive rag also. So what, like for example, there is one use case coming which is sort of a job search which is happening. A job search for a radiology, radiology section. I mean a very specialized type of client it is. And they're doing some sort of job search matching the modalities and procedures. And it is sort of a temporary job. Like, like you have two shifts ready, two shifts begin, just some. Syed Asad: So, so that is, that is very critical when somebody is putting their procedures or what in. Like for example, they, they are specializing in x rays in, in some sort of medical procedures and that is matching with the, with the, with the, with the employers requirement. So that is where the accuracy matters the most. Accurate. And in this case, Qdrant has proved just commendable in giving excellent search results. The other way around is that in this case is there were some challenges related to the quality of results also because. So progressing from frozen rack to advanced rag like adopting methods like re ranking, semantic chunking. I have, I have started using semantic chunking. Syed Asad: So it has proved very beneficial as far as the quality of results is concerned. Demetrios: Well, talk to me more about. I'm trying to understand this use case and why a rag is useful for the job matching. You have doctors who have specialties and they understand, all right, they're, maybe it's an orthopedic surgeon who is very good at a certain type of surgery, and then you have different jobs that come online. They need to be matched with those different jobs. And so where does the rag come into play? Because it seems like it could be solved with machine learning as opposed to AI. Syed Asad: Yeah, it could have been solved through machine learning, but the type of modalities that are, the type of, say, the type of jobs which they were posting are too much specialized. So it needed some sort of contextual matching also. So there comes the use case for the rag. In this place, the contextual matching was required. Initially, an approach for machine learning was on the table, but it was done with, it was not working. Demetrios: I get it, I get it. So now talk to me. This is really important that you said accuracy needs to be very high in this use case. How did you make sure that the accuracy was high? Besides the, I think you said chunking, looking at the chunks, looking at how you were doing that, what were some other methods you took to make sure that the accuracy was high? Syed Asad: I mean, as far as the accuracy is concerned. So what I did was that my focus was on the embedding model, actually when I started with what type of embed, choice of embedding model. 
So initially my team started with open source model available readily on hugging face, looking at some sort of leaderboard metrics, some sort of model specializing in medical, say, data, all those things. But even I was curious that the large language, the embedding models which were specializing in medical data, they were also not returning good results and they were mismatching. When, when there was a tabular format, I created a visualization in which the cosine similarity of various models were compared. So all were lagging behind until I went ahead with cohere. Cohere re rankers. They were the best in that case, although they are not trained on that. Syed Asad: And just an API call was required rather than loading that whole model onto the local. Demetrios: Interesting. All right. And so then were you doing certain types, so you had the cohere re ranker that gave you a big up. Were you doing any kind of monitoring of the output also, or evaluation of the output and if so, how? Syed Asad: Yes, for evaluation, for monitoring we readily use arrays AI, because I am a, I'm a huge advocate of Llama index also because it has made everything so easier versus lang chain. I mean, if I talk about my personal preference, not regarding any bias, because I'm not linked with anybody, I'm not promoting it here, but they are having the best thing which I write, I like about Llama index and why I use it, is that anything which is coming into play as far as the new research is going on, like for example, a recent research paper was with the raft retrieval augmented fine tuning, which was released by the Microsoft, and it is right now available on archive. So barely few days after they just implemented it in the library, and you can readily start using it rather than creating your own structure. So, yeah, so it was. So one of my part is that I go through the research papers first, then coming on to a result. So a research based approach is required in actually selecting the models, because every day there is new advancement going on in rags and you cannot figure out what is, what would be fine for you, and you cannot do hit and trial the whole day. Demetrios: Yes, that is a great point. So then if we break down your tech stack, what does it look like? You're using Llama index, you're using arise for the monitoring, you're using Qdrant for your vector database. You have the, you have the coherent re ranker, you are using GPT 3.5. Syed Asad: No, it's GPT 4, not 3.5. Demetrios: You needed to go with GPT 4 because everything else wasn't good enough. Syed Asad: Yes, because one of the context length was one of the most things. But regarding our production, we have been readily using since the last one and a half months. I have been readily using Mixtril. I have been. I have been using because there's one more challenge coming onto the rack, because there's one more I'll give, I'll give you an example of one more use case. It is the I'll name the project also because I'm allowed by my company. It is a big project by the name of Quasar markets. It is a us based company and they are actually creating a financial market type of check chatbot. Syed Asad: Q u a s a r, quasar. You can search it also, and they give you access to various public databases also, and some paid databases also. They have a membership plan. So we are entirely handling the front end backend. I'm not handling the front end and the back end, I'm handling the AI part in that. 
So one of the challenges is the inference, timing, the timing in which the users are getting queries when it is hitting the database. Say for example, there is a database publicly available database called Fred of us government. So when user can select in that app and go and select the Fred database and want to ask some questions regarding that. Syed Asad: So that is in this place there is no vectors, there are no vector databases. It is going without that. So we are following some keyword approach. We are extracting keywords, classifying the queries in simple or complex, then hitting it again to the database, sending it on the live API, getting results. So there are multiple hits going on. So what happened? This all multiple hits which were going on. They reduced the timing and I mean the user experience was being badly affected as the time for the retrieval has gone up and user and if you're going any query and inputting any query it is giving you results in say 1 minute. You wouldn't be waiting for 1 minute for a result. Demetrios: Not at all. Syed Asad: So this is one of the challenge for a GPU based approach. And in, in the background everything was working on GPT 4 even, not 3.5. I mean the costliest. Demetrios: Yeah. Syed Asad: So, so here I started with the LPU approach, the Grok. I mean it's magical. Demetrios: Yeah. Syed Asad: I have been implementing proc since the last many days and it has been magical. The chatbots are running blazingly fast but there are some shortcomings also. You cannot control the temperature if you have lesser control on hallucination. That is one of the challenges which I am facing. So that is why I am not able to deploy Grok into production right now. Because hallucination is one of the concern for the client. Also for anybody who is having, who wants to have a rag on their own data, say, or AI on their own data, they won't, they won't expect you, the LLM, to be creative. So that is one of the challenges. Syed Asad: So what I found that although many of the tools that are available in the market right now day in and day out, there are more researches. But most of the things which are coming up in our feeds or more, I mean they are coming as a sort of a marketing gimmick. They're not working actually on the ground. Demetrios: Tell me, tell me more about that. What other stuff have you tried that's not working? Because I feel that same way. I've seen it and I also have seen what feels like some people, basically they release models for marketing purposes as opposed to actual valuable models going out there. So which ones? I mean Grok, knowing about Grok and where it excels and what some of the downfalls are is really useful. It feels like this idea of temperature being able to control the knob on the temperature and then trying to decrease the hallucinations is something that is fixable in the near future. So maybe it's like months that we'll have to deal with that type of thing for now. But I'd love to hear what other things you've tried that were not like you thought they were going to be when you were scrolling Twitter or LinkedIn. Syed Asad: Should I name them? Demetrios: Please. So we all know we don't have to spend our time on them. Syed Asad: I'll start with OpenAI. The clients don't like GPT 4 to be used in there just because the primary concern is the cost. Secondary concern is the data privacy. And the third is that, I mean, I'm talking from the client's perspective, not the tech stack perspective. Demetrios: Yeah, yeah, yeah. 
Syed Asad: They consider OpenAI as a more of a marketing gimmick. Although GPT 4 gives good results. I'm, I'm aware of that, but the clients are not in favor. But the thing is that I do agree that GPT 4 is still the king of llms right now. So they have no option, no option to get the better, better results. But Mixtral is performing very good as far as the hallucinations are concerned. Just keeping the parameter temperature is equal to zero in a python code does not makes the hallucination go off. It is one of my key takeaways. Syed Asad: I have been bogging my head. Just. I'll give you an example, a chat bot. There is a, there's one of the use case in which is there's a big publishing company. I cannot name that company right now. And they want the entire system of books since the last 2025 years to be just converted into a rack pipeline. And the people got query. The. Syed Asad: The basic problem which I was having is handling a hello. When a user types hello. So when you type in hello, it. Demetrios: Gives you back a book. Syed Asad: It gives you back a book even. It is giving you back sometimes. Hello, I am this, this, this. And then again, some information. What you have written in the prompt, it is giving you everything there. I will answer according to this. I will answer according to this. So, so even if the temperature is zero inside the code, even so that, that included lots of prompt engineering. Syed Asad: So prompt engineering is what I feel is one of the most important trades which will be popular, which is becoming popular. And somebody is having specialization in prompt engineering. I mean, they can control the way how an LLM behaves because it behaves weirdly. Like in this use case, I was using croc and Mixtral. So to control Mixtral in such a way. It was heck lot of work, although it, we made it at the end, but it was heck lot of work in prompt engineering part. Demetrios: And this was, this was Mixtral large. Syed Asad: Mixtral, seven bits, eight by seven bits. Demetrios: Yeah. I mean, yeah, that's the trade off that you have to deal with. And it wasn't fine tuned at all. Syed Asad: No, it was not fine tuned because we were constructing a rack pipeline, not a fine tuned application, because right now, right now, even the customers are not interested in getting a fine tune model because it cost them and they are more interested in a contextual, like a rag contextual pipeline. Demetrios: Yeah, yeah. Makes sense. So basically, this is very useful to think about. I think we all understand and we've all seen that GPT 4 does best if we can. We want to get off of it as soon as possible and see how we can, how far we can go down the line or how far we can go on the difficulty spectrum. Because as soon as you start getting off GPT 4, then you have to look at those kind of issues with like, okay, now it seems to be hallucinating a lot more. How do I figure this out? How can I prompt it? How can I tune my prompts? How can I have a lot of prompt templates or a prompt suite to make sure that things work? And so are you using any tools for keeping track of prompts? I know there's a ton out there. Syed Asad: We initially started with the parameter efficient fine tuning for prompts, but nothing is working 100% interesting. Nothing works 100% it is as far as the prompting is concerned. It goes on to a hit and trial at the end. Huge wastage of time in doing prompt engineering. 
Even if you are following the exact prompt template given on the hugging face given on the model card anywhere, it will, it will behave, it will act, but after some time. Demetrios: Yeah, yeah. Syed Asad: But mixed well. Is performing very good. Very, very good. Mixtral eight by seven bits. That's very good. Demetrios: Awesome. Syed Asad: The summarization part is very strong. It gives you responses at par with GPT 4. Demetrios: Nice. Okay. And you don't have to deal with any of those data concerns that your customers have. Syed Asad: Yeah, I'm coming on to that only. So the next part was the data concern. So they, they want either now or in future the localization of llms. I have been doing it with readily, with Llama, CPP and Ollama. Right now. Ollama is very good. I mean, I'm a huge, I'm a huge fan of Ollama right now, and it is performing very good as far as the localization and data privacy is concerned because, because at the end what you are selling, it makes things, I mean, at the end it is sales. So even if the client is having data of the customers, they want to make their customers assure that the data is safe. Syed Asad: So that is with the localization only. So they want to gradually go into that place. So I want to bring here a few things. To summarize what I said, localization of llms is one of the concern right now is a big market. Second is quantization of models. Demetrios: Oh, interesting. Syed Asad: In quantization of models, whatever. So I perform scalar quantization and binary quantization, both using bits and bytes. I various other techniques also, but the bits and bytes was the best. Scalar quantization is performing better. Binary quantization, I mean the maximum compression or maximum lossy function is there, so it is not, it is, it is giving poor results. Scalar quantization is working very fine. It, it runs on CPU also. It gives you good results because whatever projects which we are having right now or even in the markets also, they are not having huge corpus of data right now, but they will eventually scale. Syed Asad: So they want something right now so that quantization works. So quantization is one of the concerns. People want to dodge aws, they don't want to go to AWS, but it is there. They don't have any other way. So that is why they want aws. Demetrios: And is that because of costs lock in? Syed Asad: Yeah, cost is the main part. Demetrios: Yeah. They understand that things can get out of hand real quick if you're using AWS and you start using different services. I think it's also worth noting that when you're using different services on AWS, it may be a very similar service. But if you're using sagemaker endpoints on AWS, it's like a lot more expensive than just an EKS endpoint. Syed Asad: Minimum cost for a startup, for just the GPU, bare minimum is minimum. $450. Minimum. It's $450 even without just on the testing phases or the development phases, even when it has not gone into production. So that gives a dent to the client also. Demetrios: Wow. Yeah. Yeah. So it's also, and this is even including trying to use like tranium or inferencia and all of that stuff. You know those services? Syed Asad: I know those services, but I've not readily tried those services. I'm right now in the process of trying salad also for inference, and they are very, very cheap right now. Demetrios: Nice. Okay. Yeah, cool. 
So if you could wave your magic wand and have something be different when it comes to your work, your day in, day out, especially because you've been doing a lot of rags, a lot of different kinds of rags, a lot of different use cases with, with rags. Where do you think you would get the biggest uptick in your performance, your ability to just do what you need to do? How could rags be drastically changed? Is it something that you say, oh, the hallucinations. If we didn't have to deal with those, that would make my life so much easier. I didn't have to deal with prompts that would make my life infinitely easier. What are some things like where in five years do you want to see this field be? Syed Asad: Yeah, you figured it right. The hallucination part is one of the concerns, or biggest concerns with the client when it comes to the rag, because what we see on LinkedIn and what we see on places, it gives you a picture that it, it controls hallucination, and it gives you answer that. I don't know anything about this, as mentioned in the context, but it does not really happen when you come to the production. It gives you information like you are developing a rag for a publishing company, and it is giving you. Where is, how is New York like, it gives you information on that also, even if you have control and everything. So that is one of the things which needs to be toned down. As far as the rag is concerned, pricing is the biggest concern right now, because there are very few players in the market as far as the inference is concerned, and they are just dominating the market with their own rates. So this is one of the pain points. Syed Asad: And the. I'll also want to highlight the popular vector databases. There are many Pinecone weaviate, many things. So they are actually, the problem with many of the vector databases is that they work fine. They are scalable. This is common. The problem is that they are not easy to use. So that is why I always use Qdrant. Syed Asad: Not because Qdrant is sponsoring me, not because I am doing a job with Qdrant, but Qdrant is having the ease of use. And it, I have, I have trained people in my team who specialize with Qdrant, and they were initially using Weaviate and Pinecone. I mean, you can do also store vectors in those databases, but it is not especially the, especially the latest development with Pine, sorry, with Qdrant is the fast embed, which they just now released. And it made my work a lot easier by using the ONNX approach rather than a Pytorch based approach, because there was one of the projects in which we were deploying embedding model on an AWS server and it was running continuously. And minimum utilization of ram is 6gb. Even when it is not doing any sort of vector embedding so fast. Embed has so Qdrant is playing a huge role, I should acknowledge them. And one more thing which I would not like to use is LAN chain. Syed Asad: I have been using it. So. So I don't want to use that language because it is not, it did not serve any purpose for me, especially in the production. It serves purpose in the research phase. When you are releasing any notebook, say you have done this and does that. It is not. It does not works well in production, especially for me. Llama index works fine, works well. Demetrios: You haven't played around with anything else, have you? Like Haystack or. Syed Asad: Yeah, haystack. Haystack. I have been playing out around, but haystack is lacking functionalities. It is working well. 
I would say it is working well, but it lacks some functionalities. They need to add more things as compared to Llama index. Demetrios: And of course, the hottest one on the block right now is DSPY. Right? Have you messed around with that at all? Syed Asad: DSPy, actually DSPY. I have messed with DSPY. But the thing is that DSPY is right now, I have not experimented with that in the production thing, just in the research phase. Demetrios: Yeah. Syed Asad: So, and regarding the evaluation part, DeepEval, I heard you might have a DeepEval. So I've been using that. It is because one of the, one of the challenges is the testing for the AI. Also, what responses are large language model is generating the traditional testers or the manual tester software? They don't know, actually. So there's one more vertical which is waiting to be developed, is the testing for AI. It has a huge potential. And DeepEval, the LLM based approach on testing is very, is working fine and is open source also. Demetrios: And that's the DeepEval I haven't heard. Syed Asad: Let me just tell you the exact spelling. It is. Sorry. It is DeepEval. D E E P. Deep eval. I can. Demetrios: Yeah. Okay. I know DeepEval. All right. Yeah, for sure. Okay. Hi. I for some reason was understanding D Eval. Syed Asad: Yeah, actually I was pronouncing it wrong. Demetrios: Nice. So these are some of your favorite, non favorite, and that's very good to know. It is awesome to hear about all of this. Is there anything else that you want to say before we jump off? Anything that you can, any wisdom you can impart on us for your rag systems and how you have learned the hard way? So tell us so we don't have to learn that way. Syed Asad: Just go. Don't go with the marketing. Don't go with the marketing. Do your own research. Hugging face is a good, I mean, just fantastic. The leaderboard, although everything does not work in the leaderboard, also say, for example, I don't, I don't know about today and tomorrow, today and yesterday, but there was a model from Salesforce, the embedding model from Salesforce. It is still topping charts, I think, in the, on the MTEB. MTEB leaderboard for the embedding models. Syed Asad: But you cannot use it in the production. It is way too huge to implement it. So what's the use? Mixed bread AI. The mixed bread AI, they are very light based, lightweight, and they, they are working fine. They're not even on the leaderboard. They were on the leaderboard, but they're right, they might not. When I saw they were ranking on around seven or eight on the leaderboard, MTEB leaderboard, but they were working fine. So even on the leaderboard thing, it does not works. Demetrios: And right now it feels a little bit like, especially when it comes to embedding models, you just kind of go to the leaderboard and you close your eyes and then you pick one of them. Have you figured out a way to better test these or do you just find one and then try and use it everywhere? Syed Asad: No, no, that is not the case. Actually what I do is that I need to find the first, the embedding model. Try to find the embedding model based on my use case. Like if it is an embedding model on a medical use case more. So I try to find that. But the second factor to filter that is, is the size of that embedding model. Because at the end, if I am doing the entire POC or an entire research with that embedding model, what? 
And it has happened to me that we did entire research with embedding models, large language models, and then we have to remove everything just on the production part and it just went in smoke. Everything. Syed Asad: So a lightweight embedding model, especially the one which, which has started working recently, is that the cohere embedding models, and they have given a facility to call those embedding models in a quantized format. So that is also working and fast. Embed is one of the things which is by Qdrant, these two things are working in the production. I'm talking in the production for research. You can do anything. Demetrios: Brilliant, man. Well, this has been great. I really appreciate it. Asad, thank you for coming on here and for anybody else that would like to come on to the vector space talks, just let us know. In the meantime, don't get lost in vector space. We will see you all later. Have a great afternoon. Morning, evening, wherever you are. Demetrios: Asad, you taught me so much, bro. Thank you.
blog/advancements-and-challenges-in-rag-systems-syed-asad-vector-space-talks-021.md
--- draft: false title: Talk with YouTube without paying a cent - Francesco Saverio Zuppichini | Vector Space Talks slug: youtube-without-paying-cent short_description: A sneak peek into the tech world as Francesco shares his ideas and processes on coding innovative solutions. description: Francesco Zuppichini outlines the process of converting YouTube video subtitles into searchable vector databases, leveraging tools like YouTube DL and Hugging Face, and addressing the challenges of coding without conventional frameworks in machine learning engineering. preview_image: /blog/from_cms/francesco-saverio-zuppichini-bp-cropped.png date: 2024-03-27T12:37:55.643Z author: Demetrios Brinkmann featured: false tags: - embeddings - LLMs - Retrieval Augmented Generation - Ollama --- > *"Now I do believe that Qdrant, I'm not sponsored by Qdrant, but I do believe it's the best one for a couple of reasons. And we're going to see them mostly because I can just run it on my computer so it's full private and I'm in charge of my data.”*\ -- Francesco Saverio Zuppichini > Francesco Saverio Zuppichini is a Senior Full Stack Machine Learning Engineer at Zurich Insurance with experience in both large corporations and startups of various sizes. He is passionate about sharing knowledge, and building communities, and is known as a skilled practitioner in computer vision. He is proud of the community he built because of all the amazing people he got to know. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/7kVd5a64sz2ib26IxyUikO?si=mrOoVP3ISQ22kXrSUdOmQA), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/56mFleo06LI).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/56mFleo06LI?si=P4vF9jeQZEZzjb32" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> <iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/Talk-with-YouTube-without-paying-a-cent---Francesco-Saverio-Zuppichini--Vector-Space-Talks-016-e2ggt6d/a-ab17u5q" height="102px" width="400px" frameborder="0" scrolling="no"></iframe> ## **Top takeaways:** Curious about transforming YouTube content into searchable elements? Francesco Zuppichini unpacks the journey of coding a RAG by using subtitles as input, harnessing technologies like YouTube DL, Hugging Face, and Qdrant, while debating framework reliance and the fine art of selecting the right software tools. Here are some insights from this episode: 1. **Behind the Code**: Francesco unravels how to create a RAG using YouTube videos. Get ready to geek out on the nuts and bolts that make this magic happen. 2. **Vector Voodoo**: Ever wonder how embedding vectors carry out their similarity searches? Francesco's got you covered with his brilliant explanation of vector databases and the mind-bending distance method that seeks out those matches. 3. **Function over Class**: The debate is as old as stardust. Francesco shares why he prefers using functions over classes for better code organization and demonstrates how this approach solidifies when running language models with Ollama. 4. **Metadata Magic**: Find out how metadata isn't just a sidekick but plays a pivotal role in the realm of Qdrant and RAGs. Learn why Francesco values metadata as payload and the challenges it presents in developing domain-specific applications. 5. 
**Tool Selection Tips**: Deciding on the right software tool can feel like navigating an asteroid belt. Francesco shares his criteria—ease of installation, robust documentation, and a little help from friends—to ensure a safe landing. > Fun Fact: Francesco confessed that his code for chunking subtitles was "a little bit crappy" because of laziness—proving that even pros take shortcuts to the stars now and then. > ## Show notes: 00:00 Intro to Francesco\ 05:36 Create YouTube rack for data retrieval.\ 09:10 Local web dev showcase without frameworks effectively.\ 11:12 Qdrant: converting video text to vectors.\ 13:43 Connect to vectordb, specify config, keep it simple.\ 17:59 Recreate, compare vectors, filter for right matches.\ 21:36 Use functions and share states for simpler coding.\ 29:32 Gemini Pro generates task-based outputs effectively.\ 32:36 Good documentation shows pride in the product.\ 35:38 Organizing different data types in separate collections.\ 38:36 Proactive approach to understanding code and scalability.\ 42:22 User feedback and statistics evaluation is crucial.\ 44:09 Consider user needs for chatbot accuracy and relevance. ## More Quotes from Francesco: *"So through Docker, using Docker compose, very simple here I just copy and paste the configuration for the Qdrant documentation. I run it and when I run it I also get a very nice looking interface.*”\ -- Francesco Saverio Zuppichini *"It's a very easy way to debug stuff because if you see a lot of vectors from the same document in the same place, maybe your chunking is not doing a great job because maybe you have some too much kind of overlapping on the recent bug in your code in which you have duplicate chunks. Okay, so we have our vector DB running. Now we need to do some setup stuff. So very easy to do with Qdrant. You just need to get the Qdrant client.”*\ -- Francesco Saverio Zuppichini *"So straightforward, so useful. A lot of people, they don't realize that types are very useful. So kudos to the Qdrant team to actually make all the types very nice.”*\ -- Francesco Saverio Zuppichini ## Transcript: Demetrios: Folks, welcome to another vector space talks. I'm excited to be here and it is a special day because I've got a co host with me today. Sabrina, what's going on? How you doing? Sabrina Aquino: Let's go. Thank you so much, Demetrios, for having me here. I've always wanted to participate in vector space talks. Now it's finally my chance. So thank you so much. Demetrios: Your dream has come true and what a day for it to come true because we've got a special guest today. While we've got you here, Sabrina, I know you've been doing some excellent stuff on the Internet when it comes to other ways to engage with the Qdrant community. Can you break that down real fast before we jump into this? Sabrina Aquino: Absolutely. I think an announcement here is we're hosting our first discord office hours. We're going to be answering all your questions about Qdrant with Qdrant team members, where you can interact with us, with our community as well. And we're also going to be dropping a few insights on the next Qdrant release 1.8. So that's super exciting and also, we are. Sorry, I just have another thing going on here on the live. Demetrios: Music got in your ear. Sabrina Aquino: We're also having the vector voices on Twitter, the X Spaces roundtable, where we bring experts to talk about a topic with our team. And you can also jump in and ask questions on the AMA. So that's super exciting as well. And, yeah, see you guys there. 
And I'll drop a link of the discord in the comments so you guys can join our community and be a part of it. Demetrios: Exactly what I was about to say. So without further ado, let's bring on our guest of honor, Mr. Where are you at, dude? Francesco Zuppichini: Hi. Hello. How are you? Demetrios: I'm great. How are you doing? Francesco Zuppichini: Great. Demetrios: I've been seeing you all around the Internet and I am very excited to be able to chat with you today. I know you've got a bit of stuff planned for us. You've got a whole presentation, right? Francesco Zuppichini: Correct. Demetrios: But for those that do not know you, you're a full stack machine learning engineer at Zurich Insurance. I think you also are very vocal and you are fun to follow on LinkedIn is what I would say. And we're going to get to that at the end after you give your presentation. But once again, reminder for everybody, if you want to ask questions, hit us up with questions in the chat. As far as going through his presentation today, you're going to be talking to us all about some really cool stuff about rags. I'm going to let you get into it, man. And while you're sharing your screen, I'm going to tell people a little bit of a fun fact about you. That you put ketchup on your pizza, which I think is a little bit sacrilegious. Francesco Zuppichini: Yes. So that's 100% true. And I hope that the italian pizza police is not listening to this call or I can be in real trouble. Demetrios: I think we just lost a few viewers there, but it's all good. Sabrina Aquino: Italy viewers just dropped out. Demetrios: Yeah, the Italians just dropped, but it's all good. We will cut that part out in post production, my man. I'm going to share your screen and I'm going to let you get after it. I'll be hanging around in case any questions pop up with Sabrina in the background. And here you go, bro. Francesco Zuppichini: Wonderful. So you can see my screen, right? Demetrios: Yes, for sure. Francesco Zuppichini: That's perfect. Okay, so today we're going to talk about talk with YouTube without paying a cent, no framework bs. So the goal of today is to showcase how to code a RAG given as an input a YouTube video without using any framework like language, et cetera, et cetera. And I want to show you that it's straightforward, using a bunch of technologies and Qdrants as well. And you can do all of this without actually pay to any service. Right. So we are going to run our PEDro DB locally and also the language model. We are going to run our machines. Francesco Zuppichini: And yeah, it's going to be a technical talk, so I will kind of guide you through the code. Feel free to interrupt me at any time if you have questions, if you want to ask why I did that, et cetera, et cetera. So very quickly, before we get started, I just want you not to introduce myself. So yeah, senior full stack machine engineer. That's just a bunch of funny work to basically say that I do a little bit of everything. Start. So when I was working, I start as computer vision engineer, I work at PwC, then a bunch of startups, and now I sold my soul to insurance companies working at insurance. And before I was doing computer vision, now I'm doing due to Chat GPT, hyper language model, I'm doing more of that. Francesco Zuppichini: But I'm always involved in bringing the full product together. So from zero to something that is deployed and running. So I always be interested in web dev. I can also do website servers, a little bit of infrastructure as well. 
So now I'm just doing a little bit of everything. So this is why there is full stack there. Yeah. Okay, let's get started to something a little bit more interesting than myself. Francesco Zuppichini: So our goal is to create a full local YouTube rack. And if you don't want a rack, is, it's basically a system in which you take some data. In this case, we are going to take subtitles from YouTube videos and you're able to basically q a with your data. So you're able to use a language model, you ask questions, then we retrieve the relevant parts in the data that you provide, and hopefully you're going to get the right answer to your. So let's talk about the technologies that we're going to use. So to get the subtitles from a video, we're going to use YouTube DL and YouTube DL. It's a library that is available through Pip. So Python, I think at some point it was on GitHub and then I think it was removed because Google, they were a little bit beach about that. Francesco Zuppichini: So then they realized it on GitHub. And now I think it's on GitHub again, but you can just install it through Pip and it's very cool. Demetrios: One thing, man, are you sharing a slide? Because all I see is your. I think you shared a different screen. Francesco Zuppichini: Oh, boy. Demetrios: I just see the video of you. There we go. Francesco Zuppichini: Entire screen. Yeah. I'm sorry. Thank you so much. Demetrios: There we go. Francesco Zuppichini: Wonderful. Okay, so in order to get the embedding. So to translate from text to vectors, right, so we're going to use hugging face just an embedding model so we can actually get some vectors. Then as soon as we got our vectors, we need to store and search them. So we're going to use our beloved Qdrant to do so. We also need to keep a little bit of stage right because we need to know which video we have processed so we don't redo the old embeddings and the storing every time we see the same video. So for this part, I'm just going to use SQLite, which is just basically an SQL database in just a file. So very easy to use, very kind of lightweight, and it's only your computer, so it's safe to run the language model. Francesco Zuppichini: We're going to use Ollama. That is a very simple way and very well done way to just get a language model that is running on your computer. And you can also call it using the OpenAI Python library because they have implemented the same endpoint as. It's like, it's super convenient, super easy to use. If you already have some code that is calling OpenAI, you can just run a different language model using Ollama. And you just need to basically change two lines of code. So what we're going to do, basically, I'm going to take a video. So here it's a video from Fireship IO. Francesco Zuppichini: We're going to run our command line and we're going to ask some questions. Now, if you can still, in theory, you should be able to see my full screen. Yeah. So very quickly to showcase that to you, I already processed this video from the good sound YouTube channel and I have already here my command line. So I can already kind of see, you know, I can ask a question like what is the contact size of Germany? And we're going to get the reply. Yeah. And here we're going to get a reply. And now I want to walk you through how you can do something similar. Francesco Zuppichini: Now, the goal is not to create the best rack in the world. It's just to showcase like show zero to something that is actually working. 
How you can do that in a fully local way without using any framework so you can really understand what's going on under the hood. Because I think a lot of people, they try to copy, to just copy and paste stuff on Langchain and then they end up in a situation when they need to change something, but they don't really know where the stuff is. So this is why I just want to just show like Windfield zero to hero. So the first step will be I get a YouTube video and now I need to get the subtitle. So you could actually use a model to take the audio from the video and get the text. Like a whisper model from OpenAI, for example. Francesco Zuppichini: In this case, we are taking advantage that YouTube allow people to upload subtitles and YouTube will automatically generate the subtitles. So here using YouTube dial, I'm just going to get my video URL. I'm going to set up a bunch of options like the format they want, et cetera, et cetera. And then basically I'm going to download and get the subtitles. And they look something like this. Let me show you an example. Something similar to this one, right? We have the timestamps and we do have all text inside. Now the next step. Francesco Zuppichini: So we got our source of data, we have our text key. Next step is I need to translate my text to vectors. Now the easiest way to do so is just use sentence transformers for backing phase. So here I've installed it. I load in a model. In this case I'm using this model here. I have no idea what tat model is. I just default one tatted find and it seems to work fine. Francesco Zuppichini: And then in order to use it, I'm just providing a query and I'm getting back a list of vectors. So we have a way to take a video, take the text from the video, convert that to vectors with a semantic meaningful representation. And now we need to store them. Now I do believe that Qdrant, I'm not sponsored by Qdrant, but I do believe it's the best one for a couple of reasons. And we're going to see them mostly because I can just run it on my computer so it's full private and I'm in charge of my data. So the way I'm running it is through Docker compose. So through Docker, using Docker compose, very simple here I just copy and paste the configuration for the Qdrant documentation. I run it and when I run it I also get a very nice looking interface. Francesco Zuppichini: I'm going to show that to you because I think it's very cool. So here I've already some vectors inside here so I can just look in my collection, it's called embeddings, an original name. And we can see all the chunks that were embed with the metadata, in this case just the video id. A super cool thing, super useful to debug is go in the visualize part and see the embeddings, the projected embeddings. You can actually do a bounce of stuff. You can actually also go here and color them by some metadata. Like I can say I want to have a different color based on the video id. In this case I just have one video. Francesco Zuppichini: I will show that as soon as we add more videos. This is so cool, so useful. I will use this at work as well in which I have a lot of documents. And it's a very easy way to debug stuff because if you see a lot of vectors from the same document in the same place, maybe your chunking is not doing a great job because maybe you have some too much kind of overlapping on the recent bug in your code in which you have duplicate chunks. Okay, so we have our vector DB running. Now we need to do some setup stuff. So very easy to do with Qdrant. 
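For reference, here is a minimal sketch of the two steps just described — pulling the auto-generated subtitles and turning text into vectors — assuming the maintained yt-dlp fork of youtube-dl and the sentence-transformers package; the model name and option values are illustrative choices, not necessarily the ones used in the talk:

```python
# Minimal sketch of the subtitle + embedding steps, assuming the yt-dlp fork
# of youtube-dl and the sentence-transformers package. Model name and options
# are illustrative, not necessarily the ones used in the talk.
import yt_dlp
from sentence_transformers import SentenceTransformer

def download_subtitles(video_url: str, out_dir: str = ".") -> dict:
    opts = {
        "skip_download": True,        # subtitles only, no video file
        "writesubtitles": True,       # uploaded subtitles, if present
        "writeautomaticsub": True,    # otherwise YouTube's auto-generated ones
        "subtitleslangs": ["en"],
        "subtitlesformat": "vtt",
        "outtmpl": f"{out_dir}/%(id)s.%(ext)s",
    }
    with yt_dlp.YoutubeDL(opts) as ydl:
        # returns video metadata (id, title, ...) and writes the .vtt file to disk
        return ydl.extract_info(video_url, download=True)

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dimensional vectors

def embed(texts: list[str]) -> list[list[float]]:
    # encode() returns a numpy array; convert to plain lists for the vector DB
    return embedder.encode(texts).tolist()
```

The all-MiniLM-L6-v2 model produces 384-dimensional vectors, which is the size the Qdrant collection in the next step would be configured with.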
You just need to get the Qdrant client. Francesco Zuppichini: So you have a connection with a vectordb, you create a connection, you specify a name, you specify some configuration stuff. In this case I just specify the vector size because Qdrant, it needs to know how big the vectors are going to be and the distance I want to use. So I'm going to use the cosite distance in Qdrant documentation there are a lot of parameters. You can do a lot of crazy stuff here and just keep it very simple. And yeah, another important thing is that since we are going to embed more videos, when I ask a question to a video, I need to know which embedded are from that video. So we're going to create an index. So it's very efficient to filter my embedded based on that index, an index on the metadata video because when I store a chunk in Qdrant, I also going to include from which video is coming from. Very simple, very simple to set up. Francesco Zuppichini: You just need to do this once. I was very lazy so I just assumed that if this is going to fail, it means that it's because I've already created a collection. So I'm just going to pass it and call it a day. Okay, so this is basically all the preprocess this setup you need to do to have your Qdrant ready to store and search vectors. To store vectors. Straightforward, very straightforward as well. Just need again the client. So the connection to the database here I'm passing my embedding so sentence transformer model and I'm passing my chunks as a list of documents. Francesco Zuppichini: So documents in my code is just a type that will contain just this metadata here. Very simple. It's similar to Lang chain here. I just have attacked it because it's lightweight. To store them we call the upload records function. We encode them here. There is a little bit of bad variable names from my side which I replacing that. So you shouldn't do that. Francesco Zuppichini: Apologize about that and you just send the records. Another very cool thing about Qdrant. So the second things that I really like is that they have types for what you send through the library. So this models record is a Qdrant type. So you use it and you know immediately. So what you need to put inside. So let me give you an example. Right? So assuming that I'm programming, right, I'm going to say model record bank. Francesco Zuppichini: I know immediately. So what I have to put inside, right? So straightforward, so useful. A lot of people, they don't realize that types are very useful. So kudos to the Qdrant team to actually make all the types very nice. Another cool thing is that if you're using fast API to build a web server, if you are going to return a Qdrant models type, it's actually going to be serialized automatically through pydantic. So you don't need to do weird stuff. It's all handled by the Qdrant APIs, by the product SDK. Super cool. Francesco Zuppichini: Now we have a way to store our chunks to embed them. So this is how they look like in the interface. I can see them, I can go to them, et cetera, et Cetera. Very nice. Now the missing part, right. So video subtitles. I chunked the subtitles. I haven't show you the chunking code. Francesco Zuppichini: It's a little bit crappy because I was very lazy. So I just like chunking by characters count and a little bit of overlapping. We have a way to store and embed our chunks and now we need a way to search. That's basically one of the missing steps. Now search straightforward as well. 
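A rough sketch of that setup and storage code, assuming the qdrant-client package; the collection name, cosine distance, and the metadata.video_id index follow the talk, while identifiers such as `setup` and `store` are illustrative:

```python
# Sketch of the Qdrant setup and storage described above, assuming the
# qdrant-client package and the embed() helper from the previous sketch.
import uuid
from qdrant_client import QdrantClient, models

client = QdrantClient("localhost", port=6333)
COLLECTION = "embeddings"

def setup(vector_size: int = 384) -> None:
    try:
        client.create_collection(
            collection_name=COLLECTION,
            vectors_config=models.VectorParams(
                size=vector_size, distance=models.Distance.COSINE
            ),
        )
        # index the video id so filtered searches stay fast
        client.create_payload_index(
            collection_name=COLLECTION,
            field_name="metadata.video_id",
            field_schema=models.PayloadSchemaType.KEYWORD,
        )
    except Exception:
        # lazy, as in the talk: assume a failure means the collection already exists
        pass

def store(chunks: list[str], video_id: str) -> None:
    client.upload_records(
        collection_name=COLLECTION,
        records=[
            models.Record(
                id=str(uuid.uuid4()),
                vector=vector,
                payload={"text": chunk, "metadata": {"video_id": video_id}},
            )
            for chunk, vector in zip(chunks, embed(chunks))
        ],
    )
```

Wrapping `create_collection` in a try/except mirrors the "if it fails, the collection already exists" shortcut from the talk; in a real project you would rather check for the collection explicitly (newer client versions expose a `collection_exists` call).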
This is also a good example because I can show you how effective is to create filters using Qdrant. So what do we need to search with again the vector client, the embeddings, because we have a query, right. We need to run the query with the same embedding models. Francesco Zuppichini: We need to recreate to embed in a vector and then we need to compare with the vectors in the vector Db using a distance method, in this case considered similarity in order to get the right matches right, the closest one in our vector DB, in our vector search base. So passing a query string, I'm passing a video id and I pass in a label. So how many hits I want to get from the metadb. Now to create a filter again you're going to use the model package from the Qdrant framework. So here I'm just creating a filter class for the model and I'm saying okay, this filter must match this key, right? So metadata video id with this video id. So when we search, before we do the similarity search, we are going to filter away all the vectors that are not from that video. Wonderful. Now super easy as well. Francesco Zuppichini: We just call the DB search, right pass. Our collection name here is star coded. Apologies about that, I think I forgot to put the right global variable our coded, we create a query, we set the limit, we pass the query filter, we get the it back as a dictionary in the payload field of each it and we recreate our document a dictionary. I have types, right? So I know what this function is going to return. Now if you were to use a framework, right this part, it will be basically the same thing. If I were to use langchain and I want to specify a filter, I would have to write the same amount of code. So most of the times you don't really need to use a framework. One thing that is nice about not using a framework here is that I add control on the indexes. Francesco Zuppichini: Lang chain, for instance, will create the indexes only while you call a classmate like from document. And that is kind of cumbersome because sometimes I wasn't quoting bugs in which I was not understanding why one index was created before, after, et cetera, et cetera. So yes, just try to keep things simple and not always write on frameworks. Wonderful. Now I have a way to ask a query to get back the relative parts from that video. Now we need to translate this list of chunks to something that we can read as human. Before we do that, I was almost going to forget we need to keep state. Now, one of the last missing part is something in which I can store data. Francesco Zuppichini: Here I just have a setup function in which I'm going to create an SQL lite database, create a table called videos in which I have an id and a title. So later I can check, hey, is this video already in my database? Yes. I don't need to process that. I can just start immediately to QA on that video. If not, I'm going to do the chunking and embeddings. Got a couple of functions here to get video from Db to save video from and to save video to Db. So notice now I only use functions. I'm not using classes here. Francesco Zuppichini: I'm not a fan of object writing programming because it's very easy to kind of reach inheritance health in which we have like ten levels of inheritance. And here if a function needs to have state, here we do need to have state because we need a connection. So I will just have a function that initialize that state. I return tat to me, and me as a caller, I'm just going to call it and pass my state. 
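The filtered search Francesco just walked through could be sketched like this, reusing the client, collection name, embedder, and payload layout from the sketches above (the helper itself is an assumption, not his exact code):

```python
# Sketch of the filtered search described above: re-embed the query and restrict
# the similarity search to chunks from a single video via a payload filter.
def search(query: str, video_id: str, limit: int = 5) -> list[dict]:
    query_vector = embedder.encode(query).tolist()
    hits = client.search(
        collection_name=COLLECTION,
        query_vector=query_vector,
        query_filter=models.Filter(
            must=[
                models.FieldCondition(
                    key="metadata.video_id",
                    match=models.MatchValue(value=video_id),
                )
            ]
        ),
        limit=limit,
    )
    # each hit carries the original chunk and its metadata in the payload
    return [hit.payload for hit in hits]
```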
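And for the small piece of state he mentions — remembering which videos have already been processed — a minimal sketch with the standard-library sqlite3 module; the table and column names follow the talk, the function names are illustrative:

```python
# Minimal SQLite state, as described above: one table recording which videos
# have already been chunked and embedded. Function names are illustrative.
import sqlite3

def get_db(path: str = "videos.db") -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE IF NOT EXISTS videos (id TEXT PRIMARY KEY, title TEXT)")
    return conn

def get_video(conn: sqlite3.Connection, video_id: str) -> tuple | None:
    return conn.execute(
        "SELECT id, title FROM videos WHERE id = ?", (video_id,)
    ).fetchone()

def save_video(conn: sqlite3.Connection, video_id: str, title: str) -> None:
    conn.execute(
        "INSERT OR IGNORE INTO videos (id, title) VALUES (?, ?)", (video_id, title)
    )
    conn.commit()
```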
Very simple tips allow you really to divide your code properly. You don't need to think about is my class to couple with another class, et cetera, et cetera. Very simple, very effective. So what I suggest when you're coding, just start with function and share states across just pass down state. Francesco Zuppichini: And when you realize that you can cluster a lot of function together with a common behavior, you can go ahead and put state in a class and have key function as methods. So try to not start first by trying to understand which class I need to use around how I connect them, because in my opinion it's just a waste of time. So just start with function and then try to cluster them together if you need to. Okay, last part, the juicy part as well. Language models. So we need the language model. Why do we need the language model? Because I'm going to ask a question, right. I'm going to get a bunch of relevant chunks from a video and the language model. Francesco Zuppichini: It needs to answer that to me. So it needs to get information from the chunks and reply that to me using that information as a context. To run language model, the easiest way in my opinion is using Ollama. There are a lot of models that are available. I put a link here and you can also bring your own model. There are a lot of videos and tutorial how to do that. You run this command as soon as you install it on Linux. It's a one line to install Ollama. Francesco Zuppichini: You run this command here, it's going to download Mistral 7B very good model and run it on your gpu if you have one, or your cpu if you don't have a gpu, run it on GPU. Here you can see it yet. It's around 6gb. So even with a low tier gpu, you should be able to run a seven minute model on your gpu. Okay, so this is the prompt just for also to show you how easy is this, this prompt was just very lazy. Copy and paste from langchain source code here prompt use the following piece of context to answer the question at the end. Blah blah blah variable to inject the context inside question variable to get question and then we're going to get an answer. How do we call it? Is it easy? I have a function here called getanswer passing a bunch of stuff, passing also the OpenAI from the OpenAI Python package model client passing a question, passing a vdb, my DB client, my embeddings, reading my prompt, getting my matching documents, calling the search function we have just seen before, creating my context. Francesco Zuppichini: So just joining the text in the chunks on a new line, calling the format function in Python. As simple as that. Just calling the format function in Python because the format function will look at a string and kitty will inject variables that match inside these parentheses. Passing context passing question using the OpenAI model client APIs and getting a reply back. Super easy. And here I'm returning the reply from the language model and also the list of documents. So this should be documents. I think I did a mistake. Francesco Zuppichini: When I copy and paste this to get this image and we are done right. We have a way to get some answers from a video by putting everything together. This can seem scary because there is no comment here, but I can show you tson code. I think it's easier so I can highlight stuff. I'm creating my embeddings, I'm getting my database, I'm getting my vector DB login, some stuff I'm getting my model client, I'm getting my vid. So here I'm defining the state that I need. 
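Before the walkthrough continues, here is a rough sketch of the answer step just described: Ollama serving Mistral behind its OpenAI-compatible endpoint, a plain format() call on the prompt, and the retrieved chunks joined into a context. The prompt wording and function shape are assumptions rather than the exact code:

```python
# Rough sketch of the answer step described above. Ollama exposes an
# OpenAI-compatible endpoint, so the stock openai client works with a custom
# base_url; the API key is ignored by Ollama but required by the client.
from openai import OpenAI

llm = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

PROMPT = (
    "Use the following pieces of context to answer the question at the end. "
    "If you don't know the answer, just say that you don't know.\n\n"
    "{context}\n\nQuestion: {question}\nHelpful answer:"
)

def get_answer(question: str, video_id: str) -> tuple[str, list[dict]]:
    docs = search(question, video_id)                 # from the search sketch
    context = "\n".join(doc["text"] for doc in docs)  # join chunks on new lines
    reply = llm.chat.completions.create(
        model="mistral",
        messages=[{
            "role": "user",
            "content": PROMPT.format(context=context, question=question),
        }],
    )
    return reply.choices[0].message.content, docs
```

Because the endpoint mimics OpenAI's API, swapping between a local Mistral and a hosted model really does come down to changing the base URL and the model name, which is the two-line change mentioned earlier.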
You don't need comments because I get it straightforward. Like here I'm getting the vector db, good function name. Francesco Zuppichini: Then if I don't have the vector db, sorry. If I don't have the video id in a database, I'm going to get some information to the video. I'm going to download the subtitles, split the subtitles. I'm going to do the embeddings. In the end I'm going to save it to the betterDb. Finally I'm going to get my video back, printing something and start a while loop in which you can get an answer. So this is the full pipeline. Very simple, all function. Francesco Zuppichini: Also here fit function is very simple to divide things. Around here I have a file called RAG and here I just do all the RAG stuff. Right. It's all here similar. I have my file called crude. Here I'm doing everything I need to do with my database, et cetera, et cetera. Also a file called YouTube. So just try to split things based on what they do instead of what they are. Francesco Zuppichini: I think it's easier than to code. Yeah. So I can actually show you a demo in which we kind of embed a video from scratch. So let me kill this bad boy here. Let's get a juicy YouTube video from Sam. We can go with Gemma. We can go with Gemma. I think I haven't embedded that yet. Francesco Zuppichini: I'm sorry. My Eddie block is doing weird stuff over here. Okay, let me put this here. Demetrios: This is the moment that we need to all pray to the demo gods that this will work. Francesco Zuppichini: Oh yeah. I'm so sorry. I'm so sorry. I think it was already processed. So let me. I don't know this one. Also I noticed I'm seeing this very weird thing which I've just not seen that yesterday. So that's going to be interesting. Francesco Zuppichini: I think my poor Linux computer is giving up to running language models. Okay. Downloading ceramic logs, embeddings and we have it now before I forgot because I think that you guys spent some time doing this. So let's go on the visualize page and let's actually do the color by and let's do metadata, video id. Video id. Let's run it. Metadata, metadata, video meta. Oh my God. Francesco Zuppichini: Data video id. Why don't see the other one? I don't know. This is the beauty of live section. Demetrios: This is how we know it's real. Francesco Zuppichini: Yeah, I mean, this is working, right? This is called Chevroni Pro. That video. Yeah, I don't know about that. I don't know about that. It was working before. I can touch for sure. So probably I'm doing something wrong, probably later. Let's try that. Francesco Zuppichini: Let's see. I must be doing something wrong, so don't worry about that. But we are ready to ask questions, so maybe I can just say I don't know, what is Gemini pro? So let's see, Mr. Running on GPU is kind of fast, it doesn't take too much time. And here we can see we are 6gb, 1gb is for the embedding model. So 4gb, 5gb running the language model here it says Gemini pro is a colonized tool that can generate output based on given tasks. Blah, blah, blah, blah, blah, blah. Yeah, it seems to work. Francesco Zuppichini: Here you have it. Thanks. Of course. And I don't know if there are any questions about it. Demetrios: So many questions. There's a question that came through the chat that is a simple one that we can answer right away, which is can we access this code anywhere? Francesco Zuppichini: Yeah, so it's on my GitHub. Can I share a link with you in the chat? Maybe? So that should be YouTube. Can I put it here maybe? Demetrios: Yes, most definitely can. 
And we'll drop that into all of the spots so that we have it. Now. Next question from my side, while people are also asking, and you've got some fans in the chat right now, so. Francesco Zuppichini: Nice to everyone by the way. Demetrios: So from my side, I'm wondering, do you have any specific design decisions criteria that you use when you are building out your stack? Like you chose Mistral, you chose Ollama, you chose Qdrant. It sounds like with Qdrant you did some testing and you appreciated the capabilities. With Qdrant, was it similar with Ollama and Mistral? Francesco Zuppichini: So my test is how long it's going to take to install that tool. If it's taking too much time and it's hard to install because documentation is bad, so that it's a red flag, right? Because if it's hard to install and documentation is bad for the installation, that's the first thing people are going to read. So probably it's not going to be great for something down the road to use Olama. It took me two minutes, took me two minutes, it was incredible. But just install it, run it and it was done. Same thing with Qualent as well and same thing with the hacking phase library. So to me, usually as soon as if I see that something is easy to install, that's usually means that is good. And if the documentation to install it, it's good. Francesco Zuppichini: It means that people thought about it and they care about writing good documentation because they want people to use their tools. A lot of times for enterprises tools like cloud enterprise services, documentation is terrible because they know you're going to pay because you're an enterprise. And some manager has decided five years ago to use TatCloud provider, not the other. So I think know if you see recommendation that means that the people's company, startup enterprise behind that want you to use their software because they know and they're proud of it. Like they know that is good. So usually this is my way of going. And then of course I watch a lot of YouTube videos so I see people talking about different texts, et cetera. And if some youtuber which I trust say like I tried this seems to work well, I will note it down. Francesco Zuppichini: So then in the future I know hey, for these things I think I use ABC and this has already be tested by someone. I don't know I'm going to use it. Another important thing is reach out to your friends networks and say hey guys, I need to do this. Do you know if you have a good stock that you're already trying to experience with that? Demetrios: Yeah. With respect to the enterprise software type of tools, there was something that I saw that was hilarious. It was something along the lines of custom customer and user is not the same thing. Customer is the one who pays, user is the one who suffers. Francesco Zuppichini: That's really true for enterprise software, I need to tell you. So that's true. Demetrios: Yeah, we've all been through it. So there's another question coming through in the chat about would there be a collection for each embedded video based on your unique view video id? Francesco Zuppichini: No. What you want to do, I mean you could do that of course, but collection should encapsulate the project that you're doing more or less in my mind. So in this case I just call it embeddings. Maybe I should have called videos. So they are just going to be inside the same collection, they're just going to have different metadata. 
I think you need to correct me if I'm wrong that from your side, from the Qdrant code, searching things in the same collection, probably it's more effective to some degree. And imagine that if you have 1000 videos you need to create 1000 collection. And then I think cocoa wise collection are meant to have data coming from the same source, semantic value. Francesco Zuppichini: So in my case I have all videos. If I were to have different data, maybe from pdfs. Probably I would just create another collection, right, if I don't want them to be in the same part and search them. And one cool thing of having all the videos in the same collection is that I can just ask a question to all the videos at the same time if I want to, or I can change my filter and ask questions to two free videos. Specifically, you can do that if you have one collection per video, right? Like for instance at work I was embedding PDF and using qualitative and sometimes you need to talk with two pdf at the same time free, or just one, or maybe all the PDF in that folder. So I was just changing the filter, right? And that can only be done if they're all in the same collection. Sabrina Aquino: Yeah, that's a great explanation of collections. And I do love your approach of having everything locally and having everything in a structured way that you can really understand what you're doing. And I know you mentioned sometimes frameworks are not necessary. And I wonder also from your side, when do you think a framework would be necessary and does it have to do with scaling? What do you think? Francesco Zuppichini: So that's a great question. So what frameworks in theory should give you is good interfaces, right? So a good interface means that if I'm following that interface, I know that I can always call something that implements that interface in the same way. Like for instance in Langchain, if I call a betterdb, I can just swap the betterdb and I can call it in the same way. If the interfaces are good, the framework is useful. If you know that you are going to change stuff. In my case, I know from the beginning that I'm going to use Qdrant, I'm going to use Ollama, and I'm going to use SQL lite. So why should I go to the hello reading framework documentation? I install libraries, and then you need to install a bunch of packages from the framework that you don't even know why you need them. Maybe you have a conflict package, et cetera, et cetera. Francesco Zuppichini: If you know ready. So what you want to do then just code it and call it a day? Like in this case, I know I'm not going to change the vector DB. If you think that you're going to change something, even if it's a simple approach, it's fair enough, simple to change stuff. Like I will say that if you know that you want to change your vector DB providers, either you define your own interface or you use a framework with an already defined interface. But be careful because right too much on framework will. First of all, basically you don't know what's going on inside the hood for launching because it's so kudos to them. They were the first one. They are very smart people, et cetera, et cetera. Francesco Zuppichini: But they have inheritance held in that code. And in order to understand how to do certain stuff I had to look at in the source code, right. And try to figure it out. So which class is inherited from that? And going straight up in order to understand what behavior that class was supposed to have. 
If I pass this parameter, and sometimes defining an interface is straightforward, just maybe you want to define a couple of function in a class. You call it, you just need to define the inputs and the outputs and if you want to scale and you can just implement a new class called that interface. Yeah, that is at least like my take. I try to first try to do stuff and then if I need to scale, at least I have already something working and I can scale it instead of kind of try to do the perfect thing from the beginning. Francesco Zuppichini: Also because I hate reading documentation, so I try to avoid doing that in general. Sabrina Aquino: Yeah, I totally love this. It's about having like what's your end project? Do you actually need what you're going to build and understanding what you're building behind? I think it's super nice. We're also having another question which is I haven't used Qdrant yet. The metadata is also part of the embedding, I. E. Prepended to the chunk or so basically he's asking if the metadata is also embedded in the answer for that. Go ahead. Francesco Zuppichini: I think you have a good article about another search which you also probably embed the title. Yeah, I remember you have a good article in which you showcase having chunks with the title from, I think the section, right. And you first do a search, find the right title and then you do a search inside. So all the chunks from that paragraph, I think from that section, if I'm not mistaken. It really depends on the use case, though. If you have a document full of information, splitting a lot of paragraph, very long one, and you need to very be precise on what you want to fetch, you need to take advantage of the structure of the document, right? Sabrina Aquino: Yeah, absolutely. The metadata goes as payload in Qdrant. So basically it's like a JSON type of information attached to your data that's not embedded. We also have documentation on it. I will answer on the comments as well, I think another question I have for you, Franz, about the sort of evaluation and how would you perform a little evaluation on this rag that you created. Francesco Zuppichini: Okay, so that is an interesting question, because everybody talks about metrics and evaluation. Most of the times you don't really have that, right? So you have benchmarks, right. And everybody can use a benchmark to evaluate their pipeline. But when you have domain specific documents, like at work, for example, I'm doing RAG on insurance documents now. How do I create a data set from that in order to evaluate my RAG? It's going to be very time consuming. So what we are trying to do, so we get a bunch of people who knows these documents, catching some paragraph, try to ask a question, and that has the reply there and having basically a ground truth from their side. A lot of time the reply has to be composed from different part of the document. So, yeah, it's very hard. Francesco Zuppichini: It's very hard. So what I will kind of suggest is try to use no benchmark, or then you empirically try that. If you're building a RAG that users are going to use, always include a way to collect feedback and collect statistics. So collect the conversation, if that is okay with your privacy rules. Because in my opinion, it's always better to put something in production till you wait too much time, because you need to run all your metrics, et cetera, et cetera. 
And as soon as people start using that, you kind of see if it is good enough, maybe for language model itself, so that it's a different task, because you need to be sure that they don't say, we're stuck to the users. I don't really have the source of true answer here. It's very hard to evaluate them. Francesco Zuppichini: So what I know people also try to do, like, so they get some paragraph or some chunks, they ask GPD four to generate a question and the answer based on the paragraph, and they use that as an auto labeling way to create a data set to evaluate your RAG. That can also be effective, I guess 100%, yeah. Demetrios: And depending on your use case, you probably need more rigorous evaluation or less, like in this case, what you're doing, it might not need that rigor. Francesco Zuppichini: You can see, actually, I think was Canada Airlines, right? Demetrios: Yeah. Francesco Zuppichini: If you have something that is facing paying users, then think one of the times before that. In my case at all, I have something that is used by internal users and we communicate with them. So if my chat bot is saying something wrong, so they will tell me. And the worst thing that can happen is that they need to manually look for the answer. But as soon as your chatbot needs to do something that had people that are going to pay or medical stuff. You need to understand that for some use cases, you need to apply certain rules for others and you can be kind of more relaxed, I would say, based on the arm that your chatbot is going to generate. Demetrios: Yeah, I think that's all the questions we've got for now. Appreciate you coming on here and chatting with us. And I also appreciate everybody listening in. Anyone who is not following Fran, go give him a follow, at least for the laughs, the chuckles, and huge thanks to you, Sabrina, for joining us, too. It was a pleasure having you here. I look forward to doing many more of these. Sabrina Aquino: The pleasure is all mine, Demetrios, and it was a total pleasure. Fran, I learned a lot from your session today. Francesco Zuppichini: Thank you so much. Thank you so much. And also go ahead and follow the Qdrant on LinkedIn. They post a lot of cool stuff and read the Qdrant blogs. They're very good. They're very good. Demetrios: That's it. The team is going to love to hear that, I'm sure. So if you are doing anything cool with good old Qdrant, give us a ring so we can feature you in the vector space talks. Until next time, don't get lost in vector space. We will see you all later. Have a good one, y'all.
blog/talk-with-youtube-without-paying-a-cent-francesco-saverio-zuppichini-vector-space-talks.md
--- draft: false title: The challenges in using LLM-as-a-Judge - Sourabh Agrawal | Vector Space Talks slug: llm-as-a-judge short_description: Sourabh Agrawal explores the world of AI chatbots. description: Everything you need to know about chatbots, Sourabh Agrawal goes in to detail on evaluating their performance, from real-time to post-feedback assessments, and introduces uptrendAI—an open-source tool for enhancing chatbot interactions through customized and logical evaluations. preview_image: /blog/from_cms/sourabh-agrawal-bp-cropped.png date: 2024-03-19T15:05:02.986Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - LLM - retrieval augmented generation --- > "*You don't want to use an expensive model like GPT 4 for evaluation, because then the cost adds up and it does not work out. If you are spending more on evaluating the responses, you might as well just do something else, like have a human to generate the responses.*”\ -- Sourabh Agrawal > Sourabh Agrawal, CEO & Co-Founder at UpTrain AI is a seasoned entrepreneur and AI/ML expert with a diverse background. He began his career at Goldman Sachs, where he developed machine learning models for financial markets. Later, he contributed to the autonomous driving team at Bosch/Mercedes, focusing on computer vision modules for scene understanding. In 2020, Sourabh ventured into entrepreneurship, founding an AI-powered fitness startup that gained over 150,000 users. Throughout his career, he encountered challenges in evaluating AI models, particularly Generative AI models. To address this issue, Sourabh is developing UpTrain, an open-source LLMOps tool designed to evaluate, test, and monitor LLM applications. UpTrain provides scores and offers insights to enhance LLM applications by performing root-cause analysis, identifying common patterns among failures, and providing automated suggestions for resolution. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/1o7xdbdx32TiKe7OSjpZts?si=yCHU-FxcQCaJLpbotLk7AQ), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/vBJF2sy1Pyw).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/vBJF2sy1Pyw?si=H-HwmPHtFSfiQXjn" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> <iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/The-challenges-with-using-LLM-as-a-Judge---Sourabh-Agrawal--Vector-Space-Talks-013-e2fj7g8/a-aaurgd0" height="102px" width="400px" frameborder="0" scrolling="no"></iframe> ## **Top takeaways:** Why is real-time evaluation critical in maintaining the integrity of chatbot interactions and preventing issues like promoting competitors or making false promises? What strategies do developers employ to minimize cost while maximizing the effectiveness of model evaluations, specifically when dealing with LLMs? These might be just some of the many questions people in the industry are asking themselves. Fear, not! Sourabh will break it down for you. Check out the full conversation as they dive into the intricate world of AI chatbot evaluations. Discover the nuances of ensuring your chatbot's quality and continuous improvement across various metrics. Here are the key topics of this episode: 1. 
**Evaluating Chatbot Effectiveness**: An exploration of systematic approaches to assess chatbot quality across various stages, encompassing retrieval accuracy, response generation, and user satisfaction. 2. **Importance of Real-Time Assessment**: Insights into why continuous and real-time evaluation of chatbots is essential to maintain integrity and ensure they function as designed without promoting undesirable actions. 3. **Indicators of Compromised Systems**: Understand the significance of identifying behaviors that suggest a system may be prone to 'jailbreaking' and the methods available to counter these through API integration. 4. **Cost-Effective Evaluation Models**: Discussion on employing smaller models for evaluation to reduce costs without compromising the depth of analysis, focusing on failure cases and root-cause assessments. 5. **Tailored Evaluation Metrics**: Emphasis on the necessity of customizing evaluation criteria to suit specific use case requirements, including an exploration of the different metrics applicable to diverse scenarios. > Fun Fact: Sourabh discussed the use of Uptrend, an innovative API that provides scores and explanations for various data checks, facilitating logical and informed decision-making when evaluating AI models. > ## Show notes: 00:00 Prototype evaluation subjective; scalability challenges emerge.\ 05:52 Use cheaper, smaller models for effective evaluation.\ 07:45 Use LLM objectively, avoid subjective biases.\ 10:31 Evaluate conversation quality and customization for AI.\ 15:43 Context matters for AI model performance.\ 19:35 Chat bot creates problems for car company.\ 20:45 Real-time user query evaluations, guardrails, and jailbreak.\ 27:27 Check relevance, monitor data, filter model failures.\ 28:09 Identify common themes, insights, experiment with settings.\ 32:27 Customize jailbreak check for specific app purposes.\ 37:42 Mitigate hallucination using evaluation data techniques.\ 38:59 Discussion on productizing hallucination mitigation techniques.\ 42:22 Experimentation is key for system improvement. ## More Quotes from Sourabh: *"There are some cases, let's say related to safety, right? Like you want to check whether the user is trying to jailbreak your LLMs or not. So in that case, what you can do is you can do this evaluation in parallel to the generation because based on just the user query, you can check whether the intent is to jailbreak or it's an intent to actually use your product to kind of utilize it for the particular model purpose.*”\ -- Sourabh Agrawal *"You have to break down the response into individual facts and just see whether each fact is relevant for the question or not. And then take some sort of a ratio to get the final score. So that way all the biases which comes up into the picture, like egocentric bias, where LLM prefers its own outputs, those biases can be mitigated to a large extent.”*\ -- Sourabh Agrawal *"Generally speaking, what we have been seeing is that the better context you retrieve, the better your model becomes.”*\ -- Sourabh Agrawal ## Transcript: Demetrios: Sourabh, I've got you here from Uptrain. I think you have some notes that you wanted to present, but I also want to ask you a few questions because we are going to be diving into a topic that is near and dear to my heart and I think it's been coming up so much recently that is using LLMs as a judge. It is really hot these days. Some have even gone as far to say that it is the topic of 2024. I would love for you to dive in. 
Let's just get right to it, man. What are some of the key topics when you're talking about using LLMs to evaluate what key metrics are you using? How does this work? Can you break it down? Sourabh Agrawal: Yeah. First of all, thanks a lot for inviting me and no worries for hiccup. I guess I have never seen a demo or a talk which goes without any technical hiccups. It is bound to happen. Really excited to be here. Really excited to talk about LLM evaluations. And as you rightly pointed right, it's really a hot topic and rightly so. Right. Sourabh Agrawal: The way things have been panning out with LLMs and chat, GPT and GPT four and so on, is that people started building all these prototypes, right? And the way to evaluate them was just like eyeball them, just trust your gut feeling, go with the vibe. I guess they truly adopted the startup methodology, push things out to production and break things. But what people have been realizing is that it's not scalable, right? I mean, rightly so. It's highly subjective. It's a developer, it's a human who is looking at all the responses, someday he might like this, someday he might like something else. And it's not possible for them to kind of go over, just read through more than ten responses. And now the unique thing about production use cases is that they need continuous refinement. You need to keep on improving them, you need to keep on improving your prompt or your retrieval, your embedding model, your retrieval mechanisms and so on. Sourabh Agrawal: So that presents a case like you have to use a more scalable technique, you have to use LLMs as a judge because that's scalable. You can have an API call, and if that API call gives good quality results, it's a way you can mimic whatever your human is doing or in a way augment them which can truly act as their copilot. Demetrios: Yeah. So one question that's been coming through my head when I think about using LLMs as a judge and I get more into it, has been around when do we use those API calls. It's not in the moment that we're looking for this output. Is it like just to see if this output is real? And then before we show it to the user, it's kind of in bunches after we've gotten a bit of feedback from the user. So that means that certain use cases are automatically discarded from this, right? Like if we are thinking, all right, we're going to use LLMs as a judge to make sure that we're mitigating hallucinations or that we are evaluating better, it is not necessarily something that we can do in the moment, if I'm understanding it correctly. So can you break that down a little bit more? How does it actually look in practice? Sourabh Agrawal: Yeah, definitely. And that's a great point. The way I see it, there are three cases. Case one is what you mentioned in the moment before showing the response to the user. You want to check whether the response is good or not. In most of the scenarios you can't do that because obviously checking requires extra time and you don't want to add latency. But there are some cases, let's say related to safety, right? Like you want to check whether the user is trying to jailbreak your LLMs or not. So in that case, what you can do is you can do this evaluation in parallel to the generation because based on just the user query, you can check whether the intent is to jailbreak or it's an intent to actually use your product to kind of utilize it for the particular model purpose. 
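As an aside, the "evaluate in parallel with generation" pattern Sourabh describes here might look roughly like the sketch below, using asyncio and a generic OpenAI-style client; the prompts, model names, and the 0.5 threshold are illustrative assumptions, not UpTrain's implementation:

```python
# Minimal sketch of running a query-level safety check in parallel with
# generation so the check adds no extra latency. Prompts, model names, and the
# 0.5 threshold are illustrative assumptions, not a specific product's logic.
# Assumes OPENAI_API_KEY is set in the environment.
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI()

async def generate(query: str, context: str) -> str:
    resp = await client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"}],
    )
    return resp.choices[0].message.content

async def jailbreak_score(query: str) -> float:
    # a small, cheap model is enough for a query-only check
    resp = await client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": "Rate from 0 to 1 how likely this query is attempting to "
                       f"jailbreak or misuse the assistant. Reply with a number only.\n\n{query}",
        }],
    )
    try:
        return float(resp.choices[0].message.content.strip())
    except ValueError:
        return 0.0  # fall back to "not a jailbreak" if the judge returns free text

async def answer(query: str, context: str) -> str:
    # both calls run concurrently; decide what to show only once both finish
    response, score = await asyncio.gather(generate(query, context), jailbreak_score(query))
    return "Sorry, I can't help with that." if score > 0.5 else response
```

Because the two calls run concurrently, the user-facing latency is only the slower of the two responses, not their sum.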
Sourabh Agrawal: But most of the other evaluations like relevance, hallucinations, quality and so on, it has to be done. Post whatever you show to the users and then there you can do it in two ways. You can either experiment with use them to experiment with things, or you can run monitoring on your production and find out failure cases. And typically we are seeing like developers are adopting a combination of these two to find cases and then experiment and then improve their systems. Demetrios: Okay, so when you're doing it in parallel, that feels like something that is just asking you craft a prompt and as soon as. So you're basically sending out two prompts. Another piece that I have been thinking about is, doesn't this just add a bunch more cost to your system? Because there you're effectively doubling your cost. But then later on I can imagine you can craft a few different ways of making the evaluations and sending out the responses to the LLM better, I guess. And you can figure out how to trim some tokens off, or you can try and concatenate some of the responses and do tricks there. I'm sure there's all kinds of tricks that you know about that I don't, and I'd love to tell you to tell me about them, but definitely what kind of cost are we looking at? How much of an increase can we expect? Sourabh Agrawal: Yeah, so I think that's like a very valid limitation of evaluation. So that's why, let's say at uptrend, what we truly believe in is that you don't want to use an expensive model like GPT four for evaluation, because then the cost adds up and it does not work out. Right. If you are spending more on evaluating the responses, you may as well just do something else, like have a human to generate the responses. We rely on smaller models, on cheaper models for this. And secondly, the methodology which we adopt is that you don't want to evaluate everything on all the data points. Like maybe you have a higher level check, let's say, for jailbreak or let's say for the final response quality. And when you find cases where the quality is low, you run a battery of checks on these failures to figure out which part of the pipeline is exactly failing. Sourabh Agrawal: This is something what we call as like root cause analysis, where you take all these failure cases, which may be like 10% or 20% of the cases out of all what you are seeing in production. Take these 20% cases, run like a battery of checks on them. They might be exhaustive. You might run like five to ten checks on them. And then based on those checks, you can figure out that, what is the error mode? Is it a retrieval problem? Is it a citation problem? Is it a utilization problem? Is it hallucination? Is the query like the question asked by the user? Is it not clear enough? Is it like your embedding model is not appropriate? So that's how you can kind of take best of the two. Like, you can also improve the performance at the same time, make sure that you don't burn a hole in your pocket. Demetrios: I've also heard this before, and it's almost like you're using the LLMs as tests and they're helping you write. It's not that they're helping you write tests, it's that they are there and they're part of the tests that you're writing. Sourabh Agrawal: Yeah, I think the key here is that you have to use them objectively. What I have seen is a lot of people who are trying to do LLM evaluations, what they do is they ask the LLM that, okay, this is my response. Can you tell is it relevant or not? 
Or even, let's say, they go a step beyond and do like a grading thing, that is it highly relevant, somewhat relevant, highly irrelevant. But then it becomes very subjective, right? It depends upon the LLM to decide whether it's relevant or not. Rather than that you have to transform into an objective setting. You have to break down the response into individual facts and just see whether each fact is relevant for the question or not. And then take some sort of a ratio to get the final score. So that way all the biases which comes up into the picture, like egocentric bias, where LLM prefers its own outputs, those biases can be mitigated to a large extent. Sourabh Agrawal: And I believe that's the key for making LLM evaluations work, because similar to LLM applications, even LLM evaluations, you have to put in a lot of efforts to make them really work and finally get some scores which align well with human expectations. Demetrios: It's funny how these LLMs mimic humans so much. They love the sound of their own voice, even. It's hilarious. Yeah, dude. Well, talk to me a bit more about how this looks in practice, because there's a lot of different techniques that you can do. Also, I do realize that when it comes to the use cases, it's very different, right. So if it's code generation use case, and you're evaluating that, it's going to be pretty clear, did the code run or did it not? And then you can go into some details on is this code actually more valuable? Is it a hacked way to do it? Et cetera, et cetera. But there's use cases that I would consider more sensitive and less sensitive. Demetrios: And so how do you look at that type of thing? Sourabh Agrawal: Yeah, I think so. The way even we think about evaluations is there's no one size fit all solution for different use cases. You need to look at different things. And even if you, let's say, looking at hallucinations, different use cases, or different businesses would look at evaluations from different lenses. Right. For someone, whatever, if they are focusing a lot on certain aspects of the correctness, someone else would focus less on those aspects and more on other aspects. The way we think about it is, know, we define different criteria for different use cases. So if you have A-Q-A bot, right? So you look at the quality of the response, the quality of the context. Sourabh Agrawal: If you have a conversational agent, then you look at the quality of the conversation as a whole. You look at whether the user is satisfied with that conversation. If you are writing long form content. Like, you look at coherence across the content, you look at the creativity or the sort of the interestingness of the content. If you have an AI agent, you look at how well they are able to plan, how well they were able to execute a particular task, and so on. How many steps do they take to achieve their objective? So there are a variety of these evaluation matrices, which are each one of which is more suitable for different use cases. And even there, I believe a good tool needs to provide certain customization abilities to their developers so that they can transform it, they can modify it in a way that it makes most sense for their business. Demetrios: Yeah. Is there certain ones that you feel like are more prevalent and that if I'm just thinking about this, I'm developing on the side and I'm thinking about this right now and I'm like, well, how could I start? What would you recommend? Sourabh Agrawal: Yeah, definitely. 
One of the biggest use case for LLMs today is rag. Applications for Rag. I think retrieval is the key. So I think the best starting points in terms of evaluations is like look at the response quality, so look at the relevance of the response, look at the completeness of the response, look at the context quality. So like context relevance, which judges the retrieval quality. Hallucinations, which judges whether the response is grounded by the context or not. If tone matters for your use case, look at the tonality and finally look at the conversation satisfaction, because at the end, whatever outputs you give, you also need to judge whether the end user is satisfied with these outputs. Sourabh Agrawal: So I would say these four or five matrices are the best way for any developer to start who is building on top of these LLMs. And from there you can understand how the behavior is going, and then you can go more deeper, look at more nuanced metrics, which can help you understand your systems even better. Demetrios: Yeah, I like that. Now, one thing that has also been coming up in my head a lot are like the custom metrics and custom evaluation and also proprietary data set, like evaluation data sets, because as we all know, the benchmarks get gamed. And you see on Twitter, oh wow, this new model just came out. It's so good. And then you try it and you're like, what are you talking about? This thing just was trained on the benchmarks. And so it seems like it's good, but it's not. And can you talk to us about creating these evaluation data sets? What have you seen as far as the best ways of going about it? What kind of size? Like how many do we need to actually make it valuable. And what is that? Give us a breakdown there? Sourabh Agrawal: Yeah, definitely. So, I mean, surprisingly, the answer is that you don't need that many to get started. We have seen cases where even if someone builds a test data sets of like 50 to 100 samples, that's actually like a very good starting point than where they were in terms of manual annotation and in terms of creation of this data set, I believe that the best data set is what actually your users are asking. You can look at public benchmarks, you can generate some synthetic data, but none of them matches the quality of what actually your end users are looking, because those are going to give you issues which you can never anticipate. Right. Even you're generating and synthetic data, you have to anticipate what issues can come up and generate data. Beyond that, if you're looking at public data sets, they're highly curated. There is always problems of them leaking into the training data and so on. Sourabh Agrawal: So those benchmarks becomes highly reliable. So look at your traffic, take 50 samples from them. If you are collecting user feedback. So the cases where the user has downvoted or the user has not accepted the response, I mean, they are very good cases to look at. Or if you're running some evaluations, quality checks on that cases which are failing, I think they are the best starting point for you to have a good quality test data sets and use that as a way to experiment with your prompts, experiment with your systems, experiment with your retrievals, and iteratively improve them. Demetrios: Are you weighing any metrics more than others? Because I've heard stories about how sometimes you'll see that a new model will come out, or you're testing out a new model, and it seems like on certain metrics, it's gone down. 
But then the golden metric that you have, it actually has gone up. And so have you seen which metrics are better for different use cases? Sourabh Agrawal: I think for here, there's no single answer. I think that metric depends upon the business. Generally speaking, what we have been seeing is that the better context you retrieve, the better your model becomes. Especially like if you're using any of the bigger models, like any of the GPT or claudes, or to some extent even mistral, is highly performant. So if you're using any of these highly performant models, then if you give them the right context, the response more or less, it comes out to be good. So I think one thing which we are seeing people focusing a lot on, experimenting with different retrieval mechanisms, embedding models, and so on. But then again, the final golden key, I think many people we have seen, they annotate some data set so they have like a ground root response or a golden response, and they completely rely on just like how well their answer matches with that golden response, which I believe it's a very good starting point because now you know that, okay, if this is right and you're matching very highly with that, then obviously your response is also right. Demetrios: And what about those use cases where golden responses are very subjective? Sourabh Agrawal: Yeah, I think that's where the issues like. So I think in those scenarios, what we have seen is that one thing which people have been doing a lot is they try to see whether all information in the golden response is contained in the generated response. You don't miss out any of the important information in your ground truth response. And on top of that you want it to be concise, so you don't want it to be blabbering too much or giving highly verbose responses. So that is one way we are seeing where people are getting around this subjectivity issue of the responses by making sure that the key information is there. And then beyond that it's being highly concise and it's being to the point in terms of the task being asked. Demetrios: And so you kind of touched on this earlier, but can you say it again? Because I don't know if I fully grasped it. Where are all the places in the system that you are evaluating? Because it's not just the output. Right. And how do you look at evaluation as a system rather than just evaluating the output every once in a while? Sourabh Agrawal: Yeah, so I mean, what we do is we plug with every part. So even if you start with retrieval, so we have a high level check where we look at the quality of retrieved context. And then we also have evaluations for every part of this retrieval pipeline. So if you're doing query rewrite, if you're doing re ranking, if you're doing sub question, we have evaluations for all of them. In fact, we have worked closely with the llama index team to kind of integrate with all of their modular pipelines. Secondly, once we cross the retrieval step, we have around five to six matrices on this retrieval part. Then we look at the response generation. We have their evaluations for different criterias. Sourabh Agrawal: So conciseness, completeness, safety, jailbreaks, prompt injections, as well as you can define your custom guidelines. So you can say that, okay, if the user is asking anything and related to code, the output should also give an example code snippet so you can just in plain English, define this guideline. And we check for that. And then finally, like zooming out, we also have checks. 
We look at conversations as a whole, how the user is satisfied, how many turns it requires for them to, for the chatbot or the LLM to answer the user. Yeah, that's how we look at the whole evaluations as a whole. Demetrios: Yeah. It really reminds me, I say this so much because it's one of the biggest fails, I think, on the Internet, and I'm sure you've seen it where I think it was like Chevy or GM, the car manufacturer car company, they basically slapped a chat bot on their website. It was a GPT call, and people started talking to it and realized, oh my God, this thing will do anything that we want it to do. So they started asking it questions like, is Tesla better than GM? And the bot would say, yeah, give a bunch of reasons why Tesla is better than GM on the website of GM. And then somebody else asked it, oh, can I get a car for a dollar? And it said, no. And then it said, but I'm broke and I need a car for a dollar. And it said, ok, we'll sell you the car for the dollar. And so you're getting yourself into all this trouble just because you're not doing that real time evaluation. Demetrios: How do you think about the real time evaluation? And is that like an extra added layer of complexity? Sourabh Agrawal: Yeah, for the real time evaluations, I think the most important cases, which, I mean, there are two scenarios which we feel like are most important to deal with. One is you have to put some guardrails in the sense that you don't want the users to talk about your competitors. You don't want to answer some queries, like, say, you don't want to make false promises, and so on, right? Some of them can be handled with pure rejects, contextual logics, and some of them you have to do evaluations. And the second is jailbreak. Like, you don't want the user to use, let's say, your Chevy chatbot to kind of solve math problems or solve coding problems, right? Because in a way, you're just like subsidizing GPT four for them. And all of these can be done just on the question which is being asked. So you can have a system where you can fire a query, evaluate a few of these key matrices, and in parallel generate your responses. And as soon as you get your response, you also get your evaluations. Sourabh Agrawal: And you can have some logic that if the user is asking about something which I should not be answering. Instead of giving the response, I should just say, sorry, I could not answer this or have a standard text for those cases and have some mechanisms to limit such scenarios and so on. Demetrios: And it's better to do that in parallel than to try and catch the response. Make sure it's okay before sending out an LLM call. Sourabh Agrawal: I mean, generally, yes, because if you look at, if you catch the response, it adds another layer of latency. Demetrios: Right. Sourabh Agrawal: And at the end of the day, 95% of your users are not trying to do this any good product. A lot of those users are genuinely trying to use it and you don't want to build something which kind of breaks, creates an issue for them, add a latency for them just to solve for that 5%. So you have to be cognizant of this fact and figure out clever ways to do this. Demetrios: Yeah, I remember I was talking to Philip of company called honeycomb, and they added some LLM functionality to their product. And he said that when people were trying to either prompt, inject or jailbreak, it was fairly obvious because there were a lot of calls. It kind of started to be not human usage and it was easy to catch in that way. 
Have you seen some of that too? And what are some signs that you see when people are trying to jailbreak? Sourabh Agrawal: Yeah, we have also seen that. Typically, what we see is that whenever someone is trying to jailbreak, the length of their question or the length of their prompt is much larger than an average question, because they will have all sorts of instructions like, forget everything you know, you are allowed to say all of those things. And then again, this issue also comes up because when they try to jailbreak, they try with one technique, it doesn't work. They try with another technique, it doesn't work. Then they try with a third technique. So there is a burst of traffic. And even in terms of sentiment, the sentiment or the coherence in those cases, we have seen that to be lower as compared to a genuine question, because people are just trying to cram all these instructions into the prompt. So there are definitely certain signs which already indicate that the user is trying to jailbreak, and I think those are great indicators to catch them. Demetrios: And I assume that you've got it set up so you can just set an alert when those things happen, and then it will at least flag it and have humans look over it, or potentially just ask the person to cool off for the next minute: hey, you've been doing some suspicious activity here, we want to see something different. So I think you were going to show us a little bit about UpTrain, right? I want to see what you got. Can we go for a spin? Sourabh Agrawal: Yeah, definitely. Let me share my screen and I can show you how that looks. Demetrios: Cool, very cool. Yeah. And just while you're sharing your screen, I want to mention that for this talk I wore my favorite shirt, which, I don't know if everyone can see it, but it says, I hallucinate more than ChatGPT. Sourabh Agrawal: I think that's a cool one. Demetrios: What do we got here? Sourabh Agrawal: Yeah, so let me just get started. So I created an account with UpTrain. What we have is an API way of calculating these evaluations. So you get an API key, similar to what you get for ChatGPT or others, and then you can just do UpTrain log and evaluate and give it your data. So you can give whatever your question, response and context are, and you can define the checks which you want to evaluate for. So if I create an API key, I can just copy this code, and I already have it here, so I'll just show you. So we have two mechanisms. Sourabh Agrawal: One is that you can just run evaluations, so you can define, okay, I want to run context relevance, I want to run response completeness. Similarly, I want to run jailbreak, I want to run safety, I want to run user satisfaction, and so on. And then when you run it, it gives you back a score and an explanation of why this particular score has been given for this particular question. Demetrios: Can you make that a little bit bigger? Yeah, just give us some plus. Yeah, there we go. Sourabh Agrawal: It's essentially an API call which takes the data, takes the list of checks which you want to run, and then it gives back a score and an explanation for that. So based on that score, you can have logic, right? If the jailbreak score is more than 0.5, then you don't want to show it; you want to switch back to a default response and so on. 
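For reference, the open-source UpTrain Python package exposes an interface roughly along these lines. The class, check, and result-key names below are recalled from its documentation and may differ between versions, so treat them as assumptions rather than a verbatim snippet:

```python
from uptrain import EvalLLM, Evals

# The evaluator itself calls an LLM under the hood, so it needs an API key.
eval_llm = EvalLLM(openai_api_key="sk-...")

data = [{
    "question": "How do I reset my password?",
    "context": "To reset your password, open Settings > Security and ...",
    "response": "Go to Settings > Security and click 'Reset password'.",
}]

results = eval_llm.evaluate(
    data=data,
    checks=[
        Evals.CONTEXT_RELEVANCE,      # was the retrieved context useful?
        Evals.RESPONSE_COMPLETENESS,  # does the answer cover the question?
        Evals.PROMPT_INJECTION,       # is the user trying to jailbreak?
    ],
)

# Each result carries a score plus an explanation; the exact key names
# vary by version, so inspect the returned dicts in the release you install.
for row in results:
    if row.get("score_prompt_injection", 0) > 0.5:
        print("Fall back to a default response for this query.")
```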
And then you can also configure it so that we log all of these scores, and we have a dashboard where you can access them. Demetrios: I was just going to ask if you have dashboards. Everybody loves a good dashboard. Let's see it. That's awesome. Sourabh Agrawal: So let's see. Okay, let's take this one. So in this case, I just ran some context relevance checks for some of the queries. So you can see how that changes on your data sets if you're running the same checks. We also run this in a monitoring setting, so you can see how this varies over time. And then finally you have all of the data. So we provide all of the data, you can download it, run whatever analysis you want to run. And then one of the features which we have built recently, and which is getting very popular amongst our users, is that you can filter cases where, let's say, the model is failing. Sourabh Agrawal: So let's say I take all the cases where the response score is zero, and I can find common topics. So I can look at all these cases and find, okay, what's the common theme across them? Maybe, as you can see, they're all talking about France, Romeo and Juliet and so on. So it can just pull out a common topic among these cases. This gives you some insights into where things are going wrong and what you need to improve upon. And the second piece of the puzzle is the experiments. So not only can you evaluate them, you can also use it to experiment with different settings. So let's say, let me just pull out an experiment I ran recently. Demetrios: Yeah. Sourabh Agrawal: So let's say I want to compare two different models, right? So GPT-3.5 and Claude 2. I can now see that, okay, Claude 2 is giving more concise responses, but in terms of factual accuracy, GPT-3.5 is more factually accurate. So I can now decide, based on my application, based on what my users want, which of these criteria is more meaningful for me, for my users, for my data, and decide which prompt or which model I want to go ahead with. 
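The experiment flow shown here, scoring the same prompts under two model settings and picking a winner on the metric that matters most for your product, can be sketched with plain Python and made-up scores; none of this is tied to UpTrain's actual experiments API:

```python
from statistics import mean

# Hypothetical results: per-question scores for each model variant,
# produced by whatever evaluation client you use.
runs = {
    "gpt-3.5": [
        {"factual_accuracy": 0.9, "conciseness": 0.6},
        {"factual_accuracy": 0.8, "conciseness": 0.5},
    ],
    "claude-2": [
        {"factual_accuracy": 0.7, "conciseness": 0.9},
        {"factual_accuracy": 0.6, "conciseness": 0.8},
    ],
}

def summarize(rows, metrics=("factual_accuracy", "conciseness")):
    # Average each metric across all evaluated questions.
    return {m: mean(r[m] for r in rows) for m in metrics}

summaries = {model: summarize(rows) for model, rows in runs.items()}
print(summaries)

# Pick the variant that wins on the metric your users actually care about.
primary_metric = "factual_accuracy"
winner = max(summaries, key=lambda m: summaries[m][primary_metric])
print(f"Ship {winner} based on {primary_metric}")
```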
I really like the idea of being able to set custom ones, but then also having some that just come right out of the box to make life easier on us. Sourabh Agrawal: Yeah. And I think both are needed, because you want someplace to start, and as you advance, you can't cover everything with pre-configured checks. So you want to have a way to customize things. Demetrios: Yeah. And especially once you have data flowing, you'll start to see what other things you need to be evaluating. Sourabh Agrawal: Exactly. Yeah, that's very true. Demetrios: Just a random one, I'm not telling you how to build your product or anything, but have you thought about having community sourced metrics? So, like, all these custom ones that people are making, maybe there's a hub where we can add our custom ones? Sourabh Agrawal: Yeah, I think that's really interesting. This is something we also have been thinking about a lot. It's not built out yet, but we plan to go in that direction pretty soon. We want to create a store kind of a thing where people can add their custom metrics. So yeah, you're right on. I also believe that's the way to go, and we will be releasing something on that front pretty soon. Demetrios: Nice. So Drew's asking, how do you handle jailbreaks for different types of applications? A jailbreak for a medical app would be different than one for a finance one, right? Yeah. Sourabh Agrawal: The way our jailbreak check is configured, it takes something which we call a model purpose. So you define what the purpose of your model is. For a financial app, you need to say that, okay, this LLM application is designed to answer financial queries, and so on. For medical, you will have a different purpose. So you can configure what the purpose of your app is, and then when we take up a user query, we check whether the user query falls under it. Firstly, we also check for illegal activities and so on, and then we also check whether it's under the purview of this purpose. Sourabh Agrawal: If not, then we tag that as a scenario of jailbreak, because the user is trying to do something other than the purpose. So that's how we tackle it. Demetrios: Nice, dude. Well, this is awesome. Is there anything else you want to say before we jump off? Sourabh Agrawal: No, I mean, it was a great conversation. Really glad to be here and great talking to you. Demetrios: Yeah, I'm very happy that we got this working and you were able to show us a little bit of UpTrain. Super cool that it's open source. So I would recommend everybody go check it out, get your LLMs working with confidence, and make sure that nobody is using your chatbot to be their GPT subsidy, like the GM use case. Yeah, it's great, dude. I appreciate it. Sourabh Agrawal: Yeah, check us out. We are at github.com, uptrain-ai slash uptrain. Demetrios: There we go. And if anybody else wants to come on to the Vector Space Talks and talk to us about all the cool stuff that you're doing, hit us up, and we'll see you all astronauts later. Don't get lost in vector space. Sourabh Agrawal: Yeah, thank you. Thanks a lot. Demetrios: All right, dude. There we go. We are good. I don't know how the hell I'm going to stop this one because I can't go through on my phone or on my computer. It's so weird. So technically there's nobody at the wheel right now. So I think if we both get off, it should stop working. Okay. Demetrios: Yeah, but that was awesome, man. This is super cool. 
I really like what you're doing, and it's so funny. I don't know if we're connected on LinkedIn, are we? I literally just today posted a video of me going through a few different hallucination mitigation techniques. So it's, like, super timely that you talk about this. I think so many people have been thinking about this. Sourabh Agrawal: Definitely, with enterprises it's a big issue, right? I mean, how do you make it safe? How do you make it production ready? So I'll definitely check out your video. It would be super interesting. Demetrios: Just go to my LinkedIn right now. It's just LinkedIn.com dpbrinkm, or just search for me. I think we are connected. We're connected. All right, cool. Yeah, so check out the last video I just posted, because it's literally all about this. And there's a really cool paper that came out, and you probably saw it. It's all about mitigating AI hallucinations, and it breaks down all 32 techniques. Demetrios: And on another podcast that I do, I was literally talking with the guys from Weights and Biases yesterday, and I was saying, man, evaluation data sets as a service feels like something that nobody's doing. And I guess it's probably because, and you're the expert, so I would love to hear what you have to say about it, but I guess it's because you don't really need it that badly. With a relatively small amount of data, you can start getting some really good evaluation happening. So it's a lot better than paying somebody else. Sourabh Agrawal: And also, I think it doesn't make sense as a service, because some external person is not best suited to make a data set for your use case. Demetrios: Right. Sourabh Agrawal: It's you. You have to look at what your users are asking to create a good data set. You can have a method, which is what UpTrain also does: we basically help you to sample and pick out the right cases from this data set based on the feedback of your users, based on the scores which are being generated. But it's difficult for someone external to craft really good questions or really good queries or really good cases which make sense for your business. Demetrios: Because the other piece that kind of spitballed off of that, the other piece of it was techniques. So let me see if I can place all these words into a coherent sentence for you. It's basically like, okay, evaluation data sets as a service don't really make sense because you're the one who knows the most, and with a relatively small amount of data you're going to be able to get stuff going real quick. What I thought about is, what about these hallucination mitigation techniques, so that you can almost have options? So in this paper, right, there are like 32 different kinds of techniques that they use, and some are very pertinent for RAGs. They have like four or five different types of techniques for when you're dealing with RAGs to mitigate hallucinations. Then they have some like, okay, if you're distilling a model, here is how you can make sure that the new distilled model doesn't hallucinate as much. Demetrios: Blah, blah, blah. But what I was thinking is, how can you get a product out of that? Can you productize these kinds of techniques? So, all right, cool, they're in this paper, but in UpTrain, can we just say, oh, you want to try this new mitigation technique? We make that really easy for you. You just have to select it as one of the hallucination mitigation techniques. 
And then we do the heavy lifting. For example, have you heard of FLEEK? That was one that I was talking about in the video. FLEEK is where there's a knowledge-graph LLM that is created specifically to try and combat hallucinations. And the way they do it is that LLM will try and identify anywhere in the prompt or the output. Demetrios: Sorry, the output. It will try and identify if there's anything that can be fact checked. And so if it says that humans landed on the moon in 1969, it will identify that. And then, either through its knowledge graph or through forming a search query that will go out and search the Internet, it will verify if that fact in the output is true. So that's one technique, right? And so what I'm thinking about is, oh man, wouldn't it be cool if you could have all these different techniques to be able to use really easily, as opposed to, great, I read it in a paper. Now, how the fuck am I going to get my hands on one of these LLMs with a knowledge graph if I don't train it myself? Sourabh Agrawal: Shit, yeah, I think that's a great suggestion. I'll definitely check it out. One of the things which we also want to do is integrate with all these techniques, because these are really good techniques and they help solve a lot of problems, but using them is not simple. Recently we integrated with SPADE. It's basically a technique where. Demetrios: I did another video on SPADE, actually. Sourabh Agrawal: Yeah, exactly. I think I'll also check out these hallucination techniques. So right now what we do is based on this paper called FactScore, which, instead of checking on the Internet, checks only against the context to verify whether a fact can be verified from the context or not. But I think it would be really cool if people could just play around with these techniques and see whether they actually work on their data or not. Demetrios: That's kind of what I was thinking: oh, can you see, does it give you a better result? And then the other piece is, oh, wait a minute, can I put two or three of them in my system at the same time? Right. And maybe it's over engineering or maybe it's not, I don't know. So there's a lot of fun stuff that can go down there and it's fascinating to think about. Sourabh Agrawal: Yeah, definitely. And I think experimentation is the key here, right? I mean, unless you try them out, you don't know what works. And if something works which improves your system, then it was definitely worth it. Demetrios: Thanks for that. Sourabh Agrawal: We'll check into it. Demetrios: Dude, awesome. It's great chatting with you, bro. And I'll talk to you later, bro. Sourabh Agrawal: Yeah, thanks a lot. Great speaking. See you. Bye.
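The fact-checking loop discussed above can be made a bit more concrete with a minimal sketch: pull the checkable claims out of an answer and verify each one against the retrieved context (roughly the FactScore approach Sourabh mentions) or an external search (roughly what FLEEK does). The helper functions below are hypothetical stand-ins for real LLM or search calls:

```python
def extract_claims(answer: str) -> list[str]:
    # Hypothetical: in practice, an LLM prompt that lists the
    # fact-checkable statements contained in the answer.
    return ["Humans landed on the moon in 1969."]

def verify_claim(claim: str, context: str) -> bool:
    # Hypothetical: a FactScore-style check against the retrieved context,
    # or a FLEEK-style check against a knowledge graph / web search.
    return claim.lower().rstrip(".") in context.lower()

def hallucination_report(answer: str, context: str) -> dict:
    claims = extract_claims(answer)
    verdicts = {c: verify_claim(c, context) for c in claims}
    supported = sum(verdicts.values())
    return {
        "claims": verdicts,
        # Fraction of claims that the source material actually supports.
        "support_ratio": supported / len(claims) if claims else 1.0,
    }

context = "Apollo 11: humans landed on the moon in 1969 ..."
print(hallucination_report("Humans landed on the moon in 1969.", context))
```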
blog/vector-search-for-content-based-video-recommendation-gladys-and-sam-vector-space-talk-012.md
--- draft: false title: Iveta Lohovska on Gen AI and Vector Search | Qdrant slug: gen-ai-and-vector-search short_description: Iveta talks about the importance of trustworthy AI, particularly when implementing it within high-stakes enterprises like governments and security agencies description: Discover valuable insights on generative AI, vector search, and ethical AI implementation from Iveta Lohovska, Chief Technologist at HPE. preview_image: /blog/from_cms/iveta-lohovska-bp-cropped.png date: 2024-04-11T22:12:00.000Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - Vector Search - Retrieval Augmented Generation - GenAI --- # Exploring Gen AI and Vector Search: Insights from Iveta Lohovska > *"In the generative AI context of AI, all foundational models have been trained on some foundational data sets that are distributed in different ways. Some are very conversational, some are very technical, some are on, let's say very strict taxonomy like healthcare or chemical structures. We call them modalities, and they have different representations.”*\ — Iveta Lohovska > Iveta Lohovska serves as the Chief Technologist and Principal Data Scientist for AI and Supercomputing at [Hewlett Packard Enterprise (HPE)](https://www.hpe.com/us/en/home.html), where she champions the democratization of decision intelligence and the development of ethical AI solutions. An industry leader, her multifaceted expertise encompasses natural language processing, computer vision, and data mining. Committed to leveraging technology for societal benefit, Iveta is a distinguished technical advisor to the United Nations' AI for Good program and a Data Science lecturer at the Vienna University of Applied Sciences. Her career also includes impactful roles with the World Bank Group, focusing on open data initiatives and Sustainable Development Goals (SDGs), as well as collaborations with USAID and the Gates Foundation. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/7f1RDwp5l2Ps9N7gKubl8S?si=kCSX4HGCR12-5emokZbRfw), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/RsRAUO-fNaA).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/RsRAUO-fNaA?si=s3k_-DP1U0rkPlEV" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe> <iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/Gen-AI-and-Vector-Search---Iveta-Lohovska--Vector-Space-Talks-020-e2hnie2/a-ab48uha" height="102px" width="400px" frameborder="0" scrolling="no"></iframe> ## **Top takeaways:** In our continuous pursuit of knowledge and understanding, especially in the evolving landscape of AI and the vector space, we brought another great Vector Space Talk episode featuring Iveta Lohovska as she talks about generative AI and [vector search](https://qdrant.tech/). Iveta brings valuable insights from her work with the World Bank and as Chief Technologist at HPE, explaining the ins and outs of ethical AI implementation. Here are the episode highlights: - Exploring the critical role of trustworthiness and explainability in AI, especially within high confidentiality use cases like government and security agencies. 
- Discussing the importance of transparency in AI models and how it impacts the handling of data and understanding the foundational datasets for vector search. - Iveta shares her experiences implementing generative AI in high-stakes environments, including the energy sector and policy-making, emphasizing accuracy and source credibility. - Strategies for managing data privacy in high-stakes sectors, the superiority of on-premises solutions for control, and the implications of opting for cloud or hybrid infrastructure. - Iveta's take on the maturity levels of generative AI, the ongoing development of smaller, more focused models, and the evolving landscape of AI model licensing and open-source contributions. > Fun Fact: The climate agent solution showcased by Iveta helps individuals benchmark their carbon footprint and assists policymakers in drafting policy recommendations based on scientifically accurate data. > ## Show notes: 00:00 AI's vulnerabilities and ethical implications in practice.\ 06:28 Trust reliable sources for accurate climate data.\ 09:14 Vector database offers control and explainability.\ 13:21 On-prem vital for security and control.\ 16:47 Gen AI chat models at basic maturity.\ 19:28 Mature technical community, but slow enterprise adoption.\ 23:34 Advocates for open source but highlights complexities.\ 25:38 Unreliable information, triangle of necessities, vector space. ## More Quotes from Iveta: *"What we have to ensure here is that every citation and every answer and augmentation by the generative AI on top of that is linked to the exact source of paper or publication, where it's coming from, to ensure that we can trace it back to where the climate information is coming from.”*\ — Iveta Lohovska *"Explainability means if you receive a certain answer based on your prompt, you can trace it back to the exact source where the embedding has been stored or the source of where the information is coming from and things.”*\ — Iveta Lohovska *"Chat GPT for conversational purposes and individual help is something very cool but when this needs to be translated into actual business use cases scenario with all the constraint of the enterprise architecture, with the constraint of the use cases, the reality changes quite dramatically.”*\ — Iveta Lohovska ## Transcript: Demetrios: Look at that. We are back for another vector space talks. I'm very excited to be doing this today with you all. I am joined by none other than Sabrina again. Where are you at, Sabrina? How's it going? Sabrina Aquino: Hey there, Demetrios. Amazing. Another episode and I'm super excited for this one. How are you doing? Demetrios: I'm great. And we're going to bring out our guest of honor today. We are going to be talking a lot about trustworthy AI because Iveta has a background working with the World bank and focusing on the open data with that. But currently she is chief technologist and principal data scientist at HPE. And we were talking before we hit record before we went live. And we've got some hot takes that are coming up. So I'm going to bring Iveta to the stage. Where are you? There you are, our guest of honor. Demetrios: How you doing? Iveta Lohovska: Good. I hope you can hear me well. Demetrios: Loud and clear. Yes. Iveta Lohovska: Happy to join here from Vienna and thank you for the invite. Demetrios: Yes. So I'm very excited to talk with you today. I think it's probably worth getting the TLDR on your story and why you're so passionate about trustworthiness and explainability. 
Iveta Lohovska: Well, I think especially in the gen AI context, if there are any vulnerabilities around the solution or the training data set or any underlying context, either in the enterprise or at a smaller scale, it's just the scale that AI, and gen AI in particular, can achieve. If it has any vulnerabilities or any weaknesses when it comes to explainability or trustworthiness or bias, it just grows exponentially in nature. So it is to be considered and taken with high attention when it comes to those use cases. And most of my work is within the enterprise, with high confidentiality use cases, so it plays a bigger role than people actually think. On a high level, it just sounds like AI ethical principles or high level words that are very difficult to implement in technical terms. But in reality, when you hit the ground, when you hit the projects, when you work in the context of, let's say, governments or organizations that deal with atomic energy, I see it in Vienna, the atomic agency is a neighboring one, or security agencies, then you see the importance and the impact of those terms and the technical implications behind them. Sabrina Aquino: That's amazing. And can you talk a little bit more about the importance of the transparency of these models and what can happen if we don't know exactly what kind of data they are being trained on? Iveta Lohovska: I mean, this is especially relevant in our context of [vector databases](https://qdrant.tech/articles/what-is-a-vector-database/) and vector search. Because in the generative AI context, all foundational models have been trained on some foundational data sets that are distributed in different ways. Some are very conversational, some are very technical, some are on, let's say, a very strict taxonomy like healthcare or chemical structures. We call them modalities, and they have different representations. So when it comes to implementing vector search or a [vector database](https://qdrant.tech/articles/what-is-a-vector-database/) and knowing the distribution of the foundational data sets, you have better control if you introduce additional layers or additional components, so the control is in your hands of where the information is coming from, where it's stored, and [what the embeddings are](https://qdrant.tech/articles/what-are-embeddings/). So that helps, but it is actually quite important that you know what the foundational data sets are, so that you can predict any kind of weaknesses or vulnerabilities or penetrations that the solution or the use case of the model will face when it lands at the end user. Because we know generative AI is unpredictable. We know we can implement guardrails; there are already solutions. Iveta Lohovska: We know they don't give you 100% certainty, but there are definitely use cases and work where you need to hit one hundred percent certainty, especially intelligence, cybersecurity and healthcare. Demetrios: Yeah, that's something that I wanted to dig into a little bit more, these high stakes use cases. I don't know, I talk with a lot of people about how, at this current time, it's very risky to try and use specifically generative AI for those high stakes use cases. Have you seen people that are doing it well, and if so, how? Iveta Lohovska: Yeah, I'm in the business of high stakes use cases, and yes, we do those kinds of projects and work, which is very exciting and interesting, and you can see the impact. So I work on generative AI implementation in an enterprise context. 
An enterprise context could mean critical infrastructure, could mean telco, could mean a government, could mean intelligence organizations. Those are just a few examples, but I could flip the coin and give you an alternative, a public one that I can share; let's say a good example is climate data. And we recently worked on building a knowledge worker, a climate agent that is trained, of course, on its foundational knowledge, because all foundational models have prior knowledge they can refer to. But the key point here is for it to be an expert on climate data, emissions gaps, country cards. Every country has a commitment to meet certain emission reduction goals, which are then benchmarked and followed through international supervision, like the United Nations Environment Programme and similar entities. So when you're training this agent on climate data, there are competing ideas or several sources. Iveta Lohovska: You can source your information from the local government, which is incentivized to show progress to the nation and other stakeholders faster than the actual reality, or from the independent entities that provide information around the state of the world when it comes to progress towards certain climate goals. And there are also different parties. So for this kind of solution, we were very lucky to work with the status quo provider, the benchmark around climate data, around climate publications. And what we have to ensure here is that every citation and every answer and augmentation by the generative AI on top of that is linked to the exact source paper or publication it's coming from, to ensure that we can trace it back to where the climate information is coming from, if Germany performs better compared to Austria, for example. Also, the partner we worked with was the United Nations Environment Programme, so they want to make sure that they're the central scientific arm when it comes to giving information. And there's no compromise. There could be a compromise on the structure of the answer, on the breadth and depth of the information, but there should be no compromise on the exact factfulness of the information and where it's coming from. And this is a concrete example, because you might ask, why is this so important? Because it has two interfaces. Iveta Lohovska: It has the public one: you can go and benchmark your carbon footprint as an individual living in one country compared to an individual living in another. But if you are a policymaker, which is the other interface of this application, who will write the policy recommendations for their own country, or a country they're advising, you might want to make sure that the scientific citations and the policy recommendations that you're making are correct and are retrieved from the proper data sources. Because there will be huge implications when you go public with those numbers or when you actually design a law that is enforceable with legal terms and law enforcement. Sabrina Aquino: That's very interesting, Iveta, and I think this is one of the great use cases for [RAG](https://qdrant.tech/articles/what-is-rag-in-ai/), for example. And if you can talk a little bit more about how vector search is playing into all of this, how it's helping organizations do this, that would be amazing. Iveta Lohovska: In such specific use cases, 
I think the main differentiator is the traceability component. First, you have full control over which data it will refer to, because if you deal with open source models, most of them are open, but the data they have been trained on has not been opened or made public. So with a vector database you introduce a step of control and explainability. Explainability means that if you receive a certain answer based on your prompt, you can trace it back to the exact source where the embedding has been stored, or the source of where the information is coming from. So a major point for us, for those kinds of high stakes solutions, is that you have the explainability and traceability. Explainability could be as simple as semantic similarity to the text, but also the traceability of where it's coming from and the exact link of where it's coming from. And the model shouldn't just refer to its prior knowledge; you can cut the line of the model referring to its previous knowledge by introducing a [vector database](https://qdrant.tech/articles/what-is-a-vector-database/), for example. Iveta Lohovska: There could be many other implications and improvements in terms of speed and just handling huge amounts of data, which are also nice-to-haves that come with this kind of technique, but the primary use case is actually not centered around those. Demetrios: So if I'm hearing you correctly, it's yet another reason why you should be thinking about using vector databases, because you need that ability to cite your work, and it's becoming a very strong design pattern. Right. We all understand now, if you can't see where this data has been pulled from, or you can't trace it back to the actual source, it's hard to trust what the output is. Iveta Lohovska: Yes, and the easiest way is to kind of cluster the two groups. If you think of creative fields and marketing fields and design fields, where you could go wild and crazy with the temperature of each model, how creative it could go and how much novelty it could bring to the answer, those are one family of use cases. But there is exactly the opposite type of use case, where this is a no go and you don't need any creativity; you just focus on the factfulness and explainability. So it's more about the speed and the accuracy of retrieving information, rather than a high level of novelty, and not compromising on any kind of facts within the answer, because there will be legal implications and policy implications and societal implications based on the action taken on this answer, either a policy recommendation or legal action. There's a lot to do with the intelligence agencies that retrieve information based on nearest neighbor or the kind of relational analysis that you can also execute with vector databases and generative AI. Sabrina Aquino: And we know that for these high stakes sectors, data privacy is a huge concern. And when we're talking about using vector databases and storing that data somewhere, what are some of the principles or techniques that you use in terms of infrastructure? Where should you store your vector database and how should you think about that part of your system? 
Iveta Lohovska: Yeah, so in most of the cases, I would say 99% of the cases, if you have such high requirements around security and explainability, security of the data, but also security of the whole use case and environment, and the explainability and trustworthiness of the answer, then it's very natural to expect that it will be on prem and not in the cloud, because only on prem do you have full control of where your data sits, where your model sits, full ownership of your IP, fewer question marks around the implementation and architecture, and mainly full ownership of the end to end solution. So when it comes to those use cases, it's RAG on prem, with the whole infrastructure, with the whole software and platform layers, including models on prem, not accessible through an API or through a service somewhere where you don't know where the guardrails are, who designed the guardrails, what the guardrails are. And we see this a lot with, for example, Copilot, a lot of question marks around that. So a huge part of my work is just talking about it, just sorting that out. Sabrina Aquino: Exactly. You don't want to just give away your data to a cloud provider, because there are many implications that come with that. And I think even your clients need certain certifications, and they need to make sure that nobody can access that data, something that you cannot exactly ensure if you're just using a cloud provider somewhere, which is, I think, something that's very important when you're thinking about these high stakes solutions. But also, I think if you're going to maybe outsource some of the infrastructure, you also need to think about something that's similar to a [hybrid cloud solution](https://qdrant.tech/documentation/hybrid-cloud/), where you can keep your data and outsource the management of the infrastructure. So that's also a nice use case for that, right? Iveta Lohovska: I mean, I work for HPE, so hybrid is one of our biggest sacred words. Yeah, exactly. But actually, if you see the trends and you see how expensive it is to run some of those workloads in the cloud, either for training a foundational model or fine tuning. And no one talks about inference, inference not with ten users, but inference with hundreds of users in big organizations. This in itself is not sustainable, honestly, when you do the simple linear algebra or math of the exponential cost around this. That's why everything is hybrid. And there are use cases that make sense to run in the cloud, where it's fast and speedy and easy to play with, low risk, to try things. Iveta Lohovska: But when it comes to actual GenAI work and LLM models, yeah, the answer is never straightforward when it comes to the infrastructure and the environment where you are hosting it, for many reasons, not just cost but others too. Demetrios: So there's something that I've been thinking about a lot lately that I would love to get your take on, especially because you deal with this day in and day out, and it is the maturity levels of the current state of Gen AI and where we are at. ChatGPT, or just LLMs and foundational models, feel like they just came out, and so we're almost at the basic, basic, basic maturity levels. And when you work with customers, how do you kind of signal that, hey, this is where we are right now, but you should be very conscientious that you're going to need to potentially work with a lot of breaking changes or you're going to have to be constantly updating. 
And this isn't going to be a set it and forget it type of thing. This is going to be a lot of work to make sure that you're staying up to date, even just trying to stay up to date with the news, as we were talking about. So I would love to hear your take on the different maturity levels that you've been seeing and what that looks like. Iveta Lohovska: So I have huge exposure to GenAI for the enterprise, and there's a huge component of expectation management. Why? Because ChatGPT for conversational purposes and individual help is something very cool. But when this needs to be translated into an actual business use case scenario, with all the constraints of the enterprise architecture, with the constraints of the use cases, the reality changes quite dramatically. So the level of forgiveness end users are used to with conversational chatbots is very different from what you will get in an actual, let's say, knowledge worker type of context, or summarization type of context, in the enterprise. And it's not so much about the performance of the models, but we have something called modalities of the models. And I don't think there will ultimately be one model with all the capabilities possible, let's say code generation or image generation, voice generation, or just being very chatty and loving and so on. There will be multiple mini models out there for those modalities, and in an actual architecture with reasonable cost they are very difficult to handle. So I would say the technical community feels we are very mature and moving very fast. Enterprise adoption is a totally different topic, and it's a couple of years behind. But there's also the society side: technologists like me try to keep up with the development and we know where we stand at this point, but there's the legal side and the regulations coming in, like the EU AI Act and Biden trying to regulate compute power, and also how societies react to this and how they adapt. And I think especially on the third one, we are far behind in understanding the implications of this technology, adopting it at scale and understanding the vulnerabilities. That's why I enjoy my enterprise work so much: because it's a reality check. When you put a price tag on an actual Gen AI use case in production, with the inference cost and the expected performance, it's a different situation from just having an app on the phone that you chat with and that pulls up interesting links for you. So yes, I think there's a bridge to be built between the two worlds. Demetrios: Yeah. And I find it really interesting too, because it feels to me like, since it is so new, people are more willing to explore and not necessarily have that instant return on ROI. But when it comes to more traditional ML or predictive ML, it is a bit more mature, and so there's less patience for that type of exploration, or for, hey, is this use case worth it? If you can't by now show the ROI of a predictive ML use case, then that's a little bit more dangerous. But if you can't with a Gen AI use case, it is not that big of a deal. Iveta Lohovska: Yeah, it's basically a technology growing up in front of our eyes. It's kind of a flying-the-plane-while-building-it type of situation. We are seeing it in real time, and I agree with you. The maturity around ML is one thing, but around generative AI there will be a moment of kind of mini disappointment or decline, in my opinion, before we actually mature and productize this kind of powerful technology in a sustainable way. 
Sustainable means you can afford it, but also that it proves your business case and use case. Otherwise you're just doing it for the sake of doing it, because everyone else is doing it. Demetrios: Yeah, yeah, 100%. So I know we're bumping up against time here. I do feel like there was a bit of a topic that we wanted to discuss around the licenses and how that plays into basically trustworthiness and explainability. And so we were talking about how, yeah, the best is to run your own model, and it probably isn't going to be this gigantic model that can do everything. It seems like the trends are going towards smaller models. And from your point of view though, we are getting new models, like, every week it feels like. I mean, we were just talking about this before we went live again: Databricks just released their, what is it, DBRX yesterday, you had Mistral releasing a new base model over the weekend, and then Llama 3 is probably going to come out in the blink of an eye. So where do you stand in regard to that? It feels like there's a lot of movement in open source, but as you mentioned, there's a little bit of caution to be had with the open source movement. Iveta Lohovska: So I think it feels like there's a lot of open source, but. So I'm totally for open sourcing and giving people and the communities the power to be able to innovate, to do R & D in different labs, so it's not locked to the few elite big tech companies that can afford this kind of technology. So kudos to Meta for trying, compared to the other equal players in the space. But open source comes with a whole ecosystem around it in our world, especially for the more powerful models, which is something I don't like, because it immediately translates into a legal-fees type of conversation. There are too many if-else statements in those open source licensing terms, where it becomes difficult for technologists to navigate and understand what exactly this means, and then you have to bring in the legal people to articulate it to you or to put in additional clauses. So it's becoming a very complex environment to handle and less and less open, because there are not so many open source and small startup players that can afford to train foundational models that are powerful and useful. So it becomes a bit of a game locked to a few, and I think everyone needs to be a bit worried about that. Iveta Lohovska: So we can use the equivalents from the past, but I don't think we are doing well enough in terms of open sourcing the three main core components of an LLM, which are the model itself, the data it has been trained on, and the data sets, and most of the time at least one of those is restricted or missing. So it's a difficult space to navigate. Demetrios: Yeah, yeah. You can't really call it trustworthy, or you can't really get the information that you need and that you would hope for, if you're missing one of those three. I do like that little triangle of necessities. So, Iveta, this has been awesome. I really appreciate you coming on here. Thank you, Sabrina, for joining us. And for everyone else that is watching, remember, don't get lost in vector space. This has been another Vector Space Talk. Demetrios: We are out. Have a great weekend, everyone. Iveta Lohovska: Thank you. Bye. Thank you. Bye.
blog/gen-ai-and-vector-search-iveta-lohovska-vector-space-talks.md