| Post title | Post link | Post type | Posted by | Created date | Audience | Impressions | Clicks | Click through rate (CTR) | Likes | Comments | Reposts | Engagement rate |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
Want to use LLMs and RAG to analyze the stock market with current data? Then check out this article!
https://lnkd.in/db5PCc98
| https://www.linkedin.com/feed/update/urn:li:activity:7231377944626044928 | Organic | David Mezzetti | 08/19/2024 | All followers | 452 | 24 | 0.053097 | 5 | 0 | 1 | 0.066372 |
Great to see how each new release of txtai brings new users into the fold. Look how adding llama.cpp and LLM API support bumped up the number of downloads, and how better examples covering streaming RAG with local document text extraction did the same!
https://lnkd.in/ecEgQDYM | https://www.linkedin.com/feed/update/urn:li:activity:7230542476527218688 | Organic | David Mezzetti | 08/17/2024 | All followers | 537 | 15 | 0.027933 | 6 | 0 | 1 | 0.040968 |
Just published this comprehensive article covering our new RAG application with txtai. Check it out!
https://lnkd.in/dPEPv4Fm
| https://www.linkedin.com/feed/update/urn:li:activity:7229533817089269763 | Organic | David Mezzetti | 08/14/2024 | All followers | 646 | 33 | 0.051084 | 7 | 0 | 2 | 0.065015 |
Did you know that most of the functionality in txtai can be run with configuration? That's right, txtai can dynamically load Embeddings, LLM, RAG and other pipelines with YAML configuration.
Check out this example that loads an Embeddings database via Docker with a couple of lines of YAML config. The example then runs a graph search via the local API and plots the results with Sigma.js.
Code: https://lnkd.in/e5CwDjBb
Docs: https://lnkd.in/dz7v_UPb | https://www.linkedin.com/feed/update/urn:li:activity:7229090247416975360 | Organic | David Mezzetti | 08/13/2024 | All followers | 632 | 7 | 0.011076 | 7 | 0 | 3 | 0.026899 |
An array of search results shows the top N best matches for a query. But what if some of those matches aren't related to the others? That's where txtai's semantic graph and Graph RAG patterns help. Graphs open up different possibilities such as looking at the most central results and/or walking a specific path to pull in only certain types of results.
Check out this article for more on this topic: https://lnkd.in/e9vGkZ2x | https://www.linkedin.com/feed/update/urn:li:activity:7228883439846895617 | Organic | David Mezzetti | 08/12/2024 | All followers | 780 | 23 | 0.029487 | 6 | 0 | 1 | 0.038462 |
August 11, 2020 - txtai was born. This image and Reddit post started it all!
The last 4 years have been amazing! The community that has developed around txtai has been humbling. 8.5K GitHub stars later, there is still much work to do. Only a small fraction of the space knows about txtai. The more that know, the more that will see that it's a better approach than other "popular" frameworks. Let's go!
GitHub: https://lnkd.in/dxWDeey
Original Reddit Post: https://lnkd.in/eA9G7m6M | https://www.linkedin.com/feed/update/urn:li:activity:7228379520309825537 | Organic | David Mezzetti | 08/11/2024 | All followers | 780 | 38 | 0.048718 | 13 | 2 | 1 | 0.069231 |
While LLMs are prone to mistakes, they are a great way to learn the lingo of a new domain. Take this example of a Llama 3.1 prompt that analyzes publicly traded stocks. If you run this example with a couple of different stocks, you'll quickly learn the common indicators: P/E ratio, market cap, trailing EPS, cash on hand, etc. While LLMs shouldn't be trusted blindly, they are a great tool.
Code: https://lnkd.in/e9zn7UNA | https://www.linkedin.com/feed/update/urn:li:activity:7228373119634210816 | Organic | David Mezzetti | 08/11/2024 | All followers | 531 | 24 | 0.045198 | 6 | 1 | 1 | 0.060264 |
One of the most powerful pipelines available in txtai is its textractor pipeline. It can convert a large number of document formats to Markdown for LLM/RAG consumption. One common concern is its Apache Tika/Java dependency.
Did you know that Apache Tika can instead be started via this Docker Image?
https://lnkd.in/eMUHkrBS
| https://www.linkedin.com/feed/update/urn:li:activity:7228017143827566592 | Organic | David Mezzetti | 08/10/2024 | All followers | 497 | 9 | 0.018109 | 5 | 0 | 1 | 0.030181 |
One of the more underappreciated components of txtai is its cloud sync. Say you're doing market, sales, academic or even medical research: you parse and build an embeddings index over a series of papers, websites or documents. Given the portable nature of txtai's index format, an Embeddings index can easily be synced to cloud storage (e.g. AWS S3, Azure Blob, Google Cloud) or even the Hugging Face Hub. From there, anyone granted access can spin up a RAG process using this data. No need for servers and complex setups.
Learn more below.
https://lnkd.in/eMGY7uRB
| https://www.linkedin.com/feed/update/urn:li:activity:7227995148129882112 | Organic | David Mezzetti | 08/10/2024 | All followers | 401 | 11 | 0.027431 | 1 | 0 | 1 | 0.032419 |
Did you know that txtai search results can be loaded as a Pandas DataFrame?
https://lnkd.in/euPdWUe2 | https://www.linkedin.com/feed/update/urn:li:activity:7227781117427302400 | Organic | David Mezzetti | 08/09/2024 | All followers | 510 | 8 | 0.015686 | 7 | 0 | 1 | 0.031373 |
Thank you for the amazing feedback on the txtai RAG v0.2 release. There were some great ideas that were too good to sit on. With that, we're happy to announce v0.3 is now available!
GitHub: https://lnkd.in/evdB5HgN
Docker Hub: https://lnkd.in/e-3Cx_68
| https://www.linkedin.com/feed/update/urn:li:activity:7227731402933387264 | Organic | David Mezzetti | 08/09/2024 | All followers | 502 | 27 | 0.053785 | 6 | 0 | 1 | 0.067729 |
Transforming HTML to Markdown is easy until it isn't! Most websites have headers, footers and sidebars with little consistency. Naively converting a website to Markdown leads to a lot of irrelevant content, which can throw LLMs off.
That's where txtai's textractor pipeline can help! This pipeline has logic to detect the most likely sections with the main content, removing noisy sections such as headers, footers and sidebars. This helps improve overall RAG accuracy.
Check out this example extraction: https://lnkd.in/eh6d8Nau
See how only the main content is extracted! | https://www.linkedin.com/feed/update/urn:li:activity:7227498189661052928 | Organic | David Mezzetti | 08/09/2024 | All followers | 542 | 28 | 0.051661 | 4 | 0 | 1 | 0.060886 |
π₯ v0.2 of our txtai RAG application is out! This is an easy-to-use application for exploring your own data with retrieval augmented generation (RAG) backed by txtai.
txtai is an all-in-one embeddings database for semantic search, LLM orchestration and language model workflows. txtai has a feature to automatically create knowledge graphs using semantic similarity. This enables running Graph RAG queries with path traversals. This RAG application generates a visual network to illustrate the path traversals and help understand the context from which answers are generated.
Embeddings databases are used as the knowledge store. The application can start with a blank database or an existing one such as Wikipedia. In both cases, new data can be added. This enables augmenting a large data source with new/custom information.
Adding new data is done with the textractor pipeline. This pipeline can extract content from documents (PDF, Word, etc) along with websites. The website extraction logic detects the likely sections with main content removing noisy sections such as headers and sidebars. This helps improve the overall RAG accuracy.
See more at the links below.
GitHub: https://lnkd.in/evdB5HgN
Docker Hub: https://lnkd.in/e-3Cx_68 | https://www.linkedin.com/feed/update/urn:li:activity:7227396692638138368 | Organic | David Mezzetti | 08/08/2024 | All followers | 1,129 | 59 | 0.052259 | 16 | 0 | 4 | 0.069973 |
Did you know that Markdown-formatted text helps improve RAG accuracy? While retrieval and prompt engineering are important components of a RAG pipeline, Markdown can give an additional boost.
The Textractor pipeline supports generating tables, lists, code, blockquotes and emphasis sections as Markdown.
https://lnkd.in/e8nfE-Zp | https://www.linkedin.com/feed/update/urn:li:activity:7227000688088621056 | Organic | David Mezzetti | 08/07/2024 | All followers | 793 | 29 | 0.03657 | 10 | 5 | 1 | 0.056747 |
Nice to see a continued bump in growth for txtai over the last couple of weeks!
https://lnkd.in/eXrGAb6W | https://www.linkedin.com/feed/update/urn:li:activity:7225931632002691072 | Organic | David Mezzetti | 08/04/2024 | All followers | 850 | 28 | 0.032941 | 8 | 2 | 1 | 0.045882 |
As many go down the "agentic path", we're choosing a different path: graph path traversals!
Graph path traversals use vector similarity and/or relationships of your choosing to walk a graph and enable LLMs to explain complex concepts and relationships. This example walks a path and automatically generates an explanation of the network in the form of a short article.
Paths can be set directly (i.e. Roman Empire -> Reasons for collapse) or inferred from a query (Tell me the reasons why the Roman Empire collapsed).
Learn more here: https://lnkd.in/evdB5HgN | https://www.linkedin.com/feed/update/urn:li:activity:7222921388091682816 | Organic | David Mezzetti | 07/27/2024 | All followers | 1,211 | 36 | 0.029727 | 8 | 0 | 1 | 0.037159 |
Want to learn more on how the txtai RAG app works? Then check it out on GitHub!
https://lnkd.in/evdB5HgN
| https://www.linkedin.com/feed/update/urn:li:activity:7222630553475256320 | Organic | David Mezzetti | 07/26/2024 | All followers | 515 | 14 | 0.027184 | 3 | 0 | 1 | 0.034951 |
The 2024 txtai survey is out!
We don't do telemetry (we'll be on the right side of this future issue). We're old-fashioned and just ask!
Please submit your thoughts if you'd like to help guide the future direction of the project.
https://lnkd.in/equGWe9d | https://www.linkedin.com/feed/update/urn:li:activity:7222606179376529411 | Organic | David Mezzetti | 07/26/2024 | All followers | 417 | 14 | 0.033573 | 2 | 0 | 1 | 0.040767 |
We're thrilled to release an innovative and easy-to-use application for RAG and GraphRAG. We believe this application has features that are novel and not seen anywhere else.
txtai has been out ahead with semantic graphs for a while now. We've long known that graph path traversals are a great way to build contexts, even before we knew LLMs would be the consumers of that context.
The RAG application allows uploading your own data and documents. Graphs are automatically constructed and relationships automatically derived. Each node in the graph is given an LLM-generated topic.
With one pull of a Docker image, you can be up and running with a full-featured GraphRAG application on your own data. Enjoy!
GitHub: https://lnkd.in/evdB5HgN
Docker: https://lnkd.in/e-3Cx_68 | https://www.linkedin.com/feed/update/urn:li:activity:7222462570320793600 | Organic | David Mezzetti | 07/26/2024 | All followers | 1,533 | 115 | 0.075016 | 23 | 1 | 2 | 0.091977 |
Text extraction is a messy business!
There are libraries that don't work well, require numerous dependencies, have licenses that are commercially untenable (AGPL-3) and/or require sending data to a remote API for processing!
With txtai, we use Apache Tika as our main text extraction library. It works well for most formats but it does require Java. Keep in mind that a headless JRE is smaller than dependencies other text extraction libraries require such as LibreOffice.
txtai also plays nice with others. If you'd like to use an external library for PDF parsing, check this example out. Just be aware of the license for this library!
Link to code: https://lnkd.in/dt2z7W_j | https://www.linkedin.com/feed/update/urn:li:activity:7222242898904293376 | Organic | David Mezzetti | 07/25/2024 | All followers | 1,410 | 58 | 0.041135 | 9 | 2 | 2 | 0.050355 |
Happy to see txtai is a trending Python project today on GitHub!
https://lnkd.in/dUixHvq | https://www.linkedin.com/feed/update/urn:li:activity:7221481859354894338 | Organic | David Mezzetti | 07/23/2024 | All followers | 994 | 39 | 0.039235 | 18 | 1 | 1 | 0.059356 |
Nice to see txtai trending on Hacker News today!
https://lnkd.in/ez3tmEa | https://www.linkedin.com/feed/update/urn:li:activity:7220876861659119616 | Organic | David Mezzetti | 07/21/2024 | All followers | 758 | 27 | 0.03562 | 6 | 3 | 1 | 0.048813 |
Did you know that a txtai embeddings database is a file format?
While txtai can integrate with a number of external components, its base components all save content locally. The entire database can be saved to a single compressed file. There is built-in support for saving these indexes to cloud storage (e.g. S3) and the HF Hub.
Learn more about the file formats behind this here: https://lnkd.in/e3j5Pf-n | https://www.linkedin.com/feed/update/urn:li:activity:7220747351454355456 | Organic | David Mezzetti | 07/21/2024 | All followers | 675 | 13 | 0.019259 | 3 | 0 | 1 | 0.025185 |
Let's talk about Graph RAG. We've been looking at graph-based approaches for context generation since 2022. The best use case we've seen for Graph RAG is for more complex questions and research. For example, think of a problem as a road trip with multiple stops. A graph path traversal is a great way to pick up various concepts as context, concepts which may not be directly related and not picked up by a simple keyword/vector search.
The attached image shows two graph path traversal examples. The first shows the path between a squirrel and the Red Sox winning the World Series. The second shows an image path from a person parachuting to someone holding a french horn. Note the progression of both the text and images along the way. There is also another example of traversing history from the end of the Roman Empire to the Norman Conquest of England.
For problems like this, graphs do a great job. If the answer is a simple retrieval of a single entry, Graph RAG doesn't add much value. Like all things, Graph RAG isn't the be-all and end-all.
Read more in the articles below.
Semantic Graph Intro: https://lnkd.in/eMsAXarn
Graph RAG: https://lnkd.in/d-BSjuj7 | https://www.linkedin.com/feed/update/urn:li:activity:7220390554633732097 | Organic | David Mezzetti | 07/20/2024 | All followers | 1,174 | 38 | 0.032368 | 15 | 0 | 3 | 0.0477 |
Want to work with vectors, LLMs and RAG but worried about security?
Did you know that txtai has full Postgres integration (dense vectors, sparse vectors, content and graph)? This can be combined with standard row level security to limit what content a user can utilize for GenAI processes.
Learn more at the links below.
Article: https://lnkd.in/eFeFNgYK
Postgres docs: https://lnkd.in/e9aFks5Y
| https://www.linkedin.com/feed/update/urn:li:activity:7220068323353337857 | Organic | David Mezzetti | 07/19/2024 | All followers | 510 | 5 | 0.009804 | 7 | 0 | 1 | 0.02549 |
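A hedged sketch of what such a Postgres-backed configuration might look like (the connection URL and credentials are placeholders, not from the linked article):

```yaml
# Illustrative txtai configuration: metadata and vectors both in Postgres.
embeddings:
  path: sentence-transformers/all-MiniLM-L6-v2
  # Content storage via a SQLAlchemy-style connection string
  content: postgresql+psycopg2://user:pass@localhost/txtai
  # Dense vectors stored with the pgvector extension
  backend: pgvector
  pgvector:
    url: postgresql+psycopg2://user:pass@localhost/txtai
```

With everything in Postgres, standard row-level security policies can then gate which rows each user's GenAI process can retrieve.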
txtai now supports building vector databases and/or RAG pipelines exclusively with llama.cpp and/or API integrations (e.g. OpenAI, Claude, Ollama).
https://lnkd.in/e_rud4Ff | https://www.linkedin.com/feed/update/urn:li:activity:7219653044861370368 | Organic | David Mezzetti | 07/18/2024 | All followers | 851 | 14 | 0.016451 | 9 | 0 | 1 | 0.028202 |
We'll have a couple of new videos in the coming weeks on our YouTube channel covering txtai. Stay tuned.
https://lnkd.in/dREtaaM7 | https://www.linkedin.com/feed/update/urn:li:activity:7219451260075212800 | Organic | David Mezzetti | 07/17/2024 | All followers | 277 | 5 | 0.018051 | 2 | 0 | 1 | 0.028881 |
Earlier this month we ran an experiment to compare txtai with other popular open-source frameworks.
The conclusion is that txtai should be on your list.
Read more here: https://lnkd.in/eu4frpZ6
| https://www.linkedin.com/feed/update/urn:li:activity:7219385511302307840 | Organic | David Mezzetti | 07/17/2024 | All followers | 252 | 6 | 0.02381 | 2 | 0 | 1 | 0.035714 |
New in txtai 7.3: Better Text Extraction. This release brings significant improvements to parsing web content (HTML). It supports parsing sections, lists, tables and more!
Link to code: https://lnkd.in/ebWwY473 | https://www.linkedin.com/feed/update/urn:li:activity:7219027434950586368 | Organic | David Mezzetti | 07/16/2024 | All followers | 989 | 48 | 0.048534 | 12 | 3 | 2 | 0.065723 |
New in txtai 7.3: Streaming RAG. This feature builds off streaming LLMs, which iteratively return chunks of content as a stream vs waiting for the entire generation call. Streaming is supported with Transformers, llama.cpp and LLM APIs (e.g. GPT-4, Claude, Ollama).
Try it out with txtai's new RAG application. Plug in a new txtai embeddings index to use this application with your own data.
Docker image: https://lnkd.in/d-fJiVXR
RAG Application: https://lnkd.in/e82fddyY | https://www.linkedin.com/feed/update/urn:li:activity:7219009499846619136 | Organic | David Mezzetti | 07/16/2024 | All followers | 676 | 44 | 0.065089 | 5 | 0 | 1 | 0.073964 |
txtai is a grassroots project dedicated to the developer experience. We're passionate about building a quality project. GitHub stars and comments go a long way.
NeuML is building a sustainable and profitable company from the start. We provide consulting services in situations where it's an interesting business problem. It's not about generating buzz to get the next funding round or rubbing elbows with big names in Silicon Valley.
The majority of our work is open-source and from our desk to yours with β€οΈ Thank you! | https://www.linkedin.com/feed/update/urn:li:activity:7218951378222596097 | Organic | David Mezzetti | 07/16/2024 | All followers | 570 | 38 | 0.066667 | 5 | 2 | 1 | 0.080702 |
txtai 7.3 is out!
This release adds a new RAG front-end template, streaming LLM and streaming RAG support along with significant text extraction improvements.
From local to remote vectorization, model inference and data storage - txtai has you covered. It's the easiest way to build vector search, LLM and RAG systems without the bloat.
See below for more.
GitHub: https://lnkd.in/dxWDeey
Release Notes: https://lnkd.in/e_4kYXma
PyPI: https://lnkd.in/eE_Jvft
Docker Hub: https://lnkd.in/e598zTHb
API Clients:
Python: https://lnkd.in/eqVx_nqt
JavaScript: https://lnkd.in/dM8ua2y
Rust: https://lnkd.in/d2MAae2
Java: https://lnkd.in/dqmmjTw
Go: https://lnkd.in/dq7Ujv4 | https://www.linkedin.com/feed/update/urn:li:activity:7218684906715889664 | Organic | David Mezzetti | 07/15/2024 | All followers | 1,658 | 87 | 0.052473 | 25 | 1 | 2 | 0.069361 |
All the cool kids have a RAG framework in 2024. Why? Because it's really easy. A simple RAG framework can be a couple of lines of code. If all you do is call API services and stitch the results together, what value does that add?
txtai is much more than this. It's a local vector database that can also store data in Postgres. An LLM framework that works with multiple LLM backends, local and remote. A sophisticated RAG pipeline. Not to mention components for graphs, BM25 and other traditional ML model pipelines. It does all this without creating unnecessary complexity and abstraction.
And it's open-source, check it out: https://lnkd.in/dxWDeey | https://www.linkedin.com/feed/update/urn:li:activity:7217524333554925570 | Organic | David Mezzetti | 07/12/2024 | All followers | 1,418 | 38 | 0.026798 | 11 | 0 | 4 | 0.037377 |
We often hear someone say they have an LLM and want to solve a problem. LLMs aren't always the best tool for the job. Let's take text classification using a sentiment dataset.
Running LLM prompts for this dataset only leads to 58% accuracy! Training a 4.4M parameter model achieves 91% accuracy. BERT reaches 93%. Sure, we can fine-tune the LLM for this task, but why spend an hour vs 8 minutes?
Be willing to accept the simpler solution.
Link to code: https://lnkd.in/d38xBm6X | https://www.linkedin.com/feed/update/urn:li:activity:7217120867527385090 | Organic | David Mezzetti | 07/11/2024 | All followers | 1,710 | 49 | 0.028655 | 20 | 1 | 1 | 0.04152 |
Machine translation: just ask an LLM to do it?
While LLMs can translate, that doesn't mean they should. What if we could utilize smaller models trained to translate between specific languages? What if there was a pipeline that automatically loads models based on the source and target languages?
Enter txtai's translation pipeline! The Translation pipeline automatically detects languages and searches the Hugging Face Hub for the best specialized model to perform the translation. These specialized models are often smaller than LLMs and much faster.
Link to code: https://lnkd.in/eefCwRDu | https://www.linkedin.com/feed/update/urn:li:activity:7216410295450120192 | Organic | David Mezzetti | 07/09/2024 | All followers | 1,884 | 46 | 0.024416 | 14 | 3 | 1 | 0.03397 |
No GPU available? Only using external API services? Want llama.cpp GPU models with txtai?
Did you know that Torch has a CPU-only install that brings a significantly smaller dependency chain (no PyPI CUDA libraries)? The txtai-cpu Docker image employs this same strategy. It reduces the image size from 3.1 GB to 700 MB.
Link to code: https://lnkd.in/exnasWHf | https://www.linkedin.com/feed/update/urn:li:activity:7216057535986839552 | Organic | David Mezzetti | 07/08/2024 | All followers | 1,202 | 17 | 0.014143 | 20 | 0 | 5 | 0.034942 |
Want to summarize webpages, Word documents, PDFs and more? Did you know there are models pre-built for summarization that pre-date the latest LLMs? And that they do a decent job and are faster?
txtai supports pre-trained summarization models and LLMs for summarization. Either can be run as Python workflows or FastAPI services.
Link to code: https://lnkd.in/emg4QUfm
| https://www.linkedin.com/feed/update/urn:li:activity:7215768857762762752 | Organic | David Mezzetti | 07/07/2024 | All followers | 1,326 | 59 | 0.044495 | 11 | 0 | 2 | 0.054299 |
Let's build on our previous BM25 post and take tokenization into account. We'll compare LangChain's BM25 retriever, the recently released bm25s library (built with SciPy sparse matrices) and txtai. We'll use the same tokenization method for all three: the Unicode Text Segmentation algorithm (UAX 29). Keep in mind that this is relevant to those using hybrid search (vector + keyword).
1.6M ArXiv abstracts were evaluated. txtai's index time was slower but search times were significantly faster. txtai used almost 6x less RAM.
Link to code: https://lnkd.in/ekSxN9pJ | https://www.linkedin.com/feed/update/urn:li:activity:7215679457859121152 | Organic | David Mezzetti | 07/07/2024 | All followers | 2,393 | 107 | 0.044714 | 33 | 3 | 2 | 0.060593 |
Let's talk tokenization, an underappreciated part of the NLP pipeline! Naive methods like splitting on whitespace work for European languages but not for others. Stop words were once a common pattern but have since fallen out of favor.
Modern keyword tokenizers split using the Unicode Text Segmentation algorithm (UAX 29). This enables broader language support. Many Transformers models use either word/subword or BPE tokenizers.
txtai has a built-in tokenizer that implements UAX 29. This functionality is similar to what's found in systems like Elasticsearch/Apache Lucene, and it's used with txtai's sparse keyword indexing.
Link to the code: https://lnkd.in/egVRgEAT | https://www.linkedin.com/feed/update/urn:li:activity:7215458584950681601 | Organic | David Mezzetti | 07/06/2024 | All followers | 1,938 | 56 | 0.028896 | 28 | 1 | 3 | 0.045408 |
LangChain and LlamaIndex both use the Rank-BM25 library to provide in-line BM25 document retrieval. Rank-BM25 is a great way to quickly stand up a BM25 search index for a small number of documents. But it doesn't scale, as it's built to run in memory.
txtai has its own BM25 implementation in Python. Term vectors are built harnessing the native performance of the Python arrays package. These term vectors are stored in a SQLite database. LRU caching keeps frequently used vectors in memory. This combination of factors enables a highly performant index.
For this comparison, 2.3M ArXiv abstracts were used. LangChain ran out of memory (32 GB of RAM). The test was scaled to 1.6M abstracts. txtai had 2x slower index times but 13x faster search times than LangChain. LangChain used 25 GB of RAM, txtai used 3.8 GB of RAM.
Link to code: https://lnkd.in/eD7qn6qn | https://www.linkedin.com/feed/update/urn:li:activity:7215085459910070272 | Organic | David Mezzetti | 07/05/2024 | All followers | 2,096 | 92 | 0.043893 | 20 | 0 | 3 | 0.054866 |
The latest from NeuML in one place.
https://lnkd.in/emkngSiK | https://www.linkedin.com/feed/update/urn:li:activity:7215020830399832064 | Organic | David Mezzetti | 07/05/2024 | All followers | 336 | 3 | 0.008929 | 2 | 0 | 1 | 0.017857 |
BM25 continues to be a heavy hitter in the information retrieval space. Did you know that txtai has a BM25 component built for speed?
BM25 term vectors are built harnessing the native performance of the Python arrays package. These term vectors are stored in a SQLite database. LRU caching stores frequently used vectors in memory. This combination of factors enables a highly performant index.
https://lnkd.in/eA3ui6cQ | https://www.linkedin.com/feed/update/urn:li:activity:7214952352154292225 | Organic | David Mezzetti | 07/05/2024 | All followers | 1,673 | 20 | 0.011955 | 5 | 0 | 2 | 0.016139 |
How does txtai stack up against other open source frameworks for Vector Search & RAG?
Short answer: it is up to the task.
https://lnkd.in/eu4frpZ6
| https://www.linkedin.com/feed/update/urn:li:activity:7214681508152832000 | Organic | David Mezzetti | 07/04/2024 | All followers | 569 | 15 | 0.026362 | 5 | 0 | 1 | 0.036907 |
One LLM pipeline, many tasks with txtai. The LLM pipeline supports many models, local and remote. Simply change the model path.
Inputs can be prompt strings or chat messages. Easily run in Python or as an API service.
Link to code: https://lnkd.in/emT6SChu | https://www.linkedin.com/feed/update/urn:li:activity:7214580572268937216 | Organic | David Mezzetti | 07/04/2024 | All followers | 611 | 18 | 0.02946 | 6 | 0 | 1 | 0.040917 |
Retrieval Augmented Generation (RAG) is one of the most practical use cases of the Generative AI era. An LLM, when presented with a bounding context, will often generate factually grounded answers.
txtai makes RAG with your documents easy. It has pipelines to extract text from Office and PDF documents while preserving structured formatting (e.g. tables, lists). It has an easy-to-use LLM pipeline that automatically loads models from Hugging Face, llama.cpp and APIs (OpenAI, Ollama, etc.).
See how this compares to RAG with LangChain (txtai was able to generate the correct answer given it preserves table formatting): https://lnkd.in/evv_7gi6 | https://www.linkedin.com/feed/update/urn:li:activity:7214432494748581888 | Organic | David Mezzetti | 07/04/2024 | All followers | 898 | 40 | 0.044543 | 11 | 3 | 1 | 0.061247 |
Hnswlib is a great vector indexing library. It's integrated into a number of vector databases.
txtai utilizes the same core pipeline for generating embeddings and storing vectors regardless of the end components. There has been a careful focus on building a highly performant and efficient vector database implementation that runs great locally.
txtai uses mmap-ing and other techniques to ensure that memory limits are respected. Streaming vector generation and offloading those vectors during index creation allows txtai to build large local indexes whereas other implementations run out of memory.
See how this compares to Chroma DB (txtai is 3x faster for the same dataset): https://lnkd.in/eY_nMA85 | https://www.linkedin.com/feed/update/urn:li:activity:7214335924824850432 | Organic | David Mezzetti | 07/03/2024 | All followers | 747 | 31 | 0.041499 | 10 | 0 | 1 | 0.056225 |
Breadth vs depth? Support the maximum number of integrations or build a few deep and meaningful integrations?
txtai has taken the depth approach to ensure that integrations it adds are performant and support a large number of the underlying libraries features. We're not into box checking.
https://lnkd.in/dxWDeey
| https://www.linkedin.com/feed/update/urn:li:activity:7213901962503712768 | Organic | David Mezzetti | 07/02/2024 | All followers | 505 | 8 | 0.015842 | 4 | 0 | 1 | 0.025743 |
Came across this txtai mention in Star History's blog on open-source AI search. Thank you!
https://lnkd.in/eJU9TkpM | https://www.linkedin.com/feed/update/urn:li:activity:7213638147979554817 | Organic | David Mezzetti | 07/01/2024 | All followers | 342 | 8 | 0.023392 | 3 | 0 | 1 | 0.035088 |
Did you know that the original "Introducing txtai" notebook from the 1.0 release in August 2020 by and large still works today, 1300+ commits later? Why? Because user experience and good engineering practices matter to us.
See the original notebook for yourself: https://lnkd.in/ecfhQsSV | https://www.linkedin.com/feed/update/urn:li:activity:7213545801132765186 | Organic | David Mezzetti | 07/01/2024 | All followers | 546 | 7 | 0.012821 | 7 | 0 | 1 | 0.027473 |
Faiss is a great vector indexing library. It has many features beyond just a flat index. txtai automatically creates a performant Faiss index scaled by the size of the incoming data. The index type can also be fully customized with configuration. This shows the power of a full-featured and long-standing integration.
See how this compares to LlamaIndex: https://lnkd.in/eWqU5z3U | https://www.linkedin.com/feed/update/urn:li:activity:7213531377437155328 | Organic | David Mezzetti | 07/01/2024 | All followers | 1,095 | 46 | 0.042009 | 9 | 0 | 1 | 0.051142 |
Want to run RAG with Ollama and txtai? No problem! txtai supports Ollama models for both embeddings and LLM generation.
Link to code: https://lnkd.in/e8tvbp7m | https://www.linkedin.com/feed/update/urn:li:activity:7213252452555329536 | Organic | David Mezzetti | 06/30/2024 | All followers | 531 | 12 | 0.022599 | 4 | 0 | 1 | 0.032015 |
What do we get with txtai out of the box? txtai vector indexes use SQLite + Faiss by default. This enables search with SQL and dynamic columns. Results are standard Python dictionaries, which allows direct integration with Pandas/Polars DataFrames.
See how this compares to LangChain: https://lnkd.in/esX88rPR | https://www.linkedin.com/feed/update/urn:li:activity:7213202454392270848 | Organic | David Mezzetti | 06/30/2024 | All followers | 2,051 | 133 | 0.064846 | 26 | 0 | 5 | 0.079961 |
A fundamental part of any RAG solution is the data source.
txtai is an all-in-one embeddings database with support for storing data as local file-based indexes. Did you know that txtai has built-in support for storing these indexes as a Hugging Face model and in cloud storage such as AWS S3 buckets? These composable indexes can be built and shared for RAG.
https://lnkd.in/eMGY7uRB
A couple example datasets are linked below.
txtai-wikipedia: https://lnkd.in/eQz5dKtG
txtai-arxiv: https://lnkd.in/eSCCs-Jz | https://www.linkedin.com/feed/update/urn:li:activity:7213138432556969984 | Organic | David Mezzetti | 06/30/2024 | All followers | 1,454 | 11 | 0.007565 | 2 | 0 | 1 | 0.009629 |
Did you know that txtai has a full-featured workflow framework? It can run tasks sequentially, multi-threaded and/or with multiple processes (to work around Python's GIL).
Parse a directory of files, files in a S3 bucket, multi-step prompt action and more!
https://lnkd.in/eDj8NZtb
| https://www.linkedin.com/feed/update/urn:li:activity:7212840421532536832 | Organic | David Mezzetti | 06/29/2024 | All followers | 685 | 17 | 0.024818 | 6 | 0 | 1 | 0.035036 |
Frustrated by convoluted AI/LLM/RAG frameworks? Don't settle for bloat. Take a look at txtai!
https://lnkd.in/dxWDeey
| https://www.linkedin.com/feed/update/urn:li:activity:7212432020172349440 | Organic | David Mezzetti | 06/28/2024 | All followers | 358 | 10 | 0.027933 | 6 | 0 | 1 | 0.047486 |
Curious about RAG? Not a programmer and want to experiment? Then check out this easy-to-use series of RAG applications packaged as Docker images! Everything needed is built in.
Wikipedia: https://lnkd.in/d-fJiVXR
ArXiv: https://lnkd.in/dMx8Sfk2
All code and configuration used to build these images can be found on txtai's GitHub repo: https://lnkd.in/dxWDeey | https://www.linkedin.com/feed/update/urn:li:activity:7212193563953012736 | Organic | David Mezzetti | 06/27/2024 | All followers | 344 | 25 | 0.072674 | 2 | 0 | 1 | 0.081395 |
Did you know that txtai has an application for building language model workflows? Try it out on the HF Hub.
https://lnkd.in/dQxbucux
| https://www.linkedin.com/feed/update/urn:li:activity:7211020252254498818 | Organic | David Mezzetti | 06/24/2024 | All followers | 546 | 11 | 0.020147 | 4 | 0 | 1 | 0.029304 |
Want your own local RAG API service? Did you know that txtai can automatically start an API service using YAML? And that it can be run as a Docker container?
Read more here: https://lnkd.in/eC2_HkEi | https://www.linkedin.com/feed/update/urn:li:activity:7210716942779777024 | Organic | David Mezzetti | 06/23/2024 | All followers | 538 | 8 | 0.01487 | 1 | 0 | 1 | 0.018587 |
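To illustrate the YAML-driven API service described above, a minimal configuration might look like the sketch below — the model path and options are placeholders to verify against the txtai documentation.

```yaml
# config.yml — minimal txtai API configuration (illustrative values)
# Enables a writable embeddings index with content storage
writable: true

embeddings:
  path: sentence-transformers/all-MiniLM-L6-v2
  content: true
```

Per the txtai API docs, the service can then be started with something like `CONFIG=config.yml uvicorn "txtai.api:app"`, or the same file can be mounted into the txtai Docker image.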
Did you know that txtai provides a schemaless database? Metadata can be persisted in SQLite, Postgres, MariaDB and DuckDB. Vectors can be stored with Faiss, HNSWLib and PGVector.
Read more on how this all works here: https://lnkd.in/e3j5Pf-n
| https://www.linkedin.com/feed/update/urn:li:activity:7210609072092389377 | Organic | David Mezzetti | 06/23/2024 | All followers | 528 | 4 | 0.007576 | 2 | 0 | 1 | 0.013258 |
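As a hedged sketch of the mix-and-match storage described above, a configuration could point content storage at Postgres and vectors at pgvector. Connection strings and option names here are placeholders — verify them against the txtai documentation.

```yaml
# Illustrative only — check option names against the txtai docs
embeddings:
  # Metadata/content storage: a SQLAlchemy-style Postgres URL
  content: postgresql+psycopg2://user:pass@localhost/txtai
  # Vector storage backend
  backend: pgvector
  pgvector:
    url: postgresql+psycopg2://user:pass@localhost/txtai
```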
txtai has published a lot of content lately covering RAG. This article puts all of the best content in one place!
https://lnkd.in/eDigfyYd
| https://www.linkedin.com/feed/update/urn:li:activity:7210321373389336578 | Organic | David Mezzetti | 06/22/2024 | All followers | 919 | 43 | 0.04679 | 11 | 2 | 2 | 0.063112 |
🤔 Curious about how Retrieval Augmented Generation (RAG) works? Then check out this easy-to-understand article covering how txtai RAG works!
This article shows how to create RAG processes in Python. It also covers standing up low code RAG API services with FastAPI and Docker.
https://lnkd.in/eExBX_3A | https://www.linkedin.com/feed/update/urn:li:activity:7210243904963538944 | Organic | David Mezzetti | 06/22/2024 | All followers | 839 | 10 | 0.011919 | 4 | 0 | 1 | 0.017878 |
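The retrieve-then-generate flow the article covers can be sketched in plain Python. This is the generic RAG pattern, not txtai's exact prompt or API — retrieval and generation are stubbed out.

```python
# Generic RAG prompt assembly: retrieved context is stitched into the prompt
# an LLM receives. In a real system, `context` comes from a vector search
# and the prompt is passed to an LLM for generation.
def build_prompt(question, context):
    joined = "\n".join(context)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{joined}\n\n"
        f"Question: {question}"
    )

context = ["txtai is an all-in-one embeddings database."]
print(build_prompt("What is txtai?", context))
```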
Why are so many AI projects failing? Unrealistic expectations have to be at the top. But another often overlooked item is picking too complex a stack. Many of the popular AI frameworks try to support integrating everything, leading to unnecessary complexity.
txtai follows the KISS principle with its architecture. It's designed to get up and running fast but also scale to production.
https://lnkd.in/dxWDeey | https://www.linkedin.com/feed/update/urn:li:activity:7208574391343820801 | Organic | David Mezzetti | 06/17/2024 | All followers | 678 | 10 | 0.014749 | 9 | 0 | 1 | 0.029499 |
Did you know that txtai has a customizable FastAPI integration? Check out this example of how to create a custom endpoint that can easily be run as an API service.
https://lnkd.in/ei-u7grV
| https://www.linkedin.com/feed/update/urn:li:activity:7208535149259427842 | Organic | David Mezzetti | 06/17/2024 | All followers | 461 | 8 | 0.017354 | 2 | 0 | 1 | 0.023861 |
Looking for a fun Sunday project? Then check out this article that covers how to load Python code via C/C++ and x86 assembly. Step through an example using txtai.
https://lnkd.in/emHFyU98
| https://www.linkedin.com/feed/update/urn:li:activity:7208079799032950791 | Organic | David Mezzetti | 06/16/2024 | All followers | 477 | 6 | 0.012579 | 4 | 0 | 3 | 0.027254 |
Want to use LLMs to automatically extract entity-relationship models? And load them into a knowledge graph? Then check out this article.
https://lnkd.in/dBy_H4C2 | https://www.linkedin.com/feed/update/urn:li:activity:7208074938795053056 | Organic | David Mezzetti | 06/16/2024 | All followers | 2,141 | 108 | 0.050444 | 25 | 2 | 5 | 0.06539 |
If you're new to LLMs/Vector Search/RAG/GenAI, then this article is worth a read. It covers a basic overview of semantic search, which is often the foundation of a RAG system.
https://lnkd.in/eqZs96D3 | https://www.linkedin.com/feed/update/urn:li:activity:7207373729729761282 | Organic | David Mezzetti | 06/14/2024 | All followers | 595 | 14 | 0.023529 | 4 | 0 | 2 | 0.033613 |
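For readers new to the topic, the core mechanic behind semantic search can be shown with a toy, stdlib-only sketch — real systems replace the letter-count "embedding" below with a learned model.

```python
# Toy semantic search: embed texts as vectors, rank by cosine similarity.
# The character-count embedding is purely illustrative.
import math
from collections import Counter

def embed(text):
    counts = Counter(text.lower())
    return [counts[c] for c in "abcdefghijklmnopqrstuvwxyz"]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norms if norms else 0.0

def search(query, documents):
    q = embed(query)
    return max(documents, key=lambda d: cosine(q, embed(d)))

print(search("banana", ["apple pie", "banana bread", "car engine"]))  # banana bread
```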
Cool to see the jump in txtai installs over the last couple of weeks!
https://lnkd.in/ecEgQDYM | https://www.linkedin.com/feed/update/urn:li:activity:7207111803388973058 | Organic | David Mezzetti | 06/13/2024 | All followers | 468 | 29 | 0.061966 | 7 | 0 | 1 | 0.07906 |
Graph RAG is a 🔥 topic right now. Did you know that txtai has Graph RAG support using Cypher queries?
https://lnkd.in/d-BSjuj7
| https://www.linkedin.com/feed/update/urn:li:activity:7207108256278740994 | Organic | David Mezzetti | 06/13/2024 | All followers | 725 | 34 | 0.046897 | 12 | 2 | 2 | 0.068966 |
Knowledge Graphs (KGs) are a 🔥 topic now. But how do you build them? Check out this article that uses embeddings models to automatically build a semantic graph. And it's multimodal!
https://lnkd.in/eMsAXarn
| https://www.linkedin.com/feed/update/urn:li:activity:7205552493202685952 | Organic | David Mezzetti | 06/09/2024 | All followers | 2,567 | 140 | 0.054538 | 39 | 0 | 3 | 0.0709 |
Have an existing database of questions or a FAQ? Then check out this article. RAG can also be considered but semantic search might be enough and will use fewer resources.
https://lnkd.in/epjYtMaj | https://www.linkedin.com/feed/update/urn:li:activity:7205549947088162816 | Organic | David Mezzetti | 06/09/2024 | All followers | 626 | 18 | 0.028754 | 6 | 0 | 1 | 0.039936 |
Did you know that txtai has prebuilt Docker images for CPU and GPU?
https://lnkd.in/e598zTHb | https://www.linkedin.com/feed/update/urn:li:activity:7205152576298741761 | Organic | David Mezzetti | 06/08/2024 | All followers | 500 | 17 | 0.034 | 8 | 0 | 2 | 0.054 |
txtai 7.2 added full integration (data, vectors, graph, keyword) with Postgres. If txtai could integrate with something else, what would it be? Add a comment to share.
https://lnkd.in/eFeFNgYK
| https://www.linkedin.com/feed/update/urn:li:activity:7205141267171717120 | Organic | David Mezzetti | 06/08/2024 | All followers | 645 | 24 | 0.037209 | 9 | 0 | 1 | 0.052713 |
Want a RAG solution using only local llama.cpp GGUF models? Then check this article out.
https://lnkd.in/e_rud4Ff | https://www.linkedin.com/feed/update/urn:li:activity:7204476462714798081 | Organic | David Mezzetti | 06/06/2024 | All followers | 518 | 19 | 0.03668 | 4 | 0 | 3 | 0.050193 |
Want external vectorization for vector search? It's simple with txtai. | https://www.linkedin.com/feed/update/urn:li:activity:7203794434872721409 | Organic | David Mezzetti | 06/04/2024 | All followers | 377 | 4 | 0.01061 | 5 | 0 | 1 | 0.026525 |
Congratulations to DuckDB on their 1.0.0 "Nivis" release!
Did you know that txtai can store metadata and content in DuckDB?
https://lnkd.in/edem7iNX | https://www.linkedin.com/feed/update/urn:li:activity:7203398999981047809 | Organic | David Mezzetti | 06/03/2024 | All followers | 546 | 7 | 0.012821 | 8 | 1 | 2 | 0.032967 |
txtai 7.2 is out!
This release adds Postgres integration for all components, LLM chat messages and vectorization with llama.cpp/LiteLLM.
From local to remote vectorization, model inference and data storage - txtai has you covered. It's the easiest way to build vector search, LLM and RAG systems without the bloat.
See below for more.
GitHub: https://lnkd.in/dxWDeey
Release Notes: https://lnkd.in/eKe56dwE
PyPI: https://lnkd.in/eE_Jvft
Docker Hub: https://lnkd.in/e598zTHb
API Clients:
Python: https://lnkd.in/eqVx_nqt
JavaScript: https://lnkd.in/dM8ua2y
Rust: https://lnkd.in/d2MAae2
Java: https://lnkd.in/dqmmjTw
Go: https://lnkd.in/dq7Ujv4
| https://www.linkedin.com/feed/update/urn:li:activity:7202340108505677824 | Organic | David Mezzetti | 05/31/2024 | All followers | 971 | 36 | 0.037075 | 21 | 0 | 4 | 0.062822 |
Want txtai vectorization and/or LLM inference with llama.cpp or API services like OpenAI/Cohere/Azure? Then this article is for you 🔥
https://lnkd.in/e_rud4Ff
| https://www.linkedin.com/feed/update/urn:li:activity:7202300570601209857 | Organic | David Mezzetti | 05/31/2024 | All followers | 694 | 20 | 0.028818 | 10 | 0 | 2 | 0.04611 |
LLMs can translate and summarize but that doesn't mean they should. Check out this simple summarization method that's still quite popular.
https://lnkd.in/dM6XB_Rc
| https://www.linkedin.com/feed/update/urn:li:activity:7198436489842618368 | Organic | David Mezzetti | 05/20/2024 | All followers | 2,703 | 125 | 0.046245 | 9 | 0 | 2 | 0.050314 |
txtai has a unique feature where it can persist indexes to object storage (e.g., S3) along with other systems such as the Hugging Face Hub. This adds a high level of customizability with the same code.
From Postgres to S3 and the Hugging Face Hub, txtai has you covered.
https://lnkd.in/eMGY7uRB
| https://www.linkedin.com/feed/update/urn:li:activity:7195411178435608576 | Organic | David Mezzetti | 05/12/2024 | All followers | 432 | 4 | 0.009259 | 3 | 0 | 1 | 0.018519 |
There has been considerable buzz on Knowledge Graph-driven LLM orchestration. txtai has been on it since 2022, check out this article for more.
https://lnkd.in/d-BSjuj7 | https://www.linkedin.com/feed/update/urn:li:activity:7195037571674947584 | Organic | David Mezzetti | 05/11/2024 | All followers | 2,253 | 128 | 0.056813 | 32 | 2 | 3 | 0.073236 |
🔥 Building on the recent pgvector integration is pgtext! pgtext makes it possible to build sparse (keyword) indexes with txtai and Postgres. It also enables full hybrid search with Postgres.
https://lnkd.in/eES4uruV | https://www.linkedin.com/feed/update/urn:li:activity:7194744256752746496 | Organic | David Mezzetti | 05/10/2024 | All followers | 626 | 24 | 0.038339 | 9 | 0 | 2 | 0.055911 |
Want RAG over scientific knowledge? Then check out this txtai datasource.
https://lnkd.in/eSCCs-Jz
| https://www.linkedin.com/feed/update/urn:li:activity:7194025197471985666 | Organic | David Mezzetti | 05/08/2024 | All followers | 880 | 33 | 0.0375 | 21 | 0 | 3 | 0.064773 |
Is Postgres all you need? Is a vector just a data type? That's a tough question. On one hand, dedicated vector databases have a lot of catching up to do in terms of almost 30 years of functionality. On the other, there are advantages to reimagining the architecture factoring in all we know in 2024.
The good news with txtai is that it's capable of working with multiple setups. It can persist data to Postgres. It can store data in Faiss + SQLite. It can also integrate content with other vector databases. The idea is to have everything needed to get started fast and be flexible to change as the requirements and landscape evolve.
https://lnkd.in/dxWDeey
| https://www.linkedin.com/feed/update/urn:li:activity:7193593763565322242 | Organic | David Mezzetti | 05/07/2024 | All followers | 731 | 16 | 0.021888 | 13 | 1 | 1 | 0.042408 |
Check out the latest newsletter for a summary of what's happening with txtai.
https://lnkd.in/eSJdBPNf | https://www.linkedin.com/feed/update/urn:li:activity:7192123946622746625 | Organic | David Mezzetti | 05/03/2024 | All followers | 445 | 10 | 0.022472 | 6 | 0 | 1 | 0.038202 |
One unique feature of txtai is its ability to mix and match vector, content, graph and keyword index systems together. Out of the box, local defaults are set to get up and running fast. But txtai provides a high level of flexibility in integrating different components together. It also provides its own SQL dialect for querying regardless of the underlying choices made. The architecture is designed to make it easy to add new file formats and integrations.
https://lnkd.in/edem7iNX
| https://www.linkedin.com/feed/update/urn:li:activity:7191041678437199874 | Organic | David Mezzetti | 04/30/2024 | All followers | 802 | 20 | 0.024938 | 12 | 0 | 1 | 0.041147 |
Excited to announce that txtai has crossed 7K stars on GitHub!
https://lnkd.in/dxWDeey | https://www.linkedin.com/feed/update/urn:li:activity:7190743859981615104 | Organic | David Mezzetti | 04/29/2024 | All followers | 1,008 | 33 | 0.032738 | 21 | 3 | 2 | 0.058532 |
One unique feature of txtai is that it can load and save content as Hugging Face models. Read the article for more details and see the examples below.
Article: https://lnkd.in/eMGY7uRB
Examples: https://lnkd.in/ejr8e2Wy | https://www.linkedin.com/feed/update/urn:li:activity:7189933359651803136 | Organic | David Mezzetti | 04/27/2024 | All followers | 538 | 5 | 0.009294 | 3 | 0 | 1 | 0.016729 |
Big news! We're excited to release this new Postgres + pgvector integration for txtai. It's now possible to fully persist txtai content, vectors and graph data to Postgres. From there it can be queried through txtai and/or directly with any Postgres client!
From prototyping to production, txtai has you covered.
https://lnkd.in/eFeFNgYK | https://www.linkedin.com/feed/update/urn:li:activity:7189323577396011010 | Organic | David Mezzetti | 04/25/2024 | All followers | 1,087 | 45 | 0.041398 | 22 | 3 | 4 | 0.068077 |
txtai has long had the ability to build serverless vector search. With this method, one can build a vector search system with Cloud Functions (i.e. AWS Lambda, Google Cloud Run, Azure Functions) and Object Storage. This also works with Kubernetes paired with KNative.
https://lnkd.in/ek2TaG9a | https://www.linkedin.com/feed/update/urn:li:activity:7187779947241877504 | Organic | David Mezzetti | 04/21/2024 | All followers | 537 | 9 | 0.01676 | 3 | 0 | 1 | 0.024209 |
Want to extract structured information with RAG? Then check out this article.
https://lnkd.in/euhTG2Gj | https://www.linkedin.com/feed/update/urn:li:activity:7187447546787569664 | Organic | David Mezzetti | 04/20/2024 | All followers | 966 | 63 | 0.065217 | 11 | 2 | 3 | 0.081781 |
We're excited to release txtai 7.1
This release adds dynamic embeddings vector support along with semantic graph and RAG improvements.
See below for more.
GitHub: https://lnkd.in/dxWDeey
Release Notes: https://lnkd.in/eiJWhTA6
PyPI: https://lnkd.in/eE_Jvft
Docker Hub: https://lnkd.in/e598zTHb
API Clients:
Python: https://lnkd.in/eqVx_nqt
JavaScript: https://lnkd.in/dM8ua2y
Rust: https://lnkd.in/d2MAae2
Java: https://lnkd.in/dqmmjTw
Go: https://lnkd.in/dq7Ujv4 | https://www.linkedin.com/feed/update/urn:li:activity:7187168885161230338 | Organic | David Mezzetti | 04/19/2024 | All followers | 950 | 46 | 0.048421 | 11 | 0 | 1 | 0.061053 |
🔥 Check out this new article introducing Retrieval Augmented and Guided Generation (RAGG).
This article combines txtai with the great outlines library to generate structured output. See how knowledge can be stored as Pydantic models!
https://lnkd.in/euhTG2Gj | https://www.linkedin.com/feed/update/urn:li:activity:7186782380420968448 | Organic | David Mezzetti | 04/18/2024 | All followers | 977 | 52 | 0.053224 | 15 | 5 | 3 | 0.076766 |
Check out this interesting article that uses txtai to solve crossword puzzles.
https://lnkd.in/ewdqZWzi | https://www.linkedin.com/feed/update/urn:li:activity:7183437619777671169 | Organic | David Mezzetti | 04/09/2024 | All followers | 1,046 | 10 | 0.00956 | 4 | 0 | 1 | 0.01434 |
Check out this blog post that uses txtai to build a private chat RAG solution
https://lnkd.in/e-Mvk4-D | https://www.linkedin.com/feed/update/urn:li:activity:7183436717469638656 | Organic | David Mezzetti | 04/09/2024 | All followers | 455 | 6 | 0.013187 | 2 | 0 | 1 | 0.01978 |
As an open-source project, it's always great to get feedback like what's in the comment below.
"Thanks for this, txtai looks like the most production focused library in this space."
Thank you!
https://lnkd.in/e9gYiEGv | https://www.linkedin.com/feed/update/urn:li:activity:7182006310836543488 | Organic | David Mezzetti | 04/05/2024 | All followers | 579 | 19 | 0.032815 | 5 | 0 | 2 | 0.044905 |
Want to build agent workflows? Then take a look at txtai.
txtai has long (since 2021) had a framework for connecting different pipelines into unified workflows.
This can be used to connect LLM prompts and/or specialized models for translation/summarization/text extraction.
Read this to learn more: https://lnkd.in/em2ew5ia | https://www.linkedin.com/feed/update/urn:li:activity:7180942247956217856 | Organic | David Mezzetti | 04/02/2024 | All followers | 442 | 11 | 0.024887 | 2 | 0 | 2 | 0.033937 |
We're excited to announce a new 500M parameter model: Space Time LLM.
Recent breakthroughs in LLMs have resulted in an uncanny and game-changing ability to predict future outcomes. Impressive advances in quantization and compression, such as 1-bit LLMs, have contributed to this phenomenal breakthrough in predictive capabilities. This model redefines our understanding of what and how LLMs learn.
Check out this model and see what you can predict today!
https://lnkd.in/exdy4R5G
| https://www.linkedin.com/feed/update/urn:li:activity:7180594108413943809 | Organic | David Mezzetti | 04/01/2024 | All followers | 2,605 | 206 | 0.079079 | 19 | 9 | 2 | 0.090595 |
txtai is developed in the open. The full project history and design decisions are documented. All code, and the tests that run against it, are easily accessible. Documentation is a priority and examples are provided for all major features. There are no secrets. It takes courage to develop with this level of transparency.
We're proud of our code quality, design decisions and code consistency. There is no "just get it done and we'll fix it later" mentality here.
https://lnkd.in/dxWDeey | https://www.linkedin.com/feed/update/urn:li:activity:7179166473456635906 | Organic | David Mezzetti | 03/28/2024 | All followers | 604 | 14 | 0.023179 | 4 | 0 | 1 | 0.031457 |
Thrilled to see that our PubMedBERT Embeddings model has over 200K downloads and is one of the most popular sentence similarity models on the HF Hub!
Add it to your list if you're looking to build semantic search apps for medical literature.
https://lnkd.in/egnEKcqd | https://www.linkedin.com/feed/update/urn:li:activity:7178844946857111552 | Organic | David Mezzetti | 03/27/2024 | All followers | 782 | 27 | 0.034527 | 13 | 0 | 1 | 0.05243 |
Nice video covering how to build an AI Search engine with txtai.
https://lnkd.in/e2NGemzv
Check out the NeuML YouTube channel for links to this video and more: https://lnkd.in/e9cJQ79k | https://www.linkedin.com/feed/update/urn:li:activity:7177633277652938752 | Organic | David Mezzetti | 03/24/2024 | All followers | 628 | 19 | 0.030255 | 4 | 0 | 1 | 0.038217 |