
Recent Activity

jattokatarratto  updated a Space about 2 months ago
jrc-ai/MultiNER-simplified
roncmic  updated a Space 2 months ago
jrc-ai/crisesStorylinesRAG

Upload app_pyvis_new.py (#4, opened 2 months ago by roncmic)
mib (#3, opened 2 months ago by roncmic)
mib (#2, opened 2 months ago by roncmic)
do-me posted an update 11 months ago
What are your favorite text chunkers/splitters?
Mine are:
- https://github.com/benbrandt/text-splitter (Rust/Python, battle-tested, Wasm version coming soon)
- https://github.com/umarbutler/semchunk (Python, really performant but some issues with huge docs)

I tried the huge Jina AI regex, but it failed for my (admittedly messy) documents, e.g. from EUR-LEX. Their free segmenter API is really cool but unfortunately times out on my huge docs (~100 pages): https://jina.ai/segmenter/

Also, I tried writing a Vanilla JS chunker with simple, adjustable hierarchical logic (inspired by the above). I think it does a decent job for the few lines of code: https://do-me.github.io/js-text-chunker/
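For illustration, the kind of hierarchical logic described above can be sketched in a few lines of Vanilla JS. This is my own minimal sketch, not the linked project's code; the separator list and `maxLen` default are arbitrary choices:

```javascript
// Minimal hierarchical chunker sketch (illustrative, not the linked code).
// Split on the coarsest separator first, then recurse into finer ones until
// every chunk fits within maxLen; small pieces are greedily merged together.
const SEPARATORS = ['\n\n', '\n', '. ', ' '];

function chunkText(text, maxLen = 500, level = 0) {
  if (text.length <= maxLen || level >= SEPARATORS.length) {
    return text.trim() ? [text.trim()] : [];
  }
  const sep = SEPARATORS[level];
  const chunks = [];
  let current = '';
  for (const part of text.split(sep)) {
    const candidate = current ? current + sep + part : part;
    if (candidate.length <= maxLen) {
      current = candidate; // still fits: keep merging
    } else {
      if (current) chunks.push(...chunkText(current, maxLen, level + 1));
      current = part; // start a new chunk; recurse if it is still too long
    }
  }
  if (current) chunks.push(...chunkText(current, maxLen, level + 1));
  return chunks;
}
```

With `maxLen` set near a typical embedding model's context size, this keeps paragraphs together where possible and only falls back to sentence- or word-level splits for oversized blocks.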

Happy to hear your thoughts!
do-me posted an update 11 months ago
SemanticFinder now supports WebGPU thanks to @Xenova's efforts on transformers.js v3!
Expect massive performance gains. I ran inference on a whole book with 46k chunks in under 5 minutes. If your device doesn't support #WebGPU, use the classic Wasm-based version:
- WebGPU: https://do-me.github.io/SemanticFinder/webgpu/
- Wasm: https://do-me.github.io/SemanticFinder/

WebGPU harnesses the full power of your hardware, no longer restricted to just the CPU. The speedup is significant (4-60x) across all kinds of devices: consumer-grade laptops, heavy Nvidia GPU setups, or Apple Silicon. Measure the difference for your device here: Xenova/webgpu-embedding-benchmark
Chrome currently works out of the box, Firefox requires some tweaking.

WebGPU + transformers.js makes it possible to build amazing applications and make them accessible to everyone. For example, SemanticFinder could become a simple GUI for populating your (vector) DB of choice. See the pre-indexed community texts here: do-me/SemanticFinder
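As a sketch of how this looks in code: transformers.js v3 lets you request the WebGPU backend via the `device` option, falling back to Wasm where it is unavailable, as described above. The `pickDevice` helper and the model ID here are illustrative, not part of SemanticFinder:

```javascript
// Pick the WebGPU backend when the browser exposes it, else fall back to Wasm.
// (Illustrative helper; transformers.js v3 accepts a `device` option.)
function pickDevice(nav = globalThis.navigator) {
  return nav && 'gpu' in nav ? 'webgpu' : 'wasm';
}

// Not executed here: requires a browser and downloads the model on first use.
async function embed(texts) {
  const { pipeline } = await import('@huggingface/transformers');
  const extractor = await pipeline(
    'feature-extraction',
    'Xenova/bge-small-en-v1.5', // any compatible embedding model
    { device: pickDevice() }
  );
  return extractor(texts, { pooling: 'mean', normalize: true });
}
```

The same `embed` function works on both backends, which is what makes the Wasm version a drop-in fallback for devices without WebGPU.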
Happy to hear your ideas!
do-me posted an update about 1 year ago
Hey HuggingFace, love your open source attitude and particularly transformers.js for embedding models! Your current "Use this model" integration gives you the transformers.js code, but there is no quick way to actually test a model in one click.
SemanticFinder (do-me/SemanticFinder) offers such an integration for all compatible feature-extraction models! All you need to do is add a URL parameter with the model ID, like so: https://do-me.github.io/SemanticFinder/?model=Xenova/bge-small-en-v1.5. You can also choose between quantized and normal mode with https://do-me.github.io/SemanticFinder/?model=Xenova/bge-small-en-v1.5&quantized=false. Maybe that would do for an HF integration?
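The deep-link scheme above is easy to generate programmatically; a tiny sketch (the helper name `semanticFinderUrl` is mine, the parameters are the ones from the post):

```javascript
// Build a SemanticFinder deep link for a transformers.js feature-extraction model.
function semanticFinderUrl(modelId, { quantized = true } = {}) {
  let url = `https://do-me.github.io/SemanticFinder/?model=${modelId}`;
  if (!quantized) url += '&quantized=false'; // quantized mode is the default
  return url;
}
```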
I know it's a small open source project, but I really believe it provides value for devs before they decide on one model or another. Also, it's much easier than having to spin up a notebook, install dependencies, etc. It's private, so you could even do some real-world evaluation on personal data without having to worry about third-party services' data policies.
Happy to hear the community's thoughts!
do-me posted an update over 1 year ago
Get daily/weekly/monthly notifications about the latest trending feature-extraction models compatible with transformers.js for semantic search! All open source, built on GitHub Actions and ntfy.sh.

I'm also providing daily-updated tables (filterable, and sortable by ONNX model size too!) if you only want to have a look once in a while. Download what suits you best: CSV, XLSX, Parquet, JSON, or HTML.

Would you like to monitor other models/tags? Feel free to open a PR :)

GitHub: https://github.com/do-me/trending-huggingface-models
Ntfy.sh daily channel: https://ntfy.sh/feature_extraction_transformers_js_models_daily
Sortable table: https://do-me.github.io/trending-huggingface-models/

And the best part: all 145 models are integrated in SemanticFinder to play around with: https://do-me.github.io/SemanticFinder/

do-me posted an update over 1 year ago
Question: HF model search not showing all results

I noticed that when I filter the HF model search with these tags:
- feature-extraction
- transformers.js
it does not show all models that are actually tagged.

Example: all Alibaba-NLP models (e.g. the gte family) are correctly tagged, but they don't show up here:
- https://huggingface.co/models?pipeline_tag=feature-extraction&library=transformers.js&sort=trending&search=gte
- correctly tagged model: Alibaba-NLP/gte-large-en-v1.5

Does anyone know why?

fyi @Xenova
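While this is open, one way to cross-check is to query the Hub's public models API directly with the same filters as the web search. A hedged sketch: the `hubSearchUrl` helper is mine, and the endpoint and query parameter names are the huggingface.co API ones to the best of my knowledge:

```javascript
// Build a Hub API query mirroring the web search filters above.
function hubSearchUrl({ pipelineTag, library, search }) {
  const url = new URL('https://huggingface.co/api/models');
  if (pipelineTag) url.searchParams.set('pipeline_tag', pipelineTag);
  if (library) url.searchParams.set('library', library);
  if (search) url.searchParams.set('search', search);
  return url.toString();
}

// Not executed here (network):
// const models = await (await fetch(hubSearchUrl({
//   pipelineTag: 'feature-extraction', library: 'transformers.js', search: 'gte',
// }))).json();
```

Comparing the API response against the web UI's result list would show whether the missing models are a search-index issue or a tagging issue.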