
Mariusz Kurman PRO

mkurman

AI & ML interests

AI Tech Lead | MD

Recent Activity

Organizations

MedIT Solutions · BigScience Biomedical Datasets · SOWA Project

mkurman's activity

reacted to reddgr's post with 👀 19 days ago
Thought it would only make sense to share this here. Lately, one of my favorite activities has been annotating prompts and putting them into datasets (reddgr/tl-test-learn-prompts, reddgr/rq-request-question-prompts, reddgr/nli-chatbot-prompt-categorization), which I then use to classify and select chatbot conversations for my website. It's quite fun to use this widget on lmsys/lmsys-chat-1m, but I also use it on my two years of talking to chatbots (soon to be a dataset, though a lot of web scraping and ETL work is still left)... This one in the picture was probably one of the first prompts I wrote to an LLM:
posted an update 21 days ago
How Do I Contribute (HDIC)

Exciting times to come? We are working on a layer self-esteem technique that scores each layer's contribution to the final prediction. For now, it unlocks a lot of knowledge already stored in the weights that we couldn't force the model to extract through further fine-tuning!
reacted to AdinaY's post with 🔥 22 days ago
HunyuanVideo 📹 The new open video generation model by Tencent!
👉 tencent/HunyuanVideo
zh-ai-community/video-models-666afd86cfa4e4dd1473b64c
✨ 13B parameters: Probably the largest open video model to date
✨ Unified architecture for image & video generation
✨ Powered by advanced features: MLLM Text Encoder, 3D VAE, and Prompt Rewrite
✨ Delivers stunning visuals, diverse motion, and unparalleled stability
🔓 Fully open with code & weights
reacted to singhsidhukuldeep's post with 🤗 22 days ago
Exciting breakthrough in Document AI! Researchers from UNC Chapel Hill and Bloomberg have developed M3DocRAG, a revolutionary framework for multi-modal document understanding.

The innovation lies in its ability to handle complex document scenarios that traditional systems struggle with:
- Process 40,000+ pages across 3,000+ documents
- Answer questions requiring information from multiple pages
- Understand visual elements like charts, tables, and figures
- Support both closed-domain (single document) and open-domain (multiple documents) queries

Under the hood, M3DocRAG operates through three sophisticated stages:

>> Document Embedding:
- Converts PDF pages to RGB images
- Uses ColPali to project both text queries and page images into a shared embedding space
- Creates dense visual embeddings for each page while maintaining visual information integrity

>> Page Retrieval:
- Employs MaxSim scoring to compute relevance between queries and pages
- Implements inverted file indexing (IVFFlat) for efficient search
- Reduces retrieval latency from 20s to under 2s when searching 40K+ pages
- Supports approximate nearest neighbor search via Faiss
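The MaxSim scoring used in this stage is easy to sketch: for every query-token embedding, take its best dot product against any page patch, then sum those maxima. A minimal plain-Python illustration with toy 2-D vectors (the real system uses ColPali embeddings and a Faiss IVFFlat index, not these made-up values):

```python
# MaxSim late-interaction scoring with toy 2-D embeddings.
# score(Q, P) = sum over query tokens of (max over page patches of <q, p>)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def maxsim(query_tokens, page_patches):
    # Each query token picks its single best-matching patch on the page.
    return sum(max(dot(q, p) for p in page_patches) for q in query_tokens)

query = [[1.0, 0.0], [0.0, 1.0]]      # two query-token embeddings
page_a = [[0.9, 0.1], [0.1, 0.9]]     # patches aligned with the query
page_b = [[-1.0, 0.0], [0.0, -1.0]]   # patches pointing away from it

scores = {"page_a": maxsim(query, page_a), "page_b": maxsim(query, page_b)}
best = max(scores, key=scores.get)    # page_a wins: 1.8 vs 0.0
```

Ranking pages by this score is what the IVFFlat index then approximates at scale, trading a little recall for the 20s-to-2s latency drop mentioned above.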

>> Question Answering:
- Leverages Qwen2-VL 7B as the multi-modal language model
- Processes retrieved pages through a visual encoder
- Generates answers considering both textual and visual context

The results are impressive:
- State-of-the-art performance on MP-DocVQA benchmark
- Superior handling of non-text evidence compared to text-only systems
- Significantly better performance on multi-hop reasoning tasks

This is a game-changer for industries dealing with large document volumes—finance, healthcare, and legal sectors can now process documents more efficiently while preserving crucial visual context.
reacted to cfahlgren1's post with 🔥 22 days ago
You can just ask things 🗣️

"show me messages in the coding category that are in the top 10% of reward model scores"

Download really high quality instructions from the Llama3.1 405B synthetic dataset 🔥

argilla/magpie-ultra-v1.0
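Under the hood, a natural-language request like that reduces to a filter over dataset fields. A toy sketch of the same logic in plain Python (the field names `category` and `reward_score` are hypothetical, not the actual magpie-ultra schema):

```python
# Keep "coding" rows whose reward score is in the top 10% of that category.

records = [
    {"category": "coding", "reward_score": s / 100}
    for s in range(100)
] + [{"category": "math", "reward_score": 0.99}]

coding = [r for r in records if r["category"] == "coding"]
scores = sorted(r["reward_score"] for r in coding)
cutoff = scores[int(0.9 * len(scores))]       # 90th-percentile threshold

top = [r for r in coding if r["reward_score"] >= cutoff]
print(len(top))  # 10 of the 100 coding rows survive the filter
```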

replied to their post 22 days ago

That is an excellent question. I was just googling and searching on arXiv. Now I try Elicit, "talk" with papers, and listen to "podcasts" on NotebookLM.

replied to their post 22 days ago
reacted to AdinaY's post with ❤️ 23 days ago
2023 & 2024 Top Downloaded (all time) Open Models on the hub are both from the Chinese community 👀

2023 👉 BGE base by BAAI
BAAI/bge-base-en-v1.5
2024 👉 Qwen 2.5 by Alibaba Qwen
Qwen/Qwen2.5-1.5B-Instruct

Can’t wait to see what incredible models the Chinese community will bring in 2025🚀

✨ Follow https://huggingface.co/zh-ai-community to get the latest updates from the Chinese community
✨ Explore the 2024 Year in Review huggingface/open-source-ai-year-in-review-2024
reacted to prithivMLmods's post with ❤️ 23 days ago
Milestone for Flux.1 Dev 🔥

💢The Flux.1 Dev model has crossed 1️⃣0️⃣,0️⃣0️⃣0️⃣ creative public adapters! 🎈
🔗 https://huggingface.co/models?other=base_model:adapter:black-forest-labs/FLUX.1-dev

💢This includes:
- 266 Finetunes
- 19 Quants
- 4 Merges

💢 Here’s the 10,000th public adapter : 😜
+ strangerzonehf/Flux-3DXL-Partfile-0006

💢 Page :
+ https://huggingface.co/strangerzonehf

💢 Collection :
+ prithivMLmods/flux-lora-collections-66dd5908be2206cfaa8519be
posted an update 23 days ago
What AI-enhanced research tools would you recommend for searching and analyzing scientific papers?
reacted to nataliaElv's post with 👀 23 days ago
We're so close to reaching 100 languages! Can you help us cover the remaining 200? Check if we're still looking for language leads for your language: nataliaElv/language-leads-dashboard
reacted to AdinaY's post with 👀 23 days ago
reacted to merve's post with 🤗 23 days ago
Last week we were blessed with open-source models! A recap 💝
merve/nov-29-releases-674ccc255a57baf97b1e2d31

🖼️ Multimodal
> At Hugging Face we released SmolVLM, a performant and efficient smol vision language model 💗
> Show Lab released ShowUI-2B: new vision-language-action model to build GUI/web automation agents 🤖
> Rhymes AI has released the base models of Aria: Aria-Base-64K and Aria-Base-8K, with their respective context lengths
> ViDoRe team released ColSmolVLM: A new ColPali-like retrieval model based on SmolVLM
> Dataset: Llava-CoT-o1-Instruct: new dataset labelled using Llava-CoT multimodal reasoning model📖
> Dataset: LLaVA-CoT-100k dataset used to train Llava-CoT released by creators of Llava-CoT 📕

💬 LLMs
> Qwen team released QwQ-32B-Preview, a state-of-the-art open-source reasoning model that broke the internet 🔥
> Alibaba has released Marco-o1, a new open-source reasoning model 💥
> NVIDIA released Hymba 1.5B Base and Instruct, the new state-of-the-art SLMs with hybrid architecture (Mamba + transformer)

⏯️ Image/Video Generation
> Qwen2VL-Flux: new image generation model based on Qwen2VL image encoder, T5 and Flux for generation
> Lightricks released LTX-Video, a new DiT-based video generation model that can generate 24 FPS videos at 768x512 res ⏯️
> Dataset: Image Preferences is a new image generation preference dataset made with DIBT community effort of Argilla 🏷️

Audio
> OuteAI released OuteTTS-0.2-500M new multilingual text-to-speech model based on Qwen-2.5-0.5B trained on 5B audio prompt tokens
reacted to vincentg64's post with 👀 23 days ago
LLM 2.0, the New Generation of Large Language Models https://mltblog.com/49ksOLL

I get many questions about the radically different LLM technology that I started to develop 2 years ago. Initially designed to retrieve information that I could no longer find on the Internet, whether with search engines, OpenAI, Gemini, Perplexity, or any other platform, it evolved to become the ideal solution for professional enterprise users. Now agentic and multimodal, it automates business tasks at scale with lightning speed, consistently delivers real ROI, bypasses the costs associated with training and GPUs with zero weights and explainable AI, and was tested and developed for a Fortune 100 company.

So, what is behind the scenes, how different is it compared to LLM 1.0 (GPT and the likes), how can it be hallucination-free, what makes it a game changer, how did it eliminate prompt engineering, how does it handle knowledge graphs without neural networks, and what are the other benefits?

In a nutshell, the performance is due to building a robust architecture from the ground up and at every step, offering far more than a prompt box, relying on home-made technology rather than faulty Python libraries, and designed by enterprise and tech visionaries for enterprise users.

Contextual smart crawling to retrieve underlying taxonomies, augmented taxonomies, long contextual multi-tokens, real-time fine-tuning, increased security, an LLM router with specialized sub-LLMs, an in-memory database architecture of its own to efficiently handle sparsity in keyword associations, contextual backend tables, agents built on the backend, mapping between prompt and corpus keywords, customized PMI rather than cosine similarity, variable-length embeddings, and the scoring engine (the new “PageRank” of LLMs) returning results along with their relevancy scores, are but a few of the differentiators.
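One of the named ingredients, PMI in place of cosine similarity, has a standard textbook definition that is easy to sketch (toy co-occurrence counts below, not the article's actual customized implementation):

```python
import math

# Pointwise mutual information from raw co-occurrence counts:
#   PMI(x, y) = log( p(x, y) / (p(x) * p(y)) )
# Positive when x and y co-occur more than chance, negative when less.

pair_counts = {("neural", "network"): 50, ("neural", "banana"): 1}
word_counts = {"neural": 60, "network": 70, "banana": 40}
total_pairs = 1000
total_words = 1000

def pmi(x, y):
    p_xy = pair_counts.get((x, y), 0) / total_pairs
    p_x = word_counts[x] / total_words
    p_y = word_counts[y] / total_words
    return math.log(p_xy / (p_x * p_y))

related = pmi("neural", "network")    # well above 0: strong association
unrelated = pmi("neural", "banana")   # below 0: co-occurs less than chance
```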

➡️ Read the full article, at https://mltblog.com/49ksOLL
reacted to cutechicken's post with 🔥 23 days ago
# Tank War: A Cool AI-Generated Game Making Waves on Hugging Face

Hey there! Let me tell you about Tank War, a super interesting HTML5 Canvas game that's currently ranked #11 on Hugging Face Trending. What's really cool about this game is that it was initially whipped up in just one minute using MOUSE-I (VIDraft/mouse1), and then polished up with some human touches to the metadata files. Pretty neat, right?

## What Makes It Fun?

- **Stage-by-Stage Action**: You get 2 stages, each packed with 10 rounds and an epic boss battle
- **Power-Up Shopping**: Grab new tanks and upgrades with your hard-earned gold
- **Two-Gun System**: Switch between a heavy-hitting cannon and a rapid-fire machine gun
- **Air Support**: Call in BF-109 fighters and JU-87 dive bombers to rain down some extra firepower

## The Tech Behind the Magic

1. **AI-Powered Foundation**
- Quick game logic generation through MOUSE-I
- Fine-tuned with custom metadata tweaks

2. **Smooth Canvas Graphics**
- Butter-smooth animations with requestAnimationFrame
- Smart hitbox system for precise combat

3. **Smart Code Structure**
- Well-organized classes for enemies, effects, and support units
- Clever code reuse through inheritance

4. **Cool Game Features**
- Awesome sound effects and background music
- Smart enemy AI that keeps you on your toes
- Support units that know how to pick their targets

This project shows just how far we've come with AI-assisted game development, and its popularity on Hugging Face proves it's onto something good! It's a perfect example of how MOUSE-I's quick prototyping abilities and a developer's careful tweaking can create something really special. Think of it as AI and human creativity teaming up to make something awesome! 🎮✨

cutechicken/tankwar
reacted to ThomasSumner's post with 🔥 23 days ago
New Datasets Will Train AI Models To Think Like Scientists
The Polymathic AI team has released two massive datasets for training artificial intelligence models to tackle problems across scientific disciplines. The datasets — accessible on Hugging Face — include data from dozens of sources, including astrophysics, biology, fluid dynamics, acoustics and chemistry.

Full announcement here: https://www.simonsfoundation.org/2024/12/02/new-datasets-will-train-ai-models-to-think-like-scientists/

Polymathic AI Hugging Face Org: https://huggingface.co/polymathic-ai
reacted to merve's post with 🚀 23 days ago
small but mighty 🔥
you can fine-tune SmolVLM on an L4 with a batch size of 4 and it will only take 16.4 GB VRAM 🫰🏻 with gradient accumulation, the simulated batch size is 16 ✨
I made a notebook that includes all the goodies: QLoRA, gradient accumulation, gradient checkpointing with explanations on how they work 💝 https://github.com/huggingface/smollm/blob/main/finetuning/Smol_VLM_FT.ipynb
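The gradient-accumulation trick behind that simulated batch size can be shown in miniature: summing scaled micro-batch gradients reproduces the full-batch gradient exactly. A toy one-parameter least-squares model in plain Python (the notebook itself does this with transformers/PEFT, not this sketch):

```python
# Gradient accumulation in miniature: 4 micro-batches of 4 samples
# yield the same gradient as one batch of 16.

def grad(w, xs, ys):
    # d/dw mean((w*x - y)^2) = mean(2 * (w*x - y) * x)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

xs = [float(i) for i in range(16)]
ys = [2.0 * x + 1.0 for x in xs]
w = 0.5

full = grad(w, xs, ys)                      # one full batch of 16

accum_steps, micro = 4, 4
accumulated = 0.0
for k in range(accum_steps):                # four micro-batches of 4
    mx = xs[k * micro:(k + 1) * micro]
    my = ys[k * micro:(k + 1) * micro]
    accumulated += grad(w, mx, my) / accum_steps  # scale, then sum
```

This is why only the micro-batch activations ever sit in VRAM: the optimizer sees the same update it would have computed from the big batch.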
replied to brunatrevelin's post 23 days ago
replied to clem's post 23 days ago
reacted to clem's post with 🚀 23 days ago
Six predictions for AI in 2025 (and a review of how my 2024 predictions turned out):

- There will be the first major public protest related to AI
- A big company will see its market cap divided by two or more because of AI
- At least 100,000 personal AI robots will be pre-ordered
- China will start to lead the AI race (as a consequence of leading the open-source AI race).
- There will be big breakthroughs in AI for biology and chemistry.
- We will begin to see the economic and employment growth potential of AI, with 15M AI builders on Hugging Face.

How my predictions for 2024 turned out:

- A hyped AI company will go bankrupt or get acquired for a ridiculously low price
✅ (Inflexion, AdeptAI,...)

- Open-source LLMs will reach the level of the best closed-source LLMs
✅ with QwQ and dozens of others

- Big breakthroughs in AI for video, time-series, biology and chemistry
✅ for video 🔴for time-series, biology and chemistry

- We will talk much more about the cost (monetary and environmental) of AI
✅Monetary 🔴Environmental (😢)

- A popular media will be mostly AI-generated
✅ with NotebookLM by Google

- 10 million AI builders on Hugging Face, leading to no increase in unemployment
🔜 currently 7M AI builders on Hugging Face