William Suffill

wsuff

AI & ML interests

None yet

Recent Activity

liked a model 9 days ago
tiiuae/Falcon3-7B-Instruct
reacted to m-ric's post with 🚀 15 days ago
๐Ÿ’ฅ ๐—š๐—ผ๐—ผ๐—ด๐—น๐—ฒ ๐—ฟ๐—ฒ๐—น๐—ฒ๐—ฎ๐˜€๐—ฒ๐˜€ ๐—š๐—ฒ๐—บ๐—ถ๐—ป๐—ถ ๐Ÿฎ.๐Ÿฌ, ๐˜€๐˜๐—ฎ๐—ฟ๐˜๐—ถ๐—ป๐—ด ๐˜„๐—ถ๐˜๐—ต ๐—ฎ ๐—™๐—น๐—ฎ๐˜€๐—ต ๐—บ๐—ผ๐—ฑ๐—ฒ๐—น ๐˜๐—ต๐—ฎ๐˜ ๐˜€๐˜๐—ฒ๐—ฎ๐—บ๐—ฟ๐—ผ๐—น๐—น๐˜€ ๐—š๐—ฃ๐—ง-๐Ÿฐ๐—ผ ๐—ฎ๐—ป๐—ฑ ๐—–๐—น๐—ฎ๐˜‚๐—ฑ๐—ฒ-๐Ÿฏ.๐Ÿฒ ๐—ฆ๐—ผ๐—ป๐—ป๐—ฒ๐˜! And they start a huge effort on agentic capabilities. ๐Ÿš€ The performance improvements are crazy for such a fast model: โ€ฃ Gemini 2.0 Flash outperforms the previous 1.5 Pro model at twice the speed โ€ฃ Now supports both input AND output of images, video, audio and text โ€ฃ Can natively use tools like Google Search and execute code โžก๏ธ If the price is on par with previous Flash iteration ($0.30 / M tokens, to compare with GPT-4o's $1.25) the competition will have a big problem with this 4x cheaper model that gets better benchmarks ๐Ÿคฏ ๐Ÿค– What about the agentic capabilities? โ€ฃ Project Astra: A universal AI assistant that can use Google Search, Lens and Maps โ€ฃ Project Mariner: A Chrome extension that can complete complex web tasks (83.5% success rate on WebVoyager benchmark, this is really impressive!) โ€ฃ Jules: An AI coding agent that integrates with GitHub workflows I'll be eagerly awaiting further news from Google! Read their blogpost here ๐Ÿ‘‰ https://blog.google/technology/google-deepmind/google-gemini-ai-update-december-2024/
View all activity

Organizations

None yet

wsuff's activity

reacted to csabakecskemeti's post with 👍 11 days ago
reacted to m-ric's post with 🚀 15 days ago
๐Ÿ’ฅ ๐—š๐—ผ๐—ผ๐—ด๐—น๐—ฒ ๐—ฟ๐—ฒ๐—น๐—ฒ๐—ฎ๐˜€๐—ฒ๐˜€ ๐—š๐—ฒ๐—บ๐—ถ๐—ป๐—ถ ๐Ÿฎ.๐Ÿฌ, ๐˜€๐˜๐—ฎ๐—ฟ๐˜๐—ถ๐—ป๐—ด ๐˜„๐—ถ๐˜๐—ต ๐—ฎ ๐—™๐—น๐—ฎ๐˜€๐—ต ๐—บ๐—ผ๐—ฑ๐—ฒ๐—น ๐˜๐—ต๐—ฎ๐˜ ๐˜€๐˜๐—ฒ๐—ฎ๐—บ๐—ฟ๐—ผ๐—น๐—น๐˜€ ๐—š๐—ฃ๐—ง-๐Ÿฐ๐—ผ ๐—ฎ๐—ป๐—ฑ ๐—–๐—น๐—ฎ๐˜‚๐—ฑ๐—ฒ-๐Ÿฏ.๐Ÿฒ ๐—ฆ๐—ผ๐—ป๐—ป๐—ฒ๐˜! And they start a huge effort on agentic capabilities.

🚀 The performance improvements are crazy for such a fast model:
‣ Gemini 2.0 Flash outperforms the previous 1.5 Pro model at twice the speed
‣ Now supports both input AND output of images, video, audio and text
‣ Can natively use tools like Google Search and execute code

โžก๏ธ If the price is on par with previous Flash iteration ($0.30 / M tokens, to compare with GPT-4o's $1.25) the competition will have a big problem with this 4x cheaper model that gets better benchmarks ๐Ÿคฏ

🤖 What about the agentic capabilities?

‣ Project Astra: A universal AI assistant that can use Google Search, Lens and Maps
‣ Project Mariner: A Chrome extension that can complete complex web tasks (83.5% success rate on the WebVoyager benchmark, this is really impressive!)
‣ Jules: An AI coding agent that integrates with GitHub workflows

I'll be eagerly awaiting further news from Google!

Read their blog post here 👉 https://blog.google/technology/google-deepmind/google-gemini-ai-update-december-2024/
reacted to merve's post with ❤️ 17 days ago
This week in open-source AI was insane 🤠 A small recap 🕺🏻 merve/dec-6-releases-67545caebe9fc4776faac0a3

Multimodal 🖼️
> Google shipped PaliGemma 2, a new iteration of PaliGemma with more sizes (3B, 10B and 28B) and with pre-trained and captioning variants 👍
> OpenGVLab released InternVL2, seven new vision LMs in different sizes, with SOTA checkpoints under an MIT license ✨
> The Qwen team at Alibaba released the base models of Qwen2-VL in 2B, 7B and 72B checkpoints

LLMs 💬
> Meta released a new iteration of Llama 70B, Llama 3.3-70B, trained further
> EuroLLM-9B-Instruct is a new multilingual LLM for European languages with an Apache 2.0 license 🔥
> Dataset: CohereForAI released GlobalMMLU, a multilingual version of MMLU covering 42 languages, with an Apache 2.0 license
> Dataset: QwQ-LongCoT-130K is a new dataset to train reasoning models
> Dataset: FineWeb2 just landed with a multilinguality update! 🔥 nearly 8TB of pretraining data in many languages!

Image/Video Generation 🖼️
> Tencent released HunyuanVideo, a new photorealistic video generation model
> OminiControl is a new editing/control framework for image generation models like Flux

Audio 🔊
> Indic-Parler-TTS is a new text-to-speech model made by the community
reacted to Xenova's post with 🚀 17 days ago
Introducing TTS WebGPU: The first ever text-to-speech web app built with WebGPU acceleration! 🔥 High-quality and natural speech generation that runs 100% locally in your browser, powered by OuteTTS and Transformers.js. 🤗 Try it out yourself!

Demo: webml-community/text-to-speech-webgpu
Source code: https://github.com/huggingface/transformers.js-examples/tree/main/text-to-speech-webgpu
Model: onnx-community/OuteTTS-0.2-500M (ONNX), OuteAI/OuteTTS-0.2-500M (PyTorch)
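
For anyone wondering what the browser-side wiring roughly looks like, here is a minimal Transformers.js text-to-speech sketch. It uses the library's generic `pipeline` helper with a stand-in model id (`Xenova/speecht5_tts`); the demo's OuteTTS checkpoint may load through a different path, so treat this as an assumption rather than the demo's actual code.

```ts
// Minimal browser TTS sketch with Transformers.js. The model id is a stand-in
// (not necessarily what the WebGPU demo uses); playback uses the Web Audio API.
import { pipeline } from "@huggingface/transformers";

async function speak(text: string): Promise<void> {
  // Load a text-to-speech pipeline; weights are downloaded once and cached.
  const synthesizer = await pipeline("text-to-speech", "Xenova/speecht5_tts");

  // SpeechT5 needs speaker embeddings; this file comes from the library's docs.
  const speakerEmbeddings =
    "https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/speaker_embeddings.bin";
  const output = await synthesizer(text, { speaker_embeddings: speakerEmbeddings });

  // `output.audio` is a Float32Array of samples at `output.sampling_rate`.
  const audio = output.audio as Float32Array;
  const ctx = new AudioContext({ sampleRate: output.sampling_rate });
  const buffer = ctx.createBuffer(1, audio.length, output.sampling_rate);
  buffer.copyToChannel(audio, 0);
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(ctx.destination);
  source.start();
}

speak("Text-to-speech running entirely in the browser!");
```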
reacted to m-ric's post with 👀 22 days ago
๐—ฆ๐—ต๐—ผ๐˜„๐—จ๐—œ: ๐—ฎ ๐˜€๐—บ๐—ฎ๐—น๐—น ๐—ฒ๐—ป๐—ฑ-๐˜๐—ผ-๐—ฒ๐—ป๐—ฑ ๐—ฎ๐—ด๐—ฒ๐—ป๐˜ ๐˜๐—ต๐—ฎ๐˜ ๐—ฐ๐—ฎ๐—ป ๐—ป๐—ฎ๐˜ƒ๐—ถ๐—ด๐—ฎ๐˜๐—ฒ ๐—ฎ๐—ป๐˜† ๐—จ๐—œ ๐—ฎ๐—ป๐—ฑ ๐—ผ๐˜‚๐˜๐—ฝ๐—ฒ๐—ฟ๐—ณ๐—ผ๐—ฟ๐—บ๐˜€ ๐—บ๐˜‚๐—ฐ๐—ต ๐—ฏ๐—ถ๐—ด๐—ด๐—ฒ๐—ฟ ๐˜€๐˜†๐˜€๐˜๐—ฒ๐—บ๐˜€! ๐Ÿ“ฒ

A team from NUS and Microsoft just released an agent that can act on any UI (Desktop, Android, Web) without needing additional text information. It works extremely well: they applied their method to a tiny Qwen2-VL-2B and managed to beat methods that either use much more powerful vision models (like GPT-4V) or rely on additional info (e.g. leveraging the DOM of a webpage) like previous methods did! 👏👏

They started from the idea that most existing methods rely heavily on text, which makes them less generalizable, while setting aside the rich UI structure that users actually rely on when navigating these interfaces.

โš™๏ธ They put several good ideas to work:

💡 Simplify screenshots to the max:
They aggressively prune the heavy visual content of UI screenshots by removing cloned image patches (for instance, any large area of uniform color is reduced to a small patch, while keeping its positional embeddings), then group patches belonging to the same GUI element to simplify even further (see the sketch after this post).

💡 Build a truly generalist dataset:
To train a general UI agent, you need trajectories from every kind of UI, expressed in a common language. The authors merge datasets like OmniAct for desktop, Mind2Web for websites and AMEX for Android trajectories to create a high-quality and diverse dataset.

โžก๏ธ Nice results ensued:
They fine-tune a tiny Qwen-2-VL-2B on their method, and it reaches SOTA on several task (element identification, web navigation), even beating methods that either use additional info from the DOM or use much bigger VLMS like GPT-4v! ๐Ÿ†

And performance could certainly jump with a slightly bigger vision model. Let's hope the community builds this soon! 🚀

Paper added to my "Agents" collection 👉 m-ric/agents-65ba776fbd9e29f771c07d4e
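
To make the patch-pruning idea concrete, here is a rough sketch of collapsing near-identical screenshot patches while remembering their original grid positions. It is an illustration of the general idea under simplified assumptions (mean-RGB patch features, a plain distance threshold), not the authors' implementation.

```ts
// Sketch of UI screenshot simplification: merge runs of near-identical patches
// (e.g. large uniform-color areas) into one representative patch, while keeping
// every original position so positional embeddings can still be attached.
type Patch = { row: number; col: number; feature: number[] };           // e.g. mean RGB
type KeptPatch = { feature: number[]; positions: Array<[number, number]> };

function l2(a: number[], b: number[]): number {
  return Math.sqrt(a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0));
}

function prunePatches(patches: Patch[], threshold = 0.05): KeptPatch[] {
  const kept: KeptPatch[] = [];
  for (const p of patches) {
    // Nearly identical to a patch we already kept? Just record the position.
    const duplicate = kept.find((k) => l2(k.feature, p.feature) < threshold);
    if (duplicate) {
      duplicate.positions.push([p.row, p.col]);
    } else {
      kept.push({ feature: p.feature, positions: [[p.row, p.col]] });
    }
  }
  return kept; // far fewer visual tokens for the VLM to attend over
}
```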
reacted to Xenova's post with 🚀 28 days ago
We just released Transformers.js v3.1 and you're not going to believe what's now possible in the browser w/ WebGPU! 🤯 Let's take a look:
🔀 Janus from DeepSeek for unified multimodal understanding and generation (Text-to-Image and Image-Text-to-Text)
👁️ Qwen2-VL from Qwen for dynamic-resolution image understanding
🔢 JinaCLIP from Jina AI for general-purpose multilingual multimodal embeddings
🌋 LLaVA-OneVision from ByteDance for Image-Text-to-Text generation
🤸‍♀️ ViTPose for pose estimation
📄 MGP-STR for optical character recognition (OCR)
📈 PatchTST & PatchTSMixer for time series forecasting

That's right, everything running 100% locally in your browser (no data sent to a server)! 🔥 Huge for privacy!

Check out the release notes for more information. 👇
https://github.com/huggingface/transformers.js/releases/tag/3.1.0

Demo link (+ source code): webml-community/Janus-1.3B-WebGPU
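
As a quick illustration of what "runs locally with WebGPU" means in code, here is a minimal sketch using the generic Transformers.js `pipeline` API with the v3 `device: "webgpu"` option. The embedding model id is an illustrative stand-in, not one of the v3.1 additions listed above.

```ts
// Minimal Transformers.js v3 + WebGPU sketch: the `device` and `dtype` options
// are part of the v3 pipeline API; the model id here is just an example.
import { pipeline } from "@huggingface/transformers";

async function embed(texts: string[]): Promise<number[][]> {
  // Ask for WebGPU execution; requires a WebGPU-capable browser.
  const extractor = await pipeline("feature-extraction", "Xenova/all-MiniLM-L6-v2", {
    device: "webgpu",
    dtype: "fp32",
  });

  // Mean-pool and normalize to get one embedding vector per input text.
  const output = await extractor(texts, { pooling: "mean", normalize: true });
  return output.tolist() as number[][];
}

embed(["Transformers.js v3.1 running in the browser"]).then((vectors) =>
  console.log(`embedding dimension: ${vectors[0].length}`)
);
```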
reacted to PLB's post with 🚀 about 1 month ago
โš ๏ธ People selling AI chatbots for websites hate us.
Add an open source chat assistant on your website in 5 minutes: https://github.com/phospho-app/ai-chat-bubble

How does it work? (a rough sketch of the indexing step follows below)
- You give a URL
- The AI assistant crawls the website content and embeds it
- You add it to your frontend in one line of code
- People on your website can ask the assistant questions

Powered by BAAI/bge-small-en-v1.5 and Mistral AI
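
For readers curious about the indexing half of this, here is a rough sketch of the crawl-and-embed step. `crawl` and `embed` are hypothetical stand-ins (not the phospho project's actual API), and the fixed-size chunking is a simplification.

```ts
// Rough sketch of the indexing step behind a site chat assistant: crawl the
// pages, split them into chunks, embed each chunk once, keep the vectors.
// `crawl` and `embed` are hypothetical helpers, not phospho's actual API.
declare function crawl(url: string): Promise<string[]>;        // raw page texts
declare function embed(texts: string[]): Promise<number[][]>;  // e.g. bge-small-en-v1.5

type IndexedChunk = { text: string; vector: number[] };

async function indexWebsite(url: string, chunkSize = 800): Promise<IndexedChunk[]> {
  const pages = await crawl(url);

  // Naive fixed-size chunking; real systems usually split on headings or paragraphs.
  const chunks: string[] = [];
  for (const page of pages) {
    for (let i = 0; i < page.length; i += chunkSize) {
      chunks.push(page.slice(i, i + chunkSize));
    }
  }

  const vectors = await embed(chunks);
  return chunks.map((text, i) => ({ text, vector: vectors[i] }));
}
```

At question time, the assistant embeds the user's question, retrieves the closest chunks from this index and passes them to the LLM as context.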
reacted to fdaudens's post with 👍 about 1 month ago
Been reading about the "bigger models = better AI" narrative getting pushed back today.

@thomwolf tackled this head-on at Web Summit and highlighted how important small models are (and why closed-source companies haven't pushed for this 😬). They're crushing it: today's 1B parameter models outperform last year's 10B models.

Fascinating to hear him talk about the secret sauce behind this approach.
reacted to fdaudens's post with 👍 about 2 months ago
๐Ÿ” NYT leveraged AI to investigate election interference by analyzing 400+ hours of recorded meetings - that's 5M words of data!

AI spotted patterns, humans verified facts. Every AI-flagged quote was manually verified against source recordings. Really appreciate that they published their full methodology - transparency matters when using AI in journalism.

A perfect blend of tech & journalism.

The future of journalism isn't robots replacing reporters - it's AI helping humans process massive datasets more efficiently. Sometimes the most powerful tech solutions are the least flashy ones.

Read the article: https://www.nytimes.com/interactive/2024/10/28/us/politics/inside-the-movement-behind-trumps-election-lies.html?unlocked_article_code=1.Vk4.ucv9.dbHVquTQaf0G&smid=nytcore-ios-share
reacted to m-ric's post with 👀 2 months ago
How to re-rank your snippets in RAG ⇒ ColBERT, Rerankers, Cross-Encoders

Letโ€™s say youโ€™re doing RAG, and in an effort to improve performance, you try to rerank a few possible source snippets by their relevancy to a query.

How can you score similarity between your query and any source document? 🤔 📄 ↔️ 📑

1. Just use embeddings: No-interaction 🏎️

This means you encode each token from both the query and the doc as separate vectors, then average the token vectors of each side separately to get two vectors in total, and compute their similarity, e.g. via cosine similarity.
โžก๏ธ Notable examples: Check the top of the MTEB leaderboard!

2. Late-interaction: this is ColBERT 🏃

These encode each token from both query and doc as separate vectors as before, but compare them all together without averaging them first and losing information.

This is more accurate than no-interaction but also slower, because you have to compare n*m vector pairs instead of 2. At least you can still pre-compute and store the document vectors. And ColBERT has some optimisations, like pooling, to be faster.

โžก๏ธ Notable examples: ColBERTv2, mxbai-colbert-large-v1, jina-colbert-v2

3. Early interaction: Cross-encoders 🏋️

Basically you run the concatenated query + document through a model to get a final score.

This is the most accurate, but also the slowest, since it takes really long when you have many docs to rerank! And you cannot pre-compute and store embeddings.

โžก๏ธ Notable examples: MixedBread or Jina AI rerankers!

🚀 So what you choose is a trade-off between speed and accuracy: I think ColBERT is often a really good choice!

Based on this great post by Jina AI 👉 https://jina.ai/news/what-is-colbert-and-late-interaction-and-why-they-matter
reacted to m-ric's post with 👀 3 months ago
Anthropic just released a chunk improvement technique that vastly improves RAG performance! 🔥

Crash reminder: Retrieval Augmented Generation (RAG) is a widely-used technique for improving your LLM chatbot's answers to user questions.

It goes like this: instead of generating an LLM answer straight away, you add a prior step called retrieval, which fetches relevant documents from your knowledge base through semantic search and appends the top-K documents to the prompt. ➡️ As a result, the LLM answer is grounded in context.
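
A minimal sketch of that retrieve-then-generate step, with `embed` and `generate` as hypothetical stand-ins for your embedding model and LLM:

```ts
// Minimal RAG sketch: embed the question, take the top-K most similar chunks
// from a pre-embedded knowledge base, and prepend them to the prompt.
// `embed` and `generate` are hypothetical helpers standing in for real APIs.
declare function embed(text: string): Promise<number[]>;
declare function generate(prompt: string): Promise<string>;

type Chunk = { text: string; vector: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] ** 2;
    nb += b[i] ** 2;
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

async function answer(question: string, knowledgeBase: Chunk[], k = 5): Promise<string> {
  const q = await embed(question);
  const topK = knowledgeBase
    .map((c) => ({ text: c.text, score: cosine(q, c.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((c) => c.text);

  // The answer is now grounded in retrieved context, not just the model's memory.
  return generate(`Context:\n${topK.join("\n---\n")}\n\nQuestion: ${question}\nAnswer:`);
}
```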

โ›”๏ธ The difficulty with this retrieval step is that when you split your documents into chunks that will be retrieved, you lose context. So importance chunks could be missed.

💡 Anthropic's freshly released blog post shows that you can add some context to each chunk with one LLM call. You then embed the original chunk plus the bit of added context, so that the embedding is much more representative of the document in its context!
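
A rough sketch of that contextual-embedding step; `generate` and `embed` are hypothetical helpers and the prompt wording is illustrative, not Anthropic's exact prompt:

```ts
// Contextual retrieval sketch: ask an LLM to situate each chunk within the full
// document, then embed "context + chunk" instead of the bare chunk.
declare function generate(prompt: string): Promise<string>;
declare function embed(text: string): Promise<number[]>;

async function embedChunkWithContext(fullDocument: string, chunk: string): Promise<number[]> {
  // One LLM call per chunk; prompt caching makes re-sending the shared document prefix cheap.
  const context = await generate(
    `Here is a document:\n${fullDocument}\n\n` +
      `Write one short sentence situating the following chunk within the document:\n${chunk}`
  );
  // The stored embedding now reflects the chunk *in its context*, not in isolation.
  return embed(`${context}\n${chunk}`);
}
```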

🤔 Isn't that crazy expensive? Well, it would have been before, but not so much anymore with their new prompt caching feature, which makes duplicating thousands of requests with the same prompt much less expensive. They give an indicative price tag of only $1.02 per million document tokens processed!

✅ And this vastly improves performance on their benchmark!

Read their blog post 👉 https://www.anthropic.com/news/contextual-retrieval
reacted to Tonic's post with 🔥 3 months ago