HarbingerX

AI & ML interests

None yet

Recent Activity

liked a Space 11 days ago
skytnt/anime-remove-background
liked a model about 1 month ago
ByteDance/Hyper-SD
updated a Space about 1 month ago
HarbingerX/Llama-3.2-1B

Organizations

DaemonDEX PowerHex

HarbingerX's activity

reacted to andito's post with 🤗 about 2 months ago
Let's go! We are releasing SmolVLM, a smol 2B VLM built for on-device inference that outperforms all models at similar GPU RAM usage and token throughput.

- SmolVLM generates tokens 7.5 to 16 times faster than Qwen2-VL! 🤯
- Other models at this size crash a laptop, but SmolVLM comfortably generates 17 tokens/sec on a MacBook! 🚀
- SmolVLM can be fine-tuned on a Google Colab! Or process millions of documents with a consumer GPU!
- SmolVLM even outperforms larger models in video benchmarks, despite not even being trained on videos!

Check out more!
Demo: HuggingFaceTB/SmolVLM
Blog: https://huggingface.co/blog/smolvlm
Model: HuggingFaceTB/SmolVLM-Instruct
Fine-tuning script: https://github.com/huggingface/smollm/blob/main/finetuning/Smol_VLM_FT.ipynb
reacted to garrethlee's post with 👀 about 2 months ago
The latest o1 model from OpenAI still can't correctly answer whether 9.11 > 9.9 🤔

A possible explanation? Tokenization - and our latest work investigates how it affects a model's ability to do math!

In this blog post, we discuss:
🔢 The different ways numbers are tokenized in modern LLMs
🧪 Our detailed approach to comparing these various methods
🥪 How we got a free boost in arithmetic performance by adding a few lines of code to the base Llama 3 tokenizer
👑 and a definitive, best tokenization method for math in LLMs!

Check out our work here: huggingface/number-tokenization-blog
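The chunking idea behind number tokenization can be sketched with a toy splitter. This is an illustrative sketch only, not code from the blog post; the `chunk_digits` helper name is made up. It splits a digit string into three-digit groups either left-to-right or right-to-left (the latter matches how humans group numbers, e.g. 1,234,567, keeping place value aligned across numbers of different lengths):

```python
def chunk_digits(digits: str, right_to_left: bool = True) -> list[str]:
    """Split a digit string into 3-digit groups, mimicking how a
    number-aware tokenizer might segment long numbers."""
    if right_to_left:
        # Group from the right so the leftmost chunk absorbs the remainder,
        # matching the way humans write 1,234,567.
        rev = digits[::-1]
        return [rev[i:i + 3][::-1] for i in range(0, len(rev), 3)][::-1]
    # Naive left-to-right grouping: the rightmost chunk gets the remainder,
    # so place value is not aligned across numbers of different lengths.
    return [digits[i:i + 3] for i in range(0, len(digits), 3)]

print(chunk_digits("1234567"))                       # ['1', '234', '567']
print(chunk_digits("1234567", right_to_left=False))  # ['123', '456', '7']
```

The right-to-left variant is the kind of small tokenizer tweak the post describes as giving a free boost in arithmetic performance.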
reacted to nroggendorff's post with 😔 about 2 months ago
im so tired
reacted to DawnC's post with ❤️ about 2 months ago
🌟 PawMatchAI: Making Breed Selection More Intuitive! 🐕
Excited to share the latest update to this AI-powered companion for finding your perfect furry friend! The breed recommendation system just got a visual upgrade to help you make better decisions.

✨ What's New?
Enhanced breed recognition accuracy through strategic model improvements:
- Upgraded to a fine-tuned ConvNeXt architecture for superior feature extraction
- Implemented progressive layer unfreezing during training
- Optimized data augmentation pipeline for better generalization
- Achieved 8% improvement in breed classification accuracy
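Progressive layer unfreezing, as mentioned above, can be sketched in PyTorch. This is a minimal illustration on a dummy model, not PawMatchAI's actual training code; the helper names (`freeze_all`, `unfreeze_last_k`) and the toy architecture are made up:

```python
import torch.nn as nn

def freeze_all(model: nn.Module) -> None:
    # Start fine-tuning with every parameter frozen.
    for p in model.parameters():
        p.requires_grad = False

def unfreeze_last_k(model: nn.Module, k: int) -> None:
    # Unfreeze the last k top-level blocks (head first, backbone later).
    blocks = list(model.children())
    for block in blocks[len(blocks) - k:]:
        for p in block.parameters():
            p.requires_grad = True

# Dummy stand-in for a ConvNeXt backbone plus classifier head.
model = nn.Sequential(nn.Linear(8, 8), nn.Linear(8, 8), nn.Linear(8, 2))
freeze_all(model)
for stage in range(1, 4):
    # One more block becomes trainable at each training stage.
    unfreeze_last_k(model, stage)
```

In a real run you would typically re-create the optimizer (or use parameter groups with a lower backbone learning rate) after each unfreezing stage.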

🎯 Key Features:
- Smart breed recognition powered by AI
- Visual matching scores with intuitive color indicators
- Detailed breed comparisons with interactive tooltips
- Lifestyle-based recommendations tailored to your needs

💭 Project Vision
Combining my passion for AI and pets, this project represents another step toward my goal of creating meaningful AI applications. Each update aims to make the breed selection process more accessible while improving the underlying technology.

👉 Try it now: DawnC/PawMatchAI

Your likes ❤️ on this space fuel this project's growth!

#AI #MachineLearning #DeepLearning #Pytorch #ComputerVision
reacted to lewtun's post with 👀 about 2 months ago
We outperform Llama 70B with Llama 3B on hard math by scaling test-time compute 🔥

How? By combining step-wise reward models with tree search algorithms :)

We show that smol models can match or exceed the performance of their much larger siblings when given enough "time to think"

We're open sourcing the full recipe and sharing a detailed blog post.

In our blog post we cover:

📈 Compute-optimal scaling: How we implemented DeepMind's recipe to boost the mathematical capabilities of open models at test-time.

🎄 Diverse Verifier Tree Search (DVTS): An unpublished extension we developed to the verifier-guided tree search technique. This simple yet effective method improves diversity and delivers better performance, particularly at large test-time compute budgets.

🧭 Search and Learn: A lightweight toolkit for implementing search strategies with LLMs, built for speed with vLLM.

Here are the links:

- Blog post: HuggingFaceH4/blogpost-scaling-test-time-compute

- Code: https://github.com/huggingface/search-and-learn

Enjoy!
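The core idea, sampling many solutions and aggregating them with a reward model's scores, can be sketched as weighted best-of-N selection. This is a toy illustration of one strategy in this family, not the search-and-learn implementation; the function name and the sample scores are made up:

```python
from collections import defaultdict

def weighted_best_of_n(candidates: list[tuple[str, float]]) -> str:
    """candidates: (final_answer, reward_score) pairs from N sampled
    solutions. Sum scores per distinct answer and return the top one,
    i.e. reward-weighted majority voting."""
    scores: dict[str, float] = defaultdict(float)
    for answer, reward in candidates:
        scores[answer] += reward
    return max(scores, key=scores.get)

# Four sampled solutions to the same math problem, scored by a verifier.
samples = [("72", 0.9), ("68", 0.7), ("72", 0.6), ("70", 0.2)]
print(weighted_best_of_n(samples))  # '72'
```

Tree-search variants like DVTS extend this by scoring and pruning partial solutions step-by-step rather than only at the end.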
reacted to jbilcke-hf's post with 🤗 about 2 months ago
Doing some testing with HunyuanVideo on the Hugging Face Inference Endpoints 🤗

prompt: "a Shiba Inu is acting as a DJ, he wears sunglasses and is mixing and scratching with vinyl discs at a Ibiza sunny sand beach party"

1280x720, 22 steps, 121 frames

There are still some things to iron out regarding speed and memory usage; right now it takes 20 min on an A100 (see attached charts)

but you can check it out here:

https://huggingface.co/jbilcke-hf/HunyuanVideo-for-InferenceEndpoints

There are various things I want to try, like the 100% Diffusers version and other models (LTX-Video…)