All HF Hub posts

merterbak
posted an update 1 day ago
hesamation
posted an update about 21 hours ago
Google published a 69-page whitepaper on Prompt Engineering and its best practices, a must-read if you are using LLMs in production:
> zero-shot, one-shot, few-shot
> system prompting
> chain-of-thought (CoT)
> ReAct
> code prompting
> best practices

LINK: https://www.kaggle.com/whitepaper-prompt-engineering
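The few-shot pattern at the top of that list is easy to sketch in code: show the model a handful of solved examples before the real query. The sentiment task, example pairs, and helper name below are illustrative, not taken from the whitepaper.

```python
# Sketch of few-shot prompting: the prompt interleaves (input, output)
# example pairs before the actual query, so the model infers the task
# format from the demonstrations.
def build_few_shot_prompt(examples, query):
    """Assemble a few-shot prompt from (text, label) pairs plus a query."""
    lines = ["Classify the sentiment as positive or negative.", ""]
    for text, label in examples:
        lines.append(f"Text: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    # The query mirrors the example format, with the answer left blank.
    lines.append(f"Text: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    [("I loved it", "positive"), ("Terrible service", "negative")],
    "The food was great",
)
```

Zero-shot is the same prompt with no example pairs; one-shot uses exactly one.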
fdaudens
posted an update about 21 hours ago
🎨 Designers, meet OmniSVG! This new model helps you create professional vector graphics from text/images, generate editable SVGs from icons to detailed characters, convert rasters to vectors, maintain style consistency with references, and integrate into your workflow.

@OmniSVG
danielhanchen
posted an update 3 days ago
ajibawa-2023
posted an update about 15 hours ago
Hi All, I recently released two audio datasets, which were generated using my earlier released dataset: ajibawa-2023/Children-Stories-Collection

First Audio Dataset: https://huggingface.co/datasets/ajibawa-2023/Audio-Children-Stories-Collection-Large has 5,600+ stories in .mp3 format.

Second Audio Dataset: https://huggingface.co/datasets/ajibawa-2023/Audio-Children-Stories-Collection has 600 stories in .mp3 format.
jasoncorkill
posted an update 1 day ago
🔥 Yesterday was a fire day!
We dropped two brand-new datasets capturing Human Preferences for text-to-video and text-to-image generations powered by our own crowdsourcing tool!

Whether you're working on model evaluation, alignment, or fine-tuning, this is for you.

1. Text-to-Video Dataset (Pika 2.2 model):
Rapidata/text-2-video-human-preferences-pika2.2

2. Text-to-Image Dataset (Reve-AI Halfmoon):
Rapidata/Reve-AI-Halfmoon_t2i_human_preference

Let's train AI on AI-generated content with humans in the loop.
Let's make generative models that actually get us.
Steven10429
posted an update 1 day ago
I got rejected from Llama 4.
So that means I can use the quantized model without following their TOS.
Interesting.
fcakyon
posted an update 2 days ago
🎉 GitHub selected the ultralytics computer vision project, known for its YOLOv8/YOLO11 real-time SOTA computer vision models, as one of the top 5 open-source projects for first-time contributors in 2024!

Link to the project: https://github.com/ultralytics/ultralytics

Link to the full GitHub 2024 recap report: https://github.blog/news-insights/octoverse/octoverse-2024/
onekq
posted an update about 23 hours ago
We desperately need GPUs for model inference. CPUs can't replace them.

I will start with the basics. A GPU is designed to serve predictable workloads with many parallel units (pixels, tensors, tokens), so it allocates as much transistor budget as possible to building thousands of compute units (CUDA cores on NVIDIA, execution units on Apple Silicon), each capable of running a thread.

A CPU, by contrast, is designed to handle all kinds of workloads. CPU cores are much larger (hence far fewer), with branch prediction and other complex machinery. On top of that, more and more transistors go into ever-larger caches (~50% of the die now) to house the unpredictable, eating into the compute budget.
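
A toy contrast of the two workload shapes (the functions below are hypothetical illustrations of the argument, not benchmarks):

```python
# GPU-friendly: the SAME branch-free operation applied to many
# independent elements (a "map") -- thousands of simple cores can
# each take one element.
def gpu_friendly(pixels):
    return [min(255, p * 2) for p in pixels]

# CPU-friendly: branchy and serial -- each step depends on the
# previous `state`, so one big core with branch prediction wins.
def cpu_friendly(tokens):
    state = 0
    out = []
    for t in tokens:
        if t == "(":      # data-dependent, hard-to-predict branches
            state += 1
        elif t == ")":
            state -= 1
        out.append(state)  # serial dependency chain
    return out
```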

Generalists can't beat specialists.
AdinaY
posted an update about 23 hours ago
Moonshot AI (月之暗面) 🌛 @Kimi_Moonshot just dropped an MoE VLM and an MoE Reasoning VLM on the Hub!!

Model: https://huggingface.co/collections/moonshotai/kimi-vl-a3b-67f67b6ac91d3b03d382dd85

✨ 3B with MIT license
✨ Long context window up to 128K
✨ Strong multimodal reasoning (36.8% on MathVision, on par with 10x larger models) and agent skills (34.5% on ScreenSpot-Pro)