AI & ML interests

None defined yet.

group2test's activity

regisss
posted an update 10 days ago
Nice paper comparing the FP8 inference efficiency of the Nvidia H100 and Intel Gaudi 2: An Investigation of FP8 Across Accelerators for LLM Inference (2502.01070)

The conclusion is interesting: "Our findings highlight that the Gaudi 2, by leveraging FP8, achieves higher throughput-to-power efficiency during LLM inference"

One aspect of AI hardware accelerators that is often overlooked is that they can consume less energy than GPUs. It's nice to see researchers starting to carry out experiments to measure this!
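For context, the throughput-to-power metric the paper reports is, presumably, generation throughput divided by power draw; a minimal sketch with made-up numbers (not results from the paper):

# Hypothetical values, only to illustrate the metric
tokens_per_second = 1200.0   # measured decoding throughput
avg_power_watts = 600.0      # average accelerator power draw

efficiency = tokens_per_second / avg_power_watts  # tokens per second per watt
print(f"{efficiency:.2f} tokens/s/W")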

Gaudi3 results soon...
dylanebert
posted an update about 1 month ago
⚙️ Convert .ply to .splat

I've created a simple space to convert .ply Gaussian splat files to the .splat format

dylanebert/ply-to-splat
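If you'd rather convert files from a script than the UI, the Space can also be called programmatically. A rough sketch using gradio_client (the endpoint name and argument layout are assumptions, check the Space's API page):

from gradio_client import Client, handle_file

# Connect to the Space named in the post
client = Client("dylanebert/ply-to-splat")

# Upload a local .ply file and request the converted .splat
# ("/predict" and the single-file signature are assumptions)
result = client.predict(handle_file("scene.ply"), api_name="/predict")
print(result)  # local path of the returned .splat file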
dylanebert
posted an update about 1 month ago
🟦 New Image-to-3D model from Stability AI

stabilityai/stable-point-aware-3d

here's how it looks, with TRELLIS for comparison
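To try it locally, the weights can be pulled from the Hub in the usual way; a minimal sketch with huggingface_hub (you may need to accept the model's license on its page first):

from huggingface_hub import snapshot_download

# Download the full model repository from the Hub
local_dir = snapshot_download("stabilityai/stable-point-aware-3d")
print(local_dir)  # path to the downloaded files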
akhaliq
posted an update 2 months ago
Google drops Gemini 2.0 Flash Thinking

A new experimental model that unlocks stronger reasoning capabilities and shows its thoughts. The model plans (with its thoughts visible), can solve complex problems at Flash speeds, and more

now available in anychat, try it out: akhaliq/anychat
dylanebert
posted an update 2 months ago
TRELLIS is now the highest-ranked open-source model on the 3D Arena leaderboard, surpassing InstantMesh

dylanebert/3d-arena
dylanebert
posted an update 3 months ago
Blender has AI now
dylanebert
posted an update 3 months ago
🟦 New open-source Image-to-3D model from Microsoft

TRELLIS: Structured 3D Latents for Scalable and Versatile 3D Generation

It's really good! The topology isn't clean, but it's a very good 3D reference

JeffreyXiang/TRELLIS-image-large
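A rough sketch of running it locally, loosely following the project's README (the pipeline class and run() call are assumptions taken from that README, so double-check against the repo):

from PIL import Image
from trellis.pipelines import TrellisImageTo3DPipeline  # from the Microsoft TRELLIS repo

# Load the pretrained image-to-3D pipeline from the Hub
pipeline = TrellisImageTo3DPipeline.from_pretrained("JeffreyXiang/TRELLIS-image-large")
pipeline.cuda()

# Generate 3D structured latents from a single input image
image = Image.open("input.png")
outputs = pipeline.run(image)  # Gaussian splat / radiance field / mesh outputs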
akhaliq
posted an update 3 months ago
QwQ-32B-Preview is now available in anychat

A reasoning model that is competitive with OpenAI o1-mini and o1-preview

try it out: akhaliq/anychat
akhaliq
posted an update 3 months ago
New model drop in anychat

allenai/Llama-3.1-Tulu-3-8B is now available

try it here: akhaliq/anychat
dylanebert
posted an update 3 months ago
Generate meshes with AI locally in Blender

📢 New open-source release

meshgen, a local Blender integration of LLaMA-Mesh, is open source and available now 🤗

get started here: https://github.com/huggingface/meshgen
akhaliq
posted an update 3 months ago
anychat

supports ChatGPT, Gemini, Perplexity, Claude, Meta Llama, and Grok, all in one app

try it out here: akhaliq/anychat
regisss
posted an update 4 months ago
Interested in performing inference with an ONNX model? ⚡️

The Optimum docs about model inference with ONNX Runtime are now much clearer and simpler!

Want to deploy your favorite model from the Hub but don't know how to export it to the ONNX format? You can do it in one line of code as follows:
from optimum.onnxruntime import ORTModelForSequenceClassification

# Load the model from the hub and export it to the ONNX format
model_id = "distilbert-base-uncased-finetuned-sst-2-english"
model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
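
The exported model then works as a drop-in replacement in a transformers pipeline; a minimal sketch continuing from the snippet above:

from transformers import AutoTokenizer, pipeline

# Reuse the exported ONNX model with the standard pipeline API
tokenizer = AutoTokenizer.from_pretrained(model_id)
onnx_classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)

print(onnx_classifier("ONNX export really is a one-liner now!"))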

Check out the whole guide 👉 https://huggingface.co/docs/optimum/onnxruntime/usage_guides/models
dylanebert
posted an update 6 months ago
Here's a 1-minute video tutorial on how to fine-tune unsloth/llama-3-8b-bnb-4bit with unsloth

Using Roller Coaster Tycoon peep thoughts as an example
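
For reference, the core of that workflow looks roughly like this; a minimal sketch assuming the Unsloth API from its README (dataset handling and hyperparameters are placeholders):

from unsloth import FastLanguageModel

# Load the 4-bit quantized Llama 3 8B base model
model, tokenizer = FastLanguageModel.from_pretrained(
    "unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of the weights is trained
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# ...then fine-tune with trl's SFTTrainer on a dataset of formatted peep thoughts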
akhaliq
posted an update 9 months ago
Phased Consistency Model

Phased Consistency Model (2405.18407)

The consistency model (CM) has recently made significant progress in accelerating the generation of diffusion models. However, its application to high-resolution, text-conditioned image generation in the latent space (a.k.a., LCM) remains unsatisfactory. In this paper, we identify three key flaws in the current design of LCM. We investigate the reasons behind these limitations and propose the Phased Consistency Model (PCM), which generalizes the design space and addresses all identified limitations. Our evaluations demonstrate that PCM significantly outperforms LCM across 1–16 step generation settings. While PCM is specifically designed for multi-step refinement, it achieves even superior or comparable 1-step generation results to previously state-of-the-art specifically designed 1-step methods. Furthermore, we show that PCM's methodology is versatile and applicable to video generation, enabling us to train the state-of-the-art few-step text-to-video generator.
akhaliq
posted an update 9 months ago
Chameleon

Mixed-Modal Early-Fusion Foundation Models

Chameleon: Mixed-Modal Early-Fusion Foundation Models (2405.09818)

We present Chameleon, a family of early-fusion token-based mixed-modal models capable of understanding and generating images and text in any arbitrary sequence. We outline a stable training approach from inception, an alignment recipe, and an architectural parameterization tailored for the early-fusion, token-based, mixed-modal setting. The models are evaluated on a comprehensive range of tasks, including visual question answering, image captioning, text generation, image generation, and long-form mixed modal generation. Chameleon demonstrates broad and general capabilities, including state-of-the-art performance in image captioning tasks, outperforms Llama-2 in text-only tasks while being competitive with models such as Mixtral 8x7B and Gemini-Pro, and performs non-trivial image generation, all in a single model. It also matches or exceeds the performance of much larger models, including Gemini Pro and GPT-4V, according to human judgments on a new long-form mixed-modal generation evaluation, where either the prompt or outputs contain mixed sequences of both images and text. Chameleon marks a significant step forward in a unified modeling of full multimodal documents.