Pure C++ version of the SmolLM2 model code for EDGE implementations

#3
by MartialTerran - opened

I first heard about this model today in this article: https://venturebeat.com/ai/ai-on-your-smartphone-hugging-faces-smollm2-brings-powerful-models-to-the-palm-of-your-hand/?_bhlid=071034f893836a3364663dcc52fbea6fd14a2f15
I am disappointed that the full edge-optimal "model" is not disclosed in a compilable code format that can be ported to an edge device capable of running compiled code, such as a Raspberry Pi, an Android phone, or an Arduino.

I'm hoping to deploy this model on resource-constrained devices like Raspberry Pis, Android phones, and even Arduinos. Currently, I don't see a readily available, portable implementation. While I understand the model architecture is based on LLaMA, I'm looking for something more directly deployable than using AutoModelForCausalLM.

Ideally, I'd like to see a simplified, compilable representation of the SmolLM2 model, perhaps in C/C++, that doesn't rely on the transformers library. This would allow for greater flexibility in porting and optimizing for these edge devices. Something analogous to this (though I understand this is a highly simplified illustration):

// Hypothetical example

// Load the model from a checkpoint file
LLAMA_for_SmolLM2 model(SmolLM_135M_checkpoint_file);

// Move the model to the target device (e.g., CPU, GPU, or specialized hardware)
model.to(device);

// Tokenizer instance for preprocessing text
LLAMA_for_SmolLM2_tokenizer tokenizer;

// Encode the input string
auto inputs = tokenizer.encode("Gravity is");

// Move the encoded input to the same device as the model
inputs.to(device);
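
To make the request more concrete, here is a rough sketch of the kind of self-contained greedy-decoding loop such a port would boil down to. Everything model-specific here (SmolLM2Model, Tokenizer, forward) is hypothetical and would have to be implemented against the released weights; only the decoding loop itself is real. Since the architecture is LLaMA-based, this is essentially the same shape as existing pure-C ports of LLaMA-family models:

#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

// Hypothetical interfaces -- a real port would implement these on top of
// the released SmolLM2 weights. None of these names exist today.
struct Tokenizer {
    std::vector<int> encode(const std::string& text) const;
    std::string decode(int token_id) const;
};

struct SmolLM2Model {
    explicit SmolLM2Model(const std::string& checkpoint_path); // load weights
    // One forward pass: returns logits over the vocabulary for `token` at
    // position `pos`, with the KV cache kept inside the model.
    std::vector<float> forward(int token, int pos);
    int eos_token_id() const;
};

// Greedy decoding: feed the prompt, then repeatedly pick the argmax logit.
void generate(SmolLM2Model& model, const Tokenizer& tok,
              const std::string& prompt, int max_new_tokens) {
    int pos = 0;
    std::vector<float> logits;
    for (int id : tok.encode(prompt))      // prefill the prompt tokens
        logits = model.forward(id, pos++);
    for (int step = 0; step < max_new_tokens; ++step) {
        int next = 0;                      // argmax over the vocabulary
        for (std::size_t v = 1; v < logits.size(); ++v)
            if (logits[v] > logits[next]) next = static_cast<int>(v);
        if (next == model.eos_token_id()) break;
        std::cout << tok.decode(next) << std::flush;
        logits = model.forward(next, pos++);
    }
}

Nothing in that loop needs anything beyond the C++ standard library; all the real work lives in forward() and the tokenizer, which is exactly the part that a disclosed, compilable reference implementation would provide.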

This contrasts with the current Python-based approach using transformers:

from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "HuggingFaceTB/SmolLM2-1.7B"
device = "cuda"  # or "cpu"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
inputs = tokenizer.encode("Gravity is", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))

Is providing a more portable, compilable version of SmolLM2 something being considered?
