Built on the LLaMA-3.2 architecture, this model uses a standard Byte-Pair Encoding (BPE) tokenizer optimized for Danish. It serves as a baseline for evaluating the performance of subword tokenization.
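
Below is a minimal usage sketch with the 🤗 `transformers` library. The repository id `your-org/danish-bpe-baseline` is a placeholder, not the actual model id; substitute the real one before running.

```python
# Minimal usage sketch (placeholder repo id -- replace with the actual model id).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/danish-bpe-baseline"  # placeholder, not the real repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Inspect how the Danish-optimized BPE tokenizer splits a sentence into subwords.
text = "København er hovedstaden i Danmark."
inputs = tokenizer(text, return_tensors="pt")
print(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]))

# Generate a short continuation.
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```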