Usage (Transformers.js)

If you haven't already, you can install the Transformers.js JavaScript library from NPM using:

```bash
npm i @xenova/transformers
```

You can then use the model to generate text like this:

```js
import { pipeline } from "@xenova/transformers";

// Create a text-generation pipeline
const generator = await pipeline('text-generation', 'Xenova/llama2.c-stories110M');

const text = 'Once upon a time,';

// Generate with the default settings
const output = await generator(text);
console.log(output);
// [{ generated_text: "Once upon a time, there was a little girl named Lily. She loved to play outside in" }]

// Generate up to 50 new tokens
const output2 = await generator(text, { max_new_tokens: 50 });
console.log(output2);
// [{ generated_text: "Once upon a time, there was a little girl named Lily. She loved to play outside in the sunshine. One day, she saw a big, scary dog. She was scared and didn't know what to do. \nSudden" }]
```
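Beyond `max_new_tokens`, the second argument to the generator accepts further generation options. As a rough sketch (the option names below follow the Hugging Face `GenerationConfig`, which Transformers.js mirrors; the exact set supported may vary by version, so treat this as an assumption rather than a guarantee):

```javascript
// Hypothetical example: sampling options passed as the second argument
// to the generator. Names follow GenerationConfig; check the
// Transformers.js docs for the options your version supports.
const options = {
  max_new_tokens: 128, // generate at most 128 new tokens
  do_sample: true,     // sample from the distribution instead of greedy decoding
  temperature: 0.7,    // lower values make output more deterministic
  top_k: 50,           // sample only from the 50 most likely tokens
};

// Usage (assumes `generator` and `text` from the snippet above):
// const output = await generator(text, options);
```

Sampling makes repeated calls produce different stories from the same prompt, which is usually what you want for a storytelling model like this one.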