---
license: apache-2.0
---

# NEO

🤗 Neo-Models | 🤗 Neo-Datasets | GitHub

NEO is a completely open-source large language model: the code, all model weights, the datasets used for training, and the training details are all released.

## Model

| Model | Description | Download |
| --- | --- | --- |
| neo_7b | Base model of neo_7b. | 🤗 Hugging Face |
| neo_7b_intermediate | Intermediate checkpoints from normal pre-training; a total of 3.7T tokens were learned in this phase. | 🤗 Hugging Face |
| neo_7b_decay | Intermediate checkpoints from the decay phase; a total of 720B tokens were learned in this phase. | 🤗 Hugging Face |
| neo_scalinglaw_980M | Checkpoints from the scaling-law experiments. | 🤗 Hugging Face |
| neo_scalinglaw_460M | Checkpoints from the scaling-law experiments. | 🤗 Hugging Face |
| neo_scalinglaw_250M | Checkpoints from the scaling-law experiments. | 🤗 Hugging Face |
| neo_2b_general | 2B model trained on common-domain knowledge. | 🤗 Hugging Face |
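
The intermediate and decay repos each hold a series of checkpoints. If, as is common on the Hub, those checkpoints are exposed as branches or tags, a specific one can be selected with the `revision` argument of `from_pretrained`. A minimal sketch with placeholder names (check the actual repo's branch list on the Hub before use):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Both names below are placeholders -- substitute the real repo id and the
# branch/tag of the checkpoint you want, as listed on the Hugging Face Hub.
repo_id = '<intermediate-ckpt-repo-id>'
revision = '<checkpoint-branch-or-tag>'

tokenizer = AutoTokenizer.from_pretrained(repo_id, revision=revision, use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    revision=revision,
    device_map="auto",
    torch_dtype='auto',
).eval()
```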

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = '<your-hf-model-path-with-tokenizer>'

# trust_remote_code=True allows loading the custom code shipped with the repo.
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False, trust_remote_code=True)

# device_map="auto" spreads the weights across available devices (requires `accelerate`);
# torch_dtype='auto' loads the weights in the dtype they were saved in.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype='auto'
).eval()

input_text = "A long, long time ago,"

# Tokenize the prompt and move the tensors to the model's device.
inputs = tokenizer(input_text, return_tensors='pt').to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=20)
response = tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(response)
```
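
Generation above is greedy by default. For more varied continuations from a base model, `generate` accepts the standard sampling arguments; a minimal sketch (the values below are illustrative, not tuned recommendations from the NEO authors):

```python
# Sampled generation; reuses `model`, `tokenizer`, and `inputs` from above.
output_ids = model.generate(
    **inputs,
    max_new_tokens=100,
    do_sample=True,   # sample from the distribution instead of greedy argmax
    temperature=0.8,  # illustrative value
    top_p=0.95,       # illustrative value
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```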