---
license: apache-2.0
datasets:
- wikimedia/wikipedia
language:
- id
base_model:
- openai-community/gpt2
---

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "akahana/wikipedia-gpt2"

# Load the model and tokenizer; device_map="auto" places the model
# on GPU when one is available and falls back to CPU otherwise.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Prompt format: a title followed by an empty article body
# for the model to complete.
wikipedia_prompt = """Artikel Wikipedia
[[Judul]]
{}
[[Artikel]]
{}"""

title = "Hal Holbrook"
prompt = wikipedia_prompt.format(title, "")

model_inputs = tokenizer([prompt], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512,
)

# Strip the prompt tokens so only the newly generated text remains.
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
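
For interactive use, the output can also be streamed token by token instead of decoded after generation finishes. A minimal sketch using `transformers.TextStreamer`, reusing `model`, `tokenizer`, and `model_inputs` from the snippet above:

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated, skipping the
# prompt itself and any special tokens.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

_ = model.generate(
    **model_inputs,
    streamer=streamer,
    max_new_tokens=512,
)
```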