SocraticLM
An enhanced implementation of the paper "SocraticLM: Exploring Socratic Personalized Teaching with Large Language Models" (NeurIPS 2024 Spotlight).
This model is a fine-tuned version of Qwen2.5-Math-7B-Instruct on the SocraTeach dataset.
It is an implementation of SocraticLM.
SocraticLM is designed for educational purposes: it provides Socratic-style guidance to students who have difficulty learning to solve mathematical problems. It can also solve mathematical problems on its own.
This model mainly supports English and Chinese.
For Hugging Face Transformers:
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("CogBase-USTC/SocraticLM")
model = AutoModelForCausalLM.from_pretrained(
    "CogBase-USTC/SocraticLM",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True
)
### Math Problem Solving ###
messages = [
    {"role": "system", "content": "Please analyse and solve the following problem step by step."},
    {"role": "user", "content": "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?"},
]
### Socratic-style Guidance ###
# messages = [
#     {"role": "system", "content": "You are a Socratic teacher, please guide me to solve the [Problem] with heuristic questions based on the following information. \n"},
#     {"role": "user", "content": "[Problem] Debelyn, Christel, and Andrena collect dolls. Debelyn had 20 dolls before she gave Andrena 2 dolls. Christel had 24 dolls before giving Andrena 5 dolls. After all the gifts, Andrena now has 2 more dolls than Christel. How many more dolls does Andrena have now than Debelyn? [Answer] 3 [Analysis] Debelyn had 20 - 2 = 18 dolls left after giving Andrena 2 dolls. Christel had 24 - 5 = 19 dolls left after giving Andrena 5 dolls. So, Andrena has 19 + 2 = 21 dolls now. Therefore, Andrena has 21 - 18 = 3 more dolls than Debelyn."},
# ]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=4096)
print(tokenizer.decode(outputs[0]))
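Note that decoding outputs[0] returns the prompt together with the completion. If you only want the model's reply, you can slice off the prompt tokens before decoding; a minimal sketch, reusing the tokenizer, model, and inputs from above:

# Decode only the newly generated tokens, dropping the echoed prompt.
input_ids = inputs.to(model.device)
outputs = model.generate(input_ids=input_ids, max_new_tokens=4096)
reply = tokenizer.decode(outputs[0][input_ids.shape[1]:], skip_special_tokens=True)
print(reply)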
For vLLM:
from vllm import LLM, SamplingParams
llm = LLM(model=r'CogBase-USTC/SocraticLM',
          tokenizer=r'CogBase-USTC/SocraticLM',
          trust_remote_code=True,
          tensor_parallel_size=1,
          gpu_memory_utilization=0.99,
          enable_chunked_prefill=True,
          max_num_batched_tokens=512,
          max_num_seqs=128)
sampling_params = SamplingParams(temperature=0, max_tokens=4096, seed=42)
def print_outputs(outputs):
    for output in outputs:
        prompt = output.prompt
        generated_text = output.outputs[0].text
        print(f"Generated text: {generated_text!r}")
        print("-" * 80)
    print("=" * 80)
### Math Problem Solving ###
conversation = [
    {
        "role": "system",
        "content": "Please analyse and solve the following problem step by step."
    },
    {
        "role": "user",
        "content": "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?"
    },
]
### Socratic-style Guidance ###
# conversation = [
#     {
#         "role": "system",
#         "content": "You are a Socratic teacher, please guide me to solve the [Problem] with heuristic questions based on the following information. \n"
#     },
#     {
#         "role": "user",
#         "content": "[Problem] Debelyn, Christel, and Andrena collect dolls. Debelyn had 20 dolls before she gave Andrena 2 dolls. Christel had 24 dolls before giving Andrena 5 dolls. After all the gifts, Andrena now has 2 more dolls than Christel. How many more dolls does Andrena have now than Debelyn? [Answer] 3 [Analysis] Debelyn had 20 - 2 = 18 dolls left after giving Andrena 2 dolls. Christel had 24 - 5 = 19 dolls left after giving Andrena 5 dolls. So, Andrena has 19 + 2 = 21 dolls now. Therefore, Andrena has 21 - 18 = 3 more dolls than Debelyn."
#     },
# ]
outputs = llm.chat(conversation,
                   sampling_params=sampling_params,
                   use_tqdm=False)
print_outputs(outputs)
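Because the Socratic mode is conversational, you can continue the dialogue by appending the model's generated question and the student's reply to the message list before calling llm.chat again. A minimal sketch, assuming the Socratic-style conversation above was used; the student message is purely illustrative:

# Append the teacher's generated turn and a hypothetical student reply,
# then generate the teacher's next Socratic question.
conversation.append({"role": "assistant", "content": outputs[0].outputs[0].text})
conversation.append({"role": "user", "content": "Debelyn has 20 - 2 = 18 dolls left. What should I compute next?"})
outputs = llm.chat(conversation, sampling_params=sampling_params, use_tqdm=False)
print_outputs(outputs)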
The following hyperparameters were used during training: