PMC_LLAMA_7B_trainer_Wiki_lora / trainer_peft.log
2024-05-29 18:45 - Cuda check
2024-05-29 18:45 - True
2024-05-29 18:45 - 1
2024-05-29 18:45 - Configure Model and tokenizer
2024-05-29 18:45 - Memory usage in 0.00 GB
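The three "Cuda check" lines above read like the output of a CUDA availability probe (available: True; device count: 1), and the memory line is a raw byte count formatted in GB. A minimal sketch of the formatting helper (the function name is an assumption; the real script presumably obtains the byte count from a torch.cuda API such as memory_allocated):

```python
def fmt_memory_log(allocated_bytes):
    """Render a byte count the way the log does: 'Memory usage in X.XX GB'."""
    return f"Memory usage in {allocated_bytes / 1024**3:.2f} GB"

# Before the model is loaded nothing is allocated, matching the first run's line.
line = fmt_memory_log(0)   # -> "Memory usage in 0.00 GB"
```

The later runs report 25.17 GB, consistent with the base model already being resident on the GPU when the probe runs.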
2024-05-29 18:45 - Dataset loaded successfully:
train - Jingmei/Pandemic_Wiki
test  - Jingmei/Pandemic_WHO
2024-05-29 18:46 - Tokenize data: DatasetDict({
train: Dataset({
features: ['input_ids', 'attention_mask'],
num_rows: 2152
})
test: Dataset({
features: ['input_ids', 'attention_mask'],
num_rows: 8264
})
})
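The tokenize step maps each raw document to the two columns shown above, one `input_ids` / `attention_mask` pair per row. A real run would use the model's own tokenizer (e.g. via `transformers.AutoTokenizer`); this whitespace toy is only a sketch of the resulting column shape, and every name in it is hypothetical:

```python
def toy_tokenize(texts, vocab):
    """Map raw texts to 'input_ids' and 'attention_mask' columns.

    A stand-in for a real subword tokenizer: each whitespace token gets an
    integer id, and the attention mask is all ones (no padding here).
    """
    out = {"input_ids": [], "attention_mask": []}
    for text in texts:
        ids = [vocab.setdefault(word, len(vocab)) for word in text.split()]
        out["input_ids"].append(ids)
        out["attention_mask"].append([1] * len(ids))
    return out

cols = toy_tokenize(["pandemic response plan", "who guidance"], vocab={})
# cols["input_ids"] -> [[0, 1, 2], [3, 4]]
```

With `datasets`, the same mapping would typically be applied with `Dataset.map(..., batched=True)`, which is why the row counts here (2152 train, 8264 test) still equal the number of source documents.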
2024-05-29 18:49 - Split data into chunks: DatasetDict({
train: Dataset({
features: ['input_ids', 'attention_mask'],
num_rows: 24863
})
test: Dataset({
features: ['input_ids', 'attention_mask'],
num_rows: 198964
})
})
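The jump in row counts after the split (train 2152 -> 24863, test 8264 -> 198964) is consistent with a standard group-and-chunk step: tokenized documents are concatenated and re-sliced into fixed-length blocks, so short documents merge and long ones split. A pure-Python sketch of that idea (the block size and helper name are assumptions, not taken from the log):

```python
def chunk_examples(batch, block_size=4):
    """Concatenate tokenized sequences, then re-slice into fixed-size blocks.

    `batch` mimics the columns in the log ('input_ids', 'attention_mask');
    any tail shorter than `block_size` is dropped, as is conventional.
    """
    concatenated = {k: [tok for seq in batch[k] for tok in seq] for k in batch}
    total = (len(concatenated["input_ids"]) // block_size) * block_size
    return {
        k: [v[i:i + block_size] for i in range(0, total, block_size)]
        for k, v in concatenated.items()
    }

# Two "documents" of 5 and 8 tokens become three fixed-size chunks of 4.
batch = {
    "input_ids": [[1, 2, 3, 4, 5], [6, 7, 8, 9, 10, 11, 12, 13]],
    "attention_mask": [[1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1]],
}
chunks = chunk_examples(batch)
# chunks["input_ids"] -> [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
```

In the actual pipeline this would again be applied via a batched `Dataset.map`, producing the larger row counts logged above.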
2024-05-29 18:49 - Setup PEFT
2024-05-29 18:49 - Setup optimizer
2024-05-29 18:49 - Start training
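"Setup PEFT" presumably attaches LoRA adapters to the frozen base model (the repo name ends in `_lora`). The core of LoRA — a frozen weight plus a trainable low-rank update scaled by alpha/r — can be sketched without any framework; all matrices and values below are illustrative, not from this run:

```python
def matvec(M, v):
    """Plain matrix-vector product over nested lists."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def lora_forward(W, A, B, x, alpha, r):
    """y = W @ x + (alpha / r) * B @ (A @ x)

    W is the frozen base weight; A (r x d_in) and B (d_out x r) are the
    trainable adapter matrices, so B @ A is a rank-r update to W.
    """
    base = matvec(W, x)
    update = matvec(B, matvec(A, x))
    scale = alpha / r
    return [b + scale * u for b, u in zip(base, update)]

# Frozen 2x2 identity base weight, rank-1 adapter, alpha == r so scale is 1.
W = [[1, 0], [0, 1]]
A = [[1, 0]]        # 1 x 2
B = [[0], [1]]      # 2 x 1
y = lora_forward(W, A, B, [3, 4], alpha=1, r=1)   # -> [3.0, 7.0]
```

In the real script this would be done by wrapping the model with `peft` (a `LoraConfig` plus `get_peft_model`), after which "Setup optimizer" only needs to collect the small set of trainable adapter parameters.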
2024-05-29 18:57 - Cuda check
2024-05-29 18:57 - True
2024-05-29 18:57 - 1
2024-05-29 18:57 - Configure Model and tokenizer
2024-05-29 18:57 - Memory usage in 25.17 GB
2024-05-29 18:57 - Dataset loaded successfully:
train - Jingmei/Pandemic_Wiki
test  - Jingmei/Pandemic_WHO
2024-05-29 18:57 - Tokenize data: DatasetDict({
train: Dataset({
features: ['input_ids', 'attention_mask'],
num_rows: 2152
})
test: Dataset({
features: ['input_ids', 'attention_mask'],
num_rows: 8264
})
})
2024-05-29 18:57 - Split data into chunks: DatasetDict({
train: Dataset({
features: ['input_ids', 'attention_mask'],
num_rows: 24863
})
test: Dataset({
features: ['input_ids', 'attention_mask'],
num_rows: 198964
})
})
2024-05-29 18:57 - Setup PEFT
2024-05-29 18:57 - Setup optimizer
2024-05-29 18:57 - Start training
2024-05-29 19:04 - Cuda check
2024-05-29 19:04 - True
2024-05-29 19:04 - 1
2024-05-29 19:04 - Configure Model and tokenizer
2024-05-29 19:04 - Memory usage in 25.17 GB
2024-05-29 19:04 - Dataset loaded successfully:
train - Jingmei/Pandemic_Wiki
test  - Jingmei/Pandemic_WHO
2024-05-29 19:04 - Tokenize data: DatasetDict({
train: Dataset({
features: ['input_ids', 'attention_mask'],
num_rows: 2152
})
test: Dataset({
features: ['input_ids', 'attention_mask'],
num_rows: 8264
})
})
2024-05-29 19:04 - Split data into chunks: DatasetDict({
train: Dataset({
features: ['input_ids', 'attention_mask'],
num_rows: 24863
})
test: Dataset({
features: ['input_ids', 'attention_mask'],
num_rows: 198964
})
})
2024-05-29 19:04 - Setup PEFT
2024-05-29 19:04 - Setup optimizer
2024-05-29 19:04 - Start training
2024-05-29 19:10 - Cuda check
2024-05-29 19:10 - True
2024-05-29 19:10 - 1
2024-05-29 19:10 - Configure Model and tokenizer
2024-05-29 19:10 - Memory usage in 25.17 GB
2024-05-29 19:10 - Dataset loaded successfully:
train - Jingmei/Pandemic_Wiki
test  - Jingmei/Pandemic_WHO
2024-05-29 19:10 - Tokenize data: DatasetDict({
train: Dataset({
features: ['input_ids', 'attention_mask'],
num_rows: 2152
})
test: Dataset({
features: ['input_ids', 'attention_mask'],
num_rows: 8264
})
})
2024-05-29 19:10 - Split data into chunks: DatasetDict({
train: Dataset({
features: ['input_ids', 'attention_mask'],
num_rows: 24863
})
test: Dataset({
features: ['input_ids', 'attention_mask'],
num_rows: 198964
})
})
2024-05-29 19:10 - Setup PEFT
2024-05-29 19:10 - Setup optimizer
2024-05-29 19:10 - Start training
2024-05-29 19:16 - Cuda check
2024-05-29 19:16 - True
2024-05-29 19:16 - 1
2024-05-29 19:16 - Configure Model and tokenizer
2024-05-29 19:16 - Memory usage in 25.17 GB
2024-05-29 19:16 - Dataset loaded successfully:
train - Jingmei/Pandemic_Wiki
test  - Jingmei/Pandemic_WHO
2024-05-29 19:16 - Tokenize data: DatasetDict({
train: Dataset({
features: ['input_ids', 'attention_mask'],
num_rows: 2152
})
test: Dataset({
features: ['input_ids', 'attention_mask'],
num_rows: 8264
})
})
2024-05-29 19:16 - Split data into chunks: DatasetDict({
train: Dataset({
features: ['input_ids', 'attention_mask'],
num_rows: 24863
})
test: Dataset({
features: ['input_ids', 'attention_mask'],
num_rows: 198964
})
})
2024-05-29 19:16 - Setup PEFT
2024-05-29 19:16 - Setup optimizer
2024-05-29 19:16 - Start training
2024-05-29 19:22 - Cuda check
2024-05-29 19:22 - True
2024-05-29 19:22 - 1
2024-05-29 19:22 - Configure Model and tokenizer
2024-05-29 19:22 - Memory usage in 25.17 GB
2024-05-29 19:22 - Dataset loaded successfully:
train - Jingmei/Pandemic_Wiki
test  - Jingmei/Pandemic_WHO
2024-05-29 19:22 - Tokenize data: DatasetDict({
train: Dataset({
features: ['input_ids', 'attention_mask'],
num_rows: 2152
})
test: Dataset({
features: ['input_ids', 'attention_mask'],
num_rows: 8264
})
})
2024-05-29 19:22 - Split data into chunks: DatasetDict({
train: Dataset({
features: ['input_ids', 'attention_mask'],
num_rows: 24863
})
test: Dataset({
features: ['input_ids', 'attention_mask'],
num_rows: 198964
})
})
2024-05-29 19:22 - Setup PEFT
2024-05-29 19:22 - Setup optimizer
2024-05-29 19:22 - Start training
2024-05-29 19:29 - Cuda check
2024-05-29 19:29 - True
2024-05-29 19:29 - 1
2024-05-29 19:29 - Configure Model and tokenizer
2024-05-29 19:29 - Memory usage in 25.17 GB
2024-05-29 19:29 - Dataset loaded successfully:
train - Jingmei/Pandemic_Wiki
test  - Jingmei/Pandemic_WHO
2024-05-29 19:29 - Tokenize data: DatasetDict({
train: Dataset({
features: ['input_ids', 'attention_mask'],
num_rows: 2152
})
test: Dataset({
features: ['input_ids', 'attention_mask'],
num_rows: 8264
})
})
2024-05-29 19:29 - Split data into chunks: DatasetDict({
train: Dataset({
features: ['input_ids', 'attention_mask'],
num_rows: 24863
})
test: Dataset({
features: ['input_ids', 'attention_mask'],
num_rows: 198964
})
})
2024-05-29 19:29 - Setup PEFT
2024-05-29 19:29 - Setup optimizer
2024-05-29 19:29 - Start training