---
license: cc-by-sa-4.0
datasets:
- csitfun/LogiCoT
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- logical
---

This model is tuned on the LogiCoT data and the GPT-4 Alpaca data, starting from the LLaMA-7b model, using 2 A100 GPUs. We first instruction-tune LLaMA-7b on the GPT-4 Alpaca data for 3 days, then on the LogiCoT data for 4 days.
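
Since the card tags the model for `text-generation` with the `transformers` library, a minimal usage sketch follows. The repository id is a hypothetical placeholder (this card does not state the actual repo path), and the prompt format is illustrative only; substitute the real model id and the prompt style used during tuning.

```python
# Minimal inference sketch using the standard transformers text-generation API.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id -- replace with this model's actual Hugging Face path.
model_id = "csitfun/logicot-llama-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative logical-reasoning prompt; the tuned prompt format may differ.
prompt = (
    "Premises: All mammals are animals. Whales are mammals.\n"
    "Question: Are whales animals?\n"
    "Answer:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```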