Model

internlm-chat-20b-qlora-oasst1 is a QLoRA adapter fine-tuned from InternLM-Chat-20B on the openassistant-guanaco dataset using XTuner.

Quickstart

Usage with XTuner CLI

Installation

pip install xtuner
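
To verify the installation and locate the built-in config used in the fine-tune step below, you can list XTuner's bundled configs (the grep filter here is just for illustration):

xtuner list-cfg | grep internlm_chat_20b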

Chat

xtuner chat internlm/internlm-chat-20b --adapter xtuner/internlm-chat-20b-qlora-oasst1 --prompt-template internlm_chat
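
If you prefer to load the adapter programmatically instead of through the CLI, the sketch below (not from the original card) uses transformers + peft with 4-bit quantization. It assumes pip install transformers peft bitsandbytes, a CUDA GPU with enough memory for the quantized 20B base model, and relies on the chat() helper exposed by InternLM's remote code.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "internlm/internlm-chat-20b"
adapter_id = "xtuner/internlm-chat-20b-qlora-oasst1"

# Load the base model in 4-bit (QLoRA-style) to keep memory usage manageable.
quant_cfg = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=quant_cfg,
    trust_remote_code=True,
    device_map="auto",
)

# Attach the LoRA adapter weights from this repository.
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

# chat() comes from InternLM's remote code and applies its chat template.
response, history = model.chat(tokenizer, "Hello! Who are you?")
print(response)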

Fine-tune

Use the following command to quickly reproduce the fine-tuning results.

xtuner train internlm_chat_20b_qlora_oasst1_e3
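
XTuner saves training checkpoints in its own PTH format; to obtain a Hugging Face adapter like the one in this repository, convert the trained weights with xtuner convert pth_to_hf (the paths below are placeholders and depend on your work_dirs layout):

xtuner convert pth_to_hf internlm_chat_20b_qlora_oasst1_e3 ${PTH_CHECKPOINT} ${SAVE_DIR}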