---
base_model:
- nectec/Pathumma-llm-text-1.0.0
- ngxson/LoRA-Qwen2.5-7B-Instruct-abliterated-v3
- jeffmeloy/Qwen2.5-7B-nerd-uncensored-v1.8
- Tsunami-th/Tsunami-1.0-7B-Instruct
- ngxson/LoRA-Qwen2.5-7B-Instruct-abliterated-v3
- openthaigpt/openthaigpt1.5-7b-instruct
- ngxson/LoRA-Qwen2.5-7B-Instruct-abliterated-v3
- scb10x/typhoon2-qwen2.5-7b-instruct
- ngxson/LoRA-Qwen2.5-7B-Instruct-abliterated-v3
library_name: transformers
tags:
- mergekit
- merge
---

# Chat Template

This model uses the ChatML prompt format:

```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

Ollama (Go) template:

```
{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
{{ .Response }}{{ if .Response }}<|im_end|>{{ end }}
```

# GGUF

Thank you [mradermacher](https://huggingface.co/mradermacher) for creating the GGUF versions of this model.

* Static quants - [mradermacher/Qwen2.5-7B-Thai-Instruct-GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-Thai-Instruct-GGUF)

# MERGE

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [jeffmeloy/Qwen2.5-7B-nerd-uncensored-v1.8](https://huggingface.co/jeffmeloy/Qwen2.5-7B-nerd-uncensored-v1.8) as the base model.
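As a rough illustration of what Model Stock does, the sketch below implements the paper's per-tensor interpolation formula: the fine-tuned weights are averaged, and that average is pulled toward the base weights by a ratio derived from the angle between the fine-tuned deltas. This is a simplified, hypothetical sketch for intuition, not mergekit's actual implementation (which operates over full checkpoints, layer by layer).

```python
import numpy as np

def model_stock_merge(base: np.ndarray, tuned: list[np.ndarray]) -> np.ndarray:
    """Simplified per-tensor Model Stock merge (illustrative only).

    Interpolates between the base weights and the average of the fine-tuned
    weights. The interpolation ratio t comes from the mean pairwise cosine
    similarity of the fine-tuned deltas (w_i - w_base), as in the paper:
    t = k * cos(theta) / (1 + (k - 1) * cos(theta)).
    """
    k = len(tuned)
    deltas = [(w - base).ravel() for w in tuned]
    # Mean pairwise cosine similarity between the fine-tuned deltas.
    cos_vals = []
    for i in range(k):
        for j in range(i + 1, k):
            cos_vals.append(
                deltas[i] @ deltas[j]
                / (np.linalg.norm(deltas[i]) * np.linalg.norm(deltas[j]))
            )
    cos_theta = float(np.mean(cos_vals))
    t = k * cos_theta / (1 + (k - 1) * cos_theta)
    avg = np.mean(tuned, axis=0)
    # Identical fine-tunes (cos = 1) give t = 1, i.e. their average;
    # orthogonal fine-tunes (cos = 0) give t = 0, i.e. the base weights.
    return t * avg + (1 - t) * base
```

Intuitively, the more the fine-tuned models disagree, the more the merge falls back on the base model's weights.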
### Models Merged

The following models were included in the merge:

* [nectec/Pathumma-llm-text-1.0.0](https://huggingface.co/nectec/Pathumma-llm-text-1.0.0) + [ngxson/LoRA-Qwen2.5-7B-Instruct-abliterated-v3](https://huggingface.co/ngxson/LoRA-Qwen2.5-7B-Instruct-abliterated-v3)
* [Tsunami-th/Tsunami-1.0-7B-Instruct](https://huggingface.co/Tsunami-th/Tsunami-1.0-7B-Instruct) + [ngxson/LoRA-Qwen2.5-7B-Instruct-abliterated-v3](https://huggingface.co/ngxson/LoRA-Qwen2.5-7B-Instruct-abliterated-v3)
* [openthaigpt/openthaigpt1.5-7b-instruct](https://huggingface.co/openthaigpt/openthaigpt1.5-7b-instruct) + [ngxson/LoRA-Qwen2.5-7B-Instruct-abliterated-v3](https://huggingface.co/ngxson/LoRA-Qwen2.5-7B-Instruct-abliterated-v3)
* [scb10x/typhoon2-qwen2.5-7b-instruct](https://huggingface.co/scb10x/typhoon2-qwen2.5-7b-instruct) + [ngxson/LoRA-Qwen2.5-7B-Instruct-abliterated-v3](https://huggingface.co/ngxson/LoRA-Qwen2.5-7B-Instruct-abliterated-v3)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: openthaigpt/openthaigpt1.5-7b-instruct+ngxson/LoRA-Qwen2.5-7B-Instruct-abliterated-v3
  - model: Tsunami-th/Tsunami-1.0-7B-Instruct+ngxson/LoRA-Qwen2.5-7B-Instruct-abliterated-v3
  - model: nectec/Pathumma-llm-text-1.0.0+ngxson/LoRA-Qwen2.5-7B-Instruct-abliterated-v3
  - model: scb10x/typhoon2-qwen2.5-7b-instruct+ngxson/LoRA-Qwen2.5-7B-Instruct-abliterated-v3
merge_method: model_stock
base_model: jeffmeloy/Qwen2.5-7B-nerd-uncensored-v1.8
parameters:
  filter_wise: false
dtype: float32
```
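For reference, the ChatML layout shown at the top of this card can be assembled with a small helper. This is an illustrative sketch (`build_chatml_prompt` is a made-up name, not a tokenizer API); in practice `tokenizer.apply_chat_template` from `transformers` handles this for you.

```python
# Illustrative sketch of how the ChatML format concatenates turns.
# build_chatml_prompt is a hypothetical helper, not part of any library.
def build_chatml_prompt(prompt: str, system_prompt: str = "") -> str:
    parts = []
    if system_prompt:
        parts.append(f"<|im_start|>system\n{system_prompt}<|im_end|>\n")
    parts.append(f"<|im_start|>user\n{prompt}<|im_end|>\n")
    # Open the assistant turn; the model generates until it emits <|im_end|>.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)
```

The returned string ends with an open `<|im_start|>assistant` turn, so the model's completion continues from there.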