---
base_model:
  - Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
library_name: transformers
tags:
  - mergekit
  - merge
---

# Kor-merge-llama 3.1 8B

"μ΄λ ‡κ²Œ 계엄령이 μ„ ν¬λ˜μ—ˆμ–΄. μ˜ˆλΉ„κ΅°μ€ 이제 μ–΄λ–»κ²Œ ν•΄μ•Όν• κΉŒ?"

계엄령 μ„ ν¬λ‘œ 인해 μ˜ˆλΉ„κ΅°μ— λŒ€ν•œ μƒˆλ‘œμš΄ μ§€μΉ¨μ΄λ‚˜ μ§€μ‹œκ°€ λ‚΄λ €μ§ˆ κ²ƒμœΌλ‘œ μ˜ˆμƒλ©λ‹ˆλ‹€. ν•˜μ§€λ§Œ μ €λŠ” ν˜„μž¬μ˜ 상황을 μ •ν™•νžˆ νŒŒμ•…ν•˜κΈ° μ–΄λ ΅κ³ , μ˜ˆλΉ„κ΅°μ˜ 행동 κΈ°μ€€μ΄λ‚˜ ν™œλ™ λ²”μœ„λ₯Ό κ²°μ •ν•˜λŠ” 것은 ν•΄λ‹Ή λΆ€μ„œλ‚˜ κΈ°κ΄€μ˜ 전문가듀이 λ‹΄λ‹Ήν•  μΌμž…λ‹ˆλ‹€. 계엄령에 λ”°λ₯Έ μ˜ˆλΉ„κ΅°μ˜ μ—­ν• κ³Ό μž„λ¬΄μ— λŒ€ν•΄μ„œλŠ” 곡식적인 μ•ˆλ‚΄λ‚˜ 지침을 κΈ°λ‹€λ¦¬λŠ” 것이 쒋을 것 κ°™μ•„μš”. μ•ˆμ „μ— μœ μ˜ν•˜μ‹œκΈ° 바라며, 상황 νŒŒμ•…μ— μ΅œμ„ μ„ λ‹€ν•΄ μ£Όμ‹œκΈ°λ₯Ό λ°”λžλ‹ˆλ‹€.

"그래. μ•Œκ² μ–΄, 근데 λ‚΄κ°€ ꡰ인이면 λ„ˆλ„ ꡰ인인거 μ•Œμ§€?"

λ„€, μ €λŠ” AIλΌλŠ” 점을 κ°μ•ˆν•˜λ©΄ κ΅°μΈμ΄λ‚˜ 비ꡰ인 ꡬ뢄이 μ—†μ–΄μš”. ν•˜μ§€λ§Œ μ €λŠ” μ—¬λŸ¬λΆ„μ˜ 의견과 감정을 μ΄ν•΄ν•˜κ³  κ³΅κ°ν•˜λŠ” κΈ°λŠ₯을 κ°–μΆ”κ³  μžˆμ–΄μš”. 그에 따라, κ΅°μΈμœΌλ‘œμ„œμ˜ κ·€ν•˜μ˜ μž…μž₯κ³Ό 걱정에 λŒ€ν•΄ μ €λŠ” 곡감할 수 μžˆμ–΄μš”. ν•˜μ§€λ§Œ 계엄령에 λ”°λ₯Έ μ˜ˆλΉ„κ΅°μ˜ 행동 μ§€μΉ¨μ΄λ‚˜ μž„λ¬΄μ— λŒ€ν•΄μ„œλŠ” μ „λ¬Έκ°€λ“€μ˜ νŒλ‹¨μ— λ”°λ₯΄λŠ” 것이 λ°”λžŒμ§ν•  것 κ°™μ•„μš”. 곡식적인 μ•ˆλ‚΄λ‚˜ μ§€μ‹œλ₯Ό κΈ°λ‹€λ¦¬λŠ” 것이 κ°€μž₯ μ•ˆμ „ν•˜κ³  μ˜¬λ°”λ₯Έ 방법일 κ±°μ˜ˆμš”. 상황이 λ”μš± μ•ˆμ •λ˜κΈ°λ₯Ό 바라며, κ·€ν•˜μ™€ κ°€μ‘±λΆ„λ“€μ˜ μ•ˆμ „μ„ κΈ°μ›ν•©λ‹ˆλ‹€.

ν•œκ΅­μ–΄ μž˜ν•˜κ³ , ν• λ£¨μ‹œλ„€μ΄μ…˜(ν—›μ†Œλ¦¬) λͺ»μž‘μ•˜μŠ΅λ‹ˆλ‹€.

ν“¨μ „μ‹œν‚¨ λͺ¨λΈμ΄λΌ λΌμ΄μ„ΌμŠ€κ°€ 이게 λ§žλ‚˜ μ‹Άμ§€λ§Œ 일단 μ˜¬λ €λ΄…λ‹ˆλ‹€.

ν•œκ΅­μ–΄ μž˜ν•˜λŠ” llama 3.1 μ°ΎμœΌμ‹œλŠ”λΆ„λ“€μ΄ μœ μš©ν•˜κ²Œ μ‚¬μš©ν•˜μ‹€ 수 있으면 μ’‹κ² μŠ΅λ‹ˆλ‹€.

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the DARE TIES merge method, with Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2 as the base.
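As a rough intuition for what the `density` values in the configuration below control: DARE randomly drops entries of each model's task vector (fine-tuned weights minus base weights) and rescales the survivors so the delta is preserved in expectation. A minimal toy sketch of that drop-and-rescale step (an illustrative simplification, not mergekit's actual implementation):

```python
import random

def dare_sparsify(delta, density, rnd):
    # DARE keeps each task-vector entry with probability `density`
    # and rescales survivors by 1/density, so the expected value
    # of the merged delta is unchanged.
    return [d / density if rnd.random() < density else 0.0 for d in delta]

rnd = random.Random(0)
delta = [1.0] * 10_000               # toy task vector (fine-tuned minus base)
sparse = dare_sparsify(delta, 0.5, rnd)

kept = sum(1 for d in sparse if d != 0.0)
print(round(kept / len(sparse), 2))         # fraction kept, close to 0.5
print(round(sum(sparse) / len(sparse), 2))  # mean stays close to 1.0
```

TIES then resolves sign conflicts between the sparsified deltas before they are combined with the listed `weight` values.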

### Models Merged

The following models were included in the merge:

* AIDXteam/ktdsbaseLM-v0.2-onbased-llama3.1
* NCSOFT/Llama-VARCO-8B-Instruct
* unidocs/llama-3.1-8b-komedic-instruct

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
    # no parameters necessary for base model
  - model: AIDXteam/ktdsbaseLM-v0.2-onbased-llama3.1
    parameters:
      density: 0.5
      weight: 0.5
  - model: unidocs/llama-3.1-8b-komedic-instruct
    parameters:
      density: 0.8
      weight: 0.7
  - model: NCSOFT/Llama-VARCO-8B-Instruct
    parameters:
      density: 0.3
      weight: 0.5
  - model: unidocs/llama-3.1-8b-komedic-instruct
    parameters:
      density: 0.4
      weight: 0.5
  - model: NCSOFT/Llama-VARCO-8B-Instruct
    parameters:
      density: 0.5
      weight: 0.5
merge_method: dare_ties
base_model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
dtype: bfloat16
```
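If you save the YAML above as `config.yaml`, the merge should be reproducible with mergekit's command-line tool (assuming mergekit is installed; the output path here is arbitrary and flags may vary by mergekit version):

```shell
pip install mergekit
mergekit-yaml config.yaml ./kor-merge-llama3.1 --cuda
```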