FINGU-AI committed
Commit 0fb4cff · verified · 1 Parent(s): a72a349

Update README.md

Files changed (1): README.md (+4 -4)
@@ -1,13 +1,13 @@
 ---
 license: mit
 ---
-# FINGU-AI/Qwen2.5-7B-M
+# FINGU-AI/BNK_Translate_LLM_V3
 
 ## Overview
-`FINGU-AI/Qwen2.5-7B-M` is a powerful causal language model designed for a variety of natural language processing (NLP) tasks, including machine translation, text generation, and chat-based applications. This model is particularly useful for translating between Korean and Uzbek, as well as supporting other custom NLP tasks through flexible input.
+`FINGU-AI/BNK_Translate_LLM_V3` is a powerful causal language model designed for a variety of natural language processing (NLP) tasks, including machine translation, text generation, and chat-based applications. This model is particularly useful for translating between Korean and Uzbek, as well as supporting other custom NLP tasks through flexible input.
 
 ## Model Details
-- **Model ID**: `FINGU-AI/Qwen2.5-7B-M`
+- **Model ID**: `FINGU-AI/BNK_Translate_LLM_V3`
 - **Architecture**: Causal Language Model (LM)
 - **Parameters**: 7 billion
 - **Precision**: Torch BF16 for efficient GPU memory usage
@@ -29,7 +29,7 @@ from transformers import AutoTokenizer, AutoModelForCausalLM
 import torch
 
 # Model and Tokenizer
-model_id = 'FINGU-AI/Qwen2.5-7B-M'
+model_id = 'FINGU-AI/BNK_Translate_LLM_V3'
 model = AutoModelForCausalLM.from_pretrained(model_id, attn_implementation="sdpa", torch_dtype=torch.bfloat16)
 tokenizer = AutoTokenizer.from_pretrained(model_id)
 model.to('cuda')
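The README snippet in the diff stops after loading the model. A minimal end-to-end sketch of the Korean-to-Uzbek translation use case it describes might look like the following, assuming the model exposes a standard chat template via the tokenizer (the system/user prompt wording here is illustrative, not taken from the model card):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Updated model ID from this commit.
model_id = 'FINGU-AI/BNK_Translate_LLM_V3'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, attn_implementation="sdpa", torch_dtype=torch.bfloat16
)
model.to('cuda')

# Build a chat-style prompt for Korean -> Uzbek translation.
# The instruction text is an assumption for illustration.
messages = [
    {"role": "system", "content": "Translate the user's Korean text into Uzbek."},
    {"role": "user", "content": "안녕하세요, 만나서 반갑습니다."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate and decode only the newly produced tokens.
with torch.no_grad():
    output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Since the model is loaded in BF16 and moved to `cuda`, this sketch assumes a GPU with enough memory for the 7B weights (roughly 15 GB in BF16).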