student-abdullah committed · Commit 38aa0f3 · verified · 1 Parent(s): 8504523

Update README.md

Files changed (1): README.md (+27 -1)
README.md CHANGED
@@ -54,7 +54,33 @@ The remaining layers were quantized to *q3_k_l*
 
 ---
 # Model Architecture
- <pre><code>```python Qwen2ForCausalLM( (model): Qwen2Model( (embed_tokens): Embedding(151936, 896, padding_idx=151665) (layers): ModuleList( (0-23): 24 x Qwen2DecoderLayer( (self_attn): Qwen2Attention( (q_proj): Linear(in_features=896, out_features=896, bias=True) (k_proj): Linear(in_features=896, out_features=128, bias=True) (v_proj): Linear(in_features=896, out_features=128, bias=True) (o_proj): Linear(in_features=896, out_features=896, bias=False) (rotary_emb): LlamaRotaryEmbedding() ) (mlp): Qwen2MLP( (gate_proj): Linear(in_features=896, out_features=4864, bias=False) (up_proj): Linear(in_features=896, out_features=4864, bias=False) (down_proj): Linear(in_features=4864, out_features=896, bias=False) (act_fn): SiLU() ) (input_layernorm): Qwen2RMSNorm((896,), eps=1e-06) (post_attention_layernorm): Qwen2RMSNorm((896,), eps=1e-06) ) ) (norm): Qwen2RMSNorm((896,), eps=1e-06) (rotary_emb): LlamaRotaryEmbedding() ) (lm_head): Linear(in_features=896, out_features=151936, bias=False) ) ```</code></pre>
+ <pre><code>Qwen2ForCausalLM(
+   (model): Qwen2Model(
+     (embed_tokens): Embedding(151936, 896, padding_idx=151665)
+     (layers): ModuleList(
+       (0-23): 24 x Qwen2DecoderLayer(
+         (self_attn): Qwen2Attention(
+           (q_proj): Linear(in_features=896, out_features=896, bias=True)
+           (k_proj): Linear(in_features=896, out_features=128, bias=True)
+           (v_proj): Linear(in_features=896, out_features=128, bias=True)
+           (o_proj): Linear(in_features=896, out_features=896, bias=False)
+           (rotary_emb): LlamaRotaryEmbedding()
+         )
+         (mlp): Qwen2MLP(
+           (gate_proj): Linear(in_features=896, out_features=4864, bias=False)
+           (up_proj): Linear(in_features=896, out_features=4864, bias=False)
+           (down_proj): Linear(in_features=4864, out_features=896, bias=False)
+           (act_fn): SiLU()
+         )
+         (input_layernorm): Qwen2RMSNorm((896,), eps=1e-06)
+         (post_attention_layernorm): Qwen2RMSNorm((896,), eps=1e-06)
+       )
+     )
+     (norm): Qwen2RMSNorm((896,), eps=1e-06)
+     (rotary_emb): LlamaRotaryEmbedding()
+   )
+   (lm_head): Linear(in_features=896, out_features=151936, bias=False)
+ )</code></pre>
 
 ---
 # Performance & Limitations
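
For reference, a module tree like the one added in this diff is simply what PyTorch prints when you `print(model)`. Below is a minimal sketch of reproducing it, assuming the `transformers` library; the repo ID is a stand-in for illustration (the actual model repository is not named in this diff, though the dimensions match the Qwen2-0.5B base):

```python
# Minimal sketch: reproduce an architecture dump like the one in this commit.
# ASSUMPTION: "Qwen/Qwen2-0.5B" is a stand-in repo ID; the diff's dimensions
# (hidden=896, 24 layers, MLP=4864, vocab=151936) match that base model.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B")
print(model)  # prints the Qwen2ForCausalLM(...) module tree shown above

# The projection shapes in the dump encode grouped-query attention:
cfg = model.config
head_dim = cfg.hidden_size // cfg.num_attention_heads  # 896 // 14 = 64
print(f"{cfg.num_attention_heads} query heads / "      # q_proj: 14 * 64 = 896
      f"{cfg.num_key_value_heads} KV heads, "          # k/v_proj: 2 * 64 = 128
      f"head_dim={head_dim}")
```

This is why `q_proj` is 896-wide while `k_proj` and `v_proj` are only 128-wide: 14 query heads share 2 key/value heads.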