yasserrmd committed (verified)
Commit b1c1f46 · Parent: 4054fbf

Update README.md

Files changed (1): README.md (+107 −6)
README.md CHANGED
@@ -8,14 +8,115 @@ tags:
  license: apache-2.0
  language:
  - en
  ---

- # Uploaded finetuned model

- - **Developed by:** yasserrmd
- - **License:** apache-2.0
- - **Finetuned from model :** unsloth/LFM2-1.2B

- This lfm2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
 
  license: apache-2.0
  language:
  - en
+ datasets:
+ - ajibawa-2023/Software-Architecture
  ---

+ # SoftwareArchitecture-Instruct-v1

+ **Domain:** Software Architecture (for technical professionals)
+ **Type:** Instruction-tuned LLM
+ **Base:** LiquidAI/LFM2-1.2B (a 1.2B-parameter hybrid model optimized for edge deployment)
+ **Fine-tuned on:** `ajibawa-2023/Software-Architecture` dataset
+ **Author:** Mohamed Yasser (`yasserrmd`)
 
+ ---
+
+ ## Model Description
+
+ **SoftwareArchitecture-Instruct-v1** is an instruction-tuned adaptation of LiquidAI’s lightweight and efficient **LFM2-1.2B** model. It is tailored to deliver accurate, technically detailed responses to questions about **software architecture**, with engineers and architects as its target audience.
+
+ The base model, LFM2-1.2B, features a **16-layer hybrid design** (10 convolutional + 6 grouped query attention layers), supports a **32,768-token context**, and offers **fast inference on CPU, GPU, and NPU** platforms, making it well suited to both cloud and edge deployments.
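+
+ These figures can be checked against the published model configuration. Below is a minimal sketch assuming the standard `transformers` `AutoConfig` API; the attribute names are assumptions and may differ by architecture or library version:
+
+ ```python
+ # Minimal sketch: inspect the base model's config to check depth and context length.
+ # Attribute names are assumptions; getattr with a default avoids errors if they differ.
+ from transformers import AutoConfig
+
+ config = AutoConfig.from_pretrained("LiquidAI/LFM2-1.2B")
+ print(type(config).__name__)
+ print("hidden layers:", getattr(config, "num_hidden_layers", "n/a"))
+ print("context length:", getattr(config, "max_position_embeddings", "n/a"))
+ ```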
+
+ ---
+
+ ## Benchmark Summary
+
+ We performed a 50-prompt benchmark across diverse software architecture topics (a sketch of how such statistics can be computed follows the interpretation notes):
+
+ | Metric | Value |
+ |------------------------------|----------------------|
+ | Average Words per Response | ~144 |
+ | Median Words per Response | ~139 |
+ | Min / Max Words per Response | 47 / 224 |
+ | Avg Sentences per Output | ~8.6 |
+ | Lexical Diversity (TTR) | ~0.73 |
+ | Readability Complexity | High (professional-level) |
+ | Accuracy (topic keyword coverage) | Majority ≥ 60% |
+ | Off-topic Responses | None detected |
+
+ **Interpretation:**
+ - Responses are **substantive and domain-appropriate** for technical audiences.
+ - Coverage is strong; a few answers could include additional keywords, but the core technical content is accurate.
+ - Readability intentionally leans toward complexity, in line with an expert audience.
+
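+ The exact evaluation script is not published; the sketch below shows one way such text statistics can be computed with the Python standard library. The `responses` and `topic_keywords` names are hypothetical placeholders, and the TTR and keyword-coverage definitions are assumptions:
+
+ ```python
+ # Hypothetical benchmark-statistics sketch; `responses` holds the 50 generated
+ # answers and `topic_keywords` the expected domain terms for each prompt.
+ import re
+ import statistics
+
+ responses = ["..."]                           # placeholder: 50 model answers
+ topic_keywords = [{"saga", "orchestration"}]  # placeholder: one set per prompt
+
+ def words(text):
+     return re.findall(r"[a-z0-9'-]+", text.lower())
+
+ word_counts = [len(words(r)) for r in responses]
+ sentence_counts = [len(re.findall(r"[.!?]+", r)) for r in responses]
+
+ # Per-response type-token ratio (unique words / total words), then averaged
+ ttrs = [len(set(words(r))) / max(len(words(r)), 1) for r in responses]
+
+ # Keyword coverage: share of expected terms that appear in each answer
+ coverage = [
+     len(kw & set(words(r))) / max(len(kw), 1)
+     for r, kw in zip(responses, topic_keywords)
+ ]
+
+ print("avg words:", statistics.mean(word_counts))
+ print("median words:", statistics.median(word_counts))
+ print("min/max words:", min(word_counts), max(word_counts))
+ print("avg sentences:", statistics.mean(sentence_counts))
+ print("avg TTR:", round(statistics.mean(ttrs), 2))
+ print("answers with >=60% keyword coverage:", sum(c >= 0.6 for c in coverage))
+ ```
+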
+ ---
+
+ ## Intended Use
+
+ - **Ideal for:** Software architects, system designers, engineering leads, and experienced developers seeking architecture guidance.
+ - **Use cases include:**
+   - Exploring architectural patterns (e.g., CQRS, Saga, API Gateway).
+   - Drafting design docs and decision rationale.
+   - Architectural interview prep and system design walkthroughs.
+
+ **Not intended for:**
+ - Non-technical or general-purpose Q&A.
+ - In-depth code generation or debugging without architectural focus.
+
+ ---
+
+ ## Usage Example
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_name = "yasserrmd/SoftwareArchitecture-Instruct-v1"
+
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
+
+ messages = [
+     {"role": "user", "content": "Explain the Saga pattern with orchestration and choreography."}
+ ]
+
+ # Build the chat-formatted prompt; return_dict=True yields input_ids and attention_mask
+ inputs = tokenizer.apply_chat_template(
+     messages,
+     add_generation_prompt=True,
+     return_tensors="pt",
+     return_dict=True,
+ ).to(model.device)
+
+ outputs = model.generate(
+     **inputs,
+     max_new_tokens=256,
+     do_sample=True,           # sampling must be enabled for temperature to take effect
+     temperature=0.3,
+     repetition_penalty=1.05,
+ )
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
+
+ ---
+
+ ## Training Details
+
+ * **Base model:** `LiquidAI/LFM2-1.2B`, optimized for edge/CPU inference
+ * **Dataset:** `ajibawa-2023/Software-Architecture`
+ * **Fine-tuning:** Supervised instruction tuning (a hypothetical setup is sketched below)
+ * **Hyperparameters:** epochs, learning rate, and hardware are not documented here
+
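+ The exact training recipe is not published; the earlier card only notes that the model was trained with Unsloth and Hugging Face's TRL library. The following is a minimal, hypothetical TRL SFT sketch; the hyperparameters and dataset handling are placeholders, not the values used for this model:
+
+ ```python
+ # Hypothetical supervised fine-tuning sketch using TRL; all hyperparameters are
+ # placeholders and the dataset column layout is an assumption.
+ from datasets import load_dataset
+ from trl import SFTConfig, SFTTrainer
+
+ dataset = load_dataset("ajibawa-2023/Software-Architecture", split="train")
+
+ trainer = SFTTrainer(
+     model="LiquidAI/LFM2-1.2B",
+     train_dataset=dataset,
+     args=SFTConfig(
+         output_dir="softwarearchitecture-instruct-v1",
+         num_train_epochs=1,              # placeholder
+         per_device_train_batch_size=2,   # placeholder
+         learning_rate=2e-5,              # placeholder
+     ),
+ )
+ trainer.train()
+ ```
+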
+ ---
+
+ ## Limitations
+
+ * **Answer length is capped** by `max_new_tokens`. Some responses may be truncated mid-explanation; raising this limit improves completeness.
+ * **Keyword coverage is strong but not exhaustive.** A few responses would benefit from additional domain terms.
+ * **Not a replacement** for expert-reviewed architectural validation; use it as a support tool, not the final authority.
+
+
+ ---
+
+ ## License
+
+ * **Base model license:** LFM Open License v1.0
+ * **Dataset license:** refer to the `ajibawa-2023/Software-Architecture` dataset card
+
+
+ ---
+
+ ## Author
+
+ Mohamed Yasser – [Hugging Face profile](https://huggingface.co/yasserrmd)