nextai-team commited on
Commit
30c6acc
·
verified ·
1 Parent(s): c1c2d28

Update README.md

Browse files
Files changed (1) hide show
  1. README.md +24 -44
README.md CHANGED
@@ -7,67 +7,47 @@ tags:
7
  - code
8
  - QA
9
  - reasoning
 
 
 
 
 
 
10
  ---
11
 
12
 
13
- # Model Card for Model ID
14
 
15
- <!-- Provide a quick summary of what the model is/does. -->
16
 
 
17
 
 
18
 
19
- ## Model Details
20
 
21
- ### Model Description
22
 
23
- <!-- Provide a longer summary of what this model is. -->
24
- A powerfull MOE 4x7b mixtral of mistral models build using
25
- HuggingFaceH4/zephyr-7b-beta,
26
- mistralai/Mistral-7B-Instruct-v0.2,
27
- teknium/OpenHermes-2.5-Mistral-7B,
28
- Intel/neural-chat-7b-v3-3
29
- for more accuracy and precision in general reasoning, QA and code.
30
 
31
- - **Developed by:** NEXT AI
32
- - **Funded by :** Zpay Labs Pvt Ltd.
33
- - **Model type:** Mixtral of Mistral 4x7b
34
- - **Language(s) (NLP):** Code-Reasoning-QA
35
- -
36
 
37
- ### Model Sources
38
 
39
- https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2
40
- https://huggingface.co/Intel/neural-chat-7b-v3-3
41
- https://huggingface.co/HuggingFaceH4/zephyr-7b-beta
42
- https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B
43
 
44
- ### Instructions to run the model
45
 
 
46
 
47
- from transformers import AutoTokenizer
48
- import transformers
49
- import torch
50
 
51
- model = "nextai-team/Moe-4x7b-reason-code-qa"
52
 
53
- tokenizer = AutoTokenizer.from_pretrained(model)
54
- pipeline = transformers.pipeline(
55
- "text-generation",
56
- model=model,
57
- model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
58
- )
59
 
60
- def generate_resposne(query):
61
- messages = [{"role": "user", "content": query}]
62
- prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
63
- outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
64
- return outputs[0]['generated_text']
65
 
66
- response = generate_resposne("How to start learning GenAI")
67
- print(response)
68
-
69
-
70
- <!-- Provide the basic links for the model. -->
71
-
72
- - **Demo :** Https://nextai.co.in
73
 
 
 
7
  - code
8
  - QA
9
  - reasoning
10
+ - mixtral
11
+ - maths
12
+ - sql
13
+ - mistral
14
+ - zephyr
15
+ - codellama
16
  ---
17
 
18
 
19
+ Model Details
20
 
21
+ Model Name: Moe-4x7b-QA-Code-Inst Publisher: nextai-team Model Type: Question Answering & Code Generation Architecture: Mixture of Experts (MoE) Model Size: 4x7 billion parameters
22
 
23
+ Overview
24
 
25
+ Moe-4x7b-QA-Code-Inst is an advanced AI model designed by the nextai-team for the purpose of enhancing question answering and code generation capabilities. Building upon the foundation of its predecessor, Moe-2x7b-QA-Code, this iteration introduces refined mechanisms and expanded training datasets to deliver more precise and contextually relevant responses.
26
 
27
+ Intended Use
28
 
29
+ This model is intended for developers, data scientists, and researchers seeking to integrate sophisticated natural language understanding and code generation functionalities into their applications. Ideal use cases include but are not limited to:
30
 
31
+ Automated coding assistance Technical support bots Educational tools for learning programming Enhancing code review processes
 
 
 
 
 
 
32
 
33
+ Model Architecture Moe-4x7b-QA-Code-Inst employs a Mixture of Experts (MoE) architecture, which allows it to efficiently manage its vast number of parameters for specialized tasks. This architecture facilitates the model's ability to discern subtle nuances in programming languages and natural language queries, leading to more accurate code generation and question answering performance.
 
 
 
 
34
 
35
+ Training Data The model has been trained on a diverse and extensive corpus comprising technical documentation, open-source code repositories, Stack Overflow questions and answers, and other programming-related texts. Special attention has been given to ensure a wide range of programming languages and frameworks are represented in the training data to enhance the model's versatility.
36
 
37
+ Performance Moe-4x7b-QA-Code-Inst demonstrates significant improvements in accuracy and relevance over its predecessor, particularly in complex coding scenarios and detailed technical queries. Benchmarks and performance metrics can be provided upon request.
 
 
 
38
 
39
+ Limitations and Biases
40
 
41
+ While Moe-4x7b-QA-Code-Inst represents a leap forward in AI-assisted coding and technical Q&A, it is not without limitations. The model may exhibit biases present in its training data, and its performance can vary based on the specificity and context of the input queries. Users are encouraged to critically assess the model's output and consider it as one of several tools in the decision-making process.
42
 
43
+ Ethical Considerations
 
 
44
 
45
+ We are committed to ethical AI development and urge users to employ Moe-4x7b-QA-Code-Inst responsibly. This includes but is not limited to avoiding the generation of harmful or unsafe code, respecting copyright and intellectual property rights, and being mindful of privacy concerns when inputting sensitive information into the model.
46
 
47
+ Usage Instructions
 
 
 
 
 
48
 
49
+ For detailed instructions on how to integrate and utilize Moe-4x7b-QA-Code-Inst in your projects, please refer to our GitHub repository and Hugging Face documentation.
 
 
 
 
50
 
51
+ Citation If you use Moe-4x7b-QA-Code-Inst in your research or application, please cite it as follows:
 
 
 
 
 
 
52
 
53
+ @misc{nextai2024moe4x7b, title={Moe-4x7b-QA-Code-Inst: Enhancing Question Answering and Code Generation with Mixture of Experts}, author={NextAI Team}, year={2024}, publisher={Hugging Face} }