Update README.md

---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
library_name: peft
---

# Fine-Tuned LLaMA 3.1 on Dependency Parsing

This model is a fine-tuned version of **LLaMA 3.1**, designed to automate dependency parsing of simple sentences by categorizing each word by its syntactic role according to Universal Dependencies (UD) tags.

## Model Details

### Model Description

The model has been fine-tuned to parse simple sentences by classifying each word into its dependency category, such as `nsubj`, `obj`, and `root`, following the Universal Dependencies framework. For example, in "The cat sleeps.", *The* is tagged `det`, *cat* `nsubj`, and *sleeps* `root`. This fine-tuning sharpens LLaMA 3.1's ability to understand and analyze sentence structure, making it a useful tool for linguistic analysis and other natural language processing tasks.

- **Developed by:** Emanuel Pinasco
- **Model type:** NLP, Dependency Parsing
- **Language(s) (NLP):** English

## Uses

### Direct Use

The model can be used directly for syntactic analysis and linguistic research that requires dependency parsing to understand sentence structure. It is particularly suited to parsing simple sentences.
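
A minimal loading sketch, assuming the standard `transformers` + `peft` flow implied by the card's metadata (`base_model` and `library_name: peft`). `ADAPTER_ID` and the prompt wording are placeholders, not the model's documented interface:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_ID = "unsloth/meta-llama-3.1-8b-instruct-bnb-4bit"
ADAPTER_ID = "your-username/your-adapter-repo"  # placeholder: use this repo's model id

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base = AutoModelForCausalLM.from_pretrained(BASE_ID, device_map="auto")
model = PeftModel.from_pretrained(base, ADAPTER_ID)

# The prompt format is an assumption; the card does not specify the template
# used during fine-tuning.
prompt = "Tag each word with its Universal Dependencies relation: The cat sleeps."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```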

### Downstream Use

Ideal for integration into larger NLP systems that require detailed sentence parsing, such as grammar-checking tools, machine translation systems, and educational software; a post-processing sketch follows.
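
Downstream systems usually want structured output rather than raw generated text. A hedged helper sketch, assuming a hypothetical `word<TAB>label` output format (the card does not specify one; adapt the parsing to whatever the model actually emits):

```python
# Convert generated text into (word, label) pairs for downstream components.
def to_pairs(generated: str) -> list[tuple[str, str]]:
    pairs = []
    for line in generated.strip().splitlines():
        word, _, label = line.partition("\t")
        if label:
            pairs.append((word.strip(), label.strip()))
    return pairs

print(to_pairs("The\tdet\ncat\tnsubj\nsleeps\troot"))
# [('The', 'det'), ('cat', 'nsubj'), ('sleeps', 'root')]
```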

### Out-of-Scope Use

The model is not designed for complex sentence structures, idiomatic expressions, or languages other than English. Applying it to tasks beyond simple dependency parsing is likely to produce inaccurate results.

## Bias, Risks, and Limitations

### Recommendations

Users (both direct and downstream) should be aware that the model's accuracy may decline on more complex or less conventional sentence structures. It is recommended to use this model alongside other tools for more comprehensive linguistic analysis.

## Training Details

### Training Data

The model was trained on a curated dataset of simple English sentences annotated with Universal Dependencies tags, curated to ensure accurate syntactic role labels.

### Training Procedure

#### Training Hyperparameters
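
The card does not record the hyperparameters used. For orientation only, a minimal PEFT/LoRA setup consistent with the 4-bit base model and `library_name: peft`; every value below is an illustrative assumption, not the configuration actually used:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/meta-llama-3.1-8b-instruct-bnb-4bit", device_map="auto"
)
lora = LoraConfig(
    r=16,                    # illustrative rank, not the actual value
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()
```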

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

The model was evaluated on a separate dataset of simple sentences annotated with Universal Dependencies tags.

#### Factors

Evaluation considered sentence simplicity, vocabulary diversity, and variation in syntactic structure.

#### Metrics

The primary metric was per-word accuracy of classification into dependency categories.
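
A sketch of how such a metric is typically computed, assuming per-word comparison against gold labels (the exact scoring script is not published with the card):

```python
# Per-word label accuracy: the fraction of words whose predicted dependency
# label matches the gold annotation.
def label_accuracy(gold: list[str], pred: list[str]) -> float:
    assert len(gold) == len(pred), "predictions must align with gold words"
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

print(label_accuracy(["det", "nsubj", "root"], ["det", "nsubj", "root"]))  # 1.0
print(label_accuracy(["det", "nsubj", "root"], ["det", "obj", "root"]))    # 0.67
```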

#### Summary

The fine-tuned model demonstrates high accuracy in dependency parsing of simple English sentences, making it a robust tool for basic syntactic analysis.

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
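
The calculator's estimate reduces to a simple product: power draw × time × grid carbon intensity. A back-of-the-envelope sketch with placeholder numbers (none of these are measured values for this model):

```python
# Rough CO2eq estimate in the spirit of Lacoste et al. (2019).
gpu_power_kw = 0.3      # e.g. roughly 300 W for one data-center GPU (assumed)
train_hours = 10.0      # hypothetical fine-tuning time
kg_co2_per_kwh = 0.4    # grid carbon intensity; varies by region/provider
print(f"{gpu_power_kw * train_hours * kg_co2_per_kwh:.2f} kg CO2eq (rough estimate)")
```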

## Model Card Authors

Emanuel Pinasco