mjwagerman committed
Commit 5515605 · 1 Parent(s): dcce0f8

gpt generated readme :)

Files changed (1)
  1. README.md +19 -6
README.md CHANGED
@@ -7,18 +7,18 @@ tags:
 - fine-tuning
 license: mit
 datasets:
-- your-dataset-name
+- ...
 model-index:
 - name: Bias Detector
   results:
   - task:
       type: text-classification
     dataset:
-      name: Your Dataset Name
-      type: dataset-type
+      name: ...
+      type: ...
     metrics:
     - type: accuracy
-      value: 0.92
+      value: ...
 ---
 
 # Bias Detector
@@ -29,10 +29,23 @@ This model is fine-tuned using **PEFT LoRA** on existing **Hugging Face models**
 - **Architecture:** Transformer-based (e.g., BERT, RoBERTa)
 - **Fine-tuning Method:** Parameter Efficient Fine-Tuning (LoRA)
 - **Use Case:** Bias classification, text summarization, sentiment analysis
-- **Dataset:** [Your Dataset Name](https://huggingface.co/datasets/your-dataset)
+- **Dataset:** [...](https://huggingface.co/datasets/your-dataset)
 - **Training Framework:** PyTorch + Transformers
 
 ## Usage
 To use this model, install the necessary libraries:
 ```bash
-pip install transformers torch
+pip install transformers torch
+```
+Then load the model with:
+```python
+from transformers import AutoModelForSequenceClassification, AutoTokenizer
+
+model_name = "mjwagerman/bias-detector"
+tokenizer = AutoTokenizer.from_pretrained(model_name)
+model = AutoModelForSequenceClassification.from_pretrained(model_name)
+
+text = "This is an example news headline."
+inputs = tokenizer(text, return_tensors="pt")
+outputs = model(**inputs)
+```
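
The usage snippet added in this commit stops at `outputs = model(**inputs)`. Below is a minimal sketch of turning those logits into a predicted label; the label names come from the checkpoint's `id2label` config, which this commit does not show, so the printed name is whatever the model actually defines.

```python
import torch

# Continues the README's usage example: outputs.logits has shape
# (batch_size, num_labels). Softmax turns scores into per-class probabilities.
probs = torch.softmax(outputs.logits, dim=-1)
predicted_id = int(probs.argmax(dim=-1))

# id2label is read from the model's config; if the repo defines no custom
# names, transformers falls back to generic "LABEL_0", "LABEL_1", ...
label = model.config.id2label[predicted_id]
print(f"{label}: {probs[0, predicted_id].item():.3f}")
```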
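
The card describes the model as a PEFT LoRA fine-tune but does not say whether the uploaded weights are merged or shipped as a LoRA adapter. If they are an adapter, loading would go through `peft`; the base checkpoint and label count below are illustrative assumptions, not taken from the commit.

```python
# Sketch only: assumes the repo holds a LoRA adapter (not merged weights),
# that the base model is roberta-base, and that the task has 2 labels;
# none of this is stated in the commit.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
model = PeftModel.from_pretrained(base, "mjwagerman/bias-detector")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
```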