The code to replicate the experiments is available on [GitHub](https://github.co
## Model Inference

To use the LlamaLens model for inference, follow these steps:

1. **Install the Required Libraries**:

   Make sure the necessary libraries are installed. You can do this with pip:

   ```bash
   pip install transformers torch
   ```
2. **Load the Model and Tokenizer**:

   Use the `transformers` library to load the LlamaLens model and its tokenizer:

   ```python
   from transformers import AutoTokenizer, AutoModelForCausalLM

   model_name = "QCRI/LlamaLens"
   tokenizer = AutoTokenizer.from_pretrained(model_name)
   model = AutoModelForCausalLM.from_pretrained(model_name)
   ```
3. **Prepare the Input**:

   Tokenize your input text:

   ```python
   input_text = "Your input text here"
   inputs = tokenizer(input_text, return_tensors="pt")
   ```
4. **Generate the Output**:

   Generate a response with the model and decode it:

   ```python
   # Cap the generation length; without it, generate() falls back to a short default.
   output = model.generate(**inputs, max_new_tokens=256)
   response = tokenizer.decode(output[0], skip_special_tokens=True)
   print(response)
   ```
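Taken together, the steps above can be combined into a single script. This is a minimal sketch, assuming the `QCRI/LlamaLens` checkpoint name from this card; `generate_response` and `main` are hypothetical helper names introduced here for illustration, not part of the `transformers` API:

```python
def generate_response(model, tokenizer, text, max_new_tokens=256):
    """Tokenize `text`, run generation, and decode the result.

    Hypothetical helper wrapping steps 3 and 4 above; works with any
    causal LM / tokenizer pair loaded via transformers.
    """
    # Move the tokenized batch to the same device as the model.
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)


def main():
    # Import here so the helper above has no hard dependency;
    # loading the checkpoint downloads the model weights on first use.
    from transformers import AutoTokenizer, AutoModelForCausalLM

    model_name = "QCRI/LlamaLens"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    print(generate_response(model, tokenizer, "Your input text here"))
```

Call `main()` to run the full pipeline end to end.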

## Paper
For an in-depth understanding, refer to our paper: [**LlamaLens: Specialized Multilingual LLM for Analyzing News and Social Media Content**](https://arxiv.org/pdf/2410.15308).

# License
This model is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).