---
license: llama3
language:
- en
metrics:
- perplexity
base_model:
- meta-llama/Meta-Llama-3-8B
pipeline_tag: text-generation
tags:
- log
- anomaly
- detection
---
# Model Card for Sock-Shop Log Anomaly Detection (Llama 3 8B)
<!-- Provide a quick summary of what the model is/does. -->
This is a Llama 3 model finetuned on execution logs for anomaly detection in the sock-shop app.
## Model Details
### Model Description
This model was finetuned on a variety of system logs from the sock-shop app. Given a chunk of 10 log messages, it generates the next log message according to normal execution.
- **Developed by:** Luís Almeida, Diego Pedroso, Lucas Pulcinelli, William Aisawa, Sarita Bruschi, Inês Dutra
- **Model type:** Text Generation
- **Language(s) (NLP):** English
- **License:** llama3
- **Finetuned from model:** Llama 3 8B (meta-llama/Meta-Llama-3-8B)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/lasdpc-icmc/maia
- **Paper:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
Since the model was finetuned on execution logs of the sock-shop app, it is intended to generate log messages for that app. To adapt it to another system, it should be
finetuned on a sample of execution logs from the new system, as sketched below.
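
As a hedged illustration of that adaptation step, the sketch below shows one way to turn a raw log file into 10-message prompt/target pairs for finetuning. The file name, the one-message-per-line assumption, and the helper `chunk_logs` are hypothetical and are not part of the released code.

```python
# Minimal sketch (assumptions: one log message per line, plain-text file,
# and a chunk size of 10 as described in the model description).
from pathlib import Path


def chunk_logs(path: str, chunk_size: int = 10):
    """Yield (prompt, target) pairs: 10 consecutive log messages and the next one."""
    lines = [l.strip() for l in Path(path).read_text().splitlines() if l.strip()]
    for i in range(len(lines) - chunk_size):
        prompt = "\n".join(lines[i : i + chunk_size])
        target = lines[i + chunk_size]
        yield prompt, target


# Hypothetical usage: build a text dataset where each example is a chunk plus its next message.
examples = [
    {"text": prompt + "\n" + target}
    for prompt, target in chunk_logs("my_app_execution.log")
]
```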
### Direct Use
The model can be used directly on the sock-shop app, predicting the next log message from a chunk of recent execution logs.
### Out-of-Scope Use
Using this model on execution logs from systems it has not been finetuned on may yield poor results.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
We recommend that users finetune this model on logs from their own app before using it.
## How to Get Started with the Model
Please refer to https://github.com/lasdpc-icmc/maia/apps/llm for the code developed for this model. The file `eval_llm.py` provides code to detect system
anomalies.
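
The snippet below is a minimal, hedged sketch of generating the next log message with the standard `transformers` text-generation API rather than the repository's `eval_llm.py`. `MODEL_ID` is a placeholder for this model's Hub id, and the example log chunk is invented for illustration.

```python
# Minimal sketch using the standard transformers API (not the repository's eval_llm.py).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "<this-repo-id>"  # placeholder: replace with this model's Hub id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

# A chunk of 10 recent log messages (invented here for illustration), one per line.
log_chunk = "\n".join(f"order-service: handled request {i}" for i in range(10))

inputs = tokenizer(log_chunk + "\n", return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64, do_sample=False)

# The continuation is the model's prediction of the next log message under normal execution.
predicted = tokenizer.decode(
    output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(predicted.splitlines()[0] if predicted.splitlines() else predicted)
```

A mismatch between the predicted continuation and the log message actually observed can then be treated as an anomaly signal, which is the role `eval_llm.py` plays in the repository.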
## Training Details
### Training Data
https://huggingface.co/datasets/lmma25/sock-shop-logs-train
### Training Procedure
The model was finetuned autoregressively using the SFTTrainer from Hugging Face's TRL library.
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Training Hyperparameters
- **Training regime:** 10 epochs, AdamW optimizer, 1e-4 learning rate, bf16, weight decay 0.01, max gradient norm 0.3, cosine learning rate scheduler
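
For reference, a hedged sketch of how these hyperparameters might map onto an `SFTTrainer` run is shown below. It assumes a recent `trl` version with `SFTConfig`, a dataset with a `text` column, and default values for anything not listed above; it is not the authors' exact configuration.

```python
# Minimal sketch mapping the listed hyperparameters onto trl's SFTTrainer.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

train_dataset = load_dataset("lmma25/sock-shop-logs-train", split="train")

config = SFTConfig(
    output_dir="llama3-sock-shop-logs",
    num_train_epochs=10,            # 10 epochs
    learning_rate=1e-4,             # 1e-4 learning rate
    optim="adamw_torch",            # AdamW optimizer
    bf16=True,                      # bf16 mixed precision
    weight_decay=0.01,              # weight decay 0.01
    max_grad_norm=0.3,              # max gradient norm 0.3
    lr_scheduler_type="cosine",     # cosine learning rate scheduler
)

trainer = SFTTrainer(
    model="meta-llama/Meta-Llama-3-8B",
    args=config,
    train_dataset=train_dataset,
)
trainer.train()
```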
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
https://huggingface.co/datasets/lmma25/sock-shop-logs-test
#### Metrics
The model was used to detect anomalies on a small sample of execution logs, achieving a precision of 0.77 and a recall of 1. Precision and recall were chosen because
they accurately capture the model's behavior with respect to false positives and false negatives.
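
As a hedged illustration of how such metrics can be computed (this is not the repository's `eval_llm.py` logic), the label arrays below are invented for illustration.

```python
# Minimal sketch of computing precision and recall for anomaly detection.
# The label arrays are invented; 1 = anomaly, 0 = normal execution.
from sklearn.metrics import precision_score, recall_score

y_true = [0, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # ground-truth anomaly labels
y_pred = [0, 1, 1, 1, 0, 1, 0, 1, 1, 0]   # labels produced by the detector

print("precision:", precision_score(y_true, y_pred))  # TP / (TP + FP)
print("recall:", recall_score(y_true, y_pred))        # TP / (TP + FN)
```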
### Results
- Precision: 0.77
- Recall: 1.00
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]