Model Card for lmma25/log-gen

This is a Llama 3 model finetuned on execution logs of the sock-shop application, intended for anomaly detection in that app.

Model Details

Model Description

This model was finetuned on a variety of system logs from the sock-shop app. Given a chunk of 10 log messages, it generates the next log message expected under normal execution.

  • Developed by: Luís Almeida, Diego Pedroso, Lucas Pulcinelli, William Aisawa, Sarita Bruschi, Inês Dutra
  • Model type: Text Generation
  • Language(s) (NLP): English
  • License: Llama 3 Community License
  • Finetuned from model: Llama 3 8B

Model Sources

  • Repository: https://github.com/lasdpc-icmc/maia/apps/llm

Uses

Since the model was finetuned on execution logs of the sock-shop app, it is intended to generate logs for that app. To adapt it to another system, it should be finetuned on a sample of execution logs from the new system.

Direct Use

The model can be plugged directly into the sock-shop app: given a chunk of its recent log messages, it predicts the next message expected under normal execution, which can be compared against the observed message to flag anomalies.

Out-of-Scope Use

Using this model on execution logs it has not been finetuned on may yield poor results.

Recommendations

We recommend that users finetune this model on logs from their own application before using it.

How to Get Started with the Model

Please refer to https://github.com/lasdpc-icmc/maia/apps/llm for the code files that were developed for this model. The file "eval_llm.py" provides code to detect system anomalies.
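
The snippet below is a minimal sketch of next-log prediction with the Transformers API, assuming the checkpoint is published as lmma25/log-gen and that a prompt of 10 newline-separated log messages is an acceptable input format; the exact prompting used by eval_llm.py may differ.

```python
# Minimal next-log prediction sketch. Assumptions: the checkpoint id and the
# newline-joined prompt format are illustrative, not confirmed by this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lmma25/log-gen"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# A chunk of 10 consecutive log messages from the monitored service
# (placeholder strings; replace with real sock-shop log lines).
log_chunk = [f"<log message {i}>" for i in range(1, 11)]
prompt = "\n".join(log_chunk) + "\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64, do_sample=False)

# Decode only the newly generated tokens: the model's prediction of the
# next log message under normal execution.
new_tokens = output[0][inputs["input_ids"].shape[1]:]
predicted = tokenizer.decode(new_tokens, skip_special_tokens=True).splitlines()
print(predicted[0] if predicted else "")
```

A detector can then compare the predicted message against the one actually observed and flag deviations; see eval_llm.py for the evaluation logic actually used.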

Training Details

Training Data

https://huggingface.co/datasets/lmma25/sock-shop-logs-train

Training Procedure

The model was finetuned autoregressively using the SFTTrainer from Hugging Face's TRL library.

Training Hyperparameters

  • Training regime: 10 epochs, AdamW optimizer, learning rate 1e-4, bf16, weight decay 0.01, max gradient norm 0.3, cosine learning rate scheduler (see the configuration sketch below)
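
The following is a minimal configuration sketch of that setup using the SFTTrainer with the hyperparameters above. The base-model identifier, the split name, and the presence of a "text" column in the training set are assumptions, not details from this card.

```python
# Finetuning sketch. Assumptions (not confirmed by the card): the base
# checkpoint id, the "train" split name, and that the dataset exposes a
# "text" column (SFTConfig's default dataset_text_field).
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

base_model = "meta-llama/Meta-Llama-3-8B"  # assumed base checkpoint
dataset = load_dataset("lmma25/sock-shop-logs-train", split="train")

# Hyperparameters listed above: 10 epochs, AdamW, lr 1e-4, bf16,
# weight decay 0.01, max grad norm 0.3, cosine schedule.
training_args = SFTConfig(
    output_dir="log-gen-sft",
    num_train_epochs=10,
    optim="adamw_torch",
    learning_rate=1e-4,
    bf16=True,
    weight_decay=0.01,
    max_grad_norm=0.3,
    lr_scheduler_type="cosine",
)

trainer = SFTTrainer(
    model=base_model,        # SFTTrainer loads the model and tokenizer from the id
    train_dataset=dataset,
    args=training_args,
)
trainer.train()
```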

Evaluation

Testing Data, Factors & Metrics

Testing Data

https://huggingface.co/datasets/lmma25/sock-shop-logs-test

Metrics

The model was used to detect anomalies on a small sample of execution logs, achieving a precision of 0.77 and a recall of 1.0. Precision and recall were chosen because they capture the model's behavior with respect to false positives and false negatives.
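
For reference, the sketch below shows how precision and recall can be computed from binary anomaly labels with scikit-learn; the labels are illustrative placeholders, and the actual comparison logic lives in eval_llm.py.

```python
# Illustrative precision/recall computation (placeholder labels, not the
# actual evaluation data; 1 = anomalous log chunk, 0 = normal).
from sklearn.metrics import precision_score, recall_score

y_true = [0, 0, 1, 1, 0, 1]  # ground-truth anomaly labels
y_pred = [0, 1, 1, 1, 0, 1]  # model-derived anomaly flags

print("precision:", precision_score(y_true, y_pred))  # TP / (TP + FP)
print("recall:", recall_score(y_true, y_pred))        # TP / (TP + FN)
```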

Results

  • Precision: 0.77
  • Recall: 1.0
