# Multi2ConvAI-Corona: German logistic regression model using fastText embeddings
This model was developed in the Multi2ConvAI project:
- domain: Corona (more details about our use cases: (en, de))
- language: German (de)
- model type: logistic regression
- embeddings: fastText embeddings
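The bullets above describe the approach: a logistic regression classifier over fastText word embeddings. A conceptual sketch (with toy random weights and a toy vocabulary, not the actual trained model or the multi2convai implementation) of how intent classification over averaged fastText vectors works:

```python
# Conceptual sketch only: logistic regression over averaged word embeddings.
# All weights and the vocabulary here are random toy stand-ins.
import numpy as np

rng = np.random.default_rng(0)
embedding_dim, num_intents = 300, 3  # fastText wiki vectors are 300-dimensional
labels = ["corona.masks", "corona.symptoms", "corona.travel"]

vocab = {"muss": 0, "ich": 1, "eine": 2, "maske": 3, "tragen": 4}
embeddings = rng.normal(size=(len(vocab), embedding_dim))  # embedding lookup table
W = rng.normal(size=(num_intents, embedding_dim))          # trained weights (toy)
b = np.zeros(num_intents)

def classify(text: str) -> tuple[str, float]:
    # Average the embeddings of all in-vocabulary tokens into one feature vector.
    tokens = [t for t in text.lower().strip("?").split() if t in vocab]
    features = embeddings[[vocab[t] for t in tokens]].mean(axis=0)
    # Linear layer + softmax = multinomial logistic regression.
    logits = W @ features + b
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return labels[int(probs.argmax())], float(probs.max())

intent, confidence = classify("Muss ich eine Maske tragen?")
```

With real trained weights and a 200k-word embedding table, the same forward pass yields predictions like the `corona.masks` example shown below.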
## How to run
Requires:
- multi2convai
- serialized fastText embeddings (see last section of this readme or these instructions)
### Run with one line of code

After installing multi2convai and making the serialized fastText embeddings available locally, you can run:
```bash
# assumes working dir is the root of the cloned multi2convai repo
python scripts/run_inference.py -m multi2convai-corona-de-logreg-ft

>>> Create pipeline for config: multi2convai-corona-de-logreg-ft.
>>> Created a LogisticRegressionFasttextPipeline for domain: 'corona' and language 'de'.
>>>
>>> Enter your text (type 'stop' to end execution): Muss ich eine Maske tragen?
>>> 'Muss ich eine Maske tragen?' was classified as 'corona.masks' (confidence: 0.8943)
```
### How to run the model using multi2convai

After installing multi2convai and making the serialized fastText embeddings available locally, you can run:
```python
# assumes working dir is the root of the cloned multi2convai repo
from pathlib import Path

from multi2convai.pipelines.inference.base import ClassificationConfig
from multi2convai.pipelines.inference.logistic_regression_fasttext import (
    LogisticRegressionFasttextConfig,
    LogisticRegressionFasttextPipeline,
)

language = "de"
domain = "corona"

# 1. Define paths of model, label dict and embeddings
model_file = "model.pth"
label_dict_file = "label_dict.json"
embedding_path = Path("../models/embeddings/fasttext/de/wiki.200k.de.embed")
vocabulary_path = Path("../models/embeddings/fasttext/de/wiki.200k.de.vocab")

# 2. Create and set up pipeline
model_config = LogisticRegressionFasttextConfig(
    model_file, embedding_path, vocabulary_path
)
config = ClassificationConfig(language, domain, label_dict_file, model_config)
pipeline = LogisticRegressionFasttextPipeline(config)
pipeline.setup()

# 3. Run intent classification on a text of your choice
label = pipeline.run("Muss ich eine Maske tragen?")
label
>>> Label(string='corona.masks', ratio='0.8943')
```
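Judging by the repr above, the returned `Label` exposes the predicted intent string and the confidence ratio. A hypothetical stand-in (the real multi2convai class may differ) showing how such a result could be consumed:

```python
# Hypothetical stand-in for multi2convai's Label, mirroring the repr shown
# above: Label(string='corona.masks', ratio='0.8943'). Not the real class.
from dataclasses import dataclass

@dataclass
class Label:
    string: str  # predicted intent, e.g. "corona.masks"
    ratio: str   # classifier confidence, serialized as a string

label = Label(string="corona.masks", ratio="0.8943")
is_confident = float(label.ratio) > 0.5  # threshold low-confidence predictions
```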
## Download and serialize fastText embeddings
```bash
# assumes working dir is the root of the cloned multi2convai repo
mkdir -p models/fasttext/de
curl https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.de.vec --output models/fasttext/de/wiki.de.vec

python scripts/serialize_fasttext.py -r models/fasttext/de/wiki.de.vec -v models/fasttext/de/wiki.200k.de.vocab -e models/fasttext/de/wiki.200k.de.embed -n 200000
```
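Conceptually, serializing the embeddings means keeping only the first N words of the `.vec` text file (its first line is a `<num_words> <dim>` header) and storing the vocabulary and the vector table separately for fast loading. A hypothetical sketch of that step (the real `scripts/serialize_fasttext.py` and its on-disk formats may differ):

```python
# Hypothetical sketch of fastText serialization: truncate a .vec text file to
# its first n words, writing the vocab as JSON and the vectors via pickle.
# The real multi2convai script and file formats may differ.
import json
import os
import pickle
import tempfile

def serialize_fasttext(vec_path: str, vocab_path: str, embed_path: str, n: int) -> int:
    vocab, vectors = {}, []
    with open(vec_path, encoding="utf-8") as f:
        next(f)  # skip the "<num_words> <dim>" header line
        for i, line in enumerate(f):
            if i >= n:
                break
            word, *values = line.rstrip().split(" ")
            vocab[word] = i
            vectors.append([float(v) for v in values])
    with open(vocab_path, "w", encoding="utf-8") as f:
        json.dump(vocab, f)
    with open(embed_path, "wb") as f:
        pickle.dump(vectors, f)
    return len(vocab)

# Tiny demo with a fake three-word, two-dimensional .vec file:
workdir = tempfile.mkdtemp()
vec_file = os.path.join(workdir, "wiki.de.vec")
with open(vec_file, "w", encoding="utf-8") as f:
    f.write("3 2\nmaske 0.1 0.2\nich 0.3 0.4\nund 0.5 0.6\n")
kept = serialize_fasttext(
    vec_file,
    os.path.join(workdir, "wiki.2.de.vocab"),
    os.path.join(workdir, "wiki.2.de.embed"),
    n=2,
)
```

The split into a `.vocab` and an `.embed` file lets the pipeline map tokens to row indices without loading the full 300-dimensional table as text at startup.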
Further information on Multi2ConvAI: