Inference via the API and Integration with the LM (Language Model)

#1 opened by ridhoalattqas

When I do inference with this model via the API, using this code:

headers = {"Authorization": f"Bearer {API_TOKEN}"}
API_URL = "https://api-inference.huggingface.co/models/ridhoalattqas/xlrs-best-lm"

def query(audio_bytes):
response = requests.request("POST", API_URL, headers=headers, data=audio_bytes)
return json.loads(response.content.decode("utf-8"))

is it already connected with the language model, sir?
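
For reference, this is a minimal way to call the `query` helper above (the `sample.flac` filename is just a placeholder for any local audio file):

```python
# Read a local audio file and send its raw bytes to the API.
# "sample.flac" is a hypothetical example path.
with open("sample.flac", "rb") as f:
    audio_bytes = f.read()

print(query(audio_bytes))
```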


Hello @ridhoalattqas ,

Sorry, I don't fully understand the question here. Do you want to use the model with the Inference API?

I want to use the model with the Inference API without saving the model to my local drive, but when I try it, the language model (the KenLM .arpa file) does not seem to be linked to the inference model.

@Narsil do you know more here by any chance? Can one use the KenLM model in the Inference API?

@ridhoalattqas ,

Sorry for the long delay; the new notification system really helped me see conversations where I was mentioned.

Yes, currently the API should use your KenLM model.
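
If you want to verify that the LM files themselves load, here is a quick sketch (assuming `transformers`, `pyctcdecode`, and `kenlm` are installed):

```python
from transformers import Wav2Vec2ProcessorWithLM

# If this loads without error, the repo's language_model/ folder
# (with the KenLM file) is correctly linked to the processor.
processor = Wav2Vec2ProcessorWithLM.from_pretrained("ridhoalattqas/xlrs-best-lm")
print(type(processor.decoder))  # a pyctcdecode BeamSearchDecoderCTC
```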

If you want to check, just use the latest transformers and do something like `pipe = pipeline(model="ridhoalattqas/xlrs-best-lm")` with kenlm installed. It should work out of the box.
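
In full, that check could look like this (a sketch; `sample.flac` is a placeholder for any local audio file):

```python
from transformers import pipeline

# With kenlm and pyctcdecode installed, the pipeline picks up the
# repo's language_model/ folder and decodes with the LM automatically.
pipe = pipeline(model="ridhoalattqas/xlrs-best-lm")

result = pipe("sample.flac")  # hypothetical local audio file
print(result["text"])
```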
