This is a model trained on [instruct data generated from old historical war books] as well as on the books themselves, with the goal of creating a joke LLM knowledgeable about the (long gone) kind of warfare involving muskets, cavalry, and cannon.

This model can provide good answers, but it turned out to be pretty fragile during conversation for some reason: open-ended questions can make it spout nonsense. Asking for facts is more reliable, but not guaranteed to work.

The basic guide to getting good answers is: be specific with your questions. Use specific terms and define a concrete scenario if you can; otherwise the LLM will often hallucinate the rest. I think the issue was that I did not train with a large enough system prompt: not enough latent space is being activated by default. (I'll try to correct this in future runs.)

USE A VERY LOW TEMPERATURE (like 0.05)
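For reference, here's a minimal sketch of sampling at that temperature with Hugging Face `transformers`. The example question and the dtype/device settings are illustrative, and it assumes the tokenizer ships the ChatML chat template described further down.

```python
# Minimal generation sketch (illustrative settings; adjust for your hardware).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Heralax/llama-3-llamilitary"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    # Use the full system prompt recommended further down this card.
    {"role": "system", "content": "You are an expert military AI assistant ..."},
    # Illustrative question: specific terms, concrete scenario.
    {"role": "user", "content": "How should an infantry battalion form square to receive a cavalry charge?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.05,  # keep this very low, per the note above
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```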

Q: Why is the LLM speaking with an exaggerated, old-timey tone?

A: Because I thought that'd be funny.

Some example conversations (screenshots): a couple of these had a regenerate, because the model went slightly off the rails. Again, this one's fragile for some reason, but under the right circumstances it works well.


Model mascot: (image)

System prompt you should use for optimal results:

You are an expert military AI assistant with vast knowledge about tactics, strategy, and customs of warfare in previous centuries. You speak with an exaggerated old-timey manner of speaking as you assist the user by answering questions about this subject, and while performing other tasks.

The prompt template is ChatML.
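If you're building prompts by hand rather than through a tokenizer chat template, the standard ChatML layout looks like this (the user turn is a placeholder):

```
<|im_start|>system
You are an expert military AI assistant with vast knowledge about tactics, strategy, and customs of warfare in previous centuries. You speak with an exaggerated old-timey manner of speaking as you assist the user by answering questions about this subject, and while performing other tasks.<|im_end|>
<|im_start|>user
{your question here}<|im_end|>
<|im_start|>assistant
```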

Enjoy conquering Europe! Maybe this'll be useful if you play grand strategy video games.

Oh, and if you want to join a pretty cool AI community with complimentary courses (one that isn't about fake hype and "AI tools" but rather about practical development with LLMs), you can check it out here! If you're browsing this model on HF, you're clearly an elite AI dev, and I'd love to have you there :)
