---
license: mit
base_model: meta-llama/Meta-Llama-3-8B
---

## miniCTX: Neural Theorem Proving with (Long-)Contexts

File-tuned context model based on [miniCTX: Neural Theorem Proving with (Long-)Contexts](https://arxiv.org/abs/2408.03350).

- Base language model: Llama 3 8B
- Data: [ntp-mathlib-instruct-context](https://huggingface.co/datasets/l3lab/ntp-mathlib-instruct-context)

The model is finetuned for Lean 4 tactic prediction, given a proof state and an optional file context.

#### Example input

```
/- You are proving a theorem in Lean 4.
You are given the following information:
- The file contents up to the current tactic, inside [CTX]...[/CTX]
- The current proof state, inside [STATE]...[/STATE]

Your task is to generate the next tactic in the proof.
Put the next tactic inside [TAC]...[/TAC]
-/
[CTX]
import Mathlib.Data.Nat.Prime

theorem test_thm (m n : Nat) (h : m.Coprime n) : m.gcd n = 1 := by

[/CTX]
[STATE]
m n : ℕ
h : Nat.Coprime m n
⊢ Nat.gcd m n = 1
[/STATE]
[TAC]
```

#### Example output

```
rw [Nat.Coprime] at h
[/TAC]
```

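The prompt format above can be assembled programmatically. The sketch below shows one way to build the prompt and (commented out) query the model with Hugging Face `transformers`; the `build_prompt` helper is illustrative and not part of the released code, and the exact whitespace around the `[TAC]` tag is an assumption based on the example above.

```python
# Illustrative helper (not part of the released code) for assembling the
# miniCTX prompt format: instruction header, then the file context inside
# [CTX]...[/CTX], the proof state inside [STATE]...[/STATE], and an opening
# [TAC] tag for the model to complete.

INSTRUCTION = """/- You are proving a theorem in Lean 4.
You are given the following information:
- The file contents up to the current tactic, inside [CTX]...[/CTX]
- The current proof state, inside [STATE]...[/STATE]

Your task is to generate the next tactic in the proof.
Put the next tactic inside [TAC]...[/TAC]
-/"""

def build_prompt(file_context: str, proof_state: str) -> str:
    """Wrap a file context and proof state in the model's expected tags."""
    return (
        f"{INSTRUCTION}\n"
        f"[CTX]\n{file_context}\n[/CTX]\n"
        f"[STATE]\n{proof_state}\n[/STATE]\n"
        f"[TAC]\n"
    )

# Reproduce the example input from this card.
context = (
    "import Mathlib.Data.Nat.Prime\n\n"
    "theorem test_thm (m n : Nat) (h : m.Coprime n) : m.gcd n = 1 := by\n"
)
state = "m n : \u2115\nh : Nat.Coprime m n\n\u22a2 Nat.gcd m n = 1"
prompt = build_prompt(context, state)

# Generation sketch (assumes the model weights are available; MODEL_ID is a
# placeholder, substitute this repository's id):
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
# model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
# inputs = tokenizer(prompt, return_tensors="pt")
# out = model.generate(**inputs, max_new_tokens=64)
# text = tokenizer.decode(out[0], skip_special_tokens=True)
# The generated continuation should contain the tactic followed by [/TAC].
```

The tactic can then be extracted from the generated text by taking everything before the first `[/TAC]`.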
#### Citation

Please cite:

```
@misc{hu2024minictx,
  author        = {Jiewen Hu and Thomas Zhu and Sean Welleck},
  title         = {miniCTX: Neural Theorem Proving with (Long-)Contexts},
  year          = {2024},
  eprint        = {2408.03350},
  archivePrefix = {arXiv},
}
```