happyllll committed on
Commit d4538b5 · verified · 1 Parent(s): 42fb6d1

Update README.md

Files changed (1)
  1. README.md +19 -46
README.md CHANGED
@@ -1,57 +1,30 @@
- ---
- library_name: sentence-transformers
- pipeline_tag: sentence-similarity
- tags:
- - sentence-transformers
- - feature-extraction
- - sentence-similarity

- ---

- # {MODEL_NAME}

- This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.

- <!--- Describe your model here -->

- ## Usage (Sentence-Transformers)

- Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
-
- ```
- pip install -U sentence-transformers
- ```
-
- Then you can use the model like this:
-
- ```python
- from sentence_transformers import SentenceTransformer
- sentences = ["This is an example sentence", "Each sentence is converted"]
-
- model = SentenceTransformer('{MODEL_NAME}')
- embeddings = model.encode(sentences)
- print(embeddings)
- ```



- ## Evaluation Results

- <!--- Describe how your model was evaluated -->
-
- For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
-
-
-
- ## Full Model Architecture
  ```
- SentenceTransformer(
- (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
- (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
- (2): Normalize()
- )
- ```
-
- ## Citing & Authors
-
- <!--- Describe where people can find more information -->
 
+ # Model Card: Assisting Mathematical Formalization with A Learning-based Premise Retriever

+ ## Model Description

+ This model is the first version designed for **premise retrieval** in **Lean**, based on the **state representation** of Lean. The model follows the architecture described in the paper:

+ [Assisting Mathematical Formalization with A Learning-based Premise Retriever](https://arxiv.org/abs/2501.13959)

+ The model implementation and code are available at:

+ [GitHub Repository](https://github.com/ruc-ai4math/Premise-Retrieval)

+ [Try our model](https://premise-search.com)



+ ## Citation
+ If you use this model, please cite the following paper:

  ```
+ @misc{tao2025assistingmathematicalformalizationlearningbased,
+   title={Assisting Mathematical Formalization with A Learning-based Premise Retriever},
+   author={Yicheng Tao and Haotian Liu and Shanwen Wang and Hongteng Xu},
+   year={2025},
+   eprint={2501.13959},
+   archivePrefix={arXiv},
+   primaryClass={cs.CL},
+   url={https://arxiv.org/abs/2501.13959},
+ }
+ ```
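The updated card links out to the project rather than keeping the template's usage snippet. As rough orientation only: the removed "Full Model Architecture" block (a BertModel encoder with mean pooling and a Normalize layer) suggests the checkpoint may still load as a plain sentence-transformers model, in which case ranking premises for a Lean proof state could look like the sketch below. The repo id placeholder and the state/premise strings are invented for illustration and are not part of this commit; the linked GitHub repository is the authoritative interface.

```python
# Hedged sketch, not from the commit: assumes the checkpoint remains a standard
# sentence-transformers model (BertModel + mean pooling + Normalize, as in the
# removed "Full Model Architecture" block). Replace <this-repo-id> with the
# actual Hub id; the proof state and premise strings are made-up examples.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("<this-repo-id>")

# A Lean proof state (query) and a few candidate premise statements (corpus).
state = "p : ℕ\nh : Nat.Prime p\n⊢ 2 ≤ p"
premises = [
    "theorem Nat.Prime.two_le {p : ℕ} (hp : p.Prime) : 2 ≤ p",
    "theorem Nat.succ_le_of_lt {n m : ℕ} (h : n < m) : n + 1 ≤ m",
    "theorem Nat.le_refl (n : ℕ) : n ≤ n",
]

# Encode query and corpus; the Normalize module unit-normalizes embeddings,
# so cosine similarity (or dot product) serves as the retrieval score.
state_emb = model.encode(state, convert_to_tensor=True)
premise_embs = model.encode(premises, convert_to_tensor=True)

scores = util.cos_sim(state_emb, premise_embs)[0]
for score, premise in sorted(zip(scores.tolist(), premises), reverse=True):
    print(f"{score:.3f}  {premise}")
```

At corpus scale, pre-encoded premise embeddings in an approximate-nearest-neighbor index would replace the brute-force comparison above; the project's own retrieval pipeline in the linked repository should be preferred over this sketch.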