---
base_model: AI-MO/NuminaMath-7B-TIR
tags:
- alignment-handbook
- generated_from_trainer
- TensorBlock
- GGUF
widget:
- example_title: Math problem
messages:
- role: user
content: For how many values of the constant $k$ will the polynomial $x^{2}+kx+36$
have two distinct integer roots?
output:
text: "### Solution: \n1- For the polynomial \\\\( x^2 + kx + 36 \\\\) to have\
\ two distinct integer roots, let's denote these roots by \\\\( r_1 \\\\) and\
\ \\\\( r_2 \\\\).\n\n\n2- According to Vieta's formulas, the sum of the roots\
\ \\\\( r_1 + r_2 \\\\) is equal to \\\\(-k\\\\), and the product of the roots\
\ \\\\( r_1 \\\\cdot r_2 \\\\) is equal to 36.\n\n\n3- To find the distinct\
\ integer pairs \\\\((r_1, r_2)\\\\) whose product is 36, we need to determine\
\ all pairs of integers \\\\((r_1, r_2)\\\\) such that \\\\( r_1 \\\\cdot r_2\
\ = 36 \\\\) and \\\\( r_1 \\\\neq r_2 \\\\).\n\n\n4- From the equation \\\\\
( r_1 + r_2 = -k \\\\), for each valid pair \\\\((r_1, r_2)\\\\), we can compute\
\ the corresponding value of \\\\( k \\\\).\n\n\n5- Finally, since we need the\
\ polynomial to have two distinct integer roots, we need to ensure that \\\\\
( r_1 \\\\) and \\\\( r_2 \\\\) are distinct.\nLet's start by finding all pairs\
\ \\\\((r_1, r_2)\\\\) such that \\\\( r_1 \\\\cdot r_2 = 36 \\\\). We'll then\
\ determine the values of \\\\( k \\\\) and ensure the roots are distinct.\n\
```python import itertools\n# Find all pairs (r1, r2) such that r1 * r2 = 36\
\ product_36 = 36 factor_pairs = []\nfor i in range(1, product_36 + 1):\n if\
\ product_36 % i == 0:\n pair = (i, product_36 // i)\n if pair[0] != pair[1]:\
\ # Ensure distinct pairs\n factor_pairs.append(pair)\n \n # Calculate\
\ k for each pair and ensure distinct integer roots\n valid_k_values = set()\n\
\ for r1, r2 in factor_pairs:\n if r1 != r2:\n k = -(r1 + r2)\n\
\ valid_k_values.add(k)\n \n print((len(valid_k_values), sorted(valid_k_values)))\n\
\ ```\n \n ```output\n (4, [-37, -20, -15,-13])\n ```\n The distinct integer\
\ values of \\\\( k \\\\) that make the\npolynomial \\\\( x^2 + kx + 36 \\\\\
) have two distinct integer roots are \\\\(-37, -20, -15, \\\\text{and} -13\\\
\\).\nTherefore, the number of such values of \\\\( k \\\\) is:\n[ \\\\boxed{4}\
\ \\\\]"
pipeline_tag: text-generation
license: apache-2.0
model-index:
- name: NuminaMath-7B-TIR
results: []
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## AI-MO/NuminaMath-7B-TIR - GGUF
This repo contains GGUF format model files for [AI-MO/NuminaMath-7B-TIR](https://huggingface.co/AI-MO/NuminaMath-7B-TIR).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
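The compatibility note above refers to llama.cpp commit b4011. If you do not already have a recent build, the following is a minimal sketch of compiling one from source; it assumes `git`, `cmake`, and a C++ toolchain are installed, and the output path reflects the upstream project's default layout rather than anything shipped in this repo.
```shell
# Clone and build llama.cpp (any commit at or after b4011 should work)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release
# The llama-cli binary used in the example further below ends up under build/bin/
```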
## Prompt template
```
### Problem: {prompt}
### Solution:
```
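As an illustration, the template can be supplied directly to llama.cpp's `llama-cli`. This is a rough sketch, not an official invocation: the model filename, token limit, and temperature are assumptions you should adjust to your setup.
```shell
# Run one of the quantized files (see the table below) with the prompt template above
./build/bin/llama-cli -m NuminaMath-7B-TIR-Q4_K_M.gguf -n 512 --temp 0.0 \
  -p '### Problem: For how many values of the constant $k$ will the polynomial $x^{2}+kx+36$ have two distinct integer roots?
### Solution:'
```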
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [NuminaMath-7B-TIR-Q2_K.gguf](https://huggingface.co/tensorblock/NuminaMath-7B-TIR-GGUF/blob/main/NuminaMath-7B-TIR-Q2_K.gguf) | Q2_K | 2.532 GB | smallest, significant quality loss - not recommended for most purposes |
| [NuminaMath-7B-TIR-Q3_K_S.gguf](https://huggingface.co/tensorblock/NuminaMath-7B-TIR-GGUF/blob/main/NuminaMath-7B-TIR-Q3_K_S.gguf) | Q3_K_S | 2.923 GB | very small, high quality loss |
| [NuminaMath-7B-TIR-Q3_K_M.gguf](https://huggingface.co/tensorblock/NuminaMath-7B-TIR-GGUF/blob/main/NuminaMath-7B-TIR-Q3_K_M.gguf) | Q3_K_M | 3.223 GB | very small, high quality loss |
| [NuminaMath-7B-TIR-Q3_K_L.gguf](https://huggingface.co/tensorblock/NuminaMath-7B-TIR-GGUF/blob/main/NuminaMath-7B-TIR-Q3_K_L.gguf) | Q3_K_L | 3.489 GB | small, substantial quality loss |
| [NuminaMath-7B-TIR-Q4_0.gguf](https://huggingface.co/tensorblock/NuminaMath-7B-TIR-GGUF/blob/main/NuminaMath-7B-TIR-Q4_0.gguf) | Q4_0 | 3.725 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [NuminaMath-7B-TIR-Q4_K_S.gguf](https://huggingface.co/tensorblock/NuminaMath-7B-TIR-GGUF/blob/main/NuminaMath-7B-TIR-Q4_K_S.gguf) | Q4_K_S | 3.749 GB | small, greater quality loss |
| [NuminaMath-7B-TIR-Q4_K_M.gguf](https://huggingface.co/tensorblock/NuminaMath-7B-TIR-GGUF/blob/main/NuminaMath-7B-TIR-Q4_K_M.gguf) | Q4_K_M | 3.933 GB | medium, balanced quality - recommended |
| [NuminaMath-7B-TIR-Q5_0.gguf](https://huggingface.co/tensorblock/NuminaMath-7B-TIR-GGUF/blob/main/NuminaMath-7B-TIR-Q5_0.gguf) | Q5_0 | 4.481 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [NuminaMath-7B-TIR-Q5_K_S.gguf](https://huggingface.co/tensorblock/NuminaMath-7B-TIR-GGUF/blob/main/NuminaMath-7B-TIR-Q5_K_S.gguf) | Q5_K_S | 4.481 GB | large, low quality loss - recommended |
| [NuminaMath-7B-TIR-Q5_K_M.gguf](https://huggingface.co/tensorblock/NuminaMath-7B-TIR-GGUF/blob/main/NuminaMath-7B-TIR-Q5_K_M.gguf) | Q5_K_M | 4.588 GB | large, very low quality loss - recommended |
| [NuminaMath-7B-TIR-Q6_K.gguf](https://huggingface.co/tensorblock/NuminaMath-7B-TIR-GGUF/blob/main/NuminaMath-7B-TIR-Q6_K.gguf) | Q6_K | 5.284 GB | very large, extremely low quality loss |
| [NuminaMath-7B-TIR-Q8_0.gguf](https://huggingface.co/tensorblock/NuminaMath-7B-TIR-GGUF/blob/main/NuminaMath-7B-TIR-Q8_0.gguf) | Q8_0 | 6.842 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/NuminaMath-7B-TIR-GGUF --include "NuminaMath-7B-TIR-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
To download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), run:
```shell
huggingface-cli download tensorblock/NuminaMath-7B-TIR-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
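Alternatively, you can fetch the entire repository in one command; per the table above, this amounts to roughly 49 GB in total:
```shell
huggingface-cli download tensorblock/NuminaMath-7B-TIR-GGUF --local-dir MY_LOCAL_DIR
```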