---
license: mit
language:
- en
pipeline_tag: text-generation
tags:
- astronomy
- science
- LLM
---

# AstroLLaMA.gguf

[AstroLLaMA](https://huggingface.co/universeTBD/astrollama) in GPT-Generated Unified Format.

![](https://huggingface.co/universeTBD/astrollama/resolve/main/images/astrollama-logo.png)

## What's GPT-Generated Unified Format?

GGUF is a file format for storing models for inference, particularly in the context of language models like GPT. GGUF models **can be run on CPUs**, which makes them accessible to a wider range of users.

A more detailed description can be found [here](https://medium.com/@phillipgimmi/what-is-gguf-and-ggml-e364834d241c).
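As a concrete illustration of the format, every GGUF file begins with the 4-byte ASCII magic `GGUF` followed by a little-endian `uint32` version number. The minimal Python sketch below checks those two fields on an in-memory header; to inspect a real download, you would read the first 8 bytes of `astrollama.gguf` instead (the function name is just an illustration):

```python
import struct

def check_gguf_header(header: bytes) -> int:
    """Return the GGUF format version, or raise if the magic bytes are wrong."""
    # The file must start with the ASCII magic "GGUF".
    if header[:4] != b"GGUF":
        raise ValueError("not a GGUF file")
    # The next 4 bytes are a little-endian uint32 version number.
    (version,) = struct.unpack("<I", header[4:8])
    return version

# Example with a synthetic version-3 header:
print(check_gguf_header(b"GGUF" + struct.pack("<I", 3)))  # -> 3
```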

## How to play with AstroLLaMA on your laptop?

1. Install [ollama](https://github.com/jmorganca/ollama);
2. Download `astrollama.gguf` to your PC;
3. Create a file named `Modelfile` and add a `FROM` instruction pointing to the local file path of AstroLLaMA, e.g.,
```
FROM ./astrollama.gguf
```
4. Create the model in Ollama:
```
ollama create astrollama -f path_to_modelfile
```
5. Run AstroLLaMA locally:
```
ollama run astrollama
```