// llama.cpp/ggml/src/ggml-cpu/ggml-cpu-hbm.h
#pragma once

#include "ggml-backend.h"
#include "ggml.h"

// GGML CPU internal header

// Returns the CPU backend buffer type for buffers allocated in
// high-bandwidth memory (HBM).
ggml_backend_buffer_type_t ggml_backend_cpu_hbm_buffer_type(void);
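
// ---------------------------------------------------------------------------
// Hedged usage sketch (not part of the original header): shows how a caller
// might allocate a buffer from this buffer type through the generic
// ggml-backend buffer API declared in "ggml-backend.h". It assumes the build
// actually provides the HBM buffer type; the 16 MiB size and the function
// name example_alloc_hbm are illustrative only.
// ---------------------------------------------------------------------------
#include <stdio.h>

static void example_alloc_hbm(void) {
    // Query the CPU HBM buffer type declared above.
    ggml_backend_buffer_type_t buft = ggml_backend_cpu_hbm_buffer_type();

    // Allocate a 16 MiB buffer backed by high-bandwidth memory.
    ggml_backend_buffer_t buf = ggml_backend_buft_alloc_buffer(buft, 16 * 1024 * 1024);
    if (buf == NULL) {
        fprintf(stderr, "HBM buffer allocation failed\n");
        return;
    }

    printf("allocated %zu bytes\n", ggml_backend_buffer_get_size(buf));

    // Release the buffer when done.
    ggml_backend_buffer_free(buf);
}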