@city96 city96 commented Apr 14, 2025

Heavily WIP, goal is to have a working base for a gguf-my-image-model huggingface space, similar to gguf-my-repo.

Currently this consists of a `tool_auto.py` script located in the `tools` folder, which attempts to build llama.cpp from source (requires a working cmake and build setup) and convert the model passed via `--src`. It can also run as a standalone script, and will clone ComfyUI-GGUF if run in an empty directory. Temp file and output locations are controlled via `--output-dir` and `--temp-dir`; quants can be selected with `--quants base, q8_0, q5_K_M, q4_0`, where `base` decides whether the bf16/fp16 temp file is kept.
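For reference, the quant-selection behavior described above could be sketched roughly like this (a hypothetical illustration, not the actual script: the function name, the known-quant set, and the return shape are all assumptions for the sake of the example):

```python
# Assumed set of supported quant types, taken from the PR description.
KNOWN_QUANTS = {"q8_0", "q5_K_M", "q4_0"}

def plan_outputs(quants):
    """Hypothetical helper: given the --quants list, decide whether the
    bf16/fp16 temp file is kept ("base") and which quants to produce."""
    requested = set(quants)
    unknown = requested - KNOWN_QUANTS - {"base"}
    if unknown:
        raise ValueError(f"unsupported quants: {sorted(unknown)}")
    keep_base = "base" in requested  # "base" keeps the bf16/fp16 temp file
    targets = sorted(requested & KNOWN_QUANTS)
    return keep_base, targets

# e.g. plan_outputs(["base", "q8_0", "q4_0"]) -> (True, ["q4_0", "q8_0"])
```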
