HN Mail
CPP
Show HN: Llama.cpp Tutorial 2026: Run GGUF Models Locally on CPU and GPU
12 points
|
2 comments
Llama.cpp Robot Wars
4 points
|
1 comment
Cppreference.com Update
3 points
|
0 comments
Local ML inference benchmark: PyTorch vs. llama.cpp vs. the Rust ecosystem
2 points
|
1 comment
Inferena: Local benchmark of PyTorch vs. Llama.cpp vs. Rust frameworks
2 points
|
0 comments
One-command local AI stack setup for Ubuntu (CUDA, Ollama, llama.cpp, chat UIs)
1 point
|
0 comments
Local Model Router: Ollama/OpenAI-compat bridges for local LLMs via llama.cpp
1 point
|
0 comments