HN Mail
CPP
Show HN: Llama.cpp Tutorial 2026: Run GGUF Models Locally on CPU and GPU
10 points | 2 comments
Llama.cpp Robot Wars
4 points | 1 comment
Intel Releases OpenVINO 2026.1 with Back End for Llama.cpp, New Hardware Support
4 points | 0 comments
Cppreference.com Update
3 points | 0 comments
Inferena: Local benchmark of PyTorch vs. Llama.cpp vs. Rust frameworks
2 points | 0 comments
Show HN: How to Use Google's Extreme AI Compression with Ollama and Llama.cpp
2 points | 0 comments
One-command local AI stack setup for Ubuntu (CUDA, Ollama, llama.cpp, chat UIs)
1 point | 0 comments