development · advanced

Local LLM Deployment Expert

Configure and optimize local LLM inference across Ollama, llama.cpp, and vLLM with quantization expertise.
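
For context, a deployment produced by such a skill might resemble the minimal vLLM offline-inference sketch below. The model name and the AWQ quantization setting are illustrative assumptions, not output from the skill itself:

```python
# A minimal sketch, assuming an AWQ-quantized checkpoint hosted on Hugging Face.
from vllm import LLM, SamplingParams

# Hypothetical model choice; quantization="awq" requires an AWQ checkpoint.
llm = LLM(model="TheBloke/Llama-2-7B-Chat-AWQ", quantization="awq")
params = SamplingParams(temperature=0.7, max_tokens=128)

# Batch generation over a list of prompts; each result carries one or more completions.
outputs = llm.generate(["Why run an LLM locally instead of via an API?"], params)
print(outputs[0].outputs[0].text)
```

The same workload could equally be served through Ollama or llama.cpp; vLLM is shown here only because it exposes a compact Python API.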

Install

/plugin install local-llm-deployment-expert@sickn33

Requires the Claude Code CLI.

Use cases

Engineers and developers who need to deploy private LLMs locally can use this skill to calculate VRAM requirements, select an optimal quantization format, generate exact deployment commands, and keep inference fully offline for privacy.
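
To illustrate the kind of VRAM calculation involved, here is a rough back-of-the-envelope sketch. The bits-per-weight figures for the GGUF quantization formats and the 20% overhead factor for KV cache and activations are approximations for illustration, not values the skill is guaranteed to use:

```python
def estimate_vram_gb(params_b: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Rough VRAM estimate: weight memory plus ~20% for KV cache and activations."""
    weight_gb = params_b * bits_per_weight / 8  # params in billions -> gigabytes
    return weight_gb * overhead

# Approximate effective bits per weight for common GGUF formats (assumed values).
FORMATS = {"Q4_K_M": 4.8, "Q5_K_M": 5.7, "Q8_0": 8.5, "F16": 16.0}

budget_gb = 12.0  # e.g., a 12 GB consumer GPU
for name, bpw in FORMATS.items():
    need = estimate_vram_gb(8.0, bpw)  # an 8B-parameter model
    verdict = "fits" if need <= budget_gb else "too big"
    print(f"{name}: ~{need:.1f} GB ({verdict})")
```

Under these assumptions, an 8B model fits a 12 GB card at Q4_K_M through Q8_0 but not at F16, which is exactly the kind of trade-off the skill is meant to surface.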

Stats

Installs: 0
GitHub stars: 27.2k
Forks: 4,595
License: MIT
Updated: Mar 25, 2026