Deployment Guide
Run Gemma 4 locally on your own hardware, with deployment options ranging from one-click installers to production-grade serving frameworks.
Ollama
The simplest way to run Gemma 4 locally. One command to download and serve any variant with automatic hardware optimization.
Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
Run Model
# Gemma 4 31B (Dense) - highest performance
ollama run gemma4:31b
# Gemma 4 26B (MoE) - efficiency first
ollama run gemma4:26b
# Gemma 4 E4B - mobile/lightweight
ollama run gemma4:e4b
# Gemma 4 E2B - edge devices
ollama run gemma4:e2b
LM Studio
Desktop application with a visual interface for downloading, configuring, and chatting with Gemma 4 models. Great for beginners.
- Download LM Studio from lmstudio.ai
- Search for "Gemma 4" in the model browser
- Select a quantized version matching your VRAM
- Click Download and wait for completion
- Start chatting in the built-in interface
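Beyond the built-in chat interface, LM Studio can also expose a local OpenAI-compatible server (by default at http://localhost:1234/v1, enabled from its server/developer view). The sketch below uses only the Python standard library to build and send a chat-completion request; the model name is a placeholder, since LM Studio assigns its own identifier to each downloaded build.

```python
import json
import urllib.request


def chat_payload(model: str, user_msg: str, temperature: float = 0.7) -> dict:
    """Build an OpenAI-style chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_msg}],
        "temperature": temperature,
    }


def ask(base_url: str, model: str, user_msg: str) -> str:
    """POST a chat request to a local OpenAI-compatible server, return the reply text."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(chat_payload(model, user_msg)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # "gemma-4-31b" is a placeholder; use the model name shown in LM Studio.
    print(ask("http://localhost:1234/v1", "gemma-4-31b", "Hello"))
```

The same request shape works against any OpenAI-compatible endpoint, including a local vLLM server.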
vLLM
High-throughput production serving engine with PagedAttention, continuous batching, and OpenAI-compatible API endpoints.
pip install vllm
vllm serve google/gemma-4-31b --max-model-len 32768
llama.cpp
Optimized C++ inference engine supporting GGUF quantized models. Run Gemma 4 on CPU or mixed CPU/GPU configurations.
# Build llama.cpp
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp && cmake -B build && cmake --build build
# Run with GGUF model
./build/bin/llama-cli -m gemma-4-31b-Q4_K_M.gguf -p "Hello"
MLX
Apple's native framework for Apple Silicon. Optimized for M-series chips with unified memory, delivering excellent performance on Mac hardware.
pip install mlx-lm
mlx_lm.generate --model google/gemma-4-31b --prompt "Hello"
VRAM Requirements
Estimated VRAM usage for each model variant at different quantization levels.
| Model | BF16 | INT8 | INT4 |
|---|---|---|---|
| E2B | 4 GB | 2.5 GB | 1.5 GB |
| E4B | 8 GB | 5 GB | 3 GB |
| 26B MoE | 52 GB | 28 GB | 16 GB |
| 31B Dense | 62 GB | 33 GB | 18 GB |
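The table's figures follow roughly from parameter count times bytes per weight. A back-of-the-envelope sketch (weights only; real usage adds activations and KV cache, which is why the table's INT8/INT4 numbers run a few GB higher than the raw weight size):

```python
def weight_memory_gb(params_billion: float, bits_per_weight: int) -> float:
    """Memory needed just to hold the weights, in GB (1 GB = 1e9 bytes)."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9


# Weights-only figures for the 31B dense model:
for name, bits in [("BF16", 16), ("INT8", 8), ("INT4", 4)]:
    print(f"{name}: {weight_memory_gb(31, bits):.1f} GB")
# prints:
# BF16: 62.0 GB
# INT8: 31.0 GB
# INT4: 15.5 GB
```

The BF16 result matches the table exactly; the quantized entries include extra headroom for runtime buffers.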
Download Models
Get Gemma 4 model weights from official sources.
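Assuming the weights are published on the Hugging Face Hub under the google/gemma-4-* naming used throughout this guide, `huggingface_hub` can fetch them programmatically. The repo ids below are taken from this guide, not confirmed listings; check the official model cards for exact names, and note that gated models require `huggingface-cli login` first.

```python
def gemma4_repo(variant: str) -> str:
    """Map a variant name ("31b", "26b", "e4b", "e2b") to the repo id used in this guide."""
    return f"google/gemma-4-{variant}"


if __name__ == "__main__":
    # Deferred import so the helper works without huggingface_hub installed.
    from huggingface_hub import snapshot_download

    # Downloads all files for the 31B dense model into the local HF cache.
    path = snapshot_download(gemma4_repo("31b"))
    print(path)
```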