7 Steps to Build Your Own Private AI Image Generator with Docker Model Runner and Open WebUI
We've all been there: you need to generate a few images for a project, you fire up an AI image service, and suddenly you're wondering what happens to your prompts, how many credits you have left, or why that “safe content” filter rejected your perfectly reasonable request for a dragon wearing a business suit. What if you could skip all of that and run the whole thing on your own machine, with a slick chat UI on top? That's exactly what Docker Model Runner now makes possible. With a couple of commands you can pull an image-generation model, connect it to Open WebUI, and start generating images right from a chat interface—fully local, fully private, fully yours. Let's explore the seven things you need to know to get started.
- What Docker Model Runner Does
- Hardware and Software Requirements
- Pulling an Image Generation Model
- Understanding the DDUF Format
- Launching Open WebUI
- How the Integration Works
- Key Benefits of Going Local
1. What Docker Model Runner Does
Docker Model Runner acts as the control plane for your local AI image generation. It downloads the model, manages the inference back-end lifecycle, and exposes a 100% OpenAI-compatible API—including the POST /v1/images/generations endpoint that Open WebUI already knows how to talk to. Essentially, it turns your machine into a private DALL-E clone, no cloud subscription required. You run a couple of commands, and you get a fully functional image generation service that speaks the same API as popular cloud services.
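To make that concrete, here is a sketch of the request a client (such as Open WebUI) sends to that endpoint, written out with Python's standard library. The port and path below are assumptions based on Docker Model Runner's host-side TCP defaults; check your own setup for the actual endpoint.

```python
# Sketch of an images/generations request. The base URL and port are
# assumptions -- adjust them to match your local Docker Model Runner setup.
import json
import urllib.request

payload = json.dumps({
    "model": "stable-diffusion",            # the model pulled in step 3
    "prompt": "a dragon wearing a business suit",
    "n": 1,
    "response_format": "b64_json",          # ask for base64-encoded image data
}).encode()

req = urllib.request.Request(
    "http://localhost:12434/engines/v1/images/generations",  # assumed endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
)

# Sending it requires Docker Model Runner to be running locally:
# with urllib.request.urlopen(req) as r:
#     images = json.load(r)["data"]
print(req.full_url)
```

Because this is the same request shape cloud services accept, any OpenAI-compatible client library can point at the local endpoint by swapping the base URL.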

2. Hardware and Software Requirements
Before you dive in, make sure you have the right setup. You'll need Docker Desktop (on macOS or Windows) or Docker Engine (on Linux). Plan on at least 8 GB of free RAM (more is better for larger models). A GPU is optional but highly recommended: NVIDIA CUDA on Windows/Linux, Apple Silicon (MPS) on macOS, with a CPU fallback if neither is available. To verify everything is ready, run `docker model version` in your terminal; if it returns without errors, you're good to go.
3. Pulling an Image Generation Model
Docker Model Runner distributes image generation models through Docker Hub using a compact packaging format called DDUF (Diffusers Unified Format). To pull the Stable Diffusion model, run:

```shell
docker model pull stable-diffusion
```

This downloads the model and stores it locally. You can confirm it's ready with:

```shell
docker model inspect stable-diffusion
```

The output shows the model ID, format (diffusers), size (e.g., 6.94 GB), and the DDUF file inside. Once pulled, the model is cached and can be used offline.
4. Understanding the DDUF Format
Behind the scenes, the model is stored as a DDUF (Diffusers Unified Format) file—a single-file container that bundles all components of a diffusion model: the text encoder, VAE, UNet/DiT, and scheduler configuration. This portable artifact makes distribution and runtime unpacking seamless. Docker Model Runner knows how to unpack the DDUF at inference time, so you don't have to worry about managing multiple files or dependencies. It's like a self-contained AI model in a box.
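A DDUF file is built on the ordinary ZIP format (with entries stored uncompressed), so you can picture its layout with standard zip tooling. The sketch below builds a toy archive with a DDUF-like directory structure; the file names are illustrative, not taken from a real model.

```python
# Sketch: DDUF is ZIP-based, so zipfile can model its layout. This builds a
# toy archive mimicking the components named above (text encoder, VAE,
# UNet, scheduler); the entry names are illustrative placeholders.
import io
import zipfile

buf = io.BytesIO()
# DDUF entries are stored uncompressed (ZIP_STORED) so they can be read
# in place without a decompression pass.
with zipfile.ZipFile(buf, "w", compression=zipfile.ZIP_STORED) as z:
    for name in (
        "model_index.json",
        "text_encoder/model.safetensors",
        "vae/model.safetensors",
        "unet/model.safetensors",
        "scheduler/scheduler_config.json",
    ):
        z.writestr(name, b"{}")

# Reopen the archive and list its top-level components.
with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as z:
    components = sorted({n.split("/")[0] for n in z.namelist()})

print(components)
```

This single-archive layout is what lets Docker Model Runner ship one artifact and unpack the pieces it needs at inference time.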

5. Launching Open WebUI
Here's the magic trick: Docker Model Runner has a built-in launch command that automatically wires up Open WebUI against your local inference endpoint. Just run:

```shell
docker model launch openwebui
```

That's it. The command knows exactly how to configure the connection, so you don't need to set API keys or endpoints by hand. Open WebUI opens in your browser, ready to accept image generation prompts. Type a prompt and watch the results come back, powered entirely by your local machine.
6. How the Integration Works
The integration is seamless because of the shared API standard. Docker Model Runner exposes an OpenAI-compatible API, including the /v1/images/generations endpoint. Open WebUI is designed to talk to exactly that API. The pipeline works like this: you type a prompt in Open WebUI, it sends an API request to the local endpoint, Docker Model Runner runs inference on your machine, and returns the generated image. All data stays local—no internet required after the initial model download.
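To make the round trip concrete, here is the last step of that pipeline: in the OpenAI images API shape, each generated image arrives as a base64 string that the client decodes back into raw image bytes. The response below is a hand-built stand-in, not real Docker Model Runner output.

```python
# Sketch of what a client does with the /v1/images/generations reply.
# The response dict is a stand-in mimicking the OpenAI API shape.
import base64

fake_png = b"\x89PNG\r\n\x1a\n"  # PNG magic bytes standing in for a full image
response = {"data": [{"b64_json": base64.b64encode(fake_png).decode()}]}

# Decode the base64 payload back into raw image bytes, ready to display
# or write to disk.
image_bytes = base64.b64decode(response["data"][0]["b64_json"])
print(image_bytes.startswith(b"\x89PNG"))
```

Since the image travels as plain base64 inside a JSON body over localhost, nothing in the pipeline ever needs to reach the network.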
7. Key Benefits of Going Local
Running your own AI image generator locally gives you full privacy: your prompts never leave your computer. No cloud credits to manage, no content filters blocking your creative ideas (unless you want them), and no subscription fees. With a GPU, performance can rival cloud services. You also gain complete control over the model and inference parameters. It's a powerful setup for developers, designers, and anyone who wants to experiment without limitations.
In summary, Docker Model Runner together with Open WebUI provides an easy, private way to generate images on your own hardware. Follow these steps, and you'll have your personal image generator up and running in minutes. No cloud, no cost, no compromises.