Mon 11 August 2025
You don’t need a paid API or a farm of GPUs to make a demo look real. Let’s turn placeholders into useful pictures with a small local server and a couple of dead‑simple calls. See it in action here: Bob’s Widgets demo.
What we’re using
- ImageAIServer — a local, privacy‑first image/vision server with an OpenAI‑compatible images endpoint.
- Your app — React, Vue, Django, whatever. If it can call HTTP, it can make pictures.
Why local?
- No rate limits or surprise bills.
- Works offline; sensitive data stays on your machine.
- CPU mode for any box, GPU mode for quality/speed when you have it.
Quick start (pick one)
Option A — CPU/Any Machine (ONNX)
:::bash
pip install imageaiserver
imageaiserver
open http://localhost:8001
:::
Option B — GPU/NVIDIA or Apple Silicon (PyTorch)
:::bash
pip install imageaiserver[torch]
imageaiserver
# SDXL/FLUX models will be available
:::
Option C — Docker
:::bash
docker run -p 8001:8001 imageaiserver:latest
:::
Once it’s running, hit http://localhost:8001 in your browser to test a prompt. The server also exposes an images API at /v1/images/generations.
Minimal integration example
cURL
:::bash
curl -X POST http://localhost:8001/v1/images/generations \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "a serene mountain landscape",
    "model": "sd15-onnx"
  }'
:::
Python
:::python
import base64, requests

resp = requests.post(
    "http://localhost:8001/v1/images/generations",
    json={"prompt": "cute robot in a garden", "model": "sd15-onnx"},
)
img_b64 = resp.json()["data"][0]["b64_json"]
with open("robot.png", "wb") as f:
    f.write(base64.b64decode(img_b64))
:::
Frontend (TypeScript)
:::ts
async function generateImage(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:8001/v1/images/generations", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt, model: "sd15-onnx" })
  });
  const { data } = await res.json();
  // Return a data URL usable directly in an <img> src attribute
  return `data:image/png;base64,${data[0].b64_json}`;
}
:::
From placeholders to a real catalog
Here’s a tiny batch script that generates thumbnails for a mock product list. Swap the prompts for your domain and you’re done.
:::python
import base64, os, requests

products = [
    {"id": 1, "name": "Precision widget", "prompt": "macro photo of a brushed aluminum precision widget on a neutral background"},
    {"id": 2, "name": "Outdoor widget", "prompt": "rugged outdoor widget on a weathered wood table, soft daylight"},
    # ...add 50–500 as needed
]

os.makedirs("public/images", exist_ok=True)

for p in products:
    r = requests.post(
        "http://localhost:8001/v1/images/generations",
        json={"prompt": p["prompt"], "model": "sd15-onnx"},
    )
    img_path = f"public/images/widget_{p['id']}.png"
    with open(img_path, "wb") as f:
        f.write(base64.b64decode(r.json()["data"][0]["b64_json"]))
    print("wrote", img_path)
:::
Wire those paths into your seed data and your UI suddenly looks like a real store. For a live example of that idea, open the Bob’s Widgets demo.
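As a sketch of that wiring (the `seed.json` filename and the `/images/...` URL prefix are illustrative; match whatever seed format your app already uses):

```python
import json

# Same mock product list as the batch script above, minus the prompts
products = [
    {"id": 1, "name": "Precision widget"},
    {"id": 2, "name": "Outdoor widget"},
]

# Point each product at the thumbnail the batch script wrote
seed = [{**p, "image": f"/images/widget_{p['id']}.png"} for p in products]

with open("seed.json", "w") as f:
    json.dump(seed, f, indent=2)
```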
Picking a model
If you’re on CPU, start with `sd15-onnx` (INT8, ~500 MB RAM). On a GPU, use `sdxl` for quality, `sdxl-turbo` for speed, or `flux1-schnell` if you have the VRAM.
| Model | Memory (approx.) | Hardware | Quality | Speed |
|---|---|---|---|---|
| SD 1.5 ONNX INT8 | ~500 MB | CPU | Good | Fast |
| SDXL | ~8 GB | GPU | Excellent | Medium |
| SDXL Turbo | ~8 GB | GPU | Very good | Very fast |
| FLUX.1 Schnell | ~12 GB | GPU | Excellent | Fast |
Tip: you can switch models per‑request via the `model` field. Keep your code the same and adapt quality/speed on the fly.
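A minimal sketch of that tip — the helper name and the `fast` flag are my own, and the model names come from the table above; adjust them to whatever your server actually lists:

```python
def generation_payload(prompt: str, fast: bool = False) -> dict:
    """Build the same request body either way; only the model field changes."""
    model = "sdxl-turbo" if fast else "sdxl"
    return {"prompt": prompt, "model": model}

# Same calling code, different quality/speed trade-off
draft = generation_payload("hero image of a widget", fast=True)
final = generation_payload("hero image of a widget")
```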
Production notes
- Determinism: if you need repeatable results, add a `seed` field.
- File naming: hash prompts or store IDs to avoid overwriting.
- Caching: serve generated images via your CDN once baked.
- Access: keep the generation endpoint private; ship baked assets publicly.
What you get for doing this
- Realistic demos that sell your UI/UX.
- Faster product iteration (no waiting on stock images or approvals).
- Zero marginal cost at prototype scale.
If you want to go deeper, the repository has install options (CPU/ONNX, GPU/Torch), a simple CLI (`imageai-generate`), and Docker examples. It’s all local‑first and friendly to script.
Repo: https://github.com/imgailab/imgai-server
PS: If you like seeing this idea outside of a blog post, that’s exactly what the Bob’s demo is meant to show—what “good enough to evaluate” looks like without spending money or leaking data.