Build an Admin Portal Before You Need One
LLM-powered demos often impress in development but fail in production, not because of the AI, but because the system around it isn’t built for real use.
Today’s tip: Create a lightweight admin portal early in your project… before things go wrong.
Why This Matters
When you’re developing an AI automation, it’s tempting to put off building admin tools. After all, your app is “just a demo.” But once it moves into production, small tasks like restarting containers, checking system health, or viewing logs quickly become urgent.
Without a centralized place to do those things, you’re left SSHing into containers, tailing logs manually, or relying on brittle workarounds. That doesn’t scale, and it certainly doesn’t help when something breaks during a customer pilot.
What to Do
Start with a simple admin portal that includes just enough to:
- Launch or restart apps (e.g. N8N, OpenWebUI, Ollama, Graylog)
- View system status (e.g. is Postgres online? Is memory maxing out?)
- View logs (from a file or log aggregator like Graylog)
You don’t need a polished UI. A basic Flask app or Node.js server with a dashboard and a few buttons is enough to make your life easier and to give stakeholders visibility and control.
Production Tip
Even if your AI automation is just a proof of concept, build a basic admin portal early. You’ll need it sooner than you think.
Code Example
Below is a minimal Flask-based admin dashboard that shows the status of each service and lets you restart the stack with Docker Compose.
from flask import Flask, render_template_string
import html
import subprocess
import os

app = Flask(__name__)

# Minimal dashboard template: service status, a restart button, and a link to the logs.
TEMPLATE = """
<h2>AI Automation Admin Panel</h2>
<p>Status:</p>
<ul>
  <li>OpenWebUI: {{ status['openwebui'] }}</li>
  <li>N8N: {{ status['n8n'] }}</li>
  <li>Ollama: {{ status['ollama'] }}</li>
  <li>Postgres: {{ status['postgres'] }}</li>
</ul>
<form method="post" action="/restart">
  <button type="submit">Restart All Services</button>
</form>
<p><a href="/logs">View Logs</a></p>
"""

def check_status(service):
    # Ask Docker whether the named container is running ("true"/"false").
    result = subprocess.run(
        ["docker", "inspect", "-f", "{{.State.Running}}", service],
        stdout=subprocess.PIPE,
        stderr=subprocess.DEVNULL,
        text=True,
    )
    return result.stdout.strip() if result.returncode == 0 else "unknown"

@app.route("/")
def index():
    services = ["openwebui", "n8n", "ollama", "postgres"]
    status = {s: check_status(s) for s in services}
    return render_template_string(TEMPLATE, status=status)

@app.route("/restart", methods=["POST"])
def restart():
    # Assumes the portal runs from the directory containing your docker-compose file.
    os.system("docker compose restart")
    return "<p>Restarted containers. <a href='/'>Go back</a></p>"

@app.route("/logs")
def logs():
    # Point this at wherever your automation writes its log file.
    with open("/var/log/llm-automation.log") as f:
        content = f.read()
    # Escape the log text so it renders safely inside <pre>.
    return f"<pre>{html.escape(content)}</pre>"
Run this with:
FLASK_APP=admin_portal.py flask run
Example Use Cases
Here’s how this would support the stack mentioned in earlier posts:
- OpenWebUI crashes under high load? Restart it from the portal.
- Ollama stops responding to LLM queries? Check the status instantly (see the liveness-check sketch after this list).
- N8N fails a workflow silently? Click through logs directly from the UI.
- Postgres latency rising? Spot it before it breaks other services.
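One thing docker inspect cannot tell you is whether the service inside a running container is actually answering requests. A lightweight HTTP liveness probe covers that gap. The sketch below relies on some assumptions: Ollama and N8N are probed on their default ports (11434 and 5678), while the OpenWebUI URL is purely illustrative; adjust all of them to match your own docker-compose port mappings.

# liveness_check.py - application-level liveness probe (sketch; adjust URLs to your stack)
import urllib.error
import urllib.request

# Assumed port mappings: Ollama and n8n use their defaults here; the OpenWebUI
# mapping is illustrative and depends on your docker-compose configuration.
ENDPOINTS = {
    "ollama": "http://localhost:11434/",
    "n8n": "http://localhost:5678/",
    "openwebui": "http://localhost:3000/",
}

def is_responding(url, timeout=3):
    """Return True if the service answers an HTTP request within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except urllib.error.HTTPError:
        # The service answered, just not with a 2xx status; it is at least alive.
        return True
    except OSError:
        # Connection refused, DNS failure, or timeout: treat as not responding.
        return False

if __name__ == "__main__":
    for name, url in ENDPOINTS.items():
        print(f"{name}: {'responding' if is_responding(url) else 'not responding'}")

You could fold is_responding into the portal’s index route next to check_status, so the dashboard shows both “container running” and “service responding.”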
Going Further
Once you have the basics working, consider the following (the first three are sketched after this list):
- Adding basic auth to secure your portal
- Using Docker’s API instead of os.system
- Displaying resource usage (CPU, memory) with psutil
- Embedding logs from a remote aggregator (e.g. Graylog query API)
- Triggering health checks or test runs from the UI
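Here is a minimal sketch of the first three ideas, shown as a standalone Flask app for clarity rather than merged into the portal above. It assumes the docker and psutil packages are installed (pip install docker psutil), and the ADMIN_USER / ADMIN_PASSWORD environment variables are illustrative, not something the original stack defines.

# admin_portal_extras.py - sketch of basic auth, Docker SDK, and psutil metrics
import os

import docker                     # Docker SDK for Python, replaces os.system calls
import psutil                     # cross-platform CPU/memory metrics
from docker.errors import NotFound
from flask import Flask, Response, jsonify, request

app = Flask(__name__)

# Hypothetical credentials pulled from the environment; use a proper secret store in production.
ADMIN_USER = os.environ.get("ADMIN_USER", "admin")
ADMIN_PASSWORD = os.environ.get("ADMIN_PASSWORD", "change-me")

# Connects to the local Docker daemon via the standard environment settings.
docker_client = docker.from_env()

@app.before_request
def require_basic_auth():
    """Reject any request that does not carry the expected Basic Auth credentials."""
    auth = request.authorization
    if not auth or auth.username != ADMIN_USER or auth.password != ADMIN_PASSWORD:
        return Response("Authentication required", 401,
                        {"WWW-Authenticate": 'Basic realm="admin portal"'})

@app.route("/metrics")
def metrics():
    """Report host CPU and memory usage via psutil."""
    return jsonify({
        "cpu_percent": psutil.cpu_percent(interval=0.5),
        "memory_percent": psutil.virtual_memory().percent,
    })

@app.route("/restart/<name>", methods=["POST"])
def restart_container(name):
    """Restart a single container through the Docker API instead of shelling out."""
    try:
        docker_client.containers.get(name).restart()
        return jsonify({"restarted": name})
    except NotFound:
        return jsonify({"error": f"no container named {name}"}), 404

Going through the Docker SDK instead of os.system gives you real error handling, for example a clean 404 when a container name is wrong, rather than silently ignored shell failures.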
Final Thought
The best time to build an admin portal is before you need one. The second-best time is when you’re debugging at 2am. Build it now.
Creating a simple admin portal that can scale with your AI automation is another step toward making your project production-ready!