Hitsukaya DevOps: How to Build a High-Performance and Scalable Web Runtime on a Modern VPS
Introduction
In modern web development, frameworks like Next.js or NestJS are very popular, but on self-managed servers their performance is not always optimal. In this article I show how I built a custom runtime, DoragonPHP, which combines the simplicity of PHP with the power of DevOps, and how I apply the same principles to Next.js, all on my own servers.
The Problem
Next.js and SSR are great, but:
Native cache exists only via SSR / ISR → limited on self-managed servers.
React SSR consumes a lot of CPU and RAM.
V8 garbage collection can cause unexpected spikes.
No per-route optimization without an external layer (Redis, Nginx caching).
Traditional PHP is simple, but DoragonPHP brings:
Tunable workers
Configurable cache (file, memory, Redis)
Unix socket for minimal latency
Integration with Nginx + Fail2Ban for security
The DoragonPHP Solution
Client → Nginx (reverse proxy + cache) → DoragonPHP (workers, Unix socket) → DB/Redis
Advantages:
Minimal latency (5–15 ms for SSR/API)
High throughput (~2500 rps estimated)
Tunable workers with RAM and CPU control
Fail2Ban for security
Unix socket + a lightweight runtime → simple, fast, scalable.
Next.js Setup on VPS
For Next.js, I applied several optimizations:
Nginx reverse proxy + proxy_cache for SSR and API
Node.js / cluster workers to utilize all VPS vCPUs
Redis / memory cache for dynamic pages
Static assets served directly from _next/static → bypassing Node.js
ISR / SSG for stable pages
Systemd for service management: automatic restart, resource limits, centralized logging
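The static-asset item above can look like this in Nginx; the on-disk path is an assumption that matches the app directory used in the systemd unit later in this article:
location /_next/static/ {
    # Serve the immutable build assets from disk, bypassing Node.js entirely.
    alias /home/www/nextjs-app/.next/static/;
    access_log off;
    expires 365d;
    add_header Cache-Control "public, immutable";
}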
Next.js Optimization Plan
To boost Next.js speed on my servers:
Aggressive per-route caching using Nginx + proxy_cache and Redis (sketched after this list)
Node.js / cluster workers for all vCPUs
Unix socket instead of TCP loopback
Static / ISR at maximum
Systemd service for complete management
CDN / edge caching for static assets
Goal: achieve maximum performance without sacrificing the flexibility of the JS framework, but with full control via systemd.
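A minimal sketch of the per-route caching piece (the nextjs_cache zone name matches the Nginx configs shown later; paths, sizes, and TTLs are illustrative values, not final ones):
# Declared once in the http {} context: on-disk cache for proxied responses.
proxy_cache_path /var/cache/nginx/nextjs levels=1:2 keys_zone=nextjs_cache:10m max_size=1g inactive=60m use_temp_path=off;

server {
    listen 80;
    server_name your-site.com;

    # API routes: short TTL, serve a stale copy while a fresh one is fetched.
    location /api/ {
        proxy_pass http://unix:/var/run/nextjs.sock:;
        proxy_cache nextjs_cache;
        proxy_cache_valid 200 30s;
        proxy_cache_use_stale updating error timeout;
    }

    # Rendered pages: longer TTL; proxy_set_header lines omitted for brevity.
    location / {
        proxy_pass http://unix:/var/run/nextjs.sock:;
        proxy_cache nextjs_cache;
        proxy_cache_valid 200 10m;
        add_header X-Cache-Status $upstream_cache_status;
    }
}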
DoragonPHP vs Next.js Comparison
| Feature | Next.js | DoragonPHP |
|---|---|---|
| SSR/API Latency | 40–60 ms | 5–15 ms |
| Requests/sec | ~500 rps | ~2500 rps |
| Cache | Limited SSR/ISR | Nginx + file/Redis |
| Workers | Manual cluster | Tunable workers |
| Predictability | GC spikes | Predictable, resource-controlled |
| Deployment | Cloud-friendly | VPS / bare-metal / container |
Conclusion: on self-managed servers, Next.js lags behind in raw performance, but with DevOps optimizations it can become fast and scalable.
Complete setup: Nginx + Unix socket + caching, DoragonPHP / Next.js with systemd, Fail2Ban monitoring, tunable workers + memory limits, Redis / file cache for dynamic pages.
Personal Lesson
DevOps is not just “the server works,” but full control: caching, workers, security, latency tuning.
Next.js shines mainly in cloud-managed ecosystems (Vercel, edge), but on a self-managed VPS, custom runtime + aggressive caching beats standard performance.
DoragonPHP is an example of how to build a high-performance, scalable, and DevOps-friendly runtime.
Systemd + Unix Socket Configurations (Next.js)
Next.js Socket Unit: /etc/systemd/system/nextjs.socket
[Unit]
Description=Next.js Socket
[Socket]
ListenStream=/var/run/nextjs.sock
SocketUser=www-data
SocketGroup=www-data
SocketMode=0660
[Install]
WantedBy=sockets.target
Next.js Service Unit: /etc/systemd/system/nextjs.service
[Unit]
Description=Next.js Application
After=network.target
Requires=nextjs.socket
[Service]
ExecStart=/usr/bin/node /home/www/nextjs-app/server.js
User=www-data
Group=www-data
Restart=always
RestartSec=3
Environment=NODE_ENV=production
Environment=WORKERS=4
Environment=CACHE_SIZE=1G
# Optional resource limits
MemoryMax=4G
CPUQuota=200%
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=nextjs-app
[Install]
WantedBy=multi-user.target
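Once both unit files are in place, reloading and enabling them is the usual routine:
systemctl daemon-reload
systemctl enable --now nextjs.socket nextjs.service
journalctl -u nextjs.service -f   # follow the logs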
Explanation:
Nginx communicates with Next.js via Unix socket (/var/run/nextjs.sock) → lower latency than TCP loopback.
Systemd handles auto-start, restart, logging, and resource limits.
WORKERS and CACHE_SIZE → used in server.js to configure number of workers and cache size.
MemoryMax and CPUQuota → limit systemd resources per service.
Requires=nextjs.socket → systemd starts the socket before the service.
server.js
const workers = parseInt(process.env.WORKERS) || 2;
const cacheSize = process.env.CACHE_SIZE || '512M';
console.log(`Starting Next.js with ${workers} workers and ${cacheSize} cache`);
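This stub only reads the environment; below is a fuller (still experimental) sketch of how WORKERS could drive a cluster of Next.js processes sharing the Unix socket. It assumes Node 16+, the standard Next.js custom-server API, and that the service binds the socket path itself rather than taking it over from the systemd socket unit:
const cluster = require('cluster');
const http = require('http');
const fs = require('fs');
const next = require('next');

const workers = parseInt(process.env.WORKERS, 10) || 2;
const socketPath = '/var/run/nextjs.sock';

if (cluster.isPrimary) {
  // Remove a stale socket left over from a previous run, then fork the workers.
  if (fs.existsSync(socketPath)) fs.unlinkSync(socketPath);
  for (let i = 0; i < workers; i++) cluster.fork();
  cluster.on('exit', () => cluster.fork()); // replace a crashed worker
} else {
  const app = next({ dev: false });
  const handle = app.getRequestHandler();
  app.prepare().then(() => {
    http.createServer((req, res) => handle(req, res)).listen(socketPath, () => {
      fs.chmodSync(socketPath, 0o660); // Nginx (www-data) must be able to connect
      console.log(`Worker ${process.pid} listening on ${socketPath}`);
    });
  });
}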
Nginx Config for Unix Socket - Next.js
server {
listen 80;
server_name your-site.com;
location / {
proxy_pass http://unix:/var/run/nextjs.sock:;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache nextjs_cache;
proxy_cache_valid 200 10m;
}
}
Advantages:
Unix socket → lower latency than TCP loopback
Nginx proxy cache → faster dynamic pages
Environment variables → configurable workers/cache without modifying source code
Systemd → automatic restart, logging, resource limits
DoragonPHP Unix Socket (doragonphp.socket)
[Unit]
Description=DoragonPHP Socket
[Socket]
ListenStream=/var/run/doragonphp.sock
SocketUser=www-data
SocketGroup=www-data
SocketMode=0660
[Install]
WantedBy=sockets.target
DoragonPHP Service (doragonphp.service)
[Unit]
Description=DoragonPHP Application
After=network.target
Requires=doragonphp.socket
[Service]
ExecStart=/usr/bin/php /home/www/doragonphp/app.php
User=www-data
Group=www-data
Restart=always
RestartSec=3
Environment=APP_ENV=production
Environment=WORKERS=4
Environment=CACHE_SIZE=1G
# Optional resource limits
MemoryMax=4G
CPUQuota=200%
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=doragonphp
[Install]
WantedBy=multi-user.target
Explanation:
WORKERS and CACHE_SIZE → used in app.php to configure workers and cache size.
MemoryMax and CPUQuota → limit resources per worker/service.
Requires=doragonphp.socket → the socket must be active before the service.
Example usage in app.php
<?php
$workers = getenv('WORKERS') ?: 2;
$cacheSize = getenv('CACHE_SIZE') ?: '512M';
echo "Starting DoragonPHP with {$workers} workers and {$cacheSize} cache\n";
Nginx Config for DoragonPHP
server {
listen 80;
server_name your-site.com;
location /doragonphp/ {
proxy_pass http://unix:/var/run/doragonphp.sock:;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache doragonphp_cache;
proxy_cache_valid 200 10m;
}
}
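As with nextjs_cache, the doragonphp_cache zone referenced above has to be declared once in the http context; an illustrative declaration (path and sizes are assumptions):
proxy_cache_path /var/cache/nginx/doragonphp levels=1:2 keys_zone=doragonphp_cache:10m max_size=1g inactive=60m;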
Benefits:
Unix socket → minimal latency
Nginx proxy cache → faster APIs and SSR
Systemd + environment variables → full control of workers and cache, auto-restart, logging, and resource limits
Once these configurations are applied, both Next.js and DoragonPHP can be fully managed via systemd, communicating via Unix socket and with high-performance caching through Nginx.
Conclusion
With this approach, I transformed a self-managed VPS into a fast and predictable environment for SSR, API, and caching. The same principles can be applied to other frameworks, and full control of resources and security makes the difference between “it works” and “it works optimally and scalably.”
I am also working on optimizing Next.js on my servers, with aggressive caching, Unix socket, and systemd, to reach maximum speed on self-managed servers.
Final Note
⚠️ The project is still in progress.
Everything presented represents the plan, concepts, and experimental setup for DoragonPHP and Next.js optimization on my VPS. There are no fully production-ready implementations yet, but all will be developed step by step and documented on the blog.