Technical Documentation

System Architecture & Technical Specification

Comprehensive technical architecture documentation for Kubepanel - the Kubernetes-powered web hosting control panel with automated WordPress deployment and enterprise-grade reliability.

Introduction & Overview

What is Kubepanel?

Kubepanel is an open-source, Kubernetes-powered web hosting control panel designed as a modern alternative to traditional hosting control panels like cPanel and DirectAdmin. It targets small and medium-sized hosting providers who want to leverage the power, scalability, and reliability of Kubernetes for their hosting infrastructure.

Goals

  • Modern Architecture: Built on Kubernetes from the ground up, providing cloud-native scalability and reliability
  • Easy Management: Simplified web hosting management with automated WordPress deployment and certificate provisioning
  • High Availability: Three-node cluster with replicated storage ensures minimal downtime
  • Cost Effective: Open-source solution reducing licensing costs compared to proprietary alternatives
  • Developer Friendly: API-first approach with comprehensive REST APIs for automation and integration

Non-Goals

  • Supporting single-node deployments (minimum 3 nodes required for HA)
  • Replacing enterprise-grade solutions for large hosting providers (500+ servers)
  • Supporting non-Kubernetes container orchestrators
  • Providing Windows-based hosting capabilities

System Architecture & Traffic Flow

System Overview

Kubepanel operates as a two-tier system: the management plane (Kubepanel Django application) for hosting administrators, and the workload plane (WordPress/PHP websites) for end users. The platform is designed primarily to host WordPress and PHP applications with MariaDB databases, though it can run any containerized workload.

System Context - User Traffic Flows

Website Visitors → DNS Load Balancing → Nginx Ingress → WordPress/PHP → MariaDB

Hosting Admins → DNS Resolution → Nginx Ingress → Kubepanel Django → Kubernetes API

Three-Node Kubernetes Cluster

Cluster Deployment Topology

  • Node 1 (Control Plane + Worker): Ubuntu 24.04 + MicroK8S; Kubepanel Django App; WordPress/PHP Workloads; LINSTOR Controller; /dev/sdb → LINSTOR Pool
  • Node 2 (Worker): Ubuntu 24.04 + MicroK8S; WordPress/PHP Workloads; LINSTOR Satellite; Mail Services; /dev/sdb → LINSTOR Pool
  • Node 3 (Worker): Ubuntu 24.04 + MicroK8S; WordPress/PHP Workloads; LINSTOR Satellite; phpMyAdmin; /dev/sdb → LINSTOR Pool

Load Balancing Strategies

Default: DNS Load Balancing

The default strategy publishes multiple A records pointing to the individual cluster node IPs. It works in any environment (cloud or on-premises) without additional infrastructure.

example.com      A  203.0.113.10
example.com      A  203.0.113.11
example.com      A  203.0.113.12
*.example.com    A  203.0.113.10
*.example.com    A  203.0.113.11
*.example.com    A  203.0.113.12
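
Round-robin resolution can be spot-checked with dig; the addresses below are the example values above, and answer order rotates per query.

# Verify that all node A records are returned (answer order varies per query)
dig +short example.com A
# 203.0.113.10
# 203.0.113.11
# 203.0.113.12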

On-Premises: MetalLB (Recommended)

For bare metal installations, MetalLB provides Layer 2 or BGP load balancing with a single virtual IP address for improved reliability and faster failover.
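
As a rough sketch, assuming MetalLB is already installed in the metallb-system namespace, a Layer 2 setup with a single virtual IP might look like the following; the pool name and the address 203.0.113.50 are illustrative. (MicroK8S also ships a metallb addon that accepts an address range at enable time, and kubectl may need to be invoked as microk8s kubectl.)

# Minimal MetalLB Layer 2 sketch (pool name and address are illustrative)
kubectl apply -f - <<EOF
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: kubepanel-pool
  namespace: metallb-system
spec:
  addresses:
    - 203.0.113.50/32        # single virtual IP announced for the cluster
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: kubepanel-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - kubepanel-pool
EOF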

WordPress & PHP Application Architecture

Each WordPress site runs as a separate set of pods with dedicated persistent volumes for WordPress files and shared MariaDB access. The platform supports:

  • One-Click WordPress: Automated WordPress installation with database creation
  • Multi-PHP Support: Multiple PHP versions available (✅ Implemented)
  • WordPress Optimization: Performance tuning and caching built-in (✅ Implemented)
  • Persistent Storage: WordPress uploads and files stored on replicated LINSTOR volumes
  • Database Sharing: Multiple WordPress sites can share MariaDB instances
  • Auto-Scaling: WordPress pods can scale based on traffic demands

Storage & Automated Backup Solution

LINSTOR Storage Architecture

Kubepanel utilizes LINSTOR (Linux Storage) with DRBD (Distributed Replicated Block Device) to provide high-performance, replicated storage across the three-node cluster. Each node contributes its /dev/sdb block device to a shared storage pool.
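
As an illustrative sketch of what that registration involves (the node, pool, and volume group names are placeholders, and the Kubepanel installation scripts normally perform these steps):

# Illustrative only - normally handled by the installation scripts
vgcreate vg_linstor /dev/sdb                               # LVM volume group on the spare disk
linstor storage-pool create lvm node1 lvm-pool vg_linstor  # expose the VG as a LINSTOR pool
linstor storage-pool list                                  # confirm the pool on every node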

Storage Replication & Backup Architecture

  • Node 1 Storage: /dev/sdb → LVM VG; WordPress Site A (Primary); WordPress Site B (Replica); Daily LVM Snapshots
  • Node 2 Storage: /dev/sdb → LVM VG; WordPress Site B (Primary); WordPress Site A (Replica); Daily LVM Snapshots
  • Node 3 Storage: /dev/sdb → LVM VG; MariaDB (Primary); Available for Replicas; Daily LVM Snapshots

The LINSTOR Controller coordinates real-time DRBD synchronization between the nodes and the daily LVM snapshots taken on each node.

Default Replication Factor: 2

All persistent volumes are automatically replicated to two nodes using DRBD. This ensures that if any single node fails, data remains accessible from another node without interruption.
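
A sketch of a StorageClass requesting two replicas through the LINSTOR CSI driver; parameter names differ between LINSTOR CSI releases (placementCount is assumed here), and the class and pool names are illustrative.

# Sketch: StorageClass with 2-way replication via the LINSTOR CSI driver
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linstor-replica-2
provisioner: linstor.csi.linbit.com
parameters:
  placementCount: "2"     # replicate every volume to two nodes
  storagePool: lvm-pool   # illustrative pool name
allowVolumeExpansion: true
reclaimPolicy: Retain
EOF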

Automated Backup Features

Backup & Recovery Workflow

Daily schedule:   LINSTOR Scheduler → LVM Snapshot → Timestamp Tag
On-demand backup: Admin Request → Kubepanel UI → Instant Snapshot
Restore:          Select Snapshot → Restore Volume → Restart Pods

Key Backup Features:

  • Daily Automated Snapshots: LVM snapshots created automatically every day
  • One-Click Backups: Hosting admins can create snapshots instantly
  • One-Click Restore: Complete website restoration from any snapshot
  • Space Efficient: Only changed blocks are stored in snapshots
  • Cross-Node Recovery: Restore from replica if primary node fails
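
At the CLI level, the backup and restore features above might look roughly like this with the linstor tool; resource and snapshot names are illustrative, and LINSTOR restores into a new resource rather than overwriting the original in place.

# Illustrative snapshot and restore flow (names are examples)
linstor snapshot create wp-site-a snap-20250115        # point-in-time snapshot
linstor snapshot list                                  # browse available snapshots
linstor resource-definition create wp-site-a-restored  # target for the restore
linstor snapshot volume-definition restore --from-resource wp-site-a \
  --from-snapshot snap-20250115 --to-resource wp-site-a-restored
linstor snapshot resource restore --from-resource wp-site-a \
  --from-snapshot snap-20250115 --to-resource wp-site-a-restored
# Pods are then re-pointed at the restored volume and restarted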

Architecture Components

Component           | Technology                          | Purpose                                             | Deployment
Web UI & API        | Django + REST Framework             | Main control panel interface and API endpoints      | Multiple pods with HPA
WordPress Hosting   | WordPress + PHP (multiple versions) | Website hosting with one-click deployment           | Per-site pod deployments
Database            | MariaDB                             | Primary data store for WordPress and configuration  | StatefulSet with persistent volumes
Ingress Controller  | Nginx Ingress                       | HTTP/HTTPS traffic routing and TLS termination      | DaemonSet across all nodes
Certificate Manager | Cert-Manager + Let's Encrypt        | Automatic SSL/TLS certificate provisioning          | Cluster-wide operator
Storage Layer       | LINSTOR/DRBD                        | Replicated block storage across nodes               | DaemonSet with CSI driver
Mail Services       | Postfix + Roundcube                 | Email server and webmail interface                  | Dedicated pods with persistent mail storage
Database Admin      | phpMyAdmin                          | Web-based database management                       | Deployment with secure ingress

Component Interactions

The Django-based Web UI serves as the central orchestrator, communicating with the Kubernetes API to manage workloads, volumes, and certificates. Background controllers handle long-running tasks asynchronously, while the ingress controller provides unified entry points for all services with automatic TLS termination.

Availability, Scaling & Disaster Recovery

High Availability Features

  • Three-Node Cluster: Ensures quorum and fault tolerance for single node failures
  • Replicated Storage: LINSTOR provides 2-way replication across nodes
  • Pod Distribution: Kubernetes scheduler distributes workloads across available nodes
  • Database Persistence: MariaDB data stored on replicated persistent volumes

Horizontal Pod Autoscaling (HPA)

Django application pods and WordPress sites can scale automatically based on CPU and memory utilization. Additional nodes can be added to the cluster for increased capacity.
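
A minimal HPA sketch for an illustrative per-site WordPress deployment; the deployment name, threshold, and replica bounds are assumptions rather than Kubepanel defaults.

# Sketch: scale an illustrative WordPress deployment on CPU utilization
kubectl apply -f - <<EOF
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: wp-site-a
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: wp-site-a
  minReplicas: 2          # keep at least two pods for availability
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75
EOF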

Disaster Recovery

  • Automated Snapshots: LVM snapshots managed by LINSTOR for point-in-time recovery
  • Backup Strategy: Regular backups with configurable retention policies
  • Cluster Recovery: Node replacement procedures with automatic data replication
  • One-Click Restore: Website restoration directly from the control panel

Security & Compliance

Transport Layer Security

  • Automatic HTTPS: All traffic encrypted with Let's Encrypt certificates
  • Certificate Rotation: Automated certificate renewal before expiration
  • TLS Best Practices: Modern cipher suites and protocols enforced
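
A typical cert-manager ClusterIssuer for Let's Encrypt with the Nginx ingress looks roughly like this; the issuer name and contact address are placeholders, and Kubepanel's actual issuer configuration may differ.

# Sketch: Let's Encrypt issuer with HTTP-01 validation through Nginx ingress
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com               # illustrative contact address
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            class: nginx
EOF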

Access Control

  • Kubernetes RBAC: Role-based access control for cluster resources
  • Namespace Isolation: Logical separation of customer workloads
  • Pod Security Standards: Enforced security contexts and policies
  • Network Policies: Optional micro-segmentation for enhanced isolation
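
As a sketch of the optional micro-segmentation, a policy confining a tenant namespace to in-namespace and ingress-controller traffic might look like this; the namespace name and label are assumptions (MicroK8S's ingress addon runs in the ingress namespace).

# Sketch: restrict a tenant namespace to local and ingress-controller traffic
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: isolate-tenant
  namespace: customer-a            # illustrative tenant namespace
spec:
  podSelector: {}                  # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}          # pods in the same namespace
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress   # the ingress controller's namespace
EOF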

Data Protection

  • Encryption at Rest: Sensitive data encrypted in database and storage
  • Backup Encryption: All backup data encrypted during transit and storage
  • Secrets Management: Kubernetes secrets for sensitive configuration data
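
For example, per-site database credentials can live in a standard Kubernetes secret and be consumed by pods as environment variables or mounted files; the names and values here are illustrative.

# Illustrative: store per-site database credentials as a secret
kubectl create secret generic wp-site-a-db \
  --from-literal=DB_USER=wp_site_a \
  --from-literal=DB_PASSWORD='change-me'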

Operations & Monitoring

Installation Process

  1. Bootstrap Node 1: Run installation script to set up control plane
  2. Join Nodes 2 & 3: Execute join scripts on worker nodes
  3. Configuration: Input admin credentials, domain, and email settings
  4. Validation: Verify cluster health and service availability
# Example bootstrap command
bash <(curl -fsSL https://raw.githubusercontent.com/laszlokulcsar/kubepanel-infra/refs/heads/main/kubepanel-install.sh)
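
The validation step can be spot-checked from Node 1 with commands along these lines, assuming the linstor CLI is available on the host:

# Illustrative post-install checks (step 4 above)
microk8s status --wait-ready        # MicroK8S core services are up
microk8s kubectl get nodes -o wide  # all three nodes report Ready
microk8s kubectl get pods -A        # Kubepanel and system pods are Running
linstor node list                   # all nodes ONLINE in the storage layer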

Monitoring Stack (Planned)

Component    | Purpose                               | Status            | Priority
Prometheus   | Metrics collection and alerting       | Planned Q2        | High
Grafana      | Metrics visualization and dashboards  | Planned Q2        | High
Loki/ELK     | Log aggregation and analysis          | Under evaluation  | Medium
AlertManager | Alert routing and notification        | Planned Q2        | High

Key Metrics to Monitor:

  • Cluster node health and resource utilization
  • Storage capacity and DRBD replication status
  • Certificate expiration dates and renewal status
  • WordPress application response times and error rates
  • Backup success rates and snapshot retention compliance
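
Until the monitoring stack lands, the same signals can be spot-checked manually; the commands below assume the metrics-server addon and the cert-manager CRDs are present.

# Manual spot checks pending the Prometheus/Grafana rollout
kubectl top nodes              # node CPU/memory (needs the metrics-server addon)
linstor resource list          # per-volume DRBD replication state
kubectl get certificates -A    # certificate and renewal status from cert-manager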

System Requirements

Hardware Requirements

Component | Minimum                        | Recommended                  | Notes
Nodes     | 3 nodes (required)             | 3-5 nodes                    | Minimum for HA quorum
CPU       | 4 vCPU per node                | 8 vCPU per node              | Control plane needs 2+ vCPU
RAM       | 8 GB per node                  | 16 GB per node               | For Kubernetes and workloads
Storage   | 100 GB root + 500 GB /dev/sdb  | 200 GB root + 2 TB /dev/sdb  | /dev/sdb for LINSTOR storage pool
Network   | 1 Gbps                         | 10 Gbps                      | Low latency between nodes

Software Requirements

  • Operating System: Ubuntu 24.04 LTS (required)
  • Container Runtime: MicroK8S with containerd
  • Python: 3.10+ for Django applications
  • Storage: Additional block device (/dev/sdb) on each node

Network Requirements

  • Public IP: At least one static public IP address
  • DNS Records: A record or wildcard A record pointing to cluster IP
  • Ports: 80/443 (HTTP/HTTPS), 25/587/993 (SMTP/IMAP)
  • Internal Communication: Full connectivity between cluster nodes

DNS Configuration Example

panel.example.com      A  203.0.113.10
*.panel.example.com    A  203.0.113.10
mail.example.com       A  203.0.113.10
webmail.example.com    A  203.0.113.10

Development Roadmap & Future Considerations

Current Implementation Status

  • Multi-PHP Versions: support for PHP 7.4, 8.0, 8.1, 8.2, and 8.3 (✅ Implemented)
  • WordPress Optimization: performance tuning and caching built-in (✅ Implemented)

Integration Roadmap

Integration                            | Priority | Complexity | Status
S3-Compatible Backup                   | High     | Medium     | Planned Q1
Monitoring Stack (Prometheus/Grafana)  | High     | High       | Planned Q2
Additional DNS Providers               | Medium   | Low        | Planned Q2
Multi-PHP Versions                     | -        | -          | ✅ Implemented
WordPress Optimization                 | -        | -          | ✅ Implemented

Future Considerations

Database Topology

The current design uses a single-primary MariaDB deployment on replicated storage. A MariaDB Galera cluster is under consideration for enhanced database availability and read scaling.

Scaling Enhancements

  • Multi-Cluster Support: Federation for geographic distribution
  • Resource Limits: Per-tenant resource quotas and limits
  • CDN Integration: Content delivery network support for static assets
  • Database Sharding: Horizontal scaling strategies for large deployments

Security Enhancements

  • Network Policies: Micro-segmentation implementation strategy
  • Pod Security: Enhanced security contexts and admission controllers
  • Vulnerability Scanning: Automated container and cluster security scanning
  • Compliance: Industry compliance requirements (SOC2, PCI-DSS)

Conclusion

Kubepanel represents a modern approach to web hosting control panels, leveraging Kubernetes' native capabilities for scalability, reliability, and automation. The three-node architecture with replicated storage provides a solid foundation for small to medium hosting providers seeking an alternative to traditional control panels.

The design emphasizes automation, security, and operational simplicity while maintaining the flexibility to scale and adapt to evolving requirements. Key success factors include proper DNS configuration, reliable block storage devices, and ongoing monitoring of cluster health and performance.