- Replace all Zitadel references with Authentik in README files
- Update example configurations to use authentik instead of zitadel
- Remove reference to deleted PROJECT_REFERENCE.md
- Update clients/README.md to reflect actual available scripts
- Update secrets documentation with correct variable names
All documentation now accurately reflects current infrastructure
using Authentik as the identity provider.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Enable deployment of client servers without public IPs using private
network (10.0.0.0/16) with NAT gateway via edge server.
## Infrastructure Changes:
### Terraform (tofu/):
- **network.tf**: Define private network and subnet (10.0.0.0/24)
  - NAT gateway route through edge server
  - Firewall rules for client servers
- **main.tf**: Support private-only servers
  - Optional public_ip_enabled flag per client
  - Dynamic network block for private IP assignment
  - User-data templates for public vs private servers
- **user-data-*.yml**: Cloud-init templates
  - Private servers: Configure default route via NAT gateway (see the cloud-init sketch after this list)
  - Public servers: Standard configuration
- **dns.tf**: Update DNS to support edge routing
  - Client domains point to edge server IP
  - Wildcard DNS for subdomains
- **variables.tf**: Add private_ip and public_ip_enabled options
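As a rough sketch of the private-server template mentioned above (user-data-*.yml), the default route is pushed at boot via cloud-init; the gateway address 10.0.0.1 and the Hetzner recursive resolvers are assumptions based on the 10.0.0.0/16 network, not the literal template contents:

```yaml
#cloud-config
# Hypothetical excerpt of the private-server user-data template
runcmd:
  # Send all outbound traffic to the network gateway, which follows the
  # 0.0.0.0/0 route defined in network.tf to the edge server's NAT.
  - ip route add default via 10.0.0.1
  # No public interface means no DHCP-provided resolvers; use Hetzner's.
  - printf 'nameserver 185.12.64.1\nnameserver 185.12.64.2\n' > /etc/resolv.conf
```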
### Ansible:
- **deploy.yml**: Add diun and kuma roles to deployment
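In deploy.yml this amounts to two more entries in the play's role list; a hedged excerpt (host group and the pre-existing roles are placeholders):

```yaml
# ansible/deploy.yml (excerpt; surrounding structure assumed)
- hosts: clients
  become: true
  roles:
    - nextcloud   # pre-existing role, shown only as a placeholder
    - diun        # container image update notifications
    - kuma        # Uptime Kuma monitoring
```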
## Benefits:
- Cost savings: No public IP needed for each client
- Scalability: No public IP exhaustion limits
- Security: Clients not directly exposed to internet
- Centralized SSL: All TLS termination at edge
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Updates to Uptime Kuma monitoring setup:
DNS Configuration:
- Added DNS A record for status.vrije.cloud -> 94.130.231.155
- Updated Uptime Kuma container to use status.vrije.cloud domain
- HTTPS access via nginx-proxy with Let's Encrypt SSL
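A minimal compose-style sketch of how the container picks up the new domain, assuming the setup uses the usual nginx-proxy/acme-companion conventions (VIRTUAL_HOST, LETSENCRYPT_HOST); service, volume, and network names are placeholders:

```yaml
# docker-compose excerpt (hypothetical names)
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    restart: unless-stopped
    environment:
      VIRTUAL_HOST: status.vrije.cloud        # nginx-proxy routes this hostname
      VIRTUAL_PORT: "3001"                    # Uptime Kuma's internal port
      LETSENCRYPT_HOST: status.vrije.cloud    # acme-companion requests the cert
    volumes:
      - uptime-kuma-data:/app/data
    networks:
      - proxy                                 # shared with the nginx-proxy container

volumes:
  uptime-kuma-data:

networks:
  proxy:
    external: true
```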
Automated Monitor Management:
- Created scripts/add-client-to-monitoring.sh
- Created scripts/remove-client-from-monitoring.sh
- Integrated monitoring into deploy-client.sh (step 5/5)
- Integrated monitoring into destroy-client.sh (step 0/7)
- Deployment now prompts to add monitors after success
- Destruction now prompts to remove monitors before deletion
Email Notification Setup:
- Created docs/uptime-kuma-email-setup.md with complete guide
- SMTP configuration using smtp.strato.com
- Credentials: server@postxsociety.org
- Alerts sent to mail@postxsociety.org
Documentation:
- Updated docs/monitoring.md with new domain
- Added email setup reference
- Replaced all URLs to use status.vrije.cloud
Benefits:
✅ Friendly domain instead of IP address
✅ HTTPS access with auto-SSL
✅ Automated monitoring reminders on deploy/destroy
✅ Complete email notification guide
✅ Streamlined workflow for monitor management
Note: Monitor creation/deletion currently manual (API automation planned)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Implement persistent block storage for Nextcloud user data, separating application and data layers:
OpenTofu Changes:
- tofu/volumes.tf: Create and attach Hetzner Volumes per client
  - Configurable size per client (default 100 GB for dev)
  - ext4 formatted, attached but not auto-mounted
- tofu/variables.tf: Add nextcloud_volume_size to client config
- tofu/terraform.tfvars: Set volume size for dev client (100 GB ~€5.40/mo)
Ansible Changes:
- ansible/roles/nextcloud/tasks/mount-volume.yml: New mount tasks (sketched after this list)
  - Detect volume device automatically
  - Format if needed, mount at /mnt/nextcloud-data
  - Add to fstab for persistence
  - Set correct permissions for www-data
- ansible/roles/nextcloud/tasks/main.yml: Include volume mounting
- ansible/roles/nextcloud/templates/docker-compose.nextcloud.yml.j2:
  - Use host mount /mnt/nextcloud-data/data instead of Docker volume
  - Keep app code in Docker volume (nextcloud-app)
  - User data now on Hetzner Volume
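A condensed sketch of what mount-volume.yml does, assuming the attached Hetzner Volume appears under /dev/disk/by-id/scsi-0HC_Volume_*; task wording and variable names are illustrative, not the actual file:

```yaml
# mount-volume.yml (condensed sketch; variable names are hypothetical)
- name: Find the attached Hetzner Volume device
  ansible.builtin.find:
    paths: /dev/disk/by-id
    patterns: "scsi-0HC_Volume_*"
    file_type: link
  register: volume_links

- name: Remember the device path
  ansible.builtin.set_fact:
    nextcloud_volume_device: "{{ volume_links.files[0].path }}"

- name: Create an ext4 filesystem (no-op if one already exists)
  community.general.filesystem:
    fstype: ext4
    dev: "{{ nextcloud_volume_device }}"

- name: Mount the volume and persist it in /etc/fstab
  ansible.posix.mount:
    path: /mnt/nextcloud-data
    src: "{{ nextcloud_volume_device }}"
    fstype: ext4
    state: mounted

- name: Give www-data ownership of the data directory
  ansible.builtin.file:
    path: /mnt/nextcloud-data/data
    state: directory
    owner: www-data
    group: www-data
    mode: "0750"
```

The compose template then bind-mounts /mnt/nextcloud-data/data into the container in place of the previous named data volume, while the application code stays in the nextcloud-app Docker volume.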
Scripts:
- scripts/resize-client-volume.sh: Online volume resizing
  - Resize via Hetzner API
  - Expand filesystem automatically
  - Show cost impact
  - Verify new size
Documentation:
- docs/storage-architecture.md: Complete storage guide
  - Architecture diagrams
  - Volume specifications
  - Sizing guidelines
  - Operations procedures
  - Performance considerations
  - Troubleshooting guide
- docs/volume-migration.md: Step-by-step migration
  - Safe migration from Docker volumes
  - Rollback procedures
  - Verification checklist
  - Timeline estimates
Benefits:
✅ Data independent from server instance
✅ Resize storage without rebuilding server
✅ Easy data migration between servers
✅ Better separation of concerns (app vs data)
✅ Simplified backup strategy
✅ Cost-optimized (pay for what you use)
Volume Pricing:
- 50 GB: ~€2.70/month
- 100 GB: ~€5.40/month
- 250 GB: ~€13.50/month
- Resizable online, no downtime
Note: Existing clients require manual migration
Follow docs/volume-migration.md for safe migration procedure
Closes #18
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
This commit implements a complete Zitadel identity provider deployment
with automated DNS management using vrije.cloud domain.
## Infrastructure Changes
### DNS Management
- Migrated from deprecated hetznerdns provider to modern hcloud provider v1.57+
- Automated DNS record creation for client subdomains (test.vrije.cloud)
- Automated wildcard DNS for service subdomains (*.test.vrije.cloud)
- Supports both IPv4 (A) and IPv6 (AAAA) records
### Zitadel Deployment
- Added complete Zitadel role with PostgreSQL 16 database
- Configured Zitadel v2.63.7 with proper external domain settings
- Implemented first instance setup with admin user creation
- Set up database connection with proper user and admin credentials
- Configured email verification bypass for first admin user
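For orientation, a trimmed-down compose sketch of the role's Zitadel + PostgreSQL pair; the ZITADEL_* environment keys follow Zitadel's documented naming, but service names, variable names, and the exact option set are assumptions:

```yaml
# docker-compose excerpt (sketch; secrets come from the client's vaulted vars)
services:
  zitadel-db:
    image: postgres:16
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: "{{ zitadel_db_admin_password }}"   # hypothetical var
    volumes:
      - zitadel-db:/var/lib/postgresql/data

  zitadel:
    image: ghcr.io/zitadel/zitadel:v2.63.7
    command: start-from-init --masterkey "{{ zitadel_masterkey }}" --tlsMode external
    environment:
      ZITADEL_EXTERNALDOMAIN: zitadel.test.vrije.cloud
      ZITADEL_EXTERNALSECURE: "true"
      ZITADEL_EXTERNALPORT: "443"
      ZITADEL_DATABASE_POSTGRES_HOST: zitadel-db
      ZITADEL_DATABASE_POSTGRES_USER_USERNAME: zitadel
      ZITADEL_DATABASE_POSTGRES_USER_PASSWORD: "{{ zitadel_db_user_password }}"
      ZITADEL_DATABASE_POSTGRES_ADMIN_USERNAME: postgres
      ZITADEL_DATABASE_POSTGRES_ADMIN_PASSWORD: "{{ zitadel_db_admin_password }}"
    depends_on:
      - zitadel-db

volumes:
  zitadel-db:
```

The first admin user and the email-verification bypass are presumably wired through Zitadel's FirstInstance settings (ZITADEL_FIRSTINSTANCE_*) in the same template.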
### Traefik Updates
- Upgraded from v3.0 to v3.2 for better Docker API compatibility
- Added manual routing configuration in dynamic.yml for Zitadel
- Configured HTTP/2 Cleartext (h2c) backend for Zitadel service
- Added Zitadel-specific security headers middleware
- Fixed Docker API version compatibility issues
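The manual routing added to dynamic.yml looks roughly like this; router, middleware, and certresolver names are placeholders, while the h2c:// backend scheme is what lets Traefik speak HTTP/2 cleartext (gRPC-capable) to Zitadel on port 8080:

```yaml
# dynamic.yml excerpt (sketch; names and certresolver are assumptions)
http:
  routers:
    zitadel:
      rule: "Host(`zitadel.test.vrije.cloud`)"
      entryPoints:
        - websecure
      service: zitadel
      middlewares:
        - zitadel-headers
      tls:
        certResolver: letsencrypt
  middlewares:
    zitadel-headers:
      headers:
        stsSeconds: 31536000          # illustrative; the actual header set may differ
        stsIncludeSubdomains: true
  services:
    zitadel:
      loadBalancer:
        servers:
          - url: "h2c://zitadel:8080"  # HTTP/2 cleartext to the container
```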
### Secrets Management
- Added Zitadel credentials to test client secrets
- Generated proper 32-character masterkey (Zitadel requirement)
- Created admin password with symbol complexity requirement
- Added zitadel_domain configuration
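Concretely, the client's secrets gain a handful of vaulted variables along these lines; apart from zitadel_domain the names are hypothetical and the values are placeholders (the masterkey must be exactly 32 characters, the admin password needs a symbol):

```yaml
# test client secrets (illustrative names, placeholder values)
zitadel_masterkey: "abcdefghijklmnopqrstuvwxyz012345"    # exactly 32 characters
zitadel_admin_password: "ChangeMe-Example-2024!"         # symbol required
zitadel_domain: zitadel.test.vrije.cloud
```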
## Deployment Details
Test environment now accessible at:
- Server: test.vrije.cloud (78.47.191.38)
- Zitadel: https://zitadel.test.vrije.cloud/
- Admin user: admin@test.zitadel.test.vrije.cloud
Successfully tested:
- HTTPS with Let's Encrypt SSL certificate
- Admin login with 2FA setup
- First instance initialization
Fixes #3
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Pieter <pieter@kolabnow.com>
Co-authored-by: Claude <noreply@anthropic.com>