Compare commits

No commits in common. "main" and "v1.1-2fa-enforcement" have entirely different histories.
main...v1.1-2fa-enforcement
108 changed files with 1998 additions and 6924 deletions
@@ -37,7 +37,7 @@ High-level guardian of the infrastructure architecture, ensuring consistency, ma

| Secrets | SOPS + Age | Simple, no server needed |
| Hosting | Hetzner | German, family-owned, GDPR |
| DNS | Hetzner DNS | Single provider simplicity |
| Identity | Authentik | German project lead |
| Identity | Zitadel | Swiss company, AGPL |
| File Sync | Nextcloud | German company, AGPL |
| Reverse Proxy | Traefik | French company, MIT |
| Backup | Restic → Hetzner Storage Box | Open source, EU storage |
@@ -48,13 +48,13 @@ High-level guardian of the infrastructure architecture, ensuring consistency, ma

### Does NOT Handle
- Writing OpenTofu configurations (→ Infrastructure Agent)
- Writing Ansible playbooks or roles (→ Infrastructure Agent)
- Authentik-specific configuration (→ Authentik Agent)
- Zitadel-specific configuration (→ Zitadel Agent)
- Nextcloud-specific configuration (→ Nextcloud Agent)
- Debugging application issues (→ respective App Agent)

### Defers To
- **Infrastructure Agent**: All IaC implementation questions
- **Authentik Agent**: Identity, SSO, OIDC specifics
- **Zitadel Agent**: Identity, SSO, OIDC specifics
- **Nextcloud Agent**: Nextcloud features, `occ` commands

### Escalates When
@@ -138,3 +138,6 @@ When reviewing proposed changes, verify:

**Good prompt:** "Review this PR that adds a new Ansible role"
**Response approach:** Check role follows conventions, doesn't violate isolation, uses SOPS for secrets, aligns with existing patterns.

**Redirect prompt:** "How do I configure Zitadel OIDC scopes?"
**Response:** "This is a Zitadel-specific question. Please ask the Zitadel Agent. I can help if you need to understand how it fits into the overall architecture."
@@ -46,7 +46,7 @@ Implements and maintains all Infrastructure as Code, including OpenTofu configur

## Boundaries

### Does NOT Handle
- Authentik application configuration (→ Authentik Agent)
- Zitadel application configuration (→ Zitadel Agent)
- Nextcloud application configuration (→ Nextcloud Agent)
- Architecture decisions (→ Architect Agent)
- Application-specific Docker compose sections (→ respective App Agent)
@@ -58,7 +58,7 @@ Implements and maintains all Infrastructure as Code, including OpenTofu configur

### Defers To
- **Architect Agent**: Technology choices, principle questions
- **Authentik Agent**: Authentik container config, bootstrap logic
- **Zitadel Agent**: Zitadel container config, bootstrap logic
- **Nextcloud Agent**: Nextcloud container config, `occ` commands

## Key Files (Owns)
@@ -170,8 +170,8 @@ output "client_ips" {

    - role: common
    - role: docker
    - role: traefik
    - role: authentik
      when: "'authentik' in apps"
    - role: zitadel
      when: "'zitadel' in apps"
    - role: nextcloud
      when: "'nextcloud' in apps"
    - role: backup
@@ -291,3 +291,6 @@ backup_retention_daily: 7

**Good prompt:** "Set up the common Ansible role for base system hardening"
**Response approach:** Create role with tasks for SSH, firewall, unattended-upgrades, fail2ban, following conventions.

**Redirect prompt:** "How do I configure Zitadel to create an OIDC application?"
**Response:** "Zitadel configuration is handled by the Zitadel Agent. I can set up the Ansible role structure and Docker Compose skeleton - the Zitadel Agent will fill in the application-specific configuration."
11 .gitignore vendored
@@ -3,9 +3,7 @@ secrets/**/*.yaml

secrets/**/*.yml
!secrets/**/*.sops.yaml
!secrets/.sops.yaml
secrets/clients/*.sops.yaml
keys/age-key.txt
keys/ssh/
*.key
*.pem
@@ -14,16 +12,12 @@ tofu/.terraform/

tofu/.terraform.lock.hcl
tofu/terraform.tfstate
tofu/terraform.tfstate.backup
tofu/terraform.tfstate.*.backup
tofu/*.tfvars
!tofu/terraform.tfvars.example
tofu/*.tfplan
tofu/tfplan

# Ansible
ansible/*.retry
ansible/.vault_pass
ansible/host_vars/

# OS files
.DS_Store
@@ -67,8 +61,3 @@ temp/

scripts/*-test*.py
scripts/test-*.py
**/test-oidc-provider.py

# Documentation/reports (except README.md)
*.md
!README.md
docs/
21 LICENSE
@@ -1,21 +0,0 @@

MIT License

Copyright (c) 2026 Post-X Society

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
119 PROJECT_REFERENCE.md Normal file
@@ -0,0 +1,119 @@

# Project Reference

Quick reference for essential project information and common operations.

## Project Structure

```
infrastructure/
├── ansible/                  # Ansible playbooks and roles
│   ├── hcloud.yml            # Dynamic inventory (Hetzner Cloud)
│   ├── playbooks/            # Main playbooks
│   │   ├── deploy.yml        # Deploy applications to clients
│   │   └── setup.yml         # Set up base server infrastructure
│   └── roles/                # Ansible roles (traefik, authentik, nextcloud, etc.)
├── keys/
│   └── age-key.txt           # SOPS encryption key (gitignored)
├── secrets/
│   ├── clients/              # Per-client encrypted secrets
│   │   └── test.sops.yaml
│   └── shared.sops.yaml      # Shared secrets
└── terraform/                # Infrastructure as Code (Hetzner)
```
## Essential Configuration

### SOPS Age Key
**Location**: `infrastructure/keys/age-key.txt`
**Usage**: Always set before running Ansible:
```bash
export SOPS_AGE_KEY_FILE="../keys/age-key.txt"
```
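The key referenced above has to exist before any secret can be decrypted. A minimal bootstrap sketch for a fresh machine, assuming `age` and `sops` are installed; the creation-rule file and its contents are illustrative (the repo's `.gitignore` suggests the real rules live at `secrets/.sops.yaml`), not copied from this repo:

```bash
# Generate the Age key pair once; age-keygen prints the public key to stderr
mkdir -p keys
age-keygen -o keys/age-key.txt

# Wire the printed public key into the SOPS creation rules
# (hypothetical rule body - check the repo's real .sops.yaml)
cat > secrets/.sops.yaml <<'EOF'
creation_rules:
  - path_regex: .*\.sops\.ya?ml$
    age: age1...your-public-key-here...
EOF

# Every later sops/ansible invocation needs the private key
export SOPS_AGE_KEY_FILE="keys/age-key.txt"
```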
### Hetzner Cloud Token
**Usage**: Required for dynamic inventory:
```bash
export HCLOUD_TOKEN="MlURmliUzLcGyzCWXWWsZt3DeWxKcQH9ZMGiaaNrFM3VcgnASlEWKhhxLHdWAl0J"
```
### Ansible Paths
**Working Directory**: `infrastructure/ansible/`
**Inventory**: `hcloud.yml` (dynamic, pulls from Hetzner Cloud API)
**Ansible**: `~/.local/bin/ansible-playbook` (user-local installation)
## Current Deployment

### Client: test
- **Hostname**: test (from Hetzner Cloud)
- **Authentik SSO**: https://auth.test.vrije.cloud
- **Nextcloud**: https://nextcloud.test.vrije.cloud
- **Secrets**: `secrets/clients/test.sops.yaml`
## Common Operations

### Deploy Applications
```bash
cd infrastructure/ansible
export HCLOUD_TOKEN="MlURmliUzLcGyzCWXWWsZt3DeWxKcQH9ZMGiaaNrFM3VcgnASlEWKhhxLHdWAl0J"
export SOPS_AGE_KEY_FILE="../keys/age-key.txt"

# Deploy everything to test client
~/.local/bin/ansible-playbook -i hcloud.yml playbooks/deploy.yml --limit test
```
### Check Service Status
```bash
# List inventory hosts
export HCLOUD_TOKEN="..."
~/.local/bin/ansible-inventory -i hcloud.yml --list

# Run ad-hoc commands
~/.local/bin/ansible test -i hcloud.yml -m shell -a "docker ps"
~/.local/bin/ansible test -i hcloud.yml -m shell -a "docker logs nextcloud 2>&1 | tail -50"
```
### Edit Secrets
```bash
cd infrastructure
export SOPS_AGE_KEY_FILE="keys/age-key.txt"

# Edit client secrets
sops secrets/clients/test.sops.yaml

# View decrypted secrets
sops --decrypt secrets/clients/test.sops.yaml
```
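Two related operations come up often enough to note here. Rotating recipients uses the stock `sops updatekeys` subcommand; reading a single value uses `--extract`. Both are standard SOPS features, shown against this repo's paths:

```bash
cd infrastructure
export SOPS_AGE_KEY_FILE="keys/age-key.txt"

# Re-encrypt a file after the recipient list in the creation rules changed
sops updatekeys secrets/clients/test.sops.yaml

# Read one key without opening an editor
sops --decrypt --extract '["nextcloud_admin_password"]' secrets/clients/test.sops.yaml
```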
## Architecture Notes

### Service Stack
- **Traefik**: Reverse proxy with automatic Let's Encrypt certificates
- **Authentik 2025.10.3**: Identity provider (OAuth2/OIDC, SAML, LDAP)
- **PostgreSQL 16**: Database for Authentik
- **Nextcloud 30.0.17**: File sync and collaboration
- **Redis**: Caching for Nextcloud
- **MariaDB**: Database for Nextcloud
### Docker Networks
- `traefik`: External network for all web-accessible services
- `authentik-internal`: Internal network for Authentik ↔ PostgreSQL
- `nextcloud-internal`: Internal network for Nextcloud ↔ Redis/DB
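Because `traefik` is an external network, it must be created once per host before any stack that references it is brought up; a quick sketch using the network name listed above:

```bash
# Create the shared ingress network once; compose stacks then join it
docker network create traefik

# List which containers are currently attached to it
docker network inspect traefik --format '{{range .Containers}}{{.Name}} {{end}}'
```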
### Volumes
- `authentik_authentik-db-data`: Authentik PostgreSQL data
- `authentik_authentik-media`: Authentik uploaded media
- `authentik_authentik-templates`: Custom Authentik templates
- `nextcloud_nextcloud-data`: Nextcloud files and database
## Service Credentials

### Authentik Admin
- **URL**: https://auth.test.vrije.cloud
- **Setup**: Complete initial setup at `/if/flow/initial-setup/`
- **Username**: akadmin (recommended)
### Nextcloud Admin
- **URL**: https://nextcloud.test.vrije.cloud
- **Username**: admin
- **Password**: In `secrets/clients/test.sops.yaml` → `nextcloud_admin_password`
- **SSO**: Login with Authentik button (auto-configured)
75 README.md
@@ -1,6 +1,6 @@

# De Vrije Cloud: Post-Tyranny Tech Multi-Tenant Infrastructure
# Post-X Society Multi-Tenant Infrastructure

Infrastructure as Code for our "[Vrije Cloud](https://www.vrije.cloud)", a scalable multi-tenant VPS platform running Nextcloud (file sync/share) on Hetzner Cloud.
Infrastructure as Code for a scalable multi-tenant VPS platform running Nextcloud (file sync/share) on Hetzner Cloud.

## 🏗️ Architecture
@@ -38,35 +38,21 @@ infrastructure/

**The fastest way to deploy a client:**

```bash
# 1. Ensure SOPS Age key is available (if not set)
# 1. Set environment variables
export HCLOUD_TOKEN="your-hetzner-api-token"
export SOPS_AGE_KEY_FILE="./keys/age-key.txt"

# 2. Add client to terraform.tfvars
# clients = {
#   newclient = {
#     server_type = "cx22"
#     location    = "fsn1"
#     subdomain   = "newclient"
#     apps        = ["authentik", "nextcloud"]
#   }
# }

# 3. Deploy client (fully automated, ~10-15 minutes)
# The script automatically loads the Hetzner API token from SOPS
./scripts/deploy-client.sh newclient
# 2. Deploy client (fully automated, ~10-15 minutes)
./scripts/deploy-client.sh <client_name>
```

**Note**: The Hetzner API token is now stored encrypted in `secrets/shared.sops.yaml` and loaded automatically by all scripts. No need to manually set `HCLOUD_TOKEN`.

The script will automatically:
- ✅ Generate unique SSH key pair (if missing)
- ✅ Create secrets file from template (if missing, opens in editor)
- ✅ Provision VPS on Hetzner Cloud
- ✅ Deploy Authentik (SSO/identity provider)
- ✅ Deploy Nextcloud (file storage)
- ✅ Configure OAuth2/OIDC integration
- ✅ Set up SSL certificates
- ✅ Create admin accounts
This automatically:
- ✅ Provisions VPS on Hetzner Cloud
- ✅ Deploys Authentik (SSO/identity provider)
- ✅ Deploys Nextcloud (file storage)
- ✅ Configures OAuth2/OIDC integration
- ✅ Sets up SSL certificates
- ✅ Creates admin accounts

**Result**: Fully functional system, ready to use immediately!
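A quick smoke test after the script finishes confirms both endpoints answer over TLS; a sketch using the subdomain pattern documented above (substitute the real client name for `newclient`):

```bash
# Expect 200/302 from both services once certificates are issued
for host in auth.newclient.vrije.cloud nextcloud.newclient.vrije.cloud; do
  printf '%s: ' "$host"
  curl -s -o /dev/null -w '%{http_code}\n' "https://$host"
done
```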
@@ -142,11 +128,40 @@ See [scripts/README.md](scripts/README.md) for detailed documentation.

4. **Infrastructure as Code**: All changes via version control
5. **Security by default**: Encryption, hardening, least privilege

## 📖 Documentation

- **[PROJECT_REFERENCE.md](PROJECT_REFERENCE.md)** - Essential information and common operations
- **[scripts/README.md](scripts/README.md)** - Management scripts documentation
- **[AUTOMATION_STATUS.md](docs/AUTOMATION_STATUS.md)** - Full automation details
- [Architecture Decision Record](docs/architecture-decisions.md) - Complete design rationale
- [SSO Automation](docs/sso-automation.md) - OAuth2/OIDC integration workflow
- [Agent Definitions](.claude/agents/) - Specialized AI agent instructions

## 🤝 Contributing

This project uses specialized AI agents for development:

- **Architect**: High-level design decisions
- **Infrastructure**: OpenTofu + Ansible implementation
- **Authentik**: Identity provider and SSO configuration
- **Nextcloud**: File sync/share configuration

See individual agent files in `.claude/agents/` for responsibilities.

## 🔒 Security

- Secrets are encrypted with SOPS + Age before committing
- Age private keys are **NEVER** stored in this repository
- See `.gitignore` for protected files

## 📝 License

MIT License - see [LICENSE](LICENSE) for details
TBD

## 🙋 Support

For issues or questions, please create an issue or contact us at [vrijecloud@postxsociety.org](mailto:vrijecloud@postxsociety.org)

For issues or questions, please create a GitHub issue with the appropriate label:
- `agent:architect` - Architecture/design questions
- `agent:infrastructure` - IaC implementation
- `agent:authentik` - Identity provider/SSO
- `agent:nextcloud` - File sync/share
@@ -84,7 +84,7 @@ ansible/

│   ├── common/     # Base system hardening
│   ├── docker/     # Docker + Docker Compose
│   ├── traefik/    # Reverse proxy
│   ├── authentik/  # Identity provider (OAuth2/OIDC SSO)
│   ├── zitadel/    # Identity provider
│   ├── nextcloud/  # File sync/share
│   └── backup/     # Restic backup
└── group_vars/     # Group variables
@@ -120,8 +120,8 @@ Reverse proxy with automatic SSL:

- HTTP to HTTPS redirection
- Dashboard (optional)

### authentik
Identity provider deployment (OAuth2/OIDC SSO)
### zitadel
Identity provider deployment (see Zitadel Agent for details)

### nextcloud
File sync/share deployment (see Nextcloud Agent for details)
@@ -273,9 +273,10 @@ ansible-playbook playbooks/setup.yml -vvv  # Very verbose

## Next Steps

After initial setup:
1. Deploy applications: Run `playbooks/deploy.yml` to deploy Authentik and Nextcloud
2. Configure backups: Use `backup` role
3. Set up monitoring: Configure Uptime Kuma
1. Deploy Zitadel: Follow Zitadel Agent instructions
2. Deploy Nextcloud: Follow Nextcloud Agent instructions
3. Configure backups: Use `backup` role
4. Set up monitoring: Configure Uptime Kuma

## Resources
@@ -1,6 +1,6 @@

[defaults]
# Inventory configuration
# inventory = hcloud.yml  # Disabled - use -i flag instead
inventory = hcloud.yml
host_key_checking = False
interpreter_python = auto_silent
@@ -26,8 +26,8 @@ timeout = 30

roles_path = ./roles

[inventory]
# Enable inventory plugins
enable_plugins = hetzner.hcloud.hcloud, ini, yaml, auto
# Enable Hetzner Cloud dynamic inventory plugin
enable_plugins = hetzner.hcloud.hcloud

[privilege_escalation]
become = True
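After narrowing `enable_plugins` to the hcloud plugin alone, it is worth confirming the dynamic inventory still parses; the standard ansible CLI suffices:

```bash
export HCLOUD_TOKEN="..."

# Show the discovered hosts as a tree
ansible-inventory -i hcloud.yml --graph

# Verify SSH connectivity to every discovered host
ansible all -i hcloud.yml -m ping
```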
@@ -37,4 +37,4 @@ become_ask_pass = False

[ssh_connection]
pipelining = True
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o IdentitiesOnly=yes
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
@@ -1,4 +0,0 @@

[clients]
valk ansible_host=78.47.191.38 ansible_user=root ansible_ssh_private_key_file=../keys/ssh/valk
kikker ansible_host=23.88.124.67 ansible_user=root ansible_ssh_private_key_file=../keys/ssh/kikker
das ansible_host=49.13.49.246 ansible_user=root ansible_ssh_private_key_file=../keys/ssh/das
@@ -1,8 +0,0 @@

all:
  children:
    clients:
      hosts:
        valk:
          ansible_host: 78.47.191.38
          ansible_user: root
          ansible_ssh_private_key_file: ../keys/ssh/valk
@@ -1,124 +0,0 @@

---
# Configure Diun to use webhook notifications instead of email
# This playbook updates all servers to send container update notifications
# to a Matrix room via webhook instead of individual emails per server
#
# Usage:
#   ansible-playbook -i hcloud.yml playbooks/260123-configure-diun-webhook.yml
#
# Or for specific servers:
#   ansible-playbook -i hcloud.yml playbooks/260123-configure-diun-webhook.yml --limit das,uil,vos

- name: Configure Diun webhook notifications on all servers
  hosts: all
  become: yes

  vars:
    # Diun base configuration (from role defaults)
    diun_version: "latest"
    diun_log_level: "info"
    diun_watch_workers: 10
    diun_watch_all: true
    diun_exclude_containers: []
    diun_first_check_notif: false

    # Schedule: Daily at 6am UTC
    diun_schedule: "0 6 * * *"

    # Webhook configuration - sends to Matrix via custom webhook
    diun_notif_enabled: true
    diun_notif_type: webhook
    diun_webhook_endpoint: "https://diun-webhook.postxsociety.cloud"
    diun_webhook_method: POST
    diun_webhook_headers:
      Content-Type: application/json

    # Disable email notifications
    diun_email_enabled: false

    # SMTP defaults (not used when email disabled, but needed for template)
    diun_smtp_host: "smtp.eu.mailgun.org"
    diun_smtp_port: 587
    diun_smtp_from: "{{ client_name }}@mg.vrije.cloud"
    diun_smtp_to: "pieter@postxsociety.org"

    # Optional notification defaults (unused but needed for template)
    diun_slack_webhook_url: ""
    diun_matrix_enabled: false
    diun_matrix_homeserver_url: ""
    diun_matrix_user: ""
    diun_matrix_password: ""
    diun_matrix_room_id: ""

  pre_tasks:
    - name: Gather facts
      setup:

    - name: Determine client name from hostname
      set_fact:
        client_name: "{{ inventory_hostname }}"

    - name: Load client secrets
      community.sops.load_vars:
        file: "{{ playbook_dir }}/../../secrets/clients/{{ client_name }}.sops.yaml"
        name: client_secrets
        age_keyfile: "{{ lookup('env', 'SOPS_AGE_KEY_FILE') }}"
      no_log: true

    - name: Load shared secrets
      community.sops.load_vars:
        file: "{{ playbook_dir }}/../../secrets/shared.sops.yaml"
        name: shared_secrets
        age_keyfile: "{{ lookup('env', 'SOPS_AGE_KEY_FILE') }}"
      no_log: true

    - name: Merge shared secrets into client_secrets
      set_fact:
        client_secrets: "{{ client_secrets | combine(shared_secrets) }}"
      no_log: true

  tasks:
    - name: Set SMTP credentials (required by template even if unused)
      set_fact:
        diun_smtp_username_final: "{{ client_secrets.mailgun_smtp_user | default('') }}"
        diun_smtp_password_final: ""
      no_log: true

    - name: Display configuration summary
      debug:
        msg: |
          Configuring Diun on {{ inventory_hostname }}:
          - Webhook endpoint: {{ diun_webhook_endpoint }}
          - Email notifications: {{ 'enabled' if diun_email_enabled else 'disabled' }}
          - Schedule: {{ diun_schedule }} (Daily at 6am UTC)

    - name: Deploy Diun configuration with webhook
      template:
        src: "{{ playbook_dir }}/../roles/diun/templates/diun.yml.j2"
        dest: /opt/docker/diun/diun.yml
        mode: '0644'
      notify: Restart Diun

    - name: Restart Diun to apply new configuration
      community.docker.docker_compose_v2:
        project_src: /opt/docker/diun
        state: restarted

    - name: Wait for Diun to start
      pause:
        seconds: 5

    - name: Check Diun status
      shell: docker ps --filter name=diun --format "{{ '{{' }}.Status{{ '}}' }}"
      register: diun_status
      changed_when: false

    - name: Display Diun status
      debug:
        msg: "Diun status on {{ inventory_hostname }}: {{ diun_status.stdout }}"

  handlers:
    - name: Restart Diun
      community.docker.docker_compose_v2:
        project_src: /opt/docker/diun
        state: restarted
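Before pointing every server at it, the webhook bridge can be exercised by hand. A sketch with a stand-in payload; the JSON fields Diun actually sends are not documented in this playbook, so treat the body as illustrative only:

```bash
# Hand-rolled POST to the Matrix webhook bridge (illustrative payload, not Diun's real schema)
curl -sS -X POST "https://diun-webhook.postxsociety.cloud" \
  -H "Content-Type: application/json" \
  -d '{"image": "nextcloud:30", "status": "update_available"}'
```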
@@ -1,123 +0,0 @@

---
# Nextcloud Upgrade Stage Task File (Fixed Version)
# This file is included by 260123-upgrade-nextcloud-v2.yml for each upgrade stage
# Do not run directly
#
# Improvements:
# - Better version detection (actual running version)
# - Proper error handling
# - Clearer status messages
# - Maintenance mode handling

- name: "Stage {{ stage.stage }}: Starting v{{ stage.from }} → v{{ stage.to }}"
  debug:
    msg: |
      ============================================================
      Stage {{ stage.stage }}: Upgrading v{{ stage.from }} → v{{ stage.to }}
      ============================================================

- name: "Stage {{ stage.stage }}: Get current running version"
  shell: docker exec -u www-data nextcloud php occ status --output=json
  register: stage_version_check
  changed_when: false

- name: "Stage {{ stage.stage }}: Parse current version"
  set_fact:
    stage_current: "{{ (stage_version_check.stdout | from_json).versionstring }}"

- name: "Stage {{ stage.stage }}: Display current version"
  debug:
    msg: "Currently running: v{{ stage_current }}"

- name: "Stage {{ stage.stage }}: Check if already on target version"
  debug:
    msg: "✓ Already on v{{ stage_current }} - skipping this stage"
  when: stage_current is version(stage.to, '>=')

- name: "Stage {{ stage.stage }}: Skip if already upgraded"
  meta: end_play
  when: stage_current is version(stage.to, '>=')

- name: "Stage {{ stage.stage }}: Verify version is compatible"
  fail:
    msg: "Cannot upgrade from v{{ stage_current }} (expected v{{ stage.from }}.x)"
  when: stage_current is version(stage.from, '<') or (stage_current is version(stage.to, '>='))

- name: "Stage {{ stage.stage }}: Update docker-compose.yml to v{{ stage.to }}"
  replace:
    path: "{{ nextcloud_base_dir }}/docker-compose.yml"
    regexp: 'image:\s*nextcloud:{{ stage.from }}'
    replace: 'image: nextcloud:{{ stage.to }}'

- name: "Stage {{ stage.stage }}: Verify docker-compose.yml was updated"
  shell: grep "image{{ ':' }} nextcloud{{ ':' }}{{ stage.to }}" {{ nextcloud_base_dir }}/docker-compose.yml
  register: compose_verify
  changed_when: false
  failed_when: compose_verify.rc != 0

- name: "Stage {{ stage.stage }}: Pull Nextcloud v{{ stage.to }} image"
  shell: docker pull nextcloud:{{ stage.to }}
  register: image_pull
  changed_when: "'Downloaded' in image_pull.stdout or 'Pulling' in image_pull.stdout or 'Downloaded newer' in image_pull.stderr"

- name: "Stage {{ stage.stage }}: Stop containers before upgrade"
  community.docker.docker_compose_v2:
    project_src: "{{ nextcloud_base_dir }}"
    state: stopped

- name: "Stage {{ stage.stage }}: Start containers with new version"
  community.docker.docker_compose_v2:
    project_src: "{{ nextcloud_base_dir }}"
    state: present

- name: "Stage {{ stage.stage }}: Wait for Nextcloud container to be ready"
  shell: |
    count=0
    max_attempts=60
    while [ $count -lt $max_attempts ]; do
      if docker exec nextcloud curl -f http://localhost:80/status.php 2>/dev/null; then
        echo "Container ready after $count attempts"
        exit 0
      fi
      sleep 5
      count=$((count + 1))
    done
    echo "Timeout waiting for container after $max_attempts attempts"
    exit 1
  register: container_ready
  changed_when: false

- name: "Stage {{ stage.stage }}: Run occ upgrade"
  shell: docker exec -u www-data nextcloud php occ upgrade --no-interaction
  register: occ_upgrade
  changed_when: "'Update successful' in occ_upgrade.stdout or 'upgraded' in occ_upgrade.stdout"
  failed_when:
    - occ_upgrade.rc != 0
    - "'already latest version' not in occ_upgrade.stdout"
    - "'No upgrade required' not in occ_upgrade.stdout"

- name: "Stage {{ stage.stage }}: Display upgrade output"
  debug:
    msg: "{{ occ_upgrade.stdout_lines }}"

- name: "Stage {{ stage.stage }}: Verify upgrade succeeded"
  shell: docker exec -u www-data nextcloud php occ status --output=json
  register: stage_verify
  changed_when: false

- name: "Stage {{ stage.stage }}: Parse upgraded version"
  set_fact:
    stage_upgraded: "{{ (stage_verify.stdout | from_json).versionstring }}"

- name: "Stage {{ stage.stage }}: Check upgrade was successful"
  fail:
    msg: "Upgrade to v{{ stage.to }} failed - still on v{{ stage_upgraded }}"
  when: stage_upgraded is version(stage.to, '<')

- name: "Stage {{ stage.stage }}: Success"
  debug:
    msg: |
      ============================================================
      ✓ Stage {{ stage.stage }} completed successfully
      Upgraded from v{{ stage_current }} to v{{ stage_upgraded }}
      ============================================================
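Every version gate in this file keys off `occ status --output=json`; when a stage misbehaves, the same value the tasks see can be read by hand (container name as used throughout this file):

```bash
# Print the running Nextcloud version exactly as the stage tasks parse it
docker exec -u www-data nextcloud php occ status --output=json \
  | python3 -c 'import json, sys; print(json.load(sys.stdin)["versionstring"])'
```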
@@ -1,378 +0,0 @@

---
# Nextcloud Major Version Upgrade Playbook (Fixed Version)
# Created: 2026-01-23
# Purpose: Safely upgrade Nextcloud from v30 to v32 via v31 (staged upgrade)
#
# Usage:
#   cd ansible/
#   HCLOUD_TOKEN="..." ansible-playbook -i hcloud.yml \
#     playbooks/260123-upgrade-nextcloud-v2.yml --limit <server> \
#     --private-key "../keys/ssh/<server>"
#
# Requirements:
# - HCLOUD_TOKEN environment variable set
# - SSH access to target server
# - Sufficient disk space for backups
#
# Improvements over v1:
# - Idempotent: can be re-run safely after failures
# - Better version state tracking (reads actual running version)
# - Proper maintenance mode handling
# - Stage skipping if already on target version
# - Better error messages and rollback instructions

- name: Upgrade Nextcloud from v30 to v32 (staged)
  hosts: all
  become: true
  gather_facts: true

  vars:
    nextcloud_base_dir: "/opt/nextcloud"
    backup_dir: "/root/nextcloud-backup-{{ ansible_date_time.iso8601_basic_short }}"
    target_version: "32"

  tasks:
    # ============================================================
    # PRE-UPGRADE CHECKS
    # ============================================================

    - name: Display upgrade plan
      debug:
        msg: |
          ============================================================
          Nextcloud Upgrade Plan - {{ inventory_hostname }}
          ============================================================

          Target: Nextcloud v{{ target_version }}
          Backup: {{ backup_dir }}

          This playbook will:
          1. Detect current version
          2. Create backup if needed
          3. Upgrade through required stages (v30→v31→v32)
          4. Skip stages already completed
          5. Re-enable apps and disable maintenance mode

          Estimated time: 10-20 minutes
          ============================================================

    - name: Check if Nextcloud is installed
      shell: docker ps --filter "name=^nextcloud$" --format "{{ '{{' }}.Names{{ '}}' }}"
      register: nextcloud_running
      changed_when: false
      failed_when: false

    - name: Fail if Nextcloud is not running
      fail:
        msg: "Nextcloud container is not running on {{ inventory_hostname }}"
      when: "'nextcloud' not in nextcloud_running.stdout"

    - name: Get current Nextcloud version
      shell: docker exec -u www-data nextcloud php occ status --output=json
      register: nextcloud_status
      changed_when: false
      failed_when: false

    - name: Parse Nextcloud status
      set_fact:
        nc_status: "{{ nextcloud_status.stdout | from_json }}"
      when: nextcloud_status.rc == 0

    - name: Handle Nextcloud in maintenance mode
      block:
        - name: Display maintenance mode warning
          debug:
            msg: "⚠ Nextcloud is in maintenance mode. Attempting to disable it..."

        - name: Disable maintenance mode if enabled
          shell: docker exec -u www-data nextcloud php occ maintenance:mode --off
          register: maint_off
          changed_when: "'disabled' in maint_off.stdout"

        - name: Wait a moment for mode change
          pause:
            seconds: 2

        - name: Re-check status after disabling maintenance mode
          shell: docker exec -u www-data nextcloud php occ status --output=json
          register: nextcloud_status_retry
          changed_when: false

        - name: Update status
          set_fact:
            nc_status: "{{ nextcloud_status_retry.stdout | from_json }}"
      when: nextcloud_status.rc != 0 or (nc_status is defined and nc_status.maintenance | bool)

    - name: Display current version
      debug:
        msg: |
          Current: v{{ nc_status.versionstring }}
          Target: v{{ target_version }}
          Maintenance mode: {{ nc_status.maintenance }}

    - name: Check if already on target version
      debug:
        msg: "✓ Nextcloud is already on v{{ nc_status.versionstring }} - nothing to do"
      when: nc_status.versionstring is version(target_version, '>=')

    - name: End play if already upgraded
      meta: end_host
      when: nc_status.versionstring is version(target_version, '>=')

    - name: Check disk space
      shell: df -BG {{ nextcloud_base_dir }} | tail -1 | awk '{print $4}' | sed 's/G//'
      register: disk_space_gb
      changed_when: false

    - name: Verify sufficient disk space
      fail:
        msg: "Insufficient disk space: {{ disk_space_gb.stdout }}GB available, need at least 5GB"
      when: disk_space_gb.stdout | int < 5

    - name: Display available disk space
      debug:
        msg: "Available disk space: {{ disk_space_gb.stdout }}GB"

    # ============================================================
    # BACKUP PHASE (only if not already backed up)
    # ============================================================

    - name: Check if backup already exists
      stat:
        path: "{{ backup_dir }}"
      register: backup_exists

    - name: Skip backup if already exists
      debug:
        msg: "✓ Backup already exists at {{ backup_dir }} - skipping backup phase"
      when: backup_exists.stat.exists

    - name: Create backup
      block:
        - name: Create backup directory
          file:
            path: "{{ backup_dir }}"
            state: directory
            mode: '0700'

        - name: Enable maintenance mode for backup
          shell: docker exec -u www-data nextcloud php occ maintenance:mode --on
          register: maintenance_on
          changed_when: "'enabled' in maintenance_on.stdout"

        - name: Backup Nextcloud database
          shell: |
            docker exec nextcloud-db pg_dump -U nextcloud nextcloud | gzip > {{ backup_dir }}/database.sql.gz
          args:
            creates: "{{ backup_dir }}/database.sql.gz"

        - name: Get database backup size
          stat:
            path: "{{ backup_dir }}/database.sql.gz"
          register: db_backup

        - name: Display database backup info
          debug:
            msg: "Database backup: {{ (db_backup.stat.size / 1024 / 1024) | round(2) }} MB"

        - name: Stop Nextcloud containers for volume backup
          community.docker.docker_compose_v2:
            project_src: "{{ nextcloud_base_dir }}"
            state: stopped

        - name: Backup Nextcloud app volume
          shell: |
            tar -czf {{ backup_dir }}/nextcloud-app-volume.tar.gz -C /var/lib/docker/volumes/nextcloud-app/_data .
          args:
            creates: "{{ backup_dir }}/nextcloud-app-volume.tar.gz"

        - name: Backup Nextcloud database volume
          shell: |
            tar -czf {{ backup_dir }}/nextcloud-db-volume.tar.gz -C /var/lib/docker/volumes/nextcloud-db-data/_data .
          args:
            creates: "{{ backup_dir }}/nextcloud-db-volume.tar.gz"

        - name: Copy current docker-compose.yml to backup
          copy:
            src: "{{ nextcloud_base_dir }}/docker-compose.yml"
            dest: "{{ backup_dir }}/docker-compose.yml.backup"
            remote_src: true

        - name: Display backup summary
          debug:
            msg: |
              ============================================================
              ✓ Backup completed: {{ backup_dir }}
              ============================================================

              To restore from backup if needed:
              1. cd {{ nextcloud_base_dir }} && docker compose down
              2. tar -xzf {{ backup_dir }}/nextcloud-app-volume.tar.gz -C /var/lib/docker/volumes/nextcloud-app/_data
              3. tar -xzf {{ backup_dir }}/nextcloud-db-volume.tar.gz -C /var/lib/docker/volumes/nextcloud-db-data/_data
              4. cp {{ backup_dir }}/docker-compose.yml.backup {{ nextcloud_base_dir }}/docker-compose.yml
              5. cd {{ nextcloud_base_dir }} && docker compose up -d
              ============================================================

        - name: Restart containers after backup
          community.docker.docker_compose_v2:
            project_src: "{{ nextcloud_base_dir }}"
            state: present

        - name: Wait for Nextcloud to be ready
          shell: |
            count=0
            max_attempts=24
            while [ $count -lt $max_attempts ]; do
              if docker exec nextcloud curl -f http://localhost:80/status.php 2>/dev/null; then
                echo "Ready after $count attempts"
                exit 0
              fi
              sleep 5
              count=$((count + 1))
            done
            echo "Timeout after $max_attempts attempts"
            exit 1
          register: nextcloud_ready
          changed_when: false

        - name: Disable maintenance mode after backup
          shell: docker exec -u www-data nextcloud php occ maintenance:mode --off
          register: maint_off_backup
          changed_when: "'disabled' in maint_off_backup.stdout"

      when: not backup_exists.stat.exists

    # ============================================================
    # DETERMINE UPGRADE PATH
    # ============================================================

    - name: Initialize stage counter
      set_fact:
        stage_number: 0

    # ============================================================
    # STAGED UPGRADE LOOP - Dynamic version checking
    # ============================================================

    - name: Stage 1 - Upgrade v30→v31 if needed
      block:
        - name: Get current version
          shell: docker exec -u www-data nextcloud php occ status --output=json
          register: version_check
          changed_when: false

        - name: Parse version
          set_fact:
            current_version: "{{ (version_check.stdout | from_json).versionstring }}"

        - name: Check if v30→v31 upgrade needed
          set_fact:
            needs_v31_upgrade: "{{ current_version is version('30', '>=') and current_version is version('31', '<') }}"

        - name: Perform v30→v31 upgrade
          include_tasks: "{{ playbook_dir }}/260123-upgrade-nextcloud-stage-v2.yml"
          vars:
            stage:
              from: "30"
              to: "31"
              stage: 1
          when: needs_v31_upgrade

    - name: Stage 2 - Upgrade v31→v32 if needed
      block:
        - name: Get current version
          shell: docker exec -u www-data nextcloud php occ status --output=json
          register: version_check
          changed_when: false

        - name: Parse version
          set_fact:
            current_version: "{{ (version_check.stdout | from_json).versionstring }}"

        - name: Check if v31→v32 upgrade needed
          set_fact:
            needs_v32_upgrade: "{{ current_version is version('31', '>=') and current_version is version('32', '<') }}"

        - name: Perform v31→v32 upgrade
          include_tasks: "{{ playbook_dir }}/260123-upgrade-nextcloud-stage-v2.yml"
          vars:
            stage:
              from: "31"
              to: "32"
              stage: 2
          when: needs_v32_upgrade

    # ============================================================
    # POST-UPGRADE
    # ============================================================

    - name: Get final version
      shell: docker exec -u www-data nextcloud php occ status --output=json
      register: final_status
      changed_when: false

    - name: Parse final version
      set_fact:
        final_version: "{{ (final_status.stdout | from_json).versionstring }}"

    - name: Verify upgrade to target version
      fail:
        msg: "Upgrade incomplete - on v{{ final_version }}, expected v{{ target_version }}.x"
      when: final_version is version(target_version, '<')

    - name: Run database optimizations
      shell: docker exec -u www-data nextcloud php occ db:add-missing-indices
      register: db_indices
      changed_when: false
      failed_when: false

    - name: Run bigint conversion
      shell: docker exec -u www-data nextcloud php occ db:convert-filecache-bigint --no-interaction
      register: db_bigint
      changed_when: false
      failed_when: false
      timeout: 600

    - name: Re-enable critical apps
      shell: |
        docker exec -u www-data nextcloud php occ app:enable user_oidc || true
        docker exec -u www-data nextcloud php occ app:enable richdocuments || true
      register: apps_enabled
      changed_when: false

    - name: Ensure maintenance mode is disabled
      shell: docker exec -u www-data nextcloud php occ maintenance:mode --off
      register: final_maint_off
      changed_when: "'disabled' in final_maint_off.stdout"
      failed_when: false

    - name: Update docker-compose.yml to use latest tag
      replace:
        path: "{{ nextcloud_base_dir }}/docker-compose.yml"
        regexp: 'image:\s*nextcloud:\d+'
        replace: 'image: nextcloud:latest'

    - name: Display success message
      debug:
        msg: |
          ============================================================
          ✓ UPGRADE SUCCESSFUL!
          ============================================================

          Server: {{ inventory_hostname }}
          From: v30.x
          To: v{{ final_version }}

          Backup: {{ backup_dir }}

          Next steps:
          1. Test login: https://nextcloud.{{ client_domain }}
          2. Test OIDC: Click "Login with Authentik"
          3. Test file operations
          4. Test Collabora Office

          If all tests pass, remove backup:
          rm -rf {{ backup_dir }}

          docker-compose.yml now uses 'nextcloud:latest' tag
          ============================================================
@@ -1,156 +0,0 @@

---
# Configure Diun to disable watchRepo and add Docker Hub authentication
# This playbook updates all servers to:
# - Only watch specific image tags (not entire repositories) to reduce API calls
# - Add Docker Hub authentication for higher rate limits
#
# Background:
# - watchRepo: true checks ALL tags in a repository (hundreds of API calls)
# - watchRepo: false only checks the specific tag being used (1-2 API calls)
# - Docker Hub auth increases rate limit from 100 to 5000 pulls per 6 hours
#
# Usage:
#   cd ansible/
#   SOPS_AGE_KEY_FILE="../keys/age-key.txt" HCLOUD_TOKEN="..." \
#     ansible-playbook -i hcloud.yml playbooks/260124-configure-diun-watchrepo.yml
#
# Or for specific servers:
#   SOPS_AGE_KEY_FILE="../keys/age-key.txt" HCLOUD_TOKEN="..." \
#     ansible-playbook -i hcloud.yml playbooks/260124-configure-diun-watchrepo.yml \
#     --limit das,uil,vos --private-key "../keys/ssh/das"

- name: Configure Diun watchRepo and Docker Hub authentication
  hosts: all
  become: yes

  vars:
    # Diun base configuration
    diun_version: "latest"
    diun_log_level: "info"
    diun_watch_workers: 10
    diun_watch_all: true
    diun_exclude_containers: []
    diun_first_check_notif: false

    # Schedule: Weekly on Monday at 6am UTC (to reduce API calls)
    diun_schedule: "0 6 * * 1"

    # Disable watchRepo - only check the specific tags we're using
    diun_watch_repo: false

    # Webhook configuration - sends to Matrix via custom webhook
    diun_notif_enabled: true
    diun_notif_type: webhook
    diun_webhook_endpoint: "https://diun-webhook.postxsociety.cloud"
    diun_webhook_method: POST
    diun_webhook_headers:
      Content-Type: application/json

    # Disable email notifications
    diun_email_enabled: false

    # SMTP defaults (not used when email disabled, but needed for template)
    diun_smtp_host: "smtp.eu.mailgun.org"
    diun_smtp_port: 587
    diun_smtp_from: "{{ client_name }}@mg.vrije.cloud"
    diun_smtp_to: "pieter@postxsociety.org"

    # Optional notification defaults (unused but needed for template)
    diun_slack_webhook_url: ""
    diun_matrix_enabled: false
    diun_matrix_homeserver_url: ""
    diun_matrix_user: ""
    diun_matrix_password: ""
    diun_matrix_room_id: ""

  pre_tasks:
    - name: Gather facts
      setup:

    - name: Determine client name from hostname
      set_fact:
        client_name: "{{ inventory_hostname }}"

    - name: Load client secrets
      community.sops.load_vars:
        file: "{{ playbook_dir }}/../../secrets/clients/{{ client_name }}.sops.yaml"
        name: client_secrets
        age_keyfile: "{{ lookup('env', 'SOPS_AGE_KEY_FILE') }}"
      no_log: true

    - name: Load shared secrets
      community.sops.load_vars:
        file: "{{ playbook_dir }}/../../secrets/shared.sops.yaml"
        name: shared_secrets
        age_keyfile: "{{ lookup('env', 'SOPS_AGE_KEY_FILE') }}"
      no_log: true

    - name: Merge shared secrets into client_secrets
      set_fact:
        client_secrets: "{{ client_secrets | combine(shared_secrets) }}"
      no_log: true

  tasks:
    - name: Set SMTP credentials (required by template even if unused)
      set_fact:
        diun_smtp_username_final: "{{ client_secrets.mailgun_smtp_user | default('') }}"
        diun_smtp_password_final: ""
      no_log: true

    - name: Set Docker Hub credentials for higher rate limits
      set_fact:
        diun_docker_hub_username: "{{ client_secrets.docker_hub_username }}"
        diun_docker_hub_password: "{{ client_secrets.docker_hub_password }}"
      no_log: true

    - name: Display configuration summary
      debug:
        msg: |
          Configuring Diun on {{ inventory_hostname }}:
          - Webhook endpoint: {{ diun_webhook_endpoint }}
          - Email notifications: {{ 'enabled' if diun_email_enabled else 'disabled' }}
          - Schedule: {{ diun_schedule }} (Weekly on Monday at 6am UTC)
          - Watch entire repositories: {{ 'yes' if diun_watch_repo else 'no (only specific tags)' }}
          - Docker Hub auth: {{ 'enabled' if diun_docker_hub_username else 'disabled' }}

    - name: Deploy Diun configuration with watchRepo disabled and Docker Hub auth
      template:
        src: "{{ playbook_dir }}/../roles/diun/templates/diun.yml.j2"
        dest: /opt/docker/diun/diun.yml
        mode: '0644'
      notify: Restart Diun

    - name: Restart Diun to apply new configuration
      community.docker.docker_compose_v2:
        project_src: /opt/docker/diun
        state: restarted

    - name: Wait for Diun to start
      pause:
        seconds: 5

    - name: Check Diun status
      shell: docker ps --filter name=diun --format "{{ '{{' }}.Status{{ '}}' }}"
      register: diun_status
      changed_when: false

    - name: Display Diun status
      debug:
        msg: "Diun status on {{ inventory_hostname }}: {{ diun_status.stdout }}"

    - name: Verify Diun configuration
      shell: docker exec diun cat /diun.yml | grep -E "(watchRepo|regopts)" || echo "Config deployed"
      register: diun_config_check
      changed_when: false

    - name: Display configuration verification
      debug:
        msg: |
          Configuration applied on {{ inventory_hostname }}:
          {{ diun_config_check.stdout }}

  handlers:
    - name: Restart Diun
      community.docker.docker_compose_v2:
        project_src: /opt/docker/diun
        state: restarted
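Whether the rate-limit headroom actually improved can be checked against Docker Hub's documented rate-limit test endpoint; the anonymous variant is shown (add `-u user:token` to the first request to test the authenticated limit):

```bash
# Fetch a registry token, then read the RateLimit-* headers
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" \
  | python3 -c 'import json, sys; print(json.load(sys.stdin)["token"])')
curl -sI -H "Authorization: Bearer $TOKEN" \
  "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" | grep -i ratelimit
```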
@@ -1,151 +0,0 @@

---
# Nextcloud Maintenance Playbook
# Created: 2026-01-24
# Purpose: Run database and file maintenance tasks on Nextcloud instances
#
# This playbook performs:
# 1. Add missing database indices (improves query performance)
# 2. Update mimetypes database (ensures proper file type handling)
#
# Usage:
#   cd ansible/
#   HCLOUD_TOKEN="..." ansible-playbook -i hcloud.yml \
#     playbooks/nextcloud-maintenance.yml --limit <server> \
#     --private-key "../keys/ssh/<server>"
#
# To run on all servers:
#   HCLOUD_TOKEN="..." ansible-playbook -i hcloud.yml \
#     playbooks/nextcloud-maintenance.yml \
#     --private-key "../keys/ssh/<server>"
#
# Requirements:
# - HCLOUD_TOKEN environment variable set
# - SSH access to target server(s)
# - Nextcloud container must be running

- name: Nextcloud Maintenance Tasks
  hosts: all
  become: true
  gather_facts: true

  vars:
    nextcloud_container: "nextcloud"

  tasks:
    # ============================================================
    # PRE-CHECK
    # ============================================================

    - name: Display maintenance plan
      debug:
        msg: |
          ============================================================
          Nextcloud Maintenance - {{ inventory_hostname }}
          ============================================================

          This playbook will:
          1. Add missing database indices
          2. Update mimetypes database
          3. Display results

          Estimated time: 1-3 minutes per server
          ============================================================

    - name: Check if Nextcloud container is running
      shell: docker ps --filter "name=^{{ nextcloud_container }}$" --format "{{ '{{' }}.Names{{ '}}' }}"
      register: nextcloud_running
      changed_when: false
      failed_when: false

    - name: Fail if Nextcloud is not running
      fail:
        msg: "Nextcloud container is not running on {{ inventory_hostname }}"
      when: "'nextcloud' not in nextcloud_running.stdout"

    - name: Get current Nextcloud version
      shell: docker exec -u www-data {{ nextcloud_container }} php occ --version
      register: nextcloud_version
      changed_when: false

    - name: Display Nextcloud version
      debug:
        msg: "{{ nextcloud_version.stdout }}"

    # ============================================================
    # TASK 1: ADD MISSING DATABASE INDICES
    # ============================================================

    - name: Check for missing database indices
      shell: docker exec -u www-data {{ nextcloud_container }} php occ db:add-missing-indices
      register: db_indices_result
      changed_when: "'updated successfully' in db_indices_result.stdout"
      failed_when: db_indices_result.rc != 0

    - name: Display database indices results
      debug:
        msg: |
          ============================================================
          Database Indices Results
          ============================================================
          {{ db_indices_result.stdout }}
          ============================================================

    # ============================================================
    # TASK 2: UPDATE MIMETYPES DATABASE
    # ============================================================

    - name: Update mimetypes database
      shell: docker exec -u www-data {{ nextcloud_container }} php occ maintenance:mimetype:update-db
      register: mimetype_result
      changed_when: "'Added' in mimetype_result.stdout"
      failed_when: mimetype_result.rc != 0

    - name: Parse mimetype results
      set_fact:
        mimetypes_added: "{{ mimetype_result.stdout | regex_search('Added (\\d+) new mimetypes', '\\1') | default(['0'], true) | first }}"

    - name: Display mimetype results
      debug:
        msg: |
          ============================================================
          Mimetype Update Results
          ============================================================
          Mimetypes added: {{ mimetypes_added }}
          {% if mimetypes_added | int > 0 %}
          ✓ Mimetype database updated successfully
          {% else %}
          ✓ All mimetypes already up to date
          {% endif %}
          ============================================================

    # ============================================================
    # SUMMARY
    # ============================================================

    - name: Display maintenance summary
      debug:
        msg: |
          ============================================================
          ✓ MAINTENANCE COMPLETED - {{ inventory_hostname }}
          ============================================================

          Server: {{ inventory_hostname }}
          Version: {{ nextcloud_version.stdout }}

          Tasks completed:
          {% if db_indices_result.changed %}
          ✓ Database indices: Updated
          {% else %}
          ✓ Database indices: Already optimized
          {% endif %}
          {% if mimetype_result.changed %}
          ✓ Mimetypes: Added {{ mimetypes_added }} new types
          {% else %}
          ✓ Mimetypes: Already up to date
          {% endif %}

          Next steps:
          - Check admin interface for any remaining warnings
          - Warnings may take a few minutes to clear from cache

          ============================================================
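Both maintenance operations are plain `occ` calls, so they can also be run ad hoc on one server before scheduling the playbook fleet-wide:

```bash
docker exec -u www-data nextcloud php occ db:add-missing-indices
docker exec -u www-data nextcloud php occ maintenance:mimetype:update-db
```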
@@ -1,53 +0,0 @@

---
# Configure email for a single server
- hosts: all
  gather_facts: yes
  tasks:
    - name: Load client secrets
      community.sops.load_vars:
        file: "{{ playbook_dir }}/../../secrets/clients/{{ inventory_hostname }}.sops.yaml"
        name: client_secrets
        age_keyfile: "{{ lookup('env', 'SOPS_AGE_KEY_FILE') }}"
      no_log: true

    - name: Load shared secrets
      community.sops.load_vars:
        file: "{{ playbook_dir }}/../../secrets/shared.sops.yaml"
        name: shared_secrets
        age_keyfile: "{{ lookup('env', 'SOPS_AGE_KEY_FILE') }}"
      no_log: true

    - name: Merge secrets
      set_fact:
        client_secrets: "{{ client_secrets | combine(shared_secrets) }}"
      no_log: true

    - name: Include mailgun role
      include_role:
        name: mailgun

    - name: Configure Nextcloud email if credentials available
      shell: |
        docker exec -u www-data nextcloud php occ config:system:set mail_smtpmode --value="smtp"
        docker exec -u www-data nextcloud php occ config:system:set mail_smtpsecure --value="tls"
        docker exec -u www-data nextcloud php occ config:system:set mail_smtphost --value="smtp.eu.mailgun.org"
        docker exec -u www-data nextcloud php occ config:system:set mail_smtpport --value="587"
        docker exec -u www-data nextcloud php occ config:system:set mail_smtpauth --value="1"
        docker exec -u www-data nextcloud php occ config:system:set mail_smtpname --value="{{ mailgun_smtp_user }}"
        docker exec -u www-data nextcloud php occ config:system:set mail_smtppassword --value="{{ mailgun_smtp_password }}"
        docker exec -u www-data nextcloud php occ config:system:set mail_from_address --value="{{ inventory_hostname }}"
        docker exec -u www-data nextcloud php occ config:system:set mail_domain --value="mg.vrije.cloud"
      when: mailgun_smtp_user is defined
      no_log: true
      register: email_config

    - name: Display email configuration status
      debug:
        msg: |
          ========================================
          Email Configuration
          ========================================
          Status: {{ 'Configured' if email_config.changed | default(false) else 'Skipped (credentials not available)' }}
          SMTP: smtp.eu.mailgun.org:587 (TLS)
          From: {{ inventory_hostname }}@mg.vrije.cloud
          ========================================
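The stored values can be read back with `config:system:get`, which mirrors the `config:system:set` calls above and verifies a run without decrypting anything:

```bash
docker exec -u www-data nextcloud php occ config:system:get mail_smtphost
docker exec -u www-data nextcloud php occ config:system:get mail_from_address
docker exec -u www-data nextcloud php occ config:system:get mail_smtpmode
```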
@@ -39,7 +39,6 @@

        name: client_secrets
        age_keyfile: "{{ lookup('env', 'SOPS_AGE_KEY_FILE') }}"
      no_log: true
      tags: always

    - name: Load shared secrets (Mailgun API key, etc.)
      community.sops.load_vars:
@@ -47,13 +46,11 @@

        name: shared_secrets
        age_keyfile: "{{ lookup('env', 'SOPS_AGE_KEY_FILE') }}"
      no_log: true
      tags: always

    - name: Merge shared secrets into client_secrets
      set_fact:
        client_secrets: "{{ client_secrets | combine(shared_secrets) }}"
      no_log: true
      tags: always

    - name: Set client domain from secrets
      set_fact:
@@ -69,10 +66,6 @@

    - role: mailgun
    - role: authentik
    - role: nextcloud
    - role: diun
      tags: diun
    - role: kuma
      tags: kuma

  post_tasks:
    - name: Display deployment summary
@@ -18,13 +18,6 @@

    - name: Gather facts
      setup:

    - name: Load shared secrets (Docker Hub, etc.)
      community.sops.load_vars:
        file: "{{ playbook_dir }}/../../secrets/shared.sops.yaml"
        name: shared_secrets
        age_keyfile: "{{ lookup('env', 'SOPS_AGE_KEY_FILE') }}"
      no_log: true

  roles:
    - role: common
      tags: ['common', 'security']
@ -1,311 +0,0 @@
---
# Playbook: Update Docker containers across clients
# Usage:
#   # Update single client
#   ansible-playbook -i hcloud.yml playbooks/update-containers.yml --limit black
#
#   # Update specific service only
#   ansible-playbook -i hcloud.yml playbooks/update-containers.yml --limit black --tags authentik
#
#   # Dry run (check mode)
#   ansible-playbook -i hcloud.yml playbooks/update-containers.yml --limit black --check
#
#   # Update multiple clients in sequence
#   ansible-playbook -i hcloud.yml playbooks/update-containers.yml --limit "dev,test"

- name: Update Docker containers
  hosts: all
  become: yes
  serial: 1  # Process one host at a time for safety

  vars:
    # Services to update (override with -e "services_to_update=['authentik']")
    services_to_update:
      - traefik
      - authentik
      - nextcloud
      - diun

    # Backup before update
    create_backup: true

    # Wait time between service updates (seconds)
    update_delay: 30

  pre_tasks:
    - name: Display update plan
      debug:
        msg: |
          Updating {{ inventory_hostname }}
          Services: {{ services_to_update | join(', ') }}
          Backup enabled: {{ create_backup }}
      tags: always

    - name: Check if host is reachable
      ping:
      tags: always

    - name: Get current container status (before)
      shell: docker ps --format 'table {{{{.Names}}}}\t{{{{.Status}}}}\t{{{{.Image}}}}'
      register: containers_before
      changed_when: false
      tags: always

    - name: Display current containers
      debug:
        msg: "{{ containers_before.stdout_lines }}"
      tags: always

  tasks:
    # ==========================================
    # Traefik Updates
    # ==========================================
    - name: Update Traefik
      block:
        - name: Create Traefik backup
          shell: |
            cd /opt/docker/traefik
            tar -czf /tmp/traefik-backup-$(date +%Y%m%d-%H%M%S).tar.gz \
              acme.json docker-compose.yml traefik.yml 2>/dev/null || true
          when: create_backup

        - name: Pull latest Traefik image
          docker_image:
            name: traefik:latest
            source: pull
            force_source: yes

        - name: Restart Traefik
          docker_compose:
            project_src: /opt/docker/traefik
            restarted: yes
            pull: yes

        - name: Wait for Traefik to be healthy
          shell: docker inspect --format='{{{{.State.Status}}}}' traefik
          register: traefik_status
          until: traefik_status.stdout == "running"
          retries: 10
          delay: 5
          changed_when: false

        - name: Verify Traefik SSL certificates
          shell: docker exec traefik ls -la /acme.json
          register: traefik_certs
          changed_when: false
          failed_when: traefik_certs.rc != 0

        - name: Delay between services
          pause:
            seconds: "{{ update_delay }}"
      when: "'traefik' in services_to_update"
      tags: traefik

    # ==========================================
    # Authentik Updates
    # ==========================================
    - name: Update Authentik
      block:
        - name: Create Authentik database backup
          shell: |
            docker exec authentik-db pg_dump -U authentik authentik | \
              gzip > /tmp/authentik-backup-$(date +%Y%m%d-%H%M%S).sql.gz
          when: create_backup

        - name: Pull latest Authentik images
          docker_image:
            name: "{{ item }}"
            source: pull
            force_source: yes
          loop:
            - ghcr.io/goauthentik/server:latest
            - postgres:16-alpine
            - redis:alpine

        - name: Restart Authentik services
          docker_compose:
            project_src: /opt/docker/authentik
            restarted: yes
            pull: yes

        - name: Wait for Authentik server to be healthy
          shell: docker inspect --format='{{{{.State.Health.Status}}}}' authentik-server
          register: authentik_status
          until: authentik_status.stdout == "healthy"
          retries: 20
          delay: 10
          changed_when: false

        - name: Wait for Authentik worker to be healthy
          shell: docker inspect --format='{{{{.State.Health.Status}}}}' authentik-worker
          register: authentik_worker_status
          until: authentik_worker_status.stdout == "healthy"
          retries: 20
          delay: 10
          changed_when: false

        - name: Verify Authentik web interface
          uri:
            url: "https://auth.{{ client_name }}.vrije.cloud/if/flow/default-authentication-flow/"
            validate_certs: yes
            status_code: 200
          register: authentik_health
          retries: 5
          delay: 10

        - name: Delay between services
          pause:
            seconds: "{{ update_delay }}"
      when: "'authentik' in services_to_update"
      tags: authentik

    # ==========================================
    # Nextcloud Updates
    # ==========================================
    - name: Update Nextcloud
      block:
        - name: Create Nextcloud database backup
          shell: |
            docker exec nextcloud-db mysqldump -u nextcloud -p$(docker exec nextcloud-db cat /run/secrets/db_password 2>/dev/null || echo 'password') nextcloud | \
              gzip > /tmp/nextcloud-backup-$(date +%Y%m%d-%H%M%S).sql.gz
          when: create_backup
          ignore_errors: yes

        - name: Enable Nextcloud maintenance mode
          shell: docker exec -u www-data nextcloud php occ maintenance:mode --on
          register: maintenance_mode
          changed_when: "'Maintenance mode enabled' in maintenance_mode.stdout"

        - name: Pull latest Nextcloud images
          docker_image:
            name: "{{ item }}"
            source: pull
            force_source: yes
          loop:
            - nextcloud:latest
            - mariadb:11
            - redis:alpine
            - collabora/code:latest

        - name: Restart Nextcloud services
          docker_compose:
            project_src: /opt/docker/nextcloud
            restarted: yes
            pull: yes

        - name: Wait for Nextcloud to be ready
          shell: docker exec nextcloud-db mysqladmin ping -h localhost -u root --silent
          register: nc_db_status
          until: nc_db_status.rc == 0
          retries: 20
          delay: 5
          changed_when: false

        - name: Run Nextcloud upgrade (if needed)
          shell: docker exec -u www-data nextcloud php occ upgrade
          register: nc_upgrade
          changed_when: "'Updated database' in nc_upgrade.stdout"
          failed_when: nc_upgrade.rc != 0 and 'already latest version' not in nc_upgrade.stdout

        - name: Disable Nextcloud maintenance mode
          shell: docker exec -u www-data nextcloud php occ maintenance:mode --off
          register: maintenance_off
          changed_when: "'Maintenance mode disabled' in maintenance_off.stdout"

        - name: Verify Nextcloud web interface
          uri:
            url: "https://nextcloud.{{ client_name }}.vrije.cloud/status.php"
            validate_certs: yes
            status_code: 200
          register: nc_health
          retries: 10
          delay: 10

        - name: Verify Nextcloud installed status
          uri:
            url: "https://nextcloud.{{ client_name }}.vrije.cloud/status.php"
            validate_certs: yes
            return_content: yes
          register: nc_status_check
          failed_when: "'\"installed\":true' not in nc_status_check.content"

        - name: Delay between services
          pause:
            seconds: "{{ update_delay }}"
      when: "'nextcloud' in services_to_update"
      tags: nextcloud

    # ==========================================
    # Diun Updates
    # ==========================================
    - name: Update Diun
      block:
        - name: Pull latest Diun image
          docker_image:
            name: crazymax/diun:latest
            source: pull
            force_source: yes

        - name: Restart Diun
          docker_compose:
            project_src: /opt/docker/diun
            restarted: yes
            pull: yes

        - name: Wait for Diun to be running
          shell: docker inspect --format='{{{{.State.Status}}}}' diun
          register: diun_status
          until: diun_status.stdout == "running"
          retries: 5
          delay: 3
          changed_when: false
      when: "'diun' in services_to_update"
      tags: diun

  post_tasks:
    - name: Get final container status
      shell: docker ps --format 'table {{{{.Names}}}}\t{{{{.Status}}}}\t{{{{.Image}}}}'
      register: containers_after
      changed_when: false
      tags: always

    - name: Display final container status
      debug:
        msg: "{{ containers_after.stdout_lines }}"
      tags: always

    - name: Verify all expected containers are running
      shell: docker ps --filter "status=running" --format '{{{{.Names}}}}' | wc -l
      register: running_count
      changed_when: false
      tags: always

    - name: Check for unhealthy containers
      shell: docker ps --filter "health=unhealthy" --format '{{{{.Names}}}}'
      register: unhealthy_containers
      changed_when: false
      failed_when: unhealthy_containers.stdout != ""
      tags: always

    - name: Update summary
      debug:
        msg: |
          ========================================
          Update Summary for {{ inventory_hostname }}
          ========================================
          Running containers: {{ running_count.stdout }}
          Unhealthy containers: {{ unhealthy_containers.stdout or 'None' }}

          Services updated: {{ services_to_update | join(', ') }}
          Status: SUCCESS
      tags: always

- name: Post-update validation
  hosts: all
  become: yes
  gather_facts: no

  tasks:
    - name: Final health check
      debug:
        msg: "All updates completed successfully on {{ inventory_hostname }}"

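The deleted playbook drives restarts through the legacy docker_compose module, which has been removed from recent community.docker releases. A hedged sketch of the same restart-with-pull step using community.docker.docker_compose_v2, assuming a collection version that provides it:

- name: Restart Traefik with fresh images (docker_compose_v2 sketch)
  community.docker.docker_compose_v2:
    project_src: /opt/docker/traefik
    state: restarted
    pull: always
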
@ -1,61 +0,0 @@
---
# Update enrollment flow blueprint on running Authentik instance
- name: Update enrollment flow blueprint
  hosts: all
  gather_facts: no
  become: yes

  vars:
    authentik_api_token: "ak_DtA2LG1Z9shl-tw9r0cs34B1G9l8Lpz76GxLf-4OBiUWbiHbAVJ04GYLcZ30"
    client_domain: "dev.vrije.cloud"

  tasks:

    - name: Create blueprints directory
      file:
        path: /opt/config/authentik/blueprints
        state: directory
        mode: '0755'

    - name: Copy enrollment flow blueprint
      copy:
        src: ../roles/authentik/files/enrollment-flow.yaml
        dest: /opt/config/authentik/blueprints/enrollment-flow.yaml
        mode: '0644'
      register: blueprint_copied

    - name: Copy blueprint into authentik-worker container
      shell: |
        docker cp /opt/config/authentik/blueprints/enrollment-flow.yaml authentik-worker:/blueprints/enrollment-flow.yaml
      when: blueprint_copied.changed

    - name: Copy blueprint into authentik-server container
      shell: |
        docker cp /opt/config/authentik/blueprints/enrollment-flow.yaml authentik-server:/blueprints/enrollment-flow.yaml
      when: blueprint_copied.changed

    - name: Restart authentik-worker to force blueprint discovery
      shell: docker restart authentik-worker
      when: blueprint_copied.changed

    - name: Wait for blueprint to be applied
      shell: |
        sleep 30
        docker exec authentik-server curl -sf -H 'Authorization: Bearer {{ authentik_api_token }}' \
          'http://localhost:9000/api/v3/flows/instances/?slug=default-enrollment-flow'
      register: flow_check
      retries: 6
      delay: 10
      until: flow_check.rc == 0
      no_log: true

    - name: Display success message
      debug:
        msg: |
          ✓ Enrollment flow blueprint updated successfully!

          The invitation-only enrollment flow is now set as the default.
          When you create invitations in Authentik, they will automatically
          use the correct flow.

          Flow URL: https://auth.{{ client_domain }}/if/flow/default-enrollment-flow/

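The playbook above hardcodes a live API token in vars, which defeats the SOPS setup used everywhere else in this repo. A sketch of loading it in pre_tasks from the client secrets file instead, following the community.sops pattern and the authentik_bootstrap_token key the authentik role already reads:

  pre_tasks:
    - name: Load client secrets
      community.sops.load_vars:
        file: "{{ playbook_dir }}/../../secrets/clients/{{ inventory_hostname }}.sops.yaml"
        name: client_secrets
        age_keyfile: "{{ lookup('env', 'SOPS_AGE_KEY_FILE') }}"
      no_log: true

    - name: Use bootstrap token from secrets
      set_fact:
        authentik_api_token: "{{ client_secrets.authentik_bootstrap_token }}"
      no_log: true
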
67 ansible/roles/authentik/files/configure_recovery_flow.py Normal file

@ -0,0 +1,67 @@
#!/usr/bin/env python3
"""
Configure Authentik recovery flow.
Verifies that the default recovery flow exists (Authentik creates it by default).
The recovery flow is used when clicking "Create recovery link" in the UI.
"""
import sys
import json
import urllib.request
import urllib.error


def api_request(base_url, token, path, method='GET', data=None):
    """Make API request to Authentik"""
    url = f"{base_url}{path}"
    headers = {
        'Authorization': f'Bearer {token}',
        'Content-Type': 'application/json'
    }

    request_data = json.dumps(data).encode() if data else None
    req = urllib.request.Request(url, data=request_data, headers=headers, method=method)

    try:
        with urllib.request.urlopen(req, timeout=30) as resp:
            return resp.status, json.loads(resp.read().decode())
    except urllib.error.HTTPError as e:
        error_body = e.read().decode()
        try:
            error_data = json.loads(error_body)
        except ValueError:
            error_data = {'error': error_body}
        return e.code, error_data


def main():
    if len(sys.argv) != 3:
        print(json.dumps({'error': 'Usage: configure_recovery_flow.py <base_url> <api_token>'}), file=sys.stderr)
        sys.exit(1)

    base_url = sys.argv[1]
    token = sys.argv[2]

    # Get the default recovery flow (created by Authentik by default)
    status, flows_response = api_request(base_url, token, '/api/v3/flows/instances/')
    if status != 200:
        print(json.dumps({'error': 'Failed to list flows', 'details': flows_response}), file=sys.stderr)
        sys.exit(1)

    recovery_flow = next((f for f in flows_response.get('results', [])
                          if f.get('designation') == 'recovery'), None)

    if not recovery_flow:
        print(json.dumps({'error': 'No recovery flow found - Authentik should create one by default'}), file=sys.stderr)
        sys.exit(1)

    flow_slug = recovery_flow['slug']
    flow_pk = recovery_flow['pk']

    print(json.dumps({
        'success': True,
        'message': 'Recovery flow configured',
        'flow_slug': flow_slug,
        'flow_pk': flow_pk,
        'note': 'Using Authentik default recovery flow'
    }))


if __name__ == '__main__':
    main()

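A hedged sketch of how the role could invoke this script, mirroring the docker-cp-and-exec pattern the recovery tasks in this diff use (the /tmp staging path is an assumption):

- name: Copy verification script into Authentik container
  shell: docker cp /tmp/configure_recovery_flow.py authentik-server:/tmp/configure_recovery_flow.py
  changed_when: false

- name: Verify default recovery flow via API
  shell: docker exec authentik-server python3 /tmp/configure_recovery_flow.py http://localhost:9000 "{{ authentik_api_token }}"
  register: recovery_flow_status
  changed_when: false
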
@ -1,522 +0,0 @@
#!/usr/bin/env python3
"""
Authentik Recovery Flow Automation Script

This script creates a complete password recovery flow in Authentik with:
- Password complexity policy (12 chars, mixed case, digit, symbol)
- Recovery identification stage (username/email)
- Recovery email stage (sends recovery token)
- Password change stages (with validation)
- Integration with default authentication flow
- Brand default recovery flow configuration

Usage:
    python3 create_recovery_flow.py <api_token> <authentik_domain>
"""

import sys
import json
import urllib.request
import urllib.error


def api_request(base_url, token, path, method='GET', data=None):
    """Make an API request to Authentik"""
    url = f"{base_url}{path}"
    headers = {
        'Authorization': f'Bearer {token}',
        'Content-Type': 'application/json'
    }

    request_data = json.dumps(data).encode() if data else None
    req = urllib.request.Request(url, data=request_data, headers=headers, method=method)

    try:
        with urllib.request.urlopen(req, timeout=30) as resp:
            body = resp.read().decode()
            if body:
                return resp.status, json.loads(body)
            return resp.status, {}
    except urllib.error.HTTPError as e:
        error_body = e.read().decode()
        try:
            error_data = json.loads(error_body) if error_body else {'error': 'Empty error response'}
        except ValueError:
            error_data = {'error': error_body or 'Unknown error'}
        return e.code, error_data
    except Exception as e:
        return 0, {'error': str(e)}


def get_or_create_password_policy(base_url, token):
    """Create password complexity policy"""
    print("Checking for password complexity policy...")

    policy_data = {
        "name": "password-complexity",
        "password_field": "password",
        "amount_digits": 1,
        "amount_uppercase": 1,
        "amount_lowercase": 1,
        "amount_symbols": 1,
        "length_min": 12,
        "symbol_charset": "!\\\"#$%&'()*+,-./:;<=>?@[]^_`{|}~",
        "error_message": "Enter a minimum of 12 characters, with at least 1 lowercase, uppercase, digit and symbol",
        "check_static_rules": True,
        "check_have_i_been_pwned": True,
        "check_zxcvbn": True,
        "hibp_allowed_count": 0,
        "zxcvbn_score_threshold": 2
    }

    # Check if policy already exists
    status, policies = api_request(base_url, token, '/api/v3/policies/password/')
    print(f"  Initial check status: {status}")
    if status == 200:
        results = policies.get('results', [])
        print(f"  Found {len(results)} existing policies")
        for policy in results:
            policy_name = policy.get('name')
            print(f"    - {policy_name}")
            if policy_name == 'password-complexity':
                print(f"  ✓ Password policy already exists: {policy['pk']}")
                return policy['pk']
    else:
        print(f"  Initial check failed: {policies}")

    # Create new policy
    status, policy = api_request(base_url, token, '/api/v3/policies/password/', 'POST', policy_data)
    if status == 201:
        print(f"  ✓ Created password policy: {policy['pk']}")
        return policy['pk']
    elif status == 400 and 'name' in policy:
        # Policy with same name already exists, search for it again
        print(f"  ! Policy name already exists, retrieving existing policy...")
        status, policies = api_request(base_url, token, '/api/v3/policies/password/')
        if status == 200:
            for existing_policy in policies.get('results', []):
                if existing_policy.get('name') == 'password-complexity':
                    print(f"  ✓ Found existing password policy: {existing_policy['pk']}")
                    return existing_policy['pk']
        print(f"  ✗ Failed to find existing policy after creation conflict")
        return None
    else:
        print(f"  ✗ Failed to create password policy: {policy}")
        return None


def get_or_create_recovery_identification_stage(base_url, token):
    """Create recovery identification stage"""
    print("Creating recovery identification stage...")

    stage_data = {
        "name": "recovery-authentication-identification",
        "user_fields": ["username", "email"],
        "password_stage": None,
        "case_insensitive_matching": True,
        "show_matched_user": True,
        "pretend_user_exists": True,
        "enable_remember_me": False
    }

    # Check if stage already exists
    status, stages = api_request(base_url, token, '/api/v3/stages/identification/')
    if status == 200:
        for stage in stages.get('results', []):
            if stage.get('name') == 'recovery-authentication-identification':
                print(f"  ✓ Recovery identification stage already exists: {stage['pk']}")
                return stage['pk']

    # Create new stage
    status, stage = api_request(base_url, token, '/api/v3/stages/identification/', 'POST', stage_data)
    if status == 201:
        print(f"  ✓ Created recovery identification stage: {stage['pk']}")
        return stage['pk']
    elif status == 400 and 'name' in stage:
        # Stage with same name already exists
        print(f"  ! Stage name already exists, retrieving existing stage...")
        status, stages = api_request(base_url, token, '/api/v3/stages/identification/')
        if status == 200:
            for existing_stage in stages.get('results', []):
                if existing_stage.get('name') == 'recovery-authentication-identification':
                    print(f"  ✓ Found existing recovery identification stage: {existing_stage['pk']}")
                    return existing_stage['pk']
        print(f"  ✗ Failed to find existing stage after creation conflict")
        return None
    else:
        print(f"  ✗ Failed to create recovery identification stage: {stage}")
        return None


def get_or_create_recovery_email_stage(base_url, token):
    """Create recovery email stage"""
    print("Creating recovery email stage...")

    stage_data = {
        "name": "recovery-email",
        "use_global_settings": True,
        "token_expiry": "minutes=30",
        "subject": "Password recovery",
        "template": "email/password_reset.html",
        "activate_user_on_success": True,
        "recovery_max_attempts": 5,
        "recovery_cache_timeout": "minutes=5"
    }

    # Check if stage already exists
    status, stages = api_request(base_url, token, '/api/v3/stages/email/')
    if status == 200:
        for stage in stages.get('results', []):
            if stage.get('name') == 'recovery-email':
                print(f"  ✓ Recovery email stage already exists: {stage['pk']}")
                return stage['pk']

    # Create new stage
    status, stage = api_request(base_url, token, '/api/v3/stages/email/', 'POST', stage_data)
    if status == 201:
        print(f"  ✓ Created recovery email stage: {stage['pk']}")
        return stage['pk']
    elif status == 400 and 'name' in stage:
        # Stage with same name already exists
        print(f"  ! Stage name already exists, retrieving existing stage...")
        status, stages = api_request(base_url, token, '/api/v3/stages/email/')
        if status == 200:
            for existing_stage in stages.get('results', []):
                if existing_stage.get('name') == 'recovery-email':
                    print(f"  ✓ Found existing recovery email stage: {existing_stage['pk']}")
                    return existing_stage['pk']
        print(f"  ✗ Failed to find existing stage after creation conflict")
        return None
    else:
        print(f"  ✗ Failed to create recovery email stage: {stage}")
        return None


def get_existing_stage_uuid(base_url, token, stage_name, stage_type):
    """Get UUID of an existing stage"""
    status, stages = api_request(base_url, token, f'/api/v3/stages/{stage_type}/')
    if status == 200:
        for stage in stages.get('results', []):
            if stage.get('name') == stage_name:
                return stage['pk']
    return None


def get_or_create_recovery_flow(base_url, token, stage_ids):
    """Create recovery flow with stage bindings"""
    print("Creating recovery flow...")

    flow_data = {
        "name": "recovery",
        "slug": "recovery",
        "title": "Recovery",
        "designation": "recovery",
        "policy_engine_mode": "any",
        "compatibility_mode": False,
        "layout": "stacked",
        "denied_action": "message_continue"
    }

    # Check if flow already exists
    status, flows = api_request(base_url, token, '/api/v3/flows/instances/')
    if status == 200:
        for flow in flows.get('results', []):
            if flow.get('slug') == 'recovery':
                print(f"  ✓ Recovery flow already exists: {flow['pk']}")
                return flow['pk']

    # Create new flow
    status, flow = api_request(base_url, token, '/api/v3/flows/instances/', 'POST', flow_data)
    if status != 201:
        print(f"  ✗ Failed to create recovery flow: {flow}")
        return None

    flow_uuid = flow['pk']
    print(f"  ✓ Created recovery flow: {flow_uuid}")

    # Create stage bindings
    bindings = [
        {"stage": stage_ids['recovery_identification'], "order": 0},
        {"stage": stage_ids['recovery_email'], "order": 10},
        {"stage": stage_ids['password_change_prompt'], "order": 20},
        {"stage": stage_ids['password_change_write'], "order": 30},
    ]

    for binding in bindings:
        binding_data = {
            "target": flow_uuid,
            "stage": binding['stage'],
            "order": binding['order'],
            "evaluate_on_plan": False,
            "re_evaluate_policies": True,
            "policy_engine_mode": "any",
            "invalid_response_action": "retry"
        }

        status, result = api_request(base_url, token, '/api/v3/flows/bindings/', 'POST', binding_data)
        if status == 201:
            print(f"  ✓ Bound stage {binding['stage']} at order {binding['order']}")
        else:
            print(f"  ✗ Failed to bind stage: {result}")

    return flow_uuid


def update_password_change_prompt_stage(base_url, token, stage_uuid, password_complexity_uuid):
    """Add password complexity policy to password change prompt stage"""
    print("Updating password change prompt stage...")

    # Get current stage configuration
    status, stage = api_request(base_url, token, f'/api/v3/stages/prompt/stages/{stage_uuid}/')
    if status != 200:
        print(f"  ✗ Failed to get stage: {stage}")
        return False

    # Add password complexity to validation policies
    validation_policies = stage.get('validation_policies', [])
    if password_complexity_uuid not in validation_policies:
        validation_policies.append(password_complexity_uuid)

        update_data = {
            "validation_policies": validation_policies
        }

        status, result = api_request(base_url, token, f'/api/v3/stages/prompt/stages/{stage_uuid}/', 'PATCH', update_data)
        if status == 200:
            print(f"  ✓ Added password complexity policy to validation")
            return True
        else:
            print(f"  ✗ Failed to update stage: {result}")
            return False
    else:
        print(f"  ✓ Password complexity policy already in validation")
        return True


def remove_separate_password_stage_from_auth_flow(base_url, token, auth_flow_uuid, password_stage_uuid):
    """Remove separate password stage from authentication flow if it exists"""
    print("Checking for separate password stage in authentication flow...")

    # Get all flow bindings
    status, bindings_data = api_request(base_url, token, '/api/v3/flows/bindings/')
    if status != 200:
        print(f"  ✗ Failed to get flow bindings: {bindings_data}")
        return False

    # Find password stage binding in auth flow
    password_binding = None
    for binding in bindings_data.get('results', []):
        if binding.get('target') == auth_flow_uuid and binding.get('stage') == password_stage_uuid:
            password_binding = binding
            break

    if not password_binding:
        print(f"  ✓ No separate password stage found (already removed)")
        return True

    # Delete the password stage binding
    binding_uuid = password_binding.get('pk')
    status, result = api_request(base_url, token, f'/api/v3/flows/bindings/{binding_uuid}/', 'DELETE')
    if status == 204 or status == 200:
        print(f"  ✓ Removed separate password stage from authentication flow")
        return True
    else:
        print(f"  ✗ Failed to remove password stage: {result}")
        return False


def update_authentication_identification_stage(base_url, token, stage_uuid, password_stage_uuid, recovery_flow_uuid):
    """Update authentication identification stage with password field and recovery flow"""
    print("Updating authentication identification stage...")

    # First get the current stage configuration
    status, current_stage = api_request(base_url, token, f'/api/v3/stages/identification/{stage_uuid}/')
    if status != 200:
        print(f"  ✗ Failed to get current stage: {current_stage}")
        return False

    # Check if already configured
    if current_stage.get('password_stage') == password_stage_uuid and current_stage.get('recovery_flow') == recovery_flow_uuid:
        print(f"  ✓ Authentication identification stage already configured")
        return True

    # Update with new values while preserving existing configuration
    update_data = {
        "name": current_stage.get('name'),
        "user_fields": current_stage.get('user_fields', ["username", "email"]),
        "password_stage": password_stage_uuid,
        "recovery_flow": recovery_flow_uuid,
        "case_insensitive_matching": current_stage.get('case_insensitive_matching', True),
        "show_matched_user": current_stage.get('show_matched_user', True),
        "pretend_user_exists": current_stage.get('pretend_user_exists', True)
    }

    status, result = api_request(base_url, token, f'/api/v3/stages/identification/{stage_uuid}/', 'PATCH', update_data)
    if status == 200:
        print(f"  ✓ Updated authentication identification stage")
        print(f"    - Added password field on same page")
        print(f"    - Added recovery flow link")
        return True
    else:
        print(f"  ✗ Failed to update stage: {result}")
        return False


def update_brand_recovery_flow(base_url, token, recovery_flow_uuid):
    """Update the default brand to use the recovery flow"""
    print("Updating brand default recovery flow...")

    # Get the default brand (authentik has one brand by default)
    status, brands = api_request(base_url, token, '/api/v3/core/brands/')
    if status != 200:
        print(f"  ✗ Failed to get brands: {brands}")
        return False

    results = brands.get('results', [])
    if not results:
        print(f"  ✗ No brands found")
        return False

    # Use the first/default brand
    brand = results[0]
    brand_uuid = brand.get('brand_uuid')

    # Check if already configured
    if brand.get('flow_recovery') == recovery_flow_uuid:
        print(f"  ✓ Brand recovery flow already configured")
        return True

    # Update the brand with recovery flow
    update_data = {
        "domain": brand.get('domain'),
        "flow_recovery": recovery_flow_uuid
    }

    status, result = api_request(base_url, token, f'/api/v3/core/brands/{brand_uuid}/', 'PATCH', update_data)
    if status == 200:
        print(f"  ✓ Updated brand default recovery flow")
        return True
    else:
        print(f"  ✗ Failed to update brand: {result}")
        return False


def main():
    if len(sys.argv) < 3:
        print("Usage: python3 create_recovery_flow.py <api_token> <authentik_domain>")
        sys.exit(1)

    token = sys.argv[1]
    authentik_domain = sys.argv[2]

    # Use internal localhost URL when running inside Authentik container
    # This avoids SSL/DNS issues
    base_url = "http://localhost:9000"
    print(f"Using internal API endpoint: {base_url}")
    print(f"External domain: https://{authentik_domain}\n")

    print("=" * 80)
    print("Authentik Recovery Flow Automation")
    print("=" * 80)
    print(f"Target: {base_url}\n")

    # Step 1: Create password complexity policy
    password_complexity_uuid = get_or_create_password_policy(base_url, token)
    if not password_complexity_uuid:
        print("\n✗ Failed to create password complexity policy")
        sys.exit(1)

    # Step 2: Create recovery identification stage
    recovery_identification_uuid = get_or_create_recovery_identification_stage(base_url, token)
    if not recovery_identification_uuid:
        print("\n✗ Failed to create recovery identification stage")
        sys.exit(1)

    # Step 3: Create recovery email stage
    recovery_email_uuid = get_or_create_recovery_email_stage(base_url, token)
    if not recovery_email_uuid:
        print("\n✗ Failed to create recovery email stage")
        sys.exit(1)

    # Step 4: Get existing stage and flow UUIDs
    print("\nGetting existing stage and flow UUIDs...")
    password_change_prompt_uuid = get_existing_stage_uuid(base_url, token, 'default-password-change-prompt', 'prompt/stages')
    password_change_write_uuid = get_existing_stage_uuid(base_url, token, 'default-password-change-write', 'user_write')
    auth_identification_uuid = get_existing_stage_uuid(base_url, token, 'default-authentication-identification', 'identification')
    auth_password_uuid = get_existing_stage_uuid(base_url, token, 'default-authentication-password', 'password')

    # Get default authentication flow UUID
    status, flows = api_request(base_url, token, '/api/v3/flows/instances/')
    auth_flow_uuid = None
    if status == 200:
        for flow in flows.get('results', []):
            if flow.get('slug') == 'default-authentication-flow':
                auth_flow_uuid = flow.get('pk')
                break

    if not all([password_change_prompt_uuid, password_change_write_uuid, auth_identification_uuid, auth_password_uuid, auth_flow_uuid]):
        print("  ✗ Failed to find all required existing stages and flows")
        sys.exit(1)

    print(f"  ✓ Found all existing stages and flows")

    # Step 5: Create recovery flow
    stage_ids = {
        'recovery_identification': recovery_identification_uuid,
        'recovery_email': recovery_email_uuid,
        'password_change_prompt': password_change_prompt_uuid,
        'password_change_write': password_change_write_uuid
    }

    recovery_flow_uuid = get_or_create_recovery_flow(base_url, token, stage_ids)
    if not recovery_flow_uuid:
        print("\n✗ Failed to create recovery flow")
        sys.exit(1)

    # Step 6: Update password change prompt stage
    if not update_password_change_prompt_stage(base_url, token, password_change_prompt_uuid, password_complexity_uuid):
        print("\n⚠ Warning: Failed to update password change prompt stage")

    # Step 7: Update authentication identification stage
    if not update_authentication_identification_stage(base_url, token, auth_identification_uuid, auth_password_uuid, recovery_flow_uuid):
        print("\n⚠ Warning: Failed to update authentication identification stage")

    # Step 8: Remove separate password stage from authentication flow
    if not remove_separate_password_stage_from_auth_flow(base_url, token, auth_flow_uuid, auth_password_uuid):
        print("\n⚠ Warning: Failed to remove separate password stage (may not exist)")

    # Step 9: Update brand default recovery flow
    if not update_brand_recovery_flow(base_url, token, recovery_flow_uuid):
        print("\n⚠ Warning: Failed to update brand recovery flow (non-critical)")

    # Success!
    print("\n" + "=" * 80)
    print("✓ Recovery Flow Configuration Complete!")
    print("=" * 80)
    print(f"\nRecovery Flow UUID: {recovery_flow_uuid}")
    print(f"Recovery URL: https://{authentik_domain}/if/flow/recovery/")
    print(f"\nFeatures enabled:")
    print("  ✓ Password complexity policy (12 chars, mixed case, digit, symbol)")
    print("  ✓ Recovery email with 30-minute token")
    print("  ✓ Password + username on same login page")
    print("  ✓ 'Forgot password?' link on login page")
    print("  ✓ Brand default recovery flow configured")
    print("\nTest the recovery flow:")
    print(f"  1. Visit: https://{authentik_domain}/if/flow/default-authentication-flow/")
    print("  2. Click 'Forgot password?' link")
    print("  3. Enter username or email")
    print("  4. Check email for recovery link")
    print("=" * 80)

    # Output JSON for Ansible
    result = {
        "success": True,
        "recovery_flow_uuid": recovery_flow_uuid,
        "password_complexity_uuid": password_complexity_uuid,
        "recovery_url": f"https://{authentik_domain}/if/flow/recovery/"
    }
    print("\n" + json.dumps(result))


if __name__ == "__main__":
    main()

@ -2,7 +2,7 @@ version: 1
metadata:
  name: custom-flow-configuration
  labels:
    blueprints.goauthentik.io/description: "Configure invitation and 2FA enforcement"
    blueprints.goauthentik.io/description: "Configure invitation, recovery, and 2FA enforcement"
    blueprints.goauthentik.io/instantiate: "true"

entries:

@ -26,7 +26,15 @@ entries:
      evaluate_on_plan: true
      re_evaluate_policies: false

  # 3. ENFORCE 2FA CONFIGURATION
  # 3. SET RECOVERY FLOW IN BRAND
  # Configures the default brand to use the recovery flow
  - model: authentik_core.brand
    identifiers:
      domain: authentik-default
    attrs:
      flow_recovery: !Find [authentik_flows.flow, [designation, recovery]]

  # 4. ENFORCE 2FA CONFIGURATION
  # Updates MFA validation stage to force users to configure TOTP
  - model: authentik_stages_authenticator_validate.authenticatorvalidatestage
    identifiers:

@ -1,153 +0,0 @@
version: 1
metadata:
  name: invitation-enrollment-flow
  labels:
    blueprints.goauthentik.io/description: "Invitation-only enrollment flow"
    blueprints.goauthentik.io/instantiate: "true"

entries:
  # 1. CREATE ENROLLMENT FLOW
  - attrs:
      designation: enrollment
      name: Default enrollment Flow
      title: Welcome to authentik!
      authentication: none
      denied_action: message_continue
    identifiers:
      slug: default-enrollment-flow
    model: authentik_flows.flow
    id: flow

  # 2. CREATE INVITATION STAGE
  - attrs:
      continue_flow_without_invitation: false
    identifiers:
      name: default-enrollment-invitation
    id: invitation-stage
    model: authentik_stages_invitation.invitationstage

  # 3. CREATE PROMPT FIELDS
  - attrs:
      order: 0
      placeholder: Username
      placeholder_expression: false
      required: true
      type: username
      field_key: username
      label: Username
    identifiers:
      name: default-enrollment-field-username
    id: prompt-field-username
    model: authentik_stages_prompt.prompt

  - attrs:
      order: 1
      placeholder: Name
      placeholder_expression: false
      required: true
      type: text
      field_key: name
      label: Name
    identifiers:
      name: default-enrollment-field-name
    id: prompt-field-name
    model: authentik_stages_prompt.prompt

  - attrs:
      order: 2
      placeholder: Email
      placeholder_expression: false
      required: true
      type: email
      field_key: email
      label: Email
    identifiers:
      name: default-enrollment-field-email
    id: prompt-field-email
    model: authentik_stages_prompt.prompt

  - attrs:
      order: 3
      placeholder: Password
      placeholder_expression: false
      required: true
      type: password
      field_key: password
      label: Password
    identifiers:
      name: default-enrollment-field-password
    id: prompt-field-password
    model: authentik_stages_prompt.prompt

  - attrs:
      order: 4
      placeholder: Password (repeat)
      placeholder_expression: false
      required: true
      type: password
      field_key: password_repeat
      label: Password (repeat)
    identifiers:
      name: default-enrollment-field-password-repeat
    id: prompt-field-password-repeat
    model: authentik_stages_prompt.prompt

  # 4. CREATE PROMPT STAGE
  - attrs:
      fields:
        - !KeyOf prompt-field-username
        - !KeyOf prompt-field-name
        - !KeyOf prompt-field-email
        - !KeyOf prompt-field-password
        - !KeyOf prompt-field-password-repeat
      validation_policies: []
    identifiers:
      name: default-enrollment-prompt
    id: prompt-stage
    model: authentik_stages_prompt.promptstage

  # 5. CREATE USER WRITE STAGE
  - attrs:
      user_creation_mode: always_create
      create_users_as_inactive: false
      create_users_group: null
      user_path_template: ""
    identifiers:
      name: default-enrollment-user-write
    id: user-write-stage
    model: authentik_stages_user_write.userwritestage

  # 6. BIND INVITATION STAGE TO FLOW (order 0)
  - attrs:
      evaluate_on_plan: true
      re_evaluate_policies: false
    identifiers:
      order: 0
      stage: !KeyOf invitation-stage
      target: !KeyOf flow
    model: authentik_flows.flowstagebinding

  # 7. BIND PROMPT STAGE TO FLOW (order 10)
  - attrs:
      evaluate_on_plan: true
      re_evaluate_policies: false
    identifiers:
      order: 10
      stage: !KeyOf prompt-stage
      target: !KeyOf flow
    model: authentik_flows.flowstagebinding

  # 8. BIND USER WRITE STAGE TO FLOW (order 20)
  - attrs:
      evaluate_on_plan: true
      re_evaluate_policies: false
    identifiers:
      order: 20
      stage: !KeyOf user-write-stage
      target: !KeyOf flow
    model: authentik_flows.flowstagebinding

  # Note: Brand enrollment flow configuration must be done via API
  # The tenant model is restricted in blueprints
  # Use: PATCH /api/v3/core/tenants/{tenant_uuid}/
  # Body: {"flow_enrollment": "<flow_uuid>"}

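A hedged sketch of the API call the closing note describes, written as an Ansible task; the tenant_uuid and enrollment_flow_uuid variables are assumptions, and newer Authentik releases expose the equivalent setting on /api/v3/core/brands/ instead:

- name: Set default enrollment flow via API (sketch)
  uri:
    url: "https://{{ authentik_domain }}/api/v3/core/tenants/{{ tenant_uuid }}/"
    method: PATCH
    headers:
      Authorization: "Bearer {{ authentik_api_token }}"
    body_format: json
    body:
      flow_enrollment: "{{ enrollment_flow_uuid }}"
    status_code: 200
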
@ -1,25 +0,0 @@
version: 1
metadata:
  name: invitation-flow-configuration
  labels:
    blueprints.goauthentik.io/description: "Configure invitation stage for enrollment"
    blueprints.goauthentik.io/instantiate: "true"

entries:
  # 1. CREATE INVITATION STAGE
  - model: authentik_stages_invitation.invitationstage
    identifiers:
      name: default-enrollment-invitation
    id: invitation-stage
    attrs:
      continue_flow_without_invitation: true

  # 2. BIND INVITATION STAGE TO ENROLLMENT FLOW
  - model: authentik_flows.flowstagebinding
    identifiers:
      target: !Find [authentik_flows.flow, [designation, enrollment]]
      stage: !KeyOf invitation-stage
      order: 0
    attrs:
      evaluate_on_plan: true
      re_evaluate_policies: false

@ -1,5 +1,5 @@
---
# Configure Authentik flows (invitation, 2FA) via Blueprints
# Configure Authentik flows (invitation, recovery, 2FA) via Blueprints

- name: Use bootstrap token for API access
  set_fact:

@ -27,31 +27,22 @@
    state: directory
    mode: '0755'

- name: Copy flow blueprints to server
- name: Copy custom flows blueprint to server
  copy:
    src: "{{ item }}"
    dest: "{{ authentik_config_dir }}/blueprints/{{ item }}"
    src: custom-flows.yaml
    dest: "{{ authentik_config_dir }}/blueprints/custom-flows.yaml"
    mode: '0644'
  loop:
    - custom-flows.yaml
    - enrollment-flow.yaml
  register: blueprints_copied
  register: blueprint_copied

- name: Copy blueprints into authentik-worker container
- name: Copy blueprint into authentik-worker container
  shell: |
    docker cp "{{ authentik_config_dir }}/blueprints/{{ item }}" authentik-worker:/blueprints/{{ item }}
  loop:
    - custom-flows.yaml
    - enrollment-flow.yaml
  when: blueprints_copied.changed
    docker cp "{{ authentik_config_dir }}/blueprints/custom-flows.yaml" authentik-worker:/blueprints/custom-flows.yaml
  changed_when: blueprint_copied.changed

- name: Copy blueprints into authentik-server container
- name: Copy blueprint into authentik-server container
  shell: |
    docker cp "{{ authentik_config_dir }}/blueprints/{{ item }}" authentik-server:/blueprints/{{ item }}
  loop:
    - custom-flows.yaml
    - enrollment-flow.yaml
  when: blueprints_copied.changed
    docker cp "{{ authentik_config_dir }}/blueprints/custom-flows.yaml" authentik-server:/blueprints/custom-flows.yaml
  changed_when: blueprint_copied.changed

- name: Wait for blueprint to be discovered and applied
  shell: |

@ -96,6 +87,14 @@
  changed_when: false
  failed_when: false

- name: Verify brand recovery flow was set
  shell: |
    docker exec authentik-server curl -sf -H "Authorization: Bearer {{ authentik_api_token }}" \
      "http://localhost:9000/api/v3/core/brands/" | \
      python3 -c "import sys, json; data = json.load(sys.stdin); brand = data['results'][0] if data['results'] else {}; print(json.dumps({'recovery_flow_set': brand.get('flow_recovery') is not None}))"
  register: recovery_check
  changed_when: false
  failed_when: false

- name: Display flows configuration status
  debug:

@ -105,22 +104,24 @@
      ========================================

      Configuration Method: YAML Blueprints
      Blueprints Deployed:
        - /blueprints/custom-flows.yaml (2FA enforcement)
        - /blueprints/enrollment-flow.yaml (invitation-only registration)
      Blueprint File: /blueprints/custom-flows.yaml

      ✓ Blueprints Deployed: {{ blueprints_copied.changed }}
      ✓ Blueprints Applied: {{ 'Yes' if 'successfully' in blueprint_wait.stdout else 'In Progress' }}
      ✓ Blueprint Deployed: {{ blueprint_copied.changed }}
      ✓ Blueprint Applied: {{ 'Yes' if 'successfully' in blueprint_wait.stdout else 'In Progress' }}

      Verification:
      {{ invitation_check.stdout | default('Invitation stage: Checking...') }}
      {{ recovery_check.stdout | default('Recovery flow: Checking...') }}

      Note: Authentik applies blueprints asynchronously.
      Changes should be visible within 1-2 minutes.
      Recovery flows must be configured manually in Authentik admin UI.

      Flow URLs:
      - Enrollment: https://{{ authentik_domain }}/if/flow/default-enrollment-flow/
      To verify manually:
      - Login to https://{{ authentik_domain }}
      - Check Admin > Flows > Stages for invitation stage
      - Check Admin > System > Brands for recovery flow setting
      - Check default-authentication-mfa-validation stage for 2FA enforcement

      Email configuration is active - emails sent via Mailgun SMTP.
      Email configuration is active and flows
      will send emails via Mailgun SMTP.
      ========================================

@ -1,112 +0,0 @@
---
# Configure invitation stage for enrollment flow

- name: Use bootstrap token for API access
  set_fact:
    authentik_api_token: "{{ client_secrets.authentik_bootstrap_token }}"

- name: Wait for Authentik API to be ready
  uri:
    url: "https://{{ authentik_domain }}/api/v3/root/config/"
    method: GET
    validate_certs: no
    status_code: 200
  register: api_result
  until: api_result.status == 200
  retries: 12
  delay: 5
  ignore_errors: yes
  failed_when: false

- name: Create blueprints directory on server
  file:
    path: /opt/config/authentik/blueprints
    state: directory
    mode: '0755'
  when: api_result.status is defined and api_result.status == 200

- name: Copy public enrollment flow blueprint to server
  copy:
    src: enrollment-flow.yaml
    dest: /opt/config/authentik/blueprints/enrollment-flow.yaml
    mode: '0644'
  register: enrollment_blueprint_copied
  when: api_result.status is defined and api_result.status == 200

- name: Copy enrollment blueprint into authentik-worker container
  shell: |
    docker cp /opt/config/authentik/blueprints/enrollment-flow.yaml authentik-worker:/blueprints/enrollment-flow.yaml
  when: api_result.status is defined and api_result.status == 200

- name: Copy enrollment blueprint into authentik-server container
  shell: |
    docker cp /opt/config/authentik/blueprints/enrollment-flow.yaml authentik-server:/blueprints/enrollment-flow.yaml
  when: api_result.status is defined and api_result.status == 200

- name: Wait for enrollment blueprint to be discovered and applied
  shell: |
    echo "Waiting for public enrollment blueprint to be discovered and applied..."
    sleep 10
    # Check if blueprint instance was created
    i=1
    while [ $i -le 24 ]; do
      result=$(docker exec authentik-server curl -sf -H 'Authorization: Bearer {{ authentik_api_token }}' \
        'http://localhost:9000/api/v3/managed/blueprints/' 2>/dev/null || echo '')
      if echo "$result" | grep -q 'public-enrollment-flow'; then
        echo "Blueprint instance found"
        if echo "$result" | grep -A 10 'public-enrollment-flow' | grep -q 'successful'; then
          echo "Blueprint applied successfully"
          exit 0
        fi
      fi
      sleep 5
      i=$((i+1))
    done
    echo "Blueprint deployment in progress (may take 1-2 minutes)"
  register: enrollment_blueprint_result
  changed_when: false
  when: api_result.status is defined and api_result.status == 200

- name: Verify enrollment flow was created
  shell: |
    docker exec authentik-server curl -sf -H 'Authorization: Bearer {{ authentik_api_token }}' \
      'http://localhost:9000/api/v3/flows/instances/?slug=default-enrollment-flow' | \
      python3 -c "import sys, json; d = json.load(sys.stdin); print(json.dumps({'found': len(d.get('results', [])) > 0, 'count': len(d.get('results', []))}))"
  register: enrollment_flow_check
  changed_when: false
  failed_when: false
  when: api_result.status is defined and api_result.status == 200

- name: Display public enrollment flow configuration status
  debug:
    msg: |
      ========================================
      Authentik Public Enrollment Flow
      ========================================

      Configuration Method: YAML Blueprints
      Blueprint File: /blueprints/enrollment-flow.yaml

      ✓ Blueprint Deployed: {{ enrollment_blueprint_copied.changed | default(false) }}
      ✓ Blueprint Applied: {{ 'In Progress' if (enrollment_blueprint_result is defined and enrollment_blueprint_result.rc is defined and enrollment_blueprint_result.rc != 0) else 'Complete' }}

      Verification: {{ enrollment_flow_check.stdout | default('{}') }}

      Features:
      - Invitation-only enrollment (requires valid invitation token)
      - User prompts: username, name, email, password
      - Automatic user creation and login

      Note: Brand enrollment flow is NOT auto-configured (API restriction).
      Flow is accessible via direct URL even without brand configuration.

      To use enrollment:
      1. Create invitation: Directory > Invitations > Create Invitation
      2. Share invitation link: https://{{ authentik_domain }}/if/flow/default-enrollment-flow/?itoken=TOKEN

      To verify:
      - Login to https://{{ authentik_domain }}
      - Check Admin > Flows for "default-enrollment-flow"
      - Test enrollment URL: https://{{ authentik_domain }}/if/flow/default-enrollment-flow/
      ========================================
  when: api_result.status is defined and api_result.status == 200

@ -17,7 +17,7 @@
  when: mailgun_smtp_user is defined or (client_secrets.mailgun_smtp_user is defined and client_secrets.mailgun_smtp_user != "" and "PLACEHOLDER" not in client_secrets.mailgun_smtp_user)
  tags: ['authentik', 'email']

- name: Include flows configuration (invitation, 2FA)
- name: Include flows configuration (recovery, invitation)
  include_tasks: flows.yml
  when: authentik_bootstrap | default(true)
  tags: ['authentik', 'flows']

@ -26,13 +26,3 @@
  include_tasks: mfa.yml
  when: authentik_bootstrap | default(true)
  tags: ['authentik', 'mfa', '2fa']

- name: Include invitation stage configuration
  include_tasks: invitation.yml
  when: authentik_bootstrap | default(true)
  tags: ['authentik', 'invitation']

- name: Include password recovery flow configuration
  include_tasks: recovery.yml
  when: authentik_bootstrap | default(true)
  tags: ['authentik', 'recovery']

@ -23,7 +23,7 @@
      if not auth_flow or not key: print(json.dumps({'error': 'Config missing'}), file=sys.stderr); sys.exit(1)
      s, prov = req('/api/v3/providers/oauth2/', 'POST', {'name': 'Nextcloud', 'authorization_flow': auth_flow, 'invalidation_flow': inval_flow, 'client_type': 'confidential', 'redirect_uris': [{'matching_mode': 'strict', 'url': 'https://{{ nextcloud_domain }}/apps/user_oidc/code'}], 'signing_key': key, 'sub_mode': 'hashed_user_id', 'include_claims_in_id_token': True})
      if s != 201: print(json.dumps({'error': 'Provider failed', 'details': prov}), file=sys.stderr); sys.exit(1)
      s, app = req('/api/v3/core/applications/', 'POST', {'name': 'Nextcloud', 'slug': 'nextcloud', 'provider': prov['pk'], 'meta_launch_url': 'https://nextcloud.{{ client_domain }}'})
      s, app = req('/api/v3/core/applications/', 'POST', {'name': 'Nextcloud', 'slug': 'nextcloud', 'provider': prov['pk'], 'meta_launch_url': 'https://{{ nextcloud_domain }}'})
      if s != 201: print(json.dumps({'error': 'App failed', 'details': app}), file=sys.stderr); sys.exit(1)
      print(json.dumps({'success': True, 'provider_id': prov['pk'], 'application_id': app['pk'], 'client_id': prov['client_id'], 'client_secret': prov['client_secret'], 'discovery_uri': f"https://{{ authentik_domain }}/application/o/nextcloud/.well-known/openid-configuration", 'issuer': f"https://{{ authentik_domain }}/application/o/nextcloud/"}))
    dest: /tmp/create_oidc.py

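The single-line Python above is dense to review; a hedged sketch of the first API call it performs, expressed with the uri module (same endpoint and payload; the token and flow variables are assumed from the surrounding role):

- name: Create Nextcloud OAuth2 provider (sketch)
  uri:
    url: "https://{{ authentik_domain }}/api/v3/providers/oauth2/"
    method: POST
    headers:
      Authorization: "Bearer {{ authentik_api_token }}"
    body_format: json
    body:
      name: Nextcloud
      authorization_flow: "{{ auth_flow_uuid }}"
      client_type: confidential
      redirect_uris:
        - matching_mode: strict
          url: "https://{{ nextcloud_domain }}/apps/user_oidc/code"
    status_code: 201
  register: oidc_provider
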
@ -1,85 +0,0 @@
---
# Configure Authentik password recovery flow
# This creates a complete recovery flow with email verification and password complexity validation

- name: Use bootstrap token for API access
  set_fact:
    authentik_api_token: "{{ client_secrets.authentik_bootstrap_token }}"

- name: Copy recovery flow creation script to server
  copy:
    src: create_recovery_flow.py
    dest: /tmp/create_recovery_flow.py
    mode: '0755'

- name: Copy recovery flow script into Authentik container
  shell: docker cp /tmp/create_recovery_flow.py authentik-server:/tmp/create_recovery_flow.py
  changed_when: false

- name: Create recovery flow via Authentik API
  shell: |
    docker exec authentik-server python3 /tmp/create_recovery_flow.py "{{ authentik_api_token }}" "{{ authentik_domain }}"
  register: recovery_flow_result
  failed_when: false
  changed_when: "'Recovery Flow Configuration Complete' in recovery_flow_result.stdout"

- name: Cleanup recovery flow script from server
  file:
    path: /tmp/create_recovery_flow.py
    state: absent

- name: Cleanup recovery flow script from container
  shell: docker exec authentik-server rm -f /tmp/create_recovery_flow.py
  changed_when: false
  failed_when: false

- name: Parse recovery flow result
  set_fact:
    recovery_flow: "{{ recovery_flow_result.stdout | regex_search('\\{.*\\}', multiline=True) | from_json }}"
  when: recovery_flow_result.rc == 0
  failed_when: false

- name: Display recovery flow configuration result
  debug:
    msg: |
      ========================================
      Authentik Password Recovery Flow
      ========================================

      {% if recovery_flow is defined and recovery_flow.success | default(false) %}
      Status: ✓ Configured Successfully

      Recovery Flow UUID: {{ recovery_flow.recovery_flow_uuid }}
      Password Policy UUID: {{ recovery_flow.password_complexity_uuid }}

      Features:
      - Password complexity: 12+ chars, mixed case, digit, symbol
      - Recovery email with 30-minute expiry token
      - Username + password on same login page
      - "Forgot password?" link on login page

      Test Recovery Flow:
      1. Go to: https://{{ authentik_domain }}/if/flow/default-authentication-flow/
      2. Click "Forgot password?" link
      3. Enter username or email
      4. Check email for recovery link (sent via Mailgun)
      5. Set new password (must meet complexity requirements)

      ========================================
      {% else %}
      Status: ⚠ Configuration incomplete or failed

      This is non-critical - recovery flow can be configured manually.

      To configure manually:
      1. Login to https://{{ authentik_domain }}
      2. Go to Admin > Flows & Stages
      3. Create recovery flow with email verification

      Details: {{ recovery_flow_result.stdout | default('No output') }}
      ========================================
      {% endif %}

- name: Set recovery flow status fact
  set_fact:
    recovery_flow_configured: "{{ recovery_flow is defined and recovery_flow.success | default(false) }}"

@ -1,11 +1,6 @@
|
|||
---
# Handlers for common role

- name: Restart systemd-resolved
  service:
    name: systemd-resolved
    state: restarted

- name: Restart SSH
  service:
    name: ssh

@@ -1,28 +1,6 @@
---
# Main tasks for common role - base system setup and hardening

- name: Ensure systemd-resolved config directory exists
  file:
    path: /etc/systemd/resolved.conf.d
    state: directory
    mode: '0755'
  tags: [dns]

- name: Configure DNS (systemd-resolved)
  copy:
    dest: /etc/systemd/resolved.conf.d/dns_servers.conf
    content: |
      [Resolve]
      DNS=8.8.8.8 8.8.4.4
      FallbackDNS=1.1.1.1 1.0.0.1
    mode: '0644'
  notify: Restart systemd-resolved
  tags: [dns]

- name: Flush handlers (apply DNS config immediately)
  meta: flush_handlers
  tags: [dns]

- name: Update apt cache
  apt:
    update_cache: yes

@@ -1,39 +0,0 @@
---
# Diun default configuration
diun_version: "latest"
diun_schedule: "0 6 * * *"  # Daily at 6am UTC
diun_log_level: "info"
diun_watch_workers: 10

# Notification configuration
diun_notif_enabled: true
diun_notif_type: "webhook"  # Options: webhook, slack, discord, email, gotify
diun_webhook_endpoint: ""  # Set per environment or via secrets
diun_webhook_method: "POST"
diun_webhook_headers: {}

# Optional: Slack notification
diun_slack_webhook_url: ""

# Optional: Email notification (Mailgun)
# Note: Uses per-client SMTP credentials from mailgun role
diun_email_enabled: true
diun_smtp_host: "smtp.eu.mailgun.org"
diun_smtp_port: 587
diun_smtp_from: "{{ client_name }}@mg.vrije.cloud"
diun_smtp_to: "pieter@postxsociety.org"

# Which containers to watch
diun_watch_all: true
diun_exclude_containers: []

# Don't send notifications on first check (prevents spam on initial run)
diun_first_check_notif: false

# Optional: Matrix notification
diun_matrix_enabled: false
diun_matrix_homeserver_url: ""  # e.g., https://matrix.postxsociety.cloud
diun_matrix_user: ""  # e.g., @diun:matrix.postxsociety.cloud
diun_matrix_password: ""  # Bot user password (if using password auth)
diun_matrix_access_token: ""  # Bot access token (preferred over password)
diun_matrix_room_id: ""  # e.g., !abc123:matrix.postxsociety.cloud

@@ -1,5 +0,0 @@
---
- name: Restart Diun
  community.docker.docker_compose_v2:
    project_src: /opt/docker/diun
    state: restarted

@@ -1,57 +0,0 @@
---
- name: Set SMTP credentials from mailgun role facts or client_secrets
  set_fact:
    diun_smtp_username_final: "{{ mailgun_smtp_user | default(client_secrets.mailgun_smtp_user | default(client_name ~ '@mg.vrije.cloud')) }}"
    diun_smtp_password_final: "{{ mailgun_smtp_password | default(client_secrets.mailgun_smtp_password | default('')) }}"
  when: mailgun_smtp_user is defined or client_secrets.mailgun_smtp_user is defined or client_name is defined
  no_log: true

- name: Create monitoring Docker network
  community.docker.docker_network:
    name: monitoring
    state: present

- name: Create Diun directory
  file:
    path: /opt/docker/diun
    state: directory
    mode: '0755'

- name: Create Diun data directory
  file:
    path: /opt/docker/diun/data
    state: directory
    mode: '0755'

- name: Deploy Diun configuration
  template:
    src: diun.yml.j2
    dest: /opt/docker/diun/diun.yml
    mode: '0644'
  notify: Restart Diun

- name: Deploy Diun docker-compose.yml
  template:
    src: docker-compose.yml.j2
    dest: /opt/docker/diun/docker-compose.yml
    mode: '0644'
  notify: Restart Diun

- name: Start Diun container
  community.docker.docker_compose_v2:
    project_src: /opt/docker/diun
    state: present
    pull: always
  register: diun_deploy

- name: Wait for Diun to be healthy
  shell: docker inspect --format='{{"{{"}} .State.Status {{"}}"}}' diun
  register: diun_status
  until: diun_status.stdout == "running"
  retries: 5
  delay: 3
  changed_when: false

- name: Display Diun status
  debug:
    msg: "Diun is {{ diun_status.stdout }} on {{ inventory_hostname }}"

@@ -1,77 +0,0 @@
---
# Diun configuration for {{ inventory_hostname }}
# Documentation: https://crazymax.dev/diun/

db:
  path: /data/diun.db

watch:
  workers: {{ diun_watch_workers }}
  schedule: "{{ diun_schedule }}"
  firstCheckNotif: {{ diun_first_check_notif | lower }}

defaults:
  watchRepo: {{ diun_watch_repo | default(true) | lower }}
  notifyOn:
    - new
    - update

{% if diun_docker_hub_username is defined and diun_docker_hub_password is defined %}
regopts:
  - selector: image
    username: {{ diun_docker_hub_username }}
    password: {{ diun_docker_hub_password }}
{% endif %}

providers:
  docker:
    watchByDefault: {{ diun_watch_all | lower }}
{% if diun_exclude_containers | length > 0 %}
    excludeContainers:
{% for container in diun_exclude_containers %}
      - {{ container }}
{% endfor %}
{% endif %}

notif:
{% if diun_notif_enabled and diun_notif_type == 'webhook' and diun_webhook_endpoint %}
  webhook:
    endpoint: {{ diun_webhook_endpoint }}
    method: {{ diun_webhook_method }}
    timeout: 10s
{% if diun_webhook_headers | length > 0 %}
    headers:
{% for key, value in diun_webhook_headers.items() %}
      {{ key }}: {{ value }}
{% endfor %}
{% endif %}
{% endif %}

{% if diun_slack_webhook_url %}
  slack:
    webhookURL: {{ diun_slack_webhook_url }}
{% endif %}

{% if diun_email_enabled and diun_smtp_username_final is defined and diun_smtp_password_final is defined and diun_smtp_password_final != '' %}
  mail:
    host: {{ diun_smtp_host }}
    port: {{ diun_smtp_port }}
    ssl: false
    insecureSkipVerify: false
    username: {{ diun_smtp_username_final }}
    password: {{ diun_smtp_password_final }}
    from: {{ diun_smtp_from }}
    to: {{ diun_smtp_to }}
{% endif %}

{% if diun_matrix_enabled and diun_matrix_homeserver_url and diun_matrix_user and diun_matrix_room_id %}
  matrix:
    homeserverURL: {{ diun_matrix_homeserver_url }}
    user: "{{ diun_matrix_user }}"
{% if diun_matrix_access_token %}
    accessToken: {{ diun_matrix_access_token }}
{% elif diun_matrix_password %}
    password: "{{ diun_matrix_password }}"
{% endif %}
    roomID: "{{ diun_matrix_room_id }}"
{% endif %}

@@ -1,24 +0,0 @@
version: '3.8'

services:
  diun:
    image: crazymax/diun:{{ diun_version }}
    container_name: diun
    restart: unless-stopped
    command: serve
    volumes:
      - "./data:/data"
      - "./diun.yml:/diun.yml:ro"
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
    environment:
      - TZ=UTC
      - LOG_LEVEL={{ diun_log_level }}
    labels:
      - "diun.enable=true"
    networks:
      - monitoring

networks:
  monitoring:
    name: monitoring
    external: true

@@ -66,13 +66,3 @@
    path: /opt/docker
    state: directory
    mode: '0755'

- name: Login to Docker Hub (if credentials provided)
  community.docker.docker_login:
    username: "{{ shared_secrets.docker_hub_username }}"
    password: "{{ shared_secrets.docker_hub_password }}"
    state: present
  when:
    - shared_secrets.docker_hub_username is defined
    - shared_secrets.docker_hub_password is defined
  tags: [docker, docker-login]

@@ -1,38 +0,0 @@
---
# Uptime Kuma monitoring registration
kuma_enabled: true
kuma_url: "https://status.vrije.cloud"

# Authentication - credentials loaded from shared_secrets in tasks/main.yml
# Uses username/password (required for Socket.io API used by Python library)
kuma_username: ""  # Loaded from shared_secrets.kuma_username
kuma_password: ""  # Loaded from shared_secrets.kuma_password

# Monitors to create for each client
kuma_monitors:
  - name: "{{ client_name }} - Authentik SSO"
    type: "http"
    url: "https://auth.{{ client_domain }}"
    method: "GET"
    interval: 60
    maxretries: 3
    retry_interval: 60
    expected_status: "200,302"

  - name: "{{ client_name }} - Nextcloud"
    type: "http"
    url: "https://nextcloud.{{ client_domain }}"
    method: "GET"
    interval: 60
    maxretries: 3
    retry_interval: 60
    expected_status: "200,302"

  - name: "{{ client_name }} - Collabora Office"
    type: "http"
    url: "https://office.{{ client_domain }}"
    method: "GET"
    interval: 60
    maxretries: 3
    retry_interval: 60
    expected_status: "200"

@@ -1,49 +0,0 @@
---
# Register client services with Uptime Kuma monitoring
# Uses uptime-kuma-api Python library with Socket.io

- name: Set Kuma credentials from shared secrets
  set_fact:
    kuma_username: "{{ shared_secrets.kuma_username | default('') }}"
    kuma_password: "{{ shared_secrets.kuma_password | default('') }}"
  when: shared_secrets is defined

- name: Check if Kuma monitoring is enabled
  set_fact:
    kuma_registration_enabled: "{{ (kuma_enabled | bool) and (kuma_url | length > 0) and (kuma_username | length > 0) and (kuma_password | length > 0) }}"

- name: Kuma registration block
  when: kuma_registration_enabled
  delegate_to: localhost
  become: false
  block:
    - name: Ensure uptime-kuma-api Python package is installed
      pip:
        name: uptime-kuma-api
        state: present

    - name: Create Kuma registration script
      template:
        src: register_monitors.py.j2
        dest: /tmp/kuma_register_{{ client_name }}.py
        mode: '0700'

    - name: Register monitors with Uptime Kuma
      command: "{{ ansible_playbook_python }} /tmp/kuma_register_{{ client_name }}.py"
      register: kuma_result
      changed_when: "'Added' in kuma_result.stdout or 'Updated' in kuma_result.stdout"
      failed_when: kuma_result.rc != 0

    - name: Display Kuma registration result
      debug:
        msg: "{{ kuma_result.stdout_lines }}"

    - name: Cleanup registration script
      file:
        path: /tmp/kuma_register_{{ client_name }}.py
        state: absent

- name: Skip Kuma registration message
  debug:
    msg: "Kuma monitoring registration skipped (not enabled or missing credentials)"
  when: not kuma_registration_enabled

@@ -1,128 +0,0 @@
#!/usr/bin/env python3
"""
Uptime Kuma Monitor Registration Script
Auto-generated for client: {{ client_name }}
"""

import sys
from uptime_kuma_api import UptimeKumaApi, MonitorType

# Configuration
KUMA_URL = "{{ kuma_url }}"
KUMA_USERNAME = "{{ kuma_username | default('') }}"
KUMA_PASSWORD = "{{ kuma_password | default('') }}"
CLIENT_NAME = "{{ client_name }}"
CLIENT_DOMAIN = "{{ client_domain }}"

# Monitor definitions
MONITORS = {{ kuma_monitors | to_json }}

# Monitor type mapping
TYPE_MAP = {
    "http": MonitorType.HTTP,
    "https": MonitorType.HTTP,
    "ping": MonitorType.PING,
    "tcp": MonitorType.PORT,
    "dns": MonitorType.DNS,
}


def main():
    """Register monitors with Uptime Kuma"""

    # Check if credentials are provided
    if not KUMA_USERNAME or not KUMA_PASSWORD:
        print("⚠️  Kuma registration skipped: No credentials provided")
        print("")
        print("To enable automated monitor registration, add to your secrets:")
        print("  kuma_username: your_username")
        print("  kuma_password: your_password")
        print("")
        print("Note: API keys (uk1_*) are only for REST endpoints, not monitor management")
        print("Manual registration required at: https://status.vrije.cloud")
        sys.exit(0)  # Exit with success (not a failure, just skipped)

    try:
        # Connect to Uptime Kuma (Socket.io connection)
        print(f"🔌 Connecting to Uptime Kuma at {KUMA_URL}...")
        api = UptimeKumaApi(KUMA_URL)

        # Login with username/password
        print(f"🔐 Authenticating as {KUMA_USERNAME}...")
        api.login(KUMA_USERNAME, KUMA_PASSWORD)

        # Get existing monitors
        print("📋 Fetching existing monitors...")
        existing_monitors = api.get_monitors()
        existing_names = {m['name']: m['id'] for m in existing_monitors}

        # Register each monitor
        added_count = 0
        updated_count = 0
        skipped_count = 0

        for monitor_config in MONITORS:
            monitor_name = monitor_config['name']
            monitor_type_str = monitor_config.get('type', 'http').lower()
            monitor_type = TYPE_MAP.get(monitor_type_str, MonitorType.HTTP)

            # Build monitor parameters
            params = {
                'type': monitor_type,
                'name': monitor_name,
                'interval': monitor_config.get('interval', 60),
                'maxretries': monitor_config.get('maxretries', 3),
                'retryInterval': monitor_config.get('retry_interval', 60),
            }

            # Add type-specific parameters
            if monitor_type == MonitorType.HTTP:
                params['url'] = monitor_config['url']
                params['method'] = monitor_config.get('method', 'GET')
                if 'expected_status' in monitor_config:
                    params['accepted_statuscodes'] = monitor_config['expected_status'].split(',')
            elif monitor_type == MonitorType.PING:
                params['hostname'] = monitor_config.get('hostname', monitor_config.get('url', ''))

            # Check if monitor already exists (look up its ID in the name map)
            if monitor_name in existing_names:
                print(f"⚠️  Monitor '{monitor_name}' already exists (ID: {existing_names[monitor_name]})")
                print("   Skipping (update not implemented)")
                skipped_count += 1
            else:
                print(f"➕ Adding monitor: {monitor_name}")
                try:
                    result = api.add_monitor(**params)
                    print(f"   ✓ Added (ID: {result.get('monitorID', 'unknown')})")
                    added_count += 1
                except Exception as e:
                    print(f"   ✗ Failed: {e}")

        # Disconnect
        api.disconnect()

        # Summary
        print("")
        print("=" * 60)
        print(f"📊 Registration Summary for {CLIENT_NAME}:")
        print(f"   Added: {added_count}")
        print(f"   Skipped (already exist): {skipped_count}")
        print(f"   Total monitors: {len(MONITORS)}")
        print("=" * 60)

        if added_count > 0:
            print(f"✅ Successfully registered {added_count} new monitor(s)")

    except Exception as e:
        print(f"❌ ERROR: Failed to register monitors: {e}")
        print("")
        print("Troubleshooting:")
        print(f"  1. Verify Kuma is accessible: {KUMA_URL}")
        print("  2. Check username/password are correct")
        print("  3. Ensure uptime-kuma-api Python package is installed")
        print("  4. Check network connectivity from deployment machine")
        sys.exit(1)


if __name__ == "__main__":
    main()

@@ -2,7 +2,7 @@
# Default variables for nextcloud role

# Nextcloud version
nextcloud_version: "latest"  # Always use latest stable version
nextcloud_version: "30"  # Latest stable version (uses major version tag)

# Database configuration
nextcloud_db_type: "pgsql"

@@ -22,12 +22,10 @@ nextcloud_redis_host: "nextcloud-redis"
nextcloud_redis_port: "6379"

# OIDC configuration
# Note: OIDC credentials are provided dynamically by the Authentik role
# via /tmp/authentik_oidc_credentials.json during deployment
nextcloud_oidc_enabled: true
nextcloud_oidc_provider_url: "https://{{ authentik_domain }}"
nextcloud_oidc_client_id: ""  # Set dynamically from Authentik
nextcloud_oidc_client_secret: ""  # Set dynamically from Authentik
nextcloud_oidc_provider_url: "https://{{ zitadel_domain }}"
nextcloud_oidc_client_id: ""  # Will be set after creating app in Zitadel
nextcloud_oidc_client_secret: ""  # Will be set after creating app in Zitadel

# Trusted domains (for Nextcloud config)
nextcloud_trusted_domains:

@@ -20,12 +20,10 @@
    state: present
  register: nextcloud_deploy

- name: Wait for Nextcloud container to be ready
  shell: docker exec nextcloud sh -c 'until curl -f http://localhost:80 >/dev/null 2>&1; do sleep 2; done'
  args:
    executable: /bin/bash
  register: nextcloud_ready
  changed_when: false
  failed_when: false
- name: Wait for Nextcloud to be ready
  wait_for:
    host: localhost
    port: 80
    delay: 10
    timeout: 300
  when: nextcloud_deploy.changed

@@ -1,12 +1,6 @@
|
||||
# Main tasks for Nextcloud deployment
|
||||
|
||||
- name: Include volume mounting tasks
|
||||
include_tasks: mount-volume.yml
|
||||
tags:
|
||||
- nextcloud
|
||||
- volume
|
||||
|
||||
- name: Include Docker deployment tasks
|
||||
include_tasks: docker.yml
|
||||
tags:
|
||||
|
|
|
|||
|
|
@ -1,74 +0,0 @@
|
|||
---
# Mount Hetzner Volume for Nextcloud Data Storage
#
# This task file handles mounting the Hetzner Volume that stores Nextcloud user data.
# The volume is created and attached by OpenTofu; we just mount it here.

- name: Wait for volume device to appear
  wait_for:
    path: /dev/disk/by-id/
    timeout: 30
  register: disk_ready

- name: Find Nextcloud volume device
  shell: |
    ls -1 /dev/disk/by-id/scsi-0HC_Volume_* 2>/dev/null | head -1
  register: volume_device_result
  changed_when: false
  failed_when: false

- name: Set volume device fact
  set_fact:
    volume_device: "{{ volume_device_result.stdout }}"

- name: Display found volume device
  debug:
    msg: "Found Nextcloud volume at: {{ volume_device }}"

- name: Check if volume is already formatted
  shell: |
    blkid {{ volume_device }} | grep -q 'TYPE="ext4"'
  register: volume_formatted
  changed_when: false
  failed_when: false

- name: Format volume as ext4 if not formatted
  filesystem:
    fstype: ext4
    dev: "{{ volume_device }}"
  when: volume_formatted.rc != 0

- name: Create mount point directory
  file:
    path: /mnt/nextcloud-data
    state: directory
    mode: '0755'

- name: Mount Nextcloud data volume
  mount:
    path: /mnt/nextcloud-data
    src: "{{ volume_device }}"
    fstype: ext4
    state: mounted
    opts: defaults,discard
  register: mount_result

- name: Ensure mount persists across reboots
  mount:
    path: /mnt/nextcloud-data
    src: "{{ volume_device }}"
    fstype: ext4
    state: present
    opts: defaults,discard

- name: Create Nextcloud data directory on volume
  file:
    path: /mnt/nextcloud-data/data
    state: directory
    owner: www-data
    group: www-data
    mode: '0750'

- name: Display mount success
  debug:
    msg: "Nextcloud volume successfully mounted at /mnt/nextcloud-data"

@@ -56,17 +56,6 @@
  register: oidc_config
  changed_when: oidc_config.rc == 0

- name: Configure OIDC settings (allow native login + OIDC)
  shell: |
    docker exec -u www-data nextcloud php occ config:app:set user_oidc allow_multiple_user_backends --value=1
    docker exec -u www-data nextcloud php occ config:app:set user_oidc auto_provision --value=1
    docker exec -u www-data nextcloud php occ config:app:set user_oidc single_logout --value=0
  when:
    - authentik_oidc is defined
    - authentik_oidc.success | default(false)
  register: oidc_settings
  changed_when: oidc_settings.rc == 0

- name: Cleanup OIDC credentials file
  file:
    path: /tmp/authentik_oidc_credentials.json

@@ -10,17 +10,8 @@ services:
      POSTGRES_DB: {{ nextcloud_db_name }}
      POSTGRES_USER: {{ nextcloud_db_user }}
      POSTGRES_PASSWORD: {{ client_secrets.nextcloud_db_password }}
      # Grant full privileges to the user
      POSTGRES_INITDB_ARGS: "--auth-host=scram-sha-256"
    command: >
      postgres
      -c shared_buffers=256MB
      -c max_connections=200
      -c shared_preload_libraries=''
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U {{ nextcloud_db_user }} -d {{ nextcloud_db_name }}"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - nextcloud-internal

@@ -44,8 +35,7 @@ services:
      - nextcloud-db
      - nextcloud-redis
    volumes:
      - nextcloud-app:/var/www/html
      - /mnt/nextcloud-data/data:/var/www/html/data  # User data on Hetzner Volume
      - nextcloud-data:/var/www/html
    entrypoint: /cron.sh
    networks:
      - nextcloud-internal

@@ -59,8 +49,7 @@ services:
      - nextcloud-db
      - nextcloud-redis
    volumes:
      - nextcloud-app:/var/www/html
      - /mnt/nextcloud-data/data:/var/www/html/data  # User data on Hetzner Volume
      - nextcloud-data:/var/www/html
    environment:
      # Database configuration
      POSTGRES_HOST: {{ nextcloud_db_host }}

@@ -126,18 +115,11 @@ services:
    image: collabora/code:latest
    container_name: collabora
    restart: unless-stopped
    # Required capabilities for optimal performance (bind-mount instead of copy)
    cap_add:
      - MKNOD
      - SYS_CHROOT
    environment:
      - domain={{ nextcloud_domain | regex_replace('\.', '\\.') }}
      - username={{ collabora_admin_user }}
      - password={{ client_secrets.collabora_admin_password }}
      # Performance tuning based on available CPU cores
      # num_prespawn_children: Number of child processes to keep started (default: 1)
      # per_document.max_concurrency: Max threads per document (should be <= CPU cores)
      - extra_params=--o:ssl.enable=false --o:ssl.termination=true --o:num_prespawn_children=1 --o:per_document.max_concurrency=2
      - extra_params=--o:ssl.enable=false --o:ssl.termination=true
      - MEMPROPORTION=60.0
      - MAX_DOCUMENTS=10
      - MAX_CONNECTIONS=20

@@ -176,6 +158,5 @@ volumes:
    name: nextcloud-db-data
  nextcloud-redis-data:
    name: nextcloud-redis-data
  nextcloud-app:
    name: nextcloud-app
  # Note: nextcloud-data volume removed - user data now stored on Hetzner Volume at /mnt/nextcloud-data
  nextcloud-data:
    name: nextcloud-data

@@ -1,8 +1,66 @@
# Traefik dynamic configuration
# Managed by Ansible - Client-specific routes come from Docker labels
# Managed by Ansible - do not edit manually

http:
  routers:
    # Zitadel identity provider
    zitadel:
      rule: "Host(`zitadel.test.vrije.cloud`)"
      service: zitadel
      entryPoints:
        - websecure
      tls:
        certResolver: letsencrypt
      middlewares:
        - zitadel-headers

    # Nextcloud file sync/share
    nextcloud:
      rule: "Host(`nextcloud.test.vrije.cloud`)"
      service: nextcloud
      entryPoints:
        - websecure
      tls:
        certResolver: letsencrypt
      middlewares:
        - nextcloud-headers
        - nextcloud-redirectregex

  services:
    # Zitadel service
    zitadel:
      loadBalancer:
        servers:
          - url: "h2c://zitadel:8080"

    # Nextcloud service
    nextcloud:
      loadBalancer:
        servers:
          - url: "http://nextcloud:80"

  middlewares:
    # Zitadel-specific headers
    zitadel-headers:
      headers:
        stsSeconds: 31536000
        stsIncludeSubdomains: true
        stsPreload: true

    # Nextcloud-specific headers
    nextcloud-headers:
      headers:
        stsSeconds: 31536000
        stsIncludeSubdomains: true
        stsPreload: true

    # CalDAV/CardDAV redirect for Nextcloud
    nextcloud-redirectregex:
      redirectRegex:
        permanent: true
        regex: "https://(.*)/.well-known/(card|cal)dav"
        replacement: "https://$1/remote.php/dav/"

    # Security headers
    security-headers:
      headers:

@@ -1,83 +0,0 @@
# Client Registry

This directory contains the client registry system for tracking all deployed infrastructure.

## Files

- **[registry.yml](registry.yml)** - Single source of truth for all clients
  - Deployment status and lifecycle
  - Server specifications
  - Application versions
  - Maintenance history
  - Access URLs

## Management Scripts

All scripts are located in [`../scripts/`](../scripts/):

### View Clients

View the registry directly:
```bash
# View full registry
cat registry.yml

# View specific client (requires yq)
yq eval '.clients.dev' registry.yml
```

### View Client Details

```bash
# Show detailed status with live health checks
../scripts/client-status.sh <client_name>
```

### Update Registry

The registry is **automatically updated** by deployment scripts:
- `deploy-client.sh` - Creates/updates entry on deployment
- `rebuild-client.sh` - Updates entry on rebuild
- `destroy-client.sh` - Marks as destroyed

For manual updates, edit `registry.yml` directly.

## Registry Structure

Each client entry tracks:
- **Status**: `pending` → `deployed` → `maintenance` → `offboarding` → `destroyed`
- **Role**: `canary` (testing) or `production` (live)
- **Server**: Type, location, IP, Hetzner ID
- **Apps**: Installed applications
- **Versions**: Application and OS versions
- **Maintenance**: Update and backup history
- **URLs**: Access endpoints
- **Notes**: Operational documentation

## Canary Deployment

The `dev` client has role `canary` and is used for testing:

```bash
# 1. Test on canary first
../scripts/deploy-client.sh dev

# 2. Verify it works
../scripts/client-status.sh dev

# 3. Roll out to production clients manually
# Review registry.yml for production clients, then rebuild each one
```

## Registry Structure Details

The `registry.yml` file uses YAML format with the following structure:
- Complete registry structure reference in the file itself
- Client lifecycle states and metadata
- Server specifications and IP addresses
- Deployment timestamps and version tracking

## Requirements

- **yq**: YAML processor (`brew install yq`)
- **jq**: JSON processor (`brew install jq`)

@@ -1,89 +0,0 @@
# Client Registry
#
# Single source of truth for all clients in the infrastructure.
# This file tracks client lifecycle, deployment state, and versions.
#
# Status values:
#   - pending: Client configuration created, not yet deployed
#   - deployed: Client is live and operational
#   - maintenance: Under maintenance, may be temporarily unavailable
#   - offboarding: Being decommissioned
#   - destroyed: Infrastructure removed, secrets archived
#
# Role values:
#   - canary: Used for testing updates before production rollout
#   - production: Live client serving real users

clients:
  dev:
    status: deployed
    role: canary
    deployed_date: 2026-01-17
    destroyed_date: null

    server:
      type: cpx22  # 3 vCPU, 4 GB RAM, 80 GB SSD
      location: fsn1  # Falkenstein, Germany
      ip: 78.47.191.38
      id: "117714358"  # Hetzner server ID

    apps:
      - authentik
      - nextcloud

    versions:
      authentik: "2025.10.3"
      nextcloud: "30.0.17"
      traefik: "v3.0"
      ubuntu: "24.04"

    maintenance:
      last_full_update: 2026-01-17
      last_security_patch: 2026-01-17
      last_os_update: 2026-01-17
      last_backup_verified: null

    urls:
      authentik: "https://auth.dev.vrije.cloud"
      nextcloud: "https://nextcloud.dev.vrije.cloud"

    notes: |
      Canary/test server. Used for testing updates before production rollout.
      Server was recreated on 2026-01-17 for per-client SSH key implementation.

# Add new clients here as they are deployed
# Template:
#
# clientname:
#   status: deployed
#   role: production
#   deployed_date: YYYY-MM-DD
#   destroyed_date: null
#
#   server:
#     type: cx22
#     location: nbg1
#     ip: 1.2.3.4
#     id: "12345678"
#
#   apps:
#     - authentik
#     - nextcloud
#
#   versions:
#     authentik: "2025.10.3"
#     nextcloud: "30.0.17"
#     traefik: "v3.0"
#     ubuntu: "24.04"
#
#   maintenance:
#     last_full_update: YYYY-MM-DD
#     last_security_patch: YYYY-MM-DD
#     last_os_update: YYYY-MM-DD
#     last_backup_verified: null
#
#   urls:
#     authentik: "https://auth.clientname.vrije.cloud"
#     nextcloud: "https://nextcloud.clientname.vrije.cloud"
#
#   notes: ""
245  docs/AUTOMATION_STATUS.md  Normal file

@@ -0,0 +1,245 @@
# Automation Status

## ✅ FULLY AUTOMATED DEPLOYMENT

**Status**: The infrastructure is now **100% automated** with **ZERO manual steps** required.

## What Gets Deployed

When you run the deployment playbook, the following happens automatically:

### 1. Hetzner Cloud Infrastructure
- VPS server provisioned via OpenTofu
- Firewall rules configured
- SSH keys deployed
- Domain DNS configured

### 2. Traefik Reverse Proxy
- Docker containers deployed
- Let's Encrypt SSL certificates obtained automatically
- HTTPS configured for all services

### 3. Authentik Identity Provider
- PostgreSQL database deployed
- Authentik server + worker containers started
- **Admin user `akadmin` created automatically** via `AUTHENTIK_BOOTSTRAP_PASSWORD`
- **API token created automatically** via `AUTHENTIK_BOOTSTRAP_TOKEN`
- OAuth2/OIDC provider for Nextcloud created via API
- Client credentials generated and saved

### 4. Nextcloud File Storage
- PostgreSQL database deployed
- Redis cache configured
- Nextcloud container started
- **Admin account created automatically**
- **OIDC app installed and configured automatically**
- **SSO integration with Authentik configured automatically**

## Deployment Command

```bash
cd infrastructure/tofu
tofu apply

cd ../ansible
export HCLOUD_TOKEN="<your_token>"
export SOPS_AGE_KEY_FILE="../keys/age-key.txt"

ansible-playbook -i hcloud.yml playbooks/setup.yml
ansible-playbook -i hcloud.yml playbooks/deploy.yml
```

## What You Get

After deployment completes (typically 10-15 minutes):

### Immediately Usable Services

1. **Authentik SSO**: `https://auth.<client>.vrije.cloud`
   - Admin user: `akadmin`
   - Password: Generated automatically, stored in secrets
   - Fully configured and ready to create users

2. **Nextcloud**: `https://nextcloud.<client>.vrije.cloud`
   - Admin user: `admin`
   - Password: Generated automatically, stored in secrets
   - **"Login with Authentik" button already visible**
   - No additional configuration needed

### End User Workflow

1. Admin logs into Authentik
2. Admin creates user accounts in Authentik
3. Users visit the Nextcloud login page
4. Users click "Login with Authentik"
5. Users enter Authentik credentials
6. Nextcloud account automatically created and linked
7. User is logged in and can use Nextcloud

## Technical Details

### Bootstrap Automation

Authentik supports official bootstrap environment variables:

```yaml
# In docker-compose.authentik.yml.j2
environment:
  AUTHENTIK_BOOTSTRAP_PASSWORD: "{{ client_secrets.authentik_bootstrap_password }}"
  AUTHENTIK_BOOTSTRAP_TOKEN: "{{ client_secrets.authentik_bootstrap_token }}"
  AUTHENTIK_BOOTSTRAP_EMAIL: "{{ client_secrets.authentik_bootstrap_email }}"
```

These variables:
- Are only read during **first startup** (when the database is empty)
- Create the default `akadmin` user with the specified password
- Create an API token for programmatic access
- **Require no manual intervention**
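For reference, the matching entries in the client's SOPS-encrypted secrets file look roughly like this (a sketch: the key names follow the compose template above, the values shown are placeholders for the strings generated at onboarding):

```yaml
# secrets/clients/<client>.sops.yaml (decrypted view; values are placeholders)
authentik_bootstrap_password: "<generated>"
authentik_bootstrap_token: "<generated>"
authentik_bootstrap_email: "admin@example.org"  # illustrative address
```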
### OIDC Provider Automation

The `authentik_api.py` script:
1. Waits for Authentik to be ready
2. Authenticates using the bootstrap token
3. Gets the default authorization flow UUID
4. Gets the default signing certificate UUID
5. Creates the OAuth2/OIDC provider for Nextcloud
6. Creates an application linked to the provider
7. Returns `client_id`, `client_secret`, `discovery_uri`

The Nextcloud role (a task sketch follows this list):
1. Installs the `user_oidc` app
2. Reads the credentials from a temporary file
3. Configures the OIDC provider via the `occ` command
4. Cleans up the temporary files
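A minimal sketch of step 3, assuming the credentials have already been parsed into an `authentik_oidc` fact (the `user_oidc` app ships an `occ user_oidc:provider` command; the exact task name in the role may differ):

```yaml
- name: Register Authentik as OIDC provider in Nextcloud
  shell: |
    docker exec -u www-data nextcloud php occ user_oidc:provider Authentik \
      --clientid="{{ authentik_oidc.client_id }}" \
      --clientsecret="{{ authentik_oidc.client_secret }}" \
      --discoveryuri="{{ authentik_oidc.discovery_uri }}"
  when:
    - authentik_oidc is defined
    - authentik_oidc.success | default(false)
```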
### Secrets Management

All sensitive data is:
- Generated automatically using Python's `secrets` module
- Stored in SOPS-encrypted files
- Never committed to git in plaintext
- Decrypted only during Ansible execution

## Multi-Tenant Support

To add a new client:

```bash
# 1. Create secrets file
cp secrets/clients/test.sops.yaml secrets/clients/newclient.sops.yaml
sops secrets/clients/newclient.sops.yaml
# Edit: client_name, domains, regenerate all passwords/tokens

# 2. Deploy
tofu apply
ansible-playbook -i hcloud.yml playbooks/deploy.yml --limit newclient
```

Each client gets:
- Isolated VPS server
- Separate databases
- Separate Docker networks
- Own SSL certificates
- Own admin credentials
- Own SSO configuration

## Zero Manual Configuration

### What is NOT required

❌ No web UI clicking
❌ No manual account creation
❌ No copying/pasting of credentials
❌ No OAuth2 provider setup in web UI
❌ No Nextcloud app configuration
❌ No DNS configuration (handled by Hetzner API)
❌ No SSL certificate generation (handled by Traefik)

### What IS required

✅ Run OpenTofu to provision infrastructure
✅ Run Ansible to deploy and configure services
✅ Wait 10-15 minutes for deployment to complete

That's it!

## Validation

After deployment, you can verify the automation worked:

```bash
# 1. Check services are running
ssh root@<client_ip>
docker ps

# 2. Visit Nextcloud
curl -I https://nextcloud.<client>.vrije.cloud
# Should return 200 OK with SSL

# 3. Check for "Login with Authentik" button
# Visit https://nextcloud.<client>.vrije.cloud/login
# Button should be visible immediately

# 4. Test SSO flow
# Click button → redirected to Authentik
# Login with Authentik credentials
# Redirected back to Nextcloud, logged in
```

## Comparison: Before vs After

### Before (Manual Setup)

1. Deploy Authentik ✅
2. **Visit web UI and create admin account** ❌
3. **Login and create API token manually** ❌
4. **Add token to secrets file** ❌
5. **Re-run deployment** ❌
6. Deploy Nextcloud ✅
7. **Configure OIDC provider in Authentik UI** ❌
8. **Copy client_id and client_secret** ❌
9. **Configure Nextcloud OIDC app** ❌
10. Test SSO ✅

**Total manual steps: 7**
**Time to production: 30-60 minutes**

### After (Fully Automated)

1. Run `tofu apply` ✅
2. Run `ansible-playbook` ✅
3. Test SSO ✅

**Total manual steps: 0**
**Time to production: 10-15 minutes**

## Project Goal Achieved

> "I never want to do anything manually, the whole point of this project is that we use it to automatically create servers in the Hetzner cloud that run authentik and nextcloud that people can use out of the box"

✅ **GOAL ACHIEVED**

The system now:
- Automatically creates servers in Hetzner Cloud
- Automatically deploys Authentik and Nextcloud
- Automatically configures SSO integration
- Is ready to use immediately after deployment
- Requires zero manual configuration

Users can:
- Log in to Nextcloud with Authentik credentials
- Get automatically provisioned accounts
- Use the system immediately

## Next Steps

The system is production-ready for automated multi-tenant deployment. Potential enhancements:

1. **Automated user provisioning** - Create default users via Authentik API
2. **Email configuration** - Add SMTP settings for password resets
3. **Backup automation** - Automated backups to Hetzner Storage Box
4. **Monitoring** - Add Prometheus/Grafana for observability
5. **Additional apps** - OnlyOffice, Collabora, etc.

But for the core goal of **automated Authentik + Nextcloud with SSO**, the system is **complete and fully automated**.
846  docs/architecture-decisions.md  Normal file

@@ -0,0 +1,846 @@
# Infrastructure Architecture Decision Record

## Post-X Society Multi-Tenant VPS Platform

**Document Status:** Living document
**Created:** December 2024
**Last Updated:** December 2025

---

## Executive Summary

This document captures architectural decisions for a scalable, multi-tenant infrastructure platform starting with 10 identical VPS instances running Authentik and Nextcloud, with plans to expand both server count and application offerings.

**Key Technology Choices:**
- **OpenTofu** over Terraform (truly open source, MPL 2.0)
- **SOPS + Age** over HashiCorp Vault (simple, no server, European-friendly)
- **Hetzner** for all infrastructure (GDPR-compliant, EU-based)

---

## 1. Infrastructure Provisioning

### Decision: OpenTofu + Ansible with Dynamic Inventory

**Choice:** Infrastructure as Code using OpenTofu for resource provisioning and Ansible for configuration management.

**Why OpenTofu over Terraform:**
- Truly open source (MPL 2.0) vs HashiCorp's BSL 1.1
- Drop-in replacement - same syntax, same providers
- Linux Foundation governance - no single company can close the license
- Active community after HashiCorp's 2023 license change
- No risk of future license restrictions

**Approach:**
- **OpenTofu** manages Hetzner resources (VPS instances, networks, firewalls, DNS)
- **Ansible** configures servers using the `hcloud` dynamic inventory plugin
- No static inventory files - Ansible queries the Hetzner API at runtime

**Rationale:**
- 10+ identical servers make manual management unsustainable
- Version-controlled infrastructure in Git
- Dynamic inventory eliminates sync issues between OpenTofu and Ansible
- Skills transfer to other providers if needed

**Implementation:**
```yaml
# ansible.cfg
[inventory]
enable_plugins = hetzner.hcloud.hcloud

# hcloud.yml (inventory config)
plugin: hetzner.hcloud.hcloud
locations:
  - fsn1
keyed_groups:
  - key: labels.role
    prefix: role
  - key: labels.client
    prefix: client
```

---

## 2. Application Deployment

### Decision: Modular Ansible Roles with Feature Flags

**Choice:** Each application is a separate Ansible role, enabled per-server via inventory variables.

**Rationale:**
- Allows heterogeneous deployments (client A wants Pretix, client B doesn't)
- Test new applications on a single server before fleet rollout
- Clear separation of concerns
- Minimal refactoring when adding new applications

**Structure:**
```
ansible/
├── roles/
│   ├── common/        # Base setup, hardening, Docker
│   ├── traefik/       # Reverse proxy, SSL
│   ├── nextcloud/     # File sync and collaboration
│   ├── pretix/        # Future: Event ticketing
│   ├── listmonk/      # Future: Newsletter/mailing
│   ├── backup/        # Restic configuration
│   └── monitoring/    # Node exporter, promtail
```

**Inventory Example:**
```yaml
all:
  children:
    clients:
      hosts:
        client-alpha:
          client_name: alpha
          domain: alpha.platform.nl
          apps:
            - nextcloud
        client-beta:
          client_name: beta
          domain: beta.platform.nl
          apps:
            - nextcloud
            - pretix
```

---

## 3. DNS Management

### Decision: Hetzner DNS via OpenTofu

**Choice:** Manage all DNS records through Hetzner DNS using OpenTofu.

**Rationale:**
- Single provider for infrastructure and DNS simplifies management
- OpenTofu provider available and well-maintained (same as the Terraform provider)
- Cost-effective (included with Hetzner)
- GDPR-compliant (EU-based)

**Domain Strategy:**
- Start with subdomains: `{client}.platform.nl`
- Support custom domains later via variable override
- Wildcard approach not used - explicit records per service

**Implementation:**
```hcl
resource "hcloud_server" "client" {
  for_each    = var.clients
  name        = each.key
  server_type = each.value.server_type
  # ...
}

resource "hetznerdns_record" "client_a" {
  for_each = var.clients
  zone_id  = data.hetznerdns_zone.main.id
  name     = each.value.subdomain
  type     = "A"
  value    = hcloud_server.client[each.key].ipv4_address
}
```

**SSL Certificates:** Handled by Traefik with Let's Encrypt, automatic per-domain.

---

## 4. Identity Provider

### Decision: Authentik (replacing Zitadel)

**Choice:** Authentik as the identity provider for SSO across all client installations.

**Why Authentik:**

| Factor | Authentik | Zitadel | Keycloak |
|--------|-----------|---------|----------|
| License | MIT (permissive) | AGPL 3.0 | Apache 2.0 |
| Setup Complexity | Simple Docker Compose | Complex FirstInstance bugs | Heavy Java setup |
| Database | PostgreSQL only | PostgreSQL only | Multiple options |
| Language | Python | Go | Java |
| Resource Usage | Lightweight | Lightweight | Heavy |
| Maturity | v2025.10 (stable) | v2.x (buggy) | Very mature |
| Architecture | Modern, API-first | Event-sourced | Traditional |

**Key Advantages:**
- **Truly open source**: MIT license (most permissive OSI license)
- **Simple deployment**: Works out-of-box with Docker Compose, no manual wizard steps
- **Modern architecture**: Python-based, lightweight, API-first design
- **Comprehensive protocols**: SAML, OAuth2/OIDC, LDAP, RADIUS, SCIM
- **No Redis required** (as of 2025.10): All caching moved to PostgreSQL
- **Built-in workflows**: Customizable authentication flows and policies
- **Active development**: Regular releases, strong community

**Deployment:**
```yaml
services:
  authentik-server:
    image: ghcr.io/goauthentik/server:2025.10.3
    command: server
    environment:
      AUTHENTIK_SECRET_KEY: ${AUTHENTIK_SECRET_KEY}
      AUTHENTIK_POSTGRESQL__HOST: postgresql
    depends_on:
      - postgresql

  authentik-worker:
    image: ghcr.io/goauthentik/server:2025.10.3
    command: worker
    environment:
      AUTHENTIK_SECRET_KEY: ${AUTHENTIK_SECRET_KEY}
      AUTHENTIK_POSTGRESQL__HOST: postgresql
    depends_on:
      - postgresql
```

**Previous Choice (Zitadel):**
- Removed due to FirstInstance initialization bugs in v2.63.7
- Required manual web UI setup (not scalable for multi-tenant)
- See: https://github.com/zitadel/zitadel/issues/8791

---

## 5. Backup Strategy

### Decision: Dual Backup Approach

**Choice:** Hetzner automated snapshots + Restic application-level backups to Hetzner Storage Box.

#### Layer 1: Hetzner Snapshots

**Purpose:** Disaster recovery (complete server loss)

| Aspect | Configuration |
|--------|---------------|
| Frequency | Daily (Hetzner automated) |
| Retention | 7 snapshots |
| Cost | 20% of VPS price |
| Restoration | Full server restore via Hetzner console/API |

**Limitations:**
- Crash-consistent only (may catch database mid-write)
- Same datacenter (not true off-site)
- Coarse granularity (all or nothing)

#### Layer 2: Restic to Hetzner Storage Box

**Purpose:** Granular application recovery, off-server storage

**Backend Choice:** Hetzner Storage Box

**Rationale:**
- GDPR-compliant (German/EU data residency)
- Same Hetzner network = fast transfers, no egress costs
- Cost-effective (~€3.81/month for BX10 with 1TB)
- Supports SFTP, CIFS/Samba, rsync, Restic-native
- Can be accessed from all VPSs simultaneously

**Storage Hierarchy:**
```
Storage Box (BX10 or larger)
└── /backups/
    ├── /client-alpha/
    │   ├── /restic-repo/    # Encrypted Restic repository
    │   └── /manual/         # Ad-hoc exports if needed
    ├── /client-beta/
    │   └── /restic-repo/
    └── /client-gamma/
        └── /restic-repo/
```

**Connection Method:**
- Primary: SFTP (native Restic support, encrypted in transit)
- Optional: CIFS mount for manual file access
- Each client VPS gets a Storage Box sub-account or uses main credentials with path restrictions

| Aspect | Configuration |
|--------|---------------|
| Frequency | Nightly (after DB dumps) |
| Time | 03:00 local time |
| Retention | 7 daily, 4 weekly, 6 monthly |
| Encryption | Restic default (AES-256) |
| Repo passwords | Stored in SOPS-encrypted files |

**What Gets Backed Up:**
```
/opt/docker/
├── nextcloud/
│   └── data/        # ✓ User files
├── pretix/
│   └── data/        # ✓ When applicable
└── configs/         # ✓ docker-compose files, env
```

**Backup Ansible Role Tasks:**
1. Install Restic
2. Initialize repo (if not exists)
3. Configure SFTP connection to Storage Box
4. Create pre-backup script (database dumps)
5. Create backup script
6. Create systemd timer (see the sketch after this list)
7. Configure backup monitoring (alert on failure)
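A minimal sketch of step 6, assuming the backup script from step 5 has been templated to `/usr/local/bin/restic-backup.sh` (unit names and paths are illustrative):

```yaml
- name: Install Restic backup service unit
  copy:
    dest: /etc/systemd/system/restic-backup.service
    content: |
      [Unit]
      Description=Restic backup to Hetzner Storage Box

      [Service]
      Type=oneshot
      ExecStart=/usr/local/bin/restic-backup.sh
    mode: '0644'

- name: Install Restic backup timer (nightly at 03:00)
  copy:
    dest: /etc/systemd/system/restic-backup.timer
    content: |
      [Unit]
      Description=Nightly Restic backup

      [Timer]
      OnCalendar=*-*-* 03:00:00
      Persistent=true

      [Install]
      WantedBy=timers.target
    mode: '0644'

- name: Enable and start the backup timer
  systemd:
    name: restic-backup.timer
    enabled: true
    state: started
    daemon_reload: true
```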
**Sizing Guidance:**
- Start with BX10 (1TB) for 10 clients
- Monitor usage monthly
- Scale to BX20 (2TB) when approaching 70% capacity

**Verification:**
- Weekly `restic check` via cron
- Monthly test restore to staging environment
- Alerts on backup job failures

---

## 6. Secrets Management

### Decision: SOPS + Age Encryption

**Choice:** File-based secrets encryption using SOPS with Age encryption, stored in Git.

**Why SOPS + Age over HashiCorp Vault:**
- No additional server to maintain
- Truly open source (MPL 2.0 for SOPS, Apache 2.0 for Age)
- Secrets versioned alongside infrastructure code
- Simple to understand and debug
- Age developed with European privacy values (FiloSottile)
- Perfect for 10-50 server scale
- No vendor lock-in concerns

**How It Works:**
1. Secrets stored in YAML files, encrypted with Age
2. Only the values are encrypted, keys remain readable
3. Decryption happens at Ansible runtime
4. One Age key per environment (or shared across all)

**Example Encrypted File:**
```yaml
# secrets/client-alpha.sops.yaml
db_password: ENC[AES256_GCM,data:kH3x9...,iv:abc...,tag:def...,type:str]
keycloak_admin: ENC[AES256_GCM,data:mN4y2...,iv:ghi...,tag:jkl...,type:str]
nextcloud_admin: ENC[AES256_GCM,data:pQ5z7...,iv:mno...,tag:pqr...,type:str]
restic_repo_password: ENC[AES256_GCM,data:rS6a1...,iv:stu...,tag:vwx...,type:str]
```

**Key Management:**
```
keys/
├── age-key.txt    # Master key (NEVER in Git, backed up securely)
└── .sops.yaml     # SOPS configuration (in Git)
```

**.sops.yaml Configuration:**
```yaml
creation_rules:
  - path_regex: secrets/.*\.sops\.yaml$
    age: age1xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```

**Secret Structure:**
```
secrets/
├── .sops.yaml           # SOPS config
├── shared.sops.yaml     # Shared secrets (Storage Box, API tokens)
└── clients/
    ├── alpha.sops.yaml  # Client-specific secrets
    ├── beta.sops.yaml
    └── gamma.sops.yaml
```

**Ansible Integration:**
```yaml
# Using community.sops collection
- name: Load client secrets
  community.sops.load_vars:
    file: "secrets/clients/{{ client_name }}.sops.yaml"
    name: client_secrets

- name: Use decrypted secret
  ansible.builtin.template:
    src: docker-compose.yml.j2
    dest: /opt/docker/docker-compose.yml
  vars:
    db_password: "{{ client_secrets.db_password }}"
```

**Daily Operations:**
```bash
# Encrypt a new file
sops --encrypt --age $(cat keys/age-key.pub) secrets/clients/new.yaml > secrets/clients/new.sops.yaml

# Edit existing secrets (decrypts, opens editor, re-encrypts)
SOPS_AGE_KEY_FILE=keys/age-key.txt sops secrets/clients/alpha.sops.yaml

# View decrypted content
SOPS_AGE_KEY_FILE=keys/age-key.txt sops --decrypt secrets/clients/alpha.sops.yaml
```

**Key Backup Strategy:**
- Age private key stored in password manager (Bitwarden/1Password)
- Printed paper backup in secure location
- Key never stored in Git repository
- Consider key escrow for bus factor

**Advantages for Your Setup:**
| Aspect | Benefit |
|--------|---------|
| Simplicity | No Vault server to maintain, secure, update |
| Auditability | Git history shows who changed what secrets when |
| Portability | Works offline, no network dependency |
| Reliability | No secrets server = no secrets server downtime |
| Cost | Zero infrastructure cost |

---

## 7. Monitoring

### Decision: Centralized Uptime Kuma

**Choice:** Uptime Kuma on a dedicated monitoring server.

**Rationale:**
- Simple to deploy and maintain
- Beautiful UI for status overview
- Flexible alerting (email, Slack, webhook)
- Self-hosted (data stays in-house)
- Sufficient for "is it up?" monitoring at current scale

**Deployment:**
- Dedicated VPS or container on the monitoring server (see the compose sketch below)
- Monitors all client servers and services
- Public status page optional per client

**Monitors per Client:**
- HTTPS endpoint (Nextcloud)
- TCP port checks (database, if exposed)
- Docker container health (via API or agent)
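A minimal docker-compose sketch for the monitoring server (the image tag and volume name are assumptions; pin an explicit version in practice):

```yaml
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    container_name: uptime-kuma
    restart: unless-stopped
    volumes:
      - kuma-data:/app/data   # monitor definitions and history
    ports:
      - "3001:3001"           # put behind Traefik with TLS in practice

volumes:
  kuma-data:
```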
**Alerting:**
|
||||
- Primary: Email
|
||||
- Secondary: Slack/Mattermost webhook
|
||||
- Escalation: SMS for extended downtime (future)
|
||||
|
||||
**Future Expansion Path:**
|
||||
When deeper metrics needed:
|
||||
1. Add Prometheus + Node Exporter
|
||||
2. Add Grafana dashboards
|
||||
3. Add Loki for log aggregation
|
||||
4. Uptime Kuma remains for synthetic monitoring
|
||||
|
||||
---
|
||||
|
||||
## 7. Client Isolation
|
||||
|
||||
### Decision: Full Isolation
|
||||
|
||||
**Choice:** Maximum isolation between clients at all levels.
|
||||
|
||||
**Implementation:**
|
||||
|
||||
| Layer | Isolation Method |
|
||||
|-------|------------------|
|
||||
| Compute | Separate VPS per client |
|
||||
| Network | Hetzner firewall rules, no inter-VPS traffic |
|
||||
| Database | Separate PostgreSQL container per client |
|
||||
| Storage | Separate Docker volumes |
|
||||
| Backups | Separate Restic repositories |
|
||||
| Secrets | Separate SOPS files per client |
|
||||
| DNS | Separate records/domains |
|
||||
|
||||
**Network Rules:**
|
||||
- Each VPS accepts traffic only on 80, 443, 22 (management IP only)
|
||||
- No private network between client VPSs
|
||||
- Monitoring server can reach all clients (outbound checks)
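
In UFW terms the policy looks roughly like this (a sketch; `203.0.113.10` stands in for the real management IP):

```bash
# Default-deny inbound, then open only what the policy allows
ufw default deny incoming
ufw default allow outgoing
ufw allow from 203.0.113.10 to any port 22 proto tcp   # management SSH only
ufw allow 80/tcp                                        # HTTP (redirects to 443)
ufw allow 443/tcp                                       # HTTPS
ufw --force enable
```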

**Rationale:**
- Security: Compromise of one client cannot spread
- Compliance: Data separation is demonstrable
- Operations: Clients can be maintained/upgraded independently
- Billing: Clear resource attribution

---

## 8. Deployment Strategy

### Decision: Canary Deployments with Version Pinning

**Choice:** Staged rollouts with explicit version control.

#### Version Pinning

All container images use explicit tags:
```yaml
# docker-compose.yml
services:
  nextcloud:
    image: nextcloud:28.0.1   # Never use :latest
  keycloak:
    image: quay.io/keycloak/keycloak:23.0.1
  postgres:
    image: postgres:16.1
```

Version updates require an explicit change and commit.
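
A simple CI guard can enforce the rule; a sketch:

```bash
# Fail the pipeline if any compose file references an unpinned image
if grep -rn ':latest' docker/ --include='*.yml'; then
  echo "ERROR: unpinned :latest tag found" >&2
  exit 1
fi
```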

#### Canary Process

**Inventory Groups:**
```yaml
all:
  children:
    canary:
      hosts:
        client-alpha:   # Designated test client (internal or willing partner)
    production:
      hosts:
        client-beta:
        client-gamma:
        # ... remaining clients
```

**Deployment Script:**
```bash
#!/bin/bash
set -e

echo "=== Deploying to canary ==="
ansible-playbook deploy.yml --limit canary

echo "=== Waiting for verification ==="
read -p "Canary OK? Proceed to production? [y/N] " confirm
if [[ $confirm != "y" ]]; then
    echo "Deployment aborted"
    exit 1
fi

echo "=== Deploying to production ==="
ansible-playbook deploy.yml --limit production
```

#### Rollback Procedures

**Scenario 1: Bad container version**
```bash
# Revert the version bump in docker-compose
git revert HEAD
# Redeploy
ansible-playbook deploy.yml --limit affected_hosts
```

**Scenario 2: Database migration issue**
```bash
# Restore from the pre-upgrade Restic backup
restic -r sftp:user@backup-server:/client-x/restic-repo restore latest --target /tmp/restore
# Restore the database dump (the DB runs in a container, so pipe through
# docker exec; container and role names follow the client's compose file)
docker exec -i postgres psql -U postgres < /tmp/restore/db-dumps/keycloak.sql
# Revert and redeploy the application
```

**Scenario 3: Complete server failure**
```bash
# Restore Hetzner snapshot via API
hcloud server rebuild <server-id> --image <snapshot-id>
# Or via OpenTofu
tofu apply -replace="hcloud_server.client[\"affected\"]"
```

---

## 9. Security Baseline

### Decision: Comprehensive Hardening

All servers receive the `common` Ansible role with:

#### SSH Hardening
```
# /etc/ssh/sshd_config (managed by Ansible)
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
AllowUsers deploy
```

#### Firewall (UFW)
```yaml
- 22/tcp: Management IPs only
- 80/tcp: Any (redirects to 443)
- 443/tcp: Any
- All other: Deny
```

#### Automatic Updates
```
// unattended-upgrades configuration
Unattended-Upgrade::Allowed-Origins {
    "${distro_id}:${distro_codename}-security";
};
Unattended-Upgrade::AutoFixInterruptedDpkg "true";
Unattended-Upgrade::Automatic-Reboot "false";   // Manual reboot control
```

#### Fail2ban
```yaml
# Jails enabled
- sshd
- traefik-auth (custom, for repeated 401s)
```
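
The custom jail could take roughly this shape (a sketch; the filter regex and log path are assumptions, not the deployed config):

```bash
# Filter: match 401 responses in Traefik's access log
cat > /etc/fail2ban/filter.d/traefik-auth.conf <<'EOF'
[Definition]
failregex = ^<HOST> .* 401 \d+
EOF

# Jail: ban after 5 hits within 10 minutes
cat > /etc/fail2ban/jail.d/traefik-auth.conf <<'EOF'
[traefik-auth]
enabled  = true
port     = http,https
filter   = traefik-auth
logpath  = /var/log/traefik/access.log
maxretry = 5
findtime = 600
bantime  = 3600
EOF
```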

#### Container Security
```yaml
# Trivy scanning in CI/CD
- Scan images before deployment
- Block critical vulnerabilities
- Weekly scheduled scans of running containers
```

#### Additional Measures
- No password authentication anywhere
- Secrets encrypted with SOPS + Age, never plaintext in Git
- Regular dependency updates via Dependabot/Renovate
- SSH keys rotated annually

---

## 10. Onboarding Procedure

### New Client Checklist

```markdown
## Client Onboarding: {CLIENT_NAME}

### Prerequisites
- [ ] Client agreement signed
- [ ] Domain/subdomain confirmed: _______________
- [ ] Contact email: _______________
- [ ] Desired applications: [ ] Keycloak [ ] Nextcloud [ ] Pretix [ ] Listmonk

### Infrastructure
- [ ] Add client to `tofu/variables.tf`
- [ ] Add client to `ansible/inventory/clients.yml`
- [ ] Create secrets file: `sops secrets/clients/{name}.sops.yaml`
- [ ] Create Storage Box subdirectory for backups
- [ ] Run: `tofu apply`
- [ ] Run: `ansible-playbook playbooks/setup.yml --limit {client}`

### Verification
- [ ] HTTPS accessible
- [ ] Nextcloud admin login works
- [ ] Backup job runs successfully
- [ ] Monitoring checks green

### Handover
- [ ] Send credentials securely (1Password link, Signal, etc.)
- [ ] Schedule onboarding call if needed
- [ ] Add to status page (if applicable)
- [ ] Document any custom configuration

### Estimated Time: 30-45 minutes
```
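
The infrastructure steps condense to a short sequence; a sketch, using the placeholder client name `delta`:

```bash
# Create and fill the client's secrets file
sops secrets/clients/delta.sops.yaml

# Provision the VPS, DNS, and firewall
cd tofu && tofu apply && cd ..

# Configure the server and deploy applications
cd ansible && ansible-playbook playbooks/setup.yml --limit delta
```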

---

## 11. Offboarding Procedure

### Client Removal Checklist

```markdown
## Client Offboarding: {CLIENT_NAME}

### Pre-Offboarding
- [ ] Confirm termination date: _______________
- [ ] Data export requested? [ ] Yes [ ] No
- [ ] Final invoice sent

### Data Export (if requested)
- [ ] Export Nextcloud data
- [ ] Confirm receipt

### Infrastructure Removal
- [ ] Disable monitoring checks (set maintenance mode first)
- [ ] Create final backup (retain per policy)
- [ ] Remove from Ansible inventory
- [ ] Remove from OpenTofu config
- [ ] Run: `tofu apply` (destroys VPS)
- [ ] Remove DNS records (automatic via OpenTofu)
- [ ] Remove/archive SOPS secrets file

### Backup Retention
- [ ] Move Restic repo to archive path
- [ ] Set deletion date: _______ (default: 90 days post-termination)
- [ ] Schedule deletion job

### Cleanup
- [ ] Remove from status page
- [ ] Update client count in documentation
- [ ] Archive client folder in documentation

### Verification
- [ ] DNS no longer resolves
- [ ] IP returns nothing
- [ ] Monitoring shows no alerts (host removed)
- [ ] Billing stopped

### Estimated Time: 15-30 minutes
```
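
Archiving and scheduling the deletion can be done in two lines; a sketch, assuming `atd` runs on the operator machine and using placeholder paths:

```bash
# Move the Restic repo to the archive path on the Storage Box
ssh user@backup-server 'mv /client-x/restic-repo /archive/client-x-restic-repo'

# Schedule deletion 90 days out (per the retention policy below)
echo "ssh user@backup-server 'rm -rf /archive/client-x-restic-repo'" | at now + 90 days
```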

### Data Retention Policy

| Data Type | Retention Post-Offboarding |
|-----------|----------------------------|
| Application data (Restic) | 90 days |
| Hetzner snapshots | Deleted immediately (with VPS) |
| SOPS secrets files | Archived 90 days, then deleted |
| Logs | 30 days |
| Invoices/contracts | 7 years (legal requirement) |

---

## 12. Repository Structure

```
infrastructure/
├── README.md
├── docs/
│   ├── architecture-decisions.md   # This document
│   ├── runbook.md                  # Operational procedures
│   └── clients/                    # Per-client notes
│       ├── alpha.md
│       └── beta.md
├── tofu/                           # OpenTofu configuration
│   ├── main.tf
│   ├── variables.tf
│   ├── outputs.tf
│   ├── dns.tf
│   ├── firewall.tf
│   └── versions.tf
├── ansible/
│   ├── ansible.cfg
│   ├── hcloud.yml                  # Dynamic inventory config
│   ├── playbooks/
│   │   ├── setup.yml               # Initial server setup
│   │   ├── deploy.yml              # Deploy/update applications
│   │   ├── upgrade.yml             # System updates
│   │   └── backup-restore.yml      # Manual backup/restore
│   ├── roles/
│   │   ├── common/
│   │   ├── docker/
│   │   ├── traefik/
│   │   ├── nextcloud/
│   │   ├── backup/
│   │   └── monitoring-agent/
│   └── group_vars/
│       └── all.yml
├── secrets/                        # SOPS-encrypted secrets
│   ├── .sops.yaml                  # SOPS configuration
│   ├── shared.sops.yaml            # Shared secrets
│   └── clients/
│       ├── alpha.sops.yaml
│       └── beta.sops.yaml
├── docker/
│   ├── docker-compose.base.yml     # Common services
│   └── docker-compose.apps.yml     # Application services
└── scripts/
    ├── deploy.sh                   # Canary deployment wrapper
    ├── onboard-client.sh
    └── offboard-client.sh
```

**Note:** The Age private key (`age-key.txt`) is NOT stored in this repository. It must be:
- Stored in a password manager
- Backed up securely offline
- Available on deployment machine only

---

## 13. Open Decisions / Future Considerations

### To Decide Later
- [ ] Identity provider (Authentik or other) - if SSO needed
- [ ] Prometheus metrics - when/if needed
- [ ] Custom domain SSL workflow
- [ ] Client self-service portal

### Scaling Triggers
- **20+ servers:** Consider Kubernetes or Nomad
- **Multi-region:** Add OpenTofu workspaces per region
- **Team growth:** Consider moving from SOPS to Infisical for better access control
- **Complex secret rotation:** May need dedicated secrets server

---

## 14. Technology Choices Rationale

### Why We Chose Open Source / European-Friendly Tools

| Tool | Chosen | Avoided | Reason |
|------|--------|---------|--------|
| IaC | OpenTofu | Terraform | BSL license concerns, HashiCorp trust issues |
| Secrets | SOPS + Age | HashiCorp Vault | Simplicity, no US vendor dependency, truly open source |
| Identity | (Removed) | Keycloak/Zitadel | Removed due to complexity; may add Authentik in future |
| Hosting | Hetzner | AWS/GCP/Azure | EU-based, cost-effective, GDPR-compliant |
| Backup | Restic + Hetzner Storage Box | Cloud backup services | Open source, EU data residency |

**Guiding Principles:**
1. Prefer truly open source (OSI-approved) over source-available
2. Prefer EU-based services for GDPR simplicity
3. Avoid vendor lock-in where practical
4. Choose simplicity appropriate to scale (10-50 servers)

---

## 15. Development Environment and Tooling

### Decision: Isolated Python Environments with pipx

**Choice:** Use `pipx` for installing Python CLI tools (Ansible) in isolated virtual environments.

**Why pipx:**
- Prevents dependency conflicts between tools
- Each tool has its own Python environment
- No interference with system Python packages
- Easy to upgrade/rollback individual tools
- Modern best practice for Python CLI tools

**Implementation:**
```bash
# Install pipx
brew install pipx
pipx ensurepath

# Install Ansible in isolation
pipx install --include-deps ansible

# Inject additional dependencies as needed
pipx inject ansible requests python-dateutil
```
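
Day-to-day upgrades stay isolated too; a sketch (the pinned version is only an example):

```bash
# Upgrade the isolated Ansible install in place
pipx upgrade ansible

# Or force-reinstall at a pinned release
pipx install --force --include-deps ansible==9.1.0
```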

**Benefits:**

| Aspect | Benefit |
|--------|---------|
| Isolation | No conflicts with other Python tools |
| Reproducibility | Each team member gets the same isolated environment |
| Maintainability | Easy to upgrade Ansible without breaking other tools |
| Clean system | No pollution of system Python packages |

**Alternatives Considered:**
- **Homebrew Ansible** - Rejected: can conflict with system Python, harder to manage dependencies
- **System pip install** - Rejected: pollutes the global Python environment
- **Manual venv** - Rejected: more manual work; pipx automates this

---

## Changelog

| Date | Change | Author |
|------|--------|--------|
| 2024-12 | Initial architecture decisions | Pieter / Claude |
| 2024-12 | Added Hetzner Storage Box as Restic backend | Pieter / Claude |
| 2024-12 | Switched from Terraform to OpenTofu (licensing concerns) | Pieter / Claude |
| 2024-12 | Switched from HashiCorp Vault to SOPS + Age (simplicity, open source) | Pieter / Claude |
| 2024-12 | Switched from Keycloak to Zitadel (Swiss company, GDPR jurisdiction) | Pieter / Claude |
| 2026-01 | Removed Zitadel due to FirstInstance bugs; may add Authentik in future | Pieter / Claude |

317	docs/sso-automation.md	Normal file
@ -0,0 +1,317 @@
# SSO Automation Workflow

Complete guide to the automated Authentik + Nextcloud SSO integration.

## Overview

This infrastructure implements **automated OAuth2/OIDC integration** between Authentik (identity provider) and Nextcloud (application). The goal is to achieve **zero manual configuration** for SSO when deploying a new client.

## Architecture

```
┌─────────────┐                      ┌─────────────┐
│  Authentik  │◄────────OIDC────────►│  Nextcloud  │
│    (IdP)    │     OAuth2/OIDC      │    (App)    │
└─────────────┘    Discovery URI     └─────────────┘
       │                                    │
       │ 1. Create provider via API         │
       │ 2. Get client_id/secret            │
       │                                    │
       └───────────► credentials ──────────►│
              (temporary file)              │ 3. Configure OIDC app
```

## Automation Workflow

### Phase 1: Deployment (Ansible)

1. **Deploy Authentik** (`roles/authentik/tasks/docker.yml`)
   - Start PostgreSQL database
   - Start Authentik server + worker containers
   - Wait for health check (HTTP 200/302 on root)

2. **Check for API Token** (`roles/authentik/tasks/providers.yml`)
   - Look for `client_secrets.authentik_api_token` in secrets file
   - If missing: Display manual setup instructions and skip automation
   - If present: Proceed to Phase 2
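
To sanity-check a stored token by hand before a run, something like this works (a sketch; the domain and secrets file are placeholders, and `/api/v3/core/users/me/` is assumed to be reachable with a valid token):

```bash
# Decrypt the token and probe the API with it
TOKEN=$(sops -d secrets/clients/test.sops.yaml | grep '^authentik_api_token:' | awk '{print $2}')
curl -fsS -H "Authorization: Bearer $TOKEN" \
  https://auth.example.com/api/v3/core/users/me/ >/dev/null && echo "token OK"
```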

### Phase 2: OIDC Provider Creation (API)

**Script**: `roles/authentik/files/authentik_api.py`

1. **Wait for Authentik Ready**
   - Poll root endpoint until 200/302 response
   - Timeout: 300 seconds (configurable)

2. **Get Authorization Flow UUID**
   - `GET /api/v3/flows/instances/`
   - Find flow with `slug=default-authorization-flow` or `designation=authorization`

3. **Get Signing Key UUID**
   - `GET /api/v3/crypto/certificatekeypairs/`
   - Use first available certificate

4. **Create OAuth2 Provider** (see the curl sketch after this list)
   - `POST /api/v3/providers/oauth2/`
   ```json
   {
     "name": "Nextcloud",
     "authorization_flow": "<flow_uuid>",
     "client_type": "confidential",
     "redirect_uris": "https://nextcloud.example.com/apps/user_oidc/code",
     "signing_key": "<key_uuid>",
     "sub_mode": "hashed_user_id",
     "include_claims_in_id_token": true
   }
   ```

5. **Create Application**
   - `POST /api/v3/core/applications/`
   ```json
   {
     "name": "Nextcloud",
     "slug": "nextcloud",
     "provider": "<provider_id>",
     "meta_launch_url": "https://nextcloud.example.com"
   }
   ```

6. **Return Credentials**
   ```json
   {
     "success": true,
     "client_id": "...",
     "client_secret": "...",
     "discovery_uri": "https://auth.example.com/application/o/nextcloud/.well-known/openid-configuration",
     "issuer": "https://auth.example.com/application/o/nextcloud/"
   }
   ```
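
Step 4 as a raw API call, for reference; a sketch (the playbook does this through `authentik_api.py`, and the UUIDs are placeholders):

```bash
curl -fsS -X POST https://auth.example.com/api/v3/providers/oauth2/ \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "Nextcloud",
        "authorization_flow": "<flow_uuid>",
        "client_type": "confidential",
        "redirect_uris": "https://nextcloud.example.com/apps/user_oidc/code",
        "signing_key": "<key_uuid>"
      }'
```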

### Phase 3: Nextcloud Configuration

**Task**: `roles/nextcloud/tasks/oidc.yml`

1. **Install user_oidc App**
   ```bash
   docker exec -u www-data nextcloud php occ app:install user_oidc
   docker exec -u www-data nextcloud php occ app:enable user_oidc
   ```

2. **Load Credentials from Temp File**
   - Read `/tmp/authentik_oidc_credentials.json` (created by Phase 2)
   - Parse JSON to Ansible fact

3. **Configure OIDC Provider**
   ```bash
   docker exec -u www-data nextcloud php occ user_oidc:provider:add \
     --clientid="<client_id>" \
     --clientsecret="<client_secret>" \
     --discoveryuri="<discovery_uri>" \
     "Authentik"
   ```

4. **Cleanup**
   - Remove temporary credentials file

### Result

- ✅ "Login with Authentik" button appears on Nextcloud login page
- ✅ Users can log in with Authentik credentials
- ✅ Zero manual configuration required (if API token is present)

## Manual Bootstrap (One-Time Setup)

If `authentik_api_token` is not in secrets, follow these steps **once per Authentik instance**:

### Step 1: Complete Initial Setup

1. Visit: `https://auth.example.com/if/flow/initial-setup/`
2. Create admin account:
   - **Username**: `akadmin` (recommended)
   - **Password**: Secure random password
   - **Email**: Your admin email

### Step 2: Create API Token

1. Log in to the Authentik admin UI
2. Navigate: **Admin Interface → Tokens & App passwords**
3. Click **Create → Tokens**
4. Configure token:
   - **User**: Your admin user (akadmin)
   - **Intent**: API Token
   - **Description**: Ansible automation
   - **Expires**: Never (or far future date)
5. Copy the generated token

### Step 3: Add to Secrets

Edit your client secrets file:

```bash
cd infrastructure
export SOPS_AGE_KEY_FILE="keys/age-key.txt"
sops secrets/clients/test.sops.yaml
```

Add the line:
```yaml
authentik_api_token: ak_<your_token_here>
```

### Step 4: Re-run Deployment

```bash
cd infrastructure/ansible
export HCLOUD_TOKEN="..."
export SOPS_AGE_KEY_FILE="../keys/age-key.txt"

~/.local/bin/ansible-playbook -i hcloud.yml playbooks/deploy.yml \
  --tags authentik,oidc \
  --limit test
```

## API Token Security

### Best Practices

1. **Scope**: The token has full API access - treat it like a root password
2. **Storage**: Always encrypted with SOPS in secrets files
3. **Rotation**: Rotate tokens periodically (update the secrets file)
4. **Audit**: Monitor token usage in Authentik logs

### Alternative: Service Account

For production, consider creating a dedicated service account:

1. Create user: `ansible-automation`
2. Assign minimal permissions (provider creation only)
3. Create a token for this user
4. Use it in automation

## Troubleshooting

### OIDC Provider Creation Fails

**Symptom**: Script returns an error creating the provider

**Check**:
```bash
# Test API connectivity
curl -H "Authorization: Bearer $TOKEN" \
  https://auth.example.com/api/v3/flows/instances/

# Check Authentik logs
docker logs authentik-server
docker logs authentik-worker
```

**Common Issues**:
- Token expired or invalid
- Authorization flow not found (check flows in the admin UI)
- Certificate/key missing

### "Login with Authentik" Button Missing

**Symptom**: Nextcloud shows only username/password login

**Check**:
```bash
# List configured providers
docker exec -u www-data nextcloud php occ user_oidc:provider

# Check user_oidc app status
docker exec -u www-data nextcloud php occ app:list | grep user_oidc
```

**Fix**:
```bash
# Re-configure OIDC
cd infrastructure/ansible
~/.local/bin/ansible-playbook -i hcloud.yml playbooks/deploy.yml \
  --tags oidc \
  --limit test
```

### API Token Not Working

**Symptom**: "Authentication failed" from the API script

**Check**:
1. Token format: should start with `ak_`
2. User still exists in Authentik
3. Token not expired (check in the admin UI)

**Fix**: Create a new token and update the secrets file

## Testing SSO Flow

### End-to-End Test

1. **Open Nextcloud**: `https://nextcloud.example.com`
2. **Click "Login with Authentik"**
3. **Redirected to Authentik**: `https://auth.example.com`
4. **Enter Authentik credentials** (created in the Authentik admin UI)
5. **Redirected back to Nextcloud** (logged in)

### Create Test User in Authentik

1. Open the Authentik admin UI: `https://auth.example.com`
2. Navigate: **Directory → Users → Create**
3. Fill in:
   - Username: `testuser`
   - Email: `test@example.com`
   - Password: `<secure_password>`

### Test Login

1. Log out of Nextcloud (if logged in as admin)
2. Go to the Nextcloud login page
3. Click "Login with Authentik"
4. Log in with `testuser` credentials
5. First login: Nextcloud creates a local account linked to Authentik
6. Subsequent logins: automatic via SSO

## Future Improvements

### Fully Automated Bootstrap

**Goal**: Automate the initial admin account creation via API

**Approach**:
- Research Authentik bootstrap tokens
- Automate the initial setup flow via HTTP POST requests
- Generate admin credentials automatically
- Store them in the secrets file

**Status**: Not yet implemented (initial setup is still manual)

### SAML Support

Add a SAML provider alongside OIDC for applications that don't support OAuth2/OIDC.

### Multi-Application Support

Extend the automation to create OIDC providers for other applications:
- Collabora Online
- OnlyOffice
- Custom web applications

## Related Files

- **API Script**: `ansible/roles/authentik/files/authentik_api.py`
- **Provider Tasks**: `ansible/roles/authentik/tasks/providers.yml`
- **OIDC Config**: `ansible/roles/nextcloud/tasks/oidc.yml`
- **Main Playbook**: `ansible/playbooks/deploy.yml`
- **Secrets Template**: `secrets/clients/test.sops.yaml`
- **Agent Config**: `.claude/agents/authentik.md`

## References

- **Authentik API Docs**: https://docs.goauthentik.io/developer-docs/api
- **OAuth2 Provider**: https://docs.goauthentik.io/docs/providers/oauth2
- **Nextcloud OIDC**: https://github.com/nextcloud/user_oidc
- **OpenID Connect**: https://openid.net/specs/openid-connect-core-1_0.html

7	keys/ssh/.gitignore	vendored
@ -1,7 +0,0 @@
# NEVER commit SSH private keys
*

# Allow README and public keys only
!.gitignore
!README.md
!*.pub

@ -1,196 +0,0 @@
# SSH Keys Directory

This directory contains **per-client SSH key pairs** for server access.

## Purpose

Each client gets a dedicated SSH key pair to ensure:
- **Isolation**: Compromise of one client ≠ access to others
- **Granular control**: Rotate or revoke keys per client
- **Security**: Defense in depth, minimized blast radius

## Files

```
keys/ssh/
├── .gitignore     # Protects private keys from git
├── README.md      # This file
├── dev            # Private key for dev server (gitignored)
├── dev.pub        # Public key for dev server (committed)
├── client1        # Private key for client1 (gitignored)
└── client1.pub    # Public key for client1 (committed)
```

## Generating Keys

Use the helper script:

```bash
./scripts/generate-client-keys.sh <client_name>
```

Or manually:

```bash
ssh-keygen -t ed25519 -f keys/ssh/<client_name> -C "client-<client_name>-deploy-key" -N ""
```

## Security

### What Gets Committed

- ✅ **Public keys** (`*.pub`) - Safe to commit
- ✅ **README.md** - Documentation
- ✅ **`.gitignore`** - Protection rules

### What NEVER Gets Committed

- ❌ **Private keys** (no `.pub` extension) - Gitignored
- ❌ **Temporary files** - Gitignored
- ❌ **Backup keys** - Gitignored

The `.gitignore` file in this directory ensures private keys are never committed:

```gitignore
# NEVER commit SSH private keys
*

# Allow README and public keys only
!.gitignore
!README.md
!*.pub
```

## Backup Strategy

**⚠️ IMPORTANT: Back up private keys securely!**

Private keys must be backed up to prevent lockout:

1. **Password Manager** (Recommended):
   - Store in 1Password, Bitwarden, etc.
   - Tag with client name and server IP

2. **Encrypted Archive**:
   ```bash
   tar czf - keys/ssh/ | gpg -c > ssh-keys-backup.tar.gz.gpg
   ```

3. **Team Vault**:
   - Share securely with team members who need access
   - Document key ownership

## Usage

### SSH Connection

```bash
# Connect to client server
ssh -i keys/ssh/dev root@<server_ip>

# Run a command
ssh -i keys/ssh/dev root@<server_ip> "docker ps"
```

### Ansible

Ansible automatically uses the correct key (via dynamic inventory and OpenTofu):

```bash
ansible-playbook -i hcloud.yml playbooks/deploy.yml --limit dev
```

### SSH Config

Add to `~/.ssh/config` for convenience:

```
Host dev.vrije.cloud
    User root
    IdentityFile ~/path/to/infrastructure/keys/ssh/dev
```

Then: `ssh dev.vrije.cloud`

## Key Rotation

Rotate keys annually or on security events:

```bash
# Generate new key (backs up the old one automatically)
./scripts/generate-client-keys.sh dev

# Apply to server (recreates the server with the new key)
cd tofu && tofu apply

# Test the new key
ssh -i keys/ssh/dev root@<new_ip> hostname
```

## Verification

### Check Key Fingerprint

```bash
# Show fingerprint of private key
ssh-keygen -lf keys/ssh/dev

# Show fingerprint of public key
ssh-keygen -lf keys/ssh/dev.pub

# Should match!
```

### Check What's in Git

```bash
# Verify no private keys are committed
git ls-files keys/ssh/

# Should only show:
# keys/ssh/.gitignore
# keys/ssh/README.md
# keys/ssh/*.pub
```

### Check Permissions

```bash
# Private keys must be 600
ls -la keys/ssh/dev

# Should show: -rw------- (600)

# Fix if needed:
chmod 600 keys/ssh/*
chmod 644 keys/ssh/*.pub
```

## Troubleshooting

### "Permission denied (publickey)"

1. Check you're using the correct private key for the client
2. Verify the public key is on the server (check OpenTofu state)
3. Ensure the private key has correct permissions (600)

### "No such file or directory"

Generate the key first:
```bash
./scripts/generate-client-keys.sh <client_name>
```

### "Bad permissions"

Fix key permissions:
```bash
chmod 600 keys/ssh/<client_name>
chmod 644 keys/ssh/<client_name>.pub
```

## See Also

- [../../docs/ssh-key-management.md](../../docs/ssh-key-management.md) - Complete SSH key management guide
- [../../scripts/generate-client-keys.sh](../../scripts/generate-client-keys.sh) - Key generation script
- [../../tofu/main.tf](../../tofu/main.tf) - OpenTofu SSH key resources

@ -1 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILKuSYRVVWCYqjNvJ5pHZTErkmVbEb1g3ac8olXUcXy7 client-bever-deploy-key
@ -1 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGsGfzhrcVtYEn2YHzxVGibBDXPd571unltfOaVo5JlR client-das-deploy-key
@ -1 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE75mnMfKHTIeq5Hp8LKaKYHGbzdFke1a9N7e0UEMNBu client-egel-deploy-key
@ -1 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAa4QHMVKnTSS/q5kptQYzas7ln2MbgE5Db47GM2DjRI client-haas-deploy-key
@ -1 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICtZzQTzNWLcFi4NNqg6l53kqPVDsgau1O7GWWKwZh9l client-kikker-deploy-key
@ -1 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPXF5COMplFqwxCRymXN7y4b+RWiBbVQpIMmFoK10qgh client-kraai-deploy-key
@ -1 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDGPPukFDhM4eIolsowRsD6jYrNYoM3/B9yLi2KNqmPi client-mees-deploy-key
@ -1 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHAsLbdkl0peC15KnxhSsCI45Z2FwQu2Hy1LArzHoXu5 client-mol-deploy-key
@ -1 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINAoeg3LDX5zRuw5Yt5WwbYNRXo70H7e5OYE3oMbJRyL client-mus-deploy-key
@ -1 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG3edQhsIBD9Ers7wuFWSww8r3ROkKNJF8YcxgRtQdov client-otter-deploy-key
@ -1 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB4QOkx75M28l7JAkQPl8bLjGuV/kKDFQINkUGRVRgIk client-ree-deploy-key
@ -1 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAXFskaLenHy4FJHUZL2gpehFUAYaUdNfwP0BTMqp4La client-specht-deploy-key
@ -1 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEhDcLx3ZaBXSHbhOoAgb5sI5xUVJwZEXl2HYq5+eRID client-uil-deploy-key
@ -1 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILLDJCSNj3OZDDwGgoWSxy17K8DmJ8eqUXQ4Wmu/vRtG client-valk-deploy-key
@ -1 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDg8F6LIVfdBdhD/CiNavs+xfFSiu9jxMmZcyigskuIQ client-vos-deploy-key
@ -1 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKUcrgfG+JWtieySkcSZNyBehf/rB0YEQ35IQ93L+HHP client-wolf-deploy-key
@ -1 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG76TbSdY1o5T7PlzGkbfu0HNGOKsiW5vtbAKLDz0BGv client-zwaan-deploy-key

@ -4,14 +4,13 @@ Automated scripts for managing client infrastructure.

## Prerequisites

Set SOPS Age key location (optional, scripts use default):
Set required environment variables:

```bash
export HCLOUD_TOKEN="your-hetzner-cloud-api-token"
export SOPS_AGE_KEY_FILE="./keys/age-key.txt"
```

**Note**: The Hetzner API token is now automatically loaded from SOPS-encrypted `secrets/shared.sops.yaml`. No need to manually set `HCLOUD_TOKEN`.

## Scripts

### 1. Deploy Fresh Client

@ -23,31 +22,22 @@ export SOPS_AGE_KEY_FILE="./keys/age-key.txt"
./scripts/deploy-client.sh <client_name>
```

**What it does** (automatically):
1. **Generates SSH key** (if missing) - Unique per-client key pair
2. **Creates secrets file** (if missing) - From template, opens in editor
3. Provisions VPS server (if not exists)
4. Sets up base system (Docker, Traefik)
5. Deploys Authentik + Nextcloud
6. Configures SSO integration automatically
**What it does**:
1. Provisions VPS server (if not exists)
2. Sets up base system (Docker, Traefik)
3. Deploys Authentik + Nextcloud
4. Configures SSO integration automatically

**Time**: ~10-15 minutes

**Example**:
```bash
# Just run the script - it handles everything!
./scripts/deploy-client.sh newclient

# Script will:
# 1. Generate keys/ssh/newclient + keys/ssh/newclient.pub
# 2. Copy secrets/clients/template.sops.yaml → secrets/clients/newclient.sops.yaml
# 3. Open SOPS editor for you to customize secrets
# 4. Continue with deployment
./scripts/deploy-client.sh test
```

**Requirements**:
- Secrets file must exist: `secrets/clients/<client_name>.sops.yaml`
- Client must be defined in `tofu/terraform.tfvars`
- SOPS Age key available at `keys/age-key.txt` (or set `SOPS_AGE_KEY_FILE`)

---

@ -108,26 +98,20 @@ export SOPS_AGE_KEY_FILE="./keys/age-key.txt"

## Workflow Examples

### Deploy a New Client (Fully Automated)
### Deploy a New Client

```bash
# 1. Add to terraform.tfvars
# 1. Create secrets file
cp secrets/clients/test.sops.yaml secrets/clients/newclient.sops.yaml
sops secrets/clients/newclient.sops.yaml
# Edit: client_name, domains, regenerate passwords

# 2. Add to terraform.tfvars
vim tofu/terraform.tfvars
# Add:
# newclient = {
#   server_type = "cx22"
#   location = "fsn1"
#   subdomain = "newclient"
#   apps = ["authentik", "nextcloud"]
# }
# Add client definition

# 2. Deploy (script handles SSH key + secrets automatically)
# 3. Deploy
./scripts/deploy-client.sh newclient

# That's it! Script will:
# - Generate SSH key if missing
# - Create secrets file from template if missing (opens editor)
# - Deploy everything
```

### Test Changes (Rebuild)

@ -188,6 +172,7 @@ These scripts can be used in automation:

```bash
# Non-interactive deployment
export HCLOUD_TOKEN="..."
export SOPS_AGE_KEY_FILE="..."

./scripts/deploy-client.sh production

@ -203,22 +188,8 @@ For rebuild (skip confirmation):

### Script fails with "HCLOUD_TOKEN not set"

The token should be automatically loaded from SOPS. If this fails:

1. Ensure SOPS Age key is available:
   ```bash
   export SOPS_AGE_KEY_FILE="./keys/age-key.txt"
   ls -la keys/age-key.txt
   ```

2. Verify token is in shared secrets:
   ```bash
   sops -d secrets/shared.sops.yaml | grep hcloud_token
   ```

3. Manually load secrets:
   ```bash
   source scripts/load-secrets-env.sh
   export HCLOUD_TOKEN="your-token-here"
   ```

### Script fails with "Secrets file not found"

@ -1,87 +0,0 @@
#!/usr/bin/env bash
#
# Add client monitors to Uptime Kuma
#
# Usage: ./scripts/add-client-to-monitoring.sh <client_name>
#
# This script creates HTTP(S) and SSL monitors for a client's services
# Currently uses manual instructions - future: use Uptime Kuma API

set -euo pipefail

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Script directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(dirname "$SCRIPT_DIR")"

# Check arguments
if [ $# -ne 1 ]; then
    echo -e "${RED}Error: Client name required${NC}"
    echo "Usage: $0 <client_name>"
    exit 1
fi

CLIENT_NAME="$1"
BASE_DOMAIN="vrije.cloud"

# Calculate URLs
AUTH_URL="https://auth.${CLIENT_NAME}.${BASE_DOMAIN}"
NEXTCLOUD_URL="https://nextcloud.${CLIENT_NAME}.${BASE_DOMAIN}"
AUTH_DOMAIN="auth.${CLIENT_NAME}.${BASE_DOMAIN}"
NEXTCLOUD_DOMAIN="nextcloud.${CLIENT_NAME}.${BASE_DOMAIN}"

echo -e "${BLUE}========================================${NC}"
echo -e "${BLUE}Add Client to Monitoring${NC}"
echo -e "${BLUE}========================================${NC}"
echo ""
echo -e "${YELLOW}Client: ${CLIENT_NAME}${NC}"
echo ""

# TODO: Implement automated monitor creation via Uptime Kuma API
# For now, provide manual instructions

echo -e "${YELLOW}Manual Setup Required:${NC}"
echo ""
echo "Please add the following monitors in Uptime Kuma:"
echo "🔗 Access: https://status.vrije.cloud"
echo ""
echo -e "${GREEN}HTTP(S) Monitors:${NC}"
echo ""
echo "1. ${CLIENT_NAME} - Authentik"
echo "   Type: HTTP(S)"
echo "   URL: ${AUTH_URL}"
echo "   Interval: 300 seconds (5 min)"
echo "   Retries: 3"
echo ""
echo "2. ${CLIENT_NAME} - Nextcloud"
echo "   Type: HTTP(S)"
echo "   URL: ${NEXTCLOUD_URL}"
echo "   Interval: 300 seconds (5 min)"
echo "   Retries: 3"
echo ""
echo -e "${GREEN}SSL Certificate Monitors:${NC}"
echo ""
echo "3. ${CLIENT_NAME} - Authentik SSL"
echo "   Type: Certificate Expiry"
echo "   Hostname: ${AUTH_DOMAIN}"
echo "   Port: 443"
echo "   Expiry Days: 30"
echo "   Interval: 86400 seconds (1 day)"
echo ""
echo "4. ${CLIENT_NAME} - Nextcloud SSL"
echo "   Type: Certificate Expiry"
echo "   Hostname: ${NEXTCLOUD_DOMAIN}"
echo "   Port: 443"
echo "   Expiry Days: 30"
echo "   Interval: 86400 seconds (1 day)"
echo ""
echo -e "${BLUE}========================================${NC}"
echo ""
echo -e "${YELLOW}Note: Automated monitor creation via API is planned for future enhancement.${NC}"
echo ""

@ -1,250 +0,0 @@
#!/usr/bin/env bash
#
# Add a new client to OpenTofu configuration
#
# Usage: ./scripts/add-client-to-terraform.sh <client_name> [options]
#
# Options:
#   --server-type=TYPE   Server type (default: cpx22)
#   --location=LOC       Data center location (default: fsn1)
#   --volume-size=SIZE   Nextcloud volume size in GB (default: 100)
#   --apps=APP1,APP2     Applications to deploy (default: authentik,nextcloud)
#   --private-ip=IP      Private network IP (default: auto-assigned)
#   --public-ip          Enable a public IP for the server
#   --non-interactive    Don't prompt, use defaults

set -euo pipefail

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m' # No Color

# Script directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(dirname "$SCRIPT_DIR")"
TFVARS_FILE="$PROJECT_ROOT/tofu/terraform.tfvars"

# Check arguments
if [ $# -lt 1 ]; then
    echo -e "${RED}Error: Client name required${NC}"
    echo "Usage: $0 <client_name> [options]"
    echo ""
    echo "Options:"
    echo "  --server-type=TYPE   Server type (default: cpx22)"
    echo "  --location=LOC       Data center (default: fsn1)"
    echo "  --volume-size=SIZE   Nextcloud volume GB (default: 100)"
    echo "  --apps=APP1,APP2     Apps (default: authentik,nextcloud)"
    echo "  --private-ip=IP      Private network IP (default: auto-assigned)"
    echo "  --public-ip          Enable a public IP"
    echo "  --non-interactive    Use defaults, don't prompt"
    echo ""
    echo "Example: $0 blue --server-type=cx22 --location=nbg1 --volume-size=50"
    exit 1
fi

CLIENT_NAME="$1"
shift

# Default values
SERVER_TYPE="cpx22"
LOCATION="fsn1"
VOLUME_SIZE="100"
APPS="authentik,nextcloud"
PRIVATE_IP=""
PUBLIC_IP_ENABLED="false"
NON_INTERACTIVE=false

# Parse options
for arg in "$@"; do
    case $arg in
        --server-type=*)
            SERVER_TYPE="${arg#*=}"
            ;;
        --location=*)
            LOCATION="${arg#*=}"
            ;;
        --volume-size=*)
            VOLUME_SIZE="${arg#*=}"
            ;;
        --apps=*)
            APPS="${arg#*=}"
            ;;
        --private-ip=*)
            PRIVATE_IP="${arg#*=}"
            ;;
        --public-ip)
            PUBLIC_IP_ENABLED="true"
            ;;
        --non-interactive)
            NON_INTERACTIVE=true
            ;;
        *)
            echo -e "${RED}Unknown option: $arg${NC}"
            exit 1
            ;;
    esac
done

# Auto-assign private IP if not provided
if [ -z "$PRIVATE_IP" ]; then
    # Find the highest existing IP in terraform.tfvars and increment
    LAST_IP=$(grep -oP 'private_ip\s*=\s*"10\.0\.0\.\K\d+' "$TFVARS_FILE" 2>/dev/null | sort -n | tail -1)
    if [ -z "$LAST_IP" ]; then
        NEXT_IP=40  # Start from 10.0.0.40 (edge is .2)
    else
        NEXT_IP=$((LAST_IP + 1))
    fi
    PRIVATE_IP="10.0.0.$NEXT_IP"
    echo -e "${BLUE}Auto-assigned private IP: $PRIVATE_IP${NC}"
    echo ""
fi

# Validate client name
if [[ ! "$CLIENT_NAME" =~ ^[a-z0-9-]+$ ]]; then
    echo -e "${RED}Error: Client name must contain only lowercase letters, numbers, and hyphens${NC}"
    exit 1
fi

# Check if tfvars file exists
if [ ! -f "$TFVARS_FILE" ]; then
    echo -e "${RED}Error: terraform.tfvars not found at $TFVARS_FILE${NC}"
    exit 1
fi

# Check if client already exists
if grep -q "^[[:space:]]*${CLIENT_NAME}[[:space:]]*=" "$TFVARS_FILE"; then
    echo -e "${YELLOW}⚠ Client '${CLIENT_NAME}' already exists in terraform.tfvars${NC}"
    echo ""
    echo "Existing configuration:"
    grep -A 7 "^[[:space:]]*${CLIENT_NAME}[[:space:]]*=" "$TFVARS_FILE" | head -8
    echo ""
    read -p "Update configuration? (yes/no): " confirm
    if [ "$confirm" != "yes" ]; then
        echo "Cancelled"
        exit 0
    fi

    # Remove existing entry
    # This is complex - for now just error and let user handle manually
    echo -e "${RED}Error: Updating existing clients not yet implemented${NC}"
    echo "Please manually edit $TFVARS_FILE"
    exit 1
fi

# Interactive prompts (if not non-interactive)
if [ "$NON_INTERACTIVE" = false ]; then
    echo -e "${BLUE}Adding client '${CLIENT_NAME}' to OpenTofu configuration${NC}"
    echo ""
    echo "Current defaults:"
    echo "  Server type: $SERVER_TYPE"
    echo "  Location:    $LOCATION"
    echo "  Volume size: $VOLUME_SIZE GB"
    echo "  Apps:        $APPS"
    echo ""
    read -p "Use these defaults? (yes/no): " use_defaults

    if [ "$use_defaults" != "yes" ]; then
        # Prompt for each value
        echo ""
        read -p "Server type [$SERVER_TYPE]: " input
        SERVER_TYPE="${input:-$SERVER_TYPE}"

        read -p "Location [$LOCATION]: " input
        LOCATION="${input:-$LOCATION}"

        read -p "Volume size GB [$VOLUME_SIZE]: " input
        VOLUME_SIZE="${input:-$VOLUME_SIZE}"

        read -p "Apps (comma-separated) [$APPS]: " input
        APPS="${input:-$APPS}"
    fi
fi

# Convert apps list to array format
APPS_ARRAY=$(echo "$APPS" | sed 's/,/", "/g' | sed 's/^/["/' | sed 's/$/"]/')

# Find the closing brace of the clients block
CLIENTS_CLOSE_LINE=$(grep -n "^}" "$TFVARS_FILE" | head -1 | cut -d: -f1)

if [ -z "$CLIENTS_CLOSE_LINE" ]; then
    echo -e "${RED}Error: Could not find closing brace in terraform.tfvars${NC}"
    exit 1
fi

# Create the new client configuration
NEW_CLIENT_CONFIG="
  # ${CLIENT_NAME} server
  ${CLIENT_NAME} = {
    server_type           = \"${SERVER_TYPE}\"
    location              = \"${LOCATION}\"
    subdomain             = \"${CLIENT_NAME}\"
    apps                  = ${APPS_ARRAY}
    nextcloud_volume_size = ${VOLUME_SIZE}
    private_ip            = \"${PRIVATE_IP}\"
    public_ip_enabled     = ${PUBLIC_IP_ENABLED}
  }"

# Create temporary file with new config inserted before closing brace
TMP_FILE=$(mktemp)
head -n $((CLIENTS_CLOSE_LINE - 1)) "$TFVARS_FILE" > "$TMP_FILE"
echo "$NEW_CLIENT_CONFIG" >> "$TMP_FILE"
tail -n +$CLIENTS_CLOSE_LINE "$TFVARS_FILE" >> "$TMP_FILE"

# Show the diff
echo ""
echo -e "${CYAN}Configuration to be added:${NC}"
echo "$NEW_CLIENT_CONFIG"
echo ""

# Confirm
if [ "$NON_INTERACTIVE" = false ]; then
    read -p "Add this configuration to terraform.tfvars? (yes/no): " confirm
    if [ "$confirm" != "yes" ]; then
        rm "$TMP_FILE"
        echo "Cancelled"
        exit 0
    fi
fi

# Apply changes
mv "$TMP_FILE" "$TFVARS_FILE"

echo ""
echo -e "${GREEN}✓ Client '${CLIENT_NAME}' added to terraform.tfvars${NC}"
echo ""

# Create Ansible host_vars file
HOST_VARS_FILE="$PROJECT_ROOT/ansible/host_vars/${CLIENT_NAME}.yml"
if [ ! -f "$HOST_VARS_FILE" ]; then
    echo -e "${BLUE}Creating Ansible host_vars file...${NC}"

    mkdir -p "$(dirname "$HOST_VARS_FILE")"

    cat > "$HOST_VARS_FILE" << EOF
---
# ${CLIENT_NAME} server configuration

ansible_host: ${PRIVATE_IP}

# Client identification
client_name: ${CLIENT_NAME}
client_domain: ${CLIENT_NAME}.vrije.cloud
client_secrets_file: ${CLIENT_NAME}.sops.yaml
EOF

    echo -e "${GREEN}✓ Created host_vars file: $HOST_VARS_FILE${NC}"
    echo ""
fi

echo "Configuration added:"
echo "  Server:     $SERVER_TYPE in $LOCATION"
echo "  Volume:     $VOLUME_SIZE GB"
echo "  Apps:       $APPS"
echo "  Private IP: $PRIVATE_IP"
echo ""
echo -e "${CYAN}Next steps:${NC}"
echo "1. Review changes:       cat tofu/terraform.tfvars"
echo "2. Plan infrastructure:  cd tofu && tofu plan"
echo "3. Apply infrastructure: cd tofu && tofu apply"
echo "4. Deploy services:      ./scripts/deploy-client.sh $CLIENT_NAME"
echo ""

@ -1,251 +0,0 @@
#!/usr/bin/env bash
#
# Report software versions across all clients
#
# Usage: ./scripts/check-client-versions.sh [options]
#
# Options:
#   --format=table   Show as colorized table (default)
#   --format=csv     Export as CSV
#   --format=json    Export as JSON
#   --app=<name>     Filter by application (authentik|nextcloud|traefik|ubuntu)
#   --outdated       Show only clients with outdated versions

set -euo pipefail

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m' # No Color

# Script directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(dirname "$SCRIPT_DIR")"
REGISTRY_FILE="$PROJECT_ROOT/clients/registry.yml"

# Default options
FORMAT="table"
FILTER_APP=""   # NOTE: parsed below but not yet applied to the output
SHOW_OUTDATED=false

# Parse arguments
for arg in "$@"; do
    case $arg in
        --format=*)
            FORMAT="${arg#*=}"
            ;;
        --app=*)
            FILTER_APP="${arg#*=}"
            ;;
        --outdated)
            SHOW_OUTDATED=true
            ;;
        *)
            echo "Unknown option: $arg"
            echo "Usage: $0 [--format=table|csv|json] [--app=<name>] [--outdated]"
            exit 1
            ;;
    esac
done

# Check if yq is available
if ! command -v yq &> /dev/null; then
    echo -e "${RED}Error: 'yq' not found. Install with: brew install yq${NC}"
    exit 1
fi

# Check if registry exists
if [ ! -f "$REGISTRY_FILE" ]; then
    echo -e "${RED}Error: Registry file not found: $REGISTRY_FILE${NC}"
    exit 1
fi

# Get list of clients
CLIENTS=$(yq eval '.clients | keys | .[]' "$REGISTRY_FILE" 2>/dev/null)

if [ -z "$CLIENTS" ]; then
    echo -e "${YELLOW}No clients found in registry${NC}"
    exit 0
fi

# Determine latest versions (from canary/dev or most common)
declare -A LATEST_VERSIONS
LATEST_VERSIONS[authentik]=$(yq eval '.clients | to_entries | .[].value.versions.authentik' "$REGISTRY_FILE" | sort -V | tail -1)
LATEST_VERSIONS[nextcloud]=$(yq eval '.clients | to_entries | .[].value.versions.nextcloud' "$REGISTRY_FILE" | sort -V | tail -1)
LATEST_VERSIONS[traefik]=$(yq eval '.clients | to_entries | .[].value.versions.traefik' "$REGISTRY_FILE" | sort -V | tail -1)
LATEST_VERSIONS[ubuntu]=$(yq eval '.clients | to_entries | .[].value.versions.ubuntu' "$REGISTRY_FILE" | sort -V | tail -1)

# Function to check if a version is outdated
is_outdated() {
    local app=$1
    local version=$2
    local latest=${LATEST_VERSIONS[$app]}

    if [ "$version" != "$latest" ] && [ "$version" != "null" ] && [ "$version" != "unknown" ]; then
        return 0
    else
        return 1
    fi
}

case $FORMAT in
    table)
        echo -e "${BLUE}═══════════════════════════════════════════════════════════════════════════════${NC}"
        echo -e "${BLUE}   CLIENT VERSION REPORT${NC}"
        echo -e "${BLUE}═══════════════════════════════════════════════════════════════════════════════${NC}"
        echo ""

        # Header
        printf "${CYAN}%-15s %-15s %-15s %-15s %-15s %-15s${NC}\n" \
            "CLIENT" "STATUS" "AUTHENTIK" "NEXTCLOUD" "TRAEFIK" "UBUNTU"
        echo -e "${CYAN}$(printf '─%.0s' {1..90})${NC}"

        # Rows
        for client in $CLIENTS; do
            status=$(yq eval ".clients.\"$client\".status" "$REGISTRY_FILE")
            authentik=$(yq eval ".clients.\"$client\".versions.authentik" "$REGISTRY_FILE")
            nextcloud=$(yq eval ".clients.\"$client\".versions.nextcloud" "$REGISTRY_FILE")
            traefik=$(yq eval ".clients.\"$client\".versions.traefik" "$REGISTRY_FILE")
            ubuntu=$(yq eval ".clients.\"$client\".versions.ubuntu" "$REGISTRY_FILE")

            # Skip if filtering by outdated and not outdated
            if [ "$SHOW_OUTDATED" = true ]; then
                has_outdated=false
                is_outdated "authentik" "$authentik" && has_outdated=true
                is_outdated "nextcloud" "$nextcloud" && has_outdated=true
                is_outdated "traefik" "$traefik" && has_outdated=true
                is_outdated "ubuntu" "$ubuntu" && has_outdated=true

                if [ "$has_outdated" = false ]; then
                    continue
                fi
            fi

            # Colorize versions (red if outdated)
            authentik_color=$NC
            is_outdated "authentik" "$authentik" && authentik_color=$RED

            nextcloud_color=$NC
            is_outdated "nextcloud" "$nextcloud" && nextcloud_color=$RED

            traefik_color=$NC
            is_outdated "traefik" "$traefik" && traefik_color=$RED

            ubuntu_color=$NC
            is_outdated "ubuntu" "$ubuntu" && ubuntu_color=$RED

            # Status color
            status_color=$GREEN
            [ "$status" != "deployed" ] && status_color=$YELLOW

            printf "%-15s ${status_color}%-15s${NC} ${authentik_color}%-15s${NC} ${nextcloud_color}%-15s${NC} ${traefik_color}%-15s${NC} ${ubuntu_color}%-15s${NC}\n" \
                "$client" "$status" "$authentik" "$nextcloud" "$traefik" "$ubuntu"
        done

        echo ""
        echo -e "${CYAN}Latest versions:${NC}"
        echo "  Authentik: ${LATEST_VERSIONS[authentik]}"
        echo "  Nextcloud: ${LATEST_VERSIONS[nextcloud]}"
        echo "  Traefik:   ${LATEST_VERSIONS[traefik]}"
        echo "  Ubuntu:    ${LATEST_VERSIONS[ubuntu]}"
        echo ""
        echo -e "${YELLOW}Note: ${RED}Red${NC} indicates outdated version${NC}"
        echo ""
        ;;

    csv)
        # CSV header
        echo "client,status,authentik,nextcloud,traefik,ubuntu,last_update,outdated"

        # CSV rows
        for client in $CLIENTS; do
            status=$(yq eval ".clients.\"$client\".status" "$REGISTRY_FILE")
            authentik=$(yq eval ".clients.\"$client\".versions.authentik" "$REGISTRY_FILE")
            nextcloud=$(yq eval ".clients.\"$client\".versions.nextcloud" "$REGISTRY_FILE")
            traefik=$(yq eval ".clients.\"$client\".versions.traefik" "$REGISTRY_FILE")
            ubuntu=$(yq eval ".clients.\"$client\".versions.ubuntu" "$REGISTRY_FILE")
            last_update=$(yq eval ".clients.\"$client\".maintenance.last_full_update" "$REGISTRY_FILE")

            # Check if any version is outdated
            outdated="no"
            is_outdated "authentik" "$authentik" && outdated="yes"
            is_outdated "nextcloud" "$nextcloud" && outdated="yes"
            is_outdated "traefik" "$traefik" && outdated="yes"
            is_outdated "ubuntu" "$ubuntu" && outdated="yes"

            # Skip if filtering by outdated
            if [ "$SHOW_OUTDATED" = true ] && [ "$outdated" = "no" ]; then
                continue
            fi

            echo "$client,$status,$authentik,$nextcloud,$traefik,$ubuntu,$last_update,$outdated"
        done
        ;;

    json)
        # Build JSON array
        echo "{"
        echo "  \"latest_versions\": {"
        echo "    \"authentik\": \"${LATEST_VERSIONS[authentik]}\","
        echo "    \"nextcloud\": \"${LATEST_VERSIONS[nextcloud]}\","
        echo "    \"traefik\": \"${LATEST_VERSIONS[traefik]}\","
        echo "    \"ubuntu\": \"${LATEST_VERSIONS[ubuntu]}\""
        echo "  },"
        echo "  \"clients\": ["

        first=true
        for client in $CLIENTS; do
            status=$(yq eval ".clients.\"$client\".status" "$REGISTRY_FILE")
            authentik=$(yq eval ".clients.\"$client\".versions.authentik" "$REGISTRY_FILE")
            nextcloud=$(yq eval ".clients.\"$client\".versions.nextcloud" "$REGISTRY_FILE")
            traefik=$(yq eval ".clients.\"$client\".versions.traefik" "$REGISTRY_FILE")
            ubuntu=$(yq eval ".clients.\"$client\".versions.ubuntu" "$REGISTRY_FILE")
            last_update=$(yq eval ".clients.\"$client\".maintenance.last_full_update" "$REGISTRY_FILE")

            # Check if any version is outdated
            outdated=false
            is_outdated "authentik" "$authentik" && outdated=true
            is_outdated "nextcloud" "$nextcloud" && outdated=true
            is_outdated "traefik" "$traefik" && outdated=true
            is_outdated "ubuntu" "$ubuntu" && outdated=true

            # Skip if filtering by outdated
            if [ "$SHOW_OUTDATED" = true ] && [ "$outdated" = false ]; then
                continue
            fi

            if [ "$first" = false ]; then
                echo "    ,"
            fi
            first=false

            cat <<EOF
    {
      "name": "$client",
      "status": "$status",
      "versions": {
        "authentik": "$authentik",
        "nextcloud": "$nextcloud",
        "traefik": "$traefik",
        "ubuntu": "$ubuntu"
      },
      "last_update": "$last_update",
      "outdated": $outdated
    }
EOF
        done

        echo ""
        echo "  ]"
        echo "}"
        ;;

    *)
        echo -e "${RED}Error: Unknown format '$FORMAT'${NC}"
        echo "Valid formats: table, csv, json"
        exit 1
        ;;
esac
|
|
@ -1,237 +0,0 @@
|
|||
#!/usr/bin/env bash
#
# Show detailed status for a specific client
#
# Usage: ./scripts/client-status.sh <client_name>
#
# Displays:
# - Deployment status and metadata
# - Server information
# - Application versions
# - Maintenance history
# - URLs and access information
# - Live health checks (optional)

set -euo pipefail

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m' # No Color

# Script directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(dirname "$SCRIPT_DIR")"
REGISTRY_FILE="$PROJECT_ROOT/clients/registry.yml"

# Check arguments
if [ $# -ne 1 ]; then
    echo -e "${RED}Error: Client name required${NC}"
    echo "Usage: $0 <client_name>"
    echo ""
    echo "Example: $0 dev"
    exit 1
fi

CLIENT_NAME="$1"

# Check if yq is available
if ! command -v yq &> /dev/null; then
    echo -e "${RED}Error: 'yq' not found. Install with: brew install yq${NC}"
    exit 1
fi

# Check if registry exists
if [ ! -f "$REGISTRY_FILE" ]; then
    echo -e "${RED}Error: Registry file not found: $REGISTRY_FILE${NC}"
    exit 1
fi

# Check if client exists
if yq eval ".clients.\"$CLIENT_NAME\"" "$REGISTRY_FILE" | grep -q "null"; then
    echo -e "${RED}Error: Client '$CLIENT_NAME' not found in registry${NC}"
    echo ""
    echo "Available clients:"
    yq eval '.clients | keys | .[]' "$REGISTRY_FILE"
    exit 1
fi
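
# The yq queries below assume a registry shaped roughly like this
# (illustrative sketch; field names are taken from the queries, values are made up):
#
#   clients:
#     dev:
#       status: deployed
#       role: canary
#       deployed_date: "2025-01-01"
#       server: { type: cx22, location: fsn1, ip: 203.0.113.10, id: "12345" }
#       apps: [authentik, nextcloud]
#       versions: { authentik: "2024.2", nextcloud: "28.0", traefik: "v3.0", ubuntu: "24.04" }
#       maintenance: { last_full_update: "2025-01-01", last_backup_verified: null }
#       urls: { authentik: "https://auth.dev.vrije.cloud", nextcloud: "https://nextcloud.dev.vrije.cloud" }
#       notes: null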

# Extract client information
STATUS=$(yq eval ".clients.\"$CLIENT_NAME\".status" "$REGISTRY_FILE")
ROLE=$(yq eval ".clients.\"$CLIENT_NAME\".role" "$REGISTRY_FILE")
DEPLOYED_DATE=$(yq eval ".clients.\"$CLIENT_NAME\".deployed_date" "$REGISTRY_FILE")
DESTROYED_DATE=$(yq eval ".clients.\"$CLIENT_NAME\".destroyed_date" "$REGISTRY_FILE")

SERVER_TYPE=$(yq eval ".clients.\"$CLIENT_NAME\".server.type" "$REGISTRY_FILE")
SERVER_LOCATION=$(yq eval ".clients.\"$CLIENT_NAME\".server.location" "$REGISTRY_FILE")
SERVER_IP=$(yq eval ".clients.\"$CLIENT_NAME\".server.ip" "$REGISTRY_FILE")
SERVER_ID=$(yq eval ".clients.\"$CLIENT_NAME\".server.id" "$REGISTRY_FILE")

APPS=$(yq eval ".clients.\"$CLIENT_NAME\".apps | join(\", \")" "$REGISTRY_FILE")

AUTHENTIK_VERSION=$(yq eval ".clients.\"$CLIENT_NAME\".versions.authentik" "$REGISTRY_FILE")
NEXTCLOUD_VERSION=$(yq eval ".clients.\"$CLIENT_NAME\".versions.nextcloud" "$REGISTRY_FILE")
TRAEFIK_VERSION=$(yq eval ".clients.\"$CLIENT_NAME\".versions.traefik" "$REGISTRY_FILE")
UBUNTU_VERSION=$(yq eval ".clients.\"$CLIENT_NAME\".versions.ubuntu" "$REGISTRY_FILE")

LAST_FULL_UPDATE=$(yq eval ".clients.\"$CLIENT_NAME\".maintenance.last_full_update" "$REGISTRY_FILE")
LAST_SECURITY_PATCH=$(yq eval ".clients.\"$CLIENT_NAME\".maintenance.last_security_patch" "$REGISTRY_FILE")
LAST_OS_UPDATE=$(yq eval ".clients.\"$CLIENT_NAME\".maintenance.last_os_update" "$REGISTRY_FILE")
LAST_BACKUP_VERIFIED=$(yq eval ".clients.\"$CLIENT_NAME\".maintenance.last_backup_verified" "$REGISTRY_FILE")

AUTHENTIK_URL=$(yq eval ".clients.\"$CLIENT_NAME\".urls.authentik" "$REGISTRY_FILE")
NEXTCLOUD_URL=$(yq eval ".clients.\"$CLIENT_NAME\".urls.nextcloud" "$REGISTRY_FILE")

NOTES=$(yq eval ".clients.\"$CLIENT_NAME\".notes" "$REGISTRY_FILE")

# Display header
echo -e "${BLUE}═══════════════════════════════════════════════════════════${NC}"
echo -e "${BLUE} CLIENT STATUS: $CLIENT_NAME${NC}"
echo -e "${BLUE}═══════════════════════════════════════════════════════════${NC}"
echo ""

# Status section
echo -e "${CYAN}━━━ Deployment Status ━━━${NC}"
echo ""

# Color status
STATUS_COLOR=$NC
case $STATUS in
    deployed) STATUS_COLOR=$GREEN ;;
    pending) STATUS_COLOR=$YELLOW ;;
    maintenance) STATUS_COLOR=$CYAN ;;
    offboarding) STATUS_COLOR=$RED ;;
    destroyed) STATUS_COLOR=$RED ;;
esac

# Color role
ROLE_COLOR=$NC
case $ROLE in
    canary) ROLE_COLOR=$YELLOW ;;
    production) ROLE_COLOR=$GREEN ;;
esac

echo -e "Status: ${STATUS_COLOR}$STATUS${NC}"
echo -e "Role: ${ROLE_COLOR}$ROLE${NC}"
echo -e "Deployed: $DEPLOYED_DATE"
if [ "$DESTROYED_DATE" != "null" ]; then
    echo -e "Destroyed: ${RED}$DESTROYED_DATE${NC}"
fi
echo ""

# Server section
echo -e "${CYAN}━━━ Server Information ━━━${NC}"
echo ""
echo -e "Server Type: $SERVER_TYPE"
echo -e "Location: $SERVER_LOCATION"
echo -e "IP Address: $SERVER_IP"
echo -e "Server ID: $SERVER_ID"
echo ""

# Applications section
echo -e "${CYAN}━━━ Applications ━━━${NC}"
echo ""
echo -e "Installed: $APPS"
echo ""

# Versions section
echo -e "${CYAN}━━━ Versions ━━━${NC}"
echo ""
echo -e "Authentik: $AUTHENTIK_VERSION"
echo -e "Nextcloud: $NEXTCLOUD_VERSION"
echo -e "Traefik: $TRAEFIK_VERSION"
echo -e "Ubuntu: $UBUNTU_VERSION"
echo ""

# Maintenance section
echo -e "${CYAN}━━━ Maintenance History ━━━${NC}"
echo ""
echo -e "Last Full Update: $LAST_FULL_UPDATE"
echo -e "Last Security Patch: $LAST_SECURITY_PATCH"
echo -e "Last OS Update: $LAST_OS_UPDATE"
if [ "$LAST_BACKUP_VERIFIED" != "null" ]; then
    echo -e "Last Backup Verified: $LAST_BACKUP_VERIFIED"
else
    echo -e "Last Backup Verified: ${YELLOW}Never${NC}"
fi
echo ""

# URLs section
echo -e "${CYAN}━━━ Access URLs ━━━${NC}"
echo ""
echo -e "Authentik: $AUTHENTIK_URL"
echo -e "Nextcloud: $NEXTCLOUD_URL"
echo ""

# Notes section
if [ "$NOTES" != "null" ] && [ -n "$NOTES" ]; then
    echo -e "${CYAN}━━━ Notes ━━━${NC}"
    echo ""
    echo "$NOTES" | sed 's/^/ /'
    echo ""
fi

# Live health check (if server is deployed and reachable)
if [ "$STATUS" = "deployed" ]; then
    echo -e "${CYAN}━━━ Live Health Check ━━━${NC}"
    echo ""

    # Check if server is reachable via SSH (if Ansible is configured)
    if command -v ansible &> /dev/null && [ -n "${HCLOUD_TOKEN:-}" ]; then
        cd "$PROJECT_ROOT/ansible"
        if timeout 10 ~/.local/bin/ansible -i hcloud.yml "$CLIENT_NAME" -m ping -o &>/dev/null; then
            echo -e "SSH Access: ${GREEN}✓ Reachable${NC}"

            # Get Docker status
            DOCKER_STATUS=$(~/.local/bin/ansible -i hcloud.yml "$CLIENT_NAME" -m shell -a "docker ps --format '{{.Names}}' 2>/dev/null | wc -l" -o 2>/dev/null | tail -1 | awk '{print $NF}' || echo "0")
            if [ "$DOCKER_STATUS" != "0" ]; then
                echo -e "Docker: ${GREEN}✓ Running ($DOCKER_STATUS containers)${NC}"
            else
                echo -e "Docker: ${RED}✗ No containers running${NC}"
            fi
        else
            echo -e "SSH Access: ${RED}✗ Not reachable${NC}"
        fi
    else
        echo -e "${YELLOW}Note: Install Ansible and set HCLOUD_TOKEN for live health checks${NC}"
    fi

    echo ""

    # Check HTTPS endpoints
    echo -e "HTTPS Endpoints:"

    # Check Authentik
    if command -v curl &> /dev/null; then
        if timeout 10 curl -sSf -o /dev/null "$AUTHENTIK_URL" 2>/dev/null; then
            echo -e " Authentik: ${GREEN}✓ Responding${NC}"
        else
            echo -e " Authentik: ${RED}✗ Not responding${NC}"
        fi

        # Check Nextcloud
        if timeout 10 curl -sSf -o /dev/null "$NEXTCLOUD_URL" 2>/dev/null; then
            echo -e " Nextcloud: ${GREEN}✓ Responding${NC}"
        else
            echo -e " Nextcloud: ${RED}✗ Not responding${NC}"
        fi
    else
        echo -e " ${YELLOW}Install curl for endpoint checks${NC}"
    fi

    echo ""
fi

# Management commands section
echo -e "${CYAN}━━━ Management Commands ━━━${NC}"
echo ""
echo -e "View secrets: ${BLUE}sops secrets/clients/${CLIENT_NAME}.sops.yaml${NC}"
echo -e "Rebuild server: ${BLUE}./scripts/rebuild-client.sh $CLIENT_NAME${NC}"
echo -e "Destroy server: ${BLUE}./scripts/destroy-client.sh $CLIENT_NAME${NC}"
echo -e "List all: ${BLUE}./scripts/list-clients.sh${NC}"
echo ""

echo -e "${BLUE}═══════════════════════════════════════════════════════════${NC}"

@@ -1,130 +0,0 @@
#!/usr/bin/env bash
#
# Collect deployed software versions from a client and update registry
#
# Usage: ./scripts/collect-client-versions.sh <client_name>
#
# Queries the deployed server for actual running versions:
# - Docker container image versions
# - Ubuntu OS version
# - Updates the client registry with collected versions

set -euo pipefail

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Script directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(dirname "$SCRIPT_DIR")"
REGISTRY_FILE="$PROJECT_ROOT/clients/registry.yml"

# Check arguments
if [ $# -ne 1 ]; then
    echo -e "${RED}Error: Client name required${NC}"
    echo "Usage: $0 <client_name>"
    echo ""
    echo "Example: $0 dev"
    exit 1
fi

CLIENT_NAME="$1"

# Check if yq is available
if ! command -v yq &> /dev/null; then
    echo -e "${RED}Error: 'yq' not found. Install with: brew install yq${NC}"
    exit 1
fi

# Load Hetzner API token from SOPS if not already set
if [ -z "${HCLOUD_TOKEN:-}" ]; then
    # shellcheck source=scripts/load-secrets-env.sh
    source "$SCRIPT_DIR/load-secrets-env.sh" > /dev/null 2>&1
fi

# Check if registry exists
if [ ! -f "$REGISTRY_FILE" ]; then
    echo -e "${RED}Error: Registry file not found: $REGISTRY_FILE${NC}"
    exit 1
fi

# Check if client exists in registry
if yq eval ".clients.\"$CLIENT_NAME\"" "$REGISTRY_FILE" | grep -q "null"; then
    echo -e "${RED}Error: Client '$CLIENT_NAME' not found in registry${NC}"
    exit 1
fi

echo -e "${BLUE}Collecting versions for client: $CLIENT_NAME${NC}"
echo ""

cd "$PROJECT_ROOT/ansible"

# Check if server is reachable
if ! timeout 10 ~/.local/bin/ansible -i hcloud.yml "$CLIENT_NAME" -m ping -o &>/dev/null; then
    echo -e "${RED}Error: Cannot reach server for client '$CLIENT_NAME'${NC}"
    echo "Server may not be deployed or network is unreachable"
    exit 1
fi

echo -e "${YELLOW}Querying deployed versions...${NC}"
echo ""

# Query Docker container versions
echo "Collecting Docker container versions..."

# Function to extract version from image tag
extract_version() {
    local image=$1
    # Extract version after the colon, or return "latest"
    if [[ $image == *":"* ]]; then
        echo "$image" | awk -F: '{print $2}'
    else
        echo "latest"
    fi
}
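
# Illustrative behaviour of extract_version (example values, not from the original script):
#   extract_version "ghcr.io/goauthentik/server:2024.2"  -> "2024.2"
#   extract_version "nextcloud"                          -> "latest"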

# Collect Authentik version
AUTHENTIK_IMAGE=$(~/.local/bin/ansible -i hcloud.yml "$CLIENT_NAME" -m shell -a "docker inspect authentik-server 2>/dev/null | jq -r '.[0].Config.Image' 2>/dev/null || echo 'unknown'" -o 2>/dev/null | tail -1 | awk '{print $NF}')
AUTHENTIK_VERSION=$(extract_version "$AUTHENTIK_IMAGE")

# Collect Nextcloud version
NEXTCLOUD_IMAGE=$(~/.local/bin/ansible -i hcloud.yml "$CLIENT_NAME" -m shell -a "docker inspect nextcloud 2>/dev/null | jq -r '.[0].Config.Image' 2>/dev/null || echo 'unknown'" -o 2>/dev/null | tail -1 | awk '{print $NF}')
NEXTCLOUD_VERSION=$(extract_version "$NEXTCLOUD_IMAGE")

# Collect Traefik version
TRAEFIK_IMAGE=$(~/.local/bin/ansible -i hcloud.yml "$CLIENT_NAME" -m shell -a "docker inspect traefik 2>/dev/null | jq -r '.[0].Config.Image' 2>/dev/null || echo 'unknown'" -o 2>/dev/null | tail -1 | awk '{print $NF}')
TRAEFIK_VERSION=$(extract_version "$TRAEFIK_IMAGE")

# Collect Ubuntu version
UBUNTU_VERSION=$(~/.local/bin/ansible -i hcloud.yml "$CLIENT_NAME" -m shell -a "lsb_release -rs 2>/dev/null || echo 'unknown'" -o 2>/dev/null | tail -1 | awk '{print $NF}')

echo -e "${GREEN}✓ Versions collected${NC}"
echo ""

# Display collected versions
echo "Collected versions:"
echo " Authentik: $AUTHENTIK_VERSION"
echo " Nextcloud: $NEXTCLOUD_VERSION"
echo " Traefik: $TRAEFIK_VERSION"
echo " Ubuntu: $UBUNTU_VERSION"
echo ""

# Update registry
echo -e "${YELLOW}Updating registry...${NC}"

# Update versions in registry
yq eval -i ".clients.\"$CLIENT_NAME\".versions.authentik = \"$AUTHENTIK_VERSION\"" "$REGISTRY_FILE"
yq eval -i ".clients.\"$CLIENT_NAME\".versions.nextcloud = \"$NEXTCLOUD_VERSION\"" "$REGISTRY_FILE"
yq eval -i ".clients.\"$CLIENT_NAME\".versions.traefik = \"$TRAEFIK_VERSION\"" "$REGISTRY_FILE"
yq eval -i ".clients.\"$CLIENT_NAME\".versions.ubuntu = \"$UBUNTU_VERSION\"" "$REGISTRY_FILE"

echo -e "${GREEN}✓ Registry updated${NC}"
echo ""
echo "Updated: $REGISTRY_FILE"
echo ""
echo "To view registry:"
echo " ./scripts/client-status.sh $CLIENT_NAME"

@@ -1,170 +0,0 @@
#!/usr/bin/env bash
#
# Configure Diun on all servers (disable watchRepo, add Docker Hub auth)
# Created: 2026-01-24
#
# This script runs the diun configuration playbook on each server
# with its corresponding SSH key.
#
# Usage:
#   cd infrastructure/
#   SOPS_AGE_KEY_FILE="keys/age-key.txt" HCLOUD_TOKEN="..." ./scripts/configure-diun-all-servers.sh

set -euo pipefail

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Configuration
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
ANSIBLE_DIR="$PROJECT_ROOT/ansible"
KEYS_DIR="$PROJECT_ROOT/keys/ssh"
PLAYBOOK="playbooks/260124-configure-diun-watchrepo.yml"

# Check required environment variables
if [ -z "${HCLOUD_TOKEN:-}" ]; then
    echo -e "${RED}Error: HCLOUD_TOKEN environment variable is required${NC}"
    exit 1
fi

if [ -z "${SOPS_AGE_KEY_FILE:-}" ]; then
    echo -e "${RED}Error: SOPS_AGE_KEY_FILE environment variable is required${NC}"
    exit 1
fi

# Convert SOPS_AGE_KEY_FILE to absolute path if it's relative
if [[ ! "$SOPS_AGE_KEY_FILE" = /* ]]; then
    export SOPS_AGE_KEY_FILE="$PROJECT_ROOT/$SOPS_AGE_KEY_FILE"
fi

# Change to ansible directory
cd "$ANSIBLE_DIR"

echo -e "${BLUE}============================================================${NC}"
echo -e "${BLUE}Diun Configuration - All Servers${NC}"
echo -e "${BLUE}============================================================${NC}"
echo ""
echo "Playbook: $PLAYBOOK"
echo "Ansible directory: $ANSIBLE_DIR"
echo ""
echo "Configuration changes:"
echo " - Disable watchRepo (only check specific tags, not entire repos)"
echo " - Add Docker Hub authentication (5000 pulls/6h limit)"
echo " - Schedule: Weekly on Monday at 6am UTC"
echo ""

# Get list of all servers with SSH keys
SERVERS=()
for keyfile in "$KEYS_DIR"/*.pub; do
    if [ -f "$keyfile" ]; then
        server=$(basename "$keyfile" .pub)
        # Skip special servers
        if [[ "$server" != "README" ]] && [[ "$server" != "edge" ]]; then
            SERVERS+=("$server")
        fi
    fi
done
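
# The loop above assumes one key pair per server under keys/ssh/, e.g. (illustrative names):
#   keys/ssh/dev   keys/ssh/dev.pub
#   keys/ssh/acme  keys/ssh/acme.pub
# Entries named "README" or "edge" are skipped deliberately.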

echo -e "${BLUE}Found ${#SERVERS[@]} servers:${NC}"
printf '%s\n' "${SERVERS[@]}" | sort
echo ""

# Counters
SUCCESS_COUNT=0
FAILED_COUNT=0
SKIPPED_COUNT=0
declare -a SUCCESS_SERVERS
declare -a FAILED_SERVERS
declare -a SKIPPED_SERVERS

echo -e "${BLUE}============================================================${NC}"
echo -e "${BLUE}Starting configuration run...${NC}"
echo -e "${BLUE}============================================================${NC}"
echo ""

# Run playbook for each server
for server in "${SERVERS[@]}"; do
    echo -e "${YELLOW}-----------------------------------------------------------${NC}"
    echo -e "${YELLOW}Processing: $server${NC}"
    echo -e "${YELLOW}-----------------------------------------------------------${NC}"

    SSH_KEY="$KEYS_DIR/$server"

    if [ ! -f "$SSH_KEY" ]; then
        echo -e "${RED}✗ SSH key not found: $SSH_KEY${NC}"
        SKIPPED_COUNT=$((SKIPPED_COUNT + 1))
        SKIPPED_SERVERS+=("$server")
        echo ""
        continue
    fi

    # Run the playbook (with SSH options to prevent agent key issues)
    if env HCLOUD_TOKEN="$HCLOUD_TOKEN" \
        SOPS_AGE_KEY_FILE="$SOPS_AGE_KEY_FILE" \
        ANSIBLE_SSH_ARGS="-o IdentitiesOnly=yes" \
        ~/.local/bin/ansible-playbook \
        -i hcloud.yml \
        "$PLAYBOOK" \
        --limit "$server" \
        --private-key "$SSH_KEY" 2>&1; then

        echo -e "${GREEN}✓ Success: $server${NC}"
        SUCCESS_COUNT=$((SUCCESS_COUNT + 1))
        SUCCESS_SERVERS+=("$server")
    else
        echo -e "${RED}✗ Failed: $server${NC}"
        FAILED_COUNT=$((FAILED_COUNT + 1))
        FAILED_SERVERS+=("$server")
    fi

    echo ""
done

# Summary
echo -e "${BLUE}============================================================${NC}"
echo -e "${BLUE}CONFIGURATION RUN SUMMARY${NC}"
echo -e "${BLUE}============================================================${NC}"
echo ""
echo "Total servers: ${#SERVERS[@]}"
echo -e "${GREEN}Successful: $SUCCESS_COUNT${NC}"
echo -e "${RED}Failed: $FAILED_COUNT${NC}"
echo -e "${YELLOW}Skipped: $SKIPPED_COUNT${NC}"
echo ""

if [ $SUCCESS_COUNT -gt 0 ]; then
    echo -e "${GREEN}Successful servers:${NC}"
    printf ' %s\n' "${SUCCESS_SERVERS[@]}"
    echo ""
fi

if [ $FAILED_COUNT -gt 0 ]; then
    echo -e "${RED}Failed servers:${NC}"
    printf ' %s\n' "${FAILED_SERVERS[@]}"
    echo ""
fi

if [ $SKIPPED_COUNT -gt 0 ]; then
    echo -e "${YELLOW}Skipped servers:${NC}"
    printf ' %s\n' "${SKIPPED_SERVERS[@]}"
    echo ""
fi

echo -e "${BLUE}============================================================${NC}"
echo ""
echo "Next steps:"
echo " 1. Wait for next Monday at 6am UTC for scheduled run"
echo " 2. Or manually trigger: docker exec diun diun once"
echo " 3. Check logs: docker logs diun"
echo ""

# Exit with error if any failures
if [ $FAILED_COUNT -gt 0 ]; then
    exit 1
fi

exit 0

@@ -1,156 +0,0 @@
#!/usr/bin/env bash
#
# Configure OIDC for a single client
#
# Usage: ./scripts/configure-oidc.sh <client_name>
#
# This script:
# 1. Creates OIDC provider in Authentik
# 2. Installs user_oidc app in Nextcloud
# 3. Configures OIDC connection
# 4. Enables multiple user backends

set -euo pipefail

# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'

# Script directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(dirname "$SCRIPT_DIR")"

# Check arguments
if [ $# -ne 1 ]; then
    echo -e "${RED}Error: Client name required${NC}"
    echo "Usage: $0 <client_name>"
    exit 1
fi

CLIENT_NAME="$1"

# Check environment variables
if [ -z "${SOPS_AGE_KEY_FILE:-}" ]; then
    export SOPS_AGE_KEY_FILE="$PROJECT_ROOT/keys/age-key.txt"
fi

if [ -z "${HCLOUD_TOKEN:-}" ]; then
    echo -e "${RED}Error: HCLOUD_TOKEN not set${NC}"
    exit 1
fi

echo -e "${BLUE}Configuring OIDC for ${CLIENT_NAME}...${NC}"

cd "$PROJECT_ROOT"

# Step 1: Get credentials from secrets
echo "Getting credentials from secrets..."
TOKEN=$(sops -d "secrets/clients/${CLIENT_NAME}.sops.yaml" | grep authentik_bootstrap_token | awk '{print $2}')

if [ -z "$TOKEN" ]; then
    echo -e "${RED}Error: Could not get Authentik token${NC}"
    exit 1
fi

# Step 2: Create OIDC provider in Authentik
echo "Creating OIDC provider in Authentik..."

# Create Python script
cat > /tmp/create_oidc_${CLIENT_NAME}.py << EOFPYTHON
import sys, json, urllib.request
base_url, token = "http://localhost:9000", "${TOKEN}"
def req(p, m='GET', d=None):
    r = urllib.request.Request(f"{base_url}{p}", json.dumps(d).encode() if d else None, {'Authorization': f'Bearer {token}', 'Content-Type': 'application/json'}, method=m)
    try:
        with urllib.request.urlopen(r, timeout=30) as resp: return resp.status, json.loads(resp.read())
    except urllib.error.HTTPError as e: return e.code, json.loads(e.read()) if e.headers.get('Content-Type', '').startswith('application/json') else {'error': e.read().decode()}
s, d = req('/api/v3/flows/instances/')
auth_flow = next((f['pk'] for f in d.get('results', []) if f.get('slug') == 'default-authorization-flow' or f.get('designation') == 'authorization'), None)
inval_flow = next((f['pk'] for f in d.get('results', []) if f.get('slug') == 'default-invalidation-flow' or f.get('designation') == 'invalidation'), None)
s, d = req('/api/v3/crypto/certificatekeypairs/')
key = d.get('results', [{}])[0].get('pk') if d.get('results') else None
if not auth_flow or not key: print(json.dumps({'error': 'Config missing', 'auth_flow': auth_flow, 'key': key}), file=sys.stderr); sys.exit(1)
s, prov = req('/api/v3/providers/oauth2/', 'POST', {'name': 'Nextcloud', 'authorization_flow': auth_flow, 'invalidation_flow': inval_flow, 'client_type': 'confidential', 'redirect_uris': [{'matching_mode': 'strict', 'url': 'https://nextcloud.${CLIENT_NAME}.vrije.cloud/apps/user_oidc/code'}], 'signing_key': key, 'sub_mode': 'hashed_user_id', 'include_claims_in_id_token': True})
if s != 201: print(json.dumps({'error': 'Provider failed', 'status': s, 'details': prov}), file=sys.stderr); sys.exit(1)
s, app = req('/api/v3/core/applications/', 'POST', {'name': 'Nextcloud', 'slug': 'nextcloud', 'provider': prov['pk'], 'meta_launch_url': 'https://nextcloud.${CLIENT_NAME}.vrije.cloud'})
if s != 201: print(json.dumps({'error': 'App failed', 'status': s, 'details': app}), file=sys.stderr); sys.exit(1)
print(json.dumps({'success': True, 'provider_id': prov['pk'], 'application_id': app['pk'], 'client_id': prov['client_id'], 'client_secret': prov['client_secret'], 'discovery_uri': f"https://auth.${CLIENT_NAME}.vrije.cloud/application/o/nextcloud/.well-known/openid-configuration", 'issuer': f"https://auth.${CLIENT_NAME}.vrije.cloud/application/o/nextcloud/"}))
EOFPYTHON
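
# On success the embedded script prints a single JSON line, roughly of the form
# (illustrative values, taken from the print statement above):
#   {"success": true, "provider_id": "...", "client_id": "...", "client_secret": "...",
#    "discovery_uri": "https://auth.<client>.vrije.cloud/application/o/nextcloud/.well-known/openid-configuration",
#    "issuer": "https://auth.<client>.vrije.cloud/application/o/nextcloud/"}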

# Copy script to server and execute
cd ansible
env HCLOUD_TOKEN="$HCLOUD_TOKEN" \
    ansible "${CLIENT_NAME}" \
    -i hcloud.yml \
    -m copy \
    -a "src=/tmp/create_oidc_${CLIENT_NAME}.py dest=/tmp/create_oidc.py mode=0755" \
    --private-key "../keys/ssh/${CLIENT_NAME}" > /dev/null 2>&1

# Execute the script
OIDC_RESULT=$(env HCLOUD_TOKEN="$HCLOUD_TOKEN" \
    ansible "${CLIENT_NAME}" \
    -i hcloud.yml \
    -m shell \
    -a "docker exec -i authentik-server python3 < /tmp/create_oidc.py" \
    --private-key "../keys/ssh/${CLIENT_NAME}" 2>/dev/null | grep -A1 "CHANGED" | tail -1)

if [ -z "$OIDC_RESULT" ]; then
    echo -e "${RED}Error: Failed to create OIDC provider${NC}"
    exit 1
fi

# Parse credentials
CLIENT_ID=$(echo "$OIDC_RESULT" | python3 -c "import sys, json; d=json.load(sys.stdin); print(d['client_id'])")
CLIENT_SECRET=$(echo "$OIDC_RESULT" | python3 -c "import sys, json; d=json.load(sys.stdin); print(d['client_secret'])")
DISCOVERY_URI=$(echo "$OIDC_RESULT" | python3 -c "import sys, json; d=json.load(sys.stdin); print(d['discovery_uri'])")

if [ -z "$CLIENT_ID" ] || [ -z "$CLIENT_SECRET" ] || [ -z "$DISCOVERY_URI" ]; then
    echo -e "${RED}Error: Failed to parse OIDC credentials${NC}"
    exit 1
fi

echo -e "${GREEN}✓ OIDC provider created${NC}"

# Step 3: Install user_oidc app in Nextcloud
echo "Installing user_oidc app..."

env HCLOUD_TOKEN="$HCLOUD_TOKEN" \
    ansible "${CLIENT_NAME}" \
    -i hcloud.yml \
    -m shell \
    -a "docker exec -u www-data nextcloud php occ app:install user_oidc" \
    --private-key "../keys/ssh/${CLIENT_NAME}" > /dev/null 2>&1 || true

echo -e "${GREEN}✓ user_oidc app installed${NC}"

# Step 4: Configure OIDC provider in Nextcloud
echo "Configuring OIDC provider..."

env HCLOUD_TOKEN="$HCLOUD_TOKEN" \
    ansible "${CLIENT_NAME}" \
    -i hcloud.yml \
    -m shell \
    -a "docker exec -u www-data nextcloud php occ user_oidc:provider --clientid=\"${CLIENT_ID}\" --clientsecret=\"${CLIENT_SECRET}\" --discoveryuri=\"${DISCOVERY_URI}\" \"Authentik\"" \
    --private-key "../keys/ssh/${CLIENT_NAME}" > /dev/null 2>&1

echo -e "${GREEN}✓ OIDC provider configured${NC}"

# Step 5: Configure OIDC settings
echo "Configuring OIDC settings..."

env HCLOUD_TOKEN="$HCLOUD_TOKEN" \
    ansible "${CLIENT_NAME}" \
    -i hcloud.yml \
    -m shell \
    -a "docker exec -u www-data nextcloud php occ config:app:set user_oidc allow_multiple_user_backends --value=1 && docker exec -u www-data nextcloud php occ config:app:set user_oidc auto_provision --value=1 && docker exec -u www-data nextcloud php occ config:app:set user_oidc single_logout --value=0" \
    --private-key "../keys/ssh/${CLIENT_NAME}" > /dev/null 2>&1

echo -e "${GREEN}✓ OIDC settings configured${NC}"

# Cleanup
rm -f /tmp/create_oidc_${CLIENT_NAME}.py

echo -e "${GREEN}✓ OIDC configuration complete for ${CLIENT_NAME}${NC}"

@@ -36,86 +36,33 @@ fi

CLIENT_NAME="$1"

# Check if SSH key exists, generate if missing
SSH_KEY_FILE="$PROJECT_ROOT/keys/ssh/${CLIENT_NAME}"
if [ ! -f "$SSH_KEY_FILE" ]; then
    echo -e "${YELLOW}SSH key not found for client: $CLIENT_NAME${NC}"
    echo "Generating SSH key pair automatically..."
    echo ""

    # Generate SSH key
    "$SCRIPT_DIR/generate-client-keys.sh" "$CLIENT_NAME"

    echo ""
    echo -e "${GREEN}✓ SSH key generated${NC}"
    echo ""
fi

# Check if secrets file exists, create from template if missing
# Check if secrets file exists
SECRETS_FILE="$PROJECT_ROOT/secrets/clients/${CLIENT_NAME}.sops.yaml"
TEMPLATE_FILE="$PROJECT_ROOT/secrets/clients/template.sops.yaml"

if [ ! -f "$SECRETS_FILE" ]; then
    echo -e "${YELLOW}Secrets file not found for client: $CLIENT_NAME${NC}"
    echo "Creating from template and opening for editing..."
    echo -e "${RED}Error: Secrets file not found: $SECRETS_FILE${NC}"
    echo ""

    # Check if template exists
    if [ ! -f "$TEMPLATE_FILE" ]; then
        echo -e "${RED}Error: Template file not found: $TEMPLATE_FILE${NC}"
        echo "Create a secrets file first:"
        echo " 1. Copy the template:"
        echo " cp secrets/clients/test.sops.yaml secrets/clients/${CLIENT_NAME}.sops.yaml"
        echo ""
        echo " 2. Edit with SOPS:"
        echo " sops secrets/clients/${CLIENT_NAME}.sops.yaml"
        echo ""
        echo " 3. Update the following fields:"
        echo " - client_name: $CLIENT_NAME"
        echo " - client_domain: ${CLIENT_NAME}.vrije.cloud"
        echo " - authentik_domain: auth.${CLIENT_NAME}.vrije.cloud"
        echo " - nextcloud_domain: nextcloud.${CLIENT_NAME}.vrije.cloud"
        echo " - All passwords and tokens (regenerate for security)"
        exit 1
    fi

    # Copy template and decrypt to temporary file
    if [ -z "${SOPS_AGE_KEY_FILE:-}" ]; then
        export SOPS_AGE_KEY_FILE="$PROJECT_ROOT/keys/age-key.txt"
    fi

    # Decrypt template to temp file
    TEMP_PLAIN=$(mktemp)
    sops -d "$TEMPLATE_FILE" > "$TEMP_PLAIN"

    # Replace client name placeholders
    sed -i '' "s/test/${CLIENT_NAME}/g" "$TEMP_PLAIN"

    # Create unencrypted file in correct location (matching .sops.yaml regex)
    # This is necessary because SOPS needs the file path to match creation rules
    TEMP_SOPS="${SECRETS_FILE%.sops.yaml}-unenc.sops.yaml"
    cat "$TEMP_PLAIN" > "$TEMP_SOPS"

    # Encrypt in-place (SOPS finds creation rules because path matches regex)
    sops --encrypt --in-place "$TEMP_SOPS"

    # Rename to final name
    mv "$TEMP_SOPS" "$SECRETS_FILE"

    # Cleanup
    rm "$TEMP_PLAIN"

    echo -e "${GREEN}✓ Created secrets file with client-specific domains${NC}"
    echo ""

    # Automatically generate unique passwords
    echo -e "${BLUE}Generating unique passwords for ${CLIENT_NAME}...${NC}"
    echo ""

    # Call the password generator script
    "$SCRIPT_DIR/generate-passwords.sh" "$CLIENT_NAME"

    echo ""
    echo -e "${GREEN}✓ Secrets file configured with unique passwords${NC}"
    echo ""
    echo -e "${YELLOW}To view credentials:${NC}"
    echo -e " ${BLUE}./scripts/get-passwords.sh ${CLIENT_NAME}${NC}"
    echo ""
fi

# Load Hetzner API token from SOPS if not already set
# Check required environment variables
if [ -z "${HCLOUD_TOKEN:-}" ]; then
    echo -e "${BLUE}Loading Hetzner API token from SOPS...${NC}"
    # shellcheck source=scripts/load-secrets-env.sh
    source "$SCRIPT_DIR/load-secrets-env.sh"
    echo ""
    echo -e "${RED}Error: HCLOUD_TOKEN environment variable not set${NC}"
    echo "Export your Hetzner Cloud API token:"
    echo " export HCLOUD_TOKEN='your-token-here'"
    exit 1
fi

if [ -z "${SOPS_AGE_KEY_FILE:-}" ]; then

@@ -144,28 +91,6 @@ if ! grep -q "\"$CLIENT_NAME\"" terraform.tfvars 2>/dev/null; then
fi
fi

# Check if client exists in terraform.tfvars
TFVARS_FILE="$PROJECT_ROOT/tofu/terraform.tfvars"
if ! grep -q "^[[:space:]]*${CLIENT_NAME}[[:space:]]*=" "$TFVARS_FILE"; then
    echo -e "${YELLOW}⚠ Client '${CLIENT_NAME}' not found in terraform.tfvars${NC}"
    echo ""
    echo "The client must be added to OpenTofu configuration before deployment."
    echo ""
    read -p "Would you like to add it now? (yes/no): " add_confirm

    if [ "$add_confirm" = "yes" ]; then
        echo ""
        "$SCRIPT_DIR/add-client-to-terraform.sh" "$CLIENT_NAME"
        echo ""
    else
        echo -e "${RED}Error: Cannot deploy without OpenTofu configuration${NC}"
        echo ""
        echo "Add the client manually to tofu/terraform.tfvars, or run:"
        echo " ./scripts/add-client-to-terraform.sh $CLIENT_NAME"
        exit 1
    fi
fi

# Start timer
START_TIME=$(date +%s)

@@ -175,16 +100,10 @@ echo -e "${BLUE}========================================${NC}"
echo ""
|
||||
|
||||
# Step 1: Provision infrastructure
|
||||
echo -e "${YELLOW}[1/5] Provisioning infrastructure with OpenTofu...${NC}"
|
||||
echo -e "${YELLOW}[1/3] Provisioning infrastructure with OpenTofu...${NC}"
|
||||
|
||||
cd "$PROJECT_ROOT/tofu"
|
||||
|
||||
# Export TF_VAR environment variables if HCLOUD_TOKEN is set
|
||||
if [ -n "${HCLOUD_TOKEN:-}" ]; then
|
||||
export TF_VAR_hcloud_token="$HCLOUD_TOKEN"
|
||||
export TF_VAR_hetznerdns_token="$HCLOUD_TOKEN"
|
||||
fi
|
||||
|
||||
# Check if already exists
|
||||
if tofu state list 2>/dev/null | grep -q "hcloud_server.client\[\"$CLIENT_NAME\"\]"; then
|
||||
echo -e "${YELLOW}⚠ Server already exists, applying any missing DNS records...${NC}"
|
||||
|
|
@ -205,77 +124,25 @@ fi
|
|||
echo ""
|
||||
|
||||
# Step 2: Setup base system
|
||||
echo -e "${YELLOW}[2/5] Setting up base system (Docker, Traefik)...${NC}"
|
||||
echo -e "${YELLOW}[2/3] Setting up base system (Docker, Traefik)...${NC}"
|
||||
|
||||
cd "$PROJECT_ROOT/ansible"
|
||||
|
||||
~/.local/bin/ansible-playbook -i hcloud.yml playbooks/setup.yml --limit "$CLIENT_NAME" --private-key "../keys/ssh/$CLIENT_NAME"
|
||||
~/.local/bin/ansible-playbook -i hcloud.yml playbooks/setup.yml --limit "$CLIENT_NAME"
|
||||
|
||||
echo ""
|
||||
echo -e "${GREEN}✓ Base system configured${NC}"
|
||||
echo ""
|
||||
|
||||
# Step 3: Deploy applications
|
||||
echo -e "${YELLOW}[3/5] Deploying applications (Authentik, Nextcloud, SSO)...${NC}"
|
||||
echo -e "${YELLOW}[3/3] Deploying applications (Authentik, Nextcloud, SSO)...${NC}"
|
||||
|
||||
~/.local/bin/ansible-playbook -i hcloud.yml playbooks/deploy.yml --limit "$CLIENT_NAME" --private-key "../keys/ssh/$CLIENT_NAME"
|
||||
~/.local/bin/ansible-playbook -i hcloud.yml playbooks/deploy.yml --limit "$CLIENT_NAME"
|
||||
|
||||
echo ""
|
||||
echo -e "${GREEN}✓ Applications deployed${NC}"
|
||||
echo ""
|
||||
|
||||
# Step 4: Update client registry
|
||||
echo -e "${YELLOW}[4/5] Updating client registry...${NC}"
|
||||
|
||||
cd "$PROJECT_ROOT/tofu"
|
||||
|
||||
# Get server information from Terraform state
|
||||
SERVER_IP=$(tofu output -json client_ips 2>/dev/null | jq -r ".\"$CLIENT_NAME\"" || echo "")
|
||||
SERVER_ID=$(tofu state show "hcloud_server.client[\"$CLIENT_NAME\"]" 2>/dev/null | grep "^[[:space:]]*id[[:space:]]*=" | awk '{print $3}' | tr -d '"' || echo "")
|
||||
SERVER_TYPE=$(tofu state show "hcloud_server.client[\"$CLIENT_NAME\"]" 2>/dev/null | grep "^[[:space:]]*server_type[[:space:]]*=" | awk '{print $3}' | tr -d '"' || echo "")
|
||||
SERVER_LOCATION=$(tofu state show "hcloud_server.client[\"$CLIENT_NAME\"]" 2>/dev/null | grep "^[[:space:]]*location[[:space:]]*=" | awk '{print $3}' | tr -d '"' || echo "")
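
# The greps above assume "tofu state show" output lines roughly of the form
# (illustrative values):
#   id          = "12345678"
#   server_type = "cx22"
#   location    = "fsn1"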

# Determine role (dev is canary, everything else is production by default)
ROLE="production"
if [ "$CLIENT_NAME" = "dev" ]; then
    ROLE="canary"
fi

# Update registry
"$SCRIPT_DIR/update-registry.sh" "$CLIENT_NAME" deploy \
    --role="$ROLE" \
    --server-ip="$SERVER_IP" \
    --server-id="$SERVER_ID" \
    --server-type="$SERVER_TYPE" \
    --server-location="$SERVER_LOCATION"

echo ""
echo -e "${GREEN}✓ Registry updated${NC}"
echo ""

# Collect deployed versions
echo -e "${YELLOW}Collecting deployed versions...${NC}"

"$SCRIPT_DIR/collect-client-versions.sh" "$CLIENT_NAME" 2>/dev/null || {
    echo -e "${YELLOW}⚠ Could not collect versions automatically${NC}"
    echo "Run manually later: ./scripts/collect-client-versions.sh $CLIENT_NAME"
}

echo ""

# Add to monitoring
echo -e "${YELLOW}[5/5] Adding client to monitoring...${NC}"
echo ""

if [ -f "$SCRIPT_DIR/add-client-to-monitoring.sh" ]; then
    "$SCRIPT_DIR/add-client-to-monitoring.sh" "$CLIENT_NAME"
else
    echo -e "${YELLOW}⚠ Monitoring script not found${NC}"
    echo "Manually add monitors at: https://status.vrije.cloud"
fi

echo ""

# Calculate duration
END_TIME=$(date +%s)
DURATION=$((END_TIME - START_TIME))

@@ -41,12 +41,12 @@ if [ ! -f "$SECRETS_FILE" ]; then
    exit 1
fi

# Load Hetzner API token from SOPS if not already set
# Check required environment variables
if [ -z "${HCLOUD_TOKEN:-}" ]; then
    echo -e "${BLUE}Loading Hetzner API token from SOPS...${NC}"
    # shellcheck source=scripts/load-secrets-env.sh
    source "$SCRIPT_DIR/load-secrets-env.sh"
    echo ""
    echo -e "${RED}Error: HCLOUD_TOKEN environment variable not set${NC}"
    echo "Export your Hetzner Cloud API token:"
    echo " export HCLOUD_TOKEN='your-token-here'"
    exit 1
fi

if [ -z "${SOPS_AGE_KEY_FILE:-}" ]; then

@@ -78,21 +78,8 @@ echo ""
echo -e "${YELLOW}Starting destruction of client: $CLIENT_NAME${NC}"
|
||||
echo ""
|
||||
|
||||
# Step 0: Remove from monitoring
|
||||
echo -e "${YELLOW}[0/7] Removing client from monitoring...${NC}"
|
||||
echo ""
|
||||
|
||||
if [ -f "$SCRIPT_DIR/remove-client-from-monitoring.sh" ]; then
|
||||
"$SCRIPT_DIR/remove-client-from-monitoring.sh" "$CLIENT_NAME"
|
||||
else
|
||||
echo -e "${YELLOW}⚠ Monitoring script not found${NC}"
|
||||
echo "Manually remove monitors at: https://status.vrije.cloud"
|
||||
fi
|
||||
|
||||
echo ""
|
||||
|
||||
# Step 1: Delete Mailgun SMTP credentials
|
||||
echo -e "${YELLOW}[1/7] Deleting Mailgun SMTP credentials...${NC}"
|
||||
echo -e "${YELLOW}[1/3] Deleting Mailgun SMTP credentials...${NC}"
|
||||
|
||||
cd "$PROJECT_ROOT/ansible"
|
||||
|
||||
|
|
@ -103,26 +90,20 @@ echo -e "${GREEN}✓ SMTP credentials cleanup attempted${NC}"
|
|||
echo ""
|
||||
|
||||
# Step 2: Clean up Docker containers and volumes on the server (if reachable)
|
||||
echo -e "${YELLOW}[2/7] Cleaning up Docker containers and volumes...${NC}"
|
||||
echo -e "${YELLOW}[2/3] Cleaning up Docker containers and volumes...${NC}"
|
||||
|
||||
# Try to use per-client SSH key if it exists
|
||||
SSH_KEY_ARG=""
|
||||
if [ -f "$PROJECT_ROOT/keys/ssh/${CLIENT_NAME}" ]; then
|
||||
SSH_KEY_ARG="--private-key=$PROJECT_ROOT/keys/ssh/${CLIENT_NAME}"
|
||||
fi
|
||||
|
||||
if ~/.local/bin/ansible -i hcloud.yml "$CLIENT_NAME" $SSH_KEY_ARG -m ping -o &>/dev/null; then
|
||||
if ~/.local/bin/ansible -i hcloud.yml "$CLIENT_NAME" -m ping -o &>/dev/null; then
|
||||
echo "Server is reachable, cleaning up Docker resources..."
|
||||
|
||||
# Stop and remove all containers
|
||||
~/.local/bin/ansible -i hcloud.yml "$CLIENT_NAME" $SSH_KEY_ARG -m shell -a "docker ps -aq | xargs -r docker stop" -b 2>/dev/null || true
|
||||
~/.local/bin/ansible -i hcloud.yml "$CLIENT_NAME" $SSH_KEY_ARG -m shell -a "docker ps -aq | xargs -r docker rm -f" -b 2>/dev/null || true
|
||||
~/.local/bin/ansible -i hcloud.yml "$CLIENT_NAME" -m shell -a "docker ps -aq | xargs -r docker stop" -b 2>/dev/null || true
|
||||
~/.local/bin/ansible -i hcloud.yml "$CLIENT_NAME" -m shell -a "docker ps -aq | xargs -r docker rm -f" -b 2>/dev/null || true
|
||||
|
||||
# Remove all volumes
|
||||
~/.local/bin/ansible -i hcloud.yml "$CLIENT_NAME" $SSH_KEY_ARG -m shell -a "docker volume ls -q | xargs -r docker volume rm -f" -b 2>/dev/null || true
|
||||
~/.local/bin/ansible -i hcloud.yml "$CLIENT_NAME" -m shell -a "docker volume ls -q | xargs -r docker volume rm -f" -b 2>/dev/null || true
|
||||
|
||||
# Remove all networks (except defaults)
|
||||
~/.local/bin/ansible -i hcloud.yml "$CLIENT_NAME" $SSH_KEY_ARG -m shell -a "docker network ls --filter type=custom -q | xargs -r docker network rm" -b 2>/dev/null || true
|
||||
~/.local/bin/ansible -i hcloud.yml "$CLIENT_NAME" -m shell -a "docker network ls --filter type=custom -q | xargs -r docker network rm" -b 2>/dev/null || true
|
||||
|
||||
echo -e "${GREEN}✓ Docker cleanup complete${NC}"
|
||||
else
|
||||
|
|
@ -132,127 +113,21 @@ fi
|
|||
echo ""
|
||||
|
||||
# Step 3: Destroy infrastructure with OpenTofu
|
||||
echo -e "${YELLOW}[3/7] Destroying infrastructure with OpenTofu...${NC}"
|
||||
echo -e "${YELLOW}[3/3] Destroying infrastructure with OpenTofu...${NC}"
|
||||
|
||||
cd "$PROJECT_ROOT/tofu"
|
||||
|
||||
# Destroy all resources for this client (server, volume, SSH key, DNS)
|
||||
# Get current infrastructure state
|
||||
echo "Checking current infrastructure..."
|
||||
tofu plan -destroy -var-file="terraform.tfvars" \
|
||||
-target="hcloud_server.client[\"$CLIENT_NAME\"]" \
|
||||
-target="hcloud_volume.nextcloud_data[\"$CLIENT_NAME\"]" \
|
||||
-target="hcloud_volume_attachment.nextcloud_data[\"$CLIENT_NAME\"]" \
|
||||
-target="hcloud_ssh_key.client[\"$CLIENT_NAME\"]" \
|
||||
-target="hcloud_zone_rrset.client_a[\"$CLIENT_NAME\"]" \
|
||||
-target="hcloud_zone_rrset.client_wildcard[\"$CLIENT_NAME\"]" \
|
||||
-out=destroy.tfplan
|
||||
tofu plan -destroy -var-file="terraform.tfvars" -target="hcloud_server.client[\"$CLIENT_NAME\"]" -out=destroy.tfplan
|
||||
|
||||
echo ""
|
||||
echo "Verifying plan only targets $CLIENT_NAME resources..."
|
||||
|
||||
# Verify the plan only contains the client's resources
|
||||
PLAN_OUTPUT=$(tofu show destroy.tfplan 2>&1)
|
||||
if echo "$PLAN_OUTPUT" | grep -E "will be destroyed" | grep -v "\"$CLIENT_NAME\"" | grep -q .; then
|
||||
echo -e "${RED}ERROR: Plan contains resources NOT belonging to $CLIENT_NAME!${NC}"
|
||||
echo ""
|
||||
echo "Resources in plan:"
|
||||
echo "$PLAN_OUTPUT" | grep -E "# .* will be destroyed" | head -20
|
||||
echo ""
|
||||
echo "Aborting to prevent accidental destruction of other clients."
|
||||
rm -f destroy.tfplan
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo -e "${GREEN}✓ Plan verified - only $CLIENT_NAME resources will be destroyed${NC}"
|
||||
echo ""
|
||||
echo "Applying destruction..."
|
||||
tofu apply -auto-approve destroy.tfplan
|
||||
tofu apply destroy.tfplan
|
||||
|
||||
# Cleanup plan file
|
||||
rm -f destroy.tfplan
|
||||
|
||||
echo -e "${GREEN}✓ Infrastructure destroyed${NC}"
|
||||
echo ""
|
||||
|
||||
# Step 4: Remove client from terraform.tfvars
|
||||
echo -e "${YELLOW}[4/7] Removing client from terraform.tfvars...${NC}"
|
||||
|
||||
TFVARS_FILE="$PROJECT_ROOT/tofu/terraform.tfvars"
|
||||
if grep -q "^[[:space:]]*${CLIENT_NAME}[[:space:]]*=" "$TFVARS_FILE"; then
|
||||
# Create backup
|
||||
cp "$TFVARS_FILE" "$TFVARS_FILE.bak"
|
||||
|
||||
# Remove the client block (from "client_name = {" to the closing "}")
|
||||
# This uses awk to find and remove the entire block
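    # Illustratively, the block being removed has the form:
    #   <client_name> = {
    #     server_type = "cx22"
    #     ...
    #   }
    # Braces are counted so that nested "{ }" pairs inside the block are handled.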
    awk -v client="$CLIENT_NAME" '
        BEGIN { skip=0; in_block=0 }
        /^[[:space:]]*#.*[Cc]lient/ { if (skip==0) print; next }
        $0 ~ "^[[:space:]]*" client "[[:space:]]*=" { skip=1; in_block=1; brace_count=0; next }
        skip==1 {
            for(i=1; i<=length($0); i++) {
                c=substr($0,i,1)
                if(c=="{") brace_count++
                if(c=="}") brace_count--
            }
            if(brace_count<0 || (brace_count==0 && $0 ~ /^[[:space:]]*}/)) {
                skip=0
                in_block=0
                next
            }
            next
        }
        { print }
    ' "$TFVARS_FILE" > "$TFVARS_FILE.tmp"

    mv "$TFVARS_FILE.tmp" "$TFVARS_FILE"
    echo -e "${GREEN}✓ Removed $CLIENT_NAME from terraform.tfvars${NC}"
else
    echo -e "${YELLOW}⚠ Client not found in terraform.tfvars${NC}"
fi

echo ""

# Step 5: Remove SSH keys
echo -e "${YELLOW}[5/7] Removing SSH keys...${NC}"

SSH_PRIVATE="$PROJECT_ROOT/keys/ssh/${CLIENT_NAME}"
SSH_PUBLIC="$PROJECT_ROOT/keys/ssh/${CLIENT_NAME}.pub"

if [ -f "$SSH_PRIVATE" ]; then
    rm -f "$SSH_PRIVATE"
    echo -e "${GREEN}✓ Removed private key: $SSH_PRIVATE${NC}"
else
    echo -e "${YELLOW}⚠ Private key not found${NC}"
fi

if [ -f "$SSH_PUBLIC" ]; then
    rm -f "$SSH_PUBLIC"
    echo -e "${GREEN}✓ Removed public key: $SSH_PUBLIC${NC}"
else
    echo -e "${YELLOW}⚠ Public key not found${NC}"
fi

echo ""

# Step 6: Remove secrets file
echo -e "${YELLOW}[6/7] Removing secrets file...${NC}"

if [ -f "$SECRETS_FILE" ]; then
    rm -f "$SECRETS_FILE"
    echo -e "${GREEN}✓ Removed secrets file: $SECRETS_FILE${NC}"
else
    echo -e "${YELLOW}⚠ Secrets file not found${NC}"
fi

echo ""

# Step 7: Update client registry
echo -e "${YELLOW}[7/7] Updating client registry...${NC}"

"$SCRIPT_DIR/update-registry.sh" "$CLIENT_NAME" destroy

echo ""
echo -e "${GREEN}✓ Registry updated${NC}"
echo ""
echo -e "${GREEN}========================================${NC}"
echo -e "${GREEN}✓ Client '$CLIENT_NAME' destroyed successfully${NC}"

@@ -261,13 +136,12 @@
echo "The following have been removed:"
|
||||
echo " ✓ Mailgun SMTP credentials"
|
||||
echo " ✓ VPS server"
|
||||
echo " ✓ Hetzner Volume"
|
||||
echo " ✓ SSH keys (Hetzner + local)"
|
||||
echo " ✓ DNS records"
|
||||
echo " ✓ Firewall rules"
|
||||
echo " ✓ Secrets file"
|
||||
echo " ✓ terraform.tfvars entry"
|
||||
echo " ✓ Registry entry"
|
||||
echo " ✓ DNS records (if managed by OpenTofu)"
|
||||
echo " ✓ Firewall rules (if not shared)"
|
||||
echo ""
|
||||
echo "The client has been completely removed from the infrastructure."
|
||||
echo -e "${YELLOW}Note: Secrets file still exists at:${NC}"
|
||||
echo " $SECRETS_FILE"
|
||||
echo ""
|
||||
echo "To rebuild this client, run:"
|
||||
echo " ./scripts/rebuild-client.sh $CLIENT_NAME"
|
||||
echo ""
|
||||
|
|
|
|||
|
|
@ -1,228 +0,0 @@
|
|||
#!/usr/bin/env bash
#
# Detect version drift between clients
#
# Usage: ./scripts/detect-version-drift.sh [options]
#
# Options:
#   --threshold=<days>  Only report clients not updated in X days (default: 30)
#   --app=<name>        Check specific app only (authentik|nextcloud|traefik|ubuntu)
#   --format=table      Show as table (default)
#   --format=summary    Show summary only
#
# Exit codes:
#   0 - No drift detected
#   1 - Drift detected
#   2 - Error
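#
# Example invocations (illustrative, built from the options above):
#   ./scripts/detect-version-drift.sh
#   ./scripts/detect-version-drift.sh --threshold=14 --app=nextcloud --format=summary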

set -euo pipefail

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m' # No Color

# Script directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(dirname "$SCRIPT_DIR")"
REGISTRY_FILE="$PROJECT_ROOT/clients/registry.yml"

# Default options
THRESHOLD_DAYS=30
FILTER_APP=""
FORMAT="table"

# Parse arguments
for arg in "$@"; do
    case $arg in
        --threshold=*)
            THRESHOLD_DAYS="${arg#*=}"
            ;;
        --app=*)
            FILTER_APP="${arg#*=}"
            ;;
        --format=*)
            FORMAT="${arg#*=}"
            ;;
        *)
            echo "Unknown option: $arg"
            echo "Usage: $0 [--threshold=<days>] [--app=<name>] [--format=table|summary]"
            exit 2
            ;;
    esac
done

# Check if yq is available
if ! command -v yq &> /dev/null; then
    echo -e "${RED}Error: 'yq' not found. Install with: brew install yq${NC}"
    exit 2
fi

# Check if registry exists
if [ ! -f "$REGISTRY_FILE" ]; then
    echo -e "${RED}Error: Registry file not found: $REGISTRY_FILE${NC}"
    exit 2
fi

# Get list of deployed clients only
CLIENTS=$(yq eval '.clients | to_entries | map(select(.value.status == "deployed")) | .[].key' "$REGISTRY_FILE" 2>/dev/null)

if [ -z "$CLIENTS" ]; then
    echo -e "${YELLOW}No deployed clients found${NC}"
    exit 0
fi

# Determine latest versions
declare -A LATEST_VERSIONS
LATEST_VERSIONS[authentik]=$(yq eval '.clients | to_entries | .[].value.versions.authentik' "$REGISTRY_FILE" | sort -V | tail -1)
LATEST_VERSIONS[nextcloud]=$(yq eval '.clients | to_entries | .[].value.versions.nextcloud' "$REGISTRY_FILE" | sort -V | tail -1)
LATEST_VERSIONS[traefik]=$(yq eval '.clients | to_entries | .[].value.versions.traefik' "$REGISTRY_FILE" | sort -V | tail -1)
LATEST_VERSIONS[ubuntu]=$(yq eval '.clients | to_entries | .[].value.versions.ubuntu' "$REGISTRY_FILE" | sort -V | tail -1)

# Calculate date threshold
if command -v gdate &> /dev/null; then
    # macOS with GNU coreutils
    THRESHOLD_DATE=$(gdate -d "$THRESHOLD_DAYS days ago" +%Y-%m-%d)
elif date --version &> /dev/null 2>&1; then
    # GNU date (Linux)
    THRESHOLD_DATE=$(date -d "$THRESHOLD_DAYS days ago" +%Y-%m-%d)
else
    # BSD date (macOS default)
    THRESHOLD_DATE=$(date -v-${THRESHOLD_DAYS}d +%Y-%m-%d)
fi

# Counters
DRIFT_FOUND=0
OUTDATED_COUNT=0
STALE_COUNT=0

# Arrays to store drift details
declare -a DRIFT_CLIENTS
declare -a DRIFT_DETAILS

# Analyze each client
for client in $CLIENTS; do
    authentik=$(yq eval ".clients.\"$client\".versions.authentik" "$REGISTRY_FILE")
    nextcloud=$(yq eval ".clients.\"$client\".versions.nextcloud" "$REGISTRY_FILE")
    traefik=$(yq eval ".clients.\"$client\".versions.traefik" "$REGISTRY_FILE")
    ubuntu=$(yq eval ".clients.\"$client\".versions.ubuntu" "$REGISTRY_FILE")
    last_update=$(yq eval ".clients.\"$client\".maintenance.last_full_update" "$REGISTRY_FILE")

    has_drift=false
    drift_reasons=()

    # Check version drift
    if [ -z "$FILTER_APP" ] || [ "$FILTER_APP" = "authentik" ]; then
        if [ "$authentik" != "${LATEST_VERSIONS[authentik]}" ] && [ "$authentik" != "null" ] && [ "$authentik" != "unknown" ]; then
            has_drift=true
            drift_reasons+=("Authentik: $authentik → ${LATEST_VERSIONS[authentik]}")
        fi
    fi

    if [ -z "$FILTER_APP" ] || [ "$FILTER_APP" = "nextcloud" ]; then
        if [ "$nextcloud" != "${LATEST_VERSIONS[nextcloud]}" ] && [ "$nextcloud" != "null" ] && [ "$nextcloud" != "unknown" ]; then
            has_drift=true
            drift_reasons+=("Nextcloud: $nextcloud → ${LATEST_VERSIONS[nextcloud]}")
        fi
    fi

    if [ -z "$FILTER_APP" ] || [ "$FILTER_APP" = "traefik" ]; then
        if [ "$traefik" != "${LATEST_VERSIONS[traefik]}" ] && [ "$traefik" != "null" ] && [ "$traefik" != "unknown" ]; then
            has_drift=true
            drift_reasons+=("Traefik: $traefik → ${LATEST_VERSIONS[traefik]}")
        fi
    fi

    if [ -z "$FILTER_APP" ] || [ "$FILTER_APP" = "ubuntu" ]; then
        if [ "$ubuntu" != "${LATEST_VERSIONS[ubuntu]}" ] && [ "$ubuntu" != "null" ] && [ "$ubuntu" != "unknown" ]; then
            has_drift=true
            drift_reasons+=("Ubuntu: $ubuntu → ${LATEST_VERSIONS[ubuntu]}")
        fi
    fi

    # Check if update is stale (older than threshold)
    is_stale=false
    if [ "$last_update" != "null" ] && [ -n "$last_update" ]; then
        if [[ "$last_update" < "$THRESHOLD_DATE" ]]; then
            is_stale=true
            drift_reasons+=("Last update: $last_update (>$THRESHOLD_DAYS days ago)")
        fi
    fi

    # Record drift
    if [ "$has_drift" = true ] || [ "$is_stale" = true ]; then
        DRIFT_FOUND=1
        DRIFT_CLIENTS+=("$client")
        DRIFT_DETAILS+=("$(IFS='; '; echo "${drift_reasons[*]}")")

        [ "$has_drift" = true ] && ((OUTDATED_COUNT++)) || true
        [ "$is_stale" = true ] && ((STALE_COUNT++)) || true
    fi
done

# Output results
case $FORMAT in
    table)
        if [ $DRIFT_FOUND -eq 0 ]; then
            echo -e "${GREEN}✓ No version drift detected${NC}"
            echo ""
            echo "All deployed clients are running latest versions:"
            echo " Authentik: ${LATEST_VERSIONS[authentik]}"
            echo " Nextcloud: ${LATEST_VERSIONS[nextcloud]}"
            echo " Traefik: ${LATEST_VERSIONS[traefik]}"
            echo " Ubuntu: ${LATEST_VERSIONS[ubuntu]}"
            echo ""
        else
            echo -e "${RED}⚠ VERSION DRIFT DETECTED${NC}"
            echo ""
            echo -e "${CYAN}Clients with outdated versions:${NC}"
            echo ""

            for i in "${!DRIFT_CLIENTS[@]}"; do
                client="${DRIFT_CLIENTS[$i]}"
                details="${DRIFT_DETAILS[$i]}"

                echo -e "${YELLOW}• $client${NC}"
                IFS=';' read -ra REASONS <<< "$details"
                for reason in "${REASONS[@]}"; do
                    echo " $reason"
                done
                echo ""
            done

            echo -e "${CYAN}Recommended actions:${NC}"
            echo ""
            echo "1. Test updates on canary server first:"
            echo " ${BLUE}./scripts/rebuild-client.sh dev${NC}"
            echo ""
            echo "2. Verify canary health:"
            echo " ${BLUE}./scripts/client-status.sh dev${NC}"
            echo ""
            echo "3. Update outdated clients:"
            for client in "${DRIFT_CLIENTS[@]}"; do
                echo " ${BLUE}./scripts/rebuild-client.sh $client${NC}"
            done
            echo ""
        fi
        ;;

    summary)
        if [ $DRIFT_FOUND -eq 0 ]; then
            echo "Status: OK"
            echo "Drift: No"
            echo "Clients checked: $(echo "$CLIENTS" | wc -l | xargs)"
        else
            echo "Status: DRIFT DETECTED"
            echo "Drift: Yes"
            echo "Clients checked: $(echo "$CLIENTS" | wc -l | xargs)"
            echo "Clients with outdated versions: $OUTDATED_COUNT"
            echo "Clients not updated in $THRESHOLD_DAYS days: $STALE_COUNT"
            echo "Affected clients: ${DRIFT_CLIENTS[*]}"
        fi
        ;;
esac

exit $DRIFT_FOUND

@@ -1,84 +0,0 @@
#!/usr/bin/env bash
#
# Generate SSH key pair for a client
#
# Usage: ./scripts/generate-client-keys.sh <client_name>
#
# This script generates a dedicated ED25519 SSH key pair for a client,
# ensuring proper isolation between client servers.

set -euo pipefail

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Script directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(dirname "$SCRIPT_DIR")"
KEY_DIR="$PROJECT_ROOT/keys/ssh"

# Check arguments
if [ $# -ne 1 ]; then
    echo -e "${RED}Error: Client name required${NC}"
    echo "Usage: $0 <client_name>"
    echo ""
    echo "Example: $0 newclient"
    exit 1
fi

CLIENT_NAME="$1"

# Validate client name (alphanumeric and hyphens only)
if ! [[ "$CLIENT_NAME" =~ ^[a-z0-9-]+$ ]]; then
    echo -e "${RED}Error: Invalid client name${NC}"
    echo "Client name must contain only lowercase letters, numbers, and hyphens"
    exit 1
fi

# Check if key already exists
if [ -f "$KEY_DIR/$CLIENT_NAME" ]; then
    echo -e "${YELLOW}⚠ Warning: SSH key already exists for client: $CLIENT_NAME${NC}"
    echo ""
    echo "Existing key: $KEY_DIR/$CLIENT_NAME"
    echo ""
    read -p "Overwrite existing key? This will break SSH access to the server! [yes/NO] " confirm
    if [ "$confirm" != "yes" ]; then
        echo "Aborted"
        exit 1
    fi
    echo ""
fi

# Create keys directory if it doesn't exist
mkdir -p "$KEY_DIR"

echo -e "${BLUE}Generating SSH key pair for client: $CLIENT_NAME${NC}"
echo ""

# Generate ED25519 key pair
ssh-keygen -t ed25519 \
    -f "$KEY_DIR/$CLIENT_NAME" \
    -C "client-$CLIENT_NAME-deploy-key" \
    -N ""
echo ""
echo -e "${GREEN}✓ SSH key pair generated successfully${NC}"
echo ""
echo "Private key: $KEY_DIR/$CLIENT_NAME"
echo "Public key: $KEY_DIR/$CLIENT_NAME.pub"
echo ""
echo "Key fingerprint:"
ssh-keygen -lf "$KEY_DIR/$CLIENT_NAME.pub"
echo ""
echo -e "${BLUE}Next steps:${NC}"
echo "1. Add client to tofu/terraform.tfvars"
echo "2. Apply OpenTofu: cd tofu && tofu apply"
echo "3. Deploy client: ./scripts/deploy-client.sh $CLIENT_NAME"
echo ""
echo -e "${YELLOW}⚠ IMPORTANT: Backup this key securely!${NC}"
echo " Store in password manager or secure backup location"
echo ""

@@ -1,131 +0,0 @@
#!/usr/bin/env bash
#
# Generate secure random passwords and tokens for a client
# Usage: ./generate-passwords.sh <client-name>
#
# This script generates unique credentials for each client and updates their SOPS-encrypted secrets file.
# All passwords are cryptographically secure (43 characters, base64-encoded random data).

set -euo pipefail

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Get script directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"

# Function to generate a secure random password/token
generate_password() {
    # Generate 32 random bytes and encode as base64, removing padding and special chars
    openssl rand -base64 32 | tr -d '\n=' | head -c 43
}

# Function to generate an API token (with ak_ prefix for Authentik)
generate_api_token() {
    echo "ak_$(openssl rand -base64 32 | tr -d '\n=' | head -c 46)"
}
|
||||
|
||||
# Main script
|
||||
main() {
|
||||
if [ $# -ne 1 ]; then
|
||||
echo -e "${RED}Usage: $0 <client-name>${NC}"
|
||||
echo ""
|
||||
echo "Example: $0 green"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
CLIENT_NAME="$1"
|
||||
SECRETS_FILE="$PROJECT_ROOT/secrets/clients/${CLIENT_NAME}.sops.yaml"
|
||||
|
||||
echo -e "${BLUE}==================================================${NC}"
|
||||
echo -e "${BLUE}Password Generator for Client: ${CLIENT_NAME}${NC}"
|
||||
echo -e "${BLUE}==================================================${NC}"
|
||||
echo ""
|
||||
|
||||
# Check if secrets file exists
|
||||
if [ ! -f "$SECRETS_FILE" ]; then
|
||||
echo -e "${RED}Error: Secrets file not found: $SECRETS_FILE${NC}"
|
||||
echo ""
|
||||
echo "Create the secrets file first with:"
|
||||
echo " ./scripts/deploy-client.sh $CLIENT_NAME"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Check for SOPS_AGE_KEY_FILE
|
||||
if [ -z "${SOPS_AGE_KEY_FILE:-}" ]; then
|
||||
export SOPS_AGE_KEY_FILE="$PROJECT_ROOT/keys/age-key.txt"
|
||||
fi
|
||||
|
||||
if [ ! -f "$SOPS_AGE_KEY_FILE" ]; then
|
||||
echo -e "${RED}Error: SOPS age key not found: $SOPS_AGE_KEY_FILE${NC}"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo -e "${GREEN}Generating unique passwords for ${CLIENT_NAME}...${NC}"
|
||||
echo ""
|
||||
|
||||
# Generate all passwords
|
||||
AUTHENTIK_BOOTSTRAP_PASSWORD=$(generate_password)
|
||||
AUTHENTIK_BOOTSTRAP_TOKEN=$(generate_api_token)
|
||||
AUTHENTIK_SECRET_KEY=$(generate_password)
|
||||
AUTHENTIK_DB_PASSWORD=$(generate_password)
|
||||
NEXTCLOUD_ADMIN_PASSWORD=$(generate_password)
|
||||
NEXTCLOUD_DB_PASSWORD=$(generate_password)
|
||||
|
||||
echo "Generated credentials:"
|
||||
echo " ✓ Authentik bootstrap password (43 chars)"
|
||||
echo " ✓ Authentik bootstrap token (49 chars with ak_ prefix)"
|
||||
echo " ✓ Authentik secret key (43 chars)"
|
||||
echo " ✓ Authentik database password (43 chars)"
|
||||
echo " ✓ Nextcloud admin password (43 chars)"
|
||||
echo " ✓ Nextcloud database password (43 chars)"
|
||||
echo ""
|
||||
|
||||
# Create a temporary decrypted file
|
||||
TEMP_PLAIN=$(mktemp)
|
||||
sops -d "$SECRETS_FILE" > "$TEMP_PLAIN"
|
||||
|
||||
# Update passwords in the decrypted file
|
||||
# Using perl for in-place editing because it handles special characters better
|
||||
perl -pi -e "s|^(authentik_bootstrap_password:).*|\$1 $AUTHENTIK_BOOTSTRAP_PASSWORD|" "$TEMP_PLAIN"
|
||||
perl -pi -e "s|^(authentik_bootstrap_token:).*|\$1 $AUTHENTIK_BOOTSTRAP_TOKEN|" "$TEMP_PLAIN"
|
||||
perl -pi -e "s|^(authentik_secret_key:).*|\$1 $AUTHENTIK_SECRET_KEY|" "$TEMP_PLAIN"
|
||||
perl -pi -e "s|^(authentik_db_password:).*|\$1 $AUTHENTIK_DB_PASSWORD|" "$TEMP_PLAIN"
|
||||
perl -pi -e "s|^(nextcloud_admin_password:).*|\$1 $NEXTCLOUD_ADMIN_PASSWORD|" "$TEMP_PLAIN"
|
||||
perl -pi -e "s|^(nextcloud_db_password:).*|\$1 $NEXTCLOUD_DB_PASSWORD|" "$TEMP_PLAIN"
|
||||
|
||||
# Re-encrypt the file
|
||||
# We need to use a temp file that matches the .sops.yaml creation rules
|
||||
TEMP_SOPS="${SECRETS_FILE%.sops.yaml}-temp.sops.yaml"
|
||||
cp "$TEMP_PLAIN" "$TEMP_SOPS"
|
||||
|
||||
# Encrypt in place
|
||||
sops --encrypt --in-place "$TEMP_SOPS"
|
||||
|
||||
# Replace original file
|
||||
mv "$TEMP_SOPS" "$SECRETS_FILE"
|
||||
|
||||
# Cleanup
|
||||
rm "$TEMP_PLAIN"
|
||||
|
||||
echo -e "${GREEN}✓ Updated $SECRETS_FILE with unique passwords${NC}"
|
||||
echo ""
|
||||
echo -e "${YELLOW}IMPORTANT: Passwords are now stored encrypted in SOPS.${NC}"
|
||||
echo ""
|
||||
echo "To view passwords:"
|
||||
echo -e " ${BLUE}SOPS_AGE_KEY_FILE=\"keys/age-key.txt\" sops -d secrets/clients/${CLIENT_NAME}.sops.yaml${NC}"
|
||||
echo ""
|
||||
echo "Or use the retrieval script:"
|
||||
echo -e " ${BLUE}./scripts/get-passwords.sh ${CLIENT_NAME}${NC}"
|
||||
echo ""
|
||||
echo -e "${GREEN}==================================================${NC}"
|
||||
echo -e "${GREEN}Password generation complete!${NC}"
|
||||
echo -e "${GREEN}==================================================${NC}"
|
||||
}
|
||||
|
||||
main "$@"
|
||||
|
|
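The 43-character figure follows from the encoding: base64 of 32 random bytes is 44 characters including one `=` pad, and stripping the pad leaves 43 characters (roughly 256 bits of entropy). A quick check, reusing the generator exactly as defined above:

```bash
# Reuses generate_password as defined in the script above.
generate_password() { openssl rand -base64 32 | tr -d '\n=' | head -c 43; }
PW=$(generate_password)
echo "length: ${#PW}"   # expect 43
```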
@ -1,98 +0,0 @@
#!/usr/bin/env bash
#
# Retrieve passwords for a client from SOPS-encrypted secrets
# Usage: ./get-passwords.sh <client-name>
#
# This script decrypts and displays passwords in a readable format.

set -euo pipefail

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m' # No Color

# Get script directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"

# Main script
main() {
    if [ $# -ne 1 ]; then
        echo -e "${RED}Usage: $0 <client-name>${NC}"
        echo ""
        echo "Example: $0 green"
        exit 1
    fi

    CLIENT_NAME="$1"
    SECRETS_FILE="$PROJECT_ROOT/secrets/clients/${CLIENT_NAME}.sops.yaml"

    # Check if secrets file exists
    if [ ! -f "$SECRETS_FILE" ]; then
        echo -e "${RED}Error: Secrets file not found: $SECRETS_FILE${NC}"
        exit 1
    fi

    # Check for SOPS_AGE_KEY_FILE
    if [ -z "${SOPS_AGE_KEY_FILE:-}" ]; then
        export SOPS_AGE_KEY_FILE="$PROJECT_ROOT/keys/age-key.txt"
    fi

    if [ ! -f "$SOPS_AGE_KEY_FILE" ]; then
        echo -e "${RED}Error: SOPS age key not found: $SOPS_AGE_KEY_FILE${NC}"
        exit 1
    fi

    # Decrypt and parse secrets
    TEMP_PLAIN=$(mktemp)
    sops -d "$SECRETS_FILE" > "$TEMP_PLAIN"

    # Extract values
    CLIENT_DOMAIN=$(grep "^client_domain:" "$TEMP_PLAIN" | awk '{print $2}')
    AUTHENTIK_DOMAIN=$(grep "^authentik_domain:" "$TEMP_PLAIN" | awk '{print $2}')
    NEXTCLOUD_DOMAIN=$(grep "^nextcloud_domain:" "$TEMP_PLAIN" | awk '{print $2}')
    AUTHENTIK_BOOTSTRAP_PASSWORD=$(grep "^authentik_bootstrap_password:" "$TEMP_PLAIN" | awk '{print $2}')
    AUTHENTIK_BOOTSTRAP_TOKEN=$(grep "^authentik_bootstrap_token:" "$TEMP_PLAIN" | awk '{print $2}')
    NEXTCLOUD_ADMIN_USER=$(grep "^nextcloud_admin_user:" "$TEMP_PLAIN" | awk '{print $2}')
    NEXTCLOUD_ADMIN_PASSWORD=$(grep "^nextcloud_admin_password:" "$TEMP_PLAIN" | awk '{print $2}')

    # Cleanup
    rm "$TEMP_PLAIN"

    # Display formatted output
    echo ""
    echo -e "${CYAN}==============================================================${NC}"
    echo -e "${CYAN}  Credentials for Client: ${GREEN}${CLIENT_NAME}${NC}"
    echo -e "${CYAN}==============================================================${NC}"
    echo ""
    echo -e "${BLUE}Service URLs:${NC}"
    echo -e "  Client Domain: ${GREEN}https://${CLIENT_DOMAIN}${NC}"
    echo -e "  Authentik SSO: ${GREEN}https://${AUTHENTIK_DOMAIN}${NC}"
    echo -e "  Nextcloud:     ${GREEN}https://${NEXTCLOUD_DOMAIN}${NC}"
    echo ""
    echo -e "${YELLOW}─────────────────────────────────────────────────────────────${NC}"
    echo ""
    echo -e "${BLUE}Authentik Admin Access:${NC}"
    echo -e "  URL:       ${GREEN}https://${AUTHENTIK_DOMAIN}${NC}"
    echo -e "  Username:  ${GREEN}akadmin${NC}"
    echo -e "  Password:  ${YELLOW}${AUTHENTIK_BOOTSTRAP_PASSWORD}${NC}"
    echo -e "  API Token: ${YELLOW}${AUTHENTIK_BOOTSTRAP_TOKEN}${NC}"
    echo ""
    echo -e "${YELLOW}─────────────────────────────────────────────────────────────${NC}"
    echo ""
    echo -e "${BLUE}Nextcloud Admin Access:${NC}"
    echo -e "  URL:      ${GREEN}https://${NEXTCLOUD_DOMAIN}${NC}"
    echo -e "  Username: ${GREEN}${NEXTCLOUD_ADMIN_USER}${NC}"
    echo -e "  Password: ${YELLOW}${NEXTCLOUD_ADMIN_PASSWORD}${NC}"
    echo ""
    echo -e "${CYAN}==============================================================${NC}"
    echo ""
    echo -e "${BLUE}💡 Tip: Copy passwords carefully - they are case-sensitive!${NC}"
    echo ""
}

main "$@"
@ -1,116 +0,0 @@
#!/bin/bash
# Health check script for client servers
# Usage: ./health-check.sh <client-name>

set -euo pipefail

CLIENT="${1:-}"

if [ -z "$CLIENT" ]; then
    echo "Usage: $0 <client-name>"
    echo "Example: $0 black"
    exit 1
fi

# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

# Get client IP
cd "$(dirname "$0")/../tofu"
IP=$(tofu output -json client_ips 2>/dev/null | jq -r ".$CLIENT" 2>/dev/null)

if [ -z "$IP" ] || [ "$IP" = "null" ]; then
    echo -e "${RED}✗ ERROR: Client '$CLIENT' not found${NC}"
    exit 1
fi

echo "========================================"
echo "Health Check: $CLIENT ($IP)"
echo "========================================"
echo ""

# Container Status
echo "Container Status:"
echo "----------------"
ssh -i "../keys/ssh/$CLIENT" -o StrictHostKeyChecking=no root@$IP \
    "docker ps --format 'table {{.Names}}\t{{.Status}}' | grep -E 'NAME|traefik|authentik|nextcloud|collabora|diun|redis|db'" 2>/dev/null || {
    echo -e "${RED}✗ Cannot connect to server${NC}"
    exit 1
}
echo ""

# Service URLs
echo "Service Accessibility:"
echo "---------------------"

# Authentik
AUTH_STATUS=$(curl -sI "https://auth.$CLIENT.vrije.cloud" 2>/dev/null | grep HTTP | awk '{print $2}')
if [ "$AUTH_STATUS" = "200" ] || [ "$AUTH_STATUS" = "302" ]; then
    echo -e "Authentik: ${GREEN}✓ OK${NC} (HTTP $AUTH_STATUS)"
else
    echo -e "Authentik: ${RED}✗ FAIL${NC} (HTTP ${AUTH_STATUS:-timeout})"
fi

# Nextcloud
NC_STATUS=$(curl -sI "https://nextcloud.$CLIENT.vrije.cloud" 2>/dev/null | grep HTTP | awk '{print $2}')
if [ "$NC_STATUS" = "200" ] || [ "$NC_STATUS" = "302" ]; then
    echo -e "Nextcloud: ${GREEN}✓ OK${NC} (HTTP $NC_STATUS)"
else
    echo -e "Nextcloud: ${RED}✗ FAIL${NC} (HTTP ${NC_STATUS:-timeout})"
fi

# Collabora
COLLAB_STATUS=$(curl -sI "https://office.$CLIENT.vrije.cloud" 2>/dev/null | grep HTTP | awk '{print $2}')
if [ "$COLLAB_STATUS" = "200" ]; then
    echo -e "Collabora: ${GREEN}✓ OK${NC} (HTTP $COLLAB_STATUS)"
else
    echo -e "Collabora: ${YELLOW}⚠ WARNING${NC} (HTTP ${COLLAB_STATUS:-timeout})"
fi
echo ""

# Disk Usage
echo "Disk Usage:"
echo "-----------"
DISK_USAGE=$(ssh -i "../keys/ssh/$CLIENT" -o StrictHostKeyChecking=no root@$IP \
    "df -h /mnt/nextcloud-data 2>/dev/null | tail -1" || echo "N/A")
echo "$DISK_USAGE"
echo ""

# fail2ban
echo "Security (fail2ban):"
echo "--------------------"
BANNED=$(ssh -i "../keys/ssh/$CLIENT" -o StrictHostKeyChecking=no root@$IP \
    "fail2ban-client status sshd 2>/dev/null | grep 'Currently banned'" || echo "N/A")
echo "$BANNED"
echo ""

# SSL Certificate Expiry
echo "SSL Certificate:"
echo "----------------"
CERT_EXPIRY=$(echo | openssl s_client -connect "auth.$CLIENT.vrije.cloud:443" 2>/dev/null | \
    openssl x509 -noout -enddate 2>/dev/null | cut -d= -f2)
if [ -n "$CERT_EXPIRY" ]; then
    echo -e "Expires: ${GREEN}$CERT_EXPIRY${NC}"
else
    echo -e "${RED}✗ Cannot retrieve certificate${NC}"
fi
echo ""

# Diun Status (if installed)
echo "Monitoring (Diun):"
echo "------------------"
DIUN_STATUS=$(ssh -i "../keys/ssh/$CLIENT" -o StrictHostKeyChecking=no root@$IP \
    "docker ps --filter 'name=diun' --format '{{.Status}}' 2>/dev/null" || echo "Not installed")
if [ "$DIUN_STATUS" = "Not installed" ]; then
    echo -e "${YELLOW}⚠ Diun not installed${NC}"
else
    echo -e "${GREEN}✓ Diun: $DIUN_STATUS${NC}"
fi
echo ""

echo "========================================"
echo -e "${GREEN}Health check complete!${NC}"
echo "========================================"
@ -1,231 +0,0 @@
#!/usr/bin/env bash
#
# List all clients from the registry
#
# Usage: ./scripts/list-clients.sh [--status=<status>] [--role=<role>] [--format=<format>]
#
# Options:
#   --status=<status>   Filter by status (deployed, pending, maintenance, offboarding, destroyed)
#   --role=<role>       Filter by role (canary, production)
#   --format=<format>   Output format: table (default), json, csv, summary
#
# Examples:
#   ./scripts/list-clients.sh                      # List all clients
#   ./scripts/list-clients.sh --status=deployed    # Only deployed clients
#   ./scripts/list-clients.sh --role=production    # Only production clients
#   ./scripts/list-clients.sh --format=json        # JSON output

set -euo pipefail

# Script directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(dirname "$SCRIPT_DIR")"
REGISTRY_FILE="$PROJECT_ROOT/clients/registry.yml"

# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m' # No Color

# Parse arguments
FILTER_STATUS=""
FILTER_ROLE=""
FORMAT="table"

for arg in "$@"; do
    case $arg in
        --status=*)
            FILTER_STATUS="${arg#*=}"
            ;;
        --role=*)
            FILTER_ROLE="${arg#*=}"
            ;;
        --format=*)
            FORMAT="${arg#*=}"
            ;;
        --help|-h)
            echo "Usage: $0 [--status=<status>] [--role=<role>] [--format=<format>]"
            echo ""
            echo "Options:"
            echo "  --status=<status>   Filter by status (deployed, pending, maintenance, offboarding, destroyed)"
            echo "  --role=<role>       Filter by role (canary, production)"
            echo "  --format=<format>   Output format: table (default), json, csv, summary"
            exit 0
            ;;
    esac
done

# Check if registry exists
if [ ! -f "$REGISTRY_FILE" ]; then
    echo -e "${RED}Error: Registry file not found: $REGISTRY_FILE${NC}"
    exit 1
fi

# Check if yq is available (for YAML parsing)
if ! command -v yq &> /dev/null; then
    echo -e "${YELLOW}Warning: 'yq' not found. Install with: brew install yq${NC}"
    echo "Falling back to basic grep parsing..."
    USE_YQ=false
else
    USE_YQ=true
fi

# Function to get clients using yq
list_clients_yq() {
    local clients=$(yq eval '.clients | keys | .[]' "$REGISTRY_FILE")

    for client in $clients; do
        local status=$(yq eval ".clients.\"$client\".status" "$REGISTRY_FILE")
        local role=$(yq eval ".clients.\"$client\".role" "$REGISTRY_FILE")

        # Apply filters
        if [ -n "$FILTER_STATUS" ] && [ "$status" != "$FILTER_STATUS" ]; then
            continue
        fi
        if [ -n "$FILTER_ROLE" ] && [ "$role" != "$FILTER_ROLE" ]; then
            continue
        fi

        # Get other fields
        local deployed_date=$(yq eval ".clients.\"$client\".deployed_date" "$REGISTRY_FILE")
        local server_ip=$(yq eval ".clients.\"$client\".server.ip" "$REGISTRY_FILE")
        local server_type=$(yq eval ".clients.\"$client\".server.type" "$REGISTRY_FILE")
        local apps=$(yq eval ".clients.\"$client\".apps | join(\", \")" "$REGISTRY_FILE")

        echo "$client|$status|$role|$deployed_date|$server_type|$server_ip|$apps"
    done
}

# Function to output in table format
output_table() {
    echo -e "${BLUE}╔════════════════════════════════════════════════════════════════════════════════════╗${NC}"
    echo -e "${BLUE}║ CLIENT REGISTRY ║${NC}"
    echo -e "${BLUE}╠════════════════════════════════════════════════════════════════════════════════════╣${NC}"

    printf "${CYAN}%-15s ${GREEN}%-12s ${YELLOW}%-10s ${NC}%-12s %-10s %-15s %-20s\n" \
        "CLIENT" "STATUS" "ROLE" "DEPLOYED" "TYPE" "IP" "APPS"
    echo -e "${BLUE}────────────────────────────────────────────────────────────────────────────────────${NC}"

    local count=0
    while IFS='|' read -r client status role deployed_date server_type server_ip apps; do
        # Color status
        local status_color=$NC
        case $status in
            deployed) status_color=$GREEN ;;
            pending) status_color=$YELLOW ;;
            maintenance) status_color=$CYAN ;;
            offboarding) status_color=$RED ;;
            destroyed) status_color=$RED ;;
        esac

        # Color role
        local role_color=$NC
        case $role in
            canary) role_color=$YELLOW ;;
            production) role_color=$GREEN ;;
        esac

        printf "%-15s ${status_color}%-12s${NC} ${role_color}%-10s${NC} %-12s %-10s %-15s %-20s\n" \
            "$client" "$status" "$role" "$deployed_date" "$server_type" "$server_ip" "${apps:0:20}"
        # Safe increment: ((count++)) returns non-zero when count is 0, which trips set -e
        count=$((count + 1))
    done

    echo -e "${BLUE}────────────────────────────────────────────────────────────────────────────────────${NC}"
    echo -e "${BLUE}║${NC} Total clients: $count ${BLUE}║${NC}"
    echo -e "${BLUE}╚════════════════════════════════════════════════════════════════════════════════════╝${NC}"
}

# Function to output summary
output_summary() {
    local total=0
    local deployed=0
    local pending=0
    local maintenance=0
    local canary=0
    local production=0

    while IFS='|' read -r client status role deployed_date server_type server_ip apps; do
        total=$((total + 1))
        case $status in
            deployed) deployed=$((deployed + 1)) ;;
            pending) pending=$((pending + 1)) ;;
            maintenance) maintenance=$((maintenance + 1)) ;;
        esac
        case $role in
            canary) canary=$((canary + 1)) ;;
            production) production=$((production + 1)) ;;
        esac
    done

    echo -e "${BLUE}═══════════════════════════════════${NC}"
    echo -e "${BLUE}  CLIENT REGISTRY SUMMARY${NC}"
    echo -e "${BLUE}═══════════════════════════════════${NC}"
    echo ""
    echo -e "Total Clients: ${CYAN}$total${NC}"
    echo ""
    echo -e "By Status:"
    echo -e "  Deployed:    ${GREEN}$deployed${NC}"
    echo -e "  Pending:     ${YELLOW}$pending${NC}"
    echo -e "  Maintenance: ${CYAN}$maintenance${NC}"
    echo ""
    echo -e "By Role:"
    echo -e "  Canary:     ${YELLOW}$canary${NC}"
    echo -e "  Production: ${GREEN}$production${NC}"
    echo ""
}

# Function to output JSON
output_json() {
    if $USE_YQ; then
        yq eval -o=json '.clients' "$REGISTRY_FILE"
    else
        echo "{}"
    fi
}

# Function to output CSV
output_csv() {
    echo "client,status,role,deployed_date,server_type,server_ip,apps"
    while IFS='|' read -r client status role deployed_date server_type server_ip apps; do
        echo "$client,$status,$role,$deployed_date,$server_type,$server_ip,\"$apps\""
    done
}

# Main execution
if $USE_YQ; then
    DATA=$(list_clients_yq)
else
    echo -e "${RED}Error: yq is required for this script${NC}"
    echo "Install with: brew install yq"
    exit 1
fi

# Check if any clients found
if [ -z "$DATA" ]; then
    echo -e "${YELLOW}No clients found matching criteria${NC}"
    exit 0
fi

# Output based on format
case $FORMAT in
    table)
        echo "$DATA" | output_table
        ;;
    json)
        output_json
        ;;
    csv)
        echo "$DATA" | output_csv
        ;;
    summary)
        echo "$DATA" | output_summary
        ;;
    *)
        echo -e "${RED}Unknown format: $FORMAT${NC}"
        echo "Valid formats: table, json, csv, summary"
        exit 1
        ;;
esac
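The pipe-separated intermediate format keeps the output easy to post-process; for example, the CSV mode aligns neatly with standard tools (note that `column` does not honor quoted commas, so the quoted apps field may split):

```bash
# Render the CSV output as an aligned table; column ships with util-linux/BSD.
./scripts/list-clients.sh --format=csv | column -s, -t
```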
@ -1,59 +0,0 @@
#!/usr/bin/env bash
#
# Load secrets from SOPS into environment variables
#
# Usage: source scripts/load-secrets-env.sh
#
# This script loads the Hetzner API token from SOPS-encrypted secrets
# and exports it as:
#   - HCLOUD_TOKEN (for Ansible dynamic inventory)
#   - TF_VAR_hcloud_token (for OpenTofu)
#   - TF_VAR_hetznerdns_token (for OpenTofu DNS provider)

# Determine script directory
if [ -n "${BASH_SOURCE[0]}" ]; then
    SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
else
    SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
fi

PROJECT_ROOT="$(dirname "$SCRIPT_DIR")"

# Set SOPS key file if not already set
if [ -z "${SOPS_AGE_KEY_FILE:-}" ]; then
    export SOPS_AGE_KEY_FILE="$PROJECT_ROOT/keys/age-key.txt"
fi

# Check if SOPS key file exists
if [ ! -f "$SOPS_AGE_KEY_FILE" ]; then
    echo "Error: SOPS Age key not found at: $SOPS_AGE_KEY_FILE" >&2
    return 1 2>/dev/null || exit 1
fi

# Load token from SOPS
SHARED_SECRETS="$PROJECT_ROOT/secrets/shared.sops.yaml"

if [ ! -f "$SHARED_SECRETS" ]; then
    echo "Error: Shared secrets file not found: $SHARED_SECRETS" >&2
    return 1 2>/dev/null || exit 1
fi

# Extract hcloud_token
HCLOUD_TOKEN=$(sops -d "$SHARED_SECRETS" | grep "^hcloud_token:" | awk '{print $2}')

if [ -z "$HCLOUD_TOKEN" ]; then
    echo "Error: Could not extract hcloud_token from secrets" >&2
    return 1 2>/dev/null || exit 1
fi

# Export for Ansible (dynamic inventory)
export HCLOUD_TOKEN

# Export for OpenTofu
export TF_VAR_hcloud_token="$HCLOUD_TOKEN"
export TF_VAR_hetznerdns_token="$HCLOUD_TOKEN"

echo "✓ Loaded Hetzner API token from SOPS"
echo "  • HCLOUD_TOKEN (for Ansible)"
echo "  • TF_VAR_hcloud_token (for OpenTofu)"
echo "  • TF_VAR_hetznerdns_token (for OpenTofu DNS)"
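Because the script only exports variables, it must be sourced rather than executed; running it as a child process would discard the exports. Typical usage:

```bash
# Source, don't execute: the exports must land in the current shell.
source scripts/load-secrets-env.sh
cd tofu && tofu plan
```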
@ -36,100 +36,23 @@ fi
CLIENT_NAME="$1"

# Check if SSH key exists, generate if missing
SSH_KEY_FILE="$PROJECT_ROOT/keys/ssh/${CLIENT_NAME}"
if [ ! -f "$SSH_KEY_FILE" ]; then
    echo -e "${YELLOW}SSH key not found for client: $CLIENT_NAME${NC}"
    echo "Generating SSH key pair automatically..."
    echo ""

    # Generate SSH key
    "$SCRIPT_DIR/generate-client-keys.sh" "$CLIENT_NAME"

    echo ""
    echo -e "${GREEN}✓ SSH key generated${NC}"
    echo ""
fi

# Check if secrets file exists, create from template if missing
# Check if secrets file exists
SECRETS_FILE="$PROJECT_ROOT/secrets/clients/${CLIENT_NAME}.sops.yaml"
TEMPLATE_FILE="$PROJECT_ROOT/secrets/clients/template.sops.yaml"

if [ ! -f "$SECRETS_FILE" ]; then
    echo -e "${YELLOW}Secrets file not found for client: $CLIENT_NAME${NC}"
    echo "Creating from template and opening for editing..."
    echo -e "${RED}Error: Secrets file not found: $SECRETS_FILE${NC}"
    echo ""

    # Check if template exists
    if [ ! -f "$TEMPLATE_FILE" ]; then
        echo -e "${RED}Error: Template file not found: $TEMPLATE_FILE${NC}"
        echo "Create a secrets file first:"
        echo "  cp secrets/clients/test.sops.yaml secrets/clients/${CLIENT_NAME}.sops.yaml"
        echo "  sops secrets/clients/${CLIENT_NAME}.sops.yaml"
        exit 1
    fi

    # Copy template and decrypt to temporary file
    if [ -z "${SOPS_AGE_KEY_FILE:-}" ]; then
        export SOPS_AGE_KEY_FILE="$PROJECT_ROOT/keys/age-key.txt"
    fi

    # Decrypt template to temp file
    TEMP_PLAIN=$(mktemp)
    sops -d "$TEMPLATE_FILE" > "$TEMP_PLAIN"

    # Replace client name placeholders
    sed -i '' "s/test/${CLIENT_NAME}/g" "$TEMP_PLAIN"

    # Create unencrypted file in correct location (matching .sops.yaml regex)
    # This is necessary because SOPS needs the file path to match creation rules
    TEMP_SOPS="${SECRETS_FILE%.sops.yaml}-unenc.sops.yaml"
    cat "$TEMP_PLAIN" > "$TEMP_SOPS"

    # Encrypt in-place (SOPS finds creation rules because path matches regex)
    sops --encrypt --in-place "$TEMP_SOPS"

    # Rename to final name
    mv "$TEMP_SOPS" "$SECRETS_FILE"

    # Cleanup
    rm "$TEMP_PLAIN"

    echo -e "${GREEN}✓ Created secrets file with client-specific domains${NC}"
    echo ""

    # Open in SOPS for editing passwords
    echo -e "${BLUE}Opening secrets file in SOPS for password generation...${NC}"
    echo ""
    echo -e "${YELLOW}IMPORTANT: Regenerate ALL passwords and tokens!${NC}"
    echo ""
    echo "Domains have been automatically configured:"
    echo "  ✓ client_name: $CLIENT_NAME"
    echo "  ✓ client_domain: ${CLIENT_NAME}.vrije.cloud"
    echo "  ✓ authentik_domain: auth.${CLIENT_NAME}.vrije.cloud"
    echo "  ✓ nextcloud_domain: nextcloud.${CLIENT_NAME}.vrije.cloud"
    echo ""
    echo "You MUST regenerate:"
    echo "  - All database passwords"
    echo "  - authentik_secret_key"
    echo "  - authentik_bootstrap_password"
    echo "  - authentik_bootstrap_token"
    echo "  - All other passwords"
    echo ""
    echo "Press Enter to open editor..."
    read -r

    # Open in SOPS
    sops "$SECRETS_FILE"

    echo ""
    echo -e "${GREEN}✓ Secrets file configured${NC}"
    echo ""
fi

# Load Hetzner API token from SOPS if not already set
# Check required environment variables
if [ -z "${HCLOUD_TOKEN:-}" ]; then
    echo -e "${BLUE}Loading Hetzner API token from SOPS...${NC}"
    # shellcheck source=scripts/load-secrets-env.sh
    source "$SCRIPT_DIR/load-secrets-env.sh"
    echo ""
    echo -e "${RED}Error: HCLOUD_TOKEN environment variable not set${NC}"
    echo "Export your Hetzner Cloud API token:"
    echo "  export HCLOUD_TOKEN='your-token-here'"
    exit 1
fi

if [ -z "${SOPS_AGE_KEY_FILE:-}" ]; then
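The copy-then-encrypt-in-place dance above works because `sops --encrypt --in-place` picks its recipients by matching the file path against `creation_rules` in the repository's `.sops.yaml`. That file is not shown in this diff; a rule of roughly this shape is assumed (the age recipient below is a placeholder):

```bash
# Hypothetical .sops.yaml at the repo root; the age public key is a placeholder.
cat > .sops.yaml <<'EOF'
creation_rules:
  - path_regex: secrets/clients/.*\.sops\.yaml$
    age: age1examplepublickeyplaceholder0000000000000000000000000000
EOF
```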
@ -137,18 +60,6 @@ if [ -z "${SOPS_AGE_KEY_FILE:-}" ]; then
    export SOPS_AGE_KEY_FILE="$PROJECT_ROOT/keys/age-key.txt"
fi

# Check if client exists in terraform.tfvars
TFVARS_FILE="$PROJECT_ROOT/tofu/terraform.tfvars"
if ! grep -q "^[[:space:]]*${CLIENT_NAME}[[:space:]]*=" "$TFVARS_FILE"; then
    echo -e "${RED}Error: Client '${CLIENT_NAME}' not found in terraform.tfvars${NC}"
    echo ""
    echo "Cannot rebuild a client that doesn't exist in OpenTofu configuration."
    echo ""
    echo "To deploy a new client, use:"
    echo "  ./scripts/deploy-client.sh $CLIENT_NAME"
    exit 1
fi

# Start timer
START_TIME=$(date +%s)
@ -158,7 +69,7 @@ echo -e "${BLUE}========================================${NC}"
echo ""

# Step 1: Check if infrastructure exists and destroy it
echo -e "${YELLOW}[1/5] Checking existing infrastructure...${NC}"
echo -e "${YELLOW}[1/4] Checking existing infrastructure...${NC}"

cd "$PROJECT_ROOT/tofu"
@ -186,7 +97,7 @@ fi
echo ""

# Step 2: Provision infrastructure
echo -e "${YELLOW}[2/5] Provisioning infrastructure with OpenTofu...${NC}"
echo -e "${YELLOW}[2/4] Provisioning infrastructure with OpenTofu...${NC}"

cd "$PROJECT_ROOT/tofu"
@ -204,7 +115,7 @@ sleep 60
echo ""

# Step 3: Setup base system
echo -e "${YELLOW}[3/5] Setting up base system (Docker, Traefik)...${NC}"
echo -e "${YELLOW}[3/4] Setting up base system (Docker, Traefik)...${NC}"

cd "$PROJECT_ROOT/ansible"
@ -215,7 +126,7 @@ echo -e "${GREEN}✓ Base system configured${NC}"
echo ""

# Step 4: Deploy applications
echo -e "${YELLOW}[4/5] Deploying applications (Authentik, Nextcloud, SSO)...${NC}"
echo -e "${YELLOW}[4/4] Deploying applications (Authentik, Nextcloud, SSO)...${NC}"

~/.local/bin/ansible-playbook -i hcloud.yml playbooks/deploy.yml --limit "$CLIENT_NAME"
@ -223,45 +134,6 @@ echo ""
echo -e "${GREEN}✓ Applications deployed${NC}"
echo ""

# Step 5: Update client registry
echo -e "${YELLOW}[5/5] Updating client registry...${NC}"

cd "$PROJECT_ROOT/tofu"

# Get server information from Terraform state
SERVER_IP=$(tofu output -json client_ips 2>/dev/null | jq -r ".\"$CLIENT_NAME\"" || echo "")
SERVER_ID=$(tofu state show "hcloud_server.client[\"$CLIENT_NAME\"]" 2>/dev/null | grep "^[[:space:]]*id[[:space:]]*=" | awk '{print $3}' | tr -d '"' || echo "")
SERVER_TYPE=$(tofu state show "hcloud_server.client[\"$CLIENT_NAME\"]" 2>/dev/null | grep "^[[:space:]]*server_type[[:space:]]*=" | awk '{print $3}' | tr -d '"' || echo "")
SERVER_LOCATION=$(tofu state show "hcloud_server.client[\"$CLIENT_NAME\"]" 2>/dev/null | grep "^[[:space:]]*location[[:space:]]*=" | awk '{print $3}' | tr -d '"' || echo "")

# Determine role (dev is canary, everything else is production by default)
ROLE="production"
if [ "$CLIENT_NAME" = "dev" ]; then
    ROLE="canary"
fi

# Update registry
"$SCRIPT_DIR/update-registry.sh" "$CLIENT_NAME" deploy \
    --role="$ROLE" \
    --server-ip="$SERVER_IP" \
    --server-id="$SERVER_ID" \
    --server-type="$SERVER_TYPE" \
    --server-location="$SERVER_LOCATION"

echo ""
echo -e "${GREEN}✓ Registry updated${NC}"
echo ""

# Collect deployed versions
echo -e "${YELLOW}Collecting deployed versions...${NC}"

"$SCRIPT_DIR/collect-client-versions.sh" "$CLIENT_NAME" 2>/dev/null || {
    echo -e "${YELLOW}⚠ Could not collect versions automatically${NC}"
    echo "Run manually later: ./scripts/collect-client-versions.sh $CLIENT_NAME"
}

echo ""

# Calculate duration
END_TIME=$(date +%s)
DURATION=$((END_TIME - START_TIME))
@ -1,56 +0,0 @@
#!/usr/bin/env bash
#
# Remove client monitors from Uptime Kuma
#
# Usage: ./scripts/remove-client-from-monitoring.sh <client_name>
#
# This script removes HTTP(S) and SSL monitors for a destroyed client
# Currently uses manual instructions - future: use Uptime Kuma API

set -euo pipefail

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Script directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(dirname "$SCRIPT_DIR")"

# Check arguments
if [ $# -ne 1 ]; then
    echo -e "${RED}Error: Client name required${NC}"
    echo "Usage: $0 <client_name>"
    exit 1
fi

CLIENT_NAME="$1"

echo -e "${BLUE}========================================${NC}"
echo -e "${BLUE}Remove Client from Monitoring${NC}"
echo -e "${BLUE}========================================${NC}"
echo ""
echo -e "${YELLOW}Client: ${CLIENT_NAME}${NC}"
echo ""

# TODO: Implement automated monitor removal via Uptime Kuma API
# For now, provide manual instructions

echo -e "${YELLOW}Manual Removal Required:${NC}"
echo ""
echo "Please remove the following monitors from Uptime Kuma:"
echo "🔗 Access: https://status.vrije.cloud"
echo ""
echo "Monitors to delete:"
echo "  • ${CLIENT_NAME} - Authentik"
echo "  • ${CLIENT_NAME} - Nextcloud"
echo "  • ${CLIENT_NAME} - Authentik SSL"
echo "  • ${CLIENT_NAME} - Nextcloud SSL"
echo ""
echo -e "${BLUE}========================================${NC}"
echo ""
echo -e "${YELLOW}Note: Automated monitor removal via API is planned for future enhancement.${NC}"
echo ""
@ -1,192 +0,0 @@
#!/usr/bin/env bash
#
# Resize a client's Nextcloud data volume
#
# Usage: ./scripts/resize-client-volume.sh <client_name> <new_size_gb>
#
# This script will:
# 1. Resize the Hetzner Volume via API
# 2. Expand the filesystem on the server
# 3. Verify the new size
#
# Note: Volumes can only be increased in size, never decreased

set -euo pipefail

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Script directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(dirname "$SCRIPT_DIR")"

# Check arguments
if [ $# -ne 2 ]; then
    echo -e "${RED}Error: Client name and new size required${NC}"
    echo "Usage: $0 <client_name> <new_size_gb>"
    echo ""
    echo "Example: $0 dev 200"
    echo ""
    echo "Note: You can only INCREASE volume size, never decrease"
    exit 1
fi

CLIENT_NAME="$1"
NEW_SIZE="$2"

# Validate new size is a number
if ! [[ "$NEW_SIZE" =~ ^[0-9]+$ ]]; then
    echo -e "${RED}Error: Size must be a number${NC}"
    exit 1
fi

# Check minimum size
if [ "$NEW_SIZE" -lt 10 ]; then
    echo -e "${RED}Error: Minimum volume size is 10 GB${NC}"
    exit 1
fi

# Check maximum size
if [ "$NEW_SIZE" -gt 10000 ]; then
    echo -e "${RED}Error: Maximum volume size is 10,000 GB (10 TB)${NC}"
    exit 1
fi

# Load Hetzner API token from SOPS if not already set
if [ -z "${HCLOUD_TOKEN:-}" ]; then
    echo -e "${BLUE}Loading Hetzner API token from SOPS...${NC}"
    # shellcheck source=scripts/load-secrets-env.sh
    source "$SCRIPT_DIR/load-secrets-env.sh"
    echo ""
fi

echo -e "${BLUE}========================================${NC}"
echo -e "${BLUE}Resizing Nextcloud Volume${NC}"
echo -e "${BLUE}========================================${NC}"
echo ""
echo "Client: $CLIENT_NAME"
echo "New size: ${NEW_SIZE} GB"
echo ""

# Step 1: Get volume ID from Hetzner API
echo -e "${YELLOW}[1/4] Looking up volume...${NC}"

VOLUME_NAME="nextcloud-data-${CLIENT_NAME}"

# Get volume info
VOLUME_INFO=$(curl -s -H "Authorization: Bearer $HCLOUD_TOKEN" \
    "https://api.hetzner.cloud/v1/volumes?name=$VOLUME_NAME")

VOLUME_ID=$(echo "$VOLUME_INFO" | jq -r '.volumes[0].id // empty')
CURRENT_SIZE=$(echo "$VOLUME_INFO" | jq -r '.volumes[0].size // empty')

if [ -z "$VOLUME_ID" ] || [ "$VOLUME_ID" = "null" ]; then
    echo -e "${RED}Error: Volume '$VOLUME_NAME' not found${NC}"
    echo "Make sure the client exists and has been deployed with volume support"
    exit 1
fi

echo "Volume ID: $VOLUME_ID"
echo "Current size: ${CURRENT_SIZE} GB"
echo ""

# Check if new size is larger
if [ "$NEW_SIZE" -le "$CURRENT_SIZE" ]; then
    echo -e "${RED}Error: New size ($NEW_SIZE GB) must be larger than current size ($CURRENT_SIZE GB)${NC}"
    echo "Volumes can only be increased in size, never decreased"
    exit 1
fi

# Calculate cost increase
COST_INCREASE=$(echo "scale=2; ($NEW_SIZE - $CURRENT_SIZE) * 0.054" | bc)

echo -e "${YELLOW}Warning: This will increase monthly costs by approximately €${COST_INCREASE}${NC}"
echo ""
read -p "Continue with resize? (yes/no): " confirm

if [ "$confirm" != "yes" ]; then
    echo "Resize cancelled"
    exit 0
fi

echo ""

# Step 2: Resize volume via API
echo -e "${YELLOW}[2/4] Resizing volume via Hetzner API...${NC}"

RESIZE_RESULT=$(curl -s -X POST \
    -H "Authorization: Bearer $HCLOUD_TOKEN" \
    -H "Content-Type: application/json" \
    -d "{\"size\": $NEW_SIZE}" \
    "https://api.hetzner.cloud/v1/volumes/$VOLUME_ID/actions/resize")

ACTION_ID=$(echo "$RESIZE_RESULT" | jq -r '.action.id // empty')

if [ -z "$ACTION_ID" ] || [ "$ACTION_ID" = "null" ]; then
    echo -e "${RED}Error: Failed to resize volume${NC}"
    echo "$RESIZE_RESULT" | jq .
    exit 1
fi

# Wait for resize action to complete
echo "Waiting for resize action to complete..."
while true; do
    ACTION_STATUS=$(curl -s -H "Authorization: Bearer $HCLOUD_TOKEN" \
        "https://api.hetzner.cloud/v1/volumes/actions/$ACTION_ID" | jq -r '.action.status')

    if [ "$ACTION_STATUS" = "success" ]; then
        break
    elif [ "$ACTION_STATUS" = "error" ]; then
        echo -e "${RED}Error: Resize action failed${NC}"
        exit 1
    fi

    sleep 2
done

echo -e "${GREEN}✓ Volume resized${NC}"
echo ""

# Step 3: Expand filesystem on the server
echo -e "${YELLOW}[3/4] Expanding filesystem on server...${NC}"

cd "$PROJECT_ROOT/ansible"

# Find the device
DEVICE_CMD="ls -1 /dev/disk/by-id/scsi-0HC_Volume_* 2>/dev/null | grep -i 'nextcloud-data-${CLIENT_NAME}' | head -1"
DEVICE=$(~/.local/bin/ansible -i hcloud.yml "$CLIENT_NAME" -m shell -a "$DEVICE_CMD" -o 2>/dev/null | tail -1 | awk '{print $NF}')

if [ -z "$DEVICE" ]; then
    echo -e "${RED}Error: Could not find volume device on server${NC}"
    exit 1
fi

echo "Device: $DEVICE"

# Resize filesystem
~/.local/bin/ansible -i hcloud.yml "$CLIENT_NAME" -m shell -a "resize2fs $DEVICE" -b

echo -e "${GREEN}✓ Filesystem expanded${NC}"
echo ""

# Step 4: Verify new size
echo -e "${YELLOW}[4/4] Verifying new size...${NC}"

DF_OUTPUT=$(~/.local/bin/ansible -i hcloud.yml "$CLIENT_NAME" -m shell -a "df -h /mnt/nextcloud-data" -o 2>/dev/null | tail -1)

echo "$DF_OUTPUT"
echo ""

echo -e "${GREEN}========================================${NC}"
echo -e "${GREEN}✓ Resize complete!${NC}"
echo -e "${GREEN}========================================${NC}"
echo ""
echo "Volume resized from ${CURRENT_SIZE} GB to ${NEW_SIZE} GB"
echo "Additional monthly cost: €${COST_INCREASE}"
echo ""
echo "The new storage is immediately available to Nextcloud."
echo ""
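To preview a resize before running the script, the same Volumes endpoint can be queried directly (example client `dev`; this mirrors the lookup in step 1 above):

```bash
# Inspect the volume via the same Hetzner Cloud API endpoint the script uses.
curl -s -H "Authorization: Bearer $HCLOUD_TOKEN" \
    "https://api.hetzner.cloud/v1/volumes?name=nextcloud-data-dev" \
    | jq '.volumes[0] | {id, size, location}'
```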
@ -1,151 +0,0 @@
#!/usr/bin/env bash
#
# Run Nextcloud maintenance playbook on all servers
# Created: 2026-01-24
#
# This script runs the nextcloud maintenance playbook on each server
# with its corresponding SSH key.
#
# Usage:
#   cd infrastructure/
#   HCLOUD_TOKEN="..." ./scripts/run-maintenance-all-servers.sh
#
# Or with SOPS_AGE_KEY_FILE if needed:
#   SOPS_AGE_KEY_FILE="keys/age-key.txt" HCLOUD_TOKEN="..." ./scripts/run-maintenance-all-servers.sh

set -euo pipefail

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Configuration
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
ANSIBLE_DIR="$PROJECT_ROOT/ansible"
KEYS_DIR="$PROJECT_ROOT/keys/ssh"
PLAYBOOK="playbooks/260124-nextcloud-maintenance.yml"

# Check required environment variables
if [ -z "${HCLOUD_TOKEN:-}" ]; then
    echo -e "${RED}Error: HCLOUD_TOKEN environment variable is required${NC}"
    exit 1
fi

# Change to ansible directory
cd "$ANSIBLE_DIR"

echo -e "${BLUE}============================================================${NC}"
echo -e "${BLUE}Nextcloud Maintenance - All Servers${NC}"
echo -e "${BLUE}============================================================${NC}"
echo ""
echo "Playbook: $PLAYBOOK"
echo "Ansible directory: $ANSIBLE_DIR"
echo ""

# Get list of all servers with SSH keys
SERVERS=()
for keyfile in "$KEYS_DIR"/*.pub; do
    if [ -f "$keyfile" ]; then
        server=$(basename "$keyfile" .pub)
        # Skip special servers
        if [[ "$server" != "README" ]] && [[ "$server" != "edge" ]]; then
            SERVERS+=("$server")
        fi
    fi
done

echo -e "${BLUE}Found ${#SERVERS[@]} servers:${NC}"
printf '%s\n' "${SERVERS[@]}" | sort
echo ""

# Counters
SUCCESS_COUNT=0
FAILED_COUNT=0
SKIPPED_COUNT=0
declare -a SUCCESS_SERVERS
declare -a FAILED_SERVERS
declare -a SKIPPED_SERVERS

echo -e "${BLUE}============================================================${NC}"
echo -e "${BLUE}Starting maintenance run...${NC}"
echo -e "${BLUE}============================================================${NC}"
echo ""

# Run playbook for each server
for server in "${SERVERS[@]}"; do
    echo -e "${YELLOW}-----------------------------------------------------------${NC}"
    echo -e "${YELLOW}Processing: $server${NC}"
    echo -e "${YELLOW}-----------------------------------------------------------${NC}"

    SSH_KEY="$KEYS_DIR/$server"

    if [ ! -f "$SSH_KEY" ]; then
        echo -e "${RED}✗ SSH key not found: $SSH_KEY${NC}"
        SKIPPED_COUNT=$((SKIPPED_COUNT + 1))
        SKIPPED_SERVERS+=("$server")
        echo ""
        continue
    fi

    # Run the playbook (with SSH options to prevent agent key issues)
    if env HCLOUD_TOKEN="$HCLOUD_TOKEN" \
        ANSIBLE_SSH_ARGS="-o IdentitiesOnly=yes" \
        ~/.local/bin/ansible-playbook \
        -i hcloud.yml \
        "$PLAYBOOK" \
        --limit "$server" \
        --private-key "$SSH_KEY" 2>&1; then

        echo -e "${GREEN}✓ Success: $server${NC}"
        SUCCESS_COUNT=$((SUCCESS_COUNT + 1))
        SUCCESS_SERVERS+=("$server")
    else
        echo -e "${RED}✗ Failed: $server${NC}"
        FAILED_COUNT=$((FAILED_COUNT + 1))
        FAILED_SERVERS+=("$server")
    fi

    echo ""
done

# Summary
echo -e "${BLUE}============================================================${NC}"
echo -e "${BLUE}MAINTENANCE RUN SUMMARY${NC}"
echo -e "${BLUE}============================================================${NC}"
echo ""
echo "Total servers: ${#SERVERS[@]}"
echo -e "${GREEN}Successful: $SUCCESS_COUNT${NC}"
echo -e "${RED}Failed: $FAILED_COUNT${NC}"
echo -e "${YELLOW}Skipped: $SKIPPED_COUNT${NC}"
echo ""

if [ $SUCCESS_COUNT -gt 0 ]; then
    echo -e "${GREEN}Successful servers:${NC}"
    printf '  %s\n' "${SUCCESS_SERVERS[@]}"
    echo ""
fi

if [ $FAILED_COUNT -gt 0 ]; then
    echo -e "${RED}Failed servers:${NC}"
    printf '  %s\n' "${FAILED_SERVERS[@]}"
    echo ""
fi

if [ $SKIPPED_COUNT -gt 0 ]; then
    echo -e "${YELLOW}Skipped servers:${NC}"
    printf '  %s\n' "${SKIPPED_SERVERS[@]}"
    echo ""
fi

echo -e "${BLUE}============================================================${NC}"

# Exit with error if any failures
if [ $FAILED_COUNT -gt 0 ]; then
    exit 1
fi

exit 0
@ -1,183 +0,0 @@
#!/usr/bin/env bash
#
# Update the client registry with deployment information
#
# Usage: ./scripts/update-registry.sh <client_name> <action> [options]
#
# Actions:
#   deploy  - Mark client as deployed (creates/updates entry)
#   destroy - Mark client as destroyed
#   status  - Update status field
#
# Options:
#   --status=<status>        Set status (pending|deployed|maintenance|offboarding|destroyed)
#   --role=<role>            Set role (canary|production)
#   --server-ip=<ip>         Set server IP
#   --server-id=<id>         Set server ID
#   --server-type=<type>     Set server type
#   --server-location=<loc>  Set server location

set -euo pipefail

# Script directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(dirname "$SCRIPT_DIR")"
REGISTRY_FILE="$PROJECT_ROOT/clients/registry.yml"

# Check if yq is available
if ! command -v yq &> /dev/null; then
    echo "Error: 'yq' not found. Install with: brew install yq"
    exit 1
fi

# Parse arguments
if [ $# -lt 2 ]; then
    echo "Usage: $0 <client_name> <action> [options]"
    exit 1
fi

CLIENT_NAME="$1"
ACTION="$2"
shift 2

# Parse options
STATUS=""
ROLE=""
SERVER_IP=""
SERVER_ID=""
SERVER_TYPE=""
SERVER_LOCATION=""

for arg in "$@"; do
    case $arg in
        --status=*)
            STATUS="${arg#*=}"
            ;;
        --role=*)
            ROLE="${arg#*=}"
            ;;
        --server-ip=*)
            SERVER_IP="${arg#*=}"
            ;;
        --server-id=*)
            SERVER_ID="${arg#*=}"
            ;;
        --server-type=*)
            SERVER_TYPE="${arg#*=}"
            ;;
        --server-location=*)
            SERVER_LOCATION="${arg#*=}"
            ;;
    esac
done

# Ensure registry file exists
if [ ! -f "$REGISTRY_FILE" ]; then
    cat > "$REGISTRY_FILE" <<'EOF'
# Client Registry
#
# Single source of truth for all clients in the infrastructure.

clients: {}
EOF
fi

TODAY=$(date +%Y-%m-%d)

case $ACTION in
    deploy)
        # Check if client exists
        if yq eval ".clients.\"$CLIENT_NAME\"" "$REGISTRY_FILE" | grep -q "null"; then
            # Create new client entry
            echo "Creating new registry entry for $CLIENT_NAME"

            # Start with minimal structure
            yq eval -i ".clients.\"$CLIENT_NAME\" = {}" "$REGISTRY_FILE"
            yq eval -i ".clients.\"$CLIENT_NAME\".status = \"deployed\"" "$REGISTRY_FILE"
            yq eval -i ".clients.\"$CLIENT_NAME\".deployed_date = \"$TODAY\"" "$REGISTRY_FILE"
            yq eval -i ".clients.\"$CLIENT_NAME\".destroyed_date = null" "$REGISTRY_FILE"

            # Add role
            if [ -n "$ROLE" ]; then
                yq eval -i ".clients.\"$CLIENT_NAME\".role = \"$ROLE\"" "$REGISTRY_FILE"
            else
                yq eval -i ".clients.\"$CLIENT_NAME\".role = \"production\"" "$REGISTRY_FILE"
            fi

            # Add server info
            yq eval -i ".clients.\"$CLIENT_NAME\".server = {}" "$REGISTRY_FILE"
            [ -n "$SERVER_TYPE" ] && yq eval -i ".clients.\"$CLIENT_NAME\".server.type = \"$SERVER_TYPE\"" "$REGISTRY_FILE"
            [ -n "$SERVER_LOCATION" ] && yq eval -i ".clients.\"$CLIENT_NAME\".server.location = \"$SERVER_LOCATION\"" "$REGISTRY_FILE"
            [ -n "$SERVER_IP" ] && yq eval -i ".clients.\"$CLIENT_NAME\".server.ip = \"$SERVER_IP\"" "$REGISTRY_FILE"
            [ -n "$SERVER_ID" ] && yq eval -i ".clients.\"$CLIENT_NAME\".server.id = \"$SERVER_ID\"" "$REGISTRY_FILE"

            # Add apps
            yq eval -i ".clients.\"$CLIENT_NAME\".apps = [\"authentik\", \"nextcloud\"]" "$REGISTRY_FILE"

            # Add maintenance tracking
            yq eval -i ".clients.\"$CLIENT_NAME\".maintenance = {}" "$REGISTRY_FILE"
            yq eval -i ".clients.\"$CLIENT_NAME\".maintenance.last_full_update = \"$TODAY\"" "$REGISTRY_FILE"
            yq eval -i ".clients.\"$CLIENT_NAME\".maintenance.last_security_patch = \"$TODAY\"" "$REGISTRY_FILE"
            yq eval -i ".clients.\"$CLIENT_NAME\".maintenance.last_os_update = \"$TODAY\"" "$REGISTRY_FILE"
            yq eval -i ".clients.\"$CLIENT_NAME\".maintenance.last_backup_verified = null" "$REGISTRY_FILE"

            # Add URLs (will be determined from secrets file)
            yq eval -i ".clients.\"$CLIENT_NAME\".urls = {}" "$REGISTRY_FILE"
            yq eval -i ".clients.\"$CLIENT_NAME\".urls.authentik = \"https://auth.$CLIENT_NAME.vrije.cloud\"" "$REGISTRY_FILE"
            yq eval -i ".clients.\"$CLIENT_NAME\".urls.nextcloud = \"https://nextcloud.$CLIENT_NAME.vrije.cloud\"" "$REGISTRY_FILE"

            # Add notes
            yq eval -i ".clients.\"$CLIENT_NAME\".notes = \"\"" "$REGISTRY_FILE"
        else
            # Update existing client
            echo "Updating registry entry for $CLIENT_NAME"

            yq eval -i ".clients.\"$CLIENT_NAME\".status = \"deployed\"" "$REGISTRY_FILE"

            # Update server info if provided
            [ -n "$SERVER_IP" ] && yq eval -i ".clients.\"$CLIENT_NAME\".server.ip = \"$SERVER_IP\"" "$REGISTRY_FILE"
            [ -n "$SERVER_ID" ] && yq eval -i ".clients.\"$CLIENT_NAME\".server.id = \"$SERVER_ID\"" "$REGISTRY_FILE"
            [ -n "$SERVER_TYPE" ] && yq eval -i ".clients.\"$CLIENT_NAME\".server.type = \"$SERVER_TYPE\"" "$REGISTRY_FILE"
            [ -n "$SERVER_LOCATION" ] && yq eval -i ".clients.\"$CLIENT_NAME\".server.location = \"$SERVER_LOCATION\"" "$REGISTRY_FILE"

            # Update maintenance date
            yq eval -i ".clients.\"$CLIENT_NAME\".maintenance.last_full_update = \"$TODAY\"" "$REGISTRY_FILE"
        fi
        ;;

    destroy)
        echo "Marking $CLIENT_NAME as destroyed in registry"

        if yq eval ".clients.\"$CLIENT_NAME\"" "$REGISTRY_FILE" | grep -q "null"; then
            echo "Warning: Client $CLIENT_NAME not found in registry"
            exit 0
        fi

        yq eval -i ".clients.\"$CLIENT_NAME\".status = \"destroyed\"" "$REGISTRY_FILE"
        yq eval -i ".clients.\"$CLIENT_NAME\".destroyed_date = \"$TODAY\"" "$REGISTRY_FILE"
        ;;

    status)
        if [ -z "$STATUS" ]; then
            echo "Error: --status=<status> required for status action"
            exit 1
        fi

        echo "Updating status of $CLIENT_NAME to $STATUS"

        if yq eval ".clients.\"$CLIENT_NAME\"" "$REGISTRY_FILE" | grep -q "null"; then
            echo "Error: Client $CLIENT_NAME not found in registry"
            exit 1
        fi

        yq eval -i ".clients.\"$CLIENT_NAME\".status = \"$STATUS\"" "$REGISTRY_FILE"
        ;;

    *)
        echo "Error: Unknown action '$ACTION'"
        echo "Valid actions: deploy, destroy, status"
        exit 1
        ;;
esac

echo "✓ Registry updated successfully"
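For reference, the deploy flow shown earlier invokes this script along these lines (values are examples; the IP is a documentation-range placeholder):

```bash
# Example invocations; 203.0.113.10 is a documentation-range placeholder.
./scripts/update-registry.sh green deploy --role=production --server-ip=203.0.113.10
./scripts/update-registry.sh green status --status=maintenance
```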
@ -104,7 +104,7 @@ openssl rand -base64 32
# Random 24-character password
openssl rand -base64 24

# Random hex string (32-byte, 64 characters)
# Zitadel masterkey (32-byte hex)
openssl rand -hex 32
```
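If the masterkey is to live in a client's SOPS file, the decrypt-substitute-re-encrypt pattern used by the (now removed) `generate-passwords.sh` carries over directly; a sketch, assuming a `zitadel_masterkey` key exists in the client file:

```bash
# Sketch: the zitadel_masterkey key name and target file are assumptions.
export SOPS_AGE_KEY_FILE="keys/age-key.txt"
MASTERKEY=$(openssl rand -hex 32)
TEMP_PLAIN=$(mktemp)
sops -d secrets/clients/dev.sops.yaml > "$TEMP_PLAIN"
perl -pi -e "s|^(zitadel_masterkey:).*|\$1 $MASTERKEY|" "$TEMP_PLAIN"
TEMP_SOPS="secrets/clients/dev-temp.sops.yaml"   # path must match .sops.yaml creation rules
cp "$TEMP_PLAIN" "$TEMP_SOPS"
sops --encrypt --in-place "$TEMP_SOPS"
mv "$TEMP_SOPS" secrets/clients/dev.sops.yaml
rm "$TEMP_PLAIN"
```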
@ -121,7 +121,7 @@ Ansible automatically decrypts SOPS files using the `community.sops` collection.
- name: Use decrypted secret
  debug:
    msg: "DB Password: {{ client_secrets.authentik_db_password }}"
    msg: "DB Password: {{ client_secrets.zitadel_db_password }}"
```

**Environment variable required:**
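In practice the only prerequisite is that the age key is reachable when the playbook runs; the repository's scripts do this along the lines of:

```bash
# community.sops decrypts at runtime; only the age key must be present.
# Run from the ansible/ directory, as the other scripts in this repo do.
export SOPS_AGE_KEY_FILE="../keys/age-key.txt"
ansible-playbook -i hcloud.yml playbooks/deploy.yml --limit dev
```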
@ -141,9 +141,9 @@ Contains secrets shared across all infrastructure:
### clients/*.sops.yaml

Per-client secrets:
- Database passwords (Authentik, Nextcloud)
- Database passwords (Zitadel, Nextcloud)
- Admin passwords
- Secret keys and tokens
- Zitadel masterkey
- Restic repository password
- OIDC credentials (after generation)
@ -1,122 +0,0 @@
|
|||
# Client Secrets Directory
|
||||
|
||||
This directory contains SOPS-encrypted secrets files for each deployed client.
|
||||
|
||||
## Files
|
||||
|
||||
### Active Clients
|
||||
|
||||
- **`dev.sops.yaml`** - Development/canary server secrets
|
||||
- Status: Deployed
|
||||
- Purpose: Testing and canary deployments
|
||||
|
||||
### Templates
|
||||
|
||||
- **`template.sops.yaml`** - Template for creating new client secrets
|
||||
- Status: Reference only (not deployed)
|
||||
- Purpose: Copy this file when onboarding new clients
|
||||
|
||||
## Creating Secrets for a New Client
|
||||
|
||||
```bash
|
||||
# 1. Copy the template
|
||||
cp secrets/clients/template.sops.yaml secrets/clients/newclient.sops.yaml
|
||||
|
||||
# 2. Edit with SOPS
|
||||
export SOPS_AGE_KEY_FILE="./keys/age-key.txt"
|
||||
sops secrets/clients/newclient.sops.yaml
|
||||
|
||||
# 3. Update all fields:
|
||||
# - client_name: newclient
|
||||
# - client_domain: newclient.vrije.cloud
|
||||
# - authentik_domain: auth.newclient.vrije.cloud
|
||||
# - nextcloud_domain: nextcloud.newclient.vrije.cloud
|
||||
# - REGENERATE all passwords and tokens (never reuse!)
|
||||
|
||||
# 4. Deploy the client
|
||||
./scripts/deploy-client.sh newclient
|
||||
```
|
||||
|
||||
## Important Security Notes
|
||||
|
||||
⚠️ **Never commit plaintext secrets!**
|
||||
|
||||
- Only `*.sops.yaml` files should be committed
|
||||
- Temporary files (`*-temp.yaml`, `*.tmp`) are gitignored
|
||||
- Always verify secrets are encrypted: `file secrets/clients/*.sops.yaml`
|
||||
|
||||
⚠️ **Always regenerate secrets for new clients!**
|
||||
|
||||
- Never copy passwords between clients
|
||||
- Use strong random passwords (32+ characters)
|
||||
- Each client must have unique credentials
|
||||
|
||||
## File Naming Convention
|
||||
|
||||
- **Production clients**: `clientname.sops.yaml`
|
||||
- **Development/test**: `dev.sops.yaml`
|
||||
- **Templates**: `template.sops.yaml`
|
||||
- **Never commit**: `*-temp.yaml`, `*.tmp`, `*_plaintext.yaml`
|
||||
|
||||
## Viewing Secrets
|
||||
|
||||
```bash
|
||||
# View encrypted file (shows SOPS metadata)
|
||||
cat secrets/clients/dev.sops.yaml
|
||||
|
||||
# Decrypt and view (requires age key)
|
||||
export SOPS_AGE_KEY_FILE="./keys/age-key.txt"
|
||||
sops -d secrets/clients/dev.sops.yaml
|
||||
```
|
||||
|
||||
## Required Secrets per Client
|
||||
|
||||
Each client secrets file must contain:
|
||||
|
||||
### Authentik (Identity Provider)
|
||||
- `authentik_db_password` - PostgreSQL database password
|
||||
- `authentik_secret_key` - Django secret key
|
||||
- `authentik_bootstrap_password` - Initial admin (akadmin) password
|
||||
- `authentik_bootstrap_token` - API token for automation
|
||||
- `authentik_bootstrap_email` - Admin email address
|
||||
|
||||
### Nextcloud (File Storage)
|
||||
- `nextcloud_admin_user` - Admin username (usually "admin")
|
||||
- `nextcloud_admin_password` - Admin password
|
||||
- `nextcloud_db_password` - MariaDB database password
|
||||
- `nextcloud_db_root_password` - MariaDB root password
|
||||
- `redis_password` - Redis cache password
|
||||
|
||||
### Optional
|
||||
- `collabora_admin_password` - Collabora Online admin password (if using)

## Troubleshooting

### "No such file or directory: age-key.txt"

```bash
# Ensure SOPS_AGE_KEY_FILE is set correctly
export SOPS_AGE_KEY_FILE="./keys/age-key.txt"
# Or use an absolute path
export SOPS_AGE_KEY_FILE="/full/path/to/infrastructure/keys/age-key.txt"
```

### "Failed to decrypt"

- Verify you have the correct age private key
- Check that `.sops.yaml` references the correct age public key
- Ensure the file was encrypted with the same age key (see the check below)
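The key pairing can be verified directly, assuming the `age-keygen` CLI is installed alongside age:

```bash
# Derive the public key from the private key file...
age-keygen -y keys/age-key.txt
# ...and confirm it appears as a recipient in the SOPS config
grep -F "$(age-keygen -y keys/age-key.txt)" .sops.yaml
```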

### "File contains plaintext secrets"

```bash
# Check whether the file is properly encrypted: every value should be an
# ENC[...] blob. grep -L lists files that contain no encrypted values.
# (Note: `file` cannot detect SOPS encryption; it only reports "ASCII text".)
grep -L 'ENC\[AES256_GCM' secrets/clients/*.sops.yaml

# Re-encrypt in place if needed
sops -e -i secrets/clients/dev.sops.yaml
```

## See Also

- [../README.md](../../secrets/README.md) - Secrets management overview
- [../../docs/architecture-decisions.md](../../docs/architecture-decisions.md) - SOPS decision rationale
- [SOPS Documentation](https://github.com/getsops/sops)
38
secrets/clients/dev.sops.yaml
Normal file
@@ -0,0 +1,38 @@
#ENC[AES256_GCM,data:Z5yDXg28JTSIUtpFsI6k71ToslPeU4TM,iv:CzLHfKk2rwbuTK73ucm8vg19SEbYkHGsxao8Fxj0smk=,tag:JNSvnD7tmngOTiccRlTrHA==,type:comment]
#ENC[AES256_GCM,data:SkLXnxlTpEUo4RUP6EU5h2hMUjHYpOkl8Ndjv+jyncXVgMXxfYw=,iv:7aoaONvTIOE4Pu+MulBR7mhJnIjVRNrlMV+d8G+sGG0=,tag:hShCDFAKrW6cWnJd2vL+Og==,type:comment]
#ENC[AES256_GCM,data:Rv664eaZjj1MfU6HcZWilrz5577Agg==,iv:EMZwUCMQXrdewyLY5aZPcshMGkx6+k/jBalJ1ByAj/A=,tag:ODdGpf0id/w8aDYNrdWEFg==,type:comment]
client_name: ENC[AES256_GCM,data:sLox,iv:iC2so9WyM58BYmMrmfcWXodj4a5wSvzyWsCVe5WbnX4=,tag:AfwOoFQpjpHqbXWxXO8Eeg==,type:str]
client_domain: ENC[AES256_GCM,data:7F76Vt9k0TIQGiuoPW3O,iv:OpYEYhEKCGkRMUgFhGi+Y/uM9P6XLFv+WMmYHLKeQ0U=,tag:HbpOb/J8hpSdNDVv9A07TA==,type:str]
#ENC[AES256_GCM,data:XBQwOaBVIkcfKXOYKA/CYe3XWDG+Ojre,iv:sMzd/BIOtDuQo+RsoO393DmPlZhY/X/jxSdI3j+T2aQ=,tag:iPf1+jfQE9n0KkZjHWvXUA==,type:comment]
authentik_domain: ENC[AES256_GCM,data:d5ZVFyfPSJj2DcFQwEB00uh4flo=,iv:dMbMQTo3Vx35FE1471TPGP5iYvYDdWO43Ic7Z6GAEB8=,tag:W4CSgVpxt/eclakj5qtu3g==,type:str]
authentik_db_password: ENC[AES256_GCM,data:kQ629SlJW4WgWu5nUOxBs5p48EJb478Q0qrbZfvgbBQTrfPQnaneFJQyrA==,iv:9puxfMZM2t+qkZjjlmaUCsvlqA9oXzxLLJ9oZ+HkSec=,tag:UDsVLqWDaAjR+sQS6/OBow==,type:str]
authentik_secret_key: ENC[AES256_GCM,data:h+R7rHTRikUooMeQ0z0La3qZ7bknHTerHVJBTs9mFhoOQC8uO2DBaG3FGsZRTqWy5sBidegjp4r+6oa+aubF7r0Gkg==,iv:UNpawp0bf4koib7DwgFxdRpOFV28Ktwjdh2Pa0h/Qmo=,tag:cqvWwYzdSzaGBaMOEXTszQ==,type:str]
#ENC[AES256_GCM,data:YJzCkx97cHc9lczEzpVaVytMEK2cahn9PJ4luS4mzBAhQnmLkWKRoUg8wfjCyIc=,iv:tWj6FMYXd88CUohJ8GdZI17JVFuEk+07yBHm4kAk2yI=,tag:WwV1C7GmvevyNiSzco82Eg==,type:comment]
authentik_bootstrap_password: ENC[AES256_GCM,data:Y1yMVyRi8Ce+TVZwj4RU6NHN4SvSD3GYfk7Fi3IsQmdCAKgBEDZYI8Mw/A==,iv:npBA1hpbe7ttD7lIDTD2ZxpRsFzohGCiLISNKeNsY18=,tag:jDCVHp8ATO0TyOSn8J0frg==,type:str]
authentik_bootstrap_token: ENC[AES256_GCM,data:IGEwNd4ZDoyLILJ8NEw2Qp6CyfCXrmvHlnjygUl6qIj6vKoHys9zkk2ZiFYAolYcZXcHq3569q9yXQMvYelb,iv:h9p3JNDZgr4gz2PHHnesrVPtwTVbSn48YW5u4iy163E=,tag:Lu6z+K+LhVt9LFyOgmmUWA==,type:str]
authentik_bootstrap_email: ENC[AES256_GCM,data:P7Bb+RruJlV9OKW8U5yXZGRMKTjJ,iv:paFh41RaJO1Nu0ejrxgYXpKlZMdDLCVt810hiSgHxUg=,tag:8zw/0N+pJdeLVm5flY6O4Q==,type:str]
#ENC[AES256_GCM,data:/5TakPAsaXrgkk0qvexe1kkG6ltsWQOQ,iv:TWhQrknF38g3hVTwJ7RIuSbHJ8Np07BhhN0MtfSyQLY=,tag:gdwRtBhXDsHeDkt8AWYk7w==,type:comment]
nextcloud_domain: ENC[AES256_GCM,data:XCnxio1Yk5xqhF1GpQmZ4BhvVNnweZWBDg==,iv:bBGbn9AmrgmeGJRToXb/ujl3eInltaFV/7lmazFRM7U=,tag:B/WbLePFK+uRMwIIOPItaA==,type:str]
nextcloud_admin_user: ENC[AES256_GCM,data:Xvw+QHU=,iv:IFGiGOv+ZI7R308nNrQ4SJPZtVP0dU5IwH7lFpOhBu4=,tag:Sb5SvUBYNg6Oj0PKl9+2Ig==,type:str]
nextcloud_admin_password: ENC[AES256_GCM,data:uTLqkEPoq17bTkBxGpMak7zkqc6h2fhx7VJIEzZ9RGU13vRbgcIoO4d7jQ==,iv:Hub/66fCYFdK7j4Yc+5IBFbAM4WafgUzFpnnWbDbQVg=,tag:Awq398GYFRwreYGmqLP+cw==,type:str]
nextcloud_db_password: ENC[AES256_GCM,data:1gT5rj8buyyvyCfv79BWuZPmAEH++4jIMBbVsdkqWMq3YiQSFAtQDpCEVw==,iv:qHvP/Tf1d+zHMHMnCQ5FK9tU+bQtFJbDxCtB5JAlZhg=,tag:qrvMW5rnlCt+dFCNvAso3A==,type:str]
nextcloud_db_root_password: ENC[AES256_GCM,data:IvfUibOFhW5agn7rxRtM4W6SN4WbwOmc/UzDC+u8NBBK9ZV5/yAQbd+3oQ==,iv:yEW/41M+YJnEyCne3DzIZ4+h+p0xzO3b8ZC6ai5MquE=,tag:Gu+xADBxeQnLPwAiQ6BFsA==,type:str]
#ENC[AES256_GCM,data:mC4JlJLFFT6OuCHt8DH/uKuXtX2x2zHu2y0+MKQ=,iv:yOrqx+5ZR95b7Bn8BeKexwsT/crpX7kOMom0bdGBTCY=,tag:ErSgI32zshgW3MPT4MZLlA==,type:comment]
redis_password: ENC[AES256_GCM,data:VBAJRe3cO5rt9TJ1N+YUXg6pDL27UrTtJ6rXQtzBxWToF1E1/4DWxr90xw==,iv:nowHNAqbD1qlTZYaGxD0KCFS4PfBpP9e5XQbiBRRGzU=,tag:4vVp1CLOpaXw3i21BrQiaw==,type:str]
#ENC[AES256_GCM,data:ZsI7f5v762m7M3g9AZQILU8EFokmKGAKFvPPyJj1uLu+aYJw,iv:HVvYS0XgTUUHNUVuYRXTzeXJYBHhi0XXCMy1zRlVfAw=,tag:huhPxsQl9W23KhM4RZs22A==,type:comment]
collabora_admin_password: ENC[AES256_GCM,data:74+2efnEZFRStWaE7Moxu2m89H1EMhNhsvBw4eJu50HY+8ltmSqagYLrsA==,iv:IWDpO6MfTwH4HJrIWti+CVRtGfe5q8bRkemB46jLYPM=,tag:DKEiE84OZ6RVClz4L3oITw==,type:str]
sops:
    age:
        - recipient: age170jqy5pg6z62kevadqyxxekw8ryf3e394zaquw0nhs9ae3v9wd6qq2hxnk
          enc: |
            -----BEGIN AGE ENCRYPTED FILE-----
            YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBWUXloeEZMcEt4M3kxL3U5
            MXpiU1c4Vy9uTkVDL0R3Rng5N25DZFhPTUhjCllyeU0rbEp0SVFTLzFNUVJscHhv
            L1htaUt3S2pJN3NZQ0UwTXpReG9NcnMKLS0tIGpQbnU4SnRyb3RzeCswL2t1d1Vt
            aTR0SGowcmdBdE9GV0pDV2hUajR2QzAKZupaPPPAgagGrj88sVZF9/SbmLpZIBJC
            EyKmyzi4HR2cb541LVTFY2FCBX3oy6xWbt6omCqnmnymAqD1s8IaTw==
            -----END AGE ENCRYPTED FILE-----
    lastmodified: "2026-01-14T13:32:54Z"
    mac: ENC[AES256_GCM,data:q0NindbnNfVCnzr7fgvWUPZlk5Dw7rIMhDqCCaOSdYJaJ+gLTbmO1eaG2rA/Q2u7ATYge4AV7rxuAAMk5kws7btzLLJjnZ1pVpmoOGuKV8Py1+6d3Ah7Lzvn4Rgdi3b4VHL5N2e967yodqFRz7WPGoqeHGnjlijYh3/gOYOfmNQ=,iv:UCi3Ar6Vq79RFcY36giDX79fQnq0wPnT1hoBB/JyVhI=,tag:MlqjepPQDl4i1ddYG9o7oA==,type:str]
    unencrypted_suffix: _unencrypted
    version: 3.11.0
0
secrets/clients/test-temp.sops.yaml
Normal file

38
secrets/clients/test.sops.yaml
Normal file

@@ -0,0 +1,38 @@
#ENC[AES256_GCM,data:eZqiMbgZ970iP9xR1lP1Mf4//4y3l76kTg==,iv:cYffSE0jP5zrezKl/UBoNFc2gxb6El1hhripoXC6Uck=,tag:bnZZjLPH2zyObXU0QT9i+Q==,type:comment]
#ENC[AES256_GCM,data:3lAY7IxFpSbgBS9Jfte4tqBi6/jv1d4rqpXvFIzwaBi8kbIRZWc=,iv:Hx+Jd4xVRwzU7yjm962I5xU2NFX5njx43u8ibBKe/fk=,tag:EEDSENvFr/PhRu0PIY0K2g==,type:comment]
#ENC[AES256_GCM,data:QWGb4941FGgKU/iMUHEyK+eJoIxrig==,iv:GhFhT6jSQZ076/5yfDzEvsxoxCx9O6ueTbRePGxEdD8=,tag:w/psPqZ98Dn9BZFjL4X8pw==,type:comment]
client_name: ENC[AES256_GCM,data:RgV0RQ==,iv:uCKSI8QpjTlkTg6/wpbTcnjFxB77pjSaCnCeG0tZ4g0=,tag:vWI6wakgwwCAv6HW82q8oA==,type:str]
client_domain: ENC[AES256_GCM,data:66fMimASNHXHjY62altJkg==,iv:q4umVB66CiqGwAp7IHcVd6txXE9Wv/Ge0AhUfb4Wyrc=,tag:3IsOGtI91VzlnHFqAzmzkg==,type:str]
#ENC[AES256_GCM,data:2JdPa35b7MsjQ8OR3zxQF5ssn+js8AQo,iv:kDwIUJ/35Y7MJVts0DH1x3kuKWSxawrfBStDA+BbRO0=,tag:rNgsObk+N1gss5C+IzMi5A==,type:comment]
authentik_domain: ENC[AES256_GCM,data:Mw6zdhoC5ENTsYWGx4VqgUtTNPwM,iv:xOVUdfvqpj0feDHA8s6aSTqgCWEJJhlgVKF34GW2Hm0=,tag:eZyTNJEWkSPiVexXW8zy9A==,type:str]
authentik_db_password: ENC[AES256_GCM,data:HsyTlbM8pewD6ZUndnPQzBzlNECdlOqEWt6AgIMURU4U85NmhoRaAIwcVw==,iv:x2hHZVGnbCDggRRyW7BFfhmUT8WpAwua0tonwF2UDSI=,tag:Bbboc0vKGcrIvjIAsC2eVA==,type:str]
authentik_secret_key: ENC[AES256_GCM,data:cl1U+PGeaQNu2OW3t4QzfWIyMtvkQdYk8Adb7EmLrSHceeHxfXgKwgxvp2Fn7C8RDpuCsztkxEz1D2vePO2xSpIo3Q==,iv:trlB7PJd4os21wOK+CyfymE+oopdksydS+z3VHBT1wU=,tag:BwQ2FygYOaX22YKOTgY0mw==,type:str]
#ENC[AES256_GCM,data:3AF1/xf9DULcTEhTfxSr9ls8U0cr0ToG88783V10OAmsOclhq5h3ncFoLM3GZXY=,iv:Ji7447QFwRn0MKoXakAoe7ZDeJrT0fYAVHwYBWr/hjQ=,tag:+CQyj9pZxzKualOV/hlrkg==,type:comment]
authentik_bootstrap_password: ENC[AES256_GCM,data:K0nR2CCA+mZLwt1eKY3NU0iB3aXRbze+aX089cmAfTXunBsRZgXWirC3Pg==,iv:Ki4G/iMoL8rqIR/E5YWWNa60TEFEJlpmjfSO17ccjms=,tag:c91a6Dlu2cDeAbtH0VMynw==,type:str]
authentik_bootstrap_token: ENC[AES256_GCM,data:wzToXlHEEo4hqbTpYaj8VcjIzl9JIBYelb6csfSXB3gsecyOOriUsvpBua2By0l6c2DMpUVipRR1fEo6CZLc,iv:3U7eseITVM6LTzlc7tEPV44qYTdiLbKpOcDR+S0y9ME=,tag:UFxakIe4ZhgJy8K8caF16A==,type:str]
authentik_bootstrap_email: ENC[AES256_GCM,data:3H2b7nl+i5AnXVSWCWkpzfCe7lk8ow==,iv:KlpRA6aP1/sSG5PSs8Q3aRshn1ZgHQwW4AtTYwCgd+0=,tag:SpD7K4Xme/QUTxLEL7Xi3A==,type:str]
#ENC[AES256_GCM,data:ZXsSQkRtXNF5DMUPAAaLBWkAgh/hJMUX,iv:+r+WtRYebnFEkw3qmIkXRPUUYSep53qzgy2FvpGhSfw=,tag:S+w04XduCSLRntLJiEDFUQ==,type:comment]
nextcloud_domain: ENC[AES256_GCM,data:i0hWB89Lxjn+s9NOrFsYZr/zsQ2/BzZKIk0=,iv:AU1LLm04+4Ekjm9Q3Gqe3MpqdIdGAGK7EaClJMO2bz0=,tag:8AEN6jdruVUzFEZe0sVBrg==,type:str]
nextcloud_admin_user: ENC[AES256_GCM,data:EkGgPFQ=,iv:69EdTYC3xMzp5g9RQ+C5hjBw+gLBghaKQArOc+77nR4=,tag:17oRhQUMD1yHj06gS3ODAA==,type:str]
nextcloud_admin_password: ENC[AES256_GCM,data:aRbg8hmK5QMOS0xqEkgq2j96ajhtG+gYnriHrT5lrZynbpNt0tXGh2SIuQ==,iv:WWnoi9si/o/9Qsj68sR3XFKba2UUWiVrjx1XLsvuhcI=,tag:AUr9WFNGyedvc1woGMFeMw==,type:str]
nextcloud_db_password: ENC[AES256_GCM,data:xygLEUi1doSFzG8JANguzGxyP8vXm9GDhDqmRAAsj2VfIEbzANsa5iWbtQ==,iv:UgKufxyqi2LwJ8/QIT4mssHxSGvixW7dWXRTURaoI0k=,tag:yr8ZiR3DphX+mzJ63qRbRw==,type:str]
nextcloud_db_root_password: ENC[AES256_GCM,data:IuKUtIDDJOmFHbG6dZFOC+WDrEg2vBTemWVjbapwRmYRIwQg47+38dOQjg==,iv:CISRoJZtV4JI0AB5erHNZLPRE+oeo4jxd446GUfSkWo=,tag:juEZ+gV82kfgrny2lC6Qow==,type:str]
#ENC[AES256_GCM,data:fh5zP6W0szyikkvHfNIs98J2Vl9C8xhHnWrmFZM=,iv:Di1DjQ8Nxrb1KnvtRKJIOMfO1CmbNpweVj7Ijsx79dA=,tag:YL/eJn+uG5qLP4TW4KyPdg==,type:comment]
redis_password: ENC[AES256_GCM,data:EgNqS7asbH0PHlad43D3kgEJqb5qpZVHI1XuWdu8uqm0H6pJu6M435s3Pg==,iv:dsiEU9Ik12CFT+6PATLA40MMgN/kgoHfOc7Lfkih/Ug=,tag:2fSPKLZgd8Ebc/j3xeb2bA==,type:str]
#ENC[AES256_GCM,data:OxFZyktOkNHq32ixDlpaHRmlu10we9rHb+YKOG4BNig6cdzh,iv:tyh/ozm0ooidGCSEKzZ0jqX0x7Z3v+/rtV4q5+vYpjQ=,tag:zQ0KKB5U9+4T8dKhBD7ZdQ==,type:comment]
collabora_admin_password: ENC[AES256_GCM,data:jxrOdFLAeIRp7lVBz4WiqYFNdCn+FqHJsPSfRyD3uqQWUwWhXuG2LlQmOw==,iv:j8KWGx4392q6IllfTMjL9JitkHL9XVuShdOM+6ZtP/4=,tag:D3nqs03YwmjmT4A3W1uumA==,type:str]
sops:
    age:
        - recipient: age170jqy5pg6z62kevadqyxxekw8ryf3e394zaquw0nhs9ae3v9wd6qq2hxnk
          enc: |
            -----BEGIN AGE ENCRYPTED FILE-----
            YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBNVzNUaC94SnBRU2lNQjdu
            Q05BMzF6VWlBckd1VjlXOVNSMTdFR2Z3ZEhvCmdsU2tJOTNCMkhjNlVJK3FOeUFl
            VnhxT1ZObkZMdXNoSkE1UWVXUVY4d0EKLS0tIDllbVJCMGZDaXJWb2oxbHJ6Y05F
            NnN0SE4rZ0lFWUlaNjBIc293UzlxakkKYOxxyTtwEEo3j6iMGeHyArYSquT+2ieB
            cPA1QayU4OBucKo34WuZTh41TxIg2hr1GG3Ews5QDEiTJlAQuAzldw==
            -----END AGE ENCRYPTED FILE-----
    lastmodified: "2026-01-09T07:31:15Z"
    mac: ENC[AES256_GCM,data:MSnPPzLLCZIIK/RmhlpMaNGEeZCHVzY2PK4A4PhC4nXuw9AwGjYDrHn3FQ9aJywi7NlXxLqFWo9nSnFswNlIUpea/3MTsa5LNimX6a22c9YRut+yImwrBU3abcgzxVJsHk7DUGIA1TY/AElC5ZLNROrw/X+sVf5L2pq7P2/oous=,iv:cOxocMqLgzzzT89RdfJdfvOfZ3Ph4tWbE6bV21WZgZI=,tag:zrthLaXOrdx3IU4I5G+zBQ==,type:str]
    unencrypted_suffix: _unencrypted
    version: 3.11.0
@@ -1,38 +1,27 @@
#ENC[AES256_GCM,data:04ALB+Pn3US4027Oh487+ZrAQfoLWwVh7J04BH5bLVk1odSadMrG+sg=,iv:kXjuTiE22hgz7FYcBkDw8ANCbmKBPgNvbLdx85p4kn0=,tag:Y987uVwfd4z2WG3+EsbRTQ==,type:comment]
#ENC[AES256_GCM,data:JR39Jk+yo+4xal384o+cWnxxsgZJhrf6JNCtHvl0iyx8Ukb3R4c=,iv:hY5s0YYbw0aO0958NuHeT0rPicvNoMatdzNfBwMl6/k=,tag:dGP2YmLcU9inq4LR1gFQIQ==,type:comment]
#ENC[AES256_GCM,data:MwHqTG0q+8qzCl1fbug581WIH/fAdgCuVvjew9ckw9kaXMFdjLwacKMsAAkttGrsHs60jNyOi9YP//p7dqCpDxNT+7be,iv:EAEf4ULn4YdGQSgml3S9SzWtYNPWaAp8xJm/sTrFNRc=,tag:dgEqhUTpI4KSf+jpA0tR6Q==,type:comment]
hcloud_token: ENC[AES256_GCM,data:cEBVeTBeZaoJ9zNUGeGiqG6svMY9We5Bo+xUxqyP553F/3SGhVNnIoPTQNnhoi33av9SL0TK9XUPshFnB/wAJQ==,iv:e5kv7GQZQqtResfLyWX35T4Bdh9IwoErIOAcUmtr4UM=,tag:uqYqBQYlzOE/2uN3s9lRYg==,type:str]
#ENC[AES256_GCM,data:ypfNO7kd/SLrsz2jRIykmbGWJB5Vp8SNUwh6UEE8qhttVf6bECk6a/wBVj4XzA==,iv:379fklcd9rJaq3zvJcAb/IfsFYPxPKX+ZcMmyFQ+ak0=,tag:E6IlkeneMLx2SHibs4pVrw==,type:comment]
storage_box_host: ENC[AES256_GCM,data:AZAlPKfIAjvqc6YkpIKHutny6ubJLgcpASY=,iv:MYsq9lmFVc4Yq5Dj6dF1sWrHUpcAHmXgsGozT2M2URI=,tag:R1UYUQVZH5UVqrWtBjsBpw==,type:str]
storage_box_user: ENC[AES256_GCM,data:mb3F22aIcQ==,iv:BeOYQV4ZharXF1NlPUC/G3IF98eBbZ0W3XxgMkCuR4o=,tag:stLdytx0n/EvCOta/3Cn/g==,type:str]
storage_box_password: ENC[AES256_GCM,data:2VSgTly3rNozgfU4lg4Un+K1vmNf5XEq8NTYH1ekcI9a,iv:7uDxpspTVyirGlzaHDZwSE4c/c+4TaMaH7gGCd1X1JY=,tag:njzoKPDLa5mz5RMp2gYOtQ==,type:str]
#ENC[AES256_GCM,data:qCDYq8QCwrJZu0AiE+d4ePbbBgBQAwZ1knv4,iv:Yx4aEyRVMxQlirfdYyvUXCmukOzaL/XTypB+mRaFwv4=,tag:6DHV8r33jLsVGYNHLNUE+A==,type:comment]
acme_email: ENC[AES256_GCM,data:7S0SVt/wIV6MexKntpxiioiS62dQ3JNX,iv:xCx4aXuCwDtUzUy4cEhy6Y5G221UtHiUgPBLab14Ti8=,tag:bc4cRIV663cjroTZ9B8Z5g==,type:str]
#ENC[AES256_GCM,data:QjXYPryU+RWEpbfh0Iicu6tTog73i7HVzHH3UD5HqyPMs/wSM6q2s9Ap9fw=,iv:sClGgcS7gGKptbJx+UtPn6wfYg/s8kf6GXltFsbpXyE=,tag:9ZygWXObDSCKZjGtZSPIQQ==,type:comment]
mailgun_api_key: ENC[AES256_GCM,data:+KEwwt3g24KbplG705tOab/AIXOSnpmBvQtscpB0m/UTYHQv9d3h45n1Hplw3kIvEMk=,iv:g1XC1/HqsKeysOWPDk8ArMIvYuw8mRRyjmOtn38oxFI=,tag:MQI//J1Uxw43x/n9baymdA==,type:str]
#ENC[AES256_GCM,data:+76iTnTb2ER/8LaZMwnOFe65mqD+01h3WLCtJeiFinaQ9NitTfK/+J6OI+12IIGvFzdDW4Eo1kj1ETxekg==,iv:Y4tAG2TYmMbHpZcq44WzTno4pd5JW9dF1gOvEJw7kjc=,tag:QyXsSR1gd0hI6JYzT1E4uA==,type:comment]
kuma_username: ENC[AES256_GCM,data:zmar2ok=,iv:rdMU87CaPx2SvhIBrTaMI8TzQNMy7mme6LfPwaU0B2g=,tag:GaRPVvNBwt0SEvfT5UR39A==,type:str]
kuma_password: ENC[AES256_GCM,data:GI6fLItkyjmv5yU5EBXIFeHDuQg=,iv:nnc9DfhfsZPKK66Ivme5xSrLaR+onIgEmnrND0gsYvA=,tag:LHIZUTYakGie/UgZdb0wng==,type:str]
#ENC[AES256_GCM,data:09+nWhVzHUIsLzvn6vkhv7thV2QbDJtv0dEWR+eygYSX7B+n5UhKVFhUyzgan2CA,iv:0Fs9R7cwJErJfC4o7yBpP+uSgxSjRNIZJH7ZMMAtEhs=,tag:w5WZZQrYKh+DdCxmDzbIJQ==,type:comment]
docker_hub_username: ENC[AES256_GCM,data:GJ3X6f0HnRhNTVES,iv:HS2SE/9XAScGl3tCjbvYj8rSeFbyuXsBh4+P5adRo+g=,tag:Deo5k281WG3kQnGDUiFnSA==,type:str]
docker_hub_password: ENC[AES256_GCM,data:Uj3lLaBitHiEK4Z8ki/EfomWDi35eDtrRDMehTqsi7OtMqDY,iv:tVXbAumOn8NlrBBKzUqL6G+W6VZ4hSm0UZO4+ymqqas=,tag:hznzHSJpCwfpaPdJ0fVzVQ==,type:str]
#ENC[AES256_GCM,data:JR8oUMSIpn1beUD+bLl+3Q1QTJ+fLBrgOb4aZgQJf2A5MnfmYyuz/qgn24eekfuRxV0aPua4RjaUjl77YtWtXwDhcQ6y8pUjhUniW2R5ZnnCyecJ9zs=,iv:NGrXYBj6pVvIGmN8RI8abGrtjPPtkdHRZxahgH087cw=,tag:4gBgYddoK3Nd6pVOLxBcGQ==,type:comment]
matrix_homeserver_url: ENC[AES256_GCM,data:BEgOaA/kYbT0UU20QPJp+/QR2agNiR6OygG6gyrG8lqV,iv:CVwnqhuEV/VTbcUBliFgp1rLMGNWDyBEyt1xR87O9g4=,tag:azocD23lo9riGtCLBuSIcA==,type:str]
matrix_diun_user: ENC[AES256_GCM,data:JtO2Zpt2eSPTWcGlgOHtGfMPLUmpKjHK,iv:LMnqHgrybjaTJvMdOs1TsYW4lRdQK1qtiHjVd5tygeY=,tag:tMcNM5pcrpg90BhP1akwxw==,type:str]
matrix_diun_access_token: ENC[AES256_GCM,data:13118eQvzxkI8AJAXlOfaVn0QyQm3bys5SMxXBRT/9ZlgAFHJpY=,iv:S71vHz5VAucmNUQ2Ttj+QB6Fgo+1kx7fpmwZtDDM04U=,tag:C/Fk4CTu5BkGe0X1pB++8Q==,type:str]
matrix_diun_room_id: ENC[AES256_GCM,data:uz+tkB+lArVm9XMB/vy3LfwG7YW0JSmpmI/g0hgtecKRorGSAkAD1/PFrWgv,iv:FHgFQ/w4GT/xrQHawEIi1jicri8+WACJcKHKXugImBM=,tag:2M0zFYDE0vQUGxW7YpTT3w==,type:str]
#ENC[AES256_GCM,data:/eh4zz6uEw7qlElFH0QH6C78W+bwRwzUfrVw1w0+5poZOQl136b+6e4=,iv:t/wsXLGjDe+Lf3Cvp5R4VATw3olGLVJ1H2RUSFlOMF8=,tag:D7kzKpDHGtCY+E67LJeKkQ==,type:comment]
#ENC[AES256_GCM,data:VM0dHs+tOx/1Z6oamSlAa21A4M2He4KuNLXsPdM6/leqvus5M+k=,iv:61HeqUFJBEVw2Ge5jWps/hv4uuvPxz6iZaJrBONwySs=,tag:zi4V00IRiudBZujrs67bdQ==,type:comment]
#ENC[AES256_GCM,data:CH6F+c8mchwyNk9N9O4CB42c7nmpzMTE9we46q8lkNQk4LZsWbdrzp8+/gfrBPIwGKRaXtvJSi8=,iv:X3OrXEttqwjc5gu4JLf4S9DnK6IEt3kuFXChE1vpI1Y=,tag:V/H3O2pLnuEsUYLJOBSEJA==,type:comment]
hcloud_token: ENC[AES256_GCM,data:aqixVRwAcmqHlB0e3tbQOo2giawp++KjTW2hfK6aZ3VRtTHEcA==,iv:iHt1fY70cWTihiGfypUkhm6//+xfU+JFSkOBUGt1pAs=,tag:QpfzCsDdQR6rppemuZlUCg==,type:str]
#ENC[AES256_GCM,data:KpCylAL5gOarG+cNdmcL5cgmJI/6YT4mdIA7GlSqSJRfgNBVYe/xBgL1Hpiq+Q==,iv:+O4/ADo/OoYvMx50+g/sAqyjy+O7DmwURGMqBdDhLZM=,tag:PI42CwChPU6MVF/8/mT6Pg==,type:comment]
storage_box_host: ENC[AES256_GCM,data:rO/FEQp1Ksd824TToUh3q0WOVFY4cRk3W64=,iv:61Jor26LvSTKoXo3A9S5NTfgwuVcP8afUneKxSmyT/c=,tag:MrWsp91eygd2YvOnNNyanA==,type:str]
storage_box_user: ENC[AES256_GCM,data:KXUlMAixCQ==,iv:8o84GdNHZXKtBJwYop31YwqUL4HqhBNeKbEnhVLPl9A=,tag:hW5O9zSZ9dSLq0FORKqx3g==,type:str]
storage_box_password: ENC[AES256_GCM,data:SyzHuEXRbLru+wflZGkxauZpZtUDmo1vMuHmbJlh0yhS,iv:PgqmRC85bQqSreMaL2ibnmOL9+nkg07i0lDNJSEQoDU=,tag:+TVM11wz89TirKBesjKwfQ==,type:str]
#ENC[AES256_GCM,data:y+MWRmWUdPVOdaHk8vyZrc3HPD44NIWDZ4mr,iv:P6wh9kz8XcE2i/OVIOfvKRj49qizcQoYh0NZqpJk3bE=,tag:ZYgRx8zsILKp06ZQDn/4/A==,type:comment]
acme_email: ENC[AES256_GCM,data:cu9ReaF3xouE0eKZEx4PkiNmMfKOPyix,iv:fqgM8f3tMz7D8HAGCJ5ziwQ/Swsu3K8ZNkQ+p6Qc0Hs=,tag:pO+8wVnAMHSMMfeZ1dEKNA==,type:str]
#ENC[AES256_GCM,data:SX03vdRrckyWY15r//Y5pJZWXFGfaQSnNnJvA6k+AsutO1Vl9reVQMexQEU=,iv:p1LbT0qxIFfoiJPUerjGqlHD+fK0o0lLnFUvPhIyHdk=,tag:1zURmS+50zRThvHpOWJ0HA==,type:comment]
mailgun_api_key: ENC[AES256_GCM,data:WxrIeq1odexHduN4YTJSIX9+CaiLaGnkPNkEQpUwTou8PU7aMqJtclxliGl6YrYjncM=,iv:RzHQO1URtLPeAFRRjR3YF9+z//5WostpuwPtf7wxCZI=,tag:mi+mVIGGJqJaHu6cM3HcfA==,type:str]
sops:
    age:
        - recipient: age170jqy5pg6z62kevadqyxxekw8ryf3e394zaquw0nhs9ae3v9wd6qq2hxnk
          enc: |
            -----BEGIN AGE ENCRYPTED FILE-----
            YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBuU2JYVlA5bEMrZitEUk95
            VHltbXgyZUtVTVVicEd3bklPQlRtMURTYTNVCkRXZTJTSkV3aG51emFsQVh2QTBi
            QnZ5RmpWN3F6RmFnWTlnWUZZcXBqVW8KLS0tIGR4VTh4M3huNGFORU9CcVlMUllx
            cWhVaU9sN3dYNER5ZXhRbVpOWHZBS2cKHc0pIQno9sUGsfBxRlHxLQ5BPLerb4qd
            abbxYhLJ4roN+9dw3d26fEPYESg/lLts1nyZNxNGtTIz1oJG2MwJVg==
            YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSA4cGhtWTREWTA1NWZ1SE9u
            UGwvNWM3Z2pjdmsxMXRFK2UwcHQ1MkgySVE0CmJsVDBZUWhnVlJjdnhLWkwwOE83
            b3dsMDhKamJNLzFaYnZ4V2ZnS0VydzgKLS0tIE10MmZuc3U5bFFmeDFGNXhwbytG
            eXNRencwRmM5ZEdqbks2NTZ5UloxOTgK3NE24DZp7QaDUIUQOQjENm3zKorckrmt
            JEk2oRXoH6PGJHrZMh2AkmoG3/enh24U8PNQBpmYX6U2ZA7zfnjZXg==
            -----END AGE ENCRYPTED FILE-----
    lastmodified: "2026-01-23T20:07:21Z"
    mac: ENC[AES256_GCM,data:huCxiz8tOLyrn2yIcIi2YRMf3TuMPNr9YAsaQJI6aho/4ZqZ7IF3Jm/qrtRsr0tyn3en0UWrQAUnWG+9RQ3AC3fM80T6TghANV4xLXJqNk0WJ5EZ7p4ILDRuuEDqeVcvDYEzawGfCGt6tKL4JVk2BW2PHQz52FaQnJCc4/3Q+ZE=,iv:7pV2UvOz56R+WljGYJl1HeVjXdOqELFvBLE9SKnEzmg=,tag:8Yw7lolyQ2oxTW10o1AdIQ==,type:str]
    lastmodified: "2026-01-14T14:49:25Z"
    mac: ENC[AES256_GCM,data:Fa/ssGgx28qa9rJ/6BVGisR6w9xzAD7UwqGcz2ufT9wau+1AjoolDCJnYlhuhufCLU5L9nhwjPe1UPu8Ficpvge3i+UoXbVvnSNuc1ib4Vqaz2KBuHdP+S03/dimbdqjbfudk55uaML94Z5taa9d7xtM9119oG/XZj/qlhA85kI=,iv:lsg/m/92NC/nOSAj/WS4EKlSD9yauyrzzrCpS0+oYO4=,tag:t7lYNzGXL6OUWz0ykVSLPg==,type:str]
    unencrypted_suffix: _unencrypted
    version: 3.11.0
Some files were not shown because too many files have changed in this diff