- Create LICENSE file with MIT License
- Update README.md to reference the license
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add brand default recovery flow configuration to Authentik setup
- Update create_recovery_flow.py to set brand's recovery flow automatically
- All 17 servers now have brand recovery flow configured
Security improvements:
- Remove secrets/clients/*.sops.yaml from git tracking
- Remove ansible/host_vars/ from git tracking
- Update .gitignore to exclude sensitive config files
- Files remain encrypted and local, just not in repo
Note: Files still exist in git history. Consider using BFG Repo Cleaner
to remove them completely if needed.
🤖 Generated with Claude Code
Co-Authored-By: Claude <noreply@anthropic.com>
This commit resolves Docker Hub rate limiting issues on all servers by:
1. Adding Docker Hub authentication support to Diun configuration
2. Making watchRepo configurable (disabled to reduce API calls)
3. Creating automation to deploy changes across all 17 servers
Changes:
- Enhanced diun.yml.j2 template to support:
- Configurable watchRepo setting (defaults to true for compatibility)
- Docker Hub authentication via regopts when credentials provided
- Created 260124-configure-diun-watchrepo.yml playbook to:
- Disable watchRepo (only checks specific tags vs entire repo)
- Enable Docker Hub authentication (5000 pulls/6h vs 100/6h)
- Change schedule to weekly (Monday 6am UTC)
- Created configure-diun-all-servers.sh automation script with:
- Proper SOPS age key file path handling
- Per-server SSH key management
- Sequential deployment across all servers
- Fixed Authentik OIDC provider meta_launch_url to use client_domain
Successfully deployed to all 17 servers (bever, das, egel, haas, kikker,
kraai, mees, mol, mus, otter, ree, specht, uil, valk, vos, wolf, zwaan).
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add 260124-nextcloud-maintenance.yml playbook for database indices and mimetypes
- Add run-maintenance-all-servers.sh script to run maintenance on all servers
- Update ansible.cfg with IdentitiesOnly SSH option to prevent auth failures
- Remove orphaned SSH keys for deleted servers (black, dev, purple, white, edge)
- Remove obsolete edge-traefik and nat-gateway roles
- Remove old upgrade playbooks and fix-private-network playbook
- Update host_vars for egel, ree, zwaan
- Update diun webhook configuration
Successfully ran maintenance on all 17 active servers:
- Database indices optimized
- Mimetypes updated (145-157 new types on most servers)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Complete rewrite of the upgrade playbook based on lessons learned
from the kikker upgrade. The v2 playbook is fully idempotent and
handles all edge cases properly.
Key improvements over v1:
1. **Idempotency** - Can be safely re-run after failures
2. **Smart version detection** - Reads actual running version, not just docker-compose.yml
3. **Stage skipping** - Automatically skips completed upgrade stages
4. **Better maintenance mode handling** - Properly enables/disables at right times
5. **Backup reuse** - Skips backup if already exists from previous run
6. **Dynamic upgrade path** - Only runs needed stages based on current version
7. **Clear status messages** - Shows what's happening at each step
8. **Proper error handling** - Fails gracefully with helpful messages
Files:
- playbooks/260123-upgrade-nextcloud-v2.yml (main playbook)
- playbooks/260123-upgrade-nextcloud-stage-v2.yml (stage tasks)
Testing:
- v1 playbook partially tested on kikker (manual intervention required)
- v2 playbook ready for full end-to-end testing
Usage:
cd ansible/
HCLOUD_TOKEN="..." ansible-playbook -i hcloud.yml \
playbooks/260123-upgrade-nextcloud-v2.yml --limit <server> \
--private-key "../keys/ssh/<server>"
The playbook will:
- Detect current version (v30/v31/v32)
- Skip stages already completed
- Create backup only if needed
- Upgrade through required stages
- Re-enable critical apps
- Update to 'latest' tag
🤖 Generated with Claude Code (https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Created: 2026-01-23
Add automated playbook to safely upgrade Nextcloud from v30 (EOL) to v32
through staged upgrades, respecting Nextcloud's no-version-skip policy.
Features:
- Pre-upgrade validation (version, disk space, maintenance mode)
- Automatic full backup (database + volumes)
- Staged upgrades: v30 → v31 → v32
- Per-stage app disabling/enabling
- Database migrations (indices, bigint conversion)
- Post-upgrade validation and system checks
- Rollback instructions in case of failure
- Updates docker-compose.yml to 'latest' tag after success
Files:
- playbooks/260123-upgrade-nextcloud.yml (main playbook)
- playbooks/260123-upgrade-nextcloud-stage.yml (stage tasks)
Usage:
cd ansible/
HCLOUD_TOKEN="..." ansible-playbook -i hcloud.yml \
playbooks/260123-upgrade-nextcloud.yml --limit kikker
Safety:
- Creates timestamped backup before any changes
- Stops containers during volume backup
- Verifies version after each stage
- Provides rollback commands in output
Ready to upgrade kikker from v30.0.17 to v32.x
🤖 Generated with Claude Code (https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Remove accidentally committed tfplan file and obsolete backup files
from the tofu/ directory.
Changes:
- Remove tofu/tfplan from repository (binary plan file, should not be tracked)
- Delete terraform.tfvars.bak (old private network config, no longer needed)
- Delete terraform.tfstate.1768302414.backup (outdated state from Jan 13)
- Update .gitignore to prevent future commits of:
- tfplan files (tofu/tfplan, tofu/*.tfplan)
- Numbered state backups (tofu/terraform.tfstate.*.backup)
Security Assessment:
- tfplan contained infrastructure state (server IPs) but no credentials
- No sensitive tokens or passwords were exposed
- All actual secrets remain in SOPS-encrypted files only
The tfplan was only in commit b6c9fa6 (post-workshop state) and is now
removed going forward.
🤖 Generated with Claude Code (https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Complete the migration from Zitadel to Authentik by removing all
remaining Zitadel references in Ansible templates and defaults.
Changes:
- Update Nextcloud defaults to reference authentik_domain instead of zitadel_domain
- Add clarifying comments about dynamic OIDC credential provisioning
- Clean up Traefik dynamic config template - remove obsolete static routes
- Remove hardcoded test.vrije.cloud routes (routes now come from Docker labels)
- Remove unused Zitadel service definitions and middleware configs
Impact:
- Nextcloud version now defaults to "latest" (from hardcoded "30")
- Traefik template simplified to only define shared middlewares
- All service routing handled via Docker Compose labels (already working)
- No impact on existing deployments (these defaults were unused)
Related to: Post-workshop cleanup following commit b6c9fa6🤖 Generated with Claude Code (https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
This commit captures the infrastructure state immediately following
the "Post-Tyranny Tech" workshop on January 23rd, 2026.
Infrastructure Status:
- 13 client servers deployed (white, valk, zwaan, specht, das, uil, vos,
haas, wolf, ree, mees, mus, mol, kikker)
- Services: Authentik SSO, Nextcloud, Collabora Office, Traefik
- Private network architecture with edge NAT gateway
- OIDC integration between Authentik and Nextcloud
- Automated recovery flows and invitation system
- Container update monitoring with Diun
- Uptime monitoring with Uptime Kuma
Changes include:
- Multiple new client host configurations
- Network architecture improvements (private IPs + NAT)
- DNS management automation
- Container update notifications
- Email configuration via Mailgun
- SSH key generation for all clients
- Encrypted secrets for all deployments
- Health check and diagnostic scripts
Known Issues to Address:
- Nextcloud version pinned to v30 (should use 'latest' or v32)
- Zitadel references in templates (migrated to Authentik but templates not updated)
- Traefik dynamic config has obsolete static routes
🤖 Generated with Claude Code (https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
The API key was not used by the automation (which uses username/password
from shared_secrets instead) and should not be in version control.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Replace all Zitadel references with Authentik in README files
- Update example configurations to use authentik instead of zitadel
- Remove reference to deleted PROJECT_REFERENCE.md
- Update clients/README.md to reflect actual available scripts
- Update secrets documentation with correct variable names
All documentation now accurately reflects current infrastructure
using Authentik as the identity provider.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Removed internal deployment logs, security notes, test reports, and docs
folder from git tracking. These files remain locally but are now ignored
by git as they contain internal/sensitive information not needed by
external contributors.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Added docs/ directory and all .md files (except README.md) to .gitignore
to prevent internal deployment logs, security notes, and test reports
from being committed to the repository.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Common role improvements:
- Add systemd-resolved DNS configuration (Google + Cloudflare)
- Ensures reliable DNS resolution for private network servers
- Flush handlers immediately to apply DNS before other tasks
Docker role improvements:
- Enhanced Docker daemon configuration
- Better support for private network deployments
Scripts:
- Update add-client-to-terraform.sh for new architecture
These changes ensure private network clients can resolve DNS and
access internet via NAT gateway.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Enable deployment of client servers without public IPs using private
network (10.0.0.0/16) with NAT gateway via edge server.
## Infrastructure Changes:
### Terraform (tofu/):
- **network.tf**: Define private network and subnet (10.0.0.0/24)
- NAT gateway route through edge server
- Firewall rules for client servers
- **main.tf**: Support private-only servers
- Optional public_ip_enabled flag per client
- Dynamic network block for private IP assignment
- User-data templates for public vs private servers
- **user-data-*.yml**: Cloud-init templates
- Private servers: Configure default route via NAT gateway
- Public servers: Standard configuration
- **dns.tf**: Update DNS to support edge routing
- Client domains point to edge server IP
- Wildcard DNS for subdomains
- **variables.tf**: Add private_ip and public_ip_enabled options
### Ansible:
- **deploy.yml**: Add diun and kuma roles to deployment
## Benefits:
- Cost savings: No public IP needed for each client
- Scalability: No public IP exhaustion limits
- Security: Clients not directly exposed to internet
- Centralized SSL: All TLS termination at edge
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Add new Ansible roles and configuration for the edge proxy and
private network architecture:
## New Roles:
- **edge-traefik**: Edge reverse proxy that routes to private clients
- Dynamic routing configuration for multiple clients
- SSL termination at the edge
- Routes traffic to private IPs (10.0.0.x)
- **nat-gateway**: NAT/gateway configuration for edge server
- IP forwarding and masquerading
- Allows private network clients to access internet
- iptables rules for Docker integration
- **diun**: Docker Image Update Notifier
- Monitors containers for available updates
- Email notifications via Mailgun
- Per-client configuration
- **kuma**: Uptime monitoring integration
- Registers HTTP monitors for client services
- Automated monitor creation via API
- Checks Authentik, Nextcloud, Collabora endpoints
## New Playbooks:
- **setup-edge.yml**: Configure edge server with proxy and NAT
## Configuration:
- **host_vars**: Per-client Ansible configuration (valk, white)
- SSH bastion configuration for private IPs
- Client-specific secrets file references
This enables the scalable multi-tenant architecture where:
- Edge server has public IP and routes traffic
- Client servers use private IPs only (cost savings)
- All traffic flows through edge proxy with SSL termination
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Add create_recovery_flow.py script that configures Authentik password
recovery flow via REST API. This script is called by recovery.yml
during deployment.
The script creates:
- Password complexity policy (12+ chars, mixed case, digit, symbol)
- Recovery identification stage (username/email input)
- Recovery email stage (sends recovery token with 30min expiry)
- Recovery flow with proper stage bindings
- Updates authentication flow to show "Forgot password?" link
Uses internal Authentik API (localhost:9000) to avoid SSL/DNS issues
during initial setup. Works entirely via API calls, replacing the
unreliable blueprint-based approach.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Add recovery.yml task include to main.yml to enable automated
password recovery flow setup. This calls the recovery.yml tasks
which use create_recovery_flow.py to configure:
- Password complexity policy (12+ chars, mixed case, digit, symbol)
- Recovery identification stage (username/email)
- Recovery email stage (30-minute token expiry)
- Integration with default authentication flow
- "Forgot password?" link on login page
This restores automated recovery flow setup that was previously
removed when the blueprint-based approach was abandoned. The new
approach uses direct API calls via Python script which is more
reliable than blueprints.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
The recovery flow automation was failing because the Ansible task
was piping the API token via stdin (echo -e), but the Python script
(create_recovery_flow.py) expects command-line arguments via sys.argv.
Changed from:
echo -e "$TOKEN\n$DOMAIN" | docker exec -i python3 script.py
To:
docker exec python3 script.py "$TOKEN" "$DOMAIN"
This matches how the Python script is designed (line 365-370).
Tested on valk deployment - recovery flow now creates successfully
with all features:
- Password complexity policy
- Email verification
- "Forgot password?" link on login page
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Updated the Ansible task output to reflect the actual behavior
after blueprint fix:
Changes:
- Removed misleading "Set as default enrollment flow in brand" feature
- Updated to "Invitation-only enrollment" (more accurate)
- Added note about brand enrollment flow API restriction
- Added clear instructions for creating and using invitation tokens
- Simplified verification steps
This provides operators with accurate expectations about what
the enrollment flow blueprint does and doesn't do.
🤖 Generated with Claude Code
Co-Authored-By: Claude <noreply@anthropic.com>
The enrollment flow blueprint was failing with error:
"Model authentik.tenants.models.Tenant not allowed"
This is because the tenant/brand model is restricted in Authentik's
blueprint system and cannot be modified via blueprints.
Changes:
- Removed the tenant model entry (lines 150-156)
- Added documentation comment explaining the restriction
- Enrollment flow now applies successfully
- Brand enrollment flow must be configured manually via API if needed
Note: The enrollment flow is still fully functional and accessible
via direct URL even without brand configuration:
https://auth.<domain>/if/flow/default-enrollment-flow/
Tested on: black client deployment
Blueprint status: successful (previously: error)
🤖 Generated with Claude Code
Co-Authored-By: Claude <noreply@anthropic.com>
Removed plaintext SMTP password from uptime-kuma-email-setup.md.
Users should retrieve password from monitoring server or password manager.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Updates to Uptime Kuma monitoring setup:
DNS Configuration:
- Added DNS A record for status.vrije.cloud -> 94.130.231.155
- Updated Uptime Kuma container to use status.vrije.cloud domain
- HTTPS access via nginx-proxy with Let's Encrypt SSL
Automated Monitor Management:
- Created scripts/add-client-to-monitoring.sh
- Created scripts/remove-client-from-monitoring.sh
- Integrated monitoring into deploy-client.sh (step 5/5)
- Integrated monitoring into destroy-client.sh (step 0/7)
- Deployment now prompts to add monitors after success
- Destruction now prompts to remove monitors before deletion
Email Notification Setup:
- Created docs/uptime-kuma-email-setup.md with complete guide
- SMTP configuration using smtp.strato.com
- Credentials: server@postxsociety.org
- Alerts sent to mail@postxsociety.org
Documentation:
- Updated docs/monitoring.md with new domain
- Added email setup reference
- Replaced all URLs to use status.vrije.cloud
Benefits:
✅ Friendly domain instead of IP address
✅ HTTPS access with auto-SSL
✅ Automated monitoring reminders on deploy/destroy
✅ Complete email notification guide
✅ Streamlined workflow for monitor management
Note: Monitor creation/deletion currently manual (API automation planned)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Resolves#17
Deployed Uptime Kuma on external monitoring server for centralized
monitoring of all PTT client services.
Implementation:
- Deployed Uptime Kuma v1 on external server (94.130.231.155)
- Configured Docker Compose with nginx-proxy integration
- Created comprehensive monitoring documentation
Architecture:
- Independent monitoring server (not part of PTT infrastructure)
- Can monitor infrastructure failures and dev server
- Access: http://94.130.231.155:3001
- Future DNS: https://status.postxsociety.cloud
Monitors to configure (manual setup required):
- HTTP(S) endpoint monitoring for Authentik and Nextcloud
- SSL certificate expiration monitoring
- Per-client monitors for: dev, green
Documentation:
- Complete setup guide in docs/monitoring.md
- Monitor configuration instructions
- Management and troubleshooting procedures
- Integration guidelines for deployment scripts
Next Steps:
1. Access http://94.130.231.155:3001 to create admin account
2. Configure monitors for each client as per docs/monitoring.md
3. Set up email notifications for alerts
4. (Optional) Configure DNS for status.postxsociety.cloud
5. (Future) Automate monitor creation via Uptime Kuma API
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Resolves#20
Changes:
- Add hcloud_token to secrets/shared.sops.yaml (encrypted with Age)
- Create scripts/load-secrets-env.sh to automatically load token from SOPS
- Update all management scripts to auto-load token if not set
- Remove plaintext tokens from tofu/terraform.tfvars
- Update documentation in README.md, scripts/README.md, and SECURITY-NOTE-tokens.md
Benefits:
✅ Token encrypted at rest
✅ Can be safely backed up to cloud storage
✅ Consistent with other secrets management
✅ Automatic loading - no manual token management needed
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Implement persistent block storage for Nextcloud user data, separating application and data layers:
OpenTofu Changes:
- tofu/volumes.tf: Create and attach Hetzner Volumes per client
- Configurable size per client (default 100 GB for dev)
- ext4 formatted, attached but not auto-mounted
- tofu/variables.tf: Add nextcloud_volume_size to client config
- tofu/terraform.tfvars: Set volume size for dev client (100 GB ~€5.40/mo)
Ansible Changes:
- ansible/roles/nextcloud/tasks/mount-volume.yml: New mount tasks
- Detect volume device automatically
- Format if needed, mount at /mnt/nextcloud-data
- Add to fstab for persistence
- Set correct permissions for www-data
- ansible/roles/nextcloud/tasks/main.yml: Include volume mounting
- ansible/roles/nextcloud/templates/docker-compose.nextcloud.yml.j2:
- Use host mount /mnt/nextcloud-data/data instead of Docker volume
- Keep app code in Docker volume (nextcloud-app)
- User data now on Hetzner Volume
Scripts:
- scripts/resize-client-volume.sh: Online volume resizing
- Resize via Hetzner API
- Expand filesystem automatically
- Show cost impact
- Verify new size
Documentation:
- docs/storage-architecture.md: Complete storage guide
- Architecture diagrams
- Volume specifications
- Sizing guidelines
- Operations procedures
- Performance considerations
- Troubleshooting guide
- docs/volume-migration.md: Step-by-step migration
- Safe migration from Docker volumes
- Rollback procedures
- Verification checklist
- Timeline estimates
Benefits:
✅ Data independent from server instance
✅ Resize storage without rebuilding server
✅ Easy data migration between servers
✅ Better separation of concerns (app vs data)
✅ Simplified backup strategy
✅ Cost-optimized (pay for what you use)
Volume Pricing:
- 50 GB: ~€2.70/month
- 100 GB: ~€5.40/month
- 250 GB: ~€13.50/month
- Resizable online, no downtime
Note: Existing clients require manual migration
Follow docs/volume-migration.md for safe migration procedure
Closes#18🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Add comprehensive client registry for tracking all deployed infrastructure:
Registry System:
- Single source of truth in clients/registry.yml
- Tracks status, server specs, versions, maintenance history
- Supports canary deployment workflow
- Automatic updates via deployment scripts
New Scripts:
- scripts/list-clients.sh: List/filter clients (table/json/csv/summary)
- scripts/client-status.sh: Detailed client info with health checks
- scripts/update-registry.sh: Manual registry updates
Updated Scripts:
- scripts/deploy-client.sh: Auto-updates registry on deploy
- scripts/rebuild-client.sh: Auto-updates registry on rebuild
- scripts/destroy-client.sh: Marks clients as destroyed
Documentation:
- docs/client-registry.md: Complete registry reference
- clients/README.md: Quick start guide
Status tracking: pending → deployed → maintenance → destroyed
Role support: canary (dev) and production clients
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Automated recovery flow setup via blueprints was too complex and
unreliable. Recovery flows (password reset via email) must now be
configured manually in Authentik admin UI.
Changes:
- Removed recovery-flow.yaml blueprint
- Removed configure_recovery_flow.py script
- Removed update-recovery-flow.yml playbook
- Updated flows.yml to remove recovery references
- Updated custom-flows.yaml to remove brand recovery flow config
- Updated comments to reflect manual recovery flow requirement
Automated configuration still includes:
- Enrollment flow with invitation support
- 2FA/MFA enforcement
- OIDC provider for Nextcloud
- Email configuration via SMTP
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
CRITICAL FIX: Ensures all three flow blueprints are deployed during initial setup
The issue was that only custom-flows.yaml was being deployed, but
enrollment-flow.yaml and recovery-flow.yaml were created separately
and manually deployed later. This caused problems when servers were
rebuilt - the enrollment and recovery flows would disappear.
Changes:
- Updated flows.yml to deploy all three blueprints in a loop
- enrollment-flow.yaml: Invitation-only user registration
- recovery-flow.yaml: Password reset via email
- custom-flows.yaml: 2FA enforcement and brand settings
Now all flows will be available immediately after deployment:
✓ https://auth.dev.vrije.cloud/if/flow/default-enrollment-flow/
✓ https://auth.dev.vrije.cloud/if/flow/default-recovery-flow/🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
ACHIEVEMENT: Password recovery via email is now fully working! 🎉
Implemented a complete password recovery flow that:
- Asks users for their email address
- Sends a recovery link via Mailgun SMTP
- Allows users to set a new password
- Expires recovery links after 30 minutes
Flow stages:
1. Identification stage - collects user email
2. Email stage - sends recovery link
3. Prompt stage - collects new password
4. User write stage - updates password
Features:
✓ Email sent via Mailgun (noreply@mg.vrije.cloud)
✓ 30-minute token expiry for security
✓ Set as default recovery flow in brand
✓ Clean, user-friendly interface
✓ Password confirmation required
Users can access recovery at:
https://auth.dev.vrije.cloud/if/flow/default-recovery-flow/
Files added:
- recovery-flow.yaml - Blueprint defining the complete flow
- update-recovery-flow.yml - Deployment playbook
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
ACHIEVEMENT: Invitation-only enrollment flow is now fully working! 🎉
This commit adds a utility playbook that was used to successfully deploy
the updated enrollment-flow.yaml blueprint to the running dev server.
The key fix was adding the tenant configuration to set the enrollment flow
as the default in the Authentik brand, ensuring invitations created in the
UI automatically use the correct flow.
Changes:
- Added update-enrollment-flow.yml playbook for deploying flow updates
- Successfully deployed and verified on dev server
- Invitation URLs now work correctly with the format:
https://auth.dev.vrije.cloud/if/flow/default-enrollment-flow/?itoken=<token>
Features confirmed working:
✓ Invitation-only registration (no public signup)
✓ Correct flow is set as brand default
✓ Email notifications via Mailgun SMTP
✓ 2FA enforcement configured
✓ Password recovery flow configured
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
This ensures that when admins create invitations in the Authentik UI,
they automatically use the correct default-enrollment-flow instead of
the default-source-enrollment flow (which only works with external IdPs).
Changes:
- Added tenant configuration to set flow_enrollment
- Invitation URLs will now correctly use /if/flow/default-enrollment-flow/
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Set continue_flow_without_invitation: false
- Enrollment now requires a valid invitation token
- Users cannot self-register without an invitation
- Renamed metadata to reflect invitation-only nature
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Removed user login stage from enrollment flow
- Users now see completion page instead of being auto-logged in
- Prevents redirect to /if/user/ which requires internal user permissions
- Users can manually go to Nextcloud and log in with OIDC after registration
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Created enrollment-flow.yaml blueprint with:
* Enrollment flow with authentication: none
* Invitation stage (continues without invitation token)
* Prompt fields for user registration
* User write stage with user_creation_mode: always_create
* User login stage for automatic login after registration
- Fixed blueprint structure (attrs before identifiers)
- Public enrollment available at /if/flow/default-enrollment-flow/
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Implements automatic configuration of 2FA enforcement via Authentik API:
**Features:**
- Forces users to configure TOTP authenticator on first login
- Supports multiple 2FA methods: TOTP, WebAuthn, Static backup codes
- Idempotent: detects existing configuration and skips update
- Fully automated via Ansible deployment
**Implementation:**
- New task file: ansible/roles/authentik/tasks/mfa.yml
- Updates default-authentication-mfa-validation stage via API
- Sets not_configured_action to "configure"
- Links default-authenticator-totp-setup as configuration stage
**Configuration:**
```yaml
not_configured_action: configure
device_classes: [totp, webauthn, static]
configuration_stages: [default-authenticator-totp-setup]
```
**Testing:**
✅ Deployed to dev server successfully
✅ MFA enforcement verified via API
✅ Status: "Already configured" (idempotent check works)
Users will now be required to set up 2FA on their next login.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>