OAuth & LDAP Setup

This guide covers OIDC authentication setup for SPOG, including optional LDAP integration. Use the configuration chooser below to see only the instructions relevant to your setup.

Prerequisites

This guide builds on the Authentication & Authorization guide, which explains the security model, JWT tokens, and REGO policies. You should have a working SPOG deployment from the Quickstart guide.


Choose Your Configuration

LDAP Integration
OIDC Provider

Setup Overview

Based on your selections above, you'll complete these steps:

  1. TLS Certificate Setup - Install cert-manager and configure locally-trusted certificates
  2. Deploy OIDC Provider - Set up your chosen identity provider
  3. Deploy OpenLDAP - Set up a test LDAP directory (or connect to your existing one)
  4. Configure LDAP Federation - Connect Keycloak to LDAP for user sync
  5. Connect Glass UI - Configure SPOG to authenticate against your OIDC provider

TLS Certificate Setup

HTTPS is Required

OIDC requires HTTPS for all authentication flows. Browsers block insecure login forms and OIDC providers reject non-TLS redirect URIs.

This guide uses mkcert to create locally-trusted certificates for development. For production deployments with real domains, see Production Deployment at the end of this guide.

Install cert-manager

Bash
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set crds.enabled=true
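
Verify that the cert-manager pods become available before continuing (deployment names below follow the chart defaults):

Bash
kubectl -n cert-manager wait --for=condition=Available deployment/cert-manager --timeout=120s
kubectl -n cert-manager get pods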

Local TLS with mkcert

mkcert creates a local CA trusted by your browser.

Install mkcert:

Bash
# macOS
brew install mkcert

# Linux
sudo apt install libnss3-tools
curl -Lo mkcert https://github.com/FiloSottile/mkcert/releases/download/v1.4.4/mkcert-v1.4.4-linux-amd64
chmod +x mkcert && sudo mv mkcert /usr/local/bin/

Create and import CA:

Bash
# Install local CA (one-time - adds to system trust store)
mkcert -install

# Import CA into cert-manager
kubectl -n cert-manager create secret tls mkcert-ca \
  --cert="$(mkcert -CAROOT)/rootCA.pem" \
  --key="$(mkcert -CAROOT)/rootCA-key.pem"

Create ClusterIssuer:

Bash
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: spog-tls-issuer
spec:
  ca:
    secretName: mkcert-ca
EOF
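
Check that the issuer is ready before requesting certificates; cert-manager reports this in the READY column:

Bash
kubectl get clusterissuer spog-tls-issuer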

Configure /etc/hosts:

Bash
# Use your cluster's ingress IP
# Local k3d/kind with port-forward:
127.0.0.1 auth.spog.local console.spog.local

# k3d with loadbalancer or remote cluster:
# <cluster-ingress-ip> auth.spog.local console.spog.local
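
Confirm that both hostnames resolve to the address you configured:

Bash
ping -c 1 auth.spog.local
ping -c 1 console.spog.local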

Deploy OIDC Provider

Deploy Mock OIDC Server

The Mock OAuth2 Server is perfect for development and testing. It provides a full OIDC implementation where you specify your claims directly in the login form.

Development Only

Mock OIDC Server accepts any password. Do not use in production.

Step 1: Create Namespace

Bash
kubectl create namespace auth

Step 2: Deploy Mock OAuth2 Server

The deployment manifest includes a cert-manager Certificate resource that automatically creates the TLS secret.

Bash
kubectl apply -f "https://doc.powerdns.com/spog/1.0.0/helm/3rd-party-examples/mock-oauth2-server/manifests/deployment.yaml" -n auth
Manifest contents
YAML
# Mock OAuth2 Server Deployment
# Repository: https://github.com/navikt/mock-oauth2-server
#
# This deploys a mock OIDC server for development and testing.
# Users can login with any username and specify their claims via the login form.
#
# LOGIN INSTRUCTIONS:
#   Username: any value (becomes the 'sub' claim)
#   Claims (JSON): paste one of these based on desired role:
#
#   Admin (full access):
#     {"groups": ["admin:all"]}
#
#   US Operator (region:us matches us-east, us-west, us-central, etc.):
#     {"groups": ["region:us", "write:staging"]}
#
#   EU Operator (region:eu matches eu-west, eu-central, etc.):
#     {"groups": ["region:eu", "write:staging"]}
#
#   Global Viewer (read-only access to all regions):
#     {"groups": ["region:*", "read:clusters"]}
#
# PREREQUISITES:
#   - cert-manager installed with spog-tls-issuer ClusterIssuer configured
#     (see TLS Setup section in the OIDC documentation)
#
# USAGE:
#   kubectl create namespace auth
#   kubectl apply -f deployment.yaml -n auth
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: mock-oidc-tls
  namespace: auth
spec:
  secretName: mock-oidc-tls
  issuerRef:
    name: spog-tls-issuer
    kind: ClusterIssuer
  dnsNames:
    - auth.spog.local
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: mock-oauth2-server-config
  namespace: auth
data:
  # JSON configuration for mock-oauth2-server
  # See: https://github.com/navikt/mock-oauth2-server#configuration
  # Claims are provided via the login form, not tokenCallbacks
  config.json: |
    {
      "interactiveLogin": true,
      "httpServer": "NettyWrapper"
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mock-oauth2-server
  namespace: auth
  labels:
    app: mock-oauth2-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mock-oauth2-server
  template:
    metadata:
      labels:
        app: mock-oauth2-server
    spec:
      containers:
        - name: mock-oauth2-server
          image: ghcr.io/navikt/mock-oauth2-server:2.1.10
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          env:
            - name: SERVER_PORT
              value: "8080"
            - name: JSON_CONFIG_PATH
              value: "/config/config.json"
          volumeMounts:
            - name: config
              mountPath: /config
              readOnly: true
          resources:
            requests:
              memory: "128Mi"
              cpu: "50m"
            limits:
              memory: "256Mi"
              cpu: "200m"
          livenessProbe:
            httpGet:
              path: /default/.well-known/openid-configuration
              port: http
            initialDelaySeconds: 10
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /default/.well-known/openid-configuration
              port: http
            initialDelaySeconds: 5
            periodSeconds: 5
      volumes:
        - name: config
          configMap:
            name: mock-oauth2-server-config
---
apiVersion: v1
kind: Service
metadata:
  name: mock-oauth2-server
  namespace: auth
  labels:
    app: mock-oauth2-server
spec:
  type: ClusterIP
  ports:
    - port: 8080
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app: mock-oauth2-server
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mock-oauth2-server
  namespace: auth
spec:
  # Leave ingressClassName unset to use cluster's default ingress controller
  # ingressClassName: traefik
  tls:
    - hosts:
        - auth.spog.local
      secretName: mock-oidc-tls
  rules:
    - host: auth.spog.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: mock-oauth2-server
                port:
                  number: 8080

Step 3: Verify Deployment

Bash
kubectl -n auth wait --for=condition=Available deployment/mock-oauth2-server --timeout=60s
curl -k https://auth.spog.local/default/.well-known/openid-configuration

Deploy Dex

Dex provides a lightweight OIDC bridge with LDAP support. It's ideal when you don't need a user management UI.

LDAP Server Required

These instructions assume you have an LDAP server. See Deploy OpenLDAP below to set up a test server.

Step 1: Add Helm Repository

Bash
helm repo add dex https://charts.dexidp.io
helm repo update

Step 2: Create Namespace

Bash
kubectl create namespace dex

Step 3: Deploy Dex with LDAP Connector

The values file includes a cert-manager annotation that automatically creates the TLS secret.

Bash
helm install dex dex/dex -n dex \
  -f "https://doc.powerdns.com/spog/1.0.0/helm/3rd-party-examples/dex/examples/with-ldap.yaml"
Values file contents
YAML
# Dex Helm Values - LDAP Connector Configuration
# Uses the dex/dex chart with LDAP backend for enterprise authentication
# Repository: https://charts.dexidp.io
#
# This configuration connects Dex to an OpenLDAP server for user authentication.
# LDAP group memberships become the 'groups' claim in the JWT token, which
# REGO policies use for authorization (input.user.groups).
---
# Ingress configuration
ingress:
  enabled: true
  # Leave className unset to use cluster's default ingress controller
  className: ""
  # cert-manager will automatically create the TLS secret using the spog-tls-issuer
  annotations:
    cert-manager.io/cluster-issuer: "spog-tls-issuer"
  hosts:
    - host: auth.spog.local
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: dex-tls
      hosts:
        - auth.spog.local

# Dex configuration
config:
  # Issuer URL - must match what clients expect
  issuer: "https://auth.spog.local"

  # Storage backend
  storage:
    type: memory
    # For production:
    # type: kubernetes
    # config:
    #   inCluster: true

  # Web server configuration
  web:
    allowedOrigins:
      - "https://console.spog.local"

  # OAuth2 configuration
  oauth2:
    responseTypes: ["code"]
    skipApprovalScreen: true
    # Always include groups claim
    alwaysIssue:
      - "openid"
      - "profile"
      - "email"
      - "groups"

  # Disable static password database when using LDAP
  enablePasswordDB: false

  # Static OIDC clients
  staticClients:
    - id: spog-console
      name: "SPOG Console"
      redirectURIs:
        - "https://console.spog.local/authz/callback"
      public: true

  # LDAP Connector
  connectors:
    - type: ldap
      name: "OpenLDAP"
      id: ldap
      config:
        # LDAP server connection
        host: openldap.ldap.svc.cluster.local:389
        insecureNoSSL: true
        # For LDAPS (production):
        # host: openldap.ldap.svc.cluster.local:636
        # insecureNoSSL: false
        # insecureSkipVerify: false  # Set to true if using self-signed certs

        # Bind credentials for searching
        bindDN: cn=admin,dc=spog,dc=local
        bindPW: admin123  # Use Kubernetes secret in production!

        # User search configuration
        userSearch:
          # Base DN for user searches
          baseDN: ou=users,dc=spog,dc=local
          # Filter for user objects
          filter: "(objectClass=inetOrgPerson)"
          # Attribute used for username
          username: uid
          # Attribute used for user ID
          idAttr: uid
          # Attribute for email
          emailAttr: mail
          # Attribute for display name
          nameAttr: cn

        # Group search configuration
        groupSearch:
          # Base DN for group searches
          baseDN: ou=groups,dc=spog,dc=local
          # Filter for group objects
          filter: "(objectClass=groupOfNames)"
          # Attribute for group name (becomes groups claim)
          nameAttr: cn
          # User-to-group membership mapping
          userMatchers:
            - userAttr: DN
              groupAttr: member

# Service configuration
service:
  type: ClusterIP
  ports:
    http:
      port: 5556

# Resources
resources:
  requests:
    memory: "64Mi"
    cpu: "50m"
  limits:
    memory: "128Mi"
    cpu: "200m"

The key configuration points:

  • enablePasswordDB: false - Uses only LDAP for authentication
  • connectors[].config.host - OpenLDAP service address
  • connectors[].config.userSearch - How to find users in the directory
  • connectors[].config.groupSearch - How to find groups (becomes groups claim)

Verify Deployment:

Bash
kubectl -n dex wait --for=condition=Available deployment/dex --timeout=60s
curl -k https://auth.spog.local/.well-known/openid-configuration
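
If the discovery endpoint responds but logins fail later, check the Dex logs for LDAP bind or search errors (exact log wording varies by Dex version):

Bash
kubectl -n dex logs deploy/dex --tail=50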

Deploy Keycloak

The Keycloak Operator provides a Kubernetes-native way to deploy and manage Keycloak instances. Use Keycloak when you need an admin UI, advanced claim mapping, or built-in MFA.

Step 1: Install Keycloak Operator

Install CRDs:

Bash
# Apply Keycloak CRDs (version 26.4.7)
kubectl apply -f https://raw.githubusercontent.com/keycloak/keycloak-k8s-resources/26.4.7/kubernetes/keycloaks.k8s.keycloak.org-v1.yml
kubectl apply -f https://raw.githubusercontent.com/keycloak/keycloak-k8s-resources/26.4.7/kubernetes/keycloakrealmimports.k8s.keycloak.org-v1.yml

Deploy operator:

Bash
# Create namespace
kubectl create namespace keycloak

# Deploy operator
kubectl -n keycloak apply -f https://raw.githubusercontent.com/keycloak/keycloak-k8s-resources/26.4.7/kubernetes/kubernetes.yml

Verify operator is running:

Bash
kubectl -n keycloak get pods -l app.kubernetes.io/name=keycloak-operator

Step 2: Deploy PostgreSQL Database

Keycloak requires a database. We use the Zalando Postgres Operator.

Install Zalando Postgres Operator:

Bash
helm repo add postgres-operator-charts https://opensource.zalando.com/postgres-operator/charts/postgres-operator
helm repo update
helm install postgres-operator postgres-operator-charts/postgres-operator \
  --namespace postgres-operator \
  --create-namespace

Create PostgreSQL cluster:

Bash
kubectl apply -f - <<EOF
apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
  name: keycloak-db
  namespace: keycloak
spec:
  teamId: "keycloak"
  volume:
    size: 1Gi
  numberOfInstances: 1  # Use 2+ for HA in production
  users:
    keycloak:
      - superuser
      - createdb
  databases:
    keycloak: keycloak
  postgresql:
    version: "17"
EOF
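
Wait for the database to come up before deploying Keycloak. The Zalando operator reports a cluster status, and by default labels its pods with cluster-name:

Bash
kubectl -n keycloak get postgresql keycloak-db
kubectl -n keycloak get pods -l cluster-name=keycloak-db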

Step 3: Deploy Keycloak

Deploy the Keycloak instance with TLS certificate and ingress. This applies the certificate, Keycloak CR, and ingress in one step.

Bash
kubectl apply -n keycloak \
  -f "https://doc.powerdns.com/spog/1.0.0/helm/3rd-party-examples/keycloak-operator/manifests/keycloak/certificate.yaml" \
  -f "https://doc.powerdns.com/spog/1.0.0/helm/3rd-party-examples/keycloak-operator/manifests/keycloak/keycloak.yaml" \
  -f "https://doc.powerdns.com/spog/1.0.0/helm/3rd-party-examples/keycloak-operator/manifests/keycloak/ingress.yaml"
Certificate manifest
YAML
# TLS Certificate for Keycloak
# Prerequisites: cert-manager with spog-tls-issuer ClusterIssuer configured
# (see TLS Setup section in the OIDC documentation)
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: keycloak-tls
  namespace: keycloak
spec:
  secretName: keycloak-tls
  issuerRef:
    name: spog-tls-issuer
    kind: ClusterIssuer
  dnsNames:
    - auth.spog.local
Keycloak manifest
YAML
apiVersion: k8s.keycloak.org/v2alpha1
kind: Keycloak
metadata:
  name: keycloak
  namespace: keycloak
spec:
  instances: 1

  db:
    vendor: postgres
    host: keycloak-db
    database: keycloak
    usernameSecret:
      name: keycloak.keycloak-db.credentials.postgresql.acid.zalan.do
      key: username
    passwordSecret:
      name: keycloak.keycloak-db.credentials.postgresql.acid.zalan.do
      key: password

  http:
    httpEnabled: true

  hostname:
    hostname: auth.spog.local  # Change for production
    strict: false

  proxy:
    headers: xforwarded
Ingress manifest
YAML
# Keycloak Ingress
#
# Standard Kubernetes Ingress with TLS termination at the ingress level.
# This pattern works with any ingress controller (Traefik, NGINX, etc.)
#
# The TLS certificate is created by cert-manager (see certificate.yaml).
# Keycloak serves HTTP internally; the ingress handles HTTPS externally.
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: keycloak-ingress
  namespace: keycloak
spec:
  # Leave ingressClassName unset to use cluster's default ingress controller
  # ingressClassName: traefik
  tls:
    - hosts:
        - auth.spog.local
      secretName: keycloak-tls
  rules:
    - host: auth.spog.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: keycloak-service
                port:
                  number: 8080
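
Keycloak needs a minute or two to initialize its database schema. Wait for the instance to report Ready before continuing:

Bash
kubectl -n keycloak wait --for=condition=Ready keycloaks/keycloak --timeout=300s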

Step 4: Get Admin Credentials

Bash
# Username
kubectl -n keycloak get secret keycloak-initial-admin \
  -o jsonpath='{.data.username}' | base64 -d && echo

# Password
kubectl -n keycloak get secret keycloak-initial-admin \
  -o jsonpath='{.data.password}' | base64 -d && echo

Access the admin console at https://auth.spog.local/admin

Step 5: Configure SPOG Realm

Bash
kubectl apply -n keycloak -f "https://doc.powerdns.com/spog/1.0.0/helm/3rd-party-examples/keycloak-operator/manifests/keycloak/spog-realm.yaml"
Manifest contents
YAML
# SPOG Realm Import
#
# Configures the SPOG realm in Keycloak with:
#   - Client scope mapping user 'scopes' attribute to 'groups' claim
#   - spog-console OIDC client
#   - Test users with scope-based access (remove in production)
#
# Prerequisites:
#   - Keycloak instance running (keycloak.yaml applied)
#
# NOTE: URLs assume HTTPS on standard port 443. If your cluster uses a different
# port, update the redirect URIs and web origins accordingly.
#
# Usage:
#   kubectl apply -f spog-realm.yaml -n keycloak
---
apiVersion: k8s.keycloak.org/v2alpha1
kind: KeycloakRealmImport
metadata:
  name: spog-realm
  namespace: keycloak
spec:
  keycloakCRName: keycloak
  realm:
    realm: spog
    enabled: true
    displayName: "SPOG Authentication"

    # Client scopes for OIDC claims
    # NOTE: Built-in scopes (openid, profile, email) must be explicitly defined
    # because KeycloakRealmImport doesn't auto-create them like the admin console does.
    clientScopes:
      # Built-in OIDC scopes (required for standard OIDC flow)
      - name: openid
        protocol: openid-connect
        attributes:
          include.in.token.scope: "true"
        protocolMappers:
          - name: sub
            protocol: openid-connect
            protocolMapper: oidc-sub-mapper
            config:
              id.token.claim: "true"
              access.token.claim: "true"
      - name: profile
        protocol: openid-connect
        attributes:
          include.in.token.scope: "true"
        protocolMappers:
          - name: username
            protocol: openid-connect
            protocolMapper: oidc-usermodel-property-mapper
            config:
              user.attribute: username
              claim.name: preferred_username
              id.token.claim: "true"
              access.token.claim: "true"
          - name: full name
            protocol: openid-connect
            protocolMapper: oidc-full-name-mapper
            config:
              id.token.claim: "true"
              access.token.claim: "true"
      - name: email
        protocol: openid-connect
        attributes:
          include.in.token.scope: "true"
        protocolMappers:
          - name: email
            protocol: openid-connect
            protocolMapper: oidc-usermodel-property-mapper
            config:
              user.attribute: email
              claim.name: email
              id.token.claim: "true"
              access.token.claim: "true"
          - name: email verified
            protocol: openid-connect
            protocolMapper: oidc-usermodel-property-mapper
            config:
              user.attribute: emailVerified
              claim.name: email_verified
              jsonType.label: boolean
              id.token.claim: "true"
              access.token.claim: "true"

      # Groups scope - maps user 'scopes' attribute to 'groups' claim
      # This provides compatibility with Mock OIDC and Dex which also use 'groups'
      - name: groups
        protocol: openid-connect
        attributes:
          include.in.token.scope: "true"
        protocolMappers:
          - name: groups
            protocol: openid-connect
            protocolMapper: oidc-usermodel-attribute-mapper
            config:
              user.attribute: scopes
              claim.name: groups
              jsonType.label: String
              id.token.claim: "true"
              access.token.claim: "true"
              multivalued: "true"

    # SPOG Console client
    clients:
      - clientId: spog-console
        name: "SPOG Console"
        enabled: true
        publicClient: true
        standardFlowEnabled: true
        rootUrl: "https://console.spog.local"
        redirectUris:
          - "https://console.spog.local/*"
        webOrigins:
          - "https://console.spog.local"
        defaultClientScopes:
          - "openid"
          - "profile"
          - "email"
          - "groups"

    # Realm roles (for RBAC - alternative to scope-based access)
    roles:
      realm:
        - name: admin
          description: "Full administrative access"
        - name: operator
          description: "Operational access"
        - name: viewer
          description: "Read-only access"

    # Test users with scope-based access (remove in production)
    # All passwords: secret
    users:
      - username: admin
        enabled: true
        email: admin@spog.local
        credentials:
          - type: password
            value: secret
            temporary: false
        attributes:
          scopes: ["admin:all"]
      - username: us-operator
        enabled: true
        email: us-operator@spog.local
        credentials:
          - type: password
            value: secret
            temporary: false
        attributes:
          # region:us matches us-east, us-west, us-central, etc.
          scopes: ["region:us", "write:staging", "write:development"]
      - username: eu-operator
        enabled: true
        email: eu-operator@spog.local
        credentials:
          - type: password
            value: secret
            temporary: false
        attributes:
          # region:eu matches eu-west, eu-central, eu-north, etc.
          scopes: ["region:eu", "write:staging", "write:development"]
      - username: prod-admin
        enabled: true
        email: prod-admin@spog.local
        credentials:
          - type: password
            value: secret
            temporary: false
        attributes:
          scopes: ["region:*", "write:production"]
      - username: global-viewer
        enabled: true
        email: global-viewer@spog.local
        credentials:
          - type: password
            value: secret
            temporary: false
        attributes:
          scopes: ["region:*", "read:clusters"]
      - username: dev-user
        enabled: true
        email: dev-user@spog.local
        credentials:
          - type: password
            value: secret
            temporary: false
        attributes:
          scopes: ["region:us-east", "write:development"]

Step 5: Configure SPOG Realm (LDAP Federation)

Configure the realm without local users (users will be federated from LDAP):

Bash
kubectl apply -n keycloak -f "https://doc.powerdns.com/spog/1.0.0/helm/3rd-party-examples/keycloak-operator/manifests/keycloak/spog-realm-ldap.yaml"
Manifest contents
YAML
# SPOG Realm Import (LDAP Federation)
#
# Configures the SPOG realm in Keycloak for LDAP-federated users:
#   - Client scope mapping Keycloak groups to 'groups' claim
#   - spog-console OIDC client
#   - NO local users (users come from LDAP federation)
#
# Prerequisites:
#   - Keycloak instance running (keycloak.yaml applied)
#   - LDAP federation configured in Keycloak admin console
#
# NOTE: URLs assume HTTPS on standard port 443. If your cluster uses a different
# port, update the redirect URIs and web origins accordingly.
#
# Usage:
#   kubectl apply -f spog-realm-ldap.yaml -n keycloak
---
apiVersion: k8s.keycloak.org/v2alpha1
kind: KeycloakRealmImport
metadata:
  name: spog-realm
  namespace: keycloak
spec:
  keycloakCRName: keycloak
  realm:
    realm: spog
    enabled: true
    displayName: "SPOG Authentication"

    # Client scopes for OIDC claims
    # NOTE: Built-in scopes (openid, profile, email) must be explicitly defined
    # because KeycloakRealmImport doesn't auto-create them like the admin console does.
    clientScopes:
      # Built-in OIDC scopes (required for standard OIDC flow)
      - name: openid
        protocol: openid-connect
        attributes:
          include.in.token.scope: "true"
        protocolMappers:
          - name: sub
            protocol: openid-connect
            protocolMapper: oidc-sub-mapper
            config:
              id.token.claim: "true"
              access.token.claim: "true"
      - name: profile
        protocol: openid-connect
        attributes:
          include.in.token.scope: "true"
        protocolMappers:
          - name: username
            protocol: openid-connect
            protocolMapper: oidc-usermodel-property-mapper
            config:
              user.attribute: username
              claim.name: preferred_username
              id.token.claim: "true"
              access.token.claim: "true"
          - name: full name
            protocol: openid-connect
            protocolMapper: oidc-full-name-mapper
            config:
              id.token.claim: "true"
              access.token.claim: "true"
      - name: email
        protocol: openid-connect
        attributes:
          include.in.token.scope: "true"
        protocolMappers:
          - name: email
            protocol: openid-connect
            protocolMapper: oidc-usermodel-property-mapper
            config:
              user.attribute: email
              claim.name: email
              id.token.claim: "true"
              access.token.claim: "true"
          - name: email verified
            protocol: openid-connect
            protocolMapper: oidc-usermodel-property-mapper
            config:
              user.attribute: emailVerified
              claim.name: email_verified
              jsonType.label: boolean
              id.token.claim: "true"
              access.token.claim: "true"

      # Groups scope - maps Keycloak group membership to 'groups' claim
      # Groups are federated from LDAP via the User Federation configuration
      - name: groups
        protocol: openid-connect
        attributes:
          include.in.token.scope: "true"
        protocolMappers:
          - name: groups
            protocol: openid-connect
            protocolMapper: oidc-group-membership-mapper
            config:
              claim.name: groups
              full.path: "false"
              id.token.claim: "true"
              access.token.claim: "true"

    # SPOG Console client
    clients:
      - clientId: spog-console
        name: "SPOG Console"
        enabled: true
        publicClient: true
        standardFlowEnabled: true
        rootUrl: "https://console.spog.local"
        redirectUris:
          - "https://console.spog.local/*"
        webOrigins:
          - "https://console.spog.local"
        defaultClientScopes:
          - "openid"
          - "profile"
          - "email"
          - "groups"

    # Realm roles (for RBAC - alternative to scope-based access)
    roles:
      realm:
        - name: admin
          description: "Full administrative access"
        - name: operator
          description: "Operational access"
        - name: viewer
          description: "Read-only access"

    # Disable password update prompts for LDAP users
    # (passwords are managed in LDAP, not Keycloak)
    requiredActions:
      - alias: UPDATE_PASSWORD
        name: Update Password
        enabled: false
        defaultAction: false

    # No local users - users are federated from LDAP
    # Configure LDAP federation in Keycloak admin console after applying this realm

Deploy OpenLDAP

For LDAP integration, you need an LDAP directory. This section covers deploying OpenLDAP as a reference implementation.

Install OpenLDAP

We use the openldap-stack-ha Helm chart.

Step 1: Add Helm Repository

Bash
helm repo add jp-gouin https://jp-gouin.github.io/helm-openldap/
helm repo update

Step 2: Deploy OpenLDAP

Bash
kubectl create namespace ldap

helm install openldap jp-gouin/openldap-stack-ha -n ldap \
  -f "https://doc.powerdns.com/spog/1.0.0/helm/3rd-party-examples/openldap/examples/spog-directory.yaml"
Values file contents
YAML
# OpenLDAP Helm Values for SPOG Directory
# Uses the jp-gouin/openldap-stack-ha chart
# Repository: https://jp-gouin.github.io/helm-openldap/
---
global:
  # LDAP Admin credentials
  adminUser: admin
  adminPassword: admin123  # Change this in production!
  configUser: config
  configPassword: config123  # Change this in production!
  # Domain configuration
  ldapDomain: "dc=spog,dc=local"

replicaCount: 1

# Disable web-based admin UIs - use ldapadd CLI instead
phpldapadmin:
  enabled: false

ltb-passwd:
  enabled: false

# Persistence
persistence:
  enabled: true
  size: 1Gi

# Service configuration
service:
  type: ClusterIP
  ldapPort: 389
  ldapsPort: 636

# TLS configuration (optional)
tls:
  enabled: false
  # For production, enable TLS:
  # enabled: true
  # secret: openldap-tls

# Resource limits
resources:
  requests:
    memory: "256Mi"
    cpu: "100m"
  limits:
    memory: "512Mi"
    cpu: "500m"

# Custom LDIF files to initialize the directory
# The base.ldif file in ../data/ should be imported after deployment
# using the ldapadd CLI command
customLdifFiles: {}
  # You can embed LDIF directly here, or import via ldapadd CLI
  # Example:
  # 00-root.ldif: |
  #   dn: dc=spog,dc=local
  #   objectClass: top
  #   objectClass: dcObject
  #   objectClass: organization
  #   dc: spog
  #   o: SPOG Organization

Import Directory Structure

The LDIF below creates a directory structure that maps to SPOG's authorization model:

Text Only
dc=spog,dc=local              # Domain root
├── ou=users                  # User accounts
│   ├── uid=admin
│   ├── uid=us-operator
│   └── ...
└── ou=groups                 # Group memberships
    ├── cn=admins
    ├── cn=operators
    └── cn=viewers

How it works with Dex:

  • Groups → JWT claims: Dex's groupSearch config finds groups where the user is a member. These become the groups claim in the JWT token.
  • Groups → REGO policies: SPOG's REGO policies check input.user.groups to authorize access to clusters and operations.

How it works with Keycloak:

  • LDAP → Keycloak: Keycloak's LDAP federation syncs users and groups from the LDAP directory into Keycloak.
  • Groups → JWT claims: The realm's groups client scope maps Keycloak group memberships to the groups claim in the JWT token.
  • Groups → REGO policies: SPOG's REGO policies check input.user.groups to authorize access to clusters and operations.

Save the LDIF content below to a file called base.ldif, then import it.

Start the port-forward:

Bash
kubectl port-forward -n ldap svc/openldap 1389:389

In another terminal, import the LDIF:

Bash
ldapadd -x -H ldap://localhost:1389 \
  -D "cn=admin,dc=spog,dc=local" \
  -w admin123 \
  -f base.ldif
LDIF Content
base.ldif
# SPOG LDAP Directory Structure
# This LDIF file defines the base directory structure for SPOG authentication
# Import this into your LDAP server after deployment
#
# Group names (cn) become the 'groups' claim in the JWT token via Dex.
# SPOG's REGO policies check these groups for authorization.

# Root domain
dn: dc=spog,dc=local
objectClass: top
objectClass: dcObject
objectClass: organization
dc: spog
o: SPOG Organization

# Organizational Unit for Users
dn: ou=users,dc=spog,dc=local
objectClass: organizationalUnit
ou: users
description: SPOG Users

# Organizational Unit for Groups
dn: ou=groups,dc=spog,dc=local
objectClass: organizationalUnit
ou: groups
description: SPOG Authorization Groups

# ============================================================================
# Groups - these names become JWT 'groups' claim values
# ============================================================================

# Full admin access
dn: cn=admin:all,ou=groups,dc=spog,dc=local
objectClass: groupOfNames
cn: admin:all
description: Full administrative access to all clusters
member: uid=admin,ou=users,dc=spog,dc=local

# Regional access groups
dn: cn=region:us,ou=groups,dc=spog,dc=local
objectClass: groupOfNames
cn: region:us
description: Access to US region clusters (us-east, us-west, us-central, etc.)
member: uid=us-operator,ou=users,dc=spog,dc=local

dn: cn=region:eu,ou=groups,dc=spog,dc=local
objectClass: groupOfNames
cn: region:eu
description: Access to EU region clusters (eu-west, eu-central, eu-north, etc.)
member: uid=eu-operator,ou=users,dc=spog,dc=local

dn: cn=region:us-east,ou=groups,dc=spog,dc=local
objectClass: groupOfNames
cn: region:us-east
description: Access to US-East region only
member: uid=dev-user,ou=users,dc=spog,dc=local

dn: cn=region:*,ou=groups,dc=spog,dc=local
objectClass: groupOfNames
cn: region:*
description: Access to all regions
member: uid=prod-admin,ou=users,dc=spog,dc=local
member: uid=global-viewer,ou=users,dc=spog,dc=local

# Write permission groups
dn: cn=write:staging,ou=groups,dc=spog,dc=local
objectClass: groupOfNames
cn: write:staging
description: Write access to staging environments
member: uid=us-operator,ou=users,dc=spog,dc=local
member: uid=eu-operator,ou=users,dc=spog,dc=local

dn: cn=write:development,ou=groups,dc=spog,dc=local
objectClass: groupOfNames
cn: write:development
description: Write access to development environments
member: uid=us-operator,ou=users,dc=spog,dc=local
member: uid=eu-operator,ou=users,dc=spog,dc=local
member: uid=dev-user,ou=users,dc=spog,dc=local

dn: cn=write:production,ou=groups,dc=spog,dc=local
objectClass: groupOfNames
cn: write:production
description: Write access to production environments
member: uid=prod-admin,ou=users,dc=spog,dc=local

# Read permission groups
dn: cn=read:clusters,ou=groups,dc=spog,dc=local
objectClass: groupOfNames
cn: read:clusters
description: Read-only access to cluster information
member: uid=global-viewer,ou=users,dc=spog,dc=local

# ============================================================================
# Users - All passwords: secret
# ============================================================================

# Admin - full administrative access
dn: uid=admin,ou=users,dc=spog,dc=local
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
uid: admin
cn: Admin User
sn: User
givenName: Admin
mail: admin@spog.local
userPassword: secret
uidNumber: 10000
gidNumber: 10000
homeDirectory: /home/admin
loginShell: /bin/bash

# US Operator - US regions, staging/development write access
dn: uid=us-operator,ou=users,dc=spog,dc=local
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
uid: us-operator
cn: US Operations
sn: Operations
givenName: US
mail: us-operator@spog.local
userPassword: secret
uidNumber: 10001
gidNumber: 10001
homeDirectory: /home/us-operator
loginShell: /bin/bash

# EU Operator - EU regions, staging/development write access
dn: uid=eu-operator,ou=users,dc=spog,dc=local
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
uid: eu-operator
cn: EU Operations
sn: Operations
givenName: EU
mail: eu-operator@spog.local
userPassword: secret
uidNumber: 10002
gidNumber: 10002
homeDirectory: /home/eu-operator
loginShell: /bin/bash

# Production Admin - all regions, production write access only
dn: uid=prod-admin,ou=users,dc=spog,dc=local
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
uid: prod-admin
cn: Production Admin
sn: Admin
givenName: Production
mail: prod-admin@spog.local
userPassword: secret
uidNumber: 10003
gidNumber: 10003
homeDirectory: /home/prod-admin
loginShell: /bin/bash

# Global Viewer - all regions, read-only access
dn: uid=global-viewer,ou=users,dc=spog,dc=local
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
uid: global-viewer
cn: Global Viewer
sn: Viewer
givenName: Global
mail: global-viewer@spog.local
userPassword: secret
uidNumber: 10004
gidNumber: 10004
homeDirectory: /home/global-viewer
loginShell: /bin/bash

# Developer - limited dev access (US-East, development only)
dn: uid=dev-user,ou=users,dc=spog,dc=local
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
uid: dev-user
cn: Developer
sn: User
givenName: Dev
mail: dev-user@spog.local
userPassword: secret
uidNumber: 10005
gidNumber: 10005
homeDirectory: /home/dev-user
loginShell: /bin/bash
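
With the port-forward from the previous step still running, verify the import by listing the groups and their members:

Bash
ldapsearch -x -H ldap://localhost:1389 \
  -D "cn=admin,dc=spog,dc=local" -w admin123 \
  -b "ou=groups,dc=spog,dc=local" "(objectClass=groupOfNames)" cn member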

Test Users

Username        Password   Scopes                          Dashboards
admin           secret     admin:all                       All
us-operator     secret     region:us, write:staging/dev    Global (US clusters only)
eu-operator     secret     region:eu, write:staging/dev    Global (EU clusters only)
prod-admin      secret     region:*, write:production      All
global-viewer   secret     region:*, read:clusters         Global, US, EU
dev-user        secret     region:us-east, write:dev       Global (us-east only)

Configure Keycloak LDAP Federation

With OpenLDAP deployed, configure Keycloak to federate users from LDAP.

  1. Access the Keycloak admin console at https://auth.spog.local/admin

  2. Select the spog realm, then go to User Federation → Add provider → ldap

  3. Configure connection settings:

     Setting           Value
     Connection URL    ldap://openldap.ldap.svc.cluster.local:389
     Bind DN           cn=admin,dc=spog,dc=local
     Bind Credential   admin123

  4. Configure user search:

     Setting                    Value
     Edit Mode                  READ_ONLY
     Users DN                   ou=users,dc=spog,dc=local
     Username LDAP attribute    uid
     RDN LDAP attribute         uid
     UUID LDAP attribute        entryUUID
     User Object Classes        inetOrgPerson

  5. Expand Advanced settings and configure:

     Setting       Value
     Trust Email   ON

  6. Click Save to create the LDAP provider

  7. Select the newly created LDAP provider, then go to the Mappers tab

  8. Click Add mapper and configure a Group Mapper to sync LDAP groups:

     Setting                      Value
     Mapper Type                  group-ldap-mapper
     LDAP Groups DN               ou=groups,dc=spog,dc=local
     Group Name LDAP Attribute    cn
     Group Object Classes         groupOfNames
     Membership LDAP Attribute    member

  9. Return to the LDAP provider settings, open the Action dropdown, and select Sync all users to import LDAP users

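To confirm the sync, you can list the federated users with Keycloak's admin CLI. This is a sketch: it assumes the operator's default pod name (keycloak-0) and uses the admin credentials from Step 4:

Bash
# Authenticate kcadm against the master realm (substitute the admin password from Step 4)
kubectl -n keycloak exec keycloak-0 -- /opt/keycloak/bin/kcadm.sh config credentials \
  --server http://localhost:8080 --realm master --user admin --password '<admin-password>'
# List the users imported into the spog realm from LDAP
kubectl -n keycloak exec keycloak-0 -- /opt/keycloak/bin/kcadm.sh get users -r spog --fields username,email
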
Connect Glass UI to OIDC

After deploying your OIDC provider, configure Glass UI to use it.

Hostname Must Match Redirect URIs

Your Glass UI hostname must be set to console.spog.local for OIDC to work with the example configurations. The redirect URIs configured in your OIDC provider must exactly match your Glass UI's ingress hostname.

Don't have Glass UI running yet?

Follow the Quickstart Guide to deploy Glass UI first. Make sure to set GLASS_HOSTNAME=console.spog.local.

Connect Glass UI to Mock OIDC

First, create the ConfigMap with your mkcert CA so the policy service can validate the OIDC provider's certificate:

Bash
kubectl create configmap mkcert-ca \
  --from-file=ca-bundle.crt="$(mkcert -CAROOT)/rootCA.pem" \
  -n controlplane

Then deploy with the OIDC and mkcert CA configuration:

Bash
helm upgrade glass-ui oci://registry.open-xchange.com/cc-glass/glass-ui \
  --version "1.0.0" \
  -n controlplane \
  --set global.imagePullSecretsList[0]=registry-credentials \
  -f "https://doc.powerdns.com/spog/1.0.0/helm/glass-ui/examples/oidc-mock.yaml" \
  -f "https://doc.powerdns.com/spog/1.0.0/helm/glass-ui/examples/mkcert-ca.yaml" \
  -f "https://doc.powerdns.com/spog/1.0.0/helm/glass-ui/examples/demo-policies.yaml" \
  -f "https://doc.powerdns.com/spog/1.0.0/helm/glass-ui/examples/demo-dashboards.yaml"
OIDC configuration
YAML
# Glass UI OIDC Configuration - Mock OAuth2 Server
#
# Configures Glass UI to authenticate via Mock OAuth2 Server.
# Use this with helm/3rd-party-examples/mock-oauth2-server/manifests/deployment.yaml
#
# Prerequisites:
# - Mock OAuth2 Server deployed at https://auth.spog.local
# - cert-manager installed with ClusterIssuer configured
#
# NOTE: The mock server uses "/default" as the issuer ID, so the authority
# URL must include this path: https://auth.spog.local/default
#
# This file only configures authentication. For authorization policies that
# grant permissions based on group claims, also apply oidc-mock-demo-policies.yaml.
#
# Deploy with:
#   helm upgrade glass-ui oci://registry.open-xchange.com/glass/glass-ui \
#     -n controlplane \
#     --set global.imagePullSecretsList[0]=registry-credentials \
#     -f oidc-mock.yaml \
#     -f oidc-mock-demo-policies.yaml

# Global TLS - enables HTTPS for UI ingress and NATS WebSocket
globalTls:
  enabled: true
  secretName: "glass-ui-tls"
  certificate:
    enabled: true
    issuerRef:
      name: "spog-tls-issuer"
      kind: "ClusterIssuer"

# Global ingress hostname
globalIngress:
  host: "console.spog.local"

# OIDC configuration for policy service
# Note: Mock OAuth2 Server uses "/default" as the issuer ID
policy:
  oidc:
    enabled: true
    issuerUrl: "https://auth.spog.local/default"

# UI login configuration
# Note: authority must include "/default" for mock server
ui:
  config:
    loginConfig:
      loginType: oidc
      authority: "https://auth.spog.local/default"
      client_id: "spog-console"
      redirect_uri: "https://console.spog.local/authz/callback"
      post_logout_redirect_uri: "https://console.spog.local"
      additionalScopes:
        - groups
Demo policies
YAML
# Glass UI Demo Policies - Group-Based Authorization
#
# REGO policies that grant permissions based on group claims from the JWT token.
# These policies work with any OIDC provider (Mock OIDC, Dex, Keycloak).
#
# Use with demo-dashboards.yaml and your OIDC config file:
#
#   helm upgrade glass-ui oci://registry.open-xchange.com/glass/glass-ui \
#     -n controlplane \
#     --set global.imagePullSecretsList[0]=registry-credentials \
#     -f oidc-{mock,dex,keycloak}-mkcert.yaml \
#     -f demo-policies.yaml \
#     -f demo-dashboards.yaml
#
# The policies use the 'groups' claim from the JWT token. All OIDC providers
# (Mock OIDC, Dex, Keycloak) are configured to use this claim. Example values:
#
#   Admin:          ["admin:all"]
#   US Operator:    ["region:us", "write:staging"]
#   EU Operator:    ["region:eu", "write:staging"]
#   Prod Admin:     ["region:*", "write:production"]
#   Global Viewer:  ["region:*", "read:clusters"]
#
# For production, see Authentication & Authorization guide for RBAC/ABAC patterns.

policy:
  policies:
    # Dashboard and Navigation Permission Flags
    # These control which menus and dashboards are visible in the UI
    pdns_global_flags.rego: |
      package pdns_global_flags

      import data.user

      # Global overview dashboard - all authenticated non-robot users
      dashboard_global_overview if {
        not input.robot
      }

      # Regional dashboards - for admins and global viewers who need to drill down
      # Regional operators see only their region via "All Clusters" (REGO-filtered)
      dashboard_us_region if user.is_admin
      dashboard_us_region if user.has_global_access

      dashboard_eu_region if user.is_admin
      dashboard_eu_region if user.has_global_access

      # Production dashboard - admin or production operators
      dashboard_production if user.is_admin
      dashboard_production if user.has_production_access

      # Navigation items
      navigation_clusters if true
      navigation_dns_check if true
      navigation_admin if user.is_admin

    # Permission definitions based on groups claim
    pdns_permissions.rego: |
      package pdns_permissions

      import data.user

      # All authenticated users can connect
      connect if true

      # Read permissions - based on group membership
      read if user.is_admin
      read if user.has_read_access

      # Read logs - admin or regional operators
      read_logs if user.is_admin
      read_logs if user.is_operator

      # Clear cache - admin or operators with write access
      clear_cache if user.is_admin
      clear_cache if user.has_write_access

      # Restart instances - admin only
      restart_instance_set if user.is_admin

      # Delete pod - admin only
      delete_pod if user.is_admin

      # DNS check - any authenticated user
      dns_check if true

      # Write operations - admin or write access
      write if user.is_admin
      write if user.has_write_access

    # User authorization logic
    user.rego: |
      package user

      # All OIDC providers (Mock OIDC, Dex, Keycloak) use the 'groups' claim
      # Keycloak maps user 'scopes' attribute to 'groups' claim for compatibility
      default groups := []
      groups := input.user.groups if input.user.groups

      # Admin - full access
      is_admin if "admin:all" in groups

      # Read access - explicit read group or any regional access
      has_read_access if "read:clusters" in groups
      has_read_access if has_region_access

      # Regional operators
      is_operator if has_region_access

      # Write access - explicit write group
      has_write_access if "write:staging" in groups
      has_write_access if "write:development" in groups
      has_write_access if "write:production" in groups
      has_write_access if is_admin

      # Regional access checks

      # Helper rules for global flags (dashboard/navigation visibility)
      has_global_access if "region:*" in groups
      has_production_access if "write:production" in groups

      # Wildcard: "region:*" grants access to all regions
      has_region_access if "region:*" in groups

      # Prefix matching: "region:us" matches us-east, us-west, us-central, etc.
      has_region_access if {
          some group in groups
          startswith(group, "region:")
          prefix := trim_prefix(group, "region:")
          prefix != "*"
          some region in input.cluster.labels.region
          startswith(region, concat("", [prefix, "-"]))
      }

      # Exact matching: "region:us-east" matches only us-east
      has_region_access if {
          some group in groups
          startswith(group, "region:")
          region := trim_prefix(group, "region:")
          region != "*"
          region in input.cluster.labels.region
      }
Demo dashboards
YAML
# Glass UI Demo Dashboards and Navigation
#
# Provides a minimal set of dashboards and navigation menus for demo/testing.
# Works with any OIDC provider (Mock OIDC, Dex, Keycloak) when paired with demo-policies.yaml.
#
# The navigation items use `requires` to control visibility based on REGO policy flags.
# These flags are defined in demo-policies.yaml.
#
# Required policy flags (provided by demo-policies.yaml):
#   - dashboard_global_overview: Shows global overview dashboard
#   - dashboard_us_region: Shows US regional dashboard
#   - dashboard_eu_region: Shows EU regional dashboard
#   - dashboard_production: Shows production dashboard
#   - navigation_dns_check: Shows DNS Query action
#   - navigation_admin: Shows admin debug pages
#
# Usage (works with any OIDC provider):
#   helm upgrade glass-ui oci://registry.open-xchange.com/glass/glass-ui \
#     -n controlplane \
#     --set global.imagePullSecretsList[0]=registry-credentials \
#     -f oidc-{mock,dex,keycloak}-mkcert.yaml \
#     -f demo-policies.yaml \
#     -f demo-dashboards.yaml

globalConfig:
  dashboards:
    # Global Overview - available to all authenticated users
    global-overview:
      title: "DNS Infrastructure Overview"
      description: "All DNS clusters you have access to"
      url: "/"
      requires:
        - "dashboard_global_overview"
      graphs:
        - title: "All DNS Clusters"
          widget: "cc-state-tree-table"
          args:
            filter: ""
        - title: "Infrastructure Health"
          widget: "cc-state-readiness-heatmap"
          args:
            filter: "group by role"
        - title: "DNS Network Topology"
          widget: "cc-state-cytoscape"
          args:
            filter: ""
            layout: "Hierarchical"

    # US Region Dashboard
    us-region:
      title: "US Regional Operations"
      description: "US DNS infrastructure (us-east, us-west, us-central)"
      url: "/dashboards/us"
      requires:
        - "dashboard_us_region"
      graphs:
        - title: "US Clusters"
          widget: "cc-state-tree-table"
          args:
            filter: "region in (\"us-east\", \"us-west\", \"us-central\")"
        - title: "US Health Status"
          widget: "cc-state-readiness-heatmap"
          args:
            filter: "region in (\"us-east\", \"us-west\", \"us-central\") group by role"

    # EU Region Dashboard
    eu-region:
      title: "EU Regional Operations"
      description: "EU DNS infrastructure (eu-west, eu-central, eu-north)"
      url: "/dashboards/eu"
      requires:
        - "dashboard_eu_region"
      graphs:
        - title: "EU Clusters"
          widget: "cc-state-tree-table"
          args:
            filter: "region in (\"eu-west\", \"eu-central\", \"eu-north\")"
        - title: "EU Health Status"
          widget: "cc-state-readiness-heatmap"
          args:
            filter: "region in (\"eu-west\", \"eu-central\", \"eu-north\") group by role"

    # Production Dashboard - admin only
    production:
      title: "Production Overview"
      description: "Production environment clusters"
      url: "/dashboards/production"
      requires:
        - "dashboard_production"
      graphs:
        - title: "Production Clusters"
          widget: "cc-state-tree-table"
          args:
            filter: "environment = \"production\""
        - title: "Production Health"
          widget: "cc-state-readiness-heatmap"
          args:
            filter: "environment = \"production\" group by region"

  navigation:
    menus:
      - name: "Dashboards"
        sections:
          - name: "Overview"
            items:
              - name: "All Clusters"
                url: "/"
          - name: "Regional"
            items:
              - name: "US Region"
                url: "/dashboards/us"
                requires:
                  - "dashboard_us_region"
              - name: "EU Region"
                url: "/dashboards/eu"
                requires:
                  - "dashboard_eu_region"
          - name: "Environment"
            items:
              - name: "Production"
                url: "/dashboards/production"
                requires:
                  - "dashboard_production"

Connect Glass UI to Dex

First, create the ConfigMap with your mkcert CA:

Bash
kubectl create configmap mkcert-ca \
  --from-file=ca-bundle.crt="$(mkcert -CAROOT)/rootCA.pem" \
  -n controlplane

Then deploy with the OIDC and mkcert CA configuration:

Bash
helm upgrade glass-ui oci://registry.open-xchange.com/cc-glass/glass-ui \
  --version "1.0.0" \
  -n controlplane \
  --set global.imagePullSecretsList[0]=registry-credentials \
  -f "https://doc.powerdns.com/spog/1.0.0/helm/glass-ui/examples/oidc-dex.yaml" \
  -f "https://doc.powerdns.com/spog/1.0.0/helm/glass-ui/examples/mkcert-ca.yaml" \
  -f "https://doc.powerdns.com/spog/1.0.0/helm/glass-ui/examples/demo-policies.yaml" \
  -f "https://doc.powerdns.com/spog/1.0.0/helm/glass-ui/examples/demo-dashboards.yaml"
OIDC configuration
YAML
# Glass UI OIDC Configuration - Dex
#
# Configures Glass UI to authenticate via Dex OIDC provider.
# Use this with helm/3rd-party-examples/dex/examples/with-ldap.yaml
#
# Prerequisites:
# - Dex deployed at https://auth.spog.local with LDAP connector
# - OpenLDAP or compatible LDAP directory available
# - cert-manager installed with ClusterIssuer configured
#
# NOTE: URLs assume HTTPS on standard port 443. If your cluster uses a different
# port, update the redirect_uri and post_logout_redirect_uri accordingly.
#
# Deploy with:
#   helm upgrade glass-ui oci://registry.open-xchange.com/glass/glass-ui \
#     -n controlplane \
#     --set global.imagePullSecretsList[0]=registry-credentials \
#     -f oidc-dex.yaml

# Global TLS - enables HTTPS for UI ingress and NATS WebSocket
globalTls:
  enabled: true
  secretName: "glass-ui-tls"
  certificate:
    enabled: true
    issuerRef:
      name: "spog-tls-issuer"
      kind: "ClusterIssuer"

# Global ingress hostname
globalIngress:
  host: "console.spog.local"

# OIDC configuration for policy service
policy:
  oidc:
    enabled: true
    issuerUrl: "https://auth.spog.local"

# UI login configuration
ui:
  config:
    loginConfig:
      loginType: oidc
      authority: "https://auth.spog.local"
      client_id: "spog-console"
      redirect_uri: "https://console.spog.local/authz/callback"
      post_logout_redirect_uri: "https://console.spog.local"
      additionalScopes:
        - groups
Demo policies
YAML
# Glass UI Demo Policies - Group-Based Authorization
#
# REGO policies that grant permissions based on group claims from the JWT token.
# These policies work with any OIDC provider (Mock OIDC, Dex, Keycloak).
#
# Use with demo-dashboards.yaml and your OIDC config file:
#
#   helm upgrade glass-ui oci://registry.open-xchange.com/glass/glass-ui \
#     -n controlplane \
#     --set global.imagePullSecretsList[0]=registry-credentials \
#     -f oidc-{mock,dex,keycloak}-mkcert.yaml \
#     -f demo-policies.yaml \
#     -f demo-dashboards.yaml
#
# The policies use the 'groups' claim from the JWT token. All OIDC providers
# (Mock OIDC, Dex, Keycloak) are configured to use this claim. Example values:
#
#   Admin:          ["admin:all"]
#   US Operator:    ["region:us", "write:staging"]
#   EU Operator:    ["region:eu", "write:staging"]
#   Prod Admin:     ["region:*", "write:production"]
#   Global Viewer:  ["region:*", "read:clusters"]
#
# For production, see Authentication & Authorization guide for RBAC/ABAC patterns.

policy:
  policies:
    # Dashboard and Navigation Permission Flags
    # These control which menus and dashboards are visible in the UI
    pdns_global_flags.rego: |
      package pdns_global_flags

      import data.user

      # Global overview dashboard - all authenticated non-robot users
      dashboard_global_overview if {
        not input.robot
      }

      # Regional dashboards - for admins and global viewers who need to drill down
      # Regional operators see only their region via "All Clusters" (REGO-filtered)
      dashboard_us_region if user.is_admin
      dashboard_us_region if user.has_global_access

      dashboard_eu_region if user.is_admin
      dashboard_eu_region if user.has_global_access

      # Production dashboard - admin or production operators
      dashboard_production if user.is_admin
      dashboard_production if user.has_production_access

      # Navigation items
      navigation_clusters if true
      navigation_dns_check if true
      navigation_admin if user.is_admin

    # Permission definitions based on groups claim
    pdns_permissions.rego: |
      package pdns_permissions

      import data.user

      # All authenticated users can connect
      connect if true

      # Read permissions - based on group membership
      read if user.is_admin
      read if user.has_read_access

      # Read logs - admin or regional operators
      read_logs if user.is_admin
      read_logs if user.is_operator

      # Clear cache - admin or operators with write access
      clear_cache if user.is_admin
      clear_cache if user.has_write_access

      # Restart instances - admin only
      restart_instance_set if user.is_admin

      # Delete pod - admin only
      delete_pod if user.is_admin

      # DNS check - any authenticated user
      dns_check if true

      # Write operations - admin or write access
      write if user.is_admin
      write if user.has_write_access

    # User authorization logic
    user.rego: |
      package user

      # All OIDC providers (Mock OIDC, Dex, Keycloak) use the 'groups' claim
      # Keycloak maps user 'scopes' attribute to 'groups' claim for compatibility
      default groups := []
      groups := input.user.groups if input.user.groups

      # Admin - full access
      is_admin if "admin:all" in groups

      # Read access - explicit read group or any regional access
      has_read_access if "read:clusters" in groups
      has_read_access if has_region_access

      # Regional operators
      is_operator if has_region_access

      # Write access - explicit write group
      has_write_access if "write:staging" in groups
      has_write_access if "write:development" in groups
      has_write_access if "write:production" in groups
      has_write_access if is_admin

      # Regional access checks

      # Helper rules for global flags (dashboard/navigation visibility)
      has_global_access if "region:*" in groups
      has_production_access if "write:production" in groups

      # Wildcard: "region:*" grants access to all regions
      has_region_access if "region:*" in groups

      # Prefix matching: "region:us" matches us-east, us-west, us-central, etc.
      has_region_access if {
          some group in groups
          startswith(group, "region:")
          prefix := trim_prefix(group, "region:")
          prefix != "*"
          some region in input.cluster.labels.region
          startswith(region, concat("", [prefix, "-"]))
      }

      # Exact matching: "region:us-east" matches only us-east
      has_region_access if {
          some group in groups
          startswith(group, "region:")
          region := trim_prefix(group, "region:")
          region != "*"
          region in input.cluster.labels.region
      }
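
To make the region rules concrete: for a cluster whose region label resolves to us-east, a user carrying "region:*" matches through the wildcard rule, "region:us" matches through the prefix rule (us-east starts with us-), and "region:us-east" matches through the exact rule, while "region:eu" grants no access. The same logic applies to whatever region labels your clusters actually carry.
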
Demo dashboards
YAML
# Glass UI Demo Dashboards and Navigation
#
# Provides a minimal set of dashboards and navigation menus for demo/testing.
# Works with any OIDC provider (Mock OIDC, Dex, Keycloak) when paired with demo-policies.yaml.
#
# The navigation items use `requires` to control visibility based on REGO policy flags.
# These flags are defined in demo-policies.yaml.
#
# Required policy flags (provided by demo-policies.yaml):
#   - dashboard_global_overview: Shows global overview dashboard
#   - dashboard_us_region: Shows US regional dashboard
#   - dashboard_eu_region: Shows EU regional dashboard
#   - dashboard_production: Shows production dashboard
#   - navigation_dns_check: Shows DNS Query action
#   - navigation_admin: Shows admin debug pages
#
# Usage (works with any OIDC provider):
#   helm upgrade glass-ui oci://registry.open-xchange.com/glass/glass-ui \
#     -n controlplane \
#     --set global.imagePullSecretsList[0]=registry-credentials \
#     -f oidc-{mock,dex,keycloak}-mkcert.yaml \
#     -f demo-policies.yaml \
#     -f demo-dashboards.yaml

globalConfig:
  dashboards:
    # Global Overview - available to all authenticated users
    global-overview:
      title: "DNS Infrastructure Overview"
      description: "All DNS clusters you have access to"
      url: "/"
      requires:
        - "dashboard_global_overview"
      graphs:
        - title: "All DNS Clusters"
          widget: "cc-state-tree-table"
          args:
            filter: ""
        - title: "Infrastructure Health"
          widget: "cc-state-readiness-heatmap"
          args:
            filter: "group by role"
        - title: "DNS Network Topology"
          widget: "cc-state-cytoscape"
          args:
            filter: ""
            layout: "Hierarchical"

    # US Region Dashboard
    us-region:
      title: "US Regional Operations"
      description: "US DNS infrastructure (us-east, us-west, us-central)"
      url: "/dashboards/us"
      requires:
        - "dashboard_us_region"
      graphs:
        - title: "US Clusters"
          widget: "cc-state-tree-table"
          args:
            filter: "region in (\"us-east\", \"us-west\", \"us-central\")"
        - title: "US Health Status"
          widget: "cc-state-readiness-heatmap"
          args:
            filter: "region in (\"us-east\", \"us-west\", \"us-central\") group by role"

    # EU Region Dashboard
    eu-region:
      title: "EU Regional Operations"
      description: "EU DNS infrastructure (eu-west, eu-central, eu-north)"
      url: "/dashboards/eu"
      requires:
        - "dashboard_eu_region"
      graphs:
        - title: "EU Clusters"
          widget: "cc-state-tree-table"
          args:
            filter: "region in (\"eu-west\", \"eu-central\", \"eu-north\")"
        - title: "EU Health Status"
          widget: "cc-state-readiness-heatmap"
          args:
            filter: "region in (\"eu-west\", \"eu-central\", \"eu-north\") group by role"

    # Production Dashboard - admin only
    production:
      title: "Production Overview"
      description: "Production environment clusters"
      url: "/dashboards/production"
      requires:
        - "dashboard_production"
      graphs:
        - title: "Production Clusters"
          widget: "cc-state-tree-table"
          args:
            filter: "environment = \"production\""
        - title: "Production Health"
          widget: "cc-state-readiness-heatmap"
          args:
            filter: "environment = \"production\" group by region"

  navigation:
    menus:
      - name: "Dashboards"
        sections:
          - name: "Overview"
            items:
              - name: "All Clusters"
                url: "/"
          - name: "Regional"
            items:
              - name: "US Region"
                url: "/dashboards/us"
                requires:
                  - "dashboard_us_region"
              - name: "EU Region"
                url: "/dashboards/eu"
                requires:
                  - "dashboard_eu_region"
          - name: "Environment"
            items:
              - name: "Production"
                url: "/dashboards/production"
                requires:
                  - "dashboard_production"

Connect Glass UI to Keycloak

First, create the ConfigMap with your mkcert CA:

Bash
kubectl create configmap mkcert-ca \
  --from-file=ca-bundle.crt="$(mkcert -CAROOT)/rootCA.pem" \
  -n controlplane
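
If you want to confirm the CA was stored correctly, the following check (a sketch; it assumes openssl is available on your workstation) prints the subject and expiry date of the certificate held in the ConfigMap:

Bash
# Extract the PEM from the ConfigMap and inspect it with openssl
kubectl -n controlplane get configmap mkcert-ca \
  -o jsonpath='{.data.ca-bundle\.crt}' | openssl x509 -noout -subject -enddate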

Then deploy with the OIDC and mkcert CA configuration:

Bash
helm upgrade glass-ui oci://registry.open-xchange.com/cc-glass/glass-ui \
  --version "1.0.0" \
  -n controlplane \
  --set global.imagePullSecretsList[0]=registry-credentials \
  -f "https://doc.powerdns.com/spog/1.0.0/helm/glass-ui/examples/oidc-keycloak.yaml" \
  -f "https://doc.powerdns.com/spog/1.0.0/helm/glass-ui/examples/mkcert-ca.yaml" \
  -f "https://doc.powerdns.com/spog/1.0.0/helm/glass-ui/examples/demo-policies.yaml" \
  -f "https://doc.powerdns.com/spog/1.0.0/helm/glass-ui/examples/demo-dashboards.yaml"
OIDC configuration
YAML
# Glass UI OIDC Configuration - Keycloak
#
# Configures Glass UI to authenticate via Keycloak OIDC provider.
# Use this with Keycloak deployed via the Keycloak Operator.
#
# Prerequisites:
# - Keycloak deployed at https://auth.spog.local
# - SPOG realm created with spog-console client
# - cert-manager installed with ClusterIssuer configured
#
# NOTE: URLs assume HTTPS on standard port 443. If your cluster uses a different
# port, update the redirect_uri and post_logout_redirect_uri accordingly.
#
# Deploy with:
#   helm upgrade glass-ui oci://registry.open-xchange.com/glass/glass-ui \
#     -n controlplane \
#     --set global.imagePullSecretsList[0]=registry-credentials \
#     -f oidc-keycloak.yaml

# Global TLS - enables HTTPS for UI ingress and NATS WebSocket
globalTls:
  enabled: true
  secretName: "glass-ui-tls"
  certificate:
    enabled: true
    issuerRef:
      name: "spog-tls-issuer"
      kind: "ClusterIssuer"

# Global ingress hostname
globalIngress:
  host: "console.spog.local"

# OIDC configuration for policy service
policy:
  oidc:
    enabled: true
    issuerUrl: "https://auth.spog.local/realms/spog"

# UI login configuration
ui:
  config:
    loginConfig:
      loginType: oidc
      authority: "https://auth.spog.local/realms/spog"
      client_id: "spog-console"
      redirect_uri: "https://console.spog.local/authz/callback"
      post_logout_redirect_uri: "https://console.spog.local"
      additionalScopes:
        - groups
Demo policies
YAML
# Glass UI Demo Policies - Group-Based Authorization
#
# REGO policies that grant permissions based on group claims from the JWT token.
# These policies work with any OIDC provider (Mock OIDC, Dex, Keycloak).
#
# Use with demo-dashboards.yaml and your OIDC config file:
#
#   helm upgrade glass-ui oci://registry.open-xchange.com/glass/glass-ui \
#     -n controlplane \
#     --set global.imagePullSecretsList[0]=registry-credentials \
#     -f oidc-{mock,dex,keycloak}-mkcert.yaml \
#     -f demo-policies.yaml \
#     -f demo-dashboards.yaml
#
# The policies use the 'groups' claim from the JWT token. All OIDC providers
# (Mock OIDC, Dex, Keycloak) are configured to use this claim. Example values:
#
#   Admin:          ["admin:all"]
#   US Operator:    ["region:us", "write:staging"]
#   EU Operator:    ["region:eu", "write:staging"]
#   Prod Admin:     ["region:*", "write:production"]
#   Global Viewer:  ["region:*", "read:clusters"]
#
# For production, see Authentication & Authorization guide for RBAC/ABAC patterns.

policy:
  policies:
    # Dashboard and Navigation Permission Flags
    # These control which menus and dashboards are visible in the UI
    pdns_global_flags.rego: |
      package pdns_global_flags

      import data.user

      # Global overview dashboard - all authenticated non-robot users
      dashboard_global_overview if {
        not input.robot
      }

      # Regional dashboards - for admins and global viewers who need to drill down
      # Regional operators see only their region via "All Clusters" (REGO-filtered)
      dashboard_us_region if user.is_admin
      dashboard_us_region if user.has_global_access

      dashboard_eu_region if user.is_admin
      dashboard_eu_region if user.has_global_access

      # Production dashboard - admin or production operators
      dashboard_production if user.is_admin
      dashboard_production if user.has_production_access

      # Navigation items
      navigation_clusters if true
      navigation_dns_check if true
      navigation_admin if user.is_admin

    # Permission definitions based on groups claim
    pdns_permissions.rego: |
      package pdns_permissions

      import data.user

      # All authenticated users can connect
      connect if true

      # Read permissions - based on group membership
      read if user.is_admin
      read if user.has_read_access

      # Read logs - admin or regional operators
      read_logs if user.is_admin
      read_logs if user.is_operator

      # Clear cache - admin or operators with write access
      clear_cache if user.is_admin
      clear_cache if user.has_write_access

      # Restart instances - admin only
      restart_instance_set if user.is_admin

      # Delete pod - admin only
      delete_pod if user.is_admin

      # DNS check - any authenticated user
      dns_check if true

      # Write operations - admin or write access
      write if user.is_admin
      write if user.has_write_access

    # User authorization logic
    user.rego: |
      package user

      # All OIDC providers (Mock OIDC, Dex, Keycloak) use the 'groups' claim
      # Keycloak maps user 'scopes' attribute to 'groups' claim for compatibility
      default groups := []
      groups := input.user.groups if input.user.groups

      # Admin - full access
      is_admin if "admin:all" in groups

      # Read access - explicit read group or any regional access
      has_read_access if "read:clusters" in groups
      has_read_access if has_region_access

      # Regional operators
      is_operator if has_region_access

      # Write access - explicit write group
      has_write_access if "write:staging" in groups
      has_write_access if "write:development" in groups
      has_write_access if "write:production" in groups
      has_write_access if is_admin

      # Regional access checks

      # Helper rules for global flags (dashboard/navigation visibility)
      has_global_access if "region:*" in groups
      has_production_access if "write:production" in groups

      # Wildcard: "region:*" grants access to all regions
      has_region_access if "region:*" in groups

      # Prefix matching: "region:us" matches us-east, us-west, us-central, etc.
      has_region_access if {
          some group in groups
          startswith(group, "region:")
          prefix := trim_prefix(group, "region:")
          prefix != "*"
          some region in input.cluster.labels.region
          startswith(region, concat("", [prefix, "-"]))
      }

      # Exact matching: "region:us-east" matches only us-east
      has_region_access if {
          some group in groups
          startswith(group, "region:")
          region := trim_prefix(group, "region:")
          region != "*"
          region in input.cluster.labels.region
      }
Demo dashboards
YAML
# Glass UI Demo Dashboards and Navigation
#
# Provides a minimal set of dashboards and navigation menus for demo/testing.
# Works with any OIDC provider (Mock OIDC, Dex, Keycloak) when paired with demo-policies.yaml.
#
# The navigation items use `requires` to control visibility based on REGO policy flags.
# These flags are defined in demo-policies.yaml.
#
# Required policy flags (provided by demo-policies.yaml):
#   - dashboard_global_overview: Shows global overview dashboard
#   - dashboard_us_region: Shows US regional dashboard
#   - dashboard_eu_region: Shows EU regional dashboard
#   - dashboard_production: Shows production dashboard
#   - navigation_dns_check: Shows DNS Query action
#   - navigation_admin: Shows admin debug pages
#
# Usage (works with any OIDC provider):
#   helm upgrade glass-ui oci://registry.open-xchange.com/glass/glass-ui \
#     -n controlplane \
#     --set global.imagePullSecretsList[0]=registry-credentials \
#     -f oidc-{mock,dex,keycloak}-mkcert.yaml \
#     -f demo-policies.yaml \
#     -f demo-dashboards.yaml

globalConfig:
  dashboards:
    # Global Overview - available to all authenticated users
    global-overview:
      title: "DNS Infrastructure Overview"
      description: "All DNS clusters you have access to"
      url: "/"
      requires:
        - "dashboard_global_overview"
      graphs:
        - title: "All DNS Clusters"
          widget: "cc-state-tree-table"
          args:
            filter: ""
        - title: "Infrastructure Health"
          widget: "cc-state-readiness-heatmap"
          args:
            filter: "group by role"
        - title: "DNS Network Topology"
          widget: "cc-state-cytoscape"
          args:
            filter: ""
            layout: "Hierarchical"

    # US Region Dashboard
    us-region:
      title: "US Regional Operations"
      description: "US DNS infrastructure (us-east, us-west, us-central)"
      url: "/dashboards/us"
      requires:
        - "dashboard_us_region"
      graphs:
        - title: "US Clusters"
          widget: "cc-state-tree-table"
          args:
            filter: "region in (\"us-east\", \"us-west\", \"us-central\")"
        - title: "US Health Status"
          widget: "cc-state-readiness-heatmap"
          args:
            filter: "region in (\"us-east\", \"us-west\", \"us-central\") group by role"

    # EU Region Dashboard
    eu-region:
      title: "EU Regional Operations"
      description: "EU DNS infrastructure (eu-west, eu-central, eu-north)"
      url: "/dashboards/eu"
      requires:
        - "dashboard_eu_region"
      graphs:
        - title: "EU Clusters"
          widget: "cc-state-tree-table"
          args:
            filter: "region in (\"eu-west\", \"eu-central\", \"eu-north\")"
        - title: "EU Health Status"
          widget: "cc-state-readiness-heatmap"
          args:
            filter: "region in (\"eu-west\", \"eu-central\", \"eu-north\") group by role"

    # Production Dashboard - admin only
    production:
      title: "Production Overview"
      description: "Production environment clusters"
      url: "/dashboards/production"
      requires:
        - "dashboard_production"
      graphs:
        - title: "Production Clusters"
          widget: "cc-state-tree-table"
          args:
            filter: "environment = \"production\""
        - title: "Production Health"
          widget: "cc-state-readiness-heatmap"
          args:
            filter: "environment = \"production\" group by region"

  navigation:
    menus:
      - name: "Dashboards"
        sections:
          - name: "Overview"
            items:
              - name: "All Clusters"
                url: "/"
          - name: "Regional"
            items:
              - name: "US Region"
                url: "/dashboards/us"
                requires:
                  - "dashboard_us_region"
              - name: "EU Region"
                url: "/dashboards/eu"
                requires:
                  - "dashboard_eu_region"
          - name: "Environment"
            items:
              - name: "Production"
                url: "/dashboards/production"
                requires:
                  - "dashboard_production"

Verification

  1. Access SPOG console: https://console.spog.local
  2. Click Login - you should be redirected to the OIDC provider
  3. Enter credentials (see below)
  4. Verify you are redirected back to SPOG with an authenticated session
  5. Check the user claims at the /debug/user endpoint

Mock OIDC Server Login

The mock server accepts any username with any password. Enter your desired claims as JSON in the Claims field on the login form.

How it works

The Username becomes the sub (subject) claim in the JWT token. The Claims field lets you specify any additional claims, including groups for authorization.

Admin (all dashboards):

JSON
{"groups": ["admin:all"]}

US Operator (Global dashboard, US clusters only):

JSON
{"groups": ["region:us", "write:staging", "write:development"]}

EU Operator (Global dashboard, EU clusters only):

JSON
{"groups": ["region:eu", "write:staging", "write:development"]}

Prod Admin (all dashboards):

JSON
{"groups": ["region:*", "write:production"]}

Global Viewer (Global, US, EU dashboards - read-only):

JSON
{"groups": ["region:*", "read:clusters"]}

LDAP Login

Use the credentials from your LDAP directory. With the test OpenLDAP setup:

Username        Password   Groups                          Dashboards
admin           secret     admin:all                       All
us-operator     secret     region:us, write:staging/dev    Global (US clusters only)
eu-operator     secret     region:eu, write:staging/dev    Global (EU clusters only)
prod-admin      secret     region:*, write:production      All
global-viewer   secret     region:*, read:clusters         Global, US, EU
dev-user        secret     region:us-east, write:dev       Global (us-east only)

Group memberships from LDAP become the groups claim in the JWT token automatically.

Keycloak Login

Username        Password   Scopes                          Dashboards
admin           secret     admin:all                       All
us-operator     secret     region:us, write:staging/dev    Global (US clusters only)
eu-operator     secret     region:eu, write:staging/dev    Global (EU clusters only)
prod-admin      secret     region:*, write:production      All
global-viewer   secret     region:*, read:clusters         Global, US, EU
dev-user        secret     region:us-east, write:dev       Global (us-east only)

Quick Reference

Mock OIDC Server
  Endpoint          URL
  OIDC Discovery    https://auth.spog.local/default/.well-known/openid-configuration

Dex
  Endpoint          URL
  OIDC Discovery    https://auth.spog.local/.well-known/openid-configuration

Keycloak
  Endpoint          URL
  Admin Console     https://auth.spog.local/admin
  OIDC Discovery    https://auth.spog.local/realms/spog/.well-known/openid-configuration
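
A quick way to confirm that an issuer is reachable and presents a certificate your machine trusts is to fetch its discovery document using the mkcert root CA (shown here for Keycloak; adjust the URL for the provider you deployed):

Bash
# Should print a JSON document whose "issuer" field matches the configured authority
curl -fsS --cacert "$(mkcert -CAROOT)/rootCA.pem" \
  https://auth.spog.local/realms/spog/.well-known/openid-configuration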

What You've Built

Depending on the configuration you chose, one of the following authentication flows is now in place:

  - Mock OIDC Authentication Flow
  - Dex + LDAP Authentication Flow
  - Keycloak Authentication Flow (Local Users)
  - Keycloak + LDAP Authentication Flow


Production Deployment

This guide uses mkcert for local TLS certificates. For production deployments:

  1. Use a real certificate issuer - Configure cert-manager with Let's Encrypt or your corporate CA instead of the mkcert CA; see the sketch after this list and the cert-manager documentation for solver options with your ingress controller
  2. Skip the mkcert-ca values file - Omit -f "https://doc.powerdns.com/spog/1.0.0/helm/glass-ui/examples/mkcert-ca.yaml" from the Glass UI deployment, since your OIDC provider will present a publicly trusted certificate
  3. Use your own domain - Replace the *.spog.local hostnames with your actual domain in all configurations
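
As a starting point, a production ClusterIssuer using Let's Encrypt could look like the sketch below. It assumes HTTP-01 validation through an nginx ingress class and keeps the spog-tls-issuer name so the issuerRef in the values files continues to work; replace the contact email before applying.

YAML
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: spog-tls-issuer
spec:
  acme:
    # Let's Encrypt production endpoint
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com            # replace with a monitored address
    privateKeySecretRef:
      name: spog-tls-issuer-account-key
    solvers:
      - http01:
          ingress:
            class: nginx                # adjust to your ingress controller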

What's Next?