End-to-end guide for deploying the EQTY Lab Governance Platform on Kubernetes with Keycloak as the identity provider.
- Overview
- Prerequisites
- Infrastructure Setup
- Domain & TLS Configuration
- Deploying Keycloak
- Generating Configuration with govctl
- Running Keycloak Bootstrap
- Creating Kubernetes Secrets
- Configuring values.yaml
- Deploying the Governance Platform
- Post-Install Setup & Verification
The Governance Platform consists of four microservices deployed via a single Helm umbrella chart (governance-platform), backed by a PostgreSQL database, and integrated with an external Keycloak instance for identity and access management.
```mermaid
flowchart TD
    A[👥 Users] --> PE[🌍 Public Endpoint]
    PE --> K8s
    subgraph K8s[☸️ Kubernetes Cluster]
        I[🚦 Ingress - NGINX + TLS] --> GS[🖥️ Governance Studio]
        GS --> GSV[⚙️ Governance Service]
        GS --> AUTH[🔐 Auth Service]
        GS --> INT[🛡️ Integrity Service]
        GSV --> DB[🗄️ PostgreSQL]
        AUTH --> DB
        INT --> DB
    end
    subgraph EXT[🧩 External Dependencies]
        KC[🔑 Keycloak - IdP]
        OS[📦 Object Storage]
        KM[🗝️ Key Management - Azure Key Vault / AWS KMS]
    end
    K8s --> EXT
```
| Service | Language | Description | Ingress Path |
|---|---|---|---|
| governance-studio | React | Web UI for governance workflows | / |
| governance-service | Go | Backend API, workflow engine, worker | /governanceService/ |
| auth-service | Go | Authentication, authorization, token exchange | /authService/ |
| integrity-service | Rust | Verifiable credentials and lineage tracking | /integrityService/ |
| PostgreSQL | — | Shared database (Bitnami Helm chart) | Internal only |
All four application services are exposed through a single domain via NGINX Ingress with path-based routing. PostgreSQL is internal to the cluster.
These components live outside the governance-platform Helm chart and must be provisioned separately before deploying.
| Dependency | Purpose | Required? |
|---|---|---|
| Keycloak | Identity provider — manages users, realms, OAuth clients | Yes |
| Object Storage | Artifact and document storage (Azure Blob, GCS, or AWS S3) | Yes |
| Key Management | DID signing key management for verifiable credentials (Azure Key Vault or AWS KMS) | Yes |
| DNS | A-record or CNAME pointing your domain to the cluster ingress | Yes |
| TLS Certificates | cert-manager with a ClusterIssuer/Issuer, or pre-provisioned certs | Yes |
The deployment uses an umbrella chart pattern. You deploy a single chart (governance-platform) which pulls in all subcharts as dependencies:
```
charts/
├── governance-platform/              # Umbrella chart — deploy this
│   ├── Chart.yaml                    # Declares subchart dependencies
│   ├── values.yaml                   # Default values for all services
│   ├── templates/                    # Shared resources (secrets, config)
│   └── examples/                     # Ready-to-use values files
│       ├── values-keycloak.yaml      # Keycloak deployment example
│       ├── values-auth0.yaml         # Auth0 deployment example
│       ├── values-entra.yaml         # Microsoft Entra ID deployment example
│       └── secrets-sample.yaml       # Secrets template
├── governance-studio/                # Frontend subchart
├── governance-service/               # Backend API subchart
├── integrity-service/                # Credentials/lineage subchart
├── auth-service/                     # Authentication subchart
└── keycloak-bootstrap/               # Keycloak realm/client configuration (standalone)
```
The keycloak-bootstrap chart is deployed separately — it runs a one-time Kubernetes Job that configures the Keycloak realm, OAuth clients, scopes, and an initial admin user.
The Keycloak bootstrap creates three OAuth clients in the governance realm:
| Client ID | Type | Purpose |
|---|---|---|
| governance-platform-frontend | Public (SPA) | Browser-based authentication for governance-studio |
| governance-platform-backend | Confidential | Service-to-service auth; has service account with query-users and view-users roles |
| governance-worker | Confidential (service account only) | Automated governance workflow execution |
The end-to-end deployment follows this order:
```
1. Provision infrastructure (storage, key management, DNS, TLS)
   │
2. Deploy Keycloak (if self-hosted)
   │
3. Generate configuration with govctl (bootstrap, secrets, values files)
   │
4. Run keycloak-bootstrap (creates realm, clients, admin user in Keycloak)
   │
5. Create Kubernetes secrets (uses Keycloak-generated client secrets)
   │
6. Configure values.yaml
   │
7. Deploy governance-platform (Helm umbrella chart)
   │
   ├── PostgreSQL starts, initializes databases
   ├── governance-service starts, runs migrations
   ├── auth-service, integrity-service, governance-studio start
   ├── Post-install hook creates organization + admin user in DB
   │
8. Post-install verification
```
Key ordering note: The `keycloak-bootstrap` chart must be run before deploying the governance-platform, because the platform services need valid OAuth client credentials at startup. The governance-platform chart includes a Helm post-install hook that automatically creates the organization and platform-admin user in the database after deployment.
| Tool | Minimum Version | Purpose |
|---|---|---|
| kubectl | 1.21+ | Kubernetes cluster management |
| Helm | 3.8+ | Chart deployment |
| jq | 1.6+ | JSON processing (used by helper scripts) |
| curl | — | API calls (used by helper scripts) |
| openssl | — | Generating random secrets |
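The secret-creation commands later in this guide rely on `openssl rand`. As a quick local sketch of what those commands produce (the lengths shown follow directly from the encodings used):

```shell
# Generate secrets the same way the later kubectl commands do
B64_SECRET="$(openssl rand -base64 32)"   # 32 random bytes, base64-encoded
HEX_SECRET="$(openssl rand -hex 32)"      # 32 random bytes, hex-encoded

# base64 of 32 bytes is 44 characters; hex of 32 bytes is 64 characters
echo "base64 length: ${#B64_SECRET}"   # prints: base64 length: 44
echo "hex length: ${#HEX_SECRET}"      # prints: hex length: 64
```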
- Kubernetes 1.21+ with RBAC enabled
- NGINX Ingress Controller installed and configured as the default ingress class (see `scripts/nginx.sh`)
- cert-manager installed with a ClusterIssuer or Issuer configured for TLS (see `scripts/cert-issuer.sh`)
- Sufficient resources for the platform (recommended minimums):
| Component | CPU Request | Memory Request | Storage |
|---|---|---|---|
| governance-service | 250m | 256Mi | — |
| auth-service | 250m | 256Mi | — |
| integrity-service | 250m | 256Mi | — |
| governance-studio | 100m | 128Mi | — |
| PostgreSQL | 500m | 1Gi | 10Gi PVC |
A running Keycloak server accessible from within the Kubernetes cluster. This can be:
- Self-hosted in the same cluster — deployed via the Bitnami Keycloak Helm chart or the official Keycloak Operator
- Self-hosted on a separate cluster or VM
- Managed Keycloak service (e.g., Red Hat SSO)
Requirements:
- Keycloak admin credentials available (username + password for the `master` realm)
- Network connectivity from the governance namespace pods to Keycloak's HTTP port
- If using an external Keycloak, a publicly accessible URL (e.g., `https://keycloak.your-domain.com`)
- If using an in-cluster Keycloak, internal service DNS is sufficient (e.g., `http://keycloak:8080/keycloak`)
Platform images are hosted on GitHub Container Registry (GHCR). You need:
- A GitHub Personal Access Token (PAT) with `read:packages` scope, or
- Access to a mirror registry containing the platform images
Depending on your cloud provider, provision the following before deployment:
Object Storage (one of):
- Azure Blob Storage — storage account + container(s) for governance artifacts and integrity store
- Google Cloud Storage — bucket(s) + service account with storage admin permissions
- AWS S3 — bucket(s) + IAM user/role with read/write access
Key Management (for verifiable credential signing — choose one):
- Azure Key Vault — vault instance + service principal with key sign/verify permissions
- AWS KMS — IAM user/role with kms:CreateKey, kms:Sign, kms:Verify, kms:DescribeKey, kms:GetPublicKey, kms:CreateAlias, kms:ScheduleKeyDeletion permissions
A domain name (or subdomain) that you control, with the ability to create A-records or CNAMEs pointing to your cluster's ingress controller external IP.
The platform uses a single domain with path-based routing:
| URL Path | Service |
|---|---|
https://governance.your-domain.com/ |
governance-studio (UI) |
https://governance.your-domain.com/governanceService/ |
governance-service (API) |
https://governance.your-domain.com/authService/ |
auth-service |
https://governance.your-domain.com/integrityService/ |
integrity-service |
Keycloak typically runs on a separate domain (e.g., https://keycloak.your-domain.com) or on the same domain under a subpath (e.g., https://governance.your-domain.com/keycloak).
Before proceeding, confirm:
- Kubernetes cluster is running and `kubectl` is configured
- NGINX Ingress Controller is installed
- cert-manager is installed with a working Issuer/ClusterIssuer
- Keycloak is deployed and accessible
- Keycloak admin credentials are known
- Object storage is provisioned (Azure Blob, GCS, or S3)
- Key management is provisioned (Azure Key Vault or AWS KMS) for VC signing
- DNS domain is available and you can create records
- GitHub PAT with `read:packages` scope is available
- Helm 3.8+ and kubectl 1.21+ are installed locally
Provision the following cloud resources before deploying. The platform requires object storage and a key management provider (Azure Key Vault or AWS KMS) for DID signing. A running Kubernetes cluster with kubectl configured is assumed.
Terraform alternative: These resources can also be provisioned using Terraform instead of the CLI commands below.
Choose one storage provider. Each service has its own provider setting (config.storageProvider for governance-service, config.integrityAppBlobStoreType for integrity-service), so they can be configured independently if needed.
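As a sketch of the independent provider settings (the values nesting here is assumed from the field names; confirm the exact structure against each subchart's values.yaml):

```yaml
# Sketch only — nesting assumed, not verified against the chart schemas
governance-service:
  config:
    storageProvider: azure          # governance-service artifact storage
integrity-service:
  config:
    integrityAppBlobStoreType: aws  # integrity-service blob store, chosen independently
```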
Create a storage account and two containers:
```bash
# Create storage account
az storage account create \
  --name yourstorageaccount \
  --resource-group your-resource-group \
  --location eastus \
  --sku Standard_LRS

# Create containers
az storage container create --name governance-artifacts --account-name yourstorageaccount
az storage container create --name integrity-store --account-name yourstorageaccount

# Get the account key (needed for secrets later)
az storage account keys list --account-name yourstorageaccount --query '[0].value' -o tsv
```

You'll need these values for your values.yaml:
| Value | governance-service field | integrity-service field |
|---|---|---|
| Storage account name | azureStorageAccountName | integrityAppBlobStoreAccount |
| Artifacts container | azureStorageContainerName | — |
| Integrity container | — | integrityAppBlobStoreContainer |
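Mapped into Helm values, these Azure fields might look like the following (the nesting under `config` is a sketch inferred from the table above; the container names match the `az` commands in this section):

```yaml
# Sketch only — nesting assumed; field names from the table above
governance-service:
  config:
    azureStorageAccountName: yourstorageaccount
    azureStorageContainerName: governance-artifacts
integrity-service:
  config:
    integrityAppBlobStoreAccount: yourstorageaccount
    integrityAppBlobStoreContainer: integrity-store
```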
Create two buckets and a service account:
```bash
# Create buckets
gcloud storage buckets create gs://your-governance-artifacts --location=us-central1
gcloud storage buckets create gs://your-integrity-store --location=us-central1

# Create service account
gcloud iam service-accounts create governance-storage \
  --display-name="Governance Platform Storage"

# Grant access
for BUCKET in your-governance-artifacts your-integrity-store; do
  gcloud storage buckets add-iam-policy-binding gs://$BUCKET \
    --member="serviceAccount:governance-storage@your-project.iam.gserviceaccount.com" \
    --role="roles/storage.objectAdmin"
done

# Create key (needed for secrets later)
gcloud iam service-accounts keys create service-account.json \
  --iam-account=governance-storage@your-project.iam.gserviceaccount.com
```

You'll need these values for your values.yaml:
| Value | governance-service field | integrity-service field |
|---|---|---|
| Artifacts bucket | gcsBucketName | — |
| Integrity bucket | — | integrityAppBlobStoreGcsBucket |
| Integrity folder (optional) | — | integrityAppBlobStoreGcsFolder |
Create two buckets and an IAM user:
```bash
# Create buckets
aws s3 mb s3://your-governance-artifacts --region us-east-1
aws s3 mb s3://your-integrity-store --region us-east-1

# Create IAM user with programmatic access
aws iam create-user --user-name governance-storage
aws iam attach-user-policy --user-name governance-storage \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess  # Or a scoped policy

# Create access key (needed for secrets later)
aws iam create-access-key --user-name governance-storage
```

You'll need these values for your values.yaml:
| Value | governance-service field | integrity-service field |
|---|---|---|
| Region | awsS3Region | integrityAppBlobStoreAwsRegion |
| Artifacts bucket | awsS3BucketName | — |
| Integrity bucket | — | integrityAppBlobStoreAwsBucket |
| Integrity folder (optional) | — | integrityAppBlobStoreAwsFolder |
The auth-service uses a key management provider for DID signing key management. It dynamically creates per-user signing keys. Choose one of the following providers.
The service principal needs key create/delete permissions in addition to sign/verify.
```bash
# Create Key Vault
az keyvault create \
  --name your-keyvault \
  --resource-group your-resource-group \
  --location eastus

# Create service principal
az ad sp create-for-rbac --name governance-keyvault-sp

# Grant key and secret permissions to the service principal
az keyvault set-policy \
  --name your-keyvault \
  --spn <service-principal-app-id> \
  --key-permissions create delete get list encrypt decrypt unwrapKey wrapKey sign verify \
  --secret-permissions get list set delete
```

Note: The service principal requires `create` and `delete` key permissions because the auth-service creates individual DID signing keys per user in the Key Vault at login time.
You'll need these values for your values.yaml and secrets.yaml:
| Value | Field |
|---|---|
| Vault URL | auth-service.config.keyManagement.azure_key_vault.vaultUrl |
| Tenant ID | auth-service.config.keyManagement.azure_key_vault.tenantId |
| Service principal client ID | Secret: platform-azure-key-vault → client-id |
| Service principal client secret | Secret: platform-azure-key-vault → client-secret |
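Using the field paths from the table above, the corresponding values fragment might look like this (the vault URL and tenant ID shown are placeholders, not real values):

```yaml
# Sketch only — field paths from the table above, placeholder values
auth-service:
  config:
    keyManagement:
      azure_key_vault:
        vaultUrl: "https://your-keyvault.vault.azure.net/"
        tenantId: "<your-tenant-id>"
```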
To retrieve the service principal credentials:
```bash
# The client ID (appId) is returned by az ad sp create-for-rbac
# To find it later:
az ad sp list --display-name governance-keyvault-sp --query '[0].appId' -o tsv

# The client secret (password) is returned at creation time only
# To generate a new one:
az ad sp credential reset --id <service-principal-app-id> --query password -o tsv
```

Create an IAM user or role with KMS permissions for DID signing.
```bash
# Create IAM policy for KMS access
aws iam create-policy \
  --policy-name governance-kms-signing \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": [
        "kms:CreateKey",
        "kms:CreateAlias",
        "kms:DeleteAlias",
        "kms:DescribeKey",
        "kms:GetPublicKey",
        "kms:ListAliases",
        "kms:ListKeys",
        "kms:ScheduleKeyDeletion",
        "kms:Sign",
        "kms:Verify",
        "kms:TagResource"
      ],
      "Resource": "*"
    }]
  }'

# Create IAM user and attach policy
aws iam create-user --user-name governance-kms-user
aws iam attach-user-policy \
  --user-name governance-kms-user \
  --policy-arn arn:aws:iam::YOUR_ACCOUNT_ID:policy/governance-kms-signing

# Create access keys
aws iam create-access-key --user-name governance-kms-user
```

You'll need these values for your values.yaml and secrets.yaml:
| Value | Field |
|---|---|
| Region | auth-service.config.keyManagement.aws_kms.region |
| Access Key ID | Secret: platform-aws-kms → access-key-id |
| Secret Access Key | Secret: platform-aws-kms → secret-access-key |
| Session Token | Secret: platform-aws-kms → session-token (optional) |
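The non-secret part of this configuration reduces to a single values field (path from the table above; the region is a placeholder), with the credentials delivered via the `platform-aws-kms` secret:

```yaml
# Sketch only — field path from the table above
auth-service:
  config:
    keyManagement:
      aws_kms:
        region: us-east-1
```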
After completing this section, you should have:
| Resource | What You Need for Later |
|---|---|
| Object storage | Account name/keys, 2 container/bucket names |
| Key management | Azure Key Vault: vault URL, tenant ID, SP client ID/secret — or — AWS KMS: region, access key ID, secret access key |
These values will be used in Section 8 (Creating Secrets) and Section 9 (Configuring values.yaml).
If not already installed, use the provided helper script:

```bash
./scripts/nginx.sh
```

This installs the ingress-nginx Helm chart into the ingress-nginx namespace.
The platform requires one domain for the governance services. Keycloak can run on a separate domain or on the same domain under /keycloak.
Create DNS records pointing to your NGINX Ingress Controller's external IP:

```bash
# Find your ingress controller's external IP or hostname in the EXTERNAL-IP column
# Note: On EKS this will be a hostname (e.g., xxx.elb.amazonaws.com) rather than an IP
kubectl get svc -n ingress-nginx ingress-nginx-controller
```

Then create A-records (or CNAME records if using an EKS load balancer hostname):
| Record | Type | Value |
|---|---|---|
| governance.your-domain.com | A | `<ingress-external-ip>` |
| keycloak.your-domain.com (if separate domain) | A | `<ingress-external-ip>` |
The platform uses cert-manager to automatically provision TLS certificates from Let's Encrypt.
If not already installed, use the provided helper script:
```bash
./scripts/cert-issuer.sh
```

This installs cert-manager into the ingress-nginx namespace. To install into a different namespace:

```bash
./scripts/cert-issuer.sh --namespace cert-manager
```

cert-manager supports two issuer types:
- Issuer — namespace-scoped. Can only issue certificates for ingress resources within the same namespace. Use the `cert-manager.io/issuer` annotation in your ingress.
- ClusterIssuer — cluster-wide. Can issue certificates for ingress resources in any namespace. Use the `cert-manager.io/cluster-issuer` annotation in your ingress.
The example values files use a namespace-scoped Issuer with the cert-manager.io/issuer annotation. If you prefer a ClusterIssuer (e.g., to share one issuer across multiple namespaces), adjust the kind and ingress annotations accordingly.
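For example, the only change between the two styles is the annotation key; the issuer name itself stays the same:

```yaml
# Namespace-scoped Issuer (as used by the example values)
ingress:
  annotations:
    cert-manager.io/issuer: "letsencrypt-prod"

# ClusterIssuer alternative — swap the annotation key only
# ingress:
#   annotations:
#     cert-manager.io/cluster-issuer: "letsencrypt-prod"
```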
Option A: Namespace-scoped Issuer (used by example values)
```bash
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-prod
  namespace: governance
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: <email address>
    privateKeySecretRef:
      name: letsencrypt-production
    solvers:
    - http01:
        ingress:
          ingressClassName: nginx
EOF
```

Ingress annotation: `cert-manager.io/issuer: "letsencrypt-prod"`
Option B: ClusterIssuer
```bash
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: <email address>
    privateKeySecretRef:
      name: letsencrypt-production
    solvers:
    - http01:
        ingress:
          ingressClassName: nginx
EOF
```

Ingress annotation: `cert-manager.io/cluster-issuer: "letsencrypt-prod"`
Replace <email address> with your actual email address. This email is used by Let's Encrypt for certificate expiration notifications.
Note: The Issuer name (`letsencrypt-prod`) must match the corresponding annotation in your ingress configuration. If you switch from Issuer to ClusterIssuer, update all `cert-manager.io/issuer` annotations to `cert-manager.io/cluster-issuer` in your values file.
Each service's ingress is configured with:
- A `cert-manager.io/issuer` annotation that references the Issuer
- A `tls` block specifying the TLS secret name and hostname
For example, from values-keycloak.yaml:
```yaml
ingress:
  enabled: true
  className: "nginx"
  annotations:
    cert-manager.io/issuer: "letsencrypt-prod"
  hosts:
    - host: governance.your-domain.com
      paths:
        - path: "/authService(/|$)(.*)"
          pathType: ImplementationSpecific
  tls:
    - secretName: prod-tls-secret
      hosts:
        - governance.your-domain.com
```

cert-manager watches for ingress resources with the cert-manager.io/issuer annotation and automatically requests and renews certificates. The certificate is stored in the Kubernetes secret specified by secretName (e.g., prod-tls-secret).
All four services share the same TLS secret name and hostname since they run on the same domain with different paths.
After DNS propagation:
```bash
# Verify DNS resolution
dig governance.your-domain.com

# After deploying (Section 10), verify TLS certificate
kubectl get certificate -n governance
kubectl describe certificate -n governance
```

The Governance Platform requires a running Keycloak instance. This section covers deploying Keycloak into the same Kubernetes cluster. If you already have a Keycloak instance running, skip to creating the required secrets and then proceed to Section 7.
If not already created:
```bash
kubectl create namespace governance
```

The recommended approach for in-cluster Keycloak is the Bitnami Helm chart:

```bash
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
```

Create a values file for your Keycloak deployment (e.g., keycloak-values.yaml):
```yaml
# Keycloak server configuration
auth:
  adminUser: admin
  adminPassword: ""              # Will be set via existing secret
  existingSecret: "keycloak-admin"
  passwordSecretKey: "password"

# Run Keycloak under /keycloak subpath
httpRelativePath: "/keycloak/"

# Production mode with TLS termination at ingress
production: true

# PostgreSQL - use a dedicated database or the platform's shared database
postgresql:
  enabled: true
  auth:
    postgresPassword: ""         # Set via secret or generate
    database: keycloak

# Ingress configuration
ingress:
  enabled: true
  ingressClassName: "nginx"
  hostname: governance.your-domain.com   # Or keycloak.your-domain.com
  path: /keycloak
  annotations:
    cert-manager.io/issuer: "letsencrypt-prod"
  tls: true

# Resource limits
resources:
  requests:
    cpu: 500m
    memory: 512Mi
  limits:
    cpu: 1000m
    memory: 1Gi
```

Before deploying Keycloak, create the secrets that both Keycloak and the bootstrap job will need:
```bash
# Keycloak admin password (master realm)
kubectl create secret generic keycloak-admin \
  --from-literal=password="$(openssl rand -base64 32)" \
  --namespace governance

# Platform admin password (governance realm user — created by bootstrap)
kubectl create secret generic platform-admin \
  --from-literal=password="$(openssl rand -base64 32)" \
  --namespace governance
```

```bash
helm upgrade --install keycloak bitnami/keycloak \
  --namespace governance \
  --values keycloak-values.yaml \
  --wait \
  --timeout 10m
```

```bash
# Check pod status
kubectl get pods -l app.kubernetes.io/name=keycloak -n governance

# Check readiness
kubectl get pod -l app.kubernetes.io/name=keycloak -n governance \
  -o jsonpath='{.items[0].status.conditions[?(@.type=="Ready")].status}'

# Test internal connectivity (should return HTML or redirect)
kubectl run curl-test --rm -it --image=curlimages/curl --restart=Never -n governance -- \
  curl -s -o /dev/null -w "%{http_code}" http://keycloak:9000/keycloak/health/ready
```

You should see Ready: True and an HTTP 200 from the health endpoint.
If Keycloak is running outside the cluster, you need to ensure:

- Network reachability — pods in the governance namespace can reach the Keycloak URL
- Internal URL — the bootstrap chart defaults to `http://keycloak:8080/keycloak`. Override this in the bootstrap values if your Keycloak uses a different internal URL:

```yaml
keycloak:
  url: "https://keycloak.your-domain.com"
```

- Admin credentials — the `keycloak-admin` secret must still be created in the governance namespace with the external Keycloak's admin password
With Keycloak running, proceed to Section 6 to generate your deployment configuration files, or skip ahead to Section 7 if you prefer to configure files manually.
The govctl CLI tool generates the configuration files needed for the remaining deployment steps — bootstrap values, Helm values, and secrets. This is the recommended approach, as it produces a consistent, minimal configuration based on your environment.
Note: This tool generates the minimum viable configuration to get up and running. For advanced or service-specific options, refer to the individual chart READMEs under `charts/`.
Requires Python 3.10+. From the govctl/ directory:

```bash
# With uv (recommended)
uv pip install -e .

# Or with pip
python3 -m venv env && source env/bin/activate
pip install -e .
```

Verify the installation:

```bash
govctl --help
```

The interactive wizard walks you through cloud provider, domain, environment, auth provider, and registry configuration:

```bash
govctl init
```

For non-interactive usage (all flags required):

```bash
govctl init -I \
  --cloud <gcp|aws|azure> \
  --domain governance.your-domain.com \
  --environment staging \
  --auth keycloak
```

| Flag | Short | Description |
|---|---|---|
| `--cloud` | `-c` | Cloud provider (gcp, aws, azure) |
| `--domain` | `-d` | Deployment domain |
| `--environment` | `-e` | Environment name |
| `--auth` | `-a` | Auth provider (auth0, keycloak, entra) |
| `--output` | `-o` | Output directory (default: output) |
| `--interactive`/`--no-interactive` | `-i`/`-I` | Toggle interactive mode |
govctl produces the following files in the output directory:
| File | Contents | Used In |
|---|---|---|
| bootstrap-{env}.yaml | Keycloak realm, clients, scopes, admin user config | Section 7 — Running Keycloak Bootstrap |
| secrets-{env}.yaml | Secret values (some auto-generated, some to fill in) | Section 8 — Creating Kubernetes Secrets |
| values-{env}.yaml | Helm values for all platform services | Section 9 — Configuring values.yaml |
After generating your files:
- Review `bootstrap-{env}.yaml` and `values-{env}.yaml` for correctness
- Fill in any remaining placeholder values in `secrets-{env}.yaml` (marked with `# REQUIRED` comments)
- Continue to Section 7 to run the Keycloak bootstrap using your generated bootstrap file

Skipping govctl: If you prefer to configure files manually, you can start from the example values files in `charts/governance-platform/examples/` and `charts/keycloak-bootstrap/examples/` instead. The subsequent sections cover both approaches.
The keycloak-bootstrap chart runs a Kubernetes Job that configures Keycloak via its Admin REST API. It creates the governance realm, OAuth clients, custom scopes, service account roles, and an initial platform-admin user.
If you generated files with govctl in Section 6, use your `bootstrap-{env}.yaml` and skip to Run the Bootstrap.
Start from the example values file and customize it for your environment:
```bash
cp charts/keycloak-bootstrap/examples/values.yaml bootstrap-values.yaml
```

Edit bootstrap-values.yaml and replace all CHANGE_ME_DOMAIN_HERE placeholders with your actual domain:
```yaml
# Client redirect URIs and web origins
clients:
  frontend:
    redirectUris:
      - "https://governance.your-domain.com/*"
      - "http://localhost:5173/*"
    webOrigins:
      - "https://governance.your-domain.com"
      - "http://localhost:5173"
  backend:
    redirectUris:
      - "https://governance.your-domain.com/authService/*"
    webOrigins:
      - "https://governance.your-domain.com"

# Admin user email
users:
  admin:
    email: "admin@your-domain.com"
```

If your Keycloak is not reachable at the default http://keycloak:8080/keycloak, update the connection settings:
```yaml
keycloak:
  url: "https://keycloak.your-domain.com"                      # External URL
  # or, for a Keycloak in another namespace:
  # url: "http://keycloak.other-namespace.svc:8080/keycloak"   # Cross-namespace
```

Optionally, customize the Keycloak login page branding for the governance realm:
```yaml
keycloak:
  realm:
    displayName: "Governance Platform"
    displayNameHtml: '<div class="kc-logo-text"><span>Your Organization</span></div>'
```

The displayNameHtml field controls the HTML branding shown on the Keycloak login page for the governance realm. It defaults to a generic Keycloak logo text if not set.
```bash
./scripts/keycloak/bootstrap-keycloak.sh -f /path/to/bootstrap-values.yaml -n governance
```

The script validates prerequisites (Keycloak running, secrets exist), runs the Helm chart, monitors the job, and displays the results.
```bash
helm upgrade --install keycloak-bootstrap ./charts/keycloak-bootstrap \
  --namespace governance \
  --values /path/to/bootstrap-values.yaml \
  --wait \
  --timeout 10m
```

Monitor the job:

```bash
# Watch job status
kubectl get jobs -l app.kubernetes.io/instance=keycloak-bootstrap -n governance -w

# View logs
kubectl logs job/keycloak-bootstrap -n governance -f
```

| Resource | Details |
|---|---|
| Realm | governance with brute force protection, SSO sessions, token lifespans |
| Frontend client | governance-platform-frontend — public SPA client |
| Backend client | governance-platform-backend — confidential, service account with query-users and view-users roles |
| Worker client | governance-worker — confidential, service account only |
| Custom scopes | 8 authorization scopes (governance, integrity, organizations, projects, evaluations) |
| Platform admin user | platform-admin in the governance realm |
The backend and worker client secrets are auto-generated by Keycloak during bootstrap. You must retrieve them to create the platform's Kubernetes secrets in the next step.
```bash
# Port-forward the Keycloak service
kubectl port-forward svc/keycloak 8080:8080 -n governance &

# Get admin password
ADMIN_PASS=$(kubectl get secret keycloak-admin -n governance -o jsonpath='{.data.password}' | base64 -d)

# Get admin token
TOKEN=$(curl -s -X POST "http://localhost:8080/keycloak/realms/master/protocol/openid-connect/token" \
  -d "username=admin" \
  -d "password=$ADMIN_PASS" \
  -d "grant_type=password" \
  -d "client_id=admin-cli" | jq -r '.access_token')

# Get backend client secret
curl -s -H "Authorization: Bearer $TOKEN" \
  "http://localhost:8080/keycloak/admin/realms/governance/clients?clientId=governance-platform-backend" \
  | jq -r '.[0].secret'

# Get worker client secret
curl -s -H "Authorization: Bearer $TOKEN" \
  "http://localhost:8080/keycloak/admin/realms/governance/clients?clientId=governance-worker" \
  | jq -r '.[0].secret'

# Stop port-forward
kill %1
```

If Keycloak is accessible via an external URL, you can skip the port-forward and use the external URL directly (e.g., https://governance.your-domain.com/keycloak).
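The `base64 -d` step in the script above can be sanity-checked locally: Kubernetes stores secret values base64-encoded, so decoding must round-trip. The value below is a stand-in, not a real password:

```shell
# Simulate what `kubectl get secret ... -o jsonpath` returns: a base64 value
ENCODED="$(printf '%s' 'example-admin-pass' | base64)"

# Decode it the same way the retrieval commands do
DECODED="$(printf '%s' "$ENCODED" | base64 -d)"
echo "$DECODED"   # prints: example-admin-pass
```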
- Navigate to `https://governance.your-domain.com/keycloak/admin`
- Select the governance realm
- Go to Clients > governance-platform-backend > Credentials tab
- Copy the Client secret
- Repeat for governance-worker

Save these secrets — you'll need them in Section 8 to create the `platform-keycloak` and `platform-governance-worker` Kubernetes secrets.
```bash
# Test realm discovery endpoint
curl -s https://governance.your-domain.com/keycloak/realms/governance/.well-known/openid-configuration | jq '.issuer'

# Expected output: "https://governance.your-domain.com/keycloak/realms/governance"
```

| Issue | Solution |
|---|---|
| Job fails with "Failed to get admin token" | Verify keycloak-admin secret password matches the actual Keycloak admin password |
| Job fails with connection refused | Check keycloak.url in values — ensure Keycloak is reachable from within the cluster |
| Realm already exists | The bootstrap is idempotent — it updates existing resources rather than failing |
| Job times out | Check Keycloak pod logs: kubectl logs -l app.kubernetes.io/name=keycloak -n governance |
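The `jq '.issuer'` extraction used in the verification step above can be exercised offline against a stand-in discovery document, to see what a healthy response reduces to (the JSON below is a minimal fabricated sample, not real Keycloak output):

```shell
# Minimal stand-in for the .well-known/openid-configuration response
DISCOVERY='{"issuer":"https://governance.your-domain.com/keycloak/realms/governance"}'

# Same extraction the verification step uses
printf '%s' "$DISCOVERY" | jq -r '.issuer'
# prints: https://governance.your-domain.com/keycloak/realms/governance
```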
The governance-platform chart requires several Kubernetes secrets to be available at deploy time. There are three ways to create them — choose one approach and follow only that subsection.
Note: Regardless of which approach you choose, the `keycloak-admin` and `platform-admin` secrets were already created in Section 5. The instructions below cover all remaining secrets.
| Approach | Best For | What You Do |
|---|---|---|
| Option A — kubectl | Environments without file-based secrets management | Run kubectl create secret commands yourself. Secrets live outside of Helm and persist across helm uninstall / helm install cycles. |
| Option B — Helm-managed secrets | Teams with encrypted secrets workflows (SOPS, sealed-secrets, etc.) | Fill in a secrets values file and pass it to helm install. Helm creates the Secret objects for you. Keeps everything declarative. |
| Option C — govctl | Any environment (generates files for Option B) | Run govctl init to auto-generate random values; fill in provider credentials; then use the output as a Helm values file (same as Option B). |
Important: Do not mix approaches. If you use Option B or C (Helm-managed), do not also create the same secrets with kubectl — Helm will fail if the Secret objects already exist. Conversely, if you use Option A (kubectl), leave `global.secrets.create` at its default value of `false`.
| Secret Name | Used By | Keys |
|---|---|---|
| keycloak-admin | Keycloak, bootstrap | password |
| platform-admin | Bootstrap | password |
| platform-database | governance-service, auth-service, integrity-service | username, password |
| platform-keycloak | auth-service, governance-service | service-account-client-id, service-account-client-secret, token-exchange-private-key |
| platform-auth-service | auth-service | api-secret, jwt-secret |
| platform-encryption-key | governance-service, auth-service | encryption-key |
| platform-governance-worker | governance-service worker | encryption-key, client-id, client-secret |
| platform-azure-blob | governance-service, integrity-service (Azure) | account-key, connection-string |
| platform-aws-s3 | governance-service, integrity-service (AWS) | access-key-id, secret-access-key |
| platform-gcs | governance-service, integrity-service (GCS) | service-account-json |
| platform-azure-key-vault | auth-service | client-id, client-secret, tenant-id, vault-url |
| platform-aws-kms | auth-service | access-key-id, secret-access-key, session-token (optional) |
| platform-image-pull-secret | All services | Docker registry credentials |
Create each secret manually. Secrets are managed outside of Helm, so they persist across helm uninstall / helm install cycles.
Run these commands in order, replacing placeholder values with your actual credentials.
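Several of the commands below generate secret values inline with `openssl rand`. If you want to see what that produces before touching the cluster, the pattern can be exercised locally (no kubectl required):

```shell
# 32 random bytes always base64-encode to a 44-character string (padding included)
SECRET="$(openssl rand -base64 32)"
echo "length: ${#SECRET}"
```

`openssl rand -hex 32` (used for the database password below) similarly yields a fixed 64-character hex string.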
```bash
kubectl create secret generic platform-database \
  --from-literal=username=postgres \
  --from-literal=password="$(openssl rand -hex 32)" \
  --namespace governance
```

Use the backend client secret retrieved from Keycloak in Section 7.
Generate an RSA private key for token exchange signing:

```bash
openssl genrsa -out token-exchange-key.pem 2048
```

Create the secret:
```bash
kubectl create secret generic platform-keycloak \
  --from-literal=service-account-client-id=governance-platform-backend \
  --from-literal=service-account-client-secret=YOUR_BACKEND_CLIENT_SECRET \
  --from-file=token-exchange-private-key=token-exchange-key.pem \
  --namespace governance
```

Note: The token exchange private key is used by auth-service to sign token exchange requests with Keycloak. If you used `govctl init`, this key is auto-generated in your secrets file.
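Before loading the key into the secret, it can be worth confirming the PEM file parses as a valid RSA key — a quick local sanity check (the `/tmp` path here is illustrative):

```shell
# Generate a 2048-bit key and ask openssl to validate its structure
openssl genrsa -out /tmp/token-exchange-key.pem 2048
RESULT="$(openssl rsa -in /tmp/token-exchange-key.pem -check -noout)"
echo "$RESULT"
```

A valid key prints `RSA key ok`; anything else means the file should be regenerated before it is stored in the cluster.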
```bash
kubectl create secret generic platform-auth-service \
  --from-literal=api-secret="$(openssl rand -base64 32)" \
  --from-literal=jwt-secret="$(openssl rand -base64 32)" \
  --namespace governance
```

```bash
kubectl create secret generic platform-encryption-key \
  --from-literal=encryption-key="$(openssl rand -base64 32)" \
  --namespace governance
```

Use the worker client secret retrieved from Keycloak in Section 7:
```bash
kubectl create secret generic platform-governance-worker \
  --from-literal=encryption-key="$(openssl rand -base64 32)" \
  --from-literal=client-id=governance-worker \
  --from-literal=client-secret=YOUR_WORKER_CLIENT_SECRET \
  --namespace governance
```

Azure Blob:
```bash
kubectl create secret generic platform-azure-blob \
  --from-literal=account-key=YOUR_AZURE_STORAGE_ACCOUNT_KEY \
  --from-literal=connection-string="DefaultEndpointsProtocol=https;AccountName=yourstorageaccount;AccountKey=YOUR_KEY;EndpointSuffix=core.windows.net" \
  --namespace governance
```

AWS S3:
```bash
kubectl create secret generic platform-aws-s3 \
  --from-literal=access-key-id=YOUR_AWS_ACCESS_KEY_ID \
  --from-literal=secret-access-key=YOUR_AWS_SECRET_ACCESS_KEY \
  --namespace governance
```

GCS:
```bash
kubectl create secret generic platform-gcs \
  --from-file=service-account-json=service-account.json \
  --namespace governance
```

Azure Key Vault:
```bash
kubectl create secret generic platform-azure-key-vault \
  --from-literal=client-id=YOUR_AZURE_CLIENT_ID \
  --from-literal=client-secret=YOUR_AZURE_CLIENT_SECRET \
  --from-literal=tenant-id=YOUR_AZURE_TENANT_ID \
  --from-literal=vault-url=https://your-vault.vault.azure.net/ \
  --namespace governance
```

AWS KMS:
```bash
kubectl create secret generic platform-aws-kms \
  --from-literal=access-key-id=YOUR_AWS_ACCESS_KEY_ID \
  --from-literal=secret-access-key=YOUR_AWS_SECRET_ACCESS_KEY \
  --namespace governance
# Optionally add: --from-literal=session-token=YOUR_AWS_SESSION_TOKEN
```

```bash
kubectl create secret docker-registry platform-image-pull-secret \
  --docker-server=ghcr.io \
  --docker-username=YOUR_GITHUB_USERNAME \
  --docker-password=YOUR_GITHUB_PAT \
  --docker-email=YOUR_EMAIL \
  --namespace governance
```

After creating all secrets, skip ahead to Verify Secrets.
Instead of creating secrets with kubectl, you can declare secret values in a YAML file and let Helm create the Secret objects during helm install.
- Copy the sample secrets file to a secure location outside your repo:

  ```bash
  cp charts/governance-platform/examples/secrets-sample.yaml my-secrets.yaml
  ```

- Open `my-secrets.yaml` and:
  - Ensure `global.secrets.create` is set to `true`
  - Set `global.secrets.auth.provider` to `keycloak`
  - Uncomment the `keycloak` block under `global.secrets.auth` and fill in the backend client secret from Section 7
  - Fill in all `REPLACE_WITH_*` values for your chosen storage provider, key management, and image registry
  - Generate random values where indicated (e.g., `openssl rand -base64 32` for encryption keys)
When deploying in Section 10, pass both your secrets file and values file to Helm:
```bash
helm upgrade --install governance-platform ./charts/governance-platform \
  --namespace governance \
  --values my-secrets.yaml \
  --values my-values.yaml \
  --wait --timeout 15m
```

Warning: Never commit `my-secrets.yaml` to version control. Add it to `.gitignore`.
If you ran `govctl init` in Section 6, it generated a `secrets-{env}.yaml` file with random values already filled in for the database password, API secrets, JWT secret, encryption keys, and the RSA private key.

- Open `secrets-{env}.yaml` and fill in the remaining values marked with `# REQUIRED` comments.
- The generated file has `global.secrets.create: true`, so Helm will create the secrets for you. When deploying in Section 10, pass it alongside your values file:
```bash
helm upgrade --install governance-platform ./charts/governance-platform \
  --namespace governance \
  --values secrets-staging.yaml \
  --values values-staging.yaml \
  --wait --timeout 15m
```

If you created secrets with kubectl (Option A), verify they exist before proceeding:
```bash
# List all platform secrets
kubectl get secrets -n governance | grep platform

# Verify a specific secret has the expected keys
kubectl get secret platform-keycloak -n governance -o jsonpath='{.data}' | jq 'keys'
```

If you used Option B or C, Helm creates the secrets during `helm install` — skip this step and continue to Section 9.
The governance-platform Helm chart is configured through a single values file. Start from the Keycloak example and customize it for your environment.
You can either copy the example values file manually or use govctl to generate both values and secrets files interactively:
```bash
# Option A: Copy the example and customize manually
cp charts/governance-platform/examples/values-keycloak.yaml my-values.yaml

# Option B: Use govctl to generate values and secrets
govctl init
```

If using govctl, it will generate a `values-{env}.yaml` and `secrets-{env}.yaml` pre-configured for your cloud provider, domain, and auth provider. See the govctl README for details.
If starting from the example file, values-keycloak.yaml has all four services pre-configured for Keycloak with placeholder values you need to replace.
Set the domain and auth provider at the top of your values file:
```yaml
global:
  domain: "governance.your-domain.com"
  environmentType: "production"  # Options: development, staging, production
```

The `global.secrets.create` setting controls how secrets are provided. Leave it at `false` (the default) if you created secrets with kubectl (Section 8, Option A). Set it to `true` only if you are using Helm-managed secrets via a secrets file (Section 8, Option B or Option C).
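For example, with Option A (kubectl-created secrets) the top of the values file can stay at the default — a sketch, assuming the chart nests the flag under `global.secrets` as described above:

```yaml
global:
  domain: "governance.your-domain.com"
  environmentType: "production"
  secrets:
    create: false  # true only when passing a Helm-managed secrets file (Option B/C)
```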
The auth-service handles authentication, authorization, and token exchange. Key configuration areas:
```yaml
auth-service:
  config:
    # Identity Provider — must match your Keycloak setup
    idp:
      provider: "keycloak"
      issuer: "https://governance.your-domain.com/keycloak/realms/governance"
      keycloak:
        realm: "governance"
        adminUrl: "https://governance.your-domain.com/keycloak"
        clientId: "governance-platform-frontend"
        enableUserManagement: true

    # Token Exchange — enables service-to-service token exchange
    tokenExchange:
      enabled: true
      keyId: "auth-service-prod-001"  # Unique key identifier

    # Key Management — for DID signing keys (choose one provider)
    keyManagement:
      provider: "azure_key_vault"  # Options: azure_key_vault, aws_kms
      azure_key_vault:
        vaultUrl: "https://your-keyvault.vault.azure.net/"
        tenantId: "your-azure-tenant-id"
      # aws_kms:
      #   region: "us-east-1"
```

| Field | Description | Where to Get It |
|---|---|---|
| `idp.issuer` | Keycloak realm issuer URL | `https://<domain>/keycloak/realms/governance` |
| `idp.keycloak.adminUrl` | Keycloak base URL (used for Admin API calls) | Your Keycloak URL without `/realms/...` |
| `idp.keycloak.clientId` | Frontend client ID | Set during bootstrap |
| `keyManagement.provider` | Key management provider (`azure_key_vault` or `aws_kms`) | Choose based on your cloud provider |
| `keyManagement.azure_key_vault.vaultUrl` | Azure Key Vault URL | From Section 3 |
| `keyManagement.azure_key_vault.tenantId` | Azure AD tenant ID | From your Azure subscription |
| `keyManagement.aws_kms.region` | AWS KMS region | Your AWS region (e.g., `us-east-1`) |
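The URL fields above are all derived from the same domain and realm; the following sketch makes the relationship explicit (the domain and realm values are placeholders):

```shell
DOMAIN="governance.your-domain.com"
REALM="governance"

ADMIN_URL="https://${DOMAIN}/keycloak"    # -> idp.keycloak.adminUrl
ISSUER="${ADMIN_URL}/realms/${REALM}"     # -> idp.issuer
echo "$ISSUER"
```

If the issuer configured here does not match the `iss` claim in tokens minted by Keycloak byte for byte, token validation will fail.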
The governance-service is the main backend API. Configure storage and Keycloak:
```yaml
governance-service:
  config:
    # Storage — choose one provider
    storageProvider: "azure_blob"  # Options: azure_blob, gcs, aws_s3
    azureStorageAccountName: "your-storage-account"
    azureStorageContainerName: "your-governance-artifacts"

    # GCS alternative:
    # storageProvider: "gcs"
    # gcsBucketName: "your-governance-artifacts-bucket"

    # AWS S3 alternative:
    # storageProvider: "aws_s3"
    # awsS3Region: "us-east-1"
    # awsS3BucketName: "your-governance-artifacts-bucket"

    # Keycloak — must match auth-service config
    keycloakUrl: "https://governance.your-domain.com/keycloak"
    keycloakRealm: "governance"
```

The frontend application. Configure the Keycloak connection and feature flags:
```yaml
governance-studio:
  config:
    keycloakUrl: "https://governance.your-domain.com/keycloak"
    keycloakRealm: "governance"
    keycloakClientId: "governance-platform-frontend"

    # Feature flags
    features:
      governance: true  # Governance workflows
      lineage: true     # Lineage tracking
```

Important: The `keycloakClientId` must match the frontend client ID created during bootstrap (`governance-platform-frontend`).
The integrity-service handles verifiable credentials. Configure its storage (can use a different provider than governance-service if needed):
```yaml
integrity-service:
  config:
    integrityAppBlobStoreType: "azure_blob"
    integrityAppBlobStoreAccount: "your-storage-account"
    integrityAppBlobStoreContainer: "your-integrity-store"

    # AWS S3 alternative:
    # integrityAppBlobStoreType: "aws_s3"
    # integrityAppBlobStoreAwsRegion: "us-east-1"
    # integrityAppBlobStoreAwsBucket: "your-integrity-store-bucket"

    # GCS alternative:
    # integrityAppBlobStoreType: "gcs"
    # integrityAppBlobStoreGcsBucket: "your-integrity-store-bucket"
```

Each service needs an ingress block. All four services share the same domain with path-based routing, but annotations vary per service. If you used govctl or started from values-keycloak.yaml, the ingress is already configured correctly.
Key differences between services:
| Service | Path Pattern | Notes |
|---|---|---|
| governance-studio | `/` (pathType: Prefix) | No regex or rewrite annotations |
| governance-service | `/governanceService(/\|$)(.*)` | Regex rewrite to `/$2` |
| auth-service | `/authService(/\|$)(.*)` | Regex rewrite + extra buffer size annotations (`proxy-buffer-size`, `client-header-buffer-size`, `large-client-header-buffers`) |
| integrity-service | `/integrityService(/\|$)(.*)` | Regex rewrite + `proxy-body-size: "0"` (unlimited) |
Note: All four services must use the same `tls.secretName` (e.g., `prod-tls-secret`). cert-manager creates this secret automatically when it provisions the TLS certificate.
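As a sketch of what the table above translates to for governance-service — assuming the chart exposes a standard `ingress` values block and the stock ingress-nginx annotations (the exact values schema may differ in your chart):

```yaml
governance-service:
  ingress:
    enabled: true
    className: nginx
    annotations:
      nginx.ingress.kubernetes.io/use-regex: "true"
      nginx.ingress.kubernetes.io/rewrite-target: /$2
    hosts:
      - host: governance.your-domain.com
        paths:
          - path: /governanceService(/|$)(.*)
            pathType: ImplementationSpecific
    tls:
      - secretName: prod-tls-secret
        hosts:
          - governance.your-domain.com
```

The `(/|$)(.*)` capture groups are what make `rewrite-target: /$2` strip the `/governanceService` prefix before the request reaches the backend.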
The Bitnami PostgreSQL chart is included as a dependency. Configure storage and resources:
```yaml
postgresql:
  enabled: true
  primary:
    persistence:
      enabled: true
      size: 10Gi
      # Uses cluster default StorageClass when set to "".
      # Override per CSP if needed: GKE="standard", AKS="managed-csi", EKS="gp3", etc.
      storageClass: ""
    resources:
      requests:
        cpu: 500m
        memory: 1Gi
      limits:
        cpu: 2000m
        memory: 2Gi
```

The database password is pulled from the `platform-database` secret created in Section 8.
The governance-platform chart includes a Helm post-install/post-upgrade hook that automatically creates the organization and platform-admin user in the database after deployment. Enable it in your values file:
```yaml
keycloak:
  createOrganization: true
  realmName: "governance"            # Must match your Keycloak realm name
  displayName: "Governance Studio"   # Human-readable organization name
  createPlatformAdmin: true
  platformAdminEmail: ""             # Defaults to admin@<global.domain>
```

| Field | Description | Where to Get It |
|---|---|---|
| `createOrganization` | Enable organization creation in the database | Set to `true` |
| `realmName` | Keycloak realm name (used as the organization name) | Must match `auth-service.config.idp.keycloak.realm` |
| `displayName` | Human-readable organization display name | Your choice |
| `createPlatformAdmin` | Enable platform-admin user creation in the database | Set to `true` |
| `platformAdminEmail` | Email of the platform admin user in Keycloak | Defaults to `admin@<global.domain>` if left empty |
The hook runs as a Kubernetes Job after Helm install/upgrade. It waits for database migrations to complete, looks up the platform admin's Keycloak user ID by email, then creates (or updates) the organization and admin user records. The hook is idempotent — it's safe to run on every upgrade.
Before deploying, verify your values file has:
- `global.domain` set to your actual domain
- `auth-service.config.idp.issuer` pointing to your Keycloak realm
- `auth-service.config.idp.keycloak.adminUrl` pointing to your Keycloak
- `auth-service.config.keyManagement` provider and credentials configured (Azure Key Vault or AWS KMS)
- `governance-service.config.storageProvider` and storage fields set
- `governance-studio.config.keycloakUrl` and `keycloakRealm` set
- `integrity-service.config.integrityAppBlobStoreType` and storage fields set
- All ingress `host` fields set to your domain
- All ingress `tls` blocks using the same `secretName`
- `keycloak.createOrganization` set to `true`
- `keycloak.platformAdminEmail` set (if `createPlatformAdmin` is `true`; or leave empty to default to `admin@<global.domain>`)
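A crude but useful last check is to grep the values file for unreplaced placeholders before running helm. The snippet below demonstrates the idea on a throwaway file (the path and patterns are illustrative):

```shell
# A toy values file that still contains a placeholder domain
cat > /tmp/my-values.yaml <<'EOF'
global:
  domain: "governance.your-domain.com"
EOF

# Count lines that still look like placeholders
LEFTOVER=$(grep -c -e 'your-domain\.com' -e 'REPLACE_WITH_' /tmp/my-values.yaml)
echo "placeholder lines: $LEFTOVER"
```

A non-zero count means the file is not ready to deploy.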
Before installing, pull the subchart dependencies:
```bash
helm dependency update ./charts/governance-platform
```

This downloads the Bitnami PostgreSQL chart and links the local subcharts (auth-service, governance-service, governance-studio, integrity-service).
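For reference, the dependency list that `helm dependency update` resolves lives in the umbrella chart's Chart.yaml and looks roughly like this — a sketch only; version numbers and the local repository layout are illustrative, not the chart's actual contents:

```yaml
# charts/governance-platform/Chart.yaml (illustrative sketch)
dependencies:
  - name: postgresql
    repository: https://charts.bitnami.com/bitnami
    version: "15.x.x"
    condition: postgresql.enabled
  - name: auth-service
    repository: file://../auth-service
    version: "0.1.0"
  # governance-service, governance-studio, and integrity-service follow the same pattern
```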
If you created secrets with kubectl (Section 8, Option A):
```bash
helm upgrade --install governance-platform ./charts/governance-platform \
  --namespace governance \
  --create-namespace \
  --values /path/to/my-values.yaml \
  --wait \
  --timeout 15m
```

If you are using Helm-managed secrets (Section 8, Option B or C), pass the secrets file before the values file so that values can override it if needed:
```bash
helm upgrade --install governance-platform ./charts/governance-platform \
  --namespace governance \
  --create-namespace \
  --values /path/to/my-secrets.yaml \
  --values /path/to/my-values.yaml \
  --wait \
  --timeout 15m
```

The Helm install proceeds in this order:
- PostgreSQL starts and initializes the `governance` database
- governance-service starts and runs database migrations on startup
- auth-service and integrity-service start (they depend on the database being ready)
- governance-studio starts (static frontend, no database dependency)
- The post-install hook runs — it waits for migrations to complete, then creates the organization and platform-admin user in the database (if `keycloak.createOrganization` is enabled)
The --wait flag ensures Helm waits for all pods to reach Ready state before returning.
```bash
# Watch all pods come up
kubectl get pods -n governance -w

# Check deployment status
kubectl get deployments -n governance
```
Expected pod status once healthy:
```
NAME                                                 READY   STATUS    AGE
governance-platform-auth-service-xxxxx-xxxxx         1/1     Running   2m
governance-platform-governance-service-xxxxx-xxxxx   1/1     Running   2m
governance-platform-governance-studio-xxxxx-xxxxx    1/1     Running   2m
governance-platform-integrity-service-xxxxx-xxxxx    1/1     Running   2m
governance-platform-postgresql-0                     1/1     Running   3m
```
Pod stuck in CrashLoopBackOff:
```bash
# Check pod logs
kubectl logs -l app.kubernetes.io/instance=governance-platform -n governance --all-containers

# Check a specific service
kubectl logs deployment/governance-platform-auth-service -n governance
```

Pod stuck in ImagePullBackOff:
```bash
# Verify the image pull secret exists and is correct
kubectl get secret platform-image-pull-secret -n governance -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d | jq .
```

Database connection errors:
```bash
# Check PostgreSQL is running
kubectl get pod governance-platform-postgresql-0 -n governance

# Verify the database secret
kubectl get secret platform-database -n governance -o jsonpath='{.data.password}' | base64 -d
```

Ingress not working:
```bash
# Check ingress resources were created
kubectl get ingress -n governance

# Check cert-manager certificate status
kubectl get certificate -n governance
kubectl describe certificate -n governance
```

If you enabled `keycloak.createOrganization` in your values file (see Section 9), the Helm post-install hook automatically creates the organization and platform-admin user in the database. Verify that the hook job completed successfully:
```bash
# Check the hook job status
kubectl get jobs -n governance -l "app.kubernetes.io/component=keycloak-setup"

# View hook job logs if needed
kubectl logs -n governance -l "app.kubernetes.io/component=keycloak-setup" --tail=50
```

The hook:
- Waits for database migrations to complete (checks for required tables)
- Creates (or updates) the organization in the database using the configured `realmName`
- Looks up the platform admin's Keycloak user ID by email (using `platformAdminEmail`, or defaulting to `admin@<global.domain>`)
- Creates (or updates) the platform-admin user in the database with the resolved Keycloak ID
- Sets up the organization membership with the `organization_owner` role
The hook is idempotent — it runs on every helm upgrade and safely skips records that already exist.
```bash
# All services should return healthy responses
DOMAIN="governance.your-domain.com"

# Governance Studio (should return 200)
curl -s -o /dev/null -w "%{http_code}" https://$DOMAIN/

# Governance Service health
curl -s https://$DOMAIN/governanceService/health | jq .

# Auth Service health
curl -s https://$DOMAIN/authService/health | jq .

# Integrity Service health
curl -s https://$DOMAIN/integrityService/health/v1 | jq .
```

```bash
# OpenID Connect discovery endpoint (should return JSON with issuer)
curl -s https://$DOMAIN/keycloak/realms/governance/.well-known/openid-configuration | jq '.issuer'

# Test token exchange — get a token using the backend service account
BACKEND_SECRET=$(kubectl get secret platform-keycloak -n governance -o jsonpath='{.data.service-account-client-secret}' | base64 -d)
curl -s -X POST "https://$DOMAIN/keycloak/realms/governance/protocol/openid-connect/token" \
  -d "grant_type=client_credentials" \
  -d "client_id=governance-platform-backend" \
  -d "client_secret=$BACKEND_SECRET" \
  | jq '.access_token | split(".") | .[1] | @base64d | fromjson | {sub, azp, realm_access}'
```

```bash
# Check organization was created
kubectl exec -n governance governance-platform-postgresql-0 -- \
  env PGPASSWORD=$(kubectl get secret platform-database -n governance -o jsonpath='{.data.password}' | base64 -d) \
  psql -U postgres -d governance -c \
  "SELECT id, name, display_name, idp_provider FROM organization;"

# Check platform-admin user exists
kubectl exec -n governance governance-platform-postgresql-0 -- \
  env PGPASSWORD=$(kubectl get secret platform-database -n governance -o jsonpath='{.data.password}' | base64 -d) \
  psql -U postgres -d governance -c \
  "SELECT u.email, u.display_name, u.idp_provider, uom.roles
     FROM users u
     JOIN user_organization_memberships uom ON u.id = uom.user_id
    WHERE u.email LIKE 'admin@%';"
```

- Navigate to `https://governance.your-domain.com` in your browser
- You should be redirected to the Keycloak login page for the `governance` realm
- Log in with the platform-admin credentials:
  - Username: `platform-admin`
  - Password: retrieve from the secret: `kubectl get secret platform-admin -n governance -o jsonpath='{.data.password}' | base64 -d`
- After login, you should be redirected back to Governance Studio with full access
Your Governance Platform is now running with:
- Keycloak managing identity and access for the `governance` realm
- Three OAuth clients (frontend, backend, worker)
- A platform-admin user with the `organization_owner` role
- All four services accessible via path-based routing on a single domain
- TLS certificates managed by cert-manager
- PostgreSQL with all required schemas
Users must be created in Keycloak before they can be added to Governance Studio:

- Create the user in Keycloak:
  - Go to the Keycloak Admin Console > governance realm > Users > Add user
  - Set the username, email, first/last name, and enable the account
  - Under the Credentials tab, set a password (or configure email verification)
- Add the user in Governance Studio:
  - Log in as `platform-admin`
  - Navigate to Organization > Members (`https://governance.your-domain.com/organization/members`)
  - Add the user by email and assign a role
The user can then log in to Governance Studio with their Keycloak credentials.
| Resource | URL |
|---|---|
| Governance Studio | https://governance.your-domain.com/ |
| Governance Service API | https://governance.your-domain.com/governanceService/ |
| Auth Service API | https://governance.your-domain.com/authService/ |
| Integrity Service API | https://governance.your-domain.com/integrityService/ |
| Keycloak Admin Console | https://governance.your-domain.com/keycloak/admin |
| Keycloak Realm Settings | https://governance.your-domain.com/keycloak/admin/governance/console |
| OIDC Discovery | https://governance.your-domain.com/keycloak/realms/governance/.well-known/openid-configuration |