
Deploying authentik using vSphere Pods


I’m very much a fan of vSphere Pods. Running Kubernetes Pods directly on ESX just feels more elegant than deploying a VKS cluster and then deploying pods inside that cluster. Plus, you can view the status of pods from vSphere UI, without having to pull out kubectl. So whenever I look into a new Kubernetes application, I always evaluate if it could work well with vSphere Pods.

Generally speaking, I’ve found that Kubernetes applications can work with vSphere Pods as long as they stick to the basic resources – like pods, services, ingress, and read-write-once persistent volume claims (RWO PVCs). Applications that rely on things like custom resource definitions, cluster scope configurations, and read-write-many (RWX) PVCs are right out.

authentik is one of those compatible applications. It only uses pods, services, ingress, and RWO PVCs. Since VCF 9 allows you to use generic SAML 2.0 providers as an identity source, authentik is a great no-cost option for that.

Pre-requisites #

  • Deploy vSphere Supervisor
    • vSphere 9.0: Any deployment option (NSX with VPCs, NSX Classic, or vSphere Distributed Switch)
    • vSphere 8.0 and earlier: Only with NSX networking

Installation #

vSphere Supervisor Setup #

First, we need to create a namespace for our authentik deployment. Navigate to Supervisor Management in the vSphere UI and then go to the Namespaces section.

New Namespace

Click on New Namespace and then input your desired values. Here, I’ll be creating a namespace named authentik-demo using the default network settings.

Namespace Naming

Then add the storage policy that authentik should use for its PVCs.

Add Storage Policies

Select Storage Policies

Storage Policies Added

authentik Kubernetes Deployment #

The authentik documentation site provides instructions on how to deploy to a Kubernetes platform. These instructions are a great starting point, but do require us to make some changes to the input values file.

The starter values.yaml file they provide is as follows:

authentik:
    secret_key: "PleaseGenerateASecureKey"
    # This sends anonymous usage-data, stack traces on errors and
    # performance data to sentry.io, and is fully opt-in
    error_reporting:
        enabled: true
    postgresql:
        password: "ThisIsNotASecurePassword"

server:
    ingress:
        # Specify kubernetes ingress controller class name
        ingressClassName: nginx | traefik | kong
        enabled: true
        hosts:
            - authentik.domain.tld

postgresql:
    enabled: true
    auth:
        password: "ThisIsNotASecurePassword"

You can view all the possible configuration options at ArtifactHub.

And be sure to follow their instructions on generating secure passwords for the database and secret key, using either of the two commands below. This is important because the authentik-worker pod won’t start unless secret_key is at least 50 characters long.

$ pwgen -s 50 1
$ openssl rand 60 | base64 -w 0
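
If you want to double-check the result before dropping it into values.yaml, a quick length check like the one below works. This is just a sketch; $SECRET_KEY is a throwaway shell variable used for illustration, not something the chart reads.

# Optional: confirm the generated key meets the 50-character minimum
SECRET_KEY="$(pwgen -s 50 1)"
echo -n "$SECRET_KEY" | wc -c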

There are three things we need to change in values.yaml to get authentik to work with vSphere Pods.

Disable Cluster Role Creation - serviceAccount.create #

First, we need to disable the option to create a service account for authentik. This option attempts to create a ClusterRole for authentik, which is a cluster-scoped resource, so it’s the very first thing that causes the deployment to fail.

Error: Unable to continue with install: could not get information about the resource ClusterRole "authentik-authentik-demo" in namespace "": clusterroles.rbac.authorization.k8s.io "authentik-authentik-demo" is forbidden: User "sso:[email protected]" cannot get resource "clusterroles" in API group "rbac.authorization.k8s.io" at the cluster scope

To resolve this, we simply need to add the following option to values.yaml.

serviceAccount:
  create: false

Keep in mind that the service account is required for authentik’s managed outposts feature. If that’s a requirement for you, then you’ll need to deploy authentik in a VKS cluster instead.
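
If you want to sanity-check this change before installing, you can render the chart locally and confirm that no cluster-scoped RBAC objects are produced. This is just an optional check, and it assumes you’ve already added the authentik Helm repo (the repo commands appear later in this post).

# Render the chart locally and look for any cluster-scoped RBAC objects
helm template authentik authentik/authentik -f values.yaml \
  | grep -iE 'kind: *Cluster' || echo "no cluster-scoped resources rendered"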

Set the Default Storage Class - global.defaultStorageClass #

The next thing we need to specify is the global.defaultStorageClass value. Without that setting, the PVCs will fail to create because vSphere Supervisor doesn’t know which storage class to use for authentik’s PVCs. We can’t set a default storage class in vSphere Supervisor ourselves because that’s a cluster-scoped setting.

$ kubectl patch storageclass smith-home-mgmt-nfs-nvme -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
Error from server (Forbidden): storageclasses.storage.k8s.io "smith-home-mgmt-nfs-nvme" is forbidden: User "sso:[email protected]" cannot patch resource "storageclasses" in API group "storage.k8s.io" at the cluster scope

Trying to deploy authentik without this value results in the following errors in the postgresql StatefulSet.

Events:
  Type     Reason        Age                 From                    Message
  ----     ------        ----                ----                    -------
  Warning  FailedCreate  79s (x12 over 90s)  statefulset-controller  create Pod authentik-postgresql-0 in StatefulSet authentik-postgresql failed error: failed to create PVC data-authentik-postgresql-0: admission webhook "validate-quota-on-create.k8s.io" denied the request: Operation denied, No StorageClass is provided for PVC data-authentik-postgresql-0,namespace authentik-demo   
  Warning  FailedCreate  69s (x13 over 90s)  statefulset-controller  create Claim data-authentik-postgresql-0 for Pod authentik-postgresql-0 in StatefulSet authentik-postgresql failed error: admission webhook "validate-quota-on-create.k8s.io" denied the request: Operation denied, No StorageClass is provided for PVC data-authentik-postgresql-0,namespace authentik-dem

To resolve this, we just need to add the following entry into values.yaml.

global:
  defaultStorageClass: <INPUT_YOUR_STORAGE_CLASS_HERE>
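
If you’re not sure of the exact storage class name, it comes from the storage policy you added to the namespace earlier. Permissions permitting, you can list it directly; otherwise the namespace’s resource quota also references it.

# Permissions permitting, list the storage classes available to your user
kubectl get storageclass

# Otherwise, the namespace's resource quota references the storage class name
kubectl describe namespace authentik-demo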

Set the Networking Type - server.ingress.ingressClassName or server.service.type #

You might have gathered from the default values.yaml file that we need to specify the name of the ingress controller we want to use. vSphere Supervisor only supports Contour, so if you want to use ingress for authentik, make sure you install the Contour Supervisor Service first.

After you install Contour, simply update ingressClassName in values.yaml.

server:
  ingress:
    ingressClassName: contour
    enabled: true
    hosts:
      - <INPUT_YOUR_HOSTNAME_HERE>
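
Before deploying, it doesn’t hurt to confirm that Contour is actually up. Permissions permitting, you can list the ingress classes, and you can also check the pods in the Contour Supervisor Service namespace.

# Permissions permitting, confirm an ingress class named "contour" exists
kubectl get ingressclass

# Check that the Contour pods in the Supervisor Service namespace are running
kubectl -n svc-contour-domain-c[NUM] get pods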

If you’d prefer not to use ingress, you have the option to expose authentik using a load balancer service instead. To do so, replace the ingress section under server with a service.type of LoadBalancer.

server:
  service:
    type: LoadBalancer
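
Once the chart is installed (covered below), the external IP assigned to that service can be pulled with a quick kubectl query. This assumes the Helm release is named authentik, as in the install command later in this post, which makes the service name authentik-server.

# After installation, grab the external IP of the load balancer service
kubectl -n authentik-demo get svc authentik-server \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'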

Set the Resource Requests for the Pods - server|worker.resources.requests.memory #

The final thing we need to change is the memory requests for the server and worker pods. vSphere Pods actually run within a lightweight CRX VM, with only the requested memory and CPU allocated to it. The implication here is that whatever memory you request is effectively the limit for that pod. Unlike pods in a normal Kubernetes cluster, vSphere Pods don’t automatically use extra memory from the host beyond their initial request.

By default, vSphere Pods are deployed with allocations for 1000m of CPU (or 1 vCPU) and 512Mi of memory. This blog post by Vino Alex explains it a bit more. You can also confirm this for yourself by examining the allocated vCPUs and memory in ESX host client for a vSphere Pod that doesn’t have any resource requests.

Default vSphere Pod Allocations

When I didn’t request more memory, I found that authentik’s worker pod would error out and be evicted shortly after creation with the reason InsufficientFreeMemory, meaning the pod had hit its memory limit. To resolve this, I added a memory request of 2Gi for the worker and server pods using the following options in the values.yaml file.

server:
  resources:
    requests:
      memory: "2Gi"

worker:
  resources:
    requests:
      memory: "2Gi"

I haven’t noticed the same issue with the postgresql pod, so the default memory allocation of 512Mi seems to work for it, at least initially. But if you deploy this in production, I would keep a close eye on the memory usage of all the pods and increase their memory requests if needed.
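
Once the chart is installed (next section), you can confirm that the memory requests actually landed on the pods with a quick query like this one.

# Confirm the memory requests applied to each pod
kubectl -n authentik-demo get pods \
  -o custom-columns='NAME:.metadata.name,MEM_REQUEST:.spec.containers[*].resources.requests.memory'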

Installing with Helm #

All of these changes make up the following values.yaml file.

authentik:
  secret_key: "PleaseGenerateASecureKey"
  # This sends anonymous usage-data, stack traces on errors and
  # performance data to sentry.io, and is fully opt-in
  error_reporting:
      enabled: true
  postgresql:
      password: "ThisIsNotASecurePassword"

server:
  resources:
    requests:
      memory: "2Gi"
  service:
    type: LoadBalancer
  # Uncomment this section if you plan to use ingress,
  # and then comment out the service section above
  # ingress:
  #   ingressClassName: contour
  #   enabled: true
  #   hosts:
  #     - <INPUT_YOUR_HOSTNAME_HERE>

worker:
  resources:
    requests:
      memory: "2Gi"

global:
  defaultStorageClass: <INPUT_YOUR_STORAGE_CLASS_HERE>

postgresql:
  enabled: true
  auth:
      password: "ThisIsNotASecurePassword"

serviceAccount:
  create: false

Remember to replace the password and secret_key values with securely generated ones. As a reminder, the authentik-worker pod won’t start unless authentik.secret_key is at least 50 characters long.

Now you can finally deploy authentik! Follow the remaining steps from the documentation to deploy authentik using helm.

# Add the authentik repo to your local helm repository
helm repo add authentik https://charts.goauthentik.io
helm repo update
# Install the latest version of authentik in the 'authentik-demo' namespace
helm upgrade --namespace authentik-demo --install authentik authentik/authentik -f values.yaml

Then track the deployment progress using these kubectl commands.

# View status of pod deployments
kubectl -n authentik-demo get deployments
kubectl -n authentik-demo get pods

# If the deployments take longer than expected, view the logs with these commands
kubectl -n authentik-demo logs deployment/authentik-server
kubectl -n authentik-demo logs deployment/authentik-worker
kubectl -n authentik-demo logs statefulsets/authentik-postgresql

# If using a load balancer, view the namespace's services to get the external IP 
kubectl -n authentik-demo get services

# If using ingress, view the namespace's ingress resources to confirm the hostname
# Then, view the services for the contour supervisor service namespace to get the external IP
kubectl -n authentik-demo get ingress
kubectl -n svc-contour-domain-c[NUM] get services

If you mess up and want to restart the deployment from scratch, use the following helm command.

helm uninstall --namespace authentik-demo authentik 

You may also need to delete the postgresql PVC, if you already started configuring authentik.

kubectl -n authentik-demo delete pvc data-authentik-postgresql-0

Initial Setup #

Now that authentik is successfully deployed, access it through the load balancer IP or the ingress hostname, depending on what networking option you chose.

authentik Login Screen
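
If you’d like to confirm authentik is responding before opening a browser, a quick curl check works. Replace the hostname with your ingress hostname or load balancer IP; the -k flag skips TLS verification, which is handy if you haven’t set up a trusted certificate yet.

# Optional: confirm authentik responds before opening a browser
curl -kI https://authentik.domain.tld/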

To set up the default administrator, navigate to /if/flow/initial-setup/. This will let you configure the initial password for akadmin.

authentik Initial Setup Flow

And now you’re ready to get authentik configured for your environment! Setting up your SAML and SCIM providers to work with VCF SSO is for another blog post, though. 🙂

authentik Application Page

11/02/2025: As of version 2025.10, there’s a bug that prevents you from creating the SCIM provider. There’s an open issue for this in the authentik GitHub repo.

If you want the latest version of authentik but don’t want to wait for SCIM to be fixed, you could probably still use SAML with just-in-time (JIT) provisioning. But this does lock you into using JIT for your VCF SSO source, at least until you reset the SSO config, which comes with some interruptions in service.

Another option is to just use the previous version of authentik until the issue is resolved. All you need to do is add the --version 2025.8.4 flag to your helm upgrade command. Keep in mind, if you already deployed authentik using the latest version, you’ll need to delete that deployment and then redeploy with version 2025.8.4. Downgrading an existing deployment has not worked in my tests.

helm upgrade --namespace authentik-demo --install authentik authentik/authentik -f values.yaml --version 2025.8.4

It’s also important to note that Redis was removed in version 2025.10. However, it’s still required for version 2025.8.4. If you’re going to use version 2025.8.4, make sure you add the following values to your values.yaml file!

redis:
  enabled: true