One of the driving forces behind creating the Octopus Server Linux Container was so Octopus could run in a container in Kubernetes for Octopus Cloud. With the release of the Octopus Server Linux Container image in 2020.6, this option is available for those who want to host Octopus in their own Kubernetes clusters.
This page describes how to run Octopus Server in Kubernetes, along with platform specific considerations when using different Kubernetes providers such as Azure AKS and Google GKE.
Since Octopus High Availability (HA) and Kubernetes go hand in hand, this guide will show how to support scaling Octopus Server instances with multiple HA nodes. It assumes a working knowledge of Kubernetes concepts, such as Pods, Services, Persistent Volume Claims, and Stateful Sets.
Pre-requisites
Whether you are running Octopus in a Container using Docker or Kubernetes, or running it on Windows Server, there are a number of items to consider when creating an Octopus High Availability cluster:
- A Highly available SQL Server database
- A shared file system for Artifacts, Packages, Task Logs, and Event Exports
- A Load balancer for traffic to the Octopus Web Portal
- Access to each Octopus Server node for Polling Tentacles
The following sections describe these in more detail.
SQL Server Database
How the database is made highly available is really up to you; to Octopus, it’s just a connection string. We are not experts on SQL Server high availability, so if you have an on-site DBA team, we recommend using them. There are many options for high availability with SQL Server, and Brent Ozar also has a fantastic set of resources on SQL Server Failover Clustering if you are looking for an introduction and practical guide to setting it up.
Octopus High Availability works with:
If you plan to host Octopus in Kubernetes using one of the managed Kubernetes platforms from a cloud provider such as AWS, Azure, or GCP, then a good option to consider for your SQL Server database is that provider's database PaaS offering.
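To Octopus Server running in Kubernetes, a managed database is just a different connection string passed to the container. The snippet below is a minimal sketch using the DB_CONNECTION_STRING environment variable shown later in this guide; the server name, login, and password are hypothetical placeholders for an Azure SQL example:
env:
  - name: DB_CONNECTION_STRING
    value: Server=tcp:my-octopus-sql.database.windows.net,1433;Database=Octopus;User Id=octopusadmin;Password=<your-password>;Encrypt=true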
For more details on the different hosted database options, refer to the documentation for each Cloud provider:
Running SQL in a Container
It's possible to run SQL Server in a container, which can be useful when running a Proof of Concept (PoC) with Octopus in Kubernetes.
The following YAML creates a single instance of SQL Server Express that can be deployed to a Kubernetes cluster. It creates a persistent volume claim to store the database files, a service to expose the database internally, and the database itself.
Although Octopus supports SQL Server Express, the edition has limitations. For more details, see the Microsoft SQL Server editions documentation.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mssql-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: Service
metadata:
  name: mssql
spec:
  type: ClusterIP
  ports:
    - port: 1433
      targetPort: 1433
      protocol: TCP
  selector:
    app: mssql
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-deployment
  labels:
    app: mssql
spec:
  selector:
    matchLabels:
      app: mssql
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mssql
    spec:
      terminationGracePeriodSeconds: 10
      securityContext:
        fsGroup: 10001
      volumes:
        - name: mssql-db
          persistentVolumeClaim:
            claimName: mssql-data
      containers:
        - name: mssql
          image: mcr.microsoft.com/mssql/server:2019-latest
          ports:
            - containerPort: 1433
          env:
            - name: MSSQL_PID
              value: Express
            - name: ACCEPT_EULA
              value: 'Y'
            - name: SA_PASSWORD
              value: Password01!
          volumeMounts:
            - name: mssql-db
              mountPath: /var/opt/mssql
Change the SA Password: If you use the YAML definition above, remember to change the SA_PASSWORD from the value used here.
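Rather than keeping the SA password in the manifest at all, one option is to store it in a Kubernetes Secret and reference it from the Deployment. This is a minimal sketch, assuming a hypothetical Secret named mssql-sa-password with a key named sa-password:
# Create the Secret (choose your own strong password)
kubectl create secret generic mssql-sa-password --from-literal=sa-password='<your-strong-password>'
Then, in the mssql container's env section, replace the plain-text SA_PASSWORD value with a secretKeyRef:
            - name: SA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mssql-sa-password
                  key: sa-password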
Load balancer
A load balancer is required to direct traffic to the Octopus Web Portal. If you're using Polling Tentacles, you may also need a way to access each Octopus Server node in the Octopus High Availability cluster individually.
Octopus Web Portal load balancer
The Octopus Web Portal is a React single page application (SPA) that can direct all backend requests to any Octopus Server node. This means we can expose all Octopus Server nodes through a single load balancer for the web interface.
The following YAML creates a load balancer service directing web traffic on port 80 to pods with the label app:octopus:
apiVersion: v1
kind: Service
metadata:
  name: octopus-web
spec:
  type: LoadBalancer
  ports:
    - name: web
      port: 80
      targetPort: 8080
      protocol: TCP
  selector:
    app: octopus
Octopus Server Node load balancer
Unlike the Octopus Web Portal, Polling Tentacles must be able to connect to each Octopus Server node individually to pick up new tasks. Our Octopus HA cluster assumes two nodes, so a load balancer is required for each node to allow direct access.
The following YAML creates load balancers with separate public IPs for each node. They direct web traffic to each node on port 80 and Polling Tentacle traffic on port 10943.
The octopus-0 load balancer:
apiVersion: v1
kind: Service
metadata:
  name: octopus-0
spec:
  type: LoadBalancer
  ports:
    - name: web
      port: 80
      targetPort: 8080
      protocol: TCP
    - name: tentacle
      port: 10943
      targetPort: 10943
      protocol: TCP
  selector:
    statefulset.kubernetes.io/pod-name: octopus-0
The octopus-1 load balancer:
apiVersion: v1
kind: Service
metadata:
  name: octopus-1
spec:
  type: LoadBalancer
  ports:
    - name: web
      port: 80
      targetPort: 8080
      protocol: TCP
    - name: tentacle
      port: 10943
      targetPort: 10943
      protocol: TCP
  selector:
    statefulset.kubernetes.io/pod-name: octopus-1
Note the selectors of statefulset.kubernetes.io/pod-name: octopus-0 and statefulset.kubernetes.io/pod-name: octopus-1. These labels are added to pods created as part of a Stateful Set, and the values are the combination of the Stateful Set name and the pod index.
For more information on Polling Tentacles with High Availability refer to our documentation on the topic.
File Storage
To share common files between the Octopus Server nodes, we need access to a minimum of four shared volumes that multiple pods can read from and write to simultaneously:
- Artifacts
- Packages
- Task Logs
- Event Exports
These are created via persistent volume claims with an access mode of ReadWriteMany to indicate they are shared between multiple pods.
Most of the YAML in this guide can be used with any Kubernetes provider. However, the YAML describing file storage can differ between Kubernetes providers, as they typically expose different names for their shared file systems via the storageClassName property.
To find out more about storage classes, refer to the Kubernetes Storage Classes documentation.
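To check which storage classes your cluster exposes (and which one is the default), you can list them with kubectl before writing your persistent volume claims:
# List the storage classes available in the cluster
kubectl get storageclass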
Whilst it is possible to mount external storage by manually defining Persistent Volume definitions in YAML, Cloud providers offering Kubernetes managed services typically include the option to dynamically provision file storage based on persistent volume claim definitions.
The next sections describe how to create file storage for use with Octopus running in Kubernetes using different Kubernetes providers to dynamically provision file storage.
AKS storage
The following YAML creates the shared persistent volume claims that will host the artifacts, built-in feed packages, the task logs, and event exports using the azurefile storage class, which is specific to Azure AKS:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: artifacts-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile
  resources:
    requests:
      storage: 1Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: repository-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile
  resources:
    requests:
      storage: 1Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-logs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile
  resources:
    requests:
      storage: 1Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: event-exports-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile
  resources:
    requests:
      storage: 1Gi
GKE storage
The following YAML creates the shared persistent volume claims that will host the artifacts, built-in feed packages, the task logs, and event exports using the standard-rwx storage class from the Google Filestore CSI driver.
GKE Cluster version pre-requisite: To use the Filestore CSI driver, your clusters must use GKE version 1.21 or later. The Filestore CSI driver is supported for clusters using Linux.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: artifacts-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: standard-rwx
  resources:
    requests:
      storage: 1Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: repository-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: standard-rwx
  resources:
    requests:
      storage: 1Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-logs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: standard-rwx
  resources:
    requests:
      storage: 1Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: event-exports-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: standard-rwx
  resources:
    requests:
      storage: 1Gi
If you are running a GKE cluster in a non-default VPC network in Google Cloud, you may need to define your own storage class specifying the network name. The following YAML shows creating a storage class that can be used with a non-default VPC network in GKE called my-custom-network-name:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-custom-network-csi-filestore
provisioner: filestore.csi.storage.gke.io
parameters:
  # CIDR range to allocate Filestore IP ranges from
  # reserved-ipv4-cidr: 192.168.92.22/26
  # standard (default), premier, or enterprise
  # tier: premier
  # Name of the VPC. Note that non-default VPCs require special firewall rules to be set up.
  network: my-custom-network-name
allowVolumeExpansion: true
Firewall rules may also need to be configured to allow the Kubernetes cluster access to the Filestore. Refer to the Filestore Firewall rules for further details.
Once the storage class has been defined, you can mount your persistent volume claims using the name of the storage class. In the example above, that was named my-custom-network-csi-filestore.
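For example, a claim for the artifacts folder that uses the custom storage class might look like the following. This is a sketch only; the claim name and size mirror the earlier examples, and the storageClassName must match whatever you called your storage class:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: artifacts-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: my-custom-network-csi-filestore
  resources:
    requests:
      storage: 1Gi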
Sharing a single volume
Defining multiple persistent volume claims results in multiple storage buckets being created, one for each claim. This can result in an increased storage cost.
Another option is to create a single persistent volume claim that can be shared for each directory needed for Octopus. The following YAML shows an example of creating a single persistent volume claim, using the azurefile storage class:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: octopus-storage-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile
  resources:
    requests:
      storage: 4Gi
In the volumes definition of the Stateful Set (used for your Octopus Server nodes), you can mount the single volume:
volumes:
  - name: octopus-storage-vol
    persistentVolumeClaim:
      claimName: octopus-storage-claim
Then you can reference the volume multiple times in the volumeMounts definition. This is achieved by using the volumeMounts.subPath property to specify a sub-path inside the referenced volume instead of its root:
volumeMounts:
  - name: octopus-storage-vol
    mountPath: /repository
    subPath: repository
  - name: octopus-storage-vol
    mountPath: /artifacts
    subPath: artifacts
  - name: octopus-storage-vol
    mountPath: /taskLogs
    subPath: taskLogs
  - name: octopus-storage-vol
    mountPath: /eventExports
    subPath: eventExports
Deploying the Octopus Server
Once the pre-requisites are in place, we need to install the Octopus Server. This can be done via our official Helm chart, or the Kubernetes resources can be directly deployed.
Helm Chart
An official chart for deploying Octopus in a Kubernetes cluster via Helm is available.
See the usage instructions for details of how to install the Helm chart.
The chart packages are available on DockerHub.
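As a rough sketch of what an installation with Helm 3 looks like, the commands below use <chart-reference> as a placeholder for the chart reference given in the usage instructions, and assume you supply your own values file; they are not a substitute for those instructions:
# Inspect the chart's configurable values before installing
helm show values <chart-reference>

# Install Octopus into its own namespace with your own values file
helm install octopus <chart-reference> --namespace octopus --create-namespace --values octopus-values.yaml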
Octopus Server nodes
As an alternative to using Helm, you can directly install the Kubernetes resources into your cluster.
Octopus requires the server component to be installed as a Stateful Set.
A Stateful Set is used rather than a Kubernetes Deployment, as the Stateful Set provides:
- Fixed names
- Consistent ordering
- An initial deployment process that rolls out one pod at a time, ensuring each is healthy before the next is started.
This functionality works very nicely when deploying Octopus, as we need to ensure that Octopus instances start sequentially so only one instance attempts to apply updates to the database schema. However, redeployments (e.g. when upgrading) do need special consideration; see the Upgrading Octopus in Kubernetes section for further details.
The following YAML creates a Stateful Set with two pods. These pods will be called octopus-0 and octopus-1, which is also the value assigned to the statefulset.kubernetes.io/pod-name label on each pod. This is how we can link services exposing individual pods.
The pods then mount a single shared volume for the artifacts, built-in feed packages, task logs, per-node server logs, and event exports.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: octopus
spec:
  selector:
    matchLabels:
      app: octopus
  serviceName: "octopus"
  replicas: 2
  template:
    metadata:
      labels:
        app: octopus
    spec:
      affinity:
        # Try and keep Octopus nodes on separate Kubernetes nodes
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - octopus
                topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 10
      volumes:
        - name: octopus-storage-vol
          persistentVolumeClaim:
            claimName: octopus-storage-claim
      containers:
        - name: octopus
          image: octopusdeploy/octopusdeploy:2022.4.8471
          securityContext:
            privileged: true
          env:
            - name: ACCEPT_EULA
              # "Y" means accepting the EULA at https://octopus.com/company/legal
              value: "Y"
            - name: OCTOPUS_SERVER_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: DB_CONNECTION_STRING
              value: Server=mssql,1433;Database=Octopus;User Id=SA;Password=Password01!
            - name: ADMIN_USERNAME
              value: admin
            - name: ADMIN_PASSWORD
              value: Password01!
            - name: ADMIN_EMAIL
              value: admin@example.org
            - name: OCTOPUS_SERVER_BASE64_LICENSE
              # Your base64 encoded license key goes here. When using more than one node, an HA license is required. Without an HA license, the Stateful Set can have a replica count of 1.
              value: <License-goes-here>
            - name: MASTER_KEY
              # The base64 Master Key to use to connect to an existing database. If not supplied, and the database does not exist, a new one will be generated.
              value: <MasterKey-goes-here>
          ports:
            - containerPort: 8080
              name: web
            - containerPort: 10943
              name: tentacle
          volumeMounts:
            - name: octopus-storage-vol
              mountPath: /artifacts
              subPath: artifacts
            - name: octopus-storage-vol
              mountPath: /repository
              subPath: repository
            - name: octopus-storage-vol
              mountPath: /taskLogs
              subPath: taskLogs
            - name: octopus-storage-vol
              mountPath: /eventExports
              subPath: eventExports
            - name: octopus-storage-vol
              mountPath: /home/octopus/.octopus/OctopusServer/Server/Logs
              subPathExpr: serverLogs/$(OCTOPUS_SERVER_NODE_NAME)
          lifecycle:
            preStop:
              exec:
                command:
                  - /bin/bash
                  - -c
                  - '[[ -f /Octopus/Octopus.Server ]] && EXE="/Octopus/Octopus.Server" || EXE="dotnet /Octopus/Octopus.Server.dll"; $EXE node --instance=OctopusServer --drain=true --wait=600 --cancel-tasks;'
            # postStart must finish in 5 minutes or the container will fail to create
            postStart:
              exec:
                command:
                  - /bin/bash
                  - -c
                  - 'URL=http://localhost:8080; x=0; while [ $x -lt 9 ]; do response=$(/usr/bin/curl -k $URL/api/octopusservernodes/ping --write-out %{http_code} --silent --output /dev/null); if [ "$response" -ge 200 ] && [ "$response" -le 299 ]; then break; fi; if [ "$response" -eq 418 ]; then [[ -f /Octopus/Octopus.Server ]] && EXE="/Octopus/Octopus.Server" || EXE="dotnet /Octopus/Octopus.Server.dll"; $EXE node --instance=OctopusServer --drain=false; now=$(date); echo "${now} Server cancelling drain mode."; break; fi; now=$(date); echo "${now} Server is not ready, can not disable drain mode."; sleep 30; done;'
          readinessProbe:
            exec:
              command:
                - /bin/bash
                - -c
                - URL=http://localhost:8080; response=$(/usr/bin/curl -k $URL/api/serverstatus/hosted/internal --write-out %{http_code} --silent --output /dev/null); /usr/bin/test "$response" -ge 200 && /usr/bin/test "$response" -le 299
            initialDelaySeconds: 30
            periodSeconds: 30
            timeoutSeconds: 5
            failureThreshold: 60
          livenessProbe:
            exec:
              command:
                - /bin/bash
                - -c
                - URL=http://localhost:8080; response=$(/usr/bin/curl -k $URL/api/octopusservernodes/ping --write-out %{http_code} --silent --output /dev/null); /usr/bin/test "$response" -ge 200 && /usr/bin/test "$response" -le 299 || /usr/bin/test "$response" -eq 418
            periodSeconds: 30
            timeoutSeconds: 5
            failureThreshold: 10
          startupProbe:
            exec:
              command:
                - /bin/bash
                - -c
                - URL=http://localhost:8080; response=$(/usr/bin/curl -k $URL/api/octopusservernodes/ping --write-out %{http_code} --silent --output /dev/null); /usr/bin/test "$response" -ge 200 && /usr/bin/test "$response" -le 299 || /usr/bin/test "$response" -eq 418
            failureThreshold: 30
            periodSeconds: 60
Change the Default values: If you use the YAML definition above, remember to change the default values entered, including the Admin Username, Admin Password, and the version of the octopusdeploy/octopusdeploy image to use. You also need to provide values for the License Key and database Master Key.
Once fully deployed, this Stateful Set configuration will have three load balancers, and three public IPs.
The octopus-web service is used to access the web interface. The Octopus Web Portal can make requests to any node, so load balancing across all the nodes means the web interface is accessible even if one node is down.
The octopus-0 service is used to point Polling Tentacles to the first node, and the octopus-1 service is used to point Polling Tentacles to the second node. We have also exposed the web interface through these services, which gives the ability to interact directly with a given node, but the octopus-web service should be used for day-to-day work as it is load balanced.
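Once the load balancer services have been provisioned, you can find the public IP address assigned to each of them with kubectl; the EXTERNAL-IP column shows the address to use for the web interface and for each node:
# Show the external IPs assigned to the three load balancer services
kubectl get service octopus-web octopus-0 octopus-1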
The next sections describe the Stateful Set definition in more detail.
Octopus Server Pod affinity
For a greater degree of reliability, Pod anti-affinity rules are used in the Stateful Set to ensure Octopus pods are not placed onto the same node. This ensures the loss of a node does not bring down the Octopus High Availability cluster.
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - octopus
          topologyKey: kubernetes.io/hostname
Octopus Server Pod logs
In addition to the shared folders that are mounted for Packages, Artifacts, Task Logs, and Event Exports, each Octopus Server node (Pod) also writes logs to a local folder in each running container.
To mount the same volume used for the shared folders for the server logs, we need a way to create a sub-folder on the external volume that’s unique to each Octopus Server node running in a Pod.
It’s possible to achieve this using a special expression known as subPathExpr. The server logs folder is mounted to the unique sub-folder determined by the environment variable OCTOPUS_SERVER_NODE_NAME
, which is simply the pod name.
- name: octopus-storage-vol
  mountPath: /home/octopus/.octopus/OctopusServer/Server/Logs
  subPathExpr: serverLogs/$(OCTOPUS_SERVER_NODE_NAME)
An alternative option to using an external volume mount is to use a volume claim template.
For each VolumeClaimTemplate entry defined in a StatefulSet, each Pod receives one PersistentVolumeClaim. When a Pod is (re-)scheduled onto a node, its volumeMounts mount the PersistentVolumes associated with its PersistentVolumeClaims.
volumeClaimTemplates:
  - metadata:
      name: server-logs-vol
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 200Mi
You can then mount the folder for the server logs, with each Octopus Server Node (Pod) getting its own persistent storage:
- name: server-logs-vol
  mountPath: /home/octopus/.octopus/OctopusServer/Server/Logs
Note: The PersistentVolumes associated with the Pods' PersistentVolumeClaims are not deleted when the Pods or the StatefulSet are deleted. This must be done manually.
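If you do need to clean them up, claims created from a volume claim template follow the <template-name>-<pod-name> naming convention, so for the example above they can be listed and removed with kubectl once you are sure the logs are no longer needed:
# List the persistent volume claims created by the volume claim template
kubectl get pvc

# Delete the per-node claims (the underlying volumes are reclaimed according to the storage class reclaim policy)
kubectl delete pvc server-logs-vol-octopus-0 server-logs-vol-octopus-1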
Container lifecycle hooks
The preStop hook is used to drain an Octopus Server node before it is stopped. This gives the node time to complete any running tasks and prevents it from starting new ones. The postStart hook does the reverse and disables drain mode when the Octopus Server node is up and running.
Readiness, start up, and liveness probes
The readinessProbe is used to ensure the Octopus Server node is responding to network traffic before the pod is marked as ready. The startupProbe is used to delay the livenessProbe until the node has started, and the livenessProbe runs continuously to ensure the Octopus Server node is functioning correctly.
UI-only and back-end nodes
When managing an Octopus High Availability cluster, it can be beneficial to separate the Octopus Web Portal from the deployment orchestration of tasks that Octopus Server provides. It’s possible to create UI-only nodes that have the sole responsibility to serve web traffic for the Octopus Web Portal and the Octopus REST API.
By default, all Octopus Server nodes are task nodes because the default task cap is set to 5. To create UI-only Octopus Server nodes, you need to set the task cap for each node to 0.
When running Octopus in Kubernetes, it’d be nice to increase the replicaCount
property and direct web traffic to only certain pods in our Stateful Set. However, it takes additional configuration to set-up UI only nodes as the Stateful Set workload we created previously has web traffic directed to pods with the label app:octopus
. It’s not currently possible to use Match expressions to direct web traffic to only certain pods.
In order to create UI-only nodes in Kubernetes, you need to perform some additional configuration, sketched below:
- Create an additional Stateful Set just for the UI-only nodes, for example called octopus-ui.
- Change the container lifecycle hooks for the octopus-ui Stateful Set to ensure the nodes don't start drained, and include the node command to set the task cap to 0.
- Update the octopus-web Load balancer Service to direct traffic to pods with the label app:octopus-ui.
If you use Polling Tentacles, you don't need to expose port 10943 on the LoadBalancer Service definition for the UI-only nodes, as they won't be responsible for handling deployments or other tasks. For the same reason, you don't need to configure any Polling Tentacles to poll UI-only nodes.
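The excerpt below is a minimal sketch of those changes rather than a complete manifest. It assumes the octopus-ui Stateful Set reuses the container definition shown earlier, that its pods are labeled app: octopus-ui, and that your Octopus Server version supports the node command's --taskCap argument:
# In the octopus-ui Stateful Set, adjust the postStart hook so nodes start undrained with a task cap of 0 (UI-only)
lifecycle:
  postStart:
    exec:
      command:
        - /bin/bash
        - -c
        - '[[ -f /Octopus/Octopus.Server ]] && EXE="/Octopus/Octopus.Server" || EXE="dotnet /Octopus/Octopus.Server.dll"; $EXE node --instance=OctopusServer --drain=false --taskCap=0;'

# Update the octopus-web service selector so only the UI-only pods receive web traffic
apiVersion: v1
kind: Service
metadata:
  name: octopus-web
spec:
  type: LoadBalancer
  ports:
    - name: web
      port: 80
      targetPort: 8080
      protocol: TCP
  selector:
    app: octopus-ui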
Accessing Server node logs
When running Octopus Server on Windows Server, to access the logs for an Octopus Server Node, you’d typically either log into the Server using Remote Desktop and access them locally, or you might publish the logs to a centralized logging tool.
In Kubernetes there are a number of different options to access the Octopus Server Node logs.
Using kubectl you can access the logs for each pod by running the following commands:
# Get logs for Node 0
kubectl logs octopus-0
# Get logs for Node 1
kubectl logs octopus-1
If you want to watch the logs in real-time, you can tail the logs using the following commands:
# Tail logs for Node 0
kubectl logs octopus-0 -f
# Tail logs for Node 1
kubectl logs octopus-1 -f
Sometimes it can be useful to see all of the logs in one place. For this you can use the app=octopus label selector:
kubectl logs -l app=octopus
You can also view the logs interactively. Here is an example opening an interactive shell to the octopus-0 pod and tailing the Server logs:
kubectl exec -it octopus-0 -- bash
# Change PWD to Logs folder
cd /home/octopus/.octopus/OctopusServer/Server/Logs
# Tail logs
sudo tail OctopusServer.txt
If you’ve configured your Octopus Server node logs to be mounted to a unique folder per pod on an external volume then it’s possible to mount the external volume on a virtual machine that can access the volume.
Here is an example of installing the necessary tooling and the commands to mount a Google NFS Filestore volume called vol1, accessible at the private IP address 10.0.0.1, on a Linux VM:
# Install tools
sudo apt-get -y update &&
sudo apt-get install nfs-common
# Make directory to mount
sudo mkdir -p /mnt/octo-ha-nfs
# Mount NFS
sudo mount 10.0.0.1:/vol1 /mnt/octo-ha-nfs
# tail logs
sudo tail /mnt/octo-ha-nfs/serverLogs/OctopusServer.txt
Upgrading Octopus in Kubernetes
An initial deployment of the Stateful Set described above works exactly as Octopus requires: one pod at a time is successfully started before the next. This gives the first node a chance to update the SQL schema with any required changes, and all other nodes then start up and share the already configured database.
One limitation with Stateful Sets is how they process updates. For example, if the Docker image version was updated, by default the rolling update strategy is used. A rolling update deletes and recreates each pod, which means that during the update there will be a mix of old and new versions of Octopus. This won't work, as the new version may apply schema updates that the old version cannot use, leading to unpredictable results at best, and possibly corrupted data.
The typical solution to this problem is to use a recreate deployment strategy. Unfortunately, Stateful Sets don’t support the recreate strategy.
What this means is that the Stateful Set can not be updated in place, but instead must be deleted and then a new version deployed.
Delete the Stateful Set
To delete the Stateful Set, first you can verify if it exists by running the following kubectl command:
kubectl get statefulset octopus -o jsonpath="{.status.replicas}"
This checks to see if the Stateful Set exists by retrieving its replicas count. If the Stateful Set doesn't exist, the command will complete with an error:
Error from server (NotFound): statefulsets.apps "octopus" not found.
If the Stateful Set exists, you can delete it by running the following kubectl command:
kubectl delete statefulset octopus
Deploy new Stateful Set
Once the old Stateful Set has been deleted, a fresh copy of the Stateful Set can be deployed. It will start the new pods one by one, allowing the database update to complete as expected.
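Assuming the updated Stateful Set definition (for example, with a newer octopusdeploy/octopusdeploy image tag) is saved in a file such as octopus-statefulset.yaml (a hypothetical filename), deploying the fresh copy is a single apply:
# Deploy the updated Stateful Set; the pods are started one at a time
kubectl apply -f octopus-statefulset.yaml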
Octopus in Kubernetes with SSL
It’s recommended best practice to access your Octopus instance over a secure HTTPS connection.
Whilst this guide doesn’t include specific instructions on how to configure access to Octopus Server in Kubernetes using an SSL/TLS certificate, there are many guides available.
In Kubernetes, this can be configured using an Ingress Controller, for example NGINX.
For web traffic destined for the Octopus Web Portal and REST API, you would terminate SSL on the ingress controller. For Polling Tentacles, passthrough would need to be allowed, usually on port 10943.
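As a sketch of what that passthrough could look like with the NGINX ingress controller, TCP traffic can be exposed through its tcp-services ConfigMap, which maps an external port to a Service. The external ports, namespaces, and service names below are assumptions based on the services defined earlier in this guide; refer to the ingress-nginx documentation for how the ConfigMap is wired up to the controller:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # external port: "namespace/service:port"
  "10943": "default/octopus-0:10943"
  "10944": "default/octopus-1:10943"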
The following YAML creates an Ingress resource that routes to the service named octopus on port 8080:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: octopus-ingress-example
spec:
  tls:
    - hosts:
        - your.octopus.domain.com
      # This assumes tls-secret exists and the SSL
      # certificate contains a CN for your.octopus.domain.com
      secretName: tls-secret
  ingressClassName: nginx
  rules:
    - host: your.octopus.domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              # This assumes octopus exists and routes to healthy endpoints
              service:
                name: octopus
                port:
                  number: 8080
Trusting custom/internal Certificate Authority (CA)
Octopus Server can interface with several external sources (feeds, git repos, etc.), and those sources are often configured to use SSL/TLS for secure communication. It is common for organizations to have their own Certificate Authority (CA) servers for their internal networks. A CA server can issue SSL certificates for internal resources, such as build servers or internally hosted applications, without purchasing from a third-party vendor. Technologies such as Group Policy Objects (GPO) can configure machines (servers and clients) to trust the CA automatically, so users don’t have to configure trust for them manually. However, this is not inherited in Kubernetes containers. When attempting to configure a connection to an external resource with an untrusted CA, you’ll most likely encounter an error similar to this:
Could not connect to the package feed. The SSL connection could not be established, see inner exception. The remote certificate is invalid because of errors in the certificate chain: UntrustedRoot
Kubernetes provides a method to incorporate a certificate without having to modify hosts or embed the certificate within the container by using a ConfigMap.
Create ConfigMap
To apply the certificate to the cluster, you'll first need the certificate file in either .pem or .crt format. Once you have the file, use kubectl to create a ConfigMap from it:
kubectl -n <namespace> create configmap ca-pemstore --from-file=my-cert.pem
Add ConfigMap to YAML for Octopus Server
With the ConfigMap created, add a volumeMounts component to the container section for Octopus Server. The following is an abbreviated portion of the YAML:
...
containers:
  - name: octopus
    image: octopusdeploy/octopusdeploy:2022.4.8471
    volumeMounts:
      - name: ca-pemstore
        mountPath: /etc/ssl/certs/my-cert.pem
        subPath: my-cert.pem
        readOnly: false
...
Add the following excerpt to the end of the Octopus Server YAML:
volumes:
  - name: ca-pemstore
    configMap:
      name: ca-pemstore
Octopus in Kubernetes example
View a working example that deploys an Octopus High Availability configuration to a GKE Kubernetes cluster in our samples instance.
The runbook consists of a number of Deploy Kubernetes YAML steps that deploy the resources discussed in this guide.