Migrating Confluence instance to Kubernetes
Platform Notice: Data Center - This article applies to Atlassian products on the Data Center platform.
Note that this knowledge base article was created for the Data Center version of the product. Data Center knowledge base articles for non-Data Center-specific features may also work for Server versions of the product; however, they have not been tested. Support for Server* products ended on February 15th 2024. If you are running a Server product, you can visit the Atlassian Server end of support announcement to review your migration options.
*Except Fisheye and Crucible
Summary
This document provides instructions for migrating a Confluence Server or Data Center instance, running on physical hosts or virtual machines, to a Kubernetes cluster.
Environment
- A Confluence Data Center or Server instance, version 7.19.x.
- A backend PostgreSQL database installed on a separate host and reachable from the Kubernetes cluster.
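As a pre-flight check, you can confirm that the database is reachable from inside the cluster before starting. This is a minimal sketch using a throwaway Postgres client pod; `<DB_HOST>`, `<DBUSER>`, `<DBPASSWORD>`, and the database name are placeholders for your own values:

```shell
# Start a temporary Postgres client pod and attempt a connection.
# <DB_HOST>, <DBUSER>, <DBPASSWORD> and the database name are placeholders.
kubectl run pg-check --rm -i --restart=Never \
  --image=postgres:14 \
  --env="PGPASSWORD=<DBPASSWORD>" -- \
  psql "host=<DB_HOST> port=5432 user=<DBUSER> dbname=confluence" -c "SELECT 1;"
```

If the connection succeeds, the pod prints the result of `SELECT 1` and is removed automatically.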
Solution
Please follow the steps outlined below to complete this migration:
- Create a persistent volume, with a matching claim named connie-confluence-shared-home, for hosting the Confluence shared home in the Kubernetes cluster.
- Transfer to the volume created above:
  - the content of the shared home folder, if the migration is from a Confluence DC instance with clustering enabled;
  - the content of the folder named shared-home, if the migration is from a Server or non-clustered instance.
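One way to get the data onto the new volume is to mount the claim in a temporary helper pod and copy the files into it with kubectl cp. The pod name and source path below are illustrative; only the claim name connie-confluence-shared-home comes from the steps above:

```shell
# Illustrative only: mount the shared-home PVC in a helper pod.
kubectl apply -n confluence -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: shared-home-loader
spec:
  containers:
    - name: loader
      image: alpine
      command: ["sleep", "3600"]
      volumeMounts:
        - name: shared-home
          mountPath: /shared-home
  volumes:
    - name: shared-home
      persistentVolumeClaim:
        claimName: connie-confluence-shared-home
EOF

# Copy the shared home content from the old host into the mounted volume.
kubectl cp /path/to/old/shared-home/. confluence/shared-home-loader:/shared-home/

# Remove the helper pod when done.
kubectl delete pod shared-home-loader -n confluence
```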
- Once the copy has been completed, the file confluence.cfg.xml will look like the one below:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<confluence-configuration>
  <setupStep>complete</setupStep>
  <setupType>custom</setupType>
  <buildNumber>8804</buildNumber>
  <properties>
    <property name="access.mode">READ_WRITE</property>
    <property name="atlassian.license.message">--- LICENSE KEY ----</property>
    <property name="hibernate.setup">true</property>
    <property name="jwt.private.key">--- JWT private key ---</property>
    <property name="jwt.public.key">--- JWT public key ---</property>
    <property name="lucene.index.dir">${localHome}/index</property>
    <property name="synchrony.service.authtoken">--- Synchrony auth token ---</property>
  </properties>
</confluence-configuration>
```
- confluence.cfg.xml needs to be modified to enable the clustering configuration. The section to add is enclosed between the Enable clustering comments below; note that setupType also changes from custom to cluster:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<confluence-configuration>
  <setupStep>complete</setupStep>
  <setupType>cluster</setupType>
  <buildNumber>8804</buildNumber>
  <properties>
    <property name="access.mode">READ_WRITE</property>
    <property name="atlassian.license.message">--- LICENSE KEY ----</property>
    <property name="hibernate.setup">true</property>
    <property name="jwt.private.key">--- JWT private key ---</property>
    <property name="jwt.public.key">--- JWT public key ---</property>
    <property name="lucene.index.dir">${localHome}/index</property>
    <property name="synchrony.service.authtoken">--- Synchrony auth token ---</property>
    <!-- Enable clustering -->
    <property name="confluence.cluster">true</property>
    <property name="confluence.cluster.authentication.enabled">true</property>
    <property name="confluence.cluster.authentication.secret">73bf51061d2e1f3e73f643374a6d3f2ced8371f3</property>
    <!-- Enable clustering : it is advised to change the secret value above -->
  </properties>
</confluence-configuration>
```
- Besides this file, make sure to copy the attachments folder from the Confluence Server instance to this PV.
- Make sure that the ownership of the files in the PV is assigned to user and group 2002, e.g. by running the command below:

```shell
sudo chown -R 2002:2002 <SHARED_HOME_PATH>
```
- Create a secret for storing the Confluence DC license (replace $CONNIELICENSE with the appropriate value, all on one line):

```shell
kubectl create secret generic connielicense --from-literal=license-key="$CONNIELICENSE" -n confluence
```
- Create a secret to store the DB credentials (replace $DBUSER and $DBPASSWORD with the appropriate values):

```shell
kubectl create secret generic conniedb --from-literal=username=$DBUSER --from-literal=password=$DBPASSWORD -n confluence
```
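Before moving on, it can be worth confirming that both secrets were created with the expected keys; `kubectl describe` shows the key names and byte sizes without printing the values:

```shell
# Show the keys (and byte sizes) stored in each secret without exposing values.
kubectl describe secret connielicense -n confluence
kubectl describe secret conniedb -n confluence
```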
- Create the following connie-values.yaml (replacing the values starting with $ appropriately, and any other value that doesn't fit your needs). Note that database.type is set to postgresql to match the JDBC URL and the PostgreSQL backend described above:

```yaml
replicaCount: 1
image:
  repository: atlassian/confluence
  pullPolicy: IfNotPresent
  tag: "7.19.18"
serviceAccount:
  create: true
  name:
  imagePullSecrets: []
  annotations: {}
  role:
    create: true
  clusterRole:
    create: true
    name:
  roleBinding:
    create: true
  clusterRoleBinding:
    create: true
    name:
  eksIrsa:
    roleArn:
database:
  type: postgresql
  url: jdbc:postgresql://POSTGRESQL_CONNECTION_STRING
  credentials:
    secretName: conniedb
    usernameSecretKey: username
    passwordSecretKey: password
volumes:
  localHome:
    persistentVolumeClaim:
      create: true
      storageClassName: $STORAGE_CLASS
      resources:
        requests:
          storage: 100Gi
    customVolume: {}
    mountPath: "/var/atlassian/application-data/confluence"
  sharedHome:
    persistentVolumeClaim:
      create: false
    customVolume:
      persistentVolumeClaim:
        claimName: connie-confluence-shared-home
    mountPath: "/var/atlassian/application-data/shared-home"
    subPath:
    nfsPermissionFixer:
      enabled: true
      mountPath: "/shared-home"
      imageRepo: alpine
      imageTag: latest
      command:
  synchronyHome:
    persistentVolumeClaim:
      create: true
      storageClassName: $STORAGE_CLASS
      resources:
        requests:
          storage: 5Gi
    customVolume: {}
    mountPath: "/var/atlassian/application-data/confluence"
  additional: []
  additionalSynchrony: []
  defaultPermissionsMode: 484
ingress:
  create: false
  host: $FULLY_QUALIFIED_DOMAIN_NAME_CONFLUENCE_SITE
confluence:
  service:
    port: 80
    type: ClusterIP
    loadBalancerIP:
    annotations: {}
  securityContextEnabled: true
  securityContext:
    fsGroup: 2002
  containerSecurityContext: {}
  umask: "0022"
  setPermissions: true
  ports:
    http: 8090
    hazelcast: 5701
  license:
    secretName: connielicense
    secretKey: license-key
  readinessProbe:
    initialDelaySeconds: 10
    periodSeconds: 5
    failureThreshold: 6
  accessLog:
    enabled: true
    mountPath: "/opt/atlassian/confluence/logs"
    localHomeSubPath: "logs"
  clustering:
    enabled: true
    usePodNameAsClusterNodeName: true
  s3AttachmentsStorage:
    bucketName:
    bucketRegion:
    endpointOverride:
  resources:
    jvm:
      maxHeap: "1g"
      minHeap: "1g"
      reservedCodeCache: "256m"
    container:
      requests:
        cpu: "2"
        memory: "2G"
  shutdown:
    terminationGracePeriodSeconds: 25
    command: "/shutdown-wait.sh"
  forceConfigUpdate: false
  additionalJvmArgs: []
  additionalLibraries: []
  additionalBundledPlugins: []
  additionalVolumeMounts: []
  additionalEnvironmentVariables: []
  additionalPorts: []
  additionalVolumeClaimTemplates: []
  topologySpreadConstraints: []
  jvmDebug:
    enabled: false
synchrony:
  enabled: true
  replicaCount: 1
  podAnnotations: {}
  service:
    port: 80
    type: ClusterIP
    loadBalancerIP:
    annotations: {}
  securityContextEnabled: true
  securityContext:
    fsGroup: 2002
  containerSecurityContext: {}
  setPermissions: true
  ports:
    http: 8091
    hazelcast: 5701
  readinessProbe:
    healthcheckPath: "/synchrony/heartbeat"
    initialDelaySeconds: 5
    periodSeconds: 1
    failureThreshold: 10
  resources:
    jvm:
      minHeap: "1g"
      maxHeap: "2g"
      stackSize: "2048k"
    container:
      requests:
        cpu: "2"
        memory: "2.5G"
  additionalJvmArgs: []
  shutdown:
    terminationGracePeriodSeconds: 25
  additionalLibraries: []
  additionalVolumeMounts: []
  additionalPorts: []
  topologySpreadConstraints: []
```
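Before installing, you can render the chart locally to catch indentation or value errors in the file. The repository URL below is the public Atlassian Data Center Helm charts repository:

```shell
# Add the Atlassian Helm repository (skip if already added) and refresh it.
helm repo add atlassian-data-center https://atlassian.github.io/data-center-helm-charts
helm repo update

# Render the chart with your values; a clean render exits 0.
helm template connie atlassian-data-center/confluence \
  --namespace confluence --values connie-values.yaml > /dev/null && echo "values render OK"
```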
- Deploy using the above connie-values.yaml by running:

```shell
helm install connie atlassian-data-center/confluence --namespace confluence --values connie-values.yaml
```
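With release name connie, the chart is expected to create a StatefulSet named connie-confluence (an assumption based on the chart's default naming); you can watch the rollout with:

```shell
# Wait for the Confluence StatefulSet to become ready, then list the pods.
kubectl rollout status statefulset/connie-confluence -n confluence --timeout=10m
kubectl get pods -n confluence
```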
- Once the deployment has completed, check that the Confluence instance running on Kubernetes can be accessed correctly and that the attachments are accessible.
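One way to check the node without exposing an ingress is to port-forward to the Confluence service (assumed to be named connie-confluence, per the chart's default naming) and query the status endpoint, which reports a RUNNING state on a healthy node:

```shell
# Forward local port 8090 to the service and query Confluence's status endpoint.
kubectl port-forward svc/connie-confluence 8090:80 -n confluence &
PF_PID=$!
sleep 3
curl -s http://localhost:8090/status
kill "$PF_PID"
```

Attachments can then be spot-checked by opening a migrated page in the browser and downloading one of its attached files.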