Confluence 9.2 Long Term Support release performance report


Would you like to measure Confluence Data Center performance without having to run your own tests? We’ve benchmarked it for you.

In line with our performance reports for Jira and Jira Service Management, the Confluence Data Center performance report provides general performance benchmarks through a framework of Non-Functional Requirements thresholds.


About Long Term Support releases

We recommend upgrading Confluence Data Center regularly. If your organization's processes mean you can only upgrade about once a year, upgrading to a Long Term Support release is a good option. It provides continued access to fixes for critical security, stability, data integrity, and performance issues.

Highlights

As with all Long Term Support (LTS) releases, we aim to provide the same, if not better, performance. Confluence Data Center 9.2 LTS testing demonstrates largely stable performance across the product, with a few improvements.

Here are some highlights seen during the testing we ran:

  • Common tasks such as View page, View dashboard, Create page, and Publish page are all well below thresholds.

  • Signing in and out of Confluence is fast, with response times far below thresholds.

  • Search time is improved by 64% thanks to 9.2 LTS using OpenSearch to handle remote indexing (on recommended hardware).

We will continue to improve performance and scalability in future releases so that teams can move with ease through their workspaces, and our largest customers can scale confidently.

Performance

This year, we’re showcasing the results of our performance testing through a new framework that focuses on Non-Functional Requirements (NFR) thresholds for essential user actions. These thresholds serve as benchmarks for reliability, responsiveness, and scalability, ensuring that our system adheres to the expected performance standards.

We established target Key Performance Indicator (KPI) thresholds for three key types of actions.

Action type | 50th percentile (ms) | 90th percentile (ms)
Page interactions | 2500 | 3000
Engagement actions | 2500 | 3000
Content management | 3000 | 5000

The recorded response time benchmarks provide insight into the system’s performance:

  • 50th percentile - The median response time, representing the typical experience for most users. It's less affected by extreme outliers, so it reflects the central tendency of response times.

  • 90th percentile - Shows performance under worst-case scenarios or high-load conditions, which affect a smaller portion of users (a short sketch of how these percentiles are derived follows this list).
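To make the benchmark definition concrete, here is a minimal sketch, assuming nearest-rank percentiles and made-up sample timings, of how p50/p90 values can be derived from raw response times and compared against the page-interaction thresholds. It is an illustration, not our actual measurement tooling.

```python
# A minimal sketch (not the actual test harness) of deriving p50/p90 benchmarks
# from raw response-time samples and checking them against NFR thresholds.
def percentile(samples, pct):
    """Nearest-rank percentile of a list of response times in milliseconds."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical response times (ms) for one action during a test run; placeholder data.
view_page_ms = [480, 512, 530, 545, 560, 590, 610, 640, 700, 820]

p50 = percentile(view_page_ms, 50)
p90 = percentile(view_page_ms, 90)

# Page-interaction thresholds from the table above: 2500 ms (p50) and 3000 ms (p90).
within_threshold = p50 <= 2500 and p90 <= 3000
print(f"p50={p50} ms, p90={p90} ms, within NFR thresholds: {within_threshold}")
```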

Page interactions

Action | 50th percentile (ms) | 90th percentile (ms)
Target KPI threshold | 2500 | 3000
View Dashboard | 726 | 783
View attachment | 1609 | 2116
View blog post | 953 | 1179
View page | 532 | 616
View page (w/ small attachments) | 614 | 753
View page (w/ large attachments) | 615 | 711
View page after publish | 518 | 612
View page after publish (w/ small attachments) | 593 | 781
View page after publish (w/ large attachments) | 594 | 698
Edit page | 1018 | 1214
Edit page (from shared link) | 1376 | 1710
Edit page (w/ attachments) | 636 | 735
Create blog post | 991 | 1318
Create page (collaborative) | 1376 | 1529
Create page (single user) | 547 | 700
Publish page | 410 | 467
Publish page (w/ small attachments) | 648 | 708
Publish page (w/ large attachments) | 650 | 725
Log in | 396 | 412
Log out | 620 | 659

Engagement actions

Action | 50th percentile (ms) | 90th percentile (ms)
Target KPI threshold | 2500 | 3000
Comment | 1527 | 1774
Like page | 452 | 1021

Content management

Action | 50th percentile (ms) | 90th percentile (ms)
Target KPI threshold | 3000 | 5000
Upload attachment | 2609 | 3424
Search | 2584 | 4542

OpenSearch

Results in this section compare OpenSearch and Lucene within Confluence 9.2 LTS. Details of Confluence and OpenSearch configuration can be found in the Test environment section.

Searching with OpenSearch is 64% faster

By using OpenSearch as the indexing and search engine, we saw improvements in search performance and related functions.

Full site reindexing time reduced by 26% with OpenSearch

We took advantage of OpenSearch's distributed processing architecture for quicker indexing, making it more efficient for customer instances growing in size and scale. The complex index management process in Lucene often led to inconsistent search results and indexing failures for clustered instances; OpenSearch addresses these challenges with a resilient, simplified indexing process.
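For readers who want a feel for what a remote OpenSearch cluster looks like from a client's point of view, here is a minimal sketch using the opensearch-py client. The endpoint, credentials, and index name are placeholders, and this is not how Confluence integrates with OpenSearch internally; it only illustrates querying a distributed cluster like the one described in the Test environment section.

```python
# Illustrative only: a minimal opensearch-py sketch against a distributed cluster
# (for example, 3 master + 3 data nodes as in the Test environment section).
# The host, credentials, and index name are placeholders, not Confluence internals.
from opensearchpy import OpenSearch

client = OpenSearch(
    hosts=[{"host": "opensearch.example.internal", "port": 9200}],  # placeholder endpoint
    http_auth=("search_user", "change-me"),                         # placeholder credentials
    use_ssl=True,
    verify_certs=True,
)

# Cluster health reports how many data nodes are serving the distributed index.
health = client.cluster.health()
print(health["status"], health["number_of_data_nodes"])

# A simple full-text query against a hypothetical content index.
response = client.search(
    index="confluence-content",  # hypothetical index name
    body={"query": {"match": {"body": "performance report"}}, "size": 10},
)
print(response["hits"]["total"]["value"], "matching documents")
```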

Testing methodology

The following sections detail the testing environment, including hardware specifications and the methodology we used in our performance tests.

How we tested

The following table represents the dataset we used in our tests:

Data | Value
Pages | ~900,000
Blogposts | ~100,000
Attachments | ~2,300,000
Comments | ~6,000,000
Spaces | ~5,000
Users | ~5,000

Actions performed

We chose a mix of actions representing the most common user operations. An action in this context is a complete user operation, such as opening a page in the browser window. The following table details the actions included in the test script for our testing persona, indicating how many times each action is repeated during a single test run (a simplified sketch of this action mix follows the table).

Action | Times performed in a single test run
Create page (single user) | 367
Publish page | 266
Publish page (w/ small attachments) | 36
Publish page (w/ large attachments) | 25
View page | 423
View page (w/ small attachments) | 72
View page (w/ large attachments) | 35
View page after publish | 212
View page after publish (w/ small attachments) | 41
View page after publish (w/ large attachments) | 38
Search | 1,638
Log in | 532
Log out | 266
Create page (collaborative) | 532
Create blog post | 2,049
View blog post | 269
Edit page | 532
Edit page (from shared link) | 532
Edit page (w/ attachments) | 61
View Dashboard | 532
Like page | 601
Upload attachment | 1,404
View attachment | 1,231
Comment | 798
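As a rough illustration of how such an action mix can be encoded and interleaved in a load-test script, here is a simplified sketch. The counts mirror a few rows of the table above, while the run_action function and the scheduling approach are assumptions for illustration, not our actual test harness.

```python
# Simplified sketch of a weighted action mix for a single test run. The counts
# mirror a few rows of the table above; run_action is a placeholder.
import random

ACTION_COUNTS = {
    "view_page": 423,
    "create_page_single_user": 367,
    "search": 1638,
    "upload_attachment": 1404,
    "log_in": 532,
}

def run_action(name: str) -> None:
    """Placeholder: issue the HTTP requests that make up one complete user action."""
    print(f"running {name}")

# Expand the counts into a shuffled schedule so different actions interleave during
# the run instead of executing all repetitions of one action back to back.
schedule = [name for name, count in ACTION_COUNTS.items() for _ in range(count)]
random.shuffle(schedule)

for action in schedule[:10]:  # only the first few executed here for brevity
    run_action(action)
```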

Test environment

All performance tests were run on a set of AWS EC2 instances. For each test, the entire environment was reset and rebuilt; the test then started with a five-minute warm-up to prime instance caches and ran for 60 minutes. Below, we show the details of the environments used for Confluence Data Center, as well as the specifications of the EC2 instances.

All tests were run multiple times to ensure consistent results.
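The snippet below sketches this cadence for a single cycle (reset, five-minute warm-up, 60-minute measured window). The durations come from the description above; the reset and load functions are placeholders, not our actual tooling.

```python
# Sketch of one test cycle: rebuild the environment, warm caches for 5 minutes,
# then measure for 60 minutes. reset_environment and apply_load are placeholders.
import time

WARM_UP_SECONDS = 5 * 60
MEASURE_SECONDS = 60 * 60

def reset_environment() -> None:
    """Placeholder: tear down and rebuild the test environment."""

def apply_load(collect_metrics: bool) -> None:
    """Placeholder: drive one batch of scripted user actions."""
    time.sleep(1)  # stands in for real HTTP traffic against the instance

def run_test_cycle() -> None:
    reset_environment()

    warm_up_end = time.monotonic() + WARM_UP_SECONDS
    while time.monotonic() < warm_up_end:
        apply_load(collect_metrics=False)  # warm-up traffic, results discarded

    measure_end = time.monotonic() + MEASURE_SECONDS
    while time.monotonic() < measure_end:
        apply_load(collect_metrics=True)   # measured window feeding the p50/p90 figures
```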

Here are the details of our Confluence Data Center test environment:

  • 2 Confluence nodes

  • Database on a separate node

  • Shared home directory on a separate NFS Server node

  • Load balancer (AWS Application Load Balancer)

Confluence Data Center with 2 nodes

Hardware
  EC2 type: m5.2xlarge
  vCPUs: 8
  Memory (GiB): 31
  Physical Processor: Intel(R) Xeon(R) Platinum 8259CL
  Clock Speed (GHz): 2.50
  CPU Architecture: x86_64
  Storage: AWS EBS 200 GB GP2

Software
  Operating system: Amazon Linux 2
  Java platform: Temurin-17.0.4+8
  Java options: 8 GB heap

Database

Hardware
  EC2 type: db.m5.xlarge
  vCPUs: 4
  Memory (GiB): 16
  Storage type: General Purpose SSD (gp2)
  Storage (GiB): 700

Software
  Database: PostgreSQL 16.3
  Operating system: AWS managed

NFS server

Hardware
  EC2 type: m4.large
  vCPUs: 2
  Memory (GiB): 8
  Memory per vCPU (GiB): 4
  Physical Processor: Intel(R) Xeon(R) Platinum 8175M
  Base Clock Speed (GHz): 2.50
  CPU Architecture: x86_64
  Storage: AWS EBS 100 GB GP2

Software
  Operating system: Amazon Linux 2

OpenSearch server

Master nodes

Hardware
  EC2 type: m6g.large.search
  Number of nodes: 3
  vCPUs: 2
  Memory (GiB): 8

Software
  OpenSearch: 2.11
  Operating system: Amazon Linux 2

Data nodes

Hardware
  Instance type: r6g.large.search
  Number of nodes: 3
  vCPUs: 2
  Memory (GiB): 16
  Storage type: EBS General Purpose SSD (gp3)
  Storage (GiB): 100
  Provisioned IOPS: 3000
  Provisioned throughput (MiB/s): 125

Software
  OpenSearch: 2.11
  Operating system: AWS managed

