Aggregation API upgrade guide

Preparing for Jira 10.5


Introduction

In Jira 10.4, we introduced a new search API and deprecated the Lucene-specific org.apache.lucene.search.Collector, recommending that it be replaced with com.atlassian.jira.search.index.IndexSearcher#scan(SearchRequest request, Function<Document, Boolean> callback). However, we've since recognized that using the scan method for aggregations is inefficient, because it transfers large volumes of document data to the caller for local processing.

To address this, we're introducing a new aggregation API designed specifically for aggregation operations.

This page details the new aggregation methods and provides migration examples from Lucene-specific collectors to the aggregation API.

Aggregation methods

In Jira 10.5, we’re introducing two aggregation types: metric and bucket aggregations.

Metric aggregations

Metric aggregations calculate values over a set of documents. In Jira 10.5, the following metric aggregations are available:

  • AvgAggregation: Computes the average of numeric values for a field across all matching documents.
  • MaxAggregation: Determines the maximum numeric value for a field across all matching documents.
  • SumAggregation: Calculates the sum of numeric values for a field across all matching documents.
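Conceptually, each metric aggregation reduces the numeric values of a field across all matching documents to a single number. The plain-Java sketch below (not the Jira API; the field values are made up for illustration) shows what Avg, Max, and Sum compute:

```java
import java.util.DoubleSummaryStatistics;
import java.util.List;

public class MetricSemantics {
    public static void main(String[] args) {
        // Hypothetical numeric custom-field values collected from matching documents
        List<Double> values = List.of(2.0, 4.0, 9.0);

        DoubleSummaryStatistics stats = values.stream()
                .mapToDouble(Double::doubleValue)
                .summaryStatistics();

        System.out.println(stats.getAverage()); // what AvgAggregation would return: 5.0
        System.out.println(stats.getMax());     // what MaxAggregation would return: 9.0
        System.out.println(stats.getSum());     // what SumAggregation would return: 15.0
    }
}
```

The difference is that the aggregation API performs this reduction inside the search engine, so only the final number crosses the wire instead of every matching document.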

Bucket aggregations

Unlike metric aggregations, bucket aggregations group documents into buckets based on specified criteria and can contain sub-aggregations. In Jira 10.5, the following bucket aggregations are available:

  • DateHistogramAggregation: Groups documents into buckets based on date intervals.
  • TermsAggregation: Groups documents into buckets based on unique terms within a specified field.
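Conceptually, a bucket aggregation partitions the matching documents by a key, and a sub-aggregation then computes a metric inside each bucket. The plain-Java sketch below (not the Jira API; the project IDs and field values are made up for illustration) mirrors a TermsAggregation on the project ID with an average sub-aggregation:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class BucketSemantics {
    // A hypothetical document: a project ID key plus a numeric field value
    record Doc(long projectId, double fieldValue) {}

    public static void main(String[] args) {
        List<Doc> docs = List.of(
                new Doc(10000L, 2.0),
                new Doc(10000L, 4.0),
                new Doc(10001L, 9.0));

        // Terms bucketing by project ID, with an average sub-aggregation per bucket
        Map<Long, Double> avgPerProject = docs.stream()
                .collect(Collectors.groupingBy(Doc::projectId,
                        Collectors.averagingDouble(Doc::fieldValue)));

        // avgPerProject now holds: project 10000 -> 3.0, project 10001 -> 9.0
        System.out.println(avgPerProject);
    }
}
```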

We plan to introduce more aggregations in the future. You can suggest new aggregations by filing a suggestion ticket on JAC with the Search - Search API component.

Good to know

To perform aggregation on a field, ensure that the field is indexed with doc values enabled.

Migrating collectors to the aggregation API

Example 1 - Compute average value of a number custom field

Legacy Lucene

import java.io.IOException;

import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.NumericDocValues;
import org.apache.lucene.search.SimpleCollector;

public class AverageValueCollector extends SimpleCollector {

    private final String customFieldId;

    private double sum = 0.0;
    private long count = 0;
    private NumericDocValues customFieldValues;

    public AverageValueCollector(final String customFieldId) {
        this.customFieldId = customFieldId;
    }

    @Override
    public void collect(final int docId) throws IOException {
        if (customFieldValues.advanceExact(docId)) {
            sum += customFieldValues.longValue(); // Assuming the custom field indexes a long value
            count++;
        }
    }

    @Override
    protected void doSetNextReader(final LeafReaderContext context) throws IOException {
        customFieldValues = context.reader().getNumericDocValues(customFieldId);
    }

    @Override
    public boolean needsScores() {
        return false;
    }

    public double getResult() {
        return count > 0 ? sum / count : 0.0;
    }
}

// ...

final var collector = new AverageValueCollector("customfield_10000");

searchProvider.search(SearchQuery.create(query, searcher), collector);

collector.getResult();

Aggregation API

// Create an average aggregation on the "customfield_10000" field
final var aggregation = new AvgAggregation("customfield_10000");

// Define the aggregation request with the custom "avg_cf" name
final var aggregationRequest = AggregationRequest.builder()
        .addAggregation("avg_cf", aggregation)
        .build();

// Execute the search
final var searchResponse = searchService.search(DocumentSearchRequest.builder()
        .jqlQuery(query)
        .searcher(searcher)
        .aggregationRequest(aggregationRequest)
        .build(), new PagerFilter<>(0));

// Read the results
final var avgResult = searchResponse.getAggregationResponse().get().getAvg("avg_cf");

avgResult.getValue();

Example 2 - Count issues by project

Legacy Lucene

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.SortedDocValues;
import org.apache.lucene.search.SimpleCollector;

public class IssuePerProjectCollector extends SimpleCollector {
  
    private final Map<Long, Long> projectToIssueCount = new HashMap<>();
    private SortedDocValues docIdToProjectIdValues;

    @Override
    public void collect(final int docId) throws IOException {
        if (docIdToProjectIdValues.advanceExact(docId)) {
            final long projectId = Long.parseLong(docIdToProjectIdValues.binaryValue().utf8ToString());
            projectToIssueCount.merge(projectId, 1L, Long::sum);
        }
    }

    @Override
    protected void doSetNextReader(final LeafReaderContext context) throws IOException {
        docIdToProjectIdValues = context.reader().getSortedDocValues(DocumentConstants.PROJECT_ID);
    }

    @Override
    public boolean needsScores() {
        return false;
    }

    public Map<Long, Long> getResult() {
        return projectToIssueCount;
    }
}

// ...

final var collector = new IssuePerProjectCollector();

searchProvider.search(SearchQuery.create(query, searcher), collector);

collector.getResult();

Aggregation API

// Create a terms aggregation on the PROJECT_ID field
final var aggregation = TermsAggregation.builder()
        .withField(DocumentConstants.PROJECT_ID)
        //.withSubAggregation(...) - you can define a sub-aggregation too
        .build();

// Define the aggregation request with the custom "count_issues_by_project" name
final var aggregationRequest = AggregationRequest.builder()
        .addAggregation("count_issues_by_project", aggregation)
        .build();

// Execute the search
final var searchResponse = searchService.search(DocumentSearchRequest.builder()
        .jqlQuery(query)
        .searcher(searcher)
        .aggregationRequest(aggregationRequest)
        .build(), new PagerFilter<>(0));

// Read the results
final var termsResult = searchResponse.getAggregationResponse().get().getTerms("count_issues_by_project");

termsResult.getBuckets().forEach(bucket -> {
    final var projectId = Long.parseLong(bucket.getKey());
    final var issueCount = bucket.getDocCount();

    // Handle the results ...
});


Last modified on Feb 21, 2025
