Jira is unresponsive either consistently or randomly due to stuck threads and/or thread pool exhaustion


Platform Notice: Data Center - This article applies to Atlassian products on the Data Center platform.

Note that this knowledge base article was created for the Data Center version of the product. Data Center knowledge base articles for non-Data Center-specific features may also work for Server versions of the product, however they have not been tested. Support for Server* products ended on February 15th 2024. If you are running a Server product, you can visit the Atlassian Server end of support announcement to review your migration options.

*Except Fisheye and Crucible

Summary

At times, Jira performance may slowly deteriorate until the application appears unresponsive.

Environment

Any Jira version on Windows or Linux

Diagnosis

  • The Tomcat log file catalina.out in $JIRA_INST/logs/ will show many stuck-thread warnings like the one below:

    26-Oct-2022 17:41:42.110 WARNING [ContainerBackgroundProcessor[StandardEngine[Catalina]]] org.apache.catalina.valves.StuckThreadDetectionValve.notifyStuckThreadDetected Thread [http-nio-8080-exec-75] (id=[438]) has been active for [121,849] milliseconds (since [10/26/22 5:39 PM]) to serve the same request for https://URL/plugins/servlet/nfj/PushNotification and may be stuck (configured threshold for this StuckThreadDetectionValve is [120] seconds). There is/are [45] thread(s) in total that are monitored by this Valve and may be stuck.
        java.lang.Throwable
            at com.infosysta.jira.nfj.servlet.Servlet_PushNotification.doGet(Servlet_PushNotification.java:30)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:655)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:764)

    (info) Notice that /servlet/nfj/PushNotification above is just one example; the request URL can be anything. A good giveaway here is the high number of threads (45 in this case) that may be stuck.

  • Generate at least 6 thread dumps, waiting 10 seconds between each: Troubleshooting Jira performance with Thread dumps. Once generated, inspect the dumps to identify long-running threads that appear across several of them (feel free to log a support request with Atlassian Support for assistance with that). A scripted approach covering this step and the next is sketched after this list.
  • Inspect one of the thread dumps and count how many HTTP threads it contains (a simple search for 'http-' in a text editor, counting the number of matches, will do). Compare that number to the maxThreads value defined in the main Tomcat configuration file, server.xml. If the number of HTTP threads in a thread dump is higher than the maximum number of threads configured in Tomcat, the thread pool has been exhausted; the sketch after this list automates this comparison.
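
    (info) The following is a minimal Python sketch that illustrates both steps above: it captures six thread dumps 10 seconds apart, then compares the HTTP thread count in each dump against Tomcat's configured maxThreads. It assumes the JDK's jstack utility is on the PATH and is run as the same OS user as the Jira process; the JIRA_PID value and the server.xml path are placeholders for your own environment and do not come from this article.

    # Sketch only: capture thread dumps, then compare the number of HTTP
    # worker threads in each against Tomcat's configured maxThreads.
    import re
    import subprocess
    import time
    import xml.etree.ElementTree as ET

    JIRA_PID = "12345"  # placeholder: the PID of the Jira java process
    SERVER_XML = "/opt/atlassian/jira/conf/server.xml"  # adjust to your install

    # Generate at least 6 thread dumps, waiting 10 seconds in between.
    dumps = []
    for i in range(6):
        out = subprocess.run(["jstack", "-l", JIRA_PID],
                             capture_output=True, text=True, check=True).stdout
        name = f"jira_thread_dump_{i + 1}.txt"
        with open(name, "w") as f:
            f.write(out)
        dumps.append(name)
        if i < 5:
            time.sleep(10)

    # Read maxThreads from the HTTP <Connector> element in server.xml.
    # Tomcat defaults to 200 when the attribute is absent.
    max_threads = 200
    for connector in ET.parse(SERVER_XML).getroot().iter("Connector"):
        if "maxThreads" in connector.attrib:
            max_threads = int(connector.attrib["maxThreads"])

    # Thread names in a dump appear as, e.g., "http-nio-8080-exec-75".
    # More http- threads than maxThreads suggests the pool is exhausted.
    for name in dumps:
        with open(name) as f:
            http_threads = len(re.findall(r'"http-', f.read()))
        verdict = "possible pool exhaustion" if http_threads >= max_threads else "ok"
        print(f"{name}: {http_threads} http threads "
              f"(maxThreads={max_threads}) -> {verdict}")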

Cause

A third-party application is holding onto all of Tomcat's threads. In the log example above, the culprit was the "In-app & Desktop Notifications for Jira" add-on; however, it could be any other add-on.

These stuck threads can also eventually cause Tomcat to run out of available threads, whereby the number of HTTP threads in the application exceeds the maximum number Tomcat is configured to handle. The application will still allow new requests to queue up; however, they have to wait for the other threads to complete, which slows the application down significantly.

The above is only a high-level cause; to identify the culprit, further in-depth investigation may be required.


Solution

  • Once the suspect add-on has been identified, try disabling it (if possible) to see whether performance improves. If it does, please reach out to the add-on vendor for further guidance.
  • If thread pool exhaustion is identified, the maxThreads value in server.xml may be increased slightly (depending on the number of threads you see in the thread dumps). This needs to be done with caution: the higher this number, the more resources Jira will need to process the threads comfortably without causing performance bottlenecks in other parts of the application. Along with increasing maxThreads, you may need to increase the heap size (the -Xmx value) and the database connection pool size (dbconfig.xml). Reach out to Atlassian Support if in doubt; the sketch below prints the current values for review.
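
(info) Before changing any values, it can help to review the current limits side by side. The following is a minimal Python sketch that prints the maxThreads of each Tomcat Connector alongside the database pool-max-size from dbconfig.xml. The file paths shown are typical Linux defaults, not values from this article, so adjust them for your environment.

    # Sketch only: print Tomcat's maxThreads alongside Jira's DB pool size
    # so the two limits can be reviewed together before tuning.
    import xml.etree.ElementTree as ET

    SERVER_XML = "/opt/atlassian/jira/conf/server.xml"                  # Tomcat config
    DBCONFIG_XML = "/var/atlassian/application-data/jira/dbconfig.xml"  # Jira home

    for connector in ET.parse(SERVER_XML).getroot().iter("Connector"):
        print("Connector on port", connector.get("port"),
              "has maxThreads =", connector.get("maxThreads", "200 (Tomcat default)"))

    # If maxThreads is raised, the DB connection pool may also need to grow;
    # otherwise, the extra HTTP threads just block waiting on connections.
    pool = ET.parse(DBCONFIG_XML).getroot().find(".//pool-max-size")
    print("pool-max-size =", pool.text if pool is not None else "not set")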


