[MDEV-21452] Use condition variables and normal mutexes instead of InnoDB os_event and mutex Created: 2020-01-10  Updated: 2023-09-01  Resolved: 2020-12-16

Status: Closed
Project: MariaDB Server
Component/s: Locking
Affects Version/s: 10.4.10
Fix Version/s: 10.6.0

Type: Bug Priority: Major
Reporter: Daniel Black Assignee: Marko Mäkelä
Resolution: Fixed Votes: 0
Labels: performance

Attachments: File MDEV-21452-nbl.ods     File MDEV-21452.ods     File MDEV-21452.ods     PNG File Screenshot from 2020-03-26 12-08-54.png     File master.properties.60     File my.cnf    
Issue Links:
Blocks
blocks MDEV-23169 Optimize InnoDB code around mutexes a... Closed
is blocked by MDEV-16232 Use fewer mini-transactions Stalled
is blocked by MDEV-16406 Refactor the InnoDB record locks Open
is blocked by MDEV-20612 Improve InnoDB lock_sys scalability Closed
is blocked by MDEV-22871 Contention on the buf_pool.page_hash Closed
is blocked by MDEV-23888 Potential server hang on replication ... Closed
is blocked by MDEV-24142 rw_lock_t has unnecessarily complex w... Closed
Problem/Incident
causes MDEV-24637 fts_slots is being accessed after it ... Closed
causes MDEV-26779 reduce lock_sys.wait_mutex contention... Closed
causes MDEV-29896 mariadb-backup got stuck with --throt... Closed
Relates
relates to MDEV-7109 Add support for INFORMATION_SCHEMA.IN... Closed
relates to MDEV-15653 [Draft] Assertion `lock_word <= 0x200... Closed
relates to MDEV-18250 [Draft] Server crashed in dirname_len... Closed
relates to MDEV-24426 fil_crypt_thread keep spinning even i... Closed
relates to MDEV-24449 Corruption of system tablespace or la... Closed
relates to MDEV-24845 Oddities around innodb_fatal_semaphor... Closed
relates to MDEV-24973 Performance schema duplicates rarely ... Closed
relates to MDEV-25267 Reported latching order violation in ... Open
relates to MDEV-25890 Trying to lock mutex ... when the mut... Closed
relates to MDEV-27985 buf_flush_freed_pages() causes InnoDB... Closed
relates to MDEV-28157 SAFE_MUTEX and DBUG corrupt memory | ... Confirmed
relates to MDEV-15706 Remove information_schema.innodb_metr... Open
relates to MDEV-15752 Possible race between DDL and accessi... Confirmed
relates to MDEV-21330 Lock monitor doesn't print a semaphor... Closed
relates to MDEV-22782 SUMMARY: AddressSanitizer: unknown-cr... Closed
relates to MDEV-23399 10.5 performance regression with IO-b... Closed
relates to MDEV-23472 ASAN use-after-poison in LatchCounter... Closed
relates to MDEV-24630 MY_RELAX_CPU assembly instruction upg... Closed
relates to MDEV-30951 Make small facelift to Innotop perl s... Open
relates to MDEV-32065 Always check whether lock is free at ... Open

 Description   

investigation suggested by marko on zulip after reading http://smalldatum.blogspot.com/2020/01/it-is-all-about-constant-factors.html

No patches - built straight from the 10.4.10 release tag, with cmake -DMUTEXTYPE=$type -DCMAKE_INSTALL_PREFIX=/scratch/mariadb-10.4.10-$type $HOME/mariadb-10.4.10
Distro: Ubuntu 18.04, default compiler.

TPCCRunner test:

POWER8, altivec supported - 20 core, 8 thread/core

$ tail  fullrun-master-fstn4-mariadb-10.4.10-futex-28444.txt   fullrun-master-fstn4-mariadb-10.4.10-event-48215.txt  fullrun-master-fstn4-mariadb-10.4.10-sys-60112.txt
==> fullrun-master-fstn4-mariadb-10.4.10-futex-28444.txt <==
 
              timestamp          tpm      avg_rt      max_rt   avg_db_rt   max_db_rt
                average  2519939.03       50.01         687       50.00         687
 
   All phase Transactions: 100508512
Warmup phase Transactions: 24910341
   Run phase Transactions: 75598171
 
Waiting slaves to terminate users.
All slaves disconnected.
 
==> fullrun-master-fstn4-mariadb-10.4.10-event-48215.txt <==
 
              timestamp          tpm      avg_rt      max_rt   avg_db_rt   max_db_rt
                average  1944470.28       63.97         782       63.96         782
 
   All phase Transactions: 466885487
Warmup phase Transactions: 350217270
   Run phase Transactions: 116668217
 
Waiting slaves to terminate users.
All slaves disconnected.
 
==> fullrun-master-fstn4-mariadb-10.4.10-sys-60112.txt <==
 
              timestamp          tpm      avg_rt      max_rt   avg_db_rt   max_db_rt
                average  2412875.70       51.72         846       51.71         846
 
   All phase Transactions: 579124495
Warmup phase Transactions: 434351953
   Run phase Transactions: 144772542
 
Waiting slaves to terminate users.
All slaves disconnected.

Note: while futex was run for much less time (innodb_buffer_pool_dump_pct=100 carried over from the previous run), throughput was consistent throughout the 30 minutes.

Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz - 22 core, 4 thread/core

==> fullrun-master-ka4-mariadb-10.4.10-event-rr-50070.txt <==
 
              timestamp          tpm      avg_rt      max_rt   avg_db_rt   max_db_rt
                average  3354020.40       34.35         487       34.32         487
 
   All phase Transactions: 132952152
Warmup phase Transactions: 32331540
   Run phase Transactions: 100620612
 
Waiting slaves to terminate users.
All slaves disconnected.
 
==> fullrun-master-ka4-mariadb-10.4.10-futex-rr-63543.txt <==
 
              timestamp          tpm      avg_rt      max_rt   avg_db_rt   max_db_rt
                average  3362135.83       33.50         604       33.48         604
 
   All phase Transactions: 131218680
Warmup phase Transactions: 30354605
   Run phase Transactions: 100864075
 
Waiting slaves to terminate users.
All slaves disconnected.
 
==> fullrun-master-ka4-mariadb-10.4.10-sys-rr-56865.txt <==
 
              timestamp          tpm      avg_rt      max_rt   avg_db_rt   max_db_rt
                average  3363324.87       34.13         996       34.11         996
 
   All phase Transactions: 132642637
Warmup phase Transactions: 31742891
   Run phase Transactions: 100899746
 
Waiting slaves to terminate users.
All slaves disconnected.



 Comments   
Comment by Marko Mäkelä [ 2020-01-10 ]

svoj, can you please test

cmake -DMUTEXTYPE:STRING=event …
cmake -DMUTEXTYPE:STRING=sys …
cmake -DMUTEXTYPE:STRING=futex …

on different instruction set architectures and possibly several recent microarchitectures of AMD64 implementations (such as Intel Skylake and its ‘temporal neighbours’)?

If the system mutex is not significantly slower than the specialized ones on any platform (GNU/Linux on AMD64, Aarch64; Windows on AMD64), I think that we should remove the specialized code. danblack conducted his tests on POWER 8.

Comment by Axel Schwenke [ 2020-03-23 ]

On my 16-core, 32-thread Intel machines I see no significant difference between the different mutex implementations. See attached spread sheet MDEV-21452.ods

Comment by Sergey Vojtovich [ 2020-03-23 ]

You won't see any difference for sure while backup locks remain the largest bottleneck. With backup locks disabled I can immediately see a 20% throughput decline in the oltp_update_index benchmark.

Backup locks disabled with this patch:

diff --git a/sql/handler.cc b/sql/handler.cc
index eb7b5b8..012ef20 100644
--- a/sql/handler.cc
+++ b/sql/handler.cc
@@ -1567,7 +1567,7 @@ int ha_commit_trans(THD *thd, bool all)
   DBUG_PRINT("info", ("is_real_trans: %d  rw_trans:  %d  rw_ha_count: %d",
                       is_real_trans, rw_trans, rw_ha_count));
 
-  if (rw_trans)
+  if (0 && rw_trans)
   {
     /*
       Acquire a metadata lock which will ensure that COMMIT is blocked
diff --git a/sql/sql_base.cc b/sql/sql_base.cc
index c41e08e..f9e3f34 100644
--- a/sql/sql_base.cc
+++ b/sql/sql_base.cc
@@ -2100,7 +2100,7 @@ bool open_table(THD *thd, TABLE_LIST *table_list, Open_table_context *ot_ctx)
   }
 
   if (!(flags & MYSQL_OPEN_HAS_MDL_LOCK) &&
-      table->s->table_category < TABLE_CATEGORY_INFORMATION)
+      table->s->table_category < TABLE_CATEGORY_INFORMATION && 0)
   {
     /*
       We are not under LOCK TABLES and going to acquire write-lock/

Results for event vs sys mutex:

[ 10s ] thds: 40 tps: 235212.45 qps: 235212.45 (r/w/o: 0.00/235212.45/0.00) lat (ms,95%): 0.28 err/s: 0.00 reconn/s: 0.00
[ 20s ] thds: 40 tps: 239224.64 qps: 239224.64 (r/w/o: 0.00/239224.64/0.00) lat (ms,95%): 0.27 err/s: 0.00 reconn/s: 0.00
[ 30s ] thds: 40 tps: 235586.62 qps: 235586.62 (r/w/o: 0.00/235586.62/0.00) lat (ms,95%): 0.27 err/s: 0.00 reconn/s: 0.00
[ 40s ] thds: 40 tps: 237855.90 qps: 237855.90 (r/w/o: 0.00/237855.90/0.00) lat (ms,95%): 0.26 err/s: 0.00 reconn/s: 0.00
[ 50s ] thds: 40 tps: 242333.85 qps: 242333.95 (r/w/o: 0.00/242333.95/0.00) lat (ms,95%): 0.25 err/s: 0.00 reconn/s: 0.00
[ 60s ] thds: 40 tps: 230731.39 qps: 230731.29 (r/w/o: 0.00/230731.29/0.00) lat (ms,95%): 0.27 err/s: 0.00 reconn/s: 0.00
[ 70s ] thds: 40 tps: 227764.88 qps: 227764.88 (r/w/o: 0.00/227764.88/0.00) lat (ms,95%): 0.26 err/s: 0.00 reconn/s: 0.00
[ 80s ] thds: 40 tps: 233360.57 qps: 233360.57 (r/w/o: 0.00/233360.57/0.00) lat (ms,95%): 0.26 err/s: 0.00 reconn/s: 0.00
[ 90s ] thds: 40 tps: 234724.00 qps: 234724.00 (r/w/o: 0.00/234724.00/0.00) lat (ms,95%): 0.26 err/s: 0.00 reconn/s: 0.00
[ 100s ] thds: 40 tps: 233649.87 qps: 233649.87 (r/w/o: 0.00/233649.87/0.00) lat (ms,95%): 0.27 err/s: 0.00 reconn/s: 0.00
[ 110s ] thds: 40 tps: 228001.25 qps: 228001.45 (r/w/o: 0.00/228001.45/0.00) lat (ms,95%): 0.26 err/s: 0.00 reconn/s: 0.00
[ 120s ] thds: 40 tps: 229956.01 qps: 229955.81 (r/w/o: 0.00/229955.81/0.00) lat (ms,95%): 0.27 err/s: 0.00 reconn/s: 0.00
SQL statistics:
    queries performed:
        read:                            0
        write:                           28084784
        other:                           0
        total:                           28084784
    transactions:                        28084784 (234027.56 per sec.)
    queries:                             28084784 (234027.56 per sec.)
 
 
 
 
[ 10s ] thds: 40 tps: 187679.62 qps: 187679.62 (r/w/o: 0.00/187679.62/0.00) lat (ms,95%): 0.33 err/s: 0.00 reconn/s: 0.00
[ 20s ] thds: 40 tps: 192377.30 qps: 192377.30 (r/w/o: 0.00/192377.30/0.00) lat (ms,95%): 0.31 err/s: 0.00 reconn/s: 0.00
[ 30s ] thds: 40 tps: 194527.13 qps: 194527.13 (r/w/o: 0.00/194527.13/0.00) lat (ms,95%): 0.30 err/s: 0.00 reconn/s: 0.00
[ 40s ] thds: 40 tps: 187974.25 qps: 187974.25 (r/w/o: 0.00/187974.25/0.00) lat (ms,95%): 0.33 err/s: 0.00 reconn/s: 0.00
[ 50s ] thds: 40 tps: 189598.45 qps: 189598.45 (r/w/o: 0.00/189598.45/0.00) lat (ms,95%): 0.30 err/s: 0.00 reconn/s: 0.00
[ 60s ] thds: 40 tps: 192302.85 qps: 192302.85 (r/w/o: 0.00/192302.85/0.00) lat (ms,95%): 0.30 err/s: 0.00 reconn/s: 0.00
[ 70s ] thds: 40 tps: 190873.45 qps: 190873.45 (r/w/o: 0.00/190873.45/0.00) lat (ms,95%): 0.30 err/s: 0.00 reconn/s: 0.00
[ 80s ] thds: 40 tps: 182795.87 qps: 182795.97 (r/w/o: 0.00/182795.97/0.00) lat (ms,95%): 0.34 err/s: 0.00 reconn/s: 0.00
[ 90s ] thds: 40 tps: 188988.71 qps: 188988.61 (r/w/o: 0.00/188988.61/0.00) lat (ms,95%): 0.31 err/s: 0.00 reconn/s: 0.00
[ 100s ] thds: 40 tps: 190645.87 qps: 190645.97 (r/w/o: 0.00/190645.97/0.00) lat (ms,95%): 0.30 err/s: 0.00 reconn/s: 0.00
[ 110s ] thds: 40 tps: 191076.30 qps: 191076.20 (r/w/o: 0.00/191076.20/0.00) lat (ms,95%): 0.30 err/s: 0.00 reconn/s: 0.00
[ 120s ] thds: 40 tps: 190803.80 qps: 190803.90 (r/w/o: 0.00/190803.90/0.00) lat (ms,95%): 0.30 err/s: 0.00 reconn/s: 0.00
SQL statistics:
    queries performed:
        read:                            0
        write:                           22797410
        other:                           0
        total:                           22797410
    transactions:                        22797410 (189967.22 per sec.)
    queries:                             22797410 (189967.22 per sec.)

I seem to remember seeing a larger difference in 10.2. In 10.3 we have trx_sys.mutex eliminated from the hot path, so it matters less. FWIU in 10.5 we have the log_sys mutexes eliminated from the hot path, so it matters even less. But 20% on not-that-recent hardware is still a lot.

All in all, the event mutex seems to give better throughput than the pthread mutex under certain loads. Implementing our own locking primitives is a dead end for sure, but until we fix the contention it would probably be wise to keep them.

Comment by Marko Mäkelä [ 2020-03-24 ]

The tests innodb.innodb_sys_semaphore_waits and sys_vars.innodb_fatal_semaphore_wait_threshold depend on MUTEXTYPE=event.
If the server is built with MUTEXTYPE=sys or MUTEXTYPE=futex, then the InnoDB internal watchdog for hangs will not kill the server. If we were to change the option on any platform (presumably, other than IA-32 and AMD64), we would have to decide what to do with this watchdog killer. I am rather sure that it cannot be made to work with the native mutex, and maybe not with the Linux fast userspace mutex (futex) either.

Comment by Axel Schwenke [ 2020-03-24 ]

New results in MDEV-21452-nbl.ods

The server was now patched to disable backup locks. I also added the update_index test from sysbench OLTP (updating an indexed column with autocommit). Now I see a clear performance impact for 'sys' mutexes.

Comment by Krunal Bauskar [ 2020-03-26 ]

We decided to evaluate the performance of the said mutex change on all architectures. I evaluated it for ARM. Here are the results.

We didn't observe any major regression or performance gain arising from switching to sys mutex or futex. So, for now, the recommendation would be to continue with the existing default (event).

To find out more about how we executed the test, you can check out:
https://github.com/mysqlonarm/benchmark-suites
https://github.com/mysqlonarm/benchmark-suites/blob/master/sysbench/conf/mdb_mutex_study.cnf

Comment by Marko Mäkelä [ 2020-06-01 ]

I think that before we can proceed on this, we must remove some already known InnoDB mutex contention points, mainly in lock_sys and log_sys.

Comment by Daniel Black [ 2020-06-02 ]

Both futex and sys provide performance improvements over event on POWER only, even with the current lock_sys/log_sys contention. Maybe I can try to pick one of them, making an assumption about the potential gains from the lock_sys/log_sys work; the choice doesn't have to be permanent.

I assume you have significant confidence in the futex and sys implementations despite their non-default status?

Comment by Marko Mäkelä [ 2020-06-15 ]

One more case of underlying contention was filed in MDEV-22871.

Comment by Marko Mäkelä [ 2020-06-15 ]

danblack, the current default seems to work fine on AMD64 and ARM. I am OK to change the mutex type on POWER for some or all InnoDB mutexes on 10.5 or 10.6, if it can be demonstrated to improve the performance.

I believe that we must also deal with MDEV-22871 and MDEV-21212 fairly soon, targeting the 10.5 release.

Comment by Marko Mäkelä [ 2020-08-22 ]

While working on MDEV-23399, it occurred to me that the InnoDB os_event_t (which emulates the old synchronization primitive of Microsoft Windows NT) is rather inefficient, because it consists of a mutex and a condition variable. Usually the events are used in combination with another mutex. If we used conventional mutexes and condition variables (via the mysql_mutex_t and mysql_cond_t wrappers), we could simplify some inter-thread communication and reduce overhead.
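A minimal sketch of the difference, assuming C++ standard-library primitives; event_like and state_with_cond are invented names for illustration, not the actual InnoDB types:

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>

// Old pattern: an os_event_t-style object bundles its own internal mutex
// and condition variable, so a waiter pays for the event's mutex in
// addition to whatever mutex protects the shared state.
struct event_like {
  std::mutex m;                 // the event's internal mutex
  std::condition_variable cv;
  bool is_set = false;
  void set() {
    { std::lock_guard<std::mutex> g(m); is_set = true; }
    cv.notify_all();
  }
  void wait() {
    std::unique_lock<std::mutex> l(m);
    cv.wait(l, [this] { return is_set; });
  }
};

// New pattern: wait directly on a condition variable paired with the one
// mutex that already protects the state, avoiding the extra mutex.
struct state_with_cond {
  std::mutex m;                 // the only mutex involved
  std::condition_variable cv;
  bool ready = false;
  void signal() {
    { std::lock_guard<std::mutex> g(m); ready = true; }
    cv.notify_all();
  }
  void wait_ready() {
    std::unique_lock<std::mutex> l(m);
    cv.wait(l, [this] { return ready; });
  }
};
```

In the second pattern a waiter acquires one mutex instead of two, which is the overhead reduction described above.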

Removing the InnoDB mutex implementation would also remove some instrumentation that is currently available either via SHOW ENGINE INNODB MUTEX or INFORMATION_SCHEMA.innodb_metrics (MDEV-15706). MDEV-23472 shows example stack traces of that instrumentation. Because mutexes in unused transaction objects are also instrumented, we had to complicate the AddressSanitizer instrumentation (MDEV-15030, MDEV-22782) to allow the mutexes of freed transaction objects to be accessed.

I do not think that we can replace all InnoDB rw_lock_t with mysql_rwlock_t. Emulating the custom SX mode with two rw-locks would most likely incur some performance overhead.

Comment by Marko Mäkelä [ 2020-09-27 ]

I started work to replace os_event_t and ib_mutex_t with mysql_cond_t and mysql_mutex_t. So far, this includes simplifying the InnoDB rw_lock_t to use a mutex and two condition variables, instead of using two os_event_t.

We still have 10 os_event_t and 45 ib_mutex_t left.
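The rw_lock_t simplification mentioned above (one mutex plus two condition variables instead of two os_event_t) can be sketched roughly as follows. This is an illustrative model with invented names, not the actual InnoDB code; it omits the SX mode, recursion, and spinning:

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>

// One mutex protects the state; readers and writers each wait on their
// own condition variable, replacing the two os_event_t objects.
struct rw_lock_sketch {
  std::mutex m;
  std::condition_variable readers_cv;  // readers wait here
  std::condition_variable writer_cv;   // writers wait here
  int active_readers = 0;
  bool writer_active = false;

  void rd_lock() {
    std::unique_lock<std::mutex> l(m);
    readers_cv.wait(l, [this] { return !writer_active; });
    ++active_readers;
  }
  void rd_unlock() {
    std::lock_guard<std::mutex> g(m);
    if (--active_readers == 0)
      writer_cv.notify_one();          // last reader lets a writer in
  }
  void wr_lock() {
    std::unique_lock<std::mutex> l(m);
    writer_cv.wait(l, [this] { return !writer_active && !active_readers; });
    writer_active = true;
  }
  void wr_unlock() {
    { std::lock_guard<std::mutex> g(m); writer_active = false; }
    writer_cv.notify_one();
    readers_cv.notify_all();
  }
};
```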

Comment by Marko Mäkelä [ 2020-09-28 ]

We have 8 os_event_create() and 37 mutex_create() calls left. In fts_cache_t, two rw_lock_t can be replaced with ordinary mutexes. Some mutexes were orphaned.

Comment by Marko Mäkelä [ 2020-09-29 ]

In lock_wait_suspend_thread() we seem to have some low-hanging fruit:

	mysql_mutex_lock(&lock_sys.mutex);
	while (trx->lock.wait_lock) {
		mysql_cond_wait(&slot->cond, &lock_sys.mutex);
	}
	mysql_mutex_unlock(&lock_sys.mutex);
 
	thd_wait_end(trx->mysql_thd);

We should actually protect trx->lock.wait_lock with trx_t::mutex to reduce contention on lock_sys.mutex.
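A hedged sketch of that idea: give each transaction its own small mutex and condition variable, so a waiter parks on the per-transaction mutex instead of the global lock_sys.mutex. All names here are invented for illustration:

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>

// Illustrative model only: the waiting thread sleeps on a mutex that is
// local to its transaction, so the global lock_sys.mutex is not held
// across the wait.
struct trx_lock_wait {
  std::mutex mutex;                  // per-transaction (trx_t::mutex)
  std::condition_variable cond;
  const void* wait_lock = nullptr;   // the lock this trx is waiting for

  void suspend() {                   // called by the waiting transaction
    std::unique_lock<std::mutex> l(mutex);
    cond.wait(l, [this] { return wait_lock == nullptr; });
  }
  void grant() {                     // called by the releasing thread
    { std::lock_guard<std::mutex> g(mutex); wait_lock = nullptr; }
    cond.notify_one();
  }
};
```

With this split, threads releasing unrelated locks no longer serialize on one global mutex just to wake a sleeping waiter.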

Comment by Marko Mäkelä [ 2020-10-01 ]

SHOW ENGINE INNODB MUTEX as well as INFORMATION_SCHEMA.INNODB_MUTEXES would no longer display any information about InnoDB mutexes. They may still output information about the InnoDB homebrew rw_lock_t.

Comment by Matthias Leich [ 2020-11-05 ]

The corresponding source tree behaved well during RQG testing.

Comment by Marko Mäkelä [ 2020-12-04 ]

INFORMATION_SCHEMA.INNODB_MUTEXES (MDEV-7399) did not report any mutex information since MariaDB 10.2.2. It was removed in MDEV-24142 along with replacing the InnoDB buf_block_t::lock and dict_index_t::lock with something simpler. Other rw-locks had already been replaced with simpler ones in MDEV-24167.

The command SHOW ENGINE INNODB MUTEX will return no output, because no InnoDB-specific instrumentation of rw-locks or mutexes will exist.

Also removed will be the InnoDB latching order checks and the debug parameter innodb_sync_debug as well as the view INFORMATION_SCHEMA.INNODB_SYS_SEMAPHORE_WAITS.

In the current state of development, the parameter innodb_fatal_semaphore_wait_threshold will not be enforced at all, and there will be no "Long semaphore wait" messages. I will have to implement something special (probably timeout-based acquisition) for dict_sys.mutex and lock_sys.mutex. Those should cover most hangs.

Comment by Matthias Leich [ 2020-12-09 ]

commit 9159383f32d8350dfa91bb62c825c64b1dc091d1 (HEAD, origin/bb-10.6-MDEV-21452)
behaved well during RQG testing.
Bad effects observed are present in MariaDB versions without MDEV-21452 too.

Comment by Marko Mäkelä [ 2020-12-11 ]

I implemented special enforcement of innodb_fatal_semaphore_wait_threshold for dict_sys.mutex and lock_sys.mutex. Due to an observed performance regression at high concurrency, I removed the lock_sys.mutex instrumentation and retained only the one on dict_sys.mutex. If pthread_mutex_trylock() fails, then the current thread would compare-and-swap 0 with its current time before waiting in pthread_mutex_lock(). Either the srv_monitor_task() or a subsequent thread that attempts to acquire dict_sys.mutex would then enforce the innodb_fatal_semaphore_wait_threshold and kill the process if necessary.
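The described scheme might look roughly like this. This is a sketch under the stated assumptions; all identifiers are invented, and a real implementation must handle multiple concurrent waiters and clock choice more carefully:

```cpp
#include <atomic>
#include <cassert>
#include <chrono>
#include <cstdint>
#include <mutex>

// Sketch of the watchdog idea: on a failed trylock, record the wait
// start time with a compare-and-swap, then block normally. A monitor
// task can later compare the stamp against the fatal-wait threshold.
struct watched_mutex {
  std::mutex m;
  std::atomic<int64_t> waiter_since{0};  // 0 = no recorded waiter

  static int64_t now_sec() {
    return std::chrono::duration_cast<std::chrono::seconds>(
        std::chrono::steady_clock::now().time_since_epoch()).count();
  }

  void lock() {
    if (m.try_lock()) return;            // fast path: uncontended
    int64_t expected = 0;
    // Compare-and-swap 0 with the current time: only the first waiter
    // records a stamp, so the oldest wait is what gets measured.
    waiter_since.compare_exchange_strong(expected, now_sec());
    m.lock();                            // now block normally
  }
  void unlock() {
    // Simplification: a real implementation must not clear the stamp
    // while other threads are still waiting.
    waiter_since.store(0);
    m.unlock();
  }
  // Called by a monitor task (or a subsequent waiter) to enforce the
  // innodb_fatal_semaphore_wait_threshold.
  bool wait_exceeded(int64_t threshold_sec) const {
    const int64_t since = waiter_since.load();
    return since != 0 && now_sec() - since > threshold_sec;
  }
};
```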

While rewriting the test sys_vars.innodb_fatal_semaphore_wait_threshold accordingly, I noticed that not all hangs would be caught even in the data dictionary cache. For example, if a DDL operation hung while holding both dict_sys.latch and dict_sys.mutex, a subsequent DDL operation would hang while waiting for dict_sys.latch, before even starting the wait for dict_sys.mutex. But, DML threads that are trying to open a table would acquire dict_sys.mutex and be subject to the watchdog. Hopefully this type of watchdog testing will be adequate. We could of course add more instrumentation to debug builds.

Comment by Marko Mäkelä [ 2020-12-16 ]

The main reason for having the homebrew mutexes was that their built-in spin loops could lead to better performance than the native implementation on contended mutexes.
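The spin-loop idea can be sketched as follows. The name spinning_mutex is invented; roughly, the homebrew implementation spun in a PAUSE/ut_delay loop before waiting on an event, whereas this sketch simply falls back to a blocking acquisition:

```cpp
#include <cassert>
#include <mutex>
#include <thread>

// Spin-then-block mutex sketch: busy-wait briefly hoping the holder
// releases the lock soon, and only block in the OS afterwards.
struct spinning_mutex {
  std::mutex m;
  static constexpr int SPIN_ROUNDS = 30;

  void lock() {
    for (int i = 0; i < SPIN_ROUNDS; i++) {
      if (m.try_lock()) return;      // acquired while spinning
      std::this_thread::yield();     // stand-in for a PAUSE/ut_delay loop
    }
    m.lock();                        // give up spinning and block
  }
  void unlock() { m.unlock(); }
};
```

Spinning wins when the hold time is shorter than a context switch; under long waits it only burns CPU, which matches the trade-off described in this comment.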

Some performance regression was observed for larger thread counts (exceeding the CPU core count) when updating non-indexed columns. I suspect that the culprit is contention on lock_sys.mutex, and I believe that implementing MDEV-20612 will address that.

Also log_sys.mutex is known to be a source of contention, but it was changed to a native mutex already in MDEV-23855. MDEV-23855 also removed some contention on fil_system.mutex, but kept it as a homebrew mutex. Contention on these mutexes should be reduced further in MDEV-14425.

Comment by Marko Mäkelä [ 2020-12-16 ]

We observed frequent timeouts and extremely slow execution of the test mariabackup.xb_compressed_encrypted, especially on Microsoft Windows builders. Those machines have 4 processor cores, and they run 4 client/server process pairs in parallel. (Our Linux builders have a lot more processor cores.) The test used to specify innodb_encryption_threads=4. That is, there was one page cleaner thread doing the actual work of writing data pages, and 4 ‘manager’ threads that fight each other to see who gets to wield the shovel and add more dirt to the pile that the page cleaner is trying to transport away. Changing the test to use innodb_encryption_threads=1 seems to have fixed the problem.

With the previous setting, the test timed out on win32-debug on two successive runs; with the lower setting innodb_encryption_threads=1 it passed (at least once), consuming 13, 14, and 41 seconds on win32-debug and 14, 22, and 27 seconds on win64-debug. On a previous run with innodb_encryption_threads=4, the execution time was more than 500 seconds on win64-debug, and for 2 of the 3 innodb_page_size values, the execution time exceeded 900 seconds on win32-debug.

Thanks to wlad for making the observation that the encryption threads were conflicting with each other. In MDEV-22258 we did experiment with different settings, and back then (still with the homebrew mutexes) there seemed to be some benefit of having multiple encryption (page-dirtying) threads.

This highlights a benefit of the homebrew mutexes that we removed: spinning may yield a little better throughput when there is a lot of contention. I agree with the opinion that svoj stated earlier: it is better to fix the underlying contention than to implement workarounds. I am confident that with MDEV-14425 and MDEV-20612 we will regain some scalability when the number of concurrent connections exceeds the number of processor cores. We already reduced buf_pool.mutex contention in MDEV-15053 and MDEV-23399 et al., and fil_system.mutex contention in MDEV-23855.

Comment by Marko Mäkelä [ 2020-12-17 ]

The problem with the constantly sleeping and waking encryption threads was partially addressed in MDEV-24426. On GNU/Linux, with the native mutexes and condition variables, the CPU usage was low, but with the homebrew mutexes and events all threads seemed to be spinning constantly. Maybe on Microsoft Windows the flood of sleeps and wakeups performs worse?

Generated at Thu Feb 08 09:07:15 UTC 2024 using Jira 8.20.16#820016-sha1:9d11dbea5f4be3d4cc21f03a88dd11d8c8687422.