MariaDB Server / MDEV-31311

The test innodb.log_file_size_online occasionally hangs

Details

    Description

      The test innodb.log_file_size_online (introduced in MDEV-27812) occasionally fails like this on Microsoft Windows:

      10.9 717e3b3cfdb167e8b930323397dc6e852ef94f17

      innodb.log_file_size_online 'encrypted,innodb' w8 [ fail ]  timeout after 900 seconds
              Test ended at 2023-05-11 16:08:09
       
      Test case timeout after 900 seconds
       
      == D:/winx64-packages/build/mysql-test/var/8/log/log_file_size_online.log == 
      SET GLOBAL innodb_log_file_size=4194304;
       
       == D:/winx64-packages/build/mysql-test/var/8/tmp/analyze-timeout-mysqld.1.err ==
      SHOW PROCESSLIST;
      Id	User	Host	db	Command	Time	State	Info	Progress
      4	root	localhost:49727	test	Query	940	NULL	SET GLOBAL innodb_log_file_size=4194304	0.000
      5	root	localhost:64052	NULL	Query	0	starting	SHOW PROCESSLIST	0.000
       
      mysqltest failed but provided no output
      The result from queries just before the failure was:
      < snip >
      SET GLOBAL innodb_log_file_size=4194304;
       
       
       - saving 'D:/winx64-packages/build/mysql-test/var/8/log/innodb.log_file_size_online-encrypted,innodb/' to 'D:/winx64-packages/build/mysql-test/var/log/innodb.log_file_size_online-encrypted,innodb/'
      

      One failure that I checked seemed to have an idle buf_flush_page_cleaner() thread and the SET GLOBAL thread looping in innodb_log_file_size_update(). Possibly a wake-up of the page cleaner is missing.
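
      As a purely illustrative aside (this is a standalone toy program, not MariaDB code; the worker/requester names are made up), the suspected failure mode is a classic lost wake-up: one thread polls for completion while the other sleeps on a condition variable, so if the wake-up is skipped while the worker is already asleep, both threads wait forever, which matches the "idle page cleaner, looping SET GLOBAL" picture above.

      // Illustration only: not MariaDB code.
      #include <chrono>
      #include <condition_variable>
      #include <mutex>
      #include <thread>

      static std::mutex m;
      static std::condition_variable cv;
      static bool requested= false, done= false;

      static void worker()            // stand-in for buf_flush_page_cleaner()
      {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, []{ return requested; }); // stays asleep unless notified
        done= true;                   // pretend the flush/checkpoint completed
      }

      static void requester()         // stand-in for innodb_log_file_size_update()
      {
        {
          std::lock_guard<std::mutex> lk(m);
          requested= true;
          cv.notify_one();            // if this wake-up is skipped while the
                                      // worker is already asleep, the polling
                                      // loop below never finishes
        }
        for (;;)                      // the observed polling loop
        {
          {
            std::lock_guard<std::mutex> lk(m);
            if (done)
              break;
          }
          std::this_thread::sleep_for(std::chrono::milliseconds(10));
        }
      }

      int main()
      {
        std::thread w(worker), r(requester);
        w.join();
        r.join();
        return 0;
      }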

      We will need more data from a debugger in order to understand what is going on. In particular, the contents of the global data structure log_sys during the hang need to be known.
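
      For reference, one way to collect that data from a hanging debug build (a hypothetical session on a platform where gdb is available; the exact fields worth inspecting may vary) would be to attach a debugger to the stuck mariadbd and dump the globals:

      gdb -p $(pgrep -f mariadbd)
      (gdb) print log_sys
      (gdb) print buf_pool.flush_list
      (gdb) thread apply all backtrace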


          Activity

            I observed a failure of this test on an IA-32 GNU/Linux builder:

            10.9 eb6b521f1b9e9e88da489798c200c4f071280189

            innodb.log_file_size_online 'innodb,slow' w2 [ fail ]  timeout after 900 seconds
            …
            Thread 3 (Thread 0xae75fb40 (LWP 23745)):
            #0  0xb701cd3a in ?? () from /lib/i386-linux-gnu/libc.so.6
            #1  0x813aba85 in FreeState (cs=0xafc00680, free_state=1) at /home/buildbot/buildbot/build/mariadb-10.9.8/dbug/dbug.c:1642
            #2  0x813a9f62 in _db_pop_ () at /home/buildbot/buildbot/build/mariadb-10.9.8/dbug/dbug.c:935
            #3  0x8138b76d in safe_mutex_lock (mp=0x81e6c6c0 <buf_pool+16896>, my_flags=0, file=0x817d6fb8 "/home/buildbot/buildbot/build/mariadb-10.9.8/storage/innobase/handler/ha_innodb.cc", line=18517) at /home/buildbot/buildbot/build/mariadb-10.9.8/mysys/thr_mutex.c:400
            #4  0x8138564d in psi_mutex_lock (that=0x81e6c6c0 <buf_pool+16896>, file=0x817d6fb8 "/home/buildbot/buildbot/build/mariadb-10.9.8/storage/innobase/handler/ha_innodb.cc", line=18517) at /home/buildbot/buildbot/build/mariadb-10.9.8/mysys/my_thr_init.c:487
            #5  0x80f432e0 in inline_mysql_mutex_lock (that=0x81e6c6c0 <buf_pool+16896>, src_file=0x817d6fb8 "/home/buildbot/buildbot/build/mariadb-10.9.8/storage/innobase/handler/ha_innodb.cc", src_line=18517) at /home/buildbot/buildbot/build/mariadb-10.9.8/include/mysql/psi/mysql_thread.h:746
            #6  0x80f70cc4 in innodb_log_file_size_update (thd=0xafc015c0, var=0x828726c8 <srv_log_file_size>, save=0xafc37e30) at /home/buildbot/buildbot/build/mariadb-10.9.8/storage/innobase/handler/ha_innodb.cc:18517
            #7  0x807729f8 in sys_var_pluginvar::global_update (this=0x8410ffd0, thd=0xafc015c0, var=0xafc37e20) at /home/buildbot/buildbot/build/mariadb-10.9.8/sql/sql_plugin.cc:3632
            …
             
            Thread 2 (Thread 0xb05fcb40 (LWP 23730)):
            #0  0xb775bc31 in __kernel_vsyscall ()
            #1  0xb735fa8c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/i386-linux-gnu/libpthread.so.0
            #2  0x8138bb8f in safe_cond_wait (cond=0x81e6c754 <buf_pool+17044>, mp=0x81e6c6c0 <buf_pool+16896>, file=0x818c5e40 "/home/buildbot/buildbot/build/mariadb-10.9.8/storage/innobase/buf/buf0flu.cc", line=2370) at /home/buildbot/buildbot/build/mariadb-10.9.8/mysys/thr_mutex.c:494
            #3  0x8125eb95 in buf_flush_page_cleaner () at /home/buildbot/buildbot/build/mariadb-10.9.8/storage/innobase/buf/buf0flu.cc:2370
            

            Here, the failure to specify cmake -DWITH_DBUG_TRACE=OFF probably contributed to some slowness, but I do not think it could cause the hang.


            oleg.smirnov provided the contents of log_sys and some of buf_pool.flush_list from one hang. The latest checkpoint is slightly ahead of log_sys.resize_lsn, which might be the reason that the page cleaner is not being woken up. For some reason, the log resizing had not been completed when the previous checkpoint finished.

            I will have to review the logic in detail and come up with a patch, so that the page cleaner will keep running in this case.


            I tried a one-line change to wake up the page cleaner, but it did not help. Both before and after that change, we would actually have log_sys.first_lsn updated to be not earlier than the log_sys.resize_lsn, that is, the resizing appeared to be sort-of completed in log_t::write_checkpoint(). oleg.smirnov is now testing a fix to log_t::write_checkpoint() that could ensure proper completion, or provide more information in the case that it hangs again.

            I tried and failed to reproduce this hang on my local AMD64 GNU/Linux system, both with and without the "fake PMEM" log stored in /dev/shm, and with and without the revised fix. On Buildbot, the hangs are by far most frequent on various Windows builders, but there have also been hangs on kvm-fulltest2 (a 32-bit GNU/Linux debug build), IBM AIX, and FreeBSD.


            It could be that in log_t::resize_start() we had log_sys.write_lsn==log_sys.first_lsn (that is, no log had been written), or possibly exactly 511 bytes had been written (due to some background processing, such as purging some history of committed transactions after ./mtr --bootstrap).
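
            To illustrate why exactly 511 bytes would matter, here is a standalone sketch based on my reading of the rounding in log_t::resize_start(), where the already-written byte count seems to be rounded down to the 512-byte log block size (the 57748 value is only an example taken from a later diagnostic in this ticket):

            // Illustration only: anything below one full 512-byte block
            // rounds down to 0, so the computed start LSN collapses back
            // to first_lsn and the flush-ahead request asks for nothing new.
            #include <cassert>
            #include <cstdint>
            #include <initializer_list>
            typedef uint64_t lsn_t;

            int main()
            {
              const lsn_t block_size= 512;    // get_block_size()
              const lsn_t first_lsn= 57748;   // 0xe194, as in the hang analyzed later
              for (lsn_t written : {lsn_t{0}, lsn_t{443}, lsn_t{511}, lsn_t{512}})
              {
                lsn_t rounded= ~lsn_t{block_size - 1} & written;
                lsn_t start_lsn= first_lsn + rounded;
                // only a full 512-byte block moves start_lsn past first_lsn
                assert((written >= block_size) == (start_lsn > first_lsn));
              }
              return 0;
            }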

            In both my GNU/Linux environment and oleg.smirnov’s Microsoft Windows environment, the log is resized from 10MiB to 5MiB at server startup, and on my system that causes some log to be written:

            #0  log_t::write_buf<true> (this=0x55b3572eb100 <log_sys>) at /mariadb/10.9/storage/innobase/log/log0log.cc:846
            #1  0x000055b3565a9e2c in log_write_up_to (lsn=<optimized out>, lsn@entry=58195, durable=true, callback=<optimized out>, callback@entry=0x0) at /mariadb/10.9/storage/innobase/log/log0log.cc:924
            #2  0x000055b35649aead in log_checkpoint_low (oldest_lsn=58179, end_lsn=58179) at /mariadb/10.9/storage/innobase/buf/buf0flu.cc:1914
            #3  0x000055b35649c4ab in log_make_checkpoint () at /mariadb/10.9/storage/innobase/buf/buf0flu.cc:1964
            #4  0x000055b3566e8274 in create_log_file (create_new_db=false, lsn=lsn@entry=58179) at /mariadb/10.9/storage/innobase/srv/srv0start.cc:247
            #5  0x000055b3566e6fe7 in srv_start (create_new_db=false) at /mariadb/10.9/storage/innobase/srv/srv0start.cc:1397
            

            Between that point of time and the resizing to 4MiB there were no log writes in a local non-hanging rr replay trace that I analyzed. On my system, the page cleaner would be woken up by resizing:

            #0  buf_pool_t::page_cleaner_set_idle (this=this@entry=0x55b3569c0a40 <buf_pool>, deep_sleep=false) at /mariadb/10.9/storage/innobase/include/buf0buf.h:1790
            #1  0x000055b35649c8f4 in buf_flush_ahead (lsn=lsn@entry=58179, furious=false) at /mariadb/10.9/storage/innobase/buf/buf0flu.cc:2073
            #2  0x000055b3565aa4b6 in log_t::resize_start (this=0x55b3572eb100 <log_sys>, size=<optimized out>) at /mariadb/10.9/storage/innobase/log/log0log.cc:500
            #3  0x000055b3563e4d03 in innodb_log_file_size_update (thd=0x556814000d58, var=<optimized out>, save=<optimized out>) at /mariadb/10.9/storage/innobase/handler/ha_innodb.cc:18496
            

            I will provide one more patch to oleg.smirnov, to add some output to log_t::resize_start() so that we will know more about what happens in those runs that result in a hang.

            If the hang is caused by there being exactly 511 bytes of log writes, the hang might go away if the test is run as follows:

            ./mtr --mysqld=--innodb-fast-shutdown=0 --parallel=2 innodb.log_file_size_online
            

            It would also be useful to save a copy of var/install.db for a run that leads to a hang. It might be more easily reproducible by starting the server on a copy of such a data directory. (The bootstrap is not exactly deterministic.)


            oleg.smirnov ran some more tests for me. If the test is run as

            perl mysql-test-run.pl innodb.log_file_size_online --mysqld=--innodb-fast-shutdown=0 --parallel=2
            

            then it would not hang at all. This suggests that for the hang to be possible, there must be some concurrent log writes (due to the purge of the history of transactions that were committed during the bootstrap).

            He also provided me with a copy of the data directory from the hang, the contents of log_sys, as well as some output from additional diagnostics:

            2023-07-03 14:16:52 0 [Note] C:/10.6/bld/sql//Debug/mariadbd.exe: ready for connections.
            Version: '10.9.8-MariaDB-debug-log'  socket: ''  port: 16000  Source distribution
            2023-07-03 14:16:52 4 [Note] resize: 57748,57748,58191,58191
            

            In the copy of the data directory, there is a freshly created ib_logfile0 (size: 5MiB) and a being-created ib_logfile101. Both files contain identical log record payload starting at byte offset 0x3000. The 1094 bytes correspond to the difference between the latest LSN and the latest checkpoint LSN (which is also the creation LSN of both log files): 0xe5da-0xe194=1094.

            All pending log had been written to both files. Log resizing would not complete unless there was another log checkpoint. The log_sys.resize_lsn was 0xe194, exactly the latest checkpoint LSN, so the old condition resizing > checkpoint_lsn in log_t::write_checkpoint() (see the patch below) did not hold: no write of the buffer pool was ever initiated, and no further log checkpoint occurred.

            Based on my reading of log_t::write_checkpoint(), I think that the last 2 hunks of the following should fix this:

            diff --git a/storage/innobase/log/log0log.cc b/storage/innobase/log/log0log.cc
            index 423203a805c..7f3583dbdd8 100644
            --- a/storage/innobase/log/log0log.cc
            +++ b/storage/innobase/log/log0log.cc
            @@ -489,6 +489,8 @@ log_t::resize_start_status log_t::resize_start(os_offset_t size) noexcept
                         (~lsn_t{get_block_size() - 1} & (write_lsn - first_lsn));
                     }
                   }
            +      sql_print_information("resize: " LSN_PF "," LSN_PF "," LSN_PF "," LSN_PF,
            +                            first_lsn, start_lsn, write_lsn, get_lsn());
                   resize_lsn.store(start_lsn, std::memory_order_relaxed);
                   status= success ? RESIZE_STARTED : RESIZE_FAILED;
                 }
            @@ -497,7 +499,7 @@ log_t::resize_start_status log_t::resize_start(os_offset_t size) noexcept
               log_resize_release();
             
               if (start_lsn)
            -    buf_flush_ahead(start_lsn, false);
            +    buf_flush_ahead(start_lsn + 1, false);
             
               return status;
             }
            diff --git a/storage/innobase/buf/buf0flu.cc b/storage/innobase/buf/buf0flu.cc
            index 90263757c19..02e1f592124 100644
            --- a/storage/innobase/buf/buf0flu.cc
            +++ b/storage/innobase/buf/buf0flu.cc
            @@ -1858,14 +1858,17 @@ inline void log_t::write_checkpoint(lsn_t end_lsn) noexcept
               log_resize_release();
             
               if (UNIV_LIKELY(resizing <= 1));
            -  else if (resizing > checkpoint_lsn)
            -    buf_flush_ahead(resizing, false);
            +  else if (resizing >= checkpoint_lsn)
            +    buf_flush_ahead(resizing + 1, false);
               else if (resizing_completed)
                 ib::info() << "Resized log to " << ib::bytes_iec{resizing_completed}
                   << "; start LSN=" << resizing;
               else
            +  {
                 sql_print_error("InnoDB: Resize of log failed at " LSN_PF,
                                 get_flushed_lsn());
            +    buf_flush_ahead(get_lsn(), false);
            +  }
             }
             
             /** Initiate a log checkpoint, discarding the start of the log.
            

            The last part of the last hunk is only there to make the error handling a little more robust. That error was never triggered during any test.


            What worked in the end was to invoke buf_flush_ahead() on buf_pool.get_oldest_modification(0)+1, to ensure that the checkpoint will be advanced and the resizing will eventually complete.
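
            The committed change itself is not quoted in this ticket; as a rough, hedged sketch of its shape (reusing only names that appear earlier here, namely buf_flush_ahead(), buf_pool.get_oldest_modification() and the start_lsn local in log_t::resize_start(); where the call actually landed is an assumption), the wake-up would become something like:

            /* Sketch only, not the committed patch: after initiating the
               resize, request flushing past the oldest dirty page so that
               the checkpoint keeps advancing and log_t::write_checkpoint()
               can eventually complete the resize, even when start_lsn is
               already covered by the latest checkpoint. */
            if (start_lsn)
              buf_flush_ahead(buf_pool.get_oldest_modification(0) + 1, false);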


            People

              Marko Mäkelä
              Votes: 0
              Watchers: 3
