MariaDB Server: MDEV-11802

innodb.innodb_bug14676111 fails in buildbot due to InnoDB purge failing to start when there is work to do

Details

    • 10.2.4-1, 10.2.4-2

    Description

      http://buildbot.askmonty.org/buildbot/builders/p8-rhel7-bintar-debug/builds/2018/steps/test/logs/stdio

      innodb.innodb_bug14676111 'innodb_plugin' w1 [ fail ]
              Test ended at 2017-01-05 22:06:59
       
      CURRENT_TEST: innodb.innodb_bug14676111
      --- /home/buildbot/maria-slave/power8-vlp03-bintar-debug/build/mysql-test/suite/innodb/r/innodb_bug14676111.result	2017-01-05 19:54:53.473261302 -0500
      +++ /home/buildbot/maria-slave/power8-vlp03-bintar-debug/build/mysql-test/suite/innodb/r/innodb_bug14676111.reject	2017-01-05 22:06:58.634010067 -0500
      @@ -22,7 +22,7 @@
       test.t1	analyze	status	OK
       select CLUST_INDEX_SIZE from information_schema.INNODB_SYS_TABLESTATS where NAME = 'test/t1';
       CLUST_INDEX_SIZE
      -8
      +10
       delete from t1 where a=5;
       set global innodb_purge_stop_now=ON;
       set global innodb_purge_run_now=ON;
       
      mysqltest: Result length mismatch
      

Started happening quite regularly in buildbot on the 10.1 tree since January 5, 2017.
      Observed on win32-debug, winx64-debug, p8-rhel7-bintar-debug.

          Activity

            The test has not been changed recently. Also the InnoDB purge code has not been changed recently, as far as I can tell.
            The result difference comes from this snippet:

            delete from t1 where a=4;
            set global innodb_purge_stop_now=ON;
            set global innodb_purge_run_now=ON;
            --source include/wait_innodb_all_purged.inc
            #deleting 1 record of 2 records don't cause merge artificially.
            #current tree form
            #      (1, 5)
            #    (1)    (5)
            #  (1, 3)     (5)
            #(1, 2) (3)     (5)
             
            analyze table t1;
            select CLUST_INDEX_SIZE from information_schema.INNODB_SYS_TABLESTATS where NAME = 'test/t1';
            

            As the above comment shows, the tree size really should be 8 pages.

            I suspect that triggering the purge or waiting for the purge to finish does not work reliably. Similar problems have existed in the MySQL 5.7 tests, in particular innodb.index_merge_threshold (which is missing from MariaDB 10.2).

            Because this started on January 5, I think that this could be related to my attempt to address MDEV-8139 by porting the fix of Bug#24450908 UNDO LOG EXISTS AFTER SLOW SHUTDOWN from MySQL 5.7.

marko Marko Mäkelä added a comment

            There was a previous attempt at fixing this in MDEV-4396.

marko Marko Mäkelä added a comment

I was hoping that MDEV-11947 would fix this, but this test did fail in 10.1 even with the MDEV-11947 fix included.

Apparently it is still possible that InnoDB purge sometimes gets stuck, or wait_innodb_all_purged.inc is not working reliably.

            In Oracle MySQL 5.7, another test that is randomly failing due to the same issue is innodb.index_merge_threshold (which is missing from MariaDB 10.2). There were attempts to ‘fix’ the failure by ‘kicking’ the purge threads by issuing DML operations on other InnoDB tables than the one that is being tested. I think that the proper fix is to ensure that purge does not get stuck in the first place.

marko Marko Mäkelä added a comment

            The test gcol.innodb_virtual_debug_purge that was introduced in MDEV-5800 is occasionally failing due to a timeout. The underlying reason should be the same: InnoDB purge threads are occasionally stuck, or losing a signal to resume work.

marko Marko Mäkelä added a comment

            marko,

            FWIW, please note that so far gcol.innodb_virtual_debug_purge has been failing with a timeout only on embedded server (and it happens often enough). Here is a stack trace from a hanging test:

            10.2 92bbf4ad0477e09bcc86907696cd114ef42e6914

            #3  <signal handler called>
            #4  0x00007fd1f212918d in nanosleep () at ../sysdeps/unix/syscall-template.S:81
            #5  0x00007fd1f32b89ce in os_thread_sleep (tm=100000) at /data/src/10.2-bug/storage/innobase/os/os0thread.cc:225
            #6  0x00007fd1f30dba8e in logs_empty_and_mark_files_at_shutdown () at /data/src/10.2-bug/storage/innobase/log/log0log.cc:2125
            #7  0x00007fd1f32560f1 in innodb_shutdown () at /data/src/10.2-bug/storage/innobase/srv/srv0start.cc:2729
            #8  0x00007fd1f2ff876d in innobase_end (hton=0x7fd1f60e3510, type=HA_PANIC_CLOSE) at /data/src/10.2-bug/storage/innobase/handler/ha_innodb.cc:4598
            #9  0x00007fd1f2e4775b in ha_finalize_handlerton (plugin=0x7fd1f60c2210) at /data/src/10.2-bug/sql/handler.cc:451
            #10 0x00007fd1f2bb13d1 in plugin_deinitialize (plugin=0x7fd1f60c2210, ref_check=true) at /data/src/10.2-bug/sql/sql_plugin.cc:1217
            #11 0x00007fd1f2bb184d in reap_plugins () at /data/src/10.2-bug/sql/sql_plugin.cc:1294
            #12 0x00007fd1f2bb37db in plugin_shutdown () at /data/src/10.2-bug/sql/sql_plugin.cc:1945
            #13 0x00007fd1f2a52dce in clean_up (print_message=false) at /data/src/10.2-bug/libmysqld/../sql/mysqld.cc:2201
            #14 0x00007fd1f2a5b8ae in end_embedded_server () at /data/src/10.2-bug/libmysqld/lib_sql.cc:648
            #15 0x00007fd1f2a4905e in mysql_server_end () at /data/src/10.2-bug/libmysqld/libmysql.c:211
            #16 0x00007fd1f2a082c4 in cleanup_and_exit (exit_code=0) at /data/src/10.2-bug/client/mysqltest.cc:1498
            #17 0x00007fd1f2a1a786 in main (argc=34, argv=0x7ffc9df8d978) at /data/src/10.2-bug/client/mysqltest.cc:9582
            

elenst Elena Stepanova added a comment

            elenst, I filed MDEV-12057 for the embedded server shutdown hang.

marko Marko Mäkelä added a comment

            The function trx_purge_stop() is calling os_event_reset(purge_sys->event) before calling rw_lock_x_lock(&purge_sys->latch). The os_event_set() call in srv_purge_coordinator_suspend() is protected by that X-latch. It would seem a good idea to protect both calls with purge_sys->latch.

marko Marko Mäkelä added a comment
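The lost-wakeup hazard described in the comment above can be illustrated with ordinary threading primitives. The following is a hypothetical Python stand-in, not the actual InnoDB code: `lock` plays the role of purge_sys->latch, `event` of purge_sys->event, and the function names are inventions for illustration.

```python
import threading

lock = threading.Lock()
event = threading.Event()
stopped = False  # stand-in for purge state guarded by the latch

def signaler():
    # analogous to srv_purge_coordinator_suspend(): set under the latch
    global stopped
    with lock:
        stopped = True
        event.set()

def racy_wait():
    # buggy ordering, like the original trx_purge_stop():
    # the event is reset BEFORE acquiring the latch
    event.clear()          # may wipe out a set() that already happened
    with lock:
        pass               # state would be inspected here, too late
    return event.wait(timeout=0.2)

def safe_wait():
    # fixed ordering: check the state and reset under the same lock
    with lock:
        if stopped:
            return True    # signal already delivered, nothing to wait for
        event.clear()
    return event.wait(timeout=0.2)

# Force the bad interleaving deterministically: the signal arrives first.
signaler()
signal_lost = not racy_wait()   # the reset wiped the earlier set()

# Replay the same interleaving with the fixed ordering.
event.clear()
stopped = False
signaler()
signal_seen = safe_wait()       # the state check under the lock saves us
```

With the buggy ordering the earlier `set()` is erased and the wait times out; with both the state check and the reset under the lock, the signal cannot be lost.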

Revision (2 commits, the first adjusting tests only) pushed to bb-10.0-marko and bb-10.2-marko.

            Both versions fix the potential race in trx_purge_stop() by acquiring purge_sys->latch before signaling purge_sys->event.
            In 10.2 we will fix a potential race that was introduced in MySQL 5.7 (and MariaDB 10.2.2), by invoking log_mutex_enter() at the end of log_write_flush_to_disk_low().

marko Marko Mäkelä added a comment

            ok to push after documenting srv_buf_dump_event.

jplindst Jan Lindström (Inactive) added a comment

            I documented also srv_monitor_event and srv_error_event in 10.0. Events related to mutexes, rw-locks and fulltext indexes were not documented or reviewed by me.
            In 10.2 there are further events (such as srv_buf_resize_event) that I did not review or document.
            After some time we should learn if the change to trx_purge_stop() really fixed the random test failures on buildbot. I was not able to repeat the failures myself this time, even without the patch. Earlier, the tests have occasionally failed locally.

marko Marko Mäkelä added a comment

The test innodb.innodb_bug14676111 failed on 10.1 on 64-bit Windows today. Unfortunately the purge can still remain stuck, or the test is badly written (it does not properly wait for the purge to run to completion).
It is worth noting that there are two progress measures for purge. Advancing the purge_sys->read_view gives purge permission to remove purgeable records. Actually removing the records could take some time, depending on system load.

It seems that wait_innodb_all_purged.inc is attempting to wait for the actual removal, by waiting for the debug status variable INNODB_PURGE_TRX_ID_AGE to reach zero. I do not see anything obviously wrong in the instrumentation.

marko Marko Mäkelä added a comment

            The 5.5 test innodb.innodb_bug14676111 unnecessarily relies on InnoDB purge. It can simply do BEGIN;INSERT;ROLLBACK to achieve the same result synchronously (the rollback of an insert immediately removes the record from the index B-tree).
            The 10.2 test gcol.innodb_virtual_debug_purge is trickier, because it really needs to test the purge.

marko Marko Mäkelä added a comment

            This commit in bb-10.2-marko should fix the underlying issues. The fix is needed for the test innodb.truncate_purge_debug introduced in my clean-up of an Oracle bug fix and test.
            The test gcol.innodb_virtual_debug_purge and other tests using wait_innodb_all_purged.inc should still be cleaned up. I think that we should avoid using the debug variables innodb_purge_stop_now and innodb_purge_run_now, and just rely on reaching "History list length 0" in the SHOW ENGINE INNODB STATUS output.

marko Marko Mäkelä added a comment
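The "History list length 0" wait suggested above can be sketched as a simple polling loop. This is a hypothetical Python illustration of what such a mysqltest include does; `fetch_status` stands in for running SHOW ENGINE INNODB STATUS, and the helper names are inventions for illustration.

```python
import re
import time

def history_list_length(status_text):
    # SHOW ENGINE INNODB STATUS output contains a line such as
    # "History list length 132"
    m = re.search(r"History list length (\d+)", status_text)
    if m is None:
        raise ValueError("history list length not reported")
    return int(m.group(1))

def wait_all_purged(fetch_status, timeout=300, interval=0.1):
    """Poll until purge has emptied the history list, or time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if history_list_length(fetch_status()) == 0:
            return True   # all committed history has been purged
        time.sleep(interval)
    return False          # purge did not catch up within the timeout
```

Because issuing SHOW ENGINE INNODB STATUS itself wakes up the purge threads (per the work-around discussed later in this thread), each polling iteration also nudges purge forward.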

            This was a prerequisite for cleaning up a test case for a TRUNCATE performance fix that I merged from MySQL 5.7 as part of MDEV-11751.

            We might want to backport this fix to 10.0 and 10.1 later. The impact of this bug is that even when InnoDB is idle or running mostly read-only operations and it is able to purge old history (a sort of garbage collection), it is not doing so.

marko Marko Mäkelä added a comment

            This issue is still not fully fixed.
            I pushed a workaround for MDEV-12808 that wakes up the purge whenever SHOW ENGINE INNODB STATUS is executed.

marko Marko Mäkelä added a comment

Related note from MDEV-13603: A slow shutdown (innodb_fast_shutdown=0) will not always run purge to completion if innodb_purge_rseg_truncate_frequency=1 was not set early enough before the shutdown.

marko Marko Mäkelä added a comment
Possibly related to this: MySQL Bug #75231 "records are not purged after a delete operation".

marko Marko Mäkelä added a comment

            While testing my fix for MDEV-14799 I noticed that MariaDB 10.0 and 10.1 appear to run purge less frequently than 5.5 or 10.2.
In MariaDB 10.2.6, while trying to fix this bug, I made trx_commit_in_memory() invoke srv_wake_purge_thread_if_not_active() even for read-only transactions, because I was thinking of the following type of scenario, which covers the MySQL Bug #75231 scenario:

            connection 1;
            START TRANSACTION WITH CONSISTENT SNAPSHOT;
            connection 2;
            DELETE FROM t1;
            COMMIT; -- purge cannot do anything yet, because of the read view that was opened above
            connection 1;
            COMMIT; -- only now the purge is possible
            

marko Marko Mäkelä added a comment

One possible culprit is MVCC::view_open(), specifically this code:

                            /* NOTE: This can be optimised further, for now we only
                            resuse the view iff there are no active RW transactions.
             
                            There is an inherent race here between purge and this
                            thread. Purge will skip views that are marked as closed.
                            Therefore we must set the low limit id after we reset the
                            closed status after the check. */
             
                            if (trx_is_autocommit_non_locking(trx) && view->empty()) {
             
                                    view->m_closed = false;
            

While the thread is in this gap, a concurrent purge thread may clone a stale "oldest" view. That is, the purge thread will not be able to purge some newer committed transactions while we are in this gap.

                                    if (view->m_low_limit_id == trx_sys_get_max_trx_id()) {
                                            return;
                                    } else {
                                            view->m_closed = true;
                                    }
                            }
            

svoj Sergey Vojtovich added a comment
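The gap described above can be modeled with trivial stand-ins. In this hypothetical Python sketch, `View`, `purge_limit` and the id values are illustrative inventions rather than InnoDB structures; it shows how purge, running between the `m_closed = False` assignment and the refresh of `m_low_limit_id`, is held back by the stale limit.

```python
class View:
    """Stand-in for a read view: open/closed flag plus a snapshot limit."""
    def __init__(self, low_limit_id, closed=True):
        self.m_low_limit_id = low_limit_id
        self.m_closed = closed

def purge_limit(views, max_trx_id):
    # purge clones the oldest open view; closed views are skipped
    open_views = [v for v in views if not v.m_closed]
    return min((v.m_low_limit_id for v in open_views), default=max_trx_id)

max_trx_id = 100                 # newest transaction id in the system
stale = View(low_limit_id=40)    # left over from an earlier snapshot

# the reusing thread flips m_closed first...
stale.m_closed = False
# ...and purge runs inside the gap, before m_low_limit_id is refreshed:
limit_in_gap = purge_limit([stale], max_trx_id)   # held back at the stale id

# once the thread finishes reopening the view, the limit is current again
stale.m_low_limit_id = max_trx_id
limit_after = purge_limit([stale], max_trx_id)
```

Inside the gap, purge can remove nothing newer than the stale limit (40 here); once the view is fully reopened, the limit advances to the current maximum transaction id.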

On the other hand, according to gdb, innodb.innodb_bug14676111 never calls MVCC::view_open() with view != NULL, so this code should never be executed.

svoj Sergey Vojtovich added a comment

            svoj, please note that I rewrote the test so that it does not rely on purge any more, but on rollback instead.

            I also worked around possible lost signals by making SHOW ENGINE INNODB STATUS trigger purge. To better analyze this problem, these changes should be reverted.

marko Marko Mäkelä added a comment

            As noted in MDEV-13779, srv_release_threads() can fail to wake up some purge worker threads. MDEV-13779 introduced an extra check to ensure that all purge workers will be woken up at shutdown, but they might occasionally remain sleeping during normal operation.

            The fix of MDEV-12708 in MariaDB 10.2.6 introduced the work-around that SHOW ENGINE INNODB STATUS will wake up purge threads. This seems to work fine in wait_all_purged.inc.

            With MDEV-13603 in MariaDB 10.3.6, innodb_fast_shutdown=0 should always run full purge. Older versions could fail to purge some of the last active transactions.

marko Marko Mäkelä added a comment

            With the current work-around (SHOW ENGINE INNODB STATUS will initiate a purge), this is not a practical problem any more.
            The mysql-test/suite/innodb/include/wait_all_purged.inc appears to be working reliably both in non-debug and debug builds of the server.

marko Marko Mäkelä added a comment

            The wait_all_purged.inc was introduced in MDEV-12698 (10.2.7, 10.3.1).
            I do not remember seeing failures related to this lately.
            The actual problem still exists (purge may be idling when there is work to do), but it does not show up in tests, because wait_all_purged.inc will trigger purge by issuing SHOW ENGINE INNODB STATUS. MDEV-16260 should fix the problem and remove the work-around of triggering the purge when executing that statement.

marko Marko Mäkelä added a comment

            I think that the original problem (that purge fails to remove some history of committed transactions) might be fixed by MDEV-30671. There appears to be a correctness problem. When discarding rollback segments, purge only considers currently active transactions, and does not check if all history contained in the rollback segment actually has been purged.

            The CHECK TABLE…EXTENDED implemented in MDEV-24402 will warn about history that should have been purged. The history would be purged when rebuilding the table, for example, by OPTIMIZE TABLE.

marko Marko Mäkelä added a comment

            People

              marko Marko Mäkelä
              elenst Elena Stepanova