MariaDB Server
MDEV-10327

Assertion `Handlerton: _use_count != 0 ' failed (errno=0) in ha_tokudb.cc:315 release on server shutdown

Details

    Description

      Affects buildbot:
      http://buildbot.askmonty.org/buildbot/builders/bld-dan-debug/builds/3429/steps/test_1/logs/stdio
      (search for tokudb.background_job_manager)

      I also get it outside buildbot by running

      perl ./mtr tokudb.background_job_manager --repeat=20 --force-restart
      

      160704  1:17:13 [Note] /home/elenst/git/10.0/sql/mysqld: Normal shutdown
       
      160704  1:17:13 [Note] Event Scheduler: Purging the queue. 0 events
      /home/elenst/git/10.0/storage/tokudb/ha_tokudb.cc:315 release: Assertion `Handlerton: _use_count != 0 ' failed (errno=0)
      : Success
      Backtrace: (Note: toku_do_assert=0x0x7fc0fd569073)
       
      ...
       
      Engine status:
      BUILD_ID = 0
      Env panic code: 18446744073709551615
      Env panic string: /home/elenst/git/10.0/storage/tokudb/ha_tokudb.cc:315 release: Assertion `Handlerton: _use_count != 0 ' failed (errno=0)
       
      time of environment creation: Mon Jul  4 01:17:03 2016
      time of engine startup: Mon Jul  4 01:17:03 2016
      time now: Mon Jul  4 01:17:13 2016
      db opens: 99
      db closes: 99
      num open dbs now: 0
      max open dbs: 28
      period, in ms, that recovery log is automatically fsynced: 0
      dictionary inserts: 80
      dictionary inserts fail: 0
      dictionary deletes: 28
      dictionary deletes fail: 0
      dictionary updates: 0
      dictionary updates fail: 0
      dictionary broadcast updates: 1
      dictionary broadcast updates fail: 0
      dictionary multi inserts: 0
      dictionary multi inserts fail: 0
      dictionary multi deletes: 0
      dictionary multi deletes fail: 0
      dictionary updates multi: 0
      dictionary updates multi fail: 0
      le: max committed xr: 2
      le: max provisional xr: 2
      le: expanded: 0
      le: max memsize: 6944
      le: size of leafentries before garbage collection (during message application): 3715
      le: size of leafentries after garbage collection (during message application): 3528
      le: size of leafentries before garbage collection (outside message application): 0
      le: size of leafentries after garbage collection (outside message application): 0
      checkpoint: period: 60
      checkpoint: footprint: 0
      checkpoint: last checkpoint began: Mon Jul  4 01:17:03 2016
      checkpoint: last complete checkpoint began: Mon Jul  4 01:17:03 2016
      checkpoint: last complete checkpoint ended: Mon Jul  4 01:17:03 2016
      checkpoint: time spent during checkpoint (begin and end phases): 0
      checkpoint: time spent during last checkpoint (begin and end phases): 0
      checkpoint: last complete checkpoint LSN: 8
      checkpoint: checkpoints taken : 1
      checkpoint: checkpoints failed: 0
      checkpoint: waiters now: 0
      checkpoint: waiters max: 0
      checkpoint: non-checkpoint client wait on mo lock: 0
      checkpoint: non-checkpoint client wait on cs lock: 0
      checkpoint: checkpoint begin time: 18
      checkpoint: long checkpoint begin time: 0
      checkpoint: long checkpoint begin count: 0
      checkpoint: checkpoint end time: 356409
      checkpoint: long checkpoint end time: 0
      checkpoint: long checkpoint end count: 0
      cachetable: miss: 6
      cachetable: miss time: 136
      cachetable: prefetches: 0
      cachetable: size current: 6581
      cachetable: size limit: 4581650022
      cachetable: size writing: 0
      cachetable: size nonleaf: 0
      cachetable: size leaf: 6165
      cachetable: size rollback: 416
      cachetable: size cachepressure: 0
      cachetable: size currently cloned data for checkpoint: 0
      cachetable: evictions: 49
      cachetable: cleaner executions: 14
      cachetable: cleaner period: 1
      cachetable: cleaner iterations: 5
      cachetable: number of waits on cache pressure: 0
      cachetable: time waiting on cache pressure: 0
      cachetable: number of long waits on cache pressure: 0
      cachetable: long time waiting on cache pressure: 0
      cachetable: client pool: number of threads in pool: 4
      cachetable: client pool: number of currently active threads in pool: 0
      cachetable: client pool: number of currently queued work items: 0
      cachetable: client pool: largest number of queued work items: 1
      cachetable: client pool: total number of work items processed: 1
      cachetable: client pool: total execution time of processing work items: 0
      cachetable: cachetable pool: number of threads in pool: 8
      cachetable: cachetable pool: number of currently active threads in pool: 0
      cachetable: cachetable pool: number of currently queued work items: 0
      cachetable: cachetable pool: largest number of queued work items: 5
      cachetable: cachetable pool: total number of work items processed: 53
      cachetable: cachetable pool: total execution time of processing work items: 22
      cachetable: checkpoint pool: number of threads in pool: 1
      cachetable: checkpoint pool: number of currently active threads in pool: 0
      cachetable: checkpoint pool: number of currently queued work items: 0
      cachetable: checkpoint pool: largest number of queued work items: 0
      cachetable: checkpoint pool: total number of work items processed: 0
      cachetable: checkpoint pool: total execution time of processing work items: 0
      locktree: memory size: 0
      locktree: memory size limit: 520642048
      locktree: number of times lock escalation ran: 0
      locktree: time spent running escalation (seconds): 0.000000
      locktree: latest post-escalation memory size: 0
      locktree: number of locktrees open now: 2
      locktree: number of pending lock requests: 0
      locktree: number of locktrees eligible for the STO: 0
      locktree: number of times a locktree ended the STO early: 0
      locktree: time spent ending the STO early (seconds): 0.000000
      locktree: number of wait locks: 0
      locktree: time waiting for locks: 0
      locktree: number of long wait locks: 0
      locktree: long time waiting for locks: 0
      locktree: number of lock timeouts: 0
      locktree: number of waits on lock escalation: 0
      locktree: time waiting on lock escalation: 0
      locktree: number of long waits on lock escalation: 0
      locktree: long time waiting on lock escalation: 0
      ft: dictionary updates: 0
      ft: dictionary broadcast updates: 4
      ft: descriptor set: 30
      ft: messages ignored by leaf due to msn: 5
      ft: total search retries due to TRY_AGAIN: 0
      ft: searches requiring more tries than the height of the tree: 0
      ft: searches requiring more tries than the height of the tree plus three: 0
      ft: leaf nodes flushed to disk (not for checkpoint): 47
      ft: leaf nodes flushed to disk (not for checkpoint) (bytes): 24576
      ft: leaf nodes flushed to disk (not for checkpoint) (uncompressed bytes): 30287
      ft: leaf nodes flushed to disk (not for checkpoint) (seconds): 0.000531
      ft: nonleaf nodes flushed to disk (not for checkpoint): 6
      ft: nonleaf nodes flushed to disk (not for checkpoint) (bytes): 3072
      ft: nonleaf nodes flushed to disk (not for checkpoint) (uncompressed bytes): 2133
      ft: nonleaf nodes flushed to disk (not for checkpoint) (seconds): 0.000033
      ft: leaf nodes flushed to disk (for checkpoint): 3
      ft: leaf nodes flushed to disk (for checkpoint) (bytes): 1536
      ft: leaf nodes flushed to disk (for checkpoint) (uncompressed bytes): 435
      ft: leaf nodes flushed to disk (for checkpoint) (seconds): 0.000005
      ft: nonleaf nodes flushed to disk (for checkpoint): 0
      ft: nonleaf nodes flushed to disk (for checkpoint) (bytes): 0
      ft: nonleaf nodes flushed to disk (for checkpoint) (uncompressed bytes): 0
      ft: nonleaf nodes flushed to disk (for checkpoint) (seconds): 0.000000
      ft: uncompressed / compressed bytes written (leaf): UNKNOWN STATUS TYPE: 6
      ft: uncompressed / compressed bytes written (nonleaf): UNKNOWN STATUS TYPE: 6
      ft: uncompressed / compressed bytes written (overall): UNKNOWN STATUS TYPE: 6
      ft: nonleaf node partial evictions: 0
      ft: nonleaf node partial evictions (bytes): 0
      ft: leaf node partial evictions: 0
      ft: leaf node partial evictions (bytes): 0
      ft: leaf node full evictions: 45
      ft: leaf node full evictions (bytes): 56039
      ft: nonleaf node full evictions: 4
      ft: nonleaf node full evictions (bytes): 21273
      ft: leaf nodes created: 48
      ft: nonleaf nodes created: 4
      ft: leaf nodes destroyed: 0
      ft: nonleaf nodes destroyed: 0
      ft: bytes of messages injected at root (all trees): 4327
      ft: bytes of messages flushed from h1 nodes to leaves: 3938
      ft: bytes of messages currently in trees (estimate): 389
      ft: messages injected at root: 22
      ft: broadcast messages injected at root: 0
      ft: basements decompressed as a target of a query: 0
      ft: basements decompressed for prelocked range: 0
      ft: basements decompressed for prefetch: 0
      ft: basements decompressed for write: 0
      ft: buffers decompressed as a target of a query: 0
      ft: buffers decompressed for prelocked range: 0
      ft: buffers decompressed for prefetch: 0
      ft: buffers decompressed for write: 0
      ft: pivots fetched for query: 0
      ft: pivots fetched for query (bytes): 0
      ft: pivots fetched for query (seconds): 0.000000
      ft: pivots fetched for prefetch: 0
      ft: pivots fetched for prefetch (bytes): 0
      ft: pivots fetched for prefetch (seconds): 0.000000
      ft: pivots fetched for write: 0
      ft: pivots fetched for write (bytes): 0
      ft: pivots fetched for write (seconds): 0.000000
      ft: basements fetched as a target of a query: 0
      ft: basements fetched as a target of a query (bytes): 0
      ft: basements fetched as a target of a query (seconds): 0.000000
      ft: basements fetched for prelocked range: 0
      ft: basements fetched for prelocked range (bytes): 0
      ft: basements fetched for prelocked range (seconds): 0.000000
      ft: basements fetched for prefetch: 0
      ft: basements fetched for prefetch (bytes): 0
      ft: basements fetched for prefetch (seconds): 0.000000
      ft: basements fetched for write: 0
      ft: basements fetched for write (bytes): 0
      ft: basements fetched for write (seconds): 0.000000
      ft: buffers fetched as a target of a query: 0
      ft: buffers fetched as a target of a query (bytes): 0
      ft: buffers fetched as a target of a query (seconds): 0.000000
      ft: buffers fetched for prelocked range: 0
      ft: buffers fetched for prelocked range (bytes): 0
      ft: buffers fetched for prelocked range (seconds): 0.000000
      ft: buffers fetched for prefetch: 0
      ft: buffers fetched for prefetch (bytes): 0
      ft: buffers fetched for prefetch (seconds): 0.000000
      ft: buffers fetched for write: 0
      ft: buffers fetched for write (bytes): 0
      ft: buffers fetched for write (seconds): 0.000000
      ft: leaf compression to memory (seconds): 0.014784
      ft: leaf serialization to memory (seconds): 0.000291
      ft: leaf decompression to memory (seconds): 0.000063
      ft: leaf deserialization to memory (seconds): 0.000034
      ft: nonleaf compression to memory (seconds): 0.003540
      ft: nonleaf serialization to memory (seconds): 0.000065
      ft: nonleaf decompression to memory (seconds): 0.000000
      ft: nonleaf deserialization to memory (seconds): 0.000000
      ft: promotion: roots split: 4
      ft: promotion: leaf roots injected into: 111
      ft: promotion: h1 roots injected into: 18
      ft: promotion: injections at depth 0: 4
      ft: promotion: injections at depth 1: 5
      ft: promotion: injections at depth 2: 0
      ft: promotion: injections at depth 3: 0
      ft: promotion: injections lower than depth 3: 0
      ft: promotion: stopped because of a nonempty buffer: 0
      ft: promotion: stopped at height 1: 4
      ft: promotion: stopped because the child was locked or not at all in memory: 0
      ft: promotion: stopped because the child was not fully in memory: 0
      ft: promotion: stopped anyway, after locking the child: 0
      ft: basement nodes deserialized with fixed-keysize: 6
      ft: basement nodes deserialized with variable-keysize: 0
      ft: promotion: succeeded in using the rightmost leaf shortcut: 0
      ft: promotion: tried the rightmost leaf shorcut but failed (out-of-bounds): 0
      ft: promotion: tried the rightmost leaf shorcut but failed (child reactive): 0
      ft: cursor skipped deleted leaf entries: 0
      ft flusher: total nodes potentially flushed by cleaner thread: 3
      ft flusher: height-one nodes flushed by cleaner thread: 3
      ft flusher: height-greater-than-one nodes flushed by cleaner thread: 0
      ft flusher: nodes cleaned which had empty buffers: 0
      ft flusher: nodes dirtied by cleaner thread: 1
      ft flusher: max bytes in a buffer flushed by cleaner thread: 237
      ft flusher: min bytes in a buffer flushed by cleaner thread: 38
      ft flusher: total bytes in buffers flushed by cleaner thread: 430
      ft flusher: max workdone in a buffer flushed by cleaner thread: 189
      ft flusher: min workdone in a buffer flushed by cleaner thread: 0
      ft flusher: total workdone in buffers flushed by cleaner thread: 189
      ft flusher: times cleaner thread tries to merge a leaf: 0
      ft flusher: cleaner thread leaf merges in progress: 0
      ft flusher: cleaner thread leaf merges successful: 0
      ft flusher: nodes dirtied by cleaner thread leaf merges: 0
      ft flusher: total number of flushes done by flusher threads or cleaner threads: 4
      ft flusher: number of in memory flushes: 4
      ft flusher: number of flushes that read something off disk: 0
      ft flusher: number of flushes that triggered another flush in child: 0
      ft flusher: number of flushes that triggered 1 cascading flush: 0
      ft flusher: number of flushes that triggered 2 cascading flushes: 0
      ft flusher: number of flushes that triggered 3 cascading flushes: 0
      ft flusher: number of flushes that triggered 4 cascading flushes: 0
      ft flusher: number of flushes that triggered 5 cascading flushes: 0
      ft flusher: number of flushes that triggered over 5 cascading flushes: 0
      ft flusher: leaf node splits: 11
      ft flusher: nonleaf node splits: 0
      ft flusher: leaf node merges: 0
      ft flusher: nonleaf node merges: 0
      ft flusher: leaf node balances: 0
      hot: operations ever started: 0
      hot: operations successfully completed: 0
      hot: operations aborted: 0
      hot: max number of flushes from root ever required to optimize a tree: 0
      txn: begin: 185
      txn: begin read only: 0
      txn: successful commits: 181
      txn: aborts: 4
      logger: next LSN: 582
      logger: writes: 85
      logger: writes (bytes): 54280
      logger: writes (uncompressed bytes): 54280
      logger: writes (seconds): 0.001229
      logger: number of long logger write operations: 0
      indexer: number of indexers successfully created: 0
      indexer: number of calls to toku_indexer_create_indexer() that failed: 0
      indexer: number of calls to indexer->build() succeeded: 0
      indexer: number of calls to indexer->build() failed: 0
      indexer: number of calls to indexer->close() that succeeded: 0
      indexer: number of calls to indexer->close() that failed: 0
      indexer: number of calls to indexer->abort(): 0
      indexer: number of indexers currently in existence: 0
      indexer: max number of indexers that ever existed simultaneously: 0
      loader: number of loaders successfully created: 1
      loader: number of calls to toku_loader_create_loader() that failed: 0
      loader: number of calls to loader->put() succeeded: 4
      loader: number of calls to loader->put() failed: 0
      loader: number of calls to loader->close() that succeeded: 1
      loader: number of calls to loader->close() that failed: 0
      loader: number of calls to loader->abort(): 0
      loader: number of loaders currently in existence: 0
      loader: max number of loaders that ever existed simultaneously: 1
      memory: number of malloc operations: 0
      memory: number of free operations: 0
      memory: number of realloc operations: 0
      memory: number of malloc operations that failed: 0
      memory: number of realloc operations that failed: 0
      memory: number of bytes requested: 0
      memory: number of bytes used (requested + overhead): 0
      memory: number of bytes freed: 0
      memory: largest attempted allocation size: 0
      memory: size of the last failed allocation attempt: 0
      memory: estimated maximum memory footprint: 0
      memory: mallocator version: 3.6.0-0-g46c0af68bd248b04df75e4f92d5fb804c3d75340
      memory: mmap threshold: 4194304
      filesystem: ENOSPC redzone state: 0
      filesystem: threads currently blocked by full disk: 0
      filesystem: number of operations rejected by enospc prevention (red zone): 0
      filesystem: most recent disk full: Thu Jan  1 02:00:00 1970
      filesystem: number of write operations that returned ENOSPC: 0
      filesystem: fsync time: 9791372
      filesystem: fsync count: 331
      filesystem: long fsync time: 0
      filesystem: long fsync count: 0
      context: tree traversals blocked by a full fetch: 0
      context: tree traversals blocked by a partial fetch: 0
      context: tree traversals blocked by a full eviction: 0
      context: tree traversals blocked by a partial eviction: 0
      context: tree traversals blocked by a message injection: 0
      context: tree traversals blocked by a message application: 0
      context: tree traversals blocked by a flush: 0
      context: tree traversals blocked by a the cleaner thread: 0
      context: tree traversals blocked by something uninstrumented: 0
      context: promotion blocked by a full fetch (should never happen): 0
      context: promotion blocked by a partial fetch (should never happen): 0
      context: promotion blocked by a full eviction (should never happen): 0
      context: promotion blocked by a partial eviction (should never happen): 0
      context: promotion blocked by a message injection: 0
      context: promotion blocked by a message application: 0
      context: promotion blocked by a flush: 0
      context: promotion blocked by the cleaner thread: 0
      context: promotion blocked by something uninstrumented: 0
      context: something uninstrumented blocked by something uninstrumented: 0
       
      Memory usage:
      Arena 0:
      system bytes     =          0
      in use bytes     =          0
      Total (incl. mmap):
      system bytes     =          0
      in use bytes     =          0
      max mmap regions =          0
      max mmap bytes   =          0
      160704  1:17:13 [ERROR] mysqld got signal 6 ;
      

Stack trace from 10.0 (commit 0fdb17e6c3f50ae22eb97b6363bcbd8b0cd9e040)

      #5  0x00007fc104bb8b8b in abort () from /lib/x86_64-linux-gnu/libc.so.6
      #6  0x00007fc0fd568dbf in toku_do_backtrace_abort () at /home/elenst/git/10.0/storage/tokudb/PerconaFT/portability/toku_assert.cc:147
      #7  0x00007fc0fd568ebd in toku_do_assert_fail (expr_as_string=0x7fc0e0bfd280 "Handlerton: _use_count != 0 ", function=0x7fc0fd5709e9 <TOKUDB_SHARE::release()::__FUNCTION__> "release", file=0x7fc0fd56c378 "/home/elenst/git/10.0/storage/tokudb/ha_tokudb.cc", line=315, caller_errno=0) at /home/elenst/git/10.0/storage/tokudb/PerconaFT/portability/toku_assert.cc:168
      #8  0x00007fc0fd46fe60 in env_crash (db_env=0x7fc104026280, msg=0x7fc0e0bfd280 "Handlerton: _use_count != 0 ", fun=0x7fc0fd5709e9 <TOKUDB_SHARE::release()::__FUNCTION__> "release", file=0x7fc0fd56c378 "/home/elenst/git/10.0/storage/tokudb/ha_tokudb.cc", line=315, caller_errno=0) at /home/elenst/git/10.0/storage/tokudb/PerconaFT/src/ydb.cc:2338
      #9  0x00007fc0fd455fd2 in toku_hton_assert_fail (expr_as_string=0x7fc0fd56c4b8 "_use_count != 0", fun=0x7fc0fd5709e9 <TOKUDB_SHARE::release()::__FUNCTION__> "release", file=0x7fc0fd56c378 "/home/elenst/git/10.0/storage/tokudb/ha_tokudb.cc", line=315, caller_errno=0) at /home/elenst/git/10.0/storage/tokudb/hatoku_hton.cc:162
      #10 0x00007fc0fd428180 in TOKUDB_SHARE::release (this=0x7fc0eadcc070) at /home/elenst/git/10.0/storage/tokudb/ha_tokudb.cc:315
      #11 0x00007fc0fd443d99 in tokudb::analyze::standard_t::on_destroy (this=0x7fc0eadc38b0) at /home/elenst/git/10.0/storage/tokudb/ha_tokudb_admin.cc:470
      #12 0x00007fc0fd45f3fd in tokudb::background::job_manager_t::job_t::destroy (this=0x7fc0eadc38b0) at /home/elenst/git/10.0/storage/tokudb/tokudb_background.h:189
      #13 0x00007fc0fd462056 in tokudb::background::job_manager_t::job_t::cancel (this=0x7fc0eadc38b0) at /home/elenst/git/10.0/storage/tokudb/tokudb_background.h:186
      #14 0x00007fc0fd461987 in tokudb::background::job_manager_t::cancel (this=0x7fc10402da70, job=0x7fc0eadc38b0) at /home/elenst/git/10.0/storage/tokudb/tokudb_background.cc:236
      #15 0x00007fc0fd460fee in tokudb::background::job_manager_t::destroy (this=0x7fc10402da70) at /home/elenst/git/10.0/storage/tokudb/tokudb_background.cc:71
      #16 0x00007fc0fd461a22 in tokudb::background::destroy () at /home/elenst/git/10.0/storage/tokudb/tokudb_background.cc:247
      #17 0x00007fc0fd4577ac in tokudb_end (hton=0x7fc10405de70, type=HA_PANIC_CLOSE) at /home/elenst/git/10.0/storage/tokudb/hatoku_hton.cc:665
      #18 0x0000000000861f79 in ha_finalize_handlerton (plugin=0x7fc0ff7f97b0) at /home/elenst/git/10.0/sql/handler.cc:440
      #19 0x0000000000673b32 in plugin_deinitialize (plugin=0x7fc0ff7f97b0, ref_check=true) at /home/elenst/git/10.0/sql/sql_plugin.cc:1176
      #20 0x0000000000673ef8 in reap_plugins () at /home/elenst/git/10.0/sql/sql_plugin.cc:1249
      #21 0x0000000000675d7f in plugin_shutdown () at /home/elenst/git/10.0/sql/sql_plugin.cc:1900
      #22 0x000000000059abff in clean_up (print_message=true) at /home/elenst/git/10.0/sql/mysqld.cc:2023
      #23 0x000000000059a9c1 in unireg_end () at /home/elenst/git/10.0/sql/mysqld.cc:1936
      #24 0x000000000059a8e3 in kill_server (sig_ptr=0x0) at /home/elenst/git/10.0/sql/mysqld.cc:1864
      #25 0x000000000059a900 in kill_server_thread (arg=0x7fc0ea7fed60) at /home/elenst/git/10.0/sql/mysqld.cc:1887
      #26 0x0000000000e25c9c in pfs_spawn_thread (arg=0x7fc0ea82c0f0) at /home/elenst/git/10.0/storage/perfschema/pfs.cc:1860
      #27 0x00007fc105d82e9a in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
      #28 0x00007fc104c72cbd in clone () from /lib/x86_64-linux-gnu/libc.so.6
      

      Note: current 10.2 is also affected.

Activity

            prohaska7 Rich Prohaska added a comment -

            The crash on mariadb occurs because a background job object is cancelled twice. This screws up the tokudb use count and causes the assert to fire. If the cancel logic is only run once, by checking if the background job object has already run the cancel logic, then the crash no longer occurs.

            On a side note, this test runs fine (at least 200 iterations) on percona server 5.7. I suspect that the tokudb handler interface is being used slightly differently on mariadb compared with percona server 5.7.
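The fix described above ("run the cancel logic only once, by checking if the background job object has already run it") can be sketched as a guard flag on the job. This is a minimal illustrative sketch, not the actual TokuDB code: the class name `job_t`, the member names, and the `release()` helper are invented here to mirror the `_use_count != 0` assertion from the report.

```cpp
#include <atomic>
#include <cassert>

// Hypothetical sketch: a background job whose cancel logic must run
// exactly once. Without the atomic guard, a second cancel() would call
// release() again and drive the share's use count below zero, firing
// the "Handlerton: _use_count != 0" assertion described in this report.
class job_t {
public:
    job_t() : _cancelled(false), _use_count(1) {}

    // Returns true only for the caller that actually performed the cancel;
    // any later caller sees the flag already set and does nothing.
    bool cancel() {
        bool expected = false;
        if (!_cancelled.compare_exchange_strong(expected, true))
            return false;          // cancel logic already ran: no-op
        release();                 // release the share exactly once
        return true;
    }

    int use_count() const { return _use_count.load(); }

private:
    void release() {
        int prev = _use_count.fetch_sub(1);
        // Mirrors the assertion at ha_tokudb.cc:315 in the report.
        assert(prev != 0 && "Handlerton: _use_count != 0");
    }

    std::atomic<bool> _cancelled;  // set once, on the first cancel()
    std::atomic<int>  _use_count;  // reference count on the shared object
};
```

With this guard in place, the double-cancel path from the stack trace (cancel during `job_manager_t::destroy` after the job already cancelled itself) becomes harmless: the second call returns false instead of decrementing the use count again.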

            prohaska7 Rich Prohaska added a comment -

            Upstream tokudb bug described in https://tokutek.atlassian.net/browse/DB-1004.


elenst Elena Stepanova added a comment -

Another variation:
            http://buildbot.askmonty.org/buildbot/builders/bld-dan-debug/builds/3489/steps/test/logs/stdio

            ***Warnings generated in error logs during shutdown after running tests: tokudb.background_job_manager
             
            /opt/buildbot-slave/mariadb/dan_demeter1/build/storage/tokudb/tokudb_thread.h:251 lock: Assertion `Handlerton: r == 0 ' failed (errno=0)
            Env panic string: /opt/buildbot-slave/mariadb/dan_demeter1/build/storage/tokudb/tokudb_thread.h:251 lock: Assertion `Handlerton: r == 0 ' failed (errno=0)
            Attempting backtrace. You can use the following information to find out
            

            prohaska7 Rich Prohaska added a comment - - edited

This branch (https://github.com/prohaska7/mariadb-server/tree/mdev-10327) contains a preliminary fix from George Lorch @ Percona, and has successfully run hundreds of iterations of the background_job_manager test on mariadb. DB-1004 has a more complete bug fix.


            People

              serg Sergei Golubchik
              elenst Elena Stepanova
