[MDEV-24188] Hang in buf_page_create() after reusing a previously freed page Created: 2020-11-10  Updated: 2021-04-19  Resolved: 2020-11-13

Status: Closed
Project: MariaDB Server
Component/s: Storage Engine - InnoDB
Affects Version/s: 10.2.35, 10.2.36, 10.3.26, 10.3.27, 10.4.16, 10.4.17, 10.5.7, 10.5.8
Fix Version/s: 10.2.37, 10.3.28, 10.4.18, 10.5.9

Type: Bug Priority: Blocker
Reporter: Matthias Leich Assignee: Marko Mäkelä
Resolution: Fixed Votes: 0
Labels: hang, regression, rr-profile

Attachments: File MDEV-24182.cc     File MDEV-24182.yy     File MDEV-24182.zz     PNG File screenshot-1.png    
Issue Links:
Blocks
Duplicate
duplicates MDEV-24504 [FATAL] InnoDB: Semaphore wait has la... Closed
is duplicated by MDEV-24375 Semaphore wait has lasted > 600 seconds Closed
Problem/Incident
is caused by MDEV-23456 fil_space_crypt_t::write_page0() is a... Closed
Relates
relates to MDEV-22456 Dropping the adaptive hash index may ... Closed
relates to MDEV-23452 Assertion `buf_page_get_io_fix(bpage)... Closed
relates to MDEV-24227 mysql_install_db hangs for ~45 minutes Closed
relates to MDEV-24829 10.5.8 fails to startup on approx 10%... Closed

 Description   

Work flow:
1. Start the server
2. One session creates a table, switches Autocommit off and starts to insert records into the table.
     There is no concurrent activity in the server.
3. After fewer than 100 (?) records have been inserted (it looks as if there was no commit or rollback), InnoDB reports
     [ERROR] [FATAL] InnoDB: Semaphore wait has lasted > 300 seconds.
     and the server aborts.
     
origin/bb-10.2-MDEV-24182 2b62e15e479aca04326b40e3090c97cdd2f0c1c3 2020-11-10
Per Marko: It could be that this bug does not exist in 10.5.
 
 
RQG
====
git clone https://github.com/mleich1/rqg --branch experimental RQG
 
perl rqg.pl \
--grammar=MDEV-24182.yy \
--gendata=MDEV-24182.zz \
--mysqld=--innodb_use_native_aio=1 \
--mysqld=--innodb_lock_schedule_algorithm=fcfs \
--mysqld=--loose-idle_write_transaction_timeout=0 \
--mysqld=--loose-idle_transaction_timeout=0 \
--mysqld=--loose-idle_readonly_transaction_timeout=0 \
--mysqld=--connect_timeout=60 \
--mysqld=--interactive_timeout=28800 \
--mysqld=--slave_net_timeout=60 \
--mysqld=--net_read_timeout=30 \
--mysqld=--net_write_timeout=60 \
--mysqld=--loose-table_lock_wait_timeout=50 \
--mysqld=--wait_timeout=28800 \
--mysqld=--lock-wait-timeout=86400 \
--mysqld=--innodb-lock-wait-timeout=50 \
--no-mask \
--queries=10000000 \
--seed=random \
--reporters=Backtrace \
--reporters=ErrorLog \
--reporters=Deadlock1 \
--validators=None \
--mysqld=--log_output=none \
--mysqld=--log-bin \
--mysqld=--log_bin_trust_function_creators=1 \
--mysqld=--loose-debug_assert_on_not_freed_memory=0 \
--engine=InnoDB \
--sqltrace=MarkErrors \
--restart_timeout=120 \
--mysqld=--plugin-load-add=file_key_management.so \
--mysqld=--loose-file-key-management-filename=/RQG/conf/mariadb/encryption_keys.txt \
--duration=300 \
--mysqld=--loose-innodb_fatal_semaphore_wait_threshold=300 \
--mysqld=--innodb_stats_persistent=off \
--mysqld=--loose-max-statement-time=30 \
--threads=2 \
--mysqld=--innodb_page_size=4K \
--mysqld=--innodb-buffer-pool-size=8M \
--duration=300 \
--no_mask \
--workdir=<local settings> \
--vardir=<local settings> \
--mtr-build-thread=<local settings> \
--basedir1=<local settings> \
--script_debug=_nix_ \
--rr=Extended \
--rr_options=--chaos
 
Please note that many of the values set had zero impact on the failing RQG run,
because they take effect in a phase of the workflow that was not reached at all.
Examples:
--grammar=MDEV-24182.yy \
--queries=10000000 \
--threads=2 \
--seed=random \
--reporters=...
But the tool rqg.pl might insist on such values being set.



 Comments   
Comment by Matthias Leich [ 2020-11-10 ]

sdp:/RQG/storage/1605011175/tmp/dev/shm/vardir/1605011175/158/1/rr
_RR_TRACE_DIR="." rr replay --mark-stdio
 
Per Marko:
#0  buf_page_set_io_fix (io_fix=BUF_IO_WRITE, bpage=0x5f92090d02c0)
    at /Server/bb-10.2-MDEV-24182/storage/innobase/include/buf0buf.ic:547
#1  buf_flush_page (buf_pool=buf_pool@entry=0x61b000001580, 
    bpage=bpage@entry=0x5f92090d02c0, 
    flush_type=flush_type@entry=BUF_FLUSH_LIST, sync=sync@entry=false)
    at /Server/bb-10.2-MDEV-24182/storage/innobase/buf/buf0flu.cc:1184
#2  0x000055a55e22daae in buf_flush_try_neighbors (page_id=..., 
    flush_type=flush_type@entry=BUF_FLUSH_LIST, n_flushed=<optimized out>, 
    n_to_flush=n_to_flush@entry=200)
    at /Server/bb-10.2-MDEV-24182/storage/innobase/buf/buf0flu.cc:1458
#3  0x000055a55e22e84f in buf_flush_page_and_try_neighbors (
    bpage=bpage@entry=0x5f92091196d0, 
    flush_type=flush_type@entry=BUF_FLUSH_LIST, 
    n_to_flush=n_to_flush@entry=200, count=count@entry=0x440a6bae6f20)
    at /Server/bb-10.2-MDEV-24182/storage/innobase/buf/buf0flu.cc:1530
#4  0x000055a55e230f9b in buf_do_flush_list_batch (
    buf_pool=buf_pool@entry=0x61b000001580, min_n=min_n@entry=200, 
    lsn_limit=lsn_limit@entry=18446744073709551615)
    at /Server/bb-10.2-MDEV-24182/storage/innobase/buf/buf0flu.cc:1786
#5  0x000055a55e231a27 in buf_flush_batch (
    buf_pool=buf_pool@entry=0x61b000001580, 
    flush_type=flush_type@entry=BUF_FLUSH_LIST, min_n=min_n@entry=200, 
    lsn_limit=lsn_limit@entry=18446744073709551615, n=n@entry=0x440a6bae7560)
 
Current event: 952443

Comment by Marko Mäkelä [ 2020-11-10 ]

To me, this looked like a deadlock between the page flush (which io-fixes the page) and buf_page_create(), which will remain in the following loop. Something else had already acquired block->lock in exclusive mode. I did not check what that was: I now see that the page latch would be acquired after the wait loop:

		case BUF_BLOCK_FILE_PAGE:
			buf_block_fix(block);
			const int32_t num_fix_count =
				mtr->get_fix_count(block) + 1;
			buf_page_mutex_enter(block);
			while (buf_block_get_io_fix(block) != BUF_IO_NONE
			       || (num_fix_count
				   != block->page.buf_fix_count)) {
				buf_page_mutex_exit(block);
				buf_pool_mutex_exit(buf_pool);
				rw_lock_x_unlock(hash_lock);
 
				os_thread_yield();
 
				buf_pool_mutex_enter(buf_pool);
				rw_lock_x_lock(hash_lock);
				buf_page_mutex_enter(block);
			}
 
			rw_lock_x_lock(&block->lock);
			buf_page_mutex_exit(block);

In any case, both threads would remain blocked. It is possible that 10.5.7 is not affected by this, thanks to various changes (the latest one being MDEV-23855).

Comment by Marko Mäkelä [ 2020-11-12 ]

As far as I can tell, the following happened:

  1. Thread 1 write-fixes the page at event 952443.
  2. Thread 1 acquires SX-latch on the page at event 952443.
  3. Thread 12 releases the io-fix and the SX-latch, on write completion, at event 952526.

The problematic wait started in Thread 33 around the same time the io-fix was set by Thread 1. The wait condition in buf_page_create() that we implemented in MDEV-23456 must be inaccurate. I will have to single-step Thread 33 at the machine instruction level in order to see the values of the variables, because higher-level debugging information is not available at that spot due to optimizations.

I suspect that an even rarer variant of this hang might be possible. A mini-transaction that had previously freed a page might be reusing the page in buf_page_create() again. In this case, I did not find the block in mtr_t::m_memo. We do have the constraint that a mini-transaction must not acquire further page latches after allocating a page. That constraint could apply to freeing pages as well, but I have not checked that yet.

Comment by Marko Mäkelä [ 2020-11-12 ]

I made a mistake in my analysis and was looking at the wrong block. The correct block is indeed one that the mini-transaction had previously x-latched. The mtr->m_memo.m_first_block.m_data starts with the following:

{0x618000039310, 0x80, 0x5f92090c6cf0, 0x2, 0x5f92091337f0, 0x2, 0x5f9209096860, 0x2,

The block of interest is 0x5f9209096860, and the 0x2 is MTR_MEMO_PAGE_X_FIX. The page had been freed in our mini-transaction. We seem to have at least B-tree level 2 here (and innodb_page_size=4k to help with that).

#0  0x000055a55e1ee99f in buf_page_set_file_page_was_freed (page_id=...) at /Server/bb-10.2-MDEV-24182/storage/innobase/buf/buf0buf.cc:3605
#1  0x000055a55e3dd4c1 in fseg_free_page (seg_header=<optimized out>, space_id=4, page=4565, mtr=mtr@entry=0x6fb73d4e6f80) at /Server/bb-10.2-MDEV-24182/storage/innobase/include/buf0types.h:139
#2  0x000055a55e0ea00d in btr_page_free (index=index@entry=0x618000039108, block=block@entry=0x5f9209096860, mtr=mtr@entry=0x6fb73d4e6f80, blob=blob@entry=false)
    at /Server/bb-10.2-MDEV-24182/storage/innobase/include/buf0types.h:159
#3  0x000055a55e101dc7 in btr_compress (cursor=cursor@entry=0x6fb73d4e5140, adjust=adjust@entry=0, mtr=mtr@entry=0x6fb73d4e6f80) at /Server/bb-10.2-MDEV-24182/storage/innobase/btr/btr0btr.cc:3958
#4  0x000055a55e13e7f1 in btr_cur_compress_if_useful (cursor=cursor@entry=0x6fb73d4e5140, adjust=adjust@entry=0, mtr=mtr@entry=0x6fb73d4e6f80)
    at /Server/bb-10.2-MDEV-24182/storage/innobase/include/dict0dict.ic:1174
#5  0x000055a55e1632e1 in btr_cur_pessimistic_delete (err=err@entry=0x6fb73d4e4e20, has_reserved_extents=has_reserved_extents@entry=1, cursor=cursor@entry=0x6fb73d4e5140, flags=flags@entry=16, 
    rollback=rollback@entry=false, mtr=mtr@entry=0x6fb73d4e6f80) at /Server/bb-10.2-MDEV-24182/storage/innobase/btr/btr0cur.cc:5428
#6  0x000055a55e164343 in btr_cur_node_ptr_delete (parent=parent@entry=0x6fb73d4e5140, mtr=mtr@entry=0x6fb73d4e6f80) at /Server/bb-10.2-MDEV-24182/storage/innobase/btr/btr0cur.cc:5471
#7  0x000055a55e163c41 in btr_cur_pessimistic_delete (err=err@entry=0x6fb73d4e5470, has_reserved_extents=has_reserved_extents@entry=1, cursor=cursor@entry=0x6fb73d4e5750, flags=flags@entry=16, 
    rollback=rollback@entry=false, mtr=mtr@entry=0x6fb73d4e6f80) at /Server/bb-10.2-MDEV-24182/storage/innobase/btr/btr0cur.cc:5388
#8  0x000055a55e164343 in btr_cur_node_ptr_delete (parent=parent@entry=0x6fb73d4e5750, mtr=mtr@entry=0x6fb73d4e6f80) at /Server/bb-10.2-MDEV-24182/storage/innobase/btr/btr0cur.cc:5471
#9  0x000055a55e100cdf in btr_compress (cursor=cursor@entry=0x6fb73d4e6530, adjust=adjust@entry=0, mtr=mtr@entry=0x6fb73d4e6f80) at /Server/bb-10.2-MDEV-24182/storage/innobase/btr/btr0btr.cc:3719
#10 0x000055a55e13e7f1 in btr_cur_compress_if_useful (cursor=cursor@entry=0x6fb73d4e6530, adjust=adjust@entry=0, mtr=mtr@entry=0x6fb73d4e6f80)
    at /Server/bb-10.2-MDEV-24182/storage/innobase/include/dict0dict.ic:1174
#11 0x000055a55e1632e1 in btr_cur_pessimistic_delete (err=err@entry=0x6fb73d4e6460, has_reserved_extents=has_reserved_extents@entry=1, cursor=cursor@entry=0x6fb73d4e6530, flags=flags@entry=16, 
    rollback=rollback@entry=false, mtr=mtr@entry=0x6fb73d4e6f80) at /Server/bb-10.2-MDEV-24182/storage/innobase/btr/btr0cur.cc:5428
#12 0x000055a55e0e5d6e in btr_insert_into_right_sibling (flags=flags@entry=0, cursor=cursor@entry=0x6fb73d4e6bc0, offsets=offsets@entry=0x6fb73d4e6a30, heap=<optimized out>, tuple=tuple@entry=0x61b000037df8, 
    n_ext=n_ext@entry=0, mtr=<optimized out>) at /Server/bb-10.2-MDEV-24182/storage/innobase/btr/btr0btr.cc:2691
#13 0x000055a55e102be2 in btr_page_split_and_insert (flags=flags@entry=0, cursor=cursor@entry=0x6fb73d4e6bc0, offsets=offsets@entry=0x6fb73d4e6a30, heap=heap@entry=0x6fb73d4e6b10, 
    tuple=tuple@entry=0x61b000037df8, n_ext=<optimized out>, mtr=<optimized out>) at /Server/bb-10.2-MDEV-24182/storage/innobase/btr/btr0btr.cc:2800
#14 0x000055a55e13b6d6 in btr_cur_pessimistic_insert (flags=flags@entry=0, cursor=cursor@entry=0x6fb73d4e6bc0, offsets=offsets@entry=0x6fb73d4e6a30, heap=heap@entry=0x6fb73d4e6b10, 
    entry=entry@entry=0x61b000037df8, rec=rec@entry=0x6fb73d4e6a50, big_rec=<optimized out>, n_ext=0, thr=<optimized out>, mtr=<optimized out>) at /Server/bb-10.2-MDEV-24182/storage/innobase/btr/btr0cur.cc:3432
#15 0x000055a55ddec9f8 in row_ins_sec_index_entry_low (flags=flags@entry=0, mode=<optimized out>, mode@entry=33, index=index@entry=0x618000039108, offsets_heap=offsets_heap@entry=0x619003741580, 
    heap=heap@entry=0x61900373fc80, entry=entry@entry=0x61b000037df8, trx_id=<optimized out>, thr=<optimized out>) at /Server/bb-10.2-MDEV-24182/storage/innobase/row/row0ins.cc:3064
#16 0x000055a55ddf8eb8 in row_ins_sec_index_entry (index=index@entry=0x618000039108, entry=entry@entry=0x61b000037df8, thr=thr@entry=0x621000089ce8)
    at /Server/bb-10.2-MDEV-24182/storage/innobase/row/row0ins.cc:3229
#17 0x000055a55ddf92b1 in row_ins_index_entry (index=0x618000039108, entry=0x61b000037df8, thr=thr@entry=0x621000089ce8) at /Server/bb-10.2-MDEV-24182/storage/innobase/row/row0ins.cc:3262
#18 0x000055a55ddf9563 in row_ins_index_entry_step (node=node@entry=0x623000012280, thr=thr@entry=0x621000089ce8) at /usr/include/c++/9/bits/stl_iterator.h:819
#19 0x000055a55ddfa76f in row_ins (node=node@entry=0x623000012280, thr=thr@entry=0x621000089ce8) at /Server/bb-10.2-MDEV-24182/storage/innobase/row/row0ins.cc:3548
#20 0x000055a55ddfb288 in row_ins_step (thr=thr@entry=0x621000089ce8) at /Server/bb-10.2-MDEV-24182/storage/innobase/row/row0ins.cc:3668
#21 0x000055a55de46c89 in row_insert_for_mysql (mysql_rec=mysql_rec@entry=0x626000036128 "", prebuilt=0x623000011988) at /Server/bb-10.2-MDEV-24182/storage/innobase/row/row0mysql.cc:1411
#22 0x000055a55db0a61d in ha_innobase::write_row (this=0x61c0000448a8, record=<optimized out>) at /Server/bb-10.2-MDEV-24182/storage/innobase/handler/ha_innodb.cc:8193

We had X-latched the page much earlier.

I think that we must rewrite the MDEV-23456 fix and skip the wait if we are holding an X-latch on the page. We should probably assert that we are not holding anything other than an X-latch on the page. Holding only an SX-latch is not allowed (it is for allocation bitmap pages or index root pages only), and neither is a buffer-fix or S-latch. Here is where we set the I/O fix:

#0  buf_page_set_io_fix (io_fix=BUF_IO_WRITE, bpage=0x5f9209096860) at /Server/bb-10.2-MDEV-24182/storage/innobase/include/buf0buf.ic:547
#1  buf_flush_page (buf_pool=buf_pool@entry=0x61b000001580, bpage=bpage@entry=0x5f9209096860, flush_type=flush_type@entry=BUF_FLUSH_LIST, sync=sync@entry=false)
    at /Server/bb-10.2-MDEV-24182/storage/innobase/buf/buf0flu.cc:1184
#2  0x000055a55e22daae in buf_flush_try_neighbors (page_id=..., flush_type=flush_type@entry=BUF_FLUSH_LIST, n_flushed=<optimized out>, n_to_flush=n_to_flush@entry=200)
    at /Server/bb-10.2-MDEV-24182/storage/innobase/buf/buf0flu.cc:1458
#3  0x000055a55e22e84f in buf_flush_page_and_try_neighbors (bpage=bpage@entry=0x5f9209096860, flush_type=flush_type@entry=BUF_FLUSH_LIST, n_to_flush=n_to_flush@entry=200, count=count@entry=0x440a6bae6f20)
    at /Server/bb-10.2-MDEV-24182/storage/innobase/buf/buf0flu.cc:1530
#4  0x000055a55e230f9b in buf_do_flush_list_batch (buf_pool=buf_pool@entry=0x61b000001580, min_n=min_n@entry=200, lsn_limit=lsn_limit@entry=18446744073709551615)
    at /Server/bb-10.2-MDEV-24182/storage/innobase/buf/buf0flu.cc:1786

A little later, that thread would block, waiting for SX-latch on the block, while our buf_page_create() is waiting for the io-fix to be released. Hence, it is a deadlock (or livelock).

Comment by Marko Mäkelä [ 2020-11-12 ]

The wait loop was originally added in MDEV-23452 to fix a regression that was caused by MDEV-22456.

Edit: it looks like the MDEV-23456 fix makes it impossible to prevent this hang with

SET GLOBAL innodb_adaptive_hash_index=OFF;

Comment by Matthias Leich [ 2020-11-13 ]

The source trees
origin/bb-10.2-MDEV-24188 e62ed4c5b8fa566786700ac6ca58dfea0761ebfb 2020-11-13
origin/bb-10.5-MDEV-24188 e598f98d028dad53b5765d87230a5761f118caf8 2020-11-13
behaved well during RQG testing.
The bad effect mentioned above was not replayed again, and there were no new
bad effects related to the fix.

Comment by Marko Mäkelä [ 2020-11-13 ]

I pushed this to 10.2 and merged up to 10.5 immediately.

Comment by Olaf Buitelaar [ 2020-11-16 ]

I seem to suffer from the same issue;

2020-11-16 13:23:35 0 [Note] InnoDB: A semaphore wait:
--Thread 139696564532992 has waited at btr0cur.cc line 1480 for 122.00 seconds the semaphore:
SX-lock on RW-latch at 0x55943e5fbbb0 created in file dict0dict.cc line 2160
a writer (thread id 139627021195008) has reserved it in mode  SX
number of readers 0, waiters flag 1, lock_word: 10000000
Last time write locked in file dict0stats.cc line 1969
 
If the lock is held for more than 10 minutes (however, this happened only in 10.5.7; it has not yet been observed in 10.5.8), the server crashes;
2020-11-13 12:51:37 0 [ERROR] [FATAL] InnoDB: Semaphore wait has lasted > 600 seconds. We intentionally crash the server because it appears to be hung.
201113 12:51:37 [ERROR] mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
 
To report this bug, see https://mariadb.com/kb/en/reporting-bugs
 
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
 
Server version: 10.5.7-MariaDB-1:10.5.7+maria~focal-log
key_buffer_size=12582912
read_buffer_size=131072
max_used_connections=41
max_threads=1202
thread_count=43
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 2658343 K  bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
 
Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x0 thread_stack 0x49000
mysqld(my_print_stacktrace+0x32)[0x55cb65360742]
mysqld(handle_fatal_signal+0x485)[0x55cb64db7e95]

Any chance a release might be pushed forward to address this? Also, is there a configuration option to disable the forced shutdown after 10 minutes?

Comment by Marko Mäkelä [ 2020-11-16 ]

olafbuitelaar, thank you for the report. You can use

SET GLOBAL innodb_fatal_semaphore_wait_threshold=300;

to have the server abort in 5 minutes (300 seconds) instead of the default timeout. I do not think that it is useful to let the server continue in a livelocked state, because service to some connections will be denied, depending on which latches the hung buf_page_create() threads are holding. Eventually, all I/O threads would be blocked and nothing could be accessed. If the hung buf_page_create() is executed as part of a CREATE TABLE operation, then all other InnoDB threads will be blocked, waiting for the data dictionary latch.

Today, we double-checked that the wait loop that was originally added in MDEV-23452 is indeed necessary. If buf_page_create() did not check that no io-fix is set, it could acquire the block->lock in exclusive mode after another thread that is executing buf_flush_page() set the write-fix and released buf_pool.mutex but did not yet acquire the block->lock in shared-exclusive mode. Note: The io_fix is protected by the buffer pool mutex, which buf_page_create() is holding.

The hang was caused because in MDEV-23456 we did not add a condition before the wait loop: if the buf_page_create() thread already holds an exclusive latch on the block (because the page was freed earlier during the mini-transaction before being reallocated), we would wait in vain for the io-fix to be removed.

The probability of this hang can be reduced by configuring some parameters related to page flushing, but I do not think that it can be prevented completely.

Comment by Olaf Buitelaar [ 2020-11-16 ]

Thank you for your reply. If I can provide more information, please let me know. I'll try to tweak the parameters related to page flushing. We regularly use CREATE TABLE to create temporary tables.
I'm not sure if it's related, but since I upgraded from 10.4.14 to 10.5.7 (and on the 12th and 13th to 10.5.8), the row locking times seem to be very irregular;

Comment by Marko Mäkelä [ 2020-11-17 ]

olafbuitelaar, please be aware that due to MDEV-24096 it is not safe to use 10.5.7 if any indexed virtual columns exist (including any that were created automatically). I think that row locking occurs at a higher level, above the buffer pool and page latch layer. This bug causes a real hang (livelock) that will remain until the server is killed. If any transaction that is holding row locks is being blocked due to this livelock, then of course other transactions could end up waiting for those row locks longer (until a lock wait timeout terminates the waits). As far as I can tell, such affected row locks would never be released (until the entire server process is killed).

MDEV-24227 could be a duplicate of this report. Given that this hang seems to be relatively easy to reproduce, I think that we should consider issuing an unscheduled release.

Comment by Marko Mäkelä [ 2020-11-17 ]

MDEV-24227 turned out to be unrelated to this hang.

The scenario of this hang is that a page had been freed, a page write (or thanks to MDEV-15528, eviction) had been initiated, and the same page (with the same tablespace identifier and page number) was allocated and reused for something else. Until MDEV-12227 is fixed, we are unnecessarily writing pages of temporary tables to the data file. I think that repeatedly creating, populating and dropping InnoDB temporary tables should be a good way of reproducing this hang.

Generated at Thu Feb 08 09:28:06 UTC 2024 using Jira 8.20.16#820016-sha1:9d11dbea5f4be3d4cc21f03a88dd11d8c8687422.