MariaDB Server / MDEV-28803

ERROR 1206 (HY000): The total number of locks exceeds the lock table size

Details

    Description

      SET GLOBAL innodb_buffer_pool_size=12*1024*1024;
      CREATE TABLE t1 (d DOUBLE);
      INSERT INTO t1 VALUES (0x0061),(0x0041),(0x00E0),(0x00C0),(0x1EA3),(0x1EA2),(0x00E3),(0x00C3),(0x00E1),(0x00C1),(0x1EA1),(0x1EA0);
      INSERT INTO t1 SELECT t1.* FROM t1,t1 t2,t1 t3,t1 t4,t1 t5,t1 t6;
      

      Will lead to:

      10.10.0 081a284712bb661349e2e3802077b12211cede3e (Optimized)

      10.10.0-opt>INSERT INTO t1 SELECT t1.* FROM t1,t1 t2,t1 t3,t1 t4,t1 t5,t1 t6;
      ERROR 1206 (HY000): The total number of locks exceeds the lock table size
      

      In 10.6 to 10.10 only.
      Note that in this ticket, unlike MDEV-28800, no initial innodb-buffer-pool-size is set, though the buffer pool is resized in the same way as in MDEV-28800.
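      For reference, the cross join in the test case multiplies the 12 base rows once per table alias, so the row count (and, roughly, the record-lock count) grows exponentially. A quick sketch of the arithmetic (illustrative only; actual lock memory per row is an InnoDB implementation detail):

      ```python
      # Each alias of t1 in the cross join contributes 12 rows, so an
      # n-alias self-join inserts 12**n rows, each one taking a record lock.
      base_rows = 12

      six_way = base_rows ** 6    # the 6-alias INSERT ... SELECT in this ticket
      seven_way = base_rows ** 7  # the 7-alias variant used in later comments

      print(six_way)    # 2985984
      print(seven_way)  # 35831808
      ```

      The 2985984 figure matches the "Query OK, 2985984 rows affected" result quoted for 10.5 further down.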

          Activity

            I think that this and MDEV-28800 are siblings of each other. There shouldn’t be any difference between 10.8, 10.9 and 10.10 in InnoDB that would explain why we sometimes notice that the record locks are consuming too much memory, and sometimes do not.

            I can confirm that on my system, a non-debug build of 10.8 62419b1733042c30414a4feed89c79aebb5621af will only allow the file t1.ibd to grow to 28MiB, which is the same result that I got on a 10.10 32edabd1f2fa0cf9b2cf41f326d399ef0348fa30 debug build. On a non-debug build of the same 10.10 revision, the file grew to 29 MiB before the message was output.

            In MDEV-28800, it is implied that locks are not consuming so much memory in 10.7. There have been no changes to the InnoDB locking subsystem since version 10.6. Hence, the reason for this regression must be somewhere outside InnoDB. But I do not see any such regression:

            10.7 fe75e5e5b1c5856fdfc9bd97265ba6ebe272f549

            mysqltest: At line 8: query 'INSERT INTO t1 SELECT t1.* FROM t1,t1 t2,t1 t3,t1 t4,t1 t5,t1 t6' failed: <Unknown> (2013): Lost connection to server during query
            …
            2022-06-13  9:51:14 0 [Note] InnoDB: Completed to resize buffer pool from 8388608 to 16777216.
            2022-06-13  9:51:14 0 [Note] InnoDB: Completed resizing buffer pool at 220613  9:51:14.
            2022-06-13  9:51:19 4 [Warning] InnoDB: Over 67 percent of the buffer pool is occupied by lock heaps or the adaptive hash index! Check that your transactions do not set too many row locks. innodb_buffer_pool_size=15M. Starting the InnoDB Monitor to print diagnostics.
            2022-06-13  9:51:19 4 [Warning] InnoDB: Difficult to find free blocks in the buffer pool (21 search iterations)! 21 failed attempts to flush a page! Consider increasing innodb_buffer_pool_size. Pending flushes (fsync) log: 0; buffer pool: 0. 231 OS file reads, 2707 OS file writes, 242 OS fsyncs.
            2022-06-13  9:51:27 4 [ERROR] [FATAL] InnoDB: Over 95 percent of the buffer pool is occupied by lock heaps or the adaptive hash index! Check that your transactions do not set too many row locks, or review if innodb_buffer_pool_size=15M could be bigger.
            220613  9:51:27 [ERROR] mysqld got signal 6 ;
            

            The test case that I used was as follows:

            --source include/have_innodb.inc
            SET GLOBAL innodb_buffer_pool_size=12*1024*1024;
             
            CREATE TABLE t1 (d DOUBLE) ENGINE=InnoDB;
             
            INSERT INTO t1 VALUES (0x0061),(0x0041),(0x00E0),(0x00C0),(0x1EA3),(0x1EA2),(0x00E3),(0x00C3),(0x00E1),(0x00C1),(0x1EA1),(0x1EA0);
             
            INSERT INTO t1 SELECT t1.* FROM t1,t1 t2,t1 t3,t1 t4,t1 t5,t1 t6;
            

            cd mysql-test
            ./mtr innodb.name_of_test
            

            Please double-check the affected versions.

            The excessive record locking would be fixed by MDEV-24813. It is not fixable in the InnoDB subsystem itself.

            marko Marko Mäkelä added a comment

            Double-checked the affected versions. The issue only appears on optimized builds; earlier testing had included debug builds, on which the issue does not show. Only optimized builds, and only 10.8-10.10, are affected.

            Roel Roel Van de Paar added a comment

            I do not think that anything has been changed in InnoDB transactional locks since version 10.6. In MDEV-28800, I posted a failure that I reproduced on 10.6.

            I think that there is a design issue that the transactional lock objects will be allocated from the buffer pool. If an excessive number of record locks is being created, InnoDB will report an error or crash due to running out of buffer pool. That an error is being reported is a nice thing, compared to a crash. What exactly do you expect to be fixed in the scope of this ticket?

            marko Marko Mäkelä added a comment
            Roel Roel Van de Paar added a comment (edited)

            Your comments made me think further and helped me understand why this currently seems to show on 10.8-10.10 only: MDEV-25342, i.e. the lower default-possible value for innodb_buffer_pool_size (due to a reduction of innodb_buffer_pool_chunk_size); thanks for the pointer, danblack. So, with the first query in the test case, the actual buffer pool in 10.8+ becomes 12582912 bytes, whereas before 10.8 it becomes 134217728 bytes (more than 10x as much).
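            The rounding behind those two numbers can be sketched as follows. This is an illustrative model only (the effective_pool_size helper is hypothetical, not server code), assuming the requested pool size is rounded up to a whole number of innodb_buffer_pool_chunk_size units, and a 128M chunk size before 10.8 as the figures in this comment suggest:

            ```python
            def effective_pool_size(requested: int, chunk_size: int) -> int:
                """Round the requested buffer pool size up to a whole number
                of chunks (illustrative model, not actual server code)."""
                chunks = -(-requested // chunk_size)  # ceiling division
                return chunks * chunk_size

            MiB = 1024 * 1024
            # 10.8+: 12M is already a multiple of the 2M chunk size, so it is kept:
            print(effective_pool_size(12 * MiB, 2 * MiB))    # 12582912
            # Before 10.8: 12M is rounded up to one full 128M chunk:
            print(effective_pool_size(12 * MiB, 128 * MiB))  # 134217728
            ```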

            I tested the same on 10.6 (and 10.7) using --innodb_buffer_pool_chunk_size=2097152 and in that case, the error can be produced there also:

            10.6.9 05d049bdbe6814aee8f011fbd0d915f9d82a30ee (Optimized)

            10.6.9-opt>INSERT INTO t1 SELECT t1.* FROM t1,t1 t2,t1 t3,t1 t4,t1 t5,t1 t6;
            ERROR 1206 (HY000): The total number of locks exceeds the lock table size
            

            However, on 10.5 (and 10.3, 10.4), the query succeeds (using the same setup):

            10.5.17 2840d7750db11a8d2ab3f212a05f5afefaef6d4d (Optimized)

            10.5.17-opt>INSERT INTO t1 SELECT t1.* FROM t1,t1 t2,t1 t3,t1 t4,t1 t5,t1 t6;
            Query OK, 2985984 rows affected (42.419 sec)
            Records: 2985984  Duplicates: 0  Warnings: 0
            

            So the action item for this ticket is: why do 10.6-10.10 fail with the error, whereas 10.3-10.5 process the query fine, when using the same innodb_buffer_pool_chunk_size and innodb_buffer_pool_size?

            Roel Roel Van de Paar added a comment (edited)

            Found one more odd difference. When starting 10.8 to 10.10 (all debug, and using CLI) with --innodb_buffer_pool_chunk_size=2097152, only 10.10 will produce the ERROR 1206 error. 10.8 and 10.9 will continue processing the query.


            The buffer pool resizing is inaccurate and somewhat nondeterministic by design. I do not think that it can be fixed easily, other than by reimplementing the buffer pool resizing in a different way: Allocate a reasonable maximum amount of contiguous 64-bit virtual address space for the buffer pool, but only map the requested amount of memory for it. I have discussed it with danblack in the past.

            I think that what is needed to address this bug report is to ensure that the crashes (MDEV-28800) are avoided, and that the error message is refined to mention innodb_buffer_pool_size, because the explicit record locks will be allocated from the InnoDB buffer pool. It is a reasonable design, only the enforcement of the maximum allocation size is sloppy and the error message is not helpful for a user.

            marko Marko Mäkelä added a comment

            Understood, thank you.

            Roel Roel Van de Paar added a comment
            danblack Daniel Black added a comment

            Can't reproduce this any more as an MTR test with 12M.

            Like MDEV-33324, it is reproducible with a 5M InnoDB buffer pool size.

            10.6.20-MariaDB source revision a68e74b5a450c9de5b6b9459fd60e36a2fb0545c
            11.6.2-MariaDB source revision b7bca3ff71615ab918410f02ffae74f8d66ff03f

            Using a 5M pool with the lock_memory.test test added in MDEV-28800, (POOL_SIZE - (FREE_BUFFERS + DATABASE_PAGES)) was left at 9 pages, as the test expects.


            Issue reproducible on current 10.6, 10.11, 11.7 optimized builds with this CLI testcase:

            # mysqld options required for replay: --innodb_buffer_pool_chunk_size=2097152
            SET GLOBAL innodb_buffer_pool_size=12*1024*1024;
            CREATE TABLE t1 (d DOUBLE);
            INSERT INTO t1 VALUES (0x0061),(0x0041),(0x00E0),(0x00C0),(0x1EA3),(0x1EA2),(0x00E3),(0x00C3),(0x00E1),(0x00C1),(0x1EA1),(0x1EA0);
            INSERT INTO t1 SELECT t1.* FROM t1,t1 t2,t1 t3,t1 t4,t1 t5,t1 t6,t1 t7;
            

            CS 10.6.20 cd97caef84a842cf388866cfc0a0ec32b86a9c13 (Optimized)

            10.6.20-opt>INSERT INTO t1 SELECT t1.* FROM t1,t1 t2,t1 t3,t1 t4,t1 t5,t1 t6,t1 t7;
            ERROR 1206 (HY000): The total number of locks exceeds the lock table size
            

            CS 10.11.10 8a6a4c947a0ca3d2fdca752d7440bdc5c6c83e37 (Optimized)

            10.11.10-opt>INSERT INTO t1 SELECT t1.* FROM t1,t1 t2,t1 t3,t1 t4,t1 t5,t1 t6,t1 t7;
            ERROR 1206 (HY000): The total number of locks exceeds the lock table size
            

            CS 11.7.0 4016c905cbabea7f29ed282dc2125254c7c0d419 (Optimized)

            11.7.0-opt>INSERT INTO t1 SELECT t1.* FROM t1,t1 t2,t1 t3,t1 t4,t1 t5,t1 t6,t1 t7;
            ERROR 1206 (HY000): The total number of locks exceeds the lock table size
            

            All versions produce output similar to the following in the error log:

            CS 11.7.0 4016c905cbabea7f29ed282dc2125254c7c0d419 (Optimized)

            2024-10-21  6:47:04 0 [Note] /test/MD141024-mariadb-11.7.0-linux-x86_64-opt/bin/mariadbd: ready for connections.
            Version: '11.7.0-MariaDB'  socket: '/test/MD141024-mariadb-11.7.0-linux-x86_64-opt/socket.sock'  port: 11392  MariaDB Server
            2024-10-21  6:47:21 0 [Note] InnoDB: Resizing buffer pool from 128.000MiB to 12.000MiB (unit = 2.000MiB).
            2024-10-21  6:47:21 0 [Note] InnoDB: Disabling adaptive hash index.
            2024-10-21  6:47:21 0 [Note] InnoDB: Withdrawing blocks to be shrunken.
            2024-10-21  6:47:21 0 [Note] InnoDB: Start to withdraw the last 7308 blocks.
            2024-10-21  6:47:21 0 [Note] InnoDB: Withdrawing blocks. (7308/7308).
            2024-10-21  6:47:21 0 [Note] InnoDB: Withdrew 7308 blocks from free list. Tried to relocate 0 blocks (7308/7308).
            2024-10-21  6:47:21 0 [Note] InnoDB: Withdrawn target: 7308 blocks.
            2024-10-21  6:47:21 0 [Note] InnoDB: Latching entire buffer pool.
            2024-10-21  6:47:21 0 [Note] InnoDB: Resizing buffer pool from 64 chunks to 6 chunks.
            2024-10-21  6:47:21 0 [Note] InnoDB: 58 Chunks (7308 blocks) were freed.
            2024-10-21  6:47:21 0 [Note] InnoDB: Resizing other hash tables.
            2024-10-21  6:47:21 0 [Note] InnoDB: Resized hash tables: lock_sys, adaptive hash index, and dictionary.
            2024-10-21  6:47:21 0 [Note] InnoDB: Completed resizing buffer pool from 134217728 to 12582912 bytes.
            2024-10-21  6:47:56 4 [Warning] InnoDB: Over 67 percent of the buffer pool is occupied by lock heaps or the adaptive hash index! Check that your transactions do not set too many row locks. innodb_buffer_pool_size=11M. Starting the InnoDB Monitor to print diagnostics.
             
            =====================================
            2024-10-21 06:47:56 0x14a5966006c0 INNODB MONITOR OUTPUT
            =====================================
            

            Not reproducible in MTR.

            Roel Roel Van de Paar added a comment
            Roel Roel Van de Paar added a comment (edited)

            Using the testcase from the last comment, the issue is reproducible from 10.5 to 11.7 (current release versions tested), as well as in MySQL 5.7, 8.0 and 9.1. It is also reproducible in MySQL 5.5 and 5.6 (where --innodb_buffer_pool_chunk_size is not supported) by setting --innodb_buffer_pool_size=12582912 at server startup (the variable is not dynamic there).

            Current summary

            Marko mentioned earlier "I think that what is needed to address this bug report is ... (removed already fixed item) ... and that the error message is refined to mention innodb_buffer_pool_size, because the explicit record locks will be allocated from the InnoDB buffer pool. It is a reasonable design, only the enforcement of the maximum allocation size is sloppy and the error message is not helpful for a user." which looks to be the best way forward for this ticket.

            (And the excessive locking issue remains as MDEV-24813)


            People

              marko Marko Mäkelä
              Roel Roel Van de Paar