We noticed after upgrading from MySQL 5.5 to MariaDB 10.1 that looping over a table through the C API does not return all rows. It only happens on a sufficiently large table.
Attached are a small Python script that produces such a table and a C program that triggers the bug. Running {{make}} compiles the C file and produces a file {{setup.sql}}, which you can import into your database. Once it is imported, {{MYSQLPASSWORD= ./main -s}} (other flags might be needed; see {{./main -h}}) shows the number of rows received, which should of course agree with {{SELECT COUNT(*) FROM foo;}} from the mysql client.
Sergei Golubchik
added a comment - edited

What happens here:
a large table (1×10⁶ rows)
main.c prepares a simple SELECT columns FROM table
and executes it with the CURSOR_TYPE_READ_ONLY flag
on the server this goes to mysql_open_cursor() that creates a Select_materialize object
Select_materialize::send_result_set_metadata() invokes create_result_table() with keep_row_order=TRUE
before my commit (referenced above) keep_row_order caused Aria tables to be created with the DYNAMIC record format
with the google encryption patch Aria can preserve the row order in the BLOCK format too
so now all temporary tables use BLOCK format to avoid leaking unencrypted data to disk (and BLOCK is faster anyway)
in the debug build that query hits an assert in the {{allocate_head()}} function in {{ma_bitmap.c}}:
{code:c}
if (insert_order)
{
  uint last_insert_page= share->last_insert_page;
  uint byte= 6 * (last_insert_page / 16);
  first_pattern= last_insert_page % 16;
  DBUG_ASSERT(data + byte < end);
  data+= byte;
}
{code}
Michael Widenius
added a comment

The problem was that insert order (enforced by the optimizer) did not handle the case where the bitmap changed to a new one.
Fixed by remembering the last bitmap page used and forcing its use when inserting new rows.