|
Hi,
I missed migrating the configuration to the new install, and the XtraDB engine did not find the tables, presumably because the tablespace is on a different filesystem/path with innodb_file_per_table. After I put in the correct configuration
So, did you upgrade an existing installation, or did you create a new installation?
What kind of configuration was it that you missed migrating and that you later added?
At which point did you run initial mysql_upgrade?
Please describe step-by-step what exactly you did.
Thanks.
|
|
I was upgrading from a SuSE SLES 11 SP3 standard install with the following relevant configuration:
sql-mode = "TRADITIONAL"
innodb_data_home_dir = /var/lib/innodb/
innodb-file-per-table
innodb-file-format = Barracuda
innodb_buffer_pool_size = 47G
log-error = /var/log/mysqld.log
slow-query-log-file = /var/log/mysqld-slow.log
slow-query-log = on
innodb-flush-method = O_DIRECT
skip-external-locking
skip-name-resolve
After installing MariaDB 10.0.9 I forgot about the custom InnoDB tablespace path, started MariaDB, and ran mysql_upgrade, which reported that it couldn't find the shared tablespace for the individual InnoDB files. I restored the original configuration, restarted MariaDB, and re-ran mysql_upgrade with --force. The .ibd files for the missing tables exist, but apparently they are not registered in the correct shared tablespace, which shows up as errors like "Table 'mysql.innodb_index_stats' doesn't exist in engine". There is a related bug entry at http://bugs.mysql.com/bug.php?id=67179
I'm guessing I have to either delete the missing tables and possibly run mysql_install_db, or run ALTER TABLE ... IMPORT TABLESPACE for them, though I expected mysql_upgrade --force to handle it, since those tables were added in the upgrade.
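For anyone reading along, the IMPORT TABLESPACE route I'm thinking of would look roughly like this. This is only a sketch, using innodb_index_stats as the example table; I haven't verified that it works when a stale .frm predates the system-tablespace entry, and it requires innodb_file_per_table:

```sql
-- Sketch of the tablespace re-import idea (unverified on this setup).
-- Requires innodb_file_per_table; keep a copy of the original .ibd first.
ALTER TABLE mysql.innodb_index_stats DISCARD TABLESPACE;
-- ... copy the preserved innodb_index_stats.ibd back into datadir/mysql/ ...
ALTER TABLE mysql.innodb_index_stats IMPORT TABLESPACE;
```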
|
|
Oh, also: I'm using the official MariaDB-10 RHEL5 packages on the SuSE 11.3 machines, currently MariaDB-10.0.10-rhel5-x86_64-server.rpm. On other, identical servers that hasn't caused problems; I just forgot that this one had its shared tablespace on a different RAID, and forgot to move the configuration before upgrading, which is presumably the cause of the problem.
|
|
Thanks for the details, it is much clearer now.
I don't think that much can be done here.
Please consider what really happens with the system tables during your scenario:
- your pre-upgrade system tablespace sits in /var/lib/innodb/.
- you upgraded and started the server without pointing it at /var/lib/innodb. It could not find the old InnoDB system tablespace, so it created a new one in the datadir.
- you ran mysql_upgrade, and it created the system tables. A table's information is spread across three places: the .frm file (table definition), the .ibd file (table data), and an entry in the InnoDB system tablespace. So, for example, for the gtid_slave_pos table, mysql_upgrade created datadir/mysql/gtid_slave_pos.frm and datadir/mysql/gtid_slave_pos.ibd, and also updated the InnoDB system tablespace, which at that point was in the datadir.
- then you shut the server down and restarted it with the option pointing at the pre-upgrade system tablespace in /var/lib/innodb. So now you still have the .frm and .ibd files, but the system tablespace has no information about the newly created tables.
- you re-ran mysql_upgrade with --force. If there had been no trace of the table, it would have re-created it. But since the .frm file exists, the server thinks the table is there (you can see it, for example, by running SHOW TABLES IN mysql from the MySQL client). We cannot make mysql_upgrade re-create an existing table (even when it is technically possible, which is not quite the case here); that would be a disaster, because important data could be lost.
So mysql_upgrade leaves the table alone. But the .frm file points at InnoDB as the owner of the table, while InnoDB has no information about it, since the system tablespace it was written into was discarded. So the table cannot be used.
I think it is reasonable to require manual intervention from the user in this case, since manual intervention caused the trouble in the first place. The easiest remedy is to remove the .frm and .ibd files from the file system; mysql_upgrade --force will then re-create the tables without a problem.
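A sketch of that remedy, assuming a default datadir of /var/lib/mysql; the paths and service name are assumptions, so adjust them to your installation, and keep copies of the files you remove:

```shell
# Sketch only: datadir path and service name are assumptions, adjust as needed.
systemctl stop mariadb                            # stop the server first
cd /var/lib/mysql/mysql
mv gtid_slave_pos.frm gtid_slave_pos.ibd /tmp/    # move aside rather than delete
systemctl start mariadb
mysql_upgrade --force                             # re-creates the missing system tables
```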
The MySQL bug report you linked shows that Oracle also thinks there should be no automated recovery. As you can see, it describes two scenarios; the solution for the Windows part was not to damage the tablespace in the first place, and for the Linux part, not to forget to point at the right data location.
I will close the report for now, please comment to re-open if you disagree.
|
|
Folks,
I'm seeing the same issue in a recent case I'm working on. The whole story: the customer has a Percona Server 5.6, and we're moving it to a new MariaDB 10.1.21, a fresh setup on Debian. Due to the size of the databases in that PS instance, we decided to stream a backup with Xtrabackup/Innobackupex from PS into MariaDB, which of course carried over PS's structure for the mysql system database tables.
After bringing MariaDB up, it started complaining:
2017-03-30 7:00:11 140536679413504 [Warning] InnoDB: Cannot open table mysql/gtid_slave_pos from the internal data dictionary of InnoDB though the .frm file for the table exists. See http://dev.mysql.com/doc/refman/5.6/en/innodb-troubleshooting.html for how you can resolve the problem.
|
I ran mysql_upgrade, and then mysql_upgrade again with -f, expecting that to solve the issue, since mysql_upgrade is supposed to adjust existing tables and create new ones to make the instance compatible with the MySQL flavor in use. But it did nothing beyond a simple mysqlcheck (which is expected as well). After mysql_upgrade I restarted MariaDB, but that did not do anything about the missing table either.
I managed to configure replication and resume it, and the slave is catching up, but the SHOW SLAVE STATUS output now looks like this:
root@xyz [(none)] > show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Queueing master event to the relay log
Master_Host: mymasterland.com
Master_User: user
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.080032
Read_Master_Log_Pos: 50789050
Relay_Log_File: mysql-relay-bin.000002
Relay_Log_Pos: 28209607
Relay_Master_Log_File: mysql-bin.080028
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 1932
{color:red} Last_Error: Unable to load replication GTID slave state from mysql.gtid_slave_pos: Table 'mysql.gtid_slave_pos' doesn't exist in engine{color}
Skip_Counter: 0
Exec_Master_Log_Pos: 72128992
Relay_Log_Space: 287375796
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 123503
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 1932
{color:red} Last_SQL_Error: Unable to load replication GTID slave state from mysql.gtid_slave_pos: Table 'mysql.gtid_slave_pos' doesn't exist in engine{color}
Replicate_Ignore_Server_Ids:
Master_Server_Id: 203
Master_SSL_Crl:
Master_SSL_Crlpath:
Using_Gtid: No
Gtid_IO_Pos:
Replicate_Do_Domain_Ids:
Replicate_Ignore_Domain_Ids:
Parallel_Mode: conservative
1 row in set (0.00 sec)
|
|
Creating the table manually:
root@xyz [mysql] > CREATE TABLE IF NOT EXISTS `gtid_slave_pos` (
    -> `domain_id` int(10) unsigned NOT NULL,
    -> `sub_id` bigint(20) unsigned NOT NULL,
    -> `server_id` int(10) unsigned NOT NULL,
    -> `seq_no` bigint(20) unsigned NOT NULL,
    -> PRIMARY KEY (`domain_id`,`sub_id`)
    -> ) ENGINE=InnoDB DEFAULT CHARSET=latin1 COMMENT='Replication slave GTID position';
Query OK, 0 rows affected, 1 warning (0.00 sec)

root@xyz [mysql] > show warnings;
+-------+------+---------------------------------------+
| Level | Code | Message                               |
+-------+------+---------------------------------------+
| Note  | 1050 | Table 'gtid_slave_pos' already exists |
+-------+------+---------------------------------------+
1 row in set (0.00 sec)

root@xyz [mysql] > stop slave; start slave;
Query OK, 0 rows affected (0.00 sec)

Query OK, 0 rows affected, 1 warning (0.00 sec)

root@xyz [mysql] > show warnings;
+-------+------+------------------------------------------------------+
| Level | Code | Message                                              |
+-------+------+------------------------------------------------------+
| Error | 1932 | Table 'mysql.gtid_slave_pos' doesn't exist in engine |
+-------+------+------------------------------------------------------+
1 row in set (0.00 sec)
|
|
|
So, ... trapped.
|
|
A dirty trick that does it:
1. if the gtid_slave_pos.ibd is orphaned, without an associated .frm file, move it out of the mysql directory;
2. create the table:
[root@xyz /mysql/data/mysql]# mv gtid_slave_pos.ibd /tmp

root@xyz [mysql] > CREATE TABLE `gtid_slave_pos` ( `domain_id` int(10) unsigned NOT NULL, `sub_id` bigint(20) unsigned NOT NULL, `server_id` int(10) unsigned NOT NULL, `seq_no` bigint(20) unsigned NOT NULL, PRIMARY KEY (`domain_id`,`sub_id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 COMMENT='Replication slave GTID position';
Query OK, 0 rows affected (0.00 sec)

root@xyz [mysql] > show slave status\G
*************************** 1. row ***************************
Slave_IO_State:
Master_Host: ip
Master_User: repl
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.080283
Read_Master_Log_Pos: 99235519
Relay_Log_File: mysql-relay-bin.000100
Relay_Log_Pos: 6255811
Relay_Master_Log_File: mysql-bin.080077
Slave_IO_Running: No
Slave_SQL_Running: No
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 1932
Last_Error: Unable to load replication GTID slave state from mysql.gtid_slave_pos: Table 'mysql.gtid_slave_pos' doesn't exist in engine
Skip_Counter: 0
Exec_Master_Log_Pos: 8049665
Relay_Log_Space: 12886385816
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 1932
Last_SQL_Error: Unable to load replication GTID slave state from mysql.gtid_slave_pos: Table 'mysql.gtid_slave_pos' doesn't exist in engine
Replicate_Ignore_Server_Ids:
Master_Server_Id: 203
Master_SSL_Crl:
Master_SSL_Crlpath:
Using_Gtid: No
Gtid_IO_Pos:
Replicate_Do_Domain_Ids:
Replicate_Ignore_Domain_Ids:
Parallel_Mode: conservative
1 row in set (0.00 sec)

root@xyz [mysql] > stop slave; start slave;
Query OK, 0 rows affected (0.00 sec)

Query OK, 0 rows affected (0.00 sec)

root@xyz [mysql] > show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Queueing master event to the relay log
Master_Host: 72.52.238.151
Master_User: repl
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.080285
Read_Master_Log_Pos: 57407579
Relay_Log_File: mysql-relay-bin.000100
Relay_Log_Pos: 27394765
Relay_Master_Log_File: mysql-bin.080077
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 43858064
Relay_Log_Space: 12986599236
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 108834
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 203
Master_SSL_Crl:
Master_SSL_Crlpath:
Using_Gtid: No
Gtid_IO_Pos:
Replicate_Do_Domain_Ids:
Replicate_Ignore_Domain_Ids:
Parallel_Mode: conservative
1 row in set (0.00 sec)
|
After restarting MariaDB again just to make sure, we get a clean log:
2017-03-30 8:36:08 140448477591808 [Note] Server socket created on IP: '::'.
2017-03-30 8:36:08 140448477591808 [Note] Reading of all Master_info entries succeded
2017-03-30 8:36:08 140448477591808 [Note] Added new Master_info '' to hash table
2017-03-30 8:36:08 140448477591808 [Note] /usr/sbin/mysqld: ready for connections.
Version: '10.1.22-MariaDB-1~xenial' socket: '/var/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution
|
|
|
Sure, if you do something like that with an InnoDB table, you'll get a problem; there is no mystery here. You should be able to drop the table easily, but you'll have to do it manually; mysql_upgrade has no way of knowing that you tampered with the data at the file system level.
The question is how it happened in real life upon upgrade, assuming you didn't do anything of the kind on purpose.
|
|
serg,
We have been getting a steady inflow of complaints about this problem (an inconsistent state of the gtid_slave_pos table after an upgrade), and not just in this bug report; there have been more. While it appears to be caused by operational errors of various sorts (see the example described in my comment of 2014-04-03), the nature of the complaint is "mysql_upgrade must fix it, but it does not". Indeed, currently it can't, as that would go beyond normal DDL. Do you think we should try to do something smart about it in mysql_upgrade?
|
|
It wouldn't be beyond normal DDL; nothing that DROP+CREATE wouldn't fix. I just don't think mysql_upgrade should drop tables automatically; that sounds like a rather dangerous thing to do.
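For later readers, the manual DROP+CREATE fix would look roughly like this. This is only a sketch: on an affected server the DROP may merely remove the stale .frm (possibly with a warning), and an orphaned .ibd file may still need to be moved aside first, as in the earlier comment. The table definition is the one quoted above in this thread:

```sql
-- Manual fix sketch: remove the broken table definition, then re-create it.
DROP TABLE IF EXISTS mysql.gtid_slave_pos;
CREATE TABLE mysql.gtid_slave_pos (
  domain_id int(10) unsigned NOT NULL,
  sub_id bigint(20) unsigned NOT NULL,
  server_id int(10) unsigned NOT NULL,
  seq_no bigint(20) unsigned NOT NULL,
  PRIMARY KEY (domain_id, sub_id)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 COMMENT='Replication slave GTID position';
```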
|
|
I think we will get far more problems, complaints, and bug reports if mysql_upgrade starts silently dropping tables, causing data loss, than we get now by requiring manual intervention after the data has been manually tampered with.
Won't fix.
|