[MDEV-4368] Client hangs when using an imported MySQL database with wrong ownership Created: 2013-04-04  Updated: 2022-09-08

Status: Open
Project: MariaDB Server
Component/s: None
Affects Version/s: 10.0.1, 5.5.30
Fix Version/s: 5.5

Type: Bug Priority: Minor
Reporter: Mark Trutter (Inactive) Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: database-import, table-name-completion
Environment:

OpenSuse 12.3 64-bit, using MariaDB package version 5.5.29 from the standard repository


Attachments: File hg19.tar.gz    

 Description   

I have imported a MySQL database from a remote MySQL server. For that purpose I created an empty database of the respective name locally and then copied the database files to /var/lib/mysql/DATABASENAME using rsync (this works perfectly in MySQL).
When connecting to the database in the MariaDB client ("use DATABASENAME;"), the client hangs forever with the message:

Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Doing so, that is, starting the client with the -A option, I have no problem and can query the database.
So I suppose it is some error related to the indexing of the imported database's table names.

When connecting to the database via JDBC (using the MySQL Java connector), my program hangs as well, and there I don't have the -A option.

I'm not sure whether this way of importing a MySQL database is covered by MariaDB's MySQL compatibility, but since the remote MySQL server doesn't support mysqldump, I don't have another option.

#####
For reproducibility, the commands to import the database:
In the MariaDB client:
create database hg19;
On the command-line:
sudo rsync -v -r rsync://hgdownload.cse.ucsc.edu/mysql/hg19/refGene* /var/lib/mysql/hg19/

(The source is a public MySQL server at UCSC for bioinformatics.)



 Comments   
Comment by Elena Stepanova [ 2013-04-04 ]

Hi Mark,

Not sure which version of MySQL you imported the schema from, but it requires mysql_upgrade (which is normally the case when you do an upgrade or crossgrade between different versions).
The problem you described is reproducible here, and there are lots of complaints like
130404 13:35:54 [Warning] Checking table: './hg19/refGene'
130404 13:35:55 [ERROR] mysqld: Table './hg19/refGene' is marked as crashed and should be repaired

in the error log, but it all goes away after I run mysql_upgrade. Please try it out.

Comment by Mark Trutter (Inactive) [ 2013-04-04 ]

Hi Elena,

The server version of the public MySQL server at UCSC is 5.0.45.
I've run mysql_upgrade on my machine and the script returns OK,
but the client still hangs if I type: use hg19;

Comment by Elena Stepanova [ 2013-04-04 ]

Mark,

Could you please quote the server error log, from server startup to the end (including the period when you ran mysql_upgrade and tried to connect to the database)?

Thanks

Comment by Mark Trutter (Inactive) [ 2013-04-04 ]

So, I stopped the server and deleted /var/log/mysql/mysqld.log to reset it.
Then I started the server, ran mysql_upgrade, and then entered "use hg19;" at the MySQL console.
The error log doesn't seem very informative, and mysql_upgrade obviously ran without error:

130404 16:21:15 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
130404 16:21:15 InnoDB: The InnoDB memory heap is disabled
130404 16:21:15 InnoDB: Mutexes and rw_locks use GCC atomic builtins
130404 16:21:15 InnoDB: Compressed tables use zlib 1.2.7
130404 16:21:15 InnoDB: Initializing buffer pool, size = 128.0M
130404 16:21:15 InnoDB: Completed initialization of buffer pool
130404 16:21:15 InnoDB: highest supported file format is Barracuda.
130404 16:21:15 InnoDB: Waiting for the background threads to start
130404 16:21:16 Percona XtraDB (http://www.percona.com) 1.1.8-29.3 started; log sequence number 1601100
130404 16:21:16 [Note] Event Scheduler: Loaded 0 events
130404 16:21:16 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.29-MariaDB-log' socket: '/var/run/mysql/mysql.sock' port: 3306 Source distribution
130404 16:27:40 [ERROR] mysqld: Table './hg19/refGene' is marked as crashed and should be repaired
130404 16:27:40 [Warning] Checking table: './hg19/refGene'
130404 16:27:40 [ERROR] mysqld: Table './hg19/refGene' is marked as crashed and should be repaired
130404 16:27:40 [Warning] Checking table: './hg19/refGene'
. . .

Comment by Mark Trutter (Inactive) [ 2013-04-04 ]

I've done a couple more tests.
The error does not occur with other tables (e.g. ensGene) of the same database, so there might be some problem with that particular table (refGene).

On the other hand, it is a fact that MySQL itself can work on the table without an error, and that MariaDB can work on the table with option -A (without apparent problems).
Also, if MariaDB encounters a deficient table, it should give an error message and return to the prompt. Right now it hangs forever (at 100% CPU usage).

Comment by Mark Trutter (Inactive) [ 2013-04-04 ]

One correction to the last comment:

MariaDB cannot work on the deficient refGene table even when started with option -A, but MySQL can.
With option -A you can use the database and work on other tables, but when you do a SELECT on refGene, the client hangs as well.

Comment by Elena Stepanova [ 2013-04-04 ]

What happens if you do CHECK TABLE refGene ?

Comment by Mark Trutter (Inactive) [ 2013-04-04 ]

I think we're coming closer:

hg19.refGene check warning 1 client is using or hasn't closed the table properly
hg19.refGene check Error File './hg19/refGene.frm' not found (Errcode: 13)
hg19.refGene check status Table is already up to date

/var/lib/mysql/hg19/refGene.frm is there, and its size is identical to ensGene.frm (9131 bytes).
There is apparently a problem with refGene.frm, one that MySQL can handle or doesn't care about.

Comment by Mark Trutter (Inactive) [ 2013-04-04 ]

OK, this is definitely not a bug in the database system in the strict sense of the word, since the error is primarily on the side of the data here.

Nevertheless, this error might be useful for improving MariaDB's handling of faulty database tables.
If, for example, you're running a Java web application with a database backend, it would be pretty bad if your application just hung forever (at 100% CPU) without a hint at the source of the problem.
An exception with a useful hint at the faulty database table would be much nicer.

Comment by Mark Trutter (Inactive) [ 2013-04-04 ]

It might be a good idea to mirror the refGene table to a machine of yours (as described above).
That way you could use it to investigate why MariaDB is hanging here.

Since refGene is a widely used table in bioinformatics, I assume that the faulty data source will be corrected very soon.

Comment by Elena Stepanova [ 2013-04-04 ]

>> It might be a good idea to mirror the refGene table to a machine of yours (as described above).
>> That way you could use it to investigate why MariaDB is hanging here.

I did that from the start, and I observed the behavior you described, but the problem went away after I changed the permissions on the table and ran mysql_upgrade.
Which reminds me that I asked you about mysql_upgrade, but not about permissions: if you copied the database the way you described, via sudo rsync .. , could it be that you didn't change the owner of the copied database to 'mysql' (which is probably the account your MariaDB server runs as) afterwards?
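The ownership point can be demonstrated outside of MariaDB: a file that the server account cannot read fails to open with OS errno 13 (EACCES), which is exactly the "Errcode: 13" seen in the CHECK TABLE output. A minimal sketch (hypothetical file name; assumes a non-root POSIX environment):

```python
import errno
import os
import tempfile

def open_errno(path):
    """Try to open path for reading; return the OS errno on failure, None on success."""
    try:
        with open(path):
            return None
    except OSError as e:
        return e.errno

# Stand-in for a .frm file copied via "sudo rsync" and left unreadable
# for the server account (simulated here by stripping all permissions).
with tempfile.TemporaryDirectory() as tmp:
    frm = os.path.join(tmp, "refGene.frm")
    open(frm, "w").close()
    os.chmod(frm, 0o000)
    print(open_errno(frm) == errno.EACCES)  # True, unless running as root
```

The server's confusing "not found" wording comes from the fact that both a missing file and an unreadable file make the open fail; only the errno distinguishes them.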

Comment by Mark Trutter (Inactive) [ 2013-04-05 ]

You're right; actually I hadn't changed the permissions of the files, which is probably why repairing with mysql_upgrade didn't work for me.

Still, I think it would be a nice improvement to the database system if it returned an error message instead of hanging.
The log file tells me that the problem with the table was recognized correctly, so it should also be possible to catch that error, output an error message, and abort the action.

The faulty table has been corrected on the remote server in the meantime.

Thanks a lot for your very nice support !!!

Comment by Elena Stepanova [ 2013-04-05 ]

I agree, it was confusing, and it still seems to be a bug, although not a critical one, since there is an external cause and a good workaround.
Actually, I see two problems here.

1. When you attempt to use this database, what looks like hanging is actually an endless loop: we can see that the error log keeps getting new messages
130405 16:37:09 [ERROR] mysqld: Table './hg19/refGene' is marked as crashed and should be repaired
130405 16:37:09 [Warning] Checking table: './hg19/refGene'
130405 16:37:09 [ERROR] mysqld: Table './hg19/refGene' is marked as crashed and should be repaired
130405 16:37:09 [Warning] Checking table: './hg19/refGene'

the server uses as much CPU as it can get:
8129 elenst 20 0 1189m 331m 7520 S 100 4.2 0:22.89 mysqld

and the memory footprint also grows:
8129 elenst 20 0 1253m 405m 7520 S 100 5.1 2:20.42 mysqld

The process is in the 'Checking table' stage:
MariaDB [test]> show processlist;
+----+------+-----------------+------+------------+------+------------------------+------------------+----------+
| Id | User | Host            | db   | Command    | Time | State                  | Info             | Progress |
+----+------+-----------------+------+------------+------+------------------------+------------------+----------+
|  2 | root | localhost:59183 | hg19 | Field List |  233 | Checking table refGene |                  |    0.000 |
|  3 | root | localhost:59184 | test | Query      |    0 | NULL                   | show processlist |    0.000 |
+----+------+-----------------+------+------------+------+------------------------+------------------+----------+
2 rows in set (0.00 sec)

Obviously none of that should be happening; the server doesn't need to loop over the broken table. Instead, it should check it once and (at least) throw the error which it produces if we forcefully kill the thread that is checking the table:

130405 16:42:57 [Warning] Checking table: './hg19/refGene'
130405 16:42:57 [ERROR] mysqld: Table './hg19/refGene' is marked as crashed and should be repaired
130405 16:42:57 [Warning] Checking table: './hg19/refGene'
130405 16:42:57 [Warning] Recovering table: './hg19/refGene'
130405 16:42:57 [ERROR] Couldn't repair table: hg19.refGene

2. The error message in the error log could be much clearer. The server is able to detect the reason for the problem; we can see it in the CHECK TABLE output:

MariaDB [test]> check table hg19.refGene;
+--------------+-------+----------+-------------------------------------------------------+
| Table        | Op    | Msg_type | Msg_text                                              |
+--------------+-------+----------+-------------------------------------------------------+
| hg19.refGene | check | warning  | 1 client is using or hasn't closed the table properly |
| hg19.refGene | check | Error    | File './hg19/refGene.frm' not found (Errcode: 13)     |
| hg19.refGene | check | status   | Table is already up to date                           |
+--------------+-------+----------+-------------------------------------------------------+
3 rows in set (0.28 sec)

Errcode 13 is 'Permission denied', so it should be a good clue.
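The "Errcode" in MariaDB error messages is the raw OS errno value, so it can be looked up directly from the OS errno table; a small sketch:

```python
import errno
import os

# MariaDB's "Errcode: 13" is the raw OS errno value.
# errno 13 is EACCES, i.e. a permissions problem, not a missing file
# (a genuinely missing .frm file would give errno 2, ENOENT).
print(errno.errorcode[13], "-", os.strerror(13))
```

MySQL and MariaDB also ship the perror utility for the same lookup (perror 13).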

The REPAIR TABLE diagnostics are not bad either:

MariaDB [test]> repair table hg19.refGene;
+--------------+--------+----------+------------------------------+
| Table        | Op     | Msg_type | Msg_text                     |
+--------------+--------+----------+------------------------------+
| hg19.refGene | repair | Error    | Table 'refGene' is read only |
| hg19.refGene | repair | status   | Operation failed             |
+--------------+--------+----------+------------------------------+
2 rows in set (0.00 sec)

But the error log only keeps saying "is marked as crashed and should be repaired".

Comment by Elena Stepanova [ 2013-04-05 ]

To reproduce:

  • create a clean database, e.g. using mysql_install_db;
  • unpack the attached archive hg19.tar.gz in the datadir;
  • change ownership of the hg19 folder (recursively), e.g.
    sudo chown -R root:root hg19
  • start the server on this datadir (not as root); default options are enough;
  • run mysql -uroot <connect options> hg19
  • observe the hang, and check the error log to see it being flooded with error messages.

Not reproducible on MySQL (tried 5.5, 5.6).

Generated at Thu Feb 08 06:55:54 UTC 2024 using Jira 8.20.16#820016-sha1:9d11dbea5f4be3d4cc21f03a88dd11d8c8687422.