If I understand it correctly, transactions issued on different nodes of a MariaDB Galera cluster should be free of lost-update anomalies, as claimed in https://galeracluster.com/library/training/tutorials/supporting-transaction-isolation-levels.html. However, we have observed such an anomaly in our tests.
We have set up our tests with the following configuration:
We run a cluster of two nodes with docker-compose, using MariaDB 10.7.3. We use a simple database schema with a single two-column table, where each row represents a key-value pair:
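A minimal sketch of such a key-value table follows; the table and column names (kv, var, val) are assumptions, chosen to match the READ(var, val)/WRITE(var, val) notation used in the logs below:

```sql
-- Hypothetical schema; table/column names are assumptions.
CREATE TABLE kv (
    var INT PRIMARY KEY,   -- key
    val INT NOT NULL       -- value, initialized to 0
) ENGINE=InnoDB;           -- Galera replicates InnoDB tables only

-- Initialization, performed in a separate session:
INSERT INTO kv (var, val) VALUES (0, 0), (1, 0), (2, 0);
```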
We have one client session on each node; the two sessions run concurrently. First, we initialize the table (in a separate session) with all values set to 0. After initialization, both clients run a stream of transactions produced by our workload generator. The values written by the clients are guaranteed to be unique.
The results observed by the client are in client-result.log:
where each transaction is identified by (session id, txn id). Within a session, a transaction with a larger txn id starts only after those with smaller ids have finished. Queries are shown as READ(var, val) or WRITE(var, val).
When running the experiment, a failed transaction is retried until all of its operations succeed and the transaction commits successfully. Only the successful attempt is logged on the client side; all earlier attempts appear as ROLLBACK in the query logs.
Both txn (1, 5) and txn (2, 13) read var=0, val=4, written by txn (1, 4), and both transactions successfully commit their writes on var=0. This is a lost update! See lost-update.pdf for the complete scenario.
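For illustration, the anomalous interleaving can be sketched as two concurrent sessions. The table/column names and the written values (5 and 13) are hypothetical; only the read value 4 comes from the observed history:

```sql
-- txn (1, 4) on node 1 commits first:
START TRANSACTION;
UPDATE kv SET val = 4 WHERE var = 0;
COMMIT;

-- Then, concurrently:
-- txn (1, 5) on node 1:
START TRANSACTION;
SELECT val FROM kv WHERE var = 0;     -- reads 4
UPDATE kv SET val = 5 WHERE var = 0;
COMMIT;

-- txn (2, 13) on node 2, overlapping with txn (1, 5):
START TRANSACTION;
SELECT val FROM kv WHERE var = 0;     -- also reads 4
UPDATE kv SET val = 13 WHERE var = 0;
COMMIT;

-- Both commits succeed. Under Galera's certification-based
-- replication (first committer wins), one of the two conflicting
-- transactions should have been aborted; instead, one write is lost.
```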
The server-side logs of the nodes are attached in server-logs.zip. The query logs and error logs are stored in mariadb_general.log and mariadb_error.log. The binary logs are in mariadb_bin*.
The tools to reproduce the violation are attached in tools.zip.
1. Start the galera cluster in docker
The docker-compose file is generator/docker/docker-compose.yml. Database logs are stored in /tmp/ in the containers.
2. Record and verify a history
First build the tools:
Then generate transactions and record the history:
Note that, since we run black-box testing with randomized workloads, we cannot reproduce exactly the same violating histories. However, violations manifest very frequently: for example, we observed 8 violating histories out of 10 collected histories.