[MXS-1270] MaxScale v2.1.2 Read Cache is not discarded after update/insert Created: 2017-05-19  Updated: 2017-05-23  Resolved: 2017-05-22

Status: Closed
Project: MariaDB MaxScale
Component/s: cache
Affects Version/s: 2.1.2
Fix Version/s: N/A

Type: Bug Priority: Major
Reporter: Sven Abels Assignee: Johan Wikman
Resolution: Not a Bug Votes: 0
Labels: None
Environment:

Ubuntu 16.04 within Docker (Swarm Mode, v17.05)


Issue Links:
Blocks
is blocked by MXS-1134 Cache Invalidation Closed

 Description   

We used the new cache filter of MaxScale v2.1.2 in combination with the readwritesplit router and 3 servers (but we also tested it with readconnroute and only 1 server, with the same result).

We have an old script which updates a value in a MySQL table and afterwards selects a sum in which this value is used via a join. The result is that the sum still delivers the old value from the MaxScale cache. Even when reloading the whole webpage afterwards with a fresh select, the result remains outdated until the cache TTL has passed or until we restart MaxScale.

Shouldn't MaxScale automatically discard the cached data for this table/query after an update/insert?

Our cache config is:

[Cache]
type=filter
module=cache
hard_ttl=300
soft_ttl=290
max_resultset_size=512Ki
max_size=100Mi

The script causing this problem is a bit dirty but easy to read:

    $conn = mysqli_connect($hostname, $username, $password, $database);
    $json = array('status' => 'false');

    // Look up an existing row for this task/month/year.
    $sql = "Select id from task_data where task_id = ".$_REQUEST["task_id"]." and month = ".$_REQUEST["month"]." and year = ".$_REQUEST["year"];
    $result = $conn->query($sql);
    if ($result->num_rows > 0) {
        // Row exists: update its hours.
        $data_id = $result->fetch_assoc()["id"];
        $sql = "Update task_data set hours = ".$_REQUEST["hours"]." where id = ".$data_id;
        $result = $conn->query($sql);
        if ($result) {
            $json['status'] = 'true';
        }
    } else {
        // No row yet: insert one.
        $sql = "Insert into task_data (task_id, month, year, hours) Values (".$_REQUEST["task_id"].", ".$_REQUEST["month"].", ".$_REQUEST["year"].", ".$_REQUEST["hours"].");";
        $result = $conn->query($sql);
        if ($result) {
            $json['status'] = 'true';
        }
    }

    if ($json["status"] == "true") {
        // These two SELECTs are answered from the MaxScale cache and still
        // return the sums from before the update/insert above.
        $task_hours = $conn->query("select ifnull(round(sum(hours),2), 0) as hours from project join task on (project.id = task.project_id) join task_data on (task_data.task_id = task.id) where month = ".$_REQUEST["month"]." and year = ".$_REQUEST['year']." and project.id = ".$_REQUEST["project_id"].";")->fetch_assoc()["hours"];

        $project_hours = $conn->query("select ifnull(round(sum(worked_hours),2), 0) as hours from project join day on (day.project_id = project.id) join data on (data.id = day.data_id) where month = ".$_REQUEST["month"]." and year = ".$_REQUEST['year']." and project.id = ".$_REQUEST["project_id"].";")->fetch_assoc()["hours"];

        $json["hours"] = $task_hours;
        $json["project_hours"] = $project_hours;
    }



 Comments   
Comment by markus makela [ 2017-05-21 ]

This is a current limitation in the cache filter: https://mariadb.com/kb/en/mariadb-enterprise/cache/#invalidation

Comment by Johan Wikman [ 2017-05-22 ]

If invalidation is important then there is a workaround that may or may not work for you.

The cache is shared by all users, but specific users can be exempted from being served cached data. So if, after an insert or update, you fetch the data using a user for whom data is never served from the cache, the data in the cache will be refreshed for all users.
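That exemption can be expressed via the cache filter's rules file, referenced from the [Cache] section with a rules=/path/to/rules.json entry. A minimal sketch, assuming the rule syntax of the MaxScale 2.1 cache filter; the account name 'cache_refresher' is a hypothetical user reserved for post-write reads:

```json
{
    "use": [
        {
            "attribute": "user",
            "op": "!=",
            "value": "'cache_refresher'@'%'"
        }
    ]
}
```

With such a rule, SELECTs issued by 'cache_refresher' bypass the cache and their fresh results repopulate it, so subsequent cached reads by other users see the updated values.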

Comment by Johan Wikman [ 2017-05-22 ]

Closing as this is documented behavior.

Comment by Sven Abels [ 2017-05-22 ]

Thanks a lot for pointing me to this.

Comment by Johan Wikman [ 2017-05-23 ]

Btw, have you considered whether you can have

selects=assume_cacheable

in your configuration?

https://mariadb.com/kb/en/mariadb-enterprise/cache/#selects

Namely, if you can, it gives a significant performance boost.
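Applied to the configuration from the description, that would amount to the following sketch (all other values unchanged; whether assume_cacheable is safe depends on the application's queries):

```ini
[Cache]
type=filter
module=cache
hard_ttl=300
soft_ttl=290
max_resultset_size=512Ki
max_size=100Mi
selects=assume_cacheable
```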

Comment by Sven Abels [ 2017-05-23 ]

@Johan: Thanks for the tip. Unfortunately, we cannot avoid the app writing from time to time. Exempting a user for all the read parts would work, but then we would lose the speed of the cache completely. So it looks like we can't really use the cache feature - or only in a very small number of scenarios - until automatic cache invalidation is available.

Generated at Thu Feb 08 04:05:29 UTC 2024 using Jira 8.20.16#820016-sha1:9d11dbea5f4be3d4cc21f03a88dd11d8c8687422.