[MXS-401] Possible MaxScale hangs? Created: 2015-10-08 Updated: 2016-05-31 Resolved: 2016-05-31 |
|
| Status: | Closed |
| Project: | MariaDB MaxScale |
| Component/s: | Core |
| Affects Version/s: | 1.2.1 |
| Fix Version/s: | 2.0.0 |
| Type: | Bug | Priority: | Major |
| Reporter: | Kolbe Kegel (Inactive) | Assignee: | Johan Wikman |
| Resolution: | Cannot Reproduce | Votes: | 0 |
| Labels: | None | ||
| Environment: |
Galera, RWSplit, CentOS 7, MS Azure |
||
| Attachments: |
|
| Description |
|
I'm trying to do some benchmarking through MaxScale using linkbench, and I'm running into a problem where the linkbench threads will occasionally fail to close. I really can't tell whether this is a problem in MariaDB, a problem in MaxScale, or a problem in the Java client program (which I obviously did not write and have not studied). I don't think I ever see this hang when linkbench is connected straight to the backend (bypassing MaxScale).

Right now, I have a state where I've manually killed all app connections across all backends, but MaxScale still shows open sessions in "Session ready for routing" with associated DCBs in "DCB in the polling loop". Does it seem right that MaxScale sessions/DCBs would remain in those states even after the associated threads on the backends have been killed?
|
| Comments |
| Comment by Johan Wikman [ 2015-11-22 ] |
|
A couple of locking issues have recently been uncovered.
Both of these have now been fixed in develop. |
| Comment by Johan Wikman [ 2015-11-24 ] |
|
The localtime issue was a false alarm. There was another issue that only made it appear as if localtime could also cause lockups. |
| Comment by Dipti Joshi (Inactive) [ 2015-12-01 ] |
|
Is this a duplicate of |
| Comment by Johan Wikman [ 2015-12-15 ] |
|
In limited testing, this could not be reproduced. However, as a more thorough attempt to reproduce the behaviour may be warranted, the issue is tentatively moved to 1.3.1. |
| Comment by Johan Wikman [ 2016-05-31 ] |
|
I'm closing this: it was reported against 1.2.1, we could not reproduce it, and we are now at 2.0.0. If something similar is detected, please reopen this issue or create a new one. |