[MDEV-7202] [PATCH] additional statistics for parallel replication - Slave_parallel_eventqueue_size/Slave_parallel_eventqueue_freepending Created: 2014-11-25 Updated: 2015-04-04 Resolved: 2015-04-04 |
|
| Status: | Closed |
| Project: | MariaDB Server |
| Component/s: | Replication |
| Fix Version/s: | N/A |
| Type: | Task | Priority: | Minor |
| Reporter: | Daniel Black | Assignee: | Unassigned |
| Resolution: | Duplicate | Votes: | 0 |
| Labels: | parallelslave |
| Description |
|
The attached patch adds a total, summed over all worker threads, for Slave_parallel_eventqueue_size/Slave_parallel_eventqueue_freepending. Rather than (or in addition to) totals, would pushing a per-thread status such as slave_parallel_eventqueue_0_size be acceptable? Anything else useful to capture/graph here? |
| Comments |
| Comment by Kristian Nielsen [ 2015-02-04 ] |
|
Ok, I (finally) got to look at this patch.

> Attached patch adds a total status for all threads for the
> Slave_parallel_eventqueue_size/Slave_parallel_eventqueue_freepending.

The patch exposes the loc_qev_size and qev_free_pending fields as status variables. This is a completely internal detail of the memory management of the parallel replication code.

> Rather/in addition to totals, would push a per thread status as
> slave_parallel_eventqueue_0_size be acceptable?

Do you mean here that there would be N status variables, one for each worker thread?

> I thought some additional status would be helpful.

I 100% agree that more monitoring of parallel replication is needed.

With respect to the size of event queues, the issue here is that the code does not currently maintain such a count. I'm trying to think of a way to get the size of pending events without introducing extra locking. The SQL driver thread takes LOCK_rpl_thread whenever an event is queued. And under LOCK_parallel_entry, a worker thread could update a counter of the size of pending events.

In general, I'm unsure how to balance the need for more monitoring against the overhead it would add. |
| Comment by Kristian Nielsen [ 2015-02-04 ] |
|
I tried to assign the issue back to user Daniel Black, but that did not seem possible. |
| Comment by Daniel Black [ 2015-04-03 ] |
|
pivanof suggested much better options in MDEV-7340, so let's close this issue and continue there. |