Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Fixed
- 10.1 (EOL)
Description
The point-select sysbench benchmark I ran recently on 10.2 shows rather high time spent in high-resolution timer syscalls (clock_gettime on Linux, QueryPerformanceCounter on Windows).
Inspection reveals that the clock() call introduced inside trx_start_low() by this patch https://github.com/MariaDB/server/commit/74961760a4837d2deb33336329c28cf9ad9b4e9e is responsible for most of these calls (also, in 10.2, clock() was erroneously called twice).
The call can be replaced by the value of THD::start_utime.