All that happened after a system crash caused by a game (X:Rebirth). As the storage contains information about private and business e-mails, I am not willing, and legally not allowed, to disclose the data.
__memcpy_ssse3 () at ../sysdeps/x86_64/multiarch/memcpy-ssse3.S:2848
2848 ../sysdeps/x86_64/multiarch/memcpy-ssse3.S: No such file or directory.
(gdb) frame 4
#4 trx_purge_get_next_rec (n_pages_handled=n_pages_handled@entry=0x7fffdcc0dde0, heap=0x555557a4e8c0) at /usr/src/debug/mariadb-10.0.22/storage/xtradb/trx/trx0purge.cc:908
908 /usr/src/debug/mariadb-10.0.22/storage/xtradb/trx/trx0purge.cc: No such file or directory.
(gdb) info locals
rec = 0x7fffe7f6d008 ""
rec2 = <optimized out>
offset = 4104
page_no = 633
space = 0
zip_size = 0
mtr = {memo = {heap = 0x0, used = 32,
data = "\001\000\000\000\000\000\000\000\200\310\377\342\377\177\000\000\001\000\000\000\000\000\000\000\200\321\377\342\377\177", '\000' <repeats 58 times>, "\t\000\000\000\000\000\000\000\200\000\000\000\000\000\000\000\260\000\000\000\000\000\000\000\002\000\000\000\000\000\000\000\t\000\000\000\062", '\000' <repeats 19 times>, "[\000\000\000n", '\000' <repeats 19 times>, "w\000\000\000|", '\000' <repeats 11 times>, " \000\000\330\377\177\000\000\200", '\000' <repeats 23 times>..., base = {count = 93825018611448, start = 0x68, end = 0x7fffd80266c8}, list = {prev = 0x555555dc3214 <dict_table_stats_lock(dict_table_t*, unsigned long)+596>, next = 0x11}}, log = {heap = 0x0, used = 0,
data = "\000\000\000\000\000\000\000\000]\000\000\000n\000\000\000\000\252\220y%\rg\220\000\000\000\000\000\000\000\000\000\t\000\330\377\177\000\000\300\350\244WUU\000\000\000\000\000\000\000\000\000\000\300\066\343VUU\000\000\000\000\000\000\000\000\000\000\350\343\244WUU\000\000_\031\316UUU\000\000\000\000\000\000\000\000\000\000\025", '\000' <repeats 16 times>, "\004", '\000' <repeats 15 times>, "\252\220y%\rg\220h\004\000\000\000\000\000\000\000\252\220y%\rg\220\200p\345VUU\000\000\300\350\244WUU\000\000(\351\244WUU\000\000\300\350\244WUU\000\000\300\066\343VUU\000\000<6\323UUU\000\000"..., base = {count = 0, start = 0x0, end = 0x555557a4e3e8}, list = {prev = 0x555557a22158,
Marko Mäkelä
added a comment - Thanks, Matthias!
If there was a system crash that was resolved by forcibly switching the power off, it is possible that we are witnessing the results of another part of the hardware/firmware/software stack failing. Maybe fsync() is not working on the hardware as expected, and some writes are being reordered, causing a situation where some of the recovered InnoDB data pages correspond to a wrong logical time (too old or too new LSN).
Admittedly, it is unlikely that a Linux kernel hang triggered by a GPU-intensive game would cause the hard disk firmware to hang. Perhaps the system was not completely frozen, and it was just in the middle of performing a write when the power was switched off?
How were the configuration parameters innodb_doublewrite, innodb_flush_method, innodb_checksum_algorithm, innodb_checksums set? Were any non-default InnoDB or XtraDB configuration parameters specified?
I did some more code review. It seems to me that purge should sequentially read the undo logs of each purgeable transaction. The wrong purge_sys->offset (4104 == 0x1008) should actually have been read from the same page, at some earlier address than 0x1008. The next-record pointer is in the first two bytes of the undo log record, and it must not be past the currently marked end of the page; see trx_undo_page_get_next_rec().
I think that the undo log pages are being written in an append-only fashion, so the next-record pointer should be in the immediately preceding record.
It seems that trx_undo_page_get_next_rec() can start reading an undo page from an arbitrary offset (purge_sys->hdr_offset) that we could have read as corrupted from some other page. There are quite a few occurrences of the byte sequence 0x10 0x08 in the undo log page, and of course it is also possible that purge_sys->hdr_offset was directly initialized as 0x1008 from some other page.
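To make the invariant concrete, here is a minimal sketch in C++ of the kind of bounds check described above. It is not the actual trx_undo_page_get_next_rec() from the InnoDB/XtraDB sources; the helper name and the header offsets (TRX_UNDO_PAGE_HDR, TRX_UNDO_PAGE_FREE) are my assumptions based on a reading of trx0undo.h and fil0fil.h.

#include <cstdint>
#include <cstddef>

// Sketch only: constants assumed, not copied from the source tree.
constexpr std::size_t UNIV_PAGE_SIZE     = 16384; // default InnoDB page size
constexpr std::size_t TRX_UNDO_PAGE_HDR  = 38;    // undo page header follows the 38-byte FIL header
constexpr std::size_t TRX_UNDO_PAGE_FREE = 4;     // "first free byte" field within that header

// Big-endian 16-bit read, as InnoDB's mach_read_from_2() does.
static uint16_t read_u16(const unsigned char* p)
{
    return static_cast<uint16_t>((p[0] << 8) | p[1]);
}

// Hypothetical helper (not InnoDB API): return the page offset of the record
// following the record at rec_offset, or 0 if there is none or the pointer
// looks implausible.
static uint16_t next_undo_rec_offset(const unsigned char* page, uint16_t rec_offset)
{
    // Current end of the used area; undo pages grow append-only, so every
    // valid next-record pointer must stay below this mark.
    const uint16_t free_offset = read_u16(page + TRX_UNDO_PAGE_HDR + TRX_UNDO_PAGE_FREE);

    // The first two bytes of each undo log record point at the next record
    // on the same page (0 terminates the list).
    const uint16_t next = read_u16(page + rec_offset);

    if (next == 0
        || next <= TRX_UNDO_PAGE_HDR
        || next >= free_offset
        || next >= UNIV_PAGE_SIZE) {
        // A pointer past the marked end of the page, or one picked up from a
        // stray 0x10 0x08 byte pair, would be rejected by a check like this.
        return 0;
    }
    return next;
}

This is only meant to illustrate where a bogus offset like 0x1008 would violate the invariant; the real code paths go through purge_sys and trx_undo_page_get_next_rec().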
With the available data, it is not possible for me to debug this further. If you still have the data and are willing to investigate, it would be interesting to see which undo pages were accessed by trx0purge.cc before the corruption was noticed. That could then allow us to determine the cause of the corruption. Because I do not remember seeing anything like this while working on MySQL and InnoDB since 2003, I would tend to believe that the problem lies outside InnoDB. That said, I am not fully familiar with the changes made in XtraDB or MariaDB.
Matthias Fehring
added a comment - Hello Marko,
I attached the complete config of the server instance as it is returned by the --verbose --help option. I still have the data available. What futher information can I provide?
complete-config.txt
Marko Mäkelä
added a comment - Hello Matthias,
Sorry, I missed your update.
I see that the InnoDB doublewrite buffer and the page checksums were enabled. So, crash recovery should have performed as expected. It is theoretically possible that a corrupted page accidentally got a valid-looking checksum, though.
The page checksum of the attached file undo_page.bin (which was extracted from gdb) is valid, which suggests that there was no recent redo log activity for the page, or a redo log checkpoint was written by one of the startup attempts before the code crashed when accessing the seemingly corrupted undo log.
As far as I can tell, given that you (quite understandably) do not want to share the data files for deeper analysis, the only way to move forward with this is that someone starts up the server on the data files under a debugger, setting breakpoints in InnoDB code to figure out where the corruption originally occurs.
Less likely options are that someone else runs into this bug, or that you are able to create a reduced test case for repeating the problem.
One more thing that you could perhaps try is to check the diagnostics of the storage medium. Does smartctl -a report any serious errors, such as sector relocation? I am thinking of a possible chain of events like this: The server crashed. On the first crash recovery attempt (applying the redo log), reading the undo log page failed, and the hard drive substituted an empty page (filled with zeros). This would pass the InnoDB page checksum. Then, the redo log records would be applied and the page would be rewritten, except for some ‘holes’ that were not covered by the redo log records.
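As a side note on the "empty page passes the checksum" step in this hypothetical chain of events, the following is a rough C++ sketch, not the actual buf_page_is_corrupted() from buf0buf.cc, of why a zero-filled page is accepted and where the two stored checksum fields sit on a 16 KiB page. The calc_checksum callback is a hypothetical stand-in for whichever algorithm innodb_checksum_algorithm selects, and the comparison at the end is a simplification of the real algorithm-specific logic.

#include <algorithm>
#include <cstdint>
#include <cstddef>

constexpr std::size_t UNIV_PAGE_SIZE              = 16384;
constexpr std::size_t FIL_PAGE_SPACE_OR_CHKSUM    = 0; // checksum field in the page header
constexpr std::size_t FIL_PAGE_END_LSN_OLD_CHKSUM = 8; // trailer: old-style checksum + low 32 bits of LSN

// Big-endian 32-bit read.
static uint32_t read_u32(const unsigned char* p)
{
    return (uint32_t(p[0]) << 24) | (uint32_t(p[1]) << 16)
         | (uint32_t(p[2]) << 8)  |  uint32_t(p[3]);
}

// Hypothetical validation helper, simplified from the real logic.
static bool page_passes_checksum(const unsigned char* page,
                                 uint32_t (*calc_checksum)(const unsigned char*))
{
    // An all-zero page is treated as valid (a freshly allocated page looks
    // like this), so a zero-filled page substituted by the drive would not
    // be flagged as corrupted.
    if (std::all_of(page, page + UNIV_PAGE_SIZE,
                    [](unsigned char b) { return b == 0; })) {
        return true;
    }

    const uint32_t stored_header  = read_u32(page + FIL_PAGE_SPACE_OR_CHKSUM);
    const uint32_t stored_trailer = read_u32(page + UNIV_PAGE_SIZE - FIL_PAGE_END_LSN_OLD_CHKSUM);
    const uint32_t calculated     = calc_checksum(page);

    // Simplification: the real code compares the header and trailer fields
    // against algorithm-specific values (for the legacy "innodb" algorithm,
    // separate "new" and "old" checksum functions are used).
    return stored_header == calculated && stored_trailer == calculated;
}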
Also, I wonder if there is some tool like memtest86 which would test the reliability of file systems and the block device. Running memtest86 or similar could also be a good idea, because InnoDB page checksums are only calculated immediately before writing a block back to disk. A low-probability RAM corruption caused by unreliable hardware could go unnoticed for a long time.
Matthias Fehring
added a comment - I have been using Thunderbird since this issue happened to my Kontact MariaDB database, so I totally forgot about this issue report.
I checked the SMART values of my storage device and everything is fine with it. I also did an overnight memtest86 run to test my system memory, also without any issues.
To be able to use KDE's Kontact again, I deleted the old data (including the corrupted database) and redownloaded the emails from the IMAP server.
So I am sorry that I am no longer able to help resolve this. Maybe it was a weird combination of different circumstances that led to this situation.
I think this can be closed as WONTFIX or something similar.
Thanks to all for your efforts.
{"report":{"fcp":877.2000000476837,"ttfb":203.5,"pageVisibility":"visible","entityId":58477,"key":"jira.project.issue.view-issue","isInitial":true,"threshold":1000,"elementTimings":{},"userDeviceMemory":8,"userDeviceProcessors":64,"apdex":0.5,"journeyId":"5798f7d0-3dce-45f2-9a12-7e93eb0fee58","navigationType":0,"readyForUser":1005.2000000476837,"redirectCount":0,"resourceLoadedEnd":912.6000000238419,"resourceLoadedStart":211.80000007152557,"resourceTiming":[{"duration":99.29999995231628,"initiatorType":"link","name":"https://jira.mariadb.org/s/2c21342762a6a02add1c328bed317ffd-CDN/lu2bu7/820016/12ta74/0a8bac35585be7fc6c9cc5a0464cd4cf/_/download/contextbatch/css/_super/batch.css","startTime":211.80000007152557,"connectEnd":0,"connectStart":0,"domainLookupEnd":0,"domainLookupStart":0,"fetchStart":211.80000007152557,"redirectEnd":0,"redirectStart":0,"requestStart":0,"responseEnd":311.10000002384186,"responseStart":0,"secureConnectionStart":0},{"duration":100,"initiatorType":"link","name":"https://jira.mariadb.org/s/7ebd35e77e471bc30ff0eba799ebc151-CDN/lu2bu7/820016/12ta74/8679b4946efa1a0bb029a3a22206fb5d/_/download/contextbatch/css/jira.browse.project,project.issue.navigator,jira.view.issue,jira.general,jira.global,atl.general,-_super/batch.css?agile_global_admin_condition=true&jag=true&jira.create.linked.issue=true&slack-enabled=true","startTime":212.10000002384186,"connectEnd":0,"connectStart":0,"domainLookupEnd":0,"domainLookupStart":0,"fetchStart":212.10000002384186,"redirectEnd":0,"redirectStart":0,"requestStart":0,"responseEnd":312.10000002384186,"responseStart":0,"secureConnectionStart":0},{"duration":286.09999990463257,"initiatorType":"script","name":"https://jira.mariadb.org/s/fbf975c0cce4b1abf04784eeae9ba1f4-CDN/lu2bu7/820016/12ta74/0a8bac35585be7fc6c9cc5a0464cd4cf/_/download/contextbatch/js/_super/batch.js?locale=en","startTime":212.30000007152557,"connectEnd":313.7000000476837,"connectStart":313.7000000476837,"domainLookupEnd":313.7000000476837,"domainLookupStart":313.7000000476837,"fetchStart":212.30000007152557,"redirectEnd":0,"redirectStart":0,"requestStart":313.89999997615814,"responseEnd":498.39999997615814,"responseStart":325.5,"secureConnectionStart":313.7000000476837},{"duration":397.40000009536743,"initiatorType":"script","name":"https://jira.mariadb.org/s/099b33461394b8015fc36c0a4b96e19f-CDN/lu2bu7/820016/12ta74/8679b4946efa1a0bb029a3a22206fb5d/_/download/contextbatch/js/jira.browse.project,project.issue.navigator,jira.view.issue,jira.general,jira.global,atl.general,-_super/batch.js?agile_global_admin_condition=true&jag=true&jira.create.linked.issue=true&locale=en&slack-enabled=true","startTime":212.89999997615814,"connectEnd":212.89999997615814,"connectStart":212.89999997615814,"domainLookupEnd":212.89999997615814,"domainLookupStart":212.89999997615814,"fetchStart":212.89999997615814,"redirectEnd":0,"redirectStart":0,"requestStart":314.5,"responseEnd":610.3000000715256,"responseStart":326.60000002384186,"secureConnectionStart":212.89999997615814},{"duration":128.30000007152557,"initiatorType":"script","name":"https://jira.mariadb.org/s/94c15bff32baef80f4096a08aceae8bc-CDN/lu2bu7/820016/12ta74/c92c0caa9a024ae85b0ebdbed7fb4bd7/_/download/contextbatch/js/atl.global,-_super/batch.js?locale=en","startTime":213,"connectEnd":213,"connectStart":213,"domainLookupEnd":213,"domainLookupStart":213,"fetchStart":213,"redirectEnd":0,"redirectStart":0,"requestStart":317.60000002384186,"responseEnd":341.3000000715256,"responseStart":339.3000000715256,"secureConnectionStart":213},{"duratio
n":130.20000004768372,"initiatorType":"script","name":"https://jira.mariadb.org/s/d41d8cd98f00b204e9800998ecf8427e-CDN/lu2bu7/820016/12ta74/1.0/_/download/batch/jira.webresources:calendar-en/jira.webresources:calendar-en.js","startTime":213.10000002384186,"connectEnd":213.10000002384186,"connectStart":213.10000002384186,"domainLookupEnd":213.10000002384186,"domainLookupStart":213.10000002384186,"fetchStart":213.10000002384186,"redirectEnd":0,"redirectStart":0,"requestStart":317.7000000476837,"responseEnd":343.3000000715256,"responseStart":340.60000002384186,"secureConnectionStart":213.10000002384186},{"duration":130.5,"initiatorType":"script","name":"https://jira.mariadb.org/s/d41d8cd98f00b204e9800998ecf8427e-CDN/lu2bu7/820016/12ta74/1.0/_/download/batch/jira.webresources:calendar-localisation-moment/jira.webresources:calendar-localisation-moment.js","startTime":213.20000004768372,"connectEnd":213.20000004768372,"connectStart":213.20000004768372,"domainLookupEnd":213.20000004768372,"domainLookupStart":213.20000004768372,"fetchStart":213.20000004768372,"redirectEnd":0,"redirectStart":0,"requestStart":318,"responseEnd":343.7000000476837,"responseStart":341.60000002384186,"secureConnectionStart":213.20000004768372},{"duration":104,"initiatorType":"link","name":"https://jira.mariadb.org/s/b04b06a02d1959df322d9cded3aeecc1-CDN/lu2bu7/820016/12ta74/a2ff6aa845ffc9a1d22fe23d9ee791fc/_/download/contextbatch/css/jira.global.look-and-feel,-_super/batch.css","startTime":213.30000007152557,"connectEnd":0,"connectStart":0,"domainLookupEnd":0,"domainLookupStart":0,"fetchStart":213.30000007152557,"redirectEnd":0,"redirectStart":0,"requestStart":0,"responseEnd":317.3000000715256,"responseStart":0,"secureConnectionStart":0},{"duration":130.5,"initiatorType":"script","name":"https://jira.mariadb.org/rest/api/1.0/shortcuts/820016/47140b6e0a9bc2e4913da06536125810/shortcuts.js?context=issuenavigation&context=issueaction","startTime":213.39999997615814,"connectEnd":213.39999997615814,"connectStart":213.39999997615814,"domainLookupEnd":213.39999997615814,"domainLookupStart":213.39999997615814,"fetchStart":213.39999997615814,"redirectEnd":0,"redirectStart":0,"requestStart":319.2000000476837,"responseEnd":343.89999997615814,"responseStart":342,"secureConnectionStart":213.39999997615814},{"duration":104.39999997615814,"initiatorType":"link","name":"https://jira.mariadb.org/s/3ac36323ba5e4eb0af2aa7ac7211b4bb-CDN/lu2bu7/820016/12ta74/d176f0986478cc64f24226b3d20c140d/_/download/contextbatch/css/com.atlassian.jira.projects.sidebar.init,-_super,-project.issue.navigator,-jira.view.issue/batch.css?jira.create.linked.issue=true","startTime":213.5,"connectEnd":0,"connectStart":0,"domainLookupEnd":0,"domainLookupStart":0,"fetchStart":213.5,"redirectEnd":0,"redirectStart":0,"requestStart":0,"responseEnd":317.89999997615814,"responseStart":0,"secureConnectionStart":0},{"duration":136.89999997615814,"initiatorType":"script","name":"https://jira.mariadb.org/s/3339d87fa2538a859872f2df449bf8d0-CDN/lu2bu7/820016/12ta74/d176f0986478cc64f24226b3d20c140d/_/download/contextbatch/js/com.atlassian.jira.projects.sidebar.init,-_super,-project.issue.navigator,-jira.view.issue/batch.js?jira.create.linked.issue=true&locale=en","startTime":213.60000002384186,"connectEnd":213.60000002384186,"connectStart":213.60000002384186,"domainLookupEnd":213.60000002384186,"domainLookupStart":213.60000002384186,"fetchStart":213.60000002384186,"redirectEnd":0,"redirectStart":0,"requestStart":323.89999997615814,"responseEnd":350.5,"responseStart":349.30000007152
56,"secureConnectionStart":213.60000002384186},{"duration":505.1999999284744,"initiatorType":"script","name":"https://jira.mariadb.org/s/d41d8cd98f00b204e9800998ecf8427e-CDN/lu2bu7/820016/12ta74/1.0/_/download/batch/jira.webresources:bigpipe-js/jira.webresources:bigpipe-js.js","startTime":216.30000007152557,"connectEnd":216.30000007152557,"connectStart":216.30000007152557,"domainLookupEnd":216.30000007152557,"domainLookupStart":216.30000007152557,"fetchStart":216.30000007152557,"redirectEnd":0,"redirectStart":0,"requestStart":710.1000000238419,"responseEnd":721.5,"responseStart":720.2000000476837,"secureConnectionStart":216.30000007152557},{"duration":695.3999999761581,"initiatorType":"script","name":"https://jira.mariadb.org/s/d41d8cd98f00b204e9800998ecf8427e-CDN/lu2bu7/820016/12ta74/1.0/_/download/batch/jira.webresources:bigpipe-init/jira.webresources:bigpipe-init.js","startTime":217.20000004768372,"connectEnd":217.20000004768372,"connectStart":217.20000004768372,"domainLookupEnd":217.20000004768372,"domainLookupStart":217.20000004768372,"fetchStart":217.20000004768372,"redirectEnd":0,"redirectStart":0,"requestStart":902.3000000715256,"responseEnd":912.6000000238419,"responseStart":912,"secureConnectionStart":217.20000004768372},{"duration":170.40000009536743,"initiatorType":"xmlhttprequest","name":"https://jira.mariadb.org/rest/webResources/1.0/resources","startTime":629.3999999761581,"connectEnd":629.3999999761581,"connectStart":629.3999999761581,"domainLookupEnd":629.3999999761581,"domainLookupStart":629.3999999761581,"fetchStart":629.3999999761581,"redirectEnd":0,"redirectStart":0,"requestStart":769.2000000476837,"responseEnd":799.8000000715256,"responseStart":799.1000000238419,"secureConnectionStart":629.3999999761581},{"duration":124.60000002384186,"initiatorType":"xmlhttprequest","name":"https://jira.mariadb.org/rest/webResources/1.0/resources","startTime":833.2000000476837,"connectEnd":833.2000000476837,"connectStart":833.2000000476837,"domainLookupEnd":833.2000000476837,"domainLookupStart":833.2000000476837,"fetchStart":833.2000000476837,"redirectEnd":0,"redirectStart":0,"requestStart":927.5,"responseEnd":957.8000000715256,"responseStart":954.3999999761581,"secureConnectionStart":833.2000000476837},{"duration":160.5,"initiatorType":"script","name":"https://www.google-analytics.com/analytics.js","startTime":870.7000000476837,"connectEnd":0,"connectStart":0,"domainLookupEnd":0,"domainLookupStart":0,"fetchStart":870.7000000476837,"redirectEnd":0,"redirectStart":0,"requestStart":0,"responseEnd":1031.2000000476837,"responseStart":0,"secureConnectionStart":0}],"fetchStart":0,"domainLookupStart":0,"domainLookupEnd":0,"connectStart":0,"connectEnd":0,"requestStart":61,"responseStart":203,"responseEnd":217,"domLoading":206,"domInteractive":1072,"domContentLoadedEventStart":1072,"domContentLoadedEventEnd":1120,"domComplete":1446,"loadEventStart":1446,"loadEventEnd":1446,"userAgent":"Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; ClaudeBot/1.0; 
+claudebot@anthropic.com)","marks":[{"name":"bigPipe.sidebar-id.start","time":1042.5},{"name":"bigPipe.sidebar-id.end","time":1043.3000000715256},{"name":"bigPipe.activity-panel-pipe-id.start","time":1043.5},{"name":"bigPipe.activity-panel-pipe-id.end","time":1046},{"name":"activityTabFullyLoaded","time":1139.6000000238419}],"measures":[],"correlationId":"46d047219b67e1","effectiveType":"4g","downlink":9.6,"rtt":0,"serverDuration":87,"dbReadsTimeInMs":11,"dbConnsTimeInMs":18,"applicationHash":"9d11dbea5f4be3d4cc21f03a88dd11d8c8687422","experiments":[]}}