[CONC-536] Seemingly Random Unknown Errors Created: 2021-03-22 Updated: 2021-03-22 |
|
| Status: | Open |
| Project: | MariaDB Connector/C |
| Component/s: | API |
| Affects Version/s: | 3.1.8, 3.1.9, 3.1.10, 3.1.11, 3.1.12 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Major |
| Reporter: | Jared Bellows | Assignee: | Georg Richter |
| Resolution: | Unresolved | Votes: | 0 |
| Labels: | None | ||
| Description |
|
Some queries break under unknown circumstances and can be made to run successfully by changing whitespace. I am unsure why whitespace makes a difference, but while debugging the issue I came across the following code in mariadb_lib.c, beginning at line 2335:
This bit of code reads the field information for a result set. The ma_result_set_rows function returns an 8 or a 9 depending on whether the client and server support EXTENDED_METADATA: 8 if not, 9 if so. The problem is that the payload for field metadata has 6 length-encoded fields, while the rest of the fields are packed fields. When trying to read the 9th field as length-encoded, the connector will sometimes try to read beyond the length of the current MySQL packet, which triggers the unknown error. The previous version of this call had a hardcoded "8" in place of the ma_result_set_rows call, with 3.1.7 being the last version that had it in place. There are no issues with the previous versions, and because the type metadata field is populated in the unpack call in the supplied code, it does not seem necessary to try to read 9 fields in the db_read_rows call. Code commit that changed this part of the code: https://github.com/mariadb-corporation/mariadb-connector-c/commit/6632cb69d7acf3c3d9ceb0dd78a952a4d514cb5b |
| Comments |
| Comment by Georg Richter [ 2021-03-22 ] |
|
Hi Jared, thanks for your ticket! |
| Comment by Jared Bellows [ 2021-03-22 ] |
|
We ran this against the 10.5.9 version of the server, and we have been trying to work out the circumstances that trigger the issue. So far, looking at configuration differences between servers has not yielded the answer. I am preparing to run a debugger on the server to step through what is happening on that side and see if I can come up with reproducible steps. We did run this previously against 10.2 without issue; that version does not support the extended metadata attribute, which is required to trigger the processing of 9 fields instead of 8. I am reticent to share the dumps I've taken directly, but I can share the portions of the server response where, after stepping through the connector code in a debugger, I discovered what was happening, and where there might be a long-standing issue in the server. This is a redacted MySQL packet for the response where the connector throws an unknown error: |
The connector reads through this packet, reading the length-encoded metadata fields denoted by offsets 0x04, 0x08, 0x0F, 0x1E, 0x2D, and 0x3F. The next set of fields to be read are packed fields, with the length of those packed fields noted at 0x51. All 3.1 versions of the connector treat this section as a length-encoded field, so the next 12 (0x0c) bytes are read as the 7th field. When the connector tries to read the 8th field, it is beyond the end of the packet and into the allocated buffer. In my runs this area seems to always have been empty, resulting in another 0-length field (0x00). For the 9th field it reads a length value exceeding the remaining bytes (end_to - to), resulting in the unknown error. The case in which the query "works" is nearly identical, except that instead of 0x0c at offset 0x51, a null byte (0x00) is inserted, shifting the rest of the contents by one byte. In that scenario the 7th field is a 0-byte field, the 8th field's length is read as the 0x0c, and the next 12 bytes are read as the 8th field's contents. The 9th field is read as a 0-byte field and the loop completes without issue. I will spend some time trying to figure out why the server gives a different response for the same query and server configuration; if I can discover that, I might be able to provide a reproducible scenario. |