[MXS-1259] Add truncate statements and all table DDL to avrorouter Created: 2017-05-04 Updated: 2019-09-04 Resolved: 2019-09-04 |
|
| Status: | Closed |
| Project: | MariaDB MaxScale |
| Component/s: | avrorouter |
| Affects Version/s: | 2.0.5 |
| Fix Version/s: | N/A |
| Type: | New Feature | Priority: | Minor |
| Reporter: | Jonathan Day (Inactive) | Assignee: | Todd Stoffel (Inactive) |
| Resolution: | Won't Do | Votes: | 0 |
| Labels: | None | ||
| Environment: | SUSE |
| Description |
| Comments |
| Comment by markus makela [ 2017-05-05 ] |
|
In theory it should be relatively straightforward to add support for this by using a global Avro file for DDL and other query events. Interleaving these events into the table-specific streams is more complicated, as the data would need to be fetched from a different file. A better solution would be to provide more direct access to the binlog stream, convert it to JSON on the fly, and pipe it directly to Kafka without the data ever hitting the disk. This would have superior performance compared to the current solution, but it would remove the capability to query historical data. |
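The disk-less streaming idea described above — decode a binlog event, serialize it to JSON, and hand it straight to a Kafka producer — could be sketched roughly as follows. This is a minimal illustration, not MaxScale code: the event dict, topic naming scheme, and `produce` callback are all assumptions, and a real deployment would pass something like a librdkafka producer in place of the callback.

```python
import json


def stream_event_to_kafka(event, produce):
    """Convert a decoded replication event to JSON and hand it to a
    producer callback without ever writing it to disk.

    `event` is a plain dict standing in for a decoded binlog event.
    `produce` is any callable accepting (topic, payload); in practice
    this would wrap a real Kafka producer's send/produce method.
    All names here are illustrative, not avrorouter internals.
    """
    # Hypothetical topic naming: one topic per schema.table pair.
    topic = f"{event['schema']}.{event['table']}"
    payload = json.dumps(event, sort_keys=True)
    produce(topic, payload)
    return topic, payload


# In-memory stand-in for a Kafka producer: records what would be sent.
sent = []
stream_event_to_kafka(
    {"schema": "test", "table": "t1", "type": "insert", "values": [1, "a"]},
    lambda topic, payload: sent.append((topic, payload)),
)
```

Because events are forwarded as they are decoded, nothing is persisted; this is exactly why the historical-query capability of the file-based approach would be lost.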
| Comment by Johan Wikman [ 2017-05-09 ] |
|
But those two are not in conflict: you could provide direct access to the stream while also storing the data to disk. It is also not obvious that the performance hit would be large. If the data is streamed while it is being received, it is likely still in the block cache; even though the data is conceptually fetched from disk, it may well be served directly from memory. |
| Comment by markus makela [ 2017-05-09 ] |
|
The comment was mainly about the fact that this goes against how the data is stored in the Avro files, and that a mechanism for seeking into the DDL file would need to be added. |