[MCOL-590] No extent information on Insert/Updates with table corruption Created: 2017-02-23  Updated: 2017-09-20  Resolved: 2017-09-20

Status: Closed
Project: MariaDB ColumnStore
Component/s: DMLProc
Affects Version/s: 1.0.7
Fix Version/s: Icebox

Type: Bug Priority: Critical
Reporter: Bernd Helm Assignee: David Hall (Inactive)
Resolution: Cannot Reproduce Votes: 1
Labels: None
Environment:

Debian 8 with kernel 4.9, 4-PM combined install


Attachments: File columnstoreSupportReport.tar.gz    
Sprint: 2017-4, 2017-5, 2017-6, 2017-7, 2017-8, 2017-9, 2017-10

 Description   

We have huge problems with ColumnStore breaking on a regular basis.
We import data into a "source" table using cpimport, then update other tables using UPDATE with a join and INSERT with left joins, updating/inserting tens of thousands of rows at once.
It happens on multiple tables, sometimes on the insert, sometimes on the update. The import works for some iterations and then crashes.
The affected table (more specifically, its latest partition) is then broken, and the only way to fix it is to drop the partition. Also, since we use transactions, the rollback does not work, and the data is inconsistent between the tables updated within that transaction.

This also happened (more rarely) on InfiniDB 4.6.7.

This is what we do:

cat data* | /usr/local/mariadb/columnstore/bin/cpimport stats stats_update -s ',' -E '"' -e 3000
BEGIN;

UPDATE
      `stats_hour` AS s
      JOIN
      (
        SELECT sr.`hash_hour`, sr.`datum_hour`, ..
         FROM stats_update sr
         GROUP BY sr.`hash_hour`, sr.`datum_hour`
      ) AS su
      ON su.`datum_hour` = s.datum AND s.hash = su.`hash_hour`
      SET
      s.`views` = s.`views` + su.`views`,
      s.`clicks` = s.`clicks` + su.`clicks`,
      ..
      WHERE 1;

INSERT INTO stats_hour
    SELECT
      su.`hash_hour`,
      MAX(su.`datum_hour`),
      MAX(su.`user_id`),
      ...
      SUM(su.`unique_pa_clicks`)
    FROM `stats_update` su
    LEFT JOIN `stats_hour` s ON su.`datum_hour` = s.datum AND s.hash = su.`hash_hour`
    WHERE s.hash IS NULL
    GROUP BY su.`hash_hour`;
2017-02-23 19:34: exception 'Common\JobBase' code HY000 with message 'SQLSTATE[HY000]: General error: 1815 Internal error: CAL0006: There is no extent information for table stats_hour'

From the logs:

crit.log
Feb 23 19:34:04 ics1 controllernode[7785]: 04.910677 |0|0|0| C 29 CAL0000: ExtentMap::getDbRootHWMInfo(): OID 3543 has HWM extent that is UNAVAILABLE for DBRoot1; part#: 3, seg#: 3, fbo: 8192, localHWM: 0, lbid: 12061696

Feb 23 19:33:41 ics1 writeengineserver[7301]: 41.831126 |0|0|0| D 32 CAL0000: 1465 : Message Queue is empty; Stopping CF Thread
Feb 23 19:33:42 ics1 cpimport.bin[5362]: 42.174918 |0|0|0| I 34 CAL0083: BulkLoad: JobId-3202; finished loading table stats.stats_update; 120000 rows inserted
Feb 23 19:33:42 ics1 writeengine[5362]: 42.175028 |0|0|0| I 19 CAL0008: Bulkload |Job: /usr/local/mariadb/columnstore/data/bulk/tmpjob/3202_D20170223_T193324_S352165_Job_3202.xml |For table stats.stats_update: 120000 rows processed and 120000 rows inserted.
Feb 23 19:33:42 ics1 messagequeue[3133]: 42.183386 |0|0|0| W 31 CAL0071: InetStreamSocket::read: EOF during readToMagic: socket read error: Success; InetStreamSocket: sd: 28 inet: 137.74.204.237 port: 59138; Will retry.
Feb 23 19:33:42 ics1 writeenginesplit[5317]: 42.185871 |0|0|0| I 33 CAL0098: Received a Cpimport Pass from PM2.
Feb 23 19:33:42 ics1 cpimport.bin[5362]: 42.198629 |0|0|0| I 34 CAL0082: End BulkLoad: JobId-3202; status-SUCCESS
Feb 23 19:33:42 ics1 messagequeue[3133]: 42.210929 |0|0|0| W 31 CAL0071: InetStreamSocket::read: EOF during readToMagic: socket read error: Success; InetStreamSocket: sd: 20 inet: 137.74.204.236 port: 43672; Will retry.
Feb 23 19:33:42 ics1 writeengineserver[7301]: 42.211010 |0|0|0| I 32 CAL0000: 1465 : cpimport exit on success
Feb 23 19:33:42 ics1 writeengineserver[7301]: 42.211277 |0|0|0| D 32 CAL0000: 1465 : onCpimportSuccess BrmReport Send
Feb 23 19:33:42 ics1 writeengineserver[7301]: 42.211345 |0|0|0| D 32 CAL0000: 1465 : onReceiveEOD : child ID = 0
Feb 23 19:33:42 ics1 writeengineserver[7301]: 42.211407 |0|0|0| D 32 CAL0000: 1465 : onReceiveEOD : child ID = 0
Feb 23 19:33:42 ics1 writeenginesplit[5317]: 42.213387 |0|0|0| I 33 CAL0098: Received a Cpimport Pass from PM1.
Feb 23 19:33:42 ics1 messagequeue[3133]: 42.230724 |0|0|0| W 31 CAL0071: InetStreamSocket::read: EOF during readToMagic: socket read error: Success; InetStreamSocket: sd: 27 inet: 137.74.204.239 port: 47908; Will retry.
Feb 23 19:33:42 ics1 writeenginesplit[5317]: 42.250687 |0|0|0| I 33 CAL0098: Received a Cpimport Pass from PM4.
Feb 23 19:33:42 ics1 writeengineserver[7301]: 42.250727 |0|0|0| D 32 CAL0000: 1465 : OnReceiveCleanup arrived
Feb 23 19:33:42 ics1 writeenginesplit[5317]: 42.392919 |0|0|0| I 33 CAL0000: Released Table Lock
Feb 23 19:33:50 ics1 messagequeue[7301]: 50.403361 |0|0|0| W 31 CAL0071: InetStreamSocket::read: EOF during readToMagic: socket read error: Success; InetStreamSocket: sd: 11 inet: 137.74.204.236 port: 60368; Will retry.
Feb 23 19:33:50 ics1 messagequeue[3133]: 50.414428 |0|0|0| W 31 CAL0071: InetStreamSocket::read: EOF during readToMagic: socket read error: Success; InetStreamSocket: sd: 22 inet: 137.74.204.236 port: 43624; Will retry.
Feb 23 19:33:50 ics1 dmlpackageproc[7785]: 50.505621 |405|853|0| D 21 CAL0001: Start SQL statement:  update#012      `stats_hour` AS s#012      JOIN#012      (#012        SELECT sr.`hash_hour`, sr.`datum_hour`, sum(`views`) as `views`, sum(`clicks`) as `clicks`, sum(`unique_clicks`) as `unique_clicks`, sum(`unique_sys_clicks`) as `unique_sys_clicks`, sum(`unique_pa_clicks`) as `unique_pa_clicks`#012         FROM stats_update sr#012         /*SOURCEWHERE*/#012         GROUP BY sr.`hash_hour`, sr.`datum_hour`#012      ) AS su#012      ON su.`datum_hour` = s.datum AND s.hash = su.`hash_hour`#012      SET#012      s.`views` =  s.`views` + su.`views`,#012s.`clicks` =  s.`clicks` + su.`clicks`,#012s.`unique_clicks` =  s.`unique_clicks` + su.`unique_clicks`,#012s.`unique_sys_clicks` =  s.`unique_sys_clicks` + su.`unique_sys_clicks`,#012s.`unique_pa_clicks` =  s.`unique_pa_clicks` + su.`unique_pa_clicks`#012      WHERE 1 /*EXTRAWHERE*/;|stats|
Feb 23 19:33:53 ics1 dmlpackageproc[7785]: 53.651961 |405|853|0| D 21 CAL0001: End SQL statement
Feb 23 19:33:53 ics1 messagequeue[7785]: 53.670058 |0|0|0| W 31 CAL0071: InetStreamSocket::read: EOF during readToMagic: socket read error: Success; InetStreamSocket: sd: 21 inet: 137.74.204.236 port: 8630; Will retry.
Feb 23 19:33:53 ics1 messagequeue[7785]: 53.670151 |0|0|0| W 31 CAL0071: InetStreamSocket::read: EOF during readToMagic: socket read error: Success; InetStreamSocket: sd: 24 inet: 137.74.204.239 port: 8630; Will retry.
Feb 23 19:33:53 ics1 messagequeue[7785]: 53.670208 |0|0|0| W 31 CAL0071: InetStreamSocket::read: EOF during readToMagic: socket read error: Success; InetStreamSocket: sd: 23 inet: 137.74.204.238 port: 8630; Will retry.
Feb 23 19:33:53 ics1 messagequeue[7785]: 53.670212 |0|0|0| W 31 CAL0071: InetStreamSocket::read: EOF during readToMagic: socket read error: Success; InetStreamSocket: sd: 22 inet: 137.74.204.237 port: 8630; Will retry.
Feb 23 19:33:53 ics1 messagequeue[7785]: 53.670710 |0|0|0| W 31 CAL0071: InetStreamSocket::read: EOF during readToMagic: socket read error: Success; InetStreamSocket: sd: 15 inet: 137.74.204.236 port: 34708; Will retry.
Feb 23 19:33:53 ics1 dmlpackageproc[7785]: 53.741662 |405|853|0| D 21 CAL0001: Start SQL statement:  update#012      `stats_day` AS s#012      JOIN#012      (#012        SELECT sr.`hash_day`, sr.`datum`, sum(`views`) as `views`, sum(`clicks`) as `clicks`, sum(`unique_clicks`) as `unique_clicks`, sum(`unique_sys_clicks`) as `unique_sys_clicks`, sum(`unique_pa_clicks`) as `unique_pa_clicks`#012         FROM stats_update sr#012         /*SOURCEWHERE*/#012         GROUP BY sr.`hash_day`, sr.`datum`#012      ) AS su#012      ON su.`datum` = s.datum AND s.hash = su.`hash_day`#012      SET#012      s.`views` =  s.`views` + su.`views`,#012s.`clicks` =  s.`clicks` + su.`clicks`,#012s.`unique_clicks` =  s.`unique_clicks` + su.`unique_clicks`,#012s.`unique_sys_clicks` =  s.`unique_sys_clicks` + su.`unique_sys_clicks`,#012s.`unique_pa_clicks` =  s.`unique_pa_clicks` + su.`unique_pa_clicks`#012      WHERE 1 /*EXTRAWHERE*/;|stats|
Feb 23 19:34:01 ics1 dmlpackageproc[7785]: 01.956476 |405|853|0| D 21 CAL0001: End SQL statement
Feb 23 19:34:02 ics1 messagequeue[7785]: 02.141810 |0|0|0| W 31 CAL0071: InetStreamSocket::read: EOF during readToMagic: socket read error: Success; InetStreamSocket: sd: 21 inet: 137.74.204.236 port: 8630; Will retry.
Feb 23 19:34:02 ics1 messagequeue[7785]: 02.141925 |0|0|0| W 31 CAL0071: InetStreamSocket::read: EOF during readToMagic: socket read error: Success; InetStreamSocket: sd: 24 inet: 137.74.204.239 port: 8630; Will retry.
Feb 23 19:34:02 ics1 messagequeue[7785]: 02.141925 |0|0|0| W 31 CAL0071: InetStreamSocket::read: EOF during readToMagic: socket read error: Success; InetStreamSocket: sd: 23 inet: 137.74.204.238 port: 8630; Will retry.
Feb 23 19:34:02 ics1 messagequeue[7785]: 02.141925 |0|0|0| W 31 CAL0071: InetStreamSocket::read: EOF during readToMagic: socket read error: Success; InetStreamSocket: sd: 22 inet: 137.74.204.237 port: 8630; Will retry.
Feb 23 19:34:02 ics1 messagequeue[7785]: 02.142450 |0|0|0| W 31 CAL0071: InetStreamSocket::read: EOF during readToMagic: socket read error: Success; InetStreamSocket: sd: 17 inet: 137.74.204.236 port: 34892; Will retry.
Feb 23 19:34:04 ics1 dmlpackageproc[7785]: 04.849423 |405|853|0| D 21 CAL0001: Start SQL statement:  INSERT INTO stats_hour      select *  from infinidb_vtable.$vtable_405; |stats|
Feb 23 19:34:04 ics1 controllernode[7785]: 04.910677 |0|0|0| C 29 CAL0000: ExtentMap::getDbRootHWMInfo(): OID 3543 has HWM extent that is UNAVAILABLE for DBRoot1; part#: 3, seg#: 3, fbo: 8192, localHWM: 0, lbid: 12061696
Feb 23 19:34:05 ics1 dmlpackageproc[7785]: 05.377736 |405|853|0| D 21 CAL0001: End SQL statement with error
Feb 23 19:34:05 ics1 messagequeue[7785]: 05.378100 |0|0|0| W 31 CAL0071: InetStreamSocket::read: EOF during readToMagic: socket read error: Success; InetStreamSocket: sd: 21 inet: 137.74.204.236 port: 8630; Will retry.
Feb 23 19:34:05 ics1 messagequeue[7785]: 05.378152 |0|0|0| W 31 CAL0071: InetStreamSocket::read: EOF during readToMagic: socket read error: Success; InetStreamSocket: sd: 24 inet: 137.74.204.239 port: 8630; Will retry.
Feb 23 19:34:05 ics1 messagequeue[7785]: 05.378174 |0|0|0| W 31 CAL0071: InetStreamSocket::read: EOF during readToMagic: socket read error: Success; InetStreamSocket: sd: 23 inet: 137.74.204.238 port: 8630; Will retry.
Feb 23 19:34:05 ics1 messagequeue[7785]: 05.378198 |0|0|0| W 31 CAL0071: InetStreamSocket::read: EOF during readToMagic: socket read error: Success; InetStreamSocket: sd: 22 inet: 137.74.204.237 port: 8630; Will retry.
Feb 23 19:34:05 ics1 dmlpackageproc[7785]: 05.402107 |0|0|0| E 21 CAL0006: There is no extent information for table stats_hour
Feb 23 19:34:05 ics1 dmlpackageproc[7785]: 05.499266 |405|853|0| D 21 CAL0001: Start SQL statement:  ROLLBACK
Feb 23 19:34:09 ics1 dmlpackageproc[7785]: 09.456582 |405|853|0| D 21 CAL0001: End SQL statement
Feb 23 19:34:09 ics1 messagequeue[7785]: 09.456984 |0|0|0| W 31 CAL0071: InetStreamSocket::read: EOF during readToMagic: socket read error: Success; InetStreamSocket: sd: 20 inet: 137.74.204.236 port: 8630; Will retry.
Feb 23 19:34:09 ics1 messagequeue[7785]: 09.457043 |0|0|0| W 31 CAL0071: InetStreamSocket::read: EOF during readToMagic: socket read error: Success; InetStreamSocket: sd: 21 inet: 137.74.204.237 port: 8630; Will retry.
Feb 23 19:34:09 ics1 messagequeue[7785]: 09.457075 |0|0|0| W 31 CAL0071: InetStreamSocket::read: EOF during readToMagic: socket read error: Success; InetStreamSocket: sd: 22 inet: 137.74.204.238 port: 8630; Will retry.
Feb 23 19:34:09 ics1 messagequeue[7785]: 09.457079 |0|0|0| W 31 CAL0071: InetStreamSocket::read: EOF during readToMagic: socket read error: Success; InetStreamSocket: sd: 23 inet: 137.74.204.239 port: 8630; Will retry.
Feb 23 19:34:09 ics1 messagequeue[7785]: 09.457415 |0|0|0| W 31 CAL0071: InetStreamSocket::read: EOF during readToMagic: socket read error: Success; InetStreamSocket: sd: 15 inet: 137.74.204.236 port: 35416; Will retry.
Feb 23 19:34:09 ics1 dmlpackageproc[7785]: 09.492138 |405|0|0| D 21 CAL0001: Start SQL statement:  ROLLBACK
Feb 23 19:34:09 ics1 dmlpackageproc[7785]: 09.492219 |405|0|0| D 21 CAL0001: End SQL statement
Feb 23 19:34:09 ics1 messagequeue[7785]: 09.492454 |0|0|0| W 31 CAL0071: InetStreamSocket::read: EOF during readToMagic: socket read error: Success; InetStreamSocket: sd: 20 inet: 137.74.204.236 port: 8630; Will retry.
Feb 23 19:34:09 ics1 messagequeue[7785]: 09.492626 |0|0|0| W 31 CAL0071: InetStreamSocket::read: EOF during readToMagic: socket read error: Success; InetStreamSocket: sd: 22 inet: 137.74.204.238 port: 8630; Will retry.
Feb 23 19:34:09 ics1 messagequeue[7785]: 09.492643 |0|0|0| W 31 CAL0071: InetStreamSocket::read: EOF during readToMagic: socket read error: Success; InetStreamSocket: sd: 21 inet: 137.74.204.237 port: 8630; Will retry.
Feb 23 19:34:09 ics1 messagequeue[7785]: 09.492724 |0|0|0| W 31 CAL0071: InetStreamSocket::read: EOF during readToMagic: socket read error: Success; InetStreamSocket: sd: 23 inet: 137.74.204.239 port: 8630; Will retry.
Feb 23 19:34:09 ics1 messagequeue[7785]: 09.493010 |0|0|0| W 31 CAL0071: InetStreamSocket::read: EOF during readToMagic: socket read error: Success; InetStreamSocket: sd: 17 inet: 137.74.204.236 port: 35682; Will retry.



 Comments   
Comment by David Thompson (Inactive) [ 2017-02-24 ]

When you had this issue with InfiniDB, did it behave basically the same or differently? Is the only difference the frequency?

In looking at your code, one thing that stands out as having changed since InfiniDB is that INSERT ... SELECT statements are, by default, streamed into the cpimport code. Whether or not the behavior differs, you can disable this:
https://mariadb.com/kb/en/mariadb/columnstore-batch-insert-mode/
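For reference, batch insert mode is toggled per session. The exact variable name varies by version; the name below is an assumption based on the InfiniDB lineage of 1.0.x, so confirm it against the linked KB page:

```sql
-- Assumed variable name (InfiniDB lineage); verify in the KB page above.
SET infinidb_use_import_for_batchinsert = 0;  -- fall back to DML-path inserts

-- ... run the INSERT ... SELECT here ...

SET infinidb_use_import_for_batchinsert = 1;  -- restore the default behavior
```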

Thanks for all the detailed info, it should help triage.

Comment by Bernd Helm [ 2017-02-24 ]

The "no extent information available" error definitely also occurred on InfiniDB, and the partitions were corrupted afterwards. I currently do not know whether the getDbRootHWMInfo message looked the same. The frequency is hard to judge, as it seems to depend on the data range being updated (i.e. stats_hour, which holds 24 times the data of stats_day, is more likely to crash). It may be impossible to reproduce with only 1000 rows; in our setup we have ~100k rows on the source side and 12-30 million rows per day on the destination side.
On InfiniDB, the single-server setup is much more stable: the errors that occurred on the InfiniDB 5-server setup once or twice a month rarely happen on the single-server installation that serves as a backup and runs the same updates. It could be related to the number of DBRoots involved, so for reproducing, try a multi-server setup with a higher DBRoot count on each.
I also noticed that ColumnStore produces wrong results. I update both stats_day and stats_hour within the same transaction and always add the same number of clicks to both. On ColumnStore it happened, reproducibly, that the click counts in the two tables, which matched before, no longer matched after an update. This never happened on InfiniDB; something is definitely going wrong there.

Regarding batch insert mode: it does not apply to this story, as batch insert mode is only active for NON-transactional inserts, and these crashes only happen on transactional inserts/updates. Inserts with cpimport are stable for me.

Comment by David Thompson (Inactive) [ 2017-02-25 ]

You are correct that the batch insert mode would not apply due to this being in a transaction.

The error line:
Feb 23 19:34:04 ics1 controllernode[7785]: 04.910677 |0|0|0| C 29 CAL0000: ExtentMap::getDbRootHWMInfo(): OID 3543 has HWM extent that is UNAVAILABLE for DBRoot1; part#: 3, seg#: 3, fbo: 8192, localHWM: 0, lbid: 12061696

is likely the root cause. My guess is that the insert requires creation of a new extent because the extents across all PMs are full, and there is some race condition between creating the extent and preparing for the specific insert. That would also explain why it is sporadic. We have made some performance improvements to reduce contention, so it is possible one of those makes this more likely, or it is pure coincidence that your data volume/frequency makes it more likely to happen.

Comment by Bernd Helm [ 2017-02-27 ]

Thank you for your attention.

We installed InfiniDB 4.6.7 on the same server and ran the same data importer on the same data. It has been running fine since Friday, while ColumnStore crashed after a few minutes (and I replayed the tables and retried 3 times before giving up). So I am now certain that the table corruption frequency of ColumnStore is much higher, because other factors are almost eliminated.

Comment by David Hall (Inactive) [ 2017-02-28 ]

There's a good possibility that running update and then insert in the same transaction is causing the problem:

My theory is thus:
1) The UPDATE on stats_day causes version numbers to increase on the updated columns in the extent map for the updated partitions. This may include the partition that is not yet full.
2) The INSERT sees this and attempts to create a new partition, since the one it should insert into is of a new version (this is a bug), and creates one. But because the other partition is not yet full, the logic gets confused and we see the error.

Consider the following where stats_day.hash is oid 3543 (I imported the dump of extent map from calpontsupportreport).
MariaDB [dhall]> select * from editem where fileID=3543 and dbroot=1;
+----------+-------+--------+-------------+-------+---------+--------+--------+--------+--------+----------------+----------------+-----+---------+
| rstart   | rsize | fileID | blockoffset |   hwm | partNum | segnum | dbroot | colWid | status |         hi_val |         lo_val | seq | isValid |
+----------+-------+--------+-------------+-------+---------+--------+--------+--------+--------+----------------+----------------+-----+---------+
|  1032192 |     8 |   3543 |           0 |     0 |       0 |      1 |      1 |      8 |      1 | -1406600244076 |  1179684682204 |   1 |       2 |
| 83870720 |     8 |   3543 |        8192 | 16383 |       0 |      1 |      1 |      8 |      0 | -2593172762379 |   372025278032 |   1 |       2 |
| 84213760 |     8 |   3543 |           0 |     0 |       1 |      1 |      1 |      8 |      1 | -1186220995098 |    50062474090 |   1 |       2 |
| 84796416 |     8 |   3543 |        8192 | 16383 |       1 |      1 |      1 |      8 |      0 |   -14983050786 |   835237993668 |   1 |       2 |
| 85234688 |     8 |   3543 |           0 |     0 |       2 |      2 |      1 |      8 |      1 | -3967190796206 |  2014535934913 |   1 |       2 |
| 85474304 |     8 |   3543 |        8192 | 16383 |       2 |      2 |      1 |      8 |      0 |  -343962270290 | 10752077427063 |   1 |       2 |
| 86231040 |     8 |   3543 |           0 |  8165 |       3 |      3 |      1 |      8 |      0 | -3561685951823 |  2607766414395 |  37 |       2 |
| 12061696 |     8 |   3543 |        8192 |     0 |       3 |      3 |      1 |      8 |      1 |              0 |             -1 |   3 |       0 |
+----------+-------+--------+-------------+-------+---------+--------+--------+--------+--------+----------------+----------------+-----+---------+
8 rows in set (0.01 sec)

You can see that the last segment has a block offset (8192) greater than the high water mark (HWM, 8165) of the previous segment, and that the segment's status is 1. This is not allowed, hence the error. (status 0 is AVAILABLE and 1 is UNAVAILABLE; segments are created UNAVAILABLE and updated by later operations.)

Also note that the seq # for the next-to-last segment is 37, implying the extent was updated 37 times during this transaction.

I believe that if a COMMIT were issued before the INSERT, the problem would not occur; this can serve as a workaround until the bug is fixed.
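A minimal sketch of the suggested workaround, splitting the reporter's single transaction in two (table names are from the description; the elided column lists are left as placeholders):

```sql
BEGIN;
-- the UPDATE ... JOIN statements from the description, unchanged
UPDATE `stats_hour` AS s
  JOIN ( ... ) AS su ON su.`datum_hour` = s.datum AND s.hash = su.`hash_hour`
  SET ...
  WHERE 1;
COMMIT;   -- commit before the INSERT so the updated extents are settled

BEGIN;
-- the INSERT ... SELECT from the description, unchanged
INSERT INTO stats_hour
  SELECT ... ;
COMMIT;
```

Note that this trades away the atomicity of the original single transaction, which matters given the cross-table consistency concern raised above: if the second transaction fails, the UPDATEs are already committed.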

Earlier in the log, I see a number of errors of the following type:
Feb 23 11:19:16 ics1 PrimProc[29018]: 16.707044 |0|0|0| C 28 CAL0000: Invalid Range from HWM for lbid 82677624, range size should be <= blocksReadAhead: HWM 7465, dbroot 1, highfbo 9215, lowfbo 8704, blocksReadAhead 512, range size -1238

These indicate a bad extent map. There is a lot of verbosity there, but basically HWM cannot be less than lowfbo, which is the block offset from the start of the file to the beginning of the segment in question. HWM should point to the offset, from the start of the file, of the Logical Block (LBID) of the last block in the segment. The 'range size' reported in the error is the difference between them; it should never be negative, which would mean the last data block lies before the start of the segment.
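As an illustration only: against the same editem dump queried above, extents violating this invariant could be flagged with a query along these lines. The predicate is my reading of the error message, not an official consistency check:

```sql
-- Flag extents whose segment start (blockoffset, i.e. lowfbo) lies beyond
-- the recorded HWM while the extent is marked UNAVAILABLE (status = 1).
-- Against the dump above, this matches only the last, corrupted row.
SELECT fileID, partNum, segnum, blockoffset, hwm, status
FROM editem
WHERE status = 1
  AND blockoffset > hwm;
```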

Unfortunately, the reported extent map appears to no longer have these corruptions, so further analysis is difficult.

It's my hope that these anomalies are caused by the same forces at work in the main concern and that these errors will evaporate when COMMIT before INSERT is tried.

Comment by David Thompson (Inactive) [ 2017-03-07 ]

Hi Bernd, have you had a chance to review David Hall's suggestion, this would be a possible workaround and also help confirm this is indeed the bug?

Comment by David Thompson (Inactive) [ 2017-03-27 ]

Hi Bernd, did you have a chance to review this?

Comment by Bernd Helm [ 2017-03-27 ]

Sorry for making you ask twice.

No, I have had no chance to test this yet.
But IIRC I also had a crash on the first UPDATE; I am not sure, though.

Anyway, it would be better if you could reproduce it yourselves. I will set up a VM to test this (I hope it is reproducible with a single-VM, multi-DBRoot setup; we will see).
If you agree, I can provide you with a few GB of CSV files and a script that makes it possible to reproduce the issue within minutes, so you can test and confirm a future fix yourselves.
I can send you either the CSVs + scripts, or everything together in the VM.

Comment by David Thompson (Inactive) [ 2017-03-27 ]

It may be best to provide us the data and scripts, as that makes the issue independently reproducible. Probably best to email the details offline.

Generated at Thu Feb 08 02:22:13 UTC 2024 using Jira 8.20.16#820016-sha1:9d11dbea5f4be3d4cc21f03a88dd11d8c8687422.