[MDEV-21423] lock-free trx_sys gets performance regression caused by lf_find and ut_delay Created: 2020-01-05 Updated: 2024-01-02

| Status: | Stalled |
| Project: | MariaDB Server |
| Component/s: | Storage Engine - InnoDB |
| Affects Version/s: | 10.5.0 |
| Fix Version/s: | 10.6 |
| Type: | Bug |
| Priority: | Major |
| Reporter: | zongzhi chen |
| Assignee: | Marko Mäkelä |
| Resolution: | Unresolved |
| Votes: | 0 |
| Labels: | innodb, lock-free, performance |
| Environment: | linux |
| Epic Link: | InnoDB trx_sys improvements |
| Description |

Hello guys, we have ported the lock-free trx_sys; however, I find that the oltp_read_write case gets a large performance regression compared with the non-lock-free version. This is my sysbench test configuration:
There is another issue related to the lock-free trx_sys. Below is the sysbench result:
| Comments |
| Comment by Sergey Vojtovich [ 2020-01-06 ] |
baotiao, feels like another iterator issue. Was this ARM or Intel? How many cores?
| Comment by zongzhi chen [ 2020-01-06 ] |
This is Intel with 64 cores. I guess the performance cost comes from iterating the lf_hash and copying the items to create a new read view. When there are few items in the lf_hash, the iteration is faster than the lock, memcpy, unlock sequence. However, when there are many items in the lf_hash, the iteration takes much more time. So is there an operation that can directly copy the lf_hash and store it in the read view without a lock?
| Comment by Sergey Vojtovich [ 2020-01-06 ] |
No locks are acquired during the ReadView snapshot. I'm afraid a direct copy won't work either. If we compare the original:
versus the new:
memcpy() is certainly cheaper than lf_hash iteration, but it requires all 560 threads to acquire trx_sys.mutex, which is certainly more expensive. I will need to look into that. It feels like the problem is the same as in MDEV-20630. IIRC for 560 threads we must have 1024 dummy nodes, which have to be iterated. Did you try comparing 500 threads? Or 1000 threads?
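The trade-off being discussed can be sketched roughly as follows. This is an illustrative model only (MutexTrxList, Node and snapshot_lf are made-up names, not MariaDB code): the pre-lock-free design snapshots a contiguous id array under one mutex, while the lock-free design must chase pointers through scattered hash nodes, including the dummy bucket nodes that carry no payload.

```cpp
#include <cstdint>
#include <mutex>
#include <vector>

using trx_id_t = uint64_t;

// Pre-lock-free style: ids are contiguous, so a snapshot is one mutex
// acquisition plus one bulk copy (cache friendly, but all readers
// serialize on the mutex).
struct MutexTrxList
{
  std::mutex m;
  std::vector<trx_id_t> ids;          // active transaction ids
  std::vector<trx_id_t> snapshot()
  {
    std::lock_guard<std::mutex> g(m); // every reader serializes here
    return ids;                       // bulk copy of the whole array
  }
};

// Lock-free style: ids live in hash nodes scattered through memory, so a
// snapshot walks the chain node by node, skipping dummy (bucket) nodes.
struct Node { trx_id_t id; bool dummy; Node *next; };

std::vector<trx_id_t> snapshot_lf(const Node *head)
{
  std::vector<trx_id_t> out;
  for (const Node *n = head; n; n = n->next)
    if (!n->dummy)
      out.push_back(n->id);           // pointer chasing per element
  return out;
}
```

The walk touches every dummy node, which is why its cost grows with the total node count even when few transactions are active.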
| Comment by zongzhi chen [ 2020-01-06 ] |
Yes, I tried comparing 560 threads. If you make lf_hash_iterate less expensive, then MDEV-20630 would not need to re-init the hash table size.
| Comment by Sergey Vojtovich [ 2020-01-06 ] |
Right, I guess even if MDEV-20630 can be worked around by re-initing the hash, this one can't. Testing 500 and 1000 threads (lf vs non-lf) should prove my guess (I'm expecting lf to be faster with these thread counts).
| Comment by Sergey Vojtovich [ 2020-01-08 ] |
I kind of reproduced this on my Broadwell pre-lf (1464f4808c08155fd10ba09eae2bb2c3f177482c), and I confirm the iterator is high in the profile. baotiao, are you benchmarking on Skylake?
| Comment by Sergey Vojtovich [ 2020-01-08 ] |
Maybe this is worth exploring:
The above limits the number of dummy nodes. It brings performance back to the pre-lf level.
| Comment by Sergei Golubchik [ 2020-01-23 ] |
As far as I understand, if you have fewer dummy nodes, you'll have more hash collisions. Basically, all nodes between two dummy nodes are one hash bucket. So with a limit like csize < 128 you pretty much change the hash from O(1) to O(N). Maybe it'd be better to increase MAX_LOAD? It would still be O(1), but with a larger constant factor.
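The chain-length argument can be put in numbers with a toy model (the figures are illustrative, not measurements): since all payload nodes between two dummy nodes form one bucket, capping the dummy-node count makes the expected lookup cost grow as elements/buckets instead of staying bounded by the load factor.

```cpp
// Toy model: average payload nodes per bucket when the number of dummy
// (bucket) nodes is capped. All figures are illustrative.
constexpr double expected_chain_length(unsigned elements, unsigned buckets)
{
  return static_cast<double>(elements) / buckets;
}

// With csize capped at 128 dummy nodes and 1024 live transactions, each
// lookup scans 8 nodes on average; an uncapped hash that keeps
// elements/buckets <= MAX_LOAD stays at a small constant instead.
static_assert(expected_chain_length(1024, 128) == 8.0, "O(N/buckets) growth");
static_assert(expected_chain_length(1024, 1024) == 1.0, "uncapped stays O(1)");
```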
| Comment by zongzhi chen [ 2020-02-21 ] |
Increasing MAX_LOAD also causes more collisions: as you said, it changes the algorithm from O(1) to O(MAX_LOAD).
| Comment by Sergey Vojtovich [ 2020-05-16 ] |
Looks like my patches for MDEV-20630 did make some difference for this workload: https://github.com/MariaDB/server/commits/bb-10.5-svoj-MDEV-20630
| Comment by Marko Mäkelä [ 2020-12-17 ] |
I think that the lock-free hash table belongs to the runtime team, of which sanja is the leader. InnoDB trx_sys.rw_trx_hash is merely using that code. There are also bigger worries: ThreadSanitizer is giving warnings, and apparently there are occasional crashes on the ARM platform.
| Comment by Daniel Black [ 2021-04-22 ] |
ARM fixed in
| Comment by Marko Mäkelä [ 2021-10-29 ] |
In 10.6, thanks to
On ARMv8 we might want to implement
| Comment by Vladislav Vaintroub [ 2021-10-29 ] |
I'm running a synthetic benchmark which does
from 10 to 10 000 concurrent client connections, with the table defined as
Problem: if I run 256 clients after 10 000 clients, I get 700 000 queries per minute. If I run it before, I get 1 200 000 queries per minute. That's a 40% regression. The profiler confirms that it is searching for something in lf_hash.
| Comment by Vladislav Vaintroub [ 2021-10-29 ] |
I agree with marko. InnoDB should try another kind of hash that does not suffer from garbage collection issues, especially since there are now high-performance hashes in InnoDB that can be protected by the new scalable slim reader-writer lock, the transactional memory lock guard, and what not.
| Comment by Sergey Vojtovich [ 2021-10-29 ] |
FWIW: the biggest problem with lf_hash is memory fragmentation, which causes the iterator to thrash the dTLB. I got an ~80% performance improvement by allocating nodes close to each other. This solution is under 100 LOC.
| Comment by Vladislav Vaintroub [ 2021-10-29 ] |
I trust that, and I'd take what works, as long as it works. Currently, the lf thing suffers from GC issues, and the non-lf thing maybe would not. We're not using lock-free because lock-free algorithms are beautiful; it should improve performance, that's all.
| Comment by Sergey Vojtovich [ 2021-10-29 ] |
wlad, uhm, does this issue have anything to do with GC? I thought it was about the poor performance of the lf_hash iterator, which doesn't touch GC at all.
| Comment by Sergey Vojtovich [ 2021-10-29 ] |
...and you would certainly not see this problem with UPDATE t1 SET i = i+1, because it doesn't snapshot a ReadView. It should be something like sysbench oltp_rw, probably in read-committed mode, with backup locks disabled (unless they were fixed in the meantime).
| Comment by Vladislav Vaintroub [ 2021-10-29 ] |
Nah, I do see this problem exactly with this workload. Below is the full technology demonstration; I did not use sysbench this time. I also reran it, just to make sure I did not make a mistake. match_pins is around 15% CPU.
| Comment by Sergey Vojtovich [ 2021-10-29 ] |
wlad, what you see is a different problem. The original problem was about the lf_hash iterator, but your updates unveil a pure lf_hash GC issue. Nevertheless, having GC (match_pins()) consume 15% of CPU is not acceptable indeed. One thing worth keeping in mind is that these 15% come from 2 lf_hashes: MDL and trx_sys. There's also the lf_hash of the table definition cache on the way, but it has a different load pattern and shouldn't contribute to match_pins(). It is also nice to see numbers like 700 000 QPS and 1 200 000 QPS; IIRC in the pre-lf trx_sys (10.2) we couldn't go over ~100 000 QPS.

Speaking of updates: with the current lf_hash implementation, rw_trx_hash concurrency tends to 100%, that is, threads can perform hash operations without blocking each other. But it has overhead: GC. If you partition rw_trx_hash and use a mutex/rw_lock for protection, concurrency is going to be lower, roughly (100 - 100/N)% (where N is the number of partitions). That is, if you have 4 partitions it will be 75%, for 10 partitions 90%, etc. I must admit I'm unsure which approach is going to provide better throughput. A partitioned hash sounds promising, although lf_hash has room for improvement as well: it should be possible to make GC much faster than it currently is.

Speaking of the quintessence of the problem: the current MVCC implementation would probably never scale well beyond 10 000 connections. It has to snapshot the identifiers of all connections and organise them for quick access (currently it is sort+bsearch). It is obvious that the more connections one has, the more expensive MVCC handling becomes. Dunno, probably there are better MVCC approaches out there.
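The partition-concurrency estimate from the comment above can be expressed directly (a sketch of the stated formula, not server code):

```cpp
// Two operations on a hash split into N independently locked partitions
// collide only when they hit the same partition (probability 1/N under a
// uniform hash), giving roughly (100 - 100/N)% concurrency.
constexpr double partition_concurrency_pct(unsigned n_partitions)
{
  return 100.0 - 100.0 / n_partitions;
}

static_assert(partition_concurrency_pct(4) == 75.0, "4 partitions -> 75%");
static_assert(partition_concurrency_pct(10) == 90.0, "10 partitions -> 90%");
```

The formula shows the diminishing return: going from 4 to 10 partitions only raises the estimate from 75% to 90%, while the lock-free design sits at the 100% asymptote at the cost of GC.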
| Comment by Vladislav Vaintroub [ 2021-10-29 ] |
svoj, to clarify: the numbers I gave are queries per minute, not per second. It would be fabulous if this were QPS, but in fact I ran the test for a minute and reported the overall number (bear in mind, it is a laptop with 4 real cores, so maybe it is still fine this way). There is also quite a lot of contention on the single row that the test operates on. The 15% comes almost entirely from trx_sys.
| Comment by Sergey Vojtovich [ 2021-10-29 ] |
Oh, 4 cores, c'mon. Note that your profiler refers to lf_dynarray_iterate(), while the bug is about lf_hash_iterate(). But match_pins() is a problem for sure. BTW, I'd suggest you check whether TLB load is an issue.
| Comment by Marko Mäkelä [ 2021-10-31 ] |
svoj, for buf_pool.page_hash and lock_sys.rec_hash we have one rw-lock per cache line. That is, on AMD64 with 64-byte cache lines, each rw-lock covers 7 pointers. As a side effect of acquiring the lock, you will also have loaded those 7 pointers.

The maximum number of concurrent transactions is innodb_page_size/16*128. With the default innodb_page_size=16k and a 64-byte cache line, we might allocate a hash array for all 131,072 elements, or 149,796 elements with the rw-locks (2,340 of them). I think that using a bit over 1 megabyte for the hash array might be acceptable. Using your formula, this would reduce concurrency by less than 5‰.
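The sizing arithmetic above can be checked mechanically. The constants are taken from the comment (16 KiB pages, 64-byte cache lines, 8-byte pointers, one rw-lock slot per cache line covering the remaining 7 pointer slots); the layout is my reading of it, not actual server code.

```cpp
// Maximum concurrent transactions per the stated formula.
constexpr unsigned max_trx = 16384 / 16 * 128;       // 131,072
// One cache line holds 1 rw-lock slot + 7 bucket pointers.
constexpr unsigned cache_lines = (max_trx + 6) / 7;  // 18,725 lines
constexpr unsigned array_bytes = cache_lines * 64;   // total array size

static_assert(max_trx == 131072, "matches the 131,072 figure");
static_assert(array_bytes == 1198400, "~1.14 MiB");
static_assert(array_bytes > 1u << 20, "\"a bit over 1 megabyte\" checks out");
```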
| Comment by Sergey Vojtovich [ 2021-10-31 ] |
marko, yeah, it may work and fix the issue described by Wlad, although it is not evident (to me) whether it is going to fix snapshot performance, where you need to collect all active rw transaction ids as well as aggregate min(trx->no). Luckily the rw_trx_hash_t framework is friendly to experiments. I hope it can get tested on systems that have more than 4 laptop cores too.
| Comment by Marko Mäkelä [ 2021-11-18 ] |
svoj, to my understanding, our performance testing on Microsoft Windows is currently limited to laptop-grade hardware. On GNU/Linux we have more choice. The hash table traversal for making a snapshot could acquire a shared lock on one hash array slice at a time. That lock could be elided on processors that support it, as in
Somewhat related to this, in
| Comment by Vladislav Vaintroub [ 2021-11-18 ] |
The 4-core laptop highlights the performance bugs much better than GNU/Linux, where, no matter what patch is applied, all test differences usually fall into a 1-2% noise ratio (also, GNU/Linux has a real problem with providing succinct, readable, usable profiler output). Until Linux can provide a profile that can pinpoint a problem, I'll have to stick to Windows laptops, with 4 cores or however many cores we'll have.
| Comment by Marko Mäkelä [ 2022-01-28 ] |
I have not been actively working on this, but I keep this "in progress" so that I will not forget about it. Hopefully I will be able to return to this soon. Like I wrote earlier, if we assume the default innodb_page_size=16k, the new trx_sys.rw_trx_hash could be an array that is a bit over 1 megabyte on a 64-bit system. The first element in each cache line would be a slim mutex or rw-lock (4 or 8 bytes), and the subsequent elements would be pointers to something similar to what we currently have:
We might eliminate the element mutex and simply let each hash array mutex protect the contents of all hash bucket chains that reside in that cache line (7 on AMD64 and up to 15 on IA-32). If we do that, we might just let the transaction commit number be a normal variable. But we would need a next-bucket pointer. The new structure could be like this:
This is 32 bytes per element (or 24 on a 32-bit system, or 16 if we omitted the redundant id field). We might try to pack multiple elements per cache line (2 on AMD64) to improve the locality of reference. If the element mutex turns out to be necessary for performance, then we would probably want to have only one element per cache line.
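A plausible shape for the 32-byte element described above might look like this. The field layout is my guess at what the comment describes, not the actual prototype code:

```cpp
#include <cstdint>

struct trx_t;  // opaque here

// Hypothetical hash element: the commit number as a normal variable, a
// next-bucket pointer, and the (redundant) transaction id.
struct hash_element_sketch
{
  uint64_t id;                 // duplicated trx->id (could be omitted)
  uint64_t no;                 // commit number, a plain variable
  trx_t *trx;                  // the transaction object
  hash_element_sketch *next;   // next element in the same bucket chain
};

// 32 bytes on a 64-bit system, so 2 elements pack into a 64-byte cache
// line, matching the locality-of-reference idea in the comment.
static_assert(sizeof(hash_element_sketch) == 32, "32 bytes per element");
```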
| Comment by Marko Mäkelä [ 2022-04-07 ] |
An alternative hash table element could be as follows:
In this variant, the trx->id would not be duplicated, and the next pointer would be in trx_t, in the same cache line as n_ref, id, state, and possibly also rw_trx_hash_element. On AMD64, this would allow us to pack 3 elements and one rw-lock into a 64-byte cache line. On ARMv8, it would be 7 elements per 128-byte cache line.

We might go one step further and move the commit number to trx_t::no as well, and directly store trx_t pointers in the hash array (7 to 15 trx_t* per cache line). In trx_t, we would like all of the following to be in a single cache line: n_ref, state, id, no, next, rw_trx_hash_element, mutex. We seem to be fine: 4+4+8+8+ptr+ptr+4 = 36 or 44 bytes (more than 4 bytes for the mutex if SUX_LOCK_GENERIC is needed, that is, if the underlying operating system does not support anything futex-like). If the underlying operating system does not support futex, I would use the 32-bit std::atomic based rw_lock as a pure spinlock, like we do for the buf_pool.page_hash latches.
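The byte count in the comment can be verified with the stated field sizes (these are just the sizes listed above with 8- or 4-byte pointers substituted in, not the real trx_t definition):

```cpp
// Field sizes as listed: n_ref(4) + state(4) + id(8) + no(8) + next(ptr)
// + rw_trx_hash_element(ptr) + mutex(4, the futex-backed case).
constexpr unsigned lp64_bytes  = 4 + 4 + 8 + 8 + 8 + 8 + 4;  // 64-bit ptrs
constexpr unsigned ilp32_bytes = 4 + 4 + 8 + 8 + 4 + 4 + 4;  // 32-bit ptrs

static_assert(lp64_bytes == 44, "44 bytes, fits one 64-byte cache line");
static_assert(ilp32_bytes == 36, "36 bytes on a 32-bit system");
```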
| Comment by Marko Mäkelä [ 2022-04-08 ] |
wlad, I posted a rough prototype. It survived the regression tests, but I did not check performance yet. It turns out that, thanks to having the commit number directly in trx_t::no, we do not need any back pointer from trx_t to the hash table element. The only extra thing we need is a trx_t::rw_trx_hash pointer to the next trx_t in the hash bucket chain.
| Comment by Marko Mäkelä [ 2022-04-13 ] |
I refined my prototype, implemented the cache line alignment of trx_t, and while doing that replaced pointless 8×256 bytes of padding with alignment constraints. I tried to determine the optimal compile-time constant size for the hash table. On my dual Intel Xeon E5-2630 v4 system (2×2×10 threads), the sweet spot of throughput (average number of transactions per second) according to my quick testing (30-second sysbench oltp_update_index with 8×100,000 rows and innodb_flush_log_at_trx_commit=0 on fast NVMe storage) appears to be 64 payload cells if we ignore the last column. For higher loads, 256 (and maybe even more) cells would seem to help.
The performance dip at 80 concurrent connections was due to a checkpoint flush in the middle of my scripted prepare+benchmark run. In each case, the top throughput was reached at 160 concurrent connections (4 times the number of CPU hardware threads).
| Comment by Marko Mäkelä [ 2022-04-13 ] |
I conducted some further tests on 10.9:
Like with the 10.6 test, at 80 concurrent connections there was some concurrent checkpoint flushing activity. Unfortunately, we can see that at higher concurrency, the lock-free hash table performs better; for 10.6 this did not seem to be the case. The table below duplicates the table of the previous comment, but adds a bottom row for the baseline performance:
The different characteristics in 10.9 could be explained by
In any case, I think that the cache-line-optimized trx_t allocation could be adopted (after some performance testing).
| Comment by Marko Mäkelä [ 2022-04-14 ] |
I applied some cleanup in
The numbers at 80 concurrent connections are unreliable, because a checkpoint flush occurred during that part of my microbenchmark.
| Comment by Marko Mäkelä [ 2022-04-14 ] |
I merged MDEV-28313 to 10.9 to create a new baseline, and then merged this on top. The results are just as bad as with 10.6. We can only observe an improvement at 20 concurrent connections; there is a regression in all other cases:
| Comment by Marko Mäkelä [ 2022-04-26 ] |
Apart from some recovery code, there are two sources of trx_sys.rw_trx_hash traversal which will be affected by MDEV-20630:
In MySQL 8.0.29, trx_rw_min_trx_id() was partly removed as redundant and partly replaced with a simpler operation. I created something similar, replacing trx_sys.get_min_trx_id() with trx_sys.find_same_or_older(). I expect it to improve the performance of operations that involve secondary indexes. The new hash table traversal trx_sys.find_same_or_older() employs short-circuit evaluation, in a strictly read-only operation (not even acquiring any mutex).
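The short-circuiting traversal could look roughly like this. This is an illustrative sketch, not the actual trx_sys.find_same_or_older() implementation: the scan stops at the first transaction whose id is the same as or older than the limit, without taking any mutex.

```cpp
#include <cstdint>

struct elem_t { uint64_t id; elem_t *next; };

// Return whether any element has id <= limit, stopping at the first
// match (short-circuit evaluation). A strictly read-only walk: unlike a
// full min(trx->id) aggregation, no mutex is taken and the traversal
// usually does not need to visit every element.
bool find_same_or_older(const elem_t *head, uint64_t limit)
{
  for (const elem_t *e = head; e; e = e->next)
    if (e->id <= limit)
      return true;  // no need to finish the traversal
  return false;
}
```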
| Comment by Marko Mäkelä [ 2022-04-28 ] |
I reran the 30-second 8×100,000-row sysbench oltp_update_index with innodb_flush_log_at_trx_commit=0 to get some quick indication of the impact:
The last two rows indicate that there is quite a bit of variation in the throughput, in addition to the checkpoint glitch that occurs during the 80-connection test. The combination with MDEV-26603 must also be tested against a baseline with innodb_flush_log_at_trx_commit=1:
So, unfortunately, even this fix does not cure the counterintuitive regression revealed by MDEV-26603. axel, can you please run your standard benchmarks on 10.6+patch?
| Comment by Marko Mäkelä [ 2022-04-29 ] |
I filed a separate ticket
I guess that our standard test batteries might not exercise locking conflicts at all, especially on secondary indexes. Something bigger like TPC-C might show a difference.
| Comment by Marko Mäkelä [ 2023-03-14 ] |
| Comment by Marko Mäkelä [ 2023-10-01 ] |
MDEV-20630 may be more rewarding to fix first. I see that the lock-free hash table is using std::memory_order_seq_cst, while a less intrusive memory order (or explicit memory barriers) might work. I have not studied that code in much detail. What I attempted so far was to make InnoDB invoke the expensive operations less often (
| Comment by JiraAutomate [ 2023-12-05 ] |
Automated message: