[MXS-3141] No persistent connections Created: 2020-08-28  Updated: 2021-04-19  Resolved: 2021-04-19

Status: Closed
Project: MariaDB MaxScale
Component/s: Core
Affects Version/s: 2.5.2, 2.5.3, 2.5.5
Fix Version/s: N/A

Type: Bug Priority: Major
Reporter: Alex Assignee: markus makela
Resolution: Done Votes: 0
Labels: None
Environment:

Debian 9, Debian 10



 Description   

I've enabled persistent connections but they are not used.

I have more than 4,000 connections in the TIME_WAIT state per server.
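For reference, a count like this can be obtained with ss from iproute2 (a sketch; it assumes the backends listen on port 3306 and that your ss supports the -H flag):

```shell
# Count TCP sockets in TIME_WAIT towards the backend MySQL port (3306 assumed).
# -H suppresses the header, -t TCP only, -a all states, -n numeric output.
ss -Htan state time-wait '( dport = :3306 )' | wc -l
```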

The application connects via PHP 7.4 mysqli.

Output of maxctrl show server (excerpt):

├─────────────────────┼──────────────────────────────────────────────┤
│ Statistics          │ {                                            │
│                     │     "active_operations": 1,                  │
│                     │     "adaptive_avg_select_time": "373.941us", │
│                     │     "connections": 289,                      │
│                     │     "max_connections": 386,                  │
│                     │     "persistent_connections": 0,             │
│                     │     "routed_packets": 37211148,              │
│                     │     "total_connections": 2258233             │
│                     │ }                                            │
├─────────────────────┼──────────────────────────────────────────────┤
│ Parameters          │ {                                            │
│                     │     "address": "XXX.XXX.XX.XXX",             │
│                     │     "disk_space_threshold": null,            │
│                     │     "extra_port": 0,                         │
│                     │     "monitorpw": null,                       │
│                     │     "monitoruser": null,                     │
│                     │     "persistmaxtime": 60000,                 │
│                     │     "persistpoolmax": 500,                   │
│                     │     "port": 3306,                            │
│                     │     "priority": 0,                           │
│                     │     "proxy_protocol": false,                 │
│                     │     "rank": "primary",                       │
│                     │     "socket": null,                          │
│                     │     "ssl": false,                            │
│                     │     "ssl_ca_cert": null,                     │
│                     │     "ssl_cert": null,                        │
│                     │     "ssl_cert_verify_depth": 9,              │
│                     │     "ssl_cipher": null,                      │
│                     │     "ssl_key": null,                         │
│                     │     "ssl_verify_peer_certificate": false,    │
│                     │     "ssl_verify_peer_host": false,           │
│                     │     "ssl_version": "MAX"                     │
│                     │ }                                            │
└─────────────────────┴──────────────────────────────────────────────┘



 Comments   
Comment by markus makela [ 2020-08-28 ]

Can you attach your maxscale.cnf? Please make sure to remove passwords and IP addresses from the configuration before posting it.

Comment by Alex [ 2020-08-28 ]

[maxscale]
threads=auto
log_warning=1
log_notice=1
log_warn_super_user=1
 
 
[flea]
type=server
address=XXX.XX.XX.XX
port=3306
protocol=MariaDBBackend
persistpoolmax=500
persistmaxtime=60s
 
[odonata]
type=server
address=XXX.XX.XXX.XX
port=3306
protocol=MariaDBBackend
persistpoolmax=500
persistmaxtime=60s
 
 
[TheMonitor]
type=monitor
module=mariadbmon
servers=flea, odonata
user=maxscale
Password=###
 
[Read-Write-Service]
type=service
router=readwritesplit
cluster=TheMonitor
user=maxscale
password=#####
auth_all_servers=true
log_auth_warnings=true
max_slave_replication_lag=1s
use_sql_variables_in=master
master_reconnection=true
slave_selection_criteria=ADAPTIVE_ROUTING
max_sescmd_history=1500
prune_sescmd_history=true
master_accept_reads=true
strict_multi_stmt=true
strict_sp_calls=true
master_failure_mode=fail_on_write
retry_failed_reads=true
delayed_retry=true
delayed_retry_timeout=30s
transaction_replay=true
transaction_replay_attempts=20
transaction_replay_retry_on_deadlock=true
causal_reads=fast
causal_reads_timeout=5s
 
[Read-Write-Listener]
type=listener
authenticator_options=log_password_mismatch=true
service=Read-Write-Service
protocol=MariaDBClient
port=3306

Comment by markus makela [ 2020-08-31 ]

It's possible that all the connections in the pool are already in use and there are no connections left over. I think we can improve the statistics output of the servers to show how many connections have been taken from the connection pool.

Comment by Alex [ 2020-08-31 ]

I've checked on the server side; the connections do not persist. Besides:

persistpoolmax=500

"connections": 289

persistpoolmax is 500 connections, yet there are only 289 connections in total
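One way to check this on the backend is to look for idle ("Sleep") threads in SHOW PROCESSLIST, since pooled connections sit idle between reuses. A sketch that parses the tab-separated output of mysql -e; the process list below is sample data, not taken from this server:

```shell
# Pooled backend connections appear as idle ("Sleep") threads in the
# process list. Field 5 of the tab-separated `mysql -e "SHOW PROCESSLIST"`
# output is the Command column.
processlist='Id\tUser\tHost\tdb\tCommand\tTime\tState\tInfo
12\tapp\t10.0.0.5:39112\tmydb\tSleep\t34\t\t
13\tapp\t10.0.0.5:39114\tmydb\tQuery\t0\tSending data\tSELECT 1'
printf "$processlist" | awk -F'\t' '$5 == "Sleep" { n++ } END { print n+0 }'
# prints: 1
```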

Comment by markus makela [ 2020-09-01 ]

Ah yes, that is of course true. I apologize for missing that.

Do you see this behavior with the latest 2.4 release as well? We're in the process of releasing MaxScale 2.5.3 and when it is released, we'd recommend testing with that as well to see if this is a side-effect of some other bug.

Comment by Alex [ 2020-09-04 ]

Sorry, we are not using 2.4, so I cannot tell.

Comment by Alex [ 2020-09-11 ]

Installed 2.5.3; same problem.

Comment by markus makela [ 2020-12-08 ]

We added new statistics in MaxScale 2.5.4 that better track the persistent connection usage. Have you had the chance to upgrade to it yet and see what the statistics are? The statistics in question are max_pool_size, reused_connections and connection_pool_empty in the output of maxctrl show server.

Comment by Alex [ 2020-12-08 ]

I've upgraded to 2.5.5. I will check and report back.

Comment by Alex [ 2020-12-08 ]

Statistics from two servers:

├─────────────────────┼─────────────────────────────────────────────┤
│ Statistics          │ {                                           │
│                     │     "active_operations": 14,                │
│                     │     "adaptive_avg_select_time": "306us",    │
│                     │     "connection_pool_empty": 337424682,     │
│                     │     "connections": 849,                     │
│                     │     "max_connections": 1681,                │
│                     │     "max_pool_size": 274,                   │
│                     │     "persistent_connections": 0,            │
│                     │     "reused_connections": 632737,           │
│                     │     "routed_packets": 577852213,            │
│                     │     "total_connections": 338057419          │
│                     │ }                                           │

├─────────────────────┼──────────────────────────────────────────────┤
│ Statistics          │ {                                            │
│                     │     "active_operations": 14,                 │
│                     │     "adaptive_avg_select_time": "269.363us", │
│                     │     "connection_pool_empty": 173275192,      │
│                     │     "connections": 865,                      │
│                     │     "max_connections": 1753,                 │
│                     │     "max_pool_size": 19,                     │
│                     │     "persistent_connections": 0,             │
│                     │     "reused_connections": 687079,            │
│                     │     "routed_packets": 842551155,             │
│                     │     "total_connections": 173962271           │
│                     │ }                                            │

Comment by markus makela [ 2020-12-14 ]

Looks like the connection pool is working correctly: it's empty because it's being constantly used.

It does seem a little suspicious that the pool never gets big enough to fill up completely. This might still be something we have to investigate as the number of times the pool has been empty is pretty high compared to the times it's not.

One thing we could also track is the number of connections that were closed and not put into the pool. This could explain why the pool is not used if the connections are somehow ineligible for pooling.
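Using the second server's figures above, the fraction of connection requests served from the pool can be estimated as reused_connections / (reused_connections + connection_pool_empty); the field names are from the maxctrl statistics output:

```shell
# Estimate the connection-pool hit rate from the maxctrl statistics
# (second server's figures above).
reused=687079      # reused_connections
empty=173275192    # connection_pool_empty
awk -v r="$reused" -v e="$empty" \
    'BEGIN { printf "pool hit rate: %.2f%%\n", 100 * r / (r + e) }'
# prints: pool hit rate: 0.39%
```

In other words, well under one percent of connection requests found a pooled connection to reuse, which matches the observation that the pool is almost always empty.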

Comment by markus makela [ 2021-03-10 ]

If you increase persistmaxtime, does the number of connections stored in the pool increase? On average, how long are your client connections to MaxScale?
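For instance, in maxscale.cnf (illustrative only; the 300s below is an arbitrary value larger than the current 60s):

```ini
[flea]
type=server
address=XXX.XX.XX.XX
port=3306
protocol=MariaDBBackend
persistpoolmax=500
# keep idle pooled connections around longer than the previous 60s
persistmaxtime=300s
```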

Comment by Alex [ 2021-04-19 ]

Fixed in 2.5.10 together with "Packets out of order"

Comment by markus makela [ 2021-04-19 ]

OK, that's great to hear. I'll go ahead and close this issue then. We can just assume that the fix was related to MXS-3436.

Generated at Thu Feb 08 04:19:15 UTC 2024 using Jira 8.20.16#820016-sha1:9d11dbea5f4be3d4cc21f03a88dd11d8c8687422.