Details
Type: Bug
Status: Closed
Priority: Minor
Resolution: Incomplete
Affects Version: 1.1.5post1
Fix Version: None
Environment:
* lib version: 1.1.5post3, not 1.1.5post1
* conda Python environment
* Ubuntu 22.04 LTS, mate-desktop
* PyCharm Professional
* SQLAlchemy
Python: 3.10.9
Description
I'm running a query over millions of records and need to use server-side cursors. I'm able to get streaming working, but when I close the result set, either by exiting the context manager block or by calling #close() explicitly, everything hangs while the driver pulls in and discards the remaining data associated with the server-side cursor.
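Roughly the pattern I'm using (the connection string, table name, and early-exit limit below are placeholders, not my actual code):

```python
from sqlalchemy import create_engine, text

# Placeholder DSN and table; the real query scans millions of rows.
engine = create_engine("mariadb+mariadbconnector://user:pass@host/bigdb")

with engine.connect() as conn:
    # stream_results=True asks the dialect for a server-side (unbuffered) cursor.
    result = conn.execution_options(stream_results=True).execute(
        text("SELECT * FROM big_table")
    )
    for i, row in enumerate(result):
        if i >= 1000:      # stop early after a small sample
            break
    result.close()         # hangs here until the remaining rows are drained
```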
The mariadb client code indicates that #close() on a cursor is supposed to cancel any pending results:
```python
def close(self):
"""
Closes the cursor.
If the cursor has pending or unread results, .close() will cancel them
so that further operations using the same connection can be executed.
The cursor will be unusable from this point forward; an Error
(or subclass) exception will be raised if any operation is attempted
with the cursor."
"""
        # CONPY-231: fix memory leak
if self._data:
del self._data
if not self.connection._closed:
super().close()
```
Shouldn't #close() on a result object stop the streaming of data from the server, instead of causing a potentially hours-long hang while the remaining result set is streamed over the network and then discarded? Or is this incomplete or incorrect documentation?
I did find that the pymysql lib explicitly exhausts the cursor on close, with a comment noting that there isn't a way to stop the stream of data.
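For comparison, this is roughly what that drain-on-close behaviour looks like from the caller's side with pymysql's unbuffered cursor (connection details are placeholders; the comment I'm referring to lives in pymysql's internals, not shown here):

```python
import pymysql
import pymysql.cursors

# Placeholder connection details.
conn = pymysql.connect(host="host", user="user", password="pass", database="bigdb")

with conn.cursor(pymysql.cursors.SSCursor) as cur:   # server-side / unbuffered cursor
    cur.execute("SELECT * FROM big_table")
    sample = cur.fetchmany(1000)
# Leaving the block calls cur.close(), which reads and discards every remaining
# row before the connection can be reused, since (per pymysql's comment) there
# is no way to cancel the pending result stream.
```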