_ssl modifications in 2.7

I recently pushed into action a plan that had been brewing for a long while: to make the SSL module in the standard library work with our StacklessIO socket library.

The problem with _ssl is that internally it just uses a native socket BIO.  It doesn’t allow the application to control the details of the communication, which makes it unsuitable for anyone using a non-standard socket implementation.

I admit that I didn’t look around; otherwise I probably would have stumbled across pyOpenSSL. I simply assumed that the standard library ssl was the only option, and that was clearly unsuitable since it does its own socket IO.

So, anyway, what I did was to add two different features to _ssl.sslwrap():

  1. Instead of only allowing a socket to be wrapped, one can pass in a Python object. This is assumed to be a PyBIO object, i.e. an object that implements read() and write() methods. The SSLContext object thus created will internally create a BIO pair, and any “read” or “write” calls on it may invoke the corresponding Python callbacks on the PyBIO object to satisfy the SSL context’s need to send or receive data.
    This feature can be used to send the data over whatever IO channel one chooses to implement in Python (see the sketch after this list).
  2. Additionally, the user may wrap None. In this case, a “naked” SSLContext is created, with a BIO pair. This is suitable for use by any other C code that knows about OpenSSL, the layout of PySSLObject and how to pump data back and forth through a BIO pair. To facilitate this, I export the _ssl API in the same way that _socket exports its API (to _ssl), using a new function PySSLModule_ImportModuleAndAPI(void).
    The purpose of this feature is to be able to use standard Python code to manage certificates and set up the SSL context, then hand that context off to a custom transport layer written in C/C++.
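For illustration, here is a minimal sketch of what such a PyBIO object might look like, backed by an ordinary socket much as the test_ssl.py extensions do. The class name and the sslwrap() call outlined in the comments are mine, not the actual patched API; the only contract is that the object exposes read() and write():

    class SocketPyBIO(object):
        """Minimal PyBIO-style object backed by a regular socket.

        The patched _ssl only calls read() and write(); how the bytes
        are actually moved is entirely up to the application.
        """
        def __init__(self, sock):
            self.sock = sock

        def read(self, n):
            # Hand up to n bytes of incoming ciphertext to the SSL context.
            return self.sock.recv(n)

        def write(self, data):
            # Send ciphertext produced by the SSL context out on the wire.
            return self.sock.send(data)

    # Hypothetical usage; the exact sslwrap() signature is that of the
    # patched module and is only sketched here:
    #
    #   bio = SocketPyBIO(plain_socket)
    #   ssl_obj = _ssl.sslwrap(bio, False, None, None,
    #                          _ssl.CERT_NONE, _ssl.PROTOCOL_SSLv23, None, None)
    #   ssl_obj.do_handshake()
    #   ssl_obj.write("hello over SSL")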

This is now complete. The changes were not large. I have written extensions to the test_ssl.py unittests that wrap a regular socket in a PyBIO, and the entire thing works.

We are now using this in EVE to add SSL support to the backend webserver. We need to pipe data through the _ssl module and back into our code, where Stackless stack-swapping magic happens to emulate blocking IO.

The question that remains is this: I’d like to contribute this stuff back, but with Python 2.x being feature-frozen, there is no good place for it. Maybe I could push it into 3.x, but I haven’t looked; maybe it does SSL differently.

So it remains, like many other goodies, part of the custom branch of Stackless 2.7 that CCP uses. For the time being.


Winsock timeout / closesocket()

I was working on StacklessIO for Stackless Python. When running the test_socket unittest, I came across a single failure: the testInsideTimeout test would fail, with the server receiving an ECONNRESET when trying to write its “done!” string.

Normally this shouldn’t happen: even though the client has closed the connection, an RST packet is only sent in response to the send() call, so at the time of calling send() all appears well and the call should succeed.
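To make the ordering concrete, here is a rough reconstruction of the scenario with plain blocking sockets (the names, timings and structure are mine, not the actual test_socket code). With the standard socket module the server’s send() does indeed go through, which is what made the failure point at StacklessIO itself:

    import socket
    import threading
    import time

    def server(listener):
        conn, _ = listener.accept()
        time.sleep(1.0)            # outlive the client's timeout
        try:
            conn.send(b"done!")    # ordinarily succeeds; an RST would surface later
            print("server: send succeeded")
        except socket.error as e:
            print("server: send failed: %s" % e)  # ECONNRESET under an abortive close
        finally:
            conn.close()

    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))
    listener.listen(1)
    t = threading.Thread(target=server, args=(listener,))
    t.start()

    client = socket.create_connection(listener.getsockname())
    client.settimeout(0.2)
    try:
        client.recv(1024)          # times out: the server hasn't sent anything yet
    except socket.timeout:
        pass
    client.close()                 # with pending overlapped IO this becomes abortive
    t.join()
    listener.close()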

Investigating this further, it turned out to be due to the way I’m implementing timeout.

StacklessIO uses Winsock overlapped IO.  A request is issued, and when it finishes, a thread waiting on an IO completion port wakes up and causes the waiting tasklet to be resumed.  To time out, for example, a recv() call, I schedule a Windows timer as well.  If it fires before the request is done, the tasklet is woken up with a timeout error.  There appears to be no way in the API to cancel a pending IO request, so at this point the IO is still pending.
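As a rough, thread-based analogue of that pattern (the real code is C, driving overlapped Winsock calls, an IO completion port thread and tasklets, none of which appears here), the shape is something like this:

    import threading

    class PendingOp(object):
        """Stand-in for an outstanding overlapped request."""
        def __init__(self):
            self.event = threading.Event()
            self.result = None
            self.timed_out = False

    def complete(op, result):
        # Called by the completion side (the IO completion port thread
        # in the real implementation) when the request finishes.
        op.result = result
        op.event.set()

    def wait_with_timeout(op, seconds):
        """Wake the waiter on completion or on the timer, whichever fires first."""
        def on_timeout():
            op.timed_out = True
            op.event.set()     # wake the waiter; the request itself stays pending
        timer = threading.Timer(seconds, on_timeout)
        timer.start()
        op.event.wait()
        timer.cancel()
        if op.timed_out and op.result is None:
            raise RuntimeError("timed out; the underlying request is still pending")
        return op.result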

Anyway, this is all well and good, but where does the RST come from, then?  Well, when the timeout occurs, the tasklet wakes up and the socket is closed.  And calling closesocket() on a connection with pending IO has at least two effects, only one of which is documented:

  1. All pending IO is canceled with the WSA_OPERATION_ABORTED error.
  2. An RST is sent to the remote party.

I’ve never seen the latter behaviour documented.  But apparently, calling closesocket() while IO is pending is equivalent to an abortive close().

I’m not sure if this is significant.  If a socket call times out, the usual recommendation is to close the connection anyway since the connection may be in an undefined state due to race conditions.  But it is a bit annoying all the same.