I sometimes get this question. And instead of starting a rant about microthreads, co-routines, tasklets and channels, I present the essential piece of code from the implementation:
The Code:
/*
  the frame dispatcher will execute frames and manage
  the frame stack until the "previous" frame reappears.
  The "Mario" code if you know that game :-)
*/
PyObject *
slp_frame_dispatch(PyFrameObject *f, PyFrameObject *stopframe, int exc, PyObject *retval)
{
    PyThreadState *ts = PyThreadState_GET();

    ++ts->st.nesting_level;
    /*
      frame protocol:
      If a frame returns the Py_UnwindToken object, this
      indicates that a different frame will be run.
      Semantics of an appearing Py_UnwindToken:
      The true return value is in its tempval field.
      We always use the topmost tstate frame and bail
      out when we see the frame that issued the
      originating dispatcher call (which may be a NULL frame).
    */
    while (1) {
        retval = f->f_execute(f, exc, retval);
        if (STACKLESS_UNWINDING(retval))
            STACKLESS_UNPACK(retval);
        /* A soft switch is only complete here */
        Py_CLEAR(ts->st.del_post_switch);
        f = ts->frame;
        if (f == stopframe)
            break;
        exc = 0;
    }
    --ts->st.nesting_level;
    /* see whether we need to trigger a pending interrupt */
    /* note that an interrupt handler guarantees current to exist */
    if (ts->st.interrupt != NULL &&
        ts->st.current->flags.pending_irq)
        slp_check_pending_irq();
    return retval;
}
(This particular piece of code is taken from an experimental branch called stackless-tealet, selected for clarity)
What is it?
It is the frame execution code: a top-level loop that executes Python function frames. A “frame” represents the code inside a Python function, together with its execution state.
Why is it important?
It is important in the way it contrasts to C Python.
Regular C Python uses the C execution stack, mirroring the execution stack of the Python program that it is interpreting. When a Python function foo() calls a Python function bar(), this happens by a recursive invocation of the C function PyEval_EvalFrame(). This means that in order to reach a certain state of execution of a C Python program, the interpreter needs to be in a corresponding state of C recursion.
In Stackless Python, the C stack is decoupled from the Python stack as much as possible. The next frame to be executed is placed in ts->frame and the frame chain is executed in a loop.
This allows two important things:
The state of execution of a Stackless python program can be saved and restored easily. All that is required is the ability to pickle execution frames and other runtime structures (Stackless adds that pickling functionality). The recursion state of a Python program can be restored without having the interpreter enter the same level of C recursion.
Frames can be executed in any order. This allows many tasklets to be created and code that switches between them. Microthreads, if you will. Co-routines, if you prefer that term. But without forcing the use of the generator mechanism that C python has (in fact, generators can be more easily and elegantly implemented using this system).
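The dispatch idea can be mimicked in pure Python, using generators as stand-ins for frames. This is only an illustrative sketch of the trampoline pattern, not Stackless API:

```python
# A toy dispatcher: generators stand in for Python frames, and a flat
# scheduler loop runs them instead of recursing into each call.
# All names here are illustrative, not part of Stackless.

def foo():
    yield "foo: step 1"
    yield "foo: step 2"

def bar():
    yield "bar: step 1"

def dispatch(frames):
    # Run all "frames" round-robin in a flat loop; no C-level recursion
    # is needed to switch between them, so any frame can be suspended
    # and resumed in any order.
    frames = list(frames)
    out = []
    while frames:
        frame = frames.pop(0)
        try:
            out.append(next(frame))   # run one step of this frame
            frames.append(frame)      # requeue it for another turn
        except StopIteration:
            pass                      # frame finished; drop it
    return out

print(dispatch([foo(), bar()]))
```

Each next() call runs one "frame" for a step; because switching happens in the loop rather than by recursive C calls, the dispatcher is free to interleave frames however it likes.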
That’s it!
Stackless Python is stackless, because the C stack has been decoupled from the python stack. Context switches become possible, as well as the dynamic management of execution state.
Okay, there’s more:
Stack slicing: A clever way of switching context even when the C stack gets in the way
A framework of tasklets and channels to exploit execution context switching
A scheduler to keep everything running
Sadly, development and support for Stackless Python has slowed down in the last few years. It astonishes me, however, that the core idea of stacklessness hasn't yet been embraced by C Python.
Recently I decided to port a little package that I had to Python 3, and ran into the traceback reference cycle problem. This blog is the result of the detective work I had to do, both to re-familiarize myself with this issue (I haven’t been doing this sort of stuff for a few years) and to uncover the exact behaviour in Python 3.
Background
In Python 2, exceptions are stored internally as three separate objects: The type, the value and the traceback objects. The value is normally an instance of the type by the time your python code runs, so mostly we are dealing with value and traceback only. There are two pitfalls one should be aware of when writing exception handling code.
The traceback cycle problem
Normally, you don’t worry about the traceback object. You write code like:
def foo():
    try:
        return bar()
    except Exception as e:
        print "got this here", e
The trouble starts when you want to do something with the traceback. This could be to log it, or maybe translate the exception to something else:
def foo():
    try:
        return bar()
    except Exception as e:
        type, val, tb = sys.exc_info()
        print "got this here", e, repr(tb)
The problem is that the traceback, stored in tb, holds a reference to the execution frame of foo, which in turn holds the variable tb. This is a cyclic reference and it means that both the traceback, and all the frames it contains, won't disappear immediately.
This is the “traceback reference cycle problem” and it should be familiar to serious Python developers. It is a problem because a traceback contains a link to all the frames from the point of catching the exception to where it occurred, along with all temporary variables. The cyclic garbage collector will eventually reclaim it (if enabled) but that occurs unpredictably, and at some later point. The ensuing memory wastage may be problematic, as may the latency when gc finally runs, and it can cause problems with unittests that rely on reference counts to detect when objects die. Ideally, things should just go away when not needed anymore.
The same problem occurs whenever the traceback is present in a frame where an exception is raised, or caught. For example, this pattern here will also cause the problem in the called function translate() because tb is present in the frame where it is raised.
def translate(tp, val, tb):
    # translate this into a different exception and re-raise
    raise MyException(str(val)), None, tb
In python 2, the standard solution is to either avoid retrieving the traceback object if possible, e.g. by using
tp, val = sys.exc_info()[:2]
or by explicitly clearing it yourself and thus removing the cycle:
def translate(tp, val, tb):
    # translate this into a different exception and re-raise
    try:
        raise MyException(str(val)), None, tb
    finally:
        del tb
By vigorous use of try-finally, the prudent programmer avoids leaving references to traceback objects on the stack.
The lingering exception problem
A related problem is the lingering exception problem. It occurs when exceptions are caught and handled in a function that then does not exit, for example a driving loop:
def mainloop():
    while True:
        try:
            do_work()
        except Exception as e:
            report_error(e)
As innocent as this code may look, it suffers from a problem: The most recently caught exception stays alive in the system. This includes its traceback, even though it is no longer used in the code. Even clearing the variable won’t help:
report_error(e)
e = None
This is because of the following clause from the Python documentation:
If no expressions are present, raise re-raises the last exception that was active in the current scope.
In Python 2, the exception is kept alive internally, even after the try-except construct has been exited, as long as you don’t return from the function.
The standard solution to this, in Python 2, is to use the exc_clear() function from the sys module:
def mainloop():
    while True:
        try:
            do_work()
        except Exception as e:
            report_error(e)
            sys.exc_clear()  # clear the internal traceback
The prudent programmer liberally sprinkles sys.exc_clear() into his mainloop.
Python 3
In Python 3, two things have happened that change things a bit.
The traceback has been rolled into the exception object
sys.exc_clear() has been removed.
Let’s look at the implications in turn.
Exception.__traceback__
While it unquestionably makes sense to bundle the traceback with the exception instance as an attribute, it means that traceback reference cycles can become much more common. No longer is it sufficient to refrain from examining sys.exc_info(). Whenever you store an exception object in a variable local to a frame that is part of its traceback, you get a cycle. This includes both the function where the exception is raised, and where it is caught.
Code like this is suspect:
def catch():
    try:
        result = bar()
    except Exception as e:
        result = e
    return result
The variable result is part of the frame that is referenced by result.__traceback__, and a cycle has been created.
(Note that the variable e is not problematic. In Python 3, this variable is automatically cleared when the except clause is exited.)
Similarly:
def reraise(tp, value, tb=None):
    if value is None:
        value = tp()
    if value.__traceback__ is not tb:
        raise value.with_traceback(tb)
    raise value
Both of these cases can be handled with a well placed try-finally to clear the variables result, value and tb respectively:
def catch():
    try:
        result = bar()
    except Exception as e:
        result = e
    try:
        return result
    finally:
        del result
def reraise(tp, value, tb=None):
    if value is None:
        value = tp()
    try:
        if value.__traceback__ is not tb:
            raise value.with_traceback(tb)
        raise value
    finally:
        del value, tb
Note that the caller of reraise() also has to clear his locals that he used as an argument, because the same exception is being re-raised and the caller's frame will get added to the exception:
try:
    reraise(*exctuple)
finally:
    del exctuple
The lesson learned from this is the following:
Don’t store exceptions in local objects for longer than necessary. Always clear such variables when leaving the function using try-finally.
sys.exc_clear()
This function has been removed in Python 3, because it is no longer needed:
def mainloop():
    while True:
        try:
            do_work()
        except Exception as e:
            report_error(e)
        assert sys.exc_info() == (None, None, None)
As long as sys.exc_info() was empty when the function was called, i.e. it was not called as part of exception handling, the internal exception state is clear outside the except clause.
However, if you want to hang on to an exception for some time, and are worried about reference cycles or memory usage, you have two options:
clear the exception’s __traceback__ attribute:
e.__traceback__ = None
use the new traceback.clear_frames() function
traceback.clear_frames(e.__traceback__)
clear_frames() was added to remove local variables from tracebacks in order to reduce their memory footprint. As a side effect, it will clear the reference cycles.
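That side effect can be seen directly: once the traceback's frames are cleared, locals held by those frames are released immediately, without waiting for gc. Payload is an illustrative stand-in for a large temporary:

```python
# Sketch: traceback.clear_frames() drops frame locals, which also
# breaks the reference cycle. Payload/raiser are illustrative names.
import traceback
import weakref

class Payload:
    pass

def raiser():
    big = Payload()              # local kept alive by the traceback's frame
    probe = weakref.ref(big)
    try:
        raise RuntimeError("x")
    except RuntimeError as e:
        err = e
    return err, probe

err, probe = raiser()
assert probe() is not None       # frame locals still alive via err.__traceback__
traceback.clear_frames(err.__traceback__)
assert probe() is None           # locals dropped immediately, no gc needed
```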
Conclusion
Exception reference cycles are a nuisance when developing robust Python applications. Python 3 has added some extra pitfalls. Even though the local variable of the except clause is automatically cleared, the user must himself clear any other variables that might contain exception objects.
Time to write a little bit about this little project of mine.
tl;dr
Multithreading made more responsive in Python 2.7. 30% more requests per second. Satisfaction guaranteed!
Introduction
After leaving my old job at CCP Games last year, I had the urge to try to collect some of the stuff that we had done for Python 2.7 over there and make it available to the world. So I started this little fork off 2.7.
The idea was to have a place to add “improvements” to vanilla (as opposed to Stackless) 2.7, so that they could be kept separately and in sync with CPython 2.7.
Thus far, what I’ve been mostly focusing on is modernizing thread support. (for a full list of changes, see the whatsnew file).
When we were working on DUST 514 for the Playstation I had to make certain improvements to make networking work more efficiently on that platform. We were interfacing stackless python with the native http api of the PS3, and had to use blocking worker threads. Marshaling from those threads to tasklets was causing unnecessary latency.
We ended up doing a lot of experiments with condition variables, in the end, providing native C implementations to minimize GIL thrashing and reducing wakeup latency to the minimum.
In PythonPlus I have done this and some other stuff in order to improve threading performance.
The threading related changes cover among other things:
Adding timeout parameters to blocking calls as in the 3.x api.
Adding a native threading.Condition object
Improving the GIL
Adding a native Condition object aims to reduce the thread thrashing that is otherwise associated with condition variables, since a lot of locking and context switching needs to happen for a thread to wake up with the normal .py version of those constructs. To do this, however, the internal non-recursive locks need to be implemented using a lock and a condition variable themselves, rather than using native semaphore objects.
Changing the lock types used required the GIL to be visited, since the behaviour of the old GIL was just a random side effect of the choice of internal locks. This also allowed me to address the old Beazley problem while at it.
The GIL change is minor. It is simply a separate function, and when a CPU bound thread wishes to yield the GIL to another thread, it calls a new api function, _PyThread_yield_GIL(). Threads that are trying to re-acquire the GIL after releasing it are considered to be IO threads and have priority for the GIL when a CPU thread yields it. But if no such thread is present, then the GIL won't actually be yielded 99 out of every 100 yields. This minimizes unnecessary thrashing among CPU threads, while allowing IO threads to quickly get their foot in when required.
Performance
I quickly got this all up and running, but then I had to prove it to be actually better than regular 2.7. To do this, I set up two test scenarios:
Tools/plus/giltest.py – a test platform to measure performance of concurrent cpu threads as well as the performance of pairs of producer/consumer threads synchronized either with threading.Condition or threading.Lock
Tools/plus/testserver.py – a multithreaded webserver using a pool of threads and socketserver.py, being exercised by ab.
On windows, I found it easy to see improvements. I got the GIL to behave better and I got the web server to increase throughput. producer/consumer pairs using Condition variables got a real performance boost and those IO threads got a priority boost over regular CPU bound threads as expected.
However, my virtual linux box was more disappointing. Tests showed that just replacing the native non-recursive lock, which was based on the posix sem_t object, with a construct using pthread_mutex_t and pthread_cond_t slowed down execution.
Fixing linux
I decided that there ought to be no good reason for a pthread_cond_t to be so much slower than a sem_t, so I decided to write my own condition object using a sem_t. To make a long story short, it worked. My emulated condition variable (written using a pthread_mutex_t and a sem_t) is faster than a pthread_cond_t. At least on my dual core virtual box. Go figure.
The making of this new condition variable is a topic for a blog post on its own. I doggedly refused to look up other implementations of condition variables based on semaphores, and wanted to come up with a solution on my own that did not violate the more subtle promises that the protocol makes. Along the way, I was guided by failing unittests of the threading.Barrier class, which relies on the underlying threading.Condition to be true to its promise. I was actually stuck on this problem for a few months, but after a recent eureka moment I think I succeeded.
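For the curious, the general shape of a semaphore-based condition variable can be sketched in Python with threading primitives. This is a deliberately naive sketch, not the C implementation from the post; it glosses over exactly the subtle wakeup and fairness promises mentioned above, which is where the real work lies:

```python
# Naive condition variable built on a counting semaphore.
# Illustrative only: it ignores the subtle protocol guarantees
# (e.g. notify ordering) that made the real implementation hard.
import threading

class SemCondition:
    def __init__(self):
        self._lock = threading.Lock()        # the "mutex" half
        self._sem = threading.Semaphore(0)   # wakeup tokens
        self._waiters = 0

    def __enter__(self):
        self._lock.acquire()
        return self

    def __exit__(self, *exc):
        self._lock.release()

    def wait(self):
        self._waiters += 1
        self._lock.release()                 # release, then sleep
        try:
            self._sem.acquire()              # block until a token is posted
        finally:
            self._lock.acquire()             # reacquire before returning

    def notify(self):
        if self._waiters:                    # only post if someone waits
            self._waiters -= 1
            self._sem.release()

# Usage: a minimal producer/consumer rendezvous.
cond = SemCondition()
items = []

def consumer():
    with cond:
        while not items:
            cond.wait()
        items.pop()

t = threading.Thread(target=consumer)
t.start()
with cond:
    items.append(1)
    cond.notify()
t.join(5)
```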
The results
So, this has been some months in the making. I set up the header files so that various aspects of my patch could be switched on or off, and a macro especially for performance testing then sets these in a sensible way.
giltest.py
First, the results of the giltest.py file, with various settings of the macro and on windows and linux:
Some notes on this are in order.
e is “efficiency”, the cpu throughput of two concurrent cpu threads (incrementing a variable) compared to just one thread.
prod/con is a pair of producer/consumer threads using a threading.Lock primitive, and the column shows the number of transactions in a time-frame (one second)
The green bit shows why a GIL improvement was necessary since IO threads just couldn’t get any priority over a cpu thread. This column is showing prod/con transactions in the presence of a cpu thread.
In the end, the improvements on linux are modest. Maybe it is because of my virtual machine. But the Beazley problem is fixed, and IO responsiveness picks up. On windows it is more pronounced.
The final column is a pair of producer/consumer threads synchronized using a threading.Condition object. Notice on windows how performance picks up almost threefold, ending up being some 60% of a pair that’s synchronized with a threading.Lock.
testserver.py
Now for more real-world like results. Here the aim was to show that running many requests in parallel was better handled using the new system. Again, improvements on linux are harder to gauge. In fact, my initial attempts were so disappointing on linux that I almost scrapped the project. But when I thought to rewrite the condition variable, things changed.
Notice how performance picks up with “emulated condvar” on linux (green boxes) (on windows, it is always emulated)
p=1 and p=10 are the number of parallel requests that are made. “ab” is single threaded, it shoots off n requests and then waits for them all to finish before doing the next batch, so this is perhaps not completely real-world.
On linux, rps (requests per second) go up for the multi-threaded case, both when we add the new GIL (better IO responsiveness) and when we add the native threading.Condition. Combined, it improves 30%.
On windows, we see the same, except that the biggest improvement is when we modify the locks (orange boxes).
On windows, we achieve better throughput with multithreading. I.e. multiple requests now work better than single requests, whereas on linux, multiple requests performed worse.
Conclusion
These tests were performed on a dual core laptop, running windows 7. The linux tests were done in a virtual ubuntu machine on the same laptop, using two cpus. I’m sure that the virtual nature has its effect on the results, and so, caveat emptor.
Overall, we get 30% improvement in responsiveness when there are multiple threads doing requests using this new threading code in Python Plus. For real world applications serving web pages, that ought to matter.
On windows, the native implementation of threading.Condition provides a staggering 167% boost in performance of two threads doing rendezvous using a condition variable.
While optimizing the linux case, I uncovered how pthread_cond_t is curiously inefficient. A “greedy” implementation of a condition variable using the posix sem_t showed dramatic improvement on my virtual machine. I haven’t replicated this on native linux, but I suspect that the implementors of the pthread library are using explicit scheduling, whereas we rely on the presumably greedy scheduling semantics of the semaphore primitive. But perhaps a separate blog post on this is in order, after some more research.
I made a short presentation to my colleagues the other day about how we use the killing of tasklets as a clean and elegant way to tear down services and workers in a Stackless Python program.
My colleague Rob Galanakis wrote a short blog post on his impressions of it.
I haven’t written anything on Python here in a good while. But that doesn’t mean I haven’t been busy wrestling with it. I’ll need to take a look at my Perforce changelists over the last months and take stock. In the meantime, I’d like to rant a bit about a most curious peculiarity of Python that I came across a while back.
After a long hiatus, the Cosmic Percolator is back in action. Now it is time to rant about all things Python, I think. Let’s start with this here, which came out from work I did last year.
Stackless has had an “atomic” feature for a long time. In this post I am going to explain its purpose and how I recently extended it to make working with OS threads easier.
Scheduling
In Stackless Python, scheduling is cooperative. This means that a tasklet normally runs uninterrupted until it explicitly does something that would cause another one to run, like sending a message over a channel. This allows one to write logic in Stackless without worrying too much about synchronization.
However, there is an important exception to this: It is possible to run stackless tasklets through the watchdog and this will interrupt a running tasklet if it exceeds a pre-determined number of executed opcodes:
while True:
    interrupted = stackless.run(100)
    if interrupted:
        print interrupted, "has been running quite a bit!"
        interrupted.insert()
    else:
        break  # Ok, nothing runnable anymore
This code may cause a tasklet to be interrupted at an arbitrary point (actually during a tick interval, the same point that yields the GIL) and cause a switch to the main tasklet.
Of course, not all code uses this execution mode, but nevertheless, it has always been considered a good idea to be aware of this. For this reason, an atomic mode has been supported which would inhibit this involuntary switching in sensitive areas:
the atomic state is a property of each tasklet and so even when there is voluntary switching performed while a non-zero atomic state is in effect, it has no effect on other tasklets. Its only effect is to inhibit involuntary switching of the tasklet on which it is set.
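The atomic() context manager used in the code below is essentially a save/restore of the tasklet's atomic flag. The real version calls stackless.getcurrent().set_atomic(); the sketch here uses a stand-in object so it runs without Stackless:

```python
# Sketch of the atomic() context manager pattern (as in stacklesslib).
# FakeTasklet stands in for stackless.getcurrent(); it only models the flag.
import contextlib

class FakeTasklet:
    def __init__(self):
        self.atomic = False
    def set_atomic(self, flag):
        old = self.atomic
        self.atomic = bool(flag)
        return old       # returning the old value lets callers restore it

current = FakeTasklet()

@contextlib.contextmanager
def atomic():
    # Save the previous value so nested atomic() blocks restore correctly.
    old = current.set_atomic(True)
    try:
        yield
    finally:
        current.set_atomic(old)

with atomic():
    assert current.atomic
    with atomic():           # nesting is safe
        assert current.atomic
    assert current.atomic    # inner exit did not clear the flag
assert not current.atomic
```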
A Concrete Example
To better illustrate its use, let's take a look at the implementation of the Semaphore from stacklesslib (stacklesslib.locks.Semaphore):
class Semaphore(LockMixin):
    def __init__(self, value=1):
        if value < 0:
            raise ValueError
        self._value = value
        self._chan = stackless.channel()
        set_channel_pref(self._chan)

    def acquire(self, blocking=True, timeout=None):
        with atomic():
            # Low contention logic: There is no explicit handoff to a target,
            # rather, each tasklet gets its own chance at acquiring the semaphore.
            got_it = self._try_acquire()
            if got_it or not blocking:
                return got_it

            wait_until = None
            while True:
                if timeout is not None:
                    # Adjust time. We may have multiple wakeups since we are a
                    # low-contention lock.
                    if wait_until is None:
                        wait_until = elapsed_time() + timeout
                    else:
                        timeout = wait_until - elapsed_time()
                        if timeout < 0:
                            return False
                try:
                    lock_channel_wait(self._chan, timeout)
                except:
                    self._safe_pump()
                    raise
                if self._try_acquire():
                    return True

    def _try_acquire(self):
        if self._value > 0:
            self._value -= 1
            return True
        return False
This code illustrates how the atomic state is incremented (via a context manager) and kept non-zero while we are doing potentially sensitive things, in this case, doing logic based on self._value. Since this is code that is used for implementing a Semaphore, which itself forms the basis of other stacklesslib.locks objects such as CriticalSection and Condition objects, this is the only way we have to ensure atomicity.
Threads
It is worth noting that using the atomic property has largely been confined to such library code as the above. Most stackless programs indeed do not run the watchdog in interruptible mode, or they use the so-called soft-interrupt mode which breaks the scheduler only at the aforementioned voluntary switch points.
However, in the last two years or so, I have been increasingly using Stackless Python in conjunction with OS threads. All the stackless constructs, such as channels and tasklets work with threads, with the caveat that synchronized rendezvous isn’t possible between tasklets of different threads. A channel.send() where the recipient is a tasklet from a different thread from the sender will always cause the target to become runnable in that thread, rather than to cause immediate switching.
Using threads has many benefits. For one, it simplifies certain IO operations. Handing a job to a tasklet on a different thread won’t block the main thread. And using the usual tasklet communication channels to talk uniformly to all tasklets, whether they belong to this thread or another, makes the architecture uniform and elegant.
The locking constructs in stacklesslib also all make use of non-immediate scheduling. While we use the stackless.channel object to wait, we make no assumptions about immediate execution when a target is woken up. This makes them usable for synchronization between tasklets of different threads.
Or, this is what I thought, until I started getting strange errors and realized that tasklet.atomic wasn’t inhibiting involuntary switching between threads!
The GIL
You see, Python internally can arbitrarily stop executing a particular thread and start running another. This is called yielding the GIL and it happens at the same place in the evaluation loop as the involuntary breaking of a running tasklet would have been performed. And stackless' atomic property didn't affect this behaviour. If the python evaluation loop detects that another thread is runnable and waiting to execute python code, it may arbitrarily yield the GIL to that thread and wait to reacquire it again.
When using the above lock to synchronize tasklets from two threads, we would suddenly have a race condition, because the atomic context manager would no longer prevent two tasklets from making simultaneous modifications to self._value, if those tasklets belonged to different threads.
A Conundrum
So, how to fix this? An obvious first avenue to explore would be to use one of the threading locks in addition to the atomic flag. For the sake of argument, let’s illustrate with a much simplified lock:
class SimpleLock(object):
    def __init__(self):
        self._chan = stackless.channel()
        self._chan.preference = 0  # no preference, receiver is made runnable
        self._state = 0

    def acquire(self):
        # opportunistic lock, without explicit handoff.
        with atomic():
            while True:
                if self._state == 0:
                    self._state = 1
                    return
                self._chan.receive()

    def release(self):
        with atomic():
            self._state = 0
            if self._chan.balance < 0:      # receivers are waiting
                self._chan.send(None)       # Wake up someone who is waiting
This lock will work nicely with tasklets on the same thread. But when we try to use it for locking between two threads, the atomicity of changing self._state and examining self._chan.balance won't be maintained.
We can try to fix this with a proper thread lock:
class SimpleLockedLock(object):
    def __init__(self):
        self._chan = stackless.channel()
        self._chan.preference = 0  # no preference, receiver is made runnable
        self._state = 0
        self._lock = threading.Lock()

    def acquire(self):
        # opportunistic lock, without explicit handoff.
        with atomic():
            while True:
                with self._lock:
                    if self._state == 0:
                        self._state = 1
                        return
                self._chan.receive()

    def release(self):
        with atomic():
            with self._lock:
                self._state = 0
                if self._chan.balance < 0:  # receivers are waiting
                    self._chan.send(None)   # Wake up someone who is waiting
This version is more cumbersome, of course, but the problem is that it doesn't really fix the issue. There is still a race condition in acquire(), between releasing self._lock and calling self._chan.receive().
Even if we were to modify self._chan.receive() to take a lock and atomically release it before blocking, and re-acquire it before returning, that would be a very unsatisfying solution.
Thankfully, since we needed to go and modify Stackless Python anyway, there was a much simpler solution.
Fixing Atomic
You see, Python is GIL synchronized. In the same way that only one tasklet of a particular thread is executing at the same time, regular cPython has the property that only one of the process's threads is running python code at a time. So, at any one time, only one tasklet of one thread is running python code.
So, if atomic can inhibit involuntary switching between tasklets of the same thread, can't we just extend it to inhibit involuntary switching between threads as well? Yessirree Bob, it turns out we can.
This is the fix (ceval.c:1166, python 2.7):
        /* Do periodic things.  Doing this every time through
           the loop would add too much overhead, so we do it
           only every Nth instruction.  We also do it if
           ``pendingcalls_to_do'' is set, i.e. when an asynchronous
           event needs attention (e.g. a signal handler or
           async I/O handler); see Py_AddPendingCall() and
           Py_MakePendingCalls() above. */
#ifdef STACKLESS
        /* don't do periodic things when in atomic mode */
        if (--_Py_Ticker < 0 && !tstate->st.current->flags.atomic) {
#else
        if (--_Py_Ticker < 0) {
#endif
That’s it! Stackless’ atomic flag has been extended to also stop the involuntary yielding of the GIL from happening. Of course voluntary yielding, such as that which is done when performing blocking system calls, is still possible, much like voluntary switching between tasklets is also possible. But when the tasklet’s atomic value is non-zero, this guarantees that no unexpected switch to another tasklet, be it on this thread or another, happens.
This fix, dear reader, was sufficient to make sure that all the locking constructs in stacklesslib worked for all tasklets.
So, what about cPython?
It is worth noting that the locks in stacklesslib.locks can be used to replace the locks in threading.locks: If your program is just a regular threaded python program, then it will run correctly with the locks from stacklesslib.locks replacing the ones in threading.locks. This includes Semaphore, Lock, RLock, Condition, Barrier, Event and so on, and all of them are now written in Python-land using regular Python constructs and made to work by the grace of the extended tasklet.atomic property.
Which brings me to ask the question: Why doesn’t cPython have the thread.atomic property?
I have seen countless questions on the python-dev mailing lists about whether this or that operation is atomic or not. Regularly one sees implementation changes to for example list and dict operations to add a new requirement that an operation be atomic wrt. thread switches.
Wouldn’t it be nice if the programmer himself could just say: “Ah, I’d like to make sure that my updating this container here will be atomic when seen from the other threads. Let’s just use the thread.atomic flag for that.”
For cPython, this would be a perfect light-weight atomic primitive. It would be very useful to synchronize access to small blocks of code like this. For other implementations of Python, those that are truly GIL free, a thread.atomic property could be implemented with a single system global threading.RLock. Provided that we add the caveat to a thread.atomic that it should be used by all agents accessing that data, we would now have a system for mutual access that would work very cheaply on cPython and also work (via a global lock) on other implementations.
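The global-lock emulation can be sketched directly. This is a hypothetical API, not an existing one, and it is only correct under the stated caveat that every agent touching the shared data also uses atomic():

```python
# Sketch: emulating a thread.atomic flag with one process-global
# re-entrant lock, as could be done on a GIL-free Python.
# atomic() here is a hypothetical name, not a real stdlib API.
import contextlib
import threading

_atomic_lock = threading.RLock()  # one global lock for all "atomic" sections

@contextlib.contextmanager
def atomic():
    with _atomic_lock:            # RLock, so atomic() sections may nest
        yield

counter = 0

def worker():
    global counter
    for _ in range(10000):
        with atomic():            # all mutators agree to use atomic()
            counter += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert counter == 40000
```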
Let’s add thread.atomic to cPython
The reasons I am enthusiastic about seeing an “atomic” flag as part of cPython are twofold:
It would fill the role of a lightweight synchronization primitive that people are requesting where a true Lock is considered too expensive, and where it makes no sense to have a per-instance lock object.
More importantly, it will allow Stackless functionality to be added to cPython as a pure extension module, and it will allow such inter-thread operations to be added to Greenlet-based programs in the same way as we have solved the problem for Stackless Python.
Emulating an “atomic” flag in a truly multithreaded environment with a lock is not as simple as I first thought. The cool thing about “atomic” is that it still allows the thread to block, e.g. on an IO operation, without affecting other threads. For an atomic-like lock to work, such a lock would need to be automatically yielded and re-acquired when blocking, bringing us back to a condition-variable-like model. Since the whole purpose of “atomic” is to be lightweight in a GIL-like environment, forcing it to be backwards compatible with a truly multi-threaded solution is counter-productive. So, “atomic” as a GIL only feature is the only thing that makes sense, for now. Unless I manage to dream up an alternative.
Intrigued by the name “zombieframe”, I examined the header where it is defined, code.h:
...
void *co_zombieframe; /* for optimization only (see frameobject.c) */
...
} PyCodeObject;
It turns out that for every PyCodeObject object that has been executed, a PyFrameObject of a suitable size is cached and kept with the code object. Now, caching is fine and good, but this cache is unbounded. Every code object has the potential to hang on to a frame, which may then never be released.
Further, there is a separate freelist cache for PyFrameObjects already, in case a frame is not found on the code object:
if (free_list == NULL) {
    f = PyObject_GC_NewVar(PyFrameObject, &PyFrame_Type,
                           extras);
    if (f == NULL) {
        Py_DECREF(builtins);
        return NULL;
    }
}
else {
    assert(numfree > 0);
    --numfree;
    f = free_list;
    free_list = free_list->f_back;
    ...
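The two-level caching strategy can be modelled in a few lines of Python. This is a simplified illustration of the allocation logic in the C excerpt above, not the actual CPython code; the class and function names are invented for the sketch:

```python
# Simplified model of CPython's two-level frame caching: each code
# object keeps one "zombie" frame, and a global freelist catches the
# rest. One zombie per code object means that cache is unbounded in
# aggregate, while the freelist has a fixed cap.

free_list = []          # global freelist of recycled frames
MAXFREELIST = 200       # the real freelist is capped similarly

class Code:
    def __init__(self):
        self.zombieframe = None   # per-code cached frame

class Frame:
    def __init__(self, code):
        self.code = code

def frame_alloc(code):
    # 1. Prefer the zombie frame cached on the code object itself.
    if code.zombieframe is not None:
        f, code.zombieframe = code.zombieframe, None
        return f
    # 2. Otherwise fall back to the global freelist.
    if free_list:
        return free_list.pop()
    # 3. Finally, allocate a fresh frame.
    return Frame(code)

def frame_dealloc(f):
    # Mirror image: park the frame on its code object first, then on
    # the freelist if there is room, else let it be freed.
    if f.code.zombieframe is None:
        f.code.zombieframe = f
    elif len(free_list) < MAXFREELIST:
        free_list.append(f)
```

The model makes the memory concern visible: every distinct `Code` object can pin one `Frame` forever via its `zombieframe` slot, regardless of the freelist cap.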
Always conscious of memory these days, I tried disabling this in version 3.3 and running the pybench test. I was not able to see any conclusive difference in execution speed.
Update:
Disabling the zombieframe on the PS3 shaved off some 50k on startup. Not the jackpot, but still, small things add up.
——————————————————————————-
PYBENCH 2.1
——————————————————————————-
* using CPython 3.3.0a3+ (default, May 23 2012, 20:02:34) [MSC v.1600 64 bit (AMD64)]
* disabled garbage collection
* system check interval set to maximum: 2147483647
* using timer: time.perf_counter
* timer: resolution=2.9680909446810176e-07, implementation=QueryPerformanceCounter()
What follows is an account of how I found and fixed an insidious bug in Stackless Python which had been there for years. It’s one of those war stories. Perhaps a bit long-winded and technical and full of exaggerations, as such stories tend to be.
Background
Some weeks ago, because of a problem in the client library we are using, I had to switch the http library we use on the PS3 from non-blocking IO to blocking IO. Previously, we were issuing all the non-blocking calls, the “select”, and the tasklet blocking / scheduling on the main thread. This is similar to how gevent and other such libraries do things. Switching to blocking calls, however, meant doing things on worker threads.
The approach we took was to implement a small pool of python workers which could execute arbitrary jobs. A new utility function, stacklesslib.util.call_async(), then performed the asynchronous call by dispatching it to a worker thread. The idea of call_async() is to have a different tasklet execute the callable while the caller blocks on a channel. The return value, or error, is then propagated to the originating tasklet using that channel. Stackless channels can be used to communicate between threads too. And synchronizing threads in Stackless is even more convenient than in regular Python, because there is stackless.atomic, which not only prevents involuntary scheduling of tasklets, it also prevents automatic yielding of the GIL (cPython folks, take note!)
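The pattern can be sketched in plain Python, with a `queue.Queue` standing in for the Stackless channel and ordinary threads standing in for the tasklet workers. This is an illustration of the idea only; stacklesslib’s actual call_async() differs:

```python
import queue
import threading

_jobs = queue.Queue()

def _worker():
    # Each worker pulls (callable, args, reply_channel) triples and
    # sends the result -- or the exception -- back on the caller's
    # private reply channel.
    while True:
        func, args, reply = _jobs.get()
        try:
            reply.put((True, func(*args)))
        except Exception as e:
            reply.put((False, e))

# A small pool of workers which can execute arbitrary jobs.
for _ in range(2):
    threading.Thread(target=_worker, daemon=True).start()

def call_async(func, *args):
    """Dispatch func to a worker thread; block the caller until the
    result (or error) is propagated back, channel-style."""
    reply = queue.Queue(maxsize=1)
    _jobs.put((func, args, reply))
    ok, value = reply.get()       # caller blocks here, like channel.receive()
    if ok:
        return value
    raise value
```

The drawback mentioned below applies to this sketch too: the worker threads exist for the life of the program, each holding on to its stack.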
This worked well, and has been running for some time. The drawback to this approach, of course, is that we now need to keep python threads around, consuming stack space. And Python needs a lot of stack.
The problem
The only problem was, that there appeared to be a bug present. One of our developers complained that sometimes, during long downloads, the http download function would return None, rather than the expected string chunk.
Now, this problem was hard to reproduce. It required a specific setup, and geolocation was also an issue. This developer is in California, using servers in London. Hence, there ensued a somewhat prolonged interaction (hindered by badly overlapping time-zones) where I would provide him with modified .py files with instrumentation, and he would provide me with logs. We quickly determined, to my dismay, that apparently, sometimes a string was turning into None while in transit through a channel.send() to a channel.receive(). This was most distressing. Particularly because the channel in question was transporting data between threads, and this particular functionality of Stackless has not been as heavily used as the rest.
Tracking it down
So, I suspected a race condition of some sort. But a careful review of the channel code and the scheduling code presented no obvious candidates. Also, the somewhat unpopular GIL was being used throughout, which, if done correctly, ensures that things work as expected.
To cut a long story short, by a lucky coincidence I managed to reproduce a different manifestation of the problem. In some cases, a simple interaction with a local HTTP server would cause this to happen.
When a channel sends data between tasklets, it is temporarily stored on the target tasklet’s “tempval” attribute. When the target wakes up, this is then taken and returned as the result from the “receive()” call. I was able to establish that after sending the data, the target tasklet did indeed hold the correct string value in its “tempval” attribute. I then needed to find out where and why it was disappearing from that place.
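A toy model of this hand-off makes the mechanism concrete. The classes below are invented for illustration and bear no relation to the Stackless C implementation beyond the “tempval” idea:

```python
# Toy model of channel data transfer: the sender parks the value on
# the target tasklet's tempval; when the target is scheduled, its
# receive() takes the value from there.

class Tasklet:
    def __init__(self):
        self.tempval = None

class Channel:
    def __init__(self):
        self.waiting = []          # tasklets blocked in receive()

    def send(self, value, scheduler):
        target = self.waiting.pop(0)
        target.tempval = value     # value lives here while in transit
        scheduler.wake(target)

class Scheduler:
    def __init__(self):
        self.runnable = []
    def wake(self, tasklet):
        self.runnable.append(tasklet)

def receive_wakeup(tasklet):
    # What the target does on wakeup: take tempval and return it as
    # the result of receive().
    value, tasklet.tempval = tasklet.tempval, None
    return value
```

The bug described below lived exactly in the window this model exposes: the time during which the in-transit value sits on the target’s tempval.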
By adding instrumentation code to the stackless core, I established that this was happening in the last line of the following snippet:
By setting a breakpoint, I was able to see that I was in the top-level part of the “continue” bit of the “stack spilling” code.
Stack spilling is a feature of stackless where the stack slicing mechanism is used to recycle a deep callstack. When it detects that the stack has grown beyond a certain limit, it is stored away, and a hard switch is done to the top again, where it continues its downwards crawl. This can help conserve stack address space, particularly on threads where the stack cannot grow dynamically.
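The idea can be illustrated in pure Python with a trampoline, which is only an analogy: when the “stack” would grow past a limit, the pending work is packaged up and resumed from a top-level loop instead of recursing deeper. The limit and names here are toy inventions:

```python
# Analogy for stack spilling: past a depth limit, return a
# continuation ("spill" the stack) instead of recursing further; a
# top-level dispatcher keeps resuming until a real value appears.

STACK_LIMIT = 50

def countdown(n, depth=0):
    if n == 0:
        return "done"
    if depth >= STACK_LIMIT:
        # "Spill": hand the remaining work back to the top level.
        return lambda: countdown(n - 1, 0)
    return countdown(n - 1, depth + 1)

def run(thunk):
    # Top-level loop, in the spirit of slp_frame_dispatch earlier:
    # keep resuming spilled continuations until a value is produced.
    result = thunk()
    while callable(result):
        result = result()
    return result
```

With the trampoline, a nesting of many thousands of calls completes within a bounded recursion depth, which is what stack spilling buys on threads whose stacks cannot grow.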
So, something wrong with stack spilling, then. But even so, this was unexpected. Why was stack spilling happening when data was being transmitted across a channel? Stack spilling normally occurs only when nesting regular .py code and other such things.
By setting a breakpoint at the right place, where the stack spilling code was being invoked, I finally arrived at this callstack:
Type Function
PyObject* slp_eval_frame_newstack(PyFrameObject* f, int exc, PyObject* retval)
PyObject* PyEval_EvalFrameEx_slp(PyFrameObject* f, int throwflag, PyObject* retval)
PyObject* slp_frame_dispatch(PyFrameObject* f, PyFrameObject* stopframe, int exc, PyObject* retval)
PyObject* PyEval_EvalCodeEx(PyCodeObject* co, PyObject* globals, PyObject* locals, PyObject** args, int argcount, PyObject** kws, int kwcount, PyObject** defs, int defcount, PyObject* closure)
PyObject* function_call(PyObject* func, PyObject* arg, PyObject* kw)
PyObject* PyObject_Call(PyObject* func, PyObject* arg, PyObject* kw)
PyObject* PyObject_CallFunctionObjArgs(PyObject* callable)
void PyObject_ClearWeakRefs(PyObject* object)
void tasklet_dealloc(PyTaskletObject* t)
void subtype_dealloc(PyObject* self)
int slp_transfer(PyCStackObject** cstprev, PyCStackObject* cst, PyTaskletObject* prev)
PyObject* slp_schedule_task(PyTaskletObject* prev, PyTaskletObject* next, int stackless, int* did_switch)
PyObject* generic_channel_action(PyChannelObject* self, PyObject* arg, int dir, int stackless)
PyObject* impl_channel_receive(PyChannelObject* self)
PyObject* call_function(PyObject*** pp_stack, int oparg)
Notice the “subtype_dealloc”. This callstack indicates that in the channel receive code, after the hard switch back to the target tasklet, a Py_DECREF was causing side effects, which again caused stack spilling to occur. The place was this, in slp_transfer():
/* release any objects that needed to wait until after the switch. */
Py_CLEAR(ts->st.del_post_switch);
This is code that does cleanup after tasklet switch, such as releasing the last remaining reference of the previous tasklet.
So, the bug was clear then. It was twofold:
A Py_CLEAR() after switching was not careful enough to store the current tasklet’s “tempval” out of harm’s way of any side effects a Py_DECREF() might cause, and
Stack slicing itself, when it happened, clobbered the current tasklet’s “tempval”
The bug was subsequently fixed by repairing stack spilling and spiriting “tempval” away during the Py_CLEAR() call.
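The “spiriting away” half of the fix follows a classic pattern: move the sensitive value into a local before triggering anything that may run arbitrary code, and restore it afterwards. Sketched in Python (the actual fix does this in C around the Py_CLEAR() call; the names below are illustrative):

```python
# Sketch of the "spirit tempval away" pattern: the release step may
# run arbitrary destructor code, so the in-transit value is moved to
# a local first and put back afterwards.

class Tasklet:
    def __init__(self):
        self.tempval = None

def cleanup_after_switch(current, release):
    # Move tempval out of harm's way before releasing references
    # whose destructors can have side effects (cf. Py_CLEAR above).
    saved, current.tempval = current.tempval, None
    try:
        release()          # arbitrary side effects, may touch tempval
    finally:
        current.tempval = saved
```

Without the save/restore, any destructor that schedules, switches, or otherwise touches the current tasklet clobbers the value in transit, which is precisely the symptom observed.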
Post mortem
The inter-thread communication turned out to be a red herring. The problem was caused by an unfortunate juxtaposition of channel communication, tasklet deletion, and stack spilling.
But why had we not seen this before? I think it is largely due to the fact that stack spilling only rarely comes into play on regular platforms. On the PS3, we deliberately set the threshold low to conserve memory space. This is also not the first stack-spilling related bug we have seen on the PS3, but the first one for two years. Hopefully it will be the last.