While working on our first customer project using Pyramid I stumbled on a curious problem when setting up HAProxy to load balance requests among the backends. I had configured HAProxy to use layer 7 health checks to make sure that the applications were responding correctly to HTTP requests. For some reason I was getting a lot of false negatives indicating that the backend servers were unavailable when in fact they were functioning properly. This led me to inspect the network traffic between HAProxy and the application servers.
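For reference, a layer 7 health check in HAProxy is enabled with the httpchk option. A minimal sketch (the backend name, server address, and /ping path here are illustrative, not my actual configuration):

```
backend app
    option httpchk GET /ping
    server app1 192.168.0.1:8000 check
```

With this in place HAProxy periodically issues GET /ping to each server and marks a server down when it does not receive a well-formed HTTP response.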
I had the following simple view in my application to respond to the HAProxy health checks:

def ping(request):
    return Response('pong', content_type='text/plain')
which simply returns the string “pong” with a default set of HTTP headers. While inspecting the network traffic using Wireshark I noticed that this simple response was split into multiple TCP packets even though it could easily have fit in a single one. In fact, each HTTP header was sent in a separate TCP packet. This splitting was the reason behind the HAProxy problem, because it sometimes caused HAProxy to truncate the response (I also found similar reports). After learning the cause of the failing health checks I set out to find out exactly why the HTTP headers were split into separate TCP packets.
Starting from paste.httpserver (which I was using to run the application) I was able to track the problem down to BaseHTTPServer.BaseHTTPRequestHandler. The reason the HTTP response is split into so many TCP packets originates in SocketServer.StreamRequestHandler, which BaseHTTPRequestHandler inherits from. This is one of the convenience classes that provide a file-like API on top of a socket connection. More specifically, it provides two instance variables, self.rfile and self.wfile, which are file-like objects for reading from and writing to the connected socket, respectively. The comments in the StreamRequestHandler class contain the following:
# Default buffer sizes for rfile, wfile.
# We default rfile to buffered because otherwise it could be
# really slow for large data (a getc() call per byte); we make
# wfile unbuffered because (a) often after a write() we want to
# read and we need to flush the line; (b) big writes to unbuffered
# files are typically optimized by stdio even when big reads
# aren't.
rbufsize = -1
wbufsize = 0
The important part here is the buffering mode for the wfile object, which is set to unbuffered. This causes each call to self.wfile.write() to send its data immediately. For a “chatty” protocol where the connected parties exchange messages frequently in alternating fashion this makes sense. For HTTP, however, this assumption is suboptimal, because in the common case the data transfer consists of a single exchange: the client sends a request and the application writes the response. After changing wfile to use buffered I/O by setting
wbufsize = -1
I can see in Wireshark that the HTTP response is contained in a single TCP packet.
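The same override works on any BaseHTTPRequestHandler subclass. Here is a minimal, self-contained sketch written against Python 3 (where BaseHTTPServer and SocketServer live in http.server and socketserver); the BufferedHandler class and its catch-all do_GET are my own illustration, not code from the application described above:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class BufferedHandler(BaseHTTPRequestHandler):
    # Override StreamRequestHandler's default of 0 (unbuffered) so the
    # response is accumulated in a buffer and flushed to the socket
    # at the end of the request instead of one send() per write().
    wbufsize = -1

    def do_GET(self):
        body = b'pong'
        self.send_response(200)
        self.send_header('Content-Type', 'text/plain')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

if __name__ == '__main__':
    # Bind to an ephemeral port and answer requests in the background.
    server = HTTPServer(('127.0.0.1', 0), BufferedHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = 'http://127.0.0.1:%d/ping' % server.server_address[1]
    with urllib.request.urlopen(url) as resp:
        print(resp.read())  # b'pong'
    server.shutdown()
```

In Python 2 the equivalent one-liner for paste.httpserver is to set httpserver.WSGIHandler.wbufsize = -1 before serving, which is what the benchmark script below does.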
When the body of the HTTP response is small, sending the response in multiple TCP packets can add considerable overhead compared to sending it in a single packet. I wanted to benchmark the difference between the two buffering modes, so I set up the following environment:
$ virtualenv-2.6 tcptest
$ cd tcptest
$ ./bin/easy_install Paste
and used the following script to run a simple WSGI app that returns 15 HTTP headers and a trivial body:

$ cat tcptest.py
def simple_app(environ, start_response):
    status = '200 OK'
    headers = [
        ('Content-type', 'text/plain'),
        ('Content-length', '4'),
        ('Server', 'paste.httpserver'),
        ('Date', 'Wed, 23 Feb 2011 15:17:48 GMT'),
        ('Last-Modified', 'Wed, 23 Feb 2011 11:15:06 GMT'),
        ('Etag', '"13cc73a-13591-49cf135880280"'),
        ('X-Foo1', 'bar1'),
        ('X-Foo2', 'bar2'),
        ('X-Foo3', 'bar3'),
        ('X-Foo4', 'bar4'),
        ('X-Foo5', 'bar5'),
        ('X-Foo6', 'bar6'),
        ('X-Foo7', 'bar7'),
        ('X-Foo8', 'bar8'),
        ('X-Foo9', 'bar9'),
    ]
    start_response(status, headers)
    return ['pong']

if __name__ == '__main__':
    import sys
    from paste import httpserver
    if sys.argv[1] == 'buffered':
        print "Using buffered I/O for writing."
        httpserver.WSGIHandler.wbufsize = -1
    else:
        print "Using unbuffered I/O for writing (default)"
    httpserver.serve(simple_app, host=sys.argv[2], port=sys.argv[3])
To benchmark the difference I started the script with both unbuffered and buffered I/O and ran ApacheBench (ab) against it, using a single concurrent client to issue 5000 requests and measuring the requests per second the server achieved.
$ ./bin/python tcptest.py unbuffered 192.168.0.1 8000
Using unbuffered I/O for writing (default)
serving on http://192.168.0.1:8000
$ ab -c 1 -n 5000 http://192.168.0.1:8000/ping
...
Requests per second:    1036.11 [#/sec] (mean)
$ ./bin/python tcptest.py buffered 192.168.0.1 8000
Using buffered I/O for writing.
serving on http://192.168.0.1:8000
$ ab -c 1 -n 5000 http://192.168.0.1:8000/ping
...
Requests per second:    1893.12 [#/sec] (mean)
The absolute numbers are specific to my setup (a MacBook Pro) and not very interesting, but the relative difference in requests per second is quite significant. This is especially true for small responses, where the HTTP headers dominate the overall response size.
All implementations that inherit from BaseHTTPServer.BaseHTTPRequestHandler without modifying the write buffering suffer from this issue, including at least paste.httpserver and SimpleHTTPServer in the standard library. The wsgiref implementation in the standard library has the same underlying issue but suffers from it to a lesser degree because of the way it writes the HTTP headers: paste.httpserver iterates over the headers and calls .write() for each one, whereas wsgiref (actually wsgiref.headers.Headers) builds a single string containing (most of) the headers and sends it with a single .write().
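The difference between the two header-writing styles can be sketched with a small counting exercise (Python 3; the CountingBuffer class and the header list are my own illustration). On an unbuffered wfile every .write() becomes a separate send(), so the per-header style produces one packet per header while the joined style produces one:

```python
import io

headers = [('Content-Type', 'text/plain'),
           ('Content-Length', '4'),
           ('X-Foo', 'bar')]

class CountingBuffer(io.BytesIO):
    """Counts write() calls, mimicking an unbuffered socket file
    where each write() turns into a separate send()."""
    def __init__(self):
        super().__init__()
        self.writes = 0

    def write(self, data):
        self.writes += 1
        return super().write(data)

# paste.httpserver style: one write per header line.
chatty = CountingBuffer()
chatty.write(b'HTTP/1.0 200 OK\r\n')
for name, value in headers:
    chatty.write(('%s: %s\r\n' % (name, value)).encode('ascii'))
chatty.write(b'\r\n')

# wsgiref style: join the headers and write once.
terse = CountingBuffer()
blob = 'HTTP/1.0 200 OK\r\n' + ''.join(
    '%s: %s\r\n' % (name, value) for name, value in headers) + '\r\n'
terse.write(blob.encode('ascii'))

print(chatty.writes, terse.writes)  # prints: 5 1
```

Both buffers end up holding byte-for-byte identical responses; only the number of writes (and hence, on an unbuffered socket, the number of packets) differs.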
Recent HAProxy releases should cope better with backends that split the response into multiple packets, but given the performance improvement it may still be worthwhile to change the buffering mode in Python HTTP servers that have this issue.