14 January 2011

nginx & php-fpm - upstream sent too big header while reading response header from upstream

Out of nowhere, this morning my nginx server started returning the following error:
2011/01/14 02:11:10 [error] 20350#0: *1 upstream sent too big header while reading response header from upstream, client: XX.XX.XX.XX, server: pictures4.net, request: "GET /albums.php?page=list&parent=156 HTTP/1.0", upstream: "fastcgi://127.0.0.1:9000", host: "www.pictures4.net"
I started googling for this error. Most of the answers were related to proxy caching, but since I'm not using the proxy module, that wasn't relevant. Digging further, I found out (here) that there are similar buffers for FastCGI as well.

This is what the documentation says:
fastcgi_buffer_size

syntax: fastcgi_buffer_size the_size
default: fastcgi_buffer_size 4k/8k
context: http, server, location
This directive sets the buffer size for reading the header of the backend FastCGI process.

By default, the buffer size is equal to the size of one buffer in fastcgi_buffers. This directive allows you to set it to an arbitrary value.

fastcgi_buffers

syntax: fastcgi_buffers the_number is_size
default: fastcgi_buffers 8 4k/8k
context: http, server, location
This directive sets the number and the size of the buffers into which the reply from the FastCGI process in the backend is read.

By default, the size of each buffer is equal to the OS page size. Depending on the platform and architecture this value is one of 4k, 8k or 16k.
On Linux you can get the page size issuing:

getconf PAGESIZE

it returns the page size in bytes.

Example:
fastcgi_buffers 256 4k; # Sets the buffer size to 4k + 256 * 4k = 1028k

This means that any reply by the FastCGI process in the backend greater than 1M goes to disk. Only replies below 1M are handled directly in memory.
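The arithmetic behind that 1028k figure can be checked quickly: by default `fastcgi_buffer_size` matches one buffer from `fastcgi_buffers`, so the per-connection in-memory total is one header buffer plus the 256 body buffers.

```shell
# One 4k header buffer (fastcgi_buffer_size) plus 256 body buffers of 4k each
echo $(( 4 + 256 * 4 ))k
# prints: 1028k
```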
So, it seems I hadn't completely configured the nginx server... after adding my buffer values, everything was working just fine.
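For reference, the fix boils down to raising these directives in the server or location block that passes requests to php-fpm. The values below are illustrative only (the post doesn't state the exact numbers I used); the key point is that `fastcgi_buffer_size` must be large enough to hold the full response headers coming back from PHP:

```nginx
location ~ \.php$ {
    fastcgi_pass 127.0.0.1:9000;

    # Illustrative values -- tune to your own header/response sizes.
    fastcgi_buffer_size 32k;   # buffer for the FastCGI response headers
    fastcgi_buffers 8 16k;     # buffers for the response body
}
```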
