[UPDATE] Since writing this, I had another huge spike in traffic (20,000 visitors in a day), which again caused a crash. As a result, I installed WP Super Cache, which is much more robust than WP-Cache. I also adjusted the theme to use a static stylesheet (style.css) rather than a dynamic one (style.php). Between the cache and the static stylesheet, most page views now put no load at all on the PHP and MySQL back end, vastly improving the situation. The use of style.php vs. style.css varies by theme; it’s an option in the Tiga theme used here, but for many themes the change is unnecessary.
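The exact markup varies by theme, but the fix amounts to a one-line change in the theme’s header template, something like this (a sketch, not Tiga’s exact markup; Tiga exposes it as a theme option):

    <?php /* Before: the stylesheet is rebuilt by PHP on every request */ ?>
    <link rel="stylesheet" type="text/css"
          href="<?php bloginfo('template_url'); ?>/style.php" />

    <?php /* After: a static file the web server can send directly,
             touching neither PHP nor MySQL */ ?>
    <link rel="stylesheet" type="text/css"
          href="<?php bloginfo('stylesheet_url'); ?>" />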
…..
Sometime this morning my blog started acting a bit wonky, giving me errors about not being able to connect to the database, and then finally becoming inaccessible. I could not even get into my server control panel, and had to request a power cycle.
After it came back up, I found out why it was having problems. I’d been linked from reddit.com, a user-generated news site, and was getting a new connection every few seconds; several thousand connections had already been made, and the sheer number of requests was crashing the server. All of the requests were for this page:
https://cowboyprogramming.com/2007/01/04/mature-optimization-2/
But why was this making the server crash? Well, the most immediate problem was that WP-Cache was not enabled. This meant that every page view had to be rebuilt from the MySQL database using PHP. If requests come in faster than the page can be built, they get backed up. Eventually things grind to a halt, since the basic workings of WordPress also require database connections. And since cPanel also uses a database, it too stops working. I could probably improve things somewhat by increasing the maximum number of connections, but really the pages should be cached.
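The idea behind page caching is simple. This is just a minimal sketch of the technique, not WP-Cache’s actual implementation:

    <?php
    // Minimal sketch of whole-page caching (illustrative, not WP-Cache's code).
    // Serve a saved copy of the page if a fresh one exists; otherwise build
    // the page once, save it, and let later requests skip PHP/MySQL entirely.
    $cache_file = '/tmp/cache_' . md5($_SERVER['REQUEST_URI']) . '.html';
    $max_age    = 3600; // rebuild at most once an hour

    if (file_exists($cache_file) && (time() - filemtime($cache_file)) < $max_age) {
        readfile($cache_file);   // cheap: one file read, no database work
        exit;
    }

    ob_start();                  // capture the expensively-built page
    // ... normal WordPress page generation runs here ...
    file_put_contents($cache_file, ob_get_contents());
    ob_end_flush();              // the first visitor still gets the page
    ?>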
So I re-enabled the cache, and things seemed to go swimmingly. I then took a look at what was being downloaded as part of the page, and was rather surprised to discover that the SyntaxHighlighter plugin was serving up a bunch of JavaScript even though it was not used on this particular page. That seems like a bit of a bug. I was able to trim this down quite a bit by disabling every language except C++; later I hacked it to include the JavaScript only when needed.
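The hack amounts to checking whether the post actually contains highlighted code before emitting the script tags. Something along these lines; the function name, the marker it looks for, and the script paths are illustrative, not the plugin’s real API:

    <?php
    // Sketch: emit the highlighter JavaScript only for posts that contain a
    // highlighted code block. Names here are illustrative, not the plugin's API.
    function maybe_add_highlighter_scripts() {
        global $post;
        // The highlighter marks its blocks with a language class, so a quick
        // substring check on the post body tells us if the scripts are needed.
        if ($post && strpos($post->post_content, 'class="c++"') !== false) {
            echo '<script type="text/javascript" src="/scripts/shCore.js"></script>' . "\n";
            echo '<script type="text/javascript" src="/scripts/shBrushCpp.js"></script>' . "\n";
        }
    }
    add_action('wp_head', 'maybe_add_highlighter_scripts');
    ?>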
I then noticed that my header image and my background image were about 100K each, which is rather silly. I took them both into Photoshop and tweaked them down to about 15K. The background image was a GIF, but it was still compressible by a significant amount just by setting “lossy” to a small number like 4 or 5, with no perceptible degradation.
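I did this in Photoshop’s “Save for Web” dialog, but the same kind of saving can be had programmatically. GD has no lossy GIF mode, so this sketch cuts file size by reducing the palette instead; the file names are placeholders:

    <?php
    // Sketch: shrink a GIF by dithering it down to a smaller palette using
    // PHP's GD extension. Not what Photoshop's "lossy" slider does, but a
    // similar trade of slight quality loss for a much smaller file.
    $src = imagecreatefromgif('background.gif');      // placeholder file name
    $w   = imagesx($src);
    $h   = imagesy($src);

    // Copy into a truecolor image so the palette reduction has full color
    // information to work with.
    $img = imagecreatetruecolor($w, $h);
    imagecopy($img, $src, 0, 0, 0, 0, $w, $h);

    imagetruecolortopalette($img, true, 64);          // dither down to 64 colors
    imagegif($img, 'background_small.gif');

    imagedestroy($src);
    imagedestroy($img);
    ?>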
Finally, my Reddit traffic was having a knock-on effect in South Korea: I had linked to a 2.5MB PDF file hosted at Seoul National University, and it had become inaccessible. I’m not sure, but I suspect a number of people were trying to download it (I was at 5,000 visitors by then). I wanted to re-host it locally, but I could not even read it myself, so I just deleted the link. Twenty minutes later things had died down enough that I was able to grab the file, re-host it on my server, and put up a new link.
So my server has been up about six hours now, and in that time it has been hit fairly constantly from Reddit: 70,000 accesses (3.17 per second), accounting for 724MB. That’s about 20 times the normal daily traffic for this site.
Here’s a graphical look at what happened:
As you can see, bandwidth is not really the problem. It peaks at 1Mb/sec, and my server can serve at 100Mb/sec; averaged over the six hours it’s far lower still (724MB in six hours works out to about 0.27Mb/sec). You could probably handle this load on a 768K DSL line.
The server degrades severely at around 8:00AM, and I reboot it around 8:20. I re-enable the cache a few minutes after that, but this has no effect on the bandwidth, since the pages being served are identical; it just takes the load off the PHP/MySQL back end.
Then at around 10:00AM I start optimizing, shrinking the background and header images and trimming the SyntaxHighlighter JavaScript, which more than halves the bandwidth.