MongoDB power usage on laptop

I’m setting up a Chef-based Ubuntu VM on my laptop, so I can get sandboxing and all the usual Ubuntu tools. Most things work on OS X too, but it’s a second-class citizen compared to poster child Ubuntu. I have Mongo and a few services set up there for school stuff, and I noticed that Mongo regularly uses a significant amount of CPU time, even when it’s not handling any requests.

I attached strace to it with strace -f -p 1056, and here’s what I saw:

[pid 1118] select(12, [11], NULL, NULL, {0, 10000} <unfinished ...>
[pid 1056] select(11, [9 10], NULL, NULL, {0, 10000} <unfinished ...>
[pid 1118] <... select resumed> ) = 0 (Timeout)
[pid 1056] <... select resumed> ) = 0 (Timeout)
[pid 1118] select(12, [11], NULL, NULL, {0, 10000} <unfinished ...>
[pid 1056] select(11, [9 10], NULL, NULL, {0, 10000} <unfinished ...>
[pid 1112] <... nanosleep resumed> 0x7fd3895319a0) = 0
[pid 1112] nanosleep({0, 34000000}, <unfinished ...>
[pid 1118] <... select resumed> ) = 0 (Timeout)
[pid 1056] <... select resumed> ) = 0 (Timeout)
[pid 1118] select(12, [11], NULL, NULL, {0, 10000} <unfinished ...>
[pid 1056] select(11, [9 10], NULL, NULL, {0, 10000} <unfinished ...>
[pid 1118] <... select resumed> ) = 0 (Timeout)
[pid 1056] <... select resumed> ) = 0 (Timeout)
[pid 1118] select(12, [11], NULL, NULL, {0, 10000} <unfinished ...>
[pid 1056] select(11, [9 10], NULL, NULL, {0, 10000} <unfinished ...>
[pid 1118] <... select resumed> ) = 0 (Timeout)
[pid 1056] <... select resumed> ) = 0 (Timeout)
[pid 1118] select(12, [11], NULL, NULL, {0, 10000} <unfinished ...>
[pid 1056] select(11, [9 10], NULL, NULL, {0, 10000} <unfinished ...>
[pid 1112] <... nanosleep resumed> 0x7fd3895319a0) = 0

I looked online for an explanation and found this bug about MongoDB power usage. Apparently, Mongo calls select() with short timeouts to drive the internal clock behind everything that needs wall time. Somebody proposed alternative timekeeping methods that don’t require this kind of tight looping, but it looks like the Mongo team is busy with other things right now and won’t consider the patch.
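
To get a rough sense of how much of the CPU time is just these timeouts, strace’s syscall summary mode is handy; something like this (using the same PID as above) should show select() dominating the call counts:

# Let strace count syscalls for a few seconds, then detach and print a summary.
$ sudo timeout 5 strace -c -f -p 1056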

So I added a clause to my Chef config to keep the MongoDB service from starting automatically on boot. I had to specify the Upstart service provider explicitly in the configuration, since the mongodb package installs both a SysV init script and an Upstart manifest.
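
For reference, the effect of that Chef clause is roughly what you’d get from an Upstart override done by hand; this is a sketch of the underlying mechanism, not the actual Chef resource:

# Mark the Upstart job as manual so it won't start at boot, then stop it.
$ echo manual | sudo tee /etc/init/mongodb.override
$ sudo service mongodb stop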

Archiving and compressing a dynamic web application

From 1999 to mid-2011, the Daily Cal used an in-house CMS to run the website. It contains around 65,000 individual articles and thousands of images. But ever since we moved to WordPress, the old system has been collecting dust in the corner. It was about time that all of the old CMS content was archived as static HTML, so that it could be served indefinitely in the future as server software evolves. To accomplish this, I set up a Linux virtual machine with my trusty vagrant up utility on my spare home server.
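
The VM setup itself is nothing special; with a Vagrantfile in place, it’s the usual routine (the box name and URL here are just placeholders for whatever base box you use):

# Bring up the scratch VM and get a shell in it.
$ vagrant init precise64 http://files.vagrantup.com/precise64.box
$ vagrant up
$ vagrant ssh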

Retrieving the application data

The CMS’s production server did not actually have enough free disk space to create a tarball copy of the application, and with on the order of 10,000 files involved, a recursive copy via scp would have been too slow. To speed up the transfer, I used a little GNU/Linux philosophy and piped a few things together:

$ screen
$ ssh root@domain "tar -c /srv/archive" | tar xvf -

I decided that compression was not going to be very helpful because most of the data was jpeg and png images, which are not very compressible. Enabling compression would just slow things down, since the bottleneck would become the CPU rather than the network.

Preparing the application

The CMS had very few dependencies, which is not surprising given the state of PHP 10 years ago. I set up a simple nginx+php-fpm+mysql configuration with a single PHP worker thread. The crawl operation would be executed in serial anyway, so multiple workers would not be useful.
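
Pinning php-fpm to a single worker is just a pool setting; something like this would do it (the pool path and service name are Ubuntu’s defaults and an assumption on my part):

# Use a static pool with exactly one worker, since the crawl is serial anyway.
$ sudo sed -i -e 's/^pm = .*/pm = static/' \
              -e 's/^pm\.max_children = .*/pm.max_children = 1/' \
              /etc/php5/fpm/pool.d/www.conf
$ sudo service php5-fpm restart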

I also added an entry to the VM’s hosts file for the hostname of the production server. The server’s hostname was hardcoded in a few places, and I didn’t want the crawl operation sending requests out to the Internet. Additionally, I set up a secondary web server configuration that served generated HTML from the output directory and static assets from the application data, so that I could preview the results as they were being generated.
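
Concretely, the hosts entry was something along these lines, assuming the application answers on the VM’s loopback interface:

# Keep requests for the production hostname on the local VM.
$ echo "127.0.0.1 archive.dailycal.org" | sudo tee -a /etc/hosts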

Crawling the application

Generating the static pages was the hardest part of the archival process, and it took me around 5 retries to get it right. The primary purpose of the whole archival operation was to remove the dynamic portions of the web application, which meant that every single conceivable page request had to be run against the application and its output saved as a static file. I picked wget as my archival tool.

Articles in the CMS were stored in one of two places. However, the format of the URL was luckily the same for both. I dumped article IDs from the MySQL database and created a seed file of article URLs:

$ echo "SELECT article_id FROM dailycal.article;" | mysql -u root > article_ids
$ echo "SELECT id FROM dailycal.h_articles;" | mysql -u root  >> article_ids
$ sed -e 's/^/http:\/\/archive.dailycal.org\/article.php?id=/' -i article_ids

I didn’t see much point in setting a root password for the local MySQL installation, since this was a single-use VM anyway. Sed ate through the 65,326 article IDs in seconds. I then created a second seed file containing just the URL of the application root, from where (nearly) all other pages would be crawlable.

On the crawler’s final run, I set the following command line switches:

  • --trust-server-names – Sets the output file name according to the last request in a redirection chain. By default, wget uses the first request.
  • --append-output=generator.log – Outputs progress information to a file, so that I can run the main process in screen and monitor it with tail in follow mode.
  • --input-file=source.txt – Specifies the seed file of URLs.
  • --timestamping – Sets the file modification time according to HTTP headers, if available.
  • --no-host-directories – Disables the creation of per-host directories.
  • --default-page=index.php – Defines the name for request paths that end in a slash.
  • --html-extension – Appends the html file extension to all generated pages, even if another extension already exists.
  • --domains archive.dailycal.org – Restricts the crawl to only the application domain.

Additionally, I set the following switches to crawl through the links on the application’s root page; a reconstruction of the combined invocation follows the list.

  • -r – Enables recursive downloading.
  • --mirror – Sets infinite recursion depth and other useful options.
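
Put together, the invocation looked something like this. The seed file name differs between the article run and the root-page run, and this is reconstructed from the switches above rather than the exact command I ran:

$ wget --trust-server-names --append-output=generator.log \
       --input-file=source.txt --timestamping --no-host-directories \
       --default-page=index.php --html-extension \
       --domains archive.dailycal.org -r --mirror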

In total, 543MB of non-article HTML and 2.1GB of article HTML were generated. Those are reasonable sizes, given how many URLs were crawled, but they were still a bit unwieldy to store, so I looked for a way to shrink them.

Serving from a compressed archive

I knew a couple of things about the generated HTML:

  1. There was tons of redundancy. The articles shared much of their header and footer markup.
  2. Virtually all of the data consisted of printable characters and whitespace, which means considerably less than 8 bits of unique information per byte.

Both of these factors made the HTML a good candidate for archive compression. My first thought was tar+gzip, but gzip compresses the whole tar stream as one unit, not file by file. To extract a single file, you’d have to decompress all the data that comes before it, so a request for the last file in the archive could take 15 to 20 seconds!

Luckily, the zip file format maintains an index of individual files and compresses them individually, which means that extracting a single file is fast no matter where in the archive it is located. I opted to use fuse-zip, a FUSE filesystem that lets you mount zip archives directly onto the file system. Fully compressed, the 543MB of pages became a 92MB zip archive (83% deflation), and the 2.1GB of articles became a 407MB zip archive (81% deflation).
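
The packing and mounting steps looked roughly like this; the archive name, source directory, and mount point are placeholders rather than the paths from the actual run:

# Pack the generated article HTML into a zip archive.
$ zip -r articles.zip articles-html
# Mount it read-only with fuse-zip, so the web server can read individual
# pages straight out of the compressed archive.
$ mkdir -p ~/articles
$ fuse-zip -r articles.zip ~/articles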

After everything was finished, I uploaded the newly created HTML archives to a new production server and shut down the old CMS. From there, a decade’s worth of archived articles can be served indefinitely into the future.
