I attached strace to it with strace -f -p 1056, and here’s what I saw:
[pid 1118] select(12, [11], NULL, NULL, {0, 10000} <unfinished ...>
[pid 1056] select(11, [9 10], NULL, NULL, {0, 10000} <unfinished ...>
[pid 1118] <... select resumed> ) = 0 (Timeout)
[pid 1056] <... select resumed> ) = 0 (Timeout)
[pid 1118] select(12, [11], NULL, NULL, {0, 10000} <unfinished ...>
[pid 1056] select(11, [9 10], NULL, NULL, {0, 10000} <unfinished ...>
[pid 1112] <... nanosleep resumed> 0x7fd3895319a0) = 0
[pid 1112] nanosleep({0, 34000000}, <unfinished ...>
[pid 1118] <... select resumed> ) = 0 (Timeout)
[pid 1056] <... select resumed> ) = 0 (Timeout)
[pid 1118] select(12, [11], NULL, NULL, {0, 10000} <unfinished ...>
[pid 1056] select(11, [9 10], NULL, NULL, {0, 10000} <unfinished ...>
[pid 1118] <... select resumed> ) = 0 (Timeout)
[pid 1056] <... select resumed> ) = 0 (Timeout)
[pid 1118] select(12, [11], NULL, NULL, {0, 10000} <unfinished ...>
[pid 1056] select(11, [9 10], NULL, NULL, {0, 10000} <unfinished ...>
[pid 1118] <... select resumed> ) = 0 (Timeout)
[pid 1056] <... select resumed> ) = 0 (Timeout)
[pid 1118] select(12, [11], NULL, NULL, {0, 10000} <unfinished ...>
[pid 1056] select(11, [9 10], NULL, NULL, {0, 10000} <unfinished ...>
[pid 1112] <... nanosleep resumed> 0x7fd3895319a0) = 0
I looked online for an explanation and found this bug about MongoDB power usage. Apparently, Mongo uses select() with short timeouts as its timekeeping mechanism for everything that depends on wall-clock time. Somebody proposed alternative methods that don’t require this kind of tight looping, but it looks like the Mongo team is busy with other things right now and won’t consider the patch.
So I added a clause to my Chef config to keep the MongoDB service from starting automatically on boot. I had to specify the Upstart service provider explicitly, since the mongodb package installs both a SysV init script and an Upstart job.
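If you’re not managing the machine with Chef, the manual equivalent on Ubuntu looks roughly like this (a sketch, assuming the job is named mongodb, as it is for the stock Ubuntu package):
# Tell Upstart not to start the job automatically on boot.
$ echo manual | sudo tee /etc/init/mongodb.override

# Disable the SysV init script as well.
$ sudo update-rc.d mongodb disable

# Stop the instance that is currently running.
$ sudo service mongodb stop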
You can run Vagrant on your laptop, but I think it’s the wrong piece of hardware for the job. Long-running VM batch jobs and build-environment VMs should run on headless servers, where you don’t have to worry about excessive heat, power consumption, heavy I/O, or keeping your laptop awake so it doesn’t suspend. My server at home is set up with:
You don’t need great hardware to run a couple of Linux VMs. Since my server is basically hidden in a corner, its noisy fans aren’t a problem, and they do a great job of keeping everything cool under load. RAID mirroring will (I hope) provide high availability, and since the server’s data is easily replaceable, I don’t need to worry about backups. Setting up your own server is usually cheaper than persistent storage on a public cloud like AWS, but your mileage may vary.
Vagrant configuration is a single Ruby file named Vagrantfile in the working directory of your vagrant process. My basic Vagrantfile just sets up a virtual machine from Vagrant’s preconfigured Ubuntu 12.04 LTS image. They offer other preconfigured images, but this is the one I’m most familiar with.
# Vagrantfile
Vagrant.configure("2") do |config|
  # Every Vagrant virtual environment requires a box to build off of.
  config.vm.box = "precise32"

  # The url from where the 'config.vm.box' box will be fetched if it
  # doesn't already exist on the user's system.
  config.vm.box_url = "http://files.vagrantup.com/precise32.box"

  # Forward port 8080 on the host to port 8080 on the guest.
  config.vm.network :forwarded_port, guest: 8080, host: 8080

  # Enable public network access from the VM. This is required so that
  # the machine can access the Internet and download required packages.
  config.vm.network :public_network
end
For long-running batch jobs, I like keeping a CPU execution cap on my VMs so that they don’t overwork the system. The cap keeps temperatures down and prevents a VM from interfering with other server processes. You can add an execution cap (VirtualBox only) by appending the following before the end of your primary configuration block:
# Adding a CPU Execution Cap
Vagrant.configure("2") do |config|
  ...
  config.vm.provider "virtualbox" do |v|
    v.customize ["modifyvm", :id, "--cpuexecutioncap", "40"]
  end
end
After setting up Vagrant’s configuration, create a new directory containing only the Vagrantfile and run vagrant up to set up the VM. Other useful commands include:
vagrant ssh — Opens a shell session to the VM
vagrant halt — Halts the VM gracefully (Vagrant will connect via SSH)
vagrant status — Checks the current status of the VM
vagrant destroy — Destroys the VM
Finally, to set up the build environment automatically every time you create a new Vagrant VM, you can write provisioners. Vagrant supports complex provisioning frameworks like Puppet and Chef, but you can also write a provisioner that’s just a shell script. To do so, add the following inside your Vagrantfile:
Vagrant.configure("2") do |config|
  ...
  config.vm.provision :shell, :path => "bootstrap.sh"
end
Then just stick your provisioner script next to your Vagrantfile, and it will be executed whenever Vagrant provisions the VM. You can write commands to fetch package lists and upgrade system software, or to install build dependencies and check out source code. By default, Vagrant’s working directory is also mounted in the guest at /vagrant, so you can refer to other provisioner dependencies through that shared folder.
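For illustration, a minimal bootstrap.sh might look something like this (a sketch; the package names and repository URL are placeholders, not from an actual project):
#!/usr/bin/env bash
# bootstrap.sh -- runs as root inside the guest when Vagrant provisions it.
set -e

# Fetch package lists and upgrade system software.
apt-get update
apt-get -y upgrade

# Install build dependencies (placeholders -- adjust for your project).
apt-get -y install build-essential git

# Check out source code into the shared /vagrant folder if it isn't there yet.
if [ ! -d /vagrant/myproject ]; then
    git clone https://example.com/myproject.git /vagrant/myproject
fi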
Vagrant uses a single public/private keypair for all of its default images. The private key can usually be found in your home directory as ~/.vagrant.d/insecure_private_key. You can add it to your ssh-agent and open your own SSH connections to your VM without Vagrant’s help.
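For example, with the default VirtualBox provider (which forwards guest port 22 to host port 2222), that looks something like this:
$ ssh-add ~/.vagrant.d/insecure_private_key
$ ssh -p 2222 vagrant@localhost
If the port has been remapped, vagrant ssh-config will print the exact host, port, and key that Vagrant itself would use.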
Even if you accidentally mess up your Vagrant configuration, you can use VirtualBox’s built-in command-line tools to fix boot or SSH daemon issues.
$ VBoxManage list vms
...
$ VBoxManage controlvm <name|uuid> pause|reset|poweroff|etc
...
$ VBoxHeadless -startvm <name|uuid> --vnc
... (connect via VNC)
The great thing about Vagrant’s VM-provider abstraction layer is that you can grab the VM images from VirtualBox and boot them on another server with VirtualBox installed, without Vagrant at all. Vagrant is an excellent support tool for programmers (and, combined with SSH tunneling, it is great for web developers as well). If you don’t already have some sort of VM infrastructure supporting you, you should look into what Vagrant can do for you.
Elementary OS Luna is the first Linux distribution I’ve used where I don’t feel like changing a thing. The desktop environment defaults are excellent, and all of Ubuntu’s great hardware support, community PPAs, and familiar package manager are available. At the same time, there is a lot of graphical magic, window animation, and attention to detail that is quite similar to OS X. There is a minimal amount of hand-holding, and it’s easy to get used to the desktop because of the intuitive keyboard shortcuts and great application integration.
You can’t tell from just the screenshot above, but elementary OS is more than just a desktop environment. The distribution comes packaged with a bunch of custom-built applications, like its own file manager and terminal app. Other apps, like an IRC client, a social media client, and system search, are available in community PPAs. I do most of my work through the web browser, ssh, or vim. Important data on my laptop is limited to personal files on my Google Drive and a directory of projects in active development that’s regularly backed up to my personal file server. Programs can be painlessly installed from the package manager, and configuration files are symlinked from my Google Drive. I’m not very attached to the current state of my laptop at any given moment, because all the data on it is replaceable, with the exception of OS tweaks. I don’t like having to install a dozen customization packages to get my workflow the way I like it, so the out-of-box experience is very important to me.
I would say that if you’re a regular Linux user, you should at least give elementary OS Luna a try, even if it’s on a friend’s machine or in a VM. You may be surprised. I was.
I set up the routers through their web interfaces over Ethernet on my laptop. Here are some things to double-check before you hook up the devices:
After the devices are set up and hooked up in their proper positions, perform a quick AP scan with your wireless card:
$ sudo iwlist wlan0 scan
wlan0     Scan completed:
          Cell 01 - Address: XX:XX:XX....
                    Channel: ...
There should be two (or more) access points in the results, one corresponding to each of your routers. Configure your local wireless card to connect to each router in turn by specifying its MAC address in your OS’s configuration. Run diagnostics and make sure the connection is operational:
$ ip addr
... (an assigned address on the correct subnet) ...
$ curl ifconfig.me
... (your public IP) ...
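If you configure wireless by hand, one way to pin your card to a specific router is the bssid option in a wpa_supplicant network block (a sketch; the SSID, passphrase, and MAC address are placeholders):
# /etc/wpa_supplicant/wpa_supplicant.conf
network={
    ssid="HomeNet"
    psk="your-passphrase"
    bssid=XX:XX:XX:XX:XX:XX    # lock this profile to one specific AP
}

$ sudo wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant/wpa_supplicant.conf
$ sudo dhclient wlan0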
That’s it. Now breathe easier knowing you can watch Netflix in the yard without suffering degraded streams. Hoorah.
I have a few kinds of data that I actively back up and check for integrity. (Don’t forget to verify your backups, or there isn’t any point in making them at all.) Here are all the kinds of data that might be on your computer:
Several backup solutions try to back up everything. This is not a good idea. First, there are a lot of files on your computer that are easily replaceable (system files) and others that you’d rather not keep in your backup archives (program files). Second, those solutions have no way of giving extra redundancy to the things that matter most and less to the things that matter less.
In addition to these files, here are some types of data that you might not usually think about backing up:
My backup solution is a mix of free online version control sites, Google, Dropbox, and a personal file server. My code, documents (essays, forms, receipts), and configuration (bash, vim, keys, personal CA, wiki, profile pictures, etc.) are the most important parts of my backup. I sync these with Insync to my Google Drive, where I’ve rented 100GB of cloud storage. My Google Drive is regularly backed up to my personal file server, with about two weeks of retention.
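One way to implement that kind of rotating retention (a sketch, not the exact setup described here; it assumes the Drive folder is synced or mounted where the file server can read it, and all paths are placeholders) is rsync with hard-linked snapshots:
#!/usr/bin/env bash
# Nightly snapshot of the synced Google Drive folder, keeping about 2 weeks.
SRC="$HOME/GoogleDrive/"
DEST="/srv/backups/gdrive"
TODAY=$(date +%F)
mkdir -p "$DEST"

# Hard-link unchanged files against the newest snapshot to save space.
LATEST=$(ls -1d "$DEST"/20* 2>/dev/null | tail -n 1)
rsync -a --delete ${LATEST:+--link-dest="$LATEST"} "$SRC" "$DEST/$TODAY/"

# Drop snapshots older than 14 days.
find "$DEST" -maxdepth 1 -type d -name '20*' -mtime +14 -exec rm -rf {} +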
Disks and old computers are cheap. Get a high-availability file server set up in your home, and you can happily offload intensive tasks to it like virtual machines, backup services, and archival storage. Mine is configured with:
Backing up your Google Drive might sound funny, but it is a good precaution in case anything ever happens to your Google account. Additionally, most of my program code is in either a public GitHub repository or a private Bitbucket repository. Version control and social coding features like issues and pull requests give you benefits beyond simply backing up your code, and you should definitely be using some kind of VCS for any code you write.
For many of the projects that I’m actively developing, I only use VCS and my file server. Git object data should not be backed up to cloud storage services like Google Drive, because those files change too often. My vim configuration is also stored on GitHub, to take advantage of git submodules for my vim plugins.
My personal photos are stored in Google+ Photos, formerly known as Picasa. They give you 15GB of shared storage for free, and if that’s not enough, additional space is cheap as dirt. My photos don’t have another level of redundancy like my code and configuration files do. They are less important to me, and Google can be trusted to sustain itself longer than any backup solution you create yourself.
I host a single VPS with Linode (that’s an affiliate link) that contains a good amount of irreplaceable data from my blogs and the other services I host on it. Linode itself offers cheap and easy full-disk backups ($5/mo.) that I signed up for. Those backups aren’t intended for hardware failures so much as human error, because Linode already maintains high-availability redundant disk storage for all of its VPS nodes. Additionally, I back up the important parts of the server (/etc, /home, /srv, /var/log) to my personal file server, for an extra level of redundancy.
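That pull can be a small script run from the file server over SSH (a sketch; the hostname and destination paths are placeholders, and it assumes key-based root access to the VPS):
#!/usr/bin/env bash
# Pull the important directories from the VPS down to the local file server.
for dir in /etc /home /srv /var/log; do
    mkdir -p "/srv/backups/vps$dir"
    rsync -az --delete "root@vps.example.com:$dir/" "/srv/backups/vps$dir/"
done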
Any pictures I collect from online news aggregators are dumped in my Google Drive and share the same extra redundancy as my documents and personal configuration files. Larger media like videos are stored on one of my USB 3.0 flash drives, since they are regularly created and deleted.
I don’t back up system files, since Xubuntu is free and programs are only one package-manager command away. I don’t maintain extra redundancy for email, for the same reason I don’t for photos.
A final thing to consider is the confidentiality of your backups. Whenever you upload data to a free public cloud storage service, you should treat the data as if it were being anonymously released to the public. In other words, personal data, cryptographic keys, and passwords should never be uploaded unencrypted to a public backup service. Things like PGP can help in this regard.
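For example, you can encrypt an archive before it ever leaves your machine (a sketch using GnuPG symmetric encryption; the file names are placeholders):
$ tar czf - ~/secrets | gpg --symmetric --cipher-algo AES256 -o secrets.tar.gz.gpg
... (upload secrets.tar.gz.gpg to the cloud service)
$ gpg -d secrets.tar.gz.gpg | tar xzf -
... (restores the files later)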
GNU bc is a command-line calculator for Linux. It does variables ([a-z0-9_]+), the operators + - * / ^, and basic math, all through the terminal, and while it’s somewhat useful by default, it can be made a lot better. Reading man bc, you see that the program reads an environment variable, BC_ENV_ARGS, which holds a list of command-line arguments implicitly added to every invocation of bc. Quoting the man page, the files passed this way “typically contain function definitions for functions the user wants defined every time bc is run”.
alias bc='bc -l'
First off, you’ll want to alias bc with the -l command-line argument, which loads the standard math library, including the following functions:
s(x)     Sine
c(x)     Cosine
a(x)     Arctangent
l(x)     Natural logarithm
e(x)     Exponential function
j(n, x)  Bessel function
They clearly defined this minimal set with the expectation that you’d extend it, so of course, I did. In ~/.bashrc, I exported BC_ENV_ARGS=~/.bcrc and started the following in ~/.bcrc:
/* Config for bc */
scale=39
scale is a special variable in bc. It determines the number of digits after the decimal point that bc carries in its calculations, and 39 worked well for me. Then I defined some physical constants of the form k_[a-z]+:
print "\n" print "Usage: k_[?] with one of: c, g, atm (atmospheric), h, hbar, mu,\n" print " ep(silon), e (elementary charge), coulomb, me (electron),\n" print " mp, n (avogadro's), b (boltzmann), r (gas), si(gma)\n\n" k_c = 299792458 /* Speed of Light */ k_g = 6.67384 * 10^-11 /* Universal Gravitation */ k_atm = 100325 /* Atmospheric pressure */ k_h = 6.62606957 * 10^-34 /* Planck's constant */ k_hbar = 1.054571726 * 10^-34 /* H Bar */ k_mu = 1.256637061 * 10^-6 /* Vacuum permeability */ k_ep = 8.854187817 * 10^-12 /* Vacuum permittivity */ k_epsilon = 8.854187817 * 10^-12 /* Vacuum permittivity */ k_e = 1.602176565 * 10^-19 /* Elementary charge */ k_coulomb = 8.987551787 * 10^9 /* Coulomb's constant */ k_me = 9.10938294 * 10^-31 /* Rest mass of an electron */ k_mp = 1.672621777 * 10^-27 /* Rest mass of a proton */ k_n = 6.02214129 * 10^23 /* Avogadro's number */ k_b = 1.3806488 * 10^-23 /* Boltzmann's constant */ k_r = 8.3144621 /* Ideal gas constant */ k_si = 5.670373 * 10^-8 /* Stefan-Boltzmann constant */ k_sigma = 5.670373 * 10^-8 /* Stefan-Boltzmann constant */ pi = 3.1415926535897932384626433832795028841968 /* pi */
And then these:
print "Usage: s(x) c(x) t(x) as(x) ac(x) at(x)\n" print " csc(x) sec(x) cot(x) e(x)\n\n" define t(x) { return s(x)/c(x); } define as(x) { return 2*a(x/(1+sqrt(1-x^2))); } define ac(x) { return 2*a(sqrt(1-x^2)/(1+x)); } define at(x) { return a(x); } define csc(x) { return 1/s(x); } define sec(x) { return 1/c(x); } define cot(x) { return c(x)/s(x); }
I printed the default mathlib functions in the Usage message for completeness. Stuff like sqrt() works too. I use bc primarily for physics homework; for programming tasks, built-in GUI calculators often have better base-conversion and visualization features.
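With the .bcrc above loaded through BC_ENV_ARGS, a quick session looks something like this (results elided):
$ bc -l
t(pi/4)
... (approximately 1)
k_h * k_c / (500 * 10^-9)
... (energy of a 500 nm photon, in joules)
quit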