Last year, I took some of the money I made from running this website and bought myself a DSLR camera off of Amazon. It was the most money I had ever spent on anything for myself1. I couldn’t really say why I decided to buy a camera out of the blue, but I think I have a better grasp on the reasons now. First of all, I’ve always been sort of interested in photography and optics. I just liked taking pictures of stuff, even if it was with my pinhole cell phone camera or the webcam on my laptop. I knew a lot more about optical physics and lenses than most kids my age too. In college, I posted a lot of photos of architecture at Berkeley and things I saw at department stores on my status update blog. And before that, I posted cell phone pictures of my day-to-day experiences right here on RogerHub.
I was born with hyperopia, or far-sightedness as it’s known colloquially. In first grade, I got my first pair of glasses. Glasses didn’t make anything much clearer, but I wore them all the same. With those glasses, I felt like I was looking at a computer screen watching somebody else live their life, and I could control their actions without any fear of embarrassment or harm to myself. This was hardly a surprise, given how long I used to spend in front of the computer every afternoon.
The hyperopia gradually turned to myopia, and then my eyes were just like everybody else’s. Every hour I sat in front of a screen on the Internet or reading a book, I thought I was draining away the small amount of visual clarity I had left in my eyes. Some days, I wanted more than anything else to wake up and see clearly once again. The guilt occasionally turned to depression, but that never stopped me from sitting on the computer all day. I’m still unsure whether ruining my vision was worth the precocious programming experience I got in return, but I probably would have done the same thing if I could do it all over again.
During my first year in college, there was this guy who sat in the very first row of one of my lectures every day. You could tell his vision was shit from the heft of his glasses and the way he’d zoom in on the lecture slides on his MacBook2. Once in a while, I’d look over at his usual seat and check out what he was doing. Among his usual activities were TA’ing for an introductory freshman computer science course, scrolling through the articles on his Google Reader, and shopping for nice cameras. At the time, I thought it was so lamentably ironic how a guy with that kind of vision could be so interested in taking pictures. But to a lesser degree, I was the same way. The surrogate eye of a digital optical sensor provided all of the exciting clarity that our own eyes could not.
Another part of me honestly just wanted to sink some cash. I was bored. In college, housing and entertainment (e.g. reading and learning) were already provided for. I didn’t have much time to buy expensive clothes, and I already bought every laptop upgrade I had ever wanted. Making money is a lot more rewarding when you’ve got something to save for. My camera became that thing.
So, I got my camera, and I was immediately disappointed. Photography is an expensive hobby, and it seemed like expensive hobbies were the only kind I had been picking up recently. I took a look at other people’s photos on 500px and flickr, and noticed that they were all using bigger cameras with bigger, full-frame sensors and expensive, long lenses with gaping wide apertures. Besides that, it seemed like you’d need a remote trigger, a tripod, an enormous flash unit, an external backup hard drive, darkroom software, and a dozen other accessories to even get started. I didn’t even own a bag. Moreover, I didn’t live next to a Canadian lake or in a cabin on some foggy mountain range. I certainly didn’t have adorable toddlers or pets either. All I had to take pictures of were my floormates and the people walking by underneath my fourth-floor window. I thought I just needed to buy more stuff.
Google+ introduced a revamp of their photos service just a few months ago. Included in the update was a new auto-enhancement feature that they dubbed “your darkroom in a datacenter”. I was skeptical of the service at first. Image enhancement usually involves trying to grab more detail from an artifact-ridden jpeg and applying ostentatious colors/vignetting all around. As it turns out, this particular update was more philosophy than software.
By the time the new G+ photo features were released, I already had some experience post-processing raw sensor data in darkroom software. If you’re not familiar with the process, post-processing refers to intentional manipulation of the data that is recorded directly by the image sensor in order to fix exposure and white balance or get a particular effect in your pictures. Processing raw data is usually more flexible than manipulating an image in photoshop because camera sensors usually record 12 or even 14 bits of data per pixel in a raw file, compared to the 8 bits per channel in a jpeg. Recording the raw sensor data gives you a lot more room to mess up, because you can still recover a lot of data from a bad shot later in post-processing.
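To see why those extra bits matter, here’s a toy sketch (the numbers are chosen purely for illustration and don’t come from any real camera or file format): a dim shadow gradient quantized at a raw-like 14-bit depth keeps every tonal step distinct, while the same gradient quantized to 8 bits collapses most of those steps together, so there’s nothing left to recover when you push the exposure later.

```python
import numpy as np

# Hypothetical illustration: 32 slightly different shadow tones,
# all sitting at the bottom 1/64th of the sensor's full scale.
scene = np.linspace(0.0, 1.0 / 64.0, 32)

raw_codes = np.round(scene * (2**14 - 1))   # 14-bit quantization (raw-style)
jpeg_codes = np.round(scene * (2**8 - 1))   # 8-bit quantization (jpeg-style)

# When you "push" the exposure in post-processing, you can only bring back
# the tonal steps the file actually kept.
print(len(np.unique(raw_codes)))   # 32 — every tone still distinct
print(len(np.unique(jpeg_codes)))  # 5  — most tones collapsed together
```

This is the posterization you see when you try to brighten the shadows of a jpeg: whole ranges of distinct tones were already rounded to the same value at capture time.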
I took a look at Google’s post-processing techniques by uploading a few pictures I took and had already processed myself. The results were surprisingly good. I’m convinced that Google designed their processing software with a fundamentally different philosophy than the one many people, including myself, held about the way a photo should look. G+ applies a few easily distinguishable modifications to just about any photo you throw at it. Google tries to:
- Recover detail from overexposed (all white) and underexposed (all black) patches
- Burn the edges of the photo, especially the parts that are out of focus
- Add roughness and relief to detailed surfaces like skin, carpet, and wood
- Brighten faces where it can find them
- Lift the darkest blacks and tone down the brightest whites
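That last adjustment is easy to approximate yourself. This is not Google’s actual algorithm (theirs is surely far more sophisticated), just one simple way to get a similar look: linearly remap pixel values so pure black becomes a dark gray and pure white becomes a near-white, with the cutoff values picked arbitrarily for the example.

```python
import numpy as np

def lift_blacks_mute_whites(pixels, black=0.06, white=0.97):
    """Linearly remap [0, 1] pixel values into [black, white].

    `black` and `white` are made-up example endpoints, not values
    from any real enhancement pipeline.
    """
    pixels = np.clip(pixels, 0.0, 1.0)
    return black + pixels * (white - black)

image = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
print(lift_blacks_mute_whites(image))
# pure black (0.0) becomes 0.06, pure white (1.0) becomes 0.97
```

The effect is a slightly flattened dynamic range: nothing in the output is crushed to pure black or blown out to pure white, which tends to read as "recovered detail" at the extremes.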
In particular, the algorithm never messes with the white balance or creates any unrealistic washed-out or excessively vibrant colors. Their primary goal seems to be to recover as much detail and clarity as possible, which sounds like a fairly standard thing to do. However, this isn’t always the case, especially with the artistic photos that people usually admire.
Google+ photos doesn’t care about artistic effects3 or faithful reproductions. Its algorithms are more concerned with providing the most information per unit area of the photo, taking into account the differential brightness and contrasts of different desktop, tablet, and smartphone screens. The goal is to make it easy to recognize and remember the people in your pictures, even if it means changing the look of the environment around them.
I think this is an excellent philosophy and reveals an important fact to keep in mind: normal people, like you and me, who aren’t professional photographers, should focus on taking pictures of people, not food or sunsets or things. In the long run, portraits are the photos you enjoy and value the most. Silly practices like shooting in RAW on a vacation or day trip just get in the way of this goal4. The tools on G+ may not give you the flexibility or recovery power to do all the artistic photographic effects you want to do, but they do an okay job of automatic processing and save you a bunch of time that you could be spending taking more pictures. So stop shying away when I point my camera at your face. Those pictures are not for you. They’re for your kids, once you get around to that.
- The rest of the pricey stuff didn’t really count: housing, my tuition, etc. ↩
- I could make out the text from a few rows back. ↩
- Think about all of the filters on instagram that flatten the dynamic range or intentionally make photos grainy. ↩
- My camera is set to basic quality and small image resolution, which spits out pictures that are usually around only 1 megabyte, versus the 25 megabyte size of its RAW files. Small size makes the photos easier to move around and upload, which is all I ever do with them now anyway. ↩