Code.RogerHub » dailycal: The programming blog at RogerHub

Exporting comments from Facebook Comments to Disqus (March 16, 2014)

About 14 months ago, The Daily Californian switched from Disqus to Facebook Comments in order to clean up the comments section of the website. But recently, the decision was reversed, and it was decided that Disqus would make a comeback. I worked with one of my Online Developers to facilitate the process.

One of the stipulations of the Facebook-to-Disqus transition was that the Online Team would transfer a year’s worth of Facebook comments over to Disqus so that they would remain on the website. At the time of writing, Facebook doesn’t natively support any kind of data export for its commenting platform, and I don’t expect that it ever will. However, it provides a reliable API to its comments database. With a bit of ingenuity and persistence, we were able to import over 99% of the existing Facebook comments into Disqus.

Overview of the process

Disqus supports imports in the form of custom WXR files. These are WordPress-compatible XML files that describe posts (title, date, excerpt, ID, etc.) and comments (name, date, IP address, content, etc.).

The Daily Cal uses WordPress and the official Disqus WordPress plugin. The plugin identifies threads with a combination of WordPress’s internal post ID and a short permalink. Thread identifiers look like this:

var disqus_identifier = '528 https://rogerhub.com/~r/code.rogerhub/?p=528';

This one is taken right from the source code of this post (you can see for yourself).

Facebook, on the other hand, identifies threads by the content page’s URL. After all, their system was created for arbitrary content, not just blogs. The Facebook Open Graph API provides a good amount of information about comments: enough to identify multiple comments posted by a single user, along with accurate timestamps and reply relationships. There isn’t any personal information like IP addresses, but names are provided.

The overall process looked like this:

  1. On dailycal.org, we needed an API endpoint to grab page URLs along with other information about threads on the site.
  2. For each of these URLs, we check Facebook’s Open Graph API for comments that were posted on that URL. If there are any, then we put them into our local database.
  3. After we are done processing comments for all of the articles ever published on dailycal.org, we can export them to Disqus-compatible WXR and upload them to Disqus.

This seems like a pretty straightforward data-hacking project, but there were a couple of issues that we ran into.

Nitty-gritty details

The primary API endpoint for Facebook’s Comment API is graph.facebook.com/comments/. This takes a couple of GET parameters:

  • limit — A maximum number of comments to return
  • ids — A comma-delimited list of article URLs
  • fields — For getting comment replies

The API supports pagination with cursors (next/prev URLs), but to keep things simple, we just hardcoded a limit parameter of 9999. By default, the endpoint will return only top-level comments. To get comment replies, you need to add a fields=comments parameter to the request.

You can make a few sample requests to get a feel for the structure of the JSON data returned.
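
Here’s a minimal Python sketch of such a request (the article URL and user agent string are placeholders, not the values we actually used, and the response structure should be inspected rather than assumed):

import json
import urllib.parse
import urllib.request

ARTICLE_URLS = ["http://www.dailycal.org/2014/03/15/example-article/"]  # hypothetical
USER_AGENT = "Mozilla/5.0 (Windows NT 6.1; Win64; x64) ..."             # placeholder

params = urllib.parse.urlencode({
    "limit": 9999,                  # effectively disables pagination
    "ids": ",".join(ARTICLE_URLS),  # comma-delimited article URLs
    "fields": "comments",           # include comment replies
})
request = urllib.request.Request(
    "https://graph.facebook.com/comments/?" + params,
    headers={"User-Agent": USER_AGENT},
)
with urllib.request.urlopen(request) as response:
    data = json.loads(response.read().decode("utf-8"))

# The response is keyed by article URL; dump a slice of each value to see
# how the comment data is nested.
for url, thread in data.items():
    print(url, json.dumps(thread)[:200])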

Disqus supports a well-documented XML-based import format. In our case, we decided that names were sufficient identification, although the import format also supports tagging comments with social logins. The format specifies a place for the article content, which is an odd requirement, since article content is usually quite massive. We decided to supply just an excerpt of the article content rather than the entire page.

There were a few more precautions we took before we started development. To avoid raising suspicion with Facebook as well as with our own web application firewall (WAF), we grabbed the user agent of a typical Google Chrome client running on Windows 7 x86-64 and used that for all of our requests. We also created a couple of test import “sites” on Disqus, since the final destination of our generated WXR was the Disqus site that we used a year ago, before the switch to Facebook. There isn’t any way to copy comments or clone a Disqus site, so we didn’t want to make any mistakes.

Unicode support and escape sequences

The first version of our program had terrible support for non-ASCII characters. It’s not that our commenters were all typing in Arabic or something (although we did have a couple of comments that really were in Arabic): even smart quotes, which show up in ordinary English text, were enough to break the process.

Facebook’s API spits out JSON data and uses JSON’s string escaping. For example, the left double quotation mark, otherwise known as &ldquo; in HTML/XML, is encoded as \u201c using JSON’s Unicode escape syntax. However, the JSON data also contains HTML-encoded entities like &amp;. (Update: It appears that Facebook has corrected this issue.)

Python’s JSON library takes care of JSON escape sequences as it decodes the string into a native dictionary. However, the script also applies HTML entity decoding to that result, in case there are any straggling escape sequences left. Since Disqus’s WXR format suggests that you throw the comment content into a CDATA block, all you need to escape is the CDATA ending sequence, ]]>. You can do this by splitting it across two CDATA sections (e.g. replacing ]]> with ]]]><![CDATA[]>).
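
A rough sketch of these two decoding steps plus the CDATA escape (using Python 3’s standard library here, so treat the exact calls as illustrative rather than our original code):

import html
import json

raw = '{"message": "She said \\u201chello\\u201d &amp; left"}'

comment = json.loads(raw)["message"]  # JSON \uXXXX escapes become real characters
comment = html.unescape(comment)      # decode straggling HTML entities like &amp;

# Escape the CDATA terminator by splitting it across two CDATA sections.
cdata_body = comment.replace("]]>", "]]]><![CDATA[]>")
xml_fragment = "<![CDATA[" + cdata_body + "]]>"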

HTTP exceptions

Our API endpoint would time out or throw an error every once in a while. To make our scraper more robust, we set up handlers for HTTP error responses. The scraper would retry an API request at least 5 times before giving up. If none of the attempts succeeded, the URL was logged to the database for further debugging.
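
A sketch of that retry logic (the timeout, back-off delay, and return behavior here are illustrative assumptions, not our exact values):

import time
import urllib.error
import urllib.request

def fetch_with_retries(url, attempts=5, delay=2.0):
    """Return the response body for url, retrying transient HTTP failures."""
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=30) as response:
                return response.read()
        except (urllib.error.URLError, OSError):
            time.sleep(delay)  # back off briefly before the next attempt
    return None  # caller logs the failed URL to the database for later debugging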

Commenter email addresses

Every commenter listed in the WXR needs an email address. Comments with the same email address will be tied together, and if somebody ever registers a Disqus account with that email address, they can claim the comments as their own (and edit or delete them). Facebook provides a unique ID that associates multiple comments by the same person. But since Facebook Comments also allows AOL and Yahoo! logins, not every comment has such an ID. Our script used the Facebook-provided ID when it was present, and generated a random one outside of Facebook’s typical range when it wasn’t. All of the emails ended with @dailycal.org, which meant that we would retain control over registration verification, in case we ever needed it.
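
A sketch of the generation step (the local-part format and the “safe” random range are illustrative assumptions; only the @dailycal.org suffix comes from our actual setup):

import random

def commenter_email(facebook_id=None):
    """Build a per-commenter email address ending in @dailycal.org."""
    if facebook_id is None:
        # No Facebook ID (e.g. an AOL or Yahoo! login): pick a random ID from a
        # range we assume lies outside Facebook's, so it can't collide.
        facebook_id = random.randint(10**17, 10**18 - 1)
    return "fb-import-{}@dailycal.org".format(facebook_id)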

Edge cases

Disqus requires that comments be at least 2 characters long. There were a couple of Facebook comments that consisted of just a single character: “B” or “G” or the like. These had to be filtered out before the XML export process.

We also ran into a case where a visitor commented “B<newline>” in the Facebook comments. For Disqus, this still counts as one character, since the CDATA body is stripped of leading and trailing whitespace before processing. The first version of our script didn’t strip whitespace before checking the length, so it failed to filter out this erroneous comment.
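
The fix is a one-line filter that strips before measuring (a sketch):

def is_importable(comment_text):
    # Disqus rejects comments shorter than 2 characters, and it strips
    # leading and trailing whitespace before checking, so we must do the same.
    return len(comment_text.strip()) >= 2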

Timezone issues

After a couple of successful trials, we created a test site in Disqus and imported a sample of the generated WXR. Everything looked good until we went and cross-referenced Disqus’s data with the Facebook comments that were displayed on the site. The comment times appeared to be around 5 hours off!

Here in California, we’re at GMT-0800, so there was no clear explanation for why the comment times were delayed. The WXR specified GMT times, which we verified were correct. The bug seemed to be on Disqus’s end. We contacted Disqus support and posted on their developers’ mailing list, but after around a week, we decided that it would be easiest to just counteract the delay with an offset in the opposite direction.

import datetime
# Shift every timestamp back 5 hours to counteract the offset Disqus applied.
time_offset = datetime.timedelta(0, -5*60*60)
# Facebook's created_time looks like "2014-03-16T05:37:32+0000"; Disqus's
# WXR format wants "2014-03-16 05:37:32" in GMT.
comment_date = (datetime.datetime.strptime(
    comment['created_time'], "%Y-%m-%dT%H:%M:%S+0000"
    ) + time_offset).strftime("%Y-%m-%d %H:%M:%S")

Conclusion

After a dozen successful test runs, we pulled the trigger and unloaded the WXR onto the live Disqus site. The import process finished within 5 minutes, and everything worked without a hitch.

Archiving and compressing a dynamic web application (September 1, 2013)

From 1999 to mid-2011, the Daily Cal used an in-house CMS to run the website. It contains around 65,000 individual articles and thousands of images. But ever since we moved to WordPress, the old system has been collecting dust in the corner. It was about time that all of the old CMS content was archived as static HTML so that it could be served indefinitely in the future as server software evolves. To accomplish this, I set up a Linux virtual machine with my trusty vagrant up utility on my spare home server.

Retrieving the application data

The CMS’s production server did not actually have enough free disk space to create a tarball copy of the application. But with on the order of 10,000 files involved, a recursive copy via scp would be too slow. To speed up the transfer, I used a little GNU/Linux philosophy and piped a few things together:

$ screen
$ ssh root@domain "tar -c /srv/archive" | tar xvf -

I decided that compression was not going to be very helpful, because most of the data was JPEG and PNG images, which are not very compressible. Enabling compression would just slow things down, since the bottleneck would become the CPU rather than the network.

Preparing the application

The CMS had very few dependencies, which is not surprising given the state of PHP 10 years ago. I set up a simple nginx+php-fpm+mysql configuration with a single PHP worker thread. The crawl would run serially anyway, so multiple workers would not have been useful.

I also added an entry to the VM’s hosts file for the hostname of the production server. The server’s hostname was hardcoded in a few places, and I didn’t want the crawl operation sending requests out to the Internet. Additionally, I set up a secondary web server configuration that served generated HTML from the output directory and static assets from the application data, so that I could preview the results as they were being generated.

Crawling the application

Generating the static pages was the hardest part of the archival process, and it took me around 5 retries to get it right. The primary purpose of the whole archival operation was to remove the dynamic portions of the web application. This meant that every conceivable page request had to be run against the application and its response saved as a static file. I picked wget as my archival tool.

Articles in the CMS were stored in one of two places. However, the format of the URL was luckily the same for both. I dumped article IDs from the MySQL database and created a seed file of article URLs:

$ echo "SELECT article_id FROM dailycal.article;" | mysql -u root > article_ids
$ echo "SELECT id FROM dailycal.h_articles;" | mysql -u root  >> article_ids
$ sed -e 's/^/http:\/\/archive.dailycal.org\/article.php?id=/' -i article_ids

I didn’t see much point in setting a root password for the local MySQL installation, since this was a single-use VM anyway. Sed ate through the 65,326 article IDs in seconds. I then created a second seed file containing just the URL of the application root, from where (nearly) all other pages would be crawlable.

On the crawler’s final run, I set the following command line switches:

  • --trust-server-names – Sets the output file name according to the last request in a redirection chain. By default, wget uses the first request.
  • --append-output=generator.log – Outputs progress information to a file, so that I can run the main process in screen and monitor it with tail in follow mode.
  • --input-file=source.txt – Specifies the seed file of URLs.
  • --timestamping – Sets the file modification time according to HTTP headers, if available.
  • --no-host-directories – Disables the creation of per-host directories.
  • --default-page=index.php – Defines the name for request paths that end in a slash.
  • --html-extension – Appends the html file extension to all generated pages, even if another extension already exists.
  • --domains archive.dailycal.org – Restricts the crawl to only the application domain.

Additionally, I set the following switches to crawl through the links on the application’s root page.

  • -r – Enables recursive downloading.
  • --mirror – Sets infinite recursion depth and other useful options.

In total, 543MB of non-article HTML and 2.1GB of article HTML were generated. These are reasonable sizes, given how many URLs were crawled in total, but they were still a bit unwieldy to store. I looked for a solution.

Serving from a compressed archive

I knew a couple of things about the generated HTML:

  1. There was a ton of redundancy. The articles shared much of their header and footer markup.
  2. Virtually all of the data consisted of printable characters and whitespace, which means considerably less unique information than 8 bits per byte.

Both of these factors made the HTML a good candidate for archive compression. My first thought was tar+gzip, but gzip compresses the whole tar stream as one unit rather than file by file. To extract a single file, you’d need to decompress all the data up to that file, so a request for the last file in the archive could take 15 to 20 seconds!

Luckily, the zip file format maintains an index of individual files and compresses them individually, which means that single file extraction is instantaneous no matter where in the archive it is located. I opted to use fuse-zip, an extension for FUSE that lets you mount zip files onto the file system. Fully compressed, the 543MB of pages became a 92MB zip archive (83% deflation), and the 2.1GB of articles became a 407MB zip archive (81% deflation).
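
In production, the archives were mounted with fuse-zip, but the random-access property is easy to check from Python’s standard library (the archive and member names below are made up):

import zipfile

with zipfile.ZipFile("articles.zip") as archive:          # hypothetical archive
    # The zip central directory acts as an index, so this seek-and-inflate is
    # cheap even for the last member; nothing else gets decompressed.
    html = archive.read("article.php?id=65326.html")      # hypothetical member
print(len(html))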

After everything was finished, I uploaded the newly created HTML archives to a new production server and shut down the old CMS. From there, a decade’s worth of archived articles can be served indefinitely into the future.

Anti-fraud Detection in Best of Berkeley (April 11, 2013)

Near the end of Spring Break, I helped build the back-end for the Daily Cal’s Best of Berkeley voting website. The awards are given to restaurants and organizations chosen via public voting over the period of a week. Somewhere during development, we decided it’d be more effective to implement fraud detection rather than prevention: the latter would only encourage evasion and resistance, while with the former, we could sit back and track fraud as it occurred. It made for some interesting statistics too.

[Graph: frequency over time of submissions that selected only the suspicious candidate]

This first one’s simple. One of the candidates earned a highly suspicious number of submissions in which they were the only choice selected out of all 39 categories and 195 total candidates. Our fraud-recognition system aggregated sets of submission choices and raised alerts when abnormal numbers of identical choices appeared. The graph shows the frequency of submissions where only this candidate was selected, and it demonstrates the regularity and nonrandomness of these fraudulent entries.

Combined with data from tracking cookies and user agents, it’s safe to say that these submissions could be cast out.
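
The aggregation itself can be as simple as counting identical choice sets (a sketch; the data shapes and alert threshold here are made up):

from collections import Counter

ALERT_THRESHOLD = 50  # hypothetical cutoff for "abnormal"

def find_suspicious_ballots(submissions):
    """Count identical sets of choices across all submissions."""
    # Each submission maps a category to the chosen candidate; freeze it so
    # that identical ballots hash to the same key.
    tallies = Counter(frozenset(ballot.items()) for ballot in submissions)
    return [(choices, n) for choices, n in tallies.items() if n >= ALERT_THRESHOLD]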

[Graph: distribution of elapsed time between successive submissions]

The system also calculated and analyzed the elapsed time between successive submissions. It alerted both for abnormal volume, when a large number of submissions were received in a short span of time, and for abnormal regularity, when submissions came in at suspiciously regular intervals. From the graph, it looks like it takes about 10.5 to 12 seconds for the whole process: reload, scroll, check vote, scroll, submit.

The calculations for this alert were a bit trickier than I expected. At first, I thought of using a queue where old entries would be discarded:

s := queue()
for each submission in sort_by_time(submissions):
  s.add(submission)
  s.remove_entries_before(5 minutes ago)
  if s.length > threshold:
    send_alert()
    s.clear()

This doesn’t work very well. Each time the queue length exceeded the threshold, it would flush the queue and raise another alert, so a sustained burst of submissions produced a flood of near-identical alerts. To fix that, I added a minimum time-out before another alert could be raised.

s := queue()
last_alert := 0
for each submission in sort_by_time(submissions):
  s.add(submission)
  s.remove_entries_before(5 minutes ago)
  if s.length > threshold and now - last_alert > alert_timeout:
    send_alert()
    s.clear()
    last_alert := now

The regularity detector performed a similar task, except that it stored each of the time differences in a list, sorted it, and then ran the same check with a much smaller threshold (around 0.3 seconds). If the submissions were truly random, these bars should be more or less horizontal, but this is hardly the case. After these fraudulent entries were removed, this particular candidate was left with a paltry 70-some votes, about 5% of its pre-filtered count.
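
One way to read that description as code (a sketch; the 0.3-second figure comes from the paragraph above, while the minimum run length is a made-up parameter):

def regularity_alerts(timestamps, gap_threshold=0.3, min_run=10):
    """Flag groups of submissions whose inter-arrival gaps are suspiciously similar."""
    times = sorted(timestamps)
    gaps = sorted(b - a for a, b in zip(times, times[1:]))
    # After sorting, near-identical gaps sit next to each other; count runs of
    # adjacent gaps that differ by less than the threshold.
    alerts, run = [], 1
    for previous, current in zip(gaps, gaps[1:]):
        if current - previous < gap_threshold:
            run += 1
        else:
            if run >= min_run:
                alerts.append((previous, run))
            run = 1
    if gaps and run >= min_run:
        alerts.append((gaps[-1], run))
    return alerts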

Generating on-the-fly filler text in PHP (March 12, 2013)

Update

I’ve updated the code and text of this post to reflect the latest version of the code.

For one of the projects I’ve been working on recently, I needed huge amounts of filler text (we’re talking about a megabyte) for lorem ipsum placeholder copy. Copy is the journalistic term for plain ol’ text in an article or an advertisement, in contrast with pictures or design elements. When you design for type, it’s often helpful to have text that looks like it could be legitimate writing instead of a single word repeated or purely random characters. From this arose the art of lorem ipsum, which intelligently crafts words with pronounceable syllables and varying lengths.

It’s a rather complicated process to generate high-quality lorem ipsum, but the following will do an acceptable job in far fewer lines of code.

/**
 * Helper function that generates filler text
 *
 * @param $type Either 'title' or 'post', the kind of filler to generate
 */
protected function filler_text($type) {
  
  /**
   * Source text for lipsum
   */
  static $lipsum_source = array(
    "Lorem ipsum dolor sit amet, consectetur adipiscing elit",
    "Aliquam <a href='/'>sodales blandit felis</a>, vitae imperdiet nisl",
    ...
    "Quisque ullamcorper aliquet ante, sit amet molestie magna auctor nec",
  );
  if ($type == 'title') {
    // Titles average 3 to 6 words
    $length = rand(3, 6);
    $ret = "";
    for ($i = 0; $i < $length; $i++) {
      if (!$i) {
        $ret = ucwords($this->array_random_value(explode(" ", strip_tags($this->array_random_value($lipsum_source))))) . ' ';
      } else {
        $ret .= strtolower($this->array_random_value(explode(" ", strip_tags($this->array_random_value($lipsum_source))))) . ' ';
      }
    }
    return trim($ret);
  } else if ($type == 'post') {
    $ret = "";
    $order = array('paragraph');
    $order_length = rand(12, 19);
    for ($n = 0; $n < $order_length; $n++) {
      $choice = rand(0, 8);
      switch ($choice) {
      case 0: $order[] = 'list'; break;
      case 1: $order[] = 'image'; break;
      case 2: $order[] = 'blockquote'; break;
      default: $order[] = 'paragraph'; break;
      }
    }
    for ($n = 0; $n < count($order); $n++) {
      switch ($order[$n]) {
      case 'paragraph':
        $length = rand(2,7);
        $ret .= '<p>';
        for ($i = 0; $i < $length; $i++) {
          if ($i) $ret .= ' ';
          $ret .= $this->array_random_value($lipsum_source) . '.';
        }
        $ret .= "</p>\n";
        break;
      case 'image':
        $ret .= "<p><img src='http://placehold.it/900x580' /></p>\n";
        break;
      case 'list':
        $tag = (rand(0, 1)) ? 'ul' : 'ol';
        $ret .= "<$tag>\n";
        $length = rand(2,5);
        for ($i = 0; $i < $length; $i++) {
          $ret .= "<li>" . $this->array_random_value($lipsum_source) . "</li>\n";
        }
        $ret .= "</$tag>\n";
        break;
      case 'blockquote':
        $length = rand(2,7);
        $ret .= '<blockquote><p>';
        for ($i = 0; $i < $length; $i++) {
          if ($i) $ret .= ' ';
          $ret .= $this->array_random_value($lipsum_source) . '.';
        }
        $ret .= "</p></blockquote>\n";
        break;
      }
    }
    
    return $ret;
  }
}

First of all, you’ll need some filler text to use as a seed for this function. Seed is a heavily-used term in computer science. It usually refers to an initial value used in some sort of deterministic pseudo-random number generator (or PRNG). Most programming languages have built-in libraries that provide randomness generation. Many of these implementations are not actually random; they are deterministic algorithms whose output is a pure function of some environmental value (usually the timestamp) and the number of calls made to the algorithm so far: n − 1 for round n.

The seed in this case is just a bunch of pre-generated lorem ipsum that you can grab anywhere online. The heart of the code just breaks down this text into sentences and picks a number of them to fit into a new sentence.

Lorem ipsum is rarely useful as one enormous chunk of text. Most frequently, copy is broken into paragraphs of varying lengths, which is the next enhancement. The code alternates between paragraphs, images, lists, and blockquotes to keep things more interesting. You can get my post-generating plugin on WordPress.org.

Tidying up SASS with a one-liner (March 2, 2013)

At the Daily Cal, we maintain a ton of CSS code for our website’s WordPress theme. But instead of using a single enormous stylesheet, we check SASS files into version control and recompile them on deployment (or for development testing). In one directory, we have a bunch of scss-type files like so:

./
../
.sass-cache/
_archive.scss
_blogs.scss
...
style.scss
_wp.scss

The files that begin with an underscore are SASS Partials, meaning that they don’t get built themselves, but are imported by other files. In this case, style.scss imports everything in the directory and spits out a style.css that complies with WordPress’s theme standards.

(Without style.css, WordPress won’t recognize a theme, since all the metadata for the theme is contained within that stylesheet. Either way, the file needs to be built since without it, there’d be no styling.)

I was working on the code base yesterday and came up with this one-liner to do a bit of code-cleanup. I’ll explain it further in steps:

$ for i in $(find . -name "_*.scss" -type f); do sass-convert --in-place $i; done

The primary part of this line lies inside the $(...). The dollar-sign-parentheses combo tells bash (or any other POSIX-compliant shell) to execute its contents before proceeding. You may also be familiar with the back-tick notation `...` for executing commands.

$ find . -name "_*.scss" -type f

Find, part of GNU findutils along with xargs and locate, searches for files. It takes [options] [path] [expression]. In this case, I wanted to match all the scss partial files in the current directory, which happen to match the wildcard expression _*.scss (note the leading underscore). A single dot . refers to the current working directory (see pwd). You may be familiar with its variant, the double dot .., which refers to the parent directory.

(Fun fact: Hidden files, like the configuration files in your home directory, usually begin with a period because of the single dot and double dot entries: early directory listing tools skipped any name beginning with a period in order to hide those two entries, without imagining that hidden files would later make use of this quirk.)

for i in ...
do
  ...
  ...
done

The above loops through a list of space-separated elements (like 1 2 3 4), puts each one in $i, and executes the suite of instructions specified between the keywords do and done. I chose the letter i arbitrarily, but it’s one that’s typically used as a loop variable.

SASS comes with a command sass-convert that will convert between different CSS-variants (sass, scss, css) with the added bonus of syntax-checking and code-tidying. You can convert CSS to SASS with something like:

$ sass-convert --from css --to sass foo.css bar.sass

If you make extensive use of nested selectors, sass-convert will combine those for you. This utility can also be used to convert a format to itself with the --in-place option. Putting it all together, we get this one-liner that loops through all the _*.scss files in the current directory and converts them in place:

$ for i in $(find . -name "_*.scss" -type f); do sass-convert --in-place $i; done

This operation doesn’t change the output whatsoever. Even CSS multiline comments are left in place! (SASS removes single-line // comments by default, since they aren’t valid CSS syntax.)

And that’s it! One 6000-line patch later, and all the SCSS looks gorgeous. The indentation hierarchy is uniform and everything is super-readable.
