## Monday, April 22, 2013

### Coding Practice: Quicksort

I've mentioned sorting algorithms several times in the past, with a specific focus on Mergesort. Today's article introduces Quicksort, another common sorting algorithm. The article starts with an intuitive, non-technical description. Next, the article presents the C code and a hand-wavy theoretical analysis of its computational complexity, backed by a pinch of practical results. The article concludes with a comparison with the Mergesort algorithm.

### Intuitive Description

Quicksort was invented by Tony Hoare. An intuitive, non-technical description of the algorithm goes something like this:
"Just grab a thing and compare the other things with it."
The trick is that Quicksort "grabs" and "compares" intelligently, avoiding unnecessary comparisons and allowing it to sort a collection in $O(N \log N)$ time on average. Specifically, this is achieved by partitioning the collection around the thing we just grabbed (called the "pivot") into two smaller collections. Everything smaller than or equal to the pivot goes into the left sub-collection, and everything else goes into the right sub-collection. The two sub-collections can then be Quicksort-ed independently, recursively. The recursion terminates when a sub-collection contains fewer than two elements.

### Code and Analysis of Computational Complexity

The code for Quicksort is fairly straightforward:
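The listing below is an illustrative Python sketch rather than the post's original C code; it assumes the Lomuto partition scheme with a middle-element pivot, which may not match the original's exact choices:

```python
def partition(a, lo, hi):
    """Partition a[lo..hi] in place around the middle element; return the pivot's final index."""
    mid = (lo + hi) // 2
    a[mid], a[hi] = a[hi], a[mid]  # move the pivot out of the way
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] <= pivot:          # everything <= pivot ends up on the left
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]      # put the pivot in its final place
    return i

def quicksort(a, lo=0, hi=None):
    """Sort the list a in place."""
    if hi is None:
        hi = len(a) - 1
    if lo < hi:  # recursion terminates on sub-collections of fewer than two elements
        p = partition(a, lo, hi)
        quicksort(a, lo, p - 1)
        quicksort(a, p + 1, hi)
```

The choice of the middle element as the pivot matters for already-sorted input, as the empirical results further down show.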

Most of the work is performed in the partition method, which can be implemented in-place.

The computational complexity of Quicksort depends on the selection of the pivot element. In the best case, the selected pivot is the median of the collection and the partition step divides the collection into two smaller collections of identical size. Since the size of the sorted collection is halved at each step of the recursion, the best case complexity of Quicksort is $O(N \log N)$. In the worst case, the selected pivot is the minimum or maximum of the collection, and the partition step achieves very little. The worst case complexity is $O(N^2)$.

There are several ways to select the pivot element, the simplest being selecting the first, last or middle element of the collection. Since selecting the first or last element can lead to worst-case performance if the array is already sorted, selecting the middle element is the better option of the three.

The effect of pivot selection on the complexity of Quicksort can be observed empirically, by counting the number of comparisons for three different types of input: random, sorted and uniform; and for three different pivot selection methods: first, last and middle. Here are some results (sorting 100 input elements, showing the number of comparisons for the first, last and middle selection modes, respectively):
• random (mean over 100 runs): 713.28, 715.17, 713.25
• sorted: 1001, 1001, 543
• uniform: 1001, 1001, 1001
The above results support what is already well-known: Quicksort performs worst when given sorted or uniform input. The former can be dealt with by selecting the middle element as the pivot (or even randomizing the pivot selection). The latter can be dealt with by checking for uniform input prior to sorting, which takes $O(N)$ time.
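The same experiment can be sketched directly in Python by threading a counter through the sort (the original measurements used GDB breakpoints instead). The absolute counts depend on the exact partition scheme, so they won't necessarily match the figures above, but the trend (quadratic behaviour on sorted input with a first or last pivot) should hold:

```python
import random

def quicksort_count(a, mode="middle"):
    """Sort a copy of a, returning the number of element comparisons made."""
    a = list(a)
    count = 0

    def sort(lo, hi):
        nonlocal count
        if lo >= hi:
            return
        # pick the pivot index according to the selection mode
        p = {"first": lo, "last": hi, "middle": (lo + hi) // 2}[mode]
        a[p], a[hi] = a[hi], a[p]
        pivot, i = a[hi], lo
        for j in range(lo, hi):
            count += 1
            if a[j] <= pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]
        sort(lo, i - 1)
        sort(i + 1, hi)

    sort(0, len(a) - 1)
    return count

n = 100
inputs = {
    "random": [random.random() for _ in range(n)],
    "sorted": list(range(n)),
    "uniform": [1] * n,
}
for name, data in inputs.items():
    print(name, [quicksort_count(data, m) for m in ("first", "last", "middle")])
```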

To obtain these results, I used GDB (to set breakpoints and count the number of hits), Python (to generate the input) and bash (to tie everything together). The entire code for reproducing these results is here.

### Comparison with Mergesort

Mergesort and Quicksort are both divide-and-conquer sorting algorithms. They work by first dividing the input data into parts and then recursively processing each part separately. However, there are significant differences between them.
1. First, Quicksort does all of its work in the divide (partition) step. The conquer step is trivial, since after recursion is complete, the array is completely sorted. In contrast, Mergesort does very little work in the divide step, and does most of its work after the recursion is complete.
2. Second, the algorithms have different computational complexity: Mergesort is consistently $O(N \log N)$; Quicksort is $O(N \log N)$, $O(N \log N)$ and $O(N^2)$ in the best, average and worst case, respectively.
3. Third, the algorithms have different space complexity: unlike Mergesort, Quicksort can sort in-place, since its partition step can be implemented in-place without significant impact on complexity.
4. Fourth, unlike Mergesort, Quicksort is not a stable sorting algorithm, since the partition step reorders elements. Stable implementations of Quicksort do exist, but are not in-place.
5. Finally, Mergesort is easier to parallelize than Quicksort, since the divide step is simpler with the former.

### Conclusion

If you're one of the chosen few who managed to soldier on through the entire article, give yourself a pat on the back. Thanks for reading the entire thing. Please reward yourself with a refreshing chuckle at this sorting-related xkcd.com comic:

## Monday, April 8, 2013

### An (unexpected) defense of Microsoft Store...

I know I had a lot of fun bashing Microsoft and their Online Store last week, but being a fair and level-headed individual, I feel that I do need to say some things in their defense.

While they failed to obtain my business for the University 365 offer (a blunder that they will surely regret for decades), a quick read of the MS Office license revealed that, as a proud owner of the Home and Student version, I can install it on one more PC.  Which is what I promptly did (can't beat free!).

Unfortunately, the smooth sailing ended here.  In blind defiance of the above-mentioned license, the installed program refused to authenticate, and threatened to disable itself within a month if I did not provide it with a new license key, which, as we all know, costs bags of money.  Having read The License, I was fully confident in my self-righteousness.  There was no way I was going to pay for something that was already mine.  I did the unthinkable.  I picked up my phone and called the Verification Hotline.

The Verification Hotline is the last resort for people that want to authenticate a Microsoft product, but for one reason or another can't do so over the Internet.  It was well after 7pm when I called, so I half-expected to be kindly asked to call back the next day.  Fortunately, these expectations were misplaced, and I was treated to a warming chat with... a computer.  To proceed with verification, you need to enter something like 64 digits (through the keypad!) to identify your install.  It's difficult to convey the rush of adrenaline as you power towards the last couple of digits.  I've never defused a bomb, or issued a launch code for an ICBM, but I guess those experiences would come pretty close.

After all that, I got through to an operator.  Finally, a chance to plead my case...  in Japanese.  Great.  After a long and thorough discussion about when and how I installed The Product, the operator agreed to activate my installation.  To do that, I had to enter another missile launch code into my Office install, as she was reading it out.  Another 64 digits or so, and my efforts would finally bear fruit.

My call got cut off after 10 digits.  Game over, man!

Unable to control the fury, I redialed the number, and mashed the keypad until an option to talk to an operator was presented.  I had naively expected that somehow, the person I was talking to before would be there, and we could pick up where we left off...  Alas, that was not to be.  The voice on the other side of the phone was cold and distant.  "I'm afraid you'll have to start again...", she said apologetically.

Like I mentioned earlier, I'm a fairly persistent guy when I need to be.  I persevered.  Entering the 64-digit launch code a second time through was nowhere near as painful as the first.  I had the thought that by the time I'd have gone through the process another 3 or 4 times, I'd have the whole thing memorized.  It's really no big deal -- back in the good old days of Windows 95, I reinstalled the O/S so often I had the whole product key committed to memory.

While what I've written so far doesn't really do much in defense of the Microsoft Store, there really is a happy ending to all this.  After I entered my launch code a second time, I didn't have to jump through any more hoops.  The kind soul I spoke to the first time through pre-recorded the authorization code for me, and all I had to do was punch that into my Office install.  All done!  And it only took half an hour...

Furthermore, I recently stumbled into The Store on an unrelated issue.  I was surprised to see something that I don't recall seeing before -- a live chat option.  You click on that, and get to talk to a real person.  Straightaway.  It's brilliant!  If only that had been there a week ago -- I wouldn't have had to rant.  Oh well.  Better out than in, they say.

## Thursday, April 4, 2013

### The Decimator, 2.0

A little while ago, I wrote about the woes of dealing with numbers in Japanese notation.  Since I never let an opportunity to procrastinate to pass me by, I also posted a brief JavaScript (The Decimator™) to help deal with the confusion.  A friend of mine pointed out that it doesn't help with some use cases, such as 千5百万 (that's 15 million, but you knew that, right?).  And thus another opportunity to procrastinate presented itself, and now, I give you the Decimator™, 2.0:

http://mpenkov.github.com/decimator/

It accepts free text input, and handles both traditional (Kanji only) and mixed (Arabic numerals plus Kanji) numbers.  Feel free to give it a whirl.

## Thursday, March 28, 2013

### Surviving Customer Support

I'm trying to get my hands on an install of MS Office 2013 (well, I really only need PowerPoint to do my presentations, and Word to occasionally read files sent by people who don't know better).  Since I'm an honest individual, I decide to do the unthinkable and actually pay for Microsoft software.  However, I'm also a student, so I'm looking for a way to pay the least amount possible and still keep a clear conscience.  It appears the best option is the University 365 deal that's exclusive to full-time students. The catch: they have to verify you're a student before you can purchase or even download anything. This is where the fun begins.

You get three options for verification: through your school, an ISIC card, or manually typing a verification code into a text box. The first option sounds like the best, except it doesn't work: you get redirected to your school (in my case, Hokkaido University), you enter a username and password, and... get redirected to the start page. One option down, two to go. No, scratch that, I don't have an ISIC card. Well, that pretty much leaves one option -- get in touch with support and ask for a verification code. Sounds easy, huh?

Actually, it isn't. Go to the MS store and search for a way to contact their support team. Bonus question: try to find a way that doesn't involve picking up a telephone. Go on, I'll wait.

Back already?  Did you find an email?  Let me know if you did, cause I didn't.  As a consolation prize, I found a chat option...  that I can't use right now because it's outside of US business hours.  Seriously?  Is it really that hard to foresee that not everyone is going to be in the same time zone as the US?  It's almost as if Microsoft haven't heard of this wonderful thing called "email", which allows people across the globe to communicate without having to arrange a mutually convenient time.

Alright, let's try a different angle: let's go through the local Japanese MS site.  Maybe they have a workable support option.  Here's their product lineup.  I'll save you the time of having to click through to it:

Now, it's starting to get ridiculous.  Let's find the Microsoft Store in Japan through Google.  Thankfully, that site doesn't die with a server error.  But hey, remember that thing you wanted to buy half an hour ago...  yeah, Microsoft University 365?  It's not there!

I'm a patient and persistent guy.  Let's try googling around for "Microsoft University 365".  Here's the first hit from Google, and it looks really promising.  There's no link for purchasing information, but there is a "learn more" link.  Let's click on that.

Patience...  running...  out...

Microsoft, here I am, practically begging you to take my money so I can start using your software, and you still can't manage to keep my business.  Maybe I'll just have to go and pirate it like the rest of the world does.  I value my conscience, but I also value my time and sanity, and they are both taking a huge hit by having to deal with your online store.  If you insist on copying Apple and dressing up your staff to look like a bunch of clowns, then perhaps you should also look at copying Apple and making it easy for people to actually buy your products.

## Sunday, March 24, 2013

### Coding Practice: Binary Search

If you were given a full deck of cards and asked to search for a card (say, the Queen of Hearts), how would you do it? You could go about it several ways: start searching from the top of the deck, the bottom of the deck, pick cards randomly... In the worst case, you'd have to shuffle through the entire deck (a whopping 52 cards) before reaching your goal. If I asked you to do this many times, and you were patient enough to oblige, you'd quickly grow tired of shuffling through the entire deck and look for ways to make your ordeal less monotonous. You'd put the cards in order, using your favorite sorting algorithm (for example, merge-sort), using the value and suit of each card as the key.

Once the deck is sorted, then searching through it becomes simpler.  There are several ways to perform the search.  One of the simplest to explain is the binary search, where you pick a card from the middle of the deck (dividing it into lower and upper halves), and compare it to the value you're searching for.  If you got lucky and it's the card you want, then you're all done.  If the card you want is less than the card you just picked, then you look at the lower half.  Otherwise, you look at the upper half.

What's the computational complexity of this search?  At each step, you effectively halve the number of cards you have to look at next: 52, 26, 13, 6, 3, 1.   The number of search steps is thus $\log_2 52$, or approximately 6.  The overall complexity is thus $O(\log N)$, where $N$ is the number of elements to search.

The coding problem this time was to implement a binary search algorithm that searches a sorted array.  It works mostly like what I described above, with the exception of how it handles elements with the same value.  When the array contains more than one element of the same value (e.g. a messed-up deck with 2 Queens of Hearts), then the algorithm searches for the first occurrence of the element.
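A minimal Python sketch of this first-occurrence variant (returning -1 for a missing value is my own convention, not necessarily the demo's):

```python
def binary_search_first(a, target):
    """Return the index of the first occurrence of target in sorted list a, or -1 if absent."""
    lo, hi = 0, len(a) - 1
    result = -1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] < target:
            lo = mid + 1
        elif a[mid] > target:
            hi = mid - 1
        else:
            result = mid   # record the hit, but keep searching to the left
            hi = mid - 1
    return result
```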

Here is a JavaScript demo (butt-ugly, but does what it's supposed to). The "<<" and ">>" arrows move to the previous and next search steps, respectively. At each step, the blue, orange and red numbers mark the first, middle and last elements of the search range, respectively. A green element indicates the search has reached its goal and terminated.

## Saturday, March 16, 2013

### Counting Large Numbers in Japanese: a PITA

In Japan, the units people use to represent large numbers such as currency differ significantly from those in the "rest of the world".  Most people are used to decimal units that increase in steps of three powers of ten: thousands ($10^3$), millions ($10^6$), billions ($10^9$), trillions ($10^{12}$).  This system, known as the short scale, is also consistent with how numbers are written: as triplets, with a separator (such as a space, comma or otherwise).  Hundreds are a bit of an exception to this pattern.

In Japan, they also have hundreds, but beyond that, the units increase in steps of four powers of ten: ten thousand ($10^4$), 100 million ($10^8$), one trillion ($10^{12}$).  Unsurprisingly, they have names for these units, too: man (万), oku (億), and chou (兆), respectively.  They also have thousands ($10^3$), but that's kind of an exception.  They also have the infamous hyaku-man (百万), the hundred ten-thousands.  You may know that as a million.  Despite this "power of four" rule, the Japanese still write numbers as triplets, except using their own units: for example, car prices are often written as 123,000万円 (or 12.3億円, depending on the psychological trick the car dealership believes in).  For a non-Japanese person, that's 1,230,000,000 or 1.23 billion yen.  Simple, right?  All you need to do is mentally add a couple of zeros, shift the commas across, regroup the zeros...

Wrong.  I mess this up all the time, like when I'm reading the news or shopping around for my next BMW.  It happens frequently enough to be frustrating, but not frequently enough to learn my lesson. So, behold: I give you "The Decimator": a JavaScript that converts Japanese numbers to our familiar decimal notation.
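The arithmetic underneath is just unit multipliers. A toy Python sketch of the core conversion (handling only a plain number against a single unit, unlike the full Decimator, which parses free text):

```python
# Powers of ten for the main Japanese units
UNITS = {"万": 10**4, "億": 10**8, "兆": 10**12}

def to_decimal(number, unit):
    """Convert a number written against a Japanese unit to plain decimal."""
    return number * UNITS[unit]

print(to_decimal(123_000, "万"))  # the car-price example: 123,000万
print(to_decimal(12.3, "億"))     # the same price, written against a bigger unit
```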

## Sunday, February 24, 2013

### This Week's Coding Practice: Stacks and the Tower of Hanoi

A stack is a common data type that allows its elements to be accessed in LIFO (last in, first out) order. Many parts of the physical world around us function like a stack. A common analogy is a stack of plates on a table: it's simple to put a new plate on top or remove the top-most plate. It's impossible to insert a new plate into the middle of the stack. It's also impossible to remove any plates other than the top-most one. I ran into yet another real-world example of a stack the other day: a narrow rectangular parking lot with only one side facing a driveway. The lot has just enough space to fit around 4-5 cars end-to-end. When people want to park, they just drive straight in. Eventually, they get parked in by someone, who also gets parked in by somebody else, and so on. Lucky last doesn't get parked in by anyone else, but gets nagged by all the people he parked in whenever they want to leave.
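In code, the parking-lot (or plate) behaviour is exactly push and pop; a Python list already provides both (the car names are made up for illustration):

```python
lot = []               # the narrow parking lot, modeled as a stack
lot.append("sedan")    # first car drives in...
lot.append("hatch")    # ...and gets parked in by the next one
lot.append("wagon")    # lucky last

# Only the most recently parked car can leave: last in, first out
assert lot.pop() == "wagon"
assert lot.pop() == "hatch"
assert lot.pop() == "sedan"
```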

Another popular example of a stack is the famous Tower of Hanoi puzzle.  There are three rods, and an arbitrary number of disks of unique sizes.  Initially, all the disks are placed on the first rod, ordered by size (smallest disk at the top).  The goal of the puzzle is to move all the disks to another rod, one by one, without ever putting a larger disk on top of a smaller one.  The task for this week was to write an algorithm that solves this puzzle.

It may not be obvious from the description, but each peg can be modeled as a stack, since it's physically impossible to access elements in any order other than LIFO (for example, randomly, or FIFO).  The entire puzzle is then modeled by 3 separate stacks.

How does one go about solving the puzzle?  It turns out that it has been thoroughly studied, and several well-known recursive and iterative algorithms exist.  However, since I have a tendency to do things the hard way, I didn't study those algorithms prior to solving the puzzle, and came up with a less elegant but home-brewed solution on my own.

Having spent a significant amount of time on the problems at Project Euler, I've acquired an instinctive reaction: whenever facing a problem for the first time, try brute force.  It's almost never the best (or even borderline satisfactory) solution, but it is a fairly quick way of getting acquainted with a problem.  It's also a good starting point for trying out other things.  Lastly, it's better than nothing.

With that in mind, I approached the puzzle as a search problem (a more general kind of search than the binary search discussed earlier).  At each step, the current state of the puzzle is represented by the three stacks.  To generate the next states, a disk is moved from one stack to another, without violating the rules of the puzzle.  Once you reach a state in which all of the disks are on the second (or third) stack, you've solved the puzzle.  In which order should we look at the next states?  One simple way is DFS (depth-first search).

I quickly realized that a lot of the resulting steps weren't helpful in solving the puzzle: moving a single disk backwards and forwards between two rods achieves nothing.  It also causes some branches of the DFS to be infinitely long, meaning the search will fail to terminate.  One way to get around this problem is to search only those states that haven't been encountered yet.  This requires keeping track of the states encountered thus far, which could be done with a hash set.

As it turns out, DFS isn't the best way to perform the search, since the solution it yields isn't necessarily the simplest.  This can be addressed by switching to breadth-first search (BFS): the solution it yields will have the minimum possible number of steps (which, as it turns out, is equal to $2^n - 1$).
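The BFS approach described above can be sketched compactly in Python (this is an illustrative reconstruction, not the source in the linked gist; states are tuples of three peg-stacks, and a visited set plays the role of the hash map):

```python
from collections import deque

def solve_hanoi(n):
    """Return the shortest list of (src, dst) moves transferring n disks from peg 0 to peg 2."""
    start = (tuple(range(n, 0, -1)), (), ())   # largest disk at the bottom of peg 0
    goal = ((), (), tuple(range(n, 0, -1)))
    queue = deque([(start, [])])
    visited = {start}
    while queue:
        state, moves = queue.popleft()
        if state == goal:
            return moves
        for src in range(3):
            if not state[src]:
                continue
            disk = state[src][-1]  # top of the source stack
            for dst in range(3):
                # legal move: target peg is empty, or its top disk is larger
                if dst != src and (not state[dst] or state[dst][-1] > disk):
                    pegs = [list(p) for p in state]
                    pegs[dst].append(pegs[src].pop())
                    nxt = tuple(tuple(p) for p in pegs)
                    if nxt not in visited:   # skip states already encountered
                        visited.add(nxt)
                        queue.append((nxt, moves + [(src, dst)]))
```

Because BFS explores states in order of distance from the start, the first time it reaches the goal it has found a minimum-length solution.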

After writing my solution out on paper, I quickly coded it up in Python.  Since I had a bit of spare time, I decided to try out my JavaScript chops and implement something I could show off graphically.  While fiddling with JS took me way longer than coming up with the actual algorithm, I got there in the end, and came up with this:

The search problem can be taken further by implementing an A* search that uses a heuristic to determine the best action to take at each step.  If a suitable heuristic can be formulated, this kind of search can significantly reduce the running time and memory usage of the algorithm.  It was tempting to keep digging, but I'll leave that to another day.

Full source (including the Python code) is available as a gist.