Okay, let's assume that you're going to be doing internal, sequential sorting, in place, on an array. There are tons of algorithms to choose from, and although some are faster than others in general, for most sorting algorithms there exists some situation in which that particular algorithm is a better choice than any other. First of all, many of the algorithms below make use of “Swap”. This can be a function or a macro, whose sole purpose is to swap two items anywhere in memory.

One such definition for this is shown below. I'll also present a few other necessary odds and ends used by some algorithms:
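The original listing is not preserved in this text, but a minimal C sketch of such a Swap might look like the following (the VLA-based version and the macro name are my own, not necessarily the original's):

```c
#include <stddef.h>
#include <string.h>

/* Swap two items of any type, given their addresses and size in bytes.
   A temporary buffer holds one item during the exchange (C99 VLA). */
static void Swap(void *a, void *b, size_t size)
{
    unsigned char tmp[size];
    memcpy(tmp, a, size);
    memcpy(a, b, size);
    memcpy(b, tmp, size);
}

/* For a known type such as int, a macro version avoids the memcpy calls: */
#define SWAP_INT(x, y) do { int t_ = (x); (x) = (y); (y) = t_; } while (0)
```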

Probably the most well-known sorting algorithm is Bubble Sort. In Bubble Sort, the values float one at a time to the end of the array, slowly moving one item into position on each pass. The code consists of two loops, a compare, and a swap (of adjacent items). Bubble Sort is stable because it only ever swaps adjacent items, and only if the first one is strictly greater than the second. Sometimes people implement Simple Sort thinking they have actually implemented Bubble Sort, but if it doesn't always compare adjacent items, it isn't Bubble Sort. Depending on the order of the for-loops, this is sometimes called Sink Sort.
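As a minimal C sketch of the description above (the function name and loop shape are my own, not the original listing):

```c
#include <stddef.h>

/* Bubble Sort: compare adjacent items, swapping whenever the first is
   strictly greater than the second.  Each outer pass bubbles the largest
   remaining item into its final position at the end of the array. */
static void bubble_sort(int a[], size_t n)
{
    for (size_t i = 0; i + 1 < n; i++)          /* one pass per item */
        for (size_t j = 0; j + 1 < n - i; j++)  /* adjacent pairs only */
            if (a[j] > a[j + 1]) {
                int t = a[j];
                a[j] = a[j + 1];
                a[j + 1] = t;
            }
}
```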

First outer loop pass (each line shows the array after one swap; comparisons not resulting in a swap are omitted):

Swap: 17, 37, 29, 48, 13, 42, 50, 4, 63

Swap: 17, 37, 29, 13, 48, 42, 50, 4, 63

Swap: 17, 37, 29, 13, 42, 48, 50, 4, 63

Swap: 17, 37, 29, 13, 42, 48, 4, 50, 63

Second outer loop pass:

Swap: 17, 29, 37, 13, 42, 48, 4, 50, 63

Swap: 17, 29, 13, 37, 42, 48, 4, 50, 63

Swap: 17, 29, 13, 37, 42, 4, 48, 50, 63

Third outer loop pass:

Swap: 17, 13, 29, 37, 42, 4, 48, 50, 63

Swap: 17, 13, 29, 37, 4, 42, 48, 50, 63

Fourth outer loop pass:

Swap: 13, 17, 29, 37, 4, 42, 48, 50, 63

Swap: 13, 17, 29, 4, 37, 42, 48, 50, 63

Fifth outer loop pass:

Swap: 13, 17, 4, 29, 37, 42, 48, 50, 63

Sixth outer loop pass:

Swap: 13, 4, 17, 29, 37, 42, 48, 50, 63

Seventh outer loop pass:

Swap: 4, 13, 17, 29, 37, 42, 48, 50, 63

As you can see, some items take a long time to reach their destination (like the 4 above) because they can only move one position for each outer loop pass. These are called turtles, and are the main speed problem with Bubble Sort. Items moving in the other direction move fast (like 48 above) and are called hares.

http://en.wikipedia.org/wiki/Bubble_sort

http://xlinux.nist.gov/dads/HTML/bubblesort.html

http://www.sorting-algorithms.com/bubble-sort

Again, this is still Bubble Sort, and so is stable.

Unlike Bubble Sort, this algorithm is NOT stable.

http://www.sortieralgorithmen.de/simplesort/index.html

Also called "Cocktail Sort". There are some algorithms based on Bubble Sort which improve it somewhat. For example Shaker Sort, which performs Bubble Sort passes in alternating directions. The alternating passes mean many items repeatedly move back and forth from one spot to the next spot over, and then back to the first spot, giving a noticeable shaking effect if animated. This allows both small and large items to move towards their destinations as the algorithm progresses.
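A minimal sketch of Shaker Sort in C (the name, bounds, and early-exit flag are my own choices):

```c
#include <stddef.h>

/* Shaker (Cocktail) Sort: bubble passes in alternating directions, so
   both small and large items can travel quickly toward their ends. */
static void shaker_sort(int a[], size_t n)
{
    if (n < 2) return;
    size_t lo = 0, hi = n - 1;
    int swapped = 1;
    while (swapped && lo < hi) {
        swapped = 0;
        for (size_t i = lo; i < hi; i++)        /* upward pass */
            if (a[i] > a[i + 1]) {
                int t = a[i]; a[i] = a[i + 1]; a[i + 1] = t;
                swapped = 1;
            }
        hi--;
        for (size_t i = hi; i > lo; i--)        /* downward pass */
            if (a[i - 1] > a[i]) {
                int t = a[i - 1]; a[i - 1] = a[i]; a[i] = t;
                swapped = 1;
            }
        lo++;
    }
}
```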

Shaker Sort is stable.

http://en.wikipedia.org/wiki/Shaker_sort

http://xlinux.nist.gov/dads/HTML/shakerSort.html

Also called "Brick Sort". Another algorithm is to alternately swap the even-indexed items with their next item for one pass, and then the odd-indexed items with their next item for the next pass. This too allows both large and small items to move towards their destinations as we progress. It is similar to Shaker Sort and Improved Bubble Sort, but doesn't have a way of stopping early. The good news is that the algorithm lends itself well to parallelisation, because all of the compare-and-swap operations in each pass could theoretically be performed at once. Multi-core CPUs could be utilised to do this, but perhaps to very little advantage since there are many asymptotically faster algorithms anyway.
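A sketch in C; following the description above, it runs a fixed n passes (which is sufficient for odd-even transposition sort) rather than testing for early exit. The name and pass structure are my own:

```c
#include <stddef.h>

/* Odd-Even Transposition (Brick) Sort: alternate passes over even-indexed
   and odd-indexed pairs.  Within one pass the compare-and-swaps are
   independent of each other, which is what makes it easy to parallelise.
   n passes are always enough to sort n items. */
static void brick_sort(int a[], size_t n)
{
    for (size_t pass = 0; pass < n; pass++) {
        size_t start = pass % 2;                /* even pairs, then odd */
        for (size_t i = start; i + 1 < n; i += 2)
            if (a[i] > a[i + 1]) {
                int t = a[i]; a[i] = a[i + 1]; a[i + 1] = t;
            }
    }
}
```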

Odd Even Transposition Sort is stable.

http://www.cs.rit.edu/~atk/Java/Sorting/sorting.html

Several Unique Sort is NOT stable.

http://www.critticall.com/sort.html

Many people have tried different shrink factors other than 1.3 with this algorithm, and there are even lists of hand-optimised gap tables out there. But I still find that it’s very hard to beat the factor of 1.3 and the special modification to prevent gaps of nine or ten (known as combSort11), by any significant amount, and some other gap-tables are actually slower. The same early-exit trick of improved bubble sort is also utilised here.
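A sketch of Comb Sort with the 1.3 shrink factor and the Combsort11 gap rule just described (C; names and the integer shrink arithmetic are my own):

```c
#include <stddef.h>

/* Comb Sort: like Bubble Sort but comparing items a gap apart, with the
   gap shrinking by a factor of ~1.3 each pass.  Gaps of 9 or 10 are
   bumped to 11 (the "Combsort11" modification), and the early-exit trick
   of improved bubble sort applies once the gap reaches 1. */
static void comb_sort(int a[], size_t n)
{
    size_t gap = n;
    int swapped = 1;
    while (gap > 1 || swapped) {
        gap = (gap * 10) / 13;                  /* shrink factor ~1.3 */
        if (gap == 9 || gap == 10) gap = 11;    /* Combsort11 rule */
        if (gap == 0) gap = 1;
        swapped = 0;
        for (size_t i = 0; i + gap < n; i++)
            if (a[i] > a[i + gap]) {
                int t = a[i]; a[i] = a[i + gap]; a[i + gap] = t;
                swapped = 1;
            }
    }
}
```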

Comb sort is NOT stable.

http://en.wikipedia.org/wiki/Comb_sort

http://xlinux.nist.gov/dads/HTML/combSort.html

The same technique as is used in Shell Sort (see further down the page) can be used with Bubble Sort. Each pass consists of a bubble sort pass, instead of an insertion pass as would be done in Shell Sort. This algorithm is rarely used, as it exists mostly to demonstrate that the principle used in Shell Sort can be used independently of the H-Sorting algorithm used.

Like Shell Sort, H-Bubble Sort is NOT stable.

http://www.cs.princeton.edu/~rs/shell/paperF.pdf

The same technique as is used in Shell Sort (see further down the page) can be used with Shaker Sort. Each pass consists of an upward and a downward shaker sort pass, instead of an insertion pass as would be done in Shell Sort. This algorithm is also rarely used, as it exists mostly to demonstrate that the principle used in Shell Sort can be used independently of the H-Sorting algorithm used.

Like Shell Sort, H-Shake Sort is NOT stable.

http://www.cs.princeton.edu/~rs/shell/paperF.pdf

The same technique as is used in Shell Sort (see further down the page) can be used with Brick Sort. Each pass consists of an odd and an even brick sort pass, instead of an insertion pass as would be done in Shell Sort. This algorithm is also rarely used, as it exists mostly to demonstrate that the principle used in Shell Sort can be used independently of the H-Sorting algorithm used.

Like Shell Sort, H-Brick Sort is NOT stable.

http://www.cs.princeton.edu/~rs/shell/paperF.pdf

Another basic sorting strategy is to simply search for the next largest (or smallest, if you prefer) item on each pass and move it into position. This gives the advantage that we only need to perform one swap on each pass, which means far less copying, though the same number of comparisons. At first one might think this would make a huge speed difference, but in reality it often doesn't make that much difference. This algorithm is called Selection Sort.
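A minimal C sketch of Selection Sort (names are my own):

```c
#include <stddef.h>

/* Selection Sort: each pass finds the smallest remaining item and swaps
   it into position, so at most one swap per pass is performed. */
static void selection_sort(int a[], size_t n)
{
    for (size_t i = 0; i + 1 < n; i++) {
        size_t min = i;
        for (size_t j = i + 1; j < n; j++)  /* scan for smallest remaining */
            if (a[j] < a[min]) min = j;
        if (min != i) {                     /* the single swap of this pass */
            int t = a[i]; a[i] = a[min]; a[min] = t;
        }
    }
}
```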

Selection Sort on an array is NOT stable, because in order to place an item in its final spot, we swap away the item that was there, possibly moving it past another item equal to it and thus reversing the order of those equal items.

http://en.wikipedia.org/wiki/Selection_sort

http://xlinux.nist.gov/dads/HTML/selectionSort.html

http://warp.povusers.org/DoubleBurstSelectionSort/

http://www.softpanorama.org/Algorithms/Sorting/selection_sort.shtml

Bingo Sort on an array is NOT stable for the same reason as Selection Sort.

http://xlinux.nist.gov/dads/HTML/bingosort.html

Exact sort is NOT stable.

http://www.geocities.com/p356spt/

Another well-known sorting technique is Insertion Sort. During Insertion Sort, you always have a sorted portion and an unsorted portion. Initially the sorted portion consists of only one item. Each subsequent step finds a place in the sorted portion for a single item from the unsorted portion and inserts it there, enlarging the sorted portion and shrinking the unsorted portion until the unsorted portion becomes empty. At that point, the entire thing is sorted. An unfortunate downside to this technique is that inserting the item usually means moving many items over to make room for it. Usually the search starts from the position just to the right of where the unsorted item lies, so that if the array is already sorted, insertion is very fast. I prefer to first test whether the item to be inserted actually needs any moving at all, and if not (i.e. it comes after all items in the sorted portion), we simply extend the sorted portion to include this item and move nothing. In fact, this is among the fastest algorithms for sorting an almost-sorted array. When the array is completely sorted already, there are O(n) compares and no moves. That's the same amount of work as testing whether an array is sorted! However, if the array starts off in reverse order it will be very slow, because many moves and many comparisons will be required. Note: moving items over to make room for insertion can be done by means of repeated swapping; however, I prefer to simply remember the value to insert, move the required items over by one, and then place the remembered item into the hole.
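The variant described above, where an item that already belongs at the end moves nothing, and insertion shifts items into a hole rather than repeatedly swapping, might be sketched as (names my own):

```c
#include <stddef.h>

/* Insertion Sort: grow a sorted prefix one item at a time.  An item that
   already follows everything sorted so far is left in place; otherwise
   larger items are shifted over by one and the remembered value is
   dropped into the hole. */
static void insertion_sort(int a[], size_t n)
{
    for (size_t i = 1; i < n; i++) {
        if (a[i - 1] <= a[i]) continue;     /* already in place: no moves */
        int v = a[i];                       /* remember the value */
        size_t j = i;
        while (j > 0 && a[j - 1] > v) {     /* shift larger items over */
            a[j] = a[j - 1];
            j--;
        }
        a[j] = v;                           /* place into the hole */
    }
}
```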

Insertion sort is stable.

http://en.wikipedia.org/wiki/Insertion_sort

http://xlinux.nist.gov/dads/HTML/insertionSort.html

*As far as I am aware, this algorithm is my own creation. Therefore I do not yet have any links to other sites with the algorithm. Furthermore its usefulness is extremely low anyway, as it is rather large and not significantly faster than insertion sort.*

Binary Insertion sort can be stable if written carefully, as shown.

http://en.wikipedia.org/wiki/Insertion_sort

http://xlinux.nist.gov/dads/HTML/binaryinsort.html

Interpolation Insertion sort can also be stable if written carefully, as shown.

Strand sort is stable.

http://xlinux.nist.gov/dads/HTML/strandSort.html

Bidirectional Strand Sort is stable if you only look for explicitly decreasing sequences before reversing them, as done here.

*As far as I am aware, this algorithm is my own creation. Therefore I do not yet have any links to other sites with the algorithm. Furthermore its usefulness is extremely low anyway.*

It uses an index list internally, allowing insertion into the sorted portion (of the index list) in constant time. However, unlike list-based insertion sort, it can find the point of insertion in faster than O(n) time. This is done by sampling through the original array over some of the items that have already been inserted. Instead of examining half of them on average, it explicitly scans the square root of the number inserted so far at even intervals, and uses this as a best guess. From there it scans linearly forwards to find the real insertion point. With m items already inserted, this samples sqrt(m) items initially, plus on average sqrt(m)/2 further items to find the real insertion point, assuming the initial sampling gave a close approximation. This leads to a typical execution time of about O(n^1.5).

Just like list-based Insertion Sort, Order Sort is stable.

http://www.ddj.com/dept/cpp/184402000

This sorting algorithm is NOT stable.

*I have not seen Shell Sort written like this elsewhere. Therefore I do not yet have any links to other sites with the algorithm. Furthermore it has no advantage over normal Shell Sort, which does not use recursion.*

The other way to improve Insertion Sort is, of course, Shell Sort, invented by Donald Shell in 1959. It uses the same principle as Comb Sort, except that the insertion method is used to k-sort the items on each pass. There are many variants of this algorithm alone, but basically they all just use different number sequences for the gap sizes in each pass. The Incerpi–Sedgewick sequence used here is currently regarded as the best, and gives running times of about O(n log n) in practice.
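A sketch of Shell Sort in C. Note that the gap sequence here is Marcin Ciura's well-known sequence, used as a stand-in because the Incerpi–Sedgewick values are not reproduced in this text; the structure is otherwise the standard gapped insertion sort:

```c
#include <stddef.h>

/* Shell Sort: insertion-sort passes over items a gap apart, with the gap
   shrinking each pass until a final ordinary insertion pass at gap 1. */
static void shell_sort(int a[], size_t n)
{
    /* Ciura's gap sequence (stand-in for Incerpi-Sedgewick). */
    static const size_t gaps[] = { 701, 301, 132, 57, 23, 10, 4, 1 };
    for (size_t g = 0; g < sizeof gaps / sizeof gaps[0]; g++) {
        size_t gap = gaps[g];
        for (size_t i = gap; i < n; i++) {
            int v = a[i];
            size_t j = i;
            while (j >= gap && a[j - gap] > v) {  /* gapped insertion */
                a[j] = a[j - gap];
                j -= gap;
            }
            a[j] = v;
        }
    }
}
```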

This sorting algorithm is NOT stable.

http://xlinux.nist.gov/dads/HTML/shellsort.html

http://www.research.att.com/~njas/sequences/A036569

http://www.cs.fiu.edu/~weiss/Shellsort.html

This sorting algorithm is NOT stable.

http://yagni.com/combsort/correspondence.php

The below implementation of this algorithm is NOT stable, though it should be possible to modify it to be stable.

http://en.wikipedia.org/wiki/Library_sort

http://www.cs.sunysb.edu/~bender/pub/FUN2004-librarysort.ps

In-Place Merge Sort is stable.

http://www.cs.ubc.ca/spider/harrison/Java/sorting-demo.html

http://xlinux.nist.gov/dads/HTML/bitonicSort.html

http://www.tools-of-computing.com/tc/CS/Sorts/bitonic_sort.htm

Merge Sort is heavily recursive, and obviously it would be nice not to use so much stack space. With some effort we can translate the algorithm into its iterative 'breadth-first' form. However, it seems quite difficult at first to deal with arrays that are not a power of two in size. No problem: we just pretend the size is a power of two, but ignore any items outside the actual range. We then treat the array as n sequences of length 1. We merge odd and even sub-arrays to produce n/2 sequences of length 2. Then we again merge odd and even sub-arrays to produce n/4 sequences of length 4, and so on, until finally we merge 2 sequences of length n/2 to complete the sorting.
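The breadth-first scheme just described might be sketched as follows in C. Clamping the run boundaries to n is the 'ignore items outside the actual range' trick; names and the scratch-buffer handling are my own:

```c
#include <stdlib.h>
#include <string.h>

/* Breadth-first (bottom-up) Merge Sort: merge runs of width 1, then 2,
   then 4, ... doubling each pass.  Run boundaries are clamped to n, so
   no power-of-two padding is ever stored. */
static void merge_sort_bu(int a[], size_t n)
{
    int *buf = malloc(n * sizeof *buf);
    if (!buf) return;                       /* allocation failed */
    for (size_t width = 1; width < n; width *= 2) {
        for (size_t lo = 0; lo < n; lo += 2 * width) {
            size_t mid = lo + width < n ? lo + width : n;
            size_t hi  = lo + 2 * width < n ? lo + 2 * width : n;
            size_t i = lo, j = mid, k = lo;
            while (i < mid && j < hi)       /* take left on ties: stable */
                buf[k++] = a[j] < a[i] ? a[j++] : a[i++];
            while (i < mid) buf[k++] = a[i++];
            while (j < hi)  buf[k++] = a[j++];
        }
        memcpy(a, buf, n * sizeof *a);      /* every slot was rewritten */
    }
    free(buf);
}
```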

Breadth-First Merge Sort is stable.

We could have used the same 'breadth-first' idea just mentioned back when we were doing bitonic merges too. Build tiny increasing and decreasing sequences and use bitonic merge on those until you have a single bitonic sequence, and then turn that into the first half of a bitonic sequence, conveniently known as a monotonic (i.e. sorted) sequence.

http://www.tools-of-computing.com/tc/CS/Sorts/bitonic_sort.htm

1. If both of the next items in the input buffers are larger than the last item added to the output buffer: append the one with the larger top value, keeping the average of the top values as low as possible and giving the maximum chance of being able to further extend the run with the next item added.

2. If both of the next items in the input buffers are smaller than the last item added to the output buffer: again, append the one with the larger top value, keeping the average of the top values as low as possible and making it more likely that the next item will once again further extend the run.

3. If one of the next items in the input buffers is less than the last item added to the output buffer, and the other is greater: append the one with the larger top value, because that extends the run length whereas appending the one from the other buffer would not. This is important because otherwise run lengths would not increase very quickly and the sort would actually become O(n^2).

Natural Merge Sort is NOT stable.

http://www.eternallyconfuzzled.com/tuts/algorithms/jsw_tut_sorting.aspx#merge