Okay, now let's assume that you're going to be doing internal, sequential sorting, in place, on a singly-linked list.

There are fewer possible algorithms to choose from than with array-based sorting, because it's no longer feasible to do random access. You could still fake random access by having a function which skips past *(m-1)* items to get to item *m*, if you were hell-bent on it, but this would clearly give you increasingly terrible performance as the number of items increased. So for now, let's consider such things not an option.

There are some things that work better with lists than with arrays when sorting, though, such as the ability to insert an item without having to copy all the items after it along one place.

One other possibility is to sort the list by swapping the data within each node, instead of reordering the list by swapping pointers. For the most part I'd advise against doing this, because for larger item types it is definitely going to be slower than just copying pointers around. It also realistically limits your choice of algorithm even further beyond those listed below. You tend to lose both the advantages of arrays and those of lists: no cheap random access, and no cheap insertion in the middle. However, the technique does have its place in some circumstances, which is all I'll say about it for now.

Now, I'll define a few necessary odds and ends used by some algorithms, and then jump into the algorithms:

fakePrev is a hackish way of simplifying code: it removes the special-case handling usually required when removing from the head of a list. If you're uncomfortable using it, then simply set the variable it is assigned to to NULL instead, check for NULL where it is used, and work out the extra code required.

Bubble Sort is always stable.

Odd Even Transposition sort is stable.

The version below contains a little optimisation that forces the sequence of gap values to finish with the optimal sequence for gaps less than 11, which means the algorithm here is really Combsort11.

Comb sort is NOT stable. http://en.wikipedia.org/wiki/Comb_sort

Insertion sort is stable.

http://datastructures.itgo.com/lists/dynamic/sort.htm

Strand sort is stable.

Bidirectional Strand Sort is stable if you only look for explicitly decreasing sequences before reversing them, as done here.

But wait, there's yet another advantage. Remember what happens when one of the input arrays to a merge becomes empty? You have to copy the remaining items from the other array to the output buffer. With lists, we can simply concatenate all of the items left over in one of the lists to the final list, in a single pointer assignment.

Well, so far it all seems pretty awesome. However, there is some bad news. Each recursive call that splits the list in two can't simply take the average of two indices any more. It now becomes necessary to split the list into two lists. Unfortunately, no matter how you look at it, this always means a pass through the entire list. The best method in practice is to have two cursors iterate through the list simultaneously. One steps by two items each time, and the other steps by one item each time. That way, when one of them gets to the end, the other is in the middle, a perfect place to cut the list in two.

As another bonus, it's also a stable algorithm.

http://en.wikipedia.org/wiki/Merge_sort

http://www.chiark.greenend.org.uk/~sgtatham/algorithms/listsort.html

Natural Merge Sort is NOT stable. http://www.eternallyconfuzzled.com/tuts/algorithms/jsw_tut_sorting.aspx#merge

All tree sort techniques can be used with arrays or lists or whatever, because the original data is copied into a different fundamental data structure for manipulation as it pleases. I won't repeat any of it here; refer to the various Tree Sorting techniques in the array sorting section.

Tree sorts are stable.

Quick sorts are NOT stable.

The simple change is to scan through the list and use the first item you come across, whose next item is smaller, as the splitter.

With this simple change, consider the already-sorted case: the search for a splitter never finds an item whose next item is smaller. It gets all the way to the end, so that portion of the list is already sorted, and that pass is done.

Now consider the reverse-sorted case: the search for a splitter picks the first item as the splitter, because the next item is less than it. It then performs a partition step where every item other than the splitter ends up going in the less-than list (sounds bad so far, I know). However, we build up the partition lists by pushing onto the front, which reverses the order of the items in the process. This means that the next pass is the same as the already-sorted case, and happens in O(n) time. Then you join back on the biggest item and you're done. Both the sorted and reverse-sorted cases become O(n). However, the O(n^2) worst case has certainly not gone away entirely. It's just that the worst case will now appear when the list is ordered like this: largest, smallest, second-largest, second-smallest, third-largest, third-smallest, etc. Fortunately, such an initial ordering is far less likely in practice than the already-sorted or reverse-sorted cases.

Another great spin-off of this splitter-picking technique is that because the less and greatereq lists both always contain at least one item, and the SuperQuickSortAux function always returns the tail of the sorted sub-list, the concatenation of the two lists is extremely simple.

I haven't seen anyone else use this technique, so it's highly likely I'm the inventor of it, but I won't hold my breath.

Quick sorts are NOT stable.

__Example with random numbers:__

Initial list: 17, 37, 48, 29, 13, 50, 4, 63

Splitter chosen as 48, because the next number is less than it. The partition step begins with the following:

less(1): 17, 37

greatereq(1): 48

remaining un-partitioned list(1): 29, 13, 50, 4, 63

(Note that the first item of the greatereq list before partitioning is always the splitter itself.)

After partitioning, these lists are as follows:

less(1): 4, 13, 29, 17, 37

greatereq(1): 63, 50, 48

Now we perform the recursive call on less(1), beginning with 4, 13, 29, 17, 37.

Before partition:

less(2): 4, 13

greatereq(2): 29

remaining un-partitioned list(2): 17, 37

After partitioning:

less(2): 17, 4, 13

greatereq(2): 37, 29

Now we perform the recursive call on less(2), beginning with 17, 4, 13.

Before partition:

less(3): {empty}

greatereq(3): 17

remaining un-partitioned list(3): 4, 13

After partitioning:

less(3): 13, 4

greatereq(3): 17

A further trivial recursive call sorts less(3) into 4, 13, and these are concatenated to become 4, 13, 17. Now we process greatereq(2), beginning with 37, 29.

Before partition:

less(4): {empty}

greatereq(4): 37

remaining un-partitioned list(4): 29

After partitioning:

less(4): 29

greatereq(4): 37

These are concatenated to become 29, 37. Now we process greatereq(1) beginning with 63, 50, 48.

Before partition:

less(5): {empty}

greatereq(5): 63

remaining un-partitioned list(5): 50, 48

After partitioning:

less(5): 48, 50

greatereq(5): 63

Omitting a trivial recursive call for less(5), these will be concatenated to form 48, 50, 63.

Then we concatenate the remaining sub-lists as we return from the outstanding recursive calls, and get:

4, 13, 17, 29, 37, 48, 50, 63.

Lo and behold, the list is sorted!

Quick sorts are NOT stable.

Bucket sort is stable if implemented carefully.

http://en.wikipedia.org/wiki/Bucket_sort

http://www.nist.gov/dads/HTML/bucketsort.html

Perm Sort is NOT stable.

http://cg.scs.carleton.ca/~morin/misc/sortalg/