Quick Sort (With 3 partitions)
In Computer Science, in the subject of “Design and Analysis of Algorithms” (a nightmare coming true), we learned about a sorting technique with a misleading name — “Quick Sort”. As anyone familiar with it knows, this algorithm has a worst-case time complexity of O(n²), while Merge Sort is O(n log n) under all conditions. But let us brush that under the carpet for now and talk about Quick Sort. Conventionally, we divide the array into 2 parts in the partition phase. In the approach described here, we will divide it into 3 parts: instead of a single pivot, we’ll have 2 pivots. So let’s get started.
3-way partition algorithm
Before making the cake, let me tell you about the flour and eggs I am using for the same — the 3-way partition algorithm. Here is what happens. We have an array of numbers A, and 2 numbers a and b such that a < b. Our goal is to partition the array so that all elements less than a end up on the left, all elements between a and b in the middle, and all elements greater than b on the right. The algorithm is as follows:
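Since the original listing may not render here, below is a minimal Python sketch of the 3-way partition as described above. The pointer names low/mid/high match the walkthrough later in this article; the function name and exact return values are my choices, not from the original.

```python
def three_way_partition(A, a, b):
    """Rearrange A in place so elements < a come first, elements in
    [a, b] sit in the middle, and elements > b end up on the right.
    Returns the (low, high) bounds of the middle region."""
    low, mid, high = 0, 0, len(A) - 1
    while mid <= high:
        if A[mid] < a:
            # Element belongs to the left region: grow it by one.
            A[low], A[mid] = A[mid], A[low]
            low += 1
            mid += 1
        elif A[mid] > b:
            # Element belongs to the right region; the element swapped
            # in from the back is unexamined, so mid does not advance.
            A[mid], A[high] = A[high], A[mid]
            high -= 1
        else:
            # a <= A[mid] <= b: already in the middle region.
            mid += 1
    return low, high

A = [5, 9, 1, 7, 3, 8, 2]
low, high = three_way_partition(A, 3, 7)
print(A, low, high)  # [2, 1, 5, 7, 3, 8, 9] 2 4
```

After the call, A[:low] holds the elements below a, A[low:high+1] the elements in [a, b], and A[high+1:] the elements above b.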
On analysis, this takes O(n) time: every iteration either moves mid forward or moves high backward, so the loop runs at most n times.
Back to our problem
The 3-way partitioning algorithm is modified as follows:
- The numbers a and b are our first and last elements, so the partition pass runs over the array shrunk by one from both ends. If the first element is greater than the last element, they’re swapped first.
- Once the pass is done, we swap the first element with the last element that is smaller than it (“last” in terms of position, not value). Similarly, the last element of the array is swapped with the first element that is greater than it. This fixes the final positions of the first and last elements, creating 3 partitions.
- The same procedure is then applied recursively to each of the 3 partitions formed by the pivots.
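Putting the three steps above together, here is one possible Python sketch of the whole sort. The function name quick_sort_3way and the exact recursion bounds are my choices under the scheme described, not a listing from the original.

```python
def quick_sort_3way(A, left, right):
    """Dual-pivot quick sort: uses A[left] and A[right] as the two
    pivots and splits each subarray into three partitions per call."""
    if right - left < 1:          # 0 or 1 elements: already sorted
        return
    if A[left] > A[right]:        # ensure pivot a <= pivot b
        A[left], A[right] = A[right], A[left]
    a, b = A[left], A[right]

    # 3-way partition pass over the interior A[left+1 .. right-1].
    low, mid, high = left + 1, left + 1, right - 1
    while mid <= high:
        if A[mid] < a:
            A[low], A[mid] = A[mid], A[low]
            low += 1
            mid += 1
        elif A[mid] > b:
            A[mid], A[high] = A[high], A[mid]
            high -= 1
        else:
            mid += 1

    # Fix the pivots: swap a with the last element smaller than it,
    # and b with the first element greater than it.
    low -= 1
    high += 1
    A[left], A[low] = A[low], A[left]
    A[right], A[high] = A[high], A[right]

    # Recurse on the three partitions around the two pivots.
    quick_sort_3way(A, left, low - 1)
    quick_sort_3way(A, low + 1, high - 1)
    quick_sort_3way(A, high + 1, right)

A = [90, 12, 0, 1000, 27, 555, 6, 89]
quick_sort_3way(A, 0, len(A) - 1)
print(A)  # [0, 6, 12, 27, 89, 90, 555, 1000]
```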
Consider the following array: [90 12 0 1000 27 555 6 89]
To start, we initialize left=0 and right=7, so the elements at left and right are 90 and 89 respectively.
We see that 90 > 89, so we swap them. We set low and mid to 1 and high to 6.
12 < 89 — swap, but they’re the same. So only increment — low=mid=2.
0 < 89 — swap, but they’re the same. So only increment — low=mid=3.
1000 > 90 — swap 1000 and 6. Decrement high — high=5.
6 < 89 — swap, but they’re the same. So only increment — low=mid=4.
27 < 89 — swap, but they’re the same. So only increment — low=mid=5.
555 > 90 — swap, but they’re the same. So only decrement — high=4.
Now we stop, since mid has crossed high. Let us put 89 and 90 in their positions and divide the array into 3 parts:

[27 12 0 6] 89 90 [1000 555]
We proceed with subarray1 in the same way as before. For subarray2, we arrive at the condition left ≥ right, so there is nothing in subarray2. In subarray3, we first see that A[left] ≥ A[right], so we swap them. Then we find that the subarray has only 2 elements, so we stop there.
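The partition pass traced above can be checked mechanically. Here is a hedged sketch of a single pass on the sample array (the helper name partition_step is mine, not from the original):

```python
def partition_step(A, left, right):
    """One dual-pivot partition pass: pivots are A[left] and A[right].
    Returns the final positions of the two pivots."""
    if A[left] > A[right]:
        A[left], A[right] = A[right], A[left]
    a, b = A[left], A[right]
    low, mid, high = left + 1, left + 1, right - 1
    while mid <= high:
        if A[mid] < a:
            A[low], A[mid] = A[mid], A[low]
            low += 1
            mid += 1
        elif A[mid] > b:
            A[mid], A[high] = A[high], A[mid]
            high -= 1
        else:
            mid += 1
    # Place the pivots just inside the left and right regions.
    A[left], A[low - 1] = A[low - 1], A[left]
    A[right], A[high + 1] = A[high + 1], A[right]
    return low - 1, high + 1

A = [90, 12, 0, 1000, 27, 555, 6, 89]
p, q = partition_step(A, 0, len(A) - 1)
print(A)    # [27, 12, 0, 6, 89, 90, 1000, 555]
print(p, q) # 4 5
```

The output matches the walkthrough: 89 and 90 land at indices 4 and 5, with [27 12 0 6] on the left, an empty middle, and [1000 555] on the right.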
There are 3 recursive calls, each on roughly n/3 elements on average. Only the while loop takes O(n) time; everything else, including the swaps, is constant time. So our recurrence relation is as follows:
T(n) = 3T(n/3) + O(n); n > 2
T(n) = O(1); n ≤ 2
Solving the same, we get T(n) = O(n log n).
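To see where the n log n comes from, we can expand the recurrence by back-substitution (a standard sketch; c stands for the constant hidden in the O(n) term):

```latex
\begin{aligned}
T(n) &= 3T(n/3) + cn \\
     &= 9T(n/9) + 2cn \\
     &\;\;\vdots \\
     &= 3^{k}\,T(n/3^{k}) + k\,cn .
\end{aligned}
```

The expansion stops when n/3^k ≤ 2, i.e. at k ≈ log₃ n, giving T(n) = n·O(1) + cn·log₃ n = O(n log n).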
A Case to See
Consider an array that is sorted, but in reverse order:

[1000 555 90 89 27 12 6 0]

The two pivots are the ends, 1000 and 0. Since 1000 > 0, they are swapped:

1000 [555 90 89 27 12 6] 0
0 [555 90 89 27 12 6] 1000
Now, among the elements from 555 down to 6, no number is greater than 1000 and none is less than 0. So only mid moves ahead, while low and high stay where they started. As a result, “low-1” and “high+1” end up at the first and the last positions respectively: instead of dividing n into parts of roughly n/3, one partition gets n-2 elements. So our recurrence relation becomes as follows:
T(n) = T(n-2) + O(n); n > 2
T(n) = O(1); n ≤ 2
Solving this, we get T(n)=O(n²).
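Expanding this recurrence the same way shows the quadratic blow-up (again c is the constant hidden in O(n)):

```latex
\begin{aligned}
T(n) &= T(n-2) + cn \\
     &= T(n-4) + c(n-2) + cn \\
     &\;\;\vdots \\
     &= c\bigl(n + (n-2) + (n-4) + \dots\bigr)
      \approx \frac{c\,n^{2}}{4} = O(n^{2}).
\end{aligned}
```

So on reverse-sorted input, this 3-way scheme degenerates to the same quadratic behaviour as classic quick sort with a bad pivot.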