Sorting is a fundamental concept in computer science, and sorting any given data efficiently requires choosing the right algorithm. Quick sort is one such efficient algorithm for sorting a given list of data. It owes its popularity to the average-case performance of O(n log n) delivered by its divide-and-conquer strategy, which is why many developers working with large datasets prefer it.
In this article, we will learn about the quick sort algorithm in C, complete with code examples and a step-by-step explanation of how it works. We will also discuss its time and space complexity, pros and cons, comparison with other sorting algorithms, and various applications of the quick sort algorithm.
What is Sorting?
Sorting is the process of arranging a collection of items, such as numbers, names, or objects, into a specific order. In computer science, an algorithm that arranges the elements of a list into an ordered list is called a sorting algorithm. The most commonly used orders are numerical and lexicographical, in ascending or descending form. Efficient sorting improves the effectiveness of other algorithms (such as search and merge algorithms) that require their input data to be sorted. Sorting is also frequently helpful for producing human-readable output and for canonicalising data.
Sorting is crucial in computer science and everyday applications because it:
Improves search efficiency
Facilitates data analysis and processing
Enhances data presentation and readability
Serves as a fundamental building block for many other algorithms
What is the Quick Sort Algorithm in C?
Quick Sort is an efficient sorting algorithm that works using the divide-and-conquer approach. It is based on comparing the elements to be sorted against a chosen pivot. The core principle of Quick Sort is to select a pivot element and partition the array into two sides: the elements less than the pivot and the elements greater than it. The sub-arrays are then sorted individually through recursive calls. This process of partitioning and sorting continues until every part of the array is properly arranged and in order.
Quick sort is known for handling large amounts of data effectively and is used in many kinds of application systems owing to its good average-case characteristics. It is especially successful on huge datasets. Quicksort is one of the most widely used sorting algorithms and can be found in many common programming libraries, helped by its low O(log n) space consumption.
Characteristics of Quick Sort:
Best Case Complexity: O(n log n)
Average Case Complexity: O(n log n)
Worst Case Complexity: O(n^2)
Space Complexity: O(log n) (due to recursion)
Stable: No (elements with equal values may not retain their original order)
In-Place: Yes (does not require extra space for sorting)
Why Use the Quick Sort Algorithm?
Efficiency: With an average time complexity of O(n log n), it is faster than many other sorting algorithms, especially for large datasets.
In-place sorting: It doesn’t require much additional memory space. Most implementations only need O(log n) extra space for recursion.
Divide-and-conquer approach: The algorithm breaks the problem into smaller subproblems. This makes it suitable for parallelisation, potentially improving performance on multi-core systems.
Adaptability: Quick Sort can be optimised for various scenarios. For example, the choice of pivot can be adjusted based on the data characteristics.
Good cache performance: It has a good locality of reference, often leading to better performance on modern hardware due to the efficient use of CPU caches.
Widely implemented: Many standard libraries use Quick Sort or its variants as their default sorting algorithm.
Quick Sort also has some drawbacks:
Worst-case time complexity of O(n^2) if the pivot is chosen poorly.
Not stable (doesn’t preserve the relative order of equal elements).
It can be outperformed by other algorithms in specific scenarios.
The algorithm uses the divide-and-conquer strategy to sort arrays. Let’s dive into a detailed explanation with examples and C code.
How Quick Sort Works
The Quick Sort algorithm sorts an array by selecting a pivot element, partitioning the array into two subarrays around it, and then recursively sorting the left and right subarrays.
1. Choose a Pivot
Select a pivot element from the array. Common choices include the first element, the last element, or the median of the first, middle, and last elements.
2. Partition the Array
Rearrange the array such that all elements less than the pivot are to the left of the pivot and all elements greater than the pivot are to the right.
The pivot is now in its correct position in the sorted array.
3. Recursively Sort the Subarrays
Apply the Quick Sort algorithm recursively to the subarray on the left of the pivot.
Apply the Quick Sort algorithm recursively to the subarray on the right of the pivot.
4. Base Case
If the subarray has 0 or 1 element, it is already sorted and no further action is needed.
How to choose a Pivot element?
Choosing an appropriate pivot is crucial for the efficiency of Quick Sort; the selection itself requires no special technique. Popular strategies include:
First Element: Easy to implement, but risks worst-case performance on already sorted arrays.
Last Element: Simple, but may also lead to worst-case performance on sorted arrays.
Random Element: Reduces the chance of worst-case performance.
Median-of-Three: Uses the median of the first, middle, and last elements, providing a good balance (see the sketch below).
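To make the last strategy concrete, here is a minimal sketch of median-of-three pivot selection. The helper name medianOfThree and the step of swapping the chosen value into the last position (so that a partition routine which always uses the last element as the pivot can be reused) are illustrative assumptions, not part of any standard library.

#include <stdio.h>

// Hypothetical helper: returns the index of the median of
// nums[low], nums[mid], and nums[high].
int medianOfThree(int nums[], int low, int high) {
    int mid = low + (high - low) / 2;
    int a = nums[low], b = nums[mid], c = nums[high];
    if ((a <= b && b <= c) || (c <= b && b <= a)) return mid;
    if ((b <= a && a <= c) || (c <= a && a <= b)) return low;
    return high;
}

int main(void) {
    int nums[] = {11, 9, 6, 16, 7};
    int low = 0, high = 4;
    int m = medianOfThree(nums, low, high);
    // Move the median value into the last position so that a partition
    // function that always takes nums[high] as the pivot can be reused.
    int temp = nums[m]; nums[m] = nums[high]; nums[high] = temp;
    printf("Chosen pivot: %d\n", nums[high]); // prints 7 for this array
    return 0;
}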
Pseudocode for Quick Sort Algorithm
Here’s the pseudocode for the quick sort algorithm:
QUICKSORT(nums, low, high)
    if low < high then
        // Partition the array and get the pivot index
        pi = PARTITION(nums, low, high)
        // Recursively apply quick sort on the left subarray
        QUICKSORT(nums, low, pi - 1)
        // Recursively apply quick sort on the right subarray
        QUICKSORT(nums, pi + 1, high)

PARTITION(nums, low, high)
    pivot = nums[high]    // Select the pivot as the last element
    i = low - 1           // Initial index for the smaller element
    for j = low to high - 1 do
        if nums[j] <= pivot then
            i = i + 1                  // Advance the boundary of smaller elements
            SWAP(nums[i], nums[j])     // Swap the current element into the smaller region
    SWAP(nums[i + 1], nums[high])      // Place the pivot at its correct position
    return i + 1                       // Return the partition index

SWAP(a, b)
    temp = a
    a = b
    b = temp
Example:
Let's take an example to see how the quick sort algorithm works on an array. We will use the array nums = [11, 9, 6, 16, 7].
Step 1: Select the Pivot Element
The Quick Sort method requires us to select a pivot element in the first step. There are several methods for choosing the pivot:
As the first element of the array,
As the last element of the array (common and efficient),
As the median element, or
A random element from the array.
For simplicity and efficiency, let’s consider the last element of the array as the pivot.
Step 2: Partition the Array
Once the pivot is selected, the next step is to partition the array into two parts:
Elements having values less than the pivot are moved to the left side of the pivot.
Elements having values greater than the pivot are moved to the right side.
At this stage the elements do not need to be fully sorted; they are merely rearranged around the pivot, as we will sort them later on.
Rearrange the Elements Around the Pivot:
We use two variables, i and j:
i: Tracks the boundary of elements smaller than the pivot; it marks the location of the next swap.
j: Loops through the array, comparing each element with the pivot.
Initialisation:
Assign i = low - 1 (low being the subarray's initial index).
Assign j = low; j points to the element currently being compared with the pivot.
Steps in Rearranging Elements:
1. Start with the array:
nums = [11, 9, 6, 16, 7] (pivot = 7)
2. Initialise:
low = 0, high = 4 (indices of the first and last elements).
i = -1 and j = 0 (i.e., i = low - 1 and j = low).
3. Compare each element with the pivot:
Compare the value at index j with the pivot:
If the value at nums[j] is less than the pivot, increment i and swap nums[i] and nums[j].
4. Partitioning the Array:
Start comparing nums[j] with the pivot 7:
First comparison: 11 > 7, no swap, increment j.
Second comparison: 9 > 7, no swap, increment j.
Third comparison: 6 < 7, increment i to 0, swap nums[i] and nums[j] (swap 11 and 6).
Now the array looks like this: [6, 9, 11, 16, 7]. Continue comparing the remaining elements. Fourth comparison: 16 > 7, no swap.
5. Final Swap with Pivot:
After the comparisons, swap the element at index i + 1 with the pivot (a runnable check of this result follows below).
In this case, swap nums[i+1] with nums[high] (swap 9 and 7). The array now looks like this:
nums = [6, 7, 11, 16, 9]
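To verify this rearrangement, the following minimal sketch runs a Lomuto-style partition (the same scheme used in the full programs later in this article) on the example array and prints the pivot index and the resulting array; the standalone main function here is only for illustration.

#include <stdio.h>

// Lomuto-style partition using the last element as the pivot,
// matching the partition function in the full programs below.
int partition(int nums[], int low, int high) {
    int pivot = nums[high];
    int i = low - 1;
    for (int j = low; j <= high - 1; j++) {
        if (nums[j] <= pivot) {
            i++;
            int temp = nums[i]; nums[i] = nums[j]; nums[j] = temp;
        }
    }
    int temp = nums[i + 1]; nums[i + 1] = nums[high]; nums[high] = temp;
    return i + 1;
}

int main(void) {
    int nums[] = {11, 9, 6, 16, 7};
    int pi = partition(nums, 0, 4);
    printf("Pivot index: %d\nArray after one partition: ", pi);
    for (int k = 0; k < 5; k++) {
        printf("%d ", nums[k]); // expected output: 6 7 11 16 9, with pivot index 1
    }
    return 0;
}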
Step 3: Divide and Conquer
Once the partitioning step is complete, the pivot is now in its correct position. The array is divided into two subarrays:
Sub-array 1: Elements before the pivot (smaller elements).
Sub-array 2: Elements after the pivot (larger elements).
For the array [6, 7, 11, 16, 9], the subarrays would be:
Left subarray: [6]
Right subarray: [11, 16, 9]
Step 4: Recursive Quick Sort on Subarrays
The next step is to recursively apply the Quick Sort algorithm to the left and right subarrays.
1. Repeat the process for the left subarray [6]:
Since this subarray has only one element, it is already sorted.
2. Repeat the process for the right subarray [11, 16, 9]:
Select 9 as the pivot.
Partition the right subarray around 9, which places 9 in its correct position and leaves [16, 11] to its right.
Recursively sorting [16, 11] gives the final sorted subarray [9, 11, 16].
Step 5: Final Sorted Array
After recursively sorting the subarrays, the entire array will be fully sorted.
nums = [6, 7, 9, 11, 16]
Quick Sort Program in C
Recursive Quick Sort Program in C
Code:
#include <stdio.h>

// Function to swap two elements (p and q)
void swap(int *p, int *q) {
    int temp = *p;
    *p = *q;
    *q = temp;
}

// Partition function
int partition(int nums[], int low, int high) {
    int pivot = nums[high]; // Select the last element as pivot
    int i = (low - 1);      // Index of smaller element
    for (int j = low; j <= high - 1; j++) {
        // If the current element is smaller than or equal to pivot
        if (nums[j] <= pivot) {
            i++; // Increment index of smaller element
            swap(&nums[i], &nums[j]);
        }
    }
    swap(&nums[i + 1], &nums[high]);
    return (i + 1);
}

// Recursive Quick Sort function
void qSort(int nums[], int low, int high) {
    if (low < high) {
        int pi = partition(nums, low, high); // Partitioning index
        // Recursively sort elements before and after partition
        qSort(nums, low, pi - 1);
        qSort(nums, pi + 1, high);
    }
}

// Main function
int main() {
    int nums[] = {8, 9, 14, 12, 3, 5, 10, 18, 20}; // create an array
    int size = sizeof(nums) / sizeof(*nums);       // get the array size
    qSort(nums, 0, size - 1);                      // use the qSort function
    // print the output
    int i;
    for (i = 0; i < size; i++) {
        printf("%d ", nums[i]);
    }
    return 0;
}
Output:
3 5 8 9 10 12 14 18 20
Explanation:
Initialisation and Array Setup: The quick sort algorithm is demonstrated on the array nums[] = {8, 9, 14, 12, 3, 5, 10, 18, 20}, which is declared in the main function. The size of the array is obtained by dividing the total number of bytes occupied by the array by the size of a single element. The function qSort() is then invoked with the array and its boundary indices low = 0 and high = size - 1.
Partition Function: The partitioning step chooses nums[high], the last element of the current subarray, as the pivot. The partition function rearranges the subarray so that every element less than or equal to the pivot ends up to the left of the pivot and every greater element ends up to its right. It does this by comparing each element against the pivot and moving elements around the pivot with the swap() function where applicable.
Recursive Call: The partitioned array is then sorted recursively. The pivot index (pi) returned by partition() effectively splits the array into two parts: the part before the pivot and the part after it. qSort() is then invoked again for the left subarray (low to pi - 1) and for the right subarray (pi + 1 to high). Recursion proceeds until the base case is reached, i.e. when the subarray contains at most one element (low >= high), which is already sorted.
Iterative Quick Sort Program in C
Code:
#include <stdio.h>
#include <stdlib.h>

// Function to swap two elements (p and q)
void swap(int *p, int *q) {
    int temp = *p;
    *p = *q;
    *q = temp;
}

// Partition function
int partition(int nums[], int low, int high) {
    int pivot = nums[high]; // Select the last element as pivot
    int i = (low - 1);      // Index of smaller element
    for (int j = low; j <= high - 1; j++) {
        // If current element is smaller than or equal to pivot
        if (nums[j] <= pivot) {
            i++; // Increment index of smaller element
            swap(&nums[i], &nums[j]);
        }
    }
    swap(&nums[i + 1], &nums[high]);
    return (i + 1);
}

// Iterative Quick Sort function
void qSortIterative(int nums[], int low, int high) {
    // Create an auxiliary stack
    int *stack = (int *)malloc((high - low + 1) * sizeof(int));
    // Initialise top of stack
    int top = -1;
    // Push initial values of low and high to stack
    stack[++top] = low;
    stack[++top] = high;
    // Keep popping from stack while it is not empty
    while (top >= 0) {
        // Pop high and low
        high = stack[top--];
        low = stack[top--];
        // Partition the array and get the pivot index
        int pi = partition(nums, low, high);
        // If there are elements on the left side of the pivot, push left side to stack
        if (pi - 1 > low) {
            stack[++top] = low;
            stack[++top] = pi - 1;
        }
        // If there are elements on the right side of the pivot, push right side to stack
        if (pi + 1 < high) {
            stack[++top] = pi + 1;
            stack[++top] = high;
        }
    }
    // Free the stack
    free(stack);
}

// Main function
int main() {
    int nums[] = {8, 9, 14, 12, 3, 5, 10, 18, 20}; // create an array
    int size = sizeof(nums) / sizeof(*nums);       // get the array size
    qSortIterative(nums, 0, size - 1);             // use the qSortIterative function
    // print the output
    int i;
    for (i = 0; i < size; i++) {
        printf("%d ", nums[i]);
    }
    return 0;
}
Output:
3 5 8 9 10 12 14 18 20
Explanation:
Initialisation: In this example, we perform quick sort using an iterative approach. The function qSortIterative() is called from the main function with the same parameters (low = 0, high = size - 1).
Stack-Based Approach: In the iterative method, recursion is replaced by an explicit stack. The indices of the sub-arrays that still need to be sorted are stored on this stack. Initially, the low and high indices of the complete array are pushed. On each iteration, the two most recently pushed values are popped and used as the current low and high for the next partitioning step.
Partition Function: The partitioning step remains the same as that in the recursive version.
Handling Subarrays with Stack: Once the algorithm partitions the array, it checks whether there are still unsorted elements on either side of the pivot. If there are elements on the left side of the pivot (pi - 1 > low), the left sub-array indices are pushed to the stack, and similarly for the right side (pi + 1 < high). This process goes on until the stack is empty, meaning the whole array is sorted.
Final Output: When all iterations and partitioning are done, the sorted array is printed: [3, 5, 8, 9, 10, 12, 14, 18, 20].
Time Complexity Analysis of Quick Sort
Best Case: O(n log n), which occurs when the pivot divides the array into two roughly equal parts at every step.
Average Case: O(n log n), which is what random pivot selection typically yields; on average, the pivot divides the array into two approximately equal halves.
Worst Case: O(n^2), which occurs when the pivot splits the array into one element and all the remaining elements rather than into balanced halves, for example when the array is already sorted or reverse-sorted and the pivot is always the smallest or largest element (a common mitigation is sketched below).
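One widely used way to make that worst case unlikely is to choose the pivot at random. The sketch below is an illustrative variant, not the article's main implementation: the helper names randomizedPartition and randomizedQSort are assumptions, and the partition routine is the same Lomuto scheme shown earlier.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

// Same Lomuto partition as in the programs above.
int partition(int nums[], int low, int high) {
    int pivot = nums[high];
    int i = low - 1;
    for (int j = low; j < high; j++) {
        if (nums[j] <= pivot) {
            i++;
            int t = nums[i]; nums[i] = nums[j]; nums[j] = t;
        }
    }
    int t = nums[i + 1]; nums[i + 1] = nums[high]; nums[high] = t;
    return i + 1;
}

// Hypothetical helper: swap a randomly chosen element into the last
// position before partitioning, so that sorted or reverse-sorted input
// no longer triggers O(n^2) behaviour with high probability.
int randomizedPartition(int nums[], int low, int high) {
    int r = low + rand() % (high - low + 1); // random index in [low, high]
    int t = nums[r]; nums[r] = nums[high]; nums[high] = t;
    return partition(nums, low, high);
}

void randomizedQSort(int nums[], int low, int high) {
    if (low < high) {
        int pi = randomizedPartition(nums, low, high);
        randomizedQSort(nums, low, pi - 1);
        randomizedQSort(nums, pi + 1, high);
    }
}

int main(void) {
    srand((unsigned)time(NULL));
    int nums[] = {1, 2, 3, 4, 5, 6, 7, 8}; // already sorted: worst case for a fixed last-element pivot
    int size = sizeof(nums) / sizeof(*nums);
    randomizedQSort(nums, 0, size - 1);
    for (int i = 0; i < size; i++) {
        printf("%d ", nums[i]);
    }
    return 0;
}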
Space Complexity Analysis of Quick Sort
Recursive Approach:
Best and Average Case Complexity: In the best and average circumstances, O(log n) for the recursion stack space.
Worst Case Complexity: In the worst scenario, space complexity is O(n) because of the recursion depth.
Iterative Approach:
In the worst case, the auxiliary stack that holds the subarray indices requires O(n) space (a common way to keep this growth logarithmic is sketched below).
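For both variants, a common refinement is to handle the smaller partition first and keep only the larger one pending, which keeps the recursion depth (or the explicit stack) at O(log n) even in the worst case. Below is a minimal recursive sketch of that idea; the function name qSortSmallFirst is an assumption for this illustration, and the partition routine is the same Lomuto scheme used earlier.

#include <stdio.h>

// Same Lomuto partition as in the programs above.
int partition(int nums[], int low, int high) {
    int pivot = nums[high];
    int i = low - 1;
    for (int j = low; j < high; j++) {
        if (nums[j] <= pivot) {
            i++;
            int t = nums[i]; nums[i] = nums[j]; nums[j] = t;
        }
    }
    int t = nums[i + 1]; nums[i + 1] = nums[high]; nums[high] = t;
    return i + 1;
}

// Recurse only into the smaller partition and loop over the larger one,
// so the recursion depth stays at O(log n) even on adversarial input.
void qSortSmallFirst(int nums[], int low, int high) {
    while (low < high) {
        int pi = partition(nums, low, high);
        if (pi - low < high - pi) {
            qSortSmallFirst(nums, low, pi - 1);  // smaller left side: recurse
            low = pi + 1;                        // larger right side: iterate
        } else {
            qSortSmallFirst(nums, pi + 1, high); // smaller right side: recurse
            high = pi - 1;                       // larger left side: iterate
        }
    }
}

int main(void) {
    int nums[] = {8, 9, 14, 12, 3, 5, 10, 18, 20};
    int size = sizeof(nums) / sizeof(*nums);
    qSortSmallFirst(nums, 0, size - 1);
    for (int i = 0; i < size; i++) {
        printf("%d ", nums[i]);
    }
    return 0;
}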
Pros and Cons of Quick Sort
Pros
Faster: In practice, Quick Sort is faster than other O(n log n) algorithms when dealing with large arrays.
In-Place Sorting: Requires very little extra memory.
Cache-Friendly: Performs better in the cache than other algorithms, such as merge sort.
Cons
Not Stable: The relative order of equal elements may not be preserved.
Worst-Case Performance: If not handled carefully, it can deteriorate to O(n^2).
Though its worst-case behaviour and lack of stability are downsides, Quick Sort thrives on average-case performance and is particularly efficient for huge datasets. Here is how it compares with other common sorting algorithms:
Merge Sort: Needs more memory, but it is stable and consistently O(n log n).
Heap Sort: It is slower in practice because of subpar cache performance, but it offers O(n log n) guarantees with O(1) space.
Bubble and Insertion Sort: While easy and helpful for teaching purposes or small datasets, Bubble Sort and Insertion Sort are inefficient for large datasets.
Radix Sort: For integers or fixed-length strings, Radix Sort can be quicker than Quick Sort; however, it is not comparison-based and uses more memory.
Comparison of different sorting algorithms with Quick Sort:
| Algorithm | Best-case Time Complexity | Worst-case Time Complexity | Space Complexity |
|---|---|---|---|
| Quick Sort | O(n log n) | O(n^2) | O(log n) |
| Merge Sort | O(n log n) | O(n log n) | O(n) |
| Heap Sort | O(n log n) | O(n log n) | O(1) |
| Bubble Sort | O(n) | O(n^2) | O(1) |
| Insertion Sort | O(n) | O(n^2) | O(1) |
| Radix Sort | O(nk) | O(nk) | O(n + k) |
Applications of Quick Sort Algorithm
Telecommunications: Quick Sort is implemented in network routers that manage data packets via message passing, efficiently organising packets according to their destination addresses.
Data Compression: Sorting forms the basis of lossless compression algorithms (e.g., Huffman coding), and Quick Sort is commonly used for arranging frequency tables during the preprocessing stage.
Artificial Intelligence (AI): Quick Sort is used to order large search spaces or databases efficiently, especially in heuristic searches where quick lookup is needed.
Real-Time Systems: In systems that require real-time processing, Quick Sort’s average-case time performance allows event or task queues to be sorted efficiently.
Genomics: In bioinformatics, Quick Sort can be employed for ordering huge genomic datasets, which thereby assists in tasks such as alignment of DNA sequences and searching through genetic information.
File Systems: Operating systems apply Quick Sort for efficient file sorting, for example when ordering large directory listings or file indexes.
Quick Sort stands as one of the most powerful sorting algorithms because it is simple to understand as well as efficient to use. Its average complexity of O(n log n) allows it to be applied in many practical situations, particularly where efficient in-place sorting is required. It has some weaknesses, including instability and a worst case in which performance drops sharply. However, with a careful implementation, the algorithm performs better than many other sorting algorithms.
FAQs
What is Quick Sort in C?
Quick Sort is a very efficient sorting technique that employs the divide-and-conquer methodology: it partitions the array around a pivot element and then recursively sorts each part using the same method.
What is partitioning in Quick Sort?
The partition step picks an element as the pivot and rearranges the array so that all items less than or equal to the pivot are to its left and all larger items are to its right.
What is the worst-case time complexity of Quick Sort?
Quick Sort has an average-case and best-case time complexity of O(n log n). In the worst case, however, its time complexity can degrade to O(n^2) when pivot choices are consistently poor.
Is quicksort faster than merge sort?
For arrays, quick sort is usually preferable; for linked lists, merge sort is recommended. Quicksort is faster than merge sort in many circumstances (such as in virtual memory environments) because it shows strong cache locality.
Which is the fastest sorting method?
The fastest sorting method depends on the context, input size, and data distribution. Generally, Quick Sort is considered one of the fastest sorting algorithms due to its average-case time complexity of O(n log n) and in-place sorting capabilities. However, other algorithms like Merge Sort and Heap Sort also provide O(n log n) performance with more predictable behaviour in the worst case. For smaller datasets, Insertion Sort can be more efficient due to its low overhead.