CPU Scheduling in Operating Systems: What Are Its Types of Processes?


CPU scheduling is the mechanism by which the operating system determines which process is allocated the CPU next. It is an essential element in the design and operation of an operating system (OS). By selecting which of the processes in the ready queue is assigned the CPU, it maximises system performance, ensures that resources are used as efficiently as possible, and keeps the system functioning properly even under high demand.


In this article, we will learn about CPU scheduling in the operating system. We will learn about the types of CPU scheduling, including preemptive and non-preemptive algorithms. We will also understand the criteria for CPU scheduling and the different algorithms like Priority Scheduling, Round Robin (RR), First-Come, First-Served (FCFS), and many others.


What is CPU Scheduling?

Before we learn what CPU scheduling is, we need to understand what a process is. A process is an instance of a program being executed on the CPU to perform a set of tasks. It includes the program's code, its ongoing activity, and the resources it uses. When you run a program, such as an MS Office application or a web browser, the OS launches a process to carry it out.


To understand processes a bit more, consider a uniprogramming system, which runs only one program at a time. When the running process needs input/output, the CPU sits idle and waits for the I/O operation to finish. In a multiprogramming system, which keeps several processes in memory at once, the CPU does not stop working when a running process pauses for I/O. It switches to another ready process so that the CPU is used as much as possible.


Now, to maximise CPU use, how does the system choose which process in the ready queue should run next? This is where CPU scheduling comes into the picture: the act of “scheduling” the processes.


CPU scheduling, also known as process scheduling, decides which process in the ready queue receives the CPU, with the primary goal of creating a quick and efficient system. It ensures that resources are used as efficiently as possible, that they do not remain idle, and that the system functions properly even when demand is high.


What are the Types of CPU Scheduling?

One essential component of the operating system is CPU scheduling, which determines which process in the ready queue gets CPU time. In an operating system, there are mainly two types of CPU scheduling: preemptive and non-preemptive scheduling.


Preemptive Scheduling


In preemptive scheduling, the operating system may interrupt a process's CPU allocation. A higher-priority process is executed before lower-priority ones. For example, the OS can interrupt a running process to allocate the CPU to a newly arrived process and execute its task.


Particularly in real-time and time-sharing systems, preemptive scheduling enables the OS to react to shifting system conditions faster.


Let’s understand an example of preemptive scheduling using the Round Robin algorithm. Assume four processes P1, P2, P3, and P4 with burst times of 6, 8, 7, and 3 units respectively. The time quantum is 4 units.


Time 0-4: P1 executes for 4 units; the remaining burst time is 2.

Time 4-8: P2 executes for 4 units; the remaining burst time is 4.

Time 8-12: P3 executes for 4 units; the remaining burst time is 3.

Time 12-15: P4 executes for 3 units and moves to the next process.

Time 15-17: P1 executes for 2 units and moves to the next process.

Time 17-21: P2 executes for 4 units and moves to the next process.

Time 21-24: P3 executes for 3 units and moves to the next process.


Gantt Chart:


| P1  | P2  | P3  | P4 | P1 | P2  | P3 |
0     4     8     12   15   17    21   24
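The walkthrough above can be replayed with a small simulation. The sketch below is illustrative only (not taken from any real OS scheduler); it feeds the same burst times and quantum through a ready queue and reproduces the timeline:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate Round Robin; return the timeline as (process, start, end)
    tuples. All processes are assumed to arrive at time 0."""
    remaining = dict(bursts)                 # name -> remaining burst time
    queue = deque(name for name, _ in bursts)
    timeline, clock = [], 0
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])  # run one quantum or to completion
        timeline.append((name, clock, clock + run))
        clock += run
        remaining[name] -= run
        if remaining[name] > 0:              # not finished: back of the queue
            queue.append(name)
    return timeline

# Burst times from the example above, time quantum = 4
for name, start, end in round_robin([("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)], 4):
    print(f"{start:>2}-{end:<2} {name}")
```

The printed timeline matches the Gantt chart: P1, P2, P3, and P4 each get a slice, then the unfinished processes cycle back until time 24.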


Non-Preemptive Scheduling


Non-preemptive scheduling requires new processes to wait until the running process finishes its CPU cycle. New processes are executed only after the running process has completed its task. The process retains the CPU until it either moves to the waiting state or terminates; if the CPU is currently working on a process, it will not stop until that process is finished.


Once the process has completed its execution, the CPU picks up the next process from the ready queue (a queue in which all the processes waiting for their execution are stored). The non-preemptive scheduling is easy to implement but comes with less efficient CPU utilisation and may increase the wait time for processes waiting in the ready queue.


Let’s understand an example of non-preemptive scheduling using the Shortest Job Next Algorithm. Assume four processes, P1, P2, P3, and P4, with the burst time values as 3, 12, 23, and 5 respectively.


When we execute these processes using the Shortest Job Next Algorithm, we do as follows:


Time 0-3: P1 executes for 3 units and moves to the next process.

Time 3-8: P4 executes for 5 units and moves to the next process.

Time 8-20: P2 executes for 12 units and moves to the next process.

Time 20-43: P3 executes for 23 units and completes.


Gantt Chart:


| P1  | P4  | P2  | P3  |
0     3     8     20    43
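Because all four processes arrive at time 0, non-preemptive Shortest Job Next reduces to sorting by burst time. The sketch below (illustrative only) reproduces the schedule:

```python
def shortest_job_next(bursts):
    """Non-preemptive SJF: run the shortest job to completion each time.
    All processes are assumed to arrive at time 0."""
    timeline, clock = [], 0
    for name, burst in sorted(bursts, key=lambda p: p[1]):  # shortest first
        timeline.append((name, clock, clock + burst))
        clock += burst
    return timeline

# Burst times from the example above
for name, start, end in shortest_job_next([("P1", 3), ("P2", 12), ("P3", 23), ("P4", 5)]):
    print(f"{start:>2}-{end:<2} {name}")
```

The output matches the Gantt chart: P1 runs first (shortest burst), then P4, P2, and finally P3, finishing at time 43.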


Types of Processes in CPU Scheduling

In CPU scheduling, various types of processes perform different kinds of tasks. These include:


  • CPU-bound processes: These spend most of their time performing computations on the CPU and issue relatively few I/O operations.
  • I/O-bound processes: These spend most of their time performing input and output operations rather than computations.
  • Interactive processes: These interact directly with users, require fast response times, and involve user participation.
  • Batch processes: These run in batches without user input and can be scheduled during off-peak hours.
  • Real-time processes: These are subject to strict timing constraints and must be processed promptly to operate correctly.
  • System processes: These are the background programs that manage the resources and functions of the operating system itself.


Key Terms Used in CPU Scheduling

In an operating system, several important terms need to be understood before moving on to CPU scheduling and its algorithms. Here are some of the key terms used in CPU scheduling:


  • Arrival Time: The moment a process enters the ready queue is known as its arrival time (AT).
  • Burst Time: The amount of CPU time a process needs to finish executing. It is also known as the running time or execution time.
  • Waiting Time: The time a process spends in the ready queue waiting for the CPU, calculated as turnaround time minus burst time (WT = TAT - BT).
  • Response Time: The time between a process entering the ready queue and its first allocation of the CPU.
  • Completion Time: The point in time at which the execution of a process finishes (CT). It should not be confused with burst time.
  • Turnaround Time (TAT): The total time from arrival to completion, calculated as completion time minus arrival time (TAT = CT - AT).
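The formulas above can be checked with a quick calculation. The figures below are hypothetical, chosen only to illustrate the relationships TAT = CT - AT and WT = TAT - BT:

```python
# Hypothetical figures for one process (units are arbitrary time ticks)
arrival_time    = 2    # AT: when the process enters the ready queue
burst_time      = 5    # BT: CPU time the process needs
completion_time = 12   # CT: when the process finishes

turnaround_time = completion_time - arrival_time   # TAT = CT - AT
waiting_time    = turnaround_time - burst_time     # WT  = TAT - BT

print(turnaround_time, waiting_time)   # 10 5
```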



CPU Scheduling Criteria

When we talk about CPU scheduling criteria in an operating system, the main goal is twofold: first, maximise CPU utilisation and throughput; second, minimise response time, waiting time, and turnaround time (TAT). The main criteria are:


  • CPU Utilisation

It measures the percentage of time that the CPU is actively working rather than sitting idle. Scheduling should be carried out so as to maximise CPU use; a high utilisation rate indicates efficient use of CPU resources and good overall system performance.


  • Throughput

It refers to the number of processes completed per unit of time, i.e., the work done by the CPU in a given period. A scheduler should aim to maximise throughput for efficient utilisation.


  • Waiting Time

It is the time a process waits in the ready queue before the CPU is allocated to it. A scheduling algorithm cannot change the amount of time a process needs to finish executing, but it can reduce the process's waiting time.


  • Response Time

Response time matters most in interactive systems. It is the amount of time between the submission of a process and the production of its first response. The scheduler should keep this time as low as possible.


  • Turnaround Time

It is the total time from a process's arrival in the ready queue to its completion. An effective scheduling algorithm reduces this time.
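Two of these criteria, throughput and CPU utilisation, are straightforward ratios. The numbers below are hypothetical, used only to show how the metrics are computed:

```python
# Hypothetical run: 6 processes finish in a 30-tick window, 3 ticks idle
completed, total_time, idle_time = 6, 30, 3

throughput = completed / total_time                        # processes per tick
cpu_utilisation = (total_time - idle_time) / total_time * 100   # percent busy

print(throughput, cpu_utilisation)   # 0.2 90.0
```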


CPU Scheduling Algorithms

To allocate the CPU's available processing time effectively among competing processes, a scheduling technique is required. Scheduling algorithms choose the sequence in which processes run and the amount of CPU time allotted to each task. Every algorithm has advantages and disadvantages, and the one used depends on the goals and needs of the system. CPU scheduling algorithms are categorised mainly into two groups: preemptive and non-preemptive.


In preemptive scheduling, we have algorithms like Priority Scheduling, Longest Remaining Job First, Shortest Remaining Job First, and Round Robin.


In non-preemptive scheduling, we have algorithms like First Come First Serve (FCFS), Shortest Job First (SJF), Longest Job First (LJF), and Highest Response Ratio Next (HRRN).


Priority Scheduling

In priority scheduling, the operating system allocates the CPU to the highest-priority process, interrupting and switching away from a lower-priority process if necessary. If multiple processes have equal priority, FCFS (First Come, First Served) order is used to break the tie.



Characteristics:

  • It schedules processes based on priority.
  • When a higher-priority process arrives, the currently running lower-priority process is preempted.
  • Easy to implement.



Advantages:

  • The cost is low.
  • The complexity of priority scheduling is also low.
  • Waiting time is reduced for higher-priority processes.



Disadvantages:

  • The most common disadvantage of priority scheduling is the starvation problem (a process waits for a very long time for its CPU allocation).
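The selection rule can be sketched in a few lines. For brevity this sketch uses the non-preemptive variant (a preemptive scheduler would additionally interrupt the running process when a higher-priority one arrives); the process names, priorities, and bursts are hypothetical:

```python
def priority_schedule(processes):
    """Non-preemptive priority scheduling sketch: lower number = higher
    priority; ties are broken FCFS by arrival order, as described above.
    Each process is (name, arrival_order, priority, burst)."""
    order = sorted(processes, key=lambda p: (p[2], p[1]))  # priority, then FCFS
    timeline, clock = [], 0
    for name, _, _, burst in order:
        timeline.append((name, clock, clock + burst))
        clock += burst
    return timeline

# Hypothetical processes: P2 and P3 share priority 1, so P2 (earlier) runs first
timeline = priority_schedule([("P1", 0, 2, 4), ("P2", 1, 1, 3), ("P3", 2, 1, 2)])
print([name for name, _, _ in timeline])   # ['P2', 'P3', 'P1']
```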


Round Robin (RR)

The round-robin scheduling algorithm is a preemptive algorithm. Each process is assigned a fixed time quantum (time slot). Processes are executed cyclically; if a process does not complete within its allotted quantum, it is preempted and pushed to the back of the ready queue.



Characteristics:

  • It is used in time-sharing systems.
  • Used where each process needs a fair allocation of CPU time.
  • It is a starvation-free algorithm.
  • It is among the most widely used algorithms in CPU scheduling.



Advantages:

  • It is a fair CPU scheduling algorithm.
  • New processes are added to the end of the ready queue, which gives every process its turn.



Disadvantages:

  • If the time quantum is too small, CPU output drops because the system spends a large share of its time slicing and switching between processes.
  • The frequent context switches of round-robin scheduling add overhead.


Longest remaining Job First

The longest remaining job first algorithm selects the process with the longest remaining burst time (processing time left before completion). The operating system uses it to allocate the CPU systematically. It is a preemptive scheduling algorithm.



Characteristics:

  • The process with the longest remaining burst time is always executed.
  • Processes are applied systematically.
  • It can be either preemptive or non-preemptive.
  • Until the longest process has executed, the CPU cannot be allocated to any other process.



Advantages:

  • It increases the throughput for long processes.
  • It reduces context switching.



Disadvantages:

  • There may be a convoy effect in the operating system.
  • The average waiting time and average turnaround time increase.


Shortest remaining Job First

The shortest remaining job first algorithm selects the process with the shortest remaining burst time (processing time left before completion). The operating system uses it to allocate the CPU systematically. It is also a preemptive scheduling algorithm.



Characteristics:

  • It is used in time-sharing systems.
  • Processing is faster compared to non-preemptive SJF.
  • There is more context switching in the OS, and the CPU stays busy most of the time.



Advantages:

  • Processes with shorter burst times are always executed first.
  • There is less overhead.
  • It minimises the average turnaround time.



Disadvantages:

  • It can also cause process starvation.
  • If short processes keep arriving, long processes can be postponed indefinitely.
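A unit-time simulation makes the preemption visible. In this hypothetical sketch (not production code), P2 arrives one tick after P1 with a shorter burst and immediately takes over the CPU:

```python
def srtf(processes):
    """Preemptive SJF (shortest remaining time first), simulated one time
    unit at a time. processes: {name: (arrival, burst)}. Returns the
    timeline as a list of process names, one entry per time unit."""
    remaining = {n: b for n, (a, b) in processes.items()}
    timeline, clock = [], 0
    while any(r > 0 for r in remaining.values()):
        ready = [n for n, (a, b) in processes.items()
                 if a <= clock and remaining[n] > 0]
        if not ready:                      # CPU idle until the next arrival
            timeline.append(None)
            clock += 1
            continue
        name = min(ready, key=lambda n: remaining[n])  # shortest remaining
        timeline.append(name)
        remaining[name] -= 1
        clock += 1
    return timeline

# Hypothetical: P2 arrives at t=1 with a shorter burst and preempts P1
print(srtf({"P1": (0, 5), "P2": (1, 2)}))
# ['P1', 'P2', 'P2', 'P1', 'P1', 'P1', 'P1']
```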


First-Come, First-Served (FCFS)


The FCFS algorithm is the most basic operating system scheduling algorithm. In the first-come, first-served technique, which works like a FIFO (first in, first out) queue, the process that requests the CPU first gets it first. The order in which processes appear in the ready queue determines their scheduling.



Characteristics:

  • It is a non-preemptive algorithm: once a process gets the CPU, it runs to completion.
  • All tasks are completed on a first-come, first-served basis.
  • It is very simple to implement and use.



Advantages:

  • It is simple to implement.
  • Processes are executed in the order they arrive.



Disadvantages:

  • The convoy effect can be seen.
  • The average waiting time and average turnaround time increase, leading to poor performance.
  • It is not an efficient algorithm.
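The convoy effect is easy to see numerically. In this hypothetical sketch, one long job arriving first forces two short jobs to wait behind it:

```python
def fcfs_waiting_times(bursts):
    """FCFS: each process waits for all earlier arrivals to finish.
    bursts is given in arrival order; all processes arrive at time 0."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)    # waiting time = start time when AT = 0
        clock += burst
    return waits

# Hypothetical burst times in arrival order: a long job first (convoy effect)
waits = fcfs_waiting_times([24, 3, 3])
print(waits, sum(waits) / len(waits))   # [0, 24, 27] 17.0
```

If the two short jobs ran first instead, the waits would be [0, 3, 6] with an average of 3.0, which is why SJF outperforms FCFS on this kind of workload.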


Shortest Job First (SJF)


The “shortest job first” (SJF) technique selects the waiting process with the smallest burst time to run next. This technique can be preemptive or non-preemptive, but it is conventionally placed in the non-preemptive category; the preemptive variant is shortest remaining job first.



Characteristics:

  • It is useful for batch processing systems.
  • The shortest tasks are executed first, so waiting time is reduced.
  • Each task is associated with the duration it needs to finish.



Advantages:

  • Lower average turnaround time.
  • It is better than the FCFS algorithm.
  • It is mostly used in long-term CPU scheduling.



Disadvantages:

  • It may lead to starvation of longer processes if shorter ones keep arriving.


Longest Job First (LJF)


This is the opposite of the shortest job first. It is also a non-preemptive algorithm. The process with the longest burst time is selected next. It may result in long wait times for short processes.



Characteristics:

  • Of all the processes in the waiting queue, the one with the longest burst time always gets the CPU.
  • If two processes have the same burst time, the one that arrived first is handled first.



Advantages:

  • The longest tasks are executed first.
  • Most of the tasks finish at around the same time.




Disadvantages:

  • There may be a convoy effect in the operating system.
  • The average waiting time and average turnaround time increase.


Importance of CPU Scheduling

The purpose of CPU scheduling is to ensure that, whenever the CPU is idle, the operating system has chosen a process from the ready queue to run. This is needed to utilise the CPU's performance fully and get the best output from it. Here are some of the important aspects of CPU scheduling:


  1. It helps produce the maximum output from the CPU, thereby maximising CPU utilisation.
  2. It ensures that the CPU does not sit idle while there are processes ready to run.
  3. It helps reduce system overhead through less idle time, task prioritisation, efficient context switching, etc.
  4. It ensures that each process in the ready queue gets a fair chance to be executed by the CPU.
  5. It reduces response time, which matters when the user wants a quick response from the system.
  6. Waiting time is reduced when efficient scheduling algorithms are applied.
  7. More tasks are accomplished in a given period of time, which is known as increased throughput.
  8. Turnaround time is also reduced by effective scheduling algorithms.


Challenges in CPU Scheduling

Alongside its benefits, CPU scheduling comes with numerous challenges that need to be addressed. Here are some of the common ones:


  1. Switching between processes at short intervals introduces overhead, known as context-switching overhead.
  2. It can be difficult to ensure that real-time processes meet their deadlines.
  3. When high-priority processes keep entering the queue, the system may always prefer them, so some low-priority processes may never run.
  4. When processes are not properly scheduled, they may reach a state where each waits indefinitely for the others to release resources (a deadlock).
  5. As the number of processes increases, scheduling techniques must scale effectively.



CPU scheduling is the process by which the operating system decides which process in the ready queue will be allocated the CPU next. Through careful selection and tuning of scheduling algorithms, operating systems can efficiently control the execution of processes, guaranteeing the best possible resource usage, minimal waiting time, and an equitable distribution of CPU time.


In this article, we have learned in depth about CPU scheduling with its types, including preemptive and non-preemptive. We have also seen in detail the criteria for CPU scheduling along with the scheduling algorithms. Various scheduling methods, including Priority Scheduling, Round Robin (RR), Longest Remaining Job First, Shortest Job First (SJF), First-Come First-Served (FCFS), and others, each have advantages and disadvantages depending on the workload and system requirements.



Because CPU scheduling maximises the utilisation of CPU resources, it is essential to an operating system's ability to function effectively and efficiently. It facilitates the most efficient use of CPU resources, boosts system throughput, reduces turnaround and waiting times, and guarantees that CPU time is distributed fairly among processes.
The way the operating system manages process execution is the primary distinction between preemptive and non-preemptive CPU scheduling. Preemptive scheduling allows the operating system to switch to a higher-priority process by interrupting and suspending the one that is presently running. Non-preemptive scheduling, on the other hand, guarantees that a process cannot be stopped once it begins execution by allowing it to continue until it either finishes or willingly gives up the CPU.
Priority scheduling is often considered the best CPU scheduling algorithm for systems that need stringent real-time processing.
The CPU scheduler, also known as the process scheduler, is a significant component of the operating system; it is responsible for deciding which process will execute next on the CPU.
The short-term scheduler is also known as the CPU scheduler in the operating system. It selects one process from the ready queue, which is then dispatched to the CPU for processing. A scheduling algorithm decides which process is dispatched.
