Memory Management in OS (Operating System) Explained

Updated on August 14, 2024


Memory management is a critical responsibility of the operating system. It governs how a computer's memory is shared among the operating system itself, running programs, and their data. If memory is not properly managed, a system may struggle to handle more than one or two applications at a time. In general, memory management enables a computer to perform several tasks at the same time by dividing memory areas amongst processes.

 

When discussing memory management in OS, we commonly discuss memory allocation, deallocation, and utilisation. This procedure is a significant tool for improving the performance and reliability of computer systems. Effective memory management helps prevent issues like memory leaks, fragmentation, and consuming more memory than you need.

Importance of Memory Management for System Performance

Efficient memory management directly impacts system performance. By managing memory effectively, we can ensure that applications run faster and more efficiently. This management involves keeping track of memory allocation and ensuring that processes do not interfere with each other.

 

Here are some key points highlighting the importance of memory management:

 

  • Efficient Memory Use: Proper allocation and deallocation prevent wastage of memory.
  • Enhanced Multitasking: Allows multiple applications to run simultaneously without conflict.
  • Improved System Stability: Prevents crashes and errors due to memory conflicts.
  • Faster Application Performance: Optimised memory allocation ensures quick access to necessary resources.

Memory management in OS also helps in isolating different processes, ensuring that a malfunctioning application does not affect the entire system. This isolation is crucial for maintaining overall system stability and security.


Detailed Explanation of Contiguous Memory Management Techniques

Contiguous memory management techniques allocate a continuous block of memory to a process. This straightforward method has been used since the early days of computing.

Single Contiguous Memory Management

In single contiguous memory management, the memory is divided into two main parts. One part is allocated to the operating system, while the other is available for a single-user process.

 

Advantages:

 

  • Simple to implement
  • Easy to manage

Disadvantages:

 

  • Inefficient use of memory
  • Only one process can run at a time

In early computer systems, a single program would run to completion before another could start. This approach is no longer suitable for modern multitasking operating systems.

Multiple Partitioning: Fixed and Dynamic

Multiple partitioning was introduced to improve single contiguous memory management. This technique allows multiple processes to reside in memory simultaneously by dividing the memory into partitions.

 

Fixed Partitioning:

 

  • Memory is divided into fixed-sized partitions at the system start.
  • Each partition can hold one process.
  • If the process is smaller than the partition, the remaining space is wasted (internal fragmentation).

Dynamic Partitioning:

 

  • Partitions are created dynamically as processes are loaded.
  • Each process gets exactly as much memory as it needs.
  • This method minimises internal fragmentation but can lead to external fragmentation over time.

Example:

 

Consider a system with 1000 MB of memory. We might divide this memory into 10 partitions of 100 MB in fixed partitioning. If a process needs 80 MB, 20 MB will be wasted. In dynamic partitioning, the process would get exactly 80 MB, reducing waste but potentially leaving small unusable gaps over time.

 

Fixed Partitioning Example:

 

| Partition | Size (MB) | Process | Wasted Space (MB) |
|-----------|-----------|---------|-------------------|
| 1         | 100       | P1      | 20                |
| 2         | 100       | P2      | 0                 |
| 3         | 100       | P3      | 50                |

 

Dynamic Partitioning Example:

 

| Partition | Size (MB) | Process | Wasted Space (MB) |
|-----------|-----------|---------|-------------------|
| 1         | 80        | P1      | 0                 |
| 2         | 100       | P2      | 0                 |
| 3         | 50        | P3      | 0                 |
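The wasted space in the two tables above can be computed with a short sketch (illustrative Python; partition and process sizes are taken from the example):

```python
def fixed_partition_waste(partition_size, process_sizes):
    """Internal fragmentation: each process occupies a whole fixed partition."""
    return [partition_size - size for size in process_sizes]

def dynamic_partition_waste(process_sizes):
    """Dynamic partitioning gives each process exactly the memory it needs."""
    return [0 for _ in process_sizes]

# Processes P1, P2, P3 from the tables above (sizes in MB)
sizes = [80, 100, 50]
print(fixed_partition_waste(100, sizes))  # [20, 0, 50] -> 70 MB wasted
print(dynamic_partition_waste(sizes))     # [0, 0, 0]   -> no internal waste
```

The trade-off, as the text notes, is that the dynamic scheme's zero internal waste comes at the cost of external fragmentation over time.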

 


Understanding Non-Contiguous Memory Management Techniques

Non-contiguous memory management techniques allocate memory in non-adjacent blocks, allowing for more flexible and efficient memory use. This method helps to avoid the fragmentation issues associated with contiguous allocation.

Paging: Concepts and Advantages

Paging divides memory into fixed-size blocks called pages. The process’s address space is divided into pages, which are then mapped to physical memory frames. This mapping is handled by the memory management unit (MMU).

 

Advantages of Paging:

 

  • Eliminates external fragmentation
  • Simplifies memory allocation
  • Allows processes to use more memory than physically available (via virtual memory)

Example:

 

Let’s break down how paging works with a simple example. Suppose we have a process that requires 16 KB of memory, and each page is 4 KB. The process will be divided into 4 pages.

 

Paging Table Example:

 

| Page Number | Frame Number |
|-------------|--------------|
| 0           | 5            |
| 1           | 8            |
| 2           | 2            |
| 3           | 4            |

 

When the CPU needs to access a memory location within this process, it uses the page number and offset to find the corresponding frame in physical memory. For example, suppose the CPU wants to access byte 10 of the process. (For simplicity, the arithmetic below uses a page size of 4 bytes rather than 4 KB.) The logical address is translated to a physical address using the page table.

Logical to Physical Address Mapping:

| Logical Address | Page Number | Offset | Frame Number | Physical Address |
|-----------------|-------------|--------|--------------|------------------|
| 10              | 2           | 2      | 2            | (2 * 4) + 2 = 10 |
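The translation above can be expressed as a small function. This is an illustrative sketch using the 4-byte page size of the worked example (real systems use pages of 4 KB or more):

```python
PAGE_SIZE = 4  # bytes, matching the simplified example above

page_table = {0: 5, 1: 8, 2: 2, 3: 4}  # page number -> frame number

def translate(logical_address):
    """Split a logical address into (page, offset) and map the page to a frame."""
    page = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame = page_table[page]
    return frame * PAGE_SIZE + offset

print(translate(10))  # page 2, offset 2 -> frame 2 -> (2 * 4) + 2 = 10
```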

 

Segmentation: How It Differs from Paging

 

Segmentation is another non-contiguous memory management technique. It divides memory into variable-sized segments based on the logical division of the program, such as functions, arrays, or objects.

 

Differences from Paging:

 

  • Segments are logical units defined by the programmer, while pages are fixed-size blocks defined by the system.
  • Segmentation can lead to external fragmentation, similar to dynamic partitioning, but provides a more logical way to manage memory.

Example:

 

Consider a program with three segments: code, data, and stack. Each segment can be placed anywhere in physical memory, and the segment table keeps track of the segment addresses.

 

Segmentation Table Example:

 

| Segment Number | Base Address | Length |
|----------------|--------------|--------|
| 0              | 1000         | 400    |
| 1              | 1400         | 300    |
| 2              | 1700         | 600    |

 

When the CPU accesses a segment, the base address and offset are used to locate the exact physical address.

Logical to Physical Address Mapping in Segmentation:

 

| Logical Address | Segment Number | Offset | Base Address | Physical Address  |
|-----------------|----------------|--------|--------------|-------------------|
| 120             | 1              | 120    | 1400         | 1400 + 120 = 1520 |
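A sketch of segment-based translation, using the segment table above; note the bounds check against the segment length, which is how segmentation catches out-of-range accesses:

```python
# segment number -> (base address, length), from the segment table above
segment_table = {0: (1000, 400), 1: (1400, 300), 2: (1700, 600)}

def translate(segment, offset):
    """Check the offset against the segment's length, then add the base."""
    base, length = segment_table[segment]
    if offset >= length:
        raise MemoryError("offset outside segment bounds")
    return base + offset

print(translate(1, 120))  # 1400 + 120 = 1520
```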

Static and Dynamic Loading Mechanisms in Memory Management

Loading mechanisms in memory management in OS determine how programs are brought into memory for execution. Two primary types are static loading and dynamic loading.

Static Loading: Characteristics and Use Cases

In static loading, the entire program is loaded into memory before execution begins. All the necessary code and data are placed at fixed locations in memory.

 

Characteristics:

 

  • Simple implementation
  • Entire program resides in memory
  • Faster execution as everything is pre-loaded

Use Cases:

 

  • Small programs where memory usage is not a concern
  • Real-time systems where predictability is crucial

Example:

 

| Program Segment | Memory Address  |
|-----------------|-----------------|
| Code            | 0x0000 – 0x1FFF |
| Data            | 0x2000 – 0x2FFF |
| Stack           | 0x3000 – 0x3FFF |

 

When a statically loaded program runs, all its components are readily available in memory. This approach ensures quick access but can lead to inefficient memory use if the program does not fully utilise its allocated space.

Dynamic Loading: Benefits and Examples

Dynamic loading takes a more flexible approach. Parts of the program are loaded into memory only when needed.

 

Benefits:

 

  • Efficient memory usage
  • Reduced initial memory footprint
  • Allows for larger programs

Example:

 

Consider a text editor that dynamically loads a spell-checker only when a user initiates a spell check. This way, the memory is used efficiently, as the spell-checker is loaded only when required.

 

| Program Segment     | Initial Load Address | Dynamically Loaded Address |
|---------------------|----------------------|----------------------------|
| Core Editor         | 0x0000 – 0x1FFF      | N/A                        |
| Spell Checker       | N/A                  | 0x4000 – 0x4FFF            |
| Additional Features | N/A                  | 0x5000 – 0x5FFF            |

 

Dynamic loading allows for better memory management, especially for programs with features that are not always used.
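The spell-checker scenario can be sketched in Python, where `importlib` loads a module only at the moment the feature is used. Here the standard-library `difflib` module stands in for a spell-checking library; the dictionary is made up for the illustration:

```python
import importlib

DICTIONARY = ["memory", "management", "paging", "segment"]

def spell_check(word):
    """The checker module is loaded only when this feature is first used."""
    checker = importlib.import_module("difflib")  # dynamic load on demand
    return checker.get_close_matches(word, DICTIONARY, n=1)

print(spell_check("managment"))  # suggests the closest dictionary word
```

Until `spell_check` is called for the first time, the checker module occupies no memory in the process, mirroring the text editor example above.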

Exploring Static and Dynamic Linking in Operating Systems

Linking involves combining various pieces of code and data into a single executable file. There are two primary methods: static and dynamic linking.

Static Linking: How It Works and Its Implications

In static linking, all the necessary libraries and modules are included in the executable at compile time.

 

Characteristics:

 

  • Self-contained executable
  • No runtime dependency on external libraries
  • Larger executable size

Implications:

 

  • Increased storage requirements
  • No need for external libraries at runtime
  • Easier distribution of programs

Example:

 

| Program Segment  | Included Library | Memory Address  |
|------------------|------------------|-----------------|
| Main Program     | Yes              | 0x0000 – 0x1FFF |
| Standard Library | Yes              | 0x2000 – 0x2FFF |
| Custom Functions | Yes              | 0x3000 – 0x3FFF |

 

Static linking ensures that all necessary code is available within the executable, leading to reliable and predictable performance.

Dynamic Linking: Advantages Over Static Linking

Dynamic linking loads libraries at runtime rather than at compile time.

 

Advantages:

  • Smaller executable size
  • Shared libraries reduce memory usage
  • Easier updates to libraries

Example:

Consider a web browser that dynamically links to a multimedia library when playing a video. The library is not part of the browser’s core executable, saving space.

 

| Program Segment    | Linked Library | Load Time    |
|--------------------|----------------|--------------|
| Core Browser       | No             | Compile time |
| Multimedia Library | Yes            | Runtime      |

 

Dynamic linking allows multiple programs to share the same library, reducing overall memory usage and making updates easier.
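Dynamic linking can be observed directly from Python with `ctypes`, which loads a shared library at runtime rather than at compile time. This sketch assumes a Unix-like system where the C math library is available (the fallback name `libm.so.6` is Linux-specific):

```python
import ctypes
import ctypes.util

# Locate and load the shared math library at runtime, not at compile time.
path = ctypes.util.find_library("m") or "libm.so.6"
libm = ctypes.CDLL(path)

# Declare the C signature so ctypes converts arguments/results correctly
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))  # 1.0 -- computed by the dynamically linked library
```

Every process that loads this library shares the same copy in memory, which is exactly the memory saving the section describes.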

Detailed Look into Swapping Techniques in Memory Management

Swapping is a technique where processes are swapped between main memory and secondary storage to manage memory efficiently.

How Swapping Works in Operating Systems

Swapping temporarily moves processes out of main memory to secondary storage (like a hard drive) and brings them back when needed.

 

Process:

 

  • Identify the process to swap out
  • Move the process to secondary storage
  • Load a different process into the now free memory space
  • Swap back the original process when needed

Example:

 

| Process | Memory State  | Action                |
|---------|---------------|-----------------------|
| P1      | In memory     | Swap out to storage   |
| P2      | Not in memory | Swap in to memory     |
| P3      | In memory     | No action             |
| P1      | In storage    | Swap back when needed |

 

Swapping helps in memory management in OS but can slow down system performance due to the time taken to move processes in and out of memory.
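The four steps listed above can be sketched as a small simulation, with a list standing in for main memory and a queue standing in for secondary storage:

```python
from collections import deque

memory = ["P1", "P3"]      # processes currently in main memory
disk = deque(["P2"])       # processes waiting in secondary storage

def swap(victim):
    """Move `victim` out to storage and bring in the next waiting process."""
    memory.remove(victim)
    disk.append(victim)            # step 2: move victim to secondary storage
    incoming = disk.popleft()      # step 3: load a different process
    memory.append(incoming)
    return incoming

swap("P1")                         # P1 -> storage, P2 -> memory
print(memory, list(disk))          # ['P3', 'P2'] ['P1']
```

Each call to `swap` models one round trip to disk, which is where the performance cost discussed below comes from.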

Impact of Swapping on System Performance

While swapping enables the execution of larger processes, it can impact performance. The time taken to swap processes can lead to delays, especially if the system frequently swaps processes in and out.

 

Advantages:

 

  • Enables multitasking
  • Allows execution of larger processes than available memory

Disadvantages:

 

  • Increased latency due to swap time
  • Potential slowdown of system performance

Example:

 

| Time (ms) | Process in Memory | Swapped Process |
|-----------|-------------------|-----------------|
| 0         | P1                | None            |
| 10        | P2                | P1              |
| 20        | P3                | None            |
| 30        | P1                | P2              |

 

Frequent swapping can degrade performance, but it’s necessary for systems with limited memory resources.

 


Comprehensive Guide to Memory Allocation Methods

Memory allocation methods determine how memory is allocated to processes. Three common methods are First Fit, Best Fit, and Worst Fit.

First Fit: Simple and Fast Allocation

First Fit allocates the first available memory block that is large enough for the process.

 

Characteristics:

 

  • Simple to implement
  • Fast allocation

Example:

 

| Memory Block | Size (KB) | Status        |
|--------------|-----------|---------------|
| Block 1      | 50        | Free          |
| Block 2      | 100       | Occupied (P1) |
| Block 3      | 75        | Free          |
| Block 4      | 200       | Free          |

 

If a new process P2 requires 70 KB, it will be placed in Block 3 (the first suitable free block).

Best Fit: Minimising Wasted Space

Best Fit searches for the smallest available block that can accommodate the process, minimising wasted space.

 

Characteristics:

 

  • Reduces wasted memory
  • May be slower than First Fit due to searching

Example:

 

| Memory Block | Size (KB) | Status        |
|--------------|-----------|---------------|
| Block 1      | 50        | Free          |
| Block 2      | 100       | Occupied (P1) |
| Block 3      | 75        | Free          |
| Block 4      | 200       | Free          |

 

If process P2 requires 70 KB, it will be placed in Block 3, as it is the smallest block that fits.

Worst Fit: Using the Largest Available Block

Worst Fit allocates the largest available block to the process, aiming to leave the largest possible free space.

 

Characteristics:

 

  • Can lead to larger leftover holes
  • May result in inefficient memory use

Example:

 

| Memory Block | Size (KB) | Status        |
|--------------|-----------|---------------|
| Block 1      | 50        | Free          |
| Block 2      | 100       | Occupied (P1) |
| Block 3      | 75        | Free          |
| Block 4      | 200       | Free          |

 

If process P2 requires 70 KB, it will be placed in Block 4 (the largest available block).

Comparison Table of Allocation Methods:

| Method    | Search Time | Memory Utilisation | Fragmentation Risk |
|-----------|-------------|--------------------|--------------------|
| First Fit | Fast        | Moderate           | High               |
| Best Fit  | Slow        | High               | Moderate           |
| Worst Fit | Slow        | Low                | High               |

 

Each allocation method has its own advantages and drawbacks. First Fit is fast but can lead to fragmentation. Best Fit is efficient but slower. Worst Fit can result in inefficient memory use.
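All three strategies differ only in which suitable free block they pick, so they fit in one short sketch (illustrative Python; block sizes are taken from the examples above):

```python
def allocate(blocks, size, strategy):
    """Return the index of the free block chosen for a `size` KB request.

    blocks: list of (size_kb, is_free) tuples.
    """
    candidates = [(i, b) for i, (b, free) in enumerate(blocks)
                  if free and b >= size]
    if not candidates:
        return None                                   # no block fits
    if strategy == "first":
        return candidates[0][0]                       # first block that fits
    if strategy == "best":
        return min(candidates, key=lambda c: c[1])[0] # smallest that fits
    if strategy == "worst":
        return max(candidates, key=lambda c: c[1])[0] # largest that fits
    raise ValueError(f"unknown strategy: {strategy}")

# Blocks 1-4 from the examples: 50 free, 100 occupied, 75 free, 200 free
blocks = [(50, True), (100, False), (75, True), (200, True)]
print(allocate(blocks, 70, "first"))  # 2 -> Block 3
print(allocate(blocks, 70, "best"))   # 2 -> Block 3
print(allocate(blocks, 70, "worst"))  # 3 -> Block 4
```

Note that Best Fit and Worst Fit must scan every candidate, which is why the comparison table marks them as slower than First Fit.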

Fragmentation Issues in Memory Management

Fragmentation happens when memory space is allocated inefficiently, resulting in wasted space. It is a typical memory management problem that might reduce system performance. There are two major forms of fragmentation: internal and external.

Internal Fragmentation: Causes and Solutions

Internal fragmentation occurs when an allocated memory block is larger than the memory the process actually requires.

Causes:

 

  • Fixed-size partitioning often leads to internal fragmentation.
  • When a process does not use the entire allocated memory block, the unused portion becomes wasted space.

Solutions:

 

  • Use dynamic partitioning to allocate memory based on the actual size required by the process.
  • Employ memory allocation techniques like Best Fit to minimise wasted space.

Example:

 

| Memory Block | Allocated Size (KB) | Required Size (KB) | Wasted Space (KB) |
|--------------|---------------------|--------------------|-------------------|
| Block 1      | 100                 | 80                 | 20                |
| Block 2      | 150                 | 120                | 30                |

In this example, the unused space within each allocated block leads to internal fragmentation.

External Fragmentation: Causes and Solutions

External fragmentation occurs when free memory is scattered in small blocks across the system, making it difficult to allocate contiguous memory for processes.

 

Causes:

  • Dynamic allocation and deallocation of memory create small gaps of free memory.
  • As processes are loaded and removed, the free memory becomes fragmented.

Solutions:

 

  • Use paging and segmentation to allow non-contiguous memory allocation.
  • Implement compaction, which consolidates free memory into a single block.

Example:

 

| Memory Block | Size (KB) | Status   |
|--------------|-----------|----------|
| Block 1      | 50        | Free     |
| Block 2      | 200       | Occupied |
| Block 3      | 75        | Free     |
| Block 4      | 300       | Occupied |
| Block 5      | 25        | Free     |

 

In this scenario, although there is enough total free memory (150 KB), it is scattered and cannot accommodate a process requiring 100 KB of contiguous space.
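The gap between total free memory and the largest contiguous block is exactly what makes external fragmentation a problem, and it is easy to compute (illustrative Python over the block layout above):

```python
# (size_kb, is_free) for Blocks 1-5 in the table above
blocks = [(50, True), (200, False), (75, True), (300, False), (25, True)]

total_free = sum(size for size, free in blocks if free)
largest_free = max(size for size, free in blocks if free)

print(total_free)    # 150 KB free in total...
print(largest_free)  # ...but only 75 KB contiguous, so a 100 KB request fails
```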

Techniques to Reduce Fragmentation

Reducing fragmentation is essential for efficient memory use. Here are some techniques:

 

  • Compaction: Move processes to consolidate free memory.
  • Paging: Allocate non-contiguous memory blocks to processes.
  • Segmentation: Use logical divisions of memory to reduce fragmentation.

Compaction Example:

 

| Before Compaction              | After Compaction               |
|--------------------------------|--------------------------------|
| [P1][Free][P2][Free][P3][Free] | [P1][P2][P3][Free][Free][Free] |

 

By shifting processes to one end, compaction creates a large contiguous block of free memory, reducing external fragmentation.
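Compaction can be modelled as a single pass over the memory layout that moves occupied regions to the front (a simplified sketch that ignores the cost of actually copying process data):

```python
def compact(layout):
    """Shift occupied regions to the front, merging all free space at the end."""
    occupied = [cell for cell in layout if cell != "Free"]
    free = [cell for cell in layout if cell == "Free"]
    return occupied + free

before = ["P1", "Free", "P2", "Free", "P3", "Free"]
print(compact(before))  # ['P1', 'P2', 'P3', 'Free', 'Free', 'Free']
```

In a real system this copying is expensive, which is why paging, rather than frequent compaction, is the usual answer to external fragmentation.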

Differences Between Logical and Physical Address Spaces

Understanding the difference between logical and physical address spaces is crucial in memory management in OS.

Role of the Memory Management Unit (MMU)

The MMU translates logical addresses generated by a program into physical addresses used by the computer’s hardware.

 

  • Logical Address: The address generated by the CPU during program execution.
  • Physical Address: The actual address in memory where data is stored.

Example:

 

| Logical Address | Physical Address |
|-----------------|------------------|
| 0x000A          | 0x1A2B           |
| 0x000B          | 0x1A2C           |

 

The MMU uses a table to map logical addresses to physical addresses, ensuring that programs can run in their own logical space without interfering with each other.

How Logical Addresses Are Mapped to Physical Addresses

Logical addresses are mapped to physical addresses through a process called address translation. The MMU manages this translation using a page table.

 

Page Table Example:

 

| Page Number | Frame Number |
|-------------|--------------|
| 0           | 5            |
| 1           | 8            |
| 2           | 2            |

 

When the CPU generates a logical address, the MMU translates it to the corresponding physical address using the page table.

Example:

 

| Logical Address | Page Number | Offset | Physical Address |
|-----------------|-------------|--------|------------------|
| 0x000A          | 0           | 0x00A  | 0x500A           |
| 0x000B          | 0           | 0x00B  | 0x500B           |
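With a power-of-two page size, the MMU's split into page number and offset is just bit manipulation. This sketch assumes 4 KB pages (12 offset bits) and the page-to-frame mapping shown above:

```python
PAGE_BITS = 12              # 4 KB pages: the low 12 bits are the offset
PAGE_SIZE = 1 << PAGE_BITS

page_table = {0: 5, 1: 8, 2: 2}  # from the page table above

def translate(logical):
    page = logical >> PAGE_BITS          # high bits select the page
    offset = logical & (PAGE_SIZE - 1)   # low bits pass through unchanged
    return (page_table[page] << PAGE_BITS) | offset

print(hex(translate(0x000A)))  # 0x500a: page 0 -> frame 5, offset 0x00A
```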

 

The MMU ensures efficient memory management and isolation between processes.

Conclusion

In this blog, we aimed to learn about memory management in OS, its fundamentals, how memory is allocated (contiguous and non-contiguous), and how to prevent fragmentation. We also investigated static and dynamic loading and various linking approaches to better understand how programs are loaded into memory.

 

We discussed swapping techniques and memory allocation methods and their effects on system performance. We also covered logical and physical address spaces, as well as the important role the Memory Management Unit (MMU) plays in translating between them. Understanding and applying these ideas can reduce system congestion, improve multitasking, and make full use of the available RAM.

FAQs

What is memory management in an operating system?
Memory management controls and organises the use of a computer's memory so that multiple processes and tasks can run concurrently without clashing.

How does paging avoid external fragmentation?
Paging partitions memory into equal-sized parts, or pages. It avoids external fragmentation because a process can be allocated non-adjacent areas of memory.

What is the difference between static and dynamic loading?
Static loading loads the entire program at link or load time, while dynamic loading loads parts of the program only when needed; dynamically loaded programs therefore use memory more efficiently than statically loaded ones.

What are the benefits of dynamic linking?
Dynamic linking reduces the size of executables by linking libraries at runtime. It allows multiple programs to share the same library and makes updates easier.

How does swapping affect system performance?
Swapping can slow down system performance due to the time taken to move processes in and out of memory. However, it enables the execution of larger processes by managing memory efficiently.
