
TAMU Operating System 2 Memory Management

2018-05-05

Dynamic memory allocation

When does new memory get allocated in the kernel?

  • New process gets created (fork())
  • New program gets loaded into memory (execve())
  • Process stack grows
  • Process expands heap (malloc())
  • Process “creates” (attaches to) shared memory segment (shmat())
  • Map a file to memory (mmap())

Fragmentation

  • External fragmentation
    • Enough total free memory exists, but it is scattered in small pieces that cannot be combined to satisfy a large request
  • Internal fragmentation
    • Memory is allocated in fixed-size blocks
    • Space inside a block that the request does not use is wasted

Allocation

Use free list to maintain free spaces.

Buddy system allocation:

Maintain multiple free lists, one for each power-of-2 size.

Allocation:

  • Round the request size up to the next power of 2.
  • Look up segment in free lists.
  • If exists, allocate.
  • If none exists, split the next larger segment in half, put one half (the “buddy”) on the free list, and return the other half.

De-Allocation:

  • Return segment to free list.
  • Check if buddy is free. If so, coalesce.

Address spaces

Every user program sees its memory space as starting from 0x0; these are virtual addresses, which the hardware translates to physical addresses.

Paging

Naive relocation causes (external) fragmentation. Paging solves this by partitioning memory into equal-size portions.

Paging

  • Last frame may not be completely full.
    • Average internal fragmentation is typically half a frame (only the last frame is partially filled).
    • Large frames cause more fragmentation.
    • Small frames cause more overhead (page table size, disk I/O)

Page table

Issues:

  • Page table must store one entry per page.
  • Naive page table can become very large.
    • 32-bit address space, 4kB pages, 4B entries
    • 4MB for each process

Hierarchical (Multilevel) paging

Page the page table

Multi-level paging

  • Too many levels will increase the search time
    • Needs to read memory more times

Inverted page table

Inverted paging

Pros:

  • Scales with physical memory
  • One table for whole system

Cons:

  • Long search times for large physical memories

Hashed page table

Hashed paging

Pros:

  • Scales with physical memory
  • One table for whole system

Cons:

    • Hash collisions must be resolved, which lengthens lookups

Translation Lookaside Buffers (TLB)

The CPU caches recent page-number-to-frame-number translations in the TLB, allowing fast lookups.

TLB

  1. Split virtual address, use page number.
  2. Look in the TLB for a translation entry for that page.
    • If YES, use frame number.
    • If NO, system must locate page entry in main-memory-resident page table, load it into TLB, and start again.

Note:

  • One TLB per address space (process)
    • Need to flush TLB for every process switch.
  • One TLB for entire system
    • Add address space ID in TLB entries

Software-managed TLBs, MIPS style

TLB MIPS

  • Principle: Make do with as little hardware as possible.
  • Apart from a register with the ASID, the MMU is just the TLB.
    • The rest is all implemented in software!
  • When TLB cannot translate an address, a special exception (TLB refill) is raised.
    • Software then takes over.

TLB Refill Exception:

  1. Figure out if this virtual address is valid.
    • If not, trap to handling of address errors.
  2. If address is valid, construct TLB entry.
  3. If TLB already full, select an entry to discard.
  4. Write the new entry into the TLB.

Segmentation

Users think of memory in terms of segments (data, code, stack, objects, …). Segmentation allows for efficient allocation and management of semantically related data segments.

Segmentation

Pros:

  • Data in a segment is typically semantically related
  • Protection can be associated with segments
    • read/write protection
    • range checks for arrays
  • Data and code sharing

Cons:

  • External fragmentation

Segmented paging

Segmented paging eliminates external fragmentation by paging each segment.

Segmented paging

Paging on x86

Paging x86

  • CR0 controls whether to use paging
  • CR3 holds the page directory table base address (the last entry in this table points to the table itself)
  • The page fault is Exception 14
    • err_code stores error code
    • offending address stored in CR2
  • use bit set whenever page is referenced
    • to identify recently-used pages
  • dirty bit set whenever page is written
    • to minimize I/O write operations
    • don’t need to write while not dirty
    • clear dirty bit when paging in

Page table entry

Page out

  • We may swap out entire processes from memory to disk
  • Can partially swap out memory of processes
    • Processes never use all their memory at once

Page fault

Process of a page fault:

Page fault

If all frames are used:

  1. Select a victim frame
  2. Update page table entry of victim to be invalid
  3. Page victim out

Policies

Locality of Reference: A program that references a location n at some point in time is likely to reference the same location n and locations in the immediate vicinity of n in the near future.

If the hardware does not maintain use/dirty bits, we can shadow the valid bit with a software valid bit and have the kernel turn off the hardware valid bit; the resulting page faults let the kernel simulate the other bits in software.

Paging replacement policies:

  • FIFO
    • Replace the page that has been in memory for the longest period of time.
  • Optimal
    • Replace that page which will not be used for the longest period of time (in the future)
  • LRU
    • replace the page that has not been accessed for longest period of time (in the past)
  • 2nd chance: Associate a use_bit with every frame in memory.
    • Upon each reference, set use_bit to 1.
    • Keep a pointer to next “victim candidate” page.
    • To select a victim: if the current frame’s use bit is 0, select the frame and advance the pointer. Otherwise clear the use bit and advance the pointer.
  • Enhanced 2nd chance
    • Consider read/write activity of page: dirty bit
    • Algorithm same as 2nd Chance Algorithm, except that we scan for pages with both use bit and dirty bit equal to 0.
    • A dirty page is not selected until two full scans of the list later.

Each process requires a certain minimum set of pages to be resident in memory to run efficiently. Thrashing will happen when the basic needs of processes in a system cannot be satisfied.

Working set model

The memory management strategy follows two rules:

  • Rule 1: At each reference, the current working set is determined and only those pages belonging to the working set are retained in memory.
  • Rule 2: A program may run only if its entire current working set is in memory.

Problems:

  • Difficulty in keeping track of working set.
  • Estimation of appropriate window size.
    • too small -> page out useful pages
    • too big -> waste memory

Page buffering

  • Victim frames are not overwritten directly, but are removed from page table of process, and put into:
    • free frame list (clean frames)
    • modified frame list (modified frames)
  • Victims are picked from the free frame list in FIFO order.
  • If referenced page is in free or modified list, simply reclaim it.
  • Periodically (or when running out of free frames) write modified frame list to disk and add now clean frames to free list.

Recursive page table lookup

The idea is to make the last entry in the page directory table point to the table itself.

Access PTE:

PTE access

Access PDE:

PDE access

Reference

This is my class notes while taking CSCE 611 at TAMU. Credit to the instructor Dr. Bettati.


Melon blog is created by melonskin. This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
© 2016-2019. All rights reserved by melonskin. Powered by Jekyll.