Learning to Write an Operating System (VIII): Memory Management Mechanisms, Segmentation and Paging

Foreword

Multi-process management and memory management are two closely linked modules, since a running process fetches its instructions and data from memory. The first step of creating a process is therefore to load its program and data into memory, turning the executable program supplied by the user into a program resident in memory, and this is exactly where memory management comes in.

Memory loading

  • Absolute loading.

    If it is known at compile time where the program will reside in memory, the compiler generates object code with absolute addresses, and the loader places the program and data into memory according to those addresses. Since the logical addresses in the load module are identical to the actual physical addresses, none of the addresses in the program or data need to be modified after loading.

  • Relocatable loading.

    In a multiprogramming environment, the start addresses of the object modules usually all begin at 0, and all other addresses are relative to that start address, so a relocatable loading scheme should be used: based on the current state of memory, the load module is placed at a suitable location. Modifying the addresses in the target program's instructions and data during loading is called relocation, and because the address conversion is completed once at load time, this is known as static relocation.
    Its characteristics: when a job is loaded into memory, all the memory space it requires must be allocated at once, and if there is not enough memory the job cannot be loaded; once in memory, the job cannot move during its entire execution, nor can it request additional memory space.

  • Dynamic run-time loading.

    If the program may need to move around in memory while it runs (dynamic relocation), it must be loaded in a dynamic manner.

    With dynamic run-time loading, after the load module is brought into memory, its relative addresses are not immediately converted into absolute addresses; the conversion is postponed until the program actually executes. All addresses loaded into memory therefore remain relative, and this approach requires hardware support in the form of a relocation register.
    Its characteristics: the program can be allocated to discontinuous memory areas; only part of its code needs to be loaded before it can start running, and during execution it can dynamically request the memory it needs; program segments are easy to share; and it can give the user an address space much larger than the physical storage space.

Therefore, the best approach is to perform dynamic loading at run time, but this requires a way to relocate addresses. The relocation information is kept in each process's PCB, which stores the base address of that process's memory; the final address at run time is the logical address plus the base address. The hardware that performs this calculation is the MMU.
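
To make this concrete, here is a minimal sketch of that calculation, assuming a made-up pcb structure and relocate() helper rather than any real kernel API:

#include <stdint.h>

struct pcb {
    uint32_t base;   /* start of the process image in physical memory */
    uint32_t limit;  /* size of the process image, used for protection */
};

/* translate a run-time logical address; returns (uint32_t)-1 on a limit violation */
uint32_t relocate(const struct pcb *p, uint32_t logical)
{
    if (logical >= p->limit)
        return (uint32_t)-1;      /* address falls outside the process image */
    return p->base + logical;     /* physical address = base + logical address */
}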

Segmentation mechanism

From the programmer's point of view, however, a program consists of several parts (segments), each with its own characteristics and usage: a read-only code segment, a code/data segment that will not grow dynamically, and so on. This leads to segmented memory management.

Segments

Suppose a user process consists of a main program, two subroutines, a stack and a data segment; the user process can then be divided into five segments, each addressed starting from 0 and each occupying a contiguous address space (addresses within a segment must be contiguous, but segments need not be contiguous with each other). Its logical address consists of two parts: the segment number and the offset within the segment, written as (S, W).

With a 16-bit segment number and a 16-bit offset, a process can have at most 2^16 = 65,536 segments, and the maximum segment length is 64 KB.
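
A small sketch of that split, assuming the 16-bit/16-bit layout above (the helper names are illustrative):

#include <stdint.h>

/* logical address = (segment number S, in-segment offset W) */
static inline uint16_t seg_number(uint32_t logical) { return logical >> 16; }    /* S: high 16 bits */
static inline uint16_t seg_offset(uint32_t logical) { return logical & 0xffff; } /* W: low 16 bits  */
/* 2^16 = 65,536 possible segments, each at most 2^16 bytes = 64 KB long */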

GDT and LDT

Each process has a segment table that maps its logical address space onto physical memory. Each segment of the process corresponds to one segment table entry, which records the start address of the segment in memory and the length of the segment. Once the segment table has been set up, the process executes by looking up the segment table to find the memory area corresponding to each segment. The segment table thus implements the mapping from logical segments to physical memory areas.

This segment table is exactly the GDT and LDT mentioned earlier in the discussion of protected mode.

A processor has only one GDT. The LDT (local descriptor table) exists one per process and is, in effect, a "child table" of the GDT.

The LDT and the GDT are essentially the same thing, except that the LDT is nested inside the GDT.

A dedicated register, LDTR, records where the local descriptor table starts; what it actually holds is a segment selector into the GDT. In other words, each LDT is described by an LDT descriptor stored in the GDT, and that descriptor has a corresponding selector.
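
A conceptual sketch of that relationship (the structures are deliberately simplified and do not reflect the real 8-byte x86 descriptor layout; the cast assumes descriptor tables are identity-mapped so that a base address can be used as a pointer):

#include <stdint.h>

struct descriptor {       /* one entry of a segment table */
    uint32_t base;        /* where the segment starts in memory */
    uint32_t limit;       /* how long the segment is */
};

struct descriptor gdt[8192];   /* the single, global descriptor table of the processor */

/* LDTR holds a selector: an index into the GDT whose entry describes where the LDT lives */
static inline struct descriptor *ldt_of(uint16_t ldtr_selector)
{
    return (struct descriptor *)(uintptr_t)gdt[ldtr_selector].base;
}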

Memory partitioning and paging

After adopting a segmentation mechanism, memory needs to be partitioned so that each segment can be loaded into a suitable memory partition.

Memory allocation algorithms

When a process is loaded or swapped into main memory and there is more than one free memory block large enough to hold it, the operating system must decide which block to allocate to the process. The following algorithms are commonly used (a first-fit sketch follows the list):

  • First fit: free partitions are linked in order of increasing address. When allocating memory, the list is searched sequentially and the first free partition whose size satisfies the request is chosen.

  • Best fit: free partitions are linked in a chain of increasing capacity, and the first free partition that satisfies the request is chosen.

  • Worst fit: also known as the largest-fit algorithm; free partitions are linked in order of decreasing capacity. The first free partition that satisfies the request is chosen, which is always the largest partition.

  • Next fit: also called the circular first-fit algorithm, which evolved from first fit. The difference is that each allocation starts searching from the point where the previous search stopped.
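
A minimal first-fit sketch over a free list kept in increasing address order, as in the first item above (the structures and names are illustrative):

#include <stdint.h>

struct free_block {
    uint32_t start;              /* start address of the free partition */
    uint32_t size;               /* size of the free partition */
    struct free_block *next;     /* linked in increasing address order */
};

/* returns the start address of the allocated region, or (uint32_t)-1 on failure */
uint32_t first_fit(struct free_block *head, uint32_t need)
{
    for (struct free_block *b = head; b; b = b->next) {
        if (b->size >= need) {   /* the first partition large enough wins */
            uint32_t addr = b->start;
            b->start += need;    /* carve the allocation off the front */
            b->size  -= need;
            return addr;
        }
    }
    return (uint32_t)-1;         /* no free partition is large enough */
}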

Paging

Paging is introduced to solve the memory-utilization problems caused by partitioned memory management.

Paging divides real physical memory into equal, fixed-size blocks; the blocks are relatively small and serve as the basic unit of memory. Each process is likewise divided into blocks of the same size, and while executing, the process requests main memory space block by block. So at this point the mapping to real physical memory addresses can no longer simply reuse the segmentation mechanism.

This introduces the concept of a page table: the system builds a page table for each process, recording the physical block number in memory that corresponds to each page, so address translation becomes a page table lookup.

The system usually has a page table register (PTR), which stores the start address of the page table in memory and the length of the page table.

  • The logical address is split into two parts, the page number and the page offset, and the page number is then used to look up the page table.

  • The start address of the page table plus the product of the page number and the size of a page table entry gives the location of the entry; the physical block number can then be read from that entry (a sketch of this translation follows below).
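
A sketch of those two steps for a single-level page table with 4 KB pages (the function and parameter names are illustrative, and the entry format is simplified):

#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_MASK  0xfffu

/* each entry holds the physical block (frame) base in its upper 20 bits */
uint32_t translate(const uint32_t *page_table, uint32_t table_len, uint32_t logical)
{
    uint32_t page   = logical >> PAGE_SHIFT;   /* step 1: split off the page number */
    uint32_t offset = logical & PAGE_MASK;     /*         and the in-page offset    */

    if (page >= table_len)                     /* compare against the page table length */
        return (uint32_t)-1;
    uint32_t pte = page_table[page];           /* step 2: index the page table */
    return (pte & ~PAGE_MASK) | offset;        /* physical block base + offset */
}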

However, the page table raises two problems:

    The page table occupies a large amount of memory.

    The page table must be accessed frequently during address translation, so those memory accesses have to be very fast.

Multi-level page table

To solve the problem of the page table occupying too much memory, multi-level page tables are introduced.

The page directory contains 2^10 = 1024 four-byte entries, each pointing to a second-level page table; the top 10 bits of the linear address are used as the index into the page directory to find the corresponding entry.

Each second-level page table entry contains the 20-bit physical base address of the page it maps; the middle 10 bits of the linear address are used as the index into the second-level page table to find that entry.

    The process accesses a logical address.

    The outer page table register (CR3) holds the start address of the outer page table (the page directory); the outer page number of the linear address is used to find the start address of the second-level page table.

    The start address of the second-level page table plus the middle page-number bits of the linear address locates the corresponding entry in the second-level page table.

    The physical block number from that page table entry plus the page offset of the linear address gives the physical address. This walk is sketched below.
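
A sketch of that two-level walk in C (the PRESENT check and all names are illustrative; real hardware performs this walk itself, and the pointer cast assumes the page tables are identity-mapped):

#include <stdint.h>

#define PRESENT 0x1u

uint32_t walk_two_level(const uint32_t *page_dir /* base taken from CR3 */, uint32_t linear)
{
    uint32_t dir_idx   = linear >> 22;            /* top 10 bits index the page directory */
    uint32_t table_idx = (linear >> 12) & 0x3ff;  /* middle 10 bits index the page table  */
    uint32_t offset    = linear & 0xfff;          /* low 12 bits are the in-page offset   */

    uint32_t pde = page_dir[dir_idx];
    if (!(pde & PRESENT))
        return (uint32_t)-1;                      /* second-level page table not present */
    const uint32_t *page_table =
        (const uint32_t *)(uintptr_t)(pde & 0xfffff000u);

    uint32_t pte = page_table[table_idx];
    if (!(pte & PRESENT))
        return (uint32_t)-1;                      /* page not present: would raise a page fault */
    return (pte & 0xfffff000u) | offset;          /* physical block number + offset */
}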

Fast table (TLB)

To solve the memory access speed problem, there is the fast table (TLB).

  • After the CPU issues an effective address, the hardware performs address translation: the page number is placed into a register and compared simultaneously with all of the page numbers in the fast table.

  • If a matching page number is found, the page table entry to be accessed is already in the fast table, and the corresponding page frame number can be read from it directly.

  • If no match is found, the page table in main memory must be accessed; after the entry is read, it is also stored in the fast table for possible later accesses. If the fast table is full, an old entry must be chosen for replacement according to some algorithm. A simple model of this lookup is sketched below.
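
A toy model of that lookup (the table size, round-robin replacement and all names are illustrative; a real TLB compares its entries in parallel in hardware):

#include <stdint.h>

#define TLB_SIZE 16

struct tlb_entry {
    uint32_t page;    /* virtual page number   */
    uint32_t frame;   /* physical frame number */
    int      valid;
};

static struct tlb_entry tlb[TLB_SIZE];
static unsigned next_victim;                      /* simple round-robin replacement */

uint32_t tlb_translate(const uint32_t *page_table, uint32_t linear)
{
    uint32_t page   = linear >> 12;
    uint32_t offset = linear & 0xfff;

    for (int i = 0; i < TLB_SIZE; i++)            /* hit: answer without touching memory */
        if (tlb[i].valid && tlb[i].page == page)
            return (tlb[i].frame << 12) | offset;

    uint32_t frame = page_table[page] >> 12;      /* miss: walk the in-memory page table */
    tlb[next_victim] = (struct tlb_entry){ page, frame, 1 };
    next_victim = (next_victim + 1) % TLB_SIZE;   /* evict an old entry when full */
    return (frame << 12) | offset;
}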

Combining segments and pages (virtual memory)

Now that we have both pages and segments: programmers want to use segments, while computer designers want to use pages, so the two need to be combined.

The translation from logical address to physical address therefore proceeds as follows (sketched after the list):

  • First, a logical address is given.

  • The segmentation memory management unit looks up the segment descriptor in the GDT/LDT and first converts the logical address into a linear address.

  • The paging memory management unit then uses the page table lookup to convert the linear address into a physical address.
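
Putting the steps together, reusing walk_two_level() from the earlier multi-level sketch (segment_base stands for the base address read from the segment's descriptor; all names are illustrative):

#include <stdint.h>

uint32_t walk_two_level(const uint32_t *page_dir, uint32_t linear);  /* see the sketch above */

uint32_t logical_to_physical(uint32_t segment_base, uint32_t offset,
                             const uint32_t *page_dir)
{
    uint32_t linear = segment_base + offset;      /* segmentation unit: logical -> linear */
    return walk_two_level(page_dir, linear);      /* paging unit: linear -> physical      */
}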

Virtual memory management

In practice, the currently available physical memory is often much smaller than the allocated virtual memory (4 GB). This requires demand paging and page replacement: when the corresponding page cannot be found during address translation, a page fault handler is started to bring the page in.

For this purpose, four fields are added to the page table entry (an illustrative layout follows the list):

  • Present (status) bit P: indicates whether the page has been loaded into memory, consulted when the program accesses the page.

  • Accessed field A: records how many times the page has been accessed over a period of time, or how long it has been since the page was last accessed, for reference by the replacement algorithm when choosing a page to swap out.

  • Modified bit M: indicates whether the page has been modified since it was loaded into memory.

  • External memory address: indicates the address of the page on external storage, usually the physical block number, for reference when the page is brought in.
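
An illustrative page table entry carrying those four fields (the layout is for exposition only, not the real x86 entry format):

#include <stdint.h>

struct pte {
    uint32_t present  : 1;    /* P: the page is currently in physical memory              */
    uint32_t accessed : 1;    /* A: referenced recently, used by the replacement algorithm */
    uint32_t modified : 1;    /* M: written since it was loaded (dirty)                   */
    uint32_t frame    : 20;   /* physical block number while the page is present          */
    uint32_t swap_addr;       /* external (disk) address used when the page is swapped out */
};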

The procedure for finding the physical address now becomes:

  • First search the fast table (TLB).

  • If the page to be accessed is found, update the access bit in the page table entry (and the modified bit for a write), then use the physical block number given in the entry together with the page offset to form the physical address.

  • If the entry is not found, the page table in memory must be consulted; the present bit P of the page table entry is checked to see whether the page has been loaded into memory. If it has not, a page fault is generated, requesting that the page be brought in from external storage.

Page replacement algorithm

Since pages get swapped out, there are naturally different algorithms for choosing which page to evict.

The optimal replacement algorithm evicts the page that will never be used again, or will not be accessed for the longest time in the future, which guarantees the lowest page fault rate. But since it is currently impossible to predict which of a process's pages in memory will not be accessed for the longest time, this algorithm cannot actually be implemented.

LRU algorithm

LRU evicts the page that has not been accessed for the longest time, on the assumption that a page not visited in the recent past is unlikely to be accessed in the near future. The algorithm gives each page an access field recording the time elapsed since the page was last accessed, and when a page must be evicted it chooses the one whose value is largest.

LRU is generally implemented in one of two ways (a timestamp sketch follows the two descriptions):

    Timestamp

Every memory access updates the accessed page's timestamp; when a page must be evicted, simply choose the one with the smallest value.

However, this requires maintaining a global clock and searching for the minimum, so the implementation cost is high.

    Page Stack

Every memory access moves the accessed page to the top of a stack, so that at eviction time the page at the bottom of the stack is simply swapped out.

But as with the timestamp approach, the cost of implementing this is very high.
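
A minimal timestamp-based sketch of the first implementation (the frame count, names and global counter are illustrative):

#include <stdint.h>

#define NFRAMES 8

static uint64_t now;                    /* global logical clock               */
static uint64_t last_used[NFRAMES];     /* timestamp of each frame's last use */

void lru_touch(int frame)               /* called on every access to a frame */
{
    last_used[frame] = ++now;
}

int lru_victim(void)                    /* frame to evict */
{
    int victim = 0;
    for (int i = 1; i < NFRAMES; i++)
        if (last_used[i] < last_used[victim])
            victim = i;                 /* smallest timestamp = least recently used */
    return victim;
}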

CLOCK algorithm

Associate a use bit with each page frame. When a page is first loaded into memory, or is accessed again, its use bit is set to 1. Whenever a replacement is needed, the algorithm looks for the first frame whose use bit is 0 and replaces it. During the scan, if a frame with use bit 1 is encountered, its use bit is cleared to 0 and the scan continues. If all use bits are 0, the first frame is replaced.
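
A minimal sketch of that scan (the frame count and names are illustrative):

#define NFRAMES 8

static int use_bit[NFRAMES];            /* set to 1 when the frame's page is loaded or accessed */
static int hand;                        /* current position of the clock hand */

int clock_victim(void)
{
    for (;;) {
        if (use_bit[hand] == 0) {       /* first frame with use bit 0 is replaced */
            int victim = hand;
            hand = (hand + 1) % NFRAMES;
            return victim;
        }
        use_bit[hand] = 0;              /* use bit was 1: clear it and keep scanning */
        hand = (hand + 1) % NFRAMES;
    }
}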

CLOCK's performance is close to that of LRU, and by adding more bits the algorithm can be made more effective. Adding a modified bit alongside the use bit yields the improved CLOCK replacement algorithm. Each frame then falls into one of the following four cases:

    Not accessed recently, not modified (u = 0, m = 0).

    Accessed recently, not modified (u = 1, m = 0).

    Not accessed recently, but modified (u = 0, m = 1).

    Accessed recently and modified (u = 1, m = 1).

The algorithm performs the following steps (sketched after the list):

    Starting from the current position of the pointer, scan the frame buffer. During this scan, do not change any use bits; select the first frame encountered with (u = 0, m = 0) for replacement.

    If step 1 fails, scan again looking for a frame with (u = 0, m = 1); the first such frame encountered is selected for replacement. During this scan, set the use bit of every skipped frame to 0.

    If step 2 also fails, the pointer returns to its original position and all use bits in the set are now 0. Repeat step 1, and if necessary step 2; this time a replacement frame will be found.

The improved CLOCK algorithm is better than simple CLOCK in that, at replacement time, it prefers pages that have not been changed. Since a modified page must be written back before it can be replaced, this saves time.
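
A standalone sketch of those three steps (the frame count and names are illustrative):

#define NFRAMES 8

static int use_bit[NFRAMES], mod_bit[NFRAMES];
static int hand;

int improved_clock_victim(void)
{
    for (;;) {
        /* step 1: scan once for (u = 0, m = 0) without changing any use bits */
        for (int i = 0; i < NFRAMES; i++) {
            int f = (hand + i) % NFRAMES;
            if (!use_bit[f] && !mod_bit[f]) {
                hand = (f + 1) % NFRAMES;
                return f;
            }
        }
        /* step 2: scan for (u = 0, m = 1), clearing the use bit of every skipped frame */
        for (int i = 0; i < NFRAMES; i++) {
            int f = (hand + i) % NFRAMES;
            if (!use_bit[f] && mod_bit[f]) {
                hand = (f + 1) % NFRAMES;
                return f;
            }
            use_bit[f] = 0;
        }
        /* step 3: all use bits are now 0, so repeating from step 1 must succeed */
    }
}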

Linux 0.11 Story

Everything related to memory management exists to serve processes, so let us first look at the story of how a process is created in Linux 0.11.

fork

    The process is first created through a system call interrupt: fork() -> sys_fork -> copy_process.

    The main job of copy_process is to create the TSS descriptor, set up files, allocate memory for the child process, and so on.

    copy_mem only sets up the new process's own page directory entries and page table entries; no physical memory pages are actually allocated for the new process. At this point the new process shares all memory pages with its parent.

int copy_process(int nr,long ebp,long edi,long esi,long gs,long none,
        long ebx,long ecx,long edx,
        long fs,long es,long ds,
        long eip,long cs,long eflags,long esp,long ss)
{
    struct task_struct *p;
    int i;
    struct file *f;

    p = (struct task_struct *) get_free_page();
    if (!p)
        return -EAGAIN;
    task[nr] = p;
    *p = *current;  /* NOTE! this doesn't copy the supervisor stack */
    p->state = TASK_UNINTERRUPTIBLE;
    p->pid = last_pid;
    p->father = current->pid;
    p->counter = p->priority;
    p->signal = 0;
    p->alarm = 0;
    p->leader = 0;      /* process leadership doesn't inherit */
    p->utime = p->stime = 0;
    p->cutime = p->cstime = 0;
    p->start_time = jiffies;
    p->tss.back_link = 0;
    p->tss.esp0 = PAGE_SIZE + (long) p;
    p->tss.ss0 = 0x10;
    p->tss.eip = eip;
    p->tss.eflags = eflags;
    p->tss.eax = 0;
    p->tss.ecx = ecx;
    p->tss.edx = edx;
    p->tss.ebx = ebx;
    p->tss.esp = esp;
    p->tss.ebp = ebp;
    p->tss.esi = esi;
    p->tss.edi = edi;
    p->tss.es = es & 0xffff;
    p->tss.cs = cs & 0xffff;
    p->tss.ss = ss & 0xffff;
    p->tss.ds = ds & 0xffff;
    p->tss.fs = fs & 0xffff;
    p->tss.gs = gs & 0xffff;
    p->tss.ldt = _LDT(nr);
    p->tss.trace_bitmap = 0x80000000;
    if (last_task_used_math == current)
        __asm__("clts ; fnsave %0"::"m" (p->tss.i387));
    if (copy_mem(nr,p)) {
        task[nr] = NULL;
        free_page((long) p);
        return -EAGAIN;
    }
    for (i=0; i<NR_OPEN;i++)
        if ((f=p->filp[i]))
            f->f_count++;
    if (current->pwd)
        current->pwd->i_count++;
    if (current->root)
        current->root->i_count++;
    if (current->executable)
        current->executable->i_count++;
    set_tss_desc(gdt+(nr<<1)+FIRST_TSS_ENTRY,&(p->tss));
    set_ldt_desc(gdt+(nr<<1)+FIRST_LDT_ENTRY,&(p->ldt));
    p->state = TASK_RUNNING;    /* do this last, just in case */
    return last_pid;
}

int copy_mem(int nr,struct task_struct * p)
{
    unsigned long old_data_base,new_data_base,data_limit;
    unsigned long old_code_base,new_code_base,code_limit;

    code_limit=get_limit(0x0f);
    data_limit=get_limit(0x17);
    old_code_base = get_base(current->ldt[1]);
    old_data_base = get_base(current->ldt[2]);
    if (old_data_base != old_code_base)
        panic("We don't support separate I&D");
    if (data_limit < code_limit)
        panic("Bad data_limit");
    new_data_base = new_code_base = nr * 0x4000000;    /* each task gets a 64 MB slot in the linear address space */
    p->start_code = new_code_base;
    set_base(p->ldt[1],new_code_base);
    set_base(p->ldt[2],new_data_base);
    if (copy_page_tables(old_data_base,new_data_base,data_limit)) {
        printk("free_page_tables: from copy_mem\n");
        free_page_tables(new_data_base,data_limit);
        return -ENOMEM;
    }
    return 0;
}

page

    copy_page_tables copies the page directory entries and page table entries, so that the physical memory pages originally mapped by the parent's page directory and page tables are shared by both sets of page tables. During the copy, a new page must be requested to hold the new page table, while the original physical memory area is shared. From then on the two processes (the parent and its child) share that memory area, until one of them performs a write; at that point the kernel allocates a new memory page for the writing process (the copy-on-write mechanism).

int copy_page_tables(unsigned long from,unsigned long to,long size)
{
    unsigned long * from_page_table;
    unsigned long * to_page_table;
    unsigned long this_page;
    unsigned long * from_dir, * to_dir;
    unsigned long nr;

    if ((from&0x3fffff) || (to&0x3fffff))
        panic("copy_page_tables called with wrong alignment");
    from_dir = (unsigned long *) ((from>>20) & 0xffc); /* _pg_dir = 0 */
    to_dir = (unsigned long *) ((to>>20) & 0xffc);
    size = ((unsigned) (size+0x3fffff)) >> 22;
    for( ; size-->0 ; from_dir++,to_dir++) {
        if (1 & *to_dir)
            panic("copy_page_tables: already exist");
        if (!(1 & *from_dir))
            continue;
        from_page_table = (unsigned long *) (0xfffff000 & *from_dir);
        if (!(to_page_table = (unsigned long *) get_free_page()))
            return -1;  /* Out of memory, see freeing */
        *to_dir = ((unsigned long) to_page_table) | 7;
        nr = (from==0)?0xA0:1024;   /* task 0: copy only 0xA0 = 160 entries (640 KB); others: a full 1024-entry table */
        for ( ; nr-- > 0 ; from_page_table++,to_page_table++) {
            this_page = *from_page_table;
            if (!(1 & this_page))
                continue;
            this_page &= ~2;        /* clear the R/W bit: the shared page becomes read-only, enabling copy-on-write */
            *to_page_table = this_page;
            if (this_page > LOW_MEM) {
                *from_page_table = this_page;
                this_page -= LOW_MEM;
                this_page >>= 12;
                mem_map[this_page]++;
            }
        }
    }
    invalidate();
    return 0;
}

no_page

If the required page cannot be found, a page must be swapped in; before that, the CPU first raises a page fault exception.

    The page fault interrupt is handled by calling do_no_page.

page_fault:
    xchgl %eax,(%esp)       # get the error code into eax
    pushl %ecx
    pushl %edx
    push %ds
    push %es
    push %fs
    movl $0x10,%edx         # load the kernel data segment selector
    mov %dx,%ds
    mov %dx,%es
    mov %dx,%fs
    movl %cr2,%edx          # get the linear address that caused the page fault
    pushl %edx              # push the linear address and the error code as arguments for the handler
    pushl %eax
    testl $1,%eax           # test the present flag P (bit 0 of the error code); jump if the fault is not a missing page
    jne 1f
    call do_no_page         # call the missing-page handler
    jmp 2f
1:  call do_wp_page         # call the write-protect handler
2:  addl $8,%esp            # discard the two pushed arguments, restore the registers and return from the interrupt
    pop %fs
    pop %es
    pop %ds
    popl %edx
    popl %ecx
    popl %eax
    iret

    This function first tries to share the page with another process that has loaded the same file; or, if the fault is simply due to the process dynamically requesting memory, it only needs to map a single physical page. If the sharing attempt is unsuccessful, it can only read the missing page of data from the corresponding file into the given linear address.

void do_no_page(unsigned long error_code,unsigned long address)
{
    int nr[4];
    unsigned long tmp;
    unsigned long page;
    int block,i;

    address &= 0xfffff000;
    tmp = address - current->start_code;
    if (!current->executable || tmp >= current->end_data) {
        get_empty_page(address);
        return;
    }
    if (share_page(tmp))
        return;
    if (!(page = get_free_page()))
        oom();
/* remember that 1 block is used for header */
    block = 1 + tmp/BLOCK_SIZE;
    for (i=0 ; i<4 ; block++,i++)
        nr[i] = bmap(current->executable,block);
    bread_page(page,current->executable->i_dev,nr);
    i = tmp + 4096 - current->end_data;
    tmp = page + 4096;
    while (i-- > 0) {
        tmp--;
        *(char *)tmp = 0;
    }
    if (put_page(page,address))
        return;
    free_page(page);
    oom();
}

Summary

This is a lengthy post because everything related to memory management is written together; there are three key points:

  • Segmentation

    Memory segmentation is extended with the GDT and LDT to address physical memory.

  • Paging

    Since segmentation led to memory partitioning, the paging mechanism was introduced to improve efficiency; the key is the paging memory management unit working together with page table lookups.

  • Combining segments and pages
    To combine segments and pages, a mechanism is needed to translate logical addresses into physical addresses, which is done in two steps:

    1. The segmentation memory management unit looks up the segment descriptor in the GDT and first converts the logical address into a linear address.

    2. The paging memory management unit then uses the page table lookup to convert the linear address into a physical address.
