From:	 Daniel Phillips <phillips@bonn-fries.net>
To:	 lse-tech@lists.sourceforge.net
Subject: [Lse-tech] [PATCH] Nonlinear kernel virtual to physical mapping for uml
Date:	 Fri, 5 Apr 2002 22:57:54 +0200

Here's the config_nonlinear patch for uml.  I'm not going to go deeply into
the theory here, both because we've already discussed it at some length and
because I'm preparing a more detailed [rfc] for lkml.  However, I wanted to
post this on lse sooner rather than later, so we can look at the concrete
code as opposed to theorizing about it.

The basic ideas are:

  - A new conceptual address space, 'logical', is inserted between 'virtual'
    and 'physical'

  - Both bootmem and buddy-system allocations are carried out in the logical
    space rather than physical

  - The logical address space is contiguous.  Besides simplifying
    allocation, this means all the usual Linux assumptions based on a
    linear memory map continue to hold.

  - The logical <==> physical translations are carried out with the aid of
    a pair of tables, indexed by a few high bits of the physical or logical
    address, respectively.  These tables are small: in the current patch,
    each table is 32 longs in size, and byte tables could be used just as
    well.
  - The unit of physical contiguity is called a 'section'.  The size of a
    section is defined by SECTION_SHIFT.  The logical and physical address
    spaces are divided up into sections of the same size.  Logical sections
    are mapped (via the table) onto physical sections in any order.
    Typically the logical map will have no holes in it (though there is no
    real requirement for this), while the physical map will have holes
    (after all, that is why this patch was developed).  A sketch of the
    translation follows this list.
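
To make that concrete, here is the heart of the translation as it appears
in the patch's include/asm-um/page.h below (slightly abridged): the high
bits of a logical address index psection[], which holds the page number of
each physical section base, and the low bits carry through unchanged.

  #define MAX_SECTIONS (32)
  #define SECTION_SHIFT 20 /* 1 meg sections */
  #define SECTION_MASK (~(-1 << SECTION_SHIFT))
  #define NONLINEAR_BASE PAGE_OFFSET

  /* logical section number -> page number of the physical section base */
  extern unsigned long psection[MAX_SECTIONS];

  static inline unsigned long virtual_to_physical(void *v)
  {
  	unsigned long p = (unsigned long) v - NONLINEAR_BASE; /* logical */
  	return (psection[p >> SECTION_SHIFT] << PAGE_SHIFT) + (p & SECTION_MASK);
  }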

Compared to the incumbent config_discontigmem model, the config_nonlinear
approach offers a number of advantages:

  - Avoids any table lookup in virtual_to_page and VALID_PAGE
  - Does not fragment the logical allocation space
  - Needs no _alloc_pages layer underneath alloc_pages

In addition, the config_nonlinear approach is almost completely generic
across architectures, whereas config_discontigmem shows a disturbing amount
of variation across architectures (for no real reason other than code
drift, I believe) and is also bound together with the config_numa option in
a rather unsatisfying way.

The config_nonlinear model introduces a number of new address translation 
functions.  The functions formerly known as virt_to_phys and phys_to_virt
are renamed:

  static inline unsigned long virtual_to_logical(void *v)
  static inline void *logical_to_virtual(unsigned long p)

The following four functions are the only ones that involve translation
through tables:

  static inline unsigned long virtual_to_physical(void *v)
  static inline void *physical_to_virtual(unsigned long p)
  static inline unsigned long physical_to_pagenum(unsigned long p)
  static inline unsigned long pagenum_to_physical(unsigned long n)

The reason that there are four instead of two is that the table lookup can
be optimized for each different usage.  Incidentally, the tables as
implemented in the current patch translate section numbers to page numbers.
Arguably, translating section numbers to section numbers would be a
superior approach in some cases, and this could itself be a config option.
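
As an example of such an optimization, the page-number variants in the
patch fold the PAGE_SHIFT scaling directly into the table arithmetic,
rather than reconstructing a full byte address and shifting afterwards:

  /* vsection holds the logical page number of each physical section base */
  extern unsigned long vsection[MAX_SECTIONS];

  static inline unsigned long physical_to_pagenum(unsigned long p)
  {
  	return vsection[p >> SECTION_SHIFT] + ((p & SECTION_MASK) >> PAGE_SHIFT);
  }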

If config_nonlinear is 'n', the following functions have simple
definitions:

  #define virtual_to_physical virtual_to_logical
  #define physical_to_virtual logical_to_virtual
  #define pagenum_to_physical(n) ((n) << PAGE_SHIFT)
  #define physical_to_pagenum(p) ((p) >> PAGE_SHIFT)

Besides that, the only difference between config_nonlinear on and off
is the definition of the following:

  - init_nonlinear
  - show_nonlinear (debug output)

Much of the patch is devoted to partitioning the uses of __pa (and the
synonymous virt_to_phys) between virtual_to_logical and
virtual_to_physical, according to what each use actually needs.  (These are
the same when config_nonlinear is 'n'.)  An analogous partition is done for
__va (aka phys_to_virt).
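
For example, two hunks from the patch below illustrate the split: the
bootmem setup in um_arch.c stays entirely in the logical space, while pte
construction in pgtable.h needs a true physical address.

  /* um_arch.c: allocator arithmetic in the contiguous logical space */
  start_pfn = PFN_UP(virtual_to_logical(uml_physmem));

  /* pgtable.h: a pte holds a real physical address, hence the table lookup */
  pte_val(__pte) = pagenum_to_physical(page - mem_map) + pgprot_val(pgprot);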

As well as making the necessary changes, the patch changes some names that
don't really need to be changed, e.g., phys_to_virt becomes
physical_to_virtual, which is not necessarily an improvement.  I'll be
playing with this a little; the final names aren't settled at all.  I'm
open to flames and other opinions.

The attached patch demonstrates the config_nonlinear principle by mapping
each even megabyte of virtual memory to the corresponding odd megabyte of
'physical' (within uml's emulation context) memory; in other words, it
swaps every second megabyte.
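
More precisely, the permutation computed by the init_nonlinear() loop in
the um_arch.c hunk below is (i ^ (i >= 2)), which by my reading leaves the
first two sections identity-mapped and swaps each later pair:

  /* psection[i] = (i ^ (i >= 2)) << sect2pfn, section to section:
   *
   *	logical:   0  1  2  3  4  5  6  7 ...
   *	physical:  0  1  3  2  5  4  7  6 ...
   */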

This patch is not guaranteed to leave you with a fully functional uml
system; in fact, it probably doesn't, now that I look at it a little more
closely.  However, it does boot, and the remaining debugging of the address
translations can be carried out with the help of test programs running
under uml.  So it's not far from being fully functional.
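
One cheap sanity check along those lines (hypothetical, not part of the
patch) is to verify at boot that the two section tables really are inverses
over every valid page:

  /* Hypothetical debugging aid: round-trip each page number through the
   * physical translation and back, complaining about any mismatch. */
  static void check_nonlinear(void)
  {
  	unsigned long n;
  	for (n = 0; n < max_mapnr; n++) {
  		unsigned long p = pagenum_to_physical(n);
  		if (physical_to_pagenum(p) != n)
  			printk(">>> bad mapping: pagenum %lx -> phys %lx\n", n, p);
  	}
  }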

-- 
Daniel

--- ../2.4.17.uml.clean/arch/um/config.in	Mon Mar 25 17:27:25 2002
+++ ./arch/um/config.in	Fri Apr  5 10:30:08 2002
@@ -36,6 +36,7 @@
 bool '2G/2G host address space split' CONFIG_HOST_2G_2G
 bool 'Symmetric multi-processing support' CONFIG_UML_SMP
 define_bool CONFIG_SMP $CONFIG_UML_SMP
+bool 'Support for nonlinear physical memory' CONFIG_NONLINEAR
 string 'Default main console channel initialization' CONFIG_CON_ZERO_CHAN \
 	"fd:0,fd:1"
 string 'Default console channel initialization' CONFIG_CON_CHAN "xterm"
--- ../2.4.17.uml.clean/arch/um/kernel/mem.c	Mon Mar 25 17:27:26 2002
+++ ./arch/um/kernel/mem.c	Fri Apr  5 16:25:12 2002
@@ -122,8 +122,8 @@
 		printk ("Freeing initrd memory: %ldk freed\n", 
 			(end - start) >> 10);
 	for (; start < end; start += PAGE_SIZE) {
-		ClearPageReserved(virt_to_page(start));
-		set_page_count(virt_to_page(start), 1);
+		ClearPageReserved(virtual_to_page(start));
+		set_page_count(virtual_to_page(start), 1);
 		free_page(start);
 		totalram_pages++;
 	}
--- ../2.4.17.uml.clean/arch/um/kernel/process_kern.c	Mon Mar 25 17:27:26 2002
+++ ./arch/um/kernel/process_kern.c	Fri Apr  5 10:30:08 2002
@@ -501,12 +501,8 @@
 #ifdef CONFIG_SMP
 	return("(Unknown)");
 #else
-	unsigned long addr;
-
-	if((addr = um_virt_to_phys(current, 
-				   current->mm->arg_start)) == 0xffffffff) 
-		return("(Unknown)");
-	else return((char *) addr);
+	unsigned long addr = um_virt_to_phys(current, current->mm->arg_start);
+	return addr == 0xffffffff ? "(Unknown)" : physical_to_virtual(addr);
 #endif
 }
 
--- ../2.4.17.uml.clean/arch/um/kernel/um_arch.c	Mon Mar 25 17:27:27 2002
+++ ./arch/um/kernel/um_arch.c	Fri Apr  5 21:06:05 2002
@@ -270,6 +270,46 @@
 extern int jail;
 void *brk_start;
 
+#ifdef CONFIG_NONLINEAR
+unsigned long psection[MAX_SECTIONS];
+unsigned long vsection[MAX_SECTIONS];
+
+static int init_nonlinear(void)
+{
+	unsigned i, sect2pfn = SECTION_SHIFT - PAGE_SHIFT;
+	unsigned base_section = (PAGE_OFFSET - NONLINEAR_BASE) >> SECTION_SHIFT;
+
+	printk(">>> sections = %x\n", MAX_SECTIONS - base_section);
+	memset(psection, -1, sizeof(psection));
+	memset(vsection, -1, sizeof(vsection));
+	for (i = 0; i < MAX_SECTIONS - base_section; i++)
+		psection[base_section + i] = (i ^ (i >= 2)) << sect2pfn;
+
+	for (i = 0; i < MAX_SECTIONS; i++)
+		if (~psection[i] && psection[i] >> sect2pfn < MAX_SECTIONS)
+			vsection[psection[i] >> sect2pfn] = i << sect2pfn;
+
+	return 0;
+}
+
+static void show_nonlinear(void)
+{
+	int i;
+	printk(">>> Logical section to Physical num: ");
+	for (i = 0; i < MAX_SECTIONS; i++) printk("%lx ", psection[i]); printk("\n");
+	printk(">>> Physical section to Logical num: ");
+	for (i = 0; i < MAX_SECTIONS; i++) printk("%lx ", vsection[i]); printk("\n");
+}
+
+#else
+#  ifndef nil
+#    define nil do { } while (0)
+#  endif
+
+#define init_nonlinear() nil
+#define show_nonlinear() nil
+#endif
+
 int linux_main(int argc, char **argv)
 {
 	unsigned long start_pfn, end_pfn, bootmap_size;
@@ -294,6 +334,9 @@
 	/* Start physical memory at least 4M after the current brk */
 	uml_physmem = ROUND_4M(brk_start) + (1 << 22);
 
+	init_nonlinear();
+	show_nonlinear();
+
 	setup_machinename(system_utsname.machine);
 
 	argv1_begin = argv[1];
@@ -322,10 +365,10 @@
 	setup_memory();
 	high_physmem = uml_physmem + physmem_size;
 
-	start_pfn = PFN_UP(__pa(uml_physmem));
-	end_pfn = PFN_DOWN(__pa(high_physmem));
+	start_pfn = PFN_UP(virtual_to_logical(uml_physmem));
+	end_pfn = PFN_DOWN(virtual_to_logical(high_physmem));
 	bootmap_size = init_bootmem(start_pfn, end_pfn - start_pfn);
-	free_bootmem(__pa(uml_physmem) + bootmap_size, 
+	free_bootmem(virtual_to_logical(uml_physmem) + bootmap_size, 
 		     high_physmem - uml_physmem - bootmap_size);
   	uml_postsetup();
 
--- ../2.4.17.uml.clean/drivers/char/mem.c	Fri Dec 21 18:41:54 2001
+++ ./drivers/char/mem.c	Fri Apr  5 10:30:08 2002
@@ -79,7 +79,7 @@
 	unsigned long end_mem;
 	ssize_t read;
 	
-	end_mem = __pa(high_memory);
+	end_mem = virtual_to_logical(high_memory);
 	if (p >= end_mem)
 		return 0;
 	if (count > end_mem - p)
@@ -101,7 +101,7 @@
 		}
 	}
 #endif
-	if (copy_to_user(buf, __va(p), count))
+	if (copy_to_user(buf, logical_to_virtual(p), count))
 		return -EFAULT;
 	read += count;
 	*ppos += read;
@@ -114,12 +114,12 @@
 	unsigned long p = *ppos;
 	unsigned long end_mem;
 
-	end_mem = __pa(high_memory);
+	end_mem = virtual_to_logical(high_memory);
 	if (p >= end_mem)
 		return 0;
 	if (count > end_mem - p)
 		count = end_mem - p;
-	return do_write_mem(file, __va(p), p, buf, count, ppos);
+	return do_write_mem(file, logical_to_virtual(p), p, buf, count, ppos);
 }
 
 #ifndef pgprot_noncached
@@ -178,7 +178,7 @@
 		  test_bit(X86_FEATURE_CENTAUR_MCR, &boot_cpu_data.x86_capability) )
 	  && addr >= __pa(high_memory);
 #else
-	return addr >= __pa(high_memory);
+	return addr >= virtual_to_physical(high_memory); // bogosity alert!!
 #endif
 }
 
@@ -200,7 +200,7 @@
 	/*
 	 * Don't dump addresses that are not real memory to a core file.
 	 */
-	if (offset >= __pa(high_memory) || (file->f_flags & O_SYNC))
+	if (offset >= virtual_to_logical(high_memory) || (file->f_flags & O_SYNC))
 		vma->vm_flags |= VM_IO;
 
 	if (remap_page_range(vma->vm_start, offset, vma->vm_end-vma->vm_start,
--- ../2.4.17.uml.clean/fs/proc/kcore.c	Fri Sep 14 01:04:43 2001
+++ ./fs/proc/kcore.c	Fri Apr  5 10:30:08 2002
@@ -239,7 +239,7 @@
 	phdr->p_flags	= PF_R|PF_W|PF_X;
 	phdr->p_offset	= dataoff;
 	phdr->p_vaddr	= PAGE_OFFSET;
-	phdr->p_paddr	= __pa(PAGE_OFFSET);
+	phdr->p_paddr	= virtual_to_physical(PAGE_OFFSET);
 	phdr->p_filesz	= phdr->p_memsz = ((unsigned long)high_memory - PAGE_OFFSET);
 	phdr->p_align	= PAGE_SIZE;
 
@@ -256,7 +256,7 @@
 		phdr->p_flags	= PF_R|PF_W|PF_X;
 		phdr->p_offset	= (size_t)m->addr - PAGE_OFFSET + dataoff;
 		phdr->p_vaddr	= (size_t)m->addr;
-		phdr->p_paddr	= __pa(m->addr);
+		phdr->p_paddr	= virtual_to_physical(m->addr);
 		phdr->p_filesz	= phdr->p_memsz	= m->size;
 		phdr->p_align	= PAGE_SIZE;
 	}
@@ -382,7 +382,7 @@
 	}
 #endif
 	/* fill the remainder of the buffer from kernel VM space */
-	start = (unsigned long)__va(*fpos - elf_buflen);
+	start = (unsigned long) logical_to_virtual(*fpos - elf_buflen);
 	if ((tsz = (PAGE_SIZE - (start & ~PAGE_MASK))) > buflen)
 		tsz = buflen;
 		
--- ../2.4.17.uml.clean/include/asm-i386/io.h	Wed Mar 27 23:31:33 2002
+++ ./include/asm-i386/io.h	Fri Apr  5 20:49:51 2002
@@ -60,20 +60,6 @@
 #endif
 
 /*
- * Change virtual addresses to physical addresses and vv.
- * These are pretty trivial
- */
-static inline unsigned long virt_to_phys(void *address)
-{
-	return __pa(address);
-}
-
-static inline void * phys_to_virt(unsigned long address)
-{
-	return __va(address);
-}
-
-/*
  * Change "struct page" to physical address.
  */
 #define page_to_phys(page)	((page - mem_map) << PAGE_SHIFT)
--- ../2.4.17.uml.clean/include/asm-um/page.h	Wed Mar 27 23:31:33 2002
+++ ./include/asm-um/page.h	Fri Apr  5 20:49:42 2002
@@ -29,30 +29,88 @@
 
 #endif /* __ASSEMBLY__ */
 
+#define __va_space (8*1024*1024)
+
 extern unsigned long uml_physmem;
+extern unsigned long max_mapnr;
 
-#define __va_space (8*1024*1024)
+static inline int VALID_PAGE(struct page *page)
+{
+	return page - mem_map < max_mapnr;
+}
 
-static inline unsigned long __pa(void *virt)
+/* Logical/Virtual */
+
+static inline void *logical_to_virtual(unsigned long p)
 {
-	return (unsigned long) (virt) - PAGE_OFFSET;
+	return (void *) ((unsigned long) p + PAGE_OFFSET);
 }
 
-static inline void *__va(unsigned long phys)
+static inline unsigned long virtual_to_logical(void *v)
 {
-	return (void *) ((unsigned long) (phys) + PAGE_OFFSET);
+//	assert(it's a kernel virtual);
+	return (unsigned long) v - PAGE_OFFSET;
 }
 
-static inline struct page *virt_to_page(void *kaddr)
+#ifdef CONFIG_NONLINEAR
+#define MAX_SECTIONS (32)
+#define SECTION_SHIFT 20 /* 1 meg sections */
+#define SECTION_MASK (~(-1 << SECTION_SHIFT))
+#define NONLINEAR_BASE PAGE_OFFSET
+
+extern unsigned long psection[MAX_SECTIONS];
+extern unsigned long vsection[MAX_SECTIONS];
+
+#include <stdarg.h>
+#include <linux/linkage.h>
+
+asmlinkage int printk(const char *fmt, ...)
+	__attribute__ ((format (printf, 1, 2)));
+
+static inline unsigned long virtual_to_physical(void *v)
 {
-	return mem_map + (__pa(kaddr) >> PAGE_SHIFT);
+	unsigned long p = (unsigned long) v - NONLINEAR_BASE;
+	return (psection[p >> SECTION_SHIFT] << PAGE_SHIFT) + (p & SECTION_MASK);
 }
 
-extern unsigned long max_mapnr;
+static inline void *physical_to_virtual(unsigned long p)
+{
+	return (void *) ((vsection[p >> SECTION_SHIFT] << PAGE_SHIFT) + (p  & SECTION_MASK))
+		+ NONLINEAR_BASE;
+}
 
-static inline int VALID_PAGE(struct page *page)
+static inline unsigned long pagenum_to_physical(unsigned long n)
 {
-	return page - mem_map < max_mapnr;
+	return (psection[n >> (SECTION_SHIFT - PAGE_SHIFT)] +
+	    (n & (SECTION_MASK >> PAGE_SHIFT)))
+	    << PAGE_SHIFT;
+}
+
+static inline unsigned long physical_to_pagenum(unsigned long p)
+{
+	return vsection[p >> SECTION_SHIFT] + ((p & SECTION_MASK) >> PAGE_SHIFT);
+}
+
+#else
+#define virtual_to_physical virtual_to_logical
+#define physical_to_virtual logical_to_virtual
+#define pagenum_to_physical(n) ((n) << PAGE_SHIFT)
+#define physical_to_pagenum(p) ((p) >> PAGE_SHIFT)
+#endif /* CONFIG_NONLINEAR */
+
+static inline struct page *physical_to_page(unsigned long p)
+{
+	return mem_map + physical_to_pagenum(p);
+}
+
+static inline unsigned long virtual_to_pagenum(void *v)
+{
+	return virtual_to_logical(v) >> PAGE_SHIFT;
+}
+
+static inline struct page *virtual_to_page(void *v)
+{
+	return mem_map + virtual_to_pagenum(v);
 }
 
 #endif
--- ../2.4.17.uml.clean/include/asm-um/pgtable.h	Mon Mar 25 17:27:28 2002
+++ ./include/asm-um/pgtable.h	Fri Apr  5 20:49:51 2002
@@ -150,7 +150,7 @@
 
 #define BAD_PAGETABLE __bad_pagetable()
 #define BAD_PAGE __bad_page()
-#define ZERO_PAGE(vaddr) (virt_to_page(empty_zero_page))
+#define ZERO_PAGE(vaddr) (virtual_to_page(empty_zero_page))
 
 /* number of bits that fit into a memory pointer */
 #define BITS_PER_PTR			(8*sizeof(unsigned long))
@@ -197,9 +197,8 @@
 #define page_address(page) ({ if (!(page)->virtual) BUG(); (page)->virtual; })
 #define __page_address(page) ({ PAGE_OFFSET + (((page) - mem_map) << PAGE_SHIFT); })
 #define pages_to_mb(x) ((x) >> (20-PAGE_SHIFT))
-#define pte_page(x) \
-    (mem_map+((unsigned long)((__pa(pte_val(x)) >> PAGE_SHIFT))))
-#define pte_address(x) ((void *) ((unsigned long) pte_val(x) & PAGE_MASK))
+#define pte_page(x) (mem_map + physical_to_pagenum(pte_val(x)))
+#define pte_address(x) (physical_to_virtual(pte_val(x) & PAGE_MASK))
 
 static inline pte_t pte_mknewprot(pte_t pte)
 {
@@ -316,15 +315,14 @@
 #define mk_pte(page, pgprot) \
 ({					\
 	pte_t __pte;                    \
-                                        \
-	pte_val(__pte) = ((unsigned long) __va((page-mem_map)*(unsigned long)PAGE_SIZE + pgprot_val(pgprot)));         \
+	pte_val(__pte) = pagenum_to_physical(page - mem_map) + pgprot_val(pgprot); \
 	if(pte_present(__pte)) pte_mknewprot(pte_mknewpage(__pte)); \
 	__pte;                          \
 })
 
 /* This takes a physical page address that is used by the remapping functions */
 #define mk_pte_phys(physpage, pgprot) \
-({ pte_t __pte; pte_val(__pte) = physpage + pgprot_val(pgprot); __pte; })
+({ pte_t __pte; pte_val(__pte) = physpage + pgprot_val(pgprot); BUG(); __pte; })
 
 static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 {
--- ../2.4.17.uml.clean/include/asm-um/processor-generic.h	Mon Mar 25 17:27:28 2002
+++ ./include/asm-um/processor-generic.h	Fri Apr  5 20:49:51 2002
@@ -115,7 +115,7 @@
 extern struct task_struct *alloc_task_struct(void);
 extern void free_task_struct(struct task_struct *task);
 
-#define get_task_struct(tsk)      atomic_inc(&virt_to_page(tsk)->count)
+#define get_task_struct(tsk)      atomic_inc(&virtual_to_page(tsk)->count)
 
 extern void release_thread(struct task_struct *);
 extern int kernel_thread(int (*fn)(void *), void * arg, unsigned long flags);
--- ../2.4.17.uml.clean/include/linux/bootmem.h	Thu Nov 22 20:47:23 2001
+++ ./include/linux/bootmem.h	Fri Apr  5 20:49:51 2002
@@ -35,11 +35,11 @@
 extern void __init free_bootmem (unsigned long addr, unsigned long size);
 extern void * __init __alloc_bootmem (unsigned long size, unsigned long align, unsigned long goal);
 #define alloc_bootmem(x) \
-	__alloc_bootmem((x), SMP_CACHE_BYTES, __pa(MAX_DMA_ADDRESS))
+	__alloc_bootmem((x), SMP_CACHE_BYTES, virtual_to_logical((void *) MAX_DMA_ADDRESS))
 #define alloc_bootmem_low(x) \
 	__alloc_bootmem((x), SMP_CACHE_BYTES, 0)
 #define alloc_bootmem_pages(x) \
-	__alloc_bootmem((x), PAGE_SIZE, __pa(MAX_DMA_ADDRESS))
+	__alloc_bootmem((x), PAGE_SIZE, virtual_to_logical((void *) MAX_DMA_ADDRESS))
 #define alloc_bootmem_low_pages(x) \
 	__alloc_bootmem((x), PAGE_SIZE, 0)
 extern unsigned long __init free_all_bootmem (void);
@@ -50,9 +50,9 @@
 extern unsigned long __init free_all_bootmem_node (pg_data_t *pgdat);
 extern void * __init __alloc_bootmem_node (pg_data_t *pgdat, unsigned long size, unsigned long align, unsigned long goal);
 #define alloc_bootmem_node(pgdat, x) \
-	__alloc_bootmem_node((pgdat), (x), SMP_CACHE_BYTES, __pa(MAX_DMA_ADDRESS))
+	__alloc_bootmem_node((pgdat), (x), SMP_CACHE_BYTES, virtual_to_logical((void *) MAX_DMA_ADDRESS))
 #define alloc_bootmem_pages_node(pgdat, x) \
-	__alloc_bootmem_node((pgdat), (x), PAGE_SIZE, __pa(MAX_DMA_ADDRESS))
+	__alloc_bootmem_node((pgdat), (x), PAGE_SIZE, virtual_to_logical((void *) MAX_DMA_ADDRESS))
 #define alloc_bootmem_low_pages_node(pgdat, x) \
 	__alloc_bootmem_node((pgdat), (x), PAGE_SIZE, 0)
 
--- ../2.4.17.uml.clean/mm/bootmem.c	Fri Dec 21 18:42:04 2001
+++ ./mm/bootmem.c	Fri Apr  5 18:42:10 2002
@@ -51,7 +51,7 @@
 	pgdat_list = pgdat;
 
 	mapsize = (mapsize + (sizeof(long) - 1UL)) & ~(sizeof(long) - 1UL);
-	bdata->node_bootmem_map = phys_to_virt(mapstart << PAGE_SHIFT);
+	bdata->node_bootmem_map = logical_to_virtual(mapstart << PAGE_SHIFT);
 	bdata->node_boot_start = (start << PAGE_SHIFT);
 	bdata->node_low_pfn = end;
 
@@ -214,12 +214,12 @@
 			areasize = 0;
 			// last_pos unchanged
 			bdata->last_offset = offset+size;
-			ret = phys_to_virt(bdata->last_pos*PAGE_SIZE + offset +
+			ret = logical_to_virtual(bdata->last_pos*PAGE_SIZE + offset +
 						bdata->node_boot_start);
 		} else {
 			remaining_size = size - remaining_size;
 			areasize = (remaining_size+PAGE_SIZE-1)/PAGE_SIZE;
-			ret = phys_to_virt(bdata->last_pos*PAGE_SIZE + offset +
+			ret = logical_to_virtual(bdata->last_pos*PAGE_SIZE + offset +
 						bdata->node_boot_start);
 			bdata->last_pos = start+areasize-1;
 			bdata->last_offset = remaining_size;
@@ -228,7 +228,7 @@
 	} else {
 		bdata->last_pos = start + areasize - 1;
 		bdata->last_offset = size & ~PAGE_MASK;
-		ret = phys_to_virt(start * PAGE_SIZE + bdata->node_boot_start);
+		ret = logical_to_virtual(start * PAGE_SIZE + bdata->node_boot_start);
 	}
 	/*
 	 * Reserve the area now:
@@ -265,7 +265,7 @@
 	 * Now free the allocator bitmap itself, it's not
 	 * needed anymore:
 	 */
-	page = virt_to_page(bdata->node_bootmem_map);
+	page = virtual_to_page(bdata->node_bootmem_map);
 	count = 0;
 	for (i = 0; i < ((bdata->node_low_pfn-(bdata->node_boot_start >> PAGE_SHIFT))/8 + PAGE_SIZE-1)/PAGE_SIZE; i++,page++) {
 		count++;
--- ../2.4.17.uml.clean/mm/memory.c	Fri Dec 21 18:42:05 2001
+++ ./mm/memory.c	Fri Apr  5 17:04:53 2002
@@ -796,7 +796,7 @@
 	unsigned long phys_addr, pgprot_t prot)
 {
 	unsigned long end;
-
+BUG();
 	address &= ~PMD_MASK;
 	end = address + size;
 	if (end > PMD_SIZE)
@@ -806,7 +806,7 @@
 		pte_t oldpage;
 		oldpage = ptep_get_and_clear(pte);
 
-		page = virt_to_page(__va(phys_addr));
+		page = physical_to_page(phys_addr);
 		if ((!VALID_PAGE(page)) || PageReserved(page))
  			set_pte(pte, mk_pte_phys(phys_addr, prot));
 		forget_pte(oldpage);
--- ../2.4.17.uml.clean/mm/page_alloc.c	Tue Nov 20 01:35:40 2001
+++ ./mm/page_alloc.c	Fri Apr  5 18:28:30 2002
@@ -444,7 +444,7 @@
 void free_pages(unsigned long addr, unsigned int order)
 {
 	if (addr != 0)
-		__free_pages(virt_to_page(addr), order);
+		__free_pages(virtual_to_page(addr), order);
 }
 
 /*
@@ -735,7 +735,7 @@
 			struct page *page = mem_map + offset + i;
 			page->zone = zone;
 			if (j != ZONE_HIGHMEM)
-				page->virtual = __va(zone_start_paddr);
+				page->virtual = logical_to_virtual(zone_start_paddr);
 			zone_start_paddr += PAGE_SIZE;
 		}
 
--- ../2.4.17.uml.clean/mm/page_io.c	Tue Nov 20 00:19:42 2001
+++ ./mm/page_io.c	Fri Apr  5 18:27:58 2002
@@ -110,7 +110,7 @@
  */
 void rw_swap_page_nolock(int rw, swp_entry_t entry, char *buf)
 {
-	struct page *page = virt_to_page(buf);
+	struct page *page = virtual_to_page(buf);
 	
 	if (!PageLocked(page))
 		PAGE_BUG(page);
--- ../2.4.17.uml.clean/mm/slab.c	Fri Dec 21 18:42:05 2001
+++ ./mm/slab.c	Fri Apr  5 18:08:32 2002
@@ -506,7 +506,7 @@
 static inline void kmem_freepages (kmem_cache_t *cachep, void *addr)
 {
 	unsigned long i = (1<<cachep->gfporder);
-	struct page *page = virt_to_page(addr);
+	struct page *page = virtual_to_page(addr);
 
 	/* free_pages() does not clear the type bit - we do that.
 	 * The pages have been unlinked from their cache-slab,
@@ -1151,7 +1151,7 @@
 
 	/* Nasty!!!!!! I hope this is OK. */
 	i = 1 << cachep->gfporder;
-	page = virt_to_page(objp);
+	page = virtual_to_page(objp);
 	do {
 		SET_PAGE_CACHE(page, cachep);
 		SET_PAGE_SLAB(page, slabp);
@@ -1395,14 +1395,14 @@
 {
 	slab_t* slabp;
 
-	CHECK_PAGE(virt_to_page(objp));
+	CHECK_PAGE(virtual_to_page(objp));
 	/* reduces memory footprint
 	 *
 	if (OPTIMIZE(cachep))
 		slabp = (void*)((unsigned long)objp&(~(PAGE_SIZE-1)));
 	 else
 	 */
-	slabp = GET_PAGE_SLAB(virt_to_page(objp));
+	slabp = GET_PAGE_SLAB(virtual_to_page(objp));
 
 #if DEBUG
 	if (cachep->flags & SLAB_DEBUG_INITIAL)
@@ -1475,7 +1475,7 @@
 #ifdef CONFIG_SMP
 	cpucache_t *cc = cc_data(cachep);
 
-	CHECK_PAGE(virt_to_page(objp));
+	CHECK_PAGE(virtual_to_page(objp));
 	if (cc) {
 		int batchcount;
 		if (cc->avail < cc->limit) {
@@ -1557,8 +1557,8 @@
 {
 	unsigned long flags;
 #if DEBUG
-	CHECK_PAGE(virt_to_page(objp));
-	if (cachep != GET_PAGE_CACHE(virt_to_page(objp)))
+	CHECK_PAGE(virtual_to_page(objp));
+	if (cachep != GET_PAGE_CACHE(virtual_to_page(objp)))
 		BUG();
 #endif
 
@@ -1582,8 +1582,8 @@
 	if (!objp)
 		return;
 	local_irq_save(flags);
-	CHECK_PAGE(virt_to_page(objp));
-	c = GET_PAGE_CACHE(virt_to_page(objp));
+	CHECK_PAGE(virtual_to_page(objp));
+	c = GET_PAGE_CACHE(virtual_to_page(objp));
 	__kmem_cache_free(c, (void*)objp);
 	local_irq_restore(flags);
 }
--- ../2.4.17.uml.clean/mm/swapfile.c	Fri Dec 21 18:42:05 2001
+++ ./mm/swapfile.c	Fri Apr  5 18:28:55 2002
@@ -962,7 +962,7 @@
 		goto bad_swap;
 	}
 
-	lock_page(virt_to_page(swap_header));
+	lock_page(virtual_to_page(swap_header));
 	rw_swap_page_nolock(READ, SWP_ENTRY(type,0), (char *) swap_header);
 
 	if (!memcmp("SWAP-SPACE",swap_header->magic.magic,10))

_______________________________________________
Lse-tech mailing list
Lse-tech@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lse-tech