BadRAM for v2.6.32.x
From 5d920ec604a896cfd8419f9360f3a81adedd19eb Mon Sep 17 00:00:00 2001
From: Przemyslaw Pawelczyk <>
Date: Sat, 30 Jan 2010 20:57:06 +0100
Subject: [PATCH] BadRAM for v2.6.32.x
Attempt to adapt the BadRAM patches to recent kernel versions. Includes
improvements and fixes(?). Were previous versions working on x86_64?
I am a kernel newbie, so all my thoughts below might be severely wrong.
I am not sure about the correctness of the x86_32 path involving highmem.
It has been practically unchanged since v2.4.xy -- and was it right at
the beginning?
But let's start with badram_markpages(), where reserve_bootmem() was
never executed at all. TestSetPageBad(), i.e. test_and_set_bit(), returns
the old value, so checking it for a non-zero result once per bad page
provided via next_masked_address() always failed. The condition must be
reversed if we want sane behavior, and I have already done that. On my
x86_64, __free_pages_bootmem() is not invoked on bad pages
(e.g. 0xa1YYYYYY), so those pages remain reserved. I think that is the
better approach, and accordingly I fixed the rest of the code to avoid
clearing the PG_reserved flag and incrementing totalhigh_pages (the
latter just for consistent bad-page "classification" -- "spoiling" a
second pool looks bad). The next thing is init_page_count(), which is
intended to be called before a page is freed; thus, if a bad page is
hit, this call should probably be removed from add_one_highpage_init()
-- also done.
What do you think about my changes?
The original idea and implementation for v2.2.xy come from Rick van Rein.
Further work was done by many developers; recent contributors are Manish
Pandya, Anup Shan and Antoine Frenoy, who provided patches for v2.6.2y.
It should also be noted that the patch for v2.6.28 published by
Sebastian Geisler is notably incomplete.
Cc: Rick van Rein <>
Documentation/badram.txt | 275 +++++++++++++++++++++++++++++++++++
Documentation/kernel-parameters.txt | 3 +
Documentation/memory.txt | 11 ++
arch/powerpc/mm/numa.c | 2 +-
arch/x86/Kconfig | 17 ++
arch/x86/configs/i386_defconfig | 1 +
arch/x86/configs/x86_64_defconfig | 1 +
arch/x86/include/asm/highmem.h | 2 +-
arch/x86/include/asm/numa_32.h | 4 +-
arch/x86/include/asm/page.h | 2 +
arch/x86/mm/highmem_32.c | 4 +-
arch/x86/mm/init_32.c | 54 +++++--
drivers/pci/intel-iommu.c | 5 +-
include/linux/kernel.h | 2 +
include/linux/mm.h | 5 +-
include/linux/page-flags.h | 11 ++
lib/cmdline.c | 66 +++++++++
lib/show_mem.c | 7 +-
mm/page_alloc.c | 101 ++++++++++++-
20 files changed, 552 insertions(+), 30 deletions(-)
create mode 100644 Documentation/badram.txt
diff --git a/CREDITS b/CREDITS
index 72b4878..d7a52fd 100644
@@ -2901,6 +2901,15 @@ S: 6 Karen Drive
S: Malvern, Pennsylvania 19355
+N: Rick van Rein
+D: Memory, the BadRAM subsystem dealing with statically challenged RAM modules.
+S: Haarlebrink 5
+S: 7544 WP Enschede
+S: The Netherlands
+P: 1024D/89754606 CD46 B5F2 E876 A5EE 9A85 1735 1411 A9C2 8975 4606
N: Stefan Reinauer
diff --git a/Documentation/badram.txt b/Documentation/badram.txt
new file mode 100644
index 0000000..fb473b7
--- /dev/null
+++ b/Documentation/badram.txt
@@ -0,0 +1,275 @@
+ RAM is getting smaller and smaller, and as a result, also more and more
+ vulnerable. This makes the manufacturing of hardware more expensive,
+ since an excessive amount of RAM chips must be discarded on account of
+ a single cell that is wrong. Similarly, static discharge may damage a
+ RAM module forever, which is usually remedied by replacing it
+ entirely.
+ This is not necessary, as the BadRAM code shows: By informing the Linux
+ kernel which addresses in a RAM are damaged, the kernel simply avoids
+ ever allocating such addresses but makes all the rest available.
+Reasons for this feature
+ There are many reasons why this kernel feature is useful:
+ - Chip manufacture is resource intensive; waste less and sleep better
+ - It's another chance to promote Linux as "the flexible OS"
+ - Some laptops have their RAM soldered in... and then it fails!
+ - It's plain cool ;-)
+Running example
+ To run this project, I was given two DIMMs, 32 MB each. One, that we
+ shall use as a running example in this text, contained 512 faulty bits,
+ spread over 1/4 of the address range in a regular pattern. Some tricks
+ with a RAM tester and a few binary calculations were sufficient to
+ write these faults down in 2 longword numbers.
+ The kernel recognised the correct number of pages with faults and did
+ not give them out for allocation. The allocation routines could
+ therefore proceed as normal, without any adaptation.
+ So, I gained 30 MB of DIMM which would otherwise have been thrown
+ away. After booting the kernel, the kernel behaved exactly as it
+ always had.
+Initial checks
+ If you experience RAM trouble, first read /usr/src/linux/memory.txt
+ and try out the mem=4M trick to see if at least some initial parts
+ of your RAM work well. The BadRAM routines halt the kernel in panic
+ if the reserved area of memory (containing kernel stuff) contains
+ a faulty address.
+Running a RAM checker
+ The memory checker is not built into the kernel, to avoid delays at
+ runtime. If you experience problems that may be caused by RAM, run
+ a good RAM checker, such as
+ The output of a RAM checker provides addresses that went wrong. In
+ the 32 MB chip with 512 faulty bits mentioned above, the errors were
+ found in the 8MB-16MB range (the DIMM was in slot #0) at addresses
+ xxx42f4
+ xxx62f4
+ xxxc2f4
+ xxxe2f4
+ and the error was a "sticky 1 bit", a memory bit that stayed "1" no
+ matter what was written to it. The regularity of this pattern
+ suggests the death of a buffer at the output stages of a row on one of
+ the chips. I expect such regularity to be commonplace. Finding this
+ regularity currently is human effort, but it should not be hard to
+ alter a RAM checker to capture it in some sort of pattern, possibly
+ the BadRAM patterns described below.
+ By the way, if you manage to get hold of memtest86 version 2.3 or
+ beyond, you can configure the printing mode to produce BadRAM patterns,
+ which find out exactly what you must enter on the LILO: commandline,
+ except that you shouldn't mention the added spacing. That means that
+ you can skip the following step, which saves you a *lot* of work.
+ Also by the way, if the ISA memory gap in the 15M-16M range cannot be
+ disabled on your machine, Linux can get in trouble. One way of handling that
+ situation is by specifying the total memory size to Linux with a boot
+ parameter mem=... and then to tell it to treat the 15M-16M range as
+ faulty with an additional boot parameter, for instance:
+ mem=24M badram=0x00f00000,0xfff00000
+ if you installed 24MB of RAM in total.
+ If you use this patch on an x86_64 architecture, your addresses are
+ twice as long. Fill up with zeroes in the address and with f's in
+ the mask. The latter example would thus become:
+ mem=24M badram=0x0000000000f00000,0xfffffffffff00000
+ The patch applies its changes to both the x86 and x86_64 code bases
+ at the same time. Splitting it into separate x86 and x86_64 patches
+ would not help, because the two patches would overlap and could
+ therefore not be applied at the same time.
+Capturing errors in a pattern
+ Instead of manually providing all 512 errors to the kernel, it's nicer
+ to generate a pattern. Since the regularity is based on address decoding
+ software, which generally takes certain bits into account and ignores
+ others, we shall provide a faulty address F, together with a bit mask M
+ that specifies which bits must be equal to F. In C code, an address A
+ is faulty if and only if
+ (F & M) == (A & M)
+ or alternately (closer to a hardware implementation):
+ ~((F ^ A) & M)
+ In the example 32 MB chip, we had the faulty addresses in 8MB-16MB:
+ xxx42f4 ....0100....
+ xxx62f4 ....0110....
+ xxxc2f4 ....1100....
+ xxxe2f4 ....1110....
+ The second column represents the alternating hex digit in binary form.
+ Apparently, the first and one-but-last binary digit can be anything,
+ so the binary mask for that part is 0101. The mask for the part after
+ this is 0xfff, and the part before should select anything in the range
+ 8MB-16MB, or 0x00800000-0x01000000; this is done with a bitmask
+ 0xff80xxxx. Combining these partial masks, we get:
+ F=0x008042f4 M=0xff805fff
+ That covers everything for this DIMM; for more complicated failing
+ DIMMs, or for a combination of multiple failing DIMMs, it can be
+ necessary to set up a number of such F/M pairs.
+Rebooting Linux
+ Now that these patterns are known (and double-checked, the calculations
+ are highly error-prone... it would be neat to test them in the RAM
+ checker...) we simply restart Linux with these F/M pairs as a parameter.
+ If you normally boot as follows:
+ LILO: linux
+ you should now boot with
+ LILO: linux badram=0x008042f4,0xff805fff
+ or perhaps by mentioning more F/M pairs in an order F0,M0,F1,M1,...
+ When you provide an odd number of arguments to badram, the default mask
+ 0xffffffff (only one address matched) is applied to the pattern.
+ Beware of the commandline length. At least up to LILO version 0.21,
+ the commandline is cut off after the 78th character; later versions
+ may go as far as the kernel goes, namely 255 characters. In no way is
+ it possible to enter more than 10 numbers to the badram boot option.
+ When the kernel now boots, it should not give any trouble with RAM.
+ Mind you, this is under the assumption that the kernel and its data
+ storage do not overlap an erroneous part. If this happens, and the
+ kernel does not choke on it right away, it will stop with a panic.
+ You will need to provide a RAM where the initial part, say 2MB, is faultless.
+ Now look up your memory status with
+ dmesg | grep ^Memory:
+ which prints a single line with information like
+ Memory: 158524k/163840k available
+ (940k kernel code,
+ 412k reserved,
+ 1856k data,
+ 60k init,
+ 0k highmem,
+ 2048k BadRAM)
+ The latter entry, the badram, is 2048k to represent the loss of 2MB
+ of general purpose RAM due to the errors. Or, positively rephrased,
+ instead of throwing out 32MB as useless, you only throw out 2MB.
+ If the system is stable (try compiling a few kernels, and do a few
+ finds in / or so) you may add the boot parameter to /etc/lilo.conf
+ as a line to _all_ the kernels that handle this trouble with a line
+ append="badram=0x008042f4,0xff805fff"
+ after which you run "lilo".
+ Warning: Don't experiment with these settings on your only boot image.
+ If the BadRAM overlays kernel code, data, init, or other reserved
+ memory, the kernel will halt in panic. Try settings on a test boot
+ image first, and if you get a panic you should change the order of
+ your DIMMs [which may involve buying a new one just to be able to
+ change the order].
+ You are allowed to enter any number of BadRAM patterns in all the
+ places documented in this file. They will all apply. It is even
+ possible to mention several BadRAM patterns in a single place. The
+ completion of an odd number of arguments with the default mask is
+ done separately for each badram=... option.
+Kernel Customisation
+ Some people prefer to enter their badram patterns in the kernel, and
+ this is also possible. In mm/page_alloc.c there is an array of unsigned
+ long integers into which the parameters can be entered, prefixed with
+ the number of integers (twice the number of patterns). The array is
+ named badram_custom and it will be added to the BadRAM list whenever an
+ option 'badram' is provided on the commandline when booting, either
+ with or without additional patterns.
+ For the previous example, the code would become
+ static unsigned long __initdata badram_custom[] = {
+ 2, // Number of longwords that follow, as F/M pairs
+ 0x008042f4L, 0xff805fffL,
+ };
+ Even here you may assume the default mask to be filled in when you
+ enter an odd number of longwords. Specify the number of longwords
+ as 0 to avoid any influence from this custom BadRAM list.
+BadRAM classification
+ This technique may start a lively market for "dead" RAM. It is important
+ to realise that some RAMs are more dead than others. So, instead of
+ just providing a RAM size, it is also important to know the BadRAM
+ class, which is defined as follows:
+ A BadRAM class N means that at most 2^N bytes have a problem,
+ and that all problems with the RAMs are persistent: They
+ are predictable and always show up.
+ The DIMM that serves as an example here was of class 9, since 512=2^9
+ errors were found. Higher classes are worse, "correct" RAM is of class
+ -1 (or even less, at your choice).
+ Class N also means that the bitmask for your chip (if there's just one,
+ that is) counts N bits "0" and it means that (if no faults fall in the
+ same page) an amount of 2^N*PAGESIZE memory is lost, in the example on
+ an x86 architecture that would be 2^9*4k=2MB, which accounts for the
+ initial claim of 30MB RAM gained with this DIMM.
+ Note that this scheme has deliberately been defined to be independent
+ of memory technology and of computer architecture.
+Known Bugs
+ LILO is known to cut off commandlines which are too long. For the
+ lilo-0.21 distribution, a commandline may not exceed 78 characters,
+ while actually, 255 would be possible [on x86, kernel 2.2.16].
+ LILO does _not_ report too-long commandlines, but the error will
+ show up as either a panic at boot time, stating
+ panic: BadRAM page in initial area
+ or the dmesg line starting with Memory: will mention an unpredicted
+ number of kilobytes. (Note that the latter number only includes
+ errors in accessed memory.)
+Future Possibilities
+ It would be possible to use even more of the faulty RAMs by employing
+ them for slabs. The smaller allocation granularity of slabs makes it
+ possible to throw out just, say, 32 bytes surrounding an error. This
+ would mean that the example DIMM only loses 16kB instead of 2MB.
+ It might even be possible to allocate the slabs in such a way that,
+ where possible, the remaining bytes in a slab structure are allocated
+ around the error, reducing the RAM loss to 0 in the optimal situation!
+ However, this yield is somewhat faked: It is possible to provide 512
+ pages of 32-byte slabs, but it is not certain that anyone would use
+ that many 32-byte slabs at any time.
+ A better solution might be to alter the page allocation for a slab to
+ have a preference for BadRAM pages, and give those a special treatment.
+ This way, the BadRAM would be spread over all the slabs, which seems
+ more likely to be a `true' pay-off. This would yield more overhead at
+ slab allocation time, but on the other hand, by the nature of slabs,
+ such allocations are made as rare as possible, so it might not matter
+ that much. I am uncertain where to go.
+ Many suggestions have been made to insert a RAM checker at boot time;
+ since this would leave the time to do only very meager checking, it
+ is not a reasonable option; we already have a BIOS doing that in most
+ systems!
+ It would be interesting to integrate this functionality with the
+ self-verifying nature of ECC RAM. These memories can even distinguish
+ between recoverable and unrecoverable errors! Such memory has been
+ handled in older operating systems by `testing' once-failed memory
+ blocks for a while, by placing only (reloadable) program code in it.
+ Unfortunately, I possess no faulty ECC modules to work this out.
+Names and Places
+ The home page of this project is on
+ This page also links to Nico Schmoigl's experimental extensions to
+ this patch (with debugging and a few other fancy things).
+ In case you have experiences with the BadRAM software which differ from
+ the test reports on that site, I hope you will mail me with that
+ new information.
+ The BadRAM project is an idea and implementation by
+ Rick van Rein
+ Haarlebrink 5
+ 7544 WP Enschede
+ The Netherlands
+ If you like it, a postcard would be much appreciated ;-)
+ Enjoy,
+ -Rick.
diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
index 5f6aa11..478d294 100644
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -42,6 +42,7 @@ parameter is applicable:
APM Advanced Power Management support is enabled.
AVR32 AVR32 architecture is enabled.
AX25 Appropriate AX.25 support is enabled.
+ BADRAM Support for faulty RAM chips is enabled.
BLACKFIN Blackfin architecture is enabled.
DRM Direct Rendering Management support is enabled.
EDD BIOS Enhanced Disk Drive Services (EDD) is enabled
@@ -379,6 +380,8 @@ and is between 256 and 4096 characters. It is defined in the file
autotest [IA64]
+ badram= [BADRAM] Avoid allocating faulty RAM addresses.
baycom_epp= [HW,AX25]
Format: <io>,<mode>
diff --git a/Documentation/memory.txt b/Documentation/memory.txt
index 802efe5..afe19f6 100644
--- a/Documentation/memory.txt
+++ b/Documentation/memory.txt
@@ -7,11 +7,22 @@ systems.
as you add more memory. Consider exchanging your
+ 4) A static discharge or production fault causes a RAM module
+ to have (predictable) errors, usually meaning that certain
+ bits cannot be set or reset. Instead of throwing away your
+ RAM module, you may read /usr/src/linux/Documentation/badram.txt
+ to learn how to detect, locate and circumvent such errors
+ in your RAM module.
All of these problems can be addressed with the "mem=XXXM" boot option
(where XXX is the size of RAM to use in megabytes).
It can also tell Linux to use less memory than is actually installed.
If you use "mem=" on a machine with PCI, consider using "memmap=" to avoid
physical address space collisions.
+If this helps, read Documentation/badram.txt to learn how to
+find and circumvent memory errors.
See the documentation of your boot loader (LILO, grub, loadlin, etc.) about
how to pass options to the kernel.
diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
index b037d95..64d5353 100644
--- a/arch/powerpc/mm/numa.c
+++ b/arch/powerpc/mm/numa.c
@@ -129,7 +129,7 @@ static void __init get_node_active_region(unsigned long start_pfn,
node_ar->nid = nid;
node_ar->start_pfn = start_pfn;
node_ar->end_pfn = start_pfn;
- work_with_active_regions(nid, get_active_region_work_fn, node_ar);
+ work_with_active_regions(nid, get_active_region_work_fn, node_ar, NULL);
static void __cpuinit map_cpu_to_node(int cpu, int node)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index cb5a57c..1a09601 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1118,6 +1118,23 @@ config DIRECT_GBPAGES
support it. This can improve the kernel's performance a tiny bit by
reducing TLB pressure. If in doubt, say "Y".
+config BADRAM
+ bool "Work around bad spots in RAM"
+ default y
+ help
+ This small kernel extension makes it possible to use memory chips
+ which are not entirely correct. It works by never allocating the
+ places that are wrong. Those places are specified with the badram
+ boot option to LILO. Read Documentation/badram.txt and/or visit
+ for information.
+ This option co-operates well with a second boot option from LILO
+ that starts memtest86, which is able to automatically produce the
+ patterns for the commandline in case of memory trouble.
+ It is safe to say 'Y' here, and it is advised because there is no
+ performance impact.
# Common NUMA Features
config NUMA
bool "Numa Memory Allocation and Scheduler Support"
diff --git a/arch/x86/configs/i386_defconfig b/arch/x86/configs/i386_defconfig
index d28fad1..2106f30 100644
--- a/arch/x86/configs/i386_defconfig
+++ b/arch/x86/configs/i386_defconfig
@@ -300,6 +300,7 @@ CONFIG_X86_CPUID=y
# CONFIG_HIGHMEM64G is not set
diff --git a/arch/x86/configs/x86_64_defconfig b/arch/x86/configs/x86_64_defconfig
index 6c86acd..5f6758f 100644
--- a/arch/x86/configs/x86_64_defconfig
+++ b/arch/x86/configs/x86_64_defconfig
@@ -278,6 +278,7 @@ CONFIG_PREEMPT_VOLUNTARY=y
diff --git a/arch/x86/include/asm/highmem.h b/arch/x86/include/asm/highmem.h
index 014c2b8..4601aa8 100644
--- a/arch/x86/include/asm/highmem.h
+++ b/arch/x86/include/asm/highmem.h
@@ -73,7 +73,7 @@ struct page *kmap_atomic_to_page(void *ptr);
#define flush_cache_kmaps() do { } while (0)
extern void add_highpages_with_active_regions(int nid, unsigned long start_pfn,
- unsigned long end_pfn);
+ unsigned long end_pfn, int *pbad);
#endif /* __KERNEL__ */
diff --git a/arch/x86/include/asm/numa_32.h b/arch/x86/include/asm/numa_32.h
index a372290..189e3df 100644
--- a/arch/x86/include/asm/numa_32.h
+++ b/arch/x86/include/asm/numa_32.h
@@ -5,9 +5,9 @@ extern int pxm_to_nid(int pxm);
extern void numa_remove_cpu(int cpu);
-extern void set_highmem_pages_init(void);
+extern void set_highmem_pages_init(int *pbad);
-static inline void set_highmem_pages_init(void)
+static inline void set_highmem_pages_init(int *pbad)
diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
index 625c3f0..9b4aa06 100644
--- a/arch/x86/include/asm/page.h
+++ b/arch/x86/include/asm/page.h
@@ -53,6 +53,8 @@ static inline void copy_user_page(void *to, void *from, unsigned long vaddr,
extern bool __virt_addr_valid(unsigned long kaddr);
#define virt_addr_valid(kaddr) __virt_addr_valid((unsigned long) (kaddr))
+#define phys_to_page(x) pfn_to_page((unsigned long)(x) >> PAGE_SHIFT)
#endif /* __ASSEMBLY__ */
#include <asm-generic/memory_model.h>
diff --git a/arch/x86/mm/highmem_32.c b/arch/x86/mm/highmem_32.c
index 63a6ba6..36c60aa 100644
--- a/arch/x86/mm/highmem_32.c
+++ b/arch/x86/mm/highmem_32.c
@@ -106,7 +106,7 @@ EXPORT_SYMBOL(kunmap_atomic);
-void __init set_highmem_pages_init(void)
+void __init set_highmem_pages_init(int *pbad)
struct zone *zone;
int nid;
@@ -125,7 +125,7 @@ void __init set_highmem_pages_init(void)
zone->name, nid, zone_start_pfn, zone_end_pfn);
add_highpages_with_active_regions(nid, zone_start_pfn,
- zone_end_pfn);
+ zone_end_pfn, pbad);
totalram_pages += totalhigh_pages;
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index 30938c1..1412e0b 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -412,11 +412,16 @@ static void __init permanent_kmaps_init(pgd_t *pgd_base)
pkmap_page_table = pte;
-static void __init add_one_highpage_init(struct page *page, int pfn)
+static void __init add_one_highpage_init(struct page *page, int pfn,
+ int *bad)
+ *bad = 0;
- __free_page(page);
+ if (PageBad(page))
+ *bad = 1;
+ else
+ __free_page(page);
@@ -426,9 +431,10 @@ struct add_highpages_data {
static int __init add_highpages_work_fn(unsigned long start_pfn,
- unsigned long end_pfn, void *datax)
+ unsigned long end_pfn, void *datax,
+ int *pbad)
- int node_pfn;
+ int node_pfn, bad;
struct page *page;
unsigned long final_start_pfn, final_end_pfn;
struct add_highpages_data *data;
@@ -445,7 +451,9 @@ static int __init add_highpages_work_fn(unsigned long start_pfn,
if (!pfn_valid(node_pfn))
page = pfn_to_page(node_pfn);
- add_one_highpage_init(page, node_pfn);
+ add_one_highpage_init(page, node_pfn, &bad);
+ if (bad && pbad)
+ (*pbad)++;
return 0;
@@ -453,14 +461,14 @@ static int __init add_highpages_work_fn(unsigned long start_pfn,
void __init add_highpages_with_active_regions(int nid, unsigned long start_pfn,
- unsigned long end_pfn)
+ unsigned long end_pfn, int *pbad)
struct add_highpages_data data;
data.start_pfn = start_pfn;
data.end_pfn = end_pfn;
- work_with_active_regions(nid, add_highpages_work_fn, &data);
+ work_with_active_regions(nid, add_highpages_work_fn, &data, pbad);
@@ -859,7 +867,7 @@ static void __init test_wp_bit(void)
void __init mem_init(void)
- int codesize, reservedpages, datasize, initsize;
+ int codesize, reservedpages, badpages, datasize, initsize;
int tmp;
@@ -871,19 +879,37 @@ void __init mem_init(void)
totalram_pages += free_all_bootmem();
reservedpages = 0;
- for (tmp = 0; tmp < max_low_pfn; tmp++)
+ badpages = 0;
+ for (tmp = 0; tmp < max_low_pfn; tmp++) {
- * Only count reserved RAM pages:
+ * Only count reserved and bad RAM pages:
- if (page_is_ram(tmp) && PageReserved(pfn_to_page(tmp)))
- reservedpages++;
+ if (page_is_ram(tmp)) {
+ if (PageReserved(pfn_to_page(tmp)))
+ reservedpages++;
+ if (PageBad(pfn_to_page(tmp)))
+ badpages++;
+ }
+ }
- set_highmem_pages_init();
+ set_highmem_pages_init(&badpages);
codesize = (unsigned long) &_etext - (unsigned long) &_text;
datasize = (unsigned long) &_edata - (unsigned long) &_etext;
initsize = (unsigned long) &__init_end - (unsigned long) &__init_begin;
+ printk(KERN_INFO "Memory: %luk/%luk available (%dk kernel code, "
+ "%dk reserved, %dk data, %dk init, %ldk highmem, %dk BadRAM)\n",
+ nr_free_pages() << (PAGE_SHIFT-10),
+ num_physpages << (PAGE_SHIFT-10),
+ codesize >> 10,
+ reservedpages << (PAGE_SHIFT-10),
+ datasize >> 10,
+ initsize >> 10,
+ (unsigned long) (totalhigh_pages << (PAGE_SHIFT-10)),
+ badpages << (PAGE_SHIFT-10));
printk(KERN_INFO "Memory: %luk/%luk available (%dk kernel code, "
"%dk reserved, %dk data, %dk init, %ldk highmem)\n",
nr_free_pages() << (PAGE_SHIFT-10),
@@ -894,7 +920,7 @@ void __init mem_init(void)
initsize >> 10,
(unsigned long) (totalhigh_pages << (PAGE_SHIFT-10))
printk(KERN_INFO "virtual kernel memory layout:\n"
" fixmap : 0x%08lx - 0x%08lx (%4ld kB)\n"
diff --git a/drivers/pci/intel-iommu.c b/drivers/pci/intel-iommu.c
index ba83495..312d2b3 100644
--- a/drivers/pci/intel-iommu.c
+++ b/drivers/pci/intel-iommu.c
@@ -2064,7 +2064,8 @@ static inline void iommu_prepare_isa(void)
static int md_domain_init(struct dmar_domain *domain, int guest_width);
static int __init si_domain_work_fn(unsigned long start_pfn,
- unsigned long end_pfn, void *datax)
+ unsigned long end_pfn, void *datax,
+ int * /*pbad*/)
int *ret = datax;
@@ -2106,7 +2107,7 @@ static int __init si_domain_init(int hw)
return 0;
for_each_online_node(nid) {
- work_with_active_regions(nid, si_domain_work_fn, &ret);
+ work_with_active_regions(nid, si_domain_work_fn, &ret, NULL);
if (ret)
return ret;
diff --git a/include/linux/kernel.h b/include/linux/kernel.h
index f4e3184..fc10f6c 100644
--- a/include/linux/kernel.h
+++ b/include/linux/kernel.h
@@ -201,6 +201,8 @@ extern int vsscanf(const char *, const char *, va_list)
extern int get_option(char **str, int *pint);
extern char *get_options(const char *str, int nints, int *ints);
+extern int get_longoption (char **str, unsigned long *plong);
+extern char *get_longoptions(const char *str, int nlongs, unsigned long *longs);
extern unsigned long long memparse(const char *ptr, char **retptr);
extern int core_kernel_text(unsigned long addr);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 11e5be6..94f3a10 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1039,8 +1039,9 @@ extern void get_pfn_range_for_nid(unsigned int nid,
extern unsigned long find_min_pfn_with_active_regions(void);
extern void free_bootmem_with_active_regions(int nid,
unsigned long max_low_pfn);
-typedef int (*work_fn_t)(unsigned long, unsigned long, void *);
-extern void work_with_active_regions(int nid, work_fn_t work_fn, void *data);
+typedef int (*work_fn_t)(unsigned long, unsigned long, void *, int *);
+extern void work_with_active_regions(int nid, work_fn_t work_fn, void *data,
+ int *pbad);
extern void sparse_memory_present_with_active_regions(int nid);
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 6b202b1..8899c9b 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -102,6 +102,9 @@ enum pageflags {
PG_mlocked, /* Page is vma mlocked */
+ PG_badram, /* BadRam page */
PG_uncached, /* Page has been mapped as uncached */
@@ -217,6 +220,14 @@ __PAGEFLAG(SlobFree, slob_free)
__PAGEFLAG(SlubFrozen, slub_frozen)
__PAGEFLAG(SlubDebug, slub_debug)
+#ifdef CONFIG_BADRAM
+TESTPAGEFLAG(Bad, badram)
+SETPAGEFLAG(Bad, badram)
+TESTSETFLAG(Bad, badram)
+#else
+#define PageBad(page) 0
+#endif
* Private page markings that may be used by the filesystem that owns the page
* for its own purposes.
diff --git a/lib/cmdline.c b/lib/cmdline.c
index f5f3ad8..eacb2b9 100644
--- a/lib/cmdline.c
+++ b/lib/cmdline.c
@@ -114,6 +114,70 @@ char *get_options(const char *str, int nints, int *ints)
+ * get_longoption - Parse long from an option string
+ * @str: option string
+ * @plong: (output) long value parsed from @str
+ *
+ * Read a long from an option string; if available accept a subsequent
+ * comma as well.
+ *
+ * Return values:
+ * 0 - no long in string
+ * 1 - long found, no subsequent comma
+ * 2 - long found including a subsequent comma
+ */
+int get_longoption (char **str, unsigned long *plong)
+ char *cur = *str;
+ if (!cur || !(*cur))
+ return 0;
+ *plong = simple_strtoul(cur, str, 0);
+ if (cur == *str)
+ return 0;
+ if (**str == ',') {
+ (*str)++;
+ return 2;
+ }
+ return 1;
+ * get_longoptions - Parse a string into a list of longs
+ * @str: String to be parsed
+ * @nlongs: size of long array
+ * @longs: long array
+ *
+ * This function parses a string containing a comma-separated
+ * list of longs. The parse halts when the array is
+ * full, or when no more numbers can be retrieved from the
+ * string.
+ *
+ * Return value is the character in the string which caused
+ * the parse to end (typically a null terminator, if @str is
+ * completely parseable).
+ */
+char *get_longoptions(const char *str, int nlongs, unsigned long *longs)
+ int res, i = 1;
+ while (i < nlongs) {
+ res = get_longoption((char **)&str, longs + i);
+ if (res == 0)
+ break;
+ i++;
+ if (res == 1)
+ break;
+ }
+ longs[0] = i - 1;
+ return (char *)str;
* memparse - parse a string with mem suffixes into a number
* @ptr: Where parse begins
* @retptr: (output) Optional pointer to next char after parse completes
@@ -157,3 +221,5 @@ unsigned long long memparse(const char *ptr, char **retptr)
diff --git a/lib/show_mem.c b/lib/show_mem.c
index 238e72a..0825ad4 100644
--- a/lib/show_mem.c
+++ b/lib/show_mem.c
@@ -12,7 +12,7 @@
void show_mem(void)
pg_data_t *pgdat;
- unsigned long total = 0, reserved = 0, shared = 0,
+ unsigned long total = 0, reserved = 0, shared = 0, badram = 0,
nonshared = 0, highmem = 0;
printk(KERN_INFO "Mem-Info:\n");
@@ -37,6 +37,8 @@ void show_mem(void)
if (PageHighMem(page))
+ if (PageBad(page))
+ badram++;
if (PageReserved(page))
else if (page_count(page) == 1)
@@ -54,6 +56,9 @@ void show_mem(void)
printk(KERN_INFO "%lu pages HighMem\n", highmem);
printk(KERN_INFO "%lu pages reserved\n", reserved);
+ printk(KERN_INFO "(including %lu pages of BadRAM)\n", badram);
printk(KERN_INFO "%lu pages shared\n", shared);
printk(KERN_INFO "%lu pages non-shared\n", nonshared);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 902e5fc..00980e4 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -10,6 +10,7 @@
* Reshaped it to be a zoned allocator, Ingo Molnar, Red Hat, 1999
* Discontiguous memory support, Kanoj Sarcar, SGI, Nov 1999
* Zone balancing, Kanoj Sarcar, SGI, Jan 2000
+ * BadRAM handling, Rick van Rein, Feb 2001
* Per cpu hot/cold page lists, bulk allocation, Martin J. Bligh, Sept 2002
* (lots of bits borrowed from Ingo Molnar & Andrew Morton)
@@ -618,9 +619,11 @@ void __meminit __free_pages_bootmem(struct page *page, unsigned int order)
if (order == 0) {
- set_page_count(page, 0);
- set_page_refcounted(page);
- __free_page(page);
+ if (!PageBad(page)) {
+ set_page_count(page, 0);
+ set_page_refcounted(page);
+ __free_page(page);
+ }
} else {
int loop;
@@ -3438,14 +3441,14 @@ void __init free_bootmem_with_active_regions(int nid,
-void __init work_with_active_regions(int nid, work_fn_t work_fn, void *data)
+void __init work_with_active_regions(int nid, work_fn_t work_fn, void *data, int *pbad)
int i;
int ret;
for_each_active_range_index_in_nid(i, nid) {
ret = work_fn(early_node_map[i].start_pfn,
- early_node_map[i].end_pfn, data);
+ early_node_map[i].end_pfn, data, pbad);
if (ret)
@@ -4928,6 +4931,94 @@ void *__init alloc_large_system_hash(const char *tablename,
return table;
+#ifdef CONFIG_X86_64
+# define PRIaddr "016lx"
+#else
+# define PRIaddr "08lx"
+#endif
+ * Given a pointed-at address and a mask, increment the page so that the
+ * mask hides the increment. Return 0 if no increment is possible.
+ */
+static int __init next_masked_address(unsigned long *addrp, unsigned long mask)
+{
+	unsigned long inc = 1;
+	unsigned long newval = *addrp;
+
+	while (inc & mask)
+		inc += inc;
+	while (inc != 0) {
+		newval += inc;
+		newval &= ~mask;
+		newval |= ((*addrp) & mask);
+		if (newval > *addrp) {
+			*addrp = newval;
+			return 1;
+		}
+		do {
+			inc += inc;
+		} while (inc & ~mask);
+		while (inc & mask)
+			inc += inc;
+	}
+	return 0;
+}
+void __init badram_markpages(int argc, unsigned long *argv)
+{
+	unsigned long addr, mask;
+
+	while (argc-- > 0) {
+		addr = *argv++;
+		mask = (argc-- > 0) ? *argv++ : ~0L;
+		mask |= ~PAGE_MASK;	/* Optimisation */
+		addr &= mask;		/* Normalisation */
+		do {
+			struct page *page = phys_to_page(addr);
+			printk(KERN_DEBUG " %"PRIaddr" = %"PRIaddr"\n",
+			       addr >> PAGE_SHIFT,
+			       (unsigned long)(page - mem_map));
+			if (!TestSetPageBad(page))
+				reserve_bootmem(addr, PAGE_SIZE,
+						BOOTMEM_DEFAULT);
+		} while (next_masked_address(&addr, mask));
+	}
+}
+/*********** CONFIG_BADRAM: CUSTOMISABLE SECTION STARTS HERE *****************/
+/* Enter your custom BadRAM patterns here as pairs of unsigned long integers. */
+/* For more information on these F/M pairs, refer to Documentation/badram.txt */
+static unsigned long __initdata badram_custom[] = {
+	0,	/* Number of longwords that follow, as F/M pairs */
+};
+/*********** CONFIG_BADRAM: CUSTOMISABLE SECTION ENDS HERE *******************/
+static int __init badram_setup(char *str)
+{
+	unsigned long opts[3];
+
+//	BUG_ON(!mem_map);
+	printk(KERN_INFO "PAGE_OFFSET = 0x%"PRIaddr"\n", PAGE_OFFSET);
+	printk(KERN_INFO "BadRAM option is %s\n", str);
+	if (*str++ == '=')
+		while ((str = get_longoptions(str, 3, (long *) opts), *opts)) {
+			printk(KERN_INFO " --> marking 0x%"PRIaddr", 0x%"PRIaddr
+			       " [%ld]\n", opts[1], opts[2], opts[0]);
+			badram_markpages(*opts, opts + 1);
+			if (*opts == 1)
+				break;
+		}
+	badram_markpages(*badram_custom, badram_custom + 1);
+	return 0;
+}
+
+__setup("badram", badram_setup);
+#endif /* CONFIG_BADRAM */
/* Return a pointer to the bitmap storing bits affecting a block of pages */
static inline unsigned long *get_pageblock_bitmap(struct zone *zone,
unsigned long pfn)
Date: Fri, 28 Jan 2011 16:40:32 +0100
Subject: BadRAM for v2.6.32.x
To: Rick van Rein <>
Hello Rick,
I was doing some cleaning on my hard drive and I bumped into my BadRAM-related work. One year ago I tried to adapt BadRAM to a recent kernel version, i.e. As a base and reference, patches for the 2.6.26+ versions were used.
I tried because the memory in my desktop computer
( started to show some
intermittent, or just hard-to-trigger, errors at the end of 2009. I
first noticed it the morning after a night of many successive kernel
compilations, when some of them failed for the first time. It was an
internal compiler error, and later I read that it in all likelihood
means some hardware problem, e.g. in memory. Memtest86+, after many
iterations of all tests, showed errors in 2 bits at totally different
locations, but in further tests these memory addresses didn't change.
AFAIR only test 5 could sometimes detect these errors, and it was
hardcoded that this test is not reliable, so turning on the option to
show badram params gave me nothing. I patched memtest, so I finally got
them (I have pictures somewhere).
To be honest, I don't remember what happened next, apart from one
thing. I had a newer kernel that wasn't supported by BadRAM, so I
wanted to fix the lack of BadRAM first, to then solve the main
problem. But I am not sure whether I really completed my work and
properly tested a kernel with the patch applied later. I'm sure it
booted, but because these errors were hard to trigger, I gave up on
time-consuming tests. I had a small case then and the temperature
inside was pretty high. A year ago I replaced it with a bigger one
and believed that the problems with memory had gone away. I even ran
memtest for 2 or 3 iterations and didn't get any errors. But a few
days ago the internal compiler error showed up again, once (and only
once so far). Running make again solves the problem, and I am pretty
busy, so I cannot test it properly.
I don't want my patch to be lost in the abyss of my hard drive. I
remember that I put some effort into analyzing, understanding and
fixing what I thought was done improperly in the recent patches. I may
have failed. I definitely no longer remember anything of what I was
precisely doing. ;-)
I applied it on top of the 2.6.32.y branch (v2.6.32.28) and I'm
attaching a format-patch of it:
I just tested a v2.6.32.21 kernel compilation and there was no problem.
I booted it flawlessly in qemu.
$ qemu -hda ./squeeze64.img -kernel
./linux- -append "root=\"/dev/hda\" ro"
-m 256
root@debian:~# cat /proc/meminfo | head -1
MemTotal: 253564 kB
$ qemu -hda ./squeeze64.img -kernel
./linux- -append "root=\"/dev/hda\"
badram=0x000000000a000000,0xffffffffff000000 ro" -m 256
root@debian:~# cat /proc/meminfo | head -1
MemTotal: 237180 kB
Now I see that I forgot to enhance mem_init() in arch/x86/mm/init_64.c
to show the BadRAM memory size in the "Memory:" line printed early by
the kernel. Maybe I also missed something else a year ago?
I don't know whether you're still into BadRAM or not. If yes, you
should carefully read my commit message and review it along with the
whole patch. If not, then I at least have a backup of it in gmail, just
in case. :-)
For your convenience I'm pasting commit message below.
[[ snipped, check the patch ]]
P.S. I also found (after preparing BadRAM for, but maybe even
before, not sure now) the "memmap=nn[KMG]$ss[KMG]" parameter, which
isn't as robust as badram, but allows reserving memory and is already
in mainline, so there is rather no chance of merging badram, at least
in its current state. But I came up with the idea of totally reworking
badram on top of memmap, or more accurately, memmap internals. I might
try it someday. And this would have a small chance of being mergeable,
at least I think so.

@przemoc przemoc commented Mar 27, 2013


This patch wasn't thoroughly tested, therefore may be HARMFUL and UNSAFE!
You have been warned! USE AT YOUR OWN RISK! NO WARRANTY!
