From NAND chip to files

First of all, I am pretty happy to write this article because I don’t often get the opportunity to cover forensics topics on this blog. The main reason is that I almost always work in that field for my employer, so it has no place here. But this time it was related to a spare-time project I did during my holidays!

You’re not going to get a lot of details about the whole project because it is still ongoing; moreover, I am working on it with a friend and we hope to do a bigger publication once we are done. Anyway, I went through a lot of caveats, so I thought it was worth writing about that step of our study.

After having read some of my other blog posts, you may have noticed that I usually target small embedded devices without a lot of computational power and, consequently, with small firmware. That’s why I am more used to dealing with small SPI or I2C flash/EEPROM memory chips, or even microcontrollers such as PIC or AVR. All of them rely on serial buses.

This time my target was “big” enough to require both NOR and NAND parallel flash memory chips.

Why have two memory technologies side by side? You may be familiar with NAND memory chips because they are the ones behind your favorite USB key, MP3 player, or solid-state drive (SSD). They are used for data storage because they are efficient at random access to data and their programming times are low. Unfortunately, most microprocessors cannot boot natively from NAND chips because these use a multiplexed bus: the address bus and the data bus are shared, and a couple of latches select whether you are getting/setting an address or a data value.

Conversely, NOR technology is very close to what good old (UV/E)EPROMs were (two separate buses), and that’s why NOR chips usually store the boot sectors whereas NAND stores the filesystem.

Unlike serial chips, which are loaded once and for all at boot time, parallel flash chips are used just like a hard drive, so you cannot dump them in-circuit while the device is powered on. If there is no easy way to get your hands on the system (through JTAG for example), you need to desolder the chip; that is often called chip-off forensics. The chip I was dealing with came in a TSOP package, as shown in the following picture, so I could use either my hot air gun with the right nozzle or Chipquik.

The chip before desoldering it:

And right after:

The latter (Chipquik) is a special alloy whose melting point is far below that of traditional solder; combined with thermal inertia, it lets you easily desolder the chip. Chipquik being quite expensive, I went for the hot air gun solution. As commercial gear now uses lead-free solder, it requires more heating to melt, which is why I usually add some extra leaded solder to lower the melting point a bit: heating a chip too much might damage it. Of course, it would have been way more complex to do the same with a BGA chip, because you have to reball the component after having removed it, and that usually requires a very expensive machine.

In order to read the NAND flash chip I had just desoldered, I basically had two options:

  1. Use a commercial tool
  2. Use a DIY NAND reader as described on SpritesMods’ blog

I have to be honest: as I was a bit short on time, I decided to go for the first option and use my TNM-5000, which comes with a lot of adapters, including the TSOP-48 one I needed for that chip.

As you would expect from a commercial tool, it is very easy to use: put the chip in the reader, click “Detect chip”, then “Read”, and finally “Verify” to be sure the dump is correct. Piece of cake! But wait… the file is 264 MB for a 256 MB chip! What’s going on?!

Without a lot of time to learn about NAND flash, I decided to quickly wire an FT2232H module to the TSOP-48 adapter and give the DIY reader a try. The software allows three kinds of memory dumps:

  • main memory
  • OOB memory
  • both

Hmm… Things are becoming clearer. Those chips have extra memory, probably to deal with the internal cells that could die.

Just for fun, I did all three kinds of dumps; this way, I would be able to learn more about those memory chips.

Another reason is that I usually trust commercial products a bit more for that kind of stuff, even if they come from China :) So the first thing I did right after that was to compute the MD5 hashes of those files. Normally, the full dumps from both tools should be identical, but they were not. A quick run of my favorite tool, vbindiff, showed that, from time to time, nibbles (half a byte) were not the same. That looks like random errors, and I already described a way to get rid of them in a previous article.
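To make sure the differences really were sporadic bit errors and not an offset problem, a few lines of Python are enough to hash both dumps and list the mismatching offsets. This is just a sketch, and the file names are made up:

import hashlib

def md5sum(path):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

print(md5sum("full-dump-tnm5000.bin"))
print(md5sum("full-dump-ft2232h.bin"))

# If the hashes differ, locate the mismatching bytes
with open("full-dump-tnm5000.bin", "rb") as f1, \
     open("full-dump-ft2232h.bin", "rb") as f2:
    offset = 0
    while True:
        c1, c2 = f1.read(1 << 20), f2.read(1 << 20)
        if not c1:
            break
        for i, (a, b) in enumerate(zip(c1, c2)):
            if a != b:
                print("0x%08x: %02x != %02x" % (offset + i, a, b))
        offset += len(c1)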

But instead of dumping a 264 MB firmware multiple times, I decided to understand how NAND flash works, and especially the spare data, to be able to use the dump from the TNM-5000.

And I learned a lot reading Micron’s paper: NAND Flash 101: An Introduction to NAND Flash. This paper describes in detail how NAND and NOR chips work, their pros and cons, etc.

Regarding NAND flash, the memory is organised in pages, and multiple pages constitute a block. A page is formed of the actual data plus a spare area used for error correction, bad block management and wear-leveling (cells have a limited lifetime, so you need to distribute writes instead of always hitting the same area in order to maximize the lifetime of the whole chip). This is important because even though you can read or program one page at a time, you can only erase a whole block! If you are not familiar with memory chips: all you can do when writing is change a 1 into a 0; changing a 0 back into a 1 requires an ERASE command, and that’s why empty memory chips are full of 0xFF.

The following picture will give you an overview of a NAND memory organization:

Typical layout of a NAND memory chip
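To make the sizes concrete, here is the arithmetic for my chip as a quick Python sketch (the numbers match the tool output shown further below):

# Geometry of my chip: 64 spare bytes ride along each 2048-byte page
PAGE_SIZE  = 2048                 # data bytes per page
OOB_SIZE   = 64                   # spare bytes per page
BLOCK_SIZE = 128 * 1024           # data bytes per erase block

pages_per_block = BLOCK_SIZE // PAGE_SIZE   # 64 pages per block
raw_page_size   = PAGE_SIZE + OOB_SIZE      # 2112 bytes actually read per page

data_size = 256 * 1024 * 1024               # advertised capacity: 256 MB
n_pages   = data_size // PAGE_SIZE          # 131072 pages
print(n_pages * raw_page_size)              # 276824064 bytes, i.e. 264 MB:
                                            # the 8 extra MB are the spare areas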

Unfortunately, as Micron’s document explains, and unlike what the previous picture might lead you to think, the spare area is not always located at the end of a page.

There are basically two possible layouts, as depicted below:

Different possible layouts for a block within a NAND memory

And the layout is not tied to the memory chip itself: it is a software decision, even though the “separate” layout seems to be the most popular.

By comparing the main memory dump I made earlier with the FT2232H module against the full dump, I could tell that the layout was indeed the separate one.

After some research, it seems that the Linux package mtd-utils used to ship a tool called nanddump to deal with those dumps, but it has been dropped for some reason. Hence I decided to write my own tool to strip the spare area out of a full dump (otherwise the spare area would mess with the filesystem and I wouldn’t be able to mount it).
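To give you an idea, here is a stripped-down sketch of what the core of that tool does (geometry hard-coded for my chip, whereas the real tool derives it from the chip ID). It walks the raw dump page by page and splits data from spare, for the two layouts shown above:

PAGE, OOB = 2048, 64          # 64 spare bytes per 2048-byte page on my chip
CHUNK, CHUNK_OOB = 512, 16    # spread as 16 spare bytes per 512-byte chunk

def strip_oob(raw_path, data_path, oob_path, layout="separate"):
    with open(raw_path, "rb") as raw, \
         open(data_path, "wb") as data, \
         open(oob_path, "wb") as oob:
        while True:
            page = raw.read(PAGE + OOB)
            if len(page) < PAGE + OOB:
                break
            if layout == "separate":
                # all spare bytes grouped at the end of the raw page
                data.write(page[:PAGE])
                oob.write(page[PAGE:])
            else:
                # spare bytes interleaved after each 512-byte chunk
                for i in range(0, PAGE + OOB, CHUNK + CHUNK_OOB):
                    data.write(page[i:i + CHUNK])
                    oob.write(page[i + CHUNK:i + CHUNK + CHUNK_OOB])

strip_oob("full-dump.bin", "main-dump.bin", "oob-dump.bin")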

The tool, as always, is available on my Bitbucket account.

It can do several things:

  • identify the chip parameters (page size, spare area size) given only the chip ID value (the 4 bytes returned when sending a 0x90 command to the chip, also found in the component datasheet)
  • manually enter those parameters
  • save the spare area in a separate file
  • save the main memory of a NAND dump in a file
  • handle the two kinds of layouts

Last but not least, I was thinking of a way to automatically detect the layout, and it worked on the chip I had. The idea is pretty simple (but it may fail from time to time): compute all the spare areas for the two possible layouts, then compute the average Hamming distance between two consecutive spare areas. In the end, the smallest average distance should match the layout the software developer chose. If you come up with better ideas, I would be glad to test and implement them. Just leave a comment or send me an email :)
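In case that sounds too abstract, here is roughly what the guessing looks like in Python, reusing the constants of the previous sketch (the actual implementation lives in the tool):

def hamming(a, b):
    # number of differing bits between two equal-length byte strings
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def avg_spare_distance(raw_path, layout, n_pages=1024):
    # extract the spare areas of the first pages under the given layout
    # assumption and average the Hamming distance of consecutive ones
    spares = []
    with open(raw_path, "rb") as raw:
        for _ in range(n_pages):
            page = raw.read(PAGE + OOB)
            if len(page) < PAGE + OOB:
                break
            if layout == "separate":
                spares.append(page[PAGE:])
            else:
                spares.append(b"".join(
                    page[i + CHUNK:i + CHUNK + CHUNK_OOB]
                    for i in range(0, PAGE + OOB, CHUNK + CHUNK_OOB)))
    dists = [hamming(s, t) for s, t in zip(spares, spares[1:])]
    return float(sum(dists)) / len(dists)

# the layout giving the smallest average distance is the likely one
print(min(("separate", "adjacent"),
          key=lambda l: avg_spare_distance("full-dump.bin", l)))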

Here is a sample output of the tool:

$ ./Nand-dump-tool.py -i full-dump.bin --layout=guess -I adda1095 -o main-dump.bin
[*] Using given ID code
ID code                          : adda1095
Manufacturer                     : Hynix
Device                           : NAND 256MiB 3,3V 8-bit
Die/Package                      : 1
Cell type                        : 2 Level Cell
Simultaneously programmed pages  : 2
Interleave between multiple chips: False
Write cache                      : False
Page size                        : 2048 bytes (2 K)
Spare area size                  : 16 bytes / 512 byte
Block size                       : 131072 bytes (128 K)
Organization                     : X16
Serial access time               : 25 ns
OOB size                         : 64 bytes

[*] Guessing NAND layout using hamming distance...
[*] Guessed layout is: separate

[*] Start dumping...
[*] Finished
        Total: 276824064 bytes (264.00 MB)
        Data : 268435456 bytes (256.00 MB)
        OOB  : 8388608 bytes (8.00 MB)

After having split the dump, the next step was to mount the filesystem, which sounds pretty easy under Linux. But neither the file command nor the awesome binwalk tool could identify the filesystem! A quick hexdump of the first few bytes of the dump showed strings such as “UBI” and “UBIFS”, and a quick Google search taught me that UBIFS is a compressed filesystem dedicated to NAND chips.
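Incidentally, that is easy to check by hand: UBI starts every physical erase block it manages with an erase-counter header whose magic number is the ASCII string “UBI#”. A quick sketch to confirm it on the stripped dump (erase block size hard-coded for my chip):

BLOCK = 128 * 1024    # erase block size of my chip (data only, OOB stripped)

with open("main-dump.bin", "rb") as f:
    block = 0
    while True:
        magic = f.read(4)
        if len(magic) < 4:
            break
        if magic == b"UBI#":
            print("UBI erase-counter header at block %d" % block)
        f.seek(BLOCK - 4, 1)   # jump to the start of the next erase block
        block += 1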

Note: it seems that the file command is able to detect UBI filesystems, but the package provided by Ubuntu is too old. After compiling the version shipped in the binwalk sources and installing it on my system, it detected the filesystem correctly.

I tried some classic forensics tools such as DFF, Sleuth Kit, EnCase, and X-Ways, but none of them seemed capable of understanding this filesystem. It’s pretty strange, but UBIFS might not be widespread enough yet for such tools to offer deleted file or bad sector recovery for it. So the only option left was to use the Linux mount command. Fortunately, all the tools required to deal with UBI are included by default starting from Linux 2.6.27. But there is still one nasty thing here: to mount a UBIFS volume, you need to attach a UBI device, which in turn depends on an MTD device.

After some research, I found that Linux offers a handy kernel module called nandsim. By giving it the 4 bytes of the memory chip’s ID code, it will create an MTD device for you, backed by a ramdisk (you can also specify a cache file instead, but I didn’t want to mess up my dumps). Then you can use the ubiattach command to create one UBI device for each volume and simply mount them using the traditional mount -t ubifs command.

# modprobe nandsim first_id_byte=0xad second_id_byte=0xda third_id_byte=0x10 fourth_id_byte=0x95
# dd if=main-dump.bin of=/dev/mtd0
# modprobe ubi
# modprobe ubifs
# ubiattach --mtdn=0

But, in my case, it failed with an obscure error (Invalid argument). Looking at the kernel messages with the dmesg command, I spotted a strange message:

UBI error: validate_ec_hdr: bad VID header offset 2048, expected 512

So I had the idea to run the ubiattach command once more, this time with the offset it expected:

# ubiattach --mtdn=0 --vid-hdr-offset=2048

And it worked! I have absolutely no idea why the tool was designed to fail instead of just issuing a warning, but at least it gives you advice on how to work around the failure. With the right parameters, it created all the required devices (from /dev/ubi0_0 to /dev/ubi0_3 in my case), and by mounting them I could access all the files, analyze them, and potentially find vulnerabilities.

But that last part is another story :)