
raid10 in mdadm reports incorrect “Used Dev Size”

Question

I previously had a raid5 with mdadm using four 2TB drives. I’ve recently disassembled that array and created a raid10 with six 2TB drives, but mdadm --detail is showing “Used Dev Size” as only 2TB (the one-disk parity size of the original raid5) instead of the expected 6TB (half of the new 12TB).

Q: Is having this field at 2TB instead of 6TB going to be an issue?
Even if it might be fine, I still don’t like seeing it wrong.

I’m using CentOS 6.3 (2.6.32-279.9.1.el6.i686) with mdadm 3.2.3-9.el6.i686

I zeroed all the superblocks when I disassembled the raid5:

sudo mdadm --zero-superblock /dev/sda1
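The same command was run for each member disk; a one-line loop over all six, assuming /dev/sd[a-f]1 are the members used for the new array, does the same thing:

for dev in /dev/sd[a-f]1; do sudo mdadm --zero-superblock "$dev"; done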

Created the array with:

sudo mdadm -v --create /dev/md0 --level=raid10 --raid-devices=6 /dev/sd[a-f]1
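After creation, the progress of the initial resync can be watched with:

cat /proc/mdstat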

Current output of mdadm --detail

sudo mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Thu Sep 27 09:31:33 2012
     Raid Level : raid10
     Array Size : 5860535808 (5589.04 GiB 6001.19 GB)
  Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
   Raid Devices : 6
  Total Devices : 6
    Persistence : Superblock is persistent

    Update Time : Fri Sep 28 09:42:55 2012
          State : clean
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 512K

           Name : mega-breff:0  (local to host mega-breff)
           UUID : 08d9e66b:c1218cd5:6c8f0cb8:fd144d20
         Events : 19

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1
       4       8       65        4      active sync   /dev/sde1
       5       8       81        5      active sync   /dev/sdf1

Output of gdisk (each disk has the exact same partition layout):

sudo gdisk -l /dev/sda
GPT fdisk (gdisk) version 0.8.4

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sda: 3907029168 sectors, 1.8 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 742B7071-DAB2-4C74-9522-FC18D2EE135E
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 3907029134
Partitions will be aligned on 2048-sector boundaries
Total free space is 2148 sectors (1.0 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048      3907029000   1.8 TiB     FD00  Linux RAID
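To confirm the members really are identical in size, a quick sector count across all six (blockdev is standard util-linux; the device list is assumed from above) would look like:

for dev in /dev/sd[a-f]1; do echo -n "$dev: "; sudo blockdev --getsz "$dev"; done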
Asked by Jon F

Answer

That number is correct. When you set up a RAID array, whether RAID 1, 10, 5, or 6, every device has to contribute the same amount of space. If the devices are not the same size, the smallest one sets the baseline. “Used Dev Size” is that per-device number, not a total for the array.

For example, if you had a RAID 5 composed of 3 x 2 TB drives and 1 x 1 TB drive, then Used Dev Size would be 1 TB, because that is the amount of each drive that would actually be used.
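You can verify this against the --detail output in the question: for a near=2 raid10, the array size is the per-device used size (reported in KiB) times the number of devices, divided by the two copies.

echo $((1953511936 * 6 / 2))   # 5860535808 KiB, exactly the reported Array Size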

Answered by longneck
