[mdlug] "Device busy" errors and RAID setup issues
David McMillan
skyefire at skyefire.org
Fri Nov 20 19:41:24 EST 2009
Jeff Hanson wrote:
> On Tue, Nov 17, 2009 at 7:54 PM, David McMillan <skyefire at skyefire.org> wrote:
>> Busy, busy, busy. But I finally got time to sit down and do all of
>> this. So, I did:
>> dd if=/dev/zero of=/dev/sdX for all five of the drives I want to RAID.
>> Confirmed that each disk no longer had a partition table using fdisk.
>> Used fdisk to partition each drive as Linux Raid Autodetect.
>> Ran the following command: sudo mdadm --create --verbose /dev/md0
>> --level=10 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
>> --spare-devices=1 /dev/sdf1
>> And what I got was:
>> mdadm: layout defaults to n1
>> mdadm: chunk size defaults to 64K
>> mdadm: Cannot open /dev/sde1: Device or resource busy
>> mdadm: /dev/sdf1 appears to be part of a raid array:
>> level=raid10 devices=4 ctime=Sat Oct 10 02:50:35 2009
>> mdadm: create aborted
>>
>
> mdadm --zero-superblock should be enough to clear the old RAID data.
>
> Try setting up the array with the minimal number of members (as if it
> were RAID0), marking the rest as missing, then add the spare drives to
> complete it. I think that's what I did with my server, but I was using
> RAID1 and only had one member to add.
Well, I did that, and successfully created /dev/md0. /proc/mdstat
showed /dev/md0 as active and containing all of the hard drives except
the one I'd left as "missing" in the initial setup. I even put a label
on it and created a filesystem on it.
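(For reference, the sequence I ran was roughly the following; I'm
reconstructing it from memory, so treat the exact device names and
ordering as approximate:)

sudo mdadm --zero-superblock /dev/sdb1   # repeated for sdc1, sdd1, sde1, sdf1
sudo mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 \
    /dev/sdb1 /dev/sdc1 /dev/sdd1 missing
sudo mdadm /dev/md0 --add /dev/sdf1
sudo mdadm /dev/md0 --add /dev/sde1      # this one still failed with "busy"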
Then I rebooted the server, and now /dev/md0 is gone. Trying to add
the last drive fails, and /proc/mdstat gives me only this:
david at Archive:~$ sudo mdadm /dev/md0 --add /dev/sde1
mdadm: error opening /dev/md0: No such file or directory
david at Archive:~$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
[raid4] [raid10]
md_d0 : inactive sdf1[3](S)
976759936 blocks
/dev/sdf1 was the last drive I added to /dev/md0. /dev/sde1 was the
one I couldn't add due to the "device busy" error. Before the reboot,
/dev/sdb1, c1, d1, and f1 all showed up in /proc/mdstat. What the heck?
Why would a RAID array that was fully built and appeared to be fine
before the reboot just vanish after it? And why does sdf1, and only
sdf1, still show up in /proc/mdstat, and under the name md_d0 instead
of md0?
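In case it helps with the diagnosis, here's what I'm planning to try
next, going by the mdadm man page (I haven't run any of this yet, so
corrections welcome):

# see what RAID superblocks mdadm can actually find on the disks
sudo mdadm --examine --scan
sudo mdadm --examine /dev/sdf1

# stop the half-assembled md_d0, then try assembling by hand
sudo mdadm --stop /dev/md_d0
sudo mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sdf1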
While we're on the subject, I've got one other question: when adding a
drive to a RAID array with mdadm, how do you tell mdadm to treat the
drive as a member of the main array, or as a hot spare? So far, all the
HOWTOs I've found on adding drives just use --add for both actions,
without any apparent differentiation between regular and spare drives.
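For instance, the examples all look like this one, whether the new
drive is meant to be an active member or a spare:

sudo mdadm /dev/md0 --add /dev/sdf1

Is the distinction just implicit? That is, does --add pull the drive in
as an active member if the array is degraded, and leave it sitting as a
hot spare if the array is already complete?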