
									Install UT OS RAID1/RAID5
Sunday, August 15, 2010
9:37 AM


 I used "Oracle VM virtualbox" to test with.




Boot the UT ISO and enter the "Expert Mode" installer
Continue through the installation until partitioning comes up
Enter Manual mode for the partitioning process
Create a 100MB partition (this will become /boot)
Change the partition settings to match the picture below
Create a 1.5GB partition for swap usage (note: I will be
creating a RAID 5 with this partition later on, so it will end
up providing 3GB of swap. Adjust to 2.5 - 3GB if you are just
going for a RAID 1; see the capacity note after this list)
Make the partition settings match the picture below
Create a 40GB partition for UT OS usage (note: I will be
creating a RAID 5 with this partition later on, so it will end
up providing 80GB. Adjust to 80GB if you are just going for a
RAID 1)
Make the partition settings match the picture below
Assign the rest of the disk space to the 4th primary partition
Make the partition settings match the picture below
It should now look like the picture below. (Note: You can repeat
the partitions on the other HDDs or take care of it later like I
will)
Begin the installation
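A quick capacity check on those sizes (my own arithmetic, not
from the original document): a RAID 5 across N equal partitions
yields (N-1) x the partition size of usable space, while a
RAID 1 mirror yields the size of a single partition. So three
1.5GB swap partitions in RAID 5 give 2 x 1.5GB = 3GB of swap,
and three 40GB partitions give 80GB for the OS; a two-disk
RAID 1 needs full 3GB and 80GB partitions on each disk to reach
the same usable space.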




Boot into UT
You will get this error; it is normal. Press a key to continue.
(Note: We will fix this later.)
You will be greeted with this screen once UT loads up. Go
through the wizard or do it later. The choice is yours!




Click the terminal button on the UT desktop and specify a
password for root access.
Log on to the terminal using the password you just created.
Run these commands:
cd /etc/ssh
rm sshd_not_to_be_run
/etc/init.d/ssh start

This removes the file that stops SSH from starting and starts
the service. We will then be able to make a remote connection
to the UT server via PuTTY or some other SSH client. I like to
use PuTTY because its cut-and-paste feature can make some of
this work easier (highlight text to copy, right-click to
paste).
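If you want to confirm that sshd is actually listening before
reaching for an SSH client, one quick check from the UT terminal
(assuming the usual net-tools netstat is present on this
Lenny-based system) is:

netstat -tlnp | grep sshd

It should show sshd bound to port 22.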
Now I connect to the UT server using PuTTY after verifying what
IP address the server is using. The interface I have plugged into
my network is showing as br.eth0 with an IP of 192.168.3.52.

Run this command:
ifconfig
Start up PuTTY, put in the info, and click Open to make the
connection. You can keep working on the UT desktop if you
prefer.
Click Yes or No on the host key prompt; either way it connects.




You will be asked for a username and password to log in.

root is the default user
the password is whatever you set at the Untangle terminal window
on the desktop




Note: The rest of this walkthrough uses this webpage,
http://www.howtoforge.com/software-raid1-grub-boot-debian-etch,
as a reference/guide for the remaining configuration.

This guide explains how to set up software RAID1/RAID5 on an
already running Debian Etch/Lenny system installed from the
Untangle ISO (v7.4). The GRUB bootloader will be configured so
that the system will still be able to boot if one of the hard
drives fails (no matter which one).
I do not issue any guarantee that this will work for you!

1 Preliminary Note
In this tutorial I'm using a Debian Lenny system with three hard
drives, /dev/sda, /dev/sdb and /dev/sdc, which are identical in
size. /dev/sdb and /dev/sdc are currently unused, and /dev/sda
has the following partitions:
        /dev/sda1: /boot partition, ext3
        /dev/sda2: swap
        /dev/sda3: / partition, ext3
        /dev/sda4: /data partition, ext3
In the end I want to have the following situation:
        /dev/md0 (made up of /dev/sda1 and /dev/sdb1): /boot partition, ext3
        /dev/md1 (made up of /dev/sda2, /dev/sdb2 and /dev/sdc2): swap
        /dev/md2 (made up of /dev/sda3, /dev/sdb3 and /dev/sdc3): / partition, ext3
        /dev/md3 (made up of /dev/sda4, /dev/sdb4 and /dev/sdc4): /data partition, ext3

This is the current situation:
~ # df -h

Filesystem         Size Used Avail Use% Mounted on
/dev/sda3           37G 1.3G 34G 4% /
tmpfs             373M    0 373M 0% /lib/init/rw
udev              10M 688K 9.4M 7% /dev
tmpfs             373M    0 373M 0% /dev/shm
/dev/sda1           92M 24M 68M 26% /boot
/dev/sda4           12G 157M 11G 2% /data


~ # fdisk -l

Disk /dev/sda: 53.6 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00086047

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          12       96358+  83  Linux
/dev/sda2              13         194     1461915   82  Linux swap / Solaris
/dev/sda3             195        5057    39062047+  83  Linux
/dev/sda4            5058        6527    11807775   83  Linux

Disk /dev/sdb: 53.6 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/sdc: 53.6 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

Disk /dev/sdc doesn't contain a valid partition table

~#

2 Update software repositories

Change "/etc/apt/sources.list" to look at repositories that
have the packages we need.
pico /etc/apt/sources.list

#
# WARNING - DO NOT MODIFY THIS FILE
# Untangle can not support systems with modifications and third-party software installed
# Proceed only if you know what you are doing
#

#deb http://ftp.debian.org/debian lenny main contrib non-free

#deb http://security.debian.org lenny/updates main contrib non-free

#deb http://volatile.debian.org/debian-volatile lenny/volatile main contrib non-free

#deb http://www.backports.org/debian lenny-backports main contrib non-free
Edit this file to look like this
#
# WARNING - DO NOT MODIFY THIS FILE
# Untangle can not support systems with modifications and third-party software installed
# Proceed only if you know what you are doing
#

#deb http://ftp.debian.org/debian lenny main contrib non-free

#deb http://security.debian.org lenny/updates main contrib non-free

#deb http://volatile.debian.org/debian-volatile lenny/volatile main contrib non-free

#deb http://www.backports.org/debian lenny-backports main contrib non-free

deb http://ftp.nl.debian.org/debian/ lenny main contrib non-free

deb http://security.debian.org/ lenny/updates main contrib non-free

deb http://ftp.us.debian.org/debian/ lenny/updates main contrib non-free


Press ctrl+x to exit pico, y to save the changes, then hit
enter to confirm the filename.


3 Installing mdadm
The most important tool for setting up RAID is mdadm. Let's
install it:

~ # aptitude update
This updates the software package list.

After that finishes, run:

~ # aptitude install mdadm

Afterwards, we load a few kernel modules (to avoid a reboot).
Run these commands:
modprobe md
modprobe linear
modprobe multipath
modprobe raid0
modprobe raid1
modprobe raid5
modprobe raid6
modprobe raid10
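The eight modprobe calls are identical in shape, so if you
prefer a one-liner you can loop over the module names instead
(an equivalent shell shortcut, not from the original guide):

for m in md linear multipath raid0 raid1 raid5 raid6 raid10; do
    modprobe "$m"
done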

Now run
cat /proc/mdstat

The output should look as follows:
~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
[raid4] [raid10]
unused devices: <none>
~#

Return /etc/apt/sources.list to its original entries:
pico /etc/apt/sources.list

#
# WARNING - DO NOT MODIFY THIS FILE
# Untangle can not support systems with modifications and third-party software installed
# Proceed only if you know what you are doing
#

#deb http://ftp.debian.org/debian lenny main contrib non-free

#deb http://security.debian.org lenny/updates main contrib non-free

#deb http://volatile.debian.org/debian-volatile lenny/volatile main contrib non-free

#deb http://www.backports.org/debian lenny-backports main contrib non-free

deb http://ftp.nl.debian.org/debian/ lenny main contrib non-free

deb http://security.debian.org/ lenny/updates main contrib non-free
deb http://ftp.us.debian.org/debian/ lenny/updates main contrib non-free


Edit this file to look like this
#
# WARNING - DO NOT MODIFY THIS FILE
# Untangle can not support systems with modifications and third-party software installed
# Proceed only if you know what you are doing
#

#deb http://ftp.debian.org/debian lenny main contrib non-free

#deb http://security.debian.org lenny/updates main contrib non-free

#deb http://volatile.debian.org/debian-volatile lenny/volatile main contrib non-free

#deb http://www.backports.org/debian lenny-backports main contrib non-free
Press ctrl+x to exit pico, y to save the changes, then hit
enter to confirm the filename.

If /etc/apt/sources.list is left modified, it can interfere
with the Untangle upgrade process.
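A less error-prone alternative to re-typing the stock entries is
to snapshot the file before editing and copy the snapshot back
when you are done (my own shortcut; the .bak name is arbitrary):

cp /etc/apt/sources.list /etc/apt/sources.list.bak
# ... edit the file, aptitude update, aptitude install mdadm ...
cp /etc/apt/sources.list.bak /etc/apt/sources.list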

4 Preparing /dev/sdb and /dev/sdc
To create the RAID arrays on our already running system, we must
prepare the /dev/sdb and /dev/sdc hard drives for RAID, copy the
contents of the /dev/sda hard drive onto the new arrays, and
finally add /dev/sda itself to the arrays.
First, we copy the partition table from /dev/sda to /dev/sdb so
that both disks have exactly the same layout. We prep /dev/sdc
the same way for RAID5 use.

Run these commands:
sfdisk -d /dev/sda | sfdisk /dev/sdb
sfdisk -d /dev/sda | sfdisk /dev/sdc
The output should be as follows:

~ # sfdisk -d /dev/sda | sfdisk /dev/sdb
[root @ hostname]
Checking that no-one is using this disk right now ...
OK

Disk /dev/sdb: 6527 cylinders, 255 heads, 63 sectors/track

sfdisk: ERROR: sector 0 does not have an msdos signature
 /dev/sdb: unrecognized partition table type
Old situation:
No partitions found
New situation:
Units = sectors of 512 bytes, counting from 0

   Device Boot     Start        End   #sectors  Id  System
/dev/sdb1   *         63     192779     192717  83  Linux
/dev/sdb2         192780    3116609    2923830  82  Linux swap / Solaris
/dev/sdb3        3116610   81240704   78124095  83  Linux
/dev/sdb4       81240705  104856254   23615550  83  Linux
Successfully wrote the new partition table

Re-reading the partition table ...

If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes: dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)
root@hostname#
~ # sfdisk -d /dev/sda | sfdisk /dev/sdc
[root @ hostname]
Checking that no-one is using this disk right now ...
OK
Disk /dev/sdc: 6527 cylinders, 255 heads, 63 sectors/track

sfdisk: ERROR: sector 0 does not have an msdos signature
 /dev/sdc: unrecognized partition table type
Old situation:
No partitions found
New situation:
Units = sectors of 512 bytes, counting from 0

   Device Boot     Start        End   #sectors  Id  System
/dev/sdc1   *         63     192779     192717  83  Linux
/dev/sdc2         192780    3116609    2923830  82  Linux swap / Solaris
/dev/sdc3        3116610   81240704   78124095  83  Linux
/dev/sdc4       81240705  104856254   23615550  83  Linux
Successfully wrote the new partition table

Re-reading the partition table ...

If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes: dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)

fdisk -l
should now show that all HDDs have the same layout:

~ # fdisk -l
[root @ hostname]

Disk /dev/sda: 53.6 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00086047

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          12       96358+  83  Linux
/dev/sda2              13         194     1461915   82  Linux swap / Solaris
/dev/sda3             195        5057    39062047+  83  Linux
/dev/sda4            5058        6527    11807775   83  Linux

Disk /dev/sdb: 53.6 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          12       96358+  83  Linux
/dev/sdb2              13         194     1461915   82  Linux swap / Solaris
/dev/sdb3             195        5057    39062047+  83  Linux
/dev/sdb4            5058        6527    11807775   83  Linux

Disk /dev/sdc: 53.6 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1   *           1          12       96358+  83  Linux
/dev/sdc2              13         194     1461915   82  Linux swap / Solaris
/dev/sdc3             195        5057    39062047+  83  Linux
/dev/sdc4            5058        6527    11807775   83  Linux
root@hostname#
~#

Next we must change the partition type of our four partitions on
/dev/sdb and /dev/sdc to Linux raid autodetect:
fdisk /dev/sdb
~ # fdisk /dev/sdb
[root @ hostname]

The number of cylinders for this disk is set to 6527.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
  (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): m
Command action
  a toggle a bootable flag
  b edit bsd disklabel
  c toggle the dos compatibility flag
  d delete a partition
  l list known partition types
  m print this menu
  n add a new partition
  o create a new empty DOS partition table
  p print the partition table
  q quit without saving changes
  s create a new empty Sun disklabel
  t change a partition's system id
  u change display/entry units
  v verify the partition table
  w write table to disk and exit
  x extra functionality (experts only)

Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): l

 0  Empty           1e  Hidden W95 FAT1 80  Old Minix       be  Solaris boot
 1  FAT12           24  NEC DOS         81  Minix / old Lin bf  Solaris
 2  XENIX root      39  Plan 9          82  Linux swap / So c1  DRDOS/sec (FAT-
 3  XENIX usr       3c  PartitionMagic  83  Linux           c4  DRDOS/sec (FAT-
 4  FAT16 <32M      40  Venix 80286     84  OS/2 hidden C:  c6  DRDOS/sec (FAT-
 5  Extended        41  PPC PReP Boot   85  Linux extended  c7  Syrinx
 6  FAT16           42  SFS             86  NTFS volume set da  Non-FS data
 7  HPFS/NTFS       4d  QNX4.x          87  NTFS volume set db  CP/M / CTOS / .
 8  AIX             4e  QNX4.x 2nd part 88  Linux plaintext de  Dell Utility
 9  AIX bootable    4f  QNX4.x 3rd part 8e  Linux LVM       df  BootIt
 a  OS/2 Boot Manag 50  OnTrack DM      93  Amoeba          e1  DOS access
 b  W95 FAT32       51  OnTrack DM6 Aux 94  Amoeba BBT      e3  DOS R/O
 c  W95 FAT32 (LBA) 52  CP/M            9f  BSD/OS          e4  SpeedStor
 e  W95 FAT16 (LBA) 53  OnTrack DM6 Aux a0  IBM Thinkpad hi eb  BeOS fs
 f  W95 Ext'd (LBA) 54  OnTrackDM6      a5  FreeBSD         ee  EFI GPT
10  OPUS            55  EZ-Drive        a6  OpenBSD         ef  EFI (FAT-12/16/
11  Hidden FAT12    56  Golden Bow      a7  NeXTSTEP        f0  Linux/PA-RISC b
12  Compaq diagnost 5c  Priam Edisk     a8  Darwin UFS      f1  SpeedStor
14  Hidden FAT16 <3 61  SpeedStor       a9  NetBSD          f4  SpeedStor
16  Hidden FAT16    63  GNU HURD or Sys ab  Darwin boot     f2  DOS secondary
17  Hidden HPFS/NTF 64  Novell Netware  b7  BSDI fs         fd  Linux raid auto
18  AST SmartSleep  65  Novell Netware  b8  BSDI swap       fe  LANstep
1b  Hidden W95 FAT3 70  DiskSecure Mult bb  Boot Wizard hid ff  BBT
1c  Hidden W95 FAT3 75  PC/IX
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): fd
Changed system type of partition 2 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-4): 3
Hex code (type L to list codes): fd
Changed system type of partition 3 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-4): 4
Hex code (type L to list codes): fd
Changed system type of partition 4 to fd (Linux raid autodetect)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
root@hostname#


~ # fdisk /dev/sdc
[root @ hostname]

The number of cylinders for this disk is set to 6527.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
  (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): fd
Changed system type of partition 2 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-4): 3
Hex code (type L to list codes): fd
Changed system type of partition 3 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-4): 4
Hex code (type L to list codes): fd
Changed system type of partition 4 to fd (Linux raid autodetect)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
root@hostname#

~ # fdisk -l
[root @ hostname]

Disk /dev/sda: 53.6 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00086047

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          12       96358+  83  Linux
/dev/sda2              13         194     1461915   82  Linux swap / Solaris
/dev/sda3             195        5057    39062047+  83  Linux
/dev/sda4            5058        6527    11807775   83  Linux

Disk /dev/sdb: 53.6 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          12       96358+  fd  Linux raid autodetect
/dev/sdb2              13         194     1461915   fd  Linux raid autodetect
/dev/sdb3             195        5057    39062047+  fd  Linux raid autodetect
/dev/sdb4            5058        6527    11807775   fd  Linux raid autodetect

Disk /dev/sdc: 53.6 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1   *           1          12       96358+  fd  Linux raid autodetect
/dev/sdc2              13         194     1461915   fd  Linux raid autodetect
/dev/sdc3             195        5057    39062047+  fd  Linux raid autodetect
/dev/sdc4            5058        6527    11807775   fd  Linux raid autodetect
root@hostname#
~#


Run "fdisk /dev/sdc" one more time and delete partition sdc1
as it is not needed.

~ # fdisk /dev/sdc
[root @ hostname]

The number of cylinders for this disk is set to 6527.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
  (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): d
Partition number (1-4): 1

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
root@hostname#

Now run fdisk -l again:

~ # fdisk -l
[root @ hostname]

Disk /dev/sda: 53.6 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00086047

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          12       96358+  83  Linux
/dev/sda2              13         194     1461915   82  Linux swap / Solaris
/dev/sda3             195        5057    39062047+  83  Linux
/dev/sda4            5058        6527    11807775   83  Linux

Disk /dev/sdb: 53.6 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          12       96358+  fd  Linux raid autodetect
/dev/sdb2              13         194     1461915   fd  Linux raid autodetect
/dev/sdb3             195        5057    39062047+  fd  Linux raid autodetect
/dev/sdb4            5058        6527    11807775   fd  Linux raid autodetect

Disk /dev/sdc: 53.6 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc2              13         194     1461915   fd  Linux raid autodetect
/dev/sdc3             195        5057    39062047+  fd  Linux raid autodetect
/dev/sdc4            5058        6527    11807775   fd  Linux raid autodetect
root@hostname#
~#
To make sure that there are no remains from previous RAID
installations on /dev/sdb and /dev/sdc, we run the following
commands:

mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdb2
mdadm --zero-superblock /dev/sdb3
mdadm --zero-superblock /dev/sdb4

mdadm --zero-superblock /dev/sdc2
mdadm --zero-superblock /dev/sdc3
mdadm --zero-superblock /dev/sdc4

If there are no remains from previous RAID installations, each of
the above commands will throw an error like this one (which is
nothing to worry about):

~# mdadm --zero-superblock /dev/sdb1
mdadm: Unrecognised md component device - /dev/sdb1
~#

Otherwise the commands will not display anything at all.


5 Creating Our RAID Arrays

Now let's create our RAID arrays /dev/md0, /dev/md1, /dev/md2,
and /dev/md3. /dev/sdb1 will be added to /dev/md0; /dev/sdb2 and
/dev/sdc2 to /dev/md1; /dev/sdb3 and /dev/sdc3 to /dev/md2; and
/dev/sdb4 and /dev/sdc4 to /dev/md3. /dev/sda1, /dev/sda2,
/dev/sda3 and /dev/sda4 can't be added right now (because the
system is currently running on them), therefore we use the
placeholder missing in the following four commands:

Run these commands:
mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb1
mdadm --create /dev/md1 --level=5 --raid-disks=3 missing /dev/sdb2 /dev/sdc2
mdadm --create /dev/md2 --level=5 --raid-disks=3 missing /dev/sdb3 /dev/sdc3
mdadm --create /dev/md3 --level=5 --raid-disks=3 missing /dev/sdb4 /dev/sdc4

The command
cat /proc/mdstat
should now show that you have four degraded RAID arrays
([_U] or [U_] means that an array is degraded, while [UU] means
that the array is ok):

~ # cat /proc/mdstat
[root @ hostname]
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md3 : active (auto-read-only) raid5 sdc4[2] sdb4[1]
      23615360 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU]

md2 : active (auto-read-only) raid5 sdc3[2] sdb3[1]
      78123904 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU]

md1 : active (auto-read-only) raid5 sdc2[2] sdb2[1]
      2923648 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU]

md0 : active (auto-read-only) raid1 sdb1[1]
      96256 blocks [2/1] [_U]

unused devices: <none>
root@hostname#
~#
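For a more verbose view of a single array (its state, which slot
is missing, the chunk size), you can also ask mdadm directly,
for example:

mdadm --detail /dev/md2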
Next we create filesystems on our RAID arrays (ext3 on
/dev/md0, /dev/md2 and /dev/md3, and swap on /dev/md1):

Create filesystems on the RAID devices using these
commands:
mkfs.ext3 /dev/md0
mkswap /dev/md1
mkfs.ext3 /dev/md2
mkfs.ext3 /dev/md3

Next we must adjust /etc/mdadm/mdadm.conf (which
doesn't contain any information about our new RAID arrays yet)
to the new situation:

Run these commands:
cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.orig
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
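If you want to see the ARRAY lines before appending them to the
file, you can run the scan on its own first; it simply prints
one line per array to stdout:

mdadm --examine --scan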

Display the contents of the file:
cat /etc/mdadm/mdadm.conf

At the bottom of the file you should now see details about our
four (degraded) RAID arrays:
 ~ # cat /etc/mdadm/mdadm.conf
[root @ hostname]
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

# This file was auto-generated on Mon, 16 Aug 2010 07:15:28 -0400
# by mkconf $Id$
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=fa3b061c:76f2feaf:2c6e4b14:f5b3b008
ARRAY /dev/md1 level=raid5 num-devices=3 UUID=51cf2a12:a675ea66:2c6e4b14:f5b3b008
ARRAY /dev/md2 level=raid5 num-devices=3 UUID=f3ba22a9:e6634c5b:2c6e4b14:f5b3b008
ARRAY /dev/md3 level=raid5 num-devices=3 UUID=0f0ea451:dff2d45b:2c6e4b14:f5b3b008
root@hostname#
 ~#


6 Adjusting The System To RAID
Now let's mount /dev/md0, /dev/md2, and /dev/md3 (we don't
need to mount the swap array /dev/md1):

Run these commands:
mkdir /mnt/md0
mkdir /mnt/md2
mkdir /mnt/md3
mount /dev/md0 /mnt/md0
mount /dev/md2 /mnt/md2
mount /dev/md3 /mnt/md3

You should now find the arrays in the output of
mount

~ # mount
[root @ hostname]
/dev/sda3 on / type ext3 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
procbususb on /proc/bus/usb type usbfs (rw)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/sda1 on /boot type ext3 (rw)
/dev/sda4 on /data type ext3 (rw)
/dev/md0 on /mnt/md0 type ext3 (rw)
/dev/md2 on /mnt/md2 type ext3 (rw)
/dev/md3 on /mnt/md3 type ext3 (rw)
root@hostname#
~#

Next we modify /etc/fstab. Replace /dev/sda1 with /dev/md0,
/dev/sda2 with /dev/md1, /dev/sda3 with /dev/md2 and
/dev/sda4 with /dev/md3 so that the file looks as follows:

~# pico /etc/fstab
# /etc/fstab: static file system information.
#
# <file system>  <mount point>  <type>       <options>          <dump> <pass>
proc             /proc          proc         defaults           0      0
/dev/sda3        /              ext3         errors=remount-ro  0      1
/dev/sda1        /boot          ext3         defaults           0      2
/dev/sda4        /data          ext3         defaults           0      2
/dev/sda2        none           swap         sw                 0      0
/dev/hdc         /media/cdrom0  udf,iso9660  user,noauto        0      0


Change this file to look like this
# /etc/fstab: static file system information.
#
# <file system>  <mount point>  <type>       <options>          <dump> <pass>
proc             /proc          proc         defaults           0      0
/dev/md2         /              ext3         errors=remount-ro  0      1
/dev/md0         /boot          ext3         defaults           0      2
/dev/md3         /data          ext3         defaults           0      2
/dev/md1         none           swap         sw                 0      0
/dev/hdc         /media/cdrom0  udf,iso9660  user,noauto        0      0

Save the changes you made to the file: ctrl+x, y to save, then
confirm the file name.
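If you would rather script those four substitutions than edit by
hand, a sed one-liner does the same job (my own shortcut, not
part of the referenced guide; back the file up and eyeball the
result afterwards):

cp /etc/fstab /etc/fstab.orig
sed -i -e 's|/dev/sda1|/dev/md0|' -e 's|/dev/sda2|/dev/md1|' \
    -e 's|/dev/sda3|/dev/md2|' -e 's|/dev/sda4|/dev/md3|' /etc/fstab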

Next replace /dev/sda1 with /dev/md0, /dev/sda3 with
/dev/md2 and /dev/sda4 with /dev/md3 in /etc/mtab:
~# pico /etc/mtab
/dev/sda3 / ext3 rw,errors=remount-ro 0 0
tmpfs /lib/init/rw tmpfs rw,nosuid,mode=0755 0 0
proc /proc proc rw,noexec,nosuid,nodev 0 0
sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
procbususb /proc/bus/usb usbfs rw 0 0
udev /dev tmpfs rw,mode=0755 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=620 0 0
/dev/sda1 /boot ext3 rw 0 0
/dev/sda4 /data ext3 rw 0 0
/dev/md0 /mnt/md0 ext3 rw 0 0
/dev/md2 /mnt/md2 ext3 rw 0 0
/dev/md3 /mnt/md3 ext3 rw 0 0
Change this file to look like this
/dev/md2 / ext3 rw,errors=remount-ro 0 0
tmpfs /lib/init/rw tmpfs rw,nosuid,mode=0755 0 0
proc /proc proc rw,noexec,nosuid,nodev 0 0
sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
procbususb /proc/bus/usb usbfs rw 0 0
udev /dev tmpfs rw,mode=0755 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=620 0 0
/dev/md0 /boot ext3 rw 0 0
/dev/md3 /data ext3 rw 0 0
/dev/md0 /mnt/md0 ext3 rw 0 0
/dev/md2 /mnt/md2 ext3 rw 0 0
/dev/md3 /mnt/md3 ext3 rw 0 0
Don't remove the duplicate /dev/mdX entries; just leave them.
Save your changes: ctrl+x, y to save, then confirm the file
name.

Now on to the GRUB boot loader. Open /boot/grub/menu.lst
and add fallback 1 right after the default line:
~# pico /boot/grub/menu.lst
# You can specify 'saved' instead of a number. In this case, the default entry
# is the entry saved with the command 'savedefault'.
# WARNING: If you are using dmraid do not use 'savedefault' or your
# array will desync and will not let you boot your system.
default     saved

## timeout sec
# Set a timeout, in SEC seconds, before automatically booting the default entry
# (normally the first entry defined).
timeout   5


Change file to look like this
# You can specify 'saved' instead of a number. In this case, the default entry
# is the entry saved with the command 'savedefault'.
# WARNING: If you are using dmraid do not use 'savedefault' or your
# array will desync and will not let you boot your system.
default     saved
fallback    1

## timeout sec
# Set a timeout, in SEC seconds, before automatically booting the default entry
# (normally the first entry defined).
timeout     5
This means that if the first entry (counting starts with 0, so
the first entry is 0) fails to boot, the second entry will be
booted.
In the same file, go to the bottom, where you should find some
kernel stanzas. Copy the first of them and paste the copy
before the first existing stanza; replace root=UUID=... with
root=/dev/md2 and root (hd0,0) with root (hd1,0):
## ## End Default Options ##

title    Debian GNU/Linux, kernel 2.6.26-2-untangle-686
root      (hd0,0)
kernel      /vmlinuz-2.6.26-2-untangle-686 root=UUID=ddca669f-675f-45ba-a8e5-642b1b4f9764 ro
ramdisk_size=100000 panic=5 hpet=disable vga=791 quiet splash ut-video
initrd    /initrd.img-2.6.26-2-untangle-686
savedefault

title    Debian GNU/Linux, kernel 2.6.26-2-untangle-686 (high resolution mode)
root      (hd0,0)
kernel      /vmlinuz-2.6.26-2-untangle-686 root=UUID=ddca669f-675f-45ba-a8e5-642b1b4f9764 ro
ramdisk_size=100000 panic=5 hpet=disable vga=791 quiet splash
initrd    /initrd.img-2.6.26-2-untangle-686
savedefault
title    Debian GNU/Linux, kernel 2.6.26-2-untangle-686 (hardware safe mode)
root      (hd0,0)
kernel      /vmlinuz-2.6.26-2-untangle-686 root=UUID=ddca669f-675f-45ba-a8e5-642b1b4f9764 ro
ramdisk_size=100000 panic=5 hpet=disable acpi=off noapic
initrd    /initrd.img-2.6.26-2-untangle-686
savedefault

title    Debian GNU/Linux, kernel 2.6.26-2-untangle-686 (recovery mode)
root      (hd0,0)
kernel      /vmlinuz-2.6.26-2-untangle-686 root=UUID=ddca669f-675f-45ba-a8e5-642b1b4f9764 ro
ramdisk_size=100000 panic=5 hpet=disable ut-restore
initrd    /initrd.img-2.6.26-2-untangle-686
savedefault

### END DEBIAN AUTOMAGIC KERNELS LIST

Change this file to look like this
title    Debian GNU/Linux, kernel 2.6.26-2-untangle-686 RAID(hd1)
root      (hd1,0)
kernel      /vmlinuz-2.6.26-2-untangle-686 root=/dev/md2 ro ramdisk_size=100000 panic=5
hpet=disable vga=791 quiet ut-video
initrd    /initrd.img-2.6.26-2-untangle-686
savedefault

title    Debian GNU/Linux, kernel 2.6.26-2-untangle-686
root      (hd0,0)
kernel      /vmlinuz-2.6.26-2-untangle-686 root=UUID=ddca669f-675f-45ba-a8e5-642b1b4f9764 ro
ramdisk_size=100000 panic=5 hpet=disable vga=791 quiet splash ut-video
initrd    /initrd.img-2.6.26-2-untangle-686
savedefault

title    Debian GNU/Linux, kernel 2.6.26-2-untangle-686 RAID(hd1)(high resolution mode)
root      (hd1,0)
kernel      /vmlinuz-2.6.26-2-untangle-686 root=/dev/md2 ro ramdisk_size=100000 panic=5
hpet=disable vga=791 quiet
initrd    /initrd.img-2.6.26-2-untangle-686
savedefault

title    Debian GNU/Linux, kernel 2.6.26-2-untangle-686 RAID(hd1)(hardware safe mode)
root      (hd1,0)
kernel      /vmlinuz-2.6.26-2-untangle-686 root=/dev/md2 ro ramdisk_size=100000 panic=5
hpet=disable acpi=off noapic
initrd    /initrd.img-2.6.26-2-untangle-686
savedefault
title    Debian GNU/Linux, kernel 2.6.26-2-untangle-686 RAID(hd1)(recovery mode)
root      (hd1,0)
kernel      /vmlinuz-2.6.26-2-untangle-686 root=/dev/md2 ro ramdisk_size=100000 panic=5
hpet=disable ut-restore
initrd    /initrd.img-2.6.26-2-untangle-686
savedefault

(Note: Notice in stanzas 1 and 3 that I have removed the
"splash" option toward the end of the kernel lines. This is a
personal preference; I like to see everything load up so I can
spot any issues I overlooked. The "splash" option is what puts
up the loading screen while UT boots and hides everything going
on in the background.)

Now to fix that error we get when booting. Find the line below
and change it.
splashimage=(hd0,0)/boot/grub/utsplash.xpm.gz
Change this line to read
splashimage=(hd0,0)/grub/utsplash.xpm.gz
Now save menu.lst: ctrl+x, y to save, then confirm the file
name.


root (hd1,0) refers to /dev/sdb which is already part of our RAID
arrays. We will reboot the system in a few moments; the system
will then try to boot from our (still degraded) RAID arrays; if it
fails, it will boot from /dev/sda (-> fallback 1).
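If you want to double-check which physical disk GRUB legacy
means by hd0 and hd1, its device map spells it out (the path is
the Debian default; adjust if yours differs):

cat /boot/grub/device.map

On this layout it should read something like:
(hd0)   /dev/sda
(hd1)   /dev/sdb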

Next we adjust our ramdisk to the new situation. Run this
command:
update-initramfs -u

Now we copy the contents of /dev/sda1, /dev/sda3 and
/dev/sda4 to /dev/md0, /dev/md2 and /dev/md3 (which are
mounted on /mnt/md0, /mnt/md2 and /mnt/md3):

Run these commands:
cp -dpRx / /mnt/md2
cd /boot
cp -dpRx . /mnt/md0
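For reference, what those cp switches do (standard GNU cp
behaviour, nothing Untangle-specific):

# -d  preserve symlinks rather than following them
# -p  preserve ownership, permissions and timestamps
# -R  copy directories recursively
# -x  stay on this one filesystem, so /proc, /sys and the
#     /mnt/* mountpoints themselves are not copied into the array
cp -dpRx / /mnt/md2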


7 Preparing GRUB (Part 1)
Afterwards we must install the GRUB bootloader on both hard
drives, /dev/sda and /dev/sdb:

Run this command:
grub

On the GRUB shell, type in the following commands:
grub> root (hd0,0)
 Filesystem type is ext2fs, partition type 0x83

grub> setup (hd0)
 Checking if "/boot/grub/stage1" exists... no
 Checking if "/grub/stage1" exists... yes
 Checking if "/grub/stage2" exists... yes
 Checking if "/grub/e2fs_stage1_5" exists... yes
 Running "embed /grub/e2fs_stage1_5 (hd0)"... 17 sectors are embedded.
succeeded
 Running "install /grub/stage1 (hd0) (hd0)1+17 p (hd0,0)/grub/stage2 /grub/menu.lst"... succeeded
Done.

grub> root (hd1,0)
 Filesystem type is ext2fs, partition type 0xfd

grub> setup (hd1)
 Checking if "/boot/grub/stage1" exists... no
 Checking if "/grub/stage1" exists... yes
 Checking if "/grub/stage2" exists... yes
 Checking if "/grub/e2fs_stage1_5" exists... yes
 Running "embed /grub/e2fs_stage1_5 (hd1)"... 17 sectors are embedded.
succeeded
 Running "install /grub/stage1 (hd1) (hd1)1+17 p (hd1,0)/grub/stage2 /grub/menu.lst"... succeeded
Done.

grub> quit

Now, back on the normal shell, we reboot the system and hope
that it boots ok from our RAID arrays:

Run this command:
reboot



8 Preparing /dev/sda
If all goes well, you should now find /dev/md0, /dev/md2 and
/dev/md3 in the output of
df -h

~ # df -h
[root @ hostname]
Filesystem       Size Used Avail Use% Mounted on
/dev/md2          74G 1.3G 69G 2% /
tmpfs          373M     0 373M 0% /lib/init/rw
udev            10M 732K 9.3M 8% /dev
tmpfs          373M     0 373M 0% /dev/shm
/dev/md0          92M 24M 63M 28% /boot
/dev/md3          23G 173M 21G 1% /data
~#

The output of cat /proc/mdstat should be as follows:
~# cat /proc/mdstat
[root @ hostname]
Personalities : [raid1] [raid6] [raid5] [raid4]
md3 : active raid5 sdb4[1] sdc4[2]
      23615360 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU]

md2 : active raid5 sdb3[1] sdc3[2]
      78123904 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU]

md1 : active (auto-read-only) raid5 sdb2[1] sdc2[2]
      2923648 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU]

md0 : active raid1 sdb1[1]
      96256 blocks [2/1] [_U]

unused devices: <none>
root@hostname#
~#


Now we must change the partition types of our four partitions on
/dev/sda to Linux raid autodetect as well:
~ # fdisk /dev/sda
[root @ hostname]

The number of cylinders for this disk is set to 6527.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
  (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): fd
Changed system type of partition 2 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-4): 3
Hex code (type L to list codes): fd
Changed system type of partition 3 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-4): 4
Hex code (type L to list codes): fd
Changed system type of partition 4 to fd (Linux raid autodetect)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16:
Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
root@hostname#
~#

Now we can add /dev/sda1, /dev/sda2, /dev/sda3 and /dev/sda4
to the respective RAID arrays:

Run these commands:
mdadm --add /dev/md0 /dev/sda1
mdadm --add /dev/md1 /dev/sda2
mdadm --add /dev/md2 /dev/sda3
mdadm --add /dev/md3 /dev/sda4

Now take a look at cat /proc/mdstat
... and you should see that the RAID arrays are being
synchronized:
~ # cat /proc/mdstat
[root @ hostname]
Personalities : [raid1] [raid6] [raid5] [raid4]
md3 : active raid5 sda4[3] sdb4[1] sdc4[2]
      23615360 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU]
      resync=DELAYED

md2 : active raid5 sda3[3] sdb3[1] sdc3[2]
      78123904 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU]
      [>....................] recovery = 0.6% (267008/39061952) finish=19.3min speed=33376K/sec

md1 : active raid5 sda2[3] sdb2[1] sdc2[2]
      2923648 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU]
      resync=DELAYED

md0 : active raid1 sda1[0] sdb1[1]
      96256 blocks [2/2] [UU]

unused devices: <none>
root@hostname#
~#
(You can run watch cat /proc/mdstat to get an ongoing
output of the process. To leave watch, press CTRL+C.)
Wait until the synchronization has finished.

The output of cat /proc/mdstat should then look like this:
~ # cat /proc/mdstat
[root @ hostname]
Personalities : [raid1] [raid6] [raid5] [raid4]
md3 : active raid5 sda4[0] sdb4[1] sdc4[2]
      23615360 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

md2 : active raid5 sda3[0] sdb3[1] sdc3[2]
      78123904 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

md1 : active raid5 sda2[0] sdb2[1] sdc2[2]
      2923648 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

md0 : active raid1 sda1[0] sdb1[1]
      96256 blocks [2/2] [UU]

unused devices: <none>
root@hostname#
~#

Then adjust /etc/mdadm/mdadm.conf to the new situation:

Run these commands:
cp /etc/mdadm/mdadm.conf.orig /etc/mdadm/mdadm.conf
mdadm --examine --scan >> /etc/mdadm/mdadm.conf

/etc/mdadm/mdadm.conf should now look something
like this:
 ~ # cat /etc/mdadm/mdadm.conf
[root @ hostname]
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

# This file was auto-generated on Mon, 16 Aug 2010 07:15:28 -0400
# by mkconf $Id$
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=fa3b061c:76f2feaf:2c6e4b14:f5b3b008
ARRAY /dev/md1 level=raid5 num-devices=3 UUID=51cf2a12:a675ea66:2c6e4b14:f5b3b008
ARRAY /dev/md2 level=raid5 num-devices=3 UUID=f3ba22a9:e6634c5b:2c6e4b14:f5b3b008
ARRAY /dev/md3 level=raid5 num-devices=3 UUID=0f0ea451:dff2d45b:2c6e4b14:f5b3b008
root@hostname#
~#



9 Preparing GRUB (Part 2)

We are almost done now. Now we must modify
/boot/grub/menu.lst again. Right now it is configured to boot
from /dev/sdb (hd1,0). Of course, we still want the system to be
able to boot in case /dev/sdb fails. Therefore we copy the first
kernel stanza (which contains hd1), paste it below and replace
hd1 with hd0. Furthermore we comment out or delete all original
kernel stanzas so that it looks as follows:

~# pico /boot/grub/menu.lst
[...]
## ## End Default Options ##

title    Debian GNU/Linux, kernel 2.6.26-2-untangle-686 RAID(hd1)
root      (hd1,0)
kernel      /vmlinuz-2.6.26-2-untangle-686 root=/dev/md2 ro ramdisk_size=100000 panic=5
hpet=disable vga=791 quiet ut-video
initrd    /initrd.img-2.6.26-2-untangle-686
savedefault

title    Debian GNU/Linux, kernel 2.6.26-2-untangle-686
root      (hd0,0)
kernel      /vmlinuz-2.6.26-2-untangle-686 root=UUID=ddca669f-675f-45ba-a8e5-642b1b4f9764 ro
ramdisk_size=100000 panic=5 hpet=disable vga=791 quiet splash ut-video
initrd    /initrd.img-2.6.26-2-untangle-686
savedefault

title    Debian GNU/Linux, kernel 2.6.26-2-untangle-686 RAID(hd1)(high resolution mode)
root      (hd1,0)
kernel      /vmlinuz-2.6.26-2-untangle-686 root=/dev/md2 ro ramdisk_size=100000 panic=5
hpet=disable vga=791 quiet
initrd    /initrd.img-2.6.26-2-untangle-686
savedefault

title    Debian GNU/Linux, kernel 2.6.26-2-untangle-686 RAID(hd1)(hardware safe mode)
root      (hd1,0)
kernel      /vmlinuz-2.6.26-2-untangle-686 root=/dev/md2 ro ramdisk_size=100000 panic=5
hpet=disable acpi=off noapic
initrd    /initrd.img-2.6.26-2-untangle-686
savedefault

title    Debian GNU/Linux, kernel 2.6.26-2-untangle-686 RAID(hd1)(recovery mode)
root      (hd1,0)
kernel      /vmlinuz-2.6.26-2-untangle-686 root=/dev/md2 ro ramdisk_size=100000 panic=5
hpet=disable ut-restore
initrd    /initrd.img-2.6.26-2-untangle-686
savedefault

### END DEBIAN AUTOMAGIC KERNELS LIST
Change this file to look like this
## ## End Default Options ##

title    Debian GNU/Linux, kernel 2.6.26-2-untangle-686 RAID(hd1)
root      (hd1,0)
kernel      /vmlinuz-2.6.26-2-untangle-686 root=/dev/md2 ro ramdisk_size=100000 panic=5
hpet=disable vga=791 quiet ut-video
initrd    /initrd.img-2.6.26-2-untangle-686
savedefault

title    Debian GNU/Linux, kernel 2.6.26-2-untangle-686 RAID(hd0)
root      (hd0,0)
kernel     /vmlinuz-2.6.26-2-untangle-686 root=/dev/md2 ro ramdisk_size=100000 panic=5
hpet=disable vga=791 quiet ut-video
initrd    /initrd.img-2.6.26-2-untangle-686

title    Debian GNU/Linux, kernel 2.6.26-2-untangle-686 RAID(hd1)(high resolution mode)
root      (hd1,0)
kernel      /vmlinuz-2.6.26-2-untangle-686 root=/dev/md2 ro ramdisk_size=100000 panic=5
hpet=disable vga=791 quiet
initrd    /initrd.img-2.6.26-2-untangle-686
savedefault

title    Debian GNU/Linux, kernel 2.6.26-2-untangle-686 RAID(hd0)(high resolution mode)
root      (hd0,0)
kernel     /vmlinuz-2.6.26-2-untangle-686 root=/dev/md2 ro ramdisk_size=100000 panic=5
hpet=disable vga=791 quiet
initrd    /initrd.img-2.6.26-2-untangle-686

title    Debian GNU/Linux, kernel 2.6.26-2-untangle-686 RAID(hd1)(hardware safe mode)
root      (hd1,0)
kernel      /vmlinuz-2.6.26-2-untangle-686 root=/dev/md2 ro ramdisk_size=100000 panic=5
hpet=disable acpi=off noapic
initrd    /initrd.img-2.6.26-2-untangle-686
savedefault

title    Debian GNU/Linux, kernel 2.6.26-2-untangle-686 RAID(hd0)(hardware safe mode)
root      (hd0,0)
kernel     /vmlinuz-2.6.26-2-untangle-686 root=/dev/md2 ro ramdisk_size=100000 panic=5
hpet=disable acpi=off noapic
initrd    /initrd.img-2.6.26-2-untangle-686

title    Debian GNU/Linux, kernel 2.6.26-2-untangle-686 RAID(hd1)(recovery mode)
root      (hd1,0)
kernel     /vmlinuz-2.6.26-2-untangle-686 root=/dev/md2 ro ramdisk_size=100000 panic=5
hpet=disable ut-restore
initrd    /initrd.img-2.6.26-2-untangle-686
savedefault

title    Debian GNU/Linux, kernel 2.6.26-2-untangle-686 RAID(hd0)(recovery mode)
root      (hd0,0)
kernel     /vmlinuz-2.6.26-2-untangle-686 root=/dev/md2 ro ramdisk_size=100000 panic=5
hpet=disable ut-restore
initrd    /initrd.img-2.6.26-2-untangle-686

### END DEBIAN AUTOMAGIC KERNELS LIST
(Note: If you want the splash load screen back, add
"splash" after "quiet" in stanzas 1-4.)
In the same file (/boot/grub/menu.lst), there's a kopt line;
replace /dev/sda3 with /dev/md2 (don't remove the # at the
beginning of the line!):

Change entry in /boot/grub/menu.lst to match entry
below:
[...]
# kopt=root=/dev/md2 ro ramdisk_size=100000 lang=us apm=power-off screen=1024x768 nomce nodhcp nofstab panic=5
[...]

Afterwards, update your ramdisk:
update-initramfs -u

... and reboot the system:
reboot

It should boot without problems. Every entry that has
(hd0) is a fallback entry.
That's it - you've successfully set up software
RAID1/RAID5 on your running Debian Etch/Lenny system
installed from the UT ISO (v7.4).

10 Secure SSH - Important!

Remember that at the beginning I enabled SSH access. This is
good, as sometimes the web UI can do something funny and SSH is
a good way to remotely see what the UT server is doing. But SSH
that allows root to log on, or that does not require a secure
encryption key, is not a good idea. If someone finds you, they
will eventually try a brute force attack; I see it all the time.
Hell, I see SIP registration brute-force attacks against my
Asterisk servers.
You have two options that I am going to give you: disable SSH,
or restrict access to SSH so root can't log on via SSH.

To disable SSH the UT way:
cd /etc/ssh
touch sshd_not_to_be_run
/etc/init.d/ssh restart

When SSH tries to restart, you should get a line saying that
sshd is not to be run.
To restrict SSH access to forbid root:
You will need to create a user to first log into the server. Avoid
any common names, acronyms work well.
cd /etc/ssh
pico sshd_config
Locate PermitRootLogin and change this to read:
[…]
PermitRootLogin no
[…]
Save your changes to the file: ctrl+x, y to save, then confirm
the file name and press enter.

Run /etc/init.d/ssh restart and the service should restart
successfully.

Now we need a user to initially log into the SSH service with
before we can su - over to root
useradd someguy
passwd someguy
You will then be asked to enter a password and confirm it by
typing it a second time:
password
password

You should now be able to log into SSH as the someguy user.
After logging in you can then use the command "su -" (don't
forget the space after su) to switch to the normal root user,
which uses the same password you set up via the Untangle
desktop terminal.
login as: someguy
someguy@192.168.3.52's password:
Linux hostname.example.com 2.6.26-2-untangle-686 #1 SMP Fri
May 21 05:00:38 PDT 2010 i686

Could not chdir to home directory /home/someguy: No such file
or directory
someguy@hostname:/$ su -
Password:
root@hostname#
~#
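That "Could not chdir to home directory" warning is harmless; it
shows up because plain useradd does not create a home directory.
If you would rather the user get one (plus a normal shell),
create the account with the standard flags instead (my addition,
not from the original walkthrough):

useradd -m -s /bin/bash someguy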

Attempts to logon as root will fail.
login as: root
root@192.168.3.52's   password:
Access denied
root@192.168.3.52's   password:
Access denied
root@192.168.3.52's   password:
Access denied
root@192.168.3.52's   password:
Access denied
root@192.168.3.52's   password:
Access denied
root@192.168.3.52's   password:

This is usually enough to secure your server from unwanted
access. There is a far more secure method that requires the use
of an encryption key to gain access to SSH, but if your system
needs to be that secure I am going to make you work for it.
HAHA

11 Testing - optional

Now let's simulate a hard drive failure. It doesn't matter if you
select /dev/sda or /dev/sdb here. In this example I assume that
/dev/sdb has failed. To simulate the hard drive failure, you can
either shut down the system and remove /dev/sdb from the
system, or you (soft-)remove it like this:

Run these commands:
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md1 --fail /dev/sdb2
mdadm --manage /dev/md2 --fail /dev/sdb3
mdadm --manage /dev/md3 --fail /dev/sdb4
mdadm --manage /dev/md0 --remove /dev/sdb1
mdadm --manage /dev/md1 --remove /dev/sdb2
mdadm --manage /dev/md2 --remove /dev/sdb3
mdadm --manage /dev/md3 --remove /dev/sdb4
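Since the md number and the sdb partition number line up (md0
holds sdb1 ... md3 holds sdb4), the same eight commands can be
written as a loop (an equivalent shell shorthand, not from the
referenced guide):

for n in 1 2 3 4; do
    mdadm --manage /dev/md$((n-1)) --fail /dev/sdb$n
    mdadm --manage /dev/md$((n-1)) --remove /dev/sdb$n
done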
Shut down the system:
shutdown -h now
Then put in a new /dev/sdb drive (if you simulate a failure of
/dev/sda, you should now put /dev/sdb in /dev/sda's
place and connect the new HDD as /dev/sdb!) and boot the
system. It should still start without problems.

Now run
cat /proc/mdstat

and you should see that we have a degraded array:
~ # cat /proc/mdstat
[root @ hostname]
Personalities : [raid1] [raid6] [raid5] [raid4]
md3 : active raid5 sda4[0] sdc4[2]
      23615360 blocks level 5, 64k chunk, algorithm 2 [3/2] [U_U]

md2 : active raid5 sda3[0] sdc3[2]
      78123904 blocks level 5, 64k chunk, algorithm 2 [3/2] [U_U]

md1 : active (auto-read-only) raid5 sda2[0] sdc2[2]
      2923648 blocks level 5, 64k chunk, algorithm 2 [3/2] [U_U]

md0 : active raid1 sda1[0]
      96256 blocks [2/1] [U_]

unused devices: <none>
root@hostname#
~#

The output of fdisk -l should look as follows:
~ # fdisk -l
[root @ hostname]
Disk /dev/sda: 53.6 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00086047

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          12       96358+  fd  Linux raid autodetect
/dev/sda2              13         194     1461915   fd  Linux raid autodetect
/dev/sda3             195        5057    39062047+  fd  Linux raid autodetect
/dev/sda4            5058        6527    11807775   fd  Linux raid autodetect

Disk /dev/sdb: 53.6 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/sdc: 53.6 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc2              13         194     1461915   fd  Linux raid autodetect
/dev/sdc3             195        5057    39062047+  fd  Linux raid autodetect
/dev/sdc4            5058        6527    11807775   fd  Linux raid autodetect

Disk /dev/md0: 98 MB, 98566144 bytes
2 heads, 4 sectors/track, 24064 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000

Disk /dev/md0 doesn't contain a valid partition table

Disk /dev/md1: 2993 MB, 2993815552 bytes
2 heads, 4 sectors/track, 730912 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000

Disk /dev/md1 doesn't contain a valid partition table

Disk /dev/md2: 79.9 GB, 79998877696 bytes
2 heads, 4 sectors/track, 19530976 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000

Disk /dev/md2 doesn't contain a valid partition table

Disk /dev/md3: 24.1 GB, 24182128640 bytes
2 heads, 4 sectors/track, 5903840 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000

Disk /dev/md3 doesn't contain a valid partition table
root@hostname#
~#

Now we copy the partition table of /dev/sda to /dev/sdb:
sfdisk -d /dev/sda | sfdisk /dev/sdb

(If you get an error, you can try the --force option:
sfdisk -d /dev/sda | sfdisk --force /dev/sdb)
~ # sfdisk -d /dev/sda | sfdisk /dev/sdb
[root @ hostname]
Checking that no-one is using this disk right now ...
OK

Disk /dev/sdb: 6527 cylinders, 255 heads, 63 sectors/track

sfdisk: ERROR: sector 0 does not have an msdos signature
 /dev/sdb: unrecognized partition table type
Old situation:
No partitions found
New situation:
Units = sectors of 512 bytes, counting from 0

   Device Boot     Start        End   #sectors  Id  System
/dev/sdb1   *         63     192779     192717  fd  Linux raid autodetect
/dev/sdb2         192780    3116609    2923830  fd  Linux raid autodetect
/dev/sdb3        3116610   81240704   78124095  fd  Linux raid autodetect
/dev/sdb4       81240705  104856254   23615550  fd  Linux raid autodetect
Successfully wrote the new partition table

Re-reading the partition table ...

If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes: dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)
root@hostname#
~#

Afterwards we remove any remains of a previous RAID
array from /dev/sdb...
mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdb2
mdadm --zero-superblock /dev/sdb3
mdadm --zero-superblock /dev/sdb4

... and add /dev/sdb to the RAID array:
mdadm -a /dev/md0 /dev/sdb1
mdadm -a /dev/md1 /dev/sdb2
mdadm -a /dev/md2 /dev/sdb3
mdadm -a /dev/md3 /dev/sdb4

Now take a look at cat /proc/mdstat
~ # cat /proc/mdstat
[root @ hostname]
Personalities : [raid1] [raid6] [raid5] [raid4]
md3 : active raid5 sdb4[3] sda4[0] sdc4[2]
      23615360 blocks level 5, 64k chunk, algorithm 2 [3/2] [U_U]
      [>....................] recovery = 4.7% (563712/11807680) finish=5.3min speed=35232K/sec

md2 : active raid5 sdb3[3] sda3[0] sdc3[2]
      78123904 blocks level 5, 64k chunk, algorithm 2 [3/2] [U_U]
      resync=DELAYED

md1 : active raid5 sdb2[3] sda2[0] sdc2[2]
      2923648 blocks level 5, 64k chunk, algorithm 2 [3/2] [U_U]
      resync=DELAYED

md0 : active raid1 sdb1[1] sda1[0]
      96256 blocks [2/2] [UU]

unused devices: <none>
root@hostname#
~#
Wait until the synchronization has finished:
~ # cat /proc/mdstat
[root @ hostname]
Personalities : [raid1] [raid6] [raid5] [raid4]
md3 : active raid5 sdb4[1] sda4[0] sdc4[2]
      23615360 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

md2 : active raid5 sdb3[1] sda3[0] sdc3[2]
      78123904 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

md1 : active raid5 sdb2[1] sda2[0] sdc2[2]
      2923648 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

md0 : active raid1 sdb1[1] sda1[0]
      96256 blocks [2/2] [UU]

unused devices: <none>
root@hostname#
~#

Then run
grub

and install the bootloader on both HDDs:
root (hd0,0)
setup (hd0)
root (hd1,0)
setup (hd1)
quit

That's it. You've just replaced a failed hard drive in your
RAID arrays.

								