LVM and RAID

The Linux Logical Volume Manager (LVM) is a mechanism for virtualizing disks. It can create "virtual" disk partitions out of one or more physical hard drives, allowing you to grow, shrink, or move those partitions from drive to drive as your needs change. It also allows you to create larger partitions than you could achieve with a single drive.

Traditional uses of LVM have included databases and company file servers, but even home users may want large partitions for music or video collections, or for storing online backups. LVM and RAID 1 can also be convenient ways to gain redundancy without sacrificing flexibility.

An operational LVM system includes both a kernel component (enabled in the kernel configuration) and userspace utilities.

LVM Concepts

On a regular physical hard drive, disk space is chopped up into partitions, and a filesystem is written directly to a partition. File servers typically use SCSI or SATA disks; SATA drives are named like SCSI drives (sda, sdb, and so on).

LVM pools Physical Volumes (PVs) into Volume Groups (VGs), which are split up into Logical Volumes (LVs), where the filesystems ultimately reside.

Physical Volumes (PVs) are fixed in size (as partitions are). You can extend (or reduce) the size of a Volume Group by adding or removing PVs. LVs can also grow (to the limits of the VG) and shrink (though most filesystems don't like to shrink).

Creating a Logical Volume

1) Create each disk partition as a Linux LVM partition (type 8e) with fdisk.

2) Initialize each of the disks (or partitions) using the pvcreate command:

# pvcreate /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

Use the pvdisplay command to examine these areas.

3) Create a single volume group named datavg using the vgcreate command:

# vgcreate datavg /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

Use vgdisplay to see the newly created VG.

4) Create the logical volumes within it:

# lvcreate --name medialv --size 400G datavg

# lvcreate --name backuplv --size 50G datavg

# lvcreate --name sharelv --size 10G datavg

Examine these volumes using the lvdisplay command.

With LVM, you can be conservative, allocating only the space required and extending it later. It's easier to grow a filesystem than to shrink it. This method also gives you the option of creating new volumes as new needs arise.
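As a quick sanity check, the LV sizes requested above can be totalled against the Volume Group's capacity. The 1000GB figure below is a hypothetical VG size (e.g. four 250GB PVs), not one taken from this setup:

```shell
# Hypothetical VG capacity in GB (e.g. four 250GB physical volumes).
vg_gb=1000
lv_total=0
for lv_gb in 400 50 10; do        # medialv, backuplv, sharelv
  lv_total=$((lv_total + lv_gb))
done
echo "$lv_total"                  # 460GB allocated
[ "$lv_total" -le "$vg_gb" ] && echo "fits, $((vg_gb - lv_total))GB left to grow into"
```

The leftover space stays unallocated in the VG until an lvextend or a new lvcreate claims it.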

LV Filesystems

The next step is to create filesystems on the LVs.

ext2 was long the standard Linux filesystem, and ext3 is an upgrade to ext2 that provides journaling. The major downside to ext2/ext3 is that to grow (or shrink) the filesystem, you must first unmount it.

Other filesystems provide advantages in certain configurations, such as large file sizes, large quantities of files, or on-the-fly filesystem growth. For large numbers of small files, ReiserFS is an excellent choice; for raw, uncached file I/O, it ranks at the top, and it is as robust as ext3. If you are designing a file server that will contain large files, delete speed could be a priority. JFS and XFS are better choices in this situation, although XFS has the edge due to greater reliability and better general performance.

Format the partitions as follows:

# mkfs.ext3 /dev/datavg/backuplv

# mkfs.xfs /dev/datavg/medialv

# mkfs.reiserfs /dev/datavg/sharelv

Mounting

To mount the file systems, first add the following lines to /etc/fstab:

/dev/datavg/backuplv /var/backup ext3 rw,defaults 1 2

/dev/datavg/medialv /var/media xfs rw,defaults 1 2

/dev/datavg/sharelv /var/share reiserfs rw,defaults 1 2

and then establish and activate the mount points:

# mkdir /var/media /var/backup /var/share

# mount /var/media
# mount /var/backup
# mount /var/share
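Before editing the real /etc/fstab, the three entries can be staged in a scratch file and counted; the mktemp path is purely illustrative:

```shell
# Stage the new entries in a scratch file rather than /etc/fstab directly.
FSTAB=$(mktemp)
cat >>"$FSTAB" <<'EOF'
/dev/datavg/backuplv /var/backup ext3     rw,defaults 1 2
/dev/datavg/medialv  /var/media  xfs      rw,defaults 1 2
/dev/datavg/sharelv  /var/share  reiserfs rw,defaults 1 2
EOF
grep -c '^/dev/datavg/' "$FSTAB"   # 3
```

Once the entries are in the real /etc/fstab, a single mount -a would pick them all up.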

Adding Reliability With RAID

LVM has one major flaw: if any of your drives fail, all of your data is at risk. RAID, which stands for Redundant Array of Independent Disks, is a low-level technology for combining disks together in various ways, called RAID levels.

RAID 1 mirrors data across two (or more) disks. In addition to doubling the reliability, RAID 1 adds performance benefits for reads because both drives have the same data, and read operations can be split between them. The price is redundant disk space (50%).

With three drives or more, RAID 5 is another option. It recovers some of the disk space lost to mirroring but adds even more complexity. Also, it performs well with reads but poorly with writes when done in software. Typically, software RAID 1 is reserved for OS drives, and RAID 5 for hardware arrays.
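The space trade-off between the two levels can be sketched with a small helper function (raid_usable is a hypothetical name, assuming equal-size disks):

```shell
# Usable capacity in GB for a RAID array of n equal disks of size_gb each.
raid_usable() {
  level=$1; n=$2; size_gb=$3
  case $level in
    1) echo "$size_gb" ;;                # mirror: one disk's worth of space
    5) echo $(( (n - 1) * size_gb )) ;;  # one disk's worth lost to parity
  esac
}
raid_usable 1 2 250   # 250 (the 50% cost noted above)
raid_usable 5 4 250   # 750
```

With four 250GB drives, two RAID 1 pairs yield 500GB usable, while a single RAID 5 array would yield 750GB, which is the space RAID 5 "recovers".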

For example, combine the four drives into two RAID 1 pairs: /dev/sda + /dev/sdd and /dev/sdb + /dev/sdc.

When the primary/secondary pairs are used, the relative access speeds are balanced so neither RAID array is slower than the other. There may also be a performance benefit to having accesses evenly distributed across both controllers.

Set the partition type on the first half of each pair (sda1 and sdb1) to fd (Linux raid autodetect).

Build the RAID 1 mirrors, telling md that the "other half" of the mirrors are missing (because they're not ready to be added to the RAID yet):

# mdadm --create /dev/md0 -a -l 1 -n 2 /dev/sda1 missing

# mdadm --create /dev/md1 -a -l 1 -n 2 /dev/sdb1 missing

Add the broken mirrors to the LVM:

# pvcreate /dev/md0 /dev/md1

# vgcreate datavg /dev/md0 /dev/md1

Finally, change the partition types of the raw disks to fd, and get the broken mirrors on their feet with full mirroring:

# fdisk /dev/sdd

# fdisk /dev/sdc

# mdadm --manage /dev/md0 --add /dev/sdd1

# mdadm --manage /dev/md1 --add /dev/sdc1

Growth and Reallocation

For instance, to increase the amount of space available for shared files from 10GB to 15GB, run a command such as:

# lvextend -L15G /dev/datavg/sharelv

# resize_reiserfs /dev/datavg/sharelv

Over time, all of the unallocated disk space will be used, and eventually the drives will need to be replaced with larger ones.

In the case of RAID 1, migration is straightforward.

1) Use mdadm to mark one drive of each of the RAID 1 mirrors as failed, and then remove them:

# mdadm --manage /dev/md0 --fail /dev/sda1

# mdadm --manage /dev/md0 --remove /dev/sda1

# mdadm --manage /dev/md1 --fail /dev/sdc1

# mdadm --manage /dev/md1 --remove /dev/sdc1

2) Pull out the sda and sdc hard drives and replace them with two of the new 800G drives. Split each 800G drive into a 250G partition and a 550G partition using fdisk, and add the 250G partitions back to md0 and md1:

# fdisk /dev/sda

# fdisk /dev/sdc

# mdadm --manage /dev/md0 --add /dev/sda1

# mdadm --manage /dev/md1 --add /dev/sdc1
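The partition split above is simple arithmetic: the first partition must match the old mirror member, and the remainder becomes new space:

```shell
# Each 800G replacement drive keeps a 250G partition for the existing
# mirror and leaves the rest for the new arrays (md2/md3 below).
old_member_gb=250
new_drive_gb=800
new_space_gb=$((new_drive_gb - old_member_gb))
echo "$new_space_gb"   # 550
```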

3) Repeat the above process with sdd and sdb to move them to the other two new drives, then create a third and fourth RAID device, md2 and md3, using the new space:

# mdadm --create /dev/md2 -a -l 1 -n 2 /dev/sda2 /dev/sdd2

# mdadm --create /dev/md3 -a -l 1 -n 2 /dev/sdb2 /dev/sdc2

Finally, add these to LVM:

# pvcreate /dev/md2 /dev/md3

# vgextend datavg /dev/md2 /dev/md3

LVM and OS Redundancy

So far, we've talked only about LVM and RAID for secondary disk space on a standalone file server, but you can also provide OS redundancy on an existing installation.

For example, assume you have your swap space and /boot partitions already set up outside of LVM on their own partitions. You can focus on moving your root filesystem onto a new LVM partition, /dev/hda4. Set the partition type on hda4 to Linux LVM (type 8e).

Initialize LVM and create a new physical volume:

# vgscan

# pvcreate /dev/hda4

# vgcreate rootvg /dev/hda4

Create a logical volume and format it with an ext3 filesystem:

# lvcreate --name rootlv --size 5G rootvg

# mkfs -t ext3 /dev/rootvg/rootlv

Copy the files from the existing root file system to the new LVM one:

# mkdir /mnt/new_root

# mount /dev/rootvg/rootlv /mnt/new_root

# cp -ax /. /mnt/new_root/

Modify /etc/fstab to mount / on /dev/rootvg/rootlv instead of /dev/hda3.

The trickiest part is to rebuild your initrd to include LVM support. This tends to be distro-specific, but look for mkinitrd or yaird. Your initrd image must have the LVM modules loaded or the root filesystem will not be available. To be safe, leave your original initrd image alone and make a new one named, for example, /boot/initrd-lvm.img.

Update your bootloader. Add a new section for your new root filesystem, duplicating your original boot stanza. In the new copy, change the root from /dev/hda3 to /dev/rootvg/rootlv, and change your initrd to the newly built one. If you use LILO, be sure to run lilo once you've made the changes. For example, with GRUB, if you have:

title=Linux

root (hd0,0)

kernel /vmlinuz root=/dev/hda3 ro single

initrd /initrd.img

add a new section such as:

title=LinuxLVM

root (hd0,0)

kernel /vmlinuz root=/dev/rootvg/rootlv ro single

initrd /initrd-lvm.img

Alternatively, you can mirror an entire OS drive, MBR, /boot, and / included.
