Setup Partitions
ERASE
[Erase the disk|https://wiki.archlinux.org/index.php/Securely_wipe_disk]
# dd if=/dev/zero of=/dev/sdX bs=4096 status=progress
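The linked wiki also recommends overwriting with random data rather than zeros; a possible variant, equally destructive to everything on /dev/sdX:
# dd if=/dev/urandom of=/dev/sdX bs=4096 status=progress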
FDISK
- create partitions
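A scripted sketch with a recent sfdisk; the layout (one small bootable partition plus one data partition, both type fd for Linux RAID autodetect) is only an assumption, adjust to your needs:
# sfdisk /dev/sdX << EOF
,512M,fd,*
,,fd
EOF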
RAID
- create raid1/mirror device
# mdadm --create /dev/md7 --level=1 --raid-devices=2 /dev/sdg1 /dev/sdh1
- for non-bootable partitions use metadata v1.2
# mdadm --create /dev/md2 --metadata=1.2 --level=1 --raid-devices=2 /dev/sda5 /dev/sdb5
- for the bootable partition use metadata v0.90 (its superblock sits at the end of the partition, so the boot loader can read /boot directly)
# mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
- install GRUB in the master boot record of both disks
# grub-install --force --no-floppy --root-directory=/mnt/target/ /dev/sda
# grub-install --force --no-floppy --root-directory=/mnt/target/ /dev/sdb
- chroot as described, install a device map, then update the initramfs and GRUB (a sketch follows this list)
# update-initramfs -u -k `uname -r`
# update-grub
- the system should now be bootable from either RAID disk
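One possible chroot sequence for the device-map step above; this is only a sketch, assuming the RAID root /dev/md2 is mounted at /mnt/target, /dev/md0 holds /boot, and the member disks are /dev/sda and /dev/sdb (recording the arrays in /etc/mdadm/mdadm.conf is an extra step, not in the original notes):
# mount /dev/md2 /mnt/target
# mount /dev/md0 /mnt/target/boot
# mount --bind /dev /mnt/target/dev
# mount --bind /proc /mnt/target/proc
# mount --bind /sys /mnt/target/sys
# chroot /mnt/target
# printf '(hd0) /dev/sda\n(hd1) /dev/sdb\n' > /boot/grub/device.map
# mdadm --detail --scan >> /etc/mdadm/mdadm.conf
then run the two update commands above while still inside the chroot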
CRYPTSETUP
WARNING! The following command will remove all data on the partition that you are encrypting. You WILL lose all your information! Make sure you back up your data to an external source such as a NAS or hard disk before typing any of the following commands.
In this example, I'm going to encrypt /dev/sdb7. Type the following command:
# cryptsetup -y -v luksFormat /dev/sdb7
- Open the encrypted device
# cryptsetup luksOpen /dev/sdb7 backup
- Check the dm device
# ls -l /dev/mapper/backup
or use the following command
# cryptsetup -v status backup
You can dump LUKS headers using the following command:
# cryptsetup luksDump /dev/sdb7
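Not part of the original steps, but since the LUKS header is what makes the data recoverable, keeping a backup of it is cheap insurance (the file name is arbitrary):
# cryptsetup luksHeaderBackup /dev/sdb7 --header-backup-file /root/sdb7-luks-header.img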
- Close a dm device after unmounting it
# cryptsetup luksClose backup
LVM setup
- Create physical volumes
# pvcreate /dev/mapper/backup
- Create a volume group
# vgcreate G750lvm /dev/mapper/backup
- After rebooting the system or running vgchange -an, you will not be able to access your VGs and LVs. To reactivate the volume group, run:
# vgchange -a y G750lvm
- Creating a logical volume
# lvcreate -L50G -nroot G750lvm
# lvcreate -L150G -nhome G750lvm
# lvcreate -L200G -ncustom G750lvm
# lvcreate -L150G -ndata G750lvm
# lvcreate -L2G -nswap1 G750lvm
# lvcreate -L2G -nswap2 G750lvm
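A quick sanity check of what was created (output varies per system):
# pvs
# vgs
# lvs G750lvm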
- Format the volumes
# mkfs.ext3 /dev/mapper/G750lvm-root
# mkfs.ext3 /dev/mapper/G750lvm-home
# mkfs.ext3 /dev/mapper/G750lvm-custom
# mkfs.ext3 /dev/mapper/G750lvm-data
# mkswap /dev/mapper/G750lvm-swap1
# mkswap /dev/mapper/G750lvm-swap2
copy data
# cd /mnt
# test -d target || mkdir target
# test -d source || mkdir source
# mount ...(source) /mnt/source
# mount /dev/mapper/G750lvm-root /mnt/target
# tar cf - . | (cd /mnt/target/; tar xvf -)
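An alternative to the tar pipe, using rsync to preserve hard links, ACLs and extended attributes (same mount points assumed):
# rsync -aHAX /mnt/source/ /mnt/target/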
- umount and done
remove logical volume
- umount the partition
- close the volume
# lvchange -an /dev/vgcrypt/software
- remove the volume
# lvremove /dev/vgcrypt/software
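If the whole stack is being torn down, deactivate the volume group and close the LUKS mapping as well; the names here only follow the examples above and are otherwise assumptions:
# vgchange -a n vgcrypt
# cryptsetup luksClose backup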
Replacing RAID disk
- sync
# sync
- remove the disk from the RAID
# mdadm --manage /dev/md0 --fail /dev/sdb1
# mdadm --manage /dev/md0 --remove /dev/sdb1
- replace the disk with a new one
- copy the partition table to the new disk
# sfdisk -d /dev/sda | sfdisk /dev/sdb
- add the new disk to the RAID
# mdadm --manage /dev/md0 --add /dev/sdb1
- verify
# /sbin/mdadm --detail /dev/md0
# cat /proc/mdstat
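The rebuild can be watched while it runs, and once it finishes GRUB can be reinstalled on the replacement disk (assuming it carries a boot partition as set up earlier):
# watch cat /proc/mdstat
# grub-install /dev/sdb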
RAID recovery
- Examine the partition table of the drive
# fdisk -l /dev/sdj
Disk /dev/sdj: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x000dda44
Device     Boot     Start        End    Sectors   Size Id Type
/dev/sdj1  *         2048     999423     997376   487M fd Linux raid autodetect
/dev/sdj2          999424  157249535  156250112  74.5G fd Linux raid autodetect
/dev/sdj3       157251582 1953523711 1796272130 856.5G  5 Extended
/dev/sdj5       157251584  390647807  233396224 111.3G fd Linux raid autodetect
/dev/sdj6       390649856  585959423  195309568  93.1G 83 Linux
/dev/sdj7       585961472 1953523711 1367562240 652.1G 83 Linux
- (Optional) Make a backup of the partition table
# sfdisk -d /dev/sdX > partition_sdX.txt
- (Optional) Copy the partition table to the new drive
# sfdisk -d /dev/sdX | sfdisk /dev/sdY
- Examine the RAID table
# mdadm --examine /dev/sdj
/dev/sdj:
   MBR Magic : aa55
Partition[0] :     997376 sectors at       2048 (type fd)
Partition[1] :  156250112 sectors at     999424 (type fd)
Partition[2] : 1796272130 sectors at  157251582 (type 05)
- Assemble and run the RAID array with one disk
# mdadm -A -R /dev/md0 /dev/sdj1
mdadm: /dev/md0 has been started with 1 drive (out of 2).
# mdadm -A -R /dev/md1 /dev/sdj2
mdadm: /dev/md1 has been started with 1 drive (out of 2).
# mdadm -A -R /dev/md2 /dev/sdj5
mdadm: /dev/md2 has been started with 1 drive (out of 2).
Now the arrays can be manipulated as desired.
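For example, following the layering used earlier in these notes (RAID, then LUKS, then LVM); device and volume group names are only illustrative:
# mount /dev/md0 /mnt/source
# cryptsetup luksOpen /dev/md2 backup
# vgchange -a y G750lvm
# mount /dev/mapper/G750lvm-root /mnt/target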
Recover ext Partition
http://www.cyberciti.biz/faq/recover-bad-superblock-from-corrupted-partition/
Linux: Recover Corrupted Partition From A Bad Superblock
Q. How can I recover a bad superblock from a corrupted ext3 partition to get back my data? I'm getting the following error:
/dev/sda2: Input/output error
mount: /dev/sda2: can't read superblock
How do I fix this error?
A. The Linux ext2/3 filesystem stores backup copies of the superblock at several locations, so it is possible to get data back from a corrupted partition.
WARNING! Make sure the file system is UNMOUNTED.
If your system gives you a terminal, type the following commands; otherwise boot the Linux system from a rescue disk (boot from the 1st CD/DVD and at the boot: prompt type the command linux rescue). Mount the partition using an alternate superblock.
Find out the superblock locations for /dev/sda2:
# dumpe2fs /dev/sda2 | grep superblock
Sample output:
Primary superblock at 0, Group descriptors at 1-6
Backup superblock at 32768, Group descriptors at 32769-32774
Backup superblock at 98304, Group descriptors at 98305-98310
Backup superblock at 163840, Group descriptors at 163841-163846
Backup superblock at 229376, Group descriptors at 229377-229382
Backup superblock at 294912, Group descriptors at 294913-294918
Backup superblock at 819200, Group descriptors at 819201-819206
Backup superblock at 884736, Group descriptors at 884737-884742
Backup superblock at 1605632, Group descriptors at 1605633-1605638
Backup superblock at 2654208, Group descriptors at 2654209-2654214
Backup superblock at 4096000, Group descriptors at 4096001-4096006
Backup superblock at 7962624, Group descriptors at 7962625-7962630
Backup superblock at 11239424, Group descriptors at 11239425-11239430
Backup superblock at 20480000, Group descriptors at 20480001-20480006
Backup superblock at 23887872, Group descriptors at 23887873-23887878
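If dumpe2fs itself fails because the primary superblock is unreadable, mke2fs -n (a dry run, it writes nothing) prints where the backups would have been placed, assuming the same block size and options as the original mkfs:
# mke2fs -n /dev/sda2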
Now check and repair the Linux file system using the alternate superblock at 32768:
# fsck -b 32768 /dev/sda2
Sample output:
fsck 1.40.2 (12-Jul-2007)
e2fsck 1.40.2 (12-Jul-2007)
/dev/sda2 was not cleanly unmounted, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Free blocks count wrong for group #241 (32254, counted=32253). Fix? yes
Free blocks count wrong for group #362 (32254, counted=32248). Fix? yes
Free blocks count wrong for group #368 (32254, counted=27774). Fix? yes
..........
/dev/sda2: ***** FILE SYSTEM WAS MODIFIED *****
/dev/sda2: 59586/30539776 files (0.6% non-contiguous), 3604682/61059048 blocks
Now try to mount the file system using the mount command:
# mount /dev/sda2 /mnt
You can also use the superblock stored at 32768 to mount the partition; enter:
# mount -o sb={alternative-superblock} /dev/device /mnt
# mount -o sb=32768 /dev/sda2 /mnt
Note that mount expects sb= in units of 1 k blocks, so on a filesystem with 4 k blocks backup block 32768 becomes sb=131072.
Try to browse and access the file system:
# cd /mnt
# mkdir test
# ls -l
# cp file /path/to/safe/location
You should always keep backups of all important data, including configuration files.
Remove SSD drive
# echo 1 >/sys/block/sdX/device/delete
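To make the kernel detect the drive again without a reboot, rescan the corresponding SCSI host (host0 is an assumption; check /sys/class/scsi_host/ for the right one):
# echo "- - -" > /sys/class/scsi_host/host0/scan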