Setup Partitions

ERASE

Erase the disk: https://wiki.archlinux.org/index.php/Securely_wipe_disk

# dd if=/dev/zero of=/dev/sdX bs=4096 status=progress
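
Other wipe options, as a sketch (/dev/sdX is a placeholder for the target device): overwrite with random data instead of zeros, or, on an SSD, discard all blocks (fast, but not a guaranteed overwrite).

# dd if=/dev/urandom of=/dev/sdX bs=4M status=progress
# blkdiscard /dev/sdX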

Create Partition

  • fdisk for MBR, i.e. disks below the ~2 TiB MBR limit (for a non-interactive variant see the sfdisk sketch after the parted example below)
  • parted / gparted for GPT
# parted -a optimal /dev/sde 
(parted) mklabel gpt
(parted) mkpart primary 2048s 100%
(parted) align-check optimal 1
1 aligned
(parted) set 1 raid on                                                    
(parted) print                                                                
Model: ATA WDC WD30EFRX-68E (scsi)
Disk /dev/sde: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
 
Number  Start   End     Size    File system  Name     Flags
 1      1049kB  3001GB  3001GB               primary  raid

(parted) quit                                                             
Information: You may need to update /etc/fstab.
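
For the MBR case from the list above, the same single RAID partition can be created non-interactively with sfdisk; this is only a sketch, and /dev/sdX is a placeholder (type fd marks the partition as Linux RAID autodetect):

# sfdisk /dev/sdX <<EOF
label: dos
start=2048, type=fd
EOF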

Clone partition table

# sgdisk -R <New_Disk> <Existing_Disk>

Example

# sgdisk -R /dev/sdb /dev/sda

Also give the new disk new identifiers (sgdisk -G randomizes the disk and partition GUIDs so they do not clash with the original disk)

# sgdisk -G /dev/sdb
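
A quick way to verify the result is to print the new disk's partition table:

# sgdisk -p /dev/sdb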

BADBLOCKS

Badblocks ArchLinux: https://wiki.archlinux.org/title/badblocks

# badblocks -nsv -o sdk1.txt -s /dev/sdl1
Checking for bad blocks in non-destructive read-write mode
From block 0 to 1953513559
Checking for bad blocks (non-destructive read-write test)
Testing with random pattern: badblocks: Remote I/O error during test data write, block 32304896
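
The block list written with -o can later be handed to the ext tools so the affected blocks are avoided; a sketch reusing the file names from above (badblocks should be run with a block size matching the filesystem for this to be reliable). The first command adds the listed blocks to an existing filesystem's bad block inode, the second marks them when creating a new filesystem:

# e2fsck -l sdk1.txt /dev/sdl1
# mkfs.ext4 -l sdk1.txt /dev/sdl1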

RAID

ArchLinux RAID article: https://wiki.archlinux.org/title/RAID


  • create raid1/mirror device
# mdadm --create /dev/md7 --level=1 --raid-devices=2 /dev/sdg1 /dev/sdh1
  • for a non-bootable partition use metadata v1.2
# mdadm --create /dev/md2 --metadata=1.2 --level=1 --raid-devices=2 /dev/sda5 /dev/sdb5
  • for a bootable partition use metadata v0.90 (stored at the end of the device, so old boot loaders can read the partition)
# mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  • install GRUB in the master boot record of both disks
# grub-install --force --no-floppy --root-directory=/mnt/target/ /dev/sda
# grub-install --force --no-floppy --root-directory=/mnt/target/ /dev/sdb
# update-initramfs -u -k `uname -r`
# update-grub
  • the system should now be bootable from the RAID disk
  • create RAID-10
# mdadm -v --create /dev/md1 --level=raid10 --raid-devices=2 /dev/sda1 /dev/sdb1 
  • add new drives to RAID-10
# mdadm /dev/md1 --add /dev/sdc1 /dev/sdd1
# mdadm --grow /dev/md1 --raid-devices=4
  • add spare drive to RAID-1
# mdadm --manage /dev/md127 --add-spare /dev/sdc3
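
To have a newly created array assembled automatically at boot, the usual Debian-style step (a sketch; file locations assumed) is to record it in mdadm.conf and refresh the initramfs:

# mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# update-initramfs -u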

CRYPTSETUP

WARNING! The following command will remove all data on the partition that you are encrypting. You WILL lose all your information! So make sure you back up your data to an external location such as a NAS or another hard disk before typing any of the following commands.

In this example, I'm going to encrypt /dev/sdb7. Type the following command:

 # cryptsetup -y -v luksFormat /dev/sdb7
  • Open the encrypted device
 # cryptsetup luksOpen /dev/sdb7 backup
  • Check the dm device
 # ls -l /dev/mapper/backup

or use the following command

 # cryptsetup -v status backup

You can dump LUKS headers using the following command:

 # cryptsetup luksDump /dev/sdb7
  • Close a dm device after unmounting it
 # cryptsetup luksClose backup
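
To have the LUKS device opened automatically at boot, an /etc/crypttab entry of roughly this shape can be added (a sketch; the UUID is a placeholder found with blkid, and the name 'backup' matches the luksOpen example above):

 # blkid /dev/sdb7

/etc/crypttab:
 backup  UUID=<uuid-from-blkid>  none  luks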

LVM setup

  • Create physical volumes
 # pvcreate /dev/mapper/backup
  • Create a volume group
 # vgcreate G750lvm /dev/mapper/backup
  • After rebooting the system or running vgchange -an, you will not be able to access your VGs and LVs. To reactivate the volume group, run:
 # vgchange -a y G750lvm
       
  • Creating a logical volume
 # lvcreate -L50G -nroot G750lvm
 # lvcreate -L150G -nhome G750lvm
 # lvcreate -L200G -ncustom G750lvm
 # lvcreate -L150G -ndata G750lvm
 # For the swap LV use -C y so it stays contiguous
 # lvcreate -C y -L8192G -nswap1 G750lvm
  • Creating a thin-pool logical volume
 # lvcreate -l99%FREE -n newLvName newVgName
 # lvconvert --type thin-pool newVgName/newLvName
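
Thin volumes can then be carved out of the pool; a sketch using the pool name from above with a placeholder volume name and size:

 # lvcreate -V 100G --thin -n thinvol newVgName/newLvName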


Remove logical volume

  • unmount the partition
  • close the volume
# lvchange -an /dev/vgcrypt/software
  • remove the volume
# lvremove /dev/vgcrypt/software


Format the volumes

 # mkfs.ext3 /dev/mapper/G750lvm-root
 # mkfs.ext3 /dev/mapper/G750lvm-home
 # mkfs.ext3 /dev/mapper/G750lvm-custom
 # mkfs.ext3 /dev/mapper/G750lvm-data
 # mkswap /dev/mapper/G750lvm-swap1
 # mkswap /dev/mapper/G750lvm-swap2

Copy data

 # cd /mnt
 # test -d target || mkdir target
 # test -d source || mkdir source
 # mount ...(source) /mnt/source
 # mount /dev/mapper/G750lvm-root /mnt/target
 # cd /mnt/source
 # tar cf - . | (cd /mnt/target/; tar xvf -)
  • unmount and done
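
As an alternative to the tar pipe, rsync preserves hard links, ACLs and extended attributes and can be re-run to pick up changes; a sketch using the same mount points:

 # rsync -aHAXx --numeric-ids /mnt/source/ /mnt/target/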


Install GRUB

For installing EFI in a chroot see https://www.redhat.com/sysadmin/bios-uefi, https://wiki.debian.org/UEFI and https://wiki.debian.org/GrubEFIReinstall

# mount -t efivarfs none /sys/firmware/efi/efivars
# grub-install /dev/sdb
# update-grub
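
On an EFI system (with efivarfs mounted as above and the ESP mounted, e.g. at /boot/efi) the install step would look roughly like this instead of grub-install /dev/sdb; the target, ESP path and bootloader id here are assumptions:

# grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=debian --recheck
# update-grub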

Renaming RAID array

https://askubuntu.com/questions/63980/how-do-i-rename-an-mdadm-raid-array

The whole process:

sudo mdadm --stop /dev/md125
sudo mdadm --assemble /dev/md/alpha --name=alpha --update=name /dev/sd[fg]
sudo mdadm -Db /dev/md/alpha

The third command should return something like:

ARRAY /dev/md/alpha metadata=1.2 name=omicron:alpha UUID=5b024352:3a940335:233aa23f:5c6b2a1f

Paste the result into /etc/mdadm/mdadm.conf (replacing the old line). Or execute:

sudo mdadm -Db /dev/md/alpha >> /etc/mdadm/mdadm.conf

Next run:

sudo update-initramfs -u

Finally, reboot.

Replacing RAID disk

https://www.thegeekdiary.com/replacing-a-failed-mirror-disk-in-a-software-raid-array-mdadm/

  • sync
# sync
  • remove the disk from the RAID
# mdadm --manage /dev/md0 --fail /dev/sdb1
# mdadm --manage /dev/md0 --remove /dev/sdb1
  • replace the disk with a new one
  • copy the partition table to the new disk
# sfdisk -d /dev/sda | sfdisk /dev/sdb
  • add the new disk to the RAID
# mdadm --manage /dev/md0 --add /dev/sdb1
  • verify
# /sbin/mdadm --detail /dev/md0
# cat /proc/mdstat
  • Remove the array and the superblocks
# mdadm --stop /dev/md1
# mdadm --remove /dev/md1
# mdadm --zero-superblock /dev/sdf1 /dev/sde1

RAID recovery

  • Examine the partition table of the drive
 fdisk -l /dev/sdj
 
 Disk /dev/sdj: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
 Units: sectors of 1 * 512 = 512 bytes
 Sector size (logical/physical): 512 bytes / 512 bytes
 I/O size (minimum/optimal): 512 bytes / 512 bytes
 Disklabel type: dos
 Disk identifier: 0x000dda44
 
 Device     Boot     Start        End    Sectors   Size Id Type
 /dev/sdj1  *         2048     999423     997376   487M fd Linux raid autodetect
 /dev/sdj2          999424  157249535  156250112  74.5G fd Linux raid autodetect
 /dev/sdj3       157251582 1953523711 1796272130 856.5G  5 Extended
 /dev/sdj5       157251584  390647807  233396224 111.3G fd Linux raid autodetect
 /dev/sdj6       390649856  585959423  195309568  93.1G 83 Linux
 /dev/sdj7       585961472 1953523711 1367562240 652.1G 83 Linux
  • (Optional) make a backup of the partition table
 sfdisk -d /dev/sdX > partition_sdX.txt
  • (Optional) Copy the partition table to a new drive
 sfdisk -d /dev/sdX | sfdisk /dev/sdY
 
  • Examine the RAID table
 mdadm --examine /dev/sdj
 /dev/sdj:
    MBR Magic : aa55
 Partition[0] :       997376 sectors at         2048 (type fd)
 Partition[1] :    156250112 sectors at       999424 (type fd)
 Partition[2] :   1796272130 sectors at    157251582 (type 05)
  • Assemble and run the RAID array with one disk
 mdadm -A -R /dev/md0 /dev/sdj1
 mdadm: /dev/md0 has been started with 1 drive (out of 2).
 
 mdadm -A -R /dev/md1 /dev/sdj2
 mdadm: /dev/md1 has been started with 1 drive (out of 2).
 
 mdadm -A -R /dev/md2 /dev/sdj5
 mdadm: /dev/md2 has been started with 1 drive (out of 2).

The arrays can now be manipulated as desired.
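
For example, an assembled array can be mounted read-only to copy data off it (assuming it carries a filesystem directly; the mount point is a placeholder):

 mount -o ro /dev/md2 /mnt
 ls /mnt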

Recover ext Partition

http://www.cyberciti.biz/faq/recover-bad-superblock-from-corrupted-partition/

Linux: Recover Corrupted Partition From A Bad Superblock

Q. How can I recover a bad superblock from a corrupted ext3 partition to get back my data? I'm getting the following error:

   /dev/sda2: Input/output error
   mount: /dev/sda2: can’t read superblock 

How do I fix this error?

A. The Linux ext2/3 filesystem stores backup copies of the superblock at several locations, so it is possible to get data back from a corrupted partition.

WARNING! Make sure file system is UNMOUNTED.

If your system gives you a terminal, type the following commands; otherwise boot the Linux system from a rescue disk (boot from the 1st CD/DVD; at the boot: prompt type linux rescue). Then mount the partition using an alternate superblock.

Find out superblock location for /dev/sda2:

 # dumpe2fs /dev/sda2 | grep superblock

Sample output:

 Primary superblock at 0, Group descriptors at 1-6
 Backup superblock at 32768, Group descriptors at 32769-32774
 Backup superblock at 98304, Group descriptors at 98305-98310
 Backup superblock at 163840, Group descriptors at 163841-163846
 Backup superblock at 229376, Group descriptors at 229377-229382
 Backup superblock at 294912, Group descriptors at 294913-294918
 Backup superblock at 819200, Group descriptors at 819201-819206
 Backup superblock at 884736, Group descriptors at 884737-884742
 Backup superblock at 1605632, Group descriptors at 1605633-1605638
 Backup superblock at 2654208, Group descriptors at 2654209-2654214
 Backup superblock at 4096000, Group descriptors at 4096001-4096006
 Backup superblock at 7962624, Group descriptors at 7962625-7962630
 Backup superblock at 11239424, Group descriptors at 11239425-11239430
 Backup superblock at 20480000, Group descriptors at 20480001-20480006
 Backup superblock at 23887872, Group descriptors at 23887873-23887878

Now check and repair the Linux file system using the alternate superblock at 32768:

 # fsck -b 32768 /dev/sda2

Sample output:

fsck 1.40.2 (12-Jul-2007)
e2fsck 1.40.2 (12-Jul-2007)
/dev/sda2 was not cleanly unmounted, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Free blocks count wrong for group #241 (32254, counted=32253).
Fix? yes
Free blocks count wrong for group #362 (32254, counted=32248).
Fix? yes
Free blocks count wrong for group #368 (32254, counted=27774).
Fix? yes
..........
/dev/sda2: ***** FILE SYSTEM WAS MODIFIED *****
/dev/sda2: 59586/30539776 files (0.6% non-contiguous), 3604682/61059048 blocks

Now try to mount the file system using the mount command:

# mount /dev/sda2 /mnt

You can also mount the partition directly with a backup superblock via the sb= mount option (note that mount expects this value in 1 KiB units, so on a filesystem with 4 KiB blocks, backup block 32768 corresponds to sb=131072):

# mount -o sb={alternative-superblock} /dev/device /mnt
# mount -o sb=32768 /dev/sda2 /mnt

Try to browse and access the file system:

# cd /mnt
# mkdir test
# ls -l
# cp file /path/to/safe/location

You should always keep backups of all important data, including configuration files.

Remove SSD drive

# echo 1 >/sys/block/sdX/device/delete
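
To bring a removed device back without a reboot, the corresponding SCSI host can be rescanned; a sketch, where host0 is a placeholder (check /sys/class/scsi_host/ for the right one):

# echo "- - -" > /sys/class/scsi_host/host0/scan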

Sources

Hard disk encryption

cryptsetup

LVM HOWTO

SFDISK

RAID1