@1activegeek
Created March 26, 2020 16:36
Details and links on what I used to mount a filesystem that would not mount inside of unRAID, so I could get my data off the encrypted BTRFS cache pool.
https://wiki.archlinux.org/index.php/Dm-crypt/Device_encryption
===============================================
Unlocking/Mapping LUKS partitions with the device mapper
Once the LUKS partitions have been created, they can then be unlocked.
The unlocking process will map the partitions to a new device name using the device mapper. This alerts the kernel that the device is actually an encrypted device and should be addressed through LUKS using /dev/mapper/dm_name, so as not to overwrite the encrypted data. To guard against accidental overwriting, read about the options for backing up the cryptheader after finishing setup.
In order to open an encrypted LUKS partition execute:
# cryptsetup open device dm_name
You will then be prompted for the password to unlock the partition. Usually the mapped device name is descriptive of the function of the partition that is mapped. For example, the following unlocks the LUKS partition /dev/sda1 and maps it to the device mapper name cryptroot:
# cryptsetup open /dev/sda1 cryptroot
Once opened, the root partition device address would be /dev/mapper/cryptroot instead of the partition (e.g. /dev/sda1).
For setting up LVM on top of the encryption layer, the device file for the decrypted volume group would be something like /dev/mapper/cryptroot instead of /dev/sda1. LVM will then give additional names to all logical volumes created, e.g. /dev/lvmpool/root and /dev/lvmpool/swap.
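As a minimal sketch (assuming the volume group is the hypothetical lvmpool from the example names above), the logical volumes could then be activated and mounted through those names:
# vgchange -ay lvmpool
# mount /dev/lvmpool/root /mnt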
In order to write encrypted data into the partition it must be accessed through the device mapped name. The first step of access will typically be to create a filesystem. For example:
# mkfs -t ext4 /dev/mapper/cryptroot
The device /dev/mapper/cryptroot can then be mounted like any other partition.
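For example, assuming a hypothetical mount point at /mnt/decrypted:
# mount /dev/mapper/cryptroot /mnt/decrypted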
To close the LUKS container, unmount the partition and do:
# cryptsetup close cryptroot
===============================================
https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=543490
===============================================
I have an unmountable BTRFS filesystem disk or pool, what can I do to recover my data?
Unlike most other file systems, btrfs fsck (check --repair) should only be used as a last resort. While it's much better in the latest kernels/btrfs-tools, it can still make things worse. So before doing that, these are the steps you should try in this order:
Note: if using encryption you need to adjust the path, e.g., instead of /dev/sdX1 it should be /dev/mapper/sdX1
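If the pool member hasn't already been unlocked by unRAID, it can be opened manually with cryptsetup as in the Arch wiki section above. A sketch, assuming the raw device is /dev/sdf1 and using a mapper name that matches the note above:
cryptsetup open /dev/sdf1 sdf1
Then use /dev/mapper/sdf1 in place of /dev/sdX1 in the commands below.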
1) Mount filesystem read only (non-destructive)
Create a temporary mount point, e.g.:
mkdir /x
Now attempt to mount the filesystem read-only:
mount -o usebackuproot,ro /dev/sdX1 /x
For a single device: replace X with the actual device; don't forget the 1 at the end, e.g., /dev/sdf1
For a pool: replace X with any of the devices from the pool to mount the whole pool (as long as there are no devices missing); don't forget the 1 at the end, e.g., /dev/sdf1. If the normal read-only recovery mount doesn't work, e.g., because there's a damaged or missing device, use this instead:
mount -o degraded,usebackuproot,ro /dev/sdX1 /x
Replace X with any of the remaining pool devices to mount the whole pool; don't forget the 1 at the end, e.g., /dev/sdf1. If all devices are present and it doesn't mount with the first device you tried, use the other(s); the filesystem on one of them may be more damaged than the other(s).
Note that if more devices are missing than the profile's redundancy permits, it may still mount, but some data will be missing, e.g., mounting a 4-device raid1 pool with 2 devices missing will result in missing data.
In certain cases these additional options might also help (with or without usebackuproot and degraded):
mount -o ro,notreelog,nologreplay /dev/sdX1 /x
If it mounts, copy all the data from /x to another destination, like an array disk. You can use Midnight Commander (mc on the console/SSH) or your favorite tool. After all data is copied, format the device or pool and restore the data.
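For example, a rough sketch using rsync (assuming a hypothetical, pre-created destination folder on an array disk at /mnt/disk1/recovered):
rsync -avh /x/ /mnt/disk1/recovered/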
2) BTRFS restore (non-destructive)
If mounting read-only fails, try btrfs restore; it will try to copy all data to another disk. You need to create the destination folder first, e.g., create a folder named restore on disk2 and then:
btrfs restore -v /dev/sdX1 /mnt/disk2/restore
For a single device: replace X with the actual device; don't forget the 1 at the end, e.g., /dev/sdf1
For a pool: replace X with any of the devices from the pool to recover the whole pool; don't forget the 1 at the end, e.g., /dev/sdf1. If it doesn't work with the first device you tried, use the other(s).
If restoring from an unmountable array device, use mdX, where X is the disk number, e.g., to restore disk3:
btrfs restore -v /dev/md3 /mnt/disk2/restore
If the restore aborts due to an error, you can try adding -i to the command to skip errors, e.g.:
btrfs restore -vi /dev/sdX1 /mnt/disk2/restore
If it works, check that the restored data is OK, then format the original btrfs device or pool and restore the data.
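As a rough sanity check of the restored copy (just an example; compare the totals against what you expect to see):
du -sh /mnt/disk2/restore
find /mnt/disk2/restore -type f | wc -l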
3) BTRFS check --repair (destructive)
If all else fails, ask for help on the btrfs mailing list or #btrfs on IRC. If you don't want to do that, then as a last resort use check --repair:
If it's an array disk, first start the array in maintenance mode and use mdX, where X is the disk number, e.g., for disk5:
btrfs check --repair /dev/md5
For a cache device (or pool) stop the array and use sdX:
btrfs check --repair /dev/sdX1
Replace X with the actual device (use cache1 for a pool); don't forget the 1 at the end, e.g., /dev/sdf1
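Before running the destructive repair, a plain btrfs check without --repair (its default read-only mode) will only report problems without writing anything, e.g. for the same cache device:
btrfs check /dev/sdX1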