Package: lvm2
Version: 2.03.02-1
Severity: wishlist
File: /usr/share/initramfs-tools/hooks/lvm2
File: /usr/share/initramfs-tools/scripts/local-block/lvm2
File: /usr/share/initramfs-tools/scripts/local-top/lvm2
As a way to migrate away from a possibly failing disk in a single-disk
system without md RAID, I moved the disk to another system, plugged in
a second disk and added that disk as an LVM RAID1 mirror, converting
my system to lvmraid. Since then I've noticed that on boot, one of the
disks is not attached to the LVs, so I have to refresh each LV with
`lvchange --refresh`, after which the system syncs the LVs to the
additional drive. The sync is quick thanks to the lvmraid write-intent
bitmaps and the low volume of disk writes during boot, but it is
annoying to have to trigger it manually on every boot. In case it
changes anything, both PVs sit on top of individual LUKS-encrypted
volumes. I'm not sure how to change the initramfs scripts, but maybe
the lvmraid(7) manual page or the Red Hat LVM RAID guide will help:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/raid_volumes
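For now the workaround can be scripted; a rough sketch ("vg0" is a
placeholder VG name, not my actual one):

    # refresh every LV in the VG so the detached RAID1 leg is
    # reattached and resynced from the write-intent bitmap
    for lv in $(lvs --noheadings -o lv_name vg0); do
        lvchange --refresh "vg0/$lv"
    done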
-- System Information:
Debian Release: buster/sid
APT prefers testing-debug
APT policy: (900, 'testing-debug'), (900, 'testing'), (800, 'unstable-debug'), (800, 'unstable'), (790, 'buildd-unstable'), (700, 'experimental-debug'), (700, 'experimental'), (690, 'buildd-experimental')
Architecture: amd64 (x86_64)
Kernel: Linux 4.18.0-3-amd64 (SMP w/4 CPU cores)
Locale: LANG=en_AU.utf8, LC_CTYPE=en_AU.utf8 (charmap=UTF-8), LANGUAGE=en_AU.utf8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled
Versions of packages lvm2 depends on:
ii dmeventd 2:1.02.155-1
ii dmsetup 2:1.02.155-1
ii libaio1 0.3.111-1
ii libblkid1 2.33-0.2
ii libc6 2.28-2
ii libdevmapper-event1.02.1 2:1.02.155-1
ii libdevmapper1.02.1 2:1.02.155-1
ii libreadline5 5.2+dfsg-3+b2
ii libselinux1 2.8-1+b1
ii libsystemd0 240-2
ii libudev1 240-2
ii lsb-base 10.2018112800
Versions of packages lvm2 recommends:
ii thin-provisioning-tools 0.7.4-2
lvm2 suggests no packages.
-- Configuration Files:
/etc/lvm/lvm.conf changed [not included]
-- no debconf information
--
bye,
pabs
https://wiki.debian.org/PaulWise
Message from gold holk <[email protected]>, Sun, 19 Mar 2023 06:03:03 GMT:
Hey Paul and LVM team:
I faced this issue and managed to write an initramfs-tools boot script
to fix it.
Put the enclosed script in `/etc/initramfs-tools/scripts/local-top` and
run `update-initramfs -k all -u`; the initramfs boot stage will then run
the script and wait for the root device to reach `complete` status. If
it does not complete within 2 minutes, the script stops waiting and boot
continues.
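The enclosed script is not inlined in this mail, so below is only a
minimal sketch of the idea; the VG name "vg0", the use of
`vg_missing_pv_count` as the completeness check, and the 1-second
polling loop are illustrative assumptions, not necessarily what the
real script does:

    #!/bin/sh
    # Sketch: wait up to 2 minutes for all PVs of the root VG to appear,
    # then refresh its LVs so any detached RAID1 legs get resynced.
    PREREQ="lvm2"
    prereqs() { echo "$PREREQ"; }
    case "$1" in
        prereqs) prereqs; exit 0 ;;
    esac
    . /scripts/functions
    VG=vg0    # assumption: the volume group holding the root LV
    tries=0
    # vgs reports how many PVs the VG is missing; 0 means complete
    while [ "$(lvm vgs --noheadings -o vg_missing_pv_count "$VG" | tr -d ' ')" != "0" ]; do
        tries=$((tries + 1))
        if [ "$tries" -ge 120 ]; then
            log_warning_msg "$VG still incomplete after 120s, continuing boot"
            break
        fi
        sleep 1
    done
    # same effect as the manual `lvchange --refresh` from Paul's report
    lvm lvchange --refresh "$VG"

The PREREQ line makes initramfs-tools order this script after the stock
lvm2 local-top script, so VG activation has already been attempted by
the time it runs.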
For writing or debugging initramfs scripts, the `initramfs-tools(7)`
manual page is helpful.
As I understand it, to fix this issue in the lvm2 package, we should
make `/usr/share/initramfs-tools/scripts/local-top/lvm2` check the
status of the activated LVs. Please tell me if I can help merge my
script into the existing lvm2 initramfs script.
I am new to the Debian community. This is the first time I have written
an initramfs script, though I am already experienced with normal shell
scripts.
I also asked and answered this issue on the Stack Exchange network; it
may be helpful if you want more details:
https://superuser.com/questions/1773241/raid-1-lv-partially-up-and-unable-to-repair-the-down-rimage-is-missing
May the source be with you
--
linux user, amateur web developer, geomatics major.
blog: http://gholk.github.io
Follow-up from gold holk <[email protected]>, Sun, 19 Mar 2023 06:12:03 GMT:
Sorry, there was a bug in the script's prereq handling.
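(For reference, the prereq convention from initramfs-tools(7) that this
concerns: a boot script must answer the `prereqs` argument with the
names of the scripts it must run after, along these lines:

    PREREQ="lvm2"
    prereqs() { echo "$PREREQ"; }
    case "$1" in
        prereqs) prereqs; exit 0 ;;
    esac

Get this wrong and the script can run before the stock lvm2 script has
activated the VG.)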