
#1 On 09/11/2020 at 01:02

khamphou

Help recovering data from an inactive RAID10

Good evening,

I had a QNAP TVS-673 NAS that gave up the ghost after more than a year of service.

Following a power cut, the enclosure would no longer start: it stayed stuck in the boot cycle, alternating between booting, starting service and starting server, but it never reached the NAS itself, even after several hours of waiting.
It held one RAID 10 group made of 4x Seagate IronWolf drives, paired with two M.2 SSDs for QTier (md1 and md2), plus a separate single-disk RAID group (one Seagate drive => md3).
I tried booting the system from an Ubuntu boot USB key; that worked, but as soon as I use the interface to open applications or anything else, it freezes, which reinforces my belief that the problem is hardware.
I bought a new NAS, a TVS-672N, and plugged my disks into it, but they are not accessible: they show up as inactive.

For the record, I am not very familiar with the Linux world; I come from Windows.

The recovery button in the QNAP interface does not work and cannot bring the RAID group back up: the RAID stays inactive. On top of that, the two SSDs are not detected in the RAID group; I later found out that my M.2 SSDs are not compatible with the motherboard of this 672N NAS, which only accepts NVMe SSDs.
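
For reference, the read-only checks that show the state of an md array are roughly the following (a sketch; the device names are the ones from my box and may differ elsewhere):

cat /proc/mdstat                                         # arrays the kernel currently knows about and their state
mdadm --examine /dev/sdc3 /dev/sdd3 /dev/sde3 /dev/sdf3  # RAID superblock written on each member partition
md_checker                                               # QNAP's own wrapper, same kind of information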

================================================================================================================================
I ran a command to see the list of devices, and my RAID10 is not in it, while the third RAID is there (cachedev3)
================================================================================================================================

Filesystem Size Used Available Use% Mounted on
none 400.0M 359.0M 41.0M 90% /
devtmpfs 15.6G 8.0K 15.6G 0% /dev
tmpfs 64.0M 440.0K 63.6M 1% /tmp
tmpfs 15.7G 132.0K 15.7G 0% /dev/shm
tmpfs 16.0M 12.0K 16.0M 0% /share
tmpfs 16.0M 0 16.0M 0% /mnt/snapshot/export
/dev/md9 493.5M 195.6M 297.9M 40% /mnt/HDA_ROOT
cgroup_root 15.7G 0 15.7G 0% /sys/fs/cgroup
/dev/mapper/ce_cachedev3
7.2T 4.5T 2.6T 63% /share/CE_CACHEDEV3_DATA
/dev/md13 417.0M 390.1M 26.9M 94% /mnt/ext
/dev/ram2 433.9M 2.3M 431.6M 1% /mnt/update
tmpfs 64.0M 4.4M 59.6M 7% /samba
tmpfs 16.0M 96.0K 15.9M 1% /samba/.samba/lock/msg.lock
tmpfs 16.0M 0 16.0M 0% /mnt/ext/opt/samba/private/msg.sock
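
For comparison, a command along these lines lists the block devices whether or not they are mounted (a sketch; I am not sure lsblk ships with QTS, in which case /proc/partitions gives a similar list):

lsblk -o NAME,SIZE,TYPE,MOUNTPOINT   # block devices with size, type and mount point
cat /proc/partitions                 # fallback if lsblk is not available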

================================================================================================================================
On the other hand, it does appear in mdstat
================================================================================================================================

$ cat /proc/mdstat

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md3 : active raid1 sda3[0]
7804071616 blocks super 1.0 [1/1]

md2 : active raid10 sdf3[0] sdc3[3] sdd3[2] sde3[1]
7794126848 blocks super 1.0 512K chunks 2 near-copies [4/4] [UUUU]
bitmap: 0/59 pages [0KB], 65536KB chunk

md322 : active raid1 sda5[4](S) sdc5[3](S) sdd5[2](S) sde5[1] sdf5[0]
7235136 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk

md256 : active raid1 sda2[4](S) sdc2[3](S) sdd2[2](S) sde2[1] sdf2[0]
530112 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk

md13 : active raid1 sda4[38] sdf4[33] sdc4[36] sdd4[35] sde4[34]
458880 blocks super 1.0 [32/5] [UUUUU___________________________]
bitmap: 1/1 pages [4KB], 65536KB chunk

md9 : active raid1 sda1[4](S) sdc1[0] sdf1[3] sde1[2] sdd1[1]
530048 blocks super 1.0 [4/4] [UUUU]

unused devices: <none>

================================================================================================================================
Browsing around the internet, I tried a few actions carried out by other (US) users who had a similar problem with a RAID stuck in the inactive state.
Running the command /etc/init.d/init_lvm.sh was the start of a long ordeal in trying to bring this RAID back up:
it did reset the NAS configuration and detach the two SSDs, which were indeed absent, but the 4 disks were no longer recognized as a RAID group.

================================================================================================================================

$ /etc/init.d/init_lvm.sh

Changing old config name...
Reinitialing...
Fail to lock log file!
: Permission denied
Detect disk(8, 80)...
dev_count ++ = 0Detect disk(8, 48)...
dev_count ++ = 1Detect disk(8, 16)...
dev_count ++ = 2Detect disk(8, 96)...
ignore non-root enclosure disk(8, 96).
Detect disk(8, 64)...
dev_count ++ = 3Detect disk(8, 32)...
dev_count ++ = 4Detect disk(8, 0)...
Fail to lock log file!
: Permission denied
dev_count ++ = 5Detect disk(8, 80)...
sg_inq: error opening file: /dev/sg5: Permission denied
sginfo(open): Permission denied
file=/dev/sg5, or no corresponding sg device found
Is sg driver loaded?
Detect disk(8, 48)...
sg_inq: error opening file: /dev/sg3: Permission denied
sginfo(open): Permission denied
file=/dev/sg3, or no corresponding sg device found
Is sg driver loaded?
Detect disk(8, 16)...
sg_inq: error opening file: /dev/sg1: Permission denied
sginfo(open): Permission denied
file=/dev/sg1, or no corresponding sg device found
Is sg driver loaded?
Detect disk(8, 96)...
ignore non-root enclosure disk(8, 96).
Detect disk(8, 64)...
sg_inq: error opening file: /dev/sg4: Permission denied
sginfo(open): Permission denied
file=/dev/sg4, or no corresponding sg device found
Is sg driver loaded?
Detect disk(8, 32)...
sg_inq: error opening file: /dev/sg2: Permission denied
sginfo(open): Permission denied
file=/dev/sg2, or no corresponding sg device found
Is sg driver loaded?
Detect disk(8, 0)...
sg_inq: error opening file: /dev/sg0: Permission denied
sginfo(open): Permission denied
file=/dev/sg0, or no corresponding sg device found
Is sg driver loaded?
Fail to lock log file!
: Permission denied
Fail to lock log file!
: Permission denied
count = -1
sys_startup_p2:got called count = 5
Fail to lock log file!
: Permission denied
Perform NAS model checking...
Fail to lock log file!
: Permission denied
Fail to lock log file!
: Permission denied
Fail to lock log file!
: Permission denied
Fail to lock log file!
: Permission denied
Fail to retrieve HDD model or architecture, model checking abort!
Done
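
With hindsight, before running a script like that I should probably have saved the existing RAID and LVM metadata first. A minimal, read-only sketch of what I mean (assuming mdadm and the LVM tools behave here as on a stock Linux):

mdadm --examine --scan > /tmp/md_arrays.txt            # array UUIDs and levels as recorded on the disks
mdadm --examine /dev/sd[cdef]3 > /tmp/md1_members.txt  # full superblock dump of the four RAID10 members
vgcfgbackup                                            # back up the metadata of every LVM volume group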

================================================================================================================================
I then switch to the built-in QNAP admin account.

I subsequently ran the commands below to reassemble them
================================================================================================================================

[~] # md_checker

Welcome to MD superblock checker (v2.0) - have a nice day~

Scanning system...


RAID metadata found!
UUID: 6f39b015:4809705c:dcb7876a:e077729b
Level: raid10
Devices: 4
Name: md1
Chunk Size: 512K
md Version: 1.0
Creation Time: Jun 8 09:46:34 2019
Status: OFFLINE
===============================================================================================
Enclosure | Port | Block Dev Name | # | Status | Last Update Time | Events | Array State
===============================================================================================
NAS_HOST 1 /dev/sdf3 0 Active Oct 30 17:54:52 2020 375 AAAA
NAS_HOST 2 /dev/sde3 1 Active Oct 30 17:54:52 2020 375 AAAA
NAS_HOST 3 /dev/sdd3 2 Active Oct 30 17:54:52 2020 375 AAAA
NAS_HOST 4 /dev/sdc3 3 Active Oct 30 17:54:52 2020 375 AAAA
===============================================================================================


RAID metadata found!
UUID: 87090714:0426f111:ac2fd668:e993d1f6
Level: raid1
Devices: 1
Name: md3
Chunk Size: -
md Version: 1.0
Creation Time: Dec 2 21:41:25 2019
Status: ONLINE (md3)
===============================================================================================
Enclosure | Port | Block Dev Name | # | Status | Last Update Time | Events | Array State
===============================================================================================
NAS_HOST 6 /dev/sda3 0 Active Oct 30 18:04:23 2020 58 A
===============================================================================================

[~] # /etc/init.d/init_lvm.sh

Changing old config name...
Reinitialing...
Detect disk(8, 80)...
dev_count ++ = 0Detect disk(8, 48)...
dev_count ++ = 1Detect disk(8, 16)...
dev_count ++ = 2Detect disk(8, 96)...
ignore non-root enclosure disk(8, 96).
Detect disk(253, 0)...
ignore non-root enclosure disk(253, 0).
Detect disk(8, 64)...
dev_count ++ = 3Detect disk(8, 32)...
dev_count ++ = 4Detect disk(8, 0)...
dev_count ++ = 5Detect disk(8, 80)...
Detect disk(8, 48)...
Detect disk(8, 16)...
Detect disk(8, 96)...
ignore non-root enclosure disk(8, 96).
Detect disk(253, 0)...
ignore non-root enclosure disk(253, 0).
Detect disk(8, 64)...
Detect disk(8, 32)...
Detect disk(8, 0)...
sys_startup_p2:got called count = -1
Found duplicate PV wTlxOH0izJXMSScXgOYQ1oWT7u5YCftE: using /dev/drbd1 not /dev/md1
Using duplicate PV /dev/drbd1 from subsystem DRBD, ignoring /dev/md1
WARNING: Device for PV q3Y8CI-XAQl-q7we-bnTX-fFnl-IWx4-8iLXVw not found or rejected by a filter.
WARNING: duplicate PV xeBs5FCiUaA8f8VtL51Hz0zsI0JHcK0x is being used from both devices /dev/drbd3 and /dev/md3
Found duplicate PV xeBs5FCiUaA8f8VtL51Hz0zsI0JHcK0x: using /dev/drbd3 not /dev/md3
Using duplicate PV /dev/drbd3 from subsystem DRBD, ignoring /dev/md3
Done

[~] # md_checker

Welcome to MD superblock checker (v2.0) - have a nice day~

Scanning system...


RAID metadata found!
UUID: e5cdd7ed:9df51c8c:434cd696:95c56f61
Level: raid10
Devices: 4
Name: md1
Chunk Size: 512K
md Version: 1.0
Creation Time: Oct 30 18:49:16 2020
Status: ONLINE (md1) [UUUU]
===============================================================================================
Enclosure | Port | Block Dev Name | # | Status | Last Update Time | Events | Array State
===============================================================================================
NAS_HOST 1 /dev/sdf3 0 Active Oct 30 18:58:53 2020 2 AAAA
NAS_HOST 2 /dev/sde3 1 Active Oct 30 18:58:53 2020 2 AAAA
NAS_HOST 3 /dev/sdd3 2 Active Oct 30 18:58:53 2020 2 AAAA
NAS_HOST 4 /dev/sdc3 3 Active Oct 30 18:58:53 2020 2 AAAA
===============================================================================================


RAID metadata found!
UUID: 87090714:0426f111:ac2fd668:e993d1f6
Level: raid1
Devices: 1
Name: md3
Chunk Size: -
md Version: 1.0
Creation Time: Dec 2 21:41:25 2019
Status: ONLINE (md3)
===============================================================================================
Enclosure | Port | Block Dev Name | # | Status | Last Update Time | Events | Array State
===============================================================================================
NAS_HOST 6 /dev/sda3 0 Active Oct 30 18:59:12 2020 58 A
===============================================================================================

[~] # blkid

/dev/ram2: UUID="5e1988b6-c88f-491d-ad81-8d465d83feac" TYPE="ext2"
/dev/ram3: UUID="0771041a-ba45-4300-bcc6-04c083bec337" TYPE="ext2"
/dev/sda1: UUID="1c3222ce-5cc4-47fb-8b9e-006c0afc2ff8" SEC_TYPE="ext2" TYPE="ext3"
/dev/sda3: UUID="xeBs5F-CiUa-A8f8-VtL5-1Hz0-zsI0-JHcK0x" TYPE="lvm2pv"
/dev/sda4: UUID="e5eb2ed5-2abb-43f9-b59c-31ab02a02dc1" TYPE="ext3"
/dev/sdc1: UUID="1c3222ce-5cc4-47fb-8b9e-006c0afc2ff8" TYPE="ext3"
/dev/sdc2: TYPE="swap"
/dev/sdc4: UUID="e5eb2ed5-2abb-43f9-b59c-31ab02a02dc1" TYPE="ext3"
/dev/sdc5: TYPE="swap"
/dev/sdd1: UUID="1c3222ce-5cc4-47fb-8b9e-006c0afc2ff8" TYPE="ext3"
/dev/sdd2: TYPE="swap"
/dev/sdd4: UUID="e5eb2ed5-2abb-43f9-b59c-31ab02a02dc1" TYPE="ext3"
/dev/sdd5: TYPE="swap"
/dev/sde1: UUID="1c3222ce-5cc4-47fb-8b9e-006c0afc2ff8" TYPE="ext3"
/dev/sde2: TYPE="swap"
/dev/sde3: UUID="wTlxOH-0izJ-XMSS-cXgO-YQ1o-WT7u-5YCftE" TYPE="lvm2pv"
/dev/sde4: UUID="e5eb2ed5-2abb-43f9-b59c-31ab02a02dc1" TYPE="ext3"
/dev/sde5: TYPE="swap"
/dev/sdf1: UUID="1c3222ce-5cc4-47fb-8b9e-006c0afc2ff8" TYPE="ext3"
/dev/sdf2: TYPE="swap"
/dev/sdf3: UUID="wTlxOH-0izJ-XMSS-cXgO-YQ1o-WT7u-5YCftE" TYPE="lvm2pv"
/dev/sdf4: UUID="e5eb2ed5-2abb-43f9-b59c-31ab02a02dc1" TYPE="ext3"
/dev/sdf5: TYPE="swap"
/dev/sdg1: UUID="9333eb40-8071-460b-972f-a3192d483667" TYPE="ext2"
/dev/sdg2: LABEL="QTS_BOOT_PART2" UUID="46c54622-2972-4d38-a1a7-909e5700d782" TYPE="ext2"
/dev/sdg3: LABEL="QTS_BOOT_PART3" UUID="95f9e32b-f3d0-482f-bbe1-3a26e1a93fa1" TYPE="ext2"
/dev/sdg5: UUID="9015507c-d233-41e0-8480-0fe81ff64be2" TYPE="ext2"
/dev/sdg6: UUID="dd4b087d-8e10-4331-95fa-d4076d136a99" TYPE="ext2"
/dev/sdg7: UUID="abc0f87b-9169-424c-887f-2c655399dc3f" TYPE="ext2"
/dev/md9: UUID="1c3222ce-5cc4-47fb-8b9e-006c0afc2ff8" TYPE="ext3"
/dev/md13: UUID="e5eb2ed5-2abb-43f9-b59c-31ab02a02dc1" TYPE="ext3"
/dev/md256: TYPE="swap"
/dev/md322: TYPE="swap"
/dev/md3: UUID="xeBs5F-CiUa-A8f8-VtL5-1Hz0-zsI0-JHcK0x" TYPE="lvm2pv"
/dev/drbd3: UUID="xeBs5F-CiUa-A8f8-VtL5-1Hz0-zsI0-JHcK0x" TYPE="lvm2pv"
/dev/mapper/vg288-lv3: UUID="12022ab8-013c-42bd-8e91-f17719bea473" TYPE="crypt_LUKS"
/dev/mapper/cachedev3: UUID="12022ab8-013c-42bd-8e91-f17719bea473" TYPE="crypt_LUKS"
/dev/mapper/ce_cachedev3: LABEL="DataNang" UUID="ae399087-278b-4130-9b9f-d235124d94fd" TYPE="ext4"
/dev/md1: UUID="wTlxOH-0izJ-XMSS-cXgO-YQ1o-WT7u-5YCftE" TYPE="lvm2pv"
/dev/drbd1: UUID="wTlxOH-0izJ-XMSS-

[~] # mdadm --assemble --scan

mdadm: scan_assemble: failed to get exclusive lock on mapfile
mdadm: No arrays found in config file
[~] # mdadm --detail --scan
ARRAY /dev/md9 metadata=1.0 spares=1 name=9 UUID=6721f5f9:7bf25d97:e66d9587:5ccc759b
ARRAY /dev/md13 metadata=1.0 name=13 UUID=449ba1dc:b07706bf:f276ae5a:acf8f85d
ARRAY /dev/md256 metadata=1.0 spares=3 name=256 UUID=8186a35b:40de67b1:fd619c1a:ac0cc1d0
ARRAY /dev/md322 metadata=1.0 spares=3 name=322 UUID=edd20d4d:6a5154c7:db397469:c0e187a8
ARRAY /dev/md3 metadata=1.0 name=3 UUID=87090714:0426f111:ac2fd668:e993d1f6
ARRAY /dev/md1 metadata=1.0 name=1 UUID=e5cdd7ed:9df51c8c:434cd696:95c56f61
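
In hindsight, I wonder whether --assemble --scan only failed because there is no mdadm config file. From what I have read, the usual fix is to generate one from the on-disk superblocks, something like this (a sketch; I have not checked that this mdadm build actually reads /etc/mdadm.conf on QTS):

mdadm --examine --scan > /etc/mdadm.conf   # write ARRAY lines based on the superblocks found on the disks
mdadm --assemble --scan                    # retry assembly using that config file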

[~] # mdadm -CfR --assume-clean /dev/md1 -l 10 -n 4 /dev/sdf3 /dev/sde3 /dev/sdd3 /dev/sdc3

mdadm: /dev/sdf3 appears to be part of a raid array:
level=raid10 devices=4 ctime=Sat Jun 8 09:46:34 2019
mdadm: /dev/sde3 appears to be part of a raid array:
level=raid10 devices=4 ctime=Sat Jun 8 09:46:34 2019
mdadm: /dev/sdd3 appears to be part of a raid array:
level=raid10 devices=4 ctime=Sat Jun 8 09:46:34 2019
mdadm: /dev/sdc3 appears to be part of a raid array:
level=raid10 devices=4 ctime=Sat Jun 8 09:46:34 2019
mdadm: Defaulting to version 1.0 metadata
mdadm: array /dev/md1 started.
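
Looking back, I believe the less risky route would have been to assemble the existing array rather than recreate it: mdadm -C writes brand-new superblocks on the members (new UUID, event counter reset), whereas --assemble only re-reads the ones already there. A sketch of what I mean, using the old array UUID reported by md_checker (not tested on my box):

mdadm --assemble --force /dev/md1 /dev/sdf3 /dev/sde3 /dev/sdd3 /dev/sdc3   # reuse the existing superblocks
mdadm --assemble --scan --uuid=6f39b015:4809705c:dcb7876a:e077729b          # or select the members by array UUID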

=============================================================================
then a few commands to check their status
=============================================================================



[~] # mdadm --examine /dev/sdf3

/dev/sdf3:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x1
Array UUID : 6f39b015:4809705c:dcb7876a:e077729b
Name : 1
Creation Time : Sat Jun 8 09:46:34 2019
Raid Level : raid10
Raid Devices : 4

Avail Dev Size : 7794127240 (3716.53 GiB 3990.59 GB)
Array Size : 7794126848 (7433.06 GiB 7981.19 GB)
Used Dev Size : 7794126848 (3716.53 GiB 3990.59 GB)
Super Offset : 7794127504 sectors
Unused Space : before=0 sectors, after=616 sectors
State : clean
Device UUID : 04de8422:31e17dc2:984bdd59:f0204634

Internal Bitmap : -40 sectors from superblock
Update Time : Fri Oct 30 17:54:52 2020
Bad Block Log : 512 entries available at offset -8 sectors
Checksum : c2c27591 - correct
Events : 375

Layout : near=2
Chunk Size : 512K

Device Role : Active device 0
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)

[~] # mdadm --examine /dev/sde3

/dev/sde3:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x1
Array UUID : 6f39b015:4809705c:dcb7876a:e077729b
Name : 1
Creation Time : Sat Jun 8 09:46:34 2019
Raid Level : raid10
Raid Devices : 4

Avail Dev Size : 7794127240 (3716.53 GiB 3990.59 GB)
Array Size : 7794126848 (7433.06 GiB 7981.19 GB)
Used Dev Size : 7794126848 (3716.53 GiB 3990.59 GB)
Super Offset : 7794127504 sectors
Unused Space : before=0 sectors, after=616 sectors
State : clean
Device UUID : 50a0276d:f6c194c5:2481e06a:6ef32c26

Internal Bitmap : -40 sectors from superblock
Update Time : Fri Oct 30 17:54:52 2020
Bad Block Log : 512 entries available at offset -8 sectors
Checksum : 136620ae - correct
Events : 375

Layout : near=2
Chunk Size : 512K

Device Role : Active device 1
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)

[~] # mdadm --examine /dev/sdd3

/dev/sdd3:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x1
Array UUID : 6f39b015:4809705c:dcb7876a:e077729b
Name : 1
Creation Time : Sat Jun 8 09:46:34 2019
Raid Level : raid10
Raid Devices : 4

Avail Dev Size : 7794127240 (3716.53 GiB 3990.59 GB)
Array Size : 7794126848 (7433.06 GiB 7981.19 GB)
Used Dev Size : 7794126848 (3716.53 GiB 3990.59 GB)
Super Offset : 7794127504 sectors
Unused Space : before=0 sectors, after=616 sectors
State : clean
Device UUID : b491e8fe:86d1933c:2cf4511d:ed483a7e

Internal Bitmap : -40 sectors from superblock
Update Time : Fri Oct 30 17:54:52 2020
Bad Block Log : 512 entries available at offset -8 sectors
Checksum : 26a4ea2a - correct
Events : 375

Layout : near=2
Chunk Size : 512K

Device Role : Active device 2
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)

[~] # mdadm --examine /dev/sdc3

/dev/sdc3:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x1
Array UUID : 6f39b015:4809705c:dcb7876a:e077729b
Name : 1
Creation Time : Sat Jun 8 09:46:34 2019
Raid Level : raid10
Raid Devices : 4

Avail Dev Size : 7794127240 (3716.53 GiB 3990.59 GB)
Array Size : 7794126848 (7433.06 GiB 7981.19 GB)
Used Dev Size : 7794126848 (3716.53 GiB 3990.59 GB)
Super Offset : 7794127504 sectors
Unused Space : before=0 sectors, after=616 sectors
State : clean
Device UUID : 3f09ffd5:9fd72fa4:3d680185:6ae75238

Internal Bitmap : -40 sectors from superblock
Update Time : Fri Oct 30 17:54:52 2020
Bad Block Log : 512 entries available at offset -8 sectors
Checksum : 871f7a5d - correct
Events : 375

Layout : near=2
Chunk Size : 512K

Device Role : Active device 3
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
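
A quick, read-only way to compare the four members side by side instead of reading each block above one by one (a sketch):

mdadm --examine /dev/sd[cdef]3 | grep -E 'Array UUID|Events|Update Time|Device Role'   # UUID, Events and Update Time should match; roles should run 0 to 3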



[~] # blkid

/dev/ram2: UUID="5e1988b6-c88f-491d-ad81-8d465d83feac" TYPE="ext2"
/dev/ram3: UUID="0771041a-ba45-4300-bcc6-04c083bec337" TYPE="ext2"
/dev/sda1: UUID="1c3222ce-5cc4-47fb-8b9e-006c0afc2ff8" SEC_TYPE="ext2" TYPE="ext3"
/dev/sda3: UUID="xeBs5F-CiUa-A8f8-VtL5-1Hz0-zsI0-JHcK0x" TYPE="lvm2pv"
/dev/sda4: UUID="e5eb2ed5-2abb-43f9-b59c-31ab02a02dc1" TYPE="ext3"
/dev/sdc1: UUID="1c3222ce-5cc4-47fb-8b9e-006c0afc2ff8" TYPE="ext3"
/dev/sdc2: TYPE="swap"
/dev/sdc4: UUID="e5eb2ed5-2abb-43f9-b59c-31ab02a02dc1" TYPE="ext3"
/dev/sdc5: TYPE="swap"
/dev/sdd1: UUID="1c3222ce-5cc4-47fb-8b9e-006c0afc2ff8" TYPE="ext3"
/dev/sdd2: TYPE="swap"
/dev/sdd4: UUID="e5eb2ed5-2abb-43f9-b59c-31ab02a02dc1" TYPE="ext3"
/dev/sdd5: TYPE="swap"
/dev/sde1: UUID="1c3222ce-5cc4-47fb-8b9e-006c0afc2ff8" TYPE="ext3"
/dev/sde2: TYPE="swap"
/dev/sde3: UUID="wTlxOH-0izJ-XMSS-cXgO-YQ1o-WT7u-5YCftE" TYPE="lvm2pv"
/dev/sde4: UUID="e5eb2ed5-2abb-43f9-b59c-31ab02a02dc1" TYPE="ext3"
/dev/sde5: TYPE="swap"
/dev/sdf1: UUID="1c3222ce-5cc4-47fb-8b9e-006c0afc2ff8" TYPE="ext3"
/dev/sdf2: TYPE="swap"
/dev/sdf3: UUID="wTlxOH-0izJ-XMSS-cXgO-YQ1o-WT7u-5YCftE" TYPE="lvm2pv"
/dev/sdf4: UUID="e5eb2ed5-2abb-43f9-b59c-31ab02a02dc1" TYPE="ext3"
/dev/sdf5: TYPE="swap"
/dev/sdg1: UUID="9333eb40-8071-460b-972f-a3192d483667" TYPE="ext2"
/dev/sdg2: LABEL="QTS_BOOT_PART2" UUID="46c54622-2972-4d38-a1a7-909e5700d782" TYPE="ext2"
/dev/sdg3: LABEL="QTS_BOOT_PART3" UUID="95f9e32b-f3d0-482f-bbe1-3a26e1a93fa1" TYPE="ext2"
/dev/sdg5: UUID="9015507c-d233-41e0-8480-0fe81ff64be2" TYPE="ext2"
/dev/sdg6: UUID="dd4b087d-8e10-4331-95fa-d4076d136a99" TYPE="ext2"
/dev/sdg7: UUID="abc0f87b-9169-424c-887f-2c655399dc3f" TYPE="ext2"
/dev/md9: UUID="1c3222ce-5cc4-47fb-8b9e-006c0afc2ff8" TYPE="ext3"
/dev/md13: UUID="e5eb2ed5-2abb-43f9-b59c-31ab02a02dc1" TYPE="ext3"
/dev/md256: TYPE="swap"
/dev/md322: TYPE="swap"
/dev/md3: UUID="xeBs5F-CiUa-A8f8-VtL5-1Hz0-zsI0-JHcK0x" TYPE="lvm2pv"
/dev/drbd3: UUID="xeBs5F-CiUa-A8f8-VtL5-1Hz0-zsI0-JHcK0x" TYPE="lvm2pv"
/dev/mapper/vg288-lv3: UUID="12022ab8-013c-42bd-8e91-f17719bea473" TYPE="crypt_LUKS"
/dev/mapper/cachedev3: UUID="12022ab8-013c-42bd-8e91-f17719bea473" TYPE="crypt_LUKS"
/dev/mapper/ce_cachedev3: LABEL="DataNang" UUID="ae399087-278b-4130-9b9f-d235124d94fd" TYPE="ext4"
/dev/md1: UUID="wTlxOH-0izJ-XMSS-cXgO-YQ1o-WT7u-5YCftE" TYPE="lvm2pv"
/dev/drbd1: UUID="wTlxOH-0izJ-XMSS-

================================================================================================================================
A blog post said to run the mountall command to attach the group, but it does not seem to have worked properly
================================================================================================================================

# /etc/init.d/mountall

Update Extended /flashfs_tmp/boot/rootfs_ext.tgz...
/dev/md9 /mnt/HDA_ROOT ext3 rw,relatime,data=ordered 0 0
mount: /dev/md13 already mounted or /mnt/ext busy
mount: according to mtab, /dev/md13 is already mounted on /mnt/ext
install /mnt/HDA_ROOT/update_pkg/gconv.tgz
install /mnt/HDA_ROOT/update_pkg/samba4.tbz
/bin/tar: ajax_obj/extjs/languages.js: Cannot open: File exists
/bin/tar: ajax_obj/extjs/source/locale/ext-lang-af.js: Cannot open: File exists
/bin/tar: ajax_obj/extjs/source/locale/ext-lang-hr.js: Cannot open: File exists
/bin/tar: ajax_obj/extjs/source/locale/ext-lang-zh_CN.js: Cannot open: File exists
...
ln: /etc/raddb/raddb: Input/output error
rm: can't remove '/mnt/ext/opt/samba': No such file or directory
/bin/tar: Exiting with failure status due to previous errors
install /mnt/HDA_ROOT/update_pkg/ldap_server.tgz
ln: /etc/openldap/schema/schema: Input/output error
install /mnt/HDA_ROOT/update_pkg/avahi0630.tgz
install /mnt/HDA_ROOT/update_pkg/Python.tgz
install /mnt/HDA_ROOT/update_pkg/mariadb5.tgz
install /mnt/HDA_ROOT/update_pkg/vim.tgz
rm: can't remove '/mnt/ext/opt/Python': No such file or directory
install /mnt/HDA_ROOT/update_pkg/wifi.tgz
install /mnt/HDA_ROOT/update_pkg/qcli.tgz
install /mnt/HDA_ROOT/update_pkg/libboost.tgz
dev-mapper ready.
install /mnt/HDA_ROOT/update_pkg/mtpBinary.tgz
install /mnt/HDA_ROOT/update_pkg/chassisView.tgz
Get bcclient pid, output:6038 17253

another bcclient instance is running.
ln: /home/httpd/cgi-bin/apps/storageManagerV2/images/ChassisView/JBOD/chassisView: Input/output error
[/share] #
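
From what I gather, once md1 is back the LVM side would normally be reactivated by hand with something along these lines (a sketch; QNAP wraps all of this in its own cachedev/DRBD layers, so it may not be the whole story on QTS, and lv1 and /mnt/recovery below are placeholders, not names from my box):

vgscan                                   # rescan block devices for volume groups
vgchange -ay vg1                         # activate the logical volumes of vg1
lvs vg1                                  # list the LVs to get their real names
mkdir -p /mnt/recovery                   # placeholder mount point
mount -o ro /dev/vg1/lv1 /mnt/recovery   # placeholder LV name; mounted read-only while recovering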

================================================================================================================================
I try the command below; could this command have wrecked my RAID array?
================================================================================================================================

[~] # sudo mdadm -CfR --assume-clean /dev/md1 -l 10 -n 4 -c 64 -e 1.0 /dev/sdf3 /dev/sde3 /dev/sdd3 /dev/sdc3

mdadm: super1.x cannot open /dev/sdf3: Device or resource busy
mdadm: /dev/sdf3 is not suitable for this array.
mdadm: super1.x cannot open /dev/sde3: Device or resource busy
mdadm: /dev/sde3 is not suitable for this array.
mdadm: super1.x cannot open /dev/sdd3: Device or resource busy
mdadm: /dev/sdd3 is not suitable for this array.
mdadm: super1.x cannot open /dev/sdc3: Device or resource busy
mdadm: /dev/sdc3 is not suitable for this array.
mdadm: create aborted

================================================================================================================================
I run a check on the disks

================================================================================================================================

[/] # fdisk -lu

Disk /dev/sda: 8001.5 GB, 8001563222016 bytes
255 heads, 63 sectors/track, 972801 cylinders, total 15628053168 sectors
Units = sectors of 1 * 512 = 512 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1  4294967295  2147483647+  ee  EFI GPT

Disk /dev/sdb: 320.0 GB, 320071851520 bytes
255 heads, 63 sectors/track, 38913 cylinders, total 625140335 sectors
Units = sectors of 1 * 512 = 512 bytes

   Device Boot      Start         End      Blocks   Id  System

Disk /dev/sdc: 4000.7 GB, 4000787030016 bytes
255 heads, 63 sectors/track, 486401 cylinders, total 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1  4294967295  2147483647+  ee  EFI GPT

Disk /dev/sdd: 4000.7 GB, 4000787030016 bytes
255 heads, 63 sectors/track, 486401 cylinders, total 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1  4294967295  2147483647+  ee  EFI GPT

Disk /dev/sde: 4000.7 GB, 4000787030016 bytes
255 heads, 63 sectors/track, 486401 cylinders, total 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1  4294967295  2147483647+  ee  EFI GPT

Disk /dev/sdf: 4000.7 GB, 4000787030016 bytes
255 heads, 63 sectors/track, 486401 cylinders, total 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdf1               1  4294967295  2147483647+  ee  EFI GPT

Disk /dev/sdg: 4982 MB, 4982833152 bytes
8 heads, 32 sectors/track, 38016 cylinders, total 9732096 sectors
Units = sectors of 1 * 512 = 512 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdg1               8       10495        5244   83  Linux
/dev/sdg2           10496     1010687      500096   83  Linux
/dev/sdg3         1010688     2010879      500096   83  Linux
/dev/sdg4         2010880     7741439     2865280    5  Extended
/dev/sdg5         2010888     2027519        8316   83  Linux
/dev/sdg6         2027528     2044927        8700   83  Linux
/dev/sdg7         2044936     7741439     2848252   83  Linux

Disk /dev/md9: 542 MB, 542769152 bytes
2 heads, 4 sectors/track, 132512 cylinders, total 1060096 sectors
Units = sectors of 1 * 512 = 512 bytes

Disk /dev/md9 doesn't contain a valid partition table

Disk /dev/md13: 469 MB, 469893120 bytes
2 heads, 4 sectors/track, 114720 cylinders, total 917760 sectors
Units = sectors of 1 * 512 = 512 bytes

Disk /dev/md13 doesn't contain a valid partition table

Disk /dev/md256: 542 MB, 542834688 bytes
2 heads, 4 sectors/track, 132528 cylinders, total 1060224 sectors
Units = sectors of 1 * 512 = 512 bytes

Disk /dev/md256 doesn't contain a valid partition table

Disk /dev/md322: 7408 MB, 7408779264 bytes
2 heads, 4 sectors/track, 1808784 cylinders, total 14470272 sectors
Units = sectors of 1 * 512 = 512 bytes

Disk /dev/md322 doesn't contain a valid partition table

Disk /dev/md1: 7981.1 GB, 7981185892352 bytes
2 heads, 4 sectors/track, 1948531712 cylinders, total 15588253696 sectors
Units = sectors of 1 * 512 = 512 bytes

Disk /dev/md1 doesn't contain a valid partition table

Disk /dev/md3: 7991.3 GB, 7991369334784 bytes
2 heads, 4 sectors/track, 1951017904 cylinders, total 15608143232 sectors
Units = sectors of 1 * 512 = 512 bytes

Disk /dev/md3 doesn't contain a valid partition table

Disk /dev/dm-1: 7911.3 GB, 7911329759232 bytes
255 heads, 63 sectors/track, 961831 cylinders, total 15451815936 sectors
Units = sectors of 1 * 512 = 512 bytes

Disk /dev/dm-1 doesn't contain a valid partition table

Disk /dev/dm-0: 7911.3 GB, 7911329759232 bytes
255 heads, 63 sectors/track, 961831 cylinders, total 15451815936 sectors
Units = sectors of 1 * 512 = 512 bytes

Disk /dev/dm-0 doesn't contain a valid partition table

Disk /dev/dm-2: 7911.3 GB, 7911327662080 bytes
255 heads, 63 sectors/track, 961830 cylinders, total 15451811840 sectors
Units = sectors of 1 * 512 = 512 bytes

Disk /dev/dm-2 doesn't contain a valid partition table

================================================================================================================================
I get the impression that a process is locking the RAID and preventing the commands above from working properly.
When I try to stop the RAID group:

================================================================================================================================

[/] # mdadm --stop /dev/md1

mdadm: Cannot get exclusive access to /dev/md1:Perhaps a running process, mounted filesystem or active volume group?

================================================================================================================================
Yet lsof does not return any process using these devices
================================================================================================================================

[/] # lsof | grep /dev/md1
[/] # lsof | grep /dev/sdf3
[/] # lsof | grep /dev/sde3
[/] # lsof | grep /dev/sdd3
[/] # lsof | grep /dev/sdc3
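
If it helps: from what I understand, md1 is most likely held open by the DRBD/device-mapper layers that QNAP stacks on top of it, which lsof does not show. A read-only sketch of how to see the holders:

ls /sys/block/md1/holders/   # what the kernel says sits directly on top of md1 (e.g. drbd1)
dmsetup ls --tree            # the device-mapper stack (the cachedev / ce_cachedev devices)
pvs                          # which block devices LVM is currently using as physical volumes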


================================================================================================================================
vgdisplay does seem to indicate that there is a problem mounting the RAID
================================================================================================================================

# sudo vgdisplay

  Found duplicate PV wTlxOH0izJXMSScXgOYQ1oWT7u5YCftE: using /dev/drbd1 not /dev/md1
  Using duplicate PV /dev/drbd1 from subsystem DRBD, ignoring /dev/md1
  WARNING: Device for PV q3Y8CI-XAQl-q7we-bnTX-fFnl-IWx4-8iLXVw not found or rejected by a filter.
  --- Volume group ---
  VG Name               vg1
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  205
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                6
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                1
  VG Size               7.41 TiB
  PE Size               4.00 MiB
  Total PE              1941225
  Alloc PE / Size       1941225 / 7.41 TiB
  Free  PE / Size       0 / 0
  VG UUID               nXwW4P-WkTF-GTTv-amdE-QisT-DfYH-5U1lI6

  WARNING: duplicate PV xeBs5FCiUaA8f8VtL51Hz0zsI0JHcK0x is being used from both devices /dev/drbd3 and /dev/md3
  Found duplicate PV xeBs5FCiUaA8f8VtL51Hz0zsI0JHcK0x: using existing dev /dev/drbd3
  --- Volume group ---
  VG Name               vg288
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  151
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               7.27 TiB
  PE Size               4.00 MiB
  Total PE              1905290
  Alloc PE / Size       1905290 / 7.27 TiB
  Free  PE / Size       0 / 0
  VG UUID               oLV6Et-WbLP-b1Oh-Mtn1-pcn7-LAA8-RmqyDC
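
If I read this right, vg1 expects two PVs (Cur PV 2) but only one is active (Act PV 1), and the PV with UUID q3Y8CI-... is the missing one; I suspect it corresponds to the QTier SSDs this new box does not detect. A read-only check along these lines should confirm it (a sketch):

pvs -o pv_name,pv_uuid,vg_name,pv_size   # list PVs with their UUIDs; the missing one should show as unknown/missing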

================================================================================================================================
On request, I can provide the QNAP diagnostics report.

I have the impression that since I installed two new disks in the enclosure and installed the QNAP system on them, the disk names of the RAID10 have changed:
sdc3 has disappeared and sdg3 has appeared...
I do not know whether that has any bearing on the problem.
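
On the drive letters: as far as I know the sdX names are assigned at boot and can shuffle when disks are added or removed, so they are not reliable identifiers by themselves. Something like this should map letters to serial numbers (a sketch, assuming smartctl is available on QTS, which I have not checked):

smartctl -i /dev/sdc | grep -i serial   # serial number of the disk currently named sdc
ls -l /dev/disk/by-id/                  # persistent names, if QTS populates this directory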


That said, this is the first time I have had to troubleshoot a RAID problem and I do not know much about it, especially in a Linux shell. :P

If anyone has a lead to get me unstuck, I would be grateful; I no longer know what else to try.

Thank you very much for your help and for taking the time to read all of this.

Offline

#2 On 07/02/2022 at 11:33

mzabava

Re: Help recovering data from an inactive RAID10

Hello, I am in exactly the same situation. Did you find a solution? Thanks for your help.

Offline