Hello list / everyone,
I think I have messed something up... After a power failure my RAID5 was
"gone" (6 disks, no spare). The attempt to reassemble it failed:
root@openmediavault:~# mdadm --stop /dev/md127
mdadm: stopped /dev/md127
root@openmediavault:~# mdadm --assemble --force --verbose /dev/md127 /dev/sd[bcdefg]
mdadm: looking for devices for /dev/md127
mdadm: /dev/sdb is identified as a member of /dev/md127, slot 0.
mdadm: /dev/sdc is identified as a member of /dev/md127, slot 5.
mdadm: /dev/sdd is identified as a member of /dev/md127, slot 4.
mdadm: /dev/sde is identified as a member of /dev/md127, slot 3.
mdadm: /dev/sdf is identified as a member of /dev/md127, slot 1.
mdadm: /dev/sdg is identified as a member of /dev/md127, slot 2.
mdadm: added /dev/sdf to /dev/md127 as 1
mdadm: added /dev/sdg to /dev/md127 as 2
mdadm: added /dev/sde to /dev/md127 as 3
mdadm: added /dev/sdd to /dev/md127 as 4
mdadm: added /dev/sdc to /dev/md127 as 5
mdadm: added /dev/sdb to /dev/md127 as 0
mdadm: /dev/md127 has been started with 5 drives (out of 6) and 1 rebuilding.
root@openmediavault:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md127 : active (auto-read-only) raid5 sdb[0] sdc[5] sdd[4] sde[3] sdg[6] sdf[1]
      14650693120 blocks super 1.2 level 5, 512k chunk, algorithm 2 [6/5] [UU_UUU]
      bitmap: 0/22 pages [0KB], 65536KB chunk

unused devices: <none>
Running cat /proc/mdstat again shows the same state; the resync makes no progress:
root@openmediavault:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md127 : active (auto-read-only) raid5 sdb[0] sdc[5] sdd[4] sde[3] sdg[6] sdf[1]
      14650693120 blocks super 1.2 level 5, 512k chunk, algorithm 2 [6/5] [UU_UUU]
      bitmap: 0/22 pages [0KB], 65536KB chunk
root@openmediavault:~# mdadm -D /dev/md127
/dev/md127:
           Version : 1.2
     Creation Time : Sat Oct 8 12:19:19 2016
        Raid Level : raid5
        Array Size : 14650693120 (13971.99 GiB 15002.31 GB)
     Used Dev Size : 2930138624 (2794.40 GiB 3000.46 GB)
      Raid Devices : 6
     Total Devices : 6
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sat Oct 29 12:14:59 2022
             State : clean, degraded
    Active Devices : 5
   Working Devices : 6
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : openmediavault:NAS (local to host openmediavault)
              UUID : 012a11bc:bce8c44a:f036c177:460a832e
            Events : 34773

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       80        1      active sync   /dev/sdf
       6       8       96        2      spare rebuilding   /dev/sdg
       3       8       64        3      active sync   /dev/sde
       4       8       48        4      active sync   /dev/sdd
       5       8       32        5      active sync   /dev/sdc
root@openmediavault:~#
How do I get the RAID running again? The rebuild process does not start.
I noticed the array was assembled as active (auto-read-only); could that be
why the resync never begins? Or do I have to remove /dev/sdg and add it back
again (--remove / --add)? See the sketch below for the commands I have in mind.
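To make the question concrete, here are the two approaches I am considering.
This is only a sketch using the device names from the output above; I have
not run any of it yet, and I am not sure either step is safe on a degraded
array, which is why I am asking first:

# Option 1: the array came up "active (auto-read-only)"; switching it
# to read-write should allow the pending rebuild of /dev/sdg to start
mdadm --readwrite /dev/md127
cat /proc/mdstat

# Option 2: remove the rebuilding disk and add it back to force a fresh
# resync (marking it failed first, since mdadm refuses to remove a device
# that is still in use by the array)
mdadm /dev/md127 --fail /dev/sdg --remove /dev/sdg
mdadm /dev/md127 --add /dev/sdg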
Thanks for your help.
Regards, Pritt