Trying to use an NVMe SSD from a Steam Deck on the R3 Mini, but no luck so far. Any chance of making it work?
root@Homer:~# dmesg | grep nvme
[ 9.868121] nvme 0000:01:00.0: assign IRQ: got 142
[ 9.873046] nvme nvme0: pci function 0000:01:00.0
[ 9.877761] nvme 0000:01:00.0: enabling device (0000 -> 0002)
[ 9.883523] nvme 0000:01:00.0: enabling bus mastering
[ 9.897094] nvme 0000:01:00.0: saving config space at offset 0x0 (reading 0x87601217)
[ 9.904907] nvme 0000:01:00.0: saving config space at offset 0x4 (reading 0x100406)
[ 9.912546] nvme 0000:01:00.0: saving config space at offset 0x8 (reading 0x1080201)
[ 9.920271] nvme 0000:01:00.0: saving config space at offset 0xc (reading 0x0)
[ 9.927472] nvme 0000:01:00.0: saving config space at offset 0x10 (reading 0x20000004)
[ 9.935373] nvme 0000:01:00.0: saving config space at offset 0x14 (reading 0x0)
[ 9.942665] nvme 0000:01:00.0: saving config space at offset 0x18 (reading 0x0)
[ 9.949953] nvme 0000:01:00.0: saving config space at offset 0x1c (reading 0x0)
[ 9.957245] nvme 0000:01:00.0: saving config space at offset 0x20 (reading 0x0)
[ 9.964537] nvme 0000:01:00.0: saving config space at offset 0x24 (reading 0x0)
[ 9.971828] nvme 0000:01:00.0: saving config space at offset 0x28 (reading 0x0)
[ 9.979115] nvme 0000:01:00.0: saving config space at offset 0x2c (reading 0x21217)
[ 9.986753] nvme 0000:01:00.0: saving config space at offset 0x30 (reading 0x0)
[ 9.994050] nvme 0000:01:00.0: saving config space at offset 0x34 (reading 0x40)
[ 10.001430] nvme 0000:01:00.0: saving config space at offset 0x38 (reading 0x0)
[ 10.008719] nvme 0000:01:00.0: saving config space at offset 0x3c (reading 0x18e)
[ 40.521492] nvme nvme0: Device not ready; aborting initialisation, CSTS=0x0
[ 40.528436] nvme nvme0: Removing after probe failure status: -19
root@Homer:~# lspci -nn
00:00.0 PCI bridge [0604]: MEDIATEK Corp. Device [14c3:1f32] (rev 01)
01:00.0 Non-Volatile memory controller [0108]: O2 Micro, Inc. FORESEE E2M2 NVMe SSD [1217:8760] (rev 01)
root@Homer:~# cat /proc/modules | grep nvme
nvme 40960 0 - Live 0xffffffc0009ca000
nvme_core 69632 1 nvme, Live 0xffffffc0009b1000
root@Homer:~# cat /proc/devices
Character devices:
1 mem
4 ttyS
5 /dev/tty
5 /dev/console
5 /dev/ptmx
10 misc
89 i2c
90 mtd
108 ppp
128 ptm
136 pts
180 usb
188 ttyUSB
189 usb_device
248 nvme-generic
249 nvme
250 rpmb
251 bsg
252 watchdog
253 rtc
254 gpiochip
Block devices:
7 loop
8 sd
31 mtdblock
65 sd
66 sd
67 sd
68 sd
69 sd
70 sd
71 sd
128 sd
129 sd
130 sd
131 sd
132 sd
133 sd
134 sd
135 sd
179 mmc
254 ubiblock
259 blkext
ericwoud
(Eric W.)
August 2, 2024, 3:41am
2
That looks the same as my NVMe.
I fixed it with this patch:
committed 09:36AM - 29 Jul 24 UTC
On cold-boot sometime get:
nvme nvme0: Device not ready; aborting initialisation, CSTS=0x0
This quirk fixes that.
Changes to be committed:
modified: drivers/nvme/host/pci.c
frank-w
(Frank W.)
August 2, 2024, 6:02am
3
How did you get the device IDs? I guess his SSD has different IDs…
ericwoud
(Eric W.)
August 2, 2024, 6:05am
4
The PCI part is working OK, so you can just read the PCI vendor and product numbers.
Initializing the NVMe is where it gets stuck, because the device is not ready.
I guess
lspci -nn
will do the trick.
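For reference, the [vendor:device] pair can be pulled out of the lspci -nn line mechanically; a small sketch using the sample line from the log above (the sed pattern is just one way to do it):

```shell
# Sample line copied from the `lspci -nn` output earlier in the thread.
line='01:00.0 Non-Volatile memory controller [0108]: O2 Micro, Inc. FORESEE E2M2 NVMe SSD [1217:8760] (rev 01)'
# Grab the last [xxxx:xxxx] bracket, which holds the PCI vendor:device IDs.
ids=$(printf '%s\n' "$line" | sed -n 's/.*\[\([0-9a-f]\{4\}:[0-9a-f]\{4\}\)\].*/\1/p')
echo "$ids"    # → 1217:8760
```

Those two hex numbers are what a PCI/NVMe quirk entry would key on.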
How do I apply the patch? Will it require rebuilding the OpenWrt module every time I want to update?
ericwoud
(Eric W.)
August 2, 2024, 9:15am
6
I do not use OpenWrt, but there must be loads to find on forums about how to apply patches to OpenWrt.
Did you try to find the numbers for your nvme with?
lspci -nn
I actually ended up with an init script
#!/bin/sh /etc/rc.common

START=10

start() {
    echo "NVMe reinitialization ..."
    echo 1 >/sys/bus/pci/devices/0000:00:00.0/remove
    echo 1 >/sys/bus/pci/rescan
}
I suppose it's the easiest way to deal with it.
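For anyone copying this approach: on OpenWrt an rc.common script also has to be made executable and enabled before it runs at boot. A minimal sketch, assuming a hypothetical filename of /etc/init.d/nvme-rescan (pick any name you like):

```shell
# Filename is an assumption; any name under /etc/init.d/ works.
chmod +x /etc/init.d/nvme-rescan
/etc/init.d/nvme-rescan enable    # creates the rc.d boot symlink per START=10
/etc/init.d/nvme-rescan start     # run it once now, without rebooting
```

These commands only make sense on the device itself, since they touch the live init configuration.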
root@Homer:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
mtdblock0 31:0 0 1M 1 disk
mtdblock1 31:1 0 512K 0 disk
mtdblock2 31:2 0 2M 0 disk
mtdblock3 31:3 0 2M 1 disk
mtdblock4 31:4 0 122.5M 0 disk
mmcblk0 179:0 0 7.3G 0 disk
├─mmcblk0p1 179:1 0 512K 0 part
├─mmcblk0p2 179:2 0 2M 0 part
├─mmcblk0p3 179:3 0 4M 0 part
├─mmcblk0p4 179:4 0 32M 0 part
├─mmcblk0p5 179:5 0 300M 0 part
├─mmcblk0p65 259:0 0 9M 1 part /rom
├─mmcblk0p66 259:1 0 284.1M 0 part /overlay
└─mmcblk0p128 259:2 0 4M 0 part
mmcblk0boot0 179:8 0 4M 1 disk
mmcblk0boot1 179:16 0 4M 1 disk
nvme0n1 259:3 0 57.6G 0 disk
└─nvme0n1p1 259:4 0 57.6G 0 part /u01/nvme
[ 40.490441] nvme nvme0: Device not ready; aborting initialisation, CSTS=0x0
[ 40.497385] nvme nvme0: Removing after probe failure status: -19
[ 46.667538] nvme 0000:01:00.0: assign IRQ: got 142
[ 46.672436] nvme nvme0: pci function 0000:01:00.0
[ 46.677151] nvme 0000:01:00.0: enabling bus mastering
[ 46.690838] nvme 0000:01:00.0: saving config space at offset 0x0 (reading 0x87601217)
[ 46.698659] nvme 0000:01:00.0: saving config space at offset 0x4 (reading 0x100406)
[ 46.706317] nvme 0000:01:00.0: saving config space at offset 0x8 (reading 0x1080201)
[ 46.714081] nvme 0000:01:00.0: saving config space at offset 0xc (reading 0x0)
[ 46.721322] nvme 0000:01:00.0: saving config space at offset 0x10 (reading 0x20000004)
[ 46.729242] nvme 0000:01:00.0: saving config space at offset 0x14 (reading 0x0)
[ 46.736570] nvme 0000:01:00.0: saving config space at offset 0x18 (reading 0x0)
[ 46.743900] nvme 0000:01:00.0: saving config space at offset 0x1c (reading 0x0)
[ 46.751219] nvme 0000:01:00.0: saving config space at offset 0x20 (reading 0x0)
[ 46.758522] nvme 0000:01:00.0: saving config space at offset 0x24 (reading 0x0)
[ 46.765836] nvme 0000:01:00.0: saving config space at offset 0x28 (reading 0x0)
[ 46.773143] nvme 0000:01:00.0: saving config space at offset 0x2c (reading 0x21217)
[ 46.780795] nvme 0000:01:00.0: saving config space at offset 0x30 (reading 0x0)
[ 46.788091] nvme 0000:01:00.0: saving config space at offset 0x34 (reading 0x40)
[ 46.795506] nvme 0000:01:00.0: saving config space at offset 0x38 (reading 0x0)
[ 46.802819] nvme 0000:01:00.0: saving config space at offset 0x3c (reading 0x18e)
[ 47.040496] nvme nvme0: missing or invalid SUBNQN field.
[ 47.063516] nvme nvme0: 1/0/0 default/read/poll queues
[ 47.074646] nvme0n1: p1 p2 p3 p4 p5 p6 p7 p8
ericwoud
(Eric W.)
August 2, 2024, 10:32am
8
That will be easier.
Do you also need to do this on reboot?
Perhaps add a condition
start() {
    if [ ! -e /dev/nvme0n1 ]; then
        echo "NVMe reinitialization ..."
        echo 1 >/sys/bus/pci/devices/0000:00:00.0/remove
        echo 1 >/sys/bus/pci/rescan
    fi
}
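The guard is just a plain existence test on the device node; a standalone sketch that exercises the same logic against a scratch path instead of the real /dev/nvme0n1:

```shell
# Same existence-guard logic as above, made testable with a scratch path.
check() {
    if [ ! -e "$1" ]; then
        echo "missing: would remove + rescan"
    else
        echo "present: nothing to do"
    fi
}

fake_dev="$(mktemp -u)"    # path that does not exist yet
check "$fake_dev"          # prints: missing: would remove + rescan
touch "$fake_dev"
check "$fake_dev"          # prints: present: nothing to do
rm -f "$fake_dev"
```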
I don't fully understand your question. I suppose it makes sense to have the NVMe ready for services to use. It could be done without a reboot, but since I need it for network services, it would be nice to have the NVMe disk mounted before the services initialize.
Yeah, it makes sense. I made it as a quick way to deal with the problem without (re)compiling modules.
ericwoud
(Eric W.)
August 2, 2024, 10:51am
10
I meant: do you also need to apply the fix when rebooting (as opposed to a cold boot)?
Yes. Cold or warm start, it does not matter. The system just ignores this NVMe; only the dirty reinitialization helps.