Thanks for the heads up! In our case this is not an issue — we don’t copy a filesystem. We write the OpenWrt FIT image directly to a raw partition (p2) via dd, and the kernel is placed on p1 (ext4). No fstab involved, rootfs is mounted via overlayfs from the FIT image.
But good point for anyone trying to clone an existing SD/eMMC rootfs to NVMe — fstab would definitely need updating in that case.
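For that clone scenario, here is a small sketch (paths, device names, and the helper name are my own assumptions, not from any official tooling): after copying the rootfs, rewrite the root entry in the cloned fstab to the new partition's UUID.

```shell
# Hypothetical helper: point the root ("/") entry of a cloned fstab at a new UUID.
# $1 = path to the cloned fstab, $2 = UUID of the new root partition.
update_fstab_root() {
  # Match the first field of the non-comment line whose mountpoint is exactly "/"
  # and replace it with UUID=<new>.
  sed -i "s|^[^#[:space:]]\{1,\}\([[:space:]]\{1,\}/[[:space:]]\)|UUID=$2\1|" "$1"
}

# Typical use after mounting the clone, e.g.:
#   update_fstab_root /mnt/newroot/etc/fstab "$(blkid -s UUID -o value /dev/nvme0n1p2)"
```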
Actually, any recent (systemd-based) Linux distro does not need /etc/fstab entries for the boot and root partitions; both are mounted automatically if the partitions are set up correctly. Fstab is still useful when deviating from standard mount options, like disabling access-time writes (noatime) on ext4, for example…
Easiest setup on NVMe: rootfs on a btrfs partition. Since U-Boot supports btrfs, you can boot without a separate boot partition; a /boot folder on the rootfs is fine in this case.
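To illustrate the single-partition idea (commands, load address variable, and filenames below are examples, not a tested recipe): with the kernel living in /boot on the btrfs rootfs, the U-Boot side reduces to loading straight from partition 1.

```
# U-Boot console sketch, hypothetical filenames:
nvme scan
load nvme 0:1 ${kernel_addr_r} /boot/fitImage   # generic 'load' can read btrfs
bootm ${kernel_addr_r}
```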
Quick update — we’ve finalized the NVMe deploy system for BPI-R4.
In addition to NVMe boot and sysupgrade support, we’ve added an automatically created ext4 partition (p3) covering all remaining disk space. It’s available at /mnt/nvme0n1p3 after installation and is intended for general use: Docker, Podman, or any data storage.
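For anyone reproducing the p3 step by hand, a hedged sketch (the disk name and the helper function are my assumptions; our installer does the equivalent): append one partition that takes all remaining space, then format it.

```shell
# Hypothetical helper: append a partition using all remaining free space.
# $1 = whole disk (e.g. /dev/nvme0n1) or a disk image file.
append_data_partition() {
  # sfdisk script line ",," = default start, default size (i.e. rest of the disk).
  printf ',,\n' | sfdisk --append "$1"
}

# On the real disk one would then format and mount it:
#   append_data_partition /dev/nvme0n1
#   mkfs.ext4 -L data /dev/nvme0n1p3
#   mount /dev/nvme0n1p3 /mnt/nvme0n1p3
```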
Quick update on my progress with the “BPI-R4 Pro 8X”.
Current status after testing and debugging:
Boot (including NVMe) is working reliably
System itself is stable once running
Networking is completely non-functional (all ports show NO-CARRIER)
After quite a bit of testing, my current understanding is:
→ kernel / device tree / switch (DSA / MT7530) support for the 8X variant is not fully implemented yet
(which you all already mentioned in your repos).
I initially tried multiple approaches (including modifying FIT images and mixing kernel/DT sources), but the results were inconsistent and unstable, which makes sense in hindsight, since proper support requires aligned kernel + DT + switch configuration. (erghh, I sooo hoped it would be an easy task.)
At this point, it seems clear that:
proper 8X support is still incomplete
and fixing this is not a simple “patch one file” task, but requires coordinated kernel-level work
I will now try Frank’s 6.19 (just saw it; looks promising)
(I also attempted to extract and modify the FIT image manually to use as a base — useful exercise, but not a viable solution in the current state.)
So for now, I assume that waiting for upstream/kernel improvements is the realistic path forward.
That said:
I do have the hardware available and can test things locally.
If anyone is actively working on 8X support (kernel, DT, switch/DSA) and needs testing or feedback, I’d be happy to help.
I’ll be on standby in the meantime and post questions from time to time. I hope you guys don’t mind.
You did not mention which image you are using. If it is mine: I just moved the linux-bpir4-git package from Frank’s 6.18-main to 6.19-main; it is building now (slowly) and will maybe be finished in an hour or so.
Yeah, I am switching to 6.19 (I was still on 6.18)… true, will try that also… been on 6.18 for the past 4-5 days… I saw that Frank also pushed an update. Let me check.
I built my own image based on Frank’s 6.19 kernel (using his repos/pipeline) and tested it on a BPI-R4 Pro 8X.
Boot works, kernel works, DSA works — but I ran into several networking issues during bring-up.
Current assumption
@frank-w’s setups likely work because of predefined roles/config
My issues seem to come from:
missing/incorrect userspace network config
unclear interface mapping
I’ll verify this further. I also plan to test @ericwoud’s 6.19 build to compare behavior.
What I observed
1. Kernel / hardware side
Kernel 6.19 boots fine
DSA initializes correctly
Switch + PHY detected:
mt7530
MaxLinear MxL86252
Aeonsemi AS21xxx
LAN ports get link and work after cleanup
Conclusion: Kernel / DT / DSA stack is working
2. Default network config problem
With the default setup (systemd-networkd):
lanbr0 is created automatically
interfaces get attached automatically
VLAN behavior seems to be applied implicitly
Result:
unstable connectivity
packet loss
ARP failures
Kernel log:
Dropping packet due to invalid outer 802.1Q tag: switch -1 port -1
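For anyone hitting the same symptom, two iproute2 commands make the implicitly applied bridge/VLAN setup visible before tearing it down (nothing board-specific here):

```shell
# Per-port VLAN table of any bridges (shows what networkd configured implicitly)
bridge vlan show
# Brief state of all interfaces: which ones got enslaved, carrier, addresses
ip -br link
```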
3. Interface mapping (important)
My initial assumption was:
eth0 = WAN
What it actually was:
eth0 → internal 10G link (not usable as WAN)
eth1 → real WAN PHY (RJ45 combo port)
eth2 → switch CPU port
lan0–lan4 → front LAN ports
The manual cleanup and WAN bring-up that gave me a stable link:
ip addr flush dev eth0
ip addr flush dev eth1
ip addr flush dev lan1
ip addr flush dev br0
ip addr add 192.168.2.200/24 dev eth1
ip link set eth1 up
ip route add 192.168.2.0/24 dev eth1
ip route add default via 192.168.2.1 dev eth1
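If eth1 really is the intended WAN, the manual ip commands above could be made persistent with a systemd-networkd unit. The filename and addresses below are just my example mirroring the test setup, not Frank’s shipped config:

```ini
# /etc/systemd/network/20-wan.network  (hypothetical file)
[Match]
Name=eth1

[Network]
Address=192.168.2.200/24
Gateway=192.168.2.1
```

After dropping the file in place, `networkctl reload` (or a reboot) applies it.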
Result:
stable link
ping to upstream router works
confirms WAN via eth1
Conclusion
Kernel 6.19 fixes the major “no network” issue
remaining problems are:
userspace network configuration
non-obvious interface mapping
Open questions for tomorrow
Is eth1 = WAN expected for the BPI-R4 Pro 8X?
Is the default networkd config intended this way?
Am I missing part of Frank’s intended configuration?
Does Eric’s 6.19 build handle this mapping differently?
Next steps (on my side)
test Frank’s setup with the intended config
test Eric’s 6.19 build
compare behavior and defaults
Summary
Kernel 6.19 works
LAN works after removing default networkd bridge/VLAN
WAN is on eth1 (not eth0)
eth0 appears to be an internal link
Note
Posting this in case others run into the same behavior and want to test without waiting. Also, please correct me if I am coming to conclusions that are simply wrong (might have happened once or twice in my lifetime)
I think my setup still assumed this is eth0, so you may need to change this in the settings if you find that the underlying ethX of the DSA ports is not brought up.
Newer kernels will use lan1-lan4 for mxl-ports and lan5 as internal.
The network config is still the one from the R4; I have not made a way to use a dedicated one yet, so some manual work is needed here.
Btw: the upstream driver for the mxl switch has issues with the current firmware (1.0.70.70) and a software bridge. I switched to this driver with kernel 7.0. We hope that MxL releases new firmware in a few weeks.
I decided to undertake a project that could be generally useful and demonstrate the practical applications of installing OpenWrt on NVMe. I set out on this journey and, after overcoming the numerous obstacles thrown in my way, I finally arrived here.