Followed a Raspberry Pi Proxmox installation guide as reference
Initially failed due to missing FUSE modules; recompiled the kernel from frank-w’s repo with FUSE enabled
Got Proxmox running, but VMs failed to launch due to missing KVM and virtualization modules; recompiled again
Successfully ran OpenWRT, but ran into networking issues
Since the BPI-R4 uses DSA, passing the RJ45 ports through to OpenWRT was challenging; it required macvlan and macvtap, so I recompiled again with those modules
Ran out of storage and added an SSD, but couldn’t use it via the Proxmox GUI due to missing device mapper modules; recompiled again
With more storage, set up a Docker VM for containers, had networking issues, but resolved them
Installed Debian on another VM and converted it into an OMV instance, successfully enabled ZFS, and created a RAIDZ2 test volume using virtual disks
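For anyone retracing these steps, the recompiles above roughly correspond to enabling the following kernel options. This is a sketch from memory using mainline Kconfig symbol names; exact names and the y/m choices may differ for your kernel version, so compare against the .config linked below:

```
# FUSE, needed by Proxmox's pmxcfs
CONFIG_FUSE_FS=y
# KVM virtualization on arm64
CONFIG_VIRTUALIZATION=y
CONFIG_KVM=y
CONFIG_VHOST_NET=m
# macvlan/macvtap for handing DSA user ports to VMs
CONFIG_MACVLAN=m
CONFIG_MACVTAP=m
# device mapper, used by Proxmox GUI storage (LVM/LVM-thin)
CONFIG_BLK_DEV_DM=y
CONFIG_DM_SNAPSHOT=m
CONFIG_DM_THIN_PROVISIONING=m
```

The RAIDZ2 test in the OMV VM was plain ZFS on virtual disks, i.e. something along the lines of `zpool create tank raidz2 /dev/vdb /dev/vdc /dev/vdd /dev/vde` (pool name and disk paths here are placeholders).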
What’s Not Working (Yet)
RJ45 passthrough to OpenWRT isn’t working yet
PCI passthrough is not functional yet
What I Want to Achieve
LAN passthrough to fully utilize OpenWRT, but my networking skills are limited and the current setup is already complex
PCI passthrough, for 3 main goals:
Attach an M.2 to SATA adapter and move SATA drives into the OMV VM
Use an M.2 to PCIe adapter to test GPU passthrough; if successful, I’d like to try GPU transcoding in Jellyfin or even run a local LLM
Pass through a Wi-Fi card to OpenWRT to manage wireless directly
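For context, on platforms where it is supported, PCIe passthrough under Proxmox amounts to the kernel exposing the device through VFIO (CONFIG_VFIO / CONFIG_VFIO_PCI enabled) and the VM claiming it. None of this works on the BPI-R4 yet, but as a rough sketch of the goal (the VM ID and PCI address below are placeholders):

```
# passthrough needs a working IOMMU; list the IOMMU groups
find /sys/kernel/iommu_groups/ -type l
# attach PCI device 0000:01:00.0 to VM 100
qm set 100 -hostpci0 0000:01:00.0
```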
Untested Hardware
All 3 Mini PCIe slots
Both SFP ports
I’m open to suggestions and advice, and happy to help anyone trying to replicate this setup. Let me know if you have any questions; I can walk you through the process.
I can upload my kernel .config file. I’ve made many iterations, and at this point I don’t remember each one individually. However, you can compare this one to the one I based it on (from frank-w’s repo) and see each changed value.
Here you go:
.config (177.4 KB)
The next milestone on my roadmap is enabling PCIe passthrough, or more generally, full device passthrough.
So far, USB passthrough works flawlessly. For example, I’m currently passing through a USB Wi-Fi adapter to the OpenWRT VM, which I’m using as a wireless access point.
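For anyone wanting to reproduce the USB part, the Proxmox side is straightforward; a sketch (the VM ID and the vendor:product ID are placeholders, substitute your own):

```
# find the adapter's vendor:product ID
lsusb
# hand it to VM 100
qm set 100 -usb0 host=1a2b:3c4d
```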
I’ve already enabled the relevant kernel modules and configured U-Boot (huge thanks to @frank-w), but PCIe passthrough still isn’t functional.
My main suspicion is that the device tree may need modification to expose or map the devices properly; this might be the missing piece.
I’m uncertain whether individual RJ45 ports can be passed to VMs due to DSA limitations, but I’m more optimistic about the SFP+ ports; they seem more promising for passthrough.
Good luck, but personally I think you’re wasting your time. It’s better to just get a cheap Intel N100-based motherboard to achieve your goal. The BPI-R4’s CPU is too weak, its RAM is limited, and its PCIe compatibility is poor: many high-end PCIe devices simply aren’t recognized or don’t work properly.
The N100, despite being a low-power CPU, offers:
4 efficient cores (Gracemont) with a base clock of 800MHz and boost up to 3.4GHz
Support for up to 48GB DDR5/DDR4 RAM
Full x86_64 compatibility, meaning excellent software support (Proxmox, TrueNAS, pfSense, etc.)
9 PCIe 3.0 lanes and better compatibility with modern peripherals
In contrast, the BPI-R4:
Uses a quad-core Cortex-A73 CPU (MediaTek MT7988), which is much weaker than the N100 in both single-thread and multi-thread workloads
Has only 4GB or 8GB LPDDR4 RAM, which is not upgradeable
Its PCIe slot often has compatibility issues with modern devices due to limited driver and hardware support in ARM environments
It makes much more sense to use the BPI-R4 as a main or secondary router, where its strengths in networking (built-in 2x10GbE SFP) can shine — rather than trying to turn it into a general-purpose server or virtualization host.
I agree, x86 is much more feasible for such a use case. But I already have a BPI-R4 and was already using it as a router. Tinkering led me down this path, and now it’s a fun project for me.
I already have a low-power x86 system that does most of the things I plan to do on the BPI-R4. Its only drawbacks are lower network speed and worse power efficiency.
For example, I already moved my containers from that system to R4, because it handles that job very well at this point.
Glad that there are people like you doing fun projects.
While I absolutely agree that using the BPI-R4 for this purpose is a bit unexpected, I would like to thank you!
I think it is awesome that the BPI-R4 can even be used this way, and it is good to know.
Really nice work!
At the moment, I can’t make any more progress on the passthrough topic without help from Banana Pi or MediaTek. I just e-mailed someone at MediaTek about the topic and will let you guys know if I get something useful. If you know someone at MediaTek or Banana Pi, I’d appreciate it if you could get them to help me.
Until then, I am putting this project on hold.
In the meantime, I have another exciting plan for this board. I will let you guys know about that one too.
Did you check the status of the Proxmox-related services? If they are failing, it is probably because the default kernel lacks the FUSE modules; at least that was my experience. I recompiled the kernel with the FUSE-related modules and successfully started the Proxmox services.
If you would like to do the same, I shared my kernel config file above; you can use it to recompile the kernel.
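Concretely, the check I mean is something like the following (standard Proxmox VE service names; the config path assumes a distro-style kernel install and may differ for a self-built one):

```
systemctl status pve-cluster pvedaemon pveproxy pvestatd
# pve-cluster (pmxcfs) is the one that fails without FUSE
grep CONFIG_FUSE_FS /boot/config-$(uname -r)
```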
As I wrote in the details on the Proxmox forum, I am able to curl “some version” (but not the expected one) locally, so I think its services are running.
But this is NOT a Proxmox forum; please, whenever you have any ideas, post them on the Proxmox forum.
Thanks
Bad one: I contacted someone from MediaTek about PCI passthrough support, and he said that this chipset unfortunately does not support PCI passthrough.
Good one: I successfully ran TrueNAS in a VM. Someone compiled TrueNAS from source for the arm64 architecture; I used his image and successfully installed TrueNAS in a VM.