Can you please let us know which CONFIG_XXX kernel options you needed to add?
I can upload my kernel .config file. I’ve made many iterations, and at this point I don’t remember each change individually. However, you can compare this one to the one I based mine on (from frank-w’s repo) and see each changed value. Here you go: .config (177.4 KB)
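For anyone wanting to see what changed between the two configs, a plain `diff` over the `CONFIG_` lines is enough. The file contents and names below are made-up placeholders just to demonstrate the comparison:

```shell
# Toy stand-ins for frank-w's base .config and the modified one
# (hypothetical contents, only to show the technique):
printf 'CONFIG_A=y\nCONFIG_B=y\n'             > base.config
printf 'CONFIG_A=y\nCONFIG_B=n\nCONFIG_C=y\n' > mine.config

# Show every changed CONFIG_ value; diff exits 1 when the files
# differ, so '|| true' keeps the exit status clean.
diff <(grep '^CONFIG_' base.config | sort) \
     <(grep '^CONFIG_' mine.config | sort) || true
```

Inside a kernel source tree, `scripts/diffconfig base.config mine.config` does the same thing with one line per changed option, which is easier to scan than raw diff output.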
That’s very impressive!
Thanks for sharing, please keep us posted on further progress.
I really do appreciate people and posts like this.
Thanks for the support!
The next milestone on my roadmap is enabling PCIe passthrough, or more generally, full device passthrough. So far, USB passthrough works flawlessly. For example, I’m currently passing a USB Wi-Fi adapter through to the OpenWRT VM, which I’m using as a wireless access point.
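For reference, this is roughly how USB passthrough is wired up from the Proxmox host shell; the VMID `100` and the vendor:product ID are made-up examples, so substitute your own:

```shell
# Find the Wi-Fi adapter's vendor:product ID on the host:
lsusb

# Attach that USB device to VM 100 by its USB ID
# (both the VMID and the ID here are hypothetical):
qm set 100 -usb0 host=0bda:8812
```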
I’ve already enabled the relevant kernel modules and configured U-Boot (huge thanks to @frank-w), but PCIe passthrough still isn’t functional. My main suspicion is that the device tree needs modification to expose or map the devices properly; that might be the missing piece.
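One quick sanity check, assuming the stock VFIO path: PCIe passthrough on mainline Linux needs the IOMMU (the SMMU on ARM SoCs) active and exposing groups under sysfs. A sketch that lists devices per group; no output at all usually means the IOMMU isn’t enabled, which by itself would explain VFIO failing:

```shell
# List every device together with its IOMMU group.
# Empty output => no IOMMU groups => VFIO cannot bind devices.
for dev in /sys/kernel/iommu_groups/*/devices/*; do
    [ -e "$dev" ] || continue            # glob matched nothing
    group=${dev%/devices/*}              # .../iommu_groups/<N>
    printf 'group %s: %s\n' "${group##*/}" "${dev##*/}"
done
```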
I’m uncertain whether individual RJ45 ports can be passed to VMs due to DSA limitations, but I’m more optimistic about the SFP+ ports; they seem more promising for passthrough.
I’ll keep you updated as I make progress.
I don’t think passthrough needs to be enabled in the device tree. My guess is that it rather needs a virtual PCIe driver, or a cgroup to be added.
I think you’ll need MediaTek’s technical support.
Any idea on how to proceed?
I’m going to try everything I can. If all else fails, I will try to contact them. Is there anybody from MediaTek on the forum that you know of?
Good luck, but personally I think you’re wasting your time. It’s better to just get a cheap Intel N100-based motherboard to achieve your goal. The BPI-R4’s CPU is too weak, it has limited RAM, and its PCIe compatibility is poor — many high-end PCIe devices simply aren’t recognized or don’t work properly.
The N100, despite being a low-power CPU, offers:
- 4 efficiency cores (Gracemont) with a base clock of 800MHz and boost up to 3.4GHz
- Support for up to 48GB DDR5/DDR4 RAM
- Full x86_64 compatibility, meaning excellent software support (Proxmox, TrueNAS, pfSense, etc.)
- 9 PCIe 3.0 lanes and better compatibility with modern peripherals
In contrast, the BPI-R4:
- Uses a quad-core Cortex-A73 CPU (MediaTek MT7988), which is much weaker than the N100 in both single-thread and multi-thread workloads
- Has only 4GB or 8GB LPDDR4 RAM, which is not upgradeable
- Has a PCIe slot that often runs into compatibility issues with modern devices, due to limited driver and hardware support in ARM environments
It makes much more sense to use the BPI-R4 as a main or secondary router, where its networking strengths (built-in 2x 10GbE SFP+) can shine, rather than trying to turn it into a general-purpose server or virtualization host.
I agree, x86 is much more feasible for such a use case. But I already have the BPI-R4, and I was already using it as a router. Tinkering led me down that path, and now it’s a fun project for me.
I already have a low-power x86 system that does most of the things I plan to do on the BPI-R4. The only drawbacks are lower network speed and worse power efficiency.
For example, I already moved my containers from that system to the R4, because it handles that job very well.
Hello @mr.ea,
glad that there are people like you, doing fun projects.
While I absolutely agree that using the bpi-r4 for this purpose is a bit unexpected, I would like to congratulate you!
I think that it is awesome that the bpi-r4 can even be used this way, and it is good to know. Really nice work!
Thank you guys for the support.
At the moment, I can’t make any more progress on the passthrough topic without help from Banana Pi or MediaTek. I just e-mailed someone at MediaTek about it and will let you guys know if I get something useful. If you know anyone from MediaTek or Banana Pi, I’d appreciate it if you could get them to help me.
Until then, I am putting this project on hold.
In the meantime, I have another exciting plan for this board. I will let you guys know about that one too.
I am working on running Proxmox from iSCSI on the bpi-r4, and it looks like everything is working, but I am unable to open the Proxmox web admin.
Are you familiar with Proxmox? Could you give me any idea of what could be wrong?
I created a thread on the Proxmox forum.
Did you check the status of the Proxmox-related services? If they are failing, it is probably because the default kernel lacks the FUSE modules; at least that was my experience. I recompiled the kernel with FUSE-related modules and successfully started the Proxmox services.
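A quick way to verify the FUSE point on a running board (this checks the live kernel rather than the .config; `pmxcfs` from pve-cluster, which backs `/etc/pve`, is the service that needs it):

```shell
# If 'fuse' shows up in /proc/filesystems, the running kernel can
# mount FUSE filesystems; otherwise pve-cluster/pmxcfs will fail.
if grep -qw fuse /proc/filesystems; then
    echo "fuse available"
else
    echo "fuse missing - rebuild with CONFIG_FUSE_FS=y (or =m and modprobe fuse)"
fi
```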
If you would like to do the same, I shared my kernel config file above; you can use it to recompile the kernel.
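In case it helps, this is the rough recompile workflow I’d expect for the BPI-R4 (arm64) when cross-compiling on an x86 host; the config path and toolchain prefix below are assumptions, so adjust them to your setup:

```shell
# From inside the kernel source tree (e.g. frank-w's BPI-R4 tree):
cp /path/to/shared.config .config      # hypothetical path to the shared config

# Fill in any options new to this tree with their defaults:
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- olddefconfig

# Build the kernel image, device trees, and modules:
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- -j"$(nproc)" Image dtbs modules
```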
The best use would be for telecom fraud; I really don’t understand why they need so many SIM card slots.
As I wrote, the details are on the Proxmox forum. I am able to curl “some version” (but not the expected one) locally, so I think its services are running. But this is NOT a Proxmox forum, so please, if you have any ideas, post them on the Proxmox forum. Thanks
Guys, I have one piece of good news and one piece of bad news.
Bad one: I contacted someone from MediaTek about PCIe passthrough support, and he said that this chipset unfortunately does not support it.
Good one: I successfully ran TrueNAS in a VM. Someone compiled TrueNAS from source for the arm64 architecture; I used his image and installed TrueNAS in a VM without issues.
If you want to try, here is the topic: TrueNAS on ARM - Now Available - TrueNAS General - TrueNAS Community Forums
Sorry, I’m interested in the topic but not very technically knowledgeable.
What is the impact of not having PCI Passthrough?
For example, we could create an OpenWRT virtual machine and dedicate the SFP+ ports and Wi-Fi cards to that VM. We could create a TrueNAS or OMV virtual machine, add an M.2-to-SATA card, and dedicate the SATA ports to that VM. We could create a virtual machine for our Docker instances, add a GPU, and dedicate it to that VM. That way we could do HW transcoding for our Plex/Jellyfin server and even run local LLMs.
It’s not the end of the world, but PCI passthrough would enable all these options.
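For context, on a platform that does support it, dedicating a PCI device to a VM in Proxmox is a one-liner from the host shell; the VMID `101` and the PCI address below are made-up examples:

```shell
# Find the device's PCI address on the host:
lspci -nn

# Hand the whole device at 0000:01:00.0 to VM 101 as a PCIe device
# (both the VMID and the address here are hypothetical):
qm set 101 -hostpci0 0000:01:00.0,pcie=1
```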
Ah I really should have read PCI Passthrough - Proxmox VE first.
Without it, do you lose access to the SFP+ ports, SSDs, etc. inside VMs, or is the access just less direct? I guess HW transcoding wouldn’t be possible?
Does it mean that we lose those capabilities inside lxc containers?
Would it still be feasible to run OpenWRT / home-server-type services?