AI/LLM PCIe accelerator modules (Hailo, Coral, etc)

i’m looking for some bird’s-eye-view guidance on which AI/LLM ASIC accelerator modules can be used in a BPI-R4 (or in other BPI boards, if the BPI-R4 is lacking something).

i’m new to all this, so pardon my ignorance!

my primary goal is to add object detection to Frigate with low power consumption, but if i can use the same module for running local LLMs with reasonable performance, then it’d be even better.

i think modules like the Hailo-8 and the Coral Edge TPU (in their M.2 / mini PCIe variants) should/could work in a BPI-R4.

i have some doubts/questions, though:

  • some of these modules can temporarily draw a lot of power (peaks of 2A, which is roughly 6.6W on a 3.3V M.2/mini PCIe rail). would the BPI-R4 be able to supply that?

  • what about software support? will these work with OpenWrt 24.10.1?

  • what about local memory? i can’t find any info. or do these TPUs have no local memory and use the system’s memory through the PCIe lanes? would the BPI-R4’s 4GB be enough for running object detection (next to the Frigate docker container, which is already running fine on my board)?

  • what about the USB-based TPUs, e.g. the Coral USB accelerator? it seems to have the same performance as the PCIe versions. and is it still available at all?

  • would i need the M.2 or the mini PCIe version of the Coral for the BPI-R4? if both exist and work, then which one is better? (see the sketch after this list for telling the two apart at runtime)
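
for reference, here’s how i’d check what actually got detected once a Coral is plugged in. a minimal sketch using the pycoral package, assuming pycoral and libedgetpu can be installed at all (which is exactly the part i’m unsure about on OpenWrt):

```python
from pycoral.utils.edgetpu import list_edge_tpus

# each entry is a dict such as {'type': 'pci', 'path': '/dev/apex_0'}
# or {'type': 'usb', 'path': '/sys/bus/usb/devices/...'}
for tpu in list_edge_tpus():
    print(tpu['type'], tpu['path'])
```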

we have tested the hailo8 on the BPI-M5 Pro and the BPI-M7. it is supported.

we have tested deepx on the BPI-R4, and the driver is working fine. which project do you want to do? the BPI-R4 has no video input, so we could not test the other functions.

Frigate is already running in a Docker container on my BPI-R4.

i want to add object detection and face recognition to the camera streams.
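
to make it concrete what the accelerator would be doing per decoded frame, here’s a minimal pycoral sketch (not Frigate’s actual code; the model file is Google’s example SSD MobileNet compiled for the Edge TPU, and frame.jpg stands in for a decoded camera frame):

```python
from PIL import Image
from pycoral.adapters import common, detect
from pycoral.utils.edgetpu import make_interpreter

# any SSD model compiled for the Edge TPU should work here
interpreter = make_interpreter('ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite')
interpreter.allocate_tensors()

# resize the frame to the model's input size and run one inference
image = Image.open('frame.jpg').resize(common.input_size(interpreter), Image.LANCZOS)
common.set_input(interpreter, image)
interpreter.invoke()

# each result carries a class id, a confidence score and a bounding box
for obj in detect.get_objects(interpreter, score_threshold=0.4):
    print(obj.id, obj.score, obj.bbox)
```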

ultimately, i would like to run local LLMs, but i’m afraid the BPI-R4 plus one of these mini PCIe cards is not powerful enough for that.
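
my back-of-the-envelope arithmetic for the skepticism (assumed numbers, nothing i’ve measured):

```python
# weights alone for a 7B-parameter model quantized to 4 bits per weight
params = 7e9
bytes_per_param = 0.5                        # 4-bit quantization
print(params * bytes_per_param / 2**30)      # ~3.26 GiB, most of the R4's 4GB
```

and that’s before the KV cache, the OS and the containers; as far as i can tell these edge NPUs keep their model memory on-chip (megabytes, not gigabytes), so they seem aimed at CNN-sized models rather than LLM weights.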

these TPUs must have some local memory, because “The Google Coral EdgeTPU is available in USB and m.2 format”; i.e. if there’s a USB version, it can’t be reaching into the host’s RAM the way a PCIe device could, so the model must be executing out of memory on the device itself.
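
fwiw Google’s docs say the Edge TPU has about 8MB of on-chip SRAM that the compiler caches model weights into; weights that fit stay on the device, and anything larger gets streamed over the link on each inference. the same runtime drives both form factors, you just pick the interface in the delegate options. a minimal sketch with tflite_runtime (the model path is hypothetical):

```python
import tflite_runtime.interpreter as tflite

# the 'device' option selects the interface: 'usb', 'pci',
# or an indexed form like 'usb:0' when several TPUs are present
delegate = tflite.load_delegate('libedgetpu.so.1', {'device': 'usb'})

interpreter = tflite.Interpreter(
    model_path='model_edgetpu.tflite',   # hypothetical Edge TPU-compiled model
    experimental_delegates=[delegate])
interpreter.allocate_tensors()
```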

you can try the deepx ai module with the BPI-R4, we have tested it.