It should probably work the way laptop hybrid graphics does - using DRI_PRIME to choose the driver (e.g. intel+radeon or intel+nvidia) - but I'm not sure whether that's implemented in the GPU driver or in the Xorg server itself.
P.S. I'm still trying to debug and run the offscreen example, but without success.
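P.P.S. From what I've read, DRI_PRIME seems to be handled mostly by Mesa's loader rather than by the kernel driver or the X server: it simply decides which /dev/dri/renderD* node to open, so choosing the GPU boils down to choosing the node. A minimal libdrm check along these lines (the renderD129 path is only my assumption for where lima would show up on the R2):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <xf86drm.h>

int main(void)
{
    /* Hypothetical node name - on the R2 I'd expect the second GPU
     * (lima) to appear as a second render node. */
    int fd = open("/dev/dri/renderD129", O_RDWR | O_CLOEXEC);
    if (fd < 0) {
        perror("open render node");
        return 1;
    }

    /* Ask the kernel which driver is behind this node. */
    drmVersionPtr ver = drmGetVersion(fd);
    if (ver) {
        printf("driver: %s\n", ver->name); /* hoping for "lima" */
        drmFreeVersion(ver);
    }

    close(fd);
    return 0;
}

Build with -ldrm; if that prints "lima", at least the offscreen example has the right node to talk to.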
[ 4.523833] [drm] hdmi-audio-codec driver bound to HDMI
[ 4.529458] lima 13040000.gpu: bus rate = 500500000
[ 4.534348] lima 13040000.gpu: mod rate = 500500000
[ 4.539401] [TTM] Zone kernel: Available graphics memory: 247120 kiB
[ 4.545824] [TTM] Zone highmem: Available graphics memory: 1028942 kiB
[ 4.552317] [TTM] Initializing pool allocator
[ 4.571794] lima 13040000.gpu: mmu gpmmu dte write test fail
[ 4.577470] [TTM] Finalizing pool allocator
[ 4.582308] [TTM] Zone kernel: Used memory at exit: 0 kiB
[ 4.587769] [TTM] Zone highmem: Used memory at exit: 0 kiB
[ 4.593253] lima 13040000.gpu: ttm finalized
[ 4.597504] lima 13040000.gpu: Fatal error during GPU init
[ 4.603098] lima: probe of 13040000.gpu failed with error -5
Now trying to figure out why.
The immediate error is in the function
int lima_mmu_init(struct lima_ip *ip)
But I can't work out how and where it gets called; the last function that succeeds during initialisation is
lima_init_ip(ldev, i);
I haven't found any direct calls to lima_mmu_init, so I'm stuck.
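My current guess is that it isn't called by name at all, but dispatched through a table of per-IP function pointers - a rough sketch of the pattern I expect (simplified, names approximate, not the exact upstream code):

/* Sketch of the dispatch pattern I expect (names approximate). */
struct lima_ip_desc {
    const char *name;
    int (*init)(struct lima_ip *ip);
    void (*fini)(struct lima_ip *ip);
};

static const struct lima_ip_desc lima_ip_descs[] = {
    /* ... */
    [lima_ip_gpmmu] = { "gpmmu", lima_mmu_init, lima_mmu_fini },
    /* ... */
};

static int lima_init_ip(struct lima_device *ldev, int index)
{
    struct lima_ip *ip = &ldev->ip[index];

    /* The .init callback is where lima_mmu_init would actually run,
     * which is why grepping for a direct call site turns up nothing. */
    if (lima_ip_descs[index].init)
        return lima_ip_descs[index].init(ip);
    return 0;
}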
These functions haven't changed much since 4.17, which I backported to my 4.16-lima tree, but now initialization doesn't work.
Any help will be appreciated. My plan is to add more verbose output on init.
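Something along these lines is what I have in mind - an extra print around the DTE write test inside lima_mmu_init so the read-back value lands in dmesg (rough sketch only: mmu_write/mmu_read/lima_ip_name are the driver's helpers as I remember them, and the test pattern here is a placeholder, not the real value):

int lima_mmu_init(struct lima_ip *ip)
{
    struct lima_device *dev = ip->dev;
    u32 pattern = 0xCAFEBABE;   /* placeholder - use the driver's real test value */
    u32 readback;

    mmu_write(LIMA_MMU_DTE_ADDR, pattern);
    readback = mmu_read(LIMA_MMU_DTE_ADDR);

    /* extra debug line so the failing read-back value shows up in dmesg */
    dev_info(dev->dev, "mmu %s dte test: wrote 0x%08x, read back 0x%08x\n",
             lima_ip_name(ip), pattern, readback);

    if (readback != pattern) {   /* the real check may mask off low bits */
        dev_err(dev->dev, "mmu %s dte write test fail\n", lima_ip_name(ip));
        return -EIO;
    }

    /* ... rest of the real init continues as before ... */
    return 0;
}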
I can report that, using frank's 4.20.0-hdmiv5 repository as a base, I transferred the lima kernel driver from the lima repository for 4.20.0 and integrated it into the build structure. I also got the gpmmu write test failure.
There were a few things I added to the dtsi file that weren't already in yours, frank, which the changelog showed were added for this purpose. Then I stepped through the source changes and was able to infer how to integrate them correctly into the 4.20.0 lima driver source. Things haven't changed much, so I crossed my fingers.
[ 9.991827] INFO@lima_mediatek_init 117 err = 0
[ 9.996582] lima 13040000.gpu: bus rate = 500500000
[ 10.001477] lima 13040000.gpu: mod rate = 500500000
[ 10.037767] lima 13040000.gpu: gp - mali450 version major 0 minor 0
[ 10.044282] lima 13040000.gpu: pp0 - mali450 version major 0 minor 0
[ 10.050789] lima 13040000.gpu: pp1 - mali450 version major 0 minor 0
[ 10.057314] lima 13040000.gpu: pp2 - mali450 version major 0 minor 0
[ 10.063834] lima 13040000.gpu: l2 cache 8K, 4-way, 64byte cache line, 128bit external bus
[ 10.071997] lima 13040000.gpu: l2 cache 128K, 4-way, 64byte cache line, 128bit external bus
[ 10.120312] [drm] Initialized lima 1.0.0 20170325 for 13040000.gpu on minor 1
I've not built lima-mesa yet, but I've got the build environment all set up. I just wanted to share the good news, because now I get to play with it.
That's my intention, yes - I always meant to work with you on getting the R2 up to spec; life just got in the way all of a sudden. I'm currently compiling mesa so I can test the entire execution path. Output from the lima module is exactly as expected, so offscreen rendering will most likely work. I presume there are caveats for direct rendering, but I'll be in a better position to understand them once everything else is working. I'll get this working, then create a repo in alignment with yours.
I'm running two R2s, a 1.1 and a 1.2, serial-debugging each other etc. using your latest Debian image - but I've switched to the testing/sid apt sources for the system running 4.20 + lima and for the cross-compile box. I don't know whether 4.20 works well on the R2 yet - it doesn't seem to power off, but I haven't looked into it properly.
But I figured that lima isn't going to see backported features at this stage of development, so it was best to set up my build around their current branch dependencies.
OK - it worked successfully for off-screen rendering - but rendering directly to the screen requires modifications to mesa, specifically a DRI shim for the Mediatek HDMI output (basically it allows rendering on Lima (/dev/dri/card1) with the final output written to an area of graphics memory using dmabuf).
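For anyone curious what the dmabuf handoff boils down to, roughly this (userspace sketch only, with error handling trimmed - the real shim lives inside mesa; it assumes lima is card1, the Mediatek display is card0, and render_handle is a GEM handle for the finished frame on the lima side):

#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>
#include <drm_fourcc.h>

/* Export the rendered buffer from the lima node as a dmabuf fd,
 * then import it on the display node and wrap it in a framebuffer. */
int share_frame(int lima_fd, int disp_fd, uint32_t render_handle,
                uint32_t width, uint32_t height, uint32_t pitch)
{
    int dmabuf_fd;
    uint32_t disp_handle, fb_id;
    uint32_t handles[4] = { 0 }, pitches[4] = { 0 }, offsets[4] = { 0 };

    /* lima side: GEM handle -> dmabuf fd */
    drmPrimeHandleToFD(lima_fd, render_handle, DRM_CLOEXEC, &dmabuf_fd);

    /* display side: dmabuf fd -> GEM handle on the Mediatek device */
    drmPrimeFDToHandle(disp_fd, dmabuf_fd, &disp_handle);

    handles[0] = disp_handle;
    pitches[0] = pitch;

    /* Wrap it in a KMS framebuffer so the CRTC can scan it out. */
    drmModeAddFB2(disp_fd, width, height, DRM_FORMAT_XRGB8888,
                  handles, pitches, offsets, &fb_id, 0);

    return fb_id;
}

That's the gist: card1 renders, card0 scans out, and the frame never touches the CPU on the way.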
After discussions on #dri-devel and #lima on freenode IRC, I have successfully written the shim and the R2 can now render directly to the screen.
I'm cleaning up my code now - making sure everything is compliant and any unneeded testing code is reverted.
Then I’ll be updating my repositories with both the 4.20 kernel driver and the modified mesa with the Mediatek DRI shim driver - I’ll keep you posted.
Hiya - sorry for disappearing - I've had health issues, but they're resolved now.
I'm going to port and test my changes for lima against your latest kernel revisions - there are additions to Xorg as well as to the lima kernel driver. It was quite the pain, because I needed to submit my additions upstream to the lima project trees on GitLab and to your trees on GitHub - sorting through it all was quite a nightmare given that, ultimately, only half a dozen files were touched or added.
lima performance wasn't great - perhaps there are DTS-related or clock-related issues, or perhaps it's just lima being too incomplete.
So my main focus was taking what I'd learned from the lima changes and getting some official mali blobs working on the R2 - for this I was using the sunxi version of the mali kernel driver along with their Xorg code.
I'm confident I can get this working, given that all the prerequisites have been proven with a working lima setup - and indeed I've gotten the kernel part working and detecting the mali450. There are minor errors - the reported clock frequency was way too low (DTS? the same reason lima is slow?) - but the module loaded just fine.
Next will be the Xorg driver part - there are caveats: since the FB device and the mali device are different nodes, you need to create a kind of stub driver which essentially blits from one area of memory to the FB display memory - the setup for this stub and the functions it makes use of are already proven with the lima code.
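Conceptually the blit itself is nothing exotic - map both sides and copy line by line, something like this userspace illustration (the real thing will live inside the Xorg driver, and src/src_pitch would come from the mapped mali/lima buffer):

#include <fcntl.h>
#include <linux/fb.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

/* Copy a rendered frame into the fbdev scanout buffer, one scanline
 * at a time so differing pitches are handled. */
void blit_to_fb(const uint8_t *src, unsigned src_pitch,
                unsigned width_bytes, unsigned height)
{
    int fb = open("/dev/fb0", O_RDWR);
    struct fb_fix_screeninfo fix;
    struct fb_var_screeninfo var;

    ioctl(fb, FBIOGET_FSCREENINFO, &fix);
    ioctl(fb, FBIOGET_VSCREENINFO, &var);

    uint8_t *dst = mmap(NULL, fix.smem_len, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fb, 0);

    for (unsigned y = 0; y < height && y < var.yres; y++)
        memcpy(dst + y * fix.line_length, src + y * src_pitch, width_bytes);

    munmap(dst, fix.smem_len);
    close(fb);
}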
I need a few days to hack away at this - if I fail I will return to the lima driver and focus on performance.
lima is incomplete of course - BUT there's enough functionality working that, if you take care to only use the working features, it's useful. For example, lima is perfectly capable of texture-mapping a couple of triangles with filtering - so hardware rescaling for video / emulators is definitely in sight.
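To make that concrete: the rescaling case really is just two textured triangles with GL_LINEAR filtering. A trimmed GLES2 sketch, assuming an EGL context is already current and frame/frame_w/frame_h hold the software-rendered image (names are mine, not from any existing code):

#include <GLES2/gl2.h>

static const char *vs_src =
    "attribute vec2 pos;\n"
    "varying vec2 uv;\n"
    "void main() {\n"
    "  uv = pos * 0.5 + 0.5;\n"
    "  gl_Position = vec4(pos, 0.0, 1.0);\n"
    "}\n";

static const char *fs_src =
    "precision mediump float;\n"
    "varying vec2 uv;\n"
    "uniform sampler2D tex;\n"
    "void main() { gl_FragColor = texture2D(tex, uv); }\n";

/* Full-screen quad as a two-triangle strip. */
static const GLfloat quad[] = { -1,-1,  1,-1,  -1,1,  1,1 };

void draw_scaled(const void *frame, int frame_w, int frame_h)
{
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vs_src, NULL);
    glCompileShader(vs);

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fs_src, NULL);
    glCompileShader(fs);

    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glBindAttribLocation(prog, 0, "pos");
    glLinkProgram(prog);
    glUseProgram(prog);

    /* Upload the emulator/video frame; the GPU does the filtering.
     * (May need a vertical flip depending on the source orientation.) */
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, frame_w, frame_h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, frame);

    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, quad);
    glEnableVertexAttribArray(0);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}

Two triangles, one texture, GL_LINEAR - that's the whole trick.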
I've already written a kind of emulator/media front end called 'BananaBox' that brings the best existing ARM-optimized code into one place. With the fast NEON rescaling code it works great for many things - but to emulate more powerful systems perfectly I need the GPU to take some load off the CPU.
You start writing a front end, and a few months later you're knee-deep in the kernel. Don't you just love programming?