BAR 0: no space for [mem size 0x200000000 64bit pref]

dev_res does not have any member ‘size’.

In addition to the need_fix_device_id and device_id changes, I have also added the following function. I have currently assumed a size of 0x10000000 (256 MB). The error is that BAR 0 is not assigned.

/* Shrink BAR 0 so the kernel requests a window that can actually fit */
static void mtk_fixup_bar_size(struct pci_dev *dev)
{
	struct resource *dev_res = &dev->resource[0];

	/* A full 32-bit resource length would compute a size of 0, so set it
	 * smaller: cap the BAR at 0x10000000 (256 MB). */
	dev_res->end = dev_res->start + 0x10000000 - 1;
}
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MEDIATEK, PCI_DEVICE_ID_MEDIATEK_7622,
			 mtk_fixup_bar_size);

Have you tried adding mt7622 to v2 (which is merged to mainline and does not contain the fixup function)? That way the writew is only triggered for the mt7622 device ID.

Yes, I have already done that.

As per the link, the function is added to struct mtk_pcie_soc, whereas in the 5.4 release that I am working on, the function is already assigned to the .startup struct member when the structure object is created.
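Roughly, the MT7622 entry in drivers/pci/controller/pcie-mediatek.c now looks like this after my change (a sketch from memory, not an exact diff; the device-ID fields mirror the MT7629 entry):

/* Sketch of the modified MT7622 entry (not an exact diff) */
static const struct mtk_pcie_soc mtk_pcie_soc_mt7622 = {
	.need_fix_class_id = true,
	.need_fix_device_id = true,			/* added, as for MT7629 */
	.device_id = PCI_DEVICE_ID_MEDIATEK_7622,	/* added */
	.ops = &mtk_pcie_ops_v2,
	.startup = mtk_pcie_startup_port_v2,		/* already set in 5.4 */
	.setup_irq = mtk_pcie_setup_irq,
};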

However, the error still exists:

BAR 0: no space for [mem size 0x10000000 64bit pref]

Hi,

I am digging a bit deeper.

I have observed that in kernel/resource.c, allocate_resource() is unable to find and request the resource. As a result, we are getting this error:

BAR 0: no space for [mem size 0x10000000 64bit pref]

There is some issue with the memory range. I am still trying to dig in and understand the memory mapping. In case you find something related, please post.
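For context, allocate_resource() searches a parent ("root") resource, i.e. the host bridge window that comes from the devicetree ranges, for a free gap of the requested size within [min, max] and with the requested alignment. Paraphrased from kernel/resource.c in 5.4 (check the exact tree):

/*
 * Allocate an empty slot of "size" bytes inside "root" (here: the host
 * bridge memory window), constrained to [min, max] and aligned to "align".
 * Returns non-zero when no fitting gap exists, which is what ends up as
 * the "no space for" message above.
 */
int allocate_resource(struct resource *root, struct resource *new,
		      resource_size_t size, resource_size_t min,
		      resource_size_t max, resource_size_t align,
		      resource_size_t (*alignf)(void *,
						const struct resource *,
						resource_size_t,
						resource_size_t),
		      void *alignf_data);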

Additional Information

At boot, when trying to allocate resources, the OS tries to allocate the resource in three attempts inside __pci_assign_resource():

First attempt: call pci_bus_alloc_resource() with flags IORESOURCE_PREFETCH | IORESOURCE_MEM_64

Second attempt: call pci_bus_alloc_resource() with flags IORESOURCE_PREFETCH

Third attempt: call pci_bus_alloc_resource() with flags 0

Only in the third attempt does pci_bus_alloc_from_region() call allocate_resource(), and it fails because it cannot find a fitting resource. All of this happens when allocating BAR 0.

However, when allocating for BAR 8, the resource is allocated successfully in the first attempt.
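The fallback pattern in drivers/pci/setup-res.c looks roughly like this (a condensed sketch, not verbatim kernel code; the function name and the dropped guard conditions are mine):

/*
 * Condensed sketch of the retry logic in __pci_assign_resource()
 * (drivers/pci/setup-res.c, 5.4). Each attempt relaxes the type mask that
 * the target bridge window has to match.
 */
static int pci_assign_resource_sketch(struct pci_bus *bus, struct pci_dev *dev,
				      int resno, resource_size_t size,
				      resource_size_t align)
{
	struct resource *res = dev->resource + resno;
	resource_size_t min = (res->flags & IORESOURCE_IO) ?
			      PCIBIOS_MIN_IO : PCIBIOS_MIN_MEM;
	int ret;

	/* 1st attempt: only a 64-bit prefetchable window may hold this BAR */
	ret = pci_bus_alloc_resource(bus, res, size, align, min,
				     IORESOURCE_PREFETCH | IORESOURCE_MEM_64,
				     pcibios_align_resource, dev);
	if (ret == 0)
		return 0;

	/* 2nd attempt: also accept a 32-bit prefetchable window */
	ret = pci_bus_alloc_resource(bus, res, size, align, min,
				     IORESOURCE_PREFETCH,
				     pcibios_align_resource, dev);
	if (ret == 0)
		return 0;

	/* 3rd attempt: accept any memory window (type mask 0); this is where
	 * pci_bus_alloc_from_region()/allocate_resource() finally searches the
	 * host bridge window and, for BAR 0 here, finds no space. */
	return pci_bus_alloc_resource(bus, res, size, align, min, 0,
				      pcibios_align_resource, dev);
}

So if even the third, least restrictive attempt cannot place BAR 0 while BAR 8 is placed immediately, the window exposed by the bridge seems too small or unsuitable for the 0x10000000 request, which matches the memory range suspicion above.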

Does it have anything to do with 64-bit vs. 32-bit address mapping? Does it have something to do with the initial values of the address map?


Imho PCIe is always 64-bit mapping, but I'm not that deep into these things. I have no response from MTK so far, and I'm away from home so I cannot test.

I have not found out which is BAR 0 and which is BAR 8… we only have one ranges entry for each PCI controller.

Hi Frank,

Any response from mtk?

I got a response that the PCIe expert is currently busy with urgent issues… I'll write to the thread if I get anything that helps :slight_smile:

Hi Frank,

Just an update: when the PCI card is inserted into an x86-based laptop, the card works fine. Do we need to reconfigure the configuration registers on the card (using a setpci-like command) so that it works on the ARM-based system?

I don't think so… it seems to be a generic memory mapping issue independent of the card.

@Frank, any update from the PCIe expert? Do you think switching to a different kernel version may help? If yes, can you suggest a kernel version that we can try?

Not really… give them a bit of time…

If you use only one card/slot you can try mainline without splitting, but I guess you will run into the same problem.

Actually, our development has been blocked, so I am continuously following up on this.

Regarding trying mainline: do you mean mainline 5.4? We faced the problem on the 4.19 kernel and have not tried mainline 5.4.

I think there is not much difference between 5.4 and 4.19 regarding this… I don't know if BPI patched something in their repo.

@sinovoip can you say anything about this? Maybe talk to mtk too?

We and MTK both know about this, and we are trying to fix it. Please give us some time.

I just have a question. The 5.4 kernel at https://github.com/frank-w/BPI-R2-4.14/tree/5.4-r64-dsa gives me the impression that this version might have been developed for the BPI-R2 with a 4.14 kernel base. Hopefully this issue is not arising due to using the R2 memory mapping on the R64?

Can you please confirm this, frank-w?

??

My repo is kernel only (OS-independent) and contains r64 branches (I try to merge r2 and r64 in the future for easier maintenance). You use the r64 branch… which R2 memory mapping do you mean? The mapping is defined in the dts and handled by the generic driver… R2 and R64 use different dts files.

Ok. It is the name of the repo that gave me this doubt.

I am referring to the memory mapping of the PCI slots on R64 and R2

I simply started with the R2 and 4.14, but there are newer versions too (up to 5.5), and also for the R64.

I have not compared the R2 with the R64 regarding memory mapping… but maybe you'll find something.

I got a response from MTK with patches reported to be working on 4.19.

more details here: [BPI-R64] PCIe issues

@neha @batul.rangwala can you please test the 4.19-mt7622pcie branch with your card?