From: bugzilla-noreply@freebsd.org
To: freebsd-arm@FreeBSD.org
Subject: [Bug 279830] Using AMD EPYC 9374F 32-Core Processor, PF driver gives error when loading for more than 62 VFs
Date: Tue, 18 Jun 2024 13:33:12 +0000
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=279830

Bug ID: 279830
Summary: Using AMD EPYC 9374F 32-Core Processor, PF driver gives error when loading for more than 62 VFs
Product: Base System
Version: 14.0-RELEASE
Hardware: Any
OS: Any
Status: New
Severity: Affects Only Me
Priority: ---
Component: arm
Assignee: freebsd-arm@FreeBSD.org
Reporter: vdubey@maxlinear.com

I am running on a Supermicro server with two AMD EPYC 9374F 32-Core Processors (amd64 architecture). I have written a driver to use my own PCIe device in an SR-IOV environment; the device supports up to 64 VFs. (A sketch of the pci_iov hooks such a PF driver uses follows the dmesg output below.)

I have set num_vfs to 62; my iovctl.conf is below:

PF {
        device : "dre_drv0";
        num_vfs : 62;
}

DEFAULT {
        passthrough : true;
}

After this, sudo iovctl -C -f /etc/iovctl.conf loads the PF driver successfully. The dmesg output is below:

dre_drv0: DRE_drvIovInit: Called with num_vfs 62.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 0.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 1.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 2.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 3.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 4.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 5.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 6.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 7.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 8.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 9.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 10.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 11.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 12.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 13.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 14.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 15.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 16.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 17.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 18.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 19.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 20.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 21.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 22.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 23.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 24.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 25.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 26.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 27.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 28.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 29.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 30.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 31.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 32.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 33.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 34.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 35.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 36.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 37.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 38.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 39.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 40.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 41.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 42.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 43.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 44.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 45.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 46.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 47.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 48.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 49.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 50.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 51.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 52.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 53.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 54.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 55.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 56.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 57.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 58.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 59.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 60.
dre_drv0: DRE_drvIovAddVf: Called for vfnum 61.
ppt0 at device 0.128 numa-domain 0 on pci9
ppt1 at device 0.129 numa-domain 0 on pci9
ppt2 at device 0.130 numa-domain 0 on pci9
ppt3 at device 0.131 numa-domain 0 on pci9
ppt4 at device 0.132 numa-domain 0 on pci9
ppt5 at device 0.133 numa-domain 0 on pci9
ppt6 at device 0.134 numa-domain 0 on pci9
ppt7 at device 0.135 numa-domain 0 on pci9
ppt8 at device 0.136 numa-domain 0 on pci9
ppt9 at device 0.137 numa-domain 0 on pci9
ppt10 at device 0.138 numa-domain 0 on pci9
ppt11 at device 0.139 numa-domain 0 on pci9
ppt12 at device 0.140 numa-domain 0 on pci9
ppt13 at device 0.141 numa-domain 0 on pci9
ppt14 at device 0.142 numa-domain 0 on pci9
ppt15 at device 0.143 numa-domain 0 on pci9
ppt16 at device 0.144 numa-domain 0 on pci9
ppt17 at device 0.145 numa-domain 0 on pci9
ppt18 at device 0.146 numa-domain 0 on pci9
ppt19 at device 0.147 numa-domain 0 on pci9
ppt20 at device 0.148 numa-domain 0 on pci9
ppt21 at device 0.149 numa-domain 0 on pci9
ppt22 at device 0.150 numa-domain 0 on pci9
ppt23 at device 0.151 numa-domain 0 on pci9
ppt24 at device 0.152 numa-domain 0 on pci9
ppt25 at device 0.153 numa-domain 0 on pci9
ppt26 at device 0.154 numa-domain 0 on pci9
ppt27 at device 0.155 numa-domain 0 on pci9
ppt28 at device 0.156 numa-domain 0 on pci9
ppt29 at device 0.157 numa-domain 0 on pci9
ppt30 at device 0.158 numa-domain 0 on pci9
ppt31 at device 0.159 numa-domain 0 on pci9
ppt32 at device 0.160 numa-domain 0 on pci9
ppt33 at device 0.161 numa-domain 0 on pci9
ppt34 at device 0.162 numa-domain 0 on pci9
ppt35 at device 0.163 numa-domain 0 on pci9
ppt36 at device 0.164 numa-domain 0 on pci9
ppt37 at device 0.165 numa-domain 0 on pci9
ppt38 at device 0.166 numa-domain 0 on pci9
ppt39 at device 0.167 numa-domain 0 on pci9
ppt40 at device 0.168 numa-domain 0 on pci9
ppt41 at device 0.169 numa-domain 0 on pci9
ppt42 at device 0.170 numa-domain 0 on pci9
ppt43 at device 0.171 numa-domain 0 on pci9
ppt44 at device 0.172 numa-domain 0 on pci9
ppt45 at device 0.173 numa-domain 0 on pci9
ppt46 at device 0.174 numa-domain 0 on pci9
ppt47 at device 0.175 numa-domain 0 on pci9
ppt48 at device 0.176 numa-domain 0 on pci9
ppt49 at device 0.177 numa-domain 0 on pci9
ppt50 at device 0.178 numa-domain 0 on pci9
ppt51 at device 0.179 numa-domain 0 on pci9
ppt52 at device 0.180 numa-domain 0 on pci9
ppt53 at device 0.181 numa-domain 0 on pci9
ppt54 at device 0.182 numa-domain 0 on pci9
ppt55 at device 0.183 numa-domain 0 on pci9
ppt56 at device 0.184 numa-domain 0 on pci9
ppt57 at device 0.185 numa-domain 0 on pci9
ppt58 at device 0.186 numa-domain 0 on pci9
ppt59 at device 0.187 numa-domain 0 on pci9
ppt60 at device 0.188 numa-domain 0 on pci9
ppt61 at device 0.189 numa-domain 0 on pci9
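For reference, here is a minimal sketch (hypothetical names, not the actual dre_drv source) of how a FreeBSD PF driver typically hooks into the kernel's SR-IOV (pci_iov) driver interface; the DRE_drvIovInit / DRE_drvIovAddVf / DRE_drvIovUnInit messages above presumably come from methods registered this way:

/*
 * Hypothetical sketch only -- not the actual dre_drv source.  It shows the
 * usual way a PF driver registers with FreeBSD's pci_iov framework so that
 * iovctl(8) can create VFs.
 */
#include <sys/param.h>
#include <sys/bus.h>
#include <sys/nv.h>

#include <dev/pci/pci_iov.h>

#include "pci_iov_if.h"

static int
dre_iov_init(device_t dev, uint16_t num_vfs, const nvlist_t *config)
{
	device_printf(dev, "%s: Called with num_vfs %u.\n", __func__, num_vfs);
	/* Prepare the device so that num_vfs VFs can be enabled. */
	return (0);
}

static int
dre_iov_add_vf(device_t dev, uint16_t vfnum, const nvlist_t *config)
{
	device_printf(dev, "%s: Called for vfnum %u.\n", __func__, vfnum);
	/* Per-VF setup (queues, interrupts, device-specific state, ...). */
	return (0);
}

static void
dre_iov_uninit(device_t dev)
{
	device_printf(dev, "%s: Called.\n", __func__);
	/* Undo everything done in dre_iov_init()/dre_iov_add_vf(). */
}

static int
dre_attach(device_t dev)
{
	nvlist_t *pf_schema, *vf_schema;

	/* ... normal attach work ... */

	/*
	 * Empty schemas here; device-specific config parameters would be
	 * added with the pci_iov_schema_add_*() helpers.
	 */
	pf_schema = pci_iov_schema_alloc_node();
	vf_schema = pci_iov_schema_alloc_node();

	/* Registers the PF; iovctl -C then invokes the methods above. */
	return (pci_iov_attach(dev, pf_schema, vf_schema));
}

static device_method_t dre_methods[] = {
	/* ... standard device/bus methods ... */
	DEVMETHOD(pci_iov_init,		dre_iov_init),
	DEVMETHOD(pci_iov_add_vf,	dre_iov_add_vf),
	DEVMETHOD(pci_iov_uninit,	dre_iov_uninit),
	DEVMETHOD_END
};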
But when I change iovctl.conf to set num_vfs to 64 and run sudo iovctl -C -f /etc/iovctl.conf, it fails with the error message below:

dre_drv0: DRE_drvIovInit: Called with num_vfs 64.
dre_drv0: 0x2000000 bytes of rid 0x264 res 3 failed (0, 0xffffffffffffffff).
dre_drv0: DRE_drvIovUnInit: Called.

It seems that some memory cannot be allocated.

I am using the same driver code on another server with an Intel CPU, and everything works fine there: the PF driver loads with 64 VFs on FreeBSD 14.

Can you help check why it is unable to load with 64 VFs on the AMD CPU?

--
You are receiving this mail because:
You are the assignee for the bug.