Hi,
To elaborate more on Alex's explanation ...
AMD SoCs have audio IPs (and in the future potentially also camera image signal processors) built into the GNB (graphics north bridge). These IPs are programmed through MMIO registers in the graphics MMIO aperture, they send events to the host through the graphics IRQ, and they use memory that is accessed through the graphics memory controller. Therefore the GPU driver must be involved in programming these IPs.
However, these functions are not exposed to user mode by the graphics driver subsystem in Linux. Audio is handled by ALSA and camera ISPs are handled by V4L2. We want to represent these IPs as separate devices in the device hierarchy, so that ALSA and V4L2 drivers can discover and bind to them using the standard mechanisms.
Therefore we created this "virtual" GNB bus that allows us to create devices in a sensible place in the device hierarchy, as child devices of the GNB. The enum amd_gnb_bus_ip serves as the device ID on this bus, and struct amd_gnb_bus_dev represents the device. The private_data field is specific to the type of device: it contains high-level interfaces for ALSA drivers to talk to the audio IP and for the V4L2 driver to talk to the ISP IP. Any direct HW access, memory management and IRQ handling is done inside the GPU driver.
Regards, Felix
On 15-08-07 10:17 AM, Alex Deucher wrote:
On Fri, Aug 7, 2015 at 6:25 AM, Mark Brown broonie@kernel.org wrote:
On Thu, Aug 06, 2015 at 10:25:02AM -0400, Alex Deucher wrote:
From: Chunming Zhou david1.zhou@amd.com
This is used by the incoming ACP driver. The DMA engine for the i2s audio codec is part of the GPU.
This exposes an amd gnb bus for the i2s codec to hang off of.
Could you be more specific about what an "amd gnb bus" is please?
It's a bus on which to hang HW blocks of the GPU that are controlled by other subsystems.
+enum amd_gnb_bus_ip {
+	AMD_GNB_IP_ACP_DMA,
+	AMD_GNB_IP_ACP_I2S,
+	AMD_GNB_IP_ACP_PCM,
+	AMD_GNB_IP_ISP,
+	AMD_GNB_IP_NUM
+};
+
+struct amd_gnb_bus_dev {
+	struct device dev; /* generic device interface */
+	enum amd_gnb_bus_ip ip;
+	/* private data can be acp_handle/isp_handle etc. */
+	void *private_data;
+};
Looking at the code I'm not seeing much that's bus specific except for the above, which looks like the sort of device we usually represent as an MFD (with the MFD providing resource distribution and arbitration between the various component devices, which fit into their respective subsystems). Why code a new bus for this device?
Adding Felix, who worked on the design for this. The idea is that there are HW blocks on the GPU that are controlled by drivers that are part of other subsystems. Those drivers need access to resources (e.g., the MMIO aperture) controlled by the GPU driver. I guess this is an MFD of sorts. If this is not the preferred way to handle this type of device, what is? Can you point me to another driver that handles this differently?
Alex