Re: [alsa-devel] Question about your DSP topic branch
Hi Liam,
I have two more questions about your DSP topic branch
- I see in sdp4430.c that the SDP4430 MODEM front-end DAI link has no_host_mode set to SND_SOC_DAI_LINK_NO_HOST. What is the purpose of no_host_mode? Is it for the use case where two physical devices can exchange audio data without host-processor intervention? If so, if a user-space application tries to write to the PCM buffer, will the framework reject the buffer?
- I see there is a DMIC codec (dmic.c) under sound/soc/codecs which is pretty much just a dummy codec driver. I suppose the configuration of the DMIC is done in another driver. Would it be better if we could have something like the fixed voltage regulator? Then there would be no need to duplicate the effort.
Look forward to seeing your reply soon
Thanks
On 1/6/2011 3:39 PM, Patrick Lai wrote:
Hi Liam,
I synced to your kernel DSP topic branch two days back in an attempt to understand the upcoming ASoC DSP design. I have a few questions I would like you to clarify.
- In sdp4430.c, both the FE DAI link and the BE DAI link have omap-aess-audio as the platform driver. How is the omap-aess-audio platform driver used in both the front-end and the back-end?
- The front-end DAI with stream name "Multimedia" has omap-pcm-audio, which is DMA based, as its platform driver. This front-end DAI is mapped to a back-end (i.e. PDM-DL1) whose platform driver is omap-aess-audio. omap-aess-audio looks to me like the DSP platform driver. If a stream is DMA based, it seems strange to have a DSP-based back-end.
- To the best of my knowledge, omap-abe.c is the front-end implementation. playback_trigger() seems to go ahead and enable all back-ends linked to the given front-end. However, a front-end may not be routed to all back-ends. Where in the code is it ensured that a BE is only activated when a stream is actually being routed to that particular back-end?
- omap-abe.c manages activation of the BE DAIs, and omap-abe-dsp.c manages the routing of front-end to back-end DAPM widgets and the routing map. Am I correct? This leads to the previous question: how do the two drivers work together to make sure BEs are only activated if a front-end has been routed to them?
- Is there a mechanism for a front-end to switch between the DMA and DSP platform drivers? It looks to me that the mapping of front-end to platform driver is predetermined based on use case. For example, HDMI goes through DMA as a legacy DAI link.
- struct abe_data has an array member called dapm. It looks to me that this array simply tracks DAPM component status, but I don't see it really being used in a meaningful way in omap-abe-dsp.c.
Thanks
On Mon, Jan 24, 2011 at 11:01:10PM -0800, Patrick Lai wrote:
- I see there is a DMIC codec (dmic.c) under sound/soc/codecs which is pretty much just a dummy codec driver. I suppose the configuration of the DMIC is done in another driver. Would it be better if we could have
It's for direct DMIC interfaces in CPUs like OMAP4 has.
something like the fixed voltage regulator? Then there would be no need to duplicate the effort.
Yes, that'd be kind of nice, but given how tiny these noop drivers are and the fact that they do all need to specify their capabilities, it's not clear that there's much advantage in combining them into a single driver - the boilerplate is so small and simple.
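For what it's worth, the shape of one of these noop drivers is roughly the following - a minimal sketch only, with made-up names and capabilities rather than anything copied from the real dmic.c (driver registration boilerplate omitted):

#include <linux/module.h>
#include <linux/platform_device.h>
#include <sound/soc.h>

/* the capabilities still have to be spelled out per device */
static struct snd_soc_dai_driver noop_dai = {
        .name = "noop-hifi",
        .capture = {
                .stream_name    = "Capture",
                .channels_min   = 1,
                .channels_max   = 2,
                .rates          = SNDRV_PCM_RATE_8000_48000,
                .formats        = SNDRV_PCM_FMTBIT_S16_LE,
        },
};

static struct snd_soc_codec_driver soc_noop;    /* no controls, no I/O */

static int noop_codec_probe(struct platform_device *pdev)
{
        return snd_soc_register_codec(&pdev->dev, &soc_noop, &noop_dai, 1);
}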
On 1/25/2011 3:51 AM, Mark Brown wrote:
On Mon, Jan 24, 2011 at 11:01:10PM -0800, Patrick Lai wrote:
- I see there is a DMIC codec (dmic.c) under sound/soc/codecs which is pretty much just a dummy codec driver. I suppose the configuration of the DMIC is done in another driver. Would it be better if we could have
It's for direct DMIC interfaces in CPUs like OMAP4 has.
something like the fixed voltage regulator? Then there would be no need to duplicate the effort.
Yes, that'd be kind of nice, but given how tiny these noop drivers are and the fact that they do all need to specify their capabilities, it's not clear that there's much advantage in combining them into a single driver - the boilerplate is so small and simple.
Do we consider the case where a codec driver is not required, such as a virtual sink or a sink that is configured outside of the ALSA driver? If we create a dummy codec driver for each use case, wouldn't sound/soc/codecs end up littered with a bunch of noop drivers? Furthermore, couldn't the capabilities be passed through the platform device?
Thanks Patrick
On Tue, Jan 25, 2011 at 10:22:41PM -0800, Patrick Lai wrote:
On 1/25/2011 3:51 AM, Mark Brown wrote:
Yes, that'd be kind of nice, but given how tiny these noop drivers are and the fact that they do all need to specify their capabilities, it's not clear that there's much advantage in combining them into a single driver - the boilerplate is so small and simple.
Do we consider the case where a codec driver is not required, such as a virtual sink or a sink that is configured outside of the ALSA driver? If we create a dummy codec driver for each use case, wouldn't sound/soc/codecs end up littered with a bunch of noop drivers?
Yup, but it's not really a big cost - they're all so trivial.
Furthermore, couldn't the capabilities be passed through the platform device?
Right, but of course you probably end up defining a common set of platform data for each device so people don't have to cut'n'paste the same thing into all the different board files.
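e.g. something along these lines - purely hypothetical, the struct and device names are invented for illustration:

#include <linux/platform_device.h>
#include <sound/pcm.h>

/* shared platform data for a hypothetical generic noop codec */
struct noop_codec_pdata {
        unsigned int channels_min;
        unsigned int channels_max;
        unsigned int rates;     /* SNDRV_PCM_RATE_* mask */
        u64 formats;            /* SNDRV_PCM_FMTBIT_* mask */
};

/* the board file then only fills in the numbers */
static struct noop_codec_pdata board_dmic_pdata = {
        .channels_min   = 1,
        .channels_max   = 2,
        .rates          = SNDRV_PCM_RATE_96000,
        .formats        = SNDRV_PCM_FMTBIT_S32_LE,
};

static struct platform_device board_dmic_device = {
        .name   = "noop-codec",
        .id     = -1,
        .dev    = {
                .platform_data = &board_dmic_pdata,
        },
};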
On 1/26/2011 3:20 AM, Mark Brown wrote:
On Tue, Jan 25, 2011 at 10:22:41PM -0800, Patrick Lai wrote:
On 1/25/2011 3:51 AM, Mark Brown wrote:
Yes, that'd be kind of nice, but given how tiny these noop drivers are and the fact that they do all need to specify their capabilities, it's not clear that there's much advantage in combining them into a single driver - the boilerplate is so small and simple.
Do we consider the case where a codec driver is not required, such as a virtual sink or a sink that is configured outside of the ALSA driver? If we create a dummy codec driver for each use case, wouldn't sound/soc/codecs end up littered with a bunch of noop drivers?
Yup, but it's not really a big cost - they're all so trivial.
Currently, I already have a few dummy codec drivers in mind for the project I am working on. I am not worried about the size of these files. Besides having tons of dummy codec drivers in the source tree, I am also looking at the value and effort of upstreaming these trivial drivers.
Furthermore, couldn't the capabilities be passed through the platform device?
Right, but of course you probably end up defining a common set of platform data for each device so people don't have to cut'n'paste the same thing into all the different board files.
Yes, that would be convenient for others. However, I see the effort of creating multiple platform devices and pasting the capabilities into the board file as far less than that of upstreaming no-op drivers and making changes to the Kconfig and Makefile under sound/soc/codecs.
Thanks
On Thu, Jan 27, 2011 at 01:51:58PM -0800, Patrick Lai wrote:
On 1/26/2011 3:20 AM, Mark Brown wrote:
Yup, but it's not really a big cost - they're all so trivial.
Currently, I already have a few dummy codec drivers in mind for the project I am working on. I am not worried about the size of these files. Besides having tons of dummy codec drivers in the source tree, I am also looking at the value and effort of upstreaming these trivial drivers.
Like I say, it's utterly trivial.
On 1/31/2011 5:30 AM, Mark Brown wrote:
On Thu, Jan 27, 2011 at 01:51:58PM -0800, Patrick Lai wrote:
On 1/26/2011 3:20 AM, Mark Brown wrote:
Yup, but it's not really a big cost - they're all so trivial.
Currently, I already have a few dummy codec drivers in mind for the project I am working on. I am not worried about the size of these files. Besides having tons of dummy codec drivers in the source tree, I am also looking at the value and effort of upstreaming these trivial drivers.
Like I say, it's utterly trivial.
I see a flag "no_codec" defined in struct snd_soc_dai_link. Is that flag going to stay or just a temporary workaround?
On Thu, Mar 17, 2011 at 12:29:18AM -0700, Patrick Lai wrote:
I see a flag "no_codec" defined in struct snd_soc_dai_link. Is that flag going to stay or just a temporary workaround?
That's actually not referenced at all at the minute, so it's not even a workaround and should be removed :). I'd not expect to use that for this, as a lot of the links that use this are something <-> CODEC ones; I suspect that's an early merged bit of the DSP config propagation stuff, but I'd need to check.
Hi Patrick,
CCing in Mark and alsa-devel@alsa-project.org (preferred list)
On Mon, 2011-01-24 at 23:01 -0800, Patrick Lai wrote:
Hi Liam,
I have two more questions about your DSP topic branch
- I see in sdp4430.c that the SDP4430 MODEM front-end DAI link has no_host_mode set to SND_SOC_DAI_LINK_NO_HOST. What is the purpose of no_host_mode?
No host mode means no audio is transferred to the host CPU in this case, i.e. the MODEM routes audio directly between the DSP and mic/speaker.
This flag also tells the ASoC core that no DMA will be required, hence the code in the DMA branch will not start any DMA. This part has to be cleaned up for upstream.
Is it for the use case where two physical devices can exchange audio data without host-processor intervention? If so, if a user-space application tries to write to the PCM buffer, will the framework reject the buffer?
Yes, that's correct. The PCM core will not complain here when it receives no data either.
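For reference, such a link is described in the machine driver roughly as below. This is a sketch only: the no_host_mode flag and the AESS platform name are the point here, the other names are placeholders rather than the exact sdp4430.c contents.

static struct snd_soc_dai_link sdp4430_modem_fe = {
        .name           = "MODEM",
        .stream_name    = "MODEM Audio",
        .cpu_dai_name   = "MODEM",              /* placeholder */
        .codec_dai_name = "modem-codec-dai",    /* placeholder */
        .codec_name     = "modem-codec",        /* placeholder */
        .platform_name  = "omap-aess-audio",    /* AESS, no host DMA */
        /* audio never touches host memory on this link */
        .no_host_mode   = SND_SOC_DAI_LINK_NO_HOST,
};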
- I see there is a DMIC codec (dmic.c) under sound/soc/codecs which is pretty much just a dummy codec driver. I suppose the configuration of the DMIC is done in another driver. Would it be better if we could have something like the fixed voltage regulator? Then there would be no need to duplicate the effort.
Yeah, this is a generic DMIC driver. It's designed to have very wide coverage and should cover most DMICs out there, so it should be able to fit into your architecture too.
Look forward to seeing your reply soon
Thanks
On 1/6/2011 3:39 PM, Patrick Lai wrote:
Hi Liam,
I synced to your kernel DSP topic branch two days back in an attempt to understand the upcoming ASoC DSP design. I have a few questions I would like you to clarify.
- In sdp4430.c, both the FE DAI link and the BE DAI link have omap-aess-audio as the platform driver. How is the omap-aess-audio platform driver used in both the front-end and the back-end?
The MODEM and Low Power (LP) Front Ends (FEs) use the AESS platform driver since they do not require DMA, whilst the other FEs use the normal DMA platform driver since they do require DMA.
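i.e. the front ends are split roughly as below - a sketch only, names other than the two platform drivers are illustrative:

static struct snd_soc_dai_link sdp4430_fe_dai[] = {
        {       /* normal FE: PCM data is DMA'd towards the DSP FIFO */
                .name           = "Multimedia",
                .stream_name    = "Multimedia",
                .platform_name  = "omap-pcm-audio",
        },
        {       /* low power FE: no DMA, the AESS handles the data itself */
                .name           = "Multimedia LP",
                .stream_name    = "Multimedia LP",
                .platform_name  = "omap-aess-audio",
        },
};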
- The front-end DAI with stream name "Multimedia" has omap-pcm-audio, which is DMA based, as its platform driver. This front-end DAI is mapped to a back-end (i.e. PDM-DL1) whose platform driver is omap-aess-audio. omap-aess-audio looks to me like the DSP platform driver. If a stream is DMA based, it seems strange to have a DSP-based back-end.
The DMA is used to send the PCM data from the ALSA PCM device to the DSP FIFO.
- To the best of my knowledge, omap-abe.c is the front-end implementation. playback_trigger() seems to go ahead and enable all back-ends linked to the given front-end. However, a front-end may not be routed to all back-ends. Where in the code is it ensured that a BE is only activated when a stream is actually being routed to that particular back-end?
This is all done in soc-dsp.c now. We use the DAPM graph to work out all valid routes from FE to BE and vice versa.
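i.e. it walks routes of this shape to find which BEs a running FE actually reaches - the widget and control names below are invented for illustration, not the real ABE ones:

static const struct snd_soc_dapm_route dsp_routes[] = {
        /* { sink, control, source } */
        { "DL1 Mixer", "Multimedia", "MM_DL" },         /* FE stream into the mixer */
        { "PDM_DL1",   NULL,         "DL1 Mixer" },     /* mixer output to the BE */
};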
- omap-abe.c manages activation of the BE DAIs, and omap-abe-dsp.c manages the routing of front-end to back-end DAPM widgets and the routing map. Am I correct?
Yes, although the routing management is now all in soc-dsp.c
This leads to the previous question: how do the two drivers work together to make sure BEs are only activated if a front-end has been routed to them?
soc-dsp.c now marshals all the PCM ops and makes sure that only valid paths have their DAIs activated.
- Is there a mechanism for a front-end to switch between the DMA and DSP platform drivers? It looks to me that the mapping of front-end to platform driver is predetermined based on use case. For example, HDMI goes through DMA as a legacy DAI link.
There is no way to dynamically switch platform drivers atm, but this can be solved by having a mutually exclusive FE for each use case.
- struct abe_data has an array member called dapm. It looks to me that this array simply tracks DAPM component status, but I don't see it really being used in a meaningful way in omap-abe-dsp.c.
It is used by the OMAP4 ABE to work out the OPP power level and to work out the routing between FE and BE.
Liam
On 1/25/2011 8:46 AM, Liam Girdwood wrote:
Hi Patrick,
CCing in Mark and alsa-devel@alsa-project.org (preferred list)
On Mon, 2011-01-24 at 23:01 -0800, Patrick Lai wrote:
Hi Liam,
I have two more questions about your DSP topic branch
- I see in sdp4430.c that the SDP4430 MODEM front-end DAI link has no_host_mode set to SND_SOC_DAI_LINK_NO_HOST. What is the purpose of no_host_mode?
No host mode means no audio is transferred to the host CPU in this case, i.e. the MODEM routes audio directly between the DSP and mic/speaker.
This flag also tells the ASoC core that no DMA will be required, hence the code in the DMA branch will not start any DMA. This part has to be cleaned up for upstream.
Is it for the use case where two physical devices can exchange audio data without host-processor intervention? If so, if a user-space application tries to write to the PCM buffer, will the framework reject the buffer?
Yes, that's correct. The PCM core will not complain here when it receives no data either.
I experimented with the NO_HOST option and found that the platform driver's PCM copy function is called. Is that part of the cleanup you were talking about?
- I see there is a DMIC codec (dmic.c) under sound/soc/codecs which is pretty much just a dummy codec driver. I suppose the configuration of the DMIC is done in another driver. Would it be better if we could have something like the fixed voltage regulator? Then there would be no need to duplicate the effort.
Yeah, this is a generic DMIC driver. It's designed to have very wide coverage and should cover most DMICs out there, so it should be able to fit into your architecture too.
Look forward to seeing your reply soon
Thanks
On 1/6/2011 3:39 PM, Patrick Lai wrote:
Hi Liam,
I synced to your kernel DSP topic branch two days back in an attempt to understand the upcoming ASoC DSP design. I have a few questions I would like you to clarify.
- In sdp4430.c, both the FE DAI link and the BE DAI link have omap-aess-audio as the platform driver. How is the omap-aess-audio platform driver used in both the front-end and the back-end?
The MODEM and Low Power (LP) Front Ends (FEs) use the AESS platform driver since they do not require DMA, whilst the other FEs use the normal DMA platform driver since they do require DMA.
The PCM functions inside omap-abe-dsp.c seem to be no-ops if the DAI ID is not MODEM or LP. I guess reusing the omap-aess-audio platform driver for the back-end DAI link serves the architectural requirement that each DAI link must have a platform driver. Do I have the right impression?
- The front-end DAI with stream name "Multimedia" has omap-pcm-audio, which is DMA based, as its platform driver. This front-end DAI is mapped to a back-end (i.e. PDM-DL1) whose platform driver is omap-aess-audio. omap-aess-audio looks to me like the DSP platform driver. If a stream is DMA based, it seems strange to have a DSP-based back-end.
The DMA is used to send the PCM data from the ALSA PCM device to the DSP FIFO.
- To the best of my knowledge, omap-abe.c is the front-end implementation. playback_trigger() seems to go ahead and enable all back-ends linked to the given front-end. However, a front-end may not be routed to all back-ends. Where in the code is it ensured that a BE is only activated when a stream is actually being routed to that particular back-end?
This is all done in soc-dsp.c now. We use the DAPM graph to work out all valid routes from FE to BE and vice versa.
I experimented with dynamic routing. If I issue the mixer command to route AIF_IN to AIF_OUT through the mixer widget before starting playback, it works. Otherwise, I get an I/O error from aplay. I think it's acceptable to require the application to set up the path before the start of playback. For the device-switch case, is it mandatory to route the stream to the mixer of BE DAI-LINK2 before de-routing the stream out of the mixer of BE DAI-LINK1?
- omap-abe.c manages activation of the BE DAIs, and omap-abe-dsp.c manages the routing of front-end to back-end DAPM widgets and the routing map. Am I correct?
Yes, although the routing management is now all in soc-dsp.c
This leads to the previous question: how do the two drivers work together to make sure BEs are only activated if a front-end has been routed to them?
soc-dsp.c now marshals all the PCM ops and makes sure that only valid paths have their DAIs activated.
- Is there a mechanism for a front-end to switch between the DMA and DSP platform drivers? It looks to me that the mapping of front-end to platform driver is predetermined based on use case. For example, HDMI goes through DMA as a legacy DAI link.
There is no way to dynamically switch platform drivers atm, but this can be solved by having a mutually exclusive FE for each use case.
- struct abe_data has an array member called dapm. It looks to me that this array simply tracks DAPM component status, but I don't see it really being used in a meaningful way in omap-abe-dsp.c.
It is used by the OMAP4 ABE to work out the OPP power level and to work out the routing between FE and BE.
Liam
Hi Liam,
In the DSP routing scheme, what is the expectation of the back-end CPU/codec drivers in terms of determining which hardware parameters, especially channel mode, to use? The DSP in question is capable of resampling and downmixing, so the hardware configuration of the front-end DAI does not need to match that of the back-end.
Here are the scenarios for which I need clarification:
- If there are two front-ends routed to the same back-end, which front-end's hardware parameters should the back-end DAI be based on? For example, one front-end is mono and the other front-end is stereo.
- Depending on device mode/use case, I would like to configure the BE to a different channel mode irrespective of the front-end configuration (i.e. configuring the back-end to handset mode). Where is the hook to do so under the ASoC DSP framework?
Thanks Patrick
On Tue, Mar 15, 2011 at 12:08:39AM -0700, Patrick Lai wrote:
- If there are two front-ends routed to the same back-end, which front-end's hardware parameters should the back-end DAI be based on? For example, one front-end is mono and the other front-end is stereo.
- Depending on device mode/use case, I would like to configure the BE to a different channel mode irrespective of the front-end configuration (i.e. configuring the back-end to handset mode). Where is the hook to do so under the ASoC DSP framework?
If you can configure the two completely separately then you can just have a DAPM path between the two devices and don't need to link the DAI configurations at all - WM8994 is an example of doing this: there is a bunch of DSP and mixing in the digital section between the DAIs, but because the DSP includes sample rate conversion the DAIs can be configured separately and we don't have to worry about propagating the configuration through.
Thinking off the top of my head, if you do need to link the configurations sometimes, the first thing that springs to mind is to set constraints so broad that they're noops - we're going to want to support propagating constraints through as well as specific settings anyway, since some stuff has restrictions like "route DAI 1 to DAI 2 with DAI 2 at any sample rate that's an integer division of the DAI 1 rate", though that sort of stuff may well be a second pass.
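Roughly this sort of thing in the front end's startup, i.e. constraints wide enough that they never bite (the numbers are arbitrary):

#include <sound/pcm.h>

static int fe_startup(struct snd_pcm_substream *substream)
{
        struct snd_pcm_runtime *runtime = substream->runtime;

        /* deliberately wide open so the FE never restricts the BE */
        snd_pcm_hw_constraint_minmax(runtime, SNDRV_PCM_HW_PARAM_RATE,
                                     8000, 192000);
        snd_pcm_hw_constraint_minmax(runtime, SNDRV_PCM_HW_PARAM_CHANNELS,
                                     1, 8);
        return 0;
}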
One thing I should mention which I've been working on and which may (depending on the design of your hardware) be a better match for you is providing a mechanism for setting up DAI links and starting streams on them in kernel space based on DAPM routing, with DAPM propagating over these links automatically. This is mostly intended for representing things like basebands where you'll have a DAI link that's going to be running in one of a small number of configurations (often only one) connected to something else along the lines of:
CODEC < 8kHz mono > Baseband
so the idea is that, to the user, you can have a couple of pin switches (or whatever else can be represented) in the baseband to say that the baseband is passing audio, and then DAPM will be extended to figure out that it should start up the audio interfaces and configure their params without needing userspace to do it, just like it can if the link between the two were analogue.
This will complement the work Liam's doing - Liam's work ties together the configuration within a device; this'll help with inter-device links.
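To give a feel for it, the link's single configuration could be described with the existing struct snd_soc_pcm_stream; how it would actually be attached to the DAI link is exactly the work in progress, so treat this as a guess:

static const struct snd_soc_pcm_stream baseband_link_params = {
        .formats        = SNDRV_PCM_FMTBIT_S16_LE,
        .rate_min       = 8000,
        .rate_max       = 8000,
        .channels_min   = 1,
        .channels_max   = 1,
};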
On 3/15/2011 4:25 AM, Mark Brown wrote:
On Tue, Mar 15, 2011 at 12:08:39AM -0700, Patrick Lai wrote:
- If there are two front-ends routed to the same back-end, which front-end's hardware parameters should the back-end DAI be based on? For example, one front-end is mono and the other front-end is stereo.
- Depending on device mode/use case, I would like to configure the BE to a different channel mode irrespective of the front-end configuration (i.e. configuring the back-end to handset mode). Where is the hook to do so under the ASoC DSP framework?
If you can configure the two completely separately then you can just have a DAPM path between the two devices and don't need to link the DAI configurations at all - WM8994 is an example of doing this: there is a bunch of DSP and mixing in the digital section between the DAIs, but because the DSP includes sample rate conversion the DAIs can be configured separately and we don't have to worry about propagating the configuration through.
Essentially, I have a front-end DAI link that represents the PCM stream, while the back-end DAI link represents the physical hardware path. Irrespective of the PCM configuration of the front-end DAI link, I want the CPU DAI and codec DAI of the back-end DAI link configured to the same PCM configuration; the DSP can handle resampling/downmixing. How to achieve this use case in the soc-dsp implementation is not obvious to me. In the existing soc-dsp implementation, the back-end does not get activated unless a front-end is routed to it, and then the PCM parameters of the front-end get propagated to the CPU driver and codec driver of the back-end DAI link. I don't see how I would be able to pass a PCM configuration specifically for the back-end.
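To illustrate what I am after: some hook on the back-end where the machine driver can rewrite the hw_params before they reach the BE CPU/codec DAIs, regardless of what the front-end negotiated. The callback and how it would attach to the BE DAI link are hypothetical; the interval helpers are the standard ALSA ones.

#include <sound/soc.h>
#include <sound/pcm_params.h>

static int my_be_hw_params_fixup(struct snd_soc_pcm_runtime *rtd,
                                 struct snd_pcm_hw_params *params)
{
        struct snd_interval *rate = hw_param_interval(params,
                                        SNDRV_PCM_HW_PARAM_RATE);
        struct snd_interval *channels = hw_param_interval(params,
                                        SNDRV_PCM_HW_PARAM_CHANNELS);

        /* pin the physical path to 48kHz stereo; the DSP resamples/downmixes */
        rate->min = rate->max = 48000;
        channels->min = channels->max = 2;
        return 0;
}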
One thing I should mention which I've been working on and which may (depending on the design of your hardware) be a better match for you is providing a mechanism for setting up DAI links and starting streams on them in kernel space based on DAPM routing, with DAPM propagating over these links automatically. This is mostly intended for representing things like basebands where you'll have a DAI link that's going to be running in one of a small number of configurations (often only one) connected to something else along the lines of:
CODEC < 8kHz mono > Baseband
so the idea is that, to the user, you can have a couple of pin switches (or whatever else can be represented) in the baseband to say that the baseband is passing audio, and then DAPM will be extended to figure out that it should start up the audio interfaces and configure their params without needing userspace to do it, just like it can if the link between the two were analogue.
Yes, I do have a use case in mind which requires the functionality you described, activating a back-end DAI link without a front-end DAI being active. DAPM seems to be the most appropriate approach.
On Thu, Mar 17, 2011 at 12:21:57AM -0700, Patrick Lai wrote:
Yes, I do have a use case in mind which requires the functionality you described, activating a back-end DAI link without a front-end DAI being active. DAPM seems to be the most appropriate approach.
OK, cool - I'll try to remember to CC you on the patches when I get them done. I need to get it done for .40 so should be soonish.
participants (3):
- Liam Girdwood
- Mark Brown
- Patrick Lai