Re: [alsa-devel] [Bug 1155202] [Intel DZ77SL-50K, Intel PantherPoint HDMI, Digital Out, HDMI] No sound at all
>This is doubtful. Here we see the behavior of the PCI controller, and
>there shouldn't be any difference due to CPU instructions.
>The differences in the code between azx_reset() and your code are:
>- Use of azx_writeb() and azx_writel()
>- Limited loop time for the GCTL_RESET bit check
I tried both things, and neither worked. So I tried adding a delay between the while loop and the reset writel in my code. A few experiments show that the delay between the reset state and the on state determines whether codec detection succeeds: only a delay of less than 100 us results in successful codec detection. Here is a table of my observations:
    usleep range (us)    result
    -----------------    ------
    5-10                 ok
    50-100               ok
    75-100               ok
    75-150               fail
    100-200              fail
    250-500              fail
    500-1000             fail
Since azx_enter_link_reset() and azx_exit_link_reset() have a lot of code in between (taking more than 100 us), it always results in failure.
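Roughly, what I tried looks like the sketch below (not my exact code, just an illustration; delay_min_us/delay_max_us stand for the usleep values in the table above, and the azx_* register helpers are the ones from hda_intel.c):

/* Sketch only: assert link reset, poll until the CRST bit reads back
 * as cleared, insert the delay under test, then deassert reset again.
 * The polling uses cpu_relax() so that the poll itself doesn't add
 * extra sleep time on top of the delay being measured.
 */
static int azx_reset_experiment(struct azx *chip,
                                unsigned long delay_min_us,
                                unsigned long delay_max_us)
{
        unsigned long timeout;

        /* enter link reset: clear GCTL.CRST */
        azx_writel(chip, GCTL, azx_readl(chip, GCTL) & ~ICH6_GCTL_RESET);
        timeout = jiffies + msecs_to_jiffies(100);
        while ((azx_readb(chip, GCTL) & ICH6_GCTL_RESET) &&
               time_before(jiffies, timeout))
                cpu_relax();

        /* the delay under test (the "usleep" column in the table) */
        usleep_range(delay_min_us, delay_max_us);

        /* exit link reset: set GCTL.CRST again */
        azx_writeb(chip, GCTL, azx_readb(chip, GCTL) | ICH6_GCTL_RESET);
        timeout = jiffies + msecs_to_jiffies(100);
        while (!azx_readb(chip, GCTL) && time_before(jiffies, timeout))
                cpu_relax();

        /* 0 if the controller left reset; codec presence is then
         * visible in STATESTS as usual */
        return (azx_readb(chip, GCTL) & ICH6_GCTL_RESET) ? 0 : -ETIMEDOUT;
}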
At 25 Jan 2014 05:56:25 -0000, Niraj Kulkarni wrote:
> >This is doubtful. Here we see the behavior of the PCI controller, and
Please fix your MUA not to send a half-baked HTML mail; it fills up with things like "&gt;" (instead of ">") etc.
> >there shouldn't be any difference due to CPU instructions.
> >The differences in the code between azx_reset() and your code are:
> >- Use of azx_writeb() and azx_writel()
> >- Limited loop time for the GCTL_RESET bit check
> I tried both things, and neither worked. So I tried adding a delay between the while loop and the reset writel in my code. A few experiments show that the delay between the reset state and the on state determines whether codec detection succeeds: only a delay of less than 100 us results in successful codec detection. Here is a table of my observations:
>     usleep range (us)    result
>     -----------------    ------
>     5-10                 ok
>     50-100               ok
>     75-100               ok
>     75-150               fail
>     100-200              fail
>     250-500              fail
>     500-1000             fail
Interesting. So this means that the controller really provides only the minimum 100 us of BCLK for deasserting RST# and nothing more than that. Reading the spec again, it mentions that the controller assures the minimum 100 us but says nothing about the maximum. So this behavior isn't wrong, indeed.
> Since azx_enter_link_reset() and azx_exit_link_reset() have a lot of code in between (taking more than 100 us), it always results in failure.
Yes, this explains why. The current code was written based on the old spec behavior. Maybe we need to rewrite the code like below:
-- 8< --
--- a/sound/pci/hda/hda_intel.c
+++ b/sound/pci/hda/hda_intel.c
@@ -1148,7 +1148,7 @@ static void azx_enter_link_reset(struct azx *chip)
 	timeout = jiffies + msecs_to_jiffies(100);
 	while ((azx_readb(chip, GCTL) & ICH6_GCTL_RESET) &&
 			time_before(jiffies, timeout))
-		usleep_range(500, 1000);
+		cpu_relax();
 }
 
 /* exit link reset */
@@ -1161,7 +1161,7 @@ static void azx_exit_link_reset(struct azx *chip)
 	timeout = jiffies + msecs_to_jiffies(100);
 	while (!azx_readb(chip, GCTL) &&
 			time_before(jiffies, timeout))
-		usleep_range(500, 1000);
+		cpu_relax();
 }
 
 /* reset codec link */
@@ -1176,10 +1176,8 @@ static int azx_reset(struct azx *chip, int full_reset)
 	/* reset controller */
 	azx_enter_link_reset(chip);
 
-	/* delay for >= 100us for codec PLL to settle per spec
-	 * Rev 0.9 section 5.5.1
-	 */
-	usleep_range(500, 1000);
+	/* 4 BCLK edges minimum after RST# assert */
+	udelay(2);
 
 	/* Bring controller out of reset */
 	azx_exit_link_reset(chip);
-- 8< --
We'd still need tests on various chips for this change, of course...
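For such testing, something like the instrumentation below (a rough sketch only, not part of the patch; azx_reset_timed() is just an illustrative name) would show how large the gap between entering and leaving link reset actually is on a given controller:

#include <linux/ktime.h>

/* Rough sketch for testing only: measure and print the time between
 * asserting and deasserting link reset, so that testers can report
 * the actual gap on their chips.
 */
static void azx_reset_timed(struct azx *chip)
{
        ktime_t start = ktime_get();

        azx_enter_link_reset(chip);
        udelay(2);              /* 4 BCLK edges minimum after RST# assert */
        azx_exit_link_reset(chip);

        dev_info(&chip->pci->dev, "link reset gap: %lld us\n",
                 (long long)ktime_us_delta(ktime_get(), start));
}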
Takashi