On Tue, Apr 26, 2016 at 01:01:05PM -0500, Andreas Dannenberg wrote:
On Tue, Apr 26, 2016 at 06:29:36PM +0100, Mark Brown wrote:
Is the device actually going to mess up if someone sends it something else, or is it just going to ignore the extra bits (given that it's doing autodetection anyway)?
Well, in any of the left-justified modes (which are the only ones the driver supports) the device takes in and processes as many bits as it can, given the clock and divider settings. Any extra bits provided simply get ignored, and the next sync happens on the frame-sync signal rather than by counting bits, so there is no downside. This was also confirmed by some bench testing I did, feeding in 32-bit-long frames per channel. It seems like a case of preferring tolerance over strictly enforcing the datasheet-advertised bit widths, so I will take the check code out.
OK, accepting extra bits is fine. You should set sig_bits in the DAI so userspace can see what's going on if it cares.
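For reference, a minimal sketch of what advertising sig_bits in a codec DAI driver could look like. The struct field is the sig_bits member of struct snd_soc_pcm_stream (see include/sound/soc-dai.h); all names, rates, and the 24-bit figure below are placeholders for illustration, not taken from this thread:

```c
#include <sound/soc.h>

/* Sketch only: advertise the number of significant bits so that
 * userspace can discover (via the hw_params SIG_BITS info) that,
 * say, only 24 of the 32 transmitted bits carry audio data.
 */
static struct snd_soc_dai_driver example_codec_dai = {
	.name = "example-hifi",
	.playback = {
		.stream_name	= "Playback",
		.channels_min	= 2,
		.channels_max	= 2,
		.rates		= SNDRV_PCM_RATE_8000_192000,
		.formats	= SNDRV_PCM_FMTBIT_S16_LE |
				  SNDRV_PCM_FMTBIT_S24_LE |
				  SNDRV_PCM_FMTBIT_S32_LE,
		.sig_bits	= 24,	/* device only resolves 24 bits */
	},
};
```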
Along these lines: earlier, as I was rummaging through the existing drivers looking for a solution to model after, I noticed that most(?) ASoC codec drivers don't have any kind of HW fault checking, at least not in the drivers I looked at. Not sure why this is, but given this discussion it seems like a general opportunity for improvement.
There are some with over-temperature handling (eg, wm8962), but it's relatively uncommon for observable protection features to be implemented in silicon, and even rarer for the interrupts to be hooked up (and hence useful to support in software) unless the device also has accessory detection. On older devices the required digital logic was often excessively expensive, and realistically only relatively high-power speaker drivers have much risk of something going wrong - things like headphone outputs or smaller speaker drivers get protection from their supplies collapsing well before the device is in any physical danger.
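Where a device does expose a fault interrupt, hooking it up in a codec driver could look roughly like the sketch below. The register address and bit names are purely illustrative placeholders, not from any specific device; see sound/soc/codecs/wm8962.c for a real in-tree example:

```c
#include <linux/interrupt.h>
#include <sound/soc.h>

#define EXAMPLE_REG_FAULT	0x08	/* hypothetical fault-status register */
#define EXAMPLE_FAULT_OTP	BIT(0)	/* over-temperature */
#define EXAMPLE_FAULT_OCP	BIT(1)	/* over-current */

/* Sketch only: threaded IRQ handler that reads the fault-status
 * register and reports which protection event fired.
 */
static irqreturn_t example_codec_irq(int irq, void *data)
{
	struct snd_soc_codec *codec = data;
	unsigned int status = snd_soc_read(codec, EXAMPLE_REG_FAULT);

	if (status & EXAMPLE_FAULT_OTP)
		dev_crit(codec->dev, "over-temperature fault\n");
	if (status & EXAMPLE_FAULT_OCP)
		dev_crit(codec->dev, "over-current fault\n");

	return status ? IRQ_HANDLED : IRQ_NONE;
}

/* Requested at probe time, e.g.:
 *	devm_request_threaded_irq(dev, irq, NULL, example_codec_irq,
 *				  IRQF_ONESHOT | IRQF_TRIGGER_LOW,
 *				  "example-codec", codec);
 */
```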