[alsa-devel] [TINYCOMPRESS][PATCH 1/1] compress: no need to set metadata before calling next_track
The metadata is mainly for MP3 gapless playback, since the MP3 audio stream does not contain enough information to enable gapless. Other audio formats do not necessarily require any additional metadata so we should allow next_track to be called without any metadata.
Signed-off-by: Zhao Weijia <weijia.zhao@capelabs.com>
Signed-off-by: Richard Fitzgerald <rf@opensource.wolfsonmicro.com>
---
 compress.c | 2 --
 1 files changed, 0 insertions(+), 2 deletions(-)
diff --git a/compress.c b/compress.c
index 15dfdb7..0c9ecd2 100644
--- a/compress.c
+++ b/compress.c
@@ -534,8 +534,6 @@ int compress_next_track(struct compress *compress)
 	if (!is_compress_running(compress))
 		return oops(compress, ENODEV, "device not ready");
 
-	if (!compress->gapless_metadata)
-		return oops(compress, EPERM, "metadata not set");
 	if (ioctl(compress->fd, SNDRV_COMPRESS_NEXT_TRACK))
 		return oops(compress, errno, "cannot set next track\n");
 	compress->next_track = 1;
On 2/26/14 8:29 AM, Richard Fitzgerald wrote:
> The metadata is mainly for MP3 gapless playback, since the MP3 audio stream does not contain enough information to enable gapless. Other audio formats do not necessarily require any additional metadata so we should allow next_track to be called without any metadata.
Metadata is required for both MP3 and AAC gapless playback. Can you clarify what 'other formats' you are referring to? And rather than removing the check that makes sense for these popular formats, why not send metadata to set the # of samples to skip to zero? Thanks, -Pierre
On Thu, Feb 27, 2014 at 10:07:38AM -0600, Pierre-Louis Bossart wrote:
> On 2/26/14 8:29 AM, Richard Fitzgerald wrote:
>> The metadata is mainly for MP3 gapless playback, since the MP3 audio stream does not contain enough information to enable gapless. Other audio formats do not necessarily require any additional metadata so we should allow next_track to be called without any metadata.
> Metadata is required for both MP3 and AAC gapless playback. Can you clarify what 'other formats' you are referring to? And rather than removing the check that makes sense for these popular formats, why not send metadata to set the # of samples to skip to zero? Thanks, -Pierre
Any format, or any use-case, where we don't need to send metadata. I said "other formats do not _necessarily_ require any additional metadata". I'm not saying that _no_ other format needs metadata, just that it's not something that's always going to be mandatory. Also you shouldn't think only in terms of gapless play: you can chain tracks together for other reasons than gapless (for example to make best use of the DSP buffering and allow maximum host sleep time across multiple tracks) and still want to be able to do partial drains to know when the DSP starts decoding the next track.
It would be possible to send a dummy metadata, but why send dummy ioctls to make the API work when we could just remove the check that isn't needed?
Also there's no reason why the kernel should be enforcing this restriction - the core ALSA state machine doesn't need the metadata so it should be left to the DSP driver and/or firmware to decide whether metadata is mandatory in the current situation.
> Any format, or any use-case, where we don't need to send metadata. I said "other formats do not _necessarily_ require any additional metadata". I'm not saying that _no_ other format needs metadata, just that it's not something that's always going to be mandatory. Also you shouldn't think only in terms of gapless play: you can chain tracks together for other reasons than gapless (for example to make best use of the DSP buffering and allow maximum host sleep time across multiple tracks) and still want to be able to do partial drains to know when the DSP starts decoding the next track.
My point was that it's way simpler to use regular playback if you don't need the gapless functionality. I don't buy the argument on power savings either: if the transition lasts 500ms with 3 min tracks, we are talking about optimizing a state that represents 0.27% of the AP activity.
> Also there's no reason why the kernel should be enforcing this restriction - the core ALSA state machine doesn't need the metadata so it should be left to the DSP driver and/or firmware to decide whether metadata is mandatory in the current situation.
The problem is that you removed checks at the kernel and tinycompress levels, essentially moving error management to the HAL and firmware. I would agree to the change at the kernel level but it makes sense to have a common approach in tinycompress to make the integration work lighter. If you truly want to be generic we should provide information at the codec level on whether gapless is supported and if there is a need for metadata - e.g. reclaim a reserved field from snd_codec_desc in compress_params.h, and do the check only if needed.
On Mon, Mar 03, 2014 at 08:57:10AM -0600, Pierre-Louis Bossart wrote:
>> Any format, or any use-case, where we don't need to send metadata. I said "other formats do not _necessarily_ require any additional metadata". I'm not saying that _no_ other format needs metadata, just that it's not something that's always going to be mandatory. Also you shouldn't think only in terms of gapless play: you can chain tracks together for other reasons than gapless (for example to make best use of the DSP buffering and allow maximum host sleep time across multiple tracks) and still want to be able to do partial drains to know when the DSP starts decoding the next track.
> My point was that it's way simpler to use regular playback if you don't need the gapless functionality. I don't buy the argument on power savings either: if the transition lasts 500ms with 3 min tracks, we are talking about optimizing a state that represents 0.27% of the AP activity.
>> Also there's no reason why the kernel should be enforcing this restriction - the core ALSA state machine doesn't need the metadata so it should be left to the DSP driver and/or firmware to decide whether metadata is mandatory in the current situation.
> The problem is that you removed checks at the kernel and tinycompress levels, essentially moving error management to the HAL and firmware. I would agree to the change at the kernel level but it makes sense to have a common approach in tinycompress to make the integration work lighter.
You're focussing too much on thinking about gapless. The compressed API isn't just about gapless. And it isn't just about Android either.
Let me put it this way. These checks are unnecessary and are currently breaking what I'm working on, where I want to chain the tracks together and use partial drain, but I haven't got any metadata to send. Working around these checks by sending dummy metadata makes no sense. If I have to send a dummy ioctl() that does nothing to make the API work, then the API is broken. A dummy metadata does nothing, so obviously it's not needed so there was no need to mandate it.
And having to switch between two different ways of playing audio makes no sense either - why can't I just implement one way of playing audio and if there isn't any metadata for a particular track I don't need to send any?
Also, I disagree that putting these checks in eases integration. It only eases integration if the checks are sensible and useful, if they are not then it makes integration more difficult (as in this case where it makes the integrator have to go to extra effort to send pointless dummy ioctls.) Checks should only go in at a framework level if they are preventing a case that would break things or is logically impossible. It's not impossible to play chained tracks to a DSP without any metadata.
> we should provide information at the codec level on whether gapless is supported
This is a bogus comment. Again, you are thinking only about gapless. This is an api for playing audio; gapless is one thing that you might want to do but not the only thing. I agree with you that we should provide this info about the codec, but this patch isn't about what the codec supports. Even if the codec supported gapless, that doesn't mean the framework should force us to send metadata even when the firmware doesn't need it.
> Let me put it this way. These checks are unnecessary and are currently breaking what I'm working on, where I want to chain the tracks together and use partial drain, but I haven't got any metadata to send. Working around these checks by sending dummy metadata makes no sense. If I have to send a dummy ioctl() that does nothing to make the API work, then the API is broken. A dummy metadata does nothing, so obviously it's not needed so there was no need to mandate it.
I don't disagree that the API is being stretched beyond what we envisioned when it was quickly extended for MP3/AAC. If we are going further than what was planned and tested with those two formats let's fix it since the assumptions we made may not carry over.
> And having to switch between two different ways of playing audio makes no sense either - why can't I just implement one way of playing audio and if there isn't any metadata for a particular track I don't need to send any?
Because there are cases where you need to go through the regular playback path to reinitialize things properly, see below.
> Also, I disagree that putting these checks in eases integration. It only eases integration if the checks are sensible and useful, if they are not then it makes integration more difficult (as in this case where it makes the integrator have to go to extra effort to send pointless dummy ioctls.) Checks should only go in at a framework level if they are preventing a case that would break things or is logically impossible. It's not impossible to play chained tracks to a DSP without any metadata.
Chaining is only possible for elementary streams, not in the general case if you may need to reinitialize the decoder based on header information. Chaining may or may not be possible as well if the format is only detected at run-time (e.g. AAC+ with SBR implicit signaling).
There is currently no way to know if the lowest levels support chained playback for a specific format and if they support or require metadata. The next_track/metadata scheme worked fine for MP3/AAC; if we extend the use of this API I suggest we fix some misses rather than taking shortcuts.
>> we should provide information at the codec level on whether gapless is supported
> This is a bogus comment. Again, you are thinking only about gapless. This is an api for playing audio; gapless is one thing that you might want to do but not the only thing. I agree with you that we should provide this info about the codec, but this patch isn't about what the codec supports. Even if the codec supported gapless, that doesn't mean the framework should force us to send metadata even when the firmware doesn't need it.
It is totally about what the decoder implementation supports and no I am not thinking about gapless only. There is currently nothing that tells you next_track is supported and if metadata is supported/needed. You are making assumptions for your specific implementation that may or may not be true in other cases. My suggestion is to add flags, e.g.
#define COMPRESS_SUPPORTS_NEXT_TRACK             (1<<0)
#define COMPRESS_SUPPORTS_METADATA               (1<<1)
#define COMPRESS_REQUIRES_METADATA_ON_NEXT_TRACK (1<<2)
and do the relevant tests in tinycompress. That way we have a consistent behavior and you don't have to send information that isn't needed. -Pierre
On Mon, Mar 03, 2014 at 01:40:14PM -0600, Pierre-Louis Bossart wrote:
>> Let me put it this way. These checks are unnecessary and are currently breaking what I'm working on, where I want to chain the tracks together and use partial drain, but I haven't got any metadata to send. Working around these checks by sending dummy metadata makes no sense. If I have to send a dummy ioctl() that does nothing to make the API work, then the API is broken. A dummy metadata does nothing, so obviously it's not needed so there was no need to mandate it.
> I don't disagree that the API is being stretched beyond what we envisioned when it was quickly extended for MP3/AAC. If we are going further than what was planned and tested with those two formats let's fix it since the assumptions we made may not carry over.
By "we" you mean Intel? I did consider cases other than what we were working on at the time, which is why I asked for next_track and partial_drain to be separate functions though they weren't in Android - next_track is a legitimate operation but partial_drain was clearly a workaround for limitations elsewhere. Unfortunately I missed the review of the gapless stuff (probably I was on vacation).
>> And having to switch between two different ways of playing audio makes no sense either - why can't I just implement one way of playing audio and if there isn't any metadata for a particular track I don't need to send any?
> Because there are cases where you need to go through the regular playback path to reinitialize things properly, see below.
"some cases" != "all cases". The generic layer should not impose an error check on all cases if it's only relevant to some cases.
>> Also, I disagree that putting these checks in eases integration. It only eases integration if the checks are sensible and useful, if they are not then it makes integration more difficult (as in this case where it makes the integrator have to go to extra effort to send pointless dummy ioctls.) Checks should only go in at a framework level if they are preventing a case that would break things or is logically impossible. It's not impossible to play chained tracks to a DSP without any metadata.
> Chaining is only possible for elementary streams, not in the general case if you may need to reinitialize the decoder based on header information. Chaining may or may not be possible as well if the format is only detected at run-time (e.g. AAC+ with SBR implicit signaling).
"some cases" != "all cases". See above.
> There is currently no way to know if the lowest levels support chained playback for a specific format and if they support or require metadata. The next_track/metadata scheme worked fine for MP3/AAC; if we extend the use of this API I suggest we fix some misses rather than taking shortcuts.
I agree, but I don't see anyone proposing shortcuts.
>>> we should provide information at the codec level on whether gapless is supported
>> This is a bogus comment. Again, you are thinking only about gapless. This is an api for playing audio; gapless is one thing that you might want to do but not the only thing. I agree with you that we should provide this info about the codec, but this patch isn't about what the codec supports. Even if the codec supported gapless, that doesn't mean the framework should force us to send metadata even when the firmware doesn't need it.
> It is totally about what the decoder implementation supports and no I am not thinking about gapless only. There is currently nothing that tells you next_track is supported and if metadata is supported/needed.
> You are making assumptions for your specific implementation that may or may not be true in other cases.
This is a ridiculous comment. I'm proposing that an error check which assumes that the specific case it was relevant for applies to all implementations should be removed - how can that be described as "making assumptions for your specific implementation"? The whole point is that the code _currently_ makes an assumption based on the specific implementation it was written for, but it shouldn't.
> My suggestion is to add flags, e.g.
> #define COMPRESS_SUPPORTS_NEXT_TRACK             (1<<0)
> #define COMPRESS_SUPPORTS_METADATA               (1<<1)
> #define COMPRESS_REQUIRES_METADATA_ON_NEXT_TRACK (1<<2)
> That way we have a consistent behavior and you don't have to send information that isn't needed. -Pierre
These sort of additions should be considered carefully this time and not written around what's relevant to the two or three cases that we happen to be working on right now.
> and do the relevant tests in tinycompress.
The kernel must do the relevant tests. I wonder why we need to check everything twice - once in tinycompress and then again in the kernel. Why does tinycompress need to make an assumption about what the kernel errors checks are and pre-check them? If tinycompress has internal state that needs to be kept correct then fine, it needs to check to protect itself, but if it's just passing the request on to the kernel, the kernel will reject it if it's invalid.
If you think about the current behaviour, it enforces that you must send some metadata, but it doesn't check what that metadata is. You can send totally the wrong metadata and it will be happy but that's no use to the codec. So, either the lower layers have to check that they get the metadata they need - which makes the framework check redundant. Or the codec needs to be robust against not receiving the metadata it needs - which makes the framework check unnecessary.
Also the current behaviour is inconsistent: like you say, the framework doesn't currently know what the codec actually supports. But in some cases (pause, metadata, next_track, partial_drain) it assumes the codec supports the operation and allows it. In other cases (next_track without metadata, partial_drain without next_track) it assumes the codec doesn't support it and denies it.
The test isn't even consistent with the case it was designed for - it just denies next_track without metadata but there's no reason why the codec should crash and burn if it gets another MP3 track without some gapless metadata.
> It is totally about what the decoder implementation supports and no I am not thinking about gapless only. There is currently nothing that tells you next_track is supported and if metadata is supported/needed. You are making assumptions for your specific implementation that may or may not be true in other cases. My suggestion is to add flags, e.g.
> #define COMPRESS_SUPPORTS_NEXT_TRACK             (1<<0)
> #define COMPRESS_SUPPORTS_METADATA               (1<<1)
> #define COMPRESS_REQUIRES_METADATA_ON_NEXT_TRACK (1<<2)
> and do the relevant tests in tinycompress. That way we have a consistent behavior and you don't have to send information that isn't needed.
However, it's not that simple for a number of reasons.
1. There's the problem mentioned above: there's not much point verifying that you're sending just any metadata; it's got to be the right metadata. But you can't reasonably check every possible metadata for every possible codec for every possible use case in the generic framework, or encode them all in flags. And so if the codec does need to check for the correct metadata it's going to have to check its specific case itself.
2. The required behaviour is not necessarily a fixed feature of the codec that can be defined by a static 'feature flag'. Whether you need a particular sequence or behaviour might depend on what the user application (or human user) was expecting rather than any requirements of the codec. Take MP3 - metadata before next track is only mandatory if you wanted gapless play; if you don't need gapless you can do without. Of course it's not all about the metadata-before-next_track case, I'm just using that as an example.
3. There's many possible combinations that might apply to different codecs. Using the metadata example again, does the codec need metadata before any audio data, or just before next_track? Do I need to send any metadata before pause? Is metadata needed for every track? Are there one-off metadatas that are sent before any tracks? How many metadatas are needed? Does any of this depend on what's in the data stream (like metadata needed for certain sub-types and not others)?
Even operations we've assumed are always there could be optional. If you're not implementing a music player you might not need pause, or drain, or partial_drain.
So there are a lot of possible combinations of things that need checking and I think we need to be selective about which can sensibly be checked in the generic framework and which of them either need to be checked by the codec, or are too variable to have a simple if (!x) {return -EINVAL;} check.
participants (2):
- Pierre-Louis Bossart
- Richard Fitzgerald