Re: [alsa-devel] [PATCH 5/6] compress: add the core file
At Mon, 28 Nov 2011 15:36:15 -0600, Pierre-Louis Bossart wrote:
implementation? At least, the term "frame" is already used in ALSA PCM, and I'm not sure whether you use this term for the very same meaning in the above context...
Most compressed formats have a notion of a frame, but this is indeed a different notion. In ALSA, a frame is really a sampling point, possibly with multiple channels. Compression algorithms group sampling points into frames, or blocks, before applying a transform and quantizing. AAC works with 1024 sampling points, MPEG Layer 3 with 2 granules of 576 points, AMR with 160 points, etc. We may want to use terms like 'blocks' or 'chunks' if this helps avoid confusion with existing ALSA concepts.
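To make the numbers above concrete, here is a small illustrative table (just an example for this discussion, not part of the proposed API) of how many sampling points one compressed block covers for the codecs mentioned:

/* Illustration only: typical sampling points per compressed block
 * ("frame" in codec terminology) for the codecs mentioned above. */
struct codec_block_size {
	const char *codec;
	unsigned int samples_per_block;
};

static const struct codec_block_size block_sizes[] = {
	{ "AAC-LC", 1024 },       /* one AAC frame = 1024 sampling points */
	{ "MP3",    2 * 576 },    /* MPEG Layer 3: two granules of 576 points */
	{ "AMR-NB", 160 },        /* one 20 ms speech frame at 8 kHz */
};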
Yes, it's better to avoid the conflicting definition, IMO.
For capture, the application may need to get data on a frame basis (think of video recording with encoded video, where the application needs the compressed audio data on a "frame" basis for encapsulation). The DSP is supposed to call back after every encoded frame. I think the only difference between PCM and this is the encoding format; otherwise, in terms of decoded data and time, they would mean the same.. I am not an expert here, so I may be wrong.
For capture, in some cases the compressed bitstream doesn't provide any pointers to the beginning of a block, nor any block-length indication (e.g. AAC-RAW). In that case, an encoder would need to pass data to user space on a block-by-block basis, with the bytes available in the ring buffer corresponding to the block length. If the application can extract 'blocks' on its own and handle the relevant file-write/multiplexing, the usual data passing with regular events is fine.
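As a rough sketch of that block-by-block capture case (the helpers compr_wait_for_block(), compr_read() and mux_write_frame() are hypothetical names used for illustration, not the interface proposed in this patch):

#include <stddef.h>
#include <sys/types.h>

/* Hypothetical helpers, for illustration only. */
extern size_t compr_wait_for_block(void *stream);              /* length of next encoded block, 0 on stop */
extern ssize_t compr_read(void *stream, void *buf, size_t n);  /* reads exactly one block's worth of bytes */
extern void mux_write_frame(void *muxer, const void *buf, size_t n);

static void capture_loop(void *stream, void *muxer)
{
	char buf[8192];

	for (;;) {
		size_t block_len = compr_wait_for_block(stream);
		if (block_len == 0 || block_len > sizeof(buf))
			break;

		/* Bytes available in the ring buffer equal the block length,
		 * so one read hands the application one complete encoded frame. */
		ssize_t n = compr_read(stream, buf, block_len);
		if (n <= 0)
			break;

		/* Per-frame encapsulation, e.g. audio track of an A/V recording. */
		mux_write_frame(muxer, buf, (size_t)n);
	}
}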
For playback, the decoder is expected to deal with such situations on its own and find the block boundaries, meaning at the application level we can just push bytes down to the decoder without worrying.
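A correspondingly simple playback sketch, again with a hypothetical compr_write() standing in for the actual write path, to show that the host just forwards bytes and never looks for block boundaries:

#include <unistd.h>

extern ssize_t compr_write(void *stream, const void *buf, size_t n); /* hypothetical, illustration only */

static void playback_loop(void *stream, int fd)
{
	char buf[4096];
	ssize_t n;

	/* No host-side parsing: whatever read() returns is pushed to the
	 * decoder, which locates the block boundaries on its own. */
	while ((n = read(fd, buf, sizeof(buf))) > 0) {
		if (compr_write(stream, buf, (size_t)n) < 0)
			break;
	}
}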
But is this restriction guaranteed to be applicable to all possible hardware in the future? What happens if you get hardware that doesn't support byte-unit pushes for playback? That said, I see no obvious reason to impose a restriction coupled with the stream direction.
thanks,
Takashi
For playback, the decoder is expected to deal with such situations on its own and find the block boundaries, meaning at the application level we can just push bytes down to the decoder without worrying.
But is this restriction guaranteed to be applicable to all possible hardware in the future? What happens if you get hardware that doesn't support byte-unit pushes for playback? That said, I see no obvious reason to impose a restriction coupled with the stream direction.
Power consumption will be better optimized if we push bytes without looking for block boundaries on the host. But if your hardware requires parsing on the host, there's no impact on the API. You'd just write block-by-block and specify the block length instead of a fixed-size value. -Pierre
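For the case where host-side parsing is required, the only change in such a sketch would be feeding one block per write and using the parsed block length instead of a fixed chunk size (parser_next_block() is again a hypothetical helper):

#include <stddef.h>
#include <sys/types.h>

extern int parser_next_block(void *parser, const void **blk, size_t *blk_len); /* hypothetical host-side parser */
extern ssize_t compr_write(void *stream, const void *buf, size_t n);           /* hypothetical, as above */

static void playback_parsed(void *stream, void *parser)
{
	const void *blk;
	size_t blk_len;

	/* The host finds each block boundary and writes exactly one block,
	 * passing its length instead of a fixed-size value. */
	while (parser_next_block(parser, &blk, &blk_len) == 0) {
		if (compr_write(stream, blk, blk_len) < 0)
			break;
	}
}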
participants (2)
- Pierre-Louis Bossart
- Takashi Iwai