[alsa-devel] [PATCH 5/6] compress: add the core file
Pierre-Louis Bossart
pierre-louis.bossart at linux.intel.com
Mon Nov 28 22:36:15 CET 2011
> > implementation? At least, the term "frame" is already used in ALSA
> > PCM, and I'm not sure whether you use this term for the very same
> > meaning in the above context...
Most compressed formats have a notion of a frame, but it is indeed a
different notion. In ALSA a frame is really a single sampling point, possibly
with multiple channels. Compression algorithms group sampling points into
frames, or blocks, before applying a transform and quantizing: AAC works with
1024 sampling points, MPEG Layer3 with 2 granules of 576 points, AMR with 160
points, etc.
We may want to use terms like 'blocks' or 'chunks' if this helps avoid
confusion with existing ALSA concepts.
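For illustration only, here is a minimal C sketch that just spells out the
block sizes listed above; the struct and table names are made up for this
example and are not part of the proposed API.

#include <stdio.h>

struct codec_block {
	const char *name;
	unsigned int samples_per_block;	/* PCM sampling points per channel */
};

static const struct codec_block blocks[] = {
	{ "AAC-LC",      1024 },	/* one AAC frame */
	{ "MPEG Layer3", 1152 },	/* 2 granules x 576 points */
	{ "AMR-NB",       160 },	/* one 20 ms frame at 8 kHz */
};

int main(void)
{
	for (size_t i = 0; i < sizeof(blocks) / sizeof(blocks[0]); i++)
		printf("%-12s groups %u sampling points per block\n",
		       blocks[i].name, blocks[i].samples_per_block);
	return 0;
}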
> For capture, the application may need to get data on a frame basis
> (think of video recording with encoded audio/video, where the
> application needs the compressed audio data on a "frame" basis for
> encapsulation), so the DSP is supposed to call back after every encoded
> frame. I think the only difference between PCM and this is the encoding
> format; otherwise, in terms of decoded data and time they would mean
> the same. I am not an expert here so I may be wrong.
For capture, in some cases the compressed bitstream doesn't provide any
pointers to the beginning of a block, nor any block-length indication (e.g.
AAC-RAW). In that case, an encoder would need to pass data to user space on
a block-by-block basis, with the bytes available in the ring buffer
corresponding to the block length. If applications can extract 'blocks'
on their own and handle the relevant file-write/multiplexing, the usual
data passing with regular events is fine.
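As a sketch of that first case (one encoded block made available at a time),
the snippet below uses made-up helpers (cblock_avail(), cblock_read(),
mux_write()) rather than any real ALSA or library API; the point is only
that the reported available byte count is assumed to equal the encoded block
length.

#include <stdint.h>
#include <stdlib.h>
#include <sys/types.h>

/* Hypothetical: bytes currently readable; assumed to be exactly one block. */
extern size_t cblock_avail(void *stream);
/* Hypothetical: copy 'len' bytes of encoded data into 'buf'. */
extern ssize_t cblock_read(void *stream, void *buf, size_t len);
/* Hypothetical: hand one encoded block to the muxer for encapsulation. */
extern int mux_write(void *mux, const void *block, size_t len);

int capture_one_block(void *stream, void *mux)
{
	size_t len = cblock_avail(stream);	/* == encoded block length */
	uint8_t *block;
	ssize_t got;

	if (!len)
		return 0;	/* nothing encoded yet */

	block = malloc(len);
	if (!block)
		return -1;

	got = cblock_read(stream, block, len);
	if (got == (ssize_t)len)
		mux_write(mux, block, len);	/* one "frame" per write */

	free(block);
	return got == (ssize_t)len ? 1 : -1;
}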
For playback, the decoder is expected to deal with such situations on its
own and find the block boundaries, meaning at the application level we can
just push bytes down to the decoder without worrying.
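The playback side could then look roughly like this, again with a made-up
cstream_write() helper: the application streams arbitrary byte-sized chunks
and leaves block-boundary detection entirely to the decoder.

#include <stdio.h>
#include <sys/types.h>

/* Hypothetical: push 'len' bytes of compressed data to the decoder. */
extern ssize_t cstream_write(void *stream, const void *buf, size_t len);

int play_file(void *stream, FILE *f)
{
	char buf[4096];		/* arbitrary chunk size, not block-aligned */
	size_t n;

	while ((n = fread(buf, 1, sizeof(buf), f)) > 0) {
		if (cstream_write(stream, buf, n) < 0)
			return -1;	/* decoder locates frame boundaries itself */
	}
	return 0;
}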
-Pierre