[PATCH v15 00/16] Add audio support in v4l2 framework
Audio signal processing, like video, also has a requirement for memory-to-memory operation.
The ASRC memory-to-memory (memory->asrc->memory) case is a non-real-time use case.
The user fills the input buffer for the ASRC module and, after conversion, the ASRC returns the output buffer to the user. So it is not a traditional ALSA playback or capture case.
It is a specific use case with no existing reference in the current kernel. The V4L2 memory-to-memory framework is the closest implementation; V4L2 currently supports video, image, radio, tuner and touch devices, so it is not complicated to add support for this specific audio case.
We have already implemented the "memory->asrc->i2s device->codec" use case in ALSA. Now the "memory->asrc->memory" case needs to reuse the code in the ASRC driver, so the first three patches refine the code so that it can be shared by the "memory->asrc->memory" driver.
The main change is on the V4L2 side: a /dev/v4l-audioX device will be created, and user applications only use the ioctls of the V4L2 framework.
The other change is to add memory-to-memory support for two kinds of i.MX ASRC modules.
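To make the intended usage concrete, here is a rough user-space sketch (not part of the series): error handling is omitted and the sample-format macro name (V4L2_AUDIO_FMT_S16_LE) is only assumed from the fourcc definitions added later in this set.

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Illustrative only: negotiate formats on both queues of /dev/v4l-audioX. */
static int audio_m2m_setup(const char *node)
{
	int fd = open(node, O_RDWR);
	struct v4l2_format out = { .type = V4L2_BUF_TYPE_AUDIO_OUTPUT };
	struct v4l2_format cap = { .type = V4L2_BUF_TYPE_AUDIO_CAPTURE };

	/* Data the application feeds to the ASRC (OUTPUT queue). */
	out.fmt.audio.audioformat = V4L2_AUDIO_FMT_S16_LE;	/* assumed fourcc name */
	out.fmt.audio.channels = 2;
	ioctl(fd, VIDIOC_S_FMT, &out);

	/* Converted data read back from the ASRC (CAPTURE queue). */
	cap.fmt.audio.audioformat = V4L2_AUDIO_FMT_S16_LE;
	cap.fmt.audio.channels = 2;
	ioctl(fd, VIDIOC_S_FMT, &cap);

	/*
	 * Source/destination sample rates are configured through the audio
	 * m2m rate controls; then the usual VIDIOC_REQBUFS, VIDIOC_QBUF and
	 * VIDIOC_STREAMON calls on both queues start the conversion.
	 */
	return fd;
}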
changes in v15:
- update MAINTAINERS for imx-asrc.c and vim2m-audio.c

changes in v14:
- document the reservation of the 'AUXX' fourcc format.
- add v4l2_audfmt_to_fourcc() definition.

changes in v13:
- change 'pixelformat' to 'audioformat' in dev-audio-mem2mem.rst
- add more description for clock drift in ext-ctrls-audio-m2m.rst
- add "media: v4l2-ctrls: add support for fraction_bits" from Hans to avoid a build issue reported by the kernel test robot

changes in v12:
- minor changes according to comments
- drop min_buffers_needed = 1 and the V4L2_CTRL_FLAG_UPDATE flag
- drop bus_info

changes in v11:
- add fixed-point test controls in vivid.
- add v4l2_ctrl_fp_compose() helper function for min and max

changes in v10:
- remove FIXED_POINT type
- change the code to be based on "media: v4l2-ctrls: add support for fraction_bits"
- fix the issue reported by the kernel test robot
- remove module_alias

changes in v9:
- add MEDIA_ENT_F_PROC_AUDIO_RESAMPLER.
- add MEDIA_INTF_T_V4L_AUDIO
- add media controller support
- refine vim2m-audio to support 8k<->16k conversion.

changes in v8:
- refine V4L2_CAP_AUDIO_M2M to be 0x00000008
- update doc for FIXED_POINT
- address comments for imx-asrc

changes in v7:
- add acked-by from Mark
- separate commits for fixed point, the m2m audio class, and the audio rate controls
- use INTEGER_MENU for rate, FIXED_POINT for rate offset
- remove unused fmts
- address other comments from Hans

changes in v6:
- use m2m_prepare/m2m_unprepare/m2m_start/m2m_stop to replace m2m_start_part_one/m2m_stop_part_one and m2m_start_part_two/m2m_stop_part_two.
- change V4L2_CTRL_TYPE_ASRC_RATE to V4L2_CTRL_TYPE_FIXED_POINT
- fix warnings reported by the kernel test robot
- remove some unused V4L2_AUDIO_FMT_XX formats
- get SNDRV_PCM_FORMAT from V4L2_AUDIO_FMT in the driver.
- rename audm2m to viaudm2m.

changes in v5:
- remove V4L2_AUDIO_FMT_LPCM
- define audio pixel formats like V4L2_AUDIO_FMT_S8...
- remove rate and format from struct v4l2_audio_format.
- add V4L2_CID_ASRC_SOURCE_RATE and V4L2_CID_ASRC_DEST_RATE controls
- update the documentation accordingly.

changes in v4:
- update document style
- put V4L2_AUDIO_FMT_LPCM and V4L2_CAP_AUDIO_M2M in separate commits

changes in v3:
- modify documents for adding audio m2m support
- add an audio virtual m2m driver
- define the V4L2_AUDIO_FMT_LPCM format type for audio.
- define the V4L2_CAP_AUDIO_M2M capability type for the audio m2m case.
- pass the v4l2-compliance test with modifications in v4l-utils.

changes in v2:
- decouple the implementation in v4l2 and ALSA
- implement the memory-to-memory driver as a platform driver and move it to drivers/media
- move fsl_asrc_common.h to the include/sound folder
Hans Verkuil (1):
  media: v4l2-ctrls: add support for fraction_bits

Shengjiu Wang (15):
  ASoC: fsl_asrc: define functions for memory to memory usage
  ASoC: fsl_easrc: define functions for memory to memory usage
  ASoC: fsl_asrc: move fsl_asrc_common.h to include/sound
  ASoC: fsl_asrc: register m2m platform device
  ASoC: fsl_easrc: register m2m platform device
  media: uapi: Add V4L2_CAP_AUDIO_M2M capability flag
  media: v4l2: Add audio capture and output support
  media: uapi: Define audio sample format fourcc type
  media: uapi: Add V4L2_CTRL_CLASS_M2M_AUDIO
  media: uapi: Add audio rate controls support
  media: uapi: Declare interface types for Audio
  media: uapi: Add an entity type for audio resampler
  media: vivid: add fixed point test controls
  media: imx-asrc: Add memory to memory driver
  media: vim2m-audio: add virtual driver for audio memory to memory
.../media/mediactl/media-types.rst | 11 + .../userspace-api/media/v4l/buffer.rst | 6 + .../userspace-api/media/v4l/common.rst | 1 + .../media/v4l/dev-audio-mem2mem.rst | 71 + .../userspace-api/media/v4l/devices.rst | 1 + .../media/v4l/ext-ctrls-audio-m2m.rst | 59 + .../userspace-api/media/v4l/pixfmt-audio.rst | 100 ++ .../userspace-api/media/v4l/pixfmt.rst | 1 + .../media/v4l/vidioc-enum-fmt.rst | 2 + .../media/v4l/vidioc-g-ext-ctrls.rst | 4 + .../userspace-api/media/v4l/vidioc-g-fmt.rst | 4 + .../media/v4l/vidioc-querycap.rst | 3 + .../media/v4l/vidioc-queryctrl.rst | 11 +- .../media/videodev2.h.rst.exceptions | 3 + MAINTAINERS | 17 + .../media/common/videobuf2/videobuf2-v4l2.c | 4 + drivers/media/platform/nxp/Kconfig | 13 + drivers/media/platform/nxp/Makefile | 1 + drivers/media/platform/nxp/imx-asrc.c | 1256 +++++++++++++++++ drivers/media/test-drivers/Kconfig | 10 + drivers/media/test-drivers/Makefile | 1 + drivers/media/test-drivers/vim2m-audio.c | 793 +++++++++++ drivers/media/test-drivers/vivid/vivid-core.h | 2 + .../media/test-drivers/vivid/vivid-ctrls.c | 26 + drivers/media/v4l2-core/v4l2-compat-ioctl32.c | 9 + drivers/media/v4l2-core/v4l2-ctrls-api.c | 1 + drivers/media/v4l2-core/v4l2-ctrls-core.c | 93 +- drivers/media/v4l2-core/v4l2-ctrls-defs.c | 10 + drivers/media/v4l2-core/v4l2-dev.c | 21 + drivers/media/v4l2-core/v4l2-ioctl.c | 66 + drivers/media/v4l2-core/v4l2-mem2mem.c | 13 +- include/media/v4l2-ctrls.h | 13 +- include/media/v4l2-dev.h | 2 + include/media/v4l2-ioctl.h | 34 + .../fsl => include/sound}/fsl_asrc_common.h | 60 + include/uapi/linux/media.h | 2 + include/uapi/linux/v4l2-controls.h | 9 + include/uapi/linux/videodev2.h | 50 +- sound/soc/fsl/fsl_asrc.c | 144 ++ sound/soc/fsl/fsl_asrc.h | 4 +- sound/soc/fsl/fsl_asrc_dma.c | 2 +- sound/soc/fsl/fsl_easrc.c | 233 +++ sound/soc/fsl/fsl_easrc.h | 6 +- 43 files changed, 3145 insertions(+), 27 deletions(-) create mode 100644 Documentation/userspace-api/media/v4l/dev-audio-mem2mem.rst create mode 100644 Documentation/userspace-api/media/v4l/ext-ctrls-audio-m2m.rst create mode 100644 Documentation/userspace-api/media/v4l/pixfmt-audio.rst create mode 100644 drivers/media/platform/nxp/imx-asrc.c create mode 100644 drivers/media/test-drivers/vim2m-audio.c rename {sound/soc/fsl => include/sound}/fsl_asrc_common.h (60%)
From: Hans Verkuil <hverkuil@xs4all.nl>
This adds support for the fraction_bits field, used with integer controls. This allows fixed point formats to be described.
The fraction_bits field is only exposed through VIDIOC_QUERY_EXT_CTRL.
For a given signed two's complement Qf fixed point value, 'f' equals fraction_bits.
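As a quick illustration (not part of the patch): with fraction_bits set to 16, the raw control value 98304 represents 98304 / 2^16 = 1.5.

	/* Hypothetical user-space decoding of a control with fraction_bits = 16. */
	long long raw = 98304;			/* value from VIDIOC_G_EXT_CTRLS */
	unsigned int fraction_bits = 16;	/* from struct v4l2_query_ext_ctrl */
	double value = (double)raw / (double)(1ULL << fraction_bits);	/* 1.5 */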
Signed-off-by: Hans Verkuil hverkuil-cisco@xs4all.nl --- .../media/v4l/vidioc-queryctrl.rst | 11 ++- drivers/media/v4l2-core/v4l2-ctrls-api.c | 1 + drivers/media/v4l2-core/v4l2-ctrls-core.c | 93 +++++++++++++++---- include/media/v4l2-ctrls.h | 7 +- include/uapi/linux/videodev2.h | 3 +- 5 files changed, 95 insertions(+), 20 deletions(-)
diff --git a/Documentation/userspace-api/media/v4l/vidioc-queryctrl.rst b/Documentation/userspace-api/media/v4l/vidioc-queryctrl.rst index 4d38acafe8e1..e65c7e5d78ec 100644 --- a/Documentation/userspace-api/media/v4l/vidioc-queryctrl.rst +++ b/Documentation/userspace-api/media/v4l/vidioc-queryctrl.rst @@ -267,7 +267,16 @@ See also the examples in :ref:`control`. - The size of each dimension. The first ``nr_of_dims`` elements of this array must be non-zero, all remaining elements must be zero. * - __u32 - - ``reserved``\ [32] + - ``fraction_bits`` + - The number of least significant bits of the control value that + form the fraction of the fixed point value. This is 0 if the value + is a regular integer. This can be used with all integer control types + (``INTEGER``, ``INTEGER64``, ``U8``, ``U16`` and ``U32``). + For the signed types the signed two's complement representation is used. + This field applies to the control value as well as the ``minimum``, + ``maximum``, ``step`` and ``default_value`` fields. + * - __u32 + - ``reserved``\ [31] - Reserved for future extensions. Applications and drivers must set the array to zero.
diff --git a/drivers/media/v4l2-core/v4l2-ctrls-api.c b/drivers/media/v4l2-core/v4l2-ctrls-api.c index d9a422017bd9..ef16b00421ec 100644 --- a/drivers/media/v4l2-core/v4l2-ctrls-api.c +++ b/drivers/media/v4l2-core/v4l2-ctrls-api.c @@ -1101,6 +1101,7 @@ int v4l2_query_ext_ctrl(struct v4l2_ctrl_handler *hdl, struct v4l2_query_ext_ctr qc->elems = ctrl->elems; qc->nr_of_dims = ctrl->nr_of_dims; memcpy(qc->dims, ctrl->dims, qc->nr_of_dims * sizeof(qc->dims[0])); + qc->fraction_bits = ctrl->fraction_bits; qc->minimum = ctrl->minimum; qc->maximum = ctrl->maximum; qc->default_value = ctrl->default_value; diff --git a/drivers/media/v4l2-core/v4l2-ctrls-core.c b/drivers/media/v4l2-core/v4l2-ctrls-core.c index c4d995f32191..d83a37198bb5 100644 --- a/drivers/media/v4l2-core/v4l2-ctrls-core.c +++ b/drivers/media/v4l2-core/v4l2-ctrls-core.c @@ -252,12 +252,61 @@ void v4l2_ctrl_type_op_init(const struct v4l2_ctrl *ctrl, u32 from_idx, } EXPORT_SYMBOL(v4l2_ctrl_type_op_init);
+static void v4l2_ctrl_log_fp(s64 v, unsigned int fraction_bits) +{ + s64 i, f, mask; + + if (!fraction_bits) { + pr_cont("%lld", v); + return; + } + + mask = (1ULL << fraction_bits) - 1; + + /* + * Note: this function does not support fixed point u64 with + * fraction_bits set to 64. At the moment there is no U64 + * control type, but if that is added, then this code will have + * to add support for it. + */ + if (fraction_bits >= 63) + i = v < 0 ? -1 : 0; + else + i = div64_s64(v, 1LL << fraction_bits); + + f = v < 0 ? -((-v) & mask) : (v & mask); + + if (!f) { + pr_cont("%lld", i); + } else if (fraction_bits < 20) { + u64 div = 1ULL << fraction_bits; + + if (!i && f < 0) + pr_cont("-%lld/%llu", -f, div); + else if (!i) + pr_cont("%lld/%llu", f, div); + else if (i < 0 || f < 0) + pr_cont("-%lld-%llu/%llu", -i, -f, div); + else + pr_cont("%lld+%llu/%llu", i, f, div); + } else { + if (!i && f < 0) + pr_cont("-%lld/(2^%u)", -f, fraction_bits); + else if (!i) + pr_cont("%lld/(2^%u)", f, fraction_bits); + else if (i < 0 || f < 0) + pr_cont("-%lld-%llu/(2^%u)", -i, -f, fraction_bits); + else + pr_cont("%lld+%llu/(2^%u)", i, f, fraction_bits); + } +} + void v4l2_ctrl_type_op_log(const struct v4l2_ctrl *ctrl) { union v4l2_ctrl_ptr ptr = ctrl->p_cur;
if (ctrl->is_array) { - unsigned i; + unsigned int i;
for (i = 0; i < ctrl->nr_of_dims; i++) pr_cont("[%u]", ctrl->dims[i]); @@ -266,7 +315,7 @@ void v4l2_ctrl_type_op_log(const struct v4l2_ctrl *ctrl)
switch (ctrl->type) { case V4L2_CTRL_TYPE_INTEGER: - pr_cont("%d", *ptr.p_s32); + v4l2_ctrl_log_fp(*ptr.p_s32, ctrl->fraction_bits); break; case V4L2_CTRL_TYPE_BOOLEAN: pr_cont("%s", *ptr.p_s32 ? "true" : "false"); @@ -281,19 +330,19 @@ void v4l2_ctrl_type_op_log(const struct v4l2_ctrl *ctrl) pr_cont("0x%08x", *ptr.p_s32); break; case V4L2_CTRL_TYPE_INTEGER64: - pr_cont("%lld", *ptr.p_s64); + v4l2_ctrl_log_fp(*ptr.p_s64, ctrl->fraction_bits); break; case V4L2_CTRL_TYPE_STRING: pr_cont("%s", ptr.p_char); break; case V4L2_CTRL_TYPE_U8: - pr_cont("%u", (unsigned)*ptr.p_u8); + v4l2_ctrl_log_fp(*ptr.p_u8, ctrl->fraction_bits); break; case V4L2_CTRL_TYPE_U16: - pr_cont("%u", (unsigned)*ptr.p_u16); + v4l2_ctrl_log_fp(*ptr.p_u16, ctrl->fraction_bits); break; case V4L2_CTRL_TYPE_U32: - pr_cont("%u", (unsigned)*ptr.p_u32); + v4l2_ctrl_log_fp(*ptr.p_u32, ctrl->fraction_bits); break; case V4L2_CTRL_TYPE_H264_SPS: pr_cont("H264_SPS"); @@ -1753,11 +1802,12 @@ static struct v4l2_ctrl *v4l2_ctrl_new(struct v4l2_ctrl_handler *hdl, u32 id, const char *name, enum v4l2_ctrl_type type, s64 min, s64 max, u64 step, s64 def, const u32 dims[V4L2_CTRL_MAX_DIMS], u32 elem_size, - u32 flags, const char * const *qmenu, + u32 fraction_bits, u32 flags, const char * const *qmenu, const s64 *qmenu_int, const union v4l2_ctrl_ptr p_def, void *priv) { struct v4l2_ctrl *ctrl; + unsigned int max_fraction_bits = 0; unsigned sz_extra; unsigned nr_of_dims = 0; unsigned elems = 1; @@ -1779,20 +1829,28 @@ static struct v4l2_ctrl *v4l2_ctrl_new(struct v4l2_ctrl_handler *hdl,
/* Prefill elem_size for all types handled by std_type_ops */ switch ((u32)type) { + case V4L2_CTRL_TYPE_INTEGER: + elem_size = sizeof(s32); + max_fraction_bits = 31; + break; case V4L2_CTRL_TYPE_INTEGER64: elem_size = sizeof(s64); + max_fraction_bits = 63; break; case V4L2_CTRL_TYPE_STRING: elem_size = max + 1; break; case V4L2_CTRL_TYPE_U8: elem_size = sizeof(u8); + max_fraction_bits = 8; break; case V4L2_CTRL_TYPE_U16: elem_size = sizeof(u16); + max_fraction_bits = 16; break; case V4L2_CTRL_TYPE_U32: elem_size = sizeof(u32); + max_fraction_bits = 32; break; case V4L2_CTRL_TYPE_MPEG2_SEQUENCE: elem_size = sizeof(struct v4l2_ctrl_mpeg2_sequence); @@ -1876,10 +1934,10 @@ static struct v4l2_ctrl *v4l2_ctrl_new(struct v4l2_ctrl_handler *hdl, }
/* Sanity checks */ - if (id == 0 || name == NULL || !elem_size || - id >= V4L2_CID_PRIVATE_BASE || - (type == V4L2_CTRL_TYPE_MENU && qmenu == NULL) || - (type == V4L2_CTRL_TYPE_INTEGER_MENU && qmenu_int == NULL)) { + if (id == 0 || !name || !elem_size || + fraction_bits > max_fraction_bits || id >= V4L2_CID_PRIVATE_BASE || + (type == V4L2_CTRL_TYPE_MENU && !qmenu) || + (type == V4L2_CTRL_TYPE_INTEGER_MENU && !qmenu_int)) { handler_set_err(hdl, -ERANGE); return NULL; } @@ -1940,6 +1998,7 @@ static struct v4l2_ctrl *v4l2_ctrl_new(struct v4l2_ctrl_handler *hdl, ctrl->name = name; ctrl->type = type; ctrl->flags = flags; + ctrl->fraction_bits = fraction_bits; ctrl->minimum = min; ctrl->maximum = max; ctrl->step = step; @@ -2038,7 +2097,7 @@ struct v4l2_ctrl *v4l2_ctrl_new_custom(struct v4l2_ctrl_handler *hdl, ctrl = v4l2_ctrl_new(hdl, cfg->ops, cfg->type_ops, cfg->id, name, type, min, max, is_menu ? cfg->menu_skip_mask : step, def, - cfg->dims, cfg->elem_size, + cfg->dims, cfg->elem_size, cfg->fraction_bits, flags, qmenu, qmenu_int, cfg->p_def, priv); if (ctrl) ctrl->is_private = cfg->is_private; @@ -2063,7 +2122,7 @@ struct v4l2_ctrl *v4l2_ctrl_new_std(struct v4l2_ctrl_handler *hdl, return NULL; } return v4l2_ctrl_new(hdl, ops, NULL, id, name, type, - min, max, step, def, NULL, 0, + min, max, step, def, NULL, 0, 0, flags, NULL, NULL, ptr_null, NULL); } EXPORT_SYMBOL(v4l2_ctrl_new_std); @@ -2096,7 +2155,7 @@ struct v4l2_ctrl *v4l2_ctrl_new_std_menu(struct v4l2_ctrl_handler *hdl, return NULL; } return v4l2_ctrl_new(hdl, ops, NULL, id, name, type, - 0, max, mask, def, NULL, 0, + 0, max, mask, def, NULL, 0, 0, flags, qmenu, qmenu_int, ptr_null, NULL); } EXPORT_SYMBOL(v4l2_ctrl_new_std_menu); @@ -2128,7 +2187,7 @@ struct v4l2_ctrl *v4l2_ctrl_new_std_menu_items(struct v4l2_ctrl_handler *hdl, return NULL; } return v4l2_ctrl_new(hdl, ops, NULL, id, name, type, - 0, max, mask, def, NULL, 0, + 0, max, mask, def, NULL, 0, 0, flags, qmenu, NULL, ptr_null, NULL);
} @@ -2150,7 +2209,7 @@ struct v4l2_ctrl *v4l2_ctrl_new_std_compound(struct v4l2_ctrl_handler *hdl, return NULL; } return v4l2_ctrl_new(hdl, ops, NULL, id, name, type, - min, max, step, def, NULL, 0, + min, max, step, def, NULL, 0, 0, flags, NULL, NULL, p_def, NULL); } EXPORT_SYMBOL(v4l2_ctrl_new_std_compound); @@ -2174,7 +2233,7 @@ struct v4l2_ctrl *v4l2_ctrl_new_int_menu(struct v4l2_ctrl_handler *hdl, return NULL; } return v4l2_ctrl_new(hdl, ops, NULL, id, name, type, - 0, max, 0, def, NULL, 0, + 0, max, 0, def, NULL, 0, 0, flags, NULL, qmenu_int, ptr_null, NULL); } EXPORT_SYMBOL(v4l2_ctrl_new_int_menu); diff --git a/include/media/v4l2-ctrls.h b/include/media/v4l2-ctrls.h index 59679a42b3e7..c35514c5bf88 100644 --- a/include/media/v4l2-ctrls.h +++ b/include/media/v4l2-ctrls.h @@ -211,7 +211,8 @@ typedef void (*v4l2_ctrl_notify_fnc)(struct v4l2_ctrl *ctrl, void *priv); * except for dynamic arrays. In that case it is in the range of * 1 to @p_array_alloc_elems. * @dims: The size of each dimension. - * @nr_of_dims:The number of dimensions in @dims. + * @nr_of_dims: The number of dimensions in @dims. + * @fraction_bits: The number of fraction bits for fixed point values. * @menu_skip_mask: The control's skip mask for menu controls. This makes it * easy to skip menu items that are not valid. If bit X is set, * then menu item X is skipped. Of course, this only works for @@ -228,6 +229,7 @@ typedef void (*v4l2_ctrl_notify_fnc)(struct v4l2_ctrl *ctrl, void *priv); * :math:`ceil(\frac{maximum - minimum}{step}) + 1`. * Used only if the @type is %V4L2_CTRL_TYPE_INTEGER_MENU. * @flags: The control's flags. + * @fraction_bits: The number of fraction bits for fixed point values. * @priv: The control's private pointer. For use by the driver. It is * untouched by the control framework. Note that this pointer is * not freed when the control is deleted. Should this be needed @@ -286,6 +288,7 @@ struct v4l2_ctrl { u32 new_elems; u32 dims[V4L2_CTRL_MAX_DIMS]; u32 nr_of_dims; + u32 fraction_bits; union { u64 step; u64 menu_skip_mask; @@ -426,6 +429,7 @@ struct v4l2_ctrl_handler { * @dims: The size of each dimension. * @elem_size: The size in bytes of the control. * @flags: The control's flags. + * @fraction_bits: The number of fraction bits for fixed point values. * @menu_skip_mask: The control's skip mask for menu controls. This makes it * easy to skip menu items that are not valid. If bit X is set, * then menu item X is skipped. Of course, this only works for @@ -455,6 +459,7 @@ struct v4l2_ctrl_config { u32 dims[V4L2_CTRL_MAX_DIMS]; u32 elem_size; u32 flags; + u32 fraction_bits; u64 menu_skip_mask; const char * const *qmenu; const s64 *qmenu_int; diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h index a8015e5e7fa4..b8573e9ccde6 100644 --- a/include/uapi/linux/videodev2.h +++ b/include/uapi/linux/videodev2.h @@ -1947,7 +1947,8 @@ struct v4l2_query_ext_ctrl { __u32 elems; __u32 nr_of_dims; __u32 dims[V4L2_CTRL_MAX_DIMS]; - __u32 reserved[32]; + __u32 fraction_bits; + __u32 reserved[31]; };
/* Used in the VIDIOC_QUERYMENU ioctl for querying menu items */
ASRC can be used in the memory-to-memory case; define several functions for m2m usage.
m2m_prepare: prepare for the start step
m2m_start: the start step
m2m_unprepare: unprepare for the stop step, optional
m2m_stop: the stop step
m2m_check_format: check whether the format is supported
m2m_calc_out_len: calculate the output length according to the input length
m2m_get_maxburst: burst size for DMA
m2m_pair_suspend: suspend function of the pair, optional
m2m_pair_resume: resume function of the pair
get_output_fifo_size: get the remaining data size in the FIFO
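Roughly, the memory-to-memory media driver added later in this series is expected to drive these callbacks in the order sketched below; this is illustrative only and not taken from that driver.

#include "fsl_asrc_common.h"

/* Illustrative call order for one conversion on a pair. */
static int run_one_conversion(struct fsl_asrc *asrc, struct fsl_asrc_pair *pair,
			      int in_len)
{
	int out_len, ret;

	ret = asrc->m2m_prepare ? asrc->m2m_prepare(pair) : 0;
	if (ret)
		return ret;

	/* Size the capture buffer from the queued output buffer length. */
	out_len = asrc->m2m_calc_out_len(pair, in_len);

	ret = asrc->m2m_start(pair);	/* DMA then moves data through the FIFOs */
	if (ret)
		return ret;

	/* ... wait for DMA completion on both directions ... */

	asrc->m2m_stop(pair);
	if (asrc->m2m_unprepare)
		asrc->m2m_unprepare(pair);

	return out_len;
}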
Signed-off-by: Shengjiu Wang shengjiu.wang@nxp.com Acked-by: Mark Brown broonie@kernel.org --- sound/soc/fsl/fsl_asrc.c | 126 ++++++++++++++++++++++++++++++++ sound/soc/fsl/fsl_asrc.h | 2 + sound/soc/fsl/fsl_asrc_common.h | 37 ++++++++++ 3 files changed, 165 insertions(+)
diff --git a/sound/soc/fsl/fsl_asrc.c b/sound/soc/fsl/fsl_asrc.c index b793263291dc..7d8643ee0ba0 100644 --- a/sound/soc/fsl/fsl_asrc.c +++ b/sound/soc/fsl/fsl_asrc.c @@ -1063,6 +1063,124 @@ static int fsl_asrc_get_fifo_addr(u8 dir, enum asrc_pair_index index) return REG_ASRDx(dir, index); }
+/* Get sample numbers in FIFO */ +static unsigned int fsl_asrc_get_output_fifo_size(struct fsl_asrc_pair *pair) +{ + struct fsl_asrc *asrc = pair->asrc; + enum asrc_pair_index index = pair->index; + u32 val; + + regmap_read(asrc->regmap, REG_ASRFST(index), &val); + + val &= ASRFSTi_OUTPUT_FIFO_MASK; + + return val >> ASRFSTi_OUTPUT_FIFO_SHIFT; +} + +static int fsl_asrc_m2m_prepare(struct fsl_asrc_pair *pair) +{ + struct fsl_asrc_pair_priv *pair_priv = pair->private; + struct fsl_asrc *asrc = pair->asrc; + struct device *dev = &asrc->pdev->dev; + struct asrc_config config; + int ret; + + /* fill config */ + config.pair = pair->index; + config.channel_num = pair->channels; + config.input_sample_rate = pair->rate[IN]; + config.output_sample_rate = pair->rate[OUT]; + config.input_format = pair->sample_format[IN]; + config.output_format = pair->sample_format[OUT]; + config.inclk = INCLK_NONE; + config.outclk = OUTCLK_ASRCK1_CLK; + + pair_priv->config = &config; + ret = fsl_asrc_config_pair(pair, true); + if (ret) { + dev_err(dev, "failed to config pair: %d\n", ret); + return ret; + } + + pair->first_convert = 1; + + return 0; +} + +static int fsl_asrc_m2m_start(struct fsl_asrc_pair *pair) +{ + if (pair->first_convert) { + fsl_asrc_start_pair(pair); + pair->first_convert = 0; + } + /* + * Clear DMA request during the stall state of ASRC: + * During STALL state, the remaining in input fifo would never be + * smaller than the input threshold while the output fifo would not + * be bigger than output one. Thus the DMA request would be cleared. + */ + fsl_asrc_set_watermarks(pair, ASRC_FIFO_THRESHOLD_MIN, + ASRC_FIFO_THRESHOLD_MAX); + + /* Update the real input threshold to raise DMA request */ + fsl_asrc_set_watermarks(pair, ASRC_M2M_INPUTFIFO_WML, + ASRC_M2M_OUTPUTFIFO_WML); + + return 0; +} + +static int fsl_asrc_m2m_stop(struct fsl_asrc_pair *pair) +{ + if (!pair->first_convert) { + fsl_asrc_stop_pair(pair); + pair->first_convert = 1; + } + + return 0; +} + +/* calculate capture data length according to output data length and sample rate */ +static int fsl_asrc_m2m_calc_out_len(struct fsl_asrc_pair *pair, int input_buffer_length) +{ + unsigned int in_width, out_width; + unsigned int channels = pair->channels; + unsigned int in_samples, out_samples; + unsigned int out_length; + + in_width = snd_pcm_format_physical_width(pair->sample_format[IN]) / 8; + out_width = snd_pcm_format_physical_width(pair->sample_format[OUT]) / 8; + + in_samples = input_buffer_length / in_width / channels; + out_samples = pair->rate[OUT] * in_samples / pair->rate[IN]; + out_length = (out_samples - ASRC_OUTPUT_LAST_SAMPLE) * out_width * channels; + + return out_length; +} + +static int fsl_asrc_m2m_get_maxburst(u8 dir, struct fsl_asrc_pair *pair) +{ + struct fsl_asrc *asrc = pair->asrc; + struct fsl_asrc_priv *asrc_priv = asrc->private; + int wml = (dir == IN) ? ASRC_M2M_INPUTFIFO_WML : ASRC_M2M_OUTPUTFIFO_WML; + + if (!asrc_priv->soc->use_edma) + return wml * pair->channels; + else + return 1; +} + +static int fsl_asrc_m2m_pair_resume(struct fsl_asrc_pair *pair) +{ + struct fsl_asrc *asrc = pair->asrc; + int i; + + for (i = 0; i < pair->channels * 4; i++) + regmap_write(asrc->regmap, REG_ASRDI(pair->index), 0); + + pair->first_convert = 1; + return 0; +} + static int fsl_asrc_runtime_resume(struct device *dev); static int fsl_asrc_runtime_suspend(struct device *dev);
@@ -1147,6 +1265,14 @@ static int fsl_asrc_probe(struct platform_device *pdev) asrc->get_fifo_addr = fsl_asrc_get_fifo_addr; asrc->pair_priv_size = sizeof(struct fsl_asrc_pair_priv);
+ asrc->m2m_prepare = fsl_asrc_m2m_prepare; + asrc->m2m_start = fsl_asrc_m2m_start; + asrc->m2m_stop = fsl_asrc_m2m_stop; + asrc->get_output_fifo_size = fsl_asrc_get_output_fifo_size; + asrc->m2m_calc_out_len = fsl_asrc_m2m_calc_out_len; + asrc->m2m_get_maxburst = fsl_asrc_m2m_get_maxburst; + asrc->m2m_pair_resume = fsl_asrc_m2m_pair_resume; + if (of_device_is_compatible(np, "fsl,imx35-asrc")) { asrc_priv->clk_map[IN] = input_clk_map_imx35; asrc_priv->clk_map[OUT] = output_clk_map_imx35; diff --git a/sound/soc/fsl/fsl_asrc.h b/sound/soc/fsl/fsl_asrc.h index 86d2422ad606..1c492eb237f5 100644 --- a/sound/soc/fsl/fsl_asrc.h +++ b/sound/soc/fsl/fsl_asrc.h @@ -12,6 +12,8 @@
#include "fsl_asrc_common.h"
+#define ASRC_M2M_INPUTFIFO_WML 0x4 +#define ASRC_M2M_OUTPUTFIFO_WML 0x2 #define ASRC_DMA_BUFFER_NUM 2 #define ASRC_INPUTFIFO_THRESHOLD 32 #define ASRC_OUTPUTFIFO_THRESHOLD 32 diff --git a/sound/soc/fsl/fsl_asrc_common.h b/sound/soc/fsl/fsl_asrc_common.h index 7e1c13ca37f1..3b53d366182f 100644 --- a/sound/soc/fsl/fsl_asrc_common.h +++ b/sound/soc/fsl/fsl_asrc_common.h @@ -34,6 +34,12 @@ enum asrc_pair_index { * @pos: hardware pointer position * @req_dma_chan: flag to release dev_to_dev chan * @private: pair private area + * @complete: dma task complete + * @sample_format: format of m2m + * @rate: rate of m2m + * @buf_len: buffer length of m2m + * @first_convert: start of conversion + * @req_pair: flag for request pair */ struct fsl_asrc_pair { struct fsl_asrc *asrc; @@ -49,6 +55,14 @@ struct fsl_asrc_pair { bool req_dma_chan;
void *private; + + /* used for m2m */ + struct completion complete[2]; + snd_pcm_format_t sample_format[2]; + unsigned int rate[2]; + unsigned int buf_len[2]; + unsigned int first_convert; + bool req_pair; };
/** @@ -72,6 +86,16 @@ struct fsl_asrc_pair { * @request_pair: function pointer * @release_pair: function pointer * @get_fifo_addr: function pointer + * @m2m_prepare: function pointer + * @m2m_start: function pointer + * @m2m_unprepare: function pointer + * @m2m_stop: function pointer + * @m2m_calc_out_len: function pointer + * @m2m_get_maxburst: function pointer + * @m2m_pair_suspend: function pointer + * @m2m_pair_resume: function pointer + * @m2m_set_ratio_mod: function pointer + * @get_output_fifo_size: function pointer * @pair_priv_size: size of pair private struct. * @private: private data structure */ @@ -97,6 +121,19 @@ struct fsl_asrc { int (*request_pair)(int channels, struct fsl_asrc_pair *pair); void (*release_pair)(struct fsl_asrc_pair *pair); int (*get_fifo_addr)(u8 dir, enum asrc_pair_index index); + + int (*m2m_prepare)(struct fsl_asrc_pair *pair); + int (*m2m_start)(struct fsl_asrc_pair *pair); + int (*m2m_unprepare)(struct fsl_asrc_pair *pair); + int (*m2m_stop)(struct fsl_asrc_pair *pair); + + int (*m2m_calc_out_len)(struct fsl_asrc_pair *pair, int input_buffer_length); + int (*m2m_get_maxburst)(u8 dir, struct fsl_asrc_pair *pair); + int (*m2m_pair_suspend)(struct fsl_asrc_pair *pair); + int (*m2m_pair_resume)(struct fsl_asrc_pair *pair); + int (*m2m_set_ratio_mod)(struct fsl_asrc_pair *pair, int val); + + unsigned int (*get_output_fifo_size)(struct fsl_asrc_pair *pair); size_t pair_priv_size;
void *private;
ASRC can be used in the memory-to-memory case; define several functions for m2m usage and export them as function pointers.
Signed-off-by: Shengjiu Wang shengjiu.wang@nxp.com Acked-by: Mark Brown broonie@kernel.org --- sound/soc/fsl/fsl_easrc.c | 214 ++++++++++++++++++++++++++++++++++++++ sound/soc/fsl/fsl_easrc.h | 4 + 2 files changed, 218 insertions(+)
diff --git a/sound/soc/fsl/fsl_easrc.c b/sound/soc/fsl/fsl_easrc.c index ec53bda46a46..cf7ad30a323b 100644 --- a/sound/soc/fsl/fsl_easrc.c +++ b/sound/soc/fsl/fsl_easrc.c @@ -1861,6 +1861,211 @@ static int fsl_easrc_get_fifo_addr(u8 dir, enum asrc_pair_index index) return REG_EASRC_FIFO(dir, index); }
+/* Get sample numbers in FIFO */ +static unsigned int fsl_easrc_get_output_fifo_size(struct fsl_asrc_pair *pair) +{ + struct fsl_asrc *asrc = pair->asrc; + enum asrc_pair_index index = pair->index; + u32 val; + + regmap_read(asrc->regmap, REG_EASRC_SFS(index), &val); + val &= EASRC_SFS_NSGO_MASK; + + return val >> EASRC_SFS_NSGO_SHIFT; +} + +static int fsl_easrc_m2m_prepare(struct fsl_asrc_pair *pair) +{ + struct fsl_easrc_ctx_priv *ctx_priv = pair->private; + struct fsl_asrc *asrc = pair->asrc; + struct device *dev = &asrc->pdev->dev; + int ret; + + ctx_priv->in_params.sample_rate = pair->rate[IN]; + ctx_priv->in_params.sample_format = pair->sample_format[IN]; + ctx_priv->out_params.sample_rate = pair->rate[OUT]; + ctx_priv->out_params.sample_format = pair->sample_format[OUT]; + + ctx_priv->in_params.fifo_wtmk = FSL_EASRC_INPUTFIFO_WML; + ctx_priv->out_params.fifo_wtmk = FSL_EASRC_OUTPUTFIFO_WML; + /* Fill the right half of the re-sampler with zeros */ + ctx_priv->rs_init_mode = 0x2; + /* Zero fill the right half of the prefilter */ + ctx_priv->pf_init_mode = 0x2; + + ret = fsl_easrc_set_ctx_format(pair, + &ctx_priv->in_params.sample_format, + &ctx_priv->out_params.sample_format); + if (ret) { + dev_err(dev, "failed to set context format: %d\n", ret); + return ret; + } + + ret = fsl_easrc_config_context(asrc, pair->index); + if (ret) { + dev_err(dev, "failed to config context %d\n", ret); + return ret; + } + + ctx_priv->in_params.iterations = 1; + ctx_priv->in_params.group_len = pair->channels; + ctx_priv->in_params.access_len = pair->channels; + ctx_priv->out_params.iterations = 1; + ctx_priv->out_params.group_len = pair->channels; + ctx_priv->out_params.access_len = pair->channels; + + ret = fsl_easrc_set_ctx_organziation(pair); + if (ret) { + dev_err(dev, "failed to set fifo organization\n"); + return ret; + } + + /* The context start flag */ + pair->first_convert = 1; + return 0; +} + +static int fsl_easrc_m2m_start(struct fsl_asrc_pair *pair) +{ + /* start context once */ + if (pair->first_convert) { + fsl_easrc_start_context(pair); + pair->first_convert = 0; + } + + return 0; +} + +static int fsl_easrc_m2m_stop(struct fsl_asrc_pair *pair) +{ + /* Stop pair/context */ + if (!pair->first_convert) { + fsl_easrc_stop_context(pair); + pair->first_convert = 1; + } + + return 0; +} + +/* calculate capture data length according to output data length and sample rate */ +static int fsl_easrc_m2m_calc_out_len(struct fsl_asrc_pair *pair, int input_buffer_length) +{ + struct fsl_asrc *easrc = pair->asrc; + struct fsl_easrc_priv *easrc_priv = easrc->private; + struct fsl_easrc_ctx_priv *ctx_priv = pair->private; + unsigned int in_rate = ctx_priv->in_params.norm_rate; + unsigned int out_rate = ctx_priv->out_params.norm_rate; + unsigned int channels = pair->channels; + unsigned int in_samples, out_samples; + unsigned int in_width, out_width; + unsigned int out_length; + unsigned int frac_bits; + u64 val1, val2; + + switch (easrc_priv->rs_num_taps) { + case EASRC_RS_32_TAPS: + /* integer bits = 5; */ + frac_bits = 39; + break; + case EASRC_RS_64_TAPS: + /* integer bits = 6; */ + frac_bits = 38; + break; + case EASRC_RS_128_TAPS: + /* integer bits = 7; */ + frac_bits = 37; + break; + default: + return -EINVAL; + } + + val1 = (u64)in_rate << frac_bits; + do_div(val1, out_rate); + val1 += (s64)ctx_priv->ratio_mod << (frac_bits - 31); + + in_width = snd_pcm_format_physical_width(ctx_priv->in_params.sample_format) / 8; + out_width = snd_pcm_format_physical_width(ctx_priv->out_params.sample_format) / 8; 
+ + ctx_priv->in_filled_len += input_buffer_length; + if (ctx_priv->in_filled_len <= ctx_priv->in_filled_sample * in_width * channels) { + out_length = 0; + } else { + in_samples = ctx_priv->in_filled_len / (in_width * channels) - + ctx_priv->in_filled_sample; + + /* right shift 12 bit to make ratio in 32bit space */ + val2 = (u64)in_samples << (frac_bits - 12); + val1 = val1 >> 12; + do_div(val2, val1); + out_samples = val2; + + out_length = out_samples * out_width * channels; + ctx_priv->in_filled_len = ctx_priv->in_filled_sample * in_width * channels; + } + + return out_length; +} + +static int fsl_easrc_m2m_get_maxburst(u8 dir, struct fsl_asrc_pair *pair) +{ + struct fsl_easrc_ctx_priv *ctx_priv = pair->private; + + if (dir == IN) + return ctx_priv->in_params.fifo_wtmk * pair->channels; + else + return ctx_priv->out_params.fifo_wtmk * pair->channels; +} + +static int fsl_easrc_m2m_pair_suspend(struct fsl_asrc_pair *pair) +{ + fsl_easrc_stop_context(pair); + + return 0; +} + +static int fsl_easrc_m2m_pair_resume(struct fsl_asrc_pair *pair) +{ + struct fsl_easrc_ctx_priv *ctx_priv = pair->private; + + pair->first_convert = 1; + ctx_priv->in_filled_len = 0; + + return 0; +} + +/* val is Q31 */ +static int fsl_easrc_m2m_set_ratio_mod(struct fsl_asrc_pair *pair, int val) +{ + struct fsl_easrc_ctx_priv *ctx_priv = pair->private; + struct fsl_asrc *easrc = pair->asrc; + struct fsl_easrc_priv *easrc_priv = easrc->private; + unsigned int frac_bits; + + ctx_priv->ratio_mod += val; + + switch (easrc_priv->rs_num_taps) { + case EASRC_RS_32_TAPS: + /* integer bits = 5; */ + frac_bits = 39; + break; + case EASRC_RS_64_TAPS: + /* integer bits = 6; */ + frac_bits = 38; + break; + case EASRC_RS_128_TAPS: + /* integer bits = 7; */ + frac_bits = 37; + break; + default: + return -EINVAL; + } + + val <<= (frac_bits - 31); + regmap_write(easrc->regmap, REG_EASRC_RUC(pair->index), EASRC_RSUC_RS_RM(val)); + + return 0; +} + static const struct of_device_id fsl_easrc_dt_ids[] = { { .compatible = "fsl,imx8mn-easrc",}, {} @@ -1926,6 +2131,15 @@ static int fsl_easrc_probe(struct platform_device *pdev) easrc->release_pair = fsl_easrc_release_context; easrc->get_fifo_addr = fsl_easrc_get_fifo_addr; easrc->pair_priv_size = sizeof(struct fsl_easrc_ctx_priv); + easrc->m2m_prepare = fsl_easrc_m2m_prepare; + easrc->m2m_start = fsl_easrc_m2m_start; + easrc->m2m_stop = fsl_easrc_m2m_stop; + easrc->get_output_fifo_size = fsl_easrc_get_output_fifo_size; + easrc->m2m_calc_out_len = fsl_easrc_m2m_calc_out_len; + easrc->m2m_get_maxburst = fsl_easrc_m2m_get_maxburst; + easrc->m2m_pair_suspend = fsl_easrc_m2m_pair_suspend; + easrc->m2m_pair_resume = fsl_easrc_m2m_pair_resume; + easrc->m2m_set_ratio_mod = fsl_easrc_m2m_set_ratio_mod;
easrc_priv->rs_num_taps = EASRC_RS_32_TAPS; easrc_priv->const_coeff = 0x3FF0000000000000; diff --git a/sound/soc/fsl/fsl_easrc.h b/sound/soc/fsl/fsl_easrc.h index 7c70dac52713..c9f770862662 100644 --- a/sound/soc/fsl/fsl_easrc.h +++ b/sound/soc/fsl/fsl_easrc.h @@ -601,6 +601,8 @@ struct fsl_easrc_slot { * @out_missed_sample: sample missed in output * @st1_addexp: exponent added for stage1 * @st2_addexp: exponent added for stage2 + * @ratio_mod: update ratio + * @in_filled_len: input filled length */ struct fsl_easrc_ctx_priv { struct fsl_easrc_io_params in_params; @@ -618,6 +620,8 @@ struct fsl_easrc_ctx_priv { int out_missed_sample; int st1_addexp; int st2_addexp; + int ratio_mod; + unsigned int in_filled_len; };
/**
Move fsl_asrc_common.h to include/sound so that it can be included from other drivers.
Signed-off-by: Shengjiu Wang shengjiu.wang@nxp.com Acked-by: Mark Brown broonie@kernel.org --- {sound/soc/fsl => include/sound}/fsl_asrc_common.h | 0 sound/soc/fsl/fsl_asrc.h | 2 +- sound/soc/fsl/fsl_asrc_dma.c | 2 +- sound/soc/fsl/fsl_easrc.h | 2 +- 4 files changed, 3 insertions(+), 3 deletions(-) rename {sound/soc/fsl => include/sound}/fsl_asrc_common.h (100%)
diff --git a/sound/soc/fsl/fsl_asrc_common.h b/include/sound/fsl_asrc_common.h similarity index 100% rename from sound/soc/fsl/fsl_asrc_common.h rename to include/sound/fsl_asrc_common.h diff --git a/sound/soc/fsl/fsl_asrc.h b/sound/soc/fsl/fsl_asrc.h index 1c492eb237f5..66544624de7b 100644 --- a/sound/soc/fsl/fsl_asrc.h +++ b/sound/soc/fsl/fsl_asrc.h @@ -10,7 +10,7 @@ #ifndef _FSL_ASRC_H #define _FSL_ASRC_H
-#include "fsl_asrc_common.h" +#include <sound/fsl_asrc_common.h>
#define ASRC_M2M_INPUTFIFO_WML 0x4 #define ASRC_M2M_OUTPUTFIFO_WML 0x2 diff --git a/sound/soc/fsl/fsl_asrc_dma.c b/sound/soc/fsl/fsl_asrc_dma.c index f501f47242fb..f067bf1ecea7 100644 --- a/sound/soc/fsl/fsl_asrc_dma.c +++ b/sound/soc/fsl/fsl_asrc_dma.c @@ -12,7 +12,7 @@ #include <sound/dmaengine_pcm.h> #include <sound/pcm_params.h>
-#include "fsl_asrc_common.h" +#include <sound/fsl_asrc_common.h>
#define FSL_ASRC_DMABUF_SIZE (256 * 1024)
diff --git a/sound/soc/fsl/fsl_easrc.h b/sound/soc/fsl/fsl_easrc.h index c9f770862662..a24e540876a4 100644 --- a/sound/soc/fsl/fsl_easrc.h +++ b/sound/soc/fsl/fsl_easrc.h @@ -9,7 +9,7 @@ #include <sound/asound.h> #include <linux/dma/imx-dma.h>
-#include "fsl_asrc_common.h" +#include <sound/fsl_asrc_common.h>
/* EASRC Register Map */
Register the m2m platform device so that users can use the M2M feature.
Define the platform data structure and the platform driver name.
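For context, the consumer of this platform data is the memory-to-memory media driver added later in the series; a hedged sketch of its probe side could look like the following (function and variable names are illustrative).

#include <linux/platform_device.h>
#include <sound/fsl_asrc_common.h>

/* Illustrative consumer: pick up the pdata copied by platform_device_register_data(). */
static int fsl_asrc_m2m_probe(struct platform_device *pdev)
{
	struct fsl_asrc_m2m_pdata *pdata = dev_get_platdata(&pdev->dev);

	if (!pdata)
		return -EINVAL;

	/*
	 * pdata->asrc gives access to the shared hardware state of the parent
	 * driver; pdata->rate_min/rate_max and chan_min/chan_max bound the
	 * V4L2 controls and format negotiation.
	 */
	return 0;
}

static struct platform_driver fsl_asrc_m2m_driver = {
	.probe	= fsl_asrc_m2m_probe,
	.driver	= { .name = M2M_DRV_NAME },
};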
Signed-off-by: Shengjiu Wang shengjiu.wang@nxp.com Acked-by: Mark Brown broonie@kernel.org --- include/sound/fsl_asrc_common.h | 23 +++++++++++++++++++++++ sound/soc/fsl/fsl_asrc.c | 18 ++++++++++++++++++ 2 files changed, 41 insertions(+)
diff --git a/include/sound/fsl_asrc_common.h b/include/sound/fsl_asrc_common.h index 3b53d366182f..c709b8906929 100644 --- a/include/sound/fsl_asrc_common.h +++ b/include/sound/fsl_asrc_common.h @@ -71,6 +71,7 @@ struct fsl_asrc_pair { * @dma_params_rx: DMA parameters for receive channel * @dma_params_tx: DMA parameters for transmit channel * @pdev: platform device pointer + * @m2m_pdev: m2m platform device pointer * @regmap: regmap handler * @paddr: physical address to the base address of registers * @mem_clk: clock source to access register @@ -103,6 +104,7 @@ struct fsl_asrc { struct snd_dmaengine_dai_dma_data dma_params_rx; struct snd_dmaengine_dai_dma_data dma_params_tx; struct platform_device *pdev; + struct platform_device *m2m_pdev; struct regmap *regmap; unsigned long paddr; struct clk *mem_clk; @@ -139,6 +141,27 @@ struct fsl_asrc { void *private; };
+/** + * struct fsl_asrc_m2m_pdata - platform data + * @asrc: pointer to struct fsl_asrc + * @fmt_in: input sample format + * @fmt_out: output sample format + * @chan_min: minimum channel number + * @chan_max: maximum channel number + * @rate_min: minimum rate + * @rate_max: maximum rate + */ +struct fsl_asrc_m2m_pdata { + struct fsl_asrc *asrc; + u64 fmt_in; + u64 fmt_out; + int chan_min; + int chan_max; + int rate_min; + int rate_max; +}; + +#define M2M_DRV_NAME "fsl_asrc_m2m" #define DRV_NAME "fsl-asrc-dai" extern struct snd_soc_component_driver fsl_asrc_component;
diff --git a/sound/soc/fsl/fsl_asrc.c b/sound/soc/fsl/fsl_asrc.c index 7d8643ee0ba0..5ecb5d869607 100644 --- a/sound/soc/fsl/fsl_asrc.c +++ b/sound/soc/fsl/fsl_asrc.c @@ -1187,6 +1187,7 @@ static int fsl_asrc_runtime_suspend(struct device *dev); static int fsl_asrc_probe(struct platform_device *pdev) { struct device_node *np = pdev->dev.of_node; + struct fsl_asrc_m2m_pdata m2m_pdata; struct fsl_asrc_priv *asrc_priv; struct fsl_asrc *asrc; struct resource *res; @@ -1368,6 +1369,18 @@ static int fsl_asrc_probe(struct platform_device *pdev) goto err_pm_get_sync; }
+ m2m_pdata.asrc = asrc; + m2m_pdata.fmt_in = FSL_ASRC_FORMATS; + m2m_pdata.fmt_out = FSL_ASRC_FORMATS | SNDRV_PCM_FMTBIT_S8; + m2m_pdata.rate_min = 5512; + m2m_pdata.rate_max = 192000; + m2m_pdata.chan_min = 1; + m2m_pdata.chan_max = 10; + asrc->m2m_pdev = platform_device_register_data(&pdev->dev, + M2M_DRV_NAME, + PLATFORM_DEVID_AUTO, + &m2m_pdata, + sizeof(m2m_pdata)); return 0;
err_pm_get_sync: @@ -1380,6 +1393,11 @@ static int fsl_asrc_probe(struct platform_device *pdev)
static void fsl_asrc_remove(struct platform_device *pdev) { + struct fsl_asrc *asrc = dev_get_drvdata(&pdev->dev); + + if (asrc->m2m_pdev && !IS_ERR(asrc->m2m_pdev)) + platform_device_unregister(asrc->m2m_pdev); + pm_runtime_disable(&pdev->dev); if (!pm_runtime_status_suspended(&pdev->dev)) fsl_asrc_runtime_suspend(&pdev->dev);
Register the m2m platform device so that users can use the M2M feature.
Signed-off-by: Shengjiu Wang shengjiu.wang@nxp.com Acked-by: Mark Brown broonie@kernel.org --- sound/soc/fsl/fsl_easrc.c | 19 +++++++++++++++++++ 1 file changed, 19 insertions(+)
diff --git a/sound/soc/fsl/fsl_easrc.c b/sound/soc/fsl/fsl_easrc.c index cf7ad30a323b..ccbf45c7abf4 100644 --- a/sound/soc/fsl/fsl_easrc.c +++ b/sound/soc/fsl/fsl_easrc.c @@ -2075,6 +2075,7 @@ MODULE_DEVICE_TABLE(of, fsl_easrc_dt_ids); static int fsl_easrc_probe(struct platform_device *pdev) { struct fsl_easrc_priv *easrc_priv; + struct fsl_asrc_m2m_pdata m2m_pdata; struct device *dev = &pdev->dev; struct fsl_asrc *easrc; struct resource *res; @@ -2190,6 +2191,19 @@ static int fsl_easrc_probe(struct platform_device *pdev) goto err_pm_disable; }
+ m2m_pdata.asrc = easrc; + m2m_pdata.fmt_in = FSL_EASRC_FORMATS; + m2m_pdata.fmt_out = FSL_EASRC_FORMATS | SNDRV_PCM_FMTBIT_IEC958_SUBFRAME_LE; + m2m_pdata.rate_min = 8000; + m2m_pdata.rate_max = 768000; + m2m_pdata.chan_min = 1; + m2m_pdata.chan_max = 32; + easrc->m2m_pdev = platform_device_register_data(&pdev->dev, + M2M_DRV_NAME, + PLATFORM_DEVID_AUTO, + &m2m_pdata, + sizeof(m2m_pdata)); + return 0;
err_pm_disable: @@ -2199,6 +2213,11 @@ static int fsl_easrc_probe(struct platform_device *pdev)
static void fsl_easrc_remove(struct platform_device *pdev) { + struct fsl_asrc *easrc = dev_get_drvdata(&pdev->dev); + + if (easrc->m2m_pdev && !IS_ERR(easrc->m2m_pdev)) + platform_device_unregister(easrc->m2m_pdev); + pm_runtime_disable(&pdev->dev); }
V4L2_CAP_AUDIO_M2M is similar to the V4L2_CAP_VIDEO_M2M flag.
It is used for the audio memory-to-memory case.
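A minimal user-space capability check might look like the sketch below (not part of the patch):

#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Return non-zero if the already-opened node advertises audio m2m support. */
static int is_audio_m2m(int fd)
{
	struct v4l2_capability cap;

	if (ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0)
		return 0;

	return !!(cap.device_caps & V4L2_CAP_AUDIO_M2M);
}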
Signed-off-by: Shengjiu Wang shengjiu.wang@nxp.com --- Documentation/userspace-api/media/v4l/vidioc-querycap.rst | 3 +++ Documentation/userspace-api/media/videodev2.h.rst.exceptions | 1 + include/uapi/linux/videodev2.h | 1 + 3 files changed, 5 insertions(+)
diff --git a/Documentation/userspace-api/media/v4l/vidioc-querycap.rst b/Documentation/userspace-api/media/v4l/vidioc-querycap.rst index 6c57b8428356..1c0d97bf192a 100644 --- a/Documentation/userspace-api/media/v4l/vidioc-querycap.rst +++ b/Documentation/userspace-api/media/v4l/vidioc-querycap.rst @@ -173,6 +173,9 @@ specification the ioctl returns an ``EINVAL`` error code. interface. A video overlay device typically stores captured images directly in the video memory of a graphics card, with hardware clipping and scaling. + * - ``V4L2_CAP_AUDIO_M2M`` + - 0x00000008 + - The device supports the audio Memory-To-Memory interface. * - ``V4L2_CAP_VBI_CAPTURE`` - 0x00000010 - The device supports the :ref:`Raw VBI Capture <raw-vbi>` diff --git a/Documentation/userspace-api/media/videodev2.h.rst.exceptions b/Documentation/userspace-api/media/videodev2.h.rst.exceptions index 3e58aac4ef0b..da6d0b8e4c2c 100644 --- a/Documentation/userspace-api/media/videodev2.h.rst.exceptions +++ b/Documentation/userspace-api/media/videodev2.h.rst.exceptions @@ -197,6 +197,7 @@ replace define V4L2_CAP_META_OUTPUT device-capabilities replace define V4L2_CAP_DEVICE_CAPS device-capabilities replace define V4L2_CAP_TOUCH device-capabilities replace define V4L2_CAP_IO_MC device-capabilities +replace define V4L2_CAP_AUDIO_M2M device-capabilities
# V4L2 pix flags replace define V4L2_PIX_FMT_PRIV_MAGIC :c:type:`v4l2_pix_format` diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h index b8573e9ccde6..5cc2a978fd9c 100644 --- a/include/uapi/linux/videodev2.h +++ b/include/uapi/linux/videodev2.h @@ -473,6 +473,7 @@ struct v4l2_capability { #define V4L2_CAP_VIDEO_CAPTURE 0x00000001 /* Is a video capture device */ #define V4L2_CAP_VIDEO_OUTPUT 0x00000002 /* Is a video output device */ #define V4L2_CAP_VIDEO_OVERLAY 0x00000004 /* Can do video overlay */ +#define V4L2_CAP_AUDIO_M2M 0x00000008 /* audio memory to memory */ #define V4L2_CAP_VBI_CAPTURE 0x00000010 /* Is a raw VBI capture device */ #define V4L2_CAP_VBI_OUTPUT 0x00000020 /* Is a raw VBI output device */ #define V4L2_CAP_SLICED_VBI_CAPTURE 0x00000040 /* Is a sliced VBI capture device */
Audio signal processing, like video, has a requirement for memory-to-memory operation.
This patch adds that support to the V4L2 framework: it defines the new buffer types V4L2_BUF_TYPE_AUDIO_CAPTURE and V4L2_BUF_TYPE_AUDIO_OUTPUT and a new format, struct v4l2_audio_format, for the audio use case.
The created audio device is named "/dev/v4l-audioX".
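As an illustration of how the new buffer types plug into the existing ioctls (a sketch, not part of the patch), enumerating the capture-side sample formats uses the regular VIDIOC_ENUM_FMT loop, with the audio fourcc reported in the pixelformat field:

#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Sketch: list the sample formats offered on the capture queue of an open fd. */
static void list_capture_formats(int fd)
{
	struct v4l2_fmtdesc desc = { .type = V4L2_BUF_TYPE_AUDIO_CAPTURE };

	while (ioctl(fd, VIDIOC_ENUM_FMT, &desc) == 0) {
		printf("format %u: %c%c%c%c\n", desc.index,
		       desc.pixelformat & 0xff,
		       (desc.pixelformat >> 8) & 0xff,
		       (desc.pixelformat >> 16) & 0xff,
		       (desc.pixelformat >> 24) & 0xff);
		desc.index++;
	}
}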
Signed-off-by: Shengjiu Wang shengjiu.wang@nxp.com --- .../userspace-api/media/v4l/buffer.rst | 6 ++ .../media/v4l/dev-audio-mem2mem.rst | 71 +++++++++++++++++++ .../userspace-api/media/v4l/devices.rst | 1 + .../media/v4l/vidioc-enum-fmt.rst | 2 + .../userspace-api/media/v4l/vidioc-g-fmt.rst | 4 ++ .../media/videodev2.h.rst.exceptions | 2 + .../media/common/videobuf2/videobuf2-v4l2.c | 4 ++ drivers/media/v4l2-core/v4l2-compat-ioctl32.c | 9 +++ drivers/media/v4l2-core/v4l2-dev.c | 17 +++++ drivers/media/v4l2-core/v4l2-ioctl.c | 53 ++++++++++++++ include/media/v4l2-dev.h | 2 + include/media/v4l2-ioctl.h | 34 +++++++++ include/uapi/linux/videodev2.h | 17 +++++ 13 files changed, 222 insertions(+) create mode 100644 Documentation/userspace-api/media/v4l/dev-audio-mem2mem.rst
diff --git a/Documentation/userspace-api/media/v4l/buffer.rst b/Documentation/userspace-api/media/v4l/buffer.rst index 52bbee81c080..a3754ca6f0d6 100644 --- a/Documentation/userspace-api/media/v4l/buffer.rst +++ b/Documentation/userspace-api/media/v4l/buffer.rst @@ -438,6 +438,12 @@ enum v4l2_buf_type * - ``V4L2_BUF_TYPE_META_OUTPUT`` - 14 - Buffer for metadata output, see :ref:`metadata`. + * - ``V4L2_BUF_TYPE_AUDIO_CAPTURE`` + - 15 + - Buffer for audio capture, see :ref:`audio`. + * - ``V4L2_BUF_TYPE_AUDIO_OUTPUT`` + - 16 + - Buffer for audio output, see :ref:`audio`.
.. _buffer-flags: diff --git a/Documentation/userspace-api/media/v4l/dev-audio-mem2mem.rst b/Documentation/userspace-api/media/v4l/dev-audio-mem2mem.rst new file mode 100644 index 000000000000..54cc2abb6c04 --- /dev/null +++ b/Documentation/userspace-api/media/v4l/dev-audio-mem2mem.rst @@ -0,0 +1,71 @@ +.. SPDX-License-Identifier: GFDL-1.1-no-invariants-or-later + +.. _audiomem2mem: + +******************************** +Audio Memory-To-Memory Interface +******************************** + +An audio memory-to-memory device can compress, decompress, transform, or +otherwise convert audio data from one format into another format, in memory. +Such memory-to-memory devices set the ``V4L2_CAP_AUDIO_M2M`` capability. +Examples of memory-to-memory devices are audio codecs, audio preprocessing, +audio postprocessing. + +A memory-to-memory audio node supports both output (sending audio frames from +memory to the hardware) and capture (receiving the processed audio frames +from the hardware into memory) stream I/O. An application will have to +setup the stream I/O for both sides and finally call +:ref:`VIDIOC_STREAMON <VIDIOC_STREAMON>` for both capture and output to +start the hardware. + +Memory-to-memory devices function as a shared resource: you can +open the audio node multiple times, each application setting up their +own properties that are local to the file handle, and each can use +it independently from the others. The driver will arbitrate access to +the hardware and reprogram it whenever another file handler gets access. + +Audio memory-to-memory devices are accessed through character device +special files named ``/dev/v4l-audio`` + +Querying Capabilities +===================== + +Device nodes supporting the audio memory-to-memory interface set the +``V4L2_CAP_AUDIO_M2M`` flag in the ``device_caps`` field of the +:c:type:`v4l2_capability` structure returned by the :c:func:`VIDIOC_QUERYCAP` +ioctl. + +Data Format Negotiation +======================= + +The audio device uses the :ref:`format` ioctls to select the capture format. +The audio buffer content format is bound to that selected format. In addition +to the basic :ref:`format` ioctls, the :c:func:`VIDIOC_ENUM_FMT` ioctl must be +supported as well. + +To use the :ref:`format` ioctls applications set the ``type`` field of the +:c:type:`v4l2_format` structure to ``V4L2_BUF_TYPE_AUDIO_CAPTURE`` or to +``V4L2_BUF_TYPE_AUDIO_OUTPUT``. Both drivers and applications must set the +remainder of the :c:type:`v4l2_format` structure to 0. + +.. c:type:: v4l2_audio_format + +.. tabularcolumns:: |p{1.4cm}|p{2.4cm}|p{13.5cm}| + +.. flat-table:: struct v4l2_audio_format + :header-rows: 0 + :stub-columns: 0 + :widths: 1 1 2 + + * - __u32 + - ``audioformat`` + - The sample format, set by the application. see :ref:`pixfmt-audio` + * - __u32 + - ``channels`` + - The channel number, set by the application. channel number range is + [1, 32]. + * - __u32 + - ``buffersize`` + - Maximum buffer size in bytes required for data. The value is set by the + driver. 
diff --git a/Documentation/userspace-api/media/v4l/devices.rst b/Documentation/userspace-api/media/v4l/devices.rst index 8bfbad65a9d4..758bd90f1c26 100644 --- a/Documentation/userspace-api/media/v4l/devices.rst +++ b/Documentation/userspace-api/media/v4l/devices.rst @@ -24,3 +24,4 @@ Interfaces dev-event dev-subdev dev-meta + dev-audio-mem2mem diff --git a/Documentation/userspace-api/media/v4l/vidioc-enum-fmt.rst b/Documentation/userspace-api/media/v4l/vidioc-enum-fmt.rst index 000c154b0f98..42deb07f4ff4 100644 --- a/Documentation/userspace-api/media/v4l/vidioc-enum-fmt.rst +++ b/Documentation/userspace-api/media/v4l/vidioc-enum-fmt.rst @@ -96,6 +96,8 @@ the ``mbus_code`` field is handled differently: ``V4L2_BUF_TYPE_VIDEO_OVERLAY``, ``V4L2_BUF_TYPE_SDR_CAPTURE``, ``V4L2_BUF_TYPE_SDR_OUTPUT``, + ``V4L2_BUF_TYPE_AUDIO_CAPTURE``, + ``V4L2_BUF_TYPE_AUDIO_OUTPUT``, ``V4L2_BUF_TYPE_META_CAPTURE`` and ``V4L2_BUF_TYPE_META_OUTPUT``. See :c:type:`v4l2_buf_type`. diff --git a/Documentation/userspace-api/media/v4l/vidioc-g-fmt.rst b/Documentation/userspace-api/media/v4l/vidioc-g-fmt.rst index 675c385e5aca..528fd9df41aa 100644 --- a/Documentation/userspace-api/media/v4l/vidioc-g-fmt.rst +++ b/Documentation/userspace-api/media/v4l/vidioc-g-fmt.rst @@ -130,6 +130,10 @@ The format as returned by :ref:`VIDIOC_TRY_FMT <VIDIOC_G_FMT>` must be identical - ``meta`` - Definition of a metadata format, see :ref:`meta-formats`, used by metadata capture devices. + * - struct :c:type:`v4l2_audio_format` + - ``audio`` + - Definition of a audio data format, see :ref:`audiomem2mem`, used by + audio memory-to-memory devices * - __u8 - ``raw_data``\ [200] - Place holder for future extensions. diff --git a/Documentation/userspace-api/media/videodev2.h.rst.exceptions b/Documentation/userspace-api/media/videodev2.h.rst.exceptions index da6d0b8e4c2c..e61152bb80d1 100644 --- a/Documentation/userspace-api/media/videodev2.h.rst.exceptions +++ b/Documentation/userspace-api/media/videodev2.h.rst.exceptions @@ -29,6 +29,8 @@ replace symbol V4L2_FIELD_SEQ_TB :c:type:`v4l2_field` replace symbol V4L2_FIELD_TOP :c:type:`v4l2_field`
# Documented enum v4l2_buf_type +replace symbol V4L2_BUF_TYPE_AUDIO_CAPTURE :c:type:`v4l2_buf_type` +replace symbol V4L2_BUF_TYPE_AUDIO_OUTPUT :c:type:`v4l2_buf_type` replace symbol V4L2_BUF_TYPE_META_CAPTURE :c:type:`v4l2_buf_type` replace symbol V4L2_BUF_TYPE_META_OUTPUT :c:type:`v4l2_buf_type` replace symbol V4L2_BUF_TYPE_SDR_CAPTURE :c:type:`v4l2_buf_type` diff --git a/drivers/media/common/videobuf2/videobuf2-v4l2.c b/drivers/media/common/videobuf2/videobuf2-v4l2.c index c575198e8354..0738f45b2341 100644 --- a/drivers/media/common/videobuf2/videobuf2-v4l2.c +++ b/drivers/media/common/videobuf2/videobuf2-v4l2.c @@ -789,6 +789,10 @@ int vb2_create_bufs(struct vb2_queue *q, struct v4l2_create_buffers *create) case V4L2_BUF_TYPE_META_OUTPUT: requested_sizes[0] = f->fmt.meta.buffersize; break; + case V4L2_BUF_TYPE_AUDIO_CAPTURE: + case V4L2_BUF_TYPE_AUDIO_OUTPUT: + requested_sizes[0] = f->fmt.audio.buffersize; + break; default: return -EINVAL; } diff --git a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c index 8c07400bd280..5e94db8dfdae 100644 --- a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c +++ b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c @@ -101,6 +101,7 @@ struct v4l2_format32 { struct v4l2_sliced_vbi_format sliced; struct v4l2_sdr_format sdr; struct v4l2_meta_format meta; + struct v4l2_audio_format audio; __u8 raw_data[200]; /* user-defined */ } fmt; }; @@ -166,6 +167,10 @@ static int get_v4l2_format32(struct v4l2_format *p64, case V4L2_BUF_TYPE_META_OUTPUT: return copy_from_user(&p64->fmt.meta, &p32->fmt.meta, sizeof(p64->fmt.meta)) ? -EFAULT : 0; + case V4L2_BUF_TYPE_AUDIO_CAPTURE: + case V4L2_BUF_TYPE_AUDIO_OUTPUT: + return copy_from_user(&p64->fmt.audio, &p32->fmt.audio, + sizeof(p64->fmt.audio)) ? -EFAULT : 0; default: return -EINVAL; } @@ -216,6 +221,10 @@ static int put_v4l2_format32(struct v4l2_format *p64, case V4L2_BUF_TYPE_META_OUTPUT: return copy_to_user(&p32->fmt.meta, &p64->fmt.meta, sizeof(p64->fmt.meta)) ? -EFAULT : 0; + case V4L2_BUF_TYPE_AUDIO_CAPTURE: + case V4L2_BUF_TYPE_AUDIO_OUTPUT: + return copy_to_user(&p32->fmt.audio, &p64->fmt.audio, + sizeof(p64->fmt.audio)) ? 
-EFAULT : 0; default: return -EINVAL; } diff --git a/drivers/media/v4l2-core/v4l2-dev.c b/drivers/media/v4l2-core/v4l2-dev.c index d13954bd31fd..bac008fcedc6 100644 --- a/drivers/media/v4l2-core/v4l2-dev.c +++ b/drivers/media/v4l2-core/v4l2-dev.c @@ -553,6 +553,7 @@ static void determine_valid_ioctls(struct video_device *vdev) bool is_tch = vdev->vfl_type == VFL_TYPE_TOUCH; bool is_meta = vdev->vfl_type == VFL_TYPE_VIDEO && (vdev->device_caps & meta_caps); + bool is_audio = vdev->vfl_type == VFL_TYPE_AUDIO; bool is_rx = vdev->vfl_dir != VFL_DIR_TX; bool is_tx = vdev->vfl_dir != VFL_DIR_RX; bool is_io_mc = vdev->device_caps & V4L2_CAP_IO_MC; @@ -666,6 +667,19 @@ static void determine_valid_ioctls(struct video_device *vdev) SET_VALID_IOCTL(ops, VIDIOC_S_FMT, vidioc_s_fmt_meta_out); SET_VALID_IOCTL(ops, VIDIOC_TRY_FMT, vidioc_try_fmt_meta_out); } + if (is_audio && is_rx) { + /* audio capture specific ioctls */ + SET_VALID_IOCTL(ops, VIDIOC_ENUM_FMT, vidioc_enum_fmt_audio_cap); + SET_VALID_IOCTL(ops, VIDIOC_G_FMT, vidioc_g_fmt_audio_cap); + SET_VALID_IOCTL(ops, VIDIOC_S_FMT, vidioc_s_fmt_audio_cap); + SET_VALID_IOCTL(ops, VIDIOC_TRY_FMT, vidioc_try_fmt_audio_cap); + } else if (is_audio && is_tx) { + /* audio output specific ioctls */ + SET_VALID_IOCTL(ops, VIDIOC_ENUM_FMT, vidioc_enum_fmt_audio_out); + SET_VALID_IOCTL(ops, VIDIOC_G_FMT, vidioc_g_fmt_audio_out); + SET_VALID_IOCTL(ops, VIDIOC_S_FMT, vidioc_s_fmt_audio_out); + SET_VALID_IOCTL(ops, VIDIOC_TRY_FMT, vidioc_try_fmt_audio_out); + } if (is_vbi) { /* vbi specific ioctls */ if ((is_rx && (ops->vidioc_g_fmt_vbi_cap || @@ -929,6 +943,9 @@ int __video_register_device(struct video_device *vdev, case VFL_TYPE_TOUCH: name_base = "v4l-touch"; break; + case VFL_TYPE_AUDIO: + name_base = "v4l-audio"; + break; default: pr_err("%s called with unknown type: %d\n", __func__, type); diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c index 6e7b8b682d13..961abcdf7290 100644 --- a/drivers/media/v4l2-core/v4l2-ioctl.c +++ b/drivers/media/v4l2-core/v4l2-ioctl.c @@ -188,6 +188,8 @@ const char *v4l2_type_names[] = { [V4L2_BUF_TYPE_SDR_OUTPUT] = "sdr-out", [V4L2_BUF_TYPE_META_CAPTURE] = "meta-cap", [V4L2_BUF_TYPE_META_OUTPUT] = "meta-out", + [V4L2_BUF_TYPE_AUDIO_CAPTURE] = "audio-cap", + [V4L2_BUF_TYPE_AUDIO_OUTPUT] = "audio-out", }; EXPORT_SYMBOL(v4l2_type_names);
@@ -276,6 +278,7 @@ static void v4l_print_format(const void *arg, bool write_only) const struct v4l2_sliced_vbi_format *sliced; const struct v4l2_window *win; const struct v4l2_meta_format *meta; + const struct v4l2_audio_format *audio; u32 pixelformat; u32 planes; unsigned i; @@ -346,6 +349,13 @@ static void v4l_print_format(const void *arg, bool write_only) pr_cont(", dataformat=%p4cc, buffersize=%u\n", &pixelformat, meta->buffersize); break; + case V4L2_BUF_TYPE_AUDIO_CAPTURE: + case V4L2_BUF_TYPE_AUDIO_OUTPUT: + audio = &p->fmt.audio; + pixelformat = audio->audioformat; + pr_cont(", format=%p4cc, channels=%u, buffersize=%u\n", + &pixelformat, audio->channels, audio->buffersize); + break; } }
@@ -927,6 +937,7 @@ static int check_fmt(struct file *file, enum v4l2_buf_type type) bool is_tch = vfd->vfl_type == VFL_TYPE_TOUCH; bool is_meta = vfd->vfl_type == VFL_TYPE_VIDEO && (vfd->device_caps & meta_caps); + bool is_audio = vfd->vfl_type == VFL_TYPE_AUDIO; bool is_rx = vfd->vfl_dir != VFL_DIR_TX; bool is_tx = vfd->vfl_dir != VFL_DIR_RX;
@@ -992,6 +1003,14 @@ static int check_fmt(struct file *file, enum v4l2_buf_type type) if (is_meta && is_tx && ops->vidioc_g_fmt_meta_out) return 0; break; + case V4L2_BUF_TYPE_AUDIO_CAPTURE: + if (is_audio && is_rx && ops->vidioc_g_fmt_audio_cap) + return 0; + break; + case V4L2_BUF_TYPE_AUDIO_OUTPUT: + if (is_audio && is_tx && ops->vidioc_g_fmt_audio_out) + return 0; + break; default: break; } @@ -1597,6 +1616,16 @@ static int v4l_enum_fmt(const struct v4l2_ioctl_ops *ops, break; ret = ops->vidioc_enum_fmt_meta_out(file, fh, arg); break; + case V4L2_BUF_TYPE_AUDIO_CAPTURE: + if (unlikely(!ops->vidioc_enum_fmt_audio_cap)) + break; + ret = ops->vidioc_enum_fmt_audio_cap(file, fh, arg); + break; + case V4L2_BUF_TYPE_AUDIO_OUTPUT: + if (unlikely(!ops->vidioc_enum_fmt_audio_out)) + break; + ret = ops->vidioc_enum_fmt_audio_out(file, fh, arg); + break; } if (ret == 0) v4l_fill_fmtdesc(p); @@ -1673,6 +1702,10 @@ static int v4l_g_fmt(const struct v4l2_ioctl_ops *ops, return ops->vidioc_g_fmt_meta_cap(file, fh, arg); case V4L2_BUF_TYPE_META_OUTPUT: return ops->vidioc_g_fmt_meta_out(file, fh, arg); + case V4L2_BUF_TYPE_AUDIO_CAPTURE: + return ops->vidioc_g_fmt_audio_cap(file, fh, arg); + case V4L2_BUF_TYPE_AUDIO_OUTPUT: + return ops->vidioc_g_fmt_audio_out(file, fh, arg); } return -EINVAL; } @@ -1784,6 +1817,16 @@ static int v4l_s_fmt(const struct v4l2_ioctl_ops *ops, break; memset_after(p, 0, fmt.meta); return ops->vidioc_s_fmt_meta_out(file, fh, arg); + case V4L2_BUF_TYPE_AUDIO_CAPTURE: + if (unlikely(!ops->vidioc_s_fmt_audio_cap)) + break; + memset_after(p, 0, fmt.audio); + return ops->vidioc_s_fmt_audio_cap(file, fh, arg); + case V4L2_BUF_TYPE_AUDIO_OUTPUT: + if (unlikely(!ops->vidioc_s_fmt_audio_out)) + break; + memset_after(p, 0, fmt.audio); + return ops->vidioc_s_fmt_audio_out(file, fh, arg); } return -EINVAL; } @@ -1892,6 +1935,16 @@ static int v4l_try_fmt(const struct v4l2_ioctl_ops *ops, break; memset_after(p, 0, fmt.meta); return ops->vidioc_try_fmt_meta_out(file, fh, arg); + case V4L2_BUF_TYPE_AUDIO_CAPTURE: + if (unlikely(!ops->vidioc_try_fmt_audio_cap)) + break; + memset_after(p, 0, fmt.audio); + return ops->vidioc_try_fmt_audio_cap(file, fh, arg); + case V4L2_BUF_TYPE_AUDIO_OUTPUT: + if (unlikely(!ops->vidioc_try_fmt_audio_out)) + break; + memset_after(p, 0, fmt.audio); + return ops->vidioc_try_fmt_audio_out(file, fh, arg); } return -EINVAL; } diff --git a/include/media/v4l2-dev.h b/include/media/v4l2-dev.h index d82dfdbf6e58..82b63f82d43f 100644 --- a/include/media/v4l2-dev.h +++ b/include/media/v4l2-dev.h @@ -30,6 +30,7 @@ * @VFL_TYPE_SUBDEV: for V4L2 subdevices * @VFL_TYPE_SDR: for Software Defined Radio tuners * @VFL_TYPE_TOUCH: for touch sensors + * @VFL_TYPE_AUDIO: for audio memory-to-memory devices * @VFL_TYPE_MAX: number of VFL types, must always be last in the enum */ enum vfl_devnode_type { @@ -39,6 +40,7 @@ enum vfl_devnode_type { VFL_TYPE_SUBDEV, VFL_TYPE_SDR, VFL_TYPE_TOUCH, + VFL_TYPE_AUDIO, VFL_TYPE_MAX /* Shall be the last one */ };
diff --git a/include/media/v4l2-ioctl.h b/include/media/v4l2-ioctl.h index edb733f21604..f840cf740ce1 100644 --- a/include/media/v4l2-ioctl.h +++ b/include/media/v4l2-ioctl.h @@ -45,6 +45,12 @@ struct v4l2_fh; * @vidioc_enum_fmt_meta_out: pointer to the function that implements * :ref:`VIDIOC_ENUM_FMT <vidioc_enum_fmt>` ioctl logic * for metadata output + * @vidioc_enum_fmt_audio_cap: pointer to the function that implements + * :ref:`VIDIOC_ENUM_FMT <vidioc_enum_fmt>` ioctl logic + * for audio capture + * @vidioc_enum_fmt_audio_out: pointer to the function that implements + * :ref:`VIDIOC_ENUM_FMT <vidioc_enum_fmt>` ioctl logic + * for audio output * @vidioc_g_fmt_vid_cap: pointer to the function that implements * :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for video capture * in single plane mode @@ -79,6 +85,10 @@ struct v4l2_fh; * :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for metadata capture * @vidioc_g_fmt_meta_out: pointer to the function that implements * :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for metadata output + * @vidioc_g_fmt_audio_cap: pointer to the function that implements + * :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for audio capture + * @vidioc_g_fmt_audio_out: pointer to the function that implements + * :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for audio output * @vidioc_s_fmt_vid_cap: pointer to the function that implements * :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for video capture * in single plane mode @@ -113,6 +123,10 @@ struct v4l2_fh; * :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for metadata capture * @vidioc_s_fmt_meta_out: pointer to the function that implements * :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for metadata output + * @vidioc_s_fmt_audio_cap: pointer to the function that implements + * :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for audio capture + * @vidioc_s_fmt_audio_out: pointer to the function that implements + * :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for audio output * @vidioc_try_fmt_vid_cap: pointer to the function that implements * :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for video capture * in single plane mode @@ -149,6 +163,10 @@ struct v4l2_fh; * :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for metadata capture * @vidioc_try_fmt_meta_out: pointer to the function that implements * :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for metadata output + * @vidioc_try_fmt_audio_cap: pointer to the function that implements + * :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for audio capture + * @vidioc_try_fmt_audio_out: pointer to the function that implements + * :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for audio output * @vidioc_reqbufs: pointer to the function that implements * :ref:`VIDIOC_REQBUFS <vidioc_reqbufs>` ioctl * @vidioc_querybuf: pointer to the function that implements @@ -315,6 +333,10 @@ struct v4l2_ioctl_ops { struct v4l2_fmtdesc *f); int (*vidioc_enum_fmt_meta_out)(struct file *file, void *fh, struct v4l2_fmtdesc *f); + int (*vidioc_enum_fmt_audio_cap)(struct file *file, void *fh, + struct v4l2_fmtdesc *f); + int (*vidioc_enum_fmt_audio_out)(struct file *file, void *fh, + struct v4l2_fmtdesc *f);
/* VIDIOC_G_FMT handlers */ int (*vidioc_g_fmt_vid_cap)(struct file *file, void *fh, @@ -345,6 +367,10 @@ struct v4l2_ioctl_ops { struct v4l2_format *f); int (*vidioc_g_fmt_meta_out)(struct file *file, void *fh, struct v4l2_format *f); + int (*vidioc_g_fmt_audio_cap)(struct file *file, void *fh, + struct v4l2_format *f); + int (*vidioc_g_fmt_audio_out)(struct file *file, void *fh, + struct v4l2_format *f);
/* VIDIOC_S_FMT handlers */ int (*vidioc_s_fmt_vid_cap)(struct file *file, void *fh, @@ -375,6 +401,10 @@ struct v4l2_ioctl_ops { struct v4l2_format *f); int (*vidioc_s_fmt_meta_out)(struct file *file, void *fh, struct v4l2_format *f); + int (*vidioc_s_fmt_audio_cap)(struct file *file, void *fh, + struct v4l2_format *f); + int (*vidioc_s_fmt_audio_out)(struct file *file, void *fh, + struct v4l2_format *f);
/* VIDIOC_TRY_FMT handlers */ int (*vidioc_try_fmt_vid_cap)(struct file *file, void *fh, @@ -405,6 +435,10 @@ struct v4l2_ioctl_ops { struct v4l2_format *f); int (*vidioc_try_fmt_meta_out)(struct file *file, void *fh, struct v4l2_format *f); + int (*vidioc_try_fmt_audio_cap)(struct file *file, void *fh, + struct v4l2_format *f); + int (*vidioc_try_fmt_audio_out)(struct file *file, void *fh, + struct v4l2_format *f);
/* Buffer handlers */ int (*vidioc_reqbufs)(struct file *file, void *fh, diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h index 5cc2a978fd9c..2c03d2dfadbe 100644 --- a/include/uapi/linux/videodev2.h +++ b/include/uapi/linux/videodev2.h @@ -153,6 +153,8 @@ enum v4l2_buf_type { V4L2_BUF_TYPE_SDR_OUTPUT = 12, V4L2_BUF_TYPE_META_CAPTURE = 13, V4L2_BUF_TYPE_META_OUTPUT = 14, + V4L2_BUF_TYPE_AUDIO_CAPTURE = 15, + V4L2_BUF_TYPE_AUDIO_OUTPUT = 16, /* Deprecated, do not use */ V4L2_BUF_TYPE_PRIVATE = 0x80, }; @@ -169,6 +171,7 @@ enum v4l2_buf_type { || (type) == V4L2_BUF_TYPE_VBI_OUTPUT \ || (type) == V4L2_BUF_TYPE_SLICED_VBI_OUTPUT \ || (type) == V4L2_BUF_TYPE_SDR_OUTPUT \ + || (type) == V4L2_BUF_TYPE_AUDIO_OUTPUT \ || (type) == V4L2_BUF_TYPE_META_OUTPUT)
#define V4L2_TYPE_IS_CAPTURE(type) (!V4L2_TYPE_IS_OUTPUT(type)) @@ -2423,6 +2426,18 @@ struct v4l2_meta_format { __u32 buffersize; } __attribute__ ((packed));
+/** + * struct v4l2_audio_format - audio data format definition + * @audioformat: little endian four character code (fourcc) + * @channels: channel numbers + * @buffersize: maximum size in bytes required for data + */ +struct v4l2_audio_format { + __u32 audioformat; + __u32 channels; + __u32 buffersize; +} __attribute__ ((packed)); + /** * struct v4l2_format - stream data format * @type: enum v4l2_buf_type; type of the data stream @@ -2431,6 +2446,7 @@ struct v4l2_meta_format { * @fmt.win: definition of an overlaid image * @fmt.vbi: raw VBI capture or output parameters * @fmt.sliced: sliced VBI capture or output parameters + * @fmt.audio: definition of an audio format * @fmt.raw_data: placeholder for future extensions and custom formats * @fmt: union of @pix, @pix_mp, @win, @vbi, @sliced, @sdr, * @meta and @raw_data @@ -2445,6 +2461,7 @@ struct v4l2_format { struct v4l2_sliced_vbi_format sliced; /* V4L2_BUF_TYPE_SLICED_VBI_CAPTURE */ struct v4l2_sdr_format sdr; /* V4L2_BUF_TYPE_SDR_CAPTURE */ struct v4l2_meta_format meta; /* V4L2_BUF_TYPE_META_CAPTURE */ + struct v4l2_audio_format audio; /* V4L2_BUF_TYPE_AUDIO_CAPTURE */ __u8 raw_data[200]; /* user-defined */ } fmt; };
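To illustrate how user space is expected to use the new union member, here is a minimal sketch; it assumes an already-opened audio M2M device fd and uses the V4L2_AUDIO_FMT_S16_LE fourcc introduced later in this series, so it is an illustration rather than part of the patch:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Sketch: negotiate a 16-bit stereo format on the capture side of an
 * audio memory-to-memory device that was opened as 'fd'.
 */
static int set_audio_capture_fmt(int fd)
{
	struct v4l2_format fmt;

	memset(&fmt, 0, sizeof(fmt));
	fmt.type = V4L2_BUF_TYPE_AUDIO_CAPTURE;
	fmt.fmt.audio.audioformat = V4L2_AUDIO_FMT_S16_LE;
	fmt.fmt.audio.channels = 2;

	/* The driver may adjust format/channels and fills in buffersize. */
	return ioctl(fd, VIDIOC_S_FMT, &fmt);
}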
The audio sample format definitions come from ALSA (include/uapi/sound/asound.h), but that header cannot be included directly: user space carries its own copy of it in alsa-lib, so including videodev2.h together with asound.h and asoundlib.h would cause conflicts there.
Therefore fourcc codes are used for the audio formats instead.
Signed-off-by: Shengjiu Wang shengjiu.wang@nxp.com --- .../userspace-api/media/v4l/pixfmt-audio.rst | 100 ++++++++++++++++++ .../userspace-api/media/v4l/pixfmt.rst | 1 + drivers/media/v4l2-core/v4l2-ioctl.c | 13 +++ include/uapi/linux/videodev2.h | 29 +++++ 4 files changed, 143 insertions(+) create mode 100644 Documentation/userspace-api/media/v4l/pixfmt-audio.rst
diff --git a/Documentation/userspace-api/media/v4l/pixfmt-audio.rst b/Documentation/userspace-api/media/v4l/pixfmt-audio.rst new file mode 100644 index 000000000000..a66db9a19936 --- /dev/null +++ b/Documentation/userspace-api/media/v4l/pixfmt-audio.rst @@ -0,0 +1,100 @@ +.. SPDX-License-Identifier: GFDL-1.1-no-invariants-or-later + +.. _pixfmt-audio: + +************* +Audio Formats +************* + +These formats are used by the :ref:`audiomem2mem` interface only. + +All FourCCs starting with 'AU' are reserved for mappings +of the snd_pcm_format_t type. + +The v4l2_audfmt_to_fourcc() is defined to convert the snd_pcm_format_t +type to a FourCC. The first character is 'A', the second character +is 'U', and the remaining two characters are the snd_pcm_format_t +value in ASCII. Example: SNDRV_PCM_FORMAT_S16_LE (with value 2) +maps to 'AU02' and SNDRV_PCM_FORMAT_S24_3LE (with value 32) maps +to 'AU32'. + +The v4l2_fourcc_to_audfmt() is defined to convert these FourCCs back to +the snd_pcm_format_t type. + +.. tabularcolumns:: |p{5.8cm}|p{1.2cm}|p{10.3cm}| + +.. cssclass:: longtable + +.. flat-table:: Audio Format + :header-rows: 1 + :stub-columns: 0 + :widths: 3 1 4 + + * - Identifier + - Code + - Details + * .. _V4L2-AUDIO-FMT-S8: + + - ``V4L2_AUDIO_FMT_S8`` + - 'S8' + - Corresponds to SNDRV_PCM_FORMAT_S8 in ALSA + * .. _V4L2-AUDIO-FMT-S16-LE: + + - ``V4L2_AUDIO_FMT_S16_LE`` + - 'S16_LE' + - Corresponds to SNDRV_PCM_FORMAT_S16_LE in ALSA + * .. _V4L2-AUDIO-FMT-U16-LE: + + - ``V4L2_AUDIO_FMT_U16_LE`` + - 'U16_LE' + - Corresponds to SNDRV_PCM_FORMAT_U16_LE in ALSA + * .. _V4L2-AUDIO-FMT-S24-LE: + + - ``V4L2_AUDIO_FMT_S24_LE`` + - 'S24_LE' + - Corresponds to SNDRV_PCM_FORMAT_S24_LE in ALSA + * .. _V4L2-AUDIO-FMT-U24-LE: + + - ``V4L2_AUDIO_FMT_U24_LE`` + - 'U24_LE' + - Corresponds to SNDRV_PCM_FORMAT_U24_LE in ALSA + * .. _V4L2-AUDIO-FMT-S32-LE: + + - ``V4L2_AUDIO_FMT_S32_LE`` + - 'S32_LE' + - Corresponds to SNDRV_PCM_FORMAT_S32_LE in ALSA + * .. _V4L2-AUDIO-FMT-U32-LE: + + - ``V4L2_AUDIO_FMT_U32_LE`` + - 'U32_LE' + - Corresponds to SNDRV_PCM_FORMAT_U32_LE in ALSA + * .. _V4L2-AUDIO-FMT-FLOAT-LE: + + - ``V4L2_AUDIO_FMT_FLOAT_LE`` + - 'FLOAT_LE' + - Corresponds to SNDRV_PCM_FORMAT_FLOAT_LE in ALSA + * .. _V4L2-AUDIO-FMT-IEC958-SUBFRAME-LE: + + - ``V4L2_AUDIO_FMT_IEC958_SUBFRAME_LE`` + - 'IEC958_SUBFRAME_LE' + - Corresponds to SNDRV_PCM_FORMAT_IEC958_SUBFRAME_LE in ALSA + * .. _V4L2-AUDIO-FMT-S24-3LE: + + - ``V4L2_AUDIO_FMT_S24_3LE`` + - 'S24_3LE' + - Corresponds to SNDRV_PCM_FORMAT_S24_3LE in ALSA + * .. _V4L2-AUDIO-FMT-U24-3LE: + + - ``V4L2_AUDIO_FMT_U24_3LE`` + - 'U24_3LE' + - Corresponds to SNDRV_PCM_FORMAT_U24_3LE in ALSA + * .. _V4L2-AUDIO-FMT-S20-3LE: + + - ``V4L2_AUDIO_FMT_S20_3LE`` + - 'S20_3LE' + - Corresponds to SNDRV_PCM_FORMAT_S20_3LE in ALSA + * .. _V4L2-AUDIO-FMT-U20-3LE: + + - ``V4L2_AUDIO_FMT_U20_3LE`` + - 'U20_3LE' + - Corresponds to SNDRV_PCM_FORMAT_U20_3LE in ALSA diff --git a/Documentation/userspace-api/media/v4l/pixfmt.rst b/Documentation/userspace-api/media/v4l/pixfmt.rst index 11dab4a90630..2eb6fdd3b43d 100644 --- a/Documentation/userspace-api/media/v4l/pixfmt.rst +++ b/Documentation/userspace-api/media/v4l/pixfmt.rst @@ -36,3 +36,4 @@ see also :ref:`VIDIOC_G_FBUF <VIDIOC_G_FBUF>`.)
colorspaces colorspaces-defs colorspaces-details + pixfmt-audio diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c index 961abcdf7290..be229c69e991 100644 --- a/drivers/media/v4l2-core/v4l2-ioctl.c +++ b/drivers/media/v4l2-core/v4l2-ioctl.c @@ -1471,6 +1471,19 @@ static void v4l_fill_fmtdesc(struct v4l2_fmtdesc *fmt) case V4L2_PIX_FMT_Y210: descr = "10-bit YUYV Packed"; break; case V4L2_PIX_FMT_Y212: descr = "12-bit YUYV Packed"; break; case V4L2_PIX_FMT_Y216: descr = "16-bit YUYV Packed"; break; + case V4L2_AUDIO_FMT_S8: descr = "8-bit Signed"; break; + case V4L2_AUDIO_FMT_S16_LE: descr = "16-bit Signed LE"; break; + case V4L2_AUDIO_FMT_U16_LE: descr = "16-bit Unsigned LE"; break; + case V4L2_AUDIO_FMT_S24_LE: descr = "24(32)-bit Signed LE"; break; + case V4L2_AUDIO_FMT_U24_LE: descr = "24(32)-bit Unsigned LE"; break; + case V4L2_AUDIO_FMT_S32_LE: descr = "32-bit Signed LE"; break; + case V4L2_AUDIO_FMT_U32_LE: descr = "32-bit Unsigned LE"; break; + case V4L2_AUDIO_FMT_FLOAT_LE: descr = "32-bit Float LE"; break; + case V4L2_AUDIO_FMT_IEC958_SUBFRAME_LE: descr = "32-bit IEC958 LE"; break; + case V4L2_AUDIO_FMT_S24_3LE: descr = "24(24)-bit Signed LE"; break; + case V4L2_AUDIO_FMT_U24_3LE: descr = "24(24)-bit Unsigned LE"; break; + case V4L2_AUDIO_FMT_S20_3LE: descr = "20(24)-bit Signed LE"; break; + case V4L2_AUDIO_FMT_U20_3LE: descr = "20(24)-bit Unsigned LE"; break;
default: /* Compressed formats */ diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h index 2c03d2dfadbe..c7b1bfad82c4 100644 --- a/include/uapi/linux/videodev2.h +++ b/include/uapi/linux/videodev2.h @@ -843,6 +843,35 @@ struct v4l2_pix_format { #define V4L2_META_FMT_RK_ISP1_PARAMS v4l2_fourcc('R', 'K', '1', 'P') /* Rockchip ISP1 3A Parameters */ #define V4L2_META_FMT_RK_ISP1_STAT_3A v4l2_fourcc('R', 'K', '1', 'S') /* Rockchip ISP1 3A Statistics */
+/* + * Audio-data formats + * All these audio formats use a fourcc starting with 'AU' + * followed by the SNDRV_PCM_FORMAT_ value from asound.h. + * + * FourCCs starting with 'AU' are reserved for the snd_pcm_format_t + * to fourcc mappings + */ +#define V4L2_AUDIO_FMT_S8 v4l2_fourcc('A', 'U', '0', '0') +#define V4L2_AUDIO_FMT_S16_LE v4l2_fourcc('A', 'U', '0', '2') +#define V4L2_AUDIO_FMT_U16_LE v4l2_fourcc('A', 'U', '0', '4') +#define V4L2_AUDIO_FMT_S24_LE v4l2_fourcc('A', 'U', '0', '6') +#define V4L2_AUDIO_FMT_U24_LE v4l2_fourcc('A', 'U', '0', '8') +#define V4L2_AUDIO_FMT_S32_LE v4l2_fourcc('A', 'U', '1', '0') +#define V4L2_AUDIO_FMT_U32_LE v4l2_fourcc('A', 'U', '1', '2') +#define V4L2_AUDIO_FMT_FLOAT_LE v4l2_fourcc('A', 'U', '1', '4') +#define V4L2_AUDIO_FMT_IEC958_SUBFRAME_LE v4l2_fourcc('A', 'U', '1', '8') +#define V4L2_AUDIO_FMT_S24_3LE v4l2_fourcc('A', 'U', '3', '2') +#define V4L2_AUDIO_FMT_U24_3LE v4l2_fourcc('A', 'U', '3', '4') +#define V4L2_AUDIO_FMT_S20_3LE v4l2_fourcc('A', 'U', '3', '6') +#define V4L2_AUDIO_FMT_U20_3LE v4l2_fourcc('A', 'U', '3', '8') + +#define v4l2_fourcc_to_audfmt(fourcc) \ + (__force snd_pcm_format_t)(((((fourcc) >> 16) & 0xff) - '0') * 10 \ + + ((((fourcc) >> 24) & 0xff) - '0')) + +#define v4l2_audfmt_to_fourcc(audfmt) \ + v4l2_fourcc('A', 'U', '0' + (__force int)(audfmt) / 10, '0' + (__force int)(audfmt) % 10) + /* priv field value to indicates that subsequent fields are valid. */ #define V4L2_PIX_FMT_PRIV_MAGIC 0xfeedcafe
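A small standalone check of the 'AU' mapping above (illustrative only; the numeric ALSA values quoted in the comments are the snd_pcm_format_t values from asound.h):

#include <assert.h>
#include <linux/videodev2.h>

int main(void)
{
	/* SNDRV_PCM_FORMAT_S16_LE has the value 2, so its fourcc is 'AU02'. */
	assert(V4L2_AUDIO_FMT_S16_LE == v4l2_fourcc('A', 'U', '0', '2'));

	/* SNDRV_PCM_FORMAT_S24_3LE has the value 32, so its fourcc is 'AU32'. */
	assert(V4L2_AUDIO_FMT_S24_3LE == v4l2_fourcc('A', 'U', '3', '2'));

	/* Building a fourcc by hand from a numeric ALSA format value, the way
	 * the v4l2_audfmt_to_fourcc() macro does it:
	 */
	int audfmt = 2;
	assert(v4l2_fourcc('A', 'U', '0' + audfmt / 10, '0' + audfmt % 10) ==
	       V4L2_AUDIO_FMT_S16_LE);
	return 0;
}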
The Audio M2M class includes controls for audio memory-to-memory use cases. These controls can be used for audio codecs, audio preprocessing and audio postprocessing.
Signed-off-by: Shengjiu Wang shengjiu.wang@nxp.com --- .../userspace-api/media/v4l/common.rst | 1 + .../media/v4l/ext-ctrls-audio-m2m.rst | 21 +++++++++++++++++++ .../media/v4l/vidioc-g-ext-ctrls.rst | 4 ++++ drivers/media/v4l2-core/v4l2-ctrls-defs.c | 4 ++++ include/uapi/linux/v4l2-controls.h | 4 ++++ 5 files changed, 34 insertions(+) create mode 100644 Documentation/userspace-api/media/v4l/ext-ctrls-audio-m2m.rst
diff --git a/Documentation/userspace-api/media/v4l/common.rst b/Documentation/userspace-api/media/v4l/common.rst index ea0435182e44..d5366e96a596 100644 --- a/Documentation/userspace-api/media/v4l/common.rst +++ b/Documentation/userspace-api/media/v4l/common.rst @@ -52,6 +52,7 @@ applicable to all devices. ext-ctrls-fm-rx ext-ctrls-detect ext-ctrls-colorimetry + ext-ctrls-audio-m2m fourcc format planar-apis diff --git a/Documentation/userspace-api/media/v4l/ext-ctrls-audio-m2m.rst b/Documentation/userspace-api/media/v4l/ext-ctrls-audio-m2m.rst new file mode 100644 index 000000000000..82d2ecedbfee --- /dev/null +++ b/Documentation/userspace-api/media/v4l/ext-ctrls-audio-m2m.rst @@ -0,0 +1,21 @@ +.. SPDX-License-Identifier: GFDL-1.1-no-invariants-or-later + +.. _audiom2m-controls: + +*************************** +Audio M2M Control Reference +*************************** + +The Audio M2M class includes controls for audio memory-to-memory +use cases. The controls can be used for audio codecs, audio +preprocessing, audio postprocessing. + +Audio M2M Control IDs +----------------------- + +.. _audiom2m-control-id: + +``V4L2_CID_M2M_AUDIO_CLASS (class)`` + The Audio M2M class descriptor. Calling + :ref:`VIDIOC_QUERYCTRL` for this control will + return a description of this control class. diff --git a/Documentation/userspace-api/media/v4l/vidioc-g-ext-ctrls.rst b/Documentation/userspace-api/media/v4l/vidioc-g-ext-ctrls.rst index 4d56c0528ad7..aeb1ad8e7d29 100644 --- a/Documentation/userspace-api/media/v4l/vidioc-g-ext-ctrls.rst +++ b/Documentation/userspace-api/media/v4l/vidioc-g-ext-ctrls.rst @@ -488,6 +488,10 @@ still cause this situation. - 0xa50000 - The class containing colorimetry controls. These controls are described in :ref:`colorimetry-controls`. + * - ``V4L2_CTRL_CLASS_M2M_AUDIO`` + - 0xa60000 + - The class containing audio m2m controls. These controls are + described in :ref:`audiom2m-controls`.
Return Value ============ diff --git a/drivers/media/v4l2-core/v4l2-ctrls-defs.c b/drivers/media/v4l2-core/v4l2-ctrls-defs.c index 8696eb1cdd61..2a85ea3dc92f 100644 --- a/drivers/media/v4l2-core/v4l2-ctrls-defs.c +++ b/drivers/media/v4l2-core/v4l2-ctrls-defs.c @@ -1242,6 +1242,9 @@ const char *v4l2_ctrl_get_name(u32 id) case V4L2_CID_COLORIMETRY_CLASS: return "Colorimetry Controls"; case V4L2_CID_COLORIMETRY_HDR10_CLL_INFO: return "HDR10 Content Light Info"; case V4L2_CID_COLORIMETRY_HDR10_MASTERING_DISPLAY: return "HDR10 Mastering Display"; + + /* Audio M2M controls */ + case V4L2_CID_M2M_AUDIO_CLASS: return "Audio M2M Controls"; default: return NULL; } @@ -1451,6 +1454,7 @@ void v4l2_ctrl_fill(u32 id, const char **name, enum v4l2_ctrl_type *type, case V4L2_CID_DETECT_CLASS: case V4L2_CID_CODEC_STATELESS_CLASS: case V4L2_CID_COLORIMETRY_CLASS: + case V4L2_CID_M2M_AUDIO_CLASS: *type = V4L2_CTRL_TYPE_CTRL_CLASS; /* You can neither read nor write these */ *flags |= V4L2_CTRL_FLAG_READ_ONLY | V4L2_CTRL_FLAG_WRITE_ONLY; diff --git a/include/uapi/linux/v4l2-controls.h b/include/uapi/linux/v4l2-controls.h index 99c3f5e99da7..a8b4b830c757 100644 --- a/include/uapi/linux/v4l2-controls.h +++ b/include/uapi/linux/v4l2-controls.h @@ -30,6 +30,7 @@ #define V4L2_CTRL_CLASS_DETECT 0x00a30000 /* Detection controls */ #define V4L2_CTRL_CLASS_CODEC_STATELESS 0x00a40000 /* Stateless codecs controls */ #define V4L2_CTRL_CLASS_COLORIMETRY 0x00a50000 /* Colorimetry controls */ +#define V4L2_CTRL_CLASS_M2M_AUDIO 0x00a60000 /* Audio M2M controls */
/* User-class control IDs */
@@ -3491,6 +3492,9 @@ struct v4l2_ctrl_av1_film_grain { __u8 reserved[4]; };
+#define V4L2_CID_M2M_AUDIO_CLASS_BASE (V4L2_CTRL_CLASS_M2M_AUDIO | 0x900) +#define V4L2_CID_M2M_AUDIO_CLASS (V4L2_CTRL_CLASS_M2M_AUDIO | 1) + /* MPEG-compression definitions kept for backwards compatibility */ #ifndef __KERNEL__ #define V4L2_CTRL_CLASS_MPEG V4L2_CTRL_CLASS_CODEC
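As a hedged sketch of how an application could enumerate whatever controls a driver exposes in this new class (nothing here is mandated by the patch; it only relies on the existing VIDIOC_QUERY_EXT_CTRL iteration mechanism and an already-opened device fd):

#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* List every control belonging to the Audio M2M class on device 'fd'. */
static void list_audio_m2m_controls(int fd)
{
	struct v4l2_query_ext_ctrl qc;

	memset(&qc, 0, sizeof(qc));
	qc.id = V4L2_CTRL_CLASS_M2M_AUDIO | V4L2_CTRL_FLAG_NEXT_CTRL;

	while (ioctl(fd, VIDIOC_QUERY_EXT_CTRL, &qc) == 0) {
		/* Stop once the iteration leaves the Audio M2M class. */
		if (V4L2_CTRL_ID2WHICH(qc.id) != V4L2_CTRL_CLASS_M2M_AUDIO)
			break;
		printf("0x%08x: %s\n", qc.id, qc.name);
		qc.id |= V4L2_CTRL_FLAG_NEXT_CTRL;
	}
}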
Add two new control IDs, V4L2_CID_M2M_AUDIO_SOURCE_RATE and V4L2_CID_M2M_AUDIO_DEST_RATE, for configuring the source and destination sample rates.
Add V4L2_CID_M2M_AUDIO_SOURCE_RATE_OFFSET and V4L2_CID_M2M_AUDIO_DEST_RATE_OFFSET to compensate for clock drift.
Signed-off-by: Shengjiu Wang shengjiu.wang@nxp.com --- .../media/v4l/ext-ctrls-audio-m2m.rst | 38 +++++++++++++++++++ drivers/media/v4l2-core/v4l2-ctrls-defs.c | 6 +++ include/uapi/linux/v4l2-controls.h | 5 +++ 3 files changed, 49 insertions(+)
diff --git a/Documentation/userspace-api/media/v4l/ext-ctrls-audio-m2m.rst b/Documentation/userspace-api/media/v4l/ext-ctrls-audio-m2m.rst index 82d2ecedbfee..b137b7c442e6 100644 --- a/Documentation/userspace-api/media/v4l/ext-ctrls-audio-m2m.rst +++ b/Documentation/userspace-api/media/v4l/ext-ctrls-audio-m2m.rst @@ -19,3 +19,41 @@ Audio M2M Control IDs The Audio M2M class descriptor. Calling :ref:`VIDIOC_QUERYCTRL` for this control will return a description of this control class. + +.. _v4l2-audio-asrc: + +``V4L2_CID_M2M_AUDIO_SOURCE_RATE (integer menu)`` + This control specifies the audio source sample rate, in Hz. + +``V4L2_CID_M2M_AUDIO_DEST_RATE (integer menu)`` + This control specifies the audio destination sample rate, in Hz. + +``V4L2_CID_M2M_AUDIO_SOURCE_RATE_OFFSET (fixed point)`` + This control specifies the offset from the audio source sample rate, + in Hz. + + The offset compensates for any clock drift. The actual source audio + sample rate is the ideal source audio sample rate from + ``V4L2_CID_M2M_AUDIO_SOURCE_RATE`` plus this fixed point offset. + + The audio source clock may drift. The sample rate can be adjusted + dynamically so that the Asynchronous Sample Rate Converter module is + working on the real sample rate. + So, userspace is expected to monitor such drift + and to increase/decrease the sample frequency as needed + through this control. + +``V4L2_CID_M2M_AUDIO_DEST_RATE_OFFSET (fixed point)`` + This control specifies the offset from the audio destination sample rate, + in Hz. + + The offset compensates for any clock drift. The actual destination audio + sample rate is the ideal destination audio sample rate from + ``V4L2_CID_M2M_AUDIO_DEST_RATE`` plus this fixed point offset. + + The audio destination clock may drift. The sample rate can be adjusted + dynamically so that the Asynchronous Sample Rate Converter module is + working on the real sample rate. + So, userspace is expected to monitor such drift + and to increase/decrease the sample frequency as needed + through this control. diff --git a/drivers/media/v4l2-core/v4l2-ctrls-defs.c b/drivers/media/v4l2-core/v4l2-ctrls-defs.c index 2a85ea3dc92f..91e1f5348c23 100644 --- a/drivers/media/v4l2-core/v4l2-ctrls-defs.c +++ b/drivers/media/v4l2-core/v4l2-ctrls-defs.c @@ -1245,6 +1245,8 @@ const char *v4l2_ctrl_get_name(u32 id)
/* Audio M2M controls */ case V4L2_CID_M2M_AUDIO_CLASS: return "Audio M2M Controls"; + case V4L2_CID_M2M_AUDIO_SOURCE_RATE: return "Audio Source Sample Rate"; + case V4L2_CID_M2M_AUDIO_DEST_RATE: return "Audio Destination Sample Rate"; default: return NULL; } @@ -1606,6 +1608,10 @@ void v4l2_ctrl_fill(u32 id, const char **name, enum v4l2_ctrl_type *type, case V4L2_CID_COLORIMETRY_HDR10_MASTERING_DISPLAY: *type = V4L2_CTRL_TYPE_HDR10_MASTERING_DISPLAY; break; + case V4L2_CID_M2M_AUDIO_SOURCE_RATE: + case V4L2_CID_M2M_AUDIO_DEST_RATE: + *type = V4L2_CTRL_TYPE_INTEGER_MENU; + break; default: *type = V4L2_CTRL_TYPE_INTEGER; break; diff --git a/include/uapi/linux/v4l2-controls.h b/include/uapi/linux/v4l2-controls.h index a8b4b830c757..30129ccdc282 100644 --- a/include/uapi/linux/v4l2-controls.h +++ b/include/uapi/linux/v4l2-controls.h @@ -3495,6 +3495,11 @@ struct v4l2_ctrl_av1_film_grain { #define V4L2_CID_M2M_AUDIO_CLASS_BASE (V4L2_CTRL_CLASS_M2M_AUDIO | 0x900) #define V4L2_CID_M2M_AUDIO_CLASS (V4L2_CTRL_CLASS_M2M_AUDIO | 1)
+#define V4L2_CID_M2M_AUDIO_SOURCE_RATE (V4L2_CID_M2M_AUDIO_CLASS_BASE + 0) +#define V4L2_CID_M2M_AUDIO_DEST_RATE (V4L2_CID_M2M_AUDIO_CLASS_BASE + 1) +#define V4L2_CID_M2M_AUDIO_SOURCE_RATE_OFFSET (V4L2_CID_M2M_AUDIO_CLASS_BASE + 2) +#define V4L2_CID_M2M_AUDIO_DEST_RATE_OFFSET (V4L2_CID_M2M_AUDIO_CLASS_BASE + 3) + /* MPEG-compression definitions kept for backwards compatibility */ #ifndef __KERNEL__ #define V4L2_CTRL_CLASS_MPEG V4L2_CTRL_CLASS_CODEC
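A rough user-space sketch of applying a drift correction through the new offset control; the +0.5 Hz value is only an example, and the Q-format (32 fraction bits) is taken from the imx-asrc driver added later in this series:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Nudge the destination sample rate by +0.5 Hz to track measured drift. */
static int set_dest_rate_offset(int fd)
{
	struct v4l2_ext_control ctrl;
	struct v4l2_ext_controls ctrls;

	memset(&ctrl, 0, sizeof(ctrl));
	ctrl.id = V4L2_CID_M2M_AUDIO_DEST_RATE_OFFSET;
	ctrl.value64 = 1LL << 31;	/* 0.5 Hz with 32 fraction bits */

	memset(&ctrls, 0, sizeof(ctrls));
	ctrls.which = V4L2_CTRL_WHICH_CUR_VAL;
	ctrls.count = 1;
	ctrls.controls = &ctrl;

	return ioctl(fd, VIDIOC_S_EXT_CTRLS, &ctrls);
}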
Declare the interface type that will be used by audio devices: MEDIA_INTF_T_V4L_AUDIO.
Signed-off-by: Shengjiu Wang shengjiu.wang@nxp.com --- .../userspace-api/media/mediactl/media-types.rst | 5 +++++ drivers/media/v4l2-core/v4l2-dev.c | 4 ++++ drivers/media/v4l2-core/v4l2-mem2mem.c | 13 +++++++++---- include/uapi/linux/media.h | 1 + 4 files changed, 19 insertions(+), 4 deletions(-)
diff --git a/Documentation/userspace-api/media/mediactl/media-types.rst b/Documentation/userspace-api/media/mediactl/media-types.rst index 6332e8395263..adfb37430f8e 100644 --- a/Documentation/userspace-api/media/mediactl/media-types.rst +++ b/Documentation/userspace-api/media/mediactl/media-types.rst @@ -265,6 +265,7 @@ Types and flags used to represent the media graph elements .. _MEDIA-INTF-T-V4L-SUBDEV: .. _MEDIA-INTF-T-V4L-SWRADIO: .. _MEDIA-INTF-T-V4L-TOUCH: +.. _MEDIA-INTF-T-V4L-AUDIO: .. _MEDIA-INTF-T-ALSA-PCM-CAPTURE: .. _MEDIA-INTF-T-ALSA-PCM-PLAYBACK: .. _MEDIA-INTF-T-ALSA-CONTROL: @@ -322,6 +323,10 @@ Types and flags used to represent the media graph elements - Device node interface for Touch device (V4L) - typically, /dev/v4l-touch?
+ * - ``MEDIA_INTF_T_V4L_AUDIO`` + - Device node interface for Audio device (V4L) + - typically, /dev/v4l-audio? + * - ``MEDIA_INTF_T_ALSA_PCM_CAPTURE`` - Device node interface for ALSA PCM Capture - typically, /dev/snd/pcmC?D?c diff --git a/drivers/media/v4l2-core/v4l2-dev.c b/drivers/media/v4l2-core/v4l2-dev.c index bac008fcedc6..ca8462a61e1f 100644 --- a/drivers/media/v4l2-core/v4l2-dev.c +++ b/drivers/media/v4l2-core/v4l2-dev.c @@ -844,6 +844,10 @@ static int video_register_media_controller(struct video_device *vdev) intf_type = MEDIA_INTF_T_V4L_SUBDEV; /* Entity will be created via v4l2_device_register_subdev() */ break; + case VFL_TYPE_AUDIO: + intf_type = MEDIA_INTF_T_V4L_AUDIO; + /* Entity will be created via v4l2_device_register_subdev() */ + break; default: return 0; } diff --git a/drivers/media/v4l2-core/v4l2-mem2mem.c b/drivers/media/v4l2-core/v4l2-mem2mem.c index 75517134a5e9..cda5e255305f 100644 --- a/drivers/media/v4l2-core/v4l2-mem2mem.c +++ b/drivers/media/v4l2-core/v4l2-mem2mem.c @@ -1143,10 +1143,15 @@ int v4l2_m2m_register_media_controller(struct v4l2_m2m_dev *m2m_dev, if (ret) goto err_rm_links0;
- /* Create video interface */ - m2m_dev->intf_devnode = media_devnode_create(mdev, - MEDIA_INTF_T_V4L_VIDEO, 0, - VIDEO_MAJOR, vdev->minor); + if (vdev->vfl_type == VFL_TYPE_AUDIO) + m2m_dev->intf_devnode = media_devnode_create(mdev, + MEDIA_INTF_T_V4L_AUDIO, 0, + VIDEO_MAJOR, vdev->minor); + else + /* Create video interface */ + m2m_dev->intf_devnode = media_devnode_create(mdev, + MEDIA_INTF_T_V4L_VIDEO, 0, + VIDEO_MAJOR, vdev->minor); if (!m2m_dev->intf_devnode) { ret = -ENOMEM; goto err_rm_links1; diff --git a/include/uapi/linux/media.h b/include/uapi/linux/media.h index 1c80b1d6bbaf..9ff6dec7393a 100644 --- a/include/uapi/linux/media.h +++ b/include/uapi/linux/media.h @@ -260,6 +260,7 @@ struct media_links_enum { #define MEDIA_INTF_T_V4L_SUBDEV (MEDIA_INTF_T_V4L_BASE + 3) #define MEDIA_INTF_T_V4L_SWRADIO (MEDIA_INTF_T_V4L_BASE + 4) #define MEDIA_INTF_T_V4L_TOUCH (MEDIA_INTF_T_V4L_BASE + 5) +#define MEDIA_INTF_T_V4L_AUDIO (MEDIA_INTF_T_V4L_BASE + 6)
#define MEDIA_INTF_T_ALSA_BASE 0x00000300 #define MEDIA_INTF_T_ALSA_PCM_CAPTURE (MEDIA_INTF_T_ALSA_BASE)
Add and document a media entity type for an audio resampler: MEDIA_ENT_F_PROC_AUDIO_RESAMPLER.
Signed-off-by: Shengjiu Wang shengjiu.wang@nxp.com --- Documentation/userspace-api/media/mediactl/media-types.rst | 6 ++++++ include/uapi/linux/media.h | 1 + 2 files changed, 7 insertions(+)
diff --git a/Documentation/userspace-api/media/mediactl/media-types.rst b/Documentation/userspace-api/media/mediactl/media-types.rst index adfb37430f8e..d353f17c3344 100644 --- a/Documentation/userspace-api/media/mediactl/media-types.rst +++ b/Documentation/userspace-api/media/mediactl/media-types.rst @@ -40,6 +40,7 @@ Types and flags used to represent the media graph elements .. _MEDIA-ENT-F-PROC-VIDEO-ENCODER: .. _MEDIA-ENT-F-PROC-VIDEO-DECODER: .. _MEDIA-ENT-F-PROC-VIDEO-ISP: +.. _MEDIA-ENT-F-PROC-AUDIO-RESAMPLER: .. _MEDIA-ENT-F-VID-MUX: .. _MEDIA-ENT-F-VID-IF-BRIDGE: .. _MEDIA-ENT-F-DV-DECODER: @@ -208,6 +209,11 @@ Types and flags used to represent the media graph elements combination of custom V4L2 controls and IOCTLs, and parameters supplied in a metadata buffer.
+ * - ``MEDIA_ENT_F_PROC_AUDIO_RESAMPLER`` + - An Audio Resampler device. An entity capable of + resampling an audio stream from one sample rate to another sample + rate. Must have one sink pad and at least one source pad. + * - ``MEDIA_ENT_F_VID_MUX`` - Video multiplexer. An entity capable of multiplexing must have at least two sink pads and one source pad, and must pass the video diff --git a/include/uapi/linux/media.h b/include/uapi/linux/media.h index 9ff6dec7393a..a8266eaa8042 100644 --- a/include/uapi/linux/media.h +++ b/include/uapi/linux/media.h @@ -125,6 +125,7 @@ struct media_device_info { #define MEDIA_ENT_F_PROC_VIDEO_ENCODER (MEDIA_ENT_F_BASE + 0x4007) #define MEDIA_ENT_F_PROC_VIDEO_DECODER (MEDIA_ENT_F_BASE + 0x4008) #define MEDIA_ENT_F_PROC_VIDEO_ISP (MEDIA_ENT_F_BASE + 0x4009) +#define MEDIA_ENT_F_PROC_AUDIO_RESAMPLER (MEDIA_ENT_F_BASE + 0x400a)
/* * Switch and bridge entity functions
Add two fixed-point test controls: one for the Q4.16 format and another for the Q63 format.
Signed-off-by: Shengjiu Wang shengjiu.wang@nxp.com --- drivers/media/test-drivers/vivid/vivid-core.h | 2 ++ .../media/test-drivers/vivid/vivid-ctrls.c | 26 +++++++++++++++++++ include/media/v4l2-ctrls.h | 6 +++++ 3 files changed, 34 insertions(+)
diff --git a/drivers/media/test-drivers/vivid/vivid-core.h b/drivers/media/test-drivers/vivid/vivid-core.h index cfb8e66083f6..f65465191bc9 100644 --- a/drivers/media/test-drivers/vivid/vivid-core.h +++ b/drivers/media/test-drivers/vivid/vivid-core.h @@ -222,6 +222,8 @@ struct vivid_dev { struct v4l2_ctrl *boolean; struct v4l2_ctrl *int32; struct v4l2_ctrl *int64; + struct v4l2_ctrl *int32_q16; + struct v4l2_ctrl *int64_q63; struct v4l2_ctrl *menu; struct v4l2_ctrl *string; struct v4l2_ctrl *bitmask; diff --git a/drivers/media/test-drivers/vivid/vivid-ctrls.c b/drivers/media/test-drivers/vivid/vivid-ctrls.c index f2b20e25a7a4..2444ea95b285 100644 --- a/drivers/media/test-drivers/vivid/vivid-ctrls.c +++ b/drivers/media/test-drivers/vivid/vivid-ctrls.c @@ -38,6 +38,8 @@ #define VIVID_CID_U8_PIXEL_ARRAY (VIVID_CID_CUSTOM_BASE + 14) #define VIVID_CID_S32_ARRAY (VIVID_CID_CUSTOM_BASE + 15) #define VIVID_CID_S64_ARRAY (VIVID_CID_CUSTOM_BASE + 16) +#define VIVID_CID_INT_Q4_16 (VIVID_CID_CUSTOM_BASE + 17) +#define VIVID_CID_INT64_Q63 (VIVID_CID_CUSTOM_BASE + 18)
#define VIVID_CID_VIVID_BASE (0x00f00000 | 0xf000) #define VIVID_CID_VIVID_CLASS (0x00f00000 | 1) @@ -182,6 +184,28 @@ static const struct v4l2_ctrl_config vivid_ctrl_int64 = { .step = 1, };
+static const struct v4l2_ctrl_config vivid_ctrl_int32_q16 = { + .ops = &vivid_user_gen_ctrl_ops, + .id = VIVID_CID_INT_Q4_16, + .name = "Integer 32 Bits Q4.16", + .type = V4L2_CTRL_TYPE_INTEGER, + .min = v4l2_ctrl_fp_compose(-16, 0, 16), + .max = v4l2_ctrl_fp_compose(15, 0xffff, 16), + .step = 1, + .fraction_bits = 16, +}; + +static const struct v4l2_ctrl_config vivid_ctrl_int64_q63 = { + .ops = &vivid_user_gen_ctrl_ops, + .id = VIVID_CID_INT64_Q63, + .name = "Integer 64 Bits Q63", + .type = V4L2_CTRL_TYPE_INTEGER64, + .min = v4l2_ctrl_fp_compose(-1, 0, 63), + .max = v4l2_ctrl_fp_compose(0, LLONG_MAX, 63), + .step = 1, + .fraction_bits = 63, +}; + static const struct v4l2_ctrl_config vivid_ctrl_u32_array = { .ops = &vivid_user_gen_ctrl_ops, .id = VIVID_CID_U32_ARRAY, @@ -1670,6 +1694,8 @@ int vivid_create_controls(struct vivid_dev *dev, bool show_ccs_cap, dev->button = v4l2_ctrl_new_custom(hdl_user_gen, &vivid_ctrl_button, NULL); dev->int32 = v4l2_ctrl_new_custom(hdl_user_gen, &vivid_ctrl_int32, NULL); dev->int64 = v4l2_ctrl_new_custom(hdl_user_gen, &vivid_ctrl_int64, NULL); + dev->int32_q16 = v4l2_ctrl_new_custom(hdl_user_gen, &vivid_ctrl_int32_q16, NULL); + dev->int64_q63 = v4l2_ctrl_new_custom(hdl_user_gen, &vivid_ctrl_int64_q63, NULL); dev->boolean = v4l2_ctrl_new_custom(hdl_user_gen, &vivid_ctrl_boolean, NULL); dev->menu = v4l2_ctrl_new_custom(hdl_user_gen, &vivid_ctrl_menu, NULL); dev->string = v4l2_ctrl_new_custom(hdl_user_gen, &vivid_ctrl_string, NULL); diff --git a/include/media/v4l2-ctrls.h b/include/media/v4l2-ctrls.h index c35514c5bf88..197d8b67ac13 100644 --- a/include/media/v4l2-ctrls.h +++ b/include/media/v4l2-ctrls.h @@ -1593,4 +1593,10 @@ void v4l2_ctrl_type_op_log(const struct v4l2_ctrl *ctrl); */ int v4l2_ctrl_type_op_validate(const struct v4l2_ctrl *ctrl, union v4l2_ctrl_ptr ptr);
+/* + * Fixed point compose helper define. This helper maps to the value + * i + f / (1 << fraction_bits). + */ +#define v4l2_ctrl_fp_compose(i, f, fraction_bits) (((s64)(i) << fraction_bits) + (f)) + #endif
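The helper composes the value as integer_part * 2^fraction_bits + fractional_part. A small standalone check follows (the macro is restated only so the snippet compiles on its own; the expected values come from the vivid control ranges above):

#include <assert.h>
#include <limits.h>

typedef long long s64;	/* userspace stand-in for the kernel type */

/* Same expression as the kernel helper above. */
#define v4l2_ctrl_fp_compose(i, f, fraction_bits) \
	(((s64)(i) << (fraction_bits)) + (f))

int main(void)
{
	/* Q4.16 maximum used by the vivid control: 15 + 0xffff/65536, just below 16.0 */
	assert(v4l2_ctrl_fp_compose(15, 0xffff, 16) == 0xfffff);

	/* Q63 maximum used by the vivid control: LLONG_MAX / 2^63, just below 1.0 */
	assert(v4l2_ctrl_fp_compose(0, LLONG_MAX, 63) == LLONG_MAX);
	return 0;
}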
Implement the ASRC memory-to-memory function on top of the v4l2 framework; user space drives it through the standard v4l2 ioctl interface.
User space queues output and capture buffers to the driver, and the driver stores the converted data in the capture buffer.
This feature can be shared by the ASRC and EASRC drivers.
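For orientation, a condensed sketch of the expected user-space flow against the /dev/v4l-audioX node (device fd, rates, formats and buffer counts are assumptions; error handling and the mmap/QBUF/DQBUF details are elided):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Sketch of one conversion pass on an already-opened audio M2M device. */
static int convert_once(int fd)
{
	enum v4l2_buf_type out = V4L2_BUF_TYPE_AUDIO_OUTPUT;
	enum v4l2_buf_type cap = V4L2_BUF_TYPE_AUDIO_CAPTURE;
	struct v4l2_requestbuffers req;
	struct v4l2_format fmt;

	/* S16_LE stereo on both queues; the source and destination sample
	 * rates are selected with the V4L2_CID_M2M_AUDIO_*_RATE controls,
	 * not through the format.
	 */
	memset(&fmt, 0, sizeof(fmt));
	fmt.type = out;
	fmt.fmt.audio.audioformat = V4L2_AUDIO_FMT_S16_LE;
	fmt.fmt.audio.channels = 2;
	ioctl(fd, VIDIOC_S_FMT, &fmt);

	fmt.type = cap;
	ioctl(fd, VIDIOC_S_FMT, &fmt);

	/* One MMAP buffer per queue. */
	memset(&req, 0, sizeof(req));
	req.count = 1;
	req.memory = V4L2_MEMORY_MMAP;
	req.type = out;
	ioctl(fd, VIDIOC_REQBUFS, &req);
	req.type = cap;
	ioctl(fd, VIDIOC_REQBUFS, &req);

	/* mmap() both buffers, fill the OUTPUT buffer with input samples,
	 * VIDIOC_QBUF both, then stream and collect the converted data.
	 */
	ioctl(fd, VIDIOC_STREAMON, &out);
	ioctl(fd, VIDIOC_STREAMON, &cap);
	/* ... VIDIOC_DQBUF the CAPTURE buffer, VIDIOC_STREAMOFF ... */
	return 0;
}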
Signed-off-by: Shengjiu Wang shengjiu.wang@nxp.com --- MAINTAINERS | 8 + drivers/media/platform/nxp/Kconfig | 13 + drivers/media/platform/nxp/Makefile | 1 + drivers/media/platform/nxp/imx-asrc.c | 1256 +++++++++++++++++++++++++ 4 files changed, 1278 insertions(+) create mode 100644 drivers/media/platform/nxp/imx-asrc.c
diff --git a/MAINTAINERS b/MAINTAINERS index 375d34363777..7b8b9ee65c61 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -15821,6 +15821,14 @@ F: drivers/nvmem/ F: include/linux/nvmem-consumer.h F: include/linux/nvmem-provider.h
+NXP ASRC V4L2 MEM2MEM DRIVERS +M: Shengjiu Wang shengjiu.wang@gmail.com +L: linux-media@vger.kernel.org +S: Maintained +W: https://linuxtv.org +T: git git://linuxtv.org/media_tree.git +F: drivers/media/platform/nxp/imx-asrc.c + NXP BLUETOOTH WIRELESS DRIVERS M: Amitkumar Karwar amitkumar.karwar@nxp.com M: Neeraj Kale neeraj.sanjaykale@nxp.com diff --git a/drivers/media/platform/nxp/Kconfig b/drivers/media/platform/nxp/Kconfig index 40e3436669e2..8d0ca335601f 100644 --- a/drivers/media/platform/nxp/Kconfig +++ b/drivers/media/platform/nxp/Kconfig @@ -67,3 +67,16 @@ config VIDEO_MX2_EMMAPRP
source "drivers/media/platform/nxp/dw100/Kconfig" source "drivers/media/platform/nxp/imx-jpeg/Kconfig" + +config VIDEO_IMX_ASRC + tristate "NXP i.MX ASRC M2M support" + depends on V4L_MEM2MEM_DRIVERS + depends on MEDIA_SUPPORT + select VIDEOBUF2_DMA_CONTIG + select V4L2_MEM2MEM_DEV + select MEDIA_CONTROLLER + help + Say Y if you want to add ASRC M2M support for NXP CPUs. + It is a complement for ASRC M2P and ASRC P2M features. + This option is only useful for out-of-tree drivers since + in-tree drivers select it automatically. diff --git a/drivers/media/platform/nxp/Makefile b/drivers/media/platform/nxp/Makefile index 4d90eb713652..1325675e34f5 100644 --- a/drivers/media/platform/nxp/Makefile +++ b/drivers/media/platform/nxp/Makefile @@ -9,3 +9,4 @@ obj-$(CONFIG_VIDEO_IMX8MQ_MIPI_CSI2) += imx8mq-mipi-csi2.o obj-$(CONFIG_VIDEO_IMX_MIPI_CSIS) += imx-mipi-csis.o obj-$(CONFIG_VIDEO_IMX_PXP) += imx-pxp.o obj-$(CONFIG_VIDEO_MX2_EMMAPRP) += mx2_emmaprp.o +obj-$(CONFIG_VIDEO_IMX_ASRC) += imx-asrc.o diff --git a/drivers/media/platform/nxp/imx-asrc.c b/drivers/media/platform/nxp/imx-asrc.c new file mode 100644 index 000000000000..0c25a36199b1 --- /dev/null +++ b/drivers/media/platform/nxp/imx-asrc.c @@ -0,0 +1,1256 @@ +// SPDX-License-Identifier: GPL-2.0 +// +// Copyright (C) 2014-2016 Freescale Semiconductor, Inc. +// Copyright (C) 2019-2023 NXP +// +// Freescale ASRC Memory to Memory (M2M) driver + +#include <linux/dma/imx-dma.h> +#include <linux/pm_runtime.h> +#include <media/v4l2-ctrls.h> +#include <media/v4l2-device.h> +#include <media/v4l2-event.h> +#include <media/v4l2-fh.h> +#include <media/v4l2-ioctl.h> +#include <media/v4l2-mem2mem.h> +#include <media/videobuf2-dma-contig.h> +#include <sound/dmaengine_pcm.h> +#include <sound/fsl_asrc_common.h> + +#define V4L_CAP OUT +#define V4L_OUT IN + +#define ASRC_xPUT_DMA_CALLBACK(dir) \ + (((dir) == V4L_OUT) ? asrc_input_dma_callback \ + : asrc_output_dma_callback) + +#define DIR_STR(dir) (dir) == V4L_OUT ? 
"out" : "cap" + +/* Maximum output and capture buffer size */ +#define ASRC_M2M_BUFFER_SIZE (512 * 1024) + +/* Maximum output and capture period size */ +#define ASRC_M2M_PERIOD_SIZE (48 * 1024) + +struct asrc_pair_m2m { + struct fsl_asrc_pair *pair; + struct asrc_m2m *m2m; + struct v4l2_fh fh; + struct v4l2_ctrl_handler ctrl_handler; + int channels[2]; + unsigned int sequence[2]; + s64 src_rate_off_prev; /* Q31.32 */ + s64 dst_rate_off_prev; /* Q31.32 */ + s64 src_rate_off_cur; /* Q31.32 */ + s64 dst_rate_off_cur; /* Q31.32 */ +}; + +struct asrc_m2m { + struct fsl_asrc_m2m_pdata pdata; + struct v4l2_device v4l2_dev; + struct v4l2_m2m_dev *m2m_dev; + struct video_device *dec_vdev; + struct mutex mlock; /* v4l2 ioctls serialization */ + struct platform_device *pdev; +#ifdef CONFIG_MEDIA_CONTROLLER + struct media_device mdev; +#endif +}; + +static u32 formats[] = { + V4L2_AUDIO_FMT_S8, + V4L2_AUDIO_FMT_S16_LE, + V4L2_AUDIO_FMT_U16_LE, + V4L2_AUDIO_FMT_S24_LE, + V4L2_AUDIO_FMT_S24_3LE, + V4L2_AUDIO_FMT_U24_LE, + V4L2_AUDIO_FMT_U24_3LE, + V4L2_AUDIO_FMT_S32_LE, + V4L2_AUDIO_FMT_U32_LE, + V4L2_AUDIO_FMT_S20_3LE, + V4L2_AUDIO_FMT_U20_3LE, + V4L2_AUDIO_FMT_FLOAT_LE, + V4L2_AUDIO_FMT_IEC958_SUBFRAME_LE, +}; + +#define NUM_FORMATS ARRAY_SIZE(formats) + +static const s64 asrc_v1_m2m_rates[] = { + 5512, 8000, 11025, 12000, 16000, + 22050, 24000, 32000, 44100, + 48000, 64000, 88200, 96000, + 128000, 176400, 192000, +}; + +static const s64 asrc_v2_m2m_rates[] = { + 8000, 11025, 12000, 16000, + 22050, 24000, 32000, 44100, + 48000, 64000, 88200, 96000, + 128000, 176400, 192000, 256000, + 352800, 384000, 705600, 768000, +}; + +static u32 find_fourcc(snd_pcm_format_t format) +{ + snd_pcm_format_t fmt; + unsigned int k; + + for (k = 0; k < NUM_FORMATS; k++) { + fmt = v4l2_fourcc_to_audfmt(formats[k]); + if (fmt == format) + return formats[k]; + } + + return 0; +} + +static snd_pcm_format_t find_format(u32 fourcc) +{ + unsigned int k; + + for (k = 0; k < NUM_FORMATS; k++) { + if (formats[k] == fourcc) + return v4l2_fourcc_to_audfmt(formats[k]); + } + + return 0; +} + +static int asrc_check_format(struct asrc_pair_m2m *pair_m2m, u8 dir, u32 format) +{ + struct asrc_m2m *m2m = pair_m2m->m2m; + struct fsl_asrc_m2m_pdata *pdata = &m2m->pdata; + struct fsl_asrc_pair *pair = pair_m2m->pair; + snd_pcm_format_t fmt; + u64 format_bit = 0; + int i; + + for (i = 0; i < NUM_FORMATS; ++i) { + if (formats[i] == format) { + fmt = v4l2_fourcc_to_audfmt(formats[i]); + format_bit = pcm_format_to_bits(fmt); + break; + } + } + + if (dir == IN && !(format_bit & pdata->fmt_in)) + return find_fourcc(pair->sample_format[V4L_OUT]); + if (dir == OUT && !(format_bit & pdata->fmt_out)) + return find_fourcc(pair->sample_format[V4L_CAP]); + + return format; +} + +static int asrc_check_channel(struct asrc_pair_m2m *pair_m2m, u8 dir, u32 channels) +{ + struct asrc_m2m *m2m = pair_m2m->m2m; + struct fsl_asrc_m2m_pdata *pdata = &m2m->pdata; + struct fsl_asrc_pair *pair = pair_m2m->pair; + + if (channels < pdata->chan_min || channels > pdata->chan_max) + return pair->channels; + + return channels; +} + +static inline struct asrc_pair_m2m *asrc_m2m_fh_to_ctx(struct v4l2_fh *fh) +{ + return container_of(fh, struct asrc_pair_m2m, fh); +} + +/** + * asrc_read_last_fifo: read all the remaining data from FIFO + * @pair: Structure pointer of fsl_asrc_pair + * @dma_vaddr: virtual address of capture buffer + * @length: payload length of capture buffer + */ +static void asrc_read_last_fifo(struct fsl_asrc_pair *pair, void *dma_vaddr, u32 *length) +{ + 
struct fsl_asrc *asrc = pair->asrc; + enum asrc_pair_index index = pair->index; + u32 i, reg, size, t_size = 0, width; + u32 *reg32 = NULL; + u16 *reg16 = NULL; + u8 *reg24 = NULL; + + width = snd_pcm_format_physical_width(pair->sample_format[V4L_CAP]); + if (width == 32) + reg32 = dma_vaddr + *length; + else if (width == 16) + reg16 = dma_vaddr + *length; + else + reg24 = dma_vaddr + *length; +retry: + size = asrc->get_output_fifo_size(pair); + if (size + *length > ASRC_M2M_BUFFER_SIZE) + goto end; + + for (i = 0; i < size * pair->channels; i++) { + regmap_read(asrc->regmap, asrc->get_fifo_addr(OUT, index), ®); + if (reg32) { + *reg32++ = reg; + } else if (reg16) { + *reg16++ = (u16)reg; + } else { + *reg24++ = (u8)reg; + *reg24++ = (u8)(reg >> 8); + *reg24++ = (u8)(reg >> 16); + } + } + t_size += size; + + /* In case there is data left in FIFO */ + if (size) + goto retry; +end: + /* Update payload length */ + if (reg32) + *length += t_size * pair->channels * 4; + else if (reg16) + *length += t_size * pair->channels * 2; + else + *length += t_size * pair->channels * 3; +} + +static int asrc_m2m_start_streaming(struct vb2_queue *q, unsigned int count) +{ + struct asrc_pair_m2m *pair_m2m = vb2_get_drv_priv(q); + struct fsl_asrc_pair *pair = pair_m2m->pair; + struct asrc_m2m *m2m = pair_m2m->m2m; + struct fsl_asrc *asrc = pair->asrc; + struct device *dev = &m2m->pdev->dev; + struct vb2_v4l2_buffer *buf; + bool request_flag = false; + int ret; + + dev_dbg(dev, "Start streaming pair=%p, %d\n", pair, q->type); + + ret = pm_runtime_get_sync(dev); + if (ret < 0) { + dev_err(dev, "Failed to power up asrc\n"); + goto err_pm_runtime; + } + + /* Request asrc pair/context */ + if (!pair->req_pair) { + /* flag for error handler of this function */ + request_flag = true; + + ret = asrc->request_pair(pair->channels, pair); + if (ret) { + dev_err(dev, "failed to request pair: %d\n", ret); + goto err_request_pair; + } + + ret = asrc->m2m_prepare(pair); + if (ret) { + dev_err(dev, "failed to start pair part one: %d\n", ret); + goto err_start_part_one; + } + + pair->req_pair = true; + } + + /* Request dma channels */ + if (V4L2_TYPE_IS_OUTPUT(q->type)) { + pair_m2m->sequence[V4L_OUT] = 0; + pair->dma_chan[V4L_OUT] = asrc->get_dma_channel(pair, IN); + if (!pair->dma_chan[V4L_OUT]) { + dev_err(dev, "[ctx%d] failed to get input DMA channel\n", pair->index); + ret = -EBUSY; + goto err_dma_channel; + } + } else { + pair_m2m->sequence[V4L_CAP] = 0; + pair->dma_chan[V4L_CAP] = asrc->get_dma_channel(pair, OUT); + if (!pair->dma_chan[V4L_CAP]) { + dev_err(dev, "[ctx%d] failed to get output DMA channel\n", pair->index); + ret = -EBUSY; + goto err_dma_channel; + } + } + + v4l2_m2m_update_start_streaming_state(pair_m2m->fh.m2m_ctx, q); + + return 0; + +err_dma_channel: + if (request_flag && asrc->m2m_unprepare) + asrc->m2m_unprepare(pair); +err_start_part_one: + if (request_flag) + asrc->release_pair(pair); +err_request_pair: + pm_runtime_put_sync(dev); +err_pm_runtime: + /* Release buffers */ + if (V4L2_TYPE_IS_OUTPUT(q->type)) { + while ((buf = v4l2_m2m_src_buf_remove(pair_m2m->fh.m2m_ctx))) + v4l2_m2m_buf_done(buf, VB2_BUF_STATE_QUEUED); + } else { + while ((buf = v4l2_m2m_dst_buf_remove(pair_m2m->fh.m2m_ctx))) + v4l2_m2m_buf_done(buf, VB2_BUF_STATE_QUEUED); + } + return ret; +} + +static void asrc_m2m_stop_streaming(struct vb2_queue *q) +{ + struct asrc_pair_m2m *pair_m2m = vb2_get_drv_priv(q); + struct asrc_m2m *m2m = pair_m2m->m2m; + struct fsl_asrc_pair *pair = pair_m2m->pair; + struct fsl_asrc *asrc = 
pair->asrc; + struct device *dev = &m2m->pdev->dev; + + dev_dbg(dev, "Stop streaming pair=%p, %d\n", pair, q->type); + + v4l2_m2m_update_stop_streaming_state(pair_m2m->fh.m2m_ctx, q); + + /* Stop & release pair/context */ + if (asrc->m2m_stop) + asrc->m2m_stop(pair); + + if (pair->req_pair) { + if (asrc->m2m_unprepare) + asrc->m2m_unprepare(pair); + asrc->release_pair(pair); + pair->req_pair = false; + } + + /* Release dma channel */ + if (V4L2_TYPE_IS_OUTPUT(q->type)) { + if (pair->dma_chan[V4L_OUT]) + dma_release_channel(pair->dma_chan[V4L_OUT]); + } else { + if (pair->dma_chan[V4L_CAP]) + dma_release_channel(pair->dma_chan[V4L_CAP]); + } + + pm_runtime_put_sync(dev); +} + +static int asrc_m2m_queue_setup(struct vb2_queue *q, + unsigned int *num_buffers, unsigned int *num_planes, + unsigned int sizes[], struct device *alloc_devs[]) +{ + struct asrc_pair_m2m *pair_m2m = vb2_get_drv_priv(q); + struct fsl_asrc_pair *pair = pair_m2m->pair; + u32 size; + + /* + * The capture buffer size depends on output buffer size + * and the convert ratio. + * + * Here just use a fix length for capture and output buffer. + * User need to care about it. + */ + if (V4L2_TYPE_IS_OUTPUT(q->type)) + size = pair->buf_len[V4L_OUT]; + else + size = pair->buf_len[V4L_CAP]; + + if (*num_planes) + return sizes[0] < size ? -EINVAL : 0; + + *num_planes = 1; + sizes[0] = size; + + return 0; +} + +static void asrc_m2m_buf_queue(struct vb2_buffer *vb) +{ + struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb); + struct asrc_pair_m2m *pair_m2m = vb2_get_drv_priv(vb->vb2_queue); + + /* queue buffer */ + v4l2_m2m_buf_queue(pair_m2m->fh.m2m_ctx, vbuf); +} + +static const struct vb2_ops asrc_m2m_qops = { + .wait_prepare = vb2_ops_wait_prepare, + .wait_finish = vb2_ops_wait_finish, + .start_streaming = asrc_m2m_start_streaming, + .stop_streaming = asrc_m2m_stop_streaming, + .queue_setup = asrc_m2m_queue_setup, + .buf_queue = asrc_m2m_buf_queue, +}; + +/* Init video buffer queue for src and dst. 
*/ +static int asrc_m2m_queue_init(void *priv, struct vb2_queue *src_vq, + struct vb2_queue *dst_vq) +{ + struct asrc_pair_m2m *pair_m2m = priv; + struct asrc_m2m *m2m = pair_m2m->m2m; + int ret; + + src_vq->type = V4L2_BUF_TYPE_AUDIO_OUTPUT; + src_vq->io_modes = VB2_MMAP | VB2_DMABUF; + src_vq->drv_priv = pair_m2m; + src_vq->buf_struct_size = sizeof(struct v4l2_m2m_buffer); + src_vq->ops = &asrc_m2m_qops; + src_vq->mem_ops = &vb2_dma_contig_memops; + src_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY; + src_vq->lock = &m2m->mlock; + src_vq->dev = &m2m->pdev->dev; + + ret = vb2_queue_init(src_vq); + if (ret) + return ret; + + dst_vq->type = V4L2_BUF_TYPE_AUDIO_CAPTURE; + dst_vq->io_modes = VB2_MMAP | VB2_DMABUF; + dst_vq->drv_priv = pair_m2m; + dst_vq->buf_struct_size = sizeof(struct v4l2_m2m_buffer); + dst_vq->ops = &asrc_m2m_qops; + dst_vq->mem_ops = &vb2_dma_contig_memops; + dst_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY; + dst_vq->lock = &m2m->mlock; + dst_vq->dev = &m2m->pdev->dev; + + ret = vb2_queue_init(dst_vq); + return ret; +} + +static int asrc_m2m_op_s_ctrl(struct v4l2_ctrl *ctrl) +{ + struct asrc_pair_m2m *pair_m2m = + container_of(ctrl->handler, struct asrc_pair_m2m, ctrl_handler); + struct fsl_asrc_pair *pair = pair_m2m->pair; + int ret = 0; + + switch (ctrl->id) { + case V4L2_CID_M2M_AUDIO_SOURCE_RATE: + pair->rate[V4L_OUT] = ctrl->qmenu_int[ctrl->val]; + break; + case V4L2_CID_M2M_AUDIO_DEST_RATE: + pair->rate[V4L_CAP] = ctrl->qmenu_int[ctrl->val]; + break; + case V4L2_CID_M2M_AUDIO_SOURCE_RATE_OFFSET: + pair_m2m->src_rate_off_cur = *ctrl->p_new.p_s64; + break; + case V4L2_CID_M2M_AUDIO_DEST_RATE_OFFSET: + pair_m2m->dst_rate_off_cur = *ctrl->p_new.p_s64; + break; + default: + ret = -EINVAL; + break; + } + + return ret; +} + +static const struct v4l2_ctrl_ops asrc_m2m_ctrl_ops = { + .s_ctrl = asrc_m2m_op_s_ctrl, +}; + +static const struct v4l2_ctrl_config asrc_src_rate_off_control = { + .ops = &asrc_m2m_ctrl_ops, + .id = V4L2_CID_M2M_AUDIO_SOURCE_RATE_OFFSET, + .name = "Audio Source Sample Rate Offset", + .type = V4L2_CTRL_TYPE_INTEGER64, + .min = v4l2_ctrl_fp_compose(-128, 0, 32), + .max = v4l2_ctrl_fp_compose(127, 0xffffffff, 32), + .def = 0, + .step = 1, + .fraction_bits = 32, +}; + +static const struct v4l2_ctrl_config asrc_dst_rate_off_control = { + .ops = &asrc_m2m_ctrl_ops, + .id = V4L2_CID_M2M_AUDIO_DEST_RATE_OFFSET, + .name = "Audio Dest Sample Rate Offset", + .type = V4L2_CTRL_TYPE_INTEGER64, + .min = v4l2_ctrl_fp_compose(-128, 0, 32), + .max = v4l2_ctrl_fp_compose(127, 0xffffffff, 32), + .def = 0, + .step = 1, + .fraction_bits = 32, +}; + +/* system callback for open() */ +static int asrc_m2m_open(struct file *file) +{ + struct asrc_m2m *m2m = video_drvdata(file); + struct fsl_asrc *asrc = m2m->pdata.asrc; + struct video_device *vdev = video_devdata(file); + struct fsl_asrc_pair *pair; + struct asrc_pair_m2m *pair_m2m; + int ret = 0; + + if (mutex_lock_interruptible(&m2m->mlock)) + return -ERESTARTSYS; + + pair = kzalloc(sizeof(*pair) + asrc->pair_priv_size, GFP_KERNEL); + if (!pair) { + ret = -ENOMEM; + goto err_alloc_pair; + } + + pair_m2m = kzalloc(sizeof(*pair_m2m), GFP_KERNEL); + if (!pair_m2m) { + ret = -ENOMEM; + goto err_alloc_pair_m2m; + } + + pair->private = (void *)pair + sizeof(struct fsl_asrc_pair); + pair->asrc = asrc; + + pair->buf_len[V4L_OUT] = ASRC_M2M_BUFFER_SIZE; + pair->buf_len[V4L_CAP] = ASRC_M2M_BUFFER_SIZE; + + pair->channels = 2; + pair->rate[V4L_OUT] = 8000; + pair->rate[V4L_CAP] = 8000; + pair->sample_format[V4L_OUT] = 
SNDRV_PCM_FORMAT_S16_LE; + pair->sample_format[V4L_CAP] = SNDRV_PCM_FORMAT_S16_LE; + + init_completion(&pair->complete[V4L_OUT]); + init_completion(&pair->complete[V4L_CAP]); + + v4l2_fh_init(&pair_m2m->fh, vdev); + v4l2_fh_add(&pair_m2m->fh); + file->private_data = &pair_m2m->fh; + + pair_m2m->pair = pair; + pair_m2m->m2m = m2m; + /* m2m context init */ + pair_m2m->fh.m2m_ctx = v4l2_m2m_ctx_init(m2m->m2m_dev, pair_m2m, + asrc_m2m_queue_init); + if (IS_ERR(pair_m2m->fh.m2m_ctx)) { + ret = PTR_ERR(pair_m2m->fh.m2m_ctx); + goto err_ctx_init; + } + + v4l2_ctrl_handler_init(&pair_m2m->ctrl_handler, 4); + + if (m2m->pdata.rate_min == 5512) { + v4l2_ctrl_new_int_menu(&pair_m2m->ctrl_handler, &asrc_m2m_ctrl_ops, + V4L2_CID_M2M_AUDIO_SOURCE_RATE, + ARRAY_SIZE(asrc_v1_m2m_rates) - 1, 1, asrc_v1_m2m_rates); + v4l2_ctrl_new_int_menu(&pair_m2m->ctrl_handler, &asrc_m2m_ctrl_ops, + V4L2_CID_M2M_AUDIO_DEST_RATE, + ARRAY_SIZE(asrc_v1_m2m_rates) - 1, 1, asrc_v1_m2m_rates); + } else { + v4l2_ctrl_new_int_menu(&pair_m2m->ctrl_handler, &asrc_m2m_ctrl_ops, + V4L2_CID_M2M_AUDIO_SOURCE_RATE, + ARRAY_SIZE(asrc_v2_m2m_rates) - 1, 0, asrc_v2_m2m_rates); + v4l2_ctrl_new_int_menu(&pair_m2m->ctrl_handler, &asrc_m2m_ctrl_ops, + V4L2_CID_M2M_AUDIO_DEST_RATE, + ARRAY_SIZE(asrc_v2_m2m_rates) - 1, 0, asrc_v2_m2m_rates); + } + + v4l2_ctrl_new_custom(&pair_m2m->ctrl_handler, &asrc_src_rate_off_control, NULL); + v4l2_ctrl_new_custom(&pair_m2m->ctrl_handler, &asrc_dst_rate_off_control, NULL); + + if (pair_m2m->ctrl_handler.error) { + ret = pair_m2m->ctrl_handler.error; + v4l2_ctrl_handler_free(&pair_m2m->ctrl_handler); + goto err_ctrl_handler; + } + + pair_m2m->fh.ctrl_handler = &pair_m2m->ctrl_handler; + + mutex_unlock(&m2m->mlock); + + return 0; + +err_ctrl_handler: + v4l2_m2m_ctx_release(pair_m2m->fh.m2m_ctx); +err_ctx_init: + v4l2_fh_del(&pair_m2m->fh); + v4l2_fh_exit(&pair_m2m->fh); + kfree(pair_m2m); +err_alloc_pair_m2m: + kfree(pair); +err_alloc_pair: + mutex_unlock(&m2m->mlock); + return ret; +} + +static int asrc_m2m_release(struct file *file) +{ + struct asrc_m2m *m2m = video_drvdata(file); + struct asrc_pair_m2m *pair_m2m = asrc_m2m_fh_to_ctx(file->private_data); + struct fsl_asrc_pair *pair = pair_m2m->pair; + + mutex_lock(&m2m->mlock); + v4l2_ctrl_handler_free(&pair_m2m->ctrl_handler); + v4l2_m2m_ctx_release(pair_m2m->fh.m2m_ctx); + v4l2_fh_del(&pair_m2m->fh); + v4l2_fh_exit(&pair_m2m->fh); + kfree(pair_m2m); + kfree(pair); + mutex_unlock(&m2m->mlock); + + return 0; +} + +static const struct v4l2_file_operations asrc_m2m_fops = { + .owner = THIS_MODULE, + .open = asrc_m2m_open, + .release = asrc_m2m_release, + .poll = v4l2_m2m_fop_poll, + .unlocked_ioctl = video_ioctl2, + .mmap = v4l2_m2m_fop_mmap, +}; + +static int asrc_m2m_querycap(struct file *file, void *priv, + struct v4l2_capability *cap) +{ + strscpy(cap->driver, M2M_DRV_NAME, sizeof(cap->driver)); + strscpy(cap->card, M2M_DRV_NAME, sizeof(cap->card)); + cap->device_caps = V4L2_CAP_STREAMING | V4L2_CAP_AUDIO_M2M; + + return 0; +} + +static int enum_fmt(struct v4l2_fmtdesc *f, u64 fmtbit) +{ + snd_pcm_format_t fmt; + int i, num; + + num = 0; + + for (i = 0; i < NUM_FORMATS; ++i) { + fmt = v4l2_fourcc_to_audfmt(formats[i]); + if (pcm_format_to_bits(fmt) & fmtbit) { + if (num == f->index) + break; + /* + * Correct type but haven't reached our index yet, + * just increment per-type index + */ + ++num; + } + } + + if (i < NUM_FORMATS) { + /* Format found */ + f->pixelformat = formats[i]; + return 0; + } + + return -EINVAL; +} + +static int 
asrc_m2m_enum_fmt_aud_cap(struct file *file, void *fh, + struct v4l2_fmtdesc *f) +{ + struct asrc_pair_m2m *pair_m2m = asrc_m2m_fh_to_ctx(fh); + struct asrc_m2m *m2m = pair_m2m->m2m; + + return enum_fmt(f, m2m->pdata.fmt_out); +} + +static int asrc_m2m_enum_fmt_aud_out(struct file *file, void *fh, + struct v4l2_fmtdesc *f) +{ + struct asrc_pair_m2m *pair_m2m = asrc_m2m_fh_to_ctx(fh); + struct asrc_m2m *m2m = pair_m2m->m2m; + + return enum_fmt(f, m2m->pdata.fmt_in); +} + +static int asrc_m2m_g_fmt_aud_cap(struct file *file, void *fh, + struct v4l2_format *f) +{ + struct asrc_pair_m2m *pair_m2m = asrc_m2m_fh_to_ctx(fh); + struct fsl_asrc_pair *pair = pair_m2m->pair; + + f->fmt.audio.channels = pair->channels; + f->fmt.audio.buffersize = pair->buf_len[V4L_CAP]; + f->fmt.audio.audioformat = find_fourcc(pair->sample_format[V4L_CAP]); + + return 0; +} + +static int asrc_m2m_g_fmt_aud_out(struct file *file, void *fh, + struct v4l2_format *f) +{ + struct asrc_pair_m2m *pair_m2m = asrc_m2m_fh_to_ctx(fh); + struct fsl_asrc_pair *pair = pair_m2m->pair; + + f->fmt.audio.channels = pair->channels; + f->fmt.audio.buffersize = pair->buf_len[V4L_OUT]; + f->fmt.audio.audioformat = find_fourcc(pair->sample_format[V4L_OUT]); + + return 0; +} + +/* output for asrc */ +static int asrc_m2m_s_fmt_aud_cap(struct file *file, void *fh, + struct v4l2_format *f) +{ + struct asrc_pair_m2m *pair_m2m = asrc_m2m_fh_to_ctx(fh); + struct fsl_asrc_pair *pair = pair_m2m->pair; + struct asrc_m2m *m2m = pair_m2m->m2m; + struct device *dev = &m2m->pdev->dev; + + f->fmt.audio.audioformat = asrc_check_format(pair_m2m, OUT, f->fmt.audio.audioformat); + f->fmt.audio.channels = asrc_check_channel(pair_m2m, OUT, f->fmt.audio.channels); + + if (pair_m2m->channels[V4L_CAP] > 0 && + pair_m2m->channels[V4L_CAP] != f->fmt.audio.channels) { + dev_err(dev, "channels don't match for cap and out\n"); + return -EINVAL; + } + + pair_m2m->channels[V4L_CAP] = f->fmt.audio.channels; + pair->channels = f->fmt.audio.channels; + pair->sample_format[V4L_CAP] = find_format(f->fmt.audio.audioformat); + + return 0; +} + +/* input for asrc */ +static int asrc_m2m_s_fmt_aud_out(struct file *file, void *fh, + struct v4l2_format *f) +{ + struct asrc_pair_m2m *pair_m2m = asrc_m2m_fh_to_ctx(fh); + struct fsl_asrc_pair *pair = pair_m2m->pair; + struct asrc_m2m *m2m = pair_m2m->m2m; + struct device *dev = &m2m->pdev->dev; + + f->fmt.audio.audioformat = asrc_check_format(pair_m2m, IN, f->fmt.audio.audioformat); + f->fmt.audio.channels = asrc_check_channel(pair_m2m, IN, f->fmt.audio.channels); + if (pair_m2m->channels[V4L_OUT] > 0 && + pair_m2m->channels[V4L_OUT] != f->fmt.audio.channels) { + dev_err(dev, "channels don't match for cap and out\n"); + return -EINVAL; + } + + pair_m2m->channels[V4L_OUT] = f->fmt.audio.channels; + pair->channels = f->fmt.audio.channels; + pair->sample_format[V4L_OUT] = find_format(f->fmt.audio.audioformat); + + return 0; +} + +static int asrc_m2m_try_fmt_audio_cap(struct file *file, void *fh, + struct v4l2_format *f) +{ + struct asrc_pair_m2m *pair_m2m = asrc_m2m_fh_to_ctx(fh); + + f->fmt.audio.audioformat = asrc_check_format(pair_m2m, OUT, f->fmt.audio.audioformat); + f->fmt.audio.channels = asrc_check_channel(pair_m2m, OUT, f->fmt.audio.channels); + + return 0; +} + +static int asrc_m2m_try_fmt_audio_out(struct file *file, void *fh, + struct v4l2_format *f) +{ + struct asrc_pair_m2m *pair_m2m = asrc_m2m_fh_to_ctx(fh); + + f->fmt.audio.audioformat = asrc_check_format(pair_m2m, IN, f->fmt.audio.audioformat); + f->fmt.audio.channels = 
asrc_check_channel(pair_m2m, IN, f->fmt.audio.channels); + + return 0; +} + +static const struct v4l2_ioctl_ops asrc_m2m_ioctl_ops = { + .vidioc_querycap = asrc_m2m_querycap, + + .vidioc_enum_fmt_audio_cap = asrc_m2m_enum_fmt_aud_cap, + .vidioc_enum_fmt_audio_out = asrc_m2m_enum_fmt_aud_out, + + .vidioc_g_fmt_audio_cap = asrc_m2m_g_fmt_aud_cap, + .vidioc_g_fmt_audio_out = asrc_m2m_g_fmt_aud_out, + + .vidioc_s_fmt_audio_cap = asrc_m2m_s_fmt_aud_cap, + .vidioc_s_fmt_audio_out = asrc_m2m_s_fmt_aud_out, + + .vidioc_try_fmt_audio_cap = asrc_m2m_try_fmt_audio_cap, + .vidioc_try_fmt_audio_out = asrc_m2m_try_fmt_audio_out, + + .vidioc_qbuf = v4l2_m2m_ioctl_qbuf, + .vidioc_dqbuf = v4l2_m2m_ioctl_dqbuf, + + .vidioc_create_bufs = v4l2_m2m_ioctl_create_bufs, + .vidioc_prepare_buf = v4l2_m2m_ioctl_prepare_buf, + .vidioc_reqbufs = v4l2_m2m_ioctl_reqbufs, + .vidioc_querybuf = v4l2_m2m_ioctl_querybuf, + .vidioc_streamon = v4l2_m2m_ioctl_streamon, + .vidioc_streamoff = v4l2_m2m_ioctl_streamoff, + .vidioc_subscribe_event = v4l2_ctrl_subscribe_event, + .vidioc_unsubscribe_event = v4l2_event_unsubscribe, +}; + +/* dma complete callback */ +static void asrc_input_dma_callback(void *data) +{ + struct fsl_asrc_pair *pair = (struct fsl_asrc_pair *)data; + + complete(&pair->complete[V4L_OUT]); +} + +/* dma complete callback */ +static void asrc_output_dma_callback(void *data) +{ + struct fsl_asrc_pair *pair = (struct fsl_asrc_pair *)data; + + complete(&pair->complete[V4L_CAP]); +} + +/* config dma channel */ +static int asrc_dmaconfig(struct asrc_pair_m2m *pair_m2m, + struct dma_chan *chan, + u32 dma_addr, dma_addr_t buf_addr, u32 buf_len, + int dir, int width) +{ + struct fsl_asrc_pair *pair = pair_m2m->pair; + struct fsl_asrc *asrc = pair->asrc; + struct asrc_m2m *m2m = pair_m2m->m2m; + struct device *dev = &m2m->pdev->dev; + struct dma_slave_config slave_config; + enum dma_slave_buswidth buswidth; + unsigned int sg_len, max_period_size; + struct scatterlist *sg; + int ret, i; + + switch (width) { + case 8: + buswidth = DMA_SLAVE_BUSWIDTH_1_BYTE; + break; + case 16: + buswidth = DMA_SLAVE_BUSWIDTH_2_BYTES; + break; + case 24: + buswidth = DMA_SLAVE_BUSWIDTH_3_BYTES; + break; + case 32: + buswidth = DMA_SLAVE_BUSWIDTH_4_BYTES; + break; + default: + dev_err(dev, "invalid word width\n"); + return -EINVAL; + } + + memset(&slave_config, 0, sizeof(slave_config)); + if (dir == V4L_OUT) { + slave_config.direction = DMA_MEM_TO_DEV; + slave_config.dst_addr = dma_addr; + slave_config.dst_addr_width = buswidth; + slave_config.dst_maxburst = asrc->m2m_get_maxburst(IN, pair); + } else { + slave_config.direction = DMA_DEV_TO_MEM; + slave_config.src_addr = dma_addr; + slave_config.src_addr_width = buswidth; + slave_config.src_maxburst = asrc->m2m_get_maxburst(OUT, pair); + } + + ret = dmaengine_slave_config(chan, &slave_config); + if (ret) { + dev_err(dev, "failed to config dmaengine for %s task: %d\n", + DIR_STR(dir), ret); + return -EINVAL; + } + + max_period_size = rounddown(ASRC_M2M_PERIOD_SIZE, width * pair->channels / 8); + /* scatter gather mode */ + sg_len = buf_len / max_period_size; + if (buf_len % max_period_size) + sg_len += 1; + + sg = kmalloc_array(sg_len, sizeof(*sg), GFP_KERNEL); + if (!sg) + return -ENOMEM; + + sg_init_table(sg, sg_len); + for (i = 0; i < (sg_len - 1); i++) { + sg_dma_address(&sg[i]) = buf_addr + i * max_period_size; + sg_dma_len(&sg[i]) = max_period_size; + } + sg_dma_address(&sg[i]) = buf_addr + i * max_period_size; + sg_dma_len(&sg[i]) = buf_len - i * max_period_size; + + pair->desc[dir] = 
dmaengine_prep_slave_sg(chan, sg, sg_len, + slave_config.direction, + DMA_PREP_INTERRUPT); + kfree(sg); + if (!pair->desc[dir]) { + dev_err(dev, "failed to prepare dmaengine for %s task\n", DIR_STR(dir)); + return -EINVAL; + } + + pair->desc[dir]->callback = ASRC_xPUT_DMA_CALLBACK(dir); + pair->desc[dir]->callback_param = pair; + + return 0; +} + +static void asrc_m2m_set_ratio_mod(struct asrc_pair_m2m *pair_m2m) +{ + struct fsl_asrc_pair *pair = pair_m2m->pair; + struct fsl_asrc *asrc = pair->asrc; + s32 src_rate_int, dst_rate_int; + s64 src_rate_frac; + s64 dst_rate_frac; + u64 src_rate, dst_rate; + u64 ratio_pre, ratio_cur; + s64 ratio_diff; + + if (!asrc->m2m_set_ratio_mod) + return; + + if (pair_m2m->src_rate_off_cur == pair_m2m->src_rate_off_prev && + pair_m2m->dst_rate_off_cur == pair_m2m->dst_rate_off_prev) + return; + + /* + * use maximum rate 768kHz as limitation, then we can shift right 21 bit for + * division + */ + src_rate_int = pair->rate[V4L_OUT]; + src_rate_frac = pair_m2m->src_rate_off_prev; + + src_rate = ((s64)src_rate_int << 32) + src_rate_frac; + + dst_rate_int = pair->rate[V4L_CAP]; + dst_rate_frac = pair_m2m->dst_rate_off_prev; + + dst_rate = ((s64)dst_rate_int << 32) + dst_rate_frac; + dst_rate >>= 21; + do_div(src_rate, dst_rate); + ratio_pre = src_rate; + + src_rate_frac = pair_m2m->src_rate_off_cur; + src_rate = ((s64)src_rate_int << 32) + src_rate_frac; + + dst_rate_frac = pair_m2m->dst_rate_off_cur; + dst_rate = ((s64)dst_rate_int << 32) + dst_rate_frac; + dst_rate >>= 21; + do_div(src_rate, dst_rate); + ratio_cur = src_rate; + + ratio_diff = ratio_cur - ratio_pre; + asrc->m2m_set_ratio_mod(pair, ratio_diff << 10); + + pair_m2m->src_rate_off_prev = pair_m2m->src_rate_off_cur; + pair_m2m->dst_rate_off_prev = pair_m2m->dst_rate_off_cur; +} + +/* main function of converter */ +static void asrc_m2m_device_run(void *priv) +{ + struct asrc_pair_m2m *pair_m2m = priv; + struct fsl_asrc_pair *pair = pair_m2m->pair; + struct asrc_m2m *m2m = pair_m2m->m2m; + struct fsl_asrc *asrc = pair->asrc; + struct device *dev = &m2m->pdev->dev; + enum asrc_pair_index index = pair->index; + struct vb2_v4l2_buffer *src_buf, *dst_buf; + unsigned int out_buf_len; + unsigned int cap_dma_len; + unsigned int width; + u32 fifo_addr; + int ret; + + /* set ratio mod */ + asrc_m2m_set_ratio_mod(pair_m2m); + + src_buf = v4l2_m2m_next_src_buf(pair_m2m->fh.m2m_ctx); + dst_buf = v4l2_m2m_next_dst_buf(pair_m2m->fh.m2m_ctx); + + src_buf->sequence = pair_m2m->sequence[V4L_OUT]++; + dst_buf->sequence = pair_m2m->sequence[V4L_CAP]++; + + width = snd_pcm_format_physical_width(pair->sample_format[V4L_OUT]); + fifo_addr = asrc->paddr + asrc->get_fifo_addr(IN, index); + out_buf_len = vb2_get_plane_payload(&src_buf->vb2_buf, 0); + if (out_buf_len < width * pair->channels / 8 || + out_buf_len > ASRC_M2M_BUFFER_SIZE || + out_buf_len % (width * pair->channels / 8)) { + dev_err(dev, "out buffer size is error: [%d]\n", out_buf_len); + goto end; + } + + /* dma config for output dma channel */ + ret = asrc_dmaconfig(pair_m2m, + pair->dma_chan[V4L_OUT], + fifo_addr, + vb2_dma_contig_plane_dma_addr(&src_buf->vb2_buf, 0), + out_buf_len, V4L_OUT, width); + if (ret) { + dev_err(dev, "out dma config error\n"); + goto end; + } + + width = snd_pcm_format_physical_width(pair->sample_format[V4L_CAP]); + fifo_addr = asrc->paddr + asrc->get_fifo_addr(OUT, index); + cap_dma_len = asrc->m2m_calc_out_len(pair, out_buf_len); + if (cap_dma_len > 0 && cap_dma_len <= ASRC_M2M_BUFFER_SIZE) { + /* dma config for capture dma channel */ 
+ ret = asrc_dmaconfig(pair_m2m, + pair->dma_chan[V4L_CAP], + fifo_addr, + vb2_dma_contig_plane_dma_addr(&dst_buf->vb2_buf, 0), + cap_dma_len, V4L_CAP, width); + if (ret) { + dev_err(dev, "cap dma config error\n"); + goto end; + } + } else if (cap_dma_len > ASRC_M2M_BUFFER_SIZE) { + dev_err(dev, "cap buffer size error\n"); + goto end; + } + + reinit_completion(&pair->complete[V4L_OUT]); + reinit_completion(&pair->complete[V4L_CAP]); + + /* Submit DMA request */ + dmaengine_submit(pair->desc[V4L_OUT]); + dma_async_issue_pending(pair->desc[V4L_OUT]->chan); + if (cap_dma_len > 0) { + dmaengine_submit(pair->desc[V4L_CAP]); + dma_async_issue_pending(pair->desc[V4L_CAP]->chan); + } + + asrc->m2m_start(pair); + + if (!wait_for_completion_interruptible_timeout(&pair->complete[V4L_OUT], 10 * HZ)) { + dev_err(dev, "out DMA task timeout\n"); + goto end; + } + + if (cap_dma_len > 0) { + if (!wait_for_completion_interruptible_timeout(&pair->complete[V4L_CAP], 10 * HZ)) { + dev_err(dev, "cap DMA task timeout\n"); + goto end; + } + } + + /* read the last words from FIFO */ + asrc_read_last_fifo(pair, vb2_plane_vaddr(&dst_buf->vb2_buf, 0), &cap_dma_len); + /* update payload length for capture */ + vb2_set_plane_payload(&dst_buf->vb2_buf, 0, cap_dma_len); + +end: + src_buf = v4l2_m2m_src_buf_remove(pair_m2m->fh.m2m_ctx); + dst_buf = v4l2_m2m_dst_buf_remove(pair_m2m->fh.m2m_ctx); + + v4l2_m2m_buf_done(src_buf, VB2_BUF_STATE_DONE); + v4l2_m2m_buf_done(dst_buf, VB2_BUF_STATE_DONE); + + v4l2_m2m_job_finish(m2m->m2m_dev, pair_m2m->fh.m2m_ctx); +} + +static int asrc_m2m_job_ready(void *priv) +{ + struct asrc_pair_m2m *pair_m2m = priv; + + if (v4l2_m2m_num_src_bufs_ready(pair_m2m->fh.m2m_ctx) > 0 && + v4l2_m2m_num_dst_bufs_ready(pair_m2m->fh.m2m_ctx) > 0) { + return 1; + } + + return 0; +} + +static const struct v4l2_m2m_ops asrc_m2m_ops = { + .job_ready = asrc_m2m_job_ready, + .device_run = asrc_m2m_device_run, +}; + +static const struct media_device_ops asrc_m2m_media_ops = { + .req_validate = vb2_request_validate, + .req_queue = v4l2_m2m_request_queue, +}; + +static int asrc_m2m_probe(struct platform_device *pdev) +{ + struct fsl_asrc_m2m_pdata *data = pdev->dev.platform_data; + struct device *dev = &pdev->dev; + struct asrc_m2m *m2m; + int ret; + + m2m = devm_kzalloc(dev, sizeof(*m2m), GFP_KERNEL); + if (!m2m) + return -ENOMEM; + + m2m->pdata = *data; + m2m->pdev = pdev; + + ret = v4l2_device_register(dev, &m2m->v4l2_dev); + if (ret) { + dev_err(dev, "failed to register v4l2 device\n"); + goto err_register; + } + + m2m->m2m_dev = v4l2_m2m_init(&asrc_m2m_ops); + if (IS_ERR(m2m->m2m_dev)) { + ret = PTR_ERR(m2m->m2m_dev); + dev_err_probe(dev, ret, "failed to register v4l2 device\n"); + goto err_m2m; + } + + m2m->dec_vdev = video_device_alloc(); + if (!m2m->dec_vdev) { + ret = -ENOMEM; + goto err_vdev_alloc; + } + + mutex_init(&m2m->mlock); + + m2m->dec_vdev->fops = &asrc_m2m_fops; + m2m->dec_vdev->ioctl_ops = &asrc_m2m_ioctl_ops; + m2m->dec_vdev->minor = -1; + m2m->dec_vdev->release = video_device_release; + m2m->dec_vdev->lock = &m2m->mlock; /* lock for ioctl serialization */ + m2m->dec_vdev->v4l2_dev = &m2m->v4l2_dev; + m2m->dec_vdev->vfl_dir = VFL_DIR_M2M; + m2m->dec_vdev->device_caps = V4L2_CAP_STREAMING | V4L2_CAP_AUDIO_M2M; + +#ifdef CONFIG_MEDIA_CONTROLLER + m2m->mdev.dev = &pdev->dev; + strscpy(m2m->mdev.model, M2M_DRV_NAME, sizeof(m2m->mdev.model)); + media_device_init(&m2m->mdev); + m2m->mdev.ops = &asrc_m2m_media_ops; + m2m->v4l2_dev.mdev = &m2m->mdev; +#endif + + ret = 
video_register_device(m2m->dec_vdev, VFL_TYPE_AUDIO, -1); + if (ret) { + dev_err_probe(dev, ret, "failed to register video device\n"); + goto err_vdev_register; + } + +#ifdef CONFIG_MEDIA_CONTROLLER + ret = v4l2_m2m_register_media_controller(m2m->m2m_dev, m2m->dec_vdev, + MEDIA_ENT_F_PROC_AUDIO_RESAMPLER); + if (ret) { + dev_err_probe(dev, ret, "Failed to init mem2mem media controller\n"); + goto error_v4l2; + } + + ret = media_device_register(&m2m->mdev); + if (ret) { + dev_err_probe(dev, ret, "Failed to register mem2mem media device\n"); + goto error_m2m_mc; + } +#endif + + video_set_drvdata(m2m->dec_vdev, m2m); + platform_set_drvdata(pdev, m2m); + pm_runtime_enable(&pdev->dev); + + return 0; + +#ifdef CONFIG_MEDIA_CONTROLLER +error_m2m_mc: + v4l2_m2m_unregister_media_controller(m2m->m2m_dev); +#endif +error_v4l2: + video_unregister_device(m2m->dec_vdev); +err_vdev_register: + video_device_release(m2m->dec_vdev); +err_vdev_alloc: + v4l2_m2m_release(m2m->m2m_dev); +err_m2m: + v4l2_device_unregister(&m2m->v4l2_dev); +err_register: + return ret; +} + +static void asrc_m2m_remove(struct platform_device *pdev) +{ + struct asrc_m2m *m2m = platform_get_drvdata(pdev); + + pm_runtime_disable(&pdev->dev); +#ifdef CONFIG_MEDIA_CONTROLLER + media_device_unregister(&m2m->mdev); + v4l2_m2m_unregister_media_controller(m2m->m2m_dev); +#endif + video_unregister_device(m2m->dec_vdev); + video_device_release(m2m->dec_vdev); + v4l2_m2m_release(m2m->m2m_dev); + v4l2_device_unregister(&m2m->v4l2_dev); +} + +#ifdef CONFIG_PM_SLEEP +/* suspend callback for m2m */ +static int asrc_m2m_suspend(struct device *dev) +{ + struct asrc_m2m *m2m = dev_get_drvdata(dev); + struct fsl_asrc *asrc = m2m->pdata.asrc; + struct fsl_asrc_pair *pair; + unsigned long lock_flags; + int i; + + for (i = 0; i < PAIR_CTX_NUM; i++) { + spin_lock_irqsave(&asrc->lock, lock_flags); + pair = asrc->pair[i]; + if (!pair || !pair->req_pair) { + spin_unlock_irqrestore(&asrc->lock, lock_flags); + continue; + } + if (!completion_done(&pair->complete[V4L_OUT])) { + if (pair->dma_chan[V4L_OUT]) + dmaengine_terminate_all(pair->dma_chan[V4L_OUT]); + asrc_input_dma_callback((void *)pair); + } + if (!completion_done(&pair->complete[V4L_CAP])) { + if (pair->dma_chan[V4L_CAP]) + dmaengine_terminate_all(pair->dma_chan[V4L_CAP]); + asrc_output_dma_callback((void *)pair); + } + + if (asrc->m2m_pair_suspend) + asrc->m2m_pair_suspend(pair); + + spin_unlock_irqrestore(&asrc->lock, lock_flags); + } + + return 0; +} + +static int asrc_m2m_resume(struct device *dev) +{ + struct asrc_m2m *m2m = dev_get_drvdata(dev); + struct fsl_asrc *asrc = m2m->pdata.asrc; + struct fsl_asrc_pair *pair; + unsigned long lock_flags; + int i; + + for (i = 0; i < PAIR_CTX_NUM; i++) { + spin_lock_irqsave(&asrc->lock, lock_flags); + pair = asrc->pair[i]; + if (!pair || !pair->req_pair) { + spin_unlock_irqrestore(&asrc->lock, lock_flags); + continue; + } + if (asrc->m2m_pair_resume) + asrc->m2m_pair_resume(pair); + + spin_unlock_irqrestore(&asrc->lock, lock_flags); + } + + return 0; +} +#endif + +static const struct dev_pm_ops asrc_m2m_pm_ops = { + SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(asrc_m2m_suspend, + asrc_m2m_resume) +}; + +static const struct platform_device_id asrc_m2m_driver_ids[] __always_unused = { + { .name = M2M_DRV_NAME }, + { }, +}; +MODULE_DEVICE_TABLE(platform, asrc_m2m_driver_ids); + +static struct platform_driver asrc_m2m_driver = { + .probe = asrc_m2m_probe, + .remove_new = asrc_m2m_remove, + .id_table = asrc_m2m_driver_ids, + .driver = { + .name = M2M_DRV_NAME, + .pm = 
&asrc_m2m_pm_ops, + }, +}; +module_platform_driver(asrc_m2m_driver); + +MODULE_DESCRIPTION("Freescale ASRC M2M driver"); +MODULE_LICENSE("GPL");
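The fixed-point handling in asrc_m2m_set_ratio_mod() above can be condensed into the following sketch (the Q-format reading is an assumption derived from the shift counts in the driver; sign handling of negative rate offsets is omitted for brevity):

#include <linux/math64.h>
#include <linux/types.h>

/*
 * Rates are carried as Q32.32 values: the integer sample rate in Hz in the
 * upper 32 bits, the fractional rate-offset control in the lower 32 bits.
 * With rates capped at 768 kHz (< 2^20), the numerator stays below 2^52 and
 * the denominator, pre-shifted right by 21 bits, still fits in 32 bits, so
 * a 64-by-32 division suffices and the quotient is src/dst scaled by 2^21.
 */
static u64 asrc_ratio_q21(u32 src_hz, u32 src_frac, u32 dst_hz, u32 dst_frac)
{
        u64 num = ((u64)src_hz << 32) + src_frac;
        u32 den = (((u64)dst_hz << 32) + dst_frac) >> 21;

        return div_u64(num, den);       /* (src / dst) << 21 */
}

The driver then programs (ratio_cur - ratio_pre) << 10 into the hardware, i.e. the change of the resampling ratio expressed with 31 fractional bits.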
The audio memory-to-memory virtual driver uses the video memory-to-memory virtual driver vim2m.c as an example. The main differences are that the device type is VFL_TYPE_AUDIO and the device capability is V4L2_CAP_AUDIO_M2M.
The device_run function is a dummy function: it simply copies the data from the input buffer to the output buffer, performing a trivial 8 kHz <-> 16 kHz sample drop/duplication when the source and destination rates differ.
Signed-off-by: Shengjiu Wang shengjiu.wang@nxp.com --- MAINTAINERS | 9 + drivers/media/test-drivers/Kconfig | 10 + drivers/media/test-drivers/Makefile | 1 + drivers/media/test-drivers/vim2m-audio.c | 793 +++++++++++++++++++++++ 4 files changed, 813 insertions(+) create mode 100644 drivers/media/test-drivers/vim2m-audio.c
diff --git a/MAINTAINERS b/MAINTAINERS index 7b8b9ee65c61..215d40d80508 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -23575,6 +23575,15 @@ L: linux-fsdevel@vger.kernel.org S: Maintained F: fs/vboxsf/*
+VIRTUAL MEM2MEM DRIVER FOR AUDIO +M: Hans Verkuil hverkuil@xs4all.nl +M: Shengjiu Wang shengjiu.wang@gmail.com +L: linux-media@vger.kernel.org +S: Maintained +W: https://linuxtv.org +T: git git://linuxtv.org/media_tree.git +F: drivers/media/test-drivers/vim2m-audio.c + VIRTUAL PCM TEST DRIVER M: Ivan Orlov ivan.orlov0322@gmail.com L: linux-sound@vger.kernel.org diff --git a/drivers/media/test-drivers/Kconfig b/drivers/media/test-drivers/Kconfig index 5a5379524bde..b6b52a7ca042 100644 --- a/drivers/media/test-drivers/Kconfig +++ b/drivers/media/test-drivers/Kconfig @@ -16,6 +16,16 @@ config VIDEO_VIM2M This is a virtual test device for the memory-to-memory driver framework.
+config VIDEO_VIM2M_AUDIO + tristate "Virtual Memory-to-Memory Driver For Audio" + depends on VIDEO_DEV + select VIDEOBUF2_VMALLOC + select V4L2_MEM2MEM_DEV + select MEDIA_CONTROLLER + help + This is a virtual audio test device for the memory-to-memory driver + framework. + source "drivers/media/test-drivers/vicodec/Kconfig" source "drivers/media/test-drivers/vimc/Kconfig" source "drivers/media/test-drivers/vivid/Kconfig" diff --git a/drivers/media/test-drivers/Makefile b/drivers/media/test-drivers/Makefile index 740714a4584d..0c61c9ada3e1 100644 --- a/drivers/media/test-drivers/Makefile +++ b/drivers/media/test-drivers/Makefile @@ -10,6 +10,7 @@ obj-$(CONFIG_DVB_VIDTV) += vidtv/
obj-$(CONFIG_VIDEO_VICODEC) += vicodec/ obj-$(CONFIG_VIDEO_VIM2M) += vim2m.o +obj-$(CONFIG_VIDEO_VIM2M_AUDIO) += vim2m-audio.o obj-$(CONFIG_VIDEO_VIMC) += vimc/ obj-$(CONFIG_VIDEO_VIVID) += vivid/ obj-$(CONFIG_VIDEO_VISL) += visl/ diff --git a/drivers/media/test-drivers/vim2m-audio.c b/drivers/media/test-drivers/vim2m-audio.c new file mode 100644 index 000000000000..6361df6320b3 --- /dev/null +++ b/drivers/media/test-drivers/vim2m-audio.c @@ -0,0 +1,793 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* + * A virtual v4l2-mem2mem example for audio device. + */ + +#include <linux/module.h> +#include <linux/delay.h> +#include <linux/fs.h> +#include <linux/sched.h> +#include <linux/slab.h> + +#include <linux/platform_device.h> +#include <media/v4l2-mem2mem.h> +#include <media/v4l2-device.h> +#include <media/v4l2-ioctl.h> +#include <media/v4l2-ctrls.h> +#include <media/v4l2-event.h> +#include <media/videobuf2-vmalloc.h> +#include <sound/dmaengine_pcm.h> + +MODULE_DESCRIPTION("Virtual device for audio mem2mem testing"); +MODULE_LICENSE("GPL"); + +static unsigned int debug; +module_param(debug, uint, 0644); +MODULE_PARM_DESC(debug, "debug level"); + +#define MEM2MEM_NAME "vim2m-audio" + +#define dprintk(dev, lvl, fmt, arg...) \ + v4l2_dbg(lvl, debug, &(dev)->v4l2_dev, "%s: " fmt, __func__, ## arg) + +#define SAMPLE_NUM 4096 + +static void audm2m_dev_release(struct device *dev) +{} + +static struct platform_device audm2m_pdev = { + .name = MEM2MEM_NAME, + .dev.release = audm2m_dev_release, +}; + +static u32 formats[] = { + V4L2_AUDIO_FMT_S16_LE, +}; + +#define NUM_FORMATS ARRAY_SIZE(formats) + +/* Per-queue, driver-specific private data */ +struct audm2m_q_data { + unsigned int rate; + unsigned int channels; + unsigned int buffersize; + unsigned int sequence; + u32 fourcc; +}; + +enum { + V4L2_M2M_SRC = 0, + V4L2_M2M_DST = 1, +}; + +static snd_pcm_format_t find_format(u32 fourcc) +{ + snd_pcm_format_t fmt; + unsigned int k; + + for (k = 0; k < NUM_FORMATS; k++) { + if (formats[k] == fourcc) + break; + } + + if (k == NUM_FORMATS) + return 0; + + fmt = v4l2_fourcc_to_audfmt(formats[k]); + + return fmt; +} + +struct audm2m_dev { + struct v4l2_device v4l2_dev; + struct video_device vfd; + + struct mutex dev_mutex; + + struct v4l2_m2m_dev *m2m_dev; +#ifdef CONFIG_MEDIA_CONTROLLER + struct media_device mdev; +#endif +}; + +struct audm2m_ctx { + struct v4l2_fh fh; + struct v4l2_ctrl_handler ctrl_handler; + struct audm2m_dev *dev; + + struct mutex vb_mutex; + + /* Source and destination queue data */ + struct audm2m_q_data q_data[2]; +}; + +static inline struct audm2m_ctx *file2ctx(struct file *file) +{ + return container_of(file->private_data, struct audm2m_ctx, fh); +} + +static struct audm2m_q_data *get_q_data(struct audm2m_ctx *ctx, + enum v4l2_buf_type type) +{ + if (type == V4L2_BUF_TYPE_AUDIO_OUTPUT) + return &ctx->q_data[V4L2_M2M_SRC]; + return &ctx->q_data[V4L2_M2M_DST]; +} + +static const char *type_name(enum v4l2_buf_type type) +{ + if (type == V4L2_BUF_TYPE_AUDIO_OUTPUT) + return "Output"; + return "Capture"; +} + +/* + * mem2mem callbacks + */ + +/* + * device_run() - prepares and starts the device + */ +static void device_run(void *priv) +{ + struct audm2m_ctx *ctx = priv; + struct audm2m_dev *audm2m_dev; + struct vb2_v4l2_buffer *src_buf, *dst_buf; + struct audm2m_q_data *q_data_src, *q_data_dst; + int src_size, dst_size = 0; + short *src_addr, *dst_addr; + int i; + + audm2m_dev = ctx->dev; + + q_data_src = get_q_data(ctx, V4L2_BUF_TYPE_AUDIO_OUTPUT); + if (!q_data_src) + return; + + q_data_dst 
= get_q_data(ctx, V4L2_BUF_TYPE_AUDIO_CAPTURE); + if (!q_data_dst) + return; + + src_buf = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx); + dst_buf = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx); + src_buf->sequence = q_data_src->sequence++; + dst_buf->sequence = q_data_dst->sequence++; + v4l2_m2m_buf_copy_metadata(src_buf, dst_buf, false); + + /* Process the conversion */ + src_size = vb2_get_plane_payload(&src_buf->vb2_buf, 0); + + src_addr = vb2_plane_vaddr(&src_buf->vb2_buf, 0); + dst_addr = vb2_plane_vaddr(&dst_buf->vb2_buf, 0); + + if (q_data_src->rate == q_data_dst->rate) { + memcpy(dst_addr, src_addr, src_size); + dst_size = src_size; + } else if (q_data_src->rate == 2 * q_data_dst->rate) { + /* 8k to 16k */ + for (i = 0; i < src_size / 2; i++) { + *dst_addr++ = *src_addr++; + src_addr++; + } + + dst_size = src_size / 2; + } else if (q_data_src->rate * 2 == q_data_dst->rate) { + /* 16k to 8k */ + for (i = 0; i < src_size / 2; i++) { + *dst_addr++ = *src_addr; + *dst_addr++ = *src_addr++; + } + + dst_size = src_size * 2; + } + + vb2_set_plane_payload(&dst_buf->vb2_buf, 0, dst_size); + + src_buf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx); + dst_buf = v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx); + + v4l2_m2m_buf_done(src_buf, VB2_BUF_STATE_DONE); + v4l2_m2m_buf_done(dst_buf, VB2_BUF_STATE_DONE); + v4l2_m2m_job_finish(audm2m_dev->m2m_dev, ctx->fh.m2m_ctx); +} + +static int audm2m_querycap(struct file *file, void *priv, + struct v4l2_capability *cap) +{ + strscpy(cap->driver, MEM2MEM_NAME, sizeof(cap->driver)); + strscpy(cap->card, MEM2MEM_NAME, sizeof(cap->card)); + + return 0; +} + +static int enum_fmt(struct v4l2_fmtdesc *f) +{ + int i, num; + + num = 0; + + for (i = 0; i < NUM_FORMATS; ++i) { + if (num == f->index) + break; + /* + * Correct type but haven't reached our index yet, + * just increment per-type index + */ + ++num; + } + + if (i < NUM_FORMATS) { + /* Format found */ + f->pixelformat = formats[i]; + return 0; + } + + /* Format not found */ + return -EINVAL; +} + +static int audm2m_enum_fmt_audio_cap(struct file *file, void *priv, + struct v4l2_fmtdesc *f) +{ + return enum_fmt(f); +} + +static int audm2m_enum_fmt_audio_out(struct file *file, void *priv, + struct v4l2_fmtdesc *f) +{ + return enum_fmt(f); +} + +static int audm2m_g_fmt(struct audm2m_ctx *ctx, struct v4l2_format *f) +{ + struct vb2_queue *vq; + struct audm2m_q_data *q_data; + + vq = v4l2_m2m_get_vq(ctx->fh.m2m_ctx, f->type); + if (!vq) + return -EINVAL; + + q_data = get_q_data(ctx, f->type); + if (!q_data) + return -EINVAL; + + f->fmt.audio.audioformat = q_data->fourcc; + f->fmt.audio.channels = q_data->channels; + f->fmt.audio.buffersize = q_data->buffersize; + + return 0; +} + +static int audm2m_g_fmt_audio_out(struct file *file, void *priv, + struct v4l2_format *f) +{ + return audm2m_g_fmt(file2ctx(file), f); +} + +static int audm2m_g_fmt_audio_cap(struct file *file, void *priv, + struct v4l2_format *f) +{ + return audm2m_g_fmt(file2ctx(file), f); +} + +static int audm2m_try_fmt(struct v4l2_format *f, snd_pcm_format_t fmt) +{ + f->fmt.audio.channels = 1; + f->fmt.audio.buffersize = f->fmt.audio.channels * + snd_pcm_format_physical_width(fmt) * + SAMPLE_NUM; + return 0; +} + +static int audm2m_try_fmt_audio_cap(struct file *file, void *priv, + struct v4l2_format *f) +{ + snd_pcm_format_t fmt; + + fmt = find_format(f->fmt.audio.audioformat); + if (!fmt) { + f->fmt.audio.audioformat = formats[0]; + fmt = find_format(f->fmt.audio.audioformat); + } + + return audm2m_try_fmt(f, fmt); +} + +static int 
audm2m_try_fmt_audio_out(struct file *file, void *priv, + struct v4l2_format *f) +{ + snd_pcm_format_t fmt; + + fmt = find_format(f->fmt.audio.audioformat); + if (!fmt) { + f->fmt.audio.audioformat = formats[0]; + fmt = find_format(f->fmt.audio.audioformat); + } + + return audm2m_try_fmt(f, fmt); +} + +static int audm2m_s_fmt(struct audm2m_ctx *ctx, struct v4l2_format *f) +{ + struct audm2m_q_data *q_data; + struct vb2_queue *vq; + snd_pcm_format_t fmt; + + vq = v4l2_m2m_get_vq(ctx->fh.m2m_ctx, f->type); + if (!vq) + return -EINVAL; + + q_data = get_q_data(ctx, f->type); + if (!q_data) + return -EINVAL; + + if (vb2_is_busy(vq)) { + v4l2_err(&ctx->dev->v4l2_dev, "%s queue busy\n", __func__); + return -EBUSY; + } + + q_data->fourcc = f->fmt.audio.audioformat; + q_data->channels = f->fmt.audio.channels; + + fmt = find_format(f->fmt.audio.audioformat); + q_data->buffersize = q_data->channels * + snd_pcm_format_physical_width(fmt) * + SAMPLE_NUM; + + dprintk(ctx->dev, 1, + "Format for type %s: %d/%d, fmt: %c%c%c%c\n", + type_name(f->type), q_data->rate, + q_data->channels, + (q_data->fourcc & 0xff), + (q_data->fourcc >> 8) & 0xff, + (q_data->fourcc >> 16) & 0xff, + (q_data->fourcc >> 24) & 0xff); + + return 0; +} + +static int audm2m_s_fmt_audio_cap(struct file *file, void *priv, + struct v4l2_format *f) +{ + int ret; + + ret = audm2m_try_fmt_audio_cap(file, priv, f); + if (ret) + return ret; + + return audm2m_s_fmt(file2ctx(file), f); +} + +static int audm2m_s_fmt_audio_out(struct file *file, void *priv, + struct v4l2_format *f) +{ + int ret; + + ret = audm2m_try_fmt_audio_out(file, priv, f); + if (ret) + return ret; + + return audm2m_s_fmt(file2ctx(file), f); +} + +static const struct v4l2_ioctl_ops audm2m_ioctl_ops = { + .vidioc_querycap = audm2m_querycap, + + .vidioc_enum_fmt_audio_cap = audm2m_enum_fmt_audio_cap, + .vidioc_g_fmt_audio_cap = audm2m_g_fmt_audio_cap, + .vidioc_try_fmt_audio_cap = audm2m_try_fmt_audio_cap, + .vidioc_s_fmt_audio_cap = audm2m_s_fmt_audio_cap, + + .vidioc_enum_fmt_audio_out = audm2m_enum_fmt_audio_out, + .vidioc_g_fmt_audio_out = audm2m_g_fmt_audio_out, + .vidioc_try_fmt_audio_out = audm2m_try_fmt_audio_out, + .vidioc_s_fmt_audio_out = audm2m_s_fmt_audio_out, + + .vidioc_reqbufs = v4l2_m2m_ioctl_reqbufs, + .vidioc_querybuf = v4l2_m2m_ioctl_querybuf, + .vidioc_qbuf = v4l2_m2m_ioctl_qbuf, + .vidioc_dqbuf = v4l2_m2m_ioctl_dqbuf, + .vidioc_prepare_buf = v4l2_m2m_ioctl_prepare_buf, + .vidioc_create_bufs = v4l2_m2m_ioctl_create_bufs, + .vidioc_expbuf = v4l2_m2m_ioctl_expbuf, + + .vidioc_streamon = v4l2_m2m_ioctl_streamon, + .vidioc_streamoff = v4l2_m2m_ioctl_streamoff, + + .vidioc_subscribe_event = v4l2_ctrl_subscribe_event, + .vidioc_unsubscribe_event = v4l2_event_unsubscribe, +}; + +/* + * Queue operations + */ +static int audm2m_queue_setup(struct vb2_queue *vq, + unsigned int *nbuffers, + unsigned int *nplanes, + unsigned int sizes[], + struct device *alloc_devs[]) +{ + struct audm2m_ctx *ctx = vb2_get_drv_priv(vq); + struct audm2m_q_data *q_data; + + q_data = get_q_data(ctx, vq->type); + + if (*nplanes) + return sizes[0] < q_data->buffersize ? 
-EINVAL : 0; + + *nplanes = 1; + sizes[0] = q_data->buffersize; + + dprintk(ctx->dev, 1, "%s: get %d buffer(s) of size %d each.\n", + type_name(vq->type), *nplanes, sizes[0]); + + return 0; +} + +static void audm2m_buf_queue(struct vb2_buffer *vb) +{ + struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb); + struct audm2m_ctx *ctx = vb2_get_drv_priv(vb->vb2_queue); + + v4l2_m2m_buf_queue(ctx->fh.m2m_ctx, vbuf); +} + +static int audm2m_start_streaming(struct vb2_queue *q, unsigned int count) +{ + struct audm2m_ctx *ctx = vb2_get_drv_priv(q); + struct audm2m_q_data *q_data = get_q_data(ctx, q->type); + + q_data->sequence = 0; + return 0; +} + +static void audm2m_stop_streaming(struct vb2_queue *q) +{ + struct audm2m_ctx *ctx = vb2_get_drv_priv(q); + struct vb2_v4l2_buffer *vbuf; + + for (;;) { + if (V4L2_TYPE_IS_OUTPUT(q->type)) + vbuf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx); + else + vbuf = v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx); + if (!vbuf) + return; + v4l2_m2m_buf_done(vbuf, VB2_BUF_STATE_ERROR); + } +} + +static const struct vb2_ops audm2m_qops = { + .queue_setup = audm2m_queue_setup, + .buf_queue = audm2m_buf_queue, + .start_streaming = audm2m_start_streaming, + .stop_streaming = audm2m_stop_streaming, + .wait_prepare = vb2_ops_wait_prepare, + .wait_finish = vb2_ops_wait_finish, +}; + +static int queue_init(void *priv, struct vb2_queue *src_vq, + struct vb2_queue *dst_vq) +{ + struct audm2m_ctx *ctx = priv; + int ret; + + src_vq->type = V4L2_BUF_TYPE_AUDIO_OUTPUT; + src_vq->io_modes = VB2_MMAP | VB2_DMABUF; + src_vq->drv_priv = ctx; + src_vq->buf_struct_size = sizeof(struct v4l2_m2m_buffer); + src_vq->ops = &audm2m_qops; + src_vq->mem_ops = &vb2_vmalloc_memops; + src_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY; + src_vq->lock = &ctx->vb_mutex; + + ret = vb2_queue_init(src_vq); + if (ret) + return ret; + + dst_vq->type = V4L2_BUF_TYPE_AUDIO_CAPTURE; + dst_vq->io_modes = VB2_MMAP | VB2_DMABUF; + dst_vq->drv_priv = ctx; + dst_vq->buf_struct_size = sizeof(struct v4l2_m2m_buffer); + dst_vq->ops = &audm2m_qops; + dst_vq->mem_ops = &vb2_vmalloc_memops; + dst_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY; + dst_vq->lock = &ctx->vb_mutex; + + return vb2_queue_init(dst_vq); +} + +static const s64 audm2m_rates[] = { + 8000, 16000, +}; + +static int audm2m_op_s_ctrl(struct v4l2_ctrl *ctrl) +{ + struct audm2m_ctx *ctx = + container_of(ctrl->handler, struct audm2m_ctx, ctrl_handler); + int ret = 0; + + switch (ctrl->id) { + case V4L2_CID_M2M_AUDIO_SOURCE_RATE: + ctx->q_data[V4L2_M2M_SRC].rate = ctrl->qmenu_int[ctrl->val]; + break; + case V4L2_CID_M2M_AUDIO_DEST_RATE: + ctx->q_data[V4L2_M2M_DST].rate = ctrl->qmenu_int[ctrl->val]; + break; + default: + ret = -EINVAL; + break; + } + + return ret; +} + +static const struct v4l2_ctrl_ops audm2m_ctrl_ops = { + .s_ctrl = audm2m_op_s_ctrl, +}; + +/* + * File operations + */ +static int audm2m_open(struct file *file) +{ + struct audm2m_dev *dev = video_drvdata(file); + struct audm2m_ctx *ctx = NULL; + snd_pcm_format_t fmt; + int width; + int rc = 0; + + if (mutex_lock_interruptible(&dev->dev_mutex)) + return -ERESTARTSYS; + ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); + if (!ctx) { + rc = -ENOMEM; + goto open_unlock; + } + + v4l2_fh_init(&ctx->fh, video_devdata(file)); + file->private_data = &ctx->fh; + ctx->dev = dev; + + ctx->q_data[V4L2_M2M_SRC].fourcc = formats[0]; + ctx->q_data[V4L2_M2M_SRC].rate = 8000; + ctx->q_data[V4L2_M2M_SRC].channels = 1; + + /* Fix to 4096 samples */ + fmt = find_format(formats[0]); + width = 
snd_pcm_format_physical_width(fmt); + ctx->q_data[V4L2_M2M_SRC].buffersize = SAMPLE_NUM * width; + ctx->q_data[V4L2_M2M_DST] = ctx->q_data[V4L2_M2M_SRC]; + + ctx->fh.m2m_ctx = v4l2_m2m_ctx_init(dev->m2m_dev, ctx, &queue_init); + + mutex_init(&ctx->vb_mutex); + + if (IS_ERR(ctx->fh.m2m_ctx)) { + rc = PTR_ERR(ctx->fh.m2m_ctx); + + v4l2_fh_exit(&ctx->fh); + kfree(ctx); + goto open_unlock; + } + + v4l2_fh_add(&ctx->fh); + + dprintk(dev, 1, "Created instance: %p, m2m_ctx: %p\n", + ctx, ctx->fh.m2m_ctx); + + v4l2_ctrl_handler_init(&ctx->ctrl_handler, 2); + + v4l2_ctrl_new_int_menu(&ctx->ctrl_handler, &audm2m_ctrl_ops, + V4L2_CID_M2M_AUDIO_SOURCE_RATE, + ARRAY_SIZE(audm2m_rates) - 1, 0, audm2m_rates); + v4l2_ctrl_new_int_menu(&ctx->ctrl_handler, &audm2m_ctrl_ops, + V4L2_CID_M2M_AUDIO_DEST_RATE, + ARRAY_SIZE(audm2m_rates) - 1, 0, audm2m_rates); + + if (ctx->ctrl_handler.error) { + rc = ctx->ctrl_handler.error; + v4l2_ctrl_handler_free(&ctx->ctrl_handler); + goto err_ctrl_handler; + } + + ctx->fh.ctrl_handler = &ctx->ctrl_handler; + + mutex_unlock(&dev->dev_mutex); + + return 0; + +err_ctrl_handler: + v4l2_m2m_ctx_release(ctx->fh.m2m_ctx); +open_unlock: + mutex_unlock(&dev->dev_mutex); + return rc; +} + +static int audm2m_release(struct file *file) +{ + struct audm2m_dev *dev = video_drvdata(file); + struct audm2m_ctx *ctx = file2ctx(file); + + dprintk(dev, 1, "Releasing instance %p\n", ctx); + + v4l2_ctrl_handler_free(&ctx->ctrl_handler); + v4l2_fh_del(&ctx->fh); + v4l2_fh_exit(&ctx->fh); + mutex_lock(&dev->dev_mutex); + v4l2_m2m_ctx_release(ctx->fh.m2m_ctx); + mutex_unlock(&dev->dev_mutex); + kfree(ctx); + + return 0; +} + +static void audm2m_device_release(struct video_device *vdev) +{ + struct audm2m_dev *dev = container_of(vdev, struct audm2m_dev, vfd); + + v4l2_device_unregister(&dev->v4l2_dev); + v4l2_m2m_release(dev->m2m_dev); + +#ifdef CONFIG_MEDIA_CONTROLLER + media_device_cleanup(&dev->mdev); +#endif + kfree(dev); +} + +static const struct v4l2_file_operations audm2m_fops = { + .owner = THIS_MODULE, + .open = audm2m_open, + .release = audm2m_release, + .poll = v4l2_m2m_fop_poll, + .unlocked_ioctl = video_ioctl2, + .mmap = v4l2_m2m_fop_mmap, +}; + +static const struct video_device audm2m_videodev = { + .name = MEM2MEM_NAME, + .vfl_dir = VFL_DIR_M2M, + .fops = &audm2m_fops, + .ioctl_ops = &audm2m_ioctl_ops, + .minor = -1, + .release = audm2m_device_release, + .device_caps = V4L2_CAP_AUDIO_M2M | V4L2_CAP_STREAMING, +}; + +static const struct v4l2_m2m_ops m2m_ops = { + .device_run = device_run, +}; + +static const struct media_device_ops audm2m_media_ops = { + .req_validate = vb2_request_validate, + .req_queue = v4l2_m2m_request_queue, +}; + +static int audm2m_probe(struct platform_device *pdev) +{ + struct audm2m_dev *dev; + struct video_device *vfd; + int ret; + + dev = kzalloc(sizeof(*dev), GFP_KERNEL); + if (!dev) + return -ENOMEM; + + ret = v4l2_device_register(&pdev->dev, &dev->v4l2_dev); + if (ret) + goto error_free; + + mutex_init(&dev->dev_mutex); + + dev->vfd = audm2m_videodev; + vfd = &dev->vfd; + vfd->lock = &dev->dev_mutex; + vfd->v4l2_dev = &dev->v4l2_dev; + + video_set_drvdata(vfd, dev); + platform_set_drvdata(pdev, dev); + + dev->m2m_dev = v4l2_m2m_init(&m2m_ops); + if (IS_ERR(dev->m2m_dev)) { + v4l2_err(&dev->v4l2_dev, "Failed to init mem2mem device\n"); + ret = PTR_ERR(dev->m2m_dev); + dev->m2m_dev = NULL; + goto error_dev; + } + +#ifdef CONFIG_MEDIA_CONTROLLER + dev->mdev.dev = &pdev->dev; + strscpy(dev->mdev.model, MEM2MEM_NAME, sizeof(dev->mdev.model)); + 
media_device_init(&dev->mdev); + dev->mdev.ops = &audm2m_media_ops; + dev->v4l2_dev.mdev = &dev->mdev; +#endif + + ret = video_register_device(vfd, VFL_TYPE_AUDIO, 0); + if (ret) { + v4l2_err(&dev->v4l2_dev, "Failed to register video device\n"); + goto error_m2m; + } + +#ifdef CONFIG_MEDIA_CONTROLLER + ret = v4l2_m2m_register_media_controller(dev->m2m_dev, vfd, + MEDIA_ENT_F_PROC_AUDIO_RESAMPLER); + if (ret) { + v4l2_err(&dev->v4l2_dev, "Failed to init mem2mem media controller\n"); + goto error_v4l2; + } + + ret = media_device_register(&dev->mdev); + if (ret) { + v4l2_err(&dev->v4l2_dev, "Failed to register mem2mem media device\n"); + goto error_m2m_mc; + } +#endif + + v4l2_info(&dev->v4l2_dev, + "Device registered as /dev/v4l-audio%d\n", vfd->num); + + return 0; + +#ifdef CONFIG_MEDIA_CONTROLLER +error_m2m_mc: + v4l2_m2m_unregister_media_controller(dev->m2m_dev); +#endif +error_v4l2: + video_unregister_device(&dev->vfd); + /* audm2m_device_release called by video_unregister_device to release various objects */ + return ret; +error_m2m: + v4l2_m2m_release(dev->m2m_dev); +error_dev: + v4l2_device_unregister(&dev->v4l2_dev); +error_free: + kfree(dev); + + return ret; +} + +static void audm2m_remove(struct platform_device *pdev) +{ + struct audm2m_dev *dev = platform_get_drvdata(pdev); + + v4l2_info(&dev->v4l2_dev, "Removing " MEM2MEM_NAME); + +#ifdef CONFIG_MEDIA_CONTROLLER + media_device_unregister(&dev->mdev); + v4l2_m2m_unregister_media_controller(dev->m2m_dev); +#endif + video_unregister_device(&dev->vfd); +} + +static struct platform_driver audm2m_pdrv = { + .probe = audm2m_probe, + .remove_new = audm2m_remove, + .driver = { + .name = MEM2MEM_NAME, + }, +}; + +static void __exit audm2m_exit(void) +{ + platform_driver_unregister(&audm2m_pdrv); + platform_device_unregister(&audm2m_pdev); +} + +static int __init audm2m_init(void) +{ + int ret; + + ret = platform_device_register(&audm2m_pdev); + if (ret) + return ret; + + ret = platform_driver_register(&audm2m_pdrv); + if (ret) + platform_device_unregister(&audm2m_pdev); + + return ret; +} + +module_init(audm2m_init); +module_exit(audm2m_exit);
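A minimal userspace sketch of how the audio mem2mem devices above could be exercised through the V4L2 ioctls (assuming the uAPI additions from this series -- V4L2_BUF_TYPE_AUDIO_OUTPUT/CAPTURE, V4L2_AUDIO_FMT_S16_LE and the V4L2_CID_M2M_AUDIO_*_RATE controls -- are present in the installed headers; the device node number, rate menu indices and single-buffer setup are illustrative, and error handling is omitted):

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
        struct v4l2_requestbuffers req;
        struct v4l2_control ctrl;
        struct v4l2_buffer buf;
        struct v4l2_format fmt;
        int type;
        int fd = open("/dev/v4l-audio0", O_RDWR);

        /* OUTPUT queue: mono S16_LE samples fed into the converter */
        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_AUDIO_OUTPUT;
        fmt.fmt.audio.audioformat = V4L2_AUDIO_FMT_S16_LE;
        fmt.fmt.audio.channels = 1;
        ioctl(fd, VIDIOC_S_FMT, &fmt);

        /* CAPTURE queue: converted samples coming back from the device */
        fmt.type = V4L2_BUF_TYPE_AUDIO_CAPTURE;
        ioctl(fd, VIDIOC_S_FMT, &fmt);

        /* Source/destination rates are integer-menu controls in this series;
         * the values below are menu indices (e.g. 8000 Hz -> 16000 Hz). */
        memset(&ctrl, 0, sizeof(ctrl));
        ctrl.id = V4L2_CID_M2M_AUDIO_SOURCE_RATE;
        ctrl.value = 0;
        ioctl(fd, VIDIOC_S_CTRL, &ctrl);
        ctrl.id = V4L2_CID_M2M_AUDIO_DEST_RATE;
        ctrl.value = 1;
        ioctl(fd, VIDIOC_S_CTRL, &ctrl);

        /* One MMAP buffer per queue */
        memset(&req, 0, sizeof(req));
        req.count = 1;
        req.memory = V4L2_MEMORY_MMAP;
        req.type = V4L2_BUF_TYPE_AUDIO_OUTPUT;
        ioctl(fd, VIDIOC_REQBUFS, &req);
        req.type = V4L2_BUF_TYPE_AUDIO_CAPTURE;
        ioctl(fd, VIDIOC_REQBUFS, &req);

        /* mmap() the OUTPUT buffer (via VIDIOC_QUERYBUF), fill it with
         * samples and set buf.bytesused before queueing -- skipped here. */
        memset(&buf, 0, sizeof(buf));
        buf.index = 0;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.type = V4L2_BUF_TYPE_AUDIO_OUTPUT;
        ioctl(fd, VIDIOC_QBUF, &buf);
        buf.type = V4L2_BUF_TYPE_AUDIO_CAPTURE;
        ioctl(fd, VIDIOC_QBUF, &buf);

        type = V4L2_BUF_TYPE_AUDIO_OUTPUT;
        ioctl(fd, VIDIOC_STREAMON, &type);
        type = V4L2_BUF_TYPE_AUDIO_CAPTURE;
        ioctl(fd, VIDIOC_STREAMON, &type);

        /* The dequeued CAPTURE buffer carries the converted samples;
         * buf.bytesused is the payload produced by the device. */
        buf.type = V4L2_BUF_TYPE_AUDIO_CAPTURE;
        ioctl(fd, VIDIOC_DQBUF, &buf);

        close(fd);
        return 0;
}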
Hey Shengjiu,
first of all thanks for all of this work and I am very sorry for only chiming in this late in the series; I sadly didn't notice it earlier.
I would like to voice a few concerns about the general idea of adding Audio support to the Media subsystem.
1. The biggest objection is that the Linux kernel has a subsystem specifically targeted at audio devices; adding support for these devices in another subsystem is counterproductive, as it works around the shortcomings of the audio subsystem while forcing support for a device into a subsystem that was never designed for such devices. Instead, the audio subsystem has to be adjusted to be able to support all of the required workflows; otherwise, the next audio driver with similar requirements will have to move to the media subsystem as well, the audio subsystem would then never experience the required change, and soon we would have two audio subsystems.
2. Closely connected to the previous objection, the media subsystem with its current staff of maintainers is overworked and barely capable of handling the workload, which includes an abundance of different devices from DVB, codecs, cameras, PCI devices, radio tuners, HDMI CEC, IR receivers, etc. Adding more device types to this matrix will make the situation worse and should only be done with a plan for how first to improve the current maintainer situation.
3. By using the same framework and APIs as the video codecs, the audio codecs are going to cause extra work for the video codec developers and maintainers simply by occupying the same space that was originally designed for video only. Even if you try not to cause any extra stress, the simple presence of the audio code in the codebase is going to cause restrictions.
The main issue here is that the audio subsystem doesn't provide a mem2mem framework, and I would say you are in luck, because the media subsystem has gathered a lot of shortcomings with its current implementation of the mem2mem framework over time, which is why a new implementation will be necessary anyway.
So instead of hammering a driver into the wrong destination, I would suggest joining forces and implementing a general memory-to-memory framework that both the media and the audio subsystem can use, one that addresses the current shortcomings of the implementation and allows you to land the driver where it is supposed to be. This is going to cause restrictions as well, as mentioned in concern number 3, but with the difference that we can make a general plan for such a framework that accommodates lots of use cases, and each subsystem can add its routines on top of the general framework.
Another possible alternative is to try and make the DRM scheduler more generally available; this scheduler is the most mature and in fact is very similar to what you and the media devices need, which again just shows how common your use case actually is and how a general solution is the best long-term solution.
Please notice that Daniel Almeida is currently working on something related to this: https://lore.kernel.org/linux-media/3F80AC0D-DCAA-4EDE-BF58-BB1369C7EDCA@col...
If the top-level maintainers decide to add the patchset, so be it, but I wanted to voice my concerns and also highlight that this is likely going to cause extra stress for the video codec maintainers and the maintainers in general. We cannot spend a lot of time on audio codecs, as video codecs already fill up our available time sufficiently, so the use of the framework needs to be conservative and cause as little extra work as possible for the original use case of the framework.
Regards, Sebastian
On 19.03.2024 15:50, Shengjiu Wang wrote:
Audio signal processing also has the requirement for memory to memory similar as Video.
This asrc memory to memory (memory ->asrc->memory) case is a non real time use case.
User fills the input buffer to the asrc module, after conversion, then asrc sends back the output buffer to user. So it is not a traditional ALSA playback and capture case.
It is a specific use case, there is no reference in current kernel. v4l2 memory to memory is the closed implementation, v4l2 current support video, image, radio, tuner, touch devices, so it is not complicated to add support for this specific audio case.
Because we had implemented the "memory -> asrc ->i2s device-> codec" use case in ALSA. Now the "memory->asrc->memory" needs to reuse the code in asrc driver, so the first 3 patches is for refining the code to make it can be shared by the "memory->asrc->memory" driver.
The main change is in the v4l2 side, A /dev/vl4-audioX will be created, user applications only use the ioctl of v4l2 framework.
Other change is to add memory to memory support for two kinds of i.MX ASRC module.
changes in v15:
- update MAINTAINERS for imx-asrc.c and vim2m-audio.c
changes in v14:
- document the reservation of 'AUXX' fourcc format.
- add v4l2_audfmt_to_fourcc() definition.
changes in v13
- change 'pixelformat' to 'audioformat' in dev-audio-mem2mem.rst
- add more description for clock drift in ext-ctrls-audio-m2m.rst
- Add "media: v4l2-ctrls: add support for fraction_bits" from Hans
to avoid build issue for kernel test robot
changes in v12
- minor changes according to comments
- drop min_buffers_needed = 1 and V4L2_CTRL_FLAG_UPDATE flag
- drop bus_info
changes in v11
- add add-fixed-point-test-controls in vivid.
- add v4l2_ctrl_fp_compose() helper function for min and max
changes in v10
- remove FIXED_POINT type
- change code base on media: v4l2-ctrls: add support for fraction_bits
- fix issue reported by kernel test robot
- remove module_alias
changes in v9:
- add MEDIA_ENT_F_PROC_AUDIO_RESAMPLER.
- add MEDIA_INTF_T_V4L_AUDIO
- add media controller support
- refine the vim2m-audio to support 8k<->16k conversion.
changes in v8:
- refine V4L2_CAP_AUDIO_M2M to be 0x00000008
- update doc for FIXED_POINT
- address comments for imx-asrc
changes in v7:
- add acked-by from Mark
- separate commit for fixed point, m2m audio class, audio rate controls
- use INTEGER_MENU for rate, FIXED_POINT for rate offset
- remove used fmts
- address other comments for Hans
changes in v6:
- use m2m_prepare/m2m_unprepare/m2m_start/m2m_stop to replace
m2m_start_part_one/m2m_stop_part_one, m2m_start_part_two/m2m_stop_part_two.
- change V4L2_CTRL_TYPE_ASRC_RATE to V4L2_CTRL_TYPE_FIXED_POINT
- fix warning by kernel test rebot
- remove some unused format V4L2_AUDIO_FMT_XX
- Get SNDRV_PCM_FORMAT from V4L2_AUDIO_FMT in driver.
- rename audm2m to viaudm2m.
changes in v5:
- remove V4L2_AUDIO_FMT_LPCM
- define audio pixel format like V4L2_AUDIO_FMT_S8...
- remove rate and format in struct v4l2_audio_format.
- Add V4L2_CID_ASRC_SOURCE_RATE and V4L2_CID_ASRC_DEST_RATE controls
- updata document accordingly.
changes in v4:
- update document style
- separate V4L2_AUDIO_FMT_LPCM and V4L2_CAP_AUDIO_M2M in separate commit
changes in v3:
- Modify documents for adding audio m2m support
- Add audio virtual m2m driver
- Defined V4L2_AUDIO_FMT_LPCM format type for audio.
- Defined V4L2_CAP_AUDIO_M2M capability type for audio m2m case.
- with modification in v4l-utils, pass v4l2-compliance test.
changes in v2:
- decouple the implementation in v4l2 and ALSA
- implement the memory to memory driver as a platfrom driver
and move it to driver/media
- move fsl_asrc_common.h to include/sound folder
Hans Verkuil (1): media: v4l2-ctrls: add support for fraction_bits
Shengjiu Wang (15): ASoC: fsl_asrc: define functions for memory to memory usage ASoC: fsl_easrc: define functions for memory to memory usage ASoC: fsl_asrc: move fsl_asrc_common.h to include/sound ASoC: fsl_asrc: register m2m platform device ASoC: fsl_easrc: register m2m platform device media: uapi: Add V4L2_CAP_AUDIO_M2M capability flag media: v4l2: Add audio capture and output support media: uapi: Define audio sample format fourcc type media: uapi: Add V4L2_CTRL_CLASS_M2M_AUDIO media: uapi: Add audio rate controls support media: uapi: Declare interface types for Audio media: uapi: Add an entity type for audio resampler media: vivid: add fixed point test controls media: imx-asrc: Add memory to memory driver media: vim2m-audio: add virtual driver for audio memory to memory
.../media/mediactl/media-types.rst | 11 + .../userspace-api/media/v4l/buffer.rst | 6 + .../userspace-api/media/v4l/common.rst | 1 + .../media/v4l/dev-audio-mem2mem.rst | 71 + .../userspace-api/media/v4l/devices.rst | 1 + .../media/v4l/ext-ctrls-audio-m2m.rst | 59 + .../userspace-api/media/v4l/pixfmt-audio.rst | 100 ++ .../userspace-api/media/v4l/pixfmt.rst | 1 + .../media/v4l/vidioc-enum-fmt.rst | 2 + .../media/v4l/vidioc-g-ext-ctrls.rst | 4 + .../userspace-api/media/v4l/vidioc-g-fmt.rst | 4 + .../media/v4l/vidioc-querycap.rst | 3 + .../media/v4l/vidioc-queryctrl.rst | 11 +- .../media/videodev2.h.rst.exceptions | 3 + MAINTAINERS | 17 + .../media/common/videobuf2/videobuf2-v4l2.c | 4 + drivers/media/platform/nxp/Kconfig | 13 + drivers/media/platform/nxp/Makefile | 1 + drivers/media/platform/nxp/imx-asrc.c | 1256 +++++++++++++++++ drivers/media/test-drivers/Kconfig | 10 + drivers/media/test-drivers/Makefile | 1 + drivers/media/test-drivers/vim2m-audio.c | 793 +++++++++++ drivers/media/test-drivers/vivid/vivid-core.h | 2 + .../media/test-drivers/vivid/vivid-ctrls.c | 26 + drivers/media/v4l2-core/v4l2-compat-ioctl32.c | 9 + drivers/media/v4l2-core/v4l2-ctrls-api.c | 1 + drivers/media/v4l2-core/v4l2-ctrls-core.c | 93 +- drivers/media/v4l2-core/v4l2-ctrls-defs.c | 10 + drivers/media/v4l2-core/v4l2-dev.c | 21 + drivers/media/v4l2-core/v4l2-ioctl.c | 66 + drivers/media/v4l2-core/v4l2-mem2mem.c | 13 +- include/media/v4l2-ctrls.h | 13 +- include/media/v4l2-dev.h | 2 + include/media/v4l2-ioctl.h | 34 + .../fsl => include/sound}/fsl_asrc_common.h | 60 + include/uapi/linux/media.h | 2 + include/uapi/linux/v4l2-controls.h | 9 + include/uapi/linux/videodev2.h | 50 +- sound/soc/fsl/fsl_asrc.c | 144 ++ sound/soc/fsl/fsl_asrc.h | 4 +- sound/soc/fsl/fsl_asrc_dma.c | 2 +- sound/soc/fsl/fsl_easrc.c | 233 +++ sound/soc/fsl/fsl_easrc.h | 6 +- 43 files changed, 3145 insertions(+), 27 deletions(-) create mode 100644 Documentation/userspace-api/media/v4l/dev-audio-mem2mem.rst create mode 100644 Documentation/userspace-api/media/v4l/ext-ctrls-audio-m2m.rst create mode 100644 Documentation/userspace-api/media/v4l/pixfmt-audio.rst create mode 100644 drivers/media/platform/nxp/imx-asrc.c create mode 100644 drivers/media/test-drivers/vim2m-audio.c rename {sound/soc/fsl => include/sound}/fsl_asrc_common.h (60%)
-- 2.34.1
On 30/04/2024 10:21, Sebastian Fricke wrote:
Hey Shengjiu,
first of all thanks for all of this work and I am very sorry for only emerging this late into the series, I sadly didn't notice it earlier.
I would like to voice a few concerns about the general idea of adding Audio support to the Media subsystem.
- The biggest objection is, that the Linux Kernel has a subsystem
specifically targeted for audio devices, adding support for these devices in another subsystem are counterproductive as they work around the shortcomings of the audio subsystem while forcing support for a device into a subsystem that was never designed for such devices. Instead, the audio subsystem has to be adjusted to be able to support all of the required workflows, otherwise, the next audio driver with similar requirements will have to move to the media subsystem as well, the audio subsystem would then never experience the required change and soon we would have two audio subsystems.
- Closely connected to the previous objection, the media subsystem with
its current staff of maintainers is overworked and barely capable of handling the workload, which includes an abundance of different devices from DVB, codecs, cameras, PCI devices, radio tuners, HDMI CEC, IR receivers, etc. Adding more device types to this matrix will make the situation worse and should only be done with a plan for how first to improve the current maintainer situation.
- By using the same framework and APIs as the video codecs, the audio
codecs are going to cause extra work for the video codec developers and maintainers simply by occupying the same space that was orginally designed for the purpose of video only. Even if you try to not cause any extra stress the simple presence of the audio code in the codebase is going to cause restrictions.
The main issue here is that the audio subsystem doesn't provide a mem2mem framework and I would say you are in luck because the media subsystem has gathered a lot of shortcomings with its current implementation of the mem2mem framework over time, which is why a new implementation will be necessary anyway.
So instead of hammering a driver into the wrong destination, I would suggest bundling our forces and implementing a general memory-to-memory framework that both the media and the audio subsystem can use, that addresses the current shortcomings of the implementation and allows you to upload the driver where it is supposed to be. This is going to cause restrictions as well, like mentioned in the concern number 3, but with the difference that we can make a general plan for such a framework that accomodates lots of use cases and each subsystem can add their routines on top of the general framework.
Another possible alternative is to try and make the DRM scheduler more generally available, this scheduler is the most mature and in fact is very similar to what you and what the media devices need. Which again just shows how common your usecase actually is and how a general solution is the best long term solution.
Please notice that Daniel Almeida is currently working on something related to this: https://lore.kernel.org/linux-media/3F80AC0D-DCAA-4EDE-BF58-BB1369C7EDCA@col...
If the toplevel maintainers decide to add the patchset so be it, but I wanted to voice my concerns and also highlight that this is likely going to cause extra stress for the video codecs maintainers and the maintainers in general. We cannot spend a lot of time on audio codecs, as video codecs already fill up our available time sufficiently, so the use of the framework needs to be conservative and cause as little extra work as possible for the original use case of the framework.
I would really like to get the input of the audio maintainers on this. Sebastian has a good point, especially with us being overworked :-)
Having a shared mem2mem framework would certainly be nice, on the other hand, developing that will most likely take a substantial amount of time.
Perhaps it is possible to copy the current media v4l2-mem2mem.c and turn it into an alsa-mem2mem.c? I really do not know enough about the alsa subsystem to tell if that is possible.
While this driver is a rate converter, not an audio codec, the same principles would apply to off-line audio codecs as well. And it is true that we definitely do not want to support audio codecs in the media subsystem.
Accepting this driver creates a precedent and would open the door for audio codecs.
I may have been too hasty in saying yes to this; I did not consider the wider implications for our workload and what it can lead to. I sincerely apologize to Shengjiu Wang as it is no fun to end up in a situation like this.
Regards,
Hans
Em Tue, 30 Apr 2024 10:47:13 +0200 Hans Verkuil hverkuil@xs4all.nl escreveu:
On 30/04/2024 10:21, Sebastian Fricke wrote:
Hey Shengjiu,
first of all thanks for all of this work and I am very sorry for only emerging this late into the series, I sadly didn't notice it earlier.
I would like to voice a few concerns about the general idea of adding Audio support to the Media subsystem.
- The biggest objection is, that the Linux Kernel has a subsystem
specifically targeted for audio devices, adding support for these devices in another subsystem are counterproductive as they work around the shortcomings of the audio subsystem while forcing support for a device into a subsystem that was never designed for such devices. Instead, the audio subsystem has to be adjusted to be able to support all of the required workflows, otherwise, the next audio driver with similar requirements will have to move to the media subsystem as well, the audio subsystem would then never experience the required change and soon we would have two audio subsystems.
- Closely connected to the previous objection, the media subsystem with
its current staff of maintainers is overworked and barely capable of handling the workload, which includes an abundance of different devices from DVB, codecs, cameras, PCI devices, radio tuners, HDMI CEC, IR receivers, etc. Adding more device types to this matrix will make the situation worse and should only be done with a plan for how first to improve the current maintainer situation.
- By using the same framework and APIs as the video codecs, the audio
codecs are going to cause extra work for the video codec developers and maintainers simply by occupying the same space that was orginally designed for the purpose of video only. Even if you try to not cause any extra stress the simple presence of the audio code in the codebase is going to cause restrictions.
The main issue here is that the audio subsystem doesn't provide a mem2mem framework and I would say you are in luck because the media subsystem has gathered a lot of shortcomings with its current implementation of the mem2mem framework over time, which is why a new implementation will be necessary anyway.
So instead of hammering a driver into the wrong destination, I would suggest bundling our forces and implementing a general memory-to-memory framework that both the media and the audio subsystem can use, that addresses the current shortcomings of the implementation and allows you to upload the driver where it is supposed to be. This is going to cause restrictions as well, like mentioned in the concern number 3, but with the difference that we can make a general plan for such a framework that accomodates lots of use cases and each subsystem can add their routines on top of the general framework.
Another possible alternative is to try and make the DRM scheduler more generally available, this scheduler is the most mature and in fact is very similar to what you and what the media devices need. Which again just shows how common your usecase actually is and how a general solution is the best long term solution.
Please notice that Daniel Almeida is currently working on something related to this: https://lore.kernel.org/linux-media/3F80AC0D-DCAA-4EDE-BF58-BB1369C7EDCA@col...
If the toplevel maintainers decide to add the patchset so be it, but I wanted to voice my concerns and also highlight that this is likely going to cause extra stress for the video codecs maintainers and the maintainers in general. We cannot spend a lot of time on audio codecs, as video codecs already fill up our available time sufficiently, so the use of the framework needs to be conservative and cause as little extra work as possible for the original use case of the framework.
I would really like to get the input of the audio maintainers on this. Sebastian has a good point, especially with us being overworked :-)
Having a shared mem2mem framework would certainly be nice, on the other hand, developing that will most likely take a substantial amount of time.
Perhaps it is possible to copy the current media v4l2-mem2mem.c and turn it into an alsa-mem2mem.c? I really do not know enough about the alsa subsystem to tell if that is possible.
While this driver is a rate converter, not an audio codec, the same principles would apply to off-line audio codecs as well. And it is true that we definitely do not want to support audio codecs in the media subsystem.
Accepting this driver creates a precedent and would open the door for audio codecs.
I may have been too hasty in saying yes to this, I did not consider the wider implications for our workload and what it can lead to. I sincerely apologize to Shengjiu Wang as it is no fun to end up in a situation like this.
I agree with both Sebastian and Hans here: media devices have always had audio streams, even on old PCI analog TV devices like bttv. There are even some devices, like the ones based on the USB em28xx, that contain an AC97 chip. The decision was always to have audio supported by the ALSA APIs/subsystem, as otherwise we'll end up duplicating code and reinventing the wheel with new incompatible APIs for audio inside and outside media, creating unneeded complexity, which will end up being reflected in userspace as well.
So, IMO it makes a lot more sense to place audio codecs and processor blocks inside ALSA, probably as part of ALSA SOF, if possible.
Hans's suggestion of forking v4l2-mem2mem.c into ALSA seems like a good starting point. Also, moving the DRM mem2mem functionality to a core library that could be re-used by the three subsystems sounds like a good idea, but I suspect that a change like that would be more time-consuming.
Regards, Mauro
On Tue, Apr 30, 2024 at 10:21:12AM +0200, Sebastian Fricke wrote:
first of all thanks for all of this work and I am very sorry for only emerging this late into the series, I sadly didn't notice it earlier.
It might be worth checking out the discussion on earlier versions...
- The biggest objection is, that the Linux Kernel has a subsystem
specifically targeted for audio devices, adding support for these devices in another subsystem are counterproductive as they work around the shortcomings of the audio subsystem while forcing support for a device into a subsystem that was never designed for such devices. Instead, the audio subsystem has to be adjusted to be able to support all of the required workflows, otherwise, the next audio driver with similar requirements will have to move to the media subsystem as well, the audio subsystem would then never experience the required change and soon we would have two audio subsystems.
The discussion around this originally was that all the audio APIs are very much centered around real time operations rather than completely async memory to memory operations and that it's not clear that it's worth reinventing the wheel simply for the sake of having things in ALSA when that's already pretty idiomatic for the media subsystem. It wasn't the memory to memory bit per se, it was the disconnection from any timing.
So instead of hammering a driver into the wrong destination, I would suggest bundling our forces and implementing a general memory-to-memory framework that both the media and the audio subsystem can use, that addresses the current shortcomings of the implementation and allows you to upload the driver where it is supposed to be.
That doesn't sound like an immediate solution to maintainer overload issues... if something like this is going to happen the DRM solution does seem more general but I'm not sure the amount of stop energy is proportionate.
On 30. 04. 24 16:46, Mark Brown wrote:
So instead of hammering a driver into the wrong destination, I would suggest bundling our forces and implementing a general memory-to-memory framework that both the media and the audio subsystem can use, that addresses the current shortcomings of the implementation and allows you to upload the driver where it is supposed to be.
That doesn't sound like an immediate solution to maintainer overload issues... if something like this is going to happen the DRM solution does seem more general but I'm not sure the amount of stop energy is proportionate.
The "do what you want" ALSA hwdep device/interface can be used to transfer data in and out of the SRC using custom read/write/ioctl/mmap syscalls. The question is whether the changes could be kept simpler for a first implementation, keeping the hardware enumeration in the one subsystem where the driver code is placed. I also see the benefit of reusing an already existing framework (but is v4l2 the right one?).
Jaroslav
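For reference, the hwdep route suggested here would look roughly like the following on the driver side (a sketch only: the job structure and ioctl number are hypothetical, there is no existing ALSA uAPI for an SRC of this kind):

#include <linux/uaccess.h>
#include <sound/core.h>
#include <sound/hwdep.h>

/* Hypothetical job description passed from userspace -- not a real uAPI. */
struct src_m2m_job {
        __u64 in_ptr;           /* user pointer to input samples */
        __u32 in_len;           /* bytes of input */
        __u64 out_ptr;          /* user pointer for converted samples */
        __u32 out_len;          /* in: capacity, out: bytes produced */
        __u32 in_rate;
        __u32 out_rate;
};

#define SRC_M2M_IOCTL_CONVERT   _IOWR('H', 0x80, struct src_m2m_job)

static int src_hwdep_ioctl(struct snd_hwdep *hw, struct file *file,
                           unsigned int cmd, unsigned long arg)
{
        struct src_m2m_job job;

        if (cmd != SRC_M2M_IOCTL_CONVERT)
                return -ENOTTY;
        if (copy_from_user(&job, (void __user *)arg, sizeof(job)))
                return -EFAULT;

        /* ...copy the input in, run the ASRC pair, copy the result back,
         * update job.out_len... */

        if (copy_to_user((void __user *)arg, &job, sizeof(job)))
                return -EFAULT;
        return 0;
}

static int src_hwdep_create(struct snd_card *card)
{
        struct snd_hwdep *hwdep;
        int err;

        err = snd_hwdep_new(card, "ASRC M2M", 0, &hwdep);
        if (err < 0)
                return err;

        /* a new SNDRV_HWDEP_IFACE_* id would have to be reserved for this */
        hwdep->ops.ioctl = src_hwdep_ioctl;
        return 0;
}

The enumeration would then stay with the sound card, which is the point made above, at the cost of defining a new one-off ioctl interface instead of reusing the existing mem2mem semantics.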
Em Tue, 30 Apr 2024 23:46:03 +0900 Mark Brown broonie@kernel.org escreveu:
On Tue, Apr 30, 2024 at 10:21:12AM +0200, Sebastian Fricke wrote:
first of all thanks for all of this work and I am very sorry for only emerging this late into the series, I sadly didn't notice it earlier.
It might be worth checking out the discussion on earlier versions...
- The biggest objection is, that the Linux Kernel has a subsystem
specifically targeted for audio devices, adding support for these devices in another subsystem are counterproductive as they work around the shortcomings of the audio subsystem while forcing support for a device into a subsystem that was never designed for such devices. Instead, the audio subsystem has to be adjusted to be able to support all of the required workflows, otherwise, the next audio driver with similar requirements will have to move to the media subsystem as well, the audio subsystem would then never experience the required change and soon we would have two audio subsystems.
The discussion around this originally was that all the audio APIs are very much centered around real time operations rather than completely async memory to memory operations and that it's not clear that it's worth reinventing the wheel simply for the sake of having things in ALSA when that's already pretty idiomatic for the media subsystem. It wasn't the memory to memory bit per se, it was the disconnection from any timing.
The media subsystem is also centered around real time. Without real time, you can't have a decent video conference system. Having mem2mem transfers actually helps reduce real time delays, as it avoids extra latency due to CPU congestion and/or data transfers from/to userspace.
So instead of hammering a driver into the wrong destination, I would suggest joining forces and implementing a general memory-to-memory framework that both the media and the audio subsystems can use, one that addresses the current shortcomings of the implementation and allows you to land the driver where it is supposed to be.
That doesn't sound like an immediate solution to maintainer overload issues... if something like this is going to happen the DRM solution does seem more general but I'm not sure the amount of stop energy is proportionate.
I don't think maintainer overload is the issue here. The main point is to avoid a fork at the audio uAPI, plus the burden of re-inventing the wheel with new codes for audio formats, new documentation for them, etc.
Regards, Mauro
On Tue, Apr 30, 2024 at 05:27:52PM +0100, Mauro Carvalho Chehab wrote:
Mark Brown broonie@kernel.org escreveu:
On Tue, Apr 30, 2024 at 10:21:12AM +0200, Sebastian Fricke wrote:
The discussion around this originally was that all the audio APIs are very much centered around real time operations rather than completely
The media subsystem is also centered around real time. Without real time, you can't have a decent video conference system. Having mem2mem transfers actually helps reduce real time delays, as it avoids extra latency due to CPU congestion and/or data transfers from/to userspace.
Real time means strongly tied to wall clock times rather than fast - the issue was that all the ALSA APIs are based around pushing data through the system based on a clock.
That doesn't sound like an immediate solution to maintainer overload issues... if something like this is going to happen the DRM solution does seem more general but I'm not sure the amount of stop energy is proportionate.
I don't think maintainer overload is the issue here. The main point is to avoid a fork at the audio uAPI, plus the burden of re-inventing the wheel with new codes for audio formats, new documentation for them, etc.
I thought that discussion had been had already at one of the earlier versions? TBH I've not really been paying attention to this since the very early versions where I raised some similar "why is this in media" points and I thought everyone had decided that this did actually make sense.
On Wed, 01 May 2024 03:56:15 +0200, Mark Brown wrote:
On Tue, Apr 30, 2024 at 05:27:52PM +0100, Mauro Carvalho Chehab wrote:
Mark Brown broonie@kernel.org escreveu:
On Tue, Apr 30, 2024 at 10:21:12AM +0200, Sebastian Fricke wrote:
The discussion around this originally was that all the audio APIs are very much centered around real time operations rather than completely
The media subsystem is also centered around real time. Without real time, you can't have a decent video conference system. Having mem2mem transfers actually helps reduce real time delays, as it avoids extra latency due to CPU congestion and/or data transfers from/to userspace.
Real time means strongly tied to wall clock times rather than fast - the issue was that all the ALSA APIs are based around pushing data through the system based on a clock.
That doesn't sound like an immediate solution to maintainer overload issues... if something like this is going to happen the DRM solution does seem more general but I'm not sure the amount of stop energy is proportionate.
I don't think maintainer overload is the issue here. The main point is to avoid a fork at the audio uAPI, plus the burden of re-inventing the wheel with new codes for audio formats, new documentation for them, etc.
I thought that discussion had been had already at one of the earlier versions? TBH I've not really been paying attention to this since the very early versions where I raised some similar "why is this in media" points and I thought everyone had decided that this did actually make sense.
Yeah, it was discussed in v1 and v2 threads, e.g. https://patchwork.kernel.org/project/linux-media/cover/1690265540-25999-1-gi...
My argument at that time was about how the operation would work, and the point was that it'd be a "batch-like" operation via M2M without any timing control. It'd be a very special usage for ALSA, and if anything, it'd be hwdep -- that is, a very hardware-specific API implementation -- or the compress-offload API, which looks dubious.
OTOH, the argument was that there is already a framework for M2M in the media API and that it also fits the batch-like operation. So the thread evolved until now.
thanks,
Takashi
Em Thu, 02 May 2024 09:46:14 +0200 Takashi Iwai tiwai@suse.de escreveu:
On Wed, 01 May 2024 03:56:15 +0200, Mark Brown wrote:
On Tue, Apr 30, 2024 at 05:27:52PM +0100, Mauro Carvalho Chehab wrote:
Mark Brown broonie@kernel.org escreveu:
On Tue, Apr 30, 2024 at 10:21:12AM +0200, Sebastian Fricke wrote:
The discussion around this originally was that all the audio APIs are very much centered around real time operations rather than completely
The media subsystem is also centered around real time. Without real time, you can't have a decent video conference system. Having mem2mem transfers actually helps reduce real time delays, as it avoids extra latency due to CPU congestion and/or data transfers from/to userspace.
Real time means strongly tied to wall clock times rather than fast - the issue was that all the ALSA APIs are based around pushing data through the system based on a clock.
That doesn't sound like an immediate solution to maintainer overload issues... if something like this is going to happen the DRM solution does seem more general but I'm not sure the amount of stop energy is proportionate.
I don't think maintainer overload is the issue here. The main point is to avoid a fork at the audio uAPI, plus the burden of re-inventing the wheel with new codes for audio formats, new documentation for them, etc.
I thought that discussion had been had already at one of the earlier versions? TBH I've not really been paying attention to this since the very early versions where I raised some similar "why is this in media" points and I thought everyone had decided that this did actually make sense.
Yeah, it was discussed in v1 and v2 threads, e.g. https://patchwork.kernel.org/project/linux-media/cover/1690265540-25999-1-gi...
My argument at that time was about how the operation would work, and the point was that it'd be a "batch-like" operation via M2M without any timing control. It'd be a very special usage for ALSA, and if anything, it'd be hwdep -- that is, a very hardware-specific API implementation -- or the compress-offload API, which looks dubious.
OTOH, the argument was that there is already a framework for M2M in the media API and that it also fits the batch-like operation. So the thread evolved until now.
M2M transfers are not a hardware-specific API, and such kinds of transfers are not new either. Old media devices like bttv internally have a way to do PCI2PCI transfers, allowing media streams to be transferred directly without utilizing the CPU. The media driver supports it for video, as this made a huge difference in performance back then.
In the embedded world, this is a pretty common scenario: different media IP blocks can communicate with each other directly via memory. This can happen for video capture, video display and audio.
With M2M, most of the control is offloaded to the hardware.
There is still time control associated with it, as audio and video need to be in sync. This is done by controlling the buffer sizes and could be fine-tuned by checking when the buffer transfer is done.
On media, M2M buffer transfers are started via VIDIOC_QBUF, which is a request to do a frame transfer. A similar ioctl (VIDIOC_DQBUF) is used to monitor when the hardware finishes transferring the buffer. In other words, the CPU is responsible for time control.
In other words, this is still real time. The main difference from a "sync" transfer is that the CPU doesn't need to copy data from/to different devices, as such an operation is offloaded to the hardware.
Regards, Mauro
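For reference, a minimal sketch of the QBUF/DQBUF pattern described above, shown with the long-standing video OUTPUT/CAPTURE buffer types purely for illustration (the audio m2m device follows the same model); buffer setup, mmap and error handling are omitted:

/*
 * One m2m cycle: queue a filled source buffer and an empty destination
 * buffer, then block until the hardware has finished the transfer.
 */
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static void one_m2m_cycle(int fd, int index)
{
	struct v4l2_buffer buf;

	/* Queue a filled source buffer; this requests one transfer. */
	memset(&buf, 0, sizeof(buf));
	buf.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
	buf.memory = V4L2_MEMORY_MMAP;
	buf.index = index;
	ioctl(fd, VIDIOC_QBUF, &buf);

	/* Queue an empty destination buffer for the result. */
	memset(&buf, 0, sizeof(buf));
	buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	buf.memory = V4L2_MEMORY_MMAP;
	buf.index = index;
	ioctl(fd, VIDIOC_QBUF, &buf);

	/*
	 * Blocking dequeue: returns when the hardware finished the transfer,
	 * which is how the CPU keeps track of timing.
	 */
	memset(&buf, 0, sizeof(buf));
	buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	buf.memory = V4L2_MEMORY_MMAP;
	ioctl(fd, VIDIOC_DQBUF, &buf);
}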
Em Thu, 2 May 2024 09:59:56 +0100 Mauro Carvalho Chehab mchehab@kernel.org escreveu:
Em Thu, 02 May 2024 09:46:14 +0200 Takashi Iwai tiwai@suse.de escreveu:
On Wed, 01 May 2024 03:56:15 +0200, Mark Brown wrote:
On Tue, Apr 30, 2024 at 05:27:52PM +0100, Mauro Carvalho Chehab wrote:
Mark Brown broonie@kernel.org escreveu:
On Tue, Apr 30, 2024 at 10:21:12AM +0200, Sebastian Fricke wrote:
The discussion around this originally was that all the audio APIs are very much centered around real time operations rather than completely
The media subsystem is also centered around real time. Without real time, you can't have a decent video conference system. Having mem2mem transfers actually helps reduce real time delays, as it avoids extra latency due to CPU congestion and/or data transfers from/to userspace.
Real time means strongly tied to wall clock times rather than fast - the issue was that all the ALSA APIs are based around pushing data through the system based on a clock.
That doesn't sound like an immediate solution to maintainer overload issues... if something like this is going to happen the DRM solution does seem more general but I'm not sure the amount of stop energy is proportionate.
I don't think maintainer overload is the issue here. The main point is to avoid a fork at the audio uAPI, plus the burden of re-inventing the wheel with new codes for audio formats, new documentation for them, etc.
I thought that discussion had been had already at one of the earlier versions? TBH I've not really been paying attention to this since the very early versions where I raised some similar "why is this in media" points and I thought everyone had decided that this did actually make sense.
Yeah, it was discussed in v1 and v2 threads, e.g. https://patchwork.kernel.org/project/linux-media/cover/1690265540-25999-1-gi...
My argument at that time was about how the operation would work, and the point was that it'd be a "batch-like" operation via M2M without any timing control. It'd be a very special usage for ALSA, and if anything, it'd be hwdep -- that is, a very hardware-specific API implementation -- or the compress-offload API, which looks dubious.
OTOH, the argument was that there is already a framework for M2M in the media API and that it also fits the batch-like operation. So the thread evolved until now.
M2M transfers are not a hardware-specific API, and such kinds of transfers are not new either. Old media devices like bttv internally have a way to do PCI2PCI transfers, allowing media streams to be transferred directly without utilizing the CPU. The media driver supports it for video, as this made a huge difference in performance back then.
In the embedded world, this is a pretty common scenario: different media IP blocks can communicate with each other directly via memory. This can happen for video capture, video display and audio.
With M2M, most of the control is offloaded to the hardware.
There is still time control associated with it, as audio and video need to be in sync. This is done by controlling the buffer sizes and could be fine-tuned by checking when the buffer transfer is done.
On media, M2M buffer transfers are started via VIDIOC_QBUF, which is a request to do a frame transfer. A similar ioctl (VIDIOC_DQBUF) is used to monitor when the hardware finishes transferring the buffer. In other words, the CPU is responsible for time control.
Just complementing: on media, we do this per video buffer (or per half video buffer). A typical use case on cameras is to have buffers transferred 30 times per second, if the video was streamed at 30 frames per second.
I would assume that, on an audio/video stream, the audio data transfer will be programmed to also happen on a regular interval.
So, if the video stream is programmed to a 30 frames per second rate, I would assume that the associated audio stream will also be programmed to be grouped into 30 data transfers per second. In such a scenario, if the audio is sampled at 48 kHz, it means that:
1) each M2M transfer commanded by the CPU will copy 1600 samples;
2) the time between each sample will remain 1/48000;
3) a notification event telling that 1600 samples were transferred will be generated when the last sample happens;
4) the CPU will do time control by looking at the notification events.
In other words, this is still real time. The main difference from a "sync" transfer is that the CPU doesn't need to copy data from/to different devices, as such an operation is offloaded to the hardware.
Regards, Mauro
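A tiny standalone check of the arithmetic above (illustrative only, not code from the series):

#include <stdio.h>

int main(void)
{
	unsigned int rate = 48000;	/* audio sample rate in Hz */
	unsigned int fps = 30;		/* transfers (video frames) per second */

	/* prints "1600 samples per transfer, one completion event every 33.33 ms" */
	printf("%u samples per transfer, one completion event every %.2f ms\n",
	       rate / fps, 1000.0 / fps);
	return 0;
}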
On Thu, May 02, 2024 at 10:26:43AM +0100, Mauro Carvalho Chehab wrote:
Mauro Carvalho Chehab mchehab@kernel.org escreveu:
There is still time control associated with it, as audio and video need to be in sync. This is done by controlling the buffer sizes and could be fine-tuned by checking when the buffer transfer is done.
...
Just complementing: on media, we do this per video buffer (or per half video buffer). A typical use case on cameras is to have buffers transferred 30 times per second, if the video was streamed at 30 frames per second.
IIRC some big use case for this hardware was transcoding so there was a desire to just go at whatever rate the hardware could support as there is no interactive user consuming the output as it is generated.
I would assume that, on an audio/video stream, the audio data transfer will be programmed to also happen on a regular interval.
With audio the API is very much "wake userspace every Xms".
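To illustrate that clock-driven model, a rough sketch using plain alsa-lib calls; the "default" device and the 48 kHz / 10 ms / 20 ms numbers are illustrative assumptions, not taken from this series:

#include <alsa/asoundlib.h>

int main(void)
{
	snd_pcm_t *pcm;
	short frame_buf[480 * 2];	/* 10 ms of stereo S16 at 48 kHz */

	snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0);
	/* 48 kHz stereo S16, roughly 20 ms of total buffering. */
	snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
			   SND_PCM_ACCESS_RW_INTERLEAVED,
			   2, 48000, 1, 20000);

	for (;;) {
		/* ... fill frame_buf with the next 10 ms of audio ... */

		/*
		 * writei blocks until there is room in the ring buffer, so the
		 * application is effectively woken every period; missing that
		 * deadline results in an underrun (xrun).
		 */
		if (snd_pcm_writei(pcm, frame_buf, 480) < 0)
			snd_pcm_prepare(pcm);	/* recover from an xrun */
	}
}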
Em Fri, 3 May 2024 10:47:19 +0900 Mark Brown broonie@kernel.org escreveu:
On Thu, May 02, 2024 at 10:26:43AM +0100, Mauro Carvalho Chehab wrote:
Mauro Carvalho Chehab mchehab@kernel.org escreveu:
There is still time control associated with it, as audio and video need to be in sync. This is done by controlling the buffer sizes and could be fine-tuned by checking when the buffer transfer is done.
...
Just complementing: on media, we do this per video buffer (or per half video buffer). A typical use case on cameras is to have buffers transferred 30 times per second, if the video was streamed at 30 frames per second.
IIRC some big use case for this hardware was transcoding so there was a desire to just go at whatever rate the hardware could support as there is no interactive user consuming the output as it is generated.
Indeed, codecs could be used to just do transcoding, but I would expect it to be a borderline use case. See, as the chipsets implementing codecs are typically the ones used on mobiles, I would expect the major use cases to be watching audio and video and participating in audio/video conferences.
Going further, the codec API may end up supporting not only transcoding (which is something that the CPU can usually handle without too much processing) but also audio processing that may require more complex algorithms - even deep learning ones - like background noise removal, echo detection/removal, volume auto-gain, audio enhancement and such.
In other words, the typical use cases will have either the input or the output being physical hardware (a microphone or speaker).
I would assume that, on an audio/video stream, the audio data transfer will be programmed to also happen on a regular interval.
With audio the API is very much "wake userspace every Xms".
On Fri, May 3, 2024 at 4:42 PM Mauro Carvalho Chehab mchehab@kernel.org wrote:
Em Fri, 3 May 2024 10:47:19 +0900 Mark Brown broonie@kernel.org escreveu:
On Thu, May 02, 2024 at 10:26:43AM +0100, Mauro Carvalho Chehab wrote:
Mauro Carvalho Chehab mchehab@kernel.org escreveu:
There is still time control associated with it, as audio and video need to be in sync. This is done by controlling the buffer sizes and could be fine-tuned by checking when the buffer transfer is done.
...
Just complementing: on media, we do this per video buffer (or per half video buffer). A typical use case on cameras is to have buffers transferred 30 times per second, if the video was streamed at 30 frames per second.
IIRC some big use case for this hardware was transcoding so there was a desire to just go at whatever rate the hardware could support as there is no interactive user consuming the output as it is generated.
Indeed, codecs could be used to just do transcoding, but I would expect it to be a border use case. See, as the chipsets implementing codecs are typically the ones used on mobiles, I would expect that the major use cases to be to watch audio and video and to participate on audio/video conferences.
Going further, the codec API may end supporting not only transcoding (which is something that CPU can usually handle without too much processing) but also audio processing that may require more complex algorithms - even deep learning ones - like background noise removal, echo detection/removal, volume auto-gain, audio enhancement and such.
On other words, the typical use cases will either have input or output being a physical hardware (microphone or speaker).
All, thanks for spending time to discuss this; it seems we are back at the starting point of this topic again.
Our main request is that there is a hardware sample rate converter on the chip, so users can use it in user space as a component like a software sample rate converter. It will mostly run as a gstreamer plugin, so it is a memory to memory component.
I didn't find such an API in ALSA for this purpose; the best option I found for this in the kernel is the V4L2 memory to memory framework. As Hans said, it is well designed for memory to memory.
And I think audio is one kind of 'media'. As I can see, part of the Radio function is in ALSA and part of the Radio function is in V4L2; part of the HDMI function is in DRM and part of the HDMI function is in ALSA... So using V4L2 for audio is not new from this point of view.
Even now I still think V4L2 is the best option, but it looks like there are a lot of objections. If we develop a new ALSA-mem2mem, it is also a duplication of code (a bigger duplication than just adding audio support in V4L2, I think).
Best regards Shengjiu Wang.
I would assume that, on an audio/video stream, the audio data transfer will be programmed to also happen on a regular interval.
With audio the API is very much "wake userspace every Xms".
On 06. 05. 24 10:49, Shengjiu Wang wrote:
Even now I still think V4L2 is the best option, but it looks like there are a lot of rejects. If develop a new ALSA-mem2mem, it is also a duplication of code (bigger duplication that just add audio support in V4L2 I think).
Maybe not. Could you try to evaluate a pure dma-buf (drivers/dma-buf) solution and add only an enumeration and operation trigger mechanism to the ALSA API? It seems that dma-buf has sufficient code to transfer data from and to the kernel space for the further processing. I think that one buffer can serve as the source and a second one for the processed data.
We can eventually add new ioctls to ALSA's control API (/dev/snd/control*) for this purpose (DSP processing).
Jaroslav
On 06/05/2024 10:49, Shengjiu Wang wrote:
On Fri, May 3, 2024 at 4:42 PM Mauro Carvalho Chehab mchehab@kernel.org wrote:
Em Fri, 3 May 2024 10:47:19 +0900 Mark Brown broonie@kernel.org escreveu:
On Thu, May 02, 2024 at 10:26:43AM +0100, Mauro Carvalho Chehab wrote:
Mauro Carvalho Chehab mchehab@kernel.org escreveu:
There is still time control associated with it, as audio and video need to be in sync. This is done by controlling the buffer sizes and could be fine-tuned by checking when the buffer transfer is done.
...
Just complementing: on media, we do this per video buffer (or per half video buffer). A typical use case on cameras is to have buffers transferred 30 times per second, if the video was streamed at 30 frames per second.
IIRC some big use case for this hardware was transcoding so there was a desire to just go at whatever rate the hardware could support as there is no interactive user consuming the output as it is generated.
Indeed, codecs could be used to just do transcoding, but I would expect it to be a border use case. See, as the chipsets implementing codecs are typically the ones used on mobiles, I would expect that the major use cases to be to watch audio and video and to participate on audio/video conferences.
Going further, the codec API may end supporting not only transcoding (which is something that CPU can usually handle without too much processing) but also audio processing that may require more complex algorithms - even deep learning ones - like background noise removal, echo detection/removal, volume auto-gain, audio enhancement and such.
On other words, the typical use cases will either have input or output being a physical hardware (microphone or speaker).
All, thanks for spending time to discuss, it seems we go back to the start point of this topic again.
Our main request is that there is a hardware sample rate converter on the chip, so users can use it in user space as a component like software sample rate converter. It mostly may run as a gstreamer plugin. so it is a memory to memory component.
I didn't find such API in ALSA for such purpose, the best option for this in the kernel is the V4L2 memory to memory framework I found. As Hans said it is well designed for memory to memory.
And I think audio is one of 'media'. As I can see that part of Radio function is in ALSA, part of Radio function is in V4L2. part of HDMI function is in DRM, part of HDMI function is in ALSA... So using V4L2 for audio is not new from this point of view.
Even now I still think V4L2 is the best option, but it looks like there are a lot of rejects. If develop a new ALSA-mem2mem, it is also a duplication of code (bigger duplication that just add audio support in V4L2 I think).
After reading this thread I still believe that the mem2mem framework is a reasonable option, unless someone can come up with a method that is easy to implement in the alsa subsystem. From what I can tell from this discussion no such method exists.
From the media side there are arguments that it adds extra maintenance load, which is true, but I believe that it is quite limited in practice.
That said, perhaps we should make a statement that while we support the use of audio m2m drivers, this is only for simple m2m audio processing like this driver, specifically where there is a 1-to-1 mapping between input and output buffers. At this point we do not want to add audio codec support or similar complex audio processing.
Part of the reason is that codecs are hard, and we already have our hands full with all the video codecs. Part of the reason is that the v4l2-mem2mem framework probably needs to be forked to make a more advanced version geared towards codecs since the current framework is too limiting for some of the things we want to do. It was really designed for scalers, deinterlacers, etc. and the codec support was added later.
If we ever allow such complex audio processing devices, then we would have to have another discussion, and I believe that will only be possible if most of the maintenance load would be on the alsa subsystem where the audio experts are.
So my proposal is to:
1) add a clear statement to dev-audio-mem2mem.rst (patch 08/16) that only simple audio devices with a 1-to-1 mapping of input to output buffer are supported. Perhaps also in videodev2.h before struct v4l2_audio_format.
2) I will experiment a bit trying to solve the main complaint about creating new audio fourcc values and thus duplicating existing SNDRV_PCM_FORMAT_ values. I have some ideas for that.
But I do not want to spend time on 2 until we agree that this is the way forward.
Regards,
Hans
On 5/8/2024 10:00 AM, Hans Verkuil wrote:
On 06/05/2024 10:49, Shengjiu Wang wrote:
On Fri, May 3, 2024 at 4:42 PM Mauro Carvalho Chehab mchehab@kernel.org wrote:
Em Fri, 3 May 2024 10:47:19 +0900 Mark Brown broonie@kernel.org escreveu:
On Thu, May 02, 2024 at 10:26:43AM +0100, Mauro Carvalho Chehab wrote:
Mauro Carvalho Chehab mchehab@kernel.org escreveu:
There is still time control associated with it, as audio and video need to be in sync. This is done by controlling the buffer sizes and could be fine-tuned by checking when the buffer transfer is done.
...
Just complementing: on media, we do this per video buffer (or per half video buffer). A typical use case on cameras is to have buffers transferred 30 times per second, if the video was streamed at 30 frames per second.
IIRC some big use case for this hardware was transcoding so there was a desire to just go at whatever rate the hardware could support as there is no interactive user consuming the output as it is generated.
Indeed, codecs could be used to just do transcoding, but I would expect it to be a border use case. See, as the chipsets implementing codecs are typically the ones used on mobiles, I would expect that the major use cases to be to watch audio and video and to participate on audio/video conferences.
Going further, the codec API may end supporting not only transcoding (which is something that CPU can usually handle without too much processing) but also audio processing that may require more complex algorithms - even deep learning ones - like background noise removal, echo detection/removal, volume auto-gain, audio enhancement and such.
On other words, the typical use cases will either have input or output being a physical hardware (microphone or speaker).
All, thanks for spending time to discuss, it seems we go back to the start point of this topic again.
Our main request is that there is a hardware sample rate converter on the chip, so users can use it in user space as a component like software sample rate converter. It mostly may run as a gstreamer plugin. so it is a memory to memory component.
I didn't find such API in ALSA for such purpose, the best option for this in the kernel is the V4L2 memory to memory framework I found. As Hans said it is well designed for memory to memory.
And I think audio is one of 'media'. As I can see that part of Radio function is in ALSA, part of Radio function is in V4L2. part of HDMI function is in DRM, part of HDMI function is in ALSA... So using V4L2 for audio is not new from this point of view.
Even now I still think V4L2 is the best option, but it looks like there are a lot of rejects. If develop a new ALSA-mem2mem, it is also a duplication of code (bigger duplication that just add audio support in V4L2 I think).
After reading this thread I still believe that the mem2mem framework is a reasonable option, unless someone can come up with a method that is easy to implement in the alsa subsystem. From what I can tell from this discussion no such method exists.
Hi,
my main question would be: how is the mem2mem use case different from a loopback exposing playback and capture frontends to user space, with a DSP (or other piece of HW) in the middle?
Amadeusz
On Wed, May 8, 2024 at 4:14 PM Amadeusz Sławiński amadeuszx.slawinski@linux.intel.com wrote:
On 5/8/2024 10:00 AM, Hans Verkuil wrote:
On 06/05/2024 10:49, Shengjiu Wang wrote:
On Fri, May 3, 2024 at 4:42 PM Mauro Carvalho Chehab mchehab@kernel.org wrote:
Em Fri, 3 May 2024 10:47:19 +0900 Mark Brown broonie@kernel.org escreveu:
On Thu, May 02, 2024 at 10:26:43AM +0100, Mauro Carvalho Chehab wrote:
Mauro Carvalho Chehab mchehab@kernel.org escreveu:
> There are still time control associated with it, as audio and video > needs to be in sync. This is done by controlling the buffers size > and could be fine-tuned by checking when the buffer transfer is done.
...
Just complementing: on media, we do this per video buffer (or per half video buffer). A typical use case on cameras is to have buffers transferred 30 times per second, if the video was streamed at 30 frames per second.
IIRC some big use case for this hardware was transcoding so there was a desire to just go at whatever rate the hardware could support as there is no interactive user consuming the output as it is generated.
Indeed, codecs could be used to just do transcoding, but I would expect it to be a border use case. See, as the chipsets implementing codecs are typically the ones used on mobiles, I would expect that the major use cases to be to watch audio and video and to participate on audio/video conferences.
Going further, the codec API may end supporting not only transcoding (which is something that CPU can usually handle without too much processing) but also audio processing that may require more complex algorithms - even deep learning ones - like background noise removal, echo detection/removal, volume auto-gain, audio enhancement and such.
On other words, the typical use cases will either have input or output being a physical hardware (microphone or speaker).
All, thanks for spending time to discuss, it seems we go back to the start point of this topic again.
Our main request is that there is a hardware sample rate converter on the chip, so users can use it in user space as a component like software sample rate converter. It mostly may run as a gstreamer plugin. so it is a memory to memory component.
I didn't find such API in ALSA for such purpose, the best option for this in the kernel is the V4L2 memory to memory framework I found. As Hans said it is well designed for memory to memory.
And I think audio is one of 'media'. As I can see that part of Radio function is in ALSA, part of Radio function is in V4L2. part of HDMI function is in DRM, part of HDMI function is in ALSA... So using V4L2 for audio is not new from this point of view.
Even now I still think V4L2 is the best option, but it looks like there are a lot of rejects. If develop a new ALSA-mem2mem, it is also a duplication of code (bigger duplication that just add audio support in V4L2 I think).
After reading this thread I still believe that the mem2mem framework is a reasonable option, unless someone can come up with a method that is easy to implement in the alsa subsystem. From what I can tell from this discussion no such method exists.
Hi,
my main question would be how is mem2mem use case different from loopback exposing playback and capture frontends in user space with DSP (or other piece of HW) in the middle?
I think loopback has timing control: the user needs to feed data to playback at a fixed time and get data from capture at a fixed time. Otherwise there is an xrun in playback and capture.
The mem2mem case has no such timing control: the user feeds data to it, then it generates output; if the user doesn't feed data, there is no xrun. But mem2mem is just one of the components in the playback or capture pipeline; overall there is time control for the whole pipeline.
Best regards Shengjiu Wang
Amadeusz
On 5/9/2024 11:36 AM, Shengjiu Wang wrote:
On Wed, May 8, 2024 at 4:14 PM Amadeusz Sławiński amadeuszx.slawinski@linux.intel.com wrote:
On 5/8/2024 10:00 AM, Hans Verkuil wrote:
On 06/05/2024 10:49, Shengjiu Wang wrote:
On Fri, May 3, 2024 at 4:42 PM Mauro Carvalho Chehab mchehab@kernel.org wrote:
Em Fri, 3 May 2024 10:47:19 +0900 Mark Brown broonie@kernel.org escreveu:
On Thu, May 02, 2024 at 10:26:43AM +0100, Mauro Carvalho Chehab wrote: > Mauro Carvalho Chehab mchehab@kernel.org escreveu:
>> There are still time control associated with it, as audio and video >> needs to be in sync. This is done by controlling the buffers size >> and could be fine-tuned by checking when the buffer transfer is done.
...
> Just complementing: on media, we do this per video buffer (or > per half video buffer). A typical use case on cameras is to have > buffers transferred 30 times per second, if the video was streamed > at 30 frames per second.
IIRC some big use case for this hardware was transcoding so there was a desire to just go at whatever rate the hardware could support as there is no interactive user consuming the output as it is generated.
Indeed, codecs could be used to just do transcoding, but I would expect it to be a border use case. See, as the chipsets implementing codecs are typically the ones used on mobiles, I would expect that the major use cases to be to watch audio and video and to participate on audio/video conferences.
Going further, the codec API may end supporting not only transcoding (which is something that CPU can usually handle without too much processing) but also audio processing that may require more complex algorithms - even deep learning ones - like background noise removal, echo detection/removal, volume auto-gain, audio enhancement and such.
On other words, the typical use cases will either have input or output being a physical hardware (microphone or speaker).
All, thanks for spending time to discuss, it seems we go back to the start point of this topic again.
Our main request is that there is a hardware sample rate converter on the chip, so users can use it in user space as a component like software sample rate converter. It mostly may run as a gstreamer plugin. so it is a memory to memory component.
I didn't find such API in ALSA for such purpose, the best option for this in the kernel is the V4L2 memory to memory framework I found. As Hans said it is well designed for memory to memory.
And I think audio is one of 'media'. As I can see that part of Radio function is in ALSA, part of Radio function is in V4L2. part of HDMI function is in DRM, part of HDMI function is in ALSA... So using V4L2 for audio is not new from this point of view.
Even now I still think V4L2 is the best option, but it looks like there are a lot of rejects. If develop a new ALSA-mem2mem, it is also a duplication of code (bigger duplication that just add audio support in V4L2 I think).
After reading this thread I still believe that the mem2mem framework is a reasonable option, unless someone can come up with a method that is easy to implement in the alsa subsystem. From what I can tell from this discussion no such method exists.
Hi,
my main question would be how is mem2mem use case different from loopback exposing playback and capture frontends in user space with DSP (or other piece of HW) in the middle?
I think loopback has a timing control, user need to feed data to playback at a fixed time and get data from capture at a fixed time. Otherwise there is xrun in playback and capture.
mem2mem case: there is no such timing control, user feeds data to it then it generates output, if user doesn't feed data, there is no xrun. but mem2mem is just one of the components in the playback or capture pipeline, overall there is time control for whole pipeline,
Have you looked at compress streams? If I remember correctly they are not tied to time due to the fact that they can pass data in arbitrary formats?
From: https://docs.kernel.org/sound/designs/compress-offload.html
"No notion of underrun/overrun. Since the bytes written are compressed in nature and data written/read doesn’t translate directly to rendered output in time, this does not deal with underrun/overrun and maybe dealt in user-library"
Amadeusz
On Thu, May 9, 2024 at 5:50 PM Amadeusz Sławiński amadeuszx.slawinski@linux.intel.com wrote:
On 5/9/2024 11:36 AM, Shengjiu Wang wrote:
On Wed, May 8, 2024 at 4:14 PM Amadeusz Sławiński amadeuszx.slawinski@linux.intel.com wrote:
On 5/8/2024 10:00 AM, Hans Verkuil wrote:
On 06/05/2024 10:49, Shengjiu Wang wrote:
On Fri, May 3, 2024 at 4:42 PM Mauro Carvalho Chehab mchehab@kernel.org wrote:
Em Fri, 3 May 2024 10:47:19 +0900 Mark Brown broonie@kernel.org escreveu:
> On Thu, May 02, 2024 at 10:26:43AM +0100, Mauro Carvalho Chehab wrote: >> Mauro Carvalho Chehab mchehab@kernel.org escreveu: > >>> There are still time control associated with it, as audio and video >>> needs to be in sync. This is done by controlling the buffers size >>> and could be fine-tuned by checking when the buffer transfer is done. > > ... > >> Just complementing: on media, we do this per video buffer (or >> per half video buffer). A typical use case on cameras is to have >> buffers transferred 30 times per second, if the video was streamed >> at 30 frames per second. > > IIRC some big use case for this hardware was transcoding so there was a > desire to just go at whatever rate the hardware could support as there > is no interactive user consuming the output as it is generated.
Indeed, codecs could be used to just do transcoding, but I would expect it to be a border use case. See, as the chipsets implementing codecs are typically the ones used on mobiles, I would expect that the major use cases to be to watch audio and video and to participate on audio/video conferences.
Going further, the codec API may end supporting not only transcoding (which is something that CPU can usually handle without too much processing) but also audio processing that may require more complex algorithms - even deep learning ones - like background noise removal, echo detection/removal, volume auto-gain, audio enhancement and such.
On other words, the typical use cases will either have input or output being a physical hardware (microphone or speaker).
All, thanks for spending time to discuss, it seems we go back to the start point of this topic again.
Our main request is that there is a hardware sample rate converter on the chip, so users can use it in user space as a component like software sample rate converter. It mostly may run as a gstreamer plugin. so it is a memory to memory component.
I didn't find such API in ALSA for such purpose, the best option for this in the kernel is the V4L2 memory to memory framework I found. As Hans said it is well designed for memory to memory.
And I think audio is one of 'media'. As I can see that part of Radio function is in ALSA, part of Radio function is in V4L2. part of HDMI function is in DRM, part of HDMI function is in ALSA... So using V4L2 for audio is not new from this point of view.
Even now I still think V4L2 is the best option, but it looks like there are a lot of rejects. If develop a new ALSA-mem2mem, it is also a duplication of code (bigger duplication that just add audio support in V4L2 I think).
After reading this thread I still believe that the mem2mem framework is a reasonable option, unless someone can come up with a method that is easy to implement in the alsa subsystem. From what I can tell from this discussion no such method exists.
Hi,
my main question would be how is mem2mem use case different from loopback exposing playback and capture frontends in user space with DSP (or other piece of HW) in the middle?
I think loopback has a timing control, user need to feed data to playback at a fixed time and get data from capture at a fixed time. Otherwise there is xrun in playback and capture.
mem2mem case: there is no such timing control, user feeds data to it then it generates output, if user doesn't feed data, there is no xrun. but mem2mem is just one of the components in the playback or capture pipeline, overall there is time control for whole pipeline,
Have you looked at compress streams? If I remember correctly they are not tied to time due to the fact that they can pass data in arbitrary formats?
From: https://docs.kernel.org/sound/designs/compress-offload.html
"No notion of underrun/overrun. Since the bytes written are compressed in nature and data written/read doesn’t translate directly to rendered output in time, this does not deal with underrun/overrun and maybe dealt in user-library"
I checked the compress stream. The mem2mem case is different from the compress-offload case.
The compress-offload case is a full pipeline: the user sends a compressed stream to it, then the DSP decodes it and renders it to the speaker in real time.
mem2mem is just like the decoder in the compress pipeline, which is one of the components in the pipeline.
best regards shengjiu wang
Amadeusz
On 5/9/2024 12:12 PM, Shengjiu Wang wrote:
On Thu, May 9, 2024 at 5:50 PM Amadeusz Sławiński amadeuszx.slawinski@linux.intel.com wrote:
On 5/9/2024 11:36 AM, Shengjiu Wang wrote:
On Wed, May 8, 2024 at 4:14 PM Amadeusz Sławiński amadeuszx.slawinski@linux.intel.com wrote:
On 5/8/2024 10:00 AM, Hans Verkuil wrote:
On 06/05/2024 10:49, Shengjiu Wang wrote:
On Fri, May 3, 2024 at 4:42 PM Mauro Carvalho Chehab mchehab@kernel.org wrote: > > Em Fri, 3 May 2024 10:47:19 +0900 > Mark Brown broonie@kernel.org escreveu: > >> On Thu, May 02, 2024 at 10:26:43AM +0100, Mauro Carvalho Chehab wrote: >>> Mauro Carvalho Chehab mchehab@kernel.org escreveu: >> >>>> There are still time control associated with it, as audio and video >>>> needs to be in sync. This is done by controlling the buffers size >>>> and could be fine-tuned by checking when the buffer transfer is done. >> >> ... >> >>> Just complementing: on media, we do this per video buffer (or >>> per half video buffer). A typical use case on cameras is to have >>> buffers transferred 30 times per second, if the video was streamed >>> at 30 frames per second. >> >> IIRC some big use case for this hardware was transcoding so there was a >> desire to just go at whatever rate the hardware could support as there >> is no interactive user consuming the output as it is generated. > > Indeed, codecs could be used to just do transcoding, but I would > expect it to be a border use case. See, as the chipsets implementing > codecs are typically the ones used on mobiles, I would expect that > the major use cases to be to watch audio and video and to participate > on audio/video conferences. > > Going further, the codec API may end supporting not only transcoding > (which is something that CPU can usually handle without too much > processing) but also audio processing that may require more > complex algorithms - even deep learning ones - like background noise > removal, echo detection/removal, volume auto-gain, audio enhancement > and such. > > On other words, the typical use cases will either have input > or output being a physical hardware (microphone or speaker). >
All, thanks for spending time to discuss, it seems we go back to the start point of this topic again.
Our main request is that there is a hardware sample rate converter on the chip, so users can use it in user space as a component like software sample rate converter. It mostly may run as a gstreamer plugin. so it is a memory to memory component.
I didn't find such API in ALSA for such purpose, the best option for this in the kernel is the V4L2 memory to memory framework I found. As Hans said it is well designed for memory to memory.
And I think audio is one of 'media'. As I can see that part of Radio function is in ALSA, part of Radio function is in V4L2. part of HDMI function is in DRM, part of HDMI function is in ALSA... So using V4L2 for audio is not new from this point of view.
Even now I still think V4L2 is the best option, but it looks like there are a lot of rejects. If develop a new ALSA-mem2mem, it is also a duplication of code (bigger duplication that just add audio support in V4L2 I think).
After reading this thread I still believe that the mem2mem framework is a reasonable option, unless someone can come up with a method that is easy to implement in the alsa subsystem. From what I can tell from this discussion no such method exists.
Hi,
my main question would be how is mem2mem use case different from loopback exposing playback and capture frontends in user space with DSP (or other piece of HW) in the middle?
I think loopback has a timing control, user need to feed data to playback at a fixed time and get data from capture at a fixed time. Otherwise there is xrun in playback and capture.
mem2mem case: there is no such timing control, user feeds data to it then it generates output, if user doesn't feed data, there is no xrun. but mem2mem is just one of the components in the playback or capture pipeline, overall there is time control for whole pipeline,
Have you looked at compress streams? If I remember correctly they are not tied to time due to the fact that they can pass data in arbitrary formats?
From: https://docs.kernel.org/sound/designs/compress-offload.html
"No notion of underrun/overrun. Since the bytes written are compressed in nature and data written/read doesn’t translate directly to rendered output in time, this does not deal with underrun/overrun and maybe dealt in user-library"
I checked the compress stream. mem2mem case is different with compress-offload case
compress-offload case is a full pipeline, the user sends a compress stream to it, then DSP decodes it and renders it to the speaker in real time.
mem2mem is just like the decoder in the compress pipeline. which is one of the components in the pipeline.
I was thinking of loopback with endpoints using compress streams, without physical endpoint, something like:
compress playback (to feed data from userspace) -> DSP (processing) -> compress capture (send data back to userspace)
Unless I'm missing something, you should be able to process data as fast as you can feed it and consume it in such case.
Amadeusz
On Thu, May 9, 2024 at 6:28 PM Amadeusz Sławiński amadeuszx.slawinski@linux.intel.com wrote:
On 5/9/2024 12:12 PM, Shengjiu Wang wrote:
On Thu, May 9, 2024 at 5:50 PM Amadeusz Sławiński amadeuszx.slawinski@linux.intel.com wrote:
On 5/9/2024 11:36 AM, Shengjiu Wang wrote:
On Wed, May 8, 2024 at 4:14 PM Amadeusz Sławiński amadeuszx.slawinski@linux.intel.com wrote:
On 5/8/2024 10:00 AM, Hans Verkuil wrote:
On 06/05/2024 10:49, Shengjiu Wang wrote: > On Fri, May 3, 2024 at 4:42 PM Mauro Carvalho Chehab mchehab@kernel.org wrote: >> >> Em Fri, 3 May 2024 10:47:19 +0900 >> Mark Brown broonie@kernel.org escreveu: >> >>> On Thu, May 02, 2024 at 10:26:43AM +0100, Mauro Carvalho Chehab wrote: >>>> Mauro Carvalho Chehab mchehab@kernel.org escreveu: >>> >>>>> There are still time control associated with it, as audio and video >>>>> needs to be in sync. This is done by controlling the buffers size >>>>> and could be fine-tuned by checking when the buffer transfer is done. >>> >>> ... >>> >>>> Just complementing: on media, we do this per video buffer (or >>>> per half video buffer). A typical use case on cameras is to have >>>> buffers transferred 30 times per second, if the video was streamed >>>> at 30 frames per second. >>> >>> IIRC some big use case for this hardware was transcoding so there was a >>> desire to just go at whatever rate the hardware could support as there >>> is no interactive user consuming the output as it is generated. >> >> Indeed, codecs could be used to just do transcoding, but I would >> expect it to be a border use case. See, as the chipsets implementing >> codecs are typically the ones used on mobiles, I would expect that >> the major use cases to be to watch audio and video and to participate >> on audio/video conferences. >> >> Going further, the codec API may end supporting not only transcoding >> (which is something that CPU can usually handle without too much >> processing) but also audio processing that may require more >> complex algorithms - even deep learning ones - like background noise >> removal, echo detection/removal, volume auto-gain, audio enhancement >> and such. >> >> On other words, the typical use cases will either have input >> or output being a physical hardware (microphone or speaker). >> > > All, thanks for spending time to discuss, it seems we go back to > the start point of this topic again. > > Our main request is that there is a hardware sample rate converter > on the chip, so users can use it in user space as a component like > software sample rate converter. It mostly may run as a gstreamer plugin. > so it is a memory to memory component. > > I didn't find such API in ALSA for such purpose, the best option for this > in the kernel is the V4L2 memory to memory framework I found. > As Hans said it is well designed for memory to memory. > > And I think audio is one of 'media'. As I can see that part of Radio > function is in ALSA, part of Radio function is in V4L2. part of HDMI > function is in DRM, part of HDMI function is in ALSA... > So using V4L2 for audio is not new from this point of view. > > Even now I still think V4L2 is the best option, but it looks like there > are a lot of rejects. If develop a new ALSA-mem2mem, it is also > a duplication of code (bigger duplication that just add audio support > in V4L2 I think).
After reading this thread I still believe that the mem2mem framework is a reasonable option, unless someone can come up with a method that is easy to implement in the alsa subsystem. From what I can tell from this discussion no such method exists.
Hi,
my main question would be how is mem2mem use case different from loopback exposing playback and capture frontends in user space with DSP (or other piece of HW) in the middle?
I think loopback has a timing control, user need to feed data to playback at a fixed time and get data from capture at a fixed time. Otherwise there is xrun in playback and capture.
mem2mem case: there is no such timing control, user feeds data to it then it generates output, if user doesn't feed data, there is no xrun. but mem2mem is just one of the components in the playback or capture pipeline, overall there is time control for whole pipeline,
Have you looked at compress streams? If I remember correctly they are not tied to time due to the fact that they can pass data in arbitrary formats?
From: https://docs.kernel.org/sound/designs/compress-offload.html
"No notion of underrun/overrun. Since the bytes written are compressed in nature and data written/read doesn’t translate directly to rendered output in time, this does not deal with underrun/overrun and maybe dealt in user-library"
I checked the compress stream. mem2mem case is different with compress-offload case
compress-offload case is a full pipeline, the user sends a compress stream to it, then DSP decodes it and renders it to the speaker in real time.
mem2mem is just like the decoder in the compress pipeline. which is one of the components in the pipeline.
I was thinking of loopback with endpoints using compress streams, without physical endpoint, something like:
compress playback (to feed data from userspace) -> DSP (processing) -> compress capture (send data back to userspace)
Unless I'm missing something, you should be able to process data as fast as you can feed it and consume it in such case.
Actually, in the beginning I tried this, but it did not work well. ALSA needs time control for playback and capture, and playback and capture need to synchronize. Usually the playback and capture pipelines are independent in the ALSA design, but in this case playback and capture should synchronize; they are not independent.
Best regards Shengjiu Wang
Amadeusz
On 09. 05. 24 12:44, Shengjiu Wang wrote:
mem2mem is just like the decoder in the compress pipeline. which is one of the components in the pipeline.
I was thinking of loopback with endpoints using compress streams, without physical endpoint, something like:
compress playback (to feed data from userspace) -> DSP (processing) -> compress capture (send data back to userspace)
Unless I'm missing something, you should be able to process data as fast as you can feed it and consume it in such case.
Actually in the beginning I tried this, but it did not work well. ALSA needs time control for playback and capture, playback and capture needs to synchronize. Usually the playback and capture pipeline is independent in ALSA design, but in this case, the playback and capture should synchronize, they are not independent.
The core compress API has no strict timing constraints. You can eventually have two half-duplex compress devices, if you would like to have a really independent mechanism. If something is missing in the API, you can extend this API (like informing user space that it's producer/consumer processing without any relation to real time). I like this idea.
Jaroslav
On 09. 05. 24 13:13, Jaroslav Kysela wrote:
On 09. 05. 24 12:44, Shengjiu Wang wrote:
mem2mem is just like the decoder in the compress pipeline. which is one of the components in the pipeline.
I was thinking of loopback with endpoints using compress streams, without physical endpoint, something like:
compress playback (to feed data from userspace) -> DSP (processing) -> compress capture (send data back to userspace)
Unless I'm missing something, you should be able to process data as fast as you can feed it and consume it in such case.
Actually in the beginning I tried this, but it did not work well. ALSA needs time control for playback and capture, playback and capture needs to synchronize. Usually the playback and capture pipeline is independent in ALSA design, but in this case, the playback and capture should synchronize, they are not independent.
The core compress API has no strict timing constraints. You can eventually have two half-duplex compress devices, if you would like to have a really independent mechanism. If something is missing in the API, you can extend this API (like informing user space that it's producer/consumer processing without any relation to real time). I like this idea.
I was thinking more about this. If I am right, the mentioned use in gstreamer is supposed to run the conversion (DSP) job in "one shot" (which can be handled using one system call like a blocking ioctl). The goal is just to offload the CPU work to the DSP (co-processor). If there are no requirements for queuing, we can implement this ioctl in the compress ALSA API easily, using data management through the dma-buf API. We can eventually define a new direction (enum snd_compr_direction) like SND_COMPRESS_CONVERT or so to allow handling this new data scheme. The API may be extended later based on real demand, of course.
Otherwise all pieces are already in the current ALSA compress API (capabilities, params, enumeration). The realtime controls may be created using ALSA control API.
Jaroslav
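Purely as a strawman for the proposal above, one possible shape of such a one-shot conversion interface; every name below is hypothetical and nothing like it exists in the kernel today. The source and destination buffers are passed as dma-buf fds and the call blocks until the ASRC has produced the converted data:

#include <linux/types.h>
#include <linux/ioctl.h>

/* hypothetical job descriptor for a one-shot conversion request */
struct snd_compr_task_hypothetical {
	__u64 seqno;		/* job identifier */
	__s32 input_fd;		/* dma-buf with source PCM samples */
	__s32 output_fd;	/* dma-buf receiving converted samples */
	__u64 input_size;	/* valid bytes in the input buffer */
	__u64 output_size;	/* filled in on return */
};

/* hypothetical blocking "convert this buffer" request, roughly QBUF+DQBUF in one call */
#define SNDRV_COMPRESS_CONVERT_HYPOTHETICAL \
	_IOWR('C', 0x80, struct snd_compr_task_hypothetical)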
Hi Jaroslav,
On 5/13/24 13:56, Jaroslav Kysela wrote:
On 09. 05. 24 13:13, Jaroslav Kysela wrote:
On 09. 05. 24 12:44, Shengjiu Wang wrote:
mem2mem is just like the decoder in the compress pipeline. which is one of the components in the pipeline.
I was thinking of loopback with endpoints using compress streams, without physical endpoint, something like:
compress playback (to feed data from userspace) -> DSP (processing) -> compress capture (send data back to userspace)
Unless I'm missing something, you should be able to process data as fast as you can feed it and consume it in such case.
Actually in the beginning I tried this, but it did not work well. ALSA needs time control for playback and capture, playback and capture needs to synchronize. Usually the playback and capture pipeline is independent in ALSA design, but in this case, the playback and capture should synchronize, they are not independent.
The core compress API has no strict timing constraints. You can eventually have two half-duplex compress devices, if you would like to have a really independent mechanism. If something is missing in the API, you can extend this API (like informing user space that it's producer/consumer processing without any relation to real time). I like this idea.
I was thinking more about this. If I am right, the mentioned use in gstreamer is supposed to run the conversion (DSP) job in "one shot" (which can be handled using one system call like a blocking ioctl). The goal is just to offload the CPU work to the DSP (co-processor). If there are no requirements for queuing, we can implement this ioctl in the compress ALSA API easily, using data management through the dma-buf API. We can eventually define a new direction (enum snd_compr_direction) like SND_COMPRESS_CONVERT or so to allow handling this new data scheme. The API may be extended later based on real demand, of course.
Otherwise all pieces are already in the current ALSA compress API (capabilities, params, enumeration). The realtime controls may be created using ALSA control API.
So does this mean that Shengjiu should attempt to use this ALSA approach first?
If there is a way to do this reasonably cleanly in the ALSA API, then that obviously is much better from my perspective as a media maintainer.
My understanding was always that it can't be done (or at least not without a major effort) in ALSA, and in that case V4L2 is a decent plan B, but based on this I gather that it is possible in ALSA after all.
So can I shelve this patch series for now?
Regards,
Hans
On 15. 05. 24 11:17, Hans Verkuil wrote:
Hi Jaroslav,
On 5/13/24 13:56, Jaroslav Kysela wrote:
On 09. 05. 24 13:13, Jaroslav Kysela wrote:
On 09. 05. 24 12:44, Shengjiu Wang wrote:
mem2mem is just like the decoder in the compress pipeline, which is one of the components in the pipeline.
I was thinking of loopback with endpoints using compress streams, without physical endpoint, something like:
compress playback (to feed data from userspace) -> DSP (processing) -> compress capture (send data back to userspace)
Unless I'm missing something, you should be able to process data as fast as you can feed it and consume it in such case.
Actually in the beginning I tried this, but it did not work well. ALSA needs time control for playback and capture, and playback and capture need to synchronize. Usually the playback and capture pipelines are independent in the ALSA design, but in this case playback and capture should synchronize; they are not independent.
The core compress API has no strict timing constraints. You can eventually have two half-duplex compress devices, if you like to have a really independent mechanism. If something is missing in the API, you can extend this API (e.g. to inform user space that it's producer/consumer processing without any relation to real time). I like this idea.
I was thinking more about this. If I am right, the mentioned use in gstreamer is supposed to run the conversion (DSP) job in "one shot" (it can be handled using one system call like a blocking ioctl). The goal is just to offload the CPU work to the DSP (co-processor). If there are no requirements for queuing, we can implement this ioctl in the compress ALSA API easily, using the dma-buf API for the data management. We can eventually define a new direction (enum snd_compr_direction) like SND_COMPRESS_CONVERT or so to handle this new data scheme. The API may be extended later on real demand, of course.
Otherwise all pieces are already in the current ALSA compress API (capabilities, params, enumeration). The realtime controls may be created using the ALSA control API.
So does this mean that Shengjiu should attempt to use this ALSA approach first?
I've not seen any argument for forcing the v4l2 mem2mem buffer scheme onto this data conversion. It looks like a simple job, and the ALSA APIs may be extended for this simple purpose.
Shengjiu, what are your requirements for gstreamer support? Would a new blocking ioctl be enough for the initial support in the compress ALSA API?
Jaroslav
On Wed, 15 May 2024 11:50:52 +0200, Jaroslav Kysela wrote:
On 15. 05. 24 11:17, Hans Verkuil wrote:
Hi Jaroslav,
On 5/13/24 13:56, Jaroslav Kysela wrote:
On 09. 05. 24 13:13, Jaroslav Kysela wrote:
On 09. 05. 24 12:44, Shengjiu Wang wrote:
> mem2mem is just like the decoder in the compress pipeline, which is
> one of the components in the pipeline.
I was thinking of loopback with endpoints using compress streams, without physical endpoint, something like:
compress playback (to feed data from userspace) -> DSP (processing) -> compress capture (send data back to userspace)
Unless I'm missing something, you should be able to process data as fast as you can feed it and consume it in such case.
Actually in the beginning I tried this, but it did not work well. ALSA needs time control for playback and capture, and playback and capture need to synchronize. Usually the playback and capture pipelines are independent in the ALSA design, but in this case playback and capture should synchronize; they are not independent.
The core compress API has no strict timing constraints. You can eventually have two half-duplex compress devices, if you like to have a really independent mechanism. If something is missing in the API, you can extend this API (e.g. to inform user space that it's producer/consumer processing without any relation to real time). I like this idea.
I was thinking more about this. If I am right, the mentioned use in gstreamer is supposed to run the conversion (DSP) job in "one shot" (it can be handled using one system call like a blocking ioctl). The goal is just to offload the CPU work to the DSP (co-processor). If there are no requirements for queuing, we can implement this ioctl in the compress ALSA API easily, using the dma-buf API for the data management. We can eventually define a new direction (enum snd_compr_direction) like SND_COMPRESS_CONVERT or so to handle this new data scheme. The API may be extended later on real demand, of course.
Otherwise all pieces are already in the current ALSA compress API (capabilities, params, enumeration). The realtime controls may be created using the ALSA control API.
So does this mean that Shengjiu should attempt to use this ALSA approach first?
I've not seen any argument for forcing the v4l2 mem2mem buffer scheme onto this data conversion. It looks like a simple job, and the ALSA APIs may be extended for this simple purpose.
Shengjiu, what are your requirements for gstreamer support? Would a new blocking ioctl be enough for the initial support in the compress ALSA API?
If it works with the compress API, it'd be great, yeah. So, your idea is to open compress-offload devices for read and write, and then let them convert, a la batch jobs, without timing control?
For full-duplex usages, we might need some more extensions, so that both read and write parameters can be synchronized. (So far the compress stream is unidirectional, and the runtime buffer is for a single stream.)
And the buffer management is based on fixed-size fragments. I hope this doesn't matter much for the intended operation?
thanks,
Takashi
On 15. 05. 24 12:19, Takashi Iwai wrote:
On Wed, 15 May 2024 11:50:52 +0200, Jaroslav Kysela wrote:
On 15. 05. 24 11:17, Hans Verkuil wrote:
Hi Jaroslav,
On 5/13/24 13:56, Jaroslav Kysela wrote:
On 09. 05. 24 13:13, Jaroslav Kysela wrote:
On 09. 05. 24 12:44, Shengjiu Wang wrote:
>> mem2mem is just like the decoder in the compress pipeline, which is
>> one of the components in the pipeline.
>
> I was thinking of loopback with endpoints using compress streams,
> without physical endpoint, something like:
>
> compress playback (to feed data from userspace) -> DSP (processing) ->
> compress capture (send data back to userspace)
>
> Unless I'm missing something, you should be able to process data as fast
> as you can feed it and consume it in such case.
Actually in the beginning I tried this, but it did not work well. ALSA needs time control for playback and capture, and playback and capture need to synchronize. Usually the playback and capture pipelines are independent in the ALSA design, but in this case playback and capture should synchronize; they are not independent.
The core compress API has no strict timing constraints. You can eventually have two half-duplex compress devices, if you like to have a really independent mechanism. If something is missing in the API, you can extend this API (e.g. to inform user space that it's producer/consumer processing without any relation to real time). I like this idea.
I was thinking more about this. If I am right, the mentioned use in gstreamer is supposed to run the conversion (DSP) job in "one shot" (it can be handled using one system call like a blocking ioctl). The goal is just to offload the CPU work to the DSP (co-processor). If there are no requirements for queuing, we can implement this ioctl in the compress ALSA API easily, using the dma-buf API for the data management. We can eventually define a new direction (enum snd_compr_direction) like SND_COMPRESS_CONVERT or so to handle this new data scheme. The API may be extended later on real demand, of course.
Otherwise all pieces are already in the current ALSA compress API (capabilities, params, enumeration). The realtime controls may be created using the ALSA control API.
So does this mean that Shengjiu should attempt to use this ALSA approach first?
I've not seen any argument for forcing the v4l2 mem2mem buffer scheme onto this data conversion. It looks like a simple job, and the ALSA APIs may be extended for this simple purpose.
Shengjiu, what are your requirements for gstreamer support? Would a new blocking ioctl be enough for the initial support in the compress ALSA API?
If it works with the compress API, it'd be great, yeah. So, your idea is to open compress-offload devices for read and write, and then let them convert, a la batch jobs, without timing control?
For full-duplex usages, we might need some more extensions, so that both read and write parameters can be synchronized. (So far the compress stream is unidirectional, and the runtime buffer is for a single stream.)
And the buffer management is based on fixed-size fragments. I hope this doesn't matter much for the intended operation?
It's a question whether standard I/O is really required for this case. My quick idea was to just implement a new "direction" for this job, supporting only one ioctl for the data processing, which executes the job in "one shot" for now. The I/O may be handled through the dma-buf API (which seems to be the standard nowadays for this purpose and allows future chaining).
So something like:
struct dsp_job {
        int source_fd;  /* dma-buf FD with source data - for dma_buf_get() */
        int target_fd;  /* dma-buf FD for target data - for dma_buf_get() */
        ... maybe some extra data size members here ...
        ... maybe some special parameters here ...
};
#define SNDRV_COMPRESS_DSPJOB _IOWR('C', 0x60, struct dsp_job)
This ioctl will be blocking (thus synced). My question is whether it's feasible for gstreamer or not. For this particular case, if the rate conversion is implemented in software, it will block the gstreamer data processing, too.
Jaroslav
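To make the flow concrete, here is a rough user-space sketch of how such a one-shot call could be driven. Only struct dsp_job and the SNDRV_COMPRESS_DSPJOB number come from the sketch above; the device path, the O_RDWR open mode for a hypothetical "convert" device, and the way the two dma-buf FDs were obtained are assumptions.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <linux/ioctl.h>
#include <sys/ioctl.h>

/* From the sketch above; extra size/parameter members omitted here. */
struct dsp_job {
        int source_fd;  /* dma-buf FD with the input PCM */
        int target_fd;  /* dma-buf FD receiving the converted PCM */
};

#define SNDRV_COMPRESS_DSPJOB   _IOWR('C', 0x60, struct dsp_job)

/* src/dst are dma-buf FDs obtained from some exporter (assumed). */
int run_one_shot_conversion(const char *dev, int src, int dst)
{
        struct dsp_job job = { .source_fd = src, .target_fd = dst };
        /* Assumes a "convert" direction device that allows O_RDWR. */
        int fd = open(dev, O_RDWR);

        if (fd < 0) {
                perror("open");
                return -1;
        }
        /* Blocking call: returns once the DSP has filled the target dma-buf. */
        if (ioctl(fd, SNDRV_COMPRESS_DSPJOB, &job) < 0) {
                perror("SNDRV_COMPRESS_DSPJOB");
                close(fd);
                return -1;
        }
        close(fd);
        return 0;
}

Calling run_one_shot_conversion("/dev/snd/comprC0D0", src, dst) would then map each conversion to a single blocking system call, which matches the push-based GStreamer model discussed below.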
On Wed, May 15, 2024 at 6:46 PM Jaroslav Kysela perex@perex.cz wrote:
On 15. 05. 24 12:19, Takashi Iwai wrote:
On Wed, 15 May 2024 11:50:52 +0200, Jaroslav Kysela wrote:
On 15. 05. 24 11:17, Hans Verkuil wrote:
Hi Jaroslav,
On 5/13/24 13:56, Jaroslav Kysela wrote:
On 09. 05. 24 13:13, Jaroslav Kysela wrote:
On 09. 05. 24 12:44, Shengjiu Wang wrote:
>>> mem2mem is just like the decoder in the compress pipeline, which is
>>> one of the components in the pipeline.
>>
>> I was thinking of loopback with endpoints using compress streams,
>> without physical endpoint, something like:
>>
>> compress playback (to feed data from userspace) -> DSP (processing) ->
>> compress capture (send data back to userspace)
>>
>> Unless I'm missing something, you should be able to process data as fast
>> as you can feed it and consume it in such case.
>
> Actually in the beginning I tried this, but it did not work well.
> ALSA needs time control for playback and capture, and playback and capture
> need to synchronize. Usually the playback and capture pipelines are
> independent in the ALSA design, but in this case playback and capture
> should synchronize; they are not independent.
The core compress API has no strict timing constraints. You can eventually have two half-duplex compress devices, if you like to have a really independent mechanism. If something is missing in the API, you can extend this API (e.g. to inform user space that it's producer/consumer processing without any relation to real time). I like this idea.
I was thinking more about this. If I am right, the mentioned use in gstreamer is supposed to run the conversion (DSP) job in "one shot" (it can be handled using one system call like a blocking ioctl). The goal is just to offload the CPU work to the DSP (co-processor). If there are no requirements for queuing, we can implement this ioctl in the compress ALSA API easily, using the dma-buf API for the data management. We can eventually define a new direction (enum snd_compr_direction) like SND_COMPRESS_CONVERT or so to handle this new data scheme. The API may be extended later on real demand, of course.
Otherwise all pieces are already in the current ALSA compress API (capabilities, params, enumeration). The realtime controls may be created using the ALSA control API.
So does this mean that Shengjiu should attempt to use this ALSA approach first?
I've not seen any argument for forcing the v4l2 mem2mem buffer scheme onto this data conversion. It looks like a simple job, and the ALSA APIs may be extended for this simple purpose.
Shengjiu, what are your requirements for gstreamer support? Would a new blocking ioctl be enough for the initial support in the compress ALSA API?
If it works with the compress API, it'd be great, yeah. So, your idea is to open compress-offload devices for read and write, and then let them convert, a la batch jobs, without timing control?
For full-duplex usages, we might need some more extensions, so that both read and write parameters can be synchronized. (So far the compress stream is unidirectional, and the runtime buffer is for a single stream.)
And the buffer management is based on fixed-size fragments. I hope this doesn't matter much for the intended operation?
It's a question whether standard I/O is really required for this case. My quick idea was to just implement a new "direction" for this job, supporting only one ioctl for the data processing, which executes the job in "one shot" for now. The I/O may be handled through the dma-buf API (which seems to be the standard nowadays for this purpose and allows future chaining).
So something like:
struct dsp_job {
        int source_fd;  /* dma-buf FD with source data - for dma_buf_get() */
        int target_fd;  /* dma-buf FD for target data - for dma_buf_get() */
        ... maybe some extra data size members here ...
        ... maybe some special parameters here ...
};
#define SNDRV_COMPRESS_DSPJOB _IOWR('C', 0x60, struct dsp_job)
This ioctl will be blocking (thus synced). My question is whether it's feasible for gstreamer or not. For this particular case, if the rate conversion is implemented in software, it will block the gstreamer data processing, too.
Thanks.
I have several questions:
1. The compress API always binds to a sound card. Can we avoid that? For ASRC, it is just one component.
2. The compress API doesn't seem to support mmap(). Is this a problem for sending and getting data to/from the driver?
3. How does the user get output data from ASRC after each conversion? It should happen every period.
Best regards,
Shengjiu Wang
On 15. 05. 24 15:34, Shengjiu Wang wrote:
On Wed, May 15, 2024 at 6:46 PM Jaroslav Kysela perex@perex.cz wrote:
On 15. 05. 24 12:19, Takashi Iwai wrote:
On Wed, 15 May 2024 11:50:52 +0200, Jaroslav Kysela wrote:
On 15. 05. 24 11:17, Hans Verkuil wrote:
Hi Jaroslav,
On 5/13/24 13:56, Jaroslav Kysela wrote:
On 09. 05. 24 13:13, Jaroslav Kysela wrote:
> On 09. 05. 24 12:44, Shengjiu Wang wrote:
>>>> mem2mem is just like the decoder in the compress pipeline, which is
>>>> one of the components in the pipeline.
>>>
>>> I was thinking of loopback with endpoints using compress streams,
>>> without physical endpoint, something like:
>>>
>>> compress playback (to feed data from userspace) -> DSP (processing) ->
>>> compress capture (send data back to userspace)
>>>
>>> Unless I'm missing something, you should be able to process data as fast
>>> as you can feed it and consume it in such case.
>>
>> Actually in the beginning I tried this, but it did not work well.
>> ALSA needs time control for playback and capture, and playback and capture
>> need to synchronize. Usually the playback and capture pipelines are
>> independent in the ALSA design, but in this case playback and capture
>> should synchronize; they are not independent.
>
> The core compress API has no strict timing constraints. You can eventually
> have two half-duplex compress devices, if you like to have a really independent
> mechanism. If something is missing in the API, you can extend this API (e.g. to
> inform user space that it's producer/consumer processing without any relation
> to real time). I like this idea.
I was thinking more about this. If I am right, the mentioned use in gstreamer is supposed to run the conversion (DSP) job in "one shot" (it can be handled using one system call like a blocking ioctl). The goal is just to offload the CPU work to the DSP (co-processor). If there are no requirements for queuing, we can implement this ioctl in the compress ALSA API easily, using the dma-buf API for the data management. We can eventually define a new direction (enum snd_compr_direction) like SND_COMPRESS_CONVERT or so to handle this new data scheme. The API may be extended later on real demand, of course.
Otherwise all pieces are already in the current ALSA compress API (capabilities, params, enumeration). The realtime controls may be created using the ALSA control API.
So does this mean that Shengjiu should attempt to use this ALSA approach first?
I've not seen any argument for forcing the v4l2 mem2mem buffer scheme onto this data conversion. It looks like a simple job, and the ALSA APIs may be extended for this simple purpose.
Shengjiu, what are your requirements for gstreamer support? Would a new blocking ioctl be enough for the initial support in the compress ALSA API?
If it works with the compress API, it'd be great, yeah. So, your idea is to open compress-offload devices for read and write, and then let them convert, a la batch jobs, without timing control?
For full-duplex usages, we might need some more extensions, so that both read and write parameters can be synchronized. (So far the compress stream is unidirectional, and the runtime buffer is for a single stream.)
And the buffer management is based on fixed-size fragments. I hope this doesn't matter much for the intended operation?
It's a question whether standard I/O is really required for this case. My quick idea was to just implement a new "direction" for this job, supporting only one ioctl for the data processing, which executes the job in "one shot" for now. The I/O may be handled through the dma-buf API (which seems to be the standard nowadays for this purpose and allows future chaining).
So something like:
struct dsp_job {
        int source_fd;  /* dma-buf FD with source data - for dma_buf_get() */
        int target_fd;  /* dma-buf FD for target data - for dma_buf_get() */
        ... maybe some extra data size members here ...
        ... maybe some special parameters here ...
};
#define SNDRV_COMPRESS_DSPJOB _IOWR('C', 0x60, struct dsp_job)
This ioctl will be blocking (thus synced). My question is whether it's feasible for gstreamer or not. For this particular case, if the rate conversion is implemented in software, it will block the gstreamer data processing, too.
Thanks.
I have several questions:
- The compress API always binds to a sound card. Can we avoid that? For ASRC, it is just one component.
Is this a real issue? Usually I would expect sound hardware (a card) to be present when ASRC is available, no? Eventually, a separate sound card with one compress device may be created, too. For enumeration, user space may just iterate through all sound cards / compress devices to find the ASRC in the system.
The devices/interfaces in a sound card are independent. Also, USB MIDI converters offer only one serial MIDI interface, for example.
- Compress API doesn't seem to support mmap(). Is this a problem for sending and getting data to/from the driver?
I proposed to use dma-buf for I/O (separate source and target buffers).
- How does the user get output data from ASRC after each conversion? it should happen every period.
target dma-buf
Jaroslav
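As a rough illustration of that enumeration, user space could simply probe the existing compress device nodes and read their capabilities. The comprC%uD%u naming follows the current compress-offload convention, but how an ASRC device would actually identify itself (direction value, codec ID, card name) is not settled in this thread, so the sketch below only prints what it finds.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <sound/compress_offload.h>     /* SNDRV_COMPRESS_GET_CAPS, struct snd_compr_caps */

void list_compress_devices(void)
{
        unsigned int card, dev;
        char path[64];

        for (card = 0; card < 32; card++) {
                for (dev = 0; dev < 8; dev++) {
                        struct snd_compr_caps caps;
                        int fd;

                        snprintf(path, sizeof(path), "/dev/snd/comprC%uD%u", card, dev);
                        /* Current devices are unidirectional: try capture, then playback. */
                        fd = open(path, O_RDONLY);
                        if (fd < 0)
                                fd = open(path, O_WRONLY);
                        if (fd < 0)
                                continue;

                        memset(&caps, 0, sizeof(caps));
                        if (ioctl(fd, SNDRV_COMPRESS_GET_CAPS, &caps) == 0)
                                printf("%s: direction=%u num_codecs=%u\n",
                                       path, caps.direction, caps.num_codecs);
                        close(fd);
                }
        }
}

A hypothetical ASRC compress device would show up in this walk like any other compress device; picking it out would rely on whatever direction value or codec ID the new API ends up defining.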
Hi,
GStreamer hat on ...
Le mercredi 15 mai 2024 à 12:46 +0200, Jaroslav Kysela a écrit :
On 15. 05. 24 12:19, Takashi Iwai wrote:
On Wed, 15 May 2024 11:50:52 +0200, Jaroslav Kysela wrote:
On 15. 05. 24 11:17, Hans Verkuil wrote:
Hi Jaroslav,
On 5/13/24 13:56, Jaroslav Kysela wrote:
On 09. 05. 24 13:13, Jaroslav Kysela wrote:
On 09. 05. 24 12:44, Shengjiu Wang wrote:
> > > mem2mem is just like the decoder in the compress pipeline, which is
> > > one of the components in the pipeline.
> >
> > I was thinking of loopback with endpoints using compress streams,
> > without physical endpoint, something like:
> >
> > compress playback (to feed data from userspace) -> DSP (processing) ->
> > compress capture (send data back to userspace)
> >
> > Unless I'm missing something, you should be able to process data as fast
> > as you can feed it and consume it in such case.
>
> Actually in the beginning I tried this, but it did not work well.
> ALSA needs time control for playback and capture, and playback and capture
> need to synchronize. Usually the playback and capture pipelines are
> independent in the ALSA design, but in this case, playback and capture
> should synchronize; they are not independent.
The core compress API has no strict timing constraints. You can eventually have two half-duplex compress devices, if you like to have a really independent mechanism. If something is missing in the API, you can extend this API (e.g. to inform user space that it's producer/consumer processing without any relation to real time). I like this idea.
I was thinking more about this. If I am right, the mentioned use in gstreamer is supposed to run the conversion (DSP) job in "one shot" (it can be handled using one system call like a blocking ioctl). The goal is just to offload the CPU work to the DSP (co-processor). If there are no requirements for queuing, we can implement this ioctl in the compress ALSA API easily, using the dma-buf API for the data management. We can eventually define a new direction (enum snd_compr_direction) like SND_COMPRESS_CONVERT or so to handle this new data scheme. The API may be extended later on real demand, of course.
Otherwise all pieces are already in the current ALSA compress API (capabilities, params, enumeration). The realtime controls may be created using the ALSA control API.
So does this mean that Shengjiu should attempt to use this ALSA approach first?
I've not seen any argument for forcing the v4l2 mem2mem buffer scheme onto this data conversion. It looks like a simple job, and the ALSA APIs may be extended for this simple purpose.
Shengjiu, what are your requirements for gstreamer support? Would a new blocking ioctl be enough for the initial support in the compress ALSA API?
If it works with the compress API, it'd be great, yeah. So, your idea is to open compress-offload devices for read and write, and then let them convert, a la batch jobs, without timing control?
For full-duplex usages, we might need some more extensions, so that both read and write parameters can be synchronized. (So far the compress stream is unidirectional, and the runtime buffer is for a single stream.)
And the buffer management is based on fixed-size fragments. I hope this doesn't matter much for the intended operation?
It's a question whether standard I/O is really required for this case. My quick idea was to just implement a new "direction" for this job, supporting only one ioctl for the data processing, which executes the job in "one shot" for now. The I/O may be handled through the dma-buf API (which seems to be the standard nowadays for this purpose and allows future chaining).
So something like:
struct dsp_job {
        int source_fd;  /* dma-buf FD with source data - for dma_buf_get() */
        int target_fd;  /* dma-buf FD for target data - for dma_buf_get() */
        ... maybe some extra data size members here ...
        ... maybe some special parameters here ...
};
#define SNDRV_COMPRESS_DSPJOB _IOWR('C', 0x60, struct dsp_job)
This ioctl will be blocking (thus synced). My question is whether it's feasible for gstreamer or not. For this particular case, if the rate conversion is implemented in software, it will block the gstreamer data processing, too.
Yes, GStreamer threading uses a push-back model, so blocking for the duration of the processing is fine. Note that the extra simplicity will suffer from ioctl() latency.
In GFX, they solve this issue with fences. That allows setting up the next operation in the chain before the data has been produced.
In V4L2, we solve this with queues. Queues allow preparing the next job while the current job is being processed. If you look at the v4l2convert code in gstreamer (for simple m2m), it currently makes no use of the queues; it simply processes the frames synchronously. There are two options: either it does not matter that much, or no one is using it :-D Stateful video decoders and encoders do run input/output from different threads to benefit from the queues.
regards, Nicolas
Jaroslav
On 15. 05. 24 22:33, Nicolas Dufresne wrote:
Hi,
GStreamer hat on ...
Le mercredi 15 mai 2024 à 12:46 +0200, Jaroslav Kysela a écrit :
On 15. 05. 24 12:19, Takashi Iwai wrote:
On Wed, 15 May 2024 11:50:52 +0200, Jaroslav Kysela wrote:
On 15. 05. 24 11:17, Hans Verkuil wrote:
Hi Jaroslav,
On 5/13/24 13:56, Jaroslav Kysela wrote:
On 09. 05. 24 13:13, Jaroslav Kysela wrote:
> On 09. 05. 24 12:44, Shengjiu Wang wrote:
>>>> mem2mem is just like the decoder in the compress pipeline, which is
>>>> one of the components in the pipeline.
>>>
>>> I was thinking of loopback with endpoints using compress streams,
>>> without physical endpoint, something like:
>>>
>>> compress playback (to feed data from userspace) -> DSP (processing) ->
>>> compress capture (send data back to userspace)
>>>
>>> Unless I'm missing something, you should be able to process data as fast
>>> as you can feed it and consume it in such case.
>>
>> Actually in the beginning I tried this, but it did not work well.
>> ALSA needs time control for playback and capture, and playback and capture
>> need to synchronize. Usually the playback and capture pipelines are
>> independent in the ALSA design, but in this case playback and capture
>> should synchronize; they are not independent.
>
> The core compress API has no strict timing constraints. You can eventually
> have two half-duplex compress devices, if you like to have a really independent
> mechanism. If something is missing in the API, you can extend this API (e.g. to
> inform user space that it's producer/consumer processing without any relation
> to real time). I like this idea.
I was thinking more about this. If I am right, the mentioned use in gstreamer is supposed to run the conversion (DSP) job in "one shot" (it can be handled using one system call like a blocking ioctl). The goal is just to offload the CPU work to the DSP (co-processor). If there are no requirements for queuing, we can implement this ioctl in the compress ALSA API easily, using the dma-buf API for the data management. We can eventually define a new direction (enum snd_compr_direction) like SND_COMPRESS_CONVERT or so to handle this new data scheme. The API may be extended later on real demand, of course.
Otherwise all pieces are already in the current ALSA compress API (capabilities, params, enumeration). The realtime controls may be created using the ALSA control API.
So does this mean that Shengjiu should attempt to use this ALSA approach first?
I've not seen any argument for forcing the v4l2 mem2mem buffer scheme onto this data conversion. It looks like a simple job, and the ALSA APIs may be extended for this simple purpose.
Shengjiu, what are your requirements for gstreamer support? Would a new blocking ioctl be enough for the initial support in the compress ALSA API?
If it works with the compress API, it'd be great, yeah. So, your idea is to open compress-offload devices for read and write, and then let them convert, a la batch jobs, without timing control?
For full-duplex usages, we might need some more extensions, so that both read and write parameters can be synchronized. (So far the compress stream is unidirectional, and the runtime buffer is for a single stream.)
And the buffer management is based on fixed-size fragments. I hope this doesn't matter much for the intended operation?
It's a question whether standard I/O is really required for this case. My quick idea was to just implement a new "direction" for this job, supporting only one ioctl for the data processing, which executes the job in "one shot" for now. The I/O may be handled through the dma-buf API (which seems to be the standard nowadays for this purpose and allows future chaining).
So something like:
struct dsp_job {
        int source_fd;  /* dma-buf FD with source data - for dma_buf_get() */
        int target_fd;  /* dma-buf FD for target data - for dma_buf_get() */
        ... maybe some extra data size members here ...
        ... maybe some special parameters here ...
};
#define SNDRV_COMPRESS_DSPJOB _IOWR('C', 0x60, struct dsp_job)
This ioctl will be blocking (thus synced). My question is whether it's feasible for gstreamer or not. For this particular case, if the rate conversion is implemented in software, it will block the gstreamer data processing, too.
Yes, GStreamer threading uses a push-back model, so blocking for the duration of the processing is fine. Note that the extra simplicity will suffer from ioctl() latency.
In GFX, they solve this issue with fences. That allows setting up the next operation in the chain before the data has been produced.
The fences look really nice and seem more modern. It should be possible with the dma-buf/sync_file.c interface to handle multiple jobs simultaneously and share the state between user space and the kernel driver.
In this case, I think that two non-blocking ioctls should be enough - add a new job with source/target dma buffers guarded by one fence and abort (flush) all active jobs.
I'll try to propose an API extension for the ALSA's compress API in the linux-sound mailing list soon.
Jaroslav
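A purely hypothetical sketch of that two-ioctl, fence-based scheme follows; none of these names, numbers or fields exist in the kernel, and the only assumption carried over from the discussion is that the returned FD is a sync_file that can be polled for completion.

#include <poll.h>
#include <unistd.h>
#include <linux/ioctl.h>
#include <sys/ioctl.h>

/* Hypothetical job descriptor: two dma-bufs in, one fence FD out. */
struct compr_dsp_job {
        int source_fd;  /* dma-buf FD with input data */
        int target_fd;  /* dma-buf FD for output data */
        int fence_fd;   /* returned sync_file FD, signalled on completion */
};

#define SNDRV_COMPRESS_ENQUEUE_JOB      _IOWR('C', 0x61, struct compr_dsp_job)  /* assumed */
#define SNDRV_COMPRESS_FLUSH_JOBS       _IO('C', 0x62)                          /* assumed */

int enqueue_and_wait(int compr_fd, int src, int dst)
{
        struct compr_dsp_job job = { .source_fd = src, .target_fd = dst };
        struct pollfd pfd;
        int ret;

        /* Non-blocking enqueue: the driver starts the job and hands back a fence. */
        if (ioctl(compr_fd, SNDRV_COMPRESS_ENQUEUE_JOB, &job) < 0)
                return -1;

        /* A sync_file FD becomes readable (POLLIN) once its fence has signalled. */
        pfd.fd = job.fence_fd;
        pfd.events = POLLIN;
        ret = poll(&pfd, 1, -1);
        close(job.fence_fd);
        return ret < 0 ? -1 : 0;
}

Because enqueue is non-blocking, a user could queue the next job while the current one runs, and SNDRV_COMPRESS_FLUSH_JOBS would cover the abort case mentioned above.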
On 16. 05. 24 16:50, Jaroslav Kysela wrote:
On 15. 05. 24 22:33, Nicolas Dufresne wrote:
In GFX, they solve this issue with fences. That allows setting up the next operation in the chain before the data has been produced.
The fences look really nice and seem more modern. It should be possible with the dma-buf/sync_file.c interface to handle multiple jobs simultaneously and share the state between user space and the kernel driver.
In this case, I think that two non-blocking ioctls should be enough - add a new job with source/target dma buffers guarded by one fence and abort (flush) all active jobs.
I'll try to propose an API extension for the ALSA's compress API in the linux-sound mailing list soon.
I found using sync_file during the implementation to be overkill for resource management, so I proposed a simple queue with the standard poll mechanism.
https://lore.kernel.org/linux-sound/20240527071133.223066-1-perex@perex.cz/
Jaroslav
On 5/9/24 06:13, Jaroslav Kysela wrote:
On 09. 05. 24 12:44, Shengjiu Wang wrote:
mem2mem is just like the decoder in the compress pipeline, which is one of the components in the pipeline.
I was thinking of loopback with endpoints using compress streams, without physical endpoint, something like:
compress playback (to feed data from userspace) -> DSP (processing) -> compress capture (send data back to userspace)
Unless I'm missing something, you should be able to process data as fast as you can feed it and consume it in such case.
Actually in the beginning I tried this, but it did not work well. ALSA needs time control for playback and capture, and playback and capture need to synchronize. Usually the playback and capture pipelines are independent in the ALSA design, but in this case playback and capture should synchronize; they are not independent.
The core compress API has no strict timing constraints. You can eventually have two half-duplex compress devices, if you like to have a really independent mechanism. If something is missing in the API, you can extend this API (e.g. to inform user space that it's producer/consumer processing without any relation to real time). I like this idea.
The compress API was never intended to be used this way. It was meant to send compressed data to a DSP for rendering, and keep the host processor in a low-power state while the DSP local buffer was drained. There was no intent to do a loop back to the host, because that keeps the host in a high-power state and probably negates the power savings due to a DSP.
The other problem with the loopback is that the compress stuff is usually a "Front-End" in ASoC/DPCM parlance, and we don't have a good way to do a loopback between Front-Ends. The entire framework is based on FEs being connected to BEs.
One problem that I can see for ASRC is that it's not clear when the data will be completely processed on the "capture" stream when you stop the "playback" stream. There's a non-zero risk of having a truncated output or waiting for data that will never be generated.
In other words, it might be possible to reuse/extend the compress API for a 'coprocessor' approach without any rendering to traditional interfaces, but it's uncharted territory.
participants (11):
- Amadeusz Sławiński
- Hans Verkuil
- Jaroslav Kysela
- Mark Brown
- Mauro Carvalho Chehab
- Nicolas Dufresne
- Pierre-Louis Bossart
- Sebastian Fricke
- Shengjiu Wang
- Shengjiu Wang
- Takashi Iwai