[RFC PATCH v2 0/7] Add audio support in v4l2 framework
Audio signal processing, like video, has a requirement for memory-to-memory operation.
This patch set adds that support to the v4l2 framework: it defines the new buffer types V4L2_BUF_TYPE_AUDIO_CAPTURE and V4L2_BUF_TYPE_AUDIO_OUTPUT, and a new format, v4l2_audio_format, for the audio use case.
The created audio device is named "/dev/audioX".
It also adds memory-to-memory support for the two kinds of i.MX ASRC modules.
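For orientation, a rough userspace sketch of the intended m2m conversion flow follows (the device node name, rates, and field values are illustrative only; in particular the encoding of the fmt.audio.format field is not yet defined in this RFC):

	int fd = open("/dev/audio0", O_RDWR);	/* hypothetical node name */

	struct v4l2_format out = { .type = V4L2_BUF_TYPE_AUDIO_OUTPUT };
	out.fmt.audio.rate = 44100;		/* raw input fed to the converter */
	out.fmt.audio.channels = 2;
	ioctl(fd, VIDIOC_S_FMT, &out);

	struct v4l2_format cap = { .type = V4L2_BUF_TYPE_AUDIO_CAPTURE };
	cap.fmt.audio.rate = 48000;		/* converted result read back */
	cap.fmt.audio.channels = 2;
	ioctl(fd, VIDIOC_S_FMT, &cap);

	/*
	 * Then the usual v4l2 m2m loop: VIDIOC_REQBUFS on both queues,
	 * VIDIOC_STREAMON, and QBUF/DQBUF pairs moving data through the ASRC.
	 */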
Changes in v2:
- decouple the implementation in v4l2 and ALSA
- implement the memory to memory driver as a platform driver and move it to drivers/media
- move fsl_asrc_common.h to the include/sound folder
Shengjiu Wang (7):
  ASoC: fsl_asrc: define functions for memory to memory usage
  ASoC: fsl_easrc: define functions for memory to memory usage
  ASoC: fsl_asrc: move fsl_asrc_common.h to include/sound
  media: v4l2: Add audio capture and output support
  media: imx: fsl_asrc: Add memory to memory driver
  ASoC: fsl_asrc: register m2m platform device
  ASoC: fsl_easrc: register m2m platform device
 .../media/common/videobuf2/videobuf2-v4l2.c   |   4 +
 drivers/media/platform/nxp/Kconfig            |  12 +
 drivers/media/platform/nxp/Makefile           |   1 +
 drivers/media/platform/nxp/fsl_asrc_m2m.c     | 962 ++++++++++++++++++
 drivers/media/v4l2-core/v4l2-dev.c            |  17 +
 drivers/media/v4l2-core/v4l2-ioctl.c          |  52 +
 include/media/v4l2-dev.h                      |   2 +
 include/media/v4l2-ioctl.h                    |  34 +
 .../fsl => include/sound}/fsl_asrc_common.h   |  48 +
 include/uapi/linux/videodev2.h                |  19 +
 sound/soc/fsl/fsl_asrc.c                      | 150 +++
 sound/soc/fsl/fsl_asrc.h                      |   4 +-
 sound/soc/fsl/fsl_asrc_dma.c                  |   2 +-
 sound/soc/fsl/fsl_easrc.c                     | 227 +++++
 sound/soc/fsl/fsl_easrc.h                     |   8 +-
 15 files changed, 1539 insertions(+), 3 deletions(-)
 create mode 100644 drivers/media/platform/nxp/fsl_asrc_m2m.c
 rename {sound/soc/fsl => include/sound}/fsl_asrc_common.h (63%)
ASRC can be used in memory-to-memory cases; define several functions for m2m usage:
m2m_start_part_one: first part of the start steps
m2m_start_part_two: second part of the start steps
m2m_stop_part_one: first part of the stop steps
m2m_stop_part_two: second part of the stop steps
m2m_check_format: check whether a format is supported
m2m_calc_out_len: calculate output length according to input length
m2m_get_maxburst: burst size for DMA
m2m_pair_suspend: suspend function of a pair
m2m_pair_resume: resume function of a pair
get_output_fifo_size: get remaining data size in the FIFO
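As a rough guide to how these ops fit together, the m2m platform driver added later in this series is expected to drive them in approximately this order (this is an illustrative sketch only, with error handling and locking omitted; the real call sites live in fsl_asrc_m2m.c):

	/* illustrative pseudo-driver, not the actual fsl_asrc_m2m.c code */
	asrc->m2m_start_part_one(pair);		/* configure and start the pair */
	/* queue input/output DMA transfers, sized via m2m_get_maxburst() */
	asrc->m2m_start_part_two(pair);		/* set real watermarks to raise DMA requests */
	/* wait for DMA completion, then drain what is left:
	 *   remaining = asrc->get_output_fifo_size(pair); */
	asrc->m2m_stop_part_one(pair);		/* stop the pair */
	asrc->m2m_stop_part_two(pair);		/* optional second stop stage */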
Signed-off-by: Shengjiu Wang <shengjiu.wang@nxp.com>
---
 sound/soc/fsl/fsl_asrc.c        | 138 ++++++++++++++++++++++++++++++++
 sound/soc/fsl/fsl_asrc.h        |   2 +
 sound/soc/fsl/fsl_asrc_common.h |  37 +++++++++
 3 files changed, 177 insertions(+)
diff --git a/sound/soc/fsl/fsl_asrc.c b/sound/soc/fsl/fsl_asrc.c
index adb8a59de2bd..30190ccb74e7 100644
--- a/sound/soc/fsl/fsl_asrc.c
+++ b/sound/soc/fsl/fsl_asrc.c
@@ -1063,6 +1063,135 @@ static int fsl_asrc_get_fifo_addr(u8 dir, enum asrc_pair_index index)
 	return REG_ASRDx(dir, index);
 }
+/* Get sample numbers in FIFO */
+static unsigned int fsl_asrc_get_output_fifo_size(struct fsl_asrc_pair *pair)
+{
+	struct fsl_asrc *asrc = pair->asrc;
+	enum asrc_pair_index index = pair->index;
+	u32 val;
+
+	regmap_read(asrc->regmap, REG_ASRFST(index), &val);
+
+	val &= ASRFSTi_OUTPUT_FIFO_MASK;
+
+	return val >> ASRFSTi_OUTPUT_FIFO_SHIFT;
+}
+
+static int fsl_asrc_m2m_start_part_one(struct fsl_asrc_pair *pair)
+{
+	struct fsl_asrc_pair_priv *pair_priv = pair->private;
+	struct fsl_asrc *asrc = pair->asrc;
+	struct device *dev = &asrc->pdev->dev;
+	struct asrc_config config;
+	int ret;
+
+	/* fill config */
+	config.pair = pair->index;
+	config.channel_num = pair->channels;
+	config.input_sample_rate = pair->rate[IN];
+	config.output_sample_rate = pair->rate[OUT];
+	config.input_format = pair->sample_format[IN];
+	config.output_format = pair->sample_format[OUT];
+	config.inclk = INCLK_NONE;
+	config.outclk = OUTCLK_ASRCK1_CLK;
+
+	pair_priv->config = &config;
+	ret = fsl_asrc_config_pair(pair, true);
+	if (ret) {
+		dev_err(dev, "failed to config pair: %d\n", ret);
+		return ret;
+	}
+
+	fsl_asrc_start_pair(pair);
+
+	return 0;
+}
+
+static int fsl_asrc_m2m_start_part_two(struct fsl_asrc_pair *pair)
+{
+	/*
+	 * Clear DMA request during the stall state of ASRC:
+	 * During STALL state, the remaining in input fifo would never be
+	 * smaller than the input threshold while the output fifo would not
+	 * be bigger than output one. Thus the DMA request would be cleared.
+	 */
+	fsl_asrc_set_watermarks(pair, ASRC_FIFO_THRESHOLD_MIN,
+				ASRC_FIFO_THRESHOLD_MAX);
+
+	/* Update the real input threshold to raise DMA request */
+	fsl_asrc_set_watermarks(pair, ASRC_M2M_INPUTFIFO_WML,
+				ASRC_M2M_OUTPUTFIFO_WML);
+
+	return 0;
+}
+
+static int fsl_asrc_m2m_stop_part_one(struct fsl_asrc_pair *pair)
+{
+	fsl_asrc_stop_pair(pair);
+
+	return 0;
+}
+
+static int fsl_asrc_m2m_check_format(u8 dir, u32 rate, u32 channels, u32 format)
+{
+	u64 support_format = FSL_ASRC_FORMATS;
+
+	if (channels < 1 || channels > 10)
+		return -EINVAL;
+
+	if (rate < 5512 || rate > 192000)
+		return -EINVAL;
+
+	if (dir == IN)
+		support_format |= SNDRV_PCM_FMTBIT_S8;
+
+	if (!(1 << format & support_format))
+		return -EINVAL;
+
+	return 0;
+}
+
+/* calculate capture data length according to output data length and sample rate */
+static int fsl_asrc_m2m_calc_out_len(struct fsl_asrc_pair *pair, int input_buffer_length)
+{
+	unsigned int in_width, out_width;
+	unsigned int channels = pair->channels;
+	unsigned int in_samples, out_samples;
+	unsigned int out_length;
+
+	in_width = snd_pcm_format_physical_width(pair->sample_format[IN]) / 8;
+	out_width = snd_pcm_format_physical_width(pair->sample_format[OUT]) / 8;
+
+	in_samples = input_buffer_length / in_width / channels;
+	out_samples = pair->rate[OUT] * in_samples / pair->rate[IN];
+	out_length = (out_samples - ASRC_OUTPUT_LAST_SAMPLE) * out_width * channels;
+
+	return out_length;
+}
+
+static int fsl_asrc_m2m_get_maxburst(u8 dir, struct fsl_asrc_pair *pair)
+{
+	struct fsl_asrc *asrc = pair->asrc;
+	struct fsl_asrc_priv *asrc_priv = asrc->private;
+	int wml = (dir == IN) ? ASRC_M2M_INPUTFIFO_WML : ASRC_M2M_OUTPUTFIFO_WML;
+
+	if (!asrc_priv->soc->use_edma)
+		return wml * pair->channels;
+	else
+		return 1;
+}
+
+static int fsl_asrc_m2m_pair_resume(struct fsl_asrc_pair *pair)
+{
+	struct fsl_asrc *asrc = pair->asrc;
+	int i;
+
+	for (i = 0; i < pair->channels * 4; i++)
+		regmap_write(asrc->regmap, REG_ASRDI(pair->index), 0);
+
+	return 0;
+}
+
 static int fsl_asrc_runtime_resume(struct device *dev);
 static int fsl_asrc_runtime_suspend(struct device *dev);
@@ -1147,6 +1276,15 @@ static int fsl_asrc_probe(struct platform_device *pdev)
 	asrc->get_fifo_addr = fsl_asrc_get_fifo_addr;
 	asrc->pair_priv_size = sizeof(struct fsl_asrc_pair_priv);
+	asrc->m2m_start_part_one = fsl_asrc_m2m_start_part_one;
+	asrc->m2m_start_part_two = fsl_asrc_m2m_start_part_two;
+	asrc->m2m_stop_part_one = fsl_asrc_m2m_stop_part_one;
+	asrc->get_output_fifo_size = fsl_asrc_get_output_fifo_size;
+	asrc->m2m_check_format = fsl_asrc_m2m_check_format;
+	asrc->m2m_calc_out_len = fsl_asrc_m2m_calc_out_len;
+	asrc->m2m_get_maxburst = fsl_asrc_m2m_get_maxburst;
+	asrc->m2m_pair_resume = fsl_asrc_m2m_pair_resume;
+
 	if (of_device_is_compatible(np, "fsl,imx35-asrc")) {
 		asrc_priv->clk_map[IN] = input_clk_map_imx35;
 		asrc_priv->clk_map[OUT] = output_clk_map_imx35;
diff --git a/sound/soc/fsl/fsl_asrc.h b/sound/soc/fsl/fsl_asrc.h
index 86d2422ad606..1c492eb237f5 100644
--- a/sound/soc/fsl/fsl_asrc.h
+++ b/sound/soc/fsl/fsl_asrc.h
@@ -12,6 +12,8 @@
#include "fsl_asrc_common.h"
+#define ASRC_M2M_INPUTFIFO_WML	0x4
+#define ASRC_M2M_OUTPUTFIFO_WML	0x2
 #define ASRC_DMA_BUFFER_NUM		2
 #define ASRC_INPUTFIFO_THRESHOLD	32
 #define ASRC_OUTPUTFIFO_THRESHOLD	32
diff --git a/sound/soc/fsl/fsl_asrc_common.h b/sound/soc/fsl/fsl_asrc_common.h
index 7e1c13ca37f1..00a615735f35 100644
--- a/sound/soc/fsl/fsl_asrc_common.h
+++ b/sound/soc/fsl/fsl_asrc_common.h
@@ -34,6 +34,11 @@ enum asrc_pair_index {
  * @pos: hardware pointer position
  * @req_dma_chan: flag to release dev_to_dev chan
  * @private: pair private area
+ * @complete: dma task complete
+ * @sample_format: format of m2m
+ * @rate: rate of m2m
+ * @buf_len: buffer length of m2m
+ * @req_pair: flag for request pair
  */
 struct fsl_asrc_pair {
 	struct fsl_asrc *asrc;
@@ -49,6 +54,13 @@ struct fsl_asrc_pair {
 	bool req_dma_chan;
 	void *private;
+
+	/* used for m2m */
+	struct completion complete[2];
+	snd_pcm_format_t sample_format[2];
+	unsigned int rate[2];
+	unsigned int buf_len[2];
+	bool req_pair;
 };
 /**
@@ -72,6 +84,17 @@ struct fsl_asrc_pair {
  * @request_pair: function pointer
  * @release_pair: function pointer
  * @get_fifo_addr: function pointer
+ * @m2m_start_part_one: function pointer
+ * @m2m_start_part_two: function pointer
+ * @m2m_stop_part_one: function pointer
+ * @m2m_stop_part_two: function pointer
+ * @m2m_check_format: function pointer
+ * @m2m_calc_out_len: function pointer
+ * @m2m_get_maxburst: function pointer
+ * @m2m_pair_suspend: function pointer
+ * @m2m_pair_resume: function pointer
+ * @m2m_set_ratio_mod: function pointer
+ * @get_output_fifo_size: function pointer
  * @pair_priv_size: size of pair private struct.
  * @private: private data structure
  */
@@ -97,6 +120,20 @@ struct fsl_asrc {
 	int (*request_pair)(int channels, struct fsl_asrc_pair *pair);
 	void (*release_pair)(struct fsl_asrc_pair *pair);
 	int (*get_fifo_addr)(u8 dir, enum asrc_pair_index index);
+
+	int (*m2m_start_part_one)(struct fsl_asrc_pair *pair);
+	int (*m2m_start_part_two)(struct fsl_asrc_pair *pair);
+	int (*m2m_stop_part_one)(struct fsl_asrc_pair *pair);
+	int (*m2m_stop_part_two)(struct fsl_asrc_pair *pair);
+
+	int (*m2m_check_format)(u8 dir, u32 rate, u32 channels, u32 format);
+	int (*m2m_calc_out_len)(struct fsl_asrc_pair *pair, int input_buffer_length);
+	int (*m2m_get_maxburst)(u8 dir, struct fsl_asrc_pair *pair);
+	int (*m2m_pair_suspend)(struct fsl_asrc_pair *pair);
+	int (*m2m_pair_resume)(struct fsl_asrc_pair *pair);
+	int (*m2m_set_ratio_mod)(struct fsl_asrc_pair *pair, int val);
+
+	unsigned int (*get_output_fifo_size)(struct fsl_asrc_pair *pair);
 	size_t pair_priv_size;
void *private;
+static int fsl_asrc_m2m_check_format(u8 dir, u32 rate, u32 channels, u32 format)
+{
u64 support_format = FSL_ASRC_FORMATS;
if (channels < 1 || channels > 10)
return -EINVAL;
if (rate < 5512 || rate > 192000)
return -EINVAL;
I think we can avoid using magic numbers. Instead we could do:
#define FSL_ASRC_MIN_CHANNELS 1
...
#define FSL_ASRC_MAX_RATE 192000
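Spelled out in full, the suggestion could look like this (the macro names are only illustrative; the values are taken from the bounds in the quoted code):

	#define FSL_ASRC_MIN_CHANNELS	1
	#define FSL_ASRC_MAX_CHANNELS	10
	#define FSL_ASRC_MIN_RATE	5512
	#define FSL_ASRC_MAX_RATE	192000

	if (channels < FSL_ASRC_MIN_CHANNELS || channels > FSL_ASRC_MAX_CHANNELS)
		return -EINVAL;

	if (rate < FSL_ASRC_MIN_RATE || rate > FSL_ASRC_MAX_RATE)
		return -EINVAL;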
if (dir == IN)
support_format |= SNDRV_PCM_FMTBIT_S8;
if (!(1 << format & support_format))
return -EINVAL;
return 0;
+}
+/* calculate capture data length according to output data length and sample rate */
+static int fsl_asrc_m2m_calc_out_len(struct fsl_asrc_pair *pair, int input_buffer_length)
+{
unsigned int in_width, out_width;
unsigned int channels = pair->channels;
unsigned int in_samples, out_samples;
unsigned int out_length;
in_width = snd_pcm_format_physical_width(pair->sample_format[IN]) / 8;
out_width = snd_pcm_format_physical_width(pair->sample_format[OUT]) / 8;
in_samples = input_buffer_length / in_width / channels;
out_samples = pair->rate[OUT] * in_samples / pair->rate[IN];
out_length = (out_samples - ASRC_OUTPUT_LAST_SAMPLE) * out_width * channels;
return out_length;
+}
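As a quick sanity check of the arithmetic above (numbers are illustrative): for S16_LE stereo, in_width = out_width = 2 and channels = 2, so a 3840-byte input at 48000 Hz gives in_samples = 3840 / 2 / 2 = 960; converting to 44100 Hz yields out_samples = 44100 * 960 / 48000 = 882, and out_length = (882 - ASRC_OUTPUT_LAST_SAMPLE) * 2 * 2 bytes. Note the integer division also floors out_samples for non-integral rate ratios.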
+static int fsl_asrc_m2m_get_maxburst(u8 dir, struct fsl_asrc_pair *pair)
+{
struct fsl_asrc *asrc = pair->asrc;
struct fsl_asrc_priv *asrc_priv = asrc->private;
int wml = (dir == IN) ? ASRC_M2M_INPUTFIFO_WML : ASRC_M2M_OUTPUTFIFO_WML;
if (!asrc_priv->soc->use_edma)
return wml * pair->channels;
else
return 1;
+}
+static int fsl_asrc_m2m_pair_resume(struct fsl_asrc_pair *pair)
+{
struct fsl_asrc *asrc = pair->asrc;
int i;
for (i = 0; i < pair->channels * 4; i++)
regmap_write(asrc->regmap, REG_ASRDI(pair->index), 0);
return 0;
+}
static int fsl_asrc_runtime_resume(struct device *dev);
static int fsl_asrc_runtime_suspend(struct device *dev);
<snip>
There is no implementation for _suspend although you mention it in the commit message.
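For illustration only, a minimal suspend hook mirroring the stop path might look like the sketch below (this is not code from the patch, and whether stopping the pair is sufficient on suspend is exactly the open question):

	static int fsl_asrc_m2m_pair_suspend(struct fsl_asrc_pair *pair)
	{
		/* illustrative: stop the pair, as m2m_stop_part_one does */
		fsl_asrc_stop_pair(pair);

		return 0;
	}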
+ * @complete: dma task complete
+ * @sample_format: format of m2m
+ * @rate: rate of m2m
+ * @buf_len: buffer length of m2m
+ * @req_pair: flag for request pair
For example @complete field is not used in this patch. Maybe add it in the patch that uses it?
I think it is the same for the other fields.
ASRC can be used in memory-to-memory cases; define several functions for m2m usage and export them as function pointers.
Signed-off-by: Shengjiu Wang <shengjiu.wang@nxp.com>
---
 sound/soc/fsl/fsl_easrc.c | 214 ++++++++++++++++++++++++++++++++++++++
 sound/soc/fsl/fsl_easrc.h |   6 ++
 2 files changed, 220 insertions(+)
diff --git a/sound/soc/fsl/fsl_easrc.c b/sound/soc/fsl/fsl_easrc.c
index 670cbdb361b6..b735b24badc2 100644
--- a/sound/soc/fsl/fsl_easrc.c
+++ b/sound/soc/fsl/fsl_easrc.c
@@ -1861,6 +1861,210 @@ static int fsl_easrc_get_fifo_addr(u8 dir, enum asrc_pair_index index)
 	return REG_EASRC_FIFO(dir, index);
 }
+/* Get sample numbers in FIFO */
+static unsigned int fsl_easrc_get_output_fifo_size(struct fsl_asrc_pair *pair)
+{
+	struct fsl_asrc *asrc = pair->asrc;
+	enum asrc_pair_index index = pair->index;
+	u32 val;
+
+	regmap_read(asrc->regmap, REG_EASRC_SFS(index), &val);
+	val &= EASRC_SFS_NSGO_MASK;
+
+	return val >> EASRC_SFS_NSGO_SHIFT;
+}
+
+static int fsl_easrc_m2m_start_part_one(struct fsl_asrc_pair *pair)
+{
+	struct fsl_easrc_ctx_priv *ctx_priv = pair->private;
+	struct fsl_asrc *asrc = pair->asrc;
+	struct device *dev = &asrc->pdev->dev;
+	int ret;
+
+	ctx_priv->in_params.sample_rate = pair->rate[IN];
+	ctx_priv->in_params.sample_format = pair->sample_format[IN];
+	ctx_priv->out_params.sample_rate = pair->rate[OUT];
+	ctx_priv->out_params.sample_format = pair->sample_format[OUT];
+
+	ctx_priv->in_params.fifo_wtmk = FSL_EASRC_INPUTFIFO_WML;
+	ctx_priv->out_params.fifo_wtmk = FSL_EASRC_OUTPUTFIFO_WML;
+	/* Fill the right half of the re-sampler with zeros */
+	ctx_priv->rs_init_mode = 0x2;
+	/* Zero fill the right half of the prefilter */
+	ctx_priv->pf_init_mode = 0x2;
+
+	ret = fsl_easrc_set_ctx_format(pair,
+				       &ctx_priv->in_params.sample_format,
+				       &ctx_priv->out_params.sample_format);
+	if (ret) {
+		dev_err(dev, "failed to set context format: %d\n", ret);
+		return ret;
+	}
+
+	ret = fsl_easrc_config_context(asrc, pair->index);
+	if (ret) {
+		dev_err(dev, "failed to config context %d\n", ret);
+		return ret;
+	}
+
+	ctx_priv->in_params.iterations = 1;
+	ctx_priv->in_params.group_len = pair->channels;
+	ctx_priv->in_params.access_len = pair->channels;
+	ctx_priv->out_params.iterations = 1;
+	ctx_priv->out_params.group_len = pair->channels;
+	ctx_priv->out_params.access_len = pair->channels;
+
+	ret = fsl_easrc_set_ctx_organziation(pair);
+	if (ret) {
+		dev_err(dev, "failed to set fifo organization\n");
+		return ret;
+	}
+
+	/* The context start flag */
+	ctx_priv->first_convert = 1;
+	return 0;
+}
+
+static int fsl_easrc_m2m_start_part_two(struct fsl_asrc_pair *pair)
+{
+	struct fsl_easrc_ctx_priv *ctx_priv = pair->private;
+
+	/* start context once */
+	if (ctx_priv->first_convert) {
+		fsl_easrc_start_context(pair);
+		ctx_priv->first_convert = 0;
+	}
+
+	return 0;
+}
+
+static int fsl_easrc_m2m_stop_part_two(struct fsl_asrc_pair *pair)
+{
+	struct fsl_easrc_ctx_priv *ctx_priv = pair->private;
+
+	/* Stop pair/context */
+	if (!ctx_priv->first_convert) {
+		fsl_easrc_stop_context(pair);
+		ctx_priv->first_convert = 1;
+	}
+
+	return 0;
+}
+
+static int fsl_easrc_m2m_check_format(u8 dir, u32 rate, u32 channels, u32 format)
+{
+	u64 support_format = FSL_EASRC_FORMATS;
+
+	if (channels < 1 || channels > 32)
+		return -EINVAL;
+
+	if (rate < 8000 || rate > 768000)
+		return -EINVAL;
+
+	if (dir == OUT)
+		support_format |= SNDRV_PCM_FMTBIT_IEC958_SUBFRAME_LE;
+
+	if (!(1 << format & support_format))
+		return -EINVAL;
+
+	return 0;
+}
+
+/* calculate capture data length according to output data length and sample rate */
+static int fsl_easrc_m2m_calc_out_len(struct fsl_asrc_pair *pair, int input_buffer_length)
+{
+	struct fsl_asrc *easrc = pair->asrc;
+	struct fsl_easrc_priv *easrc_priv = easrc->private;
+	struct fsl_easrc_ctx_priv *ctx_priv = pair->private;
+	unsigned int in_rate = ctx_priv->in_params.norm_rate;
+	unsigned int out_rate = ctx_priv->out_params.norm_rate;
+	unsigned int channels = pair->channels;
+	unsigned int in_samples, out_samples;
+	unsigned int in_width, out_width;
+	unsigned int out_length;
+	unsigned int frac_bits;
+	u64 val1, val2;
+
+	switch (easrc_priv->rs_num_taps) {
+	case EASRC_RS_32_TAPS:
+		/* integer bits = 5; */
+		frac_bits = 39;
+		break;
+	case EASRC_RS_64_TAPS:
+		/* integer bits = 6; */
+		frac_bits = 38;
+		break;
+	case EASRC_RS_128_TAPS:
+		/* integer bits = 7; */
+		frac_bits = 37;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	val1 = (u64)in_rate << frac_bits;
+	do_div(val1, out_rate);
+	val1 = val1 + ctx_priv->ratio_mod;
+
+	in_width = snd_pcm_format_physical_width(ctx_priv->in_params.sample_format) / 8;
+	out_width = snd_pcm_format_physical_width(ctx_priv->out_params.sample_format) / 8;
+
+	ctx_priv->in_filled_len += input_buffer_length;
+	if (ctx_priv->in_filled_len <= ctx_priv->in_filled_sample * in_width * channels) {
+		out_length = 0;
+	} else {
+		in_samples = ctx_priv->in_filled_len / (in_width * channels) -
+			     ctx_priv->in_filled_sample;
+
+		/* right shift 12 bit to make ratio in 32bit space */
+		val2 = (u64)in_samples << (frac_bits - 12);
+		val1 = val1 >> 12;
+		do_div(val2, val1);
+		out_samples = val2;
+
+		out_length = out_samples * out_width * channels;
+		ctx_priv->in_filled_len = ctx_priv->in_filled_sample * in_width * channels;
+	}
+
+	return out_length;
+}
+
+static int fsl_easrc_m2m_get_maxburst(u8 dir, struct fsl_asrc_pair *pair)
+{
+	struct fsl_easrc_ctx_priv *ctx_priv = pair->private;
+
+	if (dir == IN)
+		return ctx_priv->in_params.fifo_wtmk * pair->channels;
+	else
+		return ctx_priv->out_params.fifo_wtmk * pair->channels;
+}
+
+static int fsl_easrc_m2m_pair_suspend(struct fsl_asrc_pair *pair)
+{
+	fsl_easrc_stop_context(pair);
+
+	return 0;
+}
+
+static int fsl_easrc_m2m_pair_resume(struct fsl_asrc_pair *pair)
+{
+	struct fsl_easrc_ctx_priv *ctx_priv = pair->private;
+
+	ctx_priv->first_convert = 1;
+	ctx_priv->in_filled_len = 0;
+
+	return 0;
+}
+
+static int fsl_easrc_m2m_set_ratio_mod(struct fsl_asrc_pair *pair, int val)
+{
+	struct fsl_easrc_ctx_priv *ctx_priv = pair->private;
+	struct fsl_asrc *easrc = pair->asrc;
+
+	ctx_priv->ratio_mod += val;
+	regmap_write(easrc->regmap, REG_EASRC_RUC(pair->index), EASRC_RSUC_RS_RM(val));
+
+	return 0;
+}
+
 static const struct of_device_id fsl_easrc_dt_ids[] = {
 	{ .compatible = "fsl,imx8mn-easrc",},
 	{}
@@ -1926,6 +2130,16 @@ static int fsl_easrc_probe(struct platform_device *pdev)
 	easrc->release_pair = fsl_easrc_release_context;
 	easrc->get_fifo_addr = fsl_easrc_get_fifo_addr;
 	easrc->pair_priv_size = sizeof(struct fsl_easrc_ctx_priv);
+	easrc->m2m_start_part_one = fsl_easrc_m2m_start_part_one;
+	easrc->m2m_start_part_two = fsl_easrc_m2m_start_part_two;
+	easrc->m2m_stop_part_two = fsl_easrc_m2m_stop_part_two;
+	easrc->get_output_fifo_size = fsl_easrc_get_output_fifo_size;
+	easrc->m2m_check_format = fsl_easrc_m2m_check_format;
+	easrc->m2m_calc_out_len = fsl_easrc_m2m_calc_out_len;
+	easrc->m2m_get_maxburst = fsl_easrc_m2m_get_maxburst;
+	easrc->m2m_pair_suspend = fsl_easrc_m2m_pair_suspend;
+	easrc->m2m_pair_resume = fsl_easrc_m2m_pair_resume;
+	easrc->m2m_set_ratio_mod = fsl_easrc_m2m_set_ratio_mod;
 	easrc_priv->rs_num_taps = EASRC_RS_32_TAPS;
 	easrc_priv->const_coeff = 0x3FF0000000000000;
diff --git a/sound/soc/fsl/fsl_easrc.h b/sound/soc/fsl/fsl_easrc.h
index 7c70dac52713..bee887c8b4f2 100644
--- a/sound/soc/fsl/fsl_easrc.h
+++ b/sound/soc/fsl/fsl_easrc.h
@@ -601,6 +601,9 @@ struct fsl_easrc_slot {
  * @out_missed_sample: sample missed in output
  * @st1_addexp: exponent added for stage1
  * @st2_addexp: exponent added for stage2
+ * @ratio_mod: update ratio
+ * @first_convert: start of conversion
+ * @in_filled_len: input filled length
  */
 struct fsl_easrc_ctx_priv {
 	struct fsl_easrc_io_params in_params;
@@ -618,6 +621,9 @@ struct fsl_easrc_ctx_priv {
 	int out_missed_sample;
 	int st1_addexp;
 	int st2_addexp;
+	int ratio_mod;
+	unsigned int first_convert;
+	unsigned int in_filled_len;
 };
/**
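To make the fixed-point ratio math in fsl_easrc_m2m_calc_out_len concrete, here is a worked example (values are illustrative, assuming EASRC_RS_32_TAPS so frac_bits = 39, and ratio_mod = 0):

	in_rate = 44100, out_rate = 48000
	val1 = (44100 << 39) / 48000	/* ratio in Q39, about 0.91875 * 2^39 */
	in_samples = 441
	val2 = 441 << (39 - 12)		/* samples scaled to Q27 */
	val1 >>= 12			/* ratio reduced to Q27 */
	out_samples = val2 / val1	/* = 441 * 48000 / 44100 = 480 */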
Move fsl_asrc_common.h to include/sound so that it can be included by other drivers.
Signed-off-by: Shengjiu Wang <shengjiu.wang@nxp.com>
---
 {sound/soc/fsl => include/sound}/fsl_asrc_common.h | 0
 sound/soc/fsl/fsl_asrc.h                           | 2 +-
 sound/soc/fsl/fsl_asrc_dma.c                       | 2 +-
 sound/soc/fsl/fsl_easrc.h                          | 2 +-
 4 files changed, 3 insertions(+), 3 deletions(-)
 rename {sound/soc/fsl => include/sound}/fsl_asrc_common.h (100%)
diff --git a/sound/soc/fsl/fsl_asrc_common.h b/include/sound/fsl_asrc_common.h
similarity index 100%
rename from sound/soc/fsl/fsl_asrc_common.h
rename to include/sound/fsl_asrc_common.h
diff --git a/sound/soc/fsl/fsl_asrc.h b/sound/soc/fsl/fsl_asrc.h
index 1c492eb237f5..66544624de7b 100644
--- a/sound/soc/fsl/fsl_asrc.h
+++ b/sound/soc/fsl/fsl_asrc.h
@@ -10,7 +10,7 @@
 #ifndef _FSL_ASRC_H
 #define _FSL_ASRC_H
-#include "fsl_asrc_common.h" +#include <sound/fsl_asrc_common.h>
 #define ASRC_M2M_INPUTFIFO_WML	0x4
 #define ASRC_M2M_OUTPUTFIFO_WML	0x2
diff --git a/sound/soc/fsl/fsl_asrc_dma.c b/sound/soc/fsl/fsl_asrc_dma.c
index 05a7d1588d20..b034fee3f1f4 100644
--- a/sound/soc/fsl/fsl_asrc_dma.c
+++ b/sound/soc/fsl/fsl_asrc_dma.c
@@ -12,7 +12,7 @@
 #include <sound/dmaengine_pcm.h>
 #include <sound/pcm_params.h>
-#include "fsl_asrc_common.h" +#include <sound/fsl_asrc_common.h>
#define FSL_ASRC_DMABUF_SIZE (256 * 1024)
diff --git a/sound/soc/fsl/fsl_easrc.h b/sound/soc/fsl/fsl_easrc.h
index bee887c8b4f2..f571647c508f 100644
--- a/sound/soc/fsl/fsl_easrc.h
+++ b/sound/soc/fsl/fsl_easrc.h
@@ -9,7 +9,7 @@
 #include <sound/asound.h>
 #include <linux/dma/imx-dma.h>
-#include "fsl_asrc_common.h" +#include <sound/fsl_asrc_common.h>
/* EASRC Register Map */
Audio signal processing, like video, has a requirement for memory-to-memory operation.
This patch adds that support to the v4l2 framework: it defines the new buffer types V4L2_BUF_TYPE_AUDIO_CAPTURE and V4L2_BUF_TYPE_AUDIO_OUTPUT, and a new format, v4l2_audio_format, for the audio use case.
The created audio device is named "/dev/audioX".
Signed-off-by: Shengjiu Wang <shengjiu.wang@nxp.com>
---
 .../media/common/videobuf2/videobuf2-v4l2.c |  4 ++
 drivers/media/v4l2-core/v4l2-dev.c          | 17 ++++++
 drivers/media/v4l2-core/v4l2-ioctl.c        | 52 +++++++++++++++++++
 include/media/v4l2-dev.h                    |  2 +
 include/media/v4l2-ioctl.h                  | 34 ++++++++++++
 include/uapi/linux/videodev2.h              | 19 +++++++
 6 files changed, 128 insertions(+)
diff --git a/drivers/media/common/videobuf2/videobuf2-v4l2.c b/drivers/media/common/videobuf2/videobuf2-v4l2.c
index c7a54d82a55e..12f2be2773a2 100644
--- a/drivers/media/common/videobuf2/videobuf2-v4l2.c
+++ b/drivers/media/common/videobuf2/videobuf2-v4l2.c
@@ -785,6 +785,10 @@ int vb2_create_bufs(struct vb2_queue *q, struct v4l2_create_buffers *create)
 	case V4L2_BUF_TYPE_META_OUTPUT:
 		requested_sizes[0] = f->fmt.meta.buffersize;
 		break;
+	case V4L2_BUF_TYPE_AUDIO_CAPTURE:
+	case V4L2_BUF_TYPE_AUDIO_OUTPUT:
+		requested_sizes[0] = f->fmt.audio.buffersize;
+		break;
 	default:
 		return -EINVAL;
 	}
diff --git a/drivers/media/v4l2-core/v4l2-dev.c b/drivers/media/v4l2-core/v4l2-dev.c
index f81279492682..67484f4c6eaf 100644
--- a/drivers/media/v4l2-core/v4l2-dev.c
+++ b/drivers/media/v4l2-core/v4l2-dev.c
@@ -553,6 +553,7 @@ static void determine_valid_ioctls(struct video_device *vdev)
 	bool is_tch = vdev->vfl_type == VFL_TYPE_TOUCH;
 	bool is_meta = vdev->vfl_type == VFL_TYPE_VIDEO &&
 		       (vdev->device_caps & meta_caps);
+	bool is_audio = vdev->vfl_type == VFL_TYPE_AUDIO;
 	bool is_rx = vdev->vfl_dir != VFL_DIR_TX;
 	bool is_tx = vdev->vfl_dir != VFL_DIR_RX;
 	bool is_io_mc = vdev->device_caps & V4L2_CAP_IO_MC;
@@ -664,6 +665,19 @@ static void determine_valid_ioctls(struct video_device *vdev)
 		SET_VALID_IOCTL(ops, VIDIOC_S_FMT, vidioc_s_fmt_meta_out);
 		SET_VALID_IOCTL(ops, VIDIOC_TRY_FMT, vidioc_try_fmt_meta_out);
 	}
+	if (is_audio && is_rx) {
+		/* audio capture specific ioctls */
+		SET_VALID_IOCTL(ops, VIDIOC_ENUM_FMT, vidioc_enum_fmt_audio_cap);
+		SET_VALID_IOCTL(ops, VIDIOC_G_FMT, vidioc_g_fmt_audio_cap);
+		SET_VALID_IOCTL(ops, VIDIOC_S_FMT, vidioc_s_fmt_audio_cap);
+		SET_VALID_IOCTL(ops, VIDIOC_TRY_FMT, vidioc_try_fmt_audio_cap);
+	} else if (is_audio && is_tx) {
+		/* audio output specific ioctls */
+		SET_VALID_IOCTL(ops, VIDIOC_ENUM_FMT, vidioc_enum_fmt_audio_out);
+		SET_VALID_IOCTL(ops, VIDIOC_G_FMT, vidioc_g_fmt_audio_out);
+		SET_VALID_IOCTL(ops, VIDIOC_S_FMT, vidioc_s_fmt_audio_out);
+		SET_VALID_IOCTL(ops, VIDIOC_TRY_FMT, vidioc_try_fmt_audio_out);
+	}
 	if (is_vbi) {
 		/* vbi specific ioctls */
 		if ((is_rx && (ops->vidioc_g_fmt_vbi_cap ||
@@ -927,6 +941,9 @@ int __video_register_device(struct video_device *vdev,
 	case VFL_TYPE_TOUCH:
 		name_base = "v4l-touch";
 		break;
+	case VFL_TYPE_AUDIO:
+		name_base = "audio";
+		break;
 	default:
 		pr_err("%s called with unknown type: %d\n",
 		       __func__, type);
diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
index 01ba27f2ef87..aa9d872bba8d 100644
--- a/drivers/media/v4l2-core/v4l2-ioctl.c
+++ b/drivers/media/v4l2-core/v4l2-ioctl.c
@@ -188,6 +188,8 @@ const char *v4l2_type_names[] = {
 	[V4L2_BUF_TYPE_SDR_OUTPUT]         = "sdr-out",
 	[V4L2_BUF_TYPE_META_CAPTURE]       = "meta-cap",
 	[V4L2_BUF_TYPE_META_OUTPUT]        = "meta-out",
+	[V4L2_BUF_TYPE_AUDIO_CAPTURE]      = "audio-cap",
+	[V4L2_BUF_TYPE_AUDIO_OUTPUT]       = "audio-out",
 };
 EXPORT_SYMBOL(v4l2_type_names);
@@ -276,6 +278,7 @@ static void v4l_print_format(const void *arg, bool write_only)
 	const struct v4l2_sliced_vbi_format *sliced;
 	const struct v4l2_window *win;
 	const struct v4l2_meta_format *meta;
+	const struct v4l2_audio_format *audio;
 	u32 pixelformat;
 	u32 planes;
 	unsigned i;
@@ -346,6 +349,12 @@ static void v4l_print_format(const void *arg, bool write_only)
 		pr_cont(", dataformat=%p4cc, buffersize=%u\n",
 			&pixelformat, meta->buffersize);
 		break;
+	case V4L2_BUF_TYPE_AUDIO_CAPTURE:
+	case V4L2_BUF_TYPE_AUDIO_OUTPUT:
+		audio = &p->fmt.audio;
+		pr_cont(", rate=%u, format=%u, channels=%u, buffersize=%u\n",
+			audio->rate, audio->format, audio->channels, audio->buffersize);
+		break;
 	}
 }
@@ -927,6 +936,7 @@ static int check_fmt(struct file *file, enum v4l2_buf_type type)
 	bool is_tch = vfd->vfl_type == VFL_TYPE_TOUCH;
 	bool is_meta = vfd->vfl_type == VFL_TYPE_VIDEO &&
 		       (vfd->device_caps & meta_caps);
+	bool is_audio = vfd->vfl_type == VFL_TYPE_AUDIO;
 	bool is_rx = vfd->vfl_dir != VFL_DIR_TX;
 	bool is_tx = vfd->vfl_dir != VFL_DIR_RX;
@@ -992,6 +1002,14 @@ static int check_fmt(struct file *file, enum v4l2_buf_type type)
 		if (is_meta && is_tx && ops->vidioc_g_fmt_meta_out)
 			return 0;
 		break;
+	case V4L2_BUF_TYPE_AUDIO_CAPTURE:
+		if (is_audio && is_rx && ops->vidioc_g_fmt_audio_cap)
+			return 0;
+		break;
+	case V4L2_BUF_TYPE_AUDIO_OUTPUT:
+		if (is_audio && is_tx && ops->vidioc_g_fmt_audio_out)
+			return 0;
+		break;
 	default:
 		break;
 	}
@@ -1594,6 +1612,16 @@ static int v4l_enum_fmt(const struct v4l2_ioctl_ops *ops,
 			break;
 		ret = ops->vidioc_enum_fmt_meta_out(file, fh, arg);
 		break;
+	case V4L2_BUF_TYPE_AUDIO_CAPTURE:
+		if (unlikely(!ops->vidioc_enum_fmt_audio_cap))
+			break;
+		ret = ops->vidioc_enum_fmt_audio_cap(file, fh, arg);
+		break;
+	case V4L2_BUF_TYPE_AUDIO_OUTPUT:
+		if (unlikely(!ops->vidioc_enum_fmt_audio_out))
+			break;
+		ret = ops->vidioc_enum_fmt_audio_out(file, fh, arg);
+		break;
 	}
 	if (ret == 0)
 		v4l_fill_fmtdesc(p);
@@ -1670,6 +1698,10 @@ static int v4l_g_fmt(const struct v4l2_ioctl_ops *ops,
 		return ops->vidioc_g_fmt_meta_cap(file, fh, arg);
 	case V4L2_BUF_TYPE_META_OUTPUT:
 		return ops->vidioc_g_fmt_meta_out(file, fh, arg);
+	case V4L2_BUF_TYPE_AUDIO_CAPTURE:
+		return ops->vidioc_g_fmt_audio_cap(file, fh, arg);
+	case V4L2_BUF_TYPE_AUDIO_OUTPUT:
+		return ops->vidioc_g_fmt_audio_out(file, fh, arg);
 	}
 	return -EINVAL;
 }
@@ -1781,6 +1813,16 @@ static int v4l_s_fmt(const struct v4l2_ioctl_ops *ops,
 			break;
 		memset_after(p, 0, fmt.meta);
 		return ops->vidioc_s_fmt_meta_out(file, fh, arg);
+	case V4L2_BUF_TYPE_AUDIO_CAPTURE:
+		if (unlikely(!ops->vidioc_s_fmt_audio_cap))
+			break;
+		memset_after(p, 0, fmt.audio);
+		return ops->vidioc_s_fmt_audio_cap(file, fh, arg);
+	case V4L2_BUF_TYPE_AUDIO_OUTPUT:
+		if (unlikely(!ops->vidioc_s_fmt_audio_out))
+			break;
+		memset_after(p, 0, fmt.audio);
+		return ops->vidioc_s_fmt_audio_out(file, fh, arg);
 	}
 	return -EINVAL;
 }
@@ -1889,6 +1931,16 @@ static int v4l_try_fmt(const struct v4l2_ioctl_ops *ops,
 			break;
 		memset_after(p, 0, fmt.meta);
 		return ops->vidioc_try_fmt_meta_out(file, fh, arg);
+	case V4L2_BUF_TYPE_AUDIO_CAPTURE:
+		if (unlikely(!ops->vidioc_try_fmt_audio_cap))
+			break;
+		memset_after(p, 0, fmt.audio);
+		return ops->vidioc_try_fmt_audio_cap(file, fh, arg);
+	case V4L2_BUF_TYPE_AUDIO_OUTPUT:
+		if (unlikely(!ops->vidioc_try_fmt_audio_out))
+			break;
+		memset_after(p, 0, fmt.audio);
+		return ops->vidioc_try_fmt_audio_out(file, fh, arg);
 	}
 	return -EINVAL;
 }
diff --git a/include/media/v4l2-dev.h b/include/media/v4l2-dev.h
index e0a13505f88d..0924e6d1dab1 100644
--- a/include/media/v4l2-dev.h
+++ b/include/media/v4l2-dev.h
@@ -30,6 +30,7 @@
  * @VFL_TYPE_SUBDEV: for V4L2 subdevices
  * @VFL_TYPE_SDR: for Software Defined Radio tuners
  * @VFL_TYPE_TOUCH: for touch sensors
+ * @VFL_TYPE_AUDIO: for audio input/output devices
  * @VFL_TYPE_MAX: number of VFL types, must always be last in the enum
  */
 enum vfl_devnode_type {
@@ -39,6 +40,7 @@ enum vfl_devnode_type {
 	VFL_TYPE_SUBDEV,
 	VFL_TYPE_SDR,
 	VFL_TYPE_TOUCH,
+	VFL_TYPE_AUDIO,
 	VFL_TYPE_MAX /* Shall be the last one */
 };
diff --git a/include/media/v4l2-ioctl.h b/include/media/v4l2-ioctl.h
index edb733f21604..f840cf740ce1 100644
--- a/include/media/v4l2-ioctl.h
+++ b/include/media/v4l2-ioctl.h
@@ -45,6 +45,12 @@ struct v4l2_fh;
  * @vidioc_enum_fmt_meta_out: pointer to the function that implements
  *	:ref:`VIDIOC_ENUM_FMT <vidioc_enum_fmt>` ioctl logic
  *	for metadata output
+ * @vidioc_enum_fmt_audio_cap: pointer to the function that implements
+ *	:ref:`VIDIOC_ENUM_FMT <vidioc_enum_fmt>` ioctl logic
+ *	for audio capture
+ * @vidioc_enum_fmt_audio_out: pointer to the function that implements
+ *	:ref:`VIDIOC_ENUM_FMT <vidioc_enum_fmt>` ioctl logic
+ *	for audio output
  * @vidioc_g_fmt_vid_cap: pointer to the function that implements
  *	:ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for video capture
  *	in single plane mode
@@ -79,6 +85,10 @@ struct v4l2_fh;
  *	:ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for metadata capture
  * @vidioc_g_fmt_meta_out: pointer to the function that implements
  *	:ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for metadata output
+ * @vidioc_g_fmt_audio_cap: pointer to the function that implements
+ *	:ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for audio capture
+ * @vidioc_g_fmt_audio_out: pointer to the function that implements
+ *	:ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for audio output
  * @vidioc_s_fmt_vid_cap: pointer to the function that implements
  *	:ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for video capture
  *	in single plane mode
@@ -113,6 +123,10 @@ struct v4l2_fh;
  *	:ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for metadata capture
  * @vidioc_s_fmt_meta_out: pointer to the function that implements
  *	:ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for metadata output
+ * @vidioc_s_fmt_audio_cap: pointer to the function that implements
+ *	:ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for audio capture
+ * @vidioc_s_fmt_audio_out: pointer to the function that implements
+ *	:ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for audio output
  * @vidioc_try_fmt_vid_cap: pointer to the function that implements
  *	:ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for video capture
  *	in single plane mode
@@ -149,6 +163,10 @@ struct v4l2_fh;
  *	:ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for metadata capture
  * @vidioc_try_fmt_meta_out: pointer to the function that implements
  *	:ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for metadata output
+ * @vidioc_try_fmt_audio_cap: pointer to the function that implements
+ *	:ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for audio capture
+ * @vidioc_try_fmt_audio_out: pointer to the function that implements
+ *	:ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for audio output
  * @vidioc_reqbufs: pointer to the function that implements
  *	:ref:`VIDIOC_REQBUFS <vidioc_reqbufs>` ioctl
  * @vidioc_querybuf: pointer to the function that implements
@@ -315,6 +333,10 @@ struct v4l2_ioctl_ops {
 					    struct v4l2_fmtdesc *f);
 	int (*vidioc_enum_fmt_meta_out)(struct file *file, void *fh,
 					struct v4l2_fmtdesc *f);
+	int (*vidioc_enum_fmt_audio_cap)(struct file *file, void *fh,
+					 struct v4l2_fmtdesc *f);
+	int (*vidioc_enum_fmt_audio_out)(struct file *file, void *fh,
+					 struct v4l2_fmtdesc *f);

 	/* VIDIOC_G_FMT handlers */
 	int (*vidioc_g_fmt_vid_cap)(struct file *file, void *fh,
@@ -345,6 +367,10 @@ struct v4l2_ioctl_ops {
 				    struct v4l2_format *f);
 	int (*vidioc_g_fmt_meta_out)(struct file *file, void *fh,
 				     struct v4l2_format *f);
+	int (*vidioc_g_fmt_audio_cap)(struct file *file, void *fh,
+				      struct v4l2_format *f);
+	int (*vidioc_g_fmt_audio_out)(struct file *file, void *fh,
+				      struct v4l2_format *f);

 	/* VIDIOC_S_FMT handlers */
 	int (*vidioc_s_fmt_vid_cap)(struct file *file, void *fh,
@@ -375,6 +401,10 @@ struct v4l2_ioctl_ops {
 				    struct v4l2_format *f);
 	int (*vidioc_s_fmt_meta_out)(struct file *file, void *fh,
 				     struct v4l2_format *f);
+	int (*vidioc_s_fmt_audio_cap)(struct file *file, void *fh,
+				      struct v4l2_format *f);
+	int (*vidioc_s_fmt_audio_out)(struct file *file, void *fh,
+				      struct v4l2_format *f);

 	/* VIDIOC_TRY_FMT handlers */
 	int (*vidioc_try_fmt_vid_cap)(struct file *file, void *fh,
@@ -405,6 +435,10 @@ struct v4l2_ioctl_ops {
 				    struct v4l2_format *f);
 	int (*vidioc_try_fmt_meta_out)(struct file *file, void *fh,
 				       struct v4l2_format *f);
+	int (*vidioc_try_fmt_audio_cap)(struct file *file, void *fh,
+					struct v4l2_format *f);
+	int (*vidioc_try_fmt_audio_out)(struct file *file, void *fh,
+					struct v4l2_format *f);

 	/* Buffer handlers */
 	int (*vidioc_reqbufs)(struct file *file, void *fh,
diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
index 3af6a82d0cad..e5051410928a 100644
--- a/include/uapi/linux/videodev2.h
+++ b/include/uapi/linux/videodev2.h
@@ -153,6 +153,8 @@ enum v4l2_buf_type {
 	V4L2_BUF_TYPE_SDR_OUTPUT           = 12,
 	V4L2_BUF_TYPE_META_CAPTURE         = 13,
 	V4L2_BUF_TYPE_META_OUTPUT          = 14,
+	V4L2_BUF_TYPE_AUDIO_CAPTURE        = 15,
+	V4L2_BUF_TYPE_AUDIO_OUTPUT         = 16,
 	/* Deprecated, do not use */
 	V4L2_BUF_TYPE_PRIVATE              = 0x80,
 };
@@ -169,6 +171,7 @@ enum v4l2_buf_type {
 	 || (type) == V4L2_BUF_TYPE_VBI_OUTPUT			\
 	 || (type) == V4L2_BUF_TYPE_SLICED_VBI_OUTPUT		\
 	 || (type) == V4L2_BUF_TYPE_SDR_OUTPUT			\
+	 || (type) == V4L2_BUF_TYPE_AUDIO_OUTPUT		\
 	 || (type) == V4L2_BUF_TYPE_META_OUTPUT)

 #define V4L2_TYPE_IS_CAPTURE(type) (!V4L2_TYPE_IS_OUTPUT(type))
@@ -2415,6 +2418,20 @@ struct v4l2_meta_format {
 	__u32				buffersize;
 } __attribute__ ((packed));

+/**
+ * struct v4l2_audio_format - audio data format definition
+ * @rate: sample rate
+ * @format: sample format
+ * @channels: channel numbers
+ * @buffersize: maximum size in bytes required for data
+ */
+struct v4l2_audio_format {
+	__u32				rate;
+	__u32				format;
+	__u32				channels;
+	__u32				buffersize;
+} __attribute__ ((packed));
+
 /**
  * struct v4l2_format - stream data format
  * @type: enum v4l2_buf_type; type of the data stream
@@ -2423,6 +2440,7 @@ struct v4l2_meta_format {
  * @win: definition of an overlaid image
  * @vbi: raw VBI capture or output parameters
  * @sliced: sliced VBI capture or output parameters
+ * @audio: definition of an audio format
  * @raw_data: placeholder for future extensions and custom formats
  * @fmt: union of @pix, @pix_mp, @win, @vbi, @sliced, @sdr, @meta
  *	 and @raw_data
@@ -2437,6 +2455,7 @@ struct v4l2_format {
 		struct v4l2_sliced_vbi_format	sliced;  /* V4L2_BUF_TYPE_SLICED_VBI_CAPTURE */
 		struct v4l2_sdr_format		sdr;     /* V4L2_BUF_TYPE_SDR_CAPTURE */
 		struct v4l2_meta_format		meta;    /* V4L2_BUF_TYPE_META_CAPTURE */
+		struct v4l2_audio_format	audio;   /* V4L2_BUF_TYPE_AUDIO_CAPTURE */
 		__u8	raw_data[200];                   /* user-defined */
 	} fmt;
 };
Hi Shengjiu,
On Tue, Jul 25, 2023 at 02:12:17PM +0800, Shengjiu Wang wrote:
Audio signal processing, like video, has a requirement for memory-to-memory operation.
This patch adds that support to the v4l2 framework: it defines the new buffer types V4L2_BUF_TYPE_AUDIO_CAPTURE and V4L2_BUF_TYPE_AUDIO_OUTPUT, and a new format, v4l2_audio_format, for the audio use case.
The created audio device is named "/dev/audioX".
Signed-off-by: Shengjiu Wang shengjiu.wang@nxp.com
.../media/common/videobuf2/videobuf2-v4l2.c | 4 ++ drivers/media/v4l2-core/v4l2-dev.c | 17 ++++++ drivers/media/v4l2-core/v4l2-ioctl.c | 52 +++++++++++++++++++ include/media/v4l2-dev.h | 2 + include/media/v4l2-ioctl.h | 34 ++++++++++++ include/uapi/linux/videodev2.h | 19 +++++++ 6 files changed, 128 insertions(+)
Thanks for the patch! Please check my comments inline.
diff --git a/drivers/media/common/videobuf2/videobuf2-v4l2.c b/drivers/media/common/videobuf2/videobuf2-v4l2.c
index c7a54d82a55e..12f2be2773a2 100644
--- a/drivers/media/common/videobuf2/videobuf2-v4l2.c
+++ b/drivers/media/common/videobuf2/videobuf2-v4l2.c
@@ -785,6 +785,10 @@ int vb2_create_bufs(struct vb2_queue *q, struct v4l2_create_buffers *create)
 	case V4L2_BUF_TYPE_META_OUTPUT:
 		requested_sizes[0] = f->fmt.meta.buffersize;
 		break;
+	case V4L2_BUF_TYPE_AUDIO_CAPTURE:
+	case V4L2_BUF_TYPE_AUDIO_OUTPUT:
+		requested_sizes[0] = f->fmt.audio.buffersize;
+		break;
 	default:
 		return -EINVAL;
 	}
diff --git a/drivers/media/v4l2-core/v4l2-dev.c b/drivers/media/v4l2-core/v4l2-dev.c
index f81279492682..67484f4c6eaf 100644
--- a/drivers/media/v4l2-core/v4l2-dev.c
+++ b/drivers/media/v4l2-core/v4l2-dev.c
@@ -553,6 +553,7 @@ static void determine_valid_ioctls(struct video_device *vdev)
 	bool is_tch = vdev->vfl_type == VFL_TYPE_TOUCH;
 	bool is_meta = vdev->vfl_type == VFL_TYPE_VIDEO &&
 		       (vdev->device_caps & meta_caps);
+	bool is_audio = vdev->vfl_type == VFL_TYPE_AUDIO;
 	bool is_rx = vdev->vfl_dir != VFL_DIR_TX;
 	bool is_tx = vdev->vfl_dir != VFL_DIR_RX;
 	bool is_io_mc = vdev->device_caps & V4L2_CAP_IO_MC;
@@ -664,6 +665,19 @@ static void determine_valid_ioctls(struct video_device *vdev)
 		SET_VALID_IOCTL(ops, VIDIOC_S_FMT, vidioc_s_fmt_meta_out);
 		SET_VALID_IOCTL(ops, VIDIOC_TRY_FMT, vidioc_try_fmt_meta_out);
 	}
- if (is_audio && is_rx) {
/* audio capture specific ioctls */
SET_VALID_IOCTL(ops, VIDIOC_ENUM_FMT, vidioc_enum_fmt_audio_cap);
SET_VALID_IOCTL(ops, VIDIOC_G_FMT, vidioc_g_fmt_audio_cap);
SET_VALID_IOCTL(ops, VIDIOC_S_FMT, vidioc_s_fmt_audio_cap);
SET_VALID_IOCTL(ops, VIDIOC_TRY_FMT, vidioc_try_fmt_audio_cap);
- } else if (is_audio && is_tx) {
/* audio output specific ioctls */
SET_VALID_IOCTL(ops, VIDIOC_ENUM_FMT, vidioc_enum_fmt_audio_out);
SET_VALID_IOCTL(ops, VIDIOC_G_FMT, vidioc_g_fmt_audio_out);
SET_VALID_IOCTL(ops, VIDIOC_S_FMT, vidioc_s_fmt_audio_out);
SET_VALID_IOCTL(ops, VIDIOC_TRY_FMT, vidioc_try_fmt_audio_out);
- } if (is_vbi) { /* vbi specific ioctls */ if ((is_rx && (ops->vidioc_g_fmt_vbi_cap ||
@@ -927,6 +941,9 @@ int __video_register_device(struct video_device *vdev,
 	case VFL_TYPE_TOUCH:
 		name_base = "v4l-touch";
 		break;
+	case VFL_TYPE_AUDIO:
+		name_base = "audio";
I think it was mentioned before that "audio" could be confusing. Wasn't there actually some other kind of /dev/audio device long ago?
Seems like for touch, "v4l-touch" was introduced. Maybe it would also make sense to call it "v4l-audio" for audio?
+		break;
 	default:
 		pr_err("%s called with unknown type: %d\n",
 		       __func__, type);
diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
index 01ba27f2ef87..aa9d872bba8d 100644
--- a/drivers/media/v4l2-core/v4l2-ioctl.c
+++ b/drivers/media/v4l2-core/v4l2-ioctl.c
@@ -188,6 +188,8 @@ const char *v4l2_type_names[] = {
 	[V4L2_BUF_TYPE_SDR_OUTPUT]         = "sdr-out",
 	[V4L2_BUF_TYPE_META_CAPTURE]       = "meta-cap",
 	[V4L2_BUF_TYPE_META_OUTPUT]        = "meta-out",
+	[V4L2_BUF_TYPE_AUDIO_CAPTURE]      = "audio-cap",
+	[V4L2_BUF_TYPE_AUDIO_OUTPUT]       = "audio-out",
 };
 EXPORT_SYMBOL(v4l2_type_names);
@@ -276,6 +278,7 @@ static void v4l_print_format(const void *arg, bool write_only)
 	const struct v4l2_sliced_vbi_format *sliced;
 	const struct v4l2_window *win;
 	const struct v4l2_meta_format *meta;
+	const struct v4l2_audio_format *audio;
 	u32 pixelformat;
 	u32 planes;
 	unsigned i;
@@ -346,6 +349,12 @@ static void v4l_print_format(const void *arg, bool write_only)
 		pr_cont(", dataformat=%p4cc, buffersize=%u\n",
 			&pixelformat, meta->buffersize);
 		break;
+	case V4L2_BUF_TYPE_AUDIO_CAPTURE:
+	case V4L2_BUF_TYPE_AUDIO_OUTPUT:
+		audio = &p->fmt.audio;
+		pr_cont(", rate=%u, format=%u, channels=%u, buffersize=%u\n",
+			audio->rate, audio->format, audio->channels, audio->buffersize);
+		break;
 	}
}
@@ -927,6 +936,7 @@ static int check_fmt(struct file *file, enum v4l2_buf_type type)
 	bool is_tch = vfd->vfl_type == VFL_TYPE_TOUCH;
 	bool is_meta = vfd->vfl_type == VFL_TYPE_VIDEO &&
 		       (vfd->device_caps & meta_caps);
+	bool is_audio = vfd->vfl_type == VFL_TYPE_AUDIO;
 	bool is_rx = vfd->vfl_dir != VFL_DIR_TX;
 	bool is_tx = vfd->vfl_dir != VFL_DIR_RX;
@@ -992,6 +1002,14 @@ static int check_fmt(struct file *file, enum v4l2_buf_type type)
 		if (is_meta && is_tx && ops->vidioc_g_fmt_meta_out)
 			return 0;
 		break;
+	case V4L2_BUF_TYPE_AUDIO_CAPTURE:
+		if (is_audio && is_rx && ops->vidioc_g_fmt_audio_cap)
+			return 0;
+		break;
+	case V4L2_BUF_TYPE_AUDIO_OUTPUT:
+		if (is_audio && is_tx && ops->vidioc_g_fmt_audio_out)
+			return 0;
+		break;
 	default:
 		break;
 	}
@@ -1594,6 +1612,16 @@ static int v4l_enum_fmt(const struct v4l2_ioctl_ops *ops,
 			break;
 		ret = ops->vidioc_enum_fmt_meta_out(file, fh, arg);
 		break;
+	case V4L2_BUF_TYPE_AUDIO_CAPTURE:
+		if (unlikely(!ops->vidioc_enum_fmt_audio_cap))
+			break;
+		ret = ops->vidioc_enum_fmt_audio_cap(file, fh, arg);
+		break;
+	case V4L2_BUF_TYPE_AUDIO_OUTPUT:
+		if (unlikely(!ops->vidioc_enum_fmt_audio_out))
+			break;
+		ret = ops->vidioc_enum_fmt_audio_out(file, fh, arg);
+		break;
 	}
 	if (ret == 0)
 		v4l_fill_fmtdesc(p);
@@ -1670,6 +1698,10 @@ static int v4l_g_fmt(const struct v4l2_ioctl_ops *ops,
 		return ops->vidioc_g_fmt_meta_cap(file, fh, arg);
 	case V4L2_BUF_TYPE_META_OUTPUT:
 		return ops->vidioc_g_fmt_meta_out(file, fh, arg);
+	case V4L2_BUF_TYPE_AUDIO_CAPTURE:
+		return ops->vidioc_g_fmt_audio_cap(file, fh, arg);
+	case V4L2_BUF_TYPE_AUDIO_OUTPUT:
+		return ops->vidioc_g_fmt_audio_out(file, fh, arg);
 	}
 	return -EINVAL;
 }
@@ -1781,6 +1813,16 @@ static int v4l_s_fmt(const struct v4l2_ioctl_ops *ops,
 			break;
 		memset_after(p, 0, fmt.meta);
 		return ops->vidioc_s_fmt_meta_out(file, fh, arg);
+	case V4L2_BUF_TYPE_AUDIO_CAPTURE:
+		if (unlikely(!ops->vidioc_s_fmt_audio_cap))
+			break;
+		memset_after(p, 0, fmt.audio);
+		return ops->vidioc_s_fmt_audio_cap(file, fh, arg);
+	case V4L2_BUF_TYPE_AUDIO_OUTPUT:
+		if (unlikely(!ops->vidioc_s_fmt_audio_out))
+			break;
+		memset_after(p, 0, fmt.audio);
+		return ops->vidioc_s_fmt_audio_out(file, fh, arg);
 	}
 	return -EINVAL;
 }
@@ -1889,6 +1931,16 @@ static int v4l_try_fmt(const struct v4l2_ioctl_ops *ops,
 			break;
 		memset_after(p, 0, fmt.meta);
 		return ops->vidioc_try_fmt_meta_out(file, fh, arg);
+	case V4L2_BUF_TYPE_AUDIO_CAPTURE:
+		if (unlikely(!ops->vidioc_try_fmt_audio_cap))
+			break;
+		memset_after(p, 0, fmt.audio);
+		return ops->vidioc_try_fmt_audio_cap(file, fh, arg);
+	case V4L2_BUF_TYPE_AUDIO_OUTPUT:
+		if (unlikely(!ops->vidioc_try_fmt_audio_out))
+			break;
+		memset_after(p, 0, fmt.audio);
+		return ops->vidioc_try_fmt_audio_out(file, fh, arg);
 	}
 	return -EINVAL;
 }
diff --git a/include/media/v4l2-dev.h b/include/media/v4l2-dev.h
index e0a13505f88d..0924e6d1dab1 100644
--- a/include/media/v4l2-dev.h
+++ b/include/media/v4l2-dev.h
@@ -30,6 +30,7 @@
  * @VFL_TYPE_SUBDEV: for V4L2 subdevices
  * @VFL_TYPE_SDR: for Software Defined Radio tuners
  * @VFL_TYPE_TOUCH: for touch sensors
+ * @VFL_TYPE_AUDIO: for audio input/output devices
  * @VFL_TYPE_MAX: number of VFL types, must always be last in the enum
  */
 enum vfl_devnode_type {
@@ -39,6 +40,7 @@ enum vfl_devnode_type {
 	VFL_TYPE_SUBDEV,
 	VFL_TYPE_SDR,
 	VFL_TYPE_TOUCH,
+	VFL_TYPE_AUDIO,
 	VFL_TYPE_MAX /* Shall be the last one */
};
diff --git a/include/media/v4l2-ioctl.h b/include/media/v4l2-ioctl.h
index edb733f21604..f840cf740ce1 100644
--- a/include/media/v4l2-ioctl.h
+++ b/include/media/v4l2-ioctl.h
@@ -45,6 +45,12 @@ struct v4l2_fh;
- @vidioc_enum_fmt_meta_out: pointer to the function that implements
- :ref:`VIDIOC_ENUM_FMT <vidioc_enum_fmt>` ioctl logic
- for metadata output
- @vidioc_enum_fmt_audio_cap: pointer to the function that implements
- :ref:`VIDIOC_ENUM_FMT <vidioc_enum_fmt>` ioctl logic
- for audio capture
- @vidioc_enum_fmt_audio_out: pointer to the function that implements
- :ref:`VIDIOC_ENUM_FMT <vidioc_enum_fmt>` ioctl logic
- for audio output
- @vidioc_g_fmt_vid_cap: pointer to the function that implements
- :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for video capture
- in single plane mode
@@ -79,6 +85,10 @@ struct v4l2_fh;
- :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for metadata capture
- @vidioc_g_fmt_meta_out: pointer to the function that implements
- :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for metadata output
- @vidioc_g_fmt_audio_cap: pointer to the function that implements
- :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for audio capture
- @vidioc_g_fmt_audio_out: pointer to the function that implements
- :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for audio output
- @vidioc_s_fmt_vid_cap: pointer to the function that implements
- :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for video capture
- in single plane mode
@@ -113,6 +123,10 @@ struct v4l2_fh;
- :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for metadata capture
- @vidioc_s_fmt_meta_out: pointer to the function that implements
- :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for metadata output
- @vidioc_s_fmt_audio_cap: pointer to the function that implements
- :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for audio capture
- @vidioc_s_fmt_audio_out: pointer to the function that implements
- :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for audio output
- @vidioc_try_fmt_vid_cap: pointer to the function that implements
- :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for video capture
- in single plane mode
@@ -149,6 +163,10 @@ struct v4l2_fh;
- :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for metadata capture
- @vidioc_try_fmt_meta_out: pointer to the function that implements
- :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for metadata output
- @vidioc_try_fmt_audio_cap: pointer to the function that implements
- :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for audio capture
- @vidioc_try_fmt_audio_out: pointer to the function that implements
- :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for audio output
- @vidioc_reqbufs: pointer to the function that implements
- :ref:`VIDIOC_REQBUFS <vidioc_reqbufs>` ioctl
- @vidioc_querybuf: pointer to the function that implements
@@ -315,6 +333,10 @@ struct v4l2_ioctl_ops {
 					    struct v4l2_fmtdesc *f);
 	int (*vidioc_enum_fmt_meta_out)(struct file *file, void *fh,
 					struct v4l2_fmtdesc *f);
+	int (*vidioc_enum_fmt_audio_cap)(struct file *file, void *fh,
+					 struct v4l2_fmtdesc *f);
+	int (*vidioc_enum_fmt_audio_out)(struct file *file, void *fh,
+					 struct v4l2_fmtdesc *f);

 	/* VIDIOC_G_FMT handlers */
 	int (*vidioc_g_fmt_vid_cap)(struct file *file, void *fh,
@@ -345,6 +367,10 @@ struct v4l2_ioctl_ops {
 				    struct v4l2_format *f);
 	int (*vidioc_g_fmt_meta_out)(struct file *file, void *fh,
 				     struct v4l2_format *f);
+	int (*vidioc_g_fmt_audio_cap)(struct file *file, void *fh,
+				      struct v4l2_format *f);
+	int (*vidioc_g_fmt_audio_out)(struct file *file, void *fh,
+				      struct v4l2_format *f);

 	/* VIDIOC_S_FMT handlers */
 	int (*vidioc_s_fmt_vid_cap)(struct file *file, void *fh,
@@ -375,6 +401,10 @@ struct v4l2_ioctl_ops {
 				    struct v4l2_format *f);
 	int (*vidioc_s_fmt_meta_out)(struct file *file, void *fh,
 				     struct v4l2_format *f);
+	int (*vidioc_s_fmt_audio_cap)(struct file *file, void *fh,
+				      struct v4l2_format *f);
+	int (*vidioc_s_fmt_audio_out)(struct file *file, void *fh,
+				      struct v4l2_format *f);

 	/* VIDIOC_TRY_FMT handlers */
 	int (*vidioc_try_fmt_vid_cap)(struct file *file, void *fh,
@@ -405,6 +435,10 @@ struct v4l2_ioctl_ops {
 				    struct v4l2_format *f);
 	int (*vidioc_try_fmt_meta_out)(struct file *file, void *fh,
 				       struct v4l2_format *f);
+	int (*vidioc_try_fmt_audio_cap)(struct file *file, void *fh,
+					struct v4l2_format *f);
+	int (*vidioc_try_fmt_audio_out)(struct file *file, void *fh,
+					struct v4l2_format *f);

 	/* Buffer handlers */
 	int (*vidioc_reqbufs)(struct file *file, void *fh,
diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
index 3af6a82d0cad..e5051410928a 100644
--- a/include/uapi/linux/videodev2.h
+++ b/include/uapi/linux/videodev2.h
@@ -153,6 +153,8 @@ enum v4l2_buf_type {
 	V4L2_BUF_TYPE_SDR_OUTPUT           = 12,
 	V4L2_BUF_TYPE_META_CAPTURE         = 13,
 	V4L2_BUF_TYPE_META_OUTPUT          = 14,
+	V4L2_BUF_TYPE_AUDIO_CAPTURE        = 15,
+	V4L2_BUF_TYPE_AUDIO_OUTPUT         = 16,
 	/* Deprecated, do not use */
 	V4L2_BUF_TYPE_PRIVATE              = 0x80,
 };
@@ -169,6 +171,7 @@ enum v4l2_buf_type {
 	 || (type) == V4L2_BUF_TYPE_VBI_OUTPUT			\
 	 || (type) == V4L2_BUF_TYPE_SLICED_VBI_OUTPUT		\
 	 || (type) == V4L2_BUF_TYPE_SDR_OUTPUT			\
+	 || (type) == V4L2_BUF_TYPE_AUDIO_OUTPUT		\
 	 || (type) == V4L2_BUF_TYPE_META_OUTPUT)
 #define V4L2_TYPE_IS_CAPTURE(type) (!V4L2_TYPE_IS_OUTPUT(type))
@@ -2415,6 +2418,20 @@ struct v4l2_meta_format {
 	__u32				buffersize;
 } __attribute__ ((packed));
+/**
+ * struct v4l2_audio_format - audio data format definition
+ * @rate: sample rate
+ * @format: sample format
+ * @channels: channel numbers
+ * @buffersize: maximum size in bytes required for data
+ */
+struct v4l2_audio_format {
+	__u32				rate;
+	__u32				format;
What are the values for the rate and format fields? Since they are part of the UAPI, they need to be defined.
Best regards,
Tomasz
+	__u32				channels;
+	__u32				buffersize;
+} __attribute__ ((packed));
 /**
  * struct v4l2_format - stream data format
  * @type: enum v4l2_buf_type; type of the data stream
@@ -2423,6 +2440,7 @@ struct v4l2_meta_format {
  * @win: definition of an overlaid image
  * @vbi: raw VBI capture or output parameters
  * @sliced: sliced VBI capture or output parameters
+ * @audio: definition of an audio format
  * @raw_data: placeholder for future extensions and custom formats
  * @fmt: union of @pix, @pix_mp, @win, @vbi, @sliced, @sdr, @meta
  *	 and @raw_data
@@ -2437,6 +2455,7 @@ struct v4l2_format {
 		struct v4l2_sliced_vbi_format	sliced;  /* V4L2_BUF_TYPE_SLICED_VBI_CAPTURE */
 		struct v4l2_sdr_format		sdr;     /* V4L2_BUF_TYPE_SDR_CAPTURE */
 		struct v4l2_meta_format		meta;    /* V4L2_BUF_TYPE_META_CAPTURE */
+		struct v4l2_audio_format	audio;   /* V4L2_BUF_TYPE_AUDIO_CAPTURE */
 		__u8	raw_data[200];                   /* user-defined */
 	} fmt;
};
2.34.1
On Fri, Jul 28, 2023 at 07:59:33AM +0000, Tomasz Figa wrote:
On Tue, Jul 25, 2023 at 02:12:17PM +0800, Shengjiu Wang wrote:
- case VFL_TYPE_AUDIO:
name_base = "audio";
I think it was mentioned before that "audio" could be confusing. Wasn't there actually some other kind of /dev/audio device long ago?
OSS used /dev/audio.
On Fri, Jul 28, 2023 at 3:59 PM Tomasz Figa tfiga@chromium.org wrote:
Hi Shengjiu,
On Tue, Jul 25, 2023 at 02:12:17PM +0800, Shengjiu Wang wrote:
Audio signal processing, like video, has a requirement for memory-to-memory operation.
This patch adds that support to the v4l2 framework: it defines the new buffer types V4L2_BUF_TYPE_AUDIO_CAPTURE and V4L2_BUF_TYPE_AUDIO_OUTPUT, and a new format, v4l2_audio_format, for the audio use case.
The created audio device is named "/dev/audioX".
Signed-off-by: Shengjiu Wang shengjiu.wang@nxp.com
.../media/common/videobuf2/videobuf2-v4l2.c | 4 ++ drivers/media/v4l2-core/v4l2-dev.c | 17 ++++++ drivers/media/v4l2-core/v4l2-ioctl.c | 52 +++++++++++++++++++ include/media/v4l2-dev.h | 2 + include/media/v4l2-ioctl.h | 34 ++++++++++++ include/uapi/linux/videodev2.h | 19 +++++++ 6 files changed, 128 insertions(+)
Thanks for the patch! Please check my comments inline.
Thanks for reviewing.
diff --git a/drivers/media/common/videobuf2/videobuf2-v4l2.c
b/drivers/media/common/videobuf2/videobuf2-v4l2.c
index c7a54d82a55e..12f2be2773a2 100644
--- a/drivers/media/common/videobuf2/videobuf2-v4l2.c
+++ b/drivers/media/common/videobuf2/videobuf2-v4l2.c
@@ -785,6 +785,10 @@ int vb2_create_bufs(struct vb2_queue *q, struct
v4l2_create_buffers *create)
case V4L2_BUF_TYPE_META_OUTPUT:
	requested_sizes[0] = f->fmt.meta.buffersize;
	break;
case V4L2_BUF_TYPE_AUDIO_CAPTURE:
case V4L2_BUF_TYPE_AUDIO_OUTPUT:
requested_sizes[0] = f->fmt.audio.buffersize;
	break;
default:
	return -EINVAL;
}
diff --git a/drivers/media/v4l2-core/v4l2-dev.c
b/drivers/media/v4l2-core/v4l2-dev.c
index f81279492682..67484f4c6eaf 100644 --- a/drivers/media/v4l2-core/v4l2-dev.c +++ b/drivers/media/v4l2-core/v4l2-dev.c @@ -553,6 +553,7 @@ static void determine_valid_ioctls(struct
video_device *vdev)
bool is_tch = vdev->vfl_type == VFL_TYPE_TOUCH;
bool is_meta = vdev->vfl_type == VFL_TYPE_VIDEO &&
	       (vdev->device_caps & meta_caps);
bool is_audio = vdev->vfl_type == VFL_TYPE_AUDIO;
bool is_rx = vdev->vfl_dir != VFL_DIR_TX;
bool is_tx = vdev->vfl_dir != VFL_DIR_RX;
bool is_io_mc = vdev->device_caps & V4L2_CAP_IO_MC;
@@ -664,6 +665,19 @@ static void determine_valid_ioctls(struct
video_device *vdev)
SET_VALID_IOCTL(ops, VIDIOC_S_FMT, vidioc_s_fmt_meta_out); SET_VALID_IOCTL(ops, VIDIOC_TRY_FMT,
vidioc_try_fmt_meta_out);
}
if (is_audio && is_rx) {
/* audio capture specific ioctls */
SET_VALID_IOCTL(ops, VIDIOC_ENUM_FMT,
vidioc_enum_fmt_audio_cap);
SET_VALID_IOCTL(ops, VIDIOC_G_FMT, vidioc_g_fmt_audio_cap);
SET_VALID_IOCTL(ops, VIDIOC_S_FMT, vidioc_s_fmt_audio_cap);
SET_VALID_IOCTL(ops, VIDIOC_TRY_FMT,
vidioc_try_fmt_audio_cap);
} else if (is_audio && is_tx) {
/* audio output specific ioctls */
SET_VALID_IOCTL(ops, VIDIOC_ENUM_FMT,
vidioc_enum_fmt_audio_out);
SET_VALID_IOCTL(ops, VIDIOC_G_FMT, vidioc_g_fmt_audio_out);
SET_VALID_IOCTL(ops, VIDIOC_S_FMT, vidioc_s_fmt_audio_out);
SET_VALID_IOCTL(ops, VIDIOC_TRY_FMT,
vidioc_try_fmt_audio_out);
}
if (is_vbi) {
	/* vbi specific ioctls */
	if ((is_rx && (ops->vidioc_g_fmt_vbi_cap ||
@@ -927,6 +941,9 @@ int __video_register_device(struct video_device
*vdev,
case VFL_TYPE_TOUCH: name_base = "v4l-touch"; break;
case VFL_TYPE_AUDIO:
name_base = "audio";
I think it was mentioned before that "audio" could be confusing. Wasn't there actually some other kind of /dev/audio device long ago?
Seems like for touch, "v4l-touch" was introduced. Maybe it would also make sense to call it "v4l-audio" for audio?
Ok, will change to use "v4l-audio".
	break;
default:
	pr_err("%s called with unknown type: %d\n",
	       __func__, type);
diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c
b/drivers/media/v4l2-core/v4l2-ioctl.c
index 01ba27f2ef87..aa9d872bba8d 100644 --- a/drivers/media/v4l2-core/v4l2-ioctl.c +++ b/drivers/media/v4l2-core/v4l2-ioctl.c @@ -188,6 +188,8 @@ const char *v4l2_type_names[] = { [V4L2_BUF_TYPE_SDR_OUTPUT] = "sdr-out", [V4L2_BUF_TYPE_META_CAPTURE] = "meta-cap", [V4L2_BUF_TYPE_META_OUTPUT] = "meta-out",
[V4L2_BUF_TYPE_AUDIO_CAPTURE] = "audio-cap",
[V4L2_BUF_TYPE_AUDIO_OUTPUT] = "audio-out",
};
EXPORT_SYMBOL(v4l2_type_names);
@@ -276,6 +278,7 @@ static void v4l_print_format(const void *arg, bool
write_only)
const struct v4l2_sliced_vbi_format *sliced; const struct v4l2_window *win; const struct v4l2_meta_format *meta;
const struct v4l2_audio_format *audio;
u32 pixelformat;
u32 planes;
unsigned i;
@@ -346,6 +349,12 @@ static void v4l_print_format(const void *arg, bool
write_only)
pr_cont(", dataformat=%p4cc, buffersize=%u\n", &pixelformat, meta->buffersize); break;
case V4L2_BUF_TYPE_AUDIO_CAPTURE:
case V4L2_BUF_TYPE_AUDIO_OUTPUT:
audio = &p->fmt.audio;
pr_cont(", rate=%u, format=%u, channels=%u,
buffersize=%u\n",
audio->rate, audio->format, audio->channels,
audio->buffersize);
	break;
}
}
@@ -927,6 +936,7 @@ static int check_fmt(struct file *file, enum
v4l2_buf_type type)
bool is_tch = vfd->vfl_type == VFL_TYPE_TOUCH; bool is_meta = vfd->vfl_type == VFL_TYPE_VIDEO && (vfd->device_caps & meta_caps);
bool is_audio = vfd->vfl_type == VFL_TYPE_AUDIO;
bool is_rx = vfd->vfl_dir != VFL_DIR_TX;
bool is_tx = vfd->vfl_dir != VFL_DIR_RX;
@@ -992,6 +1002,14 @@ static int check_fmt(struct file *file, enum
v4l2_buf_type type)
if (is_meta && is_tx && ops->vidioc_g_fmt_meta_out) return 0; break;
case V4L2_BUF_TYPE_AUDIO_CAPTURE:
if (is_audio && is_rx && ops->vidioc_g_fmt_audio_cap)
return 0;
break;
case V4L2_BUF_TYPE_AUDIO_OUTPUT:
if (is_audio && is_tx && ops->vidioc_g_fmt_audio_out)
return 0;
	break;
default:
	break;
}
@@ -1594,6 +1612,16 @@ static int v4l_enum_fmt(const struct
v4l2_ioctl_ops *ops,
break; ret = ops->vidioc_enum_fmt_meta_out(file, fh, arg); break;
case V4L2_BUF_TYPE_AUDIO_CAPTURE:
if (unlikely(!ops->vidioc_enum_fmt_audio_cap))
break;
ret = ops->vidioc_enum_fmt_audio_cap(file, fh, arg);
break;
case V4L2_BUF_TYPE_AUDIO_OUTPUT:
if (unlikely(!ops->vidioc_enum_fmt_audio_out))
break;
ret = ops->vidioc_enum_fmt_audio_out(file, fh, arg);
	break;
}
if (ret == 0)
	v4l_fill_fmtdesc(p);
@@ -1670,6 +1698,10 @@ static int v4l_g_fmt(const struct v4l2_ioctl_ops
*ops,
return ops->vidioc_g_fmt_meta_cap(file, fh, arg); case V4L2_BUF_TYPE_META_OUTPUT: return ops->vidioc_g_fmt_meta_out(file, fh, arg);
case V4L2_BUF_TYPE_AUDIO_CAPTURE:
return ops->vidioc_g_fmt_audio_cap(file, fh, arg);
case V4L2_BUF_TYPE_AUDIO_OUTPUT:
return ops->vidioc_g_fmt_audio_out(file, fh, arg); } return -EINVAL;
} @@ -1781,6 +1813,16 @@ static int v4l_s_fmt(const struct v4l2_ioctl_ops
*ops,
break; memset_after(p, 0, fmt.meta); return ops->vidioc_s_fmt_meta_out(file, fh, arg);
case V4L2_BUF_TYPE_AUDIO_CAPTURE:
if (unlikely(!ops->vidioc_s_fmt_audio_cap))
break;
memset_after(p, 0, fmt.audio);
return ops->vidioc_s_fmt_audio_cap(file, fh, arg);
case V4L2_BUF_TYPE_AUDIO_OUTPUT:
if (unlikely(!ops->vidioc_s_fmt_audio_out))
break;
memset_after(p, 0, fmt.audio);
return ops->vidioc_s_fmt_audio_out(file, fh, arg); } return -EINVAL;
} @@ -1889,6 +1931,16 @@ static int v4l_try_fmt(const struct
v4l2_ioctl_ops *ops,
break; memset_after(p, 0, fmt.meta); return ops->vidioc_try_fmt_meta_out(file, fh, arg);
case V4L2_BUF_TYPE_AUDIO_CAPTURE:
if (unlikely(!ops->vidioc_try_fmt_audio_cap))
break;
memset_after(p, 0, fmt.audio);
return ops->vidioc_try_fmt_audio_cap(file, fh, arg);
case V4L2_BUF_TYPE_AUDIO_OUTPUT:
if (unlikely(!ops->vidioc_try_fmt_audio_out))
break;
memset_after(p, 0, fmt.audio);
return ops->vidioc_try_fmt_audio_out(file, fh, arg); } return -EINVAL;
} diff --git a/include/media/v4l2-dev.h b/include/media/v4l2-dev.h index e0a13505f88d..0924e6d1dab1 100644 --- a/include/media/v4l2-dev.h +++ b/include/media/v4l2-dev.h @@ -30,6 +30,7 @@
- @VFL_TYPE_SUBDEV: for V4L2 subdevices
- @VFL_TYPE_SDR: for Software Defined Radio tuners
- @VFL_TYPE_TOUCH: for touch sensors
- @VFL_TYPE_AUDIO: for audio input/output devices
- @VFL_TYPE_MAX: number of VFL types, must always be last in the
enum
*/ enum vfl_devnode_type { @@ -39,6 +40,7 @@ enum vfl_devnode_type { VFL_TYPE_SUBDEV, VFL_TYPE_SDR, VFL_TYPE_TOUCH,
VFL_TYPE_AUDIO, VFL_TYPE_MAX /* Shall be the last one */
};
diff --git a/include/media/v4l2-ioctl.h b/include/media/v4l2-ioctl.h index edb733f21604..f840cf740ce1 100644 --- a/include/media/v4l2-ioctl.h +++ b/include/media/v4l2-ioctl.h @@ -45,6 +45,12 @@ struct v4l2_fh;
- @vidioc_enum_fmt_meta_out: pointer to the function that implements
- :ref:`VIDIOC_ENUM_FMT <vidioc_enum_fmt>` ioctl logic
- for metadata output
- @vidioc_enum_fmt_audio_cap: pointer to the function that implements
- :ref:`VIDIOC_ENUM_FMT <vidioc_enum_fmt>` ioctl logic
- for audio capture
- @vidioc_enum_fmt_audio_out: pointer to the function that implements
- :ref:`VIDIOC_ENUM_FMT <vidioc_enum_fmt>` ioctl logic
- for audio output
- @vidioc_g_fmt_vid_cap: pointer to the function that implements
- :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for video capture
- in single plane mode
@@ -79,6 +85,10 @@ struct v4l2_fh;
- :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for metadata capture
- @vidioc_g_fmt_meta_out: pointer to the function that implements
- :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for metadata output
- @vidioc_g_fmt_audio_cap: pointer to the function that implements
- :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for audio capture
- @vidioc_g_fmt_audio_out: pointer to the function that implements
- :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for audio output
- @vidioc_s_fmt_vid_cap: pointer to the function that implements
- :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for video capture
- in single plane mode
@@ -113,6 +123,10 @@ struct v4l2_fh;
- :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for metadata capture
- @vidioc_s_fmt_meta_out: pointer to the function that implements
- :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for metadata output
- @vidioc_s_fmt_audio_cap: pointer to the function that implements
- :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for audio capture
- @vidioc_s_fmt_audio_out: pointer to the function that implements
- :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for audio output
- @vidioc_try_fmt_vid_cap: pointer to the function that implements
- :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for video capture
- in single plane mode
@@ -149,6 +163,10 @@ struct v4l2_fh;
- :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for metadata
capture
- @vidioc_try_fmt_meta_out: pointer to the function that implements
- :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for metadata
output
- @vidioc_try_fmt_audio_cap: pointer to the function that implements
- :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for audio capture
- @vidioc_try_fmt_audio_out: pointer to the function that implements
- :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for audio output
- @vidioc_reqbufs: pointer to the function that implements
- :ref:`VIDIOC_REQBUFS <vidioc_reqbufs>` ioctl
- @vidioc_querybuf: pointer to the function that implements
@@ -315,6 +333,10 @@ struct v4l2_ioctl_ops { struct v4l2_fmtdesc *f); int (*vidioc_enum_fmt_meta_out)(struct file *file, void *fh, struct v4l2_fmtdesc *f);
int (*vidioc_enum_fmt_audio_cap)(struct file *file, void *fh,
struct v4l2_fmtdesc *f);
int (*vidioc_enum_fmt_audio_out)(struct file *file, void *fh,
struct v4l2_fmtdesc *f); /* VIDIOC_G_FMT handlers */ int (*vidioc_g_fmt_vid_cap)(struct file *file, void *fh,
@@ -345,6 +367,10 @@ struct v4l2_ioctl_ops { struct v4l2_format *f); int (*vidioc_g_fmt_meta_out)(struct file *file, void *fh, struct v4l2_format *f);
int (*vidioc_g_fmt_audio_cap)(struct file *file, void *fh,
struct v4l2_format *f);
int (*vidioc_g_fmt_audio_out)(struct file *file, void *fh,
struct v4l2_format *f); /* VIDIOC_S_FMT handlers */ int (*vidioc_s_fmt_vid_cap)(struct file *file, void *fh,
@@ -375,6 +401,10 @@ struct v4l2_ioctl_ops { struct v4l2_format *f); int (*vidioc_s_fmt_meta_out)(struct file *file, void *fh, struct v4l2_format *f);
int (*vidioc_s_fmt_audio_cap)(struct file *file, void *fh,
struct v4l2_format *f);
int (*vidioc_s_fmt_audio_out)(struct file *file, void *fh,
struct v4l2_format *f); /* VIDIOC_TRY_FMT handlers */ int (*vidioc_try_fmt_vid_cap)(struct file *file, void *fh,
@@ -405,6 +435,10 @@ struct v4l2_ioctl_ops { struct v4l2_format *f); int (*vidioc_try_fmt_meta_out)(struct file *file, void *fh, struct v4l2_format *f);
int (*vidioc_try_fmt_audio_cap)(struct file *file, void *fh,
struct v4l2_format *f);
int (*vidioc_try_fmt_audio_out)(struct file *file, void *fh,
struct v4l2_format *f); /* Buffer handlers */ int (*vidioc_reqbufs)(struct file *file, void *fh,
diff --git a/include/uapi/linux/videodev2.h
b/include/uapi/linux/videodev2.h
index 3af6a82d0cad..e5051410928a 100644 --- a/include/uapi/linux/videodev2.h +++ b/include/uapi/linux/videodev2.h @@ -153,6 +153,8 @@ enum v4l2_buf_type { V4L2_BUF_TYPE_SDR_OUTPUT = 12, V4L2_BUF_TYPE_META_CAPTURE = 13, V4L2_BUF_TYPE_META_OUTPUT = 14,
V4L2_BUF_TYPE_AUDIO_CAPTURE = 15,
V4L2_BUF_TYPE_AUDIO_OUTPUT = 16, /* Deprecated, do not use */ V4L2_BUF_TYPE_PRIVATE = 0x80,
}; @@ -169,6 +171,7 @@ enum v4l2_buf_type { || (type) == V4L2_BUF_TYPE_VBI_OUTPUT \ || (type) == V4L2_BUF_TYPE_SLICED_VBI_OUTPUT \ || (type) == V4L2_BUF_TYPE_SDR_OUTPUT \
|| (type) == V4L2_BUF_TYPE_AUDIO_OUTPUT \ || (type) == V4L2_BUF_TYPE_META_OUTPUT)
#define V4L2_TYPE_IS_CAPTURE(type) (!V4L2_TYPE_IS_OUTPUT(type)) @@ -2415,6 +2418,20 @@ struct v4l2_meta_format { __u32 buffersize; } __attribute__ ((packed));
+/**
- struct v4l2_audio_format - audio data format definition
- @rate: sample rate
- @format: sample format
- @channels: channel numbers
- @buffersize: maximum size in bytes required for data
- */
+struct v4l2_audio_format {
__u32 rate;
__u32 format;
What are the values for the rate and format fields? Since they are part of the UAPI, they need to be defined.
The range for sample rate is [5512, 768000].
The format is defined in include/uapi/sound/asound.h, they are SNDRV_PCM_FORMAT_S8, SNDRV_PCM_FORMAT_U8, ...
Where should I put these info?
best regards wang shengjiu
Best regards, Tomasz
__u32 channels;
__u32 buffersize;
+} __attribute__ ((packed));
/**
- struct v4l2_format - stream data format
- @type: enum v4l2_buf_type; type of the data stream
@@ -2423,6 +2440,7 @@ struct v4l2_meta_format {
- @win: definition of an overlaid image
- @vbi: raw VBI capture or output parameters
- @sliced: sliced VBI capture or output parameters
- @audio: definition of an audio format
- @raw_data: placeholder for future extensions and custom
formats
- @fmt: union of @pix, @pix_mp, @win, @vbi, @sliced, @sdr, @meta
and @raw_data
@@ -2437,6 +2455,7 @@ struct v4l2_format { struct v4l2_sliced_vbi_format sliced; /*
V4L2_BUF_TYPE_SLICED_VBI_CAPTURE */
struct v4l2_sdr_format sdr; /*
V4L2_BUF_TYPE_SDR_CAPTURE */
struct v4l2_meta_format meta; /*
V4L2_BUF_TYPE_META_CAPTURE */
struct v4l2_audio_format audio; /*
V4L2_BUF_TYPE_AUDIO_CAPTURE */
__u8 raw_data[200]; /* user-defined */ } fmt;
};
2.34.1
On Tue, Aug 1, 2023 at 6:47 PM Shengjiu Wang shengjiu.wang@gmail.com wrote:
On Fri, Jul 28, 2023 at 3:59 PM Tomasz Figa tfiga@chromium.org wrote:
Hi Shengjiu,
On Tue, Jul 25, 2023 at 02:12:17PM +0800, Shengjiu Wang wrote:
Audio signal processing, like video, has a requirement for memory-to-memory operation.
This patch adds this support to the v4l2 framework, defining the new buffer types V4L2_BUF_TYPE_AUDIO_CAPTURE and V4L2_BUF_TYPE_AUDIO_OUTPUT and a new format, v4l2_audio_format, for the audio use case.
The created audio device is named "/dev/audioX".
Signed-off-by: Shengjiu Wang shengjiu.wang@nxp.com
.../media/common/videobuf2/videobuf2-v4l2.c | 4 ++ drivers/media/v4l2-core/v4l2-dev.c | 17 ++++++ drivers/media/v4l2-core/v4l2-ioctl.c | 52 +++++++++++++++++++ include/media/v4l2-dev.h | 2 + include/media/v4l2-ioctl.h | 34 ++++++++++++ include/uapi/linux/videodev2.h | 19 +++++++ 6 files changed, 128 insertions(+)
Thanks for the patch! Please check my comments inline.
Thanks for reviewing.
Sorry for sending this again; this time it is in plain text mode.
diff --git a/drivers/media/common/videobuf2/videobuf2-v4l2.c b/drivers/media/common/videobuf2/videobuf2-v4l2.c index c7a54d82a55e..12f2be2773a2 100644 --- a/drivers/media/common/videobuf2/videobuf2-v4l2.c +++ b/drivers/media/common/videobuf2/videobuf2-v4l2.c @@ -785,6 +785,10 @@ int vb2_create_bufs(struct vb2_queue *q, struct v4l2_create_buffers *create) case V4L2_BUF_TYPE_META_OUTPUT: requested_sizes[0] = f->fmt.meta.buffersize; break;
case V4L2_BUF_TYPE_AUDIO_CAPTURE:
case V4L2_BUF_TYPE_AUDIO_OUTPUT:
requested_sizes[0] = f->fmt.audio.buffersize;
break; default: return -EINVAL; }
diff --git a/drivers/media/v4l2-core/v4l2-dev.c b/drivers/media/v4l2-core/v4l2-dev.c index f81279492682..67484f4c6eaf 100644 --- a/drivers/media/v4l2-core/v4l2-dev.c +++ b/drivers/media/v4l2-core/v4l2-dev.c @@ -553,6 +553,7 @@ static void determine_valid_ioctls(struct video_device *vdev) bool is_tch = vdev->vfl_type == VFL_TYPE_TOUCH; bool is_meta = vdev->vfl_type == VFL_TYPE_VIDEO && (vdev->device_caps & meta_caps);
bool is_audio = vdev->vfl_type == VFL_TYPE_AUDIO; bool is_rx = vdev->vfl_dir != VFL_DIR_TX; bool is_tx = vdev->vfl_dir != VFL_DIR_RX; bool is_io_mc = vdev->device_caps & V4L2_CAP_IO_MC;
@@ -664,6 +665,19 @@ static void determine_valid_ioctls(struct video_device *vdev) SET_VALID_IOCTL(ops, VIDIOC_S_FMT, vidioc_s_fmt_meta_out); SET_VALID_IOCTL(ops, VIDIOC_TRY_FMT, vidioc_try_fmt_meta_out); }
if (is_audio && is_rx) {
/* audio capture specific ioctls */
SET_VALID_IOCTL(ops, VIDIOC_ENUM_FMT, vidioc_enum_fmt_audio_cap);
SET_VALID_IOCTL(ops, VIDIOC_G_FMT, vidioc_g_fmt_audio_cap);
SET_VALID_IOCTL(ops, VIDIOC_S_FMT, vidioc_s_fmt_audio_cap);
SET_VALID_IOCTL(ops, VIDIOC_TRY_FMT, vidioc_try_fmt_audio_cap);
} else if (is_audio && is_tx) {
/* audio output specific ioctls */
SET_VALID_IOCTL(ops, VIDIOC_ENUM_FMT, vidioc_enum_fmt_audio_out);
SET_VALID_IOCTL(ops, VIDIOC_G_FMT, vidioc_g_fmt_audio_out);
SET_VALID_IOCTL(ops, VIDIOC_S_FMT, vidioc_s_fmt_audio_out);
SET_VALID_IOCTL(ops, VIDIOC_TRY_FMT, vidioc_try_fmt_audio_out);
} if (is_vbi) { /* vbi specific ioctls */ if ((is_rx && (ops->vidioc_g_fmt_vbi_cap ||
@@ -927,6 +941,9 @@ int __video_register_device(struct video_device *vdev, case VFL_TYPE_TOUCH: name_base = "v4l-touch"; break;
case VFL_TYPE_AUDIO:
name_base = "audio";
I think it was mentioned before that "audio" could be confusing. Wasn't there actually some other kind of /dev/audio device long ago?
Seems like for touch, "v4l-touch" was introduced. Maybe it would also make sense to call it "v4l-audio" for audio?
Ok, will change to use "v4l-audio".
break; default: pr_err("%s called with unknown type: %d\n", __func__, type);
diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c index 01ba27f2ef87..aa9d872bba8d 100644 --- a/drivers/media/v4l2-core/v4l2-ioctl.c +++ b/drivers/media/v4l2-core/v4l2-ioctl.c @@ -188,6 +188,8 @@ const char *v4l2_type_names[] = { [V4L2_BUF_TYPE_SDR_OUTPUT] = "sdr-out", [V4L2_BUF_TYPE_META_CAPTURE] = "meta-cap", [V4L2_BUF_TYPE_META_OUTPUT] = "meta-out",
[V4L2_BUF_TYPE_AUDIO_CAPTURE] = "audio-cap",
[V4L2_BUF_TYPE_AUDIO_OUTPUT] = "audio-out",
}; EXPORT_SYMBOL(v4l2_type_names);
@@ -276,6 +278,7 @@ static void v4l_print_format(const void *arg, bool write_only) const struct v4l2_sliced_vbi_format *sliced; const struct v4l2_window *win; const struct v4l2_meta_format *meta;
const struct v4l2_audio_format *audio; u32 pixelformat; u32 planes; unsigned i;
@@ -346,6 +349,12 @@ static void v4l_print_format(const void *arg, bool write_only) pr_cont(", dataformat=%p4cc, buffersize=%u\n", &pixelformat, meta->buffersize); break;
case V4L2_BUF_TYPE_AUDIO_CAPTURE:
case V4L2_BUF_TYPE_AUDIO_OUTPUT:
audio = &p->fmt.audio;
pr_cont(", rate=%u, format=%u, channels=%u, buffersize=%u\n",
audio->rate, audio->format, audio->channels, audio->buffersize);
break; }
}
@@ -927,6 +936,7 @@ static int check_fmt(struct file *file, enum v4l2_buf_type type) bool is_tch = vfd->vfl_type == VFL_TYPE_TOUCH; bool is_meta = vfd->vfl_type == VFL_TYPE_VIDEO && (vfd->device_caps & meta_caps);
bool is_audio = vfd->vfl_type == VFL_TYPE_AUDIO; bool is_rx = vfd->vfl_dir != VFL_DIR_TX; bool is_tx = vfd->vfl_dir != VFL_DIR_RX;
@@ -992,6 +1002,14 @@ static int check_fmt(struct file *file, enum v4l2_buf_type type) if (is_meta && is_tx && ops->vidioc_g_fmt_meta_out) return 0; break;
case V4L2_BUF_TYPE_AUDIO_CAPTURE:
if (is_audio && is_rx && ops->vidioc_g_fmt_audio_cap)
return 0;
break;
case V4L2_BUF_TYPE_AUDIO_OUTPUT:
if (is_audio && is_tx && ops->vidioc_g_fmt_audio_out)
return 0;
break; default: break; }
@@ -1594,6 +1612,16 @@ static int v4l_enum_fmt(const struct v4l2_ioctl_ops *ops, break; ret = ops->vidioc_enum_fmt_meta_out(file, fh, arg); break;
case V4L2_BUF_TYPE_AUDIO_CAPTURE:
if (unlikely(!ops->vidioc_enum_fmt_audio_cap))
break;
ret = ops->vidioc_enum_fmt_audio_cap(file, fh, arg);
break;
case V4L2_BUF_TYPE_AUDIO_OUTPUT:
if (unlikely(!ops->vidioc_enum_fmt_audio_out))
break;
ret = ops->vidioc_enum_fmt_audio_out(file, fh, arg);
break; } if (ret == 0) v4l_fill_fmtdesc(p);
@@ -1670,6 +1698,10 @@ static int v4l_g_fmt(const struct v4l2_ioctl_ops *ops, return ops->vidioc_g_fmt_meta_cap(file, fh, arg); case V4L2_BUF_TYPE_META_OUTPUT: return ops->vidioc_g_fmt_meta_out(file, fh, arg);
case V4L2_BUF_TYPE_AUDIO_CAPTURE:
return ops->vidioc_g_fmt_audio_cap(file, fh, arg);
case V4L2_BUF_TYPE_AUDIO_OUTPUT:
return ops->vidioc_g_fmt_audio_out(file, fh, arg); } return -EINVAL;
} @@ -1781,6 +1813,16 @@ static int v4l_s_fmt(const struct v4l2_ioctl_ops *ops, break; memset_after(p, 0, fmt.meta); return ops->vidioc_s_fmt_meta_out(file, fh, arg);
case V4L2_BUF_TYPE_AUDIO_CAPTURE:
if (unlikely(!ops->vidioc_s_fmt_audio_cap))
break;
memset_after(p, 0, fmt.audio);
return ops->vidioc_s_fmt_audio_cap(file, fh, arg);
case V4L2_BUF_TYPE_AUDIO_OUTPUT:
if (unlikely(!ops->vidioc_s_fmt_audio_out))
break;
memset_after(p, 0, fmt.audio);
return ops->vidioc_s_fmt_audio_out(file, fh, arg); } return -EINVAL;
} @@ -1889,6 +1931,16 @@ static int v4l_try_fmt(const struct v4l2_ioctl_ops *ops, break; memset_after(p, 0, fmt.meta); return ops->vidioc_try_fmt_meta_out(file, fh, arg);
case V4L2_BUF_TYPE_AUDIO_CAPTURE:
if (unlikely(!ops->vidioc_try_fmt_audio_cap))
break;
memset_after(p, 0, fmt.audio);
return ops->vidioc_try_fmt_audio_cap(file, fh, arg);
case V4L2_BUF_TYPE_AUDIO_OUTPUT:
if (unlikely(!ops->vidioc_try_fmt_audio_out))
break;
memset_after(p, 0, fmt.audio);
return ops->vidioc_try_fmt_audio_out(file, fh, arg); } return -EINVAL;
} diff --git a/include/media/v4l2-dev.h b/include/media/v4l2-dev.h index e0a13505f88d..0924e6d1dab1 100644 --- a/include/media/v4l2-dev.h +++ b/include/media/v4l2-dev.h @@ -30,6 +30,7 @@
- @VFL_TYPE_SUBDEV: for V4L2 subdevices
- @VFL_TYPE_SDR: for Software Defined Radio tuners
- @VFL_TYPE_TOUCH: for touch sensors
- @VFL_TYPE_AUDIO: for audio input/output devices
- @VFL_TYPE_MAX: number of VFL types, must always be last in the enum
*/
enum vfl_devnode_type { @@ -39,6 +40,7 @@ enum vfl_devnode_type { VFL_TYPE_SUBDEV, VFL_TYPE_SDR, VFL_TYPE_TOUCH,
VFL_TYPE_AUDIO, VFL_TYPE_MAX /* Shall be the last one */
};
diff --git a/include/media/v4l2-ioctl.h b/include/media/v4l2-ioctl.h index edb733f21604..f840cf740ce1 100644 --- a/include/media/v4l2-ioctl.h +++ b/include/media/v4l2-ioctl.h @@ -45,6 +45,12 @@ struct v4l2_fh;
- @vidioc_enum_fmt_meta_out: pointer to the function that implements
- :ref:`VIDIOC_ENUM_FMT <vidioc_enum_fmt>` ioctl logic
- for metadata output
- @vidioc_enum_fmt_audio_cap: pointer to the function that implements
- :ref:`VIDIOC_ENUM_FMT <vidioc_enum_fmt>` ioctl logic
- for audio capture
- @vidioc_enum_fmt_audio_out: pointer to the function that implements
- :ref:`VIDIOC_ENUM_FMT <vidioc_enum_fmt>` ioctl logic
- for audio output
- @vidioc_g_fmt_vid_cap: pointer to the function that implements
- :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for video capture
- in single plane mode
@@ -79,6 +85,10 @@ struct v4l2_fh;
- :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for metadata capture
- @vidioc_g_fmt_meta_out: pointer to the function that implements
- :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for metadata output
- @vidioc_g_fmt_audio_cap: pointer to the function that implements
- :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for audio capture
- @vidioc_g_fmt_audio_out: pointer to the function that implements
- :ref:`VIDIOC_G_FMT <vidioc_g_fmt>` ioctl logic for audio output
- @vidioc_s_fmt_vid_cap: pointer to the function that implements
- :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for video capture
- in single plane mode
@@ -113,6 +123,10 @@ struct v4l2_fh;
- :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for metadata capture
- @vidioc_s_fmt_meta_out: pointer to the function that implements
- :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for metadata output
- @vidioc_s_fmt_audio_cap: pointer to the function that implements
- :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for audio capture
- @vidioc_s_fmt_audio_out: pointer to the function that implements
- :ref:`VIDIOC_S_FMT <vidioc_g_fmt>` ioctl logic for audio output
- @vidioc_try_fmt_vid_cap: pointer to the function that implements
- :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for video capture
- in single plane mode
@@ -149,6 +163,10 @@ struct v4l2_fh;
- :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for metadata capture
- @vidioc_try_fmt_meta_out: pointer to the function that implements
- :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for metadata output
- @vidioc_try_fmt_audio_cap: pointer to the function that implements
- :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for audio capture
- @vidioc_try_fmt_audio_out: pointer to the function that implements
- :ref:`VIDIOC_TRY_FMT <vidioc_g_fmt>` ioctl logic for audio output
- @vidioc_reqbufs: pointer to the function that implements
- :ref:`VIDIOC_REQBUFS <vidioc_reqbufs>` ioctl
- @vidioc_querybuf: pointer to the function that implements
@@ -315,6 +333,10 @@ struct v4l2_ioctl_ops { struct v4l2_fmtdesc *f); int (*vidioc_enum_fmt_meta_out)(struct file *file, void *fh, struct v4l2_fmtdesc *f);
int (*vidioc_enum_fmt_audio_cap)(struct file *file, void *fh,
struct v4l2_fmtdesc *f);
int (*vidioc_enum_fmt_audio_out)(struct file *file, void *fh,
struct v4l2_fmtdesc *f); /* VIDIOC_G_FMT handlers */ int (*vidioc_g_fmt_vid_cap)(struct file *file, void *fh,
@@ -345,6 +367,10 @@ struct v4l2_ioctl_ops { struct v4l2_format *f); int (*vidioc_g_fmt_meta_out)(struct file *file, void *fh, struct v4l2_format *f);
int (*vidioc_g_fmt_audio_cap)(struct file *file, void *fh,
struct v4l2_format *f);
int (*vidioc_g_fmt_audio_out)(struct file *file, void *fh,
struct v4l2_format *f); /* VIDIOC_S_FMT handlers */ int (*vidioc_s_fmt_vid_cap)(struct file *file, void *fh,
@@ -375,6 +401,10 @@ struct v4l2_ioctl_ops { struct v4l2_format *f); int (*vidioc_s_fmt_meta_out)(struct file *file, void *fh, struct v4l2_format *f);
int (*vidioc_s_fmt_audio_cap)(struct file *file, void *fh,
struct v4l2_format *f);
int (*vidioc_s_fmt_audio_out)(struct file *file, void *fh,
struct v4l2_format *f); /* VIDIOC_TRY_FMT handlers */ int (*vidioc_try_fmt_vid_cap)(struct file *file, void *fh,
@@ -405,6 +435,10 @@ struct v4l2_ioctl_ops { struct v4l2_format *f); int (*vidioc_try_fmt_meta_out)(struct file *file, void *fh, struct v4l2_format *f);
int (*vidioc_try_fmt_audio_cap)(struct file *file, void *fh,
struct v4l2_format *f);
int (*vidioc_try_fmt_audio_out)(struct file *file, void *fh,
struct v4l2_format *f); /* Buffer handlers */ int (*vidioc_reqbufs)(struct file *file, void *fh,
diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h index 3af6a82d0cad..e5051410928a 100644 --- a/include/uapi/linux/videodev2.h +++ b/include/uapi/linux/videodev2.h @@ -153,6 +153,8 @@ enum v4l2_buf_type { V4L2_BUF_TYPE_SDR_OUTPUT = 12, V4L2_BUF_TYPE_META_CAPTURE = 13, V4L2_BUF_TYPE_META_OUTPUT = 14,
V4L2_BUF_TYPE_AUDIO_CAPTURE = 15,
V4L2_BUF_TYPE_AUDIO_OUTPUT = 16, /* Deprecated, do not use */ V4L2_BUF_TYPE_PRIVATE = 0x80,
}; @@ -169,6 +171,7 @@ enum v4l2_buf_type { || (type) == V4L2_BUF_TYPE_VBI_OUTPUT \ || (type) == V4L2_BUF_TYPE_SLICED_VBI_OUTPUT \ || (type) == V4L2_BUF_TYPE_SDR_OUTPUT \
|| (type) == V4L2_BUF_TYPE_AUDIO_OUTPUT \ || (type) == V4L2_BUF_TYPE_META_OUTPUT)
#define V4L2_TYPE_IS_CAPTURE(type) (!V4L2_TYPE_IS_OUTPUT(type)) @@ -2415,6 +2418,20 @@ struct v4l2_meta_format { __u32 buffersize; } __attribute__ ((packed));
+/**
- struct v4l2_audio_format - audio data format definition
- @rate: sample rate
- @format: sample format
- @channels: channel numbers
- @buffersize: maximum size in bytes required for data
- */
+struct v4l2_audio_format {
__u32 rate;
__u32 format;
What are the values for the rate and format fields? Since they are part of the UAPI, they need to be defined.
The range for sample rate is [5512, 768000]. The format is defined in include/uapi/sound/asound.h, they are SNDRV_PCM_FORMAT_S8, SNDRV_PCM_FORMAT_U8, ...
Where should I put this info?
I see, so those are standard definitions of the sound subsystem. I think we should refer to the right header and/or data types in the kerneldoc comment for the struct. We also need to provide the sphinx documentation for the new device type and extend the description of relevant ioctls (e.g. VIDIOC_S_FMT) that accept the new structs. I.e. the v4l2_format struct used by VIDIOC_S_FMT is documented in
https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/vidioc-g-fmt....
and there is documentation for each of the union members, like v4l2_pix_format_mplane that is commonly used for video data:
https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/pixfmt-v4l2-m...
We'll need a similar one for the new v4l2_audio_format.
Best regards, Tomasz
Best regards Wang shengjiu
Best regards, Tomasz
__u32 channels;
__u32 buffersize;
+} __attribute__ ((packed));
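For illustration, a minimal userspace sketch of filling the proposed v4l2_audio_format, assuming the v2 UAPI exactly as posted here: the format value comes from the SNDRV_PCM_FORMAT_* definitions in <sound/asound.h>, and the rate must fall within the [5512, 768000] range mentioned above (an individual driver such as the i.MX ASRC may restrict both further).

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <sound/asound.h>

/* Configure the source (OUTPUT) side of an audio converter node. */
static int set_audio_out_fmt(int fd)
{
	struct v4l2_format fmt;

	memset(&fmt, 0, sizeof(fmt));
	fmt.type = V4L2_BUF_TYPE_AUDIO_OUTPUT;
	fmt.fmt.audio.rate = 8000;                       /* Hz, within the UAPI range */
	fmt.fmt.audio.format = SNDRV_PCM_FORMAT_S16_LE;  /* from <sound/asound.h> */
	fmt.fmt.audio.channels = 2;
	fmt.fmt.audio.buffersize = 512 * 1024;           /* bytes */

	return ioctl(fd, VIDIOC_S_FMT, &fmt);
}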
/**
- struct v4l2_format - stream data format
- @type: enum v4l2_buf_type; type of the data stream
@@ -2423,6 +2440,7 @@ struct v4l2_meta_format {
- @win: definition of an overlaid image
- @vbi: raw VBI capture or output parameters
- @sliced: sliced VBI capture or output parameters
- @audio: definition of an audio format
- @raw_data: placeholder for future extensions and custom formats
- @fmt: union of @pix, @pix_mp, @win, @vbi, @sliced, @sdr, @meta
and @raw_data
@@ -2437,6 +2455,7 @@ struct v4l2_format { struct v4l2_sliced_vbi_format sliced; /* V4L2_BUF_TYPE_SLICED_VBI_CAPTURE */ struct v4l2_sdr_format sdr; /* V4L2_BUF_TYPE_SDR_CAPTURE */ struct v4l2_meta_format meta; /* V4L2_BUF_TYPE_META_CAPTURE */
struct v4l2_audio_format audio; /* V4L2_BUF_TYPE_AUDIO_CAPTURE */ __u8 raw_data[200]; /* user-defined */ } fmt;
};
2.34.1
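A minimal sketch of how the new buffer types would be driven from userspace, assuming the series as posted (device open and error handling elided): the OUTPUT queue carries the source samples and the CAPTURE queue the converted result, so both queues are set up and started on the same file handle.

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Request buffers on both queues of the converter and start streaming. */
static int start_audio_m2m(int fd)
{
	struct v4l2_requestbuffers req;
	int type;

	memset(&req, 0, sizeof(req));
	req.count = 2;
	req.memory = V4L2_MEMORY_MMAP;

	req.type = V4L2_BUF_TYPE_AUDIO_OUTPUT;
	if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0)
		return -1;

	req.type = V4L2_BUF_TYPE_AUDIO_CAPTURE;
	if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0)
		return -1;

	type = V4L2_BUF_TYPE_AUDIO_OUTPUT;
	if (ioctl(fd, VIDIOC_STREAMON, &type) < 0)
		return -1;

	type = V4L2_BUF_TYPE_AUDIO_CAPTURE;
	return ioctl(fd, VIDIOC_STREAMON, &type);
}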
Implement the ASRC memory-to-memory function using the v4l2 framework; users can access this function through the v4l2 ioctl interface.
The user sends the output and capture buffers to the driver, and the driver stores the converted data in the capture buffer.
This feature can be shared by the ASRC and EASRC drivers.
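Before the driver itself, a hedged sketch of one conversion pass under this model, assuming the series as posted (buffer mmap()ing and error paths omitted): the application queues a filled OUTPUT buffer together with an empty CAPTURE buffer, and bytesused on the dequeued capture buffer reports the converted payload.

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static int convert_once(int fd, unsigned int src_bytes)
{
	struct v4l2_buffer src, dst;

	memset(&src, 0, sizeof(src));
	src.type = V4L2_BUF_TYPE_AUDIO_OUTPUT;
	src.memory = V4L2_MEMORY_MMAP;
	src.index = 0;
	src.bytesused = src_bytes;	/* payload already written to the mmap()ed buffer */
	if (ioctl(fd, VIDIOC_QBUF, &src) < 0)
		return -1;

	memset(&dst, 0, sizeof(dst));
	dst.type = V4L2_BUF_TYPE_AUDIO_CAPTURE;
	dst.memory = V4L2_MEMORY_MMAP;
	dst.index = 0;
	if (ioctl(fd, VIDIOC_QBUF, &dst) < 0)
		return -1;

	/* The driver runs the ASRC job once both buffers are queued. */
	if (ioctl(fd, VIDIOC_DQBUF, &src) < 0)
		return -1;
	if (ioctl(fd, VIDIOC_DQBUF, &dst) < 0)
		return -1;

	return dst.bytesused;	/* bytes of converted audio produced */
}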
Signed-off-by: Shengjiu Wang shengjiu.wang@nxp.com --- drivers/media/platform/nxp/Kconfig | 12 + drivers/media/platform/nxp/Makefile | 1 + drivers/media/platform/nxp/fsl_asrc_m2m.c | 962 ++++++++++++++++++++++ include/sound/fsl_asrc_common.h | 9 + 4 files changed, 984 insertions(+) create mode 100644 drivers/media/platform/nxp/fsl_asrc_m2m.c
diff --git a/drivers/media/platform/nxp/Kconfig b/drivers/media/platform/nxp/Kconfig
index a0ca6b297fb8..359f11fe2a80 100644
--- a/drivers/media/platform/nxp/Kconfig
+++ b/drivers/media/platform/nxp/Kconfig
@@ -56,3 +56,15 @@ config VIDEO_MX2_EMMAPRP
 source "drivers/media/platform/nxp/dw100/Kconfig"
 source "drivers/media/platform/nxp/imx-jpeg/Kconfig"
+
+config VIDEO_FSL_ASRC_M2M
+	tristate "NXP i.MX ASRC M2M support"
+	depends on V4L_MEM2MEM_DRIVERS
+	depends on MEDIA_SUPPORT
+	select VIDEOBUF2_DMA_CONTIG
+	select V4L2_MEM2MEM_DEV
+	help
+	  Say Y if you want to add ASRC M2M support for NXP CPUs.
+	  It is a complement to the ASRC M2P and ASRC P2M features.
+	  This option is only useful for out-of-tree drivers since
+	  in-tree drivers select it automatically.
diff --git a/drivers/media/platform/nxp/Makefile b/drivers/media/platform/nxp/Makefile
index b8e672b75fed..db565e39f7d5 100644
--- a/drivers/media/platform/nxp/Makefile
+++ b/drivers/media/platform/nxp/Makefile
@@ -8,3 +8,4 @@ obj-$(CONFIG_VIDEO_IMX7_CSI) += imx7-media-csi.o
 obj-$(CONFIG_VIDEO_IMX_MIPI_CSIS) += imx-mipi-csis.o
 obj-$(CONFIG_VIDEO_IMX_PXP) += imx-pxp.o
 obj-$(CONFIG_VIDEO_MX2_EMMAPRP) += mx2_emmaprp.o
+obj-$(CONFIG_VIDEO_FSL_ASRC_M2M) += fsl_asrc_m2m.o
diff --git a/drivers/media/platform/nxp/fsl_asrc_m2m.c b/drivers/media/platform/nxp/fsl_asrc_m2m.c
new file mode 100644
index 000000000000..67936915857b
--- /dev/null
+++ b/drivers/media/platform/nxp/fsl_asrc_m2m.c
@@ -0,0 +1,962 @@
+// SPDX-License-Identifier: GPL-2.0
+//
+// Copyright (C) 2014-2016 Freescale Semiconductor, Inc.
+// Copyright (C) 2019-2023 NXP
+//
+// Freescale ASRC Memory to Memory (M2M) driver
+
+#include <linux/dma/imx-dma.h>
+#include <linux/pm_runtime.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-event.h>
+#include <media/v4l2-fh.h>
+#include <media/v4l2-ioctl.h>
+#include <media/v4l2-mem2mem.h>
+#include <media/videobuf2-dma-contig.h>
+#include <sound/dmaengine_pcm.h>
+#include <sound/fsl_asrc_common.h>
+
+#define V4L_CAP OUT
+#define V4L_OUT IN
+
+#define ASRC_xPUT_DMA_CALLBACK(dir) \
+	(((dir) == V4L_OUT) ? fsl_asrc_input_dma_callback \
+	 : fsl_asrc_output_dma_callback)
+
+#define DIR_STR(dir) (dir) == V4L_OUT ?
"out" : "cap" + +#define ASRC_M2M_BUFFER_SIZE (512 * 1024) +#define ASRC_M2M_PERIOD_SIZE (48 * 1024) +#define ASRC_M2M_SG_NUM (20) + +struct fsl_asrc_pair_m2m { + struct fsl_asrc_pair *pair; + struct fsl_asrc_m2m *m2m; + struct v4l2_fh fh; + struct v4l2_ctrl_handler ctrl_handler; +}; + +struct fsl_asrc_m2m { + struct fsl_asrc *asrc; + struct v4l2_device v4l2_dev; + struct v4l2_m2m_dev *m2m_dev; + struct video_device *dec_vdev; + struct mutex mlock; /* v4l2 ioctls serialization */ + struct platform_device *pdev; +}; + +static inline struct fsl_asrc_pair_m2m *fsl_asrc_m2m_fh_to_ctx(struct v4l2_fh *fh) +{ + return container_of(fh, struct fsl_asrc_pair_m2m, fh); +} + +/** + * fsl_asrc_read_last_fifo: read all the remaining data from FIFO + * @pair: Structure pointer of fsl_asrc_pair + * @dma_vaddr: virtual address of capture buffer + * @length: payload length of capture buffer + */ +static void fsl_asrc_read_last_fifo(struct fsl_asrc_pair *pair, void *dma_vaddr, u32 *length) +{ + struct fsl_asrc *asrc = pair->asrc; + enum asrc_pair_index index = pair->index; + u32 i, reg, size, t_size = 0, width; + u32 *reg32 = NULL; + u16 *reg16 = NULL; + u8 *reg24 = NULL; + + width = snd_pcm_format_physical_width(pair->sample_format[V4L_CAP]); + if (width == 32) + reg32 = dma_vaddr + *length; + else if (width == 16) + reg16 = dma_vaddr + *length; + else + reg24 = dma_vaddr + *length; +retry: + size = asrc->get_output_fifo_size(pair); + if (size + *length > ASRC_M2M_BUFFER_SIZE) + goto end; + + for (i = 0; i < size * pair->channels; i++) { + regmap_read(asrc->regmap, asrc->get_fifo_addr(OUT, index), ®); + if (reg32) { + *(reg32) = reg; + reg32++; + } else if (reg16) { + *(reg16) = (u16)reg; + reg16++; + } else { + *reg24++ = (u8)reg; + *reg24++ = (u8)(reg >> 8); + *reg24++ = (u8)(reg >> 16); + } + } + t_size += size; + + /* In case there is data left in FIFO */ + if (size) + goto retry; +end: + /* Update payload length */ + if (reg32) + *length += t_size * pair->channels * 4; + else if (reg16) + *length += t_size * pair->channels * 2; + else + *length += t_size * pair->channels * 3; +} + +static int fsl_asrc_m2m_start_streaming(struct vb2_queue *q, unsigned int count) +{ + struct fsl_asrc_pair_m2m *pair_m2m = vb2_get_drv_priv(q); + struct fsl_asrc_pair *pair = pair_m2m->pair; + struct fsl_asrc_m2m *m2m = pair_m2m->m2m; + struct fsl_asrc *asrc = pair->asrc; + struct device *dev = &m2m->pdev->dev; + struct vb2_v4l2_buffer *buf; + bool request_flag = false; + int ret; + + dev_dbg(dev, "Start streaming pair=%p, %d\n", pair, q->type); + + ret = pm_runtime_get_sync(dev); + if (ret < 0) { + dev_err(dev, "Failed to power up asrc\n"); + goto err_pm_runtime; + } + + /* Request asrc pair/context */ + if (!pair->req_pair) { + /* flag for error handler of this function */ + request_flag = true; + + ret = asrc->request_pair(pair->channels, pair); + if (ret) { + dev_err(dev, "failed to request pair: %d\n", ret); + goto err_request_pair; + } + + ret = asrc->m2m_start_part_one(pair); + if (ret) { + dev_err(dev, "failed to start pair part one: %d\n", ret); + goto err_start_part_one; + } + + pair->req_pair = true; + } + + /* Request dma channels */ + if (V4L2_TYPE_IS_OUTPUT(q->type)) { + pair->dma_chan[V4L_OUT] = asrc->get_dma_channel(pair, IN); + if (!pair->dma_chan[V4L_OUT]) { + dev_err(dev, "[ctx%d] failed to get input DMA channel\n", pair->index); + ret = -EBUSY; + goto err_dma_channel; + } + } else { + pair->dma_chan[V4L_CAP] = asrc->get_dma_channel(pair, OUT); + if (!pair->dma_chan[V4L_CAP]) { + dev_err(dev, "[ctx%d] 
failed to get output DMA channel\n", pair->index); + ret = -EBUSY; + goto err_dma_channel; + } + } + + v4l2_m2m_update_start_streaming_state(pair_m2m->fh.m2m_ctx, q); + + return 0; + +err_dma_channel: + if (request_flag && asrc->m2m_stop_part_one) + asrc->m2m_stop_part_one(pair); +err_start_part_one: + if (request_flag) + asrc->release_pair(pair); +err_request_pair: + pm_runtime_put_sync(dev); +err_pm_runtime: + /* Release buffers */ + if (V4L2_TYPE_IS_OUTPUT(q->type)) { + while ((buf = v4l2_m2m_src_buf_remove(pair_m2m->fh.m2m_ctx))) + v4l2_m2m_buf_done(buf, VB2_BUF_STATE_QUEUED); + } else { + while ((buf = v4l2_m2m_dst_buf_remove(pair_m2m->fh.m2m_ctx))) + v4l2_m2m_buf_done(buf, VB2_BUF_STATE_QUEUED); + } + return ret; +} + +static void fsl_asrc_m2m_stop_streaming(struct vb2_queue *q) +{ + struct fsl_asrc_pair_m2m *pair_m2m = vb2_get_drv_priv(q); + struct fsl_asrc_m2m *m2m = pair_m2m->m2m; + struct fsl_asrc_pair *pair = pair_m2m->pair; + struct fsl_asrc *asrc = pair->asrc; + struct device *dev = &m2m->pdev->dev; + + dev_dbg(dev, "Stop streaming pair=%p, %d\n", pair, q->type); + + v4l2_m2m_update_stop_streaming_state(pair_m2m->fh.m2m_ctx, q); + + /* Stop & release pair/context */ + if (asrc->m2m_stop_part_two) + asrc->m2m_stop_part_two(pair); + + if (pair->req_pair) { + if (asrc->m2m_stop_part_one) + asrc->m2m_stop_part_one(pair); + asrc->release_pair(pair); + pair->req_pair = false; + } + + /* Release dma channel */ + if (V4L2_TYPE_IS_OUTPUT(q->type)) { + if (pair->dma_chan[V4L_OUT]) + dma_release_channel(pair->dma_chan[V4L_OUT]); + } else { + if (pair->dma_chan[V4L_CAP]) + dma_release_channel(pair->dma_chan[V4L_CAP]); + } + + pm_runtime_put_sync(dev); +} + +static int fsl_asrc_m2m_queue_setup(struct vb2_queue *q, + unsigned int *num_buffers, unsigned int *num_planes, + unsigned int sizes[], struct device *alloc_devs[]) +{ + struct fsl_asrc_pair_m2m *pair_m2m = vb2_get_drv_priv(q); + struct fsl_asrc_pair *pair = pair_m2m->pair; + + /* single buffer */ + *num_planes = 1; + + /* + * The capture buffer size depends on output buffer size + * and the convert ratio. + * + * Here just use a fix length for capture and output buffer. + * User need to care about it. + */ + + if (V4L2_TYPE_IS_OUTPUT(q->type)) + sizes[0] = pair->buf_len[V4L_OUT]; + else + sizes[0] = pair->buf_len[V4L_CAP]; + + return 0; +} + +static void fsl_asrc_m2m_buf_queue(struct vb2_buffer *vb) +{ + struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb); + struct fsl_asrc_pair_m2m *pair_m2m = vb2_get_drv_priv(vb->vb2_queue); + + /* queue buffer */ + v4l2_m2m_buf_queue(pair_m2m->fh.m2m_ctx, vbuf); +} + +static const struct vb2_ops fsl_asrc_m2m_qops = { + .wait_prepare = vb2_ops_wait_prepare, + .wait_finish = vb2_ops_wait_finish, + .start_streaming = fsl_asrc_m2m_start_streaming, + .stop_streaming = fsl_asrc_m2m_stop_streaming, + .queue_setup = fsl_asrc_m2m_queue_setup, + .buf_queue = fsl_asrc_m2m_buf_queue, +}; + +/* Init video buffer queue for src and dst. 
*/ +static int fsl_asrc_m2m_queue_init(void *priv, struct vb2_queue *src_vq, + struct vb2_queue *dst_vq) +{ + struct fsl_asrc_pair_m2m *pair_m2m = priv; + struct fsl_asrc_m2m *m2m = pair_m2m->m2m; + int ret; + + src_vq->type = V4L2_BUF_TYPE_AUDIO_OUTPUT; + src_vq->io_modes = VB2_MMAP | VB2_DMABUF; + src_vq->drv_priv = pair_m2m; + src_vq->buf_struct_size = sizeof(struct v4l2_m2m_buffer); + src_vq->ops = &fsl_asrc_m2m_qops; + src_vq->mem_ops = &vb2_dma_contig_memops; + src_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY; + src_vq->lock = &m2m->mlock; + src_vq->dev = &m2m->pdev->dev; + src_vq->min_buffers_needed = 1; + + ret = vb2_queue_init(src_vq); + if (ret) + return ret; + + dst_vq->type = V4L2_BUF_TYPE_AUDIO_CAPTURE; + dst_vq->io_modes = VB2_MMAP | VB2_DMABUF; + dst_vq->drv_priv = pair_m2m; + dst_vq->buf_struct_size = sizeof(struct v4l2_m2m_buffer); + dst_vq->ops = &fsl_asrc_m2m_qops; + dst_vq->mem_ops = &vb2_dma_contig_memops; + dst_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY; + dst_vq->lock = &m2m->mlock; + dst_vq->dev = &m2m->pdev->dev; + dst_vq->min_buffers_needed = 1; + + ret = vb2_queue_init(dst_vq); + return ret; +} + +static int fsl_asrc_m2m_op_s_ctrl(struct v4l2_ctrl *ctrl) +{ + struct fsl_asrc_pair_m2m *pair_m2m = + container_of(ctrl->handler, struct fsl_asrc_pair_m2m, ctrl_handler); + struct fsl_asrc_pair *pair = pair_m2m->pair; + struct fsl_asrc *asrc = pair->asrc; + int ret = 0; + + switch (ctrl->id) { + case V4L2_CID_GAIN: + if (asrc->m2m_set_ratio_mod) + asrc->m2m_set_ratio_mod(pair, ctrl->val); + break; + default: + ret = -EINVAL; + break; + } + + return ret; +} + +static const struct v4l2_ctrl_ops fsl_asrc_m2m_ctrl_ops = { + .s_ctrl = fsl_asrc_m2m_op_s_ctrl, +}; + +/* system callback for open() */ +static int fsl_asrc_m2m_open(struct file *file) +{ + struct fsl_asrc_m2m *m2m = video_drvdata(file); + struct fsl_asrc *asrc = m2m->asrc; + struct video_device *vdev = video_devdata(file); + struct fsl_asrc_pair *pair; + struct fsl_asrc_pair_m2m *pair_m2m; + int ret = 0; + + if (mutex_lock_interruptible(&m2m->mlock)) + return -ERESTARTSYS; + + pair = kzalloc(sizeof(*pair) + asrc->pair_priv_size, GFP_KERNEL); + if (!pair) { + ret = -ENOMEM; + goto err_alloc_pair; + } + + pair_m2m = kzalloc(sizeof(*pair_m2m), GFP_KERNEL); + if (!pair_m2m) { + ret = -ENOMEM; + goto err_alloc_pair_m2m; + } + + pair->private = (void *)pair + sizeof(struct fsl_asrc_pair); + pair->asrc = m2m->asrc; + + pair->buf_len[V4L_OUT] = ASRC_M2M_BUFFER_SIZE; + pair->buf_len[V4L_CAP] = ASRC_M2M_BUFFER_SIZE; + + init_completion(&pair->complete[V4L_OUT]); + init_completion(&pair->complete[V4L_CAP]); + + v4l2_fh_init(&pair_m2m->fh, vdev); + v4l2_fh_add(&pair_m2m->fh); + file->private_data = &pair_m2m->fh; + + pair_m2m->pair = pair; + pair_m2m->m2m = m2m; + /* m2m context init */ + pair_m2m->fh.m2m_ctx = v4l2_m2m_ctx_init(m2m->m2m_dev, pair_m2m, + fsl_asrc_m2m_queue_init); + if (IS_ERR(pair_m2m->fh.m2m_ctx)) { + ret = PTR_ERR(pair_m2m->fh.m2m_ctx); + goto err_ctx_init; + } + + v4l2_ctrl_handler_init(&pair_m2m->ctrl_handler, 2); + + /* use V4L2_CID_GAIN for ratio update control */ + v4l2_ctrl_new_std(&pair_m2m->ctrl_handler, &fsl_asrc_m2m_ctrl_ops, + V4L2_CID_GAIN, + 0xFFFFFFFF80000001, 0x7fffffff, 1, 0); + + if (pair_m2m->ctrl_handler.error) { + ret = pair_m2m->ctrl_handler.error; + v4l2_ctrl_handler_free(&pair_m2m->ctrl_handler); + goto err_ctrl_handler; + } + + pair_m2m->fh.ctrl_handler = &pair_m2m->ctrl_handler; + + mutex_unlock(&m2m->mlock); + + return 0; + +err_ctrl_handler: + 
v4l2_m2m_ctx_release(pair_m2m->fh.m2m_ctx); +err_ctx_init: + v4l2_fh_del(&pair_m2m->fh); + v4l2_fh_exit(&pair_m2m->fh); + kfree(pair_m2m); +err_alloc_pair_m2m: + kfree(pair); +err_alloc_pair: + mutex_unlock(&m2m->mlock); + return ret; +} + +static int fsl_asrc_m2m_release(struct file *file) +{ + struct fsl_asrc_m2m *m2m = video_drvdata(file); + struct fsl_asrc_pair_m2m *pair_m2m = fsl_asrc_m2m_fh_to_ctx(file->private_data); + struct fsl_asrc_pair *pair = pair_m2m->pair; + + mutex_lock(&m2m->mlock); + v4l2_ctrl_handler_free(&pair_m2m->ctrl_handler); + v4l2_m2m_ctx_release(pair_m2m->fh.m2m_ctx); + v4l2_fh_del(&pair_m2m->fh); + v4l2_fh_exit(&pair_m2m->fh); + kfree(pair_m2m); + kfree(pair); + mutex_unlock(&m2m->mlock); + + return 0; +} + +static const struct v4l2_file_operations fsl_asrc_m2m_fops = { + .owner = THIS_MODULE, + .open = fsl_asrc_m2m_open, + .release = fsl_asrc_m2m_release, + .poll = v4l2_m2m_fop_poll, + .unlocked_ioctl = video_ioctl2, + .mmap = v4l2_m2m_fop_mmap, +}; + +static int fsl_asrc_m2m_querycap(struct file *file, void *priv, + struct v4l2_capability *cap) +{ + strscpy(cap->driver, "asrc m2m", sizeof(cap->driver)); + strscpy(cap->card, "asrc m2m", sizeof(cap->card)); + cap->device_caps = V4L2_CAP_AUDIO | V4L2_CAP_STREAMING; + cap->capabilities = cap->device_caps | V4L2_CAP_DEVICE_CAPS; + + return 0; +} + +static int fsl_asrc_m2m_g_fmt_aud_cap(struct file *file, void *fh, + struct v4l2_format *f) +{ + struct fsl_asrc_pair_m2m *pair_m2m = fsl_asrc_m2m_fh_to_ctx(fh); + struct fsl_asrc_pair *pair = pair_m2m->pair; + + f->fmt.audio.channels = pair->channels; + f->fmt.audio.rate = pair->rate[V4L_CAP]; + f->fmt.audio.format = pair->sample_format[V4L_CAP]; + f->fmt.audio.buffersize = pair->buf_len[V4L_CAP]; + + return 0; +} + +static int fsl_asrc_m2m_g_fmt_aud_out(struct file *file, void *fh, + struct v4l2_format *f) +{ + struct fsl_asrc_pair_m2m *pair_m2m = fsl_asrc_m2m_fh_to_ctx(fh); + struct fsl_asrc_pair *pair = pair_m2m->pair; + + f->fmt.audio.channels = pair->channels; + f->fmt.audio.rate = pair->rate[V4L_OUT]; + f->fmt.audio.format = pair->sample_format[V4L_OUT]; + f->fmt.audio.buffersize = pair->buf_len[V4L_OUT]; + + return 0; +} + +/* output for asrc */ +static int fsl_asrc_m2m_s_fmt_aud_cap(struct file *file, void *fh, + struct v4l2_format *f) +{ + struct fsl_asrc_pair_m2m *pair_m2m = fsl_asrc_m2m_fh_to_ctx(fh); + struct fsl_asrc_pair *pair = pair_m2m->pair; + struct fsl_asrc_m2m *m2m = pair_m2m->m2m; + struct fsl_asrc *asrc = pair->asrc; + struct device *dev = &m2m->pdev->dev; + int ret; + + ret = asrc->m2m_check_format(OUT, f->fmt.audio.rate, + f->fmt.audio.channels, + f->fmt.audio.format); + if (ret) + return -EINVAL; + + if (pair->channels > 0 && pair->channels != f->fmt.audio.channels) { + dev_err(dev, "channels don't match for cap and out\n"); + return -EINVAL; + } + + pair->channels = f->fmt.audio.channels; + pair->rate[V4L_CAP] = f->fmt.audio.rate; + pair->sample_format[V4L_CAP] = f->fmt.audio.format; + + /* Get buffer size from user */ + if (f->fmt.audio.buffersize > pair->buf_len[V4L_CAP]) + pair->buf_len[V4L_CAP] = f->fmt.audio.buffersize; + + return 0; +} + +/* input for asrc */ +static int fsl_asrc_m2m_s_fmt_aud_out(struct file *file, void *fh, + struct v4l2_format *f) +{ + struct fsl_asrc_pair_m2m *pair_m2m = fsl_asrc_m2m_fh_to_ctx(fh); + struct fsl_asrc_pair *pair = pair_m2m->pair; + struct fsl_asrc_m2m *m2m = pair_m2m->m2m; + struct fsl_asrc *asrc = pair->asrc; + struct device *dev = &m2m->pdev->dev; + int ret; + + ret = asrc->m2m_check_format(IN, 
+                               f->fmt.audio.rate,
+                               f->fmt.audio.channels,
+                               f->fmt.audio.format);
+        if (ret)
+                return -EINVAL;
+
+        if (pair->channels > 0 && pair->channels != f->fmt.audio.channels) {
+                dev_err(dev, "channels don't match for cap and out\n");
+                return -EINVAL;
+        }
+
+        pair->channels = f->fmt.audio.channels;
+        pair->rate[V4L_OUT] = f->fmt.audio.rate;
+        pair->sample_format[V4L_OUT] = f->fmt.audio.format;
+
+        /* Get buffer size from user */
+        if (f->fmt.audio.buffersize > pair->buf_len[V4L_OUT])
+                pair->buf_len[V4L_OUT] = f->fmt.audio.buffersize;
+
+        return 0;
+}
+
+static int fsl_asrc_m2m_try_fmt_audio_cap(struct file *file, void *fh,
+                                          struct v4l2_format *f)
+{
+        struct fsl_asrc_m2m *m2m = video_drvdata(file);
+        struct fsl_asrc *asrc = m2m->asrc;
+        int ret;
+
+        ret = asrc->m2m_check_format(OUT, f->fmt.audio.rate,
+                                     f->fmt.audio.channels,
+                                     f->fmt.audio.format);
+
+        return ret;
+}
+
+static int fsl_asrc_m2m_try_fmt_audio_out(struct file *file, void *fh,
+                                          struct v4l2_format *f)
+{
+        struct fsl_asrc_m2m *m2m = video_drvdata(file);
+        struct fsl_asrc *asrc = m2m->asrc;
+        int ret;
+
+        ret = asrc->m2m_check_format(IN, f->fmt.audio.rate,
+                                     f->fmt.audio.channels,
+                                     f->fmt.audio.format);
+
+        return ret;
+}
+
+static const struct v4l2_ioctl_ops fsl_asrc_m2m_ioctl_ops = {
+        .vidioc_querycap = fsl_asrc_m2m_querycap,
+
+        .vidioc_g_fmt_audio_cap = fsl_asrc_m2m_g_fmt_aud_cap,
+        .vidioc_g_fmt_audio_out = fsl_asrc_m2m_g_fmt_aud_out,
+
+        .vidioc_s_fmt_audio_cap = fsl_asrc_m2m_s_fmt_aud_cap,
+        .vidioc_s_fmt_audio_out = fsl_asrc_m2m_s_fmt_aud_out,
+
+        .vidioc_try_fmt_audio_cap = fsl_asrc_m2m_try_fmt_audio_cap,
+        .vidioc_try_fmt_audio_out = fsl_asrc_m2m_try_fmt_audio_out,
+
+        .vidioc_qbuf = v4l2_m2m_ioctl_qbuf,
+        .vidioc_dqbuf = v4l2_m2m_ioctl_dqbuf,
+
+        .vidioc_create_bufs = v4l2_m2m_ioctl_create_bufs,
+        .vidioc_prepare_buf = v4l2_m2m_ioctl_prepare_buf,
+        .vidioc_reqbufs = v4l2_m2m_ioctl_reqbufs,
+        .vidioc_querybuf = v4l2_m2m_ioctl_querybuf,
+        .vidioc_streamon = v4l2_m2m_ioctl_streamon,
+        .vidioc_streamoff = v4l2_m2m_ioctl_streamoff,
+};
+
+/* dma complete callback */
+static void fsl_asrc_input_dma_callback(void *data)
+{
+        struct fsl_asrc_pair *pair = (struct fsl_asrc_pair *)data;
+
+        complete(&pair->complete[V4L_OUT]);
+}
+
+/* dma complete callback */
+static void fsl_asrc_output_dma_callback(void *data)
+{
+        struct fsl_asrc_pair *pair = (struct fsl_asrc_pair *)data;
+
+        complete(&pair->complete[V4L_CAP]);
+}
+
+/* config dma channel */
+static int fsl_asrc_dmaconfig(struct fsl_asrc_pair_m2m *pair_m2m,
+                              struct dma_chan *chan,
+                              u32 dma_addr, dma_addr_t buf_addr, u32 buf_len,
+                              int dir, int width)
+{
+        struct fsl_asrc_pair *pair = pair_m2m->pair;
+        struct fsl_asrc *asrc = pair->asrc;
+        struct fsl_asrc_m2m *m2m = pair_m2m->m2m;
+        struct device *dev = &m2m->pdev->dev;
+        struct dma_slave_config slave_config;
+        struct scatterlist sg[ASRC_M2M_SG_NUM];
+        enum dma_slave_buswidth buswidth;
+        unsigned int sg_len, max_period_size;
+        int ret, i;
+
+        switch (width) {
+        case 8:
+                buswidth = DMA_SLAVE_BUSWIDTH_1_BYTE;
+                break;
+        case 16:
+                buswidth = DMA_SLAVE_BUSWIDTH_2_BYTES;
+                break;
+        case 24:
+                buswidth = DMA_SLAVE_BUSWIDTH_3_BYTES;
+                break;
+        case 32:
+                buswidth = DMA_SLAVE_BUSWIDTH_4_BYTES;
+                break;
+        default:
+                dev_err(dev, "invalid word width\n");
+                return -EINVAL;
+        }
+
+        memset(&slave_config, 0, sizeof(slave_config));
+        if (dir == V4L_OUT) {
+                slave_config.direction = DMA_MEM_TO_DEV;
+                slave_config.dst_addr = dma_addr;
+                slave_config.dst_addr_width = buswidth;
+                slave_config.dst_maxburst = asrc->m2m_get_maxburst(IN, pair);
+        } else {
+                slave_config.direction = DMA_DEV_TO_MEM;
+                slave_config.src_addr = dma_addr;
+                slave_config.src_addr_width = buswidth;
+                slave_config.src_maxburst = asrc->m2m_get_maxburst(OUT, pair);
+        }
+
+        ret = dmaengine_slave_config(chan, &slave_config);
+        if (ret) {
+                dev_err(dev, "failed to config dmaengine for %s task: %d\n",
+                        DIR_STR(dir), ret);
+                return -EINVAL;
+        }
+
+        max_period_size = rounddown(ASRC_M2M_PERIOD_SIZE, width * pair->channels / 8);
+        /* scatter gather mode */
+        sg_len = buf_len / max_period_size;
+        if (buf_len % max_period_size)
+                sg_len += 1;
+
+        sg_init_table(sg, sg_len);
+        for (i = 0; i < (sg_len - 1); i++) {
+                sg_dma_address(&sg[i]) = buf_addr + i * max_period_size;
+                sg_dma_len(&sg[i]) = max_period_size;
+        }
+        sg_dma_address(&sg[i]) = buf_addr + i * max_period_size;
+        sg_dma_len(&sg[i]) = buf_len - i * max_period_size;
+
+        pair->desc[dir] = dmaengine_prep_slave_sg(chan, sg, sg_len,
+                                                  slave_config.direction,
+                                                  DMA_PREP_INTERRUPT);
+        if (!pair->desc[dir]) {
+                dev_err(dev, "failed to prepare dmaengine for %s task\n", DIR_STR(dir));
+                return -EINVAL;
+        }
+
+        pair->desc[dir]->callback = ASRC_xPUT_DMA_CALLBACK(dir);
+        pair->desc[dir]->callback_param = pair;
+
+        return 0;
+}
+
+/* main function of converter */
+static void fsl_asrc_m2m_device_run(void *priv)
+{
+        struct fsl_asrc_pair_m2m *pair_m2m = priv;
+        struct fsl_asrc_pair *pair = pair_m2m->pair;
+        struct fsl_asrc_m2m *m2m = pair_m2m->m2m;
+        struct fsl_asrc *asrc = pair->asrc;
+        struct device *dev = &m2m->pdev->dev;
+        enum asrc_pair_index index = pair->index;
+        struct vb2_v4l2_buffer *src_buf, *dst_buf;
+        unsigned int out_buf_len;
+        unsigned int cap_dma_len;
+        unsigned int width;
+        u32 fifo_addr;
+        int ret;
+
+        src_buf = v4l2_m2m_next_src_buf(pair_m2m->fh.m2m_ctx);
+        dst_buf = v4l2_m2m_next_dst_buf(pair_m2m->fh.m2m_ctx);
+
+        width = snd_pcm_format_physical_width(pair->sample_format[V4L_OUT]);
+        fifo_addr = asrc->paddr + asrc->get_fifo_addr(IN, index);
+        out_buf_len = vb2_get_plane_payload(&src_buf->vb2_buf, 0);
+        if (out_buf_len < width * pair->channels / 8 ||
+            out_buf_len > ASRC_M2M_BUFFER_SIZE ||
+            out_buf_len % (width * pair->channels / 8)) {
+                dev_err(dev, "out buffer size is error: [%d]\n", out_buf_len);
+                goto end;
+        }
+
+        /* dma config for output dma channel */
+        ret = fsl_asrc_dmaconfig(pair_m2m,
+                                 pair->dma_chan[V4L_OUT],
+                                 fifo_addr,
+                                 vb2_dma_contig_plane_dma_addr(&src_buf->vb2_buf, 0),
+                                 out_buf_len, V4L_OUT, width);
+        if (ret) {
+                dev_err(dev, "out dma config error\n");
+                goto end;
+        }
+
+        width = snd_pcm_format_physical_width(pair->sample_format[V4L_CAP]);
+        fifo_addr = asrc->paddr + asrc->get_fifo_addr(OUT, index);
+        cap_dma_len = asrc->m2m_calc_out_len(pair, out_buf_len);
+        if (cap_dma_len > 0 && cap_dma_len <= ASRC_M2M_BUFFER_SIZE) {
+                /* dma config for capture dma channel */
+                ret = fsl_asrc_dmaconfig(pair_m2m,
+                                         pair->dma_chan[V4L_CAP],
+                                         fifo_addr,
+                                         vb2_dma_contig_plane_dma_addr(&dst_buf->vb2_buf, 0),
+                                         cap_dma_len, V4L_CAP, width);
+                if (ret) {
+                        dev_err(dev, "cap dma config error\n");
+                        goto end;
+                }
+        } else if (cap_dma_len > ASRC_M2M_BUFFER_SIZE) {
+                dev_err(dev, "cap buffer size error\n");
+                goto end;
+        }
+
+        reinit_completion(&pair->complete[V4L_OUT]);
+        reinit_completion(&pair->complete[V4L_CAP]);
+
+        /* Submit DMA request */
+        dmaengine_submit(pair->desc[V4L_OUT]);
+        dma_async_issue_pending(pair->desc[V4L_OUT]->chan);
+        if (cap_dma_len > 0) {
+                dmaengine_submit(pair->desc[V4L_CAP]);
+                dma_async_issue_pending(pair->desc[V4L_CAP]->chan);
+        }
+
+        asrc->m2m_start_part_two(pair);
+
+        if (!wait_for_completion_interruptible_timeout(&pair->complete[V4L_OUT], 10 * HZ)) {
+                dev_err(dev, "out DMA task timeout\n");
+                goto end;
+        }
+
+        if (cap_dma_len > 0) {
+                if (!wait_for_completion_interruptible_timeout(&pair->complete[V4L_CAP], 10 * HZ)) {
+                        dev_err(dev, "cap DMA task timeout\n");
+                        goto end;
+                }
+        }
+
+        /* read the last words from FIFO */
+        fsl_asrc_read_last_fifo(pair, vb2_plane_vaddr(&dst_buf->vb2_buf, 0), &cap_dma_len);
+        /* update payload length for capture */
+        vb2_set_plane_payload(&dst_buf->vb2_buf, 0, cap_dma_len);
+
+end:
+        src_buf = v4l2_m2m_src_buf_remove(pair_m2m->fh.m2m_ctx);
+        dst_buf = v4l2_m2m_dst_buf_remove(pair_m2m->fh.m2m_ctx);
+
+        v4l2_m2m_buf_done(src_buf, VB2_BUF_STATE_DONE);
+        v4l2_m2m_buf_done(dst_buf, VB2_BUF_STATE_DONE);
+
+        v4l2_m2m_job_finish(m2m->m2m_dev, pair_m2m->fh.m2m_ctx);
+}
+
+static int fsl_asrc_m2m_job_ready(void *priv)
+{
+        struct fsl_asrc_pair_m2m *pair_m2m = priv;
+
+        if (v4l2_m2m_num_src_bufs_ready(pair_m2m->fh.m2m_ctx) > 0 &&
+            v4l2_m2m_num_dst_bufs_ready(pair_m2m->fh.m2m_ctx) > 0) {
+                return 1;
+        }
+
+        return 0;
+}
+
+static const struct v4l2_m2m_ops fsl_asrc_m2m_ops = {
+        .job_ready = fsl_asrc_m2m_job_ready,
+        .device_run = fsl_asrc_m2m_device_run,
+};
+
+static int fsl_asrc_m2m_probe(struct platform_device *pdev)
+{
+        struct fsl_asrc_m2m_pdata *data = pdev->dev.platform_data;
+        struct fsl_asrc *asrc = data->asrc;
+        struct device *dev = &pdev->dev;
+        struct fsl_asrc_m2m *m2m;
+        int ret;
+
+        m2m = devm_kzalloc(dev, sizeof(struct fsl_asrc_m2m), GFP_KERNEL);
+        if (!m2m)
+                return -ENOMEM;
+
+        m2m->asrc = asrc;
+        m2m->pdev = pdev;
+
+        ret = v4l2_device_register(dev, &m2m->v4l2_dev);
+        if (ret) {
+                dev_err(dev, "failed to register v4l2 device\n");
+                goto err_register;
+        }
+
+        m2m->m2m_dev = v4l2_m2m_init(&fsl_asrc_m2m_ops);
+        if (IS_ERR(m2m->m2m_dev)) {
+                dev_err(dev, "failed to register v4l2 device\n");
+                ret = PTR_ERR(m2m->m2m_dev);
+                goto err_m2m;
+        }
+
+        m2m->dec_vdev = video_device_alloc();
+        if (!m2m->dec_vdev) {
+                dev_err(dev, "failed to register v4l2 device\n");
+                ret = -ENOMEM;
+                goto err_vdev_alloc;
+        }
+
+        mutex_init(&m2m->mlock);
+
+        m2m->dec_vdev->fops = &fsl_asrc_m2m_fops;
+        m2m->dec_vdev->ioctl_ops = &fsl_asrc_m2m_ioctl_ops;
+        m2m->dec_vdev->minor = -1;
+        m2m->dec_vdev->release = video_device_release;
+        m2m->dec_vdev->lock = &m2m->mlock; /* lock for ioctl serialization */
+        m2m->dec_vdev->v4l2_dev = &m2m->v4l2_dev;
+        m2m->dec_vdev->vfl_dir = VFL_DIR_M2M;
+        m2m->dec_vdev->device_caps = V4L2_CAP_AUDIO | V4L2_CAP_STREAMING;
+
+        ret = video_register_device(m2m->dec_vdev, VFL_TYPE_AUDIO, -1);
+        if (ret) {
+                dev_err(dev, "failed to register video device\n");
+                goto err_vdev_register;
+        }
+
+        video_set_drvdata(m2m->dec_vdev, m2m);
+        platform_set_drvdata(pdev, m2m);
+        pm_runtime_enable(&pdev->dev);
+
+        return 0;
+
+err_vdev_register:
+        video_device_release(m2m->dec_vdev);
+err_vdev_alloc:
+        v4l2_m2m_release(m2m->m2m_dev);
+err_m2m:
+        v4l2_device_unregister(&m2m->v4l2_dev);
+err_register:
+        return ret;
+}
+
+static void fsl_asrc_m2m_remove(struct platform_device *pdev)
+{
+        struct fsl_asrc_m2m *m2m = platform_get_drvdata(pdev);
+
+        pm_runtime_disable(&pdev->dev);
+        video_unregister_device(m2m->dec_vdev);
+        video_device_release(m2m->dec_vdev);
+        v4l2_m2m_release(m2m->m2m_dev);
+        v4l2_device_unregister(&m2m->v4l2_dev);
+}
+
+/* suspend callback for m2m */
+static int fsl_asrc_m2m_suspend(struct device *dev)
+{
+        struct fsl_asrc_m2m *m2m = dev_get_drvdata(dev);
+        struct fsl_asrc *asrc = m2m->asrc;
+        struct fsl_asrc_pair *pair;
+        unsigned long lock_flags;
+        int i;
+
+        for (i = 0; i < PAIR_CTX_NUM; i++) {
+                spin_lock_irqsave(&asrc->lock, lock_flags);
+                pair = asrc->pair[i];
+                if (!pair || !pair->req_pair) {
+                        spin_unlock_irqrestore(&asrc->lock, lock_flags);
+                        continue;
+                }
+                if (!completion_done(&pair->complete[V4L_OUT])) {
+                        if (pair->dma_chan[V4L_OUT])
+                                dmaengine_terminate_all(pair->dma_chan[V4L_OUT]);
+                        fsl_asrc_input_dma_callback((void *)pair);
+                }
+                if (!completion_done(&pair->complete[V4L_CAP])) {
+                        if (pair->dma_chan[V4L_CAP])
+                                dmaengine_terminate_all(pair->dma_chan[V4L_CAP]);
+                        fsl_asrc_output_dma_callback((void *)pair);
+                }
+
+                if (asrc->m2m_pair_suspend)
+                        asrc->m2m_pair_suspend(pair);
+
+                spin_unlock_irqrestore(&asrc->lock, lock_flags);
+        }
+
+        return 0;
+}
+
+static int fsl_asrc_m2m_resume(struct device *dev)
+{
+        struct fsl_asrc_m2m *m2m = dev_get_drvdata(dev);
+        struct fsl_asrc *asrc = m2m->asrc;
+        struct fsl_asrc_pair *pair;
+        unsigned long lock_flags;
+        int i;
+
+        for (i = 0; i < PAIR_CTX_NUM; i++) {
+                spin_lock_irqsave(&asrc->lock, lock_flags);
+                pair = asrc->pair[i];
+                if (!pair || !pair->req_pair) {
+                        spin_unlock_irqrestore(&asrc->lock, lock_flags);
+                        continue;
+                }
+                if (asrc->m2m_pair_resume)
+                        asrc->m2m_pair_resume(pair);
+
+                spin_unlock_irqrestore(&asrc->lock, lock_flags);
+        }
+
+        return 0;
+}
+
+static const struct dev_pm_ops fsl_asrc_m2m_pm_ops = {
+        SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(fsl_asrc_m2m_suspend,
+                                      fsl_asrc_m2m_resume)
+};
+
+static struct platform_driver fsl_asrc_m2m_driver = {
+        .probe = fsl_asrc_m2m_probe,
+        .remove_new = fsl_asrc_m2m_remove,
+        .driver = {
+                .name = "fsl_asrc_m2m",
+                .pm = &fsl_asrc_m2m_pm_ops,
+        },
+};
+module_platform_driver(fsl_asrc_m2m_driver);
+
+MODULE_DESCRIPTION("Freescale ASRC M2M driver");
+MODULE_LICENSE("GPL");
diff --git a/include/sound/fsl_asrc_common.h b/include/sound/fsl_asrc_common.h
index 00a615735f35..191302711ea6 100644
--- a/include/sound/fsl_asrc_common.h
+++ b/include/sound/fsl_asrc_common.h
@@ -139,6 +139,15 @@ struct fsl_asrc {
         void *private;
 };

+/**
+ * struct fsl_asrc_m2m_pdata - platform data
+ * @asrc: pointer to struct fsl_asrc
+ *
+ */
+struct fsl_asrc_m2m_pdata {
+        struct fsl_asrc *asrc;
+};
+
 #define DRV_NAME "fsl-asrc-dai"
 extern struct snd_soc_component_driver fsl_asrc_component;
On Tue, Jul 25, 2023 at 10:31 AM Shengjiu Wang shengjiu.wang@nxp.com wrote:
Implement the ASRC memory-to-memory function using the v4l2 framework; users can access this function through the v4l2 ioctl interface.
The user sends the output and capture buffers to the driver, and the driver stores the converted data in the capture buffer.
This feature can be shared by the ASRC and EASRC drivers.
Signed-off-by: Shengjiu Wang shengjiu.wang@nxp.com
 drivers/media/platform/nxp/Kconfig        |  12 +
 drivers/media/platform/nxp/Makefile       |   1 +
 drivers/media/platform/nxp/fsl_asrc_m2m.c | 962 ++++++++++++++++++++++
 include/sound/fsl_asrc_common.h           |   9 +
 4 files changed, 984 insertions(+)
 create mode 100644 drivers/media/platform/nxp/fsl_asrc_m2m.c
diff --git a/drivers/media/platform/nxp/Kconfig b/drivers/media/platform/nxp/Kconfig
index a0ca6b297fb8..359f11fe2a80 100644
--- a/drivers/media/platform/nxp/Kconfig
+++ b/drivers/media/platform/nxp/Kconfig
@@ -56,3 +56,15 @@ config VIDEO_MX2_EMMAPRP
 source "drivers/media/platform/nxp/dw100/Kconfig"
 source "drivers/media/platform/nxp/imx-jpeg/Kconfig"
+config VIDEO_FSL_ASRC_M2M
+        tristate "MXP i.MX ASRC M2M support"
s/MXP/NXP
+        depends on V4L_MEM2MEM_DRIVERS
+        depends on MEDIA_SUPPORT
+        select VIDEOBUF2_DMA_CONTIG
+        select V4L2_MEM2MEM_DEV
+        help
+          Say Y if you want to add ASRC M2M support for NXP CPUs.
+          It is a completement for ASRC M2P and ASRC P2M features.
Complement for?
Register the m2m platform device so that users can use the M2M feature.
Signed-off-by: Shengjiu Wang shengjiu.wang@nxp.com
---
 include/sound/fsl_asrc_common.h |  2 ++
 sound/soc/fsl/fsl_asrc.c        | 12 ++++++++++++
 2 files changed, 14 insertions(+)

diff --git a/include/sound/fsl_asrc_common.h b/include/sound/fsl_asrc_common.h
index 191302711ea6..0f3effa42308 100644
--- a/include/sound/fsl_asrc_common.h
+++ b/include/sound/fsl_asrc_common.h
@@ -69,6 +69,7 @@ struct fsl_asrc_pair {
  * @dma_params_rx: DMA parameters for receive channel
  * @dma_params_tx: DMA parameters for transmit channel
  * @pdev: platform device pointer
+ * @m2m_pdev: m2m platform device pointer
  * @regmap: regmap handler
  * @paddr: physical address to the base address of registers
  * @mem_clk: clock source to access register
@@ -102,6 +103,7 @@ struct fsl_asrc {
         struct snd_dmaengine_dai_dma_data dma_params_rx;
         struct snd_dmaengine_dai_dma_data dma_params_tx;
         struct platform_device *pdev;
+        struct platform_device *m2m_pdev;
         struct regmap *regmap;
         unsigned long paddr;
         struct clk *mem_clk;
diff --git a/sound/soc/fsl/fsl_asrc.c b/sound/soc/fsl/fsl_asrc.c
index 30190ccb74e7..0d1dfa30271e 100644
--- a/sound/soc/fsl/fsl_asrc.c
+++ b/sound/soc/fsl/fsl_asrc.c
@@ -1198,6 +1198,7 @@ static int fsl_asrc_runtime_suspend(struct device *dev);
 static int fsl_asrc_probe(struct platform_device *pdev)
 {
         struct device_node *np = pdev->dev.of_node;
+        struct fsl_asrc_m2m_pdata m2m_pdata;
         struct fsl_asrc_priv *asrc_priv;
         struct fsl_asrc *asrc;
         struct resource *res;
@@ -1380,6 +1381,12 @@ static int fsl_asrc_probe(struct platform_device *pdev)
                 goto err_pm_get_sync;
         }

+        m2m_pdata.asrc = asrc;
+        asrc->m2m_pdev = platform_device_register_data(&pdev->dev,
+                                                       "fsl_asrc_m2m",
+                                                       PLATFORM_DEVID_AUTO,
+                                                       &m2m_pdata,
+                                                       sizeof(m2m_pdata));
         return 0;

 err_pm_get_sync:
@@ -1392,6 +1399,11 @@ static int fsl_asrc_probe(struct platform_device *pdev)

 static void fsl_asrc_remove(struct platform_device *pdev)
 {
+        struct fsl_asrc *asrc = dev_get_drvdata(&pdev->dev);
+
+        if (asrc->m2m_pdev && !IS_ERR(asrc->m2m_pdev))
+                platform_device_unregister(asrc->m2m_pdev);
+
         pm_runtime_disable(&pdev->dev);
         if (!pm_runtime_status_suspended(&pdev->dev))
                 fsl_asrc_runtime_suspend(&pdev->dev);
Register the m2m platform device so that users can use the M2M feature.
Signed-off-by: Shengjiu Wang shengjiu.wang@nxp.com
---
 sound/soc/fsl/fsl_easrc.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/sound/soc/fsl/fsl_easrc.c b/sound/soc/fsl/fsl_easrc.c
index b735b24badc2..b5befefa8fbe 100644
--- a/sound/soc/fsl/fsl_easrc.c
+++ b/sound/soc/fsl/fsl_easrc.c
@@ -2074,6 +2074,7 @@ MODULE_DEVICE_TABLE(of, fsl_easrc_dt_ids);
 static int fsl_easrc_probe(struct platform_device *pdev)
 {
         struct fsl_easrc_priv *easrc_priv;
+        struct fsl_asrc_m2m_pdata m2m_pdata;
         struct device *dev = &pdev->dev;
         struct fsl_asrc *easrc;
         struct resource *res;
@@ -2190,11 +2191,23 @@ static int fsl_easrc_probe(struct platform_device *pdev)
                 return ret;
         }

+        m2m_pdata.asrc = easrc;
+        easrc->m2m_pdev = platform_device_register_data(&pdev->dev,
+                                                        "fsl_asrc_m2m",
+                                                        PLATFORM_DEVID_AUTO,
+                                                        &m2m_pdata,
+                                                        sizeof(m2m_pdata));
+
         return 0;
 }

 static void fsl_easrc_remove(struct platform_device *pdev)
 {
+        struct fsl_asrc *easrc = dev_get_drvdata(&pdev->dev);
+
+        if (easrc->m2m_pdev && !IS_ERR(easrc->m2m_pdev))
+                platform_device_unregister(easrc->m2m_pdev);
+
         pm_runtime_disable(&pdev->dev);
 }
Hi all,
On 25/07/2023 08:12, Shengjiu Wang wrote:
Audio signal processing, like video, has a requirement for memory-to-memory operation.
This patch set adds that support to the v4l2 framework, defining the new buffer types V4L2_BUF_TYPE_AUDIO_CAPTURE and V4L2_BUF_TYPE_AUDIO_OUTPUT and a new format, v4l2_audio_format, for the audio use case.
The created audio device is named "/dev/audioX".
It also adds memory-to-memory support for two kinds of i.MX ASRC modules.
Before I spend time on this: are the audio maintainers OK with doing this in V4L2?
I do want to have a clear statement on this as it is not something I can decide.
Regards,
Hans
On Wed, 02 Aug 2023 09:32:37 +0200, Hans Verkuil wrote:
Hi all,
On 25/07/2023 08:12, Shengjiu Wang wrote:
Audio signal processing, like video, has a requirement for memory-to-memory operation.
This patch set adds that support to the v4l2 framework, defining the new buffer types V4L2_BUF_TYPE_AUDIO_CAPTURE and V4L2_BUF_TYPE_AUDIO_OUTPUT and a new format, v4l2_audio_format, for the audio use case.
The created audio device is named "/dev/audioX".
It also adds memory-to-memory support for two kinds of i.MX ASRC modules.
Before I spend time on this: are the audio maintainers OK with doing this in V4L2?
I do want to have a clear statement on this as it is not something I can decide.
Well, I personally don't mind having some audio capability in the v4l2 layer. But the only uncertain thing for now is whether this is a must-have or not.
IIRC, the implementation on the sound driver side was never done just because there was no similar implementation? If so, and if an extension to the v4l2 core layer is needed, shouldn't other possible routes be considered more thoroughly?
thanks,
Takashi
On Wed, Aug 2, 2023 at 7:22 PM Takashi Iwai tiwai@suse.de wrote:
On Wed, 02 Aug 2023 09:32:37 +0200, Hans Verkuil wrote:
Hi all,
On 25/07/2023 08:12, Shengjiu Wang wrote:
Audio signal processing, like video, has a requirement for memory-to-memory operation.
This patch set adds that support to the v4l2 framework, defining the new buffer types V4L2_BUF_TYPE_AUDIO_CAPTURE and V4L2_BUF_TYPE_AUDIO_OUTPUT and a new format, v4l2_audio_format, for the audio use case.
The created audio device is named "/dev/audioX".
It also adds memory-to-memory support for two kinds of i.MX ASRC modules.
Before I spend time on this: are the audio maintainers OK with doing this in V4L2?
I do want to have a clear statement on this as it is not something I can decide.
Well, I personally don't mind having some audio capability in the v4l2 layer. But the only uncertain thing for now is whether this is a must-have or not.
Thanks, I am also not sure about this. I am also confused about why there is no m2m implementation for audio in the kernel. Audio has similar decoder, encoder, and post-processing needs as video.
IIRC, the implementation on the sound driver side was never done just because there was no similar implementation? If so, and if an extension to the v4l2 core layer is needed, shouldn't other possible routes be considered more thoroughly?
Actually, I'd be glad if someone could point me to the other route. I'd like to try it.
The reason I chose to extend v4l2 for this audio usage is that v4l2 looks best suited for this audio m2m implementation. v4l2 is designed for m2m usage. If we needed to implement another 'route', I don't think it could do better than v4l2.
I would appreciate it if someone could share ideas or workable solutions. And please don't ignore my request or my patch.
Best regards Wang shengjiu
On Wed, 02 Aug 2023 14:02:29 +0200, Shengjiu Wang wrote:
On Wed, Aug 2, 2023 at 7:22 PM Takashi Iwai tiwai@suse.de wrote:
On Wed, 02 Aug 2023 09:32:37 +0200, Hans Verkuil wrote:
Hi all,
On 25/07/2023 08:12, Shengjiu Wang wrote:
Audio signal processing, like video, has a requirement for memory-to-memory operation.
This patch set adds that support to the v4l2 framework, defining the new buffer types V4L2_BUF_TYPE_AUDIO_CAPTURE and V4L2_BUF_TYPE_AUDIO_OUTPUT and a new format, v4l2_audio_format, for the audio use case.
The created audio device is named "/dev/audioX".
It also adds memory-to-memory support for two kinds of i.MX ASRC modules.
Before I spend time on this: are the audio maintainers OK with doing this in V4L2?
I do want to have a clear statement on this as it is not something I can decide.
Well, I personally don't mind having some audio capability in the v4l2 layer. But the only uncertain thing for now is whether this is a must-have or not.
Thanks, I am also not sure about this. I am also confused about why there is no m2m implementation for audio in the kernel. Audio has similar decoder, encoder, and post-processing needs as video.
IIRC, the implementation on the sound driver side was never done just because there was no similar implementation? If so, and if an extension to the v4l2 core layer is needed, shouldn't other possible routes be considered more thoroughly?
Actually, I'd be glad if someone could point me to the other route. I'd like to try it.
The reason I chose to extend v4l2 for this audio usage is that v4l2 looks best suited for this audio m2m implementation. v4l2 is designed for m2m usage. If we needed to implement another 'route', I don't think it could do better than v4l2.
I would appreciate it if someone could share ideas or workable solutions. And please don't ignore my request or my patch.
Can you explain your requirements in a bit more detail? At least, a "big picture" showing how your hardware is implemented and what exactly is necessary would be helpful for understanding the problem.
thanks,
Takashi
On Wed, Aug 2, 2023 at 8:08 PM Takashi Iwai tiwai@suse.de wrote:
On Wed, 02 Aug 2023 14:02:29 +0200, Shengjiu Wang wrote:
On Wed, Aug 2, 2023 at 7:22 PM Takashi Iwai tiwai@suse.de wrote:
On Wed, 02 Aug 2023 09:32:37 +0200, Hans Verkuil wrote:
Hi all,
On 25/07/2023 08:12, Shengjiu Wang wrote:
Audio signal processing, like video, has a requirement for memory-to-memory operation.
This patch set adds that support to the v4l2 framework, defining the new buffer types V4L2_BUF_TYPE_AUDIO_CAPTURE and V4L2_BUF_TYPE_AUDIO_OUTPUT and a new format, v4l2_audio_format, for the audio use case.
The created audio device is named "/dev/audioX".
It also adds memory-to-memory support for two kinds of i.MX ASRC modules.
Before I spend time on this: are the audio maintainers OK with doing this in V4L2?
I do want to have a clear statement on this as it is not something I can decide.
Well, I personally don't mind having some audio capability in the v4l2 layer. But the only uncertain thing for now is whether this is a must-have or not.
Thanks, I am also not sure about this. I am also confused about why there is no m2m implementation for audio in the kernel. Audio has similar decoder, encoder, and post-processing needs as video.
IIRC, the implementation on the sound driver side was never done just because there was no similar implementation? If so, and if an extension to the v4l2 core layer is needed, shouldn't other possible routes be considered more thoroughly?
Actually, I'd be glad if someone could point me to the other route. I'd like to try it.
The reason I chose to extend v4l2 for this audio usage is that v4l2 looks best suited for this audio m2m implementation. v4l2 is designed for m2m usage. If we needed to implement another 'route', I don't think it could do better than v4l2.
I would appreciate it if someone could share ideas or workable solutions. And please don't ignore my request or my patch.
Can you explain your requirements in a bit more detail? At least, a "big picture" showing how your hardware is implemented and what exactly is necessary would be helpful for understanding the problem.
We have the hardware IP: ASRC, asynchronous sample rate converter.
Currently the ASRC in ALSA is connected to another I2S device as a sound card. But we'd like the ASRC to be usable by user space directly, so that a user space application can get the output after conversion from the ASRC.
The ASRC can be integrated into a multimedia framework (gstreamer) as a plugin.
best regards wang shengjiu
On Wed, Aug 02, 2023 at 10:41:43PM +0800, Shengjiu Wang wrote:
Currently the ASRC in ALSA is connected to another I2S device as a sound card. But we'd like the ASRC to be usable by user space directly, so that a user space application can get the output after conversion from the ASRC.
That sort of use case would be handled via DPCM at the minute, though persuading it to connect two front ends together might be fun (which is the sort of reason why we want to push digital information down into DAPM and make everything a component).
On Thu, Aug 3, 2023 at 1:28 AM Mark Brown broonie@kernel.org wrote:
On Wed, Aug 02, 2023 at 10:41:43PM +0800, Shengjiu Wang wrote:
Currently the ASRC in ALSA is connected to another I2S device as a sound card. But we'd like the ASRC to be usable by user space directly, so that a user space application can get the output after conversion from the ASRC.
That sort of use case would be handled via DPCM at the minute, though persuading it to connect two front ends together might be fun (which is the sort of reason why we want to push digital information down into DAPM and make everything a component).
Thanks.
The ASRC M2M case needs to run as fast as possible, with no sync clock control. If a sound card were used to handle the ASRC M2M case, the user applications would be aplay/arecord; then we would need to consider xrun issues, buffer timeouts, and synchronization between aplay and arecord, none of which should be concerns for a pure memory-to-memory operation.
DPCM may architect all the audio pieces into components and a sound card, which is good, but for the M2M case it is complicated; I am not sure it is doable.
best regards wang shengjiu
Hi Mark, Takashi
On Thu, Aug 3, 2023 at 9:11 PM Shengjiu Wang shengjiu.wang@gmail.com wrote:
On Thu, Aug 3, 2023 at 1:28 AM Mark Brown broonie@kernel.org wrote:
On Wed, Aug 02, 2023 at 10:41:43PM +0800, Shengjiu Wang wrote:
Currently the ASRC in ALSA is connected to another I2S device as a sound card. But we'd like the ASRC to be usable by user space directly, so that a user space application can get the output after conversion from the ASRC.
That sort of use case would be handled via DPCM at the minute, though persuading it to connect two front ends together might be fun (which is the sort of reason why we want to push digital information down into DAPM and make everything a component).
Thanks.
The ASRC M2M case needs to run as fast as possible, with no sync clock control. If a sound card were used to handle the ASRC M2M case, the user applications would be aplay/arecord; then we would need to consider xrun issues, buffer timeouts, and synchronization between aplay and arecord, none of which should be concerns for a pure memory-to-memory operation.
DPCM may architect all the audio pieces into components and a sound card, which is good, but for the M2M case it is complicated; I am not sure it is doable.
Besides the concerns in the previous mail:
DPCM needs to split the ASRC into two substreams (playback and capture).
But the ASRC needs the sample rate & format of both input and output before it can start conversion.
If the playback substream controls the rate & format of the input and the capture substream controls the rate & format of the output, then one substream needs to get information (DMA buffer address, size, rate, format) from the other, and both substreams have to be started from whichever substream starts last. How to synchronize these two substreams is a problem: one stream can be released while the other doesn't know.
So I don't think it is a good idea to use DPCM for the pure M2M case.
So can I persuade you to consider the V4L2 solution?
Best regards Wang Shengjiu
On Fri, Aug 11, 2023 at 7:05 PM Shengjiu Wang shengjiu.wang@gmail.com wrote:
Hi Mark, Takashi
On Thu, Aug 3, 2023 at 9:11 PM Shengjiu Wang shengjiu.wang@gmail.com wrote:
On Thu, Aug 3, 2023 at 1:28 AM Mark Brown broonie@kernel.org wrote:
On Wed, Aug 02, 2023 at 10:41:43PM +0800, Shengjiu Wang wrote:
Currently the ASRC in ALSA is connected to another I2S device as a sound card. But we'd like the ASRC to be usable by user space directly, so that a user space application can get the output after conversion from the ASRC.
That sort of use case would be handled via DPCM at the minute, though persuading it to connect two front ends together might be fun (which is the sort of reason why we want to push digital information down into DAPM and make everything a component).
Thanks.
The ASRC M2M case needs to run as fast as possible, with no sync clock control. If a sound card were used to handle the ASRC M2M case, the user applications would be aplay/arecord; then we would need to consider xrun issues, buffer timeouts, and synchronization between aplay and arecord, none of which should be concerns for a pure memory-to-memory operation.
DPCM may architect all the audio pieces into components and a sound card, which is good, but for the M2M case it is complicated; I am not sure it is doable.
Besides the concerns in the previous mail:
DPCM needs to split the ASRC into two substreams (playback and capture).
But the ASRC needs the sample rate & format of both input and output before it can start conversion.
If the playback substream controls the rate & format of the input and the capture substream controls the rate & format of the output, then one substream needs to get information (DMA buffer address, size, rate, format) from the other, and both substreams have to be started from whichever substream starts last. How to synchronize these two substreams is a problem: one stream can be released while the other doesn't know.
So I don't think it is a good idea to use DPCM for the pure M2M case.
So can I persuade you to consider the V4L2 solution?
Just a summary:
Basic M2M conversion can work with DPCM; I have tried some workarounds to make it work.
But there are several issues: 1. We need to create sound cards. The ASRC module supports multiple instances, so we would need to create a sound card for each instance.
2. The ASRC is a single entity, but with DPCM we need to separate the input port and output port into a playback substream and a capture substream. Synchronization between the playback and capture substreams is a problem: how do we start and stop them at the same time?
3. How to handle the xrun issue, and pause and resume, which are brought in by ALSA.
So shall we decide to go with the V4L2 solution?
Best regards Wang Shengjiu
On Wed, 23 Aug 2023 16:33:19 +0200, Shengjiu Wang wrote:
On Fri, Aug 11, 2023 at 7:05 PM Shengjiu Wang shengjiu.wang@gmail.com wrote:
Hi Mark, Takashi
On Thu, Aug 3, 2023 at 9:11 PM Shengjiu Wang shengjiu.wang@gmail.com wrote:
On Thu, Aug 3, 2023 at 1:28 AM Mark Brown broonie@kernel.org wrote:
On Wed, Aug 02, 2023 at 10:41:43PM +0800, Shengjiu Wang wrote:
Currently the ASRC in ALSA is connected to another I2S device as a sound card. But we'd like the ASRC to be usable by user space directly, so that a user space application can get the output after conversion from the ASRC.
That sort of use case would be handled via DPCM at the minute, though persuading it to connect two front ends together might be fun (which is the sort of reason why we want to push digital information down into DAPM and make everything a component).
Thanks.
The ASRC M2M case needs to run as fast as possible, with no sync clock control. If a sound card were used to handle the ASRC M2M case, the user applications would be aplay/arecord; then we would need to consider xrun issues, buffer timeouts, and synchronization between aplay and arecord, none of which should be concerns for a pure memory-to-memory operation.
DPCM may architect all the audio pieces into components and a sound card, which is good, but for the M2M case it is complicated; I am not sure it is doable.
Besides the concerns in the previous mail:
DPCM needs to split the ASRC into two substreams (playback and capture).
But the ASRC needs the sample rate & format of both input and output before it can start conversion.
If the playback substream controls the rate & format of the input and the capture substream controls the rate & format of the output, then one substream needs to get information (DMA buffer address, size, rate, format) from the other, and both substreams have to be started from whichever substream starts last. How to synchronize these two substreams is a problem: one stream can be released while the other doesn't know.
So I don't think it is a good idea to use DPCM for the pure M2M case.
So can I persuade you to consider the V4L2 solution?
Just a summary:
Basic M2M conversion can work with DPCM; I have tried some workarounds to make it work.
But there are several issues:
- We need to create sound cards. The ASRC module supports multiple instances, so we
would need to create a sound card for each instance.
Hm, why can't it be multiple PCM instances instead?
- The ASRC is a single entity, but with DPCM we need to separate the input port and
output port into a playback substream and a capture substream. Synchronization between the playback and capture substreams is a problem: how do we start and stop them at the same time?
This could be done by enforcing full duplex and linking both PCM streams, I suppose.
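For reference, this linking is available to user space via alsa-lib's snd_pcm_link(); here is a minimal sketch of the idea (the "hw:ASRC,0" device name is hypothetical):

#include <alsa/asoundlib.h>

/* Open both directions of a hypothetical ASRC PCM and link them so that
 * start/stop/pause on one stream is applied atomically to both. */
int open_linked_pair(snd_pcm_t **play, snd_pcm_t **cap)
{
        int err;

        err = snd_pcm_open(play, "hw:ASRC,0", SND_PCM_STREAM_PLAYBACK, 0);
        if (err < 0)
                return err;
        err = snd_pcm_open(cap, "hw:ASRC,0", SND_PCM_STREAM_CAPTURE, 0);
        if (err < 0)
                return err;
        return snd_pcm_link(*play, *cap);
}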
- How to handle the xrun issue, and pause and resume, which are brought in by ALSA.
Doesn't V4L2 handle the overrun/underrun at all? Also, no resume support?
Pause and resume are optional in the ALSA framework; you don't need to implement them unless you want or need to.
So shall we decide to go with the V4L2 solution?
Honestly speaking, I don't mind much whether it's implemented in V4L2 or not -- at least for the kernel part, we can reorganize / refactor things internally. But the biggest remaining question to me is whether this user-space interface is the most suitable one. Is it well defined, usable, and maintained for audio applications? Or is it meant to be a stop-gap for a specific use case?
thanks,
Takashi
On Thu, Aug 24, 2023 at 07:03:09PM +0200, Takashi Iwai wrote:
Shengjiu Wang wrote:
But there are several issues:
- We need to create sound cards. The ASRC module supports multiple instances, so we
would need to create a sound card for each instance.
Hm, why can't it be multiple PCM instances instead?
I'm having a hard time following this one too.
- The ASRC is a single entity, but with DPCM we need to separate the input port and
output port into a playback substream and a capture substream. Synchronization between the playback and capture substreams is a problem: how do we start and stop them at the same time?
This could be done by enforcing full duplex and linking both PCM streams, I suppose.
So shall we decide to go with the V4L2 solution?
Honestly speaking, I don't mind much whether it's implemented in V4L2 or not -- at least for the kernel part, we can reorganize / refactor things internally. But the biggest remaining question to me is whether this user-space interface is the most suitable one. Is it well defined, usable, and maintained for audio applications? Or is it meant to be a stop-gap for a specific use case?
I'm having a really hard time summoning much enthusiasm for using v4l here, it feels like this is heading down the same bodge route as DPCM but directly as ABI so even harder to fix properly. That said all the ALSA APIs are really intended to be used in real time and this sounds like a non real time application? I don't fully understand what the actual use case is here.
On Fri, Aug 25, 2023 at 4:21 AM Mark Brown broonie@kernel.org wrote:
On Thu, Aug 24, 2023 at 07:03:09PM +0200, Takashi Iwai wrote:
Shengjiu Wang wrote:
But there are several issues:
- We need to create sound cards. The ASRC module supports multiple instances, so we
would need to create a sound card for each instance.
Hm, why can't it be multiple PCM instances instead?
I'm having a hard time following this one too.
- The ASRC is a single entity, but with DPCM we need to separate the input port and
output port into a playback substream and a capture substream. Synchronization between the playback and capture substreams is a problem: how do we start and stop them at the same time?
This could be done by enforcing full duplex and linking both PCM streams, I suppose.
So shall we decide to go with the V4L2 solution?
Honestly speaking, I don't mind much whether it's implemented in V4L2 or not -- at least for the kernel part, we can reorganize / refactor things internally. But the biggest remaining question to me is whether this user-space interface is the most suitable one. Is it well defined, usable, and maintained for audio applications? Or is it meant to be a stop-gap for a specific use case?
I'm having a really hard time summoning much enthusiasm for using v4l here, it feels like this is heading down the same bodge route as DPCM but directly as ABI so even harder to fix properly. That said all the ALSA APIs are really intended to be used in real time and this sounds like a non real time application? I don't fully understand what the actual use case is here.
Thanks for your reply.
This asrc memory-to-memory (memory->asrc->memory) case is a non-real-time use case.
The user fills the input buffer for the asrc module; after conversion, the asrc sends the output buffer back to the user. So it is not a traditional ALSA playback and capture case. I don't think it is good to create a sound card for it; it is not actually a sound card.
It is a specific use case, and there is no precedent in the current kernel. v4l2 memory-to-memory is the closest implementation; v4l2 currently supports video, image, radio, tuner, and touch devices, so it is not complicated to add support for this specific audio case.
Maybe you can go through these patches first, because we had already implemented the "memory -> asrc -> i2s device -> codec" use case in ALSA. Now the "memory->asrc->memory" case needs to reuse the code in the asrc driver, so the first 3 patches refine the code so that it can be shared with the "memory->asrc->memory" driver.
The main change is on the v4l2 side. A /dev/v4l2-audio device will be created, and user applications only use the ioctls of the v4l2 framework.
Best regards Wang Shengjiu
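To make that flow concrete, here is a rough userspace sketch against the RFC's proposed uAPI; the buffer types and the fmt.audio fields come from this patch set and may still change, the device path follows the cover letter, and error handling is omitted:

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

void asrc_configure(int rate_in, int rate_out)
{
        int fd = open("/dev/audio0", O_RDWR);   /* device created by the driver */
        struct v4l2_format f;

        /* Describe the source data on the OUTPUT queue */
        memset(&f, 0, sizeof(f));
        f.type = V4L2_BUF_TYPE_AUDIO_OUTPUT;
        f.fmt.audio.rate = rate_in;
        f.fmt.audio.channels = 2;
        ioctl(fd, VIDIOC_S_FMT, &f);

        /* Describe the converted result on the CAPTURE queue */
        memset(&f, 0, sizeof(f));
        f.type = V4L2_BUF_TYPE_AUDIO_CAPTURE;
        f.fmt.audio.rate = rate_out;
        f.fmt.audio.channels = 2;
        ioctl(fd, VIDIOC_S_FMT, &f);

        /* From here on it is the standard m2m cycle: VIDIOC_REQBUFS,
         * VIDIOC_QBUF on both queues, VIDIOC_STREAMON, VIDIOC_DQBUF. */
}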
On Fri, 25 Aug 2023 05:46:43 +0200, Shengjiu Wang wrote:
On Fri, Aug 25, 2023 at 4:21 AM Mark Brown broonie@kernel.org wrote:
On Thu, Aug 24, 2023 at 07:03:09PM +0200, Takashi Iwai wrote:
Shengjiu Wang wrote:
But there are several issues:
- We need to create sound cards. The ASRC module supports multiple instances, so we
would need to create a sound card for each instance.
Hm, why can't it be multiple PCM instances instead?
I'm having a hard time following this one too.
- The ASRC is a single entity, but with DPCM we need to separate the input port and
output port into a playback substream and a capture substream. Synchronization between the playback and capture substreams is a problem: how do we start and stop them at the same time?
This could be done by enforcing full duplex and linking both PCM streams, I suppose.
So shall we decide to go with the V4L2 solution?
Honestly speaking, I don't mind much whether it's implemented in V4L2 or not -- at least for the kernel part, we can reorganize / refactor things internally. But the biggest remaining question to me is whether this user-space interface is the most suitable one. Is it well defined, usable, and maintained for audio applications? Or is it meant to be a stop-gap for a specific use case?
I'm having a really hard time summoning much enthusiasm for using v4l here, it feels like this is heading down the same bodge route as DPCM but directly as ABI so even harder to fix properly. That said all the ALSA APIs are really intended to be used in real time and this sounds like a non real time application? I don't fully understand what the actual use case is here.
Thanks for your reply.
This asrc memory-to-memory (memory->asrc->memory) case is a non-real-time use case.
The user fills the input buffer for the asrc module; after conversion, the asrc sends the output buffer back to the user. So it is not a traditional ALSA playback and capture case. I don't think it is good to create a sound card for it; it is not actually a sound card.
It is a specific use case, and there is no precedent in the current kernel. v4l2 memory-to-memory is the closest implementation; v4l2 currently supports video, image, radio, tuner, and touch devices, so it is not complicated to add support for this specific audio case.
Maybe you can go through these patches first, because we had already implemented the "memory -> asrc -> i2s device -> codec" use case in ALSA. Now the "memory->asrc->memory" case needs to reuse the code in the asrc driver, so the first 3 patches refine the code so that it can be shared with the "memory->asrc->memory" driver.
The main change is on the v4l2 side. A /dev/v4l2-audio device will be created, and user applications only use the ioctls of the v4l2 framework.
Ah, now I'm slowly understanding. So what you want is an interface to perform batch conversion of a data stream from an input to an output? And the ALSA PCM interface doesn't fully fit that purpose because the data handling is batched and it's not like normal PCM streaming?
Basically, the whole M2M argument is rather subtle. Those are implementation details that can be resolved in several different ways on the kernel side. But the design of the operation is the crucial point.
Maybe we can consider implementing a similar feature in the ALSA API, too. But that's too much of a stretch for now.
So, if the v4l2 interface provides the requested feature (batched audio stream conversion), it's OK to ride on it.
thanks,
Takashi
On 25/08/2023 15:54, Takashi Iwai wrote:
On Fri, 25 Aug 2023 05:46:43 +0200, Shengjiu Wang wrote:
On Fri, Aug 25, 2023 at 4:21 AM Mark Brown broonie@kernel.org wrote:
On Thu, Aug 24, 2023 at 07:03:09PM +0200, Takashi Iwai wrote:
Shengjiu Wang wrote:
But there are several issues:
- We need to create sound cards. The ASRC module supports multiple instances, so we
would need to create a sound card for each instance.
Hm, why can't it be multiple PCM instances instead?
I'm having a hard time following this one too.
- The ASRC is a single entity, but with DPCM we need to separate the input port and
output port into a playback substream and a capture substream. Synchronization between the playback and capture substreams is a problem: how do we start and stop them at the same time?
This could be done by enforcing full duplex and linking both PCM streams, I suppose.
So shall we decide to go with the V4L2 solution?
Honestly speaking, I don't mind much whether it's implemented in V4L2 or not -- at least for the kernel part, we can reorganize / refactor things internally. But the biggest remaining question to me is whether this user-space interface is the most suitable one. Is it well defined, usable, and maintained for audio applications? Or is it meant to be a stop-gap for a specific use case?
I'm having a really hard time summoning much enthusiasm for using v4l here, it feels like this is heading down the same bodge route as DPCM but directly as ABI so even harder to fix properly. That said all the ALSA APIs are really intended to be used in real time and this sounds like a non real time application? I don't fully understand what the actual use case is here.
Thanks for your reply.
This asrc memory-to-memory (memory->asrc->memory) case is a non-real-time use case.
The user fills the input buffer for the asrc module; after conversion, the asrc sends the output buffer back to the user. So it is not a traditional ALSA playback and capture case. I don't think it is good to create a sound card for it; it is not actually a sound card.
It is a specific use case, and there is no precedent in the current kernel. v4l2 memory-to-memory is the closest implementation; v4l2 currently supports video, image, radio, tuner, and touch devices, so it is not complicated to add support for this specific audio case.
Maybe you can go through these patches first, because we had already implemented the "memory -> asrc -> i2s device -> codec" use case in ALSA. Now the "memory->asrc->memory" case needs to reuse the code in the asrc driver, so the first 3 patches refine the code so that it can be shared with the "memory->asrc->memory" driver.
The main change is on the v4l2 side. A /dev/v4l2-audio device will be created, and user applications only use the ioctls of the v4l2 framework.
Ah, now I'm slowly understanding. So what you want is an interface to perform batch conversion of a data stream from an input to an output? And the ALSA PCM interface doesn't fully fit that purpose because the data handling is batched and it's not like normal PCM streaming?
Basically, the whole M2M argument is rather subtle. Those are implementation details that can be resolved in several different ways on the kernel side. But the design of the operation is the crucial point.
Maybe we can consider implementing a similar feature in the ALSA API, too. But that's too much of a stretch for now.
So, if the v4l2 interface provides the requested feature (batched audio stream conversion), it's OK to ride on it.
The V4L2 M2M interface is simple: you open a video device and then you can pass data to the hardware, it processes it and you get the processed data back.
The hardware just processes the data as fast as it can. Each time you open the video device a new instance is created, and each instance can pass jobs to the hardware.
Currently it is used for video scalers, deinterlacers, colorspace converters and codecs, but in the end it is just data in, data out with some job scheduling (fifo) towards the hardware. So supporting audio using the same core m2m framework wouldn't be a big deal. We'd probably make a /dev/v4l-audio device for that.
It doesn't come for free: it is a new API, so besides adding support for it, it also needs to be documented, we would need compliance tests, and very likely I would want a new virtual driver for this (vim2m.c would be a good template).
Regards,
Hans
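For a sense of how small the core of such a virtual driver could be, here is a sketch modeled on vim2m; the context struct is a placeholder, and the rest is the existing v4l2-mem2mem kernel API:

#include <linux/string.h>
#include <media/v4l2-fh.h>
#include <media/v4l2-mem2mem.h>
#include <media/videobuf2-v4l2.h>

struct virt_audio_ctx {                 /* hypothetical per-open context */
        struct v4l2_fh fh;
        struct v4l2_m2m_dev *m2m_dev;
};

/* device_run() for a copy-only virtual audio m2m device */
static void virt_audio_device_run(void *priv)
{
        struct virt_audio_ctx *ctx = priv;
        struct vb2_v4l2_buffer *src, *dst;
        unsigned int len;

        src = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
        dst = v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx);

        /* "Process" the audio by copying the payload unchanged */
        len = vb2_get_plane_payload(&src->vb2_buf, 0);
        memcpy(vb2_plane_vaddr(&dst->vb2_buf, 0),
               vb2_plane_vaddr(&src->vb2_buf, 0), len);
        vb2_set_plane_payload(&dst->vb2_buf, 0, len);

        v4l2_m2m_buf_done(src, VB2_BUF_STATE_DONE);
        v4l2_m2m_buf_done(dst, VB2_BUF_STATE_DONE);
        v4l2_m2m_job_finish(ctx->m2m_dev, ctx->fh.m2m_ctx);
}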
On Fri, Aug 25, 2023 at 10:15 PM Hans Verkuil hverkuil@xs4all.nl wrote:
On 25/08/2023 15:54, Takashi Iwai wrote:
On Fri, 25 Aug 2023 05:46:43 +0200, Shengjiu Wang wrote:
On Fri, Aug 25, 2023 at 4:21 AM Mark Brown broonie@kernel.org wrote:
On Thu, Aug 24, 2023 at 07:03:09PM +0200, Takashi Iwai wrote:
Shengjiu Wang wrote:
But there are several issues:
- We need to create sound cards. The ASRC module supports multiple instances, so we
would need to create a sound card for each instance.
Hm, why can't it be multiple PCM instances instead?
I'm having a hard time following this one too.
- The ASRC is a single entity, but with DPCM we need to separate the input port and
output port into a playback substream and a capture substream. Synchronization between the playback and capture substreams is a problem: how do we start and stop them at the same time?
This could be done by enforcing full duplex and linking both PCM streams, I suppose.
So shall we decide to go with the V4L2 solution?
Honestly speaking, I don't mind much whether it's implemented in V4L2 or not -- at least for the kernel part, we can reorganize / refactor things internally. But the biggest remaining question to me is whether this user-space interface is the most suitable one. Is it well defined, usable, and maintained for audio applications? Or is it meant to be a stop-gap for a specific use case?
I'm having a really hard time summoning much enthusiasm for using v4l here, it feels like this is heading down the same bodge route as DPCM but directly as ABI so even harder to fix properly. That said all the ALSA APIs are really intended to be used in real time and this sounds like a non real time application? I don't fully understand what the actual use case is here.
Thanks for your reply.
This asrc memory-to-memory (memory->asrc->memory) case is a non-real-time use case.
The user fills the input buffer for the asrc module; after conversion, the asrc sends the output buffer back to the user. So it is not a traditional ALSA playback and capture case. I don't think it is good to create a sound card for it; it is not actually a sound card.
It is a specific use case, and there is no precedent in the current kernel. v4l2 memory-to-memory is the closest implementation; v4l2 currently supports video, image, radio, tuner, and touch devices, so it is not complicated to add support for this specific audio case.
Maybe you can go through these patches first, because we had already implemented the "memory -> asrc -> i2s device -> codec" use case in ALSA. Now the "memory->asrc->memory" case needs to reuse the code in the asrc driver, so the first 3 patches refine the code so that it can be shared with the "memory->asrc->memory" driver.
The main change is on the v4l2 side. A /dev/v4l2-audio device will be created, and user applications only use the ioctls of the v4l2 framework.
Ah, now I'm slowly understanding. So what you want is an interface to perform batch conversion of a data stream from an input to an output? And the ALSA PCM interface doesn't fully fit that purpose because the data handling is batched and it's not like normal PCM streaming?
Basically, the whole M2M argument is rather subtle. Those are implementation details that can be resolved in several different ways on the kernel side. But the design of the operation is the crucial point.
Maybe we can consider implementing a similar feature in the ALSA API, too. But that's too much of a stretch for now.
So, if the v4l2 interface provides the requested feature (batched audio stream conversion), it's OK to ride on it.
The V4L2 M2M interface is simple: you open a video device and then you can pass data to the hardware, it processes it and you get the processed data back.
The hardware just processes the data as fast as it can. Each time you open the video device a new instance is created, and each instance can pass jobs to the hardware.
Currently it is used for video scalers, deinterlacers, colorspace converters and codecs, but in the end it is just data in, data out with some job scheduling (fifo) towards the hardware. So supporting audio using the same core m2m framework wouldn't be a big deal. We'd probably make a /dev/v4l-audio device for that.
It doesn't come for free: it is a new API, so besides adding support for it, it also needs to be documented, we would need compliance tests, and very likely I would want a new virtual driver for this (vim2m.c would be a good template).
Thanks all.
I will try to pass the compliance test. Should the virtual driver be added now?
Best regards Wang Shengjiu
Hi,
Le jeudi 24 août 2023 à 19:03 +0200, Takashi Iwai a écrit :
- How to handle the xrun issue, and pause and resume, which are brought in by ALSA.
Doesn't V4L2 handle the overrun/underrun at all? Also, no resume support?
Just a two-cents comment. All our video m2m drivers are job based. When there is no job available they stop, and they resume when there is more work in the queues. As there are no time constraints coming from the hardware, there is also no API to know that there has been a period of time without anything being executed (under-utilization). Only overrun can be detected by the application: each chunk of work is in its own v4l2_buffer, and the application will run out of buffers in that case and will have to wait for free space in the queue. Understand, though, that the larger the queue, the larger the latency. There is currently no way to submit a job ahead of the data (unlike the DRM subsystem).
I have a slight impression that all this seems rather inefficient for high rates / small buffers. I'd suggest creating a dummy benchmark driver to verify that the overhead isn't just too much for an audio use case.
Nicolas
On Wed, Aug 02, 2023 at 08:02:29PM +0800, Shengjiu Wang wrote:
On Wed, Aug 2, 2023 at 7:22 PM Takashi Iwai tiwai@suse.de wrote:
Well, I personally don't mind having some audio capability in the v4l2 layer. But the only uncertain thing for now is whether this is a must-have or not.
Thanks, I am also not sure about this. I am also confused about why there is no m2m implementation for audio in the kernel. Audio has similar decoder, encoder, and post-processing needs as video.
This is the thing where we've been trying to persuade people to work on replacing DPCM with full componentisation for about a decade now but nobody's had time other than Morimoto-san who's been chipping away at making everything component based for a good chunk of that time. One trick is that we don't just want this to work for things that are memory to memory, we also want things where there's a direct interconnect that bypasses memory for off-SoC case.
The reason I chose to extend v4l2 for this audio usage is that v4l2 looks best suited for this audio m2m implementation. v4l2 is designed for m2m usage. If we needed to implement another 'route', I don't think it could do better than v4l2.
I would appreciate it if someone could share ideas or workable solutions. And please don't ignore my request or my patch.
There's a bunch of presentations Lars-Peter did at ELC some considerable time ago about this.
On 02/08/2023 14:02, Shengjiu Wang wrote:
On Wed, Aug 2, 2023 at 7:22 PM Takashi Iwai tiwai@suse.de wrote:
On Wed, 02 Aug 2023 09:32:37 +0200, Hans Verkuil wrote:
Hi all,
On 25/07/2023 08:12, Shengjiu Wang wrote:
Audio signal processing, like video, has a requirement for memory-to-memory operation.
This patch set adds that support to the v4l2 framework, defining the new buffer types V4L2_BUF_TYPE_AUDIO_CAPTURE and V4L2_BUF_TYPE_AUDIO_OUTPUT and a new format, v4l2_audio_format, for the audio use case.
The created audio device is named "/dev/audioX".
It also adds memory-to-memory support for two kinds of i.MX ASRC modules.
Before I spend time on this: are the audio maintainers OK with doing this in V4L2?
I do want to have a clear statement on this as it is not something I can decide.
Well, I personally don't mind having some audio capability in the v4l2 layer. But the only uncertain thing for now is whether this is a must-have or not.
Thanks, I am also not sure about this. I am also confused about why there is no m2m implementation for audio in the kernel. Audio has similar decoder, encoder, and post-processing needs as video.
IIRC, the implementation on the sound driver side was never done just because there was no similar implementation? If so, and if an extension to the v4l2 core layer is needed, shouldn't other possible routes be considered more thoroughly?
Actually, I'd be glad if someone could point me to the other route. I'd like to try it.
The reason I chose to extend v4l2 for this audio usage is that v4l2 looks best suited for this audio m2m implementation. v4l2 is designed for m2m usage. If we needed to implement another 'route', I don't think it could do better than v4l2.
I would appreciate it if someone could share ideas or workable solutions. And please don't ignore my request or my patch.
To give a bit more background: if it is decided to use the v4l API for this (and I have no objection to this from my side since API/framework-wise it is a good fit for this), then there are a number of things that need to be done to get this into the media subsystem:
- documentation for the new uAPI
- add support for this to v4l2-ctl
- add v4l2-compliance tests for the new device
- highly desirable: have a virtual driver (similar to vim2m) that supports this: it could be as simple as just copying input to output. This helps regression testing.
- it might need media controller support as well. TBD.
None of this is particularly complex, but taken all together it is a fair amount of work that also needs a lot of review time from our side.
I want to add one more option to the mix: drivers/media/v4l2-core/v4l2-mem2mem.c is the main m2m framework, but it relies heavily on the videobuf2 framework for the capture and output queues.
The core vb2 implementation in drivers/media/common/videobuf2/videobuf2-core.c is independent of V4L2 and can be used by other subsystems (in our case, it is also used by the DVB API). It is a possibility to create an alsa version of v4l2-mem2mem.c that uses the core vb2 code with an ALSA uAPI on top.
So in drivers/media/common/videobuf2/ you would have a videobuf2-alsa.c besides the already existing videobuf2-v4l2.c and -dvb.c.
Perhaps parts of v4l2-mem2mem.c can be reused as well in that case, but I am not sure if it is worth the effort. I suspect copying it to an alsa-mem2mem.c and adapting it for alsa is easiest if you want to go that way.
Regards,
Hans
On Wed, Aug 2, 2023 at 8:28 PM Hans Verkuil hverkuil@xs4all.nl wrote:
On 02/08/2023 14:02, Shengjiu Wang wrote:
On Wed, Aug 2, 2023 at 7:22 PM Takashi Iwai tiwai@suse.de wrote:
On Wed, 02 Aug 2023 09:32:37 +0200, Hans Verkuil wrote:
Hi all,
On 25/07/2023 08:12, Shengjiu Wang wrote:
Audio signal processing, like video, has a requirement for memory-to-memory operation.
This patch set adds that support to the v4l2 framework, defining the new buffer types V4L2_BUF_TYPE_AUDIO_CAPTURE and V4L2_BUF_TYPE_AUDIO_OUTPUT and a new format, v4l2_audio_format, for the audio use case.
The created audio device is named "/dev/audioX".
It also adds memory-to-memory support for two kinds of i.MX ASRC modules.
Before I spend time on this: are the audio maintainers OK with doing this in V4L2?
I do want to have a clear statement on this as it is not something I can decide.
Well, I personally don't mind having some audio capability in the v4l2 layer. But the only uncertain thing for now is whether this is a must-have or not.
Thanks, I am also not sure about this. I am also confused about why there is no m2m implementation for audio in the kernel. Audio has similar decoder, encoder, and post-processing needs as video.
IIRC, the implementation on the sound driver side was never done just because there was no similar implementation? If so, and if an extension to the v4l2 core layer is needed, shouldn't other possible routes be considered more thoroughly?
Actually, I'd be glad if someone could point me to the other route. I'd like to try it.
The reason I chose to extend v4l2 for this audio usage is that v4l2 looks best suited for this audio m2m implementation. v4l2 is designed for m2m usage. If we needed to implement another 'route', I don't think it could do better than v4l2.
I would appreciate it if someone could share ideas or workable solutions. And please don't ignore my request or my patch.
To give a bit more background: if it is decided to use the v4l API for this (and I have no objection to this from my side since API/framework-wise it is a good fit for this), then there are a number of things that need to be done to get this into the media subsystem:
- documentation for the new uAPI
- add support for this to v4l2-ctl
- add v4l2-compliance tests for the new device
- highly desirable: have a virtual driver (similar to vim2m) that supports this: it could be as simple as just copy input to output. This helps regression testing.
- it might need media controller support as well. TBD.
None of this is particularly complex, but taken all together it is a fair amount of work that also needs a lot of review time from our side.
I want to add one more option to the mix: drivers/media/core/v4l2-mem2mem.c is the main m2m framework, but it relies heavily on the videobuf2 framework for the capture and output queues.
The core vb2 implementation in drivers/media/common/videobuf2/videobuf2-core.c is independent of V4L2 and can be used by other subsystems (in our case, it is also used by the DVB API). It is a possibility to create an alsa version of v4l2-mem2mem.c that uses the core vb2 code with an ALSA uAPI on top.
So in drivers/media/common/videobuf2/ you would have a videobuf2-alsa.c besides the already existing videobuf2-v4l2.c and -dvb.c.
Perhaps parts of v4l2-mem2mem.c can be reused as well in that case, but I am not sure if it is worth the effort. I suspect copying it to an alsa-mem2mem.c and adapting it for alsa is easiest if you want to go that way.
Thanks.
Does this mean that videobuf2-v4l2.c and v4l2-mem2mem.c are dedicated to video devices? If audio wants to use the v4l2 framework, it needs new videobuf2-alsa.c and alsa-mem2mem.c files, but that may duplicate a lot of functionality.
Best regards,
Wang Shengjiu
On 04/08/2023 14:19, Shengjiu Wang wrote:
[...]
Thanks.
Does this mean that videobuf2-v4l2.c and v4l2-mem2mem.c are dedicated to video devices? If audio wants to use the v4l2 framework, it needs new videobuf2-alsa.c and alsa-mem2mem.c files, but that may duplicate a lot of functionality.
videobuf2-v4l2.c sits on top of videobuf2-core.c and provides the V4L2 uAPI for the streaming functionality. If you don't want to use the V4L2 uAPI for this, then you would need a videobuf2-alsa.c that provides a (possibly new) ALSA uAPI. Whether that makes sense is something I cannot decide.
v4l2-mem2mem.c uses videobuf2-v4l2.c, so if you need an ALSA version, then you probably need to create an alsa-mem2mem.c (possibly some functionality can be shared).
It's just a third option, and it can be useful if there is a strong desire to keep the uAPI for this functionality entirely within the ALSA subsystem, but you want to reuse the streaming I/O functionality that the videobuf2 core provides.
If the decision is that it is fine to use the V4L2 uAPI for this type of audio functionality through a /dev/v4l-audioX device, then just ignore this option and use V4L2.
Regards,
Hans
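If the V4L2-uAPI option above is chosen, an editorial sketch of what driving such a device from userspace would look like: the standard V4L2 streaming ioctls, using the V4L2_BUF_TYPE_AUDIO_OUTPUT/_CAPTURE buffer types proposed in this series (not in mainline videodev2.h) and the /dev/v4l-audioX name suggested above; error handling is omitted.

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
	struct v4l2_requestbuffers req;
	int fd = open("/dev/v4l-audio0", O_RDWR);

	/* Request buffers on the output (source) queue; the capture
	 * (result) queue is set up the same way. */
	memset(&req, 0, sizeof(req));
	req.count = 4;
	req.type = V4L2_BUF_TYPE_AUDIO_OUTPUT;	/* proposed in this series */
	req.memory = V4L2_MEMORY_MMAP;
	ioctl(fd, VIDIOC_REQBUFS, &req);

	/* From here on it is the usual m2m cycle: mmap the buffers, then
	 * QBUF + STREAMON on both queues and DQBUF the converted audio,
	 * exactly as with a video codec or scaler m2m device. */
	return 0;
}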
participants (8)
- Daniel Baluta
- Hans Verkuil
- Mark Brown
- Nicolas Dufresne
- Shengjiu Wang
- Shengjiu Wang
- Takashi Iwai
- Tomasz Figa