Harsha, Priya wrote:
In the trigger function, I see that voice-specific parameters are extracted from substream->runtime->private_data. Can user space somehow fill in private data for the driver, so that the driver can process voice in one way and music in another?
On the Emu10k1 chip, the hardware voices are used both for PCM playback and for wavetable (MIDI) playback. Voice-specific parameters are set for MIDI data; they come from the loaded soundfont, which specifies how the instruments sound. (Here, the term "voice" has no relation to the sound of a human voice.)
As far as PCM playback is concerned, ALSA has no predefined mechanism to differentiate between voice and music streams. However, it would be possible for the driver to define mixer controls that switch between such settings for each substream.
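For illustration, a minimal sketch of such a control in a kernel driver could look like the code below. The struct my_chip, its voice_mode field, and the control name are made up for the example; only the ALSA control API calls (snd_ctl_enum_info, snd_kcontrol_chip, snd_ctl_new1, snd_ctl_add) are the real ones.

  #include <linux/errno.h>
  #include <sound/core.h>
  #include <sound/control.h>

  /* hypothetical per-card state; voice_mode would be read by the
   * driver's prepare/trigger code to choose processing parameters */
  struct my_chip {
          struct snd_card *card;
          unsigned int voice_mode;        /* 0 = music, 1 = voice */
  };

  static int mode_info(struct snd_kcontrol *kcontrol,
                       struct snd_ctl_elem_info *uinfo)
  {
          static const char *const texts[2] = { "Music", "Voice" };

          /* enumerated control: one channel, two items */
          return snd_ctl_enum_info(uinfo, 1, 2, texts);
  }

  static int mode_get(struct snd_kcontrol *kcontrol,
                      struct snd_ctl_elem_value *ucontrol)
  {
          struct my_chip *chip = snd_kcontrol_chip(kcontrol);

          ucontrol->value.enumerated.item[0] = chip->voice_mode;
          return 0;
  }

  static int mode_put(struct snd_kcontrol *kcontrol,
                      struct snd_ctl_elem_value *ucontrol)
  {
          struct my_chip *chip = snd_kcontrol_chip(kcontrol);
          unsigned int mode = ucontrol->value.enumerated.item[0];

          if (mode > 1)
                  return -EINVAL;
          if (chip->voice_mode == mode)
                  return 0;
          chip->voice_mode = mode;
          return 1;               /* value changed */
  }

  static const struct snd_kcontrol_new mode_control = {
          .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
          .name = "PCM Processing Mode",
          .info = mode_info,
          .get = mode_get,
          .put = mode_put,
  };

  /* in the driver's mixer setup:
   * err = snd_ctl_add(chip->card, snd_ctl_new1(&mode_control, chip));
   */

User space could then switch the setting with something like "amixer -c0 sset 'PCM Processing Mode' Voice". To get one control per substream, the driver would register several such controls with different .index values and look up the corresponding substream in the callbacks.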
HTH
Clemens