Wed Jun 10 10:12:54 CEST 2009


i.e. I call the interface with the required arguments and get the
expected result back. Further, the tests need to run in an automated
fashion and produce output in a defined format so the results can be
processed automatically. That output can come either from retrofitting
existing tests or from writing tests specifically within the LSB test
framework.
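
To make that concrete, here is a minimal sketch (my own illustration,
not anything the workgroup has agreed on) of the shape such a test
could take. It uses the alsa-lib "null" PCM plugin so no hardware is
needed, and the exit status (0 = pass, non-zero = fail) is what an
automated harness would consume:

/*
 * open_test.c -- minimal pass/fail interface test sketch.
 * Calls snd_pcm_open() on the alsa-lib "null" plugin and checks
 * for the documented result; exit status 0 = pass, 1 = fail.
 *
 * Build: gcc -o open_test open_test.c -lasound
 */
#include <stdio.h>
#include <alsa/asoundlib.h>

int main(void)
{
    snd_pcm_t *pcm;
    int err = snd_pcm_open(&pcm, "null", SND_PCM_STREAM_PLAYBACK, 0);

    if (err < 0) {
        printf("FAIL: snd_pcm_open(null): %s\n", snd_strerror(err));
        return 1;
    }
    snd_pcm_close(pcm);
    printf("PASS: snd_pcm_open(null)\n");
    return 0;
}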

IMHO it would be best to have tests that are useful to both the project
and the LSB, and in the best case we wouldn't have to start over, as the
existing tests already cover a good part of the interface.

With this as background, a few questions have been raised while
discussing this during LSB workgroup sessions.

1.) Is there interest from the community side to participate in this
effort and to accept patches?
2.) Is it reasonable to expect that the existing tests can be reused in
some way, for example by using a dummy sound device or a sound loopback
device to verify output? (A rough sketch of this idea follows after the
list.)
One requirement for LSB testing is that a test has exactly two possible
outcomes, pass or fail; a recovery or try-again mode is not supported,
so every test must report a definite pass or fail.
3.) Would the community be more comfortable if this effort created new
tests, separate from the existing HW-focused tests?
4.) Is anyone interested in helping with this effort: writing tests and
answering questions about ALSA for those not familiar with the interface
and/or with sound in general?
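
To illustrate what I mean in question 2, here is a rough sketch of a
loopback check. It assumes the snd-aloop module is loaded so that
"hw:Loopback,0,0" (playback) and "hw:Loopback,1,0" (capture) form a
cable; the device names, parameters, and the naive write/drain/read/scan
flow are my own illustration, and a real test would need more care
(start synchronization, xrun handling):

/*
 * looptest.c -- rough loopback-verification sketch, not an agreed
 * LSB test.  Requires "modprobe snd-aloop".  Exit: 0 = pass, 1 = fail.
 *
 * Build: gcc -o looptest looptest.c -lasound
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <alsa/asoundlib.h>

#define RATE     48000
#define CHANNELS 2
#define FRAMES   1024              /* length of the test pattern   */
#define WINDOW   RATE              /* capture ~1 s to find pattern */

static short out[FRAMES * CHANNELS];
static short in[WINDOW * CHANNELS];

static int fail(const char *what, int err)
{
    printf("FAIL: %s (%s)\n", what, snd_strerror(err));
    return 1;
}

int main(void)
{
    snd_pcm_t *play, *cap;
    int err;
    long i;

    /* Deterministic, never-zero test pattern so it cannot be
     * mistaken for the silence captured before playback starts. */
    for (i = 0; i < FRAMES * CHANNELS; i++)
        out[i] = (short)(1 + (i & 0x3fff));

    if ((err = snd_pcm_open(&cap, "hw:Loopback,1,0",
                            SND_PCM_STREAM_CAPTURE, 0)) < 0)
        return fail("open capture", err);
    if ((err = snd_pcm_open(&play, "hw:Loopback,0,0",
                            SND_PCM_STREAM_PLAYBACK, 0)) < 0)
        return fail("open playback", err);

    /* Identical parameters on both ends of the cable. */
    if ((err = snd_pcm_set_params(play, SND_PCM_FORMAT_S16,
                                  SND_PCM_ACCESS_RW_INTERLEAVED,
                                  CHANNELS, RATE, 1, 500000)) < 0 ||
        (err = snd_pcm_set_params(cap, SND_PCM_FORMAT_S16,
                                  SND_PCM_ACCESS_RW_INTERLEAVED,
                                  CHANNELS, RATE, 1, 500000)) < 0)
        return fail("set_params", err);

    /* Start capturing first so the played pattern is not lost. */
    if ((err = snd_pcm_start(cap)) < 0)
        return fail("start capture", err);

    if (snd_pcm_writei(play, out, FRAMES) != FRAMES)
        return fail("writei", -EIO);
    snd_pcm_drain(play);           /* play the buffered pattern out */

    if (snd_pcm_readi(cap, in, WINDOW) != WINDOW)
        return fail("readi", -EIO);

    /* The capture window starts with silence recorded before
     * playback began, so scan for the pattern instead of comparing
     * from offset zero. */
    for (i = 0; i + FRAMES * CHANNELS <= (long)WINDOW * CHANNELS; i++) {
        if (in[i] == out[0] && memcmp(&in[i], out, sizeof(out)) == 0) {
            printf("PASS\n");
            return 0;
        }
    }
    printf("FAIL: pattern not found in captured data\n");
    return 1;
}

Whether a timing-sensitive test like this can be made to pass or fail
deterministically enough for the LSB framework is exactly what I am
asking in question 2.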

Comments, thoughts, etc. are much appreciated.

Thanks,
Robert

-- 
Robert Schweikert                           MAY THE SOURCE BE WITH YOU
Software Engineer Consultant                          LINUX
rschweikert at novell.com 
781-464-8147

Novell
Making IT Work As One


