Hi, all,
I'm trying to develop a system that can determine the direction(s) sound comes from. Eventually it will go into an R/C submarine, on a Gumstix Linux board (see http://www.subcommittee.com for my hobby).
For now I'm developing in air, on Ubuntu. All I want is to record from both the mic channel and the line-in (where another two mics are attached through a small preamp) while emitting a chirp, and then analyze the echoes. The concept has been shown to work; see for instance http://eddiem.com/projects/chirp/chirp.htm
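The analysis I have in mind is plain cross-correlation of each recorded channel against the known chirp: the peak gives the echo's arrival time, and the difference in arrival times between the channels gives the direction. A rough, untested sketch (find_echo is just a name I made up):

/* Rough sketch (untested): slide the chirp template over a recorded
 * channel and return the lag, in samples, where it matches best. */
#include <stddef.h>

size_t find_echo(const short *rec, size_t rec_len,
                 const short *chirp, size_t chirp_len)
{
    size_t lag, i, best_lag = 0;
    double best = -1.0;

    for (lag = 0; lag + chirp_len <= rec_len; lag++) {
        double sum = 0.0;
        for (i = 0; i < chirp_len; i++)
            sum += (double)rec[lag + i] * chirp[i];
        if (sum > best) {
            best = sum;
            best_lag = lag;
        }
    }
    return best_lag;
}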
I am assuming that since all of this goes through the same sound card, the same crystal clocks everything, so samples from mic and line-in are essentially synchronized (I only need them to stay in sync over < 0.1 s).
Now I have read a lot of ALSA tutorials and the like, but I'm lost: most of it is about playback for human ears (nice harmonics, sharing, mixing), none of which applies to my case. I have a nagging feeling OSS would be a better fit, but OSS is apparently on a 60-day licence now.
And I'm a newbie coder, no surprise there ;-)
So, these are my questions:
- Do I need to combine mic and line-in into one device using the config file? Do I get a 3-channel device then? And does going through a plug-in add latency? (See the .asoundrc sketch below for what I've been trying.)
- Can I sync the transmission of a short chirp with the start of the recording on these 3 channels? Is starting them right after each other in the program OK, or can large latency be introduced *after* I call the API? The "real" sync will come from recording the chirp as it leaves the speaker, so this isn't critical, but I'd like to know because it might save a few bytes of useless samples. (See the C sketch at the end of this mail for what I have in mind.)
- Does anybody out there have links, code snippets, or general published info on applying ALSA in robotics?
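On the first question, I've been staring at the "multi" plugin. Is something like this the right direction? Untested, and the card/device names are guesses for my machine; whether mic and line-in even show up as separate PCM devices on one card is exactly what I don't know:

# sketch for ~/.asoundrc (untested; device names guessed)
pcm.mics {
    type multi
    slaves.a.pcm "hw:0,0"       # mic channel (stereo ADC)
    slaves.a.channels 2
    slaves.b.pcm "hw:0,1"       # line-in pair; device number is a guess
    slaves.b.channels 2
    bindings.0.slave a
    bindings.0.channel 0
    bindings.1.slave a
    bindings.1.channel 1
    bindings.2.slave b
    bindings.2.channel 0
    bindings.3.slave b
    bindings.3.channel 1
}

If I read the docs right, that would give a 4-channel capture device named "mics", of which I'd use 3 channels.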
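And on the second question, this is roughly what I was planning to try: open playback and capture, link them with snd_pcm_link() so one trigger starts both, prefill the chirp, then start. Again untested, and the device name and parameters are guesses:

/* sync_sketch.c -- untested sketch: link playback and capture so they
 * start on one trigger.  Build: gcc sync_sketch.c -o sync_sketch -lasound -lm
 */
#include <alsa/asoundlib.h>
#include <math.h>

#define RATE      44100
#define CHIRP_LEN (RATE / 100)          /* 10 ms chirp */

int main(void)
{
    snd_pcm_t *play, *cap;
    short chirp[CHIRP_LEN];
    short silence[RATE / 10] = {0};
    short echo[RATE / 10];              /* capture 100 ms of echoes */
    unsigned i;

    /* 2 kHz -> 10 kHz linear chirp; phase is the integral of the
     * instantaneous frequency: 2*pi*(f0*t + (k/2)*t^2), k = 800 kHz/s */
    for (i = 0; i < CHIRP_LEN; i++) {
        double t = (double)i / RATE;
        chirp[i] = (short)(30000.0 *
                           sin(2.0 * M_PI * (2000.0 * t + 400000.0 * t * t)));
    }

    snd_pcm_open(&play, "default", SND_PCM_STREAM_PLAYBACK, 0);
    snd_pcm_open(&cap,  "default", SND_PCM_STREAM_CAPTURE,  0);
    snd_pcm_set_params(play, SND_PCM_FORMAT_S16_LE,
                       SND_PCM_ACCESS_RW_INTERLEAVED, 1, RATE, 1, 500000);
    snd_pcm_set_params(cap,  SND_PCM_FORMAT_S16_LE,
                       SND_PCM_ACCESS_RW_INTERLEAVED, 1, RATE, 1, 500000);

    /* Link the streams so starting one starts the other on the same
     * trigger (the driver may refuse, hence the check). */
    if (snd_pcm_link(play, cap) < 0)
        fprintf(stderr, "linking not supported on this hardware\n");

    /* Prefill the playback buffer; pad with silence so playback doesn't
     * underrun (and stop the linked capture) before the capture window
     * ends.  Then one start call kicks off both streams. */
    snd_pcm_writei(play, chirp, CHIRP_LEN);
    snd_pcm_writei(play, silence, sizeof(silence) / sizeof(silence[0]));
    snd_pcm_start(cap);

    snd_pcm_readi(cap, echo, sizeof(echo) / sizeof(echo[0]));
    /* ... feed 'echo' into the correlation sketch above ... */

    snd_pcm_close(play);
    snd_pcm_close(cap);
    return 0;
}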
PS: I have read about the el-cheapo hack with multiple sound cards, and while it's a very good idea, I can't use it in the boat later on.
Thanks for any input you might have.
Regards, Ronald van Aalst