I'm playing with the sound driver in Linux. I've got two little
programs, one that records sound and one that plays it back. (I just
write to a file in raw format.) However, I'm a little puzzled about the
best way to let the user specify how long they want to record or
play for. There are two approaches I've seen that might work.
1. Just calculating the size of the data for the time you want to record for.
The method given in the O'Reilly book "Linux Multimedia" by Jeff Tranter
is to allocate a buffer as follows:
unsigned char buffer[LENGTH*RATE*SIZE*CHANNELS/8];
where LENGTH is the number of seconds to record, RATE is the sampling
rate in Hz, SIZE is 8 or 16 for the bits/sample, and CHANNELS is 1 for mono
and 2 for stereo. (The division by 8 converts bits to bytes.)
This doesn't seem to make a whole lot of sense to me. I guess what I
don't understand is how the sound driver fills in the buffer. Does it
fill it in like an array of samples, or does it just treat it as a big
block of memory and write to it in arbitrary-sized chunks? If you're
going to use 16-bit audio, shouldn't you need a "short int" to store
each sample?
I'm making the following assumptions about data types in Linux/gcc:
char - 8-bit
short - 16-bit
int - 32-bit
long - 32-bit
So wouldn't a short be the best thing for 16-bit data?
2. Using the alarm() syscall and a SIGALRM handler to do the timing.
The obvious problem with this is that it isn't real-time, and can be
delayed by the system.
If anybody has some experience with this, I'd love to hear about it.