Converting a 16-bit stereo wav file to a 16-bit mono wav file
 Converting a 16-bit stereo wav file to a 16-bit mono wav file

Hi,
I want to write a function which takes a 16-bit stereo wav file and converts
it to a 16-bit mono wav file.
I use the function below for this. When I compute T=(L+R) instead of
T=(L+R)/2 the output file is OK but the sound is too loud. When I do the
division by 2, the output file is practically only noise.
What is the problem?
Help me please.

Here is the function:

int nRead,i,j,T;
WORD *pTmp,*pSample;
WORD L,R;

 pSample=new WORD[748544];
 pTmp=new WORD[748544/2];

 for(j=0;j<(748544/2);) {
  for(i=0;i<748544;i+=2) {
   L=pSample[i];
   R=pSample[i+1];
   T=(L+R)/2;
   pTmp[j++]=(WORD)T;
  }
 }



Fri, 21 Jun 2002 03:00:00 GMT  
 Converting a 16-bit stereo wav file to a 16-bit mono wav file

Quote:

> Hi,
> I want to write a function which takes a 16-bit stereo wav file and converts
> it to a 16-bit mono wav file.
> I use the function below for this. When I compute T=(L+R) instead of
> T=(L+R)/2 the output file is OK but the sound is too loud. When I do the
> division by 2, the output file is practically only noise.
> What is the problem?
> Help me please.

> Here is the function:

> int nRead,i,j,T;
> WORD *pTmp,*pSample;
> WORD L,R;

>  pSample=new WORD[748544];
>  pTmp=new WORD[748544/2];

>  for(j=0;j<(748544/2);) {
>   for(i=0;i<748544;i+=2) {
>    L=pSample[i];
>    R=pSample[i+1];
>    T=(L+R)/2;
>    pTmp[j++]=(WORD)T;
>   }
>  }

Why do you have two loops? You have the ugly magic number 748544 appearing
5 times in your code. Suppose you want to work with another file: you will
have to change it in every spot. So either #define it or use a const int
variable (in other threads on the group there is a discussion of the
advantages/disadvantages of the two approaches). Furthermore, ANSI C does
not know a type WORD, and I think that's where your troubles come from.

<OFF TOPIC>
I am not sure about this, but I seem to remember that the samples in a wav
file (on intel/windoz) are signed and use 2's complement arithmetic. My
guess is that WORD is #defined as unsigned int, so when you add L and R
it doesn't matter (that's inherent in 2's complement) but when you divide
by two it does weird stuff.
You may want to try something like this:
If on your machine CHAR_BIT is 8 and sizeof int is 2 then replace WORD
by int.
If not try the following:

#define N_SAMPLE (748544/2)

int i;
WORD *pStereo,*pMono;
WORD L,R,T;

pStereo=new WORD[2*N_SAMPLE];
/* put something in the array */

pMono=new WORD[N_SAMPLE];

for(i=0;i<N_SAMPLE;i++){
        L=*(pStereo++);
        R=*(pStereo++);
        T=((int) L + (int) R) >> 1;
        pMono[i]=(WORD) T;
}

Sorry I can't be more specific, but I use dec-alpha machines these days;
for more info ask on a windoz-related newsgroup. Tobias
</OFF TOPIC>


Sat, 22 Jun 2002 03:00:00 GMT  
 Converting a 16-bit stereo wav file to a 16-bit mono wav file

Quote:

> If on your machine CHAR_BIT is 8 and sizeof int is 2 then replace WORD
> by int.

it doesn't have to be exact.  use a short (or int, i guess), and all will
work out.

Quote:
> If not try the following:
> #define N_SAMPLE (748544/2)
> int i;
> WORD *pStereo,*pMono;
> WORD L,R,T;
> pStereo=new WORD[2*N_SAMPLE];
> /* put something in the array */
> pMono=new WORD[N_SAMPLE];

*ahem* no c++-isms allowed, please :)

Quote:
> for(i=0;i<N_SAMPLE;i++){
>    L=*(pStereo++);
>    R=*(pStereo++);
>    T=((int) L + (int) R) >> 1;
>    pMono[i]=(WORD) T;
> }

try:

#include <stdlib.h>      /* malloc, free, size_t */

short *
stereo_to_mono(short *stereo, size_t length)
/*
 * note: length is measured in number of samples in the mono wave.  i.e. it
 * will be equal to the length of the stereo wave divided by two
 *
 */
{
        short *mono;
        size_t s, m;

        mono = malloc(length * sizeof *mono);
        if (! mono)     return NULL;

        for (s = 0, m = 0; m < length; m++, s += 2)
                mono[m] = (stereo[s] / 2) + (stereo[s + 1] / 2);
                /*             L                   R          */

        return mono;

}

it's a bit ugly, using 2 iterators when theoretically only 1 is needed, but
hopefully you can figure out a way around that :).  the blatantly obvious
problem is the fact that instead of adding (L + R) / 2, we're adding (L / 2)
+ (R / 2).  this makes it a tiny bit more inaccurate, which could add a bit
of noise to your final product.  the reason i do this is because otherwise,
it'll overflow, and unfortunately c is useless when it comes to dealing with
overflows.  btw if you're doing this on an x86, you might consider using
assembly, which will allow you to gracefully catch overflows.  anyway,
consider:

- short is 16-bits long.  it can hold from -32768 to 32767.
- L is -24001.  R is -24305.

if we were to do (L + R) / 2, it would overflow ('clip').  it first
calculates (-24001 + -24305).  to human eyes, this is obviously -48306.  but
to the eyes of our poor little short, this is beyond its capacity, and
overflowing a signed number is indeed bad behaviour ('undefined' i believe).

so with my method, this becomes (-24001 / 2) + (-24305 / 2), which is -24152
(assuming integer division truncates toward zero, as it does on most
implementations).  anyway, the 'correct' answer is -24153.  the moral of the
story is: if both numbers are odd, your answer is going to be off by 1,
because of the nature of integer division.

other methods you might consider:
- use longs:  mono = ((long)R + L) / 2;
- use floats: mono = (short)(((float)R + L) / 2.0);

cheers

--
m i k e    b u r r e l l



Sat, 22 Jun 2002 03:00:00 GMT  
 Converting a 16-bit stereo wav file to a 16-bit mono wav file


...

Quote:
>if we were to do (L + R) / 2, it would overflow ('clip').

The problem with this theory is that the OP says that using T = (L + R)
works (it is just too loud), so presumably it is not overflowing. The
only obvious reason why (L + R) would work and (L + R) / 2 would not is
that the latter uses the wrong sort of division (i.e. unsigned instead of
signed). Other operations like addition on 2's complement systems perform
the same operations at the bit level for signed and unsigned calculations,
so they aren't affected by this.

Quote:
> it first
>calculates (-24001 + -24305).  to human eyes, this is obviously -48306.  but
>to the eyes of our poor little short, this is beyond its capacity, and
>overflowing a signed number is indeed bad behaviour ('undefined' i believe).

>so with my method, this becomes (-24001 / 2) + (-24305 / 2), which is -24152

If it is the type of division that is wrong then this isn't going to
fix it. The "fix" is to make sure the code is using the correct datatypes.
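Concretely, the fix can be as small as the loop below (a sketch; mix_down and the parameter names are mine, and it assumes the samples really are signed 16-bit):

```c
#include <stddef.h>

/* mix an interleaved signed 16-bit stereo buffer down to mono;
   nStereo is the total number of stereo samples (L and R together) */
void mix_down(const short *pSample, short *pTmp, size_t nStereo)
{
    size_t i, j;

    /* form the sum in a long so it cannot overflow 16 bits, then halve */
    for (i = 0, j = 0; i + 1 < nStereo; i += 2)
        pTmp[j++] = (short)(((long)pSample[i] + pSample[i + 1]) / 2);
}
```

Called as mix_down(pSample, pTmp, 748544), it fills pTmp with 748544/2 mono samples.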


Sun, 23 Jun 2002 03:00:00 GMT  
 

 

 