New Wave Types

 


The necessary type, structure, and constant definitions are in MMREG.H.

All newly defined wave types must contain both a fact chunk and an extended wave format description within the 'fmt' chunk. RIFF WAVE files of type WAVE_FORMAT_PCM need not have the fact chunk nor the extended wave format description.

Fact Chunk

This chunk stores file-dependent information about the contents of the WAVE file. It currently specifies the length of the file in samples.

WAVEFORMATEX

The extended wave format structure is used to define all non-PCM format wave data, and is described as follows in the include file MMREG.H:

/* general extended waveform format structure */
/* use this for all non-PCM formats */
/* (information common to all formats) */
typedef struct waveformat_extended_tag {
    WORD    wFormatTag;        /* format type */
    WORD    nChannels;         /* number of channels (i.e. mono, stereo...) */
    DWORD   nSamplesPerSec;    /* sample rate */
    DWORD   nAvgBytesPerSec;   /* for buffer estimation */
    WORD    nBlockAlign;       /* block size of data */
    WORD    wBitsPerSample;    /* number of bits per sample of mono data */
    WORD    cbSize;            /* the count in bytes of the extra size */
} WAVEFORMATEX;

wFormatTag  Defines the type of WAVE file.
nChannels  Number of channels in the wave: 1 for mono, 2 for stereo.
nSamplesPerSec  Sample rate of the WAVE file. This should be 11025, 22050, or 44100. Other sample rates are allowed, but not encouraged. This rate is also used with the sample-length entry in the fact chunk to determine the duration of the data.
nAvgBytesPerSec  Average data rate. Playback software can estimate the buffer size using the <nAvgBytesPerSec> value.
nBlockAlign  The block alignment (in bytes) of the data in <data-ck>. Playback software needs to process a multiple of <nBlockAlign> bytes of data at a time, so the value of <nBlockAlign> can be used for buffer alignment.
wBitsPerSample  The number of bits per sample per channel. Each channel is assumed to have the same sample resolution. If this field is not needed, it should be set to zero.
cbSize  The size, in bytes, of the extra information in the WAVE format header, not including the size of the WAVEFORMATEX structure. For example, in the IMA ADPCM format cbSize is calculated as sizeof(IMAADPCMWAVEFORMAT) - sizeof(WAVEFORMATEX), which yields two.
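As a quick illustration of the fields above, the sketch below fills in a header in C. The WORD/DWORD typedefs and the packed 18-byte layout are stand-ins for the real MMREG.H definitions, and WAVE_FORMAT_PCM is used only because its derived fields (nBlockAlign, nAvgBytesPerSec) are simple to compute; treat this as a sketch, not the spec's own code.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Minimal stand-ins for the Windows types used by MMREG.H (assumption). */
typedef uint16_t WORD;
typedef uint32_t DWORD;

#pragma pack(push, 1)
typedef struct waveformat_extended_tag {
    WORD  wFormatTag;        /* format type */
    WORD  nChannels;         /* number of channels */
    DWORD nSamplesPerSec;    /* sample rate */
    DWORD nAvgBytesPerSec;   /* for buffer estimation */
    WORD  nBlockAlign;       /* block size of data */
    WORD  wBitsPerSample;    /* bits per sample of mono data */
    WORD  cbSize;            /* count in bytes of the extra size */
} WAVEFORMATEX;
#pragma pack(pop)

/* Fill a header for 8-bit mono data at 11025 Hz. For a simple uncompressed
 * layout, nBlockAlign = nChannels * bytes-per-sample and
 * nAvgBytesPerSec = nSamplesPerSec * nBlockAlign. */
static void fill_header(WAVEFORMATEX *wfx)
{
    memset(wfx, 0, sizeof *wfx);
    wfx->wFormatTag      = 0x0001;   /* WAVE_FORMAT_PCM */
    wfx->nChannels       = 1;
    wfx->nSamplesPerSec  = 11025;
    wfx->wBitsPerSample  = 8;
    wfx->nBlockAlign     = wfx->nChannels * (wfx->wBitsPerSample / 8);
    wfx->nAvgBytesPerSec = wfx->nSamplesPerSec * wfx->nBlockAlign;
    wfx->cbSize          = 0;        /* no extra bytes for PCM */
}
```

Note that the packed structure is 18 bytes, which is why cbSize for IMA ADPCM above comes out as sizeof(IMAADPCMWAVEFORMAT) - 18 = 2.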

 

Defined wFormatTags

Wave Form                                Registration No. (hex)
#define WAVE_FORMAT_G723_ADPCM      0x0014  /* Antex Electronics Corporation */
#define WAVE_FORMAT_ANTEX_ADPCME    0x0033  /* Antex Electronics Corporation */
#define WAVE_FORMAT_G721_ADPCM      0x0040  /* Antex Electronics Corporation */
#define WAVE_FORMAT_APTX            0x0025  /* Audio Processing Technology */
#define WAVE_FORMAT_AUDIOFILE_AF36  0x0024  /* AudioFile, Inc. */
#define WAVE_FORMAT_AUDIOFILE_AF10  0x0026  /* AudioFile, Inc. */
#define WAVE_FORMAT_CONTROL_RES_VQLPC 0x0034 /* Control Resources Limited */
#define WAVE_FORMAT_CONTROL_RES_CR10  0x0037 /* Control Resources Limited */
#define WAVE_FORMAT_CREATIVE_ADPCM  0x0200  /* Creative Labs, Inc. */
#define WAVE_FORMAT_DOLBY_AC2       0x0030  /* Dolby Laboratories */
#define WAVE_FORMAT_DSPGROUP_TRUESPEECH 0x0022 /* DSP Group, Inc. */
#define WAVE_FORMAT_DIGISTD         0x0015  /* DSP Solutions, Inc. */
#define WAVE_FORMAT_DIGIFIX         0x0016  /* DSP Solutions, Inc. */
#define WAVE_FORMAT_DIGIREAL        0x0035  /* DSP Solutions, Inc. */
#define WAVE_FORMAT_DIGIADPCM       0x0036  /* DSP Solutions, Inc. */
#define WAVE_FORMAT_ECHOSC1         0x0023  /* Echo Speech Corporation */
#define WAVE_FORMAT_FM_TOWNS_SND    0x0300  /* Fujitsu Corp. */
#define WAVE_FORMAT_IBM_CVSD        0x0005  /* IBM Corporation */
#define WAVE_FORMAT_OLIGSM          0x1000  /* Ing C. Olivetti & C., S.p.A. */
#define WAVE_FORMAT_OLIADPCM        0x1001  /* Ing C. Olivetti & C., S.p.A. */
#define WAVE_FORMAT_OLICELP         0x1002  /* Ing C. Olivetti & C., S.p.A. */
#define WAVE_FORMAT_OLISBC          0x1003  /* Ing C. Olivetti & C., S.p.A. */
#define WAVE_FORMAT_OLIOPR          0x1004  /* Ing C. Olivetti & C., S.p.A. */
#define WAVE_FORMAT_IMA_ADPCM       (WAVE_FORMAT_DVI_ADPCM) /* Intel Corporation */
#define WAVE_FORMAT_DVI_ADPCM       0x0011  /* Intel Corporation */
#define WAVE_FORMAT_UNKNOWN         0x0000  /* Microsoft Corporation */
#define WAVE_FORMAT_PCM             0x0001  /* Microsoft Corporation */
#define WAVE_FORMAT_ADPCM           0x0002  /* Microsoft Corporation */
#define WAVE_FORMAT_ALAW            0x0006  /* Microsoft Corporation */
#define WAVE_FORMAT_MULAW           0x0007  /* Microsoft Corporation */
#define WAVE_FORMAT_GSM610          0x0031  /* Microsoft Corporation */
#define WAVE_FORMAT_MPEG            0x0050  /* Microsoft Corporation */
#define WAVE_FORMAT_NMS_VBXADPCM    0x0038  /* Natural MicroSystems */
#define WAVE_FORMAT_OKI_ADPCM       0x0010  /* OKI */
#define WAVE_FORMAT_SIERRA_ADPCM    0x0013  /* Sierra Semiconductor Corp. */
#define WAVE_FORMAT_SONARC          0x0021  /* Speech Compression */
#define WAVE_FORMAT_MEDIASPACE_ADPCM 0x0012 /* VideoLogic */
#define WAVE_FORMAT_YAMAHA_ADPCM    0x0020  /* Yamaha Corporation of America */

Unknown Wave Type

Added: 05/01/92
Author: Microsoft

Fact Chunk

This chunk is required for all WAVE formats other than WAVE_FORMAT_PCM. It stores file-dependent information about the contents of the WAVE data. It currently specifies the length of the data in samples.

Wave Format Header

Changed as of September 5, 1993: this wave format will not be defined. For development purposes, do not use 0x0000; instead, use 0xFFFF until an ID has been obtained.

#define WAVE_FORMAT_UNKNOWN (0x0000)

wFormatTag  This must be set to WAVE_FORMAT_UNKNOWN.
nChannels  Number of channels in the wave (1 for mono).
nSamplesPerSec  Sample rate of the WAVE file.
nAvgBytesPerSec  Average data rate. Playback software can estimate the buffer size using the <nAvgBytesPerSec> value.
nBlockAlign  Block alignment of the data. Playback software needs to process a multiple of <nBlockAlign> bytes of data at a time, so the value of <nBlockAlign> can be used for buffer alignment.
wBitsPerSample  The number of bits per sample of data.
cbSize  The size, in bytes, of the extra information in the extended wave 'fmt' header.

Microsoft ADPCM

Added: 05/01/92
Author: Microsoft

Fact Chunk

This chunk is required for all WAVE formats other than WAVE_FORMAT_PCM. It stores file-dependent information about the contents of the WAVE data. It currently specifies the length of the data in samples.

Wave Format Header

#define WAVE_FORMAT_ADPCM (0x0002)

typedef struct adpcmcoef_tag {
    int  iCoef1;
    int  iCoef2;
} ADPCMCOEFSET;

 

typedef struct adpcmwaveformat_tag {
    WAVEFORMATEX  wfx;
    WORD          wSamplesPerBlock;
    WORD          wNumCoef;
    ADPCMCOEFSET  aCoef[wNumCoef];
} ADPCMWAVEFORMAT;

wFormatTag  This must be set to WAVE_FORMAT_ADPCM.
nChannels  Number of channels in the wave: 1 for mono, 2 for stereo.
nSamplesPerSec  Sample rate of the WAVE file. This should be 11025, 22050, or 44100. Other sample rates are allowed, but not encouraged.
nAvgBytesPerSec  Average data rate: (nSamplesPerSec / nSamplesPerBlock) * nBlockAlign. Playback software can estimate the buffer size using the <nAvgBytesPerSec> value.
nBlockAlign  The block alignment (in bytes) of the data in <data-ck>.

    nSamplesPerSec x nChannels   nBlockAlign
    8k                           256
    11k                          256
    22k                          512
    44k                          1024

    Playback software needs to process a multiple of <nBlockAlign> bytes of data at a time, so the value of <nBlockAlign> can be used for buffer alignment.
wBitsPerSample  The number of bits per sample of ADPCM data. Currently only 4 bits per sample is defined; other values are reserved.
cbSize  The size, in bytes, of the extended information after the WAVEFORMATEX structure. For the standard WAVE_FORMAT_ADPCM using the standard seven coefficient pairs, this is 32. If extra coefficients are added, this value will increase.
nSamplesPerBlock  Count of the number of samples per block: (((nBlockAlign - (7 * nChannels)) * 8) / (wBitsPerSample * nChannels)) + 2.
wNumCoef  Count of the number of coefficient sets defined in aCoef.
aCoef  These are the prediction coefficients for the wave data. They may be interpreted as fixed-point 8.8 signed values. Currently there are 7 preset coefficient sets. They must appear in the following order.

    Coef Set  Coef1  Coef2
    0         256    0
    1         512    -256
    2         0      0
    3         192    64
    4         240    0
    5         460    -208
    6         392    -232

    Note that even if only one coefficient set was used to encode the file, all coefficient sets are still included. More coefficients may be added by the encoding software, but the first 7 must always be the same.

Note: 8.8 signed values can be divided by 256 to obtain the integer portion of the value.
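The two small calculations above (samples per block, and the 8.8 fixed-point interpretation of the coefficients) can be sketched directly in C. This is an illustration of the formulas in the text, with function names of my own choosing:

```c
#include <assert.h>

/* Samples per block for MS-ADPCM, from the formula above:
 * (((nBlockAlign - (7 * nChannels)) * 8) / (wBitsPerSample * nChannels)) + 2 */
static int samples_per_block(int nBlockAlign, int nChannels, int wBitsPerSample)
{
    return (((nBlockAlign - (7 * nChannels)) * 8)
            / (wBitsPerSample * nChannels)) + 2;
}

/* Integer portion of an 8.8 signed fixed-point coefficient (divide by 256). */
static int coef_int_part(int fixed88)
{
    return fixed88 / 256;
}
```

For the common 4-bit mono case with nBlockAlign = 256 this gives 500 samples per block; coefficient 256 has integer part 1, and -256 has integer part -1.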

Block

The block has three parts: the header, data, and padding. The three together are <nBlockAlign> bytes.

typedef struct adpcmblockheader_tag {
    BYTE  bPredictor[nChannels];
    int   iDelta[nChannels];
    int   iSamp1[nChannels];
    int   iSamp2[nChannels];
} ADPCMBLOCKHEADER;

Field  Description
bPredictor  Index into the aCoef array to define the predictor used to encode this block.
iDelta  Initial delta value to use.
iSamp1  The second sample value of the block. When decoding, this will be used as the previous sample to start decoding with.
iSamp2  The first sample value of the block. When decoding, this will be used as the previous' previous sample to start decoding with.

Data

The data is a bit string parsed in groups of (wBitsPerSample * nChannels) bits.

For the case of mono voice ADPCM (wBitsPerSample = 4, nChannels = 1) we have:

<byte 0> <byte 1> ... <byte n> ...

where <byte n> holds <(sample 2n + 2) (sample 2n + 3)>, i.e.

<byte n> = ((4-bit error delta for sample (2n + 2)) << 4) | (4-bit error delta for sample (2n + 3))

For the case of stereo voice ADPCM (wBitsPerSample = 4, nChannels = 2) we have:

<byte 0> <byte 1> ... <byte n> ...

where <byte n> holds <(left channel of sample n + 2) (right channel of sample n + 2)>, i.e.

<byte n> = ((4-bit error delta for left channel of sample n + 2) << 4) | (4-bit error delta for right channel of sample n + 2)

Padding

Bit padding is used to round off the block to an exact byte length. The size of the padding (in bits) is:

((nBlockAlign - (7 * nChannels)) * 8) - (((nSamplesPerBlock - 2) * nChannels) * wBitsPerSample)

The padding does not store any data and should be set to zero.
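The padding formula above can be written as a one-line helper; for the standard block sizes the padding works out to zero, which is a useful sanity check. This is a sketch of the formula, not code from the spec:

```c
#include <assert.h>

/* Padding bits at the end of an MS-ADPCM block, from the formula above. */
static int padding_bits(int nBlockAlign, int nChannels,
                        int nSamplesPerBlock, int wBitsPerSample)
{
    return ((nBlockAlign - (7 * nChannels)) * 8)
         - (((nSamplesPerBlock - 2) * nChannels) * wBitsPerSample);
}
```

For 4-bit mono with nBlockAlign = 256 and nSamplesPerBlock = 500, the padding is (249 * 8) - (498 * 4) = 0 bits; the stereo case (244 samples per block) is also exactly byte-aligned.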

ADPCM Algorithm

Each channel of the ADPCM file can be encoded/decoded independently. This should not destroy phase and amplitude information, since each channel tracks the original. Since the channels are encoded/decoded independently, this document is written as if only one channel is being decoded. Since the channels are interleaved, multiple channels may be encoded/decoded in parallel using independent local storage and temporaries.

Note that the process for encoding/decoding one block is independent of the process for the next block. Therefore the process is described for one block only, and may be repeated for other blocks. While some optimizations may relate the process for one block to another, in theory they are still independent.

Note that in the description below, the number designation appended to iSamp (i.e. iSamp1 and iSamp2) refers to the placement of the sample in relation to the current one being decoded. Thus when you are decoding sample n, iSamp1 is sample n - 1 and iSamp2 is sample n - 2. iCoef1 is the coefficient for iSamp1 and iCoef2 is the coefficient for iSamp2. This numbering is identical to that used in the block and format descriptions above.

A sample application will be provided to convert a RIFF waveform file to and from ADPCM and PCM formats.

Decoding

First the predictor coefficients are determined from the bPredictor field of the block header. This value is an index into the aCoef array in the file header.

bPredictor = GetByte()

The initial iDelta is also taken from the block header.

iDelta = GetWord()

Then the first two samples are taken from the block header. (They are stored as 16-bit PCM data as iSamp1 and iSamp2; iSamp2 is the first sample of the block, iSamp1 the second.)

iSamp1 = GetInt()
iSamp2 = GetInt()

After taking this initial data from the block header, the rest of the block may be decoded in the following manner:

While there are more samples in the block to decode:

    Predict the next sample from the previous two samples:
        lPredSamp = ((iSamp1 * iCoef1) + (iSamp2 * iCoef2)) / FIXED_POINT_COEF_BASE

    Get the 4-bit signed error delta:
        iErrorDelta = GetNibble()

    Add the error in prediction to the predicted next sample, preventing overflow/underflow:
        lNewSamp = lPredSamp + (iDelta * iErrorDelta)
        if lNewSamp is too large, make it the maximum allowable value
        if lNewSamp is too small, make it the minimum allowable value

    Output the new sample:
        Output( lNewSamp )

    Adjust the quantization step size used to calculate the error in prediction:
        iDelta = iDelta * AdaptionTable[ iErrorDelta ] / FIXED_POINT_ADAPTION_BASE
        if iDelta is too small, make it the minimum allowable value

    Update the record of previous samples:
        iSamp2 = iSamp1
        iSamp1 = lNewSamp
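The decoding loop above can be condensed into one C function operating on per-channel state. Note the hedges: the adaptation table and the constants FIXED_POINT_COEF_BASE = 256, FIXED_POINT_ADAPTION_BASE = 256, and the minimum iDelta of 16 are the conventional MS-ADPCM values and are not reproduced in this section of the spec, so treat them as assumptions here; the coefficient pairs are the seven from the table above.

```c
#include <assert.h>

/* The seven standard coefficient pairs from the format description (8.8 fixed point). */
static const int aCoef1[7] = { 256, 512,    0, 192, 240,  460,  392 };
static const int aCoef2[7] = {   0, -256,   0,  64,   0, -208, -232 };

/* Conventional MS-ADPCM values; not shown in this section of the spec. */
#define FIXED_POINT_COEF_BASE     256
#define FIXED_POINT_ADAPTION_BASE 256
static const int AdaptionTable[16] = {
    230, 230, 230, 230, 307, 409, 512, 614,
    768, 614, 512, 409, 307, 230, 230, 230
};

typedef struct {
    int iCoef1, iCoef2;   /* predictor coefficients chosen by bPredictor */
    int iDelta;           /* current quantization step */
    int iSamp1, iSamp2;   /* previous and previous-previous samples */
} MsAdpcmState;

/* Decode one 4-bit code (nibble is the raw 0..15 value; bit 3 is the sign). */
static int msadpcm_decode_nibble(MsAdpcmState *s, int nibble)
{
    int  iErrorDelta = (nibble >= 8) ? nibble - 16 : nibble;   /* sign-extend */
    long lPredSamp   = ((long)s->iSamp1 * s->iCoef1 +
                        (long)s->iSamp2 * s->iCoef2) / FIXED_POINT_COEF_BASE;
    long lNewSamp    = lPredSamp + (long)s->iDelta * iErrorDelta;

    /* prevent over/underflow: clamp to the 16-bit range */
    if (lNewSamp >  32767) lNewSamp =  32767;
    if (lNewSamp < -32768) lNewSamp = -32768;

    /* adapt the quantization step; 16 is the conventional minimum */
    s->iDelta = s->iDelta * AdaptionTable[nibble] / FIXED_POINT_ADAPTION_BASE;
    if (s->iDelta < 16) s->iDelta = 16;

    s->iSamp2 = s->iSamp1;
    s->iSamp1 = (int)lNewSamp;
    return (int)lNewSamp;
}
```

With coefficient set 0 (256, 0), iDelta = 16, and previous samples 100 and 50, the code 3 predicts 100 and outputs 100 + 16*3 = 148.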

Encoding

For each block (and each channel), the encoding process can be done through the following steps:

1. Determine the predictor to use for the block.
2. Determine the initial iDelta for the block.
3. Write out the block header.
4. Encode and write out the data.

The predictor to use for each block can be determined in many ways:

1. A static predictor for all files.
2. The block can be encoded with each possible predictor, and the predictor that gave the least error can be chosen. The least error can be determined from:
   1. the sum of squares of differences (from compressed/decompressed to original data);
   2. the least average absolute difference;
   3. the least average iDelta.
3. The predictor that has the smallest initial iDelta can be chosen. (This is an approximation of method 2.3.)
4. Statistics from either the previous or current block (e.g. a linear combination of the first 5 samples of a block that corresponds to the average predicted error).

The starting iDelta for each block can also be determined in several ways:

1. Always start with the same initial iDelta.
2. Use the iDelta from the end of the previous block. (Note that for the first block an initial value must then be chosen.)
3. Determine the initial iDelta from the first few samples of the block. (iDelta generally fluctuates around the value that makes the absolute value of the encoded output about half the maximum absolute value of the encoded output. For 4-bit error deltas the maximum absolute value is 8, so the initial iDelta should be set so that the first output is around 4.)
4. Determine the initial iDelta for this block from the last few samples of the previous block. (Note that for the first block an initial value must then be chosen.)

Note that different choices of predictor and initial iDelta will result in different audio quality.

Once the predictor and starting quantization values are chosen, the block header may be written out:

First the choice of predictor is written out (for each channel).

Then the initial iDelta (quantization scale) is written out (for each channel).

Then the 16-bit PCM value of the second sample, iSamp1, is written out (for each channel).

Finally the 16-bit PCM value of the first sample, iSamp2, is written out (for each channel).

Then the rest of the block may be encoded. (Note that the first encoded value will be for the third sample in the block, since the first two are contained in the header.)

While there are more samples in the block to encode:

    Predict the next sample from the previous two samples:
        lPredSamp = ((iSamp1 * iCoef1) + (iSamp2 * iCoef2)) / FIXED_POINT_COEF_BASE

    Produce the 4-bit signed error delta, preventing overflow/underflow:
        iErrorDelta = (Sample(n) - lPredSamp) / iDelta
        if iErrorDelta is too large, make it the maximum allowable value
        if iErrorDelta is too small, make it the minimum allowable value

    Write out the nibble iErrorDelta:
        PutNibble( iErrorDelta )

    Add the error in prediction to the predicted next sample, preventing overflow/underflow:
        lNewSamp = lPredSamp + (iDelta * iErrorDelta)
        if lNewSamp is too large, make it the maximum allowable value
        if lNewSamp is too small, make it the minimum allowable value

    Adjust the quantization step size used to calculate the error in prediction:
        iDelta = iDelta * AdaptionTable[ iErrorDelta ] / FIXED_POINT_ADAPTION_BASE
        if iDelta is too small, make it the minimum allowable value

    Update the record of previous samples:
        iSamp2 = iSamp1
        iSamp1 = lNewSamp
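The encoding loop above mirrors the decoder: the encoder quantizes the prediction error, then rebuilds the sample exactly as the decoder will so the two stay in sync. A self-contained sketch (the adaptation table, divisor constants, and minimum iDelta of 16 are the conventional MS-ADPCM values, assumed here rather than quoted from this section):

```c
#include <assert.h>

#define FIXED_POINT_COEF_BASE     256   /* conventional value, assumed */
#define FIXED_POINT_ADAPTION_BASE 256   /* conventional value, assumed */
static const int AdaptionTable[16] = {  /* standard table, not shown in this section */
    230, 230, 230, 230, 307, 409, 512, 614,
    768, 614, 512, 409, 307, 230, 230, 230
};

typedef struct {
    int iCoef1, iCoef2, iDelta, iSamp1, iSamp2;
} MsAdpcmState;

/* Quantize one sample to a 4-bit code and update the predictor state the
 * same way the decoder will. Returns the raw nibble (0..15). */
static int msadpcm_encode_sample(MsAdpcmState *s, int sample)
{
    long lPredSamp = ((long)s->iSamp1 * s->iCoef1 +
                      (long)s->iSamp2 * s->iCoef2) / FIXED_POINT_COEF_BASE;
    int  iErrorDelta = (int)((sample - lPredSamp) / s->iDelta);

    /* clamp the signed 4-bit code to [-8, 7] */
    if (iErrorDelta >  7) iErrorDelta =  7;
    if (iErrorDelta < -8) iErrorDelta = -8;

    /* reconstruct exactly as the decoder does */
    long lNewSamp = lPredSamp + (long)s->iDelta * iErrorDelta;
    if (lNewSamp >  32767) lNewSamp =  32767;
    if (lNewSamp < -32768) lNewSamp = -32768;

    int nibble = iErrorDelta & 0x0F;
    s->iDelta = s->iDelta * AdaptionTable[nibble] / FIXED_POINT_ADAPTION_BASE;
    if (s->iDelta < 16) s->iDelta = 16;
    s->iSamp2 = s->iSamp1;
    s->iSamp1 = (int)lNewSamp;
    return nibble;
}
```

Tracking the reconstructed sample (rather than the original) in iSamp1/iSamp2 is what keeps encoder and decoder predictions identical.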

Sample C Code

Sample C code is contained in the file MSADPCM.C, which is available with this document in electronic form and separately. See the overview section for how to obtain this sample code.

CVSD Wave Type

Added: 07/21/92
Author: DSP Solutions (formerly Digispeech)

Fact Chunk

This chunk is required for all WAVE formats other than WAVE_FORMAT_PCM. It stores file-dependent information about the contents of the WAVE data. It currently specifies the length of the data in samples.

Wave Format Header

#define WAVE_FORMAT_IBM_CVSD (0x0005)

wFormatTag  This must be set to WAVE_FORMAT_IBM_CVSD.
nChannels  Number of channels in the wave: 1 for mono, 2 for stereo.
nSamplesPerSec  Frequency the source was sampled at. See the chart below.
nAvgBytesPerSec  Average data rate: one of 1800, 2400, 3000, 3600, 4200, or 4800. See the chart below. Playback software can estimate the buffer size using the <nAvgBytesPerSec> value.
nBlockAlign  Set to 2048 to provide efficient caching of the file from CD-ROM. Playback software needs to process a multiple of <nBlockAlign> bytes of data at a time, so the value of <nBlockAlign> can be used for buffer alignment.
wBitsPerSample  The number of bits per sample of data. This is always 1 for CVSD.
cbSize  The size, in bytes, of the rest of the wave format header. This is zero for CVSD.

The Digispeech CVSD compression format is compatible with the IBM PS/2 Speech Adapter, which uses a Motorola MC3418 for CVSD modulation. The Motorola chip uses a single algorithm which can work at variable sampling clock rates. The CVSD algorithm compresses each input audio sample to 1 bit; acceptable sound quality is achieved by using high sampling rates. The Digispeech DS201 adapter supports six CVSD sampling frequencies, which are used by most software written for the IBM PS/2 Speech Adapter:

Sample Rate   Bytes/Second
14,400 Hz     1800
19,200 Hz     2400
24,000 Hz     3000
28,800 Hz     3600
33,600 Hz     4200
38,400 Hz     4800
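The chart above follows directly from CVSD storing 1 bit per sample: the byte rate is simply the sample rate divided by 8. A one-line check (illustrative, not spec code):

```c
#include <assert.h>

/* CVSD stores 1 bit per sample, so the data rate in bytes per second
 * is the sample rate divided by 8. */
static int cvsd_avg_bytes_per_sec(int nSamplesPerSec)
{
    return nSamplesPerSec / 8;
}
```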

The CVSD format is a compression scheme that has been used by IBM and is supported by the IBM PS/2 Speech Adapter card. Digispeech also has a card that uses this compression scheme. It is not Digispeech's policy to disclose any of these algorithms to third-party vendors.

CCITT Standard Companded Wave Types

Added: 05/22/92
Author: Microsoft, DSP Solutions (formerly Digispeech), VocalTec, Artisoft

Fact Chunk

This chunk is required for all WAVE formats other than WAVE_FORMAT_PCM. It stores file-dependent information about the contents of the WAVE data. It currently specifies the length of the data in samples.

Wave Format Header

#define WAVE_FORMAT_ALAW (0x0006)

#define WAVE_FORMAT_MULAW (0x0007)

wFormatTag  This must be set to WAVE_FORMAT_ALAW or WAVE_FORMAT_MULAW.
nChannels  Number of channels in the wave: 1 for mono, 2 for stereo.
nSamplesPerSec  Sample rate of the WAVE file (8000, 11025, 22050, or 44100).
nAvgBytesPerSec  Average data rate. Playback software can estimate the buffer size using the <nAvgBytesPerSec> value.
nBlockAlign  Size of the blocks in bytes. Playback software needs to process a multiple of <nBlockAlign> bytes of data at a time, so the value of <nBlockAlign> can be used for buffer alignment.
wBitsPerSample  The number of bits per sample of data. (This is 8 for all the companded formats.)
cbSize  The size, in bytes, of the extra information in the extended wave 'fmt' header. This should be zero.

See the CCITT G.711 specification for details of the data format.
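For a flavor of what G.711 companding looks like in practice, here is a widely used mu-law expansion routine. This is a common reference implementation of the G.711 segment layout (inverted storage, sign bit, 3 exponent bits, 4 mantissa bits, bias of 132), not code taken from the specification itself:

```c
#include <assert.h>

/* Expand one 8-bit mu-law code to a 16-bit linear sample. */
static int mulaw_to_linear(unsigned char u)
{
    int t;
    u = (unsigned char)~u;             /* mu-law codes are stored inverted */
    t = ((u & 0x0F) << 3) + 0x84;      /* mantissa plus bias (132) */
    t <<= (u & 0x70) >> 4;             /* apply the segment (exponent) */
    return (u & 0x80) ? (0x84 - t) : (t - 0x84);
}
```

Code 0xFF decodes to linear 0, and the extreme codes 0x00 and 0x80 decode to -32124 and +32124, the full-scale values of the mu-law curve.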

This is a CCITT (International Telegraph and Telephone Consultative Committee) specification. Their address is:

Palais des Nations
CH-1211 Geneva 10, Switzerland
Phone: 22 7305111

OKI ADPCM Wave Types

Added: 05/22/92
Author: Digispeech, VocalTec, Wang

Fact Chunk

This chunk is required for all WAVE formats other than WAVE_FORMAT_PCM. It stores file-dependent information about the contents of the WAVE data. It currently specifies the length of the data in samples.

Wave Format Header

#define WAVE_FORMAT_OKI_ADPCM (0x0010)

typedef struct oki_adpcmwaveformat_tag {
    WAVEFORMATEX  wfx;
    WORD          wPole;
} OKIADPCMWAVEFORMAT;

wFormatTag  This must be set to WAVE_FORMAT_OKI_ADPCM.
nChannels  Number of channels in the wave: 1 for mono, 2 for stereo.
nSamplesPerSec  Sample rate of the WAVE file (8000, 11025, 22050, or 44100).
nAvgBytesPerSec  Average data rate. Playback software can estimate the buffer size using the <nAvgBytesPerSec> value.
nBlockAlign  This is dependent upon the number of bits per sample.

    wBitsPerSample  nChannels  nBlockAlign
    3               1          3
    3               2          6
    4               1          1
    4               2          1

    Playback software needs to process a multiple of <nBlockAlign> bytes of data at a time, so the value of <nBlockAlign> can be used for buffer alignment.
wBitsPerSample  The number of bits per sample of data. (OKI ADPCM can be 3 or 4.)
cbSize  The size, in bytes, of the extra information in the extended wave 'fmt' header. This should be 2.
wPole  High-frequency emphasis value.

This format is created and read by the OKI ADPCM chip set, which is used by a number of card manufacturers.

IMA ADPCM Wave Type

The IMA ADPCM and DVI ADPCM formats are identical. Please see the following section on the DVI ADPCM wave type for a full description.

#define WAVE_FORMAT_IMA_ADPCM (0x0011)

DVI ADPCM Wave Type

Added: 12/16/92
Author: Intel

Please note that the DVI ADPCM wave type is identical to the IMA ADPCM wave type.

Fact Chunk

This chunk is required for all WAVE formats other than WAVE_FORMAT_PCM. It stores file-dependent information about the contents of the WAVE data. It currently specifies the length of the data in samples.

Wave Format Header

#define WAVE_FORMAT_DVI_ADPCM (0x0011)

typedef struct dvi_adpcmwaveformat_tag {
    WAVEFORMATEX  wfx;
    WORD          wSamplesPerBlock;
} DVIADPCMWAVEFORMAT;

wFormatTag  This must be set to WAVE_FORMAT_DVI_ADPCM.
nChannels  Number of channels in the wave: 1 for mono, 2 for stereo.
nSamplesPerSec  Sample rate of the WAVE file. This should be 8000, 11025, 22050, or 44100. Other sample rates are allowed.
nAvgBytesPerSec  Total average data rate. Playback software can estimate the buffer size for a selected amount of time by using the <nAvgBytesPerSec> value.
nBlockAlign  This is dependent upon the number of bits per sample.

    wBitsPerSample  nBlockAlign
    3               ((n * 3) + 1) * 4 * nChannels
    4               (n + 1) * 4 * nChannels
                    where n = 0, 1, 2, 3 ...

    The recommended block size for coding is 256 * <nChannels> bytes * min(1, (<nSamplesPerSec> / 11 kHz)). Smaller values cause the block header to become a more significant storage overhead, but it is up to the implementation of the coding portion of the algorithm to decide the optimal value for <nBlockAlign> within the given constraints (see above). The decoding portion of the algorithm must be able to handle any valid block size. Playback software needs to process a multiple of <nBlockAlign> bytes of data at a time, so the value of <nBlockAlign> can be used for allocating buffers.
wBitsPerSample  The number of bits per sample of data. DVI ADPCM supports 3 or 4 bits per sample.
cbSize  The size, in bytes, of the extra information in the extended wave 'fmt' header. This should be 2.
wSamplesPerBlock  Count of the number of samples per channel per block.
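wSamplesPerBlock can be derived from the block layout described in the next section: each channel carries a 4-byte header holding sample 0, and the remainder of the block holds wBitsPerSample-bit codes for every channel. The formula below is that derivation, not a formula quoted from the spec:

```c
#include <assert.h>

/* Samples per channel per block for DVI/IMA ADPCM, derived from the block
 * layout: a 4-byte header per channel holds sample 0, and the remaining
 * (nBlockAlign - 4 * nChannels) bytes hold the sample codes. */
static int dvi_samples_per_block(int nBlockAlign, int nChannels,
                                 int wBitsPerSample)
{
    return ((nBlockAlign - 4 * nChannels) * 8)
           / (wBitsPerSample * nChannels) + 1;
}
```

For the common 4-bit mono case with a 256-byte block this gives 505 samples per channel; the 4-bit stereo case gives 249.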

Block

The block is defined to be <nBlockAlign> bytes in length. For DVI ADPCM this must be a multiple of 4 bytes, since all information in the block is divided on 32-bit word boundaries.

The block has two parts, the header and the data; the two together are <nBlockAlign> bytes in length. [Diagram omitted: the header and data parts of one block, where m = one of the channels, in the range 1 to <nChannels>.]

Header

This C structure defines the DVI ADPCM block header:

typedef struct dvi_adpcmblockheader_tag {
    int   iSamp0;
    BYTE  bStepTableIndex;
    BYTE  bReserved;
} DVI_ADPCMBLOCKHEADER;

Field  Description
iSamp0  The first sample value of the block. When decoding, this will be used as the previous sample to start decoding with.
bStepTableIndex  The current index into the step table array (0 - 88).
bReserved  This byte is reserved for future use.

A block contains an array of <nChannels> header structures as defined above. [Diagram omitted: a byte-level description of the contents of each header word.]

Data

The data words are interpreted differently depending on the number of bits per sample selected.

For 4-bit DVI ADPCM (where <wBitsPerSample> is equal to four) each data word contains eight sample codes. [Diagram omitted.] In the diagram:

n = a data word for a given channel, in the range 0 to <nBlockAlign> / (4 * <nChannels>) - <nChannels> - 1
p = (n * 8) + 1

Sample 0 is always included in the block header for the channel. Each sample is 4 bits in length. Each block contains a total of <wSamplesPerBlock> samples for each channel.

For 3-bit DVI ADPCM (where <wBitsPerSample> is equal to three) each data word contains 10.667 sample codes. It takes three words to hold an integral number of sample codes at 3 bits per code, so for 3-bit DVI ADPCM the number of data words is required to be a multiple of three words (12 bytes). These three words contain 32 sample codes. [Diagram omitted.] In the diagram:

m = one of the channels, in the range 1 to <nChannels>
n = the first data word in a group of three data words for channel m, in the range 0 to <nBlockAlign> / (4 * <nChannels>) - <nChannels> - 1
p = ((n / 3) * 32) + 1

Sample 0 is always included in the block header for the channel. Each sample is 3 bits in length. Each block contains a total of <wSamplesPerBlock> samples for each channel.

DVI ADPCM Algorithm

Each channel of the DVI ADPCM file can be encoded/decoded independently. Since the channels are encoded/decoded independently, this document is written as if only one channel is being decoded. Since the channels are interleaved, multiple channels may be encoded/decoded in parallel using independent local storage and temporaries.

Note that the process for encoding/decoding one block is independent of the process for the next block. Therefore the process is described for one block only, and may be repeated for other blocks.

The processes for encoding and decoding are discussed below.

Tables

The DVI ADPCM algorithm relies on two tables to encode and decode audio samples: the step table and the index table. The contents of these tables are fixed for this algorithm. The 3-bit and 4-bit versions of the DVI ADPCM algorithm use the same step table, which is:

const int StepTab[ 89 ] = {
    7,     8,     9,    10,    11,    12,    13,    14,
    16,    17,    19,    21,    23,    25,    28,    31,
    34,    37,    41,    45,    50,    55,    60,    66,
    73,    80,    88,    97,   107,   118,   130,   143,
    157,   173,   190,   209,   230,   253,   279,   307,
    337,   371,   408,   449,   494,   544,   598,   658,
    724,   796,   876,   963,  1060,  1166,  1282,  1411,
    1552,  1707,  1878,  2066,  2272,  2499,  2749,  3024,
    3327,  3660,  4026,  4428,  4871,  5358,  5894,  6484,
    7132,  7845,  8630,  9493, 10442, 11487, 12635, 13899,
    15289, 16818, 18500, 20350, 22385, 24623, 27086, 29794,
    32767 };

The index table, however, differs between the two bit rates. For 4-bit DVI ADPCM the contents of the index table are:

const int IndexTab[ 16 ] = { -1, -1, -1, -1, 2, 4, 6, 8,
                             -1, -1, -1, -1, 2, 4, 6, 8 };

For 3-bit DVI ADPCM the contents of the index table are:

const int IndexTab[ 8 ] = { -1, -1, 1, 2,
                            -1, -1, 1, 2 };

Decoding

This section describes the algorithm used for decoding 4-bit DVI ADPCM. This procedure must be followed for each block, for each channel.

    Get the first sample, Samp0, from the block header.
    Set the initial step table index, Index, from the block header.
    Output the first sample, Samp0.
    Set the previous sample value:
        PrevSamp = Samp0
    While there are still samples to decode:
        Get the next sample code, SampCode.
        Calculate the new sample:
            Calculate the difference:
                Diff = 0
                if ( SampCode & 4 )
                    Diff = Diff + StepTab[ Index ]
                if ( SampCode & 2 )
                    Diff = Diff + ( StepTab[ Index ] >> 1 )
                if ( SampCode & 1 )
                    Diff = Diff + ( StepTab[ Index ] >> 2 )
                Diff = Diff + ( StepTab[ Index ] >> 3 )
            Check the sign bit:
                if ( SampCode & 8 )
                    Diff = -Diff
            Samp = PrevSamp + Diff
            Check for overflow and underflow:
                if Samp is too large, make it the maximum allowable value (32767)
                if Samp is too small, make it the minimum allowable value (-32768)
        Output the new sample, Samp.
        Adjust the step table index:
            Index = Index + IndexTab[ SampCode ]
            Check for step table index overflow and underflow:
                if Index is too large, make it the maximum allowable value (88)
                if Index is too small, make it the minimum allowable value (0)
        Save the previous sample value:
            PrevSamp = Samp
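The 4-bit decoding loop above translates directly into C using the step and index tables from the Tables section. This is a sketch of one loop iteration; the state struct and function name are of my own choosing:

```c
#include <assert.h>

/* Step and index tables as given in the Tables section. */
static const int StepTab[89] = {
    7, 8, 9, 10, 11, 12, 13, 14, 16, 17, 19, 21, 23, 25, 28, 31,
    34, 37, 41, 45, 50, 55, 60, 66, 73, 80, 88, 97, 107, 118, 130, 143,
    157, 173, 190, 209, 230, 253, 279, 307, 337, 371, 408, 449, 494, 544,
    598, 658, 724, 796, 876, 963, 1060, 1166, 1282, 1411, 1552, 1707, 1878,
    2066, 2272, 2499, 2749, 3024, 3327, 3660, 4026, 4428, 4871, 5358, 5894,
    6484, 7132, 7845, 8630, 9493, 10442, 11487, 12635, 13899, 15289, 16818,
    18500, 20350, 22385, 24623, 27086, 29794, 32767
};
static const int IndexTab4[16] = { -1, -1, -1, -1, 2, 4, 6, 8,
                                   -1, -1, -1, -1, 2, 4, 6, 8 };

typedef struct { int prevSamp; int index; } DviState;

/* Decode one 4-bit sample code, following the loop body in the text. */
static int dvi_decode_nibble(DviState *s, int code)
{
    int step = StepTab[s->index];
    int diff = step >> 3;                 /* the unconditional final term */
    if (code & 4) diff += step;
    if (code & 2) diff += step >> 1;
    if (code & 1) diff += step >> 2;
    if (code & 8) diff = -diff;           /* sign bit */

    int samp = s->prevSamp + diff;
    if (samp >  32767) samp =  32767;     /* clamp overflow */
    if (samp < -32768) samp = -32768;     /* clamp underflow */

    s->index += IndexTab4[code];
    if (s->index > 88) s->index = 88;
    if (s->index < 0)  s->index = 0;

    s->prevSamp = samp;
    return samp;
}
```

Starting from sample 0 and index 0, the code 7 yields 7 + 3 + 1 + 0 = 11 and moves the index to 8; a following code 0 adds StepTab[8] >> 3 = 2 and steps the index back to 7.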

This section describes the algorithm used for decoding 3-bit DVI ADPCM. This procedure must be followed for each block, for each channel.

    Get the first sample, Samp0, from the block header.
    Set the initial step table index, Index, from the block header.
    Output the first sample, Samp0.
    Set the previous sample value:
        PrevSamp = Samp0
    While there are still samples to decode:
        Get the next sample code, SampCode.
        Calculate the new sample:
            Calculate the difference:
                Diff = 0
                if ( SampCode & 2 )
                    Diff = Diff + StepTab[ Index ]
                if ( SampCode & 1 )
                    Diff = Diff + ( StepTab[ Index ] >> 1 )
                Diff = Diff + ( StepTab[ Index ] >> 2 )
            Check the sign bit:
                if ( SampCode & 4 )
                    Diff = -Diff
            Samp = PrevSamp + Diff
            Check for overflow and underflow:
                if Samp is too large, make it the maximum allowable value (32767)
                if Samp is too small, make it the minimum allowable value (-32768)
        Output the new sample, Samp.
        Adjust the step table index:
            Index = Index + IndexTab[ SampCode ]
            Check for step table index overflow and underflow:
                if Index is too large, make it the maximum allowable value (88)
                if Index is too small, make it the minimum allowable value (0)
        Save the previous sample value:
            PrevSamp = Samp

encoding

this section describes the algorithm used for encoding the 4 bit dvi adpcm. this procedure must be followed for each block for each channel.

for the first block only, clear the initial step table index:

index = 0

get the first sample, samp0

create the block header:

write the first sample, samp0, to the header

write the initial step table index, index, to the header

set the previously predicted sample value:

predsamp = samp0

while there are still samples to encode, and we're not at the end of the block

get the next sample to encode, sampx

calculate the new sample code:

diff = sampx - predsamp

set the sign bit:

if ( diff <0 )

sampx code = 8

diff = -diff

else

sampx code = 0

set the rest of the code:

if ( diff >= steptab[ index ] )

sampx code = sampx code | 4

diff = diff - steptab[ index ]

if ( diff >= ( steptab[ index ] >> 1 )

sampx code = sampx code | 2

diff = diff - ( steptab[ index ] >> 1 )

if ( diff >= ( steptab[ index ] >> 2 )

sampx code = sampx code | 1

save the sample code, sampx code in the block

predict the current sample based on the sample code:

calculate the difference:

diff = 0

if ( sampx code & 4 )

diff = diff + steptab[ index ]

if ( sampx code & 2 )

diff = diff + ( steptab[ index ] >> 1 )

if ( sampx code & 1 )

diff = diff + ( steptab[ index ] >> 2 )

diff = diff + ( steptab[ index ] >> 3 )

check the sign bit:

if ( sampx code & 8 )

diff = -diff

sampx = sampx-1 + diff

check for overflow and underflow errors:

if predsamp too large, make it the maximum allowable size (32767)

if predsamp too small, make it the minimum allowable size (-32768)

adjust the step table index:

index = index + indextab[ sampx code ]

check for step table index overflow and underflow:

if index too large, make it the maximum allowable size (88)

if index too small, make it the minimum allowable size (0)
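the encode steps above map directly onto a short c routine. this is an illustrative sketch rather than code from the specification: the function name and the state-passing convention are invented here, while steptab and indextab are the standard 89 entry step size table and 16 entry index adjustment table used by 4 bit dvi/ima adpcm.

```c
static const int steptab[89] = {
        7,     8,     9,    10,    11,    12,    13,    14,
       16,    17,    19,    21,    23,    25,    28,    31,
       34,    37,    41,    45,    50,    55,    60,    66,
       73,    80,    88,    97,   107,   118,   130,   143,
      157,   173,   190,   209,   230,   253,   279,   307,
      337,   371,   408,   449,   494,   544,   598,   658,
      724,   796,   876,   963,  1060,  1166,  1282,  1411,
     1552,  1707,  1878,  2066,  2272,  2499,  2749,  3024,
     3327,  3660,  4026,  4428,  4871,  5358,  5894,  6484,
     7132,  7845,  8630,  9493, 10442, 11487, 12635, 13899,
    15289, 16818, 18500, 20350, 22385, 24623, 27086, 29794,
    32767
};

static const int indextab[16] = {
    -1, -1, -1, -1, 2, 4, 6, 8,
    -1, -1, -1, -1, 2, 4, 6, 8
};

/* clamp v into [lo, hi] */
static int clampi(int v, int lo, int hi)
{
    return v < lo ? lo : v > hi ? hi : v;
}

/* encode one 16 bit sample to a 4 bit code; *predsamp and *index carry
   the predictor state between calls, initialised from the block header
   as described above */
int dvi4_encode_sample(int sampx, int *predsamp, int *index)
{
    int step = steptab[*index];
    int diff = sampx - *predsamp;
    int code = 0;

    if (diff < 0) { code = 8; diff = -diff; }
    if (diff >= step)        { code |= 4; diff -= step; }
    if (diff >= (step >> 1)) { code |= 2; diff -= step >> 1; }
    if (diff >= (step >> 2)) { code |= 1; }

    /* rebuild the difference exactly as the decoder will */
    diff = step >> 3;
    if (code & 4) diff += step;
    if (code & 2) diff += step >> 1;
    if (code & 1) diff += step >> 2;
    if (code & 8) diff = -diff;

    *predsamp = clampi(*predsamp + diff, -32768, 32767);
    *index    = clampi(*index + indextab[code], 0, 88);
    return code;
}
```

note that the predictor is updated from the reconstructed difference, not the true one, so that encoder and decoder track the same state.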

this section describes the algorithm used for encoding the 3 bit dvi adpcm. this procedure must be followed for each block for each channel.

for the first block only, clear the initial step table index:

index = 0

get the first sample, samp0

create the block header:

write the first sample, samp0, to the header

write the initial step table index, index, to the header

set the previously predicted sample value:

predsamp = samp0

while there are still samples to encode, and we're not at the end of the block

get the next sample to encode, sampx

calculate the new sample code:

diff = sampx - predsamp

set the sign bit:

if ( diff <0 )

sampx code = 4

diff = -diff

else

sampx code = 0

set the rest of the code:

if ( diff >= steptab[ index ] )

sampx code = sampx code | 2

diff = diff - steptab[ index ]

if ( diff >= ( steptab[ index ] >> 1 ) )

sampx code = sampx code | 1

save the sample code, sampx code in the block

predict the current sample based on the sample code:

calculate the difference:

diff = 0

if ( sampx code & 2 )

diff = diff + steptab[ index ]

if ( sampx code & 1 )

diff = diff + ( steptab[ index ] >> 1 )

diff = diff + ( steptab[ index ] >> 2 )

check the sign bit:

if ( sampx code & 4 )

diff = -diff

predsamp = predsamp + diff

check for overflow and underflow errors:

if predsamp too large, make it the maximum allowable size (32767)

if predsamp too small, make it the minimum allowable size (-32768)

adjust the step table index:

index = index + indextab[ sampx code ]

check for step table index overflow and underflow:

if index too large, make it the maximum allowable size (88)

if index too small, make it the minimum allowable size (0)
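the 3 bit quantize and reconstruct arithmetic above can be written out as a small c helper. the function name and calling convention are invented for illustration; the caller supplies step = steptab[index] and performs the index update and clamping exactly as in the 4 bit case.

```c
/* quantize a predictor difference to a 3 bit code and reconstruct the
   decoder-side difference for updating the prediction */
int dvi3_quantize(int diff_in, int step, int *recon_diff)
{
    int diff = diff_in, code = 0;

    if (diff < 0) { code = 4; diff = -diff; }
    if (diff >= step)        { code |= 2; diff -= step; }
    if (diff >= (step >> 1)) { code |= 1; }

    /* reconstruct the difference the decoder will compute */
    diff = step >> 2;
    if (code & 2) diff += step;
    if (code & 1) diff += step >> 1;
    if (code & 4) diff = -diff;

    *recon_diff = diff;
    return code;
}
```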

dsp solutions (formerly digispeech) wave types

added: 05/22/92
author: digispeech

fact chunk

this chunk is required for all wave formats other than wave_format_pcm. it stores file dependent information about the contents of the wave data. it currently specifies the time length of the data in samples.

wave format header

# define wave_format_digistd (0x0015)

# define wave_format_digifix (0x0016)

wformattag this must be set to either wave_format_digistd or wave_format_digifix.
nchannels number of channels in the wave. (1 for mono)
nsamplespersec frequency of the sample rate of the wave file. (8000). this value is also used by the fact chunk to determine the length in time units of the data.
navgbytespersec average data rate. (1100 for digistd or 1625 for digifix)
playback software can estimate the buffer size using the <navgbytespersec> value.
nblockalign block alignment of 2 for digistd and 26 for digifix.
playback software needs to process a multiple of <nblockalign> bytes of data at a time, so that the value of <nblockalign> can be used for buffer alignment.
wbitspersample this is the number of bits per sample of data.
cbsize the size in bytes of the extra information in the extended wave 'fmt' header. this should be zero.

the definition of the data contained in the digistd and digifix formats is considered proprietary information of digispeech. they can be contacted at:

dsp solutions, inc.
2464 embarcadero way
palo alto, ca 94303

the digistd is a format used in a compression technique developed by digispeech, inc. digistd format provides good speech quality with an average rate of about 1100 bytes/second. the blocks (or buffers) in this format cannot be cyclically repeated.

the digifix is a format used in a compression technique developed by digispeech, inc. digifix format provides good speech quality (similar to digistd) with an average rate of exactly 1625 bytes/second. this format uses blocks 26 bytes long.

yamaha adpcm

added 09/25/92
author: yamaha

fact chunk

this chunk is required for all wave formats other than wave_format_pcm. it stores file dependent information about the contents of the wave data. it currently specifies the time length of the data in samples.

wave format header

# define wave_format_yamaha_adpcm (0x0020)

wformattag this must be set to wave_format_yamaha_adpcm.
nchannels number of channels in the wave, 1 for mono, 2 for stereo.
nsamplespersec frequency of the sample rate of the wave file. this should be 5125, 7350, 9600, 11025, 22050, or 44100 hz. other sample rates are not allowed.
navgbytespersec average data rate..
playback software can estimate the buffer size using the <navgbytespersec> value.
nblockalign this is dependent upon the number of bits per sample.
  wbitspersample nblockalign
  4 1
wbitspersample this is the number of bits per sample of yadpcm. currently only 4 bits per sample is defined. other values are reserved.
cbsize the size in bytes of the extra information in the extended wave 'fmt' header. this should be zero.

this format is created and read by the yamaha chip included in the gold sound standard (gss) that is implemented in a number of manufacturers' boards. the algorithm and conversion routines are published in the source code provided in yadpcm.c with this technote.

sonarc™ compression

added 10/21/92
author: speech compression

speech compression has developed a new compression algorithm which, unlike adpcm, is capable of lossless compression of digitized audio files to a degree far greater (50-60%) than that achievable with general-purpose compressors such as pkzip and lharc. "lossy" compression is possible with even higher ratios. information about the algorithm is available from the address below.

fact chunk

this chunk is required for all wave formats other than wave_format_pcm. it stores file dependent information about the contents of the wave data. it currently specifies the time length of the data in samples.

wave format header

typedef struct sonarcwaveformat_tag {

waveformatex wfx;

word wcomptype;

} sonarcwaveformat;

# define wave_format_sonarc (0x0021)

wformattag this must be set to wave_format_sonarc.
nchannels number of channels in the wave, 1 for mono, 2 for stereo.
nsamplespersec frequency of the sample rate of the wave file. this should be 11025, 22050, or 44100 hz. other sample rates are not allowed.
navgbytespersec average data rate.
playback software can estimate the buffer size using the <navgbytespersec> value.
nblockalign the valid values have not been defined.
playback software needs to process a multiple of <nblockalign> bytes of data at a time, so that the value of <nblockalign> can be used for buffer alignment.
wbitspersample this is the number of bits per sample of sonarc.
cbsize the size in bytes of the extra information in the extended wave 'fmt' header. this should be 2.
wcomptype this value is not yet defined.

"sonarc" is a trademark of speech compression.

to get information on this format please contact:

speech compression
1682 langley ave.
irvine, ca 92714
telephone: 714-660-7727 fax: 714-660-7155

creative labs adpcm

added 10/01/92
author: creative labs

creative has defined a new adpcm compression scheme. this scheme will be implemented on their hardware and will support compression and decompression in real time. they do not provide a description of the algorithm here; information about it is available from the address below.

fact chunk

this chunk is required for all wave formats other than wave_format_pcm. it stores file dependent information about the contents of the wave data. it currently specifies the time length of the data in samples.

wave format header

typedef struct creative_adpcmwaveformat_tag {

waveformatex wfx;

word wrevision;

} creativeadpcmwaveformat;

# define wave_format_creative_adpcm (0x0200)

wformattag this must be set to wave_format_creative_adpcm.
nchannels number of channels in the wave, 1 for mono, 2 for stereo.
nsamplespersec frequency of the sample rate of the wave file. this should be 8000, 11025, 22050, or 44100 hz. other sample rates are not allowed.
navgbytespersec average data rate.
playback software can estimate the buffer size using the <navgbytespersec> value.
nblockalign this is dependent upon the number of bits per sample.
  wbitspersample nchannels nblockalign
  4 1 1
  4 2 1
  playback software needs to process a multiple of <nblockalign> bytes of data at a time, so that the value of <nblockalign> can be used for buffer alignment.
wbitspersample this is the number of bits per sample of cadpcm.
cbsize the size in bytes of the extra information in the extended wave 'fmt' header. this should be 2.
wrevision revision of algorithm. this should be one for the current definition.

to get information on this format please contact:

creative developer support
1901, mccarthy blvd, milpitas, ca 95035.
tel : 408-428 6644 fax : 408-428 6655

dsp group wave type

added: 01/04/93
author: paul beard, dsp group

fact chunk

this chunk is required for all wave formats other than wave_format_pcm. it stores file dependent information about the contents of the wave data. it currently specifies the length of the data in samples.

wave format header

# define wave_format_dspgroup_truespeech (0x0022)

wformattag this must be set to wave_format_dspgroup_truespeech.
nchannels number of channels in the wave, 1 for mono.
nsamplespersec frequency of the sample rate of the wave file. this should be 8000
navgbytespersec average data rate. (1067)
playback software can estimate the buffer size using the <navgbytespersec> value.
nblockalign this is the block alignment of the data in bytes. (32).
playback software needs to process a multiple of <nblockalign> bytes of data at a time, so that the value of <nblockalign> can be used for buffer alignment.
wbitspersample this is the number of bits per sample of truespeech. not used; set to zero.
cbextrasize the size in bytes of the extra information in the extended wave 'fmt' header. this should be 32.
wrevision revision number. this should be 1 for the current definition.
nsamplesperblock number of samples per block. 240


the definition of the data contained in the truespeech format is considered proprietary information of dsp group inc. they can be contacted at:

dsp group inc.,
4050 moorpark ave.,
san jose ca. 95117
(408) 985 0722

truespeech is a format used in a compression technique developed by dsp group inc. truespeech format provides high quality telephony bandwidth voice vocoding with a rate of 1067 bytes per second. this format uses blocks 32 bytes long.

echo speech wave type

added: 01/21/93
author: echo speech corporation

fact chunk

this chunk is required for all wave formats other than wave_format_pcm. it stores file dependent information about the contents of the wave data. it currently specifies the length of the data in samples.

wave format header

# define wave_format_echosc1 (0x0023)

wformattag this must be set to wave_format_echosc1.
nchannels number of channels in the wave, always 1 for mono.
nsamplespersec frequency of the sample rate of the wave file. this should be 11025
navgbytespersec average data rate. (450)
playback software can estimate the buffer size using the <navgbytespersec> value.
nblockalign this is the block alignment of the data in bytes. (6).
playback software needs to process a multiple of <nblockalign> bytes of data at a time, so that the value of <nblockalign> can be used for buffer alignment.
wbitspersample this is the number of bits per sample. not used; set to zero.
cbsize the size in bytes of the extra information in the extended wave 'fmt' header. this should be 0.

the definition of the data contained in the echo sc-1 format is considered proprietary information of echo speech corporation. they can be contacted at:

echo speech corporation
6460 via real
carpinteria, ca. 93013
805 684-4593

echo sc-1 is a format used in a compression technique developed by echo speech corporation. echo sc-1 format provides excellent speech quality with an average data rate of exactly 450 bytes/second. this format uses blocks 6 bytes long.

echo is a registered trademark of echo speech corporation.

audiofile wave type af36

added: april 29, 1993
author: audiofile

fact chunk

this chunk is required for all wave formats other than wave_format_pcm. it stores file dependent information about the contents of the wave data. it currently specifies the length of the data in samples.

wave format header

# define wave_format_audiofile_af36 (0x0024)

wformattag this must be set to wave_format_audiofile_af36
nchannels number of channels in the wave.(1 for mono)
nsamplespersec frequency of the sample rate of the wave file.
navgbytespersec average data rate.
playback software can estimate the buffer size using the <navgbytespersec> value.
nblockalign block alignment of the data.
playback software needs to process a multiple of <nblockalign> bytes of data at a time, so that the value of <nblockalign> can be used for buffer alignment.
wbitspersample this is the number of bits per sample of data.
cbsize the size in bytes of the extra information in the extended wave 'fmt' header.

audiofile af36 format provides very high compression for speech-based waveform audio. (relative to 11 khz, 16-bit pcm, a compression ratio of 36-to-1 is achieved with af36.)

for more information on af36 and other audiofile host-based and dsp-based compression software contact:

audiofile, inc.
four militia drive
lexington, ma, 02173
(617) 861-2996


audio processing technology wave type

added: 06/22/93
author: calypso software limited

fact chunk

this chunk is required for all wave formats other than wave_format_pcm. it stores file dependent information about the contents of the wave data. it currently specifies the length of the data in samples.

wave format header

# define wave_format_aptx (0x0025)

wformattag this must be set to wave_format_aptx.
nchannels number of channels in the wave, 1 for mono, 2 for stereo.
nsamplespersec frequency of the sample rate of the wave file. (8000, 11025, 22050, 44100, 48000)
navgbytespersec average data rate = nchannels * nsamplespersec / 2 (for 16 bit source audio).
playback software can estimate the buffer size using the <navgbytespersec> value.
nblockalign should be set to 2 (bytes) for mono data or 4 (bytes) for stereo.
for mono data 4 sixteen bit samples will be compressed into 1 sixteen bit word
for stereo data 4 sixteen bit left channel samples will be compressed into the first 16 bit word and 4 sixteen bit right channel samples will be compressed into the next 16 bit word.
playback software needs to process a multiple of <nblockalign> bytes of data at a time, so that the value of <nblockalign> can be used for buffer alignment.
wbitspersample this is the number of bits per compressed sample; set to four.
cbsize the size in bytes of the extra information in the extended wave 'fmt' header. this should be 0 (zero).
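the buffer math implied by the fields above can be written down directly. the function names here are invented for illustration; the formulas follow from 4:1 compression of 16 bit samples and a block of one compressed 16 bit word per channel.

```c
/* 16 bit samples compress 4:1, so each sample contributes half a byte */
unsigned long aptx_avg_bytes_per_sec(unsigned nchannels,
                                     unsigned long nsamplespersec)
{
    return (unsigned long)nchannels * nsamplespersec / 2;
}

/* one compressed 16 bit word per channel per block */
unsigned aptx_block_align(unsigned nchannels)
{
    return 2 * nchannels;   /* 2 bytes for mono, 4 for stereo */
}
```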

the definition of the data contained in the aptx format is considered proprietary information of audio processing technology limited. they can be contacted at:

audio processing technology limited
edgewater road
belfast, northern ireland, bt3 9qj
tel 44 232 371110
fax 44 232 371137

this is a proprietary audio format using 4:1 compression, i.e. 16 bits of audio are compressed to 4 bits. it is only encoded/decoded by dedicated hardware from mm_apt.

audiofile wave type af10

added: june 22, 1993
author: audiofile

fact chunk

this chunk is required for all wave formats other than wave_format_pcm. it stores file dependent information about the contents of the wave data. it currently specifies the length of the data in samples.

wave format header

# define wave_format_audiofile_af10 (0x0026)

wformattag this must be set to wave_format_audiofile_af10
nchannels number of channels in the wave.(1 for mono)
nsamplespersec frequency of the sample rate of the wave file.
navgbytespersec average data rate.
playback software can estimate the buffer size using the <navgbytespersec> value.
nblockalign block alignment of the data.
playback software needs to process a multiple of <nblockalign> bytes of data at a time, so that the value of <nblockalign> can be used for buffer alignment.
wbitspersample this is the number of bits per sample of data.
cbsize the size in bytes of the extra information in the extended wave 'fmt' header.

for more information on af10 and other audiofile host-based and dsp-based compression software contact:

audiofile, inc.
four militia drive
lexington, ma, 02173
(617) 861-2996

dolby labs ac-2 wave type

added: 06/24/93
author: dolby laboratories, inc.

fact chunk

this chunk is required for all wave formats other than wave_format_pcm. it stores file dependent information about the contents of the wave data. it currently specifies the length of the data in samples.

wave format header

# define wave_format_dolby_ac2 (0x0030)

wformattag this must be set to wave_format_dolby_ac2
nchannels number of channels, 1 for mono, 2 for stereo
nsamplespersec three sample rates allowed: 48000, 44100, 32000 samples per second
navgbytespersec average data rate. (nsamplespersec * nblockalign) / 512
nblockalign the block alignment (in bytes) of the data in <data-ck>. given in the table below:
nsamplespersec nblockalign
48000 nchannels*168
44100 nchannels*184
32000 nchannels*190
wbitspersample approximately 3 bits per sample
cbextrasize 2 extra bytes of information in format header
nauxbitscode auxiliary bits code indicating number of aux. bits per block. the amount of audio data bits is reduced by this number in the decoder, such that the overall block size remains constant.
nauxbitscode number of aux bits in block
0 0
1 8
2 16
3 32
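the relationships between the fields above can be checked with a pair of small helpers. the function names are invented for illustration; the values come directly from the table and the navgbytespersec formula.

```c
/* nblockalign per channel depends on the sample rate, per the table
   above; returns 0 for a sample rate the format does not allow */
unsigned long ac2_block_align(unsigned nchannels,
                              unsigned long nsamplespersec)
{
    switch (nsamplespersec) {
    case 48000: return nchannels * 168UL;
    case 44100: return nchannels * 184UL;
    case 32000: return nchannels * 190UL;
    default:    return 0;
    }
}

/* navgbytespersec = (nsamplespersec * nblockalign) / 512 */
unsigned long ac2_avg_bytes_per_sec(unsigned long nsamplespersec,
                                    unsigned long nblockalign)
{
    return nsamplespersec * nblockalign / 512;
}
```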

specific structure of the chunk is proprietary, and may be obtained from dolby laboratories. also contact dolby for methods of including chunks.

dolby laboratories
100 potrero avenue
san francisco, ca 94103-4813
tel 415-558-0200

/* dolby's ac-2 wave format structure definition */

typedef struct dolbyac2waveformat_tag {

waveformatex wfx;

word nauxbitscode;

} dolbyac2waveformat;

sierra adpcm

added 07/26/93
author: sierra semiconductor corp.

sierra semiconductor has developed a compression scheme similar to the standard ccitt adpcm. this scheme has been implemented in aria-based sound boards and is capable of supporting compression and decompression in real time. a description of the algorithm is not available at this time.

fact chunk

this chunk is required for all wave formats other than wave_format_pcm. it stores file dependent information about the contents of the wave data. it currently specifies the time length of the data in samples.

wave format header

typedef struct sierra_adpcmwaveformat_tag {

extwaveformat ewf;

word wrevision;

} sierraadpcmwaveformat;

# define wave_format_sierra_adpcm (0x0013)

wformattag this must be set to wave_format_sierra_adpcm.
nchannels number of channels in the wave, 1 for mono, 2 for stereo.
nsamplespersec frequency of the sample rate of the wave file. this should be 22050 hz. other sample rates are not currently allowed.
navgbytespersec average data rate.
playback software can estimate the buffer size using the <navgbytespersec> value.
nblockalign this is dependent upon the number of bits per sample.
  wbitspersample nchannels nblockalign
  4 1 1
  4 2 1
  playback software needs to process a multiple of <nblockalign> bytes of data at a time, so that the value of <nblockalign> can be used for buffer alignment.
wbitspersample this is the number of bits per sample of sierra adpcm. currently only 4 bits per sample is defined. other values are reserved.
cbextrasize the size in bytes of the extra information in the extended wave 'fmt' header. this should be 2.
wrevision revision of algorithm. this should be 0x0100 for the current definition.

videologic wave types

added: 07/13/93
author: videologic

fact chunk

wave format header

# define wave_format_mediaspace_adpcm (0x0012)

//

// videologic's mediaspace adpcm structure definitions

//

// for wave_format_mediaspace_adpcm (0x0012)

//

//

 

typedef struct mediaspace_adpcmwaveformat_tag {

waveformatex wfx;

word wrevision;

} mediaspaceadpcmwaveformat;

typedef mediaspaceadpcmwaveformat *pmediaspaceadpcmwaveformat;

typedef mediaspaceadpcmwaveformat near *npmediaspaceadpcmwaveformat;

typedef mediaspaceadpcmwaveformat far *lpmediaspaceadpcmwaveformat;

ccitt g.723 adpcm

added: 08/25/93
author: antex electronics corp.

the header format for g.723 is essentially the same as for g.721.

fact chunk

wave format header

# define wave_format_g723_adpcm (0x0014)

wformattag this must be set to wave_format_g723_adpcm
nchannels number of channels in the wave, 1 for mono, 2 for stereo
nsamplespersec frequency the sample rate of the wave file. (8000, 11025, 22050, 44100)
navgbytespersec average data rate
playback software can estimate the buffer size using the <navgbytespersec> value.
nblockalign this is dependent upon the number of bits per sample.
  wbitspersample nchannels nblockalign
  3 1 48 + nauxblocksize
  3 2 96 + nauxblocksize
  5 1 80 + nauxblocksize
  5 2 160 + nauxblocksize
  playback software needs to process a multiple of <nblockalign> bytes of data at a time, so that the value of <nblockalign> can be used for buffer alignment.
wbitspersample this is the number of bits per sample of data. (g.723 can be 3 or 5)
cbextrasize the size in bytes of the extra information in the extended wave 'fmt' header. this should be 2.
nauxblocksize this is the size in bytes of auxiliary data that is stored at the beginning of each data block. in most instances this should be set to 0.

see the g.723 specification for algorithm details.

data format

mono, 3 bits per sample

grouped into 3 byte sub-blocks containing 8 mono samples. the bit ordering for samples labeled a through h is:

byte 1 byte 2 byte 3

where a2 is the msb and a0 is the lsb of the first sample.

stereo, 3 bits per sample

grouped into 6 byte sub-blocks containing 8 stereo samples. the bit ordering for samples labeled a through h is:

byte 1 byte 2

 

byte 3 byte 4

 

byte 5 byte 6

where al2 is the msb and al0 is the lsb of the first left sample, and ar2 is the msb and ar0 is the lsb of the first right sample.

mono, 5 bits per sample

grouped into 5 byte sub-blocks containing 8 mono samples. the bit ordering for samples labeled a through h is:

byte 1 byte 2 byte 3

 

byte 4 byte 5

where a4 is the msb and a0 is the lsb of the first sample.

stereo, 5 bits per sample

grouped into 10 byte sub-blocks containing 8 stereo samples. the bit ordering for samples labeled a through h is:

byte 1 byte 2

 

byte 3 byte 4

 

byte 5 byte 6

 

byte 7 byte 8

 

byte 9 byte 10

where al4 is the msb and al0 is the lsb of the first left sample, and ar4 is the msb and ar0 is the lsb of the first right sample.
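the four layouts above all pack fixed width codes into byte sub-blocks. the original bit order diagrams were figures that did not survive in this copy, so the sketch below assumes codes fill each byte starting at the least significant bit; treat that ordering as an assumption rather than the normative layout. pack_codes and its signature are invented for illustration.

```c
/* pack n codes of the given bit width into ceil(n * bits / 8) bytes */
void pack_codes(const unsigned char *code, int n, int bits,
                unsigned char *out)
{
    unsigned long acc = 0;   /* bit accumulator */
    int nbits = 0, o = 0, i;

    for (i = 0; i < n; i++) {
        acc |= (unsigned long)(code[i] & ((1u << bits) - 1)) << nbits;
        nbits += bits;
        while (nbits >= 8) {         /* flush full bytes */
            out[o++] = (unsigned char)(acc & 0xff);
            acc >>= 8;
            nbits -= 8;
        }
    }
    if (nbits > 0)                   /* flush any remainder */
        out[o] = (unsigned char)(acc & 0xff);
}
```

with bits = 3 this yields the 3 byte mono sub-block of eight samples, and with bits = 5 the 5 byte mono sub-block; the stereo interleaving described above is handled by the caller.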

dialogic oki adpcm

added: 04/07/94
author: dialogic

fact chunk

this chunk is required for all wave formats other than wave_format_pcm. it stores file dependent information about the contents of the wave data. it currently specifies the time length of the data in samples.

wave format header

#define wave_format_dialogic_oki_adpcm (0x0203)

wformattag this must be set to wave_format_dialogic_oki_adpcm.
nchannels number of channels in the wave. 1
nsamplespersec frequency of the sample rate of the wave file. 6000 or 8000
navgbytespersec average data rate. 3000 or 4000, respectively
playback software can estimate the buffer size using the <navgbytespersec> value.
nblockalign block alignment of the data. 1
playback software needs to process a multiple of <nblockalign> bytes of data at a time, so that the value of <nblockalign> can be used for buffer alignment.
wbitspersample this is the number of bits per sample of data. 4
cbsize the size in bytes of the extra information in the extended wave 'fmt' header. 0

this format can be created and read either by an oki adpcm chip set or by a firmware program.

control resources limited vqlpc

added: 04/05/94
author: control resources limited

fact chunk

this chunk is required for all wave formats other than wave_format_pcm. it stores file dependent information about the contents of the wave data. it currently specifies the time length of the data in samples.

wave format header

#define wave_format_control_res_vqlpc (0x0034)

wformattag this must be set to wave_format_control_res_vqlpc
nchannels number of channels in the wave.(1 for mono)
nsamplespersec frequency of the sample rate of the wave file. 8000
navgbytespersec average data rate. 394
playback software can estimate the buffer size using the <navgbytespersec> value.
nblockalign block alignment of the data in bytes. 18
playback software needs to process a multiple of <nblockalign> bytes of data at a time, so that the value of <nblockalign> can be used for buffer alignment.
wbitspersample this is the number of bits per sample of data. 4
cbsize the size in bytes of the extra information in the extended wave 'fmt' header. 2
wcomptype this value is reserved and should be set to 1

vqlpc is a trademark of control resources ltd.

control resources limited cr10

added: 04/05/94
author: control resources limited

fact chunk

this chunk is required for all wave formats other than wave_format_pcm. it stores file dependent information about the contents of the wave data. it currently specifies the time length of the data in samples.

wave format header

#define wave_format_control_res_cr10 (0x0037)

wformattag this must be set to wave_format_control_res_cr10.
nchannels number of channels in the wave.(1 for mono)
nsamplespersec frequency of the sample rate of the wave file.
navgbytespersec average data rate.
playback software can estimate the buffer size using the <navgbytespersec> value.
nblockalign block alignment of the data.
playback software needs to process a multiple of <nblockalign> bytes of data at a time, so that the value of <nblockalign> can be used for buffer alignment.
wbitspersample this is the number of bits per sample of data.
cbsize the size in bytes of the extra information in the extended wave 'fmt' header.

data not available at time of printing.

g.721 wave format header

added: 08/25/93
author: antex electronics corp.

the header format for g.721 is essentially the same as for g.723.

fact chunk

wave format header

# define wave_format_g721_adpcm (0x0040)

wformattag this must be set to wave_format_g721_adpcm.
nchannels number of channels in the wave.(1 for mono, 2 for stereo)
nsamplespersec frequency of the sample rate of the wave file.
navgbytespersec average data rate.
playback software can estimate the buffer size using the <navgbytespersec> value.
nblockalign block alignment of the data.
  nchannels nblockalign
  1 64+nauxblocksize
  2 128+nauxblocksize
  playback software needs to process a multiple of <nblockalign> bytes of data at a time, so that the value of <nblockalign> can be used for buffer alignment.
wbitspersample this is the number of bits per sample of data. this should be 4.
cbsize the size in bytes of the extra information in the extended wave 'fmt' header. this should be 2.
nauxblocksize this is the size in bytes of auxiliary data that is stored at the beginning of each data block. in most instances this should be set to 0.

see the g.721 specification for algorithm details.

this is a ccitt (international telegraph and telephone consultative committee) specification. their address is:

palais des nations
ch-1211 geneva 10, switzerland
phone: 22 7305111

data format

mono, 4 bits per sample

grouped into 1 byte sub-blocks containing 2 mono samples. the bit ordering for samples labeled a and b is:

where a3 is the msb and a0 is the lsb of the first sample and b3 is the msb and b0 is the lsb of the second sample.

stereo, 4 bits per sample

grouped into 1 byte sub-blocks containing 1 stereo sample. the bit ordering for one stereo sample is:

where l3 is the msb and l0 is the lsb of the left sample, and r3 is the msb and r0 is the lsb of the right sample.
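packing a pair of 4 bit g.721 codes into one byte is a single expression. the byte layout figure was lost in this copy, so this sketch assumes the first sample (or the left sample, for stereo) occupies the low nibble; treat that ordering as an assumption rather than the normative layout. the function name is invented here.

```c
/* pack two 4 bit codes into one byte, first/left code in the low nibble */
unsigned char g721_pack_pair(unsigned char first, unsigned char second)
{
    return (unsigned char)((first & 0x0f) | ((unsigned)(second & 0x0f) << 4));
}
```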

adpcme wave format header

added: 10/23/93
author: antex electronics corp.

fact chunk

wave format header

# define wave_format_adpcme (0x0033)

wformattag this must be set to wave_format_adpcme.
nchannels number of channels in the wave.(1 for mono, 2 for stereo)
nsamplespersec frequency of the sample rate of the wave file.
navgbytespersec average data rate.
playback software can estimate the buffer size using the <navgbytespersec> value.
nblockalign block alignment of the data, 1 for mono, 2 for stereo.
wbitspersample this is the number of bits per sample of data. this should be 4.
cbextrasize 0

data format

mono nibbles are labelled m and left and right samples labelled l and r.

mono adpcme

byte 0 byte 1 byte 2 byte 3

stereo adpcme

byte 0 byte 1 byte 2 byte 3

note: stereo nibble ordering is deliberately different from the mono order.

gsm610 wave type

added: 09/05/93
author: microsoft

fact chunk

wave format header

typedef struct gsm610waveformat_tag {

waveformatex wfx;
word wsamplesperblock;

} gsm610waveformat;

typedef gsm610waveformat *pgsm610waveformat;

typedef gsm610waveformat near *npgsm610waveformat;

typedef gsm610waveformat far *lpgsm610waveformat;

#define wave_format_gsm610 (0x0031)

wformattag this must be set to wave_format_gsm610
nchannels number of channels in the wave.(1 for mono)
nsamplespersec frequency of the sample rate of the wave file.
navgbytespersec average data rate.
playback software can estimate the buffer size using the <navgbytespersec> value.
nblockalign block alignment of the data.
playback software needs to process a multiple of <nblockalign> bytes of data at a time, so that the value of <nblockalign> can be used for buffer alignment.
wbitspersample this is the number of bits per sample of data.
cbsize the size in bytes of the extra information in the extended wave 'fmt' header.
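this section leaves the field values unspecified, so the sketch below fills in the values commonly used by microsoft's gsm 6.10 implementation at 8 khz (a 65 byte block carrying 320 samples, giving 1625 bytes per second). these numbers are typical usage rather than values stated here, and the flattened struct mirrors gsm610waveformat for illustration only.

```c
#include <string.h>

typedef unsigned short word;
typedef unsigned long dword;

/* a local, flattened mirror of gsm610waveformat for illustration */
typedef struct {
    word  wformattag;
    word  nchannels;
    dword nsamplespersec;
    dword navgbytespersec;
    word  nblockalign;
    word  wbitspersample;
    word  cbsize;
    word  wsamplesperblock;
} gsm610waveformat_sketch;

void gsm610_init(gsm610waveformat_sketch *f)
{
    memset(f, 0, sizeof *f);
    f->wformattag       = 0x0031;            /* wave_format_gsm610 */
    f->nchannels        = 1;
    f->nsamplespersec   = 8000;
    f->nblockalign      = 65;                /* bytes per block */
    f->wsamplesperblock = 320;               /* samples decoded per block */
    f->navgbytespersec  = 8000UL * 65 / 320; /* 1625 */
    f->wbitspersample   = 0;                 /* not meaningful for gsm */
    f->cbsize           = 2;                 /* the wsamplesperblock word */
}
```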

dsp solutions real wave type

added 02/03/94
author: dsp solutions (formerly digispeech)

fact chunk

this chunk is required for all wave formats other than wave_format_pcm. it stores file dependent information about the contents of the wave data. it currently specifies the time length of the data in samples.

wave format header

the extended wave format structure is used to define all non-pcm format wave data, and is described as follows in the include file mmreg.h:

/* general extended waveform format structure */

/* use this for all non pcm formats */

/* (information common to all formats) */

typedef struct waveformat_extended_tag {

word wformattag; /* format type */

word nchannels; /* number of channels (i.e. mono, stereo...) */

dword nsamplespersec; /* sample rate */

dword navgbytespersec; /* for buffer estimation */

word nblockalign; /* block size of data */

word wbitspersample; /* number of bits per sample of mono data */

word cbsize; /* the count in bytes of the extra size */

} waveformatex;

#define wave_format_digireal (0x0035)

wformattag must be set to wave_format_digireal
nchannels number of channels in the wave, 1 for mono.
nsamplespersec frequency of the sample rate of the wave file. this should be 8000. other sample rates are allowed, but not encouraged. this rate is also used by the sample size entry in the fact chunk to determine the length in time of the data.
navgbytespersec average data rate (1650).
playback software can estimate the buffer size using the <navgbytespersec> value.
nblockalign the block alignment (in bytes) of the data in <data-ck> (13).
playback software needs to process a multiple of <nblockalign> bytes of data at a time, so that the value of <nblockalign> can be used for buffer alignment.
wbitspersample this is the number of bits per sample per channel of data. (2). each channel is assumed to have the same sample resolution. if this field is not needed, then it should be set to zero.
cbsize the size in bytes of the extra information in the extended wave 'fmt' header. this should be 0.

dsp solutions adpcm wave type

added 02/03/94
author: dsp solutions (formerly digispeech)

fact chunk

this chunk is required for all wave formats other than wave_format_pcm. it stores file dependent information about the contents of the wave data. it currently specifies the time length of the data in samples.

waveformatex

the extended wave format structure is used to define all non-pcm format wave data, and is described as follows in the include file mmreg.h:

/* general extended waveform format structure */
/* Use this for all NON PCM formats */
/* (Information common to all formats) */
typedef struct waveformat_extended_tag {
    WORD  wFormatTag;      /* format type */
    WORD  nChannels;       /* number of channels (i.e. mono, stereo...) */
    DWORD nSamplesPerSec;  /* sample rate */
    DWORD nAvgBytesPerSec; /* for buffer estimation */
    WORD  nBlockAlign;     /* block size of data */
    WORD  wBitsPerSample;  /* number of bits per sample of mono data */
    WORD  cbSize;          /* the count in bytes of the extra size */
} WAVEFORMATEX;

#define WAVE_FORMAT_DIGIADPCM (0x0036)

wFormatTag  Must be set to WAVE_FORMAT_DIGIADPCM.
nChannels  Number of channels in the wave; 1 for mono, 2 for stereo.
nSamplesPerSec  Sample rate of the wave file, in Hz. This should be 11025, 22050, or 44100. Other sample rates are allowed.
nAvgBytesPerSec  Average data rate.
  Playback software can estimate the buffer size using the <nAvgBytesPerSec> value.
nBlockAlign  The block alignment (in bytes) of the data in <data-ck>.
  Playback software needs to process a multiple of <nBlockAlign> bytes of data at a time, so that the value of <nBlockAlign> can be used for buffer alignment.

  wBitsPerSample  nChannels  nBlockAlign
  3               1          3
  3               2          6

wBitsPerSample  The number of bits per sample per channel of data (3).
cbSize  The size in bytes of the extra information in the wave format. Should be 0.

MPEG-1 Audio (Audio-Only)

Added: 18/01/93
Author: Microsoft

Fact Chunk

This chunk is required for all wave formats other than WAVE_FORMAT_PCM. It stores file-dependent information about the contents of the wave data. It currently specifies the length of the data in samples.

Wave Format Header

#define WAVE_FORMAT_MPEG (0x0050)

typedef struct mpeg1waveformat_tag {
    WAVEFORMATEX wfx;
    WORD  fwHeadLayer;
    DWORD dwHeadBitrate;
    WORD  fwHeadMode;
    WORD  fwHeadModeExt;
    WORD  wHeadEmphasis;
    WORD  fwHeadFlags;
    DWORD dwPTSLow;
    DWORD dwPTSHigh;
} MPEG1WAVEFORMAT;

wFormatTag  This must be set to WAVE_FORMAT_MPEG.
nChannels  Number of channels in the wave; 1 for mono, 2 for stereo.
nSamplesPerSec  Sampling frequency (in Hz) of the wave file: 32000, 44100, or 48000. Note, however, that if the sampling frequency of the data is variable, this field should be set to zero. It is strongly recommended that a fixed sampling frequency be used for desktop applications.
nAvgBytesPerSec  Average data rate; this might not be a legal MPEG bit rate if variable bit rate coding under Layer 3 is used.
nBlockAlign  The block alignment (in bytes) of the data in <data-ck>. For audio streams which have a fixed audio frame length, the block alignment is equal to the length of the frame. For streams in which the frame length varies, nBlockAlign should be set to 1.
  With a sampling frequency of 32 or 48 kHz, the size of an MPEG audio frame is a function of the bit rate. If an audio stream uses a constant bit rate, the size of the audio frames does not vary. Therefore, the following formulas apply:

  Layer 1:        nBlockAlign = 4*(int)(12*BitRate/SamplingFreq)
  Layers 2 and 3: nBlockAlign = (int)(144*BitRate/SamplingFreq)

  Example 1: For Layer 1, with a sampling frequency of 32000 Hz and a bit rate of 256 kbits/s, nBlockAlign = 384 bytes.
  If an audio stream contains frames with different bit rates, then the length of the frames varies within the stream. Variable frame lengths also occur when using a sampling frequency of 44.1 kHz: in order to maintain the data rate at the nominal value, the size of an MPEG audio frame is periodically increased by one "slot" (4 bytes in Layer 1; 1 byte in Layers 2 and 3) as compared to the formulas given above. In these two cases, the concept of block alignment is invalid. The value of nBlockAlign must therefore be set to 1, so that MPEG-aware applications can tell whether the data is block-aligned or not.
  Note that it is possible to construct an audio stream which has constant-length audio frames at 44.1 kHz by setting the padding_bit in each audio frame header to the same value (either 0 or 1). Note, however, that the bit rate of the resulting stream will not correspond exactly to the nominal value in the frame header, and therefore some decoders may not be capable of decoding the stream correctly. In the interest of standardization and compatibility, this approach is discouraged.
wBitsPerSample  Not used; set to zero.
cbSize  The size in bytes of the extended information after the WAVEFORMATEX structure. For the standard WAVE_FORMAT_MPEG format, this is 22. If extra fields are added, this value will increase.
fwHeadLayer  The MPEG audio layer, as defined by the following flags:
  ACM_MPEG_LAYER1 - Layer 1.
  ACM_MPEG_LAYER2 - Layer 2.
  ACM_MPEG_LAYER3 - Layer 3.
  Some legal MPEG streams may contain frames of different layers. In this case, the above flags should be ORed together so that a driver may tell which layers are present in the stream.
dwHeadBitrate  The bit rate of the data, in bits per second. This value must be a standard bit rate according to the MPEG specification; not all bit rates are valid for all modes and layers. See Tables 1 and 2, below. Note that this field records the actual bit rate, not the MPEG frame header code. If the bit rate is variable, or if it is a non-standard bit rate, then this field should be set to zero. It is recommended that variable bit rate coding be avoided where possible.
fwHeadMode  Stream mode, as defined by the following flags:
  ACM_MPEG_STEREO - Stereo.
  ACM_MPEG_JOINTSTEREO - Joint-stereo.
  ACM_MPEG_DUALCHANNEL - Dual-channel (for example, a bilingual stream).
  ACM_MPEG_SINGLECHANNEL - Single channel.
  Some legal MPEG streams may contain frames of different modes. In this case, the above flags should be ORed together so that a driver may tell which modes are present in the stream. This situation is particularly likely with joint-stereo encoding, as encoders may find it useful to switch dynamically between stereo and joint-stereo according to the characteristics of the signal. In this case, both the ACM_MPEG_STEREO and the ACM_MPEG_JOINTSTEREO flags should be set.
fwHeadModeExt  Contains extra parameters for joint-stereo coding; not used for other modes. See Table 3, below. Some legal MPEG streams may contain frames of different mode extensions. In this case, the values in Table 3 may be ORed together. Note that fwHeadModeExt is only used for joint-stereo coding; for other modes (single channel, dual channel, or stereo), it should be set to zero.
  In general, encoders will dynamically switch between the various possible mode_extension values according to the characteristics of the signal. Therefore, for normal joint-stereo encoding, this field should be set to 0x000F. However, if it is desirable to limit the encoder to a particular type of joint-stereo coding, this field may be used to specify the allowable types.
wHeadEmphasis  Describes the de-emphasis required by the decoder; this implies the emphasis performed on the stream prior to encoding. See Table 4, below.
fwHeadFlags  Sets the corresponding flags in the audio frame header:
  ACM_MPEG_PRIVATEBIT - Sets the private bit.
  ACM_MPEG_COPYRIGHT - Sets the copyright bit.
  ACM_MPEG_ORIGINALHOME - Sets the original/home bit.
  ACM_MPEG_PROTECTIONBIT - Sets the protection bit, and inserts a 16-bit error protection code into each frame.
  ACM_MPEG_ID_MPEG1 - Sets the ID bit to 1, defining the stream as an MPEG-1 audio stream. This flag must always be set explicitly to maintain compatibility with future MPEG audio extensions (i.e. MPEG-2).
  An encoder will use the value of these flags to set the corresponding bits in the header of each MPEG audio frame. When describing an encoded data stream, these flags represent a logical OR of the flags set in each frame header. That is, if the copyright bit is set in one or more frame headers in the stream, then the ACM_MPEG_COPYRIGHT flag will be set. Therefore, the value of these flags is not necessarily valid for every audio frame.
dwPTSLow  This field (together with the following field) contains the presentation time stamp (PTS) of the first frame of the audio stream, as taken from the MPEG system layer. dwPTSLow contains the 32 LSBs of the 33-bit PTS. The PTS may be used to aid in the re-integration of an audio stream with an associated video stream. If the audio stream is not associated with a system layer, then this field should be set to zero.
dwPTSHigh  This field (together with the previous field) contains the presentation time stamp (PTS) of the first frame of the audio stream, as taken from the MPEG system layer. The LSB of dwPTSHigh contains the MSB of the 33-bit PTS. The PTS may be used to aid in the re-integration of an audio stream with an associated video stream. If the audio stream is not associated with a system layer, then this field should be set to zero.
  Note: the previous two fields can be treated as a single 64-bit integer; optionally, the dwPTSHigh field can be tested as a flag to determine whether the MSB is set or cleared.
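The note above suggests treating the two PTS fields as one 64-bit integer; a minimal sketch of that combination:

```c
#include <stdint.h>

/* Sketch (not part of mmreg.h): combine dwPTSLow and dwPTSHigh into the
 * 33-bit PTS as a single 64-bit integer. Only the LSB of dwPTSHigh is
 * meaningful: it is bit 32 of the PTS. */
uint64_t mpeg_pts(uint32_t dwPTSLow, uint32_t dwPTSHigh)
{
    return ((uint64_t)(dwPTSHigh & 1) << 32) | dwPTSLow;
}
```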

Table 1: Allowable bit rates (bits/s)

MPEG frame header code  Layer 1      Layer 2      Layer 3
'0000'                  free format  free format  free format
'0001'                  32000        32000        32000
'0010'                  64000        48000        40000
'0011'                  96000        56000        48000
'0100'                  128000       64000        56000
'0101'                  160000       80000        64000
'0110'                  192000       96000        80000
'0111'                  224000       112000       96000
'1000'                  256000       128000       112000
'1001'                  288000       160000       128000
'1010'                  320000       192000       160000
'1011'                  352000       224000       192000
'1100'                  384000       256000       224000
'1101'                  416000       320000       256000
'1110'                  448000       384000       320000
'1111'                  forbidden    forbidden    forbidden
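Table 1 and the nBlockAlign formulas can be sketched in code as follows; this is an illustration, not an API from mmreg.h, and it returns 0 for both the free-format and forbidden codes:

```c
#include <stdint.h>

/* Bit rates from Table 1, indexed by [layer-1][4-bit frame header code].
 * Code 0 is free format and code 15 is forbidden; both map to 0 here. */
static const uint32_t bitrate_table[3][16] = {
    /* Layer 1 */
    {0, 32000, 64000, 96000, 128000, 160000, 192000, 224000,
     256000, 288000, 320000, 352000, 384000, 416000, 448000, 0},
    /* Layer 2 */
    {0, 32000, 48000, 56000, 64000, 80000, 96000, 112000,
     128000, 160000, 192000, 224000, 256000, 320000, 384000, 0},
    /* Layer 3 */
    {0, 32000, 40000, 48000, 56000, 64000, 80000, 96000,
     112000, 128000, 160000, 192000, 224000, 256000, 320000, 0},
};

uint32_t mpeg_bitrate(int layer /* 1..3 */, unsigned code /* 0..15 */)
{
    return bitrate_table[layer - 1][code & 0xF];
}

/* The constant-bit-rate frame size formulas for 32 and 48 kHz:
 * Layer 1:        nBlockAlign = 4*(int)(12*BitRate/SamplingFreq)
 * Layers 2 and 3: nBlockAlign = (int)(144*BitRate/SamplingFreq) */
uint32_t mpeg_block_align(int layer, uint32_t bitrate, uint32_t freq)
{
    if (layer == 1)
        return 4 * (12 * bitrate / freq);
    return 144 * bitrate / freq;
}
```

With Example 1 from the nBlockAlign description (Layer 1, 32000 Hz, 256 kbits/s), `mpeg_block_align` reproduces the stated 384 bytes.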

Table 2: Allowable mode-bitrate combinations for Layer 2

Bit rate (bits/s)  Allowable modes
32000              single channel
48000              single channel
56000              single channel
64000              all modes
80000              single channel
96000              all modes
112000             all modes
128000             all modes
160000             all modes
192000             all modes
224000             stereo, intensity stereo, dual channel
256000             stereo, intensity stereo, dual channel
320000             stereo, intensity stereo, dual channel
384000             stereo, intensity stereo, dual channel
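A sketch of a Table 2 validity check, using the ACM_MPEG_* mode flag values defined later in this section (here "intensity stereo" is taken to correspond to the joint-stereo flag; this mapping is an assumption):

```c
#include <stdbool.h>
#include <stdint.h>

#define ACM_MPEG_STEREO        (0x0001)
#define ACM_MPEG_JOINTSTEREO   (0x0002)
#define ACM_MPEG_DUALCHANNEL   (0x0004)
#define ACM_MPEG_SINGLECHANNEL (0x0008)

/* Check a Layer 2 mode/bit-rate combination against Table 2. */
bool layer2_mode_allowed(uint32_t bitrate, uint16_t mode)
{
    switch (bitrate) {
    case 32000: case 48000: case 56000: case 80000:
        return mode == ACM_MPEG_SINGLECHANNEL;  /* single channel only */
    case 224000: case 256000: case 320000: case 384000:
        return mode != ACM_MPEG_SINGLECHANNEL;  /* stereo, intensity stereo,
                                                   dual channel */
    case 64000: case 96000: case 112000:
    case 128000: case 160000: case 192000:
        return true;                            /* all modes */
    default:
        return false;                           /* not a legal Layer 2 rate */
    }
}
```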

Table 3: Mode extension

fwHeadModeExt  MPEG frame header code  Layers 1 and 2                      Layer 3
0x0001         '00'                    subbands 4-31 in intensity stereo   no intensity or MS-stereo coding
0x0002         '01'                    subbands 8-31 in intensity stereo   intensity stereo
0x0004         '10'                    subbands 12-31 in intensity stereo  MS-stereo
0x0008         '11'                    subbands 16-31 in intensity stereo  both intensity and MS-stereo coding

Table 4: Emphasis field

wHeadEmphasis  MPEG frame header code  De-emphasis required
1              '00'                    no emphasis
2              '01'                    50/15 ms emphasis
3              '10'                    reserved
4              '11'                    CCITT J.17

Flags

The following flags are defined for the fwHeadLayer field. For encoding, one of these flags should be set so that the encoder knows what layer to use. For decoding, the driver can check these flags to determine whether it is capable of decoding the stream. Note that a legal MPEG stream may use different layers in different frames within a single stream. Therefore, more than one of these flags may be set.

#define ACM_MPEG_LAYER1 (0x0001)
#define ACM_MPEG_LAYER2 (0x0002)
#define ACM_MPEG_LAYER3 (0x0004)

The following flags are defined for the fwHeadMode field. For encoding, one of these flags should be set so that the encoder knows what mode to use; for joint-stereo encoding, typically the ACM_MPEG_STEREO and ACM_MPEG_JOINTSTEREO flags will both be set so that the encoder can use joint-stereo coding only when it is more efficient than stereo. For decoding, the driver can check these flags to determine whether it is capable of decoding the stream. Note that a legal MPEG stream may use different modes in different frames within a single stream. Therefore, more than one of these flags may be set.

#define ACM_MPEG_STEREO        (0x0001)
#define ACM_MPEG_JOINTSTEREO   (0x0002)
#define ACM_MPEG_DUALCHANNEL   (0x0004)
#define ACM_MPEG_SINGLECHANNEL (0x0008)

Table 3 defines flags for the fwHeadModeExt field. This field is only used for joint-stereo coding; for other encoding modes, it should be set to zero. For joint-stereo encoding, these flags indicate the types of joint-stereo encoding which an encoder is permitted to use. Normally, an encoder will dynamically select the mode extension which is most appropriate for the input signal; therefore, an application would typically set this field to 0x000F so that the encoder may select between all possibilities. However, it is possible to limit the encoder by clearing some of the flags. For an encoded stream, this field indicates the values of the MPEG mode_extension field which are present in the stream.

The following flags are defined for the fwHeadFlags field. These flags should be set before encoding so that the appropriate bits are set in the MPEG frame header. When describing an encoded MPEG audio stream, these flags represent a logical OR of the corresponding bits in the header of each audio frame. That is, if the bit is set in any of the frames, it is set in the fwHeadFlags field. If an application wraps a RIFF WAVE header around a pre-encoded MPEG audio bit stream, it is responsible for parsing the bit stream and setting the flags in this field.

#define ACM_MPEG_PRIVATEBIT    (0x0001)
#define ACM_MPEG_COPYRIGHT     (0x0002)
#define ACM_MPEG_ORIGINALHOME  (0x0004)
#define ACM_MPEG_PROTECTIONBIT (0x0008)
#define ACM_MPEG_ID_MPEG1      (0x0010)

Data

The data chunk consists of an MPEG-1 audio sequence as defined by the ISO 11172 specification, part 3 (audio). This sequence consists of a bit stream, which is stored in the data chunk as an array of bytes. Within a byte, the MSB is the first bit of the stream, and the LSB is the last bit. The data is not byte-reversed. For example, the following data consists of the first 16 bits (from left to right) of a typical audio frame header:

  syncword      ID  layer  protection_bit  ...
  111111111111  1   10     1               ...

This data would be stored in bytes in the following order:

  Byte0  Byte1  ...
  FF     FD     ...
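The bit layout above can be sketched as a small parser; the struct and function names here are illustrative, not part of any header:

```c
#include <stdint.h>

/* The first header fields of an MPEG audio frame, in the order shown
 * above (MSB first, not byte-reversed). */
typedef struct {
    unsigned syncword;       /* 12 bits, 0xFFF for a valid frame */
    unsigned id;             /* 1 bit, 1 for MPEG-1 */
    unsigned layer;          /* 2-bit layer code */
    unsigned protection_bit; /* 1 bit */
} mpeg_header_start;

mpeg_header_start parse_header_start(const uint8_t b[2])
{
    unsigned v = ((unsigned)b[0] << 8) | b[1]; /* 16 bits, MSB first */
    mpeg_header_start h;
    h.syncword       = (v >> 4) & 0xFFF;
    h.id             = (v >> 3) & 1;
    h.layer          = (v >> 1) & 3;
    h.protection_bit = v & 1;
    return h;
}
```

Applied to the FF FD example above, this yields syncword 0xFFF, ID 1, layer code '10' (2), and protection bit 1.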

MPEG Audio Frames

An MPEG audio sequence consists of a series of audio frames, each of which begins with a frame header. Most of the fields within this frame header correspond to fields in the MPEG1WAVEFORMAT structure defined above. For encoding, these fields can be set in the MPEG1WAVEFORMAT structure, and the driver can use this information to set the appropriate bits in the frame header when it encodes. For decoding, a driver can check these fields to determine whether it is capable of decoding the stream.

Encoding

A driver which encodes an MPEG audio stream should read the header fields in the MPEG1WAVEFORMAT structure and set the corresponding bits in the MPEG frame header. If there is any other information which a driver requires, it must get this information either from a configuration dialog box or through a driver callback function. For more information, see the Ancillary Data section, below.

If a pre-encoded MPEG audio stream is wrapped with a RIFF header, it is the responsibility of the application to parse the bit stream and set the fields in the MPEG1WAVEFORMAT structure. If the sampling frequency or the bit rate index is not constant throughout the data stream, the driver should set the corresponding MPEG1WAVEFORMAT fields (nSamplesPerSec and dwHeadBitrate) to zero, as described above. If the stream contains frames of more than one layer, it should set the flags in fwHeadLayer for all layers which are present in the stream. Since fields such as fwHeadFlags can vary from frame to frame, caution must be used in setting and testing these flags; in general, an application should not rely on them to be valid for every frame. When setting these flags, adhere to the following guidelines:

•  ACM_MPEG_COPYRIGHT should be set if any of the frames in the stream have the copyright bit set.

•  ACM_MPEG_PROTECTIONBIT should be set if any of the frames in the stream have the protection bit set.

•  ACM_MPEG_ORIGINALHOME should be set if any of the frames in the stream have the original/home bit set. This bit may be cleared if a copy of the stream is made.

•  ACM_MPEG_PRIVATEBIT should be set if any of the frames in the stream have the private bit set.

•  ACM_MPEG_ID_MPEG1 should be set if any of the frames in the stream have the ID bit set. For MPEG-1 streams, the ID bit should always be set; however, future extensions of MPEG (such as the MPEG-2 multi-channel format) may have the ID bit cleared.

If the MPEG audio stream was taken from a system-layer MPEG stream, or if the stream is intended to be integrated into the system layer, then the presentation time stamp (PTS) fields may be used. The PTS is a field in the MPEG system layer which is used for synchronization of the various streams. The MPEG PTS field is 33 bits, and therefore the RIFF WAVE format header stores the value in two fields: dwPTSLow contains the 32 LSBs of the PTS, and dwPTSHigh contains the MSB. These two fields may be taken together as a 64-bit integer; optionally, the dwPTSHigh field may be tested as a flag to determine whether the MSB is set or cleared. When extracting an audio stream from the system layer, a driver should set the PTS fields to the PTS of the first frame of the audio data. This may later be used to re-integrate the stream into the system layer. The PTS fields should not be used for any other purpose. If the audio stream is not associated with the MPEG system layer, then the PTS fields should be set to zero.

Decoding

A driver may test the fields in the MPEG1WAVEFORMAT structure to determine whether it is capable of decoding the stream. However, the driver must be aware that some fields, such as the fwHeadFlags field, may not be consistent for every frame in the bit stream. A driver should never use the fields of the MPEG1WAVEFORMAT structure to perform the actual decoding; the decoding parameters should be taken entirely from the MPEG data stream.

A driver may check the nSamplesPerSec field to determine whether it supports the sampling frequency specified. If the MPEG stream contains data with a variable sampling rate, then the nSamplesPerSec field will be set to zero. If the driver cannot handle this type of data stream, then it should not attempt to decode the data, but should fail immediately.

Ancillary Data

The audio data in an MPEG audio frame may not fill the entire frame. Any remaining data is called ancillary data. This data may have any format desired, and may be used to pass additional information of any kind. If a driver wishes to support the ancillary data, it must have a facility for passing the data to and from the calling application. The driver may use a callback function for this purpose. Basically, the driver may call a specified callback function whenever it has ancillary data to pass to the application (i.e. on decode) or whenever it requires more ancillary data (on encode).

Drivers should be aware that not all applications will want to process the ancillary data. Therefore, a driver should only provide this service when explicitly requested by the application. The driver may define a custom message which enables and disables the callback facility. Separate messages could be defined for the encoding and decoding operations for more flexibility.

If the callback facility is enabled, then the application is responsible for creating a callback function which is capable of processing the ancillary data. Typically, the application already has a callback defined in order to feed data blocks to the wave device as they are needed; this callback processes the WOM_CLOSE, WOM_DONE, and WOM_OPEN messages, and/or the WIM_CLOSE, WIM_DATA, and WIM_OPEN messages. The address of the callback function (or a window handle) is passed to the driver by the waveOutOpen or waveInOpen call in the dwCallback parameter. Two additional messages must be defined by the driver and supported by the callback: one to pass ancillary data back to the application (e.g. WOM_ANCDATA_OUT), and one to request ancillary data from the application (e.g. WIM_ANCDATA_IN).

As message parameters, WOM_ANCDATA_OUT could pass a pointer to a data buffer, and a size parameter indicating the number of bits (or bytes) of data in the buffer. The buffer would be allocated by the driver, and freed after the message has been processed by the callback. The driver could pass back the ancillary data frame by frame as it is received, or it could process an entire block of data and pass back the ancillary data in a single large chunk. The method is up to the driver, or could be configurable either through a configuration dialog or as a parameter passed when the ancillary data functions are enabled by the application.

To request ancillary data, the WIM_ANCDATA_IN message could pass a pointer to an empty data buffer, which the callback function would fill with ancillary data. If the amount of ancillary data varies from frame to frame, the first few bytes of the buffer could be defined to be the number of bits (or bytes) of data. This buffer would be allocated and freed by the driver; in order to ensure that there is enough space to hold all the data, the buffer size could be configurable using either a configuration dialog or by passing the value to the driver as a parameter when the ancillary data functions are enabled by the application.

Note that this method may not be appropriate for all drivers or all applications; it is included only as an illustration of how ancillary data may be supported. For more information, consult the Windows 3.1 Software Development Kit, "Multimedia Programmer's Reference," and the Windows 3.1 Device Driver Kit, "Multimedia Device Adaptation Guide."

Standards

It is recommended that applications use the 44.1 kHz sampling rate whenever possible, to maintain compatibility with current computer standards. It is also recommended that encoders avoid the use of variable bit rate coding, and it is strongly recommended that all bit streams use a constant sampling frequency. Streams which have a variable sampling frequency cannot be decoded to PCM for manipulation by other audio services.

References

ISO/IEC JTC1/SC29/WG11 MPEG, April 1992. ISO/IEC Draft International Standard: "Coding of moving pictures and associated audio for digital storage media up to about 1.5 Mbit/s."

Creative Labs, Inc. FastSpeech 8 & 10

Added: 03/02/94
Author: Creative Labs

Fact Chunk

This chunk is required for all wave formats other than WAVE_FORMAT_PCM. It stores file-dependent information about the contents of the wave data. It currently specifies the length of the data in samples.

Wave Format Header

#define WAVE_FORMAT_CREATIVE_FASTSPEECH8  (0x0202)
#define WAVE_FORMAT_CREATIVE_FASTSPEECH10 (0x0203)

wFormatTag  This must be set to WAVE_FORMAT_CREATIVE_FASTSPEECH8 or WAVE_FORMAT_CREATIVE_FASTSPEECH10.
nChannels  Number of channels in the wave (1 for mono).
nSamplesPerSec  Sample rate of the wave file: 8000 or 11025.
nAvgBytesPerSec  Average data rate.
  Playback software can estimate the buffer size using the <nAvgBytesPerSec> value.
nBlockAlign  Block alignment (in bytes) of the data: 32 for FastSpeech8 and 26 for FastSpeech10.
  Playback software needs to process a multiple of <nBlockAlign> bytes of data at a time, so that the value of <nBlockAlign> can be used for buffer alignment.
wBitsPerSample  The number of bits per sample of data.
cbExtraSize  The size in bytes of the extra information in the extended wave 'fmt' header: 2.
wRevision  Revision of the algorithm. This should be 1 for the current definition.
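As a sketch, the extra WORD described above extends the common header like this; the struct name is hypothetical (not from mmreg.h), and the WORD/DWORD typedefs stand in for the mmreg.h definitions:

```c
#include <stdint.h>

typedef uint16_t WORD;   /* stand-ins for the mmreg.h typedefs */
typedef uint32_t DWORD;

/* Hypothetical layout: WAVEFORMATEX followed by the single extra WORD,
 * matching the cbExtraSize value of 2 described above. */
typedef struct {
    WORD  wFormatTag;      /* WAVE_FORMAT_CREATIVE_FASTSPEECH8 or ...10 */
    WORD  nChannels;       /* 1 */
    DWORD nSamplesPerSec;  /* 8000 or 11025 */
    DWORD nAvgBytesPerSec;
    WORD  nBlockAlign;     /* 32 for FastSpeech8, 26 for FastSpeech10 */
    WORD  wBitsPerSample;
    WORD  cbSize;          /* 2: one extra WORD follows */
    WORD  wRevision;       /* 1 for the current definition */
} CREATIVEFASTSPEECHWAVEFORMAT; /* hypothetical name */
```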

Fujitsu FM Towns SND Wave Type

Added: 02/15/94
Author: Fujitsu

Fact Chunk

This chunk is required for all wave formats other than WAVE_FORMAT_PCM. It stores file-dependent information about the contents of the wave data. It currently specifies the length of the data in samples.

Wave Format Header

#define WAVE_FORMAT_FM_TOWNS_SND (0x0300)

wFormatTag  This must be set to WAVE_FORMAT_FM_TOWNS_SND.
nChannels  Number of channels in the wave: 1.
nSamplesPerSec  Sample rate of the wave file: 0-20833.
nAvgBytesPerSec  Average data rate. Same as the sampling rate.
  Playback software can estimate the buffer size using the <nAvgBytesPerSec> value.
nBlockAlign  Block alignment (in bytes) of the data: always 1.
  Playback software needs to process a multiple of <nBlockAlign> bytes of data at a time, so that the value of <nBlockAlign> can be used for buffer alignment.
wBitsPerSample  The number of bits per sample of data: always 8.
cbSize  The size in bytes of the extra information in the extended wave 'fmt' header.

Olivetti GSM

Added: 01/20/94
Author: Olivetti

Fact Chunk

This chunk is required for all wave formats other than WAVE_FORMAT_PCM. It stores file-dependent information about the contents of the wave data. It currently specifies the length of the data in samples.

Wave Format Header

#define WAVE_FORMAT_OLIGSM (0x1000)

wFormatTag  This must be set to WAVE_FORMAT_OLIGSM.
nChannels  Number of channels in the wave: 1 (mono) or 2.
nSamplesPerSec  Sample rate of the wave file: 8000.
nAvgBytesPerSec  Average data rate: 1633.
  Playback software can estimate the buffer size using the <nAvgBytesPerSec> value.
nBlockAlign  Block alignment of the data: 196.
  Playback software needs to process a multiple of <nBlockAlign> bytes of data at a time, so that the value of <nBlockAlign> can be used for buffer alignment.
wBitsPerSample  The number of bits per sample of data: 2.
cbSize  The size in bytes of the extra information in the extended wave 'fmt' header: 0.

Olivetti ADPCM

Added: 01/20/94
Author: Olivetti

Fact Chunk

This chunk is required for all wave formats other than WAVE_FORMAT_PCM. It stores file-dependent information about the contents of the wave data. It currently specifies the length of the data in samples.

Wave Format Header

#define WAVE_FORMAT_OLIADPCM (0x1001)

wFormatTag  This must be set to WAVE_FORMAT_OLIADPCM.
nChannels  Number of channels in the wave: 1 or 2.
nSamplesPerSec  Sample rate of the wave file: 8000.
nAvgBytesPerSec  Average data rate: 4000.
  Playback software can estimate the buffer size using the <nAvgBytesPerSec> value.
nBlockAlign  Block alignment of the data: 480.
  Playback software needs to process a multiple of <nBlockAlign> bytes of data at a time, so that the value of <nBlockAlign> can be used for buffer alignment.
wBitsPerSample  The number of bits per sample of data: 4.
cbSize  The size in bytes of the extra information in the extended wave 'fmt' header: 0.

Olivetti CELP

Added: 01/20/94
Author: Olivetti

Fact Chunk

This chunk is required for all wave formats other than WAVE_FORMAT_PCM. It stores file-dependent information about the contents of the wave data. It currently specifies the length of the data in samples.

Wave Format Header

#define WAVE_FORMAT_OLISBC (0x1003)

wFormatTag  This must be set to WAVE_FORMAT_OLISBC.
nChannels  Number of channels in the wave (1 for mono).
nSamplesPerSec  Sample rate of the wave file.
nAvgBytesPerSec  Average data rate.
  Playback software can estimate the buffer size using the <nAvgBytesPerSec> value.
nBlockAlign  Block alignment of the data.
  Playback software needs to process a multiple of <nBlockAlign> bytes of data at a time, so that the value of <nBlockAlign> can be used for buffer alignment.
wBitsPerSample  The number of bits per sample of data.
cbSize  The size in bytes of the extra information in the extended wave 'fmt' header.

Olivetti OPR

Added: 01/20/94
Author: Olivetti

Fact Chunk

This chunk is required for all wave formats other than WAVE_FORMAT_PCM. It stores file-dependent information about the contents of the wave data. It currently specifies the length of the data in samples.

Wave Format Header

#define WAVE_FORMAT_OLIOPR (0x1004)

More data not available at time of printing.

wFormatTag  This must be set to WAVE_FORMAT_OLIOPR.
nChannels  Number of channels in the wave (1 for mono).
nSamplesPerSec  Sample rate of the wave file.
nAvgBytesPerSec  Average data rate.
  Playback software can estimate the buffer size using the <nAvgBytesPerSec> value.
nBlockAlign  Block alignment of the data.
  Playback software needs to process a multiple of <nBlockAlign> bytes of data at a time, so that the value of <nBlockAlign> can be used for buffer alignment.
wBitsPerSample  The number of bits per sample of data.
cbSize  The size in bytes of the extra information in the extended wave 'fmt' header.



