Hello, welcome!
I haven't thought about this for a little while, but let's see. A few definitions first:
time = the duration in seconds, as a floating point value
bit depth = the amplitude resolution, i.e. the quality; usually 8, 16, or 24 bits per sample, but I think the max in DBC is 16.
channels = mono or stereo, i.e. 1 or 2 channels
sample rate = how many small pieces of sound (samples) pass through the sound generator in a second. Though it's often stated as a frequency, this is not the PITCH of a sound but the rate at which the information in the file is processed. The higher the rate, the more sound information per second, and thus the better the quality. CD quality is 44,100 samples per second; DVD audio typically runs at 96,000 samples per second.
bytes = ((time in seconds) * (bit depth / 8) * (channels) * (sample rate)) + header size
header = the file header. It can differ between sound formats. Once loaded into DBC, I'm not sure how the header is treated, unless the sound is converted to a memblock, in which case the header is 12 bytes.
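To make the formula concrete, here's a minimal sketch in Python (just illustrative; the function name and defaults are my own, and this isn't DBC code):

def sound_byte_offset(seconds, bit_depth=16, channels=1, sample_rate=44100, header_size=0):
    # Approximate byte offset into the sound data for a given playback time.
    # header_size defaults to 0; try 12 if the sound turns out to be stored
    # as a DBC memblock with its own header (see the header note above).
    bytes_per_sample = bit_depth // 8            # e.g. 16-bit -> 2 bytes per sample
    bytes_per_second = bytes_per_sample * channels * sample_rate
    return int(seconds * bytes_per_second) + header_size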
So if you had a monophonic sound playing at a sample rate of 44.1 kHz for 2 seconds with a bit depth of 16, the byte position would be:
bytes = (2 * (16/8) * 1 * 44100) + (header size) = 176,400 + (header size)
I'm not sure if, once the sound is loaded, it is converted internally to a memblock format with a custom header, in which case the header size is 12 bytes; the header size might also be 0 depending on how the sound is stored internally. The calculation is based on the file itself (a RIFF .wav), so it's probably a safe bet to leave the header size out of the calculation and experiment a little to see how close the position is.
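Plugging the example above into that sketch (again, illustrative Python, not DBC):

# 2 seconds, 16-bit, mono, 44.1 kHz, no header
print(sound_byte_offset(2, 16, 1, 44100))        # 176400
# Same sound, but assuming a 12-byte memblock header
print(sound_byte_offset(2, 16, 1, 44100, 12))    # 176412

If the position seems off by a small constant amount, that constant is probably the header size.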
Enjoy your day.