32bit float to 16bit integer for ZGY output
I've not done this before, so I'm basing this on web research.
To scale 32-bit float samples for storage as 16-bit integers I am planning to:
determine the maximum 32-bit float value in the dataset, then scale each sample by dividing it by that maximum, multiplying by 32767, and truncating:
sample[x] = (int16_t)(sample[x] / maxfloat * 32767);
Does this make sense or am I way off base?
Thanks, Terry Walters