A common question among new programmers is “Why are there so many sizes of variables available?” We have two different sizes of reals: float at 32 bits and double at 64 bits. We also have three different sizes of integers at 8, 16, and 32 bits each. In many languages there’s just real and integer with no size variation, so why does C offer so many choices? The reason is that “one size doesn’t fit all”: the options let you optimize your code. If you have a variable that ranges from, say, 0 to 1000, there’s no need to use more than a short (16 bit) integer. Using a 32 bit integer simply wastes memory. Now, you might consider 2 extra bytes to be no big deal, but remember that in some cases we are talking about embedded controllers, not desktop systems. A small controller may have only a few hundred bytes of memory available for data.
Even on desktop systems with gigabytes of memory, choosing the wrong size can be disastrous. For example, suppose you have a system with an analog to digital converter for audio. The CD standard sampling rate is 44,100 samples per second. Each sample is a 16 bit value (2 bytes), producing a data rate of 88,200 bytes per second per channel. Now imagine that you need enough memory for a five minute song in stereo. That works out to nearly 53 megabytes of memory. If you had chosen long (32 bit) integers to hold these data, you’d need about 106 megabytes instead. As the values placed on an audio CD never exceed 16 bits, it would be foolish to allocate more than 16 bits each for them. Data sizes are power-of-two multiples of a byte, though, so you can’t choose to have an integer of, say, 22 bits. It’s 8, 16, or 32 for the most part (some controllers have an upper limit of 16 bits).
In the case of reals, float is used where space is at a premium. It has a smaller range (size of exponent) and lower precision (number of significant digits) than double. double is generally preferred and is the norm for most math functions. Plain floats are sometimes referred to as singles (that is, single precision versus double precision).
If you don’t know the size of a particular data item (for example, an int might be either 16 or 32 bits depending on the hardware and compiler), you can use the sizeof operator. This looks like a function but it’s really built into the language. The argument is the item or expression you’re interested in, and the result is the size required in bytes.
size = sizeof( int );
size will be either 2 or 4 depending on the system.
In some systems, 80 bit doubles and/or 64 bit integers are also available.