Standard int sizes
I admit, in my first years of programming I was coddled by fairly standard definitions for variable sizes. A byte was a byte, a short was two, an int was four, a long was eight; a float was four, a double was eight. But then I bumped into the wonderful, terrible thing that is compiler-, platform-, and architecture-dependent variable sizes. An int of 16 bits, a long of 32, a double that was the same as a float, long longs being needed to get 64, blah blah blah. This only got worse as I started working on yet-unreleased processor architectures (such as AVX512), tiny microcontrollers (ints being 16 bits, and occasionally no support for anything bigger), and as I started looking into implementation specifics (intermediate values held as 80-bit floats).
Thankfully, we have stdint.h
to save the day. The header has more, and features our lovely int8_t, uint32_t, and so on. Until I learned about this, I satisfied myself with declaring my own typedefs to do the same job; it's much nicer when someone else does it for you.
They also do things I hadn't seen before, like defining int_leastX_t: an int_least32_t, for example, will be at least 32 bits, though it may be more. Then there's int_fastX_t, which is like int_leastX_t in that it guarantees a minimum width, but it also chooses whichever type should be fastest to operate on at that minimum width. And then there are intptr_t and uintptr_t, signed and unsigned integer types capable of holding a pointer: convert any valid void * to one of them and back again, and the resulting pointer compares equal to the original.
October 3, 2014