Signed integers exist because we need some way to store negative numbers in a computer. Imagine you have 32 bits of storage and you want to put data in there. We could put in 0x00000001 and say that represents 1 in decimal, as you might expect. If we subtract 1 from that, we get 0x00000000, which represents 0. But if we subtract 1 from 0, the bits "wrap around" and we get 0xFFFFFFFF -- this happens at the hardware level! The internal counter has nowhere to go, so it wraps around to the top of the range of representable values.
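Here's a minimal sketch of that wraparound in C, using a fixed-width uint32_t to stand in for our 32 bits of storage (in C, unsigned arithmetic is defined to wrap modulo 2^32, which matches what the hardware does):

```c
#include <inttypes.h>
#include <stdio.h>

int main(void) {
    uint32_t x = 0x00000001;         /* 1 in decimal */
    x -= 1;                          /* 0x00000000 */
    x -= 1;                          /* nowhere to go: wraps around to 0xFFFFFFFF */
    printf("0x%08" PRIX32 "\n", x);  /* prints 0xFFFFFFFF */
    return 0;
}
```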
If we'd been representing this number as an unsigned integer, 0xFFFFFFFF would represent 4,294,967,295. This is the number you'd normally expect if you started from 0x00000000 and kept counting up until you got to 0xFFFFFFFF. So when we have an unsigned integer, 0 - 1 = 4,294,967,295.
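As a quick check in the same sketch style (again just assuming uint32_t), the wrapped-around bit pattern really does read back as that huge decimal number:

```c
#include <inttypes.h>
#include <stdio.h>

int main(void) {
    uint32_t zero = 0;
    uint32_t result = zero - 1;            /* wraps around to 0xFFFFFFFF */
    printf("%" PRIu32 "\n", result);       /* prints 4294967295 */
    printf("%d\n", result == UINT32_MAX);  /* prints 1: it's the largest 32-bit unsigned value */
    return 0;
}
```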
That result is weird and probably not what we want! A signed integer is just a convention that reserv