A common compiler is gcc, the GNU C Compiler.
On a Mac, the gcc command actually runs a different compiler called Clang.
You already have it if you installed Homebrew, because Homebrew needs a compiler to build programs,
so it makes sure Clang is installed for you :)
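A quick way to check which compiler the gcc command actually runs (the exact version string varies by machine, so treat the comment as a sketch, not exact output):
$ gcc --version # on a Mac this typically reports Apple clang rather than GCC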
$ ls -l
total 32
-rw-r--r-- 1 josh staff 3800 May 21 13:25 Readme.md
-rw-r--r-- 1 josh staff 613 May 21 11:18 binary.c
-rw-r--r-- 1 josh staff 591 May 21 11:30 howbig_are_things.c
-rw-r--r-- 1 josh staff 2382 May 21 12:23 practice.c
$ gcc binary.c # compiles the program, gives it the default name a.out
$ ls -l
total 56
-rw-r--r-- 1 josh staff 3800 May 21 13:25 Readme.md
-rwxr-xr-x 1 josh staff 8724 May 21 13:26 a.out
-rw-r--r-- 1 josh staff 613 May 21 11:18 binary.c
-rw-r--r-- 1 josh staff 591 May 21 11:30 howbig_are_things.c
-rw-r--r-- 1 josh staff 2382 May 21 12:23 practice.c
$ ./a.out # run the program
-129 - 01111111
-128 - 10000000
...
I made an elective that goes into how these things work. Here's the bit about permissions. Here's the bit about locating programs you're trying to run by setting an environment variable called PATH.
# 1 byte = 8 bits
0b00000000 # => 0
0b11111111 # => 255
0b1... means negative (the leading bit is set); 0b0... means positive (the leading bit is not set).
Then the rest of the number is treated as normal.
This is almost true, except negative numbers have a slightly different representation (called two's complement), for the purpose of making addition cheap (try adding the bits for 2 and -1, which you can see in the table below).
Because we only have 8 bits, and 1 is reserved for the sign, we can only represent numbers -128 through 127.
We wrote the program binary.c (included) to see where the numbers roll over, and how they are represented.
-129: 01111111 <-- rollover, this is +127
-128: 10000000 <-- smallest negative number
-001: 11111111
0000: 00000000 <-- 0 is neither positive nor negative,
                   but its representation starts with a 0,
                   so there is one fewer positive number than negative
0001: 00000001 <-- counting like "normal"
0127: 01111111 <-- largest positive number
0128: 10000000 <-- rollover, this is -128
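binary.c itself isn't shown here, but the same experiment is easy to sketch in Ruby (just an illustration, not the actual program): masking with 0xFF keeps only the low 8 bits, so you can reproduce the table above and also check the claim that two's complement makes addition cheap. The bits8 helper name is made up for this sketch.

# hypothetical Ruby version of the binary.c experiment
def bits8(n)
  (n & 0xFF).to_s(2).rjust(8, '0') # keep only the low 8 bits
end

bits8(-129) # => "01111111" rollover, same bits as +127
bits8(-128) # => "10000000"
bits8(-1)   # => "11111111"
bits8(0)    # => "00000000"
bits8(127)  # => "01111111"
bits8(128)  # => "10000000" rollover, same bits as -128

# adding the raw bits for 2 and -1, then dropping the carry out of the 8th bit,
# gives the bits for 1 -- no special-casing needed for negative numbers
bits8(0b00000010 + 0b11111111) # => "00000001"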
Ruby has 64 bits it can use to represent a number (because I'm on a 64-bit machine).
It uses 1 bit for the sign, and one bit for some other purpose (pretty sure I know, if you're ever curious ^_^).
So, there are 2^62 positive values and 2^62 negative values.
Since the number 0 occupies the positive space, our maximum positive number is one fewer than that.
Once we roll over, Ruby switches to an alternate representation (Bignum).
# available_bits = 64
# bits_used_for_other_shit = 2
# values_used_for_0 = 1
(2**(64-2) - 1).class # => Fixnum
(2**(64-2)    ).class # => Bignum
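For a concrete sense of where that boundary lands on a 64-bit machine (a quick check, assuming an MRI build where Fixnum and Bignum are still separate classes):
2**62 - 1 # => 4611686018427387903, the largest Fixnum
2**62     # => 4611686018427387904, the first positive number that needs a Bignum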
The & operator does a "bitwise" AND, meaning it gives you a new number where each bit is set only if that bit is set in both inputs.
logically:
3 has binary "011"
& with 1: "001" gives "001"
& with 2: "010" gives "010"
& with 4: "100" gives "000"
See it in Ruby
3 & 1 # => 1
3 & 2 # => 2
3 & 4 # => 0
7 has the binary value "0111"
7 & 1 # => 1
7 & 2 # => 2
7 & 4 # => 4
7 & 8 # => 0
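A handy trick is to look at the bits directly with to_s(2), and the usual reason to reach for & is to ask "is this particular bit set?" The read_flag and perms names below are made up for this sketch:
7.to_s(2)       # => "111"
(7 & 4).to_s(2) # => "100"

read_flag = 0b100
perms     = 0b101
(perms & read_flag) != 0 # => true, the read bit is set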
ASCII has no values larger than 127, so no ASCII value will have the first bit set.
UTF-8 is compatible with ASCII because a byte whose first bit is 0 maps directly to an ASCII value. But if the first bit is 1, it means the character requires more than one byte to represent. So all the fancy characters, and characters from other languages, are represented by setting the first bit and then using another scheme to map the remaining bits to numbers.
string = "a∂" # => "a∂"
string.encoding # => #<Encoding:UTF-8>
string.chars # => ["a", "∂"]
string.bytes # => [97, 226, 136, 130]
"a".ord # => 97
97.chr # => "a"
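To see the leading-bit rule in the actual bytes (these bit strings are just the byte values above written out in binary):
string.bytes.map { |b| b.to_s(2).rjust(8, '0') }
# => ["01100001", "11100010", "10001000", "10000010"]
# "a" fits in ASCII, so its byte starts with 0; the three bytes for "∂" all start with 1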