Article for the amazing Practicing Ruby Tech Writing Clinic. http://elmcitycraftworks.org/post/36678454423/technical-writing-clinic-winter-2013 One assignment was to write an article explaining a technical concept to non-technical readers. Here is my article.

My job, explained so my mom can understand it

"I am a freelance Ruby and Rails developer." When I talk with my geek friends, everyone understands what I do, and they have a reasonable idea of how I spend my day. But sometimes less technical people ask me what I do for a living, and then I tend to dumb down the answer: "I am a programmer; I usually build websites." "Oh," they say, "my nephew also makes websites. He even built me a blog on Blogger."

Err... yes, but that hardly has anything to do with what I do for a living.

There are many different kinds of websites, built with different technologies, and depending on the site and the technology used to build it, they are made by one kind of professional or another. The craft of building a website comprises many different disciplines, and professionals usually specialize in just one part of the process. Although I do many different things, my specialization is mainly programming dynamic websites using Ruby and Rails.

In this article I want to explain what those "dynamic websites programmed with Ruby on Rails" that I make actually are. I will try to explain the technologies behind them and, hopefully, after reading the article you'll have a high-level understanding of how they work.

But first we need to know how any website works.

How computers communicate over the Internet

A lot of things happen when you enter a URL in your browser's address bar. Let's say you type http://en.wikipedia.org/wiki/Unicorn and hit enter. Your browser is a program running on your computer, and Wikipedia's site is stored on Wikipedia's computers, possibly thousands of miles away. Somehow the information must travel that long distance before it is displayed on your screen. The question is: how does this communication happen?

The Internet is a network of interconnected computers. When a computer connects to this network it gets a unique address (an IP address) so that it can be located in the network and receive information. IP addresses are sets of numbers separated by dots, such as 74.125.230.195 or 208.80.152.201. To communicate with other computers in the network, your computer sends messages addressed to an IP address. These messages are technically called IP packets. Like a regular mail parcel, each IP packet carries a sender address and a recipient address.
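In fact, an IP address is really just one big number; the dots split it into four smaller pieces so it is easier for humans to read. We can see this with a few lines of Ruby, using the ipaddr library that ships with the language:

```ruby
require "ipaddr"

# The dotted form 208.80.152.201 is a human-friendly way of writing
# a single number. IPAddr can convert between the two forms.
address = IPAddr.new("208.80.152.201")

puts address.to_i  # the same address as one number: 3494942921
```

Computers work with the single-number form; the dotted form exists only for our convenience.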

To request a web page from Wikipedia, your computer must send the request as a series of IP packets to Wikipedia's computers. Those packets arrive first at the computers of your ISP (the cable or ADSL company that gives you access to the Internet; they will sound familiar because you pay them monthly for this service). The ISP's computers store large tables in which to look up where an IP address is located. If a packet's destination is, for example, the IP address 208.80.152.201, they will look into those tables and say: "OK, this looks like an address from this company in San Francisco. To reach those computers we need to use the large infrastructure of Big Telco Corp. They have a big cable deployed across the Atlantic. Let's send the packet to them." Once the packet reaches Big Telco's computers, the process repeats, possibly passing through many more computers until the packet ultimately reaches its destination address.

For example, when my computer in Spain sends a packet to Wikipedia, it travels across 11 intermediaries until it reaches Wikipedia's computers in San Francisco.

Hop  IP              Approximate location
1    192.168.1.1     Madrid, Spain
2    80.58.67.163
3    80.58.72.153
4    81.46.7.201
5    84.16.9.161
6    84.16.15.210
7    213.140.55.50   Ronda, Andalucia, Spain
8    89.149.183.154  Neu Isenburg, Hessen, Germany
9    89.149.183.238
10   89.149.183.150
11   208.80.154.214  San Francisco, California, US
12   208.80.152.201

But there's one more little step we have skipped. When you type wikipedia.org in your browser's address bar, you are telling your computer that you want to see a website called wikipedia.org. Wikipedia.org, Google.com and Amazon.com are names made especially so that humans can remember them. We call them domain names. But no matter how easy those names are for us to remember, our computers can only send packets to IP addresses, not to domain names. So how does your computer know which IP address is associated with the domain name wikipedia.org?

Again, it has to ask other computers, which are called Domain Name Servers, or DNS servers for short. We usually call any computer on a network that responds to requests from other computers a server. The computers that ask the servers are called clients. In this case, your browser is the client, and it asks the DNS server which IP address is associated with the domain name wikipedia.org. The DNS server responds with something like 208.80.152.201.
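Conceptually, a DNS server works like a big phone book that maps names to numbers. Here is a toy sketch in Ruby; the tiny table is purely illustrative (real DNS is spread across thousands of servers, and these addresses may well have changed since this article was written):

```ruby
# A toy "DNS server": just a lookup table from domain names to
# IP addresses. The entries here are illustrative only.
DNS_TABLE = {
  "en.wikipedia.org" => "208.80.152.201",
  "google.com"       => "74.125.230.195"
}

# Look up a domain name; return "unknown" if we have no entry for it.
def lookup(domain)
  DNS_TABLE.fetch(domain, "unknown")
end

puts lookup("en.wikipedia.org")  # => 208.80.152.201
```

Your computer does essentially this lookup, over the network, every time you type a domain name it hasn't seen recently.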

How a website works

Now we understand how your browser sends messages to Wikipedia's computers. But how does the browser tell Wikipedia that you want to see the page about unicorns and not some other page? When sending packets, computers follow protocols. A protocol is a set of conventions, usually defined by an official organization. To ask for a web page, your browser uses the HTTP protocol. (HTTP is short for Hypertext Transfer Protocol, but nobody really uses the long form.)

What does the HTTP protocol look like? It is actually a pretty simple protocol, and by looking at an example we can get an idea of what's going on.

Let's say your computer asks Wikipedia for the page about Unicorns. It will send a message, called an HTTP request, to Wikipedia's servers. It will be something like this:

GET /wiki/Unicorn HTTP/1.1
Host: en.wikipedia.org

This is just saying: "Give me the page at /wiki/Unicorn on the en.wikipedia.org server. To understand each other, we will use version 1.1 of the HTTP protocol."

Wikipedia's server will then send back an HTTP response. The response has two parts: headers and body. The headers tell whether the request was successful, along with some other data about the response. This is a simple set of HTTP response headers:

 HTTP/1.1 200 OK
 Date: Mon, 23 May 2005 22:38:34 GMT
 Last-Modified: Wed, 08 Jan 2003 23:11:55 GMT
 Content-Length: 438
 Content-Type: text/html; charset=UTF-8

The 200 OK is the code the server sends when it finds the requested page. If there was a problem, such as requesting a page that doesn't exist on this particular server, the response would have a different code. When the server cannot find the page you have requested, the code is 404 Not Found. Does that sound familiar?
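Because HTTP is just plain text sent over a connection, we can act out the whole exchange ourselves. The sketch below, using Ruby's socket library, starts a pretend web server on your own machine and then plays the browser's role by hand; nothing here actually talks to Wikipedia:

```ruby
require "socket"

# A pretend web server: it answers any request with a tiny HTML page,
# following the same HTTP conventions described above.
server = TCPServer.new("127.0.0.1", 0)   # port 0 = pick any free port
port   = server.addr[1]

server_thread = Thread.new do
  client = server.accept
  client.gets("\r\n\r\n")                # read the request headers
  body = "<html><body><p>Hello World!</p></body></html>"
  client.write("HTTP/1.1 200 OK\r\n" \
               "Content-Type: text/html\r\n" \
               "Content-Length: #{body.bytesize}\r\n" \
               "\r\n" + body)
  client.close
end

# The "browser" side: open a connection and type an HTTP request by hand.
socket = TCPSocket.new("127.0.0.1", port)
socket.write("GET /wiki/Unicorn HTTP/1.1\r\nHost: en.wikipedia.org\r\n\r\n")
response = socket.read                   # the full HTTP response, as text
socket.close
server_thread.join

status_line = response.lines.first.chomp
puts status_line                         # => HTTP/1.1 200 OK
```

Real browsers and servers do exactly this, only faster and with many more headers.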

After the headers comes the actual web page. Now is when we need to understand how web pages are made.

What's in a webpage?

You can think of web pages as regular files that your browser is able to understand. They are no different from a .doc file, but instead of opening them with MS Word, you open them with your browser. And instead of being stored on your computer's hard drive, they are stored on other computers all over the Internet.

The main difference between an MS Word document and a web page is that Word documents were designed by Microsoft engineers to be opened with MS Word, while web pages can be opened by any browser, like Internet Explorer or Google Chrome. Ideally, a web page will look the same in all browsers. Life as a web developer would be much easier if that actually happened in the real world.

There is an agreed-upon language that all web pages and browsers use to define the content of a web page. This language is called HTML (HyperText Markup Language, though again nobody uses the long version). HTML documents are just regular text files that follow HTML conventions to indicate the meaning of different parts of the text. For example, let's look at a very simple HTML document:

<!DOCTYPE html>
<html>
  <head>
    <title>Hello Web</title>
  </head>
  <body>
    <p>Hello World!</p>
  </body>
</html>

In this document, <title>Hello Web</title> is an HTML element, and <title> and </title> are called HTML tags. HTML tags come in pairs and give a special meaning to anything that comes between them. In this case, "Hello Web" is the title of this particular web page. If you opened that file in your browser you would see Hello Web in the title bar, at the top of your browser window.

<p>Hello world!</p> is another element that defines a paragraph of text. There are many different kinds of elements in an HTML document, but they always follow the same pattern.

And now, Dynamic websites, (the ones I work with)

When the web was invented, all web pages were actually HTML files stored on the hard drives of the few web servers that existed. But modern websites can no longer work that way: they can't have a ready-made HTML document for every page a user could visit. Imagine, for example, a site like Google. People can search for anything they want on Google: 'kittens', 'lolcats', 'aaa', 'aab', 'abba', 'ababa'... The list is literally infinite. Even as big as Google is, and even though it has hundreds of thousands of servers, it is completely impossible for them to store a file for every possible page we could visit.

Remember just above, when we talked about the HTTP protocol? The actual HTML content is sent after the headers of an HTTP response. Web servers can read this HTML from a file on disk, but nowadays they can also generate the HTML on the fly. The browser doesn't care. When the web server can generate custom content depending on the user's input, we say the website is dynamic.

In the case of Wikipedia, the content of the articles is stored in a database. A database is yet another program that stores information so it can be easily accessed by other programs. Internally, a database contains tables, conceptually quite similar to the ones you see in an Excel file. Wikipedia's database surely has a table with all the articles on the website. It could look something like this:

article title  article content
Kitten         A kitten is a juvenile domesticated cat.[1] A feline litter usually consists of two ...
Unicorn        The unicorn is a legendary animal from European folklore that resembles a white horse ...
Rainbow        A rainbow is an optical and meteorological phenomenon that is caused by reflection of ...

To show the pages for those articles, there's a program in Wikipedia's servers that listens to incoming HTTP requests and builds the HTML pages on the fly using the information in the database. This program is called a web application. The main difference between a web application and a good old web site is that a web application includes some logic to create the web page dynamically, depending on the user's input.

The complete process a web application uses to create a web page would be as follows:

Let's say the server receives an HTTP request like this:

GET /wiki/Unicorn HTTP/1.1
Host: en.wikipedia.org

The web application reads and understands the message: the browser is asking for the page at en.wikipedia.org/wiki/Unicorn. With that information, the application searches the database for an article with the title Unicorn. When it finds the row about unicorns, it builds an HTML page with that information.
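The steps above can be sketched as a tiny Ruby program. The "database" here is just a hash holding the example table from earlier, and the article texts are abbreviated; a real web application would query an actual database instead:

```ruby
# A toy "database" of articles, standing in for real database queries.
ARTICLES = {
  "Kitten"  => "A kitten is a juvenile domesticated cat.",
  "Unicorn" => "The unicorn is a legendary animal from European folklore.",
  "Rainbow" => "A rainbow is an optical and meteorological phenomenon."
}

# Given the path from an HTTP request, build the page (or a 404).
def respond_to_path(path)
  title = path.sub("/wiki/", "")   # "/wiki/Unicorn" -> "Unicorn"
  body  = ARTICLES[title]
  if body
    ["200 OK", "<html><body><h1>#{title}</h1><p>#{body}</p></body></html>"]
  else
    ["404 Not Found", "<html><body><p>No such article.</p></body></html>"]
  end
end

status, page = respond_to_path("/wiki/Unicorn")
puts status   # => 200 OK
```

Every dynamic website is, at heart, a more elaborate version of this: look at the request, fetch some data, and build a page from it.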

Web applications usually employ templates. A template contains the HTML that is the same in all pages. A simplified version of one of those templates could look like this:

<!DOCTYPE html>
<html>
  <head>
    <title><%= Insert page title here %></title>
  </head>
  <body>
    <%= Insert article body here %>
  </body>
</html>

But there's something else. Templates also use placeholders to insert text coming from the database. How you write a placeholder depends on the technology with which the web application is built. In the example above we used <%= ... %>, which is indeed the syntax used in some real web applications.

The web application reads the template and replaces the placeholders with meaningful content.
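The <%= ... %> placeholders shown above are in fact the syntax of ERB, a templating library that ships with Ruby and that Rails uses for its pages. Here is a minimal sketch of the same template filled in with ERB (the title and body values would normally come from the database):

```ruby
require "erb"

# The template: ordinary HTML plus ERB placeholders.
template = ERB.new(<<~HTML)
  <html>
    <head><title><%= title %></title></head>
    <body><p><%= body %></p></body>
  </html>
HTML

# Fill the placeholders with content, as a web application would
# after fetching it from the database.
page = template.result_with_hash(title: "Unicorn",
                                 body:  "The unicorn is a legendary animal.")
puts page
```

The output is a complete HTML page with "Unicorn" as its title, ready to be sent back as the body of an HTTP response.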

And finally: What is Ruby and Rails?

Even if the example above is fairly simple (believe me, it is), it is not very different from how real web applications work. But as applications grow and gain more features, the logic of how to create the web pages and respond to users' input becomes more and more complicated.

In the Wikipedia example, viewing an article is the site's simplest feature. There are many other things you can do at wikipedia.org: create a new article, link different articles, participate in a discussion, moderate pages so they don't get vandalized, etc. The rules that govern those features usually differ from one web application to another. But there are also some things that don't change between web applications: each one needs to know how to search a database, how to understand an HTTP request, how to insert custom content into an HTML template, and so on.

If you have, let's say, 100 web applications built with the same technology, most of what they do is much the same. If we built them all from scratch, we would be repeating ourselves 99% of the time. And because programmers hate to repeat themselves, they have gone to great lengths to avoid repetition and only create what's unique in each application.

When we programmers see that we are doing the same thing again and again, we like to extract what we are repeating into a framework so we can reuse it later (and it's not that we are lazy; it's a matter of recycling). A framework is a collection of utilities that have proven useful in the past for creating applications.

So when I build a new website, I don't have to start from scratch; I can stand on the shoulders of giants. Well, maybe not exactly giants, but at least some other smart programmers. There are many web frameworks that already encode many of the utilities I will likely need. A fundamental part of my job is to know which frameworks exist and which ones are suitable for the task.

To build websites I often use a web framework called Ruby on Rails, or simply Rails. Rails came to life when a Danish programmer built a website and noticed that many things he was coding could also be useful for building other sites. So he extracted the utilities that could be reused and released them as the framework Ruby on Rails. Rails is free software, which means not only that we don't have to pay anything to use it, but also that we are free to modify it in any way we want. And if the team that maintains Rails likes the changes we have made, they can even incorporate them into the framework itself. So even though Rails began as the work of just one programmer, since its release it has received contributions from hundreds of developers around the globe.

But Rails is hardly the only web framework for building websites. So what's unique about Ruby on Rails? To be honest, web frameworks are to web developers as wines are to wine aficionados: there are different tastes, and everyone has their own opinion about which one is better. I find Ruby on Rails easier to use than the alternatives for most (but not all) of my projects. Before working with Ruby on Rails I worked with other technologies, but after learning Rails I felt much more productive and got more done in the same amount of time.

Conclusion

There are many technologies behind the simple act of opening a web page. Each one of them is incredibly complex and full of details. And consider that there are many more involved in the process that I haven't even mentioned, but that any web developer must know. I hope that you now have a high-level understanding of how the web works and how web pages are made. Or at least I hope that, after this, it's the last time you ask, mom.
