Table of Contents

  • Environment Configuration
  • Setup Django Project
  • Configure Django Project Settings For Multiple Environments
  • Setup Front End Tools
  • Using a Task Runner (NPM Scripts)
  • Advanced CSS Handling (SCSS Compiling)
  • Setup Amazon S3 to Store Static and Media Files
  • Advanced Django File Handling
  • Django Settings Configuration
  • Application Build

Environment Configuration

1. Create the project directory.

Create a directory for the project.

$ mkdir {project_name}

2. Navigate into the directory.

$ cd {project_name}

3. Create the virtual environment.

Create a virtualenv for the project.

$ mkvirtualenv --python=/usr/local/bin/python3 {virtualenv_name}

The newly created virtualenv will now be active. If everything worked, the terminal will show the name of the virtualenv within parentheses at the beginning of the command line prompt, like so:

({virtualenv_name}) $

For the sake of brevity, I'm going to exclude the active virtualenv name from the remaining commands. But remember: the following commands should only be run while the environment is active and the virtualenv name is shown at the beginning of the command line prompt.

4. Attach the virtual environment to the project.

Making sure you're still within the project directory you just created, set the current working directory as the project directory for the newly created virtualenv.

$ setvirtualenvproject


Setup Django Project

1. Install Django

Now that you have your virtualenv set up, install Django.

$ pip install django

2. Start your Django project.

Start a Django project within the current working directory. The . at the end of the command is necessary in order to start the project within the current directory, rather than creating a subdirectory within the cwd and putting the new project there.

$ django-admin startproject {project_name} .

3. Create a scripts directory.

Create a scripts directory in your project's root directory.

$ mkdir scripts

4. Create an assets & a static directory.

Create both assets and static directories in your project's root directory. These directories will be used by your build tools. Assets is where you will store the static files that are used by all of your project's apps.

$ mkdir assets static

5. Create some directories within assets.

We're going to create a few more directories within assets where we can store our static files during development. (You could potentially create a few more separate directories within the assets directory if you wanted to further group your assets by whichever Django app they belong to.) Move to the assets directory and create css, img, js, and sass directories.

$ cd assets

$ mkdir css img js sass


Configure Django Project Settings For Multiple Environments

We need to configure our Django project to use different settings depending on the environment/circumstances under which it's being run.

1. Create a settings directory.

Within your project's root directory, navigate into the subdirectory that has the same name as your project.

$ cd {project_name}

If you call ls to output the contents of your cwd, you should see settings.py, urls.py, and wsgi.py among other files and folders.

$ ls
__init__.py  __pycache__  settings.py  urls.py  wsgi.py

We're going to make a few changes. First, create a settings directory.

$ mkdir settings

2. Move settings.py to settings/base.py.

Now, move the settings.py file to the new directory, renaming it to base.py in the process.

$ mv settings.py settings/base.py

3. Create some environment specific settings files.

Next, navigate into the newly created settings directory and create two more files: development.py and production.py.

$ cd settings

$ touch development.py production.py

Now, open development.py (with your favorite text editor – I use Sublime Text)

$ subl development.py

and paste the following code:

from .base import *
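
You can also layer development-only overrides on top of that import in this file. For example (illustrative values, not part of the original setup):

# development.py
from .base import *

# Local-only settings for development.
DEBUG = True
ALLOWED_HOSTS = ['localhost', '127.0.0.1']

Give production.py a matching from .base import * line at the top as well, so the production settings also pull in everything defined in base.py.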

4. Update manage.py so it's aware of the new settings files' locations.

If you were to try to run the app at this moment, you would be met with an error: Python throws a core exception (ImproperlyConfigured) claiming the SECRET_KEY setting is empty. It actually isn't, but our manage.py script still thinks our settings are located in settings.py, which we moved and renamed. So, we need to open manage.py and make a small change. First, cd to the project's root directory. Next, open manage.py.

$ subl manage.py

Within the __main__ block, find the line that reads:

# manage.py
os.environ.setdefault('DJANGO_SETTINGS_MODULE', '{project_name}.settings')

where {project_name} is the name of your project. And change it to:

# manage.py
os.environ.setdefault('DJANGO_SETTINGS_MODULE', '{project_name}.settings.development')

Now, Django will be able to locate your project's settings. Start the server and make sure your project runs properly.

Calling $ python manage.py runserver should output something similar to:

Performing system checks...

System check identified no issues (0 silenced).

August 03, 2018 - 22:28:03
Django version 2.1, using settings '{project_name}.settings.development'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.

Open up a browser on your computer and head over to http://127.0.0.1:8000/. You should see Django's default landing page.

5. Update the base directory in base.py.

We need to update the base directory path since we moved settings to base.py, which is located in a subdirectory that is one level deeper than the old settings file.

# base.py
BASE_DIR = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

6. Update static settings in base.py.

Update our settings in base.py so the application knows where to find static assets during development, and where to put them when collectstatic is run.

# base.py
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'static')
STATICFILES_DIRS = (
    os.path.join(BASE_DIR, 'assets'),
)
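
For reference, templates can then refer to anything under assets/ with Django's static template tag. A minimal (hypothetical) base.html snippet:

<!-- base.html (example) -->
{% load static %}
<link rel="stylesheet" href="{% static 'css/styles.css' %}">
<script src="{% static 'js/bundle.js' %}"></script>

The css/styles.css and js/bundle.js paths match the files our build tools will generate later in this guide.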

Setup Front End Tools

1. Install & initialize NPM.

First, make sure you have npm (the Node package manager) installed on your system. Now, initialize npm in your project's root directory. This will create a package.json file that will assist in the management of your JavaScript packages. It won't immediately create a node_modules directory, but once you begin installing packages with npm, node_modules will be created automatically and your installed JavaScript packages will live inside it. The following command will prompt you for some configuration parameters to store in package.json. Feel free to use the defaults or provide your own values; they can always be changed later on.

$ npm init
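
If you accept the defaults, the generated package.json will look something like this:

// package.json
{
  "name": "{project_name}",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC"
}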

2. Install Webpack.

A JavaScript module bundler is a tool that uses a build step (which has access to the file system) to create a final output that is browser compatible (and doesn't need access to the file system). In this case, we need a module bundler to find all require statements (which are invalid browser JavaScript syntax) and replace them with the actual contents of each required file. The final result is a single bundled JavaScript file (with no require statements). We're going to use Webpack. Webpack itself is an npm package, so we can install it from the command line:

$ npm install webpack --save-dev

We also need a Webpack command line tool – we'll be using webpack-command.

$ npm install webpack-command --save-dev

Note the --save-dev argument — this saves it as a development dependency, which means it’s a package that you need in your development environment but not on your production server.

3. Create index.js and bundle.js files within assets/js.

$ cd assets/js

$ touch index.js bundle.js

4. Use the webpack CLI to bundle the JS files.

Now we have webpack installed as one of the packages in the node_modules folder. After moving back into the project root directory, you can use webpack from the command line as follows:

$ ./node_modules/.bin/webpack assets/js/index.js assets/js/bundle.js

This command will run the webpack tool that was installed in the node_modules folder, start with the index.js file, find any require statements, and replace them with the appropriate code to create a single output file named bundle.js.
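
To make that concrete, here's a hypothetical pair of files showing what webpack resolves (greeting.js is not part of this project; it's purely for illustration):

// assets/js/greeting.js (example)
module.exports = function greeting(name) {
  return 'Hello, ' + name + '!';
};

// assets/js/index.js (example)
var greeting = require('./greeting');
console.log(greeting('Django'));

After running the command above, bundle.js contains the code from both files, and a single <script> tag pointing at it is all the browser needs.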

5. Create a config file for Webpack.

Note that we’ll need to run the webpack command each time we change index.js. This is tedious, and will get even more tedious as we use webpack’s more advanced features (like generating source maps to help debug the original code from the transpiled code). Webpack can read options from a config file in the root directory of the project named webpack.config.js.

$ touch webpack.config.js

Now, within webpack.config.js add the following:

// webpack.config.js
const path = require('path');
module.exports = {
  entry: { main: './assets/js/index.js' },
  output: {
    filename: 'bundle.js',
    path: path.resolve(__dirname, 'assets/js')
  },
};

Now each time we change index.js, we can run webpack with the command:

$ ./node_modules/.bin/webpack

Overall, this may not seem like much, but there are some huge advantages to this workflow. We are no longer loading external scripts via global variables. Any new JavaScript libraries will be added using require statements in the JavaScript, as opposed to adding new <script> tags in the HTML. Having a single JavaScript bundle file is often better for performance. And now that we added a build step, there are some other powerful features we can add to our development workflow!

6. Transpiling code for new language features (babel).

First we’ll install babel (which is an npm package) into the project from the command line:

$ npm install babel-core babel-preset-env babel-loader --save-dev

Note that we’re installing 3 separate packages as dev dependencies — babel-core is the main part of babel, babel-preset-env is a preset defining which new JavaScript features to transpile, and babel-loader is a package to enable babel to work with webpack. We can configure webpack to use babel-loader by editing the webpack.config.js file as follows:

// webpack.config.js
const path = require('path');
module.exports = {
  entry: { main: './assets/js/index.js' },
  output: {
    filename: 'bundle.js',
    path: path.resolve(__dirname, 'assets/js')
  },
  module: {
    rules: [
      {
        test: /\.js$/,
        exclude: /node_modules/,
        use: {
          loader: 'babel-loader',
          options: {
            presets: ['env']
          }
        }
      }
    ]
  }
};

Basically we’re telling webpack to look for any .js files (excluding ones in the node_modules folder) and apply babel transpilation using babel-loader with the babel-preset-env preset. Now that everything’s set up, we can start writing ES2015 features in our JavaScript!


Using a Task Runner (NPM Scripts)

We’re almost done, but there’s still some unpolished edges in our workflow. If we’re concerned about performance, we should be minifying the bundle file, which should be easy enough since we’re already incorporating a build step. We also need to re-run the webpack command each time we change the JavaScript, which gets old real fast. So the next thing we’ll look at are some convenience tools to solve these issues.

1. build and watch scripts.

Let’s write some npm scripts to make using webpack easier. This involves simply changing the package.json file as follows:

// package.json
"scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "build": "webpack --progress -p --mode development", // ADD THIS LINE!
    "watch": "webpack --progress --watch" // AND THIS LINE!
  },

Here we’ve added two new scripts, build and watch. To run the build script, you can enter in the command line:

$ npm run build

This will run webpack (using configuration from the webpack.config.js we made earlier) with the --progress option to show the percent progress and the -p option to minimize the code for production. To run the watch script:

$ npm run watch

This uses the --watch option instead to automatically re-run webpack each time any JavaScript file changes, which is great for development.

Note that the scripts in package.json can run webpack without having to specify the full path ./node_modules/.bin/webpack, since node.js knows the location of each npm module path. This is pretty sweet!

2. start script.

Let's also add a start script, so that npm run start fires up Django's development server:

// package.json
"scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "build": "webpack --progress -p", 
    "dev": "webpack --mode development",
    "start": "python manage.py runserver", // ADD THIS LINE!
    "watch": "webpack --progress --watch",
  },

Advanced CSS Handling (SCSS Compiling)

We're going to install and use an npm development tool called node-sass. This will compile all of our SASS files and output them to a single .css file that we can link from our base.html template.

First, install node-sass using npm.

$ npm install node-sass --save-dev

Now, navigate to the assets/sass directory and create a styles.scss file. This will be our primary SASS stylesheet, into which we will import all of our other .scss files. We're going to add an npm script to compile the SASS and output the result to a file called styles.css within the assets/css directory. We're also going to add a watch script that will watch our SASS files and recompile the SASS anytime there is a change.
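
From the project root:

$ cd assets/sass

$ touch styles.scss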

Update our package.json as follows:

// package.json
"scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "build": "webpack --progress -p", 
    "dev": "webpack --mode development",
    "sass": "node-sass assets/sass/styles.scss --output assets/css --source-map-embed --source-map-contents", // ADD THIS LINE!
    "start": "python manage.py runserver" ,
    "watch": "webpack --progress --watch",
    "watch:sass": "npm run sass -- --watch" // ADD THIS LINE!
  },
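
As a starting point, styles.scss can be exercised with a few lines of SASS (purely illustrative):

// assets/sass/styles.scss (example)
$brand-color: #2c3e50;

body {
  color: $brand-color;

  a {
    color: lighten($brand-color, 20%);
  }
}

Running npm run sass should now produce assets/css/styles.css with the nested rules flattened into plain CSS.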

Setup Amazon S3 to Store Static and Media Files

The following steps were taken from a Caktus Group tutorial.

This portion of the tutorial requires an active Amazon S3 account.

One of the things I've always found tricky about using S3 this way is getting all the permissions set up so that the files are public but read-only, while allowing AWS users I choose to update the S3 files.

The approach I'm going to describe here isn't the simplest possible way to do that, but will be easier to maintain over the life of the site.

Here are the steps we'll take:

  1. Create a bucket.
  2. Configure the bucket for public access from the web.
  3. Create an AWS user group.
  4. Add a policy to the user group that lets members of the group control the bucket.
  5. Create a user and add it to that group.

Let's get started.

Create bucket

Start by creating a new S3 bucket, using the "Create bucket" button on the S3 page in the AWS console. Enter a name and choose a region. Leave everything else at the default settings, clicking through until the bucket has been created.

Enable web access

The first thing we'll do is enable the bucket for web access.

Click on the newly created S3 bucket, click on the Properties tab, and click on the big "Static website hosting" button. Select "Use this bucket to host a website", and fill in anything you want as the index and error documents; we won't be using them.

Click "Save".

Note that enabling web access for the bucket only hooks the bucket up to the AWS web servers to make HTTP access possible. You must also set the permissions so that anonymous users can actually read the files in your bucket. We'll do that soon.

CORS

Another thing you need to be sure to set up is CORS. CORS defines a way for client web applications loaded in one domain to interact with resources in a different domain. Since we're going to be serving our static files and media from a different domain, if you don't take CORS into account, you'll run into mysterious problems, like Firefox not using your custom fonts for no apparent reason.

Go to your S3 bucket properties, and under "Permissions", click on "CORS Configuration". Make sure that something like this is set. (At the time of writing, this was the default.)

<CORSConfiguration>
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>Authorization</AllowedHeader>
    </CORSRule>
</CORSConfiguration>

Public permissions

Now let's start setting up the right permissions.

In the AWS console, click the "Permissions" tab, then on the "Bucket policy" button. An editing box will appear. Paste in the following:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::example-bucket/*"]
    }
  ]
}

Don't hit save quite yet! Look just above the editing box and you should see something like this:

Bucket policy editor ARN: arn:aws:s3:::some-bucket

Copy the part that starts with "arn:" and save it somewhere; we'll need it again later. This is the official complete name of our S3 bucket that we can use to refer to our bucket anywhere in AWS.

Replace the "arn:aws:s3:::example-bucket" in the editing box with the part starting with "arn:" shown above your editing box, but be careful to preserve the "/" at the end of it. For example, change "arn:aws:s3:::example-bucket/" to "arn:aws:s3:::some-bucket/*".

Now you can save it.

It's tempting at this point to upload a test file to your bucket and try to access it from the web, but resist for a moment. While everything is now set up, it can take a few minutes for the settings to take effect, and it's confusing if you've got everything right and yet your first test fails. We'll finish the setup, then test everything.

To recap: once these settings have taken effect, anyone should be able to access any file in your bucket by visiting http://{bucket_name}.s3-website-{region}.amazonaws.com/{file_name}.
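
When you do get around to testing, a quick check from the command line (after uploading a hypothetical test.txt to the bucket) would look like:

$ curl -I http://{bucket_name}.s3-website-{region}.amazonaws.com/test.txt

A 200 OK response means the public-read policy is working.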

Create user group

Our next goal is to arrange for our own servers to be able to manage the files in our S3 bucket. We'll start by creating a new group in IAM. When we finish, we'll be able to add and remove users from this group to control whether those users can manage this bucket's files.

In the AWS console, go to IAM, click "Groups" on the left, then "Create New Group". Assign a meaningful name to your group, like "manage-some-bucket". Click through the rest of the process without changing anything else.

Add a policy

Next, we'll create a policy with rules allowing management of our S3 bucket. Still in IAM, click "Policies" on the left. Click "Create policy" at the top. On the next page, select "Choose a service". Search for "S3", then select it.

Next, under "Actions" -> "Manual Actions", choose "All S3 actions (s3:*)" under .

Now, click "Resources". Make sure "Specific" is selected, and under "bucket" click "Add ARN". Type in your bucket name (e.g. some-bucket) and click "Add".

Under "object", click "Add ARN". Type the bucket name (e.g. some-bucket) into the "Bucket name" field. In "object name" field, either select "Any" or type an asterisk (e.g. "*"). Click "Add".

Click "Review Policy".

Give your new policy a meaningful name, like "manage-some-bucket".

At the bottom, click "Create Policy".

Now your policy exists. We just need to tell AWS which users to apply it to. Check the checkbox next to your new policy in the list, then pull down the "Policy actions" button at the top and select "Attach". On the next page, check the group we created, then click the "Attach policy" button at the bottom right.

We're almost done. We just need to create a new IAM user that our servers can use at runtime.

Create a user

Still in IAM, click "Users" on the left, then "Add user". Give the user a name, e.g. "blogpostbucketuser", and choose the "Programmatic" access type. Click "Next:Permissions". Now you can check the newly created group to add the new user to it, then click "Next:Review", and finally "Create user".

You'll need the user's access key and secret access key to configure the servers. The page you're on now should have a "Download .csv" button. Just click that and save the downloaded file, which will have the username, access id, and secret access key in it.

If you accidentally mess up downloading the credentials, or lose them, you can't fetch them again. But you can just delete this user and create a new one the same way. Luckily, this part is pretty quick.

Expected results:

The site can use the user's access key ID and secret access key to access the bucket. The site will be able to do anything with that bucket. The site will not be able to do anything outside that bucket.


Advanced Django File Handling

In order for your static files to be served from S3 instead of your own server, you need to arrange for two things to happen:

  1. When you serve pages, any links in the pages to your static files should point at their location on S3 instead of your own server.
  2. Your static files are on S3 and accessible to the website's users.

Part 1 is easy if you've been careful not to hardcode static file paths in your templates. Just change STATICFILES_STORAGE in your settings. We'll show how to do that in a second.

But you still need to get your files onto S3, and keep them up to date. You could do that by running collectstatic locally, and using some standalone tool to sync the collected static files to S3 at each deploy. But that won't work for media files, so we might as well go ahead and set up the custom Django storage we'll need now, and then our collectstatic will copy the files up to S3 for us.

We're going to change the file storage class for static files to a new class, storages.backends.s3boto3.S3Boto3Storage, that will do that for us. Instead of STATIC_ROOT and STATIC_URL, it'll look at a group of settings starting with AWS_ to know how to write files to the storage, and how to make links to files there.

To start, install two Python packages: django-storages (yes, that's "storages" with an "S" on the end), and boto3.

$ pip install django-storages boto3

In our base.py under the settings directory, add 'storages' to INSTALLED_APPS:

# base.py
INSTALLED_APPS = (
  ...,
  'storages',
)

And (optionally) add this:

# base.py
AWS_S3_OBJECT_PARAMETERS = {
  'Expires': 'Thu, 31 Dec 2099 20:00:00 GMT',
  'CacheControl': 'max-age=94608000',
}

This will tell boto that when it uploads files to S3, it should set properties on them so that when S3 serves them, it'll include some HTTP headers in the response. Those HTTP headers, in turn, will tell browsers that they can cache these files for a very long time.
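
Concretely, once a file uploaded with those parameters is served from S3, the response should carry headers along these lines:

Expires: Thu, 31 Dec 2099 20:00:00 GMT
Cache-Control: max-age=94608000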

Now, add this to base.py, changing the first four values as appropriate:

# base.py
AWS_STORAGE_BUCKET_NAME = 'BUCKET_NAME'
AWS_S3_REGION_NAME = 'REGION_NAME'  # e.g. us-east-2
AWS_ACCESS_KEY_ID = 'xxxxxxxxxxxxxxxxxxxx'
AWS_SECRET_ACCESS_KEY = 'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy'

# Tell django-storages the domain to use to refer to static files.
AWS_S3_CUSTOM_DOMAIN = '%s.s3.amazonaws.com' % AWS_STORAGE_BUCKET_NAME

Customize the first four lines for your own S3 storage. (The Django Settings Configuration section below shows a pattern for keeping sensitive values like these out of your settings files.)

It might be tempting at this point to change DEFAULT_FILE_STORAGE to storages.backends.s3boto3.S3Boto3Storage, which is the same class we used for static files. Django file storage classes provide a standard interface that both static files and media files can use, like this:

# DO NOT DO THIS!
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'

Adding those settings would indeed tell Django to save uploaded files to our S3 bucket, and use our S3 URL to link to them.

Unfortunately, this would store our media files on top of our static files, which we're already keeping in our S3 bucket. That could let users overwrite our static files, leaving us wide open to security problems.

What we want to do is either enforce always storing our static files and media files in different subdirectories of our bucket, or use two different buckets. I'll show how to use the different paths first.

In order for our STATICFILES_STORAGE to have different settings from our DEFAULT_FILE_STORAGE, they need to use two different storage classes; there's no way to configure anything more fine-grained in Django. So, we'll start by creating a custom storage class for our static file storage, by subclassing S3Boto3Storage. We'll also define a new setting, so we don't have to hardcode the path in our Python code.

For our example, we'll create a file custom_storages.py in our root project directory, that is, in the same directory as manage.py:

# custom_storages.py
from django.conf import settings
from storages.backends.s3boto3 import S3Boto3Storage

class StaticStorage(S3Boto3Storage):
    location = settings.STATICFILES_LOCATION

Now in development.py:

# development.py
STATICFILES_STORAGE = 'django.contrib.staticfiles.storage.StaticFilesStorage'

Then in production.py:

# production.py
STATICFILES_LOCATION = 'static'
STATICFILES_STORAGE = 'custom_storages.StaticStorage'

These settings enable the production application to serve the files from S3, while still allowing the files to be served locally during development.

STATICFILES_LOCATION is a new setting that we've created so that our new storage class can be configured separately from other storage classes in Django. Giving our class a location attribute of 'static' will put all our files into paths on S3 starting with 'static/'.

You should be able to run collectstatic again, restart your site, and then all of your static files should have '/static/' in their URLs. Now delete from your S3 bucket any files outside of '/static' (using the S3 console, or whatever tool you like).
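
If you'd rather script that cleanup than click through the S3 console, the AWS CLI can do it; for example (assuming the some-bucket name used earlier):

$ aws s3 rm s3://some-bucket/ --recursive --exclude "static/*"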

We can do something very similar now for media files, adding another storage class in custom_storages.py:

# custom_storages.py
class MediaStorage(S3Boto3Storage):
    location = settings.MEDIAFILES_LOCATION

In base.py:

# base.py
MEDIAFILES_LOCATION = 'media'
DEFAULT_FILE_STORAGE = 'custom_storages.MediaStorage'

Now when a user uploads a photo, it should go into '/media/' in our S3 bucket. When we display the image on a page, the image URL will include '/media/'.
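
To make that concrete, a model with a file field (a hypothetical model, not part of this guide) will route uploads through MediaStorage automatically:

# models.py (example)
from django.db import models

class Photo(models.Model):
    # ImageField requires Pillow; files land under media/photos/ in the bucket.
    image = models.ImageField(upload_to='photos/')

For a saved instance, photo.image.url will point at your AWS_S3_CUSTOM_DOMAIN with a path beginning with /media/.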

With all of this set up, you should be able to upload your static files to S3 using collectstatic:

$ python manage.py collectstatic

If you see any errors, double check all the steps above.


Django Settings Configuration

Next, we're going to install a package called python-dotenv so that we can remove sensitive configuration parameters from our settings files and either a) read them in from a .env file, or b) load them in as environment variables.

$ pip install python-dotenv

Next, navigate to your project's root directory and create a .env file.

$ touch .env

I'm only going to show you how to move one setting. But this paradigm can be used to configure any setting.

First, copy the SECRET_KEY value from within base.py. Then, open .env and update it as follows:

# BASE

SECRET_KEY=g$hu=jrrvpx2)u@q21o=uw_+hopmtifg(17p5jscdj@k8-4$bh

Back in base.py, change the line that reads:

# base.py
SECRET_KEY = '(uo1t6eacdn$-4fyy8l(+uc1tthy1=87$wrm@5f8-mg+&wh&7$'

to,

# base.py
SECRET_KEY = os.getenv('SECRET_KEY')

In order to make sure our variables are being loaded properly before runtime, we need to open manage.py and add the following lines to the top.

# manage.py
from dotenv import load_dotenv, find_dotenv
load_dotenv(find_dotenv())

You're all set. The variables will either be loaded from .env or overridden by environment variables that are explicitly set.
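
The same pattern extends to any other value you'd rather keep out of version control. For example, the AWS credentials we hardcoded earlier could be moved into .env too (illustrative, using the same getenv approach):

# base.py
AWS_ACCESS_KEY_ID = os.getenv('AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = os.getenv('AWS_SECRET_ACCESS_KEY')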


Application Build

We're also going to update the npm build script so that it compiles our SASS and runs Django's collectstatic command.

Update the "build" script in package.json as follows:

// package.json
...
"build": "webpack --mode production && npm run sass && python manage.py collectstatic",
...

Now, running npm run build will bundle our JS, compile our SASS, and run Django's collectstatic, pushing any static files to the S3 bucket you configured.

=================

You're ready to GO!
