mkdir -p ~/bin ~/lib/python2.6/site-packages
Configure the environment by putting something like this in your .bashrc and sourcing it:
export CFLAGS=-I/usr/local/pgsql/include
export LDFLAGS=-L/usr/local/pgsql/lib
-----BEGIN WEBFACTION INSTALL SCRIPT-----
#!/usr/bin/env python2.4
"""
Nginx 0.7.65 Installer New
"autostart": not applicable
"extra info": Enter domain name for the nginx app
"""
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Document</title>
  <link rel="stylesheet" href="style.css">
  <script src="http://ajax.aspnetcdn.com/ajax/jquery/jquery-1.5.min.js" type="text/javascript"></script>
</head>
<body>
  <h1>Document</h1>
</body>
</html>
This proposal presents a "middle ground" approach to improving and refactoring auth.User, based around a new concept of "profiles". These profiles provide the main customization hook for the user model, but the user model itself stays concrete and cannot be replaced.
I call it a middle ground because it doesn't go as far as refactoring the whole auth app -- a laudable goal, but one that I believe will ultimately take far too long -- but goes a bit further than just fixing the most egregious errors (username length, for example).
This proposal vastly pares down the User model to the absolute bare minimum and defers all "user field additions" to a new profile system.
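A minimal sketch of that split, in plain Python rather than actual Django model classes (every name here is illustrative, not part of the proposal):

```python
from dataclasses import dataclass

# Hypothetical sketch: the pared-down User carries only identity and
# credential fields; app-specific fields live on separate profile objects
# that point back at the user (in Django, via a OneToOneField).
@dataclass
class User:
    username: str
    password_hash: str
    is_active: bool = True

@dataclass
class AvatarProfile:
    user: User          # the profile's link back to the concrete User
    avatar_url: str = ""
```

Under this scheme, code that needs extra user data asks a profile, not the User model itself, so User never has to be swapped out or subclassed.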
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import sys

import boto.ec2
import boto.vpc


def dprint(body):
    # Debug helper: uncomment to echo diagnostics to stderr.
    # print >> sys.stderr, body
    return
require 'rubygems'
require 'activesupport'
require 'aws'
require 'graphviz'

ec2 = Aws::Ec2.new(ENV["AMAZON_ACCESS_KEY_ID"], ENV["AMAZON_SECRET_ACCESS_KEY"])
g = ec2.describe_security_groups
gv = GraphViz::new("structs", "type" => "graph")
I've had many people ask me questions about OpenTracing, often in relation to OpenZipkin. I've seen assertions about how it is vendor neutral and is the lock-in cure. This post is not a sanctioned, polished or otherwise muted view; rather, it is what I personally think about what it is and is not, and what it helps and does not help with. Scroll to the very end if this is too long. Feel free to comment if I made any factual mistakes or you just want to add your thoughts.
OpenTracing is documentation and library interfaces for distributed tracing instrumentation. To be "OpenTracing" requires bundling its interfaces in your work, so that others can use it to time distributed operations with the same library.
OpenTracing interfaces are targeted at authors of instrumentation libraries, and at those who want to interoperate with traces created by them. For example, something started a trace somewhere, and I add a notable event to that trace. Structured logging was recently added to OpenTracing.
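To make "bundling its interfaces in your work" concrete, here is a minimal no-op sketch of an OpenTracing-style Tracer/Span pair (illustrative only; the real opentracing packages define richer contracts, and these class and method names are simplified assumptions):

```python
# Hypothetical sketch of an OpenTracing-style interface, not the real library.
class Span:
    """A single timed operation within a trace."""
    def __init__(self, operation_name):
        self.operation_name = operation_name
        self.logs = []

    def log_kv(self, key_values):
        # Record a structured (key/value) log event on this span.
        self.logs.append(key_values)
        return self

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        return False

class Tracer:
    """The entry point an instrumentation library codes against."""
    def start_span(self, operation_name):
        return Span(operation_name)

# Instrumentation depends only on the interface, so any vendor tracer
# implementing the same contract can be swapped in without code changes.
tracer = Tracer()
with tracer.start_span("fetch_user") as span:
    span.log_kv({"event": "cache_miss", "user_id": 42})
```

The point of the abstraction is exactly this indirection: the code that times operations never names a vendor, only the shared interface.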
package main

import (
	"context"
	"flag"
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
)
#!/bin/bash
echo "This is an idle script (infinite loop) to keep the container running."
echo "Please replace this script."
cleanup ()
{
    kill -s SIGTERM $!
    exit 0
}
# Trap signals; sleep in the background so the wait can be interrupted promptly.
trap cleanup SIGINT SIGTERM
while true; do
    sleep 60 &
    wait $!
done