Update your Phoenix dep:

def deps do
  [...,
   {:phoenix, "~> 0.16"},
   ...]
end
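Then run `mix deps.get` to fetch the updated version.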
require "twitter" | |
require "sunlight" | |
data_pos = DATA.pos | |
last_id = DATA.read.to_s[/\d+/] | |
last_id = last_id.to_i if last_id | |
DATA.reopen(__FILE__, "a+") | |
Twitter.configure do |config| |

# When running under Bundler, also make gems from the RVM @global gemset
# loadable by appending their lib dirs to the load path.
if defined?(::Bundler)
  global_gemset = ENV['GEM_PATH'].split(':').grep(/ruby.*@global/).first
  if global_gemset
    all_global_gem_paths = Dir.glob("#{global_gemset}/gems/*")
    all_global_gem_paths.each do |p|
      gem_path = "#{p}/lib"
      $LOAD_PATH << gem_path
    end
  end
end

# Obfuscation horror: `def b ... end` evaluates to :b, which becomes the
# default for l, so `puts send l` invokes b and prints the heredoc body.
def a l = def b; <<-NIGHTMARES; end; puts send l; end
😱😱😱
NIGHTMARES
a

#!/bin/bash
usage(){
  echo " Usage: $0 url <clean> <tls>"
  echo " url: http url to your DC/OS cluster master ip"
  echo " clean: optional, will erase your DC/OS and kubectl configs and reconfigure"
  echo " tls: optional, will deploy kubernetes using TLS"
  echo " Minimum Config:"
  echo " If using CCM, the minimum configuration for the \"generic\" Kubernetes install is:"
  echo " 1 public slave node"

# Add this to the end of your development.rb and add
#
# gem 'pry'
#
# to your Gemfile and run bundle to install.
silence_warnings do
  begin
    require 'pry'
    IRB = Pry
  rescue LoadError
  end
end
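With this in place, `rails console` in development starts Pry instead of IRB; `silence_warnings` suppresses the constant-reassignment warning that `IRB = Pry` would otherwise emit.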

-- script.lua
-- Receives a table, returns the sum of its components.
io.write("The table the script received has:\n")
x = 0
for i = 1, #foo do
  print(i, foo[i])
  x = x + foo[i]
end
io.write("Returning data back to C\n")
return x
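
For context, here is a minimal sketch of the C host this script assumes: it builds the global table `foo`, runs the chunk, and reads the returned sum off the stack. Lua 5.2+ is assumed, and the file name and table contents are made up for illustration.

/* host.c -- hypothetical host for script.lua; build with: cc host.c -llua -lm */
#include <stdio.h>
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>

int main(void) {
    lua_State *L = luaL_newstate();
    luaL_openlibs(L);

    /* Build the global table `foo` that the script iterates over. */
    lua_newtable(L);
    for (int i = 1; i <= 5; i++) {
        lua_pushinteger(L, i * 10);   /* illustrative values */
        lua_rawseti(L, -2, i);        /* foo[i] = i * 10 */
    }
    lua_setglobal(L, "foo");

    /* Run the chunk; its `return x` is left on the stack. */
    if (luaL_dofile(L, "script.lua") != LUA_OK) {
        fprintf(stderr, "%s\n", lua_tostring(L, -1));
        lua_close(L);
        return 1;
    }
    printf("Script returned: %.0f\n", lua_tonumber(L, -1));
    lua_close(L);
    return 0;
}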

HEADER
{
    CompileTargets = ( IS_SM_50 && ( PC || VULKAN ) );
    Description = "Hologram Effect";
}

FEATURES
{
    #include "common/features.hlsl"
}

For a long time I've been impressed by the ease of use Cassandra and CockroachDB bring to operating a data store at scale. While the two systems make very different tradeoffs, what they have in common is how easy it is to deploy and operate a cluster. I've run them at sizes ranging from dozens to hundreds or even thousands of nodes, and compared to some other clustered technologies they get you far, fast. Their sane defaults deliver scale and high availability to people who wouldn't necessarily know how to achieve either with more complex systems, so teams can get quite far before anyone has to become an expert. Once your usage gets more extreme you'll need deep expertise in the system, just as with any other piece of infrastructure. But what I really love about these systems is that they make geo-aware data placement and data replication and movement a breeze most of the time, which can also simplify GDPR concerns.
Several years ago the great [Andy Gross](ht