A Pen by Le Roux Bodenstein on CodePen.
# Based on idan's script
function _git_branch_name
  echo (command git symbolic-ref HEAD 2>/dev/null | sed -e 's|^refs/heads/||')
end

function _is_git_dirty
  echo (command git status -s --ignore-submodules=dirty 2>/dev/null)
end
defmodule Palindrome do
  def is_palindrome(x) do
    # Digits of x in base 2; a palindrome reads the same in both directions.
    y = Integer.to_charlist(x, 2)
    y == Enum.reverse(y)
  end
end

# Years between 1900 and 2100 whose binary representation is a palindrome.
1900..2100 |> Enum.filter(&Palindrome.is_palindrome/1)
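For cross-checking the Elixir snippet, here is the same binary-palindrome test transcribed to Python (a verification sketch, not part of the original snippet; the range endpoints mirror the inclusive Elixir range 1900..2100):

```python
def bin_palindrome(n):
    """True when n's binary representation reads the same both ways."""
    s = format(n, "b")
    return s == s[::-1]

# Years in the inclusive range 1900..2100 whose binary form is a palindrome.
matching_years = [y for y in range(1900, 2101) if bin_palindrome(y)]
```

For example, 1967 (0b11110101111) and 2015 (0b11111011111) both qualify.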
fn bin_reverse(n_orig: u32) -> u32 {
    let mut n = n_orig;
    // Swap progressively smaller groups: halves, bytes, nibbles, bit pairs, single bits.
    // Equivalent to n_orig.reverse_bits() on modern Rust.
    n = ((n & 0xFFFF0000) >> 16) | ((n & 0x0000FFFF) << 16);
    n = ((n & 0xFF00FF00) >> 8) | ((n & 0x00FF00FF) << 8);
    n = ((n & 0xF0F0F0F0) >> 4) | ((n & 0x0F0F0F0F) << 4);
    n = ((n & 0xCCCCCCCC) >> 2) | ((n & 0x33333333) << 2);
    n = ((n & 0xAAAAAAAA) >> 1) | ((n & 0x55555555) << 1);
    n
}
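To sanity-check the mask-and-shift stages, the same divide-and-conquer bit reversal can be transcribed to Python and compared against a naive string reversal (a verification sketch, not part of the original snippet):

```python
def bin_reverse(n):
    """Reverse the bits of a 32-bit integer by swapping ever-smaller groups."""
    n = ((n & 0xFFFF0000) >> 16) | ((n & 0x0000FFFF) << 16)
    n = ((n & 0xFF00FF00) >> 8) | ((n & 0x00FF00FF) << 8)
    n = ((n & 0xF0F0F0F0) >> 4) | ((n & 0x0F0F0F0F) << 4)
    n = ((n & 0xCCCCCCCC) >> 2) | ((n & 0x33333333) << 2)
    n = ((n & 0xAAAAAAAA) >> 1) | ((n & 0x55555555) << 1)
    return n & 0xFFFFFFFF  # keep the result within 32 bits

def naive_reverse(n):
    """Reference implementation: reverse the 32-character binary string."""
    return int(format(n, "032b")[::-1], 2)
```

Both functions agree on every 32-bit input; for instance, reversing `1` yields `0x80000000`.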
call plug#begin()
Plug 'junegunn/seoul256.vim'
Plug 'tpope/vim-sensible'
" Plug 'benekastah/neomake'
Plug 'Shougo/deoplete.nvim'
Plug 'junegunn/fzf', { 'dir': '~/.fzf', 'do': './install --all' }
Plug 'leafgarland/typescript-vim', { 'for': 'typescript' }
call plug#end()
#!/usr/bin/env ruby
require 'find'

# Directory entries we should not match.
ACCESS_DENIED = ["$Recycle.Bin", "Documents and Settings", "Program Files",
                 "Program Files (x86)", "ProgramData", ".", ".."]

# Returns the first path under `dirname` whose basename equals `name`.
def search(name, dirname = ".")
  Find.find(dirname).find do |path|
    !ACCESS_DENIED.include?(File.basename(path)) &&
      !File.directory?(path) &&
      File.basename(path) == name
  end
end
Today every self-respecting developer thinks in terms of microservices and containers. This approach significantly simplifies deployment and maintenance while shortening the development-to-production cycle.
But there are some poorly addressed pain points along that route: we can reliably deploy a product as a swarm of loosely coupled containers, but we lack most of the usual tools for gaining insight into what's happening inside those containers and services, and for debugging and troubleshooting our production issues.
As a team in the middle of such a transition, we at SCC have hit most of these issues:
- Any incoming request requires several containers to work in unison. How do we log that work so it's easy to retrieve everything that happened on all the affected nodes?
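A common answer to that logging question is to tag every log line with a correlation ID that travels with the request across containers, so a single search over the aggregated logs reconstructs the whole request. A minimal sketch (the `X-Request-Id` header convention and all field names here are illustrative assumptions, not taken from the original text):

```python
import uuid

def log_line(request_id, service, message):
    """Render one log line carrying the request's correlation ID, so the
    aggregated logs of all affected nodes can be filtered by a single ID."""
    return f"request_id={request_id} service={service} {message}"

# The edge service mints the ID once; every downstream container reuses the
# one it received (e.g. via an X-Request-Id header, an assumed convention).
rid = str(uuid.uuid4())
print(log_line(rid, "gateway", "accepted POST /orders"))
print(log_line(rid, "billing", "charge authorized"))
```

Searching the central log store for that one `request_id` then yields the story of the request across every node it touched.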
We on the SCC team (and at SUSE in general) provide more and more APIs following the wonderful HTTP REST approach. APIs evolve over time, often unexpectedly, so it makes sense to adopt API versioning best practices from day zero. I was asked to join a discussion with the Crowbar team to share my SCC experience with versioning APIs. This article is an attempt to formalize our solution and prepare it for a wider audience.
So imagine you have various API consumers outside your control. Some of them will inevitably lag behind the latest release.
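One common pattern for keeping such lagging consumers working is to negotiate the API version through the `Accept` header and default to the oldest supported version when none is requested. A sketch of the idea (the vendor media type `application/vnd.scc.vN+json` and the version numbers are illustrative assumptions, not necessarily what SCC uses):

```python
import re

# Illustrative vendor media type; the real prefix is an assumption.
VERSION_RE = re.compile(r"application/vnd\.scc\.v(\d+)\+json")
DEFAULT_VERSION = 1  # oldest supported, so unversioned clients keep working
LATEST_VERSION = 2

def negotiate_version(accept_header):
    """Pick the API version a client requested via its Accept header.

    Clients that send no vendor media type get the default (oldest)
    version, so consumers lagging behind the latest release keep working.
    """
    match = VERSION_RE.search(accept_header or "")
    if not match:
        return DEFAULT_VERSION
    requested = int(match.group(1))
    if requested > LATEST_VERSION:
        # Refuse versions we have never published.
        raise ValueError(f"unsupported API version v{requested}")
    return requested
```

With this scheme, `Accept: application/vnd.scc.v2+json` selects version 2, while a plain `Accept: application/json` quietly falls back to version 1.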
Author: kirill@pimenov.cc
License: CC BY-SA 4.0, https://creativecommons.org/licenses/by-sa/4.0/deed.ru

First of all, Git is itself a kind of blockchain.

A blockchain proper (a chain of blocks) is a structure in which each current state is determined by a hash computed from the previous one via new_block = hash(old_block + metadata + data), where data is the actual payload: say, the information about which lines moved between which files, or which wallets money is transferred from and to.
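The hash-chain idea can be sketched in a few lines (a minimal sketch; SHA-256 stands in for whatever hash function a real chain uses, and the field names are illustrative):

```python
import hashlib
import json

def new_block(old_block_hash, metadata, data):
    """Compute the next block's hash from the previous block's hash plus
    this block's metadata and payload: new_block = hash(old_block + metadata + data)."""
    payload = json.dumps([old_block_hash, metadata, data], sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

# Chain two blocks. Git commits work the same way: each commit hash covers
# its parent's hash, its metadata (author, date, message), and its content.
genesis = new_block("", {"author": "kirill"}, "initial payload")
second = new_block(genesis, {"author": "kirill"}, "moved lines between files")

# Changing any earlier block changes every later hash,
# which is what makes tampering detectable.
```

The determinism of the hash means anyone can re-verify the whole chain from the genesis block forward.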
I hereby claim:
- I am kirushik on github.
- I am kirushik (https://keybase.io/kirushik) on keybase.
- I have a public key ASBISvcxSpUwUP5DhJqCMHuGjL0JsrP9FUuJd2WtGpm-pQo
To claim this, I am signing this object: