To turn off SSH's man-in-the-middle (MITM) attack detection, edit ~/.ssh/config to include
StrictHostKeyChecking=no
UserKnownHostsFile=/dev/null
Alternatively, on the command line use
ssh -o "StrictHostKeyChecking no" -o LogLevel=ERROR name@server
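Disabling host-key checking globally is risky; a safer pattern (my suggestion, not part of the original note) is to scope the options to the one disposable host in ~/.ssh/config. The hostname below is hypothetical:

```
# applies only to this throwaway host, not to every connection
Host ephemeral-vm.example.com
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
    LogLevel ERROR
```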
# Python3
import multiprocessing

print(multiprocessing.cpu_count())  # number of CPUs available

def serial_function(some_variable):
    result = some_variable * 5
    return result

list_of_variables = [3, 5]
print([serial_function(v) for v in list_of_variables])  # serial baseline: [15, 25]
import requests
import json
import datetime
import time
import matplotlib.pyplot as plt
import matplotlib.dates as md
import pandas
print('pandas', pandas.__version__)
import numpy
print('numpy', numpy.__version__)
import multiprocessing
import time

def serial_func(arg1):
    return arg1 * 2

if __name__ == '__main__':
    start_time = time.time()
    res_list = []
    # start one worker process per CPU; see https://docs.python.org/3/library/multiprocessing.html
    with multiprocessing.Pool(processes=multiprocessing.cpu_count()) as pool:
        res_list = pool.map(serial_func, [3, 4, 5])
    print(res_list, 'in', time.time() - start_time, 'seconds')
from inspect import stack, currentframe, getframeinfo  # file name and line number

def prntln(*args):
    """
    https://stackoverflow.com/questions/24438976/python-debugging-get-filename-and-line-number-from-which-a-function-is-called
    """
    caller = getframeinfo(stack()[1][0])
    print(caller.filename, caller.lineno, args)
from functools import wraps
import errno
import os  # needed for os.strerror
import signal

def timeout(seconds=10, error_message=os.strerror(errno.ETIME)):
    """
    https://stackoverflow.com/questions/2281850/timeout-function-if-it-takes-too-long-to-finish
    """
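For reference, a complete, self-contained sketch of the signal-based recipe from that Stack Overflow answer, with a hypothetical `slow_call` to exercise it (Unix only, since it relies on SIGALRM):

```python
import errno
import os
import signal
import time
from functools import wraps

def timeout(seconds=10, error_message=os.strerror(errno.ETIME)):
    # decorator factory: SIGALRM fires after `seconds` and raises TimeoutError
    def decorator(func):
        def _handle_timeout(signum, frame):
            raise TimeoutError(error_message)

        @wraps(func)
        def wrapper(*args, **kwargs):
            signal.signal(signal.SIGALRM, _handle_timeout)
            signal.alarm(seconds)
            try:
                return func(*args, **kwargs)
            finally:
                signal.alarm(0)  # cancel the pending alarm
        return wrapper
    return decorator

@timeout(seconds=1)
def slow_call():
    time.sleep(5)  # would run far past the 1-second budget

try:
    slow_call()
    result = 'finished'
except TimeoutError:
    result = 'timed out'
print(result)  # timed out
```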
There are many Git workflows out there; I strongly suggest also reading the atlassian.com [Git Workflow][article] article, as it goes into more detail than I do here.
The two prevailing workflows are [Gitflow][gitflow] and [feature branches][feature]. Being more of a subscriber to continuous integration, I feel the feature branch workflow is better suited.
Bash at the command line leaves a bit to be desired when it comes to awareness of Git state, so I suggest following these instructions on [setting up Git Bash autocompletion][git-auto].
When working with a centralized workflow the concepts are simple: master represents the official history and is always deployable. With each new scope of work, a.k.a. feature, the developer creates a new branch. For clarity, use descriptive branch names like transaction-fail-message or github-oauth.
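The flow above can be sketched end to end. This is a local-only demo in a throwaway repository (branch name and commit messages are illustrative); in a real project you would branch off your existing master and push the feature branch for review:

```shell
# minimal local demo of the feature-branch flow
repo=$(mktemp -d)
cd "$repo"
git init -q
git symbolic-ref HEAD refs/heads/master          # ensure the initial branch is named master
git config user.email you@example.com
git config user.name "You"
git commit -q --allow-empty -m "initial commit"  # master: the official, deployable history
git checkout -q -b transaction-fail-message      # descriptive feature branch
git commit -q --allow-empty -m "clarify failure message on declined transactions"
git checkout -q master
git merge -q --no-ff -m "merge transaction-fail-message" transaction-fail-message
git log --oneline
```

The `--no-ff` merge keeps an explicit merge commit, so the feature's history stays visible on master.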
function lcount() {
    # count files of common types under the current directory
    count_py_files=$(find . -type f -name "*.py" | wc -l)
    count_c_files=$(find . -type f -name "*.c" | wc -l)
    count_csv_files=$(find . -type f -name "*.csv" | wc -l)
    count_png_files=$(find . -type f -name "*.png" | wc -l)
    count_html_files=$(find . -type f -name "*.html" | wc -l)
    count_sh_files=$(find . -type f -name "*.sh" | wc -l)
    count_xml_files=$(find . -type f -name "*.xml" | wc -l)
    echo "py: $count_py_files  c: $count_c_files  csv: $count_csv_files  png: $count_png_files  html: $count_html_files  sh: $count_sh_files  xml: $count_xml_files"
    # non-blank line counts, excluding whole-line comments
    py_line_count=$(find . -name "*.py" -type f -exec grep . {} \; | sed -n '/^# /!p' | wc -l)
    tex_line_count=$(find . -name "*.tex" -type f -exec grep . {} \; | sed -n '/^% /!p' | wc -l)
    echo "python lines: $py_line_count  latex lines: $tex_line_count"
}
import time
from flask import Flask, request, g, render_template

app = Flask(__name__)
app.config['DEBUG'] = True

@app.before_request
def before_request():
    g.request_start_time = time.time()  # stash the start time on the per-request context
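The hook above only records the start time; to surface it you read `g.request_start_time` back later in the request cycle. A minimal self-contained sketch (the route and message are my own illustration, not from the original notes):

```python
import time
from flask import Flask, g

app = Flask(__name__)

@app.before_request
def before_request():
    g.request_start_time = time.time()  # runs before every request

@app.route('/')
def index():
    # elapsed wall-clock time since the before_request hook fired
    elapsed = time.time() - g.request_start_time
    return f'handled in {elapsed:.5f}s'
```

Flask's `g` object is scoped to the current request, so concurrent requests each see their own start time.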