Taken from: https://hackerlists.com/hacking-sites/

22 Hacking Sites, CTFs and Wargames To Practice Your Hacking Skills

InfoSec skills are in such high demand right now. As the world continues to turn everything into an app and connect even the most basic devices to the internet, the demand is only going to grow, so it's no surprise everyone wants to learn hacking these days.

However, almost every day I come across a forum post where someone is asking where they should begin to learn hacking or how to practice hacking. I've compiled this list of some of the best hacking sites to hopefully be a valuable resource for those wondering how they can build and practice their hacking skill set. I hope you find this list helpful, and if you know of any other quality hacking sites, please let me know in the comments, so I can add them to the list.

1. CTF365 https://ctf365.com/
By default, Rails applications build URLs based on the primary key -- the `id` column from the database. Imagine we have a `Person` model and associated controller. We have a person record for Bob Martin that has `id` number `6`. The URL for his show page would be:

`/people/6`

But, for aesthetic or SEO purposes, we want Bob's name in the URL. The last segment, the `6` here, is called the "slug". Let's look at a few ways to implement better slugs.
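One common approach -- sketched here as an illustration, assuming `Person` has a `name` attribute -- is to override `to_param`, the method Rails's URL helpers call to build that last segment:

```
class Person < ApplicationRecord
  # URL helpers use to_param for the last URL segment, so the show path
  # becomes /people/6-bob-martin instead of /people/6.
  def to_param
    "#{id}-#{name.parameterize}"
  end
end
```

Because `to_i` on a string ignores everything after the leading digits, `Person.find(params[:id])` still resolves `"6-bob-martin"` to record `6`, so the controller needs no changes.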
""" | |
Example TensorFlow script for finetuning a VGG model on your own data. | |
Uses tf.contrib.data module which is in release v1.2 | |
Based on PyTorch example from Justin Johnson | |
(https://gist.github.com/jcjohnson/6e41e8512c17eae5da50aebef3378a4c) | |
Required packages: tensorflow (v1.2) | |
Download the weights trained on ImageNet for VGG: | |
``` | |
wget http://download.tensorflow.org/models/vgg_16_2016_08_28.tar.gz |
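The body of the script is cut off in this excerpt. As a minimal sketch of the core finetuning setup -- written here against TF 1.2's bundled `tf.contrib.slim` VGG definition, with the class count as a placeholder assumption and the tarball above assumed extracted into the working directory:

```
import tensorflow as tf
from tensorflow.contrib.slim.nets import vgg

slim = tf.contrib.slim
NUM_CLASSES = 8  # assumption: set to the number of classes in your own data

# Input placeholder matching VGG's expected 224x224 RGB images.
images = tf.placeholder(tf.float32, [None, 224, 224, 3])
with slim.arg_scope(vgg.vgg_arg_scope()):
    logits, _ = vgg.vgg_16(images, num_classes=NUM_CLASSES, is_training=True)

# Restore every pretrained weight except the final layer (fc8), whose shape
# depends on NUM_CLASSES and therefore has to be trained from scratch.
variables_to_restore = slim.get_variables_to_restore(exclude=['vgg_16/fc8'])
init_fn = slim.assign_from_checkpoint_fn('vgg_16.ckpt', variables_to_restore)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    init_fn(sess)  # load the ImageNet weights from the tarball above
```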
So here's the premise: For scenes that take around a minute or less to render, performance is actually worse if you render on all of the cards with a single instance of Blender. This is because (AFAIK) there's a bit of additional time necessary to collect the render results from each card and stitch them together. That time is a fixed short duration, so it's negligible on larger/longer render jobs. However, on shorter render jobs, the 'stitch time' has a much more significant impact.
I ran into this with a machine I render on that has 4 Quadro K6000s in it. To render animations, I ended up writing a few little scripts to facilitate launching 4 separate instances of Blender, each one tied to one GPU. Overall render time was much shorter with that setup than with one instance of Blender using all 4 GPUs.

The setup works basically like this... I have the following Python script (it can be anywhere on your hard drive, so long as you remember the path to it).
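(The original script isn't included in this excerpt; the following is a sketch of the approach, assuming the Blender 2.79-era Cycles Python API and a hypothetical `GPU_ID` environment variable to pick the card.)

```
# set_gpu.py -- a sketch, not the original script. Pins this Blender
# instance to a single CUDA device chosen by the (hypothetical) GPU_ID
# environment variable.
import os
import bpy

gpu_index = int(os.environ.get('GPU_ID', '0'))

prefs = bpy.context.user_preferences.addons['cycles'].preferences
prefs.compute_device_type = 'CUDA'
prefs.get_devices()  # populate prefs.devices

# Enable only the chosen device; disable all others, including the CPU entry.
for i, device in enumerate(prefs.devices):
    device.use = (i == gpu_index)

bpy.context.scene.cycles.device = 'GPU'
```

Launched four times with different `GPU_ID` values and non-overlapping frame ranges, e.g. `GPU_ID=0 blender -b scene.blend -P set_gpu.py -s 1 -e 25 -a`, each instance renders its slice of the animation on its own card.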
If your pods are showing `ErrImagePull`, `ErrImageNeverPull`, or `ImagePullBackOff` errors after running `kubectl apply`, the simplest solution is to provide an explicit `imagePullPolicy` for the pod.

First, run `kubectl delete -f infra/k8s/`

Then, update your pod manifest:

spec:
  containers:
    - name: posts
      image: cygnet/posts:0.0.1
      # Never assumes the image was built locally on the node (e.g. with
      # Docker Desktop); use IfNotPresent or Always for a registry-hosted image.
      imagePullPolicy: Never
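With the manifest updated, re-apply the same directory from the delete step and confirm the pod starts:

```
kubectl apply -f infra/k8s/
kubectl get pods
```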
import tensorflow as tf
from tensorflow.python.platform import gfile

with tf.Session() as sess:
    model_filename = 'PATH_TO_PB.pb'
    with gfile.FastGFile(model_filename, 'rb') as f:
        # Parse the frozen GraphDef and import it into the default graph.
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        g_in = tf.import_graph_def(graph_def)

LOGDIR = '/logs/tests/1/'
train_writer = tf.summary.FileWriter(LOGDIR)
# Write the imported graph to the log directory so TensorBoard can render it.
train_writer.add_graph(sess.graph)
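Then point TensorBoard at the same log directory to inspect the imported graph:

```
tensorboard --logdir /logs/tests/1/
```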