This tutorial explains how to use CleverHans together with a TensorFlow model to craft adversarial examples using the Jacobian-based saliency map approach (JSMA). This attack is described in detail in the following paper. We assume basic knowledge of TensorFlow. If you need help getting CleverHans installed before getting started, you may find our MNIST tutorial on the fast gradient sign method useful.
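The core of the saliency map computation can be sketched in a few lines of NumPy. The snippet below is an illustrative reimplementation of the scoring rule, not the CleverHans API: given the Jacobian of the model's outputs with respect to the input features, it scores each feature by how much increasing it helps the target class while hurting all the others. The function name and the toy Jacobian are hypothetical.

```python
import numpy as np

def saliency_scores(jacobian: np.ndarray, target: int) -> np.ndarray:
    """Score each input feature for a targeted, feature-increasing attack.

    jacobian: (num_classes, num_features) array of d(output_c)/d(x_i).
    target:   index of the class we want the model to predict.

    A feature is salient when increasing it raises the target output
    (alpha > 0) while lowering the combined other outputs (beta < 0).
    """
    alpha = jacobian[target]             # effect on the target class
    beta = jacobian.sum(axis=0) - alpha  # combined effect on all other classes
    return np.where((alpha > 0) & (beta < 0), alpha * np.abs(beta), 0.0)

# Toy Jacobian: 3 classes, 4 input features (values made up for illustration).
J = np.array([[ 0.2, -0.1,  0.5, 0.0],
              [-0.3,  0.4, -0.2, 0.1],
              [ 0.1,  0.2, -0.1, 0.3]])
scores = saliency_scores(J, target=0)
best_feature = int(np.argmax(scores))  # the feature the attack perturbs first
```

The full attack iterates this scoring step, perturbing the most salient feature(s) and recomputing the Jacobian until the model predicts the target class or a distortion budget is exhausted.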
from sklearn.datasets import make_blobs
from sklearn.cluster import dbscan
from sklearn.cluster._dbscan_inner import dbscan_inner  # private sklearn helper
from sklearn.metrics import pairwise_distances_chunked
from scipy.sparse import csr_matrix
import numpy as np
# dataset: 50,000 samples, 100 features, 50 blob centers
n = 50000
ds, _ = make_blobs(n_samples=n, n_features=100, centers=50)
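The snippet above only builds the dataset; a minimal sketch of actually clustering blobs with the functional `dbscan` API follows. A smaller 2-D dataset is used here to keep the example fast, and the `eps`/`min_samples` values are illustrative, not tuned for the 100-dimensional data above.

```python
from sklearn.datasets import make_blobs
from sklearn.cluster import dbscan

# Small, well-separated 2-D blobs so the example runs quickly.
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.5, random_state=0)

# dbscan returns the indices of core samples and a label per point;
# the label -1 marks noise points that belong to no cluster.
core_indices, labels = dbscan(X, eps=0.8, min_samples=5)
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
```

For the larger dataset, `pairwise_distances_chunked` (imported above) is the usual way to bound memory when a full pairwise distance matrix would not fit.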
import pandas as pd

def confusion_matrix(df: pd.DataFrame, col1: str, col2: str) -> pd.DataFrame:
    """
    Given a dataframe with at least two categorical columns,
    create a confusion matrix of the cross-counts of the two columns.

    use like:

    >>> confusion_matrix(df, 'actual', 'predicted')
    """
    return pd.crosstab(df[col1], df[col2])
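The cross-counts the docstring describes can be computed directly with `pd.crosstab`; a quick check on made-up data (the column names and values here are purely illustrative):

```python
import pandas as pd

# Hypothetical data: true labels vs. predicted labels.
df = pd.DataFrame({
    "actual":    ["cat", "cat", "dog", "dog", "dog"],
    "predicted": ["cat", "dog", "dog", "dog", "cat"],
})

# Rows are the values of "actual", columns the values of "predicted",
# and each cell holds the number of rows with that combination.
cm = pd.crosstab(df["actual"], df["predicted"])
```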
Non-Uniform Memory Access (NUMA) is a memory design used in multiprocessor systems in which the time to access memory depends on the memory's position relative to the processor. In a NUMA architecture, a processor accesses its local memory faster than remote memory: local memory is attached to the processor itself, while remote memory is attached to another processor. In other words, NUMA is a technique for keeping memory access efficient when multiple processors share one motherboard. Without it, when one processor saturates the shared memory bus, the other processors must sit idle waiting for it; NUMA avoids this by giving each processor memory that only it accesses locally. A processor together with its local memory is called a NUMA node.
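On Linux, the NUMA nodes the kernel exposes can be listed from sysfs. The helper below is a small illustrative sketch (the path is the standard Linux sysfs location; the function returns an empty list on systems without that tree):

```python
import os
import re

def list_numa_nodes(sysfs_path: str = "/sys/devices/system/node") -> list:
    """Return the NUMA node IDs the kernel exposes, e.g. [0, 1].

    Falls back to an empty list on systems without the sysfs tree
    (non-Linux hosts, or kernels built without NUMA support).
    """
    if not os.path.isdir(sysfs_path):
        return []
    nodes = []
    for entry in os.listdir(sysfs_path):
        m = re.fullmatch(r"node(\d+)", entry)  # directories named node0, node1, ...
        if m:
            nodes.append(int(m.group(1)))
    return sorted(nodes)

nodes = list_numa_nodes()
```

The same information, including per-node memory sizes and inter-node distances, is reported by `numactl --hardware` when the numactl package is installed.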
lspci | grep -i nvidia
01:00.0 VGA compatible controller: NVIDIA Corporation TU106 [GeForce RTX 2060 12GB] (rev a1)