@flaing
Created January 29, 2020 18:11
Two categories of attacks
{\bfseries Common attacks} fall into two categories:
{\bfseries Evasion Attacks:}
Evasion attacks target the trained model (test-time attacks, typically against supervised learning). In the signature-based detection context, the classic analogue is the polymorphism attack, which mutates shellcode so that no fixed signature matches it. For example, the polymorphic blending attack of Fogla et al. crafts a malicious payload whose feature vector has a high similarity score to the normal feature vector. Adversarial examples of this kind attack the ``reliability'' of the detector as a whole.
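A minimal sketch of the blending idea (not Fogla et al.'s actual algorithm; the byte-frequency features, L2 anomaly score, and greedy padding loop are illustrative assumptions): padding bytes are appended to a malicious payload so that its byte-frequency vector drifts toward a normal traffic profile.

```python
# Hypothetical sketch of polymorphic-blending evasion: greedily pad a
# malicious byte-frequency vector toward a "normal" profile so that a
# distance-based anomaly score drops. Feature choice and scoring are
# illustrative assumptions, not Fogla et al.'s construction.
import numpy as np

def blend_toward_normal(malicious_counts, normal_profile, budget):
    """Add `budget` padding bytes one at a time; each padding byte is the
    one that most reduces the L2 distance between the payload's byte
    frequencies and the normal profile. Padding can only add bytes; the
    original payload bytes are never removed (functionality preserved)."""
    counts = malicious_counts.astype(float).copy()
    for _ in range(budget):
        best_b, best_d = None, None
        for b in range(len(counts)):
            counts[b] += 1
            d = np.linalg.norm(counts / counts.sum() - normal_profile)
            counts[b] -= 1
            if best_d is None or d < best_d:
                best_b, best_d = b, d
        counts[best_b] += 1
    return counts

rng = np.random.default_rng(0)
normal = rng.dirichlet(np.ones(8))       # normal byte-frequency profile
attack = np.zeros(8); attack[3] = 20.0   # shellcode concentrated in one byte value
before = np.linalg.norm(attack / attack.sum() - normal)
padded = blend_toward_normal(attack, normal, budget=100)
after = np.linalg.norm(padded / padded.sum() - normal)
print(after < before)  # padding moves the feature vector toward the normal profile
```

The point of the sketch is the constraint structure: the attacker controls only additive padding, yet can still drive the similarity score into the normal range.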
{\bfseries Poison Attacks:}
Poisoning attacks target the learning algorithm itself. They apply chiefly to unsupervised learning, where clustering is already a highly non-trivial optimization problem. Kloft and Laskov propose a greedy-optimal strategy that displaces the centroid of an online anomaly detector toward the attack target, while each injected point still remains inside the detector's normal region. (Centroid)
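The greedy displacement can be sketched as follows (a toy model in the spirit of Kloft and Laskov, not their exact formulation; the running-mean centroid and fixed radius are simplifying assumptions):

```python
# Hypothetical sketch of greedy centroid poisoning: an online detector
# keeps a centroid c (the mean of accepted points) and flags anything
# farther than radius r. The attacker repeatedly injects the accepted
# point closest to the target, i.e. c + r * (unit vector toward target),
# so the centroid drifts toward the target by roughly r/n per injection.
import numpy as np

def poison_centroid(c, target, r, n_normal, rounds):
    points = [c.copy() for _ in range(n_normal)]  # history of normal points
    for _ in range(rounds):
        direction = target - c
        direction /= np.linalg.norm(direction)
        attack_point = c + r * direction          # just inside the normal region
        points.append(attack_point)               # accepted: within radius r of c
        c = np.mean(points, axis=0)               # centroid re-fit on poisoned data
    return c

c0 = np.array([0.0, 0.0])
target = np.array([10.0, 0.0])
c_final = poison_centroid(c0, target, r=1.0, n_normal=50, rounds=500)
print(np.linalg.norm(c_final - target) < np.linalg.norm(c0 - target))
```

Each injected point looks normal at the moment it arrives; only the cumulative drift of the centroid is malicious, which is what makes the attack hard to filter pointwise.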
Rubinstein et al. poison a PCA-based anomaly detector by skewing and stretching its principal components with chaff traffic, so that a subsequent DoS flow projects into the learned normal subspace and the detector can no longer flag it. (PCA)
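A toy illustration of the subspace-poisoning effect (in the spirit of Rubinstein et al.; the 2-D data, single principal component, and residual-norm score are simplifying assumptions): adding chaff variance along the planned attack direction rotates the top principal component toward it, shrinking the attack's residual (its PCA anomaly score).

```python
# Hypothetical sketch of PCA-subspace poisoning: chaff traffic injected
# along the planned DoS direction rotates the top principal component,
# so the DoS flow later projects into the "normal" subspace and its
# residual-based anomaly score drops.
import numpy as np

def top_pc(X):
    """Top principal component of the rows of X (via SVD of centered data)."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return vt[0]

def residual(x, pc):
    """PCA anomaly score: norm of x's component outside the top-PC subspace."""
    return np.linalg.norm(x - np.dot(x, pc) * pc)

rng = np.random.default_rng(1)
normal = rng.normal(size=(200, 2)) * np.array([3.0, 0.3])  # variance along axis 0
attack_dir = np.array([0.0, 1.0])                          # planned DoS direction
dos = 10.0 * attack_dir

clean_pc = top_pc(normal)                                  # detector trained clean
chaff = rng.normal(size=(200, 1)) * 5.0 * attack_dir       # poisoned training rows
poisoned_pc = top_pc(np.vstack([normal, chaff]))           # detector trained poisoned

print(residual(dos, poisoned_pc) < residual(dos, clean_pc))
```

The chaff rows are individually unremarkable noise; it is their collective variance along the attack direction that rotates the principal subspace and blinds the detector.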