\textbf{Common Anomaly Detectors} can be classified into ``four plus one'' main categories:
\begin{enumerate}
\item Centroid-based: compute the mean $c$ of all training data points, then compute the distance of a new sample point $x$ from the centroid $c$. Its native defense is a threshold on that distance value (a minimal sketch follows this list).
\item Kernel PCA-based (Principal Component Analysis): analyze correlations among the variables and find the combinations of features that best capture the variance in the data. These combined feature values form a more compact feature space spanned by the principal components; samples that lie far from this subspace are flagged as anomalous (see the second sketch after this list).
\item N-gram-based: represent a sample by its contiguous sequences of $n$ items (e.g., byte $n$-grams of a payload) and flag samples containing sequences rarely seen in training.
\item Sanitizing training data, proposed by Cretu et al., is widely used against poisoning~\cite{cretu2008casting}.
\item GAN-based: GANs (Generative Adversarial Networks) are built from neural mechanisms: neurons, activation functions, and loss functions. In the IDS context, the discriminator is trained to imitate the black-box IDS and thereby guide the generator's training, while the generator perturbs selected features to produce adversarial traffic data. The feedback loop between the two drives training either toward a vanishing gradient or toward the attacker's objective function.
\end{enumerate}
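For concreteness, here is a minimal Python sketch of the centroid-based detector from item 1. The function names and the fixed distance threshold are illustrative assumptions, not taken from any specific system:
\begin{verbatim}
import numpy as np

def fit_centroid(X):
    # Centroid c is the mean of all training points (rows of X).
    return X.mean(axis=0)

def is_anomalous(x, c, threshold):
    # Native defense: flag x when its distance from the
    # centroid c exceeds a fixed threshold.
    return np.linalg.norm(x - c) > threshold
\end{verbatim}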
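Similarly, the PCA-based detector from item 2 can be sketched by projecting a sample onto the top-$k$ principal components and thresholding the residual. This is a simplified linear sketch, assuming the kernel step is omitted:
\begin{verbatim}
import numpy as np

def fit_pca(X, k):
    # Principal components: top-k right singular vectors
    # of the centered training matrix.
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def residual(x, mu, Vk):
    # Distance from x to the principal subspace; a large
    # residual marks x as anomalous.
    z = x - mu
    return np.linalg.norm(z - Vk.T @ (Vk @ z))
\end{verbatim}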
{\bfseries Common attacks} fall into two categories:
{\bfseries Evasion Attacks:}
Evasion attacks focus on models (i.e., attacks on supervised learning). In the signature-based context, polymorphic attacks mutate shellcode so that it no longer matches known signatures.
For example, the polymorphic blending attack of Fogla et al. crafts a malicious payload whose feature vector has a high similarity score to the normal feature vector. This kind of adversarial example attacks the detector's ``reliability'' as a whole (a simplified sketch of the blending idea follows).
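To illustrate only the feature-matching part of blending (not Fogla et al.'s full attack, which also encodes the shellcode itself), this hypothetical sketch pads a payload with bytes sampled from the normal byte-frequency profile, dragging its 1-gram histogram toward the normal feature vector:
\begin{verbatim}
import numpy as np

def blend_padding(payload, normal_profile, pad_len, seed=0):
    # normal_profile: length-256 byte-frequency distribution
    # estimated from normal traffic (assumed given).
    rng = np.random.default_rng(seed)
    pad = rng.choice(256, size=pad_len, p=normal_profile)
    # Appending profile-shaped padding raises the payload's
    # similarity score against the normal feature vector.
    return payload + pad.astype(np.uint8).tobytes()
\end{verbatim}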
{\bfseries Poison Attacks:}
Poisoning attacks focus on algorithms, and typically target unsupervised learning, because clustering itself is a highly non-trivial optimization problem. Against centroid detectors, Kloft and Laskov propose a greedy-optimal strategy that displaces the centroid towards the attack target while each injected point still lies in the region the anomaly detector accepts as normal (see the sketch below).
Against PCA detectors, Rubinstein et al. show that injecting traffic which skews or stretches the principal components lets a subsequent DoS attack evade detection, since the subspace the detector learns no longer reflects normal behavior.
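A minimal sketch of the greedy centroid-displacement strategy, assuming an online mean update and a fixed acceptance radius (both simplifications of Kloft and Laskov's setting):
\begin{verbatim}
import numpy as np

def greedy_poison_point(c, target, radius):
    # Place the injected point on the acceptance boundary,
    # as far toward the attack target as allowed.
    d = target - c
    return c + radius * d / np.linalg.norm(d)

c = np.zeros(2)                 # current centroid (toy example)
target = np.array([10.0, 0.0])  # attacker's goal
radius, n = 1.0, 100            # acceptance radius, points seen
for _ in range(50):
    p = greedy_poison_point(c, target, radius)
    c = (n * c + p) / (n + 1)   # retraining folds p into the mean
    n += 1
# c has drifted toward target, yet every p looked normal.
\end{verbatim}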