Created June 27, 2020 17:23
Wordlist for playing Skribbl.io using Machine Learning terminology
accuracy
activation function
active learning
artificial intelligence
AUC
augmentation
augmented reality
backpropagation
bag of words
batch normalization
Bayesian neural network
bias
binary classification
binning
boosting
bounding box
broadcasting
bucketing
categorical data
centroid
class
classification
clustering
collaborative filtering
computer vision
confirmation bias
confusion matrix
continuous feature
convergence
convex function
convolutional layer
convolutional neural network
cross entropy
cross validation
dataframe
datapoint
decision boundary
decision tree
deep neural network
dense layer
discrete feature
discriminator
downsampling
dropout regularization
eager execution
embeddings
ensemble model
environment
epoch
false negative
false positive
feature engineering
feature extraction
feature vector
feedforward
generative adversarial network
generator
gradient
gradient descent
ground truth
hashing
heuristic
hidden layer
image recognition
imbalanced dataset
input layer
instance
keras
kernel
k means
k medians
k nearest neighbors
l1 regularization
l2 regularization
label
lambda
layer
learning rate
least squares regression
linear regression
logistic regression
long short term memory
loss
machine learning
matrix factorization
mean squared error
model
natural language processing
negative class
neural network
neuron
normalization
numpy
one hot encoding
online learning
output layer
overfitting
pandas
parameter
partial derivative
perceptron
performance
pipeline
pooling
positive class
precision
preprocessing
prior
q learning
random forest
recall
rectified linear unit
recurrent neural network
regression model
regularization
reinforcement learning
reward
root directory
scikit learn
sigmoid function
softmax
tensor
tensorflow
training set
true negative
true positive
underfitting
unsupervised learning
vanishing gradient problem
accuracy,activation function,active learning,artificial intelligence,AUC,augmentation,augmented reality,backpropagation,bag of words,batch normalization,Bayesian neural network,bias,binary classification,binning,boosting,bounding box,broadcasting,bucketing,categorical data,centroid,class,classification,clustering,collaborative filtering,computer vision,confirmation bias,confusion matrix,continuous feature,convergence,convex function,convolutional layer,convolutional neural network,cross entropy,cross validation,dataframe,datapoint,decision boundary,decision tree,deep neural network,dense layer,discrete feature,discriminator,downsampling,dropout regularization,eager execution,embeddings,ensemble model,environment,epoch,false negative,false positive,feature engineering,feature extraction,feature vector,feedforward,generative adversarial network,generator,gradient,gradient descent,ground truth,hashing,heuristic,hidden layer,image recognition,imbalanced dataset,input layer,instance,keras,kernel,k means,k medians,k nearest neighbors,l1 regularization,l2 regularization,label,lambda,layer,learning rate,least squares regression,linear regression,logistic regression,long short term memory,loss,machine learning,matrix factorization,mean squared error,model,natural language processing,negative class,neural network,neuron,normalization,numpy,one hot encoding,online learning,output layer,overfitting,pandas,parameter,partial derivative,perceptron,performance,pipeline,pooling,positive class,precision,preprocessing,prior,q learning,random forest,recall,rectified linear unit,recurrent neural network,regression model,regularization,reinforcement learning,reward,root directory,scikit learn,sigmoid function,softmax,tensor,tensorflow,training set,true negative,true positive,underfitting,unsupervised learning,vanishing gradient problem
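The comma-separated form above is what Skribbl.io's custom-words box accepts, and it can be produced from the one-term-per-line list with a short script. A minimal sketch (the function name is an assumption, not part of the gist):

```python
def to_skribbl_words(text: str) -> str:
    """Join a one-term-per-line wordlist into the single
    comma-separated string Skribbl.io's custom-words box expects.
    Blank lines and surrounding whitespace are dropped."""
    terms = [line.strip() for line in text.splitlines() if line.strip()]
    return ",".join(terms)

# Usage with the first few terms of the list above:
sample = "accuracy\nactivation function\nactive learning\n"
print(to_skribbl_words(sample))  # accuracy,activation function,active learning
```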