!pip3 install box2d
import gym
import random
import torch
import numpy as np
from collections import deque
Code snippets from Udacity Spark Course
MapReduce
The biggest difference between Hadoop and Spark is that Spark tries to do as many calculations as possible in memory, which avoids shuffling intermediate data back and forth across the cluster. Hadoop MapReduce writes intermediate results out to disk, which can be much slower. Hadoop is an older technology than Spark and one of the cornerstone big data technologies.
MapReduce versus Hadoop MapReduce
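MapReduce is a programming pattern, independent of Hadoop's implementation of it. The pattern can be sketched in plain Python with no cluster at all: a map step emits (word, 1) pairs, a shuffle groups pairs by key, and a reduce step sums each group. The two example lines below are made up for illustration.

```python
from collections import defaultdict

def map_step(line):
    # Map: emit a (word, 1) pair for every word in the line
    return [(word, 1) for word in line.split()]

def shuffle(pairs):
    # Shuffle: group all emitted values by their key
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_step(groups):
    # Reduce: sum the counts collected for each word
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["spark keeps data in memory", "hadoop writes data to disk"]
pairs = [pair for line in lines for pair in map_step(line)]
counts = reduce_step(shuffle(pairs))
print(counts["data"])  # "data" appears once in each line -> 2
```

In Hadoop or Spark the same three phases run distributed across machines; the shuffle is where data crosses the network (or, in Hadoop's case, the disk).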
Example 1, from the CVND Image Captioning project
import torch.nn as nn

class DecoderRNN(nn.Module):
    def __init__(self, embed_size, hidden_size, vocab_size, num_layers=1, dropout=0):
        super().__init__()
        self.embed_size = embed_size
        self.hidden_size = hidden_size
        self.vocab_size = vocab_size
        # Typical captioning decoder layers: embedding -> LSTM -> vocab projection
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.lstm = nn.LSTM(embed_size, hidden_size, num_layers,
                            dropout=dropout, batch_first=True)
        self.linear = nn.Linear(hidden_size, vocab_size)
Model class of a CNN in PyTorch
# with batch normalization, dropout layer and 3 convolutional layers
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
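The architecture itself is missing after the imports; a minimal sketch matching the comment above (three convolutional layers with batch normalization, one dropout layer, and a classifier head) could look like the following. The 3x32x32 input size and 10 output classes are assumptions, not from the original.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    # Sketch only: 3 conv layers + batch norm + dropout, per the comment above.
    # Input 3x32x32 and num_classes=10 are assumed for illustration.
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(16)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(32)
        self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
        self.bn3 = nn.BatchNorm2d(64)
        self.pool = nn.MaxPool2d(2, 2)
        self.dropout = nn.Dropout(0.25)
        self.fc = nn.Linear(64 * 4 * 4, num_classes)

    def forward(self, x):
        x = self.pool(F.relu(self.bn1(self.conv1(x))))  # 32x32 -> 16x16
        x = self.pool(F.relu(self.bn2(self.conv2(x))))  # 16x16 -> 8x8
        x = self.pool(F.relu(self.bn3(self.conv3(x))))  # 8x8 -> 4x4
        x = self.dropout(x.flatten(1))                  # flatten to 64*4*4
        return self.fc(x)

net = Net()
out = net(torch.zeros(2, 3, 32, 32))
print(out.shape)  # torch.Size([2, 10])
```

Each pooling step halves the spatial size, which is why the final linear layer expects 64 * 4 * 4 features.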
Finding contours, fitting an ellipse, and cropping the selected contour based on the angle of the ellipse
import numpy as np
import matplotlib.pyplot as plt
import cv2
Harris corner detection
Detect corners
import matplotlib.pyplot as plt
import numpy as np
import cv2
Canny Edge Detection
Hough Line Detection
import numpy as np
import matplotlib.pyplot as plt
import cv2
Load an image, apply the Fourier transform, apply a high-pass filter
import numpy as np
import matplotlib.pyplot as plt
import cv2
# Read in an image
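The snippet stops at the read-in comment; the frequency-domain part can be sketched with NumPy's FFT on a synthetic grayscale image (the image itself and the square-mask filter shape are assumptions):

```python
import numpy as np

# Synthetic grayscale image standing in for the one read from disk
image = np.zeros((64, 64), dtype=np.float32)
image[16:48, 16:48] = 1.0

# 2D Fourier transform, shifted so low frequencies sit at the center
f = np.fft.fftshift(np.fft.fft2(image))

# High-pass filter: zero out a small block of low frequencies at the center
mask = np.ones_like(image)
center = image.shape[0] // 2
mask[center - 4:center + 4, center - 4:center + 4] = 0.0
f_highpass = f * mask

# Back to the spatial domain: edges of the square survive, flat areas fade
filtered = np.abs(np.fft.ifft2(np.fft.ifftshift(f_highpass)))
print(filtered.shape)
```

Zeroing the center of the shifted spectrum removes the DC component and low spatial frequencies, which is what makes this a high-pass filter.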