Atindra Bandi bandiatindra

View GitHub Profile
We can make this file beautiful and searchable if this error is corrected: Illegal quoting in line 2.
Date, user_id, comments
September 7 2018 4:01PM, dino001, If they keep it around in next four-five years, I think I will make myself to put on a hazmat suit and visit our friendly Kia dealer (oh, boy are they horrible here, or what) to check it out. It's got a lot of good stuff, but seems like they are still behind on a few things, such as relationship between power and gas mileage (e.g. BMW 340/440 has similar performance, but much better gas mileage as a daily driver), but with appropriate price difference, those objections and shortcomings are not insurmountable. Biggest thing will be the "first contact" with a sales person. Hope it won't start from "are you buying it today?", "what can I do to make you take it home?", or "let me wash your BMW - oh, I can't find the keys", "How about I show you the deal - square one, two, three, four".
September 7 2018 9:43PM, circlew, The lease rate is the factor that stops me cold from taking the leap. Since I don't track my cars, it would do fine for my needs but I
import numpy as np

# Taking the last convolutional layer
layer = model.layers[-4]
weights = layer.get_weights()
# Randomly initializing the weights of the last layer
new_kernel = np.random.normal(size=weights[0].shape)/(GRID_H*GRID_W)
new_bias = np.random.normal(size=weights[1].shape)/(GRID_H*GRID_W)
# Setting the weights of the last layer
layer.set_weights([new_kernel, new_bias])
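The re-initialization above can be checked in isolation. A minimal NumPy sketch — the kernel and bias shapes below are dummy stand-ins for the real layer's weights, not taken from the gist:

```python
import numpy as np

# Dummy stand-ins for the last layer's weight shapes (assumed for illustration)
GRID_H, GRID_W = 19, 19
kernel_shape, bias_shape = (1, 1, 1024, 40), (40,)

# Small random values, scaled down by the number of grid cells (19*19 = 361)
new_kernel = np.random.normal(size=kernel_shape) / (GRID_H * GRID_W)
new_bias = np.random.normal(size=bias_shape) / (GRID_H * GRID_W)
```

Dividing by the cell count keeps the fresh head's initial predictions close to zero, so training starts from a near-neutral detection head.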
LABELS = ['Shirt', 'Trousers', 'Swimwear', 'Tie', 'Bus', 'Truck', 'Train', 'Motorcycle', 'Helmet', 'Shorts', 'Airplane',
'Sunglasses', 'Jacket', 'Dress', 'Human eye', 'Suit', 'Footwear', 'Woman', 'Human face', 'Man', 'Human arm',
'Human head','Human hand', 'Human leg', 'Human nose', 'Human mouth', 'Human ear', 'Human beard', 'Human foot', 'Car',
'Wheel', 'Boat', 'House', 'Bird', 'Guitar', 'Fast food', 'Hat', 'Dog', 'Laptop', 'Beer', 'Cat', 'Lantern', 'Fountain']
# Setting the input image size to 608 x 608
IMAGE_H, IMAGE_W = 608, 608
# We will use a 19 x 19 grid per image, so each grid cell covers 608/19 = 32 pixels
GRID_H, GRID_W = 19, 19
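To sanity-check the geometry: a 608 x 608 input split into a 19 x 19 grid gives 361 cells, each covering a 32-pixel stride:

```python
IMAGE_H, IMAGE_W = 608, 608
GRID_H, GRID_W = 19, 19

stride = IMAGE_H // GRID_H   # pixels covered by one grid cell
n_cells = GRID_H * GRID_W    # total grid cells per image

print(stride, n_cells)  # -> 32 361
```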
from keras.optimizers import Adam

# Optimization Functions
optimizer = Adam(lr=0.5e-4, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
#optimizer = SGD(lr=1e-4, decay=0.0005, momentum=0.9)
#optimizer = RMSprop(lr=1e-4, rho=0.9, epsilon=1e-08, decay=0.0)
model.compile(loss=custom_loss, optimizer=optimizer)
model.fit_generator(generator = train_batch,
                    steps_per_epoch = int(len(train_batch)/16),
# Layer 20
x = Conv2D(1024, (3,3), strides=(1,1), padding='same', name='conv_20', use_bias=False)(x)
x = BatchNormalization(name='norm_20')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 21
skip_connection = Conv2D(64, (1,1), strides=(1,1), padding='same', name='conv_21', use_bias=False)(skip_connection)
skip_connection = BatchNormalization(name='norm_21')(skip_connection)
skip_connection = LeakyReLU(alpha=0.1)(skip_connection)
skip_connection = Lambda(space_to_depth_x2)(skip_connection)
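`space_to_depth_x2` is not defined in this excerpt; in YOLOv2-style models it is typically `tf.space_to_depth` with block size 2, which moves each 2 x 2 spatial block into the channel dimension so the skip connection matches the main branch's spatial resolution. A NumPy sketch of that reshuffle (an assumption about what the helper does):

```python
import numpy as np

def space_to_depth_x2(x):
    """Rearrange 2x2 spatial blocks into channels: (H, W, C) -> (H/2, W/2, 4C)."""
    h, w, c = x.shape
    return (x.reshape(h // 2, 2, w // 2, 2, c)
             .transpose(0, 2, 1, 3, 4)
             .reshape(h // 2, w // 2, 4 * c))

x = np.arange(16).reshape(4, 4, 1)
print(space_to_depth_x2(x).shape)  # (2, 2, 4)
```

Each output position holds the four values of one 2 x 2 input block, so no information is lost — it is only rearranged.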
bandiatindra / Web Scraping Selenium Edmunds.com
Last active February 16, 2020 00:05
Code to scrape 5000 comments from Edmunds.com
from selenium import webdriver
import pandas as pd

driver = webdriver.Chrome('C:/Users/bandi/Desktop/Text Analytics/TA Session/chromedriver_win32/chromedriver')
driver.get('https://forums.edmunds.com/discussion/2864/general/x/entry-level-luxury-performance-sedans/p702')
comments = pd.DataFrame(columns = ['Date','user_id','comments'])
ids = driver.find_elements_by_xpath("//*[contains(@id,'Comment_')]")
comment_ids = []
for i in ids:
    comment_ids.append(i.get_attribute('id'))
for x in comment_ids:
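The loop body is cut off in this excerpt; presumably each comment element's date, user id, and text are extracted and appended to the `comments` DataFrame. A rough sketch of that append step with dummy data standing in for the live Selenium elements — the field structure is an assumption, not the gist's actual selectors:

```python
import pandas as pd

comments = pd.DataFrame(columns=['Date', 'user_id', 'comments'])

# Dummy stand-ins for what Selenium would return per comment element;
# the real code would pull these from each comment_id on the page
scraped = [
    ('September 7 2018 4:01PM', 'dino001', 'If they keep it around...'),
    ('September 7 2018 9:43PM', 'circlew', 'The lease rate is the factor...'),
]

for date, user, text in scraped:
    comments.loc[len(comments)] = [date, user, text]

print(len(comments))  # 2
```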