Somya Mohanty (somyamohanty)
  • Department of Computer Science, University of North Carolina - Greensboro
  • Greensboro, NC
from gensim import corpora, models, similarities

# Small example corpus of short documents.
documents = ["Human machine interface for lab abc computer applications",
             "A survey of user opinion of computer system response time",
             "The EPS user interface management system",
             "System and human system engineering testing of EPS",
             "Relation of user perceived response time to error measurement",
             "The generation of random binary unordered trees",
             "The intersection graph of paths in trees",
             "Graph minors IV Widths of trees and well quasi ordering"]
from datetime import date
from dateutil.rrule import rrule, DAILY, HOURLY

a = date(2009, 5, 30)
b = date(2009, 6, 9)

# Iterate over every day from a through b, inclusive.
for dt in rrule(DAILY, dtstart=a, until=b):
    print(dt.strftime("%Y-%m-%d"))  # 2009-05-30, 2009-05-31, ..., 2009-06-09
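HOURLY is imported but never used in the fragment; for completeness, the analogous hourly iteration between the same endpoints looks like this:

# Every hour from midnight on a through midnight on b.
for dt in rrule(HOURLY, dtstart=a, until=b):
    print(dt.strftime("%Y-%m-%d %H:%M"))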
================================================================================
Document Title: The rate of adaptation in a changing environment
Document Abstract: Global warming is a major threat to biodiversity that is without precedent. Historically, many species survived by shifting their range, but human-induced habitat loss and fragmentation now commonly restrict range shifts, forcing species to adapt in situ or face extinction. Simulations using a model that integrates population dynamics, mutation, environmental variance, and genetic change examine the relationship between the maximum phenotypic and genetic rates of change. Not surprisingly, small populations of range-bound species (10³ individuals or fewer) will be severely limited in their long-term response to very rapid climate change; however, linked populations can adapt effectively if gene flow between neighboring reserves is adequate.
Processed Document: ['global/JJ', 'warming/NN', 'be/VB', 'major/JJ', 'threat/NN', 'biodiversity/NN', 'go/VB', 'precedent/NN', ...]
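The pipeline that produced these word/TAG tokens is not shown in the gist. A hedged reconstruction with NLTK: tag, coarsen the tag, and lemmatize content words, which yields lemmas like 'be' (from "is") and 'go' (from "goes") as in the sample output. The function name, the tag mapping, and the content-word filter are all assumptions.

import nltk
from nltk.stem import WordNetLemmatizer

# Hypothetical reconstruction of the word/TAG output above.
lemmatizer = WordNetLemmatizer()
tag_to_wordnet = {'JJ': 'a', 'NN': 'n', 'VB': 'v', 'RB': 'r'}

def process(text):
    tokens = nltk.pos_tag(nltk.word_tokenize(text.lower()))
    out = []
    for word, tag in tokens:
        coarse = tag[:2]                       # e.g. VBZ -> VB, NNS -> NN
        wn_pos = tag_to_wordnet.get(coarse)
        if wn_pos:                             # keep content words only
            out.append('%s/%s' % (lemmatizer.lemmatize(word, wn_pos), coarse))
    return out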
import nltk
from nltk.collocations import BigramCollocationFinder

docs = ['hi this is a test', 'testing done for now', 'today is a test']

# Tokenize each document into a list of words.
docs_l = []
for sentence in docs:
    tokens = nltk.word_tokenize(sentence)
    docs_l.append(tokens)

# Count bigrams across all documents and score them by relative frequency.
finder = BigramCollocationFinder.from_documents(docs_l)
bigram_measures = nltk.collocations.BigramAssocMeasures()
print(finder.score_ngrams(bigram_measures.raw_freq))
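raw_freq simply ranks bigrams by relative frequency; a common follow-up, not in the original gist, is pulling the top collocations by pointwise mutual information:

# The five highest-PMI bigrams in the corpus.
print(finder.nbest(bigram_measures.pmi, 5))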
import tornado
from tornado import autoreload, ioloop, web, options, escape, websocket
import re
import json

profane_dict = dict()
re_dict = dict()

def re_compile(word_list):
    # One pattern matching any listed word at a word boundary; the
    # non-capturing group keeps \b applied to every alternative, not
    # just the first and last.
    exp = r'\b(?:%s)\b' % '|'.join(word_list)
    return re.compile(exp, re.IGNORECASE)
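The gist shows only the pattern-compilation helper. A minimal sketch of how the compiled patterns might back a websocket filter; the handler class, the route, the 'en' key in re_dict, and the redaction behavior are hypothetical, not from the original gist.

class FilterSocket(websocket.WebSocketHandler):
    # Hypothetical handler: echo each message with matched words redacted.
    def on_message(self, message):
        pattern = re_dict.get('en')  # assumed: one compiled pattern per language
        if pattern is not None:
            message = pattern.sub('***', message)
        self.write_message(message)

app = web.Application([(r'/filter', FilterSocket)])

if __name__ == '__main__':
    app.listen(8888)
    ioloop.IOLoop.instance().start()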
gnipclient.py
class PowertrackClient(object):
    """
    Auth attributes common to all gnip clients
    """
    def __init__(self, username, password, account):
        self.username = username
        self.password = password
        self.account = account
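Only the base class appears in the gist. A hedged sketch of how these credentials might be used against a Gnip PowerTrack endpoint with HTTP basic auth; the requests dependency, the subclass, and the URL template are assumptions (the real URL depends on the Gnip product and stream name).

import requests

class RulesClient(PowertrackClient):
    # Hypothetical subclass: fetch the PowerTrack rules for this account.
    # The URL template below is illustrative, not taken from the gist.
    def get_rules(self, stream='prod'):
        url = ('https://api.gnip.com/accounts/%s/publishers/twitter/'
               'streams/track/%s/rules.json' % (self.account, stream))
        resp = requests.get(url, auth=(self.username, self.password))
        resp.raise_for_status()
        return resp.json()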
Environment:
Request Method: GET
Request URL: http://localhost:8000/admin/SocialNetworkData/fb_activity/4/
Django Version: 1.5
Python Version: 2.7.2
Installed Applications:
('django.contrib.auth', ...)
from django.db import models

# FB_user is defined elsewhere in the same app's models.py.
class FB_activity(models.Model):
    investigation = models.ForeignKey("investigation.Investigation")
    author = models.ForeignKey(FB_user)
    fbid = models.TextField()
    activity_type = models.TextField()
    body = models.TextField()
    created_time = models.DateTimeField(blank=True, null=True)
    updated_time = models.DateTimeField(blank=True, null=True)
    our_updated_time = models.DateTimeField(auto_now=True)
    place = models.TextField(blank=True)
echo '>> Start of Script'

# Unique list of nodes allocated by PBS; the first becomes the Spark master.
nodes=($( cat $PBS_NODEFILE | sort | uniq ))
nnodes=${#nodes[@]}
last=$(( $nnodes - 1 ))

export SPARK_HOME=/work/{user}/sparktest/spark/

# Launch the master on the first node.
ssh ${nodes[0]} "cd ${SPARK_HOME}; ./sbin/start-master.sh"
sparkmaster="spark://${nodes[0]}:7077"
echo 'master created'

# Launch a worker on each remaining node, pointed at the master
# (assumes a Spark version whose worker launcher is sbin/start-slave.sh).
for i in $(seq 1 $last); do
    ssh ${nodes[$i]} "cd ${SPARK_HOME}; ./sbin/start-slave.sh ${sparkmaster}"
done
import tensorflow as tf

# Let TensorFlow grow GPU memory on demand instead of reserving it all.
physical_devices = tf.config.experimental.list_physical_devices('GPU')
for physical_device in physical_devices:
    tf.config.experimental.set_memory_growth(physical_device, True)

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    # Restrict TensorFlow to only allocate 1GB * 2 of memory on the first GPU
    try:
        tf.config.experimental.set_virtual_device_configuration(
            gpus[0],
            [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024 * 2)])
    except RuntimeError as e:
        # Virtual devices must be set before GPUs have been initialized
        print(e)
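A quick way to confirm the virtual device took effect (not in the original gist) is to compare physical and logical GPUs after configuration:

# One physical GPU should now expose one capped logical GPU.
logical_gpus = tf.config.experimental.list_logical_devices('GPU')
print(len(gpus), 'physical GPU(s),', len(logical_gpus), 'logical GPU(s)')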