sh -c "$(curl -fsSL https://raw.githubusercontent.com/robbyrussell/oh-my-zsh/master/tools/install.sh)"
- Download zsh-autosuggestions:
from tqdm import tqdm
from myapp.models import MyModel

progress_bar = tqdm(desc="Processing", total=MyModel.objects.count())
for obj in MyModel.objects.all():
    obj.process()  # do something time-consuming
    progress_bar.update(1)
progress_bar.close()
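If the per-object work is the only thing in the loop, tqdm can also wrap the iterable directly, which removes the manual update()/close() bookkeeping. A minimal sketch, with a plain range standing in for the queryset:

```python
from tqdm import tqdm

# `items` is a stand-in for MyModel.objects.all(); for a queryset you would
# still pass total=MyModel.objects.count(), since querysets have no len()
# until evaluated.
items = range(1000)
processed = 0
for item in tqdm(items, desc="Processing", total=len(items)):
    processed += 1  # stand-in for obj.process()
```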
Firstly, I strongly think that if you're working with NLP/ML/AI-related tools, getting things to work on Linux and Mac OS is much easier and saves you quite a lot of time.
Disclaimer: I am not affiliated with Continuum (conda), Git, Java, Windows, Stanford NLP, or the MaltParser group. The steps presented below are how I, IMHO, would set up a Windows computer if I owned one.
Please, please, please understand the solution; don't just copy and paste!!! We're not monkeys typing Shakespeare ;P
#!/usr/bin/expect
set timeout -1
spawn {{django_dir}}/venv/bin/python manage.py changepassword {{admin_user}}
expect {
    "Password:" { exp_send "{{admin_pass}}\r"; exp_continue }
    "Password (again):" { exp_send "{{admin_pass}}\r"; exp_continue }
    eof
}
With NLTK version 3.1 and the Stanford NER tool (2015-12-09 release), it is possible to hack the `StanfordNERTagger._stanford_jar` attribute to include the other .jar files that the new tagger needs.
First, set up the environment variables as instructed at https://github.com/nltk/nltk/wiki/Installing-Third-Party-Software
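The hack itself can be sketched as below. The paths are hypothetical placeholders for wherever you unzipped the Stanford NER tool; `nltk.internals.find_jars_within_path` collects every .jar under a directory, and `_stanford_jar` is a private NLTK 3.1 attribute, so this may break in later NLTK versions:

```python
import os
from nltk.internals import find_jars_within_path
from nltk.tag import StanfordNERTagger

def load_ner_tagger(model, ner_jar):
    """Build a StanfordNERTagger whose classpath also contains every .jar
    sitting next to stanford-ner.jar (relies on NLTK 3.1 private internals).

    `model` and `ner_jar` are hypothetical paths into the unzipped
    stanford-ner-2015-12-09 directory -- adjust them to your install.
    """
    st = StanfordNERTagger(model, path_to_jar=ner_jar)
    stanford_dir = os.path.dirname(st._stanford_jar)
    # _stanford_jar is private API: splice all sibling jars onto the classpath
    st._stanford_jar = os.pathsep.join(find_jars_within_path(stanford_dir))
    return st
```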
from __future__ import division
from sklearn.cluster import KMeans
from numbers import Number
from pandas import DataFrame
import sys, codecs, numpy

class autovivify_list(dict):
    '''Pickleable class to replicate the functionality of collections.defaultdict'''
    def __missing__(self, key):
        value = self[key] = []
        return value  # __missing__ must return the new list so d[key] works
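As a quick sanity check of the autovivify_list behaviour (note that `__missing__` has to return the freshly created list for `d[key]` to work), here is a self-contained usage sketch with made-up cluster data:

```python
import pickle

class autovivify_list(dict):
    '''Pickleable stand-in for collections.defaultdict(list).'''
    def __missing__(self, key):
        value = self[key] = []  # create and remember an empty list
        return value            # __missing__ must return it

words_by_cluster = autovivify_list()
words_by_cluster[0].append('cat')   # key 0 springs into existence
words_by_cluster[0].append('dog')
words_by_cluster[1].append('run')

# unlike a defaultdict built with a lambda factory, this pickles cleanly
restored = pickle.loads(pickle.dumps(words_by_cluster))
```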
# How to display only interesting fields for a Django Rest Framework API
# /?fields=field1,field2,field3
from rest_framework import serializers, pagination

class DynamicFieldsModelSerializer(serializers.ModelSerializer):
    """A ModelSerializer that takes an additional `fields` argument that
    controls which fields should be displayed."""
    def __init__(self, *args, **kwargs):
        fields = kwargs.pop('fields', None)
        super(DynamicFieldsModelSerializer, self).__init__(*args, **kwargs)
        if fields is not None:
            for field_name in set(self.fields) - set(fields):
                self.fields.pop(field_name)  # drop unrequested fields
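The field-pruning trick is independent of DRF itself; a minimal sketch with a plain dict, where `prune_fields` and the `requested` tuple are hypothetical names standing in for the serializer's fields and the parsed `?fields=` query parameter:

```python
def prune_fields(all_fields, requested=None):
    """Keep only the requested field names (None means keep everything).

    Mirrors the DynamicFieldsModelSerializer idea with a plain dict.
    """
    if requested is None:
        return dict(all_fields)
    keep = set(requested)
    return {name: value for name, value in all_fields.items() if name in keep}

row = {'id': 1, 'title': 'Post', 'body': 'text', 'secret': 'x'}
public = prune_fields(row, requested=('id', 'title'))
```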
I often like to start my IPython session from where I last left off - similar to saving a Firefox browsing session. IPython already saves your input history automatically so that you can look up past commands, but it doesn't save your variables. Here are the steps to save the state of your variables on exit and have them loaded on startup:
Add the save_user_variables.py script below to your IPython folder (by default $HOME/.ipython). This script takes care of saving user variables on exit.
Add this line to your profile's IPython startup script (e.g., $HOME/.ipython/profile_default/startup/startup.py):
get_ipython().ex("import save_user_variables;del save_user_variables")
In your ipython profile config file (by default $HOME/.ipython/profile_default/ipython_config.py) find the following line:
# c.StoreMagics.autorestore = False
Uncomment it and set it to True. This automatically reloads stored variables on startup. Alternatively, you can reload the last session manually using `%store -r`.
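For reference, the `%store` magic (which `StoreMagics.autorestore` builds on) saves and restores individual variables; a sketch of an interactive session with the default profile:

```
In [1]: x = 42
In [2]: %store x        # persist x in the profile's database
# ...restart IPython...
In [1]: %store -r x     # restore x (plain `%store -r` restores everything)
In [2]: x
Out[2]: 42
```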