Linwood Creekmore (linwoodc3) · United States
@evansde77
evansde77 / mock_requests.py
Last active January 31, 2024 08:49
Example of mocking requests calls
#!/usr/bin/env python
"""
mocking requests calls
"""
import mock
import unittest
import requests
from requests.exceptions import HTTPError
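The gist continues past the imports shown above. As a hedged sketch of the technique the title describes (patching requests.get so that no real HTTP call is made), a test built on those same imports might look like the following; the URL, payload, and test names are illustrative, not taken from the gist.

class TestMockedRequests(unittest.TestCase):

    @mock.patch('requests.get')
    def test_get_success(self, mock_get):
        # The patched requests.get returns a fake response object.
        mock_get.return_value = mock.Mock(
            status_code=200,
            json=mock.Mock(return_value={"key": "value"})
        )
        response = requests.get('http://example.com/api')
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response.json(), {"key": "value"})
        mock_get.assert_called_once_with('http://example.com/api')

    @mock.patch('requests.get')
    def test_get_http_error(self, mock_get):
        # Simulate a failing call by raising HTTPError from raise_for_status.
        mock_get.return_value.raise_for_status.side_effect = HTTPError("boom")
        with self.assertRaises(HTTPError):
            requests.get('http://example.com/api').raise_for_status()


if __name__ == '__main__':
    unittest.main()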
@DaniSancas
DaniSancas / neo4j_cypher_cheatsheet.md
Created June 14, 2016 23:52
Neo4j's Cypher queries cheatsheet

Neo4j Tutorial

Fundamentals

Store any kind of data using the following graph concepts (a short example through the Python driver follows the list):

  • Node: Graph data records
  • Relationship: Connect nodes (has direction and a type)
  • Property: Stores data as key-value pairs on nodes and relationships
  • Label: Groups nodes and relationships (optional)
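As a hedged illustration (not part of the original cheatsheet), the four concepts above can all appear in a single Cypher statement, shown here through the official neo4j Python driver; the bolt URI and credentials are placeholders.

from neo4j import GraphDatabase

# Placeholder connection details.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    # Two nodes carrying the Person label, a name property each, and a
    # directed KNOWS relationship that stores its own property.
    session.run(
        "CREATE (a:Person {name: $a})-[:KNOWS {since: 2016}]->(b:Person {name: $b})",
        a="Alice", b="Bob",
    )
driver.close()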
@jqtrde
jqtrde / modern-geospatial-python.md
Last active August 1, 2023 14:50
Modern remote sensing image processing with Python
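The gist itself is a markdown write-up and is not reproduced in this capture. As a hedged illustration of the kind of workflow the title refers to, assuming rasterio as the raster I/O library, reading bands and computing NDVI might look like this; the file name and band order are assumptions.

import numpy as np
import rasterio

# Assumed file name and band layout (band 3 = red, band 4 = near infrared).
with rasterio.open('scene.tif') as src:
    red = src.read(3).astype('float64')
    nir = src.read(4).astype('float64')

# Normalized difference vegetation index, guarding against division by zero.
ndvi = np.where((nir + red) == 0, 0, (nir - red) / (nir + red))
print(ndvi.min(), ndvi.max())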
@bonzanini
bonzanini / create_index.sh
Last active May 25, 2023 23:35
Elasticsearch/Python test
curl -XPOST http://localhost:9200/test/articles/1 -d '{
"content": "The quick brown fox"
}'
curl -XPOST http://localhost:9200/test/articles/2 -d '{
"content": "What does the fox say?"
}'
curl -XPOST http://localhost:9200/test/articles/3 -d '{
"content": "The quick brown fox jumped over the lazy dog"
}'
curl -XPOST http://localhost:9200/test/articles/4 -d '{
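The fourth indexing command is cut off in this capture. As a hedged companion to the curl calls above, a full-text query against the same index could be issued from Python with requests; the match-query syntax is standard, while the index and type names come from the URLs above, which assume a pre-6.x Elasticsearch with mapping types.

import requests

resp = requests.get(
    "http://localhost:9200/test/articles/_search",
    json={"query": {"match": {"content": "fox"}}},
)
resp.raise_for_status()
for hit in resp.json()["hits"]["hits"]:
    print(hit["_id"], hit["_source"]["content"])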
@dwyerk
dwyerk / concave_hulls.ipynb
Created April 12, 2014 23:20
concave hulls using shapely and scipy
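The notebook itself fails to render in this capture. As a hedged sketch of one common way to build a concave hull with shapely and scipy (an alpha shape over a Delaunay triangulation, not necessarily the notebook's exact method):

import numpy as np
from scipy.spatial import Delaunay
from shapely.geometry import Polygon
from shapely.ops import unary_union

def alpha_shape(points, alpha):
    """Concave hull: keep Delaunay triangles whose circumradius is
    below 1/alpha and dissolve them into one geometry."""
    tri = Delaunay(points)
    kept = []
    for ia, ib, ic in tri.simplices:
        pa, pb, pc = points[ia], points[ib], points[ic]
        # Side lengths, then area via Heron's formula.
        a = np.hypot(*(pa - pb))
        b = np.hypot(*(pb - pc))
        c = np.hypot(*(pc - pa))
        s = (a + b + c) / 2.0
        area = max(s * (s - a) * (s - b) * (s - c), 1e-12) ** 0.5
        if a * b * c / (4.0 * area) < 1.0 / alpha:
            kept.append(Polygon([tuple(pa), tuple(pb), tuple(pc)]))
    return unary_union(kept)

points = np.random.rand(500, 2)
hull = alpha_shape(points, alpha=3.0)
print(hull.geom_type, hull.area)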
@onyxfish
onyxfish / example1.py
Created March 5, 2010 16:51
Basic example of using NLTK for named entity extraction.
import nltk

# Read in the raw text to analyze.
with open('sample.txt', 'r') as f:
    sample = f.read()

# Split into sentences, tokenize each one, and add part-of-speech tags.
sentences = nltk.sent_tokenize(sample)
tokenized_sentences = [nltk.word_tokenize(sentence) for sentence in sentences]
tagged_sentences = [nltk.pos_tag(sentence) for sentence in tokenized_sentences]

# Chunk named entities; nltk.batch_ne_chunk was renamed to
# nltk.ne_chunk_sents in NLTK 3.
chunked_sentences = nltk.ne_chunk_sents(tagged_sentences, binary=True)
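A natural next step is to pull the entity names out of the chunk trees; as a hedged sketch of that step (the helper and traversal are illustrative, not the gist's exact code):

def extract_entity_names(tree):
    # With binary=True the chunker labels entity subtrees 'NE'.
    names = []
    if hasattr(tree, 'label') and tree.label() == 'NE':
        names.append(' '.join(token for token, tag in tree))
    else:
        for child in tree:
            if hasattr(child, 'label'):
                names.extend(extract_entity_names(child))
    return names

entity_names = []
for tree in chunked_sentences:
    entity_names.extend(extract_entity_names(tree))

print(set(entity_names))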