Magnus Sälgö salgo60
@osanseviero
osanseviero / app.py
Last active April 1, 2022 11:44
Simple gradio demo
%%capture
!pip install gradio transformers sentencepiece
import gradio as gr
from transformers import pipeline

# English-to-Spanish translation pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")

def predict(text):
    # The gist excerpt is truncated here; a minimal completion returns the translated text
    return pipe(text)[0]["translation_text"]

gr.Interface(fn=predict, inputs="text", outputs="text").launch()
--- # Config file for Wiki API Connector. Make sure there are no TAB characters in this YAML file!
units:
  - unit:
      name: Smithsonian American Art Museum
      api:
        api_url: https://api.si.edu/openaccess/api/v1.0/content/edanmdm:{}?api_key={}
        api_key: true
        api_key_info: https://api.data.gov/signup/
        api_key_string: TKTKTKTK # This needs to be filled in or brought in as a secret
      fields: [identifier, api_key_string]
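A minimal sketch of how a connector might expand the config's `api_url` template, assuming the `{}` slots are filled in the order given by `fields`. The `build_request_url` helper is hypothetical, not part of Wiki API Connector:

```python
# Template copied from the config above; "{}" slots are positional
api_url = "https://api.si.edu/openaccess/api/v1.0/content/edanmdm:{}?api_key={}"

def build_request_url(identifier, api_key_string):
    # Hypothetical helper: fields [identifier, api_key_string] substituted in order
    return api_url.format(identifier, api_key_string)

print(build_request_url("nmaahc_2018.59.3", "MY_KEY"))
# → https://api.si.edu/openaccess/api/v1.0/content/edanmdm:nmaahc_2018.59.3?api_key=MY_KEY
```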
@fuzheado
fuzheado / nmaahc_2018.59.3.json
Created September 21, 2021 10:58
Smithsonian API example - NMAAHC
{
  "status": 200,
  "responseCode": 1,
  "response": {
    "id": "edanmdm-nmaahc_2018.59.3",
    "title": "M1917 Revolver issued by US Army during WWI to Charles H. Houston",
    "unitCode": "NMAAHC",
    "type": "edanmdm",
    "url": "edanmdm:nmaahc_2018.59.3",
    "content": {
@frogcat
frogcat / 20190917.md
Created September 17, 2019 09:58
Wikidata on Leaflet

@yunjey
yunjey / download_flickr_image.py
Last active December 26, 2023 15:22
downloading images from flickr using python-flickr
# First, you should install flickrapi
# pip install flickrapi
import flickrapi
import urllib.request  # Python 3; the gist originally used Python 2's urllib
from PIL import Image

# Flickr API key and secret (as published in the gist)
flickr = flickrapi.FlickrAPI('c6a2c45591d4973ff525042472446ca2', '202ffe6f387ce29b', cache=True)

# Walk search results; extras='url_c' asks Flickr to include a medium-size image URL
for i, photo in enumerate(flickr.walk(text='sunset', extras='url_c', per_page=50)):
    if photo.get('url_c'):
        urllib.request.urlretrieve(photo.get('url_c'), 'image_%d.jpg' % i)
    if i >= 9:
        break
@VladimirAlexiev
VladimirAlexiev / README.md
Created March 30, 2017 06:54
How to use Google Sheets to Manage Wikidata Coreferencing

A previous post How to Add Museum IDs to Wikidata explained how to use SPARQL to find missing data on Wikidata (Getty Museum IDs), how to create such values (from museum webpage URLs) and how to format them properly for QuickStatements.

Here I explain how to use Google Sheets to manage a more advanced task. The sheet AAT-Wikidata matches about 3,000 AAT concepts to Wikipedia, WordNet 3.0, and BabelNet (it restores an old mapping to WordNet that was retrieved from BabelNet and mapped to Wikipedia).

  • For each row, it uses the following Google sheet formula (column C) to query the Wikipedia API and get the corresponding Wikidata ID (wikibase_item); split on two lines for readability:
=ImportXml(concat("https://en.wikipedia.o
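The same lookup the sheet formula performs can be sketched in Python: query the Wikipedia API for a page's `pageprops` and read the `wikibase_item` (the Wikidata Q-id). The parsing step is shown on a canned response in the documented shape rather than a live request, and the helper names are my own:

```python
from urllib.parse import urlencode

def pageprops_url(title):
    # MediaWiki query that returns only the wikibase_item page property
    params = {
        "action": "query",
        "prop": "pageprops",
        "ppprop": "wikibase_item",
        "titles": title,
        "format": "json",
    }
    return "https://en.wikipedia.org/w/api.php?" + urlencode(params)

def extract_qid(response):
    # The API keys results by internal page id, so take the first entry
    pages = response["query"]["pages"]
    page = next(iter(pages.values()))
    return page["pageprops"]["wikibase_item"]

# Canned, illustrative response ("Douglas Adams" maps to Q42 on Wikidata)
sample = {"query": {"pages": {"8091": {"pageprops": {"wikibase_item": "Q42"}}}}}
print(pageprops_url("Douglas Adams"))
print(extract_qid(sample))  # → Q42
```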
@AbdealiLoKo
AbdealiLoKo / 00 - Intro to Jupyter.ipynb
Created September 24, 2016 10:24
WIkimedia Hackathon - Bits Pilani Hyderabad Campus