@csiebler
csiebler / example.py
Last active May 11, 2022
Synthesizing 10+ min audio via real-time Speech API
# This example shows how the real-time Speech API can be used to synthesize audio files that are longer than 10 minutes
import azure.cognitiveservices.speech as speechsdk
speech_key, service_region = "xxxxxxxxxxx", "westeurope"
speech_config = speechsdk.SpeechConfig(subscription=speech_key, region=service_region)
speech_config.speech_synthesis_voice_name = "de-DE-KatjaNeural"
file_name = "outputaudio.wav"
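The preview stops before the synthesis itself. One way to stay under the per-request limits of the real-time endpoint is to split long input text into sentence-sized chunks and synthesize each one in turn; a stdlib sketch of that splitting step (the function name and the character limit are my assumptions, and the actual `speak_text_async` calls are omitted):

```python
def chunk_text(text: str, max_chars: int = 5000) -> list[str]:
    """Split text into chunks of at most max_chars, breaking on sentence boundaries.

    A single sentence longer than max_chars still becomes its own chunk;
    sentence delimiters are simplified for brevity.
    """
    chunks, current = [], ""
    for sentence in text.replace("\n", " ").split(". "):
        sentence = sentence.strip()
        if not sentence:
            continue
        candidate = (current + " " + sentence).strip()
        if len(candidate) > max_chars and current:
            chunks.append(current)
            current = sentence
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks
```

Each chunk would then be passed to the synthesizer, with the resulting audio segments concatenated into the final file.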
csiebler / text_to_speech_example.py
import requests
import azure.cognitiveservices.speech as speechsdk
# This code should run in the backend of the mobile application
headers = {
'Ocp-Apim-Subscription-Key': '<paste your key here>'
}
token_url = 'https://speechapicstest.cognitiveservices.azure.com/sts/v1.0/issuetoken'
response = requests.post(token_url, headers=headers)
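The preview ends at the POST; the response body of the STS `issuetoken` endpoint is the bearer token itself. A minimal stdlib sketch of assembling that call (the host name mirrors the gist's custom domain, and the helper name is mine; the actual POST stays with `requests` as in the gist):

```python
def build_token_request(resource_host: str, subscription_key: str):
    """Return the (url, headers) pair for the STS issuetoken endpoint."""
    url = f"https://{resource_host}/sts/v1.0/issuetoken"
    headers = {"Ocp-Apim-Subscription-Key": subscription_key}
    return url, headers

url, headers = build_token_request("speechapicstest.cognitiveservices.azure.com", "xxxx")
# requests.post(url, headers=headers).text would then yield the short-lived token
```

Keeping this on the backend means the mobile client only ever sees the short-lived token, never the subscription key.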
@csiebler
csiebler / test_read_latency.py
Created Feb 15, 2022
Test Read API performance using different methodologies
import requests
import io
import logging
import threading
import time
import concurrent.futures
import Levenshtein as lev
from datetime import datetime
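The preview shows only the imports; the `Levenshtein` import suggests the script scores the Read API's OCR output against a reference text. A pure-stdlib edit-distance sketch of that comparison step (the function is mine, standing in for `lev.distance`):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    if len(a) < len(b):
        a, b = b, a  # iterate over the longer string, keep the row short
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,            # deletion
                            curr[-1] + 1,           # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]
```

Dividing the distance by the reference length gives a rough character error rate for each methodology under test.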
@csiebler
csiebler / invoke_speech.py
Created Feb 3, 2022
Short example of how to securely use the Speech API from a frontend client
import azure.cognitiveservices.speech as speechsdk
import requests
# Access key for the Speech API from Azure (this needs to stay in the backend and must not reach the client)
access_key = 'xxxxxxxxxxxx'
region = "westeurope"
# This method should run in the backend, so that access_key is not needed in the client
def get_auth_token(access_key):
fetch_token_url = f"https://{region}.api.cognitive.microsoft.com/sts/v1.0/issueToken"
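Tokens issued by the STS endpoint are valid for about ten minutes, so a backend serving many clients would typically cache one rather than calling `issueToken` per request. A hedged sketch of that caching layer (the class name and the nine-minute refresh margin are my assumptions):

```python
import time

class TokenCache:
    """Cache a short-lived auth token and refresh it before it expires."""

    def __init__(self, fetch, ttl_seconds=9 * 60):
        self._fetch = fetch          # callable performing the issueToken POST
        self._ttl = ttl_seconds
        self._token = None
        self._expires = 0.0

    def get(self):
        now = time.monotonic()
        if self._token is None or now >= self._expires:
            self._token = self._fetch()
            self._expires = now + self._ttl
        return self._token
```

`fetch` would wrap the `requests.post(fetch_token_url, ...)` call from the gist; clients then receive only the cached token.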
@csiebler
csiebler / lexicon.xml
Created Dec 15, 2021
A lexicon for common French loanwords used in the German language
<?xml version="1.0" encoding="utf-8"?>
<lexicon xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.w3.org/2005/01/pronunciation-lexicon http://www.w3.org/TR/2007/CR-pronunciation-lexicon-20071212/pls.xsd" version="1.0" alphabet="ipa" xml:lang="de-DE" xmlns="http://www.w3.org/2005/01/pronunciation-lexicon">
<lexeme>
<grapheme>passieren</grapheme>
<phoneme>paˈsiːʁən</phoneme>
</lexeme>
<lexeme>
<grapheme>Reserve</grapheme>
<phoneme>ʁeˈzɛʁvə</phoneme>
</lexeme>
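To take effect, the lexicon is referenced from SSML via a `<lexicon uri="..."/>` element inside the `<voice>` element; a sketch, assuming the file is hosted at a publicly reachable URL (the URL below is a placeholder):

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="de-DE">
  <voice name="de-DE-KatjaNeural">
    <lexicon uri="https://example.com/lexicon.xml"/>
    Das darf nicht passieren.
  </voice>
</speak>
```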
@csiebler
csiebler / indexer.json
Created Nov 3, 2021
Run Skill on individual fields in nested array structure in Cognitive Search
{
"@odata.type": "#Microsoft.Skills.Text.V3.SentimentSkill",
"name": "#4",
"description": null,
"context": "/document/transcript/*",
"defaultLanguageCode": "en",
"modelVersion": null,
"includeOpinionMining": false,
"inputs": [
{
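The `inputs` array is cut off in the preview. With a context of `/document/transcript/*`, each input source points at a field of the individual array item, so the skill runs once per transcript entry. A hedged sketch of the remaining mapping (the `text` field name is an assumption about the transcript items):

```json
"inputs": [
  {
    "name": "text",
    "source": "/document/transcript/*/text"
  }
],
"outputs": [
  {
    "name": "sentiment",
    "targetName": "sentiment"
  }
]
```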
@csiebler
csiebler / compare.py
Last active Oct 26, 2021
Comparison of sequential vs parallel Azure Read API processing time
import requests
import io
import time
n = 50
# Enter your resource details here
url = "https://xxxxxxx.cognitiveservices.azure.com/vision/v3.2/read/analyze?language=en&pages=1&readingOrder=natural"
key = "xxxxxxx"
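The comparison itself is cut off; the usual pattern is to time one loop that POSTs the images sequentially against one run on a thread pool. A sketch of such a harness (the function name is mine; in the gist, `fn` would be the call submitting one image to the Read endpoint):

```python
import time
import concurrent.futures

def time_calls(fn, n, parallel=False, max_workers=8):
    """Run fn() n times, sequentially or on a thread pool, and return elapsed seconds."""
    start = time.perf_counter()
    if parallel:
        with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
            # map forces all n calls to complete before the pool exits
            list(pool.map(lambda _: fn(), range(n)))
    else:
        for _ in range(n):
            fn()
    return time.perf_counter() - start
```

Because the Read calls are I/O-bound, the threaded variant should finish close to `max_workers` times faster despite the GIL.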
csiebler / bing_search_example.py
import requests, json
key = "xxxxx" # Paste your API key here
url = "https://api.bing.microsoft.com/v7.0/search"
search_term = "Azure Cognitive Services"
headers = {"Ocp-Apim-Subscription-Key" : key}
params = {"q": search_term, "textDecorations": True, "textFormat": "HTML"}
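The preview stops before the request is sent; the pieces shown combine into a single GET against the v7 endpoint. A stdlib sketch of that assembly (the helper name is mine; the actual call via `requests.get(url, headers=headers, params=params)` is unchanged from the gist):

```python
from urllib.parse import urlencode

def build_bing_request(key: str, query: str):
    """Assemble the full URL and headers for a Bing Web Search v7 call."""
    params = {"q": query, "textDecorations": True, "textFormat": "HTML"}
    url = "https://api.bing.microsoft.com/v7.0/search?" + urlencode(params)
    headers = {"Ocp-Apim-Subscription-Key": key}
    return url, headers

url, headers = build_bing_request("xxxxx", "Azure Cognitive Services")
```

The JSON response carries the hits under `webPages.value`, each with `name`, `url`, and `snippet` fields.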
@csiebler
csiebler / enforce_init_script.json
Created Jul 19, 2021
Enforce an init script for Azure Machine Learning Compute Instance via Azure Policy
{
"mode": "All",
"policyRule": {
"if": {
"allOf": [
{
"field": "type",
"equals": "Microsoft.MachineLearningServices/workspaces/computes"
},
{
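The preview breaks off inside the `allOf`. A policy like this typically also matches the compute type, then denies creation when no setup script is configured. A hedged sketch of the remainder (the `setupScripts` alias path is an assumption and must be checked against the actual `Microsoft.MachineLearningServices` policy aliases before use):

```json
{
  "field": "Microsoft.MachineLearningServices/workspaces/computes/computeType",
  "equals": "ComputeInstance"
},
{
  "field": "Microsoft.MachineLearningServices/workspaces/computes/setupScripts",
  "exists": "false"
}
```

with a closing

```json
"then": {
  "effect": "deny"
}
```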
@csiebler
csiebler / package_model.py
Created May 3, 2021
Package existing model as container in Azure Machine Learning
from azureml.core import Workspace, Model
from azureml.core.model import InferenceConfig
from azureml.core.environment import Environment
from azureml.core.conda_dependencies import CondaDependencies
ws = Workspace.from_config()
env = Environment("inference-env")
env.docker.enabled = True
# Replace with your conda environment file
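The preview ends just before the packaging step; with the azureml-core SDK this presumably continues via `Model.package`. A sketch wrapped in a function so nothing runs without a workspace (the model name, entry script, and conda file path are placeholders):

```python
def package_model(ws, model_name="my-model", conda_file="environment.yml"):
    """Build a container image for an already-registered model via Model.package."""
    from azureml.core.model import InferenceConfig, Model
    from azureml.core.environment import Environment
    from azureml.core.conda_dependencies import CondaDependencies

    env = Environment("inference-env")
    env.python.conda_dependencies = CondaDependencies(
        conda_dependencies_file_path=conda_file)
    inference_config = InferenceConfig(entry_script="score.py", environment=env)
    model = Model(ws, name=model_name)
    package = Model.package(ws, [model], inference_config)
    package.wait_for_creation(show_output=True)
    return package.location  # registry path of the built image
```

The returned image location can then be pulled and run anywhere Docker is available, independent of Azure ML endpoints.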