I found a bug in Azure's az CLI when it parses variables.
I was tagging a lot of resources and initially did this:
export ENV=demo
export VERSION=
export RG=rg-apps-${ENV}${VERSION}
export LOCATION=eastus
export SUBSCRIPTION="my-subscription-id"
Here's my TL;DR on Azure Service Principals, with a full example of creating one to push Docker images to Azure Container Registry.
First, you need an Azure Container Registry, so go make one. Once you're done, go to its page in the portal and click Overview > JSON View. See that Resource ID? Copy it. That will become the scope, i.e. the Azure thing we want to give our principal access to. Scopes can be set at various levels. In this example, I'm very finely scoping down to ONLY this container registry resource; this principal will have no permissions anywhere else. You can go up levels in the scope hierarchy if you want and, say, grant access to all resources in a resource group, or even a subscription. My resource ID looks a bit like this: /subscriptions/1d6a...982f/resourceGroups/my-container-registry/providers/Microsoft.ContainerRegistry/registries/registryname
Also, store the name of your registry if you want to push an image to it later.
Second, you need to determine what permissions
When I click on links from Slack or Outlook on macOS, they open in seemingly random browser windows/profiles. This is annoying. The fix: open links in a particular Google Chrome profile window. Be less annoyed.
Open chrome://version and find the desired profile. Mine was Default. Copy that profile's directory name, like Profile 2 or Default, not the profile's vanity name you see when you click on your profile icon in the browser. Then brew install finicky. After install it should be running and you should see the icon in the upper toolbar. Click the icon > Config > Create New, then edit ~/.finicky.js, filling in your profile directory name.
# Service Graph:
kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=servicegraph -o jsonpath='{.items[0].metadata.name}') 8088:8088 &
# Kiali:
kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=kiali -o jsonpath='{.items[0].metadata.name}') 20001:20001 &
# Grafana:
kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=grafana -o jsonpath='{.items[0].metadata.name}') 3000:3000 &
# Prometheus:
kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=prometheus -o jsonpath='{.items[0].metadata.name}') 9090:9090 &
#
# This will build the memtier_benchmark tool from Redis Labs and
# run it against a Redis server running on your Mac's localhost.
#
git clone https://github.com/RedisLabs/memtier_benchmark.git
cd memtier_benchmark
docker build -t memtier_benchmark .
docker run --rm -it memtier_benchmark -s host.docker.internal
"""Checks to see if media URIs listed in the given file are valid.
Warning: this is untested, shit code.
"Valid" means:
- the file is under 32 megs
- the file exists at the URI
Usage:
# check all URLs in the `your-uri-file.txt` and output failures only.
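The script body itself didn't make it into this note, so here's a minimal sketch of the check the docstring describes. The names (is_valid, check_uri) and the HEAD-request approach are my own reconstruction, not the original script's:

```python
# Hypothetical reconstruction of the checker described above;
# function names and structure are mine, not the original's.
import sys
import urllib.request

MAX_BYTES = 32 * 1024 * 1024  # "under 32 megs"

def is_valid(status, content_length, max_bytes=MAX_BYTES):
    """Pure check: the URI exists (2xx) and its reported size is under the limit."""
    if status is None or not (200 <= status < 300):
        return False
    return content_length is not None and content_length < max_bytes

def check_uri(uri):
    """HEAD the URI and apply is_valid. Returns True/False."""
    req = urllib.request.Request(uri, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            length = resp.headers.get("Content-Length")
            return is_valid(resp.status, int(length) if length is not None else None)
    except Exception:
        return False

if __name__ == "__main__" and len(sys.argv) > 1:
    # check all URIs in the given file and print failures only
    with open(sys.argv[1]) as f:
        for line in f:
            uri = line.strip()
            if uri and not check_uri(uri):
                print(uri)
```

A HEAD request keeps the check cheap: you get the status code and Content-Length without downloading the media itself, though some servers omit Content-Length, which this sketch treats as a failure.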
Here we document our file format for importing CRE data into RealMassive. Third-party data providers will be interested in this because, if they:
In [7]: %paste
"""
Testing with Berkshire-Hathaway
"""
"""
Colliers API Testing
"""
import zeep
```
17:09 $ python -mzeep 'https://listings.colliershub.com/Services/API/Currencies.svc?wsdl'
Traceback (most recent call last):
  File "/usr/local/Cellar/python/2.7.12_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 174, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/local/Cellar/python/2.7.12_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/Users/vertrees/.virtualenvs/colliers_testing/lib/python2.7/site-packages/zeep/__main__.py", line 86, in <module>
    main(args)
  File "/Users/vertrees/.virtualenvs/colliers_testing/lib/python2.7/site-packages/zeep/__main__.py", line 75, in main
```
The documentation for how to deploy a pipeline with extra, non-PyPI, pure-Python packages on GCP is missing some detail. This gist shows how to package and deploy an external pure-Python, non-PyPI dependency to a managed Dataflow pipeline on GCP.
TL;DR: Your external package needs to be a Python (source/binary) distribution, properly packaged and shipped alongside your pipeline. It is not enough to only specify a tar file with a setup.py.
Your external package must have a proper setup.py. What follows is an example setup.py for our ETL package. This is used to package version 1.1.1 of the etl library. The library requires 3 native PyPI packages to run, specified in the install_requires field. This package also ships with custom external JSON data, declared in the package_data section. Last, the setuptools.find_packages function searches for all available packages and returns them.
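A sketch of such a setup.py, following the description above. The three requirement names and the JSON data path are placeholders I've invented for illustration; the gist doesn't name the real ones:

```python
# Hypothetical setup.py for the "etl" package described above.
# The install_requires entries and the data glob are placeholder
# values, not the gist's actual dependencies.
from setuptools import find_packages, setup

setup(
    name="etl",
    version="1.1.1",
    # Three PyPI dependencies (placeholder names):
    install_requires=[
        "requests",
        "python-dateutil",
        "six",
    ],
    # Ship the package's custom external JSON data:
    package_data={"etl": ["data/*.json"]},
    # Discover every package under the project root:
    packages=find_packages(),
)
```

From there you build a source distribution (python setup.py sdist) and hand the resulting tarball to the pipeline, e.g. via Beam's extra_packages pipeline option, so the workers install it alongside your pipeline code.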