Welcome to this week's Insiders Update! Insiders gain exclusive access to early previews, tutorials, updates, news, and events on my OSS work.
🐳 Become an Insider or Subscribe today 👉 through GitHub
I hope that you are keeping well. If you need to talk and socialise, feel free to join us on OpenFaaS Slack. I gave myself some time off this #FaaSFriday for a walk with my wife in the countryside.
In this update the feature focus is bare-metal automation and some of the work I've been doing recently in this area. I hope you'll learn something, whether you're using bare-metal yourself or just have a general interest in computers and what happens when you launch an instance in the cloud.
This week my talk from Software Circus was made available on YouTube. Here's the abstract:
What do you think of when you hear the term "bare-metal"? Do you imagine a distant warehouse with servers racked up to the ceiling in cages, connected together with colourful cables? This talk introduces what working with bare-metal is really like, the pros & cons along with how we can relate real servers to a Cloud Native, API-driven world.
Watch the video: Introducing Bare Metal to a Cloud Native World
If you prefer the written word, I wrote up my analysis of bare metal computing vs. serverless and cloud native for The New Stack: Bare Metal in a Cloud Native World
You may have seen me doing analysis recently around bare-metal and covering projects like Packet's open-source Tinkerbell initiative.
As part of that, I wanted to show how the power of ChatOps could improve the workflow for operators of Tinkerbell. I wrote a Slackbot in Go with the fast golang-middleware template, extending the functionality already present in the CLI and adding the ability to query logs from ElasticSearch, something which is otherwise rather cumbersome to do.
Showing the logs from ELK for the nginx index
The code orchestrates both Tinkerbell's gRPC API and ELK's REST API.
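On the ELK side, fetching logs boils down to a search request against the index. As a hedged sketch of how such a query body might be built in Python — the field names and defaults here are illustrative assumptions, not the bot's actual code:

```python
import json

def build_log_query(term, size=25):
    # Build an Elasticsearch search body that fetches the most recent
    # log lines whose message field matches `term`.
    return {
        "query": {"match": {"message": term}},
        "sort": [{"@timestamp": {"order": "desc"}}],
        "size": size,
    }

# The body would be POSTed to the index's _search endpoint, e.g.
# http://elasticsearch:9200/nginx/_search
print(json.dumps(build_log_query("error")))
```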
- If you're a Tinkerbell user, try it out and let me know what you think!
- If you're interested in trying out Tinkerbell, check out my getting started repo
- If you're an OpenFaaS user, why not develop your own Slackbot based upon my open-source repo to automate and query your own internal systems?
The python3-flask template is designed to be a drop-in replacement for the python3 template, but it has some advantages. It uses the high-performance of-watchdog, which reuses the same process memory for each request by running an HTTP server alongside your code. The second big difference is that it allows you to do more with the request and response.
- You can now work with a raw request as an input instead of only strings
- You can return a non-200 HTTP code
- You can also return custom headers
Enable raw requests by setting an environment variable:
```yaml
environment:
  RAW_BODY: True
```
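With raw requests enabled, the handler receives the body as bytes rather than a string, so binary payloads such as images survive intact. A minimal sketch (illustrative, not from the template's docs):

```python
def handle(req):
    # With RAW_BODY enabled, `req` arrives as raw bytes rather than
    # a decoded string, so no data is lost for binary payloads.
    return "received {} bytes".format(len(req))
```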
Did you know that you can do the same thing with our node12 template for JavaScript functions?
I've also added documentation showing you how, along with how to return a non-200 HTTP code or a custom header with this template.
Example of returning a custom HTTP code:

```python
def handle(req):
    return "unable to find page", 404
```
Example of returning a custom HTTP code and content-type:

```python
def handle(req):
    return '{"status": "not authorized"}', 401, {"Content-Type": "application/json"}
```
You may also like the python3-http template which is designed to give you greater control over both the HTTP request and response.
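As a hedged sketch of the event/context style that python3-http offers — check the template's README for the exact fields, as the attributes shown here are assumptions:

```python
def handle(event, context):
    # `event` carries the parsed request: attributes such as body,
    # headers, method, path and query are available directly.
    if event.method != "POST":
        return {"statusCode": 405, "body": "method not allowed"}
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "text/plain"},
        "body": "received: {}".format(event.body),
    }
```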
As of 0.8.2 the inlets-operator:
- only synchronises services of type LoadBalancer, resulting in fewer log lines
- uses the AWS HTTP web-service to detect your public IP when using inlets PRO and automatic TLS
The first person to qualify was Christopher De Vries who got everything set up very quickly 🍻
The easiest tutorial for getting started is: Expose Your IngressController and get TLS from LetsEncrypt
Run through the whole tutorial and share your HTTPS URL with me, and I'll ship you some swag for free.
I was asked whether remote desktop (RDP) or VNC would work over inlets PRO. It turns out that both work flawlessly, and I was able to help a new user get himself set up to access his servers in Scotland from Austria.
Try it for yourself with my Tweet, where I show how to run RDP or VNC within a Docker container for a virtual Linux desktop.
This week on Slack I was asked if a service on a private network could be exposed inside a remote Kubernetes cluster, but not to the Internet. The answer is yes, and this is called a "split plane" setup:
- The control-plane for inlets PRO is a websocket, secured via the built-in TLS or your own reverse proxy. It runs on port 8123, which can be specified as a command-line option to the inlets-pro server.
- The data-plane does not have to be exposed on the Internet, although it's usually useful to do so. The data-plane is what you're exposing; in this example it was ElasticSearch on port 9200.
You can expose only the secure, authenticated control-plane port and hide the data-plane port inside the cluster; it's relatively easy to do. See also:
- Our sample Kubernetes YAML files for the client/server
- Complete CLI reference guide for inlets-pro client and server
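As an illustration of the split-plane idea, the two planes could be published as separate Kubernetes Services. This is a hedged sketch only — the names and selector are assumptions, not the project's actual sample manifests:

```yaml
# Expose only the control-plane (the secure websocket on 8123)
apiVersion: v1
kind: Service
metadata:
  name: inlets-server-control
spec:
  type: LoadBalancer
  selector:
    app: inlets-server
  ports:
    - name: control
      port: 8123
      targetPort: 8123
---
# Keep the data-plane (ElasticSearch on 9200) as ClusterIP,
# reachable only from inside the cluster
apiVersion: v1
kind: Service
metadata:
  name: inlets-server-data
spec:
  type: ClusterIP
  selector:
    app: inlets-server
  ports:
    - name: data
      port: 9200
      targetPort: 9200
```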
You can run inlets-pro as a process in the terminal as and when you need it, but most of the time you will want to run it as a Kubernetes Deployment, or as a systemd unit file. This ensures the service can restart and gets its output logged centrally.
Run "inlets-pro client --generate=systemd" with all the other required parameters to generate a systemd unit file for your client.
For example:
```sh
export TOKEN="auth token"

inlets-pro client --generate=systemd \
  --license-file /var/lib/inlets-pro/LICENSE \
  --tcp-ports "80,443" \
  --connect "wss://167.99.90.104:8123/connect" \
  --token $TOKEN
```
That's it. Hope this is as useful to you as it was to Matt Brewster and other recent users on the Slack channel.
Loki is described by the Grafana team as "Like Prometheus, but for logs."
Loki provides a much faster and easier-to-use experience than ELK, and with no Java in sight, it uses far less memory for the same kind of functionality.
Download the latest version of arkade and try out the new app on your Kubernetes cluster:
```sh
arkade update
arkade install loki
```
Thanks to Alistair Hey for this one.
I'm starting to see more and more people getting stumped by the latest kubectl run changes in Kubernetes 1.18. I've done what I can to write up the workarounds and alternatives. Something else I'm excited about is the upcoming webinar with Sysdig where I'll get hands-on and explore four of the major changes in the 1.18 release.
It would be great to see you there, sign-up for free: Live Webinar - June 18, 2020 10:00am Pacific / 6:00pm BST / 7:00pm CEST
Hope you enjoyed the updates - stay safe this weekend.