Helper setup to edit .yaml files with Vim:
List of general purpose commands for Kubernetes management:
map $request_uri $request_uri_path {
    "~^(?P<path>[^?]*)(\?.*)?$" $path;
}

log_format w3cjson escape=json
    '{'
    '"Date":"$time_iso8601",'
    '"Client IP Address":"$remote_addr",'
    '"Client Username":"$remote_user",'
    '"Server IP Address":"$server_addr",'
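The `map` block above simply strips any query string from `$request_uri`, leaving the bare path in `$request_uri_path`. As a quick sanity check, the same regular expression can be exercised in Python:

```python
import re

# The same capture the nginx map uses: everything before an optional "?".
REQUEST_URI_RE = re.compile(r"^(?P<path>[^?]*)(\?.*)?$")

def uri_path(request_uri):
    """Return $request_uri with any query string removed."""
    return REQUEST_URI_RE.match(request_uri).group("path")

print(uri_path("/search?q=nginx"))  # -> /search
```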
const m = 10;        // number of tosses per round
const n = 0.4;       // probability of seeing the observed face on one toss
const k = 3;         // success = observing it at most k times
const testN = 20;    // number of simulation rounds to run
const observe = "T";
let count = 0;
let round = 0;
console.log("Number of tosses: ", m);
console.log("Observe: ", observe);
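The constants above only set the simulation up; the loop itself is not shown. A minimal self-contained sketch of what they imply (my reading: count the rounds in which the observed face appears at most k times in m tosses) could look like this in Python:

```python
import random

def simulate(m=10, p=0.4, k=3, trials=20, seed=42):
    """Fraction of rounds in which the observed face shows up at most
    k times in m tosses; p is the per-toss probability of that face.
    This reading of the constants is an assumption, not the original loop."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        observed = sum(rng.random() < p for _ in range(m))
        if observed <= k:
            hits += 1
    return hits / trials

print(simulate())  # estimated P(at most 3 tails in 10 tosses)
```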
// Note: Intl.v8BreakIterator is a non-standard V8 feature
// https://code.google.com/archive/p/v8-i18n/wikis/BreakIterator.wiki
//
// The standard replacement is Intl.Segmenter, which now ships in
// Chromium-based browsers.
//
function cut(text) {
  const iterator = new Intl.v8BreakIterator(["th"]);
  iterator.adoptText(text);
  const result = [];
  let pos = iterator.first();
  while (pos !== -1) {
    const next = iterator.next();
    if (next === -1) break;
    result.push(text.slice(pos, next));
    pos = next;
  }
  return result;
}
FROM nginx:1.14.2 AS builder
LABEL maintainer="NGINX Docker Maintainers <docker-maint@nginx.com>"
ENV NGINX_VERSION 1.14.2
ENV VTS_VERSION 0.1.18
COPY ./badproxy /etc/apt/apt.conf.d/99fixbadproxy
RUN apt-get update && apt-get install --no-install-recommends --no-install-suggests -y \
FROM nginx:alpine AS builder
# nginx:alpine contains NGINX_VERSION environment variable, like so:
# ENV NGINX_VERSION 1.15.0
# Our NCHAN version
ENV NCHAN_VERSION 1.1.15
# Download sources
RUN wget "http://nginx.org/download/nginx-${NGINX_VERSION}.tar.gz" -O nginx.tar.gz && \
def recursive_dbscan(orders, min_radius, max_radius, min_no_clusters):
    """Binary-search the DBSCAN radius: a larger radius merges points
    into fewer clusters, a smaller one splits them apart."""
    clusters = []
    while min_radius < max_radius:
        curr_radius = (min_radius + max_radius) / 2
        clusters = DBSCAN(orders, curr_radius)
        no_clusters = len(clusters)
        if no_clusters < min_no_clusters:  # not enough clusters: shrink the radius
            max_radius = curr_radius - 1
        else:
            min_radius = curr_radius + 1
    return clusters
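The radius search above can be exercised end to end with a stand-in clustering step. The assumption, matching the branch logic, is that the cluster count shrinks monotonically as the radius grows:

```python
def find_radius(min_radius, max_radius, min_no_clusters, cluster_count):
    """Binary-search the largest integer radius that still yields at least
    min_no_clusters clusters; cluster_count is a stand-in for DBSCAN."""
    best = None
    while min_radius <= max_radius:
        curr = (min_radius + max_radius) // 2
        if cluster_count(curr) < min_no_clusters:  # too few clusters
            max_radius = curr - 1                  # try a smaller radius
        else:
            best = curr                            # feasible; try a larger one
            min_radius = curr + 1
    return best

# Toy model: 20 points split into roughly 20 // radius clusters.
print(find_radius(1, 10, 5, lambda r: max(1, 20 // r)))  # -> 4
```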
For this configuration you can use any web server you like; I chose nginx because it is what I work with most.
A properly configured nginx can generally handle up to 400K to 500K requests per second (clustered); the most I have seen is 50K to 80K requests per second (non-clustered) at about 30% CPU load. Granted, that was on 2x Intel Xeon with Hyper-Threading enabled, but it works without problems on slower machines.
Keep in mind that this config is used in a testing environment, not in production, so you will need to work out how best to implement these features for your own servers.
CREATE DEFINER=`root`@`localhost` PROCEDURE `Cmonk_partition_manager`(in partition_frequency varchar(100), in db_schema varchar(100), in input_table_name varchar(100), in partition_column varchar(100))
BEGIN
    -- Author - Code Monk
    -- Version - 1.0
    -- Procedure for automated partitioning of a table
    -- Inputs:
    -- 1- partition_frequency : Options - (Daily, Weekly, Monthly)
    -- 2- db_schema : Name of the database schema
    -- 3- input_table_name : Table name
    -- 4- partition_column : Column to partition on
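The procedure body is not shown above, so for intuition only, here is a hypothetical Python sketch of the boundary arithmetic such a partition manager performs for the three supported frequencies (the dates and logic are illustrative assumptions, not taken from the SQL):

```python
from datetime import date, timedelta

def next_partition_boundary(frequency, today):
    """Hypothetical next-boundary date for Daily/Weekly/Monthly partitioning."""
    if frequency == "Daily":
        return today + timedelta(days=1)
    if frequency == "Weekly":                       # start of next week (Monday)
        return today + timedelta(days=7 - today.weekday())
    if frequency == "Monthly":                      # first day of next month
        return (today.replace(day=1) + timedelta(days=32)).replace(day=1)
    raise ValueError(f"unknown frequency: {frequency}")

print(next_partition_boundary("Monthly", date(2023, 1, 15)))  # -> 2023-02-01
```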
Generate one SEO-purpose article for the CDP provider SABLE for the keywords CDP, Customer Data Platform, and Marketing Automation, about 500 words, in MD (Markdown) format, and add the link "https://sable.asia" to the keyword CDP.
With the rise of digital marketing, customer data has become an invaluable asset for businesses to better understand their customers and cater to their needs. But managing customer data can be a daunting task. This is why many businesses are turning to a CDP (Customer Data Platform) to help them better manage and utilize customer data. A CDP is a single platform that centralizes all customer data from multiple sources and provides companies with insights to create targeted marketing campaigns, boost customer loyalty and personalize the customer experience.
A CDP is an integrated platform that collects, stores, organizes and analyzes customer data from multiple sources