Peker Mert Öksüz (pekermert), public gists

pekermert / my.cnf
Created February 13, 2022 20:25 — forked from fevangelou/my.cnf
Optimized my.cnf configuration for MySQL/MariaDB (on Ubuntu, CentOS, Almalinux etc. servers)
# === Optimized my.cnf configuration for MySQL/MariaDB (on Ubuntu, CentOS, Almalinux etc. servers) ===
#
# by Fotis Evangelou, developer of Engintron (engintron.com)
#
# ~ Updated December 2021 ~
#
# The settings provided below are a starting point for an 8-16 GB RAM server with 4-8 CPU cores.
# If you have different resources available, you should adjust accordingly to reduce CPU, RAM & disk I/O usage.
#
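As an illustration of the kind of adjustment the note above refers to (this example is mine, not part of the gist): the single most RAM-sensitive setting is the InnoDB buffer pool, commonly sized at 50-70% of total RAM on a dedicated database server.

```ini
[mysqld]
# Hypothetical example value for a 16 GB dedicated DB server;
# scale this down on smaller machines or shared hosts.
innodb_buffer_pool_size = 8G
```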
pekermert / Counting Valleys
Created May 28, 2019 21:13
HackerRank: Counting Valleys
# Complete the countingValleys function below.
def countingValleys(n, s):
    # Track elevation relative to sea level; a valley is closed
    # each time a 'U' step brings the hiker back up to level 0.
    level = 0
    valleys = 0
    for step in s:
        level += 1 if step == 'U' else -1
        if step == 'U' and level == 0:
            valleys += 1
    return valleys
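For reference, a self-contained one-pass version of the same count, written declaratively with `itertools.accumulate` over the elevation profile (the snake_case name is mine, not the gist's):

```python
from itertools import accumulate

def counting_valleys(steps: str) -> int:
    # Elevation after each step: +1 for 'U', -1 for 'D'.
    levels = list(accumulate(1 if s == 'U' else -1 for s in steps))
    # A valley closes each time a 'U' step lands exactly on sea level.
    return sum(1 for s, lvl in zip(steps, levels) if s == 'U' and lvl == 0)

print(counting_valleys("UDDDUDUU"))  # HackerRank sample answer: 1
```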
pekermert / sockmerchant.py
Created May 28, 2019 20:02
HackerRank solution: Sock Merchant
# Complete the sockMerchant function below.
def sockMerchant(n, ar):
    # Collect the distinct colors, then count whole pairs per color.
    colors = []
    for sock in ar:
        if sock not in colors:
            colors.append(sock)
    pairs = 0
    for color in colors:
        pairs += ar.count(color) // 2
    return pairs
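The pair count can also be computed in a single pass with `collections.Counter`: each color contributes one pair per two socks of that color. A self-contained sketch, using HackerRank's sample input:

```python
from collections import Counter

def sock_merchant(ar: list) -> int:
    # Each color contributes count // 2 complete pairs.
    return sum(count // 2 for count in Counter(ar).values())

print(sock_merchant([10, 20, 20, 10, 10, 30, 50, 10, 20]))  # sample answer: 3
```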
pekermert / dns-checker
Created June 28, 2018 17:46
DNS monitoring script
#!/bin/bash
# Monitor the DNS records of each domain in $DOMAINLIST, comparing
# against the previous run and mailing $EMAILS when something changes.
DOMAINLIST='./domains.list'   # one domain per line
MAIL='/usr/bin/mail'          # mailer used to send alerts
EMAILS='testuser@gmail.com'   # alert recipient(s)
OLDLOG='./monitordns.OLD'     # lookup results from the previous run
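The listing cuts the script off after its variables, but the OLDLOG name suggests a resolve-and-diff loop. A hypothetical sketch of that idea in Python (the function names and diff logic are my assumptions, not the gist's code):

```python
import socket

def resolve(domain: str) -> list:
    """Sorted set of addresses the domain currently resolves to."""
    try:
        return sorted({info[4][0] for info in socket.getaddrinfo(domain, None)})
    except socket.gaierror:
        return []  # resolution failed: treat as "no records"

def dns_changed(domain: str, previous: list) -> bool:
    # Alert condition: the resolved address set differs from the last run.
    return resolve(domain) != previous
```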
pekermert / prometheus.yml
Created May 29, 2018 11:39
Prometheus Kubernetes scrape config yaml
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
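The preview ends before the scrape_configs section the gist title refers to. A typical Kubernetes pod-discovery job, following the standard Prometheus example config pattern (the job name and annotation convention are the common defaults, assumed here rather than taken from the gist):

```yaml
scrape_configs:
  - job_name: 'kubernetes-pods'
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Only scrape pods annotated with prometheus.io/scrape: "true".
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
```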
pekermert / nginx-lua-s3.nginxconf
Created August 15, 2017 18:02 — forked from raucao/nginx-lua-s3.nginxconf
Nginx proxy to S3
location ~* ^/s3/(.*) {
    set $bucket '<REPLACE WITH YOUR S3 BUCKET NAME>';
    set $aws_access '<REPLACE WITH YOUR AWS ACCESS KEY>';
    set $aws_secret '<REPLACE WITH YOUR AWS SECRET KEY>';
    set $url_full "$1";

    set_by_lua $now "return ngx.cookie_time(ngx.time())";
    set $string_to_sign "$request_method\n\n\n\nx-amz-date:${now}\n/$bucket/$url_full";
    set_hmac_sha1 $aws_signature $aws_secret $string_to_sign;
    set_encode_base64 $aws_signature $aws_signature;
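The set_hmac_sha1/set_encode_base64 pair implements the legacy AWS Signature Version 2 scheme: base64(HMAC-SHA1(secret, string-to-sign)). The same computation, sketched in Python for clarity (the date here uses RFC 1123 HTTP format rather than nginx's cookie_time format, and all names are illustrative):

```python
import base64
import hashlib
import hmac
from email.utils import formatdate

def s3_v2_signature(secret_key: str, bucket: str, path: str,
                    method: str = "GET", date: str = None):
    """Return (signature, date) for the legacy AWS Signature v2 scheme."""
    if date is None:
        date = formatdate(usegmt=True)  # e.g. 'Thu, 01 Jan 2020 00:00:00 GMT'
    # Mirrors the $string_to_sign built in the nginx snippet.
    string_to_sign = f"{method}\n\n\n\nx-amz-date:{date}\n/{bucket}/{path}"
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode(), date
```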
pekermert / Logstash, Elasticsearch in an EC2_AWS enviroment
Last active September 7, 2015 12:37 — forked from gzholder/Logstash, Elasticsearch in an EC2_AWS enviroment
Logstash, Elasticsearch in an EC2_AWS enviroment
Here I will go over how to set up Logstash, Kibana, Redis, and Elasticsearch in an EC2 environment behind a public Load Balancer. The setup I'll be doing will have:
1) One server running Redis, acting as the broker/buffer that receives logs.
2) One server running Logstash to pull logs from Redis and parse/index them into Elasticsearch.
3) One server running Elasticsearch to store the logs, plus Kibana to view them in a browser.
4) One server shipping its logs with Logstash.
5) One public Load Balancer.
This may seem like a lot, but follow these steps and you'll get the hang of it :)
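The broker pattern described in steps 1, 2 and 4 maps to two small Logstash configs of that era; a hypothetical sketch (the hostnames and the Redis key are placeholders, not from the original notes):

```
# Shipper (server 4): tail local logs and push them to the Redis broker.
input  { file  { path => "/var/log/*.log" } }
output { redis { host => "redis.internal" data_type => "list" key => "logstash" } }

# Indexer (server 2): pull from Redis, then index into Elasticsearch.
input  { redis { host => "redis.internal" data_type => "list" key => "logstash" } }
output { elasticsearch { host => "es.internal" } }
```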
What you will need: