@Erreur32
Erreur32 / 2019-https-localhost.md
Created June 16, 2022 09:39 — forked from cecilemuller/2019-https-localhost.md
How to create an HTTPS certificate for localhost domains

This guide focuses on generating certificates for local virtual hosts hosted on your own machine, for development only.

Do not use self-signed certificates in production! For certificates on public-facing sites, use Let's Encrypt instead (tutorial).
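The gist body is not reproduced here, but as a minimal sketch of the idea (the exact command and flags are my own assumption, not the gist's, and `-addext` requires OpenSSL 1.1.1 or newer), a development-only certificate for `localhost` can be generated in one shot:

```shell
# Assumed example, not from the gist: one-shot self-signed
# certificate for https://localhost, development use only.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout localhost.key -out localhost.crt \
  -subj "/CN=localhost" \
  -addext "subjectAltName=DNS:localhost,IP:127.0.0.1"
```

The resulting `localhost.crt` still has to be added to your system or browser trust store before browsers stop warning about it.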

@Erreur32
Erreur32 / git-commit-log-stats.md
Created July 19, 2021 09:07 — forked from eyecatchup/git-commit-log-stats.md
Some commands to get git commit log statistics for a repository on the command line.

git commit stats

Commands to get commit statistics for a Git repository from the command line, using git log, git shortlog and friends.
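For instance (a self-contained demo with a made-up repository and author, not taken from the gist itself), `git shortlog -sn` prints per-author commit counts and `git rev-list --count` gives the total:

```shell
# Throwaway demo repo so the commands run anywhere
# (repo name and author are hypothetical).
git init -q statsdemo
git -C statsdemo -c user.name=Alice -c user.email=alice@example.com \
    commit -q --allow-empty -m "first"
git -C statsdemo -c user.name=Alice -c user.email=alice@example.com \
    commit -q --allow-empty -m "second"

# Commits per author, sorted by count:
git -C statsdemo shortlog -sn HEAD

# Total number of commits:
git -C statsdemo rev-list --count HEAD
```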

@Erreur32
Erreur32 / php-webscraping.md
Created December 14, 2020 09:12 — forked from anchetaWern/php-webscraping.md
web scraping in php

Have you ever wanted to get specific data from another website, but there was no API available for it? That's where web scraping comes in: if the data is not exposed through an API, we can scrape it from the website itself.

But before we dive in let us first define what web scraping is. According to Wikipedia:

{% blockquote %} Web scraping (web harvesting or web data extraction) is a computer software technique of extracting information from websites. Usually, such software programs simulate human exploration of the World Wide Web by either implementing low-level Hypertext Transfer Protocol (HTTP), or embedding a fully-fledged web browser, such as Internet Explorer or Mozilla Firefox. {% endblockquote %}
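In its crudest form (a hypothetical shell sketch, not the PHP approach this gist actually uses), scraping is just fetching a page's HTML and extracting a value with text tools:

```shell
# Stand-in HTML so the example needs no network; against a real
# site you would fetch it first, e.g. html=$(curl -s https://example.com)
html='<html><head><title>Example Domain</title></head><body></body></html>'

# Pull the contents of the <title> tag out of the markup.
printf '%s\n' "$html" | sed -n 's:.*<title>\(.*\)</title>.*:\1:p'
# prints: Example Domain
```

Regex-style extraction like this is fragile; real scrapers (including the PHP libraries discussed in the gist) use a proper HTML parser instead.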

Following on from other Gists I have posted, this one shows a neat way of using Includes to centralise general blocking rules for bad bots, creepy crawlers and irritating IPs.
See the full post at http://www.blue-bag.com/blog/apache-better-blocking-common-rules
@Erreur32
Erreur32 / httpd.conf_spiders
Created July 3, 2020 18:57 — forked from gplv2/httpd.conf_spiders
Apache bot control system: filter out good and bad crawlers/webspiders (such as googlebot and bingbot) when they hit your server hard. Block them from the places robots.txt marks as off-limits (which they sometimes visit anyway).
# To relieve servers
##Imagine a robots.txt file like this (Google understands this format):
#User-agent: *
#Disallow: /detailed
#Disallow: /?action=detailed
#Disallow: /*/detailed
#Crawl-delay: 20
##
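The robots.txt rules above are advisory only; to actually enforce them, a minimal Apache 2.4 sketch (bot names and directives are my own illustration, not the gist's full ruleset, and it assumes mod_setenvif is loaded) could look like:

```apache
# Hypothetical bot names: tag matching User-Agents with an env var.
BrowserMatchNoCase "MJ12bot|AhrefsBot|SemrushBot" bad_bot

# Deny tagged bots access to the area robots.txt disallows.
<Location "/detailed">
    <RequireAll>
        Require all granted
        Require not env bad_bot
    </RequireAll>
</Location>
```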
@Erreur32
Erreur32 / updater-netdata-git.sh
Created January 24, 2019 19:15
updater-netdata-git.sh
#!/bin/bash
#
# Script Updater for netdata
#
# - Dependencies: Wring package (NPM)
#
# By Erreur32 - 2018
#
@Erreur32
Erreur32 / updater-netdata.sh
Last active December 21, 2018 19:08
updater-netdata.sh
#!/bin/bash
#
# Script Updater for netdata
#
# /!\ NEEDS dependencies:
# Wring package (NPM)
# Install:
# npm install --global wring
#
# By Erreur32 - 2018 December
#!/bin/bash
#####
#
# This script updates Plex Media Server to the latest version on Ubuntu
#
# To automatically check & update plex, run "crontab -e" and add the following lines
#
# # Check for Plex Media Server Updates every day @6:00 am
# 0 6 * * * /path/you/want/update-plexmediaserver.sh
<h1 class="alpha ">
Echo'system'
</h1>
<img class="mars" src="https://www.nasa.gov/sites/default/files/thumbnails/image/christmas2015fullmoon.jpg" alt="NASA photo of a full moon" />