This gist has been deprecated in favor of a full-featured repository.
Helper for adding global Ajax callbacks manually from the console. See https://stackoverflow.com/questions/5202296/add-a-hook-to-all-ajax-requests-on-a-page and https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/Using_XMLHttpRequest#Monitoring_progress
Install as a bookmarklet.
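A minimal sketch of the underlying idea (the function and callback names here are illustrative, not part of any standard API): patch `XMLHttpRequest.prototype.open` and `send` so every request on the page reports to user-supplied callbacks.

```javascript
// Sketch: hook an XMLHttpRequest-like class so every request reports to
// user-supplied callbacks. hookAjax and the callback shapes are
// illustrative names, not a standard API.
function hookAjax(XHR, onStart, onDone) {
  const origOpen = XHR.prototype.open;
  const origSend = XHR.prototype.send;
  XHR.prototype.open = function (method, url, ...rest) {
    this._hookInfo = { method, url };            // stash for the send hook
    return origOpen.apply(this, [method, url, ...rest]);
  };
  XHR.prototype.send = function (body) {
    onStart(this._hookInfo);
    // "loadend" fires for success, error and abort alike
    this.addEventListener("loadend", () => onDone(this._hookInfo, this.status));
    return origSend.call(this, body);
  };
}

// From a browser console or bookmarklet you would call something like:
// hookAjax(XMLHttpRequest,
//          info => console.log("AJAX start:", info.method, info.url),
//          (info, status) => console.log("AJAX done:", info.url, status));
```

Because the hook is installed on the prototype, it also catches requests made by library code (jQuery, etc.) that was loaded before the bookmarklet ran.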
#!/bin/bash
# pkcs15-tool --list-keys
# Private RSA Key [Digital Signature]: ID: 11
TEMPFILE=$(mktemp)
# Reading the PIN into an environment variable is NOT safe; DO NOT use this on a shared computer.
# read is a shell built-in; its manual can be shown with:
# help read
read -e -s -p "PIN: " PIN
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="stylesheet.xsl"?>
<pozadavky xmlns="http://calavera.info">
  <pozadavek id="1" subdodavka="true" md="1" submd="5">
    <text>Some text</text>
  </pozadavek>
  <pozadavek id="2" subdodavka="false" md="10">
    <text>Some other text</text>
    <note>Some note</note>
  </pozadavek>
</pozadavky>
Since the process of public procurement has to be transparent, the assessment of proposals is in most cases clearly defined beforehand and strictly evaluated, and it is often a complicated mathematical combination of multiple factors such as price and quality. This opens room for speculation about different strategies for winning a contract, for example low price and low quality versus higher price and higher quality. This is an example of analyzing such possibilities using the mathematical software Octave.
I'm keeping this mainly as personal notes on how to use Octave.
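As a toy illustration of such an evaluation formula (the weights and the exact form are assumed for this example, not taken from any real tender), a typical weighted score might be

```latex
% Illustrative weighted score; w_p, w_q are assumed weights, not from a real tender
S(p, q) = w_p \cdot \frac{p_{\min}}{p} + w_q \cdot \frac{q}{q_{\max}}, \qquad w_p + w_q = 1
```

where $p$ is the bid price, $p_{\min}$ the lowest price offered, $q$ the bid's quality score and $q_{\max}$ the maximum achievable quality score. With $w_p = 0.6$ and $w_q = 0.4$, a cheap low-quality bid ($p = p_{\min} = 100$, $q = 7$, $q_{\max} = 10$) scores $0.6 \cdot 1 + 0.4 \cdot 0.7 = 0.88$, while an expensive high-quality bid ($p = 125$, $q = 10$) scores $0.6 \cdot 0.8 + 0.4 \cdot 1 = 0.88$: exactly the kind of tie between opposite strategies that makes this worth exploring numerically.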
- Landauer's principle: there is a minimum energy needed to erase information, because of the laws of thermodynamics.
- Limits of Koomey's law: because of Landauer's principle, there is a limit on the number of non-reversible computations per joule.
- Margolus–Levitin theorem: even reversible computing is constrained in the number of operations per second per joule of energy.
- Bekenstein bound: the entropy contained in a given finite region of space is limited.
- Bremermann's limit: an implication of the Bekenstein bound is that there is a limit on the computational power of a computer.
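To put a number on the first item: by Landauer's principle, erasing a single bit at temperature $T$ dissipates at least $k_B T \ln 2$, which at room temperature ($T \approx 300\,\mathrm{K}$) works out to

```latex
E_{\min} = k_B T \ln 2
         \approx (1.38 \times 10^{-23}\,\mathrm{J/K}) \times (300\,\mathrm{K}) \times 0.693
         \approx 2.9 \times 10^{-21}\,\mathrm{J}
```

per bit erased.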
Standard grep has one weakness regarding log files: it doesn't handle multiline log entries well. For example, logs of Java programs are famous for having many lines of stack trace in a single log item. It is possible to use grep's -A parameter, but this also adds lines for log items that don't have any other lines in them, resulting in log entries being shown that don't contain the search term, which is very confusing.
I was so frustrated grepping through a log file once that I tried a more exotic approach using Lex, and it turned out to be extremely easy. I realized that this is in fact the raison d'être of Lex and that it was very dumb not to use it in the first place.
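The Lex scanner itself isn't reproduced here, but the record-grouping idea behind it can be sketched in a few lines of JavaScript (the log format, where each entry starts with an ISO date, is an assumption for this example):

```javascript
// Sketch: group multiline log entries, then keep only entries containing
// the search term. Assumes each entry starts with an ISO date (YYYY-MM-DD)
// and that continuation lines (e.g. a Java stack trace) belong to the
// previous entry.
function grepEntries(log, term) {
  const entries = [];
  let current = null;
  for (const line of log.split("\n")) {
    if (/^\d{4}-\d{2}-\d{2}/.test(line)) {   // start of a new entry
      if (current !== null) entries.push(current);
      current = line;
    } else if (current !== null) {
      current += "\n" + line;                 // continuation line
    }
  }
  if (current !== null) entries.push(current);
  return entries.filter(e => e.includes(term));
}
```

Matching entries are returned whole, stack trace included, and entries without the term are dropped entirely, which is exactly what grep -A cannot guarantee.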
<?xml version="1.0" encoding="UTF-8"?>
<result>
  <status>success</status>
  <html xmlns="http://www.w3.org/1999/xhtml">
    <a id="email">somebody@example.com</a>
    <span id="filename">file.ext</span>
    <table>
      <thead>
        <tr>
          <th>Dodavatel</th> <!-- Czech: "Supplier" -->