@bric3
Last active March 10, 2021
Shell (commands) tips accumulated over years

Blockchain

projects like

  • we.trade
  • r3 worldwide infrastructure finance network
  • carrefour, food supply chain traceability

Enterprise Grade Blockchain

  • Hyperledger
  • corda
  • ethereum

How to transpose one's own business onto the public blockchain space?

  • Thomson Reuters

Pain points

  • Scalability
  • Confidentiality
  • Governance

The DAO (Decentralized Autonomous Organization)

Multi-blockchain

  • polkadot
  • cosmos

Asseth (ethereum france association)

DB

SQL

Define a derived table using with (CTE)

with user_count as (
  select
    date_trunc('day', created_at)::date as day,
    count(1) as value
  from users
  group by 1
)
select * from user_count;

Outlier detection with percentage thresholds

Source : https://www.periscopedata.com/blog/outlier-detection-in-sql.html

Using the previous user_count table, one could just select * from user_count where value > 1000, but a fixed threshold does not account for variation over time.

with user_count as (
  select
    date_trunc('day', created_at)::date as day,
    count(1) as value
  from users
  group by 1
), user_count_with_pct as (
  select
    day,
    value,
    value / (avg(value) over ()) as pct_of_mean
  from user_count
  order by 1
)
select * from user_count_with_pct;

The line value / (avg(value) over ()) uses a window function to divide each row’s value by the average value for the entire table.

With select * from user_count_with_pct where pct_of_mean >= 2.0, we see the days where we had 200%+ of the average signup rate. However, this requires choosing the threshold manually.

Outlier detection with standard deviation

First, we need to pick a z-score (number of standard deviations) threshold. This page from Boston University has a good explanation and z-scores for different probabilities.

For example, if we care about high or low values that occur only 5% of the time by random chance (two-tailed), we'd use a z-score threshold of +/- 1.96. If we want a 5% threshold exclusively for high values (one-tailed), we'd pick 1.645.

with data as (
  select
    date_trunc('day', created_at)::date as day,
    count(1) as value
  from disk_usage
  group by 1
), data_with_stddev as (
  select
    day,
    value,
    (value - avg(value) over ())
     / (stddev(value) over ()) as zscore
  from data
  order by 1
)
select * from data_with_stddev;

The first part of the calculation is (value - avg(value) over ()) which calculates how much a single datapoint deviates from the mean. The second part / (stddev(value) over ()) divides the deviation by the standard deviation, to measure how many standard deviations the data point is from the mean.

Here’s the outlier query for a two-tailed 5% threshold: select * from data_with_stddev where abs(zscore) >= 1.96

Oracle

Install sqlplus on Linux :

First of all, download the Oracle Instant Client RPM packages. Then install alien, so RPM packages can be installed on a Debian-based system, by typing the following command in a terminal.

sudo apt-get install alien

Once that is done, go to the folder where the rpm files are located and execute the following:

sudo alien -i oracle-instantclient-basic*.rpm
sudo alien -i oracle-instantclient-sqlplus*.rpm
sudo alien -i oracle-instantclient-devel*.rpm

You also need libaio. Type the following command to install it:

sudo apt-get install libaio1

Create the Oracle dynamic linker configuration file :

sudo sh -c 'echo "/usr/lib/oracle/12.1/client64/lib" > /etc/ld.so.conf.d/oracle.conf'
sudo ldconfig

Try to connect using:

sqlplus username/password@//dbhost:1521/SID
sqlplus testuser/password

To get history in sqlplus, install rlwrap (see doc).

SQL Random

select USER_ID
from (
  select USER_ID
  from USER_VOX
  order by DBMS_RANDOM.VALUE
)
where ROWNUM <= 30;

Explain

More info in this Oracle documentation page

EXPLAIN PLAN
  SET STATEMENT_ID = 'st1' FOR
SELECT last_name FROM employees;

EXPLAIN PLAN
   SET STATEMENT_ID = 'st1'
   INTO my_plan_table
 FOR
SELECT last_name FROM employees;

SELECT PLAN_TABLE_OUTPUT 
  FROM TABLE(DBMS_XPLAN.DISPLAY('MY_PLAN_TABLE', 'st1','TYPICAL'));

Describe inside a standard JDBC connection

select column_name, data_type from all_tab_columns where table_name = 'TABLE_NAME';

Find columns matching the data type

select
  TABLE_NAME,
  COLUMN_NAME,
  DATA_TYPE,
  DATA_TYPE_MOD,
  DATA_LENGTH,
  DATA_PRECISION,
  DATA_UPGRADED
from ALL_TAB_COLUMNS
where DATA_TYPE = 'LONG';

List long-running operations and their completion percentage (operations in the explain plan)

SELECT sid, to_char(start_time,'hh24:mi:ss') stime, message, (sofar/totalwork)*100 percent
FROM v$session_longops
WHERE sofar/totalwork <= 1;

Get the size in megabytes of tables for a particular owner

SELECT
  owner, table_name, TRUNC(sum(bytes)/1024/1024) "SIZE (MB)"
FROM (
  SELECT segment_name table_name, owner, bytes
    FROM dba_segments
  WHERE segment_type = 'TABLE'
  UNION ALL
  SELECT i.table_name, i.owner, s.bytes
    FROM dba_indexes i, dba_segments s
  WHERE s.segment_name = i.index_name
        AND   s.owner = i.owner
        AND   s.segment_type = 'INDEX'
  UNION ALL
  SELECT l.table_name, l.owner, s.bytes
    FROM dba_lobs l, dba_segments s
  WHERE s.segment_name = l.index_name
        AND   s.owner = l.owner
        AND   s.segment_type = 'LOBINDEX'
)
WHERE owner = 'ONDEV'
GROUP BY table_name, owner
HAVING SUM(bytes)/1024/1024 > 10
ORDER BY SUM(bytes) desc
;

Get the constraint (foreign key)

SELECT
  a.table_name,
  a.column_name,
  a.constraint_name,
  c.owner,
-- referenced pk
  c.r_owner,
  c_pk.table_name      r_table_name,
  c_pk.constraint_name r_pk
FROM all_cons_columns a
  JOIN all_constraints c ON a.owner = c.owner
                            AND a.constraint_name = c.constraint_name
  JOIN all_constraints c_pk ON c.r_owner = c_pk.owner
                               AND c.r_constraint_name = c_pk.constraint_name
WHERE c.constraint_type = 'R'
      AND c.constraint_name = :ConstraintName;
--       AND a.table_name = :TableName;

Type of constraint definition:

  • C (check constraint on a table)
  • P (primary key)
  • U (unique key)
  • R (referential integrity)
  • V (with check option, on a view)
  • O (with read only, on a view)

See : http://docs.oracle.com/cd/B19306_01/server.102/b14237/statviews_1037.htm

Generate user sequences DDL statements

select 'create sequence '||sequence_name||
       ' increment by '||increment_by||
       ' start with '||last_number||
       ' maxvalue '||max_value||
       decode(cycle_flag,'N',' NOCYCLE ',' CYCLE ')||
       decode(cache_size,0,'NOCACHE ','CACHE '||cache_size)
from user_sequences;

Get DDL statements

select DBMS_METADATA.get_ddl('TABLE', 'TEST') from DUAL;
select DBMS_METADATA.get_ddl('SEQUENCE', 'SEQ_AB_ADDRESS') from DUAL;
select DBMS_METADATA.get_ddl('VIEW', 'MY_TABLES') from DUAL;

Display all rows that have duplicate values

select
  *
from (select
        SERVICE_ID,
        USER_ID,
        CREATION_DATE,
        MODIFICATION_DATE,
        count(*)
        over (partition by USER_ID) CNT
      from CR_SERVICES)
where CNT > 1;
select
  USER_ID,
  count(USER_ID)
from CR_SERVICES
group by USER_ID
having count(USER_ID) > 1;

MySQL

Grant access to all machines

GRANT ALL PRIVILEGES ON *.* TO root@'%' IDENTIFIED BY '9302ty09fuy8CHIOp90u9IYFVKHi8h';

Analyze what's going on

SHOW ENGINE INNODB STATUS\G

See InnoDB monitors

SHOW FULL PROCESSLIST;


GIT

Some cheat sheets:

Tree-ish and Commit-ish


| Commit-ish/Tree-ish | Examples |
|---------------------|----------|
| 1. <sha1> | dae86e1950b1277e545cee180551750029cfe735 |
| 2. <describeOutput> | v1.7.4.2-679-g3bee7fb |
| 3. <refname> | master, heads/master, refs/heads/master |
| 4. <refname>@{<date>} | master@{yesterday}, HEAD@{5 minutes ago} |
| 5. <refname>@{<n>} | master@{1} |
| 6. @{<n>} | @{1} |
| 7. @{-<n>} | @{-1} |
| 8. <refname>@{upstream} | master@{upstream}, @{u} |
| 9. <rev>^ | HEAD^, v1.5.1^0 |
| 10. <rev>~<n> | master~3 |
| 11. <rev>^{<type>} | v0.99.8^{commit} |
| 12. <rev>^{} | v0.99.8^{} |
| 13. <rev>^{/<text>} | HEAD^{/fix nasty bug} |
| 14. :/<text> | :/fix nasty bug |

| Tree-ish only | Examples |
|---------------|----------|
| 15. <rev>:<path> | HEAD:README, :README, master:./README |

| Tree-ish? | Examples |
|-----------|----------|
| 16. :<n>:<path> | :0:README, :README |

  • Identifiers #1-14 are all "commit-ish", because they all lead to commits, but because commits also point to directory trees, they all ultimately lead to (sub)directory tree objects, and can therefore also be used as "tree-ish".
  • #15 can also be used as tree-ish when it refers to a (sub)directory, but it can also be used to identify specific files. When it refers to files, I'm not sure if it's still considered "tree-ish", or if it acts more like "blob-ish" (Git refers to files as "blobs").
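To see what object type a given notation resolves to, here's a minimal sketch using a throwaway repository (the repository, file name, and commit message are made up for illustration):

```shell
# Sketch: probe commit-ish vs tree-ish forms with git cat-file in a throwaway repo.
repo=$(mktemp -d)
git -C "$repo" init -q
echo hello > "$repo/README"
git -C "$repo" add README
git -C "$repo" -c user.name=demo -c user.email=demo@example.com commit -qm "initial"

git -C "$repo" cat-file -t HEAD            # commit-ish: prints "commit"
git -C "$repo" cat-file -t 'HEAD^{tree}'   # peeled to tree-ish (#11 style): prints "tree"
git -C "$repo" cat-file -t HEAD:README     # #15 with a file path: prints "blob"
```

cat-file -t prints the type of the object a revision expression resolves to, which makes it handy for checking what a given notation actually names.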

Use custom SSH command with git (since Git 2.3.0)

To pass a custom key, for example:

GIT_SSH_COMMAND="ssh -i $HOME/id_rsa" git ...

Clone with submodules

With version 1.6.5 of Git and later, you can use:

git clone --recursive git://github.com/foo/bar.git
cd bar

For already cloned repos, or older Git versions, just use:

git clone git://github.com/foo/bar.git
cd bar
git submodule update --init --recursive

Open file in another branch

git show the_branch:path/to/file | mate
git show the_branch:path/to/file > exported_file

the_branch can be any reference (tag, branch, commit, HEAD, etc.)

Track all remote branches

for brname in `git branch -r | grep origin | grep -v master | grep -v HEAD`;
  do git checkout --track $brname;
done

Fetches all sub repositories

find . -name .git -type d \
    | xargs -n1 -P4 -I% git --git-dir=% --work-tree=%/.. remote update --prune

Delete branch

git push origin --delete <branch>  # Git version 1.7.0 or newer
git push origin :<branch>          # Git versions older than 1.7.0

For remote tracking branch

git branch --delete --remotes <remote>/<branch>
git branch -dr <remote>/<branch>

Cherry pick range of commit (1.8.x)

git cherry-pick A..B                      # commit A will not be cherry picked

Cherry pick from another repo

git --git-dir=../some_other_repo/.git format-patch \
    --keep-subject \
    -1 \
    --stdout \
    <commit SHA> \
    | git am --3way --keep

Fix mistakes in a previous commit (say 0f0d8a27622e7bf7f008983c4b8ee23bfb9843ab) on 'master'

git checkout 0f0d8a27622e7bf7f008983c4b8ee23bfb9843ab
your_editor path/to/file
git add path/to/file
git commit --amend -v
git rebase --onto HEAD 0f0d8a27622e7bf7f008983c4b8ee23bfb9843ab master

Stash specific changes

git stash --patch

Working with submodule

Since Git 1.8.4, git submodule update can optionally clone the submodule repositories shallowly.

git submodule add -- repo path            # Add submodule (sub repo may be already cloned in path)
git submodule add --depth 1 -- repo path  # Add submodule repo with depth 1
git submodule update --depth 1 -- [paths] # Update submodule keeping a depth of 1
git submodule update --recursive --remote # Update recursively all submodules (since 1.8.2)

Update a tag's SHA-1

# Delete the tag on any remote before you push
git push origin :refs/tags/<tagname>
# Replace the tag to reference the most recent commit
git tag -fa <tagname>
# Push the tag to the remote origin
git push origin master --tags
git tag -f <tag> <commit>
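The effect can be checked in a throwaway repository (a sketch; names are made up, and a lightweight tag with plain -f is used to stay non-interactive, whereas -fa would also prompt for an annotation message):

```shell
# Sketch: force-move a tag and verify it now points at the new commit.
repo=$(mktemp -d)
g() { git -C "$repo" -c user.name=demo -c user.email=demo@example.com "$@"; }
g init -q
echo one > "$repo/f"; g add f; g commit -qm "one"
g tag v1                                   # v1 points at commit "one"
echo two > "$repo/f"; g add f; g commit -qm "two"
g tag -f v1                                # re-point v1 at HEAD ("two")
g rev-parse 'v1^{commit}'                  # same SHA-1 as HEAD now
```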

List tags and their commits

git show-ref --tags

Listing tracked files

List current tracked files.

git ls-tree -r master --name-only

List all files ever tracked.

git log --pretty=format: --name-only --diff-filter=A | sort - | sed '/^$/d'

Resetting / removing wrong commits

git reset --soft HEAD~1                   # remove last commit locally (but keep changed files)
git reset --hard HEAD~3                   # remove last 3 commits locally (and clear changed files)
git reset --hard                          # return to the last committed state
git reset --hard origin/master            # reset local repo to match remote branch, use this on failed merge

File history

git log -- the.file                       # simple file history
git log --follow -p the.file              # show the entire history of the file (including history beyond renames and with diffs for each change).

Files changed in a directory between two commits

git log --follow --name-status bytebuddy-mockmaker~31..bytebuddy-mockmaker -- test

Remote changes

git log --left-right --graph --cherry-pick --oneline HEAD...origin/master

Git apply patch (without committing) of specified files in another commit

# create the patch file
git format-patch -1 10d5a55 -- README README.md

# apply the patch without committing
git format-patch --stdout -1 10d5a55 -- README README.md | git apply

# apply the filtered patch and commit using the same message
git format-patch --stdout -1 10d5a55 -- README README.md | git am

Checkout changes from a commit

git checkout -p bc66559                   # interactive checkout from a commit (and history)
git checkout -p bc66559 -- path/file.java # interactive checkout from a file in a commit
git show topic:main.cpp > old_main.cpp   # _checkout_ other revision of a file under a new name

Diff between two commits

git diff $start_commit..$end_commit -- the.file
git diff HEAD~2 the.file
git diff HEAD^^ HEAD the.file
git diff HEAD^^..HEAD -- the.file

Git and dates

Using --since and --until options (when available). Possible aliases --before and --after

git log --since="1 week ago" --until=yesterday
git whatchanged --since="1/1/2010"

When --since or --until are not available it's possible to use the @ construct

git revert master@{"1 month ago"}         # Revert the repo as it was 1 month ago
git diff master@{"yesterday"} master@{"1 year 6 months ago"}   # Difference between master of yesterday with master of 1 year and 6 months ago

Note :

Date references such as master@{1979-02-26 18:30:00} rely on the reflog of the local repository, and reflog entries expire after 90 days by default.

In order to get a reference for any date, it's possible to use the git-rev-list tool.

git rev-list -n 1 --before="2009-07-27" master # output the sha-1 available at this date

Now it's possible to use it anywhere a reference is needed

git checkout `git rev-list -n 1 --before="2009-07-27 13:37" master`

Revert multiple adjacent commits

e.g. revert the last 3 commits; --no-commit keeps the reverting changes uncommitted.

git revert [--no-commit] HEAD~3..

Unstage

git rm --cached the.file                  # Removes the file from the staging area, leaving the file uncommitted and untracked
git reset HEAD the.file                   # Removes changes on tracked the.file from the staging area, leaving changes on the filesystem

Patches

git format-patch A^..B                    # creates range of patches (lowest wanted commit hash ^.. highest wanted commit hash)
git format-patch master --stdout > the.patch   # creates a patch file containing changes from current branch to master (excluded)
git am -3 *.patch                         # apply the patch with 3 way merge
git am --resolved                         # after resolving a conflict, continue with this
git apply --stat the.patch                # look at the patch without applying
git apply --check the.patch               # dry run of the patch
git am --signoff < the.patch              # apply the patch adding a Signed-off-by line
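A minimal round-trip between two throwaway repositories (a sketch; repository and file names are made up): create a patch in one and apply it with git am in the other, keeping the commit message.

```shell
# Sketch: move one commit between two repositories via format-patch | am.
src=$(mktemp -d); dst=$(mktemp -d)
gs() { git -C "$src" -c user.name=demo -c user.email=demo@example.com "$@"; }
gd() { git -C "$dst" -c user.name=demo -c user.email=demo@example.com "$@"; }
gs init -q; echo hi > "$src/feature.txt"; gs add feature.txt; gs commit -qm "add feature"
gd init -q; echo base > "$dst/base.txt";  gd add base.txt;  gd commit -qm "base"

gs format-patch -1 --stdout HEAD | gd am -q -3   # apply with 3-way merge fallback
gd log -1 --format=%s                            # prints "add feature"
```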

Dry runs

For git merge

git merge --no-commit --no-ff <BRANCH>    # Avoid commit and fast-forward commit
git merge --abort                         # Then to unstage automatically merged files

List branches containing a commit

git branch -r --contains <commit>

Commit difference between two branches containing the same changes (useful with cherry-pick)

Branch release-4.x contains :

f06e4df25724ad0dd51702a10f075d39368e1963 - Added zoom
2a446b1a19253a69c4bb133eedb311c14b2906e8 - Added bar
1afda04ccbf2f834663ca8ec3eaf6e3b917581fb - Added foo

In master :

8c71e1b2232c1a524e1de20553180676fb971f86 - Amended. This was Added bar
1afda04ccbf2f834663ca8ec3eaf6e3b917581fb - Added foo
  • 1afda04ccbf2f834663ca8ec3eaf6e3b917581fb is both in master and release-4.x
  • 2a446b1a19253a69c4bb133eedb311c14b2906e8 was cherry-picked or merged on master then amended with a changed title
  • f06e4df25724ad0dd51702a10f075d39368e1963 only in release-4.x
git cherry -v master release-4.x
- 2a446b1a19253a69c4bb133eedb311c14b2906e8 Added bar
+ f06e4df25724ad0dd51702a10f075d39368e1963 Added zoom

Meaning that :

  • 2a446b1a19253a69c4bb133eedb311c14b2906e8 changes exist in master but the commit SHA-1 differs
  • f06e4df25724ad0dd51702a10f075d39368e1963 changes are not present in master

git cherry release-4.x master -v
- 8c71e1b2232c1a524e1de20553180676fb971f86 Amended. This was Added bar

Means that :

  • 8c71e1b2232c1a524e1de20553180676fb971f86 equivalent exists in release-4.x
  • all other commits in master have an equivalent change in release-4.x
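The scenario above can be reproduced in a throwaway repository (a sketch; branch, file, and message names are made up) to see the - marker for an amended cherry-pick:

```shell
# Sketch: git cherry marks patch-equivalent commits with "-" even after amending the message.
repo=$(mktemp -d)
g() { git -C "$repo" -c user.name=demo -c user.email=demo@example.com "$@"; }
g init -q
echo foo > "$repo/foo"; g add foo; g commit -qm "Added foo"
main=$(g symbolic-ref --short HEAD)        # default branch name varies across git versions
g checkout -qb release
echo bar > "$repo/bar"; g add bar; g commit -qm "Added bar"
g checkout -q "$main"
g cherry-pick release >/dev/null           # copy "Added bar" with a new SHA-1
g commit -q --amend -m "Amended. This was Added bar"

g cherry -v "$main" release                # "-": the change already exists upstream
```

git cherry compares patch-ids, not SHA-1s, which is why the amended message does not affect the result.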

Mass reword commit in a branch

git filter-branch \
  --msg-filter 'sed "s/\[WIP]/\[EDGE-933]/g"' \
  master..gcp-migration

May need --force if a previous backup already exists under refs/original

Changing the author for all commits with a given name

git filter-branch --commit-filter '
        if [ "$GIT_COMMITTER_NAME" = "<Old Name>" ];
        then
                GIT_COMMITTER_NAME="<New Name>";
                GIT_AUTHOR_NAME="<New Name>";
                GIT_COMMITTER_EMAIL="<New Email>";
                GIT_AUTHOR_EMAIL="<New Email>";
                git commit-tree "$@";
        else
                git commit-tree "$@";
        fi' HEAD

Creates new branch from stashed changes

git stash branch testchanges

Rebase from remote to local branch

git fetch origin                          # Updates origin/master
git rebase origin/master                  # Rebases current branch onto origin/master

Push to a specific upstream branch

git push -v --set-upstream origin release-s18:release-s18
git push --progress origin release-s18:release-s18

Interactive rebase with auto squash if the commit messages begin with squash! or fixup!

git rebase --interactive --autosquash

see the Stackoverflow answer for Trimming GIT Checkins/Squashing GIT History
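A non-interactive sketch of the fixup!/autosquash workflow in a throwaway repository (names are made up; GIT_SEQUENCE_EDITOR=true accepts the generated todo list as-is):

```shell
# Sketch: --autosquash folds a fixup! commit into its target non-interactively.
repo=$(mktemp -d)
g() { git -C "$repo" -c user.name=demo -c user.email=demo@example.com "$@"; }
g init -q
echo base > "$repo/f"; g add f; g commit -qm "base"
echo v1 > "$repo/f";   g add f; g commit -qm "feature"
echo v2 > "$repo/f";   g add f; g commit -q --fixup HEAD   # creates a "fixup! feature" commit

# Accept the autosquashed todo list unedited, so the rebase runs without interaction.
GIT_SEQUENCE_EDITOR=true git -C "$repo" -c user.name=demo -c user.email=demo@example.com \
    rebase --interactive --autosquash HEAD~2

g log --format=%s                          # prints "feature" then "base"
```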

Make an existing branch track a remote branch

In git 1.8.x

git branch -u upstream/foo                                    # in current branch
git branch -u upstream/foo foo                                # if local branch foo is not the current branch

Same but longer version

git branch --set-upstream-to=origin/release-s17               # in current local branch
git branch --set-upstream-to=origin/release-s17 release-s17   # in specified local branch

In git 1.7.x (deprecated)

git branch --set-upstream foo upstream/foo

Using git rebase --onto

Original info here.

Using the 2 arg form

To remove commits in between.

Given this layout

          master
            ↓
A--B--C--D--E

If commit C needs to go away, it is possible to do an interactive rebase with git rebase -i C~1 and remove the line referencing commit C.

There's a quicker, non-interactive way using git rebase --onto: it changes the base of a commit, or put in other words, it rebases it.

git rebase --onto B C                                         # makes commit B the new base of commit D, commit C being the previous base of D

Of course it is possible to use backreferences :

git rebase --onto B D~1                                       # makes commit B the new base of commit D, D~1 being the previous base of D

After the rebase the repo will be in the following state, meaning everything between B (excluded) and D (excluded) goes away, in this case only commit C.

        master
          ↓
A--B--D'--E'

D' and E' being the rewritten commits of D and E (new SHA1s).
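The same operation can be replayed in a throwaway repository (a sketch; commit subjects a to e stand in for A to E, file names are made up):

```shell
# Sketch: drop a middle commit with the two-arg form of rebase --onto.
repo=$(mktemp -d)
g() { git -C "$repo" -c user.name=demo -c user.email=demo@example.com "$@"; }
c() { echo "$1" > "$repo/$1"; g add "$1"; g commit -qm "$1"; }
g init -q
c a; c b; c c; c d; c e                    # history: a--b--c--d--e

g rebase -q --onto HEAD~3 HEAD~2           # new base b (HEAD~3), old base c (HEAD~2)
g log --format=%s                          # prints e d b a  (c is gone)
```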

To move a branch to another base

Given the following structure, supposing topicA should become a feature branch of versionA, on top of commit J

           versionA
              ↓
        H--I--J
       /
A--B--F--G ← master
    \
     C--D--E
           ↑
         topicA

Then instead of cherry-picking commits C to E of topicA to a topicA' branch starting from commit J, it is possible to use git rebase --onto

git rebase --onto J C~1

After the rebase, the structure will be

                C'--D'--E' ← topicA
               /
        H--I--J ← versionA
       /
A--B--F--G ← master

Never use the one arg form

git rebase --onto master

has the same effect as

git reset --hard master

meaning all commits in your branch disappear; the HEAD of the branch is reset to the HEAD of master.

These commits remain in the object database until garbage collection runs, and their SHA-1s are accessible using git reflog.

Using the 3 arg form

Supposing a feature has been developed on the topicA branch

A--B--C--D--E ← master
    \
     F--G--H ← topicA

and as topicA is not yet ready to be merged into master, another continuation of the feature needs to be developed using commits from topicA

A--B--C--D--E ← master
    \
     F--G--H ← topicA
            \
             I--J--K--L--M ← topicB

However topicA's history needs to be reworked (commits too big, unneeded files, changes not related to topicA, etc.). So now topicA has a completely different history

A--B--C--D--E ← master
   |\
   | F--G--H--I--J--K--L--M ← topicB
    \
     N--O--P--Q--R--S--T--U--V--W ← topicA, origin/topicA, proper-history-topicA

Now commits in topicB should be moved on top of proper-history-topicA and they should become part of the proper-history-topicA branch.

Using

git rebase --onto proper-history-topicA I~1 topicB

Where proper-history-topicA is the newbase, I~1 is the oldbase, and topicB is the branch whose commits will be replayed on top of the newbase.

It will give the following result :

A--B--C--D--E ← master
    \
     N--O--P--Q--R--S--T--U--V--W--I'--J'--K'--L'--M' ← topicB

The five commits from topicB (I through M) get played on top of proper-history-topicA, creating I’, J’, K’, L’, and M’; topicB now points at M’ (topicA and proper-history-topicA still point at W).
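A reduced sketch of the three-arg form in a throwaway repository (two commits per branch instead of the diagram's; branch and file names are made up):

```shell
# Sketch: the three-arg rebase --onto moves topicB's own commits onto another base.
repo=$(mktemp -d)
g() { git -C "$repo" -c user.name=demo -c user.email=demo@example.com "$@"; }
c() { echo "$1" > "$repo/$1"; g add "$1"; g commit -qm "$1"; }
g init -q
c A
g checkout -qb topicA; c F; c G            # topicA: A--F--G
g checkout -qb topicB; c I; c J            # topicB: A--F--G--I--J
g checkout -qb proper "$(g rev-parse topicA~2)"   # reworked history branch, from A
c N; c O                                   # proper: A--N--O

# newbase=proper, oldbase=topicB~2 (commit G), branch to move=topicB
g rebase -q --onto proper topicB~2 topicB
g log --format=%s topicB                   # prints J I O N A
```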

Marking last 2 commits as another branch

Suppose you have the following history :

A-B-C-D-E (HEAD)
        ↑
       main

After git branch newBranch :

    newBranch
        ↓
A-B-C-D-E (HEAD)
        ↑
       main

After git reset --hard HEAD~2 :

    newBranch
        ↓
A-B-C-D-E
    ↑
   main (HEAD)

Finally git checkout newBranch
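The three steps can be replayed in a throwaway repository (a sketch; commit subjects 1 to 5 stand in for A to E):

```shell
# Sketch: move the last 2 commits to a new branch and rewind the current one.
repo=$(mktemp -d)
g() { git -C "$repo" -c user.name=demo -c user.email=demo@example.com "$@"; }
c() { echo "$1" > "$repo/f$1"; g add "f$1"; g commit -qm "$1"; }
g init -q
c 1; c 2; c 3; c 4; c 5
start=$(g symbolic-ref --short HEAD)       # default branch name varies across git versions

g branch newBranch                         # newBranch marks the current tip (commit 5)
g reset -q --hard HEAD~2                   # current branch rewinds to commit 3
g checkout -q newBranch                    # continue working on newBranch at commit 5

g log -1 --format=%s "$start"              # prints 3
g log -1 --format=%s newBranch             # prints 5
```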

Apply a fix for all commits during rebase

COMMAND="find . -not -path '*/\.*' -type f \( -name '*.java' -or -name 'pom.xml' \) -perm +111 -exec chmod -x {} \;"
git rebase --interactive 757dd8b923136fc350231106808b2fd1b955bac5^ \
           --exec "$COMMAND && git add -u && git commit --amend --no-edit"

The git rebase command allows passing multiple --exec commands; however, the rebase process will stop if files are left modified by those commands, or if a command fails. In this case, since $COMMAND modifies files, it is necessary to commit the changes in the same --exec.

Move / set branch reference to specific commit

git branch -f branch-name new-tip-commit  # Force the branch-name head to new-tip-commit

For any references (for those that are not branch)

git update-ref -m "reset: branch-name to new-tip-commit" branch-name new-tip-commit

Checkout Github's pull requests locally

See this gist. Also take a look at this github help page.

Locate the section for your github remote in the .git/config file. It looks like this:

[remote "origin"]
	url = git@github.com:mockito/mockito.git
	fetch = +refs/heads/*:refs/remotes/origin/*

Now add the line fetch = +refs/pull/*/head:refs/remotes/origin/pr/* to this section. It ends up looking like this:

[remote "origin"]
	url = git@github.com:mockito/mockito.git
	fetch = +refs/heads/*:refs/remotes/origin/*
	fetch = +refs/pull/*/head:refs/remotes/origin/pr/*

Now fetch all the pull requests:

$ git fetch origin
From github.com:joyent/node
 * [new ref]         refs/pull/1000/head -> origin/pr/1000
 * [new ref]         refs/pull/1002/head -> origin/pr/1002
 * [new ref]         refs/pull/1004/head -> origin/pr/1004
 * [new ref]         refs/pull/1009/head -> origin/pr/1009
 ...

To check out a particular pull request:

$ git checkout pr/999
Branch pr/999 set up to track remote branch pr/999 from origin.
Switched to a new branch 'pr/999'

Getting the current branch

git rev-parse --abbrev-ref HEAD           # display the current branch
git symbolic-ref --short HEAD             # same
git rev-parse --symbolic-full-name --abbrev-ref @{u} # display remote branch

Commit on a specified date

The following command allows defining the date for both the author date and the commit date, as the --date option only operates on the author date.

GIT_AUTHOR_DATE="Wed Oct 30 10:51:12 CET 2013" GIT_COMMITTER_DATE="$GIT_AUTHOR_DATE" git commit ....

Checkout (and merge) a branch from a fork

  1. On the target repo and branch, create the new branch from the target (for example master)

    git checkout -b marcingrzejszczak-issue399_generics master
  2. Fetch the changes from the fork's branch

    git pull git@github.com:marcingrzejszczak/mockito.git issue399_generics
  3. Eventually merge the changes

    git checkout master
    git merge marcingrzejszczak-issue399_generics
    git push origin master

Bisect wrong commit

# start bisect current HEAD is bad, v2.0.26-beta is good
git bisect start
git bisect bad
git bisect good v2.0.26-beta

# or shorter
git bisect start HEAD v2.0.26-beta

# then automate the search with a script that exits 0 if the project is good, or non-0 if the project is bad
git bisect run ./gradlew :test --tests "org.mockitousage.bugs.ConfusedSignatureTest"
git bisect reset

Search lost commits

Searching in the reflog

git log -g --grep="<some string from your commit message>"    # search all commits that matches the given text
git log --all --grep="<some string from your commit message>" # search all commits in every branch that matches the given text
git fsck --full --no-reflogs --unreachable --lost-found       # find every unreachable commits, or blobs that are not commits
ls -1 .git/lost-found/commit/ | xargs -n 1 git log -n 1 --pretty=oneline

It might be useful to have a look at the reflog as well.

git reflog

Search changed lines containing a string since a date

git log -G@Value --since=2014.5.16 --patch-with-stat \
    | grep "^[+-].*@Value" \
    | sed 's/^\([+-]\).*\${\([a-z0-9:.-]*\).*/\2 \1/' \
    | sort \
    | uniq

Changed files in a branch

git diff --name-only the-branch $(git merge-base the-branch master)

Get the commit from which a branch was created

git show-branch --merge-base feature origin/master
git merge-base feature origin/master

Get the first commit in a branch

git rev-list ^master branch-name | tail -n 1

git rev-list master branch-name will give all commits reachable from both master and branch-name, not the thing we want.

With git rev-list there's a special syntax to exclude commits reachable from a branch; place a ^ in front of the branch. git rev-list ^master branch-name will show all commits reachable in branch-name but not in master. As we want the first commit of branch-name, the command is piped to tail -n 1.

Note that these commands are virtually the same :

git rev-list ^master branch-name | tail -n 1
git rev-list branch-name ^master | tail -n 1
git rev-list master..branch-name | tail -n 1
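A sketch verifying the equivalence in a throwaway repository (branch and commit subject names are made up):

```shell
# Sketch: find the first commit unique to a branch via rev-list exclusion.
repo=$(mktemp -d)
g() { git -C "$repo" -c user.name=demo -c user.email=demo@example.com "$@"; }
c() { echo "$1" > "$repo/$1"; g add "$1"; g commit -qm "$1"; }
g init -q
c m1; c m2
main=$(g symbolic-ref --short HEAD)        # default branch name varies across git versions
g checkout -qb feature
c f1; c f2

first=$(g rev-list "^$main" feature | tail -n 1)
g log -1 --format=%s "$first"              # prints f1
```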

Clean repo

git clean --force -d --dry-run            # dry run clean files and directories
git clean -fdn                            # same
git clean --force -d -x                   # clean files and directories, including ignored files (gitignore)

Ignore changes in a tracked file

Suppose some tracked file needs to be modified locally with changes that are not to be committed; to ignore such files :

git update-index --assume-unchanged path/to/ignored/file

To list those ignored files

git ls-files -v | grep '^[a-z]'

To unignore such files :

git update-index --no-assume-unchanged path/to/ignored/file
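A sketch of the full round trip in a throwaway repository (the file name is made up):

```shell
# Sketch: local edits to an assume-unchanged file don't show up in status.
repo=$(mktemp -d)
g() { git -C "$repo" -c user.name=demo -c user.email=demo@example.com "$@"; }
g init -q
echo prod > "$repo/config.ini"; g add config.ini; g commit -qm "add config"

g update-index --assume-unchanged config.ini
echo local-tweak >> "$repo/config.ini"     # modify the tracked file
hidden=$(g status --porcelain)             # empty: the change is hidden
g ls-files -v config.ini                   # lowercase "h" marks the ignored file
g update-index --no-assume-unchanged config.ini   # status shows the change again
```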

Split/extract sub directory as submodule

# Go into the project root
cd ~/my-project

# Create a branch which only contains commits for the children of 'foo'
git subtree split --prefix=foo --branch=foo-only

# Remove 'foo' from the project
git rm -rf ./foo

# Create a git repo for 'foo' (assuming we already created it on github)
mkdir foo
pushd foo
git init
git remote add origin git@github.com:my-user/new-project.git
git pull ../ foo-only
git push origin -u master
popd

# Add 'foo' as a git submodule to `my-project`
git submodule add git@github.com:my-user/new-project.git foo

Also see http://lostechies.com/johnteague/2014/04/04/using-git-subtrees-to-split-a-repository/

Delete old local branches

Only works with a merge-commit strategy and with branches that track a remote branch.

git branch -vv | grep 'origin/.*: gone]' | awk '{print $1}' | xargs git branch -d

worktree

git worktree add -b bugfix-1234 ../bugfix origin/master

New worktree at ../bugfix, creating a new branch bugfix-1234 starting from origin/master.

git worktree prune

Tig

sample commands

tig pom.xml                               # history of a specific file
tig show                                  # last commit patch

View switching

  • m Show main view
  • d Show diff view
  • l Show log view
  • t Show tree view
  • B Show blame view
  • H Show branch view
  • h Show help view
  • c Show stage view
  • y Show stash view

Some commands

  • [ Decrease the diff context
  • ] Increase the diff context
  • , Move to parent. In the tree view, this means switch to the parent directory. In the blame view it will load blame for the parent commit. For merges the parent is queried.

GPG

If using GPG Suite :

Configure the following properties in .gnupg/gpg-agent.conf

pinentry-program /usr/local/MacGPG2/libexec/pinentry-mac.app/Contents/MacOS/pinentry-mac
enable-ssh-support

Then put in the .zshrc file (and in running ttys)

export "GPG_TTY=$(tty)"
export "SSH_AUTH_SOCK=${HOME}/.gnupg/S.gpg-agent.ssh"

[WARN] Note that it will replace the apple ssh integration

> echo $SSH_AUTH_SOCK
/private/tmp/com.apple.launchd.5vr2u6RFvj/Listeners

Restart the agents

gpg-connect-agent killagent /bye
gpg-connect-agent /bye

List the keys to find the signing key ID:

> gpg --list-keys --keyid-format LONG
/Users/b.dutheil/.gnupg/pubring.kbx
-----------------------------------
pub   dsa2048/76D78F0500D026C4 2010-08-19 [SC] [expires: 2020-06-15]
      85E38F69046B44C1EC9FB07B76D78F0500D026C4
uid                 [ unknown] GPGTools Team <team@gpgtools.org>
sub   elg2048/07EAE49ADBCBE671 2010-08-19 [E] [expires: 2020-06-15]
sub   rsa4096/E8A664480D9E43F5 2014-04-08 [S] [expires: 2024-01-02]

pub   rsa4096/D05FC1099F415579 2018-11-20 [SC] [expires: 2020-11-19]
      865C745CC9BAA23618209DD5D05FC1099F415579
uid                 [ultimate] Brice Dutheil (hello) <brice.dutheil@blablacar.com>
sub   rsa4096/6E78D09702B4A61D 2018-11-20 [E] [expires: 2020-11-19]

Then configure git to sign commits with that key, in the .gitconfig :
[user]
    name = b.dutheil
    email = brice.dutheil@blablacar.com
    signingkey = D05FC1099F415579

[commit]
    gpgsign = true

[gpg]
    program = gpg

Gradle Tips

Gradle user interactions

  • assemble and check are lifecycle tasks; they are independent

  • ./gradlew executes the help task by default, which can be overridden with defaultTasks assemble, check

  • Some properties

    • org.gradle.daemon can disable the daemon
    • org.gradle.daemon.idletimeout sets the daemon idle timeout
    • org.gradle.parallel toggles parallel execution of tasks
    • org.gradle.workers.max defaults to the number of CPUs
  • ./gradlew --dry-run clean build prints the task graph

  • ./gradlew --console=verbose build

  • If a clean task is needed, the build has an issue

  • ./gradlew test --rerun-tasks

  • Do the minimum in the configuration phase; it slows down the build

  • ./gradlew init; different project types exist, like pom to help move from Maven to Gradle

  • a new Gradle version is released every 6 weeks

The build script

Typed tasks

task compileMe(type: JavaCompile) {
    destinationDir = file("$buildDir/dir")

    options.fork = true
}

This kind of task definition should be avoided in a build script, because we need to be declarative, not imperative.

Importance of input / output

  • SourceSet concept
  • Configuration = dependencies container and produced artifacts
configurations {
    implementation
}

dependencies {
    implementation "..."
    implementation project(":common")
}

To build the classpath :

  • gradle needs to resolve the configuration = resolve the dependency graph + retrieve or build the dependencies

Maven doesn't really understand local projects?

Display build script dependencies
./gradlew buildEnvironment

Common script

    buildscript {
        dependencies {
            classpath 'com.blablacar.service.common:common-gradle:2.+'
        }
    }
    task unpackCommonGradle {
        doLast {
            delete "build/gradle"
            ant.unjar src: buildscript.configurations.classpath.files.find { it.name.matches '.*common-gradle.*' }, dest: 'build/gradle'
        }
    }

Display test info in the standard output

tasks.withType(Test) {
    testLogging {
        events("passed"
               ,"skipped"
               ,"failed"
//               ,"standardOut"
//               ,"standardError"
        )
        exceptionFormat "full"
        showCauses true
        showExceptions true
        showStackTraces true
    }
}

Plugins

How the Java plugins work.

base

base plugin adds tasks, conventions, some build structure, and the archives configuration

  • archiveBaseName defaults to project.name
  • destinationDir defaults depending on the type of archive
  • version defaults to project.version
  • ...
java-base
  • Apply base
  • Defines additional tasks type, like compile
  • Model input source and resource files, output class files, compilation and runtime classpath
java
  • Apply java-base
  • Defines default values for existing tasks
  • Defines conventions src/main/java style sourceset
  • Adds configurations

Source set conventions: sources in src/<name>/<lang>, compilation task compile<Name><Lang>, configuration <name>Implementation

java-library

Adds extra configurations to express the differences in your dependencies

  • API
  • The required ones at runtime in the implementation
                 api <------------\
              ^                    \
             /                      \
            /                        \
apiElements    compileOnly (C, R)    implementation      runtimeOnly
                  ^              ^         ^         ^  ^
                  |             /          |        /    \
                compileClassPath    runtimeElements (C)   runtimeClassPath (R)

Does it natively support integration tests? => nope

Custom source set and configuration for integration test

plugins {
    id 'java'
}

repositories {
    jcenter()
}

sourceSets {
    slowTest {
        // necessary, otherwise gradle doesn't know about the _main_ output
        compileClasspath += main.output
        runtimeClasspath += main.output
    }
}

configurations {
    // necessary to acquire the same dependencies as main
    slowTestImplementation.extendsFrom implementation
    slowTestRuntime.extendsFrom runtime
}

dependencies {
    implementation ""
    runtime ""

    slowTestImplementation "junit:junit:4.12"
}

task slowTest(type: Test) {
    classpath = sourceSets.slowTest.runtimeClasspath
    testClassesDirs = sourceSets.slowTest.output.classesDirs
}

==> src/slowTest/java

Add but do not apply plugin

plugins {
    // Add Asciidoctor plugin, but do not apply it.
    id 'org.asciidoctor.convert' version '1.5.3' apply false
}
 
configurations {
    convert
}
 
repositories {
    jcenter()
}
 
dependencies {
    convert 'org.asciidoctor:asciidoctorj:1.5.4'
}
 
// Use of Asciidoctor task from the Asciidoctor plugin.
task convert(type: org.asciidoctor.gradle.AsciidoctorTask) {
    classpath = configurations.convert
}

Or

subprojects {
    if (name.endsWith('-doc')) {
        apply plugin: 'org.asciidoctor.convert'
    }
}

Custom plugin repository

In the settings.gradle :

// First statement of the settings.gradle file
pluginRepositories {
    maven { url 'http://intranet/artifactory/libs-release/' }
    gradlePluginPortal() // Include public Gradle plugin portal
}

Incremental

Incremental tasks

tasks not executed b/c

  • inputs not changed
  • outputs not changed
  • ... ?

How :

  • hash input output
  • hash content of in/out folder
  • serialize input properties

Annotate task implementation with Input Output annotation.

Incremental task input
  • e.g. check which file changed

Extensibility

properties

gradle.properties

  • org.gradle.parallel=true
  • deployUrl = ...

can be overridden on the command line: ./gradlew -Dorg.gradle.parallel=false -PdeployUrl= ; -D sets a (Gradle) system property, -P sets a project property

the ext container

def getRevision() {

}

ext.revision = getRevision()

extensions

class MyExtension {

}

extensions.create("myDsl", MyExtension)

myDsl

configurations

configurations.all {
    println "..."
}
tasks.withType(Jar) {
    doLast { /* */ }
}

plugins.withType(IdeaPlugin) {

}

See GitHub gradle-devoxx-deep-dive

add source set

project.java.sourceSets["main"].java.srcDir(generateCode)

This is the best approach because it allows every interested consumer of the source set to be aware of the generated code (compared to adding the sources in the compile task).


Dependency

  • the .m2 repository does not say where a dependency came from; the gradle cache works per project
  • use because in a dependency declaration to explain the reason
  • a fixed version like 1.0 is cached forever; ranges/dynamic versions are cached for 24h, and so is 1.0-SNAPSHOT

Resolution strategy

configurations.all {
    resolutionStrategy {
        failOnVersionConflict()
        cacheDynamicVersionsFor(5, 'hours')
    }
}

Build Cache

Allows sharing task output artifacts across builds / branches / projects

--build-cache on the command line, or org.gradle.caching=true in gradle.properties

buildCache {
    local(...) {
        ...
    }
}

Remote caching (built by the CI)

buildCache {
    remote(Http...) {
        url

    }
}

Included in the open source version

There's a docker image.

=> Look at build scan docker, need the enterprise version ?

Configuring IntelliJ IDEA

gradle/gradle#1366 https://plugins.gradle.org/plugin/org.jetbrains.gradle.plugin.idea-ext https://github.com/jetbrains/gradle-idea-ext-plugin

Using Git commit hash in build script

buildscript {
 
    repositories {
        jcenter()
    }
 
    dependencies {
        classpath 'org.ajoberstar:grgit:1.7.2'
    }
 
}
 
apply plugin: 'java'
apply plugin: 'application'
 
ext {
    // Open the Git repository in the current directory.
    git = org.ajoberstar.grgit.Grgit.open(file('.'))
 
    // Get commit id of HEAD.
    revision = git.head().id
    headTag = git.tag.list().find { it.commit == git.head() }
}
 
// Use abbreviatedId commit id in the version.
version = "2.0.1.${git.head().abbreviatedId}"
 
// application plugin extension properties.
mainClassName = 'sample.Hello'
applicationName = 'sample'
 
// Customize applicationDistribution
// CopySpec from application plugin extension.
applicationDistribution.with {
    from('src/dist') {
        include 'VERSION'
        expand(
            buildDate: new Date(),
            // Use revision with Git commit id:
            revision : revision,
            version  : project.version,
            tag      : headTag ?: '',
            appName  : applicationName)
    }
}
 
// Contents for src/dist/VERSION:
/*
Version: ${version}
Revision: ${revision}
Build-date: ${buildDate.format('dd-MM-yyyy HH:mm:ss')}
Application-name: ${appName}
*/
 
assemble.dependsOn installDist

Init script

Make gradle go automatically offline

This avoids having to manually add --offline on the command line.

/** 
 * Makes gradle go automatically offline.
 *
 * Store in $HOME/.gradle/init.d/checknetwork.gradle
 */

apply plugin : CheckNetwork

class CheckNetwork implements Plugin<Gradle> {
  void apply(Gradle gradle) {
    def hostname = gradle.startParameter.getCurrentDir().toString().contains("internal") ?
                   "internal-hostname" :
                   "google.com"

    Process r = "ping -c 1 ${hostname}".execute()
    r.waitFor()
    
    boolean goOffline = r.exitValue()!=0
    gradle.startParameter.setOffline(goOffline)

    if(goOffline) {
      println "Going offline (--offline)"
    }
  }
}

Various tips

excluding transitive dependency

ext {
  withoutStuff = { 
        exclude group: 'com.android.support', module: 'support-v4' 
        exclude group: 'com.android.support', module: 'support-v13'
        exclude group: 'com.android.support', module: 'design-v13' 
    }
}

dependencies {
    compile deps.foobar, withoutStuff
}

Slow server? This is the flow chart you're looking for!

Source : http://blog.scoutapp.com/articles/2014/07/31/slow_server_flow_chart

Your high-powered server is suddenly running dog slow, and you need to remember the troubleshooting steps again. Bookmark this page for a ready reminder the next time you need to diagnose a slow server.

Get on top of it

Linux's top command provides a wealth of troubleshooting information, but you have to know what you're looking for. Reference this diagram as you go through the steps below:

top output

Step 1: Check I/O WAIT and CPU IDLE TIME

How: use top - look for wa (I/O wait) and id (CPU idletime)

Why: checking I/O wait is the best initial step to narrow down the root cause of server slowness. If I/O wait is low, you can rule out disk access in your diagnosis.

I/O Wait represents the amount of time the CPU spends waiting for disk or network I/O. Waiting is the key here - if your CPU is waiting, it's not doing useful work. It's like a chef who can't serve a meal until he gets a delivery of ingredients. Anything above 10% I/O wait should be considered high.

On the other hand, CPU idle time is a metric you want to be high -- the higher this is, the more bandwidth your server has to handle whatever else you throw at it. If your idle time is consistently above 25%, consider it high enough.

Step 2: IO WAIT is low and IDLE TIME is low: Check CPU USER TIME

How: use top again -- look for the %us column (first column), then look for a process or processes that is doing the damage.

Why: at this point you expect the user time percentage to be high -- there's most likely a program or service you've configured on your server that's hogging CPU. Checking the % user time just confirms this. When you see that the % user time is high, it's time to see what executable is monopolizing the CPU.

Once you've confirmed that the % user time is high, check the process list (also provided by top). By default, top sorts the process list by %CPU, so you can just look at the top process or processes.

If there's a single process hogging the CPU in a way that seems abnormal, it's an anomalous situation that a service restart can fix. If there are multiple processes taking up CPU resources, or if there's one process that takes lots of resources while otherwise functioning normally, then your setup may just be underpowered. You'll need to upgrade your server (add more cores), or split services out onto other boxes. In either case, you have a resolution:

  • if situation seems anomalous: kill the offending processes.
  • if situation seems typical given history: upgrade server or add more servers.

This is an area where historical context can be a huge help in understanding what's going on. If you're using Scout, check out the historical charts for these metrics. A flat line for % user time followed by a huge increase in the last 10 minutes tells a much different story than a smooth, steady increase over the last 6 months.

Step 3: IO WAIT is low and IDLE TIME is high

Your slowness isn't due to CPU or IO problems, so it's likely an application-specific issue. It's also possible that the slowness is being caused by another server in your cluster, or by an external service you rely on.

  • start by checking important applications for uncharacteristic slowness (the DB is a good place to start),
  • think through which parts of your infrastructure could be slowed down externally. For example, do you use an externally hosted email service that could slow down critical parts of your application?

If you suspect another server in your cluster, strace and lsof can provide information on what the process is doing or waiting on. Strace will show you which file descriptors are being read or written to (or being attempted to be read from) and lsof can give you a mapping of those file descriptors to network connections.

Step 4: IO WAIT is high: Check your SWAP usage

How: use top or free -m

Why: if your box is swapping out to disk a lot, the cache swaps will monopolize the disk and processes with legitimate IO needs will be starved for disk access. In other words, checking disk swap separates real IO wait problems from what are actually RAM problems that "look like" IO Wait problems.

An alternative to top is free -m -- this is useful if you find top's frequent updates frustrating to use, and you don't have any console log of changes.

Step 5: Swap usage is high

High swap usage means that you are actually out of RAM. See step 7 below.

Step 6: Swap usage is low

Low swap means you have a real IO wait problem. The next step is to see what's hogging your IO.

How: iotop

iotop is an awesome tool for identifying IO offenders. Two things to note:

  • unless you've already installed iotop, it's probably not already on your system. Recommendation: install it before you need it -- it's no fun trying to install a troubleshooting tool on an overloaded machine.
  • iotop requires a Linux kernel 2.6.20 or above

Step 7: Check memory usage

How: use top. Once top is running, press the M key - this will sort applications by the memory used.

Important: don't look at the free memory -- it's misleading. To get the actual memory available, subtract the cached memory from the used memory. This is because Linux caches things liberally, and often the memory can be freed up when it's needed. Read here for more info.
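The arithmetic above can be sketched on a sample free -m row. The values and the column order (total/used/free/shared/buffers/cached, the pre-3.3 procps layout) are hard-coded assumptions for illustration:

```shell
# Assumed sample row from `free -m` (pre-3.3 procps layout):
#        total  used  free  shared  buffers  cached
free_line="Mem: 7983 7634 349 0 215 3564"
echo "$free_line" | awk '{
  print "really used (MB): "      ($3 - $6 - $7)   # used minus buffers/cache
  print "really available (MB): " ($4 + $6 + $7)   # free plus buffers/cache
}'
```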

Once you've identified the offenders, the resolution will again depend on whether their memory usage seems business-as-usual or not. For example, a memory leak can be satisfactorily addressed by a one-time or periodic restart of the process.

  • if memory usage seems anomalous: kill the offending processes.
  • if memory usage seems business-as-usual: add RAM to the server, or split high-memory using services to other servers.

A HANDY FLOW CHART TO TIE IT ALL TOGETHER

troubleshooting flowchart

Additional tips

  • vmstat is also a very handy tool, because it shows past values instead of an in-place update like top. Running vmstat 1 shows concise metrics on memory, swap, io, and CPU every second.
  • Track your disk IO latency and compare to IOPS (I/O operations per second). Sometimes it's not activity in your own server causing the disk IO to be slow in a cloud/virtual environment. Proving this is hard, and you really want to have graphs of historical performance to show your provider!
  • Increasing IO latency can mean a failing disk or bad sectors. Keep an eye on this before it escalates to data corruption or complete failure of the disk.

Wrapping it up

Having concrete steps at your fingertips makes slow server troubleshooting a little easier. top is a powerful tool that provides a wealth of metrics to help you narrow down the cause of server slowness. The metrics you'll be looking at are io wait, cpu idle %, user %, memory free (taking into account the file cache), and swap usage. Depending on whether conditions are a one-off or the result of growing demands on your infrastructure, you may be able to solve the slowdown by restarting services, or you may need to upgrade your servers. Historical context can be very useful in establishing what's normal for your machines.

Java ecosystem

Flat list of dependencies with maven

mvn dependency:list \
  | sed -ne s/..........// -e /patterntoexclude/d -e s/:compile//p -e s/:runtime//p \
  | sort | uniq
  • -n don't print non-matching lines
  • -e s/..........// strips the first 10 characters
  • -e /patterntoexclude/d you can exclude some unwanted patterns from the list using the d command of sed
  • -e s/:compile//p -e s/:runtime//p removes :compile and :runtime and prints the matching lines
  • | sort | uniq for duplicate entries in multi-module
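To see the pipeline in action without running Maven, here is the same sed program fed with hand-written sample lines (the dependency coordinates are invented; the '[INFO]    ' prefix is exactly the 10 characters the first expression strips):

```shell
# Sample `mvn dependency:list` lines; :test entries are dropped because
# only the :compile and :runtime substitutions print.
printf '%s\n' \
  '[INFO]    com.google.guava:guava:jar:19.0:compile' \
  '[INFO]    org.slf4j:slf4j-api:jar:1.7.25:runtime' \
  '[INFO]    junit:junit:jar:4.12:test' \
  | sed -ne 's/..........//' -e 's/:compile//p' -e 's/:runtime//p' \
  | sort | uniq
```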

VM properties

java -XshowSettings:properties -version
java -XshowSettings:system -version # On Linux only
java -XshowSettings:vm -version

Final flags

java -XX:+PrintFlagsFinal -version

Running commands

jcmd <pid> help
jcmd <pid> VM.flags -all # equivalent to -XX:+PrintFlagsFinal on the jvm with the given PID
jcmd <pid> VM.info
ps -o rss,vsz,sz <pid> # Resident Set Size of the PID

Heap dumps

Make a heap dump of live objects in the hprof format, which can be opened in JVisualVM via File > Load > Heap Dumps

jmap -F -dump:live,format=b,file=the-pid-heapdump.hprof $pid

Use following options for auto heap dump on OOME

-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/path/to/heapdump.hprof

To get a histogram of the live objects in the shell.

jmap -histo:live $pid > heapdump.txt

Display information on the memory usage of the heap

jmap -heap $pid

Display GC information, see jstat documentation

jstat -gc $pid <interval>
jstat -gcutil $pid 250 7                     # Takes 7 samples every 250 ms

Using JVisualVM over an SSH connection

On the remote machine, start jstatd with a port if wanted (default is 1099)

jstatd -J-Djava.security.policy=permissions.policy [-p port]

The permissions.policy file will contain the following :

grant {
  permission java.security.AllPermission;
};

Then create a dynamic socket tunnel

ssh -D 9696 me@remote

Then on the local machine start jVisualVM with the following proxy parameters

visualvm -J-Dnetbeans.system_socks_proxy=localhost:9696 -J-Djava.net.useSystemProxies=true

It may be relevant to setup manual proxy settings in the Preferences / Options dialog, socks proxy using localhost and port 9696.

And in visual vm: add new remote connection, specify remote as host and the port for jstatd (1099 for default, or what you specified with -p when running jstatd)

alternatives on linux systems

#!/bin/bash
LINKDIR=/usr/bin
JHOME=/usr/java/latest
JREDIR=$JHOME/jre/bin
JDKDIR=$JHOME/bin


# taken from : http://bad-concurrency.blogspot.fr/2014/01/linux-alternatives-and-oracle-java.html
# Gist : https://gist.github.com/mikeb01/8694620#file-java-alternatives-install-sh

sudo alternatives --install $LINKDIR/java java $JREDIR/java 20000  \
  --slave $LINKDIR/keytool     keytool     $JREDIR/keytool         \
  --slave $LINKDIR/orbd        orbd        $JREDIR/orbd            \
  --slave $LINKDIR/pack200     pack200     $JREDIR/pack200         \
  --slave $LINKDIR/rmid        rmid        $JREDIR/rmid            \
  --slave $LINKDIR/rmiregistry rmiregistry $JREDIR/rmiregistry     \
  --slave $LINKDIR/servertool  servertool  $JREDIR/servertool      \
  --slave $LINKDIR/tnameserv   tnameserv   $JREDIR/tnameserv       \
  --slave $LINKDIR/unpack200   unpack200   $JREDIR/unpack200       \
  --slave $LINKDIR/jcontrol    jcontrol    $JREDIR/jcontrol        \
  --slave $LINKDIR/javaws      javaws      $JREDIR/javaws

sudo alternatives --install $LINKDIR/javac javac $JDKDIR/javac 20000  \
  --slave $LINKDIR/appletviewer appletviewer $JDKDIR/appletviewer     \
  --slave $LINKDIR/apt          apt          $JDKDIR/apt              \
  --slave $LINKDIR/extcheck     extcheck     $JDKDIR/extcheck         \
  --slave $LINKDIR/idlj         idlj         $JDKDIR/idlj             \
  --slave $LINKDIR/jar          jar          $JDKDIR/jar              \
  --slave $LINKDIR/jarsigner    jarsigner    $JDKDIR/jarsigner        \
  --slave $LINKDIR/javadoc      javadoc      $JDKDIR/javadoc          \
  --slave $LINKDIR/javah        javah        $JDKDIR/javah            \
  --slave $LINKDIR/javap        javap        $JDKDIR/javap            \
  --slave $LINKDIR/jcmd         jcmd         $JDKDIR/jcmd             \
  --slave $LINKDIR/jconsole     jconsole     $JDKDIR/jconsole         \
  --slave $LINKDIR/jdb          jdb          $JDKDIR/jdb              \
  --slave $LINKDIR/jhat         jhat         $JDKDIR/jhat             \
  --slave $LINKDIR/jinfo        jinfo        $JDKDIR/jinfo            \
  --slave $LINKDIR/jmap         jmap         $JDKDIR/jmap             \
  --slave $LINKDIR/jps          jps          $JDKDIR/jps              \
  --slave $LINKDIR/jrunscript   jrunscript   $JDKDIR/jrunscript       \
  --slave $LINKDIR/jsadebugd    jsadebugd    $JDKDIR/jsadebugd        \
  --slave $LINKDIR/jstack       jstack       $JDKDIR/jstack           \
  --slave $LINKDIR/jstat        jstat        $JDKDIR/jstat            \
  --slave $LINKDIR/jstatd       jstatd       $JDKDIR/jstatd           \
  --slave $LINKDIR/native2ascii native2ascii $JDKDIR/native2ascii     \
  --slave $LINKDIR/policytool   policytool   $JDKDIR/policytool       \
  --slave $LINKDIR/rmic         rmic         $JDKDIR/rmic             \
  --slave $LINKDIR/schemagen    schemagen    $JDKDIR/schemagen        \
  --slave $LINKDIR/serialver    serialver    $JDKDIR/serialver        \
  --slave $LINKDIR/wsgen        wsgen        $JDKDIR/wsgen            \
  --slave $LINKDIR/wsimport     wsimport     $JDKDIR/wsimport         \
  --slave $LINKDIR/xjc          xjc          $JDKDIR/xjc

sudo alternatives --install /usr/lib64/mozilla/plugins/libjavaplugin.so libjavaplugin.so.x86_64 $JHOME/jre/lib/amd64/libnpjp2.so 20000

JS

Loading jQuery on Google Chrome inspector

(function() {
  if (! window.jQuery ) {
    var s = document.createElement('script');
    s.type = 'text/javascript';
    s.async = true;
    s.src = '//ajax.googleapis.com/ajax/libs/jquery/2.0.0/jquery.min.js'; // you can change this url by latest jQuery version
    (document.getElementsByTagName('head')[0] ||
      document.getElementsByTagName('body')[0]).appendChild(s);
  }
}());

kafka

  • Data warehouse => batch only, difficult modeling, limited throughput
  • Data lake (hadoop) => no more meeting to make sense of the data :) (schema-on-read)
  • Data stream

Few patterns

Stream of events

pain points

  • different systems
  • logs, databases,
  • different data

all your data is event stream

b/c event there is common things like

  • integration (collection, processing,)

PageViewEvent -> you can deduce a lot of things

At LinkedIn, the cost of having too many events to integrate is called the data integration tax; that's what Kafka was made to solve.

Gwen said 10k consumers is not even close to an exaggeration

Keep events compatible

Event schemas are the API (interface definition language)

Hipster Stream Processing (you're in hell if you want join, merge, etc.)

Streaming Data Enrichment

CDC connectors turns state (in db) to event stream.

KTable table = builder.table("cdc-xxx");
table.leftJoin()

Ridiculously Parallel Transaction

Linux

With x.org

xclip -out > file              # (Paste) pipe clipboard content
xclip -in -selection c < file      # (Copy) pipe file content into clipboard

Maven

Tweaking the build workflow

with --projects/-pl, --also-make/-am, --also-make-dependents/-amd

mvn clean install -pl module-C            # builds only the module-C
mvn clean install -pl module-C,module-D   # builds the module-C & module-D
mvn clean install -pl module-C -am        # builds module-A then module-C
mvn clean install -pl module-D -am        # builds module-A, module-C then module-D
mvn clean install -pl module-A -am        # builds only module-A
mvn clean install -pl module-C -amd       # builds the module-C then module-D
mvn clean install -pl module-A -amd       # builds all modules

with --resume-from/-rf

mvn clean install -rf module-B            # builds module-B, module-C and module-D

Install module from existing jar

mvn install:install-file
    -Dfile=my_module.jar
    -DgroupId=my.private.project
    -DartifactId=my_module
    -Dversion=1.0.0
    -Dpackaging=jar
mvn deploy:deploy-file                                            \
    -DgroupId=<group-id>                                          \
    -DartifactId=<artifact-id>                                    \
    -Dversion=<version>                                           \
    -Dpackaging=<type-of-packaging>                               \
    -Dfile=<path-to-file>                                         \
    -DrepositoryId=<id-to-map-on-server-section-of-settings.xml>  \
    -Durl=<url-of-the-repository-to-deploy>

Physically delete an artifact or folder (coordinates) on Nexus

curl -v -u user:password -X DELETE https://my.repo.com/content/repositories/repo-id/coordinates/

Clean old dependencies

find ~/.m2/repository -atime +30 -iname '*.pom' -print0 | while read -d '' -r pom; do echo rm -rf "$(dirname $pom)"; done
  • -atime +n accessed n days ago
  • -amin +n accessed n minutes ago
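A minimal sketch of the -print0 / read -d '' pairing (bash-specific read -d) on a throwaway directory; the artifact path is invented to show that names with spaces survive intact:

```shell
# Safe demo: operate on a temp dir instead of ~/.m2.
tmp=$(mktemp -d)
mkdir -p "$tmp/my artifact/1.0"
touch "$tmp/my artifact/1.0/my-artifact-1.0.pom"
# -print0 emits NUL-delimited names; read -d '' consumes them,
# so "my artifact" is handled as one name despite the space.
find "$tmp" -iname '*.pom' -print0 | while read -d '' -r pom; do
  echo "would remove: $(dirname "$pom")"
done
rm -rf "$tmp"
```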

Verify artefact integrity

Maven instructs developers who deploy artifacts to sign them and to publish a public key on a public key server, usually pgp.mit.edu. With gpg, if the key is not present it must be imported; then one can verify the artifact. With the following command line, gpg is instructed to automatically import the author's public key if found (from pgp.mit.edu, a single point of failure). Note that the protocol's default port is 11371, which could be a problem behind most firewalls; however it is possible to use port 80.

gpg --auto-key-locate keyserver
    --keyserver hkp://pgp.mit.edu:80
    --keyserver-options auto-key-retrieve
    --verify ${KAFKA_RELEASE_ARCHIVE}.asc ${KAFKA_RELEASE_ARCHIVE}

Sources:


Converting documents

Converting ebooks with Calibre

ebook-convert book.epub book.mobi

Scan your network for hosts listening on TCP port 22 (SSH protocol).

nmap --open -p T:22 192.168.1.0/24

Reduce file size of PDF with ImageMagick and Ghostscript

convert -density 600x600 -quality 60 -compress JPEG big.pdf smaller.pdf

Create a composite image from video frames

convert -evaluate-sequence Min $( for i in $( seq 34 7 99 ) ; do echo $i.png ; done ) every7th-frame.jpg 

Using frames 34 through 99 from a video, take every 7th frame and overlay it into a composite image. The convert command is part of the ImageMagick package.

System / Misc

Simple Robust OS Detection in Bash Using $OSTYPE

case "$OSTYPE" in
  solaris*) echo "SOLARIS" ;;
  darwin*)  echo "OSX" ;;
  linux*)   echo "LINUX" ;;
  bsd*)     echo "BSD" ;;
  *)        echo "unknown: $OSTYPE" ;;
esac

Replaces tabs with spaces

find . -name '*.scala' ! -type d -exec bash -c 'expand -t 2 "$0" > /tmp/e && mv /tmp/e "$0"' {} \;

Redirect output from two commands in a single file

{ 
    LC_ALL=C cat /tmp/output-12M-first-ids.log | tee \
        >(grep -o -P '(?<=User )[0-9]+(?= is broken)') \
        >(grep -o -P '(?<=\[Reconciliation\] User )[0-9]+') \
        > /dev/null; 
    LC_ALL=C cat /tmp/warroom-10-01-18.log | tee \
        >(grep -o -P '(?<=User )[0-9]+(?= is broken)') \
        >(grep -o -P '(?<=\[Reconciliation\] User )[0-9]+') \
        > /dev/null; 
} | uniq > file.txt
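The outer shape of the command above is a brace group: both commands write to a single stdout that one pipeline post-processes. Stripped to its essence:

```shell
# A { ...; } group shares one stdout, so a single pipeline
# can post-process the output of both commands together.
{
    echo banana
    echo apple
    echo apple
} | sort | uniq
```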

cat Here document and chmod them in the same command

cat << \EOF > ${ROOTFS}/heap-dump.sh && chmod +x ${ROOTFS}/heap-dump.sh
#!/bin/bash

jmap -dump:live,format=b,file=/heapDump-`hostname`-`date +%FT%H-%M-%S`.hprof `jps -q -J-XX:+PerfDisableSharedMem`
EOF

cat << \EOF > ${ROOTFS}/vm-uptime.sh && chmod +x ${ROOTFS}/vm-uptime.sh
#!/bin/bash

jcmd `jps -q -J-XX:+PerfDisableSharedMem` VM.uptime
EOF

French dictionary for IntelliJ

aspell --lang fr dump master | aspell --lang fr expand | tr ' ' '\n' > ../french.dic

Convert PEM file to PKCS12

openssl pkcs12
     -passin pass:${THE_PEM_PASSPHRASE}
     -passout pass:${THE_PKCS12_PASSPHRASE}
     -export
     -in the.pem
     -out the.pkcs12
     -name "Name of certificate"

Hash a string with SHA-1 / HMAC SHA-1 / HMAC SHA-256

echo -n "text to hash" | openssl sha1
echo -n "text to hash" | openssl sha1 -hmac "secret"
echo -n "text to hash" | openssl dgst -sha256 -hmac "secret"
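A quick sanity check using the well-known FIPS 180 test vector for "abc" (printf is used instead of echo -n for portability; the output prefix varies between OpenSSL versions, but the digest is fixed):

```shell
# SHA1("abc") must be a9993e364706816aba3e25717850c26c9cd0d89d.
printf 'abc' | openssl sha1
```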

Generates random data

openssl rand -base64 128                  # write 128 random bytes of base64-encoded data to stdout
openssl rand -out random-data.bin 1024    # write 1024 bytes of binary random data to a file

# seed openssl with semi-random bytes from browser cache
cd $(find ~/.mozilla/firefox -type d -name Cache)
openssl rand -rand $(find . -type f -printf '%f:') -base64 1024

Generate random passwords

openssl passwd MySecret                   # generate a random crypted password
openssl passwd -salt 8E MySecret          # generate the crypted password for the given secret and salt

Newer Unix systems use a more secure MD5-based hashing mechanism that uses an eight-character salt (as compared to the two-character salt in traditional crypt()-style hashes). Generating them is still straightforward using the -1 option.

openssl passwd -1 MySecret                # generate random shadow password
openssl passwd -1 -salt sXiKzkus MySecret # generate the shadow password for the given secret and salt
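Because the salt pins the output, hashing is deterministic, which is how verification works: recompute with the salt stored in the hash and compare. A small sketch:

```shell
# Same secret + same salt => same hash, so a stored hash can be
# verified by re-hashing the candidate secret with the same salt.
stored=$(openssl passwd -1 -salt sXiKzkus MySecret)
candidate=$(openssl passwd -1 -salt sXiKzkus MySecret)
[ "$stored" = "$candidate" ] && echo "password matches"
```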

Benchmark system performance for different algorithms

openssl speed

Key length of RSA

openssl rsa -in private.key -text -noout

For a certificate :

openssl x509 -in public.pem -text -noout | grep "RSA Public Key"

Perform a POST request in multipart/form-data

curl -v -k -f
     --header "Authorization: Bearer e2515be8-9b79-4521-9a13-0243dbc74745"
     --header "Content-Type: multipart/form-data"
     --form "archive=@/Users/brice/Documents/hazelcast-documentation.pdf;type=application/pdf"
     -X POST https://api.dev.u-post.fr/api/v1.0/document

Move current year pics to 2013 directory.

find . -maxdepth 1 -daystart -type f -name '*.jpg' -mtime -$( date +%j ) -exec mv -v {} 2013/ \;
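The trick here is date +%j, which prints the zero-padded day of the year, so -mtime -$(date +%j) means "modified since January 1st":

```shell
# %j is the day of the year (001..366); files modified fewer than
# that many days ago were necessarily modified this year.
doy=$(date +%j)
echo "day of year: $doy"
```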

Listing

Listing directory in chronological order, most recent files at end of list
ls -qaltr
Listing directories only in reverse order
ls -ltrhd */

Reformat xml

xmllint --format file.xml > file-formatted.xml

Validate XML against a schema

xmllint --schema the_schema.xsd file.xml

List processes using the 8080 port

lsof -i -P | grep 8080
lsof -i :22
sudo lsof -i -P | grep 8080 # for root processes
lsof -i@192.168.1.5
lsof -i | grep LISTEN

List established TCP connections

lsof -i -n | grep ESTABLISHED

Which app has an active connection

lsof -i -P +c 0 +M | grep -i "$1"

Listing TCP connections

sudo lsof -iTCP -sTCP:LISTEN
sudo lsof -iTCP -sTCP:LISTEN -P
sudo lsof -iTCP -sTCP:LISTEN -P -n
sudo lsof -iTCP -sTCP:LISTEN -n

-P -n prevents lsof from doing name resolution, and it doesn't block. Missing either one of these options can make lsof very slow. For UDP: sudo lsof -iUDP -P -n | egrep -v '(127|::1)'. Again, without -n and -P it takes a long time. Reminder: this does not include firewall settings.

Listen to TCP traffic on en3 device

sudo ngrep -d 'en3' -W byline -t -q '^(GET|POST|PUT|HEAD|CONNECT) '

Some query examples:

ngrep -q 'HTTP' 'tcp'
ngrep -q 'HTTP' 'udp'
ngrep -q 'HTTP' 'icmp'
ngrep -q 'HTTP' 'host 192.168'     # matches all headers containing the string 'HTTP' sent to or from the ip address starting with 192.168
ngrep -q 'HTTP' 'dst host 192.168' # match a destination host
ngrep -q 'HTTP' 'src host 192.168' # match a source host
ngrep -q 'HTTP' 'port 80'          # match a port

Make any command stay active when terminal is closed

This is done by ignoring the SIGHUP signal using nohup, this signal is sent to a process when the controlling terminal is closed. For example, also note the & to display the process id :

nohup tail /var/log/murmur.log | perl -ne '/^<.>[0-9:. -]{24}(\d+ => )?(.*)/; $pid=`pgrep -u murmur murmurd | head`; chomp $pid; `logger -p info -t "murmurd[$pid]" \\"$2\\"`;' &

Colorized tail

tail -f "foo.log" | egrep --line-buffered --color=auto 'ERROR|WARN|$'
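The trailing |$ alternative matches the empty string at the end of every line, so no line is filtered out; only the ERROR/WARN matches get colorized. A demonstration with --color=never so the pass-through is visible:

```shell
# All three lines come through, because '$' matches every line.
printf 'ok line\nERROR boom\nanother ok\n' \
  | egrep --line-buffered --color=never 'ERROR|WARN|$'
```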

Remove lines in a file defined in another

grep -v -x -f f2 f1
  • -v to select non-matching lines
  • -x to match whole lines only
  • -f removing_patterns to get patterns from removing_patterns, for fixed string use -F <file>
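A self-contained sketch using temporary files, with -F added per the last bullet so f2's lines are treated as fixed strings rather than regexes:

```shell
# Keep only the lines of f1 that do not appear verbatim in f2.
f1=$(mktemp); f2=$(mktemp)
printf 'keep me\ndrop me\nalso keep\n' > "$f1"
printf 'drop me\n' > "$f2"
grep -v -x -F -f "$f2" "$f1"
rm -f "$f1" "$f2"
```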

Get a specific line of a file

sed -n '2p' < file.txt                    # print 2nd line
sed -n '2011p' < file.txt                 # print 2011th line
sed -n '10,33p' file.txt                  # print line 10 up to line 33
sed -n '1p;3p' file.txt                   # print 1st and 3th line

Using pipes cat some.xml | sed -n '256p' | xml format | mate
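The same line selections can be written with awk, which some find easier to remember (NR is the current line number):

```shell
printf 'one\ntwo\nthree\nfour\n' | awk 'NR==2'            # print 2nd line: two
printf 'one\ntwo\nthree\nfour\n' | awk 'NR>=2 && NR<=3'   # print lines 2 up to 3
# On big files, exit as soon as the line is printed:
printf 'one\ntwo\nthree\nfour\n' | awk 'NR==2{print; exit}'
```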

Replace content

sed -i.bak s/STRING_TO_REPLACE/STRING_TO_REPLACE_IT/g index.html

Negative lookahead with sed

sed --in-place '/non-wanted/! s/\(before\)\(after\)/\1to be inserted\2/' files*
  • --in-place changes the file in place
  • /non-wanted/! only perform next action on lines not ! matching non-wanted
  • s/\(before\)\(after\)/\1to be inserted\2/ the usual sed action
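A worked example of the pattern above, on throwaway input (here the inserted text is just -inserted-):

```shell
printf 'beforeafter\nnon-wanted beforeafter\n' |
  sed '/non-wanted/! s/\(before\)\(after\)/\1-inserted-\2/'
# before-inserted-after
# non-wanted beforeafter        <- untouched, it matched /non-wanted/
```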

Fun with awk

# find in a csv every value matching a regex and print the column position
# column separator is '","|"' for CSV file in the following format "a","b","c"
awk -F '","|"' '{ for (i=1; i<=NF; i++) if ($i ~ "some regex") print i ":" $i }'
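For example, finding which column holds the literal value b in a "a","b","c"-style line (field 1 is the empty string before the leading quote, so the match lands on field 3):

```shell
printf '%s\n' '"a","b","c"' |
  awk -F '","|"' '{ for (i=1; i<=NF; i++) if ($i ~ /^b$/) print i ":" $i }'
# 3:b
```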

Using screen

screen -R session_name                    # Create a screen session
screen -x -R session_name                 # Attach to an existing screen session without detaching (multi-display)
screen -d -R session_name                 # Attach to a screen session, and detach the previous screen

ctrl+a ctrl+d To detach from an active session

Using XMLStarlet

Selecting nodes

File containing some garbage and one-line XMLs (identified by the standard XML declaration <?xml?>). First, a subshell ( commands ) selects only the XML lines, removes the XML declaration from each line (s/<\?xml[^>]+>//), and wraps everything in a dummy XML root foobar.

Then XMLStarlet is used to get the values of the nodes at the XPath /foobar/roster/contact; additional commands let us select the transformed lines that didn't have any data in the 2 last nodes, first-name and last-name.

( echo "<foobar>" ; grep "?xml" xml-rpc.2013-10-15.log | perl -pe 's/<\?xml[^>]+>//' ; echo "</foobar>" )
    | xml select --template --match '/foobar/roster/contact'
                 --value-of 'user-id' --output ':'
                 --value-of 'first-name' --output ':'
                 --value-of 'last-name' --output ':'
                 --nl
    | grep :::
    | sort
    | uniq

Other commands

xml format
xml validate
xml edit ...
...

Speed test

wget --output-document=/dev/null http://speedtest.wdc01.softlayer.com/downloads/test500.zip

Shell stuff

Print date from epoch
#BSD
date -r 1234567890
#GNU
date -d @1234567890
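Round-tripping works too; for instance epoch 1234567890 is 2009-02-13 23:31:30 UTC (GNU date flags shown; use -r on BSD/OSX):

```shell
# epoch -> human readable
date -u -d @1234567890 '+%Y-%m-%dT%H:%M:%SZ'   # 2009-02-13T23:31:30Z
# human readable -> epoch (-u interprets the input as UTC)
date -u -d '2009-02-13 23:31:30' +%s           # 1234567890
```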
Listing all processes in full format with associated user, command, environment, hierarchy
ps auxef
Listing process hierarchy
ps xf
Listing thread of a process
ps -L -p <the pid>                        # for Linux boxes
ps -Mp <the pid>                          # for OSX (no threads id)
Find java thread eating CPU
kill -3 <pid>
ps -eLo pid,lwp,nlwp,ruser,pcpu,stime,etime,args|grep <pid> > lwpthread.txt

the file lwpthread.txt will contain something similar to:

PID	LWP	NLWP	RUSER	%CPU	STIME	ELAPSED	COMMAND
8234	8234	1110	admin	0.3	08:11	30:15	/usr/java/jdk1.6.0_24/bin/java
8234	8245	1110 	admin	99.0	08:45	10:15	/usr/java/jdk1.6.0_24/bin/java

And find the thread native id by converting the decimal to hex :

echo "obase=16; <tid>" | bc

in thread dump :

"TP-Processor234786" daemon prio=10 tid=0x00002aaad8024800 nid=0x2035 runnable [0x00002aaadef29000]
java.lang.Thread.State: RUNNABLE
  at java.util.HashMap.get(HashMap.java:303)
  at ......
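Continuing the example: LWP 8245 in hex is 2035, matching nid=0x2035 in the dump above. printf can do the conversion directly, without bc:

```shell
printf '0x%x\n' 8245          # 0x2035
```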
Parameter (variable) expansion

For parameter expansion ${...} see ZSH and/or Bash documentation.

More to come...

History reuse tricks
  • !! repeat last command
  • !$ last argument of last command
  • !^ first argument of last command

Here's more

!-3                                       # Execute the command that was executed 3 commands ago
!!                                        # Execute the content of the previous command
cd !$                                     # Use the last arg of the previous command
vi !^                                     # Use the first arg of the previous
vi !-3^                                   # Use the first argument of the command executed 3 commands ago
!keyword                                  # Execute or copy the last command **beginning** with the keyword
!?keyword                                 # Execute or copy the last command with the keyword appearing **anywhere**

!tail:p                                   # Adding :p at the end will only display the command
  • !:0 is the previous command name
  • !^, !:2, !:3, ..., !$ are the arguments of the previous command
  • !* is all the arguments of the previous commands
  • !-2^, !-2:2, !-2$, !-2* are arguments of the command executed 2 commands ago
  • !# repeat everything written so far in this line
!find:5                                   # display the fifth word from the last command beginning with find
!find:$                                   # display the last word from the same command.
!find:*                                   # same as !find:1-$ - both result in the complete command except for the first word
ls path/to/project/pom.xml
ls !$:h                                   # Use the path part of the last argument => path/to/project
ls !$:t                                   # Use the file part of the last argument => pom.xml
ls !$:r                                   # Use the whole path + file without the file extension of the last argument => path/to/project/pom
ls !$:e                                   # Use only the file extension of the last argument => xml
ls !$:t:q                                 # 2 modifiers, :q will quote the modified string => 'pom.xml'

They are known in bash as event designators : http://www.gnu.org/software/bash/manual/bashref.html#Event-Designators

Don't add command in history

Prefix the command with a space; this requires HISTCONTROL=ignorespace (bash) or setopt HIST_IGNORE_SPACE (zsh).

<space>command
 ls

Decode base 64

Useful for the Authorization: Basic YWRtaW46cGFzc3dvcmQ= HTTP header.

echo YWRtaW46cGFzc3dvcmQ= | base64 -d       # Linux
echo YWRtaW46cGFzc3dvcmQ= | base64 --decode # BSD / OSX
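The reverse direction, useful for building such a header (assuming the credentials admin:password):

```shell
# Use printf (or echo -n) so that no trailing newline gets encoded
printf 'admin:password' | base64          # YWRtaW46cGFzc3dvcmQ=
```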

Dig

dig www.leftcolumn.net A                  # enquire for IPv4 entries
dig www.leftcolumn.net AAAA               # enquire for IPv6 entries
dig www.leftcolumn.net SOA +multiline     # enquire for Start of Authority entry
dig www.leftcolumn.net CNAME              # look for CNAME entry
dig www.leftcolumn.net MX                 # enquire for mail domains
dig -t ANY google.co.nz                   # retrieve all available stuff in the DNS Zone for a domain
dig mockito.org \
    +nostats    \                         # Don't show DNS server statistics
    +nocomments \                         # Don't show comments (section comments, ...)
    +nocmd                                # Don't show command

Clean output :

dig mockito.org \
    +noall      \                         # Disable output
    +answer                               # Output answer

Who is logged plus some other info

w

Copy with progress and rate

rsync --progress -ah <source> <destination>

SSH Vagrant Tunnel

vagrant ssh -- -f -L 7777:127.0.0.1:3306 -N
vagrant ssh-config

SSH Tunnel

  1. -L Local : forward a port on the local machine to a destination reachable from the remote host
  2. -R Remote : forward a port on the remote host to a destination reachable from the local machine
  3. -D Dynamic : dynamic port forwarding based on SOCKS; the SSH client acts as a SOCKS proxy for client/server applications through a firewall (for a concrete use, see the browser proxy configuration in the PuTTY documentation, or the proxy settings in GNOME)

See beautiful graphics here : http://unix.stackexchange.com/questions/115897/whats-ssh-port-forwarding-and-whats-the-difference-between-ssh-local-and-remot

Local forwarding

ssh -L local_port:remote_host:remote_port -N user@ssh_host
ssh -L [local_bind_address:]local_port:remote_host:remote_port -N user@ssh_host
  • remote_host can be localhost (the loopback interface of the remote host) or a machine only visible from the remote host.

  • local_bind_address defaults to the loopback interface; to bind to a different network interface, use that interface's IP. To bind to all interfaces :

    ssh -L \*:8080:localhost:80 -N root@website.com
    ssh -L 0.0.0.0:8080:localhost:80 -N root@website.com
    ssh -L [::]:8080:localhost:80 -N root@website.com

Example

ssh -f -N -L \*:7607:127.0.0.1:3306 user@ssh_host

Remote forwarding

ssh -R remote_port:visible_host:visible_host_port -N user@ssh_host
ssh -R [local_bind_address:]remote_port:visible_host:visible_host_port -N user@ssh_host
  • remote_port the remote port on ssh_host that will be forwarded

  • visible_host the host on the current machine, either localhost or a host accessible by current machine

  • visible_host_port the port on the visible host to forward to

  • local_bind_address is the network interface on which the forward should listen; it defaults to the loopback interface. To bind to a different network interface, use that interface's IP; to bind to all interfaces :

    ssh -R \*:8080:localhost:80 -N root@website.com
    ssh -R 0.0.0.0:8080:localhost:80 -N root@website.com
    ssh -R [::]:8080:localhost:80 -N root@website.com

    Note that the (OpenSSH) sshd server config must set the GatewayPorts option to either yes or clientspecified to allow binding to interfaces other than loopback.

Example

ssh -R 8080:localhost:80 -N root@website.com

Tips for tunnels

  • -N does not run a command on the remote host, it just opens the forwarded ports.
  • -f sends the ssh command to the background, don't forget to kill it after use !

SSH Escape sequences

Normal keys are forwarded over the ssh session, so ctrl+c, ctrl+d or ctrl+z won't work. Instead, use the SSH escape sequences. For example, to kill the current session press, in sequence, Enter ↵, ~, ..

More of these escape sequences can be listed with Enter ↵, ~, ?:

Supported escape sequences:

  • ~. : terminate session
  • ~B : send a BREAK to the remote system
  • ~R : Request rekey (SSH protocol 2 only)
  • ~# : list forwarded connections
  • ~? : this message
  • ~~ : send the escape character by typing it twice

(Note that escapes are only recognized immediately after newline.)

Check if certificate is signed by key

diff <(openssl rsa -in my.key -modulus | grep Modulus) <(openssl x509 -in my.crt -modulus | grep Modulus) # Check if cert was signed by key.
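A variant that is easier to eyeball is hashing both moduli (-noout suppresses the PEM dump; the throwaway key and self-signed cert below are generated just for illustration):

```shell
# Throwaway key + self-signed cert for demonstration
openssl genrsa -out /tmp/my.key 2048 2>/dev/null
openssl req -new -x509 -key /tmp/my.key -subj '/CN=demo' -days 1 -out /tmp/my.crt
# Identical digests => the certificate matches the private key
openssl rsa  -in /tmp/my.key -noout -modulus | openssl md5
openssl x509 -in /tmp/my.crt -noout -modulus | openssl md5
```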

Start a simple HTTP server from the current dir

python -m SimpleHTTPServer 8000           # Python 2
python3 -m http.server 8000               # Python 3

Shortcuts on the terminal

At the moment these don't work with oh-my-zsh.

  • ctrl-A : moves to the start of the line
  • ctrl-E : moves to the end of the line
  • ctrl-B : move back one character
  • ctrl-F : move forward one character
  • esc-B : move back one word
  • esc-F : move forward one word
  • ctrl-U : delete from the cursor to the beginning of the line
  • ctrl-K : delete from the cursor to the end of the line

OSX

setuid for htop on osx

You can either run the program via sudo or set the setuid bit:

sudo chown root:wheel /usr/local/opt/htop-osx/bin/htop
sudo chmod u+s /usr/local/opt/htop-osx/bin/htop

Encode in base64 on mac

openssl base64 -in <infile> -out <outfile>
openssl base64 < path/to/file.png | tr -d '\n' | pbcopy
cat path/to/file.png | openssl base64 | tr -d '\n' | pbcopy

find sed

find ./ -type f -exec sed -i '' -e "s/192.168.20.1/new.domain.com/" {} \;

On the OSX version of sed, the -i option expects a backup-extension argument, so without one the sed expression is actually parsed as the extension and the file path as the command code. Giving '' as the argument to -i and passing the search/replace command explicitly with -e solves the issue.
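A portable sketch that works with both GNU and BSD sed is to always pass a real backup suffix (throwaway file for illustration):

```shell
printf 'host=192.168.20.1\n' > /tmp/demo.conf
# -i.bak writes demo.conf.bak on both GNU and BSD sed
sed -i.bak -e 's/192\.168\.20\.1/new.domain.com/' /tmp/demo.conf
cat /tmp/demo.conf   # host=new.domain.com
```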

Preview from command line

qlmanage -p ~/Pictures/dilbert-leadership-process.jpg

Query or modify raster image files

Without an outfile / output directory, files WILL BE OVERWRITTEN!

sips -Z 100 image.jpg                                                               # resample so neither dimension exceeds 100 pixels (-Z takes a single max value)
sips -s format png sandwich.jpg --out my-converted-image.png
sips -s formatOptions low sandwich.jpg --out my-lower-quality-converted-image.jpg   # JPEG only
sips -s formatOptions 5% sandwich.jpg --out my-lower-quality-converted-image.jpg    # JPEG only
sips -r 90 sandwich-to-rotate.png
sips -r 73 --padColor FFFFFF sandwich-to-rotate.png
sips -f horizontal sandwich-to-be-flipped.png
sips -s format jpeg --resampleHeight 50 *.jpg --out /output-dir                     # BATCH resampling

Play file from terminal

afplay ~/path/to/file.mp3 &

Watches files being accessed on the system

opensnoop uses DTrace to show all the files being accessed on your system; it needs to be executed with superuser privileges.

sudo opensnoop
sudo opensnoop -p PID                     # watch a particular process
sudo opensnoop -f /etc/passwd             # watch who is accessing a particular file

Networking

networksetup -getairportnetwork en0       # ESSID on OSX
networksetup -listallnetworkservices      # Network services
networksetup -getdnsservers "Ethernet"    # name servers
networksetup -setairportpower "Wi-Fi" on  # switches the airport power on
networksetup -setairportpower "Wi-Fi" off # switches the airport power off

Computer name

sudo scutil --set ComputerName "newname"
sudo scutil --set LocalHostName "newname"
sudo scutil --set HostName "newname"

Flush the DNS cache

sudo killall -HUP mDNSResponder           # From 10.7+
sudo dscacheutil -flushcache              # Up to 10.6

OSX info

sw_vers [-productName|-productVersion|-buildVersion]

Power management settings

pmset -g
sudo pmset displaysleep 15                # Put display to sleep after 15 minutes of inactivity
sudo pmset sleep 30                       # Put computer to sleep after 30 minutes of inactivity
sudo pmset repeat wakeorpoweron MTWRF 7:00:00  # Wake up every morning at 7am

Suspend / Resume process in OSX

kill -STOP 12345
kill -CONT 12345
killall -STOP iTunes
killall -CONT iTunes
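These signals work on any POSIX system, not just OSX; a self-contained sketch with a throwaway sleep process:

```shell
sleep 60 &                       # throwaway background process
pid=$!
kill -STOP "$pid"                # suspend: process state becomes T (stopped)
ps -o stat= -p "$pid"
kill -CONT "$pid"                # resume: state goes back to S (sleeping)
kill "$pid"                      # clean up
```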

List USB info (equivalent to lsusb)

system_profiler SPUSBDataType

Secure delete

srm

Some settings

# Enable Time Machine to backup on a regular NAS
defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1

NVRAM

nvram -p                                  # Show nvram parameters

Change HFS flags

chflags nohidden ~/Library                # Unhide the User Library folder

Copy/Paste

pbpaste | wc -l
pbpaste | sort | pbcopy
pbpaste | tidy | pbcopy

Make your Mac talk to you

say Hello World!
say --output-file=audio.aiff --input-file=FILENAME.txt
say --voice="?"                           # List of voices
cat somefile.txt | say

Size of sections in an object file (useful for universal binaries)

size /Applications/iWorks/Pages.app/Contents/MacOS/Pages

Wifi scan

airport -s

Disk operations

diskutil secureErase freespace 3 /Volumes/name-of-drive
diskutil eject …
hdiutil mount ~/Desktop/lastest_webkit.dmg
hdiutil detach /Volumes/<mountpoint>

Create a ramdisk on OSX (a la tmpfs)

# create the ramdisk device
ramdisk=$( hdiutil attach -nomount ram://$(( 720 * 1024 * 1024 / 512 )) ) # device backed by 720 MB of RAM (ram:// takes a size in 512-byte blocks)
# mount the volume to /Volumes/tmp
diskutil erasevolume HFS+ "tmp" $(echo "$ramdisk")

# unmount the volume
diskutil unmount $(echo "$ramdisk")

# detach the ramdisk device
hdiutil detach $(echo "$ramdisk")

Read system information

sysctl -a
sysctl -n machdep.cpu.brand_string

Free memory cache

Mac OS X keeps apps in memory for a while after you close them, so they will open fast if you open them again. Purge will remove them from memory and give your free memory back.

purge

Installing pandoc

Step 1: Install Haskell Platform

Using Homebrew install the Haskell Platform.

brew install haskell-platform

This takes a few minutes so you will need to be patient.

If things don't work, you may need to remove the packages :

brew remove cabal-install
brew remove ghc
brew remove haskell-platform

Also if you are replacing a previous version of haskell-platform, you may want to unregister packages belonging to the old version. You can find broken packages using:

ghc-pkg check --simple-output

You can uninstall them using:

ghc-pkg check --simple-output | xargs -n 1 ghc-pkg unregister --force

Step 2: Install pandoc

In the previous step we installed the Haskell Platform which installs the cabal package manager.

cabal update
cabal install pandoc

By default, cabal installs pandoc in ~/.cabal/bin/pandoc. This can be changed, but I just added this to my path and am done with it.

System updates

sudo softwareupdate -l
sudo softwareupdate -i -a

System configuration, mostly accessible via System Preferences

systemsetup -setnetworktimeserver us.pool.ntp.org

Manage system configuration parameters

scutil ...

Network configuration, mostly accessible via System Preferences

networksetup
networksetup -listallnetworkservices      # List all known devices enabled and disabled

Recent downloads

sqlite3 ~/Library/Preferences/com.apple.LaunchServices.QuarantineEventsV* 'select LSQuarantineDataURLString from LSQuarantineEvent'

Clear recent download

sqlite3 ~/Library/Preferences/com.apple.LaunchServices.QuarantineEventsV* 'delete from LSQuarantineEvent'

Calculating relevant numbers for IPs

ipcalc -n 10.22.18.16/28

Adding more quartz filters

Filters are in /System/Library/Filters; for example, duplicate Reduce File Size [Best] and change the following parameters :

<key>Compression Quality</key>
<real>0.85</real>
<key>ImageScaleFactor</key>
<real>1.0</real>
<key>ImageSizeMax</key>
<integer>4678</integer>
<key>ImageSizeMin</key>
<integer>128</integer>
<string>Reduce File Size [Best]</string>

Note that for A4 paper : 1684 pixels at 144 dpi, 3508 pixels at 300 dpi, 4678 pixels at 400 dpi. Also see the ColorSync utility, though filters added there are not visible in other apps (they are added in ~/Library/Filters).

Use quartz filters from the command line

/System/Library/Printers/Libraries/quartzfilter sourcefile filter destination
/System/Library/Printers/Libraries/quartzfilter big.pdf /System/Library/Filters/Reduce\ File\ Size.qfilter small.pdf

Install Certificate Authority (CA) certificate in Keychain

sudo security import <CA certificate file name> -k /Library/Keychains/System.keychain

iotop on OSX

sudo iotop -C 3 10

iotop is in fact a DTrace script, and DTrace isn't allowed on El Capitan (SIP / rootless mode), so it fails with a weird error. One has to disable SIP to re-enable DTrace.

With OSX tooling

sudo fs_usage -f filesys


Measures the number of context switches

latency measures the number of context switches and interrupts of the system

$ sudo latency
Thu Apr 28 14:09:39                             0:00:09
                     SCHEDULER     INTERRUPTS
---------------------------------------------
total_samples                0           3859

delays <  10 usecs           0           3807
delays <  20 usecs           0             46
delays <  30 usecs           0              2
delays <  40 usecs           0              1
delays <  50 usecs           0              2
delays <  60 usecs           0              0
delays <  70 usecs           0              1
delays <  80 usecs           0              0
delays <  90 usecs           0              0
delays < 100 usecs           0              0
total  < 100 usecs           0           3859
...

The SCHEDULER column lists the number of context switches and the INTERRUPTS column lists the number of interrupts.

strace on OSX

The sc_usage utility samples system calls and page faults.

sudo sc_usage 71664
Slack             6 preemptions   52 context switches    4 threads    14:15:58
                  0 faults       273 system calls                      0:00:13

TYPE                           NUMBER        CPU_TIME   WAIT_TIME
------------------------------------------------------------------------------
System         Idle                                     00:11.222(00:00.976)
System         Busy                                     00:01.210(00:00.118)
Slack          Usermode                      00:00.170

semwait_signal                   24(2)       00:00.000  00:12.341(00:01.042) W
mach_msg_trap                  1120(88)      00:00.008  00:12.245(00:01.027) W
select                           19          00:00.000  00:12.086(00:01.042) W
workq_kernreturn                 11          00:00.000  00:09.462(00:01.042) W
psynch_mutexwait                 11          00:00.000  00:00.000
mk_timer_arm                   1134(88)      00:00.014  00:00.000
read                             25          00:00.000
write                             7          00:00.000
recvfrom                          6          00:00.000
madvise                          24(2)       00:00.000
gettimeofday                     35(5)       00:00.000
sendto                            6          00:00.000
psynch_mutexdrop                 11          00:00.000
bsdthread_ctl                    36          00:00.000
mach_vm_allocate_trap             2          00:00.000
mach_vm_deallocate_trap           2          00:00.000
mach_port_mod_refs_trap           2          00:00.000
mach_port_insert_member         560(44)      00:00.002
mach_port_extract_membe         560(44)      00:00.002
mach_reply_port                   2          00:00.000
mach_vm_purgable_contro           1          00:00.000
...

vmstat on OSX

vm_stat
vm_stat <interval in seconds>

OSX shortcuts

  • ⌘-⌥-T : Open the special chars window in most apps
  • ⌘-control-power : Force to restart
  • ⌘-⇧-G : In Finder go to Folder
  • ⌘-E : In Finder eject the current drive
  • ⌘-⇧-N : In Finder create a new folder
  • control-F2 : Go to the menu bar
  • control-F3 : Go to the dock
  • control-F8 : Go to the status menu
  • ⌘-⌥-esc : Open Force Quit
  • ⌘-⌥-⇧-esc : Force quit the front-most app

Startup shortcuts

  • ⌥ (Option) : Display all bootable volumes (Startup Manager)
  • ⇧ (Shift) : Perform a Safe Boot (start up in Safe Mode)
  • Left ⇧ : Prevent automatic login
  • D : Use Apple Hardware Test
  • ⌘-R : Use Recovery (OS X Lion or later)
  • ⌘-V : Start in Verbose Mode
  • ⌘-S : Start in Single User Mode
  • ⌘-⌥-P-R : Reset NVRAM / parameter RAM

useful commands

❯ fluxctl list-images --k8s-fwd-ns flux --workload security:helmrelease/security-auth --namespace security --context gke_infra-prod-67bbc6f2_europe-west4_prod-1
WORKLOAD                            CONTAINER    IMAGE                                 CREATED
security:helmrelease/security-auth  chart-image  eu.gcr.io/bbc-registry/security-auth
                                                 '-> 1.20200304.160755-a1aa0d1         04 Mar 20 16:08 UTC
                                                     1.20200304.150021-92ecfbf         04 Mar 20 15:00 UTC
                                                     1.20200303.125505-57ae9dd         03 Mar 20 12:55 UTC
                                                     1.20200302.233522-5f5253c         02 Mar 20 23:35 UTC
                                                     1.20200302.230556-4024625         02 Mar 20 23:06 UTC
                                                     1.20200302.225035-898a035         02 Mar 20 22:50 UTC
                                                     1.20200302.222412-9bb62a8         02 Mar 20 22:24 UTC
                                                     1.20200302.215626-fb438b5         02 Mar 20 21:56 UTC
                                                     1.20200302.172527-fb438b5         02 Mar 20 17:25 UTC
                                                     1.20200302.143544-88f4c29         02 Mar 20 14:35 UTC
❯ fluxctl list-images --k8s-fwd-ns flux --workload security:deployment/security-auth --namespace security --context gke_infra-prod-67bbc6f2_europe-west4_prod-1
WORKLOAD                           CONTAINER      IMAGE                                 CREATED
security:deployment/security-auth  security-auth  eu.gcr.io/bbc-registry/security-auth
                                                  |   1.20200304.160755-a1aa0d1         04 Mar 20 16:08 UTC
                                                  |   1.20200304.150021-92ecfbf         04 Mar 20 15:00 UTC
                                                  |   1.20200303.125505-57ae9dd         03 Mar 20 12:55 UTC
                                                  '-> 1.20200302.233522-5f5253c         02 Mar 20 23:35 UTC
                                                      1.20200302.230556-4024625         02 Mar 20 23:06 UTC
                                                      1.20200302.225035-898a035         02 Mar 20 22:50 UTC
                                                      1.20200302.222412-9bb62a8         02 Mar 20 22:24 UTC
                                                      1.20200302.215626-fb438b5         02 Mar 20 21:56 UTC
                                                      1.20200302.172527-fb438b5         02 Mar 20 17:25 UTC
                                                      1.20200302.143544-88f4c29         02 Mar 20 14:35 UTC
❯ k get event --context gke_infra-prod-67bbc6f2_europe-west4_prod-1 --namespace security --field-selector involvedObject.name=security-auth-65754f9589-tzw5x
LAST SEEN   TYPE      REASON      OBJECT                               MESSAGE
37m         Normal    Scheduled   pod/security-auth-65754f9589-tzw5x   Successfully assigned security/security-auth-65754f9589-tzw5x to gke-prod-1-n1-standard-32-cos-898812f7-b9kg
37m         Normal    Pulled      pod/security-auth-65754f9589-tzw5x   Container image "istio/proxyv2:1.4.4" already present on machine
37m         Normal    Created     pod/security-auth-65754f9589-tzw5x   Created container istio-init
37m         Normal    Started     pod/security-auth-65754f9589-tzw5x   Started container istio-init
37m         Normal    Pulling     pod/security-auth-65754f9589-tzw5x   Pulling image "eu.gcr.io/bbc-registry/security-auth:1.20200304.160755-a1aa0d1"
36m         Normal    Pulled      pod/security-auth-65754f9589-tzw5x   Successfully pulled image "eu.gcr.io/bbc-registry/security-auth:1.20200304.160755-a1aa0d1"
36m         Normal    Created     pod/security-auth-65754f9589-tzw5x   Created container security-auth
36m         Normal    Started     pod/security-auth-65754f9589-tzw5x   Started container security-auth
36m         Normal    Pulled      pod/security-auth-65754f9589-tzw5x   Container image "istio/proxyv2:1.4.4" already present on machine
36m         Normal    Created     pod/security-auth-65754f9589-tzw5x   Created container istio-proxy
36m         Normal    Started     pod/security-auth-65754f9589-tzw5x   Started container istio-proxy
36m         Warning   Unhealthy   pod/security-auth-65754f9589-tzw5x   Readiness probe failed: HTTP probe failed with statuscode: 503
36m         Warning   Unhealthy   pod/security-auth-65754f9589-tzw5x   Readiness probe failed: HTTP probe failed with statuscode: 503
36m         Warning   Unhealthy   pod/security-auth-65754f9589-tzw5x   Readiness probe failed: Get http://10.208.13.73:8080/health: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
36m         Normal    Killing     pod/security-auth-65754f9589-tzw5x   Stopping container security-auth
36m         Normal    Killing     pod/security-auth-65754f9589-tzw5x   Stopping container istio-proxy
35m         Warning   Unhealthy   pod/security-auth-65754f9589-tzw5x   Readiness probe failed: Get http://10.208.13.73:15020/healthz/ready: dial tcp 10.208.13.73:15020: connect: connection refused

Kibana logs kubernetes.replicaset.name: security-auth-65754f9589 kubernetes.pod.name: security-auth-65754f9589-lmcmc

show message and field.message

❯ kt helm-operator --since 9m -n flux --context gke_infra-prod-67bbc6f2_europe-west4_prod-1 | grep security-auth
[helm-operator-854d7756c4-jpcm4] ts=2020-03-05T13:37:29.35410232Z caller=operator.go:307 component=operator info="enqueuing release" resource=security:helmrelease/security-auth-cassandra
[helm-operator-854d7756c4-jpcm4] ts=2020-03-05T13:37:29.448195777Z caller=operator.go:307 component=operator info="enqueuing release" resource=security:helmrelease/security-auth
[helm-operator-854d7756c4-jpcm4] ts=2020-03-05T13:37:29.472195926Z caller=operator.go:307 component=operator info="enqueuing release" resource=security:helmrelease/memorystore-security-auth-metrics
[helm-operator-854d7756c4-jpcm4] ts=2020-03-05T13:37:43.479434679Z caller=release.go:360 component=release release=security-auth-cassandra targetNamespace=security resource=security:helmrelease/security-auth-cassandra helmVersion=v2 info="performing dry-run upgrade to see if release has diverged"
[helm-operator-854d7756c4-jpcm4] ts=2020-03-05T13:37:43.819708882Z caller=release.go:404 component=release release=security-auth-cassandra targetNamespace=security resource=security:helmrelease/security-auth-cassandra helmVersion=v2 info="no changes" action=skip
[helm-operator-854d7756c4-jpcm4] ts=2020-03-05T13:38:51.063865248Z caller=release.go:360 component=release release=security-auth targetNamespace=security resource=security:helmrelease/security-auth helmVersion=v2 info="performing dry-run upgrade to see if release has diverged"
[helm-operator-854d7756c4-jpcm4] ts=2020-03-05T13:38:51.363338801Z caller=release.go:378 component=release release=security-auth targetNamespace=security resource=security:helmrelease/security-auth helmVersion=v2 info="release has been rolled back, comparing dry-run output with latest failed release"
[helm-operator-854d7756c4-jpcm4] ts=2020-03-05T13:38:51.400861697Z caller=release.go:404 component=release release=security-auth targetNamespace=security resource=security:helmrelease/security-auth helmVersion=v2 info="no changes" action=skip
❯ stern helm-operator --namespace flux --context gke_infra-prod-67bbc6f2_europe-west4_prod-1
❯ k rollout status deployment/security-auth --context gke_infra-prod-67bbc6f2_europe-west4_prod-1 --namespace security
Waiting for deployment "security-auth" rollout to finish: 4 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 4 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 4 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 3 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 3 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 3 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 2 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 2 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 2 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 9 of 10 updated replicas are available...
deployment "security-auth" successfully rolled out
# requires iproute2
ss -nlp
ss --listening --numeric --process --unix --tcp
❯ DOCKER_BUILDKIT=1 docker build -t test-edge-api --build-arg REGISTRY=eu.gcr.io/bbc-registry --no-cache -f _infra/Dockerfile .
[+] Building 1.4s (9/9) FINISHED
 => [internal] load build definition from Dockerfile                                                                                                                                                                                                                                                               0.0s
 => => transferring dockerfile: 1.34kB                                                                                                                                                                                                                                                                             0.0s
 => [internal] load .dockerignore                                                                                                                                                                                                                                                                                  0.0s
 => => transferring context: 35B                                                                                                                                                                                                                                                                                   0.0s
 => [internal] load metadata for eu.gcr.io/bbc-registry/corretto-java:11.0.6.10.1                                                                                                                                                                                                                                  0.0s
 => CACHED [1/4] FROM eu.gcr.io/bbc-registry/corretto-java:11.0.6.10.1                                                                                                                                                                                                                                             0.0s
 => [internal] load build context                                                                                                                                                                                                                                                                                  0.0s
 => => transferring context: 1.32kB                                                                                                                                                                                                                                                                                0.0s
 => [2/4] RUN mkdir -p /gclogs /etc/edge-api                                                                                                                                                                                                                                                                       0.3s
 => [3/4] COPY ./build/async-profiler/linux-x64 /async-profiler                                                                                                                                                                                                                                                    0.0s
 => [4/4] COPY ./build/libs/edge-api-boot.jar   ./build/java-agents/newrelic-agent.jar   ./build/java-agents/sqreen-agent.jar   ./build/java-agents/file-leak-detector.jar   ./src/serviceability/*.sh   /                                                                                                         0.6s
 => exporting to image                                                                                                                                                                                                                                                                                             0.4s
 => => exporting layers                                                                                                                                                                                                                                                                                            0.4s
 => => writing image sha256:5ceef8f5a4e23cb3bea7ca7cb7c90c0e338386b7f37992c92861cb119c312cb9                                                                                                                                                                                                                       0.0s
 => => naming to docker.io/library/test-edge-api
❯ docker run test-edge-api
Picked up JAVA_TOOL_OPTIONS:
WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
Mar 20, 2020 10:14:50 +0000 [7 1] com.newrelic INFO: Configuration file not found. The agent will attempt to read required values from environment variables.
Mar 20, 2020 10:14:50 +0000 [7 1] com.newrelic INFO: Using default collector host: collector.newrelic.com
Mar 20, 2020 10:14:50 +0000 [7 1] com.newrelic ERROR: Unable to start the New Relic Agent. Your application will continue to run but it will not be monitored.
com.newrelic.agent.config.ConfigurationException: The agent requires an application name. Check the app_name setting in newrelic.yml
        at com.newrelic.agent.config.ConfigServiceFactory.validateConfig(ConfigServiceFactory.java:64) ~[newrelic-agent.jar:5.8.0]
        at com.newrelic.agent.config.ConfigServiceFactory.createConfigService(ConfigServiceFactory.java:27) ~[newrelic-agent.jar:5.8.0]
        at com.newrelic.agent.service.ServiceManagerImpl.<init>(ServiceManagerImpl.java:121) ~[newrelic-agent.jar:5.8.0]
        at com.newrelic.agent.Agent.tryToInitializeServiceManager(Agent.java:194) [newrelic-agent.jar:5.8.0]
        at com.newrelic.agent.Agent.continuePremain(Agent.java:137) [newrelic-agent.jar:5.8.0]
        at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
        at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]
        at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
        at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
        at com.newrelic.bootstrap.BootstrapAgent.startAgent(BootstrapAgent.java:140) [newrelic-agent.jar:5.8.0]
        at com.newrelic.bootstrap.BootstrapAgent.premain(BootstrapAgent.java:77) [newrelic-agent.jar:5.8.0]
        at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
        at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]
        at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
        at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
        at sun.instrument.InstrumentationImpl.loadClassAndStartAgent(InstrumentationImpl.java:513) [?:?]
        at sun.instrument.InstrumentationImpl.loadClassAndCallPremain(InstrumentationImpl.java:525) [?:?]
10:14:53.566 [main] INFO org.springframework.core.KotlinDetector - Kotlin reflection implementation not found at runtime, related features won't be available.
2020-03-20 10:14:55.616 [] WARN  --- [kground-preinit] o.s.h.c.j.Jackson2ObjectMapperBuilder    : For Jackson Kotlin classes support please add "com.fasterxml.jackson.module:jackson-module-kotlin" to the classpath
...
VM.flags local docker
❯ docker exec -it unruffled_rubin bash -c "JAVA_TOOL_OPTIONS='' jcmd \$(pgrep java) VM.flags"
Picked up JAVA_TOOL_OPTIONS:
6:
-XX:CICompilerCount=3 -XX:ConcGCThreads=1 -XX:G1ConcRefinementThreads=4 -XX:G1HeapRegionSize=1048576
-XX:GCDrainStackTargetSize=64 -XX:InitialHeapSize=1774190592 -XX:InitialRAMPercentage=85.000000
-XX:+ManagementServer -XX:MarkStackSize=4194304 -XX:MaxHeapSize=1774190592 -XX:MaxNewSize=1064304640
-XX:MaxRAMPercentage=85.000000 -XX:MinHeapDeltaBytes=1048576 -XX:NativeMemoryTracking=summary
-XX:NonNMethodCodeHeapSize=5830732 -XX:NonProfiledCodeHeapSize=122913754 -XX:ProfiledCodeHeapSize=122913754
-XX:ReservedCodeCacheSize=251658240 -XX:+SegmentedCodeCache -XX:+UseCompressedClassPointers -XX:+UseCompressedOops
-XX:+UseFastUnorderedTimeStamps -XX:+UseG1GC
RSS
$ cat /proc/$(pgrep java)/status | grep VmRSS
VmRSS:	 4701120 kB
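/proc/<pid>/status also exposes the peak resident size (VmHWM, the "high water mark"), which is worth reading alongside VmRSS. A small sketch, demonstrated on the current shell so it runs anywhere; substitute $(pgrep java) for the JVM process:

```shell
# VmRSS is the current resident set size, VmHWM its peak.
# $$ is the current shell's PID; use $(pgrep java) for the JVM instead.
grep -E 'VmRSS|VmHWM' /proc/$$/status
```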
VM.flags in k8s
6:
-XX:CICompilerCount=4
-XX:ConcGCThreads=2
-XX:G1ConcRefinementThreads=8
-XX:G1HeapRegionSize=2097152
-XX:GCDrainStackTargetSize=64
-XX:InitialHeapSize=4563402752
-XX:InitialRAMPercentage=85.000000
-XX:+ManagementServer
-XX:MarkStackSize=4194304
-XX:MaxHeapSize=4563402752
-XX:MaxNewSize=2736783360
-XX:MaxRAMPercentage=85.000000
-XX:MinHeapDeltaBytes=2097152
-XX:NativeMemoryTracking=summary
-XX:NonNMethodCodeHeapSize=5836300
-XX:NonProfiledCodeHeapSize=122910970
-XX:ProfiledCodeHeapSize=122910970
-XX:ReservedCodeCacheSize=251658240
-XX:+SegmentedCodeCache
-XX:+UseCompressedClassPointers
-XX:+UseCompressedOops
-XX:+UseFastUnorderedTimeStamps
-XX:+UseG1GC

As the container runs java with native memory tracking (-XX:NativeMemoryTracking=summary), it's possible to ask the JVM for information about memory zones other than the heap.

Note
Enabling native memory tracking (NMT) causes a 5%-10% performance overhead.
VM.native_memory instant snapshot
❯ docker exec -it unruffled_rubin bash -c "JAVA_TOOL_OPTIONS='' jcmd \$(pgrep java) VM.native_memory scale=KB"
Picked up JAVA_TOOL_OPTIONS:
6:

Native Memory Tracking:

Total: reserved=7168324KB, committed=5380868KB                               (1)
-                 Java Heap (reserved=4456448KB, committed=4456448KB)        (2)
                            (mmap: reserved=4456448KB, committed=4456448KB)

-                     Class (reserved=1195628KB, committed=165788KB)         (3)
                            (classes #28431)                                 (4)
                            (  instance classes #26792, array classes #1639)
                            (malloc=5740KB #87822)
                            (mmap: reserved=1189888KB, committed=160048KB)
                            (  Metadata:   )
                            (    reserved=141312KB, committed=139876KB)
                            (    used=135945KB)
                            (    free=3931KB)
                            (    waste=0KB =0.00%)
                            (  Class space:)
                            (    reserved=1048576KB, committed=20172KB)
                            (    used=17864KB)
                            (    free=2308KB)
                            (    waste=0KB =0.00%)

-                    Thread (reserved=696395KB, committed=85455KB)           (5)
                            (thread #674)
                            (stack: reserved=692812KB, committed=81872KB)
                            (malloc=2432KB #4046)
                            (arena=1150KB #1347)

-                      Code (reserved=251877KB, committed=105201KB)          (6)
                            (malloc=4189KB #11718)
                            (mmap: reserved=247688KB, committed=101012KB)

-                        GC (reserved=230739KB, committed=230739KB)          (7)
                            (malloc=32031KB #63631)
                            (mmap: reserved=198708KB, committed=198708KB)

-                  Compiler (reserved=5914KB, committed=5914KB)              (8)
                            (malloc=6143KB #3281)
                            (arena=18014398509481755KB #5)

-                  Internal (reserved=24460KB, committed=24460KB)           (10)
                            (malloc=24460KB #13140)

-                     Other (reserved=267034KB, committed=267034KB)         (11)
                            (malloc=267034KB #631)

-                    Symbol (reserved=28915KB, committed=28915KB)            (9)
                            (malloc=25423KB #330973)
                            (arena=3492KB #1)

-    Native Memory Tracking (reserved=8433KB, committed=8433KB)
                            (malloc=117KB #1498)
                            (tracking overhead=8316KB)

-               Arena Chunk (reserved=217KB, committed=217KB)
                            (malloc=217KB)

-                   Logging (reserved=7KB, committed=7KB)
                            (malloc=7KB #266)

-                 Arguments (reserved=19KB, committed=19KB)
                            (malloc=19KB #521)

-                    Module (reserved=1362KB, committed=1362KB)
                            (malloc=1362KB #6320)

-              Synchronizer (reserved=837KB, committed=837KB)
                            (malloc=837KB #6877)

-                 Safepoint (reserved=8KB, committed=8KB)
                            (mmap: reserved=8KB, committed=8KB)

-                   Unknown (reserved=32KB, committed=32KB)
                            (mmap: reserved=32KB, committed=32KB)
  1. This shows how much memory the JVM reserved, 7168324 KB (~7.1 GB), and how much it actually committed, 5380868 KB (~5.4 GB).

  2. heap arena; note the reserved and committed values are the same, 4456448 KB. I'm not sure why this number differs from the VM flag -XX:MaxHeapSize=4563402752

  3. ~165 MB of class metadata

  4. how many classes have been loaded : 28431

  5. 674 threads are using ~81 MB out of 696 MB reserved

  6. Code cache area (assembly of the used methods) ~105 MB out of 251 MB which matches with -XX:ReservedCodeCacheSize=251658240

  7. G1GC internal data structures take ~230 MB

  8. C1 / C2 compilers (which compile bytecode to assembly) use ~6 MB

  9. The symbol tables contain many things like interned strings and other internal constants, ~29 MB

  10. Internal (which included DirectByteBuffers before Java 11), and possibly other objects, takes ~24 MB here

  11. The Other section, which includes DirectByteBuffers since Java 11, takes ~267 MB

Other areas are much smaller in scale, NMT takes ~8 MB itself, module system ~1.3 MB, etc.

For a lot more details read this article by Aleksey Shipilёv on native memory tracking
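To follow one of these figures over time, e.g. the committed total, the summary output can be piped through awk. A sketch, fed here with the captured Total line above so it runs as-is; on a live process pipe jcmd through the same awk program:

```shell
# Extract the committed total from an NMT summary "Total:" line.
# On a live process:
#   jcmd $(pgrep java) VM.native_memory scale=KB | awk -F'[=,]' '/^Total/ {print $4}'
echo 'Total: reserved=7168324KB, committed=5380868KB' \
  | awk -F'[=,]' '/^Total/ {print $4}'
# → 5380868KB
```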

There are also the MappedByteBuffers, the files mapped to the virtual memory of a process. NMT does not track them, yet MappedByteBuffers can also take physical memory, and there is no simple way to limit how much they take. It is however possible to see the actual usage in a process's memory map: pmap -x <pid>

process memory mappings
$ pmap -x $(pgrep java)
6:   /usr/bin/java -Dfile.encoding=UTF-8 -Duser.timezone=UTC -Dcom.sun.management.jmxremote.port=7199 -Dcom.sun.management.jmxremote.rmi.port=7199 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Djava.security.egd=file:/dev/.
/urandom -XX:InitialRAMPercentage=85.0 -XX:MaxRAMPercentage=85.0 -XX:NativeMemoryTracking=summary -Xlog:os,safepoint*,gc*,gc+ref=debug,gc+ergo*=debug,gc+age*=debug,gc+phases*:file=/gclogs/%t-gc.log:time,uptime,tags:filecount=5,filesize=10M -javaag
Address           Kbytes     RSS   Dirty Mode  Mapping
0000000000400000       4       4       0 r-x-- java
0000000000600000       4       4       4 r---- java
0000000000601000       4       4       4 rw--- java
000000000216f000     404     272     272 rw---   [ anon ]
00000006f0000000 4476620 3128252 3128252 rw---   [ anon ]
00000008013b3000 1028404       0       0 -----   [ anon ]
00007fc5de9ea000      16       0       0 -----   [ anon ]
00007fc5de9ee000    1012     104     104 rw---   [ anon ]
00007fc5deaeb000      16       0       0 -----   [ anon ]
00007fc5deaef000    1012      24      24 rw---   [ anon ]
00007fc5debec000      16       0       0 -----   [ anon ]
00007fc5debf0000    1012      92      92 rw---   [ anon ]
00007fc5deced000      16       0       0 -----   [ anon ]
00007fc5decf1000    1012     100     100 rw---   [ anon ]
00007fc5dedee000      16       0       0 -----   [ anon ]
00007fc5dedf2000    1012     100     100 rw---   [ anon ]
00007fc5deeef000      16       0       0 -----   [ anon ]
00007fc5deef3000    1012     100     100 rw---   [ anon ]
00007fc5deff0000      16       0       0 -----   [ anon ]
00007fc5deff4000    1012     100     100 rw---   [ anon ]
00007fc5df0f1000      16       0       0 -----   [ anon ]
00007fc5df0f5000    1012     100     100 rw---   [ anon ]
00007fc5df1f2000      16       0       0 -----   [ anon ]
00007fc5df1f6000    1012     100     100 rw---   [ anon ]
00007fc5df2f3000      16       0       0 -----   [ anon ]
00007fc5df2f7000    1012     100     100 rw---   [ anon ]
00007fc5df3f4000      16       0       0 -----   [ anon ]
00007fc5df3f8000    1012     100     100 rw---   [ anon ]
00007fc5df4f5000      16       0       0 -----   [ anon ]
00007fc5df4f9000    1012     100     100 rw---   [ anon ]
00007fc5df5f6000      16       0       0 -----   [ anon ]
00007fc5df5fa000    1012     100     100 rw---   [ anon ]

...

00007fca48ba9000   17696   14876       0 r-x-- libjvm.so
00007fca49cf1000    2044       0       0 ----- libjvm.so
00007fca49ef0000     764     764     764 r---- libjvm.so
00007fca49faf000     232     232     208 rw--- libjvm.so
00007fca49fe9000     352     320     320 rw---   [ anon ]
00007fca4a041000     136     136       0 r---- libc-2.28.so
00007fca4a063000    1312    1140       0 r-x-- libc-2.28.so
00007fca4a1ab000     304     148       0 r---- libc-2.28.so
00007fca4a1f7000       4       0       0 ----- libc-2.28.so
00007fca4a1f8000      16      16      16 r---- libc-2.28.so
00007fca4a1fc000       8       8       8 rw--- libc-2.28.so
00007fca4a1fe000      16      16      16 rw---   [ anon ]
00007fca4a202000       4       4       0 r---- libdl-2.28.so
00007fca4a203000       4       4       0 r-x-- libdl-2.28.so
00007fca4a204000       4       4       0 r---- libdl-2.28.so
00007fca4a205000       4       4       4 r---- libdl-2.28.so
00007fca4a206000       4       4       4 rw--- libdl-2.28.so
00007fca4a207000     100     100       0 r-x-- libjli.so
00007fca4a220000    2048       0       0 ----- libjli.so
00007fca4a420000       4       4       4 r---- libjli.so
00007fca4a421000       4       4       4 rw--- libjli.so
00007fca4a422000      24      24       0 r---- libpthread-2.28.so
00007fca4a428000      60      60       0 r-x-- libpthread-2.28.so
00007fca4a437000      24       0       0 r---- libpthread-2.28.so
00007fca4a43d000       4       4       4 r---- libpthread-2.28.so
00007fca4a43e000       4       4       4 rw--- libpthread-2.28.so
00007fca4a43f000      16       4       4 rw---   [ anon ]
00007fca4a443000       4       4       0 r---- LC_IDENTIFICATION
00007fca4a444000       4       0       0 -----   [ anon ]
00007fca4a445000       4       0       0 r----   [ anon ]
00007fca4a446000       8       8       8 rw---   [ anon ]
00007fca4a448000       4       4       0 r---- ld-2.28.so
00007fca4a449000     120     120       0 r-x-- ld-2.28.so
00007fca4a467000      32      32       0 r---- ld-2.28.so
00007fca4a46f000       4       4       4 r---- ld-2.28.so
00007fca4a470000       4       4       4 rw--- ld-2.28.so
00007fca4a471000       4       4       4 rw---   [ anon ]
00007ffe28536000     140      40      40 rw---   [ stack ]
00007ffe28582000      12       0       0 r----   [ anon ]
00007ffe28585000       8       4       0 r-x--   [ anon ]
ffffffffff600000       4       0       0 r-x--   [ anon ]
---------------- ------- ------- -------
total kB         24035820 4776860 4720796

That’s a lot of information, let’s refine that with more knowledge about /proc/<pid>/maps, each map is associated with a set of modes:

  • r: readable memory mapping

  • w: writable memory mapping

  • x: executable memory mapping

  • s or p: shared memory mapping or private (copy-on-write) mapping. /proc/<pid>/maps shows both, but pmap only shows the s flag.

Also, pmap has another mapping mode for which references are hard to find:

  • R: if set, the map has no swap space reserved (MAP_NORESERVE flag of mmap). This means that we can get a segmentation fault by accessing that memory if it has not already been mapped to physical memory, and the system is out of physical memory.

What interests us at this point are the process's memory-mapped (shared) files

process memory mapped files
$ pmap -x 6 | grep "[r-][w-][x-][s][R-]"
00007f5fdc02f000       4       4       0 r--s- instrumentation1647616515145161084.jar
00007f5fdc030000       4       4       0 r--s- instrumentation11262564974060761935.jar
00007f5fdc053000       8       8       0 r--s- java-agent-bs-cl.jar
00007f5fdc055000       4       4       0 r--s- instrumentation249633448216144460.jar
00007f5fdc056000       4       4       0 r--s- newrelic-bootstrap10447345921091566771.jar
00007f5fdc057000      12      12       0 r--s- newrelic-api6038277081136135384.jar
00007f5fec000000       8       8       0 r--s- newrelic-weaver-api16247655721253674284.jar
00007f5fec002000       4       4       0 r--s- newrelic-opentracing-bridge12060425782296980104.jar
00007f5fec003000      12      12       0 r--s- agent-bridge3261511391751138774.jar
00007f5ffb910000  138176   36060       0 r--s- modules
00007f6008006000      28      28       0 r--s- gconv-modules.cache
                           ^^^^^               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Which means there’s around 36 MB of memory mapped files.
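That per-file sum can be automated with awk; a sketch, shown with a few captured lines so the pipeline runs as-is (on a live process, feed it from pmap -x $(pgrep java) instead of printf):

```shell
# Sum the RSS column (3rd field) of shared ('s' mode) mappings.
printf '%s\n' \
  '00007f5fdc02f000       4       4       0 r--s- instrumentation.jar' \
  '00007f5ffb910000  138176   36060       0 r--s- modules' \
  '00007f6008006000      28      28       0 r--s- gconv-modules.cache' \
  | awk '$5 ~ /s/ {sum += $3} END {print sum " KB"}'
# → 36092 KB
```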

Another read on process memory here.

That leaves us with this equation :

Total memory = Heap + Code Cache + Metaspace + Symbol tables
               + Compiler + Other JVM structures + Thread stacks
               + Direct buffers + Mapped files
               + Native Libraries + Malloc overhead + ...

Heap                                                     4456448
Code Cache                                                105201
Metaspace                                                 165788
Symbol tables                                              28915
Compiler                                                    5914
Other JVM structures (Internal + NMT + smaller areas)      35375
  (24460 + 8433 + 217 + 7 + 19 + 1362 + 837 + 8 + 32)
Thread stacks                                              85455
Direct buffers (Other)                                    267034
Mapped files                                               36148
  (36060 + 4 + 4 + 8 + 4 + 4 + 12 + 8 + 4 + 12 + 28)
Native Libraries                      unaccounted at this time
Malloc overhead                            accounted in NMT
...
Total                                                    5186278 KB

5186278 KB is just a tad under 5 GB (5242880 KB).
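The total is just the sum of the committed figures; shell arithmetic is enough to cross-check it:

```shell
# Sum of the committed sizes listed above (the "Other JVM structures" and
# "Mapped files" lines already collapsed to their subtotals, 35375 and 36148 KB).
echo "$((4456448 + 105201 + 165788 + 28915 + 5914 + 35375 + 85455 + 267034 + 36148)) KB"
# → 5186278 KB
```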

More important is the actual non-heap usage:

Non heap   5186278 - 4456448 = 729830 KB    ~14 %
Heap       4456448 KB                       ~85 %
Total      5186278 KB                       100 %

This means the application needs at least 730 MB plus the heap to run.

The heap committed memory is 4563402752 B (set via -XX:MaxRAMPercentage=85.000000), but the actual heap usage may show a different figure:

$ jcmd $(pgrep java) GC.heap_info
6:
 garbage-first heap   total 4456448K, used 925702K [0x00000006f0000000, 0x0000000800000000)
  region size 2048K, 387 young (792576K), 12 survivors (24576K)
 Metaspace       used 154131K, capacity 160610K, committed 160976K, reserved 1189888K
  class space    used 18070K, capacity 20474K, committed 20556K, reserved 1048576K

Successive executions may give different results for the used memory

$ jcmd 6 GC.heap_info
6:
 garbage-first heap   total 4456448K, used 1245902K [0x00000006f0000000, 0x0000000800000000)
  region size 2048K, 543 young (1112064K), 12 survivors (24576K)
 Metaspace       used 154163K, capacity 160620K, committed 160976K, reserved 1189888K
  class space    used 18071K, capacity 20476K, committed 20556K, reserved 1048576K

$ jcmd 6 GC.heap_info
6:
 garbage-first heap   total 4456448K, used 2421454K [0x00000006f0000000, 0x0000000800000000)
  region size 2048K, 1117 young (2287616K), 12 survivors (24576K)
 Metaspace       used 154163K, capacity 160620K, committed 160976K, reserved 1189888K
  class space    used 18071K, capacity 20476K, committed 20556K, reserved 1048576K

The heap went from 925702 KB to 2421454 KB! Following the trend of the heap usage can lead to the actual memory requirements of this app (in the given cluster topology).

2.5 GB of used heap + 0.8 GB of non heap + 0.2 GB margin = 3.5 GB

Which leads to setting -XX:MaxRAMPercentage=71.0 if we want a lower memory footprint.
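The 71 figure falls out of the ratio between the target heap and the new total; a sketch of the arithmetic, using the 2.5 GB and 3.5 GB estimates above:

```shell
# MaxRAMPercentage = desired heap / container memory limit * 100
heap_kb=2621440     # ~2.5 GB of observed heap usage
total_kb=3670016    # 3.5 GB = heap + non heap + margin
awk -v h="$heap_kb" -v t="$total_kb" \
    'BEGIN { printf "-XX:MaxRAMPercentage=%.1f\n", 100 * h / t }'
# → -XX:MaxRAMPercentage=71.4
```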

  • RSS ⇒ amount of physical memory allocated & used by a process

  • Java MaxHeapSize != Docker stats (“MEM USAGE”)

    • Java ~= heap + metaspace + off-heap (DirectBuffer + threads + compiled code + GC data + …​)

A good start is the actual Linux Kernel documentation on cgroup v1.

memory.stat
❯ kubectl exec -it --container=edge-api deployment/edge-api -- cat /sys/fs/cgroup/memory/memory.stat
cache 57434112 (7)
rss 4822343680 (1)
rss_huge 0
shmem 0
mapped_file 0
dirty 0
writeback 0
swap 0 (6)
pgpgin 7918680
pgpgout 6726903
pgfault 7682598
pgmajfault 0
pgmajfault_s 0
pgmajfault_a 0
pgmajfault_f 0
inactive_anon 0 (2)
active_anon 4823887872 (3)
inactive_file 58806272 (4)
active_file 188416 (5)
unevictable 0
hierarchical_memory_limit 5368709120
hierarchical_memsw_limit 5368709120
total_cache 57434112
total_rss 4822343680
total_rss_huge 0
total_shmem 0
total_mapped_file 0
total_dirty 0
total_writeback 0
total_swap 0
total_pgpgin 7918680
total_pgpgout 6726903
total_pgfault 7682598
total_pgmajfault 0
total_pgmajfault_s 0
total_pgmajfault_a 0
total_pgmajfault_f 0
total_inactive_anon 0
total_active_anon 4823887872
total_inactive_file 58806272
total_active_file 188416
total_unevictable 0
  1. rss of the processes, anonymous memory and swap cache, without tmpfs (shmem) (~4.8 GB)

  2. anonymous memory and swap cache on inactive LRU list, with tmpfs (shmem)

  3. anonymous memory and swap cache on active LRU list, with tmpfs (shmem) (~4.8 GB)

  4. file-backed memory on inactive LRU list, in bytes (~59 MB)

  5. file-backed memory on active LRU list, in bytes (~190 KB)

  6. swap usage, 0 is the only good value for java

  7. page cache memory (~57 MB)

When you interpret the values reported by memory.stat, note how the various statistics inter-relate:

  • active_anon + inactive_anon = anonymous memory + file cache for tmpfs + swap cache

Therefore, active_anon + inactive_anon ≠ rss, because rss does not include tmpfs.

  • active_file + inactive_file = cache - size of tmpfs
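These identities can be checked directly; a sketch, fed here with the sample values above so it runs as-is (on a live cgroup, point awk at /sys/fs/cgroup/memory/memory.stat instead of the here-doc):

```shell
# Sum the anon and file pairs from memory.stat-style input.
awk '
  $1 == "cache"         { c   = $2 }
  $1 == "inactive_anon" { ia  = $2 }
  $1 == "active_anon"   { aa  = $2 }
  $1 == "inactive_file" { inf = $2 }
  $1 == "active_file"   { af  = $2 }
  END {
    print "anon total: " (aa + ia)
    print "file total: " (af + inf) " (cache: " c ")"
  }
' <<'EOF'
cache 57434112
inactive_anon 0
active_anon 4823887872
inactive_file 58806272
active_file 188416
EOF
```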

There are other memory settings to look at

memory usage and limits
cat /sys/fs/cgroup/memory/memory.{usage_in_bytes,limit_in_bytes,memsw.usage_in_bytes,memsw.limit_in_bytes}
4944756736 (1)
5368709120 (2)
4944748544 (3)
5368709120 (4)
  1. current memory usage ~4.9GB, but it’s recommended to read cache+rss+swap values in memory.stat

  2. limit on the memory usage (~5.3GB)

  3. current memory and swap usage (~4.9 GB)

  4. limit on memory and swap (~5.3GB)

Note that memory.limit_in_bytes and memory.memsw.limit_in_bytes have the same value; this means the processes in the cgroup can use all the memory before swapping, however it is not impossible for the process to use swap before this limit is reached.

In fact, due to the swappiness value, the kernel may try to reclaim memory.

There are other parameters related to the kernel and TCP allocations.

memory.swappiness
cat /proc/sys/vm/swappiness (1)
60
cat /sys/fs/cgroup/memory/memory.swappiness (2)
60
  1. OS swappiness

  2. cgroup swappiness; here the setting is not overridden

useful commands

❯ fluxctl list-images --k8s-fwd-ns flux --workload security:helmrelease/security-auth --namespace security --context gke_infra-prod-67bbc6f2_europe-west4_prod-1
WORKLOAD                            CONTAINER    IMAGE                                 CREATED
security:helmrelease/security-auth  chart-image  eu.gcr.io/bbc-registry/security-auth
                                                 '-> 1.20200304.160755-a1aa0d1         04 Mar 20 16:08 UTC
                                                     1.20200304.150021-92ecfbf         04 Mar 20 15:00 UTC
                                                     1.20200303.125505-57ae9dd         03 Mar 20 12:55 UTC
                                                     1.20200302.233522-5f5253c         02 Mar 20 23:35 UTC
                                                     1.20200302.230556-4024625         02 Mar 20 23:06 UTC
                                                     1.20200302.225035-898a035         02 Mar 20 22:50 UTC
                                                     1.20200302.222412-9bb62a8         02 Mar 20 22:24 UTC
                                                     1.20200302.215626-fb438b5         02 Mar 20 21:56 UTC
                                                     1.20200302.172527-fb438b5         02 Mar 20 17:25 UTC
                                                     1.20200302.143544-88f4c29         02 Mar 20 14:35 UTC
❯ fluxctl list-images --k8s-fwd-ns flux --workload security:deployment/security-auth --namespace security --context gke_infra-prod-67bbc6f2_europe-west4_prod-1
WORKLOAD                           CONTAINER      IMAGE                                 CREATED
security:deployment/security-auth  security-auth  eu.gcr.io/bbc-registry/security-auth
                                                  |   1.20200304.160755-a1aa0d1         04 Mar 20 16:08 UTC
                                                  |   1.20200304.150021-92ecfbf         04 Mar 20 15:00 UTC
                                                  |   1.20200303.125505-57ae9dd         03 Mar 20 12:55 UTC
                                                  '-> 1.20200302.233522-5f5253c         02 Mar 20 23:35 UTC
                                                      1.20200302.230556-4024625         02 Mar 20 23:06 UTC
                                                      1.20200302.225035-898a035         02 Mar 20 22:50 UTC
                                                      1.20200302.222412-9bb62a8         02 Mar 20 22:24 UTC
                                                      1.20200302.215626-fb438b5         02 Mar 20 21:56 UTC
                                                      1.20200302.172527-fb438b5         02 Mar 20 17:25 UTC
                                                      1.20200302.143544-88f4c29         02 Mar 20 14:35 UTC
❯ k get event --context gke_infra-prod-67bbc6f2_europe-west4_prod-1 --namespace security --field-selector involvedObject.name=security-auth-65754f9589-tzw5x
LAST SEEN   TYPE      REASON      OBJECT                               MESSAGE
37m         Normal    Scheduled   pod/security-auth-65754f9589-tzw5x   Successfully assigned security/security-auth-65754f9589-tzw5x to gke-prod-1-n1-standard-32-cos-898812f7-b9kg
37m         Normal    Pulled      pod/security-auth-65754f9589-tzw5x   Container image "istio/proxyv2:1.4.4" already present on machine
37m         Normal    Created     pod/security-auth-65754f9589-tzw5x   Created container istio-init
37m         Normal    Started     pod/security-auth-65754f9589-tzw5x   Started container istio-init
37m         Normal    Pulling     pod/security-auth-65754f9589-tzw5x   Pulling image "eu.gcr.io/bbc-registry/security-auth:1.20200304.160755-a1aa0d1"
36m         Normal    Pulled      pod/security-auth-65754f9589-tzw5x   Successfully pulled image "eu.gcr.io/bbc-registry/security-auth:1.20200304.160755-a1aa0d1"
36m         Normal    Created     pod/security-auth-65754f9589-tzw5x   Created container security-auth
36m         Normal    Started     pod/security-auth-65754f9589-tzw5x   Started container security-auth
36m         Normal    Pulled      pod/security-auth-65754f9589-tzw5x   Container image "istio/proxyv2:1.4.4" already present on machine
36m         Normal    Created     pod/security-auth-65754f9589-tzw5x   Created container istio-proxy
36m         Normal    Started     pod/security-auth-65754f9589-tzw5x   Started container istio-proxy
36m         Warning   Unhealthy   pod/security-auth-65754f9589-tzw5x   Readiness probe failed: HTTP probe failed with statuscode: 503
36m         Warning   Unhealthy   pod/security-auth-65754f9589-tzw5x   Readiness probe failed: HTTP probe failed with statuscode: 503
36m         Warning   Unhealthy   pod/security-auth-65754f9589-tzw5x   Readiness probe failed: Get http://10.208.13.73:8080/health: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
36m         Normal    Killing     pod/security-auth-65754f9589-tzw5x   Stopping container security-auth
36m         Normal    Killing     pod/security-auth-65754f9589-tzw5x   Stopping container istio-proxy
35m         Warning   Unhealthy   pod/security-auth-65754f9589-tzw5x   Readiness probe failed: Get http://10.208.13.73:15020/healthz/ready: dial tcp 10.208.13.73:15020: connect: connection refused

Kibana logs kubernetes.replicaset.name: security-auth-65754f9589 kubernetes.pod.name: security-auth-65754f9589-lmcmc

show message and field.message

❯ kt helm-operator --since 9m -n flux --context gke_infra-prod-67bbc6f2_europe-west4_prod-1 | grep security-auth
[helm-operator-854d7756c4-jpcm4] ts=2020-03-05T13:37:29.35410232Z caller=operator.go:307 component=operator info="enqueuing release" resource=security:helmrelease/security-auth-cassandra
[helm-operator-854d7756c4-jpcm4] ts=2020-03-05T13:37:29.448195777Z caller=operator.go:307 component=operator info="enqueuing release" resource=security:helmrelease/security-auth
[helm-operator-854d7756c4-jpcm4] ts=2020-03-05T13:37:29.472195926Z caller=operator.go:307 component=operator info="enqueuing release" resource=security:helmrelease/memorystore-security-auth-metrics
[helm-operator-854d7756c4-jpcm4] ts=2020-03-05T13:37:43.479434679Z caller=release.go:360 component=release release=security-auth-cassandra targetNamespace=security resource=security:helmrelease/security-auth-cassandra helmVersion=v2 info="performing dry-run upgrade to see if release has diverged"
[helm-operator-854d7756c4-jpcm4] ts=2020-03-05T13:37:43.819708882Z caller=release.go:404 component=release release=security-auth-cassandra targetNamespace=security resource=security:helmrelease/security-auth-cassandra helmVersion=v2 info="no changes" action=skip
[helm-operator-854d7756c4-jpcm4] ts=2020-03-05T13:38:51.063865248Z caller=release.go:360 component=release release=security-auth targetNamespace=security resource=security:helmrelease/security-auth helmVersion=v2 info="performing dry-run upgrade to see if release has diverged"
[helm-operator-854d7756c4-jpcm4] ts=2020-03-05T13:38:51.363338801Z caller=release.go:378 component=release release=security-auth targetNamespace=security resource=security:helmrelease/security-auth helmVersion=v2 info="release has been rolled back, comparing dry-run output with latest failed release"
[helm-operator-854d7756c4-jpcm4] ts=2020-03-05T13:38:51.400861697Z caller=release.go:404 component=release release=security-auth targetNamespace=security resource=security:helmrelease/security-auth helmVersion=v2 info="no changes" action=skip
❯ stern helm-operator --namespace flux --context gke_infra-prod-67bbc6f2_europe-west4_prod-1
❯ k rollout status deployment/security-auth --context gke_infra-prod-67bbc6f2_europe-west4_prod-1 --namespace security
Waiting for deployment "security-auth" rollout to finish: 4 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 4 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 4 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 3 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 3 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 3 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 2 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 2 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 2 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "security-auth" rollout to finish: 9 of 10 updated replicas are available...
deployment "security-auth" successfully rolled out
# requires iproute2
ss -nlp
ss --listening --numeric --process --unix --tcp
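A sketch of post-processing `ss` output with awk: extract the listening port and the owning process name. The sample lines below are made up for illustration; real output needs `-p` and enough privileges to see process info.

```shell
# Parse ss-style output: column 4 is the local address,
# column 6 holds users:(("name",pid=...,fd=...)) when -p is used.
ss_sample='LISTEN 0 128 0.0.0.0:8080 0.0.0.0:* users:(("java",pid=6,fd=42))
LISTEN 0 128 127.0.0.1:5432 0.0.0.0:* users:(("postgres",pid=99,fd=5))'

result=$(printf '%s\n' "$ss_sample" | awk '{
  n = split($4, addr, ":")          # last field of the address is the port
  match($6, /"[^"]+"/)              # process name between the quotes
  print addr[n], substr($6, RSTART + 1, RLENGTH - 2)
}')
printf '%s\n' "$result"
```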
❯ DOCKER_BUILDKIT=1 docker build -t test-edge-api --build-arg REGISTRY=eu.gcr.io/bbc-registry --no-cache -f _infra/Dockerfile .
[+] Building 1.4s (9/9) FINISHED
 => [internal] load build definition from Dockerfile                                                                                                                                                                                                                                                               0.0s
 => => transferring dockerfile: 1.34kB                                                                                                                                                                                                                                                                             0.0s
 => [internal] load .dockerignore                                                                                                                                                                                                                                                                                  0.0s
 => => transferring context: 35B                                                                                                                                                                                                                                                                                   0.0s
 => [internal] load metadata for eu.gcr.io/bbc-registry/corretto-java:11.0.6.10.1                                                                                                                                                                                                                                  0.0s
 => CACHED [1/4] FROM eu.gcr.io/bbc-registry/corretto-java:11.0.6.10.1                                                                                                                                                                                                                                             0.0s
 => [internal] load build context                                                                                                                                                                                                                                                                                  0.0s
 => => transferring context: 1.32kB                                                                                                                                                                                                                                                                                0.0s
 => [2/4] RUN mkdir -p /gclogs /etc/edge-api                                                                                                                                                                                                                                                                       0.3s
 => [3/4] COPY ./build/async-profiler/linux-x64 /async-profiler                                                                                                                                                                                                                                                    0.0s
 => [4/4] COPY ./build/libs/edge-api-boot.jar   ./build/java-agents/newrelic-agent.jar   ./build/java-agents/sqreen-agent.jar   ./build/java-agents/file-leak-detector.jar   ./src/serviceability/*.sh   /                                                                                                         0.6s
 => exporting to image                                                                                                                                                                                                                                                                                             0.4s
 => => exporting layers                                                                                                                                                                                                                                                                                            0.4s
 => => writing image sha256:5ceef8f5a4e23cb3bea7ca7cb7c90c0e338386b7f37992c92861cb119c312cb9                                                                                                                                                                                                                       0.0s
 => => naming to docker.io/library/test-edge-api
❯ docker run test-edge-api
Picked up JAVA_TOOL_OPTIONS:
WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
Mar 20, 2020 10:14:50 +0000 [7 1] com.newrelic INFO: Configuration file not found. The agent will attempt to read required values from environment variables.
Mar 20, 2020 10:14:50 +0000 [7 1] com.newrelic INFO: Using default collector host: collector.newrelic.com
Mar 20, 2020 10:14:50 +0000 [7 1] com.newrelic ERROR: Unable to start the New Relic Agent. Your application will continue to run but it will not be monitored.
com.newrelic.agent.config.ConfigurationException: The agent requires an application name. Check the app_name setting in newrelic.yml
        at com.newrelic.agent.config.ConfigServiceFactory.validateConfig(ConfigServiceFactory.java:64) ~[newrelic-agent.jar:5.8.0]
        at com.newrelic.agent.config.ConfigServiceFactory.createConfigService(ConfigServiceFactory.java:27) ~[newrelic-agent.jar:5.8.0]
        at com.newrelic.agent.service.ServiceManagerImpl.<init>(ServiceManagerImpl.java:121) ~[newrelic-agent.jar:5.8.0]
        at com.newrelic.agent.Agent.tryToInitializeServiceManager(Agent.java:194) [newrelic-agent.jar:5.8.0]
        at com.newrelic.agent.Agent.continuePremain(Agent.java:137) [newrelic-agent.jar:5.8.0]
        at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
        at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]
        at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
        at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
        at com.newrelic.bootstrap.BootstrapAgent.startAgent(BootstrapAgent.java:140) [newrelic-agent.jar:5.8.0]
        at com.newrelic.bootstrap.BootstrapAgent.premain(BootstrapAgent.java:77) [newrelic-agent.jar:5.8.0]
        at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
        at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]
        at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
        at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
        at sun.instrument.InstrumentationImpl.loadClassAndStartAgent(InstrumentationImpl.java:513) [?:?]
        at sun.instrument.InstrumentationImpl.loadClassAndCallPremain(InstrumentationImpl.java:525) [?:?]
10:14:53.566 [main] INFO org.springframework.core.KotlinDetector - Kotlin reflection implementation not found at runtime, related features won't be available.
2020-03-20 10:14:55.616 [] WARN  --- [kground-preinit] o.s.h.c.j.Jackson2ObjectMapperBuilder    : For Jackson Kotlin classes support please add "com.fasterxml.jackson.module:jackson-module-kotlin" to the classpath
...
VM.flags local docker
❯ docker exec -it unruffled_rubin bash -c "JAVA_TOOL_OPTIONS='' jcmd \$(pgrep java) VM.flags"
Picked up JAVA_TOOL_OPTIONS:
6:
-XX:CICompilerCount=3 -XX:ConcGCThreads=1 -XX:G1ConcRefinementThreads=4 -XX:G1HeapRegionSize=1048576
-XX:GCDrainStackTargetSize=64 -XX:InitialHeapSize=1774190592 -XX:InitialRAMPercentage=85.000000
-XX:+ManagementServer -XX:MarkStackSize=4194304 -XX:MaxHeapSize=1774190592 -XX:MaxNewSize=1064304640
-XX:MaxRAMPercentage=85.000000 -XX:MinHeapDeltaBytes=1048576 -XX:NativeMemoryTracking=summary
-XX:NonNMethodCodeHeapSize=5830732 -XX:NonProfiledCodeHeapSize=122913754 -XX:ProfiledCodeHeapSize=122913754
-XX:ReservedCodeCacheSize=251658240 -XX:+SegmentedCodeCache -XX:+UseCompressedClassPointers -XX:+UseCompressedOops
-XX:+UseFastUnorderedTimeStamps -XX:+UseG1GC
RSS
$ cat /proc/$(pgrep java)/status | grep VmRSS
VmRSS:	 4701120 kB
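The same figure can be pulled and converted in one go (Linux-only sketch, run here against the current shell process rather than a JVM):

```shell
# VmRSS in /proc/<pid>/status is reported in KB; convert to MB
# (integer division) for a quick read.
rss_kb=$(awk '/^VmRSS:/ {print $2}' /proc/self/status)
echo "RSS: ${rss_kb} KB (~$((rss_kb / 1024)) MB)"
```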
VM.flags in k8s
6:
-XX:CICompilerCount=4
-XX:ConcGCThreads=2
-XX:G1ConcRefinementThreads=8
-XX:G1HeapRegionSize=2097152
-XX:GCDrainStackTargetSize=64
-XX:InitialHeapSize=4563402752
-XX:InitialRAMPercentage=85.000000
-XX:+ManagementServer
-XX:MarkStackSize=4194304
-XX:MaxHeapSize=4563402752
-XX:MaxNewSize=2736783360
-XX:MaxRAMPercentage=85.000000
-XX:MinHeapDeltaBytes=2097152
-XX:NativeMemoryTracking=summary
-XX:NonNMethodCodeHeapSize=5836300
-XX:NonProfiledCodeHeapSize=122910970
-XX:ProfiledCodeHeapSize=122910970
-XX:ReservedCodeCacheSize=251658240
-XX:+SegmentedCodeCache
-XX:+UseCompressedClassPointers
-XX:+UseCompressedOops
-XX:+UseFastUnorderedTimeStamps
-XX:+UseG1GC
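The heap size above follows directly from `-XX:MaxRAMPercentage` applied to the container memory limit (5368709120 B, the `hierarchical_memory_limit` seen later in `memory.stat`); a quick check:

```shell
# 85% of the 5 GiB container limit gives the heap size exactly.
limit=5368709120
heap=$((limit * 85 / 100))
echo "$heap"   # 4563402752, i.e. -XX:MaxHeapSize
```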

As the container runs java with native memory tracking (-XX:NativeMemoryTracking=summary), it’s possible to ask the JVM for information about memory zones other than the heap.

Note
Enabling native memory tracking (NMT) causes a 5% to 10% performance overhead.
VM.native_memory instant snapshot
❯ docker exec -it unruffled_rubin bash -c "JAVA_TOOL_OPTIONS='' jcmd \$(pgrep java) VM.native_memory scale=KB"
Picked up JAVA_TOOL_OPTIONS:
6:

Native Memory Tracking:

Total: reserved=7168324KB, committed=5380868KB                               (1)
-                 Java Heap (reserved=4456448KB, committed=4456448KB)        (2)
                            (mmap: reserved=4456448KB, committed=4456448KB)

-                     Class (reserved=1195628KB, committed=165788KB)         (3)
                            (classes #28431)                                 (4)
                            (  instance classes #26792, array classes #1639)
                            (malloc=5740KB #87822)
                            (mmap: reserved=1189888KB, committed=160048KB)
                            (  Metadata:   )
                            (    reserved=141312KB, committed=139876KB)
                            (    used=135945KB)
                            (    free=3931KB)
                            (    waste=0KB =0.00%)
                            (  Class space:)
                            (    reserved=1048576KB, committed=20172KB)
                            (    used=17864KB)
                            (    free=2308KB)
                            (    waste=0KB =0.00%)

-                    Thread (reserved=696395KB, committed=85455KB)           (5)
                            (thread #674)
                            (stack: reserved=692812KB, committed=81872KB)
                            (malloc=2432KB #4046)
                            (arena=1150KB #1347)

-                      Code (reserved=251877KB, committed=105201KB)          (6)
                            (malloc=4189KB #11718)
                            (mmap: reserved=247688KB, committed=101012KB)

-                        GC (reserved=230739KB, committed=230739KB)          (7)
                            (malloc=32031KB #63631)
                            (mmap: reserved=198708KB, committed=198708KB)

-                  Compiler (reserved=5914KB, committed=5914KB)              (8)
                            (malloc=6143KB #3281)
                            (arena=18014398509481755KB #5)

-                  Internal (reserved=24460KB, committed=24460KB)           (10)
                            (malloc=24460KB #13140)

-                     Other (reserved=267034KB, committed=267034KB)         (11)
                            (malloc=267034KB #631)

-                    Symbol (reserved=28915KB, committed=28915KB)            (9)
                            (malloc=25423KB #330973)
                            (arena=3492KB #1)

-    Native Memory Tracking (reserved=8433KB, committed=8433KB)
                            (malloc=117KB #1498)
                            (tracking overhead=8316KB)

-               Arena Chunk (reserved=217KB, committed=217KB)
                            (malloc=217KB)

-                   Logging (reserved=7KB, committed=7KB)
                            (malloc=7KB #266)

-                 Arguments (reserved=19KB, committed=19KB)
                            (malloc=19KB #521)

-                    Module (reserved=1362KB, committed=1362KB)
                            (malloc=1362KB #6320)

-              Synchronizer (reserved=837KB, committed=837KB)
                            (malloc=837KB #6877)

-                 Safepoint (reserved=8KB, committed=8KB)
                            (mmap: reserved=8KB, committed=8KB)

-                   Unknown (reserved=32KB, committed=32KB)
                            (mmap: reserved=32KB, committed=32KB)
  1. This shows what the JVM reserved for memory, 7168324 KB (~7.1 GB), and what is actually committed by the JVM process, 5380868 KB (~5.4 GB).

  2. the heap arena; note the reserved and committed values are the same, 4456448 KB, which is exactly the VM flag -XX:MaxHeapSize=4563402752 expressed in bytes (4456448 KB × 1024 = 4563402752 B)

  3. ~165 MB of class metadata

  4. how many classes have been loaded: 28431

  5. 674 threads are using ~81 MB out of 696 MB reserved

  6. Code cache area (assembly of the used methods), ~105 MB used out of 251 MB, which matches -XX:ReservedCodeCacheSize=251658240

  7. G1GC internal data structures take ~230 MB

  8. C1 / C2 compilers (which compile bytecode to assembly) use ~6 MB (the absurd arena figure in the output above looks like an NMT accounting glitch)

  9. The symbol tables contain many things like interned strings and other internal constants, ~29 MB

  10. Internal (which included DirectByteBuffers before Java 11), and possibly other objects, takes ~24 MB here

  11. The Other section (since Java 11) includes DirectByteBuffers, ~267 MB

Other areas are much smaller in scale, NMT takes ~8 MB itself, module system ~1.3 MB, etc.

For a lot more details read this article by Aleksey Shipilёv on native memory tracking

There are also the MappedByteBuffers: files mapped into the virtual memory of a process. NMT does not track them, yet MappedByteBuffers can also take physical memory, and there is no simple way to limit how much they take. However, it is possible to see the actual usage in a process’s memory map: pmap -x <pid>

process memory mappings
$ pmap -x $(pgrep java)
6:   /usr/bin/java -Dfile.encoding=UTF-8 -Duser.timezone=UTC -Dcom.sun.management.jmxremote.port=7199 -Dcom.sun.management.jmxremote.rmi.port=7199 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Djava.security.egd=file:/dev/.
/urandom -XX:InitialRAMPercentage=85.0 -XX:MaxRAMPercentage=85.0 -XX:NativeMemoryTracking=summary -Xlog:os,safepoint*,gc*,gc+ref=debug,gc+ergo*=debug,gc+age*=debug,gc+phases*:file=/gclogs/%t-gc.log:time,uptime,tags:filecount=5,filesize=10M -javaag
Address           Kbytes     RSS   Dirty Mode  Mapping
0000000000400000       4       4       0 r-x-- java
0000000000600000       4       4       4 r---- java
0000000000601000       4       4       4 rw--- java
000000000216f000     404     272     272 rw---   [ anon ]
00000006f0000000 4476620 3128252 3128252 rw---   [ anon ]
00000008013b3000 1028404       0       0 -----   [ anon ]
00007fc5de9ea000      16       0       0 -----   [ anon ]
00007fc5de9ee000    1012     104     104 rw---   [ anon ]
00007fc5deaeb000      16       0       0 -----   [ anon ]
00007fc5deaef000    1012      24      24 rw---   [ anon ]
00007fc5debec000      16       0       0 -----   [ anon ]
00007fc5debf0000    1012      92      92 rw---   [ anon ]
00007fc5deced000      16       0       0 -----   [ anon ]
00007fc5decf1000    1012     100     100 rw---   [ anon ]
00007fc5dedee000      16       0       0 -----   [ anon ]
00007fc5dedf2000    1012     100     100 rw---   [ anon ]
00007fc5deeef000      16       0       0 -----   [ anon ]
00007fc5deef3000    1012     100     100 rw---   [ anon ]
00007fc5deff0000      16       0       0 -----   [ anon ]
00007fc5deff4000    1012     100     100 rw---   [ anon ]
00007fc5df0f1000      16       0       0 -----   [ anon ]
00007fc5df0f5000    1012     100     100 rw---   [ anon ]
00007fc5df1f2000      16       0       0 -----   [ anon ]
00007fc5df1f6000    1012     100     100 rw---   [ anon ]
00007fc5df2f3000      16       0       0 -----   [ anon ]
00007fc5df2f7000    1012     100     100 rw---   [ anon ]
00007fc5df3f4000      16       0       0 -----   [ anon ]
00007fc5df3f8000    1012     100     100 rw---   [ anon ]
00007fc5df4f5000      16       0       0 -----   [ anon ]
00007fc5df4f9000    1012     100     100 rw---   [ anon ]
00007fc5df5f6000      16       0       0 -----   [ anon ]
00007fc5df5fa000    1012     100     100 rw---   [ anon ]

...

00007fca48ba9000   17696   14876       0 r-x-- libjvm.so
00007fca49cf1000    2044       0       0 ----- libjvm.so
00007fca49ef0000     764     764     764 r---- libjvm.so
00007fca49faf000     232     232     208 rw--- libjvm.so
00007fca49fe9000     352     320     320 rw---   [ anon ]
00007fca4a041000     136     136       0 r---- libc-2.28.so
00007fca4a063000    1312    1140       0 r-x-- libc-2.28.so
00007fca4a1ab000     304     148       0 r---- libc-2.28.so
00007fca4a1f7000       4       0       0 ----- libc-2.28.so
00007fca4a1f8000      16      16      16 r---- libc-2.28.so
00007fca4a1fc000       8       8       8 rw--- libc-2.28.so
00007fca4a1fe000      16      16      16 rw---   [ anon ]
00007fca4a202000       4       4       0 r---- libdl-2.28.so
00007fca4a203000       4       4       0 r-x-- libdl-2.28.so
00007fca4a204000       4       4       0 r---- libdl-2.28.so
00007fca4a205000       4       4       4 r---- libdl-2.28.so
00007fca4a206000       4       4       4 rw--- libdl-2.28.so
00007fca4a207000     100     100       0 r-x-- libjli.so
00007fca4a220000    2048       0       0 ----- libjli.so
00007fca4a420000       4       4       4 r---- libjli.so
00007fca4a421000       4       4       4 rw--- libjli.so
00007fca4a422000      24      24       0 r---- libpthread-2.28.so
00007fca4a428000      60      60       0 r-x-- libpthread-2.28.so
00007fca4a437000      24       0       0 r---- libpthread-2.28.so
00007fca4a43d000       4       4       4 r---- libpthread-2.28.so
00007fca4a43e000       4       4       4 rw--- libpthread-2.28.so
00007fca4a43f000      16       4       4 rw---   [ anon ]
00007fca4a443000       4       4       0 r---- LC_IDENTIFICATION
00007fca4a444000       4       0       0 -----   [ anon ]
00007fca4a445000       4       0       0 r----   [ anon ]
00007fca4a446000       8       8       8 rw---   [ anon ]
00007fca4a448000       4       4       0 r---- ld-2.28.so
00007fca4a449000     120     120       0 r-x-- ld-2.28.so
00007fca4a467000      32      32       0 r---- ld-2.28.so
00007fca4a46f000       4       4       4 r---- ld-2.28.so
00007fca4a470000       4       4       4 rw--- ld-2.28.so
00007fca4a471000       4       4       4 rw---   [ anon ]
00007ffe28536000     140      40      40 rw---   [ stack ]
00007ffe28582000      12       0       0 r----   [ anon ]
00007ffe28585000       8       4       0 r-x--   [ anon ]
ffffffffff600000       4       0       0 r-x--   [ anon ]
---------------- ------- ------- -------
total kB         24035820 4776860 4720796

That’s a lot of information; let’s refine it with more knowledge about /proc/<pid>/maps. Each map is associated with a set of modes:

  • r: readable memory mapping

  • w: writable memory mapping

  • x: executable memory mapping

  • s or p: shared memory mapping or private mapping. /proc/<pid>/maps shows both, but pmap only shows the s flag.

Also, pmap has another mapping mode which is barely documented anywhere:

  • R: if set, the map has no swap space reserved (MAP_NORESERVE flag of mmap). This means that we can get a segmentation fault by accessing that memory if it has not already been mapped to physical memory, and the system is out of physical memory.

What interests us at this point are the process’s memory-mapped (shared) files.

process memory mapped files
$ pmap -x 6 | grep "[r-][w-][x-][s][R-]"
00007f5fdc02f000       4       4       0 r--s- instrumentation1647616515145161084.jar
00007f5fdc030000       4       4       0 r--s- instrumentation11262564974060761935.jar
00007f5fdc053000       8       8       0 r--s- java-agent-bs-cl.jar
00007f5fdc055000       4       4       0 r--s- instrumentation249633448216144460.jar
00007f5fdc056000       4       4       0 r--s- newrelic-bootstrap10447345921091566771.jar
00007f5fdc057000      12      12       0 r--s- newrelic-api6038277081136135384.jar
00007f5fec000000       8       8       0 r--s- newrelic-weaver-api16247655721253674284.jar
00007f5fec002000       4       4       0 r--s- newrelic-opentracing-bridge12060425782296980104.jar
00007f5fec003000      12      12       0 r--s- agent-bridge3261511391751138774.jar
00007f5ffb910000  138176   36060       0 r--s- modules
00007f6008006000      28      28       0 r--s- gconv-modules.cache
                           ^^^^^               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Which means there’s around 36 MB of memory mapped files.
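The ~36 MB figure can be re-derived by summing the RSS column (column 3 of `pmap -x`, in KB) over the shared mappings; the heredoc reuses the exact lines from above:

```shell
# pmap -x columns: Address Kbytes RSS Dirty Mode Mapping.
mapped_rss=$(awk '{ sum += $3 } END { print sum }' <<'EOF'
00007f5fdc02f000       4       4       0 r--s- instrumentation1647616515145161084.jar
00007f5fdc030000       4       4       0 r--s- instrumentation11262564974060761935.jar
00007f5fdc053000       8       8       0 r--s- java-agent-bs-cl.jar
00007f5fdc055000       4       4       0 r--s- instrumentation249633448216144460.jar
00007f5fdc056000       4       4       0 r--s- newrelic-bootstrap10447345921091566771.jar
00007f5fdc057000      12      12       0 r--s- newrelic-api6038277081136135384.jar
00007f5fec000000       8       8       0 r--s- newrelic-weaver-api16247655721253674284.jar
00007f5fec002000       4       4       0 r--s- newrelic-opentracing-bridge12060425782296980104.jar
00007f5fec003000      12      12       0 r--s- agent-bridge3261511391751138774.jar
00007f5ffb910000  138176   36060       0 r--s- modules
00007f6008006000      28      28       0 r--s- gconv-modules.cache
EOF
)
echo "${mapped_rss} KB"   # 36148 KB, ~36 MB
```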

Another read on process memory here.

That leaves us with this equation:

Total memory = Heap + Code Cache + Metaspace + Symbol tables
               + Compiler + Other JVM structures + Thread stacks
               + Direct buffers + Mapped files +
               + Native Libraries + Malloc overhead + ...

Heap:                                                4456448
Code Cache:                                          105201
Metaspace:                                           165788
Symbol tables:                                       28915
Compiler:                                            5914
Other JVM structures (Internal + NMT + smaller areas): 24460 + 8433 + 217 + 7 + 19 + 1362 + 837 + 8 + 32
Thread stacks:                                       85455
Direct buffers (Other):                              267034
Mapped files:                                        36060 + 4 + 4 + 8 + 4 + 4 + 12 + 8 + 4 + 12 + 28
Native Libraries:                                    unaccounted at this time
Malloc overhead:                                     accounted in NMT
…​
Total:                                               5186278 KB

5186278 KB is just a tad under 5 GB (5242880 KB).
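The total can be re-derived by summing the component figures above (all in KB); a quick shell check:

```shell
# Sum the NMT components listed above (KB).
heap=4456448; code_cache=105201; metaspace=165788; symbols=28915; compiler=5914
jvm_other=$((24460 + 8433 + 217 + 7 + 19 + 1362 + 837 + 8 + 32))
threads=85455; direct_buffers=267034
mapped_files=$((36060 + 4 + 4 + 8 + 4 + 4 + 12 + 8 + 4 + 12 + 28))
total=$((heap + code_cache + metaspace + symbols + compiler + jvm_other \
         + threads + direct_buffers + mapped_files))
echo "${total} KB"   # 5186278 KB
```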

More important is the actual non-heap usage:

Non heap: 5186278 - 4456448 = 729830 KB (~14 %)
Heap:     4456448 KB (~85 %)
Total:    5186278 KB (100 %)

This means the application needs at least 730 MB plus the heap to run.

The heap committed memory is 4563402752 B (set via -XX:MaxRAMPercentage=85.000000), but the actual heap usage may be a different figure:

$ jcmd $(pgrep java) GC.heap_info
6:
 garbage-first heap   total 4456448K, used 925702K [0x00000006f0000000, 0x0000000800000000)
  region size 2048K, 387 young (792576K), 12 survivors (24576K)
 Metaspace       used 154131K, capacity 160610K, committed 160976K, reserved 1189888K
  class space    used 18070K, capacity 20474K, committed 20556K, reserved 1048576K

Successive executions may give different figures for the used memory:

$ jcmd 6 GC.heap_info
6:
 garbage-first heap   total 4456448K, used 1245902K [0x00000006f0000000, 0x0000000800000000)
  region size 2048K, 543 young (1112064K), 12 survivors (24576K)
 Metaspace       used 154163K, capacity 160620K, committed 160976K, reserved 1189888K
  class space    used 18071K, capacity 20476K, committed 20556K, reserved 1048576K

$ jcmd 6 GC.heap_info
6:
 garbage-first heap   total 4456448K, used 2421454K [0x00000006f0000000, 0x0000000800000000)
  region size 2048K, 1117 young (2287616K), 12 survivors (24576K)
 Metaspace       used 154163K, capacity 160620K, committed 160976K, reserved 1189888K
  class space    used 18071K, capacity 20476K, committed 20556K, reserved 1048576K

The heap went from 925702 KB to 2421454 KB! Following the trend of the heap usage can lead to the actual memory requirement for this app (in the given cluster topology).

2.5 GB of used heap + 0.8 GB of non heap + 0.2 GB margin = 3.5 GB

Which leads to setting -XX:MaxRAMPercentage=71.0 if we want a lower memory footprint.
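A quick check of that arithmetic (the 71.0 above presumably adds a little headroom over the raw ratio):

```shell
# 2.5 GB used heap + 0.8 GB non-heap + 0.2 GB margin,
# as a share of the 5 GB container limit.
pct=$(awk 'BEGIN { printf "%.0f", (2.5 + 0.8 + 0.2) / 5 * 100 }')
echo "${pct}%"   # 70%
```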

  • RSS ⇒ amount of physical memory allocated & used by a process

  • Java MaxHeapSize != Docker stats (“MEM USAGE”)

    • Java ~= heap + metaspace + off-heap (DirectBuffer + threads + compiled code + GC data + …​)

A good start is the actual Linux Kernel documentation on cgroup v1.

memory.stat
❯ kubectl exec -it --container=edge-api deployment/edge-api -- cat /sys/fs/cgroup/memory/memory.stat
cache 57434112 (7)
rss 4822343680 (1)
rss_huge 0
shmem 0
mapped_file 0
dirty 0
writeback 0
swap 0 (6)
pgpgin 7918680
pgpgout 6726903
pgfault 7682598
pgmajfault 0
pgmajfault_s 0
pgmajfault_a 0
pgmajfault_f 0
inactive_anon 0 (2)
active_anon 4823887872 (3)
inactive_file 58806272 (4)
active_file 188416 (5)
unevictable 0
hierarchical_memory_limit 5368709120
hierarchical_memsw_limit 5368709120
total_cache 57434112
total_rss 4822343680
total_rss_huge 0
total_shmem 0
total_mapped_file 0
total_dirty 0
total_writeback 0
total_swap 0
total_pgpgin 7918680
total_pgpgout 6726903
total_pgfault 7682598
total_pgmajfault 0
total_pgmajfault_s 0
total_pgmajfault_a 0
total_pgmajfault_f 0
total_inactive_anon 0
total_active_anon 4823887872
total_inactive_file 58806272
total_active_file 188416
total_unevictable 0
  1. rss of the processes, anonymous memory and swap cache, without tmpfs (shmem) (~4.8 GB)

  2. anonymous memory and swap cache on the inactive LRU list, with tmpfs (shmem)

  3. anonymous memory and swap cache on the active LRU list, with tmpfs (shmem) (~4.8 GB)

  4. file-backed memory on inactive LRU list, in bytes (~59 MB)

  5. file-backed memory on active LRU list, in bytes (~190 KB)

  6. swap usage; 0 is the only good value for Java

  7. page cache memory (~57 MB)

When you interpret the values reported by memory.stat, note how the various statistics inter-relate:

  • active_anon + inactive_anon = anonymous memory + file cache for tmpfs + swap cache

Therefore, active_anon + inactive_anon ≠ rss, because rss does not include tmpfs.

  • active_file + inactive_file = cache - size of tmpfs

There are other memory settings to look at:

memory usage and limits
cat /sys/fs/cgroup/memory/memory.{usage_in_bytes,limit_in_bytes,memsw.usage_in_bytes,memsw.limit_in_bytes}
4944756736 (1)
5368709120 (2)
4944748544 (3)
5368709120 (4)
  1. current memory usage ~4.9GB, but it’s recommended to read cache+rss+swap values in memory.stat

  2. limit on the memory usage (~5.3GB)

  3. current memory and swap usage (~4.9 GB)

  4. limit on memory and swap (~5.3GB)

Note that the memory.limit_in_bytes and memory.memsw.limit_in_bytes values are the same: the processes in the cgroup can use all the memory before swapping. However, it is not impossible for a process to use the swap before this limit is reached.

In fact, due to the swappiness value, the kernel may try to reclaim memory.

There are other parameters related to the kernel and tcp allocations.

memory.swappiness
cat /proc/sys/vm/swappiness (1)
60
cat /sys/fs/cgroup/memory/memory.swappiness (2)
60
  1. OS swappiness

  2. cgroup swappiness; here the setting is not overridden

Spring 5

  • Driven by functional API design and reactive architectures
  • Stays on the classpath (for other libraries)

Functional style vs Annotation style

  • annotation: loosely coupled components, self-descriptive endpoints

  • functional: programmatic bean registration, endpoint composition; no reflection, scanning, or annotations

  • first-class support for Kotlin


GenericApplicationContext ctx = new GenericApplicationContext();
ctx.registerBean(Foo.class);
ctx.registerBean(Bar.class, () -> new Bar(ctx.getBean(Foo.class)));

GenericApplicationContext ctx = new GenericApplicationContext();
ctx.registerBean(Foo.class, Foo::new);
ctx.registerBean(Bar.class, () -> new Bar(ctx.getBean(Foo.class)), 
                            bd -> bd.setLazyInit(true));
val ctx = GenericApplicationContext()
ctx.registerBean(Foo::class)
ctx.registerBean { Bar(it.getBean(Foo::class)) }

val ctx = GenericApplicationContext {
    registerBean<Foo>()
    registerBean { Bar(it.getBean<Foo>()) }
}

Reactive

------------
Spring MVC
------------
Servlet API
------------

↑ Blocking IO ↓

------------------
Tomcat, Jetty, ...
------------------
----------------
Spring WebFlux
----------------
Spring Web API
(Reactor, reactive streams)
---------------------------

↑ Non-blocking IO ↓

------   -------------
Netty  | Tomcat, Jetty
------   -------------

@Controller works with WebFlux. Annotation-based:

@Controller
public class ReactiveController {
    //...

    @GetMapping("/users/{id}")
    public Mono<User> getUser(@PathVariable Long id) {
        return this.repository.findById(id);
    }

}

Functional web endpoints

RouterFunction<?> router =
  route(GET("/users/{id}"), handlerDelegate::getUser)
  .andRoute(GET("/users"), handlerDelegate::getUsers);

RouterFunction<?> router =
  route(GET("/users/{id}"), req -> ServerResponse.ok()...)
  .andRoute(GET("/users"), handlerDelegate::getUsers);

Nullability declarations via @Nullable where applicable.


Programmatic lookup via ObjectProvider

@Inject ObjectProvider<TheBean> theBeanProvider;

component scan is slow


spring boot 2

@Bean
public RouterFunction<ServerResponse> web(HandlerDelegate handlerDelegate) {
    return route(GET("/users/{id}"), handlerDelegate::getUser)
        .andRoute(GET("/users"), handlerDelegate::getUsers);
}
@GetMapping("/")
public Rendering home() {
    return Rendering.view("...").modelAttribute("...", /* reactive data */).build();
}

Start 2 tmux panes with synchronized typing

tmux new-session \; split-window -h \; set-window-option synchronize-panes on \; set-option -s mouse on \; attach

Notice the mouse support option only exists post-2.1.

Same for pre-2.1:

tmux new-session \; split-window -h \; set-window-option synchronize-panes on \; set -g mode-mouse on \; set -g mouse-resize-pane on \; set -g mouse-select-pane on \; set -g mouse-select-window on \; attach
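To flip synchronization on and off once the session is up, a one-line binding can go in ~/.tmux.conf (the S key here is an arbitrary choice; setting a boolean option with no value toggles it):

```shell
# ~/.tmux.conf: toggle synchronized typing with prefix + S
bind S set-window-option synchronize-panes
```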

tmux mouse mode

Changes in from 2.0 to 2.1 https://github.com/tmux/tmux/blob/310f0a960ca64fa3809545badc629c0c166c6cd2/CHANGES#L12

* Mouse-mode has been rewritten.  There's now no longer options for:
	- mouse-resize-pane
	- mouse-select-pane
	- mouse-select-window
	- mode-mouse

  Instead there is just one option:  'mouse' which turns on mouse support
entirely.

Possibly add this to ~/.tmux.conf to handle pre-2.1 and post-2.1 (note the string comparison with <; a numeric -lt test would fail on a dotted version like 2.1):

is_older="[[ $(tmux -V | cut -d' ' -f2) < 2.1 ]] && true || false"
if-shell "$is_older" "set -g mode-mouse on; set -g mouse-resize-pane on;\
  set -g mouse-select-pane on; set -g mouse-select-window on" "set -g mouse on"
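A plain string comparison breaks down once versions stop sorting lexically (e.g. 2.9 vs 2.10). A more robust sketch, assuming sort -V is available and using a hypothetical version_lt helper:

```shell
# Hypothetical helper: true if $1 sorts strictly before $2 in version order
version_lt() {
  [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

ver="$(tmux -V 2>/dev/null | cut -d' ' -f2)"   # e.g. "3.2a"; empty if tmux is absent
if version_lt "${ver:-2.1}" 2.1; then
  echo "older than 2.1: use the four mouse-* options"
else
  echo "2.1 or newer: use the single mouse option"
fi
```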

Start tmux panes with each their command

From answer

I was trying to create a complex grid of panes and had to deal with switching and splitting panes over and over again. Here is what I learned:

tmux new-session \;

Gets you started with a new session. To split it horizontally or vertically, use split-window -h or -v subsequently, like this:

tmux new-session \; split-window -v \; split-window -h \;

Creates 3 panes, like this:

------------
|          |
|----------|
|    |     |
------------

To run commands in those panes, add them with send-keys 'my-command' followed by C-m, which executes it:

tmux new-session \; \
    send-keys 'tail -f /var/log/monitor.log' C-m \; \
    split-window -v \; \
    split-window -h \; \
    send-keys 'top' C-m \; 

C-m means carriage return; one could use Enter instead.

And the resulting session should look like this:

------------
|  tail    |
|----------|
|    | top |
------------

Now I tried to sub-divide the bottom-left pane again, switching back either with last-pane or, in more complex windows, with select-pane -t 1, where 1 is the number of the pane in creation order, starting at 0.

tmux new-session \; \
    send-keys 'tail -f /var/log/monitor.log' C-m \; \
    split-window -v \; \
    split-window -h \; \
    send-keys 'top' C-m \; \
    select-pane -t 1 \; \
    split-window -v \; \
    send-keys 'weechat' C-m \;

That does it. Basically, knowing your way around split-window and select-pane is all you need. It's also handy to pass a percentage size with -p 75 to split-window, to get more control over the size of the panes.

tmux new-session \; \
  send-keys 'tail -f /var/log/monitor.log' C-m \; \
  split-window -v -p 75 \; \
  split-window -h -p 30 \; \
  send-keys 'top' C-m \; \
  select-pane -t 1 \; \
  split-window -v \; \
  send-keys 'weechat' C-m \;

Which results in a session looking like this:

------------------
|      tail      |
|----------------|
|          | top |
|----------|     |
| weechat  |     |
------------------

Get pane number/index of each pane in a window from a script

tmux display-message -p "#{pane_index}"
tmux display -pt "${TMUX_PANE:?}" '#{pane_index}'
UTF-8 encoded sample plain-text file
‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
Markus Kuhn [ˈmaʳkʊs kuːn] <http://www.cl.cam.ac.uk/~mgk25/> — 2002-07-25 CC BY
The ASCII compatible UTF-8 encoding used in this plain-text file
is defined in Unicode, ISO 10646-1, and RFC 2279.
Using Unicode/UTF-8, you can write in emails and source code things such as
Mathematics and sciences:
∮ E⋅da = Q, n → ∞, ∑ f(i) = ∏ g(i), ⎧⎡⎛┌─────┐⎞⎤⎫
⎪⎢⎜│a²+b³ ⎟⎥⎪
∀x∈ℝ: ⌈x⌉ = −⌊−x⌋, α ∧ ¬β = ¬(¬α ∨ β), ⎪⎢⎜│───── ⎟⎥⎪
⎪⎢⎜⎷ c₈ ⎟⎥⎪
ℕ ⊆ ℕ₀ ⊂ ℤ ⊂ ℚ ⊂ ℝ ⊂ ℂ, ⎨⎢⎜ ⎟⎥⎬
⎪⎢⎜ ∞ ⎟⎥⎪
⊥ < a ≠ b ≡ c ≤ d ≪ ⊤ ⇒ (⟦A⟧ ⇔ ⟪B⟫), ⎪⎢⎜ ⎲ ⎟⎥⎪
⎪⎢⎜ ⎳aⁱ-bⁱ⎟⎥⎪
2H₂ + O₂ ⇌ 2H₂O, R = 4.7 kΩ, ⌀ 200 mm ⎩⎣⎝i=1 ⎠⎦⎭
Linguistics and dictionaries:
ði ıntəˈnæʃənəl fəˈnɛtık əsoʊsiˈeıʃn
Y [ˈʏpsilɔn], Yen [jɛn], Yoga [ˈjoːgɑ]
APL:
((V⍳V)=⍳⍴V)/V←,V ⌷←⍳→⍴∆∇⊃‾⍎⍕⌈
Nicer typography in plain text files:
╔══════════════════════════════════════════╗
║ ║
║ • ‘single’ and “double” quotes ║
║ ║
║ • Curly apostrophes: “We’ve been here” ║
║ ║
║ • Latin-1 apostrophe and accents: '´` ║
║ ║
║ • ‚deutsche‘ „Anführungszeichen“ ║
║ ║
║ • †, ‡, ‰, •, 3–4, —, −5/+5, ™, … ║
║ ║
║ • ASCII safety test: 1lI|, 0OD, 8B ║
║ ╭─────────╮ ║
║ • the euro symbol: │ 14.95 € │ ║
║ ╰─────────╯ ║
╚══════════════════════════════════════════╝
Combining characters:
STARGΛ̊TE SG-1, a = v̇ = r̈, a⃑ ⊥ b⃑
Greek (in Polytonic):
The Greek anthem:
Σὲ γνωρίζω ἀπὸ τὴν κόψη
τοῦ σπαθιοῦ τὴν τρομερή,
σὲ γνωρίζω ἀπὸ τὴν ὄψη
ποὺ μὲ βία μετράει τὴ γῆ.
᾿Απ᾿ τὰ κόκκαλα βγαλμένη
τῶν ῾Ελλήνων τὰ ἱερά
καὶ σὰν πρῶτα ἀνδρειωμένη
χαῖρε, ὦ χαῖρε, ᾿Ελευθεριά!
From a speech of Demosthenes in the 4th century BC:
Οὐχὶ ταὐτὰ παρίσταταί μοι γιγνώσκειν, ὦ ἄνδρες ᾿Αθηναῖοι,
ὅταν τ᾿ εἰς τὰ πράγματα ἀποβλέψω καὶ ὅταν πρὸς τοὺς
λόγους οὓς ἀκούω· τοὺς μὲν γὰρ λόγους περὶ τοῦ
τιμωρήσασθαι Φίλιππον ὁρῶ γιγνομένους, τὰ δὲ πράγματ᾿
εἰς τοῦτο προήκοντα, ὥσθ᾿ ὅπως μὴ πεισόμεθ᾿ αὐτοὶ
πρότερον κακῶς σκέψασθαι δέον. οὐδέν οὖν ἄλλο μοι δοκοῦσιν
οἱ τὰ τοιαῦτα λέγοντες ἢ τὴν ὑπόθεσιν, περὶ ἧς βουλεύεσθαι,
οὐχὶ τὴν οὖσαν παριστάντες ὑμῖν ἁμαρτάνειν. ἐγὼ δέ, ὅτι μέν
ποτ᾿ ἐξῆν τῇ πόλει καὶ τὰ αὑτῆς ἔχειν ἀσφαλῶς καὶ Φίλιππον
τιμωρήσασθαι, καὶ μάλ᾿ ἀκριβῶς οἶδα· ἐπ᾿ ἐμοῦ γάρ, οὐ πάλαι
γέγονεν ταῦτ᾿ ἀμφότερα· νῦν μέντοι πέπεισμαι τοῦθ᾿ ἱκανὸν
προλαβεῖν ἡμῖν εἶναι τὴν πρώτην, ὅπως τοὺς συμμάχους
σώσομεν. ἐὰν γὰρ τοῦτο βεβαίως ὑπάρξῃ, τότε καὶ περὶ τοῦ
τίνα τιμωρήσεταί τις καὶ ὃν τρόπον ἐξέσται σκοπεῖν· πρὶν δὲ
τὴν ἀρχὴν ὀρθῶς ὑποθέσθαι, μάταιον ἡγοῦμαι περὶ τῆς
τελευτῆς ὁντινοῦν ποιεῖσθαι λόγον.
Δημοσθένους, Γ´ ᾿Ολυνθιακὸς
Georgian:
From a Unicode conference invitation:
გთხოვთ ახლავე გაიაროთ რეგისტრაცია Unicode-ის მეათე საერთაშორისო
კონფერენციაზე დასასწრებად, რომელიც გაიმართება 10-12 მარტს,
ქ. მაინცში, გერმანიაში. კონფერენცია შეჰკრებს ერთად მსოფლიოს
ექსპერტებს ისეთ დარგებში როგორიცაა ინტერნეტი და Unicode-ი,
ინტერნაციონალიზაცია და ლოკალიზაცია, Unicode-ის გამოყენება
ოპერაციულ სისტემებსა, და გამოყენებით პროგრამებში, შრიფტებში,
ტექსტების დამუშავებასა და მრავალენოვან კომპიუტერულ სისტემებში.
Russian:
From a Unicode conference invitation:
Зарегистрируйтесь сейчас на Десятую Международную Конференцию по
Unicode, которая состоится 10-12 марта 1997 года в Майнце в Германии.
Конференция соберет широкий круг экспертов по вопросам глобального
Интернета и Unicode, локализации и интернационализации, воплощению и
применению Unicode в различных операционных системах и программных
приложениях, шрифтах, верстке и многоязычных компьютерных системах.
Thai (UCS Level 2):
Excerpt from a poetry on The Romance of The Three Kingdoms (a Chinese
classic 'San Gua'):
[----------------------------|------------------------]
๏ แผ่นดินฮั่นเสื่อมโทรมแสนสังเวช พระปกเกศกองบู๊กู้ขึ้นใหม่
สิบสองกษัตริย์ก่อนหน้าแลถัดไป สององค์ไซร้โง่เขลาเบาปัญญา
ทรงนับถือขันทีเป็นที่พึ่ง บ้านเมืองจึงวิปริตเป็นนักหนา
โฮจิ๋นเรียกทัพทั่วหัวเมืองมา หมายจะฆ่ามดชั่วตัวสำคัญ
เหมือนขับไสไล่เสือจากเคหา รับหมาป่าเข้ามาเลยอาสัญ
ฝ่ายอ้องอุ้นยุแยกให้แตกกัน ใช้สาวนั้นเป็นชนวนชื่นชวนใจ
พลันลิฉุยกุยกีกลับก่อเหตุ ช่างอาเพศจริงหนาฟ้าร้องไห้
ต้องรบราฆ่าฟันจนบรรลัย ฤๅหาใครค้ำชูกู้บรรลังก์ ฯ
(The above is a two-column text. If combining characters are handled
correctly, the lines of the second column should be aligned with the
| character above.)
Ethiopian:
Proverbs in the Amharic language:
ሰማይ አይታረስ ንጉሥ አይከሰስ።
ብላ ካለኝ እንደአባቴ በቆመጠኝ።
ጌጥ ያለቤቱ ቁምጥና ነው።
ደሀ በሕልሙ ቅቤ ባይጠጣ ንጣት በገደለው።
የአፍ ወለምታ በቅቤ አይታሽም።
አይጥ በበላ ዳዋ ተመታ።
ሲተረጉሙ ይደረግሙ።
ቀስ በቀስ፥ ዕንቁላል በእግሩ ይሄዳል።
ድር ቢያብር አንበሳ ያስር።
ሰው እንደቤቱ እንጅ እንደ ጉረቤቱ አይተዳደርም።
እግዜር የከፈተውን ጉሮሮ ሳይዘጋው አይድርም።
የጎረቤት ሌባ፥ ቢያዩት ይስቅ ባያዩት ያጠልቅ።
ሥራ ከመፍታት ልጄን ላፋታት።
ዓባይ ማደሪያ የለው፥ ግንድ ይዞ ይዞራል።
የእስላም አገሩ መካ የአሞራ አገሩ ዋርካ።
ተንጋሎ ቢተፉ ተመልሶ ባፉ።
ወዳጅህ ማር ቢሆን ጨርስህ አትላሰው።
እግርህን በፍራሽህ ልክ ዘርጋ።
Runes:
ᚻᛖ ᚳᚹᚫᚦ ᚦᚫᛏ ᚻᛖ ᛒᚢᛞᛖ ᚩᚾ ᚦᚫᛗ ᛚᚪᚾᛞᛖ ᚾᚩᚱᚦᚹᛖᚪᚱᛞᚢᛗ ᚹᛁᚦ ᚦᚪ ᚹᛖᛥᚫ
(Old English, which transcribed into Latin reads 'He cwaeth that he
bude thaem lande northweardum with tha Westsae.' and means 'He said
that he lived in the northern land near the Western Sea.')
Braille:
⡌⠁⠧⠑ ⠼⠁⠒ ⡍⠜⠇⠑⠹⠰⠎ ⡣⠕⠌
⡍⠜⠇⠑⠹ ⠺⠁⠎ ⠙⠑⠁⠙⠒ ⠞⠕ ⠃⠑⠛⠔ ⠺⠊⠹⠲ ⡹⠻⠑ ⠊⠎ ⠝⠕ ⠙⠳⠃⠞
⠱⠁⠞⠑⠧⠻ ⠁⠃⠳⠞ ⠹⠁⠞⠲ ⡹⠑ ⠗⠑⠛⠊⠌⠻ ⠕⠋ ⠙⠊⠎ ⠃⠥⠗⠊⠁⠇ ⠺⠁⠎
⠎⠊⠛⠝⠫ ⠃⠹ ⠹⠑ ⠊⠇⠻⠛⠹⠍⠁⠝⠂ ⠹⠑ ⠊⠇⠻⠅⠂ ⠹⠑ ⠥⠝⠙⠻⠞⠁⠅⠻⠂
⠁⠝⠙ ⠹⠑ ⠡⠊⠑⠋ ⠍⠳⠗⠝⠻⠲ ⡎⠊⠗⠕⠕⠛⠑ ⠎⠊⠛⠝⠫ ⠊⠞⠲ ⡁⠝⠙
⡎⠊⠗⠕⠕⠛⠑⠰⠎ ⠝⠁⠍⠑ ⠺⠁⠎ ⠛⠕⠕⠙ ⠥⠏⠕⠝ ⠰⡡⠁⠝⠛⠑⠂ ⠋⠕⠗ ⠁⠝⠹⠹⠔⠛ ⠙⠑
⠡⠕⠎⠑ ⠞⠕ ⠏⠥⠞ ⠙⠊⠎ ⠙⠁⠝⠙ ⠞⠕⠲
⡕⠇⠙ ⡍⠜⠇⠑⠹ ⠺⠁⠎ ⠁⠎ ⠙⠑⠁⠙ ⠁⠎ ⠁ ⠙⠕⠕⠗⠤⠝⠁⠊⠇⠲
⡍⠔⠙⠖ ⡊ ⠙⠕⠝⠰⠞ ⠍⠑⠁⠝ ⠞⠕ ⠎⠁⠹ ⠹⠁⠞ ⡊ ⠅⠝⠪⠂ ⠕⠋ ⠍⠹
⠪⠝ ⠅⠝⠪⠇⠫⠛⠑⠂ ⠱⠁⠞ ⠹⠻⠑ ⠊⠎ ⠏⠜⠞⠊⠊⠥⠇⠜⠇⠹ ⠙⠑⠁⠙ ⠁⠃⠳⠞
⠁ ⠙⠕⠕⠗⠤⠝⠁⠊⠇⠲ ⡊ ⠍⠊⠣⠞ ⠙⠁⠧⠑ ⠃⠑⠲ ⠔⠊⠇⠔⠫⠂ ⠍⠹⠎⠑⠇⠋⠂ ⠞⠕
⠗⠑⠛⠜⠙ ⠁ ⠊⠕⠋⠋⠔⠤⠝⠁⠊⠇ ⠁⠎ ⠹⠑ ⠙⠑⠁⠙⠑⠌ ⠏⠊⠑⠊⠑ ⠕⠋ ⠊⠗⠕⠝⠍⠕⠝⠛⠻⠹
⠔ ⠹⠑ ⠞⠗⠁⠙⠑⠲ ⡃⠥⠞ ⠹⠑ ⠺⠊⠎⠙⠕⠍ ⠕⠋ ⠳⠗ ⠁⠝⠊⠑⠌⠕⠗⠎
⠊⠎ ⠔ ⠹⠑ ⠎⠊⠍⠊⠇⠑⠆ ⠁⠝⠙ ⠍⠹ ⠥⠝⠙⠁⠇⠇⠪⠫ ⠙⠁⠝⠙⠎
⠩⠁⠇⠇ ⠝⠕⠞ ⠙⠊⠌⠥⠗⠃ ⠊⠞⠂ ⠕⠗ ⠹⠑ ⡊⠳⠝⠞⠗⠹⠰⠎ ⠙⠕⠝⠑ ⠋⠕⠗⠲ ⡹⠳
⠺⠊⠇⠇ ⠹⠻⠑⠋⠕⠗⠑ ⠏⠻⠍⠊⠞ ⠍⠑ ⠞⠕ ⠗⠑⠏⠑⠁⠞⠂ ⠑⠍⠏⠙⠁⠞⠊⠊⠁⠇⠇⠹⠂ ⠹⠁⠞
⡍⠜⠇⠑⠹ ⠺⠁⠎ ⠁⠎ ⠙⠑⠁⠙ ⠁⠎ ⠁ ⠙⠕⠕⠗⠤⠝⠁⠊⠇⠲
(The first couple of paragraphs of "A Christmas Carol" by Dickens)
Compact font selection example text:
ABCDEFGHIJKLMNOPQRSTUVWXYZ /0123456789
abcdefghijklmnopqrstuvwxyz £©µÀÆÖÞßéöÿ
–—‘“”„†•…‰™œŠŸž€ ΑΒΓΔΩαβγδω АБВГДабвгд
∀∂∈ℝ∧∪≡∞ ↑↗↨↻⇣ ┐┼╔╘░►☺♀ fi�⑀₂ἠḂӥẄɐː⍎אԱა
Greetings in various languages:
Hello world, Καλημέρα κόσμε, コンニチハ
Box drawing alignment tests: █
╔══╦══╗ ┌──┬──┐ ╭──┬──╮ ╭──┬──╮ ┏━━┳━━┓ ┎┒┏┑ ╷ ╻ ┏┯┓ ┌┰┐ ▊ ╱╲╱╲╳╳╳
║┌─╨─┐║ │╔═╧═╗│ │╒═╪═╕│ │╓─╁─╖│ ┃┌─╂─┐┃ ┗╃╄┙ ╶┼╴╺╋╸┠┼┨ ┝╋┥ ▋ ╲╱╲╱╳╳╳
║│╲ ╱│║ │║ ║│ ││ │ ││ │║ ┃ ║│ ┃│ ╿ │┃ ┍╅╆┓ ╵ ╹ ┗┷┛ └┸┘ ▌ ╱╲╱╲╳╳╳
╠╡ ╳ ╞╣ ├╢ ╟┤ ├┼─┼─┼┤ ├╫─╂─╫┤ ┣┿╾┼╼┿┫ ┕┛┖┚ ┌┄┄┐ ╎ ┏┅┅┓ ┋ ▍ ╲╱╲╱╳╳╳
║│╱ ╲│║ │║ ║│ ││ │ ││ │║ ┃ ║│ ┃│ ╽ │┃ ░░▒▒▓▓██ ┊ ┆ ╎ ╏ ┇ ┋ ▎
║└─╥─┘║ │╚═╤═╝│ │╘═╪═╛│ │╙─╀─╜│ ┃└─╂─┘┃ ░░▒▒▓▓██ ┊ ┆ ╎ ╏ ┇ ┋ ▏
╚══╩══╝ └──┴──┘ ╰──┴──╯ ╰──┴──╯ ┗━━┻━━┛ ▗▄▖▛▀▜ └╌╌┘ ╎ ┗╍╍┛ ┋ ▁▂▃▄▅▆▇█
▝▀▘▙▄▟

VIM

Searching

  • /pattern : search forward for pattern
  • ?pattern : search backward
  • n : repeat forward search
  • N : repeat backward
  • * : search for word currently under cursor
  • g* : search for partial word under cursor (repeat with n)
  • ctrl-o, ctrl-i : go through jump locations
  • [I : show lines with matching word under cursor

Search and replace...

  • :%s/search_for_this/replace_with_this/ : search whole file and replace
  • :%s/search_for_this/replace_with_this/c : confirm each replace

Selecting

  • shift-v : selects entire lines

  • v : selects range of text

  • ctrl-v : selects columns

  • gv : reselect block

  • shift-i change column text

Indentation

  • :set tabstop=8 : tabs are at proper location
  • :set expandtab : don't use actual tab character (ctrl-v)
  • :set shiftwidth=4 : indenting is 4 spaces
  • :set autoindent : turns it on
  • :set smartindent : does the right thing (mostly) in programs
  • :set cindent : stricter rules for C programs

To indent the current line, or a visual block:

  • ctrl-t, ctrl-d : indent current line forward, backwards (insert mode)
  • visual > or < : indent block by sw (repeat with . )

To stop indenting when pasting with the mouse, add this to your .vimrc:

:set pastetoggle=<f5>

Changing line ending

In the current view

:e ++ff=dos
:e ++ff=mac
:e ++ff=unix

While saving

:w ++ff=dos
:w ++ff=mac
:w ++ff=unix

And you can use it from the command-line

for file in *.cpp
do
  vi +':w ++ff=unix' +':q' "${file}"
done
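If launching vi per file is overkill, the same DOS-to-Unix conversion can be sketched with tr alone (the .cpp glob is just illustrative):

```shell
# Strip carriage returns from every .cpp file (DOS -> Unix line endings)
for file in *.cpp; do
  [ -e "$file" ] || continue                      # skip when the glob matches nothing
  tr -d '\r' < "$file" > "$file.tmp" && mv "$file.tmp" "$file"
done
```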

Saving read-only file

:w !sudo tee %

Open file at a particular position

vim +commandHere file                     # open vim and execute the vim command
vim + file                                # open vim at the end of file
vim +362 file                             # open file at line 362
vim +/searched_term file                  # open vim at the first line matching 'searched_term'

Highlight current line

:color desert
:set cursorline
:hi CursorLine term=bold cterm=bold guibg=Grey40
# or
:hi CursorLine cterm=NONE ctermbg=darkred ctermfg=white

modelines

At the start or end of a file

/* vim: tw=60 ts=2: */

Windows

sets proxy

In Windows 7:

netsh winhttp set proxy <server:port>
netsh winhttp set proxy proxy-server="http=myswg.mydomain.com:8080;https=myswg.mydomain.com:8443"
netsh winhttp reset proxy

In Windows XP:

proxycfg -p myswg.mydomain.com:8080
net stop wuauserv
net start wuauserv

sets port forwarding ipv4, ipv6

netsh interface portproxy add v4tov4 listenport=4443 connectaddress=www.google.com connectport=443 listenaddress=127.0.0.1

netsh interface portproxy add v4tov6 listenport=6667 connectaddress=2001:888:0:2::6667 connectport=6667

netsh interface portproxy show all

see http://technet.microsoft.com/en-us/library/cc776297(v=ws.10).aspx

enable firewall

netsh firewall set adapter "Wireless Network Connection" filtering=enable

Loops in cmd

for /r %i in (*.mp3) do echo %i

will list all files recursively.

Also, if you run that command in a batch file, you need to double the % signs:

for /r %%i in (*) do echo %%i

For each SFV file in subdirectories do check the SFV

for /R . %f in (*.sfv) do exf -c "%f" -d "%~dpf"

Here are some modifiers:

%~dpnf if you want the drive, the path, and the name without the extension, where f is the loop variable

Another example

for /l %x in (1, 1, 100) do (
   echo %x
   copy %x.txt z:\whatever\etc
)
