This is just a random collection of commands which I have found useful in Bash. This Gist is expected to grow over time (until I have mastered the whole of Bash). Another useful resource is this list of Unix commands on Wikipedia. Hyperlinked bash commands generally lead to the relevant man (manual) pages.
# Notes on `bash`

## Contents

- Get the current date and time and generate timestamped filenames with `date`
- Display date and time in `bash` history using `HISTTIMEFORMAT`
- Calculate running times of commands using `time`
- Run script in the current shell environment using `source`
- Updating and upgrading packages using `apt update` and `apt upgrade`
- Seeing available disk space (using `df`) and disk usage (using `du`)
- View the return code of the most recent command using `$?`
- Use stdout from one command as a command-line argument in another using `$()` notation
- Serial communication using `minicom`
- Change users using `su`
- Finding access permissions using `stat`
- Changing access permissions using `chmod`
- Change ownership of a file using `chown`
- Recursively find word counts of all files with a particular file ending
- View all of the most recent bash commands using `history`
- View the full path to a file using `realpath`
- Fixing `$'\r': command not found` error when running a bash script in WSL using `dos2unix`
- Extract (unzip) a `.tar.gz` file using `tar -xvzf`
- Compress (zip) a file or directory using `tar -czvf`
- Viewing available memory and swap files using `free`
- View running processes using `ps aux`
- Useful `grep` commands
- Useful `gcc` flags (including profiling with `gprof`)
- Counting the number of lines in a file using `wc`
- Viewing the first/last `n` lines of a file using `head`/`tail`
- Changing the bash prompt
- `apt-get update` vs `apt-get upgrade`
- Checking the version of an installed `apt` package using `apt list`
- Clear the console window using `clear`
- Iterating through files which match a file pattern
- Recursively `git add`-ing files (including files hidden by `.gitignore`)
- `git`-moving files in a loop
- Iteratively and recursively `git`-moving files one directory up
- Search for files anywhere using `find`
- Connect to a WiFi network from the command line using `nmcli`
- View the hostname and IP address using `hostname`
- Viewing the properties of a file using `file`
- Viewing and editing the system path
- Viewing the Linux distribution details using `lsb_release`
- WSL
- Connecting to a serial device using WSL
- View filesize using `ls -l`
- Reboot/restart machine using `reboot`
- Shutdown machine
- Add user to group
- Check if user is part of a group
- View directory contents in a single column
- Storing `git` credentials
- Automatically providing password to `sudo`
- Sort `$PATH` and remove duplicates
- Download VSCode
- Get the absolute path to the current `bash` script and its directory using `$BASH_SOURCE`
- `ssh`
- Synchronise remote files and directories with `rsync`
- Create an `alias`
- Create a symbolic link using `ln -s`
- Find CPU details (including model name) using `lscpu`
## Get the current date and time and generate timestamped filenames with `date`

The command `date` can be used to print the current date and time on the command line, or to get a string variable containing the current date and time which can be used in future commands, for example:
$ date
Fri Feb 11 14:53:37 GMT 2022
$ echo $(date) > ~/temp.txt
$ cat ~/temp.txt
Fri Feb 11 14:53:39 GMT 2022
It can also be used to generate a timestamped filename on the command line, for example:
$ mkdir ./temp && cd ./temp
$ ls
$ echo "Hello, world!" > "Info $(date '+%Y-%m-%d %H-%M-%S').txt"
$ ls
'Info 2022-09-06 13-35-13.txt'
## Display date and time in `bash` history using `HISTTIMEFORMAT`

Using the command `history 10` will display the last 10 `bash` commands that were used, but not when they were used (date and time). To include this information in the bash history in the current bash terminal, use the command `export HISTTIMEFORMAT="| %Y-%m-%d %T | "`. Note that using the command `history 10` will now display the date and time of commands that were used both before and after setting `HISTTIMEFORMAT`. To make this behaviour persist in future bash terminals, use the following commands (source):
echo 'export HISTTIMEFORMAT="| %Y-%m-%d %T | "' >> ~/.bash_profile
source ~/.bash_profile
Example:
$ history 5
94 | 2022-05-17 15:48:24 | ls /
95 | 2022-05-17 15:48:28 | df -h
96 | 2022-05-17 15:48:33 | cd ~
97 | 2022-05-17 15:48:36 | ps
98 | 2022-05-17 15:48:40 | history 5
## Calculate running times of commands using `time`

Prepend a `bash` command with `time` to print the running time of that command, EG `time ls /`. Note that arguments to the command being timed don't need to be placed in quotation marks (as is the case when running commands over `ssh`). `time` displays 3 statistics, which are described below (source):

- `real`: wall clock time, from start to finish of the command being run, including time that the process spends being blocked
- `user`: amount of CPU time spent in user-mode code (outside the kernel), NOT including time that the process spends being blocked, summed over all CPU cores
- `sys`: amount of CPU time spent in the kernel within the process (IE CPU time spent in system calls within the kernel, as opposed to library code, which is still running in user-space), NOT including time that the process spends being blocked, summed over all CPU cores

Note that `time` can be used to time multiple sequential commands, including commands which are themselves being timed using `time`, by placing those commands in brackets. For example:
$ time (time ps && time ls /etc/cron.daily)
PID TTY TIME CMD
1035 tty1 00:00:00 bash
1156 tty1 00:00:00 bash
1157 tty1 00:00:00 ps
real 0m0.024s
user 0m0.000s
sys 0m0.016s
apport apt-compat bsdmainutils dpkg logrotate man-db mdadm mlocate passwd popularity-contest ubuntu-advantage-tools update-notifier-common
real 0m0.026s
user 0m0.000s
sys 0m0.016s
real 0m0.052s
user 0m0.000s
sys 0m0.031s
## Run script in the current shell environment using `source`

Given a script called `./script`, running the command `source script` will run `script` in the current shell environment. This means that any environment variables etc set in `script` will persist in the current shell. This is different from running `./script` or `bash script` or `bash ./script`, which execute the commands in `script` in a new shell environment, so any changes to the shell environment made by `script` will not persist in the current shell (EG if `script` changes an environment variable or sets a new one, the value of that environment variable will not persist once `script` has finished running).
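As a quick demonstration of the difference (a minimal sketch, assuming a variable `FOO` which is not already set in the current shell):

```bash
echo 'export FOO=bar' > ./script

bash ./script && echo "FOO = '$FOO'"    # runs in a child shell, so this prints FOO = ''
source ./script && echo "FOO = '$FOO'"  # runs in the current shell, so this prints FOO = 'bar'
```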
This can be useful EG when making a change to `~/.bashrc` (`bashrc` stands for "Bash Run Commands", which are run every time a bash shell is started) using `nano`, and wanting to apply those changes to the current shell without closing it and starting a new one:
$ nano ~/.bashrc
$ # <Make changes to ~/.bashrc in the nano text editor>
$ source ~/.bashrc
## Updating and upgrading packages using `apt update` and `apt upgrade`

To update `apt` package lists, use the command `sudo apt update`. This command doesn't modify, upgrade or install any new or existing packages, but should be run before upgrading or installing any new or existing packages, to make sure that the most recent versions of those packages are used.

To upgrade all existing packages to their most recent versions, use the command `sudo apt upgrade`. This should be called before installing any new packages using `sudo apt install package-name`, to avoid any dependency issues.
These commands are often used one after the other, before installing a new package, as follows:
sudo apt update
sudo apt upgrade
## Seeing available disk space (using `df`) and disk usage (using `du`)

To see how much disk space is available, use the command `df`. To view the output in a human-readable format which chooses appropriate units for each file system (GB, MB, etc.), use the `-h` flag:
df -h
To see the size of a file or directory, use the `du` command (`du` stands for "disk usage"); again, use the `-h` flag for a human-readable format. This program can accept multiple files and/or directories in a single command:

du -h file1 [file2 dir1 dir2 etc]

If a directory is given to `du`, `du` will recursively search through the directory and print the size of all files in the directory. To only print the total size of the directory, use the `-s` flag (short for `--summarize`).

`du` can also accept wildcards. For example, to print the sizes of all files and directories in the user's home directory (printing the size of each directory, but not of the files and subdirectories within it), use the following command:

du -sh ~/*

Note that this is different to `du -sh ~` or `du -sh ~/`, which would only print the total size of the home directory.
To print the sizes of all directories in the root directory (note that this command runs surprisingly quickly compared to searching through the filesystem on Windows):
sudo du -sh /*
To sort the output from `du`, pipe it into `sort`. As described here, if using the `-h` flag for `du`, then also provide the `-h` flag to `sort`, so that `sort` will sort according to human-readable file-sizes, as shown below:
du -sh /path/to/dir/* | sort -h
To view the `N` biggest file-sizes, pipe the output from the previous command into `tail`, for example:
du -sh /path/to/dir/* | sort -h | tail -n10
## View the return code of the most recent command using `$?`

View the return code of the most recent command run in the current `bash` process using the following command:
echo $?
It is also possible to use `$?` as a regular `bash` variable, EG it can be compared in logical conditions.
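For example (a minimal sketch; the directory name is arbitrary and assumed not to exist):

```bash
ls /nonexistent-dir
if [ $? -ne 0 ]; then
    echo "The previous command failed"
fi
```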
## Use stdout from one command as a command-line argument in another using `$()` notation

The stdout from one command can be used as a command-line argument in another command using `$()` notation, as shown in the following examples:
$ echo $(ls -p)
gui_testing_data/ gui_test.py package.json package-lock.json README.md requirements.txt src/
$ wc -l $(ls -p | grep -v "/")
39 gui_test.py
24 package.json
9671 package-lock.json
9 README.md
0 requirements.txt
9743 total
The next example automatically finds the name of the serial device to use with `minicom`:
$ minicom --device $(ls -d /dev/serial/by-id/*) --baudrate 115200
(Note that the `-p` flag in `ls -p` is used "to append / indicator to directories", so that the output can be piped into `grep -v "/"`, which removes all directories from the list; the `-d` flag is used along with the `*` wildcard to print the full path to the serial device, instead of the relative path within `/dev/serial/by-id/`.)
## Serial communication using `minicom`

To install `minicom`:
sudo apt-get update
sudo apt install minicom
To use `minicom` with a device whose name is `$DEVICE_NAME` in the `/dev/` folder and with a baud-rate of `$BAUD_RATE`:
minicom --device /dev/$DEVICE_NAME --baudrate $BAUD_RATE
## Change users using `su`

To change to the root user, use the command `sudo su`. This can alleviate some permission problems that are not solved even by using the `sudo` command. To return to the previous user, either use the command `su <username>`, or just use the command `exit`, EG:
$ tail -n1 /etc/iproute2/rt_tables
103 vlan3
$ sudo echo "105 vlan5" >> /etc/iproute2/rt_tables
bash: /etc/iproute2/rt_tables: Permission denied
$ sudo su
root# echo "105 vlan5" >> /etc/iproute2/rt_tables
root# exit
exit
$ tail -n1 /etc/iproute2/rt_tables
105 vlan5
## Finding access permissions using `stat`

Use the `stat` command to find the status of a file, including its access permissions, EG:
$ stat /etc/iproute2/rt_tables
File: /etc/iproute2/rt_tables
Size: 87 Blocks: 0 IO Block: 512 regular file
Device: 2h/2d Inode: 1125899908643251 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2019-05-21 15:43:05.544609577 +0100
Modify: 2018-01-29 16:08:52.000000000 +0000
Change: 2020-02-06 15:48:13.093754700 +0000
Birth: -
For the permissions (next to `Access:`):

- As described in Unix file types on Wikipedia, the first character describes the file type
- As described in the `chmod` man page:
  - The next three characters (characters 2-4) describe read/write/execute permissions for the user who owns the file
  - The next three characters (characters 5-7) describe read/write/execute permissions for other users in the file's group
  - The next three characters (characters 8-10) describe read/write/execute permissions for other users NOT in the file's group

Therefore, `-rw-r--r--` says that this is a regular file, which is readable and writeable for the user who owns the file, and readable for everyone else.
## Changing access permissions using `chmod`

Use `chmod` ("change mode") to change the access permissions of a file or folder. As described in the `chmod` man page, the access permissions can be specified using letters (as described above in "Finding access permissions using `stat`") or in octal (an octal example is given at the end of this section).
Alternatively, `chmod` can be used in symbolic mode, EG:

- `chmod u+x file` to make a file executable by the user/owner
- `chmod a+r file` to allow read permission to everyone
- `chmod a-x file` to deny execute permission to everyone
- `chmod go+rw file` to make a file readable and writable by the group and others
The examples above are taken from the `chmod` man page.

To make a file executable for all users, use the command `chmod +x /path/to/file`.
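In octal mode, each digit encodes the read/write/execute bits (read = 4, write = 2, execute = 1) for the owner, group and others respectively. For example, the following two commands are equivalent ways of setting `-rw-r--r--` permissions:

```bash
chmod 644 file          # octal: user = rw (4+2), group = r (4), others = r (4)
chmod u=rw,go=r file    # symbolic equivalent
```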
## Change ownership of a file using `chown`

Change the ownership of a file or directory using `chown`. If changing ownership of a directory, use the `-R` flag to also recursively change ownership of all subdirectories within that directory (source). Example:
$ sudo chown username:groupname filename
$ sudo chown -R username:groupname dirname
$ sudo chown -R jake:jake dirname
## Recursively find word counts of all files with a particular file ending

The following command can be used to recursively find line counts of all files with a particular file ending (in this case `.py` for Python), excluding all files in the `venv` directory (or more specifically, any files containing the substring `venv` in their path). This is achieved by using a `$` character in the regular expression to match a line-ending, and using `\` to escape the `.` character. The sum of the line counts for all matching files is displayed at the bottom:
find | grep "\.py$" | grep -v venv | xargs wc -l
TODO: turn this into a slightly more sophisticated Python script that accepts command-line arguments specifying which filename ending to look for, and which specifically ignores directories containing the excluded words, rather than filenames as well.
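In the meantime, the command can be parameterised with a small shell function (a sketch; the function name and arguments are my own, not a standard tool):

```bash
# Count lines of all files with a given ending, excluding paths containing a substring
count_lines() {
    local ending="$1"
    local exclude="$2"
    find | grep "\\${ending}\$" | grep -v "${exclude}" | xargs wc -l
}

count_lines .py venv    # equivalent to the command above
```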
## View all of the most recent bash commands using `history`

The `history` command prints out all of the previously recorded bash commands (source). To view the most recent bash commands, the output from `history` can be piped into `tail`. For example, to print the 20 most recent bash commands:
history | tail -n20
To search for a specific command, the output from `history` can be piped into `grep`, EG:
$ history | grep realpath
493 realpath ~
505 history | grep realpath
## View the full path to a file using `realpath`

To view the full path to a file, use the `realpath` command, EG:
$ realpath ~
/home/jol
## Fixing `$'\r': command not found` error when running a bash script in WSL using `dos2unix`

As described here, this error is caused by the carriage returns used in DOS-style line endings. The problem can be solved as follows:
sudo apt-get update
sudo apt-get install dos2unix
dos2unix name_of_shell_script.sh
./name_of_shell_script.sh
## Extract (unzip) a `.tar.gz` file using `tar -xvzf`

A `.tar.gz` file can be unzipped easily in `bash` on Linux or in WSL.

To extract a file or directory (source):
tar -xvzf compressed_file_name.tar.gz
To extract into a particular directory:
tar -xvzf compressed_file_name.tar.gz -C output_dir_name
Description of flags:

- `x`: tar can collect files or extract them; `x` does the latter
- `v`: makes tar talk a lot; verbose output shows you all the files being extracted
- `z`: tells tar to decompress the archive using gzip
- `f`: this must be the last flag of the command, and the tar file must come immediately after it; it tells tar the name and path of the compressed file
- `C`: change to the directory specified by the following argument (NB this directory must already exist; if it doesn't, first create it using `mkdir`)
## Compress (zip) a file or directory using `tar -czvf`

A `.tar.gz` file can be created easily in `bash` on Linux or in WSL.

To zip up a file (source):
tar -czvf name-of-archive.tar.gz /path/to/directory-or-file
Here's what those switches actually mean:

- `c`: create an archive
- `z`: compress the archive with gzip
- `v`: display progress in the terminal while creating the archive, also known as "verbose" mode; the `v` is always optional in these commands, but it's helpful
- `f`: allows you to specify the filename of the archive
## Viewing available memory and swap files using `free`

The `free` command can be used to view available RAM, RAM usage, and available/used memory in swap files. More information about how to create a swap file can be found in this tutorial. The `-h` flag can be used with the `free` command to produce a more human-readable output:
$ free -h
total used free shared buff/cache available
Mem: 15G 8.8G 6.8G 17M 223M 6.9G
Swap: 29G 56M 29G
## View running processes using `ps aux`

`ps` and `top` are two commands which can be used to view running processes, their CPU usage, process ID, etc. They differ mainly in that "`top` is mostly used interactively", while "`ps` is designed for non-interactive use (scripts, extracting some information with shell pipelines etc.)", as described in this Stack Overflow answer (see here for more differences).

One thing to notice in `top` is that some processes are suffixed by `d` to denote that they are daemon processes (as described here), and some processes are prefixed by `k` to denote that they are kernel threads (as described here).
When using `ps`, the following flags are useful, as described here:

- `a`: show processes for all users
- `u`: display the process's user/owner
- `x`: also show processes not attached to a terminal
It is often useful to pipe the output from `ps` into `grep` to narrow down the list of processes to those of interest, for example:
ps aux | grep -i cron
## Useful `grep` commands

`grep` stands for Global(ly search for a) Regular Expression (and) Print (the results). It is especially useful for filtering the outputs of other command-line tools or files. Here are some useful features of `grep` (TODO: make this into a separate Gist?):

- The `-v` ("invert") flag can be used to print only the lines which don't contain the specified string (this is the opposite of the normal behaviour of `grep`, which prints out lines which do contain the specified string). This can be useful when piping together `grep` commands, to include some search queries and exclude others, EG in the command `sudo find / | grep tensorrt | grep -v cpp`
  - Hint: put the inverted expression before the non-inverted expression to get the results of the non-inverted expression highlighted in the bash terminal output, if this feature is available and preferred
- The `-i` flag can be used for case-insensitive pattern-matching, IE `grep -i foo` will match `foo`, `FOO`, `fOo`, etc.
- `grep` can be used to search for strings within a file, using the syntax `grep <pattern> <file>` (source)
- The outputs from `grep` can be used as the input to a program which doesn't usually accept inputs from `stdin` using the `xargs` command, EG `find | grep svn | xargs rm -rfv` will recursively delete all files and folders in the current directory that contain the string `svn` (good riddance!) (the `-v` flag will also cause `rm` to be verbose about every file and folder which it deletes)
- ...
## Useful `gcc` flags (including profiling with `gprof`)

| Flag | Meaning |
| --- | --- |
| `-H` | "Print the full path of include files in a format which shows which header includes which" (note that the header file paths are printed to `stderr`) (source) |
| `-M` | "Output a rule suitable for make describing the dependencies of the main source file. The preprocessor outputs one make rule containing the object file name for that source file, a colon, and the names of all the included files" (the dependencies include both the header files and source files) (source 1, source 2) |
| `-MM` | "Like -M but do not mention header files that are found in system header directories" (source) |
| `-fsanitize=address -fsanitize=undefined -fsanitize=float-divide-by-zero -fno-sanitize-recover` | "Enable AddressSanitizer, a fast memory error detector", and other useful Program Instrumentation Options (source 1) (source 2). Note that it is necessary "to add -fsanitize=address to compiler flags (both CFLAGS and CXXFLAGS) and linker flags (LDFLAGS)" (source) |
| `-pg` | From the man page of `gcc`: "Generate extra code to write profile information suitable for the analysis program gprof. You must use this option when compiling the source files you want data about, and you must also use it when linking." After compiling and linking using the `-pg` flag, execute the program, EG `./name_of_exe`, which should produce a file called `gmon.out`, and then use `gprof` to generate formatted profiling information as follows: `gprof name_of_exe gmon.out > analysis.txt` (source) |
| `-Xlinker -Map=output.map` | Use these flags while linking to generate a map file called `output.map`, describing the data and instruction memory usage in the executable (source) |
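As a concrete example of the `-pg` workflow described in the table above (a minimal sketch, assuming a hypothetical source file `main.c`):

```bash
gcc -pg main.c -o main              # compile AND link with -pg
./main                              # running the program writes gmon.out
gprof main gmon.out > analysis.txt  # generate formatted profiling information
```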
## Counting the number of lines in a file using `wc`

The program `wc` (which is a mandatory UNIX command, and stands for "word count") can be used to count the number of words, lines, characters, or bytes in a file. To count the number of lines in a file, use the `-l` flag, for example with the file `/etc/dhcp/dhclient.conf`:
wc -l /etc/dhcp/dhclient.conf
`wc` can also accept a list of files as separate arguments (separated by spaces).

As described on the Wikipedia page for `wc`, the `-l` flag prints the line count, the `-c` flag prints the byte count, the `-m` flag prints the character count, the `-L` flag prints the length of the longest line (GNU extension), and the `-w` flag prints the word count. Example:
$ wc -l /etc/dhcp/dhclient.conf
54 /etc/dhcp/dhclient.conf
$ wc -w /etc/dhcp/dhclient.conf
207 /etc/dhcp/dhclient.conf
$ wc -c /etc/dhcp/dhclient.conf
1735 /etc/dhcp/dhclient.conf
$ wc -m /etc/dhcp/dhclient.conf
1735 /etc/dhcp/dhclient.conf
The `wc -l` command is useful for counting the number of lines in a file before printing the first or last N lines of the file using the `head` or `tail` commands (see below), where N ≤ the number of lines in the file.
## Viewing the first/last `n` lines of a file using `head`/`tail`

To view the first `n` lines of a text file, use the `head` command with the `-n` flag, EG:
$ head -n5 /etc/dhcp/dhclient.conf
# Configuration file for /sbin/dhclient.
#
# This is a sample configuration file for dhclient. See dhclient.conf's
# man page for more information about the syntax of this file
# and a more comprehensive list of the parameters understood by
Similarly, use the `tail` command to view the last `n` lines of a text file.
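For example, to view the last 5 lines of the same file:

```bash
tail -n5 /etc/dhcp/dhclient.conf
```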
## Changing the bash prompt

The bash prompt can be changed by simply setting a new value for the `PS1` variable; here is an example using WSL:
PS C:\Users\Jake\Documents> bash
jake@Jakes-laptop:/mnt/c/Users/Jake/Documents$ PS1="$ "
$ echo test
test
$ date
Fri Apr 24 18:17:17 BST 2020
$
In order to change the prompt back to its previous value, store the value in a different variable before changing it:
PS C:\Users\Jake\Documents> bash
jake@Jakes-laptop:/mnt/c/Users/Jake/Documents$ DEFAULT=$PS1
jake@Jakes-laptop:/mnt/c/Users/Jake/Documents$ PS1="$ "
$ date
Fri Apr 24 18:25:49 BST 2020
$ PS1=$DEFAULT
jake@Jakes-laptop:/mnt/c/Users/Jake/Documents$
## `apt-get update` vs `apt-get upgrade`

Regarding the difference between these commonly used commands, as described in this Stack Overflow answer:

- `apt-get update` downloads the package lists from the repositories and "updates" them to get information on the newest versions of packages and their dependencies, for all repositories and PPAs (it doesn't actually install new versions of software)
- `apt-get upgrade` will fetch new versions of packages existing on the machine if APT knows about these new versions by way of `apt-get update`
- `apt-get dist-upgrade` will do the same job which is done by `apt-get upgrade`, plus it will also intelligently handle the dependencies, so it might remove obsolete packages or add new ones

You can combine commands with `&&` as follows:
sudo apt-get update && sudo apt-get dist-upgrade
As described in this Stack Overflow answer, as to why you would ever want to use `apt-get upgrade` instead of `apt-get dist-upgrade`:

> Using upgrade keeps to the rule: under no circumstances are currently installed packages removed, or packages not already installed retrieved and installed. If that's important to you, use `apt-get upgrade`. If you want things to "just work", you probably want `apt-get dist-upgrade` to ensure dependencies are resolved.
In summary, `apt-get upgrade` is likely to be safer if it works, but if not, `apt-get dist-upgrade` is more likely to work.
## Checking the version of an installed `apt` package using `apt list`

To view the version of an installed package which is available through `apt` (Advanced Package Tool), use the command `apt list <package-name>` for a concise description, or `apt show <package-name>` for a more verbose output. (A similar command, `apt policy <package-name>`, is also available, although currently I'm not sure what the difference between `apt show` and `apt policy` is.)
To view a list of all installed packages, use the command
apt list --installed
This list can be very large, so it might be sensible to redirect the output into a text file. To do this and then display the first 100 lines of the text file:
apt list --installed > aptlistinstalled.txt && head -n100 aptlistinstalled.txt
To achieve the same thing but without saving to a text file:
apt list --installed | head -n100
To list all installed packages which contain the string "`cuda`":
apt list --installed | grep cuda
## Clear the console window using `clear`

The console window can be cleared using the command `clear`.
## Iterating through files which match a file pattern

It is possible to iterate through files which match a file pattern by using a `for`/`in`/`do`/`done` loop, using the `*` syntax as a wildcard character for string comparisons, and using the `$` syntax to access the loop-variable (source). For example, the following loop will print out all the files whose names start with `cnn_mnist_`:
for FILE in cnn_mnist_*; do echo $FILE; done
## Recursively `git add`-ing files (including files hidden by `.gitignore`)

To recursively add all files in the current directory and all its subdirectories, use the following command (the `-f` flag instructs `git` to add files even if they are included in `.gitignore`, which is useful EG for committing specific images):
git add ** -f
## `git`-moving files in a loop

The example above about "iterating through files which match a file pattern" can be modified to `git`-move all the files that start with `cnn_mnist_` into a subfolder called `cnn_mnist`. The `-n` flag tells `git` to do a "dry run" (showing what will happen/checking the validity of the command without actually executing it); remove the `-n` flag to actually perform the `git mv` command:
for FILE in cnn_mnist_*; do git mv -n $FILE cnn_mnist/$FILE; done
The following will do the above, but removing the `cnn_mnist_` prefix from the start of each filename using a bash parameter expansion:
for FILE in cnn_mnist_*; do NEW_FILE=${FILE//cnn_mnist_/}; git mv -n $FILE cnn_mnist/$NEW_FILE; done
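As a quick illustration of the `${FILE//pattern/replacement}` parameter expansion used above (with an empty replacement, so every match is deleted):

```bash
FILE=cnn_mnist_model.py
echo ${FILE//cnn_mnist_/}    # prints "model.py"
```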
## Iteratively and recursively `git`-moving files one directory up

Following the examples above, to recursively `git`-move all files and folders in the current directory up by one directory (the `-n` flag is included here again to perform a dry run; remove the `-n` flag to perform an actual `git`-move):
for FILE in ./*; do git mv -n $FILE ../$FILE; done
Note that `git` will recursively move the contents of any subdirectories by default.
## Search for files anywhere using `find`

To search for a file `file_to_search_for` in the directory `path/to/search`, use the `find` command, EG:
sudo find path/to/search -name file_to_search_for
Note that the `find` command will automatically search recursively through subdirectories; `sudo` must be used to allow access to restricted directories. Patterns can be used, EG to search for any filename ending or file extension, but it may be necessary to put the `-name` argument in single-quotes, to prevent a wildcard expansion from being applied before the program is called, as described in this Stack Overflow answer:
sudo find path/to/search -name 'file_to_search_for*'
Similarly, to check for Python scripts or shared object files:
sudo find path/to/search -name '*.py'
sudo find path/to/search -name '*.so'
To search the entire filesystem, replace `path/to/search` with `/`; this can be useful to check if a library is installed anywhere on the system, and to return the location of that library, in case it is not on the system path (if it is on the system path, it can be found with `which`).
Note that an alternative to using the `-name` flag is to pipe the output from `find` into `grep`, EG:
sudo find / | grep nvcc
Unlike using `-name`, `grep` will match the search query anywhere in the filename or directory path (instead of matching an exact filename), without further modifications.
To only return paths to files from `find`, and not include paths to directories, use the `-type f` option, EG:
sudo find / -type f | grep nvcc | grep -v docker | wc -l
If no args are passed to `find`, it will recursively search through the current directory and print out the names of all files and subdirectories, EG `find | grep svn`.
## Connect to a WiFi network from the command line using `nmcli`

As described in Part 3 of this Stack Overflow answer, a WiFi network can easily be connected to from the command line using the `nmcli` command:
nmcli device wifi connect ESSID_NAME password ESSID_PASSWORD
To simply view a list of available WiFi networks:
nmcli device wifi
To view a list of all available internet connections (ethernet, wifi, etc):
nmcli device
Note that when running `nmcli` commands, `device`, `dev`, and `d` are all synonymous, and can be used interchangeably.
## View the hostname and IP address using `hostname`

To view the hostname, use the following command:
hostname
An alternative command is:
echo $HOSTNAME
To view the IP address, use the following command (see this Stack Overflow answer for details):
hostname -I
## Viewing the properties of a file using `file`

The `file` command can be used to view the properties of a file, EG whether a shared library is 32-bit or 64-bit, and which platform it was compiled for:
$ file lib.c
lib.c: ASCII text, with CRLF line terminators
$ file lib.dll
lib.dll: PE32 executable (DLL) (console) Intel 80386, for MS Windows
$ file lib64.dll
lib64.dll: PE32+ executable (DLL) (console) x86-64, for MS Windows
## Viewing and editing the system path

To view the system path (directories whose executables can be run from any other directory, without needing to specify the full path to the executable):
echo $PATH
This will print every directory on the system path, separated by colons. To print each directory on a new line, there are multiple options; one option is to use a global (`g`) regular-expression substitution (`s`) with the Unix program `sed` (short for Stream EDitor) as follows, where `:` is the regular expression to be matched, and `\n` is what it is replaced with:
echo $PATH | sed 's/:/\n/g'
Another option is to use a shell parameter expansion:
echo -e "${PATH//:/'\n'}"
To add a new directory to the path (source):
PATH=$PATH:~/new/dir
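Note that this assignment only lasts for the current shell session. To make the change persist in new shells (an assumption about the usual setup, in which `~/.bashrc` is run by each new interactive shell), append the assignment to `~/.bashrc`:

```bash
echo 'export PATH=$PATH:~/new/dir' >> ~/.bashrc
source ~/.bashrc    # apply the change to the current shell as well
```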
## Viewing the Linux distribution details using `lsb_release`

The command `lsb_release` is used to view details about the current Linux distribution under the Linux Standard Base (LSB), and optionally any LSB modules that the system supports. Using this command with the flags `lsb_release -irc` will show the distributor ID of the Linux distribution which is running, the release number of the distribution, and the code name of the distribution, EG:
$ lsb_release -irc
Distributor ID: Ubuntu
Release: 18.04
Codename: bionic
## WSL

WSL is the Windows Subsystem for Linux, which "allows Linux binaries to run in Windows unmodified", by adding a compatibility layer which presumably allows Windows to interpret Linux binary Executable Formats and Application Binary Interfaces.

To open a Windows path in WSL, open a Windows command prompt (PowerShell or CMD) in that location, and run `bash` (with no arguments).
## Connecting to a serial device using WSL

To connect to a serial device using WSL (see above), the COM port for the serial device must be found in Windows Device Manager. Say the device is connected to COM3; it can then be connected to from WSL with a baud rate of 115200 using the following command (source 1, source 2):
sudo chmod 666 /dev/ttyS3 && stty -F /dev/ttyS3 115200 && sudo screen /dev/ttyS3 115200
## View filesize using `ls -l`

The command `ls` will list files and subdirectories in the directory that is specified as an argument (with no argument, the current directory is used by default). The `-l` flag specifies a long-list format, which gives extra data such as permissions, file-size in bytes, time of last edit, and more. The option `--block-size MB` can be used with the `-l` flag to specify file-sizes in megabytes. A single filename can also be given as the main argument to `ls`, in which case only the details for the specified file will be listed. In summary, the syntax for viewing the size of a file in megabytes is:
ls -l --block-size MB path/to/file
## Reboot/restart machine using `reboot`

A machine can be rebooted from the terminal using `reboot`:
sudo reboot
## Shutdown machine

A machine can be shut down from the terminal using `shutdown`:
sudo shutdown now
This is useful, for example, for a Coral Dev Board: as stated at the bottom of the getting started guide, the power cable should not be removed from the Dev Board while the device is still on, because this risks corrupting the system image if any write-operations are in progress. The Dev Board can be safely shut down by calling `sudo shutdown now` in the terminal; when the red LED on the Dev Board turns off, the power cable can be unplugged.
## Add user to group

To add a user to a group (which may be necessary for obtaining permissions to complete other tasks), use `usermod`:
sudo usermod -aG groupname username
## Check if user is part of a group

To see the groups of which a user is a member, use the `id` command:
id -nG username
To see if the user is a member of a particular group, pipe the output from the `id` command into `grep`, followed by the name of the relevant group; if the user is a member of this group, then a line of text from the output of `id` containing the name of that group will be printed, otherwise nothing will be printed. NB this can be used as an `if` condition, EG (source):
if id -nG "$USER" | grep -qw "$GROUP"; then echo $USER belongs to $GROUP; fi
NB the `q` and `w` flags are used to make `grep` quiet, and to only match whole words.
## View directory contents in a single column

To view directory contents in a single column (as opposed to the default table view of `ls`), use the `-1` flag (as in the numeral one, not a letter L or I):
ls -1
## Storing `git` credentials

As stated in this StackOverflow answer to the question entitled "Visual Studio Code always asking for git credentials", a simple but non-ideal solution to the problem is to use the following command:
git config --global credential.helper store
Note that this method is unsafe, because the credentials are stored in plain text in the file `~/.git-credentials`, and these credentials can become compromised if the system is hacked. Another solution, as stated in this answer, is to use the `git` credential helper to store the credentials in memory with a timeout (the default is 15 minutes), EG:
git config --global credential.helper 'cache --timeout=3600'
# Set the cache to timeout after 1 hour (setting is in seconds)
Yet another solution, as stated in this answer to a post on Reddit, is to use "Git Credential Manager Core (GCM Core)", as described in these instructions.

This StackOverflow answer provides instructions for how to unset the `git` credentials, using the following command:
git config --global --unset credential.helper
Note that the command `rm ~/.git-credentials` should also be used after the above command in order to delete the saved credentials.
This answer also states that:

> You may also need to do `git config --system --unset credential.helper` if this has been set in the system configuration file (for example, Git for Windows 2).
## Automatically providing password to `sudo`

As stated in this StackOverflow answer, `sudo` can be used with the `-S` switch, which causes `sudo` to read the password from `stdin`:
echo <password> | sudo -S <command>
## Sort `$PATH` and remove duplicates

The following Python commands can be used on Linux to sort `$PATH` into alphabetical order, remove duplicates, and print the result to `stdout`:
import os

# Split $PATH into a list of directories
path_list = os.getenv("PATH").split(":")
# Remove any trailing slash, so that duplicates differing only by a final "/" are merged
no_final_slash = lambda s: s[:-1] if s.endswith("/") else s
# Normalise each entry to an absolute path, and remove duplicates using a set
unique_path_set = set(no_final_slash(os.path.abspath(p)) for p in path_list)
# Sort the unique entries case-insensitively
sorted_unique_path_list = sorted(unique_path_set, key=lambda s: s.lower())
print("*** Separated by newlines ***")
print("\n".join(sorted_unique_path_list))
print("*** Separated by colons ***")
print(":".join(sorted_unique_path_list))
## Download VSCode

To download and install VSCode (EG on Ubuntu, where `snap` is available):

sudo apt update
sudo apt upgrade
sudo snap install --classic code # or code-insiders
## Get the absolute path to the current `bash` script and its directory using `$BASH_SOURCE`

Use the variable `$BASH_SOURCE` to get the path to the current `bash` script. Use this with `realpath` and `dirname` to get the absolute path of the script, and of its parent directory. For example:
X1=$BASH_SOURCE
X2=$(realpath $BASH_SOURCE)
X3=$(dirname $(realpath $BASH_SOURCE))
echo $X1
echo $X2
echo $X3
## `ssh`

To open a terminal session on a remote Linux device on a local network, use the following command on the host device:
ssh username@hostname
After using this command, `ssh` should ask for the password for the specified user on the remote device.
If `stdout` is not being flushed over `ssh`, this problem can be fixed by passing the `-t` flag to `ssh`, EG `ssh -t username@hostname` (source).
### Passwordless `ssh` terminals and commands

To configure `ssh` to not request a password when connecting, use the following commands on the local device, replacing `$(UNIQUE_ID)` with a string which is unique to `username@hostname` (the password for `ssh-keygen` can be left blank, whereas the correct password for `username@hostname` needs to be entered when running `ssh-copy-id`):
ssh-keygen -f ~/.ssh/id_rsa_$(UNIQUE_ID)
ssh-copy-id -i ~/.ssh/id_rsa_$(UNIQUE_ID) username@hostname
Now `username@hostname` can be connected to over `ssh` without needing to enter a password, using the command `ssh -i ~/.ssh/id_rsa_$(UNIQUE_ID) username@hostname`. To automate this further, so that the path to the SSH key doesn't need to be entered when using `ssh`, edit `~/.ssh/config` using the following command:
nano ~/.ssh/config
Enter the following configuration, replacing `$(SHORT_NAME_FOR_REMOTE_USER)` with a short name which is unique to `username@hostname`:
Host $(SHORT_NAME_FOR_REMOTE_USER)
User username
Hostname hostname
IdentityFile ~/.ssh/id_rsa_$(UNIQUE_ID)
Save and exit `nano`. `username@hostname` can now be connected to over `ssh` using the following command, without being asked for a password (source):
ssh $(SHORT_NAME_FOR_REMOTE_USER)
This should also allow `rsync` to run without requesting a password, again by replacing `username@hostname` with `$(SHORT_NAME_FOR_REMOTE_USER)`.
If the above steps don't work and `ssh` still asks for a password, the following tips may be useful:
- Make sure that the `~` and `~/.ssh` directories and the `~/.ssh/authorized_keys` file on the remote machine have the correct permissions (source 1) (source 2) (source 3):
  - `~` should not be writable by others. Check with `stat ~` and fix with `chmod go-w ~`
  - `~/.ssh` should have `700` permissions. Check with `stat ~/.ssh` and fix with `chmod 700 ~/.ssh`
  - `~/.ssh/authorized_keys` should have `644` permissions. Check with `stat ~/.ssh/authorized_keys` and fix with `chmod 644 ~/.ssh/authorized_keys`
- If the permissions were wrong and have been changed and passwordless `ssh` still doesn't work, consider restarting the `ssh` service with `service ssh restart` (source)
- Make sure that the line `PubkeyAuthentication yes` is present in `/etc/ssh/sshd_config` on the remote device, and not commented out with a `#` (as in `#PubkeyAuthentication yes`) (source)
- Call `ssh-copy-id` with the `-f` flag on the local device
- Consider checking the permissions of the `id_rsa` files on the local machine (source 1) (source 2)
### Running commands over `ssh`

To run individual commands on a remote device over `ssh` without opening up an interactive terminal, use the following syntax (the quotation marks can be omitted if there are no space characters between them):
ssh username@hostname "command_name arg1 arg2 arg3"
It may be found that commands in `~/.bashrc` on the remote device are not run when using the above syntax to run single commands over `ssh`, which might be a problem EG if `~/.bashrc` adds certain directories to `$PATH` which are needed by the commands being run over `ssh`. This might be because the following lines are present at the start of `~/.bashrc` on the remote device:
# ~/.bashrc: executed by bash(1) for non-login shells.
# see /usr/share/doc/bash/examples/startup-files (in the package bash-doc)
# for examples
# If not running interactively, don't do anything
case $- in
*i*) ;;
*) return;;
esac
These lines cause `~/.bashrc` to exit if it's not being run interactively, which is the case when running single commands over `ssh`. To solve this problem, either put whichever commands need to be run non-interactively in `~/.bashrc` before the line `case $- in`, or comment out the lines from `case $- in` to `esac` (inclusive) on the remote device (source).
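For example, a remote `~/.bashrc` arranged so that `$PATH` is extended even for non-interactive `ssh` commands might start as follows (a sketch; the extra `PATH` entry is an arbitrary example):

```bash
# ~/.bashrc on the remote device

# Commands needed by non-interactive ssh commands go BEFORE the interactivity check
export PATH="$PATH:$HOME/bin"

# If not running interactively, don't do anything beyond this point
case $- in
    *i*) ;;
    *) return;;
esac
```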
### X11 forwarding

From WSL on a Windows PC, it is possible to display graphical user interfaces which are running on a remote Linux device using X11 forwarding. To do so:

- Install Xming on the Windows machine from here
- Make sure Xming is running on the Windows machine (there should be an icon for Xming in the icon tray in the Windows taskbar when Xming is running)
- Use the `-X` flag when connecting over `ssh`, EG `ssh -X username@hostname`
- Test that X11 forwarding is running successfully by entering the command `xclock` in the `ssh` terminal, which should cause a clock face to appear on the Windows machine
- If this doesn't work, it may be necessary to use the command `export DISPLAY=localhost:0.0` in WSL, and/or to add this command to the bottom of `~/.bashrc` (EG using the command `echo "export DISPLAY=localhost:0.0" >> ~/.bashrc`) and restart the WSL terminal
- If an error message is displayed from the remote machine saying `connect localhost port 6000: Connection refused`, then make sure that Xming is running on the local machine
### `ssh` proxy jumping

- Sometimes it is desirable to connect to `username@hostname` over `ssh`, but to do so it is necessary to first connect to `username_proxy@hostname_proxy` over `ssh`, and from `username_proxy@hostname_proxy` connect to `username@hostname` over `ssh`
- This can be automated by adding entries into `~/.ssh/config` (see section "Passwordless `ssh` terminals and commands" above) for `username@hostname` and `username_proxy@hostname_proxy` with aliases `shortname` and `shortname_proxy`, and under the configuration for `shortname`, adding the line `ProxyJump shortname_proxy` (following the indentation of the lines above)
- Now, when using the command `ssh shortname`, `ssh` will automatically connect to `shortname_proxy` first, and from `shortname_proxy` connect to `shortname` over `ssh`
- Note that if using `ssh-keygen` and `ssh-copy-id` to log into `username@hostname` without a password (described above), then an entry for `username@hostname` should first be added to `~/.ssh/config` on the local machine (including the `ProxyJump` entry described above), then `ssh-keygen` and `ssh-copy-id` should be used on the local machine (not from `username_proxy@hostname_proxy`) to enable passwordless access to `username@hostname` directly from the local machine
### Installing an `ssh` server

Install an `ssh` server using the following command (source):
sudo apt install openssh-server
Activate the `ssh` server (source):
sudo service ssh start
## Synchronise remote files and directories with `rsync`

To synchronise a local directory with a remote directory, use the following command:
rsync -Chavz /path/to/local/dir username@hostname:~/path/to/remote
Description of flags:

| Flag | Meaning |
| --- | --- |
| `-C` | Automatically ignore common temporary files, version control files, etc |
| `-h` | Use human-readable file sizes (EG `65.49K bytes` instead of `65,422 bytes`) |
| `-a` | Sync recursively and preserve symbolic links, special and device files, modification times, groups, owners, and permissions |
| `-v` | Verbose output is printed to `stdout` |
| `-z` | Compress files (EG text files) to reduce network transfer |
- To configure `rsync` to not request a password when synchronising directories, follow the instructions in the previous section "Passwordless `ssh` terminals and commands"
- `rsync` can be used with the `--delete` option to delete extra files in the remote directory that are not present in the local directory (source)
- To ignore certain files (EG hidden files, `.pyc` files), use the `--exclude=$PATTERN` flag
  - Multiple `--exclude` flags can be included in the same command, EG `rsync -Chavz . hostname:~/target_dir --exclude=".*" --exclude="*.pyc"`
- To copy the contents of the current directory on the local machine to a subdirectory of the home directory called `target_dir` on the remote machine, use the command `rsync -Chavz . hostname:~/target_dir` (note there is no `/` character after `target_dir`)
- To copy the contents of a subdirectory of the home directory on the remote machine called `target_dir` to the current directory on the local machine, use the command `rsync -Chavz hostname:~/target_dir/ .` (note that there is a `/` character after `target_dir`)
## Create an `alias`

Use `alias` to create an alias, EG `alias gcc-7=gcc`. This means that every time `bash` tries to use the command `gcc-7`, it will instead replace `gcc-7` with `gcc` (but the rest of the command will remain unchanged). This might be useful EG if a shell script assumes that `gcc-7` is installed and keeps trying to call this version specifically with the command `gcc-7`, but instead a later version of `gcc` is installed that works equally well. Instead of installing an earlier version of `gcc`, using the command `alias gcc-7=gcc` will mean that every call to `gcc-7` is replaced with an equivalent call to `gcc`. This can be placed in `~/.bashrc` (short for `bash` Run Commands, which is run every time `bash` starts up) using the command below, and then either restarting the console or running `source ~/.bashrc`:
echo "alias gcc-7=gcc" >> ~/.bashrc
## Create a symbolic link using `ln -s`

Use `ln` with the `-s` flag to create a symbolic link. This could be useful EG in the scenario described above in the context of `alias`, if `alias` is not working because the commands are not being run in `bash` (this might be the case in a `makefile` which uses `sh` instead of `bash`, see here). Instead of using `alias gcc-7=gcc`, an alternative is to use the command below, which creates a symbolic link in `/usr/bin/` from `gcc-7` to `gcc`; this is more likely to be portable between different shells (not just `bash`):
sudo ln -s /usr/bin/gcc /usr/bin/gcc-7
## Find CPU details (including model name) using `lscpu`

Example:
$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 126
Model name: Intel(R) Core(TM) i7-1065G7 CPU @ 1.30GHz
Stepping: 5
CPU MHz: 1498.000
CPU max MHz: 1498.0000
BogoMIPS: 2996.00
Virtualization: VT-x
Hypervisor vendor: Windows Subsystem for Linux
Virtualization type: container
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave osxsave avx f16c rdrand lahf_lm abm 3dnowprefetch fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt intel_pt avx512cd sha_ni avx512bw avx512vl avx512vbmi umip pku avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid ibrs ibpb stibp ssbd