@vishnu-saini
Last active August 9, 2018 10:50
  1. To check the available disk space

    df -h
    

    -h prints sizes in human-readable units (K, M, G)

  2. to print the current working directory

    pwd
    
  3. to list directory contents with their permissions

    ls -l
    

    to list all entries, including hidden files

    	ls -a
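A quick sketch in a throwaway directory (the file names are illustrative) showing what the two flags reveal:

```shell
# Set up a scratch directory with one file, one subdirectory, and one hidden file
tmp=$(mktemp -d)
touch "$tmp/file.txt" "$tmp/.hidden"
mkdir "$tmp/subdir"

ls -l "$tmp"   # long format: the leading character is the type (d = directory),
               # followed by rwx permission bits for owner, group, and others
ls -a "$tmp"   # shows .hidden as well as the . and .. entries
```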
    
  4. SCP commands

	Copy the file "foobar.txt" from a remote host to the local host
	
	$ scp your_username@remotehost.edu:foobar.txt /some/local/directory
	
	Copy the file "foobar.txt" from the local host to a remote host
	
	$ scp foobar.txt your_username@remotehost.edu:/some/remote/directory
	
	Copy the directory "foo" from the local host to a remote host's directory "bar"
	
	$ scp -r foo your_username@remotehost.edu:/some/remote/directory/bar
	
	Copy the file "foobar.txt" from remote host "rh1.edu" to remote host "rh2.edu"
	
	$ scp your_username@rh1.edu:/some/remote/directory/foobar.txt \
	your_username@rh2.edu:/some/remote/directory/
	
	Copying the files "foo.txt" and "bar.txt" from the local host to your home directory on the remote host
	
	$ scp foo.txt bar.txt your_username@remotehost.edu:~
	
	Copy the file "foobar.txt" from the local host to a remote host using port 2264
	
	$ scp -P 2264 foobar.txt your_username@remotehost.edu:/some/remote/directory
	
	Copy multiple files from the remote host to your current directory on the local host
	
	$ scp your_username@remotehost.edu:/some/remote/directory/\{a,b,c\} .
	
	$ scp your_username@remotehost.edu:~/\{foo.txt,bar.txt\} .
  5. checking folder size

	du -sh ./*
	- ./* means every non-hidden entry in the current directory
	- -s summarizes: print a single total per argument instead of every subdirectory
	- -h for human readable
	
for only hidden files and folders
	du -hs .[!.]*
for all (hidden + non-hidden files and folders)
	du -hs .[!.]* *
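A minimal sketch in a scratch directory (the directory names are made up) showing how the three glob patterns split the output:

```shell
# One visible and one hidden subdirectory
tmp=$(mktemp -d)
mkdir "$tmp/data" "$tmp/.cache"
cd "$tmp"

du -sh ./*          # one total: only the non-hidden entry (./data)
du -sh .[!.]*       # one total: only the hidden entry (.cache)
du -sh .[!.]* *     # two totals: hidden and non-hidden together
```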
  6. change directory color

	LS_COLORS="di=1;33"
	- di selects directories; 1;33 is bold yellow. Export the variable (export LS_COLORS="di=1;33") so that ls picks it up.
  7. zipping a folder

	zip -r web.zip web
web == directory to zip
web.zip == zip file name
-r == recurse into the directory
  8. The groups command shows which groups the current user belongs to

	groups
looking up the groups user luser belongs to
	groups luser
  9. To list running processes with their process IDs
	ps -eaf
  10. Listing sockets and ports with netstat

	netstat -a
Lists all sockets, both listening and established, for TCP and UDP
Listing TCP connections
	netstat -at
Listing UDP connections
	netstat -au
Listing all listening sockets
	netstat -l
Listing all TCP listening ports
	netstat -lt
Listing all UDP listening ports
	netstat -lu
Listing all UNIX domain listening sockets
	netstat -lx
Listing TCP/UDP listening ports numerically, with the owning process
	netstat -tunlp
  11. To add a cron job on Linux

    crontab -e
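crontab -e opens the current user's crontab in an editor; each entry has five time fields (minute, hour, day of month, month, day of week) followed by the command. A hypothetical entry (the script path is illustrative, not from the source):

```
# m   h   dom mon dow   command
  30  2   *   *   *     /home/user/backup.sh >/dev/null 2>&1
```

The redirection at the end keeps cron from mailing the command's output to the user.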

  12. grep, which stands for "global regular expression print," processes text line by line and prints any lines which match a specified pattern.

    grep --color -n -i "search string" targetfile

    - --color highlights the matched string
    - -n shows line numbers
    - -i performs a case-insensitive match
    - -r tells grep to search recursively through a folder; the file name can be replaced with * to search all files
    - targetfile is the name of the file being searched
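A runnable sketch of those options (the file name and contents are made up for illustration):

```shell
# Create a small file to search
tmp=$(mktemp -d)
printf 'first line\nSearch String here\nlast line\n' > "$tmp/targetfile"

# -n prints line numbers, -i ignores case; --color highlights matches on a terminal
grep --color -n -i "search string" "$tmp/targetfile"
# 2:Search String here

# -r searches every file under a directory recursively
grep -r -i "search string" "$tmp"
```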

  13. Find file in directory

    find /search/directory/ -name "matching file search criteria" -action
    find /dir/to/search -name "pattern" -print
    find /dir/to/search -name "file-to-search" [-action]
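A runnable sketch (the directory and file names are illustrative):

```shell
tmp=$(mktemp -d)
touch "$tmp/notes.txt" "$tmp/image.png"

# -name matches against the base name; -print is the default action
find "$tmp" -name "*.txt" -print

# An explicit action: delete everything matching the pattern
find "$tmp" -name "*.png" -delete
```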

  14. file/folder listing with size in MB

    ls -l --block-size=M

  15. To run a node app forever, install forever after changing into the node project directory

    $ [sudo] npm install forever -g

    If you are using forever programmatically you should install forever-monitor.

    $ [sudo] npm install forever-monitor

    Usage example

    forever start app.js

  16. Installing node and npm

    Download and set up the APT repository and add the PGP key to the system's APT keychain:

    $ curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
    $ sudo apt-get install -y nodejs
    $ node -v
    $ sudo npm install npm --global

  17. change node version

    $ nvm install 0.10.25
    $ nvm use 0.10.25
    $ nvm alias default 0.10.25

  18. To list the current rules that are configured for iptables.

    sudo iptables -L

  19. installing mongo on ubuntu 16.04

    Step 1 - Add the MongoDB repository key.
    sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927

    Step 2 - Add the MongoDB repository details so apt will know where to download the packages from.
    echo "deb http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.2 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.2.list

    Step 3 - Update the packages list.
    sudo apt-get update

    Step 4 - Install MongoDB.
    sudo apt-get install -y mongodb-org

    Step 5 - Start MongoDB with systemctl.
    sudo systemctl start mongod

    Step 6 - Check that the service has started properly.
    sudo systemctl status mongod

    Step 7 - Enable automatically starting MongoDB when the system starts.
    sudo systemctl enable mongod

    The MongoDB server is now configured and running, and you can manage the MongoDB service using the systemctl command (e.g. sudo systemctl stop mongod, sudo systemctl start mongod).

  20. setting up swap in linux

    By default there is no swap set up on my VPS; it is needed especially on a system with limited memory. I am setting up a 4GB swap, which is the most common swap size used for a VPS.

    dd if=/dev/zero of=/mnt/myswap.swap bs=1M count=4000
    mkswap /mnt/myswap.swap
    swapon /mnt/myswap.swap

    Now let's add it to fstab so it will activate at boot.

    nano /etc/fstab

    Add the following line at the end of the file.

    /mnt/myswap.swap none swap sw 0 0

    Ctrl+O to save, and Ctrl+X to exit the nano editor.

    Now your swap is set up; you can modify the size in the future if you need more or less.

Example output of groups luser:

	luser : test luser adm cdrom sudo dip plugdev lpadmin sambashare

Related permission commands:

	chmod - modify file access rights
	su - temporarily become the superuser
	chown - change file ownership
	chgrp - change a file's group owner

Others

2>/dev/null

2 = Error Output ...

> = ...is redirected...

/dev/null = ...to device NULL (no Output)
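A quick demonstration (the path is deliberately nonexistent): ls writes its error message to fd 2, so redirecting fd 2 discards the message while the exit status still reports the failure.

```shell
# The "No such file or directory" message goes to /dev/null instead of the terminal
ls /no/such/path 2>/dev/null
echo "exit status: $?"   # non-zero: the command still failed, only the message was hidden
```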

nohup only writes to nohup.out if the output is otherwise to the terminal. If you redirect the output of the command somewhere else - including /dev/null - that's where it goes instead.

 nohup command >/dev/null 2>&1   # doesn't create nohup.out

If you're using nohup, that probably means you want to run the command in the background by putting another & on the end of the whole thing:

 nohup command >/dev/null 2>&1 & # runs in background, still doesn't create nohup.out

On Linux, running a job with nohup automatically closes its input as well. On other systems, notably BSD and OS X, that is not the case, so when running in the background, you might want to close its input manually. While closing input has no effect on the creation or not of nohup.out, it avoids another problem: if a background process tries to read anything from standard input, it will pause, waiting for you to bring it back to the foreground and type something. So the extra-safe version looks like this:

nohup command </dev/null >/dev/null 2>&1 & # completely detached from terminal 

Note, however, that this does not prevent the command from accessing the terminal directly, nor does it remove it from your shell's process group. If you want to do the latter, you can do so by running disown with no argument as the next command, at which point the process is no longer associated with a shell "job" and will not have any signals (not just HUP) forwarded to it from the shell.

Explanation: In Unixy systems, every source of input or target of output has a number associated with it called a "file descriptor", or "fd" for short. Every running program ("process") has its own set of these, and when a new process starts up it has three of them already open: "standard input", which is fd 0, is open for the process to read from, while "standard output" (fd 1) and "standard error" (fd 2) are open for it to write to.

If you just run a command in a terminal window, then by default, anything you type goes to its standard input, while both its standard output and standard error get sent to that window. But you can ask the shell to change where any or all of those file descriptors point before launching the command; that's what the redirection (<, <<, >, >>) and pipe (|) operators do.

The pipe is the simplest of these: command1 | command2 arranges for the standard output of command1 to feed directly into the standard input of command2. This is a very handy arrangement that has led to a particular design pattern in UNIX tools (and explains the existence of standard error, which allows a program to send messages to the user even though its output is going into the next program in the pipeline). But you can only pipe standard output to standard input; you can't send any other file descriptors to a pipe without some juggling.

The redirection operators are friendlier in that they let you specify which file descriptor to redirect. So 0<infile reads standard input from the file named infile, while 2>>logfile appends standard error to the end of the file named logfile. If you don't specify a number, then input redirection defaults to fd 0 (< is the same as 0<), while output redirection defaults to fd 1 (> is the same as 1>).

Also, you can combine file descriptors together: 2>&1 means "send standard error wherever standard output is going". That means that you get a single stream of output that includes both standard out and standard error intermixed with no way to separate them anymore, but it also means that you can include standard error in a pipe.

So the sequence >/dev/null 2>&1 means "send standard output to /dev/null" (which is a special device that just throws away whatever you write to it) "and then send standard error to wherever standard output is going" (which we just made sure was /dev/null). Basically, "throw away whatever this command writes to either file descriptor". When nohup detects that neither its standard error nor output is attached to a terminal, it doesn't bother to create nohup.out, but assumes that the output is already redirected where the user wants it to go.

The /dev/null device works for input, too; if you run a command with </dev/null, then any attempt by that command to read from standard input will instantly encounter end-of-file. Note that the merge syntax won't have the same effect here; it only works to point a file descriptor to another one that's open in the same direction (input or output). The shell will let you do >/dev/null <&1, but that winds up creating a process with an input file descriptor open on an output stream, so instead of just hitting end-of-file, any read attempt will trigger a fatal "invalid file descriptor" error.
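The behaviour described above is easy to verify in a scratch directory (a sketch; the commands only rely on coreutils):

```shell
tmp=$(mktemp -d)
cd "$tmp"

# stdout and stderr merged, then discarded: nothing reaches the terminal
{ echo out; echo err >&2; } >/dev/null 2>&1

# </dev/null makes any read hit end-of-file immediately
cat </dev/null    # prints nothing

# nohup with both streams redirected does not create nohup.out
nohup true >/dev/null 2>&1
ls nohup.out 2>/dev/null || echo "no nohup.out"
```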
