The Command Line

A command line interface, often abbreviated as CLI, gives you a direct connection to the operating system of a computer. Via the CLI, you can get detailed insight into, and full control over, the operating system’s features and functions.

In the early days of computing history, the command line interface was the only way to interact with a computer. Later, graphical user interfaces (GUIs) were developed with the goal of making this interaction easier. Sadly, this came at the expense of speed and flexibility. Many underlying capabilities were either hidden behind a multitude of dialogue windows and preference panes, or left out altogether. Nowadays, using the CLI effectively has become something of a lost art.

For security practitioners and system administrators, this skill is essential. Many cybersecurity tools, for example Metasploit, Nmap, and Snort, require proficiency with the command line just to use them. When you are working via a remote connection during a penetration test, the command line will probably be your only way to interact with the target machine.

For this reason, we will begin building a robust foundation by introducing the command line and its parts; later on, we will go into detail on how you can apply your new skills in your daily cybersecurity work.

Clarifying the Term “Command Line”

Throughout this course, I will use the term “command line” in a generic way, referring to any of the various implementations installed on an operating system.

To make use of the command line effectively, you will need both an understanding of the features and behaviours of the available commands, and the knowledge of how to compose them into executable scripts.

In this course, I will introduce you to a cornucopia of commands available on Linux and Windows systems. Most of the commands originate from the Unix/Linux ecosystem, but in many cases, there are ways to run them on Windows platforms as well.

The bash Shell

Command line interfaces have been around for a long time. In the early days, there was no way to interact with a computer other than via the CLI. This was true not only for the historical mainframes and minicomputers, but also for the early microcomputers, also known as “home computers”. The IBM PC, for example, shipped with MS-DOS, whose A:\> or C:\> command line prompt is well known to anyone old enough to have used such a machine.

On the Unix and Linux operating systems, the command line interface is called a “shell”. Over the decades, shells underwent an evolution that spawned multiple generations of shell programs, each with improved and extended functionality. The two most popular shells nowadays are bash and zsh. Since we can expect bash to be installed on any Linux (and even macOS) workstation as well as target systems, we will use bash throughout this course.

Windows operating systems come with their own command prompt. Since this CLI differs greatly from the common Linux shells, we will use Git Bash instead. This software package allows us to use the same shell commands on Windows that we would use on Linux.

Visualizing shell commands

In these course materials, you will find many examples for shell interactions. A single command will be laid out like this:

ls -l

If there is also output displayed with a command, it will look like this:

$ git status

On branch main
Your branch is up to date with 'origin/main'.

The $ sign is not part of the command; it is the so-called “command prompt” that a shell prints on the screen when it expects the user to enter a command. I include the $ sign in front of a command, and a blank line below it, to clearly distinguish the command from its output.

Unless stated otherwise, examples of commands on a Windows machine are run within the Windows Subsystem for Linux.

Running bash on Linux

To start a shell session on Linux, all you need to do is launch your distribution’s “terminal” application. A terminal is basically a device that provides a user with access to a command line. Back in the day, terminals first came in the form of a teletype, and later as a combination of a screen and a keyboard. Nowadays, terminals are simply software applications. There are many terminal emulators available on Linux, under names such as “Konsole”, “rxvt”, “Ptyxis” or “Ghostty”. For ease of use, most Linux distributions name their default emulator simply “Terminal”.

Running bash on Windows

While the bash shell and the commands we will use in this course are installed by default on all Linux distributions in use today, this is not true for Windows operating systems. Fortunately, there are multiple options for running a shell and shell commands on Windows:

  • Git Bash,
  • Cygwin, and
  • the Windows Subsystem for Linux (WSL).

Git Bash

Git has become the most widely used version control software among software engineers. To make it available not only to Linux users (who can simply install the git package for their respective distribution) but also to Windows users, the Git Bash software package includes a Windows port of the git command as well as a port of the bash shell.

Windows users can download Git Bash from the Git website. To start a shell session, right-click on the Windows desktop or in a folder, and select “Git Bash here”.

Cygwin

Cygwin is developed with the goal of providing a full-featured Linux-like environment on Windows, including the ability to install a variety of additional packages. From a Cygwin shell, it’s possible to execute both native Windows commands and standard Linux commands.

Cygwin can be downloaded from the project website.

Windows Subsystem for Linux

With the Windows Subsystem for Linux, WSL for short, Microsoft added a native method to run Linux (and therefore bash) to Windows 10 and above. To install WSL, follow the steps outlined in the WSL tutorial. I recommend installing the current version of Ubuntu Linux within WSL. Later, you can additionally install a more specialised Linux distribution such as Kali Linux. But nothing I will cover in this course requires such a specialised distribution.

After installing the Windows Subsystem for Linux, you will be able to run Linux commands and bash scripts right from the Windows Command Prompt and PowerShell.

For example, to run the Linux pwd command, you enter the following:

C:\Users\geewiz\Desktop> bash -c "pwd"

/mnt/c/Users/geewiz/Desktop

If you have more than one Linux distribution installed for WSL, you can replace the bash command with the name of the distribution as follows:

C:\Users\geewiz\Desktop> ubuntu -c "pwd"

/mnt/c/Users/geewiz/Desktop

An even better way to manage multiple Linux distributions in WSL is Windows Terminal. This application allows you to install and open them in multiple tabs or window panes, and to quickly switch between them and other command lines such as PowerShell.

Command Line Basics

Commands, arguments, built-ins, and keywords

When working with the command line, it’s important to understand the different components that make up shell commands:

  • Commands: Executable programs that perform specific tasks (e.g., ls, grep, find)
    • Types of commands
      • System commands: Located in system directories
      • User-created commands: Custom scripts and programs
      • Aliases: User-defined command shortcuts
    • Command location
      • PATH environment variable and how it works
      • Using which and whereis to find commands
      • Absolute vs. relative paths for commands
    • Common system commands
      • File operations: ls, cp, mv, rm
      • Text processing: grep, sed, awk
      • System information: ps, top, df
  • Arguments: Additional information passed to commands to modify their behavior
    • Argument types
      • Positional arguments: Order-dependent parameters
      • Named arguments: Key-value pairs (in some commands)
    • Argument handling
      • How commands parse and interpret arguments
      • Handling spaces and special characters in arguments
      • Using quotes (single vs. double) with arguments
    • Common argument patterns
      • Source and destination patterns (e.g., cp source dest)
      • Filter patterns (e.g., grep pattern file)
      • Multiple file arguments (e.g., cat file1 file2)
  • Options/Flags: Special arguments that start with a dash (e.g., -l, --help)
    • Option formats
      • Short options: Single-letter with single dash (-a)
      • Long options: Full words with double dash (--all)
      • Combined short options (-la vs. -l -a)
    • Option behaviors
      • Boolean options (presence/absence changes behavior)
      • Options with required values (--max-depth=2)
      • Options with optional values
    • Standard option conventions
      • Common options across commands (-h/--help, -v/--verbose)
      • GNU vs. BSD option styles
      • Command-specific option patterns
  • Built-ins: Commands that are part of the shell itself, not separate programs (e.g., cd, echo)
    • Purpose of built-ins
      • Access to shell’s internal state
      • Performance advantages over external commands
      • Operations that external commands cannot perform
    • Common built-in commands
      • Directory navigation: cd, pwd
      • Environment control: export, set, unset
      • Job control: jobs, fg, bg
      • Shell flow control: source, exit
    • Identifying built-ins
      • Using type command to determine command type
      • Differences in behavior from external commands
      • Built-in vs. external command versions (e.g., echo)
  • Keywords: Special words that have specific meaning in shell syntax (e.g., if, for, while)
    • Control flow keywords
      • Conditionals: if, then, else, fi, case
      • Loops: for, while, until, do, done
      • Grouping: {...}, (...)
    • Function-related keywords
      • function, return, local
    • Other special keywords
      • time, select, [[...]] (extended test)
      • Reserved words vs. active keywords
    • Keyword vs. command distinction
      • How keywords affect shell parsing
      • Why keywords cannot be redefined
      • Context-sensitive behavior of some keywords

Understanding these components is essential for effective command line usage and shell scripting.
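You can see this distinction in practice with the type built-in, which reports how bash will interpret a given name. Here is a short sketch (the exact path reported for ls depends on your system):

```shell
# "type" reports how bash resolves a name before running it:
type cd      # prints: cd is a shell builtin
type if      # prints: if is a shell keyword
type ls      # reports the path of the external command, e.g. /usr/bin/ls,
             # or an alias if one is defined in your session

# "command -v" is handy in scripts: it prints the path of an
# external command, or the bare name for builtins.
command -v ls
```

Running type on a name before relying on it in a script is a quick way to find out whether you are calling a builtin, an external program, or an alias.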

STDIN, STDOUT, STDERR

Shell commands don’t access input and output devices such as the keyboard and the screen directly. Instead, they connect to so-called channels provided by the shell software. There are three standard channels: STDIN, STDOUT, and STDERR.

  • STDIN (Standard Input)
    • Basic characteristics
      • File descriptor: 0
      • Default source: Keyboard input
      • How programs read from STDIN
    • Common usage patterns
      • Interactive input from users
      • Reading from files via redirection
      • Reading from pipes
    • Commands that rely on STDIN
      • Text processors: grep, sed, awk
      • Interactive commands: read, password prompts
      • Filters: sort, uniq, tr
    • STDIN programming considerations
      • Checking for available input
      • Handling EOF (End-of-File)
      • Buffering behavior
  • STDOUT (Standard Output)
    • Basic characteristics
      • File descriptor: 1
      • Default destination: Terminal screen
      • How programs write to STDOUT
    • Output formatting
      • Text vs. binary output
      • Structured output (CSV, JSON, etc.)
      • Control characters and terminal codes
    • Common STDOUT patterns
      • Data output for user viewing
      • Machine-readable output for processing
      • Status information and results
    • STDOUT programming considerations
      • Buffering modes (line vs. block buffering)
      • Flushing output
      • Handling large volumes of output
  • STDERR (Standard Error)
    • Basic characteristics
      • File descriptor: 2
      • Default destination: Terminal screen
      • Purpose and separation from STDOUT
    • Types of messages sent to STDERR
      • Error messages and warnings
      • Diagnostic information
      • Progress information (in some programs)
    • STDERR conventions
      • Error message formatting
      • Exit codes and their relationship to STDERR
      • Verbosity levels in error reporting
    • STDERR programming considerations
      • When to use STDERR vs. STDOUT
      • Ensuring errors are noticed
      • Preventing error messages from corrupting data streams

STDIN (“Standard Input”) is the channel from which a shell command gets its input. By default, STDIN provides keyboard input.

The channel to which the shell command sends its output, in turn, is called STDOUT (“Standard Output”). By default, the shell will display this output on the screen.

These channels can be redirected by the user. This makes it easy to make a command read from a file instead of the keyboard, for example. Since, in this case, it is the shell that connects the file with STDIN, commands don’t even require a built-in ability to read files. They simply rely on reading from STDIN.

The same goes for the output of a command. If we want to send it to a file, or even a device like a printer, we can do so by redirecting STDOUT.

There’s a little catch with redirecting a command’s output, though: something may go wrong, and the output may contain error messages. To prevent those messages from getting mixed into other output unnoticed, or from ending up in places where we don’t want them, shell commands ideally send them not to STDOUT but to STDERR, the “Standard Error” channel.
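The separation of the two output channels is easy to observe with a small experiment. The sketch below lists one file that exists and one that doesn’t, capturing each channel in its own file (the file names are arbitrary):

```shell
# Create one file, then list it together with a missing one.
touch exists.txt
ls exists.txt missing.txt > out.log 2> err.log

# The listing went to STDOUT, so out.log contains "exists.txt".
cat out.log

# The error message went to STDERR, so err.log contains the
# "No such file or directory" complaint about missing.txt.
cat err.log
```

Because ls keeps the two channels apart, the captured listing in out.log stays clean even though the command partially failed.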

Redirection and Pipes

  • Input Redirection
    • Basic input redirection
      • Using < to read from files: command < input.txt
      • Default behavior and file handling
      • Error handling for missing input files
    • Here-documents
      • Syntax: command << DELIMITER ... DELIMITER
      • Use cases for multi-line input
      • Variable expansion in here-documents
      • Preventing expansion with quoted delimiters
    • Here-strings
      • Syntax: command <<< "string"
      • Differences from here-documents
      • Variable expansion in here-strings
    • Special input sources
      • /dev/null for empty input
      • /dev/random and /dev/urandom
      • Other special device files
  • Output Redirection
    • Basic output redirection
      • Using > to write to files (overwrite): command > output.txt
      • Using >> to append to files: command >> log.txt
      • File creation, permissions, and error handling
    • Error redirection
      • Redirecting STDERR with 2>: command 2> errors.log
      • Combining STDOUT and STDERR: command &> all.log
      • Redirecting STDERR to STDOUT: command 2>&1
      • Redirecting STDOUT to STDERR: command 1>&2
    • Discarding output
      • Redirecting to /dev/null
      • Common patterns: command > /dev/null 2>&1
    • Multiple redirections
      • Redirecting different outputs to different files
      • Order of redirections and their effects
  • Pipes
    • Basic pipe usage
      • Syntax: command1 | command2
      • How data flows through pipes
      • Buffering behavior in pipes
    • Common pipe patterns
      • Filtering: command | grep pattern
      • Sorting: command | sort
      • Counting: command | wc
      • Multiple transformations: command | grep | sort | uniq
    • Pipe limitations and considerations
      • Binary data in pipes
      • Error handling across pipes
      • Performance implications of long pipelines
    • Named pipes (FIFOs)
      • Creating with mkfifo
      • Use cases and limitations
      • Differences from anonymous pipes
  • Advanced Redirection Techniques
    • Using tee for output splitting
      • Basic usage: command | tee file.txt
      • Appending with tee: command | tee -a file.txt
      • Multiple outputs: command | tee file1.txt file2.txt
    • Process substitution
      • Using <(command) as input source
      • Using >(command) as output destination
      • Differences from pipes and when to use each
    • Redirecting specific file descriptors
      • Creating and using custom file descriptors
      • Duplicating file descriptors
      • Closing file descriptors
    • Redirection in scripts
      • Persistent redirections with exec
      • Temporary redirections for specific commands
      • Capturing command output in variables

Running commands in the background

  • Background Execution Basics
    • Starting background processes
      • Using & to run commands in background: command &
      • Process IDs and job numbers
      • Terminal output from background processes
    • Job control commands
      • Listing jobs with jobs
      • Job status indicators (running, stopped, etc.)
      • Referencing jobs by number or status
    • Foreground and background switching
      • Bringing jobs to foreground with fg
      • Sending jobs to background with bg
      • Specifying jobs by number: fg %1
    • Stopping and resuming processes
      • Using Ctrl+Z to suspend processes
      • Resuming stopped processes
  • Process Management
    • Process information
      • Viewing processes with ps
      • Common ps options: ps aux, ps -ef
      • Finding processes by name with pgrep
    • Process termination
      • Sending signals with kill
      • Common signals: TERM, KILL, HUP, INT
      • Terminating by process ID or job number
      • Using pkill and killall for name-based termination
    • Process monitoring
      • Real-time monitoring with top
      • Enhanced monitoring with htop
      • Process resource usage with time
    • Process priorities
      • Understanding nice values
      • Starting processes with modified priority using nice
      • Changing priority of running processes with renice
  • Persistent Background Processes
    • Running processes after logout
      • Using nohup to ignore hangup signals
      • Output handling with nohup
      • Limitations of nohup
    • Detaching processes from the shell
      • Using disown to remove jobs from job table
      • Differences between disown and nohup
      • When to use each approach
    • Terminal multiplexers
      • Using screen for persistent sessions
      • Using tmux for advanced session management
      • Basic multiplexer commands and workflows
    • Scheduled execution
      • One-time execution with at
      • Recurring execution with cron
      • Modern alternatives: systemd timers
  • Background Process Considerations
    • Security implications
      • Privilege considerations for background processes
      • Logging and auditing background activities
    • Resource management
      • CPU and memory usage
      • I/O considerations
      • Using nice, ionice, and cpulimit
    • Error handling
      • Logging errors from background processes
      • Handling failures and crashes
      • Automatic restart mechanisms
    • Best practices
      • When to use background processes
      • Alternatives to background processes
      • Monitoring and maintenance strategies

From command line to script

  • Script Basics
    • Creating executable scripts
      • Shebang line: #!/bin/bash
      • Setting execution permissions: chmod +x script.sh
      • Running scripts: ./script.sh vs. bash script.sh
    • Script structure
      • Comments and documentation
      • Variable declarations
      • Main code section
      • Functions and organization
    • Script parameters
      • Accessing parameters: $1, $2, etc.
      • Special parameters: $0, $#, $@, $*
      • Parameter validation and defaults
    • Exit codes
      • Setting with exit command
      • Checking previous command status with $?
      • Standard exit code conventions
  • Converting Commands to Scripts
    • Single commands to scripts
      • When to convert a command to a script
      • Adding error handling and validation
      • Making commands reusable
    • Command pipelines to scripts
      • Preserving pipeline functionality
      • Breaking complex pipelines into steps
      • Adding intermediate validation
    • Interactive commands to scripts
      • Handling user input with read
      • Providing default values
      • Non-interactive operation options
    • Command substitution
      • Capturing command output: $(command) or backticks
      • Using captured output in scripts
      • Handling multi-line output
  • Script Enhancements
    • Error handling
      • Checking command success with if statements
      • Using set -e for automatic error detection
      • Trapping signals with trap
    • Logging and output
      • Creating log functions
      • Verbosity levels
      • Colorized output for better readability
    • Configuration
      • Using config files
      • Environment variables
      • Command-line options parsing
    • Debugging techniques
      • Using set -x for tracing
      • Debug output and logging
      • Common debugging patterns
  • Script Best Practices
    • Code organization
      • Function-based organization
      • Modular design
      • Sourcing common functions from libraries
    • Security considerations
      • Handling sensitive data
      • Input validation and sanitization
      • Principle of least privilege
    • Performance optimization
      • Reducing external command calls
      • Efficient text processing
      • Parallelization techniques
    • Maintenance and documentation
      • Commenting code effectively
      • Version control integration
      • Testing strategies for shell scripts
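The pieces above come together in a minimal script skeleton. The script name and messages below are purely illustrative; it demonstrates the shebang line, parameter validation, command substitution, STDERR usage, and exit codes:

```shell
# Write the script to a file (a quoted here-document prevents
# the shell from expanding $-variables while writing it):
cat > greet.sh <<'EOF'
#!/bin/bash
set -e                          # abort on the first failing command

usage() {
    echo "Usage: $0 NAME" >&2   # usage errors belong on STDERR
    exit 1
}

# $# is the argument count, $1 the first positional parameter.
[ "$#" -eq 1 ] || usage

name=$1
today=$(date +%F)               # command substitution captures output

echo "Hello, $name! Today is $today."
EOF

# Make it executable, then run it:
chmod +x greet.sh
./greet.sh Alice
```

Called without an argument, the script prints its usage line to STDERR and exits with status 1, which a caller can check via $?.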

Summary

Lab

File System Navigation

Create the following directory structure with as few commands as possible. Then, navigate between different directories using only CLI commands.

File System Navigation Example
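As a hint for this lab: mkdir -p creates nested directories in one step, and brace expansion creates several siblings at once. The directory names below are made up; your lab structure will differ:

```shell
# Create a nested tree in a single command (example names only):
mkdir -p project/{src,docs,tests/unit}

# Navigate with cd: ".." moves one level up, "cd -" returns
# to the previous directory, and pwd shows where you are.
cd project/tests/unit
cd ../..
cd -
pwd
```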

