9.1 Linux Security Model

Overview and Objectives

Linux security is not a single configuration step or a checklist you run once. It is a way of thinking about every change you make to a system. Over three decades of managing Unix and Linux systems, I’ve watched it evolve from simple password files to sophisticated multi-layered defenses — but the underlying ideas have barely changed: least privilege, defense in depth, fail-secure design.

The model works in layers. The kernel enforces access control at the base. On top of that sit user and process isolation, file permissions, network controls, and application-level restrictions. None of these layers is impenetrable on its own. What matters is that an attacker who breaks through one still has several more to deal with.

Learning Objectives

By the end of this section, you will be able to:

  • Explain multi-user security principles and how Linux implements privilege separation between different users and processes
  • Identify security boundaries in the Linux system including user isolation, process confinement, and file system protection mechanisms
  • Recognize common attack vectors that target Linux systems and understand the vulnerabilities that make these attacks possible
  • Apply security thinking to everyday administrative tasks by considering potential security implications of configuration changes
  • Describe the relationship between different security mechanisms and how they work together to protect system integrity

Multi-user Security and Privilege Separation

Linux inherits its multi-user security model from Unix, built on the assumption that multiple users share system resources while remaining isolated from each other. That design shapes everything from file permissions to process management, and it scales from a single workstation to cloud infrastructure serving thousands of users without the underlying model changing much.

Every process runs with the identity of a specific user. The kernel checks that identity before allowing access to files, network resources, or system services — transparently, so applications don’t need their own access control logic.

When you log in as alice, the system creates a shell process with Alice’s user ID. Every program she launches inherits that identity. Alice’s processes cannot read Bob’s files or interfere with his programs even on the same physical machine — the kernel enforces that boundary automatically.

A process's user and group ownership determine what it can access. Run ps aux and you'll see the owner of each running process; the kernel uses that ownership data to make access control decisions continuously.
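The kernel's bookkeeping is easy to inspect from any shell; a short sketch (no special privileges assumed):

```shell
# Owner, numeric UID, PID, and command of the current shell.
# $$ expands to this shell's own PID.
ps -o user,uid,pid,comm -p $$

# Children inherit that identity automatically.
sleep 30 &
ps -o user,uid,comm -p $!
kill $!
```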

Privilege separation takes this further by running different parts of the same system under different identities. A web server binds to port 80 as root, then drops to an unprivileged user like www-data for the worker processes that actually handle requests. If an attacker compromises the web application, they get the limited privileges of that worker — not root.
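The drop-privileges pattern shows up directly in server configuration. An illustrative fragment (nginx shown as a common example; www-data is the Debian-family account name, and the path is the usual default):

```nginx
# /etc/nginx/nginx.conf (excerpt)
# The master process starts as root so it can bind port 80;
# worker processes that handle requests run as this unprivileged user:
user www-data;
```

Checking ps aux for such a server will show one root-owned master process and several workers owned by the unprivileged account.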

Least privilege follows naturally from this model: each user and process should have only what it needs. When something asks for elevated permissions, that’s worth questioning. sudo handles the controlled case — it temporarily grants root access in a way that’s authorized and logged, rather than just giving users permanent root.
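In sudoers syntax, that controlled grant might look like this (a hypothetical rule; the username alice and the exact command are placeholders):

```text
# /etc/sudoers.d/alice (always edit with visudo to catch syntax errors)
# alice may run exactly one command as root; every invocation is logged.
alice ALL=(root) /usr/bin/systemctl restart nginx
```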

User namespaces are a more recent extension of the same idea. Docker and other container tools use them to give processes an isolated view of the system while still sharing a single kernel. Once you understand traditional user separation, container isolation is a fairly short conceptual step.
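You can see this without any container runtime, assuming the kernel permits unprivileged user namespaces (the default on most current distributions):

```shell
# Outside the namespace: an ordinary unprivileged UID.
id -u

# The same command inside a fresh user namespace, with the current
# user mapped to UID 0: "root" in its own isolated view of the system,
# still unprivileged as far as the host kernel is concerned.
unshare --user --map-root-user id -u
```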

Security Boundaries: Users, Processes, Files

The kernel enforces boundaries between users, processes, files, and network services — usually invisibly, until something tries to cross them.

The most fundamental division is between users. The kernel tracks identity through numerical UIDs and GIDs; the usernames you type are just labels the system maps to those numbers. Each user account is effectively its own security domain.
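The mapping between labels and numbers is easy to inspect (standard tools only):

```shell
# The label you type...
id -un

# ...and the number the kernel actually checks:
id -u

# The record that connects them, as name:x:UID:GID:...
getent passwd "$(id -un)"
```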

Filesystem access control is discretionary: file owners decide who can read, write, or execute their files. The traditional Unix model — owner, group, others — is deliberately simple. That simplicity is part of what makes it reliable.
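A minimal demonstration of discretionary control, using a throwaway file (any unprivileged user):

```shell
# As the file's owner, we decide who may read or write it.
tmp=$(mktemp)

# Owner read/write, group read, others nothing.
chmod 640 "$tmp"

# stat reports the octal mode and owner the kernel will enforce.
stat -c '%a %U' "$tmp"

rm -f "$tmp"
```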

Processes run in isolated memory spaces. One process cannot read another's memory unless both explicitly agree to share it, for example through a shared memory segment; other IPC mechanisms such as pipes and signals pass information between processes without ever exposing their memory. The kernel enforces this isolation with hardware support from the memory management unit.

Network services are separated by port. A web server on port 80 and a database on port 3306 operate independently even on the same machine — the network stack delivers packets only to the intended listener.
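Per-port separation is visible in the kernel's own socket table. A sketch that needs no extra tooling (field 4 of /proc/net/tcp is the socket state; 0A means LISTEN):

```shell
# Each line of /proc/net/tcp is one IPv4 TCP socket; the local
# address is hex ip:port. Print the port of every listener.
while read -r _ local _ state _; do
    [ "$state" = "0A" ] || continue
    printf 'listening on port %d\n' "0x${local##*:}"
done < /proc/net/tcp
```

In day-to-day work you would normally reach for ss -tlnp instead, which also shows which process owns each listener.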

Containers use namespaces, cgroups, and capabilities to create stronger isolation between processes while sharing a kernel. From inside a container, a process sees its own filesystem, network stack, and process table. It’s convincing, but the kernel is still the same one running everything else.
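The namespaces a process belongs to are visible in /proc; two processes in the same namespace show the same inode number:

```shell
# One symlink per namespace type (pid, net, mnt, user, ...).
ls -l /proc/self/ns/

# A single namespace, shown as type:[inode].
readlink /proc/self/ns/pid
```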

Virtual machines go further by running separate kernels entirely. A security breach in one VM cannot directly touch others. The tradeoff is resource overhead — VMs are heavier than containers.

The root boundary deserves its own mention. Root bypasses most normal access controls, which makes it a significant privilege boundary to respect. sudo gives you a controlled way to cross it temporarily, with authorization checks and an audit trail.

All of this only works if you configure and maintain it. Misconfigured permissions create unintended paths between security domains. Audits catch those gaps — preferably before attackers do.

Common Linux Attack Vectors and Vulnerabilities

Most successful attacks I’ve seen over decades of managing production systems didn’t exploit clever zero-days. They exploited misconfigurations that had been sitting there for months.

Privilege escalation is what happens after an attacker gets a foothold. They start with a compromised regular account and look for a way up to root: a vulnerable SUID binary, a permissive sudo rule, a kernel exploit, a writable file in a system directory. Defense means keeping systems updated, minimizing SUID programs, reviewing sudo configs carefully, and watching for unusual privilege usage.
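Auditing the SUID attack surface is a concrete defensive habit; a sketch over the usual system directories:

```shell
# Setuid-root binaries run with root's privileges no matter who
# invokes them, so every entry here should be recognized and necessary.
find /usr/bin /usr/sbin -maxdepth 1 -perm -4000 -type f 2>/dev/null
```

Comparing the output against a known-good baseline turns this into a cheap periodic check; an unexpected entry is worth investigating.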

Service vulnerabilities are a major entry point. Web servers, SSH, databases — all expose functionality that attackers probe. Once a CVE drops, exploit code often follows within days. Buffer overflows, SQL injection, remote code execution: all can provide direct system access if you’re running unpatched software.

Configuration errors are the most common attack surface. Default passwords, overly permissive permissions, unnecessary services left running, weak SSH configurations. These aren’t sophisticated attacks — they’re automated scans probing thousands of systems for known misconfigurations. Hardening guides, security audits, and configuration management tools help you maintain consistent settings.
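A few lines of SSH hardening illustrate the point; an excerpt, not a complete configuration (the directive names are standard sshd_config options):

```text
# /etc/ssh/sshd_config (excerpt)
PermitRootLogin no            # no direct root logins over SSH
PasswordAuthentication no     # key-based authentication only
MaxAuthTries 3                # slow down brute-force attempts
```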

Social engineering targets people rather than software. Impersonating a colleague to get a password reset, tricking a user into running something malicious. Technical controls can only go so far here; user education and strong authentication policies do more.

Supply chain attacks have grown significantly as software ecosystems have gotten more complex. Compromising a legitimate package, update, or dependency is attractive because administrators install it themselves. DNF’s cryptographic signature verification helps, but it doesn’t protect against a compromised upstream source or a hijacked maintainer account.
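On Fedora and RHEL systems, the relevant knobs live in dnf's configuration (an excerpt; both options are standard dnf settings):

```ini
# /etc/dnf/dnf.conf (excerpt)
[main]
gpgcheck=1            # verify repository package signatures
localpkg_gpgcheck=1   # also verify packages installed from local files
```

As noted above, this verifies that packages are signed by a trusted key; it cannot tell you that the upstream source itself was never compromised.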

Insider threats come from people with legitimate access — malicious insiders, or well-meaning admins who make dangerous mistakes. Audit logging and privilege separation limit the damage. Access control policies and monitoring limit the opportunity.

Physical access bypasses a lot of software controls. Someone who can boot from removable media or attach a hardware keylogger has options that firewalls can’t stop. Full-disk encryption, server room locks, and physical security policies address this, though they’re not always feasible everywhere.

Denial of service is about availability rather than access. Resource exhaustion or application-level crashes make services unreachable for legitimate users. It’s less glamorous than other attacks but can be just as disruptive.

Unpatched software and basic misconfigurations cause far more incidents than advanced persistent threats. That’s where most security effort should go.

Common pitfalls

A few patterns come up repeatedly in students learning security for the first time.

Security through obscurity doesn’t work. Moving SSH to a non-standard port or using an unusual admin username might deter bored script kiddies for a few minutes. It won’t stop anyone with real intent or tooling. Real protection is proper configuration, regular updates, and strong authentication.

Focusing only on external threats leaves internal gaps wide open: an elaborate firewall setup paired with a sudo configuration that lets any user escalate to root. The multi-user model only holds if it's actually configured and maintained.

Single-point security is fragile. If your entire strategy depends on one firewall or one authentication system, that’s one failure away from total compromise. Overlapping controls mean no single failure is catastrophic.

Installing security tools you don’t understand creates false confidence. A firewall with poorly understood rule evaluation has gaps. An IDS generating false alarms that nobody reads is no better than no IDS. Understand what you’re deploying.

Security configurations drift. An excellent initial setup degrades over time as systems change, software updates, and new services get added without proper review. Treat it as ongoing work, not a one-time task.

Technical controls can’t compensate for process and people failures. Weak passwords, skipped change management, untested incident response — none of that is fixable with better firewall rules. Security requires attention at all three levels.

Real-world context

Every sysadmin eventually deals with a security incident. The goal isn’t to build perfectly impenetrable systems — that’s not achievable. It’s to build systems that remain functional and contained when something goes wrong. A well-designed security model limits damage; it doesn’t just prevent entry.

Modern DevOps has pushed security responsibility onto everyone on the team, not just a dedicated security function. Container deployments, cloud resource management, infrastructure automation — each decision has security implications. The concepts in this section inform how you approach all of it, from user account design to service configuration choices.

Further Reading

  1. Linux Security Cookbook by Daniel J. Barrett, Richard Silverman, and Robert Byrnes — task-oriented reference; goes wide rather than deep, but useful when you need a quick answer on a specific hardening topic
  2. The Practice of Network Security Monitoring by Richard Bejtlich (2013, No Starch Press) — less about Linux specifically, more about how to think about detection and response; worth reading once you have the defensive configurations in place and want to understand what you’re looking for
  3. NIST Cybersecurity Framework — the vocabulary that enterprise security conversations are conducted in; useful for understanding compliance requirements and communicating with security teams
  4. Red Hat Security Guide — the most directly applicable reference for Fedora and RHEL systems; covers the same ground as this module with full option documentation
  5. SANS Linux Security Reading Room — practitioner-written white papers; quality varies, but the better ones go into more technical depth than most books

Assessment

Multiple Choice Questions

Question 1: What is the primary role of the Linux kernel in the multi-user security model?

  • a) Enforcing access control decisions based on user and process identity
  • b) Managing user passwords and authentication
  • c) Creating user accounts and home directories
  • d) Monitoring network traffic for security threats

Question 2: In Linux’s privilege separation model, why do web servers typically run worker processes as unprivileged users?

  • a) To improve performance by reducing system overhead
  • b) To limit the impact of security breaches by restricting process privileges
  • c) To comply with licensing requirements for web server software
  • d) To enable multiple web servers to run on the same system

Question 3: Which of the following represents the strongest security boundary in Linux systems?

  • a) Virtual machine boundaries with separate kernels
  • b) Process isolation within the same user account
  • c) Network service separation on different ports
  • d) File permissions between different users

Question 4: What makes privilege escalation attacks particularly dangerous in Linux environments?

  • a) They bypass all security controls automatically
  • b) They permanently damage the operating system
  • c) They allow attackers to gain higher privileges than initially compromised
  • d) They can only be detected by specialized security software

Question 5: Which attack vector is most commonly exploited due to administrator oversight rather than sophisticated techniques?

  • a) Zero-day kernel exploits
  • b) Advanced persistent threats
  • c) Hardware-level attacks
  • d) Configuration errors and weak permissions

Question 6: What is the fundamental principle behind the “least privilege” security concept?

  • a) Users should have no privileges by default
  • b) Each user and process should have only the minimum permissions necessary
  • c) All users should share the same basic set of privileges
  • d) Privileges should be assigned based on user seniority

Question 7: How do user namespaces extend the traditional Linux multi-user security model?

  • a) They eliminate the need for file permissions
  • b) They automatically detect security breaches
  • c) They create isolated views of the system for processes
  • d) They encrypt all user data by default

Question 8: What distinguishes discretionary access control from other access control models?

  • a) It requires administrator approval for all file access
  • b) It automatically grants access based on user location
  • c) It encrypts files based on user privileges
  • d) It allows file owners to decide who can access their files

Short Answer Questions

Question 9: Explain how the concept of “defense in depth” applies to Linux security and provide two specific examples of how different security layers work together to protect a system.

Question 10: Describe the relationship between user IDs (UIDs), process ownership, and file access control in the Linux security model. How does this relationship prevent unauthorized access between different users?

Question 11: Why are supply chain attacks becoming increasingly common in Linux environments, and what role do package managers like DNF play in mitigating these risks? Include at least two specific countermeasures in your answer.

Updated 2026-03-10