Operating system basics
A working mental model for what the OS is actually doing while my code runs.

When I say “it’s running on Linux” (or macOS, or Windows), I’m usually using “the operating system” as background texture. It’s just… there.
But when production gets weird, the OS stops being background. It becomes the thing I’m bouncing off of: permissions, resource limits, processes dying, files not being there, ports not opening, clocks doing something surprising.
So I want a broader, more usable model of OS basics - less “textbook sections”, more “how it tends to show up when software is actually running”.
The specific words I want to own here are: operating system, kernel, user space, system call, process, and isolation.
What I mean when I say “the OS”
An operating system is the layer that makes a shared machine (many programs and users active on the same hardware over time) usable without every program having to be a hardware expert or a bully.
It does three big things that show up everywhere:
- It gives me interfaces I can build against (files, directories, sockets, processes).
- It allocates shared resources (CPU time, memory, disk and network I/O) so multiple programs can coexist.
- It enforces boundaries so one program can’t casually ruin everyone else’s day.
The OS is not just “helpful utilities”. It’s also the set of enforcement mechanisms underneath. Without those rules, any process could overwrite another process’s memory, read any file, or consume CPU/memory until the machine becomes unusable for everything else. Isolation and permissions don’t prevent all failures, but they reduce how far a single bug can spread.
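Those enforcement mechanisms are queryable, not just abstract. A small Unix-only sketch using Python's stdlib `resource` module to ask the kernel about one per-process cap, the open-file-descriptor limit:

```python
# Sketch (Unix-only): ask the kernel what this process is allowed to use.
# RLIMIT_NOFILE is the per-process cap on open file descriptors --
# the limit behind "too many open files" errors.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open-file limit: soft={soft}, hard={hard}")

# A process may lower its own limits (up to the hard cap), but raising
# the hard cap requires privileges. This is enforcement, not advice.
```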
The split that matters: kernel and user space
Most of the time, my code runs in user space. That’s where applications live: web servers, CLIs, editors, language runtimes, basically everything I touch daily.
The kernel is different. It’s privileged code that can control hardware and enforce global rules. The kernel is where scheduling decisions happen, where memory protection is enforced, where device drivers live, and where networking and filesystem operations ultimately land.
I like the framing that user space is where we try things, and the kernel is where the machine says yes, no, or not like that.
System calls: how my program touches reality
User-space programs still need to do real work: read configs, open files, listen on ports, write logs, talk to the network, spawn subprocesses, ask what time it is.
That’s what a system call is: a controlled transition from user space into the kernel to request an operation only the OS can do.
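To make that concrete, here’s a sketch using Python’s `os` module on a POSIX-ish system; these functions are thin wrappers around the underlying `open`/`write`/`read`/`close` system calls, so each line is roughly one trip into the kernel:

```python
import os
import tempfile

# os.open / os.write / os.read / os.close map closely onto the
# open(2)/write(2)/read(2)/close(2) system calls: each call is a
# controlled transition from user space into the kernel.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")

fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)  # ask the kernel for a file
os.write(fd, b"hello from user space\n")             # kernel performs the write
os.close(fd)

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 1024)
os.close(fd)
print(data)  # b'hello from user space\n'
```

The `fd` the kernel hands back is just an integer: a ticket the process uses in later syscalls, with the real state living inside the kernel.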
In practice, I rarely “call syscalls”. I call libraries and runtimes, and they talk to the OS. But I still want the syscall model in my head because it makes errors feel less mysterious:
- “permission denied” is the kernel enforcing policy
- “operation not permitted” is the kernel enforcing a boundary
- “too many open files” is the OS saying I’ve exhausted a per-process limit
- a hang is often a process that isn’t making progress because it’s blocked, usually waiting on I/O (disk/network) or waiting for its turn on the CPU
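Those error strings aren’t library inventions; they’re the kernel’s errno codes, translated to text by the C library. A quick sketch (the exact wording can vary slightly by platform; these are the standard Linux strings):

```python
import errno
import os

# os.strerror turns a kernel errno code into its human-readable message.
print(os.strerror(errno.EACCES))  # Permission denied
print(os.strerror(errno.EPERM))   # Operation not permitted
print(os.strerror(errno.EMFILE))  # Too many open files
```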
Processes: what the OS actually manages
When an executable starts, the OS creates a process. This is the thing the OS knows how to schedule, limit, observe, and kill.
A process is a running program plus the OS-managed context around it: an identity (PID), permissions, open files/sockets, and (crucially) an isolated view of memory.
This is one of those places where language can trick me. I’ll say “the app is running” when what I really mean is “there is a process that exists, and it currently seems healthy”.
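A small sketch of that identity in practice: the current process has a PID, and spawning a child creates a distinct process with its own PID and its own exit status:

```python
import os
import subprocess
import sys

# Every running program is a process with an identity the OS tracks.
parent_pid = os.getpid()
print("parent pid:", parent_pid)

# Spawning a child creates a *new* process with its own PID and exit status.
result = subprocess.run(
    [sys.executable, "-c", "import os; print(os.getpid())"],
    capture_output=True,
    text=True,
)
child_pid = int(result.stdout.strip())
print("child pid:", child_pid, "exit code:", result.returncode)
```

“The app is running” really means “a process like this exists and is currently scheduled, limited, and observable”.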
Isolation is why shared machines work at all
If I’m running multiple programs on one machine, the only reason that’s sane is isolation.
Isolation is the set of mechanisms that stop one process from reading/writing another process’s memory or using resources it doesn’t have the right to use. It’s enforced primarily by the kernel (memory protection, privilege separation, access control over kernel-managed resources).
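Memory isolation is easy to demonstrate on a Unix system with `fork`: after the fork, parent and child have separate (copy-on-write) views of memory, so the child’s writes never reach the parent:

```python
import os

# Sketch (Unix-only): after fork(), parent and child each have their own
# view of memory, enforced by the kernel's memory protection.
value = "original"

pid = os.fork()
if pid == 0:                     # child process
    value = "changed in child"   # mutates only the child's copy
    os._exit(0)                  # exit the child immediately
else:                            # parent process
    os.waitpid(pid, 0)           # wait for the child to finish
    print(value)                 # still "original": the child's write never reached us
```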
It’s also why failures look the way they do. A lot of the “spooky” production symptoms are just the OS doing its job:
- crashes: the process accessed memory it shouldn’t (or the runtime did)
- killed processes: the OS decided something exceeded limits or violated a policy
- permission errors: the OS refused the request, full stop
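The “killed processes” case can be reproduced deliberately. A sketch that kills a child the way the OS would (the Linux OOM killer, for example, sends SIGKILL); the negative return code is the OS reporting which signal ended the process:

```python
import signal
import subprocess
import sys

# Start a child that would sleep for a minute, then kill it outright.
proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
proc.send_signal(signal.SIGKILL)  # SIGKILL cannot be caught or ignored
proc.wait()
print("return code:", proc.returncode)  # -9 on Linux: terminated by signal 9
```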
How the pieces fit together
This is the stack in one pass:
- Application code runs in user space.
- To access files, network, time, and processes, it goes through system calls.
- Those calls are implemented and policed by the kernel.
- The OS runs programs as processes and uses isolation to keep them from freely interfering with each other.
If those boundaries are clear, a lot of common “production weirdness” becomes easier to place.
That’s it, may the force be with you!