- Sketch the TCP/IP stack and locate the Linux tools that live at each layer
- Configure and diagnose network interfaces with the ip command
- Use ping, traceroute, and dig to troubleshoot connectivity and name resolution
- Explain how SSH works and apply best practices for key-based authentication
- Transfer files efficiently with scp, rsync, and curl
Linux is the operating system of the internet. Surveys consistently put it on the overwhelming majority of public web servers, every major cloud provider is built on it, and the protocols that define how networks work were prototyped and refined on Unix long before Linux existed. This chapter is a field guide to the network tools you will reach for constantly — both for configuring your own machines and for figuring out why something on the other side of the world is refusing to answer.
The TCP/IP Stack, Briefly
Networking is organised in layers, with each layer providing services to the one above. The mental model most people use is the four-layer TCP/IP stack:
| Layer | Responsibility | Examples |
|---|---|---|
| Application | Actual data exchange | HTTP, SSH, DNS, SMTP |
| Transport | Ordering, reliability, ports | TCP, UDP |
| Internet | Routing between networks | IPv4, IPv6, ICMP |
| Link | Physical/local delivery | Ethernet, Wi-Fi, ARP |
A packet arriving on your Ethernet card travels up the stack, being handled by progressively more abstract protocols, until its payload is handed to a running program. A packet you send travels down the stack in reverse. Each Linux tool in this chapter operates at one of these layers; knowing which helps you pick the right one for a given problem.
IP Addresses, Ports, and Sockets
Every machine on an IP network has at least one IP address. IPv4 addresses are 32-bit numbers written as four decimal octets (192.168.1.42). IPv6 addresses are 128-bit and written as eight colon-separated hex groups (2001:db8::1). IPv6 adoption is slow but steady; most new deployments now support both.
Addresses by themselves are not enough, because a single machine runs many network programs at once. Ports disambiguate: a port is a 16-bit number that identifies a communication endpoint on a host, and a program binds to a port to receive the traffic addressed to it. Port 22 is conventionally SSH, 80 is HTTP, 443 is HTTPS, 53 is DNS, 25 is SMTP. These assignments are tracked by IANA and listed in /etc/services.
A pairing of (address, port) at each end of a connection defines a socket. A TCP connection is uniquely identified by the four-tuple of local address, local port, remote address, and remote port.
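The four-tuple is easy to see from inside a program. This sketch (standard library only, loopback traffic only) opens a listening socket on an ephemeral port, connects to it, and prints the (address, port) pairs at each end; it also looks up a well-known port from the services database:

```python
# Demonstrate the (address, port) pairs that identify a TCP connection,
# using two sockets on the loopback interface. No real network needed.
import socket

# A listening socket on an ephemeral port (port 0 means "pick one for me").
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
server_addr = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server_addr)
conn, _ = server.accept()

# The client's view of the connection: its local pair and the remote pair.
client_local = client.getsockname()
client_remote = client.getpeername()
print("client local :", client_local)
print("client remote:", client_remote)

# Well-known assignments from the services database are also queryable.
print("ssh is port", socket.getservbyname("ssh", "tcp"))

for s in (conn, client, server):
    s.close()
```

The server's view of the same connection is the mirror image: its local pair is the client's remote pair and vice versa, which is exactly why the four-tuple identifies the connection uniquely.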
ip: The Modern Interface Tool
The command ip replaces a clutch of older tools (ifconfig, route, arp) with a single unified interface to the kernel's networking subsystems.
ip addr # show addresses on all interfaces
ip addr show eth0 # only eth0
ip link # show link-layer information
ip route # routing table
ip -s link show eth0 # statistics
A typical ip addr output:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
link/ether 00:15:5d:8a:4f:c2 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.42/24 brd 192.168.1.255 scope global eth0
inet6 fe80::215:5dff:fe8a:4fc2/64 scope link
You can see the interface name (eth0), its MAC address, its IPv4 address with a /24 subnet mask, and an IPv6 link-local address.
To assign a static IP manually (for ad-hoc testing, not for permanent configuration):
sudo ip addr add 10.0.0.1/24 dev eth0
sudo ip link set eth0 up
sudo ip route add default via 10.0.0.254
Permanent configuration is usually handled by NetworkManager on desktops, systemd-networkd on servers, or distribution-specific tools like netplan on Ubuntu. Manually editing /etc/network/interfaces is still possible on Debian but increasingly rare.
The old commands (ifconfig, route) still exist in many distributions via the net-tools package, but they have been deprecated for over a decade. Modern tutorials should use ip.
ping: Is It Alive?
The simplest test of whether a host is reachable is to ping it. This sends ICMP echo request packets and measures round-trip time.
ping google.com
# PING google.com (142.250.187.206) 56 bytes of data.
# 64 bytes from lhr48s22-in-f14: icmp_seq=1 ttl=115 time=14.3 ms
# 64 bytes from lhr48s22-in-f14: icmp_seq=2 ttl=115 time=13.9 ms
On Linux, ping runs until you hit Ctrl+C. Use -c 4 to send a fixed number of packets. A successful ping tells you four things at once: DNS resolution worked, the network path is open, the target responds to ICMP, and you can measure latency.
Some firewalls block ICMP, so a failing ping does not always mean a host is down.
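Sending ICMP echo requests from your own code needs a raw socket, and therefore root. A crude application-level fallback is to time a TCP connect instead. This sketch times a connection to a throwaway listener on localhost; against a real host you would pass the (host, port) of a service you know is listening:

```python
# Rough round-trip estimate at the TCP layer: time how long connect() takes.
# Here the target is a throwaway local listener, so no real network is used.
import socket
import time

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)

start = time.perf_counter()
probe = socket.create_connection(listener.getsockname(), timeout=2)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"TCP connect took {elapsed_ms:.2f} ms")

probe.close()
listener.close()
```

Note this measures the TCP handshake, not ICMP round-trip time, so it includes connection-setup overhead — but it works even when a firewall drops ICMP.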
traceroute
traceroute (or tracepath on some distributions) shows you the sequence of routers a packet passes through on its way to a destination:
traceroute google.com
# 1 router.local (192.168.1.1) 0.6 ms
# 2 isp-gateway (10.0.0.1) 8.2 ms
# 3 peer-1.net.example (203.0.113.5) 12.1 ms
# ...
It works by sending packets with progressively larger TTL (time-to-live) values. Each router that drops a packet whose TTL has expired sends back an ICMP "time exceeded" message, revealing its address. The tool is invaluable when something seems to be broken beyond your own network.
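The TTL trick itself needs no privileges — only receiving the ICMP errors does. This sketch sets the TTL on an ordinary UDP socket, which is the unprivileged half of what traceroute does:

```python
# traceroute's core trick: send probes with TTL 1, 2, 3, ... so each
# successive router drops one and reports back. Setting the TTL on a
# plain UDP socket is unprivileged; only reading the ICMP "time
# exceeded" replies requires a raw socket (root).
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
seen = []
for ttl in (1, 2, 3):
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
    # A real traceroute would now send a probe datagram and wait for
    # the ICMP error identifying the router at hop `ttl`.
    seen.append(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TTL))
    print("probe for hop", ttl, "would carry TTL", seen[-1])
sock.close()
```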
A more modern and more useful tool is mtr (My TraceRoute), which combines ping and traceroute into a single live display. It traces the route to a destination and then continuously probes every hop, showing loss percentage and latency at each one. This is exactly the view you want when diagnosing intermittent packet loss or a flaky link partway through the path:
mtr google.com
# HOST: mymachine Loss% Snt Last Avg Best Wrst StDev
# 1. router.local 0.0% 20 0.6 0.7 0.5 1.2 0.1
# 2. isp-gateway 0.0% 20 8.2 8.5 7.9 10.1 0.4
# 3. peer-1.net.example 15.0% 20 12.1 14.3 11.8 22.5 3.1
# ...
Use mtr -r -c 100 host for a one-shot report that you can paste into a bug ticket. Of the network-diagnosis tools in this chapter, mtr is the one that most often saves you from blaming the wrong thing.
DNS: dig and nslookup
Humans remember names; networks route by numbers. The Domain Name System translates between them, and DNS failures account for a surprising fraction of networking outages.
The modern tool for querying DNS is dig:
dig google.com
# ;; ANSWER SECTION:
# google.com. 242 IN A 142.250.187.206
dig @8.8.8.8 google.com # ask Google's DNS specifically
dig google.com MX # mail exchange records
dig google.com AAAA # IPv6 records
dig +short google.com # just the answer, no ceremony
dig +trace google.com # full resolution path from root
nslookup is an older tool that still works but is considered deprecated. host is another quick alternative.
DNS configuration on a Linux machine lives in /etc/resolv.conf:
cat /etc/resolv.conf
# nameserver 192.168.1.1
# nameserver 8.8.8.8
# search mylan.local
On modern systems this file is often managed automatically by systemd-resolved, and edits made by hand will be silently overwritten.
There is also /etc/hosts, a static mapping of names to addresses that the resolver consults before DNS. It is how you make localhost resolve to 127.0.0.1 without any server involved, and it is the quick and dirty way to override DNS for testing.
cat /etc/hosts
# 127.0.0.1 localhost
# 127.0.1.1 my-laptop
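The resolution order just described — hosts file first, then DNS according to /etc/resolv.conf — is what the C library's resolver follows, and programs reach it from Python via socket.getaddrinfo. Resolving "localhost" exercises the path without any network traffic:

```python
# Resolve a name the same way ordinary programs do: via the system
# resolver, which consults /etc/hosts before asking any DNS server.
import socket

addrs = []
for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(
    "localhost", 80, proto=socket.IPPROTO_TCP
):
    label = {socket.AF_INET: "IPv4", socket.AF_INET6: "IPv6"}.get(family, str(family))
    addrs.append(sockaddr[0])
    print(label, sockaddr[0])
```

Swap "localhost" for a real hostname and the same call goes out through the nameservers in /etc/resolv.conf — which is why a broken resolv.conf breaks nearly every program at once.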
ss and netstat: Who Is Listening?
To find out which programs are listening on which ports, use ss (socket statistics). It replaces the older netstat.
ss -tulnp
# Netid State Local Address:Port Process
# tcp LISTEN 0.0.0.0:22 sshd
# tcp LISTEN 127.0.0.1:631 cupsd
The flags: -t TCP, -u UDP, -l listening only, -n numeric (do not resolve names), -p show processes. This is the command you run when you want to know "what is listening on my machine, and which program is it?".
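What ss reports for a listener is simply the socket's bound address, as recorded by the kernel. Opening a listening socket and asking for its address back shows the same (address, port) pair that would appear in the ss output:

```python
# Create a listener and read back the address the kernel bound it to —
# the same information `ss -tlnp` reports for this process.
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0: the kernel assigns a free port
srv.listen(5)
addr, port = srv.getsockname()
print(f"ss would show this process LISTENing on {addr}:{port}")
srv.close()
```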
SSH: The Most Important Command
If you take away one tool from this chapter, take ssh. It is how system administrators do almost all their work, how developers deploy code, how Git pushes to remote repositories, and how you connect to anything further away than the machine in front of you.
SSH is an encrypted remote shell protocol designed by Tatu Ylönen in 1995 as a replacement for the insecure rlogin and telnet. It provides three things at once: confidentiality (nobody on the wire can read the traffic), integrity (nobody can tamper with it), and authentication (both ends can verify each other).
ssh user@hostname
ssh -p 2222 user@hostname # non-default port
ssh user@hostname 'ls -la /var' # run a single command and exit
ssh -L 8080:localhost:80 user@host # port forwarding: local 8080 -> remote 80
Key-Based Authentication
Passwords are inconvenient and leakable. Public-key authentication is the professional way to use SSH. Generate a key pair once:
ssh-keygen -t ed25519 -C "you@example.com"
# produces ~/.ssh/id_ed25519 (private) and ~/.ssh/id_ed25519.pub (public)
Copy your public key to a remote server:
ssh-copy-id user@hostname
From now on, ssh user@hostname logs you in without asking for a password, because the remote server has your public key in ~/.ssh/authorized_keys and verifies that you hold the matching private key. Keep the private key private (chmod 600), and never, ever copy it between machines.
A ~/.ssh/config file lets you define shortcuts:
Host myserver
HostName server.example.com
User alice
Port 2222
IdentityFile ~/.ssh/id_ed25519
Then ssh myserver does the right thing.
File Transfer: scp, rsync, curl, wget
scp copies files over an SSH connection:
scp file.txt user@host:/path/
scp -r dir/ user@host:/path/
scp user@host:/path/file.txt .
scp is simple and widely available, but the original scp protocol has long-standing quirks, and OpenSSH has deprecated it in favour of SFTP; since OpenSSH 9.0 the scp command transfers files over the SFTP protocol by default.
rsync is far superior for anything non-trivial. It transfers only the parts of files that have changed, preserves metadata, supports recursive copies, and can work over SSH seamlessly:
rsync -avz src/ user@host:dest/
rsync -avz --delete src/ user@host:dest/ # delete files on remote that are not in source
rsync -avz --progress src/ user@host:dest/ # show progress
The flags -avz mean archive (preserve everything), verbose, and compress in transit. The trailing slash on src/ matters: src/ means "the contents of src", whereas src means "the directory src itself".
For downloading from the web, curl and wget are the workhorses.
curl https://example.com/file.zip -O # -O saves to the file's name
curl -L https://example.com/redirect # follow redirects
curl -X POST -d 'name=alice' https://api.example.com/users
wget https://example.com/file.zip # saves by default
wget -r -np https://example.com/docs/ # recursive, stay in directory
curl is more flexible (it is what most API-using scripts reach for); wget is more convenient for batch downloads.
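The curl POST above has a direct standard-library equivalent in Python, which is worth knowing when a script outgrows shelling out to curl. The sketch below only builds the request object, so nothing is actually sent (the URL is a documentation-style placeholder); calling urlopen(req) would perform it:

```python
# Construct the equivalent of: curl -X POST -d 'name=alice' <url>
# Building the Request offline shows exactly what would be sent.
from urllib.parse import urlencode
from urllib.request import Request

body = urlencode({"name": "alice"}).encode()
req = Request("https://api.example.com/users", data=body)  # data= implies POST
print(req.get_method(), req.full_url)
print("body:", req.data)
```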
Firewalls: iptables, nftables, ufw, firewalld
Linux has had packet filtering in the kernel since the late 1990s. The classic interface is iptables, which manipulates rules in chains like INPUT, OUTPUT, and FORWARD. Since 2014 a newer framework, nftables, has been replacing it, though iptables-compatible commands still work via a compatibility layer.
For most use cases, the user-friendly wrappers are enough:
ufw (Uncomplicated Firewall), the Debian/Ubuntu default:
sudo ufw allow 22/tcp
sudo ufw allow from 10.0.0.0/24 to any port 80
sudo ufw enable
sudo ufw status
firewalld, the Red Hat family default:
sudo firewall-cmd --permanent --add-service=ssh
sudo firewall-cmd --reload
sudo firewall-cmd --list-all
The principle in either case is the same: deny by default, allow only what is needed, and audit the rules regularly.
Putting It Together
A typical troubleshooting session on a newly configured server might go:
ip addr # what are my addresses?
ip route # can I reach the default gateway?
ping 8.8.8.8 # is the internet reachable?
dig google.com # is DNS working?
ss -tlnp # what am I serving?
ssh -v user@example.com # verbose SSH debug
Each tool answers a single question, and together they cover every layer of the stack. Networking trouble is rarely subtle when you know where to look — and Linux's command-line toolkit is among the best in the industry for looking.