How the Internet Works
An 8-minute read
You use it every day. But what actually happens when you type a URL and hit enter?
That webpage you’re reading right now arrived at your screen by being chopped into thousands of pieces, bounced across the globe, and reassembled along the way. It happened in under a second. And no single computer made it happen alone.
The short answer
When you type a web address, your computer breaks your request into tiny pieces called packets. These packets travel through a network of routers, each one deciding the best path forward. The packets find their destination using IP addresses, which work like GPS coordinates for computers. DNS servers translate human-readable names like “google.com” into those numeric addresses, much like a phone book matching names to numbers.
The full picture
Packets: The postal system of the internet
Internet data doesn’t travel as whole files; it’s sliced into small chunks called packets, each labeled with a sequence number, sent independently, and reassembled at the destination. This lets traffic share routes efficiently and survive failures along the way.
Imagine trying to mail an encyclopedia set to someone across the country. You wouldn’t stuff all 50 volumes into one envelope. You’d split them across multiple packages, each labeled with a sequence number. That’s exactly what happens with packets on the internet.
When your browser requests a webpage, the data doesn’t travel as one giant file. It gets sliced into chunks, typically between 1,000 and 1,500 bytes each. A typical webpage might arrive as 50 to 200 packets. A YouTube video? Millions of them.
Each packet has a header containing its destination IP address, its sequence number, and error-checking information. This sequence number is crucial because packets don’t always take the same path. Some might go through Chicago, others through Dallas. They arrive out of order, and your computer reassembles them like a puzzle.
TCP/IP is the protocol that makes this possible. IP handles the addressing and routing. TCP ensures everything arrives correctly, requests retransmission if a packet gets lost, and puts things back in order. It’s like certified mail with return receipt.
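The slicing and reassembly described above can be sketched in a few lines of Python. This is a toy model, not a real TCP stack; the 1,200-byte payload size is just an illustrative value in the typical range:

```python
import random

PAYLOAD_SIZE = 1200  # illustrative; real packets carry roughly 1,000-1,500 bytes

def packetize(data: bytes) -> list[tuple[int, bytes]]:
    """Slice data into (sequence_number, chunk) pairs, like packets on the wire."""
    return [(seq, data[i:i + PAYLOAD_SIZE])
            for seq, i in enumerate(range(0, len(data), PAYLOAD_SIZE))]

def reassemble(packets: list[tuple[int, bytes]]) -> bytes:
    """Sort by sequence number and stitch the payloads back together."""
    return b"".join(chunk for _, chunk in sorted(packets))

page = b"<html>" + b"x" * 5000 + b"</html>"
packets = packetize(page)
random.shuffle(packets)          # packets arrive out of order
assert reassemble(packets) == page
```

The shuffle stands in for packets taking different routes; the sequence numbers are what let the receiver put the puzzle back together regardless of arrival order.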
IP addresses and DNS: The phone book
Every device connected to the internet has an IP address, a unique numeric identifier. IPv4 addresses look like “192.168.1.1”; there are about 4.3 billion possible combinations, a supply that was effectively exhausted in the early 2010s. IPv6 expanded this to a virtually unlimited number.
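The difference in scale between the two address families is easy to check with Python’s standard ipaddress module (the IPv6 address shown is one of Google’s public DNS addresses):

```python
import ipaddress

v4 = ipaddress.ip_address("192.168.1.1")
v6 = ipaddress.ip_address("2001:4860:4860::8888")

print(v4.version, 2 ** 32)    # IPv4: about 4.3 billion possible addresses
print(v6.version, 2 ** 128)   # IPv6: about 3.4 x 10^38 possible addresses
```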
But humans don’t want to remember “142.250.190.46” every time they want to search something. Enter DNS, the Domain Name System. DNS servers act as the internet’s phone book.
When you type “youtube.com”, your computer asks a DNS resolver: “Where’s youtube.com?” The resolver might check its cache first. If not found, it asks root DNS servers, then .com servers, then YouTube’s specific nameservers. This whole chain happens in milliseconds, and returns an IP address your computer can actually use.
Your ISP provides your DNS resolver by default, but you can change it. Cloudflare’s 1.1.1.1 or Google’s 8.8.8.8 are popular alternatives that can sometimes speed up your browsing.
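The resolution chain above (cache, then root, then TLD, then authoritative nameserver) can be modeled as a walk down a hierarchy. The sketch below is a toy simulation with hard-coded tables; real resolvers speak the DNS wire protocol over the network, and every address here is a made-up documentation address, not a real server:

```python
# Toy DNS hierarchy: each level tells you who to ask next.
# All IP addresses below are reserved documentation addresses, invented for illustration.
ROOT = {"com": "192.0.2.1"}                            # root servers point to TLD servers
TLD = {"192.0.2.1": {"youtube.com": "192.0.2.2"}}      # .com servers point to nameservers
AUTHORITATIVE = {"192.0.2.2": {"youtube.com": "203.0.113.10"}}

cache: dict[str, str] = {}

def resolve(name: str) -> str:
    """Walk root -> TLD -> authoritative, caching the answer."""
    if name in cache:                  # resolvers check their cache first
        return cache[name]
    tld_server = ROOT[name.rsplit(".", 1)[-1]]
    nameserver = TLD[tld_server][name]
    ip = AUTHORITATIVE[nameserver][name]
    cache[name] = ip
    return ip

print(resolve("youtube.com"))   # first lookup walks the whole chain
print(resolve("youtube.com"))   # second lookup is answered from the cache
```

Caching is why the second lookup for a popular domain is nearly free, and why most real queries never reach the root servers at all.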
Routers: The traffic controllers
Between your computer and YouTube’s server sit dozens of routers. A router is a specialized computer whose only job is to decide where to send packets next.
Each router has a routing table, a map of the network. When a packet arrives, the router examines its destination IP address, consults this table, and forwards the packet toward its destination. This process is called routing.
Routers don’t know the full path from source to destination. They only know the next best hop. This is called hop-by-hop routing, and it’s what makes the internet resilient. If one path fails, routers quickly learn alternative routes. The internet literally routes around damage.
The average packet crosses 15 to 20 routers to get across the country. Going internationally can mean 30 or more hops.
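A routing table lookup is essentially a longest-prefix match: among all prefixes that contain the destination address, pick the most specific one. A minimal sketch using Python’s ipaddress module, with invented prefixes and next-hop names:

```python
import ipaddress

# Hypothetical routing table: prefix -> next hop (names are illustrative).
ROUTES = {
    ipaddress.ip_network("0.0.0.0/0"):       "default-gateway",
    ipaddress.ip_network("203.0.113.0/24"):  "router-chicago",
    ipaddress.ip_network("203.0.113.64/26"): "router-dallas",
}

def next_hop(dest: str) -> str:
    """Pick the most specific (longest) prefix containing dest."""
    addr = ipaddress.ip_address(dest)
    matches = [net for net in ROUTES if addr in net]
    return ROUTES[max(matches, key=lambda net: net.prefixlen)]

print(next_hop("203.0.113.70"))  # inside the /26, the most specific match
print(next_hop("203.0.113.5"))   # only inside the /24
print(next_hop("8.8.8.8"))       # matches nothing but the default route
```

Note that the table never stores a full path, only the next hop for each prefix, which is exactly why a router can switch to an alternative route the moment its table changes.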
HTTP and HTTPS: The conversation
Once packets reach YouTube’s server, something needs to interpret them. That’s where HTTP comes in, the Hypertext Transfer Protocol. It’s the language browsers and servers use to talk to each other.
Your browser sends a request that looks something like this:
GET / HTTP/1.1
Host: youtube.com
User-Agent: Mozilla/5.0...
The server responds with a status code (200 means success, 404 means “not found”), headers describing what it’s sending back, and the actual content.
HTTPS adds encryption on top of HTTP. Before sending data, your browser and the server perform a cryptographic handshake, agreeing on an encryption key. All subsequent data is scrambled so that anyone intercepting it sees only gibberish. That lock icon in your browser? It means HTTPS is protecting your connection.
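The request and response formats above are plain text with a rigid structure, which makes them easy to build and parse by hand. The sketch below does exactly that, with no network involved; the User-Agent string is invented for illustration:

```python
def build_request(host: str, path: str = "/") -> str:
    """Format a minimal HTTP/1.1 GET request."""
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"User-Agent: toy-client/0.1\r\n"
            f"\r\n")                      # blank line marks the end of the headers

def parse_status(status_line: str) -> int:
    """Pull the numeric status code out of the first response line."""
    # e.g. "HTTP/1.1 200 OK" -> 200, "HTTP/1.1 404 Not Found" -> 404
    return int(status_line.split()[1])

req = build_request("youtube.com")
print(req.startswith("GET / HTTP/1.1"))        # True
print(parse_status("HTTP/1.1 200 OK"))         # 200
print(parse_status("HTTP/1.1 404 Not Found"))  # 404
```

In practice your browser (or a library like Python’s http.client) handles this framing for you, and with HTTPS the whole exchange travels inside the encrypted channel.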
The physical internet: Cables, data centers, and ISPs
All this digital traffic moves through physical infrastructure. And it’s more tangible than you’d think.
Most internet traffic travels through undersea cables. According to TeleGeography’s Submarine Cable Map, over 500 active and planned submarine cable systems span ocean floors, totaling over 1.2 million kilometers. They’re about as thick as a garden hose and contain fiber optic strands that transmit data as light pulses. A single cable can carry 400 gigabits per second or more.
On land, fiber optic cables connect cities. These terminate at data centers, warehouse-sized buildings full of servers. The biggest cloud providers, Amazon Web Services, Google Cloud, and Microsoft Azure, operate hundreds of data centers worldwide. When you “upload” something to the cloud, you’re really just copying it to someone else’s computer in a building somewhere.
Your ISP, the Internet Service Provider, connects you to this global network. Whether it’s Comcast, AT&T, or a local fiber provider, your ISP is your gateway. Your traffic flows through their infrastructure, and they assign you an IP address that identifies your connection.
Latency vs bandwidth: Not the same thing
Bandwidth is how much data you can transfer at once (the width of the pipe); latency is how long data takes to travel (the length of the pipe). They’re independent, and for many applications latency matters more.
People often confuse these two concepts. Bandwidth is how much data you can transfer at once, measured in megabits per second. It’s the width of the pipe.
Latency is how long data takes to travel, measured in milliseconds. It’s the length of the pipe.
You can have high bandwidth but high latency. Imagine a giant cargo ship: it can carry massive amounts of cargo (high bandwidth), but it takes weeks to arrive (high latency). Conversely, a courier on a motorcycle has low bandwidth (one package) but very low latency (same-day delivery).
This matters for different applications. Video streaming needs high bandwidth but can tolerate some latency. Online gaming needs low latency more than high bandwidth. A video call needs both.
Fiber optic connections typically offer latency under 10 milliseconds between major cities. Satellite internet, useful in remote areas, often has latency over 500 milliseconds because signals must travel to space and back.
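The pipe analogy turns into simple arithmetic: the time to fetch one object is roughly latency plus size divided by bandwidth. A back-of-the-envelope sketch, with link numbers chosen for illustration:

```python
def transfer_ms(size_megabits: float, bandwidth_mbps: float, latency_ms: float) -> float:
    """Rough time to fetch one object: propagation delay plus serialization time."""
    return latency_ms + (size_megabits / bandwidth_mbps) * 1000

# A small 1-megabit object (~125 KB) over two illustrative links:
satellite = transfer_ms(1, 100, 600)  # fat pipe, far away: 100 Mbps, 600 ms latency
fiber = transfer_ms(1, 30, 10)        # thin pipe, nearby: 30 Mbps, 10 ms latency

print(round(satellite))  # 610 ms: latency dominates
print(round(fiber))      # 43 ms: a third of the bandwidth, far faster in practice
```

For small objects the latency term swamps the bandwidth term, which is why the “slower” fiber link wins; only for very large transfers does the extra bandwidth pay off.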
The 200ms journey
Here’s what happens in the roughly 200 milliseconds between typing a URL and seeing a webpage:
- Your browser parses the URL and checks its cache (0-5ms)
- If not cached, it asks your OS to resolve the DNS name (5-20ms)
- Your computer opens a TCP connection to the server (20-50ms)
- TLS handshake for HTTPS encryption (50-100ms)
- Your browser sends the HTTP request (100-120ms)
- The server processes the request and starts sending data (120-150ms)
- The first packets arrive and rendering begins (150-200ms)
This is simplified, and conditions vary wildly. A fast fiber connection might do this in 100ms. A congested mobile network might take 500ms or more. The server’s distance matters too: accessing a server across town might take 20ms, while reaching one on another continent could add 100ms or more.
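The distance penalty at the end is mostly physics: light in fiber travels at roughly 200,000 km/s (about two-thirds of its speed in a vacuum), so distance sets a hard floor on latency that no amount of bandwidth can remove. A quick estimate:

```python
FIBER_KM_PER_MS = 200  # light in fiber covers roughly 200 km per millisecond

def round_trip_ms(distance_km: float) -> float:
    """Minimum round-trip time from propagation delay alone."""
    return 2 * distance_km / FIBER_KM_PER_MS

print(round_trip_ms(50))      # across town: a 0.5 ms floor
print(round_trip_ms(4000))    # across a continent: a 40 ms floor
print(round_trip_ms(10000))   # across an ocean: a 100 ms floor
```

Real latency sits above these floors because of routing detours, queuing, and processing at each hop, but the floors themselves explain why intercontinental requests can never feel local.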
Why it matters
Understanding how the internet works helps you make better decisions as a user and developer.
When your connection is slow, knowing whether it’s bandwidth (buffering video) or latency (lag in games) helps you diagnose the problem. When you’re building websites, knowing that every extra kilobyte of JavaScript adds to download time motivates optimization.
The internet isn’t magic. It’s infrastructure, physical wires and computers making split-second decisions. And like any infrastructure, it has limits, tradeoffs, and failure modes. Understanding those limits makes you a more effective user and builder.
Who actually owns the internet’s backbone
The internet’s physical backbone, the submarine cables carrying 95% of international traffic, is increasingly owned by a handful of tech giants. Google, Meta, Microsoft, and Amazon now co-own a substantial portion of the world’s undersea cable capacity, not the diffuse network of telecoms the early internet implied.
The internet is often described as if it’s one thing, or as if it belongs to no one. The physical reality is more complicated, and more concentrated.
Long-distance internet traffic travels through submarine cables: fiber optic cables lying on the ocean floor connecting continents. There are roughly 500 of these cables, carrying about 95% of all international internet traffic. Satellites handle the rest, mostly for remote areas without cable access. The submarine cable map is one of the most strategically important infrastructure maps in the world.
For most of the internet’s early history, these cables were owned by telecom consortia, multiple carriers sharing costs and bandwidth. Today, tech giants have moved in. Google, Meta, Microsoft, and Amazon now own or co-own a substantial portion of the world’s submarine cable capacity. Google alone has invested in over 20 submarine cables.
Internet exchange points (IXPs) are where different networks physically connect and hand traffic to each other. The biggest, DE-CIX in Frankfurt, handled a peak of 17.09 terabits of traffic per second in April 2024. A handful of major IXPs carry a disproportionate share of global internet traffic, which means they’re also targets of strategic interest and, occasionally, censorship by governments that sit between a user and the world’s content.
This concentration matters for the concept of internet “openness.” Technically, the internet was designed to route around failures and censorship. Practically, if enough physical infrastructure is in the hands of a few actors, or runs through a few choke points, that theoretical resilience has limits. Several authoritarian governments have experimented with building domestic internets that can be disconnected from the global network, and the concentration of physical infrastructure in cables, data centers, and IXPs reveals exactly where the choke points are.
Common misconceptions
More bandwidth always means faster internet. Not true. If latency is your bottleneck, doubling bandwidth won’t help much. A 100Mbps satellite connection feels slower than a 30Mbps fiber connection for many tasks.
Incognito mode makes you anonymous. False. Incognito only stops your browser from storing history, cookies, and cache on your device. Your ISP, websites, and network administrators can still see everything you do.
The cloud is somewhere else. The cloud is just someone else’s computer. Those “remote” files are stored in data centers that might be physically closer to you than your own office. The cloud is a marketing term for distributed infrastructure.