You've got a fast internet plan, your speed test shows high download numbers, and yet your video calls stutter, your online games feel laggy, and web pages sometimes take forever to load. The culprit is often not your bandwidth but your latency: the time it takes for data to travel from your device to its destination and back. Latency is the invisible factor that affects everything you do online, and understanding what causes it is the first step toward fixing it.

In more than 20 years of managing networks, from small offices to enterprise environments, I've seen latency cause more user complaints than just about any other network issue. People think "slow internet" means they need more bandwidth, but often the real problem is that their data is taking too long to make the round trip. Let's break down what latency is, what causes it, and what you can do about it.

What Is Network Latency?

Latency is the time delay between sending a request and receiving a response, measured in milliseconds (ms). When you click a link on a web page, your browser sends a request that travels through your local network, through your ISP, across potentially multiple backbone networks, to the web server. The server processes the request and sends data back along a similar path. The total round-trip time is your latency.

Bandwidth and latency are related but distinct concepts. Bandwidth is the capacity of your connection: how much data can flow through per second, like the width of a pipe. Latency is how long data takes to travel from one end to the other, like the length of the pipe. A connection can have enormous bandwidth but high latency (a large, long pipe), or low bandwidth but low latency (a narrow, short pipe). For tasks like downloading large files, bandwidth matters most. For tasks like video calls, online gaming, and responsive web browsing, latency matters most.

To measure your latency, the simplest tool is ping. Open a command prompt and type ping 8.8.8.8 to measure the round-trip time to Google's DNS server. Typical values for a healthy connection are under 20ms for local services, 20-50ms to servers within your country, and 100-200ms for international destinations. Anything over 200ms is noticeable, and latency above 300ms makes real-time applications like voice and video calls very difficult to use. For a more detailed view of where latency occurs along the path, use a traceroute.
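If you want to track these numbers over time, it's easy to pull the per-packet round-trip times out of ping's output with a short script. A minimal sketch in Python, assuming Linux/macOS-style output lines (the sample output below is hypothetical):

```python
import re
import statistics

def parse_ping_times(output: str) -> list[float]:
    """Extract per-packet round-trip times (ms) from ping output.

    Assumes lines containing 'time=14.2 ms' as on Linux/macOS;
    Windows-style 'time=14ms' also matches this pattern.
    """
    return [float(m) for m in re.findall(r"time[=<]([\d.]+)\s*ms", output)]

# Hypothetical output from `ping -c 4 8.8.8.8` on a healthy connection
sample = """\
64 bytes from 8.8.8.8: icmp_seq=1 ttl=118 time=14.2 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=118 time=15.1 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=118 time=13.9 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=118 time=14.6 ms
"""

rtts = parse_ping_times(sample)
print(f"min/avg/max = {min(rtts)}/{statistics.mean(rtts):.1f}/{max(rtts)} ms")
```

Logging these values every few minutes, then comparing morning and evening runs, is a simple way to gather evidence for the time-of-day patterns discussed later.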

Physical Distance

Light travels through fiber optic cable at roughly two-thirds the speed of light in a vacuum, about 200,000 kilometers per second. That sounds incredibly fast, but it adds about 5ms of latency for every 1,000 kilometers of cable. A connection from New York to London (approximately 5,500 km of submarine cable) adds at least 27ms of latency in each direction from distance alone, and that's the theoretical minimum through a straight-line cable, which doesn't exist in practice.

This is why geographic distance to the server matters. If you're in Maine and connecting to a server in Tokyo, you're looking at a minimum of 80-100ms of latency just from the physical distance, plus additional delays from network equipment along the way. No technology can overcome the speed of light; this is a hard physical limit.
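The distance math is easy to sanity-check yourself. A quick sketch, assuming propagation at roughly 200,000 km/s in fiber (the helper function name is mine, and real cable routes are always longer than the straight-line distance):

```python
# Approximate speed of light in glass fiber: about two-thirds of c in a vacuum
FIBER_KM_PER_SEC = 200_000

def propagation_delay_ms(distance_km: float, round_trip: bool = True) -> float:
    """Theoretical minimum latency from cable distance alone, in milliseconds."""
    one_way_ms = distance_km * 1000 / FIBER_KM_PER_SEC  # seconds -> ms built in
    return one_way_ms * 2 if round_trip else one_way_ms

# New York to London, ~5,500 km of submarine cable
print(propagation_delay_ms(5_500, round_trip=False))  # 27.5 ms one way
print(propagation_delay_ms(5_500))                    # 55.0 ms round trip
```

Run the same calculation for any city pair and you'll see why a CDN node 50 km away will always beat an origin server 10,000 km away, no matter how fast either server is.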

Content Delivery Networks (CDNs) exist specifically to address this problem. Services like Cloudflare, Amazon CloudFront, and Akamai place copies of website content on servers distributed around the world, so your request is served from a nearby server rather than one on the other side of the planet. You can check which server is responding to your requests using IP Lobster's DNS Lookup to see what IP address a domain resolves to, then use Geolocation to find where that server is located.

Network Hops and Routing

Data doesn't travel in a straight line from your device to the destination server. It passes through multiple routers, switches, and network interconnection points along the way. Each device that handles your traffic adds a small amount of processing delay. A typical internet connection might involve 10-20 network hops between you and the destination, and each hop can add anywhere from a fraction of a millisecond to several milliseconds of latency.

Routing efficiency also matters. The internet doesn't always choose the shortest path; it follows the paths defined by routing protocols and business agreements between ISPs. Your traffic might travel from Maine to a server in Boston by routing through New York or even Chicago because of how your ISP peers with the destination's network. Traceroute is the tool that reveals this path. Run tracert on Windows or traceroute on Mac/Linux to see every hop your traffic takes and the latency added at each one.

Peering points, the physical locations where different ISPs exchange traffic, can be significant sources of latency and congestion. When two large ISPs disagree on peering arrangements, traffic between their customers might be routed through congested or distant interconnection points, adding unnecessary latency. This is often the explanation when you have fast internet but slow connections to specific services.

Your Local Network

Before your data even reaches your ISP, it passes through your local network, and this is where many latency problems originate. Wi-Fi is the most common culprit. Wireless connections add latency compared to wired connections, typically 2-10ms on a good day but potentially much more when the signal is weak, there's interference from neighboring networks, or many devices are competing for airtime.

Wi-Fi operates on shared spectrum, meaning your device has to wait for a clear moment to transmit, similar to people waiting to speak in a crowded room. The more devices on your network, and the more neighboring networks within range, the more contention there is for airtime. This manifests as increased latency and jitter (variation in latency).

If latency matters for your use case (gaming, video calls, or remote work), the single most effective improvement is to use a wired Ethernet connection. A direct cable to your router eliminates Wi-Fi latency entirely. If wired isn't practical, position your device as close to the router as possible, use the 5GHz band instead of 2.4GHz (5GHz typically sees less interference from neighboring networks, though its range is shorter), and minimize the number of walls between your device and the router.

An overloaded router can also add latency. If your router's CPU is maxed out handling traffic, NAT translations, firewall rules, and QoS processing, it may buffer packets, adding queuing delay. This is particularly common during heavy downloads or when many devices are active simultaneously.

ISP Congestion

Your ISP's network has a finite capacity, and during peak usage hours (typically evenings when everyone is streaming video), that capacity can become strained. When links within the ISP's network are running near capacity, packets get queued in router buffers, adding latency. In severe cases, packets are dropped entirely, requiring retransmission and causing even more delay.

This kind of congestion-related latency is often time-dependent: your ping to the same server might be 15ms at 2 AM and 50ms at 8 PM. If you notice consistent latency increases during peak hours, it's likely ISP congestion. Unfortunately, there's not much you can do about this other than switching ISPs, if alternatives are available, or scheduling bandwidth-heavy activities for off-peak times.

Bufferbloat is a specific form of congestion-related latency that affects many home connections. When your connection is fully saturated (a large download using all available bandwidth, for example), some routers buffer excessive amounts of data, causing latency to spike dramatically, sometimes to 500ms or more. Smart Queue Management (SQM), a firmware feature available on many routers and on any router running OpenWrt, can dramatically reduce bufferbloat by managing queue sizes intelligently.
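The scale of bufferbloat is easy to underestimate until you do the arithmetic: a full buffer can drain no faster than the link it feeds. A rough back-of-the-envelope sketch (the buffer and link sizes below are illustrative, not measurements from any particular router):

```python
def queuing_delay_ms(buffer_bytes: int, link_mbps: float) -> float:
    """Worst-case time to drain a full buffer at the link rate, in ms.

    bytes * 8 = bits; link_mbps * 1,000,000 = bits/s;
    dividing and converting to ms simplifies to bits / (link_mbps * 1000).
    """
    return buffer_bytes * 8 / (link_mbps * 1000)

# A 1 MB buffer ahead of a 10 Mbit/s uplink: when a big upload fills it,
# every other packet waits behind it
print(queuing_delay_ms(1_000_000, 10))  # 800.0 ms of added latency
```

That single oversized buffer explains how a latency of 15ms can jump to nearly a second during a large upload, and why SQM, which keeps queues short, makes such a dramatic difference.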

Jitter: The Other Half of the Latency Story

Jitter is the variation in latency over time. If your ping times vary between 15ms and 18ms, you have low jitter: the connection is consistent. If they vary between 15ms and 150ms, you have high jitter: the connection is unpredictable. High jitter is often worse than consistently high latency because applications can adapt to consistent delay but struggle with wildly varying delay.
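Given a series of ping times, quantifying jitter takes only a few lines. A sketch that reports the range and standard deviation of a sample (the RTT lists are made up to mirror the low-jitter and high-jitter cases described above):

```python
import statistics

def jitter_stats(rtts_ms: list[float]) -> dict[str, float]:
    """Summarize latency variation: min, max, range, and standard deviation."""
    return {
        "min": min(rtts_ms),
        "max": max(rtts_ms),
        "range": max(rtts_ms) - min(rtts_ms),
        "stdev": statistics.pstdev(rtts_ms),
    }

steady = [15, 16, 15, 17, 16, 15]    # low jitter: consistent connection
spiky = [15, 150, 22, 140, 18, 95]   # high jitter: unpredictable connection

print(jitter_stats(steady)["range"])  # 2
print(jitter_stats(spiky)["range"])   # 135
```

The range is the quick gut check; the standard deviation is more robust when one stray outlier would otherwise dominate the picture.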

Video calls and VoIP are particularly sensitive to jitter. These applications buffer incoming data to smooth out timing variations, but if jitter exceeds the buffer, you get choppy audio, frozen video, or dropped calls. Online games are similarly affected: consistent 80ms latency is playable, but latency that jumps between 20ms and 200ms makes the game feel broken.

Common causes of jitter include Wi-Fi interference (the most frequent cause in home networks), ISP congestion, competing traffic on your local network, and poor-quality network equipment. The same remedies that reduce latency, including wired connections, good router placement, QoS configuration, and SQM for bufferbloat, also reduce jitter.

Diagnosing Latency Problems

Start with a ping test to a reliable destination like ping 8.8.8.8. This gives you a baseline round-trip time and shows packet loss if any exists. Next, run a traceroute to the destination you're having problems with; this reveals which hop in the chain is adding the most latency. If the first hop (your router) already shows high latency, the problem is on your local network. If latency increases sharply at a particular hop in the middle of the path, the problem is at that point in the internet infrastructure.
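To make the "which hop adds the most latency" check concrete: given one representative round-trip time per hop, you can find the largest hop-to-hop jump programmatically. A sketch with hypothetical per-hop numbers:

```python
def biggest_latency_jump(hop_rtts_ms: list[float]) -> tuple[int, float]:
    """Return (1-based hop number, added ms) of the largest hop-to-hop increase."""
    best_hop, best_jump = 1, 0.0
    for i in range(1, len(hop_rtts_ms)):
        jump = hop_rtts_ms[i] - hop_rtts_ms[i - 1]
        if jump > best_jump:
            best_hop, best_jump = i + 1, jump  # convert to 1-based hop number
    return best_hop, best_jump

# Hypothetical per-hop RTTs from a traceroute: router, ISP, then a long
# transit link, then the destination's network
hops = [1.2, 8.5, 9.1, 9.4, 42.0, 43.2, 44.0]
print(biggest_latency_jump(hops)[0])  # hop with the largest jump: 5
```

Here the jump at hop 5 would point to a long-distance transit link or a congested interconnection; the hops after it add almost nothing. One caveat: some routers deprioritize responding to traceroute probes, so a single slow hop that doesn't raise latency for all subsequent hops is usually noise, not the bottleneck.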

Test at different times of day to identify patterns. If latency is fine in the morning but terrible in the evening, ISP congestion is likely the cause. Test with both Wi-Fi and a wired connection to isolate wireless issues. Test with all other devices disconnected to rule out local network congestion.

Use IP Lobster's Port Scanner and DNS Lookup tools to verify that network services are responding correctly. Sometimes what appears to be a latency problem is actually a DNS resolution delay: if DNS lookups are slow, every website will feel sluggish even though your connection to the actual web servers is fine. Switching to a fast DNS provider like Cloudflare (1.1.1.1) can often eliminate this kind of perceived latency.

Network latency is rarely something you can eliminate entirely, but understanding where it comes from lets you make targeted improvements. A wired connection, a good DNS provider, a router with SQM, and awareness of peak congestion times can collectively transform a frustrating internet experience into a smooth one, often without spending a single dollar on a faster plan.