September 23, 2025

The network is slow: 14 ways to troubleshoot without specialized tools

Behind every slow network is a clue. Spot it without reaching for extra tools.

The tickets are pinging in with one complaint: The network is slow. The rest of the IT team has vanished for lunch, and in under an hour, the board is logging into Zoom. The pressure is real. Where do you start?

1. Inspect physical connections

Always begin at Layer 1. Confirm patch leads are seated, connectors undamaged, and optics properly inserted. Replace suspect cables before moving on. Many slowdowns trace back to bad cabling, and swapping a lead is faster than running higher-layer tests.
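CLI shortcut: On optical links, many platforms with digital optical monitoring can report transceiver light levels from the CLI, which catches a dying optic without touching the rack (support and exact output vary by platform, so treat this as a sketch):

show interfaces transceiver detail

Receive power that sits well below the threshold shown in the output is a cable or optic problem, not a configuration one.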

2. Check interface counters

Run show interface (IOS) or ip -s link (Linux). Healthy links show zero CRC errors and no incrementing input or output errors. On modern networks, even a single CRC error is a concern. On full-duplex links, any nonzero collision count points to a duplex mismatch.

CLI shortcut: Use show int | i error to quickly filter only error lines.

Escalate: If errors persist after reseating and swapping the cables, replace the port or NIC.
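CLI shortcut: The Linux equivalent, assuming an interface named eth0, pulls out just the receive and transmit counter rows:

ip -s link show eth0 | grep -A1 -E "RX:|TX:"

An errors value that climbs between two runs deserves the same cable-and-port treatment.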

3. Verify duplex and speed settings

Run show interface status or check the OS NIC settings. Both ends must agree on full duplex and the same speed. Fast Ethernet is prone to mismatches; gigabit and above usually negotiate reliably. Persistent errors on high-speed links usually mean bad optics or cabling, not negotiation.

Skip: In pure 10G+ environments, this check can be deprioritized unless legacy gear is present.
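CLI shortcut: On a Linux host, assuming the ethtool package is installed and the interface is eth0:

ethtool eth0 | grep -E "Speed|Duplex"

Both the Speed and Duplex lines should match what the switch port reports for the same link.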

4. Measure bandwidth utilization

Check the “5-minute input/output rate” in IOS or run sar -n DEV on Linux. If a link is consistently >90% utilized, it is saturated. Quality of service may help in the short term, but capacity upgrades are the real fix.

Escalate: If utilization is normal but complaints persist, move up the stack.
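Automation hint: A quick Linux-side sample, assuming the sysstat package is installed, takes three five-second readings per interface; compare the rxkB/s and txkB/s columns against the link speed:

sar -n DEV 5 3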

5. Look for excessive broadcast or multicast traffic

Excess broadcast or multicast drains CPU on endpoints and switches. In IOS, check interface stats; on Linux, use netstat -s. Sustained >10% of link utilization is abnormal. Common causes include loops, malware, or misconfigured discovery protocols.

Automation hint: Sweep multiple switches:
for i in $(cat switches.txt); do ssh $i "show int | i broadcast"; done

6. Check for spanning tree issues

Run show spanning-tree. Look for stability—frequent re-convergences point to a loop. MAC flaps between ports are a classic symptom. Always correlate with logs for timing.

Skip: In small, flat networks without redundant links, STP checks may not apply.
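CLI shortcut: On IOS, this filter pulls out the topology-change counters and the port the last change came from (exact wording varies by platform and release):

show spanning-tree detail | include occurred|from

A change counter that keeps climbing between runs is the re-convergence signature to chase.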

7. Examine ARP and MAC tables

On hosts, arp -a shows IP-to-MAC mappings. On switches, show mac address-table lists learned addresses. Flapping or missing entries create intermittent connectivity. If these persist after STP is cleared, escalate to deeper capture analysis.
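CLI shortcut: On a modern Linux host, ip neigh is the current replacement for arp -a; on the switch, run the MAC table query twice a few seconds apart and compare which port each address sits on:

ip neigh show
show mac address-table dynamic

An address that moves between ports across runs is flapping.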

8. Monitor CPU and memory on devices

Use show processes cpu and show memory. Sustained CPU >80% or spikes to 100% disrupt control-plane tasks. Common triggers are logging floods, small-packet storms, or code bugs. Memory leaks also manifest as sluggish forwarding.

Escalate: If utilization remains high with no config changes, plan a software upgrade or RMA.
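CLI shortcut: To see which processes are actually consuming cycles, filter out the idle ones (the sorted keyword is available on most current IOS releases; treat it as an assumption on older code):

show processes cpu sorted | exclude 0.00

Repeat show memory over time to confirm whether free memory is trending down.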

9. Test latency with ping

Ping the gateway, an internal server, then an external IP. Run at least 10 packets per target to confirm trends. Watch for:

  • Jitter: Sustained >2 ms on LAN is unusual; >20 ms indicates congestion.
  • Loss: Any consistent LAN loss is unacceptable; WAN loss >1% breaks voice, though bulk transfers may tolerate it.

Caveat: Some devices rate-limit or drop ICMP under load or for security. Focus on sustained patterns, not isolated drops.
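Automation hint: One pass over all three targets (the gateway address, internal hostname, and external IP below are placeholders; substitute your own):

for host in 192.0.2.1 intranet.example.com 8.8.8.8; do echo "== $host =="; ping -c 10 -q "$host" | tail -2; done

The -q flag keeps only the summary lines, which hold the loss percentage and the min/avg/max round-trip times.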

10. Trace the route

Traceroute identifies where the delay begins. A hop adding ~100 ms consistently is the suspect. Run at least two traceroutes spaced 30–60 seconds apart to confirm persistence. Compare runs from multiple sites to validate the scope. Ignore single missing hops—ACLs or ICMP deprioritization are common. Look for consistency instead.
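Automation hint: Two spaced runs in one line (8.8.8.8 is a stand-in for whichever external target you use; -n skips reverse DNS so resolution delays do not skew the readings):

traceroute -n 8.8.8.8; sleep 45; traceroute -n 8.8.8.8

Compare the outputs hop by hop; only a delay that appears in both runs counts.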

11. Check DNS resolution

Use dig or nslookup. Healthy lookups resolve in under 50 ms; consistently exceeding 200 ms is a problem. Compare internal and public resolvers. If the public resolver is faster, the internal DNS server is at fault.

Escalate: If internal DNS remains slow after cache flush or restart, move to deeper server diagnostics.
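CLI shortcut: Compare query times directly (10.0.0.53 is a placeholder for your internal resolver, 1.1.1.1 for a public one):

dig example.com @10.0.0.53 | grep "Query time"
dig example.com @1.1.1.1 | grep "Query time"

Run each a couple of times so a cold cache on the first query does not mislead you.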

12. Check device and system logs

Logs correlate user reports with real events. On IOS, show log reveals interface flaps and error bursts. On Linux, dmesg and /var/log/syslog reveal driver resets. Windows Event Viewer flags NIC drops. Logs are low-effort, high-yield checks.

Modern caveat: In hybrid/cloud, use CloudWatch, Azure Monitor, or equivalent. Confirm NTP or other time sync is healthy—skewed clocks make event correlation unreliable.
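Automation hint: A quick Linux sweep for link events (the syslog path shown is Debian/Ubuntu-style; on RHEL-family systems it is /var/log/messages):

dmesg -T | grep -iE "link is|reset|nic" | tail -20
grep -iE "link down|link up|reset" /var/log/syslog | tail -20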

13. Run client-side diagnostics

If issues affect only one device, check the client. Tools include netstat -s for socket errors, OS resource monitors for CPU/disk bottlenecks, and Event Viewer or syslog for NIC resets. Many “network” complaints trace back to local endpoint health.

Skip: If multiple clients are affected, focus on shared infrastructure instead.
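CLI shortcut: On a Linux client, a one-line summary of socket trouble (assuming net-tools is installed; ss -s is the modern alternative):

netstat -s | grep -iE "retrans|error|fail"

Rising retransmission counts on one machine, while the rest of the floor is quiet, point at that endpoint or its cable rather than the network.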

14. Document your diagnostics

Record commands, outputs, and timestamps. Documentation accelerates escalation, prevents repeated work, and builds a reusable knowledge base. In high-pressure incidents, good notes are as valuable as the diagnostics themselves.
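Automation hint: Even a crude capture beats memory. This sketch (the gateway address is a placeholder) stamps and stores a snapshot per run:

{ date; ip -s link; ping -c 10 -q 192.0.2.1; } | tee diag-$(date +%Y%m%d-%H%M).txt

Attach the resulting files to the ticket when you escalate.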

Quick-reference thresholds and actions

Symptom | Threshold | Action
CRC errors | Any non-zero, incrementing | Replace cable or port
Duplex mismatch | Collisions on full duplex | Force both ends to auto
Link utilization | >90% sustained | Apply QoS or upgrade
Broadcast traffic | >10% of utilization | Trace and isolate source
CPU load | >80% sustained | Check logging, DoS, or bugs
DNS response | >200 ms consistently | Switch or fix resolver
Packet loss | >1% sustained | Investigate WAN or link health
LAN jitter | >2 ms sustained | Check congestion or oversubscription

About NetworkTigers

NetworkTigers is the leader in the secondary market for Grade A, seller-refurbished networking equipment. Founded in January 1996 as Andover Consulting Group, which built and re-architected data centers for Fortune 500 firms, NetworkTigers provides consulting and network equipment to global governmental agencies, Fortune 2000, and healthcare companies. www.networktigers.com.


Katrina Boydon
Katrina Boydon is a veteran technology writer and editor known for turning complex ideas into clear, readable insights. She embraces AI as a helpful tool but keeps the editing, and the skepticism, firmly human.
