Network Fundamentals
Part of Networking
Core concepts and vocabulary that underlie all computer networking.
Why This Matters
Networking is built on a set of foundational concepts that appear at every level of every protocol. Packets, addresses, protocols, layers, bandwidth, latency — these terms are used throughout networking literature and documentation. Without a solid grasp of these fundamentals, networking becomes a collection of disconnected procedures rather than a coherent body of knowledge.
Understanding fundamentals also enables problem solving. When something goes wrong — and in networking, something always eventually goes wrong — the ability to reason from first principles determines whether you can diagnose and fix the problem or merely try things randomly until something works. First-principles understanding is the difference between a network administrator and someone who can only follow instructions.
This article covers the vocabulary and conceptual framework needed to understand all other networking topics. It is deliberately abstract — the specifics of particular protocols are less important than grasping the underlying ideas.
Nodes, Links, and Topology
A network consists of nodes connected by links. A node is any device that participates in the network: a computer, a printer, a switch, a router. A link is a communication path between nodes: a cable, a wireless connection, or any other medium that carries signals.
The arrangement of nodes and links is the network topology. A bus topology connects all nodes to a single shared medium. A star topology connects all nodes to a central hub or switch. A ring topology connects each node to two adjacent nodes in a circular arrangement. A mesh topology connects each node directly to many or all other nodes.
Topology affects reliability, performance, and cost. Bus topologies are simple and cheap but a single fault can disrupt the entire network. Star topologies are more expensive (require a central device) but confine faults to individual links. Mesh topologies provide maximum redundancy but require many links, and the number of links grows quadratically with the number of nodes.
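The quadratic growth of full-mesh links can be verified with a quick calculation (the helper name below is illustrative, not from any library):

```python
# A full mesh of n nodes needs one link per pair of nodes:
# n * (n - 1) / 2 links -- quadratic growth in n.
def full_mesh_links(n: int) -> int:
    return n * (n - 1) // 2

for n in (4, 10, 50):
    print(n, "nodes ->", full_mesh_links(n), "links")
# 4 nodes -> 6 links, 10 -> 45, 50 -> 1225
```

Doubling the node count roughly quadruples the number of links, which is why large full meshes are rare and partial meshes are the practical compromise.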
Real networks combine topologies. A campus network might have a star topology at the building level, with each building connected to a central switch, and a partial mesh between buildings for redundancy.
Addressing
Addressing answers the question: how does a message get to the right destination? A network address identifies a specific node (or a set of nodes, in the case of broadcast and multicast addresses) so that routers and switches can direct traffic to the correct destination.
Networks use multiple layers of addressing for different purposes. MAC addresses (48-bit hardware addresses) identify individual network interfaces at the link layer — they are used for delivery within a single network segment. IP addresses (32-bit in IPv4, 128-bit in IPv6) identify nodes at the network layer — they are used for routing across multiple network segments. Port numbers (16-bit values in TCP and UDP) identify specific services or processes within a node at the transport layer.
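The address widths above can be checked with Python's standard ipaddress module (the specific addresses used are documentation examples, not real hosts):

```python
import ipaddress

# Network-layer addresses: 32-bit IPv4 and 128-bit IPv6.
v4 = ipaddress.ip_address("192.0.2.10")
v6 = ipaddress.ip_address("2001:db8::1")
print(v4.version, v6.version)        # 4 6

# A MAC address is 48 bits, conventionally written as six hex octets.
mac = bytes.fromhex("001A2B3C4D5E")
print(len(mac) * 8)                  # 48

# A port number is a 16-bit value, so it ranges from 0 to 65535.
print(2**16 - 1)                     # 65535
```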
Address assignment can be static (manually configured) or dynamic (assigned by a server such as DHCP). Static addresses are simpler and more predictable but require manual management. Dynamic addresses scale better and reduce configuration errors but require additional infrastructure.
Address translation (NAT, Network Address Translation) allows multiple devices to share a single public IP address by rewriting addresses in packet headers. This has been essential for extending the life of IPv4 (which has a limited number of addresses) but adds complexity and breaks some protocols.
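A minimal sketch of how a NAT might maintain its translation table; the Nat class and port-allocation scheme below are my own simplification (real NATs also track protocol, connection state, timeouts, and inbound mappings):

```python
# Toy NAT: outbound traffic from private (ip, port) pairs shares one
# public IP, with each pair mapped to a distinct public port.
PUBLIC_IP = "203.0.113.5"   # example address, not a real host

class Nat:
    def __init__(self):
        self.table = {}          # (private_ip, private_port) -> public_port
        self.next_port = 40000

    def translate_out(self, private_ip: str, private_port: int):
        key = (private_ip, private_port)
        if key not in self.table:
            self.table[key] = self.next_port
            self.next_port += 1
        return PUBLIC_IP, self.table[key]

nat = Nat()
print(nat.translate_out("192.168.1.10", 51000))  # ('203.0.113.5', 40000)
print(nat.translate_out("192.168.1.11", 51000))  # ('203.0.113.5', 40001)
```

The rewriting of addresses in flight is what breaks protocols that embed IP addresses in their payloads, since the NAT only rewrites headers.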
Bandwidth and Throughput
Bandwidth is the theoretical maximum rate at which data can be transmitted over a link, measured in bits per second (bps). A 100 Mbps Ethernet link has a bandwidth of 100 million bits per second.
Throughput is the actual rate of useful data delivery, which is always less than bandwidth. Protocol overhead (headers, acknowledgments, error correction), processing delays, queuing, and retransmissions all reduce throughput below the theoretical maximum. On a healthy 100 Mbps Ethernet link, practical throughput for large file transfers is typically 90-95 Mbps. For small packets (many individual request-response exchanges), throughput can fall well below 50% of bandwidth.
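The effect of per-packet overhead on throughput can be estimated with a rough calculation; the header sizes assumed below (Ethernet framing plus preamble and inter-frame gap, IPv4 and TCP without options) are typical, but real values vary:

```python
# Fraction of link capacity carrying useful payload, per packet.
# Assumed fixed costs: 20 B preamble + inter-frame gap, 18 B Ethernet
# header + FCS, 20 B IPv4 header, 20 B TCP header (no options).
def goodput_fraction(payload_bytes: int) -> float:
    overhead = 20 + 18 + 20 + 20
    return payload_bytes / (payload_bytes + overhead)

print(round(goodput_fraction(1460), 3))  # full-size segments: ~0.949
print(round(goodput_fraction(100), 3))   # small packets: ~0.562
```

The large-packet figure lines up with the 90-95 Mbps observed on a 100 Mbps link; for small packets the fixed per-packet cost dominates.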
Bandwidth is not the same as speed. Two users simultaneously saturating a shared 1 Gbps link will each see roughly 500 Mbps of throughput, not 1 Gbps each. Sharing is implicit in nearly all network connections.
The bandwidth-delay product is the amount of data that can be in flight simultaneously on a link: bandwidth × round-trip time. A link with 100 Mbps bandwidth and 10 ms round-trip time has a bandwidth-delay product of 1,000,000 bits = 125,000 bytes. TCP’s congestion window must be at least this large to fully utilize the link. On high-bandwidth, long-delay links (such as satellite connections), the bandwidth-delay product can be very large, and special TCP tuning is needed to achieve high throughput.
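The calculation itself is simple enough to sketch (the function name is my own):

```python
# Bandwidth-delay product: how many bytes can be "in flight" on a link.
def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    return bandwidth_bps * rtt_s / 8

print(bdp_bytes(100e6, 0.010))  # 100 Mbps, 10 ms RTT -> 125000.0 bytes
print(bdp_bytes(100e6, 0.600))  # satellite-like 600 ms RTT -> 7500000.0 bytes
```

The second case shows why satellite links need large TCP windows: with a default-sized window, the sender spends most of each round trip idle, waiting for acknowledgments.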
Latency and Delay
Latency is the time for data to travel from source to destination, measured from the first bit sent to the last bit received. It has several components:
Propagation delay is the time for a signal to travel through the medium. Electromagnetic signals travel through copper at roughly 2/3 the speed of light (~200,000 km/s); light in fiber is similarly slowed to about 200,000 km/s. A 1-km cable has about 5 microseconds of propagation delay. Cross-continental fiber links have 30-50 ms of propagation delay; geostationary satellite links have roughly 250 ms one way, giving round trips of 500+ ms.
Transmission delay is the time to push all bits of a packet onto the link. A 1,500-byte packet on a 100 Mbps link takes 120 microseconds (1,500 × 8 / 100,000,000 = 0.00012 s).
Queuing delay is the time a packet spends waiting in a buffer for the link to be available. This is zero when the link is lightly loaded and can be large when the link is congested.
Processing delay is the time routers and switches spend examining the packet header and deciding where to forward it. Modern hardware switches have processing delays of microseconds; software routers may introduce more.
Total latency is the sum of all these components on every hop from source to destination. For most local network applications, latency is imperceptible. For real-time applications (voice, video, remote control) and for high-frequency transactions, latency matters significantly.
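The per-hop components above can be summed in a short sketch; the propagation speed, processing-delay default, and example values are illustrative assumptions:

```python
# One-hop latency as the sum of the four components described above.
def hop_latency_s(distance_km: float, packet_bytes: int, bandwidth_bps: float,
                  queuing_s: float = 0.0, processing_s: float = 2e-6) -> float:
    propagation = distance_km / 200_000          # ~200,000 km/s in copper/fiber
    transmission = packet_bytes * 8 / bandwidth_bps
    return propagation + transmission + queuing_s + processing_s

# 1 km link, 1500-byte packet, 100 Mbps:
# 5 us propagation + 120 us transmission + 2 us processing
print(hop_latency_s(1, 1500, 100e6))  # ~1.27e-4 s
```

Note that on this short, fast link the transmission delay dominates; on a cross-continental link the propagation term would dwarf everything else.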
The OSI and TCP/IP Models
Networking protocols are organized in layers, where each layer provides services to the layer above and uses services from the layer below. Layers allow each protocol to be designed independently — changes to the physical medium do not require changes to the application protocol, and vice versa.
The OSI (Open Systems Interconnection) model defines seven layers: Physical (bits on wire), Data Link (frames between adjacent nodes), Network (packets across multiple hops), Transport (reliable end-to-end delivery), Session (connection management), Presentation (data formatting), and Application (user-facing protocols).
The TCP/IP model is simpler and more closely matches how the internet actually works. It has four layers: Link (OSI Physical + Data Link), Internet (OSI Network), Transport (OSI Transport), and Application (OSI Session + Presentation + Application).
Each layer communicates logically with its peer layer on the other machine, while physically passing data to the layer below. An HTTP request from a web browser traverses down through Application, Transport, Internet, and Link layers at the sender, travels across the physical network, then traverses up through the layers at the receiver. At each layer, headers are added (encapsulation) going down and removed (de-encapsulation) going up.
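Encapsulation can be sketched as nested headers; the header strings below are placeholders, not real wire formats:

```python
# Going down the stack, each layer prepends its own header;
# going up, each layer strips its header and passes the rest along.
def encapsulate(payload: bytes) -> bytes:
    segment = b"TCP|" + payload   # Transport layer adds its header
    packet = b"IP|" + segment     # Internet layer wraps the segment
    frame = b"ETH|" + packet      # Link layer wraps the packet
    return frame

def decapsulate(frame: bytes) -> bytes:
    for header in (b"ETH|", b"IP|", b"TCP|"):
        assert frame.startswith(header), "unexpected header"
        frame = frame[len(header):]
    return frame

wire = encapsulate(b"GET / HTTP/1.1")
print(wire)               # b'ETH|IP|TCP|GET / HTTP/1.1'
print(decapsulate(wire))  # b'GET / HTTP/1.1'
```

Each layer only inspects its own header, which is exactly what lets the layers evolve independently.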
Understanding the layers lets you identify at which layer a problem occurs. No link light = physical layer. Link up but no IP connectivity = network layer. IP works but specific application fails = application layer. This layered diagnosis guides troubleshooting systematically.
Protocol Design Principles
All network protocols share a common structure: a set of messages, rules for when each message is sent, and rules for what to do when each message is received. Good protocols are characterized by several properties.
They are deterministic: given the same inputs and state, a protocol always produces the same behavior. This makes protocols predictable and testable.
They handle errors explicitly: they define what to do when a message is lost, corrupted, or arrives out of order. Protocols that only handle the happy path fail in unpredictable ways when the network misbehaves.
They have clear state machines: at any moment, each participant in a protocol is in a defined state, and each received message causes a defined transition to a new state. State machine diagrams are a useful tool for designing and understanding protocols.
They are versioned or extensible: because protocols must evolve and interoperate across hardware from different eras, good protocols include version numbers or extension mechanisms that allow new features without breaking old implementations.
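These properties can be illustrated with a toy state machine, loosely inspired by TCP's connection setup but not a real protocol:

```python
# Each (state, message) pair maps to a defined next state; any other
# combination is an explicit error rather than undefined behavior.
TRANSITIONS = {
    ("CLOSED", "connect"): "SYN_SENT",
    ("SYN_SENT", "syn_ack"): "ESTABLISHED",
    ("ESTABLISHED", "close"): "CLOSED",
}

def step(state: str, message: str) -> str:
    try:
        return TRANSITIONS[(state, message)]
    except KeyError:
        raise ValueError(f"message {message!r} not valid in state {state!r}")

s = "CLOSED"
for msg in ("connect", "syn_ack", "close"):
    s = step(s, msg)
print(s)  # CLOSED
```

Because every transition is tabulated, the protocol is deterministic and testable, and unexpected messages are handled explicitly instead of silently corrupting state.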