Shared Resources
Part of Networking
Managing shared network resources effectively: access control, coordination, and utilization.
Why This Matters
Sharing resources over a network multiplies their value: one printer serves twenty users, one storage server holds data for an entire organization, one database contains records accessible from anywhere. But sharing without management creates chaos — files overwritten, storage exhausted, resources monopolized by one user at the expense of others.
The discipline of managing shared resources addresses the practical challenges that arise when multiple users or processes compete for the same resource. Access control determines who can use what. Locking prevents simultaneous conflicting modifications. Quotas prevent overconsumption. Monitoring provides visibility into how resources are being used and when capacity needs to expand.
Understanding shared resource management translates directly to building networks that work reliably in practice, not just networks that work under ideal conditions.
File and Record Locking
When two users modify the same file simultaneously without coordination, both read the current version, both make changes, and both write back their versions; the second write overwrites the first, silently discarding the first user's changes. This is the lost update problem.
File locking prevents lost updates by allowing only one writer at a time. When a user opens a file for writing, the system places a write lock on the file. Other users attempting to open the file for writing are blocked (mandatory locking) or warned (advisory locking) until the first user closes the file.
Most operating systems implement advisory locking — applications must request locks and respect the locks they find. An application that does not request locks can still write to locked files. This design works when all applications cooperate (as they typically do for standard office files) but fails when a poorly written application ignores locks.
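The advisory model can be seen in a short sketch using Python's fcntl interface on a Unix-like system (the file path is hypothetical): the lock is only effective against other processes that also call flock before writing.

```python
import fcntl
import os

# Advisory write lock via flock: cooperating writers must all request
# the lock; a process that skips this call can still write to the file.
path = "/tmp/shared_report.txt"  # hypothetical shared file

fd = os.open(path, os.O_RDWR | os.O_CREAT)
try:
    fcntl.flock(fd, fcntl.LOCK_EX)  # blocks until no other writer holds it
    os.ftruncate(fd, 0)             # safe to replace contents once locked
    os.write(fd, b"updated contents\n")
finally:
    fcntl.flock(fd, fcntl.LOCK_UN)  # release before closing
    os.close(fd)
```

A second process running the same code would block at the LOCK_EX call until the first releases the lock, which is exactly the one-writer-at-a-time behavior described above.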
Mandatory locking enforces locks at the kernel level — even applications that do not request locks cannot access a mandatorily locked file. This is more robust but can cause deadlocks (two processes each waiting for the other to release a lock) that require administrator intervention.
For databases, record-level locking allows multiple users to modify different records in the same database simultaneously, with locks placed only on the specific records being modified. This fine-grained locking dramatically increases concurrency compared to locking the entire database for each modification.
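The concurrency gain from fine-grained locking can be illustrated with a toy in-memory store (not a real database) that keeps one lock per record key, so writers to different records never wait on each other:

```python
import threading
from collections import defaultdict

class RecordStore:
    """Toy store with per-record locks: writers to different records
    proceed concurrently; writers to the same record serialize."""

    def __init__(self):
        self._records = {}
        self._locks = defaultdict(threading.Lock)  # one lock per record key
        self._meta = threading.Lock()              # guards the lock table itself

    def update(self, key, fn):
        with self._meta:
            lock = self._locks[key]
        with lock:  # only this record is locked, not the whole store
            self._records[key] = fn(self._records.get(key))

store = RecordStore()
store.update("acct-1", lambda v: (v or 0) + 100)
store.update("acct-2", lambda v: (v or 0) + 50)  # unaffected by acct-1's lock
```

Locking the whole store for each update would serialize both calls; here only updates to the same key contend.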
Access Control Lists
An Access Control List (ACL) is an explicit list of which users or groups have which permissions on a resource. Each entry in the ACL grants or denies specific operations (read, write, execute, delete) to a specific user or group.
Traditional Unix file permissions use a simplified ACL: permissions are defined for the file owner, the file’s group, and everyone else. Each category has separate read, write, and execute bits. This is compact and simple but can represent only three permission categories.
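The three-category model maps to fixed bit groups in the file mode, which a short sketch can make concrete (using a temporary file so the example is self-contained):

```python
import os
import stat
import tempfile

# Unix permissions are three rwx bit groups: owner, group, others.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o640)  # owner rw-, group r--, others ---

mode = os.stat(path).st_mode
perms = stat.filemode(mode)  # human-readable form, e.g. "-rw-r-----"
os.remove(path)
```

Anything finer than these three categories, such as granting one extra user read access, is exactly what POSIX ACLs add.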
POSIX ACLs extend Unix permissions to support arbitrary numbers of named users and groups. A file can have permissions explicitly granted to specific users who are not the owner and to groups other than the file’s group. POSIX ACLs are supported by Linux and most modern Unix systems.
Windows NTFS permissions use ACLs throughout. Every file and directory has an ACL listing each user or group (represented as a Security Identifier, SID) with their allowed and denied permissions. Permissions can be inherited from parent directories, simplifying administration of large directory trees.
For practical ACL design, use groups rather than individual users wherever possible. Adding a user to the appropriate group grants them all the permissions associated with that group, rather than requiring individual ACL updates on every resource. When a user’s role changes, remove them from old groups and add them to new groups — their permissions update automatically.
Quotas and Fair Use
Without quotas, one user can consume all available shared storage, leaving nothing for others. Storage quotas limit the maximum disk space each user can consume. Process quotas limit CPU time, memory, and other compute resources.
Disk quotas in Unix/Linux are configured per filesystem, per user, and per group. Soft limits warn the user when they approach the limit but allow temporary exceedance. Hard limits refuse writes once the quota is exceeded. Grace periods define how long a user can remain over the soft limit before the soft limit is treated as a hard limit.
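The soft/hard/grace interaction can be sketched as a small decision function; this is an illustration of the policy, not how the kernel quota subsystem is implemented:

```python
from datetime import datetime, timedelta

def quota_check(usage_mb, soft_mb, hard_mb, over_soft_since, grace, now):
    """Illustrative soft/hard quota decision. Returns (allowed, reason)."""
    if usage_mb >= hard_mb:
        return False, "hard limit reached"
    if usage_mb >= soft_mb:
        # Over the soft limit: allowed only while the grace period lasts.
        if over_soft_since and now - over_soft_since > grace:
            return False, "grace period expired"  # soft limit now enforced
        return True, "warning: over soft limit"
    return True, "ok"

result = quota_check(usage_mb=900, soft_mb=800, hard_mb=1000,
                     over_soft_since=datetime(2024, 1, 1),
                     grace=timedelta(days=7),
                     now=datetime(2024, 1, 10))
```

Here the user is under the hard limit but has been over the soft limit for nine days with a seven-day grace period, so writes are refused.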
In practice, set quotas well above typical individual usage to avoid interfering with legitimate work, while still preventing one user from monopolizing storage. If typical users consume 5 GB, setting quotas at 20 GB catches out-of-control processes and deliberate overconsumption while rarely affecting normal use.
Bandwidth quotas limit network throughput for individual users or services. Traffic shaping tools (tc on Linux, traffic policies on managed switches) can assign bandwidth budgets to different users or traffic types. During periods of contention, each entity gets at least its budgeted share, preventing any one user from saturating the network connection for all others.
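Most shaping tools, including tc's tbf queue, are built on the token-bucket idea: tokens accrue at the budgeted rate, and traffic may only be sent when enough tokens are available. A minimal sketch (parameters are illustrative):

```python
class TokenBucket:
    """Token-bucket shaper sketch: a user gets a byte-per-second budget,
    with bursts allowed up to the bucket capacity."""

    def __init__(self, rate_bps, capacity):
        self.rate = rate_bps      # tokens (bytes) added per second
        self.capacity = capacity  # maximum burst size in bytes
        self.tokens = capacity
        self.last = 0.0

    def allow(self, nbytes, now):
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False  # over budget: packet must wait or be dropped

bucket = TokenBucket(rate_bps=1000, capacity=2000)
sent = [bucket.allow(1500, t) for t in (0.0, 0.1, 2.5)]
```

The first 1500-byte send drains the bucket, the second arrives before enough tokens have accrued and is refused, and the third succeeds after the bucket refills, which is the burst-then-sustain behavior shapers enforce.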
Contention and Priority
When more users want a resource than it can serve simultaneously, some must wait. How the system decides who waits and who gets served first is the priority and scheduling policy.
First-come, first-served (FIFO) is the default for most systems. Jobs are processed in order of arrival. This is fair in the sense that no user is privileged over others, but it means a large job that arrives first blocks many small jobs behind it — even if those small jobs would collectively finish faster.
Priority queues serve high-priority requests ahead of low-priority ones. High-priority requests are those marked as interactive (user is waiting for a response), time-sensitive (medical alerts, critical notifications), or administratively designated as important. Low-priority requests are background tasks (backups, bulk data processing) that can wait.
Round-robin scheduling gives each user a time slice and cycles through users, ensuring everyone gets periodic access rather than waiting indefinitely. This is the basis of time-sharing for CPU scheduling and is used in network access scheduling as well.
In practice, combining priority with fairness works best: high-priority tasks get immediate access; among equal-priority tasks, round-robin ensures fairness; low-priority background tasks get whatever capacity is left.
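This combined policy can be sketched as a small scheduler: jobs are grouped by priority, each priority level is drained in turn, and within a level every job gets one time slice per round. The job names and work units are hypothetical.

```python
from collections import deque

def schedule(jobs, quantum=1):
    """Priority plus round-robin: lower priority number runs first;
    equal-priority jobs share via round-robin time slices.
    jobs: list of (name, priority, units_of_work)."""
    queues = {}
    for name, prio, work in jobs:
        queues.setdefault(prio, deque()).append([name, work])
    order = []
    for prio in sorted(queues):       # highest priority level first
        q = queues[prio]
        while q:
            job = q.popleft()
            order.append(job[0])      # job runs for one quantum
            job[1] -= quantum
            if job[1] > 0:
                q.append(job)         # unfinished: back of its own queue
    return order

run_order = schedule([("backup", 2, 2), ("alert", 0, 1),
                      ("web-a", 1, 2), ("web-b", 1, 2)])
```

The urgent alert runs immediately, the two interactive jobs alternate fairly, and the backup consumes whatever capacity is left at the end.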
Monitoring and Utilization Tracking
Resource monitoring provides the data needed to manage shared resources effectively. Without monitoring, you are operating blind — you cannot tell whether resources are adequate until they fail, and you cannot identify who is using what.
System monitoring tools (Nagios, Prometheus, Grafana, MRTG) collect and display metrics including: disk utilization (how much storage is used and how quickly it is growing), network utilization (traffic volume and peak rates), CPU and memory usage on servers, and per-user resource consumption.
Trending metrics over time reveals patterns: storage that grows 1 GB per day needs 30 GB capacity added every month; network utilization that peaks at 80% during business hours needs expansion before it reaches 100%. Proactive capacity management, driven by trends, prevents the crises that occur when resources are exhausted unexpectedly.
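The arithmetic behind trend-driven capacity planning is simple enough to sketch: estimate the growth rate from recent samples and project how long remaining capacity will last. The numbers below are illustrative.

```python
def days_until_full(capacity_gb, used_gb, daily_samples_gb):
    """Project when storage runs out from recent daily usage readings.
    The growth rate is the average day-over-day increase (a rough trend)."""
    deltas = [b - a for a, b in zip(daily_samples_gb, daily_samples_gb[1:])]
    rate = sum(deltas) / len(deltas)  # GB per day
    if rate <= 0:
        return None  # usage is flat or shrinking
    return (capacity_gb - used_gb) / rate

# 1 GB/day growth on a 500 GB volume with 470 GB used: about a month left.
remaining = days_until_full(500, 470, [466, 467, 468, 469, 470])
```

A monitoring system doing the same projection continuously can alert well before the volume fills, turning an outage into a scheduled expansion.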
Per-user utilization tracking identifies outliers — users consuming far more resources than typical — for investigation. Unusually high usage might indicate a malfunctioning process (a program that writes logs indefinitely, consuming storage), a compromised account (used for unauthorized activity), or a legitimate need that requires discussion about whether quotas should be raised.
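One simple way to flag such outliers is a standard-deviation test over per-user usage; a sketch with made-up usage figures (the low threshold suits small samples; production systems would use more robust statistics):

```python
import statistics

def usage_outliers(usage_by_user, threshold=1.5):
    """Flag users whose consumption sits more than `threshold` standard
    deviations above the mean: candidates for investigation."""
    values = list(usage_by_user.values())
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # everyone identical: no outliers
    return [user for user, v in usage_by_user.items()
            if (v - mean) / stdev > threshold]

# dave's usage dwarfs everyone else's and gets flagged for review.
flagged = usage_outliers({"alice": 5, "bob": 6, "carol": 4, "dave": 120})
```

Whether a flagged user is a runaway process, a compromised account, or a legitimate heavy user is then a human decision, as the paragraph above notes.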
Logging all access to sensitive resources (security cameras, medical records, financial data) creates an audit trail. Regular review of access logs — or automated alerting on anomalous access patterns — detects unauthorized access. The existence of comprehensive logs also deters misconduct, because users know their access is recorded.