What is a VPN?
A VPN (Virtual Private Network) extends a private network across a public network (like the Internet), enabling users to send and receive data as if their computing devices were directly connected to the private network.
It achieves this by creating a secure, encrypted 'tunnel' for your data. The main benefits are:
- Privacy: your traffic is hidden from the local network and your ISP, and websites see the VPN server's IP address instead of yours.
- Security: encryption protects your data on untrusted networks, such as public Wi-Fi.
- Remote access: users can securely reach resources on a private network (e.g., a company intranet) from anywhere.
Explain what a proxy server is.
A proxy server acts as an intermediary for requests from clients seeking resources from other servers.
When a user makes a request (e.g., to open a webpage), the request first goes to the proxy server. The proxy server then forwards the request to the web server on the user's behalf. The response from the web server comes back to the proxy, which then forwards it to the user.
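The request flow above can be sketched as a minimal in-memory model. The class names here are illustrative, not a real proxy API; the point is only that the client talks to the proxy, and the proxy forwards on the client's behalf.

```python
# Toy model of the proxy request flow: client -> proxy -> server -> proxy -> client.
# Class and method names are illustrative, not a real proxy implementation.

class WebServer:
    def handle(self, url: str) -> str:
        # Pretend to serve a page for the requested URL.
        return f"<html>content of {url}</html>"

class ProxyServer:
    def __init__(self, upstream: WebServer):
        self.upstream = upstream
        self.log = []          # proxies often log (or cache) what they see

    def handle(self, url: str) -> str:
        self.log.append(url)                 # the proxy sees the request first...
        return self.upstream.handle(url)     # ...then forwards it to the real server

# The client only ever talks to the proxy, never to the web server directly.
proxy = ProxyServer(WebServer())
page = proxy.handle("http://example.com/index.html")
```

Because every request passes through it, the proxy is the natural place to add caching, filtering, or logging.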
Key uses include:
- Caching: storing frequently requested content to speed up responses and save bandwidth.
- Filtering: blocking access to certain websites or content (common in schools and workplaces).
- Anonymity: hiding the client's IP address from the destination server.
- Logging and access control for outbound traffic.
A key difference from a VPN is that a VPN typically encrypts all of a device's traffic, while a proxy often only handles traffic for a specific application (like a web browser).
What is latency?
Latency is the time delay in data communication. It is the time it takes for a data packet to travel from its source to its destination.
It is typically measured in milliseconds (ms). Low latency is desirable for a responsive network connection. High latency results in noticeable lag.
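Latency is usually measured as a round-trip time (RTT), as tools like `ping` do. A minimal sketch of the idea, using a local socket pair as a stand-in for a real network path (loopback latency is far lower than Internet latency):

```python
import socket
import time

# Measure round-trip time over a local socket pair: send a small message,
# have the peer echo it back, and time the whole exchange.
a, b = socket.socketpair()

start = time.perf_counter()
a.sendall(b"ping")        # send a small packet...
b.sendall(b.recv(4))      # ...the peer echoes it back...
a.recv(4)                 # ...and we receive the echo
rtt_ms = (time.perf_counter() - start) * 1000  # round-trip time in ms

a.close()
b.close()
```

On a loopback path this is a fraction of a millisecond; over the Internet, tens to hundreds of milliseconds are typical.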
Factors that contribute to latency include:
- Propagation delay: the time a signal takes to cover the physical distance (bounded by the speed of light).
- Transmission delay: the time needed to push all of a packet's bits onto the link.
- Processing delay: the time routers and switches spend examining and forwarding each packet.
- Queuing delay: the time packets spend waiting in buffers on congested devices.
What is bandwidth?
Bandwidth is the maximum rate of data transfer across a given path in a network. It measures how much data can be sent over a specific connection in a given amount of time.
Bandwidth is typically measured in bits per second (bps), kilobits per second (Kbps), megabits per second (Mbps), or gigabits per second (Gbps).
It is often compared to the width of a pipe: a wider pipe (higher bandwidth) can carry more water (data) at once than a narrow pipe (lower bandwidth).
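The definition translates directly into arithmetic: transfer time is data size divided by bandwidth, remembering that file sizes are in bytes while bandwidth is in bits per second.

```python
def transfer_time_seconds(size_bytes: int, bandwidth_bps: float) -> float:
    """Time to push size_bytes through a link of bandwidth_bps (bits per second)."""
    return (size_bytes * 8) / bandwidth_bps  # 8 bits per byte

# A 100 MB file over a 100 Mbps link: (100e6 bytes * 8) / 100e6 bps = 8 seconds.
t = transfer_time_seconds(100_000_000, 100_000_000)
```

Note this gives only the bandwidth-limited lower bound; real transfers also pay for latency and protocol overhead.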
What is the difference between latency and bandwidth?
Latency and bandwidth describe two different aspects of network performance and are often confused.
Analogy: Imagine you need to move boxes with a truck. Bandwidth is the size of the truck (how many boxes it can carry per trip); latency is how long one trip takes. A bigger truck does not make the road shorter, and a shorter road does not make the truck bigger.
A high-bandwidth, high-latency connection can move a lot of data, but each piece of data will take a long time to start arriving. A low-bandwidth, low-latency connection can't move much data at once, but the data it does send arrives very quickly.
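A simple first-order model makes the trade-off concrete: total transfer time is roughly one latency (until the first byte arrives) plus the bandwidth-limited transmission time. The two example links below are illustrative numbers, not measurements.

```python
def total_time_seconds(size_bytes: int, bandwidth_bps: float, latency_s: float) -> float:
    # First byte arrives after one latency; the rest is limited by bandwidth.
    return latency_s + (size_bytes * 8) / bandwidth_bps

# Transferring 1 MB over two illustrative links:
fat_slow  = total_time_seconds(1_000_000, 1_000_000_000, 0.500)  # 1 Gbps, 500 ms latency
thin_fast = total_time_seconds(1_000_000,    10_000_000, 0.005)  # 10 Mbps, 5 ms latency
```

Here the high-latency link still wins for a 1 MB transfer (0.508 s vs 0.805 s), but for a tiny payload such as a single keystroke the low-latency link is far more responsive, which is why gamers care about ping and bulk downloaders care about bandwidth.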
What is a DHCP starvation attack?
A DHCP Starvation Attack is a type of denial-of-service (DoS) attack that targets a DHCP server.
The attacker uses a tool to broadcast a huge number of DHCP Discover requests with spoofed (fake) MAC addresses. The DHCP server, believing these are legitimate requests from many different clients, responds by offering and reserving IP addresses for each one.
The attacker's tool receives these offers but never sends the final ACK. The DHCP server's pool of available IP addresses is quickly exhausted, or 'starved'. As a result, legitimate new users on the network cannot obtain an IP address and are denied service.
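The exhaustion mechanism can be illustrated with a toy model of the server's address pool. No real DHCP packets are involved; the class below only mimics the reserve-on-Discover behavior that the attack exploits.

```python
# Toy model of a DHCP address pool under a starvation attack.
# This simulates pool exhaustion only; it sends no real DHCP traffic.

class DhcpServer:
    def __init__(self, pool_size: int):
        self.free = [f"192.168.1.{i}" for i in range(100, 100 + pool_size)]
        self.leases = {}   # MAC address -> reserved IP

    def discover(self, mac: str):
        """Reserve an IP for the requesting MAC, if any addresses remain."""
        if mac in self.leases or not self.free:
            return None          # pool exhausted (or already leased)
        ip = self.free.pop()
        self.leases[mac] = ip    # address stays reserved for this MAC
        return ip

server = DhcpServer(pool_size=50)

# The attacker floods Discover messages with spoofed MAC addresses and
# never completes the handshake, so every offered address stays reserved.
for i in range(1000):
    server.discover(f"02:00:00:00:{i // 256:02x}:{i % 256:02x}")

# A legitimate new client now cannot obtain an address.
legit_ip = server.discover("aa:bb:cc:dd:ee:ff")
```

Mitigations such as DHCP snooping and per-port MAC limits on switches work precisely because they cap how many spoofed identities a single port can present.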
What is flow control in networking?
Flow control is a mechanism at the Transport Layer (Layer 4) that manages the rate of data transmission between two nodes to prevent a fast sender from overwhelming a slow receiver.
If a server sends data faster than the client can process it, the client's buffer will overflow, leading to data loss. TCP uses a flow control mechanism, often a 'sliding window' protocol. The receiver specifies the amount of data it is currently able to receive (the 'window size') in its acknowledgment packets. The sender can only send up to that amount of data before it must wait for another acknowledgment with an updated window size.
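The sliding-window idea can be sketched as a toy sender that never puts more data on the wire than the receiver's advertised window allows. This is a simplification (the window is fixed here, and real TCP overlaps sending with acknowledgments), but it shows the core constraint.

```python
# Toy sliding-window model: the sender may only transmit up to the window
# advertised by the receiver, then must wait for an acknowledgment.

def send_with_flow_control(data: bytes, advertised_window: int) -> list:
    sent_chunks = []
    offset = 0
    while offset < len(data):
        # Never send more than the receiver's advertised window.
        chunk = data[offset:offset + advertised_window]
        sent_chunks.append(chunk)
        offset += len(chunk)
        # (In real TCP, the sender now waits for an ACK carrying an
        #  updated window size before sending the next chunk.)
    return sent_chunks

chunks = send_with_flow_control(b"x" * 10_000, advertised_window=4096)
```

With a 4096-byte window, 10,000 bytes go out as three chunks (4096 + 4096 + 1808), each gated by the receiver's capacity.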
What is congestion control?
Congestion control is a mechanism used to regulate the amount of traffic a sender injects into the network to avoid overwhelming the network itself (i.e., the routers and links between the sender and receiver).
While flow control is about protecting the *receiver*, congestion control is about protecting the *network*. TCP uses mechanisms like 'slow start' and 'congestion avoidance' to probe the network for available capacity. If it detects packet loss (which it assumes is due to congestion), it dramatically reduces its sending rate to ease the load on the network.
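Slow start can be sketched with a toy congestion window that doubles every round trip until it exceeds the network's capacity, then collapses on loss (the hard reset to one segment models classic TCP Tahoe; modern variants back off less drastically).

```python
# Toy model of TCP slow start: the congestion window (cwnd) doubles each
# round trip until packet loss, then resets to 1 segment (Tahoe-style).

def slow_start_trace(network_capacity: int, round_trips: int) -> list:
    cwnd = 1                        # congestion window, in segments
    history = []
    for _ in range(round_trips):
        history.append(cwnd)
        if cwnd > network_capacity: # sending above capacity causes loss...
            cwnd = 1                # ...so back off hard to ease the load
        else:
            cwnd *= 2               # exponential growth while the network copes
    return history

trace = slow_start_trace(network_capacity=32, round_trips=10)
```

The trace grows 1, 2, 4, ... until it overshoots the capacity, then drops back to 1 and begins probing again, which is the characteristic sawtooth of TCP throughput.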
What is a broadcast domain?
A broadcast domain is a logical division of a computer network in which all nodes can reach each other by broadcast at the Data Link layer (Layer 2).
When a device sends a broadcast frame (e.g., an ARP request), it is received by every other device within the same broadcast domain. Hubs repeat broadcasts out of every port, and switches flood them out of every port except the one the frame arrived on, so all devices connected to them are in the same broadcast domain. Routers do not forward broadcasts, so they are used to separate and create broadcast domains.
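The boundary-drawing role of routers can be sketched with a toy topology: a switch floods broadcasts to all its attached devices, while a router connecting two switches forwards nothing at Layer 2.

```python
# Toy model: a switch floods broadcast frames to every attached device,
# while a router between switches never forwards Layer 2 broadcasts.

class Switch:
    def __init__(self, devices):
        self.devices = devices

    def broadcast(self, sender: str) -> list:
        # Every attached device except the sender receives the frame.
        return [d for d in self.devices if d != sender]

class Router:
    """Connects two switches but does not forward broadcasts between them."""
    def __init__(self, side_a: Switch, side_b: Switch):
        self.side_a, self.side_b = side_a, side_b

lan1 = Switch(["pc1", "pc2"])
lan2 = Switch(["pc3", "pc4"])
Router(lan1, lan2)   # the router joins the LANs without merging their broadcast domains

# An ARP broadcast from pc1 reaches pc2, but never pc3 or pc4.
receivers = lan1.broadcast("pc1")
```

Each switch here is one broadcast domain; the router is the boundary between them.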
What is a collision domain?
A collision domain is a section of a network where data packets sent from different nodes at the same time can collide with each other. A collision forces the devices to retransmit their packets, which reduces network efficiency. All devices connected to a hub share a single collision domain, whereas each port on a switch or router forms its own collision domain; modern full-duplex switched links have largely eliminated collisions.