1. Bandwidth
1) Bandwidth concept
Baidu Encyclopedia defines bandwidth as the "highest data rate" that can be carried from one point in a network to another in unit time.
The bandwidth of a computer network is the highest data rate the network can carry, that is, how many bits it can transmit per second (the common unit is bps, bits per second).
Simply put, bandwidth can be compared to a highway: it represents the number of vehicles that can pass per unit of time.
2) Bandwidth units
Bandwidth is usually expressed in bps, meaning bits per second: 1000 bit/s = 1 Kbit/s, 1,000,000 bit/s = 1 Mbit/s, 1,000,000,000 bit/s = 1 Gbit/s.
The "bit/s" is often omitted when describing bandwidth. For example, "the bandwidth is 100M" actually means 100 Mbps, where Mbps is megabits per second.
However, the download speed shown by software is usually in Byte/s (bytes per second), which requires converting between bytes and bits. Each 0 or 1 in binary is one bit, the smallest unit of data storage; 8 bits make one byte.
Therefore, when we sign up for broadband, a "100M" plan means 100 Mbps: the theoretical download speed is only 12.5 MB/s, and in practice it may be less than 10 MB/s.
This is because many factors prevent the actual speed from reaching the theoretical speed: the user's computer performance, network equipment quality, resource usage, network peak periods, website service capacity, line attenuation, signal attenuation, and so on.
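The Mbps-to-MB/s conversion above can be sketched in a few lines of Python (the helper name is illustrative, not from any library):

```python
# Hypothetical helper: convert an advertised bandwidth in Mbit/s to the
# theoretical maximum download speed in MB/s (1 byte = 8 bits).
def mbps_to_mbytes_per_s(bandwidth_mbps: float) -> float:
    return bandwidth_mbps / 8

# A "100M" broadband plan: theoretical ceiling, before real-world losses.
print(mbps_to_mbytes_per_s(100))  # 12.5 (MB/s)
```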
2. Delay
Simply put, latency is the time it takes for a packet to travel from one end of the network to the other. For example, when I ping Baidu's address from my computer (ping measures the round-trip time for a packet sent from the user's device to a measurement point and immediately returned; it is commonly known as network delay and is measured in milliseconds, ms), the result shows a delay of 12 ms. This means the round trip of the ICMP message from my computer to Baidu's server took 12 ms.
Network delay has four major components: processing delay, queuing delay, sending (transmission) delay, and propagation delay. In practice, we mainly consider sending delay and propagation delay.
1) Processing delay
Network devices such as switches and routers need a certain amount of time to process a packet after receiving it: decapsulating and parsing headers, extracting data, error checking, route lookup, and so on. The processing delay of a typical high-speed router is usually on the order of microseconds or less.
2) Queuing delay
Queuing delay is simply the time a packet spends waiting in the queue of a router or switch before it can be transmitted. It depends on whether other packets are ahead of it in the queue: if the queue is empty and no other packet is being transmitted, the queuing delay is 0; if traffic is heavy and many packets are waiting, the queuing delay can be large. Actual queuing delays typically range from microseconds to milliseconds.
3) Sending delay
Simply put, sending delay (also called transmission delay) is the time a router or switch needs to push a packet's bits onto the network link. If L is the length of the packet in bits and R is the transmission rate in bps of the link from router A to router B, the sending delay is L/R. Actual sending delays typically range from microseconds to milliseconds.
4) Propagation delay
Propagation delay is the time it takes for the signal to propagate across the physical link. It equals the distance between the two routers divided by the propagation speed of the link, that is, D/S, where D is the distance between the two routers and S is the propagation speed. Actual propagation delays are at the millisecond level.
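The four components above can be combined into a one-hop delay calculation. A minimal sketch (function name and figures are illustrative; fibre propagates at roughly 2×10^8 m/s):

```python
# Sketch: one-hop nodal delay from the four components described above.
# Sending delay = L/R, propagation delay = D/S.
def nodal_delay(packet_bits, link_rate_bps, distance_m, propagation_mps,
                processing_s=0.0, queuing_s=0.0):
    sending = packet_bits / link_rate_bps        # L/R
    propagation = distance_m / propagation_mps   # D/S
    return processing_s + queuing_s + sending + propagation

# 1500-byte packet on a 100 Mbit/s link over 200 km of fibre:
d = nodal_delay(1500 * 8, 100e6, 200e3, 2e8)
print(f"{d * 1000:.2f} ms")  # sending 0.12 ms + propagation 1.00 ms = 1.12 ms
```

Note how, at this distance, propagation delay dominates sending delay, which is why long-haul latency is hard to reduce by buying more bandwidth.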
3.Jitter
Network jitter is the difference between the maximum and minimum delay. For example, if the maximum delay when visiting a website is 10 ms and the minimum is 5 ms, the jitter is 5 ms. Jitter is used to evaluate network stability: the smaller the jitter, the more stable the network. Gaming in particular demands high stability, or the experience suffers. As for causes: when the network is congested, queuing delay affects end-to-end delay and can make the delay from router A to router B fluctuate, producing jitter.
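The max-minus-min definition of jitter above can be computed directly from a set of ping samples (the helper is a sketch, not a standard API):

```python
# Sketch: jitter as the spread between the largest and smallest observed delay.
def jitter_ms(delay_samples_ms):
    return max(delay_samples_ms) - min(delay_samples_ms)

samples = [10, 7, 5, 9, 6]   # ping round-trip times in ms
print(jitter_ms(samples))    # 10 - 5 = 5
```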
4. Packet loss
Simply put, packet loss means that one or more data packets fail to reach their destination through the network. If the receiving end finds that data is missing, it sends a request to the sending end, based on the sequence numbers, to retransmit the lost packets.
Packet loss rate refers to the ratio of the number of packets lost to the number of packets sent during the test. For example, if 100 data packets are sent and one data packet is lost, the packet loss rate is 1%.
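The loss-rate calculation above is a simple ratio; a minimal sketch (function name illustrative):

```python
# Sketch: packet loss rate = lost / sent, expressed as a percentage.
def loss_rate_percent(sent: int, received: int) -> float:
    return (sent - received) / sent * 100

print(loss_rate_percent(100, 99))  # 1.0 -> 1% loss, as in the example above
```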
5. Stacking
Stacking refers to connecting multiple switches that support stacking features together through stacking cables to logically virtualize them into one switching device and participate in data forwarding as a whole. Stacking is a horizontal virtualization technology widely used at present. It can improve reliability, expand the number of ports, increase bandwidth, and simplify networking.
1) Why is stacking needed?
Traditional campus networks use device and link redundancy to achieve high reliability, but link utilization is low and network maintenance costs are high. Stacking virtualizes multiple switches into one to simplify network deployment and reduce maintenance workload. Stacking has many advantages:
[1] Improve reliability: The member switches of a stack system back each other up redundantly. As shown in the figure below, SwitchA and SwitchB form a stack system and back each other up; when SwitchA fails, SwitchB takes over to keep the system running normally. In addition, the stack system supports cross-device link aggregation, which also provides link redundancy.
[2] Expand the number of ports: When the number of access users grows and the port density of the original switch can no longer meet access needs, a new switch can be added to form a stack system with the original switch.
[3] Increase bandwidth: When the switch's upstream bandwidth needs to be increased, a new switch can be added to form a stack system with the original switch, and multiple physical links of the member switches can be configured into one aggregation group to increase the upstream bandwidth.
[4] Simplify networking: Multiple devices in the network are stacked and virtualized into a single logical device. The simplified network no longer needs loop-breaking protocols such as MSTP, which simplifies configuration. At the same time, cross-device link aggregation provides fast switchover when a single device fails, improving reliability.
[5] Long-distance stacking: Users on each floor access the external network through corridor switches. Connecting corridor switches that are far apart into a stack is equivalent to having only one access device per building, which makes the network structure much simpler.
Each building then has multiple links to the core network, making the network more robust and reliable, and the configuration of multiple corridor switches is reduced to the configuration of one stack system, lowering management and maintenance costs.
2) What devices can be stacked?
All mainstream switches support stacking. For example, Huawei S series campus switches and CloudEngine data center switches all have models that support stacking. For S series campus switches, only fixed models support stacking; two modular S series switches are instead combined into a cluster. For CloudEngine data center switches, both modular and fixed models have stacking support, the difference being that modular switches support stacking only two devices.
3) How to build a stack?
Before introducing how a stack is established, let us first cover the concepts involved. Every switch in a stack system is called a member switch, and by function a member takes one of three roles:
[1] Master switch (Master): The master switch is responsible for managing the entire stack. There is only one master switch in the stack system.
[2] Standby switch (Standby): The standby switch is the backup of the master switch. There is only one standby switch in the stack system. When the master switch fails, the standby switch takes over all of its services.
[3] Slave switch (Slave): Slave switches are used for service forwarding, and there can be multiple in a stack system. The more slave switches there are, the greater the forwarding bandwidth of the stack system.
All member switches other than the master and standby are slave switches. When the standby switch becomes unavailable, a slave switch assumes the standby role.
The master, standby, and slave switches can all forward service traffic. Adding, removing, or replacing member switches may cause the roles to change.
4) Stack ID
The stack ID is used to identify stack member switches and is the slot number of the member switch. Each stack member switch has a unique stack ID in the stack system.
5) Stacking priority
Stack priority is an attribute of a member switch, used mainly to determine roles during role election: the larger the priority value, the higher the priority, and the more likely the switch is to be elected as the master.
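The priority rule above can be illustrated with a small election sketch. This is not Huawei's actual election algorithm; the lowest-MAC tie-break and all names here are assumptions for illustration:

```python
# Illustrative sketch only: elect the master by highest stack priority,
# breaking ties on the lowest MAC address (a common tie-break convention).
def elect_master(switches):
    """switches: list of dicts with 'name', 'priority', 'mac' (MAC as int)."""
    return max(switches, key=lambda s: (s["priority"], -s["mac"]))

members = [
    {"name": "SwitchA", "priority": 100, "mac": 0x00259E000001},
    {"name": "SwitchB", "priority": 150, "mac": 0x00259E000002},
    {"name": "SwitchC", "priority": 150, "mac": 0x00259E000003},
]
print(elect_master(members)["name"])  # SwitchB: top priority, lower MAC
```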
6) Stack establishment process
The stack establishment process includes the following four stages:
[1] Select stacking cables and connection methods based on network requirements. Different products support different physical connection methods.
For S series campus fixed switches and CloudEngine data center fixed switches, both chain and ring connection topologies are supported.
CloudEngine data center modular switches support SIP port connection and service port connection.
[2] Elect the master switch.
After all member switches are powered on, the stack system begins to elect the master switch. In a stack system, each member switch has a specific role, and the master switch is responsible for managing the entire stack system.
[3] Assign stack ID and standby switch election.
After the master election completes, the master collects topology information from all member switches, calculates the stack forwarding entries from it, delivers them to all members, and assigns each member a stack ID.
The standby switch is then elected as the master's backup. Apart from the master, the switch that completes device startup first is elected as the standby switch.
[4] Synchronize software versions and configuration files.
After role election and topology collection are completed, all member switches will automatically synchronize the software version and configuration files of the master switch.
[5] The stack system can automatically load system software, so the member switches forming a stack do not need to run the same software version; the versions only need to be compatible.
When the software version of the standby or a slave switch differs from the master's, that switch automatically downloads the system software from the master, restarts with the new software, and rejoins the stack.
[6] The stack system has a configuration file synchronization mechanism. The master switch saves the configuration file of the entire stack and manages the stack's configuration.
The standby and slave switches synchronize the master's configuration file and run it, ensuring that the devices in the stack work as one device on the network, and that if the master fails the remaining switches can still perform all functions normally.