For most businesses, network performance forms the basis of their operations, playing a crucial role in how smoothly devices, apps and digital services load and function. While much of the work inside a data center does not translate directly to the end user experience, network performance does: when the productivity of the network is disturbed, users feel the impact immediately.
There are a variety of techniques that organizations can use to optimize their network performance, and here, the team at Procurri advises on some you can try to help boost your productivity and service.
What is Network Optimization?
Network optimization is the practice of monitoring and measuring network performance, then making changes to improve and enhance it wherever possible. A data-driven approach allows businesses to make informed decisions on areas for improvement and to tune their network performance accordingly to best drive efficiency. This can help avoid costly downtime and deliver a competitive advantage over others by maintaining a better service.
While ‘optimization’ does not necessarily mean building a system that is immune from any disaster, it does mean striking the healthiest possible balance between optimum performance and the ability to manage disruption as and when it arises.
Key Performance Metrics to Monitor on your Network
The best way to identify opportunities for improvement in your network, and to monitor the progress you make, is to identify and understand the key network metrics for your configuration.
While the specific metrics may vary depending on an organization’s needs and idiosyncrasies, the most commonly monitored are as follows:
Latency
Latency is the amount of time taken for data to travel from its source to its destination. Low latency means fast data transfer; high latency means slow or delayed transfer. It is high latency that causes the noticeable delays and disruption in consumer-facing applications such as video conferencing.
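As a minimal sketch of the idea, latency can be estimated by timing a round trip. Here a local socket pair stands in for a real network link purely for illustration; in practice, tools such as ping measure RTT across the actual network:

```python
import socket
import time

def measure_rtt(sock_a, sock_b, payload=b"ping"):
    """Time one round trip: send a payload out and wait for the echo back."""
    start = time.perf_counter()
    sock_a.sendall(payload)
    echoed = sock_b.recv(len(payload))  # the "remote" end receives...
    sock_b.sendall(echoed)              # ...and echoes it straight back
    sock_a.recv(len(payload))
    return (time.perf_counter() - start) * 1000  # latency in milliseconds

# A local socket pair stands in for a real link in this sketch.
a, b = socket.socketpair()
rtt_ms = measure_rtt(a, b)
a.close(); b.close()
print(f"round-trip latency: {rtt_ms:.3f} ms")
```

On a loopback pair the result is tiny; over a real WAN the same measurement would reveal the delays users actually experience.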
Throughput
Throughput refers to the total amount of data that has been transmitted successfully over a set period of time (determined by the organization looking to measure it). In most cases, throughput is measured over five or ten seconds, and is expressed in bps (bits per second), Mbps (Megabits per second) or Gbps (Gigabits per second).
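The calculation itself is straightforward: bits successfully transferred divided by the measurement window. A sketch, with made-up example numbers:

```python
def throughput_mbps(bytes_transferred, seconds):
    """Throughput = successfully transferred bits / measurement window."""
    bits = bytes_transferred * 8
    return bits / seconds / 1_000_000  # Megabits per second

# e.g. 62.5 MB transferred in a 5-second measurement window:
rate = throughput_mbps(62_500_000, 5)
print(f"{rate:.1f} Mbps")  # 100.0 Mbps
```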
Bandwidth
Bandwidth is the maximum amount of data that a network connection can transmit. High bandwidth allows for higher data flow than low bandwidth, but neither guarantees good performance. Throughput can never exceed bandwidth; as it approaches that ceiling, the connection maxes out and cannot manage any further data transfer.
Packet Loss
When a data packet doesn’t reach its destination as originally intended, the network attempts retransmission. This can result in slower data transfer speeds and poor application performance overall. Most commonly, packet loss rate is calculated by tools that send test packets over the network and count how many fail to return to the source as expected. This is usually calculated over a set time, and may fluctuate slightly because of delayed packets that do deliver the data; they just take longer to do so than intended.
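The loss-rate calculation those tools perform can be sketched as below; the 2% drop probability is invented for illustration, standing in for a real flaky link:

```python
import random

def packet_loss_rate(sent, returned):
    """Loss rate = packets that never came back / packets sent."""
    return (sent - returned) / sent

# Simulate a test run: send 1000 probe packets over a link that drops
# roughly 2% of them (the drop rate here is made up for illustration).
random.seed(42)
sent = 1000
returned = sum(1 for _ in range(sent) if random.random() > 0.02)
loss = packet_loss_rate(sent, returned)
print(f"packet loss: {loss:.1%}")
```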
Jitter
Jitter is the variation in data packet arrival times. Continually inconsistent data delivery can result in disruptions and delays to voice traffic, video traffic, and other applications. Jitter is calculated by measuring the variation in latency between consecutive data packets: the difference between successive RTTs (Round Trip Times) or inter-arrival times, averaged over a set time period. The score is usually given in ms (milliseconds). Consistently high jitter can indicate instability within the network environment.
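That averaging of successive RTT differences can be sketched in a few lines; the sample RTT values below are made up for illustration:

```python
def jitter_ms(rtts):
    """Mean absolute difference between consecutive round-trip times."""
    diffs = [abs(b - a) for a, b in zip(rtts, rtts[1:])]
    return sum(diffs) / len(diffs)

# Sample RTTs in milliseconds (invented values for illustration):
samples = [20.1, 22.4, 19.8, 25.0, 21.2]
print(f"jitter: {jitter_ms(samples):.2f} ms")
```

A perfectly steady link would show a jitter of 0 ms even if its latency were high; it is the variation, not the delay itself, that this metric captures.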
Understanding each of these key metrics allows organizations to make data-driven decisions about which areas of their network are best to optimize and improve.
Techniques to Optimize Network Performance
Network optimization can be introduced using a variety of techniques, but which should be prioritized and integrated is a decision best made bespoke to each business, to ensure the most appropriate course of action is taken.
The following can be considered the primary methods for network optimization:
Load Balancing
Load balancing is the technique of distributing traffic across a network as evenly as possible, ensuring servers are equally utilized and that no single server becomes overworked or overwhelmed.
Load balancing is usually introduced through an NLB (Network Load Balancer). This spreads all incoming traffic requests across a group of healthy and available servers, rather than directing everything to one, then another, then another. This ensures that the network’s applications work as fast as they can for end users, remain reliable and available, and that traffic can be redirected if a server does fail or experience delays. This provides an overall better performance for users.
NLBs work by:
- Connecting users to a single address (the NLB’s own) rather than that of an individual server
- Applying a set algorithm to decide which server is best placed to deal with each individual traffic request
- Keeping a consistent balance between servers by directing traffic to the healthiest or least busy first
- Continually and proactively checking server performance, so that traffic can be re-routed if required with no disruption to the end user.
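The steps above can be sketched as a toy least-busy balancer; the server names, and the choice of a least-connections algorithm, are illustrative assumptions rather than how any particular NLB product works:

```python
class LoadBalancer:
    """Toy least-busy load balancer; names and logic are illustrative only."""

    def __init__(self, servers):
        # Track active connections and health per backend server.
        self.active = {s: 0 for s in servers}
        self.healthy = {s: True for s in servers}

    def mark_down(self, server):
        """Health check failed: stop routing traffic to this server."""
        self.healthy[server] = False

    def route(self):
        """Pick the healthy server with the fewest active connections."""
        candidates = [s for s in self.active if self.healthy[s]]
        if not candidates:
            raise RuntimeError("no healthy servers available")
        chosen = min(candidates, key=lambda s: self.active[s])
        self.active[chosen] += 1
        return chosen

lb = LoadBalancer(["srv-a", "srv-b", "srv-c"])
lb.mark_down("srv-b")                      # failed health check: re-route
picks = [lb.route() for _ in range(4)]
print(picks)  # traffic alternates across srv-a and srv-c only
```

Real NLBs offer several such algorithms (round robin, least connections, weighted variants); least-connections is used here simply because it is easy to follow.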
Payload Compression
Payload compression uses compression techniques to reduce the size of data packets to avoid overload and/or congestion. Generally speaking, larger data packets take longer to load than smaller ones.
The overall data (referred to here as the payload) is compressed by a program that finds repetitive data within the packet as a whole and replaces the later instances of it with shorter codes – similar to a simple document referencing system. The server transmits the compressed, smaller data file, which is then automatically decompressed by the browser: the user sees the original data and not the coding, and the file takes up less space as it travels through the network to them.
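The effect is easy to demonstrate with a standard compression library; the repetitive HTTP-style payload below is invented for illustration, and zlib stands in for whatever codec a real system would negotiate:

```python
import zlib

# A payload with repetitive content compresses well, because repeated
# runs of data are replaced with short back-references.
payload = b"GET /api/items HTTP/1.1\r\n" * 200
compressed = zlib.compress(payload)

print(f"original:   {len(payload)} bytes")
print(f"compressed: {len(compressed)} bytes")

# The receiving end decompresses transparently: the user sees the
# original data, and the smaller file travelled across the network.
assert zlib.decompress(compressed) == payload
```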
QoS Prioritization
Network QoS (Quality of Service) prioritization is a technique that ensures critical network traffic always receives the bandwidth and speed it needs, de-prioritizing less urgent data to allow this to happen.
By default, all data packets are treated equally as they travel through a network, which can result in congestion and slower service when demand is high. Think of it as a busy road: a large data packet, such as a bulk file download, is a slow-moving truck taking up several lanes, while a smaller but more urgent stream, such as a video conference, is the ambulance stuck behind it.
QoS prioritization classifies and marks all incoming data packets to determine what kind of application they belong to and what kind of function they carry out. These classifications can then be read by all network devices to help them understand what to prioritize. Each network device has its own virtual queue, and can then organize data packets into their appropriate queue given their classification – similar to passengers lining up in priority groups to board an aeroplane.
QoS prioritization allows time-sensitive data to be transmitted with the least delay possible, while traffic shaping smooths out fluctuations in demand and keeps bandwidth allocation efficient.
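The classify-then-queue behaviour can be sketched with a priority queue; the traffic classes and their rankings below are invented for the example, not a real QoS marking scheme such as DSCP:

```python
import heapq

# Lower number = higher priority; these classes are invented for the sketch.
PRIORITY = {"voice": 0, "video": 1, "default": 2, "bulk": 3}

queue = []
counter = 0  # tie-breaker preserves arrival order within a class

def enqueue(packet, traffic_class):
    """Classify and mark the packet, then place it in its queue position."""
    global counter
    heapq.heappush(queue, (PRIORITY[traffic_class], counter, packet))
    counter += 1

def transmit():
    """Send the highest-priority packet waiting in the queue."""
    _, _, packet = heapq.heappop(queue)
    return packet

# A bulky file download arrives first, then an urgent voice packet:
enqueue("file-chunk-1", "bulk")
enqueue("voice-frame-1", "voice")
enqueue("file-chunk-2", "bulk")

order = [transmit() for _ in range(3)]
print(order)  # the voice frame jumps the queue ahead of the download
```

Exactly like the ambulance in the analogy, the late-arriving voice frame is transmitted first because of its classification, not its arrival time.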
Dynamically Routing Traffic
SD-WANs (Software-Defined Wide Area Networks) allow for the dynamic routing of traffic by leveraging centralized (and virtualized) control functions to direct network traffic across the wide area network. The software works in real time, routing each application’s data along the most appropriate path.
SD-WANs work with various tactics, including:
- Switching between different internet connections (including broadband, MPLS and LTE) to create one large, flexible network
- Programming a central controller with rules and policies which it can apply as necessary as it monitors traffic paths and patterns
- Identifying applications and steering each along a path suited to its needs; for example, ensuring calls are routed through a link with low latency
- Routing bulk data through cheaper broadband connections while ensuring business-critical or time-sensitive data is managed through a premium connection link.
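The tactics above amount to policy-based path selection, which can be sketched as follows; the link names, metrics and application classes are invented for illustration and do not reflect any real controller’s API:

```python
# Toy policy table for three WAN links; the figures are made up.
LINKS = {
    "mpls":      {"latency_ms": 12, "cost_per_gb": 8.0},
    "broadband": {"latency_ms": 35, "cost_per_gb": 0.5},
    "lte":       {"latency_ms": 60, "cost_per_gb": 4.0},
}

def select_path(app_class):
    """Apply the controller's policy: latency-sensitive traffic takes the
    lowest-latency link; everything else takes the cheapest one."""
    if app_class in ("voice", "video"):
        return min(LINKS, key=lambda link: LINKS[link]["latency_ms"])
    return min(LINKS, key=lambda link: LINKS[link]["cost_per_gb"])

print(select_path("voice"))   # the premium low-latency link
print(select_path("backup"))  # the cheap bulk-data link
```

A real SD-WAN controller would also measure these link metrics continuously and re-route mid-session as conditions change, rather than consulting a static table.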
Improving Hardware
Of course, all of the above can only be well managed if the hardware upon which the network relies is operating correctly and to the best of its ability.
While not all parts may need replacing, upgrading or updating, there may be room for improvement within a network’s hardware configuration that could help to:
- Better handle data demands
- Increase overall bandwidth capacity
- Reduce latency
- Reduce jitter
- Improve overall traffic management with in-built software such as dynamic routing and QoS prioritization
- Provide better signals.
The improvement of physical hardware doesn’t necessarily mean investing lots of money into the newest possible ranges offered by OEMs (Original Equipment Manufacturers). Instead, speak to Procurri’s Level 3 and 4 engineers and investigate low-cost hardware options such as:
- Refurbished hardware
- Spare parts
- Hardware rental for seasonal demand
- Hardware resale and recycling.
Get in touch with the team today and put us to the test – how much improvement can we leverage on your network?