requests to the Web servers. If the balancer itself is not overloaded, this does not noticeably degrade the performance perceived by end users. The downside of this approach is that all of the TLS processing is concentrated on a single device (the balancer), which can become a new bottleneck. Some load balancer appliances include specialized hardware to process TLS. Rather than upgrading the load balancer, an expensive piece of dedicated hardware, it may be cheaper to forgo TLS offload and add a few web servers. Also, some server vendors such as Oracle/Sun incorporate cryptographic acceleration hardware into their CPUs, such as the T2000. F5 Networks incorporates a dedicated TLS acceleration hardware card in their local traffic manager (LTM), which is used for encrypting and decrypting TLS traffic. One clear benefit to TLS offloading in the balancer is that it enables the balancer to perform load balancing or content switching based on data in the HTTPS request.
(and sometimes even overload) of certain processors. Instead, assumptions about the overall system are made beforehand, such as the arrival times and resource requirements of incoming tasks. In addition, the number of processors, their respective power, and communication speeds are known. Therefore, static load balancing aims to associate a known set of tasks with the available processors in order to minimize a certain performance function. The challenge lies in the design of this performance function.
. Storing session data on the client is generally the preferred solution: the load balancer is then free to pick any backend server to handle a request. However, this method of state-data handling is poorly suited to some complex business logic scenarios, where the session state payload is large and recomputing it with every request on a server is not feasible. URL rewriting has major security issues because the end user can easily alter the submitted URL and thus change session streams.
—the continuation of service after the failure of one or more of its components. The components are monitored continually (e.g., web servers may be monitored by fetching known pages), and when one becomes unresponsive, the load balancer is informed and no longer sends traffic to it. When a component comes back online, the load balancer starts rerouting traffic to it. For this to work, there must be at least one component in excess of the service's capacity (
different paths. Dynamic assignments can also be proactive or reactive. In the former case, the assignment is fixed once made, while in the latter the network logic keeps monitoring available paths and shifts flows across them as network utilization changes (with arrival of new flows or completion of existing ones). A comprehensive overview of load balancing in datacenter networks has been made available.
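The static, hash-based scheme described above can be sketched as follows; the field names and path labels are illustrative, not taken from any particular switch implementation:

```python
import hashlib

def path_for_flow(src_ip, dst_ip, src_port, dst_port, paths):
    """Statically assign a flow to a path by hashing its addresses and ports.

    Every packet of a given flow hashes to the same value, so the flow stays
    on one path (preserving packet order) while distinct flows spread across
    the available paths."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    digest = hashlib.md5(key).digest()
    return paths[int.from_bytes(digest, "big") % len(paths)]

paths = ["path-A", "path-B", "path-C"]
# The same flow always maps to the same path, deterministically.
assert path_for_flow("10.0.0.1", "10.0.1.9", 4242, 80, paths) == \
       path_for_flow("10.0.0.1", "10.0.1.9", 4242, 80, paths)
```

Because the assignment ignores actual utilization, two large flows can hash onto the same path; this is exactly the weakness that the dynamic schemes above address by monitoring bandwidth.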
. It is then necessary to send a termination signal to the parent processor when the subtask is completed so that it, in turn, sends the message to its parent until it reaches the root of the tree. When the first processor, i.e. the root, has finished, a global termination message can be broadcast. In the end, it is necessary to assemble the results by going back up the tree.
exceeding the space available for the assignment table. The random assignment method also requires that clients maintain some state, which can be a problem, for example, when a web browser has disabled the storage of cookies. Sophisticated load balancers use multiple persistence techniques to avoid some of the shortcomings of any one method.
, or least connections. More sophisticated load balancers may take additional factors into account, such as a server's reported load, least response times, up/down status (determined by a monitoring poll of some kind), the number of active connections, geographic location, capabilities, or how much traffic it has recently been assigned.
. Two main approaches exist: static algorithms, which do not take into account the state of the different machines, and dynamic algorithms, which are usually more general and more efficient but require exchanges of information between the different computing units, at the risk of a loss of efficiency.
An important issue when operating a load-balanced service is how to handle information that must be kept across the multiple requests in a user's session. If this information is stored locally on one backend server, then subsequent requests going to different backend servers would not be able to find
where external clients connect to access services. The load balancer forwards requests to one of the "backend" servers, which usually replies to the load balancer. This allows the load balancer to reply to the client without the client ever knowing about the internal separation of functions. It also
In the case of atomic tasks, two main strategies can be distinguished: those where the processors with low load offer their computing capacity to those with the highest load, and those where the most loaded units wish to lighten the workload assigned to them. It has been shown that when the network is
schemes are among the simplest dynamic load balancing algorithms. A master distributes the workload to all workers (also sometimes referred to as "slaves"). Initially, all workers are idle and report this to the master. The master answers worker requests and distributes the tasks to them. When the master has
this method may be unreliable. Random assignments must be remembered by the load balancer, which creates a burden on storage. If the load balancer is replaced or fails, this information may be lost, and assignments may need to be deleted after a timeout period or during periods of high load to avoid
Static load balancing distributes traffic by computing a hash of the source and destination addresses and port numbers of traffic flows and using it to determine how flows are assigned to one of the existing paths. Dynamic load balancing assigns traffic flows to paths by monitoring bandwidth use on
pairs, which may also replicate session persistence data if required by the specific application. Certain applications are programmed with immunity to this problem by offsetting the load-balancing point over differential sharing platforms beyond the defined network. The sequential algorithms paired
This way, when a server is down, its DNS server will not respond and the web service receives no traffic. If the line to one server is congested, the unreliability of DNS ensures less HTTP traffic reaches that server. Furthermore, the quickest DNS response to the resolver is nearly always the one
However, the quality of the algorithm can be greatly improved by replacing the master with a task list that can be used by different processors. Although this algorithm is a little more difficult to implement, it promises much better scalability, though still insufficient for very large computing
architecture. On the other hand, the control can be distributed between the different nodes. The load balancing algorithm is then executed on each of them and the responsibility for assigning tasks (as well as re-assigning and splitting as appropriate) is shared. The last category assumes a dynamic
since it is not mandatory to have a specific node dedicated to the distribution of work. When tasks are uniquely assigned to a processor according to their state at a given moment, it is a unique assignment. If, on the other hand, the tasks can be permanently redistributed according to the state of
For this reason, there are several techniques to get an idea of the different execution times. First of all, in the fortunate scenario of having tasks of relatively homogeneous size, it is possible to consider that each of them will require approximately the average execution time. If, on the other
HTTP compression reduces the amount of data to be transferred for HTTP objects by utilising gzip compression available in all modern web browsers. The larger the response and the further away the client is, the more this feature can improve response times. The trade-off is that this feature puts
request can become a major part of the demand on the Web Server's CPU; as the demand increases, users will see slower response times, as the TLS overhead is distributed among Web servers. To remove this demand on Web servers, a balancer can terminate TLS connections, passing HTTPS requests as HTTP
Initially, many processors have an empty task, except one that works sequentially on it. Idle processors issue requests randomly to other processors (not necessarily active). If the latter is able to subdivide the task it is working on, it does so by sending part of its work to the node making the
Since the design of each load balancing algorithm is unique, the previous distinction must be qualified. Thus, it is also possible to have an intermediate strategy, with, for example, "master" nodes for each sub-cluster, which are themselves subject to a global "master". There are also multi-level
Unlike static load distribution algorithms, dynamic algorithms take into account the current load of each of the computing units (also called nodes) in the system. In this approach, tasks can be moved dynamically from an overloaded node to an underloaded node in order to receive faster processing.
to achieve a reasonably flat load distribution across servers. It has been claimed that client-side random load balancing tends to provide better load distribution than round-robin DNS; this has been attributed to caching issues with round-robin DNS which, in the case of large DNS caching servers,
The approach consists of assigning to each processor a certain number of tasks in a random or predefined manner, then allowing inactive processors to "steal" work from active or overloaded processors. Several implementations of this concept exist, defined by a task division model and by the rules
Randomized static load balancing is simply a matter of randomly assigning tasks to the different servers. This method works quite well. If, on the other hand, the number of tasks is known in advance, it is even more efficient to calculate a random permutation in advance. This avoids communication
When the algorithm is capable of adapting to a varying number of computing units, but the number of computing units must be fixed before execution, it is called moldable. If, on the other hand, the algorithm is capable of dealing with a fluctuating amount of processors during its execution, the
Adapting to the hardware structures seen above, there are two main categories of load balancing algorithms. On the one hand, there is the one where tasks are assigned by a "master" and executed by "workers" who keep the master informed of the progress of their work, and the master can then take charge of
Hardware and software load balancers may have a variety of special features. The fundamental feature of a load balancer is to be able to distribute incoming requests over a number of backend servers in the cluster according to a scheduling algorithm. Most of the following features are vendor
With this approach, the method of delivery of a list of IPs to the client can vary and may be implemented as a DNS list (delivered to all the clients without any round-robin), or via hardcoding it to the list. If a "smart client" is used, detecting that a randomly selected server is down and
computers, managing write conflicts greatly slows down the speed of individual execution of each computing unit. However, they can work perfectly well in parallel. Conversely, in the case of message exchange, each of the processors can work at full speed. On the other hand, when it comes to
costs for each assignment. There is no longer a need for a distribution master because every processor knows what task is assigned to it. Even if the number of tasks is unknown, it is still possible to avoid communication with a pseudo-random assignment generation known to all processors.
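One way to realize such a shared pseudo-random assignment is to have every processor derive the same seeded permutation, so no distribution master and no communication are needed; this is a minimal sketch, and the seed value is an arbitrary example:

```python
import random

def assignment(num_tasks, num_processors, seed=42):
    """Deterministically shuffle task ids from a shared seed.

    Every processor that knows the seed computes the identical permutation,
    so each can look up its own share of tasks without any communication."""
    tasks = list(range(num_tasks))
    random.Random(seed).shuffle(tasks)
    # Task tasks[i] goes to processor i % num_processors.
    return {task: i % num_processors for i, task in enumerate(tasks)}

# Every node calling assignment(100, 4) computes the identical mapping.
mapping = assignment(100, 4)
```

The shuffle spreads tasks uniformly in expectation, which is the statistical-balance property the randomized static approach relies on.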
(partially connected and/or fully connected) by allowing traffic to load share across all paths of a network. SPB is designed to virtually eliminate human error during configuration and preserves the plug-and-play nature that established Ethernet as the de facto protocol at Layer 2.
: if a backend server goes down, its per-session information becomes inaccessible, and any sessions depending on it are lost. The same problem is usually relevant to central database servers; even if web servers are "stateless" and not "sticky", the central database is (see below).
Different vendors use different terms for this, but the idea is that normally each HTTP request from each client is a different TCP connection. This feature utilises HTTP/1.1 to consolidate multiple HTTP requests from multiple clients into a single TCP socket to the back-end
heavily loaded, it is more efficient for the least loaded units to offer their availability and when the network is lightly loaded, it is the overloaded processors that require support from the most inactive ones. This rule of thumb limits the number of exchanged messages.
Ideally, the cluster of servers behind the load balancer should not be session-aware, so that if a client connects to any backend server at any time the user experience is unaffected. This is usually achieved with a shared database or an in-memory session database like
determining the exchange between processors. While this technique can be particularly effective, it is difficult to implement because it is necessary to ensure that communication does not become the primary occupation of the processors instead of solving the problem.
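A toy single-process simulation of one "steal" attempt might look like the following; the queue contents and the choice of which end to steal from are illustrative assumptions, not a prescription:

```python
import random
from collections import deque

def work_steal_step(queues, idle_id, rng=random):
    """One steal attempt: an idle processor asks a random victim for work.

    Tasks are taken from the opposite end of the victim's deque, a common
    rule that reduces contention with the victim's own processing."""
    victims = [i for i in range(len(queues)) if i != idle_id and queues[i]]
    if not victims:
        return False  # nothing to steal anywhere
    victim = rng.choice(victims)
    queues[idle_id].append(queues[victim].popleft())  # steal the oldest task
    return True

queues = [deque([1, 2, 3, 4]), deque(), deque()]
work_steal_step(queues, idle_id=1)  # processor 1 now holds one stolen task
```

A real implementation must additionally bound how often idle processors probe, which is precisely the communication-overhead concern raised above.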
networks to distribute traffic across many existing paths between any two servers. It allows more efficient use of network bandwidth and reduces provisioning costs. In general, load balancing in datacenter networks can be classified as either static or dynamic.
), although optimizing task assignment is a difficult problem, it is still possible to approximate a relatively fair distribution of tasks, provided that the size of each of them is much smaller than the total computation performed by each of the nodes.
Another feature of the tasks critical for the design of a load balancing algorithm is their ability to be broken down into subtasks during execution. The "Tree-Shaped
Computation" algorithm presented later takes great advantage of this specificity.
Another approach to load balancing is to deliver a list of server IPs to the client, and then to have the client randomly select the IP from the list on each connection. This essentially relies on all clients generating similar loads, and the
One basic solution to the session data issue is to send all requests in a user session consistently to the same backend server. This is known as "persistence" or "stickiness". A significant downside to this technique is its lack of automatic
A ratio can be manually assigned to cause some backend servers to get a greater share of the workload than others. This is sometimes used as a crude way to account for some servers having more capacity than others and may not always work as
the system and its evolution, this is called dynamic assignment. Obviously, a load balancing algorithm that requires too much communication in order to reach its decisions runs the risk of slowing down the resolution of the overall problem.
prevents clients from contacting back-end servers directly, which may have security benefits by hiding the structure of the internal network and preventing attacks on the kernel's network stack or unrelated services running on other ports.
Using load balancing, both links can be in use all the time. A device or program monitors the availability of all links and selects the path for sending packets. The use of multiple links simultaneously increases the available bandwidth.
The efficiency of such an algorithm is close to the prefix sum when the job cutting and communication time is not too high compared to the work to be done. To avoid too high communication costs, it is possible to imagine a list of jobs on
on the A-record helps to ensure traffic is quickly diverted when a server goes down. Consideration must be given to the possibility that this technique may cause individual clients to switch between individual servers in mid-session.
standard in May 2012, also known as
Shortest Path Bridging (SPB). SPB allows all links to be active through multiple equal-cost paths, provides faster convergence times to reduce downtime, and simplifies the use of load balancing in
Most load balancers can send requests to different servers based on the URL being requested, assuming the request is not encrypted (HTTP) or if it is encrypted (via HTTPS) that the HTTPS request is terminated (decrypted) at the load
The load balancer can buffer responses from the server and spoon-feed the data out to slow clients, allowing the web server to free a thread for other tasks faster than it would if it had to send the entire response to the client
The efficiency of load balancing algorithms critically depends on the nature of the tasks. Therefore, the more information about the tasks is available at the time of decision making, the greater the potential for optimization.
Some load balancers provide a mechanism for doing something special in the event that all backend servers are unavailable. This might include forwarding to a backup load balancer or displaying a message regarding the outage.
TLS (or its predecessor SSL) acceleration is a technique of offloading cryptographic protocol calculations onto specialized hardware. Depending on the workload, processing the encryption and authentication requirements of a
The advantage of this system is that it distributes the burden very fairly. In fact, if one does not take into account the time needed for the assignment, the execution time would be comparable to the prefix sum seen above.
organizations, with an alternation between master-slave and distributed control strategies. The latter strategies quickly become complex and are rarely encountered. Designers prefer algorithms that are easier to control.
activities. Load balancers can be used to split huge data flows into several sub-flows and use several network analyzers, each reading a part of the original data. This is very useful for monitoring fast networks like
In the case where one starts from a single large task that cannot be divided beyond an atomic level, there is a very efficient algorithm, "Tree-Shaped Computation", where the parent task is distributed in a work tree.
In a round-robin algorithm, the first request is sent to the first server, then the next to the second, and so on down to the last. Then it is started again, assigning the next request to the first server, and so on.
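A minimal sketch of this rotation, with hypothetical server names, is a counter taken modulo the number of servers:

```python
from itertools import count

class RoundRobinBalancer:
    """Cycle through backends in order, wrapping around at the end."""

    def __init__(self, backends):
        self.backends = list(backends)
        self._counter = count()

    def pick(self):
        # Each request advances the counter; the modulo wraps back
        # to the first server after the last one has been used.
        return self.backends[next(self._counter) % len(self.backends)]

balancer = RoundRobinBalancer(["s1", "s2", "s3"])
print([balancer.pick() for _ in range(5)])  # ['s1', 's2', 's3', 's1', 's2']
```

In a real balancer the counter would need to be protected against concurrent access, but the wrap-around logic is the whole algorithm.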
(computing units), with the aim of making their overall processing more efficient. Load balancing can optimize response time and avoid unevenly overloading some compute nodes while other compute nodes are left idle.
as a sub-domain whose zone is served by each of the same servers that are serving the website. This technique works particularly well where individual servers are spread geographically on the
Internet. For example:
; clients are given an IP address in a round-robin fashion. The IP address is assigned to clients with a short expiration, so the client is more likely to use a different IP the next time it accesses the Internet service being requested.
. This is generally bad for performance because it increases the load on the database: the database is best used to store information less transient than per-session data. To prevent a database from becoming a
Load balancing can be useful in applications with redundant communications links. For example, a company may have multiple
Internet connections ensuring network access if one of the connections fails. A
In reality, few systems fall into exactly one of the categories. In general, the processors each have an internal memory to store the data needed for the next calculations and are organized in successive
In the context of algorithms that run over the very long term (servers, cloud...), the computer architecture evolves over time. However, it is preferable not to have to design a new algorithm each time.
, which distributes the loads and optimizes the performance function. This minimization can take into account information related to the tasks to be distributed, and derive an expected execution time.
For example, lower-powered units may receive requests that require a smaller amount of computation, or, in the case of homogeneous or unknown request sizes, receive fewer requests than larger units.
. Therefore, the load balancing algorithm should be uniquely adapted to a parallel architecture. Otherwise, there is a risk that the efficiency of parallel problem solving will be greatly reduced.
). This can be much less expensive and more flexible than failover approaches where every single live component is paired with a single backup component that takes over in the event of a failure (
Many telecommunications companies have multiple routes through their networks or to external networks. They use sophisticated load balancing to shift traffic from one path to another to avoid
Most of the time, the execution time of a task is unknown and only rough approximations are available. This algorithm, although particularly efficient, is not viable for these scenarios.
A load balancing algorithm is "static" when it does not take into account the state of the system for the distribution of tasks. Thereby, the system state includes measures such as the
In the very common case where the client is a web browser, a simple but efficient approach is to store the per-session data in the browser itself. One way to achieve this is to use a
Assuming that the required time for each of the tasks is known in advance, an optimal execution order must lead to the minimization of the total execution time. Although this is an
State Server technology is an example of a session database. All servers in a web farm store their session data on State Server and any server in the farm can retrieve the data.
While these algorithms are much more complicated to design, they can produce excellent results, in particular, when the execution time varies greatly from one task to another.
The problem with this algorithm is that it has difficulty adapting to a large number of processors because of the high amount of necessary communications. This lack of
it. This might be cached information that can be recomputed, in which case load-balancing a request to a different backend server just introduces a performance issue.
By dividing the tasks in such a way as to give the same amount of computation to each processor, all that remains to be done is to group the results together. Using a
, also called load-balancing methods, are used by load balancers to determine which back-end server to send a request to. Simple algorithms include random choice,
Assignment to a particular server might be based on a username, client IP address, or random. Because of changes in the client's perceived address resulting from
Parallel computers are often divided into two broad categories: those where all processors share a single common memory on which they read and write in parallel (
requests from a website). However, there is still some statistical variance in the assignment of tasks which can lead to the overloading of some computing units.
Some balancers can hide HTTP error pages, remove server identification headers from HTTP responses, and encrypt cookies so that end users cannot manipulate them.
If the tasks are independent of each other, if their respective execution times are known, and if the tasks can be subdivided, there is a simple and optimal algorithm.
of the algorithm. An algorithm is called scalable for an input parameter when its performance remains relatively independent of the size of that parameter.
An extremely important parameter of a load balancing algorithm is therefore its ability to adapt to scalable hardware architecture. This is called the
in the year 2004, who in 2005 rejected what came to be known as TRILL, and in the years 2006 through 2012 devised an incompatible variation known as
to each task. Depending on the previous execution time for similar metadata, it is possible to make inferences for a future task based on statistics.
The advantage of static algorithms is that they are easy to set up and extremely efficient in the case of fairly regular tasks (such as processing
The performance of this strategy (measured in total execution time for a given fixed set of tasks) decreases with the maximum size of the tasks.
At least one balancer allows the use of a scripting language to allow custom balancing methods, arbitrary traffic manipulations, and more.
, the database is often replicated across multiple machines, and load balancing is used to spread the query load across those replicas.
Intrusion prevention systems offer application layer security in addition to the network/transport layer offered by firewall security.
When the number of available servers drops below a certain number, or the load gets too high, standby servers can be brought online.
tend to skew the distribution for round-robin DNS, while client-side random selection remains unaffected regardless of DNS caching.
A load-balancing algorithm always tries to answer a specific problem. Among other things, the nature of the tasks, the algorithmic
is an alternate method of load balancing that does not require a dedicated software or hardware node. In this technique, multiple
arrangement would mean that one link is designated for normal use, while the second link is used only if the primary link fails.
one.example.org A 192.0.2.1
two.example.org A 203.0.113.2
www.example.org NS one.example.org
www.example.org NS two.example.org
This algorithm can be weighted such that the most powerful units receive the largest number of requests and receive them first.
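One simple way to sketch such weighting is to repeat each server in the rotation according to its weight; the weights below are illustrative:

```python
def weighted_rotation(weights):
    """Expand a {server: weight} mapping into one repeating schedule cycle.

    Servers with a higher weight appear more often in the cycle, so they
    receive proportionally more requests."""
    schedule = []
    for server, weight in weights.items():
        schedule.extend([server] * weight)
    return schedule

# A server with weight 3 gets three slots per cycle; weight 1 gets one.
print(weighted_rotation({"big": 3, "small": 1}))  # ['big', 'big', 'big', 'small']
```

Note that this naive expansion sends a server its slots back-to-back; smoother variants interleave the slots so that no server receives a long burst.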
to pseudo-randomly assign that name to one of the available servers, and then store that block of data in the assigned server.
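This name-to-server mapping can be sketched with a stable hash; the helper name and server list here are hypothetical:

```python
import hashlib

def server_for(name, servers):
    """Map a data-block name to one of the servers via a stable hash.

    Any node that knows the server list can recompute where a block lives,
    so no central lookup table is required."""
    digest = hashlib.sha256(name.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(servers)
    return servers[index]

servers = ["s1", "s2", "s3"]
# The same name always maps to the same server.
assert server_for("session:alice", servers) == server_for("session:alice", servers)
```

A known drawback of the simple modulo mapping is that changing the number of servers remaps most names; consistent-hashing schemes reduce that churn.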
. Therefore, a request is simply reading from a certain position on this shared memory at the request of the master processor.
, it is not tolerable to execute a parallel algorithm that cannot withstand the failure of one single component. Therefore,
collective message exchange, all processors are forced to wait for the slowest processors to start the communication phase.
In addition to efficient problem solving through parallel computations, load balancing algorithms are widely used in
hand, the execution time is very irregular, more sophisticated techniques must be used. One technique is to add some
request management where a site with a large audience must be able to handle a large number of requests per second.
One of the most commonly used applications of load balancing is to provide a single
Internet service from multiple
74:, must be taken into account. Therefore compromise must be found to best meet application-specific requirements.
and delayed-binding (the back-end servers don't see the client until it finishes its TCP handshake) to mitigate
on any particular link, and sometimes to minimize the cost of transit across external networks or improve
Yet another solution to storing persistent data is to associate a name with each block of data, and use a
For
Internet services, a server-side load balancer is usually a software program that is listening on the
Another technique to overcome scalability problems when the time needed for task completion is unknown is
on each server is different such that each server resolves its own IP Address as the A-record. On server
makes it quickly inoperable in very large servers or very large parallel computers. The master acts as a
assigning or reassigning the workload in case of the dynamic algorithm. The literature refers to this as
Authenticate users against a variety of authentication sources before allowing them access to a website.
Even if the execution time is not known in advance at all, static load distribution is always possible.
The balancer stores static content so that some requests can be handled without contacting the servers.
An option for asymmetrical load distribution, where request and reply have different network paths.
algorithms are being developed which can detect outages of processors and recover the computation.
The balancer polls servers for application layer health and removes failed servers from the pool.
Less work: Assign more tasks to the servers by performing less (the method can also be weighted).
Set of techniques to improve the distribution of workloads across multiple computing resources
In some cases, tasks depend on each other. These interdependencies can be illustrated by a
Firewalls can prevent direct connections to backend servers, for network security reasons.
Power of Two Choices: pick two servers at random and choose the better of the two options.
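The rule can be sketched in a few lines; the load table below is a hypothetical example:

```python
import random

def power_of_two_choices(loads, rng=random):
    """Pick two distinct servers at random and return the less loaded one."""
    a, b = rng.sample(list(loads), 2)
    return a if loads[a] <= loads[b] else b

loads = {"s1": 10, "s2": 3, "s3": 7}
server = power_of_two_choices(loads)
loads[server] += 1  # account for the new connection
```

Comparing just two random candidates avoids scanning every server on each request, yet it is known to spread load dramatically better than a single random choice.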
algorithm is said to be malleable. Most load balancing algorithms are at least moldable.
problem and therefore can be difficult to solve exactly. There are algorithms, like
1193:
to have an arbitrary topology, and enables per flow pair-wise load splitting by way of
to these functions are defined by flexible parameters unique to the specific database.
1197:, without configuration and user intervention. The catalyst for TRILL was an event at
which began on 13 November 2002. The concept of Rbridges was first proposed to the
additional CPU demand on the load balancer and could be done by web servers instead.
of each of the tasks allows one to reach an optimal load distribution (see algorithm of
from the network's closest server, ensuring geo-sensitive load-balancing. A short
no more tasks to give, it informs the workers so that they stop asking for tasks.
attacks and generally offload work from the servers to a more efficient platform.
, the hardware architecture on which the algorithms will run as well as required
Static load balancing techniques are commonly centralized around a router, or
Another more effective technique for load-balancing using DNS is to delegate
99:). Unfortunately, this is in fact an idealized case. Knowing the exact
2075:"Datacenter Traffic Control: Understanding Techniques and Trade-offs,"
or STM64, where complex processing of the data may not be possible at
It is also important that the load balancer itself does not become a
123:. Intuitively, some tasks cannot begin until others are completed.
Some balancers can arbitrarily modify traffic on the way through.
1125:, the ability to give different priorities to different traffic.
Non-hierarchical architecture, without knowledge of the system:
250:. Often, these processing elements are then coordinated through
model), and those where each computing unit has its own memory (
215:, which should be taken into account for the load distribution.
1189:(Transparent Interconnection of Lots of Links) facilitates an
request. Otherwise, it returns an empty task. This induces a
If, however, the tasks cannot be subdivided (i.e., they are
Of course, there are other methods of assignment as well:
Load balancing algorithm depending on divisibility of tasks
cluster being distributed by a load balancer. (Example for
Load balancing is the subject of research in the field of
infrastructures are often composed of units of different
model), and where information is exchanged by messages.
Another solution is to keep the per-session data in a
Static distribution with full knowledge of the tasks:
1936:. Institute of Electrical and Electronics Engineers.
2054:Mohammad Noormohammadpour, Cauligi S. Raghavendra
993:, suitably time-stamped and encrypted. Another is
134:, that calculate optimal task distributions using
1203:Institute of Electrical and Electronics Engineers
757:. Commonly load-balanced systems include popular
361:Static load distribution without prior knowledge
280:Adaptation to larger architectures (scalability)
190:Dynamic load balancing architecture can be more
343:algorithm, this division can be calculated in
901:. Usually, load balancers are implemented in
103:of each task is an extremely rare situation.
1050:Load balancers can provide features such as
873:connecting randomly again, it also provides
1284:Load balancing is often used to implement
1246:Another way of using load balancing is in
347:with respect to the number of processors.
23:Diagram illustrating user requests to an
2073:M. Noormohammadpour, C. S. Raghavendra,
43:is the process of distributing a set of
Hash: allocates queries according to a hash table.
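Hash-based assignment can be sketched as follows; a minimal illustration (not from the article) in which the key choice, such as the client IP or session ID, is an assumption of the example:

```python
import hashlib

def hash_assign(key, n_servers):
    """Map a request key (e.g., client IP or session ID) to a server index.
    The same key always lands on the same server while the pool size is stable."""
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % n_servers

# Example: a given client IP is always routed to the same backend.
assert hash_assign("203.0.113.7", 4) == hash_assign("203.0.113.7", 4)
```

The trade-off is that resizing the pool remaps most keys; consistent-hashing variants exist to soften that.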
1199:Beth Israel Deaconess Medical Center
657:adding citations to reliable sources
1974:Peter Ashwood-Smith (24 Feb 2011).
1693:Pattern: Client Side Load Balancing
1340:Common Address Redundancy Protocol
1267:Load balancing is widely used in
1141:Programmatic traffic manipulation
859:Client-side random load balancing
541:This section has multiple issues.
1320:Application Delivery Controller
777:(DNS) servers, and databases.
771:Network News Transfer Protocol
1045:Distributed Denial of Service
843:the same zone file contains:
798:are associated with a single
222:Shared and distributed memory
151:Static and dynamic algorithms
1766:. 2017-12-05. Archived from
1637:10.1016/0166-5316(86)90008-8
1608:. 2019-02-15. Archived from
1586:. 2018-11-12. Archived from
1031:TLS Offload and Acceleration
448:Master-Worker and bottleneck
1156:Intrusion prevention system
956:network address translation
821:However, the zone file for
1350:InterPlanetary File System
881:Server-side load balancers
304:Especially in large-scale
272:load balancing algorithm.
2007:Jim Duffy (11 May 2012).
1862:courses.cs.washington.edu
1300:systems can also utilize
765:networks, high-bandwidth
91:Perfect knowledge of the
1981:. Huawei. Archived from
1947:Shuang Yu (8 May 2012).
1723:. linuxvirtualserver.org
1548:10.1109/ECS.2015.7124879
1047:(DDoS) attack protection
1682:IPv4 Address Record (A)
1294:dual modular redundancy
1219:mesh network topologies
1128:Content-aware switching
972:single point of failure
899:single point of failure
753:, sometimes known as a
642:This article's section
522:Internet-based services
1951:. IEEE. Archived from
1625:Performance Evaluation
1542:. pp. 1715–1720.
1355:Network Load Balancing
1304:for a similar effect.
1212:The IEEE approved the
1207:Shortest Path Bridging
1182:Shortest Path Bridging
1008:Load balancer features
1002:distributed hash table
767:File Transfer Protocol
370:Round-robin scheduling
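Round-robin assignment can be sketched as below; a minimal illustration (not from the article) with hypothetical worker names, handing tasks out in a fixed rotation regardless of each worker's current load:

```python
from itertools import cycle

# Hypothetical worker names; the rotation repeats w0, w1, w2, w0, ...
workers = ["w0", "w1", "w2"]
rotation = cycle(workers)

def assign(tasks):
    """Pair each task with the next worker in the rotation."""
    return [(task, next(rotation)) for task in tasks]
```

With tasks of roughly equal size this spreads work evenly; with unequal tasks, load-aware methods above do better.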
204:Heterogeneous machines
121:directed acyclic graph
1135:Client authentication
916:scheduling algorithms
910:Scheduling algorithms
199:Hardware architecture
1805:on 23 September 2020
1507:10.1109/ICPP.2013.20
1501:. pp. 110–119.
1335:Cloud load balancing
1263:Data center networks
1195:Dijkstra's algorithm
1084:Direct Server Return
866:Law of Large Numbers
418:Master-Worker Scheme
142:Segregation of tasks
1721:"High Availability"
1425:2016Senso..16.1386L
1241:network reliability
1024:Priority activation
846:@ in a 203.0.113.2
763:Internet Relay Chat
1248:network monitoring
1237:network congestion
1165:Telecommunications
829:the zone file for
775:Domain Name System
306:computing clusters
252:distributed memory
232:distributed memory
209:Parallel computing
56:parallel computers
1104:Content filtering
974:, and to improve
903:high-availability
836:@ in a 192.0.2.1
383:Randomized static