In a Beowulf cluster, the application programs never see the computational nodes (also called slave computers) but only interact with the "Master", a specific computer handling the scheduling and management of the slaves. In a typical implementation the Master has two network interfaces: one communicates with the private Beowulf network for the slaves, the other with the general-purpose network of the organization. The slave computers typically have their own version of the same operating system, as well as local memory and disk space. However, the private slave network may also have a large shared file server that stores global persistent data, accessed by the slaves as needed.
"Load-balancing" clusters are configurations in which cluster nodes share computational workload to provide better overall performance. For example, a web server cluster may assign different queries to different nodes, so the overall response time will be optimized. However, approaches to load balancing may differ significantly among applications; e.g. a high-performance cluster used for scientific computations would balance load with different algorithms from a web-server cluster, which may just use a simple
"Fencing" may be employed to keep the rest of the system operational. Fencing is the process of isolating a node or protecting shared resources when a node appears to be malfunctioning. There are two classes of fencing methods: one disables the node itself, and the other disallows access to resources such as shared disks.
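The fencing decision itself can be sketched in a few lines of Python. Everything here — the node names, the two-second timeout, and the `fenced` set — is invented for illustration; a real cluster manager would follow the decision by powering the node off or revoking its access to the shared disk:

```python
import time

HEARTBEAT_TIMEOUT = 2.0  # seconds of silence before a node is suspected (illustrative value)

# Hypothetical last-heartbeat timestamps: node-b stopped responding 5 seconds ago.
last_heartbeat = {"node-a": time.monotonic(), "node-b": time.monotonic() - 5.0}
fenced = set()

def check_and_fence():
    """Mark as fenced every node whose heartbeat is older than the timeout."""
    now = time.monotonic()
    for node, seen in last_heartbeat.items():
        if node not in fenced and now - seen > HEARTBEAT_TIMEOUT:
            fenced.add(node)  # a real implementation would power off the node here
    return sorted(fenced)

print(check_and_fence())
```

The sketch shows only the first class of fencing (disabling a suspect node); the second class would instead revoke the node's reservations on shared storage.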
without multi-node cooperation, given that the main goal of the system is providing rapid user access to shared data. However, "computer clusters" which perform complex computations for a small number of users need to take advantage of the parallel processing capabilities of the cluster and partition
One of the issues in designing a cluster is how tightly coupled the individual nodes may be. For instance, a single computer job may require frequent communication among nodes: this implies that the cluster shares a dedicated network, is densely located, and probably has homogeneous nodes. The other
In terms of scalability, clusters provide this through their ability to add nodes horizontally: more computers may be added to the cluster to improve its performance, redundancy and fault tolerance. This can be an inexpensive solution for a higher-performing cluster compared to scaling up
can be used to restore a given state of the system when a node fails during a long multi-node computation. This is essential in large clusters, given that as the number of nodes increases, so does the likelihood of node failure under heavy computational loads. Checkpointing can restore the system to
around 1989 before MPI was available. PVM must be directly installed on every cluster node and provides a set of software libraries that paint the node as a "parallel virtual machine". PVM provides a run-time environment for message-passing, task and resource management, and fault notification. PVM
Greg Pfister has stated that clusters were not invented by any specific vendor but by customers who could not fit all their work on one computer, or needed a backup. Pfister estimates the date as some time in the 1960s. The formal engineering basis of cluster computing as a means of doing parallel
were employed; but the lower upfront cost of clusters, and increased speed of network fabric has favoured the adoption of clusters. In contrast to high-reliability mainframes, clusters are cheaper to scale out, but also have increased complexity in error handling, as in clusters error modes are not
becomes a challenge. In a heterogeneous CPU-GPU cluster with a complex application environment, the performance of each job depends on the characteristics of the underlying cluster. Therefore, mapping tasks onto CPU cores and GPU devices provides significant challenges. This is an area of ongoing
in the mid-1960s. This allowed up to four computers, each with either one or two processors, to be tightly coupled to a common disk storage subsystem in order to distribute the workload. Unlike standard multiprocessor systems, each computer could be restarted without disrupting overall operation.
workstation, which uses multiple graphics accelerator processor chips. Besides game consoles, high-end graphics cards can also be used. The use of graphics cards (or rather their GPUs) to do calculations for grid computing is vastly more economical than using CPUs, despite being less
the cluster nodes may run on separate physical computers with different operating systems which are painted above with a virtual layer to look similar. The cluster may also be virtualized in various configurations as maintenance takes place; an example implementation is
One of the challenges in the use of a computer cluster is the cost of administering it, which can at times be as high as the cost of administering N independent machines if the cluster has N nodes. In some cases this provides an advantage to
The history of early computer clusters is more or less directly tied to the history of early networks, as one of the primary motivations for the development of a network was to link computing resources, creating a de facto computer cluster.
When adding a new node to a cluster, reliability increases because the entire cluster does not need to be taken down. A single node can be taken down for maintenance, while the rest of the cluster takes on the load of that individual node.
The activities of the computing nodes are orchestrated by "clustering middleware", a software layer that sits atop the nodes and allows the users to treat the cluster as by and large one cohesive computing unit, e.g. via a
Computer clusters may be configured for different purposes ranging from general purpose business needs such as web-service support, to computation-intensive scientific calculations. In either case, the cluster may use a
Rather than starting anew, the design of MPI drew on various features available in commercial systems of the time. The MPI specifications then gave rise to specific implementations. MPI implementations typically use
Clusters are usually deployed to improve performance and availability over that of a single computer, while typically being much more cost-effective than single computers of comparable speed or availability.
Computer clusters emerged as a result of the convergence of a number of computing trends including the availability of low-cost microprocessors, high-speed networks, and software for high-performance
Although a cluster may consist of just a few personal computers connected by a simple network, the cluster architecture may also be used to achieve very high levels of performance. The
devices. The idea was to provide the advantages of parallel processing, while maintaining data reliability and uniqueness. Two other noteworthy early commercial clusters were the
Load balancing clusters such as web servers use cluster architectures to support a large number of users and typically each user request is routed to a specific node, achieving
Computer clustering relies on a centralized management approach which makes the nodes available as orchestrated shared servers. It is distinct from other approaches such as
(NOW) system gathers cluster data and stores it in a database, while a system such as PARMON, developed in India, allows visual observation and management of large clusters.
a single node in the cluster. This property of computer clusters can allow for larger computational loads to be executed by a larger number of lower performing computers.
), and centralized management. Advantages include enabling data recovery in the event of a disaster and providing parallel data processing and high processing capacity.
The computer clustering approach usually (but not always) connects a number of readily available computing nodes (e.g. personal computers used as servers) via a fast
which are then used to provide service when system components fail. HA cluster implementations attempt to use redundancy of cluster components to eliminate
is tuned to running astrophysical N-body simulations using the Multiple-Walk parallel tree code, rather than general-purpose scientific computations.
precise. However, when using double-precision values, they become as precise to work with as CPUs and are still much less costly to purchase.
Developing and debugging parallel programs on a cluster requires parallel language primitives and suitable tools such as those discussed by the
organization's semiannual list of the 500 fastest supercomputers often includes many clusters, e.g. the world's fastest machine in 2011 was the
approach. Note that the attributes described below are not exclusive and a "computer cluster" may also use a high-availability approach, etc.
They have a wide range of applicability and deployment, ranging from small business clusters with a handful of nodes to some of the fastest
and socket connections. MPI is now a widely available communications model that enables parallel programs to be written in languages such as
etc. Thus, unlike PVM, which provides a concrete implementation, MPI is a specification which has been implemented in systems such as
In most circumstances, all of the nodes use the same hardware and the same operating system, although in some setups (e.g. using
method stands for "Shoot The Other Node In The Head", meaning that the suspected node is disabled or powered off. For instance,
Clusters are primarily designed with performance in mind, but installations are based on many other factors. Fault tolerance (
Clusters do not typically use physically shared memory, while many supercomputer architectures have also abandoned it.
– director-based clusters that allow incoming requests for services to be distributed across multiple cluster nodes.
A computer cluster may be a simple two-node system which just connects two personal computers, or may be a very fast
set to perform the same task, controlled and scheduled by software. The newest manifestation of cluster computing is
platform provides pieces for high-performance computing like the job scheduler, MSMPI library and management tools.
One of the elements that distinguished the three classes at that time was that the early supercomputers relied on
extreme is where a computer job uses one or few nodes, and needs little or no inter-node communication, approaching
Within the same time frame, while computer clusters used parallelism outside the computer on a commodity network,
cluster which may be built with a few personal computers to produce a cost-effective alternative to traditional
of vehicle crashes or weather. Very tightly coupled computer clusters are designed for work that may approach "
MPI emerged in the early 1990s out of discussions among 40 organizations. The initial effort was supported by
clusters, or HA clusters) improve the availability of the cluster approach. They operate by having redundant
and in high-performance situations, low frequency of maintenance routines, resource consolidation (e.g.
There are commercial implementations of High-Availability clusters for many operating systems. The
The desire to get more computing power and better reliability by orchestrating a number of low-cost
is also used to schedule and manage some of the largest supercomputer clusters (see the TOP500 list).
who in 1967 published what has come to be regarded as the seminal paper on parallel processing:
have been made to build short-lived clusters for specific computations. However, larger-scale
If you have a large number of computers clustered together, this lends itself to the use of
operations such as web service or databases. For instance, a computer cluster might support
(OSCAR)), different operating systems can be used on each computer, or different hardware.
via the simultaneous execution of separate portions of a program on different processors.
The Linux world supports various cluster software; for application clustering, there is
approach disallows access to resources without powering off the node. This may include
were then developed to debug parallel implementations on computer clusters which use
Computer clusters are used for computation-intensive purposes, rather than handling
a stable state so that processing can resume without needing to recompute results.
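The idea can be illustrated with a toy Python sketch — periodic, atomically written checkpoints of a loop's state, then a restore standing in for recovery after a node failure. The pickle format, file name, and five-step interval are arbitrary choices for the example:

```python
import os
import pickle
import tempfile

def checkpoint(state, path):
    """Write state atomically so a crash mid-write cannot corrupt the last checkpoint."""
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, path)

def restore(path):
    with open(path, "rb") as f:
        return pickle.load(f)

path = os.path.join(tempfile.mkdtemp(), "ckpt.pkl")
state = {"step": 0, "total": 0}
for step in range(1, 11):
    state = {"step": step, "total": state["total"] + step}
    if step % 5 == 0:  # checkpoint every 5 steps
        checkpoint(state, path)

# Pretend the node failed here; a replacement node resumes from the last saved state.
recovered = restore(path)
print(recovered)
```

Production checkpointing systems additionally coordinate the snapshot across all nodes of the computation, which this single-process sketch does not attempt.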
A pre-release sample of the Ground Electronics/AB Open Circumference C25 cluster
The components of a cluster are usually connected to each other through fast
When a large multi-user cluster needs to access very large amounts of data,
Two widely used approaches for communication between cluster nodes are MPI (
computers has given rise to a variety of architectures and configurations.
as the cluster interface. Clustering per se did not really take off until
began to use them within the same computer. Following the success of the
that work together so that they can be viewed as a single system. Unlike
"Attached Resource Computer" (ARC) system, developed in 1977, and using
the ability for a system to continue working with a malfunctioning node
that provide for automatic process migration among homogeneous nodes.
both of which can increase the reliability and speed of a cluster.
operating system. The ARC and VAXcluster products not only supported
Although most computer clusters are permanent fixtures, attempts at
The first production system designed as a cluster was the Burroughs
As the computer clusters were appearing during the 1980s, so were
can be used by user programs written in C, C++, or Fortran, etc.
880:(HPDF) which resulted in the HPD specifications. Tools such as
is essential in modern computer clusters. Examples include the
was delivered in 1976, and introduced internal parallelism via
library to achieve high performance at a relatively low cost.
Computer clusters have historically run on separate physical
A load balancing cluster with two servers and N user stations
While early supercomputers excluded clusters and relied on
Due to the increasing computing power of each generation of
The first commercial loosely coupled clustering product was
(HPC) clusters. Some examples of game console clusters are
(computer used as a server) running its own instance of an
clusters. Another example of consumer game product is the
a novel use has emerged where they are repurposed into
41:"Cluster computing" redirects here. For the journal, see
uses a power controller to turn off an inoperable node.
in time some of the fastest supercomputers (e.g. the
(a 1976 high-availability commercial product) and the
A basic approach to building a cluster is that of a
(GNBD) fencing to disable access to the GNBD server.
When a node in a cluster fails, strategies such as "
with lower administration costs. This has also made
is a set of middleware technologies created by the
by assigning each new request to a different node.
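The round-robin policy can be sketched in a few lines of Python; the node names are hypothetical, and a production balancer would also track node health rather than rotate blindly:

```python
from itertools import cycle

nodes = ["node-a", "node-b", "node-c"]  # hypothetical cluster nodes
_rotation = cycle(nodes)

def assign(request_id):
    """Route each incoming request to the next node in a fixed rotation."""
    return next(_rotation)  # request_id kept for a realistic signature; unused here

routed = [assign(i) for i in range(6)]
print(routed)
```

Six consecutive requests are thus spread evenly, two per node, without any knowledge of each node's actual load.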
Prior to the advent of clusters, single-unit
of programs remains a technical challenge, but
which also use many nodes, but with a far more
"the same computation" among several nodes.
research; algorithms that combine and extend
are full-blown clusters integrated into the
popular, due to the ease of administration.
computer cluster Server 2003 based on the
(circa 1994, primarily for business use).
work of any sort was arguably invented by
Open Source Cluster Application Resources
-based systems have had more followers.
Low-cost and low energy tiny-cluster of
Software development and administration
, fibre channel fencing to disable the
Message passing in computer clusters
can be used to effectuate a higher
as the virtualization manager with
Nvidia Tesla Personal Supercomputer
) relied on cluster architectures.
University of California, Berkeley
11/780, c. 1977, as used in early
3 Model B+ and 1x UDOO x86 boards
Message passing and communication
series uses cluster architecture.
Chemnitz University of Technology
High Performance Debugging Forum
have been proposed and studied.
IBM General Parallel File System
A typical Beowulf configuration
Technicians working on a large
persistent reservation fencing
Data sharing and communication
computer clusters have each
Oak Ridge National Laboratory
project is one commonly used
Digital Equipment Corporation
Distributed operating system
Enabling Grids for E-sciencE
Tandem NonStop II circa 1980
History of computer clusters
opaque to running programs.
(PVM) for message passing.
parallel programming models
global network block device
shared memory architectures
National Science Foundation
A special purpose 144-node
Cluster Computing (journal)
Rocks Cluster Distribution
Oracle Cluster File System
High-performance computing
High-availability clusters
IBM S/390 Parallel Sysplex
high-performance computing
DEGIMA (computer cluster)
Distributed shared memory
Symmetric multiprocessing
High-availability cluster
Heartbeat private network
Application checkpointing
Message Passing Interface
Automatic parallelization
PVM was developed at the
Message Passing Interface
Sony PlayStation clusters
computational simulations
History of supercomputing
Message Passing Interface
Microsoft Cluster Server
Parallel Virtual Machine
Debugging and monitoring
Parallel Virtual Machine
Design and configuration
distributed file systems
single points of failure
product in 1984 for the
cluster architecture.
Parallel Virtual Machine
commercial off-the-shelf
Not to be confused with
Network of Workstations
Node failure management
system, fitted with 8x
Datapoint Corporation's
Veritas Cluster Server
Distributed data store
Cluster Shared Volumes
However, the use of a
Attributes of clusters
The developers used
Red Hat Cluster Suite
Distributed computing
Distributed computing
Clustered file system
degree of parallelism
A simple, home-built
in the world such as
distributed computing
Linux Virtual Server
Parallel programming
Stone Soupercomputer
Single system image
volunteer computing
flash mob computing
single-system image
HA package for the
single system image
local area networks
Cluster management
operating system.
round-robin method
but also shared
parallel computing
distributed memory
distributed nature
local area network
modular redundancy
Microsoft Windows
implementations.
resources fencing
" (also known as
high-availability
vector processing
Sun Microsystems
Specific systems
systems such as
Other approaches
(EGEE) project.
task parallelism
virtual machines
operating system
toolkit and the
operating system
computer cluster
Solaris Cluster
Implementations
task scheduling
Task scheduling
Nehalem cluster
Beowulf cluster
released their
Beowulf cluster
cloud computing
Solaris Cluster
cluster at the
1143:Computer farms
1140:
1139:
1138:
1137:
1132:
1127:
1122:
1117:
1112:
1107:
1093:
1092:
1091:
1090:
1089:
1084:
1079:
1074:
1061:
1060:
1059:
1058:
1053:
1048:
1043:
1038:
1028:Basic concepts
1020:
1017:
1000:
997:
977:Windows Server
914:
911:
873:
870:
846:
843:
841:
838:
795:
792:
774:
771:
722:
719:
660:Main article:
657:
654:
644:, Microsoft's
627:supercomputers
608:
605:
603:
600:
588:virtualization
582:with the same
546:DEGIMA cluster
532:grid computing
518:
515:
479:
476:
441:supercomputing
422:Load-balancing
401:
398:
374:supercomputers
362:Tandem NonStop
263:Main article:
260:
257:
203:grid computing
165:
164:Basic concepts
162:
150:fault tolerant
142:supercomputers
100:grid computers
74:In-Row cooling
36:grid computing
26:
24:
14:
13:
10:
9:
6:
4:
3: