Later StyleGAN versions use PyTorch, which supersedes TensorFlow as the official implementation library. The second version of StyleGAN, called StyleGAN2, was published on February 5, 2020. It removes some of the characteristic artifacts and improves the image quality. Nvidia introduced StyleGAN3, described as an "alias-free" version, on June 23, 2021, and made its source available on October 12, 2021.
The website displayed a new face on each web page reload. Wang himself has expressed amazement that, although humans have evolved specifically to understand human faces, StyleGAN can nevertheless competitively "pick apart all the relevant features (of human faces) and recompose them in a way that's coherent."
The site challenged visitors to differentiate between a fake and a real face shown side by side. The faculty stated that the intention was to "educate the public" about the existence of this technology so they could be wary of it, "just like eventually most people were made aware that you can Photoshop an image".
Two, it uses residual connections, which help it avoid the phenomenon where certain features get stuck at fixed intervals of pixels. For example, the seam between two teeth may be stuck at pixels divisible by 32, because the generator learned to generate teeth during stage N−5 and consequently could only generate primitive teeth at that stage, before scaling up 5 times (thus intervals of 2^5 = 32).
One, it applies the style latent vector to transform the convolution layer's weights instead, thus solving the "blob" problem. Roughly speaking, the "blob" problem arises because using the style latent vector to normalize the generated image destroys useful information. Consequently, the generator learned to create a "distraction": a large blob that absorbs most of the effect of the normalization (somewhat similar to using flares to distract a heat-seeking missile).
At training time, usually only one style latent vector is used per generated image, but sometimes two ("mixing regularization") in order to encourage each style block to perform its stylization independently, without expecting help from other style blocks (since they might receive an entirely different style latent vector).
After training, multiple style latent vectors can be fed into each style block. Those fed to the lower layers control the large-scale styles, and those fed to the higher layers control the fine-detail styles.
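As a sketch, the inference-time style mixing described above amounts to choosing which latent vector feeds each style block; feeding one latent to the lower (coarse) blocks and another to the higher (fine) blocks mixes the two styles. The block count and crossover point below are illustrative, not a configuration taken from the papers:

```python
# Hypothetical sketch of style mixing at inference time: a generator with
# num_blocks style inputs receives one style latent per block.

def mix_styles(w1, w2, num_blocks, crossover):
    """Return per-block style latents: w1 below the crossover, w2 above."""
    return [w1 if i < crossover else w2 for i in range(num_blocks)]

# E.g. 18 style inputs (2 per resolution, 4x4 up to 1024x1024), split at block 8:
styles = mix_styles("w_coarse", "w_fine", num_blocks=18, crossover=8)
assert styles[0] == "w_coarse" and styles[-1] == "w_fine"
```

Moving the crossover point earlier or later shifts how much of the composite image's large-scale structure comes from the first latent.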
Progressive GAN is a method for stably training a GAN for large-scale image generation, by growing the generator from small to large scale in a pyramidal fashion. Like SinGAN, it decomposes the generator as G = G_1 ∘ G_2 ∘ ⋯ ∘ G_N, and the discriminator as D = D_N ∘ D_{N−1} ∘ ⋯ ∘ D_1.
array, and repeatedly passed through style blocks. Each style block applies a "style latent vector" via an affine transform ("adaptive instance normalization"), similar to how neural style transfer uses a Gramian matrix.
An image generated using StyleGAN that looks like a portrait of a young woman. This image was generated by an artificial neural network based on an analysis of a large number of photographs.
It also tunes the amount of data augmentation applied: starting at zero, it gradually increases the amount until an "overfitting heuristic" reaches a target level, hence the name "adaptive".
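The adaptive control loop can be sketched as a simple feedback rule; this is a minimal illustration, not Nvidia's exact update rule, and the target and step values are assumptions:

```python
# Sketch of the "adaptive" part of StyleGAN2-ADA: the augmentation
# probability p starts at 0 and is nudged up whenever the overfitting
# heuristic exceeds its target, and down otherwise.

def update_p(p, heuristic, target=0.6, step=0.01):
    """Move the augmentation probability toward the regime where the
    overfitting heuristic sits at its target level."""
    p += step if heuristic > target else -step
    return min(max(p, 0.0), 1.0)  # keep p a valid probability

p = 0.0
for h in [0.9, 0.9, 0.9, 0.5]:  # heuristic readings during training
    p = update_p(p, h)
assert abs(p - 0.02) < 1e-9     # three steps up, one step down
```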
In December 2018, Nvidia researchers distributed a preprint with accompanying software introducing StyleGAN, a GAN for producing an unlimited number of (often convincing) portraits of fake human faces.
The generator is thus forced to operate on the pixels in a way faithful to the continuous signals they represent, rather than operate on them as merely discrete signals. They further imposed rotational and translational invariance by using more signal filters.
To avoid discontinuity between stages of the GAN game, each new layer is "blended in" (Figure 2 of the paper). For example, this is how the second stage GAN game starts:
In 2021, a third version was released, improving consistency between fine and coarse details in the generator. Dubbed "alias-free", this version was implemented with PyTorch.
The second version of StyleGAN, called StyleGAN2, was published on February 5, 2020. It removes some of the characteristic artifacts and improves the image quality.
In December 2019, Facebook took down a network of accounts with false identities, and mentioned that some of them had used profile pictures created with machine learning techniques.
StyleGAN3 improves upon StyleGAN2 by solving the "texture sticking" problem, which can be seen in the official videos. They analyzed the problem using the Nyquist–Shannon sampling theorem.
The key architectural choice of StyleGAN-1 is a progressive growth mechanism, similar to Progressive GAN's. Each generated image starts as a constant 4 × 4 × 512 array.
Then G_{N−1} and D_{N−1} are added to reach the second stage of the GAN game, generating 8×8 images, and so on, until a GAN game generating 1024×1024 images is reached.
The original 2018 Nvidia StyleGAN paper, "A Style-Based Generator Architecture for Generative Adversarial Networks", at arXiv.org.
They argued that the layers in the generator had learned to exploit the high-frequency signal in the pixels they operate upon.
((1-\alpha )+\alpha \cdot G_{N-1})\circ u\circ G_{N},\qquad D_{N}\circ d\circ ((1-\alpha )+\alpha \cdot D_{N-1})
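Numerically, the blend-in above can be sketched with toy stand-ins: here u doubles each value (standing in for upsampling) and the new stage adds 1; the functions are illustrative, not the networks themselves. At α = 0 the new layer is invisible, and at α = 1 it is fully active:

```python
def g_n(x):        # old stage (toy stand-in)
    return x + 10

def u(x):          # upsampling (toy stand-in)
    return 2 * x

def g_n1(x):       # newly added stage (toy stand-in)
    return x + 1

def blended(x, alpha):
    h = u(g_n(x))                              # u ∘ G_N
    return (1 - alpha) * h + alpha * g_n1(h)   # (1-α)·id + α·G_{N-1}

assert blended(0, 0.0) == 20.0   # alpha=0: new layer ignored
assert blended(0, 1.0) == 21.0   # alpha=1: new layer fully blended in
```

Sliding α from 0 to 1 during training is what lets the new stage take over without a sudden discontinuity in the generated images.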
The resulting StyleGAN3 is able to generate images that rotate and translate smoothly, and without texture sticking.
The collection was made using a private dataset shot in a controlled environment with similar lighting and angles.
Similarly, two faculty members at the University of Washington's Information School used StyleGAN to create the website Which Face is Real?.
Strict lowpass filters are imposed between each generator's layers, so that the generator is forced to operate on the pixels in a way faithful to the continuous signals they represent.
In September 2019, a website called Generated Photos published 100,000 images as a collection of stock photos.
It is learned during the training, but afterwards it is held constant, much like a bias vector.
z′ can then be fed to the higher style blocks, to generate a composite image that has the large-scale style of x and the fine-detail style of x′.
A direct predecessor of the StyleGAN series is Progressive GAN, published in 2017.
It then adds noise, and normalizes (subtracting the mean, then dividing by the variance).
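The normalize-then-restyle step of a style block can be sketched on a single feature channel: normalize the features, then apply the style's affine scale and shift, as in adaptive instance normalization. This is a minimal illustration (noise injection omitted), and the names are assumptions, not the papers' code:

```python
import math

def adain(features, style_scale, style_shift, eps=1e-8):
    """Normalize one feature channel, then apply the style's affine transform."""
    mean = sum(features) / len(features)
    var = sum((f - mean) ** 2 for f in features) / len(features)
    std = math.sqrt(var + eps)
    return [style_scale * (f - mean) / std + style_shift for f in features]

channel = [1.0, 2.0, 3.0, 4.0]
out = adain(channel, style_scale=2.0, style_shift=5.0)
# After AdaIN the channel's statistics are set by the style:
assert abs(sum(out) / len(out) - 5.0) < 1e-6  # mean equals the style shift
```

The point is that the channel's own statistics are discarded and replaced by statistics dictated by the style latent vector.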
This was updated by StyleGAN2-ADA ("ADA" stands for "adaptive"), which uses invertible data augmentation.
This is called "projecting an image back to style latent space".
StyleGAN was able to run on Nvidia's commodity GPU processors.
This can be performed on real images as well. First, run a gradient descent to find z and z′ such that G(z) ≈ x and G(z′) ≈ x′.
StyleGAN is designed as a combination of Progressive GAN with neural style transfer.
generating and discriminating 8×8 images. Here, the functions u and d are image up- and down-sampling functions, and α is a blend-in factor (much like an alpha channel in image compositing) that smoothly glides from 0 to 1.
In February 2019, Uber engineer Phillip Wang used the software to create the website This Person Does Not Exist.
D=D_{N}\circ D_{N-1}\circ \cdots \circ D_{1}
G=G_{1}\circ G_{2}\circ \cdots \circ G_{N}
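The pyramidal decomposition can be sketched with toy stages: if each stage (a stand-in, not a real network) doubles the resolution, then composing N stages maps a 4×4 seed up to 4·2^N pixels per side:

```python
def make_stage():
    return lambda size: size * 2          # one generator stage (toy stand-in)

def compose(stages, size):
    """Apply the stages in order, G_N first and G_1 last."""
    for g in stages:
        size = g(size)
    return size

stages = [make_stage() for _ in range(8)]  # 8 doublings: 4 -> 1024
assert compose(stages, 4) == 1024
```

Progressive GAN's growth schedule amounts to training this composition one stage at a time, starting from the innermost (lowest-resolution) pair.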
During training, at first only G_N and D_N are used in a GAN game to generate 4×4 images.
The main architecture of StyleGAN-1 and StyleGAN-2.
Multiple images can also be composed this way.
Just before, the GAN game consists of the pair G_N, D_N, generating and discriminating 4×4 images.
StyleGAN2 improves upon StyleGAN in two ways.
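The first improvement, applying the style to the convolution weights rather than to the activations, can be sketched on a one-dimensional "kernel": scale the weights by the style ("modulation"), then rescale so the expected output magnitude stays controlled ("demodulation"). This is an illustrative simplification, not Nvidia's implementation:

```python
import math

def modulate(weights, style, eps=1e-8):
    """Scale weights by the style, then demodulate back to unit L2 norm."""
    w = [wi * style for wi in weights]                      # modulation
    demod = 1.0 / math.sqrt(sum(wi * wi for wi in w) + eps)
    return [wi * demod for wi in w]                         # demodulation

w = modulate([3.0, 4.0], style=10.0)
# After demodulation the kernel has unit L2 norm regardless of the style:
assert abs(sum(wi * wi for wi in w) - 1.0) < 1e-6
```

Because the magnitudes are controlled in the weights rather than by normalizing the image itself, the generator no longer needs the "blob" trick to protect useful information.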
Just after, the GAN game consists of the pair ((1−α) + α·G_{N−1}) ∘ u ∘ G_N and D_N ∘ d ∘ ((1−α) + α·D_{N−1}),
To solve this, they proposed imposing strict lowpass filters between each generator's layers.
G(z)\approx x,\quad G(z')\approx x'
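The projection step can be sketched as gradient descent on z to minimize |G(z) − x|², here with a toy differentiable "generator" G(z) = 3z + 1 (an assumption purely for illustration) so the gradient can be written by hand:

```python
def G(z):
    return 3.0 * z + 1.0          # toy stand-in for the trained generator

def project(x, z=0.0, lr=0.02, steps=200):
    """Gradient descent on z to minimize the squared reconstruction error."""
    for _ in range(steps):
        grad = 2.0 * (G(z) - x) * 3.0   # d/dz |G(z) - x|^2
        z -= lr * grad
    return z

x_target = 7.0                    # "image" to invert; the true z here is 2
z_hat = project(x_target)
assert abs(G(z_hat) - x_target) < 1e-3
```

With a real generator the gradient would come from automatic differentiation rather than a closed form, but the loop is the same idea.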
Then z can be fed to the lower style blocks, and z′ to the higher style blocks.
StyleGAN is a generative adversarial network (GAN) introduced by Nvidia researchers in December 2018, with source made available in February 2019.
Style-mixing between two images.
StyleGAN depends on Nvidia's CUDA software, GPUs, and Google's TensorFlow, or Meta AI's PyTorch.
See also: Human image synthesis