Knowledge (XXG)

Talk:ZFS


You're suggesting that the encyclopedia give advice or warnings to users based upon issues that may or may not affect them. We don't give advice. What I added lets readers know of problems that some users have experienced, without telling them how to do things, when to do things, or where to do things. The entry is sourced, so they can inform themselves further if they are bothered by it. Same with encryption. Since the failures are not reliably repeatable, we can only inform readers that some people have experienced problems. We can't give advice, or warnings, or discouragement based on that. Again, the statement is cited; the reader can explore further if they wish. cheers.

"Other limitations specific to ZFS" mostly aren't specific to ZFS. "Capacity expansion is normally achieved by adding groups of disks as a top-level vdev: simple device, RAID-Z, RAID Z2, RAID Z3, or mirrored..." is not a limitation; it mostly describes how RAID topology works. "IOPS performance of a ZFS storage pool can suffer if the ZFS raid is not appropriately configured." As is the case with all RAIDs. "Resilver (repair) of a failed disk in a ZFS RAID can take a long time which is not unique to ZFS, it applies to all types of RAID, in one way or another." The statement even admits this is not particular to ZFS!

'Anecdotal' doesn't mean wrong or false/untrue. It means that there's a cohort of users who have reported "problems" in a specific use-case. End-users can only express/represent their personal experiences. Based on the reports, failure modes can't be characterized, they are broadly inconsistent, and users have also reported no failures in systems with approximately the same structure/load. Anecdotal only means that there is no formal characterization of the failure mode via repeatability in testing. cheers.
errors fixed before being posted into the public encyclopedia. That however is dependent upon the precedent issues with the content that I described. Numerous opinions on the 'net suggest that by far the larger issue is over-utilization; fragmentation being pointed to as the proximate cause of performance issues hasn't been determined as fact.
the space to be used. There won't be any solution to this issue. It surfaces when there is not much space left on the dataset, and also when the dataset is used with rewriting processes, e.g. databases. The algorithm for managing free space does not scale well for very fragmented free space, so ZFS is better suited for cold data storage.
"This led to threads in online forums where ZFS developers sometimes tried to provide ad-hoc help to home and other small scale users, facing loss of data due to their inadequate design or poor system management." The link is not to a thread in an "online forum" but to a blog post about data recovery by preeminent ZFS developers.
But you show exactly what the problem is: "what if". Knowledge (XXG) doesn't present speculations. You say what if it's 95%. Do you have evidence that 95% frag/95% utilization is a threshold? Will they experience no performance problems at 94% frag/95% utilization? What about 75/99? 99/10? 80/50? The
Of course it is a free-space fragmentation issue, and I believe the Oracle recommendations exist because of that fragmentation problem, and this is clearly a problem with ZFS itself. It can hide itself when we have mostly cold storage, but it hits hard in the cases described in the Oracle docs, so we need to expose
Hey Robert, before we go further, I'd like your permission to copy this whole thread over to the ZFS talk page. I don't want there to be any sense that I'm "gatekeeping" the content - I'm not an expert, though I did run ZFS filesystems in production for many years. The fragmentation issue - from what
The problem is there's multiple issues at work here. First, the question of whether or not free space fragmentation is the cause of performance issues, or rather over-utilization of the disk space. Virtually all filesystems are prone to performance issues as available free space diminishes, and since
Unfortunately, as before, this requires reliable secondary sources. GitHub commentary/issues don't rise to that level. What you inserted in the article wasn't neutrally worded, and all-caps are specifically unacceptable per the Manual of Style. There were also grammar and punctuation errors. Sorry I
No, the Oracle documentation doesn't 'clearly state' that there is a "problem". It provides guidance for good performance, which is why it's titled "Storage Pool Practices for Performance", and the very first line states "In general, keep pool capacity below 90% for best performance". That advice can
Lastly - I suspect you may not be a native English speaker, based on the many grammatical and spelling errors in the content presented. That's not a problem in itself; fluency in any language isn't easy. However, for the content to be appropriate to the English Wikipedia, it would have to have all the
Fragmentation of free space is a very important issue because it is not reversible as it is in e.g. btrfs. The only solution is to use zfs send | receive to recreate the dataset without fragmentation, so it could be a problem for production systems with several-TB datasets because of the downtime and twice
clear to me, nor do they approach reproducibility in any formal sense of the term. Achieving failure "over 50% of the time" as Rincebrain wrote doesn't constitute reliably repeatable, only that "failure happens at slightly greater than random chance in the results". Repeatability of a failure is
If you can find a better source - a general technology news site would be a good start - then possibly the claims could be notable for the article. I looked around and couldn't find any discussion of the matter - only blog posts and forum commentary, which just aren't acceptable for making broad
Agreed. Considering the volume of reports of problems, at worst they should do as suggested in the thread about modifying the documentation. They should either add a warning/caution, or remove the ability to add encryption to existing unencrypted data (obviously they can't remove the encryption
adjust the system to address the performance problems: you increase available disk space so that the over-utilization no longer exists. Are there any reports of someone having high free-space fragmentation and over-utilization who have significantly increased their total space available and
So that really confirms what I wrote, whether it is space or fragmentation or both, but I think users should know these things, because they place many TB of data on ZFS, so moving it will be very costly, especially in production. And there is scarce info about that real problem in
What I can say for sure is that encryption is not maintained and not supported, and from ZFS I've learned that open source is good when many people use an open source project - issues get fixed. If something is used rarely, it is guaranteed to be unmaintained or broken, like ZFS encryption ;(
I'll take a shot at it a little later today. The issue here is that these "problems" are speculative and situational, and we can't state emphatically that something is an inherent flaw when all that's available are anecdotal reports. I may be able to finesse a solution. cheers.
that in the article for sure - the creator of ZFS says we have a problem ;). Also very suspicious is that Oracle doesn't give a solution to revert this lower performance, and if it is because of fragmentation, there is no solution other than rewriting the whole dataset :( which is a shame.
, not free space fragmentation. In 2024, over-utilization is an administration matter, not a ZFS matter. Decades back I ran Solaris systems on UFS. Guess what? Performance went to hell if the disks were over-utilized. Fragmentation didn't make any difference. It was
I'll note again that as written, the article now acknowledges that some users have experienced problems with native encryption; we can't claim anything more than that. Knowledge (XXG) doesn't offer advice or present conclusions that haven't been verified
I think that encryption is not anecdotal, because the issues are known to the project owners, so can you delete the word 'anecdotal'? I think they are not reliably repeatable because there is no interest in doing so - as I've said before - no support of encryption from
. The massive amount of speculation in the threads discussing the problem shows that it fails in broadly different ways that can't be characterized with uniformity, and nobody can figure out what the problem is, because it's not reliably repeatable.
isn't accepted: "Do not combine material from multiple sources to reach or imply a conclusion not explicitly stated by any source. Similarly, do not combine different parts of one source to reach or imply a conclusion not explicitly stated by the
((Note: This discussion arose on my talk page after I'd reverted the changes editor Robercik st had added. I think it's at a point where more 'eyeballs' are needed to come to consensus, so I'm copying it here unaltered for further discussion.))
Secondly, most of what you've written is narrative opinion. The link to the ZFS code discussions isn't a reliable source for the claims being made - not by Wikipedia's standards. You'd need to find a secondary source that describes these
If you can find a reliable source that says that it is a limitation unique to ZFS, then by all means, that would be appropriate to the article. Until then, we can't make claims that can't be verified directly from the sources. cheers.
If you want to craft a new segment, perhaps to go under the 'read/write efficiency' section? - I recommend posting it to the ZFS talk page, where I'll be happy to do 'wordsmithing' on it so it reads better for article space. cheers.
But I didn't find such a recommendation for e.g. ext4, XFS, or other filesystems, so I find it a very specific limitation of ZFS which can surprise users in a very bad way in the case of TBs of data. And it also confirms what is stated in
We can't make the conclusion that it's a unique limitation of ZFS; a reliable source has to. Nowhere in the quote from Matt Ahrens does he mention ext4, XFS, ReiserFS, FAT32, or any other filesystem, nor does the Oracle
If all of the data is small blocks that have constant rewrites, then you should monitor your pool closely once the capacity gets over 80%. The sign to watch for is increased disk IOPS to achieve the same level of client
Regarding fragmentation - the issue must be present if the creator of ZFS is saying that the issue is unsolvable. What's worse, it is not curable - only by send | receive - so at least let users know so they are warned :).
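For readers unfamiliar with the send | receive workaround being discussed, a minimal sketch follows. The pool and dataset names (`tank/data`, `tank/data-new`) are hypothetical; the procedure needs enough free space for a second full copy of the data, plus downtime for the final swap - which is exactly the cost for multi-TB production datasets raised above.

```shell
# Hypothetical sketch: rewrite a fragmented dataset by streaming it
# into a fresh one, which lays the data out anew.
zfs snapshot tank/data@rewrite
zfs send tank/data@rewrite | zfs receive tank/data-new

# After verifying the copy, retire the old dataset and rename the new one.
zfs destroy -r tank/data
zfs rename tank/data-new tank/data
```

This is not executable without a live ZFS pool, and in production one would also replicate incremental snapshots and properties; it is shown only to make the shape of the workaround concrete.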
To this: "describing no issues at all at 95% capacity" - he has only 10% fragmented space, so he could see no issue at all. The question is what happens if there is 95% space used and 95% fragmented free space.
of ZFS - they're merely suggestions for maintaining good performance. So, the material would certainly be useful in the article, but it can't be presented specifically as a "limitation" unique to ZFS.
Encryption - not supported by the project. There are numerous issues which are not fixed the way they are when a bug surfaces in code without encryption. USE ENCRYPTION ONLY WHEN YOU ARE PREPARED TO LOSE DATA
to have the exact same performance problems? If so, that would suggest that free-space fragmentation is the problem. Absent that evidence, it's a tossup between frag and utilization. cheers.
If a large percentage (more than 50%) of the pool is made up of 8k chunks (DBfiles, iSCSI Luns, or many small files) and have constant rewrites, then the 90% rule should be followed strictly.
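For readers who want to check where a pool stands relative to that guidance, OpenZFS exposes both metrics directly; a minimal sketch, assuming a hypothetical pool named `tank`:

```shell
# Inspect the two numbers the guidance is about. CAP is pool capacity;
# FRAG is ZFS's free-space fragmentation metric (how scattered the
# remaining free space is - not file fragmentation).
zpool list -o name,size,alloc,free,capacity,fragmentation tank

# Watch per-vdev IOPS at 5-second intervals, to spot the rising disk
# I/O at constant client load that the quoted doc says to watch for.
zpool iostat -v tank 5
```

These commands require a live ZFS pool, so the output here depends entirely on the system being inspected.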
I've read - is quite a complicated matter, since it's not fragmentation in the sense the majority of people think of it - as in, it's not file fragmentation, it's free-space fragmentation.
I don't know of such requirements on ext4, for example, and from my experience Postgres workloads work very well past a 90% full FS on ext4, so that kind of problem should surely be stated
Several people who got burned by native encryption recently asked me why there were no warnings around it if it has known bad failure modes, and I didn’t really have a good answer.
Re fragmentation, again, there's no evidence to distinguish between too much free space fragmentation as being the problem, or over-utilization of space being the problem. You
especially the comment with a paragraph from the author of ZFS, Matt Ahrens, on the "Block-pointer Rewrite project for ZFS Fragmentation". So there is clearly a problem specific to ZFS.
If data is mostly added (write once, remove never), then it's very easy for ZFS to find new blocks. In this case, the percentage can be higher than normal; maybe up to 95%
Regarding the fragmentation performance issues - all reports are anecdotal, with reports concurring with the claims and others describing no issues at all at 95% capacity
https://www.reddit.com/r/zfs/comments/10n8fsn/comment/j6b8k1m/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button
https://docs.oracle.com/en/operating-systems/solaris/oracle-solaris/11.4/manage-zfs/storage-pool-practices-performance.html#GUID-3568A4BD-BE69-4F17-8A2E-A7ED5C7EFA79
article now clearly states that users have reported performance issues related to fragmentation and utilization. Nothing more is required or appropriate. cheers.
But - with your permission, let's move this over there so that we can get more eyeballs on the matter and come to a collaborative consensus. Sound okay? cheers.
It clearly states that the problem is with searching through fragmented free space, because when you only write, there is no fragmentation of free space.
I concur. Much of this just seems like padding to try to give the impression that there are greater limitations than actually exist. cheers.
. So the worst part is that no one from the project is willing to fix issues that are encryption related ;(. This is a real problem - no support.
Regarding the native encryption issues, this seems to be better established as a problem, but it is not without contrary POVs as well
Ok, but can we put fragmentation as a limitation specific to ZFS, in that it is not fixed so we can't defragment, as Matt Ahrens says?
Ok, so help me put it in neutral wording. As for a secondary source, you have a comment from the group in which Matt Ahrens works: "
https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/Workload%20Tuning.html#metaslab-allocator
algorithms as that would be disastrous for those already using it). It's a strange situation. cheers.
That's an interesting and useful link. However - it's important to be aware of the WP rules about
About encryption - of course you find many people who hadn't been burned, but read this comment:
801: 793: 757: 717: 674: 612: 581: 536: 471: 417: 1065:
Same as previous response. The article notes that users have reported problems. cheers.
So I've reviewed all of the documents you've provided over the course of the discussion.
be applied to literally any filesystem. Knowledge (XXG) doesn't issue advice. cheers.
-- but we can acknowledge that the reports exist. I'm crafting text to do so. cheers.
didn't reply earlier, I didn't get notification of this talk page addition. cheers.
https://docs.google.com/document/d/1w2jv2XVYFmBVvG1EGf-9A5HBVsjAYoLIFZAnWHhV-BM/edit
to adjust the system so that the performance wasn't terrible. Again I point you to
we don't have meaningful metrics to rely on, we can't ascribe one over the other.
fragmentation so you can't adjust the system to revert performance problems :(
https://github.com/openzfs/openzfs-docs/issues/494#issuecomment-1946116133
298: 673:((This is the end of the discussion that was on my talk page)) cheers. 906:
https://github.com/openzfs/zfs/issues/11688#issuecomment-910764101
Encryption problems are reproducible: read Rincebrain's comment:
We can't put speculative claims in the encyclopedia -- remember
Ok, this has lasted so long that I want to add something like this:
Of course you can correct grammar errors, but the idea stays right :)
Reports of fragmentation problems and encryption problems
is wrong with native encryption, in specific use-cases,
A shame for a fs that is advertised as the last word in filesystems :)
Ok, so we leave it as it is. Thanks for the collaboration :)
1140:18:53, 17 August 2024 (UTC) 1126:16:17, 17 August 2024 (UTC) 1104:18:44, 17 August 2024 (UTC) 1090:16:08, 17 August 2024 (UTC) 1074:18:40, 17 August 2024 (UTC) 1061:10:51, 17 August 2024 (UTC) 1037:22:02, 18 August 2024 (UTC) 1013:13:51, 19 August 2024 (UTC) 999:21:56, 18 August 2024 (UTC) 980:16:45, 19 August 2024 (UTC) 966:14:38, 19 August 2024 (UTC) 951:21:51, 18 August 2024 (UTC) 917:12:01, 18 August 2024 (UTC) 900:11:25, 18 August 2024 (UTC) 880:11:13, 18 August 2024 (UTC) 855:18:37, 17 August 2024 (UTC) 830:10:45, 17 August 2024 (UTC) 809:22:51, 15 August 2024 (UTC) 765:16:59, 15 August 2024 (UTC) 751:09:03, 15 August 2024 (UTC) 725:19:19, 14 August 2024 (UTC) 711:10:13, 11 August 2024 (UTC) 438:You can read about this in 1210: 1184:C-Class Computing articles 345:project's importance scale 338: 279: 258: 80:Be welcoming to newcomers 890:It is very clear to me 1194:All Computing articles 442:if you need more info 307:information technology 240:This article is rated 75:avoid personal attacks 844:what wikipedia is not 786:what wikipedia is not 624:Ofcourse You can copy 294:WikiProject Computing 244:on Knowledge (XXG)'s 195:Auto-archiving period 100:Neutral point of view 105:No original research 325:Computing articles 246:content assessment 86:dispute resolution 47: 412: 400:comment added by 386: 374:comment added by 359: 358: 355: 354: 351: 350: 226: 225: 66:Assume good faith 43: 1201: 1169: 1164: 1137:an editor he is. 1101:an editor he is. 1071:an editor he is. 1034:an editor he is. 996:an editor he is. 977:an editor he is. 948:an editor he is. 852:an editor he is. 836:over-utilization 806:an editor he is. 798:an editor he is. 762:an editor he is. 722:an editor he is. 679:an editor he is. 617:an editor he is. 586:an editor he is. 541:an editor he is. 476:an editor he is. 422:an editor he is. 

Text is available under the Creative Commons Attribution-ShareAlike License. Additional terms may apply.