Word error rate

Word error rate (WER) is a common metric of the performance of a speech recognition or machine translation system. A WER of 0 indicates that the recognized text matches the reference exactly, and higher values indicate greater disagreement; a WER of 0.8, for example, means an 80% error rate relative to the reference. Note that, as explained below, the WER is not bounded above by 1.0.

The general difficulty of measuring performance lies in the fact that the recognized word sequence can have a different length from the reference word sequence (supposedly the correct one). The WER is derived from the Levenshtein distance, working at the word level instead of the phoneme level. The WER is a valuable tool for comparing different systems, as well as for evaluating improvements within one system. This kind of measurement, however, provides no details on the nature of translation errors, and further work is therefore required to identify the main source(s) of error and to focus any research effort.

This problem is solved by first aligning the recognized word sequence with the reference (spoken) word sequence using dynamic string alignment. Examination of this issue is seen through a theory called the power law, which states the correlation between perplexity and word error rate (Klakow and Peters, 2002).
Word error rate can then be computed as:

    WER = \frac{S + D + I}{N} = \frac{S + D + I}{S + D + C}

where

    S is the number of substitutions,
    D is the number of deletions,
    I is the number of insertions,
    C is the number of correct words, and
    N is the number of words in the reference (N = S + D + C).

The intuition behind 'deletion' and 'insertion' is how to get from the reference to the hypothesis. So if we have the reference "This is wikipedia" and the hypothesis "This _ wikipedia", we call it a deletion.
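As an illustration, the following is a minimal Python sketch of this computation; the function name and structure are illustrative rather than taken from any particular toolkit, and the alignment is the standard word-level Levenshtein dynamic program described above.

    # word_error_rate: a minimal, illustrative sketch (not a library function).
    # Assumes a non-empty reference; words are compared after whitespace splitting.
    def word_error_rate(reference: str, hypothesis: str) -> float:
        ref = reference.split()
        hyp = hypothesis.split()
        # d[i][j] = minimum number of edits turning ref[:i] into hyp[:j]
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i                              # i deletions
        for j in range(len(hyp) + 1):
            d[0][j] = j                              # j insertions
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                if ref[i - 1] == hyp[j - 1]:
                    d[i][j] = d[i - 1][j - 1]        # correct word, no cost
                else:
                    d[i][j] = 1 + min(d[i - 1][j - 1],   # substitution
                                      d[i - 1][j],       # deletion
                                      d[i][j - 1])       # insertion
        # d[len(ref)][len(hyp)] = S + D + I; N = len(ref)
        return d[len(ref)][len(hyp)] / len(ref)

For the reference "This is wikipedia" and hypothesis "This wikipedia" from the example above, the single deletion gives a WER of 1/3.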
When reporting the performance of a speech recognition system, word accuracy (WAcc) is sometimes used instead:

    WAcc = 1 - WER = \frac{N - S - D - I}{N} = \frac{C - I}{N}

Note that since N is the number of words in the reference, the word error rate can be larger than 1.0, and thus the word accuracy can be smaller than 0.0.
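Continuing the sketch above (word_error_rate is the illustrative function defined there, not a library call), a hypothesis with many insertions shows how WER can exceed 1.0 and WAcc can go negative:

    # Four insertions against a one-word reference: WER = 4/1 = 4.0,
    # so word accuracy WAcc = 1 - WER = -3.0.
    wer = word_error_rate("hello", "oh hello there you two")
    print(wer, 1.0 - wer)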
Experiments

It is commonly believed that a lower word error rate shows superior accuracy in recognition of speech, compared with a higher word error rate. However, at least one study has shown that this may not be true. In a Microsoft Research experiment, it was shown that, if people were trained under a condition "that matches the optimization objective for understanding" (Wang, Acero and Chelba, 2003), they showed higher accuracy in understanding of language than others who demonstrated a lower word error rate, showing that true understanding of spoken language relies on more than just high word recognition accuracy.

Other metrics

One problem with using a generic formula such as the one above, however, is that no account is taken of the effect that different types of error may have on the likelihood of successful outcome, e.g. some errors may be more disruptive than others and some may be corrected more easily than others. These factors are likely to be specific to the syntax being tested. A further problem is that, even with the best alignment, the formula cannot distinguish a substitution error from a combined deletion plus insertion error.

Hunt (1990) has proposed the use of a weighted measure of performance accuracy in which errors of substitution are weighted at unity but errors of deletion and insertion are both weighted only at 0.5, thus:

    WER = \frac{S + 0.5D + 0.5I}{N}
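A minimal sketch of this weighting, assuming the counts S, D and I have already been obtained from an alignment such as the one sketched earlier (the function name is illustrative):

    # Hunt's weighted measure: substitutions cost 1, deletions and
    # insertions cost 0.5 each, normalized by the reference length N.
    def hunt_weighted_wer(s: int, d: int, i: int, n: int) -> float:
        return (s + 0.5 * d + 0.5 * i) / n

    # e.g. 1 substitution, 2 deletions, 1 insertion in a 10-word reference:
    # (1 + 1.0 + 0.5) / 10 = 0.25, versus a conventional WER of 0.4.
    print(hunt_weighted_wer(1, 2, 1, 10))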
There is some debate, however, as to whether Hunt's formula may properly be used to assess the performance of a single system, as it was developed as a means of comparing competing candidate systems more fairly. A further complication is whether a given syntax allows for error correction and, if it does, how easy that process is for the user. There is thus some merit to the argument that performance metrics should be developed to suit the particular system being measured.

Whichever metric is used, however, one major theoretical problem in assessing the performance of a system is deciding whether a word has been "mis-pronounced", i.e. whether the fault lies with the user or with the recogniser. This may be particularly relevant in a system designed to cope with non-native speakers of a given language or with strong regional accents.

The pace at which words should be spoken during the measurement process is also a source of variability between subjects, as is the need for subjects to rest or take a breath. All such factors may need to be controlled in some way.

For text dictation it is generally agreed that performance accuracy below 95% is not acceptable, but this again may be syntax- and/or domain-specific, e.g. whether there is time pressure on users to complete the task, whether there are alternative methods of completion, and so on.

The term "single word error rate" is sometimes used to refer to the percentage of incorrect recognitions for each different word in the system vocabulary.

Edit distance

The word error rate may also be referred to as the length-normalized edit distance. The normalized edit distance between X and Y, d(X, Y), is defined as the minimum of W(P)/L(P), where P is an editing path between X and Y, W(P) is the sum of the weights of the elementary edit operations of P, and L(P) is the number of these operations (the length of P).
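In symbols, this definition reads (a direct transcription of the prose above):

    d(X, Y) = \min_{P} \frac{W(P)}{L(P)}

where the minimum is taken over all editing paths P between X and Y.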
See also

BLEU
F-Measure
METEOR
NIST (metric)
ROUGE (metric)

References

Klakow, Dietrich; Peters, Jochen (September 2002). "Testing the correlation of word error rate and perplexity". Speech Communication. 38 (1–2): 19–28. doi:10.1016/S0167-6393(01)00041-3. ISSN 0167-6393.
Wang, Y.; Acero, A.; Chelba, C. (2003). Is Word Error Rate a Good Indicator for Spoken Language Understanding Accuracy. IEEE Workshop on Automatic Speech Recognition and Understanding. St. Thomas, US Virgin Islands. CiteSeerX 10.1.1.89.424.

Other sources

Marzal, Andrés; Vidal, Enrique: Computation of Normalized Edit Distance and Application.
McCowan et al., 2005: On the Use of Information Retrieval Measures for Speech Recognition Evaluation.
Hunt, M. J., 1990: Figures of Merit for Assessing Connected Word Recognisers (Speech Communication, 9, 1990, pp. 239–336).
Zechner, K.; Waibel, A.: Minimizing Word Error Rate in Textual Summaries of Spoken Language.
