Machine translation of sign languages

The machine translation of sign languages has been possible, albeit in a limited fashion, since 1977, when a research project successfully matched English letters from a keyboard to ASL manual alphabet letters simulated on a robotic hand. These technologies translate signed languages into written or spoken language, and written or spoken language into sign language, without the use of a human interpreter. Sign languages possess different phonological features than spoken languages, which has created obstacles for developers. Developers use computer vision and machine learning to recognize specific phonological parameters and epentheses unique to sign languages, while speech recognition and natural language processing allow interactive communication between hearing and deaf people.

Limitations

Sign language translation technologies are limited in the same way as spoken language translation: none can translate with 100% accuracy. In fact, sign language translation technologies lag far behind their spoken language counterparts. This is in no small part because signed languages have multiple articulators. Where spoken languages are articulated through the vocal tract, signed languages are articulated through the hands, arms, head, shoulders, torso, and parts of the face. This multi-channel articulation makes translating sign languages very difficult. An additional challenge for sign language MT is that signed languages have no formal written format. There are notation systems, but no writing system has been adopted widely enough by the international Deaf community to be considered the 'written form' of a given sign language. Sign languages are instead recorded in various video formats. There is, for example, no gold-standard parallel corpus large enough for statistical machine translation.
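The multi-channel articulation described above can be made concrete with a toy data structure. This is only an illustrative sketch — the channel names below are invented for the example, not a standard annotation scheme:

```python
from dataclasses import dataclass, field

@dataclass
class SignFrame:
    """One time-step of a sign, captured across several articulation channels.

    Unlike a spoken-language token, a single sign carries simultaneous
    information on independent channels, all of which a translator must model.
    """
    hand_shape: str          # e.g. a handshape label such as "flat-B"
    hand_location: str       # where the hand is relative to the body
    movement: str            # direction/trajectory of the movement
    nonmanual: dict = field(default_factory=dict)  # eyebrows, mouth, head tilt...

# A toy two-frame sign: every frame varies on several channels at once,
# which is why signs are recorded as video rather than as linear text.
sign = [
    SignFrame("flat-B", "chest", "outward", {"eyebrows": "raised"}),
    SignFrame("flat-B", "neutral-space", "downward", {"eyebrows": "neutral"}),
]
print(len(sign), sign[0].hand_shape)
```

Even this toy shows why there is no obvious 'written form': flattening the parallel channels into a single symbol stream loses information that a video recording preserves.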
History

The history of automatic sign language translation began with the development of hardware such as finger-spelling robotic hands. In 1977, a finger-spelling hand project called RALPH (short for "Robotic Alphabet") created a robotic hand that could translate the alphabet into fingerspelling. Later, gloves with motion sensors became mainstream, and projects such as the CyberGlove and VPL Data Glove were born. This wearable hardware made it possible to capture signers' hand shapes and movements with the help of computer software. With the development of computer vision, however, wearable devices were largely replaced by cameras because of their efficiency and the fewer physical restrictions they place on signers.

To process the data collected through these devices, researchers implemented neural networks such as the Stuttgart Neural Network Simulator for pattern recognition in projects such as the CyberGlove. Researchers also use many other approaches to sign recognition: Hidden Markov Models are used to analyze data statistically, for example, and GRASP and other machine learning programs use training sets to improve the accuracy of sign recognition. Fusion of non-wearable technologies such as cameras and Leap Motion controllers has been shown to increase the capability of automatic sign language recognition and translation software.
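The idea of using a training set to recognize signs from sensor data can be sketched in a few lines. This is a deliberately minimal nearest-neighbour stand-in for the statistical models (such as Hidden Markov Models) used in the research described above; the feature vectors and sign labels are invented for the example:

```python
import math

# Toy training set: (feature vector, sign label). The five numbers stand in
# for finger-flexion sensor readings; real gloves such as the CyberGlove
# produce far richer data streams.
TRAIN = [
    ([0.9, 0.9, 0.9, 0.9, 0.9], "fist"),
    ([0.1, 0.1, 0.1, 0.1, 0.1], "open-hand"),
    ([0.1, 0.9, 0.9, 0.9, 0.2], "point"),
]

def classify(reading):
    """Label a sensor reading by its nearest example in the training set.

    More (and more varied) training data shrinks the error rate, which is
    the point of training-set-based approaches to sign recognition.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TRAIN, key=lambda ex: dist(ex[0], reading))[1]

print(classify([0.85, 0.95, 0.8, 0.9, 0.88]))  # a slightly noisy fist
```

A real system would classify sequences of such readings over time rather than single frames, which is exactly what sequence models like HMMs add.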
Technologies

VISICAST: http://www.visicast.cmp.uea.ac.uk/Visicast_index.html

eSIGN project: http://www.visicast.cmp.uea.ac.uk/eSIGN/index.html

The American Sign Language Avatar Project at DePaul University: http://asl.cs.depaul.edu/

Spanish to LSE: López-Ludeña, San-Segundo, González, López, and Pardo (2012), Methodology for developing a Speech into Sign Language Translation System in a New Semantic Domain.
SignAloud

SignAloud is a technology that incorporates a pair of gloves, made by a group of students at the University of Washington, that transliterate American Sign Language (ASL) into English. In February 2015 Thomas Pryor, a hearing student at the University of Washington, created the first prototype for the device at Hack Arizona, a hackathon at the University of Arizona. Pryor continued to develop the invention, and in October 2015 he brought Navid Azodi onto the SignAloud project for marketing and help with public relations. Azodi has a rich background in business administration, while Pryor has a wealth of experience in engineering. In May 2016, the duo told NPR that they were working more closely with people who use ASL so that they could better understand their audience and tailor the product to these users' actual rather than assumed needs. However, no further versions have been released since then. The invention was one of seven to win the Lemelson-MIT Student Prize, which seeks to award and applaud young inventors. It fell under the "Use it!" category of the award, which covers technological advances to existing products, and earned the team $10,000.

The gloves have sensors that track the user's hand movements and send the data to a computer system via Bluetooth. The computer system analyzes the data and matches it to English words, which are then spoken aloud by a digital voice. The gloves cannot take written English input and produce glove movement, nor hear spoken language and sign it to a deaf person, which means they do not provide reciprocal communication. The device also does not incorporate facial expressions and other nonmanual markers of sign languages, which may alter the actual interpretation from ASL.
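The matching stage of a glove pipeline like SignAloud's — sensor data arrives over Bluetooth and is matched to English words — might be sketched as follows. This is a hypothetical illustration, not the project's actual implementation; the template names and sensor values are invented:

```python
# Stored gesture templates: one representative sensor frame per English word.
# Values are made up for the sketch; a real device would use richer features
# and sequence matching rather than single frames.
GESTURE_TEMPLATES = {
    "hello": (0.2, 0.9, 0.1),
    "thanks": (0.8, 0.2, 0.7),
}

def match_word(frame, templates=GESTURE_TEMPLATES):
    """Return the English word whose stored template is closest to the frame."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda w: dist(templates[w], frame))

word = match_word((0.25, 0.85, 0.15))
print(word)
# A real device would then pass `word` to a text-to-speech engine. Note the
# pipeline runs in one direction only, matching the text's point that the
# gloves do not provide reciprocal communication.
```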
ProDeaf

ProDeaf (WebLibras) is computer software that can translate both text and voice into Portuguese Libras (Portuguese Sign Language) "with the goal of improving communication between the deaf and hearing." A beta edition for American Sign Language is currently in production as well. The original team began the project in 2010 with a combination of experts, including linguists, designers, programmers, and translators, both hearing and deaf. The team originated at the Federal University of Pernambuco (UFPE), from a group of students involved in a computer science project. The group had a deaf team member who had difficulty communicating with the rest of the group; in order to complete the project and help their teammate communicate, the group created Proativa Soluções and has been moving forward ever since. The current beta version in American Sign Language is very limited. For example, there is a dictionary section, and the only word under the letter 'j' is 'jump'. If the device has not been programmed with a word, the digital avatar must fingerspell it. The last update of the app was in June 2016, but ProDeaf has been featured in over 400 stories across the country's most popular media outlets.

The application cannot read sign language and turn it into words or text, so it serves only as one-way communication. Additionally, the user cannot sign to the app and receive an English translation in any form, as English is still in the beta edition.
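The dictionary-with-fingerspelling fallback that ProDeaf is described as using can be sketched as a simple lookup. This is an illustrative guess at the behaviour, not ProDeaf's actual code; the tiny dictionary mirrors the article's example that 'jump' is the only entry under 'j':

```python
# Known signs map to pre-recorded avatar animations; anything else falls
# back to spelling the word out letter by letter.
SIGN_DICTIONARY = {"jump": "<avatar clip: JUMP>"}

def render(word):
    """Return an avatar clip if the word is known, else a fingerspelling plan."""
    clip = SIGN_DICTIONARY.get(word.lower())
    if clip is not None:
        return clip
    return [f"fingerspell:{letter}" for letter in word.lower()]

print(render("jump"))  # known word: a single avatar clip
print(render("jog"))   # unknown word: spelled out letter by letter
```

The fallback keeps the app usable with a small vocabulary, but fingerspelled output is much slower for the viewer than a proper sign, which is why the limited dictionary matters.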
Kinect Sign Language Translator

Since 2012, researchers from the Chinese Academy of Sciences and specialists in deaf education from Beijing Union University in China have been collaborating with the Microsoft Research Asia team to create the Kinect Sign Language Translator. The translator consists of two modes: translator mode and communication mode. The translator mode can translate single words from sign into written words and vice versa. The communication mode can translate full sentences, and the conversation can be automatically translated with the use of a 3D avatar. The translator mode can also detect the postures and hand shapes of a signer, as well as the movement trajectory, using machine learning, pattern recognition, and computer vision technologies. The device also allows for reciprocal communication, because speech recognition allows spoken language to be translated into sign language while the 3D modeling avatar can sign back to deaf people.

The original project was started in China, based on translating Chinese Sign Language. In 2013, the project was presented at the Microsoft Research Faculty Summit and a Microsoft company meeting. Currently, the project is also being worked on by researchers in the United States to implement American Sign Language translation. As of now, the device is still a prototype, and the accuracy of translation in the communication mode is still not perfect.
SignAll

SignAll is an automatic sign language translation system provided by Dolphio Technologies in Hungary. The team is "pioneering the first automated sign language translation solution, based on computer vision and natural language processing (NLP), to enable everyday communication between individuals with hearing who use spoken English and deaf or hard of hearing individuals who use ASL." The SignAll system uses a Microsoft Kinect and other web cameras with depth sensors connected to a computer. The computer vision technology recognizes the handshape and the movement of a signer, and the natural language processing system converts the collected data from computer vision into a simple English phrase. The developer of the device is deaf, and the rest of the project team consists of many engineers and linguistic specialists from deaf and hearing communities. The technology can incorporate all five parameters of ASL, which help the device accurately interpret the signer. SignAll has been endorsed by many companies, including Deloitte and LT-innovate, and has created partnerships with Microsoft BizSpark and Hungary's Renewal. The technology is currently being used at Fort Bend Christian Academy in Sugar Land, Texas, and at Sam Houston State University.
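The final NLP step in a pipeline like SignAll's — turning recognized sign glosses from the vision stage into a simple English phrase — can be sketched with a toy rewrite rule. This is a hypothetical illustration of the general technique, not SignAll's actual algorithm; the gloss conventions and the pointing-sign filter are invented for the example:

```python
def glosses_to_english(glosses):
    """Turn a recognized gloss sequence into a crude English phrase.

    Toy rule: drop pointing signs ("IX"), lowercase the remaining glosses,
    and join them in order. Real systems reorder and inflect as well.
    """
    words = [g.lower() for g in glosses if g != "IX"]
    if not words:
        return ""
    return " ".join(words).capitalize() + "."

print(glosses_to_english(["STORE", "IX", "ME", "GO"]))
```

Even this toy shows why the output is described as a "simple English phrase": mapping glosses word-for-word yields telegraphic English, and producing fluent sentences requires genuine reordering and grammar generation.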
MotionSavvy

MotionSavvy was the first sign language to voice system. The device was created in 2012 by a group from the Rochester Institute of Technology / National Technical Institute for the Deaf and "emerged from the Leap Motion accelerator AXLR8R." The team used a tablet case that leverages the power of the Leap Motion controller, and the entire six-person team was made up of deaf students from the school's deaf-education branch. The device is currently one of only two reciprocal communication devices made solely for American Sign Language: it allows deaf individuals to sign to the device, which then interprets the signing, or, vice versa, takes spoken English and interprets it into American Sign Language. The device ships for $198. Other features include the ability to interact, live-time feedback, sign builder, and crowdsign.

The device has been reviewed by everyone from technology magazines to Time. Wired said, "It wasn't hard to see just how transformative a technology like [UNI] could be" and that "[UNI] struck me as sort of magical." Katy Steinmetz at TIME said, "This technology could change the way deaf people live." Sean Buckley at Engadget mentioned, "UNI could become an incredible communication tool."