Leonardo (robot)

[Image: Leonardo's body by Stan Winston Studios]

Leonardo is a 2.5-foot social robot, the first created by the Personal Robots Group of the Massachusetts Institute of Technology. Its development is credited to Cynthia Breazeal, and its body was built by Stan Winston Studios, leaders in animatronics. The body was completed in 2002 and was the most complex robot the studio had ever attempted as of 2001. Other contributors to the project include NevenVision, Inc., Toyota, NASA's Lyndon B. Johnson Space Center, and the Navy Research Lab. Leonardo was created to facilitate the study of human–robot interaction and collaboration. A DARPA Mobile Autonomous Robot Software (MARS) grant, an Office of Naval Research Young Investigators Program grant, and the Digital Life and Things That Think consortia have partially funded the project.

The MIT Media Lab Robotic Life Group, which also studied Robonaut 1, set out to create a more sophisticated social robot in Leonardo. They gave Leonardo a different visual tracking system and programs based on infant psychology that they hope will make for better human-robot collaboration. One of the goals of the project was to make it possible for untrained humans to interact with and teach the robot much more quickly, with fewer repetitions. Leonardo was awarded a spot in Wired Magazine's 50 Best Robots Ever list in 2006.

Construction

There are approximately sixty motors in the small space of the robot's body that make its expressive movement possible. The Personal Robots Group developed the motor control systems (with both 8-axis and 16-axis control packages) used in Leonardo. Leonardo does not resemble any real creature; instead, it has the appearance of a fanciful being. Since it is a social robot, its face was designed to be expressive and communicative. The fanciful, purposefully young look is meant to encourage humans to interact with it the same way they would with a child or a pet.

A camera mounted in the robot's right eye captures faces, and a facial feature tracker developed by the Neven Vision corporation isolates faces from the captures. A buffer of up to 200 views of a face is used to create a model of a person whenever they introduce themselves via speech. Additionally, Leonardo can track objects and faces visually using a collection of visual feature detectors that include color, skin tone, shape, and motion.

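The article does not describe the tracker's internals. As a rough illustration only, the sketch below shows one common way several weak visual cues can be fused into a single per-region score for tracking; the Region type, detector scores, and weights are hypothetical, not Leonardo's actual vision system.

```python
# Illustrative sketch: fusing simple feature detectors (color, skin tone,
# shape, motion) into one score per candidate region. All names, scores,
# and weights here are invented for the example.
from dataclasses import dataclass

@dataclass
class Region:
    """A candidate image region with precomputed detector scores in [0, 1]."""
    name: str
    color: float      # similarity to the target's color histogram
    skin_tone: float  # likelihood the region contains skin
    shape: float      # match against the target's contour template
    motion: float     # fraction of pixels changed since the last frame

# Relative trust in each cue; these weights are assumptions and sum to 1.
WEIGHTS = {"color": 0.3, "skin_tone": 0.2, "shape": 0.3, "motion": 0.2}

def fused_score(r: Region) -> float:
    """Weighted sum of the individual detector outputs."""
    return (WEIGHTS["color"] * r.color
            + WEIGHTS["skin_tone"] * r.skin_tone
            + WEIGHTS["shape"] * r.shape
            + WEIGHTS["motion"] * r.motion)

def track(candidates: list[Region]) -> Region:
    """Pick the region the fused detectors agree on most strongly."""
    return max(candidates, key=fused_score)

if __name__ == "__main__":
    regions = [
        Region("background", 0.2, 0.1, 0.3, 0.0),
        Region("face",       0.8, 0.9, 0.7, 0.4),
        Region("hand",       0.6, 0.8, 0.4, 0.6),
    ]
    best = track(regions)
    print(f"tracking '{best.name}' (score {fused_score(best):.2f})")
```
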
The group plans to give Leonardo skin that can detect temperature, proximity, and pressure. To accomplish this, they are experimenting with force-sensing resistors and quantum tunnelling composites. The sensors are layered over with silicone, like that used in makeup effects, to maintain the robot's aesthetics.

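For illustration only, here is a minimal sketch of how readings from a force-sensing resistor in a common voltage-divider hookup might be turned into coarse touch events. The supply voltage, fixed resistor value, and thresholds are assumptions, not specifications of Leonardo's skin.

```python
# Illustrative sketch: classifying force-sensing-resistor (FSR) readings.
# An FSR's resistance drops as pressure increases; with the FSR on the
# supply side of a divider, harder presses raise the measured voltage.
V_SUPPLY = 5.0        # supply voltage across the divider (assumed)
R_FIXED = 10_000.0    # fixed divider resistor in ohms (assumed)

def fsr_resistance(adc_voltage: float) -> float:
    """Infer the FSR's resistance from the divider's measured voltage."""
    if adc_voltage <= 0.0:
        return float("inf")  # no measurable current: nothing pressing
    return R_FIXED * (V_SUPPLY - adc_voltage) / adc_voltage

def classify_touch(adc_voltage: float) -> str:
    """Map a reading to a coarse touch category (thresholds assumed)."""
    r = fsr_resistance(adc_voltage)
    if r > 100_000:
        return "no contact"
    elif r > 10_000:
        return "light touch"
    return "firm press"

if __name__ == "__main__":
    for v in (0.0, 1.2, 3.8):
        print(f"{v:.1f} V -> {classify_touch(v)}")
```
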
Purpose

The goal in creating Leonardo was to make a social robot. Its motors, sensors, and cameras allow it to mimic human expression, interact with a limited set of objects, and track objects, which helps humans react to the robot in a more familiar way. Through this reaction, humans can engage the robot in more naturally social ways. Leonardo's programming blends with psychological theory so that it learns more naturally, interacts more naturally, and collaborates more naturally with humans.

Learning

Leonardo learns through spatial scaffolding. One of the ways a teacher teaches is by positioning the objects the student is expected to use near the student. This same technique, spatial scaffolding, can be used with Leonardo: the robot is taught to build a sailboat from virtual blocks using only the red and blue blocks. Whenever it tries to use a green block, the teacher pulls the “forbidden” color away and moves the red and blue blocks into the robot's space. In this way, Leonardo learns to build the boat using red and blue blocks only.

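A minimal sketch of the idea, with hypothetical event names and scoring: the learner accumulates positive evidence for colors the teacher pushes into its workspace and negative evidence for colors the teacher pulls away, then builds only with positively supported colors.

```python
# Illustrative sketch: turning the spatial-scaffolding cue into a constraint
# on which blocks to build with. The event names and scoring are invented;
# the sailboat/green-block scenario follows the article's description.
from collections import defaultdict

class ScaffoldingLearner:
    """Learn which block colors are allowed from the teacher's rearranging."""

    def __init__(self):
        # Positive score: teacher moved blocks of this color toward us.
        # Negative score: teacher pulled this color out of our workspace.
        self.evidence = defaultdict(int)

    def observe(self, event: str, color: str) -> None:
        if event == "moved_closer":
            self.evidence[color] += 1
        elif event == "pulled_away":
            self.evidence[color] -= 1

    def allowed_colors(self) -> set[str]:
        return {c for c, score in self.evidence.items() if score > 0}

if __name__ == "__main__":
    leo = ScaffoldingLearner()
    # The teacher repeatedly nudges red and blue into reach and removes green.
    for event, color in [("moved_closer", "red"), ("moved_closer", "blue"),
                         ("pulled_away", "green"), ("moved_closer", "red"),
                         ("pulled_away", "green")]:
        leo.observe(event, color)
    print("build the sailboat with:", sorted(leo.allowed_colors()))
    # -> build the sailboat with: ['blue', 'red']
```
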
Leonardo can also track what a human is looking at, which allows the robot to interact with a human and with objects in the environment. Humans naturally follow a pointing gesture or a gaze and understand that what is being pointed at or looked at is the object the other person is concerned with and about to discuss or do something with. The Personal Robots Group has used Leonardo's tracking ability to program the robot to act in a human-like way, bringing its gaze to an object the human is paying attention to. Matching the human's gaze is one way Leonardo exhibits more natural behavior, and sharing attention like this is one of the things that allows the robot to learn from a human. The robot's expressions, which give feedback on its “understanding”, are also vital.

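A toy sketch of gaze following, under the assumption that the human's head position and gaze direction have already been estimated by some upstream tracker: the object closest in angle to the gaze ray, within a tolerance cone, becomes the shared-attention target.

```python
# Illustrative sketch: picking the attended object by casting a ray along the
# human's gaze. The geometry is standard; the object list, coordinates, and
# tolerance cone are assumptions for the example.
import math

def closest_to_gaze(head, direction, objects, max_angle_deg=15.0):
    """Return the object name nearest the gaze ray, or None if every object
    lies outside a cone of max_angle_deg around it. Positions are 3-D tuples."""
    norm = math.sqrt(sum(c * c for c in direction))
    d = tuple(c / norm for c in direction)
    best, best_angle = None, max_angle_deg
    for name, pos in objects.items():
        v = tuple(p - h for p, h in zip(pos, head))
        dist = math.sqrt(sum(c * c for c in v))
        if dist == 0.0:
            continue
        cos_a = sum(a * b for a, b in zip(v, d)) / dist
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
        if angle < best_angle:
            best, best_angle = name, angle
    return best

if __name__ == "__main__":
    objects = {"toy": (1.0, 0.1, 0.0), "cup": (0.0, 1.0, 0.0)}
    target = closest_to_gaze(head=(0, 0, 0), direction=(1, 0, 0),
                             objects=objects)
    print("shift gaze to:", target)   # -> shift gaze to: toy
```
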
Another way Leo learns is by mimicry, the same way infants learn to understand and manipulate their world. By mimicking human facial expressions and body movement, Leo can distinguish between self and other. This ability is important for humans in taking each other's perspectives, and the same is true for a social robot. Understanding that “others” do not have the same knowledge it has lets the robot view its environment more accurately and make better decisions, based on its programming, about what to do in a given situation. It also allows the robot to distinguish between a human's intentions and their actual actions, since humans are not exact. This would allow a human without special training to teach the robot.

Leonardo can explore on its own, in addition to being trained by a human, which saves time and is a key factor in the success of a personal robot. Such a robot must be able to learn quickly using the mechanisms humans already use (like spatial scaffolding, shared attention, mimicry, and perspective taking), it cannot require an extensive amount of time, and it should be a pleasure to interact with, which is why aesthetics and expression are so important. These are all important steps toward bringing the robot into a home.

Interacting

Shared attention and perspective taking are two mechanisms Leonardo has access to that help it interact naturally with humans. Leonardo can also achieve something like empathy by examining the data it gets from mimicking human facial expressions, body language, and speech. In a similar way to how humans understand what other humans might be feeling based on the same data, Leonardo has been programmed according to the rules of simulation theory, allowing it to render something like empathy. In these ways, social interaction with Leonardo seems more human-like, making it more likely that humans will be able to work with the robot in a team.

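A minimal sketch of the simulation-theory idea, with hypothetical expression features and prototypes: the observer matches an observed expression against its own expression-to-affect associations and reports the affect it would feel if it were making that face itself.

```python
# Illustrative sketch: "simulate it on yourself" affect reading. The feature
# names, prototype values, and distance measure are invented for the example.
PROTOTYPES = {
    # The robot's own motor configurations for each affect, as feature dicts.
    "happy":     {"mouth_curve": 0.8,  "brow_raise": 0.3,  "eye_open": 0.7},
    "sad":       {"mouth_curve": -0.7, "brow_raise": -0.4, "eye_open": 0.4},
    "surprised": {"mouth_curve": 0.1,  "brow_raise": 0.9,  "eye_open": 1.0},
}

def simulate_affect(observed: dict) -> str:
    """Return the label of the robot's own expression prototype closest to
    the observation (smallest summed squared feature difference)."""
    def distance(proto):
        return sum((observed.get(k, 0.0) - v) ** 2 for k, v in proto.items())
    return min(PROTOTYPES, key=lambda label: distance(PROTOTYPES[label]))

if __name__ == "__main__":
    seen = {"mouth_curve": -0.6, "brow_raise": -0.3, "eye_open": 0.5}
    print("the human seems to feel:", simulate_affect(seen))  # -> sad
```
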
Collaborating

Leonardo can work together with a human to solve a common problem, as much as its body allows, and it is more effective at working shoulder-to-shoulder with a human because of the theory-of-mind work blended into its programming. In one task, one human wants cookies and another wants crackers from two locked locations, and one of the humans has switched the locations' contents. Leonardo can watch the first human trying to get to where he thinks the cookies are and open a box with cookies, helping him achieve his goal. All of Leonardo's social skills work together so it can work alongside humans. When a human asks it to do a task, it can indicate what it knows or doesn't know and what it can and cannot do. By communicating through expression and gesture, and by perceiving expression, gesture, and speech, the robot is able to work as part of a team.

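A toy sketch of the false-belief bookkeeping such a task requires, with invented names and a simple helper policy: the robot tracks the true world state plus a separate belief state per agent, updating an agent's beliefs only when that agent witnesses a change.

```python
# Illustrative sketch of the cookie task's perspective taking. The class,
# method names, and helper policy are hypothetical, not Leonardo's software.
class BeliefTracker:
    def __init__(self, true_state: dict):
        self.true_state = dict(true_state)            # what is really where
        self.beliefs = {}                             # per-agent world models

    def add_agent(self, agent: str) -> None:
        self.beliefs[agent] = dict(self.true_state)   # starts out correct

    def swap(self, box_a: str, box_b: str, witnesses: set) -> None:
        """Swap two boxes' contents; only witnesses update their beliefs."""
        s = self.true_state
        s[box_a], s[box_b] = s[box_b], s[box_a]
        for agent in witnesses:
            self.beliefs[agent] = dict(s)

    def help(self, agent: str, wants: str) -> str:
        """Open the box that truly holds what the agent wants, even though
        the agent is headed for where they *believe* it is."""
        believed = next(b for b, item in self.beliefs[agent].items()
                        if item == wants)
        actual = next(b for b, item in self.true_state.items()
                      if item == wants)
        if believed != actual:
            print(f"{agent} will look in the {believed}, but {wants} moved.")
        return actual

if __name__ == "__main__":
    world = BeliefTracker({"left box": "cookies", "right box": "crackers"})
    world.add_agent("Alice")                  # Alice wants the cookies
    world.swap("left box", "right box", witnesses=set())  # Alice didn't see
    print("Leonardo opens:", world.help("Alice", "cookies"))
```
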
Contributors

Professor Cynthia Breazeal
Stan Winston
Lindsay MacGowan (Artistic Lead)
Richard Landon (Technical Lead)

The Stan Winston Studios team: Jon Dawe, Trevor Hensley, Matt Heimlich, Al Sousa, Kathy Macgowan, Michael Ornealez, Amy Whetsel, Joe Reader, Grady Holder, Rob Ramsdell, John Cherevka, Rodrick Khachatoorian, Kurt Herbel, Rich Haugen, Keith Marbory, Annabelle Troukens

Graduate students: Matt Berlin, Andrew “Zoz” Brooks, Jesse Gray, Guy Hoffman, Jeff Lieberman, Andrea Lockerd Thomaz, Dan Stiehl, Matt Hancher (Alumni), Hans Lee (Alumni)

Fardad Faridi (Animator)

See also

Kismet (robot)
Paro (robot)
Robonaut
Social robot
Robotics

Bibliography

A “Sensitive Skin” for Robotic Companions Featuring Temperature, Force, and Electric Field Sensors
A “Somatic Alphabet” Approach to “Sensitive Skin”
Action parsing and goal inference using self as simulator
An Embodied Cognition Approach to Mindreading Skills for Socially Intelligent Robots
An Embodied Computational Model of Social Referencing
Applying a “Somatic Alphabet” Approach to Inferring Orientation, Motion, and Direction in Clusters of Force Sensing Resistors
Collaboration in Human-Robot Teams
Learning From and About Others: Towards Using Imitation to Bootstrap the Social Understanding of Others by Robots
Learning from Human Teachers with Socially Guided Exploration
Perspective Taking: An Organizing Principle for Learning in Human-Robot Interaction
Robot Learning via Socially Guided Exploration
Robot Science Meets Social Science: An Embodied Model of Social Referencing
Robot’s Play: Interactive Games With Sociable Machines
Sensitive Skins and Somatic Processing for Affective and Sociable Robots Based upon a Somatic Alphabet Approach
Spatial Scaffolding for Sociable Robot Learning
Teaching and Working with Robots as a Collaboration
The dynamic lift of developmental process
Tutelage and Socially Guided Robot Learning
Understanding the Embodied Teacher: Nonverbal Cues for Sociable Robot Learning
Working Collaboratively with Humanoid Robots

References

Breazeal, Cynthia; Kidd, Cory; Thomaz, Andrea; Hoffman, Guy; Berlin, Matt. "Effects of Nonverbal Communication on Efficiency and Robustness in Human-Robot Teamwork".
Breazeal, Cynthia; Berlin, Matt; Brooks, Andrew; Gray, Jesse; Thomaz, Andrea (2006). "Using perspective taking to learn from ambiguous demonstrations". Robotics and Autonomous Systems 54 (5): 385–393. doi:10.1016/j.robot.2006.02.004.
Brooks, Andrew; Breazeal, Cynthia (2006). "Working with Robots and Objects: Revisiting Deictic Reference for Achieving Spatial Common Ground". Salt Lake City: Human Robot Interaction. doi:10.1145/1121241.1121292.
Capps, Robert (January 2006). "The 50 Best Robots Ever". Wired Magazine.
Stiehl, Walter Dan (2005). "Sensitive Skins and Somatic Processing for Affective and Sociable Robots Based upon a Somatic Alphabet Approach". Massachusetts Institute of Technology.
"Furry Robots, Foldable Cars and More Innovations from MIT's Media Lab". PBS. 2011-05-20. Archived from the original on 2016-03-04.
Leonardo project pages (Home, Body, Vision, Skin, Social Learning, Social Cognition, Teamwork). MIT Media Lab Personal Robots Group. Archived from the originals.

External links

Personal Robots Group (Leonardo home page)
TED Talk: Cynthia Breazeal
The Real transformers (New York Times article)