Representational harm

Systems cause representational harm when they misrepresent a group of people in a negative manner. Representational harms include perpetuating harmful stereotypes about, or minimizing the existence of, a social group, such as a racial, ethnic, gender, or religious group. Machine learning algorithms often commit representational harm when they learn patterns from data that carry algorithmic bias, and this has been shown to be the case with large language models. While preventing representational harm in models is essential to preventing harmful biases, researchers often lack precise definitions of representational harm and conflate it with allocative harm, an unequal distribution of resources among social groups, which is more widely studied and easier to measure. Recognition of representational harms is nevertheless growing, and preventing them has become an active research area; researchers have recently developed methods to quantify representational harm in algorithms, making progress on preventing it in the future.

Types

Three prominent types of representational harm are stereotyping, denigration, and misrecognition. These subcategories present many dangers to individuals and groups.

Stereotypes are oversimplified and usually undesirable representations of a specific group of people, most often defined by race and gender. Stereotyping often leads to the denial of educational, employment, housing, and other opportunities. For example, the model minority stereotype of Asian Americans as highly intelligent and good at mathematics can be damaging professionally and academically.

Denigration is the unfair criticism of individuals, which frequently occurs when a social group is demeaned. For example, when queried with "Black-sounding" names rather than "white-sounding" ones, some retrieval systems bolster a false perception of criminality by displaying ads for bail-bonding businesses. A system may also shift the representation of a group toward a lower social status, often resulting in disregard from society.

Misrecognition, or incorrect recognition, can take many forms, including, but not limited to, erasing and alienating social groups and denying people the right to self-identify. Erasing and alienating social groups involves the unequal visibility of certain groups; in particular, systematic ineligibility in algorithmic systems perpetuates inequality by contributing to their underrepresentation. Denying people the right to self-identify is closely related, as people's identities can be "erased" or "alienated" by these algorithms. Misrecognition causes more than surface-level harm to individuals: psychological harm, social isolation, and emotional insecurity can emerge from this subcategory of representational harm.
Quantification

As the dangers of representational harm have become better understood, some researchers have developed methods to measure it in algorithms.

Modeling stereotyping is one way to identify representational harm. Representational stereotyping can be quantified by comparing the predicted outcomes for one social group with the ground-truth outcomes for that group observed in real data. For example, if individuals from group A achieve an outcome with a probability of 60%, stereotyping would be observed if the model predicted individuals from that group to achieve the outcome with a probability greater than 60%. One research group modeled stereotyping in this way for classification, regression, and clustering problems, and developed a set of rules to determine quantitatively whether the model's predictions exhibit stereotyping in each case.
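This comparison can be illustrated with a short sketch. The function and data below are hypothetical and simplified; they are not the rule set from the study above, but they show the basic idea of comparing a group's predicted outcome rate with its observed rate.

```python
# Minimal sketch of the outcome-rate comparison described above.
# Illustrative only; names and data are hypothetical, not the published procedure.

def stereotyping_gap(y_true, y_pred, groups, group, outcome=1):
    """Compare the model's predicted outcome rate for one group with the
    ground-truth rate observed for that group in real data.

    A positive gap means the model predicts the outcome for this group more
    often than it actually occurs, i.e. it exaggerates a group-level pattern,
    which the text above describes as representational stereotyping.
    """
    members = [i for i, g in enumerate(groups) if g == group]
    true_rate = sum(y_true[i] == outcome for i in members) / len(members)
    pred_rate = sum(y_pred[i] == outcome for i in members) / len(members)
    return pred_rate - true_rate

# Hypothetical example: 60% of group "A" actually achieve the outcome,
# but the model predicts it for 80% of them.
y_true = [1, 1, 1, 0, 0, 1, 0, 1, 1, 0]
y_pred = [1, 1, 1, 1, 0, 1, 1, 1, 1, 0]
groups = ["A"] * 10
print(stereotyping_gap(y_true, y_pred, groups, "A"))  # ~0.2, suggesting stereotyping
```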
Other attempts to measure representational harms have focused on applications of algorithms in specific domains, such as image captioning, in which an algorithm generates a short description of an image. In one study of image captioning, researchers measured five types of representational harm. To quantify stereotyping, they counted the incorrect words that the model included in a generated caption relative to a gold-standard caption, then manually reviewed each incorrectly included word to determine whether it reflected a stereotype associated with the image or was an unrelated error. This gave them a proxy measure of the amount of stereotyping occurring in caption generation. The researchers also attempted to measure demeaning representational harm by analyzing how often the humans in an image were mentioned in the generated caption; the hypothesis was that failing to mention the individuals at all was a form of dehumanization.
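The two caption-level proxies described above can be sketched as follows. This is a simplified illustration rather than the study's actual pipeline: the researchers reviewed incorrect words manually, whereas the stereotype_words list, the word sets, and the captions below are hypothetical stand-ins.

```python
# Simplified sketch of two caption-level proxy measures described above.
# The word lists and captions are hypothetical.

HUMAN_WORDS = {"person", "people", "man", "woman", "child", "boy", "girl"}

def incorrect_words(generated, gold):
    """Words in the generated caption that do not appear in the gold caption."""
    return set(generated.lower().split()) - set(gold.lower().split())

def captioning_proxies(generated, gold, stereotype_words):
    wrong = incorrect_words(generated, gold)
    return {
        # Proxy for stereotyping: incorrect words flagged as stereotype-related
        # (in the study this flagging was done by manual review).
        "stereotype_errors": wrong & set(stereotype_words),
        # Proxy for demeaning harm: does the caption mention the humans at all?
        "mentions_human": bool(set(generated.lower().split()) & HUMAN_WORDS),
    }

# Hypothetical example: the gold caption describes a person, while the generated
# caption omits her entirely and adds an unrelated, stereotyped scene.
gold = "a woman standing at a workbench"
generated = "a kitchen with pots and pans"
print(captioning_proxies(generated, gold, stereotype_words={"kitchen"}))
# {'stereotype_errors': {'kitchen'}, 'mentions_human': False}
```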
Examples

One of the most notorious examples of representational harm was committed by Google in 2015, when an algorithm in Google Photos classified Black people as gorillas. Developers at Google said the problem arose because there were not enough faces of Black people in the training dataset for the algorithm to learn the difference between Black people and gorillas. Google issued an apology and fixed the issue by blocking its algorithms from classifying anything as a primate. In 2023, Google's photo algorithm was still blocked from identifying gorillas in photos.

Another prevalent example of representational harm is the possibility of stereotypes being encoded in word embeddings, which are trained on a wide range of text. A word embedding represents a word as an array of numbers in vector space, which allows the relationships and similarities between words to be calculated. However, studies have shown that these word embeddings commonly encode harmful stereotypes; in a well-known example, the phrase "computer programmer" is often closer to "man" than to "woman" in vector space. This can be interpreted as a misrepresentation of computer programming as a profession better performed by men, which would itself be an example of representational harm.
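The closeness comparison described above comes down to a similarity computation between word vectors. The sketch below uses made-up four-dimensional vectors purely to illustrate the cosine-similarity check; real embeddings such as word2vec or GloVe are learned from large corpora and have hundreds of dimensions, and the values here are not taken from any trained model.

```python
import math

# Toy illustration of comparing word-vector similarities.
# The 4-dimensional vectors are invented for demonstration only.

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

embeddings = {
    "programmer": [0.9, 0.2, 0.1, 0.4],
    "man":        [0.8, 0.3, 0.0, 0.5],
    "woman":      [0.1, 0.9, 0.2, 0.3],
}

# If the first similarity is consistently higher, the embedding space places
# "programmer" closer to "man" than to "woman" -- the kind of encoded
# stereotype discussed in the text.
print(cosine(embeddings["programmer"], embeddings["man"]))    # higher (~0.98 here)
print(cosine(embeddings["programmer"], embeddings["woman"]))  # lower (~0.42 here)
```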
References

Shelby, Renee; Rismani, Shalaleh; Henne, Kathryn; Moon, AJung; Rostamzadeh, Negar; Nicholas, Paul; Yilla-Akbari, N'Mah; Gallegos, Jess; Smart, Andrew; Garcia, Emilio; Virk, Gurleen (29 August 2023). "Sociotechnical Harms of Algorithmic Systems: Scoping a Taxonomy for Harm Reduction". Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (AIES '23). pp. 723–741. doi:10.1145/3600211.3604673.
Abbasi, Mohsen; Friedler, Sorelle; Scheidegger, Carlos; Venkatasubramanian, Suresh (28 January 2019). "Fairness in representation: quantifying stereotyping as representational harm". arXiv:1901.09565.
Wang, Angelina; Barocas, Solon; Laird, Kristen; Wallach, Hanna (20 June 2022). "Measuring Representational Harms in Image Captioning". 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT '22). pp. 324–335. doi:10.1145/3531146.3533099.
Bolukbasi, Tolga; Chang, Kai-Wei; Zou, James; Saligrama, Venkatesh; Kalai, Adam (21 July 2016). "Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings". arXiv:1607.06520.
Trytten, Deborah A.; Lowe, Anna Wong; Walden, Susan E. (2 January 2013). ""Asians are Good at Math. What an Awful Stereotype": The Model Minority Stereotype's Impact on Asian American Engineering Students". Journal of Engineering Education. 101 (3): 439–468. doi:10.1002/j.2168-9830.2012.tb00057.x.
Sweeney, Latanya (1 March 2013). "Discrimination in Online Ad Delivery: Google ads, black names and white names, racial discrimination, and click advertising". Queue. 11 (3): 10–29. doi:10.1145/2460276.2460278. arXiv:1301.6822.
Luo, Yiwei; Gligorić, Kristina; Jurafsky, Dan (28 May 2024). "Othering and Low Status Framing of Immigrant Cuisines in US Restaurant Reviews and Large Language Models". Proceedings of the International AAAI Conference on Web and Social Media. 18: 985–998. doi:10.1609/icwsm.v18i1.31367. arXiv:2307.07645.
Blodgett, Su Lin (6 April 2021). Sociolinguistically Driven Approaches for Just Natural Language Processing (Doctoral dissertation). doi:10.7275/20410631.
Major, Vincent; Surkis, Alisa; Aphinyanaphongs, Yindalon (2018). "Utility of General and Specific Word Embeddings for Classifying Translational Stages of Research". AMIA Annual Symposium Proceedings. 2018: 1405–1414.
Grant, Nico; Hill (22 May 2023). "Google's Photo App Still Can't Find Gorillas. And Neither Can Apple's". The New York Times.
"Google apologises for Photos app's racist blunder". BBC News. 1 July 2015.
Rusanen, Anna-Mari; Nurminen, Jukka K. "Ethics of AI". ethics-of-ai.mooc.fi.