
High-level language computer architecture


An advantage that is reappearing post-2000 is safety or security. Mainstream IT has largely moved to languages with type and/or memory safety for most applications, yet the software those applications depend on, from operating systems to virtual machines, is native code with no such protection, and many vulnerabilities have been found in such code. One solution is to use a processor custom-built to execute a safe high-level language, or at least one that understands types. Protections at the processor word level make the attacker's job difficult compared to low-level machines that see no distinction between scalar data, arrays, pointers, and code. Academics are also developing languages with similar properties that might integrate with high-level processors in the future. An example of both of these trends is the SAFE project. Compare language-based systems, where the software (especially the operating system) is based around a safe, high-level language, though the hardware need not be: the "trusted base" may still be in a lower-level language.
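The kind of word-level protection described above can be pictured with a small sketch. The following C program simulates a machine word that carries a type tag which the "arithmetic unit" checks before operating; the tag names and layout are illustrative assumptions, not the format of any particular tagged processor.

```c
/* Minimal sketch of word-level type tags, in the spirit of tagged
 * architectures: every word carries a tag, and the simulated hardware
 * refuses to mix incompatible kinds of data.
 * Tag names and layout are hypothetical, not any real machine's format. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef enum { TAG_INT, TAG_POINTER, TAG_CODE } Tag;

typedef struct {
    Tag      tag;
    uint64_t payload;
} Word;

/* An "add" that a tagged machine would trap on if either operand is not
 * an integer - unlike an untagged machine, which would happily add a
 * pointer to a code address. */
static Word add_words(Word a, Word b) {
    if (a.tag != TAG_INT || b.tag != TAG_INT) {
        fprintf(stderr, "type trap: add on non-integer word\n");
        exit(EXIT_FAILURE);
    }
    return (Word){ TAG_INT, a.payload + b.payload };
}

int main(void) {
    Word x = { TAG_INT, 2 };
    Word y = { TAG_INT, 40 };
    printf("%llu\n", (unsigned long long)add_words(x, y).payload);

    Word p = { TAG_POINTER, 0x1000 };
    add_words(x, p);   /* traps instead of silently corrupting data */
    return 0;
}
```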
The advantages of HLLCAs can be achieved in alternative ways, primarily via compilers or interpreters: the system is still written in a HLL, but there is a trusted base in software running on a lower-level architecture. This has been the approach generally followed since circa 1980: for example, a Java system where the runtime environment itself is written in C, but the operating system and applications are written in Java.
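A minimal sketch of that division of labour: the "application" is expressed in a tiny bytecode standing in for a HLL, while the trusted base is an ordinary C interpreter running on a conventional, language-neutral processor. The opcode names are invented for illustration.

```c
/* Sketch of the "trusted base in software" approach: the program is
 * bytecode, the interpreter is plain C on a conventional processor. */
#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

static void run(const int *code) {
    int stack[64];
    int sp = 0;                      /* stack pointer */
    for (int pc = 0; ; ) {
        switch (code[pc++]) {
        case OP_PUSH:  stack[sp++] = code[pc++]; break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
        case OP_PRINT: printf("%d\n", stack[sp - 1]); break;
        case OP_HALT:  return;
        }
    }
}

int main(void) {
    /* "Program" written in the high-level bytecode: 2 + 40 */
    int program[] = { OP_PUSH, 2, OP_PUSH, 40, OP_ADD, OP_PRINT, OP_HALT };
    run(program);
    return 0;
}
```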
The fundamental problem is that HLLCAs only simplify the code generation step of compilers, which is typically a relatively small part of compilation, and a questionable use of computing power (transistors and microcode). At a minimum, tokenization is needed, and typically syntactic analysis and basic semantic checks (unbound variables) will still be performed – so there is no benefit to the front end – and optimization requires ahead-of-time analysis – so there is no benefit to the middle end.
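The front-end work that remains even on a HLLCA can be made concrete with a toy example: before anything executes, the source text must at least be split into tokens. The sketch below is a deliberately minimal tokenizer in C; the token categories and the sample statement are illustrative assumptions.

```c
/* Minimal front-end sketch: even a machine that "executes the HLL"
 * still needs the source split into tokens before it can do anything. */
#include <ctype.h>
#include <stdio.h>

int main(void) {
    const char *src = "x := x + 42";
    for (const char *p = src; *p; ) {
        if (isspace((unsigned char)*p)) { p++; continue; }
        if (isdigit((unsigned char)*p)) {
            printf("NUMBER ");
            while (isdigit((unsigned char)*p)) p++;
        } else if (isalpha((unsigned char)*p)) {
            printf("IDENT ");
            while (isalnum((unsigned char)*p)) p++;
        } else {
            printf("SYMBOL(%c) ", *p);
            p++;
        }
    }
    printf("\n");
    return 0;
}
```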
More loosely, a HLLCA may simply be a general-purpose computer architecture with some features specifically to support a given HLL or several HLLs. This was found in Lisp machines from the 1970s onward, which augmented general-purpose processors with operations specifically designed to support Lisp.
HLLCAs are intuitively appealing, as the computer can in principle be customized for a language, allowing optimal support for the language and simplifying compiler writing. It can further natively support multiple languages by simply changing the microcode. Key advantages are to developers: fast compilation and detailed symbolic debugging from the machine.

The simplest reason for the lack of success of HLLCAs is that, from 1980 on, optimizing compilers resulted in much faster code and were easier to develop than implementing a language in microcode. Many compiler optimizations require complex analysis and rearrangement of the code, so the machine code is very different from the original source code. These optimizations are either impossible or impractical to implement in microcode, due to the complexity and the overhead. Analogous performance problems have a long history with interpreted languages (dating to Lisp (1958)), only being resolved adequately for practical use by just-in-time compilation, pioneered in Self and commercialized in the HotSpot Java virtual machine (1999).

A deeper problem, still an active area of development as of 2014, is that providing HLL debugging information from machine code is quite difficult, basically because of the overhead of debugging information, and more subtly because compilation (particularly optimization) makes determining the original source for a machine instruction quite involved. Thus the debugging information provided as an essential part of HLLCAs either severely limits implementation or adds significant overhead in ordinary use.

In computer architecture, the RISC approach has instead proven very popular and successful, and it is opposite to HLLCAs, emphasizing a very simple instruction set architecture. However, the speed advantages of RISC computers in the 1980s were primarily due to early adoption of on-chip cache and room for large registers, rather than intrinsic advantages of RISC.
HLLCAs have often been advocated when a HLL has a radically different model of computation than imperative programming (which is a relatively good match for typical processors), notably for functional programming (Lisp) and logic programming (Prolog).
Like Lisp, Prolog's basic model of computation is radically different from standard imperative designs, and computer scientists and electrical engineers were eager to escape the bottlenecks caused by emulating their underlying models.

Some HLLCAs have been particularly popular as developer machines (workstations), due to fast compiles and low-level control of the system with a high-level language. The Pascal MicroEngine and Lisp machines are good examples of this.
Further, HLLCAs are typically optimized for one language, supporting other languages more poorly. Similar issues arise in multi-language virtual machines, notably the Java virtual machine (designed for Java) and the .NET Common Language Runtime (designed for C#), where other languages are second-class citizens and often must hew closely to the main language in semantics. For this reason lower-level ISAs allow multiple languages to be well supported, given compiler support. However, a similar issue arises even for many apparently language-neutral processors, which are well supported by C.

The HSA Intermediate Layer (HSAIL) of the Heterogeneous System Architecture (2012) provides a virtual instruction set to abstract away from the underlying ISAs, has support for HLL features such as exceptions and virtual functions, and includes debugging support.

Since the 1980s the focus of research and implementation in general-purpose computer architectures has primarily been in RISC-like architectures, typically internally register-rich load–store architectures, with rather stable, non-language-specific ISAs featuring multiple registers, pipelining, and more recently multicore systems, rather than language-specific ISAs. Language support has focused on compilers and their runtimes, and interpreters and their virtual machines (particularly JIT'ing ones), with little direct hardware support.

In less extreme examples, the source code is first parsed to bytecode, which is then the machine code that is passed to the processor. In these cases, the system typically lacks an assembler, as the compiler is deemed sufficient, though in some cases (such as Java), assemblers are used to produce legal bytecode which would not be output by the compiler. This approach was found in the Pascal MicroEngine (1979) and is currently used by Java processors.
The Intel iAPX 432 (1981) was designed to support Ada. It was Intel's first 32-bit processor design and was intended to be Intel's main processor family for the 1980s, but it failed commercially.

The approach was termed "language-directed computer design" in McKeeman (1967), and the term was primarily used in the 1960s and 1970s. HLLCAs were popular in the 1960s and 1970s, but largely disappeared in the 1980s. This followed the dramatic failure of the Intel 432 (1981) and the emergence of optimizing compilers and reduced instruction set computer (RISC) architectures and RISC-like complex instruction set computer (CISC) architectures, together with the later development of just-in-time compilation (JIT) for HLLs. A detailed survey and critique can be found in Ditzel & Patterson (1980).

The Burroughs Large Systems (1961) were the first HLLCA, designed to support ALGOL (1959), one of the earliest HLLs. This was referred to at the time as "language-directed design."

There are a wide variety of systems under this heading. The most extreme example is a Directly Executed Language (DEL), where the instruction set architecture (ISA) of the computer equals the instructions of the HLL, and the source code is directly executable with minimal processing.
At present the most popular HLLCAs are Java processors, for the language Java (1995), and these are a qualified success, being used for certain applications. A recent architecture in this vein is the Heterogeneous System Architecture (2012), whose HSA Intermediate Layer (HSAIL) provides instruction set support for HLL features such as exceptions and virtual functions; this uses JIT to ensure performance. The Pascal MicroEngine (1979) was designed to support the UCSD Pascal form of Pascal, and used p-code (Pascal compiler bytecode) as its machine code; this was influential on the later development of Java and Java machines.
A high-level language computer architecture (HLLCA) is a computer architecture designed to be targeted by a specific high-level programming language (HLL), rather than the architecture being dictated by hardware considerations. It is accordingly also termed language-directed computer design.

Tagged architectures are frequently used to support types (as in the Burroughs Large Systems and Lisp machines). More radical examples use a non-von Neumann architecture, though these are typically only hypothetical proposals, not actual implementations.

The AT&T Hobbit processor, stemming from a design called CRISP (C-language Reduced Instruction Set Processor), was optimized to run C code.

The Objective-C runtime for iOS implements tagged pointers, which it uses for type-checking and garbage collection, despite the hardware not being a tagged architecture.
Compiling to C (rather than directly targeting the hardware) yields efficient programs and simple compilers.
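To make the compile-to-C route concrete, the sketch below shows the kind of portable C a compiler for some hypothetical HLL might emit for a one-line function; the function name and the struct-based environment are illustrative assumptions, not the output of any particular compiler.

```c
/* Sketch of the compile-to-C approach: the HLL compiler emits portable C
 * like this and lets the C compiler do the low-level work.
 * Hypothetical HLL source:  fun scale(n) = n * factor            */
#include <stdio.h>

struct env { long factor; };          /* compiled-away free variable */

static long hll_scale(struct env *e, long n) {
    return n * e->factor;             /* field access instead of a free variable */
}

int main(void) {
    struct env e = { 3 };
    printf("%ld\n", hll_scale(&e, 14));   /* prints 42 */
    return 0;
}
```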
461: 390: 56: 1012: 956: 921: 873: 844: 741: 427: 230: 174: 433:
A further advantage is that a language implementation can be updated by updating the microcode (firmware), without requiring recompilation of an entire system. This is analogous to updating an interpreter for an interpreted language.
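The analogy with updating an interpreter can be sketched in C: if the "machine" dispatches each operation through a table of handlers, swapping a handler changes the language implementation while the stored programs stay untouched. The opcode and handler names are invented for illustration.

```c
/* Sketch of the rewritable-microcode analogy: operations go through a
 * dispatch table, so a handler can be replaced without recompiling the
 * programs that run on it. */
#include <stdio.h>

typedef void (*Handler)(long *acc, long operand);

static void op_add_v1(long *acc, long operand) { *acc += operand; }
/* An updated handler: same instruction, new implementation (this one
 * also traces each addition). */
static void op_add_v2(long *acc, long operand) {
    fprintf(stderr, "add %ld\n", operand);
    *acc += operand;
}

static Handler dispatch[1] = { op_add_v1 };

static long run(const long (*prog)[2], int len) {
    long acc = 0;
    for (int i = 0; i < len; i++)
        dispatch[prog[i][0]](&acc, prog[i][1]);
    return acc;
}

int main(void) {
    const long prog[][2] = { {0, 2}, {0, 40} };   /* ADD 2; ADD 40 */
    printf("%ld\n", run(prog, 2));
    dispatch[0] = op_add_v2;      /* "firmware update": program unchanged */
    printf("%ld\n", run(prog, 2));
    return 0;
}
```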
618: 126:
In extreme cases, the only compiling needed is tokenizing the source code and feeding the tokens directly to the processor; this is found in stack-oriented programming languages running on a stack machine. For more conventional languages, the HLL statements are grouped into instruction + arguments, and infix order is transformed to prefix or postfix order. DELs are typically only hypothetical, though they were advocated in the 1970s.

The Burroughs Medium Systems (1966) were designed to support COBOL for business applications. The Burroughs Small Systems (mid-1970s, designed from the late 1960s) were designed to support multiple HLLs by a writable control store. These were all mainframes. The Wang 2200 (1973) series were designed with a BASIC interpreter in micro-code.

In the late 1990s, there were plans by Sun Microsystems and other companies to build CPUs that directly (or closely) implemented the stack-based Java virtual machine. As a result, several Java processors have been built and used.
HLLCAs are frequently implemented via a stack machine (as in the Burroughs Large Systems and the Intel 432), and the HLL may be implemented via microcode in the processor (as in the Burroughs Small Systems and the Pascal MicroEngine).

Lisp machines (1970s and 1980s) were a well-known and influential group of HLLCAs.
A number of processors and coprocessors intended to implement Prolog more directly were designed in the late 1980s and early 1990s, including the Berkeley VLSI-PLM, its successor (the PLUM), and a related microcode implementation.
There were also a number of simulated designs that were not produced as hardware, such as "A VHDL-based methodology for designing a Prolog processor" and "A Prolog coprocessor for superconductors".

The INMOS Transputer was designed to support concurrent programming, using occam. Ericsson developed ECOMP, a processor designed to run Erlang; it was never commercially produced.
The Rekursiv (mid-1980s) was a minor system, designed to support object-oriented programming and the Lingo programming language in hardware, and supported recursion at the instruction set level, hence the name.

HLLCAs date almost to the beginning of HLLs, in the Burroughs large systems (1961), which were designed for ALGOL 60 (1960), one of the first HLLs. The best known HLLCAs may be the Lisp machines of the 1970s and 1980s, for the language Lisp (1959). A detailed list of putative advantages is given in Ditzel & Patterson (1980).

Niklaus Wirth's Lilith project included a custom CPU geared toward the Modula-2 language.

See also

ASIC, Java processor, Language-based system, Lisp machine, Silicon compiler

References

Chu, Yaohan (December 1975). "Concepts of high-level-language computer architecture". ACM SIGMICRO Newsletter. 6 (4): 9–16. doi:10.1145/1217196.1217197.
Chu, Yaohan (1975). "Concepts of high-level-language computer architecture". Proceedings of the 1975 annual conference on - ACM 75. pp. 6–13. doi:10.1145/800181.810257.
Chu, Yaohan; Cannon, R. (June 1976). "Interactive High-Level Language Direct-Execution Microprocessor System". IEEE Transactions on Software Engineering. 2 (2): 126–134. doi:10.1109/TSE.1976.233802.
Chu, Yaohan (December 1977). "Direct-execution computer architecture". ACM SIGARCH Computer Architecture News. 6 (5): 18–23. doi:10.1145/859412.859415.
Chu, Yaohan (1978). "Direct Execution In A High-Level Computer Architecture". Proceedings of the 1978 annual conference on - ACM 78. pp. 289–300. doi:10.1145/800127.804116.
Chu, Yaohan; Abrams, M. (July 1981). "Programming Languages and Direct-Execution Computer Architecture". Computer. 14 (7): 22–32. doi:10.1109/C-M.1981.220525.
Ditzel, David R.; Patterson, David A. (1980). "Retrospective on High-Level Language Computer Architecture". Proceedings of the 7th annual symposium on Computer Architecture - ISCA '80. ACM. pp. 97–104. doi:10.1145/800053.801914.
Hoevel, L.W. (August 1974). ""Ideal" Directly Executed Languages: An Analytical Argument for Emulation". IEEE Transactions on Computers. 23 (8): 759–767. doi:10.1109/T-C.1974.224032.
Keirstead, Ralph E. (March 1968). "R68-8 Language Directed Computer Design". IEEE Transactions on Computers. 17 (3): 298. doi:10.1109/TC.1968.229106.
McKeeman, William M. (November 14–16, 1967). "Language directed computer design". AFIPS '67 (Fall) Proceedings of the November 14–16, 1967, Fall Joint Computer Conference. Vol. 31.
Wortman, David Barkley (1972). A Study of Language Directed Computer Design (PhD). Department of Computer Science, Stanford University.

Further reading

Grant Martin & Steve Leibson, A Baker's Dozen: Fallacies and Pitfalls in Processor Design (early 2000s), slides 6–9.
"Pascal for Small Machines – History of Lilith". Pascal.hansotten.com. 28 September 2010.
"ECOMP – an Erlang Processor".
"SAFE Project".