A Huffman code is a variable-length string of bits that identifies a unique token. A Huffman-threaded interpreter locates subroutines using an index table or a tree of pointers that can be navigated by the Huffman code. Huffman-threaded code is one of the most compact representations known for a computer program. The index and codes are chosen by measuring the frequency of calls to each subroutine in the code. Frequent calls are given the shortest codes. Operations with approximately equal frequencies are given codes of nearly equal bit length. Most Huffman-threaded systems have been implemented as direct-threaded Forth systems, and used to pack large amounts of slow-running code into small, cheap

(TIL) that, unlike other TILs, allows embedding of RPL "objects" into the "runstream", i.e. the stream of addresses through which the interpreter pointer advances. An RPL "object" can be thought of as a special data type whose in-memory structure contains an address to an "object prolog" at the start of the object, followed by data or executable code. The object prolog determines how the object's body should be executed or processed. Using the "RPL inner loop", which was invented and patented by William C. Wickes in 1986 and published in 1988, execution proceeds as follows:

, Fortran, Cobol and some Forth systems often produced subroutine-threaded code. The code in many of these systems operated on a last-in-first-out (LIFO) stack of operands, for which compiler theory was well developed. Most modern processors have special hardware support for subroutine "call" and "return" instructions, so the overhead of one extra machine instruction per dispatch is somewhat diminished.

of threaded code, especially token-threaded code, can allow it to fit entirely in the L1 cache when it otherwise would not, thereby avoiding cache thrashing. However, threaded code consumes both instruction cache (for the implementation of each operation) and data cache (for the bytecode and tables), unlike machine code, which consumes only instruction cache; this means threaded code will eat into the budget for the amount of data that can be held for processing by the CPU at any given time. In any case, if the problem being computed involves applying a large number of operations to a small amount of data, then using threaded code may be an ideal optimization.

Token-threaded code implements the thread as a list of indices into a table of operations; the index width is naturally chosen to be as small as possible for density and efficiency. One byte (8 bits) is the natural choice for ease of programming, but smaller sizes such as 4 bits, or larger ones such as 12 or 16 bits, can be used depending on the number of operations supported. As long as the index width is chosen to be narrower than a machine pointer, it will naturally be more compact than the other threading types without much special effort by the programmer. It is usually half to three-fourths the size of other threadings, which are themselves a quarter to an eighth the size of non-threaded code. The table's pointers can be either indirect or direct. Some Forth compilers produce token-threaded code. Some programmers consider the "
PROLOG -> PROLOG (The prolog address at the start of the prolog code points to itself)
IF O + Δ =/= PC THEN GOTO INDIRECT (Test for direct execution)
O = I - Δ (Correct O to point to start of embedded object)
I = I + α (Correct I to point after embedded object where α is

During the 1970s, hardware designers spent considerable effort to make subroutine calls faster and simpler. On the improved designs, only a single instruction is expended to call a subroutine, so the use of a pseudo-instruction saves no room. Additionally, the performance of these calls is almost

To address this, threaded code systems used pseudo-code to represent function calls in a single operator. At run time, a tiny "interpreter" would scan over the top-level code, extract the subroutine's address in memory, and call it. In other systems, this same basic concept is implemented as a

required several instructions to call a subroutine. In the top-level application and in many subroutines, that sequence is constantly repeated, with only the subroutine address changing from one call to the next. This means that a program consisting of many function calls may have considerable

compiler's co-creator, stated that "in contrast to popular myths, subroutine threading is usually slower than direct threading". However, Ertl's most recent tests show that subroutine threading is faster than direct threading in 15 out of 25 test cases. More specifically, he found that direct

Addresses in the thread are the addresses of machine language. This form is simple, but may have overheads because the thread consists only of machine addresses, so all further parameters must be loaded indirectly from memory. Some Forth systems produce direct-threaded code. On many machines

Threaded code systems save room by replacing that list of function calls, where only the subroutine address changes from one call to the next, with a list of execution tokens, which are essentially function calls with the call opcode(s) stripped off, leaving behind only a list of addresses.

To save space, programmers squeezed the lists of subroutine calls into simple lists of subroutine addresses, and used a small loop to call each subroutine in turn. For example, the following pseudocode uses this technique to add two numbers A and B. In the example, the list is labeled

Indirect threading uses pointers to locations that in turn point to machine code. The indirect pointer may be followed by operands which are stored in the indirect "block" rather than being stored repeatedly in the thread. Thus, indirect code is often more compact than direct-threaded code. The indirection typically makes it slower, though usually still faster than bytecode interpreters. Where the handler operands include both values and types, the space savings over direct-threaded code may be significant. Older FORTH systems typically produce indirect-threaded code.
involved in deciding what to execute is larger than the cost of actually executing it, so such interpreters are often much slower than machine code. However, for more complex ("compound") instructions, the overhead percentage is proportionally less significant.

) to a different address in the thread. A conditional jump-if-zero branch that jumps only if the top-of-stack value is zero could be implemented as shown below. This example uses the embedded parameter version of direct threading so the

So-called "subroutine-threaded code" (also "call-threaded code") consists of a series of machine-language "call" instructions (or addresses of functions to "call", as opposed to direct threading's use of "jump"). Early compilers for

Separating the data and return stacks in a machine eliminates a great deal of stack management code, substantially reducing the size of the threaded code. The dual-stack principle originated three times independently: for

threading is the fastest threading model on Xeon, Opteron, and Athlon processors; indirect threading is fastest on Pentium M processors; and subroutine threading is fastest on Pentium 4, Pentium III, and PPC processors.

The addresses may be direct or indirect, contiguous or non-contiguous (linked by pointers), relative or absolute, resolved at compile time or dynamically built. No single variation is "best" for all situations.
Token-threaded code can sometimes run faster than the equivalent machine code when that machine code is too large to fit in the physical CPU's L1 instruction cache. The higher
with the user's source code. A compiler translates from a source language to machine code, so the compiler, source, and output must all be in memory at the same time. In an interpreter, there is no output.

An example is string threading, in which operations are identified by strings, usually looked up by a hash table. This was used in Charles H. Moore's earliest Forth implementations and in the

Threaded code is a formatting style for compiled code that minimizes memory use. Instead of writing out every step of an operation at its every occurrence in the program, as was common in

free of additional overhead. Today, though almost all programming languages focus on isolating code into subroutines, they do so for code clarity and maintainability, not to save space.

"). The top-level application in these programs may consist of nothing but subroutine calls. Many of these subroutines, in turn, also consist of nothing but lower-level subroutine calls.

is so simple that it can be repeated inline at the end of each subroutine. Control now jumps once, from the end of a subroutine to the start of another, instead of jumping twice via

Over the years, programmers have created many variations on that "interpreter" or "small selector". The particular address in the list of addresses may be extracted using an index,

One solution is to use an interpreter which reads the symbolic language a bit at a time, and calls functions to perform the actions. As the source code is typically much

on each hardware platform. The interpreter instantiates the virtual machine environment and executes the instructions. Thus, only the interpreter must be compiled.

Proceedings of the 1988 Rochester Forth Conference: Programming Environments. Vol. 8. Rochester, New York, USA: Institute for Applied Forth Research, Inc.,
Where above, O is the current object pointer, I is the interpreter pointer, Δ is the length of one address word, and the "[]" operator stands for "dereference".

(DTC). Although the technique is older, the first widely circulated use of the term "threaded code" is probably James R. Bell's 1973 article "Threaded Code".

had only 4 kB of RAM installed. Consequently, a lot of time was spent trying to find ways to reduce a program's size, to fit in the available memory.

microprocessors that use RPL, there is a third level of indirection made possible by an architectural/programming trick which allows faster execution.

is more complex. The decoding of single-byte opcodes can be handled very simply and efficiently by a branch table using the opcode directly as an index.
and an indirect block; and any operands to the fragment are found in the indirect block following the fragment's address. This requires keeping the

Practically all executable threaded code uses one or another of these methods for invoking subroutines (each method is called a "threading model").

in every address, which made ITC easy and fast. Later, he said that he found it so convenient that he propagated it into all later Forth designs.

Today, some Forth compilers generate direct-threaded code while others generate indirect-threaded code. The executables act the same either way.

for instance, the compiler writes each common bit of code into a subroutine. Thus, each bit exists in only one place in memory (see "

Alternatively, operands may be included in the thread. This can remove some indirection needed above, but makes the thread larger:
Most published uses have been in smart cards, toys, calculators, and watches. The bit-oriented tokenized code used in

Transfer control to next pointer or embedded object by setting the PC (program counter) to O_1 plus one address pointer

might execute the sequence "push A, push B, add". That might be translated to the following thread and routines, where

, 1993. Quote: "The B compiler on the PDP-7 did not generate machine instructions, but instead 'threaded code' ..."

, which may generate code in that form or be implemented in that form themselves. The code may be processed by an

"Data processing system and method for the direct and indirect execution of uniformly structured object types"

, but often there are commonly used 1-byte instructions plus some less-common multibyte instructions (see

// Any inter-instruction operations are performed here (e.g. updating global state, event processing, etc)

(NB. Brief overview of the threaded languages, System and User RPL, used on HP calculators like the

When control is transferred to an object pointer or an embedded object, execution continues as follows:

(Stack Pointer) contains an address elsewhere in memory that is available to hold a value temporarily.
(NB. This title is often cited as "RPL: A Mathematics Control Language". An excerpt is available at:

// In a more complex encoding, there may be multiple tables to choose between or control/mode flags

, which typically uses 8-bit opcodes with a stack-based virtual machine. The archetypal bytecode

For example, if the goal is to execute "push A, push B, add", the following might be used. Here,

, such as implementations of Forth, have a simple virtual machine at heart, consisting of three
are often present in a threaded virtual machine. Another one exists for passing data between

calculator in 1986, is a type of proprietary hybrid (direct-threaded and indirect-threaded)

For instructions where the individual operations are simple, such as "push" and "add", the
line is the destination of where to jump if the condition is true, so it must be skipped (

than the resulting machine code, this can reduce overall memory use. This was the reason

is an interpreter: its own code had to share the 4 kB memory of machines like the

// Convert the next bytecode operation to a pointer to machine code that implements it

Dereference the IP (instruction pointer) and store it into O (current object pointer)

// points to the address '&pushA', not the textual label 'thread'

Dereference O and store its address in O_1 (this is the second level of indirection)
: the rediscovery of threaded code in order to exploit remote vulnerable systems.

In an indirect-threaded virtual machine, the one given here, the operations are:

(ITC), for his Forth virtual machine. Moore arrived at this arrangement because

// follow ip to address in thread, follow that address to subroutine, advance ip
// follow pointers to 1st instruction of 'push', DO NOT advance ip yet

// advance ip in thread, jump through next indirect block to next subroutine

direct-threading is faster than subroutine threading (see reference below).
than code generated by alternative generation techniques and by alternative

's experimental hardware-interpreted computer language. It is also used in

// ip points to &pushA (which points to the first instruction of pushA)

// ip points to &pushA (which points to the first instruction of pushA)

// Add the two values together and store the result on the top of the stack

/* Contains bytecode, not machine addresses. Hence it is more compact. */

// must move ip past operand address, since it is not a subroutine address

// send control to first instruction of pushA and advance ip to &pushB

// send control to first instruction of pushA and advance ip to &pushB
describes different threading techniques and provides further references.

(Instruction Pointer) tracks our place within the list. Another variable

// Add the two values together and store the result on top of the stack

is known as a "decode and dispatch interpreter" and follows the form:

slightly slower. However, a program that is small enough to fit in a

describes B (a precursor of C) as implemented using "threaded code".

// follow sp to available memory, store A there, advance sp to next

// follow sp to available memory, store A there, advance sp to next
Moore, Charles H., published remarks in Byte Magazine's Forth Issue

is ultimately based, was a compiler that ran on mainframe machines.

In all interpreters, a branch simply changes the thread pointer (

platform, it isn't portable. A different approach is to generate

Program whose source code consists entirely of calls to functions
Threaded Interpretive Languages: Their Design and Implementation

/* table = address of machine code that implements bytecode 0 */

Early computers had relatively little memory. For example, most

includes the seminal (but out of print) book Thinking Forth by

(2nd printing, 1st ed.). Peterborough, New Hampshire, USA:

Generation of Fast Interpreters for Huffman-Compressed Bytecode

// send control where ip says to (i.e. to pushB) and advance ip

// send control where ip says to (i.e. to pushB) and advance ip

Wickes, William C. (1988-10-01). Forsely, Lawrence P. (ed.).

Threaded code is best known for its use in many compilers of

Huffman threaded code consists of lists of tokens stored as

"Wireless World: Electronics, Radio, Television, Volume 89"

As an example of call threading for "push A, push B, add":
, all of which consist of a table of subroutine addresses.

Moving FORTH: Part 1: Design Decisions in the Forth Kernel

// look 1 past start of indirect block for operand address

has a form that essentially consists entirely of calls to

Loelinger, R. G. (1981). Written at Dayton, Ohio, USA.

If the virtual machine uses only byte-size instructions,

2014. Chapter 5: "Do you need an interpreter?" p. 195.

the length of the object) INDIRECT (Rest of prolog)

may run faster than a larger program that suffers many

"Efficient C/C++ Programming: Smaller, Faster, Better"

section "Simple and tail-recursive native compiler".

) of the virtual machine (not to be confused with the

// Pop/consume top of stack and check if it's zero

, unlike all previous examples where it contained the

Mainframes and some early microprocessors such as the

"Speed of various interpreter dispatch techniques V2"

Increment the IP by the length of one address pointer

The common way to make computer programs is to use a

stack pointer for passing parameters between words)

is typically fast but, because it is specific to a

can be seen as a kind of Huffman-threaded code.
of the underlying hardware implementing the VM)

online version of the book Starting FORTH by

"The Theory and Practice of Compiler Writing"

// Read value from variable and push on stack

, when other programs have filled the cache.

This can be represented more precisely by:

// address where A is stored, not literal A

) is found by double-indirecting through

. Small programs may also be faster at

is initialized to the address labeled

covers threading techniques in depth.

, which replaces the global variable

// Get destination address for branch

invented a more compact arrangement,

is a programming technique where the

RPL: A Mathematical Control Language

Jean-Paul Tremblay; P. G. Sorenson.

A common approach, historically, is

Programming language implementation

"The Development of the C Language"

) over if the branch is not taken.
O = [I]
I = I + Δ
PC = [O] + Δ
// points to '&i_pushA'

Latendresse, Mario; Feeley, Marc.

compilers, to be token-threading.

// Pop the top value off the stack

// Pop the top value off the stack

amounts of repeated code as well.

or it may simply be a sequence of

The Development of the C Language

Wickes, William C. (1986-05-30).

// Pop second value off the stack

// Pop second value off the stack

GCC extensions: Labels as Values
complex instruction set computer

, and other languages for small

from the original on 2017-09-17

from the original on 2023-08-03

History of general-purpose CPUs

Anton Ertl's explanatory page

"The RPL inner loop explained"

Busby, Jonathan (2018-09-07).

threaded interpretive language

#define PUSH(x) (*sp++ = (x))

#define PUSH(x) (*sp++ = (x))

Stack-based virtual machines

The Museum of HP Calculators

Return-oriented programming

RPLMan from Goodies Disk 4

Continuation-passing style

, first introduced in the

compilers, as well as the

is initialized to address

, some implementations of

, many implementations of

Multi-threaded programming

Communications of the ACM

with a function parameter

subroutine to be called.

(i.e., the address where

Threaded code has better

"What is Threaded Code?"

Just-in-time compilation
general-purpose register

, and many of the first

amateur radio satellites

Not to be confused with

University of Rochester

Bell, James R. (1973).

Burroughs large systems

is simply a fetch from

Thinking Forth Project

What is Threaded Code?

BYTE Publications Inc.

('words'). These are:

It is used in some

University of Illinois

, each code fragment (

indirect threaded code

architectures, it may

It is often used in

10.1145/362248.362270

Java virtual machines

Lesser-used threading

#define POP() (*--sp)

#define POP() (*--sp)

minicomputers had an

Don't repeat yourself

programming languages

" generated by some

Subroutine threading

direct threaded code

The calling loop at

virtual method table

, early versions of

Dennis M. Ritchie,

instruction pointer

calling conventions

call instructions.

published in 1981.

2005-11-13 at the

published in 1984.

2005-11-13 at the

2015-03-28 at the

Indirect threading

computer processor

Brad Rodriguez's

Dennis M. Ritchie

(xiv+2+251 pages)

Go back to step 1

Huffman threading

), in which case

, BASIC and some

Data General Nova

symbolic language

(written in some
Horn, Joseph K.

978-0-91459308-9

978-0-07038360-9

"Byte, Volume 5"

"muforth readme"
virtual machines
Often, threaded
rp or r (return
w (work pointer)
Common amenities
microcontrollers
Anton Ertl, the
1126:variable_address
1099:variable_address
An example of a
Direct threading
Threading models
Charles H. Moore
macro assemblers
The resulting
thread switching
computer science
2949:Wayback Machine
2932:Wayback Machine
2872:Wayback Machine
Further reading
2685:10.1.1.156.2546
2611:"Threaded code"
Microsoft BASIC
Dartmouth BASIC
2264:program counter
1853:/* table ... */
1844:/* table ... */
Token threading
indirection bit
This is called
For example:
328:and a variable
254:Microsoft BASIC
224:virtual machine
Starting FORTH
External links
"What is RPL?"
2621:(6): 370–372.
2599:. 1980. p. 212
2575:. 1985. p. 527
2558:Steve Heller.
2302:, also called
2285:
2284:
2277:
2270:
2267:
2223:
2220:
2124:
2106:
2103:
2092:
2083:
2079:
2078:
2075:
2072:
2069:
2066:
2043:
2040:
2027:
2024:
2007:
2004:
1698:
1656:
1653:
1521:
1499:
1496:
1244:
1233:subroutine in
1202:
1199:
1013:
823:
797:
794:
789:
786:
547:
339:
320:
317:
290:dispatch table
243:microcomputers
226:and to use an
190:
189:
144:
142:
135:
129:
126:
25:Jump threading
15:
13:
10:
9:
6:
4:
3:
2:
2999:
2988:
2985:
2983:
2980:
2978:
2975:
2974:
2972:
2963:
2960:
2957:
2953:
2950:
2946:
2943:
2939:
2936:
2933:
2929:
2926:
2922:
2919:
2916:
2912:
2911:
2907:
2902:
2889:
2885:
2880:
2877:
2873:
2869:
2866:
2863:
2862:
2858:
2850:
2847:
2841:
2837:
2833:
2827:
2823:
2819:
2818:
2810:
2807:
2796:
2792:
2785:
2782:
2770:
2766:
2762:
2755:
2753:
2749:
2737:
2733:
2729:
2725:
2721:
2715:
2712:
2708:
2704:
2703:
2695:
2692:
2686:
2681:
2677:
2670:
2667:
2662:
2659:Ertl, Anton.
2655:
2652:
2646:
2643:
2638:
2634:
2629:
2624:
2620:
2616:
2612:
2605:
2602:
2598:
2593:
2590:
2586:
2581:
2578:
2574:
2568:
2565:
2561:
2555:
2553:
2549:
2545:
2542:David Frech.
2539:
2536:
2532:
2526:
2523:
2518:
2512:
2510:
2506:
2499:
2491:
2488:, upon which
2487:
2482:
2479:
2472:
2468:
2465:
2463:
2460:
2457:
2454:
2452:
2449:
2442:
2439:
2438:
2434:
2428:
2423:
2418:
2325:
2320:
2317:
2314:
2310:
2307:
2305:
2301:
2298:
2297:
2296:
2295:. Those are:
2294:
2290:
2282:
2278:
2275:
2271:
2268:
2265:
2261:
2257:
2256:
2255:
2253:
2249:
2244:
2242:
2238:
2234:
2230:
2221:
2122:
2104:
2102:
2100:
2091:
2088:
2082:
2076:
2073:
2070:
2067:
2064:
2063:
2062:
2060:
2056:
2052:
2048:
2041:
2039:
2037:
2033:
2025:
2023:
2021:
2017:
2013:
2012:Huffman codes
2005:
2003:
2000:
1994:
1991:
1986:
1980:
1696:
1694:
1690:
1685:
1683:
1679:
1675:
1671:
1667:
1663:
1654:
1519:
1516:
1513:
1508:
1506:
1497:
1242:
1240:
1232:
1207:
1200:
1011:
821:
807:
806:stack machine
802:
795:
793:
787:
785:
782:
780:
776:
772:
768:
763:
761:
545:
337:
335:
331:
327:
318:
316:
314:
310:
305:
301:
297:
295:
291:
287:
281:
278:
273:
271:
267:
262:
259:
255:
251:
246:
244:
240:
236:
231:
229:
225:
221:
217:
213:
209:
205:
201:
198:to translate
197:
186:
183:
175:
172:February 2020
165:
161:
157:
151:
150:
145:This section
143:
134:
133:
127:
125:
123:
119:
118:minicomputers
115:
111:
107:
103:
99:
94:
92:
88:
84:
80:
76:
72:
68:
64:
59:
57:
53:
49:
45:
41:
37:
36:threaded code
33:
26:
22:
2892:. Retrieved
2816:
2809:
2798:. Retrieved
2794:
2784:
2773:. Retrieved
2764:
2739:. Retrieved
2719:0-07038360-X
2701:
2694:
2678:. Elsevier.
2675:
2669:
2654:
2645:
2618:
2614:
2604:
2592:
2580:
2567:
2538:
2525:
2481:
2323:
2318:
2312:
2308:
2303:
2299:
2292:
2286:
2245:
2225:
2204:when_true_ip
2156:when_true_ip
2108:
2096:
2089:
2086:
2080:
2058:
2045:
2029:
2009:
1999:code density
1995:
1987:
1968:
1686:
1658:
1517:
1509:
1501:
1238:
1230:
1208:
1204:
1009:
820:is stored).
803:
799:
791:
783:
770:
764:
759:
757:
534:
333:
329:
325:
322:
306:
302:
298:
286:branch table
282:
274:
263:
247:
232:
220:instructions
208:machine code
193:
178:
169:
146:
95:
87:cache misses
60:
56:machine code
35:
29:
2252:subroutines
2115:&thread
1693:interpreter
1215:&thread
319:Development
258:Altair 8800
228:interpreter
200:source code
52:interpreter
44:subroutines
2971:Categories
2942:Leo Brodie
2925:Leo Brodie
2894:2017-09-17
2800:2019-12-27
2775:2019-12-27
2741:2023-08-03
2707:BYTE Books
2500:References
2293:primitives
2237:PostScript
818:&pushA
212:executable
156:improve it
100:, such as
2977:Compilers
2840:839704944
2795:uspto.gov
2680:CiteSeerX
2462:Tail call
2281:parameter
2279:sp or s (
2258:ip or i (
2248:registers
2036:Bashforth
1814:/*pushB*/
1808:/*pushA*/
1766:BYTE_CODE
1670:bytecodes
765:In 1970,
160:verifying
48:compilers
2945:Archived
2928:Archived
2888:Archived
2868:Archived
2849:Zip File
2769:Archived
2728:80-19392
2637:19042952
2587:. p. 73.
2419:See also
2276:pointer)
2105:Branches
2097:On HP's
1990:overhead
1983:decode()
1971:decode()
1964:dispatch
1907:dispatch
1880:dispatch
1757:CODE_PTR
1718:dispatch
1689:bytecode
1672:used by
277:RCA 1802
239:IBM 1130
216:hardware
196:compiler
120:and for
1958:addend2
1952:addend1
1928:addend2
1916:addend1
1820:/*add*/
1646:addend2
1640:addend1
1616:addend2
1604:addend1
1474:addend2
1468:addend1
1444:addend2
1432:addend1
1333:i_pushB
1315:i_pushA
1303:i_pushB
1297:i_pushA
1231:current
738:addend2
732:addend1
705:addend2
690:addend1
521:addend2
515:addend1
488:addend2
473:addend1
313:pointer
154:Please
128:History
75:execute
63:density
2838:
2828:
2734:
2726:
2716:
2682:
2635:
2395:unnest
2313:semi_s
2309:unnest
2246:Three
2235:, and
2144:thread
2126:thread
2099:Saturn
2055:HP-18C
2020:PBASIC
1975:thread
1796:thread
1784:return
1760:decode
1730:decode
1715:thread
1666:Pascal
1662:p-code
1523:thread
1512:Gforth
1288:thread
1261:thread
1180:result
1153:result
1051:thread
1036:thread
990:result
963:result
867:thread
846:thread
814:thread
585:thread
564:thread
383:thread
356:thread
326:thread
250:denser
222:for a
71:cached
69:. In
2901:HP 48
2633:S2CID
2473:Notes
2407:->
2386:->
2371:->
2344:->
2311:, or
2304:docol
2274:stack
2233:Forth
2141:&
2135:&
1883:pushB
1856:pushA
1850:pushB
1847:&
1841:pushA
1838:&
1829:&
1823:table
1787:table
1736:&
1712:&
1700:start
1574:pushB
1550:pushA
1538:pushB
1532:pushA
1505:ALGOL
1357:&
1351:i_add
1345:&
1339:&
1327:&
1321:&
1309:i_add
1306:&
1300:&
1294:&
1258:&
1246:start
1084:&
1078:&
1072:&
1063:&
1057:&
1033:&
1021:start
927:pushB
894:pushA
885:&
882:pushB
879:&
876:pushA
873:&
843:&
831:start
651:pushB
612:pushA
603:&
600:pushB
597:&
594:pushA
591:&
561:&
549:start
440:pushB
410:pushA
401:&
398:pushB
395:&
392:pushA
389:&
353:&
341:start
292:, or
206:) to
110:COBOL
106:BASIC
102:Forth
83:cache
2836:OCLC
2826:ISBN
2732:ISBN
2724:LCCN
2714:ISBN
2413:next
2392:next
2362:nest
2350:jump
2329:next
2319:next
2315:(;s)
2300:nest
2207:jump
2119:ip++
1961:jump
1904:jump
1877:jump
1754:addr
1751:jump
1724:addr
1678:Java
1674:.NET
1541:call
1535:call
1529:call
1477:jump
1405:jump
1363:push
1342:push
1324:push
1267:jump
1239:next
1219:push
1186:jump
1174:PUSH
1135:jump
1117:PUSH
1093:push
1075:push
1060:push
1039:jump
996:jump
984:PUSH
945:jump
933:PUSH
912:jump
900:PUSH
852:jump
775:Nova
744:jump
672:jump
636:jump
570:jump
527:jump
461:jump
434:jump
368:jump
40:code
2874:by
2623:doi
2401:*--
2180:*--
2150:brz
2147:...
2138:brz
2132:...
2051:RPL
2049:'s
2042:RPL
1934:*--
1922:*--
1910:add
1832:add
1739:vpc
1706:vpc
1649:ret
1622:*--
1610:*--
1598:add
1595:ret
1571:ret
1547:ret
1544:add
1486:*++
1450:*--
1438:*--
1426:add
1414:*++
1360:add
1312:...
1223:add
1168:POP
1159:POP
1147:add
1090:...
1087:add
978:POP
969:POP
957:add
891:...
888:add
711:*--
696:*--
684:add
609:...
606:add
542:top
538:top
530:top
494:*--
479:*--
467:add
464:top
437:top
407:...
404:add
362:top
311:or
158:by
81:'s
30:In
23:or
2973::
2903:.)
2886:.
2834:.
2824:.
2793:.
2767:.
2763:.
2751:^
2730:.
2722:.
2709:,
2631:.
2619:16
2617:.
2613:.
2551:^
2508:^
2445:ip
2410:ip
2404:rp
2389:ip
2380:++
2377:rp
2368:ip
2359:++
2353:**
2341:++
2338:ip
2243:.
2231:,
2216:++
2213:ip
2198:ip
2186:==
2183:sp
2174:if
2168:++
2165:ip
2111:ip
2047:HP
2038:.
1946:++
1943:sp
1937:sp
1925:sp
1895:++
1892:sp
1868:++
1865:sp
1769:**
1676:,
1634:++
1631:sp
1625:sp
1613:sp
1586:++
1583:sp
1562:++
1559:sp
1489:ip
1462:++
1459:sp
1453:sp
1441:sp
1417:ip
1390:ip
1375:++
1372:sp
1279:ip
1252:ip
1235:ip
1227:ip
1221:,
1211:ip
1195:++
1192:ip
1171:()
1162:()
1144:++
1141:ip
1111:++
1108:ip
1048:++
1045:ip
1027:ip
1005:++
1002:ip
981:()
972:()
954:++
951:ip
921:++
918:ip
861:++
858:ip
837:ip
810:ip
753:++
750:ip
726:++
723:sp
714:sp
699:sp
681:++
678:ip
663:++
660:sp
645:++
642:ip
624:++
621:sp
579:++
576:ip
555:ip
509:++
506:sp
497:sp
482:sp
452:++
449:sp
422:++
419:sp
377:++
374:ip
347:ip
334:sp
330:ip
288:,
237:,
124:.
34:,
2897:.
2851:)
2842:.
2803:.
2778:.
2744:.
2688:.
2663:.
2639:.
2625::
2519:.
2398::
2383:w
2374:*
2365::
2356:w
2347:w
2335:*
2332::
2210:*
2201:=
2192:)
2189:0
2177:(
2162:*
2159:=
2153::
2129::
1955:+
1949:=
1940:*
1931:=
1919:=
1913::
1901:B
1898:=
1889:*
1886::
1874:A
1871:=
1862:*
1859::
1826::
1817:0
1811:2
1805:1
1799::
1793:}
1790:;
1778:{
1775:)
1772:p
1763:(
1742:)
1733:(
1727:=
1721::
1709:=
1703::
1682:C
1643:+
1637:=
1628:*
1619:=
1607:=
1601::
1592:B
1589:=
1580:*
1577::
1568:A
1565:=
1556:*
1553::
1526::
1492:)
1483:(
1480:*
1471:+
1465:=
1456:*
1447:=
1435:=
1429::
1420:)
1411:(
1408:*
1399:)
1396:1
1393:+
1387:*
1384:(
1381:*
1378:=
1369:*
1366::
1354::
1348:B
1336::
1330:A
1318::
1291::
1282:)
1276:*
1273:(
1270:*
1255:=
1249::
1189:*
1183:)
1177:(
1165:+
1156:=
1150::
1138:*
1129:)
1123:*
1120:(
1105:*
1102:=
1096::
1081:B
1066:A
1054::
1042:*
1030:=
1024::
999:*
993:)
987:(
975:+
966:=
960::
948:*
942:)
939:B
936:(
930::
915:*
909:)
906:A
903:(
897::
870::
855:*
840:=
834::
747:*
735:+
729:=
720:*
708:=
693:=
687::
675:*
669:B
666:=
657:*
654::
639:*
630:A
627:=
618:*
615::
588::
573:*
558:=
552::
518:+
512:=
503:*
491:=
476:=
470::
458:B
455:=
446:*
443::
428:A
425:=
416:*
413::
386::
371:*
365::
350:=
344::
185:)
179:(
174:)
170:(
152:.
114:B
27:.