Reactive planning

"Dynamic planning" redirects here. For the anime studio, see Dynamic Planning.

In artificial intelligence, reactive planning denotes a group of techniques for action selection by autonomous agents. These techniques differ from classical planning in two aspects. First, they operate in a timely fashion and hence can cope with highly dynamic and unpredictable environments. Second, they compute just one next action in every instant, based on the current context. Reactive planners often (but not always) exploit reactive plans, which are stored structures describing the agent's priorities and behaviour. The term reactive planning goes back to at least 1988 and is synonymous with the more modern term dynamic planning.

Reactive plan representation

There are several ways to represent a reactive plan. All require a basic representational unit and a means to compose these units into plans.

Condition-action rules (productions)

A condition-action rule, or if-then rule, is a rule of the form: if condition then action. Such rules are called productions. The meaning of the rule is as follows: if the condition holds, perform the action. The action can be either external (e.g., pick something up and move it) or internal (e.g., write a fact into the internal memory, or evaluate a new set of rules). Conditions are normally Boolean, and the action either can be performed or not.

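As an illustration, the following is a minimal sketch of a rule-based planner; the Rule class, the rule set and the conditions are invented for this example and are not taken from any system cited here. On every tick the planner tests the rules against the current context and returns the action of the first rule whose condition holds.

# Minimal, hypothetical sketch of a production-rule reactive planner.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Rule:
    condition: Callable[[dict], bool]   # Boolean test over the current context
    action: str                         # name of an external or internal action

def select_action(rules: list[Rule], context: dict) -> Optional[str]:
    """Return the action of the first rule whose condition holds, or None."""
    for rule in rules:
        if rule.condition(context):
            return rule.action
    return None

# Example rule set for an imaginary cleaning robot.
rules = [
    Rule(lambda ctx: ctx["battery"] < 0.2, "go_to_charger"),
    Rule(lambda ctx: ctx["sees_dirt"], "vacuum"),
    Rule(lambda ctx: True, "wander"),  # default behaviour
]

print(select_action(rules, {"battery": 0.9, "sees_dirt": True}))  # -> vacuum
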
Production rules may be organized in relatively flat structures, but more often they are organized into a hierarchy of some kind. For example, the subsumption architecture consists of layers of interconnected behaviors, each actually a finite state machine which acts in response to an appropriate input. These layers are then organized into a simple stack, with higher layers subsuming the goals of the lower ones. Other systems may use trees, or may include special mechanisms for changing which goal, and hence which rule subset, is currently most important. Flat structures are relatively easy to build, but allow only for the description of simple behaviour, or require immensely complicated conditions to compensate for the lacking structure.

An important part of any distributed action selection algorithm is a conflict resolution mechanism. This is a mechanism for resolving conflicts between actions proposed when the conditions of more than one rule hold in a given instant. The conflict can be resolved, for example, by:

assigning fixed priorities to the rules in advance (sketched below),
assigning preferences (e.g. in the Soar architecture),
learning relative utilities between rules (e.g. in ACT-R),
exploiting a form of planning.

Expert systems often use other, simpler heuristics such as recency for selecting rules, but it is difficult to guarantee good behaviour in a large system with simple approaches. Conflict resolution is only necessary for rules that want to take mutually exclusive actions (cf. Blumberg 1996).

Some limitations of this kind of reactive planning can be found in Brom (2005).

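The following illustrative sketch, again with invented names, shows the first of the options above: every rule whose condition holds is collected, and the conflict is resolved by a fixed priority assigned to each rule in advance.

# Hypothetical sketch of conflict resolution by fixed priorities.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class PrioritizedRule:
    priority: int                       # higher value wins a conflict
    condition: Callable[[dict], bool]
    action: str

def select_action(rules: list[PrioritizedRule], context: dict) -> Optional[str]:
    """Gather every rule that fires, then take the highest-priority action."""
    fired = [r for r in rules if r.condition(context)]
    if not fired:
        return None
    return max(fired, key=lambda r: r.priority).action

rules = [
    PrioritizedRule(10, lambda ctx: ctx["enemy_close"], "flee"),
    PrioritizedRule(5, lambda ctx: ctx["hungry"], "eat"),
]

# Both conditions hold, but the higher-priority rule wins.
print(select_action(rules, {"enemy_close": True, "hungry": True}))  # -> flee
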
Finite state machines

A finite state machine (FSM) is a model of the behaviour of a system. FSMs are used widely in computer science, and modeling the behaviour of agents is only one of their possible applications. A typical FSM, when used for describing the behaviour of an agent, consists of a set of states and of transitions between these states. The transitions are actually condition-action rules: in every instant, just one state of the FSM is active, and its transitions are evaluated. If a transition is taken, it activates another state. That means that, in general, transitions are rules of the following form: if condition then activate-new-state. In some systems, transitions can also connect a state to itself, which allows transition actions to be executed without actually changing the state.

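A minimal sketch of this idea follows; the state and transition names are invented for illustration. The machine holds one active state, and on every tick it evaluates the transitions of that state and switches to the target of the first transition whose condition holds.

# Hypothetical sketch of an FSM whose transitions are "if condition then activate-new-state" rules.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Transition:
    condition: Callable[[dict], bool]
    target: str                         # state to activate (may be the current state itself)

@dataclass
class FSM:
    state: str
    transitions: dict[str, list[Transition]] = field(default_factory=dict)

    def step(self, context: dict) -> str:
        """Evaluate the active state's transitions and take the first one that fires."""
        for t in self.transitions.get(self.state, []):
            if t.condition(context):
                self.state = t.target
                break
        return self.state

fsm = FSM(state="patrol", transitions={
    "patrol": [Transition(lambda c: c["enemy_visible"], "attack")],
    "attack": [Transition(lambda c: not c["enemy_visible"], "patrol"),
               Transition(lambda c: c["low_health"], "attack")],  # self-transition
})

print(fsm.step({"enemy_visible": True, "low_health": False}))  # -> attack
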
There are two ways of producing behaviour with an FSM, depending on what the designer associates with the states: they can be either 'acts' or scripts. An 'act' is an atomic action that should be performed by the agent whenever its FSM is in the given state; this action is then performed in every time step. More often, however, the latter case is used: every state is associated with a script, which describes a sequence of actions that the agent has to perform while its FSM is in the given state. If a transition activates a new state, the former script is simply interrupted and the new one is started.

If a script is more complicated, it can be broken down into several scripts and a hierarchical FSM can be exploited. In such an automaton, every state can contain substates; only the states at the atomic level are associated with a (simple) script or an atomic action.

Computationally, hierarchical FSMs are equivalent to FSMs: each hierarchical FSM can be converted to a classical FSM. However, the hierarchical approach facilitates better designs. See the paper of Damian Isla (2005) for an example of an action selection mechanism for computer game bots that uses hierarchical FSMs.

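A sketch of the hierarchical idea, deliberately simplified and with invented names: a composite state delegates each tick to its currently active substate, and only atomic states emit actions. Converting such a machine to a flat FSM would amount to enumerating every combination of nested active states.

# Hypothetical sketch of a hierarchical FSM: composite states contain substates.
class AtomicState:
    def __init__(self, action):
        self.action = action

    def step(self, context):
        return self.action              # atomic level: just emit the action

class CompositeState:
    def __init__(self, substates, initial, transitions):
        self.substates = substates      # name -> AtomicState or CompositeState
        self.active = initial
        self.transitions = transitions  # list of (condition, target substate name)

    def step(self, context):
        for condition, target in self.transitions:
            if condition(context):
                self.active = target
                break
        return self.substates[self.active].step(context)  # delegate downwards

# A "fight" super-state with two substates; it nests like any other state.
fight = CompositeState(
    substates={"aim": AtomicState("aim_at_enemy"), "shoot": AtomicState("fire")},
    initial="aim",
    transitions=[(lambda c: c["target_locked"], "shoot")],
)

print(fight.step({"target_locked": True}))  # -> fire
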
Fuzzy approaches

Both if-then rules and FSMs can be combined with fuzzy logic. The conditions, states and actions are then no longer Boolean or "yes/no", but approximate and smooth. Consequently, the resulting behaviour transitions more smoothly, especially between two tasks. However, evaluating fuzzy conditions is much slower than evaluating their crisp counterparts. See the architecture of Alex Champandard.

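A minimal sketch of the fuzzy variant, with membership functions and rules invented for illustration: conditions return a degree of truth in the interval [0, 1] instead of a Boolean value, the proposed actions are weighted by these degrees, and the strongest one is chosen, so behaviour shifts gradually between tasks.

# Hypothetical sketch of fuzzy condition-action rules with graded activation.
def low_battery(ctx):
    # Degree of truth rises linearly as the battery level drops below 50 %.
    return min(1.0, max(0.0, (0.5 - ctx["battery"]) / 0.5))

def dirt_nearby(ctx):
    return min(1.0, max(0.0, 1.0 - ctx["dirt_distance"] / 5.0))

fuzzy_rules = [
    (low_battery, "go_to_charger"),
    (dirt_nearby, "vacuum"),
]

def fuzzy_select(rules, ctx):
    """Weight each action by its rule's degree of truth and pick the strongest."""
    weights = {action: condition(ctx) for condition, action in rules}
    return max(weights, key=weights.get), weights

action, weights = fuzzy_select(fuzzy_rules, {"battery": 0.3, "dirt_distance": 1.0})
print(action, weights)  # e.g. vacuum {'go_to_charger': 0.4, 'vacuum': 0.8}
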
Connectionist approaches

Reactive plans can also be expressed by connectionist networks such as artificial neural networks or free-flow hierarchies. The basic representational unit is a unit with several input links that feed it with "an abstract activity" and output links that propagate the activity to the following units; each unit itself works as an activity transducer. Typically, the units are connected in a layered structure.

The positives of connectionist networks are, first, that the resulting behaviour is smoother than behaviour produced by crisp if-then rules and FSMs; second, that the networks are often adaptive; and third, that a mechanism of inhibition can be used, so behaviour can also be described proscriptively (by means of rules, one can describe behaviour only prescriptively). However, the methods also have several flaws. First, for a designer it is much more complicated to describe behaviour by a network than by if-then rules. Second, only relatively simple behaviour can be described, especially if the adaptive feature is to be exploited.

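The following toy sketch, with an invented network and weights, illustrates the free-flow idea: each unit sums the activity arriving on its input links, inhibitory links subtract from it, and the action unit that ends up with the highest activity is executed.

# Hypothetical sketch of a tiny free-flow activation network.
# Negative weights act as inhibition.
links = {                               # (source unit, target unit): weight
    ("hunger", "seek_food"): 1.0,
    ("fear", "flee"): 1.2,
    ("fear", "seek_food"): -0.8,        # fear inhibits feeding
}
sensors = {"hunger": 0.9, "fear": 0.5}

def propagate(links, sensors):
    """Each target unit transduces the weighted activity of its sources."""
    activity = {}
    for (src, dst), weight in links.items():
        activity[dst] = activity.get(dst, 0.0) + weight * sensors.get(src, 0.0)
    return activity

activity = propagate(links, sensors)
print(max(activity, key=activity.get), activity)
# -> flee wins here: flee = 0.6, seek_food = 0.9 - 0.4 = 0.5
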
Reactive planning algorithms

A typical reactive planning algorithm just evaluates if-then rules or computes the state of a connectionist network. However, some algorithms have special features.

Rete evaluation: with a proper logic representation (which is suitable only for crisp rules), the rules need not be re-evaluated at every time step. Instead, a form of cache storing the evaluation from the previous step can be used.

Scripting languages: sometimes the rules or FSMs are directly the primitives of an architecture (e.g. in Soar). More often, however, reactive plans are programmed in a scripting language, where the rules are only one of the primitives (as in JAM or ABL).

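The following deliberately simplified sketch illustrates the caching idea behind such incremental evaluation; it is not the Rete algorithm itself, and the fact and condition names are invented. Each condition declares which facts it depends on, and its cached result is recomputed only when one of those facts has changed since the previous step.

# Simplified, hypothetical sketch of incremental condition evaluation (not full Rete).
class CachedCondition:
    def __init__(self, depends_on, predicate):
        self.depends_on = set(depends_on)   # names of the facts this condition reads
        self.predicate = predicate
        self.cached = None

    def evaluate(self, facts, changed):
        # Re-evaluate only if a relevant fact changed or nothing is cached yet.
        if self.cached is None or self.depends_on & changed:
            self.cached = self.predicate(facts)
        return self.cached

enemy_near = CachedCondition({"enemy_distance"}, lambda f: f["enemy_distance"] < 5)

facts = {"enemy_distance": 10, "ammo": 3}
print(enemy_near.evaluate(facts, changed=set(facts)))           # computed -> False
facts["ammo"] = 2
print(enemy_near.evaluate(facts, changed={"ammo"}))             # irrelevant change, cached -> False
facts["enemy_distance"] = 2
print(enemy_near.evaluate(facts, changed={"enemy_distance"}))   # recomputed -> True
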
Steering

Steering is a special reactive technique used in the navigation of agents. The simplest form of reactive steering is employed in Braitenberg vehicles, which map sensor inputs directly to effector outputs and can follow or avoid a stimulus. More complex systems are based on a superposition of attractive and repulsive forces that act on the agent. This kind of steering is based on the original work on boids by Craig Reynolds. By means of steering, one can achieve simple forms of:

navigation towards a goal,
obstacle avoidance behaviour,
wall following behaviour,
predator avoidance,
enemy approaching,
crowd behaviour.

The advantage of steering is that it is computationally very efficient. In computer games, hundreds of NPCs can be driven by this technique. In cases of more complicated terrain (e.g. a building), however, steering must be combined with path-finding (as e.g. in Milani and Poggioni 2005), which is a form of planning.

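A small sketch of force superposition follows; the weights and the scenario are invented, and the code is only loosely in the spirit of Reynolds' work rather than taken from it. An attractive force pulls the agent towards its goal, every nearby obstacle contributes a repulsive force, and the agent simply moves along the summed vector in each tick.

# Hypothetical sketch of steering by superposition of attractive and repulsive forces.
import math

def steering_force(pos, goal, obstacles, repulse_radius=3.0):
    # Attractive unit vector towards the goal.
    gx, gy = goal[0] - pos[0], goal[1] - pos[1]
    d = math.hypot(gx, gy) or 1.0
    fx, fy = gx / d, gy / d
    # Repulsive forces from nearby obstacles, stronger when closer.
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        dist = math.hypot(dx, dy)
        if 0 < dist < repulse_radius:
            strength = (repulse_radius - dist) / repulse_radius
            fx += strength * dx / dist
            fy += strength * dy / dist
    return fx, fy

# One tick of movement: step along the combined force.
pos, goal = (0.0, 0.0), (10.0, 0.0)
fx, fy = steering_force(pos, goal, obstacles=[(2.0, 0.5)])
pos = (pos[0] + 0.5 * fx, pos[1] + 0.5 * fy)
print(pos)  # the agent heads towards the goal while veering away from the obstacle
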
See also

Behavior based AI

References

Blumberg, B.: Old Tricks, New Dogs: Ethology and Interactive Creatures. PhD thesis, Massachusetts Institute of Technology (1996).
Brom, C.: Hierarchical Reactive Planning: Where is its limit? In: Proceedings of MNAS workshop. Edinburgh, Scotland (2005).
Bryson, J.: Intelligence by Design: Principles of Modularity and Coordination for Engineering Complex Adaptive Agents. PhD thesis, Massachusetts Institute of Technology (2001).
Champandard, A. J.: AI Game Development: Synthetic Creatures with Learning and Reactive Behaviors. New Riders, USA (2003).
de Sevin, E., Thalmann, D.: A Motivational Model of Action Selection for Virtual Humans. In: Computer Graphics International (CGI), IEEE Computer Society Press, New York (2005).
Grand, S., Cliff, D., Malhotra, A.: Creatures: Artificial Life Autonomous Software-Agents for Home Entertainment. In: Johnson, W. L. (eds.): Proceedings of the First International Conference on Autonomous Agents. ACM Press (1997) 22-29.
Huber, M. J.: JAM: A BDI-theoretic Mobile Agent Architecture. In: Proceedings of the Third International Conference on Autonomous Agents (Agents'99). Seattle (1999) 236-243.
Isla, D.: Handling Complexity in Halo 2. In: Gamasutra online, 03/11 (2005).
Milani, A., Poggioni, V.: Planning in Reactive Environments. In: Computational Intelligence, 23(4), 439–463, Blackwell-Wiley (2005).
Reynolds, C. W.: Flocks, Herds, and Schools: A Distributed Behavioral Model. In: Computer Graphics, 21(4) (SIGGRAPH '87 Conference Proceedings) (1987) 25-34.
Tyrrell, T.: Computational Mechanisms for Action Selection. Ph.D. Dissertation, Centre for Cognitive Science, University of Edinburgh (1993).
van Waveren, J. M. P.: The Quake III Arena Bot. Master thesis, Faculty ITS, University of Technology Delft (2001).
Wooldridge, M.: An Introduction to MultiAgent Systems. John Wiley & Sons (2009).
Wortham, R. H., Gaudl, S. E., Bryson, J. J.: Instinct: A Biologically Inspired Reactive Planner for Intelligent Embedded Systems. In: Cognitive Systems Research (2018).

External links

Pogamut2 – platform for fast agent prototyping in Unreal Tournament 2004, using POSH, a reactive planner designed and developed by J. J. Bryson.
Softimage/Behavior product. Avid Technology Inc.
Creatures, an implementation of reactive planning by Grand et al.
