Autonomous Agents

Wooldridge and Jennings [@] provide a useful starting point by defining autonomy, social ability, reactivity and proactiveness as essential properties of an agent. Agent research is a wide area covering a variety of topics. These include:

Distributed Problem Solving (DPS)

The agent concept can be used to simplify the solution of large problems by distributing them to a number of collaborating problem-solving units. DPS is not considered here because EXCALIBUR's agents are fully autonomous: each agent has individual goals, and there is no superior common goal.

Multi-Agent Systems (MAS)

MAS research deals with appropriate ways of organizing agents. These include general organizational concepts, the distribution of management tasks, dynamic organizational changes like team formation, and the underlying communication mechanisms.

Autonomous Agents

Research on autonomous agents is primarily concerned with the realization of a single agent. This includes topics like sensing, models of emotion, motivation, personality, and action selection and planning. This field is our main focus within the EXCALIBUR project.

An agent has goals (stay alive, catch player's avatar, ...), can sense certain properties of its environment (see objects, hear noises, ...), and can execute specific actions (walk northward, eat apple, ...). There are some special senses and actions dedicated to communicating with other agents.

The following subsections classify different agent architectures according to their trade-off between computation time and the realization of sophisticated goal-directed behavior.

Subsections:

Reactive Agents

Triggering Agents

Deliberative Agents

Hybrid Agents

Anytime Agents

Reactive Agents

Reactive agents work in a hard-wired stimulus-response manner. Systems like Joseph Weizenbaum's Eliza [@] and Agre and Chapman's Pengi [@] are examples of this kind of approach. For certain sensor information, a specific action is executed. This can be implemented by simple if-then rules.

The agent's goals are only implicitly represented by the rules, and it is hard to ensure the desired behavior. Each and every situation must be considered in advance. For example, a situation in which a helicopter is to follow another helicopter can be realized by corresponding rules. One of the rules might look like this:

IF (leading_helicopter == left) THEN
    turn_left
ENDIF

But if the programmer fails to foresee all possible events, rules will be missing. For example, an additional rule to stop the pursuit if the leading helicopter crashes is easily forgotten. Reactive systems in more complex environments often contain hundreds of rules, which makes it very costly to encode these systems and keep track of their behavior.
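To make this concrete, such a rule set amounts to plain if-then dispatch over the current percept. The following Python sketch is purely illustrative; the percept fields and action names are hypothetical and not EXCALIBUR's actual interface:

# Minimal sketch of a reactive agent as a hard-wired rule table.
# All percept fields and action names are invented illustrations.
def reactive_action(percept):
    # Every situation must be enumerated by hand; a forgotten case
    # (such as the leader crashing) silently yields wrong behavior.
    if percept["leader_crashed"]:
        return "stop_pursuit"      # the easily forgotten rule
    if percept["leading_helicopter"] == "left":
        return "turn_left"
    if percept["leading_helicopter"] == "right":
        return "turn_right"
    return "fly_straight"          # default when no rule matches

# One sense-act cycle:
print(reactive_action({"leader_crashed": False, "leading_helicopter": "left"}))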

The nice thing about reactive agents is their ability to react very fast. But their reactive nature deprives them of the possibility of longer-term reasoning. The agent is doomed whenever a desired effect can only be brought about by a sequence of actions and one of these actions differs from what would normally be executed in the corresponding situation.

Triggering Agents

Triggering agents introduce internal states. Past information can thus be utilized by the rules, and sequences of actions can be executed to attain longer-term goals. A possible rule might look like this:

IF (distribution_mode) AND (leading_helicopter == left) THEN
    turn_right
    trigger_acceleration_mode
ENDIF

Popular Alife agent systems like CyberLife's Creatures [@], P.F. Magic's Virtual Petz [@] and Brooks' subsumption architecture [@] are examples of this category. Indeed, nearly all of today's computer games apply this approach, using finite state machines to implement it.
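As a rough illustration of how such a state machine can be realized, here is a minimal Python sketch; the modes, percept fields, and actions are invented and merely mirror the rule above:

class TriggeringAgent:
    # Sketch of a triggering agent: a finite state machine whose rules
    # consult both the percept and an internal mode carried between cycles.
    def __init__(self):
        self.mode = "distribution_mode"

    def act(self, percept):
        if self.mode == "distribution_mode" and percept["leading_helicopter"] == "left":
            self.mode = "acceleration_mode"   # trigger the next mode
            return "turn_right"
        if self.mode == "acceleration_mode":
            return "accelerate"
        return "fly_straight"

agent = TriggeringAgent()
print(agent.act({"leading_helicopter": "left"}))  # turn_right; mode switches
print(agent.act({"leading_helicopter": "left"}))  # accelerate

The internal mode is what distinguishes this agent from a purely reactive one: the same percept can yield different actions depending on the state the agent is in.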

These agents can react as fast as reactive agents and also have the ability to attain longer-term goals. But they are still based on hard-wired rules and cannot react appropriately to situations that were not foreseen by the programmers or have not been previously learned by the agents (e.g., by neural networks).

Deliberative Agents

Deliberative agents constitute a fundamentally different approach. The goals and a world model containing information about the application requirements and consequences of actions are represented explicitly. An internal refinement-based planning system (see section on [Planning]) uses the world model's information to build a plan that achieves the agent's goals. Planning systems are often identified with the agents themselves.

Deliberative agents have no problem attaining longer-term goals. Also, the encoding of all the special rules can be dispensed with because the planning system can establish goal-directed action plans on its own. When an agent is called to execute its next action, it applies an internal planning system:

IF (current_plan_is_not_applicable_anymore) THEN
    recompute_plan
ENDIF
execute_plan's_next_action

Even unforeseen situations can be handled in an appropriate manner because general reasoning methods are applied. The problem with deliberative agents is their lack of speed. Every time the situation differs from the one anticipated by the agent's planning process, the plan must be recomputed. Computing plans can be very time-consuming, and meeting real-time requirements in a complex environment is mostly out of the question.
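To illustrate the loop above, here is a minimal Python sketch that pairs it with a toy breadth-first planner over an explicit world model. The domain and all names are invented; real refinement-based planners are far more elaborate:

from collections import deque

def plan(start, goal, successors):
    # Toy breadth-first planner: searches the world model for an action
    # sequence from start to goal. Replanning from scratch like this is
    # exactly what makes deliberative agents slow.
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, steps = frontier.popleft()
        if state == goal:
            return steps
        for action, nxt in successors(state).items():
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, steps + [action]))
    return None

def successors(s):
    # Hypothetical world model: positions 0..4 on a line.
    moves = {"step_back": s - 1, "step_forward": s + 1}
    return {a: n for a, n in moves.items() if 0 <= n <= 4}

def deliberative_step(state, goal, current_plan):
    # IF (current_plan_is_not_applicable_anymore) THEN recompute_plan ENDIF
    if not current_plan or current_plan[0] not in successors(state):
        current_plan = plan(state, goal, successors) or []
    # execute_plan's_next_action
    return (current_plan[0] if current_plan else None), current_plan[1:]

action, rest = deliberative_step(0, 4, [])
print(action, rest)  # step_forward ['step_forward', 'step_forward', 'step_forward']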

Hybrid Agents

Hybrid agents such as the 3T robot architecture [@], the New Millennium Remote Agent [@] or the characters by Funge et al. [@] apply a traditional off-line deliberative planner for higher-level planning and leave decisions about minor refinement alternatives of single plan steps to a reactive component.

IF (current_plan-step_refinement_is_not_applicable_anymore) THEN
    WHILE (no_plan-step_refinement_is_possible) DO
        recompute_high-level_plan
    ENDWHILE
    use_hard-wired_rules_for_plan-step_refinement
ENDIF
execute_plan-step_refinement's_next_action

There is a clear boundary between higher-level planning and hard-wired reaction: the latter is fast, while the former is still computed off-line. For complex and fast-changing environments like computer games, this approach is not appropriate because the off-line planning is still too slow and would, given enough computation time, come up with plans for situations that have already changed.
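The division of labor can be sketched as follows, as a minimal Python illustration in which the rule table and the replanner are invented stand-ins for a real reactive layer and a real off-line planner:

def refine_with_rules(step, state):
    # Fast, hard-wired refinement: one concrete action per abstract plan step.
    table = {"approach": "fly_toward_target", "attack": "fire"}
    return table.get(step)   # None means no refinement is possible

def replan_high_level(state):
    # Stand-in for the slow off-line deliberative planner.
    return ["approach", "attack"]

def hybrid_step(state, high_level_plan):
    action = refine_with_rules(high_level_plan[0], state) if high_level_plan else None
    # WHILE (no_plan-step_refinement_is_possible) DO recompute_high-level_plan
    # (assumes the replanner eventually produces a refinable step)
    while action is None:
        high_level_plan = replan_high_level(state)
        action = refine_with_rules(high_level_plan[0], state)
    # execute_plan-step_refinement's_next_action
    return action, high_level_plan

action, plan_steps = hybrid_step({}, [])
print(action)  # fly_toward_target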

Anytime Agents

What we need is a continuous transition from reaction to planning. No matter how much the agent has already computed, there must always be a plan available. This can be achieved by improving the plan iteratively. When an agent is called to execute its next action, it improves its current plan until its computation time limit is reached and then executes the action:

WHILE (computation_time_available) DO
    improve_current_plan
ENDWHILE
execute_plan's_next_action

For short-term computation horizons, only very primitive plans (reactions) are available, while longer computation times are used to improve and optimize the agent's plan. The more time is available for the agent's computations, the more intelligent the behavior will become. Furthermore, the iterative improvement enables the planning process to easily adapt the plan to changed or unexpected situations. This class of agents is very important for computer-game applications and will constitute the basic technology for EXCALIBUR's agents.
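A minimal Python sketch of this anytime loop, using a trivial local-search step as an invented stand-in for real iterative plan improvement:

import random
import time

ACTIONS = ["turn_left", "turn_right", "accelerate", "wait"]

def improve(plan_steps, evaluate):
    # One iterative-improvement step: mutate the plan and keep the
    # mutation only if it scores better (a stand-in for real plan repair).
    candidate = plan_steps[:]
    candidate[random.randrange(len(candidate))] = random.choice(ACTIONS)
    return candidate if evaluate(candidate) > evaluate(plan_steps) else plan_steps

def anytime_step(current_plan, evaluate, budget_seconds):
    # WHILE (computation_time_available) DO improve_current_plan ENDWHILE
    deadline = time.monotonic() + budget_seconds
    while time.monotonic() < deadline:
        current_plan = improve(current_plan, evaluate)
    # execute_plan's_next_action
    return current_plan[0], current_plan[1:]

evaluate = lambda p: -p.count("wait")   # toy scoring: fewer idle steps is better
action, rest = anytime_step(["wait"] * 5, evaluate, budget_seconds=0.005)
print(action, rest)  # prints the first action of the improved plan

Whenever the action must be executed, a plan is available; the longer the budget, the better that plan becomes.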

 
 
 