The Structure of Intelligent Agents

An agent's structure can be viewed as −

      Agent = Architecture + Agent Program

      Architecture = the machinery that an agent executes on.

      Agent Program = an implementation of an agent function.
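As a rough illustration (not from the original text, and with all names hypothetical), this split can be pictured in Python: the agent program is simply a function from percepts to actions, while the architecture is the machinery that feeds it percepts and carries out the actions it returns.

    def agent_program(percept):
        # Agent program: an implementation of the agent function,
        # mapping each percept to an action (placeholder: do nothing).
        return "NoOp"

    def run_on_architecture(program, percepts):
        # "Architecture": the machinery the program executes on; here it just
        # feeds percepts to the program and collects the resulting actions.
        return [program(p) for p in percepts]

    print(run_on_architecture(agent_program, ["percept-1", "percept-2"]))
    # -> ['NoOp', 'NoOp']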

Simple Reflex Agents

      They choose actions only on the basis of the current percept.

      They are rational only if a correct decision can be made on the basis of the current percept alone.

      They work only if the environment is fully observable.

Condition-Action Rule − It is a rule that maps a state (condition) to an action.
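A minimal sketch in Python, using the classic two-location vacuum world as an assumed example (the rule table, locations A and B, and function names are illustrative): condition-action rules map the current percept directly to an action, with no memory of earlier percepts.

    CONDITION_ACTION_RULES = {
        ("A", "Dirty"): "Suck",
        ("B", "Dirty"): "Suck",
        ("A", "Clean"): "Right",
        ("B", "Clean"): "Left",
    }

    def simple_reflex_agent(percept):
        # Look up the current percept (the condition) and return the matching action.
        return CONDITION_ACTION_RULES[percept]

    print(simple_reflex_agent(("A", "Dirty")))   # -> Suck
    print(simple_reflex_agent(("B", "Clean")))   # -> Left

Because the rules consult only the current percept, this agent cannot remember whether the other location has already been cleaned; that limitation is what the next agent type addresses.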


Model-Based Reflex Agents

They use a model of the world to choose their actions. They maintain an internal state (see the sketch after the list below).

Model − Knowledge about how things happen in the world.

Internal State − It is a representation of the unobserved aspects of the current state, based on the percept history.

Updating the state requires information about −

      How the world evolves.

      How the agent’s actions affect the world.
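Continuing the same assumed vacuum world, a minimal sketch of a model-based reflex agent: the internal state records the last known status of each location, and a simple model (seeing a square reveals its status; sucking leaves it clean) keeps that state up to date.

    def make_model_based_reflex_agent():
        # Internal state: last known status of each location, which is
        # unobserved while the agent is elsewhere (built from percept history).
        state = {"A": "Unknown", "B": "Unknown"}

        def agent(percept):
            location, status = percept
            state[location] = status          # how the world evolves: record what we see
            if status == "Dirty":
                state[location] = "Clean"     # how our action affects the world: Suck cleans it
                return "Suck"
            if state["A"] == "Clean" and state["B"] == "Clean":
                return "NoOp"                 # memory lets the agent stop; a simple reflex agent cannot
            return "Right" if location == "A" else "Left"

        return agent

    agent = make_model_based_reflex_agent()
    print(agent(("A", "Dirty")))   # -> Suck
    print(agent(("B", "Clean")))   # -> NoOp (the state remembers that A is already clean)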

Goal-Based Agents

They choose their actions in order to achieve goals. The goal-based approach is more flexible than a reflex agent, since the knowledge that supports a decision is modeled explicitly and can therefore be modified.

Goal − It is the description of desirable situations.
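A minimal sketch, assuming a one-dimensional corridor where the percept is the agent's current position and the goal is a target position (both made up for illustration): because the goal is represented explicitly, changing it changes the behaviour without rewriting any rules.

    def goal_based_agent(percept, goal):
        # The percept is the current position; the goal is an explicit target position.
        position = percept
        if position == goal:
            return "Stop"
        return "Right" if position < goal else "Left"

    print(goal_based_agent(2, goal=5))   # -> Right
    print(goal_based_agent(7, goal=5))   # -> Left
    print(goal_based_agent(5, goal=5))   # -> Stop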

Utility-Based Agents

They choose actions based on a preference (utility) for each state (see the sketch after this list). Goals are inadequate when −

      There are conflicting goals, out of which only a few can be achieved.

      Goals have some uncertainty of being achieved, and you need to weigh the likelihood of success against the importance of each goal.
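A minimal sketch, assuming each action has a set of possible outcome states with known probabilities and a numeric utility for each state (the action names, outcomes, and numbers below are made up for illustration): the agent picks the action with the highest expected utility, which lets it trade off conflicting goals and uncertain success.

    def expected_utility(outcomes, utility):
        # outcomes: list of (probability, resulting_state); utility: state -> number.
        return sum(p * utility(state) for p, state in outcomes)

    def utility_based_agent(actions, utility):
        # Choose the action whose probability-weighted utility is highest.
        return max(actions, key=lambda a: expected_utility(actions[a], utility))

    # Conflicting goals ("fast" vs "safe") are traded off by a single utility number.
    utility = {"fast_and_safe": 10, "fast_but_risky": 2, "slow_but_safe": 6}.get

    actions = {
        "take_highway": [(0.7, "fast_and_safe"), (0.3, "fast_but_risky")],
        "take_back_roads": [(1.0, "slow_but_safe")],
    }

    print(utility_based_agent(actions, utility))   # -> take_highway (expected utility 7.6 vs 6.0)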