Lecture 7: Adversarial Search, Knowledge-Based Agents, and Introduction to Logic

Author

Your Name

Published

February 10, 2025

Introduction

Course Announcements: Examination Dates and Format

This lecture will address remaining topics from the previous session and provide essential course information. The examination dates for the summer session, encompassing June, July, and September, have been set as July 5th, July 22nd, and September 26th. These dates are positioned later in the session due to scheduling constraints.

The intended examination format consists of a written exam in the morning at 9:30 AM, followed by an oral exam in the afternoon. The feasibility of conducting both on the same day is contingent upon the number of students registered for each exam session. Higher student turnout, especially in the initial exam sessions, may necessitate scheduling the oral examinations on subsequent days. Contingency plans for oral examinations will be communicated during the written exam session if needed. These dates are provided to facilitate your study planning.

Review of Lecture 6: Adversarial Search and Game Playing

Continuing from the previous lecture, we will revisit the concept of search within adversarial environments, focusing on game playing as a primary example. While games are the central paradigm, the principles extend to other competitive scenarios, such as autonomous driving where vehicles might compete to reach a destination, or in scenarios involving strategic resource allocation. Historically, the study of adversarial search originated in the context of game theory, with significant early advancements like Samuel’s checkers program developed in the 1950s.

Adversarial search differs fundamentally from standard search algorithms due to the presence of an opponent actively working against the agent’s objectives. In this lecture, we will concentrate on two key algorithms for adversarial search: the Minimax algorithm and alpha-beta pruning. We will also examine how Samuel’s checkers program implemented these concepts, offering a historical perspective on their practical application and early success. These methods are crucial for understanding how to develop rational agents in competitive environments.
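To make the review concrete, here is a minimal sketch of minimax with alpha-beta pruning in Python. It operates over an abstract game tree: the `children` and `evaluate` callbacks are hypothetical stand-ins for a real game's move generator and evaluation function, not part of any specific program discussed in the lecture.

```python
def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    """Minimax with alpha-beta pruning over an abstract game tree.

    alpha: best value MAX can guarantee so far; beta: best for MIN.
    Branches that cannot affect the final decision are cut off.
    """
    kids = children(state)
    if depth == 0 or not kids:
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:  # beta cutoff: MIN would never allow this branch
                break
        return value
    else:
        value = float("inf")
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, children, evaluate))
            beta = min(beta, value)
            if beta <= alpha:  # alpha cutoff: MAX already has a better option
                break
        return value

# Toy game tree: inner lists are MIN nodes, integers are leaf utilities.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
kids = lambda s: s if isinstance(s, list) else []
leaf = lambda s: s
print(alphabeta(tree, 4, float("-inf"), float("inf"), True, kids, leaf))  # -> 3
```

On this toy tree the second MIN node is cut off after its first leaf (value 2 is already worse for MAX than the guaranteed 3), which is exactly the saving alpha-beta provides over plain minimax.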

Knowledge-Based Agents and Logical Reasoning

Introduction to Knowledge-Based Agents: Mimicking Human Cognition

We now shift our focus to knowledge-based agents, a paradigm in AI that aims to create intelligent systems by explicitly representing and reasoning with knowledge. The fundamental idea is to emulate human cognitive processes, where knowledge and reasoning play a central role in decision-making and problem-solving. Humans possess a vast amount of knowledge about the world, and they use this knowledge to reason, infer, and ultimately choose actions. For example, a human knows that "if it is raining, then taking an umbrella is advisable." This piece of knowledge directly influences the action of whether or not to take an umbrella when going outside. Knowledge-based agents strive to replicate this capability by encoding knowledge in a structured format and employing reasoning mechanisms to derive appropriate actions. This approach contrasts with purely reactive or model-free agents, which rely on direct mappings from percepts to actions or learned associations without explicit knowledge representation.

Explicit and Declarative Knowledge Representation

A defining characteristic of knowledge-based agents is the explicit representation of knowledge. This means that the agent’s knowledge is not implicitly embedded within its code or learned parameters, but rather is stored in a form that is directly accessible and interpretable. During the 1970s and 1980s, a significant area of AI research was dedicated to exploring effective methods for knowledge representation. Various approaches were considered, including graphical notations, semantic networks, and frames. However, logic emerged as the dominant and most influential tool for explicit knowledge representation in artificial intelligence.

The adoption of logic as a representation language was driven by its formal nature, well-defined semantics, and the availability of inference mechanisms. Logic provides a precise and unambiguous way to express facts, rules, and relationships. This paradigm shift towards logic-based knowledge representation paved the way for the development of knowledge-based systems and, more specifically, expert systems. These systems were designed to capture and utilize domain-specific knowledge to solve complex problems in areas such as medical diagnosis, financial analysis, and engineering design.

Logic as the Lingua Franca for Knowledge Representation

Logic serves as a formal, unambiguous, and well-understood language for representing knowledge in AI systems. Its adoption was a pivotal moment in the field, providing a robust foundation for building intelligent agents.

Logic, in its various forms, offers a structured and mathematically grounded approach to knowledge representation. Within the realm of logic, several types exist, each with varying expressive power and suitability for different types of knowledge and reasoning tasks. These include:

  • Propositional Logic: The most basic form, dealing with simple statements (propositions) that are either true or false, and logical connectives like AND, OR, and NOT.

  • First-Order Logic (Predicate Logic): A more expressive logic that introduces predicates, objects, and quantifiers, allowing for representation of more complex relationships and generalizations.

  • Modal Logic: Extends propositional or first-order logic to include modalities such as necessity and possibility, useful for reasoning about beliefs, knowledge, and time.

  • Higher-Order Logic: Even more expressive, allowing quantification over predicates and functions, but also increasing complexity.

In our exploration of knowledge-based agents, we will begin with propositional logic due to its simplicity and its ability to illustrate fundamental concepts. We will then progress to first-order logic, which is essential for representing more complex and nuanced knowledge required for many AI applications. Understanding the distinctions and capabilities of these logical systems is crucial for designing intelligent agents that can effectively reason about the world.

Knowledge Base: The Repository of Agent’s Knowledge

At the heart of a knowledge-based agent lies the knowledge base (KB). The knowledge base is the central component that stores the agent’s accumulated knowledge about the world. It acts as a repository of facts, beliefs, and rules, all formally encoded in a chosen logical language. Typically, a KB is structured as a collection of sentences or formulas in a logical language. Each sentence represents a piece of knowledge. For instance, in propositional logic, a sentence might be "It is raining," represented symbolically as, say, \(R\). In first-order logic, a sentence could be "All humans are mortal," represented as \(\forall x (\text{Human}(x) \implies \text{Mortal}(x))\).

The knowledge base is not merely a passive data store; it is an active component that the agent uses to reason and make decisions. The effectiveness of a knowledge-based agent is heavily dependent on the quality, completeness, and organization of its knowledge base. Building and maintaining a knowledge base is a significant challenge in developing knowledge-based systems, often referred to as knowledge engineering.

Core Operations: Tell and Ask - Interacting with the Knowledge Base

Knowledge-based agents interact with their knowledge base through two fundamental operations: Tell and Ask. These operations define how the agent updates and queries its knowledge.

  • Tell (Knowledge Ingestion): The Tell operation is used to add new knowledge to the KB. This is how the agent learns about the world, either through perception or by being programmed with initial knowledge. When the agent perceives something new or is given new information, this information is translated into a sentence in the logical language and then told to the KB. For example, if a sensor detects rain, the agent uses Tell to add the sentence representing "It is raining" to its KB. Tell operations expand the KB, enriching the agent’s representation of the world.

  • Ask (Knowledge Retrieval and Inference): The Ask operation is used to query the KB. The agent uses Ask to retrieve existing knowledge or, more importantly, to pose queries that require logical inference. When an agent needs to decide on an action or answer a question, it formulates a query in the logical language and asks the KB. The Ask operation is not simply a database lookup; it involves logical reasoning. The inference engine within the KB processes the query, using the existing sentences in the KB to derive logical consequences and determine if the query is entailed by the KB. For instance, an agent might Ask "Should I take an umbrella?". The inference engine would then use the knowledge in the KB (e.g., "If it is raining, then take an umbrella") and the current facts (e.g., "It is raining") to infer the answer.

Remark 1 (Tell and Ask Operations). Tell and Ask are the two fundamental interfaces for a knowledge-based agent to interact with its knowledge base. Tell adds knowledge, while Ask queries and reasons with the knowledge.
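To make the interface concrete, the following is a minimal, hypothetical sketch of a knowledge base exposing Tell and Ask. Rules are restricted to single-premise implications, and Ask answers by forward-chaining with Modus Ponens rather than by plain lookup, matching the point above that Ask involves inference.

```python
class KnowledgeBase:
    """Toy KB: atomic facts plus single-premise rules (premise -> conclusion)."""

    def __init__(self):
        self.facts = set()
        self.rules = []  # list of (premise, conclusion) pairs

    def tell(self, sentence):
        """Add a fact (string) or an implication rule (2-tuple) to the KB."""
        if isinstance(sentence, tuple):
            self.rules.append(sentence)
        else:
            self.facts.add(sentence)

    def ask(self, query):
        """Answer by inference, not lookup: apply Modus Ponens to a fixed point."""
        derived = set(self.facts)
        changed = True
        while changed:
            changed = False
            for premise, conclusion in self.rules:
                if premise in derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return query in derived

kb = KnowledgeBase()
kb.tell(("Raining", "TakeUmbrella"))  # "If it is raining, take an umbrella"
kb.tell("Raining")                    # percept: it is raining
print(kb.ask("TakeUmbrella"))         # -> True
```

Note that "TakeUmbrella" was never told to the KB directly; Ask derived it, which is the essential difference from a database query.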

Inference Engine: The Reasoning Mechanism

A critical component of a knowledge-based agent, beyond the knowledge base itself, is the inference engine. The inference engine is the reasoning mechanism that enables the agent to infer new knowledge from the explicitly represented facts and rules in the KB. It is the engine that powers the Ask operation, allowing the agent to go beyond simply retrieving stored facts and to deduce logical consequences.

The inference engine applies inference rules to the sentences in the KB to derive new sentences that are logically entailed by the existing knowledge. For example, given the sentences "Socrates is a man" and "All men are mortal," an inference engine using a rule like Modus Ponens can infer the new sentence "Socrates is mortal." This ability to derive implicit knowledge is what gives knowledge-based agents their reasoning power.

Historically, a major goal in the development of knowledge-based systems was to create a general-purpose inference engine. The vision was to build a single, domain-independent reasoning mechanism that could be coupled with different knowledge bases to create expert systems for various domains. The idea was that by simply changing the knowledge base—populating it with facts and rules relevant to a specific domain (e.g., medicine, finance, law)—the same inference engine could be used to perform expert-level reasoning in that domain. While this ambitious goal of a completely general-purpose inference engine faced challenges in practice, the concept of separating the inference mechanism from the domain-specific knowledge remains a fundamental principle in knowledge-based systems design.
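The separation of inference mechanism from domain knowledge can be sketched as a single forward-chaining function applied unchanged to two unrelated knowledge bases. The rule format here (a set of premises and one conclusion) is an illustrative simplification, not a real expert-system shell.

```python
def forward_chain(facts, rules):
    """Domain-independent engine: rules are (premises, conclusion) pairs,
    where every atom in premises must hold for the conclusion to fire."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)
                changed = True
    return derived

# The same engine serves two unrelated domains:
logic_facts = forward_chain(
    {"Man(Socrates)"},
    [({"Man(Socrates)"}, "Mortal(Socrates)")])
medical_facts = forward_chain(
    {"Fever", "Cough"},
    [({"Fever", "Cough"}, "SuspectFlu"), ({"SuspectFlu"}, "RecommendRest")])
print("Mortal(Socrates)" in logic_facts, "RecommendRest" in medical_facts)
```

Swapping the knowledge base changes the domain of expertise; the reasoning code never changes. This is the design principle behind expert-system shells, even though fully general-purpose engines proved elusive in practice.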

Knowledge Level vs. Implementation Level: Declarative Programming

When designing and analyzing knowledge-based agents, it is crucial to distinguish between the knowledge level and the implementation level. Describing an agent at the knowledge level means focusing on what the agent knows and what reasoning it is capable of, without being concerned with the specific how of implementation. At the knowledge level, we specify the agent’s competence by describing the knowledge it possesses and the inferences it can draw, irrespective of the data structures or algorithms used to realize these capabilities.

This knowledge-level perspective leads to a declarative approach to agent programming. In declarative programming, we focus on describing what the agent should know and what goals it should achieve, rather than explicitly programming how to achieve them step-by-step. We provide the agent with a knowledge base and an inference engine, and the agent uses these components to figure out the necessary steps to achieve its goals. This is in stark contrast to procedural programming, where the programmer explicitly specifies the sequence of operations or algorithm that the agent must follow.

Declarative Programming (Knowledge Level): Focuses on what the agent knows and should achieve. Knowledge and goals are specified, and the agent reasons to find a solution.

Procedural Programming (Implementation Level): Focuses on how to achieve a goal. Explicit algorithms and step-by-step instructions are programmed.

Knowledge-based agents, through their declarative nature, offer a powerful abstraction. By programming at the knowledge level, we can design agents by focusing on the knowledge they need and the reasoning they should perform, leaving the inference engine to handle the procedural details of deriving solutions. This approach can simplify the development of complex intelligent systems and make them more understandable and maintainable.

The Wumpus World: A Practical Example for Knowledge-Based Agents

Introducing the Wumpus World: A Perceptual Reasoning Challenge

The Wumpus World is a widely recognized and utilized example in Artificial Intelligence, specifically designed to illustrate the principles of knowledge representation, reasoning, and agent design in a perceptually limited environment. It presents a scenario where an agent must navigate a dangerous cave to find gold, relying on limited sensory information to avoid deadly pitfalls and a fearsome monster called the Wumpus. The game is played on a discrete 4x4 grid, representing the cave, and is inherently single-player, focusing on the intelligent decision-making of the agent. The agent invariably begins its adventure in the bottom-left corner of this grid, at location [1,1].

Navigating the Perilous Cave: Environment and Agent’s Challenge

The Wumpus world is populated with several key elements that define the agent’s challenges and the nature of the game:

  • Bottomless Pits (Pits): Scattered randomly throughout the cave are hidden pits. Should the agent stumble into a square containing a pit, the outcome is fatal: the agent falls and the game ends. Crucially, the presence of pits is not directly perceivable. Instead, squares immediately adjacent (horizontally or vertically) to a pit emit a subtle breeze. This breeze is the agent’s only warning of nearby pits, requiring it to use inference to deduce potential pit locations.

  • Gleaming Gold (Gold): The objective of the game is to locate and retrieve a piece of gold hidden in one of the squares within the cave. Finding the gold is rewarding, but merely finding it is not enough to win. The agent must grab the gold, then navigate back to the starting square [1,1], and finally climb out of the cave to successfully complete the mission. The presence of gold is indicated by a perceptible glitter in the square where it is located.

  • The Wumpus: A Deadly Monster: The Wumpus is a fearsome monster lurking in a single, randomly chosen square within the cave. Like pits, the Wumpus’s location is unknown to the agent at the start. Encountering the Wumpus by entering its square results in immediate death for the agent, as it is devoured. To provide a clue to its location, the Wumpus emits a foul stench in all squares immediately adjacent to its location. This stench serves as a warning signal, allowing the agent to infer the potential presence of the Wumpus nearby and avoid fatal encounters.

  • Cave Walls (Walls): The 4x4 grid representing the Wumpus World is enclosed by impenetrable walls. These walls define the boundaries of the cave. If the agent attempts to move beyond the grid, into a wall, it will bump into it, perceiving a bump sensor reading, and its position will remain unchanged. These walls constrain the agent’s movement and define the limits of the explorable world.

The agent is also equipped with a single arrow, which can be used to attempt to eliminate the Wumpus. If the agent shoots the arrow in a straight line in its current facing direction, and the arrow’s path intersects the Wumpus’s location, the Wumpus is killed. Upon successfully killing the Wumpus, a distinctive scream is emitted, which the agent can perceive, confirming the monster’s demise. However, the agent has only one arrow, making its use a strategic decision.

PEAS Description of the Wumpus World Agent

To formally define the Wumpus World problem and the agent operating within it, we can use the PEAS (Performance measure, Environment, Actuators, Sensors) framework. This framework helps to systematically analyze and design intelligent agents.

Performance Measure and Agent’s Goals

The agent’s performance in the Wumpus World is evaluated based on a scoring system designed to incentivize goal achievement and penalize undesirable outcomes:

  • Positive Reward:

    • Climbing out with Gold: +1000 points are awarded if the agent successfully climbs out of the cave from square [1,1] while carrying the gold. This is the primary objective and yields the highest reward.
  • Negative Penalties:

    • Death: -1000 points are deducted if the agent dies, either by falling into a pit or being eaten by the Wumpus. This significant penalty emphasizes the importance of survival.

    • Action Cost: -1 point is subtracted for each action performed by the agent, including moving forward, turning (left or right), grabbing the gold, and climbing. This encourages efficiency and finding solutions with the minimum number of steps.

    • Arrow Shot Cost: -10 points are deducted each time the agent shoots its arrow. This substantial penalty for using the arrow encourages careful and strategic use of this limited resource.

The game’s duration is finite, ending when the agent either dies (falls into a pit or is eaten by the Wumpus) or successfully climbs out of the cave. The agent’s ultimate goal is to maximize its accumulated score, which necessitates finding the gold, escaping the cave alive, and doing so efficiently, while strategically considering the use of its single arrow.
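The scoring rules above can be collected into a short helper. This is a hypothetical sketch: the action names and boolean flags are illustrative, and it assumes (as the list above suggests) that Shoot costs 10 points while every other action costs 1.

```python
def wumpus_score(actions, died, escaped_with_gold):
    """Total score under the lecture's reward scheme.

    actions: sequence of action names performed during the game.
    Shoot costs 10 points; every other action costs 1 (per the rules above).
    """
    score = -sum(10 if a == "Shoot" else 1 for a in actions)
    if died:
        score -= 1000          # fell into a pit or was eaten
    if escaped_with_gold:
        score += 1000          # climbed out of [1,1] carrying the gold
    return score

# A short winning run: 4 one-point actions, then +1000 for escaping with gold.
print(wumpus_score(["Forward", "Grab", "Forward", "Climb"], False, True))  # -> 996
```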

Environment Characteristics: The Cave Layout

The Wumpus World environment is characterized by:

  • Grid-Based Cave: The cave is represented as a 4x4 grid of locations or rooms, surrounded by walls that form the outer boundary.

  • Starting Position: The agent always begins its exploration in square [1,1], located at the bottom-left of the grid, and is initially oriented to face East (to the right).

  • Random Placement of Gold and Wumpus: The gold and the Wumpus are each located in one square within the cave, with their positions chosen randomly at the start of each game. Importantly, neither the gold nor the Wumpus can be placed in the starting square [1,1], ensuring the agent always begins in a safe location. The distribution of their placement is uniform across the possible squares (excluding [1,1]).

  • Probabilistic Pit Distribution: Pits are also distributed randomly throughout the cave. Each square, except for the starting square [1,1], has a 20% probability of containing a pit. This probabilistic placement means that the number of pits can vary from game to game, and some games might have no pits at all, while others could be densely riddled with them.

Agent Actuators: Actions Available to the Agent

The agent can interact with the Wumpus World environment through a defined set of actions, or actuators:

  • Move Forward (Forward): This action attempts to move the agent one square in the direction it is currently facing. If the movement would cause the agent to collide with a wall (i.e., move outside the 4x4 grid), the action results in a bump perception, and the agent’s position remains unchanged.

  • Turn Left (Turn Left): This action rotates the agent 90 degrees counter-clockwise, changing its facing direction (e.g., from East to North). The agent remains in the same square.

  • Turn Right (Turn Right): This action rotates the agent 90 degrees clockwise, changing its facing direction (e.g., from East to South). The agent remains in the same square.

  • Grab (Grab): The agent can use this action to pick up the gold, but only if it is currently in the same square as the gold. If gold is present in the current square, executing Grab will result in the agent carrying the gold.

  • Shoot Arrow (Shoot): This action fires the agent’s single arrow in a straight line in the direction the agent is currently facing. The arrow travels until it hits a wall or the Wumpus. If the arrow hits the Wumpus, the Wumpus is killed, and a scream is perceived. The agent only has one arrow for the entire game, making this a one-time use action. Subsequent Shoot actions have no effect.

  • Climb (Climb): This action allows the agent to climb out of the cave, but only if it is in the starting square [1,1]. To win the maximum score, the agent must be in square [1,1], possess the gold, and then execute the Climb action.

Agent Sensors: Perceptions of the Wumpus World

The agent’s knowledge about the Wumpus World is derived from its limited set of sensors, which provide local perceptions of its immediate surroundings:

  • Stench Sensor (Stench): If there is a Wumpus in any of the squares immediately adjacent (North, South, East, or West) to the agent’s current square, the agent perceives a stench in its current location. The stench does not indicate the Wumpus’s exact location, only its proximity.

  • Breeze Sensor (Breeze): If any adjacent square contains a pit, the agent perceives a breeze in its current square. Similar to the stench, the breeze is a proximal indicator, not a precise location of the pit.

  • Glitter Sensor (Glitter): If the gold is located in the agent’s current square, the agent perceives a glitter. This is a direct indication of the gold’s presence in the current location.

  • Bump Sensor (Bump): If the agent attempts to move forward into a wall, the bump sensor is activated, providing feedback that the attempted move was blocked by a wall.

  • Scream Sensor (Scream): If the agent successfully shoots and kills the Wumpus, the scream sensor is activated in the turn immediately following the Shoot action. This confirms to the agent that the Wumpus has been eliminated.

These sensors provide the agent with crucial, albeit indirect and local, information about the dangers and rewards within the Wumpus World. The agent must intelligently process these perceptions to navigate safely, deduce the locations of the Wumpus and pits, and ultimately find and retrieve the gold.
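As a first taste of this reasoning, the sketch below (a hypothetical helper, not the full logical agent developed in later lectures) shows how percepts in visited squares let the agent mark neighbouring squares as provably safe: a square adjacent only to visited squares with neither breeze nor stench can contain no pit and no Wumpus.

```python
def adjacent(square):
    """Orthogonally adjacent squares inside the 4x4 grid."""
    x, y = square
    return [(nx, ny)
            for nx, ny in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
            if 1 <= nx <= 4 and 1 <= ny <= 4]

def provably_safe(percepts):
    """percepts maps each visited square to a (breeze, stench) pair.

    Every visited square is safe (the agent survived it), and every
    neighbour of a square with neither breeze nor stench is also safe.
    """
    safe = set(percepts)
    for square, (breeze, stench) in percepts.items():
        if not breeze and not stench:
            safe.update(adjacent(square))
    return safe

# Standing in [1,1] with no breeze and no stench proves [1,2] and [2,1] safe.
print(sorted(provably_safe({(1, 1): (False, False)})))
```

Note what the agent cannot conclude: square [2,2] remains unknown, because no percept yet constrains it. This gap between what is perceived and what can be inferred is exactly what the logical machinery of the coming lectures addresses.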

Introduction to Propositional Logic

The Fundamental Role of Logic in Artificial Intelligence Systems

Logic is indispensable to Artificial Intelligence. It provides the bedrock for formalizing knowledge and reasoning processes, enabling AI systems to move beyond mere data processing to genuine understanding and inference.

Logic, in its essence, offers a structured and unambiguous system for representing information and deriving conclusions. Its importance in AI stems from its ability to provide:

  • Formal Knowledge Representation: Logic allows us to encode knowledge in a precise and unambiguous format, eliminating the vagueness and ambiguity inherent in natural language. This formal representation is crucial for AI systems to process and manipulate knowledge effectively.

  • Sound Reasoning and Inference: Logic provides a set of inference rules that guarantee the validity of deductions. Using logical inference, AI agents can derive new, sound conclusions from existing knowledge. This deductive capability is fundamental to intelligent behavior, allowing agents to answer questions, solve problems, and make predictions based on their knowledge.

  • Rational Decision Making: By representing goals, constraints, and the state of the world in logic, AI agents can use logical reasoning to determine optimal or rational actions. Logic enables agents to evaluate different courses of action, predict their outcomes, and choose actions that are logically consistent with their knowledge and objectives.

  • Basis for AI Paradigms: Logic underpins many important areas within AI, including knowledge representation, automated reasoning, planning, and problem-solving. It serves as a foundational tool for building intelligent systems that can reason, learn, and interact with the world in a coherent and rational manner.

In essence, logic empowers AI systems with the ability to think, in a formal and computational sense, mirroring aspects of human reasoning and providing a pathway towards creating truly intelligent machines.

Previewing Propositional Logic: The Logic of Simple Statements

Our exploration of logic in AI will commence with propositional logic, which is the foundational and simplest form of logic. Despite its simplicity, propositional logic provides a powerful starting point for understanding the core principles of logical representation and inference.

Propositional Logic is the logic of simple statements. It deals with propositions – declarative statements that can be definitively classified as either true or false – and how these propositions can be combined and manipulated using logical connectives.

In propositional logic, we focus on:

  • Propositions: These are basic declarative sentences that are either true or false. Examples include "The sun is shining," "It is raining," or "2 + 2 = 4." In propositional logic, we treat these statements as atomic units, without analyzing their internal structure.

  • Logical Connectives: These are operators that combine propositions to form more complex sentences. The primary logical connectives in propositional logic are:

    • AND (\(\land\)): Combines two propositions, true only if both are true.

    • OR (\(\lor\)): Combines two propositions, true if at least one is true.

    • NOT (\(\neg\)): Negates a proposition, true if the proposition is false, and vice versa.

    • IMPLICATION (\(\implies\)): Represents "if...then..." relationships. \(P \implies Q\) is false only if \(P\) is true and \(Q\) is false.

    • BICONDITIONAL (\(\iff\)): Represents "if and only if" relationships. \(P \iff Q\) is true if \(P\) and \(Q\) have the same truth value.
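These connectives are easy to verify mechanically. The sketch below encodes them as Python functions and prints a truth table; only IMPLICATION and BICONDITIONAL need defining, since `and`, `or`, and `not` are built into the language.

```python
from itertools import product

def implies(p, q):
    """P => Q: false only when P is true and Q is false."""
    return (not p) or q

def iff(p, q):
    """P <=> Q: true exactly when P and Q have the same truth value."""
    return p == q

# Enumerate all four assignments to (P, Q) and show each connective.
print("P      Q      AND    OR     =>     <=>")
for p, q in product([True, False], repeat=2):
    print(p, q, p and q, p or q, implies(p, q), iff(p, q))
```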

In the subsequent sections and lectures, we will delve into the details of propositional logic. We will learn:

  • Syntax and Semantics: How to formally construct sentences in propositional logic and how to determine their truth values.

  • Knowledge Representation: How to use propositional logic to represent knowledge about the world, including facts, rules, and constraints.

  • Logical Inference: Methods for performing logical inference in propositional logic, allowing us to derive new knowledge from existing sentences. This will include exploring concepts like entailment and inference rules, which are crucial for building reasoning agents.

By mastering propositional logic, we will establish a solid foundation for understanding more advanced logical systems and their applications in building intelligent AI agents.
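As a preview of the entailment checking to come, the following sketch decides whether a knowledge base entails a query by enumerating every truth assignment (the truth-table method). Representing sentences as Python predicates over a model is an illustrative simplification of the formal syntax we will define.

```python
from itertools import product

def entails(kb, query, symbols):
    """Return True iff every model that satisfies kb also satisfies query."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not query(model):
            return False  # counterexample: kb true but query false
    return True

# KB: Raining AND (Raining => Umbrella). Does it entail Umbrella?
kb = lambda m: m["Raining"] and ((not m["Raining"]) or m["Umbrella"])
print(entails(kb, lambda m: m["Umbrella"], ["Raining", "Umbrella"]))  # -> True
```

This is sound and complete for propositional logic but enumerates \(2^n\) models for \(n\) symbols, which is why later lectures also cover inference rules that avoid exhaustive enumeration.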

Conclusion

Summary: Bridging Adversarial Search and Knowledge-Based Systems

In today’s lecture, we have traversed two significant areas within Artificial Intelligence. We began by revisiting adversarial search, focusing on the challenges of game playing and competitive environments. We explored the Minimax algorithm as a foundational approach for optimal decision-making in zero-sum games and examined alpha-beta pruning as a crucial optimization technique to enhance its efficiency. We also discussed the necessity of heuristic evaluation functions for tackling complex games where exhaustive search is infeasible, referencing Samuel’s checkers program as a historical example of these principles in action.

Subsequently, we transitioned to the realm of knowledge-based agents. We introduced the concept of agents that explicitly represent and reason with knowledge, drawing a contrast with purely reactive approaches. We emphasized the importance of logic as a formal language for knowledge representation and discussed the core components of knowledge-based agents: the knowledge base and the inference engine. We highlighted the Tell and Ask operations as the primary modes of interaction with a knowledge base and differentiated between the knowledge level and the implementation level of agent design, underscoring the declarative nature of knowledge-based programming. To ground these abstract concepts, we introduced the Wumpus World as a concrete and illustrative example environment that will serve as a running case study for applying knowledge representation and reasoning techniques in the lectures to come.

Looking Ahead: Propositional Logic and Reasoning in the Wumpus World

Building upon the introduction to knowledge-based agents, our next lectures will embark on a deeper exploration of propositional logic. We will systematically examine its syntax, defining how to construct well-formed logical sentences, and its semantics, establishing how to determine the truth value of these sentences in different possible worlds or interpretations. A key focus will be on inference rules and methods for performing logical deduction, enabling us to derive new knowledge from existing information.

Crucially, we will apply propositional logic to the Wumpus World. We will learn how to formally represent the agent’s knowledge about the Wumpus World environment, including the rules of the game, the agent’s perceptions (stench, breeze, glitter, bump, scream), and the consequences of its actions. Our goal is to equip our agent with the ability to reason logically about its percepts and the environment to make informed and safe decisions. We will investigate how to represent the dynamic aspects of the Wumpus World and the agent’s evolving state within a logical framework, ultimately aiming to design a knowledge-based agent capable of intelligently navigating the Wumpus World, finding the gold, and escaping unscathed. This practical application will solidify our understanding of propositional logic and its role in creating intelligent systems.