Computing Logical Consequences in Logic Programming

Author

Your Name

Published

February 5, 2025

Introduction

This lecture delves into the computation of logical consequences within logic programming. We will examine both top-down methods, exemplified by Prolog’s SLD resolution, and bottom-up strategies. A key focus will be on the challenges presented by infinite Herbrand models and the shift towards finite-world scenarios, which are particularly relevant for knowledge representation and reasoning, especially in the context of answer set programming. We will introduce the concept of grounding, which transforms a program into a set of ground rules, and the immediate consequence operator, denoted as \(T_P\), which serves as a cornerstone for a bottom-up approach to computing logical consequences. Furthermore, we will explore the properties of the \(T_P\) operator, including monotonicity and continuity, and discuss how these properties lead to the concept of fixed points, which are crucial for defining the semantics of logic programs.

Preliminaries

In our ongoing exploration of logic programming, we continue to address the problem of computing logical consequences. Previously, we examined the top-down approach, as implemented in Prolog, which employs SLD resolution. This method is complete: if a formula is a logical consequence of the program, the procedure will find a proof for it. However, the problem is only semi-decidable (the set of logical consequences is recursively enumerable), which means that while we can find proofs for true statements, we cannot in general establish that a statement is *not* a logical consequence. This limitation motivates our interest in computing a model directly.

The reason we are focusing on models is that we are transitioning towards answer set programming, a variant of logic programming that will be used for knowledge modeling and reasoning.

Undecidability and Infinite Herbrand Models

The undecidability of determining logical consequences is closely tied to the presence of infinite Herbrand models. When the Herbrand base is finite, the problem becomes decidable. Our current goal is to develop an alternative, bottom-up perspective on computing logical consequences via fixed points, which will eventually lead to theoretical results and algorithms applicable to finite-world scenarios.

Grounding

To facilitate a bottom-up computation of logical consequences, the initial step involves transforming a logic program into a set of ground rules. This process is known as grounding.

Let \(P\) be a logic program. The grounding of \(P\), denoted as \(\text{ground}(P)\), is the set of all ground instances of the rules in \(P\). These instances are obtained by systematically substituting the variables in the rules with terms from the Herbrand universe \(\mathcal{U}_P\).

Consider a program \(P\) with a Herbrand universe \(\mathcal{U}_P = \{A, B\}\) and the rule \(R(X,Y) \land P(Y) \rightarrow Q(X)\). Grounding this rule contributes the following ground instances to \(\text{ground}(P)\): \[\begin{aligned} & R(A,A) \land P(A) \rightarrow Q(A) \\ & R(A,B) \land P(B) \rightarrow Q(A) \\ & R(B,A) \land P(A) \rightarrow Q(B) \\ & R(B,B) \land P(B) \rightarrow Q(B)\end{aligned}\]
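To make the construction concrete, here is a minimal Python sketch that grounds this rule over the two-element universe. The representation of ground rules as (head, body) pairs of strings is an assumption of the sketch, not part of the formal definition; the same representation is reused in later sketches.

    from itertools import product

    # Herbrand universe U_P = {A, B}, represented as strings.
    universe = ["A", "B"]

    # Ground the rule Q(X) <- R(X,Y), P(Y) by substituting every pair of
    # universe terms for (X, Y); each ground instance is a (head, body) pair.
    ground_instances = [
        (f"Q({x})", (f"R({x},{y})", f"P({y})"))
        for x, y in product(universe, repeat=2)
    ]

    for head, body in ground_instances:
        print(f"{head} <- {', '.join(body)}")
    # Q(A) <- R(A,A), P(A)
    # Q(A) <- R(A,B), P(B)
    # Q(B) <- R(B,A), P(A)
    # Q(B) <- R(B,B), P(B)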

If the Herbrand universe is infinite, which occurs when the program includes at least one function symbol, the grounding process results in an infinite set of ground rules. This poses a challenge for bottom-up computation, as it requires processing an infinite set of rules.

Consider the rule: \(\text{numeral}(S(X)) \leftarrow \text{numeral}(X)\). The grounding of this rule yields an infinite set: \[\begin{aligned} & \text{numeral}(S(0)) \leftarrow \text{numeral}(0) \\ & \text{numeral}(S(S(0))) \leftarrow \text{numeral}(S(0)) \\ & \text{numeral}(S(S(S(0)))) \leftarrow \text{numeral}(S(S(0))) \\ & \vdots\end{aligned}\]

When the Herbrand universe is finite, the grounding process produces a finite set of ground rules, making bottom-up computation algorithmically feasible.

The Immediate Consequence Operator \(T_P\)

The immediate consequence operator, denoted as \(T_P\), is a fundamental tool for bottom-up computation in logic programming. It allows us to derive new facts from existing ones based on the rules of the program.

Let \(P\) be a definite logic program, and let \(I\) be a set of ground atoms, i.e., a subset of the Herbrand base \(\mathcal{B}_P\). The immediate consequence operator \(T_P\) is defined as: \[T_P(I) = \{ A \mid A \leftarrow B_1, \dots, B_n \in \text{ground}(P) \text{ and } \{B_1, \dots, B_n\} \subseteq I \}\] In other words, \(T_P(I)\) is the set of all ground atoms \(A\) for which there is a rule \(A \leftarrow B_1, \dots, B_n\) in the grounded program \(\text{ground}(P)\) whose body atoms \(B_1, \dots, B_n\) all belong to \(I\).
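A minimal Python sketch of this operator, assuming the grounded program is represented as a list of (head, body) pairs of ground-atom strings (the same encoding as in the grounding sketch above); the name tp is our own choice. Later sketches in this lecture reuse this helper.

    def tp(ground_program, interpretation):
        """One application of the immediate consequence operator: return the
        set of heads of all rules whose body atoms are all contained in the
        given interpretation. Facts are encoded as rules with an empty body,
        so they are always derived."""
        return {head
                for head, body in ground_program
                if set(body) <= interpretation}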

Example of \(T_P\) Iteration

Consider a program \(P\) with a Herbrand universe \(\mathcal{U}_P = \{A, B\}\) and the following rules: \[\begin{aligned} & R(A) \\ & R(B) \\ & P(A) \\ & Q(X) \leftarrow R(X), P(X)\end{aligned}\] We can compute the application of \(T_P\) iteratively, starting from the empty set:

  • \(T_P\!\uparrow\! 0 = \emptyset\)

  • \(T_P\!\uparrow\! 1 = T_P(T_P\!\uparrow\! 0) = \{R(A), R(B), P(A)\}\)

  • \(T_P\!\uparrow\! 2 = T_P(T_P\!\uparrow\! 1) = \{R(A), R(B), P(A), Q(A)\}\)

  • \(T_P\!\uparrow\! 3 = T_P(T_P\!\uparrow\! 2) = \{R(A), R(B), P(A), Q(A)\}\)

We have reached a fixed point since \(T_P\!\uparrow\! 3 = T_P\!\uparrow\! 2\).
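These iterations can be replayed with the tp sketch from the previous section. The encoding below (facts as rules with empty bodies, and the rule already grounded over \(\{A, B\}\)) is an assumption of the sketch; later sketches reuse this program value.

    program = [
        ("R(A)", ()), ("R(B)", ()), ("P(A)", ()),   # facts: rules with empty bodies
        ("Q(A)", ("R(A)", "P(A)")),                 # ground instances of
        ("Q(B)", ("R(B)", "P(B)")),                 # Q(X) <- R(X), P(X)
    ]

    up0 = set()
    up1 = tp(program, up0)   # {'R(A)', 'R(B)', 'P(A)'}
    up2 = tp(program, up1)   # {'R(A)', 'R(B)', 'P(A)', 'Q(A)'}
    up3 = tp(program, up2)   # identical to up2: a fixed point has been reached
    assert up2 == up3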

Example with Looping Rule

Consider a program \(P\) with the following rules: \[\begin{aligned} & P(A) \\ & Q(X) \leftarrow Q(X)\end{aligned}\] The iterative application of \(T_P\) is as follows:

  • \(T_P\!\uparrow\! 0 = \emptyset\)

  • \(T_P\!\uparrow\! 1 = T_P(T_P\!\uparrow\! 0) = \{P(A)\}\)

  • \(T_P\!\uparrow\! 2 = T_P(T_P\!\uparrow\! 1) = \{P(A)\}\)

We reach a fixed point at \(T_P\!\uparrow\! 2\), demonstrating that bottom-up computation terminates even with potentially looping rules.
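Running the same kind of iteration with the tp helper (here with the looping rule grounded over a one-element universe \(\{A\}\), an assumption of this sketch) stabilises after one step, in contrast to a top-down SLD derivation of the query \(Q(A)\), which would recurse forever on the looping rule.

    looping = [("P(A)", ()),            # fact
               ("Q(A)", ("Q(A)",))]     # ground instance of Q(X) <- Q(X)

    assert tp(looping, set())    == {"P(A)"}   # T_P up 1
    assert tp(looping, {"P(A)"}) == {"P(A)"}   # T_P up 2: fixed point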

Example with Infinite Herbrand Universe

Consider a program \(P\) with the following rules: \[\begin{aligned} & \text{natural}(0) \\ & \text{natural}(S(X)) \leftarrow \text{natural}(X)\end{aligned}\] The iterative application of \(T_P\) is as follows:

  • \(T_P\!\uparrow\! 0 = \emptyset\)

  • \(T_P\!\uparrow\! 1 = T_P(T_P\!\uparrow\! 0) = \{\text{natural}(0)\}\)

  • \(T_P\!\uparrow\! 2 = T_P(T_P\!\uparrow\! 1) = \{\text{natural}(0), \text{natural}(S(0))\}\)

  • \(T_P\!\uparrow\! 3 = T_P(T_P\!\uparrow\! 2) = \{\text{natural}(0), \text{natural}(S(0)), \text{natural}(S(S(0)))\}\)

  • \(\dots\)

In the limit, we obtain \(T_P\!\uparrow\! \omega = \{\text{natural}(0), \text{natural}(S(0)), \text{natural}(S(S(0))), \dots\}\), which represents the set of natural numbers.
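Because the grounding is infinite, no terminating procedure can enumerate \(T_P\!\uparrow\! \omega\) in full, but the first few powers can still be computed over a depth-bounded fragment of the grounding. The depth cut-off and the term encoding below are assumptions of this sketch, not part of the semantics; the tp helper from above is reused.

    DEPTH = 4   # arbitrary cut-off: how deep to nest the successor function

    # Terms 0, S(0), S(S(0)), ... up to the chosen depth.
    terms = ["0"]
    for _ in range(DEPTH):
        terms.append(f"S({terms[-1]})")

    # Finite fragment of ground(P): the fact natural(0) plus one ground
    # instance of natural(S(X)) <- natural(X) for each term up to the bound.
    fragment = [("natural(0)", ())] + [
        (f"natural(S({t}))", (f"natural({t})",)) for t in terms
    ]

    power = set()
    for n in range(1, 4):
        power = tp(fragment, power)
        print(n, sorted(power, key=len))
    # 1 ['natural(0)']
    # 2 ['natural(0)', 'natural(S(0))']
    # 3 ['natural(0)', 'natural(S(0))', 'natural(S(S(0)))']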

Properties of \(T_P\)

The immediate consequence operator \(T_P\) possesses several important properties that are crucial for understanding its behavior and its role in computing logical consequences. These properties include a relationship with models, monotonicity, and upward continuity.

Proposition 1. If \(T_P(I) \subseteq I\), then \(I\) is a model of \(P\).

Proof. Let \(I\) be an interpretation such that \(T_P(I) \subseteq I\). We need to show that \(I\) satisfies all the rules in the grounded program \(\text{ground}(P)\). Consider any ground rule \(A \leftarrow B_1, \dots, B_n \in \text{ground}(P)\). There are two possibilities:

  1. \(\{B_1, \dots, B_n\} \not\subseteq I\). In this case, the body of the rule is false in \(I\), so the rule is satisfied by the definition of implication.

  2. \(\{B_1, \dots, B_n\} \subseteq I\). By the definition of \(T_P\), we have \(A \in T_P(I)\). Since we are given \(T_P(I) \subseteq I\), it follows that \(A \in I\). Thus, the rule is satisfied.

Since all rules in \(\text{ground}(P)\) are satisfied, \(I\) is a model of \(P\). ◻
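For a finite ground program this proposition can also be checked mechanically. The helper below is our own sketch (its name is_model is ours); it reuses the tp helper and the program value from the earlier example and tests the model condition directly on an interpretation that is closed under \(T_P\).

    def is_model(ground_program, interpretation):
        """A Herbrand interpretation is a model iff every ground rule whose
        body is contained in it also has its head in it."""
        return all(head in interpretation
                   for head, body in ground_program
                   if set(body) <= interpretation)

    interp = {"R(A)", "R(B)", "P(A)", "Q(A)"}
    assert tp(program, interp) <= interp   # T_P(I) is contained in I ...
    assert is_model(program, interp)       # ... and I is indeed a model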

Proposition 2 (Monotonicity). If \(I \subseteq J\), then \(T_P(I) \subseteq T_P(J)\).

Proof. Assume \(I \subseteq J\) and \(A \in T_P(I)\). By the definition of \(T_P\), there exists a rule \(A \leftarrow B_1, \dots, B_n \in \text{ground}(P)\) such that \(\{B_1, \dots, B_n\} \subseteq I\). Since \(I \subseteq J\), we have \(\{B_1, \dots, B_n\} \subseteq J\). Therefore, by the definition of \(T_P\), \(A \in T_P(J)\). Thus, \(T_P(I) \subseteq T_P(J)\). ◻

Proposition 3 (Upward Continuity). Let \(I_0 \subseteq I_1 \subseteq I_2 \subseteq \dots\) be a chain of interpretations. Then \[T_P\left(\bigcup_{i=0}^{\infty} I_i\right) = \bigcup_{i=0}^{\infty} T_P(I_i)\]

Proof. We need to show double inclusion.

(\(\subseteq\)) Assume \(A \in T_P\left(\bigcup_{i=0}^{\infty} I_i\right)\). By the definition of \(T_P\), there exists a rule \(A \leftarrow B_1, \dots, B_n \in \text{ground}(P)\) such that \(\{B_1, \dots, B_n\} \subseteq \bigcup_{i=0}^{\infty} I_i\). This means that for each \(B_j\), there exists some \(k_j\) such that \(B_j \in I_{k_j}\). Let \(k = \max\{k_1, \dots, k_n\}\). Since the interpretations form a chain, it follows that \(\{B_1, \dots, B_n\} \subseteq I_k\). Thus, \(A \in T_P(I_k)\). Therefore, \(A \in \bigcup_{i=0}^{\infty} T_P(I_i)\).

(\(\supseteq\)) Assume \(A \in \bigcup_{i=0}^{\infty} T_P(I_i)\). Then there exists some \(k\) such that \(A \in T_P(I_k)\). By the definition of \(T_P\), this means there exists a rule \(A \leftarrow B_1, \dots, B_n \in \text{ground}(P)\) such that \(\{B_1, \dots, B_n\} \subseteq I_k\). Since \(I_k \subseteq \bigcup_{i=0}^{\infty} I_i\), we have \(\{B_1, \dots, B_n\} \subseteq \bigcup_{i=0}^{\infty} I_i\). Thus, \(A \in T_P\left(\bigcup_{i=0}^{\infty} I_i\right)\). ◻

Iterated Application of \(T_P\)

To explore the behavior of the \(T_P\) operator, we define its upward powers through iterative application, starting from the empty set.

  • \(T_P\!\uparrow\! 0 = \emptyset\)

  • \(T_P\!\uparrow\! (n+1) = T_P(T_P\!\uparrow\! n)\)

  • \(T_P\!\uparrow\! \omega = \bigcup_{n=0}^{\infty} T_P\!\uparrow\! n\)

Here, \(T_P\!\uparrow\! 0\) is the empty set, \(T_P\!\uparrow\! (n+1)\) is the result of applying \(T_P\) to \(T_P\!\uparrow\! n\), and \(T_P\!\uparrow\! \omega\) is the union of all \(T_P\!\uparrow\! n\) for all natural numbers \(n\). This represents the limit of the iterative process.

Theorem 4. If \(P\) is a definite clause program, then \(T_P\!\uparrow\! \omega\) is the least Herbrand model of \(P\), which is also the least fixed point of \(T_P\).

This theorem is a consequence of the Knaster-Tarski theorem and Kleene’s fixed-point theorem. The Knaster-Tarski theorem states that a monotonic operator on a complete lattice has a least fixed point. Kleene’s fixed-point theorem further specifies that if the operator is also upward continuous, then the least fixed point can be obtained by iteratively applying the operator starting from the bottom element of the lattice, which in our case is the empty set.
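When the grounding is finite, the theorem also yields a terminating procedure: iterate \(T_P\) from the empty set until nothing changes. The sketch below packages the loop used in the earlier examples into a helper (the name least_model is ours) and reuses the tp helper and the program value from above.

    def least_model(ground_program):
        """Compute T_P up omega, the least Herbrand model, by iterating tp
        from the empty set. Terminates only if the grounding is finite."""
        power = set()
        while True:
            nxt = tp(ground_program, power)
            if nxt == power:
                return power
            power = nxt

    assert least_model(program) == {"R(A)", "R(B)", "P(A)", "Q(A)"}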

Downward Powers of \(T_P\)

In addition to the upward powers of \(T_P\), we can also define its downward powers, which start from the Herbrand base \(\mathcal{B}_P\) and iteratively apply \(T_P\). This approach is related to coinductive reasoning.

  • \(T_P\!\downarrow\! 0 = \mathcal{B}_P\)

  • \(T_P\!\downarrow\! (n+1) = T_P(T_P\!\downarrow\! n)\)

  • \(T_P\!\downarrow\! \omega = \bigcap_{n=0}^{\infty} T_P\!\downarrow\! n\)

Here, \(T_P\!\downarrow\! 0\) is the Herbrand base, \(T_P\!\downarrow\! (n+1)\) is the result of applying \(T_P\) to \(T_P\!\downarrow\! n\), and \(T_P\!\downarrow\! \omega\) is the intersection of all \(T_P\!\downarrow\! n\) for all natural numbers \(n\). This represents the limit of the iterative process from the top.
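When the Herbrand base is finite, the downward powers can be computed with the same kind of loop, starting from the base instead of the empty set; in that case the decreasing sequence stabilises at a fixed point of \(T_P\). The sketch below is our own, assumes the base is given explicitly, and reuses the tp helper and the program value from above; the example in the next subsection shows that with an infinite base the limit \(T_P\!\downarrow\! \omega\) need not be a fixed point.

    def down_limit(ground_program, herbrand_base):
        """Iterate T_P downward from the Herbrand base until the sequence
        stabilises. Terminates only if the base is finite."""
        power = set(herbrand_base)           # T_P down 0
        while True:
            nxt = tp(ground_program, power)  # T_P down (n+1)
            if nxt == power:
                return power
            power = nxt

    # On the earlier finite example the downward limit coincides with the
    # least model, because no atom can support itself through a loop.
    base = {"R(A)", "R(B)", "P(A)", "P(B)", "Q(A)", "Q(B)"}
    assert down_limit(program, base) == {"R(A)", "R(B)", "P(A)", "Q(A)"}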

Example of Coinductive Reasoning

To illustrate the concept of coinductive reasoning and the use of downward powers of \(T_P\), let’s consider the following program \(P\): \[\begin{aligned} & R(0) \leftarrow R(0) \\ & P(0) \leftarrow Q(X) \\ & Q(S(X)) \leftarrow Q(X)\end{aligned}\] This program includes a looping rule \(R(0) \leftarrow R(0)\), a rule that derives \(P(0)\) if there exists some \(X\) for which \(Q(X)\) is true, and a rule stating that \(Q(S(X))\) holds whenever \(Q(X)\) does.

Upward Iteration

Applying the upward powers of \(T_P\):

  • \(T_P\!\uparrow\! 0 = \emptyset\)

  • \(T_P\!\uparrow\! 1 = T_P(T_P\!\uparrow\! 0) = \emptyset\)

  • \(\dots\)

  • \(T_P\!\uparrow\! \omega = \bigcup_{n=0}^{\infty} T_P\!\uparrow\! n = \emptyset\)

The least fixed point, and hence the least Herbrand model, is the empty set: no atom of the program has a finite derivation, so the upward iteration never adds anything. Under a coinductive reading, in which atoms supported by infinite derivations such as \(R(0)\) would be accepted, this does not capture the intended behavior of the program.

Downward Iteration

Applying the downward powers of \(T_P\):

  • \(T_P\!\downarrow\! 0 = \mathcal{B}_P = \{R(0), R(S(0)), \dots, P(0), P(S(0)), \dots, Q(0), Q(S(0)), \dots\}\)

  • \(T_P\!\downarrow\! 1 = T_P(T_P\!\downarrow\! 0) = \{R(0), P(0), Q(S(0)), Q(S(S(0))), \dots\}\)

  • \(T_P\!\downarrow\! 2 = T_P(T_P\!\downarrow\! 1) = \{R(0), P(0), Q(S(S(0))), Q(S(S(S(0)))), \dots\}\)

  • \(\dots\)

  • \(T_P\!\downarrow\! \omega = \bigcap_{n=0}^{\infty} T_P\!\downarrow\! n = \{R(0), P(0)\}\)

In this case, \(T_P\!\downarrow\! \omega = \{R(0), P(0)\}\). This is a model of the program, but it is not a fixed point, as \(T_P(T_P\!\downarrow\! \omega) = \{R(0)\}\). This demonstrates that the downward iteration does not necessarily lead to a fixed point, but it can provide a different perspective on the semantics of the program, particularly for programs with looping behavior.
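The failure to reach a fixed point can be checked concretely on a depth-bounded fragment of the grounding (a truncation we introduce only for this sketch; it does not affect the result, because no \(Q\) atom belongs to the candidate set). The check reuses the tp helper from above.

    # Ground fragment of the program up to depth 3: R(0) <- R(0),
    # P(0) <- Q(t) and Q(S(t)) <- Q(t) for terms t up to the bound.
    terms = ["0", "S(0)", "S(S(0))"]
    coinduction = ([("R(0)", ("R(0)",))]
                   + [("P(0)", (f"Q({t})",)) for t in terms]
                   + [(f"Q(S({t}))", (f"Q({t})",)) for t in terms])

    limit = {"R(0)", "P(0)"}                    # T_P down omega from the text
    assert tp(coinduction, limit) == {"R(0)"}   # not a fixed point: P(0) is lost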

This example illustrates that while the upward powers of \(T_P\) converge to the least Herbrand model, the downward powers can be used for coinductive reasoning, which is particularly relevant for systems that may loop indefinitely.

Conclusion

In this lecture, we have explored the computation of logical consequences in logic programming, examining both top-down and bottom-up approaches. We introduced the concept of grounding, which transforms a logic program into a set of ground rules, and the immediate consequence operator \(T_P\), which is central to the bottom-up approach. We discussed the key properties of \(T_P\), including monotonicity and upward continuity, and demonstrated how these properties lead to the least Herbrand model as the least fixed point of \(T_P\). We also touched upon coinductive reasoning and the use of downward powers of \(T_P\) for systems that exhibit looping behavior.

For definite clause programs without function symbols, the least fixed point can be computed in polynomial time relative to the size of the grounding. However, when function symbols are present, the Herbrand universe becomes infinite, making bottom-up computation less practical due to the infinite grounding.

Key Takeaways:

  • The semantics of definite clause logic programming is based on the notion of logical consequence, which corresponds to the least Herbrand model.

  • Logical consequences can be computed using top-down methods, such as SLD resolution, or bottom-up methods using the \(T_P\) operator.

  • The \(T_P\) operator is monotonic and upward continuous, which guarantees the existence of a least fixed point.

  • The least fixed point of \(T_P\) is the least Herbrand model and can be computed as \(T_P\!\uparrow\! \omega\).

  • Coinductive reasoning involves the use of downward powers of \(T_P\) and is relevant for systems that loop indefinitely.

Follow-up Questions:

  • How can we adapt the bottom-up approach for programs with function symbols, given that the grounding becomes infinite?

  • What are the practical implications of coinductive reasoning in the context of logic programming, particularly for systems that do not terminate?

  • How can we use the concepts discussed in this lecture to develop more efficient algorithms for computing logical consequences?

Further Reading:

  • Lloyd, J. W. (1987). Foundations of Logic Programming. Springer.
