This assignment is about studying and proving properties of classical planning search algorithms and heuristics.
COMP2550/COMP4450/COMP6445 – Advanced Computing R&D Methods
Assignment 3: Theoretical Methods
Maximum marks: 100 points + 5 bonus points
Submission deadline: May 12, 11:59pm
No late submission unless permission is obtained from the convener.
Submission mode: One PDF file via Wattle
This assignment contains questions on both Theoretical Methods I and Theoretical Methods II. The first part is worth 65 points and the second 40 points. The maximum is therefore 105 out of 100, meaning that you can earn up to 5 bonus points (and you can lose 5 points without actually losing any marks).
Theoretical Methods I
Although the main purpose of this lecture was to study and prove properties of algorithms (illustrated with the classical planning search algorithm and heuristics), we start with some general planning and search exercises before focusing on analyzing and proving theoretical properties of algorithms and, most importantly, heuristics. Thus, by first doing the exercises within block 1, Foundational Planning and Search Exercises, you can familiarize yourself with the topic and get a more intuitive understanding before you move on to the more challenging – and more theoretical – questions.
Do not leave out any exercises! When you need to prove something, check the required definition and try to prove or argue (as formally as possible) why it is fulfilled or why not. – Pascal
1. Foundational Planning and Search Exercises
A. Checking (Proving) Plan Executability (5 points)
a) Let P1 = (V, A, sI, g1) and P2 = (V, A, sI, g2) be two classical planning problems, which are based on the same state variables, actions, and initial state. Only their goal descriptions differ.
Let $\bar{a}^1 = a^1_1, a^1_2, \ldots, a^1_n$ be a plan (i.e., a solution action sequence) for P1 and $\bar{a}^2 = a^2_1, a^2_2, \ldots, a^2_n$ be a plan for P2.
Answer the following questions (1.5+1pts). If a statement is wrong, provide a counter-example and briefly explain why it is a counter-example. If a statement is correct, provide a proof sketch or make a reasonable argument that shows that you understood why the respective property is true (we do not demand anything overly formal here).
• If $\bar{a}^1 \circ \bar{a}^2$ is executable in sI, then it is also a solution for P2. (Like in the lecture, the sign ◦ represents the concatenation of actions.)
• If P1 (and thus P2) is delete-free, then $a^1_1, a^2_1, a^1_2, a^2_2, \ldots, a^1_n, a^2_n$ is a solution for P3 = (V, A, sI, g1 ∪ g2).
b) Answer the following questions (1+1.5pts). If a statement is wrong, provide a counter-example and briefly explain why it is a counter-example. If a statement is correct, provide a proof sketch or make a reasonable argument that shows that you understood why the respective property is true (we do not demand anything overly formal here).
• Let a be an action and s be a state. If a is applicable in s once, then it is also applicable twice. Formally: If τ(a, s) = true, then τ(a, s′) = true for s′ = γ(a, s).
• Let a be an action and s be a state. If a is applicable in s twice in a row, then it is also applicable thrice. Formally: If τ(aa, s) = true, then τ(aaa, s) = true. (A small sketch of how τ and γ act on set-based states is given below.)
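For intuition, here is a minimal sketch of how τ and γ can be read over set-based states, assuming the usual encoding in which states are sets of facts and actions are (pre, add, del) triples; the function names and the encoding itself are illustrative assumptions, not part of the assignment.

```python
# Minimal sketch: states as Python sets of facts, actions as (pre, add, del) triples.
def tau(action, state):
    """Applicability tau(a, s): a is applicable in s iff pre(a) is contained in s."""
    pre, add, dele = action
    return pre <= state

def gamma(action, state):
    """Progression gamma(a, s) = (s \\ del(a)) ∪ add(a)."""
    pre, add, dele = action
    return (state - dele) | add

def tau_seq(actions, state):
    """Applicability of a sequence, e.g. tau(aa, s): apply the actions in order."""
    for a in actions:
        if not tau(a, state):
            return False
        state = gamma(a, state)
    return True
```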
B. Modeling a Planning Problem (5 points)
(Re-)Consider the sliding tile puzzle from the lecture:
Initial State (the empty position is shown as _):

 2  1  4  8
 9  7 11 10
 6  5 15  3
13 14 12  _

Goal State:

 1  2  3  4
 5  6  7  8
 9 10 11 12
13 14 15  _
As the lecture explained, the left side shows the initial configuration of the puzzle (meant to represent a
shuffled picture), and the right side shows the goal configuration. The numbers are provided so we can give
individual tiles individual names – i.e., their numbers. The position without a number does not have a tile in it; it is the only empty position, into which neighboring tiles can be moved.
We also assume that every “grid position” has a number from 1 to 16, assigned in the same way as the goal configuration is shown. Thus, for example:
• In the initial state, the tile with name 9 is located at position 5.
• In both the initial state and the goal state, the empty position without any tile in it is number 16.
• In the goal state, any tile with name i is also at position i.
In this exercise you should model that problem as a classical planning problem (V, A, sI , g). To make things
easier for you, we already provide you with a partial domain model.
You can (i.e., have to!) assume that we are given the following problem components (they are re-stated as a small sketch right after this list):
• V = {pos_i-is-free | 1 ≤ i ≤ 16} ∪ {tile_i-at-pos_j | 1 ≤ i ≤ 15 and 1 ≤ j ≤ 16}
• sI = {tile2-at-pos1, tile1-at-pos2, tile4-at-pos3, tile8-at-pos4,
tile9-at-pos5, tile7-at-pos6, tile11-at-pos7, tile10-at-pos8,
tile6-at-pos9, tile5-at-pos10, tile15-at-pos11, tile3-at-pos12,
tile13-at-pos13, tile14-at-pos14, tile12-at-pos15, pos16-is-free}
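For reference, the given components can be re-stated directly as Python sets of fact names. This is only a sketch of the information above; the goal g and the action set A are deliberately omitted, since providing them is the exercise.

```python
# The given components re-stated as sets of fact-name strings (illustrative only).
V = {f"pos{i}-is-free" for i in range(1, 17)} | \
    {f"tile{i}-at-pos{j}" for i in range(1, 16) for j in range(1, 17)}

s_I = {"tile2-at-pos1",   "tile1-at-pos2",   "tile4-at-pos3",   "tile8-at-pos4",
       "tile9-at-pos5",   "tile7-at-pos6",   "tile11-at-pos7",  "tile10-at-pos8",
       "tile6-at-pos9",   "tile5-at-pos10",  "tile15-at-pos11", "tile3-at-pos12",
       "tile13-at-pos13", "tile14-at-pos14", "tile12-at-pos15", "pos16-is-free"}
```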
Exercises (1.5+3.5pts):
a) Specify the goal description.
b) We are also still missing the action set A. However, because there are too many special cases to
consider, we do not demand that you model the entire set. Instead, we only ask you to:
• Model all (four) actions for moving tile number 6 from position 11 to all possible, directly neighboring positions.
→ Please provide names for your actions, e.g., move-tile_i-from-pos_j-to-pos_k = (pre, add, del) with …
Make sure that you cover all required preconditions and effects.
C. Proving Simple Heuristic Properties (8 points)
We will again take a look at the Sliding Tile puzzle. This time, however, we do not assume it to be modeled
as a planning problem. It is simply the sliding tile puzzle! Its formal representation is not important for
this exercise.
Instead, we only consider the three heuristics that we already saw in the lecture (a small computational sketch of the first two is given after this list):
• The “number of misplaced tiles” heuristic.
This heuristic counts the number of misplaced tiles. Please note that it counts tiles! Not positions!
In other words, if the position (i, j) is empty in the current puzzle state, but it’s supposed to be
occupied by some tile in the goal state, then the “wrong position” (i, j) does not contribute towards
the heuristic value (as there is no tile on it).
• The “Manhattan distance” heuristic.
For each tile (again, tiles!, not positions), add up the horizontal plus vertical distance to its goal position.
• The “ignore some tiles” heuristic.
This heuristic ignores a certain number of tiles, both in the current puzzle and the goal configuration.
The resulting problem is solved optimally and the resulting solution cost is used as heuristic. For a
fully defined heuristic, we would of course have to define which tiles are to be ignored for a given puzzle. This, however, is not important for the sake of this exercise. Just assume that for each puzzle state you are provided with a set of tiles that is to be ignored.
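The first two heuristics can be computed mechanically. The sketch below assumes an illustrative encoding in which a puzzle state maps each tile number to its position 1 to 16 (the empty square is simply absent, so only tiles are counted, as stressed above); the function names are illustrative.

```python
# Illustrative encoding: state and goal are dicts mapping tile -> position (1..16).
def misplaced_tiles(state, goal):
    """Number of misplaced tiles: count the tiles that are not at their goal position."""
    return sum(1 for tile, pos in state.items() if goal[tile] != pos)

def manhattan(state, goal):
    """Manhattan distance: horizontal plus vertical distance, summed over all tiles."""
    def row_col(pos):                       # positions 1..16 laid out row by row
        return (pos - 1) // 4, (pos - 1) % 4
    total = 0
    for tile, pos in state.items():
        r1, c1 = row_col(pos)
        r2, c2 = row_col(goal[tile])
        total += abs(r1 - r2) + abs(c1 - c2)
    return total
```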
Prove or disprove the following propositions. We do not expect anything formal here, but your arguments
should be convincing enough to allow for your conclusion (i.e., using only natural language is sufficient,
but don’t be too abstract). If a proposition is wrong, a proof would consist of a counter-example and an
explanation why it is a counter-example.
a) (2pts) The number of misplaced tiles heuristic h#MT is:
• goal-aware.
• safe.
• admissible.
• perfect.
b) (3 pts) The Manhattan distance heuristic hMD is:
• goal-aware.
• safe.
• admissible.
• perfect.
c) (3 pts) The ignore some tiles heuristic hIT is:
• goal-aware.
• safe.
• admissible.
• perfect. By this we mean that the heuristic is perfect for any pattern (i.e., any choice of ignored tiles). So if it is indeed perfect, show this for every possible pattern. It is not perfect if there exists a pattern for which it is not perfect. (So you can pick a pattern to construct a counter-example.)
D. Executing A* (7 points)
In this exercise you are supposed to experience the influence of heuristics’ guiding power. For this, you will
execute the classical planning progression algorithm on the simple Cranes in the Harbor domain. You will
not have to compute heuristics. Since the entire reachable state space consists of only six states, we provide the heuristic values for you. (They are not computed by an actual heuristic but are just chosen more or less arbitrarily.)
This is the transition system:
[Figure: the transition system of the Cranes in the Harbor domain; its six states are connected by move, put, take, load, and unload actions.]
Title: Lecture Slides for Automated Planning
Source: http://www.cs.umd.edu/~nau/planning/slides/chapter01.pdf
Licence: Attribution-NonCommercial-ShareAlike 2.0 Generic
https://creativecommons.org/licenses/by-nc-sa/2.0/legalcode
Copyright: Dana S. Nau
Note that the transition system only mentions the action name “move”, but we distinguish between moveLeft and moveRight to differentiate between moving to the left location and moving to the right location. Please be meticulous when writing down the move actions, because it is tempting to write down the “wrong direction” (e.g., when moving from s2 to s0, the truck moves to the right, although the node s2 is on the left of s0, so don’t get that wrong). Assume that every action has a cost of 1. Assume that the initial state is s2 and the goal state is s5.
Below, given the heuristic provided, you are supposed to execute the progression planning algorithm from the lecture. You do not have to show action applicability; instead, you can simply use the transition system to identify which actions are applicable in the respective states. You also do not have to write down the encoding of the planning states; it suffices to use the states’ names s0 to s5 (given in the graphic).
You have to follow the annotation style of the lecture! I.e., you have to draw one single search tree, just like on slide 19, which shows the A* search example of the AI search section. Write next to each node of the tree:
• The content of the respective search node n. For example, the root node of the next exercise a) has to be labeled with “n = (s2, ε)”.
• The f equation for n. E.g., you have to write “f(n) = g(n) + h(n) = 0 + 2 = 2” for the root node mentioned above.
• A number indicating when you expanded the search node n (“expansion” meaning when you selected n from the search fringe to create its successors or to return the solution). The root node would obviously be annotated with “(1)”. Then, its successor with the smallest f value would get “(2)”, and so on, until the solution node gets the highest number. (A minimal sketch of this bookkeeping is given after this list.)
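Below is a minimal sketch of this bookkeeping, assuming a priority-queue implementation of the progression A* search with uniform action costs and, as in the hints below, no duplicate detection; the function and parameter names are illustrative, and ties on f are broken arbitrarily.

```python
import heapq

def a_star(s_init, is_goal, successors, h):
    """Sketch only: `successors(s)` yields (action, next_state) pairs read off the
    transition system, and `h` is a dict of heuristic values (e.g. h1 from exercise a)."""
    fringe = [(h[s_init], 0, s_init, [])]        # entries: (f, g, state, plan)
    expansion = 0
    while fringe:
        f, g, s, plan = heapq.heappop(fringe)    # pop the node with the smallest f
        expansion += 1                           # the "(k)" annotation of this node
        print(f"({expansion}) n = ({s}, {plan}), f(n) = {g} + {h[s]} = {f}")
        if is_goal(s):
            return plan
        for action, s2 in successors(s):         # no duplicate detection, as in the lecture
            g2 = g + 1                           # every action has cost 1
            heapq.heappush(fringe, (g2 + h[s2], g2, s2, plan + [action]))
    return None
```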
a) (4 pts) Assume our heuristic, h1, has the following heuristic values: h1(s0) = 1, h1(s2) = 2, h1(s3) = 2,
h1(s1) = 0, h1(s4) = 1, h1(s5) = 0. Execute the algorithm as described above.
Hints:
• The algorithm from the lecture does not check for duplicates. Since the transition system contains
cycles it is likely that you will explore some states multiple times.
• There is only one correct solution. The respective search tree will have 14 search nodes.
• Again: Do not confuse moveLeft and moveRight!
b) (1.5 pts) Assume that instead of using h1 we would have used the heuristic h2, which has the same
heuristic values for s3, s4, s5 as h1, but for the others it uses h2(s0) = 4, h2(s1) = 3, h2(s2) = 3.
Can you make at least two observations about the resulting search tree (and, ideally, how it relates
to the one from the previous exercise)? (Just one sentence each, this exercise is meant to make you
think about the impact of using different heuristics, rather than testing your capabilities of proving
properties.)
c) (1.5 pts) Answer and prove (do not just say “yes” or “no”) the following questions:
• Is h1 admissible?
• Is h1 goal-aware?
• Is h1 perfect?
• (Remark: We do not ask whether h1 is safe because that would not make much sense as there are
no states for which we defined h1 to be ∞.)
2. Relaxed Planning Heuristics.
A. Algorithm to solve Delete-Relaxed Planning Problems (10 points)
In the lecture we introduced delete-relaxation as a basis for heuristics. They were motivated as a domainindependent problem relaxation that makes a problem “easier” because all facts made true once will always
remain true. It was claimed that solving a delete-relaxed problem can be done in polynomial time (which is
a significant improvement because solving “normal” planning problems (where delete lists are not empty)
takes exponential time).
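As a reminder, the delete relaxation of a problem is obtained by simply emptying every delete list. The sketch below (using the illustrative set-based encoding from earlier) only restates this definition; it is not the decision procedure asked for below.

```python
# Delete relaxation: drop every delete list (illustrative encoding as before).
def delete_relax(problem):
    V, A, s_I, g = problem
    A_plus = [(pre, add, frozenset()) for (pre, add, dele) in A]   # empty delete lists
    return (V, A_plus, s_I, g)
```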
You have to prove that deciding whether a delete-relaxed problem has a solution can be done in polynomial
time. To do so,
• (6 pts) provide a decision procedure (i.e., an algorithm in pseudo code similar to the classical planning algorithm) that takes a delete-relaxed planning problem $P^+ = (V, A, s_I, g)$ as input and returns true if there is a solution and false if there is none.
• (2 pts) Prove that your algorithm runs in polynomial time.
• (2 pts) Prove (or explain) why the returned result (i.e., yes or no) is correct.
Hints:
• Exploit the fact that executing an action (to progress some state to another) can never be a disadvantage for the purpose of solving a (delete-free) planning problem,
• exploit that there is no reason to ever execute an action more than once (i.e., once an action is executed
it can be ignored). (By exploiting this the right way you can show that the algorithm runs in quadratic
time (which is polynomial).)
Recap
In the lecture (recordings) we saw the proofs for:
• $h^{max}$ is not perfect.
• $h^{max}$ is safe.
• $h^{max}$ is goal-aware.
• $h^+$ dominates $h^{max}$.
Two of these proofs are based on the following fact: Let $\bar{a}$ be a solution plan for a planning problem P. Then, the delete-relaxed variant of $\bar{a}$, written $\bar{a}'$, is a solution for the delete-relaxed variant of P, $P^+$.
You can assume the correctness of this property if you need it in the following.
We will introduce two further heuristics, $h^{add}$ (the add heuristic) and $h^{PDB}$ (the Pattern Database Heuristic). For these, you are supposed to prove these properties as well.
B. Add Heuristic $h^{add}$ (15 points)
The $h^{max}$ heuristic from the lecture can be interpreted as estimating only the cost of making the hardest fact in a goal condition true – the same applies to the preconditions of all actions. Thus the preconditions of actions are taken into account to only a very limited extent, as only their hardest precondition contributes to the heuristic value, which is the reason why this heuristic is not very well informed.
The add heuristic $h^{add}$ tries to compensate for that. Just like $h^{max}$, we define $h^{add}$ based on the relaxed planning graph. In contrast to $h^{max}$, rather than estimating an action's cost based on its hardest precondition, we do so by summing over the costs of all its preconditions.
Based on the relaxed planning graph, $h^{add}$ can be calculated as follows:
• Action vertex: The cost of an action vertex $a \in A^i$ is c(a) plus the sum of the predecessor vertex costs. Mathematically, this can be expressed as follows:
$h^{add}(a; s) = c(a) + h^{add}(pre(a); s)$
• Variable vertex:
– The cost of a variable vertex v is 0 if $v \in V^0$.
– For all $v \in V^i$, i > 0, the cost of v equals the minimum cost of all predecessor vertices (these might be either action or variable vertices).
Mathematically, this can be expressed as follows:
$h^{add}(v; s) = \begin{cases} 0 & \text{if } v \in V^0 \\ \min_{a \in pred(v)} h^{add}(a; s) & \text{otherwise} \end{cases}$
Here, pred(v) stands for “predecessors”, which is the set of all actions that occur in an action layer before the vertex layer of v and add v.
• Vertex set: For a set of state variables $\bar{v} \subseteq V$, the cost equals the sum of the costs of the variables in $\bar{v}$. Mathematically, this can be expressed as follows:
$h^{add}(\bar{v}; s) = \sum_{v \in \bar{v}} h^{add}(v; s)$
For a state s ∈ S, $h^{add}(s)$ equals the cost of g in the rPG that we start constructing in s. Just like $h^{max}$, $h^{add}$ returns ∞ if and only if there does not exist a delete-relaxed solution for g (which is equivalent to stating that the final vertex layer does not contain all goals).
Hint: Before you answer the following questions, we recommend computing the heuristic at least once on a small example, for instance in the Cranes in the Harbor domain (you already have a correct planning graph available, so you can easily and quickly annotate the respective values).
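For experimentation, here is a minimal fixed-point sketch of the $h^{add}$ equations above, assuming facts are hashable names and actions are (name, pre, add, cost) tuples (delete lists play no role). Once the iteration converges, the values should agree with those read off a relaxed planning graph that is built to a fixed point.

```python
import math

def h_add(state_facts, goal, actions):
    """Fixed-point form of the h^add equations; returns math.inf if some goal
    fact is unreachable in the delete relaxation."""
    cost = {f: 0 for f in state_facts}              # variable vertices of layer 0

    def fact_cost(f):
        return cost.get(f, math.inf)

    def set_cost(fs):                               # vertex set: sum of member costs
        return sum(fact_cost(f) for f in fs)

    changed = True
    while changed:                                  # iterate until no cost improves
        changed = False
        for name, pre, add, c in actions:
            pc = set_cost(pre)
            if pc == math.inf:
                continue                            # action not yet reachable
            for f in add:                           # action vertex: c(a) + sum over pre
                if c + pc < fact_cost(f):
                    cost[f] = c + pc
                    changed = True
    return set_cost(goal)
```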
Answer and prove the following questions. As always: If some proposition does not hold, you can prove it
by providing a counter-example (and showing/explaining that/why it is a counter-example).
a) (2 pts) Is $h^{add}$ perfect?
b) (3 pts) Is $h^{add}$ safe?
c) (2 pts) Is $h^{add}$ goal-aware?
d) (4 pts) Is $h^{add}$ admissible?
e) (4 pts) Does $h^{add}$ dominate $h^{max}$?
C. Pattern Database Heuristics $h^{PDB}$ (15 points)
Pattern database heuristics (PDB heuristics) are a type of heuristic obtained by dropping a subset of variables from the planning problem. The main idea is closely related to ignoring tiles in the sliding tile puzzle: just make the world easier by completely ignoring certain parts of it!
Given a planning problem P = (V, A, sI , g) and a set of variables – called a pattern – X ⊆ V , we restrict
the planning problem to the pattern, thereby effectively ignoring all other state variables.
Formally, we can define the resulting planning problem as $P^X = (X, A^X, s_I^X, g^X)$ with:
• $A^X = \{a^X \mid a \in A\}$ with
– $a^X = (pre(a) \cap X, add(a) \cap X, del(a) \cap X, c(a))$ for $a \in A$
• $s_I^X = s_I \cap X$
• $g^X = g \cap X$
It is easy to see that the resulting planning problem $P^X$ is just a normal planning problem again, without any further special properties; it is just smaller! (This is in contrast to, for example, delete-relaxation heuristics, because there the resulting planning problem has no delete effects; here, however, depending on how we choose the pattern, we might still have delete effects.) The resulting problem $P^X$ is also referred to as an “abstraction of P”.
We will ignore many details about this heuristic as they will not be important for the purpose of this
exercise. You only have to know that each “original” state s from the original problem’s search corresponds
to exactly one “relaxed” state in the abstraction. Provided the heuristic uses a pattern X ⊆ V, we obtain the abstract state $s^X$ that corresponds to s by simply restricting s to X, i.e., $s^X := s \cap X$. So if, during search, we encounter a state s, say {a, c, e, f}, and our heuristic uses the pattern X = {d, e, f}, then the PDB heuristic uses the state {e, f} for its computation.
Now, completely similar to the “ignore some tiles” heuristic in the Sliding Tile Puzzle, the PDB heuristic simply uses the cost of a perfect (i.e., optimal) solution to the abstract problem as the heuristic value for the original state. Formally: given that the heuristic uses the pattern X, $h^{PDB}(s)$ (for P) is defined as $h^*(s^X) = h^*(s \cap X)$ in the problem $P^X$. So, back in our example, $h^{PDB}(\{a, c, e, f\})$ for the original problem P would be defined as $h^*(\{e, f\})$ in the abstracted problem $P^X$.
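The abstraction defined above can be stated compactly in the illustrative set-based encoding used earlier; the sketch below is only an illustration of the definition, with assumed fact and action representations.

```python
# Project a problem onto a pattern X ⊆ V: intersect every component with X.
def restrict_to_pattern(problem, X):
    V, A, s_I, g = problem                          # actions: (name, pre, add, del, cost)
    A_X = [(name, pre & X, add & X, dele & X, c)
           for (name, pre, add, dele, c) in A]      # a^X for every a in A
    return (V & X, A_X, s_I & X, g & X)             # P^X = (X, A^X, s_I^X, g^X)

# The abstract state used during search is just s ∩ X, e.g.:
# {"a", "c", "e", "f"} & {"d", "e", "f"}  ==  {"e", "f"}
```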
Answer (try to prove/disprove) the following questions for PDB heuristics.
a) (2 pts) Is it perfect? Hint: It is completely obvious that this is not the case, so there is no harm in revealing that it's not. For proving that it is not, you are allowed to choose any pattern you want (for the counter-example that you construct).
b) (2 pts) If P = (V, A, sI, g) is the planning problem you are facing, can you provide a pattern X ⊆ V for which $h^{PDB}$ will be perfect?
c) (3 pts) Is it safe?
d) (2 pts) Is it goal-aware?
e) (3 pts) Is it admissible?
f) (3 pts) Let P = (V, A, sI, g) be a planning problem and X ⊆ V and Y ⊆ V patterns. Let $h^{PDB}_X$ and $h^{PDB}_Y$ be the PDB heuristics that use X and Y as patterns, respectively. Assume X ⊇ Y. Does $h^{PDB}_X$ dominate $h^{PDB}_Y$, or the other way round? Or is there no dominance among these heuristics? Explain (prove) your answer.
Theoretical Methods II
(see next pages)
3 Classical Propositional Logic (8pts.).
Answer Questions A to C below using only the rules of propositional logic listed below and the lecture content. Clearly formulate and justify every argument in your proof using proper English. If you use a property listed as one of the questions on this assignment or listed in the lecture slides, please clearly state which question it is and how you use it in your argument. Let A and B be logical statements.
Propositional Logic Rules (there are more, but this set of rules is sufficient for the purpose of writing proofs for the problems in this assignment; a worked example of the expected proof style is given after the list below):
1. To show that A ⇒ B holds, we assume that A holds, then we deduce B.
2. If we know that both A and A ⇒ B hold, then we can deduce B.
3. If we know that A holds and ¬A holds, then we can deduce F.
4. If we assume that A holds and then deduce F, then ¬A must hold.
5. (Proof by Contradiction) To show that A ⇒ B holds, we assume that both A and ¬B
hold and deduce F.
6. If we know that A ∧ B holds, then we can deduce A and B (or equivalently in English,
B and A).
7. If we know that A holds and B holds, then we can deduce A ∧ B.
8. If we know that at least one of the following:
(1) A holds,
(2) B holds,
then, we can deduce A ∨ B.
Note: The precedence order of the logical operators is the following: ¬, ∧, ∨, and ⇒.
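To illustrate the expected proof style, here is a short worked example (it is not one of the graded questions below): a proof of A ⇒ (A ∨ B) using only the rules above.

```latex
% Worked example (not a graded question): show that $A \Rightarrow (A \vee B)$.
\begin{enumerate}
  \item Assume that $A$ holds (Rule 1: to show an implication, assume its left-hand side).
  \item Since $A$ holds, we can deduce $A \vee B$ (Rule 8, case (1)).
  \item Having deduced $A \vee B$ under the assumption $A$, we conclude that
        $A \Rightarrow (A \vee B)$ holds (Rule 1).
\end{enumerate}
```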
Question A
Show that (A ∧ B) ⇒ (B ∧ A). (2pts.)
Question B
Show that (A ⇒ B) ⇒ (¬B ⇒ ¬A). Deduce ¬A from ¬B using proof by contradiction.
(4pts.)
Question C
Show that ¬¬A ⇒ A using proof by contradiction. (2pts.)
4 Natural Numbers (32pts.).
Using only the definitions given below and the lecture content, answer the questions below. Clearly formulate and justify every argument in your proof using proper English. If you use a property listed as one of the questions on this assignment or listed in the lecture slides, please clearly state which question it is and how you use it in your argument.
Definition 1 (Natural Numbers N). Let 0 be a term. For n = (S x), n is a term if and only if x is a term, and we say that n is a successor of x. Let N be the set of natural numbers; for an element n of N we write n ∈ N and say that n is a natural number. We have n ∈ N if and only if either n = 0 or n = (S x) for some x ∈ N, and nothing else is a natural number. Thus, the natural numbers form the set N = {0, (S 0), (S (S 0)), . . . }, which we write as {0, 1, 2, . . . }.
As mentioned in the slides, we will also assume arithmetical operations “+” and “×” over N, which, for natural numbers a, b ∈ N, have the following properties:
• closed: a + b and a × b are also natural numbers,
• commutative: a + b = b + a and a × b = b × a, and
• “×” is distributive over “+”: (a + b) × c = (a × c) + (b × c).
Definition 2 (odd and even). For unary functions Odd and Even that accept a natural number n, we have:
• Even(n) = T if and only if ∃m ∈ N such that n = m × 2, and
• Odd(n) = T if and only if ∃m ∈ N such that n = (S (m × 2)).
Question A
Show by a direct proof that Odd(n) ∧ ¬Even(S n) ⇒ F. (2pts.)
For Questions B to G, consider the definition below (an executable sketch of Definitions 1 to 3 follows after it).
Definition 3 (ordering). For functions Geq and Leq that take two natural numbers, we
have:
• Geq(n, m) = T if and only if ∃k ∈ N such that n = m + k, and
• Leq(n, m) = T if and only if ∃k ∈ N such that n + k = m.
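For intuition only, the definitions above can be sketched in executable form; the representation and function names below are assumptions, and the graded proofs must of course argue from the definitions, not from code.

```python
# Natural numbers as nested successor terms: 0, ("S", 0), ("S", ("S", 0)), ...
def S(x):                    # the successor term (S x)
    return ("S", x)

def to_nat(k):               # the numeral k, i.e. S applied k times to 0
    n = 0
    for _ in range(k):
        n = S(n)
    return n

def add(a, b):               # a + b: apply S to a, b times (as in the hints below)
    while b != 0:
        _, b = b             # peel one successor off b ...
        a = S(a)             # ... and add one to a
    return a

def geq(n, m):               # Geq(n, m): does some k exist with n = m + k?
    while True:
        if n == m:
            return True
        if n == 0:
            return False
        _, n = n             # peel one successor off n

def leq(n, m):               # Leq(n, m): does some k exist with n + k = m?
    while True:
        if n == m:
            return True
        if m == 0:
            return False
        _, m = m             # peel one successor off m
```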
Question B
Show that for n, m ∈ N, Leq(n, m) if and only if Geq(m, n) (i.e., this is equivalent to showing Leq(n, m) ⇒ Geq(m, n) and Leq(n, m) ⇐ Geq(m, n)). (2pts.)
Question C
Show that for n, m, p ∈ N such that n ≠ m ≠ p, if Leq(n, m) ∧ Leq(m, p), then Leq(n, p). (4pts.)
Question D
Show that Geq(n, m) ∧ Leq(n, m) ⇒ n = m. (2pts.)
Question E
Show by induction that ∀n ∈ N, Geq(n, 0). (4pts.)
Question F
Show by a direct proof that ∀n ∈ N, ∃m ∈ N where m ≠ n such that Leq(n, m). (2pts.)
Question G
Show that for all n ∈ N, there exists m ∈ N such that ((Odd(n) ⇒ Even(m)) ∨ (Even(n) ⇒ Odd(m))) ∧ Geq(m, n). Use a method of proof that you think is appropriate (e.g., induction, direct, by contradiction, etc.). Justify your choice in a few sentences. (6pts.)
Hint: Consider the definition of “+”, which is: a + b = c iff c = (S (S . . . (S a) . . .)) (i.e., we apply S b times to a).
Question H
Show that for all natural numbers n and m, we have Geq(n, m) ∨ Leq(n, m). (4pts.)
Hint: Again, consider the definition of “+”, which is: a + b = c iff c = (S (S . . . (S a) . . .)) (i.e., we apply S b times to a). Also recall the definition of a natural number.
Question I
Show that ∀a, b ∈ N such that b ≠ 0, there exist q, r ∈ N such that (a = (b × q) + r) ∧ (Leq(r, b)). (6pts.)