
Constraint Satisfaction Problem-II

LECTURE # 08
TUESDAY, FEBRUARY 14, 2017
SPRING 2017
FAST – NUCES, FAISALABAD CAMPUS

Zain Iqbal
Zain.iqbal@nu.edu.pk
Agenda
• Job-Shop Scheduling
• Forward Checking
• Constraint Propagation
  • Node consistency
  • Arc consistency
  • Path consistency
  • K-consistency
  • Global constraints
• Intelligent Backtracking
  • Constraint Learning
Job-Shop Scheduling
• A small part of the car assembly problem, consisting of 15 tasks (each variable is a task's start time):
  X = {AxleF, AxleB, WheelRF, WheelLF, WheelRB, WheelLB, NutsRF, NutsLF, NutsRB, NutsLB, CapRF, CapLF, CapRB, CapLB, Inspect}
• Precedence constraints:
  • AxleF + 10 ≤ WheelRF;  AxleF + 10 ≤ WheelLF
  • AxleB + 10 ≤ WheelRB;  AxleB + 10 ≤ WheelLB
  • WheelRF + 1 ≤ NutsRF;  NutsRF + 2 ≤ CapRF
  • WheelLF + 1 ≤ NutsLF;  NutsLF + 2 ≤ CapLF
  • WheelRB + 1 ≤ NutsRB;  NutsRB + 2 ≤ CapRB
  • WheelLB + 1 ≤ NutsLB;  NutsLB + 2 ≤ CapLB
• For Inspect we add a constraint of the form X + d_X ≤ Inspect for every other task X (with duration d_X).
• The whole assembly must be done in 30 minutes, so every domain is Di = {1, 2, 3, ..., 27}.
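An illustrative Python sketch of this formulation (the tuple encoding and the satisfies helper are assumed names, not part of the slides):

# Precedence constraints encoded as (before, delay, after), meaning  before + delay <= after.
precedence = [
    ("AxleF", 10, "WheelRF"), ("AxleF", 10, "WheelLF"),
    ("AxleB", 10, "WheelRB"), ("AxleB", 10, "WheelLB"),
    ("WheelRF", 1, "NutsRF"), ("NutsRF", 2, "CapRF"),
    ("WheelLF", 1, "NutsLF"), ("NutsLF", 2, "CapLF"),
    ("WheelRB", 1, "NutsRB"), ("NutsRB", 2, "CapRB"),
    ("WheelLB", 1, "NutsLB"), ("NutsLB", 2, "CapLB"),
]

tasks = ["AxleF", "AxleB", "WheelRF", "WheelLF", "WheelRB", "WheelLB",
         "NutsRF", "NutsLF", "NutsRB", "NutsLB",
         "CapRF", "CapLF", "CapRB", "CapLB", "Inspect"]

# Every variable is a start time; the 30-minute deadline limits each domain to {1, ..., 27}.
domains = {task: set(range(1, 28)) for task in tasks}

def satisfies(assignment):
    """Check a complete assignment of start times against the precedence constraints."""
    return all(assignment[a] + d <= assignment[b] for a, d, b in precedence)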
Heuristic 4: Forward checking
• Idea:
  • Keep track of remaining legal values for unassigned variables
  • Terminate search when any variable has no legal values
• Edge and arc consistency are variants of this idea
• A step toward AC-3: the most efficient algorithm
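A minimal sketch of the forward-checking step (illustrative only; the four-argument conflicts predicate is assumed to be supplied by the particular CSP):

def forward_check(var, value, domains, conflicts):
    """Prune values that conflict with var = value from the other variables' domains.
    Returns the pruned domains, or None if some domain is wiped out (dead end)."""
    new_domains = {v: set(dom) for v, dom in domains.items()}
    new_domains[var] = {value}
    for other, dom in new_domains.items():
        if other == var:
            continue
        dom -= {w for w in dom if conflicts(var, value, other, w)}
        if not dom:
            return None          # a variable has no legal values left: give up on this branch
    return new_domains

In a full backtracking search only the still-unassigned variables need to be pruned; variables assigned earlier already hold a single consistent value.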
Example: 4-Queens Problem
(From B.J. Dorr, U of Md, CMSC 421)

Variable Xi gives the column of the queen in row i; each domain starts as {1,2,3,4}. Forward-checking trace (the slides show a 4x4 board diagram at each step):

• Start: X1 = {1,2,3,4}, X2 = {1,2,3,4}, X3 = {1,2,3,4}, X4 = {1,2,3,4}.
• Assign X1 = 1. Forward checking prunes the attacked squares: X2 = {3,4}, X3 = {2,4}, X4 = {2,3}.
• Assign X2 = 3. Forward checking wipes out X3's domain: X3 = { }, X4 = {2,3}. An empty domain means Backtrack!
• Try X1 = 2 instead (X1's remaining values are {2,3,4}); the other domains are restored to {1,2,3,4}.
• Forward checking for X1 = 2: X2 = {4}, X3 = {1,3}, X4 = {1,3,4}.
• Assign X2 = 4. Forward checking: X3 = {1}, X4 = {1,3}.
• Assign X3 = 1. Forward checking: X4 = {3}.
• Assign X4 = 3. All four queens are placed: X1 = 2, X2 = 4, X3 = 1, X4 = 3.
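A self-contained sketch of backtracking search with forward checking on 4-Queens (illustrative only; the names attacks and solve are assumptions). It finds the same solution as the trace above:

def attacks(r1, c1, r2, c2):
    """Two queens attack each other if they share a column or a diagonal (rows always differ)."""
    return c1 == c2 or abs(c1 - c2) == abs(r1 - r2)

def solve(row=1, domains=None, assignment=()):
    """Place one queen per row, pruning the later rows' domains after each assignment."""
    n = 4
    if domains is None:
        domains = {r: set(range(1, n + 1)) for r in range(1, n + 1)}
    if row > n:
        return assignment                            # every queen placed
    for col in sorted(domains[row]):
        # forward checking: remove squares attacked by (row, col) from the later rows
        pruned = {r: ({col} if r == row else
                      {c for c in domains[r] if r < row or not attacks(row, col, r, c)})
                  for r in domains}
        if all(pruned[r] for r in range(row + 1, n + 1)):    # no domain wiped out
            result = solve(row + 1, pruned, assignment + ((row, col),))
            if result is not None:
                return result
        # otherwise forward checking detected the dead end: try the next column
    return None

print(solve())    # ((1, 2), (2, 4), (3, 1), (4, 3)): queens in columns 2, 4, 1, 3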
Constraint propagation
• Forward checking only looks at the variables connected to the current assignment in the constraint graph, so it does not detect every failure early.
• E.g. NT and SA cannot both be blue, but forward checking does not notice this until one of them is actually assigned.
• Constraint propagation repeatedly enforces constraints locally.
• An algorithm can search (choose a new variable assignment from several possibilities) or do a specific type of inference called constraint propagation.
Node Consistency

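Following Russell & Norvig (which this lecture draws on), a single variable is node-consistent if every value in its domain satisfies the variable's unary constraints. A minimal sketch, with the unary predicate map as an assumption:

def make_node_consistent(domains, unary):
    """Delete every value that violates its variable's own (unary) constraint.
    `unary` maps a variable to a predicate over single values; variables without
    a unary constraint are left untouched."""
    return {var: {v for v in dom if unary.get(var, lambda _: True)(v)}
            for var, dom in domains.items()}

# e.g. South Australians dislike green, so green is removed from SA's domain:
domains = {"SA": {"red", "green", "blue"}, "WA": {"red", "green", "blue"}}
print(make_node_consistent(domains, {"SA": lambda colour: colour != "green"}))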
Arc consistency
• The simplest form of propagation makes each arc consistent.
• An arc X → Y is consistent iff for every value x of X there is some allowed value y of Y.
• (The slides animate this on the constraint graph: an inconsistent arc is repaired by removing the unsupported value, e.g. blue, from the source variable's domain, and that repair can make another arc inconsistent again.)
• If X loses a value, the neighbours of X need to be rechecked: incoming arcs can become inconsistent again (outgoing arcs will stay consistent).
• Arc consistency detects failure earlier than forward checking.
• It can be run as a preprocessor or after each assignment.
• Time complexity: O(n²d³)
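A minimal sketch of repairing a single arc X → Y (the name revise follows the standard AC-3 presentation; the binary allowed predicate is an assumption about how constraints are stored):

def revise(domains, X, Y, allowed):
    """Make the arc X -> Y consistent: delete every x in D(X) that has no allowed y in D(Y).
    Returns True if D(X) changed, i.e. arcs pointing into X must be rechecked."""
    unsupported = {x for x in domains[X]
                   if not any(allowed(x, y) for y in domains[Y])}
    domains[X] -= unsupported
    return bool(unsupported)

# e.g. WA -> SA with a "different colours" constraint: blue is removed from WA's domain
domains = {"WA": {"red", "blue"}, "SA": {"blue"}}
print(revise(domains, "WA", "SA", lambda x, y: x != y), domains["WA"])   # True {'red'}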
Arc Consistency
• This is a propagation algorithm: it is like sending messages to neighbours on the graph. How do we schedule these messages?
• Every time a domain changes, all incoming messages need to be re-sent.
• Repeat until convergence, i.e. until no message will change any domain.
• Since we only remove values from domains when they can never be part of a solution, an empty domain means no solution is possible at all, so we back out of that branch.
• Forward checking is simply sending messages into the variable that just got its value assigned; it is the first step of arc consistency.
AC-3 Algorithm
• Time complexity: O(n²d³)
• AC-3 does constraint propagation in the usual way (repeatedly revising arcs taken from a queue), and if any variable has its domain reduced to the empty set, the call to AC-3 fails and we know to backtrack immediately.
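A compact illustrative sketch of the standard AC-3 loop (the neighbors mapping and the four-argument allowed predicate are assumed representations, not taken from the slide's figure):

from collections import deque

def ac3(domains, neighbors, allowed):
    """Enforce arc consistency on every arc; return False if some domain is wiped out.
    `neighbors[X]` lists the variables that share a constraint with X, and
    `allowed(X, x, Y, y)` says whether X = x together with Y = y satisfies that constraint."""
    queue = deque((X, Y) for X in domains for Y in neighbors[X])
    while queue:
        X, Y = queue.popleft()
        unsupported = {x for x in domains[X]
                       if not any(allowed(X, x, Y, y) for y in domains[Y])}
        if unsupported:                          # revise the arc X -> Y
            domains[X] -= unsupported
            if not domains[X]:
                return False                     # empty domain: fail and backtrack
            # arcs into X (other than from Y) may have become inconsistent again
            queue.extend((Z, X) for Z in neighbors[X] if Z != Y)
    return True

Each of the at most n² arcs can re-enter the queue at most d times (once for each value deleted from its second variable's domain), and each revision inspects at most d² value pairs, which is where the O(n²d³) bound quoted above comes from.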
Path Consistency
• Arc consistency cannot detect every failure: with only two colours available, every arc of the map-colouring graph can be consistent even though no solution exists.
• Path consistency tightens the binary constraints by using implicit constraints that are inferred by looking at triples of variables.
• A two-variable set {Xi, Xj} is path-consistent with respect to a third variable Xm if, for every assignment {Xi = a, Xj = b} consistent with the constraints on {Xi, Xj}, there is an assignment to Xm that satisfies the constraints on {Xi, Xm} and {Xm, Xj}. This is called path consistency because one can think of it as looking at a path from Xi to Xj with Xm in the middle.
• E.g. make {WA, SA} path-consistent with respect to NT. With two colours there are only two consistent assignments for {WA, SA}: {WA = red, SA = blue} and {WA = blue, SA = red}.
• With either of these assignments NT can be neither red nor blue (because it would conflict with either WA or SA).
• Because there is no valid choice for NT, both assignments are eliminated, so the two-colour problem has no solution.
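A small sketch of this check (illustrative only; the two-colour domains and the "neighbouring regions differ" constraint mirror the example above, while the function name is an assumption):

def path_consistent_pairs(Di, Dj, Dm, cij, cim, cmj):
    """Keep only the assignments (a, b) for {Xi, Xj} that some value m of Xm can support."""
    return {(a, b) for a in Di for b in Dj
            if cij(a, b) and any(cim(a, m) and cmj(m, b) for m in Dm)}

two_colours = {"red", "blue"}
differ = lambda x, y: x != y              # adjacent regions must take different colours
# {WA, SA} with respect to NT: no pair survives, so two colours cannot work
print(path_consistent_pairs(two_colours, two_colours, two_colours, differ, differ, differ))
# -> set()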
K-consistency
• Arc consistency does not detect all inconsistencies:
Partial assignment {WA=red, NSW=red} is
inconsistent.
• Stronger forms of propagation can be defined using
the notion of k-consistency.
• A CSP is k-consistent if for any set of k-1 variables
and for any consistent assignment to those
variables, a consistent value can always be assigned
to any kth variable.
E.g. 1-consistency or node-consistency
E.g. 2-consistency or arc-consistency
E.g. 3-consistency or path-consistency
Strongly k-consistent
• A graph is strongly k-consistent if it is k-consistent and is also (k-1)-consistent, (k-2)-consistent, ... all the way down to 1-consistent.
• This is ideal, since a solution to a strongly n-consistent CSP over n variables can be found in time O(n²d) instead of the O(dⁿ) of plain backtracking: choose any consistent value for X1, then 2-consistency guarantees a consistent value for X2, 3-consistency one for X3, and so on, searching at most d values per variable.
• YET no free lunch: any algorithm for establishing n-consistency must take time exponential in n in the worst case, and it also requires space exponential in n.
Further improvements
Checking special (global) constraints
• Checking the Alldiff(...) constraint
  • E.g. the partial assignment {WA = red, NSW = red} leaves SA, NT and Q, which must all be different, with only two remaining colours, so it is inconsistent.
• Checking the Atmost(...) resource constraint
  • E.g. Atmost(10, P1, P2, P3, P4): the four variables may use at most 10 units in total.
  • With domains {2, 3, 4, 5, 6}, the values 5 and 6 can be deleted from every domain (the other three variables already need at least 2 each).
• Bounds propagation for large value domains, where each domain is kept as an interval of lower and upper bounds.
  • E.g. flight passengers: D1 = [0, 165] and D2 = [0, 385];
  • adding the constraint F1 + F2 = 420 tightens the bounds to D1 = [35, 165] and D2 = [255, 385].
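Illustrative sketches of these two checks (the names prune_atmost and propagate_sum_bounds are assumptions, not from the slides):

def prune_atmost(limit, domains):
    """Atmost(limit, ...): delete any value too large to fit once every other
    variable takes at least its own minimum."""
    minima = {v: min(dom) for v, dom in domains.items()}
    total_min = sum(minima.values())
    return {v: {x for x in dom if x <= limit - (total_min - minima[v])}
            for v, dom in domains.items()}

print(prune_atmost(10, {p: {2, 3, 4, 5, 6} for p in ["P1", "P2", "P3", "P4"]}))
# every domain shrinks to {2, 3, 4}: the values 5 and 6 are deleted

def propagate_sum_bounds(d1, d2, total):
    """Bounds propagation for F1 + F2 = total, with domains given as (lo, hi) intervals."""
    (lo1, hi1), (lo2, hi2) = d1, d2
    return ((max(lo1, total - hi2), min(hi1, total - lo2)),
            (max(lo2, total - hi1), min(hi2, total - lo1)))

print(propagate_sum_bounds((0, 165), (0, 385), 420))   # ((35, 165), (255, 385))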
Intelligent backtracking:
looking backward
Intelligent backtracking
• The standard form is chronological backtracking: when a branch fails, try a different value for the most recently assigned (preceding) variable.
• A more intelligent approach backtracks to the conflict set:
  • the set of variables that caused the failure, i.e. the previously assigned variables that are connected to X by constraints.
  • Back-jumping moves back to the most recent element of the conflict set.
  • Forward checking can be used to determine the conflict set as it prunes domains.
Example: Map Coloring
• Suppose Q, NSW, V and T have been assigned. When we try the next variable, SA, we see that every value violates a constraint. Chronological backtracking backs up to T and tries a new colour for Tasmania, but that is silly: recolouring Tasmania cannot possibly fix the problem with South Australia.
• Instead we keep track of the set of assignments that are in conflict with some value for SA. The set in this case, {Q = red, NSW = green, V = blue}, is called the conflict set for SA.
• The back-jumping method backtracks to the most recent assignment in the conflict set; in this case, back-jumping would jump over Tasmania and try a new value for V.
• Why prefer back-jumping over forward checking?
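A small sketch of computing SA's conflict set and the back-jump target (illustrative only; the adjacency dictionary, assignment order and Tasmania's colour are assumptions spelled out to match the example):

# Adjacency of the Australia map-colouring constraint graph (Tasmania is unconnected).
NEIGHBOURS = {
    "WA": {"NT", "SA"}, "NT": {"WA", "SA", "Q"}, "Q": {"NT", "SA", "NSW"},
    "NSW": {"Q", "SA", "V"}, "V": {"SA", "NSW"},
    "SA": {"WA", "NT", "Q", "NSW", "V"}, "T": set(),
}

def conflict_set(var, assignment, neighbours):
    """Previously assigned variables connected to `var` by a constraint; these are
    the assignments that rule out values of `var`."""
    return [u for u in assignment if u in neighbours[var]]

order = ["Q", "NSW", "V", "T"]                   # order in which the example assigns variables
assignment = {"Q": "red", "NSW": "green", "V": "blue", "T": "red"}
conflicts = conflict_set("SA", assignment, NEIGHBOURS)
print(conflicts)                                 # ['Q', 'NSW', 'V']  (Tasmania is not in it)
print(max(conflicts, key=order.index))           # 'V': jump over Tasmania and retry V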
Example: Map Coloring
• Consider again the partial assignment {WA = red, NSW = red}, which we know is inconsistent.
• Suppose we try T = red next and then assign NT, Q, V, SA.
• We know that no assignment can work for these last four variables, so eventually we run out of values to try at NT.

Where to backtrack?
• The four variables NT, Q, V and SA, taken together, failed because of a set of preceding variables, which must be those variables that directly conflict with the four.
• In this case the set is {WA, NSW}, so the algorithm should backtrack to NSW and skip over Tasmania.
• This is conflict-directed back-jumping.
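A sketch of this rule (illustrative only; the adjacency dictionary is repeated from the previous sketch so the snippet runs on its own, and the assignment order is an assumption matching the example):

NEIGHBOURS = {
    "WA": {"NT", "SA"}, "NT": {"WA", "SA", "Q"}, "Q": {"NT", "SA", "NSW"},
    "NSW": {"Q", "SA", "V"}, "V": {"SA", "NSW"},
    "SA": {"WA", "NT", "Q", "NSW", "V"}, "T": set(),
}

def deep_conflict_set(failed_vars, assignment, neighbours):
    """Conflict-directed back-jumping: the preceding assigned variables that directly
    conflict with any member of the set of variables that failed together."""
    return {u for v in failed_vars for u in neighbours[v] if u in assignment}

assignment = {"WA": "red", "NSW": "red", "T": "red"}          # assigned before NT, Q, V, SA
print(deep_conflict_set({"NT", "Q", "V", "SA"}, assignment, NEIGHBOURS))
# prints {'WA', 'NSW'} (in some order): backtrack to NSW and skip over Tasmania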
Constraint learning
• Constraint learning is the idea of finding a minimum set of
variables from the conflict set that causes the problem
• This set of variables, along with their corresponding values,
is called a no-good.
• We then record the no-good, either by adding a new constraint to the CSP or by keeping a separate cache of no-goods.
• No-goods can be effectively used by forward checking or by
back-jumping.
• Constraint learning is one of the most important techniques
used by modern CSP solvers to achieve efficiency on
complex problems.
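A minimal sketch of recording and reusing no-goods (the helper names are assumptions, not from the slides):

def record_nogood(nogoods, partial_assignment):
    """Cache a no-good: a set of variable/value pairs that can never be part of a solution."""
    nogoods.add(frozenset(partial_assignment.items()))

def violates_nogood(nogoods, assignment):
    """True if the current (partial) assignment contains any recorded no-good."""
    items = set(assignment.items())
    return any(ng <= items for ng in nogoods)

nogoods = set()
record_nogood(nogoods, {"WA": "red", "NSW": "red"})      # the no-good from the previous example
print(violates_nogood(nogoods, {"WA": "red", "NSW": "red", "T": "red"}))   # True: prune immediately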
Reading Material
• Russell & Norvig: Chapter # 7
• David Poole: Chapter # 4