Constraint Satisfaction Problems

Chapter 5
Outline
 Constraint Satisfaction Problems (CSP)
 Backtracking search for CSPs
 Local search for CSPs

Constraint satisfaction problems (CSPs)
 Standard search problem:
 state is a "black box" – any data structure that supports successor function, heuristic function, and goal test

 CSP:
 state is defined by variables Xi with values from domain Di
 goal test is a set of constraints specifying allowable combinations of values for subsets
of variables

 Simple example of a formal representation language

 Allows useful general-purpose algorithms with more power than
standard search algorithms

Formulation as Standard Search Problem
 States
 Set of value assignments to some or all of the variables
 Initial state
 The empty assignment { }
 Successor function
 Assign a value to any unassigned variable, without conflicts with previously assigned variables
 Goal test
 Check whether the current assignment is complete
 Path cost
 A constant cost per step
Benefits from treating a problem as a CSP

 The successor function and goal test can be written in a generic way that applies to all CSPs.

 We can develop effective, generic heuristics that require no domain-specific expertise.

 The structure of the constraint graph can be used to simplify the solution process.
Example of CSP: 8-queens problem

 Variables
 {Q1, Q2, Q3, Q4, Q5, Q6, Q7, Q8}
 Domain
 Q1, …, Q8 ∈ {(1,1), (1,2), …, (8,8)}
 Constraint set
 No queen attacks any other
Example: Street Puzzle
(five houses in a row, numbered 1 to 5 from the left)
Ni = {English, Spaniard, Japanese, Italian, Norwegian}
Ci = {Red, Green, White, Yellow, Blue}
Di = {Tea, Coffee, Milk, Fruit-juice, Water}
Ji = {Painter, Sculptor, Diplomat, Violinist, Doctor}
Ai = {Dog, Snails, Fox, Horse, Zebra}

The Englishman lives in the Red house
The Spaniard has a Dog
The Japanese is a Painter
The Italian drinks Tea
The Norwegian lives in the first house on the left
The owner of the Green house drinks Coffee
The Green house is on the right of the White house
The Sculptor breeds Snails
The Diplomat lives in the Yellow house
The owner of the middle house drinks Milk
The Norwegian lives next door to the Blue house
The Violinist drinks Fruit juice
The Fox is in the house next to the Doctor's
The Horse is next to the Diplomat's

Who owns the Zebra? Who drinks Water?
Example: Task Scheduling
[Figure: four tasks T1–T4 on a timeline]

T1 must be done during T3
T2 must be achieved before T1 starts
T2 must overlap with T3
T4 must start after T1 is complete

• Are the constraints compatible?
• Find the temporal relation between every two tasks
Example: Map-Coloring

 Variables: WA, NT, Q, NSW, V, SA, T
 Domains: Di = {red, green, blue}
 Constraints: adjacent regions must have different colors

 e.g., WA ≠ NT, or (WA,NT) ∈ {(red,green), (red,blue), (green,red), (green,blue), (blue,red), (blue,green)}
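The problem above can be written down directly as data. A minimal sketch in Python (the names `variables`, `domains`, `neighbors`, and `consistent` are illustrative, not from any particular library):

```python
# Map-coloring CSP for Australia as plain data (illustrative names).
variables = ["WA", "NT", "Q", "NSW", "V", "SA", "T"]
domains = {v: {"red", "green", "blue"} for v in variables}
# The constraint graph: adjacent regions must have different colors.
neighbors = {
    "WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"], "Q": ["NT", "SA", "NSW"],
    "NSW": ["Q", "SA", "V"], "V": ["NSW", "SA"],
    "SA": ["WA", "NT", "Q", "NSW", "V"], "T": [],
}

def consistent(assignment):
    """True iff no two adjacent assigned regions share a color."""
    return all(assignment[x] != assignment[y]
               for x in assignment for y in neighbors[x] if y in assignment)

# A complete coloring that satisfies every constraint:
solution = {"WA": "red", "NT": "green", "Q": "red",
            "NSW": "green", "V": "red", "SA": "blue", "T": "green"}
print(consistent(solution))  # True
```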
CSP : Terminologies
 Consistent (or legal) assignment
 An assignment that does not violate any
constraints
 Complete assignment
 An assignment in which every variable is
mentioned
 Solution
 A complete and consistent assignment
 A complete assignment that satisfies all the
constraints
 Constraint graph
 Node : variable
 Arc : constraint
[Figure: constraint graph of the map-coloring problem – nodes WA, NT, SA, Q, NSW, V, T; each arc means "has different color"]
Example: Map-Coloring

 Solutions are complete and consistent assignments, e.g.,
WA=red, NT=green, Q=red, NSW=green, V=red, SA=blue, T=green
Constraint graph
 Binary CSP: each constraint relates two variables

 Constraint graph: nodes are variables, arcs are constraints

Varieties of CSPs
 Discrete variables

 finite domains:
 n variables, domain size d → O(d^n) complete assignments
 e.g., Boolean CSPs, incl. Boolean satisfiability (NP-complete)
 infinite domains:
 integers, strings, etc.
 e.g., job scheduling, variables are start/end days for each job
 need a constraint language, e.g., StartJob1 + 5 ≤ StartJob3

 Continuous variables

 e.g., start/end times for Hubble Space Telescope observations
 linear constraints solvable in polynomial time by linear programming

Varieties of constraints
 Unary constraints involve a single variable,
 e.g., SA ≠ green

 Binary constraints involve pairs of variables,
 e.g., SA ≠ WA

 Higher-order constraints involve 3 or more variables,
 e.g., cryptarithmetic column constraints

Preference constraints
 ex) Prof. X might prefer teaching in the morning.
 ex) red is better than yellow
 Can be encoded as a cost for each variable assignment
 Many real-world CSPs with preferences are constrained optimization problems
Example: Cryptarithmetic

 Variables: F, T, U, W, R, O, X1, X2, X3
 Domains: {0,1,2,3,4,5,6,7,8,9}
 Constraints: Alldiff(F, T, U, W, R, O)
 O + O = R + 10 · X1
 X1 + W + W = U + 10 · X2
 X2 + T + T = O + 10 · X3
 X3 = F, T ≠ 0, F ≠ 0
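These constraints encode the column arithmetic of the puzzle TWO + TWO = FOUR, with X1, X2, X3 as carry digits. A brute-force check of the global constraint in Python (a sketch for illustration only; a real CSP solver would propagate the column constraints instead of enumerating):

```python
from itertools import permutations

# Brute-force TWO + TWO = FOUR: assign distinct digits to
# F, T, U, W, R, O, keeping T != 0 and F != 0.
solutions = []
for F, T, U, W, R, O in permutations(range(10), 6):
    if T == 0 or F == 0:
        continue
    two = 100 * T + 10 * W + O
    four = 1000 * F + 100 * O + 10 * U + R
    if two + two == four:
        solutions.append((two, four))

print(solutions)  # includes (734, 1468): 734 + 734 = 1468
```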
Real-world CSPs
 Assignment problems
 e.g., who teaches what class

 Timetabling problems
 e.g., which class is offered when and where?

 Transportation scheduling

 Factory scheduling

 Notice that many real-world problems involve real-valued variables

Standard search formulation (incremental)

States are defined by the values assigned so far

    Initial state: the empty assignment { }
    Successor function: assign a value to an unassigned variable that does not conflict
with current assignment
 fail if no legal assignments

    Goal test: the current assignment is complete

1. This is the same for all CSPs
2. Every solution appears at depth n with n variables
 → use depth-first search
3. Path is irrelevant, so can also use complete-state formulation
4. b = (n − l)d at depth l, hence n!·d^n leaves
Solving CSP by search: Backtracking Search
 BFS vs. DFS
 BFS → terrible!
 A tree with n!·d^n leaves: (nd) · ((n−1)d) · ((n−2)d) · … · (1·d) = n!·d^n
 Reduction by commutativity of CSP
 A solution depends on the combination of assignments, not on the order (permutation) in which they are made.
 A tree with d^n leaves
 DFS
 Popularly used
 Every solution must be a complete assignment and therefore appears at depth n if there are n variables
 The search tree extends only to depth n.
 A variant of DFS: backtracking search
 Chooses values for one variable at a time
 Backtracks when a variable has no legal values left, even before reaching a leaf.
 Better than BFS due to backtracking, but still inefficient!!
Backtracking search
 Variable assignments are commutative, i.e.,
[ WA = red then NT = green ] is the same as [ NT = green then WA = red ]

 Only need to consider assignments to a single variable at each node
 → b = d and there are d^n leaves

 Depth-first search for CSPs with single-variable assignments is called
backtracking search

 Backtracking search is the basic uninformed algorithm for CSPs

 Can solve n-queens for n ≈ 25

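The idea above can be sketched as a short recursive function, shown here on the map-coloring problem with inequality constraints (function and variable names are illustrative):

```python
def backtracking_search(variables, domains, neighbors, assignment=None):
    """Plain depth-first backtracking: assign one variable per node,
    trying each value and undoing it on failure."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment  # complete and consistent
    var = next(v for v in variables if v not in assignment)  # static order
    for value in domains[var]:
        # Check consistency against already-assigned neighbors only.
        if all(assignment.get(n) != value for n in neighbors[var]):
            assignment[var] = value
            result = backtracking_search(variables, domains, neighbors, assignment)
            if result is not None:
                return result
            del assignment[var]  # backtrack
    return None

variables = ["WA", "NT", "Q", "NSW", "V", "SA", "T"]
domains = {v: ["red", "green", "blue"] for v in variables}
neighbors = {"WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
             "Q": ["NT", "SA", "NSW"], "NSW": ["Q", "SA", "V"],
             "V": ["NSW", "SA"], "SA": ["WA", "NT", "Q", "NSW", "V"], "T": []}
print(backtracking_search(variables, domains, neighbors))
```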
Backtracking search
[Figure: pseudocode of the recursive BACKTRACKING-SEARCH algorithm]
Backtracking example
[Figures: four slides stepping through backtracking on the map-coloring example, one assignment per step]
Improving backtracking efficiency
 General-purpose methods can give huge gains in speed:

 Which variable should be assigned next?

 In what order should its values be tried?

 Can we detect inevitable failure early?

Improving Backtracking Efficiency
 Which variable should be assigned next?
 Minimum Remaining Values heuristic
 In what order should its values be tried?
 Least Constraining Value heuristic
 → Variable & value ordering to increase the likelihood of success
 Can we detect inevitable failure early?
 Forward checking
 Constraint propagation (arc consistency)
 When a path fails, can the search avoid repeating this failure?
 Backjumping
 → Early failure detection to decrease the likelihood of failure
 Can we take advantage of problem structure?
 Tree-structured CSP
 → Restructuring to reduce the problem's complexity
General-purpose methods
Heuristics for variable & value ordering

 Variable selection
 Most constrained variable
 MRV (minimum remaining values) heuristic
 Choose the variable with the fewest legal values.
 Most constraining variable
 Degree heuristic
 For the very first variable selection, the MRV heuristic doesn't help at all.
 Use the degree heuristic (select the variable with the largest degree, i.e., with the most constraints on remaining variables) → tie-breaker!!
 Value ordering
 Least-constraining-value heuristic
 Prefer the value that rules out the fewest choices for the neighboring variables
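The three heuristics can be sketched as small ordering functions over a domains/neighbors description of a binary CSP (a hypothetical interface chosen for illustration):

```python
def mrv(assignment, domains):
    """Minimum remaining values: pick the unassigned variable
    with the fewest legal values left."""
    unassigned = [v for v in domains if v not in assignment]
    return min(unassigned, key=lambda v: len(domains[v]))

def degree(assignment, neighbors, candidates):
    """Degree heuristic (tie-breaker): among candidates, pick the
    variable constraining the most unassigned neighbors."""
    return max(candidates,
               key=lambda v: sum(1 for n in neighbors[v] if n not in assignment))

def lcv_order(var, domains, neighbors):
    """Least constraining value: try values that rule out the fewest
    choices in the neighbors' domains first."""
    def ruled_out(value):
        return sum(1 for n in neighbors[var] if value in domains[n])
    return sorted(domains[var], key=ruled_out)
```

For example, with domains A = {r}, B = {r, g}, C = {r, g, b} and a chain A–B–C, MRV picks A, the degree heuristic prefers B over A, and LCV orders B's values g before r.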
Most constrained variable
 Most constrained variable:
choose the variable with the fewest legal values

 a.k.a. minimum remaining values (MRV) heuristic

Heuristics for variable & value ordering
Example for MRV

[Figure: three snapshots of the Australia map; MRV selects the variable whose domain is smallest in each]
NT ={R, G, B}        NT ={G, B}            NT =G
Q = {R, G, B}        Q = {R, G, B}         Q = {R, B}
WA = {R, G, B}       WA = R                WA = R
SA = {R, G, B}       SA = {G, B}           SA = {B}
NSW = {R, G, B}      NSW = {R, G, B}       NSW = {R, G, B}
V = {R, G, B}        V = {R, G, B}         V = {R, G, B}
T = {R, G, B}        T = {R, G, B}         T = {R, G, B}
Most constraining variable
 Tie-breaker among most constrained variables
 Most constraining variable:
 choose the variable with the most constraints on remaining
variables

Heuristics for variable & value ordering
Example for degree heuristic

[Figure: three snapshots of the Australia map; the number next to each variable is its degree]

NT = {R, G, B} (3)    NT = {R, G}    NT = G
Q = {R, G, B} (3)     Q = {R, G}     Q = {R}
WA = {R, G, B} (2)    WA = {R, G}    WA = {R}
SA = {R, G, B} (5)    SA = B         SA = B
NSW = {R, G, B} (3)   NSW = {R, G}   NSW = {R, G}
V = {R, G, B} (2)     V = {R, G}     V = {R, G}
T = {R, G, B} (0)     T = {R, G, B}  T = {R, G, B}
Least constraining value
 Given a variable, choose the least constraining value:
 the one that rules out the fewest values in the remaining
variables

 Combining these heuristics makes 1000 queens feasible

Heuristics for variable & value ordering
Example for LCV

Red is the least-constraining value for Q; terminate search when any variable has no legal values.

[Figure: five snapshots of the Australia map; choosing Q = R leaves SA = {B}, while Q = B leaves SA empty]
NT ={R, G, B}      NT ={G, B}        NT =G               NT =G           NT =G
Q = {R, G, B}      Q = {R, G, B}     Q = {R, B}          Q=R             Q=B
WA = {R, G, B}     WA = R            WA = R              WA = R          WA = R
SA = {R, G, B}     SA = {G, B}       SA = {B}            SA = {B}        SA = { } (fail)
NSW = {R, G, B}    NSW = {R, G, B}   NSW = {R, G, B}     NSW = {G, B}    NSW = {R,G}
V = {R, G, B}      V = {R, G, B}     V = {R, G, B}       V = {R, G, B}   V = {R, G, B}
T = {R, G, B}      T = {R, G, B}     T = {R, G, B}       T = {R, G, B}   T = {R, G, B}
Forward checking (1/4)
Heuristics for early failure detection

 How do we find out which variable has the minimum remaining values?
 What eliminates values from the domains?

NT = red, Q = green

WA = {red, green, blue} → {green, blue}
SA = {red, green, blue} → {blue}
NSW = {red, green, blue} → {red, blue}
V = {red, green, blue}
T = {red, green, blue}
Forward checking (2/4)
Heuristics for early failure detection

 Forward checking
 A variable X is assigned.
 Look at each unassigned variable Y connected to X by a constraint
 Delete any value of Y's domain that is inconsistent with X
 If any Y is left with an empty domain, undo this assignment.
 → Backtrack!!
 If not, MRV!!
 Forward checking is an obvious partner of the MRV heuristic!!
 The MRV heuristic is applied after forward checking.
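For a map-coloring-style CSP (binary inequality constraints), forward checking after each assignment can be sketched as follows; the returned pruning list is what gets undone on backtracking (names are illustrative):

```python
def forward_check(var, value, domains, neighbors, assignment):
    """After assigning var=value, prune value from each unassigned
    neighbor's domain. Returns the prunings (for undoing on backtrack),
    or None if some neighbor's domain is wiped out."""
    pruned = []
    for n in neighbors[var]:
        if n not in assignment and value in domains[n]:
            domains[n].remove(value)
            pruned.append((n, value))
            if not domains[n]:           # empty domain: inevitable failure
                undo(pruned, domains)
                return None
    return pruned

def undo(pruned, domains):
    """Restore values removed by a failed forward check."""
    for n, value in pruned:
        domains[n].add(value)

# Demo: the slide's scenario NT = red, Q = green on the Australia map.
neighbors = {"WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"], "Q": ["NT", "SA", "NSW"],
             "NSW": ["Q", "SA", "V"], "V": ["NSW", "SA"],
             "SA": ["WA", "NT", "Q", "NSW", "V"], "T": []}
domains = {v: {"red", "green", "blue"} for v in neighbors}
assignment = {"NT": "red"}
forward_check("NT", "red", domains, neighbors, assignment)
assignment["Q"] = "green"
forward_check("Q", "green", domains, neighbors, assignment)
print(domains["SA"])  # {'blue'}
```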
Heuristics for early failure detection
Forward checking (3/4) – an example

[Figure: the Australia map with the domain table for WA, NT, Q, SA, NSW, V, T as forward checking prunes values]
Heuristics for early failure detection
Forward checking (4/4) – heuristic flow

[Flow: forward checking → select the MRV variable (on a tie, use the degree heuristic) → select the LCV value → forward checking again → goal test; repeat]
Forward checking
 Idea:
 Keep track of remaining legal values for unassigned variables
 Terminate search when any variable has no legal values

[Figures: four slides showing forward checking progressively pruning domains on the map-coloring example]
Heuristics for early failure detection
Constraint propagation: arc consistency

 Forward checking propagates information from assigned to unassigned variables, but doesn't provide early detection for all failures
 Simplest form of propagation makes each arc consistent
 The (directed) arc (X → Y) is consistent
 ≡ for every value x ∈ X, there is some allowed value y ∈ Y
 If X loses a value, neighbors of X need to be rechecked
Heuristics for early failure detection
Constraint propagation: arc consistency
 Detects inconsistencies that pure forward checking does not detect
 Detects failure earlier than forward checking
 Yields an equivalent CSP with possibly reduced domains
 Provides a fast method of constraint propagation: O(n²d³)
 n² : the maximum number of arcs
 d : each arc can be re-inserted at most d times (once per deleted value)
 d² : the cost of checking the consistency of one arc
 Can be run as a preprocessor or after each assignment
Constraint propagation
 Forward checking propagates information from assigned to unassigned
variables, but doesn't provide early detection for all failures:

 NT and SA cannot both be blue!
 Constraint propagation repeatedly enforces constraints locally

[Figures: three slides making each arc of the map-coloring constraint graph consistent, deleting unsupported values step by step]
Arc consistency
 Simplest form of propagation makes each arc consistent
 X → Y is consistent iff for every value x of X there is some allowed y
 If X loses a value, neighbors of X need to be rechecked

 Arc consistency detects failure earlier than forward checking
 Can be run as a preprocessor or after each assignment
Arc consistency algorithm AC-3

 Time complexity: O(n²d³)
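A compact sketch of AC-3 in Python, assuming domains as mutable sets and a binary `constraint(x, y)` predicate (this follows the textbook algorithm; the interface is illustrative):

```python
from collections import deque

def ac3(domains, neighbors, constraint):
    """Enforce arc consistency in place; domains maps var -> set of values.
    Returns False iff some domain is wiped out (no solution)."""
    queue = deque((xi, xj) for xi in neighbors for xj in neighbors[xi])
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, xi, xj, constraint):
            if not domains[xi]:
                return False
            # xi lost a value: its other neighbors must be rechecked.
            for xk in neighbors[xi]:
                if xk != xj:
                    queue.append((xk, xi))
    return True

def revise(domains, xi, xj, constraint):
    """Remove values of xi that have no supporting value in xj."""
    removed = False
    for x in set(domains[xi]):
        if not any(constraint(x, y) for y in domains[xj]):
            domains[xi].discard(x)
            removed = True
    return removed
```

On the map-coloring example with WA = red and Q = green, AC-3 reduces SA and NT to {blue} and then detects the conflict between them, a failure that forward checking alone misses.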
Heuristics for early failure detection
Special constraints – Alldiff

 Problem-dependent constraints
 Alldiff constraint
 If there are m variables involved in the constraint and n possible distinct values, then m > n → the Alldiff constraint is inconsistent
 An example
 3 variables: NT, SA, Q
 NT, SA, Q ∈ {green, blue}
 m = 3 > n = 2
 → inconsistent
Heuristics for early failure detection

Special constraints – Atmost
 Resource constraint
 Atmost constraint (use of a limited resource)
 An example
 In total, no more than 10 personnel should be assigned to the jobs.
 Atmost(10, PA1, PA2, PA3, PA4)
 PA1–PA4 ∈ {3,4,5,6} (min value = 3)
 3+3+3+3 = 12 > 10 (minimum demand 12, but available resource 10)
 → inconsistent
 PA1–PA4 ∈ {2,3,4,5,6}
 2+2+2+2 = 8 < 10 (consistent)
 2+2+2+5 (or 6) > 10 (inconsistent)
 → delete 5 and 6 from the domains
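The pruning described above can be sketched generically: a value v of one variable is deletable if v plus the minima of all the other variables already exceeds the limit (illustrative function name):

```python
def prune_atmost(limit, domains):
    """Atmost(limit, X1..Xk): prune values that cannot participate in any
    consistent assignment, assuming every other variable takes its minimum."""
    if sum(min(d) for d in domains) > limit:
        return None  # inconsistent: even the minima exceed the limit
    pruned = []
    for i, d in enumerate(domains):
        rest = sum(min(dj) for j, dj in enumerate(domains) if j != i)
        pruned.append({v for v in d if rest + v <= limit})
    return pruned

print(prune_atmost(10, [{2, 3, 4, 5, 6}] * 4))  # -> four copies of {2, 3, 4}
```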
Heuristics for early failure detection – Backjumping

 The weakness of backtracking
 If we fail at SA, chronological backtracking goes back to T.
 But T is not relevant to SA.
 Backjumping
 Backtracks to the most recent variable in the conflict set.
 Conflict set: the set of previously assigned variables that caused the failure

[Figure: assignment order Q (1), NSW (2), V (3), T (4); SA is the next variable and fails]
Solving a CSP

Interweave constraint propagation, e.g.,
• forward checking
• AC3
and backtracking

+ Take advantage of the CSP structure
4-Queens Problem

X1, X2, X3, X4 ∈ {1,2,3,4} (Xi = row of the queen in column i)

[Figures: ten slides stepping through backtracking with forward checking on a 4×4 board, showing the domains of X1–X4 shrinking after each assignment]
General heuristic for complexity reduction

 Decompose the problem into independent sub-problems
 Solve each sub-problem separately
 Ex) the mainland and Tasmania
 Independent sub-problems are found by analyzing the connected components of the constraint graph
Algorithm for tree-structured CSP
 If the constraint graph is a tree, we can solve the CSP in time linear in the number of variables
1. Choose a variable as root
2. Order variables from root to leaves such that every node's parent precedes it in the ordering
3. For j from n down to 2, apply arc consistency to the arc (Parent(Xj), Xj)
 → makes the constraint graph arc-consistent
4. For j from 1 to n, assign Xj consistently with Parent(Xj)
 → just picks a consistent value (no backtracking, thanks to arc consistency)
 Time complexity: O(nd²)
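Steps 1–4 can be sketched directly in Python, assuming a root-first variable ordering and a parent map are already given (illustrative interface; domains are assumed non-empty):

```python
def solve_tree_csp(order, parent, domains, constraint):
    """Solve a tree-structured CSP.
    order: variables listed root-first; parent[v] is v's parent (None for root).
    constraint(parent_value, child_value) -> bool."""
    domains = {v: set(d) for v, d in domains.items()}  # work on a copy
    # Backward pass (step 3): make each arc (Parent(Xj), Xj) consistent.
    for xj in reversed(order[1:]):
        p = parent[xj]
        domains[p] = {u for u in domains[p]
                      if any(constraint(u, v) for v in domains[xj])}
        if not domains[p]:
            return None  # no solution
    # Forward pass (step 4): pick any value consistent with the parent.
    assignment = {}
    for x in order:
        if parent[x] is None:
            assignment[x] = next(iter(domains[x]))
        else:
            assignment[x] = next(v for v in domains[x]
                                 if constraint(assignment[parent[x]], v))
    return assignment
```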
Heuristics for complexity reduction

Transforming to a tree-structured CSP
 Conditioning
 Instantiate a variable, prune its neighbors' domains
 Cutset conditioning
 Instantiate a set of variables such that the remaining constraint graph is a tree
 S is called a cycle cutset when the constraint graph becomes a tree after the removal of S
 Time complexity: O(d^c · (n−c)d²)
 c : cutset size
 d^c : the number of possible cutset assignments
 (n−c)d² : time complexity of each resulting tree-structured CSP
Iterative algorithm with the min-conflicts heuristic
 Initially assign a (random) value to every variable
 Reassign the value of one variable at a time
 Variable selection: randomly select any conflicted variable
 Value selection: the value with the minimum number of conflicts with other variables
 → hill-climbing with h(n) = total number of violated constraints
 Min-conflicts is effective for many CSPs
 particularly with a reasonable initial state
 Especially important in scheduling problems
 e.g., airline scheduling (when the schedule is changed)
Local search for CSPs
 Hill-climbing and simulated annealing typically work with "complete" states, i.e., all variables assigned

 To apply them to CSPs:
 allow states with unsatisfied constraints; operators reassign variable values

 Variable selection: randomly select any conflicted variable

 Value selection by the min-conflicts heuristic:
 choose the value that violates the fewest constraints
 i.e., hill-climb with h(n) = total number of violated constraints
Example: 4-Queens
 States: 4 queens in 4 columns (4^4 = 256 states)
 Actions: move a queen within its column
 Goal test: no attacks
 Evaluation: h(n) = number of attacks

 Given a random initial state, can solve n-queens in almost constant time for arbitrary n with high probability (e.g., n = 10,000,000)
Local Search for CSP

[Figure: an 8-queens board annotated with the number of conflicts produced by each candidate move]

Pick an initial complete assignment (at random)
Repeat:
 • Pick a conflicted variable var (at random)
 • Set the new value of var to minimize the number of conflicts (the min-conflicts heuristic)
 • If the new assignment is not conflicting, then return it
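The loop above can be sketched for n-queens, the standard min-conflicts showcase (one queen per column; an illustrative implementation, not from any particular library):

```python
import random

def min_conflicts_nqueens(n, max_steps=100000):
    """Min-conflicts local search for n-queens; queens[c] = row of the
    queen in column c. Returns a solution or None after max_steps."""
    queens = [random.randrange(n) for _ in range(n)]

    def conflicts(col, row):
        # Number of other queens attacking square (col, row).
        return sum(1 for c in range(n) if c != col and
                   (queens[c] == row or abs(queens[c] - row) == abs(c - col)))

    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(c, queens[c]) > 0]
        if not conflicted:
            return queens  # no attacks: solution found
        col = random.choice(conflicted)
        # Move to the row with the fewest conflicts (random tie-breaking).
        queens[col] = min(range(n),
                          key=lambda r: (conflicts(col, r), random.random()))
    return None
```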
Remark
 Local search with the min-conflicts heuristic works extremely well for million-queen problems
 The reason: solutions are densely distributed in the O(n^n) space, which means that on average a solution is only a few steps away from a randomly picked assignment
Infinite-Domain CSP
 Variable domain is the set of the integers (discrete CSP) or of
the real numbers (continuous CSP)
 Constraints are expressed as equalities and inequalities
 Particular case: Linear-programming problems
 Constraint Programming

"Constraint programming represents one of the closest approaches computer science has yet made to the Holy Grail of programming: the user states the problem, the computer solves it."
– Eugene C. Freuder, Constraints, April 1997
When to Use CSP Techniques?
 When the problem can be expressed by a set of variables
with constraints on their values
 When constraints are relatively simple (e.g., binary)
 When constraints propagate well (AC3 eliminates many
values)
 When the solutions are “densely” distributed in the space of
possible assignments
Summary
 CSPs are a special kind of problem
 States defined by values of a fixed set of variables
 Goal test defined by constraints on variable values
 Backtracking = depth-first search with one variable assigned per node
 Variable ordering and value selection heuristics help significantly
 Forward checking prevents assignments that guarantee later failure
 Constraint propagation (e.g. arc consistency) does additional work to
constrain values and detect inconsistencies
 The CSP representation allows analysis of problem structure
 Tree-structured CSPs can be solved in linear time
 Iterative min-conflicts is usually effective in practice
