Chapter 5 Job Shop Scheduling

5.1 INTRODUCTION
Job shop scheduling with a finite number of jobs.
There are n jobs and a certain objective function.
Each job follows a predetermined route.
In some models a job may visit each machine at most once, and
in other models a job may visit each machine more than once
(recirculation).
Job shop problem: each customer order has specified
characteristics and order sizes are relatively small.
E.g., wafer fabs, hospitals (patients as jobs).
Some of the job shop problems considered in this chapter are
special cases of the resource-constrained project scheduling.
These job shop problems are NP-hard and cannot be formulated
as linear programs.
They can be formulated either as integer programs or as
disjunctive programs.

5.2 DISJUNCTIVE PROGRAMMING AND BRANCH AND
BOUND
Consider n jobs and m machines. Each job is processed by a set
of machines in a given order, and there is no recirculation.
Operation (i,j): the processing of job j on machine i.
pij : processing time of operation (i,j).

Chap5. 1
The problem of minimizing the makespan in a job shop without
recirculation can be represented by a disjunctive graph.
G: a directed graph.
N: a set of nodes; all the operations.
A: conjunctive (solid) arcs: represent the routes of the jobs. Arc
(i, j) → (k, j): job j must be processed on machine i before
machine k.
B: disjunctive arcs: Two operations that belong to two different
jobs and that have to be processed on the same machine.
They are connected to one another by two disjunctive
(broken) arcs going in opposite directions. They form m
cliques of double arcs, one clique for each machine.
All arcs emanating from a node have as length the processing
time of the operation that is represented by that node.
U: source node; n conjunctive arcs emanate from it to the first
operations of the n jobs.
V: sink node; conjunctive arcs coming from all the final
operations.
U and V: dummy nodes, zero processing time.
We denote the graph G=(N, A, B).
A feasible schedule of a machine: a selection of one disjunctive
arc from each pair such that the directed graph is acyclic.
Such a selection determines the sequence in which the operations
are to be performed on a machine.
Argument: If there were a cycle within a clique, a feasible
sequence of the operations on the corresponding machine
would not have been possible.
An example involving more than one machine:
the sequence of job j: (h, j), (i, j)
the sequence of job k: (i, k), (h, k)
If, in a given schedule, (i, j) precedes (i, k) and (h, k) precedes
(h, j), then the schedule contains a cycle with four arcs.
Such a schedule is physically impossible.
D: the set of selected disjunctive arcs.
G(D): the set of conjunctive arcs and the subset D.
D corresponds to a feasible schedule if and only if G(D)
contains no directed cycle.
The makespan of a feasible schedule is determined by the
longest path in G(D) from U to V.
Each operation on this path is immediately followed by either
the next operation on the same machine or the next
operation of the same job on another machine.
Minimizing the makespan is reduced to finding a selection of
disjunctive arcs that minimizes the length of the longest path
(critical path).
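To make the reduction concrete, here is a small sketch (not from the text) that evaluates the makespan of a given selection D as the longest U-to-V path in G(D); the adjacency-list encoding and node labels are assumptions for illustration.

```python
from collections import defaultdict, deque

def makespan(nodes, arcs, ptime):
    """Longest path from 'U' to 'V' in the acyclic graph G(D).

    nodes: operation identifiers, plus the dummy nodes 'U' and 'V'
    arcs:  list of (a, b) pairs: the conjunctive arcs plus the
           selected disjunctive arcs D
    ptime: dict mapping each node to its processing time
           ('U' and 'V' have zero processing time)
    """
    succ = defaultdict(list)
    indeg = {v: 0 for v in nodes}
    for a, b in arcs:
        succ[a].append(b)
        indeg[b] += 1
    # longest-path dynamic program over a topological order
    dist = {v: 0 for v in nodes}   # earliest start time of each node
    queue = deque(v for v in nodes if indeg[v] == 0)
    seen = 0
    while queue:
        a = queue.popleft()
        seen += 1
        for b in succ[a]:
            # every arc out of a has length ptime[a]
            dist[b] = max(dist[b], dist[a] + ptime[a])
            indeg[b] -= 1
            if indeg[b] == 0:
                queue.append(b)
    if seen < len(dist):
        raise ValueError("G(D) contains a cycle: infeasible selection")
    return dist['V']
```

If the topological sort cannot visit every node, the chosen disjunctive arcs contain a cycle, i.e. the selection is infeasible, exactly as argued above.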
There are a number of integer programming formulations for job
shop scheduling. See Section 4.6 and Exercise 5.1.
Disjunctive Programming formulation:
yij : the starting time of operation (i, j).

N: the set of all operations

A: the set of all routing constraints. (i, j) → (k, j): job j is to be
processed on machine i before it is processed on machine k.
Formulation: p83

Example 5.2.1 (p83)
Four machines, three jobs

Minimizing makespan in a job shop is a very hard problem, and
solution procedures are either based on enumeration or on
heuristics.
Branch-and-Bound Procedure:
The bounding procedure is specially designed.
The branching procedure limits the search to a specific class of
schedules (the class of active schedules).

Active schedules: A feasible schedule is active if it cannot be
altered in any way so that some operation is completed earlier
and no other operation is completed later.
Figure 5.2 (p84) shows a nonactive schedule. The idle period
between two operations on machine 2 is long enough to
accommodate operation (2, 1).
A property of an active schedule is that it is impossible to reduce
the makespan without increasing the starting time of some
operation.
An optimal schedule is among the active schedules.

All active schedules can be generated by a simple algorithm.
Ω: the set of all operations whose predecessors have already
been scheduled.
rij: the earliest possible starting time of operation (i, j) in Ω.
Ω′: a subset of Ω.
Algorithm 5.2.1 Generation of all active schedules (p84)
Algorithm 5.2.1 is the basis for the branching process.
Step 3 performs the branching from the node of a given partial
schedule.
The number of branches is equal to the number of operations in
Ω′.
The nodes at the very bottom of the tree correspond to all the
active schedules.
A node W in the tree corresponds to a partial schedule, and the
partial schedule is characterized by a selection of disjunctive
arcs that corresponds to the order in which all the predecessors
of a given set Ω have been scheduled.
A branch out of W corresponds to the selection of (i*, j) ∈ Ω′
as the next operation to go on machine i*. For all the operations
(i*, k) still to be scheduled on machine i*, the disjunctive arcs
(i*, j) → (i*, k) have to be added.
A newly created node W′ corresponds to one additional operation
and a number of additional selected disjunctive arcs. See
Figure 5.3 (p86).

D : the set of disjunctive arcs selected at the newly created
node.
G( D ): the graph with all the conjunctive arcs and set D .
To find a lower bound for the makespan at node W  , consider
G( D ).
The length of the critical path result in a lower bound for the
makespan at node W  . Call this lower bound LB(W ) .
Better (higher) lower bounds can be obtained:
Because not all disjunctive arcs have been selected yet in G(D′),
it may be the case that, at some points in time, multiple
operations that require processing on the same machine are
processed simultaneously.
Consider machine i and assume that all other machines are
allowed to process, at any point in time, multiple operations
simultaneously.
We force one machine i to process its operations one after
another:
(1) compute the earliest possible starting time rij of all the
operations (i, j) on machine i, that is, the length of the longest
path from the source to (i, j) in G(D′);
(2) compute dij, which is equal to LB(W′) minus the length of
the longest path from node (i, j) to the sink, plus pij;

(3) consider the operations on machine i as a single-machine
problem with jobs arriving at different times, no preemptions
allowed, and the maximum lateness as the objective to be
minimized (considered in Section 4.3). Even though it is
NP-hard, relatively effective algorithms are available. The
optimal sequence obtained for this problem implies a selection
of disjunctive arcs that can be added (temporarily) to D′.
This may lead to a longer overall critical path in the graph, a
larger makespan, and a better (higher) lower bound for node
W′.
At node W′ this procedure can be done for each machine
separately.
The largest makespan obtained this way can be used as a lower
bound at node W′.
Although it appears somewhat of a burden to have to solve m
NP-hard scheduling problems to obtain one lower bound for
another NP-hard problem, this type of bounding procedure
has performed reasonably well in computational experiments.
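The single-machine step can be sketched as follows; brute force stands in for the book's Section 4.3 branch-and-bound, and the dict-based encoding of the data is an assumption.

```python
from itertools import permutations

def machine_lmax(p, r, d):
    """Minimum value of the maximum lateness for one machine, 1|r_j|Lmax.

    p, r, d: dicts keyed by operation, giving processing time,
    release date (longest path from the source in G(D')), and due
    date (LB(W') minus longest path to the sink, plus p_ij).
    Brute force over all sequences: fine for the small instances at
    a node, but Section 4.3's algorithm scales much better.
    """
    best = float('inf')
    for seq in permutations(p):
        t, lmax = 0, float('-inf')
        for op in seq:
            t = max(t, r[op]) + p[op]      # nonpreemptive: wait, then run
            lmax = max(lmax, t - d[op])    # lateness of this operation
        best = min(best, lmax)
    return best
```

A positive Lmax(i) raises the bound for machine i to LB(W′) + Lmax(i); doing this for every machine and taking the largest value gives the improved lower bound described above.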

Example 5.2.2 (p86)

The approach described here is based on complete enumeration
and is guaranteed to lead to an optimal schedule.
With 20 machines and 20 jobs, finding an optimal schedule this
way may be computationally prohibitive.

5.3 THE SHIFTING BOTTLENECK HEURISTIC AND
THE MAKESPAN
One of the most successful heuristics developed for minimizing
the makespan in a job shop is the shifting bottleneck heuristic.
M: the set of m machines.
M0: a subset of M; the selection of their disjunctive arcs has
been fixed. For each one of the machines in M0, a sequence of
operations has already been determined.
An iteration results in the selection of a machine from M − M0
for inclusion in set M0.
The sequence of the operations on the machine to be processed
is also generated in this iteration.
An attempt is made to determine which unscheduled machine
causes the severest disruption.
The original directed graph is modified by deleting all disjunctive
arcs of the machines yet to be scheduled and keeping only the
relevant disjunctive arcs of the machines in M0. Call this
graph G′.
Deleting all disjunctive arcs implies that the operations on such
a machine can be done in parallel.
G′ has one or more critical paths that determine the
makespan. Call this makespan Cmax(M0).
Suppose operation (i, j), i ∈ M − M0, has to be processed in a
time window of which the release date and due date are
determined by the critical (longest) paths in G′.

The release date is the length of the longest path in G′ from the
source to node (i, j).
The due date is equal to Cmax(M0) minus the length of the
longest path from node (i, j) to the sink, plus pij.

Consider each machine in M − M0 as a separate
nonpreemptive single-machine problem with release dates and
due dates and with the maximum lateness to be minimized.
Lmax(i): the maximum lateness of machine i; a measure of the
criticality of machine i.
After solving all these single-machine problems, the machine
with the largest maximum lateness is chosen: the “bottleneck”.
Label it machine k, call its maximum lateness Lmax(k),
and schedule it according to the optimal solution.
The disjunctive arcs of the schedule are inserted into graph G′.
The makespan of the current partial schedule increases by at
least Lmax(k).
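One iteration's bottleneck selection can be sketched as follows, assuming the release dates and due dates of each unscheduled machine have already been read off the critical paths; the brute-force Lmax solver is a stand-in for the single-machine algorithm of Chapter 4.

```python
from itertools import permutations

def lmax(p, r, d):
    # minimum maximum lateness of one machine, 1|r_j|Lmax (brute force)
    best = float('inf')
    for seq in permutations(p):
        t, worst = 0, float('-inf')
        for op in seq:
            t = max(t, r[op]) + p[op]
            worst = max(worst, t - d[op])
        best = min(best, worst)
    return best

def find_bottleneck(machine_data):
    """Pick the unscheduled machine with the largest minimum Lmax.

    machine_data: dict machine -> (p, r, d) dicts for its operations.
    Returns the bottleneck machine and its Lmax value.
    """
    crit = {i: lmax(*prd) for i, prd in machine_data.items()}
    k = max(crit, key=crit.get)
    return k, crit[k]
```

The sequence that achieves Lmax(k) on the chosen machine is the one whose disjunctive arcs get inserted into G′.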
Additional step: resequence all machines in the original set
M0 to see whether the makespan can be reduced.
Say machine l is taken out of set M0, and a graph G′′ is
constructed by modifying graph G′ through the inclusion of
the disjunctive arcs that specify the sequence of operations on
machine k and the exclusion of the disjunctive arcs associated
with machine l.
Machine l is resequenced by solving the corresponding
single-machine maximum lateness problem.
Resequencing each machine in the original set M 0 completes
the iteration.

Algorithm 5.3.1. Shifting Bottleneck Heuristic (p91&92)

Example 5.3.1 Application of Shifting Bottleneck Heuristic
(p92)

The precedence constraints may be necessary because of the
sequence of the operations on the machine already scheduled.
Delayed precedence constraints: it may even be the case that,
between the processing of two operations subject to these
precedence constraints, a certain minimum amount of time
(delay) must elapse.
Without these constraints the shifting bottleneck heuristic may
end up in a situation where there is a cycle in the graph.

Example 5.3.2 Delayed Precedence Constraints (p95)

Extensive numerical research has shown that the shifting
bottleneck heuristic is extremely effective.
The branch-and-bound approach, in comparison, needed many
hours of CPU time.
The disadvantage of the heuristic is that there is no guarantee
that the solution it generates is optimal.

5.4 THE SHIFTING BOTTLENECK HEURISTIC AND
THE TOTAL WEIGHTED TARDINESS
Consider a job shop without recirculation with the total weighted
tardiness Σ wjTj (summed over j = 1, …, n) as the objective.

Combine the shifting bottleneck heuristic (previous section) with
the apparent tardiness cost first (ATC) rule (Chapter 3).
Disjunctive graph is different.
Makespan problem: only the completion time of the last job is
important. A single sink node.
Weighted total tardiness problem: the completion times of all n
jobs are important: n sink nodes (V1, V2, …, Vn). Figure 5.9.
The length of the path from the source U to sink Vk represents
the completion time of job k.
The approach is as follows:
Machines are scheduled one at a time.
All machines in M0 have already been scheduled (that is, all
their disjunctive arcs have been selected).
In each iteration, one machine is selected for inclusion in M0.
Each of the remaining machines has to be analyzed separately.
A measure of criticality is computed.
All disjunctive arcs belonging to the machine still to be
scheduled are deleted, and all disjunctive arcs selected for the
machines already scheduled (M0) are kept in place.

Ck : the completion time of job k.

To avoid an increase in Ck, operation (i, j), j = 1, 2, …, n, must be
completed on machine i by some local due date d_ij^k.
The local due date is computed by considering the longest path
from operation (i, j) to sink Vk.
If there is no path from node (i, j) to sink Vk, the local due date
is infinity.
Because of job k, there may be a local due date d_ij^k for
operation (i, j).
That is, if operation (i, j) is completed after d_ij^k, then the
completion time of job k is postponed.
Operation (i, j) is subject to n local due dates.
Each operation is subject to a piecewise linear cost function
(Figure 5.10).
Operations may be subject to delayed precedence constraints.
This yields a single-machine problem with n jobs subject to
precedence constraints and the total weighted tardiness as the
objective (Chapter 3).
A simple dispatching rule: the ATC rule.
An effective priority function assigns to operation (i, j) at time t
the priority value

I_ij(t) = Σ_{k=1}^{n} (wk / pij) exp( − (d_ij^k − pij + (rij − t)^+ − t)^+ / (K p̄) )
t: the earliest time at which machine i can be used.
K: a scaling parameter.
p̄: the integer part of the average length of the operations to be
processed on machine i.
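A sketch of evaluating such an ATC priority index for one operation; the exact slack term is a reconstruction of the garbled original, so treat it as an assumption. Infinite local due dates (no path to that job's sink) contribute essentially nothing.

```python
from math import exp

def atc_index(w, p_ij, r_ij, d_local, t, K, pbar):
    """Priority I_ij(t) of operation (i, j) at time t.

    w:       dict job k -> weight w_k
    d_local: dict job k -> local due date d_ij^k (float('inf') when
             there is no path from (i, j) to sink V_k)
    p_ij, r_ij: processing time and earliest start of the operation
    K, pbar: scaling parameter and average operation length on machine i
    """
    total = 0.0
    for k, d in d_local.items():
        # (.)^+ slack: never negative; unreleased work (r_ij > t) lowers priority
        slack = max(d - p_ij + max(r_ij - t, 0.0) - t, 0.0)
        total += (w[k] / p_ij) * exp(-slack / (K * pbar))
    return total
```

Operations whose local due dates are tight relative to their processing time get an index close to Σ wk / pij, i.e. the highest priority.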
The measure of criticality of a machine is equal to the value of
the objective function.
The new disjunctive graph yields new completion times of all n
jobs: C′k, with C′k ≥ Ck.
One function to provide a measure of criticality for machine i:

Σ_{k=1}^{n} wk (C′k − Ck) exp( − (dk − C′k) / K )
The machine with the highest measure is selected as the next
one to be included in M0.
Rescheduling all the machines in the original set M0 is done in
the same way as in the previous section.

Example 5.4.1 (p100)

The optimal solution, with a value of 18, can be obtained with
more elaborate versions of this heuristic.
Such a version uses backtracking techniques as well as machine
reoptimization (similar to Step 4 in Algorithm 5.3.1).

5.5. THE TOTAL WEIGHTED TARDINESS IN A
FLEXIBLE FLOW SHOP WITH SETUPS
This section considers a job shop environment that is prevalent
in many industries. There are a number of stages in series
with several machines in parallel at each stage.
The machines at a particular stage may be different.
The more modern machines can accommodate a greater variety
of jobs and usually operate at a higher speed.
One stage (sometimes two) constitutes a bottleneck.
When a job is completed on a given machine and another has to
start, a setup is required. The setup time depends on both
jobs.
This machine configuration is a flexible flow shop.
A number of objectives are kept in mind.
One is to meet due dates: the sum of the weighted tardinesses
Σ wjTj.
Another important objective is the maximization of throughput.
This is related to the minimization of the sum of the setup times,
especially for machines at the bottleneck.
A third objective is minimization of the work-in-process
inventory.
The final schedules are the result of compromises among the
three objectives.
We assume that only a single parameter is associated with the
setup time of job j on machine i, say aij .

If job k follows job j on machine i, then the setup time between
jobs j and k is
sijk = hi(aij, aik)
hi may be machine-dependent.
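The text leaves hi unspecified, so purely for illustration, one plausible single-parameter model makes the setup proportional to the gap between the two parameters, with a hypothetical machine-dependent factor:

```python
def setup_time(a, scale, i, j, k):
    """s_ijk = h_i(a_ij, a_ik), illustrated with h_i(x, y) = scale_i * |y - x|.

    a:     dict (machine, job) -> setup parameter a_ij
    scale: dict machine -> machine-dependent factor (hypothetical)
    """
    return scale[i] * abs(a[(i, k)] - a[(i, j)])
```

Any hi of this shape makes the setup time sequence-dependent while still being summarized by one parameter per (machine, job) pair.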
The algorithmic framework consists of five phases:
(i) bottleneck identification process:
At least one of the stages has to be a bottleneck.
If two stages constitute bottlenecks, the scheduler starts with the
most downstream bottleneck.
The possibility of a moving bottleneck is not taken into account
in this phase, as the planning horizon is assumed to be short;
the bottleneck therefore does not have sufficient time to move.
(ii) computation of time windows for jobs at the bottleneck
stage:
After leaving the bottleneck, jobs have a relatively short wait at
each downstream stage.
The length of a job’s stay at any one of these stages can be
estimated by multiplying the processing time with a safety
factor.
We can thus estimate a local due date for each job at the
bottleneck.
The release date for a job at the bottleneck is computed as
follows: the status σj of job j may be 0 (the raw material
for this job has not yet arrived); 1 (the raw material has
arrived, but no processing has yet taken place); 2 (the job has
gone through the first stage, but not through the second); and
so on.
The release date of job j at the bottleneck stage is rbj = f(σj).
f is decreasing in the status of job j and has to be determined
empirically.
(iii) machine capacity computations at the bottleneck stage:
The capacity of each machine over the planning horizon is
computed based on its speed, the number of shifts assigned to
it, and an estimate of the amount of time spent on setup.
The statistics are gathered for different machines.
We can determine which machine(s) at this stage have the
largest loads and are the most critical.
(iv) scheduling of the jobs at the bottleneck stage:
The ATCS rule of Section 3.3 is used.
(v) scheduling of the jobs at the nonbottleneck stages:
The sequence in which the jobs go through the bottleneck stage
more or less determines the sequence in which the jobs go
through the other stages.
Some minor swaps may still be made in these sequences.
For example, a local, adjacent, pairwise interchange may reduce
the setup times on a particular machine.
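The swap mentioned above can be sketched as a greedy pass of adjacent interchanges on one machine's sequence; due dates and the other objectives are ignored in this illustration.

```python
def total_setup(seq, setup):
    """Sum of sequence-dependent setups; setup[j][k] applies when k follows j."""
    return sum(setup[a][b] for a, b in zip(seq, seq[1:]))

def improve_setups(seq, setup):
    """Repeat adjacent pairwise interchanges while they reduce total setup."""
    seq = list(seq)
    improved = True
    while improved:
        improved = False
        for x in range(len(seq) - 1):
            # try swapping positions x and x+1
            cand = seq[:x] + [seq[x + 1], seq[x]] + seq[x + 2:]
            if total_setup(cand, setup) < total_setup(seq, setup):
                seq, improved = cand, True
    return seq
```

Because only the setup objective is checked, in practice each accepted swap would also have to be vetted against the due-date and work-in-process goals mentioned above.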

Example 5.5.1 (p106)

If a machine usually operates at a very high speed (in
comparison with other machines), there is a preference to put
longer jobs on this machine.

5.6 DISCUSSION
The disjunctive graph formulation for the job shop problem
without recirculation also extends to the job shop problem
with recirculation.
The set of disjunctive arcs for a machine subject to recirculation
is not a clique.
If two operations of the same job have to be performed on the
same machine, a precedence relationship is given. These two
operations are not connected by a pair of disjunctive arcs,
since they are already connected by conjunctive arcs.
The branch-and-bound approach described in Section 5.2 still
applies.
The bounding mechanism is based on the solution of a similar
nonpreemptive single-machine problem with the jobs subject
to delayed precedence constraints.
One variation of the shifting bottleneck heuristic is based on
decomposition principles. The following five-step approach
can be used for the flexible job shop:
p.109 & p110.
The subproblem now becomes a nonpreemptive
parallel-machine scheduling problem with the jobs subject to
different release dates and the total weighted tardiness as the
objective to be minimized.
The multiple objectives in the overall problem also lead to
multiple objectives in the single-machine (or parallel
machines) subproblem.
All the heuristic approaches described here can be linked to
neighborhood search procedures, as described in Chapter 3.
The heuristic’s solution is then fed into a neighborhood search
procedure that hopefully yields an even better solution.
