Optimal Reward-Based Scheduling of Periodic Real-Time Tasks

Hakan Aydın, Rami Melhem, and Daniel Mossé
Computer Science Department
University of Pittsburgh
Pittsburgh, PA 15260
{aydin,mosse,melhem}@cs.pitt.edu

Pedro Mejía-Alvarez†
CINVESTAV-IPN, Sección de Computación
Av. I.P.N. 2508, Zacatenco
México, DF. 07300
pmejia@cs.pitt.edu

This work has been supported by the Defense Advanced Research Projects Agency through the FORTS project (Contract DABT63-96-C-0044).
† Work done while at the University of Pittsburgh.


Abstract

Reward-based scheduling refers to the problem in which there is a reward associated with the execution of a task. In our framework, each real-time task comprises a mandatory and an optional part, with which a nondecreasing reward function is associated. The Imprecise Computation and Increased-Reward-with-Increased-Service models fall within the scope of this framework. In this paper, we address the reward-based scheduling problem for periodic tasks. For linear and concave reward functions we show: (a) the existence of an optimal schedule where the optional service time of a task is constant at every instance, and (b) how to efficiently compute this service time. We also prove that the RMS-h (RMS with harmonic periods), EDF and LLF policies are optimal when used with the optimal service times we computed, and that the problem becomes NP-Hard when the reward functions are convex. Further, our solution eliminates run-time overhead and makes possible the use of existing scheduling disciplines.

1 Introduction

In a real-time system each task must complete and produce correct output by the specified deadline. However, if the system is overloaded it is not possible to meet every deadline. In the past, several techniques have been introduced by the research community regarding the appropriate strategy to use in overloaded systems of periodic real-time tasks.

One class of approaches focuses on providing somewhat less stringent guarantees for temporal constraints. In [11], some instances of a task are allowed to be skipped entirely. The skip factor determines how often instances of a given task may be left unexecuted. A best-effort strategy is introduced in [8], aiming at meeting k deadlines out of n instances of a given task. This framework is also known as the (n,k)-firm deadlines scheme. Bernat and Burns present in [2] a hybrid and improved approach to provide hard real-time guarantees to k out of n consecutive instances of a task.

The techniques mentioned above tacitly assume that a task's output is of no value if it is not executed completely. However, in many application areas, such as multimedia applications [17], image and speech processing [4, 6, 19], time-dependent planning [3], robot control/navigation systems [21], medical decision making [9], information gathering [7], real-time heuristic search [12] and database query processing [20], a partial or approximate but timely result is usually acceptable.

The Imprecise Computation [5, 15] and IRIS (Increased Reward with Increased Service) [10, 13] models were proposed to enhance resource utilization and provide graceful degradation in real-time systems. In these models, every real-time task is composed of a mandatory part and an optional part. The former should be completed by the task's deadline to provide output of minimal quality. The optional part is to be executed after the mandatory part while still before the deadline, if there are enough resources in the system that are not committed to running mandatory parts of any task. The longer the optional part executes, the better the quality of the result (the higher the reward).

The algorithms proposed for imprecise computation applications concentrate on a model that has an upper bound on the execution time that can be assigned to the optional part [5, 15, 18]. The aim is usually to minimize the (weighted) sum of errors. Several efficient algorithms have been proposed to optimally solve the aperiodic scheduling problem of imprecise computation tasks [15, 18]. A common assumption in these studies is that the quality of the results produced is a linear function of the precision error; consequently, the possibility of having more general error functions is usually not addressed.

An alternative model allows tasks to get increasing reward with increasing service (the IRIS model) [10, 13], without an upper bound on the execution times of the tasks (though the deadline of the task is an implicit upper bound) and without the separation between mandatory and optional parts [10]. A task executes for as long as the scheduler allows before its deadline. Typically, a nondecreasing concave reward function is associated with each task's execution time.
In [10], the problem of maximizing the total reward in a system of aperiodic independent tasks is addressed. The optimal solution for static task sets is presented, as well as two extensions that include mandatory parts and policies for dynamic task arrivals.

Note that the imprecise computation and IRIS models are closely related, since their performance metrics can be defined as duals (maximizing the total reward is the dual of minimizing the total error). Similarly, a concave reward function corresponds to a convex error function, and vice versa.

We use the term "reward-based scheduling" to encompass scheduling frameworks such as the Imprecise Computation and IRIS models, where each task can be decomposed into mandatory and optional subtasks. A nondecreasing reward function is associated with the execution of each optional part.

An interesting question concerns the types of reward functions that represent realistic application areas. A linear reward function [15] models the case where the benefit to the overall system increases uniformly during the optional execution. Similarly, a concave reward function [10, 13] addresses the case where the greatest increase/refinement in the output quality is obtained during the first portions of the optional execution. The first derivative of a nondecreasing concave function is nonincreasing. Linear and general concave functions are considered the most realistic and typical in the literature, since they adequately capture the behavior of many application areas such as those mentioned above [4, 6, 19, 3, 21, 12, 7, 17, 20]. In this paper, we show that the case of convex reward functions is an NP-Hard problem, and we thus focus on linear and concave reward functions. Reward functions with 0/1 constraints, where no reward is accrued unless the entire optional part is executed, and step reward functions have also received some interest in the literature. Unfortunately, this problem has been shown to be NP-Complete in [18].

Periodic reward-based scheduling has remained relatively unexplored since the important work of Chung, Liu and Lin [5]. In that paper, the authors classified the possible application areas as "error non-cumulative" and "error cumulative". In the former, errors (or optional parts left unexecuted) have no effect on future instances of the same task. Well-known examples of this category are tasks that periodically receive, process and transmit audio, video or compressed images [4, 6, 19], as well as information retrieval tasks [7, 20]. In "error cumulative" applications, such as radar tracking, an optional instance must be executed completely at every (predetermined) k invocations. The authors further proved that the case of error-cumulative jobs is an NP-Complete problem. In this paper, we restrict ourselves to error non-cumulative applications.

Recently, a QoS-based resource allocation model (QRAM) has been proposed for periodic applications [17]. In that study, the problem is to optimally allocate several resources to the various applications such that they simultaneously meet their minimum requirements along multiple QoS dimensions and the total system utility is maximized. In one aspect, this can be viewed as a generalization of the optimal CPU allocation problem to multiple resources and quality dimensions. Further, dependent and independent quality dimensions are separately addressed for the first time in that work. However, a fundamental assumption of that model is that the reward functions and resource allocations are expressed in terms of utilization of resources. Our work falls rather along the lines of the Imprecise Computation model, where the reward accrued has to be computed separately over all task instances and the problem is to find the optimal service times for each instance and the optimal schedule with these assignments.

Aspects of the Periodic Reward-Based Scheduling Problem

The difficulty of finding an optimal schedule for a periodic reward-based task set has its origin in two objectives that must be achieved simultaneously, namely: (i) meeting the deadlines of mandatory parts at every periodic task invocation, and (ii) scheduling optional parts to maximize the total (or average) reward.

These two objectives are both important, yet often incompatible. In other words, hard deadlines of mandatory parts may require sacrificing the optional parts with the greatest value to the system. The analytical treatment of the problem is complicated by the fact that, in an optimal schedule, the optional service times of a given task may vary from instance to instance, which makes the framework of classical periodic scheduling theory inapplicable. Furthermore, this fact introduces a large number of variables into any analytical approach. Finally, by allowing nonlinear reward functions to better characterize the optional tasks' contribution to the overall system, the optimization problem becomes computationally harder.

In [5], Chung, Liu and Lin proposed the strategy of assigning statically higher priorities to mandatory parts. This decision, as proven in that paper, effectively achieves the first objective mentioned above by securing mandatory parts from the potential interference of optional parts. Optional parts are scheduled whenever no mandatory part is ready in the system. In [5], simulation results regarding the performance of several policies which assign static or dynamic priorities among optional parts are reported. We call the class of algorithms that statically assign higher priorities to mandatory parts Mandatory-First Algorithms.

In our solution, we do not decouple the objectives of meeting the deadlines of mandatory parts and maximizing the total (or average) reward. We formulate the periodic reward-based scheduling problem as an optimization problem and derive an important and surprising property of the solution for the most common (i.e., linear and concave) reward functions. Namely, we prove that there is always an optimal schedule where the optional service times of a given task do not vary from instance to instance. This important result immediately implies that the optimality (in terms of achievable utilization) of any policy which can fully use the processor in the case of hard real-time periodic tasks also holds in the context of reward-based scheduling
(in terms of total reward) when used with these optimal service times. Examples of such policies are the RMS-h (RMS with harmonic periods) [14], EDF [14] and LLF [16] scheduling disciplines.

Following these existence proofs, we address the problem of efficiently computing the optimal service times and provide polynomial-time algorithms for linear and/or general concave reward functions. Note that using these constant optimal service times also has important practical advantages: (a) the runtime overhead due to the existence of the mandatory/optional dichotomy and reward functions is removed, and (b) existing RMS-h, EDF and LLF schedulers may be used without any modification with these optimal assignments.

The remainder of this paper is organized as follows: In Section 2, the system model and basic definitions are given. The main result about the optimality of any periodic policy which can fully utilize the processor(s) is obtained in Section 3. In Section 4, we first analyze the worst-case performance of Mandatory-First approaches; we also provide the results of experiments on a synthetic task set to compare the performance of the policies proposed in [5] against our optimal algorithm. In Section 5, we show that the concavity assumption is also necessary for computational efficiency, by proving that allowing convex reward functions results in an NP-Hard problem. We present details about the specific optimization problem that we use in Section 6. We conclude by summarizing our contribution and discussing future work.

2 System Model

We consider a set T of n periodic real-time tasks T1, T2, ..., Tn on a uniprocessor system. The period of Ti is denoted by Pi, which is also equal to the deadline of the current invocation. We refer to the j-th invocation of task Ti as Tij. All tasks are assumed to be independent and ready at t = 0.

Each task Ti consists of a mandatory part Mi and an optional part Oi. The length of the mandatory part is denoted by mi; each task must receive at least mi units of service time before its deadline in order to provide output of acceptable quality. The optional part Oi becomes ready for execution only when the mandatory part Mi completes.

Associated with the optional part of each task is a reward function Ri(tij), which indicates the reward accrued by task Tij when it receives tij units of service beyond its mandatory portion. Ri(tij) is of the form:

R_i(t_{ij}) = \begin{cases} f_i(t_{ij}) & \text{if } 0 \le t_{ij} \le o_i \\ f_i(o_i) & \text{if } t_{ij} > o_i \end{cases} \qquad (1)

where fi is a nondecreasing, concave and differentiable function over the nonnegative real numbers, and oi is the length of the entire optional part Oi. We underline that fi(tij) is nondecreasing: the reward accrued by task Tij cannot decrease by allowing it to run longer. Notice that the reward function Ri(t) is not necessarily differentiable at t = oi. Note also that in this formulation, once the task's optional execution time t reaches the threshold value oi, the reward accrued ceases to increase.

A schedule of periodic tasks is feasible if mandatory parts meet their deadlines at every invocation. Given a feasible schedule of the task set T, the average reward of task Ti is:

\mathrm{REW}_i = \frac{P_i}{P} \sum_{j=1}^{P/P_i} R_i(t_{ij}) \qquad (2)

where P is the hyperperiod, that is, the least common multiple of P1, P2, ..., Pn, and tij is the service time assigned to the j-th instance of the optional part of task Ti. That is, the average reward of Ti is computed over the number of its invocations during the hyperperiod P, in a way analogous to the definition of average error in [5].

The average weighted reward of a feasible schedule is then given by:

\mathrm{REW}_W = \sum_{i=1}^{n} w_i \, \mathrm{REW}_i \qquad (3)

where wi is a constant in the interval (0,1] indicating the relative importance of the optional part Oi. Although this is the most general formulation, it is easy to see that the weight wi can always be incorporated into the reward function fi by replacing it with wi fi. Thus, we will assume that all weight (importance) information is already expressed in the reward function formulation and that REW_W is simply equal to \sum_{i=1}^{n} \mathrm{REW}_i.

Finally, a schedule is optimal if it is feasible and maximizes the average weighted reward.
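To make these definitions concrete, the following minimal Python sketch (ours, not from the paper; the two-task parameters are hypothetical) evaluates the saturating reward function of Eq. (1) and the average reward of Eq. (2):

```python
from math import lcm

def reward(f, o, t):
    """Eq. (1): the reward f(t) ceases to grow once t reaches the optional length o."""
    return f(min(t, o))

def average_reward(f, o, period, hyperperiod, service_times):
    """Eq. (2): REW_i = (P_i / P) * sum of per-instance rewards over the hyperperiod."""
    assert len(service_times) == hyperperiod // period
    return (period / hyperperiod) * sum(reward(f, o, t) for t in service_times)

# Hypothetical two-task example: P1 = 4, o1 = 1 and P2 = 8, o2 = 5,
# with linear reward functions f1(t) = 10t and f2(t) = t.
P = lcm(4, 8)                                             # hyperperiod = 8
rew1 = average_reward(lambda t: 10 * t, 1, 4, P, [1, 1])  # both instances run 1 unit
rew2 = average_reward(lambda t: t, 5, 8, P, [2])          # single instance runs 2 units
print(rew1 + rew2)                                        # 12.0
```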
A Motivating Example:

Before describing our solution to the problem, we present a simple example which shows the performance limitations of any Mandatory-First algorithm. Consider two tasks where P1 = 4, m1 = 1, o1 = 1, P2 = 8, m2 = 3, o2 = 5. Assume that the reward functions associated with the optional parts are linear: f1(t1) = k1 t1 and f2(t2) = k2 t2, where k1 ≫ k2. In this case, the "best" algorithm among Mandatory-First approaches would produce the schedule shown in Figure 1.

[Figure 1. A schedule produced by a Mandatory-First algorithm: M1 in [0,1], M2 in [1,4], M1 in [4,5], O1 in [5,6], O2 in [6,8].]

In Figure 1, the Rate Monotonic Priority Assignment is used whenever more than one mandatory part is simultaneously ready, as in [5]. Yet, following other (dynamic or static) priority schemes would not change the fact that the processor will be busy executing solely mandatory parts until t = 5 under any Mandatory-First approach. During the remaining idle interval [5,8], the best algorithm would have chosen to schedule
O1 completely (which brings the most benefit to the system) for 1 time unit and O2 for 2 time units. However, an optimal algorithm would produce the schedule depicted in Figure 2.

[Figure 2. The optimal schedule: M1 in [0,1], O1 in [1,2], M2 in [2,4], M1 in [4,5], O1 in [5,6], M2 in [6,7], O2 in [7,8].]

As can be seen, the optimal strategy in this case consists of delaying the execution of M2 in order to be able to execute the 'valuable' O1, while still meeting the deadlines of all mandatory parts. By doing so, we succeed in executing two instances of O1, in contrast to any Mandatory-First scheme, which can execute only one instance of O1. Remembering that k1 ≫ k2, one can conclude that the reward accrued by the 'best' Mandatory-First scheme may only be around half of that accrued by the optimal one for this example. Also, observe that in the optimal schedule, the optional execution times of a given task did not vary from instance to instance. In the next section, we prove that this pattern is not a mere coincidence. We further perform an analytical worst-case analysis of Mandatory-First algorithms in Section 4.
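For concreteness, a short numerical check of this example (our sketch; k1 = 10 and k2 = 1 are hypothetical values with k1 ≫ k2) confirms the roughly factor-of-two gap between the two schedules:

```python
k1, k2 = 10.0, 1.0          # hypothetical values with k1 >> k2; hyperperiod P = 8
f1 = lambda t: k1 * t
f2 = lambda t: k2 * t

# Best Mandatory-First schedule (Figure 1): O1 runs once (1 unit), O2 runs 2 units.
rew_mfa = (4 / 8) * (f1(0) + f1(1)) + f2(2)

# Optimal schedule (Figure 2): O1 runs 1 unit in both instances, O2 runs 1 unit.
rew_opt = (4 / 8) * (f1(1) + f1(1)) + f2(1)

print(rew_mfa, rew_opt)     # 7.0 11.0; the ratio tends to 1/2 as k1/k2 grows
```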
3 Optimality of Full-Utilization Policies for Periodic Reward-Based Scheduling

The objective of the periodic reward-based scheduling problem is clearly to find optimal {tij} values that maximize the average reward. By substituting the average reward expression given by (2) into (3), we obtain our objective function:

\text{maximize} \quad \sum_{i=1}^{n} \frac{P_i}{P} \sum_{j=1}^{P/P_i} R_i(t_{ij})

The first constraint we must enforce is that the total processor demand of mandatory and optional parts during the hyperperiod P may not exceed the available computing capacity:

\sum_{i=1}^{n} \sum_{j=1}^{P/P_i} (m_i + t_{ij}) \le P

Note that this constraint is necessary, but not sufficient, for feasibility of the task set with the {mi} and {tij} values. Next, we observe that optimal tij values may not be less than zero, since negative service times do not have any physical interpretation. In addition, the service time of an optional instance of Ti need not exceed the upper bound oi of the reward function Ri(t), since the reward accrued by Ti ceases to increase after tij = oi. Hence, we obtain our second constraint set:

0 \le t_{ij} \le o_i \qquad i = 1, \ldots, n \quad j = 1, \ldots, P/P_i

The above constraint also allows us to readily substitute fi() for Ri() in the objective function. Finally, we need to express the "full" feasibility constraint, requiring that mandatory parts complete in a timely manner at every invocation. Note that it is sufficient to have one feasible schedule for each task Ti with mi and the involved optimal {tij} values.

To recapture all the constraints, the periodic reward-based scheduling problem, which we denote by REW-PER, is to find {tij} values so as to:

\text{maximize} \quad \sum_{i=1}^{n} \frac{P_i}{P} \sum_{j=1}^{P/P_i} f_i(t_{ij}) \qquad (4)

subject to

\sum_{i=1}^{n} \frac{P}{P_i} m_i + \sum_{i=1}^{n} \sum_{j=1}^{P/P_i} t_{ij} \le P \qquad (5)

0 \le t_{ij} \le o_i \qquad i = 1, \ldots, n \quad j = 1, \ldots, P/P_i \qquad (6)

A feasible schedule exists with the {mi} and {tij} values. \qquad (7)

We express this last constraint in English and not through formulas, since the algorithm producing this schedule, including the optimal tij assignments, need not be specified at this point.

Before stating our main result, we underline that if \sum_{i=1}^{n} \frac{P}{P_i} m_i > P, it is not possible to schedule the mandatory parts in a timely manner and the optimization problem has no solution. Note that this condition is equivalent to \sum_{i=1}^{n} m_i / P_i > 1, which indicates that the task set would be unschedulable even if it consisted of only mandatory parts. Hence, hereafter, we assume that \sum_{i=1}^{n} m_i / P_i \le 1 and therefore that there exists at least one feasible schedule.

Theorem 1 Given an instance of Problem REW-PER, there exists an optimal solution where the optional parts of a task Ti receive the same service time at every instance, i.e. tij = tik for 1 ≤ j, k ≤ P/Pi. Furthermore, any periodic hard real-time scheduling policy which can fully utilize the processor (EDF, LLF, RMS-h) can be used to obtain a feasible schedule with these assignments.

Proof: Our strategy to prove the theorem is as follows. We drop the feasibility condition (7) and obtain a new optimization problem whose feasible region strictly contains that of REW-PER. Specifically, we consider a new optimization problem, denoted by MAX-REW, where the objective function is again given by (4), but only the constraint sets (5) and (6) have to be satisfied. Note that the new problem MAX-REW does not a priori correspond to any scheduling problem, since the feasibility issue is not addressed. We then show that there exists an optimal solution of MAX-REW where tij = tik, 1 ≤ j, k ≤ P/Pi. Then, we return to REW-PER and demonstrate the existence of a feasible schedule (i.e. the satisfiability of (7)) under these assignments. The reward associated with MAX-REW's optimal solution is always greater than or equal to that of REW-PER's optimal solution, since MAX-REW does not consider one of REW-PER's constraints. This will imply that this specific optimal solution of the new problem MAX-REW is also an optimal solution of REW-PER.
Now, we show that there exists an optimal solution of MAX-REW where tij = tik, 1 ≤ j, k ≤ P/Pi.

Claim 1 Let {tij} be an optimal solution to MAX-REW, 1 ≤ i ≤ n, 1 ≤ j ≤ P/Pi = qi. Then {t'ij}, where t'_{i1} = t'_{i2} = \ldots = t'_{iq_i} = t'_i = (t_{i1} + t_{i2} + \ldots + t_{iq_i}) / q_i for 1 ≤ i ≤ n, 1 ≤ j ≤ qi, is also an optimal solution to MAX-REW.

We first show that the {t'ij} values satisfy the constraints (5) and (6) if the {tij} values already satisfy them. Since \sum_{j=1}^{q_i} t_{ij} = \sum_{j=1}^{q_i} t'_{ij} = q_i t'_i, constraint (5) is not violated by the transformation. Also, by assumption, t_{ij} \le o_i for all j, which implies \max_j \{t_{ij}\} \le o_i. Since t'_i, which is the arithmetic mean of t_{i1}, t_{i2}, \ldots, t_{iq_i}, is necessarily less than or equal to \max_j \{t_{ij}\}, the constraint set (6) is not violated by the transformation either.

Furthermore, the total reward does not decrease by this transformation, since \sum_{j=1}^{q_i} f_i(t_{ij}) \le q_i f_i(t'_i) (an instance of Jensen's inequality for the concave functions fi). The proof of this statement is presented in the Appendix.

Using Claim 1, we can commit to finding an optimal solution of MAX-REW by setting t_{i1} = t_{i2} = \ldots = t_{iq_i} = t_i, i = 1, \ldots, n. In this case, \sum_{j=1}^{P/P_i} f_i(t_{ij}) = \frac{P}{P_i} f_i(t_i) and \sum_{j=1}^{P/P_i} t_{ij} = \frac{P}{P_i} t_i. Hence, this version of MAX-REW can be rewritten as:

\text{maximize} \quad \sum_{i=1}^{n} f_i(t_i) \qquad (8)

subject to

\sum_{i=1}^{n} \frac{P}{P_i} t_i \le P - \sum_{i=1}^{n} \frac{P}{P_i} m_i \qquad (9)

0 \le t_i \le o_i \qquad i = 1, \ldots, n \qquad (10)

Finally, we prove that the optimal solution t1, t2, ..., tn of MAX-REW above automatically satisfies the feasibility constraint (7) of our original problem REW-PER. Having equal optional service times for a given task greatly simplifies the verification of this constraint. Since t1, t2, ..., tn (by assumption) satisfy (9), we can write \sum_{i=1}^{n} \frac{P}{P_i}(m_i + t_i) \le P, or equivalently, \sum_{i=1}^{n} \frac{m_i + t_i}{P_i} \le 1.

This implies that any policy which can achieve 100% processor utilization in classical periodic scheduling theory (EDF, LLF, RMS-h) can be used to obtain a feasible schedule for the tasks, which now have identical execution times mi + ti at every instance. Hence, the "full feasibility" constraint (7) of REW-PER is satisfied. Furthermore, this schedule clearly maximizes the average reward, since the {ti} values maximize MAX-REW, whose feasible region encompasses that of REW-PER. □
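The utilization test in this last step is easy to mechanize. Below is a minimal sketch (ours; plain Python lists as a hypothetical task-set representation) of the check \sum_i (m_i + t_i)/P_i \le 1 that licenses the use of EDF, LLF or RMS-h with the constant assignments:

```python
def fully_utilizable(periods, mandatory, optional):
    """Check sum_i (m_i + t_i) / P_i <= 1: the condition under which a
    full-utilization policy (EDF, LLF, RMS-h) can feasibly schedule tasks
    whose every instance runs for exactly m_i + t_i time units."""
    return sum((m + t) / p for p, m, t in zip(periods, mandatory, optional)) <= 1.0

# The two tasks of the motivating example with t1 = t2 = 1:
print(fully_utilizable([4, 8], [1, 3], [1, 1]))   # (1+1)/4 + (3+1)/8 = 1.0 -> True
```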
Corollary 1 Optimal ti values for the Problem REW-PER can be found by solving the optimization problem given by (8), (9) and (10).

We discuss the solution of this concave optimization problem in Section 6.
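The paper's own algorithm appears in Section 6; purely as an illustration of the structure of (8)-(10), the sketch below applies a textbook Karush-Kuhn-Tucker argument to (9) divided through by the hyperperiod P. For strictly concave, increasing, differentiable fi, each optimal ti solves fi'(ti) = λ/Pi, clipped to [0, oi], and the multiplier λ is found by bisection on the capacity constraint. All function and parameter names here are our own assumptions, not the authors' method.

```python
def solve_max_rew(periods, mandatory, o, dfs, iters=100):
    """Sketch:  maximize sum_i f_i(t_i)  subject to
    sum_i t_i / P_i <= 1 - sum_i m_i / P_i  and  0 <= t_i <= o_i,
    for concave increasing f_i with derivatives dfs[i] (assumed decreasing)."""
    slack = 1.0 - sum(m / p for m, p in zip(mandatory, periods))
    assert slack >= 0, "mandatory parts alone are unschedulable"

    def t_of(mu):
        # For multiplier mu, t_i solves f_i'(t_i) = mu / P_i, clipped to [0, o_i].
        ts = []
        for p, oi, df in zip(periods, o, dfs):
            target = mu / p
            if df(0.0) <= target:
                ts.append(0.0)
            elif df(oi) >= target:
                ts.append(oi)
            else:
                lo, hi = 0.0, oi                  # bisection: df is decreasing
                for _ in range(iters):
                    mid = (lo + hi) / 2
                    lo, hi = (mid, hi) if df(mid) > target else (lo, mid)
                ts.append((lo + hi) / 2)
        return ts

    def used(ts):
        return sum(t / p for t, p in zip(ts, periods))

    if used(t_of(0.0)) <= slack:                  # capacity not binding: t_i = o_i
        return t_of(0.0)
    lo, hi = 0.0, 1.0
    while used(t_of(hi)) > slack:                 # bracket the multiplier from above
        hi *= 2.0
    for _ in range(iters):                        # bisect on the multiplier
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if used(t_of(mid)) > slack else (lo, mid)
    return t_of(hi)

# Hypothetical usage: the motivating example's tasks with concave rewards
# f1(t) = 10 ln(1 + t) and f2(t) = ln(1 + t); yields approximately [1.0, 1.0].
print(solve_max_rew([4, 8], [1, 3], [1, 5],
                    [lambda t: 10 / (1 + t), lambda t: 1 / (1 + t)]))
```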
cessor utilization in classical periodic scheduling theory (EDF,
LLF, RMS-h) can be used to obtain a feasible schedule for
tasks, which have now identical execution times mi + ti at               The average reward that the best mandatory-first algorithm
every instance. Hence, the “full feasibility” constraint (7) of        (MFA) can accrue is therefore:

                                                                                                        RMFA = f1 o1  + f2 t2 
REW-PER is satisfied. Furthermore, this schedule clearly max-
imizes the average reward since fti g values maximize MAX-
REW whose feasible region encompasses that of REW-PER.
                                                                                                                  r
                                                                         However, an optimal algorithm (shown in Figure 4) would
2                                                                      choose delaying the execution of M2 for o1 units of time, at
                            1
                            0
                            1
                            0 ........        1
                                              0
                                              0 00
                                              1 11   0
                                                     1
                                                     0
                                                     1
         11
         00     11
                00     00
                       11   1
                            0             11
                                          00  1 11
                                              0 00   0
                                                     1                                      usually much higher than the other five policies, BIR is used as
         11
         00     11
             m 00   m 00
                       11   0
                            1             11
                                       m 00   1 11
                                              0 1 00 0
                                                     1
         11
       1 00     11
              1 00     11
                     1 00                 11
                                        1 00
         o
         m      o      o                  o   m o
         11
         001
                11
                001
                       11
                       001
                            0
                            1
                            0
                            1             11
                                          001
                                              1 00
                                              0 11
                                              0
                                              1
                                                   1
                                                     0
                                                     1
                                                     1
                                                     0
                                                                                            a yardstick for measuring the performance of other algorithms.
     1111111111111111111111111111111111111111111111
     0000000000000000000000000000000000000000000000
     0            P           2.P1        3.P                (r-2).P 1   (r-1).P 1    rP
                      1                         1                                       1      We have used a synthetic task set comprising 11 tasks whose
                                                    0
                                                    1                                       total (mandatory + optional) utilization is 2.3. Individual task
                                                    0
                                                    1
             m2             m2       m2             1
                                                    0
                                                    ......
                                                    1
                                                    0                    m2          m2     utilizations vary from 0.03 to 0.6. Considering exponential,
             r              r        r              1
                                                    0
                                                    0
                                                    1
                                                                         r           r      logarithmic and linear reward functions as separate cases, we
     1111111111111111111111111111111111111111111111
     0000000000000000000000000000000000000000000000
     0                                            P
                                                    2                                       compared the reward of six Mandatory-First schemes with our
                                                                                            optimal algorithm (OPT). The tasks’ characteristics (including
                          Figure 4. An optimal schedule                                     reward functions) are given in the Table below. In our exper-
every period of T1. By doing so, it would have the opportunity of accruing the reward of O1 at every instance.

The total reward of the optimal schedule is:

    R_{OPT} = r \, f_1(o_1 / r) = f_1(o_1)

where the second equality holds since f_1 is linear in this construction.
The ratio of the rewards accrued by the two policies turns out to be (for any r >= 2):

    \frac{R_{MFA}}{R_{OPT}} = \frac{1 + f_2(t_2)}{r \, f_1(o_1 / r)} = \frac{2}{r}

which can be made as close as possible to 0 by appropriately choosing r (i.e., choosing a large value for r). □
Theorem 2 gives the worst-case performance ratio of Mandatory-First schemes. We also performed experiments with a synthetic task set to investigate the relative performance of the Mandatory-First schemes proposed in [5] with different types of reward functions and different mandatory/optional workload ratios.

The Mandatory-First schemes differ by the policy according to which optional parts are scheduled when there is no mandatory part ready to execute. The Rate-Monotonic (RMSO) and Least-Utilization (LU) schemes assign statically higher priorities to the optional parts with smaller periods and least utilizations, respectively. Among the dynamic priority schemes are Earliest-Deadline-First (EDFO) and Least-Laxity-First (LLFO), which consider the deadline and laxity of optional parts when assigning priorities. Least-Attained-Time (LAT) aims at balancing the execution times of the optional parts that are ready, by dispatching the one that has executed least so far. Finally, Best Incremental Return (BIR) is an on-line policy which chooses, at a given slot, the optional task contributing most to the total reward. In other words, at every slot BIR selects the optional part O_ij such that the difference f_i(t_ij + Δ) - f_i(t_ij) is the largest (here t_ij is the optional service time O_ij has already received, and Δ is the minimum time slot that the scheduler assigns to any optional task). However, it is still a sub-optimal policy, since it does not consider the laxity information. The authors indicate in [5] that BIR is too computationally complex to be actually implemented; however, the total reward accrued by BIR serves as a yardstick for measuring the performance of the other algorithms.
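For concreteness, BIR's slot-by-slot selection rule can be sketched as follows. This is our own minimal illustration (the function name and the (f, t, o) task encoding are ours), not the implementation evaluated in [5]:

    def bir_select(ready_optionals, delta):
        # One BIR dispatching decision: among the ready optional parts,
        # pick the one whose next slot of length delta adds the most reward.
        # ready_optionals: list of (f, t, o) with f the reward function,
        # t the optional service received so far, o the upper bound.
        best, best_gain = None, 0.0
        for idx, (f, t, o) in enumerate(ready_optionals):
            if t + delta > o:              # optional part already exhausted
                continue
            gain = f(t + delta) - f(t)     # marginal reward of one more slot
            if gain > best_gain:
                best, best_gain = idx, gain
        return best                        # None: no optional part can gain

Since f_i is concave, the gain f_i(t + Δ) - f_i(t) shrinks as t grows, which is why BIR (like LAT) tends to balance optional service times for strictly concave rewards.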
We have used a synthetic task set comprising 11 tasks whose total (mandatory + optional) utilization is 2.3. Individual task utilizations vary from 0.03 to 0.6. Considering exponential, logarithmic and linear reward functions as separate cases, we compared the reward of six Mandatory-First schemes with our optimal algorithm (OPT). The tasks' characteristics (including reward functions) are given in the table below. In our experiments, we first set the mandatory utilization to 0 (which corresponds to the case of an all-optional workload), then increased it to 0.25, 0.4, 0.6, 0.8 and 0.91 subsequently.

    Task   P_i    m_i + o_i   f_i^1(t) (exponential)   f_i^2(t) (logarithmic)   f_i^3(t) (linear)
    T1     20     10          15(1 - e^{-3t})          7 ln(20t + 1)            5t
    T2     30     18          20(1 - e^{-...t})        10 ln(50t + 1)           7t
    T3     40     5           4(1 - e^{-t})            2 ln(10t + 1)            2t
    T4     60     2           10(1 - e^{-0.5t})        5 ln(5t + 1)             4t
    T5     60     2           10(1 - e^{-0.2t})        5 ln(25t + 1)            4t
    T6     80     12          5(1 - e^{-t})            3 ln(30t + 1)            2t
    T7     90     18          17(1 - e^{-t})           8 ln(8t + 1)             6t
    T8     120    15          8(1 - e^{-...t})         4 ln(6t + 1)             3t
    T9     240    28          8(1 - e^{-t})            4 ln(9t + 1)             3t
    T10    270    60          12(1 - e^{-0.5t})        6 ln(12t + 1)            5t
    T11    2160   300         5(1 - e^{-...t})         3 ln(15t + 1)            2t

In our experiments, a common pattern appears: the optimal algorithm improves more dramatically with the increase in mandatory utilization. The other schemes miss the opportunities of executing "valuable" optional parts while constantly favoring mandatory parts. The reward loss becomes striking as the mandatory workload increases. Figures 5 and 6 show the reward ratio for the case of exponential and logarithmic reward functions, respectively. The curves for these strictly concave reward functions are fairly similar: BIR performs best among Mandatory-First schemes, and its performance degrades as the mandatory utilization increases; for instance, the ratio falls to 0.73 when the mandatory utilization is 0.6. Other algorithms, which are more amenable to practical implementation (in terms of runtime overhead) than BIR, perform even worse. However, it is worth noting that the performance of LAT is close to that of BIR. This is to be expected, since task sets with strictly concave reward functions usually benefit from "balanced" optional service times.

[Figure: reward ratio with respect to optimal (y-axis, 0.10-1.00) vs. mandatory utilization (x-axis, 0.1-0.9); curves for OPT, BIR, LAT, RMSO, LLFO, EDFO and LU]

Figure 5. The Reward Ratio of Mandatory-First schemes: exponential reward functions

[Figure: reward ratio with respect to optimal (y-axis, 0.10-1.00) vs. mandatory utilization (x-axis, 0.1-0.9); curves for OPT, BIR, LLFO, RMSO, EDFO, LAT and LU]

Figure 6. The Reward Ratio of Mandatory-First schemes: logarithmic reward functions

Figure 7 shows the reward ratio for linear reward functions. Although the reward ratio of Mandatory-First schemes again decreases with the mandatory utilization, the decrease is less dramatic than in the case of concave functions. However, note that the ratio is typically less than 0.5 for the five practical schemes. It is interesting to observe that the (impractical) BIR's reward now remains comparable to that of the optimal, even at the higher mandatory utilizations: the difference is less than 15%. The main reason for this behavior change lies in the fact that, for a given task, the reward of optional execution slots in different instances does not make a difference in the linear case. In contrast, not executing the "valuable" first slot(s) of a given instance creates a tremendous effect for nonlinear concave functions.

[Figure: reward ratio with respect to optimal (y-axis, 0.10-1.00) vs. mandatory utilization (x-axis, 0.1-0.9); curves for OPT, BIR, LLFO, RMSO, EDFO, LAT and LU]

Figure 7. The Reward Ratio of Mandatory-First schemes: linear reward functions

The improvement of the optimal algorithm would be larger for a larger range of k_i values (where k_i is the coefficient of the linear reward function). We recall that the worst-case performance of BIR may be arbitrarily bad with respect to the optimal one for linear functions, as Theorem 2 suggests.
5 Periodic Reward-Based Scheduling Problem with Convex Reward Functions is NP-Hard

As we mentioned before, maximizing the total (or average) reward in the case with 0/1 constraints had already been proven to be NP-Complete in [15]. In this section, we show that convex reward functions also result in an NP-Hard problem. We now show how to transform the SUBSET-SUM problem, which is known to be NP-Complete, to REW-PER with convex reward functions.

SUBSET-SUM: Given a set S = {s_1, s_2, ..., s_n} of positive integers and an integer M, is there a set S_A ⊆ S such that \sum_{s_i \in S_A} s_i = M?

We construct the corresponding REW-PER instance as follows. Let W = \sum_{i=1}^{n} s_i. Now consider a set of n periodic tasks with the same period M and mandatory parts m_i = 0 ∀i. The reward function associated with T_i is given by:

    R_i(t_i) = f_i(t_i)  if 0 \le t_i \le o_i = s_i
    R_i(t_i) = f_i(o_i)  if t_i > o_i

where f_i(t_i) = t_i^2 + (W - s_i) t_i is a strictly convex and increasing function on the nonnegative real numbers.

Notice that f_i(t_i) can be re-written as t_i(t_i - s_i) + W t_i. Also, we underline that having the same period for all tasks implies that REW-PER can be formulated as:

    maximize    \sum_{i=1}^{n} t_i(t_i - s_i) + W \sum_{i=1}^{n} t_i    (11)
    subject to  \sum_{i=1}^{n} t_i \le M                                (12)
                0 \le t_i \le s_i,  i = 1, 2, ..., n                    (13)

Let us denote by MaxRew the total reward of the optimal schedule. Observe that for 0 < t_i < s_i, the quantity t_i(t_i - s_i) < 0. Otherwise, at either of the boundary values 0 or s_i, t_i(t_i - s_i) = 0. Hence, MaxRew \le WM.

Now, consider the question: "Is MaxRew equal to WM?" Clearly, this question can be answered quickly if there is a polynomial-time algorithm for REW-PER where reward functions are allowed to be convex. Furthermore, the answer can be positive only when \sum_{i=1}^{n} t_i = M and each t_i is equal to either 0 or s_i. Therefore, MaxRew is equal to WM if and only if there is a set S_A ⊆ S such that \sum_{s_i \in S_A} s_i = M, which implies that REW-PER with convex reward functions is NP-Hard.
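For illustration, the reduction can be stated programmatically. The sketch below is ours (the dictionary-based task encoding is an assumption, not the paper's notation); it builds the REW-PER instance used in the proof:

    def subset_sum_to_rewper(s, M):
        # Build the REW-PER instance of the proof: n tasks with common
        # period M, no mandatory parts, o_i = s_i, and the strictly convex
        # reward f_i(t) = t^2 + (W - s_i) t = t (t - s_i) + W t.
        W = sum(s)
        tasks = [{
            "period": M,
            "mandatory": 0,
            "optional": si,
            "reward": (lambda t, si=si: t * t + (W - si) * t),
        } for si in s]
        # The SUBSET-SUM instance is a "yes" instance iff the optimal
        # total reward MaxRew of this task set equals W * M.
        return tasks, W * M

A "yes" answer to the SUBSET-SUM instance corresponds exactly to a schedule of this task set achieving total reward W·M.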
6 Solution of Periodic Reward-Based Scheduling Problem with Concave Reward Functions

Corollary 1 reveals that the optimization problem whose solution provides optimal service times is of the form:

    maximize    \sum_{i=1}^{n} f_i(t_i)
    subject to  \sum_{i=1}^{n} b_i t_i \le d
                t_i \le o_i,  i = 1, 2, ..., n
                0 \le t_i,    i = 1, 2, ..., n

where d (the "slack" available for optional parts) and b_1, b_2, ..., b_n are positive rational numbers. In this section, we present polynomial-time solutions for this problem, where each f_i is a nondecreasing, concave and differentiable¹ function.

First note that if the available slack is large enough to accommodate every optional part entirely (i.e., if \sum_{i=1}^{n} b_i o_i \le d), then the choice t_i = o_i ∀i clearly maximizes the objective function, due to the nondecreasing nature of the reward functions.

Otherwise, the slack d should be used in its entirety, since the total reward never decreases by doing so (again due to the nondecreasing nature of the reward functions). In this case, we obtain a concave optimization problem with lower and upper bounds, denoted by OPT-LU. An instance of OPT-LU is specified by the set of nondecreasing concave functions F = {f_1, ..., f_n}, the set of upper bounds O = {o_1, ..., o_n} and the available slack d. The aim is to:

    maximize    \sum_{i=1}^{n} f_i(t_i)                (14)
    subject to  \sum_{i=1}^{n} b_i t_i = d             (15)
                t_i \le o_i,  i = 1, 2, ..., n         (16)
                0 \le t_i,    i = 1, 2, ..., n         (17)

where 0 < d < \sum_{i=1}^{n} b_i o_i.

¹ In the auxiliary optimization problems which will be introduced shortly, the differentiability assumption holds as well.
Special Case of Linear Functions: If F comprises solely linear functions, the solution can be considerably simplified. Note that for a function f_i(t_i) = k_i t_i, if we increase t_i by Δ then the total reward increases by k_i Δ. However, by doing so, we make use of b_i Δ units of slack (d is reduced by b_i Δ due to (15)). Hence, the "marginal return" of task T_i per slack unit is w_i = k_i / b_i. It is therefore possible to order the functions according to their marginal returns and distribute the slack in decreasing order of marginal returns, while taking into account the upper bounds. We note that this solution is analogous to the one presented in [17]. The dominant factor in the time complexity comes from the initial sorting procedure; hence, in the special case of all-linear functions, OPT-LU can be solved in time O(n log n), as sketched below.
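A minimal sketch of this greedy allocation (ours; k, b and o are the coefficient, weight and upper-bound vectors):

    def opt_lu_linear(k, b, o, d):
        # Greedy OPT-LU for all-linear rewards f_i(t) = k[i] * t: serve
        # tasks in decreasing order of marginal return k_i / b_i; the
        # sort dominates the running time, giving O(n log n).
        t = [0.0] * len(k)
        for i in sorted(range(len(k)), key=lambda i: k[i] / b[i], reverse=True):
            if d <= 0:
                break
            t[i] = min(o[i], d / b[i])   # as much as the upper bound allows
            d -= b[i] * t[i]
        return t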
When F contains nonlinear functions, the procedure becomes more involved. In the next two subsections, we introduce two auxiliary optimization problems, namely Problem OPT (which considers only the equality constraint) and Problem OPT-L (which considers only the equality and lower bound constraints), which will be used to solve OPT-LU.

6.1 Problem OPT: Equality Constraints Only

The problem OPT is characterized by:

    maximize    \sum_{i=1}^{n} f_i(t_i)
    subject to  \sum_{i=1}^{n} b_i t_i = d

where F = {f_1, ..., f_n} is the set of nondecreasing concave functions, possibly including some linear function(s). As can be seen, OPT does not consider the lower and upper bound constraints of Problem OPT-LU. The algorithm which returns the solution of Problem OPT is denoted by "Algorithm OPT".

When F is composed solely of nonlinear reward functions, the application of the Lagrange multipliers technique to Problem OPT yields:

    \frac{1}{b_i} f_i'(t_i) - \lambda = 0,  i = 1, 2, ..., n    (18)

where λ is the common Lagrange multiplier and f_i'(t_i) is the derivative of the reward function f_i. The quantity (1/b_i) f_i'(t_i) in (18) actually represents the marginal return contributed by T_i to the total reward, which we will denote by w_i(t_i). Observe that since f_i(t_i) is nondecreasing and concave by assumption, both w_i(t_i) and f_i'(t_i) are non-increasing and positive-valued. Equation (18) implies that the marginal returns w_i(t_i) = (1/b_i) f_i'(t_i) should be equal for all reward functions in the optimal solution {t_1, ..., t_n}. Considering that the equality \sum_{i=1}^{n} b_i t_i = d should also hold, one can obtain closed formulas in most of the cases which occur in practice. The closed formulas presented below are obtained by this method.

For logarithmic reward functions of the form f_i(t_i) = ln(k_i t_i + c_i):

    t_1 = \frac{d + \sum_{j=1}^{n} c_j b_j / k_j}{n \, b_1} - \frac{c_1}{k_1}
    t_j = \frac{b_1}{b_j} t_1 + \frac{c_1 b_1}{k_1 b_j} - \frac{c_j}{k_j},  \forall j, 1 < j \le n

For exponential reward functions of the form f_i(t_i) = c_i (1 - e^{-k_i t_i}):

    t_1 = \frac{d - \sum_{j=1}^{n} \frac{b_j}{k_j} \ln\frac{c_j b_1 k_j}{c_1 b_j k_1}}{k_1 \sum_{j=1}^{n} b_j / k_j}
    t_j = \frac{1}{k_j} \left( k_1 t_1 + \ln\frac{c_j b_1 k_j}{c_1 b_j k_1} \right),  \forall j, 1 < j \le n

For "kth root" reward functions of the form f_i(t_i) = c_i t_i^{1/k}, k > 1:

    t_1 = \frac{d}{\sum_{j=1}^{n} b_j \left( \frac{c_j b_1}{c_1 b_j} \right)^{k/(k-1)}}
    t_j = t_1 \left( \frac{c_j b_1}{c_1 b_j} \right)^{k/(k-1)},  \forall j, 1 < j \le n

When it is not possible to find a closed formula, following exactly the approach presented in [10, 13], we solve for λ in the equation \sum_{i=1}^{n} b_i h_i(\lambda) = d, where h_i(λ) is the inverse function of (1/b_i) f_i'(t_i) = w_i(t_i) (we assume the existence of the derivative's inverse function whenever f_i is nonlinear, complying with [10, 13]). Once λ is determined, t_i = h_i(λ), i = 1, ..., n, is the optimal solution.
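When no closed formula applies, λ can also be found numerically. Below is a minimal bisection sketch (ours; it assumes the inverse marginal-return functions h_i are supplied by the caller as callables and are non-increasing, per [10, 13]):

    def algorithm_opt(h, b, d, lam_lo=1e-9, lam_hi=1e9, iters=200):
        # Problem OPT (equality constraint only): find the common
        # multiplier lam with sum_i b_i * h_i(lam) = d.  Each h_i is
        # non-increasing, so the total demand is non-increasing in lam
        # and bisection applies.
        def demand(lam):
            return sum(bi * hi(lam) for hi, bi in zip(h, b))
        for _ in range(iters):
            lam = (lam_lo + lam_hi) / 2
            if demand(lam) > d:
                lam_lo = lam     # demand too high: the true lam is larger
            else:
                lam_hi = lam
        lam = (lam_lo + lam_hi) / 2
        return [hi(lam) for hi in h]

For instance, for the exponential rewards above, h_i(λ) = (1/k_i) ln(c_i k_i / (b_i λ)).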
We now examine the case where F is a mix of linear and nonlinear functions. Consider two linear functions f_i(t) = k_i t and f_j(t) = k_j t. The marginal return of f_i is w_i(t_i) = k_i / b_i = w_i and that of f_j is w_j(t_j) = k_j / b_j = w_j. If w_j > w_i, then the service time t_i should definitely be zero, since the marginal return of f_i is strictly less than that of f_j everywhere. After this elimination process, if there are p > 1 linear functions with the same (largest) marginal return w_max, then we will consider them as a single linear function in the procedure below and evenly divide the returned service time t_max among the t_j values corresponding to these p functions.

Hence, without loss of generality, we assume that f_n(t) = k_n t is the only linear function in F. Its marginal return is w_n(t_n) = k_n / b_n = w_max. We first compute the optimal distribution of the slack d among the tasks with nonlinear reward functions f_1, ..., f_{n-1}. By the Lagrange multipliers technique, w_i(t_i) - λ = 0, and thus w_1(t*_1) = w_2(t*_2) = ... = w_{n-1}(t*_{n-1}) = λ* at the optimal solution t*_1, t*_2, ..., t*_{n-1}.

Now we distinguish two cases:

• λ* >= w_max. In this case, t*_1, t*_2, ..., t*_{n-1} and t_n = 0 is the optimal solution to OPT. To see this, first remember that all the reward functions are concave and nondecreasing; hence w_i(t*_i - Δ) >= w_i(t*_i) = λ* >= w_n(Δ) = w_max, i = 1, ..., n-1, for all Δ >= 0. This implies that transferring some service time from another task T_i to T_n would mean favoring the task which has the smaller marginal reward rate, and would not be optimal.

• λ* < w_max. In this case, reserving the slack d solely for the tasks with nonlinear reward functions would violate the best-marginal-rate principle and hence is not optimal. Therefore, we should increase the service time t_i until w_i(t_i) drops to the level of w_max for i = 1, 2, ..., n-1, but not beyond. Solving h_i(w_max) = t_i for i = 1, 2, ..., n-1 and assigning the remaining slack (d - \sum_{i=1}^{n-1} b_i t_i) / b_n to t_n (the service time of the unique task with a linear reward function) clearly satisfies the best-marginal-rate principle and achieves optimality.
6.2 Problem OPT-L: Equality Constraints with Lower Bounds

Now, we consider the optimization problem with the equality and lower bound constraints, denoted by OPT-L. An instance of Problem OPT-L is characterized by the set F = {f_1, f_2, ..., f_n} of nondecreasing concave reward functions, and the available slack d:

    maximize    \sum_{i=1}^{n} f_i(t_i)                (19)
    subject to  \sum_{i=1}^{n} b_i t_i = d             (20)
                0 \le t_i,  i = 1, 2, ..., n           (21)

To solve OPT-L, we first evaluate the solution set S_OPT of the corresponding problem OPT and check whether all inequality constraints are automatically satisfied. If this is the case, the solution set S_OPT-L of Problem OPT-L is clearly the solution S_OPT. Otherwise, we construct S_OPT-L iteratively, as described below.

Let Ψ = {x | (1/b_x) f_x'(0) \le (1/b_i) f_i'(0) ∀i}. Remember that (1/b_x) f_x'(t_x) is the marginal return associated with f_x(t_x), denoted by w_x(t_x). Informally, Ψ contains the functions² f_x ∈ F with the smallest marginal returns at the lower bound 0, w_x(0).

Lemma 1 If S_OPT violates some lower bound constraints of Problem OPT-L, then, in the optimal solution, t_m = 0 ∀m ∈ Ψ.

The proof of Lemma 1 is based on Kuhn-Tucker optimality conditions for nonlinear optimization problems and is not presented here for lack of space (the complete proof can be found in [1]). The time complexity C_OPT(n) of Algorithm OPT is O(n): if the closed formulas mentioned above apply, the complexity is clearly linear; otherwise, the unique unknown λ can be solved in linear time under concavity assumptions, as indicated in [10, 13]. Lemma 1 immediately implies the existence of an algorithm which sets t_m = 0 ∀m ∈ Ψ and then re-invokes Algorithm OPT for the remaining tasks and slack (in case some inequality constraints are violated by S_OPT). Since the number of invocations is bounded by n, the complexity of the algorithm which solves OPT-L is O(n²); a sketch is given below.

Furthermore, it is possible to converge to the solution in time O(n log n) by using a binary-search-like technique on Lagrange multipliers. Again, full details and the correctness proof of this faster approach can be found in [1].

² We use the expression "functions" instead of "indices of functions" unless confusion arises.
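A compact sketch of this O(n²) iteration (ours; it reuses algorithm_opt from the earlier sketch and receives the derivatives f_i' to evaluate marginal returns at 0):

    def solve_opt_l(fprime, h, b, d):
        # O(n^2) OPT-L sketch built on Lemma 1: whenever the OPT solution
        # violates a lower bound, the tasks with the smallest marginal
        # return at 0 get t = 0, and OPT is re-invoked on the rest.
        active = set(range(len(h)))
        t = [0.0] * len(h)
        while active:
            ids = sorted(active)
            sol = algorithm_opt([h[i] for i in ids], [b[i] for i in ids], d)
            if all(ti >= -1e-12 for ti in sol):      # no violation: done
                for i, ti in zip(ids, sol):
                    t[i] = max(ti, 0.0)
                return t
            w0 = {i: fprime[i](0.0) / b[i] for i in ids}  # marginal returns at 0
            w_min = min(w0.values())
            active -= {i for i in ids if w0[i] <= w_min + 1e-12}  # the set Psi
        return t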
6.3 Problem OPT-LU: Equality Constraints with Upper and Lower Bounds

An instance of Problem OPT-LU is characterized by the set F = {f_1, f_2, ..., f_n} of nondecreasing, differentiable, and concave reward functions, the set O = {o_1, o_2, ..., o_n} of upper bounds on optional parts, and the available slack d:

    maximize    \sum_{i=1}^{n} f_i(t_i)                (22)
    subject to  \sum_{i=1}^{n} b_i t_i = d             (23)
                t_i \le o_i,  i = 1, 2, ..., n         (24)
                0 \le t_i,    i = 1, 2, ..., n         (25)

We first observe the close relationship between the problems OPT-LU and OPT-L. Indeed, OPT-LU has only an additional set of upper bound constraints. It is not difficult to see that if S_OPT-L satisfies the constraints given by Equation (24), then the solution S_OPT-LU of Problem OPT-LU is the same as S_OPT-L. However, if an upper bound constraint is violated, then we construct the solution iteratively, in a way analogous to that described in the solution of Problem OPT-L.

Let Γ = {x | (1/b_x) f_x'(o_x) >= (1/b_i) f_i'(o_i) ∀i}. Informally, Γ contains the functions f_x ∈ F with the largest marginal returns at the upper bounds, w_x(o_x).

Lemma 2 If S_OPT-L violates some upper bound constraints of Problem OPT-LU, then, in the optimal solution, t_m = o_m ∀m ∈ Γ.

The proof of Lemma 2 is again based on Kuhn-Tucker optimality conditions and can be found in [1].

The algorithm ALG-OPT-LU (see Figure 8), which finds the solution to Problem OPT-LU, is based on successively solving instances of OPT-L. First, we find the solution S_OPT-L of the corresponding problem OPT-L. We know that this solution is optimal for the simpler problem which does not take into account upper bounds. If the upper bound constraints are automatically satisfied, then we are done. However, if this is not the case, we set t_q = o_q ∀q ∈ Γ. Finally, we update the sets F, O and the slack d to reflect the fact that the values of t_m ∀m ∈ Γ have been fixed.
    Algorithm OPT-LU(F, O, d)
    1.  Set S_OPT-LU = ∅
    2.  if F = ∅ then exit
    3.  Evaluate S_OPT-L              /* consider only lower bounds */
    4.  if all upper bound constraints are satisfied then
            S_OPT-LU = S_OPT-LU ∪ S_OPT-L; exit
    5.  Compute Γ
    6.  Set t_q = o_q ∀q ∈ Γ, in S_OPT-LU
    7.  Set d = d - \sum_{x ∈ Γ} b_x o_x
    8.  Set F = F - Γ
    9.  Set O = O - {o_x | x ∈ Γ}
    10. Goto Step 2

Figure 8. Algorithm to solve Problem OPT-LU
Complexity: Notice that the worst-case time complexity of each iteration is equal to that of Algorithm OPT-L, which is O(n log n). Furthermore, the cardinality of F decreases by at least 1 after each iteration. Hence, the number of iterations is bounded by n. It follows that the total time complexity of Algorithm OPT-LU is O(n² log n). However, in the case of all-linear functions, OPT-LU can be solved in time O(n log n), as shown before.

7 Conclusion

In this paper, we have addressed the periodic reward-based scheduling problem in the context of uniprocessor systems. We proved that when the reward functions are convex, the problem is NP-Hard. Thus, our focus was on linear and concave reward functions, which adequately represent realistic applications such as image and speech processing, time-dependent planning and multimedia presentations. We have shown that there always exists a schedule in which the optional execution times of a given task do not change from instance to instance. This result, in turn, implied the optimality of any periodic real-time policy which can achieve 100% utilization of the processor. The existence of such policies is well known in the real-time systems community: RMS-h, EDF and LLF. We have also presented polynomial-time algorithms for computing the optimal service times.

We underline that besides a clear and observable reward improvement over previously proposed sub-optimal policies, our approach has the advantage of not requiring any runtime overhead for maximizing the reward of the system and for constantly monitoring the timeliness of mandatory parts. Once the optimal optional service times are determined statically by our algorithm, an existing (e.g., EDF) scheduler does not need to be modified or to be aware of the mandatory/optional semantic distinction at all. This appears as another major benefit of having pre-computed, optimal and equal service times for a given task's invocations in reward-based scheduling.

In addition, Theorem 1 implies that as long as we are concerned with linear and concave reward functions, the resource allocation can also be made in terms of utilizations of tasks without sacrificing optimality. In our opinion, this fact points to an interesting convergence of the instance-based [5, 15] and utilization-based [17] models for the most common reward functions.

Regarding tractability issues related to the nature of reward functions, the case of step functions was already proven to be NP-Complete [15]. By solving efficiently the case of concave and linear reward functions, and by proving that the case of convex reward functions is NP-Hard, we believe that the boundaries of efficient solvability in (periodic or aperiodic) reward-based scheduling have been reached by our work in this respect (assuming P ≠ NP).

We believe that considering dynamic aperiodic task arrivals and fault tolerance issues, and investigating good approximation algorithms for intractable cases such as step functions and error-cumulative jobs, are major avenues for future work in reward-based scheduling.
Acknowledgements: The authors would like to thank the anonymous reviewers whose suggestions helped to improve the paper.

References

[1] H. Aydın, R. Melhem and D. Mossé. A Polynomial-time Algorithm to solve Reward-Based Scheduling Problem. Technical Report 99-10, Department of Computer Science, University of Pittsburgh, April 1999.
[2] G. Bernat and A. Burns. Combining (n,m) Hard deadlines and Dual Priority scheduling. In Proceedings of the 18th IEEE Real-Time Systems Symposium, December 1997.
[3] M. Boddy and T. Dean. Solving time-dependent planning problems. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, IJCAI-89, August 1989.
[4] E. Chang and A. Zakhor. Scalable Video Coding using 3-D Subband Velocity Coding and Multi-Rate Quantization. In IEEE Int. Conf. on Acoustics, Speech and Signal Processing, July 1993.
[5] J.-Y. Chung, J. W.-S. Liu and K.-J. Lin. Scheduling periodic jobs that allow imprecise results. IEEE Transactions on Computers, 39(9):1156-1173, September 1990.
[6] W. Feng and J. W.-S. Liu. An extended imprecise computation model for time-constrained speech processing and generation. In Proceedings of the IEEE Workshop on Real-Time Applications, May 1993.
[7] J. Grass and S. Zilberstein. Value-Driven Information Gathering. In AAAI Workshop on Building Resource-Bounded Reasoning Systems, Rhode Island, 1997.
[8] M. Hamdaoui and P. Ramanathan. A dynamic priority assignment technique for streams with (m,k)-firm deadlines. IEEE Transactions on Computers, 44(12):1443-1451, December 1995.
[9] E. J. Horvitz. Reasoning under varying and uncertain resource constraints. In Proceedings of the Seventh National Conference on Artificial Intelligence, AAAI-88, pp. 111-116, August 1988.
[10] J. K. Dey, J. Kurose and D. Towsley. On-Line Scheduling Policies for a class of IRIS (Increasing Reward with Increasing Service) Real-Time Tasks. IEEE Transactions on Computers, 45(7):802-813, July 1996.
[11] G. Koren and D. Shasha. Skip-Over: Algorithms and Complexity for Overloaded Systems that Allow Skips. In Proceedings of the 16th IEEE Real-Time Systems Symposium, December 1995.
[12] R. E. Korf. Real-time heuristic search. Artificial Intelligence, 42(2):189-212, 1990.
[13] C. M. Krishna and K. G. Shin. Real-time Systems. McGraw-Hill, New York, 1997.
[14] C. L. Liu and J. W. Layland. Scheduling Algorithms for Multiprogramming in a Hard Real-time Environment. Journal of the ACM, 20(1), 1973.
[15] J. W.-S. Liu, K.-J. Lin, W.-K. Shih, A. C.-S. Yu, C. Chung, J. Yao and W. Zhao. Algorithms for scheduling imprecise computations. IEEE Computer, 24(5):58-68, May 1991.
[16] A. K. Mok. Fundamental Design Problems of Distributed Systems for the Hard Real-Time Environment. Ph.D. Dissertation, MIT, 1983.
[17] R. Rajkumar, C. Lee, J. P. Lehoczky and D. P. Siewiorek. A Resource Allocation Model for QoS Management. In Proceedings of the 18th IEEE Real-Time Systems Symposium, December 1997.
[18] W.-K. Shih, J. W.-S. Liu and J.-Y. Chung. Algorithms for scheduling imprecise computations to minimize total error. SIAM Journal on Computing, 20(3), July 1991.
[19] C. J. Turner and L. L. Peterson. Image Transfer: An end-to-end design. In SIGCOMM Symposium on Communications Architectures and Protocols, August 1992.
[20] S. V. Vrbsky and J. W. S. Liu. Producing monotonically improving approximate answers to relational algebra queries. In Proceedings of the IEEE Workshop on Imprecise and Approximate Computation, December 1992.
[21] S. Zilberstein and S. J. Russell. Anytime Sensing, Planning and Action: A practical model for Robot Control. In IJCAI-13, 1993.

APPENDIX

We will show that:

    \sum_{j=1}^{q_i} f_i(t_{ij}) \le q_i \, f_i(t_i')    (26)

where t_i' = (t_{i1} + t_{i2} + \cdots + t_{iq_i}) / q_i and the function f_i is concave.

If f_i is a linear function of the form f_i(t) = k_i t, then \sum_{j=1}^{q_i} f_i(t_{ij}) = k_i (t_{i1} + t_{i2} + \cdots + t_{iq_i}) = k_i q_i t_i', and the inequality (26) is immediately established.

If f_i is general concave, we recall that:

    \alpha f_i(x) + (1 - \alpha) f_i(y) \le f_i(\alpha x + (1 - \alpha) y)    (27)

for all x, y and for every α such that 0 \le α \le 1. In this case, we prove the validity of (26) by induction. If q_i = 1, (26) holds trivially. So assume that it holds for q_i = 1, 2, ..., m-1. The induction assumption implies that:

    \sum_{j=1}^{m-1} f_i(t_{ij}) \le (m-1) \, f_i\!\left( \frac{t_{i1} + \cdots + t_{i,m-1}}{m-1} \right)    (28)

Choosing α = (m-1)/m, x = (t_{i1} + t_{i2} + \cdots + t_{i,m-1})/(m-1), and y = t_{im} in (27), we can write:

    \frac{m-1}{m} f_i\!\left( \frac{t_{i1} + \cdots + t_{i,m-1}}{m-1} \right) + \frac{1}{m} f_i(t_{im}) \le f_i\!\left( \frac{t_{i1} + \cdots + t_{im}}{m} \right)    (29)

Combining (28) and (29), we get:

    \frac{1}{m} \sum_{j=1}^{m} f_i(t_{ij}) \le f_i\!\left( \frac{t_{i1} + \cdots + t_{im}}{m} \right)  \quad \forall m

establishing the validity of (26).
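A quick numeric spot-check of (26) with a sample concave reward function, f(t) = ln(1 + t) (our own illustration):

    import math, random

    # Check sum_j f(t_ij) <= q_i * f(mean of the t_ij) for a concave f.
    random.seed(1)
    f = lambda t: math.log(1 + t)
    for _ in range(1000):
        ts = [random.uniform(0, 5) for _ in range(random.randint(1, 8))]
        assert sum(f(t) for t in ts) <= len(ts) * f(sum(ts) / len(ts)) + 1e-9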
