
Static Slicing of Threaded Programs

Jens Krinke
krinke@ips.cs.tu-bs.de
TU Braunschweig
Abteilung Softwaretechnologie

Abstract

Static program slicing is an established method for analyzing sequential programs, especially for program understanding, debugging and testing. Until now, there was no slicing method for threaded programs which handles interference correctly. We present such a method which also calculates more precise static slices. This paper extends the well known structures of the control flow graph and the program dependence graph for threaded programs with interference. This new technique does not require serialization of threaded programs.

1 Introduction

Static program slicing [19] is an established method for analyzing sequential programs, especially for program understanding, debugging and testing. But today even small programs use parallelism, and a method to slice such programs is required. Dynamic slicing of threaded (or concurrent) programs has been researched by several authors, but only one approach for static slicing of threaded programs is known to us [1, 2]. A drawback of this approach is that the calculated slices are not precise enough, because it does not handle interference. Interference is data flow which is introduced through the use of variables that are common to statements executing in parallel. We approach that problem and present a more precise algorithm for static slicing of threaded programs with interference.

The analysis of programs where some statements may explicitly be executed in parallel is not new. The static analysis of these programs is complicated, because the execution order of parallel executed statements is dynamic. Testing and debugging of threaded programs have increased complexity: they might produce different behavior even with the same input. The nondeterministic behavior of a program is hard to understand, and finding harmful nondeterministic behavior is even harder. Therefore, supporting tools are required. Unfortunately, most tools for sequential programs are not applicable to threaded programs, as they cannot cope with the nondeterministic execution order of statements. One simple way to circumvent these problems is to simulate these programs through sequentialized or serialized programs [18]. These are "product" programs, in which every possible execution order of statements is modeled through a path where the statements are executed sequentially. This may lead to exponential code explosion, which is often unacceptable for analysis. Therefore, special representations of parallel programs have been developed.

In the following sections we will first introduce our notation of threaded programs and show how to extend control flow graphs (CFGs) and program dependence graphs (PDGs) to threaded PDGs, which are our base for slicing. The problem of statically slicing threaded programs is explained in section 4, where we also present an algorithm to slice these programs. The last two sections present some related work and discuss the conclusions and further work.

2 The threaded CFG

A common way to represent the procedures of a program are control flow graphs (CFGs). A CFG is a directed graph G = (N, E, s, e) with node set N and edge set E. The statements and predicates are represented by nodes n ∈ N, and the flow of control between statements is represented by edges (n, m) ∈ E, written as n → m. Two special nodes s and e are distinguished: the START node s and the EXIT node e, which represent the beginning and the end of the procedure. Node s does not have predecessors and node e does not have successors. The variables which are referenced at node n are denoted by ref(n); the variables which are defined (or assigned) at n are denoted by def(n).¹
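As a small illustration of these definitions, a CFG with def/ref sets can be kept as adjacency lists. This sketch is our own, not from the paper; the node labels are illustrative and only the first two statements of the example program are shown:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    label: str                      # e.g. "S1: x = ..."
    defs: frozenset = frozenset()   # def(n): variables assigned at n
    refs: frozenset = frozenset()   # ref(n): variables referenced at n

class CFG:
    """A directed graph G = (N, E, s, e), edges kept as adjacency lists."""
    def __init__(self):
        self.succ = {}

    def add_edge(self, n, m):       # control flow edge n -> m
        self.succ.setdefault(n, []).append(m)
        self.succ.setdefault(m, [])

# The first two statements of the example program in Figure 1:
start = Node("START")
s1 = Node("S1: x = ...", defs=frozenset({"x"}))
s2 = Node("S2: i = 1", defs=frozenset({"i"}))

g = CFG()
g.add_edge(start, s1)
g.add_edge(s1, s2)
```

A path in this representation is simply a sequence of nodes in which each successor is found in the predecessor's adjacency list.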
¹ In the rest of this paper we will use "node" and "statement" interchangeably, as they are bijectively mapped.

A path in G is a sequence P = n1, ..., nk where ni → ni+1 for all 1 ≤ i < k. A node p is reachable from another node q (written q →* p) if there is a path q, ..., p in G; i.e. "→*" is the transitive, reflexive closure of "→". We assume that every path in a CFG is a possible execution order of the statements of the program. If we pick some statements out of such a sequence, they are a witness of a possible execution.

Definition 2.1 We call a sequence n1, ..., nk of nodes a witness, iff ni →* ni+1 for all 1 ≤ i < k.

This means that a sequence of nodes is a witness if all nodes are part of a path through the CFG in the same order as in the sequence. Every path is a witness of itself.

A thread is a part of a program which must be executed on a single processor. Threads may be executed in parallel on different processors or interleaved on a single processor. In our model we assume that threads are created through cobegin/coend statements and that they are properly synchronized on statement level. Let the set of threads be Θ = {θ0, θ1, ..., θn}, with |Θ| = n + 1. For simplicity we consider the main program as a thread θ0. A sample program with two threads is shown in Figure 1. Thread θ1 is the block of statements S3, S4 and S5, and the other thread θ2 is the block with S6 and S7. S1, S2 and S8 are part of the main program θ0.

    S1: x = ...;
    S2: i = 1;
    cobegin {
      if (x>0) {
    S3:   x = -x;
    S4:   i = i+1;
      } else {
    S5:   i = i+1;
      }
    }{
    S6: i = i+1;
    S7: z = y;
    } coend;
    S8: ... = i;

    Figure 1: A threaded program

A threaded CFG (tCFG) extends the CFG with two special nodes COSTART and COEXIT which represent the cobegin and coend statements. The enclosed threads are handled like complete procedures and are represented by whole CFGs, which are embedded in the surrounding CFG. The START and EXIT nodes of these CFGs are connected to the COSTART and COEXIT nodes with special parallel flow edges. We distinguish the edges by writing p −cf→ q for a sequential control flow edge between nodes p and q, and p −pf→ q for a parallel flow edge. Figure 2 shows the tCFG for the example program of Figure 1.

    [Figure 2: A threaded CFG — the tCFG of the program in Figure 1, with control flow and parallel flow edges; drawing omitted.]

θ(p) is a function which returns for every node p its innermost enclosing thread. In the example we have θ(S2) = θ0, θ(S4) = θ1 and θ(S6) = θ2. Π(p) is a function that returns for every node p the set of threads which cannot execute parallel to the execution of p, e.g. Π(S4) = ∅ or Π(S2) = {θ1, θ2}.

The definition of witnesses in CFGs may also be applied to tCFGs. But this does not take the possible interleaving of nodes into account, and we have to extend the definition:

Definition 2.2 A sequence l = n1, ..., nk of nodes is a threaded witness in a tCFG, iff

    ∀t ∈ Θ: l|t = m1, ..., mj ⇒ ∀ 1 ≤ i < j: mi −cf,pf→* mi+1

where l|t is the subsequence of l = n1, ..., nk in which all nodes ni with θ(ni) ≠ t have been removed.

Intuitively, a threaded witness can be interpreted as a witness in the sequentialized CFG. This definition assures that a sequence of nodes which are part of different threads is a witness in each of the different threads. Every ordinary witness in the tCFG is automatically a threaded witness. In our example of Figure 2, S1, S4, S6 and S1, S2, S8 are threaded witnesses, and S5, S6, S4 or S1, S4, S5 are not. The sequence S1, S2, S8 is also an ordinary witness, but the sequence S1, S4, S6 is not.

3 The threaded PDG

A program dependence graph [5] is a transformation of a CFG, where the control flow edges have been removed and two other kinds of edges have been inserted: control dependence and data dependence edges.

Definition 3.1 A node j is called data dependent on node i, if

1. there is a path P from i to j in the CFG (i →* j),
2. there is a variable v with v ∈ def(i) and v ∈ ref(j), and
3. for all nodes k ≠ i of path P, v ∉ def(k).

Node j is called a postdominator of node i if any path from i to EXIT must go through j. A node i is called a predominator of j if any path from START to j must go through i. In typical programs, statements in loop bodies are predominated by the loop entry and postdominated by the loop exit.

Definition 3.2 A node j is called (directly) control dependent on node i, if

1. there is a path P from i to j in the CFG (i →* j),
2. j is a postdominator for every node in P except i, and
3. j is not a postdominator for i.

The PDG consists of the nodes of the CFG and control dependence edges p −cd→ q for nodes q which are control dependent on nodes p, and data dependence edges p −dd→ q for nodes q which are data dependent on nodes p.

Definition 3.3 A node j is called transitively dependent on node i, if

1. there is a path P = i = n1, ..., nl = j where every nk+1 is control or data dependent on nk, and
2. P is a witness in the CFG.

Note that the composition of control and data dependence is always transitive: a dependence between x and y and a dependence between y and z imply a path between x and z by the definitions of control and data dependence.

There have been some attempts to define threaded variants of PDGs. To the best of our knowledge, none of these explicitly represents the dependences which result from interference. Interference occurs if a variable is defined in one thread and referenced in another, parallel executing thread. In the example of Figure 1 we have an interference for the variable i between θ1 and θ2. The value of i at statement S6 may be the value computed at S2, S4 or S5. The value of i at statement S8 may be the value computed at S4, S5 or S6. However, if the statements S4, S5 and S6 are properly synchronized, the value of i will always be 3.

Definition 3.4 A node j is called interference dependent on node i, if

1. θ(i) ≠ θ(j) and θ(j) ∉ Π(i), i.e. θ(i) and θ(j) may potentially be executed in parallel, and
2. there is a variable v such that v ∈ def(i) and v ∈ ref(j).

Dependences between threads which are not executed in parallel are ordinary data dependences.
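The threaded-witness condition of Definition 2.2 can be checked mechanically. The following sketch is our own illustration: the `thread_of` map (θ) and the cf/pf reachability relation `reaches` are assumed to be precomputed by a separate analysis, and the toy relation below only encodes the few pairs needed to mirror Figure 2:

```python
def is_threaded_witness(seq, thread_of, reaches):
    """Check Definition 2.2 on a node sequence.

    thread_of(n) -- theta(n), the innermost enclosing thread of n
    reaches(p,q) -- True iff q is reachable from p over control or
                    parallel flow (cf/pf) edges; assumed precomputed
    """
    by_thread = {}
    for n in seq:                   # build the subsequences l|t
        by_thread.setdefault(thread_of(n), []).append(n)
    # each projection must lie on a cf/pf path in the given order
    return all(reaches(p, q)
               for sub in by_thread.values()
               for p, q in zip(sub, sub[1:]))

# A toy reachability relation mirroring Figure 2 (only the pairs we
# need; S5 cannot reach S4, as they lie in different branches):
reach = {("S1", "S2"), ("S2", "S8"), ("S1", "S8"),
         ("S3", "S4"), ("S6", "S7")}
thread = {"S1": 0, "S2": 0, "S8": 0,
          "S3": 1, "S4": 1, "S5": 1,
          "S6": 2, "S7": 2}

ok = is_threaded_witness(["S1", "S4", "S6"], thread.get,
                         lambda p, q: (p, q) in reach)
bad = is_threaded_witness(["S5", "S6", "S4"], thread.get,
                          lambda p, q: (p, q) in reach)
```

Consistent with the text, the sequence S1, S4, S6 is accepted while S5, S6, S4 is rejected (its projection onto θ1 is S5, S4, which lies on no cf/pf path).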
The composition of paths in the CFG Both statements and their threads may be executed in paral- always results in a path again. lel (therefore the interference dependence). The statements Interference dependence is not transitive: If a statement x and their threads may also be executed sequentially through is interference dependent on a statement y, which is interfer- different iterations of the enclosing loop. ence dependent on z, then x is only dependent on z iff there The technique to calculate the edges is beyond the scope is a possible execution where these three statement are exe- of the papers, they can be calculated with standard algo- cuted one after another: The sequence x, y, z of the three rithms [8]. A simple version would assume the existence statements has to be a threaded witness in the tCFG. In the of a boolean function parallel(i, j ) which returns true if it is example of Figure 3 statement S 4 is interference dependent possible for nodes i and j to execute in parallel (see [12] for on statement S6 , which in turn is interference dependent on an overview of ways to calculate this function). An interfer- id statement S5 . However, there is no possible execution where ence dependence edge i −→ j will be inserted for all (i, j ) S4 is executed after S5 and thus S4 cannot be interference if there is a variable v which is deﬁned at i , referenced at j dependent on S5 , S5 , S6 , S4 is no threaded witness. and parallel(i, j ) is true. A threaded program dependence graph (tPDG) consists of the nodes and the edges of the tCFG with the addition of control, data and interference dependence edges. 
In contrast to the standard PDG, where the control ﬂow edges have been 4 Slicing the tPDG Input: the slicing criterion s, a node of the tPDG Output: the slice S, a set of nodes of the tPDG Slicing on the PDG of sequential programs is a simple graph Initialize the worklist with an initial state tuple: reachability problem [14], because control and data depen- s if θ (s) = θi dence is transitive. C = (s, (t0 , . . . , t| | )) ti = ⊥ else worklist w = {C} Deﬁnition 4.1 The (backward) slice S( p) of a (sequential) slice S = {s} PDG at node p consists of all nodes on which p (transitively) repeat depends: remove the next element c = (x, T ) from w S( p) = {q|q → p} Examine all reaching edges: cd,dd The node p is called the slicing criterion. for all edges e = y −→ x do T = [y/θ(y)]T This deﬁnition may easily implemented through a graph if θ (y) = θ (x) then reachability algorithm. As interference dependence is not Normal dependence between threads: transitive, this deﬁnition of a slice for PDGs is not valid reset the exited threads for tPDGs and hence the standard algorithms are not really (which cannot execute parallel to y) applicable.2 for all t ∈ (y) do The basic idea of our approach stems from a simple ob- T = [⊥/t ]T servation: Because every path in the PDG is a witness in the c = (y, T ) corresponding CFG, every node p which is reachable from if c has not been already calculated then a node q in the PDG, is also reachable from q in the cor- mark c as calculated responding CFG. This does not hold for the threaded vari- w = w ∪ {c } ants. The deﬁnition of a slice in the tPDG establishes a S = S ∪ {y} similar property, because it demands that the tPDG contains id a threaded witness between every node in the slice and the for all edges e = y −→ x do slicing criterion. 
t = T [θ (y)] c f, p f if t = ⊥ or y −→ t = y then Deﬁnition 4.2 The (backward) slice S θ ( p) of a tPDG at a The inclusion of the edge still results node p consists of all nodes q on which p transitively de- in a threaded witness pends: c = (y, [y/θ(y)]T ) if c has not been already calculated then Sθ ( p) = {q | P = n1, . . . , nk , mark c as calculated d1 dk−1 q = n 1 −→ . . . −→ n k = p, w = w ∪ {c } di ∈ {cd, dd, i d}, 1 ≤ i < k. S = S ∪ {y} until worklist w is empty. and P is a threaded witness in the tCFG} Figure 6: Slicing algorithm A slice from the statement S4 of the example program in Figure 1 is shown in Figure 3 as framed nodes. The respon- Therefore we present a different slicing algorithm in Fig- sible edges are drawn in a thicker style. Note that there are ure 6. Its basic idea is the coding of possible states of exe- interference edges between statement S 6 and S5 which does cution in all threads in tuples (t 0 , t1 , . . . , t| |−1 ), where the not force the inclusion of statement S 5 into the slice because ti are nodes in the tPDG with θ (t i ) = θi . The value ti repre- S4 is not reachable from S 5 in the tCFG. The standard slic- sents a node which has not yet been reached by the execution ing algorithm would include the statement S 5 into the slice, of thread θ i and it is still possible to reach node t i . A value which is, albeit correct, to inaccurate. of ⊥ does not restrict the state of execution. This is used The algorithm to slice sequential programs is a simple to keep track of the nodes p where a thread has been left reachability algorithm. However, it is not easy to transform through following an interference edge. If we follow another the deﬁnition of a threaded slice into an algorithm because interference edge back into the thread at node q, we are able the calculation of threaded witnesses would be too costly. to check that p is reachable from q. 
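A compact executable rendering of this worklist algorithm might look as follows. This is a sketch under our own data representation, not the paper's implementation: state tuples become Python tuples indexed by thread number, `None` plays the role of ⊥, and θ, Π, the cf/pf reachability and the edge list are assumed to be given. The example data transcribes the tPDG edges used in the Figure 7 trace, with A–E naming the structural nodes of Figure 3:

```python
BOT = None   # plays the role of "bottom": an unrestricted thread state

def threaded_slice(s, edges, theta, pi, reaches, nthreads):
    """Backward slice of a tPDG at criterion s (sketch of Figure 6).

    edges   -- (y, x, kind) triples, kind in {"cd", "dd", "id"},
               meaning y -kind-> x
    theta   -- theta(n): thread number of node n
    pi      -- pi(n): threads that cannot execute parallel to n
    reaches -- reaches(p, q): q reachable from p over cf/pf edges
    """
    incoming = {}
    for y, x, kind in edges:
        incoming.setdefault(x, []).append((y, kind))

    T0 = tuple(s if theta(s) == t else BOT for t in range(nthreads))
    worklist, seen, slice_ = [(s, T0)], {(s, T0)}, {s}
    while worklist:
        x, T = worklist.pop()
        for y, kind in incoming.get(x, []):
            T2 = list(T)
            if kind == "id":
                t = T[theta(y)]
                # follow an interference edge only if the extended
                # path is still a threaded witness
                if not (t is BOT or (t != y and reaches(y, t))):
                    continue
            elif theta(y) != theta(x):
                for t in pi(y):        # reset the exited threads
                    T2[t] = BOT
            T2[theta(y)] = y           # y is the new state of its thread
            c = (y, tuple(T2))
            if c not in seen:
                seen.add(c)
                worklist.append(c)
                slice_.add(y)
    return slice_

# Edge list as used in the trace of Figure 7 (A-E are the structural
# nodes of Figure 3; this transcription is our reading of the trace):
edges = [("E", "S4", "cd"), ("S2", "S4", "dd"), ("S6", "S4", "id"),
         ("C", "E", "cd"),  ("S1", "E", "dd"),  ("A", "S2", "cd"),
         ("S2", "S6", "dd"), ("D", "S6", "cd"), ("S5", "S6", "id"),
         ("S4", "S6", "id"), ("B", "C", "cd"),  ("A", "S1", "cd"),
         ("B", "D", "cd"),  ("A", "B", "cd")]
thread = {"A": 0, "B": 0, "S1": 0, "S2": 0, "S8": 0,
          "C": 1, "E": 1, "S3": 1, "S4": 1, "S5": 1,
          "D": 2, "S6": 2, "S7": 2}
pi = lambda n: {1, 2} if thread[n] == 0 else set()
reach = set()      # the only query that arises is S5 ->* S4: false
result = threaded_slice("S4", edges, thread.get, pi,
                        lambda p, q: (p, q) in reach, 3)
```

On this input the sketch reproduces the slice computed in Figure 7, and in particular excludes S5.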
² The "classical" definition of a slice is any subset of a program that does not change the behaviour with respect to the criterion; a program is a correct slice of itself. Therefore, if interference is modelled with normal data dependence, the resulting slices are correct but imprecise.

This assures that paths over interference edges are always threaded witnesses in the tCFG. This is the reason why we have to keep the control and parallel flow edges in the tPDG.

We denote the extraction of the i-th element ti of a tuple T = (t0, t1, ..., tn) by T[i]. The substitution of the i-th element ti of a tuple T = (t0, t1, ..., tn) with a value x is denoted by [x/i]T.

The algorithm keeps a worklist of pairs of nodes and state tuples which have to be examined. Every edge reaching the node is examined and is handled depending on its type. In the case of a control or data dependence edge, a new pair consisting of the source node and the modified state tuple is inserted into the worklist. The new state tuple has the source node as the actual state of its thread. If the edge crosses threads, the states of the exited threads are reset. In the other case it is an interference dependence edge. It may only be considered if the state node of the source node's thread is reachable from the source node in the tCFG (so that all examined paths are still threaded witnesses). Then the new pair with the updated state tuple is inserted into the worklist. The resulting slice is the set of nodes constructed from the first elements of the inserted pairs.

In the following we demonstrate an application of the algorithm by calculating a backward slice for node S4. The worklist w is initialized with the element (S4, (⊥, S4, ⊥)). This element is immediately removed from the worklist and all edges reaching S4 are examined. The edge E −cd→ S4 does not cross threads, and the state of the thread θ(S4) = θ(E) is updated before the created element (E, (⊥, E, ⊥)) is inserted into the worklist. The edge S2 −dd→ S4 does cross threads, and the states of the exited threads are reset. This creates a new element (S2, (S2, ⊥, ⊥)). The edge S6 −id→ S4 creates (S6, (⊥, S4, S6)), because the state of θ(S6) is ⊥. Let us step forward in the calculation and assume the worklist is {(S6, (⊥, S4, S6)), (C, (⊥, C, ⊥)), ...}. There are four edges reaching S6:

1. S2 −dd→ S6 crosses threads and creates the element (S2, (S2, ⊥, ⊥)). As this element has already been visited, it is not inserted into the worklist again.

2. D −cd→ S6 does not cross threads and inserts the element (D, (⊥, S4, D)) into the worklist.

3. S5 −id→ S6: as (⊥, S4, S6)[θ(S5)] = S4 and the condition S5 −cf,pf→* S4 is not fulfilled, this edge has to be ignored.

4. S4 −id→ S6: the condition T[θ(S4)] ≠ S4 cannot be fulfilled and this edge has to be ignored.

In the third step the edge has to be ignored because it would destroy the property that every node in the slice is part of a threaded witness. The condition which is not fulfillable in step four may be relaxed if we drop our assumption that the program is properly synchronized on statement level. The remaining calculations are presented in Figure 7.

    w: {(S4, (⊥, S4, ⊥))}
      E  −cd→ S4 ⇒ (E, (⊥, E, ⊥))
      S2 −dd→ S4 ⇒ (S2, (S2, ⊥, ⊥))
      S6 −id→ S4 ⇒ (S6, (⊥, S4, S6))
    w: {(E, (⊥, E, ⊥)), (S2, (S2, ⊥, ⊥)), (S6, (⊥, S4, S6))}
      C  −cd→ E ⇒ (C, (⊥, C, ⊥))
      S1 −dd→ E ⇒ (S1, (S1, ⊥, ⊥))
    w: {(S2, (S2, ⊥, ⊥)), (S6, (⊥, S4, S6)), (C, (⊥, C, ⊥)), (S1, (S1, ⊥, ⊥))}
      A −cd→ S2 ⇒ (A, (A, ⊥, ⊥))
    w: {(S6, (⊥, S4, S6)), (C, (⊥, C, ⊥)), (S1, (S1, ⊥, ⊥)), (A, (A, ⊥, ⊥))}
      S2 −dd→ S6 ⇒ (S2, (S2, ⊥, ⊥))  already visited
      D  −cd→ S6 ⇒ (D, (⊥, S4, D))
      S5 −id→ S6 ⇒ S5 −cf,pf→* S4 is not fulfilled (T[θ(S5)] = S4)
      S4 −id→ S6 ⇒ T[θ(S4)] ≠ S4 is not fulfilled (T[θ(S4)] = S4)
    w: {(C, (⊥, C, ⊥)), (S1, (S1, ⊥, ⊥)), (A, (A, ⊥, ⊥)), (D, (⊥, S4, D))}
      B −cd→ C ⇒ (B, (B, ⊥, ⊥))
    w: {(S1, (S1, ⊥, ⊥)), (A, (A, ⊥, ⊥)), (D, (⊥, S4, D)), (B, (B, ⊥, ⊥))}
      A −cd→ S1 ⇒ (A, (A, ⊥, ⊥))  already in worklist
    w: {(A, (A, ⊥, ⊥)), (D, (⊥, S4, D)), (B, (B, ⊥, ⊥))}
      no edge reaching A exists
    w: {(D, (⊥, S4, D)), (B, (B, ⊥, ⊥))}
      B −cd→ D ⇒ (B, (B, ⊥, ⊥))  already in worklist
    w: {(B, (B, ⊥, ⊥))}
      A −cd→ B ⇒ (A, (A, ⊥, ⊥))  already visited
    ⇒ Sθ(S4) = {S4, E, S2, S6, C, S1, A, D, B}

    Figure 7: Calculation of Sθ(S4)

If we assume that the analyzed program has no threads, Θ = {θ0}, then this algorithm behaves like the sequential slicing algorithm. In that case the second iteration, over the interference dependence edges, is never executed, and the worklist only contains tuples of the form (n, (n)), where n is a node of the PDG. Hence the standard slicing algorithm on PDGs is a special case of our algorithm, which has the same time and space complexity in the unthreaded case.

In the threaded case the reachability y −cf,pf→* x has to be calculated iteratively. This determines the worst case for time complexity in the number of interference edges: the traversal of these edges might force another visit of all nodes that may reach the source of the edge. Therefore, the worst case is exponential in the number of interference dependence edges. We believe that the number of interference dependence edges will be very small in every program, as interference is error prone, hard to understand and hard to debug. The required calculation time will be much less than the time required to analyze serialized programs.
5 Related work

There are many variations of the program dependence graph for threaded programs, like parallel program graphs [15, 2, 1, 4]. However, most of them are unusable for static slicing. Dynamic slicing of threaded or concurrent programs has been approached by different authors [4, 13, 3, 9] and is surveyed in [17].

The only other approach to static slicing of threaded programs known to the author is the work of Cheng [1, 2]. He introduces some dependences which are even more specialized than our interference dependence. These are needed for a variant of the PDG, the program dependence net (PDN). His selection dependence is a special kind of control dependence, and his synchronization dependence is a mixture of control and data dependence. Our interference dependence is most similar to his communication dependence, where dependence is introduced through explicit interprocess communication. Although our tPDG is not mappable to his PDN and vice versa, both graphs are similar in the number of nodes and edges. Cheng defines slices simply based on graph reachability. The resulting slices are not precise, as they do not take into account that dependences between parallel executed statements are not transitive. Therefore, the integration of his technique for slicing threaded programs into slicing threaded object oriented programs [20] has the same problem.

6 Conclusions and further work

We have presented extended versions of the control flow and program dependence graphs for threaded programs, called the threaded control flow graph and the threaded program dependence graph. The tCFG is similar to other extensions of the CFG for threaded programs. The tPDG is new, as it captures the interference in threaded programs. With the tPDG we are able to calculate better static slices of threaded programs than previous approaches.

We believe that, as more and more programs are using threads, static slicing of them will become more important. We plan to extend our method to handle:

procedures. The presented algorithm works only intraprocedurally. However, known techniques [7] for interprocedural slicing can be integrated straightforwardly.

synchronization. For simplicity, we have assumed implicit synchronization of the analyzed programs. Our plan is to integrate explicit synchronization similar to [2].

different threads. The cobegin/coend model is not always sufficient to model different types of parallelism. We are planning to extend our technique to different kinds of threads, like fork/join.

object orientation. The problem of slicing object oriented programs is orthogonal to slicing threaded programs; the integration of slicing object oriented programs like [11] should be possible, following similar techniques as [20].

Our next goal is the integration of this technique into our slicing tool [6, 16] for sequential standard C programs. As this tool is able to generate and simplify path conditions based on program slices, we will develop new constraints stemming from threaded programs for these path conditions to obtain an even better slice accuracy.

Acknowledgments

The author wishes to thank Gregor Snelting, Torsten Robschink and especially Bernd Fischer for their helpful support. This work was funded by the Bundesministerium für Bildung und Forschung, FKZ 01 IS 513 C9.

References

[1] J. Cheng. Slicing concurrent programs. In Automated and Algorithmic Debugging, 1st Intl. Workshop, LNCS 749, 1993.
[2] J. Cheng. Dependence analysis of parallel and distributed programs and its applications. In Intl. Conf. on Advances in Parallel and Distributed Computing, 1997.
[3] J.-D. Choi, B. P. Miller, and R. H. B. Netzer. Techniques for debugging parallel programs with flowback analysis. ACM Transactions on Programming Languages and Systems, 13(4), 1991.
[4] E. Duesterwald, R. Gupta, and M. L. Soffa. Distributed slicing and partial re-execution for distributed programs. In 5th Workshop on Languages and Compilers for Parallel Computing, LNCS 757, 1992.
[5] J. Ferrante, K. J. Ottenstein, and J. D. Warren. The program dependence graph and its use in optimization. ACM Transactions on Programming Languages and Systems, 9(3), 1987.
[6] M. Goldapp, U. Grottker, and G. Snelting. Validierung softwaregesteuerter Meßsysteme durch Program Slicing und Constraint Solving. In Statusseminar des BMBF Softwaretechnologie, Berlin, 1996.
[7] S. Horwitz, T. Reps, and D. Binkley. Interprocedural slicing using dependence graphs. ACM Transactions on Programming Languages and Systems, 12(1), 1990.
[8] J. Knoop, B. Steffen, and J. Vollmer. Parallelism for free: Efficient and optimal bitvector analyses for parallel programs. ACM Transactions on Programming Languages and Systems, 18(3), 1996.
[9] B. Korel and R. Ferguson. Dynamic slicing of distributed programs. Applied Mathematics and Computer Science, 2, 1992.
[10] B. Korel and J. Laski. Dynamic program slicing. Information Processing Letters, 29(3), 1988.
[11] L. D. Larsen and M. J. Harrold. Slicing object-oriented software. In Proc. 18th Intl. Conf. on Software Engineering, 1996.
[12] C. E. McDowell and D. P. Helmbold. Debugging concurrent programs. ACM Computing Surveys, 21(4), 1989.
[13] B. P. Miller and J. D. Choi. A mechanism for efficient debugging of parallel systems. In Proc. ACM SIGPLAN Conf. on Programming Language Design and Implementation, 1988.
[14] K. J. Ottenstein and L. M. Ottenstein. The program dependence graph in a software development environment. In Proc. ACM SIGSOFT/SIGPLAN Software Engineering Symposium on Practical Software Development Environments, 1984.
[15] V. Sarkar and B. Simons. Parallel program graphs and their classification. In Proc. 6th Workshop on Languages and Compilers for Parallel Computing, LNCS 768, 1993.
[16] G. Snelting. Combining slicing and constraint solving for validation of measurement software. In Static Analysis, Third Intl. Symposium, LNCS 1145, 1996.
[17] F. Tip. A survey of program slicing techniques. Journal of Programming Languages, 3(3), 1995.
[18] N. Uchihira, S. Honiden, and T. Seki. Hypersequential programming. IEEE Concurrency, July–September 1997.
[19] M. Weiser. Program slicing. IEEE Transactions on Software Engineering, 10(4), 1984.
[20] J. Zhao, J. Cheng, and K. Ushijima. Static slicing of concurrent object-oriented programs. In Proc. 20th IEEE Annual Intl. Computer Software and Applications Conf., 1996.
