Technical Note #14: Siebel Analytics ETL Performance
Microsoft Technical Champs for Siebel                                       www.siebelonmicrosoft.com




Technical Note #14: Siebel Analytics and ETL Performance

 Last Modified:          July 12, 2004
 Version:                2.0 (for the latest version see www.siebelonmicrosoft.com)
 Area:                   Siebel Analytics 7.5.2
 Siebel Releases:        Siebel CRM 7.5.2, Siebel Analytics 7.5.2
 Informatica Release:    Release 5.x
 Windows Releases:       N/A
 SQL Server Releases:    SQL Server 7.0 and SQL Server 2000


The following scenario describes a real customer engagement in which Microsoft and
Siebel were called in to troubleshoot performance problems in a Siebel Analytics
implementation. Specifically, the performance problems centered on ETL into the
Analytics database. Microsoft, Siebel, and Informatica have recognized that similar
problems may exist at other customer sites. The specifics of each customer site can
vary greatly, so please consult your Siebel Support representative before making any
of the changes described in this document. Many of the problems described herein
are largely eliminated by upgrading to Informatica release 6.x or higher. Siebel will
discontinue support of Informatica 5.x at the end of this year and recommends that
all customers upgrade to the latest version.

Problems and Solutions:
Load Performance
Siebel customers using the Industry Vertical versions of Siebel Analytics load
far more fields than customers using the Horizontal versions of Siebel. The combined
effect of large numbers of records and the far greater number of fields per record
can significantly impact performance. In addition to the use of the Industry Verticals
field mappings instead of the Horizontal field mappings, the number of unused
indexes generated at load time can also impair performance.
Before loading the data warehouse, records must be read from the Siebel OLTP and
from the staging area where these records are temporarily stored. Once read, these
records are loaded into the data warehouse. Extensive analysis was performed at the
customer site to determine where the bottlenecks occur in this process. The
following table contains this analysis and will be used as a reference in the
discussion that follows.
The table contains the analysis performed on one table that was taking several hours
to load. Conclusions based on analyzing this table were applicable to several other
tables at the customer site and may be used as a reference for your own data
loading analysis. LENGTH is how long it took to load the records; RATE is the
number of records loaded per second. FF refers to flat file, while DB refers to
database. No Xform indicates a load with no mapping transformations applied to
the data; comparing these rows shows the significant impact the transformations
have on loading the data.

Reading and writing to flat files establishes a "best possible" baseline, as flat
files are the fastest to read from and write to. This gives a reference point for
what might be achievable with enough tuning of the database.




                         Table 1: Performance Statistics for loading table W_PERSON_D

 LENGTH    RATE      NOTE
 0:00:07   6670      No Xform, FF to FF (loading all fields as if Vertical)
 0:02:30   5100      No Xform, DB to FF (loading all fields as if Vertical)
           350       No Xform, FF to DB, no Index (loading all fields as if Vertical)
           520       No Xform, FF to DB, no Index, using Bulk (loading all fields as if Vertical)
 0:02:41   1650      Horizontal, DB to FF (loading fewer fields than Vertical)
 0:08:32   350       Horizontal, DB to DB, Index (loading fewer fields than Vertical)
 0:06:19   625       Horizontal, DB to DB, no Index (loading fewer fields than Vertical)
 0:02:19   900       Horizontal, DB to DB, no Index, using Bulk (loading fewer fields than Vertical)
           650       Horizontal, DB to DB, Cluster Index, using Bulk (loading fewer fields than Vertical)
 0:06:03   450       Vertical, DB to FF
 0:10:04   225       Vertical, DB to DB, Index
 0:07:22   350       Vertical, DB to DB, no Index
 0:03:40   465       Vertical, DB to DB, no Index, using Bulk
           Not Run   Vertical, DB to DB, Cluster Index, using Bulk
 0:03:40   470       Vertical, FF to FF
 0:10:06   225       Vertical, FF to DB, Index
 0:06:42   350       Vertical, FF to DB, no Index
           465       Vertical, FF to DB, no Index, using Bulk
           Not Run   Vertical, FF to DB, Cluster Index, using Bulk


Looking at the Rate column in Table 1, we can see that the source system format
really had very little impact on the overall load. The following chart makes this much
easier to see. The Rate columns are very close for both types of sources no matter
what the configuration of the destination tables.

[Chart: Data Load of Verticals Tables — Rate by target system (FF, DB-I, DB-NI,
DB-NIB, DB-CIB), comparing Source DB against Source FF.]

 FF      Flat File
 DB-I    Database Table with Index
 DB-NI   Database Table with No Index
 DB-NIB  Database Table with No Index using Bulk Load
 DB-CIB  Database Table with Clustered Index using Bulk Load




The bottleneck is more apparent when observing the target system. The following
chart illustrates the major differences in the data load rates distinguished by the
target system, this time comparing the load of both the Vertical industry version of
the tables and the Horizontal version of the tables.


[Chart: Load Comparison for Target Systems — Rate by target system (FF, DB-I,
DB-NI, DB-NIB, DB-CIB) for the Verticals and Horizontals versions of the tables.]

This chart shows the major difference between the rates obtained loading the
Horizontals version of the tables and the rates obtained loading the Verticals
version, which has far more columns. It also reveals the difference between the
target systems shown on the X axis for both the Horizontal and the Vertical tables.
NOTE: The Verticals load with a target destination of DB-CIB (DB Clustered Index
with Bulk Loading) was not performed, hence the rate of 0 in the chart.
It is also interesting to note the impact on load rates when the target DB contains
an index and when it does not, as well as the performance gains obtained by using
bulk load operations.
Note on Clustered Indexes: One might be tempted to drop all indexes except the
cluster index, since rebuilding the cluster index causes the database to physically
rearrange the data, which could take a long time. However, because the cluster
index is built on ROW_WID, which is always sequential, the data is already ordered
and does not need to be reordered by the database. Loading the W_PERSON_D table
with the cluster index enabled caused the session to take about 45 minutes longer
than without it, while rebuilding the cluster index afterward took only about
5 minutes.
No Xform: The following chart shows the load rates distinguished by whether mapping
transformations were applied to the Vertical tables or not. Observe the two columns
showing Flat File as the destination type: they clearly show the tremendous impact
transformations have on the load rate. The second two columns show data being
loaded into non-indexed tables. Here the cost of the transformations is masked by
the cost of the load into the non-indexed tables, which tells us that the bottleneck
is the non-indexed table load and not the transformations.


[Chart: Load Rates with Transformations vs. without Transformations — Rate for
target systems FF - FF, DB - FF, FF - DB-NI, and FF - DB-NIB, comparing No Xform
runs against Xform runs.]



Recommendation:
All of this data leads to the following conclusions and recommendations for the ETL
process.

    1. The source is not the bottleneck for performance.
    2. Use the Horizontal mappings instead of the Vertical mappings whenever possible.
    3. Load to an un-indexed table using the bulk option whenever possible.




More Loading Performance Tuning:
Having identified the bottleneck in the process, we can make several observations
and suggestions to reduce it.

SQL Server Recovery Model:
Please read SQL Server Books Online and understand the recovery implications for
your specific database before switching recovery models.
The ETL for the data warehouse is a good example of how you can use Simple or
Bulk logged recovery models to gain performance. The recovery model can be
returned to Full Recovery after data loading completes. This not only increases
performance but reduces the required log space while maintaining server protection.
However, you should understand that while bulk-logged recovery produces smaller
logs, it can make for larger log backups. You should also understand that after
returning to Full Recovery from Simple Recovery, a full database backup is required
to restart the log backup chain.
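As a sketch, the model switch can be scripted around the load. The syntax below is
for SQL Server 2000 (on SQL Server 7.0 the equivalent is the 'select into/bulkcopy'
and 'trunc. log on chkpt.' database options); the database name and backup path are
placeholders, not names from this customer's environment:

```sql
-- Before the ETL run: switch to bulk-logged to minimize logging of bulk loads
ALTER DATABASE SiebelDW SET RECOVERY BULK_LOGGED

-- ... run the data load here ...

-- After the load: return to full protection and restart the log backup chain
ALTER DATABASE SiebelDW SET RECOVERY FULL
BACKUP DATABASE SiebelDW TO DISK = 'E:\Backup\SiebelDW_post_load.bak' WITH STATS = 10
```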

Dropping and rebuilding indexes
For more information on rebuilding indexes, refer to TechNote #6 Index Creation
Performance on the Technical Champs Website.

The ETL process provides placeholders for adding scripts that drop and rebuild
indexes before and after loading tables. It is not necessary to drop and rebuild all
the indexes. In this case, the following tables were identified as candidates for
dropping and rebuilding indexes.
   • W_ACTIVITY_F
   • W_CAMP_HIST_F
   • W_MAP_DIM_M
    •    W_ORG_D
    •    W_PERSON_D
    •    W_REVN_F
    •    S_ETL_R_IMAGE
    •    S_ETL_I_IMAGE
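A sketch of what those pre- and post-load scripts might contain. The index name and
key column below are illustrative placeholders, not the actual Siebel index
definitions; script the real definitions with sp_helpindex before dropping anything:

```sql
-- Pre-load: drop a non-clustered index so rows insert without index maintenance
DROP INDEX W_PERSON_D.W_PERSON_D_M1    -- placeholder index name

-- ... load W_PERSON_D here ...

-- Post-load: rebuild the index once, against the fully loaded table
CREATE INDEX W_PERSON_D_M1 ON W_PERSON_D (INTEGRATION_ID)    -- placeholder key column
```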


Permanently dropping NULL indexes or indexes on unused columns
Eliminating unused indexes reduces the overhead required to load each record, which
improves full load performance and also speeds incremental loads, where indexes are
not dropped. Again, TechNote #6 Index Creation Performance has more details on
finding and eliminating NULL indexes. You can also drop an index if its column is
not used by Analytics or if the data warehouse keys (ROW_WIDs) it covers only hold
the default value "0". For this customer, the following gains were realized by
eliminating unused indexes.
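One quick check for a candidate index is whether the underlying column ever holds
anything besides the default value. A sketch (the column name is an illustrative
placeholder):

```sql
-- If a warehouse key column only ever holds the default value '0',
-- its index adds load overhead without serving any Analytics query.
SELECT COUNT(*)                        AS total_rows,
       COUNT(DISTINCT SUPERVISOR_WID) AS distinct_values   -- placeholder column
FROM W_PERSON_D
```

If distinct_values comes back as 1 and the only value is the default, the index on
that column is a candidate for permanent removal.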

         Index Type           Time to rebuild     Time to rebuild       Estimated
                              all indexes         remaining indexes     savings
         Dimension Tables     54 minutes          8 minutes             46 minutes
         Fact Tables          5 minutes           2 minutes             3 minutes



Extract, Transform & Load (Informatica)

Eliminate unnecessary steps

Each Siebel customer should identify the sessions that do not need to be run and
should follow Siebel’s recommended practices to reduce overall ETL run time by
eliminating these steps from the out-of-the-box process.

All mappings and sessions with the suffix 'DB2390' are specific to OLTP tables on a
DB2 database running on IBM's OS/390. These sessions should be disabled if you
are not loading from DB2 on OS/390.

Tune remaining steps

Run Out of the Box and Custom steps in Parallel

The out-of-the-box settings run several mappings in parallel. Normally, custom
mappings are run separately from these mappings, partly because of dependencies on
the standard mappings but primarily because it is difficult to run objects in
separate folders at the same time. The solution is to move the standard mapping to
the custom folder, then run the standard and custom mappings at the same time. It is
important to document this modification so that it can be replicated after an
upgrade, and it should only be done when significant savings can be realized.

Convert some lookups to persistent:

Additional mappings may be improved through the use of persistent lookups.
Several dimension mappings must perform a lookup to the target table to determine
if the record already exists, which can take some time if there are many records. A
persistent cache can be stored at the end of the session to be reused during the next
run, avoiding this expensive lookup.

This change only benefits refresh/incremental loads.

Application Level Performance Parameters:

Use the bulk load option for SDE mappings and for full-load SIL mappings.

Informatica is able to take advantage of SQL Server's bulk data loading utility. The
default setting is to use normal loading. Performance drops dramatically when bulk
loading a table with indexes, but staging tables do not have indexes, so it is safe
to set all SDE mappings to bulk load. Indexes must be dropped when bulk loading the
data warehouse tables, however, which is possible during full loads only. Since
updates take place during incremental loads, this method is not recommended for
incremental SIL mappings; use the normal load method instead.

Bulk Loading issues:
    •    For versions of Informatica PowerCenter/PowerMart prior to version 6, loading
         NULLs into columns with NOT NULL constraints caused errors. This is a known
         issue (CR 39049). If you need to do this in releases prior to version 6, you
         must use normal loading instead of bulk loading, or remove the NOT NULL
         constraint from that particular column in the target table.
    •    The bulk option errors out when the target definition does not match the
         physical definition, such as when the order of the columns being inserted
         differs from the order of the columns in the database. This was described on
         the Informatica support site, and the solution is to re-import the table
         definition from the database. This can have the unintended effect of
         invalidating every mapping that uses the target, and since there is no mass
         validate option for mappings (although there is one for sessions), this can
         be a difficult process. There is no evidence that this problem has been
         resolved.

Increase DTM and buffer settings to twice default setting (24,000,000 and
128,000).
This allocates more memory to the Informatica server for ETL processing. The
standard settings are normally adequate, but for particularly complex mappings or
particularly 'wide' records, Informatica may need additional resources. This
usually does not have much effect but does not hurt, particularly on systems with
large amounts of memory available.



Storage System Constraints
The customer purchased a very large EMC SAN with 180 GB disk drives. While this
provided plenty of disk space, the speed of the I/O system suffered. The customer
would have realized far better performance from a larger number of smaller disk
drives.
The following test was performed on the customer's system (A) to obtain the MB/sec
throughput on the SAN.
         BACKUP DATABASE <DBNAME> TO DISK = 'NUL:' WITH STATS=1

This was compared to the same test performed on a similar SAN (B) with more but
smaller disks and better disk slicing.

         A) Customer SAN                 99.417 MB/sec
         B) Benchmark SAN               319.689 MB/sec

The Benchmark SAN was actually two years older than the Customer SAN but better
performance was gained by using smaller hard drives and better slicing of the
disks. The specific results below show that the benchmark configuration had more
data files than the customer’s configuration. For more details on Disk Layout, consult
TechNote # 9 – SQL Server Disk Configuration for Siebel which can be found on the
Microsoft Technical Champs for Siebel Website.

Here are the specific results for the test:


Customer System (A):
         99 percent backed up.
         Processed 46104 pages for database xxx, file 'A_primary' on file
         1. Processed 5520112 pages for database xxx, file 'A_data1_f1' on
         file 1. Processed 1320 pages for database xxx, file 'A_data2_f1'
         on file 1. 100 percent backed up. Processed 12 pages for database
         xxx, file 'A_log1_f1' on file 1.

         BACKUP DATABASE successfully processed 5567548 pages in 458.767
         seconds (99.417 MB/sec).



Benchmark System (B):


         99 percent backed up.
         Processed 13430888 pages for database 'P01', file 'data1' on file
         1. Processed 13426680 pages for database 'P01', file 'data3' on
         file 1. Processed 13434664 pages for database 'P01', file 'data5'
         on file 1. Processed 13436648 pages for database 'P01', file
         'data2' on file 1. Processed 13439904 pages for database 'P01',
         file 'data4' on file 1. Processed 13434848 pages for database
         'P01', file 'data6' on file 1. Processed 13434944 pages for
         database 'P01', file 'data7' on file 1. Processed 13425312 pages
         for database 'P01', file 'data8' on file 1. Processed 13444456
         pages for database 'P01', file 'data9' on file 1. Processed
         13441120 pages for database 'P01', file 'data10' on file 1.
         Processed 13432112 pages for database 'P01', file 'data11' on
         file 1. Processed 13417296 pages for database 'P01', file
         'data12' on file 1. 100 percent backed up. Processed 1298 pages
         for database 'P01', file 'log1' on file 1.

         BACKUP DATABASE successfully processed 161200170 pages in
         4130.729 seconds (319.689 MB/sec).


Deadlocking

The customer was having several occurrences of SQL Server deadlocking during the
ETL process largely due to problems with Informatica versions prior to version 6.x.
The following examples show detailed tracking, analysis and a proposed solution to
these deadlocking events. Please note that the best way to eliminate these issues is
to make sure you are using the latest supported version of Informatica for your
Siebel Analytics Database.
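The deadlock chains shown below were taken from the SQL Server error log. As a
sketch, this level of deadlock reporting can be enabled on SQL Server 2000 with
trace flags (1204 prints the deadlock chains and victim; 3605 routes the output to
the error log):

```sql
-- Enable deadlock chain reporting for all connections while troubleshooting
DBCC TRACEON (1204, 3605, -1)

-- ... reproduce the ETL deadlocks, then review the SQL Server error log ...

-- Disable the flags when finished
DBCC TRACEOFF (1204, 3605, -1)
```

The flags can also be set at server startup with the -T1204 and -T3605 parameters
so they survive a restart.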

Deadlocking Quick Overview (from Books-On-Line)
“A deadlock occurs when there is a cyclic dependency between two or more threads
for some set of resources.
Deadlock is a condition that can occur on any system with multiple threads, not just
on a relational database management system. A thread in a multi-threaded system
may acquire one or more resources (for example, locks). If the resource being
acquired is currently owned by another thread, the first thread may have to wait for
the owning thread to release the target resource. The waiting thread is said to have
a dependency on the owning thread for that particular resource.
If the owning thread wants to acquire another resource that is currently owned by
the waiting thread, the situation becomes a deadlock:”
For more details and diagrams explaining deadlocking, consult SQL Server Books
Online (BOL). Open BOL, choose Go > URL from the menu, and paste the following into
the URL field: acdata.chm::/ac_8_con_7a_8i93.htm



Deadlock incident example 1:
Consecutive deadlocks occurred within a span of 9 seconds on 10/15 20:53. The
update lock on resource KEY 5:740197687 points to a non-clustered unique index
OPB_RTSEQ.BLAH (SEQNAME), while the lock on resource RID: 5:1:8955 points to
table OPB_RTSEQ.
At 20:53:28.16, the server killed SPID 62 as the deadlock victim. The deadlock
chain consists of 3 nodes: SPID 63, 62 and 53:
Node    SPID    SQL                             Owned (locked)     Requested (wait      Blocked
                                                Resource           for) Resource        by
1       63      SELECT SEQVAL FROM OPB_RTSEQ    Exclusive RID      Shared KEY lock      62
                WHERE SEQNAME = 'SESSLOGSEQ'    lock
                                                RID: 5:1:8955:11
2       62      UPDATE OPB_RTSEQ SET SEQVAL =                      Update KEY lock      53
                SEQVAL + SEQINCR WHERE                             KEY:
                SEQNAME = 'SESSLOGSEQ'                             5:740197687:2
                                                                   (350145c4a548)
3       53      UPDATE OPB_RTSEQ SET SEQVAL =   Update KEY lock    Shared RID lock      63
                SEQVAL + SEQINCR WHERE          KEY:
                SEQNAME = 'SESSLOGSEQ'          5:740197687:2
                                                (350145c4a548)


At 20:53:33.16, server killed SPID 60 as the deadlock victim. The deadlock chain
consists of 3 nodes: SPID 54, 60 and 53:
Node    SPID    SQL                             Owned (locked)     Requested (wait      Blocked
                                                Resource           for) Resource        by
1       54      UPDATE OPB_RTSEQ SET SEQVAL =                      Update KEY lock      60
                SEQVAL + SEQINCR WHERE                             KEY:
                SEQNAME = 'SESSLOGSEQ'                             5:740197687:2
                                                                   (350145c4a548)
2       60      UPDATE OPB_RTSEQ SET SEQVAL =   Update KEY lock    Shared RID lock      53
                SEQVAL + SEQINCR WHERE          KEY:
                SEQNAME = 'SESSLOGSEQ'          5:740197687:2
                                                (350145c4a548)
3       53      SELECT SEQVAL FROM OPB_RTSEQ    Exclusive RID      Shared KEY lock       54
                WHERE SEQNAME = 'SESSLOGSEQ'    lock
                                                RID: 5:1:8955:11


At 20:53:35.66, server killed SPID 54 as the deadlock victim. The deadlock chain
consists of 3 nodes: SPID 55, 54 and 53:
Node    SPID    SQL                             Owned (locked)     Requested (wait       Blocked
                                                Resource           for) Resource         by
1       55      UPDATE OPB_RTSEQ SET SEQVAL =                      Update KEY lock       54
                SEQVAL + SEQINCR WHERE                             KEY:
                SEQNAME = 'SESSLOGSEQ'                             5:740197687:2
                                                                   (350145c4a548)
2       54      UPDATE OPB_RTSEQ SET SEQVAL =   Update KEY lock    Shared RID lock       53
                SEQVAL + SEQINCR WHERE          KEY:
                SEQNAME = 'SESSLOGSEQ'          5:740197687:2
                                                (350145c4a548)
3       53      SELECT SEQVAL FROM OPB_RTSEQ    Exclusive row      Shared KEY lock       55
                WHERE SEQNAME = 'SESSLOGSEQ'    lock
                                                RID: 5:1:8955:11


At 20:53:36.91, server killed SPID 55 as the deadlock victim. The deadlock chain
consists of 3 nodes: SPID 61, 55 and 53:
Node    SPID    SQL                             Owned (locked)     Requested (wait       Blocked
                                                Resource           for) Resource         by
1       61      UPDATE OPB_RTSEQ SET SEQVAL =                      Update key lock       55
                SEQVAL + SEQINCR WHERE                             KEY:
                SEQNAME = 'SESSLOGSEQ'                             5:740197687:2
                                                                   (350145c4a548)
2       55      UPDATE OPB_RTSEQ SET SEQVAL =   Update key lock    Shared lock           53
                SEQVAL + SEQINCR WHERE          KEY:
                SEQNAME = 'SESSLOGSEQ'          5:740197687:2
                                                (350145c4a548)
3       53      SELECT SEQVAL FROM OPB_RTSEQ    Exclusive row      Shared lock           61
                WHERE SEQNAME = 'SESSLOGSEQ'    lock
                                                RID: 5:1:8955:11


At 20:53:37.53, server killed SPID 61 as the deadlock victim. The deadlock chain
consists of 3 nodes: SPID 58, 61 and 53:
Node    SPID    SQL                             Owned (locked)     Requested (wait       Blocked
                                                Resource           for) Resource         by
1       58      UPDATE OPB_RTSEQ SET SEQVAL =                      Update key lock       61
                SEQVAL + SEQINCR WHERE                             KEY:
                SEQNAME = 'SESSLOGSEQ'                             5:740197687:2
                                                                   (350145c4a548)
2       61      UPDATE OPB_RTSEQ SET SEQVAL =   Update key lock    Shared lock           53
                SEQVAL + SEQINCR WHERE          KEY:
                SEQNAME = 'SESSLOGSEQ'          5:740197687:2
                                                (350145c4a548)
3       53      SELECT SEQVAL FROM OPB_RTSEQ    Exclusive row      Shared lock           58
                WHERE SEQNAME = 'SESSLOGSEQ'    lock
                                                RID: 5:1:8955:11


Analysis & Solutions
This problem has been fixed in Informatica version 6.0. The following Analysis and
Solutions were done on a system using Informatica version 5.2.

This series of deadlocks was caused by the lack of a clustered index and the way the sequence
value for SESSLOGSEQ was generated in and retrieved from the OPB_RTSEQ table.

Although the table OPB_RTSEQ has a unique index on column SEQNAME, the update
and select statements requesting locks at the index level and the row level are
causing the deadlock. This happens because the update statement was granted the
update lock on the key and is trying to obtain an exclusive lock on the row. At the
same time the select statement has the lock on the row and is trying to obtain the
update lock on the key. This contention can be alleviated by creating a clustered
index on column SEQNAME instead of the current non-clustered unique index. This
causes the lock grants and waits to go against the clustered index, which is the same
entry in the b-tree, thus eliminating the deadlock.
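A sketch of the suggested index change (the existing index name is an assumption;
confirm the real name with sp_helpindex 'OPB_RTSEQ' before dropping it):

```sql
-- Replace the non-clustered unique index on SEQNAME with a clustered one so the
-- key lock and the row lock resolve to the same b-tree entry, removing the cycle.
DROP INDEX OPB_RTSEQ.OPB_RTSEQ_SEQNAME    -- assumed existing index name
CREATE UNIQUE CLUSTERED INDEX OPB_RTSEQ_SEQNAME ON OPB_RTSEQ (SEQNAME)
```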
The update and retrieval of the value appears to be a two-step process:
               1. Update the SEQVAL value, with an update lock on the index.
               2. Retrieve the value, with an exclusive row lock.
The transaction isolation and lock changes are inefficient. Without a major schema
overhaul, this process can be streamlined using the following update syntax:
         DECLARE @SEQVAL int

         UPDATE OPB_RTSEQ
         SET @SEQVAL = SEQVAL = SEQVAL + SEQINCR
         WHERE SEQNAME = 'SESSLOGSEQ'

         SELECT @SEQVAL

This code retrieves the sequence value within the same update transaction scope
with much greater efficiency, providing higher concurrency and eliminating the
deadlocks.

Deadlock incident example 2:
This deadlock occurred as a conflict between SPID 59 and SPID 54. SPID 59 is
blocked from its request for an update (U) lock on KEY (range) 5:1396200024:2 –
_WA_Sys_OBJECT_TYPE_ID_53385258 (which is in fact a table-level lock due to the
lack of an index on the column OBJECT_TYPE_ID), because SPID 54 already has an
exclusive lock on it. In Node 2, SPID 54 is blocked from its request for an exclusive
lock on RID (row) 5:1:10213:7 because SPID 53 holds a shared lock.
Node    SPID    SQL                             Owned (locked)      Requested (wait      Blocked
                                                Resource            for) Resource        by
1       59      SELECT A.OBJECT_LOCK_TYPE,      Update KEY Lock     Shared RID lock      54
                A.OBJECT_LOCKER_ID,             KEY:
                A.LOCK_TIME, B.SERVER_ID,       5:1396200024:2
                B.LAST_SAVED,                   (1901ddf4e005)
                A.OBJECT_TYPE_ID, A.HOSTNAME,   Note: this is a
                A.PROC_ID FROM                  statistics record
                OPB_SERVER_INFO B,
                OPB_OBJECT_LOCKS A WHERE
                A.OBJECT_ID    = B.SERVER_ID
                AND B.SERVER_ID = 10 AND
                A.OBJECT_TYPE_ID…
2       54      DELETE FROM OPB_OBJECT_LOCKS    Exclusive RID       Exclusive KEY        53
                WHERE OBJECT_TYPE_ID = 41 AND   lock                lock
                OBJECT_ID = 10 AND              RID: 5:1:10213:7
                OBJECT_LOCK_TYPE = 8 AND
                OBJECT_LOCKER_ID = 2 AND
                SUBJECT_ID = 0 AND
                VERSION_ID = 0 AND LOCK_TIME
                = '10/12/2003 20:47:38'


Analysis & Solutions
This problem has been fixed in Informatica version 6.0. The following Analysis and
Solutions were done on a system using Informatica version 5.2.
The occurrence of this deadlock may be eliminated, or significantly reduced, if the
table OPB_OBJECT_LOCKS is properly indexed. The table originally has indexes, and
dropping those original indexes seems to eliminate some locking problems. However,
it is possible to gain additional improvement by adding the following indexes
instead. Test both configurations to make sure you are getting the best performance
in situations like this.
    •    Clustered primary key index
    •    Non-clustered index covering OBJECT_TYPE_ID
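A hedged sketch of those two indexes. The clustered key columns are an assumption
inferred from the DELETE predicate shown above; confirm them against the actual
Informatica repository schema before applying:

```sql
-- Clustered index so key and row lock requests land on the same b-tree entry
CREATE CLUSTERED INDEX OPB_OBJ_LOCKS_CL
    ON OPB_OBJECT_LOCKS (OBJECT_ID, OBJECT_TYPE_ID, OBJECT_LOCKER_ID)    -- assumed key

-- Index on OBJECT_TYPE_ID so the range scan no longer escalates to a table lock
CREATE INDEX OPB_OBJ_LOCKS_TYPE ON OPB_OBJECT_LOCKS (OBJECT_TYPE_ID)
```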

				