Migrating DBF code to SQL

					Migrating from DBF to SQL
Geoff Schaller

Tutorial 3

This is the third paper in a series of four tutorials which seek to guide the reader into the world of SQL from DBF. The four papers are:

1. A Combined SQL/DBF Server Using Dual Inheritance
2. Coding in VO for the SQL Environment
3. Migrating from DBF to SQL
4. Employing the SOVO SQL Classes and Tools

This session is devoted solely to the migration process itself. We need to develop a clear strategy to get to SQL, and we will be giving up some things that we will crave just minutes later. We may also change things that cause our code to break, so being able to roll back our changes is very important. This is only possible if we are structured in our approach. Moving your code is also as much about moving your users, because it will be your users who remind you about what you have taken from them! But the news is certainly not all bad: very quickly you will be able to offer integrity and stability such as was never available with DBF. You will also be able to trade some performance and flexibility losses against performance gains that should eventually win over your clients. The process itself is important if the migration is to be successful, but the implementation is important if your users are to support your endeavours. This session is about the process. The implementation is left to the final paper.


Migrating DBF to SQL: this is an angst-causing process at even the smallest of sites. We are faced with the dual challenge of ensuring that the data translates to the new environment and that all our business logic still behaves the same. It is impossible for the process to go completely smoothly and it is impossible to get everything right first up. Do not expect otherwise. Our goal here is to at least guide the process to a successful conclusion by dealing with some carefully managed stages of change. Planning is the key. Before you embark on this process, please be sure to have read the second tutorial of this series, titled "Coding in VO for the SQL Environment". The next paper in this series deals with the specifics of employing the SOVO SQL Class library. The purpose of this paper is to advise you on the process surrounding the conversion task. As indicated in the opening paragraph, this process involves both data and code. We have devised a simple sequence of events that should help you deliver the conversion successfully:

1. Code Analysis and Conversion
2. Data Analysis and Repair
3. PONR - Data Migration
4. Raising Stored Procedures
5. Using Advanced Database Objects
6. Data Normalisation

The PONR or Point of No Return is critical. It represents that stage in the process where you cannot go back to pure DBF no matter what happens. Only if you are using dual mode dual inheritance will it be possible to delay this step until the next stage of conversion: the application of stored procedures. As long as you planned and executed the first three stages well enough, the rest will happen smoothly. If you are using dual mode dual inheritance you may still be able to revert to DBF but take extra care with referential audits if this is the case. The contents of this tutorial are written from the context that you are about to deploy the SOVO SQL Classes to your organisation. Whilst we will make significant reference to the classes we will leave a detailed discussion of the specific functionality and its implementation to the final paper. This paper seeks to advise you on the process, not the physical implementation.

Stage 1: Code Analysis and Conversion
The code analysis stage will lay the groundwork for a successful data conversion. There are essentially two things we have to do to our existing code:

1. Review the code as discussed in the second tutorial, and
2. Reconstruct our data access classes to provide our SQL compatibility.

As you go through these steps, comment everything! It is important that you leave messages to yourself about what you did and why. If you need to find out why something stopped working, you need to know what you changed. If you proceed blindly through your code changing everything in sight, chances are you will break things and never find out why. This of course presumes you are starting from a stable base. When you comment, use your initials and a date. It is most useful to be able to refer to the actual developer who made the change in order to find out why. Further, if you reverse something because the change didn't work, LEAVE IT IN THE CODE, albeit commented out. Until you are completely satisfied with the result, it is often useful to know that something did not work. There have been countless times when we have reversed a change and deleted the text, only to find that someone else comes along later to try the same failed change. It has often been helpful to see that someone tried something but that it was not successful.
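The commenting convention can be sketched as follows. The initials, dates and the change shown are purely illustrative:

```
// GS 12/04/05: set order explicitly - there is no "default" order under SQL
oServer:SetOrder("FUND")
// GS 15/04/05: tried suspending notification around the seek - reversed,
// it broke browser refresh. Left here so nobody retries the same change.
// oServer:SuspendNotification()
```

The reversed change stays visible but inert, which is exactly the audit trail described above.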


The next thing to consider is how much code to change. The temptation is to simply change everything: don't. The process we advise is to change only what is necessary to achieve the task. Also, don't hurry. Even if your migration process proceeds smoothly, sit back on your changes for a few months. Some problems are obscure or involve code not often visited by the end user. You need time to be sure all possibilities have been covered. React to genuine criticism but reserve code changes and turnover into stored procedures for those things that are genuinely slow or can benefit from the process. A stored procedure or advanced database object may not always give you a positive return on investment because they include overheads of their own. Consider for example an invoice entry screen. You have a form with 100 lines on it, you edit a few and then save. Normally you would now retotal the lines, update the invoice value and then balance the invoice into its appropriate ledger. If your unconverted code takes 200ms to complete under SQL then there is nothing to be gained by re-writing it as a stored procedure. If, however, you had a rebalance process for the invoice headers that needed to go back through three years of history and took five minutes, it is very likely a stored procedure could reduce this to mere seconds. So, be critical when reviewing your business logic for potential conversion.

Code Analysis Checklist
The following checklist will guide you through the process.

DB Function Calls
Check that you do not use any DBxxxxx() functions in your code to access or manage data. This extends to UDFs.

Commands
Check that you do not use commands of any sort. These too can be found in UDFs. Make sure all database access is OO based.

Server Instantiation
Where possible, you should define and pass a server object to a DataWindow and not rely on defining the server in the WED property for a DataWindow. This will make such things as setting orders, scopes and filters much more visible and make it easier to track server-related issues.

Server Parameters
We will look at code-specific issues later, but anywhere you use a server, be certain you specify the order or tag required. Do not rely on the default order because under SQL there is no "default" order.

WED Accesses/Assigns
Try to remove all of these. Set the No Access/Assign property to YES and fix up the code to refer explicitly to FieldGet()/FieldPut(). This will also have the side effect of making your code leaner and faster.

DBServer Editors
Cut the DBServer editors from your code and just leave the raw class code behind. We will be substantially modifying this later. This will reduce the size of your project substantially.

Late Bound Field Accesses and Assigns
Whilst such code will work with our SOVO Classes, we do not recommend it. For example, on a DataWindow with a server attached which has a field called ADD1, you can reach the field with SELF:Server:Add1. This works but it is inconsistent with our aims for predictable code writing.

FieldSpecs
Remove server-specific fieldspecs and use ours. For example, a 20 character field in any server should use our CHARACTER20{} fieldspec. The benefit of this will be to use our behind-the-scenes management of SQL data type conversions. Our fieldspec classes already carry this information.

Deleted Record Management
Establish whether you are doing something clever here or not. If you are making a special business of re-using deleted records then you may need to include a further generic column in your SQL tables. If not, ignore deleted records. Your SQL tables certainly will!

Codeblock Parameters
Make sure that none of your DBServer class method calls pass codeblocks as parameters. They will simply not operate in SQL.

Business Logic Components
Where you have server-specific routines held as window class methods, move the code back to the server level. This will reduce the time spent wondering where certain code elements are and also aid testing.

Referential Checks
You should code yourself a short routine for every server class which checks index fields for empty values or lookup fields for illegal and/or empty values. This might seem tedious at first but it will save you substantial time in the data conversion stage. In fact, all DBF systems should already have such referential checking mechanisms, but we know most don't.

Browser Filters
Consider any browsers you may have that normally open up on large tables. The ideal circumstance would be for you to provide some scoping option and pre-fill it with something reasonable. For example, in an alphabetic listing, just show the 'A's. For a ledger system, show just the last month. The SQL purists would have you start with a blank window and demand the user enter at least some criteria, but users hate this. Give them something, but with care.

Required Fields
Note in your code the field list required. For example, if you process something in a loop and use a second table to look up codes or prices, you probably only need one or two fields. Knowing this now will seriously boost SQL performance when we reach that step.

Backwards Skips
Have a look for any code anywhere which executes a backwards skip. It would be wise to replace this with our SkipNextLeast() method or replace the call with a two-step process.

GoTo(nnnnnn) or Server:Recno := nnnnn
Look for these calls and investigate whether they might be replaceable with another construct. A record number in SQL is only going to have context as a position within the current record set and not the whole table. Your code may be safe but it is best to inspect it now.

Modifying Your Code
Whilst the final paper will go into the specifics of deploying the SOVO Classes to your application environment, there are a few coding constructs we would like to discuss here. We also refer you to the class Help, which describes the classes and methods in some detail. The purpose of this segment of our paper is to introduce you to some of the underlying techniques which make this conversion process work and how you should employ them in your code. The ones covered here are the minimum you should know directly about.

We have developed a special set of fieldspecs for two reasons: (1) to cut down the enormous overhead generated by the DBServer editor, and (2) to automate several features necessary with window, browser and server conversions for SQL. The process is quite comprehensive so we will illustrate by example:
CLASS GCSFieldSpec INHERIT FIELDSPEC
    PROTECT cSQLDataType AS STRING  // CHAR(), VARCHAR(), TEXT, DATETIME, NUMERIC, TINYINT, SMALLINT, INT, BIT
    PROTECT cSQLValType  AS STRING  // "C", "M", "T", "D", "N", "L"
    PROTECT nSQLLength   AS DWORD   // needed for MEMO/VARCHAR and NUMERIC only
    PROTECT uDefault     AS USUAL   // holds the default value to be used, overrides DEFAULTVALUE and EMPTYVALUE

ACCESS DefaultValue CLASS GCSFieldSpec
    LOCAL uReturn AS USUAL
    DO CASE
    CASE !IsNil(SELF:uDefault)
        uReturn := SELF:uDefault
    CASE SELF:cType = "C"
        uReturn := ""
    CASE SELF:cType = "M"
        uReturn := ""
    CASE SELF:cType = "N"
        IF SELF:Decimals > 0
            uReturn := 0.0
        ELSE
            uReturn := 0L
        ENDIF
    CASE SELF:cType = "L"
        uReturn := FALSE
    CASE SELF:cType = "O"
        uReturn := NULL_OBJECT
    CASE SELF:cType = "D"
        uReturn := Today()
    OTHERWISE
        GCSErrorBox("CTYPE of " + SELF:cType + " not found", 5)
    ENDCASE
    RETURN uReturn

ASSIGN DefaultValue(uValue) CLASS GCSFieldSpec
    SELF:uDefault := uValue
    RETURN uValue

ACCESS EmptyValue CLASS GCSFieldSpec
    LOCAL uReturn AS USUAL
    DO CASE
    CASE !IsNil(SELF:uDefault)
        uReturn := SELF:uDefault    // must override for SQL benefit
    CASE SELF:cType = "C"
        uReturn := ""
    CASE SELF:cType = "M"
        uReturn := ""
    CASE SELF:cType = "N"
        IF SELF:Decimals > 0
            uReturn := 0.0
        ELSE
            uReturn := 0L
        ENDIF
    CASE SELF:cType = "L"
        uReturn := FALSE
    CASE SELF:cType = "O"
        uReturn := NULL_OBJECT
    CASE SELF:cType = "D"
        uReturn := NULL_DATE
    OTHERWISE
        GCSErrorBox("CTYPE of " + SELF:cType + " not found", 5)
    ENDCASE
    RETURN uReturn

ACCESS SQLDataType CLASS GCSFieldSpec
    RETURN SELF:cSQLDataType

ACCESS SQLLength CLASS GCSFieldSpec
    IF Instr(SELF:ValType, "M N")
        RETURN SELF:nSQLLength
    ELSE
        RETURN SELF:Length
    ENDIF

ACCESS SQLValType CLASS GCSFieldSpec
    RETURN SELF:ValType

You can see that the FieldSpec class has been significantly enhanced to provide data type information back to our server and window classes. These accesses are used most frequently during the construction of SELECT statements and are the mechanism for achieving data type checking. Each major data type has a generic type to support a SQL type and then there are specific length fieldspecs. For example, take a single character field:


CLASS CHARACTER1 INHERIT CHARFIELDSPEC

METHOD Init(uCaption) CLASS CHARACTER1
    Default(@uCaption, "")
    SUPER:Init(HyperLabel{#CHARACTER1, uCaption, "", ""}, "C", 1, 0)
    RETURN SELF

METHOD Init(uWidth, oHyperLabel, uType, nLength, nDecimals) CLASS CHARFIELDSPEC
    SUPER:Init(uWidth, oHyperLabel, uType, nLength, nDecimals)
    SELF:cSQLDataType := "CHAR"
    RETURN SELF

ACCESS SQLDataType CLASS CHARFIELDSPEC
    RETURN SELF:cSQLDataType + " (" + NTrim(SELF:Length) + ")"

Most fieldspecs take this form and you will note the complete absence of all the self-validation code. Important features to note are the uCaption parameter and the cSQLDataType assignment. The data type establishes for the SQL engine how to construct the update and insert statements, and the uCaption is used as a default column heading in browsers. The data type is even more important for memos because we need to determine which type of SQL data type is required. Choosing the wrong one can lead to poor performance and wasteful storage. We have provided a suggested range in the FieldSpec module.
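By way of illustration, a memo fieldspec mapped to a VARCHAR might look like the following sketch. The class name MEMO500, its parent MEMOFIELDSPEC and the Init() parameters are assumptions patterned on the CHARACTER1 example above; consult the FieldSpec module for the actual classes provided:

```
CLASS MEMO500 INHERIT MEMOFIELDSPEC

METHOD Init(uCaption) CLASS MEMO500
    Default(@uCaption, "")
    SUPER:Init(HyperLabel{#MEMO500, uCaption, "", ""}, "M", 10, 0)
    SELF:cSQLDataType := "VARCHAR"   // rather than TEXT - leaner storage
    SELF:nSQLLength := 500           // SQLLength returns this for "M" types
    RETURN SELF

ACCESS SQLDataType CLASS MEMO500
    RETURN SELF:cSQLDataType + " (" + NTrim(SELF:nSQLLength) + ")"
```

The point is that the VARCHAR length is carried by the fieldspec itself, so the SQL table generator needs no extra information.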

DBServer Class
We head into some serious territory here. We have provided a DBServer class which you will need to inherit all your servers from. By now you will have ditched all your DBServer binaries, so now we can move forward to making them SQL compatible. We shall investigate the important components.

Method Init()

Your class declaration and Init() method will be reduced to the following:
METHOD Init(oWin, cInitOrder, lShare, lNew, lAddDefaultRecords) CLASS Staff
    oHyperLabel := HyperLabel{#Staff, "Staff", "Staff", "Staff"}
    SUPER:Init(oWin, "STAFF", cInitOrder, lShare, lNew, lAddDefaultRecords)
    RETURN SELF

The detailed help describes all the parameters in greater detail, but the key factor here is that the physical file name "STAFF" is buried as an Init() parameter. The file will be looked for in the folder described in a global object path, but this can be overridden at the server level with a PreInit() method. For example:
METHOD PreInit() CLASS Postcodes
    LOCAL oGlobal AS GlobalObject
    oGlobal := GetGlobal()
    SELF:cDBFPath := oGlobal:ROOT_DIRECTORY
    SELF:cCatalog := "softwareobjectives"
    RETURN

There are two important features here: the DBF path override (from a global or your own item) and the catalog override. Under SQL the catalog decides the database to use; in DBF this usually translates to a folder.

FieldDesc()

The FieldDesc() is practically the same. The only difference is the default column caption as shown in the FieldSpec parentheses. Choose your memo fieldspecs carefully: try to choose a VARCHAR model which minimises storage requirements without risking truncation.
ACCESS FieldDesc CLASS Postcodes
    LOCAL aRet AS ARRAY
    aRet := ArrayCreate(4)
    aRet[1] := {#POSTCODE, "POSTCODE", PostCodeFS{"P/Code"} }
    aRet[2] := {#PLACE, "PLACE", CHARACTER30{"Town"} }
    aRet[3] := {#STATE, "STATE", CHARACTER3{"State"} }
    aRet[4] := {#BOXTYPE, "BOXTYPE", CHARACTER30{"Box Type"} }
    RETURN aRet


IndexList()

The index list produced is similar to the original DBServer one, but we have added two extra elements to each row of the index array.

The first added element records the fields required by the index order and the second is the SQL equivalent expression for the order condition. Under SQL, all scopes and filter conditions merely add to the WHERE clause (note how the LOGIC is treated) and the fields element allows us to test dynamically all parameters passed to scoping or filtering conditions. You will need to modify all your IndexList() methods.

Keys, Defaults and Default Records

Some of the newer features of our DBServer class were "nice" for DBF but now they are necessary for SQL. Firstly, the key generators:
ACCESS ForeignKey CLASS CashFunds
    LOCAL aRet AS ARRAY
    aRet := ArrayCreate(2)
    aRet[1] := {"FK_TRUSFUND_T_AGENTS", "AGENT", "T_AGENTS", "AGENT"}
    aRet[2] := {"FK_TRUSFUND_STAFF", "MANCODE", "STAFF", "MANCODE"}
    RETURN aRet

ACCESS PrimaryKey CLASS CashFunds
    RETURN {"PK_TRUSFUND", "FUND", "'000000'"}
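For orientation, under MS SQL Server the key definitions above would correspond to DDL along these lines. This is a hedged sketch only: the actual statements are generated by the class library, and the TRUSFUND table name is inferred from the key names:

```sql
-- Primary key first: foreign keys elsewhere will reference it.
ALTER TABLE TRUSFUND ADD CONSTRAINT PK_TRUSFUND PRIMARY KEY (FUND);

-- Foreign keys can only be created once the referenced primary keys
-- exist, hence the importance of creation order.
ALTER TABLE TRUSFUND ADD CONSTRAINT FK_TRUSFUND_T_AGENTS
    FOREIGN KEY (AGENT) REFERENCES T_AGENTS (AGENT);
ALTER TABLE TRUSFUND ADD CONSTRAINT FK_TRUSFUND_STAFF
    FOREIGN KEY (MANCODE) REFERENCES STAFF (MANCODE);
```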

The primary key represents the "master" order for the DBF and will be referenced in other tables. There can only be one primary key but it can contain multiple fields. If there is no primary key, our table generator will create one from the RECID field it creates. The foreign key is obviously a lookup into another table's primary key. There is a simple naming convention based on the table names involved. All keys must have unique names and must be created in the correct order to work. Now that we have primary keys in a table, those tables that refer to this table must have values in the key columns. Thus, every time a new row is to be appended, we enforce a default value for those columns. Without a default (or a real value), the row could not be inserted.
ACCESS FieldDefaults CLASS CashFunds
    LOCAL aRet AS ARRAY
    aRet := ArrayCreate(2)
    aRet[1] := {#AGENT, "000000"}
    aRet[2] := {#MANCODE, "ADMIN"}
    RETURN aRet

Note that these field defaults must also exist in the primary key table, so we must ensure that the primary key table includes at least sufficient records to carry all defaults. Here is an example:
METHOD DefaultRecords() CLASS CashFunds
    LOCAL aRecords AS ARRAY
    aRecords := ArrayCreate(2)
    aRecords[1] := {"FUND", "FUNDNAME", "MANCODE", "UPDATED", "AGENT", ;
        "AGENTNAME", "DESCRIPT", "REASON"}
    aRecords[2] := {{"000000", "AUDIT ENTRIES", "ADMIN", Today(), "000000", ;
        "AUDIT ENTRIES", "Audit Entry", "Created"}}
    RETURN aRecords


And now we have all the things necessary to create servers that will provide sufficient detail to the SQL engine to describe what each one is and how it operates. There is a range of other new or altered measures, but these will be discussed in greater detail in the Classes manual. That is about it. There are the issues of preparing your code as discussed in the previous tutorial, and perhaps investigating some of the seek and scoping issues, but largely, that's it!

Stage 2: Data Analysis and Repair
The foregoing process does not really involve a lot of work: it is more a process of understanding the mechanics. The important thing now is to check your DBF data to see if it is referentially pure. Now that you have decided on your primary and foreign keys, you should build data audits into your server classes and test them. Here is an example of a simple audit:
METHOD AuditKeys(lQuiet AS LOGIC, cLogFile AS STRING) AS DWORD PASCAL CLASS CashFunds
    LOCAL oAgent AS TrustAgencies
    LOCAL oGroup AS TrustFundGroups
    LOCAL cAgent, cAgentName, cFund, cFundName, cGroup AS STRING
    LOCAL aLines, aSubLine AS ARRAY
    LOCAL nLines, nN, nFund, nNewFund AS DWORD
    LOCAL cMancode AS STRING
    LOCAL oStaff AS OBJECT

    oAgent := TrustAgencies{SELF:oWindow, "AGENT"}
    oAgent:SuspendNotification()
    oGroup := TrustFundGroups{SELF:oWindow, "FGROUP"}
    oGroup:SuspendNotification()
    oStaff := CreateInstance(#Staff, SELF:oWindow, "LOGON")
    oStaff:SuspendNotification()
    aLines := {"Trust Funds: TRUSFUNDS.DBF", ""}

    // First, inspect key fields in all orders
    SELF:SuspendNotification()
    SELF:SetOrder("FUND")
    SELF:GoTop()
    nFund := 0
    DO WHILE !SELF:GCSEof .and. !oProgWin:OK
        cFund := SELF:FIELDGET(#FUND)
        cGroup := SELF:FIELDGET(#FGROUP)
        nNewFund := Val(cFund)
        cFundName := SELF:FIELDGET(#FUNDNAME)
        cAgent := SELF:FIELDGET(#AGENT)
        cAgentName := Trim(SELF:FIELDGET(#AGENTNAME))
        cMancode := Trim(SELF:FIELDGET(#MANCODE))
        aSubLine := {"Record No: " + NTrim(SELF:RecNo) + " for " + cFund + ": " + cFundName}
        IF Empty(cAgent)
            AAdd(aSubLine, " Empty Agent field ")
        ENDIF
        IF !oAgent:Seek(cAgent)
            AAdd(aSubLine, " Illegal Agent Code of " + cAgent)
        ELSE
            IF !(Trim(oAgent:FIELDGET(#AGENTNAME)) == cAgentName)
                AAdd(aSubLine, " Illegal Agent Name of " + cAgentName + " for Agent " + cAgent)
            ENDIF
        ENDIF
        IF !Empty(cGroup)
            IF !oGroup:Seek(cGroup)
                AAdd(aSubLine, " Illegal Group Name of " + cGroup + " for " + cFundName)
            ENDIF
        ENDIF
        IF nFund = nNewFund
            AAdd(aSubLine, " Fund Number duplication of " + cFund)
        ENDIF
        IF !(nNewFund = nFund + 1)
            AAdd(aSubLine, " Fund Number sequence error - " + cFund + " > " + NTrim(cFund) + "+1")
        ENDIF
        nFund := nNewFund   // Update for next line check but maintain new sequence
        IF Empty(cMancode)
            SELF:FIELDPUT(#MANCODE, "ADMIN")
            AAdd(aSubLine, " Repaired Empty MANCODE field to ADMIN")
        ELSEIF cMancode = "SUPER"
            SELF:FIELDPUT(#MANCODE, "ADMIN")
        ELSEIF !oStaff:Seek(cMancode)
            AAdd(aSubLine, " Illegal MANCODE of " + cMancode)
        ENDIF
        // Now amalgamate output
        nLines := ALen(aSubLine)
        IF nLines > 1
            FOR nN := 1 UPTO nLines
                AAdd(aLines, aSubLine[nN])
            NEXT
        ENDIF
        SELF:Skip()
    ENDDO

    // Close files and print any results
    SELF:Commit()
    oAgent:Close()
    oStaff:Close()
    oGroup:Close()
    nLines := ALen(aLines)
    LogAndDiarise("Trust Funds Audit", cLogFile, aLines, lQuiet, oProgWin:OK)
    SELF:ResetNotification()
    RETURN nLines - 2

The audit process here writes to a log file; it also had a progress bar, but that has been removed for clarity. If you look down the code you will see a range of measures: primary keys are checked for uniqueness, non-emptiness and existence, and foreign keys are tested for existence. The process above is what a DBMS will do internally in the SQL engine, and should any test fail there, the data will not be inserted. Your task now is to go through your DBF server classes adding these referential checks. If your data can get through this, it is very likely you will succeed in uploading your DBF data. We will now investigate that process.

Stage 3: (PONR) Data Conversion
Once we are satisfied our data will load, we can begin a trial conversion. We have provided a number of tools to do this. Firstly you need to arrange a connection to a provider. Please see the last tutorial for a more detailed discussion on this point. What we are interested in now is lifting the DBF data into the SQL table. Figure 1 shows the window we provide to assist that process. The window allows you to choose the DBF location, and it fills with all available tables, the size of each file and whether or not the files belong to our class name list.

Figure 1. Create Table From DBF

The class name list is a very important feature and can be found as a property of our global object. The class name list is an array of table names and their associated class names. As long as you include all your servers in this list, the various automated procedures for indexing, rebuilding, packing and locating will make sure your table is included. There is also one more important purpose of the class name list: it defines the order in which tables and their keys must be created. Although much of this can be deduced from the classes, they need to be instantiated first. The class name list gives us a way to determine which ones must exist before others. This is also important when replacing or rebuilding tables. If a table contains a primary key field in use by other tables, all those foreign keys must be dropped first, the table rebuilt, the primary key reinstated and the foreign keys rebuilt. It is a complex process but it has been fully coded for you in the class library. The final part of the process is to perform the uplift. In your errors folder (again specified as part of the Global Object) will be details of all the failed insert statements or key applications. The way the uplift works is to attempt to load 20 rows at once. If one fails they all fail, but to attempt to load any sizeable table one row at a time would simply take forever. The good part about the log of the failures is that it contains the full insert statement or CREATE KEY statement involved. You can now take that statement and cut and paste it into the SQLMaster tool for testing. You will quickly find the offending data element causing the failure. Once you are satisfied that your data converts, you are now ready to commit to SQL. The biggest battle is now over. Thoroughly test all your windows, reports and calculations for integrity, because now you are ready to begin optimising your application to take advantage of the advanced features of an RDBMS.
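As a sketch of the idea, a class name list might be populated along these lines. The property name aClassNames is an assumption for illustration (check the Global Object for the actual property); the order places parents before children so keys can be created in sequence:

```
// STAFF and T_AGENTS carry the primary keys that TRUSFUND's foreign
// keys reference, so they must be created first.
oGlobal:aClassNames := { ;
    {"STAFF",    #Staff}, ;
    {"T_AGENTS", #TrustAgencies}, ;
    {"TRUSFUND", #CashFunds}, ;
    {"TRANS",    #Transactions} }
```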

Stage 4: Raising Stored Procedures
There are several important database objects that we can now seek to take advantage of, but firstly we'll start with stored procedures (SPs). They deserve a stage of their own for the simple reason that we have mechanisms in our classes that allow us to run with DBF or SQL even though we choose to employ SPs. This gives us the perfect platform to test the integrity of our new SP code. Without being able to revert at runtime to non-SP code it would be very difficult to prove that the SP reproduced your existing calculations correctly and faithfully. And now to an obvious comment: to write stored procedures with any efficacy means that you will need to get out your SQL handbook and make sure you understand how to employ them. Again SQLMaster will come to your rescue, but we will discuss how a little later. First, it is instructive to see how our SOVO SQL Classes handle the implementation of SPs. Basically the SP must exist in the database, so there is an issue of managing code in two places. We have chosen to manage them from the application. When a new code version is delivered to the client site, we drop all stored procedures and reload them. This makes sure our SQL engine stays in synchronisation with our code. You can see this happening in SQLServer:RebuildStoredProcedure(). The method looks for a defined list of SPs and processes them all. In the case of runtime code execution, the case is a little simpler. We just need to know if a particular method has an SP. Here is an example:
METHOD ReTotal() AS FLOAT PASCAL CLASS Transactions
    LOCAL …. Ivars
    // ...some code here assessing cTranno and lCredit
    IF IsMethod(SELF, #SQLReTotal)
        IF SELF:Connection:USESP
            RETURN Send(SELF, #SQLReTotal, cTranno, lCredit, SELF:FIELDGET(#RECID))
        ENDIF
    ENDIF
    // ...now proceed with regular method code
    oLines := TransactionLines{SELF:oWindow, "TRANNO"}
    oLines:SuspendNotification()
    oLines:SetScopes(cTranno, cTranno)
    DO WHILE !oLines:GCSEof
        mBalance += oLines:FIELDGET(IF(lCredit, #Credit, #Debit))
        oLines:Skip()
    ENDDO
    oLines:Close()
    oLines := NULL_OBJECT
    SELF:FIELDPUT(IF(lCredit, #CREDIT, #DEBIT), mBalance)
    SELF:Commit()
    RETURN mBalance

Notice we are testing for the existence of the SP-invoking method. Our suggested syntax for this is SQLxxxx, where xxxx matches the original DBF method name. The reason for doing this is in case we wish to quickly remove the method from the runtime. We can TEXTBLOCK the method and this code will still run. Strictly speaking you could of course just comment out this code, but we found TEXTBLOCK'd methods more visible in the IDE entity browser. So, for every method for which you have written an SP, you need to include these five lines of code. Now we shall investigate the SP invocation method. Before we do, I want to explain this specific example. This code is a simple loop accumulating some values and saving them into a table. This is quite ordinary code but very slow when executed in SQL. But this particular code does NOT need an SP. It can be recast as a single SQL statement and this is how we do it:
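A single-statement recast of the ReTotal() example might look like the following sketch. The TRANS and TRANSLINES table names are assumptions based on the servers used earlier, and this is an illustration of the technique rather than the library's actual method:

```
METHOD SQLReTotal(cTranno, lCredit, nRecID) CLASS Transactions
    LOCAL cStmt AS STRING
    // One UPDATE replaces the whole skip/accumulate loop: the SQL engine
    // sums the lines and writes the balance in a single round trip.
    cStmt := "UPDATE TRANS SET " + IF(lCredit, "CREDIT", "DEBIT") + ;
        " = (SELECT SUM(" + IF(lCredit, "CREDIT", "DEBIT") + ")" + ;
        " FROM TRANSLINES WHERE TRANNO = '" + cTranno + "')" + ;
        " WHERE RECID = " + NTrim(nRecID)
    RETURN SELF:Connection:ExecCommand(cStmt)
```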

You see, it is quite straightforward. Write the SQL statement and call the ExecCommand() method on the connection object. Simple! One word of caution, however: the above example is TSQL specific, so it may not work for Oracle or Sybase. The better solution is to call for an Update/Select action via the provider class, which returns a string correctly formatted for the specific provider, and then execute it. Burying syntax-specific solutions this far down the class hierarchy keeps the rest of your code provider-independent. The first example showed us how to code a replacement statement, but the more usual requirement is something more complex. For that we should write an SP script, which is ultimately just a collection of statements. Here is a similar method which calls for an SP instead:
METHOD SQLRebalance(mEntrydate) CLASS Transactions
    RETURN SELF:oConn:ExecSP("Rebalance", {{"entrydate", adDate, adParamInput, 10, mEntrydate - 1}})

This method assumes the SP titled Rebalance exists, passes the parameters listed and returns a value. The SP must deal with all updates required and generally should only return a scalar value. The SP itself will have been loaded as part of the table creation process or checked and reloaded on a version update. We keep the SP text in the application and one might look like this:
METHOD SPRebalance() CLASS Transactions
    LOCAL oSObject AS ScriptObject
    oSObject := ScriptObject{}
    oSObject:AddScript("CREATE PROCEDURE Rebalance @EntryData DATETIME")
    oSObject:AddScript("AS ")
    oSObject:AddScript("--<Version:1.01><From Code>")
    oSObject:AddScript(" DECLARE @fBalance FLOAT ")
    oSObject:AddScript(" DECLARE @fPaid FLOAT ")
    oSObject:AddScript(" DECLARE @fTotBal FLOAT ")
    oSObject:AddScript(" DECLARE @fTotPaid FLOAT ")
    oSObject:AddScript(" DECLARE @bCont BIT ")
    oSObject:AddScript(" DECLARE @nRecID INT ")
    oSObject:AddScript(" DECLARE @nRecCount INT")
    oSObject:AddBlankLine()
    oSObject:AddScript(" SET NOCOUNT ON")
    oSObject:AddScript(" SET @nRecCount = 0")
    oSObject:AddBlankLine()
    oSObject:AddScript(" DECLARE TRANS_CURSOR CURSOR ")
    oSObject:AddScript(" FOR SELECT RECID, fPAID = CASE WHEN PAID = 1 THEN CREDIT-DEBIT ELSE 0 END, ")
    oSObject:AddScript(" fBAL = CREDIT - DEBIT FROM TRANS WHERE ")
    oSObject:AddScript(" CONVERT(CHAR(10), ENTRYDATE, 112) >= CONVERT(CHAR(10), @EntryData, 112) ")
    oSObject:AddScript(" AND CONVERT(CHAR(10),ENTRYDATE,112) <= CONVERT(CHAR(10),GETDATE(),112) ")
    oSObject:AddScript(" ORDER BY ENTRYDATE, AMTSTATUS ")


oSObject:AddScript(" oSObject:AddScript(" oSObject:AddScript(" oSObject:AddBlankLine() oSObject:AddScript(" oSObject:AddScript(" oSObject:AddScript(" oSObject:AddScript(" oSObject:AddScript(" oSObject:AddScript(" oSObject:AddScript(" oSObject:AddScript(" oSObject:AddBlankLine() oSObject:AddScript(" oSObject:AddScript(" oSObject:AddScript(" oSObject:AddScript(" oSObject:AddScript(" OPEN TRANS_CURSOR ") FETCH NEXT FROM TRANS_CURSOR INTO @nRecID, @fPaid, @fBalance ") SET @bCont = @@FETCH_STATUS ") -- get the initial balance ") IF @bCont = 0 ") BEGIN ") SET @fTotPaid = @fPaid ") SET @fTotBal = @fBalance ") FETCH NEXT FROM TRANS_CURSOR INTO @nRecID, @fPaid, @fBalance ") SET @bCont = @@FETCH_STATUS ") END ") WHILE @bCont = 0 ") BEGIN ") SET @fTotPaid = @fTotPaid + @fPaid ") SET @fTotBal = @fTotBal + @fBalance ") UPDATE TRANS SET BALANCE=@fTotBal,PAIDBAL=@fTotPaid WHERE “ + ; “RECID=@nRecID ") FETCH NEXT FROM TRANS_CURSOR INTO @nRecID, @fPaid, @fBalance ") SET @bCont = @@FETCH_STATUS ") SET @nRecCount = @nRecCount + 1") END ")

oSObject:AddScript(" oSObject:AddScript(" oSObject:AddScript(" oSObject:AddScript(" oSObject:AddBlankLine() oSObject:AddScript(" CLOSE TRANS_CURSOR ") oSObject:AddScript(" DEALLOCATE TRANS_CURSOR ") oSObject:AddBlankLine() oSObject:AddScript(" RETURN @nRecCount") oSObject:AddBlankLine() RETURN oSObject:Script
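Once the script string has been loaded into the server as a stored procedure, it can be invoked from any query tool or from client code. A minimal T-SQL sketch (the date value is illustrative only):

```sql
DECLARE @nCount INT
EXEC @nCount = Rebalance @EntryData = '20090101'
SELECT @nCount AS RowsProcessed
```

The return value is @nRecCount, the number of transaction rows rebalanced by the cursor loop.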

You can see that this uses an object of type ScriptObject to manage the creation of a string. The string is then loaded into the server as an SP and that is all there is to it. The script language is not difficult but, again, this is MS SQL Server specific script: it may differ for other DBMSs and you will need to check this. The last items to mention are the SP support provisions in SQLMaster. SQLMaster has a complete script editor with a syntax highlighter. You can use this tool to develop and test your SPs, and even upload them into the server. Further, you can open and inspect any SP in your DBMS. Many of the system SPs are quite instructive and can be used to learn some of the syntax provisions; they certainly provide good examples of setting up and receiving parameters. The last tool of significance is the SP Execution window. See Figure 2.

Figure 2. SQLMaster SP Execution Feature

When you run an SP from the TreeView, the application assesses all input and output parameters and builds an editable interface to work from. You will be able to play "what if..." games with the input parameters. Should the SP produce a recordset output, this will be displayed beneath the input pane.



Stage 5: Using Advanced Database Objects
Once we move into this stage we are really beginning to work with SQL features that will not allow us to return to the DBF version. They can also become quite DBMS specific. Views, for example, are relatively universal but triggers are not. And where a feature does appear to be available, be careful that the syntax of its use is correct for the DBMS you are using. Do not assume anything works: test everything. For this stage of our conversion we will just consider some of the more common features:
- Views
- Triggers
- Users
- Roles
- Rules

There are more advanced features such as OLAP cubes, data mining models and provider-specific functionality, but for now this is enough. We will try to remain relatively provider independent.

Views

Views are second in importance to us after tables themselves. In the DBF world we use relations to control linked files, and the runtime notification that exists to support this is substantial. These linked files are often found in browsers as expression columns (to related tables) or as simple lookup values in calculation loops. Although VO2ADO and our SOVO SQL Classes give us related tables, the reality is that a view is a far superior mechanism for displaying data. For starters, the view can be treated just like a normal server, with a recordset behind it. Secondly, like a real table, the view can be used to update and insert data. Thirdly, it reduces the complexity of browsers by allowing us to replace slow expression columns with real fields. A view is just a virtual table made up of fields from one or more tables. Here is a simple example:
CREATE VIEW AccountGroups AS
SELECT a.GroupName, b.Acct, b.AcctName
FROM ACCTGRP a
LEFT JOIN ACCCODES b ON a.GroupNo = b.GroupNo

When you create a view, you are not actually manipulating, creating or changing tables or data. The SELECT statement component is stored in the database and waits for another action to query the view. The view might be just a collection of specific columns from one table, filtered by WHERE clauses, or it can be a complex SELECT with joins from several tables. The benefit is that views can then be treated just like tables, and this helps reduce the complexity of further data manipulation. Again, a final enhancement will see the SOVO SQL classes managing views almost invisibly. They will be instantiated as SQLServer objects with hard-coded references to the view. It will appear as if it were a pre-coded SELECT statement (which, it could be argued, it is). Basically you will just define and use views directly in place of other "regular" servers. Remember that the fields of a view are likely to be very limited. There are three important uses: (1) to create pre-scoped or pre-filtered recordsets (i.e. normalized views), (2) as display servers for browsers, and (3) in place of relation servers. Once you apply views you will not be able to go back to DBF, and we have no intention of mimicking view behaviour in DBF because of the substantial complexity possible. It is not reasonable to attempt to depict all the different types of joins and unions.
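To illustrate uses (2) and (3), a view can pre-join a transaction table to its lookup table and so replace a relation server outright. The table and column names here are hypothetical:

```sql
-- Hypothetical tables: INVOICE (InvNo, InvDate, Amount, CustCode)
--                      CUSTOMER (CustCode, CustName)
CREATE VIEW InvoiceList AS
SELECT i.InvNo, i.InvDate, i.Amount, c.CustCode, c.CustName
FROM INVOICE i
LEFT JOIN CUSTOMER c ON i.CustCode = c.CustCode
```

A browser bound to InvoiceList can sort and display CustName as a real field, where a DBF solution needed a relation and a slow expression column.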

Triggers

I need to confess a bias here: I don't like triggers. A trigger is nothing more than a database event. For example, here is a simple trigger:
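A minimal sketch of such a trigger, assuming the T_AGENTS table discussed below and using RAISERROR to surface the warning (the trigger name and message text are illustrative only):

```sql
CREATE TRIGGER trg_Agents_Update ON T_AGENTS
FOR UPDATE
AS
   -- fires once for every UPDATE statement against T_AGENTS
   RAISERROR ('T_AGENTS has been modified', 10, 1)
```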


This trigger will be assessed for every update on the T_AGENTS table and will in fact pop up a dialog box. Of course, triggers can be used to automatically update other data elements, warn of exceeded values or validate entered data. Triggers can even be nested. On the face of it they seem harmless enough, and indeed they are, but they introduce code management complexities for us. My objection comes from two sources: (1) they are not a feature common to all DBMSs, and (2) they do not form part of the application code base. What I see as a serious issue is the ability to debug and watch a process. Normally you would be able to step through all operational aspects of your code, even line by line. Where your code involves event handling there is still an avenue of visibility into the process (usually from event handlers or the dispatch). Enter triggers. Now we have a problem, because when the trigger fires and causes some other action, we have no way of seeing this in our debugger windows or with other regular tools. It becomes a nightmare to debug or trace cause and effect from fired triggers. I would contend that you are simply better off coding for the circumstance the trigger is to be employed for. If you find you must use triggers (or you have an outstanding use for one), make sure your classes contain comments that properly describe what may be happening right under your nose.

Users

To be completed.

Roles

To be completed.

Rules

To be completed.

Stage 6: Data Normalization
Now that we understand the power of views and can employ inner and outer joins, it is time to revisit our tables. Having moved our data from a DBF environment, it is very likely we have redundant data in our tables. Take for example our Customer table. The table is organised on a 6-digit unique numerical code and has two main orders: code order for lookups and alphabetic order (on the names) for browser listings. Now consider the invoice table. It has an invoice number as the primary sort order but also a date column and the customer code column to perform lookups. What if I want to present the data in customer alphabetic order? Unless we incorporate the customer name into the invoice table we simply can't, and this throws up a range of serious limitations. All I can do is present the data in customer code order, which generally has nothing to do with an alphabetic sort. Thus with our DBF tables we often had a redundant column from a lookup table in order to solve our ordering problems. The existence of such duplicated data introduced all sorts of problems: change management of the underlying code, wasted space, duplication of data, complex referential auditing provisions and so on. Fortunately the problem does not exist with SQL, which allows us to normalize our tables.

So what do we mean by "normalization"? Let me quote from Microsoft Books Online:

Normalizing a logical database design involves using formal methods to separate the data into multiple, related tables. A greater number of narrow tables (with fewer columns) is characteristic of a normalized database. A few wide tables (with more columns) is characteristic of a non-normalized database.

Reasonable normalization often improves performance but it is also very easy to overdo it. When useful indexes are available (and generally we have these when coming from a DBF background), the DBMS query optimizer is more efficient at selecting rapid, efficient joins between tables.
Some of the benefits of normalization include:
- Faster sorting and index creation.
- Narrower and more compact indexes.
- Fewer indexes per table, which improves the performance of INSERT, UPDATE and DELETE statements.
- Fewer null values and less opportunity for inconsistency, which increase database compactness.

As normalization increases, so do the number and complexity of joins required to retrieve data. Too many and you risk making the program excessively complex to read and maintain. Although slightly contradictory in some respects, here are Microsoft's six rules for data normalization. See if you can apply most of them:

Rule 1. There should be a one-to-one relationship between instances of an entity and the rows of the table
This is just fancy lingo for saying that you should only ever see one row per entity. Take a stock table:

Table: Stock (stock table... the "entity" table)
Fields: st_code, st_name, st_status, st_price, st_eff_date
Keys: st_code, st_status

Now, the table holds current prices (st_status = 'C') and future prices (st_status = 'F'), so the raw table data has multiple entries for the same stock code and thus violates the first rule of normalization. A normalised view of the current prices would look like this:
CREATE VIEW StockCurrent AS
SELECT st_code, st_name, st_price, st_eff_date
FROM Stock
WHERE st_status = 'C'

Rule 2. A field in a table shall have the same meaning for all rows
All our transaction DBFs broke this rule! Consider the above stock table again. Here the price might mean a current price or a future price, depending on the status flag: one field with two potential meanings. Our transaction tables did something similar. We used a type field to indicate whether the transaction value was a credit or a debit. The better thing to do is separate the values into credit and debit fields, or again provide views scoped to the required condition. Remember, though, that the non-normalized table is less efficient because of all the additional processing logic required to interpret the appropriate value of the field for each row.

Rule 3. Each table should represent only one entity type
How often do you see status flags used to separate entity types within a table? For example, I have seen many customer tables that double as delivery address tables. Not only is this confusing, but views are constantly needed to resolve the correct relationship of data. Again you cause performance issues by combining the two entity types into the one table. Try to separate them.

Rule 4. Multiple instances of an entity must be represented as multiple rows in a table
This sounds ordinary but it is commonly violated. The first violation is often found in archiving processes. In order to keep tables to reasonable sizes they are often split into year groups. This makes the representation of a consolidated account quite difficult in DBFs, but with SQL the problem is solved with union views. Even so, it is a complexity best avoided. The second most common violation is having multiple columns for multiple values. Take for example our invoice table: what if we wished to express three possible contact parties for the delivery? We might be tempted to use three columns to contain lookup codes. The better way is to have a separate table to hold the relationship between invoice and delivery parties.
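For the archive case in Rule 4, a union view presents year-split tables as one logical table again. Assuming hypothetical TRANS2008 and TRANS2009 tables of identical structure:

```sql
CREATE VIEW TransAll AS
SELECT * FROM TRANS2008
UNION ALL
SELECT * FROM TRANS2009
```

UNION ALL is appropriate here because the yearly tables cannot contain overlapping rows, and it avoids the duplicate-elimination cost of a plain UNION.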
Rule 5. Joins should be based on primary and foreign key equality
This is more an issue for performance and is relatively self-evident. Base your join on a primary key in one table and then use the second table to satisfy WHERE conditions.
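A join that follows Rule 5, using hypothetical INVOICE and CUSTOMER tables where CustCode is the primary key of CUSTOMER and a foreign key of INVOICE:

```sql
SELECT i.InvNo, i.Amount, c.CustName
FROM INVOICE i
JOIN CUSTOMER c ON c.CustCode = i.CustCode  -- primary key = foreign key
WHERE c.CustName LIKE 'A%'                  -- second table satisfies the WHERE
```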


Rule 6. Make sure keys are linked correctly
Again, this is fairly self-evident. Especially if the join is a complex one involving three or more tables, ensure that your keys do not contain circular references. The keys must also exist, and you need to ensure that where field names differ (between the primary and foreign key) they agree in context.

If you got through to Stage 6 then you have reached your coding Nirvana: your conversion is complete. It is a complex process, but one made infinitely easier by having a class library to work from and a powerful support tool in SQLMaster. Stage 1 is obviously the most critical, so don't skimp: review your code ruthlessly and test everything as you go. The outcome will be worth it: SQL conversion and all the advanced features that will certainly impress your client. You are now ready for the last tutorial, installing the SOVO SQL Class Library and applying the classes to your own code. You will probably come back to this tutorial frequently, so keep it handy.


Geoff graduated as a computer engineer from the UNSW in 1981. He worked as an Army Officer and Computer Engineer for the Dept of Defence for 10 years, managing several of their mainframe and distributed computing solutions. This provided a solid background in data management and communications. He left the Army in 1991 to pursue his own business interests and for several years operated a food manufacturing business employing 20 people. Dissatisfaction with the software available for business management, inventory and financial management led him to write his own. Eventually the software interests grew to take over the manufacturing interests, and so in 1999 he joined with Bruce Boddington to form Software Objectives and concentrate solely on software. His personal business experiences offer the team substantial real-world credentials. Geoff runs the Melbourne VO User Group and organises the annual Australian Visual Objects training conference. He can be contacted via

