SQL Server 2000 - Completed

                                     “For Windows XP”
Shortcut Keys:-
Windows Key + U:-
Opens the Utility Manager, which provides:
   i.     the On-Screen Keyboard.
   ii.    the Narrator.
   iii.   the Magnifier.

Control + Alt + Arrow Keys:-
   i.     Control + Alt + Up Arrow restores the desktop to its normal orientation.
   ii.    Control + Alt + Down Arrow rotates the desktop 180 degrees.
   iii.   Control + Alt + Right Arrow rotates the desktop 90 degrees.
   iv.    Control + Alt + Left Arrow rotates the desktop 270 degrees.

Alt + Tab:-
Alt + Tab switches between open windows (multitasking).

Windows Key + R:-
Opens the Run dialog box.

Windows Key + L:-
Locks the computer or lets another user log on while your programs and files remain open.

F2:-
Used to rename the selected item, folder or file.

F3 or Windows Key + F:-
Used to open the Search window.

F4:-
Moves focus to the address bar in Windows Explorer and Internet Explorer.

F5:-
Refreshes the active window (same as Right click → Refresh).

Spacebar:-
Selects or clears the checkbox if the active option is a checkbox (same as a mouse click).

Shift + Delete:-
Deletes the selected item permanently without placing it in the Recycle Bin.

Alt + Enter:-
Displays the properties of the selected folder or file.

Control + Esc:-
Opens the Start menu (equivalent to the Windows Key).

Windows Key + Break:-
Displays the System Properties dialog box (properties of My Computer).

Windows Key + D:-
Used to show the desktop.

Windows Key + M:-
Used to minimize all windows.

Windows Key + Shift + M:-
Used to restore all minimized windows.

Windows Key + E:-
Opens Windows Explorer.

Alt + F4:-
Closes the active item or quits the active program.

To make a directory in DOS, the MD command is used.
D:\>MD Temp   (Temp is the directory name)

To go inside a directory, the CD command is used.
D:\>CD Temp → D:\Temp>

To remove a directory, the RD command is used.
D:\>RD Temp

To go back one level, the CD.. command is used.
D:\Temp>CD.. → D:\>

To jump to the root directory, the CD\ command is used.
D:\Temp>CD\ → D:\>

To see the list of files and directories, the DIR command is used.
D:\>DIR

Setup process for formatting a computer:-

   1. Press Delete and enter CMOS setup
   2. Set the first boot device to CD or DVD
   3. Press F10 to save and exit CMOS setup
   4. Start the computer from the bootable CD or DVD
   5. Partitioning (if required)
   6. Formatting
   7. Install the Operating System

Partition Types (File Systems):-
Windows XP supports two types of partition:-
   1. FAT32 (File Allocation Table, 32-bit)
   2. NTFS (New Technology File System)
                                      “Visual Basic.Net”

Visual Basic is a visual, object-oriented, event-driven programming language.
VB.Net is used to develop Windows-based applications.

Types of Application:-

   1. VB.Net is used for desktop applications
   2. ASP.Net is used for web applications
   3. Database applications (desktop or web)


Windows is an operating system.

MS Word:-
MS Word is a word processor program.

Excel:-
Excel is a spreadsheet program.

Power Point:-
Power Point is a presentation program.

Photoshop:-
Photoshop is a photo editor.

Flash:-
Flash is 2D animation software.
                         “Structured Query Language (SQL)”
SQL Server and Access are RDBMS (Relational Database Management System) programs.

What is a Database?
Answer: An organized collection of information is called a database.

What is a DBMS?
Answer: The software that manages (selects, inserts, updates, or deletes) information in a
database is called a DBMS.

At least one data file and one log file are needed to create a database; the database can span
multiple data files and multiple log files. Database files are of three types:

Primary Data File (Extension .MDF):-
A database must have exactly one .MDF file. The primary data file contains the database
system tables and user tables.

Secondary Data File (Extension .NDF):-
A database can have up to 32,766 .NDF files. The secondary data files contain the user and
system data not stored in the primary data file. Secondary data files are optional.

Log File (Extension .LDF):-
A database can have up to 32,766 .LDF files. The log file contains the transaction log.

Note that once an operation is finished, its effect remains in the data cache, but the statement
is written to the transaction log on disk.

User-Defined Filegroups:-
Secondary data files can be placed in user-defined filegroups. User-defined filegroups may
contain user tables and are created during database creation or modification.

Size and Growth Options:-
In SQL Enterprise Manager, you can only define the file growth in MB or in percent.
Never allow a file to have unlimited growth. It’s better to give a limit, even if it’s a very
high limit. With an upper limit, you have more efficient space management.

Defining filegrowth in percent is always a good idea.

First of all, every modification in a RDBMS is (or may be) a transaction. A transaction is
defined by the following properties (referred to as the ACID properties):

Atomicity:-
All operations in a transaction are atomic, meaning if one operation fails, the whole
transaction fails.

Consistency:-
Before the transaction, the database was in a consistent state. After the transaction, it is back
in a consistent state.

Isolation:-
A running transaction is isolated from the outside.

Durability:-
Once a transaction is validated (committed), the effects of the transaction remain in the
database forever. The transaction log provides this property.
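A minimal T-SQL sketch of atomicity and durability (the Accounts table and its columns are illustrative, not from the source):

```sql
-- Both updates succeed together or fail together.
Begin Transaction
Update Accounts Set Balance = Balance - 100 Where AccountID = 1
Update Accounts Set Balance = Balance + 100 Where AccountID = 2
If @@Error <> 0
    Rollback Transaction   -- atomicity: undo every change in the transaction
Else
    Commit Transaction     -- durability: the transaction log makes the commit permanent
```

Until Commit or Rollback runs, locks keep other connections from seeing the half-finished work, which is the isolation property.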

Syntax of Create Database:-
Create Database database_name
[ On [ ( [ Name = logical_file_name, ]
Filename = 'physical_file_name'
[, Size = size ]
[, Maxsize = { max_size | Unlimited } ]
[, Filegrowth = growth_increment ] ) [,…n] ] ]
[ Log On
[ ( [ Name = logical_file_name, ]
Filename = 'os_file_name'
[, Size = size ]
[, Maxsize = { max_size | Unlimited } ]
[, Filegrowth = growth_increment ] ) [,…n] ] ]

Create Database Statement:-
Create Database without specifying path and secondary data file:-
Create Database Records

Create Database with path and secondary file:-
Create Database FakeDB
On
(Name = FakeDB_Data1,
FileName = 'D:\FakeDB_Data.mdf',
Size = 100MB,
MaxSize = 1GB,
Filegrowth = 10MB),
(Name = FakeDB_Data2,
FileName = 'E:\FakeDB_Data2.ndf',
Size = 50MB,
Maxsize = 1GB,
Filegrowth = 10MB)
Log on
(Name = FakeDB_log,
FileName = 'D:\FakeDB_log.ldf',
Size = 50MB,
Maxsize = 2GB,
Filegrowth = 10%)

Altering a Database:-
Altering a database means modifying its file structure by adding, removing, or modifying
files, or by changing its name and size.

Syntax of Alter Database:-
Alter Database db_name { Add File <filespec> [,…n]
[ To Filegroup filegroup_name ] | Add Log File <filespec> [,…n]
| Remove File logical_file_name | Add Filegroup filegroup_name
| Remove Filegroup filegroup_name | Modify File <filespec>
| Modify Name = new_dbname | Modify Filegroup filegroup_name
{ filegroup_property | Name = new_filegroup_name } }

<filespec> ::= (Name = logical_file_name [, NewName = new_logical_name] [, Filename =
'os_file_name'] [, Size = size] [, Maxsize = { max_size | Unlimited }]
[, Filegrowth = growth_increment])

Modifying a File:-
The following example increases the size of the primary file of the FakeDB database:
Alter Database FakeDB
Modify File
(Name = FakeDB_Data1,
Size = 200MB)

Changing a Database Name:-
You can change a database name very easily. Two methods exist to do so:
  1. Alter Database
  2. sp_Renamedb

The following example renames the FakeDB database to TestDB:
sp_renamedb 'FakeDB', 'TestDB'

With the Alter Database Statement:
Alter Database FakeDB
Modify Name = TestDB

It is not possible to rename a database through SQL Enterprise Manager.

The following example modifies, in one statement, the file’s logical name, maxsize and
filegrowth increment:
Alter Database FakeDB
Modify File
(Name = FakeDB_data1,
Newname = FakeDB_primary,
Maxsize = 1.5GB,
Filegrowth = 50MB)

The following example adds a file to the FakeDB database:
Alter Database FakeDB
Add File
(Name = FakeDB_Data3,
Filename = 'D:\FakeDB_data3.ndf',
Size = 100MB,
Maxsize = 1GB,
Filegrowth = 10MB)

Removing a File:
You can remove a file if and only if it is empty. You can empty a file with DBCC Shrinkfile
and the Emptyfile option. The following statements empty the third file of FakeDB and
remove it:

DBCC Shrinkfile ('FakeDB_data3', Emptyfile)

Alter Database FakeDB
Remove File FakeDB_data3

The following is the matching Alter Database statement that will set the FakeDB_data2
to the default filegroup:
Alter Database FakeDB
Modify filegroup FakeDB_data2 Default

The following statement puts the SalesFG filegroup in read-only mode, then puts it
back into read-write mode:
Alter Database SalesFG
Modify Filegroup SalesFG Readonly

Alter Database SalesFG
Modify Filegroup SalesFG Readwrite

Note: - Only secondary files (.ndf) can be read-only.

To turn Auto Shrink on:-
Alter Database MyfirstDB
Set Auto_Shrink on

To turn Auto Shrink off:-
Alter Database MyfirstDB
Set Auto_Shrink off

Shrinking a File:-
For example, if you have a 100 MB file in which only 50 MB are used, and you want to
release 40 MB, just run the following line (60 is the target file size in MB):
DBCC Shrinkfile (Myfile, 60)

As you can see, only the set options are listed. If you want to query a specific option
value, the Databasepropertyex function is the best way to do it. The result of the following
statement is 1, since the autoshrink option is set:-

Select Databasepropertyex ('Northwind', 'IsAutoShrink')

The following examples show you which statements to run to execute common
administrative tasks:

Restrict a database to one user:-
Alter Database dbname set Single_user

Set a Database in Read-only Mode:-
Alter Database dbname set Read_only

To Attach a Database:-
sp_attach_db [ @dbname = ] 'dbname',
        [ @filename1 = ] 'filename_n' [ ,...16 ]

In the above command:-

       [@dbname =] 'dbname' is the database name. dbname is nvarchar(128), with a default
        value of NULL.
       [@filename1 =] 'filename_n' is a database file name. filename_n is
        nvarchar(260), with a default value of NULL.
       Up to 16 file names can be specified.

Attach database without log file:-
Sp_attach_db 'Northwind', 'E:\Program Files\Microsoft SQL

Attach database with log file:-
Sp_attach_db 'Northwind', 'E:\Program Files\Microsoft SQL
Server\MSSQL\Data\Northwind_data.mdf', 'E:\Program Files\Microsoft SQL

Note: - If the log file does not exist, it will be created automatically.

To Detach a Database:-
sp_detach_db [ @dbname = ] 'dbname'
       [ , [ @skipchecks = ] 'skipchecks' ]
In the above command:-

      [@dbname =] 'dbname' is the database name. 'dbname' is nvarchar(128), with a default
       value of NULL.
      [@skipchecks =] 'skipchecks' indicates whether UPDATE STATISTICS is run or
       skipped. 'skipchecks' is nvarchar(10), with a default value of NULL. If 'skipchecks' is
       true, UPDATE STATISTICS is skipped. If 'skipchecks' is false, UPDATE STATISTICS
       is run.

Sp_detach_db 'Northwind'

To find out the database Information:-
   1. Sp_helpdb
   2. sp_helpdb Northwind

                             “Database Logical Modeling”
The Entity / Relationship Model:-
It has become very popular because an ER model is simple to create and read, and can be
used directly to create a relational model and transform its elements into database elements.
An ER model is made of three different elements:

   1. Entity (Table)
   2. Relationships
   3. Attribute (Column)

Entity (Table):-
Represents real-world concepts, such as places, objects, events, persons, orders,
customers, and so on.

Relationships:-
Represent associations between objects, such as the fact that a customer may place an
order.

Attribute (Column):-
Describes the entity, such as the invoice date or the customer first name.

Normalization:-
The goal of normal forms is to remove redundant data from relational tables by splitting the
tables into smaller tables, without losing any data. There are three normal forms:

First Normal Form:-
A relational table is in first normal form if:
   1. Every table has a primary key.
   2. Every column is atomic (the smallest unit, not further decomposable).
   3. There is no repeating group of columns.

Second Normal Form:-
The second normal form states that the table must be in first normal form and that every
non-key field must depend on the entire primary key (no partial dependencies).

Third Normal Form:-
A relational table is in third normal form if:
   1. It is in 2NF.
   2. Every non-key column is functionally dependent only on the primary key.
   3. Look for calculated columns, which normally destroy the third normal form.
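As an illustrative sketch of these rules (the table and column names are hypothetical), storing the customer name inside an orders table makes it depend on CustomerID rather than on the key OrderID, which breaks third normal form; splitting the table removes the redundancy:

```sql
-- Before: CustomerName repeats on every order placed by the same customer.
-- Create Table OrdersFlat
-- (OrderID int Primary Key, CustomerID int, CustomerName varchar(40), OrderDate datetime)

-- After: customer attributes live in one place, referenced by key.
Create Table CustomersN
(CustomerID int Primary Key,
 CustomerName varchar(40))

Create Table OrdersN
(OrderID int Primary Key,
 CustomerID int References CustomersN (CustomerID),
 OrderDate datetime)
```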

Denormalization is the process of deliberately breaking the rules of normalization in the hope
that you will gain some performance benefits in needed areas.

Data Integrity (Reliable):-
Enforcing data integrity ensures the quality of all data in the database. Data integrity falls
into four categories:

Entity Integrity:-
Defines a row as a unique entity for a particular table.

Domain Integrity:-
Verifies that data entered in a column is accurate and valid. Domain integrity is defined by:
    1. The datatype and length
    2. The null value acceptance
    3. Check constraints and rules
    4. The default value

Referential Integrity:-
Preserves the defined relationships between tables when records are entered or deleted
(foreign key and check constraints).

User-Defined Integrity:-
Allows you to define specific business rules that do not fall into one of the other integrity
categories. User-Defined Integrity is defined by:

   1. Column-level and table-level constraint
   2. Stored Procedure
   3. Triggers

Relationships:-
One-To-One Relationship:-
For every record in one table there exists only one record in another table.

One-To-Many Relationship:-
For one record in the parent table there exist multiple records in the child table. To create this
type of relationship, we need a key in the parent table that is referenced by a field in the child
table; this referencing field is called a foreign key.

Many-To-Many Relationship:-
Many-to-many relationships between tables are accommodated in databases by means of
junction tables. A junction table contains the primary key columns of the two tables you want
to relate. You then create a relationship from the primary key columns of each of those two
tables to the matching columns in the junction table. In the pubs database, the titleauthor
table is a junction table.
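A minimal sketch of such a junction table, modeled on the pubs titleauthor example (the _demo names and column types are assumptions):

```sql
-- Each row links one author to one title; the composite primary key
-- prevents the same pair from being entered twice.
Create Table titleauthor_demo
(au_id varchar(11) References authors (au_id),
 title_id varchar(6) References titles (title_id),
 Constraint PK_titleauthor_demo Primary Key (au_id, title_id))
```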

 Constraints:-
Planning tables requires identifying valid values for a column and deciding how to enforce
the integrity of the data in that column:

Primary Key Constraints:-
A table usually has a column or combination of columns whose values uniquely identify each
row in the table. A table can have only one primary key, and columns that participate in the
primary key cannot accept null values.

Composite Primary Key:-
When more than one column makes up the primary key, it is called a composite primary key.

Foreign Key Constraints:-
A foreign key is a column or combination of columns used to establish and enforce a link
between the data in two tables. A foreign key constraint can be linked to a primary key
constraint or unique constraint in another table. A foreign key can contain null values.

Unique Constraints:-
You can use unique constraints to ensure that no duplicate values are entered in specific
columns. Multiple unique constraints can be defined on a table. Unique constraints allow null
values.

Check Constraints:-
Check constraints enforce domain integrity by limiting the values that are accepted by a
column. It is possible to apply multiple check constraints to a single column.

Default Constraints:-
Each column in a record must contain a value, even if that value is null. A column can have
only one default constraint.

Types of Constraints:-
Column-Level Constraints:-
A column-level constraint is specified as part of a column definition and applies only to that
column.

Table-level Constraints:-
A table constraint is declared independently from a column definition and can apply to more
than one column in a table. A table constraint must be used when more than one column must
be included in a constraint. (It is good practice to use table-level constraints.)

Syntax of Select Statement:-
SELECT statement ::=
  < query_expression >
  [ ORDER BY { order_by_expression | column_position [ ASC | DESC ] }
     [ ,...n ] ]
  [ COMPUTE
     { { AVG | COUNT | MAX | MIN | SUM } ( expression ) } [ ,...n ]
     [ BY expression [ ,...n ] ] ]
  [ FOR { BROWSE | XML { RAW | AUTO | EXPLICIT }
         [ , XMLDATA ]
         [ , ELEMENTS ]
         [ , BINARY base64 ] } ]
  [ OPTION ( < query_hint > [ ,...n ] ) ]

< query expression > ::=
  { < query specification > | ( < query expression > ) }
  [ UNION [ ALL ] < query specification > | ( < query expression > ) [...n ] ]

< query specification > ::=
  SELECT [ ALL | DISTINCT ]
     [ { TOP integer | TOP integer PERCENT } [ WITH TIES ] ]
     < select_list >
  [ INTO new_table ]
  [ FROM { < table_source > } [ ,...n ] ]
  [ WHERE < search_condition > ]
  [ GROUP BY [ ALL ] group_by_expression [ ,...n ]
     [ WITH { CUBE | ROLLUP } ] ]
  [ HAVING < search_condition > ]

Simple Select Statement (Select all columns):-
Use Northwind
Select * from Customers

Select one or more columns:-
Use Northwind
Select CustomerID, CompanyName, ContactName from Customers

Order By Clause:-
The order by clause allows you to order the table by any of the columns. When an order by
clause is included, the default is to display the data in ascending order. We can alter this by
adding a descending (Desc) option to the order by clause.

Order by in Ascending Order:-
Select CustomerID, CompanyName, ContactName from Customers
Order By ContactName Asc

Order by in Descending Order:-
Select CustomerID, CompanyName, ContactName from Customers
Order By ContactName Desc

Note: - By default, rows are usually returned in ascending order of the primary key field, but
this order is not guaranteed without an order by clause.

Where Clause:-
The where clause is used to limit the rows that are included in the result set by specifying
certain criteria that each row must meet. Each row in the table is run through the filter, and
only rows that match the criteria are allowed through the filter.

Select * from Customers
Where Country = 'USA'

We can include multiple criteria, for example a where clause and an order by clause together:
Select * from Customers
Where Country = 'USA'
Order By ContactName

Specifying Aliases:-
There are three different ways that an alias can be specified:
   1. CustomerID 'Customer'
   2. CompanyName as 'Company'
   3. 'Contact' = ContactName

Using all three, we would write the following statement:-
Select CustomerID 'Customer Code', CompanyName as 'Company',
'Contact' = ContactName from Customers

Limiting the Number of Returned Results (Rows):-
There are two ways to limit the size of the result set returned from a query: Set Rowcount
and Top. The main place where Set Rowcount has an advantage over Top is reusing code.
Once Set Rowcount is set, every subsequent query returns at most that many rows, until
Set Rowcount 0 is issued to turn the limit off.

Set Rowcount:-
Set Rowcount 10
Select CustomerID, CompanyName, ContactName from Customers
Set Rowcount 0

Top:-
Select Top 10 * from Customers

Top Percent:-
Select Top 5 Percent * from orders

Aggregate Operators:-
Aggregate operators provide a summary of information in a query. The aggregate operations
that are supported in SQL Server are:
Sum, Avg, Count, Min and Max

The aggregate operators return a single (scalar) value.

Sum Operator:-
Use Pubs
Select Sum(ytd_sales) from titles

Average (Avg) Operator:-
Select Avg(Price) from titles

Minimum (Min) Operator:-
Select Min(Price) from titles

Maximum (Max) Operator:-
Select Max(Price) from titles

Count Operator (used for counting rows):-
Select Count(*) from titles

Aggregate operators also work in criteria and in joins. For example:-
Select Avg(Price), Sum(ytd_Sales) from titles
Where type = 'business'

Group By Clause:-
Whenever a non-aggregated field is included in a query together with aggregate operators, it
must be accompanied by a group by clause and listed in that clause.

Use Pubs
Select pub_id, total = Sum(ytd_Sales) from titles
Group By pub_id

Having Clause:-
The having clause sets conditions on the group by clause, similar to the way the where clause
interacts with select. The having clause occurs only with group by. The having syntax is
similar to the where syntax, except having can contain aggregate functions. The having
clause can reference any of the items that appear in the select list.

This query finds publishers whose total ytd_Sales are greater than 40,000:

Use Pubs
Select pub_id, total = Sum(ytd_Sales) from titles
Group by pub_id
Having Sum(ytd_Sales) > 40000

The group by and having clause can also work with criteria and joins.

Union Command:-
The union command is used to join two select queries. It is placed between the two queries.
The results can include or exclude duplicates (Union All keeps duplicates). Both queries that
are included in the union must have the same number of columns.
The following query will return a list of cities from both customers and employees:
Select C.City
From Customers C
Union
Select E.City
From Employees E

 Distinct Command:-
The Distinct command is also used to remove duplicates from a single result set in the same
manner that union removes them from two result sets. If we issue the following query:

Select Distinct City
From Employees

 Scalars:-
A Scalar, also known as a literal, is essentially a constant value. This may be a simple result
that returns some value, such as a name, to the user:

Select 'Steve' As Name

Column-level constraint definition at table creation:
Create table tablename
(Columnname datatype [Constraint constraintname] columnconstrainttype [,…n])

Table-level constraint definition at table creation:
Create table tablename
(Columnname datatype [,…n],
[Constraint constraintname] tableconstrainttype)

Column-level constraint definition at table modification:
Alter table tablename
Add columnname datatype [constraint constraintname] columnconstrainttype [,…n]

Table-level constraint at table modification:
Alter table tablename
[With check | with nocheck] Add [Constraint constraintname] tableconstrainttype

With check and foreign key constraints, specifying With Check (this is the default) tells
SQL Server to check existing values. Specifying With Nocheck tells SQL Server not to check
existing values but only to check future inserts and updates.

To find a constraint name:-
Sp_helpconstraint tablename, for example: Sp_helpconstraint authors

To drop any constraint, use the following statement:-
Alter table tablename
Drop [constraint] constraintname

Enable and Disable Constraint:-
You can disable check and foreign key constraints during massive inserts or updates. To
disable one, many, or all constraints, use the following syntax:

Alter table tablename
Nocheck constraint {All | Constraintname [,…n]}

To Reenable one, many or all constraints, use the following syntax:-
Alter table tablename
Check constraint {All | Constraintname [,…n]}
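For example, to disable every check and foreign key constraint on the authors table before a bulk load, and re-enable them afterwards:

```sql
Alter table authors
Nocheck constraint All

-- ... massive inserts or updates run here ...

Alter table authors
Check constraint All
```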

Define the Default constraint at table creation at column level:-
Create table tablename
(Columnname datatype [null | not null]
[constraint constraintname] Default expression [,…n])

Defining the Default constraint for an existing table:
Two types of situations can happen:
   1. You add a default constraint to an existing column.
   2. You add a new column with a default constraint.

The syntax to add a default constraint to an existing column is the following:
Alter table tablename
Add [constraint constraintname] Default expression for columnname
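For example, a default of 'USA' could be added to an existing Country column (the constraint name DF_Customers_Country is illustrative):

```sql
Alter table Customers
Add Constraint DF_Customers_Country Default 'USA' for Country
```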

The syntax for adding a new column with a default constraint is as follows:
Alter table tablename
Add columnname datatype [null | not null]
[Constraint constraintname] Default expression [with values] [,…n]

Defining a check constraint at table creation:-
Column-level definition is:
Create table tablename
(columnname datatype [constraint constraintname] check [not for Replication]
(logical_expression) [,…n])

At table level, the definition is:
Create table tablename
(columnname datatype [,…n],
[constraint constraintname]
check [not for replication] (logical_expression))

Defining a check at Table Modification:-
You can add a check constraint at column level when defining a new column for the table
like this:-
Alter table tablename
Add columnname datatype [constraint constraintname] check [Not for Replication]
(logical_expression) [,…n]

To define a new check constraint for an existing column, use the following syntax:
Alter table tablename
[with check | with nocheck]
Add [constraint constraintname]
Check [Not for Replication] (logical_expression)
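For example (the constraint name and condition are illustrative), a check constraint can reject hire dates earlier than 1990; because With Check is the default, existing rows are verified too:

```sql
Alter table Employees With Check
Add Constraint CK_Employees_HireDate
Check (HireDate >= '01/01/1990')
```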

Defining a Foreign key constraint at Table Creation:-
Column-level Foreign key constraint definition at table creation:
Create table tablename
(columnname datatype [constraint constraintname] [Foreign key]
References ref_table [(ref_column)]
[on Delete {Cascade | No Action}]
[on Update {Cascade | No Action}]
[Not for Replication] [,…n]

Table-level Foreign key constraint definition at table creation:
Create table tablename
(columnname datatype [,…n], [constraint constraintname]
Foreign key [(column [,…n])]
References ref_table [(ref_column [,…n])]
[on Delete {Cascade | No Action}]
[on Update {Cascade | No Action}]
[Not for Replication]

Defining a Foreign Key constraint at table modification:-
Column-level foreign key constraint definition at table modification:
Alter table tablename
Add columnname datatype [constraint constraintname]
[Foreign Key]
References ref_table [(ref_column)]
[on Delete {Cascade | No Action}]
[on Update {Cascade | No Action}]
[Not for Replication] [,…n]

Table-level Foreign Key constraint definition at table modification:
Alter table tablename
[with Check | with Nocheck] Add [Constraint constraintname]
Foreign Key [(column [,…n])]
References ref_table [(ref_column [,…n])]
[on Delete {Cascade | No Action}]
[on Update {Cascade | No Action}]
[Not for Replication]
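For example (the constraint name is illustrative), a foreign key from Orders to Customers can be added to an existing table like this:

```sql
Alter table Orders
Add Constraint FK_Orders_Customers
Foreign Key (CustomerID)
References Customers (CustomerID)
On Delete No Action
```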

Joins:-
Inner Join:-
An inner join works by comparing columns in two tables and returning the requested
information if the values of the columns match.

First Method:-
Select C.CustomerID, C.CompanyName, O.OrderID, O.OrderDate
From Customers C, Orders O
Where C.CustomerID = O.CustomerID

Second Method:-
Select C.CustomerID, C.CompanyName, O.OrderID, O.OrderDate
From Customers C Inner Join Orders O On
C.CustomerID = O.CustomerID

Outer Joins:
Outer joins allow all rows from one or more tables to be included in the result set. There are
three types of outer joins:

   1. Left Outer Join
   2. Right Outer Join
   3. Full Outer Join

Left Outer Join:-
The left outer join includes all the rows from the table on the left side of the join syntax:

Select C.CustomerID, C.CompanyName, O.OrderID, O.OrderDate
From Customers C Left Outer Join Orders O On
C.CustomerID = O.CustomerID

Right Outer Join:-
The right outer join includes all the rows from the table on the right side of the join.

First Method:-
Select C.CustomerID, C.CompanyName, O.OrderID, O.OrderDate
From Customers C Right Outer Join Orders O
on C.CustomerID = O.CustomerID

Second Method (legacy outer join operator):-
Select C.CustomerID, C.CompanyName, O.OrderID, O.OrderDate
From Customers C, Orders O
Where C.CustomerID =* O.CustomerID

Full Outer Join:-
The full outer join returns all rows from both tables, matching up the rows wherever a match
can be made and placing nulls in the places where no matching row exists.

Select C.CustomerID, C.City, C.Country, E.FirstName, E.LastName
From Employees E Full Outer Join Customers C
on E.City = C.City

Comparison Operators:-
= (Equal):-
Select * from Employees
Where firstname = 'Nancy'

<> / != (Not equal to):-
Select * from Employees
Where firstname <> 'Nancy'

> (Greater than):-
Select * from Employees
Where Hiredate > '12/31/93'

< (Less than):-
Select * from Employees
Where Hiredate < '12/31/93'

>= / !< (Greater than or equal to):-
Select * from Products
Where Unitprice >= 20

<= / !> (Less than or equal to):-
Select * from Products
Where Unitprice <= 20

And:-
Select * from Employees
Where lastname = 'Davolio' and firstname = 'Nancy'

Between, And:-
Select * from Products
Where Unitprice between 4 and 10

Is Null:-
Select * from Orders
Where Shippeddate is null

Is Not Null:-
Select * from Orders
Where Shippeddate is not null

Like:-
Select * from Customers
Where Companyname like 'D%'

Not Like:-
Select * from Customers
Where Companyname not like 'D%'

Any / Some:-
Select * from Authors
Where City <> any
(Select city from Publishers)

Select * from Authors
Where City = some
(Select city from Publishers)

IsDate():-
This function is used to find whether an expression is a valid date or not.

Select isdate(title_id), isdate(Pubdate)
From titles

IsNumeric():-
This function is used to find whether an expression is numeric or not.

Select isnumeric(title), isnumeric(pub_id)
From titles

In both functions, zero (0) means not valid and one (1) means valid.

Or:-
Select * from Customers
Where region = 'SP' or region = 'BC'

Not Exists:-
This query returns customers who have placed no orders:
Select * from Customers
Where Not Exists
(Select * from Orders O Where O.CustomerID = Customers.CustomerID)

Subqueries:-
A subquery is a query embedded inside another query. There are many places a subquery can
be used:
   1. Subquery inside the select list in place of a column.
   2. Subquery in the from clause in place of a table.
   3. Subquery in the where clause in place of a column or literal qualifier.

Subqueries in the Select list:-
When a subquery is used in the select list, it returns data that takes the place of a column.
This type of subquery is called a scalar subquery since it can only return one value.

Select (Select Contactname from Suppliers where ContactTitle = 'Sales Agent') 'Agent'
From Orders O

Subqueries in the From Clause:-
When a subquery is used in the from clause, this is referred to as a derived table. An alias
must be specified.

Select C.*, O.*
From Customers C, (Select Orderid, OrderDate, CustomerID From Orders) O
Where C.CustomerID = O.CustomerID

Subqueries in the Where Clause:-
When used as a part of a qualifying statement, a subquery can return a single row, which is
used with the >, <, = and like operators. If the subquery returns multiple rows, then it would
be part of an IN Operator and take the place of a comma-delimited list.
        There is a special type of subquery called a correlated subquery that can be used in
the where clause.

Subqueries with IN (in where clause):-
Select * from Orders
Where CustomerID in (Select CustomerID from Customers Where Country = 'USA')

Correlated Subqueries:-
Use Pubs
Select * from Authors
Where 100 in (Select royaltyper from titleauthor
Where titleauthor.au_ID = authors.au_ID)

Correlated Subqueries with Comparison Operators:-
Use Northwind
Select * from Orders O
Where OrderDate = (Select Max(OrderDate) From Orders O1
Where O1.CustomerID = O.CustomerID and O1.RequiredDate < O1.ShippedDate)

Subqueries in Update and Delete Statements:-
Subqueries can be nested in Update and Delete statements.

This query updates the titles table; its subquery references the publishers table:
Update titles
Set Price = Price * 2
Where pub_id in (Select pub_id from publishers where pub_name = 'New Moon Books')

You can remove all sales records of business books with this nested query:
Delete Sales where title_id in (Select title_id from titles
Where type = 'business')

Wildcard Characters:-
% (Percent Sign):-
Matches zero or more characters in that position.

_ (Underscore):-
Matches exactly one character in that position.

[ ]:-
Matches any single character within the specified range (for example, [a-f]) or set (for
example, [abcdef]).

[^]:-
Matches any single character not within the specified range (for example, [^a-f]) or set (for
example, [^abcdef]).
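Putting the wildcards to work against the Northwind Customers table (the patterns themselves are illustrative):

```sql
-- Company names containing 'an' anywhere
Select * from Customers Where CompanyName like '%an%'

-- Contact names whose second letter is 'a'
Select * from Customers Where ContactName like '_a%'

-- Company names starting with A, B, or C
Select * from Customers Where CompanyName like '[A-C]%'

-- Company names starting with anything except A, B, or C
Select * from Customers Where CompanyName like '[^A-C]%'
```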

Insert Statement:-
This section will look at the two formats of the Insert statement that add single rows of data
to a table. The multiple-row format of the Insert command will then be presented. An Insert
statement can only affect one table at a time.

Syntax of Insert Statement:-
Insert [Into] {table_name [with (<table_hints_limited> [...n])] |
view_name | rowset_function_limited}
{ [(column_list)]
{ Values ({Default | Null | expression} [,…n]) | derived_table | execute_statement } }

Inserting a Single Row:-
Insert [Order Details]
(OrderID, ProductID, UnitPrice, Quantity, Discount)
Values (10248, 14, 20, 2, 0)

Insert into Products
(ProductID, ProductName)
Values (15, 'Chai')

Inserting a Single Row with Default and Null:-
Insert into Products
(ProductName, SupplierID, UnitPrice, Discontinued)
Values ('SQL Server Book', Null, Default, 0)

Inserting Rows using Insert….Select or Inserting Multiple Rows:-
Use Pubs
Insert into Mybooks
Select title_id, title, type
From titles

Insert into [Order Details]
Select OrderID from Orders

Inserting a Record into Another Database:-
Insert into Northwind.dbo.Shippers
(CompanyName, Phone)
Values ('Snowflake Shipping', '(503) 555-7233')

Inserting Single Row with an Identity Field:-
An identity column is a special column in SQL Server that holds an integer value (Tinyint,
Smallint, Int or Bigint) that is automatically incremented by SQL Server with each insert.
Only one column in a table can be defined as an identity column.
        Identity_Insert is a property that allows a programmer to disable the automatic
generation of identity values and insert explicit values instead. The syntax is as follows:

Set Identity_Insert <table_name> [on | off]

Set Identity_Insert Categories on
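Sketching the idea against Northwind's Categories table (CategoryID is its identity column):

```sql
Use Northwind
-- Allow explicit values in the identity column
Set Identity_Insert Categories on
Insert Categories (CategoryID, CategoryName)
Values (99, 'Test Category')
-- Turn the option back off when done
Set Identity_Insert Categories off
```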

 Update Statement:-
This command allows the user to specify that data in a specific table be changed to new
values. The Update statement can only affect one table at a time, though multiple rows may
be involved.
Syntax of Update Statement:-
Update {table_name | view_name}
Set {Column = Expression [,…n]}
[From <table_list>]
[Where <Search_Condition>]

Update All Rows:-
Update Products
Set UnitPrice = 10

Update a Set of Rows:-
Update Products
Set UnitPrice = UnitPrice + 5
From Suppliers
Where Products.SupplierID = Suppliers.SupplierID
And Suppliers.CompanyName = 'Tokyo Traders'

Updating Multiple Columns:-
Update Products
Set UnitPrice = UnitPrice - 5, UnitsOnOrder = 0, Discontinued = 1

Temporary Update (Example of transaction):-
It will update the value for temporary period before Rollback transaction:

Begin transaction
Update [Order Details]
Set Quantity = Quantity * 10
Select * from [Order Details]
Rollback transaction

 Delete Statement:-
This command removes rows from a table in the same way that update can be used to modify
the data in a row. Only a single table can be directly affected.

Syntax of Delete Statement:-
Delete [From] {table_name [with (<table_hint_limited> [,…n])]
| view_name | rowset_function_limited}
[From {<table_source>} [,…n]]
[Where {<search_condition> | {Current of {{[Global] cursor_name} |
cursor_variable_name}}}] [option (<query_hint> [,…n])]

Delete All Rows:-
Delete “Order Details”

Deleting a Set of Rows:-
Delete [Order Details]
Where OrderID = 10248

Temporary Delete (Example of transaction):-
It will delete rows for temporary period before Rollback transaction:

Begin transaction
Delete [Order Details]
Select * from [Order Details]
Rollback transaction

 Views:-
A view is essentially a named select statement. It acts as a table, but does not contain any
data. It relies on the data stored in the underlying table. Like a table, a view can be queried,
and data can be inserted, deleted and modified through a view.
         Views can also be used to combine result sets coming from different servers.
         Views can be designed with where clause, or with specific join and column selections
to restrict the data available to users.
         A view is a pseudo table, so wherever you can use a table, you can also use a view.

Before you can create a view, consider these guidelines:-
You can create views only in the current database. However, the tables and views referenced
by the new view can exist in other databases or even on other servers.

The name must not be the same as that of any existing table or view.

You cannot associate rules or default definitions with views.

You cannot associate after triggers with views, only instead of triggers.

The query defining the view cannot include the order by, compute or compute by clauses or
the into keyword.

You cannot create temporary views, and you cannot create views on temporary tables.

Views or tables participating in a view created with the schemabinding clause cannot be
dropped unless the view is dropped or changed so that it no longer has schemabinding.

There are a few restrictions on the select clause in a view definition. A create view statement
cannot:
Include Compute or Compute by clauses.

Include an order by clause, unless there is also a Top clause in the select list.

Include the into keyword.

A view can be created only in the current database. A view can reference a maximum of
1,024 columns.
Syntax of View Statement:-
Create View [<database_name>.] [<owner>.] view_name [(Column [,…n])]
[with {Encryption | Schemabinding | View_Metadata} [,…n]]
As select_statement
[With Check option]

Updating Data:-
If the view is based on more than one table, you cannot run a Delete statement, and all the
columns referenced in the Insert and Update Statements should belong to the same
underlying table.

You cannot update, insert or delete data in a view created with the Distinct clause or using
grouping functions, or if it contains calculated columns.

Your insert, update or delete operation may fail because of a column or table constraint or
because of column properties.

Encryption Option:-
Indicates that SQL Server encrypts the system table columns containing the text of the create
view statement. In other words, if you want to protect your code, you can encrypt it with the
encryption option. Once the view is encrypted, the only way to remove the encryption is to
alter the view.

This example uses the with encryption option and shows computed columns, renamed
columns, and multiple column:

Create view Accounts (title, Advance, Amount_due)
With encryption
As
Select title, Advance, Price * royalty * ytd_sales
From titles
Where Price > 5

With Check Option:-
By default, you can update any record through a view and make it disappear, because the
where condition applied to the view does not select the data anymore. The view restricts data
access but not data updates and inserts! To avoid this situation, you can use the with check
option clause.

Create View CAonly
As
Select au_lname, au_fname, City, State
From authors
Where State = 'CA'
With Check option

 Schemabinding Option:-
The schemabinding option is useful to protect your view definition against any structure
modifications of the underlying table. Views or tables participating in a view created with the
schemabinding clause cannot be dropped unless that view is dropped or changed so that it no
longer has schemabinding. When schemabinding is specified, the select_statement must
include the two-part names (owner.object) of tables, views, or user-defined functions
referenced. If you use the schemabinding option, you have to define every column in the
select list: you cannot use the * symbol. For example, a column could be added to the table, but a column used in
the view cannot be dropped.

Create View vwProducts
With schemabinding
As
Select CompanyName, ProductName, UnitPrice
From dbo.Suppliers inner join dbo.Products
On Suppliers.SupplierID = Products.SupplierID

The encryption, check option and schemabinding options can all be removed with the help of
Alter View.

The select_statement must contain more than just non-tabular expressions, which are
expressions not derived from a table, for the view to be updatable. For example, this view is not updatable:

Create View Notable
As
Select GetDate() As CurrentDate, @@Language As CurrentLanguage,
Current_User As CurrentUser

You can use a view like this to find the sum, max and min of any field:

Create View AdvanceSummary (SumAdvance, MaxAdvance, MinAdvance)
With encryption
As
Select Sum(advance), Max(advance), Min(advance)
From titles

It first checks whether the view exists. If the view is already created, it is first dropped and
then recreated:
If exists (Select table_name from Information_schema.Views
Where table_name = 'CAonly')
Drop View CAonly
Go
Create View CAonly
With encryption
As
Select au_lname, au_fname, city, state
From authors

Alter View:-
Syntax of Alter View:-
Alter View [<database_name>.] [<owner>.]
view_name [(Column [,…n])]
[with {encryption | schemabinding | view_metadata} [,…n]]
As select_statement
[with check option]

Create View CustomerInfo
With schemabinding
As
Select CustomerID, CompanyName
From dbo.Customers

Alter View CustomerInfo
As
Select CustomerID, CompanyName, ContactName, Country
From Customers
Where Country = 'Germany'
With check option

 Scripting:-
A script is a batch of Transact-SQL statements; within it you can use the Print statement and
declare variables. There are two types of variables:
     1. Scalar Variable
     2. Table Variable

Scalar Variable:-
Use Northwind
Declare @Maxfreight money, @Minfreight money
Select @Maxfreight = (Select Max(Freight) From Orders)
Select @Minfreight = (Select Min(Freight) From Orders)
Print 'Maximum Freight is ' + Convert(Varchar(10), @Maxfreight)
Print 'Minimum Freight is ' + Convert(Varchar(10), @Minfreight)

Declare @Orderid int, @Freight Money
Set @Orderid = 10250
Select @Freight = Freight, @Orderid = Orderid
From Orders
Where Orderid = @Orderid
Print 'OrderID ' + Convert(Varchar(10), @Orderid)
Print 'Freight ' + Convert(Varchar(10), @Freight)

Table Variable:-
Declare @big table
(Orderid int, Customerid Varchar(10))
Insert into @big
Select Orderid, Customerid
From Orders
Select * From @big

Flow Control:-
Declare @Maxunitprice Money
Select @Maxunitprice = Max(UnitPrice)
From [Order Details]
If @Maxunitprice > 0
     Print 'Maximum Amount is ' + Convert(Varchar(10), @Maxunitprice)
Else
     Print 'No Amount'

 Stored Procedures:-
A stored procedure is a batch of Transact-SQL statements stored under a name and executed
as a single unit of work.
A stored procedure can be called from another stored procedure, from a client application, or
from a Transact-SQL batch to perform a predefined action.

Fast Execution:-
Stored procedures are compiled and optimized once; their execution plan is then stored in
memory (the procedure cache).

Network load reduction:-
The client application calls only the stored procedure that is executed on the server.

The stored procedure definition contains two primary components: the specification of the
procedure name and its parameters, and the body of the procedure, which contains Transact-
SQL statements that perform the procedure’s operations.

A stored procedure can have as many as 2,100 parameters, with each parameter having a
name, datatype, direction and default value.

Stored procedures in SQL Server are similar to procedures in other programming languages
in that they can:
Accept input parameters and return multiple values in the form of output parameters. Return
a status value to a calling procedure or batch to indicate success or failure.

The maximum size of a stored procedure is 128 megabytes (MB)

Create Procedure:-
Procedures can be created for permanent use or for temporary use within a session (local
temporary procedure) or for temporary use within all sessions (global temporary procedure).
Syntax of Create Procedure Statement:-
Create Proc[edure] procedure_name [;number]
[{@parameter data_type} [= default] [output]] [,…n]
[with {Recompile | Encryption | Recompile, Encryption}]
[For Replication]
As sql_statement […n]

Create Stored Procedure without any parameter:-
Create Procedure Customerinfo
As
Select * from Customers

You can create a stored procedure only in the current database.

Example of Input Parameter:-
Create Procedure get_sales_for_title
@title Varchar(80)
As
Select ytd_sales from titles
Where title = @title

Create a Stored Procedure that uses default values for parameter:-
Create Proc Pub_info
@Pubname Varchar(40) = 'Algodata Infosystems'
As
Select * from Publishers
Where Pub_name = @Pubname

Returning Data Using Output Parameter:-
Create Procedure get_sales
@title Varchar(80), -- This is the input parameter
@ytd_sales int output -- This is the output parameter
As
Select @ytd_sales = ytd_sales
From titles
Where title = @title

--Declare the variable to receive the output value of the procedure
Declare @ytd_sales int
Execute get_sales 'Sushi, Anyone?',
@ytd_sales = @ytd_sales output
Print 'Sales for Sushi, Anyone: ' + Convert(Varchar(10), @ytd_sales)

Create a Stored Procedure that uses parameter (with using inner join):-
Create Procedure au_info
@lastname Varchar(20),
@firstname Varchar(20)
As
Select au_lname, au_fname, title_id, royaltyper
From authors inner join titleauthor on
authors.au_id = titleauthor.au_id
where au_fname = @firstname and au_lname = @lastname

Create a Stored Procedure that uses return value:-
Create Procedure Gettotalsales
@titletype Varchar(20),
@totalsales int output
As
Select @totalsales = Sum(ytd_sales)
From titles
Where type = @titletype
If @totalsales > 0
     Return 0
Return -1

Calling Stored Procedure (gettotalsales):-
Declare @ret int
Declare @sales int
Exec @ret = gettotalsales 'Business',
@totalsales = @sales output
If @ret = 0
     Select @sales as totalsales
Else
     Select 'No Records found'

Create Stored Procedure for Insert:-
Create Proc Addnewcustomer
@Customerid Varchar(10),
@Companyname Varchar(20)
As
Insert into Customers
(Customerid, Companyname)
Values (@Customerid, @Companyname)

If @@rowcount > 0
     Return 1
Return -1

 Compiling:-
When SQL Server indicates that it is recompiling a stored procedure, it is saying it is
recalculating its execution plan.

      Recompilation is normally an automatic process, but can be done on demand,
depending on the stored procedure.

        A procedure needs to be recompiled when its parameters change, since its execution
plan may vary from one execution to another. That is the purpose of the Recompile keyword.

Create Procedure Listcustomer @Orderid int
With recompile
As
Select * from [Order Details]
Where Orderid = @Orderid

If the plan is always the same, the procedure should not be created with the Recompile
clause. If different parameter values lead to different estimated plans, the procedure should
be created with the Recompile clause.

The following example forces every procedure and trigger referencing the orders table
to be recompiled:

Sp_recompile 'Orders'

 Grouping:-
A procedure can be created with the same name but a different identification number, which
allows the procedures to be grouped logically. Grouping procedures with the same name
allows them to be deleted at the same time. After procedures have been grouped, individual
procedures within the group cannot be deleted.

For example:
Create Procedure myproc ;1
@CustomerID Varchar(10)
As
Select * from Customers
Where Customerid = @Customerid

Create Proc myproc ;2
@Orderid int
As
Select * from Orders
Where Orderid = @Orderid

Create Proc myproc ;3
@Productid int
As
Select * from "Order Details"
Where Productid = @Productid

You can drop all these procedures at once by writing:
Drop Procedure myproc

You can get an idea of the memory allocation for stored procedures with the DBCC
Proccache statement.

You can free the procedure cache by running DBCC Freeproccache.

Syntax of Drop Procedure:-
Drop Procedure {Procedure_name} [,…n]

Syntax of Alter Procedure:-
The syntax of alter procedure is the same as that of create procedure.
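For instance, the Customerinfo procedure defined earlier could be altered to return a narrower column list (the columns chosen here are just illustrative):

```sql
Alter Procedure Customerinfo
As
Select CustomerID, CompanyName, Country
From Customers
```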

User-Defined Functions:-
A user-defined function (UDF) is a named set of Transact-SQL statements used like system
functions or views. There are two main types of UDFs:
   1. Scalar
   2. Table-Valued

A scalar UDF returns a single value and can be used wherever an expression or variable can
be used, for example, in the select list of a select statement, or in the set clause of an update
statement.

Table-valued UDFs return a result set and can be used wherever a table or a view can be used
(with some limitations). Table-valued UDFs can be referenced in the from clause of a select
statement.

Functions are subroutines made up of one or more Transact-SQL statements that can be used
to encapsulate code for reuse.
       Each fully qualified user-defined function name
(database_name.owner_name.function_name) must be unique.

Scalar User-Defined Function:-
A developer usually thinks of UDFs as functions similar to the mathematical or string-
manipulation functions. They can implement a complex calculation or data manipulation and
return one value.

Syntax of Scalar UDF:-
Create Function [owner_name.] function_name
([{@parameter_name data_type [= default]} [,…n]])
Returns scalar_return_data_type
[with {Encryption | Schemabinding} [[,]…n]]
As
Begin
function_body
Return scalar_expression
End

The function_body can contain any Transact-SQL statement, as in a stored procedure. The
Return statement contains the value to return to the calling statement.

Create Function TotalAmount (@Unitprice money, @Quantity smallint, @Discount real)
Returns money
As
Begin
Return ((@Unitprice * @Quantity) * (1 - @Discount))
End

Calling the Function:-
Select ProductID, DBO.TotalAmount (Unitprice, Quantity, Discount)
From [Order Details]

The table-valued UDFs are split into two subtypes: inline and multistatement table-valued UDFs.

Inline Table-Valued UDF (User Defined Function):-
An inline table-valued UDF can be seen as a view with parameters. It executes one select
statement, as in a view, but can include parameters, like a stored procedure.

Inline UDF are a subset of UDFs that return a table. Inline functions can be used to achieve
the functionality of parameterized views.

Inline UDFs can be used to support parameters in the search conditions specified in the where
clause.

Create Function SalesbyBookshop
(@stor_id Varchar(10))
Returns Table
As
Return (Select Stores.Stor_name, titles.title, Sum(sales.qty) As Totalqty
From Stores inner join Sales on Stores.stor_id = Sales.stor_id inner join titles on
Sales.title_id = titles.title_id
Where Stores.stor_id = @stor_id
Group By Stores.Stor_name, titles.title)

Calling the Function:-
Select * From Salesbybookshop('7066')

Multistatement Table-Valued UDF:-
Multistatement table-valued UDFs are the most complex form of UDF. This type of function
builds the result set from one or many select statements.

Syntax of Table-Valued UDF:-
Create Function [owner_name.] function_name
([{@parameter_name data_type [= default]} [,…n]])
Returns @return_variable Table ({column_definition | table_constraint} [,…n])
[With {Encryption | Schemabinding} [[,]…n]]
As
Begin
function_body
Return
End

Create Function Contacts (@Suppliers bit = 0)
Returns @Contacts Table
(ContactName Varchar(30), Phone Varchar(20), ContactType Varchar(20))
As
Begin
Insert @Contacts
Select ContactName, Phone, 'Customer'
From Customers

Insert @Contacts
Select LastName, HomePhone, 'Employee'
From Employees

If @Suppliers = 1
Insert @Contacts
Select ContactName, Phone, 'Supplier'
From Suppliers

Return
End

The function uses a temporary table @Contacts declared in a table variable.

Calling the function:-
Select * From Contacts(1)

Using UDF (User-Defined Function):-
Depending on the UDF type, the execution call is different. A scalar UDF is always called by
its two-part name: owner.function_name.
Syntax of Inline Table-Valued UDF:-
Create function [owner_name.] function_name
([{@parameter_name data_type [= default]} [,…n]])
Returns Table
[with {Encryption | Schemabinding} [[,]…n]]
As
Return [(] select_statement [)]

Alter Function:-
The syntax of alter function is the same as that of the create function statement.
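For instance, the TotalAmount function shown earlier could be altered to round its result (rounding to 2 decimal places is just an illustrative change):

```sql
Alter Function TotalAmount (@Unitprice money, @Quantity smallint, @Discount real)
Returns money
As
Begin
-- Same calculation as before, now rounded to 2 decimal places
Return Round((@Unitprice * @Quantity) * (1 - @Discount), 2)
End
```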

Drop Function:-
Syntax of Drop Function:-
Drop Function {[owner_name.]function_name}[,…n]

Drop Function Salesbybookshop

 Triggers:-
A trigger is a special form of stored procedure, bound to a table or to a view, and fired
automatically by a particular statement. Triggers are generally used to enforce referential
integrity and business rules. While triggers are similar to check constraints, they have one
major difference: triggers are reactive while constraints are proactive. This means a
constraint is fired before the effect of the statement takes place, while a trigger is fired after
or instead of the firing statement. A trigger is part of the transaction started by the statement
that fired it.

Two special tables are used in trigger statements: the Deleted table and the Inserted table.
SQL Server 2000 automatically creates and manages these tables.

Triggers are bound to a table. If you drop the table, all the triggers associated with this table
are dropped as well.

Creating a trigger:-
You can create a trigger only in the current database, although a trigger can reference objects
outside of the current database.

A trigger cannot be created on a temporary or system table, although triggers can reference
temporary tables.

Instead of delete and instead of update triggers cannot be defined on a table that has a foreign
key defined with a delete or update action.

Syntax of Create Trigger statement:-
Create trigger trigger_name
On {table | view}
[with Encryption]
{{For | After | Instead of} {[Insert] [,] [Update] [,] [Delete]}
[with append]
[Not for Replication]
As
{[{If update (column)
[{And | Or} Update (Column)] […n]}] sql_statement […n]}}

There are two types of triggers:-
   1. After triggers (that run after the statements that fired them).
   2. Instead of triggers (that run instead of the statements that fired them).

The Insert trigger:-
Create Trigger Inssales on Sales
After Insert
As
Update titles
Set ytd_sales = t.ytd_sales + I.qty
From Inserted I Join titles t
on I.title_id = t.title_id

The Delete trigger:-
It uses the Deleted table, which contains the record that has just been deleted.

Create trigger Delsale on Sales
After Delete
As
Update titles
Set ytd_sales = t.ytd_sales - D.qty
From Deleted D Join titles t
On D.title_id = t.title_id

If anyone deletes a record, that record will be saved in the del_save table:

Create table del_save
(au_id Varchar(15), title_id Varchar(10), au_ord int, royaltyper int)

Create trigger savedel on titleauthor
For delete
As
Insert del_save
Select * from Deleted

The Update trigger:-
An update operation can be seen as a delete operation followed by an insert: the old values
are deleted and the new ones are inserted.

Create trigger UpdTask on Tasks
After Update
As
If Update(TaskID)
Begin
     Raiserror('The TaskID cannot be updated', 16, 1)
     Rollback transaction
End
-- (assumed completion) reject updates that reference a nonexistent employee
If Not exists (Select * from employees E join Inserted I on E.employeeid = I.employeeid)
Begin
     Raiserror('The EmployeeID does not exist', 16, 1)
     Rollback transaction
End

After triggers can only be created on tables. A table can have any number of After triggers
defined for inserts, deletes, and updates.

Instead Of:-
Instead of triggers can be created on tables or views, but a table or view can have only one
instead of trigger per action. An instead of trigger cannot be created on a table that has a
foreign key with cascade delete or cascade update. When created on a view, an instead of
trigger enhances the updatability of the view. It is possible with an instead of trigger to
update any number of tables.

Create trigger Insorddet on [Order Details]
Instead of Insert
As
Declare @qty int
Select @qty = quantity From Inserted
If @qty <= (Select UnitsInStock from Products P join Inserted I on P.ProductID = I.ProductID)
Insert Into [Order Details]
Select * From Inserted
Else
Raiserror('Not enough products in stock', 16, 1)

The Delete trigger:-
Instead of delete triggers function in the same manner as instead of insert triggers, but they
use the Deleted table.

The Update trigger:-
Instead of Update triggers work with the Inserted and Deleted tables to store the values before
and after the update. No data is modified in the base table.
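A minimal sketch of an instead of update trigger, assuming a business rule (not from the original example) that only the Phone column of Northwind's Shippers table may change:

```sql
Create trigger UpdShippers on Shippers
Instead of Update
As
-- (assumed rule) apply only the Phone change; other columns keep their old values
Update Shippers
Set Phone = I.Phone
From Shippers S join Inserted I on S.ShipperID = I.ShipperID
```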

The Alter table statement allows you to enable and disable triggers on demand:-
Alter table table_name
{Enable | Disable} trigger
{All | trigger_name [,…n]}

Alter table [Order Details]
Disable trigger All

Run the same statement with enable in place of Disable to reenable all triggers.

Sp_depends:-
Displays information about database object dependencies (for example, the views and
procedures that depend on a table or view). References to objects outside the current database
are not reported.

Sp_depends [@objname =] 'object'

This example lists the database objects that depend on the Customers table.

Sp_depends 'Customers'

Sp_helptrigger:-
Returns the types of triggers defined on the specified table for the current database.

Sp_helptrigger [@tabname =] 'table' [,[@triggertype =] 'type']

Sp_helptrigger Sales

Sp_helptext:-
Prints the text of a rule, a default, or an unencrypted stored procedure, UDF, trigger or view.

Sp_helptext [@objname =] 'name'

Sp_helptext reminder

 Cursors:-
A cursor is a database object that points to a result set; a cursor is also called a rowset.
Cursors can be used to update data as well as retrieve it row by row, but cannot be used to
insert data. Certain configuration options can either ensure that changes to the underlying
data show up in SQL Server or that the data is "frozen" when the cursor is declared.

Cursors can be used when row-by-row processing in SQL Server is needed, or just to produce
a formatted report.
Cursor Declaration:-
There are a number of different cursor types and options that can be used in SQL Server,
each one creating a slightly different cursor that is suited to different types of operation.

Syntax of Cursor declaration:-
Declare <cursor_name> Cursor
[Local | Global]
[Forward_only | Scroll]
[Static | Keyset | Dynamic | Fast_forward]
[Read_only | Scroll_Locks | Optimistic]
For select_statement
[For update [of column_name [,…n]]]

[ Local | Global ]:-
If this is omitted, the default scope of the cursor is controlled by the "default to local cursor"
database option.

[ Forward_only | Scroll ]:-
Forward_only means that the program that uses the cursor can only move from the beginning
row to the ending row in that direction.

Scroll Cursors allow the programmer to move in both forward and backward directions.

[ Static | Keyset | Dynamic | Fast_Forward ]:-
These options control how the cursor interacts with the data.

Static cursors consist of a temporary copy of the data from the underlying sources, stored in
Tempdb. This type of cursor is not updateable.

Keyset:- Keyset Cursors also use Tempdb, but only to store the key values that identify the
rows in the underlying data.

Dynamic cursors are essentially view of the underlying data source and reflect all changes to
data made outside the cursor. Absolute fetches are not supported in this type of cursor.

Fast_forward cursors are forward-scrolling, read-only cursors that have performance
enhancements to make them the fastest type of cursor to use.
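Combining these options, a local, scrollable, read-only cursor over Northwind's Orders table could be declared like this (a sketch, the column list is illustrative):

```sql
Declare order_cursor Cursor
Local Scroll Static Read_only
For Select OrderID, CustomerID From Orders
```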

[For Update [of column_name [,…n]]]:-
This option specifies which columns in the cursor are updateable. If a column list is included,
only those columns specifically listed are updateable; if it is omitted, all columns are
updateable.
Sql Statements for cursor processing:-
Declare:- Defines a new cursor.

Open:- Opens a Transact-SQL Server cursor and populates the cursor by executing the
Transact-SQL statement.

Fetch:- Retrieves a specific row from the cursor.

Close:- Ends the row-at-a-time data retrieval initiated by the Open statement.

Deallocate:- Deletes the cursor definition and releases all system resources associated with
the cursor.

Syntax of Fetch:-
Fetch [Next | Prior | First | Last | Absolute {n | @nvar} | Relative {n | @nvar}]
From {{[Global] cursor_name} | @cursor_variable_name}
[into @variable_name [,…n]]

Next:- Returns the next row from the cursor, moving forward.

Prior:- Returns the prior row in the cursor, moving backward.

First:- Returns the first row from the cursor.

Last:- Returns the last row in the cursor.

Absolute n:- Returns the nth row from the cursor, where n is counted from the first row.

Relative n:- Returns the nth row from the cursor, starting from the current row (n may be
negative).
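For example, a scroll cursor lets you jump straight to a row (a sketch against Northwind's Employees table):

```sql
Declare emp_cursor Cursor Scroll
For Select LastName From Employees
Open emp_cursor
-- Jump directly to the third row
Fetch Absolute 3 From emp_cursor
-- Step back one row from the current position
Fetch Relative -1 From emp_cursor
Close emp_cursor
Deallocate emp_cursor
```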

Use Fetch without Variables:-
This example uses @@Fetch_status to control cursor activities in a while loop.
Declare employee_cursor cursor for
Select lastname, firstname from employees
Open employee_cursor
Fetch Next from employee_cursor
While @@fetch_status = 0
     Fetch Next from employee_cursor
Close employee_cursor
Deallocate employee_cursor

Use Fetch to store values in variables:-
Declare @au_lname Varchar(20), @au_fname Varchar(20)
Declare authors_cursor cursor for
Select au_lname, au_fname from authors
Open authors_cursor
Fetch Next from authors_cursor into
@au_lname, @au_fname
While @@fetch_status = 0
Begin
     Print 'Author: ' + @au_fname + ' ' + @au_lname
     Fetch Next from authors_cursor into
     @au_lname, @au_fname
End
Close authors_cursor
Deallocate authors_cursor

Changing data with a cursor:-
Declare abc Cursor for
Select Companyname From Shippers
Open abc
Fetch Next from abc
Update Shippers Set Companyname = 'Speedy Express Inc.'
Where Current of abc
Close abc
Deallocate abc

Deleting Rows in Result Sets:-
Declare abc Cursor for
Select * From Shippers
Open abc
Fetch Next from abc
Delete Shippers
Where Current of abc
Close abc
Deallocate abc

                       “Creating and Maintaining Tables”
Creating a Table:-
A table is a set of rows and columns. Columns define the attributes of the entity the table
represents.

Syntax of Create Table Statement:-
Create table table_name
({column_name data_type} [ Null | Not Null ] [,…n])
[on { filegroup | Default }]
[Textimage on {filegroup | Default}]

Example of Creating Table:-
Create table Customers
(CustomerID int, Companyname Varchar(40), Address Varchar(40), City Varchar(20) Not
Null)

Column Name:-
Every column name must be unique within a table. You can define up to 1,024 columns per
table.

Every column has a datatype, except computed columns. Datatypes are part of entity
integrity.

Allow Nulls:-
The Null value plays a significant role in RDBMS. A null value is different from a Zero or an
empty string.

The real nullability depends on the Ansi null default database option. To check your database
default, run the following:

Select Databasepropertyex('database_name', 'IsAnsiNullDefault')

If the result is one, Ansi null default is on; if it is zero, the option is off.

To set it on, run the following:
Alter database databasename
Set Ansi_Null_Default on

To set it off, run the following:
Alter database databasename
Set Ansi_Null_Default off

If this option is on, a column allows Null value by default. If this option is off, it does not
allow Null value.

Unique Identifiers:-
The Identity and Rowguidcol properties are probably the easiest ones to use.
Identity is just the name of SQL Server's auto-numbering property. It allows the creation of
automatically incrementing values in columns. It is possible to choose the seed (the starting
value) and the increment (the value added automatically to the last one each time a new
record is inserted).

When you insert a new row, a new identity value is automatically created. If you delete that
row, the identity value it used will never be reused automatically by the system.

Create table Test
(Col1 int identity (1,1))
Insert Test Default Values
Insert Test Default Values
Delete Test Where Col1 = 1

As you can see, the identity_insert option can be set to on to allow explicit inserts in the
identity column.

Set identity_insert Test on
Insert Test (Col1) Values (1)

If the seed and increment values are not supplied in the identity property, their default value
is one. So, Identity and Identity(1,1) are synonyms.
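To illustrate, a sketch with a non-default seed and increment (the Invoices table is made up for this example):

```sql
Create table Invoices
(InvoiceID int identity (1000, 10), Amount money)
-- Successive inserts receive InvoiceID values 1000, 1010, 1020, ...
Insert Invoices (Amount) Values (150)
Insert Invoices (Amount) Values (275)
```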

A table can have only one identity column. This column must be of type tinyint, smallint,
int, bigint, decimal or numeric (with a scale of 0).

If a table has an identity column, you can query it without knowing its name using the
IdentityCol keyword.

Select IdentityCol, Firstname, LastName
From Employees

@@Identity:-
This global variable returns the last identity value inserted in the current session across all
scopes. A scope is a stored procedure, a trigger, a function, or a batch.

Select @@Identity

Ident_Current is a function that returns the last inserted identity value in a specific table in
any session and any scope.

Scope_Identity is a function that returns the last inserted identity value in the current session
and scope.
Select Scope_Identity()

Example of Ident_Current:-
IDENT_CURRENT( 'table_name' )

SELECT IDENT_CURRENT ('mtbArea') AS Current_Identity

The uniqueidentifier datatype and the Rowguidcol property are used to indicate that a column
is a globally unique identifier (Guid). When automatically generated by the system, it is
guaranteed to be unique. If you want the value to be computer-generated, you need to use the
NewID() function.

Create table Testcompany
(CompanyID uniqueidentifier Rowguidcol Default NewID() Not Null, CompanyName
Varchar(40))

In the create table statement, the Rowguidcol keyword is not necessary to create a globally
unique identifier. The property is actually provided by the uniqueidentifier datatype.

With the Rowguidcol property set, the column can be queried with the Rowguidcol keyword.

Select Rowguidcol, Companyname
From Testcompany

Altering a Table:-
Alter table table_name
{[Alter Column Column_name
{New_datatype [(precision[,scale])]
[Null | Not Null]
| {Add | Drop} Rowguidcol}]

Alter a table to Add new column:-
Use Northwind
Alter table Shippers
Add Email Varchar(40)

Alter a table to drop a column:-
Alter table Shippers
Drop column Email

Alter a table to Add a column with a unique constraint:-
Alter table Shippers
Add Email Varchar(40) unique

Add a Nullable column with default values:-
If with values is not used, each row has the value null in the new column.

Alter table Shippers
Add Adddate smalldatetime
Default getdate() with values

Disable and Reenable a Constraint:-
Alter table [Order Details]
Nocheck Constraint ck_unitprice

To Re-enable a Constraint run the same statement with check option in place of Nocheck.

Disable and Reenable Trigger:-
Alter table [Order Details]
Disable trigger reminder

To Re-enable a trigger, run the same statement with the enable option.

Changing a Datatype:-
Alter table Customers
Alter Column Companyname Varchar(40)

Changing a Nullability:-
Alter table Customers
Alter column Contactname nvarchar(30) Not Null

Computed Columns:-
They are virtual columns, not physically stored in the database but still part of the table
structure, whose values are calculated on the fly. A computed column is based on the values
of one or more other columns of the same table. Because the computed column is not stored,
retrieving its values involves some extra CPU work. Computed columns in SQL Server 2000
may be indexed.

Create table Orders
(OrderID int not null, Price money not null, Qty int not null, Total As Price * Qty)
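
A quick sketch of how the computed column behaves, using the Orders table just created (the inserted values are illustrative):

```sql
-- Total is never supplied; it is computed from Price and Qty on the fly.
INSERT INTO Orders (OrderID, Price, Qty)
VALUES (1, 10, 3)

SELECT OrderID, Total        -- Total returns Price * Qty for each row
FROM Orders
```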

If a column is defined as Sql_variant, some rows may contain integer values, some others
character values, and so on. A Sql_variant value is converted to a base datatype before
operations such as comparisons are performed.

Create table Sqlv
(SName Sql_variant, Company varchar(40))
Insert Sqlv Values
(1,’Public Limited’)
Insert Sqlv Values
(‘John’, ‘Private Limited’)

User-Defined Datatypes:-
It is possible to create your own datatypes, based on the system datatype. The creation of a
datatype in Transact-SQL is done with the stored procedure Sp_addtype.

Syntax of Sp_addtype:-
Sp_addtype [@typename =] type,
[@phystype =] system_data_type
[,[@nulltype =]'null_type']
[,[@owner = ]'owner_name']

Create a birthday datatype that allow nulls:-
Sp_addtype birthday, datetime, ‘Null’

Create a telephone datatype that does not allow null values:-
Sp_addtype telephone, ‘Varchar(20)’, ‘Not Null’
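
Once created, user-defined datatypes can be used in a table definition like any system datatype. A minimal sketch, assuming the two types above have been created (the table and column names are illustrative):

```sql
Create table Contacts
(ContactID int not null,
 BirthDate birthday,    -- user-defined type: datetime, nulls allowed
 Phone telephone)       -- user-defined type: varchar(20), not null
```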

Drop User-Defined Datatypes:-
Deletes a user-defined datatype from systypes.

Syntax of Sp_droptype:-
Sp_droptype [@typename =]’type’

This example drops the user-defined datatype birthday.
Sp_droptype ‘birthday’

Identifier Rules:-
The identifier is delimited by double quotation marks (“”) or square brackets ([]). These
examples use identifiers for table names. Both methods for specifying delimited identifiers
are shown:

Quoted Identifier:-
Select * From “Order Details”

Bracketed Identifier:-
Select * From [Order Details]
Set Quoted_Identifier on:-
When this option is on, identifiers can be delimited by double quotation marks ("").

Set Quoted_Identifier off:-
When this option is off, identifiers cannot be delimited by double quotation marks ("");
double quotation marks then delimit character literals instead.

Create table in another Database:-
Create table Northwind.dbo.TableX
(ColA int, ColB Varchar(10))

Create a copy of the table:-
Select * into authors2 from authors
Select * into custinfo from Northwind.dbo.Customers

Select table on another database:-
Select * from Northwind..Customers
Select * from Northwind.dbo.Customers

Drop Constraint:-
Alter table “Order Details”
Drop Constraint Ck_quantity

To see the Constraintname:-
Sp_helpConstraint “Order Details”

Sp_rename:-
Changes the name of a user-created object (for example a table, column, trigger, procedure,
view, or user-defined datatype) in the current database.

Syntax of Sp_rename:-
Sp_rename [@objname =]'object_name', [@newname =]'new_name'
[,[@objtype = ]’object_type’]

Example of Rename Table and Column:-
Rename a table:-
Sp_rename ‘Customers’, ‘Custs’

Rename a Column:-
Sp_rename 'Customers.[Contact title]', 'title', 'column'

Drop table:-
Removes a table definition and all data, indexes, triggers, constraints, and permission
specifications for that table.

Syntax of Drop table:-
Drop table table_name

Drop a table in the current database:-
Drop table title

Drop a table in another database:-
Drop table pubs.dbo.authors

To find out the tables and views stored in the database:-
Information_schema.tables returns the tables and views created.

Select * from information_schema.tables

To find out information of table:-
Sp_help Customers

To find out the location of database:-
Select Name, Filename
From Master..sysaltfiles

To View the space used by the tables:-
Sp_spaceused Customers

To know the Autostats setting of a table:-
Sp_autostats Customers

To mark the stored procedures and triggers that reference the table for recompilation:-
Sp_recompile Customers

To Create Temporary Table:-
To create a local temporary table, use a single # sign. Use a double ## to create a global
temporary table.
Create table #Customers
(CustomerID int, CustomerName Varchar(20))

A local temporary table is deleted automatically when the connection that created it closes; a
global temporary table is dropped when the last connection referencing it disconnects.
         All temporary stored procedures and tables are created and stored in tempdb.
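
A minimal sketch of the global variant (the table name is illustrative):

```sql
-- Visible to all connections; dropped when the last connection
-- referencing it disconnects.
Create table ##GlobalCustomers
(CustomerID int, CustomerName Varchar(20))
```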

                         “Creating and Maintaining Indexes”
The indexes are the heart of fast data access. In fact, as the database grows, indexes are your
guarantee to fast data access. If the table contains thousands or millions of rows, data access
has to be done through indexes. It helps you to find information about a specific row or rows
without having to search through the entire table.

Indexes are used to:
   1. Increase the speed of data access
   2. Enforce uniqueness of data

Though indexes have their advantages, they also have drawbacks. The first drawback is that
they consume a lot of disk and memory space. The second drawback is that they cause
slower inserts and may cause slower updates and deletes.

SQL Server proposes two types of Index:
  1. Clustered Index
  2. Non clustered Index

Clustered Index:-
In RDBMS, a cluster may mean different things, but in general it refers to two “objects”
being considered as one. In SQL Server 2000, a cluster is an index mixed with a table. The
table is part of the index, or the index is part of the table, depending on your point of view.
As the table is part of the clustered index, only one clustered index can be created on a table.

Non Clustered Index:-
Non clustered indexes have a leaf level that contains all the key values, sorted in the same
manner as the index is defined, along with the row ID or clustered index key. The actual data
is not stored in the index and is retrieved using the row ID or the Clustered Index Key.

Composite Indexes:-
An index can be created based on two or more columns. The only restriction is that the index
key has to be less than 900 bytes. If the index contains variable-length columns, the sum of
their maximum sizes may be more than 900 bytes, but the stored value cannot be over 900
bytes.

Unique Indexes:-
A Unique index enforces entity integrity. Entity integrity can be enforced by unique or
primary key constraints, or by unique indexes. Behind a unique or a primary key constraint
lies a unique index. In the case of a composite unique index, every combination of column
values must be unique.

Statistics and Index Choices:-
A book sometimes contains two or more indexes, one by word and one by theme. If a table
has one clustered index and four nonclustered indexes, how does SQL Server know which
index or indexes to use? The distribution statistics help the query optimizer to choose the
appropriate index.
        Every index has a distribution statistics zone, stored in the statblob column of the
sysindexes table. The size of the index does not have any impact on statistics accuracy.

DBCC Show_Statistics:-
Displays the current distribution statistics for the specified target on the specified table.

DBCC Show_Statistics (table, target)
DBCC Show_Statistics (orders, orderdate)

As far as density is concerned, the smaller, the better. It indicates that the index is very
selective. If it is above 10 percent, the index is considered useless.

To list all the indexes created on the orders table, run the following:-
Select name, first, root
from sysindexes
Where id = object_id(‘orders’)

Statistics created automatically on columns are dropped automatically if they are not used.
So, you do not have to bother with them. Just let the system manage them.

You can discover the statistics that exist on a table by running the sp_helpstats system stored
procedure:
Sp_helpstats 'orders'

These statistics have been created because the database Auto_Create_Statistics option is on.

Statistics Maintenance:-
What would happen if the statistics were not up-to-date? Your index choice might be wrong.

By default, statistics are updated automatically in SQL Server 2000. To check whether the
statistics automatic update is activated in a specific database, run the following:

Select Databasepropertyex (‘dbname’, ‘IsAutoupdatestatistics’)

If the result is 1, then the statistics are automatically updated when needed. To set the
Auto_Update_Statistics option, run the following:
Alter database dbname
Set Auto_Update_Statistics ON

To turn it off, run the following:
Alter database dbname
Set Auto_Update_Statistics OFF

It is possible to enable or disable automatic computation of statistics with the Sp_autostats
system stored procedure. Run the following:

Sp_autostats ‘Orders’

Syntax of Sp_autostats:-
Sp_autostats [@tblname =]'table_name'
[,[@flagc = ]'stats_flag']
[,[@indname = ]'index_name']

If you want to turn off all the automatic statistics for the orders table, run the following:

Sp_autostats 'Orders', 'off'

To turn it on again, run the following:-
Sp_autostats ‘Orders’, ‘ON’

Now, you can turn automatic statistics updates on or off for one specific index. To turn it off
for a single index, run the following:
Sp_autostats 'Orders', 'off', 'indexname'

Creating Indexes and statistics:-
There are two types of Indexes:-
   1. Clustered Index
   2. Nonclustered Index

A table can have only one clustered index and up to 249 nonclustered indexes.

Syntax of Create Index:-
Create [Unique] [Clustered | Nonclustered] Index Index_name
On {table | view} (Column [Asc | Desc] [,…n])
[with {Pad_Index | Fillfactor = fillfactor | Ignore_Dup_Key | Drop_Existing |
Statistics_NoRecompute | Sort_In_Tempdb} [,…n]]
[on filegroup]

A table can have only one clustered index. If you create a primary key constraint on the
table, a clustered index is created by default to support the constraint.

The Sp_helpindex system stored procedure gives you the complete list of indexes created on
a table.

Sp_helpindex Orders

Example of Nonclustered Index:-
The default index is always nonclustered.
Create Index orderdate on Orders(orderdate)

Example of Clustered Index:-
The default index is always nonclustered, so if you want a clustered index, you have to
specify it.

Create Clustered Index idxCustID on Orders(CustomerID)

Example of Unique Index:-
Create Unique Clustered Index UidxorderID ON Orders(OrderID)

Example of Composite Index:-
A Composite Index is made of more than one column:
Create Nonclustered Index idxdatecountry on Orders (orderdate, shipcountry)

Example of Fillfactor:-
Create Nonclustered Index idxcountry
On Orders(shipcountry)
With fillfactor = 80

Syntax of Create Statistics:-
Create Statistics Statistics_name
on {table | view} (column [,…n])
[with [fullscan | sample number {percent | rows}] [,] [Norecompute]]

Create Statistics Statempcountry
ON Orders (EmployeeID, Shipcountry)

Having statistics on different columns helps the query optimizer gain a better understanding
of data distribution, while consuming less space than indexes. Be aware that statistics do not
replace indexes.

Drop Index:-
Drop Index table_name.Index_name

Drop Index Orders.Idxcountry

Fragmentation and Index Maintenance:-
In case of performance decrease, the DBA and the SQL developers should first think about
data fragmentation. Why and when does data fragmentation occur? Because data is updated
and rows may move around. Internal fragmentation refers to empty space inside pages.
External fragmentation refers to page links.

The DBCC Showcontig statement helps you determine external and internal fragmentation.

DBCC Showcontig ('Orders')

The explanation of the result in the following:
Extents scanned shows the number of extents used by the table.

Scan Density represents the level of external fragmentation. The lower the value, the more
external fragmentation there is. This value should always be as close as possible to 100.

Logical scan fragmentation gives the percentage of out-of-order pages. The higher the value,
the more page-to-page fragmentation.

When fragmentation becomes a problem, the data needs to be defragmented. Luckily, SQL
Server proposes three ways to defragment data files:

   1. DBCC Indexdefrag
   2. DBCC Dbreindex
   3. Create Index with Drop_Existing

DBCC Indexdefrag:-
It guarantees that pages are linked in the physical order and compacted. A heavily
fragmented index will not really benefit from this statement.

Syntax of DBCC Indexdefrag:-
DBCC Indexdefrag
       ({database_name | database_id | 0}, {table_name | table_id | view_name | view_id}
,{index_name | index_id})
[with No_Infomsgs]

The following code defragments the PK_Orders index of the orders table:

DBCC Indexdefrag(Northwind, Orders, PK_Orders)

Although it is an online operation, its major drawback is that it is not as efficient as a DBCC
Dbreindex or a Create Index with Drop_Existing.

DBCC DbreIndex:-
Rebuilds and then defragments any or all indexes of a table, and can optionally change the
index fill factor.

Syntax of DBCC Dbreindex:-
DBCC Dbreindex
('database.owner.table_name' [,index_name [,fillfactor]])
[With No_Infomsgs]

The following statement rebuilds the PK_Orders index of the Orders table:

DBCC Dbreindex (‘Northwind..Orders’, PK_Orders)

The following statement rebuilds all the indexes of the orders table:
DBCC Dbreindex (Orders, ‘’)

Note that the database name is not given in the previous statement, which means the database
on which the statement is executed is the database the connection is using.

Create Index with Drop Existing:-
The with Drop_Existing clause of the Create Index statement allows an index to be dropped
and recreated in the same statement. This has a major advantage if you are running the
statement on a clustered index, because the nonclustered indexes do not need to be rebuilt.

The following statement recreates the orderdate index of the Orders table:

Create Index Orderdate on Orders(Orderdate)
with Drop_existing

                           “Importing and Exporting Data”

SQL Server includes two utilities for the bulk copying of text data: BCP and Bulk Insert.
These two utilities are designed to move data into and out of SQL Server quickly. Each is
discussed below.

BCP:-
The BCP utility was written to import and extract data between SQL Server and a data file.
The data file can be in a text format, or it could be in a binary format that is understood by
SQL Server, called native format.

Syntax of BCP:-
BCP {[[database_name.][owner].]{table_name | view_name | "query"}}
{IN | OUT | Queryout | Format} data_file
[-m max_errors] [-f format_file] [-e err_file] [-F first_row] [-L last_row] [-b batch_size] [-n]
[-c] [-w] [-N] [-V (60 | 65 | 70)] [-6] [-q] [-C code_page] [-t field_term] [-r row_term]
[-i input_file] [-o output_file] [-a packet_size] [-S Server_name [\instance_name]]
[-U login_id] [-P password] [-T] [-v] [-R] [-K] [-E] [-h “hint [,…n]”]
In | Out | QueryOut | Format:-
Only one of these options can be specified. The In and Out parameters specify the direction
of the transfer with In being a load of data into SQL Server and Out being an extraction of
data to a text file. Queryout is specified when copying data out from a query. Format will
create a format file.

-c:-
This option specifies that the data being copied is stored in the text file in a character format.
The user is not prompted for formatting information. Each field is assumed to be a char
datatype, with the tab character as the field separator and the newline character as the row
terminator.

-T:-
This option specifies that the BCP utility connects to the SQL Server using a trusted
connection.
-h “hint [,…n]”:-
This parameter specifies a hint that should be used during the bulk copy operation. The
available hints are:

Tablock:-
This hint will cause a table lock to be acquired and held for the duration of the copy
operation. This hint can significantly improve performance of the operation since there will
be no lock contention. Multiple clients can load the table if there are no indexes on the table
and TABLOCK is specified.

Check_Constraints:-
This hint specifies that any check constraints on the destination table are checked during
copy operation. By default, these constraints are ignored.

Fire_Triggers:-
This hint specifies that any triggers on the destination table will execute during the copy
operation. By default, no insert triggers are executed.

Example of Using the BCP Utility:-
   1. Open a Command Prompt
   2. Type the following:
   BCP Northwind.dbo.Customers Out C:\Cust.txt|.rtf|.html|.xml -T -c

   In the previous statement the .txt |.rtf |.html |.xml represents that they can be in any type
   of data file.

   3. Start → Run, type Notepad C:\Cust.txt, and press Enter. The Notepad
      program should run and display the Customers table in text format.
   Bulk Insert:-
   Unlike the BCP program, only members of the sysadmin or Bulkadmin fixed server roles
   have permission to run the Bulk Insert Command.

   Example of Using Bulk Insert:-
   You will use the Bulk Insert Command to import a text file into SQL Server.

   1. Open Wordpad
   2. Enter the following Information in the wordpad, separating each city from its team
       with tab:
   Denver           Broncos
   San Diego       Chargers
   Kansas City      Chiefs

   3. Save this file to your drive as C:\BulkTest.txt
   4. Open the SQL Server Query Analyzer
   5. Type the following query:
   Create Table BulkTest
   (City Varchar(20), Team Varchar(20))

   Run the statement by pressing F5

   6. Now type the following query:
   Bulk Insert BulkTest From ‘C:\BulkTest.txt’

   Select * from BulkTest

   Run this statement.
                                        “Locking”
In SQL Server 2000, as in most relational database systems, locks are crucial in ensuring that
data integrity is maintained and data loss is prevented.

The Lock Manager:-
SQL Server 2000 has a lock manager that controls and manages the locking process. The
lock manager must also resolve deadlock issues.
       The lock manager changes its behavior based on the transaction isolation level that
has been set. The isolation level is set to Read Committed by default, but can be changed for
any individual transaction using the set transaction Isolation Level Command. There are four
transaction Isolation Levels:

   1. Read Uncommitted:-
   This is the lowest level of isolation, and is also known as ‘dirty read’. It gives the highest
   performance but does not solve any concurrency problems. A process that is reading data
   does not acquire shared locks.

   2. Read Committed:-
   This is SQL Server 2000’s default isolation level. A process at this level never reads data
   that is changed but not committed. A process will acquire shared locks when reading data.

   3. Repeatable Read:-
   This level works in the same manner as Read Committed, but ensures that if a row of
   data is revisited within a transaction, the data cannot have changed, although new rows
   may appear. This level will hold shared locks until the transaction completes.

   4. Serializable:-
   The serializable level is the highest level of isolation. When set at this level, SQL Server
   will behave as though it is a single user system for each user. This level adds to
   Repeatable Read by preventing new rows from appearing within a transaction.

The lock manager tries to prevent problems from occurring between multiple users of a
system. The main problems that can occur are:

      Lost updates
      Uncommitted dependencies
      Inconsistent analysis
      Phantom reads

Levels of Locking:-
Each of these types of locks can be applied at various levels or granularities.

A row is the lowest level of lock. A row lock protects a single row of data in a table on a
single page. Other rows on the same page of data are not affected.

A page lock affects all rows on a single 8kb page of data.

An extent lock affects a complete extent or eight data pages. Since an extent can contain
pages from different objects, this type of lock can affect more than one object.

A table lock affects all rows in the table.

A database lock is held on the entire database and all objects within the database.

Transactions and Locking:-
A transaction in SQL Server ensures either that all work done by a single statement or a
group of statements is completed, or that none of the work is performed.
        One way in which the integrity of all data can be maintained during a transaction is
through the use of locks. Locks prevent changes to the data until the transaction is either
committed or rolled back. This ensures that one transaction cannot interfere with another.
These locks, however, cause a few side effects to the system, the first of which is blocking.

Occasionally, two processes will both be performing transactions and they will block each
other. When this occurs, neither process can proceed because it needs a resource that is being
locked by the other process. This situation is called a deadlock and cannot be resolved
without one process being terminated. In this situation, SQL Server will usually choose one
process as the deadlock victim and terminate it. All work done by the deadlock victim is
rolled back and its locks are released. This allows the other process to complete its work.
        One way in which deadlocks can be controlled is through the use of the Set
Deadlock_Priority option, which can be set to low or normal. If this option is set to low by a
transaction, that transaction will be chosen as the victim when a deadlock occurs, even if it is
not the transaction that caused the deadlock.
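
A minimal sketch of this option in use (the update itself is illustrative):

```sql
-- This session volunteers to be the deadlock victim.
Set Deadlock_Priority Low

Begin Transaction
Update Orders Set Freight = Freight + 1 Where OrderID = 10248
Commit Transaction
```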

Viewing Locks:-
There are two ways that locks can be viewed in SQL Server: through Enterprise Manager, or
by using the Sp_Lock system stored procedure.

Sp_Lock:-
Reports information about locks. It is used to find out which resources are locked.

Syntax of Sp_Lock:-
Sp_Lock [ [@spid1 =]’spid1’] [,[@spid2 =]‘spid2’]

Where the arguments are both the system process ID numbers of one or two processes on the
SQL Server. This stored procedure can have none, one or two parameters. If no parameters
are passed, then all lock information about all processes is returned.

   1. List all Locks:-
       Sp_lock

   2. List a lock from a single-server process:-
       This example displays information, including locks on process ID 53
       Sp_lock 53

Sp_who:-
It provides the names of the users or processes that have locked the resources.

Syntax of Sp_who:-
Sp_who [[@login_name =]'login']

   1. List a specific user’s processes:-
       Sp_who ‘Surender’

   2. Display all active processes:-
       Sp_who ‘Active’

   3. Display a specific process with process ID:-
       Sp_who ‘10’

Lock Isolation Levels:-
In SQL Server 2000, the default isolation level is Read Committed. All transactions will
behave in a manner consistent with this level by default.
        SQL Server 2000 includes a set option that allows the programmer to change the
isolation level for a particular transaction. The isolation level is one of the following:
    1. Read Uncommitted
    2. Read Committed
    3. Repeatable Read
    4. Serializable

Set Transaction Isolation Level
{Read Committed | Read Uncommitted | Repeatable Read | Serializable}

Set Transaction Isolation Level Repeatable Read
Begin Transaction
Select * from Publishers
Commit Transaction

                            “Developing a Security Plan”
Overview of SQL Server Security:-
SQL Server Security is a fairly simple model that is based on the same “allow nothing by
default” principle that exists in many firewall products. This principle starts by assuming that
no one has access to an object or resource in SQL Server. Any access rights must be
explicitly granted to a user before they can access the server.

       SQL Server has two levels of security built into the product:-
       1. The access needed to connect to the server, and
       2. The database level where access to all the objects in a database is granted

The Login Process:-
Before a user can access data in a SQL Server, he or she must first log in to the server and be
authenticated by SQL Server as being allowed to access the server. The two methods of
authentication allowed by SQL Server are Windows Authentication and SQL Server
Authentication.

Creating and Managing Database Access:-
Each database contains a sysusers table that defines each user and the login to which they are
mapped. When a user is added to the database, the server inserts their login ID into this table
along with a unique UserID that is generated.

Sp_addlogin:-
Creates a new SQL Server login that allows a user to connect to an instance of SQL Server
using SQL Server Authentication.

Sp_addlogin [@loginname = ] ‘login’
[,[@passwd = ] 'Password'] [,[@defdb = ]'database']
[,[@deflanguage = ]'language'] [,[@sid = ] sid]
[,[@encryptopt = ]’encryption_option’]

Create login without password:-
Sp_addlogin ‘kyle’

Create login with password:-
Sp_addlogin ‘K’, ‘Test’

Sp_adduser:-
Adds a security account for a new user in the current database.

Sp_adduser [@loginname = ]’login’
[,[@name_in_db = ]’user’] [,[@grpname = ]’group’]

Sp_adduser ‘Kyle’, ‘Kyle’

Object Rights:-
There are a number of different types of rights that can be granted to a user or role. Each type
of access for an object requires that a command be run that explicitly details the access
allowed. Rights are granted to users or roles using the Grant Command.

       Allows a user to work with data in the current database or execute specific Transact-SQL
statements.

Statement Permission:-
Grant {All | statement [,…n] }
To Security_Account [,…n]

Object Permission:-
Grant {All [Privileges] | Permission [,…n] }
{[(Column [,…n])] on {table | View}
| on {table | view} [(Column [,…n])]
| on{stored_procedure | extended_procedure}
| on {user_defined_function}}
To Security_account [,…n]
[with Grant Option]
[As {group | role}]

Grant Statement Permissions:-
Grant Create Database, Create table To Kyle

Grant Object Permissions:-
Grant Select, Insert on authors To Kyle

The access can be removed with the Revoke command. The application of a Revoke
Command to a user overrides any other rights that the user has been given explicitly or as a
member of group or role.

The syntax of Revoke is the same as that of Grant, with From in place of To.

Revoke Statement Permission:-
Revoke Create table from Kyle

Revoke Object Permission:-
Revoke Insert on authors from Kyle

The Deny Command prevents a specific type of command from being run on an object.

The syntax of Deny is the same as that of Grant.

Deny Statement Permission:-
Deny Create table to Kyle

Deny Object Permission:-
Deny Insert on authors to Kyle

User-Defined Roles:-
Rights need to be granted to each user for each object. The administrator can create a role
and add a series of users to this role. A user can be in multiple Roles. Roles can also be
included in other roles.

This procedure adds a new role to a database.

Sp_addrole [@rolename = ]’role’
[,[@ownername = ]’owner’]

Sp_addrole ‘managers’

This procedure deletes a role from a database.

Sp_droprole [@rolename = ]’role’

Sp_droprole ‘managers’

Removes a SQL Server login

Sp_droplogin [@loginname = ]’login’

This procedure adds a user or another role to a role. The role name that is gaining new
members is required along with the username or role name that is being added.
Sp_addrolemember [@rolename = ]’role’, [@membername = ]’security_account’

Sp_addrolemember ‘Engineering’, ‘Kyle’

This procedure removes a member user or role from an existing role. Removes a
security_account from a SQL Server role.

Sp_droprolemember [@rolename = ]'role', [@membername = ]'security_account'

Sp_droprolemember ‘Engineering’, ‘Kyle’

Application Roles:-
Adds a special type of role in the current database used for application security. Application
roles have a few differences from user roles. First, application roles have no members.
Second, an application role is inactive by default. Once connected to SQL Server, the
application runs Sp_setapprole to activate the role using the role name and a password.
Lastly, once the role is activated, all permissions assigned to the current user are removed
and only those permissions assigned to the application role are applied.

Sp_addapprole [@rolename = ]’role’,[@password = ]’password’

Sp_addapprole ‘Salesapp’, ‘xyz’

Activates the permissions associated with an application role.

Sp_setapprole [@rolename = ]'role', [@password = ] {Encrypt N'Password'} | 'Password'
[,[@encrypt = ]'encrypt_style']

Sp_setapprole ‘Salesapp’, ‘xyz’

Granting rights to selected column in a Table:-
Grant Select on Customers (CustomerID, Contacttitle) to public

Granting rights to view:-
The object permissions granted on a view are the same ones that are available for tables.

Granting rights to a stored procedure:-
Grant Execute on Custorderhist to Kyle

                          “Analyzing and Optimizing Data Access”

Execution Plans:-
The steps that the relational engine goes through to satisfy a query make up an execution
plan. There are two items that all execution plans determine:
    1. The order in which the source tables are accessed when performing the query.
    2. The method of extraction of the data from these tables.
If the table is small (with few rows), then it is almost always quicker to use a table scan. If a
large number of rows will be returned, but there is an index on columns in an order by clause,
then the index may be quicker.

         For complex queries, there may be thousands of possible execution plans that could
exist. Instead of examining each plan and assigning a cost to it, SQL Server will “guess” at
which plans will have a cost close to the theoretical minimum cost.

        You can view the execution plan for a query in a couple of different ways. You can
use the set showplan_text or set showplan_All statements. You can also view a graphical
showplan in Query Analyzer by pressing Control + K or selecting Query → Show Execution
Plan from the menu.

Analyzing Queries:-
Using Set statements:-
The options that pertain to analyzing and optimizing queries are turned on or off by the client
and remain in effect for the duration of the connection. To turn off the effects, another set
statement must be issued.

The following options are useful in analyzing and optimizing queries and batches. Each is
described below:
Set Forceplan
Set Noexec
Set Showplan_All
Set Showplan_text
Set Statistics IO
Set Statistics Profile
Set Statistics Time

Set Forceplan:-
This forces the SQL Server query optimizer to process the joins in a query in the same order
as the tables physically appear in the From clause of the query. This can be used to force the
optimizer into different join orders so the query plans for each can be compared.

Set Forceplan {on | off}

Permissions for this option default to all users.

Set Noexec:-
This option does not really report any information that can be used to analyze a query by
itself. Instead, this option prevents the execution of the query, allowing only the parsing of
the syntax and the creation of the execution plan by the query optimizer.

When you are trying to analyze a query that is extremely long running or contains a very
large result set, you will welcome this option.

Set Noexec {on | off}

Permissions for this option default to all users.

Set showplan_All:-
This statement will instruct SQL Server not to execute the statements in the batch, but rather
to return the detailed information about how the statements are executed. With this
information, estimates of the resource requirements are returned as well.

Set Showplan_All {on | off}

This setting applies at run time only, not at parse time. This statement must be executed
within its own batch. Permissions for this option default to all users.

Set Showplan_text:-
This returns the same information that is returned by Set Showplan_All, but in a more
readable format for MS-DOS applications (those returning text output).

Set showplan_text {on | off}

Permissions default to all users.

Set Statistics IO:-
This statement determines whether or not statistical information about the various I/O
operations is returned to the client.
        Unlike most other set statements, the output from this statement is not displayed in
the results window. Instead, the statistics are sent to the Messages tab.

Set Statistics IO {on | off}

Permissions default to all users.
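For example (a sketch using the Northwind Customers table), the Messages tab then reports
the scan count and the logical, physical and read-ahead reads for the query:

Set Statistics IO on
Select * From Customers Where Country = 'Germany'
Set Statistics IO off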

Set Statistics Profile:-
This statement determines whether the profile information for ad hoc queries, views, triggers,
and stored procedures is displayed.

Set Statistics Profile {on | off}

Permissions default to all users.

Set Statistics Time:-
This statement determines whether the server returns to the client the time (in milliseconds)
required to parse, compile, and execute each statement.

        As with Set Statistics IO, this information is reported on the messages tab.

Set Statistics Time {on | off}

Permissions for this option default to all users.
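For example (a sketch using the Northwind Orders table), the Messages tab then reports the
parse and compile time and the execution time:

Set Statistics Time on
Select Count(*) From Orders
Set Statistics Time off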

Set Showplan_Text on
Go
Select C.CustomerID, O.OrderID
From Customers C, Orders O
Where C.CustomerID = O.CustomerID

Optimizing Queries:-
If you cannot find another way to write the query to use existing indexes, then adding another
index is likely the next best way to improve the efficiency of the query.

Clustered Index:-
Clustered indexes are best suited for range queries, such as dates or a particular numerical
range. When the optimizer detects this type of query, the clustered index is usually chosen.
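For example (a sketch assuming a clustered index exists on the OrderDate column of the
Northwind Orders table), a date-range query can read the qualifying rows from contiguous
pages:

Select OrderID, OrderDate
From Orders
Where OrderDate between '1997-01-01' and '1997-03-31'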

Nonclustered Indexes:-
This type of index works best on large tables when very few rows are being returned. The
query optimizer will often choose a nonclustered index that is highly selective when the
index columns are included in the join statements.
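For example (a sketch using the Northwind Customers table; the index name is illustrative), a
highly selective equality search is a good candidate for a nonclustered index:

Create nonclustered index IX_Customers_Country on Customers (Country)

Select CustomerID, CompanyName
From Customers
Where Country = 'Norway'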

Query Hints:-
There are situations, however, where the programmer or DBA may understand the data
better than SQL Server does.

       There are four types of hints that can be specified in a query: Join hints, Index hints,
Query processing hints and Lock hints.

Join Hints:-
These hints are only available when using ANSI-style join syntax. Between the type of join
(Inner, Left, Right, Full) and the keyword Join, the programmer can insert the join method to
be used. The valid choices are Hash, Loop, Merge or Remote.

Select C.Companyname, O.Orderid
From Customers C inner Hash join Orders O
on C.Customerid = O.Customerid

Index Hints:-
Index hints can force the query processor to choose a specific index for a table when
processing the query. The index can be specified by name or ID, and more than one index
can be listed.

Select select_list
From table (Index { index_name | index_id } [, ...n])

Select CustomerID
From Customers (Index (CompanyName))
Where CompanyName = 'Old World Delicatessen'

Query Processing Hints:-
These hints are placed at the end of the Select query following the keyword Option. More
than one hint can be listed in the Option clause, but only one hint of each type can be used.
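For example (a sketch using the Northwind Customers table), the Hash Group and Maxdop
hints can be combined in a single Option clause:

Select Country, Count(*) As Cnt
From Customers
Group by Country
Option (Hash Group, Maxdop 1)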

Lock Hints:-
Lock hints can be used to control how SQL Server applies locks in various queries. These
hints can impact performance dramatically in heavily loaded systems by requiring, or not
enforcing, locks on the data in tables and indexes. These hints are placed in the From clause
of a query.

Select * From Customers (Nolock)

Adding and Removing Comments:-
This procedure describes how to add and remove comments in the editor pane.

To comment out text:-
Select the text to comment out and press Control + Shift + C

To remove comment marks:-
Select the text from which the comment marks are to be removed and press Control + Shift + R

Rtrim and Ltrim:-
Rtrim returns a character string after truncating all trailing blanks; Ltrim truncates all
leading blanks.


This example demonstrates how to use Ltrim to remove leading spaces and Rtrim to remove
trailing spaces from a character string:

Declare @string Varchar(60), @string2 Varchar(60)
Set @string = '     Five spaces are before this sentence'
Set @string2 = 'Five spaces are after this sentence     '
Select Ltrim(@string)
Select Rtrim(@string2)

Upper Case or Lower Case:-
Select upper(CustomerID), lower(CustomerID)
From Customers

Square:-
Returns the square of the given expression.


Select Square(100)

Sqrt:-
Returns the square root of the given expression.


Select Sqrt(16)

Converting Datatypes with Convert and Cast:-
Converting datatypes with convert function:-
Select Convert(Varchar(10), 123)
Select Convert(int, '123')

Converting datatypes with cast function:-
Select Cast('XYZ' As Varchar(10))
Select Cast('123' As int)

Select title, Cast(ytd_sales As Varchar(20)), Convert(Varchar(20), GetDate())
From titles
Where Cast(ytd_sales As Varchar(20)) like '1%'

To find out the ASCII code of a character:-
Select Convert(int, ASCII('A'))
Select Cast(ASCII('A') As int)

Begin...End:-
Encloses a series of Transact-SQL statements so that a group of Transact-SQL statements
can be executed together. Begin and End are control-of-flow language keywords.

Begin
     { sql_statement | statement_block }
End

Use Pubs
Go
Create trigger deltitle on titles for delete
As
If (Select Count(*) From deleted, sales
    Where sales.title_id = deleted.title_id) > 0
Begin
     Rollback transaction
     Print 'You can''t delete a title with sales.'
End

DateDiff:-
Returns the number of date and time boundaries crossed between two specified dates.

DateDiff(datepart, startdate, enddate)

Select DateDiff(day, pubdate, GetDate()) As no_of_days
From titles

DateAdd:-
Returns a new datetime value based on adding an interval to the specified date.

Select DateAdd(day, 21, pubdate) As timeframe
From titles

Space:-
Returns a string of repeated spaces of the specified size.


Select Rtrim(au_lname) + ',' + Space(2) + Ltrim(au_fname)
From Authors

DBCC Useroptions:-
Returns the Set options that are active (set) for the current connection.

DBCC useroptions

This example returns the active set options for the current connection.

DBCC useroptions

Len:-
Returns the number of characters, rather than the number of bytes, of the given string
expression, excluding trailing blanks:


Select Len (Companyname) as Length, Companyname
From Customers
Where Country = 'Finland'

Datalength:-
Returns the number of bytes used to represent any expression.

Select Datalength(Contacttitle) As Length, Contacttitle
From Publishers

Waitfor:-
Specifies a time, time interval or event that triggers the execution of a statement block,
stored procedure or transaction.


Waitfor Time '19:54'
Select * from Customers

To find out the maximum Connections:-
Select @@max_connections

Sp_spaceused:-
Displays the number of rows, disk space reserved and disk space used by a table in the
current database, or displays the disk space reserved and used by the entire database.

Sp_spaceused [[@objname = ] 'objname']
             [, [@updateusage = ] 'updateusage']

   1. Sp_spaceused
   2. Sp_spaceused Customers

To find out the number of connection:-
Select @@Connections

To see the list of database:-
Select name from Master..Sysdatabases

To see the list of datatypes:-
Select name from systypes

@@Version:-
Returns the date, version and processor type for the current installation of SQL Server.


Select @@Version

@@Servername:-
Returns the name of the local server.

Select @@Servername

DB_ID:-
Returns the database identification (ID) number.

DB_ID ([database_name])

Use Master
Select name, DB_ID(name) As DatabaseID
From Sysdatabases

@@RowCount:-
Returns the number of rows affected by the last statement.

Update Customers Set CustomerID = 'KKK'
Where CustomerID = 'ANTON'
If @@RowCount = 0
    Print 'No rows were updated'
Else
    Print 'Rows were updated'

To Start Service Manager with command prompt:-
Net Start MSSQLServer

To Stop:-
Net Stop MSSQLServer

To Pause:-
Net Pause MSSQLServer

Coalesce:-
Returns the first nonnull expression among its arguments.

Coalesce(expression [, ...n])

Select Emp_ID, Coalesce(hourlywage, salary, commission * Sales) As TotalWages
From Wages

Using Substring:-
This example displays the first initial and first name of each employee, for example, A for
Andrew:

Select Substring(FirstName, 1, 1) As Initial, FirstName
From Employees
