Chapter 12  Client/Server


What is Client/Server?

       Client/Server describes where computer processing takes place.

The description uses two terms: client and server.

       A client requests specific services from a server.
       A server provides processing services for clients.

       The client and server can reside on the same computer or on different
       computers connected by a network.

       The result is an application in which part of the processing is done on
       the client side and part on the server side.

       Client

       The client presents the user interface, i.e., the client application
       program controls the PC screen, interprets data sent to it by the
       server, and presents the results of database queries.

                 The client forms queries (SQL) and sends them to the server
                 for processing.

       Server

       The server responds to queries from clients: it checks syntax, verifies
       the client's access rights, returns error messages, and executes
       commands per the client's instructions.
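
A minimal sketch of this division of duties in Python, using a single process
and an in-memory SQLite database as a stand-in (in a real client/server system
the two classes would run on separate machines connected by a network; the
table and data are hypothetical):

    import sqlite3

    class Server:
        """Parses and executes SQL on behalf of clients; returns results or errors."""
        def __init__(self):
            self.db = sqlite3.connect(":memory:")
            self.db.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
            self.db.execute("INSERT INTO customers VALUES (1, 'Smith'), (2, 'Jones')")

        def handle_request(self, sql):
            try:
                return self.db.execute(sql).fetchall()   # execute per the client's instructions
            except sqlite3.Error as exc:
                return f"error: {exc}"                    # provide error messages

    class Client:
        """Forms SQL queries, sends them to the server, and presents the results."""
        def __init__(self, server):
            self.server = server

        def run_query(self, sql):
            rows = self.server.handle_request(sql)        # send the query for processing
            print(rows)                                   # present results on the "screen"

    Client(Server()).run_query("SELECT name FROM customers WHERE id = 1")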


A client/server database system differs from a centralized database system on a
mainframe in that each client is an intelligent part of the database processing
system. The application program runs on the client, not only on the host or
server. It handles all interactions with the user and with local devices,
including data manipulation functions. The server handles all database access
and control functions. Here we have a true sharing of processing duties between
client and server.
Why Client/Server systems?

Client/server computing makes it possible to mix and match disparate data
sources and hardware. It provides the ability to use the power of local PCs to
access mainframe, minicomputer, and/or PC network data and to process such data
locally with user-friendly PC software.

       PC power/$ > Mainframe power/$
       Moving processing close to the user reduces network traffic and speeds
       response
       Facilitates graphical user interfaces (GUIs), which are built for PCs
       and not for mainframes
       Encourages open systems


Some Goals of Client/Server

development of systems that are independent of specific hardware or software
(crossing boundaries)
optimization of processing resources

Note: client/server is based on a very complex technology that generates its
own set of management problems, but its goals focus on:

       applications development and implementation costs
       the advantages of scalability and portability (modular and flexible)
       system operations costs
       a change of function from development to end-user support (user
       productivity)



Some Database Servers for Client/Server Environment

       Microsoft's SQL Server
       Oracle Server from Oracle Corp.
       DB2/2 from IBM
       SQL Server from Sybase, Inc. etc.


Some client/server environments use mainframe host database systems, such as
IBM's DB2, as the database server. Some speculate that mainframe computers will
increasingly serve as hubs or servers, providing a kind of central data
warehouse function. That is, either the client PCs will access the mainframe
database to retrieve data, or the mainframe will act as a clearinghouse into
which raw data are stored and checked for integrity before being distributed to
decentralized databases.
What does Client/Server have to do with Databases?

Because the trend toward distributed databases is firmly established, many
database vendors have used the client/server label to indicate distributed
database capability.


Distributed Databases via Distributed Processing

Distributed processing is the division of logical database processing among two
or more network nodes.

       A distributed database stores logically related data at two or more
       physically independent sites connected via a computer network. This is
       different from a decentralized database, where data are stored by
       location but are not part of a shared network environment.

       Data allocation strategies for distributed databases are:

       Centralized. The entire database is stored at one site.

       Partitioned. The database is divided into several disjoint parts and stored at
       several sites.

       Replicated. Copies of one or more database fragments are stored at several sites.

       Data allocation studies focus on which data to locate where, guided by
       performance (response time) and data availability goals, and on the
       fragments the design requires so that the whole may be managed as if it
       were a centralized database.
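
       As a minimal illustration, the three strategies can be pictured as
       fragment-to-site maps; the following Python sketch uses hypothetical
       site and fragment names:

           # Centralized: the entire database is stored at one site.
           centralized = {"customers": ["site_A"], "orders": ["site_A"]}

           # Partitioned: disjoint fragments are stored at several sites.
           partitioned = {"customers_east": ["site_A"], "customers_west": ["site_B"]}

           # Replicated: copies of a fragment are stored at several sites.
           replicated = {"customers": ["site_A", "site_B", "site_C"]}

           def sites_for(allocation, fragment):
               """Return the sites that hold a given fragment."""
               return allocation.get(fragment, [])

           print(sites_for(replicated, "customers"))  # ['site_A', 'site_B', 'site_C']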

       Objectives and Trade-offs

       Plus Side:

       increased reliability and data availability through data dispersion (no
       single point of failure),
       local control of data by users,
       modular growth (incremental capacity added as incremental data are
       required),
       lower communications costs and faster response.

       Minus Side:

       software cost and complexity,
       coordination and processing overhead,
       data integrity (additional exposure to improper updating),
       slow response from improper allocation.
A distributed database requires distributed processing.

To have a distributed database, there must be a distributed DBMS that
coordinates access to data at the various nodes. Requests for data by users are
first processed by the distributed DBMS, which determines the nature of the
request and whether it is local or global. The distributed DBMS consults the
data directory and routes the request to nonlocal sites as necessary.

       A distributed database management system (DDBMS) governs the processing and
       storage of logically related data via interconnected computer systems.

       The main components of a DDBMS are the transaction processor, TP
       (software on each node that requests data), and the data processor, DP
       (software on each node that stores and retrieves data).

The client/server architecture can be used to implement a DBMS in which the client is
the transaction processor (TP) and the server is the data processor (DP). Client/server
interactions in a DDBMS are carefully scripted. The client (TP) interacts with the end
user and sends a request to the server (DP). The server receives, schedules, and executes
the request, selecting only those records that are needed by the client. The server then
sends the data to the client only when the client requests the data.
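
A minimal sketch of that routing decision in Python (the data directory
contents and site names are hypothetical):

    # Data directory consulted by the transaction processor (TP).
    data_directory = {"customers": "local", "orders": "site_B"}

    def route_request(table, sql):
        """Decide whether a request is local or global and route it accordingly."""
        site = data_directory.get(table)
        if site == "local":
            return f"execute {sql!r} on the local data processor (DP)"
        return f"forward {sql!r} to the data processor (DP) at {site}"

    print(route_request("orders", "SELECT * FROM orders"))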

A distributed DBMS should isolate users from the complexities of distributed database
management.

With location transparency, it appears to users as if all of the data are
located at a single node. With replication transparency, the user may treat a
data item as if it were a single item at a single node, even if copies exist at
several sites. With failure transparency, either all the actions of a
transaction are committed at each site, or none are. With concurrency
transparency, each transaction appears to be the only activity in the system.
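
Failure transparency is commonly achieved with a two-phase commit protocol:
every site first votes on whether it can commit, and the transaction is
committed everywhere only if every vote is yes. A minimal sketch, with
hypothetical Site objects standing in for each node's local transaction
manager:

    class Site:
        """Hypothetical stand-in for a node's local transaction manager."""
        def __init__(self, name, healthy=True):
            self.name, self.healthy = name, healthy
        def prepare(self, txn):  return self.healthy      # phase 1: vote yes/no
        def commit(self, txn):   print(f"{self.name}: commit {txn}")
        def rollback(self, txn): print(f"{self.name}: rollback {txn}")

    def two_phase_commit(sites, txn):
        """All the actions of a transaction complete at each site, or none do."""
        if all(site.prepare(txn) for site in sites):      # phase 1: collect votes
            for site in sites:
                site.commit(txn)                          # phase 2: commit everywhere
            return "committed"
        for site in sites:
            site.rollback(txn)                            # phase 2: abort everywhere
        return "rolled back"

    print(two_phase_commit([Site("A"), Site("B", healthy=False)], "T1"))
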
Why may client/server computing be considered an evolutionary, rather than a
revolutionary, change?

Client/server computing didn't happen suddenly. Instead, it is the result of years of slow-
paced changes in end user computing.

       1970s mainframes with users accessing dumb terminals

       1980s PC with data residence on mainframes (manual extraction)
       1980s PC with data residence on mainframes (intelligent terminal extraction)
              (Proliferation of Snapshots of mainframe database) ….

       1990s Networks of heterogeneous computers..Data access without regard to the
       data location, data model, or communication characteristics of the other
       computers in the network. (software and hardware independence)

       Today Modern end users use intelligent computers, GUIs, user friendly
       systems, and data analysis tools to effectively increase their productivity. In
       addition, data sharing requirements make efficient use of network resources a
       priority issue.


Client/Server Architecture

       Client/Server architecture is based on a set of principles. These are:

       Hardware and Software Independence,
       Open Access to Services,
       Process Distribution,
       Standards


       Components

               client,
               server and
               communications channel.

       Some include middleware as a fourth component, since middleware
       (software used to manage client/server interactions) is critical to
       managing the communications channel. Middleware provides services that
       insulate the client from the details of network protocols and server
       processes.

The Open Systems Interconnection (OSI) network reference model was developed by
the International Organization for Standardization (ISO) in an effort to
standardize diverse network systems. The OSI model is based on seven layers,
each isolated from the others, and functions within a system that requires
considerable infrastructure.

         The OSI network model is made up of the following layers:

                application
                presentation
                session
                transport
                network
                data-link
                physical


where:

the application and presentation layers provide end user application-oriented functions,

         E.g., preparation and formatting of data to be sent

the session layer establishes and controls program-to-program communications,

the transport, network, data-link, and physical layers provide network-oriented functions.

         E.g., the physical layer provides standards dealing with the
         electrical details of the transmission, and the data-link layer
         creates 'frames' for transmission and controls the shared access to
         the network's physical medium.
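
A toy Python illustration of this layering, with each layer wrapping the data
from the layer above in its own (greatly simplified) header:

    def send(payload):
        """Walk a message down the stack, wrapping it at each layer."""
        segment = f"[transport seq=1]{payload}"          # transport: sequencing
        packet  = f"[network dst=10.0.0.2]{segment}"     # network: routing address
        frame   = f"[frame crc=ok]{packet}[frame end]"   # data-link: framing
        return frame                                     # physical: bits on the medium

    print(send("SELECT * FROM customers"))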

The client/server network infrastructure includes the network cabling, network
topology, network type, communication devices, and network protocols. Network
protocols, however, constitute the core of the network infrastructure.

The network protocols determine how messages between computers are sent,
interpreted, and processed.

The main network protocols in use today are:

Transmission Control Protocol/Internet Protocol (TCP/IP)

       The main communications protocol used by UNIX systems (an important
       operating system for medium-to-large database servers) and the official
       communications protocol of the Internet. TCP/IP is well on its way to
       becoming the de facto standard for heterogeneous network connections.
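
       A minimal sketch of a TCP/IP exchange using Python's standard socket
       module (the port number and message are hypothetical; the server and
       client run in one process here only for brevity):

           import socket, threading

           # Server side: listen on a TCP socket and answer one request.
           srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
           srv.bind(("127.0.0.1", 5050))
           srv.listen()

           def handle_one():
               conn, _ = srv.accept()
               with conn:
                   request = conn.recv(1024).decode()
                   conn.sendall(f"server received: {request}".encode())

           threading.Thread(target=handle_one, daemon=True).start()

           # Client side: connect over TCP/IP and send a request.
           with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
               cli.connect(("127.0.0.1", 5050))
               cli.sendall(b"SELECT * FROM customers")
               print(cli.recv(1024).decode())   # server received: SELECT * FROM customers

           srv.close()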


Sequenced Packet Exchange/Internet Protocol (SPX/IPX),

       The protocol used by Novell LAN operating systems. It has become the de
       facto standard for PC LANs.


Network Basic Input Output System (NetBIOS).

       An IBM product supported by the majority of PC operating systems. Its
       limitations render it unusable in geographically dispersed
       internetworks.


Advanced Program-to-Program Communication (APPC)

       An IBM mainframe protocol. It carries communications between personal
       computers and IBM mainframe applications, such as DB2, running on the
       mainframe.


Most first-generation middleware software used in client/server applications is oriented
toward providing transparent data access to several database servers. The use of database
middleware yields:

Network independence by allowing the front-end application to access data without
regard to the network protocols

Database server independence by allowing the front-end application to access data from
multiple database servers without having to write code that is specific to each database
server.

E.g., an application can use generic SQL to access different database servers.
The middleware layer isolates the programmer from the differences among SQL
dialects by transforming the generic SQL into the server's expected syntax.
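
A minimal sketch of such a translation in Python, rewriting a generic LIMIT
clause into two well-known dialect forms (SQL Server's TOP and older Oracle's
ROWNUM); the dialect names and query are illustrative:

    import re

    def translate(generic_sql, dialect):
        """Rewrite a generic 'LIMIT n' clause into the server's expected syntax."""
        m = re.search(r"\s+LIMIT\s+(\d+)\s*$", generic_sql, re.IGNORECASE)
        if not m:
            return generic_sql                            # nothing dialect-specific
        n, base = m.group(1), generic_sql[:m.start()]
        if dialect == "sqlserver":
            return base.replace("SELECT", f"SELECT TOP {n}", 1)
        if dialect == "oracle":
            return f"SELECT * FROM ({base}) WHERE ROWNUM <= {n}"
        return generic_sql                                # server accepts LIMIT as-is

    print(translate("SELECT name FROM customers LIMIT 5", "sqlserver"))
    # -> SELECT TOP 5 name FROM customers
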
MIS managers are usually concerned with finding ways to improve end-user data
access and programmer productivity. By using middleware software, end users can
access legacy data and programmers can write better applications faster. The
applications are network independent and database server independent. Such an
environment yields improved productivity, thereby generating development cost
savings.

Suppose you are currently considering the purchase of a Client/Server
DBMS. What characteristics should you look for? Why?

Client/server databases provide transparent data access to multiple and
heterogeneous clients regardless of the hardware, software, or network platform
used by the client application. Client/server DBMSs differ from other DBMSs in
terms of where the processing takes place and what data are sent over the
network to the client computer. Client/server DBMSs free the client from having
to process all the data locally and reduce network traffic, because only the
rows that match a query are returned.

A client/server DBMS is just one of the components in an information system. The
DBMS should be able to support all applications, business rules, and procedures
necessary to implement the system. Therefore, the DBMS must match the system's
technical characteristics, it must have good management capabilities, and it must provide
the desired level of support from vendor and third parties. Specifically:
On the technical side, the database should provide data distribution, location
transparency, transaction transparency, a data dictionary, good performance,
support for access via a variety of front ends and programming languages,
support for several client types (DOS, UNIX, Windows, etc.), and third-party
support for CASE tools, application development environments, etc.

On the managerial side, the database must provide a wide variety of managerial
tools: database backup and recovery, GUI-based tools, remote management,
interfaces to other management systems, performance monitoring tools, database
utilities, etc.

On the support side, the DBMS must have good third-party vendor support,
technical support, training, and consulting.

From a managerial point of view, client/server data processing tends to be more complex
than traditional data processing. In fact, client/server computing changes the way in
which we look at the most fundamental computing chores and expands the reach of
information systems. These changes create a managerial paradox. On the one hand, MIS
frees end users to do their individual data processing and, on the other hand, end users are
more dependent on the client/server infrastructure and on the expanded services provided
by the MIS department.

Client/server computing changes the way in which systems are designed,
developed, and managed by forcing a change from:

       proprietary to open systems,
       maintenance-oriented coding to analysis, design, and service,
       data collection to data deployment,
       a centralized to a distributed style of data management,
       a vertical, inflexible organizational style to a more horizontal,
       flexible style.
What, if any, client/server standards exist and how do such standards affect the
client/server database environment?

There is no single standard to choose from at this point. However, there are
several de facto standards created by market acceptance. Therefore,
client/server developers have many "standards" to choose from when developing
applications. The important issue for databases is how the selection of one of
the de facto standards affects database design, implementation, and management.
