What is a Protocol?
A protocol is a format for exchanging data: a set of agreed rules for computers connected to
the Internet. Protocols determine how data is packaged and how its transfer is managed. All
computers connecting to the Internet must use the same protocol for them to communicate with
each other.
Types of Protocols
TCP/IP stands for - Transmission Control Protocol/Internet Protocol
FTP stands for - File Transfer Protocol
HTTP stands for - Hypertext Transfer Protocol
SHTTP stands for - Secure Hypertext Transfer Protocol
TELNET allows Remote access
TCP/IP is the tool that provides the way in which information flows between a computer and the
Internet. In order to connect to the Internet and for networks to communicate with each other on
the net, each computer must use TCP/IP.
TCP and IP were developed by a Department of Defence (DOD) research project to connect a
number of different networks designed by different vendors into a network of networks (the
"Internet"). It was initially successful because it delivered a few basic services that everyone
needs (file transfer, electronic mail, remote logon) across a very large number of client and
server systems. Several computers in a small department can use TCP/IP (along with other
protocols) on a single LAN. The IP component provides routing from the department to the
enterprise network, then to regional networks, and finally to the global Internet. On the
battlefield a communications network will sustain damage, so the DOD designed TCP/IP to be
robust and automatically recover from any node or phone line failure. This design allows the
construction of very large networks with less central management. However, because of the
automatic recovery, network problems can go undiagnosed and uncorrected for long periods of
time.
As with all other communications protocols, TCP/IP is composed of layers:
IP is responsible for moving packets of data from node to node. It forwards each packet based
on a four-byte destination address (the IP number). The Internet authorities assign ranges of
numbers to different organisations, and the organisations assign groups of their numbers to
departments. IP operates on gateway machines that move data from department to organisation
to region and then around the world.
TCP is responsible for verifying the correct delivery of data from client to server. Data can be
lost in the intermediate network, so TCP adds support to detect errors or lost data and to
trigger retransmission until the data is correctly and completely received.
Sockets is the name given to the package of subroutines that provides access to TCP/IP on most
systems.
TCP/IP protocols are not used only on the Internet. They are also widely used to build
private networks, called internets (spelled with a small 'i'), that may or may not be connected to
the global Internet (spelled with a capital 'I'). An internet that is used exclusively by one
organisation is sometimes called an intranet.
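The reliable byte stream that TCP presents over IP can be sketched with Python's standard socket module. This is a minimal loopback demonstration, not production code: the server simply echoes whatever the client sends, and the addresses and payload are arbitrary demo values.

```python
import socket
import threading

def run_echo_server(server_sock):
    """Accept one connection and echo the payload back unchanged."""
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

# Server side: bind to the loopback address; port 0 lets the OS pick a free port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=run_echo_server, args=(server,))
t.start()

# Client side: TCP guarantees the bytes arrive intact and in order.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"hello, tcp")
reply = client.recv(1024)
client.close()
t.join()
server.close()

print(reply)  # b'hello, tcp'
```

The application never sees IP packets or retransmissions; the socket layer described above hides them entirely.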
HOW FTP WORKS...
FTP is a powerful protocol that allows files to be transferred from "computer A" to "computer B"
or vice versa. FTP works on the client/server principle. A client program enables the user to
interact with a server in order to access information and services on the server computer. Files
that can be transferred are stored on computers called FTP servers.
The File Transfer Protocol is used to send files from one system to another under user
commands. Both text and binary files are accommodated and the protocol provides features for
controlling user access. When a user wishes to engage in file transfer, FTP sets up a TCP
connection to the target system for the exchange of control messages. These allow the user ID and
password to be transmitted and allow the user to specify the file and file action desired. Once
file transfer is approved, a second TCP connection is set up for data transfer. The file is
transferred over the data connection, without the overhead of headers, or control information at
the application level. When the transfer is complete, the control connection is used to signal the
completion and to accept new file transfer commands.
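The control-connection/data-connection split described above is handled automatically by Python's standard ftplib. The sketch below is hedged: the host name and credentials are placeholders, so substitute a real FTP server before running it.

```python
from ftplib import FTP

def fetch_listing(host, user="anonymous", passwd="guest@example.com"):
    """Open the control connection, log in, and list the root directory.

    ftplib opens the control connection (TCP port 21) on connect; each
    LIST or RETR command then opens a separate data connection behind
    the scenes, as the protocol description above outlines.
    """
    entries = []
    with FTP(host) as ftp:                     # control connection
        ftp.login(user, passwd)                # user ID and password over the control link
        ftp.retrlines("LIST", entries.append)  # LIST transfers over a data connection
    return entries

# Example usage (requires network access to a real server):
# for line in fetch_listing("ftp.example.com"):
#     print(line)
```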
HTTP is the protocol that a Web browser uses to access and retrieve Web pages. All World
Wide Web pages are transferred over the Internet using this system. Web addresses (or URLs)
begin with the letters http:// - this lets the browser know it is supposed to be accessing a Web
server. The Hypertext Transfer Protocol (HTTP) is an application-level protocol with the
lightness and speed necessary for distributed, collaborative, hypermedia information systems.
HTTP has been in use by the world-wide Web global information initiative since 1990.
HTTP is not a protocol for transferring hypertext; rather, it is a protocol for transmitting
information with the efficiency necessary for making hypertext jumps. The data transferred by
the protocol can be plain text, hypertext, audio, images, or any Internet-accessible information.
HTTP is a transaction-oriented client-server protocol. The most typical use of HTTP is between
a web browser and a web server. To provide reliability, HTTP makes use of TCP.
Each transaction is treated independently. A typical implementation will create a new TCP
connection between client and server for each transaction and then terminate the connection as
soon as the transaction completes, although the specification does not dictate this one-to-one
relationship between transaction and connection lifetimes. Another important feature of HTTP is
that it is flexible in the formats that it can handle. When a browser issues a request to a server,
it may include a prioritised list of formats that it can handle, and the server replies with the
data in a format the client has indicated it accepts.
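A single HTTP transaction of the kind described above can be written out by hand. The host, path, and reply below are illustrative, canned values: the point is the shape of the exchange, with the client offering a prioritised Accept list and the server answering in one of the offered formats.

```python
# Client request: one transaction, then the connection closes.
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    # Prioritised list of formats the client can handle (q = preference weight)
    "Accept: text/html, text/plain;q=0.8, image/png;q=0.5\r\n"
    "Connection: close\r\n"
    "\r\n"
)

# A minimal canned server reply for the same transaction:
response = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html\r\n"   # the server chose one of the offered formats
    "Content-Length: 13\r\n"
    "\r\n"
    "<p>hello</p>\n"
)

status_line = response.split("\r\n", 1)[0]
print(status_line)  # HTTP/1.1 200 OK
```

Because each transaction is independent, the server needs no memory of the client between requests.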
Understanding Internet addresses
What is a URL?
What does it mean?
How do web addresses work?
URL stands for Uniform Resource Locator and is another name for an Internet or web address.
A web address is composed of four parts:
A protocol name
The location of the site
The name of the organisation that maintains the site
A suffix that identifies the kind of organisation it is
For example, a dot com is used for a commercial organisation and a dot co dot uk is used for a
company that trades in the United Kingdom. For example, the address
http://www.myknowledgemap.com provides the following information:
http - this web server uses Hypertext Transfer Protocol
www - this site is on the World Wide Web
myknowledgemap - the web server is at the myknowledgemap company
dot com - this is a commercial organisation
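The same decomposition can be done with Python's standard urllib.parse module, which splits an address into the parts listed above (the trailing suffix extraction is a simple illustration, not a full domain parser):

```python
from urllib.parse import urlsplit

# Split the example address from above into its component parts.
parts = urlsplit("http://www.myknowledgemap.com/")

print(parts.scheme)   # http  -> the protocol name
print(parts.netloc)   # www.myknowledgemap.com  -> the location of the site
print(parts.netloc.rsplit(".", 1)[-1])  # com  -> the organisation-type suffix
```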
SHTTP is a secure protocol used to encrypt and host sensitive information on the web. This is
particularly important when dealing with financial and confidential information. Secure HTTP
was developed by Enterprise Integration Technology (EIT) as part of the CommerceNet Project
in Silicon Valley but has been released as a public specification. The system provides security
enhancements to the Web transport standard, hypertext transfer protocol (HTTP). It allows
clients and servers to negotiate encryption, authentication, and digital signature methods
independently, in any combination, in both directions. It supports a variety of encryption
algorithms, including triple DES. The use of SHTTP begins with an exchange of messages that
specify security management information such as the encryption, hash, and signature
algorithms to be used in each direction. These can be specified separately for header and
content information.
SHTTP can provide confidentiality, authentication, and integrity guarantees on an individual file
basis. Web sites with security features are used when displaying information such as credit
card numbers, personal information, passwords and contact details.
Security for Commerce on the Internet
One of the main problems for retailing electronically on the Internet is the lack of security.
Two general-purpose approaches are broadly representative and probably the most important:
the Secure Socket Layer (SSL) from Netscape and Secure HTTP (S-HTTP) from Enterprise
Integration Technology. Of the payment systems that provide strong security for Internet
purchases of goods and services, two stand out: Secure Electronic Transactions (SET),
proposed for bankcard transactions by MasterCard and Visa, and E-cash, developed by
DigiCash, a more sophisticated system particularly in terms of anonymity.
Secure Socket Layer (SSL)
Secure Socket Layer provides security at the lowest level of the
protocol hierarchy. The security furnished is transparent to the user; it is provided at a level just
above the basic TCP/IP service. Software using TCP often specifies a "socket" at each end of a
communication, which maps the software processes at each end to the communication. At this
level SSL can encrypt all communication between the sockets on the fly and transparently.
Therefore, it can support security for virtually any Internet application. In particular, electronic
mail, TELNET, and FTP transactions as well as Web exchanges can be protected using SSL.
Most of the SSL process is involved with the initial exchange of information to set up the secure
channel. The protocol begins with the client requesting authentication from the server. The
request specifies the encryption algorithms the client understands and includes some challenge
text. (Challenge text is essentially random material that is returned in encrypted form; because
it differs on every exchange, it prevents the replay of earlier ciphertext.)
The authentication that is returned by the server is in the form of a certificate with a public-key
signature of the server. The authentication also includes the server's preferences for encryption
algorithms. The client then generates a master key, encrypts it with the server's public key, and sends
the result to the server. The server then returns a message encrypted with the master key. This
key is used to generate the keys used to send messages.
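Python's standard ssl module implements the modern descendant of this handshake (TLS), and the client-side setup can be sketched as follows. Setting up the context is all that runs here; the commented-out connection to example.com is illustrative and needs network access.

```python
import ssl

# create_default_context() applies sensible client defaults: the server must
# present a certificate, and its name must match the host we asked for. These
# correspond to the server-authentication step described above.
context = ssl.create_default_context()

print(context.verify_mode == ssl.CERT_REQUIRED)  # True: demand a certificate
print(context.check_hostname)                    # True: name must match the cert

# To protect a real connection (requires network access):
# import socket
# with socket.create_connection(("example.com", 443)) as raw:
#     with context.wrap_socket(raw, server_hostname="example.com") as tls:
#         print(tls.version())   # e.g. 'TLSv1.3'
```

Once the handshake completes, every byte written to the wrapped socket is encrypted transparently, which is exactly the "on the fly" property noted above.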
Telnet provides a remote logon capability, which enables a user at a terminal or personal
computer to logon to a remote computer and function as if directly connected to that computer.
The protocol was designed to work with simple scroll-mode terminals. Telnet is actually
implemented in two modules: User TELNET interacts with the terminal I/O module to
communicate with a local terminal. It converts the characteristics of real terminals to the network
standard, and vice versa. Server TELNET interacts with an application, acting as a surrogate
terminal handler so that remote terminals appear as local to the application. Terminal traffic
between user and server TELNET is carried on a TCP connection. The TELNET application
provides a common-denominator terminal: if software is written for each type of computer to
support the "TELNET terminal", one terminal can interact with all computer types.
"Prevention is better than cure".
Today, there are many programs available to assist with network management. These
programs can help identify conditions that may lead to problems, prevent network failures, and
troubleshoot problems when they occur.
One program, "Netcracker", is a design application that allows network creators to design a
simulated version of their network before putting the real thing together. Another example is
"ConfigMaker", by Cisco. This program allows the designer to configure network components by
using proper operating system syntax and then tests the implementation as a simulation. These
programs are priced in the thousands rather than the hundreds. However, this is justified by the
amount of time that can be saved by eradicating problems before network installation. While
these programs allow us to build and monitor networks, they are not a comprehensive solution,
and monitoring software should be used in order to continually check the on-going status of the
network. There are many software monitoring packages available. Sun Microsystems have an
entire range, from small LAN management to Enterprise Network management packages.
"Solstice Site Manager 2.3" is one example: a state-of-the-art method for managing sites of
up to 100 nodes. It simplifies management of network resources to keep the network running at
peak performance.
There are many factors that can inhibit the performance of a network, leading to a situation
called "bottlenecking": a sharp and notable reduction in performance. This can be caused by
equipment not capable of meeting the demands that are being placed on the network, such as
network cards, hubs, repeaters etc. Also, the bandwidth of the cable may not be sufficient for
traffic demands. Users can cause slowdown by playing resource hungry games across the LAN,
or engaging in heavy Internet downloads, like MP3 files and video. The problems can also be
caused by poor LAN organisation, where all nodes populate a single segment, and therefore a
single collision domain. In other words, the network is like a small room packed with lots of
people all talking at the one time, leading to chaos.
It is recommended that a baseline be established that will assist the network administrator in
monitoring performance. A baseline defines a point of reference against which to measure
network performance and behaviour when problems occur. In other words, it has to be
established what is "normal" for your network, before it can be determined what is "abnormal". A
baseline can be established by using performance-monitoring software. There may be no need
to buy expensive management software. Users running Windows servers are provided with
integrated management tools at no extra charge. They do not provide the same range or
capability of the higher end solutions, but they are still powerful tools. These tools allow the
administrator to view various logs that maintain error, security and system information. Other
tools can track processor, disk and memory usage and analyse protocol performance.
Trends gathered by these tools can indicate the problems previously mentioned, and can help
the administrator prescribe solutions to the problem.
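The baseline idea above can be sketched in a few lines: record what is "normal", then flag observations that stray too far from it. The sample figures are invented response times in milliseconds, and the three-standard-deviation threshold is one common rule of thumb, not a fixed standard.

```python
import statistics

# Hypothetical baseline: response-time samples (ms) gathered when the
# network was known to be behaving normally.
baseline_samples = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2, 12.0]
mean = statistics.mean(baseline_samples)
stdev = statistics.stdev(baseline_samples)

def is_abnormal(observation_ms, threshold=3.0):
    """Flag anything more than `threshold` standard deviations from baseline."""
    return abs(observation_ms - mean) > threshold * stdev

print(is_abnormal(12.2))   # False: within the normal range
print(is_abnormal(45.0))   # True: well outside the baseline
```

A real monitoring tool tracks many such metrics (processor, disk, memory, protocol counters), but the comparison against an established baseline works the same way.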
Possible solutions include:
- Moving to a faster technology by upgrading cable, Interface cards and components
(switches, hubs and bridges).
- Increasing memory
- Installing additional CPUs
- Subnetting (breaking the network into smaller more manageable chunks using routers).
- Preventing users from running power hungry games or applications across the network.
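The subnetting option above can be illustrated with Python's standard ipaddress module: one network is divided into smaller, more manageable chunks, each of which becomes its own collision/broadcast neighbourhood behind a router. The address range is an arbitrary private block chosen for the example.

```python
import ipaddress

# Break one /24 network (256 addresses) into four /26 subnets of 64 addresses each.
network = ipaddress.ip_network("192.168.0.0/24")
subnets = list(network.subnets(new_prefix=26))

for s in subnets:
    print(s)
# 192.168.0.0/26
# 192.168.0.64/26
# 192.168.0.128/26
# 192.168.0.192/26
```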
The philosophy of networking is providing the best service at the cheapest price. It is not difficult
to have a high-performance network. All that is required is the best equipment, the best
technologies, the best methodologies and the best personnel to tie it all together. However, in
the real world this is seldom, if ever the case, due to costs. Therefore, a trade-off is sought, and
ideal performance gives way to acceptable performance. As users, we demand the best; we
want the fastest access to resources and faster links to the Internet. We want our applications to
run better, and we want more bandwidth to run multimedia applications. Cost constraints
prevent this from always being possible. In fact, our requirements as users (playing power-
hungry games over the LAN) are often sacrificed to support business needs. Organisations are
not willing to spend huge amounts of money simply to keep their users happy, preferring systems
that suit business needs and get the job done.
Building and hosting a Web site is not going to be cheap. The intrinsic nature of IT is one of
great expense. Granted, it is possible to purchase cheaper equipment; however, it is either a
brand of ill repute, out of date, or both. Equipment of this type must be avoided, otherwise
problems are likely to reveal themselves further down the line.
The server machine will be the most expensive item that you will have to purchase. Its cost will
vary, depending on its configuration. The site's functionality will determine to a large degree
the choice of components to be made, and the way in which they will be configured. If you are
running a mission-critical application, then you will need to buy the fastest and most reliable
components available.
You may wish to consider the following for your system:
CPU (Central Processing Unit)
Since the CPU is able to process only one instruction at a time, it makes sense to install
perhaps two or even four, depending on the server load. The CPU is the most expensive part of
any computer system, so multiple CPUs will drive costs up.
RAID (redundant array of inexpensive disks)
RAID is a technique that uses multiple hard disk drives. When a file is saved, it is striped across
the disks, with parity information also being created. Should one of the disks fail, the technician
can quickly swap the disk out without having to take the server down. Once the new disk is in
place, the parity information on the other disks is used to rebuild the lost information onto the
replacement. Even though the cost of storage is decreasing, multiple disks still incur greater costs.
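The parity trick behind RAID striping can be shown in miniature: XOR the data blocks together to get a parity block, and any single lost block can be rebuilt by XOR-ing the survivors. Real RAID operates on disk sectors rather than the 4-byte toy blocks used here.

```python
def xor_blocks(a, b):
    """Byte-wise XOR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

block1 = b"DATA"                       # stored on disk 1
block2 = b"MORE"                       # stored on disk 2
parity = xor_blocks(block1, block2)    # written to a third (parity) disk

# The disk holding block2 fails; rebuild it from block1 and the parity block.
rebuilt = xor_blocks(block1, parity)
print(rebuilt)  # b'MORE'
```

The same XOR works in either direction, which is why losing any one disk of the set is recoverable.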
SCSI (small computer systems interface)
This is an interface standard, required to connect devices to the system. SCSI hard disk drives
are used in servers because they are faster and perform better. Of course faster means costlier.
RAM (Random Access Memory)
The greater the amount of RAM the better. RAM is not anywhere near as expensive as it used
to be, and a gigabyte of memory is very affordable these days.
Motherboard
The main system circuit board, which facilitates the connection of all devices to the system.
Using the components mentioned above by nature drives up the cost of the motherboard. A
motherboard with at least four SCSI controllers and two CPU sockets will be required; these
do not come cheap. Some motherboard manufacturers build boards with a proprietary bus and
multiple CPU sockets, which further increases costs.
Backup
Statistically speaking, the hard disk is the component most likely to fail. RAID cannot account
for fire or flooding, and is therefore limited in what it can do. This is why backup is essential.
Should disaster strike, having backup copies of the hard disk drive will allow you to reinstate
the server. There are many different backup drives and techniques available; these can be
internal or external to the server. External drives tend to cost more, due to the extra casing.
UPS (Uninterruptible Power Supply)
This is an essential component to keep your server running if the power fails. Power spikes,
brown outs and black outs can be very damaging to your sensitive electrical equipment.
Fluctuations in the power supply can cause memory and hard disk errors, and black outs can
totally corrupt the operating system. The UPS is like a battery pack. It is plugged into the main
power supply, and the server is then plugged into the UPS. The UPS then takes care of drops
and surges in power, providing a smooth electrical current to the server's sensitive components.
In situations of total power failure, the UPS will hold charge for a certain amount of time, until
either the power is resumed, or the administrator is able to get to the server and close it down
gracefully, to prevent data corruption. UPSs vary in price: the more expensive they are, the
better they handle rogue power events and the longer they will hold charge in power-failure
situations.
Below is an excerpt from the Dan Web site, showing the price of a typical server. Many of the
options mentioned above are not listed, so the price will likely rise by at least another £1000,
perhaps double or even treble that.
Software is not cheap either. Unfortunately, there are many software components required,
again driving up costs. You will most likely need most of the following.
Server Operating System
The core of your Web site is obviously the server machine; therefore it requires a stable and
reputable operating system that will not fall over under increased stress. As mentioned in other
modules, the choice comes down to two types: Unix or NT. These days, more and more people
are opting for NT (now Server 2000), simply because it is easier to operate, maintain and
configure. The server machine comes with Server 2000 pre-installed, but it is likely to cost
around £1000 if you buy it separately.
Web Server Software
This software is required to run in conjunction with the server operating system, and provides Web
services on your machine across the Internet. There are many free solutions available from the
Internet that are often better than the commercial versions. NT users need not worry about cost
in this department, since they get fully functional Web server software integrated with the NT
software. It pays to research and test the various freebies before throwing money at software
that may be more than you need.
Below is an example of a commercial solution, at $995 per licence.
It is possible to build quite elaborate and functional Web sites from basic tools. In fact, many
designers will still use the Windows Notepad and browser software to design Web pages;
however, most professionals enjoy the functionality and ease of design tool software. These can
be purchased individually or in suites. Macromedia is one such company, providing a suite of
applications to aid with Web design. These applications automate much of the design process.
An example is shown below.
As mentioned in previous units, software is required to perform load balancing and monitoring.
Again, Windows NT users enjoy the pleasure of having integrated monitoring tools for "free",
however, load-balancing software is very expensive. It may actually be advantageous to enlist
the services of an external contractor to test your system for you. Alternatively, be prepared to
pay big money if you decide to buy your own.
One expense often overlooked in IT is staff training. Training is a much-needed provision.
There is little point in deploying state-of-the-art systems when no one knows how to use or
maintain them. Keeping staff on the ball will complement your IT strategy. Training can be provided
in-house, which will require the services of a dedicated team with a broad skills and knowledge
base. Often the services of a third party training team can save money, since they can be used
as and when required.
Major players in the IT market have begun to set up accredited qualifications that are becoming
widely known and accepted in industry.
Microsoft has a whole range of different modules available to individuals and companies. These
modules can be acquired and built into group awards, like Microsoft Certified Professional,
Microsoft Certified System Engineers, Microsoft Certified Systems Administrator etc. Training of
this type normally assumes the form of 3-day and 5-day courses, followed by a multiple choice
test taken at a later date in a Microsoft approved test centre. These courses normally cost
a minimum of around £1000 each; however, it is possible to buy a Microsoft approved
textbook, study the material, and apply for the test when you are ready to take it. The tests cost
around £100 each.
Cisco is another major player who has joined the training market. They provide training, mainly
in networking, and so their qualification is extremely advantageous to individuals working in the
Web server branch of IT. The CCNA (Cisco Certified Network Associate) involves a web-based
teaching approach, with an in-class tutor to help with enquiries, set up on-line tests and oversee
practical tasks. The course runs over 4 semesters and can cost around £1000 per semester.
Acceptance Trials & Pilot Schemes
Part of the development budget will involve acceptance trials and pilot schemes. Acceptance
trials will likely involve installing a prototype solution that will allow the organisation to test their
ideas and modify them as required. In other words, should an idea fail, it is either discarded, or
improved. The newly improved idea is then subjected to the same testing regime until it
functions sufficiently well. The new server machine can be purchased and run alongside the
existing system, until it is safe to migrate to the new system.
Alternatively, an organisation may adopt a pilot scheme of sorts. This will involve building a
system entirely separate from the existing system, to serve as a tentative model for future
experiment or development. Like before, this system would be deployed and monitored, often
modified and re-assessed, until acceptable results are acquired. The model may well remain
after a satisfactory conclusion is reached, and an entirely new system purchased and deployed.
The testing system may well remain as a tool for future development, either for entirely new
systems or to test modifications to the existing system.
It should be noted that the deployment of any new system is very likely to impact applications
and/or data, causing them to function undesirably on the new model. This depends upon the
extent of the change. If the new system is the same in principle as the older one, i.e. moving
from Windows NT to Windows Advanced Server 2000, then perhaps all that will be required are
upgrades to the applications. However, if the new system is an entirely different one, with cross-
platform data formats, i.e. Unix to Windows, then none of the previous applications will run on
the new system, and a new suite of applications must be purchased. With data, on the other
hand, there are two options: either hard-copy formats are maintained and then painstakingly
re-entered, or it may actually be possible to convert the cross-platform data to the new formats.
There are companies who offer these services for a fee; one such example is noted below.
From the developed model to the actual model, costs continue. There are several factors that continue to
make demands of financial resources.
The IT industry is dominated by change. No sooner are new, cutting-edge technologies
installed than, it seems, they are out of date. Technology and growth move at a staggering
rate. On the High Street, the hire-purchase consumer making the final payment on equipment
purchased a scant 30 months ago knows fine well that the equipment is already antiquated.
This of course affects not just High Street consumers, but businesses also.
Money lavished on development costs and system implementation is not the end of the matter;
indeed, this is just the beginning. Granted, it is highly unlikely that such large sums of money
will be spent on one-off purchases again, but ongoing costs are intrinsic to the nature of this
industry.
The table below summarises some situations that incur the need for upgrade at one point or
another.
More traffic to Web server:
- Install faster access media (ADSL etc.)
- Upgrade RAM, CPU, hard disks
- Install firewall/proxy server
Increased security risk:
- Upgrade to more secure firewall software
- Upgrade to more complex monitoring software
More local users:
- Upgrade network by installing hubs, switches, bridges and routers
- Add new workstations
- Install additional server(s)
- Install more peripherals
- Install faster network cards
- Segment and subnet (switches and bridges)
- Upgrade to faster cable
- Upgrade to faster components (switches and hubs)
- Upgrade operating systems
New software incompatibilities:
- Upgrade application software
- Upgrade software for smoother integration
New hardware incompatibilities:
- Upgrade existing hardware
Upgrading can be kept at bay for a time, and limited for a while longer, but can never be avoided.
When installations are put in place, it is advisable, if at all possible to provide more than is
required. This way, your application will deal with growth without incurring immediate costs. As
use increases, the system can cope because of the measured over-compensation, wisely inbuilt
at the design stage.
Inevitably, the need will arise for new components, and this is where a well-designed system
can stave off higher costs for a time. Building your system with the ability to expand, will lessen
the amount of immediate spending, preventing it from being too costly in the early stages.
Applying some savvy at the design stage will allow upgrading to be carried out without having
to replace core components. For example, installing hubs on the network with spare ports will
allow network expansion without cost (apart from the new node!). If there are no spare ports,
then a new hub is required. Use latest software releases to prevent hardware and software
incompatibilities at an early stage. Use the fastest technologies affordable to avoid wholesale
upgrades. The basic rule here is more of everything as budget permits in the design stage.
Installing an IT solution, or in this case a network, often presents several options. The myriad
of choices can usually be contained and understood within a cost-versus-performance paradigm.
Even if you do not fully understand all of the options, one of these factors may well decide for
you. If money is no object, then you can simply tell your development team to build the best
and care not for the cost; however, if you live in the real world with the rest of us, then your
choice will fall much closer to the cost boundary.
Having decided upon the topology, the architecture, configuration, size and purpose of your
network, it is time to decide upon the infrastructure, that is, the internal organisation of
data access, data storage and, most importantly, application services.
There are basically two ways in which to organise this infrastructure:
- Thin client
- Fat client
Thin clients have been deemed the future of computing, not just for the business user, but
also for the home user. The thin-client computing environment consists of an application server,
a network, and thin-client devices. The thin client is a simple terminal or other computing
device used to connect to servers where applications and data are accessed and viewed. Thin-
client machines are so called because they contain little more than the minimum hardware
needed: typically no hard drive and no locally installed applications or complex operating
system. The system box is simply a case containing a motherboard, facilitating the connection
of peripherals such as mice, monitors and keyboards. Also connected to the motherboard is a
network card, providing the interface for the client to connect to the network.
This next generation of computing enables organisations to reduce costs due to the fact that the
terminal machines are “empty” boxes, costing considerably less than a modern workstation,
thus providing an attractive approach for cutting hardware costs.
Granted, thin clients do require a fat server loaded with resources: several fast CPUs, masses
of storage and plenty of RAM. The server will be responsible for all processing on the network
and for supporting multiple users, hence the need for extra resources. Even
though the cost of such a machine is likely to be high, the money saved on thin clients still
enables the purchase of a high performance server, with money to spare.
Thin clients lower the Total Cost of Ownership (TCO). As well as costing a fraction of the price
of what we normally pay for a powerful PC, they are more reliable than traditional computing
environments and require less maintenance. Management, systems support, and downtime are
the fastest growing and least manageable costs to owning technology. With computing power,
applications, and data centralised on secure servers, fewer technical staff can support many
more users. Thin clients have no need for hard drives, memory, or other hardware upgrades.
Software updates are distributed at once to all clients across the network, creating a few
standard desktop configurations to ease training, troubleshooting, and maintenance. Server
backups secure and protect data.
As the term suggests, fat clients are the very opposite of their thin-client relatives. Fat clients,
also known as bloated clients, are fully loaded PCs, desktops or laptops, containing a full suite
of PC applications, a PC operating system like Windows, and network connectivity software.
The software resident on these machines is known as "bloatware". These machines are costly
and known to be complex to manage and introduce the need for more, and better trained
support staff. However, machines of this type actually reduce network traffic since they run
applications and process locally, requiring network access for print and data services only.
Money can be saved on server costs, but probably not enough to justify a fat client
Which infrastructure is right for your environment? It would seem that they both are. The fat
client will continue to serve the heavy PC user or the mobile user, and the thin client will provide
PC capabilities to large organisations that are largely application and data-centric, with perhaps
the desire for Internet access.
Again, we are faced with the cost-versus-performance paradigm, and it is these two factors
that will decide which infrastructure is the correct one. In terms of objectivity there is no
right and wrong; choices are purely subjective and are made on requirement or taste. The only
requirement is that whichever you choose must fulfil the purpose of its implementation.
Once your system is tested and fully developed, the actual model has to be taken from its
prototype form to its actual installation. Installation goes beyond simply placing the equipment in
the desired location, sticking it all together and plugging it in.
The services of skilled professionals from many fields, some more specialised than others, must
be enlisted. Carpenters must fit workbenches and desks that the equipment will rest on,
electricians will install cable ducting and the cable itself. Network specialists will then assemble
the network, connecting servers, hubs, bridges, routers and patch panels, building a central
location, from which the network’s services emanate, known as a wiring closet or main
distribution facility (MDF). Workstations will be placed on benches and connected to the
relevant ports, which are wired to the MDF, thus giving them network connectivity. If the network
is designed to use fibre optic cable, perhaps as a backbone, then cable specialists are required
to splice and polish cables, a highly skilled job, and a very costly service.
Organisations must ensure that they hold valid licences for all deployed software. Failure to
manage licensing properly can, and often does, result in heavy fines. Licences vary in price, but
bulk buying will reduce the cost per licence. All installed network operating systems must have a
licence per user and per machine. Applications software, even though installed on a central
server and distributed across the network, must have a license purchased per user. Some
licences are calculated on the basis of the machine that is running the software.
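As a rough sketch of how per-user pricing with bulk discounts might be worked out (the unit price and discount bands below are invented for illustration, not any vendor's actual terms):

```python
# Illustrative per-user licence costing with hypothetical bulk discounts.
def licence_cost(users: int, unit_price: float) -> float:
    """Total licence cost, applying a discount for larger orders."""
    if users >= 100:
        discount = 0.20   # assumed 20% off for 100+ seats
    elif users >= 50:
        discount = 0.10   # assumed 10% off for 50+ seats
    else:
        discount = 0.0
    return users * unit_price * (1 - discount)

print(licence_cost(30, 40.0))    # small order, no discount
print(licence_cost(120, 40.0))   # bulk order, discounted
```

The point of the sketch is simply that the per-licence cost falls as the order grows, which is why bulk buying matters to the budget.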
Recently, the development of ASPs (application service providers) has begun to change the way
licensing works. ASPs offer users the chance to operate on a pay-per-use basis, renting out
software rather than charging a flat fee for installation on a company's own systems. This
model is forcing traditional licensing to undergo significant changes across the industry.
Microsoft’s Office 10 offers a simple choice for users: they can either acquire Office on an open-
ended licence for a single payment as normal, or make a much smaller down payment followed
by an annual subscription. The subscription entitles the user to all updates and point revisions
automatically. At the end of the 12-month period, users have the option of renewing their
subscription. If they choose not to, Office allows existing documents to be viewed and printed,
but nothing new can be created or saved.
Licensing has become a very complex and confusing area for network administrators to
manage, now that the conventional method is undergoing such change. In the ensuing disorder,
one thing is for sure; this is an inescapable and costly requirement.
External Charges
Since we live in a world subject to entropy, sooner or later wear and tear will affect IT
equipment, causing devices like hard drives to fail, backup tapes to wear out, keyboards
and mice to break and printers to burn out. The environment also affects equipment: changes
in temperature, airborne dust particles, smoke and so on all take their toll. Nor can anyone
predict disaster events such as flooding or fire.
Helpdesk
Providing help facilities to users is another cost consideration. The level of internal user-support
will vary, depending on the size of the company and the extent of the support facility. Smaller
organisations may well manage user-support with the services of an individual or small team,
but large enterprise networks will require a dedicated two-tier system, called a helpdesk.
Users with problems contact the helpdesk by telephone and state the nature of their fault to the
helpdesk operative. The operative immediately logs the fault, and other relevant information into
the online database system. Next, the operative attempts to resolve the problem using
various tools of the trade. Online pre-programmed aids or manuals can be used, guiding
the operative through a series of questions regarding the fault and suggesting possible solutions.
Remote control software may also be used. This method allows the operative to take control of
the problematic machine, and to control it as though it were a local machine. The operative can
then attempt to fix the machine. This software can allow remote connectivity to the next room or
across the world.
Should these measures fail, the helpdesk assistant logs the fault, and assigns it a priority. The
priority will determine whether or not the fault is logged onto the system for the next available
technician to pick up, or if the technician should be contacted directly.
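The logging-and-escalation workflow described above might be sketched as follows (the priority names and the rule that critical faults bypass the queue are invented for illustration):

```python
# A minimal sketch of helpdesk fault logging and escalation.
# Priority levels and escalation rules here are assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Fault:
    user: str
    description: str
    priority: str  # "low", "normal", or "critical"

@dataclass
class HelpDesk:
    queue: List[Fault] = field(default_factory=list)

    def log(self, fault: Fault) -> str:
        """Log a fault; critical faults go straight to a technician."""
        if fault.priority == "critical":
            return f"Technician contacted directly for {fault.user}"
        self.queue.append(fault)
        return f"Queued for next available technician ({len(self.queue)} waiting)"

desk = HelpDesk()
print(desk.log(Fault("jsmith", "mouse not working", "low")))
print(desk.log(Fault("akhan", "payroll server down", "critical")))
```

A real helpdesk system would persist faults to the online database mentioned above rather than an in-memory list, but the triage logic follows the same shape.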
In most cases the above system will take care of user-support requests; however, problems
that lie outside the capabilities of the help team are escalated to an external organisation,
which will then resolve the problem.
No organisation is totally self-sufficient. Even if they are tremendously efficient at handling
users’ requests and problems, there will undoubtedly be situations that they are unable to
resolve, and so the services of a maintenance contract will most likely be employed, albeit to
varying degrees. External aid of this nature provides a safety net of sorts. Maintenance
contracts are reassuring, for there will be times when the skills, knowledge or resources
required for a job do not exist within the company; contractors can then come to the
rescue. This is the thinking that leads most enterprises to invest significant amounts of money
every year with support companies.
It is up to the network administrator and/or similar staff to decide what level of support is
required. Contracts can be negotiated to handle all repairs, only major repairs, or perhaps
only emergencies. One important factor in negotiating a contract is to agree its extent. Will
the equipment be sent to the contractor’s base, or will the technician attend the site? It is
also very important to agree on a response time and a
resolution time. If your company is running a mission-critical application, then it is crucial to
ensure that the contractor can offer the level of service that is being paid for. Contractors can
even provide a swap-out option. That is, if equipment fails, the contractor is able to appear at
your organisation’s door with a fully working replacement, which stands in for the defective
equipment until it is repaired. Some large companies, like IBM, can swap out not just single
machines but entire systems. Once the replacement is in place, you simply configure the
network to your needs, install applications and reinstate data from backups.
So, maintenance contracts can vary according to an organisation’s needs and size; however,
regardless of needs it is important to observe the following:
- When negotiating a maintenance contract firms should decide exactly what they will need from
their maintainer. Network managers should ensure that they drive the negotiations, rather than
allowing the maintenance company to set the agenda.
- Network managers should detail all requirements in an invitation to tender, and make sure the
winning maintenance firm agrees to it. Bids should leave nothing to assumption.
- Firms should insist on cost breakdowns for both the basic service and any value add-ons.
They should make sure they know what they are paying for so they can tell if it is worth it.
- Firms have a right to know about the maintenance staff who will be working on their behalf.
Inspecting CVs, site visits, and lots of face-to-face meetings can make sure that IT managers
know what level of service they can expect.
Connecting your organisation’s network to the Internet is often another unforeseen cost. Costs
are incurred through leased lines. As mentioned in the networks module, there are several
methods available for connection of your Web server to the Internet. These broadband solutions
vary in performance, some being faster and more reliable than others, and are made available
usually for a monthly charge. The table below shows various options that are available from BT.
Notice, the faster your connection, the greater the expense.
Product              Max. Speed (kbit/s)   Installation   Monthly charge   VAT
Business 2000PLUS    2000                  £260           £159.99          Excluded
Business 1000PLUS    1000                  £260           £129.99          Excluded
Business 500PLUS     500                   £130           £99.99           Excluded
Business 500         500                   £75            £39.99           Excluded
Home 500             500                   £74.99         £39.99           Included
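Taking the figures from the table, the first-year cost of each option (installation plus twelve monthly payments, VAT treatment ignored for simplicity) can be compared directly:

```python
# First-year cost for each BT product in the table above:
# installation plus twelve monthly payments (VAT ignored).
products = {
    "Business 2000PLUS": (260.00, 159.99),
    "Business 1000PLUS": (260.00, 129.99),
    "Business 500PLUS":  (130.00, 99.99),
    "Business 500":      (75.00, 39.99),
    "Home 500":          (74.99, 39.99),
}

for name, (installation, monthly) in products.items():
    print(f"{name}: £{installation + 12 * monthly:.2f}")
```

Laid out this way, the recurring monthly charge quickly dwarfs the one-off installation fee, which is why the monthly figure deserves the most scrutiny when choosing a product.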
E-Commerce is a massive growth area, where colossal sums of money are being made and
spent every day. This is largely to do with the hype of the Internet and on-line shopping. The
Internet is growing exponentially, and will continue to grow for some time to come. This, coupled
with good advertising, can provide a solid foundation from which to launch a stake in the
Internet and e-commerce boom.
E-commerce is big business. The fact that people want it is probably the biggest lure for
companies to jump onto the bandwagon, but there are other factors, making e-commerce a
good sense solution.
- Lower transaction costs: If the site is implemented well, the web can significantly lower order-
taking costs, and customer-service costs after the sale, through automated processes.
- Variety for shoppers: It gives people the opportunity to shop in different ways.
- The ability to build an order over several days
- The ability to configure products and see actual prices
- The ability to compare prices between multiple vendors
- The ability to search large catalogues easily
- Larger catalogues: On the web, a company can display its entire range. If it were to
print a glossy catalogue to do the same job, it would be too big. For example, Amazon sells 3
million books; imagine trying to fit all of them into a paper catalogue. The Web cuts out
printing and distribution costs.
- Global availability: Anyone with an Internet connection, in any part of the world, can access
online services without costing the company a penny over and above the marketing costs. This
ubiquitous presence is far further-reaching than a catalogue drop.
- New business model: E-commerce allows people to create completely new business models.
A mail order company has high costs for staff and catalogue printing & distribution. However in
e-commerce these costs fall practically to zero.
There are several ways in which to implement an e-commerce application. Consider the following:
Out of the Box / Server-Based
Out-of-the-box is an ideal that eliminates, or at least reduces, the time-consuming and complex
work of e-commerce site building, by providing all of the various functions of an e-commerce
solution from one source. Out-of-the-box is an “instant”, low-price solution requiring minimal IT
resources for implementation and operation. It has a “Lego” mentality, shielding the user from
heavily involved technical programming and scripting simply by assembling the various software
components into an established framework. This is done through a GUI, where the developer
moves the software objects around, analogous to a child with Lego bricks, snapping them into
place and building a functional entity.
Microsoft’s Commerce Server 2000 provides a comprehensive set of features to let developers
quickly build scalable, user-centric, business-to-consumer and business-to-business e-
commerce sites. Commerce Server 2000 offers robust and easy-to-implement functionality that
makes it the ideal solution for building e-commerce applications. Its features include the following:
Development Tools - Commerce Server 2000 gives developers the power to quickly build and
deploy effective e-commerce sites. Developers get a fast start with a choice of two starter
applications that provide comprehensive e-commerce functionality:
- Personalisation, merchandising, catalogue search, customer service, and business analytics.
- Secure user authentication and group access permissions, purchase order and requisition handling.
- Built-in online auction capabilities.
- Code samples and in-depth documentation to speed up and simplify development efforts.
Administrative Tools - simplify and centralise administrative tasks such as site configuration,
deployment, operations and maintenance, reducing the cost of ownership.
Partners – Microsoft, in partnership with independent software vendors, ensures the highest
availability and quality possible for solutions, including credit card validation, taxation, shipping
and handling, and content management.
Profile System - helps you manage information about millions of users and groups of users. In
addition, it establishes secure authentication of the site to ensure users have access only to
authorised areas of your site. You can also serve users custom content, such as special pricing
or products, based on the user's profile.
Targeting System - enables one-to-one marketing, determining the most appropriate content
(ads, discounts, up-sells, cross-sells, catalogue data, and more) to provide to a given user in a
given context, based on explicit user preferences. There are many more features in Commerce
Server 2000. For more information, visit Microsoft’s web site at
http://www.microsoft.com/commerceserver/default.asp The price of this solution is around
£6,000, with the same being paid again for each licence required. Licensing is calculated on a
per-CPU basis. In other words, if the server machine running Commerce Server 2000 has 4
CPUs, then expect to pay £24,000 for your solution!
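The per-CPU arithmetic above is simple but worth making explicit, since it is the CPU count of the server, not the user count, that drives the bill:

```python
# Per-CPU licensing sketch for Commerce Server 2000, using the rough
# £6,000-per-CPU figure quoted above.
PRICE_PER_CPU = 6000

def commerce_licence(cpus: int) -> int:
    """Total licence cost for a server with the given number of CPUs."""
    return cpus * PRICE_PER_CPU

print(commerce_licence(4))  # the 4-CPU server from the example
```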
Currently, one of the most fashionable technologies within the Internet is “Push” technology.
Contrary to the “pull” world of web pages, where users request data from another program or
computer, via a web browser, “Push” enables services to be targeted at the user, without them
having to initiate the information collection activity. Instead, information finds the user. In other
words, an automated retrieval of data from the Internet, corporate data sources and e-
commerce web sites, is delivered directly to specific user populations in a personalised manner.
“Push” technology allows you to become an integral part of your customers’ daily lives by
putting your brands and services directly in front of them every day. Key messages and personalised
information that they have requested, and critical information can be delivered to their desktop,
screen saver, any wireless device, mail account and more. “Push” amplifies and extends your
current Web presence while providing new and valuable services. Your customer is directed
back to your Web site for more in-depth information. This technology eliminates the need to wait
for customers to visit your site, instead allowing an organisation to take their business to their
customers.
In order for companies to be able to use this technology, they require their customers to
download and install a piece of client software onto their computers. The software interacts with
the Web, and provides the interface through which context sensitive content is delivered.
Infogate, a Web-based company, claim to have over 10 years’ experience delivering critical
content on behalf of partners to their customers and end-users, and also claim to increase
customer retention, and thus increase revenue.
Another good example of “push” technology is the PointCast news system, which broadcasts
news, sports and weather covering topics selected through the desktop client. In the case of
PointCast, the content is then displayed as an attractive screensaver application.
Perhaps an unusual point about “push” broadcasters is that they still use the HTTP protocol,
and though they are called “push”, it is the client that starts the session with the server using
HTTP.
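This client-initiated character of HTTP "push" can be sketched as a scheduled pull. The `poll` function and the simulated feed below are invented for illustration; in a real client, `fetch` would be an HTTP GET that the client opens:

```python
# A minimal sketch of HTTP-style "push": the client polls on a schedule,
# so every session is client-initiated even though content appears to be
# pushed. `fetch` stands in for an HTTP GET against the broadcast server.
def poll(fetch, rounds: int) -> list:
    """Call `fetch` once per round, collecting any new content returned."""
    received = []
    for _ in range(rounds):
        item = fetch()          # client opens the session each time
        if item is not None:    # None models "nothing new this round"
            received.append(item)
    return received

# Simulated server feed standing in for an HTTP endpoint:
feed = iter(["headline 1", None, "headline 2"])
print(poll(lambda: next(feed), 3))  # ['headline 1', 'headline 2']
```

This is exactly why HTTP-based push traverses firewalls easily: from the network's point of view it is ordinary client-originated web traffic.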
BackWeb is another Web-based company also offering “push” technology. However, BackWeb
do not use HTTP, but have their own set of proprietary protocols that allow a non-HTTP
session to be established between the client and the server. Again, client software is required,
and has to be downloaded from their site.
Another method of creating a “push” server is to program exactly the type of broadcast station
you want. A good software development system for this is Marimba’s Castanet software, which
is designed to push Java applets and Shockwave animation, entertainment and interactive
multimedia content to specific users.
Marimba's Server Management also includes integration with Microsoft Internet Information
Server (IIS). This functionality allows Marimba’s Server Management product family to package
and distribute commonly used server applications that are based on Microsoft IIS, such as
Microsoft Commerce Server 2000 and Allaire’s ColdFusion.
Marimba’s solutions can be priced by following the “About Us” link on their Web site, where
contact information is available. The Marimba site also offers Flash demos, white papers,
datasheets and Web seminars for download, under the “Product” link.
Intelligent Agent Software
E-Commerce is changing the way business is getting done in the Information Age. To gain a
competitive edge, businesses are in need of new ways to get ahead of the competition, new
models and a new infrastructure. To address this need, an inter-organisational electronic
commerce model is being developed.
According to this model, different users are represented by autonomous software agents
interconnected via the Internet. The agents act on behalf of their human users/organisations to
perform information-gathering tasks, such as locating and accessing information from various
sources, filtering unwanted information, and providing decision support.
Currently, existing software agents mostly help to search for product and price information, and
to validate purchasers’ credit, billing and accounting information. However, soon will come the
time when agents will be able to match buyers and sellers against given criteria, find prices and
make bids on behalf of users, notify them of new books or CDs, alert them when specific
products are available at a specific price, and so on.
The Intelligent Fridge?
A system has been devised that frees a human home owner from having to open their
refrigerator and make a shopping list of low-stock items, and then frees them from having to
leave their home to purchase the required goods!
Granted, the user has to place goods in designated areas for the system to work; e.g. milk
would always have to be in the same place, as would eggs. Sensors would then be able to
determine when items needed to be replaced and subsequently send a message from the fridge,
over the Internet, to the local grocery store. There, the order is packaged up and delivered to the
consumer's door, thanks to the fridge! This is a somewhat contrived method of software agency,
since it lacks any real intelligence, but perhaps in the future software agents will be able to
search the databases of several stores within a given area, and then order the stock from the
cheapest provider. Why stop at milk, indeed why stop at food? Imagine being able to pick up the
telephone and dial an international number, whereupon a software agent interfaces on behalf of
the caller to collect bids from various carriers, assess the bids and select the cheapest rate per
minute!
In an e-commerce context, software agents could revolutionise the way we trade on the Web.
Perhaps in the future we will see dramatic developments that have agents operating without
direct intervention from humans (apart from the initialisation stage), exercising control over their
own actions and internal state: negotiating air fares, car prices, book prices, holiday deals and
so on. Who knows what the future may bring!
See http://www.aaai.org/Press/Books/Bradshaw/bradshaw.html for more information on the
development and potential of intelligent software agents.