DOT_NET by harish1991


Web Services

Web Services allow access to software components through standard web protocols
such as HTTP and SMTP. Using the Internet and XML, we can now create software
components that communicate with others, regardless of language, platform, or culture.
Until now, software developers have progressed toward this goal by adopting proprietary
componentized software methodologies, such as DCOM; however, because each
vendor provides its own interface protocol, integration of different vendors' components
is a nightmare. By substituting the Internet for proprietary transport formats and adopting
standard protocols such as SOAP, Web Services help software developers create
building blocks of software, which can be reused and integrated regardless of their
location.



Web Services in Practice

You may have heard the phrase "software as services" and wondered about its
meaning. The term service, in day-to-day usage, refers to what you get from a service
provider. For example, you bring your dirty clothing to the cleaner to use its cleaning
service. Software, on the other hand, is commonly known as an application, either off-the-shelf or custom-built by a software firm. You typically buy the software (or, in our case, build it). It usually resides on some sort of media, such as a floppy diskette or CD, and is sold in a shrink-wrapped package through retail outlets.
How can software be viewed as services? The example we are about to describe might
seem far-fetched; however, it is possible with current technology. Imagine the following.
As you grow more attached to the Internet, you might choose to replace your computer
at home with something like an Internet Device, specially designed for use with the
Internet. Let's call it an iDev. With this device, you can be on the Internet immediately. If
you want to do word processing, you can point your iDev to a Microsoft Word service
somewhere in Redmond and type away without the need to install word processing
software. When you are done, the document can be saved at an iStore server where you
can later retrieve it. Notice that for you to do this, the iStore server must host a software
service to allow you to store documents. Microsoft would charge you a service fee based
on the amount of time your word processor is running and which features you use (such
as the grammar and spell checkers). The iStore service charges vary based on the size
of your document and how long it is stored. Of course, all these charges won't come in
the mail, but rather through an escrow service where the money can be piped from and
to your bank account or credit card.
This type of service aims to avoid the need to upgrade your Microsoft Word application.
If you get tired of Microsoft Word, you can choose to use a different software service
from another company. Of course, the document that you store at the iStore server is
already in a standard data format. Since iStore utilizes the iMaxSecure software service
from a company called iNNSA (Internet Not National Security Agency), the security of
your files is assured. And because you use the document storage service at iStore, you also benefit from having your document encrypted when it is stored, and authenticated and decrypted when you view it.
All of these things can be done today with Web Services.
In fact, Microsoft has launched a version of the "software as service" paradigm with its
Passport authentication service. Basically, it is a centralized authentication service that
you can incorporate into your web sites. At sites using the Passport authentication
service, it's no longer necessary to memorize or track numerous username/password
pairs.
Recently, Microsoft also announced .NET My Services (formerly codenamed
"HailStorm"), a set of user-centric Web Services, including identification and
authentication, email, instant messaging, automated alert, calendar, address book, and
storage. As you can see, most of these are well-known services that are provided separately today. Identification and authentication is the goal of the Passport project. Email might map to Hotmail or any other web-based email service. Instant messaging and automated alerts should be familiar to you if you use MSN Messenger Service or AOL Instant Messenger. A calendar and address book are usually bundled together with web-based email services. Consolidating these user-centric services and exposing them as Web Services would allow users to publish and manage their own information.
A .NET My Services customer can also control access permission to the data to allow or
restrict access to content. These services also allow other users, organizations, and
smart devices to communicate and retrieve information about us. For example, how
many times have you been on the road with your mobile phone and wanted your contact
list from Outlook? Your mobile phone should be able to communicate with your address
book Web Service to get someone's phone number, right? Or better yet, if your car
broke down in the middle of nowhere, you should be able to use your mobile phone to
locate the nearest mechanic. The user is in control of what information is published and
to whom the information will be displayed. You would probably have it set up so that only
you can access your address book, while the yellow pages Web Service that publishes
the nearest mechanic shop to your stranded location would be publicly accessible to all.
Currently, users store important data and personal information in many different places.
With .NET My Services, information will be centrally managed. For example, your
mechanic might notify you when it's time for your next major service. Or when you move
and change your address, instead of looking up the list of contacts you wish to send the
update to, .NET My Services will help you publish your update in one action.
The potential for consumer-oriented and business-to-business Web services like .NET
My Services is great, although there are serious and well-founded concerns about
security and privacy. In one form or another, though, Web Services are here to stay, so
let's dive in and see what's underneath.



Introduction



Information is only useful if you can find what you're looking for. The core idea behind the Web is that documents should not be locked into a hierarchical structure, but should be interlinked in a way that makes it easy for readers to find information related to their current document. This idea didn't start with Tim Berners-Lee: Vannevar Bush outlined a model for organizing electronic documents in an article in the Atlantic Monthly in 1945. .NET is the latest vision for how people can arrange and display data from many different sources in a way that makes sense depending on what the client needs.

At the Professional Developers' Conference in Orlando last July, Microsoft unveiled its
latest architecture, .NET. Its various features and components were explained to the
large audience by a number of speakers.
Microsoft .NET (formerly Next Generation Windows Services) is the umbrella term for Microsoft's strategy to move from a client-centric model to a network-centric model. It took Microsoft a while to come around to this point of view; Sun has been preaching "the network is the computer" for the past decade, and Microsoft only introduced its first terminal-services-enabled operating system in 1998. However, just as Microsoft embraced the Web once it finally realized that there was something in it, the company is now embracing the server-based model.

Other parts of this strategy are manifest in Microsoft's work on Windows 2000 Datacenter, its announcement of a finalized model for licensing Windows applications through application service providers (ASPs), and its decision to include terminal services in the core versions of the Windows 2000 Server operating systems. For a company that made its name around the personal computer, the shift is significant.

.NET comes in three chunks. On the server side, it is operating systems such as Windows 2000 Datacenter, which Microsoft has been positioning to compete with the mainframe market. In the middle, it is XML, combined with the Simple Object Access Protocol (SOAP), to expose information in sources such as databases and spreadsheets so that developers can call it with XML. On the client side, it is operating systems that support XML parsing to display the information based on the tags assigned to it.

Why is Microsoft moving to a more network-centric model?

According to Steve Ballmer, it's because this is what customers are looking for. More to the point, it's what customers will be looking for. Deploying applications in a server-centric environment reduces client-side administration, simplifies application updates and installation, and supports a more mobile computing environment. The client-centric model is hard for network administrators to support and even harder for home users. As other options become more viable, client-centric computing will be supplemented with a server-based model.



So what is .NET? The term is essentially a new marketing label that Microsoft is sticking on existing and future products. The .NET label now features on server products such as BizTalk Server 2000 and Application Center 2000, which are based on Windows DNA 2000 technology. The most interesting aspects of .NET, however, lie in the development platform, languages, and protocols it emphasizes.

By bringing us .NET, Microsoft is presenting us with a new platform designed to facilitate
development of interoperable Web applications, based on a totally new architecture. For
Microsoft, .NET will be a way of "programming the Web," no less. Today, the first
versions of Visual Studio .NET are available, and they enable us to sketch out a
relatively accurate profile of how the .NET platform is likely to look in the long run.


So what is .NET?
There are lots of ways to think about .NET, but here's the way I prefer: over the last 10 years or so, Microsoft has gradually been improving the Windows platform and the associated APIs and developer tools. So we've seen, for example, the emergence of COM, then DCOM, then COM+ to enable reuse of software. For data access, we had first ODBC, then OLE DB and ADO in various versions. For dynamic web sites we had different versions of ASP. And for programming there was a variety of languages: C++, VB, scripting languages, and so on.
One problem with this is that as the tools and languages have got more sophisticated
they have also got more complex, due to the need to support earlier tools or versions of
the tools. So for example, these days, COM is an extremely powerful way to package up
code for reuse, but if you want to use COM effectively the learning curve is frightening.
I've routinely heard people talk about 6 months as the minimum time for an experienced
developer to learn COM. And if you want to master DCOM or COM+, you've got to learn
COM first! Add to this the fact that many people have (rightly in my view) complained
that many of these APIs and tools are in some ways badly designed, and you can see
why Microsoft has had such a poor reputation amongst many developers up until now.
.NET appears to be a serious attempt to change all that. Rather than just updating the
existing tools and languages (and so making them even more complex), Microsoft has
started from the ground up again, and developed a completely new framework, within
which most programming tasks can be easily accomplished. Not only that but they
haven't destroyed backward compatibility. Your old code will still work fine; it's just that
new code can be written more easily. .NET has had a very favorable reaction on newsgroups: there seems to be a general consensus that .NET is well designed, and that Microsoft is seriously listening to the needs of developers as far as .NET is concerned.
The .NET framework differs from previous developer frameworks because it actually sits
between your code and the operating system, providing the environment that is seen by
your code. Before .NET, if you coded in C++, VB6, or any other language, your code still
got compiled to native executable code and executed directly by the operating system.
With .NET, however, code is processed by the .NET framework as it is executed. This
has a number of benefits, including improved security and cross-language
interoperability. There is a small impact on performance, but for most applications that is
minimal – typically between 5% and 10%. This also means that managed applications
will only run on machines that have the .NET framework installed. .NET will install on
Windows 98/ME/NT4/W2K/XP, but not Windows 95




Microsoft's .NET is "1 part marketing, 1 part paradigm shift, and 3 parts cool new products!"



Marketing and Paradigm Shifts

First, an important part of Microsoft's .NET initiative is a new and exciting platform called the .NET Frameworks, which I will discuss shortly. Second, Microsoft has refocused across its various product groups to address software development in an interconnected world.


Before we jump headlong into the .NET initiative, let's review how the software industry is changing. Here is my take on the current thinking.

Software and the Internet

The purpose of computers is to manage information. The nature of the information varies a lot from application to application, but at the core it is still all about information. In recent years, the Internet has largely convinced the general public that computers do indeed have the potential to simplify their lives in terms of information management. As a result, consumers' expectations of the Internet, and of technology in general, have taken a huge leap!
People need to create information easily. They need to move, share, and organize
information flexibly. They need to know what is going on, find the latest price, and get to
their movie on time. People have stopped being impressed merely by the fact that
computers work at all. They now want computers to tell them where they can get a New
York steak in a semi-romantic setting within five miles at 7:30 tonight; "OK, now make a
reservation for two." They want their shipping information to be remembered from one
web site to the next. And each web site should be personalized to individual tastes as
well. All of these "needs" are about the organization and communication of information.
By the way, most of this information already exists without computers or the Internet
(such as which restaurants have available tables, or the fact that a customer wants to
make reservations), but customers increasingly expect technology to move and react to
this information naturally.
For all of this to happen, somebody somewhere has to write software. But this isn't the software that you pick up off the shelf and install on the computer. Half of the software required to implement all of this doesn't even run on the consumer's computer. This is called software as a service. Remember this term.
People will pay to be served. And if your company's service deals with information or communication, then you need software as a service to serve your customers effectively. People also want these services to come from thousands of vendors, not just one; this will improve both the quality and the price of whatever it is they are buying. Additionally, many of these services will have to work together seamlessly, no matter who implemented them. Suddenly, the demands on the average software developer are getting pretty steep!
Before talking solutions, I want to describe a specific scenario. Imagine that you are a realtor and you want to create a web site that your clients can use to keep track of their home buying or selling process. I have seen real estate sites that offer some nice features, such as up-to-date listings, mortgage calculators, and the like. Now imagine that you want to integrate some more advanced features, such as real-time loan-approval status or escrow status. Suddenly the computer that runs your web site has to exchange information with computers from the escrow company and the bank. This isn't an impossible problem to solve, but it gets tougher when you consider that the customer wants to choose from a selection of escrow companies, and you don't want to lose their business just because you don't support a particular escrow company's data protocol.
Tough problems like these have been solved in the past by defining standards. Standards are great, but they carry with them the burden of meeting everyone's needs. This often makes their creation slow and their implementations tedious. If you were this realtor, what you would really want are a few general-purpose standards and a development platform robust and simple enough to meet your agile business needs. Imagine that it took your web developer only half a day to understand and incorporate a new escrow company's data interface, and another half-day to test it. All of the communication details just worked. And your developer did the whole thing without once picking up the phone and calling a developer from the escrow company. Now you are in business!
This is Microsoft's .NET initiative. And in reality, .NET is just the cherry on the tree of an entire industry swaying in this direction. Standard protocols like SOAP will make data exchange so simple that your software will be able to keep up with your business. Simple data standards like XML will expose your business's information to anyone who needs to consume it, and vice versa. Finally, the platform that brings these features to developers in a simple, consistent, reliable, and scalable fashion will be a major contender for the foreseeable future. This platform is the Microsoft .NET Frameworks.


Aims and objectives:

The goal that Microsoft has set itself is ambitious, to say the least, both in technical and
strategic terms. The new .NET platform has not evolved from the DNA 2000 technology
currently available; rather, it is a totally new technology that is likely to shake up more
than a few deep-rooted ideas.

.NET is an entirely new platform and technology that introduces a host of new products, whose compatibility with existing technology is not always guaranteed. It offers support for 27 programming languages, which share a hierarchy of classes providing basic services. .NET applications no longer run in native machine code, having abandoned Intel x86 code in favor of an intermediate language called MSIL that runs in a sort of virtual machine called the Common Language Runtime (CLR).

In addition, .NET makes intensive use of XML, and places a lot of emphasis on the
SOAP protocol. Thanks to SOAP, Microsoft is hoping to bring us into a new era of
programming which, rather than relying on the assembly of components or objects, is
based on the reuse of services. SOAP and Web Services are the cornerstones of the
.NET platform.

- The IIS web server has dropped its effective but fragile multithreaded model in favor of a multiprocess model reminiscent of the Apache model.

- ASP technology gives way to ASP.NET (initially called ASP+), where interpreted scripts are replaced by code compiled on first invocation, as with JSPs.

- Win32-era libraries such as ATL and MFC are replaced by a coherent set of base framework classes.

- VB.NET no longer guarantees backward compatibility with VB6, as the language receives a lot of additions (inheritance, among others) in order to comply with the Common Language Specification (CLS).

- COM+ 2.0 is a totally original distributed component model that does not retain any element inherited from the COM/DCOM/COM+ lineup. To this end, COM+ 2.0 no longer uses the Windows Registry to register local or remote components: deployment of components in .NET takes you back to the good old days when installing a program meant copying files into a directory, and uninstalling involved nothing more complicated than deleting the files.

- A new programming language called C# ("C sharp") is born: a modern object-oriented language, something of a cross between C++ and Java. C# was created by Anders Hejlsberg, architect of a number of languages and tools at Borland, including the famous Delphi.

- The new programming model, based on SOAP and Web Services, fundamentally changes the way in which applications are designed, and opens the way for a new profession: online provision of Web Services.
These changes are moving towards a looser coupling between the Windows 2000
operating system and upper layers offering application server services.
Today, with .NET, Microsoft is presenting a vision of an Internet made up of an infinite number of interoperable Web applications that together form a global service-exchange network. These Web Services are based on the Simple Object Access Protocol (SOAP) and XML. SOAP was initially submitted to the IETF by DevelopMentor, Microsoft, and UserLand Software. Today, a number of vendors, including IBM, are heavily involved in SOAP.
Not only are these Web Services likely to develop on the Internet, but they may also change the way we plan enterprise information systems, with SOAP systematically used as application-integration middleware, playing the role of a simple but efficient, standard EAI layer. An enterprise information system could then itself become a network of front- and back-office applications that interoperate through SOAP and, reciprocally, use the Web Services that they implement.
IBM and, more recently, Oracle have announced offerings that enable the creation of
Web Services. IBM, which has long been a supporter of SOAP, offers its "Web Services
Development Environment" on its Alphaworks site, while Oracle has also just adopted
SOAP, within 9i. Oracle has dubbed its offering "Dynamic Services", but it does not
seem to be clearly defined as yet.




The .NET Frameworks

Did I just call the .NET Frameworks a "platform"? Did I mean development platform, or did I mean operating system? Well, the answers to these questions are "Yes, I did" and "Both." So let me begin to describe the .NET Frameworks.
At the core of the .NET Frameworks is a component called the Common Language Runtime, or CLR, which is a lot like an operating system that runs within the context of another operating system (such as Windows ME or Windows 2000). This is not a new idea. It shares traits with the Java Virtual Machine, as well as with the environments of many interpreted languages, such as BASIC and LISP, which have been around for decades. The reason for a middleware platform like the CLR is simply that a common OS like Windows is often too close to the hardware of a machine to retain the flexibility or agility required by software targeting business on the Internet. Software running on the CLR (referred to as managed code) is exceptionally agile!
Unlike interpreted languages, managed code ultimately runs in the native machine language of the system on which it is launched. In short, developers write code in any of a number of languages. The compiler generates binary executable software in a p-code format called Common Intermediate Language, or CIL for short. When the software is launched, the CLR recompiles, or JIT-compiles (Just-In-Time), the CIL into native code, such as x86 machine language. Then the code is executed at full speed. Again, p-code technology is not a new idea (Pascal p-code compilers have existed since the mid-seventies).
Another component of the .NET Frameworks is a massive library of reusable object types called the Frameworks Class Library, or FCL. The FCL contains hundreds of classes to perform tasks ranging from the mundane, such as file reads and writes, to the exotic, such as advanced cryptography and web services. Using the FCL, you get software as a service with trivial development costs.
The CLR is intrinsically object-oriented; even its CIL (the p-code, which can be viewed as a virtual assembly language) has instructions to manipulate objects directly. The Frameworks Class Library reflects the platform's object-oriented nature. In fact, the FCL is the most extensive and extensible class library I have ever worked with.
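The point about trivial development costs can be illustrated with any batteries-included class library. In the sketch below, Python's standard library stands in for the FCL: one mundane task (file reads and writes) and one exotic one (cryptographic hashing), each a couple of calls. The file name is invented for the example.

```python
# Sketch: the "rich class library" idea, with Python's standard
# library standing in for the FCL.
import hashlib
import os
import tempfile

# Mundane task: write and read back a file.
path = os.path.join(tempfile.gettempdir(), "fcl_demo.txt")
with open(path, "w") as f:
    f.write("hello, services")
with open(path) as f:
    text = f.read()

# Exotic task: hash the contents with a ready-made crypto class,
# with no hand-rolled cryptography in sight.
digest = hashlib.sha256(text.encode()).hexdigest()
print(digest[:16])
```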
Finally, the .NET Frameworks contains a collection of tools and compilers that help to make programming for this new environment productive and enjoyable. Up until now I have made little mention of C# (pronounced "C sharp") or Visual Basic .NET. The reason is that the real guts of this new environment is the CLR. However, over twenty language compilers are currently being designed for the .NET Frameworks, including five offerings from Microsoft: Visual Basic, C#, C++, JavaScript, and CIL.
The CLR, and the .NET Frameworks in general, are designed in such a way that code written in one language can not only be used seamlessly by another language, but can also be naturally extended by code written in another programming language. This means that (depending on the needs of a project's workforce) developers will be able to write code in the language with which they are most comfortable, and still reap all the rewards of the .NET environment as well as the efforts of their coworkers!



.NET architecture
What exactly do we mean by .NET? In Microsoft marketing speak, all forthcoming versions of desktop and server software will carry the ".NET" label; this will be the case for the Office suite, the SQL Server database, and BizTalk Server.
With this in mind, we can describe the .NET architecture as follows:

- It is a set of common services that can be used from a number of object languages.

- These services are executed in the form of intermediate code that is independent of the underlying architecture.

- They operate in a runtime (the Common Language Runtime) which manages resources and monitors application execution.

The primary goal of .NET is to provide developers with the means to create interoperable applications using Web Services from any sort of terminal, be it a PC, PDA, mobile phone, and so forth.




The Common Language Runtime (CLR)
The CLR is a specification that allows the execution of programs in any language that
can be compiled down to an intermediate language (IL). In the immediate future, this
means it should be possible to run traditional non-Windows languages in a Windows
environment. In spite of the flak you will get from language fanatics, no single language
can lay claim to being the best tool for all purposes. We are always going to need
special purpose languages.

This is where .NET could score one over a J2EE (Java 2 Enterprise Edition) based solution. J2EE/Java is powerful, well designed, and widely adopted, but where it is platform-independent, .NET would be language-independent as well.

The CLR could be the formula that allows scaling beyond the Windows platform. The
.NET framework will provide enterprise environment features like memory and thread
management, automatic garbage collection, process setup and teardown and granular
security for code.

Visual Studio .NET plans to be an open language framework and development environment supporting more than a dozen languages, and the common libraries that the CLR provides should allow easy calling and debugging of code written in one language from another language. Success for the CLR will depend on the scope and quality of its ports to other platforms.

By all accounts, it will be more than a year before a full-fledged .NET framework will be
available. .NET could very well be the make or break strategy for Microsoft. But whether
Microsoft delivers on .NET or not, the foundations and the concepts that .NET is based
on are sound and will persist.

Create your applications as loosely coupled web services talking to each other using
open languages. If things go as planned for Microsoft, the impact of .NET will be
profound and something that you cannot afford to miss out on. So bookmark this site
and ride along with us as the journey begins.



As has been mentioned already, the CLR is, like the Java virtual machine, a runtime
environment that takes charge of resource management tasks (memory allocation and
garbage collection) and ensures the necessary abstraction between the application and
the underlying operating system.
In order to provide a stable platform, with the aim of reaching the level of reliability
required by the transactional applications of e-business, the CLR also fulfills related
tasks such as monitoring of program execution. In DotNet-speak, this involves
"managed" code for the programs running under CLR monitoring, and "unmanaged"
code for applications or components which run in native mode, outside the CLR.
The CLR watches for the traditional programming errors that for years have been at the root of the majority of software faults: access to elements of an array outside its limits, access to non-allocated memory zones, and memory overwrites due to exceeded buffer sizes.
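The difference is easy to demonstrate with any managed runtime. In the sketch below, Python (itself a managed environment) stands in for the CLR: an access outside an array's limits is trapped by the runtime and surfaced as a catchable error, rather than silently reading stray memory as unmanaged native code might.

```python
# Sketch: what "managed" execution buys you. The runtime checks the
# array access and raises a catchable error instead of letting the
# program read past the end of its memory.
data = [10, 20, 30]

try:
    value = data[5]          # index outside the array's limits
except IndexError as err:
    value = None             # the runtime trapped the fault for us
    print("runtime trapped:", err)
```

In unmanaged C, the equivalent access is undefined behavior; in a managed runtime it is a well-defined, recoverable exception.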



XML--Data Encoded

To understand .NET you need to understand XML. XML, the Extensible Markup Language, is as foundational to .NET as the language we speak and write is to our own communication. We may have grand ideas and information to share, but if we can't communicate our ideas and information in a way that others can understand, our hard work and thought will lie fallow. XML is the lingua franca of .NET and is the basis for all .NET is and will become. Databases will read and write record sets in XML. Web browsers will accept XML and display it when accompanied by style sheets. Visual Studio will even generate XML code! If you're not familiar with XML, you need to be. Without an understanding of XML and XML-related technologies, you won't be able to communicate with .NET-enabled resources (people or sites!).
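As a small illustration of XML as a language-neutral record format, the sketch below writes a record set out as XML and reads it back. It is written in Python for brevity, and the element names are invented for the example, not taken from any .NET schema.

```python
# Sketch: a record set serialized to XML and recovered by a consumer
# that knows nothing about the producer's platform or language.
import xml.etree.ElementTree as ET

records = [{"id": "1", "name": "Ann"}, {"id": "2", "name": "Bo"}]

# Producer side: emit the rows as XML.
root = ET.Element("recordset")
for rec in records:
    ET.SubElement(root, "row", attrib=rec)
doc = ET.tostring(root, encoding="unicode")

# Consumer side: any XML-aware client can recover the data.
parsed = [row.attrib for row in ET.fromstring(doc)]
assert parsed == records
```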

SOAP--Data Communicated

We've long used the Hypertext Transfer Protocol (HTTP) to ship Web pages and content to and fro. But when you combine HTTP (or some other Internet transport protocol) with XML and specify the format of the XML document itself, you get the Simple Object Access Protocol, or SOAP for short. SOAP, at least as it was originally conceived, was designed to transport remote method calls from a local system to a remote one. What differentiates a SOAP-based architecture from other contemporary remoting architectures (DCOM, CORBA, and RMI, to name a few) is that the SOAP protocol can penetrate nearly every corporate firewall, and SOAP packets contain XML-encoded data, which is easy to parse and use. SOAP is also highly scalable, so we can serve a great number of users at one time.
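A minimal sketch of what travels over the wire: a SOAP 1.1-style envelope wrapping a remote method call, built here in Python for illustration. The method name, parameter, and "urn:example" namespace are hypothetical; a real endpoint would define its own, and the envelope would be POSTed over HTTP.

```python
# Sketch: building a SOAP 1.1-style request envelope. Because the
# payload is plain XML carried over HTTP, it passes through firewalls
# that block binary protocols such as DCOM.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_envelope(method, params, ns="urn:example"):
    """Wrap a method call and its parameters in a SOAP envelope."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, f"{{{ns}}}{method}")
    for name, value in params.items():
        arg = ET.SubElement(call, f"{{{ns}}}{name}")
        arg.text = str(value)
    return ET.tostring(env, encoding="unicode")

envelope = build_envelope("GetQuote", {"symbol": "MSFT"})
print(envelope)
```

The receiving end parses the Body, dispatches to the named method, and returns a response envelope of the same shape, which is exactly the remote-method-call pattern described above.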




SOAP/XML is truly the lifeblood of web services, using a universal language (XML) and
protocol (SOAP) to describe what data means. The world of distributed computing is
largely transactions and messaging, and while we use COM/DCOM, CORBA and EJBs
for this purpose, web applications today are for the most part hand-crafted or use
intricate mechanisms to communicate between the different technology camps.

SOAP is certainly not the best solution for all applications. For instance, if you require tight synchronous coupling in your applications, keep your options open to technologies like COM and RMI.

HTTP is not a high-performance data transfer protocol, and XML is quite verbose, which also implies a translation overhead. There are cases where the higher efficiency of pure binary data flowing between applications is necessary. But if you can design your applications to work as web services making loosely coupled asynchronous calls (using SOAP/XML messages), I think the payoff, just in terms of ease of integration, is more than a fair price to pay.

The result is that an application consuming a service does not have to know or care
about the pedigree of the service, as long as the lingua franca is XML.



ASP.NET
ASP.NET, the new technology for creating dynamic Web pages, is a complete rewrite
based on the services of the CLR. As a result, any of the languages offered by .NET can
be used in ASP.NET pages. A key innovation of ASP.NET is the introduction of controls
on the server side. With these controls, your ASP.NET pages benefit from visual and
non-visual components that provide advanced services: TreeView, ListBox, Calendar,
and so on. All these components analyze the type of Web client calling them and
generate a suitable representation. Typically, an entry field will use the client-side
scripting capabilities of Netscape or Internet Explorer (JavaScript or DHTML) to validate
the entry, but will validate on the server side for browsers where JavaScript is not
supported or has been deactivated.
We shall look at the main changes in order to give you an idea of the effort required to
migrate from ASP to ASP.NET.
These changes will take place on three levels:
      Changes in the API

      Changes to page structure

      Changes between VBScript and VB.NET

ASP.NET supports only one language per page. Under Windows DNA, an ASP page
could contain alternating sections of JScript and VBScript. In ASP.NET this is
impossible, as each page leads to the creation of a single MSIL code file after
compilation.
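To make the server-side control model concrete, here is a minimal sketch of an ASP.NET page (the control IDs and messages are invented for illustration; RequiredFieldValidator is the kind of control described above, emitting client-side JavaScript for capable browsers and falling back to server-side validation otherwise):

```html
<%@ Page Language="C#" %>
<html>
<body>
  <form runat="server">
    <!-- Server-side controls: ASP.NET renders markup suited to the browser -->
    <asp:TextBox id="NameBox" runat="server" />
    <asp:RequiredFieldValidator id="NameCheck" runat="server"
         ControlToValidate="NameBox"
         ErrorMessage="Please enter a name." />
    <asp:Calendar id="DatePicker" runat="server" />
    <asp:Button id="Submit" runat="server" Text="Submit" />
  </form>
</body>
</html>
```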




What can you do in .NET?
The list of tasks that are suitable for coding in .NET is huge, and includes most tasks you
might want to perform on Windows. Amongst the projects you could easily write in .NET are:
      ASP.NET pages, and reusable web controls.

      Windows Forms-based applications and corresponding reusable controls.

      Networking programs.

      Business logic applications, including applications that interact with databases.

      Performance Counters and related code.

      Developer environments, developer and office tools.

      Console or Windows-based utilities.

      Graphics-intensive programs.
Tasks that are not suitable for .NET development include:
      Device drivers

      Debuggers and profilers.

      Real-time-critical applications (but then, Windows isn't really a suitable platform
       for those types of applications anyway!)

      Programs that are so performance-critical and processor-intensive that you can't
       afford to lose even 5-10% of processor time, and for which processor time is the
       bottleneck. Note though, that for many performance-critical applications, including
       most business logic code, the bottlenecks tend to be related to network or database
        communications. Such applications are perfectly suitable for transfer to .NET and
        are unlikely to be affected by the greater processor demands made by managed
        code.

How do you Write Code for .NET?
The answer to that is quite easy – if you're using Visual Studio 6 or VB6, then simply start
using VS.NET for your code development instead. What happens then depends on whether
you are using C++ or VB.
VB code will always be compiled to target the .NET framework (if you try to load an old
VB6 project in VS.NET, the project will automatically be upgraded to a .NET one, and
the VB6 code converted to VB.NET code).
If you are using C++, then you get a choice of having a project that targets the .NET
Framework, or an old-style native executable project. If you go for a native executable
project, then it's just like a VS6 C++ project, except that you get performance
improvements since the C++ version 7 compiler does more optimizations than its VS6
counterpart. This is what will happen if you load up an existing VS6 project in VS.NET.
If you wish to migrate an existing C++ project to target the .NET framework, then the
process is more complex. You need to load the project in VS.NET, and add the /clr flag
to the compiler options. This means that your code will target the .NET framework and
will compile successfully. However, you'll need to gradually edit the code for your
individual classes in your project if you want them to be able to take advantage of .NET
features. There's no automatic conversion in the way there is for VB.
If you prefer to work without a developer environment, then you can instead simply
invoke the command line compiler for your chosen language. To do this, you'll simply
need to have the .NET framework SDK installed on your development machine.
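As a sketch (the file names are illustrative; csc and vbc are the C# and VB.NET command-line compilers shipped with the SDK):

```shell
# Compile a C# source file into a console executable
csc /target:exe /out:Hello.exe Hello.cs

# The VB.NET equivalent uses the vbc compiler
vbc /target:exe /out:Hello.exe Hello.vb
```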




What does .NET give you?
If you write code targeted at the .NET framework (such code is formally known as managed
code) then you get quite a few benefits:
       Easier coding and hence shorter development time

       New security-related features to help lock down your machines.

       Much better performance and more powerful web pages.

       Easier deployment.
Let's look at these in detail.
Easier Coding
It's not uncommon to find that the code to perform various tasks becomes a lot simpler when you
use .NET. This is largely because of the .NET Framework Class Library, which implements a
large number of boilerplate tasks and is available only to managed code. The Framework
Class Library is not only far more powerful than any class library Microsoft has
previously released, but it's easier to use too. Amongst other things, it contains support
for data access, Windows Forms and Web Forms, networking, collections, security and XML.

More Power without C++
This is another aspect of the way that .NET makes coding easier and faster. Before the days of
.NET, there were two common choices of programming language for Windows-based applications:
C++ or VB6. If you picked C++, your coding task became much more complex, but you got the
full benefit of the entire Windows API. If you picked VB6, writing code became relatively
simple, at least for smaller projects, but the feature set you could use was much smaller. For
example, you couldn't normally do things like create threads or write Windows services in VB6.
Not only that, but VB6 lacked implementation inheritance, which severely restricted the possible
architectures you could use for your programs, and could easily make large projects difficult to
maintain. With .NET, you get both the simplicity and the power at the same time. The .NET
version of VB maintains the simplicity of syntax of VB6 but is far more powerful and supports
implementation inheritance. Alternatively, you can write .NET code using Microsoft's new
language, C#, which gives you a C++-style syntax while retaining VB's ease of coding and rapid
development.
Security
.NET brings its own security model which sits on top of the Windows model. For the first time,
using .NET code access security (CAS), you can precisely control what actions some code is
allowed to perform based on how much you trust the code, not how much you trust the user under
whose account the code is running. This is extremely significant in these days when a lot of code
is downloaded over intranets or the Internet. Separately, .NET implements role-based security,
which amongst other things provides for compatibility with COM+/MTS role-based security.
Language Interoperability
Before the days of .NET, there was only a limited ability for components written in different
languages to work together, normally via COM. Using COM, it was possible for C++ code to use
a component written in VB or vice versa, but that was as far as it went – and on the C++ side, the
C++ developers needed to do a lot of work to master the intricacies of COM in order for this to
work. With .NET, not only is it a lot simpler to use components written in different languages,
but also there is virtually complete cross-language interoperability. You can for example, write a
component in VB, then derive a class from it in C#, and have the VS.NET debugger effortlessly
swap between the languages as you are debugging. This makes it very easy for different teams of
developers to work together, even though each team is using the language it is most skilled in, or
which is most appropriate to its particular task. The potential for reduction in staff training costs
should be obvious.
Easier Deployment
.NET makes it easier to deploy software because .NET code is packaged in assemblies
which are fully self-describing. In order to deploy COM components, you had to make
registry entries that described the components and indicated for example the relevant
GUIDs. This not only made deployment harder, but introduced the possibility of bugs
due to registry entries getting out of sync with the deployed components – those kinds
of bugs are far less likely to occur with .NET applications, for which the number of
changes you need to make to a computer to deploy an application is typically smaller.
Also, it was difficult to deploy reusable components privately, for use only by one
organization's applications. This kind of deployment is trivially easy with .NET. The .NET
deployment model also allows for side-by-side installation of different versions of the
same components, which largely removes the potential for DLL-hell bugs (bugs caused by
one version of a DLL being replaced by a supposedly better but in fact incompatible
version).
.NET also supports another means of deployment, known as no-touch deployment. With
no-touch deployment, code can be distributed onto a central server, and is automatically
downloaded onto the client machines (typically by the end user typing in a URL to the
executable code in Internet Explorer). Applications distributed this way are fully
protected by the .NET security mechanisms, and any updated versions are automatically
downloaded as required the next time the user runs the application. The great thing is
that, beyond possibly setting up security policy, there is nothing the systems
administrator needs to do on the client machines to install the applications. This means
that you can get the power of, for example, a native Windows application, combined
with the ease of deployment that was previously available only to HTML web-browser-
based applications.




.NET is multi-language
With the .NET platform, Microsoft will provide several languages and the associated
compilers, such as C++, JScript, VB.NET (alias VB 7) and C#, a new language which
emerged with .NET.
Third party vendors working in partnership with Microsoft are currently developing
compilers for a broad range of other languages, including Cobol, Eiffel, CAML, Lisp,
Python and Smalltalk. Rational, vendor of the famous UML tool Rose, is also understood
to be finalizing a Java compiler for .NET.


Applications are hardware-independent
All these languages are compiled via an intermediate binary code, which is independent
of hardware and operating systems. This intermediate language is MSIL: Microsoft
Intermediate Language. MSIL is then executed in the Common Language Runtime
(CLR), which basically fulfills the same role as the JVM in the Java platform. MSIL is
translated into machine code by a Just-in-Time (JIT) compiler.


Applications are portable
Applications compiled as intermediate code are presented as Portable Executables (PEs).
Microsoft will thereby be able to offer full or partial implementations of the .NET platform
over a vast range of hardware and software architectures: Intel PCs with Windows 9x,
Windows NT4, Windows 2000 or future 64 bit Windows versions, microcontroller-based
PDAs with PocketPC (e.g. Windows CE), and other operating systems too, no doubt.


All languages must comply with a common agreement


Computer languages are numerous. Traditionally, new languages have been created to
respond to new needs, such as resolving scientific problems, making calculations for
research, or meeting strong needs in terms of application reliability and security. The
result is that existing languages are heterogeneous: some are procedural, others object-
oriented, some authorize use of optional parameters or a variable number of
parameters, some authorize operator overload, others do not, and so it goes on.
For a language to be eligible for the range of languages supported by the .NET platform,
it must provide a set of possibilities and constructions listed in an agreement called the
Common Language Specification, or CLS. To add a language to .NET, all that is
required in theory is for it to meet the requirements of the CLS, and for someone to
develop a compiler from this language into MSIL.
This seems fairly innocuous at first glance, but the restrictions imposed by CLS
compliance on the different .NET languages mean that, for example, Visual Basic .NET
ends up becoming a new language that retains little more than the syntax of Visual Basic
6.
The fact that all the .NET languages are compiled into the same intermediate code
also means that a class written in one language may be derived from in another
language, and it is possible to instantiate in one language an object of a class written in
another language.
Today, if you want to create a COM+ object, you generally have the choice between VB6
and Visual C++. But VB6 does not give access to all possibilities, and for certain
requirements, you are restricted to VC++. With .NET, all languages will offer the same
possibilities and generally offer the same performance levels, which means you can
choose between VB.NET and C# depending on your programming habits and
preferences, and are no longer restricted by implementation constraints.
At this point, you may be wondering how this can all be possible. Magic? Not really. In
our opinion, there is no magic wand being waved here. To give a more even view of the
multi-language aspect of .NET, we would prefer to say that .NET only supports one
language, MSIL. Microsoft simply lets you choose whether to write this MSIL code
using Visual Basic syntax, C++ syntax, or Eiffel…
To put it frankly, in order to be able to provide the same services from languages as
different as Cobol and C#, you have to make sure these languages have a common
denominator which complies with the demands of .NET. This means that the .NET
version of Cobol has had to receive so many new concepts and additions that it has
practically nothing left in common with the original Cobol. This applies just as much to
the other languages offered in .NET, such as C++, VB, Perl or Smalltalk.
So what we need to understand is that when Microsoft announces the availability of 27
languages, we should interpret that as meaning there are 27 different syntaxes.
The most telling example concerns Java. It is one of the intended .NET
languages, thanks to Rational, who are currently working on a Java-to-MSIL compiler.
But what kind of Java are we talking about? It is a Java that runs as MSIL code, not
byte-code. This Java does not benefit from the traditional APIs offered by the J2EE
platform, such as JMS, RMI, JDBC, and JSP. This is a Java in which EJBs are replaced
by .NET's distributed object model. The label says Java, the syntax says Java… but
Java it isn't!
Of course, the case of Java is a bit of an exception. Indeed, Java specialists see .NET
overall as a rather pale copy of Java itself, and consider it to be proof of Microsoft's
successive attempts to undermine Java's future. Relations between Sun and Microsoft
have been peppered with disputes and lawsuits in recent years. It was out of the
question that Microsoft would participate in the construction of Java by offering total
support for the language in its new .NET platform.


All the languages use a coherent set of basic services


A hierarchical set of classes provides all the services and APIs necessary for application
development. Thanks to the introspection capabilities offered by the reflection API, code
is self-documented, which gives the developer exhaustive documentation, as with
Javadoc.




Disadvantages of .NET

Performance
The fact that managed applications are run by the .NET Framework is obviously going to
affect performance, because it puts extra overhead on execution. The performance loss
will vary considerably according to precisely what an application is doing, but in general, at
present, you can typically expect a 5-10% increase in CPU time to perform a given task.
You'd have to judge whether that is acceptable for your application. These days the vast
majority of Windows applications tend to spend most of their time waiting for user input or
waiting for some data over the network anyway – for those kinds of applications 5-10% on
CPU time is hardly an issue!
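The arithmetic behind that last remark is worth making explicit. In a sketch (the time breakdown is invented for illustration), if an application spends 90% of its time waiting and 10% doing processor work, even the worst-case 10% CPU penalty slows the application as a whole by only about 1%:

```python
# Hypothetical breakdown of one second of application wall-clock time.
wait_time = 0.9   # seconds spent waiting on user input / network / database
cpu_time = 0.1    # seconds of actual processor work

overhead = 0.10   # assume the worst of the 5-10% managed-code CPU penalty

native_total = wait_time + cpu_time
managed_total = wait_time + cpu_time * (1 + overhead)

slowdown = managed_total / native_total - 1
print("overall slowdown: %.1f%%" % (slowdown * 100))   # overall slowdown: 1.0%
```

Only when the CPU itself is the bottleneck does the full 5-10% penalty show through, which is exactly the case flagged as unsuitable earlier.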



Learning Curve
You can't write .NET code unless you learn about .NET. That means that you're going to have to
get to grips with either VB.NET, MC++ or C#. C++ guys are probably going to have the easiest
time of it – the transition from C++ to C# is a relatively painless one and you do very quickly
get rewarded by that greater productivity. VB people who haven't encountered
inheritance-based OOP will have a far stiffer learning curve. Even if you are an expert in
VB6 and opt to upgrade to VB.NET, you'll find many new concepts around. VB.NET is for
all practical purposes a new language, albeit one designed to allow backwards
compatibility. The syntax is largely the same as for VB6, but many of the concepts that
govern how you design your code are very different.
Conclusion:
Remember, the .NET Framework is the real guts of Microsoft's .NET Initiative. This
new product will enable your company to produce software that exposes itself as a
service and/or uses services in an interconnected world. This is the next step.
Meanwhile, Microsoft will be releasing .NET versions of their enterprise servers and other
products, such as Visual Studio .NET, which take full advantage of and complete the
abilities of the .NET Framework. Microsoft will also be marketing their own web
services, such as authentication services and personalization services. This is all part of
the .NET Initiative.
However, Microsoft does not have this initiative cornered. In fact, significant portions of the
.NET Framework have been submitted for ECMA standardization, meaning Microsoft would
not retain control of the technology. Meanwhile, third parties will release server products
that tightly integrate with the .NET Framework, and will expose web services
using the .NET Framework. Some of these products will compete with Microsoft's
products; others will be new, innovative products that were never before feasible.
Software as a service is here with or without the .NET Initiative. The .NET Initiative
brings new tools, a new platform, and a cohesive plan designed from the ground up to
exploit software as a service. What we all get is an Internet that begins to meet its own
potential.


                       *** THE END ***




                                       CONTENTS
1. Web Services
2. Introduction
      Software and the Internet
3. Aims and objectives of .NET
4. The .NET Framework
5. .NET architecture
6. What can we do in .NET
7. How do we write code for .NET
8. Advantages of .NET
9. Applications of .NET
10. Disadvantages of .NET
11. Conclusion

								